In recent years, many state-of-the-art algorithms are no longer based on conventional neural networks densely connected in a feed-forward topology; instead, they are the result of research that has produced network architectures (topologies) specifically adapted to the problem the model is meant to solve.

This is particularly visible in deep network models such as ResNet (Residual Network) and the Inception network: each introduced an innovative topology that enabled state-of-the-art results.

Although these models may have the same number of parameters as the basic convolutional networks (CNNs) explored a few years ago, they are very different: they are innovative in the way their layers are interconnected (their topological structure), and it is this topology that gives them their strength.


Finding these new models and topologies is currently an extremely active area of research.

In the past, this was mainly done manually, by slowly changing networks through trial and error. More recently, research has focused on methods to find these topologies automatically.

In the last two years in particular, much of the improvement in state-of-the-art models and results has come from automated searches over meta-parameters and topologies, e.g. Uber's work on deep neuroevolution and Waymo's use of Population Based Training (1, 2).

Future advances in the field are likely to be the result of neural networks created by AI and search algorithms.


RAISE is a framework for unconstrained topology search based on DXNN, a neuroevolutionary method described by Dr. Sher (3). The system can start from a single layer and build a network adapted to a specific problem, but it can also start from a state-of-the-art model and improve it further.

A deep-neuroevolution algorithm optimizes a neural network by an evolutionary method: the characteristics of the network (its synaptic weights and topology) form its genotype, which is used to generate a population of new neural networks whose genotypes differ to varying degrees: "mutant" versions. Only the best-performing mutant models are kept to create the next generation.
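The generation step described above (mutate the genotype, evaluate, keep the fittest) can be sketched as follows. This is a hedged, minimal toy, not RAISE's actual implementation: the genotype is reduced to a flat weight list, and `fitness` and `mutate` are illustrative stand-ins.

```python
import random

def mutate(genotype, rate=0.1):
    """Return a 'mutant' copy: randomly perturb some weights."""
    child = dict(genotype)
    child["weights"] = [
        w + random.gauss(0, 0.5) if random.random() < rate else w
        for w in genotype["weights"]
    ]
    return child

def fitness(genotype):
    """Toy fitness: reward weights close to a fixed target vector."""
    target = [1.0, -2.0, 0.5]
    return -sum((w - t) ** 2 for w, t in zip(genotype["weights"], target))

def next_generation(population, n_offspring=20, n_survivors=5):
    """Mutate random parents, then keep only the best performers."""
    offspring = [mutate(random.choice(population)) for _ in range(n_offspring)]
    ranked = sorted(population + offspring, key=fitness, reverse=True)
    return ranked[:n_survivors]

# start from a population of identical, untrained genotypes and evolve it
population = [{"weights": [0.0, 0.0, 0.0]} for _ in range(5)]
for _ in range(50):
    population = next_generation(population)
```

Because survivors are selected from parents and offspring together, the best fitness in the population never decreases from one generation to the next.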

One of the strongest characteristics of deep neuroevolution is that the evolved models are no longer constrained to rigid feed-forward or other fixed model structures; instead, any topology can be evolved, from any type of layer, connected in the way that best fits the particular problem domain.

RAISE uses optimized yet unconstrained topological search. There are no constraints on the topology that can be evolved, which greatly expands the type of problem domains we can solve, and the level of performance we can reach.
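To make "unconstrained topology search" concrete, here is a hypothetical sketch in which a network is stored as a set of directed edges between numbered layers, and mutation operators grow it freely. Layer 0 is the input and layer 1 the output; nothing here is RAISE's actual representation, and both operator names are invented for illustration.

```python
import random

def insert_layer(edges, new_id):
    """Split a random edge a->b into a->new_id->b (grows depth)."""
    a, b = random.choice(sorted(edges))
    edges.discard((a, b))
    edges.add((a, new_id))
    edges.add((new_id, b))

def add_skip_connection(edges):
    """Add a direct edge a->c where a->b and b->c already exist,
    creating a ResNet-style skip link. Keeps the graph acyclic."""
    a, b = random.choice(sorted(edges))
    successors = [y for (x, y) in edges if x == b]
    if successors:
        edges.add((a, random.choice(successors)))

# grow a minimal input->output network by repeated mutation
edges = {(0, 1)}
for new_id in range(2, 6):
    insert_layer(edges, new_id)
add_skip_connection(edges)
```

Because neither operator is restricted to a layered grid, repeated application can reach arbitrary directed acyclic topologies, which is the point of an unconstrained search space.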

Gene Sher, Handbook of Neuroevolution Through Erlang


  • RAISE simultaneously uses gradient descent to optimize the parameters of the model (local search) and the neuroevolutionary method to optimize the topology (global search).


  • Finally, RAISE can also seek to reduce the size of the network for a lower memory footprint, or to optimize inference speed, without sacrificing performance. Our evolved deep networks are generally of the smallest possible size, which allows them to run on embedded systems and even IoT devices.
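The combination of local and global search described in the bullets above can be illustrated with a toy objective whose gradient is known in closed form: gradient descent refines each candidate (local search) while mutation and selection explore the space (global search). Every name here is an illustrative assumption, not RAISE's API.

```python
import random

def loss(x):
    """Toy objective: a 1-D quadratic with its minimum at x = 3."""
    return (x - 3.0) ** 2

def grad(x):
    return 2.0 * (x - 3.0)

def local_search(x, steps=20, lr=0.1):
    """Gradient descent refines the parameters of one candidate."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def evolve(population, generations=10):
    """Global search: mutate candidates, refine each by gradient
    descent, then keep the fittest."""
    for _ in range(generations):
        mutants = [x + random.gauss(0, 1.0) for x in population]
        refined = [local_search(x) for x in population + mutants]
        population = sorted(refined, key=loss)[: len(population)]
    return population

best = evolve([random.uniform(-10.0, 10.0) for _ in range(4)])[0]
```

The division of labor mirrors the bullets: mutation can jump out of a basin that gradient descent alone would be stuck in, while gradient descent converges quickly once a good basin is found.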