Today, everyone is talking about Artificial Intelligence, even though the term is often used as little more than a marketing argument. In reality, "AI" covers numerous technologies. Here is how they are organized:
Machine learning: Most of artificial intelligence is machine learning, which is grounded in statistics.
Neural networks: Graph-based models that learn from experience (in a supervised or unsupervised manner) to reach a defined goal (image recognition, text processing, prediction…). These systems have numerous applications, ranging from image classification to autonomy and robotics, and more recently even art (e.g., DeepFakes).
Deep learning: When neural networks are composed of multiple layers, they are referred to as deep neural networks, and the field that uses such systems is known as Deep Learning.
Each neuron within the network is a signal-processing unit: it takes data from the neurons and functions connected to it, processes this data in a non-linear way, and passes the result onwards to the other neurons and functions it is connected to. Artificial neurons and artificial neural networks are greatly simplified versions of their biological counterparts, and the topology of the neural network models currently in use does not yet resemble that of a biological brain. Neural network models are built by data scientists, who sometimes rely on libraries and frameworks (such as Google's "TensorFlow") to save time with pre-wired topologies.
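The signal processing described above can be sketched in a few lines. This is a minimal illustration, not any particular library's neuron: the tanh activation stands in for any non-linear function.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of incoming signals,
    plus a bias, passed through a non-linear activation (here tanh)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return math.tanh(total)

# A neuron receiving two signals from upstream units:
out = neuron([0.5, -1.0], [0.8, 0.2], bias=0.1)
```

The output `out` would in turn be fed as one of the inputs to the neurons downstream of this one.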
FROM DEEP LEARNING TO DEEP NEUROEVOLUTION
Unlike current neural network (NN) techniques such as CNNs, BPNNs, or Kohonen maps, our technology does not impose any restrictions on the architecture of the final NN topology. The reason is straightforward: by allowing the NN model to evolve its topology without constraints (within reason), it can reach the topology best suited to the specific problem domain it is applied to.
Our R&D team strives daily to go beyond the state of the art by leveraging concepts based on biomimicry. In the last few years, the excellent results achieved within the deep learning community have come primarily from increasing the size of NN models and datasets, and from substantially more powerful computational hardware (GPUs and hundreds of gigabytes of data). We believe the key to further improvement lies in the topology rather than the size of the network, and that the next great leap will come from finding new neural network topologies.
Our results prove it to be true.
"It is not the most intelligent of the species that survives; it is not the strongest that survives; the species that survives is the one that is best able to adapt and adjust to the changing environment in which it finds itself."
Most research in neurobiology today focuses on the topology of the brain. The biological brain is not simply a succession of neurons wired end to end; it is the fruit of an evolutionary process resulting in a complex and precise architecture. Deep neuroevolution is based on the notion that we can use Darwinian evolution to evolve and refine the topology of a neural network model and adapt it perfectly to a given problem. This combination of genetic algorithms and neural networks is the key to our approach.

If 90% of companies are satisfied with existing libraries because their issues seem "classic" and can be resolved with off-the-shelf models, then it is those willing to go beyond off-the-shelf models, those able to achieve that extra performance and push beyond "average", that will get ahead. We chose an approach in which topology is a variable element of the selective process applied to our NNs. It is perfectly complementary to NN libraries that have already proven themselves and that can serve as a base on which to further build and improve. We can take your model beyond its current capabilities; we can take it that extra step forward you need to get the edge.
RAISE uses a unique neural network generation and optimization engine called NNTO (Neural Network Topology Optimizer). It applies topological mutations between the learning phases to optimize the neural network model's topology for the problem you are applying it to.
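NNTO itself is proprietary, but the general evaluate-select-mutate cycle that this kind of engine runs between learning phases can be sketched generically. Everything below is illustrative: the toy "models" are plain numbers, the fitness function and mutation are stand-ins, and none of it reflects NNTO's actual internals.

```python
import random

random.seed(0)  # for reproducibility of this sketch

def evolve(population, fitness, mutate, generations=10, keep=2):
    """Generic neuroevolution loop: evaluate every candidate, keep the
    fittest survivors, and refill the population with mutated copies."""
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[:keep]
        population = survivors + [
            mutate(random.choice(survivors))
            for _ in range(len(population) - keep)
        ]
    return max(population, key=fitness)

# Toy run: candidates are numbers, fitness peaks at 3.0,
# and mutation adds small Gaussian noise.
best = evolve(
    population=[random.uniform(0, 10) for _ in range(20)],
    fitness=lambda m: -abs(m - 3.0),
    mutate=lambda m: m + random.gauss(0, 0.3),
    generations=50,
)
```

In a real neuroevolution setting, each candidate would be a full network description, `fitness` would involve training and evaluating the network, and `mutate` would apply topological changes such as those listed below.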
What kind of topological mutations* will be applied? Here are some of the available mutations:
- Addition of neurons
- Addition of layers
- Removal of layers
- Removal of useless connections
- Change of activation functions
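As an illustration only (NNTO's actual genome representation is not described here), mutations like these can be expressed as operations on a deliberately simplified genome recording layer sizes and an activation name. Connection-level pruning would need a finer-grained, per-connection representation, so it is omitted from this sketch.

```python
import random

# Hypothetical genome: hidden layer sizes bracketed by input/output sizes,
# plus an activation name, e.g. {"layers": [4, 8, 3], "activation": "relu"}.

def add_neurons(genome):
    """Widen a randomly chosen hidden layer by one neuron."""
    g = dict(genome, layers=list(genome["layers"]))
    if len(g["layers"]) > 2:
        g["layers"][random.randrange(1, len(g["layers"]) - 1)] += 1
    return g

def add_layer(genome):
    """Insert a new hidden layer of random width at a random position."""
    g = dict(genome, layers=list(genome["layers"]))
    g["layers"].insert(random.randrange(1, len(g["layers"])),
                       random.randint(2, 16))
    return g

def remove_layer(genome):
    """Drop one hidden layer, if any remain."""
    g = dict(genome, layers=list(genome["layers"]))
    if len(g["layers"]) > 2:
        del g["layers"][random.randrange(1, len(g["layers"]) - 1)]
    return g

def change_activation(genome):
    """Swap the activation function for another candidate."""
    return dict(genome, activation=random.choice(["relu", "tanh", "sigmoid"]))
```

Each operator returns a mutated copy, leaving the parent genome untouched so it can survive into the next generation.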
How do you select the best models? Models are compared against each other based on their fitness, so that only the best-performing models are chosen. The fitness by which the models are selected can be based on the accuracy/performance of the model, its inference speed, its size (memory footprint), or all of the above and more.
* non-exhaustive list
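One common way to fold several such criteria into a single comparable fitness score is a weighted sum: reward accuracy, penalize latency and memory footprint. The weights and model statistics below are purely illustrative, not values from our engine.

```python
def fitness(stats, w_acc=1.0, w_speed=0.1, w_size=0.05):
    """Combine accuracy (higher is better) with inference time and
    model size (lower is better) into one score for ranking models."""
    return (w_acc * stats["accuracy"]
            - w_speed * stats["inference_ms"]
            - w_size * stats["size_mb"])

candidates = [
    {"name": "A", "accuracy": 0.92, "inference_ms": 4.0, "size_mb": 12.0},
    {"name": "B", "accuracy": 0.90, "inference_ms": 1.0, "size_mb": 3.0},
]
best = max(candidates, key=fitness)
```

Note how the smaller, faster model B can out-rank the slightly more accurate model A once speed and footprint are taken into account; tuning the weights expresses which trade-off matters for a given deployment.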