Evolving Neural Networks through Augmenting Topologies – Part 1 of 4

This four-part series will explore the NeuroEvolution of Augmenting Topologies (NEAT) algorithm. Parts one and two will briefly outline the algorithm and discuss its benefits, part three will apply it to the pole-balancing problem, and finally part four will apply it to market data.

This algorithm recently went viral in a video called MarI/O, in which a network was evolved that was capable of completing the first level of Super Mario World (see the video below).

Typically, when one chooses to use a neural network, one has to decide how many hidden layers there are, how many neurons are in each layer, and which connections exist between the neurons. Depending on the nature of the problem, it can be very difficult to know what a sensible topology is. Once the topology is chosen, the network is most likely trained using back-propagation or a genetic-evolution approach and then tested. The genetic-evolution approach essentially searches through the space of connection weights, selecting high-performing networks and breeding them (this is known as fixed-topology evolution).

The above approach finds optimal connection weights; it is then down to an “expert” to manually tweak the topology of the network in an attempt to iteratively find better-performing networks.

This led to the development of variable-topology training, where both the connection-weight space and the structure space are explored. With this came a host of problems, such as networks becoming incredibly bushy and complex, which slows down the machine-learning process. With the genetic approaches it was difficult to track genetic mutations and cross over structure in a meaningful way.

The NEAT algorithm aims to develop a genetic algorithm that searches through neural-network weight and structure space and has the following properties:

  1. A genetic representation that allows structure to be crossed over in a meaningful way
  2. Protection of topological innovations that need a few generations to be optimised, so that they don’t disappear from the gene pool prematurely
  3. Minimal topologies throughout training, without specially contrived network-complexity penalisation functions

A thorough treatment of the algorithm can be found in the paper Evolving Neural Networks through Augmenting Topologies by Kenneth O. Stanley and Risto Miikkulainen (http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf).


The information about the network is represented by a genome; the genome contains node genes and connection genes. The node genes define the nodes in the network; a node can be an input (such as a technical indicator), an output (such as a buy/sell recommendation), or hidden (used by the network for a calculation). The connection genes join nodes in the network together and have a weight attached to them.

Connection genes have an input node, an output node, a weight, an enabled/disabled flag, and an innovation number. The innovation number is used to track the history of a gene’s evolution and will be explained in more detail in part two.
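As a concrete illustration, the genome described above might be represented as follows. This is a minimal Python sketch (the blog's own implementation is in R), and all names are assumptions, not the author's actual code:

```python
from dataclasses import dataclass

@dataclass
class NodeGene:
    node_id: int
    node_type: str  # "input", "hidden", or "output"

@dataclass
class ConnectionGene:
    in_node: int      # id of the node the connection comes from
    out_node: int     # id of the node the connection feeds into
    weight: float     # connection weight
    enabled: bool     # disabled genes are kept in the genome but not expressed
    innovation: int   # historical marker, explained in part two

# A genome for a 2-input, 1-output network with no hidden nodes
nodes = [NodeGene(0, "input"), NodeGene(1, "input"), NodeGene(2, "output")]
connections = [
    ConnectionGene(0, 2, 0.7, True, 1),
    ConnectionGene(1, 2, -0.5, True, 2),
]
```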

This post will look at some of the mutations that can happen to a network. It is worth noting that each genome has embedded inside it a mutation rate for each type of mutation that can occur; these mutation rates are themselves randomly increased or decreased as the evolution progresses.
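One way to picture the per-genome mutation rates and their random drift is the sketch below. It is a Python illustration under assumed names and values (the rate table loosely echoes the style of the MarI/O script); it is not the author's code:

```python
import random

# Hypothetical per-genome mutation rates, one per mutation type
mutation_rates = {
    "point": 0.25, "link": 2.0, "node": 0.5,
    "enable": 0.2, "disable": 0.4, "step": 0.1,
}

def perturb_rates(rates, factor=1.05):
    """Each generation, randomly scale every rate slightly up or down."""
    return {k: v * factor if random.random() < 0.5 else v / factor
            for k, v in rates.items()}

new_rates = perturb_rates(mutation_rates)
```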

Point Mutate

Randomly updates the weight of a randomly selected connection gene.

The update is either:

New Weight = Old Weight ± (random number between 0 and genome$MutationRate[[“Step”]])

or

New Weight = random number between -2 and 2
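The two update rules above can be sketched as follows. This is an illustrative Python version (the blog's notation is R); the perturbation probability is an assumption, as the post does not state how the two cases are chosen between:

```python
import random

STEP = 0.1  # plays the role of genome$MutationRate[["Step"]] in the R notation

def point_mutate(weight, perturb_chance=0.9):
    """Either nudge the weight by up to +/-STEP, or reset it entirely."""
    if random.random() < perturb_chance:
        return weight + random.uniform(-STEP, STEP)
    return random.uniform(-2.0, 2.0)
```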

Link Mutate

Randomly adds a new connection to the network with a random weight between -2 and 2
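A minimal Python sketch of this mutation, under assumed data structures (a full implementation would also skip node pairs that are already connected and respect input/output node types):

```python
import random

def link_mutate(connections, node_ids, innovation):
    """Add a new connection between two random nodes, weight in [-2, 2]."""
    in_node, out_node = random.sample(node_ids, 2)  # two distinct nodes
    new_gene = {
        "in": in_node, "out": out_node,
        "weight": random.uniform(-2.0, 2.0),
        "enabled": True, "innovation": innovation,
    }
    return connections + [new_gene]
```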


Node Mutate

This mutation adds a new node to the network by disabling an existing connection and replacing it with three new genes: a connection of weight 1 from the old input node into a new node, the new node itself, and a connection from the new node to the old output node carrying the same weight as the disabled connection. In essence, the disabled connection has been replaced with an identically functioning equivalent.
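The split described above can be sketched like so (illustrative Python with assumed dictionary-based genes, not the author's R code):

```python
import random

def node_mutate(connections, new_node_id, innovation):
    """Split a random enabled connection: disable it, then add
    in -> new node (weight 1) and new node -> out (old weight)."""
    enabled = [c for c in connections if c["enabled"]]
    old = random.choice(enabled)
    old["enabled"] = False
    front = {"in": old["in"], "out": new_node_id, "weight": 1.0,
             "enabled": True, "innovation": innovation}
    back = {"in": new_node_id, "out": old["out"], "weight": old["weight"],
            "enabled": True, "innovation": innovation + 1}
    return connections + [front, back]
```

Because the front connection has weight 1 and the back connection keeps the old weight, the new two-hop path initially computes the same weighted contribution as the connection it replaced (up to the new node's activation function).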


Enable Disable Mutate

Randomly enables and disables connections
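A small Python sketch of this flip, again with assumed structures:

```python
import random

def enable_disable_mutate(connections, enable=True):
    """Pick a random connection currently in the opposite state and flip it."""
    candidates = [c for c in connections if c["enabled"] != enable]
    if candidates:
        random.choice(candidates)["enabled"] = enable
```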




10 thoughts on “Evolving Neural Networks through Augmenting Topologies – Part 1 of 4”


  2. Can you write a simple NEAT example for R? I have been searching for a NEAT tutorial in R and found nothing. Thank you for posting this nice article. I am new to ML, but your articles are a good way to learn the basics.

    Good Luck

    • I’m in the process of developing an R package for it. As you can imagine, it’s not a trivial thing to code up (especially in R).

        • No deadline as such; I’ve currently ported the Lua example in the MarI/O video to R and am just in the process of fixing a few bugs. It hasn’t solved the pole-balance task yet. The good news is that the fitness improves with each generation; the bad news is that R crashes before the problem is solved!

          • Hey, thanks for the quick reply. When you have completely ported MarI/O’s Lua program to R, please upload the video to YouTube :). Once more, thank you for your good work.

