Evolving Neural Networks through Augmenting Topologies – Part 4 of 4 – Trading Strategy

This post explores applying NEAT to trading the S&P 500. The learned strategy significantly outperforms buying and holding both in and out of sample.


A key part of any machine learning problem is defining the features and ensuring that they are normalised in some fashion.
The features will be rolling percentiles of the following economic data. A rolling percentile takes the last n data points and calculates what percentage of those points the latest data point is greater than (a toy sketch of this calculation follows the list below).

  • Non-farm payrolls
  • Unemployment Rate
  • GDP
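To make the rolling percentile concrete, here is a minimal sketch on a made-up vector; the data and window length are purely illustrative, and the helper mirrors the rollingPercentile function used in the full listing further down.

library(zoo) #For rollapply

#Toy illustration of the rolling percentile feature (data and window length are made up)
x <- c(5, 3, 8, 1, 9, 4, 7, 2, 6, 10)
rollingPercentileToy <- function(data, n){
  rollapply(zoo(data), width = n, align = "right",
            FUN = function(w) sum(tail(w, 1) > w) / length(w))
}
rollingPercentileToy(x, 5) #Last value: 10 is greater than 4 of the 5 points in its window -> 0.8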


Fitness Function

The fitness function is the final equity of the strategy; the learning aims to maximise it.

Termination Function

Any genome that hits a 20% drawdown, or attempts to use leverage greater than +/- 2, is terminated. In practice you would not want your risk controls to be machine learned, as there is a chance they never get learned at all. The reason they are embedded inside the strategy here is to speed up the learning process: genomes that break the risk rules can be killed early, before the simulation is complete.
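To make the early-termination logic concrete, here is a minimal standalone sketch of the risk check; the function and variable names are hypothetical and are not the RNeat API.

#Illustrative early-kill check (hypothetical names, not the RNeat API)
shouldTerminate <- function(equity, maxEquityAchieved, allocation){
  drawdown <- 1 - equity/maxEquityAchieved
  if(drawdown >= 0.20) return (TRUE)     #20% drawdown breached -> kill the genome early
  if(abs(allocation) > 2) return (TRUE)  #More than +/- 2x leverage -> kill the genome early
  return (FALSE)
}
shouldTerminate(equity=78, maxEquityAchieved=100, allocation=1.5) #TRUE, a 22% drawdown breaches the limit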

Plot of all data / features

It appears that when non-farm payrolls fall to their lower percentiles / unemployment reaches its highest percentiles, the day-to-day returns in the S&P become more volatile. It is hoped that the learning can take advantage of this.


Training Results

The learning has identified a strategy that outperforms simply buying and holding. The proposed strategy has a max drawdown of around 20% vs a drawdown of 40% for buy and hold. Additionally the strategy shorted the index between 2000 and 2003 as it was selling off, before going long into 2007, generating a return of 80% vs a buy and hold return of 7%!

(Charts: fitness per generation and maximum equity achieved on the training data.)

Out of sample results

In the out of sample data (not used during training) the strategy significantly outperformed buying and holding: approximately a 250% return vs 50%, with a max drawdown close to 20% vs a buy and hold drawdown of 50%.


Onto the code:

install_github("RNeat","ahunteruk") #Install from github as not yet on CRAN
marketSymbol <- "^GSPC"
econmicDataSymbols <- c("UNRATE","PAYEMS","GDP")
mktData <- new.env() #Make a new environment for quantmod to store data in
economicData <- new.env() #Make a new environment for quantmod to store data in
#Specify dates for downloading data, training models and running simulation
dataDownloadStartDate <- as.Date("2000-06-01")
trainingStartDate = as.Date("2001-01-01") #Specify the date to start training (yyyy-mm-dd)
trainingEndDate = as.Date("2006-12-31") #Specify the date to end training
outOfSampleStartDate = as.Date("2007-01-01")
outOfSampleEndDate = as.Date("2016-07-15")
#Download Data
getSymbols(marketSymbol,env=mktData,from=dataDownloadStartDate) #S&P 500
getSymbols(econmicDataSymbols,src="FRED",env=economicData,from=dataDownloadStartDate) #Payems is non-farms payrolls 
nEconomicDataPercentileLookbackShort <- 20
nEconomicDataPercentileLookbackMedium <- 50
nEconomicDataPercentileLookbackLong <- 100
rollingPercentile <- function(data,n){
  percentile <- function(dataBlock){
    #Fraction of the lookback window that the latest data point is greater than
    sum(tail(dataBlock,1) > dataBlock)/length(dataBlock)
  }
  return (as.zoo(rollapply(as.zoo(data),width=n,percentile,align="right",by.column=TRUE)))
}

stockCleanNameFunc <- function(name){
  return (sub("^","",name,fixed=TRUE)) #Strip the leading caret so get() can find the data in the environment
}
clClRet <- as.zoo((lag(Cl(get(stockCleanNameFunc(marketSymbol),mktData)),-1)/Cl(get(stockCleanNameFunc(marketSymbol),mktData))-1))
payemsShortPercentile <- rollingPercentile(economicData$PAYEMS,nEconomicDataPercentileLookbackShort)
payemsMediumPercentile <- rollingPercentile(economicData$PAYEMS,nEconomicDataPercentileLookbackMedium)
payemsLongPercentile <- rollingPercentile(economicData$PAYEMS,nEconomicDataPercentileLookbackLong)
unrateShortPercentile <- rollingPercentile(economicData$UNRATE,nEconomicDataPercentileLookbackShort)
unrateMediumPercentile <- rollingPercentile(economicData$UNRATE,nEconomicDataPercentileLookbackMedium)
unrateLongPercentile <- rollingPercentile(economicData$UNRATE,nEconomicDataPercentileLookbackLong)
gdpShortPercentile <- rollingPercentile(economicData$GDP,nEconomicDataPercentileLookbackShort)
gdpMediumPercentile <- rollingPercentile(economicData$GDP,nEconomicDataPercentileLookbackMedium)
gdpLongPercentile <- rollingPercentile(economicData$GDP,nEconomicDataPercentileLookbackLong)
#join the data sets, fill in any missing dates with the previous none NA value
mergedData <- na.locf(merge(economicData$PAYEMS,merge(Cl(get(stockCleanNameFunc(marketSymbol),mktData)),
                      economicData$PAYEMS,payemsShortPercentile,payemsMediumPercentile,payemsLongPercentile,
                      economicData$UNRATE,unrateShortPercentile,unrateMediumPercentile,unrateLongPercentile,
                      economicData$GDP,gdpShortPercentile,gdpMediumPercentile,gdpLongPercentile)))
mergedData <- mergedData[,-1] #Drop the duplicate PAYEMS column that was only used to widen the date index
ClClRet <- as.zoo(lag(mergedData[,1],-1)/mergedData[,1]-1)
ClTZero <- as.zoo(mergedData[,1])
ClTOne <- as.zoo(lag(mergedData[,1],-1))
mergedData <- merge(ClClRet,ClTOne,ClTZero,mergedData)                                                           
mergedData <- window(mergedData,start=dataDownloadStartDate)
colnames(mergedData) <- c("ClClRet","ClTOne","ClTZero","Price","Payems","Payems.short","Payems.medium","Payems.long",
                          "Unrate","Unrate.short","Unrate.medium","Unrate.long","Gdp","Gdp.short","Gdp.medium","Gdp.long")
plot(mergedData[,"Price"], main="S&P Close Price",ylab="Close Price")
plot(mergedData[,"ClClRet"], main="S&P Close Price",ylab="Close Price")
plot(mergedData[,"Payems"], main="Non-Farm Payrolls",ylab="Thousands of Persons")
plot(mergedData[,"Payems.short"], main="Non-Farm Payrolls Rolling Percentile",ylab="Percentile")
lines(mergedData[,"Payems.medium"], col="red")
lines(mergedData[,"Payems.long"], col="blue")
legend(x='bottomright', c(paste(nEconomicDataPercentileLookbackShort,"Points"),
                          paste(nEconomicDataPercentileLookbackMedium,"Points"),
                          paste(nEconomicDataPercentileLookbackLong,"Points")), fill=c("black","red","blue"), bty='n')
plot(mergedData[,"Unrate"], main="Unemployment Rate",ylab="Percent")
plot(mergedData[,"Unrate.short"], main="Unemployment Rate Rolling Percentile",ylab="Percentile")
lines(mergedData[,"Unrate.medium"], col="red")
lines(mergedData[,"Unrate.long"], col="blue")
legend(x='bottomright', c(paste(nEconomicDataPercentileLookbackShort,"Points"),
                          paste(nEconomicDataPercentileLookbackMedium,"Points"),
                          paste(nEconomicDataPercentileLookbackLong,"Points")), fill=c("black","red","blue"), bty='n')
plot(mergedData[,"Gdp"], main="GDP",ylab="Billions of USD")
plot(mergedData[,"Gdp.short"], main="GBP Rolling Percentile",ylab="Percentile")
lines(mergedData[,"Gdp.medium"], col="red")
lines(mergedData[,"Gdp.long"], col="blue")
legend(x='bottomright', c(paste(nEconomicDataPercentileLookbackShort,"Points"),
                          paste(nEconomicDataPercentileLookbackMedium,"Points"),
                          paste(nEconomicDataPercentileLookbackLong,"Points")), fill=c("black","red","blue"), bty='n')
featuresTrainingData <- window(mergedData,start=trainingStartDate,end=trainingEndDate)
featuresOutOfSampleData <- window(mergedData,start=outOfSampleStartDate,end=outOfSampleEndDate)
#Genetic algo setup
simulationData <- featuresTrainingData
trading.InitialState <- function(){
  state <- list()
  state[1] <- 100 #Equity
  state[2] <- 0 #% of Equity allocated to share (-ve for shorts)
  state[3] <- state[1] #Maximum equity achieved
  state[4] <- 1 #Trading day number
  state[5] <- simulationData[1,"Price"]
  state[6] <- simulationData[1,"Payems.short"]
  state[7] <- simulationData[1,"Payems.medium"]
  state[8] <- simulationData[1,"Payems.long"]
  state[9] <- simulationData[1,"Unrate.short"]
  state[10] <- simulationData[1,"Unrate.medium"]
  state[11] <- simulationData[1,"Unrate.long"]
  state[12] <- simulationData[1,"Gdp.short"]
  state[13] <- simulationData[1,"Gdp.medium"]
  state[14] <- simulationData[1,"Gdp.long"]
  return (state)
}

trading.ConvertStateToNeuralNetInputs <- function(currentState){
  return (currentState)
}
trading.UpdateState <- function(currentState,neuralNetOutputs){
  equity <- currentState[[1]]
  equityAllocation <- neuralNetOutputs[[1]]
  maxEquityAchieved <- currentState[[3]]
  tradingDay <- currentState[[4]]
  pctChange <- as.double((simulationData[tradingDay+1,"Price"]))/as.double((simulationData[tradingDay,"Price"]))-1
  pnl <- equity * equityAllocation * pctChange
  equity <- equity + pnl
  maxEquityAchieved <- max(maxEquityAchieved,equity)
  tradingDay <- tradingDay + 1
  currentState[1] <- equity
  currentState[2] <- equityAllocation
  currentState[3] <- maxEquityAchieved
  currentState[4] <- tradingDay
  currentState[5] <- simulationData[tradingDay,"Price"]
  currentState[6] <- simulationData[tradingDay,"Payems.short"]
  currentState[7] <- simulationData[tradingDay,"Payems.medium"]
  currentState[8] <- simulationData[tradingDay,"Payems.long"]
  currentState[9] <- simulationData[tradingDay,"Unrate.short"]
  currentState[10] <- simulationData[tradingDay,"Unrate.medium"]
  currentState[11] <- simulationData[tradingDay,"Unrate.long"]
  currentState[12] <- simulationData[tradingDay,"Gdp.short"]
  currentState[13] <- simulationData[tradingDay,"Gdp.medium"]
  currentState[14] <- simulationData[tradingDay,"Gdp.long"]
  return (currentState)
}

trading.UpdateFitness <- function(oldState,updatedState,oldFitness){
  return (as.double(updatedState[1])) #Fitness is the equity achieved
}
trading.CheckForTermination <- function(frameNum,oldState,updatedState,oldFitness,newFitness){
  equity <- updatedState[[1]]
  equityAllocation <- updatedState[[2]]
  maxEquityAchieved <- updatedState[[3]]
  tradingDay <- updatedState[[4]]
  if(tradingDay >= nrow(simulationData)){
    return (T) #Reached the end of the simulation data
  }
  if(abs(equityAllocation) > 2){ #Too much leverage
    return (T)
  }
  if(equity/maxEquityAchieved < 0.8){ #20% drawdown
    return (T)
  } else {
    return (F)
  }
}

trading.PlotState <-function(updatedState){
  equity <- updatedState[[1]]
  equityAllocation <- updatedState[[2]]
  maxEquityAchieved <- updatedState[[3]]
  #Per-frame plotting is not needed for the trading simulation (original body truncated)
}
plotStateAndInputDataFunc <- function(stateData, inputData, titleText){
   buyandholdret <- inputData[,"Price"]/coredata(inputData[1,"Price"])
   strategyret <- stateData[,"Equity"]/100
   maxbuyandholdret <- cummax(buyandholdret)
   buyandholddrawdown <- (buyandholdret/maxbuyandholdret-1)
   strategydrawdown <- (stateData[,"Equity"]/stateData[,"MaxEquity"]-1)
  par(mfrow=c(4,2),oma = c(0, 0, 2, 0))
  plot(buyandholdret,main="Performance (Return on Initial Equity)", ylab="Return", ylim=c(min(buyandholdret,strategyret),max(buyandholdret,strategyret)))
  lines(strategyret, col="red")
  legend(x='bottomright', c('Buy & Hold','Strategy'), fill=c("black","red"),  bty='n')
  plot(inputData[,"ClClRet"],main="Stock Returns", ylab="Return")
  plot(maxbuyandholdret*100,main="Max Equity", ylim=c(min(maxbuyandholdret*100,stateData[,"MaxEquity"]),max(maxbuyandholdret*100,stateData[,"MaxEquity"])),ylab="Equity $")
  lines(stateData[,"MaxEquity"], col="red")
  legend(x='bottomright', c('Buy & Hold','Strategy'), fill=c("black","red"), bty='n')
  plot(inputData[,"Payems.short"], main="Payrolls Rolling Percentile",ylab="Percentile")
  lines(inputData[,"Payems.medium"], col="red")
  lines(inputData[,"Payems.long"], col="blue")
  legend(x='bottomright', c(paste(nEconomicDataPercentileLookbackShort,"Points"),
                          paste(nEconomicDataPercentileLookbackMedium,"Points"),
                          paste(nEconomicDataPercentileLookbackLong,"Points")), fill=c("black","red","blue"), bty='n')
  plot(buyandholddrawdown,main="Draw Down",ylab="Percent (%)")
  lines(strategydrawdown, col="red")
  legend(x='bottomright', c('Buy & Hold','Strategy'), fill=c("black","red"), bty='n')
  mtext(titleText, outer = TRUE, cex = 1.5)
}
config <- newConfigNEAT(14,1,500,50)
tradingSimulation <- newNEATSimulation(config, trading.InitialState,
                     trading.UpdateState, trading.ConvertStateToNeuralNetInputs,
                     trading.UpdateFitness, trading.CheckForTermination,
                     trading.PlotState) #Remaining callbacks restored; argument order assumed
tradingSimulation <- NEATSimulation.RunSingleGeneration(tradingSimulation)
for(i in seq(1,35)){
      save.image(file="tradingSim.RData")  #So we can recover if we crash for any reason
      tradingSimulation <- NEATSimulation.RunSingleGeneration(tradingSimulation)
}
stateHist <- NEATSimulation.GetStateHistoryForGenomeAndSpecies(tradingSimulation)
colnames(stateHist) <- c("Equity","Allocation","MaxEquity","TradingDay","Price",
                         "Payems.short","Payems.medium","Payems.long","Unrate.short","Unrate.medium","Unrate.long",
                         "Gdp.short","Gdp.medium","Gdp.long")
stateHist <- as.zoo(stateHist)
plotStateAndInputDataFunc(stateHist,simulationData,"Training Data")
simulationData <- featuresOutOfSampleData
stateHist <- NEATSimulation.GetStateHistoryForGenomeAndSpecies(tradingSimulation)
colnames(stateHist) <- c("Equity","Allocation","MaxEquity","TradingDay","Price",
                         "Payems.short","Payems.medium","Payems.long","Unrate.short","Unrate.medium","Unrate.long",
                         "Gdp.short","Gdp.medium","Gdp.long")
stateHist <- as.zoo(stateHist)
plotStateAndInputDataFunc(stateHist,simulationData,"Out of Sample Data")

RNeat – Square Root Neural Net trained using Augmenting Topologies – Simple Example

A simple tutorial demonstrating how to train a neural network to square root numbers using a genetic algorithm that searches through the topological structure space. The algorithm is called NEAT (Neuro Evolution of Augmenting Topologies) available in the RNeat package (not yet on CRAN).

The training is very similar to other machine learning / regression packages in R. The training function takes a data frame and a formula. The formula is used to specify which columns in the data frame are the dependent variables and which are the explanatory variables. The code is commented and should be simple enough for new R users.


The performance of the network can be seen in the bottom left chart of the image above; there are considerable differences between the expected output and the actual output. It is likely that with more training the magnitude of these errors will reduce. It can be seen in the bottom right chart that the maximum, mean and median fitness are generally increasing with each generation.

install_github("RNeat","ahunteruk") #Install from github as not yet on CRAN
#Generate traing data y = sqrt(x)
trainingData <- as.data.frame(cbind(sqrt(seq(0.1,1,0.1)),seq(0.1,1,0.1)))
colnames(trainingData) <- c("y","x")
#Train the neural network for 5 generations, and plot the fitness
rneatsim <- rneatneuralnet(y~x,trainingData,5)
#Continue training the network for another 20 generations
rneatsim <- rneatneuralnetcontinuetraining(rneatsim,20)
#Construct some fresh data to stick through the neural network and hopefully get square rooted
liveData <- as.data.frame(seq(0.1,1,0.01))
colnames(liveData) <- c("x")
liveDataExpectedOutput <- sqrt(liveData)
colnames(liveDataExpectedOutput) <- "yExpected"
#Pass the data through the network
results <- compute(rneatsim,liveData)
#Calculate the difference between yPred the neural network output, and yExpected the actual square root of the input
error <- liveDataExpectedOutput[,"yExpected"] - results[,"yPred"]
results <- cbind(results,liveDataExpectedOutput,error)
layout(matrix(c(3,3,3,1,4,2), 2, 3, byrow = TRUE),heights=c(1,2))
plot(x=results[,"x"],y=results[,"yExpected"],type="l", main="Neural Network y=sqrt(x) expected vs predicted",xlab="x",ylab="y")
lines(x=results[,"x"],y=results[,"yPred"],col="red")
legend(x='bottomright', c('yExpected','yPredicted'), col=c("black","red"), fill=1:2, bty='n')

Evolving Neural Networks through Augmenting Topologies – Part 3 of 4

This part of the NEAT tutorial will show how to use the RNeat package (not yet on CRAN) to solve the classic pole balance problem.

The simulation requires the implementation of the following functions:

  • processInitialStateFunc – This specifies the initial state of the system, for the pole balance problem the state is the cart location, cart velocity, cart acceleration, force being applied to the cart, pole angle, pole angular velocity and pole angular acceleration.
  • processUpdateStateFunc – This specifies how to take the current state and update it using the outputs of the neural network. In this example this function simulates the equations of motion and takes the neural net output as the force that is being applied to the cart.
  • processStateToNeuralInputFunc – Allows for modifying the state / normalisation of the state before it is passed as an input to the neural network
  • fitnessUpdateFunc – Takes the old fitness, the old state and the new updated state and determines the new system fitness. For the pole balance problem this function rewards the pendulum being upright, and rewards the cart being close to the middle of the track.
  • terminationCheckFunc – Takes the state and checks whether the simulation should be terminated. It can choose to terminate if the pole falls over, the simulation has run for too long, or the cart has driven off the end of the track.
  • plotStateFunc – Plots the state, for the pole balance this draws the cart and pendulum.

Onto the code:

install_github("RNeat","ahunteruk") #Install from github as not yet on CRAN
drawPoleFunc <- function(fixedEnd.x,fixedEnd.y,poleLength, theta,fillColour=NA, borderColour="black"){
  floatingEnd.x <- fixedEnd.x-poleLength * sin(theta)
  floatingEnd.y <- fixedEnd.y+poleLength * cos(theta)
  polygon(c(fixedEnd.x,floatingEnd.x),c(fixedEnd.y,floatingEnd.y),
              col = fillColour, border=borderColour) #Draw the pole as a segment from the cart to its free end
}

drawPendulum <- function(fixedEnd.x,fixedEnd.y,poleLength, theta,radius,fillColour=NA, borderColour="black"){
  floatingEnd.x <- fixedEnd.x-poleLength * sin(theta)
  floatingEnd.y <- fixedEnd.y+poleLength * cos(theta)
  symbols(floatingEnd.x,floatingEnd.y,circles=radius,inches=FALSE,add=TRUE,
              fg=borderColour,bg=fillColour) #Draw the pendulum bob as a circle at the free end of the pole
}
#Parameters to control the simulation
simulation.timestep = 0.005
simulation.gravity = 9.8 #meters per second^2
simulation.numoftimesteps = 2000
pole.length = 1 #meters, total pole length
pole.width = 0.2
pole.theta = pi/4
pole.thetaDot = 0
pole.thetaDotDot = 0
pole.colour = "purple"
pendulum.centerX = NA
pendulum.centerY = NA
pendulum.radius = 0.1
pendulum.mass = 0.1
pendulum.colour = "purple"
cart.centerX = 0
cart.centerY = 0
cart.centerXDot = 0
cart.centerXDotDot = 0
cart.mass = 0.4
cart.mu = 0.2 #Cart/track friction coefficient used in the equations of motion; value assumed, original definition truncated
cart.force = 0
track.limit= 10 #meters from center
track.x = -track.limit
track.height = 0.01 #Assumed; the original definition was truncated
track.y = 0.5*track.height
track.colour = "blue"
leftBuffer.x = -1.1*track.limit #Buffer geometry assumed; the original definitions were truncated
leftBuffer.width = 0.1*track.limit
leftBuffer.colour = "blue"
rightBuffer.x = 1.1*track.limit
rightBuffer.width = 0.1*track.limit
rightBuffer.colour = "blue"
#Define the size of the scene (used to visualise what is happening in the simulation)
scene.width = 2*max(rightBuffer.x+rightBuffer.width,track.limit+pole.length+pendulum.radius)
scene.height = 2*(pole.length+pendulum.radius) #Assumed; the original definition was truncated
scene.bottomLeftX = -0.5*scene.width
scene.bottomLeftY = -0.5*scene.height
poleBalance.InitialState <- function(){
   state <- list()
   state[1] <- cart.centerX
   state[2] <- cart.centerXDot
   state[3] <- cart.centerXDotDot
   state[4] <- cart.force
   state[5] <- pole.theta
   state[6] <- pole.thetaDot
   state[7] <- pole.thetaDotDot
   return (state)
}

poleBalance.ConvertStateToNeuralNetInputs <- function(currentState){
    return (currentState)
}
poleBalance.UpdatePoleState <- function(currentState,neuralNetOutputs){
   #print("Updating pole state")
   cart.centerX <- currentState[[1]]
   cart.centerXDot <- currentState[[2]]
   cart.centerXDotDot <- currentState[[3]]
   cart.force <- currentState[[4]]+neuralNetOutputs[[1]]
   pole.theta <- currentState[[5]]
   pole.thetaDot <- currentState[[6]]
   pole.thetaDotDot <- currentState[[7]]
   costheta = cos(pole.theta)
   sintheta = sin(pole.theta)
   totalmass = cart.mass+pendulum.mass
   masslength = pendulum.mass*pole.length
   pole.thetaDotDot = (simulation.gravity*totalmass*sintheta+costheta*(cart.force-masslength*pole.thetaDot^2*sintheta-cart.mu*cart.centerXDot))/(pole.length*(totalmass-pendulum.mass*costheta^2))
   cart.centerXDotDot = (cart.force+masslength*(pole.thetaDotDot*costheta-pole.thetaDot^2*sintheta)-cart.mu*cart.centerXDot)/totalmass
   cart.centerX = cart.centerX+simulation.timestep*cart.centerXDot
   cart.centerXDot = cart.centerXDot+simulation.timestep*cart.centerXDotDot
   pole.theta = (pole.theta +simulation.timestep*pole.thetaDot )
   pole.thetaDot = pole.thetaDot+simulation.timestep*pole.thetaDotDot
   currentState[1] <- cart.centerX
   currentState[2] <- cart.centerXDot
   currentState[3] <- cart.centerXDotDot
   currentState[4] <- cart.force
   currentState[5] <- pole.theta
   currentState[6] <- pole.thetaDot
   currentState[7] <- pole.thetaDotDot
   return (currentState)
}
poleBalance.UpdateFitness <- function(oldState,updatedState,oldFitness){
   #return (oldFitness+1) #fitness is just how long we've ran for
   #return (oldFitness+((track.limit-abs(updatedState[[1]]))/track.limit)^2) #More reward for staying near middle of track
   height <- cos(updatedState[[5]]) #is -ve if below track
   heightFitness <- max(height,0)
   centerFitness <- (track.limit-abs(updatedState[[1]]))/track.limit
   return (oldFitness+(heightFitness + heightFitness*centerFitness))
}
poleBalance.CheckForTermination <- function(frameNum,oldState,updatedState,oldFitness,newFitness){
   cart.centerX <- updatedState[[1]]
   cart.centerXDot <- updatedState[[2]]
   cart.centerXDotDot <- updatedState[[3]]
   cart.force <- updatedState[[4]]
   pole.theta <- updatedState[[5]]
   pole.thetaDot <- updatedState[[6]]
   pole.thetaDotDot <- updatedState[[7]]
   oldpole.theta <- oldState[[5]]
   if(frameNum > 20000){
     print("Max Frame Num Exceeded , stopping simulation")
     return (T)
   }
   height <- cos(pole.theta)
   oldHeight <- cos(oldpole.theta)
   if(height==-1 & cart.force==0){
     return (T) #Pole is hanging straight down and no force is being applied
   }
   if(oldHeight >= 0 & height < 0){
     #print("Pole fell over")
     return (T)
   }
   if(cart.centerX < track.x | cart.centerX > (track.x+2*track.limit)){
     #print("Exceeded track length")
     return (T)
   } else {
     return (F)
   }
}
poleBalance.PlotState <-function(updatedState){
   cart.centerX <- updatedState[[1]]
   cart.centerXDot <- updatedState[[2]]
   cart.centerXDotDot <- updatedState[[3]]
   cart.force <- updatedState[[4]]
   pole.theta <- updatedState[[5]]
   pole.thetaDot <- updatedState[[6]]
   pole.thetaDotDot <- updatedState[[7]]
                 main="Simulation of Inverted Pendulum - www.gekkoquant.com",xlab="",
config <- newConfigNEAT(7,1,500,50)
poleSimulation <- newNEATSimulation(config, poleBalance.InitialState,
                  poleBalance.UpdatePoleState, poleBalance.ConvertStateToNeuralNetInputs,
                  poleBalance.UpdateFitness, poleBalance.CheckForTermination,
                  poleBalance.PlotState) #Remaining callbacks restored; argument order assumed
for(i in seq(1,1000)){ 
      poleSimulation <- NEATSimulation.RunSingleGeneration(poleSimulation,T,"videos","poleBalance",1/simulation.timestep) 
}

Evolving Neural Networks through Augmenting Topologies – Part 2 of 4

This part of the tutorial on the NEAT algorithm explains how genomes are crossed over in a meaningful way while maintaining their topological information, and how speciation (grouping genomes into species) can be used to protect weak genomes carrying new topological information from being prematurely eradicated from the gene pool before their weight space can be optimised.

The first part of this tutorial can be found here.

Tracking Gene History through Innovation Numbers

Part 1 showed two mutations, link mutate and node mutate, which both add new genes to the genome. Each time a new gene is created (through a topological innovation) a global innovation number is incremented and assigned to that gene.

The global innovation number tracks the historical origin of each gene. If two genes have the same innovation number then they must represent the same topology (although the weights may be different). This is exploited during the gene crossover.

Genome Crossover (Mating)

Genome crossover takes two parent genomes (let's call them A and B) and creates a new genome (let's call it the child), taking the strongest genes from A and B and copying any topological structures along the way.

During the crossover genes from both genomes are lined up using their innovation number. For each innovation number the gene from the most fit parent is selected and inserted into the child genome. If both parent genomes have the same fitness then the gene is randomly selected from either parent with equal probability. If the innovation number is only present in one parent then this is known as a disjoint or excess gene and represents a topological innovation; it too is inserted into the child.
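A minimal sketch of this crossover, using a simplified gene representation (a named list mapping innovation number to connection weight) rather than the RNeat internals:

#Simplified crossover sketch (innovation number -> weight), not the RNeat internals
crossover <- function(parentA, parentB, fitnessA, fitnessB){
  child <- list()
  for(i in union(names(parentA), names(parentB))){
    inA <- i %in% names(parentA)
    inB <- i %in% names(parentB)
    if(inA && inB){ #Matching gene: take it from the fitter parent (random choice on a tie)
      if(fitnessA > fitnessB) child[[i]] <- parentA[[i]]
      else if(fitnessB > fitnessA) child[[i]] <- parentB[[i]]
      else child[[i]] <- if(runif(1) < 0.5) parentA[[i]] else parentB[[i]]
    } else { #Disjoint/excess gene: copy the topological innovation into the child
      child[[i]] <- if(inA) parentA[[i]] else parentB[[i]]
    }
  }
  child
}
parentA <- list("1"=0.5, "2"=-0.3, "4"=0.9)
parentB <- list("1"=0.7, "3"=0.2)
crossover(parentA, parentB, fitnessA=10, fitnessB=10)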

The image below shows the crossover process for two genomes of the same fitness.



Speciation

Speciation takes all the genomes in a given genome pool and attempts to split them into distinct groups known as species. The genomes in each species will have similar characteristics.

A way of measuring the similarity between two genomes is required: if two genomes are “similar” they are from the same species. A natural measure to use would be a weighted sum of the number of disjoint & excess genes (representing topological differences) and the difference in weights between matching genes. If the weighted sum is below some threshold then the genomes are of the same species.
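A minimal sketch of such a measure, using the same simplified representation as the crossover sketch above; the coefficients and threshold are illustrative, not the values used by RNeat.

#Illustrative genome distance (coefficients c1, c2 and the threshold are assumed)
genomeDistance <- function(genomeA, genomeB, c1=1.0, c2=0.4){
  shared <- intersect(names(genomeA), names(genomeB))
  disjointAndExcess <- length(union(names(genomeA), names(genomeB))) - length(shared)
  avgWeightDiff <- if(length(shared) > 0) mean(abs(unlist(genomeA[shared]) - unlist(genomeB[shared]))) else 0
  c1*disjointAndExcess + c2*avgWeightDiff
}
sameSpecies <- function(genomeA, genomeB, threshold=3.0){
  genomeDistance(genomeA, genomeB) < threshold
}
sameSpecies(parentA, parentB) #Using the example genomes from the crossover sketch above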

The advantage of splitting the genomes into species comes during the genetic evolution step where genomes with low fitness are culled (removed entirely from the genome pool): rather than having each genome fight for its place against every other genome in the entire pool, it only has to fight for its place against genomes of the same species. This way, species that form from a new topological innovation, and that do not yet have a high fitness because their weights have not been optimised, will survive the culling.

Summary of whole process

  • Create a genome pool with n random genomes
  • Take each genome, apply it to the problem / simulation and calculate the genome's fitness
  • Assign each genome to a species
  • In each species cull the genomes removing some of the weaker genomes
  • Breed each species (randomly select genomes in the species to either crossover or mutate)
  • Repeat all of the above



Evolving Neural Networks through Augmenting Topologies – Part 1 of 4

This four part series will explore the NeuroEvolution of Augmenting Topologies (NEAT) algorithm. Parts one and two will briefly outline the algorithm and discuss its benefits, part three will apply it to the pole balancing problem, and finally part four will apply it to market data.

This algorithm recently went viral in a video called MarI/O, where a network was developed that was capable of completing the first level of Super Mario; see the video below.

Typically when one chooses to use a neural network they have to decide how many hidden layers there are, the number of neurons in each layer and what connections exist between the neurons. Depending on the nature of the problem it can be very difficult to know what is a sensible topology. Once the topology is chosen it will most likely be trained using back-propagation or a genetic evolution approach and tested. The genetic evolution approach is essentially searching through the space of connection weights and selecting high performing networks and breeding them (this is known as fixed-topology evolution).

The above approach finds optimal connection weights; it is then down to an “expert” to manually tweak the topology of the network in an attempt to iteratively find better-performing networks.

This led to the development of variable-topology training, where both the connection space and structure space are explored. With this came a host of problems such as networks becoming incredibly bushy and complex slowing down the machine learning process. With the genetic approaches it was difficult to track genetic mutations and crossover structure in a meaningful way.

The NEAT algorithm aims to develop a genetic algorithm that searches through neural network weight and structure space and has the following properties:

  1. Has a genetic representation that allows structure to be crossed over in a meaningful way
  2. Protects topological innovations that need a few evolutions to be optimised, so that they don't disappear from the gene pool prematurely
  3. Minimises topologies throughout training without specially contrived network complexity penalisation functions

A thorough treatment of the algorithm can be found in the paper Evolving Neural Networks through Augmenting Topologies by Kenneth O. Stanley and Risto Miikkulainen (http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf).


The information about the network is represented by a genome; the genome contains node genes and connection genes. The node genes define nodes in the network: the nodes can be inputs (such as a technical indicator), outputs (such as a buy / sell recommendation), or hidden (used by the network for a calculation). The connection genes join nodes in the network together and have a weight attached to them.

Connection genes have an input node, an output node, a weight, an enabled/disabled flag and an innovation number. The innovation number is used to track the history of a gene's evolution and will be explained in more detail in part two.
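For illustration, a connection gene could be represented as a plain R list with those fields; the field names below are for the sketch only and are not the RNeat internals.

#Illustrative connection gene (field names are for the sketch, not the RNeat internals)
connectionGene <- list(
  inputNode  = 1,     #Source node gene
  outputNode = 4,     #Destination node gene
  weight     = 0.75,  #Connection weight
  enabled    = TRUE,  #Disabled genes stay in the genome but are not expressed in the network
  innovation = 12     #Global innovation number recording when this topology first appeared
)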

This post will look at some of the mutations that can happen to the network. It is worth noting that each genome has embedded inside it a mutation rate for each type of mutation that can occur; these mutation rates are also randomly increased or decreased as the evolution progresses.

Point Mutate

Randomly updates the weight of a randomly selected connection gene

The updates are either:

New Weight = Old Weight +/- Random number between 0 and genome$MutationRate[["Step"]]

or

New Weight = Random number between -2 and 2
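A minimal sketch of point mutation on the connection gene representation above; the probability of nudging versus resetting the weight, and the step size, are illustrative.

#Illustrative point mutation (perturbation probability and step size assumed)
pointMutate <- function(gene, step=0.1, perturbProb=0.9){
  if(runif(1) < perturbProb){
    gene$weight <- gene$weight + runif(1, -step, step) #New Weight = Old Weight +/- random step
  } else {
    gene$weight <- runif(1, -2, 2)                     #New Weight = random number between -2 and 2
  }
  gene
}
pointMutate(connectionGene) #Using the example gene from the sketch above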

Link Mutate

Randomly adds a new connection to the network with a random weight between -2 and 2


Node Mutate

This mutation adds a new node to the network by disabling a connection and replacing it with a connection of weight 1, a node, and a connection with the same weight as the disabled connection. In essence the original connection has been replaced with an identically functioning equivalent.
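A minimal sketch of node mutation on the same simplified representation; innovation-number bookkeeping is omitted for brevity.

#Illustrative node mutation: split a connection by inserting a new node
nodeMutate <- function(gene, newNodeId){
  gene$enabled <- FALSE                                 #Disable the original connection
  inbound  <- list(inputNode=gene$inputNode, outputNode=newNodeId,
                   weight=1, enabled=TRUE)              #Into the new node with weight 1
  outbound <- list(inputNode=newNodeId, outputNode=gene$outputNode,
                   weight=gene$weight, enabled=TRUE)    #Out of the new node, keeping the old weight
  list(original=gene, inbound=inbound, outbound=outbound)
}
nodeMutate(connectionGene, newNodeId=7) #Using the example gene from the sketch above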


Enable Disable Mutate

Randomly enables and disables connections