RNeat – Square Root Neural Net trained using Augmenting Topologies – Simple Example

A simple tutorial demonstrating how to train a neural network to compute square roots using a genetic algorithm that searches through the topological structure space. The algorithm is NEAT (NeuroEvolution of Augmenting Topologies) and is available in the RNeat package (not yet on CRAN).

The training is very similar to other machine learning / regression packages in R. The training function takes a data frame and a formula. The formula is used to specify which column in the data frame is the dependent variable and which are the explanatory variables. The code is commented and should be simple enough for new R users.

squareRootNetwork

The performance of the network can be seen in the bottom-left chart of the image above; there are considerable differences between the expected output and the actual output. With more training the magnitude of these errors is likely to reduce, as the bottom-right chart shows the maximum, mean and median fitness generally increasing with each generation.

install.packages("devtools")
library("devtools")
install_github("RNeat","ahunteruk") #Install from github as not yet on CRAN
library("RNeat")
 
#Generate training data y = sqrt(x)
trainingData <- as.data.frame(cbind(sqrt(seq(0.1,1,0.1)),seq(0.1,1,0.1)))
colnames(trainingData) <- c("y","x")
 
#Train the neural network for 5 generations, and plot the fitness
rneatsim <- rneatneuralnet(y~x,trainingData,5)
plot(rneatsim)
 
#Continue training the network for another 20 generations
rneatsim <- rneatneuralnetcontinuetraining(rneatsim,20)
plot(rneatsim)
 
#Construct some fresh data to stick through the neural network and hopefully get square rooted
liveData <- as.data.frame(seq(0.1,1,0.01))
colnames(liveData) <- c("x")
 
liveDataExpectedOutput <- sqrt(liveData)
colnames(liveDataExpectedOutput) <- "yExpected"
 
#Pass the data through the network
results <- compute(rneatsim,liveData)
 
#Calculate the difference between yPred the neural network output, and yExpected the actual square root of the input
error <- liveDataExpectedOutput[,"yExpected"] - results[,"yPred"]
results <- cbind(results,liveDataExpectedOutput,error)
 
dev.new()
layout(matrix(c(3,3,3,1,4,2), 2, 3, byrow = TRUE),heights=c(1,2))
plot(x=results[,"x"],y=results[,"yExpected"],type="l", main="Neural Network y=sqrt(x) expected vs predicted",xlab="x",ylab="y")
lines(x=results[,"x"],y=results[,"yPred"],col="red",type="l")
legend(x='bottomright', c('yExpected','yPredicted'), col=c("black","red"), fill=1:2, bty='n')
plot(rneatsim)
plot(rneatsim$simulation)

Evolving Neural Networks through Augmenting Topologies – Part 3 of 4

This part of the NEAT tutorial will show how to use the RNeat package (not yet on CRAN) to solve the classic pole balance problem.

The simulation requires the implementation of the following functions:

  • processInitialStateFunc – This specifies the initial state of the system; for the pole balance problem the state is the cart location, cart velocity, cart acceleration, force being applied to the cart, pole angle, pole angular velocity and pole angular acceleration.
  • processUpdateStateFunc – This specifies how to take the current state and update it using the outputs of the neural network. In this example this function simulates the equations of motion and takes the neural net output as the force that is being applied to the cart.
  • processStateToNeuralInputFunc – Allows the state to be modified / normalised before it is passed as the inputs to the neural network.
  • fitnessUpdateFunc – Takes the old fitness, the old state and the new updated state and determines what the new system fitness is. For the pole balance problem this function rewards the pendulum being upright and rewards the cart being close to the middle of the track.
  • terminationCheckFunc – Takes the state and checks whether the simulation should be terminated. It can choose to terminate if the pole falls over, the simulation has run for too long or the cart has driven off the end of the track.
  • plotStateFunc – Plots the state, for the pole balance this draws the cart and pendulum.

Onto the code:

install.packages("devtools")
library("devtools")
install_github("RNeat","ahunteruk") #Install from github as not yet on CRAN
library("RNeat")
drawPoleFunc <- function(fixedEnd.x,fixedEnd.y,poleLength, theta,fillColour=NA, borderColour="black"){
  floatingEnd.x <- fixedEnd.x-poleLength * sin(theta)
  floatingEnd.y <- fixedEnd.y+poleLength * cos(theta)
 
  polygon(c(fixedEnd.x,floatingEnd.x,floatingEnd.x,fixedEnd.x),
             c(fixedEnd.y,floatingEnd.y,floatingEnd.y,fixedEnd.y),
              col = fillColour, border=borderColour)
}
 
drawPendulum <- function(fixedEnd.x,fixedEnd.y,poleLength, theta,radius,fillColour=NA, borderColour="black"){
  floatingEnd.x <- fixedEnd.x-poleLength * sin(theta)
  floatingEnd.y <- fixedEnd.y+poleLength * cos(theta)
  createCircleFunc(floatingEnd.x,floatingEnd.y,radius,fillColour,borderColour)
}
 
#Parameters to control the simulation
simulation.timestep = 0.005
simulation.gravity = 9.8 #meters per second^2
simulation.numoftimesteps = 2000
 
pole.length = 1 #meters, total pole length
pole.width = 0.2
pole.theta = pi/4
pole.thetaDot = 0
pole.thetaDotDot = 0
pole.colour = "purple"
 
 
pendulum.centerX = NA
pendulum.centerY = NA
pendulum.radius = 0.1
pendulum.mass = 0.1
pendulum.colour = "purple"
 
cart.width=0.5
cart.centerX = 0
cart.centerY = 0
cart.height=0.2
cart.colour="red"
cart.centerXDot = 0
cart.centerXDotDot = 0
cart.mass = 0.4
cart.force = 0
cart.mu=2
 
 
track.limit= 10 #meters from center
track.x = -track.limit
track.height=0.01
track.y = 0.5*track.height
track.colour = "blue"
 
leftBuffer.width=0.1
leftBuffer.height=0.2
leftBuffer.x=-track.limit-0.5*cart.width-leftBuffer.width
leftBuffer.y=0.5*leftBuffer.height
leftBuffer.colour = "blue"
 
rightBuffer.width=0.1
rightBuffer.height=0.2
rightBuffer.x=track.limit+0.5*cart.width
rightBuffer.y=0.5*rightBuffer.height
rightBuffer.colour = "blue"
 
#Define the size of the scene (used to visualise what is happening in the simulation)
scene.width = 2*max(rightBuffer.x+rightBuffer.width,track.limit+pole.length+pendulum.radius)
scene.bottomLeftX = -0.5*scene.width
scene.height=max(pole.length+pendulum.radius,scene.width)
scene.bottomLeftY = -0.5*scene.height
 
poleBalance.InitialState <- function(){
   state <- list()
   state[1] <- cart.centerX
   state[2] <- cart.centerXDot
   state[3] <- cart.centerXDotDot
   state[4] <- cart.force
   state[5] <- pole.theta
   state[6] <- pole.thetaDot
   state[7] <- pole.thetaDotDot
   return(state)
}
 
poleBalance.ConvertStateToNeuralNetInputs <- function(currentState){
    return (currentState)
}
 
poleBalance.UpdatePoleState <- function(currentState,neuralNetOutputs){
   #print("Updating pole state")
   #print(neuralNetOutputs)
   cart.centerX <- currentState[[1]]
   cart.centerXDot <- currentState[[2]]
   cart.centerXDotDot <- currentState[[3]]
   cart.force <- currentState[[4]]+neuralNetOutputs[[1]]
   pole.theta <- currentState[[5]]
   pole.thetaDot <- currentState[[6]]
   pole.thetaDotDot <- currentState[[7]]
 
   costheta = cos(pole.theta)
   sintheta = sin(pole.theta)
   totalmass = cart.mass+pendulum.mass
   masslength = pendulum.mass*pole.length
 
   pole.thetaDotDot = (simulation.gravity*totalmass*sintheta+costheta*(cart.force-masslength*pole.thetaDot^2*sintheta-cart.mu*cart.centerXDot))/(pole.length*(totalmass-pendulum.mass*costheta^2))
 
   cart.centerXDotDot = (cart.force+masslength*(pole.thetaDotDot*costheta-pole.thetaDot^2*sintheta)-cart.mu*cart.centerXDot)/totalmass
 
   cart.centerX = cart.centerX+simulation.timestep*cart.centerXDot
   cart.centerXDot = cart.centerXDot+simulation.timestep*cart.centerXDotDot
   pole.theta = (pole.theta +simulation.timestep*pole.thetaDot )
   pole.thetaDot = pole.thetaDot+simulation.timestep*pole.thetaDotDot
 
   currentState[1] <- cart.centerX
   currentState[2] <- cart.centerXDot
   currentState[3] <- cart.centerXDotDot
   currentState[4] <- cart.force
   currentState[5] <- pole.theta
   currentState[6] <- pole.thetaDot
   currentState[7] <- pole.thetaDotDot
   return (currentState)
}
 
 
 
poleBalance.UpdateFitness <- function(oldState,updatedState,oldFitness){
   #return (oldFitness+1) #fitness is just how long we've ran for
   #return (oldFitness+((track.limit-abs(updatedState[[1]]))/track.limit)^2) #More reward for staying near middle of track
 
   height <- cos(updatedState[[5]]) #is -ve if below track
   heightFitness <- max(height,0)
   centerFitness <- (track.limit-abs(updatedState[[1]]))/track.limit
   return (oldFitness+(heightFitness + heightFitness*centerFitness))
}
 
poleBalance.CheckForTermination <- function(frameNum,oldState,updatedState,oldFitness,newFitness){
   cart.centerX <- updatedState[[1]]
   cart.centerXDot <- updatedState[[2]]
   cart.centerXDotDot <- updatedState[[3]]
   cart.force <- updatedState[[4]]
   pole.theta <- updatedState[[5]]
   pole.thetaDot <- updatedState[[6]]
   pole.thetaDotDot <- updatedState[[7]]
 
   oldpole.theta <- oldState[[5]]
   if(frameNum > 20000){
     print("Max Frame Num Exceeded , stopping simulation")
     return (T)
   }
 
   height <- cos(pole.theta)
   oldHeight <- cos(oldpole.theta)
   if(height==-1 & cart.force==0){
     return(T)
   }
 
    if(oldHeight >= 0 & height < 0){
      #print("Pole fell over")
      return (T)
    }
    if(cart.centerX < track.x | cart.centerX > (track.x+2*track.limit)){
      #print("Exceeded track length")
       return (T)
    } else {
      return (F)
    }
}
 
poleBalance.PlotState <-function(updatedState){
   cart.centerX <- updatedState[[1]]
   cart.centerXDot <- updatedState[[2]]
   cart.centerXDotDot <- updatedState[[3]]
   cart.force <- updatedState[[4]]
   pole.theta <- updatedState[[5]]
   pole.thetaDot <- updatedState[[6]]
   pole.thetaDotDot <- updatedState[[7]]
 
     createSceneFunc(scene.bottomLeftX,scene.bottomLeftY,scene.width,scene.height,
                 main="Simulation of Inverted Pendulum - www.gekkoquant.com",xlab="",
                 ylab="",xlim=c(-0.5*scene.width,0.5*scene.width),ylim=c(-0.5*scene.height,0.5*scene.height))
 
    createBoxFunc(track.x,track.y,track.limit*2,track.height,track.colour)
    createBoxFunc(leftBuffer.x,leftBuffer.y,leftBuffer.width,leftBuffer.height,leftBuffer.colour)
    createBoxFunc(rightBuffer.x,rightBuffer.y,rightBuffer.width,rightBuffer.height,rightBuffer.colour)
    createBoxFunc(cart.centerX-0.5*cart.width,cart.centerY+0.5*cart.height,cart.width,cart.height,cart.colour)
    drawPoleFunc(cart.centerX,cart.centerY,2*pole.length,pole.theta,pole.colour)
    drawPendulum(cart.centerX,cart.centerY,2*pole.length,pole.theta,pendulum.radius,pendulum.colour)
 
}
 
config <- newConfigNEAT(7,1,500,50)
poleSimulation <- newNEATSimulation(config, poleBalance.InitialState,
                                poleBalance.UpdatePoleState,
                                poleBalance.ConvertStateToNeuralNetInputs,
                                poleBalance.UpdateFitness,
                                poleBalance.CheckForTermination,
                                poleBalance.PlotState)
 
#Run the simulation one generation at a time for 1000 generations
for(i in seq(1,1000)){ 
      poleSimulation <- NEATSimulation.RunSingleGeneration(poleSimulation,T,"videos","poleBalance",1/simulation.timestep) 
}

Evolving Neural Networks through Augmenting Topologies – Part 2 of 4

This part of the tutorial on the NEAT algorithm explains how genomes are crossed over in a meaningful way while maintaining their topological information, and how speciation (grouping genomes into species) can be used to protect weak genomes with new topological information from being eradicated from the gene pool before their weight space can be optimised.

The first part of this tutorial can be found here.

Tracking Gene History through Innovation Numbers

Part 1 showed two mutations, link mutate and node mutate, which both add new genes to the genome. Each time a new gene is created (through a topological innovation) a global innovation number is incremented and assigned to that gene.

The global innovation number tracks the historical origin of each gene. If two genes have the same innovation number then they must represent the same topology (although the weights may be different). This is exploited during gene crossover.
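
As a rough illustration (not the actual RNeat internals), a global innovation counter might be maintained along the following lines, with every new connection gene being stamped with the next number:

#Hypothetical sketch: a global counter hands out innovation numbers to new genes
innovation.counter <- 0

newInnovationNumber <- function(){
  innovation.counter <<- innovation.counter + 1
  return(innovation.counter)
}

#A new connection gene records the innovation number it was created with
newConnectionGene <- function(input, output, weight){
  list(input = input, output = output, weight = weight,
       enabled = TRUE, innovation = newInnovationNumber())
}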

Genome Crossover (Mating)

Genome crossover takes two parent genomes (let's call them A and B) and creates a new genome (let's call it the child), taking the strongest genes from A and B and copying any topological structures along the way.

During the crossover, genes from both genomes are lined up using their innovation numbers. For each innovation number the gene from the fitter parent is selected and inserted into the child genome. If both parent genomes have the same fitness then the gene is randomly selected from either parent with equal probability. If an innovation number is only present in one parent then it is known as a disjoint or excess gene and represents a topological innovation; it too is inserted into the child.

The image below shows the crossover process for two genomes of the same fitness.

crossover1
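
To make the alignment idea concrete, below is a minimal sketch of the crossover described above. It assumes a genome is simply a list of connection genes, each carrying an innovation and a weight field; this is an illustration, not the RNeat implementation.

#Hypothetical sketch of crossover: genes are aligned by innovation number
crossoverSketch <- function(genomeA, genomeB, fitnessA, fitnessB){
  innovA <- sapply(genomeA, function(g) g$innovation)
  innovB <- sapply(genomeB, function(g) g$innovation)
  child <- list()
  for(innov in union(innovA, innovB)){
    inA <- innov %in% innovA
    inB <- innov %in% innovB
    if(inA & inB){
      #Matching gene: take it from the fitter parent, or at random on a tie
      if(fitnessA > fitnessB){
        gene <- genomeA[[which(innovA == innov)]]
      } else if(fitnessB > fitnessA){
        gene <- genomeB[[which(innovB == innov)]]
      } else if(runif(1) < 0.5){
        gene <- genomeA[[which(innovA == innov)]]
      } else {
        gene <- genomeB[[which(innovB == innov)]]
      }
    } else if(inA){
      gene <- genomeA[[which(innovA == innov)]] #disjoint / excess gene from A
    } else {
      gene <- genomeB[[which(innovB == innov)]] #disjoint / excess gene from B
    }
    child[[length(child)+1]] <- gene
  }
  return(child)
}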

Speciation

Speciation takes all the genomes in a given genome pool and attempts to split them into distinct groups known as species. The genomes in each species will have similar characteristics.

A way of measuring the similarity between two genomes is required; if two genomes are “similar” they are from the same species. A natural measure to use is a weighted sum of the number of disjoint & excess genes (representing topological differences) and the difference in weights between matching genes. If the weighted sum is below some threshold then the genomes belong to the same species.
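
A minimal sketch of such a measure is given below, loosely following the compatibility distance in the Stanley & Miikkulainen paper; the coefficients and the gene representation are assumptions for illustration rather than the RNeat code.

#Hypothetical sketch: weighted sum of the non-matching gene count and the
#average weight difference of matching genes
genomeDistance <- function(genomeA, genomeB, c1 = 1, c2 = 0.4){
  innovA <- sapply(genomeA, function(g) g$innovation)
  innovB <- sapply(genomeB, function(g) g$innovation)
  matching    <- intersect(innovA, innovB)
  nonMatching <- length(setdiff(innovA, innovB)) + length(setdiff(innovB, innovA))
  weightDiff <- 0
  if(length(matching) > 0){
    weightDiff <- mean(sapply(matching, function(innov){
      abs(genomeA[[which(innovA == innov)]]$weight -
          genomeB[[which(innovB == innov)]]$weight)
    }))
  }
  n <- max(length(genomeA), length(genomeB), 1)
  return(c1*nonMatching/n + c2*weightDiff)
}

#Two genomes belong to the same species if the distance is below some threshold
sameSpecies <- function(genomeA, genomeB, threshold = 1){
  genomeDistance(genomeA, genomeB) < threshold
}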

The advantage of splitting the genomes into species is seen during the genetic evolution step, where genomes with low fitness are culled (removed entirely from the genome pool): rather than having each genome fight for its place against every other genome in the entire genome pool, we can make it fight for its place against genomes of the same species. This way, species that form from a new topological innovation and do not yet have a high fitness, because their weights have not been optimised, will survive the culling.

Summary of whole process

  • Create a genome pool with n random genomes
  • Take each genome, apply it to the problem / simulation and calculate the genome fitness
  • Assign each genome to a species
  • In each species cull the genomes, removing some of the weaker genomes
  • Breed each species (randomly select genomes in the species to either crossover or mutate)
  • Repeat all of the above (a minimal sketch of this loop is given below)
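
The sketch below writes that loop in the same callback style that RNeat uses for its simulations; every function passed in is a hypothetical placeholder for one of the steps above, not an RNeat function.

#Hypothetical outline of a NEAT run; the callbacks are placeholders for the steps above
neatOutlineSketch <- function(createRandomGenomes, evaluateFitness, assignToSpecies,
                              cullWeakGenomes, breedSpecies, n = 300, generations = 50){
  genomePool <- createRandomGenomes(n)                  #n random genomes
  for(generation in seq(1, generations)){
    fitness    <- sapply(genomePool, evaluateFitness)   #apply each genome to the problem
    species    <- assignToSpecies(genomePool, fitness)  #group the genomes into species
    species    <- cullWeakGenomes(species)              #remove weaker genomes per species
    genomePool <- breedSpecies(species)                 #crossover / mutate within each species
  }
  return(genomePool)
}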


Evolving Neural Networks through Augmenting Topologies – Part 1 of 4

This four-part series will explore the NeuroEvolution of Augmenting Topologies (NEAT) algorithm. Parts one and two will briefly outline the algorithm and discuss its benefits, part three will apply it to the pole balancing problem, and part four will apply it to market data.

This algorithm recently went viral in a video called MarI/O, where a network was developed that was capable of completing the first level of Super Mario; see the video below.

Typically when one chooses to use a neural network they have to decide how many hidden layers there are, the number of neurons in each layer and what connections exist between the neurons. Depending on the nature of the problem it can be very difficult to know what a sensible topology is. Once the topology is chosen the network will most likely be trained using back-propagation or a genetic evolution approach and then tested. The genetic evolution approach essentially searches through the space of connection weights, selecting high-performing networks and breeding them (this is known as fixed-topology evolution).

The above approach finds optimal connection weights; it is then down to an “expert” to manually tweak the topology of the network in an attempt to iteratively find better-performing networks.

This led to the development of variable-topology training, where both the connection space and the structure space are explored. With this came a host of problems, such as networks becoming incredibly bushy and complex, slowing down the machine learning process. With the genetic approaches it was also difficult to track genetic mutations and to cross over structure in a meaningful way.

The NEAT algorithm aims to develop a genetic algorithm that searches through neural network weight and structure space and that has the following properties:

  1. Have a genetic representation that allows structure to be crossed over in a meaningful way
  2. Protect topological innovations that need a few generations to be optimised, so that they don't disappear from the gene pool prematurely
  3. Minimise topologies throughout training without specially contrived network complexity penalisation functions

A thorough treatment of the algorithm can be found in the paper Evolving Neural Networks through Augmenting Topologies by Kenneth O. Stanley and Risto Miikkulainen (http://nn.cs.utexas.edu/downloads/papers/stanley.ec02.pdf).

genotype-phenotype

The information about the network is represented by a genome; the genome contains node genes and connection genes. The node genes define nodes in the network; the nodes can be inputs (such as a technical indicator), outputs (such as a buy / sell recommendation), or hidden (used by the network for a calculation). The connection genes join nodes in the network together and have a weight attached to them.

Connection genes have an input node, an output node, a weight, an enabled/disabled flag and an innovation number. The innovation number is used to track the history of a gene's evolution and will be explained in more detail in part two.
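
For illustration only, such a connection gene could be represented as a plain R list; the field names here are assumptions rather than the RNeat internals.

#Hypothetical sketch of a single connection gene
connectionGene <- list(input      = 1,     #node the connection comes from
                       output     = 4,     #node the connection goes to
                       weight     = 0.7,   #connection weight
                       enabled    = TRUE,  #enabled / disabled flag
                       innovation = 12)    #historical marker, see part two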

This post will look at some of the mutations that can happen to the network. It is worth noting that each genome has embedded inside it a mutation rate for each type of mutation that can occur. These mutation rates are also randomly increased or decreased as the evolution progresses.

Point Mutate

Randomly updates the weight of a randomly selected connection gene

The updates are either:

New Weight = Old Weight +/- Random number between 0 and genome$MutationRate[["Step"]]

or

New Weight = Random number between -2 and 2
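
A minimal sketch of this mutation is shown below, assuming the genome keeps its connection genes in genome$genes and its mutation rates in genome$MutationRate; the representation is assumed for illustration and is not the RNeat code.

#Hypothetical sketch of point mutation
pointMutateSketch <- function(genome, perturbChance = 0.9){
  i <- sample(length(genome$genes), 1)  #pick a random connection gene
  if(runif(1) < perturbChance){
    #Perturb: old weight +/- a random number between 0 and the step size
    step <- genome$MutationRate[["Step"]]
    genome$genes[[i]]$weight <- genome$genes[[i]]$weight + runif(1, -step, step)
  } else {
    #Replace: new random weight between -2 and 2
    genome$genes[[i]]$weight <- runif(1, -2, 2)
  }
  return(genome)
}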

pointMutate

Link Mutate

Randomly adds a new connection to the network with a random weight between -2 and 2
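
Sketched below under the same assumed genome representation (an illustration, not the RNeat code); the caller supplies the node ids and the next innovation number.

#Hypothetical sketch of link mutation: join two existing nodes with a random weight
linkMutateSketch <- function(genome, nodeIds, innovationNumber){
  ends <- sample(nodeIds, 2)                 #pick two distinct nodes at random
  gene <- list(input      = ends[1],
               output     = ends[2],
               weight     = runif(1, -2, 2), #random weight between -2 and 2
               enabled    = TRUE,
               innovation = innovationNumber)
  genome$genes[[length(genome$genes) + 1]] <- gene
  return(genome)
}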

linkMutate

Node Mutate

This mutation adds a new node to the network by disabling a connection and replacing it with a connection of weight 1, a new node, and a connection with the same weight as the disabled connection. In essence the connection has been replaced with an identically functioning equivalent.
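
Under the same assumed representation, a minimal sketch of this mutation is given below (again an illustration rather than the RNeat implementation).

#Hypothetical sketch of node mutation: split an existing connection with a new node
nodeMutateSketch <- function(genome, newNodeId, innovationNumbers){
  i   <- sample(length(genome$genes), 1) #pick a connection gene to split
  old <- genome$genes[[i]]
  genome$genes[[i]]$enabled <- FALSE     #disable the original connection
  #The connection into the new node has weight 1
  geneIn  <- list(input = old$input, output = newNodeId, weight = 1,
                  enabled = TRUE, innovation = innovationNumbers[1])
  #The connection out of the new node keeps the old weight
  geneOut <- list(input = newNodeId, output = old$output, weight = old$weight,
                  enabled = TRUE, innovation = innovationNumbers[2])
  genome$genes <- c(genome$genes, list(geneIn), list(geneOut))
  return(genome)
}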

nodeMutate

Enable Disable Mutate

Randomly enables and disables connections
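
Under the same assumed representation this is a one-liner (illustration only):

#Hypothetical sketch of enable/disable mutation: flip the flag of a random connection
enableDisableMutateSketch <- function(genome){
  i <- sample(length(genome$genes), 1)
  genome$genes[[i]]$enabled <- !genome$genes[[i]]$enabled
  return(genome)
}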

enableDisableMutate


Inverted Pendulum Simulation in R

This post will derive the equations of motion and simulate the classic inverted pendulum control problem. Subsequent posts will apply machine learning to figure out how to control the pendulum and keep it up in the air.

A video of the simulation can be found at:

The derivation of the maths follows the approach outlined in the following video; however, I have also decided to model the friction between the cart and the track.

Cart

cart

Free body diagram of cart

cartFBD

Free body diagram of pendulum

pendulumFBD

Resolve the forces on the free body diagrams and set them equal to mass times acceleration:

\text{Cart (0),}\hat{i}:F-TSin\theta-\mu\dot{x}=m_{c}\ddot{x}
\text{Pendulum (1),}\hat{i}:TSin\theta=m_{p}a_{px}
\text{Pendulum (2),}\hat{j}:-TCos\theta-m_{p}g=m_{p}a_{py}

Definition of the \hat{e} co-ordinate system

cart

The acceleration of the pendulum is the acceleration of the cart plus the acceleration of the pendulum relative to the cart

\underline{a_{p}}=\underline{a_{c}}+\underline{a_{p/c}}=\ddot{x}\hat{i}+L\ddot{\theta}\hat{e}_{\theta}+L\dot{\theta}^{2}\hat{e}_{r}

Convert the co-ordinate system back into the \hat{i} and \hat{j} components

\underline{a_{p}}=\ddot{x}\hat{i}+L\ddot{\theta}[-Cos\theta\hat{i}-Sin\theta\hat{j}]-L\dot{\theta}^{2}[-Sin\theta\hat{i}+Cos\theta\hat{j}]

Substitute the accelerations into equation (1) and (2)

(1):TSin\theta=m_{p}\ddot{x}-m_{p}L\ddot{\theta}Cos\theta+m_{p}L\dot{\theta}^{2}Sin\theta\text{ Eq(3)}
(2):-TCos\theta-m_{p}g=-m_{p}L\ddot{\theta}Sin\theta-m_{p}L\dot{\theta}^{2}Cos\theta\text{ Eq(4)}

It is undesirable to have an unknown tension T, so eliminate it using a trick:

(3)Cos\theta+(4)Sin\theta:
TSin\theta Cos\theta-TCos\theta Sin\theta-m_{p}gSin\theta=m_{p}\ddot{x}Cos\theta-m_{p}L\ddot{\theta}Cos^{2}\theta+m_{p}L\dot{\theta}^{2}Sin\theta Cos\theta-m_{p}L\ddot{\theta}Sin^{2}\theta-m_{p}L\dot{\theta}^{2}Cos\theta Sin\theta
-m_{p}gSin\theta=m_{p}\ddot{x}Cos\theta-m_{p}L\ddot{\theta}(Cos^{2}\theta+Sin^{2}\theta)+m_{p}L\dot{\theta}^{2}(Sin\theta Cos\theta-Cos\theta Sin\theta)
-m_{p}gSin\theta=m_{p}\ddot{x}Cos\theta-m_{p}L\ddot{\theta}\text{ Eq(5)}

Substitute equation (1) into equation (0)

(0)\&(1):F-m_{p}\ddot{x}+m_{p}L\ddot{\theta}Cos\theta-m_{p}L\dot{\theta^{2}Sin\theta}-\mu\dot{x}=m_{c}\ddot{x}
(0)\&(1):F+m_{p}L\ddot{\theta}Cos\theta-m_{p}L\dot{\theta^{2}Sin\theta}-\mu\dot{x}=(m_{c}-m_{p})\ddot{x}\text{ Eq(6)}

Rearranging equations (5) and (6) gives the system equations in known, measurable variables:

\ddot{x}=\frac{F+m_{p}L[\ddot{\theta}Cos\theta-\dot{\theta}^{2}Sin\theta]-\mu\dot{x}}{m_{c}+m_{p}}
\ddot{\theta}=\frac{\ddot{x}Cos\theta+gSin\theta}{L}

Both of the acceleration terms \ddot{x} and \ddot{\theta} depend on each other, which is undesirable, so substitute the equation for \ddot{x} into the equation for \ddot{\theta} to remove the dependency:

L\ddot{\theta}=\ddot{x}Cos\theta+gSin\theta
L\ddot{\theta}=\frac{FCos\theta+m_{p}LCos\theta[\ddot{\theta}Cos\theta-\dot{\theta}^{2}Sin\theta]-\mu Cos\theta\dot{x}}{m_{c}+m_{p}}+gSin\theta
L(m_{c}+m_{p})\ddot{\theta}=FCos\theta+m_{p}LCos\theta[\ddot{\theta}Cos\theta-\dot{\theta}^{2}Sin\theta]-\mu Cos\theta\dot{x}+g(m_{c}+m_{p})Sin\theta
L(m_{c}+m_{p}-m_{p}Cos^{2}\theta)\ddot{\theta}=FCos\theta-m_{p}L\dot{\theta}^{2}Cos\theta Sin\theta-\mu Cos\theta\dot{x}+g(m_{c}+m_{p})Sin\theta
\ddot{\theta}=\frac{FCos\theta-m_{p}L\dot{\theta}^{2}Cos\theta Sin\theta-\mu Cos\theta\dot{x}+g(m_{c}+m_{p})Sin\theta}{L(m_{c}+m_{p}-m_{p}Cos^{2}\theta)}
\ddot{\theta}=\frac{g(m_{c}+m_{p})Sin\theta+Cos\theta[F-m_{p}L\dot{\theta}^{2}Sin\theta-\mu\dot{x}]}{L(m_{c}+m_{p}-m_{p}Cos^{2}\theta)}

The system can then be simulated using Euler update equations:

x_{t+\Delta t}=x_{t}+\dot{x}_{t}\Delta t
\dot{x}_{t+\Delta t}=\dot{x}_{t}+\ddot{x}_{t}\Delta t
\theta_{t+\Delta t}=\theta_{t}+\dot{\theta}_{t}\Delta t
\dot{\theta}_{t+\Delta t}=\dot{\theta}_{t}+\ddot{\theta}_{t}\Delta t

On to the code:

library("animation")  #Library to save GIFs
 
#Function to create a blank canvas / scene for drawing objects onto later
createSceneFunc <- function(bottomLeftX, bottomLeftY, width,height,main="",xlab="",ylab="",ann=T,xaxt=NULL,yaxt=NULL,xlim=NULL,ylim=NULL){
        plot(c(bottomLeftX, width), c(bottomLeftY,height), type = "n",ann=ann, xaxt=xaxt, yaxt=yaxt,xlim=xlim,ylim=ylim,main=main,xlab=xlab,ylab=ylab )
}
 
#Function to draw a box on the scene
createBoxFunc <- function(topLeftX, topLeftY, width, height, fillColour=NA, borderColour="black"){
     polygon(c(topLeftX,topLeftX+width,topLeftX+width,topLeftX),
             c(topLeftY,topLeftY,topLeftY-height,topLeftY-height),
              col = fillColour, border=borderColour)
}
 
#Function to draw a circle on the scene
createCircleFunc <- function(centerX,centerY,radius,fillColour=NA, borderColour="black"){
     symbols(centerX,centerY,circles=radius,inches=F,add=T,fg=borderColour,bg=fillColour)
}
 
drawPoleFunc <- function(fixedEnd.x,fixedEnd.y,poleLength, theta,fillColour=NA, borderColour="black"){
  floatingEnd.x <- fixedEnd.x+poleLength * sin(theta)
  floatingEnd.y <- fixedEnd.y+poleLength * cos(theta)
 
  polygon(c(fixedEnd.x,floatingEnd.x,floatingEnd.x,fixedEnd.x),
             c(fixedEnd.y,floatingEnd.y,floatingEnd.y,fixedEnd.y),
              col = fillColour, border=borderColour)
}
 
drawPendulum <- function(fixedEnd.x,fixedEnd.y,poleLength, theta,radius,fillColour=NA, borderColour="black"){
  floatingEnd.x <- fixedEnd.x+poleLength * sin(theta)
  floatingEnd.y <- fixedEnd.y+poleLength * cos(theta)
  createCircleFunc(floatingEnd.x,floatingEnd.y,radius,fillColour,borderColour)
}
 
#Parameters to control the simulation
simulation.timestep = 0.02
simulation.gravity = 9.8 #meters per second^2
simulation.numoftimesteps = 2000
 
pole.length = 1 #meters, total pole length
pole.width = 0.2
pole.theta = 1*pi/4
pole.thetaDot = 0
pole.thetaDotDot = 0
pole.colour = "purple"
 
 
pendulum.centerX = NA
pendulum.centerY = NA
pendulum.radius = 0.1
pendulum.mass = 1
pendulum.colour = "purple"
 
cart.width=0.5
cart.centerX = 0
cart.centerY = 0
cart.height=0.2
cart.colour="red"
cart.centerXDot = 0
cart.centerXDotDot = 0
cart.mass = 1
cart.force = 0
cart.mu=2
 
 
track.limit= 2.4 #meters from center
track.x = -track.limit
track.height=0.01
track.y = 0.5*track.height
track.colour = "blue"
 
leftBuffer.width=0.1
leftBuffer.height=0.2
leftBuffer.x=-track.limit-0.5*cart.width-leftBuffer.width
leftBuffer.y=0.5*leftBuffer.height
leftBuffer.colour = "blue"
 
rightBuffer.width=0.1
rightBuffer.height=0.2
rightBuffer.x=track.limit+0.5*cart.width
rightBuffer.y=0.5*rightBuffer.height
rightBuffer.colour = "blue"
 
#Define the size of the scene (used to visualise what is happening in the simulation)
scene.width = 2*max(rightBuffer.x+rightBuffer.width,track.limit+pole.length+pendulum.radius)
scene.bottomLeftX = -0.5*scene.width
scene.height=max(pole.length+pendulum.radius,scene.width)
scene.bottomLeftY = -0.5*scene.height
 
 
 
#Some variables to store various time series values of the simulation
logger.trackposition = rep(NA,simulation.numoftimesteps)
logger.force = rep(NA,simulation.numoftimesteps)
logger.cartvelocity = rep(NA,simulation.numoftimesteps)
logger.poletheta = rep(NA,simulation.numoftimesteps)
 
#Some settings to control the charts used to plot the logged variables
plotcontrol.trackposition.ylim = c(0,10)
plotcontrol.force.ylim = c(-6,6)
plotcontrol.yacceleration.ylim = c(-simulation.gravity,400)
plotcontrol.poletheta.ylim = c(0,360)
 
 
 
runSimulationFunc <- function(){
   simulationAborted = FALSE
  #Main simulation loop
  for(i in seq(1,simulation.numoftimesteps)){
 
   costheta = cos(pole.theta)
   sintheta = sin(pole.theta)
   totalmass = cart.mass+pendulum.mass
   masslength = pendulum.mass*pole.length
 
   pole.thetaDotDot = (simulation.gravity*totalmass*sintheta+costheta*(cart.force-masslength*pole.thetaDot^2*sintheta-cart.mu*cart.centerXDot))/(pole.length*(totalmass-pendulum.mass*costheta^2))
 
   cart.centerXDotDot = (cart.force+masslength*(pole.thetaDotDot*costheta-pole.thetaDot^2*sintheta)-cart.mu*cart.centerXDot)/totalmass
 
   cart.centerX = cart.centerX+simulation.timestep*cart.centerXDot
   cart.centerXDot = cart.centerXDot+simulation.timestep*cart.centerXDotDot
   pole.theta = (pole.theta +simulation.timestep*pole.thetaDot )
   pole.thetaDot = pole.thetaDot+simulation.timestep*pole.thetaDotDot
 
   if(cart.centerX <= track.x | cart.centerX >= (track.x+2*track.limit)){
        cart.colour="black"
        simulationAborted = TRUE
   }
 
 
     #Log the results of the simulation
     logger.trackposition[i] <- cart.centerX
     logger.force[i] <- cart.force
     logger.cartvelocity[i] <- cart.centerXDot
     logger.poletheta[i] <- pole.theta
 
    #Plot the simulation
    #The layout command arranges the charts
    layout(matrix(c(1,2,1,3,1,4,1,5,6,6), 5, 2, byrow = TRUE),heights=c(2,2,2,2,1))
    par(mar=c(3,4,2,2) + 0.1)
 
    #Create the scene and draw the various objects
    createSceneFunc(scene.bottomLeftX,scene.bottomLeftY,scene.width,scene.height,
                 main="Simulation of Inverted Pendulum - www.gekkoquant.com",xlab="",
                 ylab="",xlim=c(-0.5*scene.width,0.5*scene.width),ylim=c(-0.5*scene.height,0.5*scene.height))
 
    createBoxFunc(track.x,track.y,track.limit*2,track.height,track.colour)
    createBoxFunc(leftBuffer.x,leftBuffer.y,leftBuffer.width,leftBuffer.height,leftBuffer.colour)
    createBoxFunc(rightBuffer.x,rightBuffer.y,rightBuffer.width,rightBuffer.height,rightBuffer.colour)
    createBoxFunc(cart.centerX-0.5*cart.width,cart.centerY+0.5*cart.height,cart.width,cart.height,cart.colour)
    drawPoleFunc(cart.centerX,cart.centerY,2*pole.length,pole.theta,pole.colour)
    drawPendulum(cart.centerX,cart.centerY,2*pole.length,pole.theta,pendulum.radius,pendulum.colour)
    #Plot the logged variables
    plot(logger.trackposition, type="l",ylab="Cart Position")#, ylim=plotcontrol.trackposition.ylim, ylab="Y POSITION")
    plot(logger.force, type="l", ylab="Cart FORCE") #, ylim=plotcontrol.force.ylim, ylab="Y VELOCITY")
    plot(logger.cartvelocity, type="l", ylab="Cart VELOCITY")#,ylim=plotcontrol.yacceleration.ylim, ylab="Y ACCELERATION")
    plot(logger.poletheta*360/(2*pi), type="l", ylab="Pole THETA")#,ylim=plotcontrol.yacceleration.ylim, ylab="Y ACCELERATION")
 
    #Plot a progress bar
    par(mar=c(2,1,1,1))
    plot(-5, xlim = c(1,simulation.numoftimesteps), ylim = c(0, .3), yaxt = "n", xlab = "", ylab = "", main = "Iteration")
    abline(v=i, lwd=5, col = rgb(0, 0, 255, 255, maxColorValue=255))
 
    if(simulationAborted){
      break
    }
  }
}
 
#runSimulationFunc()
oopt = ani.options(ani.width = 1200, ani.height = 800, other.opts = "-pix_fmt yuv420p -b 600k")
saveVideo(runSimulationFunc(),interval = simulation.timestep,ani.options=oopt,video.name="inverted-pendulum.mp4")
ani.options(oopt)