# K-Nearest Neighbour Algo: Fail

In this post I’m going to look at an implementation of the k-nearest neighbour algorithm.

The algorithm is very simple and can be split into 3 components:

1. A Data Measure – What features / observations describe the current trading day? (vol, RSI, moving avg, etc.; don't forget to normalise your measurements) (variable dataMeasure)

2. An Error Measure – How to measure the similarity between data measures (just use MSE); this identifies the K most similar trading days to today (function calculateMSE)

3. Signal Generation – How to convert what happened on the day after each of the K neighbours into a trading signal (function calculateTradeSignalFromKNeighbours)

In the data measure we look to come up with some quantitative measures that capture information about the current trading day. In the example presented below I've used a normalised volatility measure, vol(slow)/(vol(slow)+vol(fast)), where fast and slow indicate the window size (slower = longer window). The same procedure is applied to linear regression curves, and additionally I've included a fast and a slow RSI. We take this measure and compare it to the measures on all of the previous trading days, trying to identify the K most similar days in history.

You should look to normalise your signals in some fashion. The reason you need to do this is so that during the MSE calculation you haven't unexpectedly put a large weight on one of your measurement variables simply because it lives on a larger scale than the others.

Now that you've found a set of trading days that are most similar to the current trading day, you still have to determine how to convert those days into a trading signal. In the code I take the K nearest neighbours and look at what occurred on the day after each of them. I take the open-to-close return, calculate the Sharpe ratio across the K neighbours, and use this as the number of contracts to buy the following day. If the K neighbours are unrelated their next-day returns will be erratic and the Sharpe ratio close to 0, hence we will only trade a small number of contracts.
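Distilling the three components, here is a minimal self-contained sketch (my own illustrative rewrite, treating dataMeasure as a plain numeric matrix and ret as a numeric vector; the full listing below is the post's actual implementation):

```r
# Sketch of the k-nearest neighbour signal (illustrative, not the full backtest).
# dataMeasure: matrix of normalised features, one row per day; ret: open-close returns.
knnSignal <- function(dataMeasure, ret, K){
    today <- dataMeasure[nrow(dataMeasure), ]
    # Error measure: MSE between today's features and every historical day
    mse <- apply(dataMeasure, 1, function(row) mean((row - today)^2, na.rm=TRUE))
    mse[nrow(dataMeasure)] <- Inf                  # exclude today itself (its MSE is 0)
    neighbours <- order(mse)[1:K]                  # the K most similar days
    neighbours <- neighbours[neighbours + 1 <= length(ret)]
    nextDayRet <- na.omit(ret[neighbours + 1])     # what happened the day AFTER each neighbour
    if(length(nextDayRet) <= 1) return(0)
    # Signal: annualised Sharpe of the neighbours' next-day returns
    mean(nextDayRet) / sd(nextDayRet) * sqrt(252)
}
```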

This algo is potentially interesting when using vol as one of the data measures, since it naturally captures the different regimes in the market. If today is a high-vol day, it'll be compared to the historical days that also have high vol. The hope is that today's market still behaves in the same fashion as on a historically similar day.

Sadly the performance of this strategy is terrible (it could just be poor input parameter selection / poor data measures). I suspect that there are better forms of k-nearest neighbour to use. I take today and compare it to single days in history. There could be significant gains to be had if I instead take 1 month of data and find the most similar historical month; this would identify patterns of similar behavior which may be more tradeable. I will investigate this in my next post.

On to the code:

 library("quantmod") library("PerformanceAnalytics") library("zoo")   #INPUTS marketSymbol <- "ARM.L"   nFastLookback <- 30 #The fast signal lookback used in linear regression curve nSlowLookback <- 50 #The slow signal lookback used in linear regression curve   nFastVolLookback <- 30 #The fast signal lookback used to calculate the stdev nSlowVolLookback <- 50 #The slow signal lookback used calculate the stdev   nFastRSILookback <- 30 #The fast signal lookback used to calculate the stdev nSlowRSILookback <- 50 #The slow signal lookback used calculate the stdev   kNearestGroupSize <- 50 #How many neighbours to use normalisedStrengthVolWeight <- 2 #Make some signals more important than others in the MSE normalisedStrengthRegressionWeight <- 1 fastRSICurveWeight <- 2 slowRSICurveWeight <- 0.8     #Specify dates for downloading data, training models and running simulation startDate = as.Date("2006-08-01") #Specify what date to get the prices from symbolData <- new.env() #Make a new environment for quantmod to store data in   stockCleanNameFunc <- function(name){ return(sub("^","",name,fixed=TRUE)) }   getSymbols(marketSymbol, env = symbolData, src = "yahoo", from = startDate) cleanName <- stockCleanNameFunc(marketSymbol) mktData <- get(cleanName,symbolData)   linearRegressionCurve <- function(data,n){ regression <- function(dataBlock){ fit <-lm(dataBlock~seq(1,length(dataBlock),1)) return(last(fitfitted.values)) } return (rollapply(data,width=n,regression,align="right",by.column=FALSE,na.pad=TRUE)) } volCurve <- function(data,n){ stdev <- function(dataBlock){ sd(dataBlock) } return (rollapply(data,width=n,stdev,align="right",by.column=FALSE,na.pad=TRUE))^2 } fastRegression <- linearRegressionCurve(Cl(mktData),nFastLookback) slowRegression <- linearRegressionCurve(Cl(mktData),nSlowLookback) normalisedStrengthRegression <- slowRegression / (slowRegression+fastRegression) fastVolCurve <- volCurve(Cl(mktData),nFastVolLookback) slowVolCurve <- volCurve(Cl(mktData),nSlowVolLookback) normalisedStrengthVol <- slowVolCurve / (slowVolCurve+fastVolCurve) fastRSICurve <-RSI(Cl(mktData),nFastRSILookback)/100 #rescale it to be in the same range as the other indicators slowRSICurve <-RSI(Cl(mktData),nSlowRSILookback)/100 #Lets plot the signals just to see what they look like dev.new() par(mfrow=c(2,2)) plot(normalisedStrengthVol,type="l") plot(normalisedStrengthRegression,type="l") plot(fastRSICurve,type="l") plot(slowRSICurve,type="l") #DataMeasure will be used to determine how similar other days are to today #It is used later on for calculate the days which are most similar to today according to MSE measure dataMeasure <- cbind(normalisedStrengthVol*normalisedStrengthVolWeight,normalisedStrengthRegression*normalisedStrengthRegression,fastRSICurve*fastRSICurveWeight,slowRSICurve*slowRSICurveWeight) colnames(dataMeasure) <- c("normalisedStrengthVol","normalisedStrengthRegression","fastRSICurve","slowRSICurve") #Finds the nearest neighbour and calculates the trade signal calculateNearestNeighbourTradeSignal <- function(dataMeasure,K,mktReturns){ findKNearestNeighbours <- function(dataMeasure,K){ calculateMSE <- function(dataMeasure){ calculateMSEInner <- function(dataA,dataB){ se <- ((as.matrix(dataA) - as.matrix(dataB))^2) apply(se,1,mean) } #Repeat the last row of dataMeasure multiple times #This is so we can compare dataMeasure[today] with all the previous dates lastMat <- last(dataMeasure) setA <- lastMat[rep(1, length(dataMeasure[,1])),] setB <- dataMeasure mse <- calculateMSEInner(setB,setA) 
mse[is.na(mse)] <- Inf #Give it a terrible MSE if it's NA colName <- c(colnames(dataMeasure),"MSE") dataMeasure <- cbind(dataMeasure,mse) colnames(dataMeasure) <- colName return (dataMeasure) } rowNum <- seq(1,length(dataMeasure[,1]),1) dataMeasureWithMse <- as.data.frame(calculateMSE(dataMeasure)) tmp <- c("rowNum", colnames(dataMeasureWithMse)) dataMeasureWithMse <- cbind(rowNum,dataMeasureWithMse) colnames(dataMeasureWithMse) <- tmp dataMeasureWithMse <- dataMeasureWithMse[order(dataMeasureWithMse[,"MSE"]),] #Starting from the 2nd item as the 1st is the current day (MSE will be 0) want to drop it return (dataMeasureWithMse[seq(2,min(K,length(dataMeasureWithMse[,1]))),]) } calculateTradeSignalFromKNeighbours <- function(mktReturns,kNearestNeighbours){ rowNums <- kNearestNeighbours[,"rowNum"] rowNums <- na.omit(rowNums) if(length(rowNums) <= 1) { return (0) } print("The kNearestNeighbours are:") print(rowNums) #So lets see what happened on the day AFTER our nearest match mktRet <- mktReturns[rowNums+1] #return (sign(sum(mktRet))) return (SharpeRatio.annualized(mktRet)) } kNearestNeighbours <- findKNearestNeighbours(dataMeasure,K) tradeSignal <- calculateTradeSignalFromKNeighbours(mktReturns,kNearestNeighbours) return(tradeSignal) } ret <- (Cl(mktData)/Op(mktData))-1 signalLog <- as.data.frame(ret) signalLog[,1] <- 0 colnames(signalLog) <- c("TradeSignal") #Loop through all the days we have data for, and calculate a signal for them using nearest neighbour for(i in seq(1,length(ret))){ print (paste("Simulating trading for day",i,"out of",length(ret),"@",100*i/length(ret),"%")) index <- seq(1,i) signal <- calculateNearestNeighbourTradeSignal(dataMeasure[index,],kNearestGroupSize,ret) signalLog[i,1] <- signal } dev.new() tradeRet <- Lag(signalLog[,1])*ret[,1] #Combine todays signal with tomorrows return (no lookforward issues) totalRet <- cbind(tradeRet,ret) colnames(totalRet) <- c("Algo",paste(marketSymbol," Long OpCl Returns")) charts.PerformanceSummary(totalRet,main=paste("K nearest trading algo for",marketSymbol),geometric=FALSE) print(SharpeRatio.annualized(tradeRet)) # Linear Regression Curves vs Bollinger Bands In my last post I showed what a linear regression curve was, this post will use it as part of a mean reverting trading strategy. The strategy is simple: • Calculate a rolling ‘average’ and a rolling ‘deviation’ • If the Close price is greater than the average+n*deviation go short (and close when you cross the mean) • If the Close price is less than the average-n*deviation go long (and close when you cross the mean) Two cases will be analysed, one strategy will use a simple moving average(SMA), the other will use the linear regression curve(LRC) for the average. The deviation function will be Standard Devation, Average True Range, and LRCDeviation (same as standard deviation but replace the mean with the LRC). Results (Lookback = 20 and Deviation Multiplier = 2: Annualized Sharpe Ratio (Rf=0%) • GSPC = 0.05257118 • Simple Moving Avg – Standard Deviation = 0.2535342 • Simple Moving Avg – Average True Range = 0.1165512 • Simple Moving Avg – LRC Deviation 0.296234 • Linear Regression Curve – Standard Deviation = 0.2818447 • Linear Regression Curve – Average True Range = 0.5824727 • Linear Regression Curve – LRC Deviation = 0.04672071 Optimisation analysis: Annoyingly the colour scale is different between the two charts, however the sharpe ratio is written in each cell. Lighter colours indicate better performance. 
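The entry/exit rules above reduce to a small state machine; here is a minimal sketch (my own illustration, assuming plain numeric vectors `price`, `avg` and `dev`; the post's full listing appears below):

```r
# Minimal sketch of the band mean-reversion rules (illustrative only).
# price, avg, dev: equal-length numeric vectors; n: deviation multiplier.
bandSignal <- function(price, avg, dev, n=2){
    pos <- numeric(length(price))                  # -1 short, 0 flat, +1 long
    for(i in 2:length(price)){
        if(is.na(avg[i]) || is.na(dev[i])) next    # warm-up period: stay flat
        pos[i] <- pos[i-1]                         # carry yesterday's position
        if(pos[i-1] == 0){
            if(price[i] > avg[i] + n*dev[i]) pos[i] <- -1   # above upper band: short
            if(price[i] < avg[i] - n*dev[i]) pos[i] <-  1   # below lower band: long
        } else if(pos[i-1] == 1 && price[i] > avg[i]){
            pos[i] <- 0                            # close long on crossing the mean
        } else if(pos[i-1] == -1 && price[i] < avg[i]){
            pos[i] <- 0                            # close short on crossing the mean
        }
    }
    return(pos)
}
```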
Over a 13-year period trading the GSPC, the LRC achieved a Sharpe of ~0.6 whereas the SMA achieved a Sharpe of ~0.3. The LRC appears superior to the SMA. I will update this post at a later point in time when my optimisation has finished running for the other strategies.

```r
library("quantmod")
library("PerformanceAnalytics")
library("zoo")
library("gplots")

#INPUTS
marketSymbol <- "^GSPC"
nLookback <- 20 #The lookback to calculate the moving average / linear regression curve / average true range / standard deviation
nDeviation <- 2

#Specify dates for downloading data, training models and running simulation
startDate = as.Date("2000-01-01") #Specify what date to get the prices from
symbolData <- new.env() #Make a new environment for quantmod to store data in

stockCleanNameFunc <- function(name){
    return(sub("^","",name,fixed=TRUE))
}

getSymbols(marketSymbol, env = symbolData, src = "yahoo", from = startDate)
cleanName <- stockCleanNameFunc(marketSymbol)
mktData <- get(cleanName,symbolData)

linearRegressionCurve <- function(data,n){
    regression <- function(dataBlock){
        fit <- lm(dataBlock~seq(1,length(dataBlock),1))
        return(last(fit$fitted.values))
    }
    return(rollapply(data,width=n,regression,align="right",by.column=FALSE,na.pad=TRUE))
}

linearRegressionCurveStandardDeviation <- function(data,n){
    deviation <- function(dataBlock){
        fit <- lm(dataBlock~seq(1,length(dataBlock),1))
        quasiMean <- (last(fit$fitted.values))
        quasiMean <- rep(quasiMean,length(dataBlock))
        stDev <- sqrt((1/length(dataBlock)) * sum((dataBlock - quasiMean)^2))
        return(stDev)
    }
    return(rollapply(data,width=n,deviation,align="right",by.column=FALSE,na.pad=TRUE))
}

reduceLongTradeEntriesToTradOpenOrClosedSignal <- function(trades){
    #Takes something like
    #000011110000-1-1000011 (1 = go long, -1 = go short)
    #and turns it into
    #00001111111100000011
    out <- trades #copy the datastructure over
    currentPos <- 0
    for(i in 1:length(out[,1])){
        if((currentPos == 0) & (trades[i,1]==1)){
            currentPos <- 1
            out[i,1] <- currentPos
            next
        }
        if((currentPos == 1) & (trades[i,1]==-1)){
            currentPos <- 0
            out[i,1] <- currentPos
            next
        }
        out[i,1] <- currentPos
    }
    return(out)
}

reduceShortTradeEntriesToTradOpenOrClosedSignal <- function(trades){
    return(-1*reduceLongTradeEntriesToTradOpenOrClosedSignal(-1*trades))
}

generateTradingReturns <- function(mktPrices, nLookback, nDeviation, avgFunction, deviationFunction, title, showGraph=TRUE){
    quasiMean <- avgFunction(mktPrices,n=nLookback)
    quasiDeviation <- deviationFunction(mktPrices,n=nLookback)
    colnames(quasiMean) <- "QuasiMean"
    colnames(quasiDeviation) <- "QuasiDeviation"
    price <- Cl(mktPrices)
    upperThreshold = quasiMean + nDeviation*quasiDeviation
    lowerThreshold = quasiMean - nDeviation*quasiDeviation
    aboveUpperBand <- price > upperThreshold
    belowLowerBand <- price < lowerThreshold
    aboveMAvg <- price > quasiMean
    belowMAvg <- price < quasiMean
    #The remainder of this function (turning the band crossings into open/closed
    #positions and returns) plus the strategy wrapper functions
    #(strategySMAandSTDEV, strategySMAandATR, strategySMAandLRCDev,
    #strategyLRCandSTDEV, strategyLRCandATR, strategyLRCandLRCDev) were lost
    #from this copy of the post and are not reconstructed here.
}

colorFunc <- function(x){
    if(x > 0){
        colorFunc <- rgb(0,(255*x/4)/255, 0/255, 1)
    } else {
        colorFunc <- rgb((255*(-1*x)/4)/255, 0, 0/255, 1)
    }
}

optimiseTradingStrat <- function(mktData,lookbackStart,lookbackEnd,lookbackStep,deviationStart,deviationEnd,deviationStep,strategy,title){
    lookbackRange <- seq(lookbackStart,lookbackEnd,lookbackStep)
    deviationRange <- seq(deviationStart,deviationEnd,deviationStep)
    combinations <- length(lookbackRange)*length(deviationRange)
    combLookback <- rep(lookbackRange,each=combinations/length(lookbackRange))
    combDeviation <- rep(deviationRange,combinations/length(deviationRange))
    optimisationMatrix <- t(rbind(t(combLookback),t(combDeviation),rep(NA,combinations),rep(NA,combinations),rep(NA,combinations)))
    colnames(optimisationMatrix) <- c("Lookback","Deviation","SharpeRatio","CumulativeReturns","MaxDrawDown")
    for(i in 1:length(optimisationMatrix[,1])){
        print(paste("On run",i,"out of",length(optimisationMatrix[,1]),"nLookback=",optimisationMatrix[i,"Lookback"],"nDeviation=",optimisationMatrix[i,"Deviation"]))
        runReturns <- strategy(mktData,optimisationMatrix[i,"Lookback"],optimisationMatrix[i,"Deviation"])
        optimisationMatrix[i,"SharpeRatio"] <- SharpeRatio.annualized(runReturns)
        optimisationMatrix[i,"CumulativeReturns"] <- sum(runReturns)
        optimisationMatrix[i,"MaxDrawDown"] <- maxDrawdown(runReturns,geometric=FALSE)
        print(optimisationMatrix)
    }
    print(optimisationMatrix)
    dev.new()
    z <- matrix(optimisationMatrix[,"SharpeRatio"],nrow=length(lookbackRange),ncol=length(deviationRange),byrow=TRUE)
    colors <- colorFunc(optimisationMatrix[,"SharpeRatio"])
    rownames(z) <- lookbackRange
    colnames(z) <- deviationRange
    heatmap.2(z, key=TRUE,trace="none",cellnote=round(z,digits=2),Rowv=NA, Colv=NA, scale="column", margins=c(5,10),xlab="Deviation",ylab="Lookback",main=paste("Sharpe Ratio for Strategy",title))
}

if(FALSE){
    dev.new()
    plot(Cl(mktData),type="l",main=paste(marketSymbol, "close prices"))
    lines(SMA(Cl(mktData),n=50),col="red",type="l")
    lines(linearRegressionCurve(Cl(mktData),n=50),col="blue",type="l")
    legend('bottomright',c("Close",paste("Simple Moving Average Lookback=50"),paste("Linear Regression Curve Lookback=50")),lty=1, col=c('black', 'red', 'blue'), bty='n', cex=.75)
}

nLookbackStart <- 20
nLookbackEnd <- 200
nLookbackStep <- 20
nDeviationStart <- 1
nDeviationEnd <- 2.5
nDeviationStep <- 0.1
#optimiseTradingStrat(mktData,nLookbackStart,nLookbackEnd,nLookbackStep,nDeviationStart,nDeviationEnd,nDeviationStep,strategySMAandSTDEV,"AvgFunc=SMA and DeviationFunc=STDEV")
#optimiseTradingStrat(mktData,nLookbackStart,nLookbackEnd,nLookbackStep,nDeviationStart,nDeviationEnd,nDeviationStep,strategySMAandATR,"AvgFunc=SMA and DeviationFunc=ATR")
#optimiseTradingStrat(mktData,nLookbackStart,nLookbackEnd,nLookbackStep,nDeviationStart,nDeviationEnd,nDeviationStep,strategySMAandLRCDev,"AvgFunc=SMA and DeviationFunc=LRCDev")
#optimiseTradingStrat(mktData,nLookbackStart,nLookbackEnd,nLookbackStep,nDeviationStart,nDeviationEnd,nDeviationStep,strategyLRCandSTDEV,"AvgFunc=LRC and DeviationFunc=STDEV")
#optimiseTradingStrat(mktData,nLookbackStart,nLookbackEnd,nLookbackStep,nDeviationStart,nDeviationEnd,nDeviationStep,strategyLRCandATR,"AvgFunc=LRC and DeviationFunc=ATR")
#optimiseTradingStrat(mktData,nLookbackStart,nLookbackEnd,nLookbackStep,nDeviationStart,nDeviationEnd,nDeviationStep,strategyLRCandLRCDev,"AvgFunc=LRC and DeviationFunc=LRCDev")
```

# Genetic Algorithm in R – Trend Following

This post is going to explain what genetic algorithms are, and it will also present R code for performing genetic optimisation.

A genetic algo consists of three things:

1. A gene
2. A fitness function
3. Methods to breed/mate genes

## The Gene

The gene is typically a binary number; each bit in the binary number controls various parts of your trading strategy. The gene below contains 4 sub-genes: a stock gene to select what stock to trade, a strategy gene to select what strategy to use, paramA to set a parameter used in your strategy, and paramB to set another parameter used in your strategy.
Gene = [StockGene, StrategyGene, ParamA, ParamB]

Stock Gene
• 00 = Google
• 01 = Facebook
• 10 = IBM
• 11 = LinkedIn

Strategy Gene
• 0 = Simple Moving Average
• 1 = Exponential Moving Average

ParamA Gene – Moving Average 1 Lookback
• 00 = 10
• 01 = 20
• 10 = 30
• 11 = 40

ParamB Gene – Moving Average 2 Lookback
• 00 = 15
• 01 = 25
• 10 = 35
• 11 = 45

So Gene = [01,1,00,11] would be stock=Facebook, strategy=Exponential Moving Average, paramA=10, paramB=45.

The strategy rules are simple: if the moving average(length=paramA) > moving average(length=paramB) then go long, and vice versa.

## The Fitness Function

A gene is quantified as a good or bad gene using a fitness function. The success of a genetic trading strategy depends heavily upon your choice of fitness function and whether it makes sense with the strategies you intend to use. You will trade each of the strategies outlined by your active genes and then rank them by their fitness. A good starting point would be to use the Sharpe ratio as the fitness function.

You need to be careful that you apply the fitness function to statistically significant data. For example, if you used a mean reverting strategy it might trade once a month (or whatever your retraining window is), so your fitness is determined by 1 or 2 datapoints! This will result in poor genetic optimisation (in my code I've commented out a mean reversion strategy, test it for yourself). Typically what happens is that your Sharpe ratio from 2 datapoints is very high merely down to luck. You then mark this as a good gene and trade it the next month with terrible results.

## Breeding Genes

With a genetic algo you need to breed genes; for the rest of this post I'll assume you are breeding once a month. During breeding you take all of the genes in your gene pool and rank them according to the fitness function. You then select the top N genes and breed them (discard all the other genes, they're of no use). Breeding consists of two parts, both sketched in the snippet below:

Hybridisation – Take a gene and cut a chunk out of it (you can use whatever random number generator you want to determine the cut locations) and swap this chunk with the corresponding chunk from another gene.

E.g. Old genes: 00110010 and 11100110 (bits 4–6 are the randomly selected chunk)
New genes: 00100110 and 11110010

You do this for every possible pair of genes in your top N list.

Mutation – After hybridisation, go through all your genes and randomly flip the bits with a fixed probability. The mutation prevents your strategy from getting locked into an ever-shrinking gene pool.
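A minimal sketch of the encoding and breeding steps just described, assuming a gene is a plain 0/1 vector (the helper names are illustrative, not the post's implementation):

```r
# Decode the 7-bit gene [stock(2), strategy(1), paramA(2), paramB(2)] from the tables above
decodeGene <- function(gene){
    bitsToInt <- function(bits) sum(bits * 2^(rev(seq_along(bits)) - 1))
    list(stock    = c("Google","Facebook","IBM","LinkedIn")[bitsToInt(gene[1:2]) + 1],
         strategy = c("SMA","EMA")[gene[3] + 1],
         paramA   = c(10,20,30,40)[bitsToInt(gene[4:5]) + 1],
         paramB   = c(15,25,35,45)[bitsToInt(gene[6:7]) + 1])
}

# Hybridisation: swap a random contiguous chunk between two genes
crossover <- function(geneA, geneB){
    cut <- sort(sample(seq_along(geneA), 2))
    idx <- cut[1]:cut[2]
    childA <- geneA; childA[idx] <- geneB[idx]
    childB <- geneB; childB[idx] <- geneA[idx]
    list(childA, childB)
}

# Mutation: flip each bit independently with a fixed probability
mutate <- function(gene, prob=0.05){
    flip <- runif(length(gene)) < prob
    gene[flip] <- 1 - gene[flip]
    return(gene)
}

decodeGene(c(0,1, 1, 0,0, 1,1))  # Facebook, EMA, paramA=10, paramB=45
```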
For a more detailed explanation with diagrams please see: http://blog.equametrics.com/ (scroll down to Genetic Algorithms and its Application in Trading).

Annualized Sharpe Ratio (Rf=0%): 1.15

On to the code:

```r
library("quantmod")
library("PerformanceAnalytics")
library("zoo")

#INPUTS
topNToSelect <- 5 #Top n genes are selected during the mating, these will be mated with each other
mutationProb <- 0.05 #A mutation can occur during the mating, this is the probability of a mutation for individual chromes
symbolLst <- c("^GDAXI","^FTSE","^GSPC","^NDX","AAPL","ARMH","JPM","GS")
#symbolLst <- c("ADN.L","ADM.L","AGK.L","AMEC.L","AAL.L","ANTO.L","ARM.L","ASHM.L","ABF.L","AZN.L","AV.L","BA.L","BARC.L","BG.L","BLT.L","BP.L","BATS.L","BLND.L","BSY.L","BNZL.L","BRBY.L","CSCG.L","CPI.L","CCL.L","CNA.L","CPG.L","CRH.L","CRDA.L","DGE.L","ENRC.L","EXPN.L","FRES.L","GFS.L","GKN.L","GSK.L","HMSO.L","HL.L","HSBA.L","IAP.L","IMI.L","IMT.L","IHG.L","IAG.L","IPR.L","ITRK.L","ITV.L","JMAT.L","KAZ.L","KGF.L","LAND.L","LGEN.L","LLOY.L","EMG.L","MKS.L","MGGT.L","MRW.L","NG.L","NXT.L","OML.L","PSON.L","PFC.L","PRU.L","RRS.L","RB.L","REL.L","RSL.L","REX.L","RIO.L","RR.L","RBS.L","RDSA.L","RSA.L","SAB.L","SGE.L","SBRY.L","SDR.L","SRP.L","SVT.L","SHP.L","SN.L","SMIN.L","SSE.L","STAN.L","SL.L","TATE.L","TSCO.L","TLW.L","ULVR.L","UU.L","VED.L","VOD.L","WEIR.L","WTB.L","WOS.L","WPP.L","XTA.L")
#END INPUTS

#Stock gene
stockGeneLength <- 3 #8 stocks
#stockGeneLength <- 6 #Allows 2^6 stocks (64)
#Strategy gene
strateyGeneLength <- 2
#Parameter lookback gene
parameterLookbackGeneLength <- 6
#Calculate the length of our chromozone, chromozone=[gene1,gene2,gene3...]
chromozoneLength <- stockGeneLength+strateyGeneLength+parameterLookbackGeneLength

#TradingStrategies
signalMACross <- function(mktdata, paramA, paramB, avgFunc=SMA){
    signal = avgFunc(mktdata,n=paramA)/avgFunc(mktdata,n=paramB)
    signal[is.na(signal)] <- 0
    signal <- (signal>1)*1 #converts bools into ints
    signal[signal==0] <- (-1)
    return(signal)
}

signalBollingerReversion <- function(mktdata, paramA, paramB){
    avg <- SMA(mktdata,paramB)
    std <- 1*rollapply(mktdata, paramB,sd,align="right")
    shortSignal <- (mktdata > avg+std)*-1
    longSignal <- (mktdata < avg-std)*1
    signal <- shortSignal+longSignal
    signal[is.na(signal)] <- 0
    return(signal)
}

signalRSIOverBoughtOrSold <- function(mktdata, paramA, paramB){
    upperLim <- min(60*(1+paramB/100),90)
    lowerLim <- max(40*(1-paramB/100),10)
    rsisignal <- RSI(mktdata,paramB)
    signal <- ((rsisignal>upperLim)*-1)+((rsisignal<lowerLim)*1) #short overbought, long oversold (reconstructed; the source was truncated here)
    return(signal)
}

#The gene decoding / backtesting helpers and most of the fitness function were
#lost from this copy of the post; the surviving fitness fragments were:
#tradingFitness <- ...((tradingRet>0)*1)/length(tradingRet) #% of trades profitable
#tradingFitness <- -1*maxDrawdown(tradingRet)
#return(tradingFitness)

#This function performs the mating between two chromozones
genetricMating <- function(chromozoneFitness,useTopNPerformers,mutationProb){
    selectTopNPerformers <- function(chromozoneFitness,useTopNPerformers){
        #Ranks the chromozones by their fitness and select the topNPerformers
        orderedChromozones <- order(chromozoneFitness[,"Fitness"],decreasing=TRUE)
        orderedChromozones <- chromozoneFitness[orderedChromozones,]
        ##Often there are lots of overlapping strategies with the same fitness
        ##We should filter by unique fitness to stop the overweighting of lucky high fitness
        orderedChromozones <- subset(orderedChromozones, !duplicated(Fitness))
        print(orderedChromozones)
        return(orderedChromozones[seq(1,min(nrow(orderedChromozones),useTopNPerformers)),])
    }
    hybridize <- function(topChromozones,mutationProb){
        crossoverFunc <- function(chromeA,chromeB){
            chromeA <- chromeA[,!colnames(chromeA) %in% c("Fitness")]
            chromeB <- chromeB[,!colnames(chromeB) %in% c("Fitness")]
            #Takes a number of chromes from B and swaps them in to A
            nCross <- runif(min=0,max=ncol(chromeA)-1,1) #the number of individual chromes to swap
            swapStartLocation = round(runif(min=1,max=ncol(chromeA),1))
            swapLocations <- seq(swapStartLocation,swapStartLocation+nCross)
            #Can run over the end of our vector, need to wrap around back to start
            swapLocations <- swapLocations %% ncol(chromeA)+1 #Performs the wrapping
            chromeA[1,swapLocations] <- chromeB[1,swapLocations] #Performs the swap
            return(chromeA)
        }
        mutateFunc <- function(chrome,mutationProb){
            #Truncated in this copy of the post; the surviving fragment began:
            #return((round(runif(min=0,max=1,ncol(chrome)) ...
        }
        #The rest of hybridize, the remainder of genetricMating, and the
        #backtest/mating loop that produced the Sharpe ratio above were lost
        #from this copy of the post.
    }
}
```

# Is 'risk' rewarded in the equity markets?

This post looks to examine if the well known phrase "the higher the risk the higher the reward" applies to the FTSE 100 constituents. Numerous models have tried to capture risk reward metrics; the best known is the Capital Asset Pricing Model (CAPM). CAPM tries to quantify the return on an investment that an investor must receive in order to be adequately compensated for the risk they've taken.

The code below calculates the rolling standard deviation of returns, 'the risk', for the FTSE 100 constituents. It then groups stocks into quartiles by this risk metric; the groups are updated daily. Quartile 1 is the lowest volatility stocks, quartile 4 the highest. An equally weighted (equal monetary amount) index is created for each quartile. According to the above theory Q4 (high vol) should produce the highest cumulative returns.

When using a 1 month lookback for the stdev calculation there is a clear winning index, the lowest vol index (black). Interestingly the 2nd best index is the highest vol index (blue). The graph above is calculated using arithmetic returns.

When using a longer lookback of 250 days, a trading year, the highest vol index is the best performer and the lowest vol index the worst performer.

For a short lookback (30 days) the low vol index was the best performer.

For a long lookback (250 days) the high vol index was the best performer.

One possible explanation (untested) is that for a short lookback the volatility risk metric is more sensitive to moves in the stock, and hence on a news announcement / earnings the stock has a higher likelihood of moving from its current index into a higher vol index. Perhaps it isn't unreasonable to assume that the high vol index contains only the stocks that have had a recent announcement / temporary volatility and are in a period of consolidation or mean reversion. Or to put it another way: for short lookbacks the high vol index doesn't contain the stocks that are permanently high vol, whereas for long lookbacks any temporary vol deviations are smoothed out.

Below are the same charts as above but for geometric returns.

On to the code:

 library("quantmod") library("PerformanceAnalytics") library("zoo")   #Script parameters symbolLst <- c("ADN.L","ADM.L","AGK.L","AMEC.L","AAL.L","ANTO.L","ARM.L","ASHM.L","ABF.L","AZN.L","AV.L","BA.L","BARC.L","BG.L","BLT.L","BP.L","BATS.L","BLND.L","BSY.L","BNZL.L","BRBY.L","CSCG.L","CPI.L","CCL.L","CNA.L","CPG.L","CRH.L","CRDA.L","DGE.L","ENRC.L","EXPN.L","FRES.L","GFS.L","GKN.L","GSK.L","HMSO.L","HL.L","HSBA.L","IAP.L","IMI.L","IMT.L","IHG.L","IAG.L","IPR.L","ITRK.L","ITV.L","JMAT.L","KAZ.L","KGF.L","LAND.L","LGEN.L","LLOY.L","EMG.L","MKS.L","MGGT.L","MRW.L","NG.L","NXT.L","OML.L","PSON.L","PFC.L","PRU.L","RRS.L","RB.L","REL.L","RSL.L","REX.L","RIO.L","RR.L","RBS.L","RDSA.L","RSA.L","SAB.L","SGE.L","SBRY.L","SDR.L","SRP.L","SVT.L","SHP.L","SN.L","SMIN.L","SSE.L","STAN.L","SL.L","TATE.L","TSCO.L","TLW.L","ULVR.L","UU.L","VED.L","VOD.L","WEIR.L","WTB.L","WOS.L","WPP.L","XTA.L") #Specify dates for downloading data startDate = as.Date("2000-01-01") #Specify what date to get the prices from symbolData <- new.env() #Make a new environment for quantmod to store data in clClRet <- new.env() downloadedSymbols <- list() for(i in 1:length(symbolLst)){ #Download one stock at a time print(paste(i,"/",length(symbolLst),"Downloading",symbolLst[i])) tryCatch({     getSymbols(symbolLst[i], env = symbolData, src = "yahoo", from = startDate) cleanName <- sub("^","",symbolLst[i],fixed=TRUE) mktData <- get(cleanName,symbolData) print(paste("-Calculating close close returns for:",cleanName)) ret <-(Cl(mktData)/Lag(Cl(mktData)))-1 if(max(abs(ret),na.rm=TRUE)>0.5){ print("-There is a abs(return) > 50% the data is odd lets not use this stock") next; } downloadedSymbols <- c(downloadedSymbols,symbolLst[i])   assign(cleanName,ret,envir = clClRet) }, error = function(e) {     print(paste("Couldn't download: ", symbolLst[i])) })     }     #Combine all the returns into a zoo object (joins the returns by date) #Not a big fan of this loop, think it's suboptimal zooClClRet <- zoo() for(i in 1:length(downloadedSymbols)){ cleanName <- sub("^","",downloadedSymbols[i],fixed=TRUE) print(paste("Combining the close close returns to the zoo:",cleanName)) if(length(zooClClRet)==0){ zooClClRet <- as.zoo(get(cleanName,clClRet)) } else { zooClClRet <- merge(zooClClRet,as.zoo(get(cleanName,clClRet))) } } print(head(zooClClRet))     #This will take inzoo or data frame #And convert each row into quantiles #Quantile 1 = 0-0.25 #Quantile 2 = 0.25-0.5 etc... 
quasiQuantileFunction <- function(dataIn){ quantileFun <- function(rowIn){ quant <- quantile(rowIn,na.rm=TRUE) #print(quant) a <- (rowIn<=quant[5]) b <- (rowIn<=quant[4]) c <- (rowIn<=quant[3]) d <- (rowIn<=quant[2]) rowIn[a] <- 4 rowIn[b] <- 3 rowIn[c] <- 2 rowIn[d] <- 1 return(rowIn) }   return (apply(dataIn,2,quantileFun)) }   avgReturnPerQuantile <- function(returnsData,quantileData){ q1index <- (clClQuantiles==1) q2index <- (clClQuantiles==2) q3index <- (clClQuantiles==3) q4index <- (clClQuantiles==4)   q1dat <- returnsData q1dat[!q1index] <- NaN q2dat <- returnsData q2dat[!q2index] <- NaN q3dat <- returnsData q3dat[!q3index] <- NaN q4dat <- returnsData q4dat[!q4index] <- NaN   avgFunc <- function(x) { #apply(x,1,median,na.rm=TRUE) #median is more resistant to outliers apply(x,1,mean,na.rm=TRUE) } res <- returnsData[,1:4] #just to maintain the time series (there must be a better way) res[,1] <- avgFunc(q1dat) res[,2] <- avgFunc(q2dat) res[,3] <- avgFunc(q3dat) res[,4] <- avgFunc(q4dat)   colnames(res) <- c("Q1","Q2","Q3","Q4") return(res) }   nLookback <- 250 #~1year trading calendar clClVol <- rollapply(zooClClRet,nLookback,sd,na.rm=TRUE) clClQuantiles <- quasiQuantileFunction(clClVol) returnPerVolQuantile <- avgReturnPerQuantile(zooClClRet,clClQuantiles) colnames(returnPerVolQuantile) <- c("Q1 min vol","Q2","Q3","Q4 max vol") returnPerVolQuantile[is.nan(returnPerVolQuantile)]<-0 #Assume if there is no return data that it's return is 0 #returnPerVolQuantile[returnPerVolQuantile>0.2] <- 0 #I was having data issues leading to days with 150% returns! This filters them out cumulativeReturnsByQuantile <- apply(returnPerVolQuantile,2,cumsum) dev.new() charts.PerformanceSummary(returnPerVolQuantile,main=paste("Arithmetic Cumulative Returns per Vol Quantile - Lookback=",nLookback),geometric=FALSE) print(table.Stats(returnPerVolQuantile)) cat("Sharpe Ratio") print(SharpeRatio.annualized(returnPerVolQuantile))   dev.new() par(oma=c(0,0,2,0)) par(mfrow=c(3,3))   for(i in seq(2012,2004,-1)){ print(as.Date(paste(i,"-01-01",sep=""))) print(as.Date(paste(i+1,"-01-01",sep=""))) windowedData <- window(as.zoo(returnPerVolQuantile),start=as.Date(paste(i,"-01-01",sep="")),end=as.Date(paste(i+1,"-01-01",sep=""))) chart.CumReturns(windowedData,main=paste("Year",i,"to",i+1),geometric=FALSE) } title(main=paste("Arithmetic Cumulative Returns per Vol Quantile - Lookback=",nLookback),outer=T)   dev.new() charts.PerformanceSummary(returnPerVolQuantile,main=paste("Geometric Cumulative Returns per Vol Quantile - Lookback=",nLookback),geometric=TRUE) print(table.Stats(returnPerVolQuantile)) cat("Sharpe Ratio") print(SharpeRatio.annualized(returnPerVolQuantile))   dev.new() par(oma=c(0,0,2,0)) par(mfrow=c(3,3))   for(i in seq(2012,2004,-1)){ print(as.Date(paste(i,"-01-01",sep=""))) print(as.Date(paste(i+1,"-01-01",sep=""))) windowedData <- window(as.zoo(returnPerVolQuantile),start=as.Date(paste(i,"-01-01",sep="")),end=as.Date(paste(i+1,"-01-01",sep=""))) chart.CumReturns(windowedData,main=paste("Year",i,"to",i+1),geometric=TRUE) } title(main=paste("Geometric Cumulative Returns per Vol Quantile - Lookback=",nLookback),outer=T)

# Analysis of returns after n consecutive up/down days – Predicting the Sign of Open to Close Returns

This weekend I was spammed with a "binary option trading system with 90% accuracy". The advert caught my curiosity; in essence it detailed a method that was a variation of the well known roulette playing strategy that mathematically guarantees a profit (assuming infinite money and no table limit).

Roulette Strategy

If you double your bet size after a loss and repeat the same bet, you are guaranteed a profit; your next win will cover all the preceding losses.

e.g. Bet $1 on red and lose. Bet $2 on red and win: you get $4 back ($2 is your stake, $1 covers the loss from your first bet), giving $1 profit.

Exponential growth of lot size, no thanks
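That exponential growth is easy to quantify: after n consecutive losses the martingale's next stake is 2^n and the total amount committed is 2^(n+1) - 1, all to win back the original $1. A quick illustrative check in R (my own sketch, not from the original post):

```r
# Stake and cumulative outlay after n consecutive losing $1 martingale bets
n      <- 0:9
stake  <- 2^n           # next bet size after n losses
atRisk <- 2^(n+1) - 1   # total staked so far including the next bet
print(data.frame(consecutiveLosses=n, nextStake=stake, totalCommitted=atRisk))
# After 9 straight losses (the S&P's worst streak, discussed below) the next
# bet is $512 and $1023 is committed, all to win back $1.
```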

Binary options are analogous to betting on red; they offer virtually fixed odds for up or down directional bets. Naturally I want to know what is the maximum number of consecutive up or down days in the market, i.e. how much pain would I have to suffer with this strategy?

Occurrences of n Consecutive Up or Down Days

Analysing the last 12 years of returns data for the S&P 500, the maximum consecutive number of up days is 9 (which occurred in 2004-2005), and the maximum consecutive number of down days is 8 which, you guessed it, occurred in 2008-2009.

So at most 9 days of pain should we always bet against the direction of the market.

Instead of enduring the 9 days of drawdown, it is interesting to see what the consecutive number of up/down days says about the probability that the next day is an up day. The maximum likelihood probability of an up day is count(up days)/count(up and down days). Naturally we will condition this data on the consecutive number of up/down days.

Consecutive Up or Down Days vs Maximum Likelihood Probability the next day is up

This data is fairly nice looking; for example in 2012-2013 there is a clear relationship: the more down days in a row, the higher the likelihood of an up day. 6 down days implied the probability of an up day is 80%! I must raise a note of caution here: 6 down days in a row was seen fewer than 5 times in the year, hence the probability estimate is based on a handful of points and is not statistically significant. Perhaps looking at 5 years of returns might be better.

I appreciate that most people don't trade binary options; since we can trade the index/stock outright, it is interesting to see what the consecutive number of up/down days says about future Open to Close Returns. The image below regresses Open to Close Returns (time t) against Consecutive Up/Down Days (time t-1).

Consecutive Up or Down Days vs Next Day Open to Close Returns

A very disappointing chart; it doesn't really show much relationship between returns and consecutive up/down days. For some of the data points the up move is more probable than a down move, but the magnitude of the up moves is significantly smaller than that of the down moves. These charts vary greatly by asset class and by security; single stocks have much more favorable plots.

Prediction Accuracy

The plot below shows the accuracy of using this maximum likelihood estimate approach. The model takes the last 250 days of returns and calculates the probability of an up move given that the current day has seen n consecutive days of trading in one direction. If the probability of an up move is over a certain threshold go long; if it's below a certain threshold go short.

Heatmap of Accuracy vs Model Parameters

The histogram on the heatmap shows that approximately half of the parameter combinations can predict the direction with accuracy greater than 50%.

The beauty of this approach is that it's simple, it can be applied to any asset class, and most importantly it can be applied across different time frames.

Onto the code:

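The original listing is not reproduced here; below is a minimal sketch of the maximum likelihood approach described above (my own illustration, assuming `ret` is a plain numeric vector of daily returns; helper names are illustrative):

```r
# Count signed consecutive up/down days: +n after n up days, -n after n down days
consecutiveDays <- function(ret){
    streak <- numeric(length(ret))
    for(i in 2:length(ret)){
        if(ret[i] > 0) streak[i] <- max(streak[i-1], 0) + 1
        if(ret[i] < 0) streak[i] <- min(streak[i-1], 0) - 1
    }
    return(streak)
}

# Maximum likelihood P(up tomorrow | streak today) = count(up)/count(up and down)
probUpGivenStreak <- function(ret){
    streak <- consecutiveDays(ret)
    nextUp <- c(ret[-1] > 0, NA)    # was the following day an up day?
    tapply(nextUp, streak, mean, na.rm=TRUE)
}

set.seed(1)
ret <- rnorm(250, 0, 0.01)          # stand-in for real market returns
print(probUpGivenStreak(ret))
```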

# Statistical Arbitrage – Trading a cointegrated pair

In my last post (http://gekkoquant.com/2012/12/17/statistical-arbitrage-testing-for-cointegration-augmented-dicky-fuller/) I demonstrated cointegration, a mathematical test to identify stationary pairs where the spread, by definition, must be mean reverting.

In this post I intend to show how to trade a cointegrated pair, and I will continue analysing Royal Dutch Shell A vs B shares (we know they're cointegrated from my last post). Trading a cointegrated pair is straightforward: we know the mean and variance of the spread, and we know that those values are constant. The entry point for a stat arb is simply to look for a large deviation away from the mean.

A basic strategy is:

• If spread(t) >= Mean Spread + 2*Standard Deviation then go Short
• If spread(t) <= Mean Spread - 2*Standard Deviation then go Long

There are many variations of this strategy.

Moving average / moving standard deviation (this will be explored later):

• If spread(t) >= nDay Moving Average + 2*nDay Rolling Standard Deviation then go Short
• If spread(t) <= nDay Moving Average - 2*nDay Rolling Standard Deviation then go Long

Wait for mean reversion:

• The advantage is that we only trade when we see the mean reversion, whereas the other models are hoping for mean reversion on a large deviation from the mean (is the spread blowing up?)

All the above strategies look to exit their position when the spread has reverted to the mean. Personally I wouldn't trade any of the above as they don't specify an exit strategy for adverse trades. I.e. if there is a 6-standard-deviation move in the spread, is this an amazing trade opportunity? Or, more likely, did the spread just blow up?

This post will look at the moving average and rolling standard deviation model for Royal Dutch Shell A vs B shares; it will use the hedge ratio found in the last post.

Annualized Sharpe Ratio (Rf=0%):

• Shell A & B Stat Arb: 0.8224211
• Shell A: 0.166307

The stat arb has a superior Sharpe ratio over simply investing in Shell A. At first glance the Sharpe ratio of 0.8 looks disappointing; however, since the strategy spends most of its time out of the market, it will have a low annualized Sharpe ratio. To increase the Sharpe ratio one can look at trading higher frequencies or holding a portfolio of pairs so that more time is spent in the market.

Onto the code:

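The original listing is not reproduced here; below is a minimal sketch of the moving average / rolling standard deviation variant described above (my own illustration, assuming `priceA` and `priceB` are aligned numeric close-price vectors and `hedgeRatio` comes from the cointegration regression in the previous post):

```r
library("zoo")

# Band entry/exit on the spread of a cointegrated pair (illustrative only).
statArbSignal <- function(priceA, priceB, hedgeRatio, nLookback=90, nDev=2){
    spread <- priceA - hedgeRatio * priceB
    mavg <- as.numeric(rollapply(zoo(spread), nLookback, mean, align="right", fill=NA))
    msd  <- as.numeric(rollapply(zoo(spread), nLookback, sd,   align="right", fill=NA))
    signal <- numeric(length(spread))   # +1 long the spread, -1 short, 0 flat
    for(i in 2:length(spread)){
        if(is.na(mavg[i]) || is.na(msd[i])) next   # warm-up period: stay flat
        signal[i] <- signal[i-1]                   # hold until the spread reverts
        if(signal[i-1] == 0){
            if(spread[i] >= mavg[i] + nDev*msd[i]) signal[i] <- -1
            if(spread[i] <= mavg[i] - nDev*msd[i]) signal[i] <-  1
        } else if((signal[i-1] ==  1 && spread[i] >= mavg[i]) ||
                  (signal[i-1] == -1 && spread[i] <= mavg[i])){
            signal[i] <- 0                         # exit when the spread crosses its mean
        }
    }
    return(signal)
}
```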