High Probability Credit Spreads – Using Linear Regression Curves

I came across this video series over the weekend, in which an option trader discusses how he trades credit spreads (mainly looking for mean reversion). Most of you will be familiar with Bollinger bands as a common mean reversion strategy: essentially you take the moving average and moving standard deviation of the stock, then plot onto your chart the moving average and an upper and lower band (moving average +/- n*standard deviations).

It is assumed that the price will revert to the moving average, hence any price move out to the bands is a good entry point. A common problem with this strategy is that the moving average is a LAGGING indicator and is often very slow to track the price moves if a long lookback period is used.
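As a rough illustration (my own sketch, not from the video), the band construction looks like this in R, assuming a close-price series closePrices and the TTR package (loaded with quantmod):

?View Code RSPLUS
library("TTR")
 
#Minimal sketch of Bollinger-style bands; 'closePrices' is an assumed numeric
#vector (or xts close series), n and k are assumed parameter choices
n <- 20  #lookback period for the moving average / standard deviation
k <- 2   #number of standard deviations for the bands
middleBand <- runMean(closePrices, n)    #moving average
bandWidth  <- k * runSD(closePrices, n)  #k * moving standard deviation
upperBand  <- middleBand + bandWidth
lowerBand  <- middleBand - bandWidth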

Video 1 presents a technique called “linear regression curves” about 10 minutes in. Linear regression curves aim to solve the problem of the moving average being slow to track the price.

Linear Regression Curve vs Simple Moving Average

[Figure: linear regression curve closely tracking the close price vs. a simple moving average]

 

See how tightly the blue linear regression curve follows the close price; it is significantly quicker to identify turns in the market, whereas the simple moving average has considerable tracking error. The mean squared error (MSE) could be used to quantify the tightness.
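For example (a sketch only, assuming the indicator and close series are already aligned numeric vectors):

?View Code RSPLUS
#Rough tracking-error comparison; 'lrCurve', 'sma' and 'closePrice' are assumed
#to be aligned numeric vectors of the same length
mseLR  <- mean((lrCurve - closePrice)^2, na.rm = TRUE)
mseSMA <- mean((sma - closePrice)^2, na.rm = TRUE)
#The tighter-tracking indicator has the smaller MSE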

How to calculate the linear regression curve:

[Figure: construction of the linear regression curve from rolling best-fit lines]

In this example you have 100 closing prices for your given stock. Bar 1 is the oldest price, bar 100 is the most recent price. We will use a 20-day regression.

1. Take prices 1-20 and draw the line of best fit through them
2. At the end of your best fit line (so bar 20), draw a little circle
3. Take prices 2-21 and draw the line of best fit through them
4. At the end of your best fit line (so bar 21) draw a little circle
5. Repeat up to bar 100
6. Join all of your little circles, this is your ‘linear regression curve’
So in a nutshell you just join the ends of a rolling linear regression.
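Here is a minimal R sketch of that construction (my own illustration, not code from the video), assuming closePrices is a numeric vector with at least 20 prices:

?View Code RSPLUS
library("zoo")
 
#Rolling linear regression curve: fit a line of best fit over each
#nLookback-bar window and keep the fitted value at the final bar
linearRegressionCurve <- function(closePrices, nLookback = 20) {
  rollapply(zoo(closePrices), nLookback, function(windowPrices) {
    barIndex <- 1:nLookback
    fit <- lm(as.numeric(windowPrices) ~ barIndex) #line of best fit through the window
    tail(fitted(fit), 1)                           #the 'little circle' at the end of the window
  }, align = "right")
}
 
#Example usage (assumed data): lrCurve <- linearRegressionCurve(as.numeric(Cl(mktdata)), 20)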



Is ‘risk’ rewarded in the equity markets?

This post looks to examine whether the well-known phrase “the higher the risk the higher the reward” applies to the FTSE 100 constituents. Numerous models have tried to capture risk/reward metrics; the best known is the Capital Asset Pricing Model (CAPM). CAPM tries to quantify the return on an investment an investor must receive in order to be adequately compensated for the risk they’ve taken.

The code below calculates the rolling standard deviation of returns, ‘the risk’, for the FTSE 100 constituents. It then groups stocks into quartiles by this risk metric; the groups are updated daily. Quartile 1 contains the lowest volatility stocks, quartile 4 the highest. An equally weighted ($ amount) index is created for each quartile. According to the above theory Q4 (high vol) should produce the highest cumulative returns.
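To make the grouping concrete, here is a small illustrative example (hypothetical numbers, not part of the strategy code below) of bucketing one day's volatilities into quartiles:

?View Code RSPLUS
#Illustration only: bucket one day's volatility estimates (made-up values)
#into quartiles using the quantile breakpoints
volRow <- c(StockA = 0.010, StockB = 0.025, StockC = 0.018, StockD = 0.032)
breaks <- quantile(volRow, na.rm = TRUE) #0%, 25%, 50%, 75%, 100% points
quartile <- cut(volRow, breaks, labels = 1:4, include.lowest = TRUE)
print(quartile) #StockA lands in Q1 (lowest vol), StockD in Q4 (highest vol)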

When using a 1 month lookback for the stdev calculation there is a clear winning index: the lowest vol index (black). Interestingly, the second-best index is the highest vol index (blue). The graph above is calculated using arithmetic returns.

When using a longer lookback of 250 days, a trading year, the highest vol index is the best performer and the lowest vol index the worst performer.

For a short lookback (30 days) the low vol index was the best performer

For a long lookback (250 days) the high vol index was the best performer

One possible explanation (untested) is that for a short lookback the volatility risk metric is more sensitive to moves in the stock, and hence on a news announcement or earnings release the stock has a higher likelihood of moving from its current index into a higher vol index. Perhaps it isn’t unreasonable to assume that the high vol index then contains only stocks that have had a recent announcement or temporary volatility spike and are in a period of consolidation or mean reversion. To put it another way, for short lookbacks the high vol index doesn’t contain the stocks that are permanently high vol, whereas for long lookbacks any temporary vol deviations are smoothed out.

Below are the same charts as above but for geometric returns.

On to the code:

?View Code RSPLUS
library("quantmod")
library("PerformanceAnalytics")
library("zoo")
 
#Script parameters
symbolLst <- c("ADN.L","ADM.L","AGK.L","AMEC.L","AAL.L","ANTO.L","ARM.L","ASHM.L","ABF.L","AZN.L","AV.L","BA.L","BARC.L","BG.L","BLT.L","BP.L","BATS.L","BLND.L","BSY.L","BNZL.L","BRBY.L","CSCG.L","CPI.L","CCL.L","CNA.L","CPG.L","CRH.L","CRDA.L","DGE.L","ENRC.L","EXPN.L","FRES.L","GFS.L","GKN.L","GSK.L","HMSO.L","HL.L","HSBA.L","IAP.L","IMI.L","IMT.L","IHG.L","IAG.L","IPR.L","ITRK.L","ITV.L","JMAT.L","KAZ.L","KGF.L","LAND.L","LGEN.L","LLOY.L","EMG.L","MKS.L","MGGT.L","MRW.L","NG.L","NXT.L","OML.L","PSON.L","PFC.L","PRU.L","RRS.L","RB.L","REL.L","RSL.L","REX.L","RIO.L","RR.L","RBS.L","RDSA.L","RSA.L","SAB.L","SGE.L","SBRY.L","SDR.L","SRP.L","SVT.L","SHP.L","SN.L","SMIN.L","SSE.L","STAN.L","SL.L","TATE.L","TSCO.L","TLW.L","ULVR.L","UU.L","VED.L","VOD.L","WEIR.L","WTB.L","WOS.L","WPP.L","XTA.L")
#Specify dates for downloading data
startDate = as.Date("2000-01-01") #Specify what date to get the prices from
symbolData <- new.env() #Make a new environment for quantmod to store data in
clClRet <- new.env()
downloadedSymbols <- list()
for(i in 1:length(symbolLst)){
  #Download one stock at a time
  print(paste(i,"/",length(symbolLst),"Downloading",symbolLst[i]))
  tryCatch({
    getSymbols(symbolLst[i], env = symbolData, src = "yahoo", from = startDate)
     cleanName <- sub("^","",symbolLst[i],fixed=TRUE)
     mktData <- get(cleanName,symbolData)
     print(paste("-Calculating close close returns for:",cleanName))
      ret <-(Cl(mktData)/Lag(Cl(mktData)))-1
      if(max(abs(ret),na.rm=TRUE)>0.5){
        print("-There is an abs(return) > 50%, the data looks odd so not using this stock")
        next;
      }
      downloadedSymbols <- c(downloadedSymbols,symbolLst[i])
 
      assign(cleanName,ret,envir = clClRet)
    }, error = function(e) {
    print(paste("Couldn't download: ", symbolLst[i]))
    })
 
 
}
 
 
#Combine all the returns into a zoo object (joins the returns by date)
#Not a big fan of this loop, think it's suboptimal
zooClClRet <- zoo()
for(i in 1:length(downloadedSymbols)){
  cleanName <- sub("^","",downloadedSymbols[i],fixed=TRUE)
  print(paste("Combining the close close returns to the zoo:",cleanName))
  if(length(zooClClRet)==0){
    zooClClRet <- as.zoo(get(cleanName,clClRet))
  } else {
    zooClClRet <- merge(zooClClRet,as.zoo(get(cleanName,clClRet)))
  }
}
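#A more compact alternative (untested sketch, assumed equivalent) is to collect
#the return series into a list and merge them in a single call:
#  retList <- lapply(downloadedSymbols, function(s) as.zoo(get(sub("^","",s,fixed=TRUE),clClRet)))
#  zooClClRet <- do.call(merge, retList)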
print(head(zooClClRet))
 
 
#This will take in a zoo object or data frame
#And convert each row (one day across all stocks) into quantiles
#Quantile 1 = 0-0.25
#Quantile 2 = 0.25-0.5 etc...
quasiQuantileFunction <- function(dataIn){
    quantileFun <- function(rowIn){
        quant <- quantile(rowIn,na.rm=TRUE)
        #print(quant)
        a <- (rowIn<=quant[5])
        b <- (rowIn<=quant[4])
        c <- (rowIn<=quant[3])
        d <- (rowIn<=quant[2])
        rowIn[a] <- 4
        rowIn[b] <- 3
        rowIn[c] <- 2
        rowIn[d] <- 1
        return(rowIn)
    }
 
  #Apply over rows (one row = one day) so stocks are ranked against each other daily,
  #then transpose back to the original date x stock orientation
  return (t(apply(dataIn,1,quantileFun)))
}
 
avgReturnPerQuantile <- function(returnsData,quantileData){
      q1index <- (quantileData==1)
      q2index <- (quantileData==2)
      q3index <- (quantileData==3)
      q4index <- (quantileData==4)
 
      q1dat <- returnsData
      q1dat[!q1index] <- NaN
      q2dat <- returnsData
      q2dat[!q2index] <- NaN
      q3dat <- returnsData
      q3dat[!q3index] <- NaN
      q4dat <- returnsData
      q4dat[!q4index] <- NaN
 
      avgFunc <- function(x) {
           #apply(x,1,median,na.rm=TRUE) #median is more resistant to outliers
            apply(x,1,mean,na.rm=TRUE)
      }
      res <- returnsData[,1:4] #just to maintain the time series (there must be a better way)
      res[,1] <- avgFunc(q1dat)
      res[,2] <- avgFunc(q2dat)
      res[,3] <- avgFunc(q3dat)
      res[,4] <- avgFunc(q4dat)
 
      colnames(res) <- c("Q1","Q2","Q3","Q4")
      return(res)
}
 
nLookback <- 250 #~1year trading calendar
clClVol <- rollapply(zooClClRet,nLookback,sd,na.rm=TRUE)
clClQuantiles <- quasiQuantileFunction(clClVol)
returnPerVolQuantile <- avgReturnPerQuantile(zooClClRet,clClQuantiles)
colnames(returnPerVolQuantile) <- c("Q1 min vol","Q2","Q3","Q4 max vol")
returnPerVolQuantile[is.nan(returnPerVolQuantile)]<-0 #Assume that if there is no return data its return is 0
#returnPerVolQuantile[returnPerVolQuantile>0.2] <- 0 #I was having data issues leading to days with 150% returns! This filters them out
cumulativeReturnsByQuantile <- apply(returnPerVolQuantile,2,cumsum)
dev.new()
charts.PerformanceSummary(returnPerVolQuantile,main=paste("Arithmetic Cumulative Returns per Vol Quantile - Lookback=",nLookback),geometric=FALSE)
print(table.Stats(returnPerVolQuantile))
cat("Sharpe Ratio")
print(SharpeRatio.annualized(returnPerVolQuantile))
 
dev.new()
par(oma=c(0,0,2,0))
par(mfrow=c(3,3))
 
for(i in seq(2012,2004,-1)){
print(as.Date(paste(i,"-01-01",sep="")))
print(as.Date(paste(i+1,"-01-01",sep="")))
  windowedData <- window(as.zoo(returnPerVolQuantile),start=as.Date(paste(i,"-01-01",sep="")),end=as.Date(paste(i+1,"-01-01",sep="")))
  chart.CumReturns(windowedData,main=paste("Year",i,"to",i+1),geometric=FALSE)
}
title(main=paste("Arithmetic Cumulative Returns per Vol Quantile - Lookback=",nLookback),outer=T)
 
dev.new()
charts.PerformanceSummary(returnPerVolQuantile,main=paste("Geometric Cumulative Returns per Vol Quantile - Lookback=",nLookback),geometric=TRUE)
print(table.Stats(returnPerVolQuantile))
cat("Sharpe Ratio")
print(SharpeRatio.annualized(returnPerVolQuantile))
 
dev.new()
par(oma=c(0,0,2,0))
par(mfrow=c(3,3))
 
for(i in seq(2012,2004,-1)){
print(as.Date(paste(i,"-01-01",sep="")))
print(as.Date(paste(i+1,"-01-01",sep="")))
  windowedData <- window(as.zoo(returnPerVolQuantile),start=as.Date(paste(i,"-01-01",sep="")),end=as.Date(paste(i+1,"-01-01",sep="")))
  chart.CumReturns(windowedData,main=paste("Year",i,"to",i+1),geometric=TRUE)
}
title(main=paste("Geometric Cumulative Returns per Vol Quantile - Lookback=",nLookback),outer=T)

Parameter Optimisation & Backtesting – Part 2

This is a follow on from: http://gekkoquant.com/2012/08/29/parameter-optimisation-backtesting-part1/

The code presented here will aim to optimise a strategy based upon the simple moving average indicator. The strategy will go Long when moving average A > moving average B. The optimisation is to determine what period to make each of the moving averages A & B.

Please note that this isn’t intended to be a good strategy, it is merely here to give an example of how to optimise a parameter.

Onto the code:

Functions

  • TradingStrategy this function implements the trading logic and calculates the returns
  • RunIterativeStrategy this function iterates through possible parameter combinations and calls TradingStrategy for each new parameter set
  • CalculatePerformanceMetric takes in a table of returns (from RunIterativeStrategy) and runs a function/metric over each set of returns.
  • PerformanceTable calls CalculatePerformanceMetric for lots of different metrics and compiles the results into a table
  • OrderPerformanceTable lets us order the performance table by a given metric, e.g. order by the highest Sharpe ratio
  • SelectTopNStrategies selects the best N strategies for a specified performance metric (charts.PerformanceSummary can only plot ~20 strategies, hence this function to select a sample)
  • FindOptimumStrategy does what it says on the tin
Note that when performing the out of sample test, you will need to manually specify the parameter set that you wish to use.
?View Code RSPLUS
 
library("quantmod")
library("PerformanceAnalytics")
 
 
nameOfStrategy <- "GSPC Moving Average Strategy"
 
#Specify dates for downloading data, training models and running simulation
trainingStartDate = as.Date("2000-01-01")
trainingEndDate = as.Date("2010-01-01")
outofSampleStartDate = as.Date("2010-01-02")
 
 
#Download the data
symbolData <- new.env() #Make a new environment for quantmod to store data in
getSymbols("^GSPC", env = symbolData, src = "yahoo", from = trainingStartDate)
trainingData <- window(symbolData$GSPC, start = trainingStartDate, end = trainingEndDate)
testData <- window(symbolData$GSPC, start = outofSampleStartDate)
indexReturns <- Delt(Cl(window(symbolData$GSPC, start = outofSampleStartDate)))
colnames(indexReturns) <- "GSPC Buy&Hold"
 
TradingStrategy <- function(mktdata,mavga_period,mavgb_period){
  #This is where we define the trading strategy
  #Check moving averages at start of the day and use as the direction signal
  #Enter trade at the start of the day and exit at the close
 
  #Lets print the name of whats running
  runName <- paste("MAVGa",mavga_period,".b",mavgb_period,sep="")
  print(paste("Running Strategy: ",runName))
 
  #Calculate the Open Close return
  returns <- (Cl(mktdata)/Op(mktdata))-1
 
  #Calculate the moving averages
  mavga <- SMA(Op(mktdata),n=mavga_period)
  mavgb <- SMA(Op(mktdata),n=mavgb_period)
 
  signal <- mavga / mavgb
  #If mavga > mavgb go long
  signal <- apply(signal,1,function (x) { if(is.na(x)){ return (0) } else { if(x>1){return (1)} else {return (-1)}}})
 
  tradingreturns <- signal * returns
  colnames(tradingreturns) <- runName
 
  return (tradingreturns)
}
 
RunIterativeStrategy <- function(mktdata){
  #This function will run the TradingStrategy
  #It will iterate over a given set of input variables
  #In this case we try lots of different periods for the moving average
  firstRun <- TRUE
    for(a in 1:10) {
        for(b in 1:10) {
 
          runResult <- TradingStrategy(mktdata,a,b)
 
          if(firstRun){
              firstRun <- FALSE
              results <- runResult
          } else {
              results <- cbind(results,runResult)
          }
        }
    }
 
   return(results)
}
 
CalculatePerformanceMetric <- function(returns,metric){
  #Given some returns in columns
  #Apply the function metric to the data
 
  print (paste("Calculating Performance Metric:",metric))
 
  metricFunction <- match.fun(metric)
  metricData <- as.matrix(metricFunction(returns))
  #Some functions return the data the wrong way round
  #Hence we can't label the columns, so check and transpose it if needed
  if(nrow(metricData) == 1){
    metricData <- t(metricData)
  }
  colnames(metricData) <- metric
 
  return (metricData)
}
 
 
 
PerformanceTable <- function(returns){
  pMetric <- CalculatePerformanceMetric(returns,"colSums")
  pMetric <- cbind(pMetric,CalculatePerformanceMetric(returns,"SharpeRatio.annualized"))
  pMetric <- cbind(pMetric,CalculatePerformanceMetric(returns,"maxDrawdown"))
  colnames(pMetric) <- c("Profit","SharpeRatio","MaxDrawDown")
 
  print("Performance Table")
  print(pMetric)
  return (pMetric)
}
 
OrderPerformanceTable <- function(performanceTable,metric){
return (performanceTable[order(performanceTable[,metric],decreasing=TRUE),])
}
 
SelectTopNStrategies <- function(returns,performanceTable,metric,n){
#Metric is the name of the function to apply to the column to select the Top N
#n is the number of strategies to select
  pTab <- OrderPerformanceTable(performanceTable,metric)
 
  if(n > ncol(returns)){
     n <- ncol(returns)
  }
  strategyNames <- rownames(pTab)[1:n]
  topNMetrics <- returns[,strategyNames]
  return (topNMetrics)
}
 
FindOptimumStrategy <- function(trainingData){
  #Optimise the strategy
  trainingReturns <- RunIterativeStrategy(trainingData)
  pTab <- PerformanceTable(trainingReturns)
  toptrainingReturns <- SelectTopNStrategies(trainingReturns,pTab,"SharpeRatio",5)
  charts.PerformanceSummary(toptrainingReturns,main=paste(nameOfStrategy,"- Training"),geometric=FALSE)
  return (pTab)
}
 
pTab <- FindOptimumStrategy(trainingData) #pTab is the performance table of the various parameters tested
 
#Test out of sample
dev.new()
#Manually specify the parameters that we want to trade here; just because a strategy is at the top of
#pTab it might not be good (it may be overfitted)
outOfSampleReturns <- TradingStrategy(testData,mavga_period=9,mavgb_period=6)
finalReturns <- cbind(outOfSampleReturns,indexReturns)
charts.PerformanceSummary(finalReturns,main=paste(nameOfStrategy,"- Out of Sample"),geometric=FALSE)

Trading Strategy – VWAP Mean Reversion

This strategy is going to use the volume weighted average price (VWAP) as an indicator to trade mean reversion back to the VWAP. Annualized Sharpe Ratio (Rf=0%) is 0.9016936.

This post is a response to http://gekkoquant.com/2012/07/29/trading-strategy-sp-vwap-trend-follow/ where a bug in the code made it look as though price wasn’t reverting to VWAP (this didn’t sit well with me, or with some of the people who commented). As always, don’t take my word for anything, backtest the strategy yourself. One of the dangers of using R or Matlab is that it’s easy for forward bias to slip into your code. There are libraries such as quantstrat for R which protect against this, but I’ve found them terribly slow to run.

Trade logic:

  • All conditions are checked at the close, and the trade held for one day from the close
  • If price/vwap > uLim go short
  • If price/vwap < lLim go long
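For example, with made-up numbers:

?View Code RSPLUS
#Hypothetical illustration of the entry rule (values made up)
uLim <- 1.001; lLim <- 0.999
close <- 1500; vwap <- 1497
close / vwap  #= 1.002004 > uLim, so go short for one day from the close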

Onto the code:

?View Code RSPLUS
library("quantmod")
library("PerformanceAnalytics")
 
#Trade logic - Look for mean reversion
#If price/vwap > uLim go SHORT
#If price/vwap < lLim go LONG
 
#Script parameters
symbol <- "^GSPC"     #Symbol
nlookback <- 3 #Number of days to lookback and calculate vwap
uLim <- 1.001  #If price/vwap > uLim enter a short trade
lLim <- 0.999  #If price/vwap < lLim enter a long trade
 
 
#Specify dates for downloading data
startDate = as.Date("2006-01-01") #Specify what date to get the prices from
symbolData <- new.env() #Make a new environment for quantmod to store data in
getSymbols(symbol, env = symbolData, src = "yahoo", from = startDate)
mktdata <- eval(parse(text=paste("symbolData$",sub("^","",symbol,fixed=TRUE))))
mktdata <- head(mktdata,-1) #Hack to fix some stupid duplicate date problem with yahoo
 
#Calculate volume weighted average price
vwap <- VWAP(Cl(mktdata), Vo(mktdata), n=nlookback)
#Can calculate vwap like this, but it is slower
#vwap <- runSum(Cl(mktdata)*Vo(mktdata),nlookback)/runSum(Vo(mktdata),nlookback)
 
#Calculate the daily returns
dailyRet <- Delt(Cl(mktdata),k=1,type="arithmetic") #Daily Returns
 
#signal = price/vwap
signal <- Cl(mktdata) / vwap
signal[is.na(signal)] <- 1 #Setting to one means that no trade will occur for NA's
#Stripping NA's caused all manner of problems in a previous post
trade <- apply(signal,1, function(x) {if(x<lLim) { return (1) } else { if(x>uLim) { return(-1) } else { return (0) }}})
 
#Calculate the P&L
#The daily ret is DailyRet(T)=(Close(T)-Close(T-1))/Close(T-1)
#We enter the trade on day T so need the DailyRet(T+1) as our potential profit
#Hence the lag in the line below
strategyReturns <- trade * lag(dailyRet,-1)
strategyReturns <- na.omit(strategyReturns)
 
#### Performance Analysis ###
#Calculate returns for the index
indexRet <- dailyRet #Daily returns
colnames(indexRet) <- "IndexRet"
zooTradeVec <- cbind(as.zoo(strategyReturns),as.zoo(indexRet)) #Convert to zoo object
colnames(zooTradeVec) <- c(paste(symbol," VWAP Trade"),symbol)
zooTradeVec <- na.omit(zooTradeVec)
 
#Lets see how all the strategies faired against the index
dev.new()
charts.PerformanceSummary(zooTradeVec,main=paste("Performance of ", symbol, " VWAP Strategy"),geometric=FALSE)
 
 
#Lets calculate a table of monthly returns by year and strategy
cat("Calendar Returns - Note 13.5 means a return of 13.5%\n")
print(table.CalendarReturns(zooTradeVec))
#Calculate the sharpe ratio
cat("Sharpe Ratio")
print(SharpeRatio.annualized(zooTradeVec))

Trading Strategy – S&P VWAP Trend Follow (BUGGY)

UPDATE: The exceptional returns seen in this strategy were due to a 2 day look-forward bias in the signal (and the subsequent trade direction), i.e. when returns were calculated for day T the trade signal used was actually from day T+2.

This bias occurred in the lines:

?View Code RSCODE
signal <- na.omit(signal)

Both the signal and trade data frames had the correct dates for each signal/trade; however, when indexRet*trade was evaluated, trade was treated as an undated vector (which is 2 elements shorter than indexRet), hence the 2 day shift. The moral of this story is to merge data frames before multiplying!
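A minimal sketch of that fix (assuming the signal, trade and dailyRet objects from the original code below): re-attach the dates to the trade vector and merge by date before multiplying, so nothing is silently recycled as an undated vector.

?View Code RSPLUS
#Sketch only: align the trade direction and next-day returns by date before multiplying
tradeZoo <- zoo(trade, order.by = index(signal))                     #give the trade vector its dates back
combined <- merge(tradeZoo, as.zoo(lag(dailyRet, -1)), all = FALSE)  #inner join on date
strategyReturns <- combined[, 1] * combined[, 2]                     #now correctly aligned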

Thank you to everyone who commented on this, a corrected post is to follow!

Original Post

This strategy is going to use the volume weighted average price (VWAP) as an indicator to determine the direction of the current trend and trade the same direction as the trend. Annualized Sharpe Ratio (Rf=0%) is 8.510472.

Trade logic:

  • All conditions are checked at the close, and the trade held for one day from the close
  • If price/vwap > uLim go long
  • If price/vwap < lLim go short

Initially I thought that the price would be mean reverting to VWAP (this can be seen in high frequency data), however this didn’t appear to be the case with EOD data. For such a simple strategy I’m amazed that the Sharpe ratio is so high (suspiciously high). The code has been double and triple checked to see if any forward bias has slipped in, however I haven’t spotted anything.

Onto the code:

?View Code RSPLUS
library("quantmod")
library("PerformanceAnalytics")
 
#Trade logic - Follow the trade demand, ie if price > vwap then go long
#If price/vwap > uLim go LONG
#If price/vwap < lLim go SHORT
 
#Script parameters
symbol <- "^GSPC"     #Symbol
nlookback <- 3 #Number of days to lookback and calculate vwap
uLim <- 1.001  #If price/vwap > uLim enter a long trade
lLim <- 0.999  #If price/vwap < lLim enter a short trade
 
 
#Specify dates for downloading data
startDate = as.Date("2006-01-01") #Specify what date to get the prices from
symbolData <- new.env() #Make a new environment for quantmod to store data in
getSymbols(symbol, env = symbolData, src = "yahoo", from = startDate)
mktdata <- eval(parse(text=paste("symbolData$",sub("^","",symbol,fixed=TRUE))))
mktdata <- head(mktdata,-1) #Hack to fix some stupid duplicate date problem with yahoo
 
#Calculate volume weighted average price
vwap <- VWAP(Cl(mktdata), Vo(mktdata), n=nlookback)
#Can calculate vwap like this, but it is slower
#vwap <- runSum(Cl(mktdata)*Vo(mktdata),nlookback)/runSum(Vo(mktdata),nlookback)
 
#Calculate the daily returns
dailyRet <- Delt(Cl(mktdata),k=1,type="arithmetic") #Daily Returns
 
#signal = price/vwap
signal <- Cl(mktdata) / vwap
signal <- na.omit(signal)
trade <- apply(signal,1, function(x) {if(x<lLim) { return (-1) } else { if(x>uLim) { return(1) } else { return (0) }}})
 
#Calculate the P&L
#The daily ret is DailyRet(T)=(Close(T)-Close(T-1))/Close(T-1)
#We enter the trade on day T so need the DailyRet(T+1) as our potential profit
#Hence the lag in the line below
strategyReturns <- trade * lag(dailyRet,-1)
strategyReturns <- na.omit(strategyReturns)
 
#### Performance Analysis ###
#Calculate returns for the index
indexRet <- dailyRet #Daily returns
colnames(indexRet) <- "IndexRet"
zooTradeVec <- cbind(as.zoo(strategyReturns),as.zoo(indexRet)) #Convert to zoo object
colnames(zooTradeVec) <- c(paste(symbol," VWAP Trade"),symbol)
zooTradeVec <- na.omit(zooTradeVec)
 
#Lets see how all the strategies faired against the index
dev.new()
charts.PerformanceSummary(zooTradeVec,main=paste("Performance of ", symbol, " VWAP Strategy"),geometric=FALSE)
 
 
#Lets calculate a table of monthly returns by year and strategy
cat("Calendar Returns - Note 13.5 means a return of 13.5%\n")
print(table.CalendarReturns(zooTradeVec))
#Calculate the sharpe ratio
cat("Sharpe Ratio")
print(SharpeRatio.annualized(zooTradeVec))

Trading Strategy – Volatility Carry Trade

This strategy is going to look at a vega-neutral volatility carry trade. Two different instruments will be traded, VXX and VXZ. These are exchange-traded notes holding rolling futures on the S&P 500 VIX index; VXX holds short-term futures and VXZ holds mid-term futures. Annualized Sharpe Ratio (Rf=0%) is 1.759449.

The strategy is very simple, the rules are:

  • If VXX / VXZ > 1 then in backwardation so do a reverse carry trade (buy VXX, sell VXZ)
  • If VXX / VXZ < 1 then do a carry trade (sell VXX, buy VXZ)

If the volatility spot price doesn’t change, then we’re extracting the cost of carry. Since we buy one of the short-term and mid-term futures and sell the other, the vega exposure is hedged.

In the script the two rules above have been slightly changed: a small offset is added to / subtracted from the ratio. Essentially we want to be deep into the contango zone or deep into the backwardation zone before we trade; if we’re close to the flip point then we don’t trade.

Section 1: Download the data and calculate the open-to-close return. This strategy will look for entry at the open and exit at the close.

Section 2: Regress the daily returns of VXX with VXZ to calculate the hedge ratio

Section 3: Generate the backwardation / contango signal

Section 4: Simulate the trading

Section 5: Analyse the performance

Onto the code:

?View Code RSPLUS
library("quantmod")
library("PerformanceAnalytics")
 
#Control variables for entering a trade
#Used to check for the level of contango / backwardation
#IF signal < signalLowLim then in contango and do a carry trade
#IF signal > signalUpperLim then in backwardation so do a reverse carry
#ELSE do nothing
signalLowLim <- 0.9
signalUpperLim <- 1.1
 
#Use volatility futures, shortdate vs medium dated
#VXX iPath S&P 500 VIX Short-Term Futures ETN (VXX)
#VXZ iPath S&P 500 VIX Mid-Term Futures ETN (VXZ)
symbolLst <- c("VXX","VXZ")
 
#Specify dates for downloading data, training models and running simulation
startDate = as.Date("2009-01-01") #Specify what date to get the prices from
hedgeTrainingStartDate = as.Date("2009-01-01") #Start date for training the hedge ratio
hedgeTrainingEndDate = as.Date("2009-05-01") #End date for training the hedge ratio
tradingStartDate = as.Date("2009-05-02") #Date to run the strategy from
 
### SECTION 1 - Download Data & Calculate Returns ###
#Download the data
symbolData <- new.env() #Make a new environment for quantmod to store data in
getSymbols(symbolLst, env = symbolData, src = "yahoo", from = startDate)
 
#Plan is to check for trade entry at the open, and exit the trade at the close
#So need to calculate the open to close return as it represents our profit or loss
#Calculate returns for VXX and VXZ
vxxRet <- (Cl(symbolData$VXX)/Op(symbolData$VXX))-1
colnames(vxxRet) <- "Ret"
symbolData$VXX <- cbind(symbolData$VXX,vxxRet)
 
vxzRet <- (Cl(symbolData$VXZ)/Op(symbolData$VXZ))-1
colnames(vxzRet) <- "Ret"
symbolData$VXZ <- cbind(symbolData$VXZ,vxzRet)
 
### SECTION 2 - Calculating the hedge ratio ###
#Want to work out a hedge ratio so that we can remain vega neutral (the futures contracts are trading vega)
#Select a small amount of data for training the hedge model on
subVxx <- window(symbolData$VXX$Ret,start=hedgeTrainingStartDate ,end=hedgeTrainingEndDate)
subVxz <- window(symbolData$VXZ$Ret,start=hedgeTrainingStartDate ,end=hedgeTrainingEndDate)
datablock = na.omit(cbind(subVxx,subVxz))
colnames(datablock) <- c("VXX","VXZ")
 
#Simply linearly regress the returns of Vxx with Vxz
regression <- lm(datablock[,"VXZ"] ~ datablock[,"VXX"]) #Linear Regression
 
#Plot the regression
plot(x=as.vector(datablock[,"VXX"]), y=as.vector(datablock[,"VXZ"]), main=paste("Hedge Regression: VXZret =",regression$coefficient[2]," * VXXret + intercept"),
  	 xlab="Vxx Ret", ylab="Vxz Ret", pch=19)
abline(regression, col = 2 )
hedgeratio = regression$coefficient[2]
 
 
### SECTION 3 - Generate trading signals ###
#Generate Trading signal
#Check the ratio to see if the volatility futures are in contango or backwardation
#If shortTermVega < midTermVega in contango so do carry trade
#If shortTermVega > midTermVega in backwardation so do reverse carry
#If VXX less than VXZ then want to short VXX and long VXZ
 
#Calculate the contango / backwardation signal
tSig <- Op(symbolData$VXX)/Op(symbolData$VXZ)
colnames(tSig) <- "Signal"
 
### SECTION 4 - Do the trading ###
#Generate the individual buy/sell signals for each of the futures contract
vxxSignal <- apply(tSig,1, function(x) {if(x<signalLowLim) { return (-1) } else { if(x>signalUpperLim) { return(1) } else { return (0) }}})
vxzSignal <- -1 * vxxSignal
 
#Strategy returns are simply the direction * the Open-to-Close return for the day
#Include the hedge ratio here so that we remain vega neutral
strategyReturns <- ((vxxSignal * symbolData$VXX$Ret) + hedgeratio * (vxzSignal * symbolData$VXZ$Ret) )
strategyReturns <- window(strategyReturns,start=tradingStartDate,end=Sys.Date(), extend = FALSE)
#Normalise the amount of money being invested on each trade so that we can compare to the S&P index later
strategyReturns <- strategyReturns * 1 / (1+abs(hedgeratio))
colnames(strategyReturns) <- "StrategyReturns"
#plot(cumsum(strategyReturns))
 
#SECTION 5
#### Performance Analysis ###
 
#Get the S&P 500 index data
indexData <- new.env()
startDate = as.Date("2009-01-01") #Specify what date to get the prices from
getSymbols("^GSPC", env = indexData, src = "yahoo", from = startDate)
 
#Calculate returns for the index
indexRet <- (Cl(indexData$GSPC)-lag(Cl(indexData$GSPC),1))/lag(Cl(indexData$GSPC),1)
colnames(indexRet) <- "IndexRet"
zooTradeVec <- cbind(as.zoo(strategyReturns),as.zoo(indexRet)) #Convert to zoo object
colnames(zooTradeVec) <- c("Vol Carry Trade","S&P500")
zooTradeVec <- na.omit(zooTradeVec)
#Lets see how all the strategies faired against the index
dev.new()
charts.PerformanceSummary(zooTradeVec,main="Performance of Volatility Carry Trade",geometric=FALSE)
 
#Lets calculate a table of monthly returns by year and strategy
cat("Calendar Returns - Note 13.5 means a return of 13.5%\n")
print(table.CalendarReturns(zooTradeVec))
#Calculate the sharpe ratio
cat("Sharpe Ratio")
print(SharpeRatio.annualized(zooTradeVec))
#Calculate other statistics
cat("Other Statistics")
print(table.CAPM(zooTradeVec[,"Vol Carry Trade"],zooTradeVec[,"S&P500"]))
 
dev.new()
#Lets make a boxplot of the returns
chart.Boxplot(zooTradeVec)
 
dev.new()
#Set the plotting area to a 2 by 2 grid
layout(rbind(c(1,2),c(3,4)))
 
#Plot various histograms with different overlays added
chart.Histogram(zooTradeVec, main = "Plain", methods = NULL)
chart.Histogram(zooTradeVec, main = "Density", breaks=40, methods = c("add.density", "add.normal"))
chart.Histogram(zooTradeVec, main = "Skew and Kurt", methods = c("add.centered", "add.rug"))
chart.Histogram(zooTradeVec, main = "Risk Measures", methods = c("add.risk"))

Trading Strategy – Buy on Gap (EPChan)

This post is going to investigate a strategy called Buy on Gap that was discussed by E.P. Chan in his blog post “the life and death of a strategy”. The strategy is a mean reverting strategy that looks to buy the weakest stocks in the S&P 500 at the open and liquidate the positions at the close. The performance of the strategy can be seen in the table below; Annualized Sharpe Ratio (Rf=0%) is 2.129124.

All numbers in this table are % (i.e. 12.6 is 12.6%)
      Jan  Feb  Mar  Apr  May  Jun  Jul  Aug  Sep  Oct  Nov  Dec BuyOnGap S&P500
2005  0.0  0.0  0.0  0.0  0.0 -0.4 -0.6  1.1  0.7  1.0 -0.2  0.3      1.8   -0.1
2006  0.2 -0.6 -0.3  0.0  1.1  0.1  0.0  0.4  0.1  0.1  0.2 -0.2      1.1   -2.0
2007  0.8  0.9  0.1 -1.1  0.3 -0.2 -1.5  0.2 -0.2  0.9 -0.4  0.3      0.0    1.0
2008  4.3 -1.9  0.8 -0.5  0.0  0.7  0.2 -0.7  2.0  3.3  2.0  2.0     12.6    6.0
2009 -2.9  0.1  1.4 -1.1  1.3 -0.8  0.4  0.0 -0.3 -1.9  0.8 -1.0     -4.0   -7.3
2010 -1.0 -0.1  0.0 -1.1 -0.7 -0.6  1.1  0.7 -0.5  0.6  0.5  0.1     -1.1   -5.9
2011  0.6  0.3  0.2 -0.1  0.2  0.4  0.7  0.1 -1.4 -1.2  1.4 -0.2      1.0    2.1
2012 -0.3 -0.5 -0.1  0.0 -1.0   NA   NA   NA   NA   NA   NA   NA     -1.8   -0.8

From the post two trading criterion were mentioned:

  1. Buy the 100 stocks out of the S&P 500 constituents that have the lowest returns from their previous day’s low to the current day’s opening price
  2. Provided that the above return is less than 1 times the 90 day standard deviation of close-to-close returns
The criteria are fairly specific; however, it is important to write flexible code where it is easy to change the main model parameters. Below is a list of variable names that specify the parameters in the R script:
  • nStocksBuy – How many stocks to buy
  • stdLookback – How many days to look back for the standard deviation calculation
  • stdMultiple – Number to multiply the standard deviation by (it was 1 in criterion 2); the larger this variable, the more stocks will satisfy criterion 2.

The code is split into 5 distinct sections.

Section 1: Loop through all the stocks loaded from the data file; for each stock calculate the return from the previous day’s low to the current day’s open (lowOpenRet). Calculate the close-to-close return and its rolling standard deviation (stdClClRet). Also calculate the open-to-close return for every day (dayClOpRet); if we decide to trade this day, this would be the return of the strategy for the day.

Section 2: This section combines columns from each of the individual stock data frames into large matrices that cover all the stocks. retMat contains the lowOpenRet for each stock. stdMat contains the stdClClRet for all stocks, dayretMat contains the dayClOpRet for all stocks.

Essentially instead of having lots of variables, we combine them into a  big matrix.

Section 3: This will check whether the matrices in section 2 meet the trade entry criteria. This section produces two matrices (conditionOne and conditionTwo). The matrices contain a 1 for a passed entry criterion and a 0 for a failed entry criterion.

Section 4: This multiplies conditionOne with conditionTwo to give conditionsMet; since those matrices are binary, multiplying them together identifies the regions where both conditions passed (1*1=1, i.e. a pass). This means enter a trade.

conditionsMet is then used as a mask: it has 1s when a trade should occur and 0s when no trade should happen. So multiplying it with dayClOpRet gives us the open-to-close daily returns for all days and stocks that a trade occurred on.

The script assumes capital is split equally between all the stocks that are bought at the open; if fewer than 100 stocks meet the entry criteria then it is acceptable to buy fewer.

Section 5: This section does simple performance analytics and plots the equity curve against the S&P 500 index.

Onto the code (note the datafile is generated in Stock Data Download & Saving R):

?View Code RSPLUS
#install.packages("quantmod")
library("quantmod")
#install.packages("caTools") #for rolling standard deviation
library("caTools")
#install.packages("PerformanceAnalytics")
library("PerformanceAnalytics") #Load the PerformanceAnalytics library
 
datafilename = "stockdata.RData"
 
stdLookback <- 90 #How many periods to lookback for the standard deviation calculation
stdMultiple <- 1 #A Number to multiply the standard deviation by
nStocksBuy <- 100 #How many stocks to buy
 
load(datafilename)
 
#CONDITION 1
#Buy 100 stocks with lowest returns from their previous days lows
#To the current days open
 
#CONDITION 2
#Provided returns are lower than one standard deviation of the
#90 day moving standard deviation of close close returns
#Exit long positions at the end of the day
 
 
#SECTION 1
symbolsLst <- ls(stockData)
#Loop through all stocks in stockData and calculate required returns / stdev's
for (i in 1:length(symbolsLst)) {
cat("Calculating the returns and standard deviations for stock: ",symbolsLst[i],"\n")
sData <- eval(parse(text=paste("stockData$",symbolsLst[i],sep="")))
#Rename the columns, there is a bug in quantmod: if a stock is called Low then Lo() breaks!
#i.e. if a column is LOW.x then Lo() breaks
oldColNames <- names(sData)
colnames(sData) <- c("S.Open","S.High","S.Low","S.Close","S.Volume","S.Adjusted")
 
#Calculate the return from low of yesterday to the open of today
lowOpenRet <- (Op(sData)-lag(Lo(sData),1))/lag(Lo(sData),1)
colnames(lowOpenRet) <- paste(symbolsLst[i],".LowOpenRet",sep="")
 
#Calculate the n day standard deviation from the close of yesterday to close 2 days ago
stdClClRet <- runsd((lag(Cl(sData),1)-lag(Cl(sData),2))/lag(Cl(sData),2),k=stdLookback,endrule="NA",align="right")
stdClClRet <- stdMultiple*stdClClRet + runmean(lag(Cl(sData),1)/lag(Cl(sData),2),k=stdLookback,endrule="NA",align="right")
colnames(stdClClRet) <- paste(symbolsLst[i],".StdClClRet",sep="")
 
#Not part of the strategy but want to calculate the Close/Open ret for current day
#Will use this later to evaluate performance if a trade was taken
dayClOpRet <- (Cl(sData)-Op(sData))/Op(sData)
colnames(dayClOpRet) <- paste(symbolsLst[i],".DayClOpRet",sep="")
 
colnames(sData) <- oldColNames
eval(parse(text=paste("stockData$",symbolsLst[i]," <- cbind(sData,lowOpenRet,stdClClRet,dayClOpRet)",sep="")))
}
 
#SECTION 2
#Have calculated the relevant returns and standard deviations
#Now need to work out which 100 (nStocksBuy) stocks have the lowest returns
#Make a returns matrix
for (i in 1:length(symbolsLst)) {
  cat("Assing stock: ",symbolsLst[i]," to the returns table\n")
  sDataRET <- eval(parse(text=paste("stockData$",symbolsLst[i],"[,\"",symbolsLst[i],".LowOpenRet\"]",sep="")))
  sDataSTD <- eval(parse(text=paste("stockData$",symbolsLst[i],"[,\"",symbolsLst[i],".StdClClRet\"]",sep="")))
  sDataDAYRET <- eval(parse(text=paste("stockData$",symbolsLst[i],"[,\"",symbolsLst[i],".DayClOpRet\"]",sep="")))
  if(i == 1){
  retMat <- sDataRET
  stdMat <- sDataSTD
  dayretMat <- sDataDAYRET
  } else {
  retMat <- cbind(retMat,sDataRET)
  stdMat <- cbind(stdMat,sDataSTD)
  dayretMat <- cbind(dayretMat,sDataDAYRET)
  }
}
 
#SECTION 3
#CONDITION 1 test output (0 = failed test, 1 = passed test)
#Now will loop over the returns matrix finding the nStocksBuy smallest returns
conditionOne <- retMat #copying the structure and data, only really want the structure
conditionOne[,] <- 0 #set all the values to 0
for (i in 1:length(retMat[,1])){
orderindex <- order((retMat[i,]),decreasing=FALSE)  #order row entries smallest to largest
orderindex <- orderindex[1:nStocksBuy] #want the smallest n (nStocksBuy) stocks
conditionOne[i,orderindex] <- 1 #1 Flag indicates entry is one of the nth smallest
}
 
#CONDITION 2
#Check the low-to-open return is less than the 90 day standard deviation of close-close returns
conditionTwo <- retMat #copying the structure and data, only really want the structure
conditionTwo[,] <- 0 #set all the values to 0
conditionTwo <- retMat/stdMat #If the low-open ret is < the std threshold this ratio will be < 1
conditionTwo[is.na(conditionTwo)] <- 2 #Give NAs a failing value (>=1) rather than stripping them
conditionTwo <- apply(conditionTwo,1:2, function(x) {if(x<1) { return (1) } else { return (0) }})
 
#SECTION 4
#CHECK FOR TRADE output (1 = passed conditions for trade, 0 = failed test)
#Can just multiply the two conditions together since they're boolean
conditionsMet <- conditionOne * conditionTwo
colnames(conditionsMet) <- gsub(".LowOpenRet","",names(conditionsMet))
 
#Lets calculate the results
tradeMat <- dayretMat
colnames(tradeMat) <- gsub(".DayClOpRet","",names(tradeMat))
tradeMat <- tradeMat * conditionsMet
tradeMat[is.na(tradeMat)] <- 0
tradeVec <- as.data.frame(apply(tradeMat, 1,sum) / apply(conditionsMet, 1,sum)) #Calculate the mean for each row
colnames(tradeVec) <- "DailyReturns"
tradeVec[is.nan(tradeVec[,1]),1] <- 0 #Didn't make or lose anything on this day
 
plot(cumsum(tradeVec[,1]),xlab="Date", ylab="EPCHAN Buy on Gap",xaxt = "n")
 
#SECTION 5
#### Performance Analysis ###
 
#Get the S&P 500 index data
indexData <- new.env()
startDate = as.Date("2005-01-13") #Specify what date to get the prices from
getSymbols("^GSPC", env = indexData, src = "yahoo", from = startDate)
 
#Calculate returns for the index
indexRet <- (Cl(indexData$GSPC)-lag(Cl(indexData$GSPC),1))/lag(Cl(indexData$GSPC),1)
colnames(indexRet) <- "IndexRet"
zooTradeVec <- cbind(as.zoo(tradeVec),as.zoo(indexRet)) #Convert to zoo object
colnames(zooTradeVec) <- c("BuyOnGap","S&P500")
 
#Lets see how all the strategies faired against the index
dev.new()
charts.PerformanceSummary(zooTradeVec,main="Performance of EPCHAN Buy on Gap",geometric=FALSE)
 
#Lets calculate a table of monthly returns by year and strategy
cat("Calendar Returns - Note 13.5 means a return of 13.5%\n")
table.CalendarReturns(zooTradeVec)
 
dev.new()
#Lets make a boxplot of the returns
chart.Boxplot(zooTradeVec)
 
dev.new()
#Set the plotting area to a 2 by 2 grid
layout(rbind(c(1,2),c(3,4)))
 
#Plot various histograms with different overlays added
chart.Histogram(zooTradeVec, main = "Plain", methods = NULL)
chart.Histogram(zooTradeVec, main = "Density", breaks=40, methods = c("add.density", "add.normal"))
chart.Histogram(zooTradeVec, main = "Skew and Kurt", methods = c("add.centered", "add.rug"))
chart.Histogram(zooTradeVec, main = "Risk Measures", methods = c("add.risk"))

Possible Future Modifications

  • Add shorting the strongest stocks so that the strategy is market neutral
  • Vary how many stocks to hold
  • Vary the input variables (discussed above)
  • Try a different asset class, does this work for forex?