7-Point Backtesting Protocol
In 2018, Robert D. Arnott, Campbell R. Harvey, and Harry Markowitz published a paper setting out a backtesting protocol for the era of machine learning; fittingly, that is essentially the paper's title. The protocol applies to quantitative finance research in general, not only to machine learning applications. The authors organise it into seven categories, which we summarise below; the paper, of course, goes into much greater detail.
Note
Underlying Literature
- A Backtesting Protocol in the Era of Machine Learning, by Robert D. Arnott, Campbell R. Harvey, and Harry Markowitz.
The Protocol
As per Exhibit 2, pg. 16 of the paper:

- Research Motivation
  - Does the model have a solid economic foundation?
  - Did the economic foundation or hypothesis exist before the research was conducted?
- Multiple Testing and Statistical Methods
  - Did the researchers keep track of all models and variables that were tried (both successful and unsuccessful), and are they aware of the multiple-testing issue?
  - If interaction variables are used, is there a full accounting of all possible interaction variables?
  - Did the researchers investigate all variables set out in the research agenda, or did they cut the research short as soon as they found a good model?
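
To make the multiple-testing point concrete, here is a minimal sketch in Python (the trial log, model names, and p-values are entirely made up for illustration) of recording every configuration that was tried and then deflating the best raw p-value with simple Bonferroni and Šidák adjustments for the number of trials.

```python
# Hypothetical log of every model/variable combination tried, successful or not.
# In practice this would be appended to automatically as experiments are run.
trial_log = [
    {"model": "momentum_12m", "sharpe": 0.40, "p_value": 0.21},
    {"model": "momentum_6m",  "sharpe": 0.55, "p_value": 0.09},
    {"model": "value_bm",     "sharpe": 0.30, "p_value": 0.35},
    {"model": "value_ep",     "sharpe": 0.72, "p_value": 0.03},
    {"model": "low_vol",      "sharpe": 0.25, "p_value": 0.41},
]

n_trials = len(trial_log)
best = min(trial_log, key=lambda t: t["p_value"])

# Bonferroni: scale the best single-test p-value by the number of trials.
bonferroni_p = min(1.0, best["p_value"] * n_trials)

# Sidak: probability of seeing at least one result this extreme by chance
# across n independent trials.
sidak_p = 1.0 - (1.0 - best["p_value"]) ** n_trials

print(f"best trial: {best['model']}, raw p-value = {best['p_value']:.2f}")
print(f"Bonferroni-adjusted p = {bonferroni_p:.2f}, Sidak-adjusted p = {sidak_p:.2f}")
```

The particular correction matters less than the fact that such an adjustment is only possible if every trial, including the failed ones, was logged.
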
- Data and Sample Choice
  - Do the data chosen for examination make sense? And, if other data are available, does it make sense to exclude these data?
  - Did the researchers take steps to ensure the integrity of the data?
  - Do the data transformations, such as scaling, make sense? Were they selected in advance? And are the results robust to minor changes in these transformations?
  - If outliers are excluded, are the exclusion rules reasonable?
  - If the data are winsorized, was there a good reason to do it? Was the winsorization rule chosen before the research was started? Was only one winsorization rule tried (as opposed to many)?
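
As an illustration of the winsorization questions, the sketch below (toy fat-tailed data and hypothetical thresholds) applies a single winsorization rule chosen in advance and then checks whether a simple summary statistic is robust to nearby choices of that rule.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=5_000) * 0.01  # fat-tailed toy returns

def winsorize(x: np.ndarray, lower_pct: float, upper_pct: float) -> np.ndarray:
    """Clip values outside the given percentile bounds (simple winsorization)."""
    lo, hi = np.percentile(x, [lower_pct, upper_pct])
    return np.clip(x, lo, hi)

# One rule chosen in advance (e.g. 1%/99%), not tuned after seeing the results.
base = winsorize(returns, 1, 99)

# Robustness check: does the statistic of interest move much under nearby rules?
for lo, hi in [(0.5, 99.5), (1, 99), (2.5, 97.5)]:
    w = winsorize(returns, lo, hi)
    print(f"{lo:>4}%/{hi:<5}%  mean={w.mean():+.5f}  std={w.std():.5f}")
```
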
- Cross-Validation
  - Are the researchers aware that true out-of-sample tests are only possible in live trading?
  - Are steps in place to eliminate the risk of out-of-sample “iterations” (i.e., an in-sample model that is later modified to fit out-of-sample data)?
  - Is the out-of-sample analysis representative of live trading? For example, are trading costs and data revisions taken into account?
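
The out-of-sample questions can be illustrated with a minimal walk-forward sketch (the price series, the trading rule, and the 5 bps cost figure are all invented for illustration): the rule only ever sees past data, and the out-of-sample returns are reported net of a transaction-cost haircut so that they look more like live trading.

```python
import numpy as np

rng = np.random.default_rng(1)
prices = 100 * np.cumprod(1 + rng.normal(0.0003, 0.01, size=1_000))
returns = np.diff(prices) / prices[:-1]

COST_PER_TRADE = 0.0005  # illustrative one-way transaction cost (5 bps)

def signal(past_returns: np.ndarray) -> int:
    """Toy rule fit on past data only: go long if the trailing mean is positive."""
    return 1 if past_returns.mean() > 0 else 0

# Walk-forward evaluation: each out-of-sample day uses only information
# available before it, and the rule is never revised after seeing its OOS result.
window, positions, oos_net = 250, [], []
for t in range(window, len(returns)):
    pos = signal(returns[t - window:t])
    trade_cost = COST_PER_TRADE if positions and pos != positions[-1] else 0.0
    positions.append(pos)
    oos_net.append(pos * returns[t] - trade_cost)

oos_net = np.array(oos_net)
print(f"OOS mean daily net return: {oos_net.mean():+.5f}")
print(f"OOS annualised Sharpe (net of costs): "
      f"{oos_net.mean() / oos_net.std() * np.sqrt(252):.2f}")
```
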
- Model Dynamics
  - Is the model resilient to structural change and have the researchers taken steps to minimize the overfitting of the model dynamics?
  - Does the analysis take into account the risk/likelihood of overcrowding in live trading?
  - Do researchers take steps to minimize the tweaking of a live model?
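
One simple way to probe resilience to structural change, sketched below on synthetic data with a deliberate regime shift, is to re-estimate the headline statistic over non-overlapping subperiods and check whether the result survives outside a single regime.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy factor returns with a deliberate regime shift halfway through the sample.
factor = np.concatenate([
    rng.normal(0.0008, 0.01, size=1_000),
    rng.normal(0.0001, 0.02, size=1_000),
])

# Does the estimated edge hold up across subperiods, or does the model
# only "work" in one regime of the sample?
n_blocks = 4
for i, block in enumerate(np.array_split(factor, n_blocks)):
    sharpe = block.mean() / block.std() * np.sqrt(252)
    print(f"subperiod {i + 1}: annualised Sharpe = {sharpe:+.2f}")
```
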
- Complexity
  - Does the model avoid the curse of dimensionality?
  - Have the researchers taken steps to produce the simplest practicable model specification?
  - Is an attempt made to interpret the predictions of the machine learning model rather than using it as a black box?
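
The complexity questions are illustrated below with a hypothetical scikit-learn example (synthetic data, arbitrary model settings): a heavily parameterised model is compared against the simplest specification on held-out data, and its feature importances are inspected rather than treating it as a black box.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 5))  # 5 candidate predictors (toy data)
y = 0.5 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=1.0, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

simple = LinearRegression().fit(X_tr, y_tr)
complex_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Prefer the simplest specification that performs comparably out of sample.
print(f"linear R^2 (test): {simple.score(X_te, y_te):.3f}")
print(f"forest R^2 (test): {complex_model.score(X_te, y_te):.3f}")

# Interpret the machine learning model rather than using it as a black box.
for i, imp in enumerate(complex_model.feature_importances_):
    print(f"feature {i}: importance {imp:.3f}")
```
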
- Research Culture
  - Does the research culture reward quality of the science rather than finding the winning strategy?
  - Do the researchers and management understand that most tests will fail?
  - Are expectations clear (that researchers should seek the truth not just something that works) when research is delegated?
References
- A Backtesting Protocol in the Era of Machine Learning, by Robert D. Arnott, Campbell R. Harvey, and Harry Markowitz.