Light is the most precious resource to a photographer: everything you can do with your camera is budgeted by the amount of light available.
Financial analysis is similarly constrained by the amount of data available.
So more available data is always good.
With Big 'O' Sharpe you can generate as much data as the granularity of your data allows.
E.g. if you usually make rebalancing decisions weekly (say every Monday) and have 10,000 days of returns, that results in 2k decisions and 2k weekly returns, cutting your sample to about a fifth of its original size.
But with Big 'O' you bump the starting date by a day (Tuesday is now 'd-day') or more (Wednesdays, Thursdays...). Every day becomes a rebalancing day, and we have ~10k returns again, based on 5 different runs of 2k each.
Great! Our data dropped to a fifth but then rebounded back up to 10k.
Obviously, with monthly or quarterly returns this bounce back is even bigger.
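The day-bumping mechanic can be sketched in a few lines; a minimal illustration, assuming synthetic daily returns and a 5-trading-day week (the names and numbers are mine, not from any real strategy):

```python
import numpy as np

rng = np.random.default_rng(0)
daily = rng.normal(0.0003, 0.01, 10_000)  # ~10k synthetic daily returns

# Run 0 rebalances on "Mondays", run 1 on "Tuesdays", and so on: each run
# compounds every non-overlapping block of 5 daily returns into a weekly one.
runs = []
for bump in range(5):
    trimmed = daily[bump:]
    n_weeks = len(trimmed) // 5
    runs.append(np.prod(1 + trimmed[:n_weeks * 5].reshape(n_weeks, 5),
                        axis=1) - 1)

print([len(r) for r in runs])  # → [2000, 1999, 1999, 1999, 1999]
```

Together the five runs hold roughly the original 10k observations again, just partitioned into five weekly series with different start days.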
However tempting it would be to combine our day bumped runs back together again, we shouldn't.
It turns out that in our original example, yes, we do have 10k returns; but because we can't combine them, we have to choose from 5 groups of 2k.
Statistical analysis only works on data that is 'independent'. When the returns 'overlap' - e.g. 80% of each Monday-to-Monday return is shared with the adjacent Tuesday-to-Tuesday return - there is well and truly no independence, and any statistics would be redundant.
The name Big 'O' comes from the computer science idea of assuming the worst-case data input when calculating algorithm efficiency, and it's the same in this case: we only report the worst-case 2k data set.
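Reporting only the worst case might look like this; a sketch under my own assumptions (the function name, the synthetic runs, and the annualisation by √52 are all illustrative choices, not a fixed recipe):

```python
import numpy as np

def worst_case_sharpe(runs, periods_per_year=52):
    """Annualised Sharpe ratio of each day-bumped run; keep only the worst."""
    sharpes = [np.mean(r) / np.std(r, ddof=1) * np.sqrt(periods_per_year)
               for r in runs]
    return min(sharpes), sharpes

# Five synthetic runs of 2k weekly returns standing in for the bumped runs.
rng = np.random.default_rng(1)
runs = [rng.normal(0.002, 0.02, 2000) for _ in range(5)]
worst, all_sharpes = worst_case_sharpe(runs)
print(worst)
```

The point is that `all_sharpes` is useful diagnostics, but only `worst` goes in the headline.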
Big 'O' analysis works best when a day's difference in decision making makes all the difference.
For example, it will work well for a strategy that goes long or short the market next week based on yesterday's return. Every decision could be completely different as we bump decision days.
We'd have 5 different datasets with very different results, which will shed a lot more light on our problem.
On the other end of the scale, if I make a decision based on a year's worth of historical data on a Monday and then bump a day and make a similar decision on Tuesday, 99% of the data used to make the decision is the same, and there's a large market return overlap. Generating Big 'O' data sets won't really tell you much more.
So, we can measure strategy return independence in two dimensions.
- Market return overlap
- Decision data overlap
(E.g. strategy return = market return * decision weight)
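A toy strategy at the favourable end of this spectrum, sketched under my own assumptions (synthetic data, a 5-day holding period, and a made-up `bumped_strategy` helper):

```python
import numpy as np

rng = np.random.default_rng(2)
daily = rng.normal(0.0, 0.01, 10_000)  # synthetic daily market returns

def bumped_strategy(daily, bump, hold=5):
    """Go long/short for the next week based only on yesterday's return.

    Decision data overlap between bumped runs is zero (each decision looks
    at a single, different day), while market return overlap stays high
    (adjacent weekly holding windows share 4 of their 5 days).
    """
    idx = np.arange(bump, len(daily) - hold, hold)
    weights = np.sign(daily[idx])                            # decision weight
    market = np.array([np.prod(1 + daily[i + 1:i + 1 + hold]) - 1
                       for i in idx])                        # market return
    return weights * market                                  # strategy return

runs = [bumped_strategy(daily, b) for b in range(5)]
```

Here dimension 1 (market return overlap) is large but dimension 2 (decision data overlap) is essentially zero, which is exactly the regime where the day-bumped runs can genuinely disagree with each other.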
High degrees of overlap in both are a big no-no.
A lot of 1 but little of 2 can be very useful, both for Big 'O' and for understanding our strategies better.
How about the case of zero 1 but a lot of 2?
This is the normal case for many strategies, and how to account for it during backtesting has worried me.
Ideally I would prefer to have each decision and its result completely insulated from any other, ensuring no extra dependence is introduced, and each strategy return is completely independent.
Once you go down this route though, and introduce 200 day decision lookbacks for example, you reduce the number of strategy returns available for analysis substantially!
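A back-of-the-envelope count of what full insulation costs, using the numbers above (the 5-day holding period is my assumption):

```python
# Each fully insulated decision consumes its own 200-day lookback plus a
# holding period, sharing no data with any other decision.
days, lookback, holding = 10_000, 200, 5
independent_decisions = days // (lookback + holding)
print(independent_decisions)  # → 48 strategy returns left for analysis
```

From 10,000 days down to fewer than 50 strategy returns: that is the price of ruling out every source of dependence.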
The reason why people can blithely get away with huge decision data overlap, however, is precisely the same reason why Big 'O' works.
The nasty side effects of decision data overlap are brushed under the carpet because the market returns are independent.
As with successful Big 'O', non-overlapping decision data can wash away some of the massive market data overlap.
I never recombine all the data back together (so no extra dependence is added) and just use the most conservative-looking results, although you could imagine scenarios where you could safely get away with it.
I.e. when one hand washes the other.