'O' (big-O) is mathematical notation that describes how one quantity behaves as another grows — roughly, getting ever closer to some limiting behaviour.

For example, in computer science it describes how efficiently algorithms chomp through inputs and produce results.

In statistics it's related to how probabilities converge. E.g. the probability of a very large result from a normal distribution converges to zero fast enough to give us a finite variance, which is nice.
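To make that concrete, here's a minimal sketch of the normal tail shrinking. The function name `normal_tail` is my own; it just wraps the standard complementary error function.

```python
import math

def normal_tail(x: float) -> float:
    """P(Z > x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# The tail vanishes very quickly as x grows:
for x in (1, 3, 5):
    print(x, normal_tail(x))
```

At x = 5 the tail probability is already below one in a million — that rapid decay is what keeps the variance finite.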

A while ago I noticed a problem with backtesting methodologies.

Changing a backtesting sample by as little as one day can produce very different results!

The technical term is...

A big 'uh-oh'!

I love the idea of sketching out different scenarios, but producing many near-copies of results from samples that overlap so heavily is not conducive to good statistics.

Then the big 'oh' hit me.

How about I approach the problem like they do with O in computer science?

When reporting the efficiency of algorithms, the least favourable inputs are assumed [e.g. the worst possible ordering rather than a convenient one] so the reported O is always conservative.
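A quick sketch of what worst-case reporting means in practice. Insertion sort on already-sorted input does about n comparisons, but on the least favourable (reversed) input it does about n²/2 — and the conservative, reported figure is the O(n²) worst case.

```python
def insertion_sort_comparisons(xs):
    """Sort a copy of xs by insertion sort and count element comparisons."""
    xs = list(xs)
    comparisons = 0
    for i in range(1, len(xs)):
        j = i
        while j > 0:
            comparisons += 1
            if xs[j - 1] > xs[j]:
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
            else:
                break
    return comparisons

n = 100
best = insertion_sort_comparisons(range(n))          # already sorted: ~n
worst = insertion_sort_comparisons(range(n, 0, -1))  # reversed: ~n^2/2
print(best, worst)
```

Same algorithm, wildly different costs depending on the input — which is exactly why the worst case is the honest headline number.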

That's precisely how we should report in finance.

When I get around to it, the Lazy Backtest IDE will report the lowest Sharpe found by changing samples and 'weighting dates' by a handful of days here and there.
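A minimal sketch of what that conservative report could look like: shift the sample's start and end dates by up to a handful of days and keep only the worst Sharpe. The function name `conservative_sharpe` and the details are my own guess at the idea, not the actual Lazy Backtest IDE implementation.

```python
import random

def sharpe(returns):
    """Annualised Sharpe ratio from daily returns (zero risk-free rate assumed)."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return mean / var ** 0.5 * 252 ** 0.5

def conservative_sharpe(daily, jitter=5):
    """Worst Sharpe over all samples with start/end dates shifted
    by up to `jitter` days -- report the minimum, big-O style."""
    candidates = []
    for start in range(jitter + 1):
        for end in range(jitter + 1):
            window = daily[start:len(daily) - end]
            candidates.append(sharpe(window))
    return min(candidates)

random.seed(7)
# Simulated daily returns standing in for a real backtest.
daily = [random.gauss(0.0005, 0.01) for _ in range(1000)]
print(conservative_sharpe(daily))
```

By construction the conservative figure can never beat the headline Sharpe of the unshifted sample — it's the pessimistic bound, just like worst-case O.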

Stingy, yes, but anything to avoid 'uh-ohs' when your strategy goes live.