Many, many weeks ago I wrote about this puzzle.

Why does Lazy PCA always output components with +1 and -1 slopes, which I interpret as a measure of momentum or mean reversion (i.e. positive or negative autocorrelation)?

(EEM quarterly returns)

Rogier Swierstra and Emlyn Flint were quick off the mark.

You can break a covariance matrix down into principal components by finding its eigenvectors.

E.g.

`A = [[11,2],[2,11]]; // a covariance matrix is symmetric`

`e = numeric.eig(A)`

(Try this code out online here)
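If numeric.js isn't to hand, the same decomposition can be sketched dependency-free for a symmetric 2x2 matrix using the closed-form eigenvalues. The function name `eig2x2` is mine, not numeric's, and this is a sketch for the 2x2 symmetric case only:

```javascript
// Closed-form eigen decomposition of a symmetric 2x2 matrix [[a, b], [b, c]].
// A sketch only -- numeric.eig handles the general case.
function eig2x2(a, b, c) {
  const mean = (a + c) / 2;
  const d = Math.sqrt(((a - c) / 2) ** 2 + b * b);
  const lambdas = [mean + d, mean - d];   // eigenvalues, largest first
  // (A - lambda*I) v = 0 gives v = [b, lambda - a] when b is non-zero.
  const vectors = b === 0
    ? (a >= c ? [[1, 0], [0, 1]] : [[0, 1], [1, 0]])
    : lambdas.map(l => [b, l - a]);
  return { lambdas, vectors };
}

console.log(eig2x2(11, 2, 11));
// lambdas [13, 9]; vectors [2, 2] and [2, -2] -- slopes +1 and -1
```

Note that with equal diagonal entries (11 and 11) the eigenvectors come out with equal absolute coordinates, which is the whole point of the puzzle.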

If the 'before' and 'after' volatilities in our 2x2 variance-covariance matrix are about equal (pretty much always the case with a big enough sample), we end up with eigenvectors whose coordinates have roughly equal absolute values.

`Math.atan2(e.E.x[1][0],e.E.x[0][0])*180/Math.PI // numeric.eig returns {lambda, E}; eigenvectors are the columns of E`

which always results in a rotation of our vectors from the horizontal axis by +-45 or +-135 degrees!
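The +-45 degree claim can be checked without any library: for any symmetric 2x2 matrix with equal diagonals [[v, c], [c, v]], multiplying out shows that [1, 1] and [1, -1] are eigenvectors (with eigenvalues v + c and v - c), whatever the covariance c is. A small sketch with made-up numbers:

```javascript
// Any symmetric 2x2 matrix with equal diagonal entries has eigenvectors
// [1, 1] and [1, -1]: A*[1,1] = (v+c)*[1,1] and A*[1,-1] = (v-c)*[1,-1].
const v = 11, c = 2;                      // illustrative values only
const A = [[v, c], [c, v]];
const mul = (M, x) => [M[0][0] * x[0] + M[0][1] * x[1],
                       M[1][0] * x[0] + M[1][1] * x[1]];
console.log(mul(A, [1, 1]));              // [13, 13] = (v + c) * [1, 1]
console.log(mul(A, [1, -1]));             // [9, -9]  = (v - c) * [1, -1]

const degrees = vec => Math.atan2(vec[1], vec[0]) * 180 / Math.PI;
console.log(Math.round(degrees([1, 1])), Math.round(degrees([1, -1])));
// 45 -45
```

So the angles fall out of the equal diagonals alone; the off-diagonal term only decides which slope gets the bigger eigenvalue.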

Usually PCA has a mind of its own: the components are not easily mapped back onto intuitive reality. In this case, though, we have forced PCA's hand; it can only give us back a solution which maps onto vectors with +-1 slopes.

Is this cheating? I don't think so!

In fact I wonder whether there are other sly ways to coerce powerful PCA into doing our bidding...