How Things Go Wrong


Carpenter Analytix

Investments sometimes turn out badly.  What you bought goes down; what you sold goes up.  Or a big paper gain is overstayed and slips away.  The typical response is to say the decisions were "wrong."  Wrong to have bought; wrong to have sold; wrong not to have diversified (or to have over-diversified); or any of many other ways to be wrong.

What's really "wrong"?

But a loss is not always evidence of being "wrong."  In fact, a substantial number of excellent decisions can and do incur losses, so the occurrence of a loss cannot, by itself, be proof of error.  In the stochastic-but-trendy market perspective, we find four ways that outcomes can turn out badly:

1. Faulty analysis
2. New-news
3. Randomness
4. Model decay

Only the first of these is actually a case of being "wrong."  Bad analysis is just bad analysis; maybe incomplete, maybe biased, maybe based on bad data.  Such errors are generally avoidable, but no amount of effort will eliminate them altogether.

Unlike faulty analysis, a loss due to new-news isn't really "wrong," unless we expect to know and foresee "everything."  The relevant question about new-news is whether the news content was an active component of the investment decision.  If we predict that housing starts will be up 5% (and make decisions based on that premise), but they are actually down 5%, the analysis and decision were wrong.  But if we make a decision with no basis in housing or construction, and get hit with a loss when housing starts come in down 5%, it's hard to say the decision was wrong.

Then there is randomness.  Market outcomes don't follow any model strict enough to determine a definite outcome in any particular case.  There is always noise in the system, and when the noise runs unfavorably it is hardly a case of error.  (If we bet even money on a roll of two fair dice coming up higher than six, it is a "right" decision... whether or not that bet wins.)
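
To make the dice example concrete, here is a minimal enumeration of the 36 equally likely rolls.  It is a sketch for illustration only; the variable names and output formatting are not part of the original note.

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely rolls of two fair dice.
rolls = list(product(range(1, 7), repeat=2))

# "Higher than six" means a total of 7 or more.
wins = sum(1 for a, b in rolls if a + b > 6)

p_win = Fraction(wins, len(rolls))               # 21/36 = 7/12
ev_even_money = p_win * 1 + (1 - p_win) * (-1)   # expected profit per unit staked

print(f"P(win) = {p_win} ≈ {float(p_win):.3f}")                                 # 7/12 ≈ 0.583
print(f"EV of even-money bet = {ev_even_money} ≈ {float(ev_even_money):+.3f}")  # 1/6 ≈ +0.167
```

The bet wins about 58% of the time, so an even-money stake carries positive expected value; the roughly 42% of losing outcomes are noise, not evidence of a wrong decision.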

Decay

Model decay is different.  It occurs when "things are different" today than in the past. In effect, the model is out of date--but we don't yet know it.  A decision in this case may be quite literally "wrong" but not necessarily "faulty."
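
How might we come to know that a model has decayed?  One illustrative possibility (an assumption of this sketch, not anything prescribed above) is to compare a rule's recent hit rate with its long-run record and flag a statistically significant shortfall.  The window sizes, the two-proportion z-test, and the threshold below are all arbitrary choices for the example.

```python
import math

def decay_flag(hist_wins, hist_n, recent_wins, recent_n, z_crit=1.96):
    """Two-proportion z-test: is the recent hit rate significantly below the long-run rate?

    The window sizes and z_crit (1.96, roughly a 2.5% lower tail) are illustrative assumptions.
    """
    p_hist = hist_wins / hist_n
    p_recent = recent_wins / recent_n
    p_pool = (hist_wins + recent_wins) / (hist_n + recent_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / hist_n + 1 / recent_n))
    z = (p_recent - p_hist) / se
    return z < -z_crit, z  # True when recent results are significantly worse than history

# Example: 540 wins in 1,000 historical decisions vs. 22 wins in the last 60.
flagged, z = decay_flag(540, 1000, 22, 60)
print(f"z = {z:.2f}, decay flagged: {flagged}")  # z ≈ -2.61, flagged: True
```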

So of the four ways things go badly, only two involve actually being "wrong."  And if a decision was not wrong, then attempts to remediate it are inappropriate.  The wish to avoid error (especially repeated error) can easily lead to on-the-fly policy adjustment without first distinguishing between "wrong" decisions and merely "unsuccessful" ones.

Making the right-or-wrong distinction is not easy or obvious. But it's not (usually) necessary to distinguish, because probability and significance help sort things out.  The key question is not whether an unsuccessful decision was wrong at the time, but whether the same decision under the same facts would be wrong today.

Every decision-and-outcome pair brings new information. When decisions are based on objectively significant state-and-outcome relationships, each new occasion (win or lose) simply folds into the statistical pool for confidence update.  In this context, a losing decision (or even a gain) is neither right nor wrong, but an additional and current observation that automatically updates the decision model.
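
As a sketch of what "folding into the statistical pool" might look like, the following assumes a simple Bernoulli win/loss record with a Beta prior: each outcome updates the estimated hit rate, and the question "would the same decision, on the same facts, be wrong today?" becomes a question about the updated estimate.  The uniform prior, the 55% break-even rate, and all names are illustrative assumptions, not anything specified in this note.

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    """Beta-Bernoulli pool for one decision rule: each outcome updates the estimated hit rate.

    The uniform Beta(1, 1) prior and the break-even hit rate are illustrative assumptions.
    """
    wins: int = 1      # Beta prior alpha
    losses: int = 1    # Beta prior beta

    def update(self, won: bool) -> None:
        # A losing (or winning) outcome is neither "right" nor "wrong";
        # it is simply one more observation folded into the pool.
        if won:
            self.wins += 1
        else:
            self.losses += 1

    def expected_hit_rate(self) -> float:
        return self.wins / (self.wins + self.losses)

    def still_worth_taking(self, break_even: float = 0.55) -> bool:
        # "Would the same decision, on the same facts, be wrong today?"
        # Here: does the current posterior mean still clear an assumed break-even hit rate?
        return self.expected_hit_rate() > break_even

# Usage: fold in a run of outcomes and re-ask the question afterward.
rule = DecisionRecord()
for outcome in [True, True, False, True, False, True, True]:
    rule.update(outcome)
print(rule.expected_hit_rate(), rule.still_worth_taking())  # 0.667 True
```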

Nothing at this site is offered as advice or recommendation.
All site content is subject to full Disclaimer statement and Terms of Use.
Copyright © CarpenterAnalytix.com 2004. All rights reserved.