You collected the sample data, cleaned it, organized it, and fed it to R or your favorite estimator.  You applied 23 different models and found the best fit.  You tested with new data and confirmed your model choice with a review of the confusion matrix.  And voilà, your predictions are sound and strong.  Ready to go live.  Nice work.

But a week later, larger errors are creeping into your predictions.  Your customers are complaining.  What went wrong?  New conditions cropped up in the data, that's what went wrong.  In other words, the world changed but your model did not.  It's the bane of all good data scientists: data is not static, and changes you never observed when you estimated your model come into being and throw a monkey wrench into things.  So how do you deal with it?

Add probability!  Think forward to the kinds of events that may occur and include them in your algorithm by specifying each one in terms of probability, timing, and impact.  Then estimate the model in a simulation fashion, so you can see how, when, and where the events you were worried about would change the predictions if they came true.  Because you have, in effect, practiced the future, your algorithms, and thus your predictions, become much more resilient.
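As a rough illustration, here is a minimal Monte Carlo sketch of that idea in Python (the post mentions R, but the same approach translates directly).  Everything in it is hypothetical: the event list, the probabilities, the timing windows, the impact multipliers, and the stand-in `baseline_forecast` model are invented purely to show how probability, timing, and impact can be layered onto a point forecast.

```python
import numpy as np

rng = np.random.default_rng(42)

HORIZON = 52  # weeks in the forecast window (hypothetical)

# Hypothetical future events, each described by probability, timing, and impact.
EVENTS = [
    # name,               P(occurs), earliest wk, latest wk, multiplicative impact
    ("new_competitor",    0.30,      4,           40,        0.85),   # -15% demand
    ("supply_disruption", 0.15,      1,           26,        0.70),   # -30% demand
    ("viral_promotion",   0.10,      8,           48,        1.25),   # +25% demand
]

def baseline_forecast(weeks):
    """Stand-in for the fitted model's point forecast (hypothetical linear trend)."""
    return 100.0 + 1.5 * np.arange(weeks)

def simulate_one_future(weeks):
    """Draw one possible future: decide which events fire, when, and apply impacts."""
    path = baseline_forecast(weeks).copy()
    for _, prob, start, end, impact in EVENTS:
        if rng.random() < prob:                   # does the event occur at all?
            onset = rng.integers(start, end + 1)  # when does it hit?
            path[onset:] *= impact                # impact persists after onset
    return path

def simulate(n_runs=10_000, weeks=HORIZON):
    """Monte Carlo over many simulated futures; return per-week percentile bands."""
    paths = np.array([simulate_one_future(weeks) for _ in range(n_runs)])
    return {
        "p05": np.percentile(paths, 5, axis=0),
        "p50": np.percentile(paths, 50, axis=0),
        "p95": np.percentile(paths, 95, axis=0),
    }

if __name__ == "__main__":
    bands = simulate()
    print("Week 26 forecast band: "
          f"{bands['p05'][26]:.0f} to {bands['p95'][26]:.0f} "
          f"(median {bands['p50'][26]:.0f})")
```

In a real workflow you would swap `baseline_forecast` for your fitted model and choose the events and their parameters with domain experts.  The payoff is that the forecast now comes with bands that already reflect the disruptions you chose to practice, rather than a single point estimate that assumes the world stands still.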

Adding this kind of thinking and technique to your predictive-analytics bag of tricks helps get you out of the quandary of constantly monitoring, re-estimating, and re-deploying.  You'll be a stronger data scientist for it.
