
Musings from the Predictive Analytics World Show

Heading home from PAW London, the London edition of a popular series of conferences on predictive analytics, where practitioners share use cases and war stories about the art of discovering meaning behind numbers.

I attended this conference to get to know our clientèle better, as it is often the marketing analysts who own the process of transforming operational data into strategic insights. As this was my first foray into an analysts-only conference, I was eager to discover what makes them tick, and how Big Data and Lily can improve their processes. Herewith a number of insights.

The conference chair kicked off the event with a provocative comparison: Big Data as a can of industrially prepared ravioli, predictive analytics as fine connoisseur cooking. That got my attention right away, obviously. However, it became clear further into his talk that the ideal lies in a balance between skilled human talent (predictive analytics, or PA) and the ability to process huge amounts of data in a qualitative way (Big Data).

One of the apparent pain points of PA is the inability to fully operationalize its processes: many of them are essentially workstation-bound, generating cleverly formatted, insightful strategic data in the form of reports and presentations through iterations of algorithmic refinement.

Big Data, however, is more of an infrastructure play. Any native Big Data application is designed (or at least should be) with operationalization in mind, resulting in the ability to dynamically drive, augment, or optimize (marketing) business processes. The admittedly less refined algorithms, reporting capabilities, and hands-off operations (compared to PA) are offset by the possibility of making actual, real-time data derived from user behavior an integral part of the consumer service process. Providing improved customer touch points when it matters, at the right time and in the right location, may well be the more important business driver.
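To make that contrast concrete, here is a minimal sketch of what an operationalized, event-driven touch point could look like. Everything in it is hypothetical: the event fields, the action names, and the decision rules are illustrations of the pattern, not any particular product's API.

```python
# Hypothetical sketch of an operationalized, event-driven marketing action.
# In production this handler would be wired to a live event stream.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class BehaviorEvent:
    customer_id: str
    action: str        # e.g. "viewed_product", "abandoned_cart"
    location: str      # e.g. a site section or store
    timestamp: datetime

def next_best_action(event: BehaviorEvent) -> Optional[str]:
    """Decide, at event time, whether to trigger a customer touch point."""
    if event.action == "abandoned_cart":
        return f"send_discount_voucher:{event.customer_id}"
    if event.action == "viewed_product":
        return f"show_related_items:{event.customer_id}"
    return None  # no intervention for this event

action = next_best_action(BehaviorEvent(
    customer_id="c42",
    action="abandoned_cart",
    location="webshop/checkout",
    timestamp=datetime.now(timezone.utc),
))
print(action)  # send_discount_voucher:c42
```

The point of the sketch is the shape, not the rules: the decision runs inside the serving process, on the live event, with no analyst in the loop.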

Also, the fact that Big Data systems touch the entirety of the available data set makes for very precise and complete processing, catering to the 90th percentile as well as the long tail in the same run. Having all consumer-related data available in real time from a Big Data Management System (BDMS) could support myriad future applications by adding new algorithms to the same processing infrastructure: plug in the code, the data is already there.
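The "plug in the code, the data is already there" idea can be sketched as a registry of algorithms that all run against one shared data set. The registry, the algorithm names, and the toy records below are hypothetical, purely to illustrate that adding an application means adding code, not moving data.

```python
# Hypothetical sketch: algorithms plug into one shared data set,
# so each new application is new code over the same infrastructure.

from typing import Callable, Dict, List

CustomerRecord = dict  # stand-in for a record in the shared BDMS
Algorithm = Callable[[List[CustomerRecord]], dict]

ALGORITHMS: Dict[str, Algorithm] = {}

def register(name: str):
    """Register an algorithm against the shared infrastructure."""
    def wrap(fn: Algorithm) -> Algorithm:
        ALGORITHMS[name] = fn
        return fn
    return wrap

@register("avg_basket_value")
def avg_basket_value(records):
    values = [r["basket_value"] for r in records if "basket_value" in r]
    return {"avg_basket_value": sum(values) / len(values) if values else 0.0}

@register("churn_flags")
def churn_flags(records):
    # Trivial rule as a placeholder; a real predictive model goes here.
    return {"churn_risk": [r["id"] for r in records if r.get("purchases", 0) == 0]}

# One pass over the same data serves every registered application.
shared_data = [
    {"id": "a", "basket_value": 30.0, "purchases": 3},
    {"id": "b", "purchases": 0},
]
for name, algo in ALGORITHMS.items():
    print(name, "->", algo(shared_data))
```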

Contrast that with PA, where a common pattern is for a predictive analyst to spend time preparing new data sets, or tweaking existing ones, for each new question to be answered; that iterative process, of course, does not lend itself well to operationalization.

One of the most interesting observations from the conference was that predictive analytics projects (as well as Big Data projects, I must admit) usually focus very heavily on new business (processes). A speaker from a large industrial concern, however, made the point that the ROI of data projects can also be found in the optimization of existing business. Using Big Data to drive personalized recommendations for up-selling springs immediately to mind, but predicting user behavior to improve sales and service processes is equally interesting from a return-on-investment point of view.

All in all, my trip to PAW London taught me that while there is certainly still a small zone of discomfort between PA practitioners and their tools on the one hand, and the new Big Data kids on the block on the other, the zone of complementarity is much larger and more interesting to explore. The quest to scale out what analysts have been doing at workstation level to the capabilities of a computing cluster has only just started, and there is much to be learned on both sides.