Much of the article focuses on Dan Wagner, chief analytics officer for the 2012 Obama campaign, and the work he and his team did to fuel it. In particular, it notes:
The significance of Wagner’s achievement went far beyond his ability to declare winners months before Election Day. His approach amounted to a decisive break with 20th-century tools for tracking public opinion, which revolved around quarantining small samples that could be treated as representative of the whole. Wagner had emerged from a cadre of analysts who thought of voters as individuals and worked to aggregate projections about their opinions and behavior until they revealed a composite picture of everyone. His techniques marked the fulfillment of a new way of thinking, a decade in the making, in which voters were no longer trapped in old political geographies or tethered to traditional demographic categories, such as age or gender, depending on which attributes pollsters asked about or how consumer marketers classified them for commercial purposes. Instead, the electorate could be seen as a collection of individual citizens who could each be measured and assessed on their own terms. (italics added.)
The article contrasts the Obama approach with the Romney campaign’s, which was much more rooted in the “20th-century tools for tracking public opinion.” How much Wagner’s team contributed to the surprising (to many) margin of victory will undoubtedly be debated for years to come, but I will be very surprised if the “collection of individuals” approach doesn’t quickly become the new standard for any political campaign.
I want to highlight one other quote from the article before turning to our own industry. In his conclusion, Issenberg writes:
In many respects, analytics had made it possible for the Obama campaign to recapture [the small town] style of politics. … They enabled a presidential candidate to view the electorate the way local candidates do: as a collection of people who make up a more perfect union, each of them approachable on his or her terms… “What that gave us was the ability to run a national presidential campaign the way you’d do a local ward campaign,” [David Simas, the director of opinion research] says. “You know the people on your block. People have relationships with one another, and you leverage them so you know the way they talk about issues, what they’re discussing at the coffee shop.”
As I write this, I am reminded of a 1990 United Airlines commercial in which a boss, after being fired by a longtime account, tells his staff they are going to have a “face-to-face chat with every customer we have.” His company had lost touch with the individual and replaced personal relationships with a dependence on the modern technology of the day: the fax machine. The boss was going to use the airline to get back to his customers.
Technology, the very thing that got this fictitious company out of touch with its customers, is what the Obama campaign used to get in touch with 65 million of its “customers”.
Customer experience research came of age in the 1990s and is very much rooted in the social science thinking of the time. We survey individual customers in order to learn about the group, not about that individual. The “group” could be the business unit (usually a branch or a region) or the brand itself, but that group, rather than the customer, is the focus. Yes, most programs incorporate hot alerts and other customer recovery tools into the process, and we rarely ignore a customer in need, but our methods and processes are all based on understanding the group, not the individual. Advocates of the Net Promoter approach will say the two-question survey is more customer-friendly, but it is still a methodology designed to understand the group.

And while some technology-based companies, or even industries, claim to be doing things differently, the reality is that they have all adopted the prevailing mindset and are just as dependent upon 20th-century social science thinking as the market research industry. Whether you call it market research or Enterprise Feedback Management, any program that treats all customers as essentially the same, with a one-size-fits-all approach to survey construction and administration, is still based on the old social science model. Anytime the feedback process has been designed for the user of the information rather than for the customer who provides it, the program is based on the old social science paradigm. If there are truly individualized programs out there, I would love to see them.
Note that advocating for an individual approach does not mean abandoning the need to understand the branch or the brand. After all, brands and branches are built one customer at a time; they are collections of individuals. But systemic progress is very difficult when all you have are individual responses. Even when you design a program from the customer back, you still need to be able to analyze the aggregated data and use that understanding to improve the overall functioning of the unit. The MIT article makes it very clear that the Obama campaign relied on the combination of individual voter information and data from traditional research. And while I do think moving to an individually based approach will create measurement challenges, it is worth noting that, according to the MIT article, the Obama team was able to predict individual election outcomes with “improbable accuracy,” not with polls but by counting votes “one by one.”
Regardless of what you think about President Obama, the reality is that his 2012 campaign has changed politics. The use of modern data analytics to understand the electorate through a focus on the individual voter will be with us for the foreseeable future. The question is: when will these same techniques drive the next generation of customer experience measurement?
* Republished with permission from the original at Maritz Research's Blog: Sound Check
By: Jim Stone