Tuesday, July 22, 2014

Marketing ex Machina


How Marketers May Be Replaced by Algorithms and Why Non-marketers Should Care

 Originally published 05/08/2013 at Maus Strategic Consulting.com.

 The majority of U.S. equity trading in 2012 was executed without human input, and over the next 10 years financial algorithms may compete many, if not most, human traders out of the industry, according to The Future of Computer Trading in Financial Markets: An International Perspective, a comprehensive independent study by Foresight commissioned by the British government. Because many of these algorithms (especially the newer ones being implemented) are designed to learn and evolve dynamically based on the results of their experiences, or were themselves optimized by other algorithms, they will continue to become ever more effective, but also ever more incomprehensible and unpredictable from a human standpoint. Yet even this dramatic transformation of the financial industry will have a relatively minor impact on society compared with how comparable algorithms stand to revolutionize marketing and subtly influence our culture.

With the advent of social media, social scientists now have a direct window into the dynamics of the proliferation and evolution of ideas in a natural social environment--and what a window it is! Determining the precise amount of data on social networking sites is difficult, but in 2012 users uploaded over 500 terabytes per day to Facebook alone, approximately 50 times the total amount of data in the physical collection of the U.S. Library of Congress. This data, in conjunction with other online activity and the detailed demographic data that consumer research giants like Acxiom have gathered over the past decades, is truly staggering. And, depending on how they are regulated and adopted by the public, Google Glass and similar devices could open avenues for ubiquitous real-world data collection.
As if this weren't enough information, researchers can also use automated processes to determine, fairly reliably, even more about users: one's sexual orientation, intelligence, and many personality traits can be inferred from Facebook Likes alone--which is, again, only a fraction of the data researchers can reference and cross-reference in developing a more detailed picture of you.
ViralSearch, designed by Microsoft Research, is intended to model and visualize the spread of viral content across the Internet.
Analysis of this data has given researchers an incredible breadth of new insights, such as identifying unrecorded drug side effects from search queries. The most important insights for our purposes, however, concern the dynamics of idea transmission and evolution, which many academic and private organizations are studying and, more importantly, quantifying.
These studies are still rather rudimentary, but they have already produced basic yet precise quantitative models of such areas as the diffusion of positive and negative sentiments, opinion formation, and the impact of emotions on the virality of news articles.

The next logical step, from the perspective of marketers and researchers, is to continue refining these models and integrating them, retesting each new model against the wealth of available data until they begin to approach a unified model for many of the dynamics of idea transmission and evolution.
And once one has such a developed model of the relevant factors, plus a constant stream of data, one can start making forecasts, much like today's meteorologists.
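To make the meteorology analogy concrete, here is a minimal sketch of how idea diffusion could be forecast with the same kind of compartmental model epidemiologists use for disease spread. Every name and parameter here is hypothetical; a real memetic modeler would estimate such values from the data streams described above.

```python
# Toy sketch: forecasting idea diffusion with a simple SIR-style model,
# analogous to epidemic models. All parameters are invented for illustration.

def simulate_meme_spread(population=10_000, seed_adopters=10,
                         transmission_rate=0.3, loss_of_interest_rate=0.1,
                         days=60):
    """Discrete-time model: susceptible -> actively adopting -> lost interest."""
    susceptible = population - seed_adopters
    adopted = float(seed_adopters)
    faded = 0.0
    history = []
    for _ in range(days):
        # New adopters depend on contact between adopters and the susceptible
        new_adopters = transmission_rate * adopted * susceptible / population
        newly_faded = loss_of_interest_rate * adopted
        susceptible -= new_adopters
        adopted += new_adopters - newly_faded
        faded += newly_faded
        history.append(adopted)
    return history

curve = simulate_meme_spread()
peak_day = max(range(len(curve)), key=lambda d: curve[d])
print(f"Forecast: attention peaks around day {peak_day}")
```

Even this toy model reproduces the familiar rise-and-fade attention curve of a viral meme; a production system would layer network structure, sentiment, and demographic factors on top of it.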

Memetic modelers could one day be used to predict the course of public discussion and opinion formation, much as modern atmospheric models predict the weather.
Of course, the dynamics of idea transmission and evolution are probably more complicated, and more subject to low-probability, high-impact events, than weather patterns, and there may be more interrelated factors to take into account--but such forecasters would also have access to far more data than meteorologists do. Given this complexity, even rudimentary predictive models would likely be so intricate as to be practically incomprehensible to any single person.

Another key difference between weather patterns and idea transmission is that forecasters can easily influence the latter; after all, we do it every time we communicate with one another. However, predicting the reaction to one's communications, especially the long-term reaction, is difficult, as advertisers well know. This is where automated prediction programs have a firm upper hand: they can not only hold a full understanding of the predictive models but also process the reams of incoming data. Using algorithmic models to predict the probable outcomes of various strategies is already being done by the epidemiological modeling program developed by Intellectual Ventures Lab, which determines the likely outcomes of various disease-containment strategies.

Similarly, these idea evolution/profusion prediction algorithms--memetic modelers, as I'll call them--may be able to simulate the probable outcomes of various marketing campaigns more accurately than even experienced marketers can. Further, they could recommend which of several proposed campaigns would be most effective, as well as propose minor tweaks to even the best campaign based on analysis of the current memetic environment--suggestions made for reasons too subtle and complicated for the marketers themselves to understand.
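As a toy illustration of how such a modeler might rank candidate campaigns, the sketch below runs a noisy diffusion simulation for each of several campaigns and recommends the one with the greatest simulated reach. The campaign names and transmission rates are invented for illustration; a real system would estimate them from historical response data.

```python
import random

# Toy sketch: ranking hypothetical campaigns by Monte Carlo simulation of
# their diffusion. All names and parameters are stand-ins for what a real
# memetic modeler would estimate from data.

def simulated_reach(transmission_rate, days=20, population=10_000, trials=200):
    """Average total adopters over many noisy simulation runs."""
    totals = []
    for _ in range(trials):
        susceptible, adopted = population - 10, 10.0
        for _ in range(days):
            # Noise models day-to-day variation in audience response
            rate = transmission_rate * random.uniform(0.8, 1.2)
            new = rate * adopted * susceptible / population
            susceptible -= new
            adopted += new
        totals.append(population - susceptible)
    return sum(totals) / trials

campaigns = {"nostalgia angle": 0.22, "humor angle": 0.30, "celebrity angle": 0.26}
best = max(campaigns, key=lambda name: simulated_reach(campaigns[name]))
print("Recommended campaign:", best)
```

The averaging over noisy trials is the key design choice: it lets the modeler compare strategies under uncertainty rather than on a single best-case projection.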
The computer system Iamus, designed at the University of Malaga, can compose entire classical works in eight minutes using iterated generative processes.
The image above was generated algorithmically using Structure Synth
Perhaps the final step in the automation of marketing would be such modelers actually designing and delivering the content of advertisements (whether explicit advertisements or not).

However, this is much less far-fetched than it might seem. Algorithmic composition of music is mostly used for experimental music at this point, but it has already borne impressive results, such as Iamus, discussed above, and a wide variety of software that anyone can use to design their own music-composing algorithms (check the link for a list).

Visual content, too, is being generated algorithmically, with programs such as the open-source Structure Synth.

Text has proven perhaps the easiest content of all to generate, as demonstrated by the success of Narrative Science, a firm whose algorithms produce news stories entirely from input data. According to Wired, the stories thus generated have been featured in publications like Forbes, and several distinguished industry insiders quoted in the article predict that such algorithms will be writing the majority of content within 5-15 years.
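A minimal sketch of the underlying idea--structured data in, readable prose out--might look like the following. The template rules, thresholds, and company name are my own invention for illustration, not Narrative Science's actual system.

```python
# Toy sketch of template-based data-to-text generation: a structured
# earnings record goes in, a news-style sentence comes out.

def earnings_blurb(company, revenue, prior_revenue):
    """Turn two revenue figures into a one-sentence story fragment."""
    change = (revenue - prior_revenue) / prior_revenue
    # Word choice is driven by the size and direction of the change
    if change > 0.10:
        verb = "surged"
    elif change > 0:
        verb = "edged up"
    elif change > -0.10:
        verb = "slipped"
    else:
        verb = "plunged"
    return (f"{company} revenue {verb} {abs(change):.0%} to "
            f"${revenue / 1e6:.1f} million, compared with "
            f"${prior_revenue / 1e6:.1f} million a year earlier.")

print(earnings_blurb("Acme Corp", 128_000_000, 104_000_000))
```

Real systems are far more sophisticated, but the principle is the same: once the mapping from data to wording is encoded, stories can be produced at machine speed and scale.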

It's not hard to imagine that with further development, algorithms like these could be tailored to produce content designed to evoke particular emotional reactions and accomplish particular marketing objectives far faster and perhaps eventually even more effectively than content designed by a team of marketers.

Crucially, these campaigns need not actually appear to be marketing campaigns at all, but could be designed to appear entirely consumer driven, like subtle astroturfing. The astroturf would seem to be all the more organic because the algorithms could seed the key memes in multiple locations very quickly, and make minor alterations as events develop. Even prior to that, the algorithms could cultivate the social environment to ensure ideal grounds for diffusion of desired memes, such as fostering social groups amenable to spreading particular memes and resisting opposing memes (like those of competitors).

Where matters really begin to become concerning, however, is when such algorithms begin designing strategies and content to counteract the moves made by opposing algorithms--Pepsi's memetic modelers vs. Coke's, or Republican modelers vs. Democratic modelers--each trying to mold society to the ends of its owners. This would mean a constant arms race, with the algorithms evolving (likely self-evolving, like the newer trading algorithms) to become ever more effective, ever more subtle, and ever more strategically advanced in their manipulations of our culture. This course is naturally fraught with peril, as one might imagine from unleashing multiple conflicting, self-evolving, superhumanly manipulative algorithms, each subtly using the whole of human culture as a chessboard for conflicts that only they could understand.

For an extreme example of how this could go horribly wrong if the algorithms were given free rein, suppose Coke's memetic modelers identify that Pepsi is popular with a certain ethnic group in a country and that its popularity might spread from that group to others, making promotion of Coke to those other groups more difficult. To slow diffusion from this group, the modeler might decide to subtly spark and fan prejudice against it, substantially reducing the group's communication with other groups and preventing it from passing on its love of Pepsi. Because the modeler might be implementing strategies more complicated and far-reaching than Coke's oversight can understand, and because it may be seeding far more memes in pursuit of these strategies than the organization can track, the damage could well be done long before the people at Coke know what is happening, if they ever learn of it at all.
Even aside from such extreme outcomes, memetic modelers could cause aberrant social phenomena comparable to the Flash Crash of May 6, 2010, in which the Dow plunged nearly 1,000 points--its largest intraday point decline to that date--and then largely recovered within minutes, an event that the Securities and Exchange Commission attributed in large part to high-frequency trading algorithms in its report on the subject six months later.
The May 6, 2010 Flash Crash, widely considered to have been caused or accelerated by automated high-frequency traders, is a concerning example of what can happen when many algorithms independently influence complicated systems.

But extensive regulation and oversight, or even outright legal bans, will hardly prevent the use of such modelers. Many governments (especially authoritarian ones) might find such systems very useful for controlling their own populations, as well as those of other countries. China in particular is concerning as a potential user, given its government's near-total control of its social media environment and its penchant for social control.

