AI (artificial intelligence) is such a staple of dystopian science fiction that it has almost become a cliché. The mere mention of AI conjures visions of cruel robot overlords bent on wiping out the human race. In 2014, Stephen Hawking and Elon Musk echoed these fears, warning of extinction at the hands of our own technological creations. Yet the research industry marches steadily towards an automated future led by machine-learning algorithms and smart models. Surely, then, we are building the very future we were warned about?
Well, no. The future of market research is one in which researchers and AI work in unison to generate insights that would previously have been unachievable. Understanding why artificial intelligences play a collaborative rather than a hierarchical role in research requires understanding what it means to be human – and what it does not.
Replicating vs. Understanding
Replicating a human decision-making cycle is one achievement; understanding it – and, more importantly, empathizing with it – is another. That empathy is the secret to good market research: going beyond words and statistics to come to grips with the feelings, perceptions and values that drive behaviour. It is this valley that artificial intelligences must cross to make the transition from tool to researcher.
Yet even the most advanced machine-learning models (such as Google's seq2seq experiments) display only a limited degree of understanding. This is because machine intelligence is the capacity to learn and adapt independently of human feedback. That is not the same as free thought – an important distinction. At their heart, machine-learning models learn by trial and error. It is psychological conditioning, digitized.
The idea of conditioning as a form of learning can be traced back to the early 1900s, when Ivan Pavlov's now-controversial experiments popularized it. Repeated in several forms, these studies demonstrated that repetition and feedback could produce a physical (non-conscious) response by building mental associations with stimuli. For instance, a dog learned by association that people wearing lab coats brought food (the stimulus) – and over time it began drooling at the sight of lab coats rather than the food itself.
Operant conditioning, developed by B.F. Skinner almost three decades later, drew on this concept to explain how positive reinforcement and negative reinforcement (the removal of adverse stimuli) strengthened associations and acted as a more effective mechanism for changing behaviour. In several ways, this operant conditioning paradigm can be compared to the machine-learning algorithms that underpin today's AI models. The algorithm learns which answers are right and which are wrong based on feedback – either automatic or manual.
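The parallel between operant conditioning and machine learning can be made concrete with a minimal sketch. The following toy example (illustrative only; the action payoffs and parameters are invented) uses an epsilon-greedy bandit: the algorithm tries actions, receives noisy rewards, and strengthens its estimate of whichever actions are reinforced – trial and error, digitized.

```python
import random

def train_bandit(rewards, steps=5000, epsilon=0.1, seed=0):
    """Trial-and-error learning: estimate each action's value from reward feedback."""
    rng = random.Random(seed)
    values = [0.0] * len(rewards)  # current estimate of each action's payoff
    counts = [0] * len(rewards)
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-known action
        if rng.random() < epsilon:
            action = rng.randrange(len(rewards))
        else:
            action = max(range(len(rewards)), key=lambda a: values[a])
        # "Reinforcement": a noisy reward strengthens or weakens the estimate
        reward = rewards[action] + rng.gauss(0, 0.1)
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]
    return values

# Action 1 pays off most; repeated feedback teaches the model to prefer it
estimates = train_bandit([0.2, 0.8, 0.5])
print(max(range(3), key=lambda a: estimates[a]))
```

Like Skinner's subjects, the model has no insight into *why* an action is rewarded – it only learns that it is.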
Comparing Today’s AI with the Future’s
While today's AI is able to learn through operant conditioning, its use in market research is still somewhat limited. Its function remains one of automation – taking over basic, repetitive tasks. Activities that could feasibly be automated include the review of quantitative data, the sending of customized prompts to survey non-completers and even the selection of suitable testing methods. All of these can be learnt via trial and error (given adequate time and computing power).
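One of these automation tiers – prompting survey non-completers – can be sketched in a few lines. The record fields, dates and message wording below are hypothetical, purely to illustrate the kind of repetitive task being handed over:

```python
from datetime import date

# Hypothetical respondent records: name, completion status, last activity date
respondents = [
    {"name": "Ana",  "completed": True,  "last_active": date(2024, 3, 1)},
    {"name": "Ben",  "completed": False, "last_active": date(2024, 2, 20)},
    {"name": "Caro", "completed": False, "last_active": date(2024, 3, 3)},
]

def draft_reminders(records, today, idle_days=7):
    """Flag survey non-completers who have gone quiet and draft a reminder each."""
    reminders = []
    for r in records:
        idle = (today - r["last_active"]).days
        if not r["completed"] and idle >= idle_days:
            reminders.append(f"Hi {r['name']}, your survey is still open - could you finish it?")
    return reminders

for msg in draft_reminders(respondents, today=date(2024, 3, 5)):
    print(msg)
```

The rule here is hand-written; the point of the machine-learning approach is that thresholds and timing like `idle_days` could instead be tuned automatically from response-rate feedback.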
But today's artificial intelligences cannot become a researcher, or even a moderator. Those are activities that demand empathy as well as comprehension, and emotional intelligence as well as awareness. An effective moderator must do more than register the emotion behind words – they need to understand why the subject is emotional and what deep-set (potentially unconscious) values drive it.
It is in these dynamic activities that AI technology must go beyond automation to tackle the human dimension of market research. Attempts to reproduce neural networks and organic learning have sown the seeds of more natural, human-like intelligences. Will this lead to a day when computers understand what it means to be human better than we do ourselves? If science fiction's macabre, dystopian future is to be believed – then yes, definitely.
But in reality, no. Even if science could create the perfect, organically learning artificial intelligence, it would be able to do little more than today's researchers can – form an opinion. Why? Because the explanation behind an emotion can only ever be hypothesized, never confirmed. Whether we embrace AI or not, we do not yet have an artificial researcher in our possession.
Thus, instead of focusing on artificial intelligences that attempt to replicate human judgment, the research industry would be better served by those that complement it. Automation is the correct starting point. Removing the need for boring, time-consuming activities frees researchers' time to focus on what they do best – thinking, reflecting and feeling.
Where artificial intelligence in research goes next is up to the industry as a whole. But one thing is certain: its function is to complement and enhance, not substitute and destroy. For now, we are safe from the robot overlords.