
Tastemakers: can a robot really know what we’ll want to eat?

A startup aims to serve hyper-personalized meals. Are we willing to trade our data to satisfy private hungers?

“My take is that pretty much all the food and beverage products on the market today are awful,” Jason Cohen tells me, with fierce conviction. “There are literally no products engineered for me.”

Cohen is the founder and CEO of Analytical Flavor Systems, an NYC-based startup that aims to usher in a new era of hyper-personalized food. We are meeting at a swanky Australian coffee shop near the company’s office in the financial district – the kind of place that offers multiple single-origin pour-over options – so he can tell me about his artificial intelligence (AI) platform, Gastrograph, which he says can be used to map taste preferences with unprecedented ease and precision. Cohen is lanky and self-possessed, with hair the color of damp straw. He drinks his coffee with the studied concentration of someone who takes flavor extremely seriously.

Like many startup CEOs, Cohen interprets his own dissatisfaction as a sign of a more general problem. It’s not just that most grocery store offerings, from snack-cakes and yogurt to green tea and IPA, don’t fully thrill our senses. They’re also aimed at the lowest common denominator: there’s nothing out there truly designed for you. The world of food and beverage manufacturing, Cohen says, is still oriented around “the predominant demographic”, the flavors of things tailored to please a coarse approximation of majority appetites. The result, in his view? Endless shelves of products that most people like, but few people really love.

Yes, the processed food industry has gotten pretty good at making food to please the masses. New Yorker staff writer Helen Rosner once argued that anyone who’s tasted chicken tenders loves them, even if they no longer choose to ingest them. New York Times reporter Michael Moss famously explained how snack food companies have learned to lure us by tweaking proportions of salt, sugar and fat to a “bliss point” ratio most human beings find irresistible. But Cohen’s argument is that existing models of flavor design only work in crude broad strokes. And he thinks his AI tool is the doorway into a new landscape where food and beverage companies know more about us than ever before, with product offerings that respond to ever more individualized hungers.

In its quest to make food that knows more about you, Analytical Flavor Systems’ main data collection tool is its smartphone app, Gastrograph. The app’s central feature is a wheel with 24 spokes, where each sliver represents a discrete category of sensory experience – such as “meaty”, “bitter” or “mouthfeel”. Tasters map the contours of flavor perception by tracing the spokes corresponding to the qualities they detect, designating the intensity of each on a scale from one to five. A submenu allows for a more granular record of experience: specifying that “meaty” quality, for instance, as beefy, sausage-like, or more exotic options (moose, kangaroo). Tasters are then prompted to give the product a preference rating, on a scale from one to seven.
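The article's description implies a simple data model for each tasting: up to 24 attribute intensities on a one-to-five scale, plus an overall preference score from one to seven. Here is a minimal, hypothetical sketch in Python of what one such record might look like; the class, field names, and attribute list are illustrative assumptions, not Analytical Flavor Systems' actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of one tasting record, based on the app as described:
# flavor attributes rated 1-5, plus an overall preference score of 1-7.
# Names are illustrative; this is NOT the company's real schema.

VALID_ATTRIBUTES = {"meaty", "bitter", "mouthfeel", "sweet", "sour"}  # subset of the 24 spokes

@dataclass
class TastingRecord:
    product: str
    intensities: dict  # attribute -> intensity on a 1-5 scale
    preference: int    # overall liking on a 1-7 scale

    def __post_init__(self):
        for attr, level in self.intensities.items():
            if attr not in VALID_ATTRIBUTES:
                raise ValueError(f"unknown attribute: {attr}")
            if not 1 <= level <= 5:
                raise ValueError(f"intensity out of range for {attr}: {level}")
        if not 1 <= self.preference <= 7:
            raise ValueError(f"preference out of range: {self.preference}")

record = TastingRecord(
    product="kangaroo jerky",
    intensities={"meaty": 4, "bitter": 1},  # untraced spokes simply go unrecorded
    preference=6,
)
```

The point of the sketch is only that the wheel constrains the input space: a taster's whole experience is flattened into a small set of bounded numbers, which is what makes it tractable for a model downstream.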

The Gastrograph app also gathers data about the person doing the tasting – demographic information, socioeconomic status, past experience with the product, smoking habits and more – as well as information about the ambient environment, such as temperature, barometric pressure and noise levels, all of which can shade our experience of how things taste. “We literally turn on every sensor that the device has,” Cohen explains, including the microphone, light meter and GPS. “We even collect magnetic field data, which is right now not predictive of anything, but one day …” He shrugs. Who knows what the data might reveal about the influence of the magnetosphere on our ability to detect saltiness?

All this is an attempt to crack open that most private realm: the intimate, ineffable world of flavor. The tastes that tantalize and repel us, after all, are highly individual, shaped by biology, culture and personal history. But the power of the Gastrograph AI lies in its ability to model and predict the flavor preferences of increasingly narrow slices of the consuming public, giving food and beverage companies the information they need to develop products optimized for more and more specific sensibilities. Cohen dreams of a day when we’ll each have a Dorito of our own.

An algorithm has no tastebuds; a neural net never gets the munchies. So can a robot brain really tell us what we’ll want to eat? The question is whether AI systems will be able to excel in the sensual, creative work of tasting and developing new foods – and what we stand to gain or lose by inventing foods that really have our number.

How should food be made to taste? This question has vexed manufacturers since the earliest days of factory-made foods, when industrial processing created new challenges – and new possibilities – for flavor. The unprecedented ability to manipulate raw ingredients raised two connected conundrums, both still top-of-mind for the industry today. The first has to do with consistency. No grain of wheat, no cocoa bean, is identical – yet each Oreo that tumbles off the production line must be, as far as possible, indistinguishable from the next. How can nature’s variety be commoditized and rendered uniform, with sensory experience that’s guaranteed? Second, there is the problem of deliciousness. What makes one creme-filled cookie preferable to another creme-filled cookie? How can pleasure be measured?

By the middle of the 20th century, food scientists, chemists, home economists, consumer researchers and experimental psychologists forged a new discipline to address these questions: sensory science. If you’re like most people, you’ve probably never heard of sensory science. But its methods shape basically everything you eat. It’s what ensures the uncanny consistency of Budweiser from can to can, calculates the ideal crunch of a Pringle and determines the optimal cheesiness for a Ritz cracker.

The achievements of sensory science depend on the work of an exclusive group of highly specialized human beings: the trained sensory panel. Sensory panel tasters learn to taste and smell analytically. They learn to describe their experiences using standardized vocabularies of taste, aroma, texture and mouthfeel. They are taught to set their personal preferences aside, and report only what they perceive. Over the years, electronic noses, electronic tongues and other sensing devices have threatened to replace human tasters with automated machine equivalents, but for now the tasting panel remains the primary instrument of sensory science. Today, sensory panels operate within large food and beverage companies; in academic, government and military research labs; and at an international network of sensory evaluation and consulting companies.

Christopher Findlay is the chairman and founder of one of the world’s leading sensory consulting companies, Compusense, with a client roster that includes many of the top food, beverage, and flavor brands. “We keep a pool of 30 panelists,” he told me – not full-time employees, but local Guelph, Ontario, residents who are chosen for specific projects (such as a recent six-week French fry study) and compensated for their time. “They get paid,” Findlay says, “but it’s not all about the money. They cheerfully make the commitment. They come here in the middle of snowstorms.”

Each individual is trained using Compusense’s trademarked Feedback Calibration Method, which, Findlay explains, allows tasters to achieve exceptional precision in describing their flavor experiences. This approach aims for a kind of objectivity, with participants expected to respond in reasonably coherent and consistent ways to the same sample. Ultimately, flavor panels are thought to produce reliable, reproducible information about subjective flavor experiences – allowing companies to figure out how their grapefruit seltzer compares with other grapefruit seltzers, for instance, or determine whether an ingredient change will cause a noticeable alteration to its taste.

Of course, there is the messier problem of consumer behavior – “a different animal,” says Findlay. Compared with trained tasters, ordinary consumers respond in unsystematic, inconsistent ways when asked about what they taste, and whether they like it. Understanding consumer desires, and using those insights to forecast trends, is a large part of what sensory evaluation companies do. Still, a lot of forecasting comes down to intuition – which may be why new product development in the food and beverage industry is generally recognized as a bloodbath. The claim that 90% of new food products crash and burn within a year gets bandied about a lot; that figure is disputed. But however you crunch the numbers, launching a new food or beverage into the highly competitive US market is an expensive and perilous venture. One recent calculation put the cost of new product failure to the US food industry at $20bn a year. Suffice it to say, food and beverage companies are hungry for more reliable ways to foresee the future of flavor.

This is where Cohen believes sensory science has failed to deliver, and where Analytical Flavor Systems sees its opportunity. If he’s right, it could mean a fundamental realignment in the field of flavor science, a shift away from the monolithic declarations handed down by tasting panels. It would mean, instead, embracing the ostensible chaos of popular opinion, and finding new ways to map complex patterns long dismissed by the industry as noise.

Cohen pulls up a slide on his laptop to show me an image of a tasting panel at MillerCoors. It shows five or so people sitting around a table with plastic sample cups of what looks like beer in front of them; they are writing their notes on sheets of paper, smiling and chatting with each other. He can barely conceal his disdain. “This is obviously not a scientific way to develop products,” he says. What’s more, “every single one of these individuals is white. They are going to develop products for national or international launch, and they literally cannot perceive what other demographics around the world perceive when they are tasting these products.”

Conventional sensory evaluation methods, according to Cohen, are both fundamentally flawed and fundamentally misguided. By treating the trained tasters on panels as neutral instruments, who are expected to set aside their personal preferences and achieve consensus on matters of taste, sensory science denies the clear differences between people. When tasters on a panel are “calibrated” to use a standardized language to describe their experiences, “you are literally throwing away data about the variations in perception in the broader underlying market,” Cohen says. (Experts disagree with this characterization of sensory science, but more on that later.)

In other words, Cohen accuses traditional sensory science of artificially straitjacketing tasters in order to achieve a false objectivity. His beef with sensory science dates to his undergraduate days at Penn State, where he was one of the founders of the Tea Institute, a student group devoted to the appreciation and study of tea and tea ceremonies. A tea obsessive, Cohen wanted a way to verify whether a rare green tea was actually what it claimed to be, or to calculate how the taste of a pu-erh changes as it ages. A basic course in sensory science yielded only unsatisfactory answers.

Then he discovered data science. In 2010, Cohen began graduate work with Professor John Yen at Penn State’s College of Information Sciences and Technology, where he built a basic AI platform that recorded tea information and tasting notes. The goal was a program that could tell you the kind of tea you were drinking when you entered what you were tasting. “We achieved superhuman performance on some things in 2012,” he told me, when the system outstripped his and his peers’ ability to pinpoint a tea’s place of origin. This was the basis of the Gastrograph AI. Cohen left graduate school before completing his degree – in order to protect his intellectual property, he says – and devoted himself full-time to Analytical Flavor Systems beginning in 2013. Two members of the Tea Institute, Ryan Ahn and Aislynn van Clief, soon joined him, and now serve as the company’s director of research and creative director, respectively. Along with a data scientist, they form the core of Analytical Flavor Systems’ small team.

Rather than calibrate to a universal baseline, Gastrograph’s guiding principle is that everyone experiences flavor a little differently. The tool, then, looks for demographic patterns in the very places where consensus falls apart. The superior power of machine learning, Cohen explains, is that it can descry patterns in what traditional sensory methods dismissed as noise. Even more, Cohen says, it can locate unanticipated drivers of preference in the information that sensory science discards. “We think of flavor as an infinite-dimensional Hilbert space,” he says, referring to the mathematical concept of a complex algebraic system with an infinite number of variables. What he means is that the sensory possibilities for how a flavor may be experienced are, practically speaking, limitless. Sensory science constrains the data it collects, for instance, by limiting tasters to standard vocabularies – but an AI has no need to respect these limits. The idea is that Gastrograph achieves “superhuman performance” precisely by ruminating on the things we record but fail to notice.
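Cohen's core claim – that averaging over a whole consumer pool hides patterns that reappear once you split the pool by demographics – can be shown with a toy example. The sketch below uses invented numbers and plain Python; it is an illustration of the statistical idea, not the company's model.

```python
from statistics import mean

# Invented preference scores (1-7 scale) for one hypothetical product.
# Pooled together, the ratings hover near the middle of the scale and look
# like noise; split by a demographic variable, a sharp pattern appears.
ratings = [
    ("under_30", 6), ("under_30", 7), ("under_30", 6), ("under_30", 5),
    ("over_30", 2), ("over_30", 3), ("over_30", 2), ("over_30", 3),
]

pooled = [score for _, score in ratings]
by_group = {}
for group, score in ratings:
    by_group.setdefault(group, []).append(score)

pooled_mean = mean(pooled)                               # 4.25: "most people sort of like it"
group_means = {g: mean(s) for g, s in by_group.items()}  # 6.0 vs 2.5: a strong split
```

A panel-average view would report a lukewarm 4.25 and move on; a model that conditions on who is tasting sees one group that loves the product and one that rejects it – which is the kind of structure Cohen says lives in the "noise".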

AI can raise the specter of technology run amok: a post-singularity future with robots in command. Of course, AI-based technologies are already widespread, shaping many of our interactions with the digital world. Essentially, AI describes any system that utilizes machine learning – computational algorithms, including neural networks and natural language processing – to churn meaning from aggregations of data, finding patterns, making predictions and, crucially, displaying the capacity for self-improvement. Whether filtering spam, identifying potential new drugs, or recommending the next show to binge-watch, AI-based systems decrease their error rate over time. They get better at giving us what we seem to want.

In order to do all this, however, AI needs something from us in return: our data.

Source: The Guardian
