Two Paths For AI
People are famously willing to choose dancing pigs over digital security. It’s becoming apparent that many are also willing to choose convenience over their own autonomy. Google announced Allo last week, a messaging app that includes Google’s assistant, a bot that will, among other things, suggest replies for you if you let it follow along with the conversation. If you’re sent a graduation photo, for example, you can choose between “Congratulations!” and “You look great!”. If you’re discussing dinner with friends, the bot presents suggestion “chips” that might, say, point you to Italian restaurants nearby.
This is a great example of presenting a false or constricted menu of choices for “what to do with your time and attention”. Other options will be available, but only in the way that the second page of Google search results is available. It’s possible to go outside the bot’s suggestions, but the path of least resistance is to pick the most immediately appealing one presented. It seems innocuous, but at its core it’s an affront to free will.
There are two possibilities for how Allo and AIs like it will develop.
The first is that we inch, then crawl, then slide towards the dystopian Axiom from WALL-E. It won’t be as caricatured, of course, but choices that today seem on the far extreme of prioritizing convenience over autonomy will become normal and widespread. Depending on your opinion, this scenario is at worst collapse-of-Western-civilization disastrous; at best, it’s a neutral, lukewarm inevitability, since there’s nothing really wrong with it, just like there’s nothing really wrong with moving to a beach town and working part-time for the rest of your life.
The other possibility for AI assistants is that they get really good. In this scenario, the assistant you train recognizes that you, personally, always get annoyed when it tries to sell you something you didn’t explicitly ask for. So when you ask for information about the Taj Mahal, it shows you or reads from the Wikipedia page about the mausoleum in India, even though its bot-instincts are screaming that it should totally tell you about the restaurant by that name down the street that’s promoting a happy hour right now.
Most likely, though, we’ll get both. We’ll see the meme of “if you’re not paying for a product, you are the product” repeat itself. The “free” AIs from tech giants will each be more likely to shunt you towards purchasing things through their own stores. But I hope we’ll also see other AIs emerge that are more malleable and that will learn (that’s what they do, after all) that sticking paid placements into a conversation is very rarely the optimal response.
People have already realized they should pause when interacting with the predecessors to these AIs. I might want to click on the BuzzFeed cat-GIF listicle, but I don’t want to want to click on it. Put another way, I might want to see it right now, but I don’t want the algorithm to count this as a vote for “show me more of this in the future”. This conflict isn’t unique to the digital world, but it’s been accelerated by it. Good (as in high-quality) AIs should recognize this subtlety and be able to draw the distinction.
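To make that distinction concrete, here’s a minimal sketch, in Python, of one way a recommender could log an impulsive click without letting it vote on what gets shown in the future. Everything here is hypothetical (the `Profile` structure, the `deliberate` flag, how deliberateness would actually be inferred); it’s an illustration of the idea, not any real product’s implementation.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Profile:
    # Long-term preference votes, updated only by deliberate signals.
    votes: Counter = field(default_factory=Counter)
    # Every click is logged, but this log never feeds future ranking.
    impulse_log: list = field(default_factory=list)

def record_click(profile: Profile, topic: str, deliberate: bool) -> None:
    """Log every click, but only count deliberate ones as preference votes.

    In practice, `deliberate` might be inferred from saving, sharing, or
    returning to an item later, rather than from a single impulsive tap.
    """
    profile.impulse_log.append(topic)
    if deliberate:
        profile.votes[topic] += 1

def rank_topics(profile: Profile) -> list[str]:
    # Future suggestions draw only on the deliberate votes.
    return [topic for topic, _ in profile.votes.most_common()]

# An impulsive click on the cat-GIF listicle is served but not counted;
# a deliberately chosen long read is.
p = Profile()
record_click(p, "cat gif listicle", deliberate=False)
record_click(p, "long-form essay", deliberate=True)
print(rank_topics(p))  # ['long-form essay']
```

The point isn’t the mechanics; it’s that “what I consumed just now” and “what I want more of” are separate signals, and an assistant that conflates them will optimize for the wrong one.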