This year I spent a considerable amount of time researching Artificial Intelligence as it was one of the major hypes in technology (and will likely continue to be).
I’ve written a few pieces on the issue and done several interviews and podcasts. What I consider most important, however, is the Artificial Stupidity problem, an argument I developed in a piece of the same name. Its core thesis reads:
My concern in brief: We increasingly cede control over several affairs and hand it to machines and algorithms that are marketed and sold to us as artificial intelligence. Given the current state of the technology, however, those ‘intelligent’ algorithms, assistants and tools are still very binary. They are mono-dimensional in their ‘thinking’ and largely directed by the underlying assumptions that were implemented by their creators. Hence, if those assumptions turn out to be wrong, we might have accumulated a lot of risk by basing our decisions on flawed, artificial ‘reasoning’ or, in short, stupidity.
It’s for exactly those reasons that I’m highly wary when I come across articles like “Rise of the Strategy Machines” by Tom Davenport. While Davenport himself is rather reasonable in his conclusions, bold, suggestive claims like the headline contribute to the risk I outlined in the Artificial Stupidity piece: namely, that businesses deploy AI systems in domains the technology is not well equipped for, because marketers of said technology tell them otherwise while they lack the technological and data know-how to assess it critically. After all, the buyers often sit in functional departments, a point well articulated by Ben Thompson here.
I don’t deny that there are great use cases for A.I. in businesses. Quite the opposite. However, when it comes to strategic decision-making, a complex domain and thus one where current A.I. approaches show clear deficits, the technology isn’t fit for the task. It’s not only a problem of accessing the necessary data; it’s a problem of not knowing what the necessary data is in the first place. (Of course, this comes down to your definition of “strategy”. Certain narrow types of decisions can well be handled or assisted by algorithms, but they’ll fail under any broader reading of the term.)
So, don’t expect A.I. systems to successfully run your business anytime soon. And definitely build up the necessary techno-literacy to critically review any solution you consider (and make sure it’s used in any such buying decision).
You can read my original piece here.