In the highly competitive AI market, blaming “human error” can help companies hide serious flaws in their systems.
*Blaming employees for AI deficiencies: a new business strategy?*
Amid a staggering 9.1 percent year-over-year increase in Canadian grocery prices as of June 2023, Microsoft published an article on MSN.com offering travel advice for Ottawa, Canada's capital. The article drew attention for one peculiar recommendation: a visit to the Ottawa Food Bank, with the advice that “Life is already difficult enough. Consider going into it on an empty stomach.”
After facing ridicule from commentators, the article was quickly removed, and Microsoft attributed the issue to “human error,” clarifying that the content resulted from a blend of algorithmic techniques and human review, not solely from a large language model or AI system.
While it's difficult to know precisely what transpired, blaming a human reviewer appears somewhat disingenuous. A reviewer may well have missed the problem, but the content itself likely originated from a machine. Given Microsoft's history of algorithmic missteps, such as the ill-fated chatbot Tay and the erratic Bing AI language model, whose behavior Bill Gates partially attributed to user "provocation," the involvement of artificial intelligence in this incident isn't far-fetched. Whoever was actually behind the Ottawa Food Bank mishap, Microsoft's response to it raises intriguing questions.
Contrast this incident with a 2017 revelation involving Expensify, a purportedly AI-powered startup that turned out to be far less technologically advanced than it claimed. Reports exposed that Expensify relied on Amazon Mechanical Turk, a platform where human workers perform tasks that algorithms cannot, to process sensitive financial documents. The episode fueled a common criticism of the AI industry: that overhyped AI conceals the essential human labor operating behind the scenes, a phenomenon often referred to as "Potemkin AI" or "fauxtomation."
Microsoft's mishap, however, reveals a different dynamic. Rather than human workers hidden behind a faux AI facade, it shows an AI hidden behind an anonymous human. Here, human labor serves as a scapegoat, shouldering the blame for a machine's malfunction.
To delve deeper into this phenomenon, we can draw on anthropologist Madeleine Clare Elish's concept of "moral crumple zones," which describes how responsibility for an action may be misattributed to a human actor who had only limited control over an automated or autonomous system. While Elish's study focuses on earlier automated systems, such as aviation autopilot, her insights hold relevance for the evolving landscape of AI ethics: moral crumple zones can be weaponized by parties interested in deflecting scrutiny from their machines. As the Ottawa Food Bank story illustrates, an anonymous human error can conveniently absorb the blame for a complex automated system's failures.
This is a significant development because it suggests that the AI industry is transitioning from merely feigning AI deployment to actually implementing it. Driven by competition, these implementations often occur prematurely, increasing the risk of failure. With the advent of consumer-facing AI technologies like ChatGPT and the proliferation of large language models, these failures are more visible to the public and carry greater consequences.
While the Ottawa Food Bank incident and its use of a moral crumple zone had relatively minor repercussions, serving mainly to protect Microsoft's public image, other instances of algorithmic moral crumple zones have more serious implications. In 2022, an autonomous semi-truck from startup TuSimple unexpectedly veered into a concrete median on a highway. While TuSimple attributed the accident to human error, analysts contested this explanation. In 2013, during Vine's heyday as a social media platform, explicit content surfaced as the "Editor's Picks" recommended video on the app's homepage. Again, the company blamed "human error."
Whether human error genuinely played a role in these incidents is almost irrelevant. The crucial point is that the AI industry can use moral crumple zones to its advantage, if it hasn't already. Notably, Madeleine Clare Elish now serves as the Head of Responsible AI at Google, suggesting that this conceptual framework may already inform Google's public-facing AI endeavors. The Ottawa Food Bank incident underscores the need for AI users, and those affected by its data processing, to scrutinize how blame is assigned within complex sociotechnical systems. It prompts us to question whether explanations rooted in human error are too readily accepted, and what aspects of the system they divert our attention from.
