Artificial Intelligence Made Me Write This…

Scientists and technology visionaries have envisioned a day when computers become so powerful that they become smarter than the human race. There is no shortage of science fiction stories and movies about robot uprisings. We are still far from that scary scenario (we hope), but at the same time, artificial intelligence (AI) is no longer sci-fi. AI applications abound in business today, and the technology is now being used in some creative professions.

New behavioral experiments by Alok Gupta from the University of Minnesota and Andreas Fügener, Jörn Grahl, and Wolfgang Ketter from the University of Cologne in Germany bring a cautionary tale for current AI applications. The research, published in late 2021, uncovers risks, consequences, and solutions to overreliance on AI in business and creative decisions.[1]

Gupta and his colleagues studied how humans and AI collaborate and complement each other to make decisions. They developed experiments with a simple image classification task (identifying the breed of a dog) to see whether and how AI-supported decision-making improved task performance. Their first major finding was that humans are not very good at knowing when they should delegate decisions to AI. As a consequence, they can end up relying on an AI tool even when it recommends the wrong path. When making such mistakes, a team of humans that uses an AI tool can perform worse than a team that does not use it. Unfortunately, humans often do not second-guess an AI recommendation, and this may prove to be a dangerous human fault.
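As a rough intuition for this finding, consider the toy Monte Carlo sketch below. It is not the researchers' actual experiment, and the accuracy numbers are invented for illustration: an "AI" that is strong on easy images but weak on hard ones is paired with a "human" whose accuracy is more even across both.

```python
# Toy sketch (illustrative numbers only, not the study's data): compare three
# policies for combining a human and an AI on an image classification task.
import random

random.seed(0)

N = 10_000  # number of simulated classification decisions

# Assumed accuracies: the AI excels on easy images but struggles on hard ones;
# the human is more consistent across difficulty levels.
P_AI = {"easy": 0.95, "hard": 0.55}
P_HUMAN = {"easy": 0.80, "hard": 0.75}

def trial(policy):
    """Simulate one decision and return True if the final answer is correct."""
    difficulty = random.choice(["easy", "hard"])
    ai_correct = random.random() < P_AI[difficulty]
    human_correct = random.random() < P_HUMAN[difficulty]
    if policy == "always_defer":   # over-reliance: take the AI's answer as-is
        return ai_correct
    if policy == "selective":      # educated human: delegate only the easy cases
        return ai_correct if difficulty == "easy" else human_correct
    return human_correct           # "human_only" baseline

for policy in ("human_only", "always_defer", "selective"):
    accuracy = sum(trial(policy) for _ in range(N)) / N
    print(f"{policy:>12}: {accuracy:.3f}")
```

Under these assumed numbers, always deferring to the AI scores worse than the human working alone, while delegating only the cases the AI handles well scores best, mirroring the pattern the researchers describe.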

In creative professions, this human flaw can have important implications. AI is being used by content creators and marketers to produce media and entertainment content, and it is increasingly being allowed to make important creative decisions. For example, AI has been used to create news stories, to produce personalized ads, and to make film production go/no-go decisions.

Over-reliance on AI for these content creation decisions may negatively affect revenue, as consumers start to defect due to unchanging content. Gupta and his colleagues showed experimentally that a viable solution to avoid suboptimal AI-based decisions is to educate humans on the limitations of AI so that they can decide when they should ask for AI assistance and when they should make decisions for themselves. Gupta states, “Using AI for tasks it can do more efficiently is not bad, but over-reliance on AI advice can lead to bad decisions that just get reinforced over time.”

This new research also shows that there can be long-term consequences of overreliance on AI, which could get us to a sci-fi doomsday scenario, not so much because computers become more intelligent, but because humans become ‘dumber’ by losing their unique knowledge.

Collective intelligence emerges in humans and society when diverse minds with access to different data sources come together to find solutions to problems, a phenomenon also known as the wisdom of crowds. Gupta and his team show that overreliance on AI can lead to a decrease in the diversity of thinking, leading to suboptimal collective performance. Gupta added, “Essentially, humans start mimicking AI and stop taxing their own brains, therefore they all act similarly, like Borgs.”
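To see why copied answers undercut the wisdom of crowds, here is a minimal sketch with made-up numbers (it is not from the study): fifty independent, noisy guesses average out close to the true value, while fifty people repeating the same single AI estimate inherit that estimate's full error.

```python
# Toy wisdom-of-crowds sketch (illustrative numbers only): independent guesses
# cancel each other's errors; guesses copied from one AI estimate do not.
import random
import statistics

random.seed(1)

TRUE_VALUE = 100.0   # the quantity the crowd is trying to estimate
CROWD_SIZE = 50
TRIALS = 2_000

def crowd_error(mimic_ai):
    """Absolute error of the crowd's average guess in one simulated round."""
    ai_guess = random.gauss(TRUE_VALUE, 10)  # one shared AI estimate
    if mimic_ai:
        guesses = [ai_guess] * CROWD_SIZE    # everyone parrots the AI
    else:
        guesses = [random.gauss(TRUE_VALUE, 10) for _ in range(CROWD_SIZE)]
    return abs(statistics.mean(guesses) - TRUE_VALUE)

independent = statistics.mean(crowd_error(False) for _ in range(TRIALS))
mimicking = statistics.mean(crowd_error(True) for _ in range(TRIALS))
print(f"average error, independent crowd:  {independent:.2f}")
print(f"average error, AI-mimicking crowd: {mimicking:.2f}")
```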

Note: The Borg are a fictional alien collective from the Star Trek franchise.

A good example is social media platforms' overreliance on AI engines to power news feeds. If the AI algorithm converges to certain types of personalized content for a group of individuals, it can lead to an echo chamber within that group. Group members, in turn, can become content with a consistent, self-indulging, AI-filtered message, which is reinforced by peers in their social circle. Is this already happening in some circles?
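A small hypothetical simulation can make that feedback loop concrete. The engagement model below is invented for illustration and is not any real platform's ranking algorithm: a feed shows topics in proportion to past engagement, each showing reinforces that topic, and the measured diversity of the feed collapses toward a single theme.

```python
# Toy echo-chamber sketch (hypothetical model, not a real platform's algorithm):
# a rich-get-richer feed concentrates on whatever it already showed.
import math
import random

random.seed(2)

TOPICS = ["politics", "sports", "science", "arts", "travel"]
weights = {t: 1.0 for t in TOPICS}  # the feed starts out balanced

def diversity(w):
    """Shannon entropy of the feed's topic mix (higher = more diverse)."""
    total = sum(w.values())
    return -sum((v / total) * math.log(v / total) for v in w.values() if v > 0)

for step in range(1, 201):
    # The feed shows a topic in proportion to accumulated engagement...
    shown = random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]
    # ...and each showing reinforces that topic (the feedback loop).
    weights[shown] *= 1.2
    if step % 50 == 0:
        print(f"step {step:3d}  diversity = {diversity(weights):.3f}")

print("dominant topic after 200 steps:", max(weights, key=weights.get))
```

The printed diversity score starts near the maximum for five topics and falls toward zero as one topic crowds out the rest, which is the echo-chamber dynamic described above.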

Those who rely too much on news from social media platforms, which in turn rely too much on AI tools, can slowly become drones, subject to the echo chambers of AI-enabled news feeds where diversity of thought is gradually lost. As different groups separate in their collective thinking, they cannot appreciate different perspectives, and at one extreme, they live in alternative realities.

The overuse of AI can turn humans into drones in the long run. For media and entertainment firms, it can start with suboptimal content creation decisions that can then have adverse social outcomes. The solution, according to this new research, does not seem to be that difficult, and you can already be a part of it: share this cautionary tale with your groups, and remind them that AI has limitations and that overusing it can kill creativity and diversity of thought.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization that has long collected and analyzed cyber indicators.  For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@wapacklabs.com    


[1] Nelson Granados, Forbes, January 31, 2022: https://www.forbes.com/sites/nelsongranados/2022/01/31/human-borgs-how-artificial-intelligence-can-kill-creativity-and-m
