The Kids Are Alright

This year, millions of people have tried, and been wowed by, artificial-intelligence systems. That is in no small part thanks to OpenAI's chatbot ChatGPT. When it launched last year, the chatbot became an instant hit among students, many of whom embraced it as a tool to write essays and finish homework. Some media outlets went as far as to declare that the college essay is dead. Alarmed by an influx of AI-generated essays, schools around the world moved swiftly to ban the use of the technology. But many teachers now believe that, far from being just a dream machine for cheaters, ChatGPT could actually help make education better.[1]

What is clear from an MIT Technology Review article is that ChatGPT will change the way schools teach. But the technology's biggest educational outcome might not be a new way of writing essays or doing homework. It is AI literacy.

AI is becoming an increasingly integral part of our lives, and tech companies are rolling out AI-powered products at a breathtakingly fast pace. AI language models could become powerful productivity tools that we use every single day. Much has been written about the dangers associated with artificial intelligence, from biased avatar generators to the near-impossible task of reliably detecting AI-generated text.

When experts are asked what ordinary people can do to protect themselves from these kinds of harm, the answer is almost always the same: the public urgently needs to be better informed about how AI works and what its limitations are, so that people are not fooled or harmed by a computer program.

Until now, the uptake of AI literacy programs has been sluggish. But ChatGPT has forced many schools to adapt quickly and start teaching kids an ad hoc AI 101 curriculum. The teachers interviewed had already started applying a critical lens to technologies such as ChatGPT. One writing tutor and educational developer at the University of Mississippi said she thinks ChatGPT could help teachers shift away from an excessive focus on final results. Getting a class to engage with AI and think critically about what it generates could make teaching feel more human, she says, "rather than asking students to write and perform like robots." And because the AI model was trained on North American data and reflects North American biases, teachers are finding that it is a great way to start a conversation about bias.

A professor of bioscience education at Sheffield Hallam University in the UK allows his undergraduate students to use ChatGPT in their written assignments, but he assesses the prompt as well as, or even rather than, the essay itself. "Knowing the words to use in a prompt and then understanding the output that comes back is important," he says. "We need to teach how to do that."

One of the biggest flaws of AI language models is that they make things up and confidently present falsehoods as facts. This makes them unsuitable for tasks where accuracy is critical, such as scientific research and health care. But an associate professor of instructional technology at Old Dominion University in Norfolk, Virginia, has found the AI model's "hallucinations" to be a useful teaching tool as well. "The fact that it's not perfect is great," she says; it is an opportunity for productive discussions about misinformation and bias. These kinds of examples give hope that education systems and policymakers will realize just how important it is to teach the next generation critical-thinking skills around AI.

For adults, one promising AI literacy initiative is Elements of AI, a free online course developed by the startup MinnaLearn and the University of Helsinki. It was launched in 2018 and is now available in 28 languages. Elements of AI teaches people what AI is and, most importantly, what it can and cannot do. Some who have tried it call it a great resource.

A bigger concern is whether education will be able to get adults up to speed quickly enough.  Without AI literacy among the internet-surfing adult population, more and more people are bound to fall prey to unrealistic expectations and hype.  Meanwhile, AI chatbots could be weaponized as powerful phishing, scamming, and misinformation tools.

The kids will be alright. It's the old-timers we need to worry about!

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  For questions, comments or assistance, please contact the office directly at 1-844-492-7225, or feedback@wapacklabs.com            

Weekly Cyber Intelligence Briefings:

  • Reporting: https://www.redskyalliance.org/
  • Website: https://www.wapacklabs.com/
  • LinkedIn: https://www.linkedin.com/company/64265941

REDSHORTS - Weekly Cyber Intelligence Briefings

https://attendee.gotowebinar.com/register/5504229295967742989

[1] https://www.technologyreview.com/2023/04/12/1071397/ai-literacy-might-be-chatgpts-biggest-lesson-for-schools/
