Trusting AI may be the first step towards its acceptance and implementation in education.
Yesterday, I was at the Calgary Teachers’ Convention listening to an inspiring speech by Peter Mansbridge, one of the keynote speakers. His presentation discussed “trust”. His underlying message was that the public invests a great deal of trust in teachers, and he touched on the huge responsibility that lies on educators’ shoulders.
He began by reading an excerpt from his speech that described the role of teachers and the respect they deserve. Afterwards, he revealed that he had to read it because he didn’t write it – in fact, no human did! His son had retrieved that excerpt from ChatGPT while Mansbridge was working on his speech, and it launched the speech in a new direction. As Canadians, and specifically as educators, we rely on the trust of the public. Now, with the possibility of educators and students collaborating with AI bots, “trust” is challenged – it looks different and expects more from people. The public may have to trust the influence of robots, not just humans.
Image retrieved from The Training Journal, https://www.trainingjournal.com/articles/features/truth-and-trust-recruiting-and-retaining-talent
This sparked the topic for my blog post, as I began to reflect on how fragile trust is and how challenging it is to gain. The world of education deals with young, curious minds – how can we trust something we don’t know?
So, how will the public react to the possible implementation of AI bots in education with the intention of collaboration and improved student success? Will they be more or less trusting?
How to TRUST an AI Chat Bot
Typically, people do not initially use machines unless they trust them, and because of this, companies or institutions can jeopardize their legitimacy by implementing them (Aoki, 2020). This means that school boards and provinces will have to gain the trust of parents, students, teachers, and staff before fully implementing AI bots into their programming. Aoki (2020) states that in order for governments to make informed decisions, research gaps need to be addressed in the area of AI bots and their societal impacts. As well, it is an AI chatbot’s performance that influences people’s trust in using it (Aoki, 2020).
In terms of education, implementing AI chatbots into the classroom to increase collaboration, student skills, and success would require some “buy-in” from teachers, students, and parents. This may mean a lot of background work trialling AI chatbots to “build trust” with the robots and to genuinely believe that they will contribute to interaction and academic achievement.
As of right now, Aoki (2020) reminds us that there are no established theories around public trust in chatbots; however, there are studies in Japan that offer a general understanding of initial feelings towards the bots. What was found was that public trust depended on the area of inquiry – what was being asked of the AI chatbot and in what area of study (Aoki, 2020).
Image retrieved from Chatbots Life, https://chatbotslife.com/7-ways-to-increase-trust-for-your-chatbot-19f7be70ead8
Can CONNECTION Play a Role in Trust?
When I think of trust, I think of factual evidence, connection, sincerity, and understanding. In order for people to trust AI chatbots, do the bots need to exhibit the same traits? We know that chatbots are typically factually relevant and accurate, as they draw on evidence from the World Wide Web and professional programming. But can they connect, show understanding, and build trust with their audience and users?
A study by Leite et al. (2013), in which a robot played chess against a human partner, tested the level of empathy and social companionship of the artificial intelligence involved. It did this by having the robot respond to the human’s moves with facial expressions and verbal comments. What they found was that when the robot behaved empathically, the human player perceived it as friendlier and as a companion (Leite et al., 2013). If humans can perceive AI as “friendly” and as “companions”, perhaps this will build our trust in them. For chatbots, this would require authentic communication and responses in conversation, since empathy fosters social bonds between people (Leite et al., 2013).
Image retrieved from AI magazine, https://aimagazine.com/machine-learning/the-impact-of-artificial-intelligence-on-kids-and-teens
The topic of “trusting” machines reminds me of what my students said when I was using online dice during our class probability game.
“We don’t trust a machine!” they yelled.
I wonder now if this was because the online dice were not performing in a way that benefitted their game play. They thought the dice were programmed to roll certain numbers. They almost thought the tool was TOO smart for the purpose of rolling dice, or that it was “rigged”.
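For what it’s worth, a typical online die is far less clever than my students feared. As a minimal illustrative sketch (not the actual tool we used, which I have no insight into), a fair digital die is usually just a uniform random draw, and it only becomes “rigged” if someone deliberately programs a bias:

```python
import random

def roll_fair_die(sides: int = 6) -> int:
    """A fair digital die: every face is equally likely."""
    return random.randint(1, sides)

def roll_rigged_die(sides: int = 6, favoured: int = 6, weight: float = 3.0) -> int:
    """A deliberately biased die: the favoured face comes up more often.
    This is what my students suspected, but it has to be programmed on purpose."""
    weights = [weight if face == favoured else 1.0 for face in range(1, sides + 1)]
    return random.choices(range(1, sides + 1), weights=weights, k=1)[0]

if __name__ == "__main__":
    # Roll each die many times and compare how often a six comes up.
    fair_sixes = sum(roll_fair_die() == 6 for _ in range(10_000))
    rigged_sixes = sum(roll_rigged_die() == 6 for _ in range(10_000))
    print(f"Fair die rolled a six {fair_sixes} times out of 10,000")
    print(f"Rigged die rolled a six {rigged_sixes} times out of 10,000")
```

In other words, my students’ suspicion was really a question about who wrote the code and whether we trust them – the same question we are now asking about AI chatbots.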
I’m not sure I will ever completely “trust” an AI chatbot like ChatGPT. Our smart devices listen to us, track our data, and personalize advertisements and information. Who’s to say AI chatbots won’t do the same?