By Kimberly Ramkhalawan
May 9, 2023
It used to be that people looking to cheat on their academic papers would hire someone online or off campus to write them. But with artificial intelligence like ChatGPT now mimicking human writing in remarkable ways, those in the academic world are forced to arm themselves with whatever tell-tale signs exist, if any at all, for discerning the authenticity of research work presented by students.
This was the focus of the UWI Vice-Chancellor’s Forum, titled “Artificial Intelligence (AI) – A Blessing or Curse for Higher Education”, which featured a panel of speakers from the tertiary education world.
Dr Margaret Niles, currently Manager of the Research, Insights and Product Innovation Department at the Caribbean Examinations Council (CXC), sees AI as a testament to the power and creativity of the human mind, while recognising in the same breath that its genius does not play fair, and that it has the potential to impact teaching and learning at any level.
She adds that it is an opportunity to recalibrate the teaching-learning transaction, which raises the question of how the academic world coexists with this tool: “does it recognize it, resist it or circumvent it”. In her opinion, however, it ought to be used as a teaching tool, offering ways in which students can “reinvent themselves so they have the necessary tools in which they can exist in a rapidly changing labour market”, thereby “harnessing the benefits of AI by understanding it and mitigating the challenges that it can present”.
Patti West-Smith, Director of the Customer Engagement team at Turnitin, the company responsible for checking plagiarism and safeguarding the academic integrity of papers around the world, views AI as neither a blessing nor a curse, but a tool, one that offers opportunities as much as obligations and ultimately comes down to the individuals using it. West-Smith says it forces us to rethink the ways and processes by which we write and assess, but asks us to start from a place of understanding: what it does well, what its weaknesses are, and what risks come with it. That, she says, calls on educators to position themselves to support their students by letting them know how to harness it, what its limitations are, and how to use it ethically.
She notes that if this is not done, students will be shortchanged when they go out into the world, and she also calls for equity in ensuring all students have access to these tools.
In attempting to answer whether AI is a blessing or a curse, Dr Emma Sabzalieva, Head of Research and Foresight at the UNESCO International Institute for Higher Education in Latin America and the Caribbean (IESALC), asks who is actually benefiting from AI, and raises the equity question posed by the fact that ChatGPT is not available in some parts of the world.
Arianna Valentini, Research and Foresight Analyst with UNESCO IESALC, says ChatGPT carries bias based on the information it takes in and how it is programmed.
This presents many challenges, including the reproduction of those biases and an inability to distinguish right from wrong, since the tool is relatively new and learns from whatever is present on the world wide web.
However, Professor Justin Robinson, Pro Vice-Chancellor for the Board for Undergraduate Studies at The University of the West Indies, posed the question to the panel of whether ChatGPT should be banned altogether by universities and higher learning institutions, with the aim of preserving original thinking. Instead, he found most of the panel already leaning towards being pro-ChatGPT.
Adding her view on the matter, West-Smith notes that while there is a normal and natural fear associated with this tool when it comes to academic integrity, which aims at supporting students to think originally, bans often come from a place of fear, and there can be downsides to prohibiting its use. She instead points to the scenario in which the world of work will require its use, which prompts teachers to put proper guardrails in place for using these tools ethically; otherwise students will create their own set of rules and ethos about what is appropriate or not. She is calling for balance in its use.
She likens any attempt to ban these tools to “a fool’s errand as the gates have been thrown wide open”, one that will likely continue to perpetuate privilege. West-Smith says the emphasis should be on how to manage the risk, on training around the language models, and on including AI as part of the new digital literacy that higher education institutions will be forced to take on.
But human literacy is also at risk, according to Dr Niles, and keeping the edge over AI means “instilling distinctly human traits such as ethics, leadership, entrepreneurship, values”, all of which, she says, form part of an understanding of intercultural relationships in a context that prepares one not only for lifelong learning, but to be adaptable and, in the end, a better citizen.
As to how platforms like Turnitin gatekeep original work, West-Smith says the company has implemented an AI detection tool for reviewing writing, though she explains that divulging how it works would take too much time. She notes that Turnitin’s Vice President of AI, Eric Wang, views AI-generated writing as, so far, “exceptionally average”, lacking the nuances found in human language, and that it will be some time before AI comes up to par in grasping human expression.