AI education, not AI scaremongering

June 30, 2025
Stanislav Lvovsky

Universities must set an example of how rejecting the securitization paradigm in AI thinking can make the adoption of new technology faster, less painful, and beneficial for everyone.

What you see here is a word cloud of public sentiment towards AI among UK adults as of July-August 2024, visualising the 50 words most often mentioned by the 2,204 respondents to the DSIT Wave 4 Survey, published in mid-December last year.

I came across this visual while searching for broader context for an article I had recently read (published literally a day before the DSIT report) about what its author calls the “cheating crisis” in British education. In short, the situation looks roughly like this: generative AI (ChatGPT, Claude, Perplexity, etc.) is used by more than half of students, who mainly turn to these tools for summarising articles and explaining concepts and ideas (36%). 21% of students use LLMs for writing assignments. Of these, 15% edit the model’s output themselves, 5% edit it using the same (or another) model, and 3% don’t edit it at all (data sourced from the HEPI Policy Note on the subject).

AI-generated content detection tools are unreliable (see, for example, here and here), while “humanizing” AI tools, particularly when used alongside some human editing, make it easy enough to outsmart even the relatively reliable detectors.

The tools universities employ to detect plagiarism are ineffective but still in wide use. This leads to high rates of false positives and, accordingly, to many students being falsely accused of cheating, while those using the above-mentioned “humanizing” tools get away with cheating, or with whatever a particular educational institution considers cheating.

This leads to the erosion of trust between all the actors involved. Hardworking students may feel their efforts are undervalued, especially when they are accused of cheating without substantial evidence. More often than not, no meaningful guidance for either educators or students exists.

This lack of meaningful guidance leads to situations like the one described by a friend of mine who is employed by one of the Russell Group universities. As per regulations, he has to report all instances of generative AI use in student assessments. The reality, he explains, is that almost everyone uses these tools. Reporting even the most obvious cases is pointless, since the unofficial practice is that, by and large, the matter is resolved in favor of the student, which, under existing conditions, appears to be the most reasonable practice possible. So now my friend, diligent educator that he is, has to deal with an additional workload: he guides students on LLM use and its ethics, formulating rules for them on the go, informing them about best practices in generative AI, and so on. It’s worth mentioning that his area of expertise is pretty far from machine learning or, for that matter, even linguistics per se. My friend, and many others like him, deserve better than the article’s suggestion that their own use of new technological tools (forced, of course) facilitates student cheating. In fact, they do what has to be done at their own expense and in their own free time.

Clear as this situation is, it cannot be mitigated by prohibiting the tools, tightening control over students, or even by better guidance. Neither students nor teachers, not to mention the LLM technology itself, can be blamed for the present confusion, which sometimes descends into chaos. Surprisingly, university bureaucracy is only partly to blame for the situation. All stakeholders, students, teachers, and education managers alike, are in the same situation as most of us and experience similar feelings (here it is worth taking another look at the picture we started with).

Apparently, LLMs have made the existing system of student assessment obsolete: we need a new toolkit of assessment methods, one that can no longer rely on traditional essays and exams. No top-down reform, however, can deliver it.

Such a toolkit can only be created “bottom-up”: it should emerge from a process of negotiation between teachers and students. For this process to be meaningful and productive, both sides should be well aware of what LLMs are, what they can and cannot do, and what their possibilities (endless, yes) and limitations (still significant) are. After all, once they receive their degrees, students will have to seek work in a world where LLMs are used ever more widely, and in an ever-growing number of areas of human activity.

In other words, the key to the successful adoption of AI technology by schools and universities is, somewhat unsurprisingly, education. Educate both teachers and students about LLMs, and let them figure out the best practices and create whatever regulation they need themselves. Otherwise, the further rapid devaluation of degrees, a concern of the Guardian author, will only continue, I’m afraid, at an ever-accelerating pace.

Stanislav Lvovsky
Poet, historian, researcher. He is the author of books, numerous articles in humanities publications, and scholarly works. At the Prague Media School, he is responsible for negotiations with large language models (LLMs), the training of AI assistants, and the philosophy of consciousness.