For years, each new school year has brought more new technology into classrooms. Some of it is completely harmless; some can be harmful if used improperly. AI belongs to the latter group, but its negative effects can be considerably reduced by walking pupils through concrete examples.

The new academic year has begun with more artificial intelligence (AI) in it than ever before. Thanks to image-generating apps and to ChatGPT, which burst onto the scene at the end of last November, students, teachers and parents have just been through a six-month crash course in AI.

Some educational institutions, such as New York City's public schools, reacted to the chatbot in the most classic way: they banned it, only to lift the ban several months later. In the meantime, awareness of the technology grew, and schools began exploring how to exploit the opportunities it offers, for example to teach children to think critically.

AI does not come only in the form of chatbots, however. From Netflix's programme recommendations to Amazon's Alexa voice assistant answering questions, from interactive social media image filters to the way a smartphone's lock screen is unlocked, AI is present everywhere in children's lives (and, of course, in ours).

Some students are naturally more interested than others, but everyone should know the basics of these new systems, because they are becoming indispensable elements of digital literacy. Students should acquire this knowledge by the end of secondary school, says Regina Barzilay, an Israeli-American natural language processing expert at the Massachusetts Institute of Technology (MIT) and head of AI at the Jameel Clinic, which researches the use of machine learning in health at MIT. This year the clinic ran a summer programme on AI's opportunities in health for fifty-one secondary school students.


— Children should be encouraged to take an interest in systems that play an increasingly important role in their lives.

If understanding how AI works remains the preserve of those with higher education in data science, information technology or related disciplines, inequalities will only widen in the future

— warns the world-famous researcher.

MIT is one of the world's leading players in AI research and development, which is why the institution's magazine, MIT Technology Review, has published advice for students' parents, summarised in six points.


Let’s remember that AI is not our friend 

— reads the first point. It may sound banal, but it is not: this fact is worth keeping alive at all times.

Chatbots are developed to do exactly what their name suggests: chat with the user. They respond in a friendly tone, so children can easily forget that they are conversing with an AI system rather than with a pal. Because they trust it, they believe what it says, although many examples show that its answers and suggestions should be treated with healthy doubt.

Even if it comes across as a likeable person, it is merely a system that, learning from data gathered from the internet, tries to imitate human conversation, increasingly convincingly.

— Children should always be reminded not to divulge sensitive personal information, because everything they type ends up in huge databases, and once it is there, it is almost impossible to get the data removed.

From such data, tech companies can earn even more money without the student's consent and, worse, the data can fall into the hands of hackers

— Helen Crompton, Professor of Digital Innovation at Old Dominion University, cautions parents.


AI models do not replace search engines

Large language models (LLMs, such as ChatGPT) are only as good as their training data. Although a chatbot appears to give well-formed textual answers to questions, not all of them are accurate; in fact, it often makes false statements. And if harmful stereotypes were present in the training data, it will repeat them.

Children need to treat AI's responses just as critically as anyone else's.

Let's also remember that these tools do not represent everyone, for example people who have no internet connection at all. That is another reason to be careful about sharing and passing on an AI's answers, as they can easily be biased.

Although it may seem a quick fix for certain questions, it should never be forgotten: a chatbot is not Google or any other search engine, and it does not replace them. Whatever answer it gives, the pupil must check whether it is true.
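To make the "only as good as its training data" point concrete, here is a purely illustrative toy in Python: a bigram text generator. Real LLMs are vastly larger neural networks, not lookup tables like this one, but the sketch shows the same basic limit: a model can only recombine patterns that were present in its training text.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Build a toy bigram model: for each word, collect the words
    that followed it in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start_word, length=8):
    """Generate text by repeatedly sampling a plausible next word.
    The model can only ever produce words it has already seen."""
    word = start_word
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:  # dead end: nothing followed this word in training
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# A tiny "training corpus": everything this model will ever know.
corpus = "the cat sat on the mat and the cat slept on the sofa"
model = train_bigram_model(corpus)

print(generate(model, "the"))
# Possible output: "the cat sat on the sofa" -- fluent-looking,
# but the model can say nothing about dogs, because no dog ever
# appeared in its training data.
```

Scaled up, the same logic explains why an LLM trained on internet text inherits the internet's gaps, errors and stereotypes.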


Teachers may accuse a pupil of having used AI even when they did not

The third point concerns one of the main challenges facing educators: with the spread of generative AI, masses of people now use the technology, and students are often tempted to have AI do their writing for them. The teacher then has to determine whether they are reading independent work or something produced by a machine.

As the technology evolves, this is becoming increasingly difficult to tell.

Many companies offer their AI products together with a cheating-detection tool. These tools claim to tell whether the author of a text was a human or a machine. In principle, that is: in practice they are often wrong, and they are also easy to fool. There have been repeated cases of teachers unjustly accusing a student of having had AI write their paper when that was not the case. And the opposite has presumably happened at least as often: educators failing to notice student cheating.

The solution is for parents and children to familiarise themselves with the school's AI policy, if one already exists, and for the pupil to be reminded regularly of the importance of complying with it. If unfairly accused, the pupil should stand their ground and, if necessary, show how they used ChatGPT for the paper: what questions they asked, what answers the AI gave, and what they took over from those answers and how.


It is particularly important to be able to show that the machine's text was not simply copied verbatim, without any changes. Using AI is not the problem; uncritical use is, when the learner lets the machine write the work for them.

Recommender systems are designed to keep users hooked, and may even show them falsehoods

It is very important to explain to children how recommender algorithms work. They should not forget that tech companies make money from the advertisements watched on their platforms. That is why, for example, highly effective AI algorithms are developed to recommend YouTube videos: the user is nudged to spend as much time as possible on the platform.

The algorithm monitors and logs the videos we watch, then recommends something similar. If, for example, a child has watched many Messi videos, it "assumes" they will want to see even more of them in the future.
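As a purely illustrative sketch, and not YouTube's actual algorithm (those details are proprietary), the following Python toy shows the loop described above: candidate videos are ranked by how much they overlap with the child's watch history, so a Messi-heavy history produces yet more Messi. All titles and tags are made up for the example.

```python
# A toy content-based recommender: rank candidate videos by how many
# tags they share with the user's watch history.

watch_history = [
    {"title": "Messi top 10 goals", "tags": {"messi", "football", "goals"}},
    {"title": "Messi World Cup 2022", "tags": {"messi", "football", "worldcup"}},
    {"title": "Piano basics", "tags": {"music", "piano"}},
]

candidates = [
    {"title": "Messi skills 2023", "tags": {"messi", "football", "skills"}},
    {"title": "Guitar for beginners", "tags": {"music", "guitar"}},
    {"title": "Ronaldo highlights", "tags": {"ronaldo", "football", "goals"}},
]

# Count how often each tag appears in the watch history.
tag_weights = {}
for video in watch_history:
    for tag in video["tags"]:
        tag_weights[tag] = tag_weights.get(tag, 0) + 1

def score(video):
    """Higher score = more overlap with what the user already watches."""
    return sum(tag_weights.get(tag, 0) for tag in video["tags"])

# Recommend candidates in order of overlap with past viewing.
for video in sorted(candidates, key=score, reverse=True):
    print(score(video), video["title"])
# "Messi skills 2023" ranks first: watching Messi leads to more Messi.
```

The self-reinforcing loop is the point: whatever the user watches, the system serves up more of the same.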

Teemu Roos, who is developing an AI curriculum for Finnish schools at the University of Helsinki, says these services tend to steer users towards harmful content, above all towards false sources of information. The reason: people are drawn to lurid or shocking content. It is easy to get hooked, and false health information, for example, spreads this way.

For children this is even more harmful, because they are easier to manipulate.


Children should be reminded: always use AI safely and responsibly

Even if every student already knows it, it does no harm to keep explaining that generative AI is not only about text: plenty of free deepfake apps exist that can produce photorealistic images of people who never existed, or, for example, place someone's head on another body within moments.

Students are, of course, warned about the dangers of sharing intimate images online, but it also needs to be explained to them that they must not feed their friends' photos into risky apps, not least because unauthorised use can have legal consequences. Teenagers have already been convicted of distributing child pornography after misusing not only their acquaintances' photos but even their own.

Children also need to be talked to about responsible online behaviour, both for their own safety and so that they leave others in peace. Images are more dangerous than words: malicious rumours can do plenty of harm on their own, but the damage is worse when false still or moving images are used to illustrate them.

It is best to illustrate this with online examples. If children see for themselves what AI face-editing can do, or read articles about real cases, the message will sink in far better than a generic warning to "use AI safely and responsibly".


Let's remember that AI can also do a lot of good

Most early discussion of AI in schools highlighted the technology's negative side, in particular the opportunities for cheating. Yet if a pupil uses it sensibly, it can help them a great deal.

For example, a pupil who does not understand a topic can ask the chatbot to walk them through it step by step, to put it differently, or to take on a persona, say that of a geography teacher. It is also useful for drawing up detailed tables, for instance to compare the advantages and disadvantages of something.

In such cases the chatbot produces in moments something the pupil might otherwise spend whole lessons working out. It can compile a glossary of hard-to-understand words, or generate an image from a written instruction, a prompt. It can also evaluate answers, for example by giving hints for history quizzes.
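For parents or teachers who want to experiment with scripting study aids of this kind, here is a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are illustrative assumptions, not recommendations from the magazine.

```python
# Minimal sketch of the study uses described above, using the OpenAI
# Python SDK (pip install openai). Requires an API key in the
# OPENAI_API_KEY environment variable. Model name is an assumption;
# any chat-capable model works.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one question to the chatbot and return its text answer."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A step-by-step explanation, delivered in a persona:
print(ask("Explain photosynthesis step by step, the way a friendly "
          "geography teacher would, for a 12-year-old."))

# A comparison table of advantages and disadvantages:
print(ask("Make a table comparing the advantages and disadvantages "
          "of wind power."))

# A glossary of difficult words:
print(ask("Give me a short glossary of the difficult words in this "
          "sentence: 'Photosynthesis converts light energy into "
          "chemical energy.'"))
```

As the article stresses, each answer still has to be checked, by the pupil, against a reliable source.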

Used properly, AI contributes to the wider spread of digital literacy. It is up to educational institutions and professionals to work out the ideal, personalised ways for students to use it.