Artificial Intelligence (AI) falls into three general categories: reactive, predictive, and generative.
Reactive AI includes tools such as mobile phone FaceID and voice assistants;
predictive AI builds personalized music or video recommendations; and
generative AI, such as ChatGPT and other large language models (LLMs), creates new content.
Central to AI literacy are:
Recognizing AI’s power while understanding its bias and potential for mistakes;
Identifying privacy risks and addressing ethical considerations;
Building skills including how to interact with AI through prompts;
Thinking critically, using guiding questions, and checking AI responses; and
Adopting academic honesty and accountability techniques.
Safety, risk of bias, equity, and inclusiveness in the use of AI must be considered; and
accuracy, authenticity, and copyright are also key considerations when using AI.
Academic honesty and accountability procedures should be adopted, and critical thinking must be fostered.
Educators need to strive for responsible, inclusive, and respectful integration of AI.
A team should be designated to:
Create or adapt AI guidelines,
Educate teachers about the impact and responsible use of GenAI, and
Develop resources for students to identify and manage biases and inaccuracies in AI.
In their courses, educators should:
Have a clear policy on when and how AI tools can be used.
Teach students about the ethical use of AI tools and about academic integrity.
Design assignments that require critical thinking and creativity.
Implement process-oriented assessments (e.g., drafts, outlines, presentations) in which students ‘show their work’.
Incorporate in-class, live activities and presentations.
Currently, no tools can reliably detect AI-generated text. AI detection tools such as Turnitin frequently misflag writing by ESL students as AI-generated (Liang et al., 2023) and produce both false positives and false negatives. Using these services also means handing student work to third parties, who may use it however they choose. Additionally, AI tools like Claude cannot reliably confirm whether they authored a specific text.
See: Anderson et al. (2023); Elkhatat et al. (2023); Foltýnek et al. (2023); Gegg-Harrison & Quarterman (2024); Liu et al. (2023); Sadasivan et al. (2023); Waltzer et al. (2023); Weber (2023); Weber-Wulff et al. (2023); https://www.bloomberg.com/news/features/2024-10-18/do-ai-detectors-work-students-face-false-cheating-accusations; and https://www.linkedin.com/news/story/when-ai-cheat-detection-goes-wrong-6200812/