
Evidence Summary: Artificial Intelligence in education

Dr Carmel Kent, Senior Research Fellow, UCL EDUCATE

Artificial Intelligence (AI) is everywhere, but the messages we get about it are mixed and contradictory. It is at once dangerous yet supportive, all-consuming yet freeing. AI feels like a moving target. If there is one definitive fact about AI, it is that it will require us to learn throughout our lives.

To understand AI, we first need to understand human intelligence and human learning, and to identify the difference between AI and Human Intelligence (HI) if we are to reap its potential. Since our students and children will experience the greatest impact of AI – not only from an employment perspective but also from cultural and sociological perspectives – we need to evaluate how AI impacts education.

What (or who) is AI?

In 1956, John McCarthy (McCarthy & Hayes, 1969) began what became known as the Dartmouth College workshop, to “proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it”. McCarthy coined the term Artificial Intelligence, and defined it as “the science and engineering of making intelligent machines that have the ability to achieve goals like humans do”. This definition is instrumental to our understanding of AI today.

Machines have the advantage of scaled, fast storage and processing capabilities, whereas humans can solve complex problems through a complicated (and sometimes hard to trace) interaction network of sensory cues, memory heuristics, emotions, experiences and cultural contexts. Machines cannot yet fully imitate the complex phenomenon of HI.

Another element of McCarthy’s definition emphasises AI’s interdisciplinarity.

Looking at the interdisciplinary roots of AI, Russell and Norvig (2016) have extended McCarthy’s definition to four forms of artificial achievement of human goals, as summarised in Figure 1, taken from their book.

Figure 1: Some definitions of AI, organised into four categories (Russell & Norvig, 2016)

Our curiosity about intelligence can be traced back to the Greek philosopher Aristotle (384–322 BC), and rational thinking and logic have heavily influenced thinking about AI. Many of the first AI systems to appear – such as tutoring systems or medical expert systems, such as that for monitoring psychiatric treatment (Goethe & Bronzino, 1995) – were based on logical rules and deductive reasoning, because well-structured, formal rules are very easily coded into machine language.

However, these AI systems are hard to maintain, as the number of rules needed to address a complex problem can very quickly reach the hundreds or thousands. Gödel (1931) showed mathematically that deductive thinking is limited, and that there are always observations that cannot be obtained from generalised rules (Russell & Norvig, 2016). Not everything can be computed by an algorithm in the form of a set of rules to be followed to reach a conclusion or solve a problem.

By contrast, inductive reasoning – as proposed by Hume (1739) in the form of what is now known as the principle of induction (“bottom-up logic”) – describes universal rules that are acquired by generalising from repeated exposure to observations. For example, we might induce that since the sun was shining every morning in the last week, it will also shine tomorrow morning.

Machine Learning (ML) is usually based on inductive reasoning, in the sense that ML models are developed on the basis of statistical patterns found in the observed data.
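To make the inductive step concrete, here is a minimal Python sketch in the spirit of Hume’s example. The observations are invented, and the ‘model’ is nothing more than an observed frequency generalised into a prediction:

```python
# Inductive 'learning' in miniature: generalise a rule from past observations.
# Observations of last week's mornings (invented data): True = sunny.
observations = [True, True, True, True, True, True, True]

# The induced 'model' is just the relative frequency of sunny mornings.
p_sunny = sum(observations) / len(observations)

# Generalise: predict tomorrow from the pattern seen so far.
prediction = "sunny" if p_sunny > 0.5 else "not sunny"
print(f"Induced P(sunny) = {p_sunny:.2f}; prediction for tomorrow: {prediction}")
```

As the chicken parable later in this article illustrates, the induced rule is only as good as the observations behind it.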

What does it mean to ‘think humanly’?

Cognitive psychology and neuroscience (the study of the nervous system, particularly the brain) have led to much of our understanding of human cognition, and to our thinking about AI systems as ‘thinking humanly’. For example, Rashevsky (1936) was the first to apply mathematical models to the study of the nervous system, showing that neurons (which are ‘observable’) can lead to thought and action (Russell & Norvig, 2016).

Machine Learning (ML): a sub-field of AI

Machine learning is a sub-field of AI, associated with machines’ ability to learn inductively – that is, to “improve automatically through experience” as phrased by Tom Mitchell, one of the field’s early contributors. ML applications process sets of historical observations (data records) to infer new patterns or rules arising from the data itself. Whenever the data is changed, an ML algorithm ‘learns’, picking up the changed or modified patterns to present or predict a new result.

A classic illustration of ML can be seen in the way that Google has transformed some of its technologies, such as speech recognition and translation. ML frees AI from having to formalise and maintain coded human knowledge, but replaces it with a dependency on historical data. ML is generally sensitive to the data it is trained on. If the data is inaccurate, irrelevant, insufficient or missing, an ML application will not be able to meaningfully induce rules or models from it.
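As a hedged illustration of this data dependency, the sketch below (assuming scikit-learn is installed, with invented study-hours data) trains a simple classifier and then retrains it after the data changes; the induced rule shifts accordingly:

```python
# A sketch of how an ML model 'learns' from data and changes when the data changes.
from sklearn.linear_model import LogisticRegression

# Historical observations: hours of study -> passed exam (invented labels).
X_old = [[1], [2], [3], [8], [9], [10]]
y_old = [0, 0, 0, 1, 1, 1]

model = LogisticRegression().fit(X_old, y_old)
print(model.predict([[5]]))  # prediction induced from the old pattern

# The data changes (new records arrive); retraining picks up the new pattern.
X_new = X_old + [[4], [5], [6]]
y_new = y_old + [1, 1, 1]
model = LogisticRegression().fit(X_new, y_new)
print(model.predict([[5]]))  # the induced rule has shifted with the data
```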

Two commonly used types of ML algorithm: supervised and unsupervised learning

The term ‘supervised learning’ is used to describe ML algorithms that are trained on a data set that includes the outcome values. An example of supervised machine learning is that of an image processing system trained on a set of images that have been annotated by humans to indicate whether or not each image includes a car. The supervised ML algorithm will try to learn and predict whether a new, unannotated image includes a car.
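A minimal sketch of this idea, assuming scikit-learn and substituting two invented numeric ‘features’ for real image pixels:

```python
# Supervised learning sketch: train on human-annotated examples, then predict
# for a new, unannotated one. The two features stand in for real image pixels.
from sklearn.tree import DecisionTreeClassifier

# Each row summarises one image by two invented features; labels are human annotations.
X_train = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.3]]
y_train = [1, 1, 0, 0]  # 1 = contains a car, 0 = does not

clf = DecisionTreeClassifier().fit(X_train, y_train)

# A new, unannotated 'image': the trained model predicts its label.
print(clf.predict([[0.85, 0.75]]))  # -> [1], i.e. 'contains a car'
```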

‘Unsupervised learning’ is when we do not know the values of outcomes, and there is no ‘human guidance’ or supervision inherent to the algorithm. We would still, however, like to identify patterns hidden in the data. For example, we may want to find groups of similarly able students, in order to tutor them separately or to use different interventions with each group. Figure 2 shows four identified clusters that might be treated by teachers using different strategies.

Figure 2: Clusters resulting from unsupervised ML
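A minimal clustering sketch in this spirit, assuming scikit-learn and using invented student scores, asks for four groups as in Figure 2:

```python
# Unsupervised learning sketch: group students by ability with no labels given.
from sklearn.cluster import KMeans

# Each row: [maths score, reading score] for one student (hypothetical data).
scores = [[35, 40], [38, 45], [55, 60], [58, 62],
          [75, 70], [78, 74], [92, 95], [90, 91]]

# Ask for four clusters, mirroring the four groups shown in Figure 2.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0)
groups = kmeans.fit_predict(scores)
print(groups)  # a cluster index per student; each group could get its own intervention
```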

A third, less common, type of ML is ‘reinforcement learning’. Like supervised learning, it uses feedback to find and learn the ‘correct behaviour’. Unlike supervised learning, however, reinforcement techniques do not use a given outcome as feedback. Instead, they use a set of rewards and punishments as signals for positive and negative patterns of behaviour.
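The sketch below illustrates the reward-and-punishment idea with a deliberately tiny example in plain Python: a two-action ‘bandit’ learner. The reward probabilities are invented and hidden from the learner, which must discover the better action purely from feedback signals:

```python
# Reinforcement learning sketch: learn 'correct behaviour' from rewards alone.
import random

random.seed(0)
reward_prob = {"A": 0.2, "B": 0.8}   # hidden from the learner (invented values)
value = {"A": 0.0, "B": 0.0}         # learned estimate of each action's worth

for step in range(1000):
    # Mostly exploit the best-known action, sometimes explore (epsilon-greedy).
    if random.random() < 0.1:
        action = random.choice(["A", "B"])      # explore
    else:
        action = max(value, key=value.get)      # exploit
    reward = 1 if random.random() < reward_prob[action] else -1  # reward or punishment
    value[action] += 0.1 * (reward - value[action])  # nudge estimate towards feedback

print(value)  # action 'B' ends up valued higher: behaviour learned from rewards
```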

AI in education

The research field of AI in education (AIEd) has existed for at least 30 years. AIEd “brings together AI … and the learning sciences … to promote the development of adaptive learning environments and other AIEd tools that are flexible, inclusive, personalised, engaging, and effective … AIEd is also a powerful tool to open up what is sometimes called the ‘black box of learning,’ giving us deeper, and more fine-grained understandings of how learning actually happens” (Luckin et al., 2016).

As an example, to develop an AIEd application providing individualised feedback to students, Luckin et al. (2016) argue that research from the learning sciences needs to be assimilated into three types of computational models: the pedagogical model (expressing teaching methods), the domain model (expressing the taught subject knowledge) and the learner model (expressing the personal cognitive, affective and behavioural attributes of learners). AI is only just starting to change the educational ecosystem, and it has not yet compelled all educational stakeholders to engage with AI and its implications for education. However, education and educators need to prepare for the inevitable progress of AI into education. Luckin and Cukurova (under review) propose three main actions required to effectively connect AI and education, as summarised in Figure 3.

Figure 3: Luckin and Cukurova’s intelligent approach to AI in education and training
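Returning to the three computational models above, here is a hypothetical Python sketch of how they might be represented and combined; every field name and the toy selection logic are invented for illustration, not Luckin et al.’s implementation:

```python
# Hypothetical sketch of the three computational models described by
# Luckin et al. (2016); all field names and logic are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class PedagogicalModel:
    teaching_methods: list[str]  # e.g. ["worked examples", "spaced retrieval"]

@dataclass
class DomainModel:
    subject: str                 # the taught subject
    concepts: dict[str, list[str]] = field(default_factory=dict)  # concept -> prerequisites

@dataclass
class LearnerModel:
    mastery: dict[str, float] = field(default_factory=dict)  # cognitive attributes
    engagement: float = 0.5      # affective/behavioural attributes

def next_feedback(p: PedagogicalModel, d: DomainModel, l: LearnerModel) -> str:
    """Combine the three models to pick individualised feedback (toy logic)."""
    weakest = min(l.mastery, key=l.mastery.get)
    return f"Revisit '{weakest}' in {d.subject} using {p.teaching_methods[0]}."

print(next_feedback(
    PedagogicalModel(["worked examples"]),
    DomainModel("algebra", {"equations": ["arithmetic"]}),
    LearnerModel({"equations": 0.3, "arithmetic": 0.8}),
))
```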

We discuss the first two actions in the next section, and the third action in a future article. 

Human or machine learning?

Skinner (1938), an influential behaviourist psychologist, developed a learning method based on the notion that people learn when they adopt an association between a particular behaviour and its consequence (reward or punishment). When the learner associates certain behaviour with a reward, for example, they are likely to repeat it.

This notion is similar to ML’s supervised and reinforcement methods today, in which a statistical model is ‘taught’ by associating an observation with a consequence (a reward, a punishment, or simply an already-known outcome).

The reasons that behaviourist methods are less accepted today are similar to the reasons that human learning is so different from machine learning: human cognition sits in between stimulus and response, and it is more complicated than simply responding to rewards. The relationship between human and machine cognition is complicated and still evolving.

Unlike the human brain, machines cannot solve problems to which they have not previously been explicitly introduced. As Russell (1997) emphasises: “imagine a chicken that gets fed by the farmer every day and so, quite understandably, imagines that this will always be the case… until the farmer wrings its neck! The chicken never expected that to happen; how could it? – given it had no experience of such an event and the uniformity of its previous experience had been so great as to lead it to assume the pattern it had always observed (chicken gets fed every day) was universally true. But the chicken was wrong”. In other words, AI inductive systems will not consider any course of action for which they have seen no historical evidence.

Humans are inherently ‘designed’ to do both deductive and inductive learning. As we collect observations through our senses and process them to fit into our long-term memory schemas, we are inducing. As we use heuristics and our long-term schemas and scripts to generate predictions and possible explanations for our observations, we are deducing (e.g., Atkinson & Shiffrin, 1968).

Figure 4: Human induction and deduction, adapted from Atkinson & Shiffrin (1968)

Learning by imitation is not enough

The quest for ‘artificial flight’ succeeded when the Wright brothers and others stopped imitating birds and started using wind tunnels and learning about aerodynamics

Russell & Norvig, 2016

AI systems are good at picking up patterns, repeating them and generalising them – but they lack creativity and cannot transfer skills. Luckin (2018a) names seven elements of human intelligence that still have no complete analogue in artificial cognition:

  1. Multi- and interdisciplinary academic intelligence, described as knowledge and understanding about the world.
  2. Social intelligence. Luckin explains: “social interaction is the basis of individual thought and communal intelligence. AI cannot achieve human-level social interaction. There is also a meta aspect to social intelligence (see also meta-subjective intelligence) through which we can develop an awareness of our own social interactions and hone our ability to regulate them.”
  3. Meta-knowing intelligence, the understanding of “what knowledge is, what it means to know something, what good evidence is and how to make judgements based on that evidence and our context.”
  4. Meta-cognitive intelligence, the ability to “interpret our own ongoing mental activity: interpretations that need to be grounded in good evidence about our contextualised interactions in the world.”
  5. Meta-subjective intelligence. It encompasses “both our emotional and our motivational self-knowledge and regulatory skills; our ability to recognise our emotions and the emotions of others; to regulate our emotions and behaviours with respect to other people and with respect to taking part in a particular activity.”
  6. Meta-contextual intelligence. It is described as “our understanding of the way in which our physical embodiment interacts with our environment, its resources, and other people. This includes physical intelligence; our intellectual bridge to our instinctive mental processes. This helps us recognise when we are biased and when we are succumbing to post-hoc rationalisation.”

    and, most importantly, connecting all six of the above elements:
  7. Perceived self-efficacy, requiring “an accurate, evidence-based judgement about ourselves: our knowledge and understanding; our emotions and motivations; and our personal context. We need to know our ability to succeed in a specific situation and to accomplish tasks both alone and with others.”

Luckin suggests that the human ability to reflect about our learning, and to understand and process contextual and subjective knowledge through experience, is the core difference between human cognition and machine cognition.

Human heuristics and cognitive biases

To deal with our limited processing capacity, memory loss, and memory decay while still making sense of the world, humans use heuristics, schemas and scripts. Heuristics are mental shortcuts people often use to make decisions, usually focusing on just a few aspects of a situation (for example, rule of thumb, educated guesses and stereotypes).

Machines do not need to use such shortcuts. Even computational methods that are used to reduce the number of considered dimensions and aspects (such as feature selection) are based on statistics. Thus, machines could help us to identify biases and make better-informed decisions.
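As one illustration of such statistics-based dimension reduction, the sketch below uses scikit-learn’s SelectKBest on randomly generated data; the informative feature is identified by a statistical test rather than a human shortcut:

```python
# Feature selection sketch: reduce dimensions statistically, not by human heuristics.
# Random toy data; assumes scikit-learn and numpy are installed.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))    # five candidate features
y = (X[:, 2] > 0).astype(int)    # the outcome really depends on feature 2 only

# Keep the two features ranked highest by an F-test, not by intuition.
selector = SelectKBest(score_func=f_classif, k=2).fit(X, y)
print(selector.get_support())    # feature 2, which truly drives y, is flagged
```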

Rather than casting human cognition and machine cognition as substitutes in conflict, AIEd allows human deficiencies to be complemented by machine abilities, and vice versa.

AI in education

The modern AIEd community generally argues for a ‘pedagogy first’ approach, in which education technology innovations undergo a thorough exploration of the educational problems and gaps for which the technology will be tailored (Rosé et al., 2018).

Augmenting learning

The historic factory model of the classroom and its structured organisation pose many challenges to the individual learner, which AI – using automation (mostly via the deductive approach) and adaptivity (mostly via the inductive approach) – is being used to address.

Personalised and adaptive learning

In most formal education settings, most individuals are taught in a large, diverse group of learners, facing a single teacher. Unlike teachers, AI systems can scale easily and quickly, and facilitate a one-on-one interface with each learner, taking into account a large number of sensory inputs in real time and calculating on-the-spot recommendations for the content, pace or instruction method best suited to that specific learner at that specific time.

Personalised systems can take advantage of AI’s computational abilities to consider many inputs about the learner within a single statistical model, resulting in a single recommendation about the next step.

Adaptive learning (which is often implemented alongside personalised learning) is about an AI system’s ability to adapt in real-time to the dynamically changing needs of the learner. Adaptability can be gained, for example, by harnessing ML’s ability to re-craft a statistical model from newly introduced data.
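A minimal sketch of this re-crafting, assuming scikit-learn and invented learner-interaction data: an incrementally trainable classifier is updated with each new batch via partial_fit, so its recommendations reflect the learner’s latest behaviour:

```python
# Adaptive learning sketch: update a model incrementally as new learner data arrives.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)

# Initial fit on early interactions: [minutes on task, error rate] -> needs support?
X_first = np.array([[30.0, 0.6], [45.0, 0.5], [10.0, 0.1], [12.0, 0.2]])
y_first = np.array([1, 1, 0, 0])  # 1 = struggled, 0 = coped (invented labels)
model.partial_fit(X_first, y_first, classes=[0, 1])

# Later, a new batch of interactions arrives; the model adapts without full retraining.
X_next = np.array([[20.0, 0.4], [15.0, 0.15]])
y_next = np.array([1, 0])
model.partial_fit(X_next, y_next)

print(model.predict([[18.0, 0.3]]))  # an up-to-date judgement for the learner
```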

Intelligent Tutoring Systems (ITS) use AI techniques to simulate one-to-one human tutoring, usually using personalised and adaptive learning. Examples include Kidaptive, which collects a diverse set of measures and uses AI to adapt content and feedback to the learners; ALEKS, which uses a diagnostic approach throughout the learning journey to provide each learner with recommended topics; IBM and Pearson’s cognitive tutor; and CENTURY Tech, which collects behavioural and performance data to recommend the next step, while also providing tracking analytics for teachers and auto-marking to give instant feedback to students (Luckin et al., 2016).

From teaching machines to tutoring systems to augmenting teachers

The counterpart term Intelligence Augmentation (IA) (van Emden, 1991) developed almost side-by-side with the term AI. While AI traditionally pushed towards autonomous systems that would eventually replace human cognitive functions, IA aims to use similar techniques to support humans by complementing their cognitive functions.

Education tends to favour IA over technology for its own sake and systems that assist teachers over systems that actually ‘teach’ for them. IBM’s Watson Teacher Advisor, for example, aims to reduce tutors’ workload. Writing about the AI Teaching Assistant, Colin, Luckin and Holmes (2017c) note: “Through working with Colin, [the teacher] has become somewhat of a metaphorical judo master, harnessing the data and analytical power of AI to tailor a new kind of education to each of her students. Her role at the helm of the classroom, however, is fundamentally unchanged… From time to time, when Colin recognises that a group is off topic, he intervenes with an alternative suggestion to stimulate new discussion, via individual students’ tablets, or he links students to other conversations that are taking place elsewhere in the classroom. Meanwhile, [the teacher] is free to wander around the room and observe, giving personalised guidance and feedback, and joining in with students’ conversations. By now, she is an experienced and skilled problem-solving practitioner, attuned to recognising when her human help and social skills are particularly needed.”

Augmenting assessment

Learning assessment is a crucial process, tightly coupled to learning itself. It should be designed to ensure that learners are making progress towards acquiring the knowledge and skills targeted by the learning system.

What is assessed

Our examination system excels at assessing numeracy and factual knowledge, but falls short in assessing other skills, such as creative problem-solving, empathy, and collaboration (Luckin, 2017b). The assessment system perpetuates intelligence that resembles that of a machine instead of encouraging diverse and rich human intelligence.

The examination system is not beyond politics, either. It rewards certain types of skills, certain subjects even, and therefore encourages a certain type of student. Luckin (2017b) argues that instead of rewarding humans for displaying skills we can easily automate, we should encourage the non-cognitive skills that differentiate us from machines.

How we assess

Overall, the ‘single-point-in-time’ exam method for assessment has proven less than optimal. Advances in AI data collection and modelling techniques can significantly contribute to providing “a fairer, richer assessment system that would evaluate students across a longer period of time and from an evidence-based, value-added perspective” (Luckin, 2017a).

Conclusions

Machines need to augment rather than replace humans. But we must understand human learning in order to understand how AI can amplify it. The outcome should be effective and egalitarian systems that leave teachers empowered to work with empowered, self-regulating and engaged students.

You can read a longer version of this article on:
www.educate.london/long-read-ai-in-education

Download the bite-sized summary of this content here:
www.educate.london/byte-sized-edtech-research


References

Atkinson, R. C., & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In Psychology of Learning and Motivation, 2, 89–195. Academic Press.

Gödel, K. (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38, 173–198.

Goethe, J. W., & Bronzino, J. D. (1995). An expert system for monitoring psychiatric treatment. IEEE Engineering in Medicine and Biology, November/December, 776–780.

Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. London: Pearson.

Luckin, R. (2017a). Towards artificial intelligence-based assessment systems. Nature Human Behaviour, 1(3), 0028. https://doi.org/10.1038/s41562-016-0028

Luckin, R. (2017b). The implications of artificial intelligence for teachers and schooling. In L. Loble, T. Creenaune, & J. Hayes (Eds.), Future frontiers: Education for an AI world (pp. 109–125). Melbourne University Press & New South Wales Department of Education.

Luckin, R., & Holmes, W. (2017c). A.I. is the new T.A. in the classroom. Available at: https://howwegettonext.com/a-i-is-the-new-t-a-in-the-classroom-dedbe5b99e9e

Luckin, R. (2018a). Machine Learning and Human Intelligence: The future of education for the 21st century. UCL IOE Press.

Luckin, R. (2018b). Enhancing Learning and Teaching with Technology: What the Research Says. UCL IOE Press. UCL Institute of Education, University of London, 20 Bedford Way, London WC1H 0AL.

McCarthy, J., & Hayes, P. J. (1969). Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer & D. Michie (Eds.), Machine Intelligence 4 (pp. 463–502). Edinburgh University Press. https://doi.org/10.1016/B978-0-934613-03-3.50033-7

Rashevsky, N. (1936). Physico-mathematical aspects of excitation and conduction in nerves. In Cold Spring Harbor Symposia on Quantitative Biology, IV: Excitation Phenomena (pp. 90–97).

Rosé, C. P., Martínez-Maldonado, R., Hoppe, H. U., Luckin, R., Mavrikis, M., Porayska-Pomsta, K., McLaren, B., … du Boulay, B. (Eds.). (2018). Artificial Intelligence in Education: 19th International Conference, AIED 2018, London, UK, June 27–30, 2018, Proceedings, Part II. https://doi.org/10.1007/978-3-319-61425-0

Russell, B. (1997). Religion and science (No. 165). Oxford University Press, USA.

Russell, S., & Norvig, P. (2016). Artificial intelligence: A modern approach. Pearson Education Limited. https://doi.org/10.1017/S0269888900007724

Skinner, B. F. (1938). The Behavior of organisms: An experimental analysis. New York: Appleton-Century.

van Emden, M. H. (1991). Mental Ergonomics as Basis for New-Generation Computer Systems. University of Victoria, Department of Computer Science