Still w(AI)ting for the automation of teaching: An exploration of machine learning in Swedish primary education using Actor‐Network Theory
Machine learning and other artificial intelligence (AI) technologies are predicted to play a transformative role in primary education, where these technologies for automation and personalisation are now being introduced to classroom instruction. This article explores the rationales and practices by which machine learning and AI are emerging in schools. We report on ethnographic fieldwork in Sweden, where a machine learning teaching aid in mathematics, the AI Engine, was tried out by 22 teachers and more than 250 primary education students. By adopting an Actor‐Network Theory approach, the analysis focuses on the interactions within the network of heterogeneous actors bound by the AI Engine as an obligatory passage point. The findings show how the actions and accounts emerging within the complex ecosystem of human actors compensate for the unexpected and undesirable algorithmic decisions of the AI Engine. We discuss expectations about AI in education, contradictions in how the AI Engine worked and uncertainties about how machine learning algorithms 'learn' and predict. These factors contribute to our understanding of the potential of automation and personalisation—a process that requires continued re‐negotiations. The findings are presented in the form of a fictional play in two acts, an ethnodrama. The ethnodrama highlights controversies in the use of AI in education, such as the lack of transparency in algorithmic decision‐making—and how this can play out in real‐life learning contexts. The findings of this study contribute to a better understanding of AI in primary education.
INTRODUCTION
- VLADIMIR What are you insinuating? That we have come to the wrong place?
- ESTRAGON He should be here.
- VLADIMIR He did not say for sure he'd come.
- ESTRAGON And if he does not come?
- VLADIMIR We'll come back tomorrow.
- ESTRAGON And then the day after tomorrow.
(Beckett, 1957, p. 5)
In this article, we approach the automation of teaching through artificial intelligence (AI) as a vague and unsettled phenomenon, comparable to the fictional character of Godot in Beckett's famous play Waiting for Godot. More specifically, we report on findings from an innovation and research project on AI in mathematics instruction in primary education in Sweden. We elaborate on the practice of, and contradictions experienced in, teaching with a specific digital teaching aid, AI Engine, and we analyse how this reflects ideas of automated and personalised instruction.
In a new era of automation (Andrejevic, 2019), machine learning and other artificial intelligence (AI) technologies are predicted to play a transformative role in the field of education (Tuomi, 2018). Accelerated by the COVID‐19 pandemic, AI is already entering the realms of education policy (e.g., Miao et al., 2021; WEF, 2020) and practice (e.g., Facer & Selwyn, 2021; Luckin & Cukurova, 2019). While representatives from both the education technology (EdTech) industry and research in Artificial Intelligence in Education (AIEd) insist that the purpose of these systems is not to replace teachers in the classroom, AI‐supported one‐on‐one tutoring has for decades been a major focus for AI researchers (VanLehn, 2011). The notion that increasingly intelligent machines can automate teaching is hardly new. It can be traced back to the pre‐digital teaching machines first proposed by Sidney Pressey in the 1920s and later elaborated and operationalised by the behaviourist B.F. Skinner (Skinner, 1968). Today these "EdTech imaginaries" of automation and personalisation (Watters, 2021, p. 10) are being revitalised through a wide range of AI‐powered educational technologies, among them a new generation of Intelligent Tutoring Systems that use machine learning (Selwyn, 2019). When promoted, Intelligent Tutoring Systems are said to be "delivering learning activities best matched to a learner's cognitive needs and providing targeted and timely feedback, all without an individual teacher having to be present" (Luckin et al., 2016, p. 24).
In this discourse, the limited capacity of teachers can be circumvented with advanced decision‐making algorithms. This assumption, however, is increasingly being questioned by researchers drawing on theoretical orientations and empirical approaches from science and technology studies (STS; e.g., Bayne, 2015; Eynon & Young, 2020; Selwyn et al., 2021; Wajcman, 2017). By challenging the idea of education technology as a neutral tool in the service of teachers and administrators, many of these studies reveal complex entanglements between social and material interactions and problematise causal relations and effects between technology on one side and behavioural outcomes on the other (e.g., Eynon & Young, 2020; Knox et al., 2020; Perrotta & Selwyn, 2019). Despite the recent increase in the use of different forms of AI at all levels of education (Holmes et al., 2019), few empirical studies describe how machine learning technologies developed to automate teaching are enacted in schools (Castañeda & Williamson, 2021). This article offers insights from such an empirical account.
The study on which we report draws on fieldwork from a nationally co‐funded 1.5‐year innovation and research project on AI in mathematics teaching, 2020–2021, in Sweden. The project was a collaboration between a local school authority, a teaching aid company, and education researchers. Research participants included 22 teachers and more than 250 students in primary education. We used Actor‐Network Theory (ANT) to inform the ontological and methodological approach in this study (Callon, 1984; Latour, 2007). In this approach, booming techno‐scientific domains, such as AI in education, can be represented as the creation of specific assemblages, so‐called networks between human and non‐human entities, here referred to as actors. By exploring interactions between actors within such a network we sought to empirically demonstrate how ideas, like the automation of teaching, become materialised in classroom practices. To do so, the following theoretically motivated research questions were articulated:
- How do teachers, students, teaching aid authors, education researchers, and the AI Engine interact?
- What makes actors interact?
To set the scene, we first introduce the preconditions of the study and its main actor, the AI Engine. Then, we give an overview of Actor‐Network Theory and describe the methods. We present our findings in the form of an ethnodrama in two acts (Saldaña, 2003), displaying how the automation of teaching emerges in classroom instruction. The research questions are addressed in the analysis of the two acts. The article concludes with a discussion on the future of AI in education in light of our findings.
THE AI ENGINE
The innovation and research project on which this article reports was funded as part of efforts by Vinnova—the national innovation agency in Sweden—to stimulate the public sector in Sweden to get started faster and take advantage of AI in all areas (Vinnova, 2019). The project explored and evaluated the potential of a machine learning‐based technology applied in a commercial teaching aid for mathematics: specifically, the teaching aid company's product, the AI Engine. A goal was to see if an AI‐based teaching aid could automate a particular aspect of teaching arithmetic, by identifying students' knowledge gaps and by personalising the content.
The AI Engine is an Intelligent Tutoring System for automating teaching processes and optimising student learning through one‐on‐one tutoring. Its machine learning algorithms were developed by a third‐party AI company and combine a set of machine learning models to predict and recommend content primarily based on learners' answers and response times. Each time a student answers a question, it is saved as an interaction event, and a real‐time calculation is done which predicts the probability that the student will complete a task correctly. These calculations always refer to historical training data. As more students use the tool, the prediction model is improved, and the algorithmic adaptation is said to become more precise. The algorithm also uses some pattern recognition to predict the most successful learning paths. This so‐called 'adaptive engine' was integrated with the teaching aid through an application programming interface (API) solution.
The learning content of the AI Engine in the study consisted of five exercise modules, developed by the teaching aid author in collaboration with the project team. Modules 1–3 included exercises within the number intervals 1–10, 1–20 and 1–100 that would allow students to work with arithmetic (addition and subtraction). Module 4 contained exercises from the multiplication table. There was also a fifth module, Taljakten (Swedish for 'the number chase'), which mixed exercises from modules 1–4. After the first intervention, additional free‐pacing modules were also added. The user interface was stripped of all visual detail and kept simple to avoid distractions, showing only one exercise at a time and a box in which the students were to insert the correct answer. The instructions for how to practice with the modules differed for students in grades 2, 5 and 8. In grade 2, students completed all four modules before working in Taljakten. In grades 5 and 8, only Taljakten was used.
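The project documentation does not disclose the actual models behind the AI Engine, but the description above (interaction events, answers, response times, and real‐time probability estimates) can be illustrated with a minimal sketch. The following Python example is our own illustrative assumption of how such a prediction could work, using a simple online logistic model over a student's recent accuracy and answer speed; all class names and features are hypothetical and do not reflect the vendor's implementation. In a full system, such predictions would additionally feed a recommendation step that selects which exercise to display next.

```python
from dataclasses import dataclass
import math

# Illustrative sketch only: each answer is logged as an interaction event, and
# an online logistic model maps a student's recent accuracy and answer speed
# to the probability of solving the next task correctly.


@dataclass
class InteractionEvent:
    student_id: str
    task_id: str
    correct: bool
    response_time_s: float


class CorrectnessPredictor:
    """Online logistic model: P(correct) from recent accuracy and speed."""

    def __init__(self, learning_rate: float = 0.1):
        self.w_acc = self.w_speed = self.bias = 0.0
        self.lr = learning_rate
        self.history = []  # all interaction events seen so far

    def _features(self, student_id):
        recent = [e for e in self.history if e.student_id == student_id][-10:]
        if not recent:
            return 0.5, 0.5  # neutral prior for a student with no history
        accuracy = sum(e.correct for e in recent) / len(recent)
        speed = sum(1.0 / (1.0 + e.response_time_s) for e in recent) / len(recent)
        return accuracy, speed

    def predict_correct(self, student_id):
        accuracy, speed = self._features(student_id)
        z = self.w_acc * accuracy + self.w_speed * speed + self.bias
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, event):
        # One stochastic gradient step on the logistic loss using the observed
        # outcome, then store the event as part of the historical data.
        p = self.predict_correct(event.student_id)
        error = float(event.correct) - p
        accuracy, speed = self._features(event.student_id)
        self.w_acc += self.lr * error * accuracy
        self.w_speed += self.lr * error * speed
        self.bias += self.lr * error
        self.history.append(event)


engine = CorrectnessPredictor()
engine.update(InteractionEvent("s1", "64-56", correct=True, response_time_s=4.2))
engine.update(InteractionEvent("s1", "51-42", correct=True, response_time_s=3.1))
print(round(engine.predict_correct("s1"), 2))  # predicted probability for the next task
```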
As the project unfolded, our attention shifted from the way the AI Engine worked, or did not work, to the interactions between the AI Engine and the other participants in the project, and to the ways in which the technological promises of AI in education, such as the automation of teaching, became materialised in such interactions. The theoretical lens through which these interactions came to be understood and analysed will be outlined next.
ACTOR‐NETWORK THEORY
Actor‐Network Theory (ANT) is a compilation of theoretical and methodological principles sprung out of science and technology studies (STS) and attributed foremost to Bruno Latour (1987), Michel Callon (1984), and John Law (1992). In line with ANT, we focused in this study on how all things, whether social or material, are in fact effects constructed in complex, fragile webs of relations, in networks between heterogeneous actors. An actor[1] is a human or non‐human entity that is capable of exerting force and coming together, changing and being changed within a constantly fluent and associative network of other actors (Latour, 2007). As actors are in constant motion, the interactions between human and non‐human actors are never fixed territories but always changing (Latour, 1999). The shift from a human‐centred approach to what has been termed a symmetrical treatment of the social and the material deserves to be emphasised here, as it contradicts the more common way of understanding and talking about technology in education (Fenwick & Edwards, 2017). Latour (2007) describes this epistemological standpoint as follows: "to be symmetric, for us, simply means not to impose a priori some spurious asymmetry among human intentional action and a material world of causal relations" (Latour, 2007, p. 76).
From this stance, educational facts and artefacts, like curriculum, routines, or AI technologies, are constructed by what equally important human actors (e.g., teachers, students, teaching aid authors, researchers) and non‐human actors (e.g., machine learning algorithms, paper tests, computers, math tasks, data points) do in relation to each other. The AI Engine, too, is a complex and messy web—of code, databases, infrastructures, platforms and interfaces, new technical settings, human experts, scientific and commercial settings founded on a vast proliferation of techniques, methods, and research traditions—that actively sets up and constructs specific ways of thinking about and acting upon other actors (Decuypere, 2021; Latour, 2007). The more allies and interactions, the stronger the network becomes.
The way networks are composed is particularly visible when things go wrong; conversely, the contributing inter‐connections tend to be hidden when things work smoothly. Thus, an AI‐based teaching aid appears successful when the actor‐network is stable and durable, exercising force while concealing all the complex interactions between heterogeneous entities that created it and continue to maintain it. This phenomenon, also referred to as 'blackboxing', a term for how the inner workings of technology are obscured from users, means that once a machine (for example, a camera) functions efficiently, attention is paid only to its inputs and outputs and not to its internal complexity. Paradoxically, this also implies that the more science and technology succeed, the more opaque and obscure they become (Latour, 1999, p. 304).
Obligatory passage point
To reach a deeper understanding of how heterogeneous actors come together in networks, and to describe what holds these networks together, the concept of obligatory passage point (OPP; Callon, 1984, p. 205) was used in this study. Thought of as the narrow end of a funnel, the OPP forces heterogeneous actors with different objectives to converge on a specific aim mediating all interactions within the network. Put differently, the OPP imposes itself as a point through which all other actors within the network have to pass, independently of their goals. The OPP is not only an agential actor within, and part of, a fluent actor‐network; it is also indispensable and what prevents the actor‐network from dissolving. Thereby, in this study, the OPP is seen as the node in the network where there are especially dense connections to explore. To conclude, ANT was used to identify empirically traceable effects by (1) exploring how heterogeneous actors within the innovation and research project interact and (2) distinguishing what mediates their interactions. As education studies drawing on ANT are relatively scarce compared to other fields (Fenwick & Edwards, 2017), we believe this approach to AI in education can bring new insights and perspectives on this relatively poorly understood phenomenon (Hrastinski et al., 2019).
METHOD
We understand ANT as a set of sensibilities (Mol, 2010) to trace complex relational interactions that emerge when AI technologies in education are introduced in classrooms. Methodologically this calls for engaging with the actors ethnographically, while striving for very specific descriptions of interactions within the actor‐network and its enacted realities (Latour, 2007).
Tracing the network and following the actors
The research materials analysed draw on nationally co‐funded ethnographic fieldwork over 1.5 years on AI in mathematics teaching in Sweden. To work empirically and analytically, the actor‐network was limited to, or 'cut' (Fenwick & Edwards, 2017) around, the interactions emerging between the project team (representatives from a local school authority, a teaching aid company, and education researchers), teachers, more than 250 primary education students, and the AI Engine. These actors were followed from the time that the funding application was written in December 2019, through all the stages in the project—from its official start in May 2020 until it formally ended in September 2021. The main author, as an employee at the local authority, was involved in the project from the funding application onwards and was a member of the project team. On several occasions between June and August 2020, the main author informed all participants (project members, teachers, and students) that the purpose of the research was to contribute to an understanding of what happens when AI‐based technologies are introduced to classroom instruction. The guidelines of the Swedish Research Council (Vetenskapsrådet, 2017) for informed consent and the confidential and careful handling of data have been adhered to. The insider position as a researcher facilitated the first author's access to the research process and familiarity with the context of the organisation. Nevertheless, it has been challenging to separate roles and conflicting interests (Teusner, 2016). For an employee at the local school authority, a positive project result with improved student learning is of great value. From a researcher's point of view, contradictory and ambiguous findings can be equally interesting and important to report on. These dilemmas should not be neglected, but as they were openly discussed and acknowledged from the very beginning, our assessment is that they have not obscured the results of the study.
The dual affiliation of the main author also meant that she was involved in the evaluation of the project and consequently had access to the responses from two surveys that were sent out to all participating teachers during and after the intervention period. Results from these surveys have contributed to our understanding of the findings from this study. The surveys, which had low response rates, served only a minor purpose for this study: a "digital‐ethnographic artefact" (Perrotta & Selwyn, 2019, p. 255) for actors to discuss and refer to, helpful primarily for identifying teachers willing to participate in a follow‐up interview. The same can be said about the (big) data file that captured all the interactions between students and the AI Engine and that was eventually provided to the main author. After a brief investigation into what the data file contained, it was not analysed further in the study. It is handled as a digital‐ethnographic artefact in our section on findings. Relevant policy documents and commercial websites were also consulted to broaden the analysis.
Observations and interviews
Of particular importance for constructing the ethnographic description were field notes from four classroom observations and seven video‐recorded and transcribed interviews carried out as part of this study. The observations took place in three primary schools (grades 2 and 5) during the second and third weeks of the first of two interventions, when students in grades 2, 5 and 8 (aged 8–9, 11–12 and 14–15) carried out mental arithmetic exercises using the AI Engine. Each intervention lasted six weeks; the first took place in October–November 2020 and the second in January–February 2021. During these periods, students exercised mental arithmetic with the AI Engine three times per week, each training session lasting ten minutes. The interventions replicated a research design from a study that concluded that even short arithmetic training with non‐adaptive software or with pen and paper substantially improved students' general performance in mathematics (Engvall et al., 2020). Both interventions included pre‐, peri‐ and post‐tests of students' skills. The test results are not reported here, as they were not yet compiled and are not the focus of this study.
Due to Covid‐19 restrictions, further classroom observation that would also include grade 8 was not possible. Data were instead collected through seven online interviews (45–90 min) carried out between May and June 2021. All the interviewed respondents had a professional background as primary education teachers. Three of them were from participating schools, while the remaining four respondents were part of the project team as project manager, development teacher at the local authority, teaching aid author and education researcher. The interviews were semi‐structured, given the topic and the diversity of experiences to be captured within the study (Kvale & Brinkmann, 2009). The interviews focused on respondents' experiences of the project, their perceptions of how the AI Engine worked in classroom instruction, and how they envisioned AI in the future of mathematics learning. We understand interviews as assembled spaces for actors to speak, and as a way to capture accounts from insiders of the network about interactions and associations that were not possible to witness directly (Mazzei, 2013).
Producing the ANT account
An abductive process was used in the ANT analysis of the ethnographic fieldwork (Dubois & Gadde, 2002). Unlike purely inductive or deductive reasoning, an abductive approach means moving back and forth between inductive and more deductive attempts, using the following ANT concepts: actors, interactions, actor‐network, and obligatory passage point. The focus was to identify interactions between actors that were salient in the construction of the actor‐network and that could also describe the effects emerging from these interactions. The way findings are presented in two dramatised acts is part of an intentional methodology for attending to the opaque, or black‐box‐like, workings of educational technologies in the making. The ANT account draws on qualitative inquiry to dramatise ethnographic research data, including interview transcripts, field notes, and written artefacts. This approach is also known as ethnodrama (Saldaña, 2005). According to Humphreys and Watson (2009), manipulated styles of ethnographies enable the publication of sensitive data whilst safeguarding informants' confidentiality. At the same time, this allegorical narrative is motivated by the methodological assumptions that underpin ANT, allowing the researcher to communicate the empirical accounts in more extensive and vivid ways (cf. Law, 2004; Saldaña, 2003). In other words, it is an attempt to adhere to Latour's description of a "good ANT‐account" that allows actor expressions and behaviours to come out stronger than those of the analyst (Latour, 2007, p. 30). Hence, the two dramatised acts are primarily based on excerpts from interviews and field notes from classroom observation. For the adaptation of interview transcripts, both verbatim extracts and edited and slightly revised passages have been used, while trying to remain faithful to the narratives of respondents. Each act is centred around one of the research questions and is followed by an ANT analysis. From an ethical point of view, the dramatised narrative has also been selected to better illustrate the main author's double affiliation as an actor (an employee at the local authority) and as the one asking the questions and drawing the conclusions (a researcher).
FINDINGS IN TWO ACTS
Act I
Compensatory interactions
In this act, the stage is set as a classroom with desks and chairs. The desks are arranged in three rows centred in front of a big whiteboard. A total of 20 students, aged 8–9 years, sit at their desks in pairs or groups of three, each student equipped with a laptop computer. They are using the AI Engine to practice addition and subtraction. A teacher circulates the room, occasionally stops, and leans over some students. Katarina, a member of the research project team, sits at the back of this setting, taking notes and reading them aloud.
Teachers Lucy and Becky and research project team members Ester and Diane enter the stage and sit next to Katarina. A conversation around some of the actions of the AI Engine follows.
- Katarina The AI Engine displays the same numbers to several students: 64 − 56 against a white backdrop. The AI Engine then continues to recommend the exercises 51 − 42, 90 − 1, 22 + 17, 37 + 19 on one of the laptops. As soon as an answer is inserted, the AI Engine displays the next exercise in the same manner. During their interaction with the AI Engine, some students clearly use their fingers to count. They seem concentrated when using the keyboard to write the numbers into the small, coloured box of the minimalistic interface. Some students interact individually with the AI Engine, others consult their neighbouring peers to get the answer right. One student says 'hello, this is too difficult'. During the session, the teacher helps those students that struggle with the exercises suggested by the AI Engine. After ten minutes, the teacher ends the activity, and it is time for a lunch break. The teacher later tells me that the students seem to get numbers that correspond to their ability. He also thinks students are challenged in a positive way when they get numbers that are difficult but adds that it would have never worked without his help or without altering the instructions prior to the exercise sessions.
- Lucy [Cheerfully and with emphasis.] And there were some programming errors so when the students discovered them, oh [... leans her head ...] the many discussions we had to have about [in an exaggerated tone] that when you are in a project like this when you develop software then things can go wrong [... pauses...then continues hastily] and then I said: [cheerful] 'I will email and speak to the researcher about [hesitates ...] that we found errors here and that this is not so good but that we must keep up the good work' [...] and then in the end they were [smiles] happy!
- Katarina [Surprised and excited] So, what kind of errors did the students react to?
- Lucy [With reservation] Well, it was like this programming error [hesitates and stutters ...]. I, I do not have it in my head, but it was like two minus one [...] equals one, but the computer told them that it equals three. [Pause ...] So, it was the wrong calculation method, the computer like miscalculated these super easy [seriously ...] it was minus and [more cheerfully] that was something that some thought that [emphasising an irritated face ...] OH!
- Katarina [Shaking her head in surprise, smiling] And what did they say? Were they just upset?
- Lucy [In a wronged voice, trying to mimic the voice of one of the students] 'You can't expect to practice one thing and then it shows you an incorrect answer' [... changes back to cheerful voice] so they were a bit frustrated that learning could go wrong, which is a bit charming. [Pauses and continues more seriously] So no, they did not like that.
- Katarina [In a confirming tone] No [...] but [gets interrupted].
- Lucy Actually, then we skipped that part with subtraction until we were certain that errors had been corrected. [Shorter break] What we did late in the autumn when these subtraction exercises had this programming error was that we continued to practice a little on our own [...] I assigned exercises in a different digital tool for mathematics, and that also worked!
- Diane [Commenting on the programming error] When this was to be adjusted, it turned out that it was not as easy as correcting one or two exercises in this tool [...hesitates]. You had to redo a set of questions so instead of changing 1 or 2 questions, you had to have the teaching aid developer write about hundreds of exercises and replace the whole module.
- Katarina [In a confirming tone] How come?
- Diane [With uncertainty] Oh [...] somehow it seems like the AI Engine learns the error and that is why it was not enough only to correct it [...] and as it was explained to me, since the AI Engine had given a certain answer this many times it knew that it was the right one although it was wrong.
- Becky [Hesitating] Sometimes it felt like the students got the same or similar exercise for a very long period, [with uncertainty] but I think it is because they were not so good at it then or that they inserted wrong answers [... pause]. But above all, it was that they could not write anything in the small box, as if it froze a bit.
- Katarina Mmm [in a confirming tone ...] and what did you do?
- Becky Oh well [...] then we switched to the next one [exercise], as there were different modules that you needed to complete [pause]. And then this extra module came with more exercises that especially some students did. For some, it was too difficult. The problem here was that they had to write the answers in ways that did not work [...].
- Ester [In a resigned voice] I have spent more time on it than I would have had to do.
- Katarina [Turns to Ester for acknowledging her point] Mmm [...] right?
- Ester That's how it is [... hesitates] let us put it like this, it has worked quite well [...] I mean [... with some disappointment] my concern is that I think it is a pity that the AI Engine was not used the way I hoped it would.
- Katarina [In a confirming tone] Mmm [...]
- Ester What is good, of course, is that I get to know what I did not know, like the fact that the tasks end [...]. So, like when students had completed everything the AI Engine said 'let's drop this, we don't have to count anymore' [...] which meant I had to solve this [... pause]. So, I added 2000 extra tasks of many different levels so that the student can work and test the AI Engine for real, so to speak.
Actors leave the stage.
Analysis of Act I
The first act demonstrates how heterogeneous entities (machine learning algorithms, students, teachers, whiteboards, assignments, exercises, computers, classroom spaces, desks, teaching aid authors, educators) form a network in which they all contribute as actors that interact. However, actors are enacted, enabled, and adapted by their associates in a reciprocal way (Mol, 2010). This manifests in contrasting accounts where the AI Engine appears as an actor that both (1) works, that is, functions as expected, and (2) does not work, that is, does not function as expected. Rather than abandoning the AI Engine when students are presented with erroneous or too difficult exercises or no content at all, human actors (students, teachers, and research project team members) enabled, and adapted to, the AI Engine in different ways. By subordinating their actions to what the AI Engine does, human actors re‐construct and alter the actor‐network in which a negotiated AI Engine can act, at least temporarily. In response to the first research question (How do teachers, students, teaching aid authors, education researchers and the AI Engine interact?), Act I displays how human actors enable the actions of the AI Engine in ways that can be described as compensatory in relation to the unfulfilled hope of what AI can do in education; we call this a perceived promise of technology. The next sections further elaborate on these compensatory interactions.
Enabling personalisation
In the first act, we are introduced to the interactions between a group of students in second grade (8–9 years), their teacher and the AI Engine during a math lesson. The AI Engine recommends problems that make some students complain. Still, they appear to do their best to answer the problems correctly. Their teacher expresses a certain conviction about the ability of the AI Engine to personalise assignments, but also emphasises his role in supporting students before and during the ten‐minute exercise sessions. However, personalisation does not appear as something that the AI Engine does on its own. Rather, the idea of personalisation and automated teaching emerges as an effect of the entangled web of exercises, algorithms predicting and delivering the exercises, computers, desks, students trying to insert the correct answers through keyboards via specific interfaces, and a supporting teacher in constant movement within a classroom.
Becky's account of the new modules is another example of how the AI Engine is being enabled. Becky persuades her students to use different modules when the AI Engine stops delivering assignments. As new modules are added, Becky directs her students to try these out. Hesitant about whether the AI Engine is adapting to the students' ability, Becky seems aware of which students benefit from these kinds of assignments and for whom the new modules are too difficult. This suggests that in the established actor‐network the AI Engine recruits co‐workers according to its interest, as other actors—here Becky—are enrolled to do its work. Becky is in fact the one constantly monitoring the interactions between the students and the AI Engine, providing differentiated content accordingly. Also in this example, personalisation and automation emerge as an effect of the enactments of actors within the actor‐network. Teachers appear indispensable for this effect to emerge.
Adapting to algorithms
The accounts of teachers Becky and Lucy exemplify how unexpected and erroneous answers or frozen screens are compensated for by human actors. Their accounts also show how the AI Engine enrols other actors to adapt within the network. A good example of such adaptation is Lucy's story of how she persuaded her students, who were upset when the AI Engine displayed the wrong answers, to keep exercising while reporting the errors back to the project team. Besides directing students to other exercise modules, Lucy also prepared similar problems using another software application, seemingly unaware that this could bias the intervention results.
When tracing the programming errors encountered from a non‐human perspective, a slightly different story can be told. In the example where the AI Engine displayed incorrect answers, Ester corrected an error that she had originally introduced. Nevertheless, the AI Engine continued repeating the same incorrect answers. The accepted explanation within the actor‐network is that the AI Engine has 'learnt' the wrong solution, which is an example of how training data can result in algorithmic bias (Tuomi, 2018; Williamson & Eynon, 2020). This paradoxically indicates that the AI Engine acts as could be expected, only this is not what human actors desire. Rather than abandoning the AI Engine and dissolving the actor‐network, the teaching aid company, represented by Ester, reconstructs the entire module where the errors occurred. This compensatory action shows how not only teachers and students adapt to the AI Engine; adjustments were also needed from the teaching aid company providing the technology. Rather than questioning the algorithmic setup, Ester recruits a new non‐human actor, a new module with the same exercises, which replaces the one with errors. This adaptation stabilises the network and prevents it from dissolving.
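How a corrected exercise can keep producing the old error is easier to see in a small sketch. The Python example below is a deliberately simplified assumption of our own, not the vendor's code: if the 'expected' answer to a task is inferred from accumulated historical interaction data, fixing the answer key for one or two exercises does not outweigh the many stored events that encode the error, which is consistent with Diane's account that the whole module had to be replaced.

```python
from collections import Counter, defaultdict

# Hypothetical illustration: an answer model that infers the expected answer
# for each task from historical events that were graded as correct. A point
# fix to the answer key is outvoted by the accumulated (erroneous) history.


class LearnedAnswerKey:
    def __init__(self):
        # task_id -> Counter of answers historically graded as correct
        self.observed = defaultdict(Counter)

    def record(self, task_id: str, answer: int, graded_correct: bool) -> None:
        if graded_correct:
            self.observed[task_id][answer] += 1

    def expected_answer(self, task_id: str) -> int:
        # The answer most often graded as correct in the historical data.
        return self.observed[task_id].most_common(1)[0][0]


key = LearnedAnswerKey()
for _ in range(500):                        # the buggy module graded 2 - 1 = 3 as correct
    key.record("2-1", 3, graded_correct=True)
key.record("2-1", 1, graded_correct=True)   # a single correction after the bug fix
print(key.expected_answer("2-1"))           # still 3: the 'learnt' error persists
```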
As for the suddenly frozen screens described by Becky, the explanation given by human actors relates to the decision‐making actions of the AI Engine. When a student completes a task correctly at speed, the AI Engine predicts that the individual is very likely to complete the task correctly again. If this prediction applies to all the problems within the module, the AI Engine will stop displaying assignments. Such algorithmic governance (Williamson, 2017) indicates that the AI Engine works, only not as human actors expect. Ester's solution to these computational prerequisites is, as before, to recruit more actors (new modules with more problems) that can stabilise the network. However, having to stabilise the actor‐network also means that its configuration has changed, and algorithmic decision‐making becomes less visible to human actors, which contributes to its success. In addition to obscuring, or 'blackboxing', the inner complexity of the AI Engine, humans' compensatory actions give rise to what has been termed 'fauxtomation' (Taylor, 2018)—the false illusion that technologies are functioning autonomously when they are actually dependent on the compensatory labour of humans.
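Read as a recommendation rule, this behaviour is easy to reproduce. The sketch below is a hypothetical illustration (the threshold value and the function are our own assumptions, not the actual implementation): once the predicted probability of success exceeds a mastery threshold for every task left in a module, there is simply nothing left to recommend, which to students and teachers looks like a frozen or finished module.

```python
from typing import Dict, Optional

MASTERY_THRESHOLD = 0.95  # assumed cut-off, for illustration only


def recommend_next(task_probs: Dict[str, float]) -> Optional[str]:
    """Return the task the student is least likely to solve correctly,
    or None when every task in the module is predicted as mastered."""
    open_tasks = {task: p for task, p in task_probs.items() if p < MASTERY_THRESHOLD}
    if not open_tasks:
        return None  # nothing left to display: the screen appears to 'freeze'
    return min(open_tasks, key=open_tasks.get)


print(recommend_next({"64-56": 0.97, "51-42": 0.99, "90-1": 0.98}))  # None
print(recommend_next({"64-56": 0.97, "37+19": 0.62}))                # 37+19
```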
Next, the second act describes how the actor‐network is further consolidated and how the workings of technology are rendered less visible as a result. This time, however, uncertainties and contradictions prevent the actor‐network from dissolving.
Act II
Contradictory interactions
In this act, the stage is set as a spacious classroom with seventeen students aged 11–12 years. Most of them sit at their desks and work on assignments in their printed textbooks. The teachers have assembled four students in front of a large whiteboard, instructing them how to solve a math problem. Katarina sits at the back of the room behind three students with laptops. She reads aloud the notes she has taken:
Katarina takes up a paper with diagrams from her backpack. Dimitri from the project team enters the stage, also with a colourful diagram. Sits down on a chair next to Katarina.
Teachers Lucy, Samuel and Becky, as well as research team members Diane and Ester enter the stage, adding to the conversation.
- Katarina Amina, Lea and Salma have already started doing exercises in the module Taljakten. They do not seem to make any effort to complete as many exercises delivered by the AI Engine as possible during the ten minutes. Salma deliberately begins to answer incorrectly, as if to test the software. Lea exclaims demonstratively: 'first I get super easy, then I get super difficult, then I get super easy again'. After a while, the teacher approaches us and says loudly that this talented student should get increasingly more difficult exercises. I answer that I do not know why she is not.
- Katarina [Reads from the paper in a monotonous voice] In March 2021 a follow‐up survey is distributed to eighteen teachers, as four of the initial 22 had dropped out. After three reminders, eleven teachers responded. The responses about how teachers experienced the intervention vary. Many teachers experienced problems in relation to the AI Engine; however, for six of them the technology had met their expectations. About two‐thirds of the respondents were able to observe improvement in their students' math skills, but comments questioned to what degree the AI Engine was personalising the content [... contemplating].
- Dimitri [Puts the diagram on the floor. Speaks in a hesitant voice, arms crossed] It seems like grade 2 (aged 8–9 years) have had a fairly positive development when it comes to the simple number combinations. They have developed better than all the groups we had in the previous year's interventions [pause]. It may very well be the AI who went in and did a job here so that they have had the chance to exercise the tasks that they have needed to exercise in a good way. [More cheerfully] It would be cool if that was the case!
- Katarina Mmm [... with scepticism]. There are indications that the AI has not matched all students' prior knowledge if you look at the survey we sent out [...] what are your thoughts on that? That students get better, I mean?
- Dimitri [Seriously] You would have to look at each individual student [... pause]. That's what we might be able to look at in this data file, how much they have trained.
- Katarina [In a confirming tone] Yes, that will become a hard nut to crack. We have not received it [the file] yet [... hesitantly]. What do you think can be found out from the file?
- Dimitri [Arms crossed, serious]. Well, I reckon the most important thing is how much they have exercised and then how long they contemplated on each task so that we get a different measurement than what we have now [pause]. So, I think we can get useful information.
- Lucy It is always a challenge to get students who have not always consolidated the most basic mental arithmetic, so I am grateful for anything that could help! [Smiles and nods affirmatively]. I have a group of students that [pauses and gets more serious] when I got them in grade 4 there were a pretty large number of them that had not passed the national exams so when it comes to the maths in my class, I have [... with effort] the entire grade range within my 27 children [nods affirmatively].
- Diane Could an AI Engine help [hesitates] so that students can achieve better results and achieve them faster? Because I think a lot of it has to do with the way education is organised [with emphasis]. Today it is more about time than it is about when I have finished things and can move on [pauses]. I think that AI can be a part of [... with more enthusiasm] that there will be a change around what you learn.
- Samuel [Speaks quickly] It is a bit fascinating how the AI Engine works, I would like to be better at explaining to the students how it can know [pauses]. And then the teething problems that I had to explain to them in the beginning [searching for words] that it takes time for the computer to get to know you and to know, was it a sloppy mistake you just did? [Pauses] and this is a bit fascinating [...] and I think it is great [...] and I think there will be [hesitates] even more of this personalisation. I also think it is good because often, teaching is [...] perhaps at the wrong level.
- Becky [Uncertain] The engine scans each student individually on each occasion during a lesson while I see them all at the same time but only during a tiny occasion, so of course it can be more attentive than I can be [...].
- Ester [With confidence]. The AI does not take over the role of the teacher, rather it makes it possible to keep together the class so that you can have joint teaching even if the students after a while will sit and work maybe with different things.
- Dimitri [Sceptic, arms crossed]. The question is how many mistakes can a student make for the AI to understand that it is a task that the student has difficulties with? Is it enough with a careless mistake or is it two mistakes or three mistakes? How many times do you have to solve the problem? There we have no idea how that AI [application] works [...].
Curtain.
Analysis of Act II
In the second act, accounts of human actors that question the capacity and actions of the AI Engine are contrasted with expressions of trust in the technology. Human actors seem uncertain of how the machine learning algorithms of the AI Engine learn and predict, yet express loyalty towards the promise of technology for the automation and personalisation of education. This gives us reason to assume that the AI Engine is what prevents the actor‐network from dissolving. By rendering itself indispensable, the AI Engine makes actors act in compensatory and contradictory ways. To respond to our second research question (What mediates the interactions of the actors?), we use the concept of the obligatory passage point (OPP) to demonstrate how the AI Engine imposes itself as the central node of the network, mediating all the interactions.
Obligatory passage point
As demonstrated in Callon's (1984) influential paper about the domestication of scallops in Saint‐Brieuc Bay, heterogeneous actors (researchers, fishermen, scallops, and scientists) can be bound to each other for different reasons and with different goals. The actors are "fettered: they cannot attain what they want by themselves" (Callon, 1984, p. 206). The same image could be transferred to our actor‐network where the AI Engine is both an actor and the OPP through which all relationships must pass.
The obligatory passage point, illustrated in Figure 1, describes how actors within the observed network associated in relation to the AI Engine and their possible goals. From the two acts, we can define the possible goals of the actors as follows: (1) the researchers want to explore AI educational technology from different viewpoints; (2) the local authority wants to improve results in schools without increasing costs; (3) the AI Engine will simulate one‐on‐one tutoring in successful ways through precise predictions; (4) the teaching aid company wants to deliver an innovative and commercially successful teaching aid; (5) the national agency that funds the project wants to see an uptake of AI in the education sector; (6) teachers want to participate in a research and development project to improve their students' results in mathematics; and (7) students want to participate in a research project that can improve their results in math. This depicts possible goals that help to illuminate the concept of the OPP. The goals in relation to the OPP help us to identify the AI Engine as what mediates the interactions and explain the contrasting accounts presented in the second act.
Figure 1. Obligatory passage point. Source: Figure constructed by the authors as an adaptation of concepts from Callon (1984).
Contradictions, expectations, and not‐knowing
The second act begins in a different classroom with students aged 11–12 years. The students and their teacher express mistrust, in that they question the capacity of the AI Engine to personalise subject matter content. This account, together with the information from a survey presented by Katarina, challenges the stability of the actor‐network and the very node of the network, the AI Engine, that keeps actors together. There is very little evidence that the AI Engine is living up to the expectation of providing seamless automation and personalisation; it could, as an OPP, easily be dismissed. This would prevent actors from reaching their possible goals. However, both the survey that Katarina refers to and the preliminary research results presented by Dimitri indicate that students have improved their math skills. As this essential goal for teachers appears, according to Dimitri, to have been reached, the AI Engine and its role as an OPP for the network grow in importance. The contradiction between the actions of the AI Engine and improved student results is sustained by the expectations related to an obscure data file that contains all interactions done by the participating students, a file that could potentially be used for gaining more insight into student learning. As long as this data file is not fully accessed and understood, the OPP remains valid and instils hope within the actor‐network.
When Samuel, Diane, Becky, and Ester join Katarina and Dimitri in the conversation, the AI Engine is negotiated into something successful. Independently of how well the AI Engine met their initial expectations, the improved version of the AI Engine is believed to deliver teaching with more accuracy and relevance than a human teacher, although, as stated by Ester, "the AI does not take over the role of the teacher." The statement demonstrates that the communication about the role of the AI Engine in the context of teaching is not static but can be negotiated. The uncertainties about whether the algorithms are adapting to student proficiency levels or not, together with a lack of understanding of how the AI Engine knows, expressed by both Samuel and Dimitri, are critical components in the mediation process. Not‐knowing prevents actors from dismissing the OPP and contributes to obscuring the workings of the AI Engine, where "technical work is made invisible by its own success" (Latour, 1999, p. 304). This is likely to be true for AI in education at large (cf. Hrastinski et al., 2019). However, as illustrated by the findings presented here, the success is an unsettled and contradictory negotiation around ideas of the automation of teaching and personalisation, rather than around what the AI Engine did or did not do.
CONCLUSION
In times when AI is proliferating in the conversation about the future of teaching and learning, this article contributes an Actor‐Network Theory exploration of the state of the actual (Selwyn, 2010). A machine learning‐based teaching aid in mathematics was tried out in primary education classrooms in Sweden. Focusing on the interactions between teachers, teaching aid authors, educators, researchers, and the AI Engine, this study not only demonstrates that ideas such as the automation of teaching are materialised through socio‐material interactions. It concretely shows how this construction emerges as an effect of the network of heterogeneous actors bound by the AI Engine (cf. Fenwick & Landri, 2012). Human actors seem to compensate with hidden labour for the unexpected and undesirable algorithmic decision‐making of the non‐human AI Engine. In contrast to the more common belief that AI technologies can (algorithmically) adapt to the needs of individual students, this study demonstrates that technologies designed to personalise and automate in fact require mutual adaptation from human and non‐human actors in the network. Together with expectations of the performance of technology, contradictions, and obscure or 'not‐known' workings, the hoped‐for promises of AI technology in education prevail. The substantial amount of time and effort invested by the different stakeholders to realise their different objectives could indicate at least two things: (1) there seems to be a discrepancy between ideas and reality in relation to AI in education, and (2) the hidden labour of human actors speaks against the time‐ and cost‐saving arguments with which AI in education is so often promoted (cf. Edwards & Cheok, 2017; Selwyn, 2019).
The reported findings draw on a single case study where specific computational methods were used to predict student performance and individualise the learning process. The project was carried out during exceptional pandemic times, and Covid‐19 restrictions in schools had a negative impact on the possibilities to carry out fieldwork. This means that the findings rely more on human accounts than on direct observations of how the AI Engine was used in practice. Despite these limitations, we consider the findings valuable as they shed light on critical issues in relation to the interactions in classrooms where the AI Engine was used, and they should therefore be considered in future studies.
Rather than suggesting that human actors will always be needed to compensate for technology‐enhanced learning or that teachers' tasks cannot be automated, the study reveals that AI in education is a complex social and material phenomenon in the making that involves many human and non‐human stakeholders and depends on collaboration between heterogeneous actors, all with their different interests and goals. Interactions are not predetermined, and obligatory passage points are not fixed, which makes them, as well as the emerging effects, unpredictable. Things that seem to be durable are not (cf. Fenwick & Edwards, 2017). The disciplines most prominent in AI in education research come from computer science and other STEM fields, and quantitative methods are the most frequently used in empirical studies (Zawacki‐Richter et al., 2019). Accordingly, there is still room for both research and technological development in this domain that engages education researchers, educators, teachers, and students with no formal computer science training. Future ethnographically oriented research capable of broadening the object of inquiry from what works (Biesta, 2007) to how AI‐based teaching aids are appropriated in primary education classrooms is strongly advised to further inform such engagements. Fieldwork inquiries could potentially also bring important insights into still poorly understood machine learning controversies (Williamson & Eynon, 2020).
The accounts of such controversies provided in this study (e.g., lack of transparency, algorithmic biases, and governance) are not necessarily an argument against the use of machine learning in education or the use of the AI Engine per se. They do, however, raise some ethical and critical questions that could inspire future work, including the question of how data‐driven algorithmic decision‐making technologies can be informed by, and stay under the control of, teachers and educators. Also, what can data from these systems tell us (and not tell us)? How can we ensure that AI in education reduces existing inequalities and works well for all students? What assumptions about learning/learners and teaching/teachers underpin how these technologies are constructed? In whose interest are these technologies implemented? While these and many other questions remain unanswered in this article, the findings point to the necessity of a broad ethical discussion around AI with a practice‐centred orientation. Such a discussion depends on increased transparency around the premises upon which decision‐making algorithms are constructed, and on facilitated access to data from private sector companies, so that independent, interdisciplinary research can be conducted.
As stated in the Recommendation on the ethics of artificial intelligence adopted by UNESCO's General Conference at its 41st session in November 2021:
[...] of AI research and proper monitoring of potential misuses or adverse effects, Member States should ensure that any future developments with regards to AI technologies should be based on rigorous and independent scientific research, and promote interdisciplinary AI research to ensure a critical evaluation by including disciplines other than science, technology, engineering and mathematics (STEM), such as cultural studies, education, ethics, international relations, law, linguistics, philosophy, political science, sociology and psychology. (UNESCO, 2021, p. 17)
With AI in education turning into an increasingly commercial concern, globally estimated to expand from $1.1 billion in 2019 to $6 billion by 2024 (Miao et al., 2021, p. 5) and $25.7 billion by 2030 (Facer & Selwyn, 2021, p. 12), it is very likely that technologies that fall under this umbrella term will stay in education. Global teacher shortage is often highlighted as the main reason for automating certain routine tasks, together with arguments that machine learning AI can deliver mass individualisation in teaching and learning at a lower cost than a teacher (cf. Edwards & Cheok, 2017; Selwyn, 2019). The perception of cost‐effectiveness through automation alone will probably (but not necessarily) be enough to drive the implementation of machine learning technologies in classrooms (Rowe, 2019).
However, as demonstrated by our findings, the automation of teaching can (still) be considered vague and enacted—as the wait for Godot—but underpinned by great interest and hopes directed at this application of new technology, as well as by work behind the scenes. Therefore, it seems legitimate to wonder whether there is a risk that the speculative and imaginary futures of AI technologies in education stand in the way of understanding their impact in the future(s) of education. Rather than (still) waiting for technology promises to be fulfilled by obscure algorithms, we recommend that education researchers, educators, and teachers get involved, with informed curiosity, in the complex and multifaceted enactments of AI in education and its role in the future of learning.
Footnotes
1
Early Actor‐Network Theory (ANT) analyses made a distinction between actor and actant to differentiate between different levels of agency within networks. While the working entity is an actor with agency, that which goes into the network to enable this activity is the actant. In this study we have tried to limit the ANT terminology, as we did not find this distinction necessary or helpful, and we will only use the term actor. For the same reason we have not used the ANT term translations (Callon, 1984), but instead use the term interactions.
REFERENCES
Andrejevic, M. (2019). Automated media. Routledge.
Bayne, S. (2015). Teacherbot: Interventions in automated teaching. Teaching in Higher Education, 20 (4), 455 – 467. https://doi.org/10.1080/13562517.2015.1020783
Beckett, S. (1957). Waiting for Godot: A tragicomedy in two acts. Acting edition. Grove Press.
Biesta, G. (2007). Why "what works" won't work: Evidence‐based practise and the democratic deficit in educational research. Educational Theory, 57 (1), 1 – 22. https://doi.org/10.1111/j.1741‐5446.2006.00241.x
Callon, M. (1984). Some elements of a sociology of translation: Domestication of the scallops and the fishermen of St Brieuc Bay. The Sociological Review, 32 (1_suppl), 196 – 233. https://doi.org/10.1111/j.1467‐954X.1984.tb00113.x
Castañeda, L., & Williamson, B. (2021). Assembling new toolboxes of methods and theories for innovative critical research on educational technology. Journal of New Approaches in Educational Research, 10 (1), 1 – 14. https://doi.org/10.7821/NAER.2021.1.703
Decuypere, M. (2021). The topologies of data practices: A methodological introduction. Journal of New Approaches in Educational Research, 10 (1), 67 – 84. https://doi.org/10.7821/naer.2021.1.650
Dubois, A., & Gadde, L. E. (2002). Systematic combining: An abductive approach to case research. Journal of Business Research, 55 (7), 553 – 560. https://doi.org/10.1016/S0148‐2963(00)00195‐8
Edwards, B. I., & Cheok, A.D. (2017). Why not robot teachers: Artificial intelligence for addressing teacher shortage. Online publication. Preprints, 2017120022. https://doi.org/10.20944/preprints201712.0022.v1.
Engvall, M., Samuelsson, J., & Östergren, R. (2020). The effect on students' arithmetic skills of teaching two differently structured calculation methods. Problems of Education in the 21st Century, 78 (2), 167 – 194. https://doi.org/10.33225/pec/20.78.167
Eynon, R., & Young, E. (2020). Methodology, legend, and rhetoric: The constructions of AI by academia, industry, and policy groups for lifelong learning. Science, Technology, & Human Values, 46 (1), 166 – 191. https://doi.org/10.1177/0162243920906475
Facer, K., & Selwyn, N. (2021). Digital technology and the futures of education—Towards 'non‐stupid' optimism. Paper commissioned for the UNESCO Futures of Education report. https://unesdoc.unesco.org/ark:/48223/pf0000377071
Fenwick, T. J., & Edwards, R. (2017). Actor‐Network Theory in education (2nd ed.). Routledge.
Fenwick, T. J., & Landri, P. (2012). Materialities, textures and pedagogies: Socio‐material assemblages in education. Pedagogy, Culture & Society, 20 (1), 1 – 7. https://doi.org/10.1080/14681366.2012.649421
Holmes, W., Bialik, M., & Fadel, C. (2019). Artificial intelligence in education: Promises and implications for teaching and learning. Centre for Curriculum Redesign.
Hrastinski, S., Olofsson, A. D., Arkenback, C., Ekström, S., Ericsson, E., Fransson, G., Jaldemark, J., Ryberg, T., Öberg, L.‐M., Fuentes, A., Gustafsson, U., Humble, N., Mozelius, P., Sundgren, M., & Utterberg, M. (2019). Critical imaginaries and reflections on artificial intelligence and robots in postdigital K‐12 education. Postdigital Science and Education, 1, 427 – 445. https://link.springer.com/article/10.1007/s42438‐019‐00046‐x
Humphreys, M., & Watson, T. J. (2009). Ethnographic practices: From 'writing‐up ethnographic research' to 'writing ethnography'. In S. Ybema, D. Yanow, & H. Wels (Eds.), Organizational ethnography: Studying the complexities of everyday life (pp. 40 – 55). SAGE Publications Ltd. https://doi.org/10.4135/9781446278925.n3
Knox, J., Williamson, B., & Bayne, S. (2020). Machine behaviourism: Future visions of "learnification" and "datafication" across humans and digital technologies. Learning, Media & Technology, 45 (1), 31 – 45. https://doi.org/10.1080/17439884.2019.1623251
Kvale, S., & Brinkmann, S. (2009). Den kvalitativa forskningsintervjun [The qualitative research interview] (2nd ed.). Studentlitteratur.
Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Harvard University Press.
Latour, B. (1999). On recalling ANT. The Sociological Review, 47 (Suppl 1), 15 – 25.
Latour, B. (2007). Reassembling the social: An introduction to Actor‐Network‐Theory. Oxford University Press.
Law, J. (Ed.). (1992). A sociology of monsters: Essays on power, technology and domination. Routledge Sociological Review Monograph.
Law, J. (2004). After method: Mess in social science research. Routledge.
Luckin, R., & Cukurova, M. (2019). Designing educational technologies in the age of AI: A learning sciences‐driven approach. British Journal of Educational Technology, 50 (6), 2824 – 2838. https://doi.org/10.1111/bjet.12861
Luckin, R., Holmes, W., Griffiths, M., & Forcier, L. B. (2016). Intelligence unleashed: An argument for AI in education. Pearson Education. https://www.researchgate.net/publication/299561597_Intelligence_Unleashed_An_argument_for_AI_in_Education
Mazzei, L. A. (2013). A voice without organs: Interviewing in posthumanist research. International Journal of Qualitative Studies in Education, 26 (6), 732 – 740. https://doi.org/10.1080/09518398.2013.788761
Miao, F., Holmes, W., Huang, R., & Zhang, H. (2021). AI and education: Guidance for policymakers. UNESCO. https://unesdoc.unesco.org/ark:/48223/pf0000376709
Mol, A. (2010). Actor‐Network Theory: Sensitive terms and enduring tensions. Kölner Zeitschrift für Soziologie und Sozialpsychologie. Sonderheft, 50, 253 – 269. https://pure.uva.nl/ws/files/1050755/90295_330874.pdf
Perrotta, C., & Selwyn, N. (2019). Deep learning goes to school: Toward a relational understanding of AI in education. Learning, Media and Technology, 45 (3), 251 – 269. https://doi.org/10.1080/17439884.2020.1686017
Rowe, M. (2019). Shaping our algorithms before they shape us. In J. Knox, Y. Wang, & M. Gallagher (Eds.), Artificial intelligence and inclusive education. Perspectives on rethinking and reforming education. Springer. https://doi.org/10.1007/978‐981‐13‐8161‐4_9
Saldaña, J. (2003). Dramatizing data: A primer. Qualitative Inquiry, 9 (2), 218 – 236. https://doi.org/10.1177/1077800402250932
Saldaña, J. (Ed.). (2005). Ethnodrama: An anthology of reality theatre. AltaMira Press.
Selwyn, N. (2010). Looking beyond learning: Notes towards the critical study of educational technology. Journal of Computer Assisted Learning, 26 (1), 65 – 73. https://doi.org/10.1111/j.1365‐2729.2009.00338.x
Selwyn, N. (2019). Should robots replace teachers? AI and the future of education. Polity Press.
Selwyn, N., Pangrazio, L., & Cumbo, B. (2021). Knowing the (datafied) student: The production of the student subject through school data. British Journal of Educational Studies, 70 (3), 345 – 361. https://doi.org/10.1080/00071005.2021.1925085
Skinner, B. F. (1968). Technology of teaching. Appleton‐Century‐Crofts.
Taylor, A. (2018). The automation charade. Blog post. https://logicmag.io/failure/the‐automation‐charade/
Teusner, A. (2016). Insider research, validity issues, and the OHS professional: One person's journey. International Journal of Social Research Methodology, 19 (1), 85 – 96. https://doi.org/10.1080/13645579.2015.1019263
Tuomi, I. (2018). The impact of artificial intelligence on learning, teaching, and education. Policies for the future. JRC113226. Publications Office of the European Union. https://publications.jrc.ec.europa.eu/repository/bitstream/JRC113226/jrc113226_jrcb4_the_impact_of_artificial_intelligence_on_learning_final_2.pdf
UNESCO. (2021). Recommendation on the ethics of artificial intelligence. Programme and meeting document. United Nations Educational Scientific and Cultural Organization. https://unesdoc.unesco.org/ark:/48223/pf0000380455.locale=en
VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46 (4), 197 – 221. https://doi.org/10.1080/00461520.2011.611369
Vetenskapsrådet. (2017). Good research practice. Swedish Research Council. https://www.vr.se/english/analysis/reports/our‐reports/2017‐08‐31‐good‐research‐practice.html
Vinnova. (2019). Starta er AI resa [Start your AI journey]. Utlysningstext [Call for proposals]. Dnr: 2019‐01292. https://www.vinnova.se/globalassets/utlysningar/2019‐01053/omgangar/ai‐resa‐utlysningstext‐v‐3.pdf939759.pdf?cb=20190425135700
Wajcman, J. (2017). Automation: Is it really different this time? The British Journal of Sociology, 68 (1), 119 – 127. https://doi.org/10.1111/1468‐4446.12239
Watters, A. (2021). Teaching machines: The history of personalized learning. MIT Press.
Williamson, B. (2017). Big data in education: The digital future of learning, policy and practice. Sage.
Williamson, B., & Eynon, R. (2020). Historical threads, missing links, and future directions in AI in education. Learning, Media and Technology, 45 (3), 223 – 235. https://doi.org/10.1080/17439884.2020.1798995
World Economic Forum. (2020). Schools of the future: Defining new models of education for the fourth industrial revolution. Report REF 09012020. http://www3.weforum.org/docs/WEF_Schools_of_the_Future_Report_2019.pdf
Zawacki‐Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16 (39). https://doi.org/10.1186/s41239‐019‐0171‐0
By Katarina Sperling; Linnéa Stenliden; Jörgen Nissen and Fredrik Heintz