
Title:
Systematic academic instruction for students with EBD : the construction and use of a tool for teachers
Source:
Journal of Research in Special Educational Needs. 17(1):31-40
Publisher Information:
John Wiley and Sons Inc.
Publication Year:
2017
Physical Description:
10 pages
Document Type:
Journal article
Language:
English
Accession Number:
edshbo.hanzepure.oai.research.hanze.nl.publications.a464febc.e735.4653.9ea4.33fb1644e027
Database:
HBO Kennisbank

Further Information

Educating students with behavioural, emotional and social difficulties requires a thorough systematic approach with the focus on academic instruction. This study addresses the development of a tool, consisting of two questionnaires, for measuring systematic academic instruction. The questionnaires cover the Plan-Do-Check-Act cycle and academic versus behavioural instruction. The questionnaires are both practically oriented as well as theoretically well founded. The reliability turned out to be acceptable (0.76) to high (0.89). Observation scales were developed to determine the validity of both questionnaires. Moderate correlations between questionnaires and observation scales were found (0.31, 0.32). Bland–Altman plots offered us valuable information about the differences between questionnaires and observation scales, supplying us with important issues for further research. It is concluded that the questionnaires might be a valuable tool for assessing teachers' systematic academic instruction. DOI: 10.1111/1471-3802.12096


Systematic academic instruction for students with EBD: the construction and use of a tool for teachers. 


Keywords:
Systematic; academic; instruction; teachers; BESD; questionnaires

Students with behavioural, emotional and social difficulties (BESD) are a serious challenge to education systems (Cooper and Jacobs, [8]). Such students not only show a wide range of external and internal behavioural problems; many of them also make very little progress in their academic learning over the course of a full academic year (Siperstein, Wiley and Forness, [30]; Yell et al., [45]). Moreover, the developmental delays that many students with BESD experience compared with typical students increase rapidly over the years (Ledoux et al., [18]), as do negative emotions about learning (Al‐Hendawi, [1]). Although not all students with BESD show this gap in academic progress, it is a matter of continuing concern. Due to governmental pursuits of higher academic outcomes for all students (e.g., US: ‘No Child Left Behind’; UK: ‘Every Child Matters’; the Netherlands: ‘Tailored Education’), the focus on academic instruction for these particular students is growing. Given that a great deal of problem behaviour is directly related to problems in academic learning (Umbreit et al., [34]), academic instruction for students with BESD is clearly worth studying.

Much problem behaviour observed in classrooms appears to originate from a discrepancy between the demands of the tasks offered and the skills of students with BESD (Lewis et al., [20]; Umbreit et al., [34]). Kern and Clemens note that:

Frequently, problem behaviours result from a mismatch between the environment and a student's skills, strengths, or preferences. For instance, work assignments that are too difficult for a student are a common cause of problem behaviour in the classroom. Appropriately matching instruction to a student's skill and performance corrects this environmental problem. ([15], p. 66)

This calls for systematically planned academic instruction that is carefully orchestrated and appropriately adapted to fit each unique student's needs (Simpson, Peterson and Smith, [29]). Deming ([9]) proposed that processes of planning should be placed in a feedback loop, making it possible to change the parts of a plan that were not working (did not match) and needed improving. He created a diagram to illustrate this iterative, on‐going process, commonly known as the Plan‐Do‐Check‐Act (PDCA) cycle. This widely used concept, frequently applied to improving the quality of education (Kartikowati, [14]), is one of the core tactics in Dutch education (The Dutch Inspectorate of Education, [33]).

Another important aspect of academic instruction, directly related to the school performance outcomes of students with BESD, is instructional time (Kurz, Talapatra and Roach, [17]; Matheson and Shriver, [21]; Vannest et al., [38]). Winn, Menlove and Zsiray ([42]) stated that the link between instructional time and learning is one of the most consistent findings in educational research. The time a teacher spends on academic instruction is in inverse proportion to the time spent correcting misbehaviour in the classroom (Berliner, [3]; Brophy and Good, [6]). Thus much precious instruction time for students with BESD is lost because teachers pay a great deal of attention to controlling behaviour (Pianta and Hamre, [24]; Wehby, Tally and Falk, [40]). Confronted with overly challenging tasks, students often develop problem behaviour that ‘helps’ them avoid academic settings (Gunter and Coutinho, [11]; Scott, Nelson and Liaupsin, [27]). Teachers who shift their attention from academic instruction to handling problem behaviour often merely reinforce that behaviour (Sutherland and Oswald, [31]), whereas increasing students' exposure to academic instruction could have a demonstrably positive impact on classroom behaviour as well as on the academic achievement of students with BESD (Brigham et al., [5]; Van der Worp‐van der Kamp et al., [35]). Indeed, as Kern, Hilt‐Panahon and Sokol ([16]) state, academic instruction is closely linked to behavioural instruction. We agree with Hagaman ([12]), who warned against addressing academic learning and behaviour as separate issues. However, as long as teachers give too little attention to academic instruction (Vannest and Hagan‐Burke, [37]), it is important to consider this distinction.

Consequently, the degree of systematic teaching and academic instruction seems closely linked to the problem behaviour of students with BESD. We define teachers' activities as systematic if they are (1) planned with concern for students' special needs (plan), (2) realised according to prior planning (do), (3) accompanied by monitoring of students' progress towards the defined goals (check) and (4) adjusted based on the outcomes of the check phase (act). This general cycle of continuous improvement and adaptation can be filled with substantive educational content by using core aspects of ‘effective instruction’ as identified by Glaser ([10]), Van Gelder et al. ([36]), Yell and Rozalski ([43]) (in Yell et al., [45]) and, in the Netherlands, The Dutch Inspectorate of Education ([33]). Reactive teaching styles, instant solutions, frequent changes of plan, following the curriculum blindly, activities without a clear goal and instruction that does not match the students' needs are all examples of a non‐systematic approach. Furthermore, we define teachers' instruction as academic when they focus on academic content through explaining, motivating, and asking and answering academic questions, whereas instruction concerning behaviour includes explaining behavioural rules, reacting to behaviour, providing non‐academic tasks, and punishing and rewarding behaviour.

These two important aspects of instruction, ‘degree of systematic teaching’ and ‘degree of academic instruction’, may be visually displayed in a coordinate system (Figure [NaN] ), depicting instruction along the two dimensions. The y‐axis displays the PDCA cycle. The x‐axis involves a bipolar dimension: the more teachers focus on academic instruction, the higher they score on the x‐axis. The more teachers focus on behaviour, the lower they score on the x‐axis.

Although a simplified representation of reality, the model can function as a framework to categorise different types of teachers. Research shows that teachers confronted with problem behaviour of students in their classrooms react in different ways. For instance, some teachers with a high level of systematic teaching focus mainly on behaviour. Such teachers see reducing problem behaviour as a prerequisite for academic instruction (Sutherland et al., [32]). Confronted with problem behaviour, they tend to focus on systematic behavioural or emotional interventions, using allocated instruction time for redirecting behaviour (these teachers are in the upper left quadrant). Conversely, other teachers with a high level of systematic teaching focus on academic rather than behavioural instruction during instructional time. These teachers (in the upper right quadrant) reinforce their instruction techniques or adapt the task to the skills of the students in order to increase on‐task behaviour (Raggi and Chronis, [26]; Van der Worp‐van der Kamp et al., [35]). The teacher (P) in the coordinate system ‘systematic academic instruction’ (Figure [NaN]) tends to focus more on academic instruction than on redirecting behaviour, and does so in a relatively systematic manner; this teacher is thus represented in the upper right quadrant.

Teachers with a low level of systematic teaching, on the other hand, often work unprepared and more informally, regularly implementing interventions on an ad hoc basis (Kern et al., [16]; Mooij and Smeets, [22]). Confronted with problem behaviour, some of these teachers may persist in focusing on academic tasks without considering students' skills. The tasks offered are then too easy or too difficult, and the curriculum is not always carefully constructed, for example because the teachers feel pressure to complete the curriculum regardless of student mastery (Brigham et al., [5]). These teachers are placed in the lower right quadrant. Other teachers (in the lower left quadrant) spend most of their time redirecting student behaviour at the expense of academic instruction, or may even remove these students from the lessons (Pianta and Hamre, [24]).

Literature shows that many teachers work somewhat ad hoc (Banks and Zionts, [2]; Kern et al., [16]; Mooij and Smeets, [22]) with a focus on behaviour (Levy and Vaughn, [19]; Pianta and Hamre, [24]). Based on the aforementioned theory, however, systematic academic instruction appears to offer the best prospects for positive academic outcomes and could be a key strategy for decreasing problem behaviour. The empirical basis for this is still rather small. To date, little research has been done on the lasting effect of a systematic, cyclic, on‐going approach to academic instruction on the academic and behavioural outcomes of students with BESD (Van der Worp‐van der Kamp et al., [35]). Further research is necessary to ascertain whether teachers in the upper right quadrant actually show better results (behaviourally as well as academically) than teachers in the other quadrants. Therefore, tools are needed to measure systematic teaching and academic instruction.

Teachers' behaviour can be measured through direct observation, interviews or questionnaires. Because the on‐going process of systematic teaching cannot be captured in one or two observations, and because certain parts of the PDCA cycle cannot be observed directly but take place in the teacher's mind before and after the overt lesson, observation does not provide a suitable measurement tool. The advantage of questionnaires over individual interviews is that the former are less time consuming, making it possible to assess a larger number of teachers. Moreover, a questionnaire makes it possible for teachers to measure their own position on systematic academic instruction. Research shows that teacher self‐reports are reasonably accurate (Clunies‐Ross, Little and Kienhuis, [7]; Porter, [25]). A search of the literature, however, did not yield questionnaires concerning teachers' use of the PDCA cycle, nor could questionnaires on academic versus behavioural instruction be found. Therefore, we decided to develop two questionnaires, one concerning the PDCA cycle and the other concerning academic instruction.

Method

Design

The development of the questionnaires began with an orientation visit to a real educational setting for students with BESD. Based on a study of lesson plans, observation of lessons, meetings with specialists and cognitive interviews (Figure [NaN] , step 1), items for both questionnaires were drawn up. Large‐scale data collection, with teachers as respondents, was used as the basis for calculating the reliability of both questionnaires (step 2). In order to validate both scales (step 3), the agreement between teachers' self‐reported and observed use of systematic teaching and academic instruction was studied. For this purpose, observation scales comprising the PDCA cycle and academic instruction were developed (step 2.1). The procedure for developing the questionnaires and observation scales is given in Figure [NaN] . To differentiate between the questionnaires and the observation scales, these were named PDCA<subs>Q</subs> versus PDCA<subs>Obs</subs> and AI<subs>Q</subs> versus AI<subs>Obs</subs>.

Participants

All participating schools were situated in the northern Netherlands.

Step 1 took place in a school for special education whose policy is to offer students adequate academic instruction; accordingly, much time is scheduled for teaching academic skills. Eight teachers participated in this phase. During the period of item construction, these teachers used extended lesson plans (based on the Deming cycle of PDCA).

In step 2, the questionnaires were sent to the administrators of five special schools for students with BESD (aged 7–12), who distributed the questionnaires among 80 teachers. Fifty‐six (72%) teachers completed and returned the questionnaire. Years of teaching experience ranged from <5 (28%), 5–10 (19%), 10–15 (21%), 15–20 (12%) to >20 (20%). Of these teachers, 19 per cent were teaching students in grade 3/4, 26 per cent in grade 5/6 and 40 per cent in grade 7/8 while the remaining 15 per cent were teaching in another combination of grades.

Thirty teachers (54%) agreed to be observed (step 3), but due to illness, too many demands in the classroom and switching jobs, 10 of these changed their minds. Therefore, 20 teachers (36%) were observed for step 3, five of whom were selected, based on their availability, to be observed in order to determine inter‐observer agreement (step 2.1). All observations were conducted by an experienced, regular primary school teacher. The second observations (step 2.1) were undertaken by the first author of this paper.

Development

Questionnaires

The item construction began with open lesson observations, accompanied by semi‐structured interviews. Teachers were asked to hand over their lesson plans before every observation. Plans and observations were discussed with the teachers afterwards. This empirical input and the extensive feedback from teachers resulted in a number of key topics. These were discussed during two meetings with specialists and translated into items for the questionnaires. Next, these items were optimised through cognitive interview techniques (Willis, [41]), in which the researcher submitted the items to each of the eight teachers in turn, inviting them to think aloud about what they felt each item was about and what they felt certain words and phrases in the item meant. Ambiguous items and terms were replaced and read again to the next teacher. This resulted in two final questionnaires comprising 36 items about the PDCA cycle (PDCA<subs>Q</subs>) and 24 items on behavioural and academic instruction (AI<subs>Q</subs>). A 4‐point Likert response scale ranging from ‘no or rarely’ (1) to ‘very often’ (4) was used. The mean score determined respondents' degree of systematic and academic instruction. Table [NaN] shows some examples of the final items used.

Examples of items of the Questionnaires

<table><tr><th>PDCA<sub>Q</sub></th><th>no or rarely</th><th>occasionally</th><th>regularly</th><th>very often</th></tr><tr><td>I adjust the learning goals to the capabilities of the students in advance</td><td /><td /><td /><td /></tr><tr><td>A great deal of allocated learning time is lost to unexpected happenings</td><td /><td /><td /><td /></tr><tr><td>During the complete lesson, I check the attainability of the learning goals</td><td /><td /><td /><td /></tr><tr><td>I use evaluation data for the preparation of my lessons</td><td /><td /><td /><td /></tr><tr><td>I discuss the realization of the learning goals with my students</td><td /><td /><td /><td /></tr></table>

<table><tr><th>AI<sub>Q</sub></th><th>no or rarely</th><th>occasionally</th><th>regularly</th><th>very often</th></tr><tr><td>I spend allocated instruction time actually on teaching academic skills</td><td /><td /><td /><td /></tr><tr><td>In case of disruptive behaviour, I check the appropriateness of the learning task</td><td /><td /><td /><td /></tr><tr><td>In case of disruptive behaviour I discuss behavioural rules</td><td /><td /><td /><td /></tr><tr><td>I mainly reward good behaviour</td><td /><td /><td /><td /></tr></table>

The items of the AI<subs>Q</subs> comprised both behavioural and academic instruction. High scores on academic instruction were expected to go along with low scores on behavioural instruction. Since these two sets of scores run in opposite directions, the items on behavioural instruction were reverse‐coded: the higher teachers score on items concerning behaviour, the lower they score on the AI<subs>Q</subs>.
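The reverse‐coding step can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the item names and responses are invented, and the 5 − r transformation simply assumes the 4‐point scale described above (1 ↔ 4, 2 ↔ 3).

```python
def recode_behavioural_items(responses, behavioural_items):
    """Reverse-code behavioural items on a 4-point Likert scale (1..4 -> 4..1)."""
    recoded = dict(responses)
    for item in behavioural_items:
        recoded[item] = 5 - responses[item]  # 1 <-> 4, 2 <-> 3
    return recoded

# Hypothetical responses: two academic and two behavioural items
responses = {"academic_1": 4, "academic_2": 3,
             "behavioural_1": 4, "behavioural_2": 1}
recoded = recode_behavioural_items(responses, ["behavioural_1", "behavioural_2"])

# The mean over all (re-coded) items determines the AI_Q score
ai_q_score = sum(recoded.values()) / len(recoded)
```

After recoding, a teacher who frequently intervenes on behaviour contributes low item values, pulling the mean AI<subs>Q</subs> score down as intended.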

Observation scales

All items of the PDCA<subs>Q</subs> were used for constructing the PDCA<subs>Obs</subs> observation scale. The items included specific descriptions of teacher behaviour concerning the PDCA cycle and were rated from 1 to 4 points. Teachers' preparation and evaluation of the lessons (some items of the plan phase and most items of the act phase) took place before and after the observed lessons. These items were hard to observe directly and were therefore assessed by checking written lesson plans and by interviewing teachers after the observed lessons. The mean score determined teachers' degree of systematic instruction. Table [NaN] shows some examples of the items, with corresponding points.

Examples of observation items of the PDCA Obs (Observation scale)

<table><tr><th>1 point</th><th>2 points</th><th>3 points</th><th>4 points</th></tr><tr><td>The teacher appointed no/a superficial goal for the whole class</td><td>The teacher appointed an adaptive/measurable goal for the whole class</td><td>The teacher appointed an adaptive and measurable goal for the whole class</td><td>The teacher appointed an adaptive, measurable goal for groups/individual students</td></tr><tr><td>In case of incidents, the teacher reacts ad hoc</td><td>In case of incidents, the teacher uses a planned general approach</td><td>In case of incidents, the teacher uses a planned approach adapted to the group</td><td>In case of incidents, the teacher uses a planned approach adapted to specific students</td></tr><tr><td>The teacher offers the whole group a general instruction, strictly from the method/book</td><td>The teacher offers the whole group a specific/adaptive instruction</td><td>The teacher offers specific groups a specific/adaptive instruction</td><td>The teacher offers specific students a specific/adaptive instruction</td></tr><tr><td>The teacher does not address the learning goal/process</td><td>The teacher addresses the learning goal/process slightly</td><td>The teacher addresses the learning goal/process extensively</td><td>The teacher verifies the learning goal/process and discusses it with the students</td></tr></table>

When developing the AI<subs>Obs</subs>, a distinction was made between academically and behaviourally oriented remarks. Everything a teacher said regarding the content of the academic task was noted as academic instruction, e.g., ‘Look at assignment three on the whiteboard’, ‘Which part of this task do you not understand?’ or ‘Continue with page three’. Likewise, every remark about students' behaviour was noted as behavioural instruction, e.g., ‘Sit on your chair’, ‘Keep working’, ‘Be quiet’ or ‘Pay attention please’. Although such remarks may be intended to get pupils to work, they do not assign concrete academic tasks and are thus behaviourally focused. Time spent on matters unrelated to the lesson, e.g., small talk, handing out medication, talking to persons outside the classroom and periods of silence, was noted as ‘other’. For the precise scoring of the instruction, teachers were asked to record the observed lesson using a small portable audio device. Afterwards, the time spent on academic, behavioural and ‘other’ instruction was noted. The AI<subs>Obs</subs> score was determined by dividing the percentage of time a teacher spent on academic instruction by the percentage spent on both academic and behavioural instruction. Thus, the more time teachers spent on behavioural instruction, the lower the score on the AI<subs>Obs</subs>. To facilitate comparison with the outcomes from the 4‐point scale of the AI<subs>Q</subs>, this score was multiplied by 4.
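The AI<subs>Obs</subs> scoring rule reduces to a one‐line computation. A minimal sketch, where the minute values are invented for illustration:

```python
def ai_obs_score(academic_sec, behavioural_sec):
    """Share of instruction time spent on academic content, scaled to the
    4-point range of the AI_Q; 'other' time is excluded from the ratio."""
    return academic_sec / (academic_sec + behavioural_sec) * 4

# Hypothetical 46-minute lesson: 26 min academic, 9 min behavioural, 11 min other
score = ai_obs_score(26 * 60, 9 * 60)  # ~2.97 on the 4-point scale
```

Because ‘other’ time is excluded, the score reflects only the balance between academic and behavioural instruction, which is exactly the x‐axis dimension of the coordinate system.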

Data analyses

The first step of the construction of the questionnaires involved an on‐going and iterative process of collecting and analysing data. The analyses for all subsequent steps, as well as the final outcomes of the questionnaires in the coordinate system, were performed in SPSS (IBM, Armonk, New York, United States). The reliability of both questionnaires and the PDCA<subs>Obs</subs> was assessed by calculating Cronbach's alpha. Non‐essential items that correlated poorly with the total score (item–total correlation <0.2) were removed. The inter‐observer agreement of the PDCA<subs>Obs</subs> and AI<subs>Obs</subs> was established using Bland–Altman plots (Bland and Altman, [4]). The validity of the questionnaires was determined by the correlation between the outcomes of the questionnaires and the observation scales. Because a correlation does not automatically imply agreement between two measurements, Bland–Altman plots were also used for measuring the agreement between the questionnaires and observation scales.
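As a sketch of the reliability step, Cronbach's alpha and item–total correlations can be computed with the Python standard library alone. The data below are invented, and the corrected variant (each item correlated with the total of the remaining items) is an assumption on our part, since the text does not specify which variant was used.

```python
from statistics import mean, pvariance

def cronbach_alpha(items):
    """Cronbach's alpha; items = one list of scores per item, each of length n_respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))

def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x) ** 0.5
                  * sum((b - my) ** 2 for b in y) ** 0.5)

def corrected_item_total(items):
    """Correlation of each item with the total of the *other* items."""
    totals = [sum(scores) for scores in zip(*items)]
    return [pearson(item, [t - v for t, v in zip(totals, item)]) for item in items]

# Invented responses: 3 items x 5 respondents; items below 0.2 would be dropped
items = [[1, 2, 3, 4, 4], [2, 2, 3, 4, 4], [1, 3, 3, 3, 4]]
alpha = cronbach_alpha(items)
keep = [i for i, r in enumerate(corrected_item_total(items)) if r >= 0.2]
```

In the study itself these computations were done in SPSS; the sketch only makes the removal criterion (item–total correlation <0.2) and the alpha formula concrete.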

Bland–Altman plots are based on graphical techniques and provide information about the agreement between, and the nature of the differences between, two measurements: in this case, those of two different observers concerning the PDCA<subs>Obs</subs> and the AI<subs>Obs</subs> (inter‐observer agreement) on the one hand, and measurements of systematic teaching and academic instruction by two different instruments, namely the questionnaires and the observation scales, on the other. Provided the differences have an approximately normal distribution, about 95 per cent of the calculated differences will fall between the mean difference plus or minus two standard deviations (the latter being termed the limits of agreement). The smaller the range between these two limits, the better the agreement. The mean differences between the two measurements as well as the limits of agreement are also presented in the plots (represented in Figures [NaN], [NaN], [NaN], [NaN] by a black line and two dotted lines). The acceptability of the differences between the two measurements was determined by the limits of agreement. A difference in score exceeding these limits was regarded as a lack of agreement.
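The limits of agreement described above amount to the mean of the paired differences ± 2 SD. A minimal sketch with invented scores; whether a population or sample SD was used is not stated in the text, so the sample SD is an assumption here.

```python
from statistics import mean, stdev

def bland_altman_limits(m1, m2):
    """Mean difference and limits of agreement (mean +/- 2 SD) for paired measurements."""
    diffs = [a - b for a, b in zip(m1, m2)]
    d = mean(diffs)
    sd = stdev(diffs)  # sample SD; an assumption, as the text does not specify
    return d, d - 2 * sd, d + 2 * sd

# Invented paired scores, e.g., questionnaire vs. observation scale per teacher
q_scores = [3.0, 3.5, 2.5, 3.0]
obs_scores = [3.0, 2.5, 1.5, 1.0]
d, low, high = bland_altman_limits(q_scores, obs_scores)

# Differences exceeding the limits are regarded as a lack of agreement
outside = [i for i, (a, b) in enumerate(zip(q_scores, obs_scores))
           if not low <= a - b <= high]
```

Plotting each difference against the pair's mean (the actual Bland–Altman plot) then reveals whether disagreement varies with the score level, which is how the pattern on the PDCA axis was detected.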

Finally, the outcomes of the questionnaires of the sample of 56 teachers were processed in a scatterplot representing the above‐mentioned coordinate system of systematic academic instruction.

Results

The first step resulted in the following key topics on systematic academic instruction:

- preparation of concrete academic goals as well as their communication and evaluation
- consideration of differences between students during planning, throughout instruction and as a result of the evaluation
- understanding of problem behaviour and knowledge of how to handle it
- anticipation of problem behaviour and incidents
- teachers' focus on a behavioural versus an academic approach.

All these topics formed the basis for 60 items.

Before establishing the reliability of both instruments (step 2), six items of the PDCA<subs>Q</subs> and four of the AI<subs>Q</subs> were deleted because they correlated extremely weakly with the total score. One item with a low item–total correlation (0.15) was considered essential for the PDCA process (preparing additional tasks for students who had completed their task) and was therefore not deleted. After removal of the 10 items, the alphas for the remaining items of the PDCA<subs>Q</subs> and AI<subs>Q</subs> were 0.89 and 0.76 respectively, suggesting that the scales with these items were reliable. The alpha of the PDCA<subs>Obs</subs> was 0.89.

Concerning the inter‐observer agreement of the observation scales, the Bland–Altman plot for the PDCA<subs>Obs</subs> (Figure [NaN]) showed that observer 1 scored on average 0.03 points higher than observer 2 [standard deviation (SD) 0.16]. All differences fell between the limits of agreement (−0.30 and 0.36) and were evenly scattered around the mean difference. Regarding the AI<subs>Obs</subs>, observer 1 scored on average 0.31 points (SD 0.20) lower than observer 2 (Figure [NaN]). All differences fell within the limits of agreement (−0.71 and 0.09), but the disagreement about teacher 3 (a difference of 0.63 points) was remarkable. To understand this disagreement, the audiotape of that particular teacher was played once again by both observers. It turned out that observer 1 and observer 2 had different interpretations of one aspect of academic instruction, namely remarks concerning learning conditions, like ‘get your book’, ‘open your book’ and ‘pick up your pencil’. Observer 1 scored these actions as behaviourally focused, while observer 2 scored them as academically focused. Teacher 3 in particular spent much time on such actions. Because these remarks do not include specific academic instruction, the AI<subs>Obs</subs> was improved by describing these actions as behaviourally focused. Apart from this difference, both observers were in broad agreement.

For the validity of the questionnaires (step 3), 20 lessons with a mean duration of 46.2 (SD 8.5) minutes were observed. The results indicated that most of the observed instructional time was spent on academic instruction (56%). It should be noted that the audio recordings revealed that much of the academic instruction was given in one‐to‐one interaction during independent practice: learning tasks assigned to be done by students without direct supervision. All outcomes of the observed teachers are displayed in Table [NaN].

Outcomes questionnaires and observation scales (N = 20)

<table><tr><th /><th>Mean</th><th>SD</th></tr><tr><td>PDCA<sub>Q</sub></td><td>2.90</td><td>0.27</td></tr><tr><td>AI<sub>Q</sub></td><td>2.66</td><td>0.38</td></tr><tr><td>PDCA<sub>Obs</sub></td><td>2.54</td><td>0.53</td></tr><tr><td>AI<sub>Obs</sub></td><td>2.97</td><td>0.49</td></tr><tr><td colspan="3">Percentage of time spent on academic versus behavioural instruction</td></tr><tr><td>Academic instruction</td><td>56.0%</td><td>12.8</td></tr><tr><td>Behavioural instruction</td><td>19.5%</td><td>10.2</td></tr><tr><td>Other</td><td>24.5%</td><td>11.3</td></tr></table>

For the PDCA<subs>Q</subs> and the PDCA<subs>Obs</subs> as well as for the AI<subs>Q</subs> and AI<subs>Obs</subs>, we found moderate Pearson correlations of 0.32 and 0.31 respectively. Bland–Altman plots (Figures [NaN] and [NaN]) provided more specific information on the agreement between the questionnaires and observation scales. As Figure [NaN] shows, the mean difference between the scores on the PDCA<subs>Q</subs> and PDCA<subs>Obs</subs> was 0.36 (SD 0.53). The 95 per cent limits of agreement were −0.7 and 1.42, and all differences fell within them. Generally, teachers ranked higher on the PDCA<subs>Q</subs> than the PDCA<subs>Obs</subs>, while the distribution of the dots in the plot reveals that teachers with a high mean score on the x‐axis of the plot (>3) also ranked higher on the observation scale. Consequently, most teachers assessed their systematic teaching higher than the observer did, except for a couple of teachers with a high mean score on both the PDCA<subs>Q</subs> and PDCA<subs>Obs</subs>. Analyses of the differences between the scores revealed that the greatest differences lay in the plan and do parts of the cycle. The degree of planning indicated by the questionnaire outcomes was not observed during the lessons, mainly because an overt, written plan was seldom accessible (except for the teachers with a high mean score).

As Figure [NaN] shows, the mean difference between the scores on the AI<subs>Q</subs> and AI<subs>Obs</subs> was −0.31 (SD 0.52). Generally, teachers ranked lower on the AI<subs>Q</subs> than the AI<subs>Obs</subs>. There was no pattern in the relation between the mean score and the differences. The dots were evenly scattered around the mean difference and all differences fell between the limits of agreement (−1.35 and 0.73).
Even though Figures [NaN] and [NaN] indicate that the differences between questionnaires and observation scales were acceptable, the limits of agreement were considered wide.

The final outcomes of the questionnaires of all 56 teachers are shown in the coordinate system ‘systematic academic instruction’ (Figure [NaN]). The mean scores on the PDCA<subs>Q</subs> and AI<subs>Q</subs> were 2.89 (SD 0.31) and 2.68 (SD 0.38) respectively. In the upper quadrants, 69 per cent of teachers scored in the upper right and 22 per cent in the upper left. In the lower quadrants, 7 per cent scored in the lower right and a meagre 2 per cent in the lower left.
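Placing teachers in the coordinate system amounts to comparing the two mean scores against the axis crossing. A sketch, assuming the axes cross at the midpoint (2.5) of the 4‐point scale; the text does not state the crossing point explicitly, so that value is an assumption.

```python
def quadrant(pdca_q, ai_q, midpoint=2.5):
    """Assign a teacher to a quadrant of the 'systematic academic instruction' system."""
    vertical = "upper" if pdca_q >= midpoint else "lower"    # y-axis: PDCA cycle
    horizontal = "right" if ai_q >= midpoint else "left"     # x-axis: academic vs. behavioural
    return f"{vertical} {horizontal}"

# The reported sample means (2.89, 2.68) land in the upper right quadrant
position = quadrant(2.89, 2.68)
```

Counting quadrant labels over all 56 teachers would then reproduce the percentages reported above.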

Discussion

The aim of this study was to develop a tool for measuring teachers' systematic academic instruction to students with BESD. Various steps were taken to ensure that the tool, comprising two questionnaires, was reliable and valid and stayed close to the daily practice of teaching students with BESD. The empirical data showed that the psychometric quality of both questionnaires was satisfactory to good. The reliability of both questionnaires was high (PDCA<subs>Q</subs>) and acceptable (AI<subs>Q</subs>), as demonstrated by Cronbach's α values of 0.89 and 0.76 respectively. Their validity was supported by fair correlations between the questionnaires and the observation scales (0.32 and 0.31). Because correlation coefficients can be misleading in method agreement studies (Bland and Altman, [4]), Bland–Altman plots were used to quantify the differences between the two methods. These plots not only made the differences easy to interpret, they also showed their magnitude. The wide limits of agreement revealed some differences between questionnaires and observation scales. Given the different approaches of the two methods (self‐reported versus observed behaviour), we consider the limits of agreement small enough to be confident that both questionnaires reflected teachers' systematic academic teaching in a satisfactory manner.

Next, the questionnaires made it possible to place all participants in the coordinate system ‘systematic academic instruction’. As Figure [NaN] reveals, the majority of teachers scored in the upper right quadrant, indicating a systematic focus on academic instruction. These outcomes do not confirm claims in the literature that numerous teachers tend to work in an ad hoc manner, focusing more on behaviour than on academic instruction. A possible explanation is that systematic academic instruction is a point of particular interest in Dutch special education for students with BESD, especially since the Dutch education inspectorate has emphasised the importance of high academic outcomes for all students, including those with BESD. Further analysis of each axis of the coordinate system may enhance our understanding of these outcomes, and the aforementioned differences between questionnaires and observation scales can provide additional indications.

Looking at the y‐axis, we see that teachers scored relatively high on systematic teaching, with a mean score of 2.89. Observation, however, revealed that teachers hardly ever write out lesson plans. Although this finding is in agreement with the literature (John, [13] ; Morine‐Dershimer, [23] ; Sutherland and Oswald, [31] ), the question arises to what extent this implicit, covert planning can still be regarded as systematic. Detailed, daily written lesson plans are considered important organisational blueprints of what will be taught (Yell, Busch and Rogers, [44] ). We agree with Shen et al. ([28] ), who describe lesson plans as ‘important yet often overlooked sources of professional growth’. The PDCA<subs>Q</subs> makes no explicit distinction between overt and covert planning, but the score would probably be lower if the items referred explicitly to written plans. Future studies on the importance and feasibility of written lesson plans are recommended.

Considering the x‐axis of the coordinate system, the mean score of 2.68 on behavioural versus academic instruction suggests that teachers tend to focus slightly more on the latter. The observations confirm this outcome, but also reveal that a striking amount of academic instruction was offered to students individually. This corresponds with the findings of Vaughn et al. ([39] ), who concluded that students in special education services receive more individual instruction than their peers without disabilities. An unwanted side effect, however, could be that numerous students spend considerable time waiting for their own individual instruction. Especially for students with BESD, such waiting could be a source of problem behaviour. The AI<subs>Q</subs> does not distinguish between individual, group and whole‐class instruction. The PDCA<subs>Q</subs>, however, measures whether all students receive enough instruction. We can therefore assume that teachers who score high on both axes (upper right quadrant) give sufficient instruction to every individual student. This highlights the importance of using the two questionnaires together.

The final aim in developing the questionnaires was to determine the position of the most effective teachers in the coordinate system. Comparing teachers' positions in the coordinate system with students' academic and behavioural outcomes might reveal the most effective approach. For example, teaching students with BESD might require a very high score on both axes; if so, effective teachers would be positioned in the extreme upper right corner of the coordinate system. On the other hand, it is also possible that academic instruction to students with BESD always requires a certain degree of behavioural instruction; in that case, effective teachers would be positioned near the upper centre of the coordinate system. More research is required to delineate the positions of effective teachers in the coordinate system, and the questionnaires are well suited to this purpose. Furthermore, they can be used to measure the baseline and the effect of an intervention, and teachers can use the tool to assess their own degree of systematic academic instruction.

Although the questionnaires reveal important information concerning systematic academic instruction, they should be handled carefully. The modest sample size makes it hard to draw firm conclusions about the reliability and validity of the questionnaires, as well as about the outcomes of the coordinate system. Bland–Altman plots proved very useful for clarifying inter‐observer agreement as well as the differences between observation scales and questionnaires. The interpretation of the width of the limits of agreement, however, is challenging: only if the limits are narrow enough can two different methods of measurement be considered essentially equivalent, and what counts as ‘narrow enough’ is not statistically determined but a matter of clinical judgement. Unfortunately, it was not possible to compare the newly developed questionnaires with established instruments, which underscores the need for instruments assessing systematic academic instruction. Although more research is necessary to optimise the questionnaires, this study takes an important step in that process.

References

1 Al‐Hendawi, M. ( 2012 ) ‘ Academic engagement of students with emotional and behavioral disorders: existing research, issues and future directions.’ Emotional & Behavioural Difficulties, 17 ( 2 ), pp. 125 – 141.

2 Banks, T. & Zionts, P. ( 2009 ) ‘ Teaching a cognitive behavioral strategy to manage emotions: rational emotive behavior therapy in an educational setting.’ Intervention in School and Clinic, 44 ( 5 ), pp. 307 – 313.

3 Berliner, D. C. ( 1988 ) ‘ Effective classroom management and instruction: a knowledge base for consultation.’ In J. L. Graden, J. E. Zins & M. J. Curtis (eds), Alternative Educational Delivery Systems: Enhancing Instructional Options for All Students. pp. 309 – 325. Washington, DC : National Association of School Psychologists.

4 Bland, J. M. & Altman, D. G. ( 1986 ) ‘ Statistical methods for assessing agreement between two methods of clinical measurement.’ Lancet, 327 ( 8476 ), pp. 307 – 310.

5 Brigham, F. J., Gustashaw, W. E., Wiley, A. L. & Brigham, M. ( 2004 ) ‘ Research in the wake of the no child left behind act: why the controversies will continue and some suggestions for controversial research.’ Behavioral Disorders, 29 ( 3 ), pp. 300 – 310.

6 Brophy, J. E. & Good, T. L. ( 1986 ) ‘ Teacher behavior and student achievement.’ In M. C. Wittrock (ed.), Handbook of Research on Teaching. ( 3rd Edit ), pp. 328 – 375. New York : Macmillan.

7 Clunies‐Ross, P., Little, E. & Kienhuis, M. ( 2008 ) ‘ Self‐reported and actual use of proactive and reactive classroom management strategies and their relationship with teacher stress and student behaviour.’ Educational Psychology, 28 ( 6 ), pp. 693 – 710.

8 Cooper, P. & Jacobs, B. ( 2011 ) From Inclusion to Engagement: Helping Students Engage with Schooling through Policy and Practices. Chichester : Wiley‐Blackwell.

9 Deming, W. ( 1986 ) Out of Crisis. Cambridge, MA : M.I.T. Center for Advanced Engineering Study.

10 Glaser, R. ( 1962 ) ‘ Psychology and instructional technology.’ In R. Glaser (ed.), Training Research and Education. Pittsburgh, PA : University of Pittsburgh Press.

11 Gunter, P. L. & Coutinho, M. J. ( 1997 ) ‘ Negative reinforcement in classrooms: what we're beginning to learn.’ Teacher Education and Special Education, 20 ( 3 ), pp. 249 – 264.

12 Hagaman, J. L. ( 2012 ) ‘ Chapter 2 academic instruction and students with emotional and behavioral disorders.’ In J. P. Bakken, F. E. Obiakor & A. F. Rotatori (eds), Behavioral Disorders: Practice Concerns and Students with EBD. (Vol. 23 ), Advances in Special Education, pp. 23 – 41. Bingley, UK : Emerald Group Publishing Limited.

13 John, P. D. ( 2006 ) ‘ Lesson planning and the student teacher: re‐thinking the dominant model.’ Journal of Curriculum Studies, 38 ( 4 ), pp. 483 – 498.

14 Kartikowati, R. S. ( 2013 ) ‘ The technique of “plan do check and act” to improve trainee teachers' skills.’ Asian Social Science, 9 ( 12 ), pp. 268 – 275.

15 Kern, L. & Clemens, N. H. ( 2007 ) ‘ Antecedent strategies to promote appropriate classroom behavior.’ Psychology in the Schools, 44 ( 1 ), pp. 65 – 75.

16 Kern, L., Hilt‐Panahon, A. & Sokol, N. G. ( 2009 ) ‘ Further examining the triangle tip: improving support for students with emotional and behavioral needs.’ Psychology in the Schools, 46 ( 1 ), pp. 18 – 32.

17 Kurz, A., Talapatra, D. & Roach, A. T. ( 2012 ) ‘ Meeting the curricular challenges of inclusive assessment: the role of alignment, opportunity to learn, and student engagement.’ International Journal of Disability, Development and Education, 59 ( 1 ), pp. 37 – 52.

18 Ledoux, G., Roeleveld, J., van Langen, A. & Paas, T. ( 2012 ) COOL Special Technisch Rapport Meting Schooljaar 2010/2011. Amsterdam : Kohnstamm Instituut.

19 Levy, S. & Vaughn, S. ( 2002 ) ‘ An observational study of teachers' reading instruction of students with emotional or behavioral disorders.’ Behavioral Disorders, 27 ( 3 ), pp. 215 – 235.

20 Lewis, T. J., Hudson, S., Richter, M. & Johnson, N. ( 2004 ) ‘ Scientifically supported practices in emotional and behavioral disorders: a proposed approach and brief review of current practices.’ Behavioral Disorders, 29 ( 3 ), pp. 247 – 259.

21 Matheson, A. S. & Shriver, M. D. ( 2005 ) ‘ Training teachers to give effective commands: effects on student compliance and academic behaviors.’ School Psychology Review, 34 ( 2 ), pp. 202 – 219.

22 Mooij, T. & Smeets, E. ( 2009 ) ‘ Towards systemic support of pupils with emotional and behavioural disorders.’ International Journal of Inclusive Education, 13 ( 6 ), pp. 597 – 616.

23 Morine‐Dershimer, G. ( 1978 ) ‘ Planning in classroom reality: an in‐depth look.’ Educational Research Quarterly, 3 ( 4 ), pp. 83 – 99.

24 Pianta, R. C. & Hamre, B. K. ( 2009 ) ‘ A lot of students and their teachers need support: using a common framework to observe teacher practices might help.’ Educational Researcher, 38 ( 7 ), pp. 546 – 548.

25 Porter, A. C. ( 2002 ) ‘ Measuring the content of instruction: uses in research and practice.’ Educational Researcher, 31 ( 7 ), pp. 3 – 14.

26 Raggi, V. & Chronis, A. ( 2006 ) ‘ Interventions to address the academic impairment of children and adolescents with ADHD.’ Clinical Child & Family Psychology Review, 9 ( 2 ), pp. 85 – 111.

27 Scott, T. M., Nelson, C. M. & Liaupsin, C. J. ( 2001 ) ‘ Effective instruction: the forgotten component in preventing school violence.’ Education & Treatment of Children, 24 ( 3 ), pp. 309 – 322.

28 Shen, J., Poppink, S., Cui, Y. & Fan, G. ( 2007 ) ‘ Lesson planning: a practice of professional responsibility and development.’ Educational Horizons, 85 ( 4 ), pp. 248 – 260.

29 Simpson, R. L., Peterson, R. L. & Smith, C. R. ( 2011 ) ‘ Critical educational program components for students with emotional and behavioral disorders: science, policy and practice.’ Remedial and Special Education, 32 ( 3 ), pp. 230 – 242.

30 Siperstein, G. N., Wiley, A. L. & Forness, S. R. ( 2011 ) ‘ School context and the academic and behavioral progress of students with emotional disturbance.’ Behavioral Disorders, 36 ( 3 ), pp. 172 – 184.

31 Sutherland, K. S. & Oswald, D. P. ( 2005 ) ‘ The relationship between teacher and student behavior in classrooms for students with emotional and behavioral disorders: transactional processes.’ Journal of Child and Family Studies, 14 ( 1 ), pp. 1 – 14.

32 Sutherland, K. S., Lewis‐Palmer, T., Stichter, J. & Morgan, P. L. ( 2008 ) ‘ Examining the influence of teacher behavior and classroom context on the behavioral and academic outcomes for students with emotional or behavioral disorders.’ The Journal of Special Education, 41 ( 4 ), pp. 223 – 233.

33 The Dutch Inspectorate of Education. ( 2011 ) Toezichtkader bve 2012. <http://www.onderwijsinspectie.nl/binaries/content/assets/Actueel_publicaties/2011/Toezichtkader+bve+2012.pdf> (accessed 7 February 2014).

34 Umbreit, J., Ferro, J., Liaupsin, C. & Lane, K. ( 2007 ) Functional Behavioral Assessment and Function‐Based Interventions: An Effective, Practical Approach. Upper Saddle River, NJ : Prentice‐Hall.

35 Van der Worp‐van der Kamp, L., Pijl, S. J., Bijstra, J. O. & Van den Bosch, E. J. ( 2014 ) ‘ Teaching academic skills as an answer to behavioural problems of students with emotional or behavioural disorders: a review.’ European Journal of Special Needs Education, 29 ( 1 ), pp. 29 – 46.

36 Van Gelder, L., Oudkerk Pool, T., Peters, J. & Sixma, J. ( 1973 ) Didactische Analyse. Groningen : Wolters‐Noordhoff.

37 Vannest, K. J. & Hagan‐Burke, S. ( 2010 ) ‘ Teacher time use in special education.’ Remedial and Special Education, 31 ( 2 ), pp. 126 – 142.

38 Vannest, K. J., Soares, D., Harrison, J., Brown, L. & Parker, R. ( 2010 ) ‘ Changing teacher time.’ Preventing School Failure, 54 ( 2 ), pp. 86 – 98.

39 Vaughn, S., Levy, S., Coleman, M. & Bos, C. S. ( 2002 ) ‘ Reading instruction for students with LD and EBD: a synthesis of observation studies.’ Journal of Special Education, 36 ( 1 ), pp. 2 – 13.

40 Wehby, J., Tally, B. B. & Falk, K. B. ( 2004 ) ‘ Identifying the relation between the function of student problem behavior and teacher instructional behavior.’ Assessment for Effective Intervention, 30 ( 1 ), pp. 41 – 50.

41 Willis, G. B. ( 2005 ) Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks, CA : Sage.

42 Winn, D. D., Menlove, R. S. & Zsiray, W. A., Jr. ( 1997 ) ‘ An invitation to innovation: rethinking the high school day.’ NASSP Bulletin, 81 ( 588 ), pp. 10 – 18.

43 Yell, M. L. & Rozalski, M. E. ( 2013 ) ‘ Teaching students with EBD I: Effective teaching.’ In M. L. Yell, N. Meadows, E. Drasgow & J. Shriner (eds), Evidence Based Practices for Educating Students with Emotional and Behavioral Disorders. ( 2nd edn ). Upper Saddle River, NJ : Pearson.

44 Yell, M. L., Busch, T. W. & Rogers, D. C. ( 2008 ) ‘ Planning instruction and monitoring student performance.’ Beyond Behavior, 17 ( 2 ), pp. 31 – 38.

45 Yell, M. L., Meadows, N., Drasgow, E. & Shriner, J. ( 2009 ) Educating Students with Emotional and Behavioral Disorders in General and Special Education. Upper Saddle River, NJ : Merrill/Prentice Hall.

Graph: Coordinate system: ‘systematic academic instruction’

Graph: Construction of the questionnaires and observation scales

Graph: Bland–Altman plot concerning inter‐observer agreement observation scale concerning Plan‐Do‐Check‐Act (PDCAObs)

Graph: Bland–Altman plot concerning inter‐observer agreement observation scale concerning academic instruction (AIObs)

Graph: Bland–Altman plot concerning questionnaire concerning Plan‐Do‐Check‐Act (PDCAQ) and observation scale concerning Plan‐Do‐Check‐Act (PDCAObs)

Graph: Bland–Altman plot concerning questionnaire concerning academic instruction (AIQ) and observation scale concerning academic instruction (AIObs)

Graph: Outcomes questionnaire concerning Plan‐Do‐Check‐Act (PDCAQ) and questionnaire concerning academic instruction (AIQ) presented in coordinate system ‘systematic academic instruction’ (N = 56)

By Lidy Worp‐van der Kamp; Sip Jan Pijl; Wendy J. Post; Jan O. Bijstra and Els J. Bosch