One Way to Measure Graduated Teachers' Impact

Background

Accreditation takes various forms and is administered by nationally recognized organizations. In education, accreditors make sure that institutions preparing teachers meet standards at various levels. They do this by examining how the Educator Preparation Provider (EPP) prepares new teachers regarding content and pedagogical knowledge, and the impact graduated educators have on their students.

Accreditation in the United States involves non-governmental entities as well as federal and state government agencies. Only students from nationally accredited institutions may receive federal student aid from the U.S. Department of Education. Institutions also seek accreditation because the process compels educator preparation providers to engage in self-assessment and reflection grounded in evidence-based analyses of the efficacy of their programs. This is an ongoing process shaped by evidence-based modifications. Typically, accreditation reviews are conducted every 7 to 10 years.

The accreditation of education programs has been around since the 1950s, but in 2010, the National Council for Accreditation of Teacher Education (NCATE) called for more immersive teacher education programs. Among the resulting changes were new curricula built on constant feedback and interaction between teacher education programs and school districts. Two years later, NCATE merged with the Teacher Education Accreditation Council to form the Council for the Accreditation of Educator Preparation (CAEP), a national accreditor tasked with strengthening accrediting standards and performance reporting for teacher education programs. Missouri State University (MSU) seeks re-accreditation of its educator preparation programs under CAEP.

The principles behind the CAEP standards for educator preparation programs focus on the ability of a provider's graduates to help their students learn and on the provider's culture of assessment in support of continuous improvement.

The focus of this research brief is to collect data relevant to program impact (CAEP Standard 4). It is important to emphasize that under CAEP policy, all four components of Standard 4 must be met for an EPP to be fully accredited.

CAEP has noted that the Standard 4 components are challenging to measure. The first three CAEP standards emphasize preparation, whereas Standard 4 focuses on completers' performance once they are employed as teachers. Components 4.1 and 4.2 are especially challenging. CAEP reports that EPP representatives note the following about this standard:

  • [EPPs] have little or no control over in-service data and would face difficult hurdles in gaining access; 
  • States and districts are increasingly gathering data for some or all of the components, but there are differences across states and school districts in what is measured and how; 
  • While P-12 student surveys are more widely used each year, they still are completed by only a small fraction of enrolled students and may or may not be linked with teacher evaluations; and 
  • If states or districts fail to share results of their measures with educator preparation providers, then EPPs will need to make more substantial efforts to document their evidence for Standard 4.

In September of 2017, CAEP reported on the first 17 institutions that went through the accreditation process under the new standards and guidelines[1]. CAEP found that a third of these institutions drew evidence for program impact from their state or district measures of student learning. In Missouri, educator preparation providers do not receive P-12 student learning or growth data from the state. Nevertheless, through a laborious process, MSU was able to collect teacher impact data from several partnering school districts.


Purposes

The primary goal of this study is to examine the impact of MSU graduates for component 4.1 (Student learning and development). To do so, we designed a case study following CAEP’s guidelines for evidence[2]. In this case study, we used data from Missouri’s largest school district to learn more about the effectiveness of our completers when they are teaching.

[1] http://caepnet.org/~/media/Files/caep/standards/caepstnd4-faq.pdf?la=en

[2] http://caepnet.org/~/media/Files/caep/standards/guidancecomponent41september2017.pdf?la=en

[3] http://caepnet.org/~/media/Files/caep/knowledge-center/caep-evidence-guide.pdf?la=en

CAEP Standard 4:

The provider demonstrates the impact of its completers on P-12 student learning and development, classroom instruction, and schools, and the satisfaction of its completers with the relevance and effectiveness of their preparation.

Impact on P-12 Student Learning and Development

4.1 The provider documents, using multiple measures, that program completers contribute to an expected level of student-learning growth. Multiple measures shall include all available growth measures (including value-added measures, student-growth percentiles, and student learning and development objectives) required by the state for its teachers and available to educator preparation providers, other state-supported P-12 impact measures, and any other measures employed by the provider.

Indicators of Teaching Effectiveness

4.2 The provider demonstrates, through structured and validated observation instruments and/or student surveys, that completers effectively apply the professional knowledge, skills, and dispositions that the preparation experiences were designed to achieve.

Satisfaction of Employers

4.3 The provider demonstrates, using measures that result in valid and reliable data and including employment milestones such as promotion and retention, that employers are satisfied with the completers’ preparation for their assigned responsibilities in working with P-12 students.

Satisfaction of Completers

4.4 The provider demonstrates, using measures that result in valid and reliable data, that program completers perceive their preparation as relevant to the responsibilities they confront on the job, and that the preparation was effective.

Participants

In the Springfield Public School District (SPS), more than 25,000 students attend 36 elementary schools, an intermediate school (fifth and sixth grades), nine middle schools, five high schools, and special programs. Due to its proximity to MSU, most MSU teacher candidates complete placements in the district, and many graduates work there.

For this study, we focus on MSU-prepared elementary teachers recently hired by SPS (within the previous three years) for whom student achievement data (third, fourth, and fifth grades) were available. We collected data on 10 teachers in 2014, 17 teachers in 2015, and 11 teachers in 2016. In all, this study includes data from 38 teachers who are MSU graduates and 784 of their students.

Measures

The Missouri Assessment Program (MAP) was designed to measure how well students acquire the skills and knowledge described in Missouri's Learning Standards. The assessments provide information on academic achievement at the student, class, school, district, and state levels. MAP results are used to identify individual student strengths and weaknesses related to Missouri's Learning Standards, and to measure the overall quality of education throughout the state.

The MAP grade-level assessment is a yearly standards-based test that measures specific skills defined for each grade by the state of Missouri. All students in grades 3 through 8 in Missouri public and charter schools take the grade-level assessment. The English language arts (ELA) and mathematics assessments are administered in grades 3 through 8; the science assessment is administered in grades 5 through 8. For this study, we considered only ELA and mathematics scores. MAP results are reported as a scale score. For each grade and subject, three cut scores classify students into four performance levels: Below Basic, Basic, Proficient, and Advanced.
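As an illustration, the mapping from a scale score to a performance level is a simple lookup against the three cut scores. The sketch below uses hypothetical placeholder cut scores, since the actual MAP cut scores vary by grade, subject, and year, and it assumes the convention that a score equal to a cut score reaches the corresponding level.

```python
import bisect

# Hypothetical cut scores for one grade/subject; the actual MAP cut
# scores vary by grade, subject, and year.
CUT_SCORES = [580, 620, 660]  # thresholds for Basic, Proficient, Advanced
LEVELS = ["Below Basic", "Basic", "Proficient", "Advanced"]

def performance_level(scale_score: int) -> str:
    """Classify a MAP scale score into one of the four performance levels.

    A score equal to a cut score is placed in the higher level
    (bisect_right puts ties above the threshold).
    """
    return LEVELS[bisect.bisect_right(CUT_SCORES, scale_score)]
```

For example, with these placeholder thresholds, `performance_level(575)` returns `"Below Basic"` and `performance_level(620)` returns `"Proficient"`.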

Procedures

The first step in collecting student achievement data was to identify recently graduated MSU teachers working in the school district. We requested the list of workplaces for graduates from the state Department of Elementary and Secondary Education (DESE). The second step was to request approval to conduct this study from the MSU institutional review board and the SPS research review process. We then worked with the district to create a memorandum of understanding. To create this agreement, the Deputy Superintendent of Operations, the contract analyst, and the Head of Human Resources of the district worked with the Associate Dean and the Vice President of Research at MSU. After the SPS Board signed off, we were able to collect aggregated student data at the teacher level.

District data are publicly available on the DESE website. We retrieved ELA and mathematics data for third to fifth grades for the years 2014 to 2016. Data from the sample teachers are also included in the overall district data; for this reason, no statistical tests were performed.

Results

We collected aggregated student achievement data on 784 students from 38 teachers across years and grades: 10 teachers in 2014, 17 teachers in 2015, and 11 teachers in 2016. The grade distribution is displayed in Table 1.


We compared the percentages of students in each performance level (ELA and mathematics) for new MSU-graduated teachers to the percentages for all teachers in the district. Because the sample teachers are included in the overall district data, we did not perform statistical tests. We also plotted MAP scale scores for the sample and the district. Scores on the grade-level assessments are not comparable across years.
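The comparison itself reduces to simple proportions computed from aggregated counts. A minimal sketch, using made-up counts rather than the study's data, of computing the share of students in the top two performance levels for the sample and the district:

```python
# Hypothetical aggregated counts of students per performance level.
# These are illustrative numbers only, NOT the study's actual data.
sample_counts = {"Below Basic": 5, "Basic": 20, "Proficient": 10, "Advanced": 5}
district_counts = {"Below Basic": 900, "Basic": 2100, "Proficient": 1400, "Advanced": 600}

def pct_proficient_or_advanced(counts: dict) -> float:
    """Percentage of students in the top two performance levels."""
    top = counts["Proficient"] + counts["Advanced"]
    return 100 * top / sum(counts.values())

# Note: because the sample is a subset of the district, the two
# percentages are not independent, which is one reason the study
# reports them descriptively rather than testing for significance.
```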

In the next two figures, the 2014 ELA MAP performance levels (Below Basic, Basic, Proficient, and Advanced) and the scale scores for third to fifth grades are presented. Results varied by grade. In third grade, 8% of the students of the teachers in the sample were proficient or advanced, while in the district, 36% of students were in these categories. In fourth grade, the differences between groups were small: 46% of the students of the teachers in the sample were proficient or advanced, compared with 41% of students in the district. In fifth grade, students of the teachers in the sample performed better than students in the district overall: 72% of students in the sample were in the proficient or advanced categories, compared with 50% at the district level. The 2014 ELA MAP average scale scores by grade are displayed in Figure 2. The largest difference is found in third grade; the difference is smaller in fourth grade, and by fifth grade, the sample average was slightly higher than the overall district average.

Figure 1. Comparison of 2014 ELA Performance Levels by Grade

Figure 2. Comparison of 2014 ELA Average MAP Scores by Grade

In the next two figures, the 2015 ELA MAP performance levels and the scale scores for third to fifth grades are displayed. Note that the scale of the scores is different, but the results are similar. In third grade, students of the teachers in the sample performed at a lower level than students in the district overall (36% and 51% of students proficient or advanced, respectively). In fourth grade, the gap in the percentage of students in the top two performance levels was smaller (41% and 52%). In fifth grade, the distribution of students across the performance-level categories was very similar for both groups: 55% of students in the sample were labeled proficient or advanced, as opposed to 53% in the district. The 2015 ELA MAP average scale scores by grade are displayed in Figure 4. The average scale score differences were consistent across all grades, with the district averages higher.

Figure 3. Comparison of 2015 ELA Performance Levels by Grade

Figure 4. Comparison of 2015 ELA Average MAP Scores by Grade

The 2016 ELA MAP performance levels and the scale scores for third to fifth grades are in the figures below. The scale of the scores changed once more, which is why scores cannot be compared from year to year. From Figure 5, we can see that third grade students of the teachers in the sample performed at a lower level than students in the district overall (47% of students in the sample in the proficient and advanced categories versus 57% in the district). In fourth grade, the gap was wider (34% and 59% in the top two performance levels). In fifth grade, 67% of students in the sample were labeled proficient or advanced, as opposed to 58% in the district. The 2016 ELA MAP average scale scores by grade are displayed in Figure 6. The average scale score differences are consistent in third and fourth grades, with higher district averages; but in fifth grade, the average ELA MAP scale score of the sample was higher than the district's.

Figure 5. Comparison of 2016 ELA Performance Levels by Grade

Figure 6. Comparison of 2016 ELA Average MAP Scores by Grade

The next two figures display the 2014 Mathematics MAP performance levels and the scale scores for third to fifth grades. Figure 7 displays the distribution of students by performance level. In third grade, students of the teachers in the sample performed lower than students in the district overall: the percentages of students in the top two performance levels were 14% for the sample and 45% for the district. In fourth grade, the differences between groups were almost non-existent (41% and 40% in those top levels). In fifth grade, the difference in the percentages of students in the proficient and advanced categories was also very small (56% and 53%). The 2014 mathematics MAP scale score averages by grade are displayed in Figure 8. The district averages are slightly higher for third and fourth grades, but differences in fifth grade are essentially undetectable.

Figure 7. Comparison of 2014 Math Performance Levels by Grade

Figure 8. Comparison of 2014 Math Average MAP Scores by Grade

The next two figures display the data from the 2015 Mathematics MAP performance levels and the scale scores for third to fifth grades. In Figure 9, students of the teachers in the sample performed lower than students in the district overall. In third grade, the percentages of students in the top two performance levels were 21% for the sample and 46% for the district. In fourth grade, the gap was smaller (38% and 44%). In fifth grade, the percentage of students in the proficient and advanced categories was exactly 38% for both groups. The district mathematics MAP scale score averages are higher than the sample's for all grades.

Figure 9. Comparison of 2015 Math Performance Levels by Grade

Figure 10. Comparison of 2015 Math Average MAP Scores by Grade

Finally, the data for the 2016 Mathematics MAP performance levels and the scale scores for third to fifth grades are presented next. Figure 11 shows that in third grade, the percentages of students in the top two performance levels were 41% for the sample and 46% for the district. In fourth grade, the gap was larger (26% and 42%). In fifth grade, the difference in the percentages of students in the proficient and advanced categories was small (39% and 35%). The district mathematics MAP scale score averages are higher for third and fourth grades compared to the sample, but in fifth grade, the sample average exceeded the district average.

Figure 11. Comparison of 2016 Math Performance Levels by Grade

Figure 12. Comparison of 2016 Math Average MAP Scores by Grade

Conclusions and Limitations

The purpose of this study is to provide evidence of the impact that new teachers who graduated from MSU (i.e., program completers) are having on their students. We found that the students of recently graduated elementary MSU teachers performed slightly lower than students in the school district overall in third and fourth grades. In fifth grade, the students in the sample outperformed the district averages in 2014 and 2016. The patterns found in English language arts (ELA) are very similar to those found in mathematics.

The design of the study is highly constrained by a number of limitations. One of the biggest is the structure of the comparison: we compared data from newly graduated teachers to data from all teachers in the district. One issue is that the district group includes the teachers in the sample. A more prominent issue is that we were comparing recently graduated MSU teachers to teachers with a wide range of experience. At the same time, the district group includes other MSU graduates who, at the time, had been teaching in the district for more than three years. For these reasons, we did not perform statistical tests to compare the scores or the performance-level classifications. The data we provide are merely illustrative.

Data from a single school district are not representative and should not be generalized. A more important issue is that student achievement scores are influenced not only by the quality of the teacher, but also by other aspects of schooling and by non-school factors. Judging the impact of teachers on students based solely on achievement scores provides an incomplete picture.

This study is one of many attempts to demonstrate the impact our graduates have in schools. As CAEP recognizes, this is not an easy task. Now that a memorandum of understanding with a school district has been developed, collecting aggregated student data at the teacher level should be less complicated. Nonetheless, the process is not easy, and for school districts where only a small number of graduates are employed, it may not be an efficient way to collect impact data.