The UNC EPP believes content and pedagogical knowledge are critical to quality teacher preparation: preparing teacher candidates and assessing their abilities to draw upon knowledge, skills, and professional dispositions (UNC Evidence 1, listed hereafter as UNC 1) and to apply them appropriately with students (UNC 3). To this end, UNC candidates are prepared to: 1) design research-informed instruction and assessments for their students (UNC 2); 2) align to and address College- and Career-Ready Standards (UNC 4); and 3) integrate technology in meaningful and appropriate ways (UNC 5). These goals reflect the UNC Conceptual Framework that supports all educator preparation programs at UNC.

 

Strengths and Weaknesses in Standard 1:

As noted in UNC 1, the teacher preparation course of study is informed by and aligned with the InTASC Model Core Teaching Standards. To address the four InTASC categories (the Learner and Learning, Content, Instructional Practice, and Professional Responsibility), the UNC EPP leverages several measures in the UNC Quality Assurance System (QAS) (see UNC 18) to demonstrate candidate proficiency across the CAEP Standard 1 components. This broad, integrated approach using multiple measures is an EPP strength and provides evidence not only of candidate proficiency, but also of the EPP’s continuous improvement efforts. The data demonstrate that our candidates are committed to high-quality, responsive instruction for all students and plan instruction to be meaningful to students in terms of their prior learning and cultural assets. The data also help us recognize a need to focus on and model strategies for supporting and deepening student learning in classroom practice.

In CAEP Standard 1, we demonstrate that our candidates perform at or above proficient levels on multiple measures, each of which is aligned with InTASC categories and standards. Licensure examinations, administered through ETS and/or Pearson, also provide a meaningful third-party evaluation of our candidates. Among candidates who pursue licensure, our EPP has a very high pass rate, with scores well above the national mean. The NCTCR is yet another proprietary assessment that provides assurance that our candidates are proficient in all InTASC categories. A concern with the NCTCR as an indicator here is the quality of our clinical educator training, a concern we address more fully in UNC 7.

With edTPA, as we completed our fourth year of implementation, we recognized the need to shift from our original local evaluation protocol to official, external scoring with Pearson (see UNC 21). The shift to official edTPA scoring shines a light on EPP strengths and some areas of concern. In the data, we see an “implementation dip.” The move to official scoring provides an opportunity for third-party confirmation that our candidates perform well in relation to the Learner and Learning, Content, Instructional Practice, and Professional Responsibility. While we note lower mean scores, we still see these official scores as promising and representative of proficient capacity among our candidates. For the EPP, the move also signals a shift from extended pilot to full implementation of edTPA as a QAS measure and possible confirmation that our local evaluation protocols had waned in their consistency over time.

In the edTPA data, we see at least one area of concern in candidate performance on edTPA Rubric 9 (Subject-Specific Pedagogy). While candidates’ scores are not significantly lower on this rubric, the comparatively lower scores do not align with other measures of UNC candidate content knowledge on licensure exams and NCTCR ratings aligned to content knowledge standards. We hypothesize that edTPA Rubric 9 scores reflect a weak link between content knowledge development and pedagogical content knowledge development across the program. This is a ripe area for improvement as the new MAT launches; we plan to leverage embedded clinical practice opportunities to further develop enacted practice by teacher candidates in their clinical practice triads.

 

Trends in Standard 1:

As we examine the data presented in UNC 1, UNC 2, UNC 3, UNC 4, and UNC 5, we identify several trends.

  1. Multiple Measures to Address Candidate Proficiency:

To demonstrate candidate proficiency, the UNC EPP relies upon a series of multiple measures, all strongly aligned with InTASC standards, NC Professional Teaching Standards, ISTE (International Society for Technology in Education) standards, and 21st Century Skills. For example, in UNC 2, we rely upon edTPA and NCTCR data; in UNC 3, we add licensure exams; in UNC 4, we add data from the QRC pilot. With each perspective on candidate proficiency, we are able to strengthen claims through corroborating evidence from multiple measures. Further, the multiple measures engage local, UNC-based evaluations by faculty and EPP-based clinical educators and external evaluations by LEA-based clinical educators and testing agencies (e.g., Pearson and ETS).

  2. edTPA Data Shift:

As noted previously, the EPP’s shift in edTPA scoring from a local evaluation protocol to official scoring created noise in our available edTPA data for review. Though an “implementation dip” is evident in the data, official scores mirror the strengths and weaknesses evident from local evaluations. The edTPA data trends (rubrics with higher or lower scores) provide a valuable, reliable evidence base for program improvements (see UNC 19). As more cycles of official data are generated, the evidence base will only continue to strengthen. The official data serve to increase the rigor and reliability of a valuable performance assessment. The move to official scoring also provides an opportunity to engage faculty, EPP-based clinical educators, and LEA-based clinical educators with externally scored assessment data; these “places and spaces” are critical to the EPP’s continuous improvement model (see UNC 19).
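As an illustration of how local and official scoring might be compared at the rubric level, a minimal sketch follows; the file name, column names (rubric, scoring_mode, score), and the "local"/"official" labels are hypothetical placeholders, not the EPP's actual data schema.

```python
import pandas as pd

# Hypothetical long-format edTPA records: one row per candidate per rubric,
# with scoring_mode marking local vs. official (Pearson) evaluation.
scores = pd.read_csv("edtpa_rubric_scores.csv")

# Mean score per rubric under each scoring regime, side by side.
by_rubric = (
    scores.groupby(["rubric", "scoring_mode"])["score"]
          .mean()
          .unstack("scoring_mode")
)

# The "implementation dip": the gap between official and local means.
# Negative values indicate rubrics where official scores run lower.
by_rubric["dip"] = by_rubric["official"] - by_rubric["local"]
print(by_rubric.sort_values("dip"))
```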

  3. Need for Data Analysis at Varying Levels:

Throughout the evidence presented in support of CAEP Standard 1, we find UNC teacher candidates score well on a variety of measures. Again, as in the CAEP Standard 4 evidences (UNC 14, UNC 15, UNC 16, and UNC 17), UNC candidates are consistently above par on multiple measures. In analyzing these data, we cannot rest at this level if we are truly seeking continuous improvement. We need to continue the trend of examining the elements of our measures to identify areas for improvement. This issue of the “grain size” of analysis is critical to identifying and developing targeted improvements in teacher candidate performance and proficiency. One prime opportunity may be deeper analysis of licensure test data (UNC 13), specifically the sub-scales of those tests, to identify areas for improvement within and across licensure areas.
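As a concrete illustration of this finer "grain size," a minimal sketch of a sub-scale analysis follows, assuming a hypothetical candidate-level data file; the file and column names (licensure_area, subscale, score, passing_score) and the flag threshold are illustrative only.

```python
import pandas as pd

# Hypothetical records: one row per candidate per test sub-scale, with the
# candidate's sub-scale score and the corresponding scaled passing score.
df = pd.read_csv("licensure_subscale_scores.csv")

# Mean margin over the passing score, by licensure area and sub-scale.
margins = (
    df.assign(margin=df["score"] - df["passing_score"])
      .groupby(["licensure_area", "subscale"])["margin"]
      .agg(["mean", "count"])
      .sort_values("mean")
)

# Flag sub-scales with thin margins (here, fewer than 5 scaled-score points):
# candidates may pass the overall test while showing comparative weakness here.
print(margins[margins["mean"] < 5])
```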

  4. Technology Anchor and Tether:

With the launch of the new MAT program, we also launch a new course focused on Innovative and Engaging Teaching that includes digital and technological pedagogies appropriate to the content (UNC 5). This course is intended to serve as an anchor for modeling and applying technology standards that is tethered to both clinical practice opportunities and program assessments. A result of this self-study report is the realization that the UNC EPP needs to strengthen the tether in support of its quality assurance system (UNC 18).

To summarize, the UNC SOE is in a period of transition related to candidates’ capacity to model and apply instructional technology. We are excited about the comprehensive alignment of instructional technology across programs, as well as the opportunity for all students to engage in a course (EDUC 614) that focuses explicitly on this area. At the time of the site visit in February 2018, we will have officially transitioned to the BA/MAT program and the revised approach to instructional technology.

 

Implications and Evidence-based Changes:

First, the BA/MAT rollout offers an opportunity to invest in clinical educator and university supervisor training that will improve the quality of the data from measures we rely upon for determining candidate proficiency. Our data are only as good as our evidence and evaluators. One evidence-based change was the shift from local evaluation of edTPA portfolios to official scoring, as previously described in this summary (see UNC 21). We also recognize the need to improve assessment data quality wherever possible. One opportunity exists with the NCTCR evaluations by EPP-based clinical educators. As noted in UNC 7, more training for EPP-based clinical educators and LEA-based clinical educators together will yield more reliable evaluations and bolster the actionable nature of the data.

Second, the evidence supporting CAEP Standard 1 supports the EPP’s desire to expand and improve the QAS. In particular, data in support of these components are leading to the following evidence-based changes being enacted:

  1. Pursue validation of the UNC TPP Exit Survey (UNC 17, UNC 19) and, as part of the refinement and study process, add new items to strengthen the technology tether at program completion.
  2. Provide training for clinical educators (EPP- and LEA-based) on NCTCR and QRC to improve validity and reliability of the assessments (UNC 7). Continue QRC implementation for more coaching feedback on candidate progress (UNC 23).

Finally, we conclude that a more granular analysis of data will help to provide a deeper evidence base for program improvement efforts in the EPP. As noted in the UNC QAS (UNC 18), as the system matures, we need to invest in more data triangulations (UNC 19) and in opportunities to engage faculty and partners in the data. We conclude that more investigation of our data at the element or sub-scale level is needed.

When assessing the totality of the UNC EPP’s evidence in support of CAEP Standard 2, we find strong foundations upon which the program has relied over time, but which have not been tended to through multiple program and leadership shifts in recent years. The strengths of the program have sustained our clinical partnerships, continued to engage clinical educators, and offered diverse opportunities for clinical practice; strong foundations exist. However, weaknesses in these foundations have emerged over time, and trends have developed within our practices and processes that led to evidence-based changes within the EPP. In this summary, we present EPP strengths and weaknesses, trends across our data and programs, and examples of evidence-based changes initiated as a result of these data.

The most significant program change is the shift from traditional undergraduate teacher preparation at UNC to a BA/MAT model. This change, while presenting many challenges, is viewed by program faculty and our partners in the professional community as a ripe opportunity to unite around the new program, contribute insights and expertise to its successful launch, and co-construct practices and processes informed by program data.

 

Strengths and Weaknesses in Standard 2:

As noted in UNC Evidence 6 (all evidences referred to as UNC #), UNC has a rich history of partnership across the state, but primarily in the Research Triangle region. UNC partners with school districts (LEAs) through its membership in the Triangle Alliance for Improvement in the Preparation of Teachers and other Certified Personnel and particularly with neighboring LEAs. Within these LEAs, our school-based clinical educator (CE) partners are long-time mentors to UNC teacher candidates and are often graduates of UNC. Our EPP-based clinical educators, or university supervisors (US), are highly qualified faculty and doctoral students. Together, the CEs and USs form strong triads of support for UNC teacher candidates.

These clinical triads operate in diverse settings within partner LEAs. A strength of the UNC program is the diversity of field experience opportunities, from school settings to community organizations, always with a focus on students as learners within families and communities. UNC also benefits from diverse placement sites in terms of school characteristics and demographics, which provide teacher candidates with opportunities to work in rural and urban schools, high and low socio-economic status schools, and racially, ethnically, linguistically, and ability diverse school populations (see UNC 6). Our partners’ investment in creating technology-rich learning environments benefits our teacher candidates and provides valuable guidance to UNC for meaningful instructional technology integration and skill-building in our programs (UNC 8).

These strengths often counter-balance areas of concern or weakness. For example, our strong partnerships lack regular, formalized, and consistent feedback mechanisms. In the past, the UNC EPP relied upon personal relationships and informal feedback mechanisms to inform the program. Our CEs and USs, while highly qualified in their roles, lack specific training on their roles and responsibilities in UNC clinical triads. For long-standing CEs, the lack of consistent training and easy access to EPP-developed clinical triad documents and expectations led to interpretive gaps between CEs and USs (see UNC 7).

These examples provided an evidence base for the development of the UNC SIP and its clinical practice focus. The need for more co-construction of clinical practice and the need for better data collection for continuous improvement drive our UNC SIP development.

 

Trends in Standard 2:

As we examine the data presented in UNC 6, UNC 7, and UNC 8, we identify several trends.

  1. Program and leadership changes necessitate program reflection.

The UNC SOE experienced significant changes in both school and program leadership over the course of the last three academic years. Additionally, the launch of the new BA/MAT program necessitated a closer look at existing policies and procedures directed at both operations and continuous program improvement. In experiencing these shifts, we realized that some policies and procedures we believed to be meaningful and consistent were difficult to maintain amidst myriad necessary EPP changes (see UNC 15). Our leadership team recognized the need for better documentation of program data, including policies and procedures, such that multi-level changes (EPP, state, federal) will not have an adverse effect on our EPP.  One area that we have targeted for improvement is identifying specific repositories for collecting program data, along with responsible faculty members, and annual schedules for collecting data.

  2. LEA partners value the presence of UNC teacher candidates, USs, and faculty in their schools.

As teacher education enrollments drop nationally and in NC, and fewer instructional aides are present in NC classrooms, UNC teacher candidates provide a valuable instructional resource for partner LEAs. Teacher candidates supplement and lead instruction and are offered more opportunities to demonstrate leadership in the classroom than in the past. Additionally, LEA partners, through our partner meetings and other informal conversations, express a desire to host more teacher candidates because it gives them a recruitment advantage (see UNC 6). LEA success in hiring UNC completers in their districts is evident in the Job Placement data presented in UNC 16 (Employer Satisfaction).

  3. Clustered placements are mutually beneficial.

While UNC does not employ a traditional Professional Development School (PDS) model, it has developed a common practice, particularly in elementary settings, of clustering clinical placements, with 4-6 student teachers placed together at one school (see UNC 6). By clustering placements, UNC develops its capacity at many levels. For student teachers, a professional learning community (PLC) forms, which helps to support their development and contribute to teacher leadership development. A similar PLC forms among the school-based CEs at the cluster placement sites. Integral to the success of these cluster placements are the USs, who evaluate teacher candidates and serve as a communication conduit between LEAs and UNC. While cluster placements have many positive outcomes, they risk diminishing the diversity of placement opportunities the UNC EPP values. The EPP is monitoring its cluster placements and communicating with LEA partners (through partner meetings) to ensure placement equity over time (UNC 8).

  4. Modernization of EPP processes aids data collection, but does not replace personal relationships.

The UNC SIP’s (UNC 23) focus on enhancing and improving co-constructed clinical placements necessitates a robust feedback model to inform decision-making. Lacking consistent and actionable data, the UNC EPP invested in developing data collection mechanisms to inform clinical practice-focused discussion with LEA partners and subsequent continuous improvement efforts. Highlighted examples (the new online form for Clinical Educator volunteers and Principal Approval, and the new Clinical Placement Evaluation Tri-Survey) have: 1) reduced the paperwork burden on CEs collaborating with UNC; 2) increased the efficiency and accuracy of EPP data collection; and 3) provided an evidence base for clinical practice decisions. In particular, early Tri-Survey data confirm the strength of CEs’ expertise as mentors and teacher leaders. The Tri-Survey data also indicate that more training for USs is needed to develop content knowledge engagement.

The trend toward modernization and efficiency in EPP data collection (see UNC 19), though, will not replace the fundamental strength of the UNC EPP’s partnerships and the personal relationships among faculty, partners, and candidates.

  5. New clinical practice assessments are informing co-construction of clinical partnerships and practices.

As noted above, new assessments like the Clinical Placement Evaluation Tri-Survey and the Quality Responsive Classrooms (QRC) observation instrument provide more valuable data upon which UNC and its partners may co-construct clinical practice changes over time. Preliminary data from both assessments align with other existing EPP assessments, furthering their value for Quality Assurance (see UNC 18) efforts. For example, UNC TPP Exit Survey data (see UNC 17) indicated that UNC completers feel less prepared to meet the needs of diverse learners (e.g., ESL/ELL, AIG, and special needs), but the NCTCR did not provide much actionable guidance for teacher candidate coaching. The QRC was selected for implementation for its ability to inform candidate coaching. In time, we hope to see positive data trends emerge that align QRC data and candidate outcomes related to meeting the needs of diverse learners.

 

Implications and Evidence-based Changes:

Based on available EPP data and input, we can draw several conclusions about our EPP’s data related to CAEP Standard 2.

First, we cannot ignore the powerful opportunity inherent in launching a new program like the BA/MAT. The program launch is a uniting force across all program stakeholders, including EPP and partner faculty and administrators. The launch provides an opportunity to examine past practices, keep those that are mutually beneficial, and refine those that are not. Examples of evidence-based changes related to the program launch and informed by reestablished partner meetings include:

  1. Expansion of clustered clinical placements to include secondary placements. Through focused conversations, our clinical partners embraced the opportunity to place multiple student teachers in partner high schools with the hope of similar success as at the elementary level (see UNC 6);
  2. Development of coherent training requirements for clinical triad members in support of teacher candidates. Training to acculturate new USs in the BA/MAT model is critical to the success of the program (see UNC 7 and UNC 23).

Second, we cannot ignore the impact of new legislative mandates on the content and structure of clinical practice in NC. Focused conversations on the impact of new legislation on LEAs and teacher preparation programs in NC are beginning to yield new collaborative approaches to meeting new legislative demands. Together, we will co-navigate new requirements. The Triangle Alliance MOU discussion at the May 2017 LEA Partner meeting is an example of such co-construction of practices. We plan to leverage other networks available to the EPP to address state legislation and move toward becoming contributors to teacher preparation policy development rather than merely receivers of it.

Finally, we recognize the value of our positive and prolonged relationships with clinical partners. Strong cultivation of clinical triads has been and will continue to be a highlight of our program (see UNC 17). We also recognize, as our past practice and partner feedback indicate, the need to move beyond informal conversations with our partners to create measures and processes to collect data targeted toward quality assurance and continuous improvement in clinical practice. Implementation of the Clinical Placement Evaluation Tri-Survey and the Quality Responsive Classrooms (QRC) observation instrument are prime examples of the EPP’s evidence-based changes in action. These actions not only support EPP efforts to address CAEP Standard 2, but also support UNC’s Quality Assurance Plan (UNC 18).

The UNC School of Education Conceptual Framework embraces equity and excellence for all learners and citizens. To this end, the EPP believes, “Decisions grounded in equity must establish that a wide range of learners have access to high quality education in order to release the excellence of culture and character which can be utilized by all citizens of a democratic society.” We conceptualize “all learners” broadly to include SOE teacher candidates, as well as PK-12 students; therefore, our teacher candidate population should reflect the diversity of our partner communities and be of high quality.

To recruit and retain graduate teacher candidates, the EPP invests in knowing the needs of its community and state in terms of high needs content areas and high needs geographic areas. The EPP attends to teacher recruitment activities in collaboration with local, state, and national partners. The EPP attracts high quality candidates, as assessed on multiple measures, and sees candidates through to program completion. These candidates meet high standards for teacher knowledge, skills, and dispositions.

 

Strengths and Weaknesses in Standard 3:

A clear strength of the UNC EPP is the high quality of candidates attracted to and enrolled in the teacher preparation programs. Cohort GPAs of teacher candidates at UNC average above 3.0 at the EPP level and at the program/licensure-area level (see UNC 11). Teacher candidates at UNC have mean SAT scores well above the 50th percentile at the EPP level and program/licensure-area level (see UNC 11). Further, the data indicate UNC candidate performance on nationally normed tests of academic achievement is at or above the 60th percentile. In UNC 11, we identify the few occurrences when a teacher candidate was admitted to a UNC teacher preparation program without meeting the minimum GPA or SAT thresholds, but the cohort averages remained above the CAEP Component 3.2 requirements. In those cases, conditional admits were offered based on other applicant data and characteristics of value to the EPP. Overall, the high quality of selected candidates at UNC is a strength of the program.
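A minimal sketch of the cohort-level check implied here follows, assuming hypothetical file and column names (cohort, program, gpa, sat_percentile); the thresholds mirror the CAEP Component 3.2 requirements described above (cohort mean GPA of 3.0 and mean test performance at or above the 50th percentile).

```python
import pandas as pd

# Hypothetical admissions records: one row per admitted candidate.
admits = pd.read_csv("admitted_candidates.csv")

# Cohort averages at the program/licensure-area level.
cohorts = admits.groupby(["cohort", "program"]).agg(
    mean_gpa=("gpa", "mean"),
    mean_pctile=("sat_percentile", "mean"),
    n=("gpa", "size"),
)

# Flag any cohort whose averages fall below the CAEP 3.2 thresholds, even
# when individual conditional admits fall below the candidate-level minimums.
flags = cohorts[(cohorts["mean_gpa"] < 3.0) | (cohorts["mean_pctile"] < 50)]
print(flags if not flags.empty else "All cohorts meet CAEP 3.2 thresholds.")
```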

Another strength of the UNC EPP is the high pass rates and score means on NC-required licensure tests (Praxis and Pearson). Analysis of the EPP-level data indicates that UNC candidates score well above the NC passing scores and often above the national mean. Only in K-12 Music do we find test scores close to the national mean. Due to the high-stakes nature of licensure testing in NC, candidates typically retake the tests until they achieve a passing score; however, UNC currently does not monitor the number of attempts by candidates. This is an area in which data analysis and program processes are ripe for improvement.
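Since attempt monitoring is named above as ripe for improvement, a minimal sketch of what such tracking could look like follows; the file and column names (candidate_id, test, test_date, passed) are hypothetical.

```python
import pandas as pd

# Hypothetical attempt-level records: one row per candidate per test sitting,
# where `passed` is a boolean outcome for that sitting.
attempts = pd.read_csv("licensure_attempts.csv", parse_dates=["test_date"])

# Attempts per candidate per test, plus whether the first sitting was a pass.
per_candidate = (
    attempts.sort_values("test_date")
            .groupby(["candidate_id", "test"])
            .agg(n_attempts=("passed", "size"),
                 first_try_pass=("passed", "first"))
)

# Program-level view: share of candidates passing on the first attempt.
print(per_candidate.groupby("test")["first_try_pass"].mean().round(2))
```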

Finally, we consider it a strength that the EPP has a Teacher Recruitment Plan that: 1) aligns with the EPP mission; 2) addresses high-need content areas for teachers; and 3) includes specific strategies to recruit and retain teacher candidates from a broad range of backgrounds and diverse populations (UNC 10). Though the plan is nascent and early in its implementation, it exists and is actively tended to by the UNC SOE Director of Recruitment (UNC 9).

Among these EPP strengths, we find areas of concern and weaknesses. The most glaring weakness in CAEP Standard 3 may be the striking lack of candidate diversity. So, while the Teacher Recruitment Plan exists, it has yet to bear fruit. UNC’s goals to attract more diverse teacher candidates and produce more teachers in STEM fields will be monitored through the term of the plan. The success of the plan will benefit UNC, our partners, and NC.

Another concern is the EPP’s assessment of teacher candidate dispositions and non-academic attributes (see UNC 12). New assessments, like the Teacher Dispositions Index (TDI), will help to inform EPP decision making at admission, but more is needed. To better inform program decision-making, the EPP needs to link and enhance its dispositional assessments.

 

Trends in Standard 3:

As we examine the data presented in UNC 9, UNC 10, UNC 11, UNC 12, and UNC 13, we identify several trends.

  1. High Quality Candidates Across Measures, Across the Program

As in the evidence presented in support of CAEP Standards 1 and 4, UNC candidates are highly qualified when assessed on the selection criteria associated with CAEP Standard 3. Cohort GPAs are above 3.0 in all areas; SAT scores exceed the 50th percentile, reaching the 60th percentile and above (see UNC 11). As they progress through the program, UNC candidates pass licensure exams at high rates (UNC 13), achieve high edTPA portfolio scores (UNC 21), and demonstrate proficiency on the NCTCR (UNC 22). As completers, UNC candidates generate top impact ratings when assessed by SAS® EVAAS, VAM, and Teacher Evaluation Ratings (see UNC 14, UNC 15). Together, a clear trend of high candidate quality exists across the program.

  2. High Quality, Low Diversity

In contrast to the high quality of UNC candidates, we struggle to attract and retain diverse teacher candidates. The racial and ethnic diversity of UNC teacher candidates is less than that of the UNC campus as a whole. We must focus recruitment efforts on attracting high-quality, diverse candidates to education. This is a struggle for teacher preparation programs across NC and nationally. While the EPP would benefit from broadening its conception of diversity to include diverse backgrounds and the sorts of diversity we value in clinical placements (e.g., the diversity of placements presented in UNC 8), we must focus efforts on recruiting diverse candidates.

  3. Disposition Assessment Thread

The EPP’s QAS must mature to include a strong thread of dispositional assessments from program selection, through coursework, to program completion. Positive dispositional data currently exist (from the Teacher Dispositions Index, coursework, and the NCTCR; see UNC 22), but trends within the data are still emerging and must be connected to other assessments with intentionality. In time, these data will form a stronger thread of dispositional assessment for the EPP.

 

Implications and Evidence-based Changes:

First, teacher candidate diversity must be a top priority. Whispers across the UNC SOE may have acknowledged the lack of diversity among UNC teacher candidates, but the analysis of program data as part of CAEP review solidifies it as an issue to be addressed. We conclude that the EPP has many elements in place to address the lack of diversity, including:

  • The EPP is armed with more data to enter into its recruitment activities and admissions decisions;
  • The UNC Teacher Recruitment Plan (UNC 10) provides a starting point for more collaborative recruitment efforts, including strategies and partners;
  • The newly hired Director of Recruitment is poised to launch more systemic and systematic recruitment activities for the unit. In fact, the new position and hire are a result of evidence-based decision making by EPP leadership.

Second, teacher candidate program progression must be tended to through the EPP’s QAS and other processes. Two of the Standard 3 trends highlighted above (High Quality Candidates and Disposition Assessment Thread) contribute to this conclusion. The UNC QAS will grow to include teacher candidate program progression (see UNC 18). As part of our continuous improvement efforts, the QAS must grow to include selectivity, non-academic/dispositional, and diversity data. By linking these measures to other, more established QAS measures, the EPP will be able to connect data across the program, including to performance and outcome data. The seeds of this work have been planted with the initial connections between TDI and NCTCR data presented in UNC 12.

Finally, evidence-based changes resulting from this CAEP SSR are still emerging; they are currently the conclusions presented above. Yet, one evidence-based change made based on selection data and licensure requirements is evident in the new MAT program’s admission process. Since Praxis and Pearson tests remain a licensure requirement in NC, the faculty decided, based on the high pass rates of candidates and the new, shorter program of study, to shift the licensure tests from a program completion requirement to a program admission requirement. The program was granted approval from the UNC Graduate School to substitute Praxis and Pearson tests for GRE exams for admissions purposes. This affordance, now in its first year of implementation, will: 1) shift licensure exams from a program completion measure to an admissions measure in the QAS; 2) allow the program to conduct deeper investigations of licensure exam scores (overall and sub-scales) as described in the CAEP Standard 1 summary; and 3) reduce program costs to teacher candidates by leveraging licensure exams for admission purposes. While this is a new program change, we anticipate many positive outcomes from it.

CAEP Standard 4 shines a light on the results of teacher preparation in terms of the impact completers have on the job. It focuses attention on the outcomes of teacher preparation as viewed through the lenses of: 1) impact on P-12 student-learning outcomes (UNC 14); 2) teaching effectiveness (UNC 15); 3) employer satisfaction (UNC 16); and 4) completer satisfaction (UNC 17). In NC, EPPs in the University of North Carolina System are fortunate to have access to a variety of program impact data as a result of its Teacher Quality Research initiative. The UNC Educator Quality Dashboard provides data and visual analytics for three of the four components under CAEP Standard 4, and a new EPP-created survey to address the remaining component is currently being piloted. Through the UNC Educator Quality Dashboard and a variety of EPP activities, the UNC EPP has verifiable data upon which to examine and use in developing evidence-based changes within its programs.

 

Strengths and Weaknesses in Standard 4:

Examining the Standard 4 evidences for strengths and weaknesses across all components leads the EPP to highlight many positive findings. Across measures informing the impact of UNC completers on P-12 student learning and teaching effectiveness, we find that UNC completers are consistently among the top-performing completers from UNC System institutions. When considering Value-Added Models (VAM), six of ten UNC programs had positive ratings, that is, ratings above the 0% mean (see UNC 14). In SAS® EVAAS ratings, there are only three instances (of 36 comparisons) when the mean for UNC completers fell below the UNC System average (see UNC 14). In the Teaching Effectiveness data, UNC completers overall are the top-rated teachers among all the UNC System institutions (see UNC 15). These outcome measures confirm, with verifiable and actionable data, the belief of many UNC EPP faculty that our completers are prepared to make a positive impact on PK-12 student learning.

When candidates provide feedback on their preparation program at UNC, their feedback aligns with the SAS® EVAAS and Teaching Effectiveness (Teacher Evaluation Ratings; see UNC 15) data that indicate strong content knowledge preparation. Our ability to triangulate these measures with licensure completion and employment data not only corroborates our impact data, but also strengthens the EPP’s Quality Assurance System (UNC 18, UNC 19). The UNC TPP Exit Survey at program completion and the Recent Graduate Survey at the completion of the first year of teaching both indicate that UNC completers are well prepared to meet the requirements of their jobs (UNC 17). This preparation includes being well prepared in their content, in their abilities to develop positive relationships with students and positive classroom environments, and in setting high expectations and high ethical standards. Furthermore, UNC completers are licensed (92%+) and employed in NC within one year (55%) at high rates (see UNC 16).

Despite these overall strengths, we find pockets of concern within these outcome measures. The most significant area of concern is in middle grades teacher preparation at UNC. The data on UNC completers in middle grades education are, at best, mixed. Consider the following evidence in UNC 14: 1) In VAM data, UNC completers in middle grades math, reading, and science have significantly positive findings, but UNC completers in middle grades algebra are the lowest-rated teachers of all in the UNC System; and 2) In SAS® EVAAS data, UNC completers in middle grades social studies have yielded 15% shifts from one reporting year to the next. One conclusion from these findings is that UNC’s middle grades program should address content and pedagogical content knowledge in its coursework, as there appears to be a mismatch between aspects of preparation and meeting the college- and career-ready standards included in the NC testing model.

Another area of concern, a weakness, is present in the completer feedback (see UNC 17). UNC completers consistently report being less prepared to: 1) Meet the needs of diverse learners, such as AIG, ESL/ELL, special needs students; and 2) Work with parents and families. The new BA/MAT program of study, with its add-on licensure opportunities in ESL/ELL and special education, is designed to address one of these concerns. The new Schools and Community course is designed to provide more interactions with families and communities prior to student teaching.

 

Trends in Standard 4:

As we examine the data presented in UNC 14, UNC 15, UNC 16 and UNC 17, we identify several trends.

  1. UNC completers are highly rated on the UNC Educator Quality Dashboard.

Overall, UNC completers are among the highest rated on the UNC Educator Quality Dashboard. Whether in SAS® EVAAS, VAM, or teaching effectiveness data, one will find UNC completers near the top of the overall ratings (UNC 14) when compared to peer institutions. Based on these high-level analyses, it would be simple to point to these data and rest on the UNC laurels. Yet, when we disaggregate data at the program level, we find chinks in the proverbial UNC armor. As noted previously among the weaknesses, there are certain grade-level and content-area combinations where UNC completers are not finding consistent success. Whether it is middle grades algebra (VAM), elementary science (SAS® EVAAS), or high school social studies (SAS® EVAAS), pockets of concern exist across the UNC EPP and require attention.

  2. Program impact measures rely upon reliable, state-mandated PK-12 student assessments.

In presenting evidence for CAEP Standard 4, in UNC 14 and UNC 15, we document multiple PK-12 student assessments. These data demonstrate positive UNC completer impact at all levels using data aligned to state standards and college- and career-ready standards. These data contribute to the strength of the QAS (see UNC 19) by providing valid and reliable data to triangulate with other measures, including employer and completer feedback data. An example of such triangulation is presented in UNC 19. These data also provide valuable opportunities to benchmark UNC completers against ratings/scores of completers from other UNC System institutions.

While these data provide an evidence base for informed programmatic discussion among faculty and clinical partners, they rely upon having enough UNC completers teaching in those specific, tested subject areas. In some cases, as in elementary-level tested areas, this is not an issue. In contrast, at the secondary level there have been reporting years when there were not enough UNC completers teaching in tested areas to generate findings (see UNC 14). In some cases, the lack of UNC completers in a tested area may be due to teaching assignments or to shifts in the testing model (new assessments introduced and some assessments removed by legislative action).

  3. Multiple feedback measures and methods are valued.

In presenting evidence for CAEP Standard 4, in UNC 16 and UNC 17, we are able to begin to triangulate data across program impact measures. Triangulating UNC TPP Exit Survey data with Recent Graduate Survey data and Recharge and Reconnect (R&R) data provides a more powerful lens into completer feedback than any one measure alone. In addition to benefitting from multiple measures, the measures are also of different methods, representing quantitative and qualitative approaches. This diversity of assessment methodology not only dispels criticism that accreditation is overly quantitative in nature, but also provides rich, robust data for continuous improvement efforts with partners.

 

Implications and Evidence-based Changes:

Based on available EPP data and input, we can draw several conclusions about our EPP’s data related to CAEP Standard 4.

First, the UNC Educator Quality Dashboard provides a wealth of program impact data for use in documenting UNC completer impact. Without it, UNC 14, UNC 15, UNC 16, and UNC 17 would not be possible. Despite low numbers of completers in tested areas and shifting testing models, the Dashboard data, with linkages between preservice preparation and in-service student-learning outcomes, are the envy of many other states. What these data do not make clear is that only those UNC completers who are employed in NC Public Schools contribute to these data. UNC completers who teach in NC private or independent schools or who move to teach in other states are “lost” to the EPP. Based on this consideration, we reach two conclusions: 1) data from the UNC Educator Quality Dashboard must be a part of our Quality Assurance Model because our primary stakeholders are the people of NC and their students; and 2) we must consider what resources, if any, we should invest in assessing program impact for the small percentage of UNC completers not employed in NC Public Schools.

Second, the new BA/MAT program of study will help to address key gaps in past UNC teacher preparation programs, specifically, preparation to meet the needs of diverse learners (see UNC 17). The new program design was influenced by completer feedback indicating less than optimal preparation to meet the needs of AIG students, ESL/ELL students, and special needs students. The new program option to add-on licensure in ESL/ELL and/or special education provides a valuable opportunity to not only address the evidence-based gap in UNC’s preparation program, but also addresses a lack of teachers in certain high needs content areas, like special education (see UNC 9).

Finally, a review of available measures in the UNC Quality Assurance System (see UNC 18, UNC 19), over time, led the EPP to develop and/or adopt new assessments of program impact. One example of such evidence-based change is the development of the UNC TPP Exit Survey (see UNC 17). Recognizing that UNC lacked a consistent exit survey across the EPP, we developed and administered a new survey in 2016 to fill the feedback gap at the point of program completion. Since implementation, data from the new UNC TPP Exit Survey have been triangulated with data from the Recent Graduate Survey and the R&R induction program. The alignment of findings across these completer feedback measures is a benefit to the EPP. Another example of evidence-based change is the development of the new NC Employer Satisfaction Survey. Realizing the EPP lacked specific employer feedback, the Assistant Dean of Educator Preparation and Accreditation sought out the opportunity to work with the NCDPI work group in developing the survey.

 

In summary, the UNC EPP benefits from openly available data about its program impact. The data and reporting on the UNC Educator Quality Dashboard help to inform continuous improvement efforts and to build a data-engaged culture in teacher preparation at UNC. However, with more available data, we are beginning to ask whether these are the right data and what additional data the EPP might want or need. These questions demonstrate how the UNC EPP is organically addressing continuous improvement (see UNC 20) as it digests program impact data.

The Quality Assurance System (QAS)

The UNC EPP finds evidence that the QAS is working through a few key feedback mechanisms and actions by EPP faculty and partners. First, the EPP’s ability to coalesce multiple, previously stand-alone assessment and outcome measures into a unified system to inform continuous improvement is evidence of its development and impact. As at many UNC System EPPs, the outcome data provided on the UNC Educator Quality Dashboard once stood apart from EPP activities and were not valued by EPP faculty as an indicator of program impact. Through more focused faculty conversations, and the triangulation of UNC Dashboard data with other EPP measures, the UNC QAS has grown to include a robust mix of internal and external measures, to become viewed as an integrated piece of EPP work, and, most importantly, to address more EPP questions about its programs and candidates.

The evolving QAS is being questioned by EPP faculty to address programmatic issues, and modifications are being developed and implemented. One example is the shift from local evaluation of edTPA portfolios to official scoring. After several years of local scoring by EPP faculty and PK-12 partners, data in the QAS indicated a shift in the reliability of local scoring. The EPP did not have the personnel and fiscal resources to engage in local scorer training that would yield reliable results. While EPP faculty were reluctant to release the scoring process to an external entity, they valued the trade-off of time and energy to: 1) pilot the QRC as a new measure in the QAS; and 2) engage P-12 partners who previously scored edTPA portfolios with EPP faculty in deeper conversations about practice. As a result, the modification in the QAS led not only to valid and reliable edTPA scoring data, but also to a new observation tool for the EPP (a purposeful change) and to reestablished partner meetings as a valuable feedback mechanism for the EPP and its professional community.

Other examples of how the QAS has matured over time include:

  • Formalizing an anecdotal feedback loop with clinical placement partners through the development and implementation of the Clinical Placement Tri-Survey.
  • Developing and implementing the UNC TPP Exit Survey as a feedback mechanism at program completion.

 

Data in the QAS

As a maturing system, the UNC QAS has strengths it must engage and weaknesses it must address. At present, its strengths are the quality of the available data and the partners (PK-12 partners, NC DPI, UNC Dashboard) with whom we work to collect and analyze data for program improvement. Areas for continued development also exist. UNC’s new survey measures will require refinement and validation to ensure ongoing data quality. Furthermore, while QAS measures (other than the EPP-developed surveys) are relevant, verifiable, representative, cumulative, and actionable, the EPP needs to leverage or create more places and spaces for data conversations among faculty, candidates and completers, and partners. In short, the QAS needs to mature to become a robust and regular feature of the program. We see opportunities for this through the reestablished LEA partner meetings, through new program assessment reports for the institution, and through NC’s new program approval process and annual reporting.

Despite these growth areas, a clear strength of UNC’s QAS is the ability to triangulate program and outcome data. In UNC 19, we presented two examples of such triangulations, one with program data and one with outcome data. More opportunities exist to look across QAS measures to study various aspects of EPP processes and outcomes and to make evidence-based changes that improve teacher candidate outcomes.
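A minimal sketch of one such triangulation follows, assuming two hypothetical files keyed by a shared candidate identifier; the file and column names, and the use of a Spearman rank correlation, are illustrative rather than the EPP's documented procedure.

```python
import pandas as pd

# Hypothetical program and outcome measures keyed by candidate ID.
exit_survey = pd.read_csv("tpp_exit_survey.csv")  # candidate_id, diverse_learners_score
nctcr = pd.read_csv("nctcr_ratings.csv")          # candidate_id, diverse_learners_rating

# Join the two measures on the shared identifier.
merged = exit_survey.merge(nctcr, on="candidate_id", how="inner")

# Spearman rank correlation as a simple check of agreement between
# self-reported preparation and observed NCTCR ratings.
rho = merged["diverse_learners_score"].corr(
    merged["diverse_learners_rating"], method="spearman")
print(f"n = {len(merged)}, Spearman rho = {rho:.2f}")
```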

 

Use of Data for Continuous Improvement

Having data and using data for continuous improvement are two different tasks, and they require two different types of EPP champions. The hiring of an Assistant Dean of Educator Preparation and Accreditation jump-started the EPP’s focus on having a strong, comprehensive QAS. With more data comes more focus on data use for continuous improvement. One example is the shift to official edTPA scoring. EPP faculty and administrative leaders collaborated to explore the costs and benefits of local scoring versus official scoring and determined that, over the long term, official scoring would yield benefits beyond more verifiable data. To reach those conclusions, faculty and leadership needed more data and more opportunity to study and question the data. Together, such activities build a foundation for not only future faculty engagement, but also an EPP culture that values data and dialogue around the data.

As the system matures, the faculty are asking more questions about QAS data: Why these data? Are these the right data? Are more or different data available to inform our decision-making? Data-informed discussions are leading to purposeful change within the program. Nascent examples include: 1) expansion of clustered student teaching placements into secondary grade levels; 2) new content knowledge requirements at program entry; and 3) implementation of the Clinical Placement Tri-Survey. Additionally, QAS data are providing more common ground for discussions with PK-12 partners. Since most QAS measures are aligned with the NC Professional Teaching Standards and NC Standard Course of Study, EPP data share a common language that PK-12 partners understand. We can build more bridges to practice with our partners because many QAS measures align from pre-service to in-service settings.

 

Outcome Measures

Review of EPP Outcome Measures over the past three years indicates the following:

  • Licensure Rate:

More UNC completers are becoming licensed in NC. We believe this is a result of EPP efforts to encourage completers to seek a first license in NC, even if they are planning to move to another state to teach. We also credit the new NC DPI Online Licensure System with making the process simpler and faster for completers and EPPs.

  • Completion Rate:

UNC teacher candidates complete the program at a high rate; however, as part of this CAEP self-study, we recognize the need to improve our tracking mechanisms for candidates who may struggle in the program and do not complete. Evidence presented in support of CAEP Standard 3 indicates that UNC completers are very successful. Now the EPP needs to ensure that we are successful in providing adequate and appropriate support to our teacher candidates.

  • Employment Rate:

UNC completers are hired and retained in NC Public Schools at high rates, often above the rates for other UNC institutions (see UNC xx). We are beginning to see more completers pursue teaching positions outside of our primary partner LEA and outside of the Research Triangle Park region; we consider this a positive indicator of the program’s social justice mission. While the UNC EPP benefits from comprehensive and valuable data about its completers hired in NC Public Schools, we acknowledge that we have very little data about the small percentage of candidates who actively seek first teaching positions outside of NC. This is an issue EPPs grapple with nationally.

  • Consumer Information:

Through the UNC Dashboard and the IHE Report Cards, the EPP is able to examine places of employment for completers employed in NC Public Schools. Initial teacher compensation in NC is determined by a state salary table and available local supplements provided by individual LEAs. UNC provides additional Consumer Information in this linked document: https://www.unc.edu/studentaid/pdf/misc/ConsumerInformation.pdf

 
