by: Zachary Everett, CAEP Communications Associate
July 7, 2016
Many of you, reading this article right now, have met him at one point or another. He’s a sharp thinker, a witty conversationalist, and a snappy dresser to boot. His name is Emerson Elliott and I have had the honor of working with him over the last four years.
Last month, Emerson was awarded a truly deserved distinction: 20 years of exemplary service to CAEP (technically, three to CAEP, the rest to NCATE) and over 50 years of service to the field of education. I recently sat down with him to find out how he got here and what he has learned over the course of that time.
ZE: 50 years of service… When and where did it all begin for you – and what has changed?
EE: I went to public administration graduate school at the University of Michigan, which is where I became attracted to management and budget in government. In 1957, I applied to the Bureau of the Budget (now known as the Office of Management and Budget), got in, and my wife and I moved to Washington. I joined the professional career staff for President Eisenhower, preparing the President’s budget, which covers everything: spending, legislation, management, statistical forms clearance, and even the regulatory burden of government programs on Americans.
In 1960, I became the Examiner for Education. At the time, the federal government was hardly doing anything in education, but a program called Impact Aid had started for the education of children of the military and civilians who worked on military bases – it had been a big deal during World War II. Federal money went to schools operated by districts located near the bases or on them. There was also a post-Sputnik education program, the “National Defense Education Act,” that provided fellowships and education grants.
ZE: What did you learn during these early experiences at the federal level?
EE: I learned how to talk to policy people – to draft memoranda that would be used to brief policy officers who were appointed to serve the president. How do you use words, how do you use data to support an argument? This has been a valuable skill in every position since.
The federal education role mushroomed under Lyndon Johnson, who figured out how to craft education spending legislation that would appeal to Congress, such as the 1965 Elementary and Secondary Education Act. During the Johnson Administration, in 1963, the Commissioner of Education, Frank Keppel, came up with the national assessment idea. Developments in technology and uses of assessments have been a continuing thread throughout my career.
ZE: Based on your experience, what would you say is the biggest barrier to student achievement?
EE: What we now recognize as the real problem is the enduring achievement gap between the many children who have very few opportunities, in school and in life, and everybody else. I asked the education assessment staff at the Organisation for Economic Co-operation and Development (OECD), “Why do the US achievement results on PISA turn out the way they do?” They said that we have two populations: one very large population that performs poorly, and another that performs like other developed countries. Because the low-performing population is so large, it drags down the overall average, and we don’t do well on international comparisons.
Many educators, policy leaders, and media people explain the achievement gap by its connections with poverty, housing, parental education, and so on. This is frustrating because these are the same things that were said fifty years ago. We’ve hardly moved in achievement, and we’ve hardly moved in our political discourse.
We’ve had all these ideas: we were going to put more money into Title I, into school services; we were going to raise spending in schools with high proportions of at-risk children – but the performance is about where it was. There was a slight narrowing of the gap in NAEP scores over 40 years, but it opened up again. These are very difficult questions for America. They require political actions that people are not willing to take. It’s not a shortage of ideas.
One of the things that pleased me very much was a recent panel sponsored by the National Assessment Governing Board on the reasons that five districts participating in the NAEP Trial Urban District Assessment (TUDA) were showing improvement in math and reading. The district leaders on the panel cited several reasons for their success, and a prominent one was working with the surrounding community governments: making health services available, making the schools safe places for kids who live in high-crime areas and aren’t safe on the street, and offering free meals. These services require school leaders to do things they’re sometimes not comfortable with and don’t necessarily know how to do; they require reaching out to communities and building relationships with other governmental units and with parents. It can be done, but it is still frustrating that the conversation is not more widespread.
ZE: Why aren’t we learning from the information we have?
EE: Data regularly conflict with each other. It happens in every field. But many policymakers and educators feel uncomfortable with this, so we keep redoing the same things.
ZE: Is there a misconception about CAEP’s data expectations?
EE: A reality at CAEP is that EPPs sometimes view accreditation as “one more thing” they are asked to do, and they are frustrated about having to figure out new steps. Not enough providers internalize what we call the culture of evidence as a way to think about “What am I doing here? What do I want to do? What are the ways I can move toward what I want to do, and what does it look like when I get there?” These are important questions to ask. Many institutions will do what’s in the CAEP guidelines, but not absorb the concept.
ZE: Outside of the mark of quality assurance – that is, the nature of accreditation – is there a bigger promise to CAEP accreditation?
EE: In 2013, the CAEP Standards commission brought a lot of different parts of the education professional community together – not only deans and superintendents, but faculty, entrepreneurs, assessment professionals – and decided there was leverage with accreditation that could be used to support preparation reforms. Before that, nobody had thought about using accreditation as a means of change.
The standards created a foundation to move onto that next stage, to use that leverage as a part of reform. Components 3.2 (academic achievement of candidates) and 4.1 (P-12 student learning by a teacher’s students) are central to this concern; I could live with changes around the periphery, but if CAEP backed away from those two, I believe it would give up its growing reputation as a leader in accreditation and in education policy that can make a difference for P-12 students.
In the last few minutes of the final commission meeting, the question was posed:
“There are things embedded in the commission recommendations that some individual commissioners can’t agree with, but do you agree with this as a package? Should this be where the profession is moving?”
And, one by one, each member made a statement about their concerns, but every last one of them agreed with the question: yes, this is where the profession should be moving. It was a moving moment. Our Board chair, Mary Brabeck, was on the commission at the time. She said she had never seen this kind of coming together around a set of ideas to move the field forward. The co-chair of the commission, Camilla Benbow, emphasized this sentiment when the standards were adopted in 2013. Much policymaking focuses on tweaks rather than on this kind of broad coming together around ideas.
ZE: What, more than anything else, would you like professionals at educator preparation providers (EPPs) reading this to know?
EE: EPPs must have a system that allows them to track the progress of their candidates. If a provider’s stance is that it is too costly to build and maintain an individual data record system, then CAEP accreditation is not going to be an effective mechanism for continuous improvement in that institution.
The culture of evidence has to be a central focus for each provider. I don’t want us to make it a burden for people. It’s like test preparation in schools: there shouldn’t be any special preparation for tests. Schools should instead look at what a test covers, figure out the framework behind it, and teach to that framework, not to the test. Spending three or four weeks on prep for a particular test is not a rational thing to do. If the test is any good, regular instruction should prepare kids, and the test will pick that up. The same applies to the culture of evidence.