Tuesday, June 20, 2017

Program Evaluation and Human Performance

Reflections on Instructional Design & Technology
Week 3

Evaluating Programs & Human Performance

While different evaluation models are suitable for considering the impact of instructional design, I find Brinkerhoff’s Success Case Method (SCM, page 101) intriguing. His model requires evaluators to survey a population or sample population to identify outliers: those who experienced either increased or diminished performance when using a specific tool. After the survey data are analyzed, the outliers respond to an extensive interview, sometimes by phone, in which they identify the strategies and supports that led to their success or the issues that led to implementation failure. This model appeals to me because the instructors themselves respond to the survey and the interviews, and because it identifies those who push the organization forward. Often, they will provide honest and helpful feedback because they are growth minded and want to see the organization succeed. Instructors also suggest tweaks that can further leverage the instructional tool. When we analyze data after a summative assessment, we identify the teacher outliers, discuss what additional variables might have skewed the data, and then drill down into how specific teachers interpreted lessons or intervened differently. We have been able to use this approach to increase teacher and student performance significantly. I do feel this model could have some drawbacks. Sometimes, outliers don’t know precisely what they did that worked or didn’t work, and it may be necessary to conduct observations to understand their approach to implementation. Overall, Brinkerhoff’s SCM provides a manageable, strategic model for the evaluation and evolution of instruction.

Another model which applies to my new role as a curriculum designer is Patton’s Utilization-Focused Evaluation (U-FE, page 102). Patton’s method requires ongoing evaluation and response until successful implementation is achieved, and then further evaluation of the process itself. As a curriculum designer, I am stepping into a mountain of work, and there are many factors, apart from the curriculum itself, that will determine my success and the successful implementation of this curriculum. My first consideration is personalities and politics. I must establish rapport with campus principals in order to gather data about what their campuses (or specific teams) need to facilitate implementation. I must identify teachers from across the district who could be key in paving the way for successful implementation, as well as those who may attempt to undermine it. These teachers can identify specific strengths and shortcomings of our current units and help set goals for improvement. They will also help carry out the goals in their classrooms and communicate with their campuses regarding goals and progress. We will look at district summative data as well as feedback to determine how successfully we carried out our goals and why certain campuses were more or less successful in implementation. This process will be carried out annually because our district sees curriculum as a living, breathing tool created with and for teachers, aimed at propelling student learning. I will be revisiting and reading more about U-FE as I carry out my role and reflect on progress made in ELAR curriculum implementation.

Questions in Evaluation

The questions asked in the evaluation of instruction should always involve the learning and the learner’s engagement, but I believe there are other responsible questions that facilitate analysis of instructional tools. Our district asks, “Will this lead to the graduate profile?” The graduate profile represents the vision Prosper ISD has for preparing students not only with knowledge, but with civic responsibility, collaboration and communication skills, and the ability to solve increasingly complex problems. This question helps us consider the level and breadth of learning that need to occur for students to become people who will contribute to the economy and society. I think having a question centered on the vision of your organization is crucial so that you avoid losing sight of the big picture.
We also evaluate tools based on their cost. In public education, waste presents a huge risk. The more costly a tool, the more carefully its impact will be tracked and supported.
Another question we raise involves how parents and community members will respond. We understand that these stakeholders must buy in if we are to successfully implement new tools. Often, there are parents, students, and community members present at proposals to help gauge the reaction of those groups. This can help us have a more robust picture of what barriers there may be to implementation.
Research cannot be left out when considering evaluation. It helps us avoid repeating mistakes that similar organizations have already learned from, and it can help us home in on which areas of our organization most urgently need evaluation.
We are also very focused on innovation as an organization. We feel that this will continue to grow our brand and attract teachers and families to our schools. When considering effectiveness, we value innovation as a measure of success. However, innovation alone falls short; the instruction must be both innovative and effective in producing high levels of learning and engagement.

Non-instructional Solutions to Performance Problems

Reader’s and Writer’s Conferences are commonly known as best practices for increasing student achievement. However, it seems that teachers across our organization are not implementing these tools systematically. I perceive several barriers at the root of this issue: teachers are not convinced that these practices will really generate results; teachers feel insecure about their ability to carry out these conferences; and, lastly, the trainings occur in a lump at certain busy times of the year rather than in a systematic way.

I think we could have a short but engaging session during teacher professional development to present the “case” for implementing conferences for instruction. We could present data and a couple of engaging case studies in the form of videos. Then, we could train key ELAR teachers a month or so into school. They tend to be passionate about improving reading and writing and will likely have spent time thinking and planning how they might do this in their classrooms. Part of this training could be conferences, which administrators could use as incentives to recognize forward-thinking teachers. I could meet with those teachers on Google Hangouts to discuss implementation ideas and issues. We could use social media to share examples, links to articles or blogs, and reminders about the importance of small group instruction. This approach applies both performance support systems and informal learning. I considered a knowledge management system, but I think sometimes we build those and they aren’t used because they aren’t built by teachers. It would be wiser to begin gathering support for implementation of Reader’s and Writer’s Conferences and then ask teachers to reflect on which supports were most impactful. These could then be catalogued for future use. The whole concept of re-valuing our human capital can be seen in schools in Professional Learning Communities (Rosenholtz 1989) and Understanding by Design (Wiggins and McTighe 1998). Professional Learning Communities consider the collective value of teachers and capitalize on their tacit knowledge to improve learning for all students. This approach helps teachers refine their craft, collaborate, and problem solve together. Understanding by Design theorizes that teachers who truly comprehend the key goals of instruction will facilitate more effective learning. Both of these models value the human capital of teachers and support their professional growth.
This section was the most fascinating portion of the text thus far!



Johnson and Dick (2012). Evaluation in Instructional Design: A Comparison of Evaluation Models. In R. Reiser & J. Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp. 1-34). Boston, MA: Pearson.

Stolovitch (2012). The Development and Evolution of Human Performance Improvement. In R. Reiser & J. Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp. 1-34). Boston, MA: Pearson.

Nguyen (2012). Performance Support. In R. Reiser & J. Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp. 1-34). Boston, MA: Pearson.

Rosenberg (2012). Knowledge Management and Learning: Perfect Together. In R. Reiser & J. Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp. 1-34). Boston, MA: Pearson.

Reiser, R. A., & Dempsey, J. (Eds.). (2012). Trends and Issues in Instructional Design and Technology. Boston, MA: Pearson.


  1. I found Patton's Utilization-Focused Evaluation intriguing. I think you gave a good explanation of how it works. As a teacher, I am constantly evaluating my instruction and responding by tweaking the mistakes or difficult parts. This constant evaluating and responding is what makes our teaching more refined and our students' scores better. When you wrote that our evaluation questions should always involve the learning and the learner's engagement, you were on point. If the evaluation does not do those two things, then it is just a waste of our time and the students' time.

  2. Heather, I found your post very interesting coming from a different perspective. Do you plan curriculum for the entire district for ELAR? You did a great job addressing solutions to those problems (conferences). I love that my campus is so invested in PLCs. As the only English II Pre-AP teacher, I generally PLC with my vertical alignment team. At the beginning of the year, I meet with the on-level English II team, but often it isn't "team planning." The team lead essentially has already made decisions and the others just follow along. There is some problem-solving at the beginning and refinement of plans, but I believe we could benefit from more of a collective approach.

  3. Your post is well thought out, and I like how well you are able to make connections to your personal career. These evaluations are useful to anyone in any position and serve a valuable purpose.

    The Graduate Profile that your district is using sounds intriguing, and I like how it looks at the long term as well as the short term. I like how you've mentioned stakeholders; I think it's very important to get feedback from parents and the community. They also have an expectation of what they want to see in children's learning and development. Excellent post!

  4. I appreciate the "graduate profile" and that your district identifies it as important, because it includes more than just learning/teaching. When my principal asked me to reflect on my class this year, one of the things I mentioned is that I try to help my students become responsible and respectful folks who are confident in themselves. This looks a little different at a first grade level than it does in high school, but he still looked a little surprised that I expect (and teach) anything of the sort. You mentioned the importance of the organization's "vision", and I wonder if maybe ours could be a little more clear.

    PLCs will be a new concept for us this year. So far, each campus has a "PLC Lead Teacher" for each grade, but aside from that I'm not entirely sure about the expectations or goals. In the past, teachers have said they do not feel valued, so maybe implementing PLCs will help.

    I think you're on track with changing how your conference training is done. It is hard when you start school with a mountain of information, with zero time to process it. By spreading it out over time, and systematically training select people to start with, I think you'll find the program more successful. Teachers will likely appreciate the informal learning more than the added "training", and the small group setting should help them become more comfortable with the process.