Reflections on Instructional Design & Technology
Evaluating Programs & Human Performance
While many evaluation models are suitable for considering the impact of instructional design, I find Brinkerhoff's Success Case Method (SCM, page 101) intriguing. His model requires evaluators to survey a population or sample to identify outliers: those who experienced either increased or diminished performance with a specific tool. After the survey data are analyzed, the outliers complete an extensive interview, sometimes by phone, in which they identify the strategies and supports that led to their success or the issues that led to implementation failure.

This model appeals to me because the instructors themselves respond to the survey and the interviews, and because it identifies the people who push an organization forward. They often provide honest, helpful feedback because they are growth-minded and want to see the organization succeed, and they can suggest tweaks that further leverage the instructional tool. When we analyze data after a summative assessment, we identify the teacher outliers, discuss what additional variables might have skewed the data, and then drill down into how specific teachers interpreted lessons or intervened differently. This approach has allowed us to increase teacher and student performance significantly.

Still, the model has some drawbacks. Outliers don't always know precisely what they did that worked or didn't work, so it may be necessary to conduct observations to understand their approach to implementation. Overall, Brinkerhoff's SCM provides a manageable, strategic model for the evaluation and evolution of instruction.
Another model that applies to my new role as a curriculum designer is Patton's Utilization-Focused Evaluation (U-FE, page 102). Patton's method requires ongoing evaluation and response until successful implementation is achieved, followed by further evaluation of the process itself. As a curriculum designer, I am stepping into a mountain of work, and many factors apart from the curriculum itself will determine my success and the successful implementation of this curriculum.

My first consideration is personalities and politics. I must establish rapport with campus principals in order to gather data about what their campuses (or specific teams) need to facilitate implementation. I must also identify teachers from across the district who could be key in paving the way for successful implementation, as well as those who may attempt to undermine it. These teachers can identify specific strengths and shortcomings of our current units and help set goals for improvement. They will also carry those goals out in their classrooms and communicate with their campuses about goals and progress. We will examine district summative data and teacher feedback to determine how well we carried out our goals and why certain campuses were more or less successful in implementation. This process will be repeated annually because our district sees curriculum as a living, breathing tool, created with and for teachers and aimed at propelling student learning. I will be revisiting and reading more about U-FE as I carry out my role and reflect on the progress made in ELAR curriculum implementation.
Questions in Evaluation
The questions asked in evaluating instruction should always involve the learning itself and the learner's engagement, but I believe there are other responsible questions that facilitate analysis of instructional tools. Our district asks, "Will this lead to the graduate profile?" The graduate profile represents the vision Prosper ISD has for preparing students not only with knowledge, but with civic responsibility, collaboration and communication skills, and the ability to solve increasingly complex problems. This question helps us consider the level and breadth of learning that need to occur for students to become people who will contribute to the economy and society. Having a question centered on your organization's vision is crucial so that you avoid losing sight of the big picture.
We also evaluate tools based on their cost. In public education, waste presents a huge risk. The more costly a tool, the more carefully its impact will be tracked and supported.
Another question we raise involves how parents and community members will respond. These stakeholders must buy in if we are to implement new tools successfully. Often, parents, students, and community members are present at proposals so that we can gauge the reaction of those groups, which gives us a more complete picture of the potential barriers to implementation.
Research cannot be left out of evaluation. It helps us avoid repeating mistakes that similar organizations have already learned from, and it can help us home in on which areas of our organization most urgently need evaluation.
We are also very focused on innovation as an organization. We feel that this will continue to grow our brand and attract teachers and families to our schools. When considering effectiveness, we value innovation as a measure of success. However, innovation alone falls short: the instruction must be both innovative and effective in producing high levels of learning and engagement.
Non-instructional Solutions to Performance Problems
Reader's and Writer's Conferences are widely recognized as best practices for increasing student achievement. However, teachers across our organization are not implementing these tools systematically. I perceive several barriers at the root of this issue: teachers are not convinced that these practices will really generate results; teachers feel insecure about their ability to carry out the conferences; and the related trainings occur in a lump at certain busy times of the year rather than being spread out systematically.
I think we could hold a short but engaging session during teacher professional development to present the "case" for implementing conferences, using data and a couple of engaging case studies in the form of videos. Then, a month or so into the school year, we could train key ELAR teachers. They tend to be passionate about improving reading and writing and will likely have spent time thinking about and planning how they might do this in their classrooms. Administrators could use part of this training as an incentive to recognize forward-thinking teachers. I could meet with those teachers on Google Hangouts to discuss implementation ideas and issues, and we could use social media to share examples, links to articles or blogs, and reminders about the importance of small-group instruction. This approach applies both performance support systems and informal learning.

I considered a knowledge management system, but I think we sometimes build those and they go unused because they were not built by teachers. It would be wiser to begin by gathering support for implementing Reader's and Writer's Conferences and then ask teachers to reflect on which supports were most impactful; those could then be catalogued for future use.

The whole concept of re-valuing our human capital can be seen in schools in Professional Learning Communities (Rosenholtz, 1989) and Understanding by Design (Wiggins and McTighe, 1998). Professional Learning Communities consider the collective value of teachers and capitalize on their tacit knowledge to improve learning for all students; this approach helps teachers refine their craft, collaborate, and problem-solve together. Understanding by Design theorizes that teachers who truly comprehend the key goals of instruction will facilitate more effective learning. Both models value teachers' human capital and support their professional growth.
This section was the most fascinating portion of the text thus far!
Johnson and Dick (2012). Evaluation in Instructional Design: A Comparison of Evaluation Models. In R. Reiser and J. Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp. 1-34). Boston, MA: Pearson.
Stolovitch (2012). The Development and Evolution of Human Performance Improvement. In R. Reiser and J. Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp. 1-34). Boston, MA: Pearson.
Nguyen (2012). Performance Support. In R. Reiser and J. Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp. 1-34). Boston, MA: Pearson.
Rosenberg (2012). Knowledge Management and Learning: Perfect Together. In R. Reiser and J. Dempsey (Eds.), Trends and Issues in Instructional Design and Technology (pp. 1-34). Boston, MA: Pearson.
Reiser, R.A., and Dempsey, J. (Eds.). (2012). Trends and Issues in Instructional Design and Technology. Boston, MA: Pearson.