Occasional Paper 23

Transcript

National Institute for Learning Outcomes Assessment
December 2014

A Simple Model for Learning Improvement: Weigh Pig, Feed Pig, Weigh Pig

Keston H. Fulcher, Megan R. Good, Chris M. Coleman, and Kristen L. Smith

Occasional Paper #23
www.learningoutcomesassessment.org

About the Authors

Keston H. Fulcher, Center for Assessment and Research Studies, Department of Graduate Psychology, James Madison University
Megan R. Good, Department of Graduate Psychology, James Madison University
Chris M. Coleman, Department of Graduate Psychology, James Madison University (now at Babson College)
Kristen L. Smith, Department of Graduate Psychology, James Madison University

Special thanks to Sara J. Finney, Teresa A. Gonzalez, Linda C. Halpern, Carol A. Hurney, Cara C. Meixner, Carole L. Nash, and Donna L. Sundre for their conceptual and practical contributions to the learning improvement model at James Madison University.

Contents

Abstract
A Simple Model for Learning Improvement: Weigh Pig, Feed Pig, Weigh Pig
Learning Improvement: The Simple Model
Why Learning Improvement is Rare
Structuring a University for Learning Improvement: Our “Aha” Moment
PLAIR Part 1: Readiness for Evidenced Learning Improvement
PLAIR Part 2: Plan an Intervention
Conclusion
References
NILOA National Advisory Panel
About NILOA

Please cite as: Fulcher, K. H., Good, M. R., Coleman, C. M., & Smith, K. L. (2014, December). A simple model for learning improvement: Weigh pig, feed pig, weigh pig (Occasional Paper No. 23). Urbana, IL: University of Illinois and Indiana University, National Institute for Learning Outcomes Assessment.

Abstract

Assessing learning does not by itself result in increased student accomplishment, much like a pig never fattened up because it was weighed. Indeed, recent research shows that while institutions are more regularly engaging in assessment, they have little to show in the way of stronger student performance. This paper clarifies how assessment results are related to improved learning – assess, effectively intervene, re-assess – and contrasts this process with mere changes in assessment methodology and changes to pedagogy and curriculum. It also explores why demonstrating improvement has proven difficult for higher education. We propose a solution whereby faculty, upper administration, pedagogy/curriculum experts, and assessment specialists collaborate to enhance student learning.

A Simple Model for Learning Improvement: Weigh Pig, Feed Pig, Weigh Pig

Keston H. Fulcher, Megan R. Good, Chris M. Coleman, and Kristen L. Smith

A pig never fattened up only because it was weighed. A racehorse never ran faster because a stopwatch was clicked. A fevered dog’s temperature never dropped as a result of reading a thermometer. Brown and Knight (1994) made this point twenty years ago. Nevertheless, some infer that student learning will automatically improve as a result of assessment. Indeed, test vendors often convince administrators that new instruments X and Y will bring about student learning. Such tools have not, do not, and will not by themselves improve learning.

A more refined version of this perspective is that faculty members and administrators will understand their programs better through assessment and, in turn, will make policy or programmatic decisions that positively impact student learning. Unfortunately, recent findings suggest that regardless of the assumed process, little action is generally taken on results (Blaich & Wise, 2011). In other words, the promise of assessment is not realized. Often, emphasis is placed on assessment mechanics rather than effective pedagogy and curricula.

Assessment, pedagogy, and curriculum are not mutually exclusive. In fact, they should work hand in hand, yet most institutions have yet to connect them intentionally and effectively. For these reasons Hersh and Keeling (2013) argued that higher education should strive for a culture of learning rather than a culture of assessment. In this vein, we propose integrating the three pillars of learning – assessment, pedagogy, and curriculum – at the program level with the aim of evidencing learning improvement.
First, to put our observations in context, we clarify different interpretations of “using assessment results” and discuss a model for conceptualizing learning improvement. We then examine why higher education currently falls short of evidencing learning improvement.

The phrase use of results is often used, but it is not always clear what use means. For example, some people interpret use of results as (a) changes to assessment mechanics (such as better sampling), while others cite (b) changes made to a program (e.g., curricular or pedagogical modification). In this paper, we define use of results as (c) making a change to a program and then re-assessing to determine that the change positively influenced student learning. The latter definition is consistent with how Banta and Blaich (2011) define “closing the loop.” Many faculty and staff in higher education, as well as assessment professionals, confuse (a) and (b) with (c). They make statements like, “We made x, y, and z improvements to the program.” But they really mean that they made x, y, and z changes. A change is only an improvement when one can demonstrate its positive effect on student learning.

For this article, we define assessment as everything typically encompassed in the process – defining learning outcomes, mapping them to the curriculum, selecting an instrument, collecting data, analyzing results, reporting results, and communicating with stakeholders – with the exception of using results for improvement. The purpose of doing so is to separate the assessment mechanics from use of results for improvement (i.e., faculty- or staff-driven changes to programming/curricula that are re-assessed and then deemed improvements).

Learning Improvement: The Simple Model

In a nutshell, the simplest model for evidencing improvement is: assess, intervene, re-assess.¹ Or: weigh pig, feed pig, weigh pig; henceforth referred to as Program Learning Assessment, Intervention, and Re-assessment (PLAIR). Improved learning is demonstrated when a re-assessment suggests greater learning proficiency than did the initial assessment.

Although the model sounds simple, evidence of using results in this way is surprisingly rare. Banta, Jones, and Black (2009) reviewed almost 150 profiles of good assessments in higher education. They found that only 6% could demonstrate improved learning. Even more sobering, one would assume that this modest percentage would be far lower in a random sample of academic programs. Regarding the Wabash study, where universities were provided with ample assessment resources, Kuh (2011) observed that few schools showed how they intentionally changed their policy or practice based on assessment information. He further stated, “Rarer still were colleges or universities where changes in policies or practices made a positive difference in student attainment” (p. 4). In this context, Kuh’s use of “attainment” refers to improved student learning outcomes as captured by the Collegiate Learning Assessment.
Why Learning Improvement is Rare

Our institution, James Madison University, has a rich history of high-quality student learning outcomes assessment. We have some examples of evidenced learning improvement, but far fewer than we would like. We have read assessment reports or heard stories that encompass almost all combinations of assess, intervene, and re-assess where one or more of the three critical components is missing. We illustrate such basic breakdowns of PLAIR through hypothetical examples where programs attempt to improve students’ writing. We follow these examples with more nuanced ways in which the model can fail. While the examples are based on a skill – writing – PLAIR is equally applicable to other kinds of outcomes, such as knowledge or dispositions, and could be implemented in academic programs or student affairs units.

¹ As a technical aside, this simple model can be operationalized as a pseudo-longitudinal design where, for example, seniors are assessed and then, after the program has made substantial changes, a later cohort of seniors is assessed. In this case, a Cohen’s d would suggest the difference in proficiency between the two cohorts. If the latter cohort performs better than the former, then the model was executed successfully. The model could also be implemented as a comparison of growth (i.e., pre-post results) between two different cohorts, with one receiving the new intervention. In this case, two effect sizes (Cohen’s d) are computed: one for the growth of each cohort. If the effect size for the second group is larger than the first, then the model has been successfully implemented.
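The cohort comparison described in the footnote above can be sketched in a few lines of code. This is an illustration only: the rubric scores and cohort sizes below are hypothetical, not data from the paper, and the function simply computes Cohen's d with a pooled standard deviation.

```python
import math

def cohens_d(cohort_a, cohort_b):
    """Standardized mean difference between two independent cohorts,
    using the pooled standard deviation. A positive value means the
    second cohort scored higher."""
    n_a, n_b = len(cohort_a), len(cohort_b)
    mean_a = sum(cohort_a) / n_a
    mean_b = sum(cohort_b) / n_b
    var_a = sum((x - mean_a) ** 2 for x in cohort_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in cohort_b) / (n_b - 1)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b)
                          / (n_a + n_b - 2))
    return (mean_b - mean_a) / pooled_sd

# Hypothetical rubric scores (1-4 scale) for seniors assessed before
# the curriculum change and for a later cohort assessed after it.
pre_cohort = [2.4, 2.6, 2.5, 2.8, 2.3, 2.7]
post_cohort = [2.9, 3.1, 3.0, 3.3, 2.8, 3.2]
print(round(cohens_d(pre_cohort, post_cohort), 2))
```

Under the pseudo-longitudinal design, a clearly positive d for the later cohort is the evidence of improvement the model asks for; a d near zero would indicate that the intervention moved nothing at the program level.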

Basic Breakdowns in the Model

Intervene, re-assess (no initial assessment). Program A’s faculty are not satisfied with students’ writing proficiency. To address this issue, the faculty met numerous times. From these meetings, several initiatives were launched. A course in writing was added. Students wrote more papers in existing classes too. After students went through this new curriculum, the program implemented a program-level writing assessment rubric. They found that students on average met their expectations around writing.

This story sounds like a good one. The problem is that Program A would have difficulty demonstrating that this new curriculum was more effective at fostering student learning in relation to writing than the previous one, because no pre-assessment was implemented. Back to the pig example: the pig was fed and then weighed. It is unknown how much weight the pig actually gained, if any.

Assess, re-assess (no intervention). Program B’s faculty were dissatisfied with students’ writing. Year after year they implemented a robust writing assessment. And, every year, the results suggested the same problem: students were graduating with sub-standard writing skills. Nevertheless, no systematic change in curriculum or pedagogy was made. Some faculty tweaked their individual sections but did not coordinate with other faculty. In this scenario, despite good methodology, learning improvement was not evidenced because no coordinated intervention was implemented. The pig was weighed and then weighed again. However, no weight gain was evidenced because the pig was not fed.

Assess, intervene (no re-assessment). Program C assessed their students’ writing and were not pleased. In response they required additional papers throughout their curriculum.
Also, the department head paid for several in-service workshops where faculty learned from writing experts how to provide better feedback to students. Unfortunately, before the first affected student cohort received the full intervention, the assessment coordinator took a job at a different university, and the program did not assess subsequent cohorts. Given that no follow-up assessment was conducted after the intervention was implemented, the efficacy of the new curriculum and better-trained faculty was unknown. The pig was weighed and then fed. Unfortunately, the pig was not weighed after the feeding, thus obfuscating legitimate claims about weight gain.

Although none of these programs successfully implemented PLAIR, some benefits accrued nonetheless. For Programs A and C, it is quite possible that students wrote better because of the programmatic changes. Indeed, the faculty could relay anecdotes of student success. Unfortunately, they could not persuasively demonstrate this improvement to an external audience. For Program B, some individual sections may have improved, which is good for individual faculty and some students, but at the program level the needle did not move. The point is that, to evidence writing improvement at the program level, the pedagogical or curricular intervention must be implemented consistently in all pertinent sections and the assessment must be administered before and after.

Nuanced Breakdowns in the Model

In addition to the aforementioned basic process breakdowns, more nuanced problems can undermine the model. In the methodology context, sampling may be unrepresentative, instruments unproven, students unmotivated, incorrect analyses performed, and so forth. In other words, the data may not accurately reflect the targeted student learning.

From the intervention perspective, problems arise as well. Two notable ones include lack of alignment and lack of successful implementation. For some programs, there is little alignment or mapping between curricular/co-curricular activities and outcomes. Basically, students engage in activities but there is no clear plan about how these activities relate to program-level outcomes. Even if the program has clear student learning outcomes and a logical curriculum to engender them, students still may not improve on those program-level outcomes for any number of reasons. Perhaps the program-level curriculum map – while looking good on paper – has little in common with what is actually taught by faculty across several sections; perhaps the pedagogical techniques used in courses don’t effectively help students learn the desired skills.²

Acknowledging many of the same shortcomings, Banta and Blaich (2011) suggested that colleges evaluate their assessments to ascertain how such processes can help programs reach their aims. We have conducted such meta-assessment at our institution (Fulcher & Bashkov, 2012). We found that, in general, the quality of programs’ assessment – in terms of methodology and communication – has improved dramatically. Unfortunately, evidence of improved learning has not increased proportionally.
Referencing the earlier scenarios, we feel the Program B example is the most common at our institution: assessment, but no change in intervention. Most programs are caught in a yearly ritual of weighing the pig without feeding it more in the time in between. Banta and Blaich (2011) offered their take on why learning improvement is rare: “...the current state of affairs at almost every institution is based on a delicate set of compromises and optimizations in which many parties have participated and which few care to alter” (p. 27).

While we agree with Banta and Blaich that many programs are unable to evidence improved learning due to a variety of issues, including campus cultures that privilege the status quo, we propose an additional hypothesis: Faculty and student affairs practitioners are not well trained in how to improve learning, particularly at the program level. Note the emphasis on program. In conversations we have had with faculty and staff, a common theme is that many if not most educators make adjustments to their pedagogy or curricula. Unfortunately, these changes are rarely implemented at a program level.

On this topic, former Harvard President Derek Bok (2013) roundly criticized graduate schools for their lack of teaching preparation. Indeed, relative to research, master’s and doctoral students receive far less training in teaching

² For those who wish to read more about these issues, we recommend Gerstner and Finney (2013) as a primer on implementation fidelity.

during their graduate programs. In fact, it seems that conversations about teaching are taboo compared to frequent conversations about scholarship. Even if a faculty member adopts an evidence-based pedagogy, that lone faculty member will not bring about programmatic changes. Programs are made up of teams of faculty, and everyone must be on board (a challenge within itself) to generate meaningful changes.

In sum, a program must overcome many obstacles to evidence learning improvement. A program that overlooks any part of assess, intervene, re-assess will de facto be unable to evidence improvement. Even if the PLAIR model is adopted, there is no guarantee that the program will be able to tell a story about learning improvement. Breakdowns in assessment methodology and/or intervention can thwart the best intentions. With those obstacles in mind, the next section opens with a realization that drew our attention to program learning improvement. It then provides our current thoughts regarding how a university could truly close the loop and demonstrate improved learning at the program level.

Structuring a University for Learning Improvement: Our “Aha” Moment

We had an epiphany recently at our institution. When programs needed help with assessment, the Center for Assessment and Research Studies provided state-of-the-art consultation. Challenged and strongly supported by the administration, faculty put forth great effort with assessment mechanics. They worked together to articulate program learning outcomes; curriculum maps identified where students theoretically learned these skills; instruments were specifically developed to map to the program outcomes; data were collected at the program level; clear reports were written. In other words, the assessment “gears” were in place and effectively spinning at the program level.
Missing in the assessment consultation, however, was guidance on how a program could use results to improve student learning. Our assessment consultants had little training in this area, and thus faculty received little support. Perhaps not surprisingly, the “use of results” section in assessment reports most typically featured changes to assessment mechanics and an occasional programmatic change. Rarely did we see improvement to student learning à la the PLAIR model.

On the other side of campus – literally and figuratively – the faculty development office was helping individual faculty develop better classes and providing support for best practices in pedagogy, course design, and alignment at the course-section level. Unfortunately, up to that point, the assessment office and the faculty development office coordinated in only nominal ways. Further, the assessment office provided methodological assistance at the program level, whereas the faculty development office provided help at the individual section level. In other words, there were two problems: the offices were not collaborating, and they were helping faculty at different levels (program vs. section). When these two offices began to talk, however, a synergistic solution seemed obvious. Properly coordinated, with support from administration, these units could help faculty create a system whereby effective interventions could be implemented and assessed at the program level.

While focusing on interventions was a revelation to us, we discovered others had arrived at similar conclusions. Blaich and Wise (2011) realized that the first iteration of the Wabash Study “...focused too much on gathering, analyzing, and reporting assessment evidence and not enough on helping institutions use it” (p. 11). Banta and Blaich (2011) further explored the issue of underuse of results. One of their finest suggestions was that institutions should “...spend more time and money on using data than on gathering it” (p. 26).

More specific to combining assessment and faculty development, Hutchings (2011) suggested that a faculty development office facilitate conversations with faculty and assessment practitioners. She hypothesized that jointly these groups could think about assessment and interventions as the scholarship of teaching and learning. Doing so would lead to a deeper engagement of faculty and affect, for the better, an institution’s culture toward improving student learning. Baker et al. (2012) conducted a series of case studies focused on institutions that use assessment results well. Juniata College stands out in particular. As Hutchings imagined, this institution established a scholarship of teaching and learning center, which has helped align interests among faculty, learning, and assessment.

At James Madison University we are in the process of coordinating the work of the assessment unit with the faculty development unit within a larger university strategy. The main goal is to facilitate faculty- and staff-driven improvement efforts with synchronized assessment and intervention consultation. Although still in the planning stages, the logic underlying PLAIR is worth sharing.
While several authors (e.g., Baker, Jankowski, Provezis, & Kinzie, 2012; Blaich & Wise, 2011; Hersh & Keeling, 2013) have provided excellent suggestions about how assessment could be more useful, we believe our model is the most concrete regarding how to do so. Specifically, PLAIR is a two-part process that breaks down into smaller steps: readiness and implementation. In this context, readiness refers to the degree to which the university environment is primed for program-level improvement to occur. Implementation refers to the strategic steps a program takes to evidence learning improvement.

PLAIR Part 1: Readiness for Evidenced Learning Improvement

The readiness nucleus comprises a team of student learning advocates (e.g., faculty or student affairs staff) representing a particular program. They seek a more effective learning environment for students and are content experts in their respective areas. The group must be willing to work together, conceptually and pragmatically. While we acknowledge that Banta and Blaich (2011) are on target – many programs are content with the status quo – every university or college is likely to have at least a few programs with a disposition for improvement.

A key facilitating condition is a campus administration that encourages and praises improvement. In a sense, while program faculty tackle the effort from the bottom up, the administration can meet them in the middle in a

couple of ways: (1) endorse PLAIR – assess, intervene, re-assess – as the university standard for learning improvement and communicate it to faculty, administration, and other stakeholders; (2) provide resources to faculty and staff to help them prepare for and implement PLAIR. This support could include access to assessment expertise, pedagogy and curriculum expertise, and dedicated time/space to work through the various PLAIR components, as well as program-specific support like course releases or funding over the summer.

Another part of readiness is sound assessment methodology. As illustrated in the Program A example, programs that roll out a learning intervention but lack a “pre-assessment” will be unable to tell a compelling improvement story. On the other hand, programs that have clearly articulated outcomes and a quality methodology to evaluate them are well positioned to progress to interventions. Because there are technical aspects to assessment methodology, consultation with assessment experts can help programs with the readiness component.

PLAIR Part 2: Plan an Intervention

If the program faculty and the university are “ready,” then the next part of the learning improvement process is to strategically plan an intervention. We suggest the following steps:

1. Identify Targeted Objective(s). What area of learning does your program endeavor to improve, and what is your rationale for making the change?

A) Include the specific learning objective(s) that will be targeted.

B) Indicate why each objective is important for your program/field, linking it with long-term benefits related to graduate school, job preparedness, civic engagement, etc.

C) Reference assessment results that suggest this area needs improvement. The results MUST include direct measures of student learning.
Programs are also encouraged to reference indirect measures (e.g., self-report surveys), disciplinary trends, or program review recommendations.

2. Investigate Current Efforts. What is your program currently doing to help students attain these learning objectives? What are your hypotheses about why this approach is not as effective as it could be? (Note: Please do not mention particular faculty members’ names. This process is NOT about singling out particular instructors.)

A) Include the curriculum map showing where the objectives are theoretically addressed.

B) Is the curriculum map accurate? Are those objectives truly covered in the classes/activities?

C) If so, is it simply that insufficient time is spent on the area?

D) If not, is there a breakdown in communication or coordination across sections?

3. Propose Learning Modifications. Based on what was discovered in step 2, what interventions (i.e., curricular or pedagogical changes) does the program intend to make to enhance the learning environment with regard to the targeted learning objective(s)?

4. Lay Out Improvement Timetable. Provide a specific timeline showing when the changes will be implemented in the curriculum and when they will be assessed. From this plan, a reader should be able to ascertain (A) when the pre-assessment will be implemented, (B) the intervention “dosage” across cohorts, and (C) when the post-assessment will be administered to gauge the full effect of the modified curriculum/programming.

To illustrate these steps, we provide a hypothetical example in which an academic degree program takes action to improve students’ oral communication skills. The example is adapted from an online help package provided to faculty at James Madison University.

1. Identify Targeted Objective(s). For the 80s pop culture program (B.A.), we endeavor to improve students’ oral communication skills. While students are performing well on most program-level objectives, they have struggled with oral communication.

A. These skills are articulated via the fourth program objective: “Students graduating from the BA program in 80s pop culture will (A) deliver effectively a presentation with (B) an engaging introduction, (C) a logical and fluid body, and (D) a conclusion that reinforces the main ideas of the presentation and closes smoothly.”

B. Why are these skills important? According to our alumni survey results, our students often pursue marketing jobs where presentation skills are critical.
Additionally, the Journal of Pop Culture Education cited oral communication as the third most important skill for graduate students in the field.

C. According to work samples rated using our Oral Communication Rubric, for the last several years seniors’ skills have fallen below faculty standards in three of the four sub-areas (delivery skills, introduction, and conclusion). Additionally, among all program objectives, students self-report that their lowest gains are in oral communication. See Table 1 for a summary of these results.

Table 1. Oral Communication Senior Assessment Results of Three Cohorts

Scale or Subscale | Corresponding Objective(s) | 2011 Mean | 2012 Mean | *2013 Mean (sd) | Desired Results | **2013 Different from 2012?

Oral Communication Rubric (n = 25): 1 = unsatisfactory, 2 = emerging, 3 = competent, 4 = highly competent
Delivery Skills   | 4 | 2.8 | 2.5 | 2.6 (.42) | 3 | No
Introduction      | 4 | 2.7 | 2.9 | 2.8 (.55) | 3 | No
Body              | 4 | 2.9 | 3.1 | 3.0 (.38) | 3 | No
Conclusion        | 4 | 2.9 | 2.7 | 2.7 (.49) | 3 | No

Graduation Survey (n = 91): 1 = no gain, 2 = small gain, 3 = moderate gain, 4 = large gain, 5 = tremendous gain
Oral Comm         | 4 | 2.7 | 2.6 | 2.6 (.8)  | 3 | No

* In the original, green color coding represents the degree to which the observed results were better than the desired results (the darker the green, the better); red coding indicates the degree to which results were worse than desired.
** Based on independent t-tests, using p < .01 as the significance level (lower alpha due to multiple comparisons).

2. Investigate Current Efforts. We have included the curriculum map (see Table 2) showing where and how intensively our program objectives are theoretically addressed.

Table 2. Curriculum Map of Pop Culture Program (Oral Communication is Objective 4)
Coverage of objective: 0 = No Coverage, 1 = Slight Coverage, 2 = Moderate Coverage, 3 = Major Coverage

Course/Learning Experiences        | Obj 1 (Identification of 80s Components) | Obj 2 (Research Methodology) | Obj 3 (Writing Critically) | Obj 4 (Oral Comm)
PCUL201 (Introduction to the 80s)  | 1 | 0 | 3 | 0
PCUL301 (80s Music)                | 0 | 1 | 2 | 3
PCUL302 (80s Fads)                 | 3 | 0 | 1 | 0
PCUL303 (80s TV and Movies)        | 3 | 0 | 0 | 2
PCUL304 (80s Technology)           | 3 | 1 | 1 | 0
PCUL361 (Methods and Analysis)     | 0 | 3 | 1 | 0
PCUL401 (80s Politics and Culture) | 1 | 1 | 3 | 0
PCUL402 (Profiles of 80s Icons)    | 3 | 1 | 0 | 1
PCUL403 (The Music Video)          | 0 | 2 | 0 | 0
PCUL404 (The 80s and Today)        | 0 | 2 | 3 | 0
PCUL480 (Capstone)                 | 2 | 2 | 0 | 2
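The “2013 Different from 2012?” column in Table 1 rests on independent t-tests at p < .01. That check can be approximated from summary statistics alone, as sketched below. Note the assumptions: the paper reports only the 2013 standard deviations, so the 2012 sd is assumed equal for illustration, and the critical value of roughly 2.68 (two-tailed alpha of .01 at about 48 degrees of freedom) is supplied by hand rather than looked up from raw data.

```python
import math

def cohorts_differ(mean1, sd1, n1, mean2, sd2, n2, t_crit=2.68):
    """Approximate two-sample (Welch) t-test from summary statistics.
    Returns True if |t| exceeds the supplied critical value.
    t_crit defaults to ~2.68, the two-tailed p < .01 cutoff near 48 df."""
    standard_error = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    t = (mean1 - mean2) / standard_error
    return abs(t) > t_crit

# Delivery Skills subscale: 2013 mean 2.6 (sd .42, n = 25) vs. 2012 mean 2.5.
# The 2012 sd and n are assumed to match 2013's for this illustration.
print(cohorts_differ(2.6, 0.42, 25, 2.5, 0.42, 25))  # prints False
```

The tiny mean difference relative to the standard error is why every row of Table 1 reads “No”: a 0.1-point gain on a 4-point rubric with n = 25 per cohort is far short of the p < .01 threshold.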

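Treated as data, a curriculum map is simply a course-to-coverage lookup, which makes claims like "how many courses address objective 4, and how intensively" easy to check programmatically. A minimal sketch, using the map's 0-3 coverage scale; the specific codes are this sketch's reading of Table 2 and should be treated as illustrative:

```python
# Obj 4 (oral communication) coverage per course, on the curriculum
# map's scale: 0 = none, 1 = slight, 2 = moderate, 3 = major.
# Values are an illustrative transcription of Table 2.
ORAL_COMM_COVERAGE = {
    "PCUL201": 0, "PCUL301": 2, "PCUL302": 0, "PCUL303": 2,
    "PCUL304": 0, "PCUL361": 0, "PCUL401": 0, "PCUL402": 2,
    "PCUL403": 0, "PCUL404": 0, "PCUL480": 3,
}

# Courses addressing oral communication at each level of intensity.
moderate = sorted(c for c, v in ORAL_COMM_COVERAGE.items() if v == 2)
major = sorted(c for c, v in ORAL_COMM_COVERAGE.items() if v == 3)
```

With these codes, `moderate` holds three courses and `major` holds one (the capstone), matching the discussion of the map that follows.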
According to the curriculum map, four courses address oral communication: three with moderate coverage and one with major coverage. On paper, it would seem students have ample opportunity to learn these skills. Nevertheless, the assessment evidence clearly indicates that students are not as proficient as the program faculty expect. To dig deeper, the six faculty members teaching these courses met with the program coordinator three times in the month of March to investigate, as a program, why students were falling short. What follows is a summary of these discussions:

• Indeed, students did present orally in all of the aforementioned courses.

• However, how these oral communication experiences were implemented varied greatly by course and instructor. More often than not, the emphasis both in preparation and in grading was weighted more heavily toward the content of the course than toward oral communication skills per se. One professor characterized this trend as follows: "If the presentation was reasonably accurate, the student would receive an A despite lackluster oral communication skills. I would make comments on the feedback sheet like, 'seemed a bit nervous and spoke too quickly...' but that was about it. On the other hand, I would provide much more specific feedback regarding the accuracy of content. Nevertheless, the presentation quality was far, far away from what would be considered professional or polished."

• Although we use the program-level oral communication rubric for the capstone class, professors teaching other courses were unaware of its existence. Many said that the rubric would have been helpful in providing feedback to students in their classes.
• Several of the faculty, while acknowledging the importance of oral communication skills, revealed that they did not feel comfortable providing feedback in that area. Indeed, they had received little or no training in how to do so effectively.

3. Propose Learning Modifications. This plan has been discussed and supported by all six faculty who teach program courses with an oral communication component. Note that ALL four of those courses will be affected to some degree; however, the biggest interventions will be in PCUL301 (80s Music) and PCUL480 (Capstone). What follows is a short description of each intervention.

(1) Intervention 1. Clarify Expectations Early. One of the first required courses in the major is PCUL301 (80s Music). In this class, students present on their final project at the end of the semester. The three faculty who teach 301 will explain both the importance of oral communication and the expectations of program faculty. They will communicate that this has, in general, been an area of weakness for graduates; furthermore, employers and graduate schools want students to be competent in such skills. To ensure mastery, both faculty and students will need to work hard. Students will watch videos of the three best senior capstone presentations from the previous year. Faculty will then describe to their students how each of these presentations would be evaluated on the oral communication rubric.

(2) Intervention 2. Align Class-Level Assessments Using the Program-Level Oral Communication Rubric. Presentations will be evaluated on content (70% of the task grade) but also specifically on oral communication (30%). Each faculty member will use the oral communication rubric for that 30% of the grade.

(3) Intervention 3. Emphasize Practice. In all classes with an oral communication component, faculty will urge students to practice their presentations at least four times before the in-class performance. Students will be encouraged to work with their classmates to receive feedback using the rubric and to tape and review their practice efforts.

(4) Intervention 4. Increase the Rigor of Capstone Presentations. For the capstone, the ante will be raised. The final oral presentation will be open to all program faculty and to all majors; it will also be recorded. The three capstone professors will emphasize to students that this presentation will demonstrate not only what students have learned in the program but also how well prepared they are for jobs or graduate school.

Special Note: While not an intervention that directly affects students, faculty will spend three days in in-service training prior to the first week of classes in Fall 2014. There they will discuss how to encourage students to practice before presentations and how to use the oral communication rubric consistently across courses. The faculty development office will help facilitate this training module.

4. Lay Out Improvement Timetable. To coordinate the interventions with assessment, we created an improvement timetable (see Table 3).
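Intervention 2's grading change is concrete enough to sketch: the task grade becomes a weighted blend of a content score and a rubric-based oral communication score. The function name and the 0-100 scoring convention are assumptions for illustration; only the 70/30 split comes from the intervention description.

```python
def presentation_grade(content_score, rubric_score, content_weight=0.70):
    """Blend a content score with an oral-communication rubric score.

    Both scores are assumed to be on a 0-100 scale. The 70/30 weighting
    follows Intervention 2; everything else here is illustrative.
    """
    return content_weight * content_score + (1 - content_weight) * rubric_score

# Under the old practice, strong content alone earned an A. Under the
# new scheme, strong content (92) with weak delivery (60) drops the
# grade noticeably: 0.7 * 92 + 0.3 * 60 = 82.4.
grade = presentation_grade(92, 60)
```

The point of the explicit weight is the professor's observation above: when oral communication carries no formal weight, accurate but unpolished presentations still earn top marks.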
Because the interventions affect several courses that span students' junior and senior years, the total effect will not be realized for several years. We will collect data each year, corresponding to differing levels of intervention. In Year 0, we collect data on seniors (Class of '14) who have not experienced any new intervention. In Year 1, we collect data on seniors (Class of '15) who will receive partial intervention: only senior-level courses are enhanced for this group (PCUL402 and 480). In Year 2, we collect assessment data on seniors (Class of '16) who will receive oral-communication-enhanced classes as juniors and seniors (301, 303, 402, and 480). We hope to find that oral communication scores move higher every year, consistent with the amount of intervention each successive cohort receives. The improvement table reminds the reader that once the intervention is implemented, it is necessary to re-assess.

Assuming the intervention plan is effective, the last step is to celebrate. Everyone on campus should hear about this story. The program is featured prominently on the university website. The president mentions the faculty in his yearly opening speech. The faculty


publish an article about their important work leading to improved student outcomes. They receive travel stipends to present in their own discipline. Upper administration communicates such stories to the Board of Visitors and to the state and federal governments. Further, everyone celebrates what is most important: students learned more. They are better positioned for post-college endeavors such as graduate school and the job market.

Conclusion

Higher education has an obligation to continuously improve, especially regarding student learning. Unfortunately, evidence of learning improvement is virtually non-existent. While assessment certainly has a role in the improvement process, it alone is insufficient. In this article, we provided a model of how a university might take strategic steps to facilitate learning improvement (i.e., the PLAIR). Will the process work at any institution? We hope so, but we will not immediately know its effectiveness. It takes a few years for the typical program to assess, identify an area to improve, research and create an intervention, implement that intervention, and then re-assess to determine whether learning actually improved. One thing we do know is that currently few universities can evidence even one example of improved learning. We encourage more universities to work toward evidencing improved learning and to share conceptual and applied processes. Even if these efforts are not at first successful, at least academe will be focused on the right problem.

Returning to our original examples, a pig will fatten up if it eats more. A racehorse will run faster if it trains well. A dog's temperature will drop given the right medication. In a similar vein, students will learn better if provided a more coherent learning environment. Much like a scale, a stopwatch, or a thermometer, learning outcomes assessment is merely a measurement tool.
In and of itself, it will not produce change; but a good assessment process can document the state of student learning at a given time and, following an intervention, quantify the extent to which learning improved. The challenge for higher education is to coordinate resources to focus on "fattening the pig."
