Assessment as an Agent for Change

Yuerong Sweetland, PhD
Tuesday, June 20, 2017

I recently attended two assessment-related conferences: the AALHE (Association for the Assessment of Learning in Higher Education) 7th Annual Conference and the SAARC (Student Affairs Assessment and Research Conference) at The Ohio State University, where I served as a panelist. The two conferences were quite different: the latter leaned more toward assessment and research in student affairs and co-curricular areas, while the former took a more comprehensive view of assessment, learning, and teaching. Despite the many differences, I felt both addressed a common challenge, either explicitly or implicitly: how to make assessment meaningful and rewarding.

Whether we like to admit it or not, assessment frequently begins as an act of compliance at some, if not many, higher education institutions. As such, it can become mundane, isolated, and less and less impactful as time goes on. In the ever-changing, evolving environment of higher education, assessment needs to stay meaningful, relevant, and engaging. Based on the presentations I heard, conversations with fellow assessment professionals, and my own experience, here are a few thoughts:

  1. To stay relevant, assessment must be an agent for change. In the early years of my assessment career, I spent a great deal of time and energy on the technical aspects of the assessment process: creating measurable outcomes, identifying actionable survey items, conducting training and norming sessions to improve interrater reliability – the list goes on. Meanwhile, we also had to stay constantly mindful of requirements from accrediting agencies. As a result, we seemed to have created a pretty good system of outcomes assessment, in which data were collected and analyzed, and assessment reports were compiled and filed. However, we soon realized that assessment sometimes did not lead anywhere, and that there was sometimes a disconnect between assessment findings and the changes that actually occurred. Consequently, we included specific action items in assessment reports, followed by another item for evaluation and reflection: has student learning changed as a result of the implemented changes? All of these adjustments seem to have helped tremendously. In addition, there are other strategies that can further engage faculty in assessment. For example, Flateby and Gatch (2017) from Georgia Southern University emphasized recognition and reward for excellence in assessment work in their AALHE presentation. Even though some faculty have the internal motivation and intellectual curiosity to assess and improve student learning, it is still important to recognize that devotion and commitment, and to encourage and reward assessment as a form of scholarship. Such recognition and reward help sustain and strengthen a culture of assessment and learning.
     
  2. We all know that a strong assessment culture requires assessment folks to work closely with faculty members. Those of us who are also faculty at our institutions may have a particular advantage: being better able to appreciate and address faculty concerns in assessment work. We also have to work closely with instructional designers and centers for teaching and learning, both of which are important partners in driving changes in curriculum and instructional practices across individual courses, no matter where they sit in the organizational structure. Our partners can also include student learning centers, libraries, and other places on campus where co-curricular programs and experiences occur and that support student learning and success. Last but not least, institutional research and institutional effectiveness offices frequently have the data query capacity that allows detailed analysis of learning (e.g., transfer students vs. non-transfer students, Pell recipients vs. non-Pell recipients). Some campuses have also started using learning analytics data to further understand learning and identify improvement opportunities. To work well with this variety of individuals and groups, assessment professionals have to be flexible yet resilient.
     
  3. As assessment professionals, we are all aware of the importance of the validity of assessment instruments. However, validity can be contextual (Skinner, 2013). Even though Skinner used contextual validity to refer to implementation considerations for “validated” intervention strategies in different contexts, I argue that contextual validity should be a major concern in our selection of any assessment instrument, whether a test or a survey. Two years ago, in revising the course survey instrument at Franklin University (one of the most heavily used indirect assessment instruments at the University), we referred to the well-known SRI (Student Ratings of Instruction) tools from IDEA (IDEA, 2017). Given Franklin’s centralized academic model and curricular design framework, we had to tweak the original SRI items so that they would be valid in Franklin’s context. Data from the revised survey instrument have been used extensively across the University to inform changes in course design and improvements in teaching. Clearly, if we had simply adopted the SRI instrument as-is, we might have been able to benchmark ourselves against other institutions; however, we might not have been able to translate the benchmarking findings into actionable items for our local context.

For assessment to serve as an agent for change, the assessment instrument has to have contextual validity. Campus stakeholders need to work together in reviewing an instrument to determine whether it has a sufficient level of contextual validity.

How do you make assessment meaningful and rewarding? I welcome your thoughts and opinions.

References:

Flateby, T., & Gatch, D. B. (2017). Enhancing the value of assessment: Developing and fostering affective outcomes. Presentation at the AALHE 7th Annual Conference, Louisville, KY.

IDEA. (2017). Student Ratings of Instruction. Retrieved from http://www.ideaedu.org/Services/Student-Ratings-of-Instruction-Tools

Skinner, C. H. (2013). Contextual validity: Knowing what works is necessary, but not sufficient. The School Psychologist, Winter 2013, 14–21.
