ARCHIVED – Report & Plan Use of Results

Archived Date: 26 February 2024

Congratulations are in order if you have finished evaluating evidence of student learning or compiling perceptions of student learning!

Using the results for improvement or program evolution is the goal of assessment. Programs use learning assessment results to inform decision making and to improve teaching and learning. Results can also highlight successes such as the following:

  • a better alignment of the curriculum with desired learning outcomes;
  • the creation of useful rubrics;
  • a set of explicit standards and corresponding samples of student work (“anchors”);
  • evidence that students are meeting or exceeding learning expectations.

Below are the elements typically found in a report on learning assessment. Note: the elements may appear in a different sequence, and some may not be appropriate in your report.

Elements in the Report and Guiding Questions
1. The learning outcome(s) and assessment question(s) that were investigated.
2. The type of (learning) evidence that was collected and when it was collected.

Did students demonstrate their learning in a course assignment? A course exam? A program-level requirement such as a portfolio?
3. Description of the sampling method, if applicable (e.g., if not all students in the program participated, briefly describe who participated and why that subset was chosen).

(a) Who and how many submitted learning evidence/data/information?
(b) How many students are in the program (or, how many graduate each year)?
(c) How many were asked to participate?
(d) How were they selected?
(e) How many actually participated?
(f) How many non-responses were there?
(g) What steps were taken to ensure a representative group participated (representative in terms of groups of interest to the program and/or the University, such as first-generation college students, ethnicity, and transfer status)?
4. The process to evaluate the evidence.

(a) What scoring rubric was used? (include the rubric in the report)
(b) How was the exam scored? (include the key in the report)
(c) Were benchmark samples of student work used?
5. A timeline of key events.

When did the following occur: selecting the learning outcome to investigate, collecting evidence of student learning, and evaluating that evidence?
6. Summary of the results. (For a worked example of tallying and disaggregating these counts, see the sketch after this list.)

(a) How many students scored “1,” “2,” “3,” or “4”?
(b) How many students passed the exam?
(c) How many students agreed, disagreed, were neutral?
7. Answer(s) to the assessment question and whether the criteria for success were met.

(a) Did a sufficient percentage of students meet faculty expectations?
(b) When the results are disaggregated by student characteristics of interest (e.g., first-generation college students, ethnicity) do all groups perform at the same level?
8. What are the plausible reasons for the results? For example, why might students have scored high in written communication but not in problem solving?

To guide interpretation, the faculty and other stakeholders (e.g., students) may ask questions such as the following:
(a) Was the assignment explicitly aligned with the rubric and the learning outcome?
(b) Did the course activities mirror the assessment task?
(c) Were students given sufficient opportunities to learn, get feedback, and practice?
(d) Did the instructional methods match the needs of the students?
(e) Were students motivated to do their best?
(f) Are there any equity issues? Are all groups performing at the same level?

Read the handout, Interpret and Use Results.
9. What specific actions will be taken (e.g., celebrate success, make a change aimed at optimizing student learning)?

Consider whether any of these actions are appropriate given the reasons for the assessment results.

Curriculum-related actions. Given the results, should we:
(a) Change how courses are taught (for example, by increasing the amount of scaffolding)?
(b) Change the assignment, test, or other performance task? Note: if students do not “see themselves” in the assignment, work with students to create an assignment that resonates with them (this is an equity move). Good assessment tasks benefit students and faculty.
(c) Incorporate place-based, culturally-responsive, and/or culture-based assignments?
(d) Revise course content?
(e) Widely share illustrative samples of high and low performance with students?
(f) Modify frequency or schedule of course offerings?
(g) Revise prerequisites?
(h) Revise the course sequence?
(i) Add or delete course(s)?

Resource-related actions. Given the results, should we:
(a) Hire or re-assign faculty and/or staff?
(b) Increase or change classroom space?
(c) Train faculty and/or staff in a particular area, such as integrating Native Hawaiian Place of Learning elements?

Academic processes. Given the results, should we:
(a) Improve how we use technology?
(b) Revise advising standards or processes?
(c) Revise admission criteria?
10. Reflection on or an evaluation of the assessment plan and processes. Make changes if needed.

Assessment processes. Given our experience carrying out this assessment plan, do we need to:
(a) Revise the program or course learning outcomes?
(b) Change the criteria for success? Modify our expectations?
(c) Revise data-collection or data-evaluation methods?
(d) Revise the assignment, exam, or other measurement approach?
(e) Change the timeline?
(f) Collect and analyze additional data?
(g) Involve students more in the process?
11. The action(s) to be taken, a timeline, and who is responsible for implementing the action(s).

(a) What specific action did the faculty agree to take?
(b) What are the major steps to implement the plan?
(c) Who is responsible and what is the timeline?
(d) When will this learning outcome be re-assessed? How could we investigate whether learning improvement occurred when we re-assess?
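
Worked Example: Tallying and Disaggregating Results

As a companion to elements 3, 6, and 7 above, the following is a minimal Python sketch of one way a program might compute a response rate, count rubric scores, and disaggregate results by a student characteristic of interest. All data, group labels, and the criterion for success below are hypothetical and for illustration only; this is a sketch, not a prescribed method, and a spreadsheet pivot table accomplishes the same tallying.

```python
# Illustrative sketch only: all records, group labels, and the
# criterion below are hypothetical, not drawn from any real program.
from collections import Counter

# Hypothetical records: (student group, rubric score on a 1-4 scale).
records = [
    ("first-gen", 3), ("first-gen", 2), ("transfer", 4),
    ("continuing", 3), ("continuing", 4), ("first-gen", 4),
    ("transfer", 2), ("continuing", 1), ("transfer", 3),
]

# Element 3: response rate (participants / students asked to participate).
students_invited = 15
print(f"Response rate: {len(records) / students_invited:.0%}")

# Element 6: how many students scored 1, 2, 3, or 4?
score_counts = Counter(score for _, score in records)
for score in sorted(score_counts):
    print(f"Score {score}: {score_counts[score]} student(s)")

# Element 7: did a sufficient percentage meet expectations?
# (Hypothetical criterion: a rubric score of 3 or higher.)
CRITERION = 3
met = sum(1 for _, score in records if score >= CRITERION)
print(f"Met expectations: {met}/{len(records)} ({met / len(records):.0%})")

# Element 7(b): disaggregate by group to check for equity gaps.
by_group = {}
for group, score in records:
    by_group.setdefault(group, []).append(score)
for group, scores in sorted(by_group.items()):
    pct = sum(1 for s in scores if s >= CRITERION) / len(scores)
    print(f"{group}: {pct:.0%} met expectations (n={len(scores)})")
```

The output is exactly the set of counts and percentages the report elements above call for: a response rate, a score distribution, an overall pass rate against the criterion, and the same pass rate broken out by group.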

Tips for Writing up the Results

  • Determine the specific goal(s) of the report. For example, the goal may be to
    • communicate to colleagues in the department the student skills that were assessed, how evidence was collected and evaluated, what the results mean and how they will be used.
    • explain why a particular data-collection method was selected, how evaluation took place, and how results will be used as part of program improvement.
    • entice faculty to attend a meeting to interpret the results and decide on appropriate actions that are likely to improve student learning.
  • Attend to the level of detail: faculty in the department will appreciate more detail, while deans, directors, and provosts will appreciate less.
  • Report the results in language appropriate for the audience receiving the report, and explain technical terms. If a statistician is hired, be sure to ask them for a plain-language description of statistical terms.
  • Keep it short and concise; be careful not to overwhelm. If a written report is lengthy, include a 1-page executive summary.
  • Be accurate and be careful to not mislead.
  • Use visual displays, bulleted lists, and active voice.
  • If other assessment results exist, bring them into the discussion.

Tips for Sharing and Acting on the Results

  • Distribute the report of the results as widely as possible.
  • Store the report in a shared location such as the department’s Google Drive folder.
  • Provide information that will assist faculty’s understanding of student learning.
  • Act on the results in ways that will improve the assessment process, student learning, or both. Assessment results are important evidence on which to base requests for funding, curriculum changes, the rethinking of faculty lines, and more.
  • Disappointing (negative) assessment results can have a positive effect when they are used to improve the learning process.
  • Present the report in several ways: a face-to-face meeting, a written report, or a workshop in which the report serves as the springboard for brainstorming possible next steps.
  • Engage the program faculty members, staff, and students in discussions about the results and how they might be used. Questions like these can start the conversation:
    • Do the results live up to our expectations?
    • Are our expectations appropriate? Should expectations be changed?
    • What were the most effective tools to assess student learning? Can those tools be shared and used in other courses or programs?
  • Once there is consensus on the action(s) to be taken, create an action plan that describes the actions the program will take, who will take those actions, and the timeline for implementing actions.
  • Monitor changes as they are implemented to determine whether they have the desired effect(s).