The primary purpose of preparing and submitting annual assessment reports is to demonstrate how departments and schools evaluate the success of their academic programs in achieving Program Student Learning Outcomes (PSLOs) for both degree and certificate offerings. These reports provide a structured, systematic approach to gathering data, analyzing findings, and making informed improvements to the curriculum, thereby supporting high-quality education and fostering student success.
Overview
Program-level Student Learning Outcomes (PSLOs) specify the knowledge, skills, abilities, and attitudes that students should demonstrate upon completing a degree or certificate program. These outcomes guide the development of curriculum, instructional approaches, assessment methods, and program evaluations, making PSLOs essential for faculty, staff, and both current and prospective students.
For new programs, clearly defined, measurable PSLOs ensure students are prepared for academic and professional success.
For established programs, regular review of PSLOs maintains their relevance and supports the program's ongoing role in preparing students for success in their fields.
Requirements for PSLOs:
Recommended Steps for Articulating and Reviewing PSLOs:
A. Clarify the Program’s Purpose and Scope:
B. Ensure Alignment with Discipline-Specific Standards (if applicable):
C. Identify Core Learning Domains:
Determine the primary areas of learning that your degree or certificate program will emphasize. These may include:
Use action verbs from Bloom’s Taxonomy to describe learning outcomes:
D. Write Clear and Measurable PSLOs:
Each outcome should be:
Example PSLO Formats:
E. Cover a Broad Spectrum of Outcomes:
PSLOs should encompass a wide range of learning domains, from foundational knowledge to advanced skills and professional growth. This may include:
F. Review and Refine PSLOs:
Once the PSLOs are articulated, consult with faculty and other stakeholders (e.g., industry experts, discipline-specific accrediting bodies if applicable, and student representatives) to ensure they meet the needs of all parties and are clear, realistic, and measurable.
G. Document and Integrate PSLOs into Program Design:
Once finalized, incorporate the PSLOs into the program structure, course syllabi, and assessment plans. Communicate them clearly to students, faculty, and staff to ensure alignment and transparency.
H. Establish a Plan for Ongoing Review and Improvement:
Regularly review and update the PSLOs as the program develops, taking into account student performance, stakeholder feedback, and changes in the field. Incorporate mechanisms for evaluating and revising PSLOs into the program’s assessment cycle.
Example of Program Student Learning Outcomes:
Program: Master of Science in Data Science
The Rubric for Reviewing Program-Level Student Learning Outcomes (PSLOs) will be used to provide feedback for both new and established degree and certificate programs.
Curriculum maps are very helpful in demonstrating where learning outcomes are addressed in the program’s curriculum. In essence, a curriculum map is a table with two axes: one listing the program learning outcomes, the other listing the courses required for the major.
Example of a curriculum map:
| Courses Required for the Major* | Learning Outcome 1 | Learning Outcome 2 | Learning Outcome 3 | Learning Outcome 4 | Learning Outcome 5 |
| --- | --- | --- | --- | --- | --- |
| ABC 1234 | L | L | L |  |  |
| ABC 2345 | M | M | L | M |  |
| ABC 3456 | M | H | H |  |  |
| ABC 4568 (Capstone) | H | H | H | H | H |
*Required courses: Program Learning Outcomes should be assessed in required courses for the major since all majors in the program must take them. Electives should not be used to assess any program-level student learning outcome as not all students in the major are required to take them.
Note: L, M, and H describe the extent to which students experience the learning outcome. L=Low emphasis on the learning outcome; M=Moderate emphasis; H=High emphasis. Every required course should contribute to addressing one or more learning outcomes.
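For departments that prefer to keep their curriculum map in a structured, machine-readable form, the sketch below shows one possible way to represent the example map above and check the two expectations just described: every required course addresses at least one outcome, and every outcome is covered (ideally reaching high emphasis by the capstone). This is a minimal illustration, not a required tool; the course codes and outcome labels simply mirror the example table.

```python
# Illustrative sketch only: a curriculum map represented as a dictionary
# mapping each required course to the outcomes it addresses and the
# emphasis level ("L" = low, "M" = moderate, "H" = high).

curriculum_map = {
    "ABC 1234":            {"LO1": "L", "LO2": "L", "LO3": "L"},
    "ABC 2345":            {"LO1": "M", "LO2": "M", "LO3": "L", "LO4": "M"},
    "ABC 3456":            {"LO1": "M", "LO2": "H", "LO3": "H"},
    "ABC 4568 (Capstone)": {"LO1": "H", "LO2": "H", "LO3": "H", "LO4": "H", "LO5": "H"},
}

outcomes = ["LO1", "LO2", "LO3", "LO4", "LO5"]

# Every required course should contribute to at least one learning outcome.
for course, coverage in curriculum_map.items():
    if not coverage:
        print(f"Warning: {course} does not address any program learning outcome.")

# Every outcome should be addressed somewhere in the required curriculum,
# ideally reaching high emphasis (H) by the capstone course.
for lo in outcomes:
    levels = [cov[lo] for cov in curriculum_map.values() if lo in cov]
    if not levels:
        print(f"Warning: {lo} is not addressed by any required course.")
    elif "H" not in levels:
        print(f"Note: {lo} never reaches high (H) emphasis in the required courses.")
```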
Assessment methods describe how each student learning outcome is measured. There are two types of assessment measures (direct and indirect).
1) Direct Measures AND Number of Students Assessed: This section requires description of direct measure(s) used to assess students’ progress toward achievement of each learning outcome. Direct measures require students to demonstrate acquired knowledge and skills. Below are examples.
*These measures are suitable for assessing graduate-level learning outcomes. The focus of assessment for graduate learning outcomes should be on formative assessments that prepare students for culminating experiences (e.g., comprehensive or general examinations, thesis and thesis defense, dissertation and dissertation defense).
Each student learning outcome should be measured using a direct measure, as that is the only way to determine the extent to which students are able to demonstrate the knowledge, skills, and abilities they have acquired in a course or program. Please remember to indicate the total number of students assessed using a given measure.
2) Indirect Measures AND Number of Students Assessed: Indirect measures are commonly used to seek student opinions regarding knowledge and skills acquired in the program. Findings from indirect measures should be used to augment those of direct measures. Examples include surveys, focus groups and interviews. Please include the total number of students participating in each activity.
3) Performance Target (a.k.a. Criteria for Success): This refers to the desired level of performance faculty want to see, based on a measure or method of assessment, that represents success at achieving a given student learning outcome.
Additional Guidelines/Recommendations regarding assessment measures:
Examples of well documented Assessment Methods or Measures:
Direct Measure(s) AND Number of Students Assessed:
Assessment of senior Capstone research papers (N=30) by a faculty panel using a locally constructed 5-point rating scale where 5=“excellent,” 4=“good,” and 3=“satisfactory.” 85% should score satisfactory or better, and the mean score should reflect a better than satisfactory performance for the graduating class.
(Undergraduate – Dept. of Communication)
Indirect Measure(s) AND Number of Students Completing Surveys:
Graduating seniors (N=30) will be surveyed to gather their perceptions of their basic knowledge and background in the communication discipline using a 5-point Likert scale, where 1=very well, 2=fairly well, 3=somewhat well, 4=not very well, and 5=not at all. 80% of respondents will rate their basic knowledge and background in the communication discipline as “very well” or “fairly well”.
(Undergraduate – Dept. of Communication)*
______________________________
*Statement was revised for accuracy.
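To make the performance targets in the Communication examples above concrete, the sketch below shows one way the two targets could be checked once scores are collected. The score lists are purely hypothetical placeholders, and the thresholds simply restate the targets given in the examples (85% at satisfactory or better with a better-than-satisfactory mean for the direct measure; 80% responding “very well” or “fairly well” for the survey).

```python
# Illustrative sketch only: checking the performance targets from the
# Communication examples against hypothetical score data.

capstone_scores = [5, 4, 4, 3, 5, 3, 4, 2, 5, 4] * 3   # N = 30 hypothetical panel ratings (1-5)
survey_responses = [1, 2, 2, 1, 3, 2, 1, 2, 4, 2] * 3  # N = 30 hypothetical survey responses (1-5)

# Direct measure target: 85% score satisfactory (3) or better,
# and the mean reflects better-than-satisfactory performance (> 3).
pct_satisfactory = sum(s >= 3 for s in capstone_scores) / len(capstone_scores) * 100
mean_score = sum(capstone_scores) / len(capstone_scores)
direct_target_met = pct_satisfactory >= 85 and mean_score > 3
print(f"Capstone: {pct_satisfactory:.0f}% satisfactory or better, mean {mean_score:.2f}, "
      f"target met: {direct_target_met}")

# Indirect measure target: 80% of respondents answer "very well" (1)
# or "fairly well" (2) on the 5-point scale.
pct_well = sum(r <= 2 for r in survey_responses) / len(survey_responses) * 100
print(f"Survey: {pct_well:.0f}% rated 'very well' or 'fairly well', "
      f"target met: {pct_well >= 80}")
```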
This step focuses on documenting the results of the analysis of assessment data to show how students actually performed on each learning outcome, based on the assessment methods faculty selected for each outcome. The following key questions can be used to guide analysis of data:
It is very important to analyze assessment results in order to learn whether or not the criteria for the student learning outcomes were met. Analysis of data may provide important information regarding the relationship between assessment outcomes and relevant program indicators such as course grades. Further, department faculty may be able to find out the extent to which students change over time and/or whether or not students meet specified program expectations.
Analyzing data should include organizing, synthesizing, interrelating, comparing, and presenting the assessment results. These processes should be guided by the assessment questions asked, the types of data available, and the needs of faculty, students, and the wider university community, including stakeholders. Since the results of data analysis lend themselves to multiple interpretations, it is valuable to review the analyzed data with others; different perspectives often lead to greater understanding.
Data can be compared to results from previous assessments, baseline data, existing criteria/standards, etc. For instance, department faculty may be interested in finding out if their majors learned or developed more as a result of participating in a course or program than students who did not participate.
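As a simple illustration of the comparisons described above, the sketch below contrasts a current cohort’s rubric scores with an earlier baseline. The score lists and the threshold of 3 are hypothetical placeholders; the point is only to show how a mean and a percentage meeting a criterion might be compared across assessment cycles.

```python
# Illustrative sketch only: comparing current assessment results
# against a prior baseline using hypothetical 1-5 rubric scores.

baseline_scores = [3, 3, 2, 4, 3, 3, 2, 4, 3, 3]   # hypothetical earlier cohort
current_scores  = [4, 3, 3, 4, 5, 3, 4, 4, 3, 5]   # hypothetical current cohort

def summarize(scores, threshold=3):
    """Return the mean score and the percentage meeting the threshold."""
    mean = sum(scores) / len(scores)
    pct_meeting = sum(s >= threshold for s in scores) / len(scores) * 100
    return mean, pct_meeting

baseline_mean, baseline_pct = summarize(baseline_scores)
current_mean, current_pct = summarize(current_scores)

print(f"Baseline: mean {baseline_mean:.2f}, {baseline_pct:.0f}% at or above threshold")
print(f"Current:  mean {current_mean:.2f}, {current_pct:.0f}% at or above threshold")
print(f"Change in mean: {current_mean - baseline_mean:+.2f}")
```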
The express purpose of assessment is to continuously improve student learning. In light of this, it is very important that assessment results are analyzed, interpreted, reflected upon, and, most importantly, used by faculty to make programmatic changes in the context of continuous improvement. The assessment process cannot be complete without “closing the loop,” that is, using assessment results for program change and improvement; this is the most important part of the assessment process.
The key questions regarding this step: How do faculty intend to enhance student learning based on results of student performance? Where are students performing well (as expected)? Where are they falling short? Which areas of the curriculum should be emphasized more?
Assessment results can be used in a variety of ways including, but not limited to, the following: