The University completed its first two-year assessment plan. Three major assessment measures were used to collect data on institutional learning outcomes: the Modified VALUE Rubric, the GEA, and the NSSE survey. For detailed results, see Evidence of Student Learning. To improve communication of general education assessment activities, we identified four lead general education faculty for each competency to lead the efforts in closing the loop.
1. Modified VALUE Rubric: Results showed that some student artifacts were scored N/A on certain rubric dimensions, so the university provided assignment-design training to ensure that assignments align with all dimensions of the rubric. In addition, to improve the reliability of the data, we conducted training sessions for each competency to practice the calibration process. The goal of this additional training is to improve the validity and reliability of the data in the next data collection cycle.
2. GEA: All competencies met the university benchmark. Only Competency 3 (Quantitative Literacy) fell below the state performance indicator. The university continues to follow up on the 2017-18 actions (online tutorials to help students prepare for the GEA, and the national math pathway) to determine whether those actions have had an impact or whether additional actions are needed in the following years.
3. NSSE: The data were shared with university committees and departments to facilitate closing-the-loop discussions.
All programs are encouraged to use assessment results to make improvements in student learning. From 2015-16 to 2017-18, the proportion of programs reporting "No Actions" decreased significantly, from 24% to 13%, while the proportion making changes with a direct impact on students (teaching and learning, curricular changes, and SLOs) increased from 26% to 33%. The university continues to encourage programs to take actions with a direct impact on student learning.
The data collected are also used to improve the university's assessment activities. The first improvement was updating the university assessment timeline. Most steps in the 2015-2016 timeline aligned closely with the four steps of the assessment cycle (Program SLOs, Assessment Methods, Assessment Results, and Actions for Improvement). In addition, the timeline directed programs to use a program assessment rubric to self-evaluate their own assessment reports. In 2016-2017, the timeline added best-practice requirements to some steps of the assessment cycle. For example, in the first step, program coordinators were required to provide evidence that their program SLOs had been validated by their advisory boards. In the last step, Actions for Improvement, programs were required to provide evidence of faculty meetings held to discuss the assessment results and reach agreement on actions for improvement.
Second, to improve the quality of the program assessment process, the university implemented multiple models of peer review (meta-assessment). In 2015-2016, peer review of the assessment report relied mainly on self-evaluation by program coordinators. In 2016-2017, peer review by Faculty Senate University Assessment Council (FSUAC) members was added to the timeline. In 2017-2018, five major updates were made to the university assessment timeline:
The updated items reflect the full implementation of the CQIP model, current trends in assessment, and best practices in closing the assessment loop. To further improve the quality of the program assessment process, additional criteria were added each year to the components of the program assessment rubric.
In 2018-2019, the Assessment Coordinator and the Vice Provost of Academic Programs and Services met with deans and chairs from the four colleges to encourage the use of assessment results in resource allocation, closing the institutional effectiveness loop. The university also collected faculty feedback through an assessment survey in preparation for the three-year assessment report. The University Assessment Committee will use the results to improve the assessment process next year.
Last updated on April 1, 2019