Glossary of Terms
1. Action Plan
An agreed-upon improvement strategy among faculty to address the results gathered from an ongoing assessment plan.
2. Assessment
A systematic, ongoing process of gathering and analyzing evidence of student learning for continuous improvement of programs and courses.
3. Assessment Methods
Assessment methods are the strategies, techniques, tools, and instruments used to determine the extent to which students are meeting the desired learning outcomes.
4. Benchmark
A benchmark is a collection of relevant internal or external data used to compare and monitor progress toward meeting performance goals.
5. Data-driven Evidence
The use of systematically gathered data to make informed decisions about student achievement, assess progress toward goals, and improve outcomes.
6. Closing the Loop
Analyzing assessment results and using them to make changes that improve student learning or enhance programs. Closing the loop is a continuous process of assessment improvement.
7. Institutional Effectiveness
The extent to which an institution achieves its mission and goals.
8. Institutional Goals
Institutional-level action statements that implement, support, and are derived from the Mission and Strategic Plan.
9. Institutional Mission
A broad statement of institutional philosophy, role, scope, etc.
10. Learning Outcomes
Learning outcome statements are specific, narrow, and measurable. They describe the knowledge, skills, or values students have achieved upon completion of a program or course.
11. Portfolio
A compilation of student-generated evidence, which might contain work samples, lesson plans, case studies, research papers, photographs, videotapes, newsletters, resumes, and observations, used to assess student progress, effort, and achievement over time.
12. Direct Assessment of Learning
Direct assessment of learning evaluates student performance based on students' actual work. Examples include pre/post-tests, course-embedded assignments (e.g., essays, research papers, portfolio evaluations, case studies), standardized exams, and capstone course evaluations.
13. Course-Embedded Assessment
Course-embedded assessment refers to class assignments, activities, or exercises used to assess student performance, or aggregated to provide data about a particular course or learning outcome. Examples include exam questions, pre/post-tests, work samples, field observations, and presentations.
14. Formative Assessment
Formative assessment refers to the ongoing process of gathering information about student learning during a course or program to guide improvements in the teaching and learning process. Its primary goal is to identify areas that need improvement. Examples include reflection journals, homework exercises, class discussions, question-and-answer sessions, and observations.
15. Indirect Assessment of Learning
Indirect assessment of learning uses opinions, thoughts, reflections, or perceptions to make inferences about student learning. Some examples of indirect assessment of learning are student surveys, student self-assessment reports, employer surveys, focus groups, interviews, alumni surveys, course grades, completion rates, and job placement data.
16. Inter-rater Reliability
Inter-rater reliability is the degree to which raters reach consensus on the score for the same sample of work. High inter-rater reliability means a high degree of agreement between raters; low inter-rater reliability means raters score the same work differently even when applying the same rubric.
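To make the idea concrete, one simple measure of inter-rater reliability is percent agreement: the share of samples on which two raters assign the same score. The Python sketch below uses hypothetical scores on an assumed 1-4 rubric scale; in practice, chance-corrected statistics such as Cohen's kappa are often preferred.

    # A minimal sketch of percent agreement between two raters, assuming
    # each rater scored the same ten work samples on a 1-4 rubric scale.
    # All scores below are hypothetical.

    def percent_agreement(scores_a, scores_b):
        """Share of samples on which both raters assigned the same score."""
        matches = sum(a == b for a, b in zip(scores_a, scores_b))
        return matches / len(scores_a)

    rater_a = [3, 4, 2, 3, 1, 4, 3, 2, 4, 3]
    rater_b = [3, 4, 2, 2, 1, 4, 3, 2, 4, 2]

    print(f"Agreement: {percent_agreement(rater_a, rater_b):.0%}")
    # Agreement: 80% -- a rough sign the rubric is applied consistently.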
17. Program-Level Assessment
Program assessment is a systematic, ongoing process of gathering and analyzing evidence of student learning to determine whether the program is meeting its learning outcomes, and then using that information to improve learning. The evidence gathered and used for improvement or accreditation purposes can be quantitative, qualitative, formative, or summative.
18. Reliability
Reliability is the extent to which an instrument (e.g., a rubric) measures the same way each time it is used under the same conditions with the same subjects.
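As an illustration, consistency over repeated use is sometimes checked with a test-retest correlation. The sketch below is a minimal, hypothetical example: the same six pieces of work are scored on two occasions, and a Pearson correlation near 1.0 suggests the instrument measures consistently.

    # A minimal sketch of a test-retest reliability check, assuming the
    # same instrument is applied to the same six students' work twice.
    # All scores are hypothetical.

    def pearson_r(x, y):
        """Pearson correlation coefficient between two score lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        var_x = sum((a - mx) ** 2 for a in x)
        var_y = sum((b - my) ** 2 for b in y)
        return cov / (var_x * var_y) ** 0.5

    first_scoring = [78, 85, 62, 90, 74, 88]
    second_scoring = [80, 84, 65, 91, 72, 86]

    print(f"Test-retest r = {pearson_r(first_scoring, second_scoring):.2f}")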
19. Rubric
A rubric is a scoring tool developed around established criteria and performance standards that can be used for summative and program-level assessments. A rubric includes clear descriptors of the work associated with each criterion at varying levels of mastery.
20. Summative Assessment
Summative assessment is evidence gathered at the conclusion of a course or program to improve learning or to determine whether program goals are met. Common examples include standardized tests, chapter tests, final exams, capstone projects, and portfolio presentations.
21. Validity
Validity refers to how well a test measures what it claims to measure. Consistency of results over repeated administrations is a matter of reliability (term 18), not validity.
22. Value Added
The increase in student learning that a course, program, or institution contributes, measured from the time students first enroll to the time they graduate.
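As a worked example, the simplest value-added calculation is the average gain between matched entry and exit scores for the same students. The cohort and scores below are hypothetical.

    # A minimal sketch of a simple value-added calculation, assuming
    # matched entry (pre) and exit (post) scores for the same cohort
    # of five students. All scores are hypothetical.

    entry_scores = [55, 62, 48, 70, 58]
    exit_scores = [72, 80, 66, 85, 73]

    gains = [post - pre for pre, post in zip(entry_scores, exit_scores)]
    average_gain = sum(gains) / len(gains)

    print(f"Average gain (value added): {average_gain:.1f} points")
    # Average gain (value added): 16.6 points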