1999 Annual Report

University of Nebraska-Lincoln
September 2000


Appendix D

Highlighted in this appendix are examples of the variety of ways graduate programs are using outcomes assessment. These effects have resulted primarily from efforts undertaken by the College of Arts and Sciences.1 In 1997-98, the college initiated assessment of graduate outcomes because assessment of the undergraduate major was well underway and running smoothly. Because the completion requirements for graduate programs (e.g. comprehensive exams, theses/dissertations, etc.) are consistent across different disciplines, the college provided departments with a listing of possible measurement dimensions (i.e. process/resource and outcomes) and methods for measuring each dimension. These guidelines led to the implementation of three assessment methods by most programs.

  1. Creation of a system for tracking student progress through the program that gathers information on indicators such as time to completion, funding, area of study, etc.
  2. Implementation of a checklist for rating student achievement of learning objectives on theses/dissertations and comprehensive exams. This checklist is completed by the faculty members serving on the student's committee.
  3. Use of data from surveys of graduate faculty, graduate students, and graduate alumni funded and administered by the college.2 These surveys contained a core set of items generally related to all graduate programs and each program had the opportunity to add questions specific to its own concerns. Survey results provided to a program included responses from those associated with the program as well as a summary of college-wide results to serve as a benchmark.

Survey results served as the primary basis for the effects highlighted in this appendix. Not enough information has been collected through the tracking system and checklist to determine reliably how to improve learning in the program.

The following is a description of the kinds of activities spurred by assessment evidence collected in 1998-99.

  • Students' poor performance on several learning objectives led one department to consider changes in curriculum requirements and to add an examination of these objectives after the students' first year of study.

  • To improve student performance on the program's learning objectives, a group advising session was held before registration each semester to explain the courses available and why those courses are important to graduate study.
  • One department began offering its own courses to cover material previously taught in other departments, after students responded that the courses in those departments failed to meet their educational needs.
  • In the process of creating a structure for collecting ongoing assessment data from students, one department initiated a review of the structure of their graduate program.
  • Several departments implemented programs to improve the training of graduate teaching assistants. These programs involved: a) requiring or encouraging participation in the college's Preparing Future Faculty program, which assisted students in preparing a teaching dossier and discussing issues they would face; b) encouraging attendance at the Teaching and Learning Center's workshops geared toward graduate teaching assistants; and c) offering a required teaching seminar before students could be given full responsibility for teaching a course.
  • Student weaknesses in areas important to their future employment (i.e. teaching experience, professional development, broad coverage in the discipline, etc.) encouraged several departments to consider strategies for making students more competitive in the job market.
  • Several departments planned to use their assessment methods for evaluating the benefits and disadvantages of combining introductory graduate courses with upper level undergraduate courses (i.e. 400/800 level courses).
  • One department identified the need for a method of assessing teaching outside the classroom (i.e. dissertation supervision, student achievements, and placement success) for recognition and reward.
  • Assessment results indicated to two departments that they needed to consider whether graduate student teaching loads were delaying program completion. In contrast, another department recognized that completing a program too early might keep graduate students from fully developing their research skills and professional development.
  • To improve students' professional development, one department scheduled several brown bag seminars to discuss research, teaching, and career searches. Another department added a one-hour seminar on professional development that each graduate student was required to take during the spring of their first year.
  • One department plans to use future assessments to monitor efforts to improve student involvement in research.
  • One department plans to assess the effect of its revised program guidelines by using survey results from faculty, students, and alumni as a benchmark.
  • Assessment evidence indicated to several departments that recruitment of graduate students was an area in need of improvement.
  • Some departments planned to use the tracking system as a way to improve students' time to completion.