1998 Annual Report

University of Nebraska-Lincoln
July 1999

Hard copies of the 1998 University-Wide Assessment Report have been sent to the deans of all UNL colleges and to the Teaching and Learning Center. Questions concerning this report or university assessment activities should be directed to the Director of Institutional Assessment.


Table of Contents

 

Introduction and Recommendations

Section I - University-wide Assessment Committee Activities

Activity 1: Integrating assessment into UNL's program planning and budgeting process
     Academic Program Review
     External Accreditation
     Mid-Cycle Reviews

Activity 2: Assessment of the Comprehensive Education Program - Results & Refinement
     1997-98 Comprehensive Education Program Assessment Activities
     Modifications to the Existing CEP Assessment Plan

Activity 3: Development and annual reporting of college, departmental, and program assessment
     Annual Dean's Assessment Reports
     Special Project: Review of Outcomes Assessment Efforts at Other Research I & II Universities

Activity 4: Collection of surveys institution-wide

Activity 5: Timeline for implementing assessment of graduate student learning outcomes

Activity 6: Analysis of existing data and the production of shared assessment documents

Activity 7: Communications strategy for providing useful assessment information to university faculty and administration

Section II - University-Wide Assessment Issues, Ideas, and Effects

Part I. Issues
     Improving Outlined Learning Objectives/Outcomes
     Striving for Value and Usefulness
     Revising and Revisiting Plans
     Use of Indicators of Student Learning
     Faculty & Student Communication
     Establishing Feasible Plans of Implementation
     Efficient Assessment
     Developmental Assessment
     Focus for Changes Based on Assessment Results
     Where Colleges and Departments Can Get Help

Part II. Ideas
     Broad Generalizations about Student Achievement
     Capstone Course Assessment
     Linking Learning Objectives
     Value Added Studies
     College Wide Surveys
     Revision of Assessment Plans and Activities
     Evaluation of Course Learning Objectives
     Faculty Involvement
     Long-term Implementation Plans
     Graduation Cycles
     Integrating the Assessment of CEP
     Assessing across the Curriculum
     Improving Teaching, Course Assignments, and Student Experiences

Part III. Effects

Conclusions

Appendices

     Appendix A - Mid-Cycle Assessment Review schedule and document
     Appendix B - 1998 Comprehensive Education Program Assessment Report
     Appendix C - Review of Outcomes Assessment Efforts at Other Research I & II Universities
     Appendix D - Survey Audit Forms and Results


Introduction and Recommendations

 

During the 1998-99 academic year, the University-wide Assessment Committee continued its charge of monitoring and guiding UNL's efforts to "systematically and routinely establish patterns of evidence of student learning at the program level and reviewing those patterns for purposes of continuous quality improvement" (UNL Assessment Plan, 1996, p. 6). To document the fulfillment of this charge, the University-wide Assessment Coordinator, in conjunction with the University-wide Assessment Steering Committee, prepares an annual report providing an overview of student academic achievement during the academic year (UNL Assessment Plan, 1996, pp. 27-28).

From the information provided in the 1997 University-wide Annual and Interim Assessment reports, the committee shaped several recommendations which were to guide actions during the 1998-99 academic year. Those recommendations included:

  1. That the university coordinate its institution-wide surveys and consolidate resources behind those efforts that give us useful outcomes information;
  2. That the Office of the Senior Vice Chancellor for Academic Affairs continue to support and refine assessment activities regarding the Comprehensive Education Program;
  3. That colleges continue to develop and refine their assessment activities for the undergraduate programs;
  4. That colleges, aided by the Dean of Graduate Studies, begin to assess outcomes in their graduate programs;
  5. That all levels of the institution focus on the analysis of existing data and the production of assessment documents that will be shared;
  6. That the University-wide Assessment Steering Committee develop a communications strategy in order to provide useful assessment information to university faculty and administration.

This annual report summarizes the results and conclusions of activities conducted to address these recommendations (Section I - 1998-99 University-wide Assessment Activities). In addition, this report continues the tradition of last year's report by broadly reviewing issues and ideas raised in the college and departmental annual reports and illustrating ways in which assessment has informed program improvement (Section II - University-Wide Assessment Issues, Ideas, and Effects).

Because of the cycle used for colleges and departments to report on their annual assessment activities, it is necessary to clarify the currency of information contained in this report. The information used to develop Section I of this report came from activities conducted during the 1998-99 academic year by the University-wide Assessment Committee. The information used for Section II of this report came from college and departmental activities conducted during the 1997-98 academic year and reported during the fall of 1998.

Information from this report has shaped the following recommendations, which the University-wide Assessment Coordinator in conjunction with the University-wide Assessment Steering Committee will use as a guide for their actions during the 1999-2000 academic year:

Recommendation 1: Continue advising on the integration of assessment into UNL's program planning and budgeting process, which includes Academic Program Review and discipline-specific External Accreditation;

Recommendation 2: Consider the results of faculty conversations centered on the Comprehensive Education Program within the Peer Review of Teaching and Assessment Project;

Recommendation 3: Continue to facilitate the development and annual reporting of college, departmental, and program assessment activities;

Recommendation 4: Discuss and coordinate the collection of information about University-wide activities of a specific focus (e.g., Capstone Courses, Faculty Attitudes/Awareness, Availability of Core Data);

Recommendation 5: Initiate the proposed timeline for implementing assessment of graduate student learning outcomes;

Recommendation 6: Continue to support the analysis of existing data and the production of assessment documents that will be shared;

Recommendation 7: Track the implementation of the communications strategy developed to provide useful assessment information to university faculty and administration.

Section I - University-wide Assessment Committee Activities

The University-wide Assessment Steering Committee was created to facilitate feedback and coordination among the various aspects of assessment. It has been instrumental in the evolution of assessment on the campus. Many of its members are assistant or associate deans who have both curriculum and assessment responsibilities within their respective colleges. During the 1998-99 academic year the committee, with the assistance of the University-wide Assessment Coordinator, has:

Activity 1: Advised on the integration of assessment into UNL's program planning and budgeting process, which includes Academic Program Review and discipline-specific External Accreditation;

Activity 2: Considered the assessment results of student outcomes for the Comprehensive Education Program and modifications of the plan;

Activity 3: Facilitated the development and annual reporting of college, departmental, and program assessment activities;

Activity 4: Coordinated the collection of surveys institution-wide;

Activity 5: Developed a timeline that encourages the assessment of graduate student learning outcomes as faculty develop expertise in assessment;

Activity 6: Supported the analysis of existing data and the production of assessment documents that will be shared;

Activity 7: Developed a communications strategy in order to provide useful assessment information to university faculty and administration.

The first section of the University-wide Annual Assessment Report provides a summary of results and conclusions from the aforementioned processes and activities undertaken by the University-wide Assessment Steering Committee during the 1998-99 academic year.

Activity 1: Integrating assessment into UNL's program planning and budgeting process

Academic Program Reviews (APR), External Accreditations, and Mid-Cycle Assessment Reviews are the three processes used to integrate assessment into UNL's program planning and budgeting. The functioning of each process during the 1998-99 academic year is discussed below.

A. Academic Program Reviews

Academic program reviews provide the primary program and departmental evaluation information that may be used for reallocation of resources, program changes, and possible redirection of program efforts. The academic program review process has been in place since 1976 and requires a review of most departments on a five- to six-year cycle. The review involves a self-study and an external review team visit and report. Although the academic program review procedures were changed to incorporate assessment activities, the first self-study reports to include this information were filed in spring 1997. This delay allowed the programs to implement the assessment plans which they had designed during the 1995-96 academic year.

Outcomes assessment material is now included in all self-study reports developed for academic program reviews. Some review teams have utilized this information, and others have not. For instance, the following comments were made in a 1998-99 review team report:

The department has developed a variety of procedures which are invaluable in assessing or evaluating the curriculum, the instructional profile, faculty effectiveness, and student performance.... The content and intent of (their assessment) procedures is basically consistent with similar procedures evidenced at other universities throughout the country. From all indications the faculty is directly or indirectly involved in establishing and implementing the procedures along with the administration of the Department, the College, and the University. The outcomes assessment document... is valuable in helping the Department focus its attention on specific areas and to examine the relationship between or impact that one area has on another.

In other instances, the review team offers neither comments on the usefulness of assessment nor suggestions for its improvement. Recognizing that assessment is sufficiently new that (in some instances) the results may not be particularly informative, a review team may choose not to comment. Another view, however, would suggest that the lack of comment might reflect that the objective of program review and the objective of the assessment activities were not coterminous.

Academic program reviews are intended to ensure the quality of academic programming, both instructional and non-instructional. The review team is to consider the environment at the University of Nebraska-Lincoln in addition to evaluating the program quality. Although assessment of student learning outcomes is one factor to consider in evaluating program quality, other issues affecting program quality may be paramount in some program reviews. When that occurs, a review team may offer no comment or suggestion on student learning outcomes assessment.

We have attempted to deal with this coterminous relationship by developing the mid-cycle assessment review, which occurs two to three years before the program review. See the additional discussion of Mid-Cycle Assessment Review in this section.

B. External Accreditations

External Accreditations, similar to academic program reviews, provide program and departmental evaluation information. Unlike the academic program reviews, however, the cycles for external accreditation reviews vary from five to ten years and the university does not control the accrediting standards. Consequently, during the 1997-98 academic year, a systematic review of the major accreditation standards was undertaken to determine whether assessment of student learning outcomes was required. Sufficient similarity was found to suggest that relying on external accreditation self-studies to provide this information would be appropriate. The length of the accreditation cycle was addressed by requiring reviews of assessment activities at the approximate mid-point of the accreditation cycle. See the discussion of Mid-Cycle reviews in this section for further detail.

The following comments, made in a 1998-99 accreditation site visit report, highlight the consideration of outcomes assessment by one accreditation team:

Program goals reflect the mission and philosophy [of the program]. The goals however may be difficult to measure. Consideration needs to be given to fine tuning the goals so that a quantitative evaluation can be carried out.

The evaluation instruments [for the program] were reviewed by the site visitors. These verified that students were meeting curriculum-related goals and achieving the competencies needed for entry level practice.
...
Program evaluation is ongoing. A number of constituencies are involved.... [The constituents] are consulted frequently in the program evaluation process, and several examples were given to the site visitors of how the program has changed in response to this input. This latter information, however, was primarily anecdotal. It would be helpful if there were more formal documentation of a systematic process.

C. Mid-Cycle Assessment Reviews

A Mid-cycle Assessment Review process was recommended by the University-wide Assessment Committee in spring 1998 and was implemented in fall 1998. The need for this process is illustrated by a review team's report in a recent academic program review:

The department's procedures for assessing student learning in the major are remarkably thorough, but whether the benefits to the Department and its teaching mission outweigh the very obvious costs is not clear. In discussing the results of last year's assessment with the Undergraduate Chair and his committee, we were not convinced that the attempt to quantify student learning by scoring student performance in a number of inevitably subjective categories had yielded meaningful results. In contrast, the exit survey the Department administered to outgoing seniors apparently provided many helpful insights into the undergraduate experience. The committee [peer review team] suggests that the Department search for ways to simplify its assessment instruments so that the costs in faculty time are not so great, and it urges the administration to accept a range of assessment procedures that reflect realistic possibilities for measuring student outcomes.

The comments of the review team reflect the frustration of those faculty who believe that the costs of assessment outweigh its benefits. Recognizing that the program review serves multiple purposes, we chose to deal with this aspect of faculty frustration outside of the program review process by developing a mid-cycle assessment review which occurs two to three years before the program review. During the mid-cycle review, the University-wide Assessment Coordinator studies the department's assessment plan, reads its reports, and meets with the department's faculty to address issues of assessment objectives, design, and methodology. We anticipate that this process, given time, will assist faculty in developing useful and reliable assessment measures.

A second concern raised by the review team's comments is their lack of familiarity with either assessment design or university and college policies regarding assessment. Although they suggest assessment simplification and echo the faculty's frustration, they offer no guidance to the faculty as to how they might simplify their assessment and still obtain meaningful results. Their suggestion that the administration be urged to accept a range of assessment procedures points out their lack of familiarity with university and college guidelines, which already accept a broad range of assessment procedures. They also appear to be unaware that the assessment plan was designed by the faculty of the department and not imposed on them by the administration. Recognizing that few program review teams will have either interest or expertise in assessment design and that many will have only nominal familiarity with university policies, the mid-cycle review process was developed to focus on issues of assessment design. The program review team can then be expected to focus on the issue of program quality.

The preceding example illustrates several of the reasons why the University-wide Assessment Committee created the mid-cycle assessment review process. In the document which established the review process, the committee more fully explained its reasons:

The office of the Vice Chancellor for Academic Affairs has a responsibility to ensure that guidelines relating to assessment information have been followed by each unit in its academic program review. Given that the effectiveness of UNL's assessment plan hinges upon assessment being successfully integrated into the APR [academic program review] and accreditation processes, having a way to encourage and monitor this integration is needed. The following plan assumes that units have implemented assessment plans having characteristics described by the university plan and that they have documented their assessment activities in annual reports to their deans.
...
Depending upon accreditation standards and the training and interests of external review teams, there may be no mention in a team's report of the assessment efforts of the faculty, which can convey the impression that such efforts are not valued. This plan ensures that at some point in the program review process, assessment activities are focused upon. In addition, employing explicit criteria makes it clear that the university has standards that should serve as goals for programs as they develop and refine their assessment plans. Nevertheless, despite this degree of standardization, faculty are left great latitude in determining the objectives to be measured and how they measure the achievement.

With responsibility for assessment activities changing frequently, annual reports alone are unlikely to convey the broader picture of how or whether assessment is contributing to the growth of a program. Instituting a formal assessment review is intended to encourage reflection upon the cumulative effects of the assessment process. The mid-cycle review emphasizes the university's commitment to a process of outcomes assessment that provides the information necessary for formative program evaluation.

This mid-cycle review process complements the annual reporting process (begun in fall 1997) in which deans are asked to forward each department's assessment report or a summary of assessment activities (program and college level) and the self-studies for academic program reviews and accreditation. This new mid-cycle review process provides an opportunity for the University-wide Assessment Coordinator to interact with faculty responsible for the assessment process within their college or program and to offer suggestions as to how their assessment activities might be improved or to gain insights to share with other programs and colleges. A 5-year rotation schedule has been established for these reviews. Four reviews have been completed and eight reviews are expected to be conducted in 1999-2000. A revised schedule of reviews can be found in Appendix A. Appendix A also contains a brief document outlining the logistics for conducting the mid-cycle review which is distributed to departments at a meeting prior to the review.

Following the completion of the first mid-cycle reviews, the University-wide Assessment Steering Committee revised the review process to provide a complete copy of the mid-cycle review report (rather than a summary) to the college dean. This change provided the same level of confidentiality as in an academic program review but gave the dean more information on assessment activities within a department. A second modification to the process was made at the conclusion of the fourth mid-cycle report to provide the department an opportunity to respond to the mid-cycle report. This response enabled departments to correct or clarify information contained in the report and to comment on suggestions offered by the University-wide Assessment Coordinator.

To illustrate the tone of these reports and the nature of the suggestions offered in them, a few excerpts from a recent mid-cycle report follow:

The approach of the department in outlining objectives for those courses which are core to the discipline is an important and appropriate start for measuring student learning in the major. The initiation by the department to inform all ... instructors of the learning objectives for the courses is a very productive step in standardizing what students should achieve once they have completed the course.
...
Although most of your learning objectives appear clear and measurable, discussion with your faculty indicated that there are a few objectives that do not directly state the goal of student learning. For example, the learning objectives for ... indicate that students should have the knowledge and ability to engage in the historical and contemporary debates in the discipline and express ideas orally. These two objectives relate to goals for student learning, but do not provide the direct expectation that would make measurement of them easier. To achieve a clearer statement of the learning expected, these two particular objectives might be changed to say students will have the knowledge and ability to effectively debate an historical or contemporary issue in the discipline and effectively express an idea orally so that others gain insights into the issue.
...
... I commend your department for selecting direct measures that capitalize on information produced in everyday course work of key courses. These intact products help eliminate the burden on students and faculty to produce additional measures for assessment purposes. However, there are issues that should be considered given that your assessment is based on individualized artifacts from courses with different instructors making judgments about students' achievement of the objectives.

In the future, the mid-cycle review will continue to be developed into a forum for faculty to discuss the usefulness of their assessment plans and results, how assessment can address their issues of interest in student learning, and ways in which their assessment efforts can be facilitated by various campus resources. The hope for this review is that it will provide programs feedback needed to implement meaningful, useful, and efficient plans for assessment. Results from these assessment plans can then be used at the time of Academic Program Review to document program strengths while at the same time gaining information about where to concentrate limited resources effectively.

Activity 2: Assessment of the Comprehensive Education Program - Results & Refinement

The Comprehensive Education Program has neither a department-based nor a college-based assessment program because it is a university-wide program of general education to which neither academic program reviews nor accreditation apply.

A. 1997-98 Comprehensive Education Program Assessment Activities

A wide group of faculty was consulted in designing the initial assessment plan for the Program, including a faculty steering committee and members of the Academy of Distinguished Teachers; a professional consultant and the Bureau of Sociological Research also contributed. Past assessment plan activities included portfolio assessment, faculty surveys, and student surveys. Activities conducted during the 1997-98 academic year included a student survey, an alumni survey, and a curriculum review. Results from these three activities can be found in the 1998 Comprehensive Education Program Assessment Report in Appendix B.

The Bureau of Sociological Research will continue to administer the student and alumni surveys and the former Comprehensive Education Program Coordinator has been retained to continue to analyze and prepare reports based on the survey results. Future reports will be disseminated as part of the University-wide Assessment Report.

B. Modifications to the Existing CEP Assessment Plan

Three principal modifications were made to the assessment of the Comprehensive Education Program:

  • Enhanced involvement of departments and colleges in the assessment of Comprehensive Education Program
  • Development of faculty leadership and direct measures of student learning outcomes by connecting assessment with the Peer Review of Teaching Project
  • Elimination of the original portfolio project

These modifications of the University Assessment Plan were made in recognition of the need for greater faculty involvement in the assessment of the Comprehensive Education Program. Perhaps this was best said by Professor Bergstrom, CEP Coordinator, in the Introduction to the 1998 Comprehensive Education Assessment Report:

The 1997-98 academic year was a time of transition for assessment, both of the Comprehensive Education Program (CEP) and of the programs across the UNL campus. The intense concentration on assessment prior to the 1997 North Central Accreditation review was followed by the slow, difficult work of building or reinforcing an infrastructure to ensure ongoing and useful assessment at the college and department levels. Those responsible for CEP assessment wrestled with the growing realization that the Office of the Vice Chancellor for Academic Affairs was not the platform from which student learning outcomes attributable to UNL's new general education could be measured. Such assessment must occur at the site of instruction and learning, and it must be integrated with other assessment efforts in departments and colleges. Throughout the year the CEP Assessment Team addressed three issues:

  • How could those responsible for the maintenance of the Comprehensive Education Program help departments and colleges merge CEP assessment with their other assessment efforts, avoiding unnecessary duplication and resulting in educational benefits to students and faculty in the units, as well as pertinent information about a university-wide program?
  • What assessment standards pertinent to CEP were available or could be developed to be employed by the wide variety of undergraduate programs at UNL?
  • What assessment activities were appropriate at the Academic Affairs level?

To reformulate a large-scale assessment effort begun only a year earlier is slow, complex, and sometimes frustrating work. Some avenues explored by the Assessment Team proved to be dead-ends; others yielded only minimal results. Nonetheless, the year's efforts have resulted in the gathering of very useful information and a level of institutionalization for CEP assessment that one would have hardly thought possible in November of 1997.

The following three summaries explain how the assessment plan was modified to create greater faculty involvement in the process and address the identified issues of:

  • Integrating the assessment of Comprehensive Education Program into departments,
  • Developing assessment standards pertinent to Comprehensive Education Program,
  • Defining assessment activities appropriate at the Academic Affairs level.

1. Integrating the assessment of Comprehensive Education into departments and colleges

A study by the University-wide Assessment Steering Committee indicated that the learning objectives of Integrative Studies parallel the learning objectives of degree programs and majors. At the same time, the committee recognized that disciplinary differences exist in critical thinking and communication styles. The curriculum review had also shown that students were completing approximately five Integrative Studies courses at the 100- and 200-level, with the remaining five courses at the upper division (e.g., in major and minor areas of study). Building on these findings, the University-wide Assessment Steering Committee and the University Curriculum Committee worked together to encourage students to continue this natural spread of Integrative Studies courses. This action was intended to strengthen the ownership of Integrative Studies courses within departments and to place greater responsibility for student learning outcomes with the faculty of those departments.

This action was also supportive of the Academic Program Review Guidelines (revised 1995) which specifically required the department's self study to include goals and plans for assessing the Comprehensive Education Program. The criteria for mid-cycle assessment reviews built on these guidelines by defining the scope of the assessment plan which would be reviewed as follows:

The plan includes assessment of student learning outcomes in ... the Comprehensive Education Program: a) the program's unique contribution to CEP program for non-majors and b) progress of the program's own majors toward CEP objectives. These two aspects of CEP will not be equally relevant for all units, and assessment plans should reflect an emphasis appropriate to the discipline.

The need for integrating the assessment of the Comprehensive Education Program into the departments and programs is further illustrated by the portfolio project. The portfolio project (which was intended to provide a direct measure of Integrative Studies learning objectives) failed because of a lack of student commitment and the lack of consistent material. Placing the primary responsibility for the assessment of Integrative Studies objectives within colleges and programs should address both these issues. For instance, students building the portfolios or participating in capstone courses in which portfolios may be developed are actively involved in the assessment process and more likely to be committed. Similarly, the portfolios built by students within a common discipline are more likely to achieve the consistency of material needed for assessment.

The next step in the integration of Comprehensive Education Program assessment was to create opportunities which would encourage faculty to utilize student portfolios or other direct measures of student learning outcomes for the purpose of programmatic improvement. At the same time, the importance of a supportive climate was recognized if these portfolios or other direct measures were to be viewed more widely (e.g., outside the specific discipline). To aid in this process, we looked to the existing Peer Review of Teaching Project, a process which had campus-wide support and was faculty driven. An initial effort was made to revise the portfolio project by reviewing classroom artifacts gathered for the Peer Review of Teaching Project, but the nature of the available material again proved unsatisfactory for reasons similar to those noted in the original portfolio project: artifact collection must be better focused (i.e., on a single aspect of learning) and more localized (i.e., aimed at students doing the same individual assignments). At the same time, the stated objectives of the Peer Review of Teaching Project suggested that this approach warranted further exploration.

2. Utilizing Peer Review in assessment of the Comprehensive Education Program

To redesign assessment of the Comprehensive Education Program so that it enhances faculty involvement and focuses on the learning environment created within classrooms, Professor Daniel Bernstein, who had a major leadership role in the Peer Review of Teaching Project, was asked to develop a proposal that would build the assessment of the Comprehensive Education Program on what we had learned from the nationally recognized Peer Review of Teaching Project. The Senior Vice Chancellor supported this proposal, and the university agreed to provide funding. In spring 1999, matching funds were provided by a two-year grant from the Hewlett Foundation.

The proposal which Professor Bernstein developed builds upon and improves:

1. The original Peer Review of Teaching Project by:

  • Building and supporting department-based working teams of faculty who will examine their teaching and the resulting student learning outcomes,
  • Creating long-term faculty relationships within departments and across departments and colleges,
  • Clarifying the role of the department, and
  • Broadening the scope from improvement of student learning outcomes in individual classes to improvements in outcomes from courses of study and programs.

2. The original assessment methods for the Comprehensive Education Program by:

  • Developing faculty leadership for formative assessment,
  • Focusing on the learning environment faculty create in their classrooms and connecting that environment with student learning outcomes, and
  • Engaging faculty in the collection and evaluation of portfolios of student work.

Although the original Peer Review of Teaching model built and supported working teams of faculty and created long-term faculty relationships which enabled meaningful discussions to occur, the assessment activities associated with the Comprehensive Education Program failed to develop similar faculty leadership. Incorporating what we have learned from the Peer Review of Teaching Project into our assessment is developing faculty leadership for formative assessment. In addition, because one goal of the Peer Review of Teaching Project was to direct the focus of teaching evaluation toward measures that include the assessment of student performance and of faculty practices that improve student performance, some faculty leadership already exists and will be helpful in widening the discussion of assessment. Although the proposal builds upon the two original projects, it also differs from and improves upon them by being a departmental project and by focusing on formative assessment.

The original Peer Review of Teaching Project had worked with teams composed of two faculty members scattered across the campus. Although this more quickly disseminated information on the Peer Review of Teaching Project across the campus, the department was not asked to take responsibility. This new proposal asks that departments take collective responsibility for the quality of teaching; what is proposed is a departmental project - not an individual team project. It involves departments in the establishment of learning objectives and utilizing the data gathered from the courses to evaluate student learning outcomes based on these objectives. Departmental teams, typically of four faculty members, will be sufficiently large to represent differing departmental perspectives. Formal internal peer review of student performance will be at the heart of the department's internal consultations. We believe this approach is better because it recognizes the importance of broad-based departmental support for any curricular or pedagogical change.

In the spring of 1999, 2000 and 2001, four to five departments with either significant Essential Studies teaching responsibilities or substantial Integrative Studies offerings and campus visibility will be identified and asked to participate. Each fall, these departmental teams will begin working together to examine their core disciplinary and service offerings (i.e., three to five courses), consider the teaching practices used in delivering these courses and evaluate how successful students are in developing intellectual skills and understanding content. In the summer following, a one-week seminar will allow faculty teams to report their work and conclusions and discuss teaching issues of general interest. During the next academic year, the department teams will continue their activities.

The work of these teams will connect directly with the objectives of the Comprehensive Education Program because Integrative Studies are offered within each of the departments. Essential Studies and Integrative Studies will be among the courses identified for review and evaluation. Their work will provide better assessment of the Comprehensive Education Program because faculty will evaluate the learning environment they are creating in their classrooms and how that environment affects student learning outcomes. For instance, recognizing that parallels exist between Integrative Studies learning objectives and the broadly stated intellectual skills the various colleges and departments were seeking to develop in their students, faculty discussions may clarify differences in what they mean when they use terms such as critical thinking in an Integrative Studies course. In addition, the long-term relationships created through the peer review process will enable faculty to discuss what student learning outcomes faculty outside their discipline expect from the service courses they offer.

3. Defining Activities Appropriate at the Academic Affairs level

At the present time, the Office of Academic Affairs has concentrated its efforts in two areas:

  • Assisting in the development of assessment expertise, and
  • Development of core data

Several initiatives undertaken to develop assessment expertise have already been discussed, but a brief review may be helpful. The University-wide Assessment Coordinator position was modified to require greater expertise in assessment. The Teaching and Learning Center is developing expertise in the assessment of student learning outcomes. And assessment expertise among faculty is being developed through mid-cycle assessment reviews and the Peer Review of Teaching and Assessment Project.

Core data is being developed through surveys which continue to be supported by this office. In addition, the survey study conducted by the University Assessment Steering Committee and the Peer Review of Teaching and Assessment Project provide other forms of core data.

Activity 3: Development and annual reporting of college, departmental, and program assessment

A. Annual Dean's Assessment Reports

In response to the NCA Team's comment that some colleges and programs were significantly further along in the implementation of assessment programs than others, the University-wide Assessment Steering Committee developed an on-going process for the colleges to communicate their development of assessment activities. This annual reporting process was begun in fall 1997. In January 1998, guidelines were given to the deans as to the types of information they should be collecting from their programs, and alternatives were suggested for constructing the college's annual report. Although the specific format of reports was not standardized, deans were asked to provide the following information in future annual reports:

  • Detailed annual reports of college-level (if applicable) and department-level assessment committees which were expected to include program level objectives for student learning, a statement of how responsibility for assessment was assigned and results communicated, results of data collection and interpretation of results, and action proposed or to be taken as a result of the assessment information;
  • A brief commentary offering the dean's perspective on how assessment is proceeding in the college.

Deans were asked either to send copies of each annual report or to summarize the annual reports and retain file copies of the detailed reports for reference by the University-wide Assessment Coordinator in preparing for mid-cycle reviews or academic program reviews. In fall 1998, deans were sent a reminder with a copy to the appropriate member of the University-wide Assessment Steering Committee. Information from these annual reports is used in preparing the University-wide Assessment Report.

In December 1998, the libraries and nine of the ten colleges in the university filed timely reports of their activities during the 1997-98 year. After several contacts, the remaining college filed its report in April 1999. These reports reflect that faculty are taking assessment seriously, that they are improving their assessment plans, and that they are beginning to obtain results that they can utilize for programmatic improvement. Although some faculty do not see a clear need for assessment or any benefit derived from it, faculty in the majority of colleges and programs are making progress in the assessment of student learning outcomes. Greater detail and additional information from these reports can be found in Section II of this report.

B. Special Project: Review of Outcomes Assessment Efforts at Other Research I & II Universities

A review of outcomes assessment efforts at Research I & II universities was conducted to obtain a benchmark regarding the structure and organization of outcomes assessment programs at large universities and the types of assessment data being collected at the institution, college, and department levels. The results of the study verified that UNL's outcomes assessment program is fairly similar in structure and organization to those of other large universities. The review was unable, however, to uncover differences in the types of data being collected at the different levels of the institution.

The results of the review will provide a resource for future networking. Information gathered in this review can be used to identify ideas and expertise at peer institutions, and this information can be tapped when an example of assessment processes is needed to respond to institutional or faculty issues. A copy of the report resulting from the review can be found in Appendix C.

Activity 4: Collection of surveys institution-wide

An audit of surveys currently conducted by all units at UNL was undertaken by the University-wide Assessment Committee and the University-wide Assessment Coordinator. Members of the University-wide Assessment Committee were asked to collect copies of all surveys used in their college, along with some details of the surveys' administration (a copy of the request to the committee and the accompanying cover sheet questionnaire are included in Appendix D). This survey audit was intended to serve several purposes:

  • evaluate redundancy among surveys that are currently implemented
  • identify areas in which activities could be coordinated
  • identify sources of information having potential use in institutional assessment
  • evaluate whether gaps exist that should be addressed at the university level
  • create a bank of material that may be a resource for departments seeking ideas and examples of instruments

All surveys were forwarded to the University-wide Assessment Coordinator, who created a database of information from the approximately 50 surveys received that related to student campus experiences. A list of surveys by college/unit can be found in Appendix D. Besides general information from the cover sheets, the database contains information concerning the topics covered by the questions on each survey. These are loosely organized into themes in Appendix D, which also contains the frequency of occurrence of each category of question (the numbers in parentheses).

The survey audit answered the University-wide Assessment Committee's immediate concerns. First, there is little evidence that groups are being asked to complete so many surveys that they are likely to experience "survey fatigue." There were, however, a few instances in which a college could probably benefit from coordinating its programs' surveys, especially if programs were allowed to append a section in which information such as program-specific skill development was addressed.

The audit has also proven successful in building a bank of material for those interested in developing survey instruments for department or college use. It has already been useful to the University-wide Assessment Coordinator in providing access to copies of many instruments that have not been included in past assessment reports.

In addition, the results of the survey audit began to identify some sources of information that may be useful in institutional assessment (e.g., questions regarding participation in co-curricular activities and job experiences). However, additional information is needed to establish what information would be useful and what gaps exist. The results of this survey audit may therefore need to be built upon to respond to these issues of interest.

Activity 5: Timeline for implementing assessment of graduate student learning outcomes

The University-wide Assessment Steering Committee has asked faculty in the colleges and departments to focus their assessment efforts initially on their undergraduate majors. Once a program has its undergraduate assessment running smoothly, it will be asked to turn its attention to the assessment of its graduate program.

To facilitate this plan for implementing graduate assessment, the Associate Dean of Graduate Studies was invited to sit on the University-wide Assessment Steering Committee. This action was taken to improve communication between the Office of the Dean of Graduate Studies and the University-wide Assessment Steering Committee. In addition to committee representation, the Office of Graduate Studies will be notified by the University-wide Assessment Coordinator once a program's mid-cycle review suggests that its implementation of undergraduate assessment is well underway and it is ready to undertake assessment of its graduate program.

This cycle provides faculty the time to implement a solid and useful undergraduate assessment plan as well as develop faculty expertise before they are asked to expand outcomes assessment to their graduate programs. In addition, sharing information about assessment processes and progress with the Dean of Graduate Studies through the University Assessment Committee and Coordinator facilitates their support of departments when the departments are ready to address assessment in their graduate programs.

The University-wide Assessment Steering Committee agrees that the model for assessment of graduate programs may need to differ from the model used for undergraduate assessment. The education and assessment of students in graduate programs is very different from that in undergraduate programs. Therefore, when undergraduate assessment appears to be well underway across UNL, the committee, along with the Office of Graduate Studies, will consider how the University's current processes for engaging undergraduate programs in outcomes assessment will be similar or different for graduate programs. This consideration is necessary so that the processes outlined in UNL's assessment plan will address the unique context of graduate education.

Activity 6: Analysis of existing data and the production of shared assessment documents

Departments' use of existing data for outcomes assessment and the sharing of positive examples of assessment plans and methods will be facilitated by the University-wide Assessment Coordinator. To achieve these two aims the Coordinator will rely primarily on the process of mid-cycle review. There are two roles the coordinator will play in the mid-cycle review. One role is as a consultant, with expertise in assessment, to assist departments in developing assessment strategies that are efficient and provide useful information about student learning. The second role is as a purveyor of positive and useful ideas for addressing outcomes assessment that exist in other departments and may relate to the needs and issues in a particular department.

Activity 7: Communications strategy for providing useful assessment information to university faculty and administration

Three avenues will be used to communicate assessment issues and ideas to university faculty and administration. The first source, as alluded to in previous discussions, is the mid-cycle review. This review provides the University-wide Assessment Coordinator the opportunity to discuss a department's assessment plans and progress in detail, so that its needs and issues can be addressed by sharing ideas that have worked in other departments. The mid-cycle review is also an opportunity to communicate new strategies for assessing student learning.

The second source is workshops, consultations with individual departments, and newsletters from the Teaching and Learning Center, which serves the entire campus community in meeting various needs related to teaching and learning. For example, the Teaching and Learning Center conducted a "Rubric Design" workshop during the 1998-99 academic year for the College of Architecture. As the mid-cycle review process reaches more departments, the Teaching and Learning Center's support for individual departments will expand accordingly, and its resources will be used to address campus-wide issues regarding assessment and its integration with teaching and learning.

The third source is the website that has been developed to disseminate more broadly the assessment activities on the campus: http://www.unl.edu/svcaa/Resources/Assess/index.html. On that site are the University-wide Assessment Reports and condensed versions of the two Comprehensive Education Program Assessment Reports. To encourage communication, members of the University-wide Assessment Steering Committee are listed on the site.

Section II - University-Wide Assessment Issues, Ideas, and Effects

Various issues and ideas for conducting assessment of student learning outcomes, as well as effects of that assessment, arose from this year's annual college assessment reports and the four mid-cycle reviews. The college and departmental plans for assessing student outcomes are evolving and maturing; continuing that progress, however, requires guidance about where assessment should go and how to get there, and this report attempts to provide that guidance. The purpose of this section is to serve as a focus for next year's assessment initiatives. In addition to the discussion of issues and ideas, the effects of the assessment of student learning provide an indication of how the University as a whole is progressing on the assessment of student learning outcomes. Part I of this section contains a discussion of Issues, Part II gives Ideas of how departments or colleges have dealt with the issues, and Part III reviews the Effects of assessment.

Part I. Issues in the Assessment of Student Learning

As colleges and departments design and implement their assessment plans, they may face the following issues. Not all issues apply to all programs, but those that do can serve as areas of concentration for future assessment activities for the college or its departments.

Improving Outlined Learning Objectives/Outcomes

Good objectives for student learning focus on what the student will achieve rather than what the program/instruction will achieve. However, it is sometimes difficult to distinguish between the two. Because of this difficulty, several issues regarding the development of learning objectives are discussed below.

Clarification of the Expectations for Learning Objectives

Program/instructional objectives are often written with a level of specificity appropriate for general goals of student learning. However, further development of these learning objectives is needed to improve their measurability. Discussions that clarify and expand the general learning objectives into a set of concrete, more detailed learning objectives and performance criteria would support more meaningful measurement of those objectives. This suggests the need to develop expectations for each learning objective. In other words, for each learning objective the question should be asked, "What must the learner be able to do to illustrate mastery of this learning objective?" What is the outcome if a student masters a learning objective? The key to these questions is focusing our assessment activities on what the learner should demonstrate as a result of the instruction rather than stopping at a description of the instructional intent.

For example, a commonly identified learning objective in departmental assessment plans is for students to "be able to clearly express their ideas orally." This states the intent of the instruction, but how does the student demonstrate mastery of this learning objective? Will the student demonstrate it through conversations with other students, an oral presentation, or an oral interview/defense? What type of ideas will they be expected to express? And what differentiates a clear expression from an unclear one?

The questions highlighted in this example indicate that it is not only necessary to specify the student performance that represents mastery of the learning objective; it may also be necessary to specify additional information, such as the conditions under which that performance should occur and the acceptable level of student performance. Note that the level of acceptable performance in some cases could be "marginal performance" or it could be "perfect performance," depending on the faculty's expectations for students at the time of assessment. Students' achievement of some learning objectives may well represent "marginal performance" when they graduate, with the expectation that their performance will improve with additional practice and experience when using that skill in their future roles.
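
To make this concrete, here is one hypothetical restatement of the oral-expression objective above (the performance, conditions, and criterion are invented for illustration and are not drawn from any department's plan):

  • Objective: Students will be able to clearly express disciplinary ideas orally.
  • Performance: Each student delivers a 15-minute presentation on a topic in the major and responds to audience questions.
  • Conditions: The presentation is given in the senior capstone course before a faculty panel.
  • Criterion: The student earns at least "acceptable" ratings on organization, accuracy of content, and clarity of delivery, as defined by a department rubric.

Stated this way, the objective specifies the performance that demonstrates mastery, the conditions under which it occurs, and the level of performance that is acceptable.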

Focus of Learning Objectives

Another issue to consider, especially in those situations where there is vast disagreement among faculty, is whether it is more important to state learning objectives related to skills, processes, and dispositions rather than content areas. In some areas it is difficult to identify a canon of knowledge and, therefore, it might be more relevant to measure student achievement on something other than content learning objectives.

Scope for Learning Objectives

A third issue concerns focusing learning objectives on broad outcomes of the major and collecting and reporting information for all students rather than for individual students. The purpose of outcomes assessment is to obtain an understanding of student achievement across the program, not to focus on individual students or on individual courses and their instructors. Exceptions occur when an individual course is the cornerstone of a program or entails integration and application of all the knowledge and skills acquired by a student in the program (e.g., a senior capstone course).

  • See "Broad Generalizations about Student Achievement" and "Capstone Course Assessment" in the Part II - Ideas section for possible solutions.

Linking Learning Objectives to Measurements

A simple way to lay out clearly how each learning objective is to be measured is to develop a table or chart linking measurements to objectives; a hypothetical example follows the cross-reference below. Without such a mapping, discrepancies can arise between what the learning objectives are and what is actually being measured. This one-to-one correspondence will provide a better idea of whether a department's plan for assessment is on track, and it is a source from which a long-term plan of implementation can be developed (see "Establishing Feasible Plans of Implementation").

  • See "Linking Learning Objectives" in the Part II-Ideas section for possible solutions.
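
As a hypothetical illustration (the objectives and measures below are invented for this example, not taken from any department's plan), such a table might take the following form:

     Learning Objective                           Measurement(s)
     1. Clearly express ideas orally              Capstone presentation, scored with a department rubric
     2. Apply core methods of the discipline      Embedded questions on final exams in two required courses
     3. Critically evaluate published research    Senior thesis literature review; exit interview

A gap in the right-hand column immediately reveals an objective that is not being measured, and a measurement with no corresponding objective reveals data being collected without a clear purpose.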

Striving for Value and Usefulness

The particular interests of faculty in student learning may have been overlooked in the haste to develop and implement comprehensive plans for assessing student learning outcomes. Therefore, it is important for departments to reflect on their assessment plans and implementation. Questions to ask include:

  • Does this plan or the assessment data answer questions about student learning that are of particular interest to the faculty?
  • Are current issues regarding our teaching, the curriculum, and their relationships with student learning being addressed in our assessment plans or represented by our assessment data?
  • Is the data from our assessment plan valuable and useful in addressing areas suggested for improvement in our last Accreditation or APR? Are we providing evidence of which learning objectives are, or are not, being achieved by our students? Are we determining how to address student achievement issues?

If assessment plans do not provide this useful information, they may need to be altered. Making those assessment activities that address questions of interest a high priority can enhance the value of the assessment results.

  • See "Value Added Studies", "College Wide Surveys", and "Capstone Courses" in the Part II-Ideas section for possible solutions.

Revising and Revisiting Plans

In the formative stages of assessing student outcomes, it is difficult to determine when to revisit and revise an assessment plan. The preceding section suggests the need for reflection on assessment plans. The prime indicators of this need arise when:

  1. Implementation is not occurring because the original plans were not feasible or practical,
  2. Implemented plans exist, but they were treated as an exercise to complete rather than as a source of information for discovering more about student learning in the program, or
  3. Improvements based on the assessment data are not occurring because the faculty do not value the data collected or do not believe that it accurately represents student learning.

Even when assessment plans have become established, it is likely that a plan will need to change over time as the program changes in response to assessment results, revisions to the major, or departmental reorganizations. Plans that are periodically revisited and reviewed will evolve into better plans that match the changing goals of the program.

  • See "Revision of Assessment Plans and Activities" in the Part II-Ideas section for possible solutions.

Use of Indicators of Student Learning

Examples of indicators of student learning include course or assignment grades, percentages of students passing an exam, or percentages of students employed after graduation. Although this type of data can provide some information about the success of a program, it usually provides very little of the information needed to determine how to improve student learning.

For example, let's assume that students' achievement of learning objectives was judged from the grades in a particular course. The results indicate that 10% of the students received an A, 40% received a B, 35% received a C, and 15% received D's and F's. What do these results tell us about student achievement? Do they tell us what differentiates the learning of an 'A' student from a 'B' student? Do they tell us how to help the 'C' students improve their learning? Do they describe where our 'D' and 'F' students are deficient in their learning?

Although grades as indicators may appear to provide a clear-cut representation of our students' achievement, they usually provide very little information about what our students have and have not learned or how we might be able to improve learning. There are circumstances in which indicators can provide the kind of detail needed to inform program improvement, but this usually calls for further specification of the meaning behind the indicators. For example, it is possible for grades to provide more detailed information on student learning, as the Mathematics and Statistics department discovered.

  • See "Evaluation of Course Learning Objectives" in the Part II-Ideas section for possible solutions.

Faculty & Student Communication

Two groups with whom it is helpful to share the progress and results of assessment are the faculty in the department and the students in its programs. Their feedback can be invaluable.

It is important for all departmental faculty to feel ownership of the assessment efforts. Consulting faculty about the results and giving them a chance to provide input can yield important insights into the meaning of assessment results for teaching and learning. If this type of regular and periodic communication with the faculty is not occurring, efforts to facilitate the involvement of all faculty are needed. Their involvement helps ensure that the information from assessments is used to make legitimate improvements and that the assessments address issues that are of interest to faculty.

  • See "Faculty Involvement" in the Part II-Ideas section for possible solutions.

In addition, sharing the objectives for learning with students at the outset of a program or course begins to actively involve students in their own learning. This enables students to know what they are expected to achieve and, therefore, to put emphasis on acquiring identifiable knowledge and skills, not just on obtaining an acceptable grade. Students are also more likely to put forth a worthy effort on assessment if they understand why it is being done and how the program is using the information.

Establishing Feasible Plans of Implementation

Departments can benefit from implementing a cycle in which they successively focus on different pieces of the assessment plan. It is unrealistic to expect that faculty can devote enough time every year to accomplish the needed development work on every measure. A productive strategy at this stage might be to quickly try out the suggestions that are of particular interest to faculty or are fairly easy to implement (such as the use of existing course assessments or the collection of student survey data). For the more time-consuming efforts, an area of concentration can be selected for each year.

For example, if assessing critical thinking skills from an artifact in a portfolio is a priority, you might focus one year on developing a more detailed definition of critical thinking. This could be followed by an analysis of a set of papers that represents a range of student performances. After identifying the dimensions that distinguish the various performance levels, you will be in a good position to develop a detailed scoring rubric; testing it out and evaluating how reliably it can be applied would naturally follow. Based on this first year's work, you would have no results to report from the portfolio assessment of critical thinking, but you could describe the process of developing the scoring rubric. The following year that portion of the portfolio review could be implemented, results reported, and curricular implications considered. Development activities in successive years would focus on a different learning objective.
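
As a purely hypothetical sketch (the dimension and level descriptions below are invented for illustration, not taken from any departmental rubric), one dimension of such a critical thinking rubric might read:

  Dimension: Evaluation of evidence
      Level 1 (Beginning): accepts sources at face value with no appraisal of credibility
      Level 2 (Developing): notes the quality of sources but does not weigh conflicting evidence
      Level 3 (Proficient): weighs the credibility and relevance of evidence when drawing conclusions
      Level 4 (Advanced): integrates and critiques conflicting evidence in support of an original argument

Spelling out the levels in this way is what allows two faculty readers to apply the rubric to the same paper and arrive at similar ratings.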

Viewing assessment of student learning as an on-going, evolving effort assists a department in continually refining and improving its plan. A department is not expected to achieve comprehensive, high-quality assessment of student learning immediately; what is expected is continual building upon the process each year. A long-range plan can help a department concentrate on certain aspects of student learning in any given year, so that quality assessment is achieved rather than a large and unwieldy quantity of assessment information in any one year.

  • See "Long Term Implementation Plans" and "Graduation Cycles" in the Part II-Ideas section for possible solutions.

Efficient Assessment

For assessment plans to be sustained, they must be efficient. The questions that arise are:

  • Are the measurements efficient and valuable? Do they make good use of faculty and student involvement given the information extracted from them about student learning?

A solution to this problem is to identify existing course assessments already used by faculty in their classrooms or by departments. Departments may have a good deal of pre-existing assessment data that has not been formally analyzed or shared within the department or elsewhere. Once departments have a set of learning objectives, they are encouraged first to survey how they are currently measuring student learning in their classes and majors to see whether assessment of those learning objectives already exists. If it does not, they should decide in what forums that information could most easily be collected. Preexisting classroom assessments can be used as is or modified to make the collection of information about student learning more efficient and valid. One of the best places to obtain direct measures of student learning is capstone courses.

  • See "Capstone Courses" and "Integrating the Assessment of CEP" in the Part II-Ideas section for possible solutions.

Developmental Assessment

Often plans for assessment are summative rather than formative in nature; that is, they measure student learning only at the end of the program rather than throughout the program. Understanding student learning across the curriculum can provide a much better roadmap of where and when student achievement of learning objectives is and is not occurring and, if necessary, where meaningful changes could be made. A summative measure may provide a good indicator of what students look like when they graduate from a program, but it provides very little information on how they got there or what could be changed to improve achievement.

Developmental assessment can be useful in a number of ways:

  1. Provides a benchmark as to where students begin in relation to the learning objectives. Measuring student learning at the beginning, middle, and end of the major can give a more complete and accurate portrayal of learning. It also provides a mechanism for determining what and how much change has occurred in the student due to the program.
  2. Provides data for learning objectives that students acquire through several courses rather than any one course. Students may be expected to have acquired, by their senior year, certain skills developed throughout the curriculum; but if seniors have not acquired those skills, it is difficult to determine where the breakdown in learning occurred without developmental assessment of those skills.
  3. Provides richer data on student development, especially for departments that graduate a small number of majors each year. A small number of majors makes comparisons across academic years difficult because of wide variability in the majors from year to year. Instead, tracking the same group of students over time provides a more powerful indicator of improvement.

  • See "Value Added Studies" and "Assessing across the Curriculum" in the Part II-Ideas section for possible solutions.

Focus for Changes Based on Assessment Results

As colleges and departments interpret the meaning of their assessment results, needs for improvement may not always call for changes in the curriculum. Departments and their instructors may be able to address and solve learning issues by using different teaching strategies or class formats, or by structuring how students spend their time outside of class. Focusing on these types of changes, in addition to changes to the curriculum, makes available a much larger realm of possible solutions. Improving student learning may not always require increasing the number of credit hours for students or increasing the demands on faculty for new courses.

  • See "Improving Teaching, Course Assignments, and Student Experiences" in the Part II-Ideas section for possible solutions.

Where Can Colleges and Departments Get Help

Any department that wishes assistance with any of the issues previously discussed is invited and encouraged to seek the help of its college assessment representative(s), the University-wide Assessment Coordinator, or the Teaching and Learning Center. Individual consultations or group workshops can be conducted to help address these issues.

Part II. Ideas in the Assessment of Student Learning

These are ideas from which any college or department should borrow freely to meet the needs of its own situation in the assessment of student learning. This section is an "idea box" from which one can learn what others are doing and what can be adapted to one's own situation. A representative set of ideas is included, particularly ideas that address some of the issues raised above. If you have other ideas, please forward them to the University-wide Assessment Coordinator. This additional information will assist in building sources of campus expertise that can be tapped by others in the implementation of their assessment plans.

Broad Generalizations about Student Achievement

The Family and Consumer Science department used multiple sources of data to determine the strengths and weaknesses of their students, the issues underlying them, and possible solutions. The sources of data (surveys, internship evaluations, and portfolio reviews by external evaluators) were triangulated to arrive at a broad understanding of achievement in the program. Departmental suggestions for program improvement followed logically from the issues identified because the faculty's broad understanding was supported by multiple sources of data.

Capstone Course Assessment

There are at least two examples of how capstone courses are used to assess student learning in the major. These examples reside in the Agronomy and Advertising departments.

Seniors' achievement of the learning objectives in Agronomy is assessed through a study of an actual farm for the purpose of developing a complete farm plan. The students gather relevant information about the farm's operations and resources and from the farm family. With this information, they develop a complete farm management plan. To assess student achievement of the learning objectives, the farm operator then scores the student plans according to the published outcomes of the major.

For the Advertising department's senior capstone project, students develop an advertising and public relations campaign for local clients (e.g., the Lincoln Children's Museum, Urban Indian Health). Students then present the campaigns to a diverse audience, which includes faculty members from the department and college, the college dean and associate dean, and the client. Clients are then given the opportunity to critique the campaigns, and these critiques have proven to be lively and constructive.

Assessing student learning in capstone courses has been useful in providing a broad overview of student learning in the program by determining how well prerequisite courses have prepared students to integrate and synthesize all that they have learned.

Linking Learning Objectives

The process of assessing student learning in the Architecture department provides a highly structured example of linking learning objectives with assessment activities. The set of courses for the Architecture major is described in considerable detail to represent the NAAB requirements. Each learning objective is then measured as part of the accreditation process, using archives of student performance from the courses that represent a continuum from low pass to high pass. This provides a representation of the trend of student learning from accreditation year to accreditation year, showing whether learning is improving across the continuum. Where student learning appears deficient, the sources of the deficiency can be readily identified and addressed.

Value Added Studies

Two colleges, the College of Journalism and Mass Communications and the College of Business Administration, and one department, Theatre Arts, conduct value-added studies on their undergraduates.

The College of Journalism and Mass Communications conducted a study in Journalism 123 (The Media Today). They analyzed the changes in student attitudes and knowledge according to whether students are majors or non-majors and whether they are simultaneously participating in a related learning community. They collected parallel data from a similar senior-level course to serve as a control. In addition, changes in the writing abilities of freshman students are being measured using two specially designed writing assignments given at the beginning and end of the freshman course.

The College of Business Administration is conducting a quasi-experiment comparing the performance of 100 sophomores (just entering the program) and 100 seniors on the ETS Major Field Test in Business. Faculty reviewed the test and judged it to be an appropriate measure of academic achievement for their majors. The comparison is intended to allow a preliminary evaluation of the "value added" by the business curriculum. In the future, the 100 sophomores will be re-tested during their senior year to provide a more powerful indicator of the effectiveness of the CBA programs on student learning.

The Department of Theatre Arts plans to evaluate the effects of an audition/screening process that will be instituted beginning next year. They are collecting quantitative and qualitative information from this year's cohort to make future comparisons possible.

All three studies will be instrumental in determining how well new or existing aspects of these programs assist student learning, so that each unit can draw more informed conclusions about the effectiveness of its programs and make better-informed decisions about how to improve them.

College Wide Surveys

The College of Engineering and Technology has implemented a college-wide survey from which it obtained useful results for improving curriculum in its programs and for providing assessment information for next year's ABET accreditation. The survey instrument was developed by Gallup and asks seniors to rate the "importance" of, and their "preparedness" on, a variety of skills and attributes. Seniors and alumni were asked to rate skills and attributes both general to engineering and specific to their primary degree. These results represent student and alumni perceptions of which skills and attributes in their respective programs were emphasized too much and which were not emphasized enough. In a discipline in which the college curriculum carries over directly to students' post-graduate careers, this type of survey can be a useful instrument for making judgments about the utility and importance of the skills emphasized in the curriculum.

The College of Business Administration currently surveys (or plans to survey) several constituent groups: faculty, seniors, graduate students, alumni, and employers. Each survey contains similar items so that responses can be compared across the different groups, and the validity of the survey results is reinforced by these comparisons. By triangulating the responses of several groups, a much stronger case for the effectiveness of the college's programs, as well as for areas needing to be addressed, can be made to the faculty in the college and during accreditation.

Revision of Assessment Plans and Activities

Two departments have found occasion to consider modifying their assessment plan or how they implement their assessment activities based on the results of previous assessment. Each has changed its plans for very different reasons.

The History department found, after implementing their assessment plan during the 1997-98 academic year, that the results of their current assessment processes were not telling them anything new about student learning in their program. They have begun to explore other methods of assessment (e.g., alumni surveys, focus groups) to answer some of their unanswered questions of interest.

Faculty in the Advertising department found that the quality of student portfolios varied greatly and that their attempts to assess them were difficult because they had no standardized method for evaluating portfolios across different faculty. Therefore, they are considering a one-hour portfolio course that would structure the assessment process and give students better feedback. In addition, a faculty member in the department consulted with a researcher from Gallup to revise the senior exit interview so that more valid results are obtained. The department is also considering options for administering the survey, such as who should administer it and whether it should be administered as part of a required course to improve student response.

Evaluation of Course Learning Objectives

A crucial issue considered by several departments at UNL is how well student achievement of learning objectives in prerequisite courses meets the needs of subsequent courses.

The assessment results in the Mathematics and Statistics department have focused discussion on the meaning of a passing grade. The faculty have found that a passing grade does not always indicate that a student has obtained the prerequisite skills necessary to succeed in subsequent courses. There has been discussion of tying grades to the mastery of core material essential for subsequent courses, but various issues (some beyond the department's control) may inhibit the effectiveness of this solution and need to be addressed.

Student mastery of prerequisite material is also discussed and evaluated at the end of each academic year in the Architecture department. The department holds a meeting for each course that all instructors of that course are required to attend and that instructors of preceding and subsequent courses are invited to attend. These instructors discuss student achievement of the learning objectives and the sufficiency of those objectives both within the course and for the courses that precede and follow it.

Several faculty members from the Engineering Mechanics program were awarded a Teaching Council grant to develop a tool assessing the competency of students in Statics (knowledge and skills from a prerequisite that are built upon by other courses). In addition to identifying competence, the tool will provide students a method of rapidly reviewing and eliminating any deficiencies. The tool will help in assessing how much content of the Statics course students retain. This information is expected to aid faculty by identifying areas of greatest weakness so that corrective measures may be explored and introduced.

Faculty Involvement

Faculty in the Textiles, Clothing and Design department found their review of students' internship portfolios informative. The process was helpful in seeing the "full spectrum of work required of students in classes and to analyze how the course each (instructor) taught fit into the complete set of student experiences". In addition, the faculty were able to reach consensus on what their students were doing well and to identify concerns about other student abilities. These experiences have encouraged the faculty to pursue this assessment activity further by involving more students, to gather the maximum amount of information from this rich source of material. To facilitate the review of these portfolios, they decided to hold a "portfolio review day" on which the faculty spend the morning session reviewing portfolios and the afternoon session discussing outstanding features, components in need of improvement, and the impact of their evaluation on curricular offerings.

The interest and involvement of faculty can also be highlighted through several Teaching Council grants received by faculty to conduct research on student assessment. Five projects were given grants during the 1998-99 academic year. One of these grants was discussed in the previous section ("Evaluation of Course Learning Objectives") and the other four projects are highlighted below:

  • Faculty from the Animal Science department, in coordination with faculty in the Agronomy and Agricultural Economics departments, were awarded a grant to develop two web-based assessments of student learning for a new interdisciplinary major. The first assessment will be used at the beginning of the major's capstone course to assess students' knowledge of basic concepts necessary for the capstone experience. Information from this initial assessment will be used to direct student learning and inform teaching faculty of student learning from prerequisite courses. The second assessment will be conducted at the beginning and end of the capstone course to assess the effectiveness of the course in developing students' abilities to integrate, critically analyze, and synthesize information.
  • The College of Business Administration received a grant to develop a database of background information (e.g., courses taken, GPA, age) for the students involved in its "value-added" study (described earlier in "Value Added Studies"). This additional data will allow the college to conduct further analyses beyond a before-and-after comparison, to determine what factors may relate to achievement at each level (sophomore or senior) or to performance within each content area of the test. This additional exploration will enhance the college's understanding of the potential impacts on student learning.
  • Two separate groups in the Educational Psychology department were awarded grants to explore avenues for improving the assessment of student learning in the classroom. The goal of the first project is to improve the teaching and assessment of students' conceptual understanding in advanced and introductory statistics courses; in the process, the group plans to develop new forms of assessment that promote conceptual understanding. The second project plans to discover how students think about their learning experiences in the context of a specific course and then to assess the match between classroom assessments and students' cognitive levels.

Long-term Implementation Plans

In the past, a majority of the assessment efforts of Teachers College were coordinated through the Dean's office. Once those efforts had reached their full potential, a plan was instituted. In the first year (1997-98), all programs were asked to develop learning objectives, and identify measures for those objectives, for their majors and graduate programs. In the second year (1998-99), departments were asked to implement one of their measurements and report on the results. In the future, departments will focus on other learning objectives and their measurements.

The College of Arts and Sciences also implemented the plan that was adopted by the University and that serves as the expectation for all colleges and their departments: colleges and departments are asked first to develop and implement plans for the undergraduate major, then for their graduate programs, and finally for their CEP courses. In 1997-98, almost all programs in Arts and Sciences had a fully developed and implemented undergraduate plan as well as outlined learning objectives and measurements for their graduate programs.

These college directives have provided departments with a framework for developing feasible plans for implementing the assessment of student outcomes over time. Further planning by each department of its individual activities would also be helpful, as suggested in "Establishing Feasible Plans of Implementation" in the Issues section.

Graduation Cycles

To recognize the assessment progress of individual departments and to address faculty workload concerns, the College of Arts and Sciences has implemented a "graduation" plan. A program that the college assessment committee determines to have assessment of its major sufficiently developed and running smoothly moves on to a five-year implementation and reporting cycle linked to its APR schedule. The negotiated plan is agreed to in writing by a representative of the program, the college assessment committee, and the University-wide Assessment Coordinator. To even out the workload yet still obtain results needed for decision-making, plans have typically included a rotation of activities. Graduated programs still participate in mid-cycle assessment reviews, at which time a comprehensive assessment report is due (including assessment results and analyses and details of the uses to which the results have been put in improving the major). Brief interim reports are expected annually, containing only information about what activities were conducted that year, what level of participation was achieved, and how the results were disseminated within the program. In a year in which a program undergoes Academic Program Review, the self-study (incorporating assessment information) serves as the interim report. Seven programs were promoted to this plan in fall 1998.

This idea benefits the departments and colleges in several ways. First, it formally recognizes and rewards the maturation of departmental assessment plans from "in development" to "accomplished". Second, it promotes a more manageable on-going plan for assessment by individualizing the scheduling of each department's assessment activities based on what and how it is assessing student learning.

Integrating the Assessment of CEP

The Communication Studies departmental assessment plan has provided the data necessary to examine the assessment of their CEP courses. Assignments from IS courses found in the portfolios of majors are rated on the extent to which they reflect the learning objectives for IS courses; a significant number of the assignments were found to represent all of the IS goals. Although Communication Studies has not fully implemented assessment of their CEP courses, they have found a mechanism to begin surveying their coverage of learning objectives through the materials they already collect in the assessment of the major. This illustrates an efficient use of existing assessments that will make the future assessment process less arduous for the faculty in this department.

Assessing Across the Curriculum

Programs in the College of Fine and Performing Arts are distinctive in having developed assessment plans that collect information at points in the program much earlier than the senior year, which allows faculty to examine developmental questions. For example, Art History collects assessment information from students at various points in their program.

The process begins with an entry interview in which the student is advised on the skills necessary for various careers and the student's strengths and weaknesses are assessed. From these two pieces of information, a general course of study is mapped out for the student. The student's portfolio is then reviewed after the student has completed 15 hours in the major; at that time, the student's progress is assessed and problems are addressed. During the senior year the student takes a comprehensive written exam and is required to complete a research paper, which is evaluated to determine whether the student has achieved the specified knowledge and abilities.

This collection of information across the curriculum provides much richer information about student development by providing a roadmap of student learning and the different aspects of the program that had an impact on that learning.

Improving Teaching, Course Assignments, and Student Experiences

Several departmental assessment plans and activities have resulted, or will result, in suggested improvements for student learning that go beyond changes in the curriculum. Three departments in particular have found this consideration and emphasis important.

The Communication Studies departmental assessment activities are unique in that they incorporate into their analysis of student portfolios an assessment of the quality of instructor feedback. This information is then used to determine ways in which all instructors can improve student learning through feedback. For example, it was noted from a review of portfolio artifacts that student writing could be improved if all instructors were to give more feedback regarding the students' grammar and style.

Because assessment results led to a discussion of how to improve language instruction, faculty in the French program are considering standardized grading of papers in all courses, the establishment of a writing center, and the possibility of a senior-level grammar course.

The Psychology department has modified their assessment plan to incorporate the procedures and data coding system of the Peer Review of Teaching project. This process will determine not only their majors' level of mastery of core course material but also the strategies used by instructors when teaching the courses and the conceptual level of their course assessments. These data could assist the department in linking teaching strategies with success in student learning. The plan also takes into consideration the cognitive level at which students are assessed, an important factor often overlooked in assessment plans.

Part III. Effects from the Assessment of Student Learning

In reviewing the 1998 annual assessment reports, it is clear that faculty are using assessment information in program planning and improvement. The following selected examples of actions taken by specific departments (listed alphabetically) illustrate a variety of ways in which assessment information is being used. They include changes in course content or teaching methods, curricular requirements or course sequencing, and development of new courses. Some of the changes are still under consideration; they have been included because this, too, represents use of assessment information, namely to promote focused discussion of ways to improve a program. Although the examples were chosen because they focus on change, assessment results have also affirmed the effectiveness of programs. Thus faculty can use the process of assessing outcomes to document program strengths while at the same time gaining information about where student learning could be enhanced.

Advertising. The department has addressed several weaknesses noted in the assessment of students' capstone projects, which required them to develop an advertising and public relations campaign for local clients (e.g., the Lincoln Children's Museum, Urban Indian Health). Students' insufficient grasp of marketing concepts, advertising objectives and strategies, and research skills has been addressed by the faculty's concerted efforts to emphasize these concepts more in the appropriate prerequisite courses.

Communication Studies. The assessment results from the review of portfolios, instructor feedback on portfolio artifacts, and responses to a student questionnaire provided the department an overview of the strengths and weaknesses of students, instructors, and the curriculum. Faculty review of portfolios revealed many strengths among juniors and seniors; however, the faculty would like juniors to have a clearer idea of their goals and of the relationship of the major to those goals. Academic advisors will be consulted on how this objective for juniors could be facilitated. Instructor feedback regarding content was found to be good; however, the review indicated that a discussion about giving more feedback on grammar and style should be initiated with all instructors. A review of course assignment requirements in the portfolios indicated that many of the assignments do a good job of reflecting the Integrative Studies goals but that very few dealt with ethical questions and choices. These deficiencies will be addressed with all course instructors.

English. The department's development of a new major, scheduled for implementation in the 1999-2000 academic year, was prompted and informed by assessment results. Contributing factors included student responses on exit surveys regarding what they valued and what they would like to see changed in the major, as well as faculty uncertainty in the assessment of learning objectives.

Environmental Studies. The department planned a student orientation course for the fall of 1998 to help students understand the requirements, opportunities, and expectations of the program. This action resulted from dissatisfaction with the advising process expressed by students on the exit survey. Seniors' dissatisfaction with the senior thesis has also led to the creation of an evaluation rubric standardizing the objectives of the project. In addition, solutions (e.g., new courses, better communication among instructors) for dealing with various curriculum problems (e.g., frequency of course offerings, course content overlap) were identified from both the senior exit surveys and faculty ratings of students' achievement of learning objectives.

Family and Consumer Science. Three sources of assessment data (surveys, internship evaluations, and portfolio reviews by external evaluators) were triangulated to arrive at a broad understanding of achievement in the program. This understanding identified several areas in need of attention. To improve writing skills, the faculty are considering the use of more writing assignments in upper division classes and requiring APA format for all writing assignments. To improve students' understanding and critical evaluation of research, they have suggested increased class time on reading, summarizing, and assessing the quality of research articles in the field. And, to respond to student concerns about lacking some knowledge essential for professional work, the faculty will assess the curriculum to ensure that courses are delivering that essential information.

French. Based on student performance on the program's grammar test, oral proficiency test (SOPI), and open-ended questionnaire, faculty began to discuss how to improve language instruction. Suggested solutions included instituting, as a generalized teaching practice in all courses, the lowering of grades for literature papers deficient in grammar and usage; establishing a Writing Center for individual attention; and the possibility of a senior-level grammar and style course. Some combination of all three solutions will probably be used. Faculty also considered when students should show mastery of the grammatical instruction, because poor grammar habits were shown to be adopted early and to be hard to change.

History. The results of portfolio reviews and responses on exit questionnaires indicated that students found the opportunity to think in depth about a specific topic in their senior seminars particularly rewarding and a crucial turning point in improving their skills on certain objectives. This has spurred the faculty who teach large survey courses to consider ways of capturing that aspect of senior seminars by focusing the survey courses on specific themes. Several faculty members are participating in the Peer Review of Teaching and Assessment project, in which they would like to address this particular issue. The department has also taken seriously the less positive responses of students on the exit questionnaire regarding a "sense of community" among majors, and plans to promote this "sense of community" through a mentoring program and the revival of an honors society. The faculty also plan to seriously consider whether oral expression should be a learning objective for the major, based on student responses indicating a lack of confidence in this ability.

Journalism and Mass Communications. The college conducted a research study that compared students' knowledge of and interest in the gubernatorial election as well as their extent of agreement with statements about the operation and regulation of mass media. The comparison was intended to reveal differences due to whether a student was a major or non-major and whether the student was involved in a learning community, with a group of seniors serving as a control. However, surprising responses were obtained from the seniors, indicating a regression in their attitudes. This result led faculty in the college to consider the effect of the format of their senior-level course (large lecture) on the value students perceived the college placed on the course after they had been enrolled in smaller hands-on courses throughout their major.

Mathematics and Statistics. Assessment results have focused discussion in this department on the meaning of a passing grade. The faculty have found that a passing grade does not always indicate that a student has obtained the prerequisite skills necessary to succeed in subsequent courses. There has been discussion of tying grades to the mastery of core material essential for subsequent courses, but various issues (some beyond the department's control) may inhibit the effectiveness of this solution and need to be addressed.

Nutritional Science and Dietetics. Because the ADA modified the American Dietetics Registration exam to increase the emphasis on skill development and application of knowledge, and because of a lower passing rate by their majors, the faculty in the Dietetics Option modified the curriculum. These modifications focused particularly on students' lab experiences. Changes included increased emphasis on certain topics in existing lab courses, the introduction of new lab courses, and discussion about the content of lab courses offered by other departments. All three changes are geared toward helping students meet the ADA's new standards and the expectations of the profession.

Sociology. In its annual report, the department addressed the actions pursued based on its 1996-97 assessment results and made additional recommendations based on the 1997-98 results. Actions carried out included allowing only majors to enroll in the senior seminar and providing two sections of the seminar so that better instruction on writing skills might be provided. In fact, faculty ratings of students' writing skills did improve, although it was difficult to tell whether this was due to these changes or to individual differences. Nonetheless, the department plans to continue any other practical actions needed to keep the senior seminar sections small.

The department has also pursued actions to improve students' social research skills by submitting a proposal to the American Sociological Association (ASA) to participate in the ASA's Minority Opportunities through School Transformation (MOST) program, which focuses on increasing and improving research opportunities and experiences for all students. The department's pre-proposal was accepted, and the department was chosen for a site visit from the Ford Foundation to represent all Ph.D.-granting departments. Other recommendations made by the department to improve students' research skills include: getting students to take research methods courses earlier in the curriculum, providing research training and experiences throughout the curriculum, and considering the use of an assessment measure of research skills in the Senior Seminar to assess the development of students' research skills over time.

Teachers College. Many of the surveys conducted by the college (some for a number of years) have continued to be useful in informing programmatic improvements in the college's various programs. Three particular activities conducted in 1997-98 had an impact.

  1. The results of a first-year teacher survey of 1997 graduates caused faculty to evaluate current courses related to testing and assessment as well as classroom management and student discipline.
  2. Surveys of Nebraska school superintendents and principals initiated a process of incorporating knowledge about teaching and supporting reading skills in every methods course.
  3. Results of interviews with faculty and Lincoln Public School administrators regarding possible changes in the ways student teachers are evaluated led to adopting a new assessment form and implementing new portfolio assessment of student teachers.

Women's Studies. Faculty evaluation of senior papers and student responses on exit surveys indicated that majors needed and wanted a stronger foundation in Women's Studies methodology and in writing/editing skills. Possible solutions suggested by the faculty included a required Women's Studies methodology course taken before the Senior Seminar as well as an emphasis on editing throughout the curriculum.

Conclusions

The 1998 University-wide Annual Assessment report indicates that notable progress has been made by a majority of colleges and programs in their assessment of student learning outcomes and in plans to assess the Comprehensive Education Program. These efforts are evolving and maturing but have not yet reached their full potential. Although the University-wide Assessment Committee has put in place processes that we believe will create a supportive climate for assessment, we recognize that many faculty remain unconvinced that the benefits to be expected from assessment outweigh the high costs associated with it. In addition to assisting colleges and departments in addressing the issues discussed earlier in this report, the University-wide Assessment Coordinator and Committee will continue to:

  1. Encourage faculty to view outcomes assessment as a way to collect information about student learning that is of interest to them and the future success of students in their program.
  2. Assist faculty in developing efficient assessment so that there are more benefits than costs from the time spent by faculty.
  3. Promote outcomes assessment information as a tool for improving teaching and learning in a program and not as a mechanism for holding departments or individuals accountable.

Attention to these three areas of focus will continue to support assessment that is meaningful and useful, in the anticipation that the benefits of assessment will then be communicated.

University-wide Assessment Contact Information: