University-wide Assessment

1997 Annual Report

University of Nebraska–Lincoln
April 1998

Hard copies of the 1997 University-Wide Assessment Report have been sent to the deans of all UNL colleges and to the Teaching and Learning Center. Questions concerning this report or university assessment activities should be directed to the Director of Institutional Assessment.


 

Table of Contents

 

Introduction and Recommendations

Processes

Program Planning/Budgeting
Academic Program Review
External Accreditation

Comprehensive Education Program
Annual Report

Assessment Activities and Issues

Role of the University-wide Assessment Steering Committee
University-wide Activities

Career Services Report
ASTIN Survey
Student Omnibus Survey
Evaluation of the Comprehensive Education Program
Learning Objectives Analysis

College-based Activities

Setting college goals and objectives
Overseeing and guiding department assessment plans
Conducting assessment activities

Program Activities and Issues

Appropriate Focus for an Assessment Plan: Planning for Maximum Usefulness
Quality of the Data
High Level of Student Involvement
Cost-effectiveness: Restraining the Growing Demands on Faculty Time
Closing the Loop - Using Information from Assessment

Conclusions

Appendix A: Table of Contents, Interim Assessment Report

Appendix B: Highlights of College Reports

Appendix C: Strengthening the Role of Student Outcomes Assessment in Academic Program Review and Accreditation; Comparison of APR and Accreditation Assessment Standards

Appendix D: The Comprehensive Education Program 1996-97: A Condensed Report

Appendix E: Draft of Learning Objectives for the Integrative Studies Component of the Comprehensive Education Program, Spring 1997

Appendix F: Learning Objectives Analysis

Appendix G: December 1997 letters to the college deans in response to college annual assessment reports
(available from Deans Offices or Assessment Coordinator)

 


 

Introduction and Recommendations

 

The assessment of student learning outcomes at UNL takes place in a context of renewal and commitment to quality improvement in our "academic programs, educational services and campus learning environment" (UNL Assessment Plan, 1996, p. 4). Components of the drive toward quality improvement include the revision of admission standards, the creation of a common general education curriculum, and the establishment of a systematic planning and budgeting process that will be undertaken on a two-year cycle. The assessment process is directly tied into UNL's Strategic Agenda by the following strategy: "Establish department/faculty based strategies to assess student learning and achievements. Incorporate the assessment findings into UNL's biennial planning and budget review process" (UNL Assessment Plan, 1996, p. 3).

Five established processes provide focus for assessment activities within the institution. These are:

  • Program planning/budgeting;
  • Academic Program Review;
  • External Accreditation that is discipline-specific;
  • Assessment of student outcomes as they relate to the new Comprehensive Education Program;
  • Annual Report of college, departmental, and program assessment activities.

College, departmental, and program assessment activities are to be routinely and systematically incorporated into each of these established processes. The University-wide Assessment Coordinator, in conjunction with the University-wide Assessment Steering Committee, is to prepare an annual report providing an overview of student academic achievement during the academic year (UNL Assessment Plan, 1996, pp. 27-28).

An interim report was prepared in spring of 1997 to document the extent to which student outcomes assessment activities were being developed and implemented at all levels within the university. Appendix A contains the table of contents from the interim report, the full text of which was distributed to each of the colleges. Although some changes have occurred since that report was written, the majority of the information remains accurate and much of the body of that document has been integrated into this final report.

However, in describing progress at the program level, the following deviates somewhat from the format used for the interim report. Given that last spring so many programs were in the early stages of planning and implementation of their assessment plans, a cataloguing of activities seemed particularly appropriate. (An updated Highlights of College Reports, based upon college assessment reports, has been included in Appendix B.) In contrast, this final report concentrates more on what we've learned about both the assessment process and the achievement of our students. Issues raised and strategies described have been drawn from the 1997 final assessment reports of UNL programs and colleges, though the discussion is sometimes framed in general terms. The intention is to emphasize the universal nature of the issues and problems. However, a second goal is to illustrate ways in which faculty have already made use of assessment information. Because the way in which faculty interpret and respond to this information depends very much upon the characteristics of their program, specific examples are cited in this section of the discussion.

Information in both reports has shaped the following recommendations, which the University-wide Assessment Coordinator in conjunction with the University-wide Assessment Steering Committee will use as a guide for their actions during the coming year:

  1. That the university coordinate its institution-wide surveys and consolidate resources behind those efforts that give us useful outcomes information;
  2. That the Office of the Senior Vice Chancellor for Academic Affairs continue to support and refine assessment activities regarding the Comprehensive Education Program;
  3. That colleges continue to develop and refine their assessment activities for the undergraduate programs (see detailed suggestions under Department/Program Activities and Issues);
  4. That colleges, aided by the Dean of Graduate Studies, begin to assess outcomes in their graduate programs;
  5. That all levels of the institution focus on the analysis of existing data and the production of assessment documents that will be shared;
  6. That the University-wide Assessment Steering Committee develop a communications strategy in order to provide useful assessment information to university faculty and administration.

The remainder of this report is divided into two sections, Processes and Assessment Activities/Issues. The latter section is organized by level, with separate discussion of issues at the university, college, and department/program levels.

Processes

The processes discussed in this section are well established within the university and are expected to become increasingly useful in focusing assessment activities.

Program Planning/Budgeting

The Guidelines for Strategic Planning and Budgeting Process, October 9, 1996, were developed to "reallocate resources across major administrative units ... to address [the University's] highest priorities and accomplish its mission goals." Colleges, divisions, and other units were asked to prepare budgets at the 96%, 100%, and 104% levels. Once programs or positions were identified for reduction in a 96% budget, they were not to be restored in subsequent budgets. To allow decision makers to compare the priorities of existing programs or units, deans and unit administrators were asked to assess each program's strengths, weaknesses, centrality, and necessity. For programs seeking enhancement through the reallocation process, proposals were required to provide benchmarking criteria; the Guidelines suggested discussion of the evaluation plans and performance measures to be used to assess a program's success in 3, 5, and 10 years.

Academic Program Review

Most academic departments complete an internal review and write a self-study report once every five to six years; some programs are permitted to substitute their accreditation self-studies (see the discussion under Processes: External Accreditation). However, the UNL Assessment Plan (p. 16) requires that all programs provide assessment information consistent with the Academic Program Review Guidelines, Fall 1996:

What are the department's goals/plans for assessing student academic achievement within the major(s), graduate program(s), and Comprehensive Education Program? Program goals should include statements of desired student outcomes. (p.11)

Describe the department's plan for assessing student learning in the major, the graduate program, and in the Comprehensive Education Program courses delivered by the department. The department's student outcomes assessment plan should include at least two measures/indicators of learning for each program. [Examples might include standardized tests, locally developed tests, surveys of employers of graduates, capstone courses, portfolios, performance or exhibition appraisals, etc.] What evidence is there that students in the program(s) have learned the material expected or identified in the program objectives? (p.17)

The self-studies written for 1996-97 Academic Program Reviews (English and Anthropology) included detailed information about the programs' assessment plans, but little data. There were several reasons for this, primarily: a) the self-study was due before assessment activities were completed; b) assessment activities concentrated on graduating majors and the department had few majors graduating at mid-year; and c) assessment activities were newly initiated and in need of refinement before their results could be interpreted with confidence.

By the time the 1997-98 APR self-studies were written, the programs involved had been collecting data on undergraduate learning outcomes for two semesters and had reported analyses and interpretation in reports to their college assessment committee. They had also added a plan for assessment of graduate program outcomes. The way in which this information was integrated into the self-study varied from a very brief description of the unit's assessment activities, with their last assessment report included as an appendix (an approach taken by History and Biochemistry), to extensive discussion of activities, results, and use made of the data incorporated into the body of the document itself (Psychology). The second approach clearly conveyed a stronger impression of assessment of learning outcomes as a valued part of the process of self-improvement. It is anticipated that as programs move forward with implementing and refining their assessment plans, the self-studies will reflect an increasingly important role for the assessment process.

External Accreditation

More than 25 accrediting agencies monitor UNL colleges, programs, and/or units for professional accreditation. Although some accrediting standards for professional colleges have emphasized student outcomes assessment for some time (e.g., NAAB for the College of Architecture), other accrediting standards have only recently moved in that direction (e.g., ABET for the College of Engineering and Technology). Others still rely upon traditional input measures in evaluating programs and require no direct measures of student achievement.

Four units (Teachers College, Architecture, Journalism and Mass Communications, and the Engineering Technology Programs on the Omaha campus) were reviewed by their respective accrediting agencies in fall of 1997. As with the Academic Program Reviews, there was considerable variation in the way in which assessment information was used in the self-studies. Architecture presented a matrix linking every course offered to the 50 criteria laid out by the NAAB, and its self-assessment process was described. Although no assessment results were included in the self-study, examples of student work from every course were displayed for evaluation by the external review team. The Teachers College self-study used information about assessment activities throughout the document, including descriptions of plans, examples of specific activities, and occasionally results. The emphasis was placed on the building of an assessment process within the college and its programs. In the Journalism self-study, assessment information played a less prominent role. A formal assessment plan was mentioned as having been mandated by the NCA, but full development and implementation have not yet been achieved by the college. The plan itself was not included in the self-study; however, portfolio evaluation, exit interviews, program advisory boards, and tracking of job placements were given as examples of assessment activities engaged in by faculty. The self-study for the Engineering Technology Programs included only a brief reference to plans for implementing several indirect measures (e.g., surveys of graduates and employers and senior exit interviews). Student activities, such as capstone courses, were mentioned, but there was no sense that faculty were engaged in formal evaluation of the experiences or in using assessment information for program improvement.

Comparison of Accreditation and APR Assessment Standards

In the UNL Assessment Plan, an assumption was made that accreditation standards could be relied upon to promote the development of assessment activities that would lead to program improvement. To determine if that assumption was valid, an analysis of accreditation and APR guidelines (Appendix C) was conducted to determine the extent to which these sources are consistent with each other and with the North Central Association's expectations for student outcomes assessment. Published guidelines were obtained from the American Assembly of Collegiate Schools of Business (AACSB), the American Association of Family and Consumer Sciences (AAFCS), the Accreditation Board for Engineering and Technology (ABET), the Accrediting Council on Education in Journalism and Mass Communications (ACEJMC), the National Architectural Accrediting Board (NAAB), the Foundation for Interior Design Education and Research (FIDER), the National Association of Schools of Music (NASM), the National Association of Schools of Art and Design (NASAD), the National Association of Schools of Theatre (NAST), and the National Council for Accreditation of Teacher Education (NCATE), as well as the North Central Association (NCA). Though not comprehensive, this list represents organizations that accredit programs affecting a large number of UNL students. Written guidelines for assessment plans that have been developed by colleges for their constituent programs were also included in the analysis. Overall, it was concluded that there is fairly high consistency across sources in requiring written objectives, multiple measures, and use of assessment information in decision making. However, not all accrediting bodies require direct measures of student outcomes, and few explicitly state expectations of cost effectiveness, appropriate psychometric rigor, strong faculty involvement, or routine evaluation of the assessment process itself. All of these points are emphasized by the NCA as characterizing sound assessment plans.

Plan to Strengthen the Role of Student Outcomes Assessment in Academic Program Review and Accreditation

In order to encourage good practice in outcomes assessment, a plan was developed that would add a focused assessment review at the midpoint of both the accreditation and Academic Program Review processes. The emphasis would be developmental, with the University Assessment Coordinator working with programs to evaluate their progress since their last self-study and to develop a plan to address any outstanding issues prior to their next APR or accreditation visit. Criteria that would guide this review were based on standards used by the NCA and by accrediting agencies in the various disciplines (as found in the study summarized above), as well as best practices identified in the assessment literature. The complete plan is presented in Appendix C. Administrative details have not been finalized, but it is expected that the plan will be implemented in the 1998-99 academic year.

Comprehensive Education Program

The campus-wide general education program is known as the Comprehensive Education Program (CEP) and was effective for the class entering in fall 1995. Departments not only contribute to the skill development of students from other departments through service courses; they also help their majors develop skills related to CEP objectives within their own disciplines. The cumulative, cross-disciplinary nature of CEP suggests that assessment at both the program and university levels will play important roles in UNL's evaluation of its general education. A separate report has been prepared that describes in detail this year's assessment efforts for CEP; a summary of the activities is included below under University-Wide Activities.

Annual Report

The processes of program planning/budgeting and Academic Program Review typically occur on a 2-5 year cycle. External accreditation is often on a 5-10 year cycle. To facilitate the development of assessment activities and enhance their coordination, the annual report process was developed (UNL Assessment Plan, 1996, pp. 27-28). The annual report is to be prepared by the University-wide Assessment Coordinator in conjunction with the University-wide Assessment Steering Committee and is to provide an overview of student academic achievement during the academic year.

Assessment Activities and Issues

Role of the University-wide Assessment Steering Committee

The University-wide Assessment Steering Committee is charged with the coordination of student outcomes assessment across the University's undergraduate colleges (UNL Assessment Plan, 1996, p. 27). It is made up of one representative from each college, as well as representatives from the University's Curriculum Committee, the Teaching and Learning Center, and the Office of the Vice Chancellor for Student Affairs, and is chaired by an Associate Vice Chancellor for Academic Affairs. It is staffed by the CEP Assessment Coordinator and the University-wide Assessment Coordinator; both staff positions are .50 FTE. Formed in November 1996, the Committee met twice a month during the 1996-97 academic year and approximately once a month during 1997-98. It has served as a focus group for the development of learning objectives and measurement instruments for CEP and as liaison between the colleges and the Office of the Vice Chancellor for Academic Affairs for purposes of college-based assessment planning and activity.

University-wide Activities

One of the results of beginning to coordinate outcomes assessment across the institution has been the realization that surveys are being administered to UNL students at various times of the year and by various bodies. Often the results of these surveys are not disseminated, only limited analysis is prepared, and, with some exceptions, subsequent planning activities fail to consider the information gathered. In particular, consider these three university-wide annual surveys:

  • Report on Graduates administered by the Career Services Center for the Vice Chancellor for Student Affairs;
  • ACE Cooperative Institutional Research Program Freshman Study sponsored by the Senior Vice Chancellor for Academic Affairs [ASTIN survey]; and
  • Student Omnibus and Health Survey administered for the Vice Chancellor for Student Affairs.

Details of these surveys and their results may be found in the Interim Assessment Report. Following are brief descriptions of each and an evaluation of the contribution each makes to our knowledge about student learning.

Career Services Report

On an annual basis, the Career Services Center obtains information about the post-graduate plans of seniors who have registered with the Center. The Report on Graduates 1996-1997 provides information not only about graduates' career intentions, but also about employment rates, geographic spread, and employer recruitment activity. The College of Business Administration has tracked its graduates partly through use of the Career Center's report and has related this information to data gathered by other means. Currently, however, the Career Services Center is reconsidering whether to continue this report. The low participation rate, difficulty in locating students after graduation, and cost have been cited as reasons for reconsideration.

ASTIN Survey

The ASTIN survey is a national survey of the characteristics of freshmen attending American colleges and universities for the first time. The institutional categories used for comparison with UNL data are all public universities (PU) and all public universities with low selectivity (PU-LS). UNL participated in the survey from 1967-1973 and from 1994-96. Over the last three years, UNL has asked 15 institution-specific questions in addition to the all-institution questions. The following summary is taken from Virginia B. Vincenti's analysis of the 1994 results:

Our freshman class tends to be homogeneous, of relatively conservative values and modest backgrounds. Our students are confident about their skill levels and general preparedness for post-secondary education. The data indicates that they are better prepared in computer science than the comparison groups (PU, PU-LS), but less well prepared in English and foreign languages. They rate themselves lower on creativity than students in other categories, and male UNL students are less confident about their abilities in public speaking and writing than either male or female students in the comparison groups. UNL freshmen expect, in greater numbers than other groups, to finish their university education with a bachelor's degree, and relatively fewer of them plan on seeking advanced degrees in the professions such as law or medicine, or in other academic areas.

UNL freshmen strongly prefer interactive learning environments with faculty actively engaged in research or creative pursuits. Learning about the world is somewhat important and learning to work effectively with diverse racial groups is even more important, although students do not tend to perceive personal responsibility in promoting racial understanding at the same level as do other groups. Students in their first year rate arts and entertainment opportunities highly, but their planned attendance rate tends to be conservative, with over 40% expecting to attend three or fewer arts events per semester. Student leadership activities are considered to be somewhat important, while attendance at intercollegiate athletic events is rated very highly.

The ASTIN Survey was initiated for the purpose of establishing a baseline for tracking students throughout their college years (UNL Assessment Plan, 1996, pp. 21-22). Although this survey is of minimal use in measuring student learning outcomes, it has provided indications of the attitudes, values, and expectations that students bring with them in relation to the goals of the Comprehensive Education Program. Ten of the fifteen campus-specific questions were designed to reflect the components of the Comprehensive Education Program and have been embedded in the CEP student and faculty surveys. These questions deal with preferred learning styles, openness to the world outside the region, racial and cultural diversity, and intentions to engage in co-curricular and extra-curricular activities. These questions and others similar to those in the ASTIN Survey have been embedded in the Student Omnibus Survey in an effort to determine changes in attitudes as the students move through their college years.

Because little change has been seen in the last three years of data, a decision was made to not participate in the ASTIN Survey for the next 3-5 years. Resources will be redirected toward the Student Omnibus Survey (described below), analysis, and dissemination.

Student Omnibus Survey

The Vice Chancellor for Student Affairs commissions from the Bureau of Sociological Research an annual omnibus telephone survey of UNL students. Questions are posed on a wide variety of topics, including health and social issues as well as perceptions of academic life. One section of questions mirrors those asked on the ASTIN Survey. Because the student sample for this survey covers the entire university, both undergraduate and graduate, the data potentially has value for purposes of tracking student development over the span of college attendance.

Evaluation of the Comprehensive Education Program

The Office of the Associate Vice Chancellor for Academic Affairs has taken responsibility for the first steps in assessing the impact of the Comprehensive Education Program (CEP). During the 1996-97 academic year, several assessment activities related to CEP were completed:

  • an analysis of enrollment patterns by freshmen and sophomores in Integrative Studies (IS) and Essential Studies (ES) courses;
  • a faculty survey looking at experiences teaching both IS and non-IS courses;
  • a survey of students in IS courses gathering information about the frequency of class activities reflecting IS criteria, student attitudes toward IS, and participation in and attitudes toward co-curricular activities;
  • a portfolio evaluation in which work samples from across the university were rated using a scoring rubric linking performance standards to IS objectives.

Appendix D contains a detailed summary of the implementation and results of these activities; the most fully articulated version of the IS learning objectives can be found in Appendix E. Copies of the full 1997 Comprehensive Education Program Assessment Report have been sent to all colleges.

Learning Objectives Analysis

Another project was undertaken to gauge the extent to which IS objectives are currently incorporated in the learning objectives for the various majors. Given that generally at least half of a program's IS offerings are upper-division courses and that programs are required to assess their majors' achievement of learning objectives, the assessment activities already underway at the program level have the potential for yielding information about student achievement that the university can use as it looks more broadly at CEP assessment.

IS learning objectives can be classified into three broad categories: critical thinking, communication (written and oral), and human diversity. The matrix in Appendix F shows learning objectives from each program and college that were judged to fall within these same three categories. There appears to be considerable overlap of objectives for the majors with those for CEP. Some of the gaps in the matrix are probably a matter of objectives not being included in the particular documents that were analyzed (undergraduate catalog copy, assessment reports, and in some cases, accreditation standards). However, in other instances, the program or department may not yet have taken the crucial first step in outcomes assessment, that of clearly articulating the learning objectives.
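
For illustration only, the kind of cross-tabulation described above might be sketched as follows (a Python sketch; the program names and objective wording are invented, and the placeholder text stands in for gaps where no objective was found in the documents analyzed):

    # Hypothetical sketch: tabulating program learning objectives under the three
    # broad IS categories. Program names and objective wording are invented.
    is_categories = ["critical thinking", "communication", "human diversity"]

    program_objectives = {
        "Program A": {"critical thinking": "evaluates competing explanations of phenomena",
                      "communication": "writes clear, well-organized analytical papers"},
        "Program B": {"communication": "presents findings effectively in oral form",
                      "human diversity": "analyzes issues from multiple cultural perspectives"},
    }

    for program, objectives in program_objectives.items():
        for category in is_categories:
            entry = objectives.get(category, "-- not found in documents analyzed --")
            print(f"{program:10s} | {category:18s} | {entry}")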

College-based Activities

Because the colleges of the University of Nebraska-Lincoln enjoy a high degree of autonomy, the development of student outcomes assessment is unique to each college and its prevailing culture. This autonomy has its advantages: each college can develop a plan and set of procedures tailored to its own academic disciplines, available resources, and reporting structure. On the other hand, it is a challenge for the university as a whole to develop a coherent process for the entire institution. Some colleges are coming to this process for the first time this year and others have been working on assessment planning for a year or more. Most colleges have been doing some form of outcomes assessment within their normal operating contexts for years, some within accreditation and some through academic program review (APR). Professional colleges such as Business Administration, Architecture, and Engineering also receive data from professional examination results. However, this was the first year that the colleges were asked to report both plans and evidence of assessment activities to the Office of the Senior Vice Chancellor for Academic Affairs. Written responses to the deans' reports included a brief commentary by that office on the college's progress and offered suggestions for further improvement. In addition, the letters summarized, in the form of reporting guidelines, the types of information that would be most helpful in determining progress toward full implementation of the UNL Assessment Plan. Copies of the letters can be found in Appendix G. (More detailed information about activities within the various colleges is included in Appendix B, Highlights of College Reports.)

Two general administrative models are now in use at the college level. A college may have a formal assessment committee, as do the Colleges of Arts and Sciences, Business Administration, and Human Resources and Family Sciences. Alternatively, the college may rely more heavily on leadership from the dean's office, as has been the case in Teachers College. Faculty often ask what role the college itself can or should play in assessing student learning outcomes. This question has no simple answer, depending as it does on the nature of the programs and departments that make up the particular college. However, three important functions are currently carried out at the college level:

Setting college goals and objectives

Several colleges have articulated general learning goals and objectives for their graduates. The degree of specificity appropriately reflects the degree of overlap among their constituent programs. For example, the goals in Arts and Sciences are very broad, reflecting the diversity of programs within that college, while those for the College of Business Administration are more detailed, yet still apply to all programs. Colleges whose programs are accredited by external agencies, such as Engineering and Technology, tend to have a set of goals that applies to all their majors, with these goals articulated by the accrediting body. As described in the assessment literature, the ideal may be that department goals are written to be consistent with college goals, which in turn are aligned with the institution's goals or mission. However, in cases where college goals and objectives have not yet been written, there is an opportunity for the faculty work in developing program-level objectives to serve as the basis for articulating broader goals at the college level.

Overseeing and guiding department assessment plans

The College of Arts and Sciences, the College of Human Resources and Family Sciences, and, to a lesser extent, Teachers College have all taken on the role of overseeing the development of program assessment plans. They have not only required that programs have assessment plans, but have established minimum standards for what the plans are to include and timelines for their implementation. Other colleges take a less direct role in overseeing the process, but all colleges have the responsibility of promoting meaningful program assessment activities.

Conducting assessment activities

In some instances, it is simply more efficient to collect data (or coordinate its collection) at the college rather than at the program level. Having college-level surveys of seniors, alumni, faculty, and employers (for example, as Arts and Sciences is planning to do for their graduate programs and as Business Administration and Human Resources and Family Sciences already do for their undergraduate programs) avoids the expense of duplicated effort and the alienation of respondents that might occur with multiple, program-level surveys. Coordinating these activities at the college level also allows the college to look at issues common to all its programs. The activities conducted through the dean's office in Teachers College provide many good examples of this: student focus groups and surveys of cooperating teachers, of first-year teachers and their supervisors, and of all K-12 administrators in Nebraska.

Program Activities and Issues

The NCA has described many characteristics of what it views as exemplary assessment plans. Among these characteristics are appropriate focus of the plan (having clear and measurable learning objectives and plans that address important questions), quality of the data gathered, high level of student involvement, cost-effectiveness, and utilization of the information to help make decisions. However, developing and implementing plans that possess these desirable characteristics often presents difficulties. In the following section, some issues that emerged during this year's assessment activities are discussed, along with strategies that programs on this campus have employed in dealing with them.

Appropriate Focus for an Assessment Plan: Planning for Maximum Usefulness

An appropriately focused assessment plan gathers information relevant to questions that concern faculty. It is tied to learning objectives that express the most important and complex goals faculty hold for their students. This is not limited to cognitive objectives, however; other types of issues, such as perceived adequacy of advising or intellectual community of majors, have also been addressed. As with any type of research, asking the right questions is of primary importance.

However, even the right questions may be stated too broadly to be answerable. It is appropriate, even desirable, to begin with broadly stated goals. However, at some stage, measurable objectives and performance criteria related to those goals need to be developed. For example, a goal might be "demonstrates knowledge of subject matter X." Faculty might look at student papers and rate them as superior, satisfactory, or unacceptable with respect to knowledge of subject matter. What if some of the ratings fall into the "unacceptable" category? Where should effort be placed in order to improve performance? For that matter, what could be done to move more students from the "satisfactory" to the "superior" category? The information gathered offers no direction. One way to avoid this problem is to develop more specific objectives or performance criteria that link the rating instrument to the general goal statements. These objectives may be difficult to formulate initially, but they can evolve as faculty engage in the process of evaluating student products. For example, a set of papers could be examined after the general ratings are completed and analyzed to determine what distinguishes work of differing levels of quality. This information may help faculty to articulate the implicit standards that they are applying.

Careful matching of measures to objectives also influences how useful the resulting information is likely to be. Choosing a measure that is easy to implement but that is only tenuously connected to the most important objectives will not be accepted by faculty as providing useful information. Several departments have found that examining work samples or portfolios can be an inadequate, as well as inefficient, way to measure objectives related to knowledge of broad content areas. Standardized exit exams or course exams may be a better option. In contrast, programs that rely exclusively on multiple-choice standardized tests probably are not getting information detailed enough to guide program improvement; they may not even be gaining any information relevant to achievement of their more complex learning objectives. Even the choice between collecting work samples that are unrelated to each other, perhaps from a cross-section of courses, and asking students to integrate their work into a portfolio should depend upon the kind of questions faculty hope to be able to answer. Programs whose questions are focused on how and when complex skills are developed will probably find portfolios that reflect the progress of work over time to be the more useful tool. In contrast, programs that have a relatively small number of required core courses may get better information by intensively sampling work from those specific courses. Programs with a strong emphasis on applications and working with clients may need to focus on evaluating internship or capstone experiences.

One way in which some programs have increased the usefulness of their assessment information is by collecting supplemental data to enrich their interpretation. For example, analyzing assignments for the skills they require (as well as how students actually complete the assignment) can suggest ways in which instruction can be modified to better foster desired learning outcomes. Analysis of transcripts to identify patterns of course selection and sequencing can offer plausible curricular explanations for differences in student performance. Combining holistic and analytic approaches in rating scales (giving work a global rating as well as rating particular characteristics, or rating a student's overall performance in addition to rating specific work samples), may provide interesting information in its own right. Differences in conclusions based on information from the two approaches would suggest that faculty are giving weight to characteristics not articulated in the objectives. Further exploration might provide insights into the adequacy of the objectives as currently stated. And finally, comparing faculty ratings with other sources of data, such as student exit surveys or interviews or employer surveys, can be useful. When the sources agree, the validity of conclusions drawn is supported; when they don't, there is motivation to find the reason for the discrepancy.
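
One of these strategies, the comparison of holistic and analytic ratings, can be illustrated with a minimal sketch (in Python; the work sample names, objective names, ratings, and 1-4 scale are all invented for illustration). A large gap between a holistic score and the average of the analytic ratings would flag work for the kind of faculty discussion described above.

    # Hypothetical sketch: comparing a holistic rating with the mean of analytic
    # ratings for the same work sample. All names and scores are invented.
    samples = [
        # (work sample, holistic rating, analytic ratings by objective), scale 1-4
        ("paper_01", 4, {"content knowledge": 4, "organization": 3, "use of evidence": 4}),
        ("paper_02", 2, {"content knowledge": 3, "organization": 3, "use of evidence": 3}),
        ("paper_03", 3, {"content knowledge": 2, "organization": 3, "use of evidence": 2}),
    ]

    for work_id, holistic, analytic in samples:
        mean_analytic = sum(analytic.values()) / len(analytic)
        gap = holistic - mean_analytic
        # A large gap suggests the holistic judgment rests on characteristics
        # not articulated in the stated objectives and is worth discussing.
        if abs(gap) >= 1.0:
            print(f"{work_id}: holistic {holistic} vs. mean analytic {mean_analytic:.1f} -- review")
        else:
            print(f"{work_id}: ratings consistent (difference {gap:+.1f})")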

Quality of the Data

Outcomes assessment consumes significant time and effort. Given that investment, it is wasteful not to maximize the credibility of the findings in the eyes of all constituencies, including faculty, administrators, students, external review teams, and employers. This is not the same as being accountable to those constituencies, though anything that enhances the accuracy and usefulness of the data collected will probably also make addressing accountability issues easier.

The quality of the data is affected by decisions made at every step in the assessment process. Unless objectives are articulated and performance measures clearly tied to those objectives, the data gathered are unlikely to be viewed as providing relevant information about student achievement. At the other end of the process, establishing feedback loops to faculty and students encourages use of the assessment information to make appropriate changes and helps to build the credibility of the entire process. In addition, though, there are several things in the data collection stage that can give the process integrity. Are the students who supply work samples or participate in exit interviews representative of the program? Even a small sample should reflect a range of student achievement and goals. Programs that collect background information about the participants are able to evaluate whether this is the case. Credibility is most often questioned when the assessment method rests upon a judgmental process, such as ratings of a portfolio or project. It can certainly be argued that the person most qualified to rate a performance is the instructor of the course in which it was produced. However, that same person may unintentionally bring to bear knowledge and experiences that do not relate to the objectives we intend to measure. If the results are to provide more information than could be obtained by reviewing course grades, it is sensible to implement a system using multiple raters. Training of raters on the use of the instrument, with reliability checks from time to time, is also a good idea. Clearly defined objectives and performance criteria will help achieve and maintain rater consistency and agreement.
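
The reliability checks mentioned above can be quite simple in practice. The following is a minimal sketch (in Python) assuming two raters have independently scored the same set of work samples on a hypothetical 1-4 scale; the scores are invented for illustration.

    # Hypothetical sketch of a periodic reliability check: percent exact and
    # within-one-point agreement between two raters scoring the same samples.
    rater_a = [3, 4, 2, 3, 1, 4, 3, 2]
    rater_b = [3, 3, 2, 4, 1, 4, 2, 2]

    pairs = list(zip(rater_a, rater_b))
    exact = sum(1 for a, b in pairs if a == b) / len(pairs)
    adjacent = sum(1 for a, b in pairs if abs(a - b) <= 1) / len(pairs)

    print(f"Exact agreement:   {exact:.0%}")
    print(f"Within one point:  {adjacent:.0%}")
    # Low agreement on a particular objective would signal a need to refine the
    # performance criteria or to discuss scoring standards before further rating.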

High Level of Student Involvement

Of course, the most meticulous planning of assessment activities will be wasted if students cannot be persuaded to participate. A wide range of methods to maximize student compliance was tried during the last year, including incentives such as food, release time from class, and cash payments. Generally, such approaches were a resounding failure; they were tried primarily because current seniors are not bound by changes in graduation requirements that make participation in assessment activities mandatory. Although we can expect participation in the future to become less of a problem, issues of how to motivate students to perform at their best will remain. The most promising strategies for increasing both compliance and motivation are those that build the assessment activities into (or upon) requirements for specific courses. This may involve using projects from a capstone course or papers from a seminar as the basis for assessment, using standardized examinations in required courses, or even creating a course in which papers or projects are designed to reflect the desired objectives. The assessment that occurs for the purpose of program improvement does not influence students' course grades, but students are given credit for the work within the course and presumably are motivated to perform at their best.

It is not absolutely necessary, though, to integrate the assessment into specific courses. Another approach that has been successful for some programs is to require the student to create a portfolio. The most effective of these go beyond merely collecting materials; they result in a product that is useful to the student both educationally, by encouraging reflection on their own growth, and professionally as they seek employment.

Cost-effectiveness: Restraining the Growing Demands on Faculty Time

As the scope of assessment activities grows, so does concern that the demands on faculty time will overwhelm even those most enthusiastic about the benefits of assessment. In reviewing the experiences of the many programs already well into their second, and in some cases, third year of implementation of their assessment plan, it was possible to identify some ways in which programs are trying to control the demands on faculty time. The issue is not strictly one of minimizing the demand, but of getting a high return on the investment of faculty resources.

The efficiency of any assessment plan depends upon the clarity with which goals, objectives, and performance criteria have been articulated. Time spent at this stage of the process pays off in savings later because appropriate measures can be more easily identified and developed, raters find their task easier, less ambiguity surrounds the interpretation of results, and it is easier to maintain continuity over time as faculty responsibilities change. An additional benefit is that the discussion of learning objectives is often provocative and productive in itself, whether or not assessment activities yield immediately useful information. On the other hand, all objectives need not be written at the same level of specificity. It makes sense to concentrate, at least in the early stages, on those objectives the faculty consider most central.

Existing sources of information can often be drawn upon for assessment purposes. Capstone courses, senior seminars, design projects, senior theses, internships, and professional portfolios are examples of sources of information about student performance that may already exist within a program. Using such sources for outcomes assessment still requires designing an evaluation tool that links the student work with the stated program objectives, but this is likely to be much less time consuming than designing an entirely new learning experience.

Programs have been encouraged to include multiple measures in their assessment plans. The rationale for incorporating multiple measures is that not every method is equally appropriate for every objective, and that by including a range of measures, a program is more likely to gain information that reflects differing perspectives on an issue (e.g., viewpoints of students/faculty or alumni/employers). However, it is not necessary for each objective to be measured using more than one instrument. Some programs appear to have redundancy among their measures that could be reduced. For example, if a program has a capstone course or senior project as well as student portfolios, faculty might wish to assess some objectives using the products of the capstone and other objectives using the portfolios, or they might wish to experiment with using both measures, but in alternate years.

Programs having very large numbers of majors realize that their assessment strategies must be based on samples of their students. On the other hand, smaller programs may overlook sampling as a useful strategy to keep their time commitments within bounds. Sampling can occur on many levels, for instance (a brief sketch of two of these strategies follows the list):

  • courses: certain courses can be targeted, perhaps a "core" set or the most integrative, such as a senior seminar or capstone course;
  • assignments: all courses can be used, but a sample of the assignments from each course can be selected for evaluation;
  • students: work from a sample of the majors in a course can be assessed;
  • ratings: for each rater, a sample of the work being judged can be selected for double scoring rather than collecting multiple ratings on every piece of student work (assuming that inter-rater agreement has been found to be satisfactory).
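
The sketch below (in Python) illustrates two of these sampling strategies under invented assumptions: a hypothetical pool of 60 portfolios from which a sample is drawn for program assessment, and two hypothetical raters for whom a few pieces are selected for double scoring. The fixed random seed simply makes the sampling decision repeatable and easy to document.

    # Hypothetical sketch of sampling for assessment. All names, counts, and
    # sample sizes are invented for illustration.
    import random

    random.seed(1998)  # fixed seed so the same sample can be reproduced later

    portfolios = [f"portfolio_{n:03d}" for n in range(1, 61)]   # e.g., 60 majors
    assessed_sample = random.sample(portfolios, k=15)           # assess a 25% sample

    rater_assignments = {
        "rater_1": [f"paper_{n:02d}" for n in range(1, 21)],
        "rater_2": [f"paper_{n:02d}" for n in range(21, 41)],
    }
    # For each rater, select a few pieces to be scored a second time as a check
    # on inter-rater agreement.
    double_scored = {rater: random.sample(papers, k=4)
                     for rater, papers in rater_assignments.items()}

    print("Portfolios selected for program assessment:", assessed_sample[:5], "...")
    for rater, papers in double_scored.items():
        print(f"{rater}: double-score {papers}")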

Closing the Loop - Using Information from Assessment

Ultimately, the goal of collecting and interpreting assessment data is to improve a program. Most programs are in the early stages of developing and implementing their assessment plans and may have little data to work with. If assessment information is defined more broadly, though, to include all aspects of the process (e.g., the discussions that occur as objectives are articulated and clarified and measures developed or chosen), then valuable information may be gained even in these early stages. Already there is evidence of assessment information being used to guide changes in curriculum, classroom practice, and the assessment process itself. Some specific examples are described below.

Changes to Instruments or Data Collection Procedures

In several instances, programs that had developed their own standardized tests found evidence that the test items were not appropriate in difficulty for their students. Some faculty responded to this information with an examination of their performance criteria to be sure that the levels reflected reasonable expectations. In one case, some questions appeared to be too easy and were answered correctly even by students who had had minimal exposure to the content. In another instance, student performance was much poorer than expected and faculty are considering whether the questions were unrealistically difficult. After interviewing the students who took the exam and reviewing the actual emphases of core courses, they are also considering the possibility that the test's content should be changed. Another program experimented with using a qualifying exam from their graduate program as an undergraduate comprehensive exam, but found that it did not adequately measure the objectives of interest. This is not an unusual problem when using a measure for a purpose other than the one for which it was developed. Mismatch of test content and curriculum tends to be an even greater problem when commercially developed tests are used, as some UNL programs have discovered. If the mismatch is severe enough, interpreting the results can be virtually impossible. All these experiences are leading to discussions of alternative measurement strategies.

When an assessment plan includes evaluations of student products, such as work samples, portfolios, or projects, other types of issues arise. After one or two semesters of experience using rating scales with student products, faculty have suggested several types of changes that should improve the usefulness of data obtained, require less faculty time, or both. Some involve changes to the rating scale. For example, one common change was the addition of a "don't know" or "insufficient evidence" category to the rating form. Including this category will possibly increase the validity of the scores by distinguishing between ratings that reflect lack of opportunity to demonstrate a skill and those that reflect low levels of achievement. Furthermore, extensive use of the category for a particular objective may indicate that another source of information is needed. Two other suggestions have involved the description of the characteristic to be rated. It is not unusual for raters to be asked to judge whether some work demonstrates an understanding of "subject A and subject B". A problem arises if the product is rated as unsatisfactory or the criterion as only "partially satisfied": a single combined rating does not reveal which subject is the source of the weakness, whereas separate ratings of the two subjects would identify which area needs work. Another suggestion involves more clearly defining terms in the objectives. If a rater is to judge whether work demonstrates knowledge of basic elements of a content area, what exactly are those basic elements? At some point, whether on the rating form or in course syllabi or other curriculum documents, this must be spelled out if the ratings are to have meaning.

The rating procedure itself has also sometimes been evaluated and modified. Little formal training of raters appears to occur, although some faculty have suggested it might be helpful. Such training typically would consist of scoring a common set of examples and discussing differences in interpretation or weighting of criteria. These discussions should occur prior to the rating of work samples for the actual assessment, because those ratings gain in credibility to the extent that they are arrived at independently. To this end, some programs have moved to strengthen the independence of their portfolio ratings by asking students to provide clean copies of papers, having removed the comments and grades of the instructors. Other changes in procedure are intended to reduce the demands on faculty time while still providing the needed information. For instance, in one case, a program decided to ask for a single set of ratings on a set of assignments rather than to rate every characteristic of every assignment individually. And finally, programs are being encouraged to make more use of sampling. For example, even if every student is expected to produce a portfolio, formal assessment might be done on only a sample of them for the purpose of program evaluation. As an illustration of another sampling strategy, one program has been using multiple raters to examine student portfolios, with a single faculty member reading all of them to provide continuity. Faculty are now considering changing this labor-intensive model for one in which each portfolio is looked at by two randomly selected faculty members.

There have also been changes made in the types of material being rated. Procedures that initially consisted of holistic ratings of a student's overall performance in a course or program have in some cases been modified to include evaluation of specific work products. Faculty interested in being able to examine the development of skills over time have changed their portfolio specifications so that papers must be included from particular years and/or courses. Some programs have expanded the types of materials required in student portfolios in order to allow evaluation of additional complex objectives, such as oral presentation skills or application of statistical/research methods. In contrast, others have decided that some of their outcomes related to knowledge of subject matter are not efficiently or effectively measured with a portfolio approach, and so are developing other means to measure those objectives. Perhaps the most important point is that it is necessary to be clear about the goal of the assessment and then to evaluate whether the procedure as structured can provide the needed information.

Changes to Curriculum or Classroom Practice

In reviewing the 1997 final assessment reports, it was clear that faculty are already using assessment information in program planning and improvement. The following selected examples of actions taken in specific departments (listed alphabetically) illustrate a variety of ways in which assessment information can be used. They include changes in course content or teaching methods, curricular requirements or course sequencing, and development of new courses, some with unique formats. Some of the changes are still under consideration; they have been included because this, too, represents a use of assessment information, namely to promote focused discussion of ways to improve a program. Although the examples were chosen because they focus on change, it should be noted that assessment results have more often affirmed the effectiveness of programs. Thus faculty can use the process of assessing outcomes to document program strengths while at the same time gaining information about where to concentrate limited resources most effectively.

Anthropology. Weak performance on the senior exit exam led faculty to question whether the core set of courses in the major was adequately addressing the program's knowledge objectives. More detailed information from exit interviews revealed that students felt faculty teaching these courses sometimes concentrated primarily on those areas with which they were most familiar. Certainly this has potential implications for classroom practice in those courses. However, the disappointing performance also gave rise to discussion of whether the content of the test should be changed to reflect the topics actually being emphasized by faculty in their teaching and by the students in their course selection patterns. Given that the exam closely reflected the original learning objectives articulated by the faculty, this may lead to a revisiting of those objectives.

English. On the basis of the results of student exit interviews, faculty have identified a need to build a sense of community among their majors. Several courses of action that might address this need are being considered: restricting a small number of courses to majors, creating a set of advising tracks through the major, and investigating alternatives to the honors thesis (which currently involves only a minority of majors) as a culminating or closure experience.

Environmental Studies. The major direct measure of student achievement for this department is assessment integrated into the senior seminar. The course itself is structured as a series of short seminars offered by experts on different topics in the field. What makes the experience unique from an assessment perspective is that the program's major learning objectives are made clear to the students at the outset, with an expectation that they will attempt to demonstrate their achievement of those objectives through a set of papers that they write. Each paper is evaluated by the student's emphasis adviser using a grid of the general program objectives and a grid of objectives specific to the emphasis. Results of faculty ratings of the seminar papers and student surveys have both supported the conclusion that the program is not meeting its goals with respect to students' research skills, understanding of ethical issues in research, and familiarity with major environmental policies. Under consideration are increasing the number of required courses and broadening the range of courses that would satisfy requirements. Because it is an interdisciplinary program, Environmental Studies sees its next step as identifying specific courses having problems with either content or frequency of offering so that it can work with the relevant departments to meet its majors' needs.

French. Results of a grammar test indicated students' mastery in this area did not meet expectations. Because the initial findings were based on only a small number of students, faculty plan to look at whether these findings hold up after more data have been collected. If so, they will critically examine how grammar is being taught across the curriculum.

Geography. Performance on an essay exam in human geography suggested that although students were able to formulate persuasive syntheses concerning population patterns, their knowledge of statistical detail was often judged to be only barely sufficient. Faculty concluded that instruction needs to do more to reinforce the knowledge of facts after initial exposure. Analysis of performance on an exam covering knowledge and skills in cartography, GIS, and remote sensing took into account information about the specific courses each student had taken. It was found that students did very well in content areas where the subject matter was extended and reinforced in many courses, even in cases where the student had never completed a course devoted to the subject. Faculty are recommending that a similar approach be taken in other content areas.

History. Assessing portfolios of student work has led to discussions of the nature of the developmental curve of skills required to do historical research. This has been enhanced by considering transcript information about the students' selection and sequencing of courses. A second perspective was gained through exit interviews in which students were asked to reflect on their own skill development. One outcome of these analyses and discussion has been to consider developing a research seminar to be taken prior to the senior level. The same discussions have led to suggestions of strategies that instructors might incorporate in order to strengthen skill development. These include designing research exercises, using more primary sources in teaching, and requiring more papers.

Mathematics and Statistics. An undergraduate research seminar has been designed to create an experience that should result in demonstration of each of the program's learning objectives. An evaluation tool has also been created to document achievement of each objective.

Music. Assessment information led to a change from three semesters of music history to two, along with the creation of a new course for freshman music majors that introduces music literature and music history. The five semesters of music theory previously required have been reduced to four. Change is often assumed to mean additions to the curriculum, but this is an example of how change can instead mean reducing requirements or redistributing resources.

The music department has also used assessment information to guide changes in the coverage of certain material within courses and activities. Sight reading has been added to all levels of juries (which are an important aspect of the student's educational experience as well as a means of evaluating achievement); improvisation is being added to all studio teaching; and convocation classes have been restructured to place greater emphasis on individual student performance.

Nutritional Science and Dietetics. In 1996, the department's pass rate on the American Dietetic Association (ADA) registration exam was 92%, well above both the national pass rate of 75% and the department's own goal of at least 80%. However, in 1997, the exam and its passing standards were changed by the ADA. Of the UNL students who took the exam, 71% passed, compared with the national average of 79%. Faculty have considered curricular explanations for the lower performance and are discussing the possibility of restructuring clinical nutrition courses to include laboratory sections.

Analyzing responses to the senior exit interviews led to identification of deficiencies in the marketing and management areas. A required marketing course has been added to the curriculum and a management component incorporated into an existing nutrition course.

Philosophy. Evaluation of student work from a required advanced seminar identified four objectives that, in general, were only "partly satisfied": the work has a logical structure; the student states the logical structure of the texts; material is clearly explained in a manner appropriate for the audience; and the work demonstrates an evaluation of the cogency of the philosophical position in the text analyzed (including judging relevant connections between different positions or arguments). The importance of focusing on these areas was confirmed by information from competency grids completed for majors in each course and by student survey results. Recommendations have been made to all faculty to include more critical reading and writing in their instruction. Adoption of specific writing goals that would be disseminated as instructor guides is under discussion.

Psychology. Faculty have reported that discussions of learning objectives led them to reconsider the concept of core subject matter. This has since been translated into changes in the course of study for majors, requiring greater breadth and fewer electives. Faculty have also proposed a new course that provides an opportunity to address professional and advising issues.

An analysis of the assignments used in all upper-division courses, focusing on the skills students were asked to apply or demonstrate, flagged two areas, interpreting research and using information sources, as occurring less frequently than others. When student performance on the assignments was rated, using information sources again fell short of the desired level. Anecdotal evidence suggests that faculty have followed up discussions of this information by incorporating more writing and research into their classes.

Sociology. On the basis of an evaluation of senior writing projects, several changes have been recommended in order to improve students' writing skills and their ability to apply social theory. The department is considering requiring all students to take the History of Theory course prior to the senior seminar, a move that would require the restructuring of course offerings to increase availability. A required junior-level pro-seminar has also been proposed. The seminar would be designed to prepare students to meet upper-division writing expectations, while at the same time providing more opportunities for internships and both curricular and career advising, an area identified from student exit interviews as being in need of improvement. Faculty are also planning to examine and perhaps modify the senior seminar writing assignments to try to enhance the development of writing skills and to encourage the application of theory to other disciplinary issues.

Theatre Arts and Dance. After finding upper-level students weak in their knowledge of dramatic literature, faculty developed a new, second-year course in this subject. They have also established majors-only sections of two introductory courses. These changes were implemented during the 1995-96 academic year, but have been included here because they offer a chance to see a program's follow-up of an assessment-based change. Excerpting from the department's annual report, "Having the majors together...in their first semester at the university gives us a more intense forum for assessing what they know and determining what they need to acquire throughout the semester. Having them together as a group from their first semester gives us the opportunity to assess them as a group before they depart from the program. This action has provided incoming students with a greater sense of academic and artistic community within the department, and attrition has decreased considerably."

Conclusions

 

The university as a whole has made considerable progress toward implementation of its assessment plan. It is especially encouraging to see that, despite the relative newness of formal outcomes assessment at UNL, faculty are using assessment information as they make decisions about changes in their programs and courses.

Nevertheless, there are several areas in which we need to increase our efforts. The learning goals and objectives of each program form the foundation of outcomes assessment. Some programs still need to articulate what they want their students to know and to be able to do when they leave UNL. Others have written objectives that would benefit from further clarification. Still others may need to revisit their objectives with a critical eye, given what they have learned from their assessments.

In terms of assessment measures, it is clear that the most successful strategies are those that are an integral part of the student's learning experience. There are many potential sources of information within any program that we can draw on for program evaluation. Where these sources are insufficient, assessment procedures can often be incorporated as courses or other educational experiences are added or redesigned.

It is understandable that much of the activity related to assessment has so far focused on the immediate need to create and implement an assessment plan. However, if outcomes assessment is to reach its full potential, the focus must widen. A program that looks beyond the current year in its planning benefits in several ways:

  • Attention can be systematically directed to different objectives, questions, or parts of the program over the course of several years. This allows comprehensiveness of coverage in the long run without overloading faculty by trying to address every aspect each year.
  • Questions that require longitudinal data (such as the effectiveness of program changes) can be addressed. Having long-term plans permits collection of both baseline data and materials (such as examples of different levels of student performance) that may be useful later.
  • Continuity is easier to maintain. Faculty may find themselves responsible for a program's outcomes assessment activities without having adequate background information. Documentation of results of long-range planning can become a tool for improving continuity and reducing duplication of effort.

One final, critical point is that communication about assessment needs to be improved. In some programs, responsibility for assessment has fallen heavily on a small number of individuals, and it is unclear that the necessary feedback to and from the total faculty is occurring. The need for communication across departments must also not be overlooked. This document and the Interim Assessment Report are first steps in addressing this need. It is hoped that faculty will use the information provided here to identify other programs that may serve as resources for ideas or advice.



 

Acknowledgments

Requests for additional information concerning this report should be directed to the University-wide Assessment Coordinator, Office of the Senior Vice Chancellor for Academic Affairs, 208 Canfield Administration Building 0420 (472-3899). Although primary responsibility for producing this report lies with the University-wide Assessment Coordinator, the final result reflects the contributions of many individuals. We gratefully acknowledge the valuable input and support received from members of the University-wide Assessment Steering Committee and the specific written contributions to this document made by Dianne Hartley, Robert Bergstrom, Myra Wilhite, and Jude Gustafson.


 

University-wide Assessment Committee Roster, 1996-97

John Ballard, Engineering and Technology
Robert Bergstrom, Comprehensive Education Program Assessment Coordinator
William Borner, Architecture
Dianne Hartley, Honors Program
Earl Hawkey, Student Affairs
Melody Hertzog, University-wide Assessment Coordinator
William Meredith, Human Resources and Family Sciences
Harvey Perlman, Law
George Pfeiffer, University Curriculum Committee
Kay Rockwell, CASNR
Linda Shipley, Journalism and Mass Communications
Nancy Stara (chair), VCAA
William Walstead, Business Administration
Ellen Weissinger, Teachers College
Laura White, Arts and Sciences
Rusty White, Fine and Performing Arts
Myra Wilhite, Teaching and Learning Center


University-wide Assessment Contact Information: