PRISM  On-Line - October 99
teaching toolbox
Taking Assessment Online

by Arnold B. Urken

Web-based surveys offer many benefits, but make sure you do the math.

Open the newspaper or a magazine or go online on any given day, and you're bound to come across an item touting yet another way that life can be improved by using the Internet. While some of the claims are certainly debatable, most would agree that new technologies are forcing us to look anew at how we do things—and assessment is no exception.

The most obvious potential benefit of Web-based assessment is the opportunity to cut costs and expedite data collection and analysis, but there are other factors that make going online attractive. Creating a Web-based system demonstrates to state and national accreditors that a school is making an effort to develop a curriculum management system, and may help the school remain competitive in traditional on-campus education, as well as asynchronous and distance-learning environments. Looking into the future, one can imagine extending such a system to build assessment into new modes of learning.

While these are reasons for being enthusiastic, there are also reasons for exercising caution. In a recent article published in Prism (Pro/Con, February 1999), readers were warned to question questionnaires in assessing learning satisfaction. In the article, George Hazelrigg highlighted a fundamental difficulty with surveys: respondent data can be scored in different ways that can lead to inconsistent inferences about learning satisfaction.

Hazelrigg's warning has broad implications when considered in the context of Web-based assessment. While technology offers great potential for collecting and analyzing data, it challenges us to rethink not only the assessment process, but also the difficulty of scoring survey data, and the place of surveys and polls in engineering education itself.

Moreover, although the Web enables anyone to quickly create and administer a survey or poll, that ease does not guarantee high-quality results. People in our culture are probably polled more frequently than members of any other culture—in part due to innovative uses of technology—but methodologies and skill levels vary widely. (Imagine what would happen if, for example, we all drove cars that we personally built from plug-and-play car parts!) Our students are likely to share the negativism and skepticism with which surveys are often greeted.

There is an art and science associated with building surveys that engineering students are unlikely to encounter in their studies. Though they may use existing surveys and even create polls to identify consumer needs and priorities in marketing or engineering design courses, there is generally no curricular time devoted to teaching students the artistic and scientific aspects of polling.

Engineering educators can remedy this situation by making Web-based surveys or polls that conform to certain standards. By showing students proper survey methodology, we can begin to give them the knowledge they need to use and evaluate these assessment instruments properly.

Here are some things to consider when designing a Web-based survey:

Don't automatically adopt a sampling approach.

    Make your questions determine the statistics, not vice versa. Develop questions that take account of the educational realities that are familiar to you as a teacher or an administrator. Respondents may feel uncertain about some issues, but you will never know about these attitudes—much less deal with them—unless you design complexity into your questionnaires.

Make surveys part of a learning process.

    Design questionnaires that enable students—and faculty members—to learn about themselves and others, while protecting privacy. Construct feedback mechanisms that allow individual members of an educational community to act on the information they obtain. Allow them to monitor their progress.

Don't become obsessed with response rates.

    While experimental comparisons of Web vs. (scanned) paper course evaluations indicate no significant difference in response rates, be aware that some Web systems and most scanning systems report response rates on the basis of whether a questionnaire was submitted, without identifying which questions were actually answered. And in any case, a high response rate alone is not a sufficient indicator of an effective assessment system.
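    The distinction above can be made concrete with a short sketch. The data, question names, and enrollment figure below are hypothetical; the point is simply that a submission-based rate can mask much lower completion on individual questions.

```python
# Sketch: per-question response rates vs. the submission-based rate
# that many systems report. Responses are hypothetical; None marks a
# question left blank on a submitted form.
submitted = [
    {"q1": 4, "q2": 5, "q3": None},
    {"q1": 3, "q2": None, "q3": None},
    {"q1": 5, "q2": 4, "q3": 2},
]
enrolled = 4  # one student never submitted at all

# The rate most systems report: forms submitted / students enrolled.
submission_rate = len(submitted) / enrolled

# The rates they usually omit: answers received per question.
per_question = {
    q: sum(r[q] is not None for r in submitted) / enrolled
    for q in submitted[0]
}

print(submission_rate)  # 0.75
print(per_question)     # {'q1': 0.75, 'q2': 0.5, 'q3': 0.25}
```

    Here the headline figure of 75 percent hides the fact that only one student in four answered the third question—exactly the kind of gap a submission-based report conceals.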

Examine the implications of different scoring mechanisms.

    Looking at survey results under different scoring systems is essential for good data analysis. Reach out to your colleagues in the social sciences and mathematics who are interested in voting theory and collective choice and discuss setting standards not only for assessment, but also for uses of polling and surveys in marketing and design courses. Involving students in designing an assessment system will help you and your colleagues develop survey standards that can serve as a model for the engineering culture. Since scoring systems can produce many surprising differences, a modest objective would be to identify the systems that make sense for your data and to determine if any of these systems produce varying results. If such differences occur, you will need to have procedures in place for resolving them.
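    The kind of surprising difference described above is easy to demonstrate. The sketch below uses invented ballots and option labels, and compares two standard scoring rules from voting theory—plurality (count only top choices) and the Borda count (award points by rank)—applied to the same ranked responses.

```python
from collections import Counter

# Hypothetical ballots: each respondent ranks three options from most
# to least preferred. (Illustrative data only.)
ballots = (
    [("A", "B", "C")] * 4    # 4 respondents rank A first
    + [("B", "C", "A")] * 3  # 3 respondents rank B first
    + [("C", "B", "A")] * 2  # 2 respondents rank C first
)

def plurality(ballots):
    """Score only the top choice on each ballot."""
    return Counter(b[0] for b in ballots)

def borda(ballots):
    """Award n-1 points for first place, n-2 for second, ..., 0 for last."""
    scores = Counter()
    for b in ballots:
        n = len(b)
        for rank, option in enumerate(b):
            scores[option] += n - 1 - rank
    return scores

print(plurality(ballots))  # A leads with 4 first-place votes
print(borda(ballots))      # B leads with 12 Borda points (A: 8, C: 7)
```

    The same nine responses name A the winner under plurality scoring but B under Borda scoring—a small instance of the inconsistency Hazelrigg warned about, and of why the scoring rule must be chosen, and checked, deliberately.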

    Arnold B. Urken, professor of political science and associate dean for academic affairs at Stevens Institute of Technology, directs the Stevens Engineering Assessment Center. He can be reached at aurken@stevens-tech.edu.
