A Flawed Model?
SACS does not accept the use of grades as a legitimate assessment option for many reasons: a student's grade is often an inaccurate reflection of what the student knows or can do; test construction and grading methods vary widely from instructor to instructor; focusing on course grades does nothing to promote curriculum integrity; and most competencies outlined by ABET are not gained fully from individual courses.
The new ABET criteria emphasize documenting the existence of competencies (an outcome focus), not documenting that appropriate processes exist (a process focus), which is what the Aldridge and Benefield model advocates. What practicing engineer would design a machine to produce widgets and not check frequently to see that the widgets produced are within tolerance? The approach described assumes that a student completing the curriculum has the requisite competencies. However, the only way to know whether those competencies actually exist is to measure them directly. That, correctly, is the clear focus of both ABET and SACS.
Finally, the model presented is overly complicated and potentially confusing. The intent of the ABET criteria is actually quite simple: engineering graduates should have certain specific competencies, the faculty should determine expected and actual levels of performance on those competencies, and the school must then address the discrepancies with the goal of improving the actual performance.
Our overriding concern is the model's deviation from assessment methodology compatible with the SACS accreditation criteria, which we believe is mirrored in wording and intent by the new ABET criteria. With the adoption of EC 2000, we now tell engineering departments that if they meet ABET criteria, they will certainly meet the SACS criteria and avoid duplicating efforts. However, if schools actually follow the Aldridge/Benefield model, the result will be additional work for the colleges at a minimum and a loss of federal funding and course transferability in the worst case.
The authors respond:
Some have read the article as an authoritative statement about the details of assessment. That was not our intent. The model was intended only to provide faculty members and administrators a common base from which to plan the transition to the EC 2000 criteria, and to demonstrate how an engineering program's assessment processes should work with others across campus.
Research Costs and Rising Tuition Rates
There is little difference between the last two categories, and only a slight increase for the doctoral universities. What causes the big jump for the research universities? Is it better laboratories? Hardly. My experience is that the bigger the university, the more dismal the undergraduate laboratories. Smaller classes? Clearly not, since the big universities invented the 500-person lecture. Better teachers? No way. Tenure at research universities systematically weeds out the dedicated teachers and promotes the exceptional researchers who often care little about undergraduate instruction.
It seems to me that the only reasonable conclusion is that yes, undergraduate tuition does support research. That is not necessarily bad, of course. Students choose to attend research universities and pay high tuition because the university's reputation will help them get better jobs and the alumni network will serve them well throughout their careers. But at the very least don't we have to be honest, rather than ascribing the increase to some mysterious "common underlying dynamics," as the author of the NSF report suggests?

PRISM apologizes for the errors.