There’s a scene in The Wizard of Oz where the Wicked Witch sends her monkeys to tear the Scarecrow apart, leaving pieces of him strewn across the field. The Tin Man and the Lion arrive after the fact, and when the Scarecrow explains what happened, the Tin Man humorously remarks, “Well, that’s you all over…” This is what I think of when I hear people talk about having students use Web 2.0 tools to create ePortfolios for their classes. I can understand the desire, and even the pedagogy, behind instructing students in the building of an ePortfolio, but what happens is that after four years there are pieces of the student all over the web, with even less hope of putting those pieces back into one body than there was of putting the Scarecrow back together.
While I understand the value of having students use a program like LinkedIn to create an online resume and begin networking, such sites show only one side of the student and can be used only for very targeted activities. To remedy this, I think it is vitally important that institutions of learning begin offering all students access to an institutionally sponsored, comprehensive, student-centered ePortfolio program. Yes, that’s a lot of terms, but it is not impossible. Such a program would be student-centered and would include several other features: blogs, social networking components, public and private views, and the ability to export the contents should that become necessary.
ePortfolios have become an important educational tool, not just for program assessment and accreditation purposes, but because they showcase the “whole” student, because they allow for authentic assessment, and because they allow instructors to work with students in forming the student’s own brand. If we don’t offer a comprehensive ePortfolio solution that students and faculty want to use, they will go elsewhere and the various pieces will be scattered all over the place.
At a recent presentation, I surveyed the audience, asking how many of them had ever taken a survey in which there was at least one question they felt they couldn’t answer because they weren’t sure what was being asked. As you probably have guessed, every single person in the audience raised their hand. So I asked them the same question I’m asking you: if the individuals we are surveying have to guess what we are asking in order to answer, then is the data we gather reliable? And if the reliability doesn’t matter, then why are we even asking the question?
One of the ways this commonly occurs is when a question is worded in such a way that it actually asks more than one thing, and therefore requires more than one answer. For example: “Course material was up-to-date, well-organized, and presented in sufficient depth.” What if the course material was up-to-date but not well-organized? How can an individual respond accurately to this question? If they choose “neutral” or “agree” or “disagree,” what does that tell us? Of what benefit are the ratings to the instructor? If this item were rated poorly, would the instructor have any idea what change needs to be made?
Another type of question that can lead to misinterpretation is the ambiguous question: “How well did the course meet your expectations?” Unless this question is accompanied by a prior question asking “What were your expectations?”, we do not know what the answer tells us, and yet this appears on many course evaluation surveys. And just as in the first example, even if we did ask what the expectations were, the answer would only provide us with an average of how well they were met, rather than information about how well each of those expectations was met.
All assessment, even when it comes down to individual questions on a survey tool, needs to begin with a clear outline of objectives. The questions then need to be worded in such a way that they will accurately provide the information desired.