
Category Archives: Online Learning

The Importance of Editors for Ensuring Quality Courses

[Image: a note reading “Let’s eat Mom!” Caption: “What’s for dinner?”]

Few people would publish a book or paper without first having it reviewed by an editor, or at least by someone they trust to have a good handle on the language. Yet most colleges and universities publish online courses without a once-over by an editor. I realize that some of this is due to an incorrect interpretation of academic freedom, but I’m not going to discuss that in this article. The consequence of errors in a public publication falls primarily on the author (or journal); the consequence of errors in an online course falls primarily on the recipient: the student.

My experience in the field of course design and development is that while schools build teams of instructional designers and invest in multimedia programs to enhance their online courses, few have course editors on their teams. And while I would never claim to be capable of being an editor myself, my experience as a student in online courses, and as a course reviewer, is that many courses have significant errors in grammar, spelling, and syntax. In one of my graduate courses, the language was so bad it became a huge distraction. While I had genuine respect for this individual’s knowledge of the subject matter, it was clear the individual had great difficulty communicating clearly in writing. The sheer number of errors made some parts of the content almost unreadable, and the descriptions of various assignments extremely unclear. This problem persisted throughout the entire semester. A course editor could have made all of this better for everyone concerned: the professor and the students. This is far from the only course I’ve seen with these issues, though it was probably the worst.

Recently, I experienced the same problem in a MOOC offered by Stanford. The course had probably spent months in development. It was nicely designed, with high-end, entertaining video. But there were numerous blatant grammatical errors in both the audio and the text. A large portion of the forum posts complained about the language errors, some quite strongly. What made this particularly problematic was that many of the students were not native English speakers.

What are the negative consequences of poorly written courses? The first is that the course’s credibility suffers. Cognitively, poor writing increases the effort a student needs to process the information, sometimes significantly. In terms of Cognitive Load Theory, it increases extraneous load, which makes it more difficult for the student to learn the material; it directly harms learning. It can also give students incorrect information, and it can create problems with the assessment of student learning. For example, if a student misunderstands a poorly written question, or misunderstands what an assessment is asking the student to produce, then the student’s answer is an inaccurate measurement of what the student knows or is capable of doing. The assessment is invalid.

Often, we can use context to interpret a poorly written sentence correctly. For example, if we read a note that says “Let’s eat Mom”, we know it is not suggesting we eat Mom, but rather “Let’s eat, Mom”. However, when we have little familiarity with the content or only a limited understanding of it, grammatically incorrect writing may not be so easy to interpret correctly.

It is important for institutions to think of online courses as published materials, and to ensure they are reviewed by editors to the same degree the school would want all of its public-facing materials reviewed.


Cohorts in self-paced environments: do MOOCs make it easier?

On Friday, I had a conversation with a colleague who asked how you create cohorts in a self-paced course or program. After all, in most courses you have a group of students, theoretically all at the same place in the content, who should be able to carry on discussions and debates. Such a setup allows for the creation of a community of learners within the course structure, one of the circles in the Community of Inquiry model. But the traditional cohort (or should we say, the default cohort model) is not easy, and sometimes not even possible, to achieve in a competency-based, self-paced, or personalized learning environment.

The easiest method is to have a large enough base of students that you can group them according to where they are in the content. It would function like the gaming sites that match you with waiting opponents. The group can be as small as two students who work together throughout the course or, if the course is not linear, two students who happen to enter the “game room” (or discussion forum) at approximately the same time. The LMS could be set up to notify students when someone has entered and posted a comment, for example; most LMSs do this now, so students’ expectations of how this works would be the only thing that would need some tweaking.
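
As a rough sketch of how this kind of matchmaking might work, here is a minimal illustration in Python. The function names and the notification hook are hypothetical, not any real LMS’s API; a real implementation would hang off the LMS’s own events and messaging.

```python
from collections import defaultdict

GROUP_SIZE = 2  # a group can be as small as two students

# module_id -> list of students waiting for discussion partners
waiting = defaultdict(list)

def student_enters(module_id, student_id):
    """Queue a student who has just reached this module; form and
    notify a group once GROUP_SIZE students are waiting."""
    waiting[module_id].append(student_id)
    if len(waiting[module_id]) >= GROUP_SIZE:
        group = waiting[module_id][:GROUP_SIZE]
        del waiting[module_id][:GROUP_SIZE]
        notify(group, module_id)
        return group
    return None  # still waiting for a partner

def notify(group, module_id):
    # Placeholder: a real LMS would message or email each student.
    print(f"Module {module_id}: discussion group formed: {group}")

# Two students reach module 3 at approximately the same time.
student_enters(3, "student_a")  # queued, waiting
student_enters(3, "student_b")  # group of two formed, both notified
```

The same queue generalizes to larger discussion groups simply by raising GROUP_SIZE.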

But students can also create their own cohorts. The meetups that have developed around some of the MOOCs are one example of this. Cohorts are not the only way of creating a base of individuals with whom a student can interact and from whom they can gain insight. In reality, social networks can replace the cohort model. With a bit of research and/or guidance, there are many places in the authentic world where students can get feedback on ideas, thoughts, and assignments, and interact with individuals around a given topic. The added benefit is that doing so can grow their professional and personal networks.

There are many places where students can engage in a debate on a topic, post a comment, or submit their research for scrutiny. One site in particular comes to mind: PLOS ONE (http://www.plosone.org/) and all of its sister sites. Today, I found an article in PLOS Biology entitled “Left Brain, Right Brain: Facts and Fantasies”. It was published on January 21st (less than a month ago). Below are the statistics for this article. Note that there is an area for comments. And because it is published under an Open Access license, others can easily build on and contribute to this research.

PLOS Biology statistics for the article: 7,801 views; 917 PDF downloads

Manufactured cohorts are another option. These take creativity and time on the part of the instructional designer (ID) and/or subject matter expert (SME) to create. They will not work in a project-based environment, but they are possible in discussion forums, where students can make posts and respond to posts.

A bit of creative thinking can produce a highly effective and engaging course without the use of a traditional cohort. In many cases the traditional cohort model was not that effective anyway: students were only theoretically at the same point in the course, and points are often given for less-than-optimal participation. The construction really serves the schools, as it allows for easier grading (easier than assessing social networking assignments), easier construction of courses, and quicker ways of assessing the effectiveness of the given content, since everything is normed. What all of this is not, however, is authentic or experiential, and though such courses are often designed to use constructivist methodologies, their effect is often less than ideal; time is a factor that significantly impacts students’ ability to construct knowledge.


Types of Assessments: to get you thinking

Generally speaking, there are two types of assessments in learning: formative and summative. Formative assessments provide useful feedback to the student and are used to increase understanding. Summative assessments measure achievement. In traditional courses, students are given points for both formative and summative assessments. However, in true outcomes-based education, and especially in competency-based learning (CBL), formative assessments should not receive points: they do not measure mastery, they help build it. Formative assessments involve self-assessment and/or practice of competencies.

Let’s look at specific activities and assessments to see how this works in true outcomes-based (competency-based) learning.

Tests:
Q: Can a test be used to measure a competency?
A: Yes, with caveats. It cannot be an open-book test, and it must have a time limit. Students should not be able to change answers or be allowed multiple attempts if the test is measuring a competency. It is best to have each question on its own page and to disallow backtracking. Tests should be built from well-constructed questions designed to measure the competency.

Q: What kind of competencies can a test measure?
A: Tests are particularly good for measuring knowledge, such as being able to define terms. For example, a test fits a competency like: “The student will be able to identify the parts of a cell and their functions.”

Q: When do points apply to tests?
A: Tests and quizzes can be wonderful tools for self-evaluation, practice, and even for teaching content, especially if they include comprehensive feedback and (in the case of practice and teaching) allow multiple attempts. These formative uses should not have points associated with them. When tests are used to measure a competency, they should have points, and the points should reflect the weight and level of the competency being measured. Generally, there should also be an assignment somewhere in the course that measures the application of the knowledge the test covers; since that assessment measures both the knowledge and the ability to apply it, it should carry a greater weight. For example, if a recall test on cell structures were worth 10 points, the lab assignment in which students apply that knowledge might be worth 25.

Discussion Forums:

Q: Should discussion forums be eliminated from a CBL course?
A: That depends. The problem with the construction of most discussion forums is that they require at least a small cohort of students who can take part in the discussion. Depending on its construction, a forum can be used to measure a competency (summative) or to develop deeper understanding (a learning activity or formative assessment).

Q: How can a forum be used to measure competency?
A: Forums (or any social networking activity) require a great deal of thought in design, and generally a great deal of time and effort on the part of faculty in monitoring the discussion. Here is an example of a competency that might be measured using a forum: “Student demonstrates the ability to debate and argue a case…” The difficulty comes when courses are designed for self-paced learning: how does one debate if there is no one to debate with? We tackle that in the next question.

Q: Can a forum be used in a self-paced course?
A: Yes, with a great deal of planning and assistance from technology. For example, if the technology can work like gaming rooms, a student can enter the forum and “wait” for another student (or group of students) to “enter” the room. This requires a course that is not completely linear, so students can come back to the module when there are enough students in the forum to make it work efficiently.
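
A minimal sketch of that “gaming room” idea, again with hypothetical names rather than a real LMS API: students who reach the module register their interest and keep working, and everyone is called back once the room has enough participants.

```python
MIN_PARTICIPANTS = 3  # how many students make the debate worthwhile

# module_id -> set of students who have reached this module's forum
interest = {}

def register_interest(module_id, student_id):
    """Record that a student has reached the module; open the forum
    and call everyone back once enough students have registered."""
    room = interest.setdefault(module_id, set())
    room.add(student_id)
    if len(room) >= MIN_PARTICIPANTS:
        open_forum(module_id, room.copy())
        room.clear()

def open_forum(module_id, students):
    # Placeholder: the LMS would unlock the forum and message each
    # student to return to this module; this call-back is why the
    # course cannot be strictly linear.
    print(f"Forum {module_id} open; calling back: {sorted(students)}")

register_interest(7, "amy")
register_interest(7, "ben")  # not enough yet; both keep working ahead
register_interest(7, "cal")  # threshold reached; all three called back
```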

Research Papers:
Q: Can a research paper be used to demonstrate competency?
A: Yes, but again it depends on how the assignment is constructed and the competency it is measuring. In general, a research paper is a learning activity: the student dives deeply into a particular subject and learns about it. But if there is a competency for demonstrating the ability to find and cite sources, then that assignment would be a good method for assessing the competency. Generally speaking, though, the assignment should also include a presentation of the research with an extensive Q & A session in which students defend their conclusions, to ensure the student has mastered the material and not merely regurgitated it.

Homework:
Q: Should we assign points for homework?

A: Homework should never be used to measure an outcome or competency. Homework does not measure mastery; it is practice, and therefore should not have points associated with it. Homework should, however, receive feedback or be followed by a practice quiz that provides feedback (a check for understanding).

Projects, Scenarios, Simulations:
Q: We use these for learning activities, but can they also be used for measuring mastery?
A: Yes, projects, scenarios, and simulations can provide authentic (or close to authentic) summative assessments. With scenarios and simulations, there should be a small margin of error for demonstrating competency (or mastery). A scenario or simulation that was used for training purposes should not be reused as-is for assessing mastery; it should be changed somewhat. Also, while there might have been hints or other feedback provided during training, hints should not be allowed in the summative assessment. Projects should be constructed to mimic one a student would need to complete in the “real” world, with the same or similar expectations.

Interviews & Observations (live, video, and audio):
Q: What other methods can be used for assessing competency?
A: Interviews are excellent methods for assessing mastery-level learning, especially if the student is not given the questions beforehand and is not allowed to use notes. Observing students, particularly in clinical, classroom, or workplace environments, is also an excellent way of measuring competency.

Q: Can observations also be a useful tool for helping students gain competency?
A: Yes, observations can be an excellent tool for learning. When athletes or performers watch videos of themselves, they can observe where they need improvement and/or what is working well for them; it is an excellent tool for self-assessment. For example, having students record themselves giving a speech allows them to hear how many “ums” they say, as well as their cadence. When others use an observation to provide constructive feedback, it can also be an excellent learning tool.

Blogs and other forms of Journaling:

Q: What about blogs, reflections and personal journals–those are mostly for processing and learning, right?
A: Journals and blogs can be excellent tools for measuring certain types of competencies, especially when they are accompanied by an artifact of learning. For example, if a student in a language-acquisition program creates an audio recording of themselves speaking the language, and accompanies it with a reflection that includes what they are saying, why they chose it, how it is expressed culturally, and so on, you get a much better understanding of the student’s grasp of the language. Reflections and blog posts can also be powerful tools for assessing mastery gained in internships; for example, in a blog post the student can discuss their experiences and what they are learning. Tests cannot be designed to measure some things that only blog posts and personal reflections can reveal.

I hope you found this helpful. Feel free to post your suggestions in the comment section.

Objectives and Innovation

In the consultations I provide for various online programs, I’ve seen a particular problem over and over again with the integration of technology: educators begin integration with a focus on the technology itself. At conference after conference I hear educators talking about reaching students through new technologies, with, once again, the focus on the technology. I’d like to give some real-life examples of how this can be short-sighted and problematic. Two questions should guide the integration of any technology: what problem is it meant to solve, and/or what objective does it align with?

Plenty of instructors are adding mobile components to courses because “students want to use their cell phones” or because they read that mobile is an up-and-coming technology. What these educators do not understand is that students want to access their coursework using a mobile device instead of a computer or laptop. It does not mean they want you to create an assignment that requires the use of a mobile device! If you require students to have a mobile device for a particular course, then there had better be a measurable objective associated with that requirement. One example would be the need for geography majors to use GIS applications, a requirement tied directly to the program objectives for their careers.

Here are two examples of courses that required the use of mobile devices: one a good integration, one a poor integration. School A offered a course on Ethical Uses of Technology for Educators. This course required students to have a mobile device. The objectives associated with this requirement were developed to ensure that teachers became familiar with mobile devices and the unethical ways they could be used (deliberately or inadvertently) in a classroom. The students in this course were given activities that required them to test how easy it would be to use a cell phone in unethical ways, and to reflect on how this would impact the classroom. This is a good integration of mobile technologies. The second example is a poor one: the course was on American Music. There were four objectives in the course, all of which required an understanding of different aspects of music. A mobile component was added because the developers wanted an innovative course; the intent was to enable students to upload and download music on their handheld devices. Not a single one of the course objectives had any reason to require this skill, nor anything related to mobile technologies in general. The activity was included so that the professors could research whether students would use a mobile device. This reflects a very poor understanding of integration. To be clear, a better approach would have been to ensure the course was hosted on a site that could be accessed and interacted with via a mobile device.

Support for third-party applications can also become a problem relatively quickly; here I’m referring to unnecessary third-party programs such as the various Web 2.0 tools. Instructors get angry that the helpdesk can’t or won’t provide support for whatever application they choose to use, but there are thousands, if not millions, of them out there. At smaller colleges, where courses are taught exclusively by the faculty members who developed them, the instructor should make sure they are familiar with an application before requiring students to use it. At large colleges, where courses are developed by a team and taught by adjuncts, the problems are much bigger, and third-party applications need to be selected more carefully. Adjuncts assigned to teach the course may not be familiar with the program, may have their own favorites, and may not be willing to learn a third-party application simply because it is one the developer likes. It is also unfair to expect the helpdesk to learn them all and be prepared to assist students. Again, third-party apps should be chosen when they are needed to solve a particular problem, when the helpdesk is willing to support them, and/or when a particular course objective is tied to their use.

Access and course objectives should always be the first considerations. Activities and assessments need to be directly related to those objectives, and technologies should be chosen with both in mind. True student-centered teaching does not require particular technologies because they are cool, but because they will assist the student in achieving the course objectives and provide greater access; otherwise we may be putting undue demands on our students.

Here are some important questions that can help guide the integration of technology:
1. Is the addition of the required technology needed in order for students to achieve course objectives?
2. Will the technology decrease or impede access in any way?
3. Is support available for students who have difficulty with the technology?
4. Will learning to use the technology detract from students learning the required content of the course?

We all need to make our courses more collaborative and more engaging for our students. We also need to expose students to the various technologies they will encounter in the work environment. We just need to ensure that the technologies we choose help, not hinder, learning.

Student Feedback for Assessment

About a year ago I was working with an institution on its adoption of an online course evaluation tool. It was a rather long process that included gathering information on options and associated costs, piloting the program, responding to faculty concerns about how the surveys would be used, setting up the tool so that the surveys would go out to students by email, developing the survey instrument itself (the questions), and then analyzing the results to determine whether the questions did, in fact, give us the type of information we were looking for.

Sadly, one of the greatest hurdles was the reluctance of faculty to have a survey at all. I say “sadly” because such surveys can be invaluable for an instructor, a course designer, and the institution itself. If nothing else, well-constructed surveys can help determine whether a program has redundancies, whether students have difficulty navigating the course, or whether students are unclear about the learning objectives.

Course surveys should occur, at the very least, at the end of a course. Even though information gathered at that point comes too late to make the course better for those students, it can improve the course for future students, and perhaps even improve the program as a whole. Too many instructors believe that students do not know enough to determine whether there were clear objectives, whether those objectives were met, whether there were unnecessary redundancies, or whether the work assigned was too much (or too little) for a three-credit course. My experience has taught me that students do know, and we should ask them.

The ideal would be to survey students at the very beginning of a course, at midterm, and at the end. The purpose of the beginning survey is to determine what students may already know or have experienced, and what they expect from the class. The midterm survey determines whether the course is meeting its goals and whether students are struggling with a particular part of it; the benefit is that necessary changes can be made before the course is over and it is too late. We’ve already discussed the final evaluation.

If the questions are framed well, a survey can even prompt students to reflect on their learning, to think about what they’ve done and learned; it can be used to develop metacognitive skills. There is a free tool available to assist with the development of these surveys: the Student Assessment of Learning Gains site, at http://www.salgsite.org/. You can use its wizard to create your own surveys, and you can browse a library of surveys to get an idea of what other instructors are asking.

Assessing what we are doing, all along the way, is an important part of ensuring we are meeting our students’ needs and expectations.