Keep track of what works well and what doesn't

In formal education, evaluation and research are considered essential strategies for keeping track of how well innovations work, in order to guide decisions about whether to continue a program, expand it, or apply it in other situations, and under what conditions. Although evaluation has often been portrayed as the domain of experts, it is basically just well-organized observation of experience.

Community-based research strategies enable people with direct experience of a situation to record their observations and compile a picture that can be used to support decision making.70 Learners, instructors and program coordinators obviously have valuable experience to contribute to building a picture of particular programs and technologies, but so do others in the community. For example, those who could not participate in a program because of cost or access issues also have something important to say, and their story is more likely to be included in a "community snapshot" developed by community members than in official institutional statistics, which count only those who do participate.

Champion good examples

Keeping track of what works well and what doesn't can lead to the discovery of some very good examples of how technologies have been used to support learning that is appropriate, accessible and meaningful. These may or may not use the latest technologies, but they demonstrate a good application, one from which lessons can be learned and that may serve as a model or provide a framework that can be adapted to other situations. They demonstrate sustainability, continuing on an affordable and manageable basis even after any start-up funding is no longer available. They meet stated evaluation standards and address a defined learning need.

Documenting these examples and championing them achieves a number of goals. It can give a program visibility, so that it is less likely to be subject to cutbacks or elimination as long as it continues to serve its purpose. It can provide positive feedback to planners and participants and build their confidence, encouraging them to undertake other initiatives. And it can illustrate what can be done with a given technology; by confirming the value of that technology for certain applications, it can help support the technology's sustainability.

For example, recent reports have (once again) predicted the phase-out of audiotape, one of the most useful, user-friendly and affordable technologies for learning. It is the one technology other than print that people can use themselves to develop learning materials in almost any situation. It is essential in language learning, and it can be an important part of many other learning applications: learning technical skills, where people are talked through an activity, or subjects where the voice itself is central, such as counselling, music or speech. It has many applications in simple learning packages. Documenting some of these applications might serve to challenge the move to eliminate audiotape, but it would take a very concerted effort, and probably a demonstration that maintaining the technology offers profit as well as social benefit.

Analyze the bad examples

This is the other sequel to finding out what works and what doesn't. Simply observing that something didn't work well is only the beginning. The next step is to ask questions that pinpoint why it didn't work, so that the example can inform future decision making. It is useful to find out whether the problem was intrinsic to the organizational structure or the technology, or whether it was the result of circumstances that could be different next time.

Some examples of questions to ask:

- Why was the technology used in this case?
- Was the program well planned, in consultation with everyone involved, including instructors, learners and managers?
- Was there enough lead time to put it in place?
- Was the technology appropriate for this context, and for this type of learning?
- Was the technology reliable?
- Did instructors or learners have difficulty using the technology?
- Could some of the negative outcomes be reduced or eliminated, for example by using different strategies or by taking more time?
- Could some of the positive outcomes serve as a lesson for future action?
