In the last four posts, I’ve talked about why (and why not) to assess your language proficiencies, and what skills to assess. This time, we’ll look at the all-important question of how to actually do the assessment.
Start with the broad categories discussed in the last post—listening comprehension, speaking, and conversational ability (plus reading and writing, if you're working in a commonly written language)—and choose activities that allow the learner to show his or her skills in each of those categories.
For the listening comprehension category, for example, you might choose a set of recordings to play for the learner, and ask him or her to summarize what he or she has understood.
For speaking, you might have a list of prompts asking for descriptions, stories, or full persuasive arguments. For conversational ability, you might have a set of role plays, especially ones that require the learner and native speaker to come to some agreement or solution.
How do you use these activities? Have a native speaker of the language in question meet with the learner and work through some of them. If they're too hard, drop down to easier ones; if they're too easy, jump up to harder ones. This technique, called "spiraling," is commonly used in the Oral Proficiency Interview designed by ACTFL (the American Council on the Teaching of Foreign Languages). The idea is that you continue to "push" someone's ability in a certain category until it's clear he or she is no longer capable of handling a task.
So you keep playing harder and harder recordings until the learner can’t understand them very well, or the native speaker keeps asking harder and harder questions until the learner is unable to respond appropriately. This will give you, the assessor, a clearer idea of the limits of the learner’s proficiencies.
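For readers who think in code, the core of spiraling can be expressed as a simple loop: keep raising the difficulty until the learner fails, then report the highest level handled. This is only an illustrative sketch—the level numbers and the `perform_task` function are hypothetical, not part of any official ACTFL procedure.

```python
def spiral_assess(perform_task, levels):
    """Return the hardest level the learner handled, or None.

    perform_task(level) -> True if the learner coped with a task
    at that difficulty. `levels` is ordered easiest to hardest.
    """
    highest_passed = None
    for level in levels:
        if perform_task(level):
            highest_passed = level  # learner coped; push harder
        else:
            break                   # limit found; stop pushing
    return highest_passed

# Example: a learner who handles recordings up to difficulty 3
result = spiral_assess(lambda lvl: lvl <= 3, levels=[1, 2, 3, 4, 5])
# result == 3
```

In a real interview the assessor also spirals back down to confirm the learner's comfortable level, but the stopping logic is the same: the limit is where performance breaks down.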
Make sure, however, that you’re not just looking for overall capabilities within each of the three (or five) categories. Look at the sub-skills (I listed sixteen in the last post) and pay attention to when these skills are evidenced (or not) in the assessment process.
If you're unclear on certain sub-skills, have activities ready that isolate those proficiencies. For example, if you want to test the learner's ability to understand complex words (words built from many parts), you can bring a text or recording, read by the native speaker, that contains a lot of those words, and see how well the learner copes.
So that’s the general idea of how an assessment would go. Note a few things:
- First, the assessment is designed to use natural language in an interactive environment. You don't want to assess a learner's ability in an unnatural setting he or she would never actually encounter. For example, don't rely strictly on recordings—there should be interaction with a native speaker. And don't (in most cases) let the learner speak for minutes on end without interruption; that rarely happens in real life. In other words, make the assessment as close to real life as possible, even while calibrating it to reveal proficiencies as clearly as possible.
- Second, don't let learners "study for the test." That is, don't choose topics they know well and base all your activities around those. Choose a broad range of topics, drawn from a wide range of domains of life, so that you get a better picture of the learner's overall language ability.
- Third, make sure that the assessment is encouraging and motivating for the learner. As I mentioned in the first post, if an assessment doesn’t encourage the learner to continue on and have hope for future growth, it’s not worth doing. Make sure that, while you push the learner’s ability and make him or her do quite a bit with language over the course of the assessment, you are doing it all with a spirit of encouragement and empathy.
Next time I’ll discuss a few other issues related to how to do an assessment. But now you have the basic idea.