
Dynamic Assessment as a Teaching Tool
Assessment for learning - and learning from assessment
by Erica Garb, PhD


Abstract:

Dr. Garb discusses why we test, in the context of our present use of summative and formative assessments (which she defines) and the subsequent widespread practice of "teaching to the test". She then offers an alternative, dynamic assessment, explains how to use it, and reviews what research says about its impact on learning.

Why do we test?

Most teachers spend a considerable amount of classroom time on assessment: giving and checking tests, and 'going over' the tests with their students. Of all classroom activities, testing is possibly the one that pupils and teachers take most seriously. So why do we test?

Summative and formative assessment

Summative assessment is the attempt to summarize student learning at some point in time, say the end of a course. There's a lot to be said for these summative tests. They provide a snapshot of a school system, and can be easily compared to previous years. High quality summative information can, of course, shape how teachers organize their courses or what schools offer their students.

However, they are not designed to provide the immediate, contextualized feedback that helps teacher and student during the learning process. Moreover, summative tests compare students with each other, so that their prime purpose appears to students to be competition rather than personal improvement. This leads pupils to look for ways to obtain the best marks rather than to improve their learning. One reported consequence is that, when they have any choice, pupils avoid difficult learning tasks and spend time and energy looking for clues to the 'right answer.'

Teachers are often able to predict pupils' results on external summative tests because their own tests imitate them. But at the same time, teachers know too little about their pupils' individual learning needs.

By contrast, formative assessment occurs when teachers feed information back to students in ways that enable the students to learn better, or when students engage in a similar, self-reflective process. If the primary purpose of assessment is to support high-quality learning, then formative assessment ought to be understood as the most important assessment practice.

"Formative assessment is at the heart of effective teaching We start from the self-evident proposition that teaching and learning must be interactive. Teachers need to know about their pupils' progress and difficulties with learning so that they can adapt their own work to meet pupils' needs -- needs that are often unpredictable and that vary from one pupil to another."( Black and William (1998b)
http://www.pdkintl.org/kappan/kbla9810.htm
  • Feedback given as part of formative assessment helps learners become aware of any gaps that exist between their desired goal and their current knowledge, understanding, or skill, and guides them through actions necessary to obtain the goal (Ramaprasad, 1983; Sadler, 1989).
  • The most helpful type of feedback encourages students to focus their attention thoughtfully on the task rather than on simply getting the right answer (Bangert-Drowns, Kulik, & Morgan, 1991; Elawar & Corno, 1985).
  • This type of feedback may be particularly helpful, because it emphasizes that students can improve as a result of effort, and specifically shows pupils how to go about achieving that improvement.
  • Formative assessment helps support the expectation that all children can learn to high levels, and counteracts the cycle in which students attribute poor performance to their own lack of ability and therefore become discouraged and unwilling to invest in further learning (Ames, 1992; Vispoel & Austin, 1995).
Black and Wiliam compared the average improvements in test scores of students involved in an innovation to strengthen formative assessment with the range of scores found for typical groups of students on the same tests, including end-of-year summative tests. They found that the innovation produced significant learning gains, and concluded that high-quality formative assessment does indeed have a powerful impact on student learning.

"Teaching to the test"

Neither formative nor summative testing alone meets the needs and goals of public schools. The aim should be a combination of low-stakes, ongoing formative assessment that guides teaching and learning, tied tightly to both the curriculum and the state's high-stakes summative test. However, the public pressure on students, teachers, principals, school superintendents and inspectors to raise scores on high-stakes tests (such as 'bagrut scores') is tremendous. This has led, worldwide, to an almost irresistible temptation to tailor instruction only, or mainly, to that which will be tested, a phenomenon widely known in the literature as 'teaching to the test'. Indeed, in many classrooms, instruction is synonymous with preparing students for the final (summative) bagrut exam. It is this teaching to the final test that has become the focus of much of the 'teaching to the test' criticism.

A recurring criticism of tests used in high-stakes decision-making (such as entry to college, university, or prestigious faculties) is that they distort instruction by forcing teachers, whether they want to or not, to teach to the test. For example, Herman (1992) states that "time spent on test-taking often neglects higher-order thinking skills." Research suggests that "while student scores will rise when teachers teach closely to the test, learning often does not change" (Shepard, 2000; Smith and Fey, 2000). Specifically with regard to reading, Neill (2003a) reported cases where pupils had been "taught to read by learning to look at the answer options to questions and then search the passage to find the clue to selecting the correct answer. Independent evaluators found that these pupils could not explain what they had just read even though they got the test item correct." The implication is that there may be a significant number of test-wise students who lack the basic skills needed to succeed in higher education. In addition, it seems logical to assume that teaching to the test gives students a skewed measure of their ability and a false sense of security.

Is teaching to the test all that bad?

From a different perspective, instructing pupils on anything other than the actual test seems illogical. Boser (2000), for example, reflects what many teachers say: "Our mandate is to get our kids through the exam." Boser states categorically: "States should delineate what students should know and be able to do, teachers should match instruction to those standards, and state tests should measure how well students meet those expectations."

If we don't 'teach to the test', what should we do?

'Teaching to the test' stands in sharp contrast with the priority attached to the value of 'learning to learn', one of the key indicators in the European Union report on the quality of school education (EU, 2000), and implicit in the desire to foster the development of problem-solving and thinking skills. "Classrooms in which 'learning to learn' takes priority over 'I have learned what I need to know for the exam' are positive learning environments. Learning is at its most effective when learners are actively involved in and take responsibility for their learning" (Freeman, 2001). According to Freeman (and the EU), the purpose of education today is to produce autonomous life-long learners, and the emphasis should be placed on assessing not only pupils' ability to acquire information and skills, but also their ability to transfer and use information, skills, and thinking and problem-solving strategies in a wide and flexible range of situations.

In short, it is by concentrating on the process of learning, and on our pupils' conscious engagement in that process, that teachers can facilitate the acquisition of effective learning skills for the 21st century.

Dynamic Assessment: a merger between learning and testing

Instead of bemoaning the understandable inclination of teachers to teach to the test, and the understandable inclination of students to take this seriously, we should take advantage of these inclinations. Using the Dynamic Assessment model, which I discuss below, we can construct tests that integrate instruction and assessment, merging the two so that they become virtually indistinguishable.

Dynamic Assessment of EFL text comprehension

At the end of the year, we conduct a summative assessment in the form of matkonet exams, mock exams based on the ubiquitous 'unseens', which are modeled on previous exams. Often, in order to accustom our pupils to the exam format, we use the same kind of assessments during the course of the year as well, as a form of ongoing assessment. However, student feedback has indicated that using this form of assessment during the learning process presents a number of problems.

Problems. Students often complain that:
  • The tests do not assess what they have learned in class. By their very nature, they often contain elements that have not been studied - after all, they are 'unseen'.
  • In class, students are introduced to many different strategies for reading and answering questions. When faced with an unseen, they have difficulty recognizing which strategies to use, and where to use them. That is, should they apply all of these strategies simultaneously, or choose the relevant strategies and adapt them to the specific text or question?
  • Each unseen deals with different content, a different level and different vocabulary, and requires different strategies and different background knowledge.
  • Sometimes the test is beyond their own particular level, especially at the beginning of the semester, which saps their self-confidence and drains their motivation.
  • They have no way of assessing their own learning processes except through one inclusive mark.
  • They often feel that although they passed the test, they did not understand the text.


The dynamic assessment model. How does it work?

The model consists of a pre-test, mediation (the heart of the process), a brief period for revision at home (an "information page" is constructed for students to take home), and a post-test. The construction of each of these sections can be tailor-made by teachers to suit their individual classes. Briefly, the dynamic assessment (DA) model uses all the same test items that the teacher wishes to assess in a static assessment, but scaffolds them in order to address the problems mentioned above.

The role of the evaluator is to identify the pupils' problems during the pre-test and to provide the necessary mediation during the learning phase (mediation). Items on the post-test are identical to those of the pre-test in level, background knowledge, grammatical structures, new terminology, and required strategies, but differ in content.

The goal of dynamic assessment is not only to measure a pupil's current performance but, more important, to reveal the pupil's learning potential: the extent to which he or she is able to absorb and integrate instruction during the mediation process. This enables teachers to formulate an optimal educational intervention for each pupil (Kozulin and Falik, 1995). At the same time, the assessment can also be used to teach strategies for answering the kind of questions that pupils will encounter in final, summative tests.

Example of items on a dynamic assessment

Let us say, for example, that you have taught and now want to test (and reinforce) personal pronouns, question words, auxiliary verbs, and negatives. At the same time, you want to introduce strategies for dealing with multiple-choice questions.

You might give something like the following example as your pre-test.


Example:
Where are you going?
a. I am not at home.
b. You are going to the post office.
c. I am going home.
d. She is going home.

1. What is his name?
a. My name is Tom.
b. His name is Ron.
c. He lives in Jerusalem.
d. He is a boy.

2. Are you happy?
a. No, they are not.
b. No, he is not.
c. Yes, I am sad.
d. Yes, I am.

3. Where do Benny and Dan study?
a. They study in Tel Aviv.
b. She studies in Tel Aviv.
c. They are in the classroom.
d. They are in Tel Aviv University.

4. When did he come to Israel?
a. tomorrow
b. still
c. yesterday
d. home

5. How often does he play tennis?
a. He played last week.
b. He is playing.
c. He plays every day.
d. He is playing at 4 o'clock.

6. Why did you come to this class?
a. I wanted to learn English.
b. I came here.
c. I have not come.
d. Because I didn't want to come.

Methodology

  1. Pre-test: The pre-test is given to students as an ordinary, static test, except for the following preamble. (Note to teachers: Number each of the tests, and give each student the same number on their pre- and post-tests. This will make your life easier when you compare the tests.)

    Instructions (in Hebrew or English): You are going to do a test in three parts, so that we can see what you know about personal pronouns, question words, auxiliary verbs, and negatives. We also want to see how you think, and how you plan your work. This will help us to plan the best possible programme for you for this year. It is very important that you come to all three lessons. First you will do a test, then we will discuss the test with you and show you the best strategies for doing such tests, and then we will give you another test to see how much you learned from the discussion.

    Mediation of the example. Let's begin. Look at the example. We are going to use our first strategy: Look for clues.

    1. Look for clues in the question. The question is, Where are you going? The question is addressed to "you". This is your first clue. What will the answer begin with? That's right, with "I". Only a) and c) begin with "I", so we can immediately eliminate b) and d).

    2. Elimination is your second strategy. You eliminate all the answers that will not fit.

    3. The next clue is the verb, "going".

    4. Your next strategy is comparison. Compare the two answers that remain. a) does not tell us about "going", so the answer must be c). Does this answer make sense? Yes. Please mark the clues "you" and "going" with your highlighters, and when you do the test, please highlight the clues that you find.
      • Are there any questions?
      • What other words can we use as clues? (Pupils might answer "where"). That's a good idea.
      • What will the answer be if the question is "Where?" That's right, a place.
      • Look at the answers. All of the answers mention a place. Sometimes the question word is a good clue, but here it does not help us much.
      Any questions? Please begin.


    Note to teachers: During the course of the test, do not allow any questions. Tell pupils that understanding the questions is part of the test. Gently tell students to look carefully, do the best they can, and if something is difficult, to go on to the next section. Advise them to do whatever is easiest for them first, and then come back later to more difficult questions if they have time.

    The students do the test, and the teacher marks the papers and records the scores in preparation for the mediation stage, but does not record the marks or correct/incorrect answers on the students' test papers. The tests should be marked immediately, and the mediation given at the next lesson.

  2. Mediation
    Mediation is divided into two categories.
    (a) Knowledge required. This can include grammatical or lexical information, as in the information page below, or, with regard to reading comprehension, information about paragraph structure, awareness of metaphorical language, or background knowledge. Some of your pupils may lack this information; others may have it but fail to use it when required. Mediation therefore also focuses on activating their knowledge through (b) Strategies, which provide the students with a plan for applying what they know.

    Hand students back their tests, with no correct or incorrect answers indicated (uproar!), together with the information handout.


INFORMATION PAGE FOR STUDENTS TO STUDY AT HOME

  1. QUESTION WORDS
    what
    where
    when
    why
    who
    how
    how many
    how much
    how often
    how old

  2. PERSONAL PRONOUNS
    I
    you
    he
    she
    it
    we
    they
    my
    your
    his
    her
    its
    our
    their

  3. VERBS THAT CAN ALSO BE HELPING VERBS

    Verb      Base   Present         Past         Future
    to be     be     am / is / are   was / were   will be
    to do     do     do / does       did          will do
    to have   have   have / has      had          will have

  4. NEGATIVES
    am not, is not, was not, were not, will not be (isn't, wasn't, etc.)
    do not, does not, did not, will not do (don't, doesn't, didn't)
    have not, has not, had not (haven't, hasn't, hadn't)

We are now going to see how to do this test. I have marked your work, and now you can mark your own work and see how you did.

Mediation before the post-test
The teacher briefly discusses the contents of the information page with the students, advising them to write translations of words they do not know.

Instructions: We are now going to see how we can use our strategies to do this test. Do you remember which strategies we used?
Look for clues
Elimination
Comparison

Write them down on your information sheet.

Now look at question 1.

  • Who wrote a)?
  • Why? What were your clues?
  • Who wrote b)? Why? etc.
(The teacher does not give the correct answer but discusses with the students their reasoning and method of answering the questions, to arrive at a consensus.)

Q-1:

  • "What is his name?"
  • What is your first clue? The first clue is "his".
  • What can we eliminate?
  • What other clues can we use? Question word - What. The question is about an object (a name), therefore the answer should point to the object (a name). Pronoun - his, possessive, male, e.g. "His name is Ron", etc.

Q-2:

  • The first clue is the pronoun - you.
  • If the question is addressed to "you", the answer must begin with "I". Eliminate a) and b); compare c) and d).
  • If the question is "Are you happy?", you cannot say "Yes, I am sad" - this is a contradiction.

Q-3: "Where do Benny and Dan study?"

  • The question is about location (Where?), therefore the answer should include a location (Tel Aviv).
    But all the answers include a location.
  • The next clue is Benny and Dan, two people, therefore the pronoun must be plural (They). Eliminate b).
  • The verb is "study". We need: They - study - and a place. Only a) has all three.

Q-4. "When did he come to Israel?"
The clue is "did". The question is about time in the past, therefore the answer should include a time frame in the past. Only one answer indicates the past. e.g. "yesterday"

Q-5: "How often does he play tennis?" The question is about frequency, therefore the answer should include indication of how many times something happens, e.g. "He plays tennis twice a week."

Q-6: "Why did you come to this class?"

  • The question is about the cause of the event, therefore the answer should include an explanation, a reason.
  • The word "because" (d) might indicate an explanation, but look carefully: "Why did you come". The answer cannot be "Because I didn't want to come." This is contradictory. Therefore only a) gives a logical reason.

NOTE: Make sure that after the mediation you get back ALL the tests. You can easily check according to the numbers.
DO NOT ALLOW STUDENTS TO TAKE THESE TESTS HOME. Students take home the information page to prepare for the post-test, which should be given at the next lesson.
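
For readers who like to see the logic spelled out, the following minimal Python sketch (my illustration, not part of the classroom procedure) applies the three mediation strategies, looking for clues, elimination and comparison, to the example question:

    # A toy sketch of the three strategies taught in the mediation.
    # It "answers" the example question mechanically, the way pupils
    # are taught to reason. Illustration only, not from the article.

    options = {
        "a": "I am not at home.",
        "b": "You are going to the post office.",
        "c": "I am going home.",
        "d": "She is going home.",
    }

    # Strategy 1: look for clues. "Where are you going?" is addressed
    # to "you", so the answer must begin with "I".
    remaining = {k: v for k, v in options.items() if v.startswith("I ")}

    # Strategy 2: elimination. Options b) and d) are now gone.
    # Strategy 3: comparison. The next clue is the verb "going".
    remaining = {k: v for k, v in remaining.items() if "going" in v}

    print(remaining)  # {'c': 'I am going home.'}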

Post-test

(Given without any mediation - no dictionaries - no questions)
Example

Where are you going?
a. I am not at home.
b. You are going to the post office.
c. I am going home.
d. She is going home.

1. What is her name?
a. My name is Sarah.
b. She is a girl.
c. She lives in Jerusalem.
d. Her name is Roni.

2. When did he come to Israel?
a. tomorrow
b. home
c. yesterday
d. still

3. Are you American?
a. Yes, I am Israeli.
b. No, he is not.
c. No, they are not.
d. Yes, I am.

4. How often does she phone her mother?
a. She phones every day.
b. She is phoning.
c. She phoned last week.
d. She is phoning at 4.

5. Where do Sue and Jane live?
a. They are in school.
b. She lives in Tel Aviv.
c. They live in Tel Aviv.
d. They are in the house.

6. Why did you go to the hospital?
a. I went to the hospital.
b. I was in a car accident.
c. Because I didn't go.
d. I haven't been.

After the post-test

Students now get back their pre- and post-tests and can assess their improvement.
Since pupils are assessed on their improvement rate, they are competing not against others, but against themselves.
DA (Dynamic Assessment) distinguishes between information and strategies, and the post-test is limited to the items that have been taught in the mediation session. This enables pupils to take control of, and responsibility for, their own improvement, since they know exactly which strategies and information they will be required to apply. It also enables the teacher to supply, if necessary, additional material to those students whose improvement was limited.
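
Because each pupil keeps the same test number on the pre- and post-test (see the note to teachers in the Methodology section), pairing the two scores and computing each pupil's gain is simple book-keeping. Here is a minimal Python sketch, assuming scores are recorded as percentages; the data reuses the four students (T., H., L. and A.) reported in the research section below, and all variable names are illustrative:

    # Pair pre- and post-test scores by test number and report the gain,
    # i.e. each pupil's improvement against his or her own earlier result.
    # Sketch only; the numbers reuse students T., H., L. and A. below.

    pre_scores = {1: 29, 2: 29, 3: 62, 4: 62}    # test number -> pre-test %
    post_scores = {1: 59, 2: 38, 3: 65, 4: 82}   # test number -> post-test %

    for number in sorted(pre_scores):
        pre, post = pre_scores[number], post_scores[number]
        print(f"Test {number}: pre {pre}%, post {post}%, gain {post - pre:+d} points")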

The same formula can be applied to any material, including reading strategies on increasingly more sophisticated levels.

Theoretical Background: the development of DA.

As early as 1934, the Swiss psychologist Andre Rey proposed basing the evaluation of students' abilities on directly observable learning processes. The concept of learning potential assessment was further developed by Vygotsky (1934/1986) and Feuerstein in the field of psychology, to assess cognitive functions (see also Minick, 1987, and Kozulin, 1998). While the results of static testing (which assesses current performance levels) show us the already existing abilities of the student, DA allows us to evaluate the student's ability to learn from interaction with a teacher. This learning ability may serve as a better predictor of the student's educational needs than static scores.

It is only recently that the DA concept has been adopted (and adapted) for use in domains that depend on the use of cognitive strategies (thus far, physics, astronomy, mathematics and reading comprehension).

How do we know it works? Well, there has been some promising research. A recent study (Vollmeyer & Rheinberg, 2002) showed not only that it works, but also produced an additional, surprising result. Using the DA format to test learning in physics, the researchers predicted that mediated feedback would affect both motivation and performance. What they found instead was that learners who were told that they would receive explicit feedback on their use of strategies used better and more systematic strategies (compared to a previous static assessment) even before the mediation stage had begun. In other words, the mere expectation of strategy feedback led to deeper processing of the learning material.

With specific relation to DA of text comprehension, a model has been widely tested in Israel (Kozulin & Garb, 2002) with a variety of student populations (including native-born Israelis and immigrants).

The results indicate that the procedure is both feasible and effective in obtaining information on students' learning potential. All students benefited in varying degrees from the mediation and were able to apply the acquired strategies to the new texts, but the DA test also pinpointed those students who needed extra help with specific strategies. It confirmed that students with similar static performance levels demonstrate different, and in some cases dramatically different, abilities to learn and use new text comprehension strategies. The findings confirm the practical value of the EFL dynamic assessment procedure, showing that it provides in-depth information about the different learning needs of students.

At the same time, it can be integrated into the learning process as part of classroom instruction.

For example, a number of students with identical pre-test scores performed very differently on the post-test. Students T. and H., for instance, both had 29% correct answers on the pre-test, but after mediation T. scored 59%, while H. scored only 38%. The same was true of initially higher-achieving students: L. and A. both received 62% on the pre-test, but on the post-test A. improved her result to 82%, while L. remained at 65%. These findings enabled the teacher to provide A. with enriched materials, knowing that her learning strategies enabled her to benefit from them. L., on the other hand, needed some extra coaching in relevant strategies, despite her high pre-test score.

The instructional value of a dynamic EFL assessment lies in the fact that although DA is given as a group assessment, its results can be used to develop individual learning plans for students with different learning needs. For example, work with students who demonstrated an average pre-test performance but insufficient learning potential should focus on providing them with learning and information-processing strategies, i.e. teaching them "how to learn". Students with an average pre-test performance and high learning potential should be given more challenging material and more opportunity for independent study. Students with low pre-test performance and low learning potential need intensive investment in their general learning and problem-solving skills, based on very simple EFL material. Only after these students acquire the basic learning skills should they be challenged with standard EFL tasks. A sketch of this decision logic follows below.
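
To make the mapping from assessment results to intervention concrete, here is a small Python sketch of the decision logic just described. The numeric cut-offs for "average" performance and "high" learning potential are my illustrative assumptions; the article does not fix thresholds, and a teacher would set them to suit the class:

    # A sketch of the intervention matrix described above, not a prescription.
    # The cut-offs are assumptions for illustration; the article gives none.

    def suggest_intervention(pre_score: int, gain: int) -> str:
        """Map pre-test performance and learning potential (the pre-to-post
        gain after mediation) to the teaching responses outlined above."""
        average_pre = pre_score >= 50   # assumed cut-off for "average" performance
        high_potential = gain >= 15     # assumed cut-off for "high" potential

        if average_pre and high_potential:
            return "more challenging material and independent study"
        if average_pre and not high_potential:
            return "learning and information-processing strategies ('how to learn')"
        if not average_pre and not high_potential:
            return "basic learning and problem-solving skills on very simple material"
        # Low pre-test but high potential is not discussed in the article.
        return "not covered in the article; use teacher judgement"

    # L. and A. had the same pre-test score but very different gains:
    print(suggest_intervention(62, 3))    # L.: 62% -> 65%
    print(suggest_intervention(62, 20))   # A.: 62% -> 82%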

Conclusion

DA provides us with a model of how formative assessment can be integrated into the learning process and combined with the goals of summative assessment. In other words, "testing and teaching are not separate entities" (Rudman, 1989).

Endnote: For teachers who are interested in seeing further models of DA assessment of EFL reading comprehension, I would be happy to supply examples. Contact me at erica2@netvision.net.il

For suggestions on using formative assessment, useful sites include http://www.ericdigests.org/2003-3/concept.htm and http://www.umanitoba.ca/publications/cjeap/articles/volante.html

References
Ames, C. (1992). Classrooms: Goals, structures, and student motivation. Journal of Educational Psychology, 84 (3): 261-271.

Angelo, T.A., and Cross, K.P. (1993). Classroom Assessment Techniques: A Handbook for College Teachers. San Francisco: Jossey-Bass.

Baker, L.B., and Brown, A.L. (1984a). Cognitive monitoring in reading. In J. Flood (Ed.), Understanding Reading Comprehension: Cognition, Language, and the Structure of Prose (pp. 21-44). Newark, DE: International Reading Association.

Baker, L.B., and Brown, A.L. (1984b). Metacognitive skills and reading. In P.D. Pearson (Ed.), Handbook of Reading Research (pp. 353-394). New York: Longman.

Bangert-Drowns, R.L., Kulik, J.A., and Morgan, M.T. (1991). The instructional effect of feedback in test-like events. Review of Educational Research, 61 (2): 213-238.

Black, P., and Wiliam, D. (1998a). Assessment and classroom learning. Assessment in Education, 5 (1): 7-74.

Black, P., and Wiliam, D. (1998b). Inside the black box: Raising standards through classroom assessment. Phi Delta Kappan, 80 (2): 139-148. (Available online: http://www.pdkintl.org/kappan/kbla9810.htm)

Boser, U. (2000). Teaching to the test? Education Week 19 (39), pp. 1, 10.

Elawar, M.C., and Corno, L. (1985). A factorial experiment in teachers' written feedback on student homework: Changing teacher behaviour a little rather than a lot. Journal of Educational Psychology, 77 (2): 162-173.

Freeman, E., Holmes, B., and Tangney, B. (2001). Teaching to the test: The impact of assessment on teaching and learning. Proceedings of the 12th International Conference of the Society for Information Technology and Teacher Education, Charlottesville, VA, USA.

Herman, J.L., Aschbacher, P.R., and Winters, L. (1992). A Practical Guide to Alternative Assessment. Alexandria, VA: Association for Supervision and Curriculum Development.

Kozulin, A. (1998). Psychological Tools: A Sociocultural Approach to Education. Cambridge, MA: Harvard University Press.

Kozulin, A., and Garb, E. (2002). Dynamic assessment of EFL text comprehension. School Psychology International, 23 (1): 112-127.

Kozulin, A. and Falik, L. (1995). Dynamic cognitive assessment of the child. Current Directions in Psychological Science, 4: 192-196.

Minick, N. (1987). Implications of Vygotsky's theory for dynamic assessment. In C. Lidz (Ed.), Dynamic Assessment (pp. 116-140). New York: Guilford Press.

Neill, M. (2003a). High stakes, high risk: The dangerous consequences of high-stakes testing. American School Board Journal, 190 (2): 18-21.

Ramaprasad, A. (1983). On the definition of feedback. Behavioral Science, 28 (1): 4-13.

Rey, A. (1934). D'un procédé pour évaluer l'éducabilité. Archives de Psychologie, 24: 297-337.

Rudman, H.C. (1989). Integrating testing and teaching. Practical Assessment, Research and Evaluation, 1 (6). http://pareonline.net/getvn.asp?v=1&n=6 or http://www.ericdigests.org/pre-9214/testing.htm

Sadler, D.R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18 (2): 119-144.

Shepard, L. (2000). The role of assessment in a learning culture. Educational Researcher, 29(7), 4-14.

Smith, M. L., & Fey, P. (2000). Validity and accountability of high-stakes testing. Journal of Teacher Education, 51(5), 334-344.

Vispoel, W.P., and Austin, J.R. (1995). Success and failure in junior high school: A critical incident approach to understanding students' attributional beliefs. American Educational Research Journal, 32 (2): 377-412.

EU (2000). European Report on the Quality of School Education: Sixteen Quality Indicators. http://ec.europa.eu/education/policies/educ/indic/rapinen.pdf

Copyright 1997 - ETNI