The writing skill most foreign language teachers don’t teach: interactional writing


1. What is writership?

Chatting online or texting via SMS, WhatsApp, etc. has become part and parcel of our daily life, the verb and noun ‘chat’ alluding to the fact that, although we are writing, we are in fact ‘talking’ to someone. Just as in a face-to-face conversation, when chatting online we have to respond to our interlocutor in real time if we want to ‘stay’ in the conversation and, most importantly, if we want to keep him or her engaged.

Since applied linguists refer to interactional listening in real-life face-to-face communication as ‘Listenership’, I will henceforth call the set of skills involved in interactional writing ‘Writership’.

Listenership and Writership have many similarities in terms of the cognitive processes they involve. There are, however, important differences too. Besides the most obvious difference, i.e. the fact that communication does not happen through the oral medium, there is another important one: when texting or chatting we do not usually see our interlocutor. This means that the all-important non-verbal aspects of communication (e.g. the cues we get from our interlocutor’s body language) are missing – which often leads online chatters to use imagery as a compensatory strategy. This entails that effective writership must include not simply fluency (as in: speed of production) but also a level of mastery of TL vocabulary and discourse functions which makes up for the lack of non-verbal cues and pre-empts ambiguity.

2. Should we be concerning ourselves with interactional writing?

Whilst skyping with Steve Smith of www.frenchteacher.net last night, we were talking about the time teachers should allocate to writing. Steve made a very important observation that echoed what I have always thought: writing is less important than listening, speaking and reading, as students are not very likely to use the target language after passing their GCSEs; hence, the answer was not much time at all – writing can be done by students at home. But then it suddenly dawned on me that that was true of Steve’s generation and mine, but not of the current one.

In a highly inter-connected world where global communication happens in a matter of milliseconds on Facebook, Twitter, WhatsApp and SMS, many of our students are very likely to engage in interactional TL (target language) writing in the future – in fact many already do on a daily basis. This is another example of how emerging technologies affect not only the way we communicate in real life but also, inevitably, the way we teach foreign languages.

3. But how do we teach interactional writing?

First of all let us consider what it involves.

First and foremost, obviously, the ability to understand an interlocutor’s input.

Secondly, the ability to respond to that input in real time, maybe not necessarily at the same speed as one would do in oral interaction but still quite rapidly. In other words, effective writership requires writing fluency.

Thirdly, effective writership requires a high level of intelligibility of output – not necessarily grammatical accuracy and complexity. Spelling becomes more important than it is in essay writing in that the speed of the interactional exchange does not allow the interlocutor a lot of time for working out ambiguous items.

Fourthly, the command of a sizeable repertoire of high frequency lexical items, a fairly wide range of discourse functions, the basic tenses and communication strategies (e.g. ways to compensate for lack of vocabulary).

Fifthly, in dealing with a TL native speaker an effective interactional writer must be able to grasp cultural features in their input, including the jargon and abbreviations used in TL instant-messaging communication (e.g. knowing that in French LOL is MDR).

The obvious corollary of the above is that most of the traditional communicative activities we use to foster autonomous communicative competence (through both the oral and written medium) and listenership apply to the teaching of writership too (e.g. information gap tasks and role-plays). In fact the oral communicative practice that takes place in your classroom will have a major impact on learner writership.

These are some of the activities I use in the classroom to promote writership:

  1. As a starter or plenary I stand in front of the class and type questions in the TL on the classroom screen. The students, equipped with mini-boards or iPads, have two minutes to write an answer including three details. In order to differentiate I usually ask two questions, the second being an extension for the more fluent students. Accuracy is not a concern, but intelligibility is.
  2. Picture tasks. This is similar to the previous task, except that the stimulus the students have to respond to is visual. The rationale for this task is that (a) in social media students often do have to respond to a visual stimulus; (b) it taps into their creativity; (c) it may elicit language that transcends the boundaries of the topic-at-hand.
  3. ‘What is the question?’ tasks. Students are provided with a very short dialogue in the TL where the questions have been omitted. Their task is to provide the missing questions.
  4. Social media slow chat. Using Edmodo I ask my students to chat with each other about a given topic. I give out red cards to the chat-initiators and blue cards to the responders. The initiators are in charge of asking questions of any of the responders in the class. Every ten minutes the initiators and responders switch cards. The students are given a time limit to answer, which varies from group to group – this is hard to monitor, of course, but my students are usually honest. The reason why I use Edmodo rather than Twitter is that (a) Edmodo allows the teacher to edit mistakes (please note: I only correct major intelligibility mistakes); (b) it looks a lot like Facebook but is safer; (c) the teacher has total control over everything that happens in the interaction. Last year I paired up my class with a class from an overseas school and we chatted on Edmodo for thirty minutes – a great experience that I intend to repeat this year. This activity is a great prelude to oral activities, as it allows the students more time to make the same communicative choices they will have to make in oral interaction whilst still putting communicative pressure on them.
  5. Very short translations under time constraints; students need to translate the teacher’s input on mini-boards. Again, the focus here is on intelligibility and fluency rather than on 100% accuracy.
  6. Agree or disagree. A simple statement appears on the screen (e.g. Tennis is very enjoyable; I like it when it rains; the food in the canteen is great) and the students have to write a response on their mini-boards under time constraints.
  7. Fluency assessment. At key stages in the unfolding of a unit of work I use the activity described in point 1, above, but ask students a much broader question and give them a lot more time to answer it, on paper or on Google Classroom. At the end of the allocated time I ask the students to stop and note down how many words they wrote. The word-to-time ratio gives me an indication of the levels of writing fluency in my class at that moment in time. I value this activity as fluency is an important prerequisite of effective writership.

Conclusion

Emerging technologies, especially the internet and social media, have transformed the way we live and the way we use language in communication. On a daily basis I find myself chatting on social media in four different languages, and I find the linguistic challenges this poses quite taxing, as it requires faster language processing and sociolinguistic competences that I do not always possess.

Whether we like it or not, the vast majority of our students communicate via social media or other forms of instant messaging. Hence, if we are to prepare them for communication in the real world this phenomenon cannot be ignored. Teaching interactional writing skills is therefore a must, in my opinion.

Teaching this set of skills also has the added benefit of preparing our students for oral communication, as it requires them to process language in real operating conditions whilst allowing more time than the oral medium does. In this sense, the attainment of effective writership may be seen not just as an end in itself but also as instrumental to the attainment of oral fluency, understood as the ability to retrieve information from long-term memory under communicative pressure. Do you currently work on developing TL writing fluency in your learners?

Seven metacognition-enhancing interventions I will implement this year


Metacognition-enhancement is the area of teaching and learning that has always interested me the most as a teacher. As a researcher I investigated this area as part of my PhD 15 years ago and, as a research officer, on a classroom-based research study involving six English state schools under the supervision of Professor Ernesto Macaro of Oxford University – one of the greatest authorities in the realm of learning-to-learn research.

This year I intend to embed the following metacognition-enhancing interventions in my teaching practice to test their effectiveness as one of my professional development targets. My ‘guinea pigs’ will be a class of 18 year 10 students of French preparing for their IGCSE examination. The reason for wanting to enhance my students’ self-regulation and meta-learning skills has to do with the nature of the examination they will sit next year and the relatively limited contact time available (two hours per week). I do believe that these students – especially the weaker ones – will benefit from the following interventions as they need to become more responsible for their own learning, more aware of their problem areas and must learn to optimize their use of the little teaching and learning time available.

The reader should note that, in many cases, as a result of the self-regulatory processes and metacognitive dialogues that the activities below will spark off, I will also have to model to my students specific cognitive and affective strategies to address any issues identified in their learning.

1. Reflective journal – Every week I will ask my students a different question which will ‘force’ them to reflect on how their learning is going. I have set up a Google folder and a Google Doc per student in which they will write a 50-word-minimum answer to each question. The first question – next week – will be: What aspects of French learning cause you the most anxiety? How can I help you? How can you help yourself? Every week I will change the focus of the students’ reflection; but every so often I will go back to an ‘old’ question to see if there has been any progress in a specific area.

I will not ‘mark’ the students’ journal entries, but will give them a rapid read and respond with a concise comment and/or a request for clarification or expansion. If I do think that the quality of the reflection is below my expectations of a given student, I will have a chat with them at the end of the next lesson.

2. Retrospective verbal reports on essay-writing – The week after next I will ask my students to write a short essay (around 150 words) under time constraints, which I will assess against the same criteria used by our examination board. At the end of the essay I will ask them to reflect on and write in as much detail as possible about any issues they encountered in carrying out the task in the areas of grammar and vocabulary, as well as any other problem they experienced (e.g. stress; cognitive block). As they write I will walk around and scaffold the process by asking probing questions if I feel they need prodding. I intend to do this twice a term.

Every time I have carried out retrospective verbal reports they have yielded valuable data and have served another very important goal: enhancing students’ awareness of their problem areas. This has always provided me with a very useful platform for starting a very productive metacognitive dialogue with my students.

3. Think-aloud protocols – Later on in the term, after identifying the three students who are most seriously underachieving in reading and/or writing, I will involve them in think-aloud sessions in which they will perform a reading or writing task whilst verbalizing their thoughts; I will often intervene in the process by asking probing questions to delve further into their thinking process. This technique, as I have already discussed in a previous post, has a double effect: firstly, it yields incredibly useful data as to how the students tackle the tasks and where they go wrong or experience linguistic and/or cognitive deficits; secondly, it engages their metacognition.

I will only focus on three extreme cases not because there is something special about this number, but merely for reasons of manageability/time constraints. I tried bigger numbers before and did not cope very well.

4. L.I.F.T. – I will encourage the students to use L.I.F.T. in every single essay of theirs as much as possible – although I will not make it compulsory. As I have already discussed in another post, L.I.F.T. stands for Learner Initiated Feedback Technique, i.e.: whenever a student has a doubt about a grammatical or lexical structure, she asks the teacher a question that she annotates in the margin (e.g. was I right to use the present subjunctive here?). The teacher then answers the questions in her written or oral feedback on that essay. L.I.F.T. enhances students’ metacognition by scaffolding their ownership of the corrective process whilst fostering risk-taking and task-related awareness.

5. Error log – In order to raise their awareness of their problematic areas – a process which will hopefully have started with the first retrospective verbal report (see 2, above) – I am going to ask my students, when giving their essays back, to log in a Google document five different types of mistakes I highlighted in their essays, along with a concise explanation of the possible cause of those mistakes (e.g. didn’t know the rule; got confused with Spanish) and a reminder of the grammar rule broken. The process will enhance their awareness of their problem areas and may trigger the future deployment of editing strategies aimed at addressing them.

6. Lesson videoing + student ‘pet hates’ – I will video one lesson per term and ask my students to write down – anonymously – one or more things about that lesson that they found useful and enjoyable and one or more things they found annoying, tedious and/or not very useful. I will then go through the students’ comments and view the video to get a better grasp of the issues they refer to; I might do this with colleagues to get their opinions and suggestions.

This process will serve three important purposes. Firstly, it will involve students more actively in the learning process by getting them to think about how my teaching impacts their learning; secondly, it will give them the feeling that I heed their opinion; thirdly, it will pave the way for the kind of activities illustrated in the next point.

7. Videoing of student speaking performance with introspection – After showing the students that I am willing to be videoed, evaluated and assessed by them, I am less likely to encounter resistance when I ask to do the same with them. Once a term I will video the students I have concerns about for five minutes as they converse with me in French, and then spend 15-20 minutes going through the video together, discussing key points in their performance and possible strategies to address any issues identified. The metacognitive element of this process lies not simply in problem identification but also in the introspection that my questions will trigger.

At the end of the year I will interview my students in order to find out how the above interventions impacted their attitudes to French and their learning.

Although the above list may look like a tall order, it is much more manageable than it seems as it is mostly student-led. I am particularly looking forward to activities 6 and 7.

Why narrowing the speaking assessment focus can have a positive washback effect on L2 learning


Every assessment we carry out in an MFL classroom ought to have a positive washback effect on learning. In this post I argue that, with pre-intermediate to intermediate students, the way MFL learners are typically assessed does not impact learning as much as it ought to, due to a failure to consider the complex nature of oral skill acquisition and the cognitive demands it places on learners.

What I mean is that learners are most often assessed using complex multi-trait or holistic scales designed to rate their performance across a number of dimensions of proficiency, such as fluency, intelligibility, grammatical accuracy, pronunciation, complexity, range of vocabulary, ability to comprehend and respond to an interlocutor, etc. However, this approach can have a negative washback effect on learning when we deal with novice to intermediate learners, due to the huge cognitive demands it places on them.

This is because, as I have often reiterated in my posts, at this level of proficiency foreign language learners struggle to cope effectively with all of the demands posed by oral production in real operating conditions. Hence, by assessing them using multi-trait or holistic scales which assess all of the above components of oral proficiency simultaneously, we are being hugely unfair to them, as we are not taking into account how finite their cognitive resources are.

Although there is indeed a place for a more multi-dimensional type of assessment in high-stakes tests (e.g. end-of-unit tests), when it comes to the all-important low-stakes tests we should administer throughout the learning cycle, I advocate a different approach which takes into account developmental factors in the acquisition of cognitive control, i.e. a type of assessment which focuses on one or at most two traits at a time. For instance, at one key stage in the unfolding of a unit of work one would focus on the assessment of fluency + intelligibility of output; at another, on range of vocabulary and pronunciation; etc. Obviously, your students will be informed at all times as to which trait will constitute the focus of the forthcoming assessment; this will channel their cognitive resources in one or two directions, thereby pre-empting the risk of them chasing too many rabbits at the same time and ending up catching none.

This approach, which I have been using for years, not only has the advantage of focusing the learners on one aspect of cognitive control over oral production at a time – with an obvious positive washback effect on learning – but also addresses another important pitfall of oral performance assessment carried out using complex multi-trait scales: the cognitive overload such scales cause oral test raters. Unless raters record the students’ oral performances and listen to them over and over again after the test – which rarely happens with low-stakes assessments – complex assessment scales are very likely to cause divided attention, as it is extremely challenging to attend to speaker output and evaluate it at the same time across all of the traits and criteria.

In this sense, low-stakes speaking tests assessed using a narrow-focus approach kill two birds with one stone. On the one hand, they optimize the use of the student’s cognitive resources; on the other, they facilitate the test rater’s task. I will add another advantage I have experienced whilst using this approach, which relates to my professional development: by focusing on a different aspect of oral proficiency for each low-stakes assessment, one gains a higher level of awareness of the variables affecting its development than one normally would when focusing on several oral proficiency components at the same time.

It goes without saying that in high-stakes assessments the use of multi-trait or holistic rubrics is more useful, as we do want a more comprehensive view of how our students are faring across all the major components of oral proficiency. However, I do feel that many of the holistic and analytical scales adopted by MFL teachers with novice to intermediate learners share a common shortcoming, which has a detrimental washback effect on learning: they do not lay enough emphasis on fluency and on the ability to effectively comprehend and respond to an interlocutor. In their quest for comprehensiveness and for a ‘one-size-fits-all’ solution, they fail to consider that each level of proficiency has different developmental features and should therefore be approached differently in terms of assessment. The more novice the learner, the more skewed towards fluency the scale should be, with the emphasis on accuracy and complexity gradually increasing as we progress further along the language acquisition continuum.

Why the reliability of UK Examination Boards’ assessment of A Level writing papers is questionable

The Language Gym


Often, our Year 12 or Year 13 students, who have consistently scored high in mock exams or other assessments of the writing component of the A Level exam paper, do significantly less well in the actual exam. And when the teachers and/or students, in disbelief, apply for a remark, they often see the controversial original grade reconfirmed or, as has actually happened to two of my students in the past, even lowered. In the last two years, colleagues of mine around the world have seen this phenomenon worsen: are UK examination boards becoming harsher or stricter in their grading? Or is it that the essay papers are becoming more complicated? Or could it be that the current students are generally less able than previous cohorts?

Although I do not discount any of the above hypotheses, I personally believe that the phenomenon is also partly due to serious issues…


Ten questions to ask foreign language teaching CPD providers


Intro

Throughout my career I have attended lots of MFL PD sessions dishing out lots of WHATs (i.e. activities) and HOWs (i.e. their classroom implementation). What was usually missing is that certain something that has the power to transform teaching, i.e. the answers to the following difficult questions, which everyone attending such PD sessions should ask. In writing our ‘MFL teacher’s toolkit’, Steve Smith of http://www.frenchteacher.net and I are keeping these questions very much in our focal awareness throughout the whole process.

Ten questions to ask your CPD provider

  1. Why this approach? – What is the rationale for this approach? Why should I use the activities you are recommending? How do you know they are going to work? Teachers are rarely told this by PD facilitators. This is, in my view, the greatest shortcoming of all.
  2. Where is the evidence that this approach ACTUALLY works? – By this I do not mean ‘conclusive’ evidence with long lists of references and statistics, but at least some indication based on classroom research, some objective data that the recommended approach has worked with at least some foreign language students. Teachers, in my experience, need some degree of ‘certainty’ that something they are expected to use in their classrooms actually ‘works’ in order to buy into a new methodology or technology.
  3. How do I sequence the great activities you are recommending? Why? – This is one of the most important questions for many teachers, as it affects the nitty-gritty of their daily practice. Teachers are very busy people; as much as we want them to be reflective and work the ideal sequencing out by themselves, they want, and need, to be provided with some sort of reference framework by those running INSET training sessions.
  4. How do these activities affect students’ cognition and language acquisition? Why? – MFL PD facilitators usually tell you things like ‘This activity develops your students’ vocabulary. It is really effective and fun’; then they show us a video or ask us to try the activities out with our neighbours. But they never tell us what aspects of grammar or vocabulary learning they impact and why. This, in my opinion, is crucial in order to empower teachers with the all-important ability to use those activities effectively and flexibly across contexts in the future.
  5. How do I get my students to ACQUIRE the target language grammar, not just LEARN grammar rules? – In MFL PD sessions you never get to hear about how to bring students from declarative knowledge (knowing grammar rules) to actual acquisition (using those rules automatically and accurately in spontaneous speech). You are at best shown activities which aim at memorizing grammar rules and practising them (e.g. mechanical or gap-filling drills; fun games), but you are never told how one gets the students to use them correctly in real-time communication.
  6. How do the AfL strategies and the assessment rubrics you are showing actually help me assess my students’ development in terms of FLUENCY, COMPLEXITY, ACCURACY, VOCABULARY RANGE and DEPTH and COGNITIVE CONTROL in a principled, valid and consistent way? – The AfL strategies and assessment rubrics usually shown by UK MFL consultants are very limited in their power to assess performance and proficiency and to help teachers identify at which developmental stage along the language acquisition continuum MFL learners are located.
  7. How are language skills acquired? How do we scaffold and monitor skill acquisition? – More often than not, UK PD providers will show you tons of (very simplistic) rubrics and how you should use them to scaffold skill learning. I wish language learning were that simple…
  8. How can I get learners to acquire the memory strategies you recommend and use them autonomously? – PD facilitators often show scores of slides with memory techniques but regularly fail to tell teachers how to get students to use them autonomously, without any teacher prodding. Training in memory strategies requires a specific set of knowledge and skills that the average teacher has not been trained in; moreover, to be effective it requires extensive training (lasting months) and intensive scaffolding. Teachers are rarely – if ever – told this.
  9. How can we ensure that PBL implementation actually integrates all four skills and effectively develops and assesses fluency and cognitive control? – This is a crucial and challenging question everyone should ask anyone showing you great examples of projects integrating emerging technologies. Yet I have never attended or heard of a PD session where this issue is effectively tackled.
  10. Does the kind of differentiated teaching you propose actually work? Where is the evidence and/or theoretical rationale behind it? – None of the PD sessions on differentiation I have ever attended has attempted to provide me with any research evidence that the recommended differentiation strategies actually achieve their intended purpose, or at least with a theoretical rationale for the approach. Yet, since differentiated learning is very laborious and time-consuming to set up, one does expect these questions to be answered.

Conclusion

The above ten questions address only a few of the many shortcomings of the PD sessions I have been involved in over 25 years of teaching. It is bizarre how, despite so much research having been carried out in L2 acquisition and pedagogy in the last twenty years or so, lots of MFL PD in the UK seems to have been recycling the same old topics ad nauseam – the only notable additions being PBL and emerging technologies.

As I have often reiterated in my blogs, for MFL PD to be effective it has to empower teachers with the WHY of language acquisition and pedagogy and the WHEN. The HOW should focus more on the process of learning rather than on how to use an activity or App; i.e. on how language learning is impacted by each step we decide to take in our planning, execution and assessment. Take PD in emerging technologies, for instance: the facilitator comes in, shows you a few Apps and web-tools and how to use them; teachers try them out and…that’s it! How about: how do they impact the process of learning at different stages of proficiency and why?

MFL CPD providers should not presume that teachers are not capable of or interested in learning how MFL students learn. For transformational professional development to work, it must provide a clear and convincing rationale as to why the methodological framework or principles proposed have the potential to be effective. Teachers must feel a sense of empowerment which cannot simply be brought about by being provided with new teaching strategies; rather, first and foremost it requires an understanding of how language learning happens; how what we do in the classroom affects acquisition and cognition; what fuels and sustains the development of cognitive control over language reception and production skills; what the markers of fluency, accuracy and complexity are at the various stages of language acquisition; how we bring about learner autonomy, etc.

On the subject of learner autonomy, I find it scandalous that after thirty years of research in learner training (or learning to learn) UK PD providers’ knowledge of this area of research and pedagogy can be so inadequate, especially considering how some very well-known learner-training researchers are actually UK based (e.g. Ernesto Macaro, Vee Harris, Suzanne Graham).

Of course, another major pitfall of PD relates to its follow-up: how firmly the content of the training session(s) is kept in teachers’ focal awareness, and implemented and self-monitored in their daily practice, well after the PD event(s). But this is beyond the scope of this post.

Six very common flaws of foreign language assessment


  1. Teaching – Assessment mismatch

Often foreign language instructors test students on competences that have not been adequately emphasized in their teaching or, in some cases, have not even been taught.

The most common example of this is the issue of task unfamiliarity, i.e. the use of an assessment tool or language task the students have never or rarely carried out prior to the test. This can be an issue, as research clearly shows that the extent to which a learner is familiar with a task will affect his/her performance. The reasons for this relate to the anxiety that the unfamiliarity engenders and the higher cognitive load that it places on working memory, especially when the task is quite complex. By doing a task over and over again prior to an assessment involving that task, the student develops task-related cognitive and metacognitive strategies which ease the cognitive load and facilitate its execution.

Another common scenario is when students are not explicitly focused on, and provided with sufficient practice in, a given area of language proficiency (e.g. accuracy, fluency, vocabulary range, grammar complexity), yet their teachers use assessment scales which emphasize performance in that area (e.g. by giving grammatical accuracy a high weighting in speaking performance whilst practising grammar only through cloze tasks). I have had several colleagues in the past who taught their students through project-based work involving little speaking practice even though they knew that the students would be assessed in terms of fluency at the end of the unit. Bizarre!

Language unfamiliarity is another instance of this mismatch, in my opinion. This refers to administering a test which requires students to infer from context or even use unfamiliar words, and which results in assessing the learners not on the language learnt during the unit but on compensation strategies (e.g. guessing words from context). Although compensation strategies are indeed a very important component of autonomous competence, I do believe that a test needs to assess students only on what they have been taught and not on their adaptive skills – or the assessment might be perceived by the learners as unfair, with negative consequences for student self-efficacy and motivation. A test must have construct validity, i.e. it must assess what it sets out to assess. Hence, unless we explicitly provide extensive practice in inferential skills, we should not test students on them.

Some teachers feel that, since the students should possess the knowledge of the language required by the task, it will not matter whether or not they are familiar with the task itself; this assumption, however, is based on a misunderstanding of L2 acquisition and task-related proficiency.

  1. Knowledge vs control

Very often teachers administer ‘grammar’ tests in order to ascertain whether a specific grammar structure has been ‘learnt’. This is often done through gap-fill/cloze tests or translations. This approach to grammar testing is valid if one purports to assess declarative (intellectual) knowledge of the target structure(s), but not the extent of the learners’ control over them (i.e. the ability to use grammar in real operating conditions, in relatively unmonitored speech or written output). An oral picture task or a spontaneous conversational exchange eliciting the use of the target structure would be a more accurate way to assess the extent of learner control over grammar and vocabulary. This is another common instance of construct invalidity.

  1. Listening vs Listenership

This refers less to a mistake in assessment design than to a pedagogical flaw and assessment deficit, and it is a very important issue because of its major washback effect on learning. Listening is usually assessed solely through listening comprehension tasks; however, these do not test an important set of listening skills, ‘listenership’, i.e. the ability to respond to an interlocutor (a speaker) in real conversation. If we only test students on comprehension, the grade or level we assign them will reflect one important set of listening skills (understanding a text) but not the one they need the most in real-life interaction (listening to an interlocutor as part of meaning negotiation). Listening assessments need to address this deficit, which, in my opinion, is widespread in the UK system.

  1. Lack of piloting

Administering a test without piloting it can be very ‘tricky’, even if the test comes from a widely used textbook assessment pack. Ambiguous pictures and answer keys, speed of delivery, inconsistent and/or very subjective grading, and construct validity issues are not uncommon flaws of many renowned coursebooks’ assessment materials. Ideally, tests should be piloted by more than one person on the team, especially when it comes to the grading system; in my experience this is usually the most controversial aspect of an assessment.

  1. ‘Woolly’ assessment scales

When you have a fairly homogeneous student population, it is important to use assessment scales/rubrics which are as detailed as possible in terms of complexity, accuracy, fluency, communication and range of vocabulary. In this respect, the old UK National Curriculum Levels (still in use in many British schools) were highly defective, and so are the GCSE scales adopted by UK examination boards. MFL departments should invest some quality time in devising their own scales, making specific reference in the grade descriptors to the traits they emphasize the most in their curriculum (so as to satisfy the construct validity criterion).

  1. Fluency – the neglected factor

Just like ‘listenership’, fluency is a facet of language performance that is often neglected in assessment; yet it is the most important indicator of the level of control someone has achieved over TL receptive and productive skills. Whereas UK MFL departments do often include fluency amongst their assessment criteria for speaking, in writing and reading this is rarely the case. Yet it can be done relatively easily. For instance, in essay writing, all one has to do is set a time and word limit for the task in hand and note down the time of completion for each student as they hand it in. Often teachers do not differentiate between students who score equally on accuracy, complexity and vocabulary but differ substantially in terms of writing fluency (i.e. the word-to-time ratio). In so doing, they fail to assess one of the most important aspects of language acquisition: executive control over a skill. In my view, this is something that should not be overlooked in either low-stakes or high-stakes assessments.
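For teachers who keep their mark books in a spreadsheet or script, the fluency measure described above can be computed very simply. This is just a minimal sketch under my own assumptions (the function name and figures are hypothetical, not from any assessment scheme):

```python
def writing_fluency(word_count: int, minutes_taken: float) -> float:
    """Words produced per minute: a rough proxy for writing fluency."""
    if minutes_taken <= 0:
        raise ValueError("minutes_taken must be positive")
    return word_count / minutes_taken

# Two students with identical accuracy/complexity/vocabulary scores
# can still differ markedly in fluency:
print(writing_fluency(150, 30))  # 5.0 words per minute
print(writing_fluency(150, 45))  # roughly 3.3 words per minute
```

The point is only that recording one extra datum (time of completion) yields a comparable fluency figure at no real cost.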

Micro-listening skills (Part 2) – More micro-listening tasks for the foreign language classroom


As a follow-up to my post ‘Micro-listening enhancers you may not be using in your foreign language lessons’, here is a new list of micro-listening enhancers I frequently use in my lessons.

  1. Parallel sentences

I usually read out – or record myself or a native speaker reading out – ten or more pairs of near-identical sentences, which differ only slightly, at native-speaker speed. For instance, in the example below the difference is ‘vers’ (around) vs ‘à’ (at). The students have to identify the differences and note them down on mini-boards. I am not at all bothered about the spelling of the target word.

Je suis allée au cinéma vers huit heures

Je suis allée au cinéma à huit heures  

  1. Sudden stop

I give students a transcript of a listening text, then play or read the text at native speed, stopping suddenly whenever I see fit. The students write on mini-boards the last word I uttered.

  1. Spot the wrong sound

I pronounce a target language sentence or short text, making sure that I make a typical L1-transfer phonetic error. For instance, in the sentence below, I would pronounce the ‘h’ the English way. The students need to identify my pronunciation mistake.

J’habite en Malaisie

  1. Spot the correct transcription

The teacher reads out a sentence at near-native speed. The students are provided with a gapped version of that sentence and with three near-homophones (words that sound very similar). The task is to choose the correct option.

Teacher says: J’y vais avec lui

Students see (on screen/whiteboard): J’y vais avec _______

Options to choose from: Louis – lui – l’huile

  1. Sound-tracking

Students are given a short target language text. The teacher reads out a sound, for instance /wa/ (as in ‘moi’), and the students, under timed conditions, have to scan the text for any letter or combination of letters that corresponds to that sound and highlight them.

  1. Silent letters (group work)

Students are given a set of sentences and, working in groups, take turns to read them out to each other, circling the letters they think will not be voiced. The teacher then provides the answers, confirming or refuting the students’ assumptions.

  1. Break the flow

This is a classic. Students are presented with sentences (on whiteboard/screen) written out as shown below, with no spacing between the words. The teacher utters each sentence two or three times at native speed and the students have to rewrite it with the appropriate spacing.

What students get: Jenefaispasdesportcarjesuisparesseux (=I do not do sport because I am lazy)

What they are meant to write: Je ne fais pas de sport car je suis paresseux
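Preparing a batch of ‘Break the flow’ items from an answer sheet is mechanical, so it can be scripted. A minimal sketch, assuming a plain list of answer sentences (the function name is my own):

```python
def make_break_the_flow_item(sentence: str) -> str:
    """Remove all spaces so students must restore the word boundaries by ear.

    The first letter is capitalised to match the presentation style used
    in the article's example; accents and apostrophes are left untouched.
    """
    squashed = sentence.replace(" ", "")
    return squashed[0].upper() + squashed[1:]

answer = "je ne fais pas de sport car je suis paresseux"
print(make_break_the_flow_item(answer))
# Jenefaispasdesportcarjesuisparesseux
```

One item per answer sentence, generated in seconds, with the original list doubling as the marking key.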

  1. Anagrams

Students are given a set of anagrams of words they have never come across before, which the teacher pronounces one by one, two or three times. Based on the target sounds – which will have been practised beforehand – the students have to rewrite the words in their correct form. I like this exercise because, when the words are truly unfamiliar, it requires a good grasp of TL phonology and inferential skills.

  1. IPA practice – back-transcription

This is meant as training in the International Phonetic Alphabet, or IPA. After much work on phonological awareness, and after teaching the IPA equivalent of each target sound (e.g. oi = /wa/; en = /ɑ̃/), the students are presented with a list of phonetic transcriptions of words, which I usually get from www.wordreference.com. Their task is to write them back in their normal graphemic form.

What students are given: [mwa], [pɑ̃dɑ̃], etc.

The correct answers: moi, pendant, etc.
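Because back-transcription has a single correct spelling per item, self-checking is easy to set up. A hypothetical sketch (the key uses the article’s own two example pairs; names are mine):

```python
# Answer key: IPA transcription -> expected spelling.
answer_key = {
    "[mwa]": "moi",
    "[pɑ̃dɑ̃]": "pendant",
}

def check_answer(ipa: str, student_answer: str) -> bool:
    """True if the student's spelling matches the key (case-insensitive)."""
    return answer_key.get(ipa, "").lower() == student_answer.strip().lower()

print(check_answer("[mwa]", "moi"))   # True
print(check_answer("[mwa]", "mois"))  # False
```

A longer key built from www.wordreference.com transcriptions would let students self-mark a whole worksheet.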

  1. IPA practice – IPA transcription

This is best done after extensive practice with the previous activity and with the IPA symbols in general. The teacher utters a series of words and the students write them out on mini-boards using the symbols provided. Sheets with the target IPA symbols can be given to students as scaffolding. A tip: do not deal with too many symbols at any one time, and keep the words as short as possible.