Hamish Chalmers is currently a doctoral researcher at Oxford Brookes University and a member of the R.E.A.L. (Research in EAL) research group, convened by Prof Victoria Murphy at the University of Oxford. He is a member of the executive committee of NALDIC (the National Association of Language Development in the Curriculum), and was Director of EAL at Shrewsbury International School, Bangkok, Thailand between 2008 and 2013. His research interest is in multilingual children’s use of L1 in the mainstream classroom. His master’s dissertation on the use of multilingual Key Stage 1 children’s L1s during parent-led guided reading sessions was awarded a prize for best dissertation in the School of Education at Oxford Brookes University. It was also commended by the British Council and by BERA in 2014 in their respective Masters Dissertations Awards. In addition to language acquisition he is interested in research methodology, especially designs for generalised causal inference.
It is an axiom in the literature that children should be given opportunities to use their L1s in the service of their learning in English. However, uncertainty about the effects of this general advice is reflected in the choice by many teachers to insist on ‘English only’ in their classrooms. Even where teachers are well disposed to L1-mediated learning, the sheer range of languages represented in typical UK classrooms makes this difficult to operationalise. Moreover, the body of intervention research that might help us to understand the effects of L1-mediated learning in linguistically diverse contexts appears vanishingly small. The principal aim of this study is, therefore, to assess whether purposeful L1 use in linguistically diverse classrooms is operationalisable and helpful. In this randomised crossover trial, 47 primary school children in Oxford were allocated to receive vocabulary instruction using either a combination of English and their L1, or English only. They watched short videos providing expanded definitions of target words, then completed consolidation activities. Each video consisted of a series of illustrations and a voiceover giving an expanded definition of the target word in English or in one of the 14 L1s represented among the participants. Participants’ receptive and expressive knowledge of the target words in each condition was subsequently assessed and compared. Preliminary results suggest that there was no detectable difference between average scores in the two conditions. Little intervention research has been conducted to address the challenge of using L1 in linguistically diverse classrooms, in particular classrooms in the UK. This project contributes original knowledge to the field of EAL teaching and learning by directly addressing a challenge faced by teachers in UK primary schools.
These preliminary findings will be considered in terms of the linguistic characteristics of the participants and their implications for teachers and learners.
This paper describes the methodological approach I took to addressing a long-standing assumption: that children with EAL (English as an Additional Language) in linguistically diverse schools are helped in their learning if given opportunities to use their first language (L1) as a mediating tool.
Use of a child’s L1 as a mediating tool for learning an additional language, and learning in an additional language, is widely considered to be helpful to the child (Cummins 2010, García 2009 inter alia). While this general principle has been demonstrated in studies of bilingual education (see Krashen & McField 2005 for an overview), there is little by way of relevant research conducted in non-bilingual schools. In particular, research in linguistically diverse schools, such as those widely found in the UK, is scarce (Chalmers 2017). Linguistic diversity in a student body makes creating opportunities for structured use of L1s more complex than in the relative homogeneity of languages in bilingual programmes. Moreover, the apparent paradox in the claim that children’s English improves when they use their L1s is reflected in the English-only policies enforced by many schools and teachers. Whether guided by ideology or constrained by logistics, teachers in UK schools tend to provide pedagogical input using only English, and do not make use of L1s as teaching and learning tools (Wallace et al. 2009). Is this the most effective approach to educating EAL learners in UK schools?
My study aimed to compare an approach that used L1s with linguistically diverse groups of children against a similar approach that used only English. This was done in order to a) assess whether an L1 approach was associated with improved educational outcomes; and b) assess whether an L1 approach is operationalisable in groups of children representing many different L1s.
This paper describes how I designed and conducted the study.
What are the differential effects, if any, on English vocabulary learning of L1-mediated teaching input, compared to English only input, for EAL learners in linguistically diverse UK primary schools?
Forty-three children from Years 4, 5 and 6 (ages 8 to 11) classified as having EAL were recruited from four primary schools in the City of Oxford. Forty of these received both intervention conditions and were available for follow-up. The interventions consisted of participants watching short videos that defined items of vocabulary drawn from the national curriculum for England. The participants then completed a concept map for each item to help consolidate knowledge of the concept it represented. The language used in the voiceover in each video was either the students’ L1 or English. I have called these intervention conditions First Language Mediated Vocabulary Teaching (the L1 condition) and English-Only Mediated Vocabulary Teaching (the EO condition).
Outcomes were expressive and receptive knowledge of the target vocabulary items, assessed using gap-fill exercises and multiple choice tests, respectively. The effects of the two conditions were compared using a randomised crossover design.
The target population for the trial was children aged between 8 and 11 (Years 4 to 6) classified as having EAL, in mainstream primary schools in the City of Oxford, UK.
I wrote to the headteachers of eight schools whose latest Ofsted (school inspection) reports indicated that the pupil population included a greater than average proportion of children classified as having EAL. I invited them to take part in the trial. One headteacher responded and agreed to participate. I sent follow up emails to the remaining seven headteachers. This led to three more agreeing to participate and one declining. The remaining three did not respond at all, and no further attempts to recruit them were made.
In the four schools that agreed to participate, 208 children representing 33 different L1s were eligible. Parents of eligible children were contacted using the usual school channels (either a letter taken home by the children or by using the school’s electronic ’ParentMail’ system), or by approaching parents directly during pick up and drop off times at the schools. They were invited to consent to their child’s participation in the study. The parents of 43 of the 208 eligible children agreed, and their children took part in the trial.
Eighteen different L1s were represented by this group of children. More than half of the sample were from Year 4 classes (ages 8 and 9), with the remainder equally split between Year 5 and Year 6 (ages 9 to 11). Just over half were operating at relatively high levels of English language proficiency (‘competent’ and ‘fluent’), based on teacher assessments using the DfE EAL proficiency scales (DfE 2017). There were 17 boys and 26 girls. See Table 1 for participant information.
Table 1. Participant information at baseline

|Year Group|n|English Level (teacher assessed)|n|L1 Understanding (self report)|n|Gender|n|
|---|---|---|---|---|---|---|---|
|Year 4|25|New to English|2|Not at all|0|Male|17|
|Year 5|9|Early Acquisition|6|A tiny bit|2|Female|26|
|Year 6|9|Developing Competence|11|Quite well|2| | |
Design and Group Allocation
The effects of the L1 and EO conditions were compared using a randomised crossover design. In a study using this design, participants are randomly allocated to intervention groups. Each group receives a different condition in Phase 1 of the study. At the end of Phase 1 outcomes are assessed. Participants then ‘cross over’ to receive the condition that they did not receive in Phase 1. This is Phase 2. At the end of Phase 2 outcomes are assessed again. Performance on the assessments can be compared between groups and across phases. In my study, participants received First Language Mediated Vocabulary Teaching (the L1 condition) in Phase 1, then English-Only Mediated Vocabulary Teaching (the EO condition) in Phase 2, or vice versa. I refer to groups as the L1-EO group and the EO-L1 group, to reflect the order in which they received the two conditions. See Figure 1 for information about the flow of participants through conditions and assessment points during the trial.
This design minimises potential threats to internal validity in the following ways. First, concealed random allocation to comparison groups ensures that the average of known and unknown characteristics of the two groups at outset differ only by chance. Second, because these unbiased groups of participants receive both interventions, but in different orders, the design minimises threats to internal validity associated with instructional order and maturation. The design thus allows for more trustworthy within-participant comparisons than would be possible in a cohort pre-post design or an interrupted time series design. This also increases the statistical power of the study; the number of comparisons is effectively twice what would be possible in a group randomised trial design. However, if threats to validity are detected, for example if maturation effects are suggested by regression analysis, or if participant dropout occurs in the second phase of the trial (this can happen when participants perceive the Phase 2 intervention to be ‘worse’ than the Phase 1 intervention), then a between-participants comparison of the outcomes in Phase 1 can be conducted (Shadish, Cook and Campbell 2002). This comparison will, however, have lower statistical power.
Figure 1 CONSORT Flow Diagram
Participants were stratified by school, then allocated to comparison groups using a well-concealed, unbiased allocation schedule. Allocation was at the level of the individual. The relatively small number of participants, and the skew towards Year 4 children and children with higher levels of English proficiency, suggested that further stratification would add little to the generation of unbiased comparison groups.
A third party, familiar with the nature of the project but with no knowledge of the schools or the children recruited to the trial, performed the allocation. He was given four sets of numbers, each consisting of the Unique Identifying Numbers (UINs) of the participants at one of the four schools. Participants’ names were not used, to ensure that the allocator was unable to guess information about them, such as their ethnicity, gender, and L1. This protected against conscious or unconscious bias in, and subversion of, the allocation process. Taking one school at a time, the allocator entered the UINs for that school into the random sequence generator on the website random.org (Random.org 1998–2017) and set it to randomly sort the UINs into two groups. The first group generated constituted the EO-L1 group and the second group constituted the L1-EO group. Group characteristics following randomisation can be seen in Table 2. Participants and their class teachers were also blind to group allocation, at least until the first intervention session.
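The allocation procedure above can be sketched in code. This is an illustrative reconstruction only: the study itself used the random.org sequence generator, and the school names and UINs below are hypothetical placeholders.

```python
import random

# Illustrative reconstruction of the allocation step: stratify by school,
# randomly sort each school's UINs, then split them into two groups.
# random.Random stands in for the random.org sequence generator; the
# school names and UINs are hypothetical.
uins_by_school = {
    "School A": [101, 102, 103, 104, 105, 106],
    "School B": [201, 202, 203, 204, 205],
}

def allocate(uins_by_school, seed=None):
    """Return (eo_l1, l1_eo): the first half of each school's randomly
    sorted UINs goes to the EO-L1 group, the remainder to L1-EO."""
    rng = random.Random(seed)
    eo_l1, l1_eo = [], []
    for uins in uins_by_school.values():
        shuffled = list(uins)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        eo_l1.extend(shuffled[:half])
        l1_eo.extend(shuffled[half:])
    return eo_l1, l1_eo

eo_l1_group, l1_eo_group = allocate(uins_by_school, seed=1)
```

Because the allocator sees only numbers, nothing in this procedure reveals a child’s identity, gender, or L1, which is what makes the concealment credible.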
Table 2. Group characteristics

| | |L1-EO Group|EO-L1 Group|
|---|---|---|---|
|Year Group|Year 4|14|11|
|English Level (teacher assessed)|New to English|0|2|
| |Early Acquisition|3|3|
|L1 Understanding (self report)|Not at all|0|0|
| |A tiny bit|1|1|
The interventions were delivered in two sessions per week, over four weeks during the summer term of the academic year 2016/17. The pre- and post-crossover phases lasted two weeks each. Each phase consisted of three teaching sessions followed by an assessment session. After the assessment session, the groups ‘crossed over’ to the comparison condition, i.e. from the L1 condition to the EO condition and vice versa.
Each learning session focused on three items of vocabulary, and was conducted as follows. Individually, participants used a laptop or tablet computer to view a short video that explained the meaning of an item of vocabulary that they were told was important to know for their learning at school. The video included information about characteristics of the item, elicitation of examples of the item, examples of things that are not the item, and modelling of how to succinctly define the item. This information represents the understanding needed to demonstrate concept mastery according to Frayer et al (1969).
After watching the video the participants engaged in a teacher-led discussion, in English, about the item. This included reiteration of the four key areas of concept mastery and elicitation of additional information about the item. Participants then completed a concept map designed to help them structure and organise their understanding of the concept represented by the item (see Appendix 1). Using the concept map, participants wrote down characteristics, examples, non-examples and a definition of the item. This process was repeated for each of the three words in each session.
The teaching sessions differed between conditions only in the languages used in the videos and to complete the concept maps. In the L1 condition, participants watched videos in which the voiceover was in their L1. In the EO condition, the voiceovers were in English. The voiceover scripts for each item were L1 translations of the same English language ‘seed’ script, translated and independently verified by proficient bilingual users of the relevant L1s and English. See Appendix 2 for an example. In the L1 condition, participants were told that they could use either their L1s or English or a combination of both to complete the concept map. In the EO condition they were asked to use only English.
In the fourth and final session of each phase the expressive and receptive knowledge of the target items studied in that phase were assessed using gap-fill and multiple choice format tests. These were conducted in English.
Selecting the target items of vocabulary
Target vocabulary items were superordinate concrete nouns drawn from the English national curriculum for Years 4, 5 and 6. Concrete nouns were chosen because they are relatively easy to represent pictorially, and thus easier to describe in terms of their characteristics.
Superordinates were chosen because the Frayer model of concept mastery requires examples of the item being learned; an example of something is, by its nature, a subordinate term. It is more straightforward to find examples of a superordinate term, like ‘cat’ (e.g. tabby, tortoise-shell, lion), than it is to find examples of a subordinate term, like ‘Siamese cat’.
The English national curriculum programmes of study note specific academic or subject-related vocabulary that children are expected to learn. These are not listed, as such, but are incorporated into the learning objectives for each subject. For example, the programme of study for science in Year 5 states that children must be taught to “describe the differences in the life cycles of a mammal, an amphibian, an insect and a bird” (DfE 2013:168). The superordinate concrete nouns constituting academic vocabulary in this example are mammal, amphibian, insect and bird.
I scoured the programmes of study for English, maths, science, history and geography as well as past exam papers for the spelling, punctuation and grammar assessment (SPaG test), for other examples of these kinds of words. I found 30 that fit the criteria (see Appendix 3).
The time available for the intervention allowed for the teaching of 18 words: three sessions of three words per phase. To inform the necessary reduction in number, I created assessments of expressive and receptive knowledge of the longlisted words. These were administered to a class of 31 predominantly monolingual English Year 6 children in a school that did not take part in the study in any other respect.
The expressive test was a gap-fill exercise. This was a series of 30 short passages, one for each target word, with the target word missing. For example,
‘Animals that live half in water and half on land are called ____________. Frogs, tadpoles, toads, and salamanders are all this kind of animal.’
Respondents were required to fill in the missing word. The receptive test was a series of multiple choice questions. For each item, respondents were presented with four pictures, one representing the target item and three distractors, and the instruction to select the picture that best represented the word. Both tests were created and administered online, using Google Forms. Items that were correctly identified by at least 80% of the test students in either the expressive or receptive assessments were considered ‘easy’ and removed from the longlist. This generated a shortlist of 20 words. The shortlisted words were included in the baseline assessment used with the actual trial participants, at which point the two highest scoring words were removed to create the final set of 18 target items (see Appendix 3).
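The 80% shortlisting rule can be expressed compactly. The sketch below uses hypothetical words and score counts, not the pilot data; it simply shows the filter applied to the longlist.

```python
# Sketch of the shortlisting rule described above: a word answered correctly
# by at least 80% of the pilot class (n = 31) on EITHER test was judged
# 'easy' and removed. Words and counts below are hypothetical.
pilot_correct = {
    "amphibian": {"expressive": 12, "receptive": 20},
    "triangle":  {"expressive": 30, "receptive": 31},  # >= 80%: removed
    "polygon":   {"expressive": 9,  "receptive": 18},
}
N_PILOT = 31
THRESHOLD = 0.8

shortlist = [
    word for word, counts in pilot_correct.items()
    if all(n_correct / N_PILOT < THRESHOLD for n_correct in counts.values())
]
# With these placeholder counts, 'triangle' is excluded and the
# shortlist is ['amphibian', 'polygon'].
```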
Each learning session focused on three target items. I had initially intended to organise these sets thematically. For example, I had hoped to have a set of three words from the maths programme of study, a set of three words from the history programme of study, and so on. However, the word selection process did not generate neat groups of three thematically linked words that would allow this. Instead, I used the random sequence generator at random.org to create a randomly ordered list of the words. I then divided this list into groups of three, taking the first three words as Set 1, the second three as Set 2, and so on until I had six sets of three items. The instructional order of these sets for each school was determined again using the random sequence generator.
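The set-building and ordering steps above can be sketched as follows. The word list and school names are placeholders, and the code uses Python’s `random` module where the study used random.org; the logic (shuffle, chunk into six sets of three, then shuffle the set order per school) is the same.

```python
import random

# Sketch of the set-building step: randomly order the 18 target words,
# chunk them into six sets of three, then give each school its own random
# instructional order of those sets. Words and schools are placeholders.
words = [f"word{i:02d}" for i in range(1, 19)]
rng = random.Random(2017)

shuffled = list(words)
rng.shuffle(shuffled)
sets = [shuffled[i:i + 3] for i in range(0, len(shuffled), 3)]  # Sets 1-6

school_orders = {
    school: rng.sample(range(1, len(sets) + 1), k=len(sets))
    for school in ["School A", "School B", "School C", "School D"]
}
# e.g. an order of [4, 1, 6, 2, 5, 3] means teach Set 4 first,
# then Set 1, and so on.
```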
By creating random instructional orders across items and across schools I intended to account for the possibility that learning one item might be affected by prior study of another item. For example, learning the meaning of the word ‘settler’ might be affected if the participant had earlier studied the word ‘settlement’. The random order of instruction was intended to help mitigate the possible effects of non-random differences in the conceptual and semantic relationships between target items. It also helped to control for the possibility that the time at which each word was studied relative to the assessment might affect performance on the test. That is, children might score higher on words more recently studied than on ones studied longer ago.
Participants were tested initially on their expressive and receptive knowledge of the target items. This was done using the same interactive online assessments described earlier, though reduced from the 30 original words to the shortlisted 20. An example of the tests can be seen in Appendix 4. The baseline assessments were conducted in the first week of April 2017, two months before the start of the interventions, which began in the first week of June 2017. As the same assessment tasks were to be used for the post-intervention assessments, I considered that a long washout period between assessments was desirable to try to minimise the possibility of repeated testing effects. For the same reason, participants were not told their scores or which items they answered correctly or incorrectly.
In addition, at baseline, participants completed a language background questionnaire (LBQ) and a language attitudes questionnaire (LAQ). The LBQ provided information about participants’ exposure to and use of their languages. The LAQ provided information about how participants perceived their L1s in terms of their relationships to their identities, their education, and their home lives. Examples of the LBQ and LAQ can be seen in Appendix 5. The intended purpose of the LBQ was to allow for analysis of outcomes relative to aspects of language background. For example, are differences in outcome associated with differences in L1 exposure, or time in country, or age? The LAQ was conducted for similar purposes. For example, do children with positive attitudes towards their L1s perform differently to children who have negative attitudes towards them? The LAQ was repeated at the end of the study to assess whether attitudes towards the L1 changed over the course of the intervention and if they did, in what direction.
End of Phase 1 Testing
At the end of Phase 1, participants were assessed on their expressive and receptive knowledge of the target words used in that phase. This was done using the same assessment tasks described above, but reduced to include only the Phase 1 words.
End of Phase 2 Testing, and Delayed Posttest
At the end of Phase 2, participants’ expressive and receptive knowledge of all 18 items was assessed, using the same assessment tasks as the baseline assessment. Responses to the items studied by each group in Phase 2 constituted the immediate posttest for that phase, while the inclusion of items from Phase 1 constituted a delayed posttest for those items. I combined Phase 1 and Phase 2 words in one assessment for logistical reasons and because some participants were showing fatigue and frustration with their participation. Some children openly resented the intervention towards the end of Phase 2, coming as it did very near the end of the school year, when schools were engaged in exciting extra-curricular activities like sports day, the school play, and day trips. My preference would have been to conduct two separate short assessments, one for the end of Phase 2 and one for the delayed posttest of Phase 1 items. However, concern that participants might drop out through frustration or be unavailable due to end of term activities led me to consolidate the two assessments into one, longer, session. There was no delayed posttest for Phase 2 items, as the school year ended before an opportunity to collect these data materialised.
Finally, four children from each school took part in a focus group discussion about the intervention. In the discussion, they explored their attitudes towards using their L1s and debated their opinions on how and if their L1s should be used at school in future.
This paper has described the process by which I attempted to address my research questions. The method was intended to allow for an unbiased comparison of two alternative approaches to teaching vocabulary to primary school children with EAL. One approach responded to suggestions that using the L1s of EAL learners is facilitative, by mediating vocabulary teaching through participants’ L1s. The comparator was identical except that all instruction was in English.
I took steps to reduce systematic differences in the average known and unknown characteristics of the comparison groups by randomly allocating participants to the conditions to be compared. This ensured that the comparison groups differed in these average characteristics only by chance when the intervention began. Allocation bias was prevented by generating an unbiased allocation schedule and concealing it so the allocator was unable to tell anything about individual participants that might have resulted in conscious or unconscious bias in the way they were allocated to groups. Participants and their teachers were also blind to group allocation until the intervention began. Though it is difficult to blind participants to the intervention they are receiving in an education trial, blinding them to group allocation as far or as long as possible helps to reduce expectancy effects or pre-trial dropout because of dissatisfaction with the allocated intervention. Also, outcome measures were objective, to the extent that participants either recalled and recognised the correct word or did not. This helped to reduce experimenter bias of the sort sometimes associated with more subjective measures of attainment, like the overall quality of a piece of writing. It is important to note here though that the outcomes were short-term demonstrations of recall and recognition of specific words. They should not be taken as demonstrations that the word has necessarily been learnt, nor that more general vocabulary learning skills have been affected. The potential of order and maturation effects to confound findings was addressed using a crossover design and by studying each set of words in a different random order in each school.
Data are still being analysed for this project and it is too early to make any provisional statements about the findings. I intend to analyse the outcomes of each group using appropriate statistical tests to assess whether one approach is, on average, associated with better outcomes than the other. In addition, the data collected in the LBQ and LAQ will allow for regression analysis to assess whether characteristics of the participants are associated with differences in performance on the assessment tasks. For example, do interactions exist between test scores and the length of time participants had been in English schools? In addition, analysis of the focus group discussions and researcher field notes will provide a process evaluation to assess whether any aspects of the delivery of the intervention should be reconsidered.
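As an illustration of the kind of within-participant comparison the crossover design supports, the sketch below runs a paired t statistic over per-child score differences. The scores are hypothetical placeholders, not trial data, and the paired t-test is one plausible analysis rather than the study’s confirmed analysis plan.

```python
from math import sqrt
from statistics import mean, stdev

# Minimal sketch of a within-participant analysis: each child contributes
# a score under both conditions, so the per-child differences can be
# tested with a paired t statistic. Scores are hypothetical placeholders.
l1_scores = [7, 5, 8, 6, 9, 4, 7, 8]  # same child at the same index
eo_scores = [6, 5, 7, 7, 8, 4, 6, 7]

diffs = [a - b for a, b in zip(l1_scores, eo_scores)]
t = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
print(f"mean difference = {mean(diffs):.2f}, paired t = {t:.2f}")
```

With these placeholder scores the statistic is about 1.87; with eight paired observations (df = 7) it would need to exceed roughly 2.36 to reach two-tailed significance at α = .05, so this toy example would show no detectable difference.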
Subject to these analyses, I will consider replicating the study with sub-groups of the population to explore further the potential for L1 mediated vocabulary learning in linguistically diverse UK primary schools.
Chalmers, H (2017) What does research tell us about using the first language as a pedagogical tool? EAL Journal (Summer) 54–58.
Cummins, J (2010) Teaching for transfer in multilingual educational contexts. In Garcia, O & Lin, A (Eds.), Encyclopedia of Language and Education, 3rd Edition, Volume 5: Bilingual education. New York: Springer Science + Business Media LLC 65-76.
DfE (2013) The national curriculum in England Key stages 1 and 2 framework document. Available online https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/425601/PRIMARY_national_curriculum.pdf [Accessed 10 October 2017].
DfE (2017) School census 2016 to 2017 Guide, version 1.6 April 2017. Available online https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/609375/School_census_2016_to_2017_guide_v1_6.pdf [Accessed 10 October 2017].
Frayer D, Frederick W & Klausmeier H (1969) A Schema for Testing the Level of Concept Mastery. Report from the project on situational variables and efficiency of concept learning. Wisconsin Research and Development Centre for Cognitive Learning. The University of Wisconsin.
García, O (2009) Bilingual Education in the 21st Century: A Global Perspective. Malden, MA and Oxford: Wiley-Blackwell.
Random.org (1998–2017) What’s this fuss about true randomness? Available online http://www.random.org [Accessed 11 October 2017].
Shadish W, Cook T & Campbell D (2002) Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Belmont, CA:Wadsworth.
Wallace, C & Mallows, D (2009) English as an additional language (EAL) provision in schools – 10 case studies. London: Institute of Education.
Example of an English script with Romanian translation
The English word we are learning now is Polygon. A polygon is a type of 2D shape.
Here are the main characteristics of polygons.
Polygons are 2D shapes. They are also called ‘plane’ shapes. They are made up of only straight lines. The straight lines join up with no gaps. The straight lines are called the ‘sides’ of the polygon. A polygon must have three sides, or more. A polygon can’t have only one or only two sides. Polygons must have three corners, or more. Polygons can’t have only one or only two corners. The corners of polygons are also called ‘angles’. Polygons can be regular. This means all the sides are the same length. And all the corners are the same size. Polygons can be irregular. This means the sides are different lengths. And the corners are different sizes.
Here are some examples of polygons.
Triangles are polygons. Triangles have three sides and three corners. Heptagons are polygons. Heptagons have seven sides and seven corners.
Can you think of other things that are polygons?
Here are some examples of things that are not polygons.
A straight line does not have corners. A straight line is not a polygon. A semi-circle has only two sides. One of those sides is not a straight line. A semi-circle is not a polygon. Shapes that are not 2D are not polygons. A cube is 3D. A cube is not a polygon. Things that are not shapes are not polygons.
Can you think of other things that are not polygons?
Cuvântul englezesc pe care îl învățăm acum este POLYGON. Noi îl numim POLIGON în limba română. Un poligon este o formă bidimensională.
Aici sunt câteva dintre principalele caracteristici ale poligoanelor.
Poligoanele sunt forme bidimensionale. Se mai numesc și forme plane. Sunt făcute numai din linii drepte. Liniile drepte se unesc fără spații. Liniile drepte se numesc și laturi ale poligonului. Un poligon trebuie să aibă trei sau mai multe laturi. Un poligon nu poate avea doar una sau două laturi. Poligoanele trebuie să aibă trei sau mai multe colțuri. Poligoanele nu pot avea doar unul sau doar două colțuri. Colțurile poligonului se mai numesc și unghiuri. Poligoanele pot fi regulate. Acest lucru înseamnă că toate laturile au aceeași lungime. Iar toate colțurile sunt de aceeași mărime. Poligoanele pot fi neregulate. Acest lucru înseamnă că laturile au lungimi diferite. Iar colțurile au mărimi diferite.
Aici sunt câteva exemple de poligoane.
Triunghiurile sunt poligoane. Triunghiurile au trei laturi și trei colțuri. Heptagoanele sunt poligoane. Heptagoanele au șapte laturi și șapte colțuri.
Te poți gândi și la alte poligoane?
Aici sunt câteva exemple de lucruri care nu sunt poligoane.
O linie dreaptă nu are colțuri. O linie dreaptă nu este poligon. Un semicerc are doar două laturi. Una dintre aceste laturi nu este o linie dreaptă. Un semicerc nu este un poligon. Formele care nu sunt bidimensionale nu sunt poligoane. Un cub este tridimensional. Un cub nu este poligon. Lucrurile care nu sunt forme nu sunt poligoane.
Target words. Longlist, shortlist and final set
|Longlist||Shortlist||Used in the Intervention|
Examples of expressive and receptive assessment questions.
Language Background Questionnaire and Language Attitude Questionnaire
Language Background Questionnaire
You have been invited to do this questionnaire because you are able to use another language as well as English, even if it’s only a little bit. This questionnaire should take about 10 minutes.
We are going to use the term ‘home language’ to mean the language that you use that is not English. Some people call this your ‘mother tongue’ or your ‘first language’.
Your answers will help us to understand the way you use your home language. This will help your teachers to teach you better.
Your answers will be seen only by the researchers who are working with you at your school.
We will not tell anyone else your name or what your individual answers were.
Please answer as many questions as you can. You don’t have to answer any question that you don’t want to answer.
You can ask an adult to help you if you would like to.
For the first four questions, click the circle that best matches your answer to each question. For example, for the first question, if you cannot speak your home language at all, click Circle 1. If you can speak it only a tiny bit, click Circle 2. If you can speak it quite well, click Circle 3. If you can speak it very well, click Circle 4. And if you can speak it perfectly, click Circle 5.
For the other questions, select the answer from the list of options that best applies to you.
Part 1: Using your home language
Q1. How well do you SPEAK your home language?
- I cannot speak it at all
- I speak it perfectly
Q2. How well do you understand your home language when you HEAR it?
- I do not understand it at all
- I understand it perfectly
Q3. How good are you at WRITING in your home language?
- I cannot write in my home language at all
- I am very good at writing in my home language
Q4. How good are you at READING in your home language?
- I cannot read in my home language at all
- I am very good at reading in my home language
Part 2: Learning your home language
Choose the answer that is best for you from the list.
Q5. Do you have special lessons to teach you how to read and write in your home language?
Yes (go to Q6)
No (go to Part 3)
Q6. Who teaches you how to read and write in your home language?
A member of my family
A teacher who comes to my house
I go to a special school to learn
Other, please specify
Part 3: School in another country
Q7. Have you ever been to school in another country where your home language is used?
Part 4: School in England
Q8. How long have you been in England?
I was born in England
I came to England before I started school
I came to England in Reception or EY
I came to England in Year 1
I came to England in Year 2
I came to England in Year 3
I came to England in Year 4
I came to England in Year 5
I came to England in Year 6
Language Attitude Questionnaire
Thank you for agreeing to take this questionnaire.
The following questions will help me understand how you, and other children like you, feel about the languages that you know. By telling me this information you will be helping teachers to give you lessons that take into account things that are important to you.
It is up to you whether you do this questionnaire. You do not have to answer any question that you do not want to answer.
Only the researchers in this project will see your individual answers to these questions. Nobody else will know how you answered the questions. We will keep your name a secret when we write and talk about this survey.
You can ask for help if there are any questions you are not sure about.
The questionnaire should take about 10 minutes.
If you are happy to help us, please go to the next page to start the questionnaire.
Each question is a statement. You must say how much you agree with each statement by clicking one of the five circles on the line.
Your options, from left to right are:
- Strongly Disagree: I do not agree with this at all.
- Disagree: I do not really agree with this.
- Not Sure: I do not have strong feelings about this statement.
- Agree: I agree with this.
- Strongly Agree: I agree with this completely.
If you would like to say more about your answer to any question you can write in the box at the end.
You don’t have to answer any question that you don’t want to answer.
Q1. I like speaking my home language. 1 2 3 4 5
Q2. I feel proud to be able to speak more than one language. 1 2 3 4 5
Q3. I feel clever because I can speak more than one language. 1 2 3 4 5
Q4. My home language is part of who I am. 1 2 3 4 5
Q5. I get shy when I am asked to use my home language at school. 1 2 3 4 5
Q6. Being able to speak more than one language is a good thing. 1 2 3 4 5
Q7. I get upset or cross when I am not allowed to speak my home language. 1 2 3 4 5
Q8. Having children in my school who speak many different languages is a good thing. 1 2 3 4 5
Q9. Children who know more than one language should be allowed to use them at school. 1 2 3 4 5
Q10. I would like to be better at using my home language. 1 2 3 4 5
Q11. My teachers are interested in my home language. 1 2 3 4 5
Q12. It is important that children learn the languages that their parents know. 1 2 3 4 5
Q13. When I grow up, I hope that I will still be using my home language. 1 2 3 4 5
Q14. I would get better marks at school if I was allowed to use my home language in lessons. 1 2 3 4 5
Q15. I would be happier at school if I was allowed to use my home language in lessons. 1 2 3 4 5
If there is anything else you’d like to tell us about how you feel about your home language, please leave a comment below.