Results of an adult literacy program: reading, writing and numeracy assessment of a group of women in a rural environment
Resultados de um programa de alfabetização de adultos: avaliação de competências de leitura, escrita e numeramento num grupo de mulheres em contexto rural
Resultados de un programa de alfabetización de adultos: evaluación de la lectura, la escritura y las habilidades numéricas de un grupo de mujeres en un entorno rural
Independent researcher
andreajunyentmoreno@gmail.com
María de los Ángeles Fernández-Flecha
Pontificia Universidad Católica del Perú, Lima, Perú
mfernandez@pucp.edu.pe
Received on July 5, 2024
Approved on August 27, 2024
Published on May 20, 2025
ABSTRACT
This work outlines the development and administration of an assessment instrument designed to evaluate reading, writing and numeracy skills. The instrument was specifically tailored to assess the performance of users of the Dispurse Literacy Programme —an adult literacy initiative conducted in Spanish in the rural Andean region of Peru— which is designed taking into consideration the learners’ communicative and sociocultural context. The participants’ literacy skills were assessed primarily against the programme’s goals. Their abilities were evaluated in accordance with the definitions of reading, writing, using numbers and performing basic operations provided by the Peruvian Educational System. Administered to a group of 49 women who had recently completed the programme, the instrument was crafted around the sociocultural environment of the target demographic, employing items that simulate situations from the participants’ daily lives. The results of the assessment showed that participants had acquired fundamental competencies: over 80% of them correctly answered most explicit and implicit questions on written texts, 90% were capable of writing at least a short text, and the whole group showed a mean overall performance on numeracy items of 87.8%. The interpretation of the complete outcome of the administration provided insights into the resourcefulness with which participants are able to employ their acquired knowledge. Sharing this assessment tool and its development may be useful for researchers engaged in similar studies, particularly those aiming to evaluate the outcomes of literacy programmes tailored to specific populations.
Keywords: Adults’ Literacy; Literacy Assessment; Literacy Program.
RESUMO
Este trabalho delineia o desenvolvimento e a administração de um instrumento de avaliação projetado para avaliar habilidades de alfabetização e numeracia — como leitura e escrita de números, reconhecimento de relações entre eles e realização de adições e subtrações básicas. O instrumento foi especificamente adaptado para avaliar o desempenho dos usuários do Programa de Alfabetização Dispurse — uma iniciativa de alfabetização de adultos em espanhol na região rural dos Andes, no Peru —, levando em consideração o contexto comunicativo e sociocultural dos aprendizes. As habilidades dos participantes foram avaliadas principalmente com base nos objetivos do programa e de acordo com as definições de leitura, escrita, uso de números e realização de operações básicas fornecidas pelo Sistema Educacional Peruano. Administrado a um grupo de 49 mulheres que haviam concluído recentemente o programa, o instrumento foi elaborado com base no ambiente sociocultural do grupo-alvo, empregando itens que simulam situações da vida cotidiana das participantes. Os resultados da administração mostraram que as participantes adquiriram habilidades fundamentais de alfabetização e numeracia, com mais de 80% das participantes respondendo corretamente à maioria das questões explícitas e implícitas em textos, 90% capazes de escrever pelo menos um texto curto, e todo o grupo demonstrando um desempenho médio geral em itens de numeracia de 87,8%. A interpretação do resultado completo da administração proporcionou percepções sobre a eficácia com que as participantes são capazes de empregar seus conhecimentos adquiridos. Compartilhar esta ferramenta de avaliação e seu desenvolvimento pode ser útil para pesquisadores envolvidos em estudos similares, particularmente aqueles com o objetivo de avaliar os resultados de programas de alfabetização adaptados a populações específicas.
Palavras-chave: Alfabetização de Adultos; Avaliação de Alfabetização; Programa de Alfabetização.
RESUMEN
Este trabajo describe el desarrollo y la administración de un instrumento de evaluación diseñado para evaluar las competencias de lectura, escritura y cálculo. El instrumento se diseñó específicamente para evaluar el desempeño de usuarias del Programa de Alfabetización Dispurse –una iniciativa de alfabetización de adultos en español que se lleva a cabo en una región rural de los Andes peruanos–, cuyo diseño tiene en cuenta el contexto comunicativo y sociocultural de los alumnos. Las competencias de las participantes se evaluaron principalmente en función de los objetivos del programa. La lectura, la escritura, el uso de los números y la realización de operaciones básicas se evaluaron de manera acorde a las definiciones del Ministerio de Educación del Perú. Administrado a un grupo de 49 mujeres que acababan de terminar el programa, el instrumento se elaboró basándose en el entorno sociocultural del grupo demográfico destinatario, empleando ítems que simulaban situaciones de la vida cotidiana de las participantes. Los resultados de la evaluación mostraron que las participantes habían adquirido las competencias fundamentales de lectura, escritura y cálculo: más del 80% de ellas respondieron correctamente a la mayoría de las preguntas explícitas e implícitas sobre textos escritos, el 90% fueron capaces de escribir al menos un texto breve, y todo el grupo mostró un rendimiento global medio en los ítems de cálculo del 87.8%. La interpretación del resultado completo de la administración proporcionó información sobre la habilidad de las participantes para emplear estratégicamente los conocimientos adquiridos. Compartir esta herramienta de evaluación y su desarrollo puede ser potencialmente útil para los investigadores e investigadoras que realizan estudios similares, en particular los que pretenden evaluar los resultados de programas de alfabetización adaptados a poblaciones específicas.
Palabras clave: Alfabetización de Adultos; Evaluación de la Alfabetización; Programa de Alfabetización.
Introduction
Acquiring and improving literacy skills across the lifespan has been recognised by UNESCO (2006) as an intrinsic aspect of the right to education, bringing empowerment and tangible benefits to individuals and communities. Reading, writing, and basic mathematical proficiency are fundamental skills, crucial for attaining success in life, as they enable individuals to capitalise on opportunities. The absence of these abilities poses challenges in acquiring adequate employment, honing other competencies, and making meaningful contributions to society, thereby constraining an individual’s potential for personal and professional advancement. However, access to opportunities for acquiring these skills remains unequal around the world.
In Peru, public policy efforts aimed at eliminating illiteracy have demonstrated significant progress over the past 25 years. Literacy rates have increased from 96.3% in 2011 to 97.3% in 2022 for men, with an even more pronounced improvement for women: from 88.3% to 92.5% over the same period. Nevertheless, considerable disparities in literacy rates persist between genders, and across urban and rural settings. Notably, the lowest literacy rates are observed among women residing in rural areas: 80.7% (2022). Furthermore, gender disparities, independent of the rural effect, are particularly pronounced in certain geographical regions, where rates for women fall to as low as 78.7% (Huánuco), 82.6% (Apurímac), 83.8% (Cajamarca), and 84.2% (Cusco) (INEI, n.d.).
Aiming to help reduce illiteracy and transform the living conditions of rural women in Peru, the Dispurse Foundation —a Swedish organisation focused on fighting poverty through education—, in cooperation with local and national stakeholders, has implemented a literacy programme since 2017. The Dispurse Literacy Programme (DLP) has already been offered free of charge to more than 2000 women in the provinces of Cusco and Cajamarca, both in the Andean region of the country (Johansson, Franker, 2023). The instrument we present here was developed to assess literacy and numeracy skills in participants who have completed the programme.
Literacy, once defined as the capacity to read and write, is now understood as a multifaceted concept encompassing various competencies and practices shaped by specific social, cultural, and historical contexts (Street, 1984). Moreover, beyond reading and writing, numeracy has emerged as an integral component of literacy, reflecting its expanded scope and significance in users’ lives. While no single definition has achieved universal consensus, this study adopts a conceptualisation of literacy as a generalised set of skills involving reading, writing, and computation (Shi, Tsang, 2008).
Numeracy, as operationally defined in this study, involves both computation —performing basic mathematical operations— and the ability to interpret and use numerical information. This includes tasks such as reading numerical information (e.g., money, weight, time, and date), solving problems that require interpreting numerical data, and performing basic calculations like addition and subtraction.
The concept of literacy extends beyond the individual skills described to include a variety of social practices that are situated in broader social relations and embedded within specific cultural contexts (Goody, 1968; Barton, 2007). The evolving understanding of literacy and its place within social practices has therefore led to the recognition of multiple “literacies” (Barton, 2007; Hull, 2000; Lonsdale, McCurry, 2004; Culligan, 2005). These include, but are not limited to, functional literacy and critical literacy, each of which plays a unique role in empowering individuals to navigate and participate in various aspects of modern society. Functional literacy highlights the practical abilities that individuals need to manage daily activities and obligations effectively and can be considered foundational for full participation in society (Freedman et al., 2011; McCaffery et al., 2016; Muscat et al., 2016). Critical literacy extends beyond functionality by adopting an analytical and critical approach to texts. From this perspective, readers question the ideologies and assumptions underlying texts, promoting social justice and empowerment (Kellner, Share, 2005; Janks, 2013; Luke, 2012).
Functional and critical literacy are central to the theoretical framework underpinning the literacy programme assessed in this study, which focuses on initial literacy acquisition. Functional literacy, at this stage, refers to the ability to read and write short texts, as well as perform basic numerical operations relevant for everyday tasks and responsibilities (Hull, 2000). Critical literacy, in contrast, emphasises the analysis of texts and contexts, fostering the ability to critique and challenge dominant discourses and power structures (Janks, 2010). These dimensions are integrated within the programme to support adult learners in addressing real-world challenges through the development of skills that are both foundational and transformative.
The Dispurse Literacy Programme (DLP)
Theoretical Framework of the DLP
The literacy programme assessed for this study is theoretically grounded in Franker’s (2016) Resource Model[i], an extended adaptation of Freebody and Luke’s Four Resources Model (1990). Franker’s model incorporates learners’ existing resources, adapting them to adult literacy acquisition, and aligns with Barton’s (2007) conceptualisation of literacy as a set of social practices used for communication and for representing the world to oneself. It also integrates Janks’ (2010) critical perspectives on literacy and power[ii]. Central to Franker’s model is the assumption that adult literacy development begins with learners’ linguistic, literacy-related, and socio-cultural resources. Building on these, learners engage in solving real-world problems through the four key literacy practices: code-breaking, which involves decoding texts and understanding their linguistic structures; meaning-making, which focuses on constructing meaning from texts; text analysis, which entails critiquing and evaluating texts within their socio-cultural contexts; and text application, where literacy skills are applied to address practical challenges and scenarios. By embedding these practices into their daily activities, learners cultivate a functional and meaningful literacy foundation.
The DLP is structured around three components: Focus —a mobile application— and its complementary materials; literacy spaces; and the promotion of community schools. Focus is a central part of the DLP and has been designed for use on tablets without requiring Wi-Fi access, enabling autonomous and offline learning for individual users. Trained facilitators monitor participants’ progress and guide their learning process during regular meetings. The complementary materials, including printed resources and workbooks, are designed to supplement and deepen the digital activities carried out on the application. These materials provide additional opportunities to enhance literacy skills, supporting both individual and group learning, and enable participants to build on their specific knowledge, experiences and expertise. Within literacy spaces, participants gather to engage collectively with reading materials of shared interest. This community-based approach fosters collaboration and the exchange of experiences among learners, creating a supportive environment that promotes the practical application of literacy skills and the development of meaningful connections (Johansson, Franker, 2023).
The DLP underscores the practical application of the skills it aims to develop, such as reading and using numerical data (e.g. time, dates, and measures) and performing basic calculations, aligning with its functional and critical literacy goals. While functional literacy focuses on the practical abilities required to manage everyday tasks and responsibilities, such as reading instructions, writing applications, or handling financial transactions (Hull, 2000), critical literacy equips individuals to analyse texts and contexts critically, enabling them to challenge dominant discourses and advocate for social change (Janks, 2010).
The programme focuses on fostering both literacies[iii], prioritising skills that are immediately relevant to the learners’ daily lives. This includes the ability to read and comprehend various types of texts, empowering participants to navigate written information effectively in contexts such as understanding instructions or informational materials, as well as analysing and discussing that information.
Writing skills are similarly emphasised, enabling learners to compose texts for functional purposes, such as filling out forms, writing letters, or taking notes, as well as adopting a critical position on the content and form of their own writing. Furthermore, the programme integrates numeracy by teaching participants to read and use numerical data, including interpreting time, dates, and measurements, which are crucial for tasks such as scheduling, cooking, and basic problem-solving. Learners also develop the ability to perform fundamental calculations, a skill that supports essential activities like budgeting, shopping, and managing resources. Reading and using numerical data, as well as performing basic calculations, are all approached from a critical perspective. These practical competencies are central to the programme’s mission of promoting literacy by ensuring that participants can apply their skills in meaningful and contextually relevant ways to enhance their autonomy and quality of life (Johansson, Franker, 2023).
The assessment tool, a paper-based test, was developed to evaluate basic reading, writing, and numeracy skills among women in rural Cajamarca who participated in the DLP. It aims to capture the abilities developed through the programme by proposing situations that mimic real-life experiences, reflecting the focus of the DLP.
The instrument evaluated the participants’ ability to read and write texts; interpret numerical information, such as individual numbers, measures, time, and date; and solve problems involving addition and subtraction, taking the socio-cultural environment into consideration. The assessment seeks to recreate, rather than directly test, how participants might engage with these skills in real-life scenarios.
The more advanced items involve reading and writing texts that participants may encounter in their daily lives (e.g. an identification document or an advertisement) or may need to produce, either orally or in writing (e.g. providing recommendations for a child, or completing a form with their personal information), when such situations were applicable. The more advanced items for numeracy skills assess using numbers and basic operations useful for managing real-life situations (e.g. buying different products in the market).
The assessment was designed with the intention of ensuring relevance and practicality, focusing on tasks that reflect situations the participants are likely to encounter. While it cannot fully replicate real-world conditions, it attempts to create familiar contexts where literacy skills can be applied. By contextualising literacy and numeracy items within real-life situations, we aimed to reflect the broader understanding of literacy as a set of social practices (Beder, 1999; McKenna, Robinson, 1990; Shi, Tsang, 2008), and to align the assessment tool with the approach and goals of the DLP[iv].
Abilities measured by the instrument
The instrument evaluated literacy and numeracy competencies as defined by the current Peruvian basic curriculum of the Alternative Basic Education system for young people and adults, for the end of the second grade or the beginning of the third grade of Primary —Initial and Intermediate Cycle, respectively (PEBAJA, Ministry of Education, 2005). This implies prioritising goals over expected results, since the time devoted to literacy and numeracy over two school years exceeds the time an adult devotes to activities in a literacy programme. The instrument measured reading and writing, up to text comprehension and production as defined for the Integral Communication area by PEBAJA[v]. We also measured the use of numbers and basic operations, as in the Mathematics area of PEBAJA[vi]. Our assessment employed standard Peruvian Spanish, the variety used nationwide in school materials as well as in nearly all formal educational resources.
The reading section of the test assessed decoding —the ability to recognise and associate graphemes with their corresponding phonemes— and the application of this knowledge to reading aloud. The role of a specific linguistic ability like decoding in reading comprehension performance is well established in the literature (Cain, 2015; Proctor et al., 2014), and there is also evidence that variability in reading comprehension performance can be explained by individual differences in decoding and language comprehension, in Spanish as well as in other languages (De Mier et al., 2012; Ripoll Salceda et al., 2014, offer a systematic review; Catts, 2018, offers updated references on the topic).
Moreover, it measured text comprehension by asking participants to identify explicit information and to infer implicit information within the text. Besides foundational skills like vocabulary, high-level skills —such as inference, integration, and knowledge and use of text structure— play an important role in the construction of the mental model of the text (Cain, 2015; Oakhill, Cain, 2007).
Writing was measured by analysing the microstructure and macrostructure of the texts participants were instructed to write. Finally, the numeracy assessment tested the participants’ ability to read and write numbers, to recognise relations between them (such as lower vs higher), and to solve basic addition and subtraction operations, as well as to read times and dates.
The instrument was especially designed to evaluate the skills of former female participants of the programme living in a rural area of Cajamarca, in the Peruvian Andes. In order to produce a culturally appropriate instrument that could still assess the relevant skills in this group, information about the population was gathered.
A Community Questionnaire was administered to five leaders (four men and one woman) of some of the communities where the programme was implemented in the province of Cajamarca: Alto Miraflores, Cashaloma, Cristo Rey, El Triunfo, La Molina, La Retama, La Shilla, Llagonarca, Puylucana and Shilla Moyococha.
The Community Questionnaire provided insights into the characteristics of these communities. Predominant economic activities included agriculture, livestock farming, and trade, with varying degrees of industrialization within some communities. Accessibility to district capitals varied among communities, with travel times ranging from 10 minutes to an hour, contingent upon transportation modes (walking, public transport, etc.) and specific community locations. Access to essential food products and services (water, electricity, and gas) was generally rated as satisfactory, although internet connectivity was reported as inadequate. Spanish is the primary language spoken within these communities.
With respect to gender-based violence, responses were mixed, with all male participants either denying the prevalence of such violence or citing insufficient information, while the female participant confirmed its existence. Assistance programmes are prevalent, notably the “Vaso de leche” (“Glass of milk”) initiative, primarily aimed at providing sustenance to vulnerable populations, particularly children. Dispurse was highlighted as the sole adult literacy programme in the area by four respondents, while the other mentioned various government-run programmes. Health services appeared to be lacking, forcing people to travel between 20 minutes and 2 hours to reach hospitals or clinics.
Civil organisations, including religious groups and organised peasant communities, as well as associations dedicated to fighting crime, such as ronderos, were acknowledged as active within the communities.
Moreover, information on the participants of the programme was gathered through regular communication with members of the Dispurse team working in Cajamarca, who interact frequently with the programme’s users.
The instrument comprised three main parts: reading, writing and numeracy. All employed standard Spanish, even though the variety spoken by the users of the programme is Andean Spanish[vii]. The reason for this choice is twofold. First, even though this regional dialect has intrinsic communicative and expressive value, the current Peruvian context denies its speakers the power and agency they deserve in certain standard official domains. Complementary access to powerful forms of language, such as the prestigious variety of standard Peruvian Spanish, empowers people and opens up more opportunities for better life conditions. In this context, literacy in Spanish provides women with more life opportunities and the chance to seek better living conditions for themselves and their families. The second reason follows from the first: materials in literacy programmes, as well as school reading materials, are also written in the standard variety.
Reading
The reading part began with a decoding task, which measured decoding independently of participants’ lexical and morpho-syntactic abilities. Participants were presented with a list of 50 two-syllable pseudowords (non-words that nonetheless follow the phonotactic rules of Spanish) from the Early Grade Reading Assessment (EGRA)[viii] by Gove et al. (2009; Dubeck, Gove, 2015), to be read aloud within a maximum time of one minute. Two decoding measures were calculated: fluency —the number of pseudowords read in a minute— and accuracy —the number of pseudowords accurately read in a minute. Decoding was also evaluated in a more natural context with an item instructing participants to read aloud the title of the continuous narrative text employed for evaluating comprehension[ix].
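The two decoding measures can be computed directly from the administration record. The sketch below is a minimal illustration of this kind of per-minute scoring (the function name, its arguments, and the early-finish extrapolation rule are our assumptions, not taken from the instrument itself):

```python
def decoding_scores(attempted, correct, seconds_used=60):
    """Fluency and accuracy for a timed pseudoword-reading task.

    attempted    -- number of pseudowords the participant read aloud
    correct      -- how many of those were read accurately
    seconds_used -- time actually taken, capped at the 60-second limit;
                    a participant who finishes the list early is
                    extrapolated to a per-minute rate
    """
    minutes = min(seconds_used, 60) / 60
    fluency = attempted / minutes   # pseudowords read per minute
    accuracy = correct / minutes    # pseudowords accurately read per minute
    return fluency, accuracy

# A participant who attempts 38 items in the full minute, 31 accurately:
print(decoding_scores(38, 31))  # (38.0, 31.0)
```

Keeping fluency and accuracy as separate per-minute rates, rather than a single ratio, preserves the distinction between reading speed and reading precision that the two measures are meant to capture.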
Reading comprehension was assessed by requiring participants to obtain both explicit and implicit information from four different texts via a multiple-choice format. The first two texts were non-continuous, and so offered extra-linguistic information —such as images, colour variation in fonts, and spatial arrangements of text. The first (R1) was the Peruvian national identity document (DNI, by its acronym in Spanish) in its previous format, since residents of the localities involved have generally not renewed their DNIs to the most current version.
The second text (R2) was an announcement from a campaign providing economic assistance during the Covid-19 pandemic, which received widespread national coverage during that period. The last two were continuous texts a paragraph long: a condensed indigenous narrative unrelated to the participants’ context (narrative, 117 words, R3) and a description of the meat dehydration process (instructional, 51 words, R4). Both texts were chosen for specific reasons. On the one hand, R3 provided participants with a narrative that did not rely on familiarity with texts from the hegemonic culture and, at the same time, was new to the readers, thus allowing assessment of reading as a way of accessing completely new information via text. On the other hand, R4 was an instructional text selected because it described a process likely to be familiar to the participants yet unlikely to have been learned through reading. Again, this allowed assessment of reading as a means of accessing new information.
All four texts, then, were chosen aiming for a balance between familiarity and novelty, in order to diminish the effect of previous experience with similar types of texts on reading comprehension performance. However, the degree of familiarity with the form and structure of the texts might have varied, favouring text R1 (DNI), probably the most important document to be read by users of the programme.
Writing
The writing part of the test assessed both the microstructure —the language used in narration— and the macrostructure —the global narrative characteristics— of the texts produced. The items in the test were presented within contexts that recreated possible real situations in adult life. Across four subparts, participants were required to write personal information (name, surname and address) and three texts: advice for a child on his or her first day of school (W1: Advice), the description of an elaboration process, such as a recipe (W2: Elaboration process), and a narration (W3: Story) following a story presented through vignettes (MAIN test by Gagarina et al., 2012; in its Spanish version by Ezeizabarrena, García, 2020[x]).
The first subpart of the test was designed to measure the most basic level of expected writing performance: producing personal information (particularly the full name), along with a signature, constitutes the initial instruction in any literacy programme for adults. At a higher level was text W1: participants were provided with limited, designated spaces to write in, so that each piece of advice they wished to convey was restricted to one or two sentences. The aim was to assess performance at a level accessible to as many participants as possible, yet meaningful enough to generate natural production.
Texts W2 and W3 imposed more demanding tasks on writers. To elicit and analyse responses to W3, we chose Gagarina et al.’s (2020) work, originally designed to assess children’s oral production. This selection was due to a significant advantage: its vignettes —which depict a situation easily recognisable within the participants’ context, wherein a goat encounters a fox and a bird— were specifically designed for cross-cultural use and analysis in multiple languages. Regarding conventional writing, information on knowledge of graphical marks was obtained through a question in the reading part and a specific score derived from text writing (see Table 1). Moreover, awareness of the function of graphical marks (font and colour) during reading was assessed with a multiple-choice item, and the use of graphical marks in texts (punctuation, and capital vs lowercase letters) was also considered when evaluating writing.
Numeracy
The numeracy test comprised seven subparts, which assessed these skills both in isolation (decontextualised) and in contexts more familiar to the participants.
Part 1 assessed number reading, or decoding, in different formats. First, participants were asked to read aloud numbers presented in isolation (decontextualised, so digits only) in increasing complexity. Then they were asked to read aloud numbers presented in specific contexts considered common in their everyday lives, such as money amounts, food weights and time periods. This involved not only decoding the numbers themselves (both whole and non-integer) but also the symbols accompanying them, such as S/ (for soles, the Peruvian currency), kg or gr (for kilograms or grams), and min (for minutes), as well as identifying the word horas (hours) when expressing time durations. Part 2 required participants to write numbers after hearing the test administrator read them aloud. These were both whole and non-integer numbers, again in increasing complexity; two of them involved writing the S/ symbol, or the word soles or céntimos, depending on the participant’s preferred choice.
Part 3 assessed participants’ knowledge of how numbers are ordered on a number line, requiring them to arrange numbers in either ascending or descending order. Part 4 also assessed the skill of comparing numbers (which underlies performance in Part 3), but in a contextualised way: we asked participants to determine, based on written money amounts, which of two characters (represented by pictures) had more money.
Because the DLP aims to provide participants with numeracy skills that prove useful in their daily lives, in Part 5 we tested addition and subtraction via a multiple-choice format and situations familiar to them, such as paying a bill at the hardware store, shopping at the market and dealing with delays in doctors’ appointments. This way we tested whether participants could use the acquired knowledge in settings close to those of their daily lives.
Finally, Parts 6 and 7 evaluated participants’ abilities to read a clock or watch to tell the time, and to read a calendar to tell the date, as well as to perform simple calculations based on these. Both multiple-choice and open-answer questions were used in these last two parts.
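Results of this kind are typically summarised as percentages correct, per subpart and overall. The sketch below illustrates one plausible way to do this, assuming equal weighting of items pooled across subparts; the grouping of subparts and the counts are hypothetical and do not reproduce the authors’ actual scoring scheme or data:

```python
def numeracy_percentages(scores):
    """scores: dict mapping each numeracy subpart to (correct, total) counts.

    Returns (per-part percentage correct, overall percentage across all items).
    """
    per_part = {part: 100 * c / t for part, (c, t) in scores.items()}
    total_correct = sum(c for c, _ in scores.values())
    total_items = sum(t for _, t in scores.values())
    overall = 100 * total_correct / total_items
    return per_part, overall

# Hypothetical participant; item totals follow the numeracy rows of Table 1
example = {
    "reading numbers":      (19, 21),  # digits + money + food + time items
    "writing by dictation": (6, 7),
    "ordering/comparing":   (8, 9),
    "maths problems":       (7, 8),
    "time and dates":       (5, 6),
}
per_part, overall = numeracy_percentages(example)
print(round(overall, 1))  # 88.2
```

Pooling items before dividing (rather than averaging the per-part percentages) gives each item equal weight, so subparts with more items contribute proportionally more to the overall figure.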
Table 1. Instrument structure
| Task | Items |
| --- | --- |
| Part 1: READING | |
| Reading from a list of 50 pseudowords (time limit: 1 minute) | 1 item |
| Reading the title of the (continuous) narrative text | 1 item |
| Reading out loud (words and phrases) | 3 items |
| Reading comprehension of non-continuous and continuous texts (narrative and instructional): explicit information | 5 items |
| Reading comprehension of non-continuous and continuous texts (narrative and instructional): implicit information | 6 items |
| Part 2: WRITING | |
| Personal information | 3 items |
| Giving advice to a child on her first day of school | Minimum one sentence |
| Elaboration process (instructional text) | 1 text |
| Story (narrative text) | 1 text |
| GRAPHICAL MARKS | |
| Interpreting graphical marks | 1 item in the reading part |
| Employing graphical marks | A specific score in the writing part |
| Part 3: NUMERACY | |
| Reading numbers out loud (pure digits) | 11 items |
| Reading money amounts | 4 items |
| Reading substance (food) amounts | 3 items |
| Reading time periods | 3 items |
| Writing numbers by dictation | 7 items |
| Ordering numbers from smallest to largest | 3 items |
| Ordering numbers from largest to smallest | 3 items |
| Identifying who has more money | 3 items |
| Maths problem (hardware store) | 3 items |
| Maths problem (doctor’s appointment) | 2 items |
| Maths problem (shopping in the market) | 3 items |
| Reading time from different clocks/watches (and related operations) | 4 items |
| Reading dates in calendars (and related operations) | 2 items |
The pilot study. We carried out a pilot study to refine the instrument and the administration process, including timing. Members of Dispurse’s local team were trained by the authors on how to administer the tests to five former users of the programme. Additionally, the team was provided with a manual to facilitate the administration process, and maintained continuous communication with the authors to address any queries and exchange relevant information.
Based on the pilot study results, minor modifications and one major change were made to the tests. In the original version of the writing test, text W2 required participants to write rules of peaceful coexistence for children at school. The pilot showed that, in all cases, the resulting texts consisted of direct instruction or advice to children on appropriate behaviour; for this reason, the final version asked for advice instead. The literacy component of the evaluation, considered too demanding for the participants, was shortened from its original version without compromising its scope or sensitivity. Furthermore, it was decided that the reading and writing tests would be administered at the beginning of the assessment to mitigate the fatigue effects observed among participants who completed the literacy component after the numeracy component.
The final study. A total of 49 women who had just finished the DLP participated in the final study. They lived in seven villages, all in Cajamarca province. Their ages varied between 23 and 44 years (mean [M] = 36.43, standard deviation [SD] = 5.46). They had between 0 and 4 years of schooling (M = 1.67, SD = 1.28) but, as they declared, still felt they needed to learn to read. Most were mothers (only one woman had no children), with between 1 and 6 children each (M = 2.55, SD = 1.32). In terms of their main occupation, 89.8% said they were housewives. Regarding their access to technology and communication devices, only 38.8% had a TV, whereas 98% had a mobile phone; 89.8% reported having a radio, and none had a personal tablet at home. Finally, in terms of their literacy and numeracy level on entering the programme, participants had been given a very simple, basic test: 34.7% were classified as being at a very initial learning stage, 63% as being in the process of acquiring those skills, and 2% (a single participant) as having already achieved them, although she chose to go ahead with the programme.
Participants were orally asked for their informed consent, which was recorded on video. They were clearly informed that they could refuse to participate in the assessment or stop at any time if they so desired and that there would be no negative consequences if they chose to do so.
The final version of the instrument was administered by the same Dispurse local team members who took part in the pilot, at a time and place of the participants' convenience, previously agreed with them.
Results of the implementation
Reading. Decoding in isolation (pseudoword list) was measured by calculating fluency and accuracy. Two of the 49 participants did not perform this part of the test: one was, in fact, not able to read, while the other chose not to engage. In order to offer an interpretation of the data, the results of the 47 participants who performed the test were compared with children's results on a different pseudoword reading test, even though adult literacy acquisition clearly differs in many respects from that of children, and different instruments may yield divergent decoding outcomes even when the measured levels are equivalent. The comparison data came from Peruvian third-grade students taking a similar test (Cayhualla et al., 2011, 2013[xi]), whose performance ranged from 43 to 143 words for fluency and from 33 to 42 words for accuracy. The DLP participants' scores showed a minimum of 2 words for fluency and 1 for accuracy, and a maximum of 50 for both, with a mean of 23.81 (SD = 12.86) for fluency and 21.85 (SD = 13.36) for accuracy (Table 2). These results mostly fell below the range of scores in Cayhualla et al.'s work (2011, 2013). In terms of fluency, 93.8% of the participants scored below the range, while only 6.3% scored within it. Accuracy yielded better results, however, with 72.9% of adult participants' scores falling below the third-grade range, 22.9% within it, and 6.2% above it (Table 3).
Table 2. Scores in pseudoword decoding (N = 47)

| | Min | Max | Mean | SD |
| --- | --- | --- | --- | --- |
| Fluency (pseudowords in a minute) | 2 | 50 | 23.81 | 12.86 |
| Accuracy (correct pseudowords in a minute) | 1 | 50 | 21.85 | 13.36 |
Table 3. Participants' performance compared to the range data in Cayhualla et al. (2011; N = 47)

| | Below the range | Within the range | Above the range |
| --- | --- | --- | --- |
| Fluency (pseudowords in a minute) | 93.8% | 6.3% | 0% |
| Accuracy (correct pseudowords in a minute) | 72.9% | 22.9% | 6.2% |
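The below/within/above comparison summarised in Table 3 amounts to classifying each adult score against the third-grade reference range and reporting percentages. A minimal Python sketch of that step (the scores shown are illustrative, not the study data):

```python
def classify_against_range(scores, range_min, range_max):
    """Return the percentage of scores falling below, within and above
    a reference range (e.g. the third-grade fluency range of 43-143)."""
    n = len(scores)
    below = sum(1 for s in scores if s < range_min)
    above = sum(1 for s in scores if s > range_max)
    within = n - below - above
    return {"below": round(100 * below / n, 1),
            "within": round(100 * within / n, 1),
            "above": round(100 * above / n, 1)}

# Illustrative fluency scores classified against the 43-143 range.
distribution = classify_against_range([10, 50, 200], 43, 143)
```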
Decoding was also evaluated directly in a natural context by asking participants to read aloud the title of the continuous narrative text (R3, La leyenda de Kospi [The legend of Kospi]) used for assessing comprehension. When asked to read the title, 29 out of 49 complied, while 20 did not. However, 14 of the 20 participants who did not read the title subsequently read the entire legend (when asked to do so, either silently or aloud, before being asked the comprehension questions), leaving only 6 who read neither the title nor the text (Table 4). Three of these 6 participants, however, read the non-continuous texts and wrote short texts when required. We can speculate on some reasons for this apparent incongruity. Participants may not have felt confident enough to deal with the continuous narrative text presented to them, which, unlike the non-continuous texts, was probably unfamiliar (it is an indigenous legend from a completely different part of South America) and offered no support through images, colour variation in fonts, or the spatial arrangement of text. Decoding proficiency may also have played a role: these specific participants performed below the selected comparison range for both fluency and accuracy, and reading a text with a higher word count than the shorter non-continuous texts can pose a significant challenge when decoding ability is still developing and not fully automated.
Table 4. Text and title reading of La leyenda de Kospi (N = 49)

| Did not read the text or the title | Read the text but not the title | Read both the text and the title |
| --- | --- | --- |
| 6 | 14 | 29 |
Reading comprehension was measured with items for both identifying explicit information and inferring implicit information from the four texts presented. First, performance on all the questions was examined, independently of the associated text, dividing them into explicit (total: 4) and implicit (total: 5) questions.
Results showed an overall good performance on reading comprehension (Table 5) across both types of questions, with 88% of participants providing correct answers to at least three out of four explicit questions and 84% to at least three out of five implicit questions (Table 6). This indicates that, despite the initial phase of reading acquisition suggested by their low accuracy and fluency in decoding, participants appear to have acquired sufficient skills to read and comprehend texts such as those employed in the test, at least to a certain extent.
Table 5. Correct answers for explicit and implicit questions (N = 43)

| Type and number of questions | Min | Max | Mean | SD |
| --- | --- | --- | --- | --- |
| Explicit questions (4) | 1 | 4 | 3.26 | 0.73 |
| Implicit questions (5) | 1 | 5 | 3.51 | 1.08 |
Table 6. Distribution of participants by number of correctly answered questions (maximum number of questions: 4 for explicit information, and 5 for implicit information; N = 43)

| Number of correctly answered questions | Explicit questions | Implicit questions |
| --- | --- | --- |
| 5 | - | 8 |
| 4 | 17 | 15 |
| 3 | 21 | 13 |
| 2 | 4 | 5 |
| 1 | 1 | 2 |
| 0 | 0 | 0 |
In order to examine whether there were differences between types of questions and types of texts, a one-way ANOVA (a statistical test that compares the means of three or more groups to determine whether there are statistically significant differences among them) was conducted. The mean proportions of correct answers were compared across the following four groups: non-continuous texts - explicit questions, non-continuous texts - implicit questions, continuous texts - explicit questions, and continuous texts - implicit questions (see Table 6). The results indicated statistically significant differences among the groups: F(3, 168) = 4.67, p < .005, η2 = .08[xii].
To further investigate the nature of the significant differences between groups found with the ANOVA, a post-hoc analysis was conducted using Tukey’s HSD test. Post-hoc analyses are performed after a significant ANOVA result to identify specifically which groups differ from each other. Tukey’s HSD test was chosen because it allows the comparison between all possible pairs of groups, while controlling for the overall error rate, ensuring reliable findings.
The results of the post-hoc analysis revealed significant differences in mean scores between explicit questions about the non-continuous texts, on the one hand, and both explicit (p < .005) and implicit (p < .05) questions about the continuous texts, on the other. The higher scores on explicit questions about the non-continuous (and probably familiar) texts suggest that these questions were easier for the participants than explicit and implicit questions about the continuous (less familiar) texts. This suggests that question type, text structure and, probably, familiarity play an important role in the ease with which participants can answer comprehension questions.
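For transparency, the quantities behind figures such as F(3, 168) = 4.67 and η² = .08 can be computed directly from the group scores. A minimal pure-Python sketch of a one-way ANOVA (the data below are illustrative, not the study scores; with four groups of 43 proportion scores each, the degrees of freedom would come out as 3 and 168, as reported above):

```python
from statistics import mean

def one_way_anova(groups):
    """One-way ANOVA: F statistic, degrees of freedom and eta squared
    (the effect size reported alongside F in the text)."""
    observations = [x for g in groups for x in g]
    grand_mean = mean(observations)
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(observations) - len(groups)
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    eta_sq = ss_between / (ss_between + ss_within)
    return f_stat, df_between, df_within, eta_sq

# Three illustrative groups of three observations each.
f_stat, df_b, df_w, eta_sq = one_way_anova([[1, 2, 3], [2, 3, 4], [4, 5, 6]])
```

In practice a library routine would also supply the p value; the sketch only shows where the reported F, degrees of freedom and η² come from.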
Table 6. Proportion of correct answers per type of text and type of question (N = 43)

| Groups | Min | Max | Mean | SD |
| --- | --- | --- | --- | --- |
| Non-continuous texts - explicit questions (3 questions) | 0.33 | 1 | 0.88 | 0.19 |
| Non-continuous texts - implicit questions (2 questions) | 0 | 1 | 0.73 | 0.29 |
| Continuous texts - explicit questions (1 question) | 0 | 1 | 0.63 | 0.49 |
| Continuous texts - implicit questions (3 questions) | 0 | 1 | 0.68 | 0.29 |
Finally, awareness of the function of font colour and size was assessed via a question asking why big yellow letters were used in the advertisement, with 40 participants (out of 48) choosing the correct answer (“because it is important information”).
Writing. Out of 49 participants, 44 were able to write their personal information and at least a very short text. The analysis for text writing was based on form and structure[xiii].
The form of the text was analysed through the number of words and clauses, the difference between conceptual and written words, and an overall error rate that included orthography, capitalisation and punctuation.
Variability was found across the participants' texts on all measures. Length varied in terms of both word and clause count (Table 7). The difference between the number of conceptual words and the number of written words served as a measure of word segmentation in writing; although this measure was also variable (Table 8), the most common situation was having the same number of conceptual and written words (Mode = 0). Finally, the error rate, which counted errors in punctuation, orthography and the choice of capital or lowercase letters per total number of written words, showed variability as well (Table 9).
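A sketch of how the form measures for a single text combine, following the definitions above (the numbers are a hypothetical example, not a participant's text, and the function signature is ours, not the authors' coding scheme):

```python
def form_measures(written_words, conceptual_words,
                  punctuation_errors, orthography_errors, capitalisation_errors):
    """Form measures for one text: the segmentation difference (conceptual
    vs written words) and the overall error rate (errors per written word)."""
    segmentation_diff = abs(conceptual_words - written_words)
    error_rate = ((punctuation_errors + orthography_errors +
                   capitalisation_errors) / written_words)
    return segmentation_diff, round(error_rate, 2)

# Hypothetical text: 20 written words for 21 conceptual words, 4 errors in total.
diff, rate = form_measures(20, 21, 1, 2, 1)
```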
Table 7. Text length, in words and clauses (N = 44)

| | Min words (clauses) | Max words (clauses) | Mean words (clauses) | SD words (clauses) |
| --- | --- | --- | --- | --- |
| Text W1: Advice | 3 (1) | 45 (6) | 8.75 (2.25) | 7.73 (0.85) |
| Text W2: Elaboration process | 4 (2) | 59 (13) | 16.98 (5.05) | 10.91 (2.18) |
| Text W3: Story | 13 (3) | 128 (23) | 34.57 (6.61) | 22.24 (3.83) |
Table 8. Difference in number of conceptual and written words (N = 44)

| | No difference | One word | Two words | Three words | Four or five words |
| --- | --- | --- | --- | --- | --- |
| Text W1: Advice | 30 | 13 | 1 | 0 | 0 |
| Text W2: Elaboration process | 33 | 7 | 3 | 1 | 0 |
| Text W3: Story | 24 | 12 | 2 | 4 | 2 |
Table 9. Punctuation, orthography and capitalisation error rate (N = 44)

| | Min | Max | Mean | SD |
| --- | --- | --- | --- | --- |
| Text W1: Advice | 0 | 0.79 | 0.30 | 0.19 |
| Text W2: Elaboration process | 0 | 0.56 | 0.22 | 0.13 |
| Text W3: Story | 0.08 | 0.39 | 0.17 | 0.07 |
Text structure was examined for texts W2 and W3. Regarding W2, out of 44 participants, 42 named the process (title) and presented both the required elements and the elaboration instructions. Moreover, since 40 participants chose to write a recipe, it was assessed whether the text adhered to the conventional structure of a written recipe, which typically presents the ingredients first, followed by the preparation steps: only 7 cases exhibited this structure. The relationship between structure and vocabulary was examined by counting the nouns representing elements of the process and the verbs used to describe the process itself. The number of nouns and verbs varied among participants from 1 to 10 (nouns) or 8 (verbs), with means of 3 nouns (SD = 2.26) and 3.34 verbs (SD = 1.64).
In order to analyse the structure of the story (text W3), the guidelines by Gagarina et al. (2012) for children's oral production were employed. They served as a tool for organising and identifying the structure of a story based on its vignettes; there was no intention to compare children's oral production with adult written production. The structure of the texts was examined to ascertain whether it reflected the traditional Western tale the vignettes are expected to trigger, with a title, a setting, three episodes (a goat taking a bath with its kids while a fox watches them, the fox attempting to catch one of the little goats, and a bird attacking the fox, causing it to lose its prey), and a goal, an intent and a result.
A general score was obtained by awarding one point each for the title and the setting, and one point for the initial internal state (at the beginning of the episode), the reactive internal state (as a reaction to the events, at the end of the episode), the goal, the intent and the result in each of the three episodes, for a maximum score of 17.
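This scoring scheme can be sketched as follows, assuming each episode is coded as the set of structural categories it expresses (the encoding is ours, for illustration; the point values follow the description above):

```python
STORY_CATEGORIES = ("initial internal state", "reactive internal state",
                    "goal", "intent", "result")

def story_structure_score(has_title, has_setting, episodes):
    """One point each for title and setting, plus one point per category
    expressed in each of the three episodes: maximum 2 + 5 * 3 = 17."""
    score = int(has_title) + int(has_setting)
    for episode in episodes:  # each episode: set of categories expressed
        score += sum(1 for cat in STORY_CATEGORIES if cat in episode)
    return score

# A story with a title, no setting, and sparse episode structure scores 4.
example = story_structure_score(True, False,
                                [set(), {"result"}, {"goal", "result"}])
```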
The general scores based on this analysis showed considerable variability (Min = 2, Max = 10, M = 5.18, SD = 1.67), with some patterns emerging (see Table 10 for percentages). First, almost all participants chose to give the story a title (42 out of 44), which might be a way of conveying the idea that it is a tale (titles are not usually given to real anecdotes) and of introducing the reader to what it is going to be about, and most introduced a setting (27 out of 44). Most participants never presented the internal state of the characters at the beginning of an episode: 63.64% produced no expressions of initial internal states. The other categories were expressed much more often, with 59.09% of participants expressing at least one reactive internal state; 61.36%, at least one goal for the characters; and 70.45%, one to three intents of the characters to do something. Moreover, all participants expressed the results of the events in the story at least once. These results suggest a narration pattern focused on the action or the events rather than on the minds of the characters.
Table 10. Percentage of participants expressing 0, 1, 2 and 3 instances of internal states, goals, intents and results (N = 44)

| | No instances | One instance | Two instances | Three instances |
| --- | --- | --- | --- | --- |
| Initial internal state | 63.64 | 34.09 | 2.27 | 0 |
| Reactive internal state | 40.91 | 45.45 | 13.64 | 0 |
| Goal | 38.64 | 47.73 | 13.64 | 0 |
| Intent | 27.27 | 47.73 | 20.45 | 2.27 |
| Result | 0 | 22.73 | 45.45 | 29.55 |
As with text W2, the relationship between structure and vocabulary was examined by counting types of words: nouns referring to characters, verbs referring to events, and verbs and adjectives referring to internal states. The counts varied among participants (see Table 11).
Table 11. Types of words for referring to characters, events and internal states in text W3 (N = 44)

| | Min | Max | Mean | SD |
| --- | --- | --- | --- | --- |
| Characters (nouns) | 1 | 4 | 3.20 | 0.62 |
| Events (verbs) | 1 | 14 | 4.32 | 2.79 |
| Internal states (verbs, adjectives) | 0 | 3 | 1.00 | 0.76 |
Numeracy. Overall, performance on the numeracy test was good, with a mean proportion score of 87.92% (Min = 52.9, Max = 100, SD = 9.6). A closer look at the specific results reveals the participants' strengths and weaknesses (see Table 12). The participants' ability to read numbers in different formats and contexts (digits, food amounts, money sums) and to order them (in both ascending and descending order) achieved the highest performance levels (reading numbers proportion score: M = 90, Min = 47.6, Max = 100, SD = 13.1; ordering numbers proportion score: M = 94.9, Min = 33.3, Max = 100, SD = 13.7). When required to order numbers, participants performed better when an ascending (probably more natural) order was requested: although the average score was almost the same for both tasks (2.9 and 2.8, respectively, out of 3 items each), variation was higher when ordering numbers in a descending (less natural) order (SD = 0.7 vs 0.2 for ascending order).
Table 12. Main abilities tested in the numeracy test

| | Min | Max | Mean | SD |
| --- | --- | --- | --- | --- |
| Total Reading Involving Numbers Score (Max. 21) | 10 | 21 | 18.9 | 2.7 |
| Total Reading Involving Numbers (Proportion score) | 47.6 | 100.0 | 90.0 | 13.1 |
| Total Writing Numbers to Dictation Score (Max. 7) | 1 | 7 | 5.4 | 1.2 |
| Total Writing Numbers to Dictation (Proportion score) | 14.3 | 100.0 | 77.3 | 17.7 |
| Total Ordering Numbers Score (Max. 6) | 2 | 6 | 5.7 | 0.8 |
| Total Ordering Numbers (Proportion score) | 33.3 | 100 | 94.9 | 13.7 |
| Total Maths Operations Score (Max. 8) | 4 | 8 | 7.0 | 1.1 |
| Total Maths Operations (Proportion score) | 50.0 | 100.0 | 87.8 | 14.1 |
| Total Clock Reading Score (Max. 4) | 0 | 4 | 3.0 | 1.2 |
| Total Calendar Reading (Max. 3) | 1 | 2 | 1.8 | 0.3 |
However, participants' ability to write numbers after hearing them dictated, and to read the time from clocks and watches, was weaker (77.3% and 75%, respectively), suggesting somewhat unstable knowledge at this point. One source of difficulty, both when reading and when writing numbers, seems to be the decimal point: interpreting it, and the value it adds to the whole number sequence, proved problematic for participants, who produced responses that do not follow the conventional interpretation.
Because reading and writing numbers are basic numeracy abilities, we explored whether the two were correlated using Spearman's non-parametric correlation, a statistical test that determines whether two variables are related without assuming a specific distribution of the data. We found a significant weak positive correlation (r(49) = .309, p < .05)[xiv]: participants who performed better when reading numbers also tended to do better when writing them after hearing them dictated. It is unclear at this point why the correlation between these two skills was not higher, as we would have expected. Since participants performed better at reading than at writing numbers, aspects related to fine motor skills and coordination, as well as memory, might make the writing task more challenging.
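For readers wishing to reproduce this kind of analysis, Spearman's rho is simply the Pearson correlation computed on the ranks of the two variables. A self-contained Python sketch (the values in the example are illustrative, not the study data):

```python
def rankdata(xs):
    """Assign 1-based ranks, giving tied values their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        average_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = average_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: the Pearson correlation of the ranks of x and y."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# A perfectly monotone relationship yields rho = 1.
rho = spearman_rho([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
```

A statistics library would additionally report the p value against which the significance threshold (p < .05) is checked.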
The participants' performance on maths operations, involving both addition and subtraction, was very good (87.8%), and it was also weakly correlated with how well they wrote numbers (r(49) = .370, p < .05). Since participants did better on maths operations (a more complex skill) than on writing numbers, and almost as well as on reading them, the maths questions seem to have genuinely tapped into the participants' numeracy knowledge, probably because of the natural settings presented as context for the operations. The familiar feel these situations evoked could explain the very good performance found in this subpart of the test.
In sum, after completing the DLP, participants exhibited improved numeracy performance in general. In terms of specific numeracy skills, their performance was especially good when reading numbers in different contexts and when ordering numbers in either ascending or descending order. They still struggled to interpret the decimal point, a source of difficulty that led to errors in both writing and reading. Telling the time also proved to be a complex task at times, especially with analogue clocks. Some of their numerical abilities seem to be weakly related, revealing possible avenues for further promoting numeracy skills: the abilities to read and to write numbers (after dictation) were weakly correlated, as were the abilities to perform maths operations and to write numbers. Writing numbers thus appears to be a basic skill whose promotion could also further the other numeracy skills.
The elaboration and implementation of an assessment instrument for measuring the abilities acquired by a group of users of the DLP, a literacy programme that takes into account the learners' communicative and sociocultural context, was described. The aim of developing the instrument was to capture the participants' knowledge with a tailored tool, recognising their status as adults who actively manage their own lives and those of their children, engage in economic activities, and participate in social endeavours within their community. The instrument assessed the targeted skills through situations designed around the daily lives of the participants.
The administration of the assessment instrument to a group of women who completed the DLP yielded results that delineate the nuanced nature of their acquired skills. For instance, participants demonstrated proficiency in comprehending texts even while still developing decoding automation. Moreover, the results obtained through the instrument provided insights into the participants' ability to apply their knowledge in practical, real-world scenarios: participants demonstrated their capacity to interpret texts of varying degrees of familiarity and to perform mathematical operations to solve hypothetical purchasing situations. These tasks emphasise the use of reading, writing and numeracy skills to navigate everyday challenges, aligning with the DLP's goal of developing functional literacy. Additionally, some tasks, such as writing spontaneous texts like the recipe, deciding how to narrate the story in the vignettes, and answering questions about decisions in scenarios such as a doctor's appointment or shopping at the market, engaged participants in ways that require not only comprehension but also the analysis and evaluation of information to make informed decisions in context-specific situations, reflecting aspects of critical literacy. This tailored instrument aims to capture how adults navigate specific situations and utilise their knowledge and available resources to address practical problems, thereby contributing to a deeper understanding of the participants' abilities.
The outcomes of the administration reinforce our methodological approach of evaluating the literacy and numeracy competencies of adult learners with specific instruments. Learning undertaken by adults through a literacy programme such as the DLP diverges significantly, owing to their cognitive development trajectories and life experience, from the process by which children acquire foundational skills in formal educational settings or other instructional environments. Furthermore, the implementation of a tool designed around the sociocultural environment of women dwelling in rural areas of the Peruvian Andes provided insights into the resourcefulness with which participants employed their acquired knowledge to navigate the challenges of daily adult life. We trust that sharing this assessment tool and the account of its development will be useful for researchers engaged in similar studies, particularly those aiming to evaluate the outcomes of literacy programmes tailored to specific user demographics.
While developing and implementing an assessment instrument tailored to the target population of a literacy programme confers an important benefit, namely making it possible to capture the nuanced nature of the knowledge acquired by participants and how they can employ it, this study had limitations. Although efforts were made to achieve an optimal balance between familiarity and novelty for the participants in the design of the assessment scenarios, further refinement will be useful. Moreover, to enhance the utility of the instrument, more information on participants' degree of familiarity with the scenarios is needed to enable a more accurate interpretation of the data.
While the designed instrument allowed us to gather information on the knowledge acquired by DLP participants, further research is needed to capture the sustained impact of this knowledge on their lives. First, it is necessary to delve deeper into how participants apply their acquired knowledge in authentic real-life contexts in the future; this shift towards real-life application scenarios would provide a more detailed understanding of the practical implications of the programme's outcomes. Second, future studies should explore the longitudinal trajectory of participants' learning by assessing the sustainability of their knowledge over time and examining how it evolves and influences their lives. This longitudinal perspective will offer insights into the enduring impact of literacy interventions and shed light on the ongoing educational needs of adult learners in rural Peru.
ADDAE, D. Adults who learn: Evaluating the social impact of an adult literacy project in rural South Africa. Social Sciences & Humanities Open, 3(1), 100115, 2021. https://doi.org/10.1016/j.ssaho.2021.100115
BARTLETT, L. Literacy’s verb: Exploring what literacy is and what literacy does. International Journal of Educational Development, 28(6), 737–753, 2008. https://doi.org/10.1016/j.ijedudev.2007.09.002
BARTON, D. (2007). Literacy. An introduction to the ecology of written language. (2nd ed.). Blackwell Publishing
BEDER, H. (1991). Adult literacy: Issues for policy and practice. Malabar, FL: Krieger Publishing Co.
CAIN, K. Literacy development: The interdependent roles of oral language and reading comprehension. In R. H. Bahr; E. R. Silliman (Eds.), Routledge Handbook of Communication Disorders, 1st ed., pp. 556–584, 2015.
CATTS, H. W. The simple view of reading: Advancements and false impressions. Remedial and Special Education, 39(5), 317–323, 2018. https://doi.org/10.1177/0741932518767
CAYHUALLA, N.; CHILÓN, D.; ESPÍRITU, R. Adaptación de la Batería de Evaluación de los Procesos Lectores PROLEC-R en estudiantes de primaria de Lima Metropolitana [Master´s Thesis, Pontificia Universidad Católica del Perú]. PUCP Theses Repository, 2011. http://hdl.handle.net/20.500.12404/1309
CAYHUALLA, N.; CHILÓN, D.; ESPÍRITU, R. Adaptación psicométrica de la Batería de Evaluación de los Procesos Lectores Revisada (PROLEC-R). Propósitos y Representaciones, 1(1), 39-57, 2013. https://doi.org/10.20511/pyr2013.v1n1.3
CULLIGAN, N. (2005). Theoretical Understandings of Adult Literacy: A Literature review. Massey University Department of Communication and Journalism.
DE MIER, M. V.; BORZONE, A. M.; CUPANI, M. La fluidez lectora en los primeros grados: relación entre habilidades de decodificación, características textuales y comprensión. Un estudio piloto con niños hablantes de español. Revista Neuropsicología Latinoamericana, 4(1), 18–33, 2012. https://neuropsicolatina.org/index.php/Neuropsicologia_Latinoamericana/article/view/79
DUBECK, M.; GOVE, A. The early grade reading assessment (EGRA): Its theoretical foundation, purpose, and limitations. International Journal of Educational Development, 40, 315–322, 2015. https://doi.org/10.1016/j.ijedudev.2014.11.004
EZEIZABARRENA, M. J.; GARCÍA, I. The Spanish adaptation of MAIN. ZAS Papers in Linguistics, 64, 211–220, 2020. https://doi.org/10.21248/zaspil.64.2020.576
FRANKER, Q. (2016). Grundläggande litteracitet för nyanlända ungdomar. [Basic literacy for newly arrived youth]
FREEBODY, P.; LUKE, A. Literacies programs: Debates and demands in cultural context. Prospect: an Australian journal of TESOL, 5(3), 7-16. https://eprints.qut.edu.au/49099/1/DOC090312.pdf.
FREEDMAN, A. M.; MINER, K. R. ; ECHT, K. V. ; PARKER, R. ; COOPER, H. L. F. (2011). Amplifying diffusion of health information in Low-Literate populations through adult Education health Literacy classes. Journal of Health Communication, 16(sup3), 119–133. https://doi.org/10.1080/10810730.2011.604706
FUHRIMAN, A.; BALLIF-SPANVILL, B.; WARD, C.; SOLOMON, Y.; WIDDISON-JONES, K. Meaningful learning? Gendered experiences with an NGO-sponsored literacy program in rural Mali. Ethnography and Education, 1(1), 103–124, 2006. https://doi.org/10.1080/17457820500512887
GAGARINA, N.; KLOP, D.; KUNNARI, S.; TANTELE, K.; VÄLIMAA, T.; BALČIŪNIENĖ, I.; BOHACKER, U.; WALTERS, J. Main: Multilingual Assessment Instrument for Narratives. ZAS Papers in Linguistics, 56, 2012.
GOVE, A.; JIMÉNEZ, J.; CROUCH, L.; MULCAHY-DUNN, A.; CLARKE, M. Manual para la evaluación inicial de la lectura en niños de educación primaria. USAID - Agencia de los Estados Unidos para el Desarrollo Internacional, 2009.
GOODY, J. (ed.). 1968. Literacy in traditional societies. Cambridge, UK: Cambridge University Press.
HULL, G. (2000). Critical literacy at work. Journal of adolescent and adult literacy, 43, (7), 648 ‐ 652.
INEI. Instituto Nacional de Estadística e Informática (n.d.). Tasa de alfabetización de mujeres y hombres de 15 y más años de edad, según ámbito geográfico. Encuesta Nacional de Hogares (ENAHO), 2022. https://m.inei.gob.pe/estadisticas/indice-tematico/analfabetismo-y-alfabetismo-8036/
JANKS, H. (2010). Literacy and power. New York: Routledge
JANKS, H. (2013). Critical literacy in teaching and research1. Education Inquiry, 4(2), 225–242. https://doi.org/10.3402/edui.v4i2.22071
JOHANSSON, B. Dispurse Guidelines. Dispurse foundation, 2022. https://dispurse.org/media/miffobts/guidelines_eng_1.pdf
JOHANSSON, B.; FRANKER, Q. FOCUS - For a functional, digital, and critical literacy. EuroCALL 2023. CALL for All Languages - Short Papers, August 15, 2023. http://ocs.editorial.upv.es/index.php/EuroCALL/EuroCALL2023/paper/view/16981
KELLNER, D.; SHARE, J. Toward critical media literacy: Core concepts, debates, organizations, and policy. Discourse: Studies in the Cultural Politics of Education, 26(3), 369–386, 2005. https://doi.org/10.1080/01596300500200169
LONSDALE, M.; MCCURRY, D. Literacy in the new millennium. Commonwealth of Australia, 2004. https://www.ncver.edu.au/__data/assets/file/0012/2424/nr2l02.pdf
LUKE, A. Critical literacy: Foundational notes. Theory Into Practice, 51(1), 4–11, 2012. https://doi.org/10.1080/00405841.2012.636324
MCCAFFERY, K. J.; MORONY, S.; MUSCAT, D. M.; SMITH, S. K.; SHEPHERD, H. L.; DHILLON, H. M.; HAYEN, A.; LUXFORD, K.; MESHREKY, W.; COMINGS, J.; NUTBEAM, D. Evaluation of an Australian health literacy training program for socially disadvantaged adults attending basic education classes: study protocol for a cluster randomised controlled trial. BMC Public Health, 16(1), 2016. https://doi.org/10.1186/s12889-016-3034-9
MCKENNA, M. C.; ROBINSON, R. D. Content literacy: A definition and implications. Journal of Reading, 34(3), 184–186, 1990.
MINISTRY OF EDUCATION. Minedu. Diseño curricular básico nacional para los ciclos inicial e intermedio del Programa de Educación Básica Alternativa para Jóvenes y Adultos (PEBAJA), 2005. https://www.minedu.gob.pe/normatividad/reglamentos/CURRICULUM-PEBAJA.pdf
MUSCAT, D. M.; SMITH, S.; DHILLON, H. M.; MORONY, S.; DAVIS, E. L.; LUXFORD, K.; SHEPHERD, H. L.; HAYEN, A.; COMINGS, J.; NUTBEAM, D.; MCCAFFERY, K. Incorporating health literacy in education for socially disadvantaged adults: an Australian feasibility study. International Journal for Equity in Health, 15(1), 2016. https://doi.org/10.1186/s12939-016-0373-1
OAKHILL, J.; CAIN, K. Introduction to Comprehension Development. In K. Cain & J. Oakhill (Eds.), Children’s Comprehension Problems in Oral and Written Language: A Cognitive Perspective (pp. 3–40). The Guilford Press, 2007. https://psycnet.apa.org/record/2007-05218-001
PROCTOR, C. P.; DALEY, S.; LOUICK, R.; LEIDER, C. M.; GARDNER, G. L. How motivation and engagement predict reading comprehension among native English-speaking and English-learning middle school students with disabilities in a remedial reading curriculum. Learning and Individual Differences, 36, 76–83, 2014. https://doi.org/10.1016/j.lindif.2014.10.014
RIPOLL SALCEDA, J. C.; AGUADO ALONSO, G.; CASTILLA-EARLS, A. P. The simple view of reading in elementary school: A systematic review. Revista de Logopedia, Foniatría y Audiología, 34(1), 17–31, 2014. https://doi.org/10.1016/j.rlfa.2013.04.006
SHI, Y.; TSANG, M. C. Evaluation of adult literacy education in the United States: A review of methodological issues. Educational Research Review, 3, 187–217, 2008. https://doi.org/10.1016/j.edurev.2007.10.004
STREET, B. Literacy in theory and practice. Cambridge: Cambridge University Press, 1984.
UNESCO. Education for All Global Monitoring Report: Literacy for Life. UNESCO Publishing, 2006.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)
Notes
[i] FRANKER, Q. Part 8: Resurser och litteracitetspraktiker (Resources and literacy practices). In: Module: Emergent literacy. The Swedish National Agency for Education, 2016. Cited in Johansson and Franker (2020) and Johansson (2022).
[ii] Cited in Johansson and Franker (2020) and Johansson (2022).
[iii] The DLP also incorporates digital literacy into its framework. However, due to space constraints, we focus only on functional and critical literacies, as they are more relevant to the discussion.
[iv] Further insights into the acquisition of functional and critical literacy, as well as the application of these literacy skills in real-life situations by each participant, were gathered through semi-structured interviews conducted individually with a subgroup of participants. The analysis of this rich and extensive data is currently underway.
[v] For the Initial Cycle, PEBAJA defines reading comprehension as the ability to read both orally and silently, and to understand and appreciate short, simple texts. The definition also includes understanding audiovisual messages presented in mass-media programmes and advertisements and making value judgments about them, but this aspect was not employed in our assessment. Text writing is defined as writing legibly, producing short everyday texts that express the learners’ experiences, needs, feelings, and desires (Ministry of Education, 2005, pp. 24-25).
[vi] The Number and Operations component for the Initial Cycle is defined as the ability to use natural numbers to record, interpret, and communicate quantitative information about real-life situations. It also involves solving problems related to the students’ immediate environment by applying calculations and estimation skills for addition and subtraction of natural numbers and decimal numbers up to the hundredths place (Ministry of Education, 2005, p. 35).
[vii] Both dialects of Spanish are mutually intelligible. Differences may be phonological (pronunciation of specific segments or sounds), lexical (vocabulary), or morpho-syntactic (word- and sentence-formation processes and structures).
[viii] EGRA is a test composed of several sub-tasks that assess the skills contributing to reading acquisition in alphabetic languages. We administered the pseudoword sub-task, previously employed to assess decoding in the Peruvian population by the Young Lives Study (https://www.younglives.org.uk/peru).
[ix] Originally, data on word reading were also to be gathered, but due to administration errors only phrase reading was tested.
[x] MAIN is the Multilingual Assessment Instrument for Narratives developed within the framework of the COST Action IS0804 “Language Impairment in a Multilingual Society: Linguistic Patterns and the Road to Assessment” by Natalia Gagarina and collaborators for evaluating narrative abilities of bilingual children. The protagonists and objects were selected by the authors to ensure they are familiar to different populations and cultures.
[xi] Cayhualla and collaborators administered the Batería de Evaluación de los Procesos Lectores Revisada (Revised Battery for the Evaluation of Reading Processes, PROLEC-R) to a group of 504 primary school students. As a reference standard, we used the results of the decoding subtask, which involves pseudowords, administered to the 84 students in the 3rd-grade group.
[xii] The F-statistic in ANOVA tests whether the differences between group means are greater than the differences within each group; an F-value of 4.67 suggests a meaningful difference between the groups. The p-value indicates the likelihood that the observed results occurred by chance: a value below .05 (or another predetermined threshold) means the differences between groups are statistically significant, that is, unlikely to have occurred randomly, and the obtained p-value of .005 strongly supports the significance of the result. The η² (eta-squared) represents the proportion of the total variation in the data that is explained by the group differences; with an η² of .08, the effect size is small to moderate, meaning that approximately 8% of the variability in the data is accounted for by the differences between the groups.
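The quantities described in this note can be sketched in a few lines of plain Python; the three groups of scores below are invented for illustration and are not the study’s data.

```python
# Minimal sketch of a one-way ANOVA F-statistic and eta-squared,
# computed from scratch on hypothetical (invented) group scores.
from itertools import chain

def one_way_anova(groups):
    """Return (F, eta_squared) for a list of groups of numeric scores."""
    scores = list(chain.from_iterable(groups))
    n, k = len(scores), len(groups)
    grand_mean = sum(scores) / n

    # Between-group variation: how far each group mean sits from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group variation: how far each score sits from its own group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

    f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
    eta_squared = ss_between / (ss_between + ss_within)  # share of total variation
    return f_stat, eta_squared

groups = [[4, 5, 6, 5], [6, 7, 7, 8], [5, 6, 5, 6]]  # hypothetical scores
f, eta2 = one_way_anova(groups)
```

The sketch makes the interpretation in the note concrete: eta-squared is simply the between-group sum of squares divided by the total sum of squares, so an η² of .08 means the group factor accounts for 8% of the total variability.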
[xiii] Purpose of the text was not considered because participants were required to write texts with specific purposes, which they all accomplished.
[xiv] Spearman’s non-parametric correlation measures the strength and direction of the relationship between two variables without assuming that the data follow a normal distribution. The correlation coefficient (r) indicates the strength of the relationship, with values of 1 or −1 indicating the strongest possible relationships and values near 0 indicating no relationship. The p-value indicates the probability that the observed relationship occurred by chance; a p-value below .05 suggests that the correlation is statistically significant.
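The idea behind Spearman’s coefficient can be sketched without any statistics library: replace each value by its rank (tied values share the average of their positions) and apply Pearson’s formula to the ranks. The data below are invented for illustration, not the study’s data.

```python
# Minimal sketch of Spearman's rank correlation in plain Python.

def average_ranks(values):
    """1-based ranks; tied values share the average of their positions."""
    ordered = sorted(values)
    return [ordered.index(v) + (ordered.count(v) + 1) / 2 for v in values]

def pearson(x, y):
    """Pearson's product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's r is Pearson's r computed on the ranks."""
    return pearson(average_ranks(x), average_ranks(y))

# Perfectly monotone data yield r = 1; perfectly inverse data yield r = -1.
rising = spearman([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
falling = spearman([1, 2, 3, 4, 5], [10, 8, 6, 4, 2])
```

Because only ranks enter the computation, the coefficient captures any monotone relationship, which is why it needs no normality assumption.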