Jerry Greenfield

EFL/ESL Readability

My primary research focus in English as a second or foreign language has been in the area of readability, the measurement of the reading difficulty of texts. The abstract of my dissertation can be found here. The following paragraphs from the introduction explain the source of my interest:


Introduction: Origin of the Study

This study grows directly out of concerns about the use of readability formulas at Miyazaki International College (MIC) in developing and assessing English materials to be read by its Japanese EFL students. MIC was founded in 1994 and accredited by the Japanese Ministry of Education as an innovative liberal arts college which uses English as the language of instruction. Because the college’s approach to English as a Foreign Language (EFL) is content-based and driven by the subject matter requirements of a liberal arts curriculum, the faculty need to identify appropriate texts that their students, who typically enter the college with low-intermediate to intermediate English proficiency, can realistically be expected to read. Some instructors have used the classic readability formulas (Flesch Reading Ease, Flesch-Kincaid, Coleman-Liau, and Bormuth Reading Levels) available in the Microsoft Word 6 computer application as an aid both in estimating the reading level of existing materials and in creating new materials and adaptations.
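For readers unfamiliar with how such formulas work, the two Flesch measures reduce a text to two statistics, average sentence length and average syllables per word, and plug them into fixed linear equations. The sketch below implements the published constants; the syllable counter is a crude vowel-group heuristic of my own for illustration, so its counts (and hence scores) will differ somewhat from Word's.

```python
import re

def _count_syllables(word):
    """Crude heuristic: count vowel groups, discounting a silent final 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def _counts(text):
    """Rough word, sentence, and syllable totals for a plain-text passage."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    syllables = sum(_count_syllables(w) for w in words)
    return len(words), len(sentences), syllables

def flesch_reading_ease(text):
    """Flesch Reading Ease: higher scores mean easier text (roughly 0-100)."""
    w, s, syl = _counts(text)
    return 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)

def flesch_kincaid_grade(text):
    """Flesch-Kincaid Grade Level: an estimated U.S. school grade."""
    w, s, syl = _counts(text)
    return 0.39 * (w / s) + 11.8 * (syl / w) - 15.59
```

Short sentences of one-syllable words score as very easy (a high Reading Ease value and a grade level near zero), while long sentences of polysyllabic words score as very hard; the formulas see nothing else about the text, which is precisely what makes their validity for EFL readers an open question.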

Using readability formulas in such a context raises a number of questions. In the first place, given the disparity between the Japanese students’ English proficiency and that of typical native-English-speaking American college students, how are reading levels indicated by the formulas related to the reading abilities of MIC’s students? A 12th grade reading level score means that the text might be read easily by American high school graduates, but clearly it would be a stretch for Japanese high school graduates for whom English is a foreign language. Second, in light of psycholinguistic and interactive models of the reading process, how do differences in linguistic and cultural background affect the accuracy of the American formulas for Japanese students? These questions become even more pointed when the formulas are used not only to assess the reading difficulty of existing material but also to manipulate the characteristics of texts being written or adapted in order to obtain lower readability scores. Do formula-driven changes in the form of the texts make them as much more readable in practice as their adjusted scores indicate? In short, can the formulas at hand be used with confidence, or do they need to be adjusted or replaced by other measures? If they are valid, what do formula scores mean in relation to the English proficiency of EFL students? In particular, is there a way to relate readability scores to TOEFL scores of EFL readers? Answers to these questions are needed to inform teaching practices at MIC, and they will not only be of interest in other Japanese EFL contexts but will also be suggestive for second language reading instruction everywhere.
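The worry about formula-driven adaptation can be made concrete with a little arithmetic (a hypothetical illustration, not data from the study): because Flesch-Kincaid is linear in average sentence length, mechanically splitting sentences lowers the grade level even when the vocabulary, and arguably the real difficulty for an EFL reader, is unchanged.

```python
def fk_grade(words_per_sentence, syllables_per_word):
    """Flesch-Kincaid Grade Level from the two statistics the formula uses."""
    return 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59

# Identical vocabulary (1.6 syllables/word); only sentence length changes.
long_sentences = fk_grade(25, 1.6)   # about grade 13: "college" level
short_sentences = fk_grade(10, 1.6)  # about grade 7: "middle school" level
```

A drop of nearly six grade levels from sentence splitting alone shows why adjusted scores cannot simply be taken at face value as gains in actual readability.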


I was able to answer conclusively the core question of whether the classic readability formulas are valid for Japanese EFL readers: They are, or at least they measure the relative difficulty of texts as well for MIC’s readers as they do for American school students. About the same time as I was finishing my dissertation, J. D. Brown published his own research on EFL readability, asking much the same question and apparently coming to the opposite conclusion. I examined his study and presented a critique at the Japan Association for Language Teaching (JALT) convention in 2000 that put Brown’s conclusions in perspective while confirming my own results. That presentation in expanded form was published in JALT Journal in 2004 and can be found here.


The issue of what scores mean in terms of the reading performance of students proved to be a more vexing question. My data did not permit establishing a relationship between readability levels and TOEFL scores. Follow-up exploratory projects, undertaken to identify criteria relating actual performance to formula scores, tentatively associated readability scores with subjective student and faculty ratings of text difficulty and with performance on classroom tasks. I reported those studies at Thai TESOL 2006 (handout here). Still, as far as I am aware, the issue of performance criteria has (as of 2010) not been researched adequately. Until it has been, readability measurement itself will be of limited value.