The University of Iowa is proud to host the 27th Midwest Association of Language Testers (MwALT) Conference in October 2026. Join us as we welcome MwALT back to Iowa City for the fifth time.

MwALT 2026 invites submissions that examine language assessment from diverse perspectives. The theme, Language Assessment Through Multiple Lenses: Uses, Innovations, and Impacts, highlights three interrelated dimensions of the field:

  • Uses: the practical applications of language assessments in educational, professional, and social contexts.
  • Innovations: emerging methodologies, technologies, and tools shaping language assessments.
  • Impacts: the broader effects of language assessments on learners, educators, institutions, and society.

Submissions may address theoretical or empirical questions on topics including, but not limited to, stakeholder roles, ethical considerations, assessment literacy, sociopolitical contexts, and technological innovations (e.g., Generative AI, NLP). Proposals across all educational levels and contexts, including K–12, higher education, professional settings, government, and the private sector, are encouraged. We particularly welcome submissions that foster critical reflection and dialogue on best practices, ethical and social implications, and future directions in language assessment.

Important Dates

Proposal submission opens: May 1, 2026
Submission portal closes: June 30, 2026
Results announced: July 31, 2026
Registration opens: August 15, 2026
Registration closes: September 30, 2026
Conference: October 10, 2026

The Call for Proposals will be open May 1 – June 30, 2026

Call for Proposals - Submission Instructions

The 2026 MwALT Conference invites submissions that examine language assessment from diverse perspectives. The theme, Language Assessment Through Multiple Lenses: Uses, Innovations, and Impacts, highlights three interrelated dimensions of the field:

  • Uses: the practical applications of language assessments in educational, professional, and social contexts.
  • Innovations: emerging methodologies, technologies, and tools shaping language assessment.
  • Impacts: the broader effects of assessment on learners, educators, institutions, and society.

The conference provides a platform for researchers, practitioners, and policymakers to share insights, challenge assumptions, and explore assessment approaches that are both effective and equitable.

Submissions may address theoretical or empirical questions on topics including, but not limited to, stakeholder roles, ethical considerations, assessment literacy, sociopolitical contexts, and technological innovations (e.g., Generative AI, NLP). Proposals across all educational levels and contexts, including K–12, higher education, professional settings, government, and the private sector, are encouraged. We particularly welcome submissions that foster critical reflection and dialogue on best practices, ethical and social implications, and future directions in language assessment.

Submission Instructions

All submissions must be original and not previously presented elsewhere. Abstracts should not exceed 300 words. During submission, authors may indicate their preferred presentation format: Paper Presentation or Roundtable Discussion.

  • Paper Presentation: Papers should present theoretical or empirical research and are allocated 20 minutes for presentation plus 10 minutes for Q&A. Empirical papers should include research background and rationale, methodology, findings, and implications. Conceptual papers should describe the problem addressed, the theoretical orientation or new approach, and the logic of the argument to be presented.
  • Roundtable Discussion (including Work-in-Progress): This format provides an opportunity for presenters to discuss assessment research or test development projects with a small group of participants. Each presenter will have three 20-minute discussion segments to present their work and engage participants in discussion and feedback. Presentations should focus on methodology, preliminary results, and next steps, inviting participants to offer suggestions and perspectives. Work-in-Progress submissions are welcome.

Submission Portal (Open May 1 – June 30, 2026)

Evaluation of Proposals

All submissions will undergo double-blind review by at least two reviewers. Proposals will be evaluated based on the following criteria:

  • Clarity and coherence of the proposal
  • Appropriateness of methods for data collection and analysis (if applicable)
  • Alignment with the conference theme
  • Significance and implications for the field of language assessment

Preference will be given to proposals addressing the conference theme, though all topics in language assessment are welcome.

Proposal review results will be announced around July 31, 2026.

Registration will be open August 15 – September 30, 2026

Plenary Speakers

Jamie L. Schissel

Professor of TESOL, University of North Carolina at Greensboro

Presentation - Humanizing Approaches for Language Assessment in the Age of GenAI: Peoples and Possibilities

Jamie L. Schissel is a Professor of TESOL and coordinator for M.Ed. TESOL and Add-on ESL/Dual Language Education certificate programs. Her research focuses on assessment, policy, and teacher education with(in) culturally and linguistically diverse communities. Through historical analyses and participatory action research collaborations, these projects emphasize relationship-building to improve educational opportunities. She is the editor of the TESOL Quarterly Forum, and her work has been published in journals such as Applied Linguistics, Language Assessment Quarterly, Language Policy, Language Testing, Linguistics and Education, and TESOL Quarterly. In 2021, she received the AERA Bilingual Education Research SIG Early Career Scholar Award. She is co-founder and co-facilitator of two language assessment associations: Asociación Mexicana de Evaluación de Lenguas Indígenas [Mexican Association for the Evaluation of Indigenous Languages] and the Association of Language Teachers for Classroom Assessment in the Dominican Republic. Across teaching, research, and service, she focuses on the care of those around her.

Presentation Abstract - Humanizing Approaches for Language Assessment in the Age of GenAI: Peoples and Possibilities

The field of language testing and assessment—like the overall field of educational measurement—has been an early adopter and innovator of technological advances. As detailed in the 2025 special issue of Language Testing, the prevalence of Generative Artificial Intelligence (GenAI) presents significant promises and challenges. Many of these challenges exacerbate existing issues faced by culturally and linguistically minoritized (CLM) test-takers. It is within this landscape that this talk situates humanizing approaches for language assessment. Humanizing assessment presents an overall perspective that the lived experiences of test-takers are inherently valuable for practices of assessment design, development, implementation, interpretation, and understanding consequences of use. Humanizing practices are well-established as pedagogical and research approaches, and there has been a growing emphasis on adopting humanizing assessment practices across multiple disciplines as well. As a complex area of intersecting research, the transdisciplinary lens of language assessment has been opening pathways for state-of-the-art contributions to understanding humanizing assessment practices through theoretical, empirical, and practice-based research that focuses on the humanity and dignity (cf. ILTA Code of Ethics 2000, Principle 1; 2024 Principle of Respect) of CLM test-takers in particular. Using testimonio-as-methodology, I present narratives from CLM test-takers in the United States and Mexico who are leaders in the field of language education. CLM in these contexts means self-identifying as immigrant, transnational, Indigenous, and/or other less clearly delineated categorizations. Their narratives contribute to a complex mosaic of understandings of the peoples who are needed to take assessments and the potentials for assessments moving forward.
These accounts offer a wide array of often underrepresented perspectives and approaches that serve to advance the field of language testing and assessment overall. To conclude, I discuss how engaging and galvanizing research focused on humanizing approaches for language assessment can guide the important work with CLM test-takers forward.

Ping-Lin Chuang

Language Measurement Scientist, Duolingo

Presentation - Technology as a Mediator of Language Assessment: From Construct Representation to Test-Taker Experience

Ping-Lin Chuang is a Language Measurement Scientist at Duolingo, where she conducts validity and efficacy research for the Duolingo English Test. She received her PhD in Linguistics from the University of Illinois at Urbana-Champaign. Her research focuses on writing and speaking assessment, including test-taker performance and rater behavior, as well as the application of technology and psycholinguistic methods in language assessment. Her work has been published in Language Testing, Journal of Second Language Writing, Studies in Second Language Acquisition, and Applied Linguistics. Ping-Lin previously served as an MwALT Student Representative and is currently Co-Chair of the ILTA Integrated Assessment Special Interest Group. She was a recipient of the MwALT Best Student Presentation Award (2021, 2024), the Robert Lado Memorial Award (2022), and the ILTA Student Travel Award (2022).

Presentation Abstract: Technology as a Mediator of Language Assessment: From Construct Representation to Test-Taker Experience

Recent advances in technology have profoundly reshaped language assessment, influencing not only how tests are delivered and scored, but also how language ability is conceptualized and interpreted. As assessment practices increasingly integrate automation, generative artificial intelligence, and digitally mediated environments, long-standing assumptions about constructs, evaluation, and test use warrant renewed examination (Voss et al., 2023; Xi, 2023). In this plenary, I will position technology as a mediator in language assessment and examine this role through three interconnected perspectives in a digital-first assessment context.

First, I will consider construct representation through vocabulary assessment by examining how psychometric difficulty relates to underlying linguistic features. Differences between expected and observed response patterns invite reflection on how language constructs are represented in technology-mediated assessments. Second, I will turn to scoring and evaluation, exploring how human judgments of writing and speaking performance play a crucial role in automated scoring systems. Technology allows human evaluation to be scaled, supporting scoring that is consistent and aligned with the intended construct, while also highlighting the need to examine potential bias. Third, I will address assessment use and impact by exploring how digitally mediated practice environments influence test-taker affect and performance, extending the consequences of assessment beyond the test itself.

Together, these perspectives demonstrate the need to view language assessment as an integrated system in which design, scoring, and use are deeply connected through technological mediation. I will conclude by discussing implications for validity, ethics, and innovation in language assessment, and by reflecting on how the field might respond to the opportunities and challenges posed by technological advancements.

Organizing Committee

Lia Plakans, Professor of Multilingual Education, University of Iowa
I-Chun Vera Hsiao, PhD Candidate in Multilingual Education, University of Iowa
Xinyue Shui, PhD Candidate in Multilingual Education, University of Iowa
Andy Jiahao Liu, PhD Student in Multilingual Education, University of Iowa
Kwangmin Lee, Assistant Professor of TESOL, Western Michigan University
Renka Ohta, Senior Research Project Manager, ETS
Ray J. T. Liao, Assistant Professor, National Taiwan Ocean University

MwALT 2026 Sponsors