Custom Mobile App for Foreign Language Assessment

Franziska Lys, Matthew Taylor, and Sergei Kalugin

Discipline: German


Context

One of the difficulties in teaching language, culture, and literature is aligning each student's proficiency with a given course. First-year students initially take an online placement test in order to begin at the appropriate level, but we do not yet offer continuous reassessment of students' developing language skills to help them make subsequent course choices during their years at Northwestern. Often, we simply assume that if student X has taken course Y, he or she should be ready for course Z. Yet, from empirical research in our department, we know that students develop their language proficiency differently, even when taking the same courses or engaging in similar study abroad programs and extracurricular activities.

Using a standardized assessment, we found that average scores showed the anticipated improvement as students progressed through the program: 45% at the end of the first-year program; 62% at the end of the second-year program; and 75% and 81% for students in our 200-level and 300-level courses, respectively. When looking at individual scores, however, there are notable variations: some students scored between 70% and 80% as early as the end of their first year, while a few students scored as low as 50% in a third-year course. Clearly, the high-scoring first-year students could have advanced much more rapidly by skipping the second-year sequence altogether.

Class Level                     Students Tested   Average Score   Lowest–Highest Score
First-Year German               N=56              45%             25%–80%
Second-Year German              N=47              62%             39%–86%
Students in 200-level courses   N=23              75%             53%–88%
Students in 300-level courses   N=27              81%             50%–100%

Placement exam results according to class level at the end of spring 2014

Project

We are developing a custom mobile app that will allow students to repeatedly self-evaluate their foreign language proficiency throughout their undergraduate years. To measure proficiency, the app will rely on a published framework of language ability validation with clearly defined can-do statements describing what a learner should be able to do in four key language areas: reading, listening, speaking, and writing. Globally, there are two widely adopted systems of proficiency validation: the Common European Framework of Reference (CEFR), developed in Europe, and the American Council on the Teaching of Foreign Languages (ACTFL) scale, developed in the United States. Both frameworks are based on extensive empirical research across multiple languages, and both are intended to provide transparency and coherence in language learning, teaching, and curriculum development. After a thorough comparison of the two systems, we have decided to use the ACTFL can-do statements as the primary input for the app. Each can-do statement will be accompanied by appropriate language samples.
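For illustration, a can-do statement and its accompanying language samples might be represented in the app roughly as in the following minimal TypeScript sketch. The type names and fields here are assumptions made for this example, not the project's actual schema; the proficiency levels shown are the major ACTFL bands.

```typescript
// Hypothetical data model for ACTFL-style can-do statements; illustrative only.

type Skill = "reading" | "listening" | "speaking" | "writing";

type ProficiencyLevel = "Novice" | "Intermediate" | "Advanced" | "Superior";

interface LanguageSample {
  description: string; // e.g., a short German text or audio prompt
  mediaUrl?: string;   // optional link to an audio or video sample
}

interface CanDoStatement {
  id: string;
  skill: Skill;
  level: ProficiencyLevel;
  statement: string;         // the can-do statement itself
  samples: LanguageSample[]; // language samples accompanying the statement
}

// Example entry (content invented for illustration):
const example: CanDoStatement = {
  id: "read-novice-01",
  skill: "reading",
  level: "Novice",
  statement: "I can recognize words and phrases on familiar everyday topics.",
  samples: [{ description: "A short restaurant menu or train schedule in German." }],
};
```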

Objectives & Outcomes

The goal of this project is to provide students with an accurate, easy-to-use self-evaluation tool with which they can repeatedly assess their foreign language proficiency throughout their undergraduate career. Repeated self-evaluation will encourage students to take a more active role in assessing their progress, an important step toward a more personalized learning environment that meets individual needs. It will also build self-confidence and allow students to align course work optimally with their current proficiency level.

Since the project is still in development, these remain expected outcomes. Once the app is in use, we will have more data with which to measure how closely we have met our objectives. During the development and testing of the app, we plan to explore several research opportunities: investigating the validity of test questions, especially self-assessment items, through data collection; aligning our courses with the can-do statements; and examining how well our majors and minors progress through our four-year program.

Results

Although the app is currently in development, the team is already gathering information that will inform its ability to meet the stated objectives. The team has met several times to discuss cross-platform application development kits that package responsive mobile applications built with HTML5 and JavaScript; the chosen technology must support the required functionality and features. Additionally, because the app will be used primarily by undergraduates, students are contributing to the design, implementation, and experiential testing of the app, including one undergraduate supported by an Undergraduate Research Assistant Program (URAP) grant.
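One practical consequence of packaging an HTML5/JavaScript app for multiple platforms is that data persistence must work inside any webview wrapper. The sketch below shows one portable option, the standard browser localStorage API; the storage key and response shape are assumptions for illustration, not the team's actual design.

```typescript
// Minimal sketch of persisting self-assessment responses on the device
// using localStorage, which is available in webview-packaged HTML5 apps.

interface AssessmentResponse {
  statementId: string;       // which can-do statement was rated
  selfRating: 1 | 2 | 3 | 4; // e.g., from "not yet" to "with ease"
  timestamp: string;         // ISO date, so progress can be tracked over time
}

const STORAGE_KEY = "lava.responses"; // hypothetical key name

function saveResponse(response: AssessmentResponse): void {
  const stored = localStorage.getItem(STORAGE_KEY);
  const responses: AssessmentResponse[] = stored ? JSON.parse(stored) : [];
  responses.push(response);
  localStorage.setItem(STORAGE_KEY, JSON.stringify(responses));
}

function loadResponses(): AssessmentResponse[] {
  const stored = localStorage.getItem(STORAGE_KEY);
  return stored ? JSON.parse(stored) : [];
}
```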

We have learned that we will need a unique and memorable brand identity, and we have decided to name the app LAVA (Language Assessment Verification Application). Building on this theme, we have designed several of the interactive screens and are working on various testing categories. We will verify a first level of tests during spring quarter.

Lessons Learned

As we have begun to implement the project, we have learned from the earliest phases that the assessment mechanism should offer a richer range of interactivity. The original idea of presenting simple can-do statements has expanded to also include a bank of quiz and exercise simulation items that ask students to verify the can-do statements more concretely. This decision has extended the project's implementation timeline.
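The sketch below illustrates one way such verification items could be tied to a can-do statement, assuming a simple multiple-choice format; the types, the combination rule, and the 80% threshold are all hypothetical choices for this example.

```typescript
// Illustrative pairing of quiz items with a can-do statement; not the
// project's actual item bank design.

interface QuizItem {
  statementId: string; // the can-do statement this item helps verify
  prompt: string;      // e.g., a short German passage followed by a question
  choices: string[];
  correctIndex: number;
}

// A statement might count as "verified" only when the learner both
// self-rates it positively and answers most of its quiz items correctly.
function isVerified(selfRated: boolean, items: QuizItem[], answers: number[]): boolean {
  const correct = items.filter((item, i) => answers[i] === item.correctIndex).length;
  return selfRated && correct / items.length >= 0.8; // assumed 80% threshold
}
```

Combining self-ratings with objective items in this way is one route to checking the validity of self-assessment, one of the research questions noted under Objectives & Outcomes.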