This project focuses on assessment design and investigates the validity of a portfolio assessment as a tool for learning in linguistics, with the aim of testing the efficacy of incorporating formative assessment tasks into online teaching. The project is a case study of a linguistics course for in-service teachers, in which the participants’ subject knowledge is tested before and after the teaching and assessment activities in the course.
Concerns about online exams centre on validity: whether results accurately reflect students’ conceptual knowledge of a topic. The main concern is cheating: with unsupervised online exams it is difficult to tell whether the exam is the student’s own work. Furthermore, since internet searches can supply the answers students need, the conceptual knowledge displayed in an exam may be superficial at best or a complete misrepresentation at worst, which calls the validity of the assessment into question. These issues are of particular concern in subjects that focus on declarative knowledge, or on questions with generally one correct answer or procedure, as is the case in many first-year university courses.
Exams are only one part of a course’s assessment design. A good assessment design involves both formative and summative assessment. The two are sometimes regarded as conflicting or opposing forms of assessment (Lau, 2016), but both are natural components of any course design. Formative assessment, often dubbed assessment for learning, usually takes place during the course and gives both students and teachers an idea of students’ learning. Summative assessment, dubbed assessment of learning, normally takes place at the end of a course, typically in the form of an exam, to measure the extent to which learning outcomes have been achieved. Our project focuses on the effects of integrating the two in a portfolio design.