Topic outline

  • Welcome

    Psychometrics uses statistical models to analyze educational, psychological, and patient-reported measurements. This course covers computational aspects of the main topics in psychometrics, including reliability and validity of measurement, traditional item analysis, the use of regression models for item description, item response theory (IRT) models, differential item functioning (DIF), computerized adaptive testing (CAT), and an overview of further topics. Methods are demonstrated on behavioral measurement data from different areas. Exercises are prepared in the freely available statistical software R; other psychometric software is also introduced.

  • L1 Introduction

    Read Chapter 1 on https://perusall.com

    Run R code for the chapter from https://github.com/patriciamar/PsychometricsBook 

  • L2 Validity

  • L3 Internal structure

  • L4 Reliability

  • L5 Item analysis

  • L6 Item analysis with regression

  • L7 IRT models

  • L8 More complex IRT models

  • L9 More complex IRT models (continued)

  • L10 Differential item functioning

  • L11 Invited talk

    December 20, 2022, 3:00 PM CET.
    David Kaplan (University of Wisconsin–Madison): Probabilistic Forecasting with International Large-Scale Assessments: Applications to the UN Sustainable Development Goals

    Note. Plenary room (room 318, second floor), Institute of Computer Science, Pod Vodárenskou věží 2, Prague 8; also on Zoom.

    Abstract. In 2015, the Member States of the United Nations (UN) adopted the Sustainable Development Goals (SDGs). With regard to education, the UN identified equitable, high-quality education, including the achievement of literacy and numeracy by all youth and a substantial proportion of adults, both men and women, as one of its global SDGs to be attained by 2030. To analyze education policies such as these, it is critically important to monitor trends in educational outcomes over time. Indeed, as educational systems around the world face new challenges due to the COVID-19 pandemic, monitoring trends in educational outcomes could help identify the long-run impact of this unprecedented health crisis on global education. To this end, international large-scale assessment programs such as PISA are uniquely situated to provide population-level trend data on literacy and numeracy outcomes. The purpose of this talk is to describe a new project in collaboration with the University of Heidelberg, funded by the US Institute of Education Sciences. This project proposes a methodology applicable to international large-scale assessments, and PISA in particular, to monitor and forecast changes in gender equity and to relate changes over time in gender equity to policy-relevant predictors and exogenous events such as the coronavirus pandemic. We utilize a Bayesian workflow to account for uncertainty in all steps of the modeling process, including uncertainty in the parameters of the model as well as model uncertainty in the choice of policy-relevant predictors. A proof-of-concept using data from the United States NAEP program provides a demonstration of the ideas.

    References.
    Kaplan, D., & Huang, M. (2021). Bayesian probabilistic forecasting with large-scale educational trend data: A case study using NAEP. Large-scale Assessments in Education, 9(1), 1–31. https://doi.org/10.1186/s40536-021-00108-2
    Kaplan, D., & Jude, N. (2021). Trend analysis with international large-scale assessments: Past practice, current issues, and future directions. In International Handbook of Comparative Large-Scale Studies in Education: Perspectives, Methods and Findings (pp. 1–14). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-030-88178-8_57

  • L12 Computerized adaptive tests