Predicting Effects of HPO Interventions with Socio-Cognitive Agents that Leverage Individual Residuals (TAILOR)
Human performance optimization (HPO) centers on designing tools and interventions that increase human performance. Typically, A/B experiments are used to evaluate alternative HPO interventions and determine which yields the best performance. Unfortunately, A/B experiments are time consuming and costly to run, and each new intervention requires an additional A/B experiment to evaluate it, so it is often feasible to evaluate only a few interventions. To overcome these issues, we are exploring the use of computational models of humans (computer models that receive the same stimuli as human A/B experiment participants and then make the same decisions they do) to simulate A/B experiments and support reasoning about hypothetical and counterfactual HPO interventions. We are also exploring how data about specific individuals can be leveraged to create tuned computational models that better approximate and predict the behavior of the individuals they were tuned to.
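The core idea of replacing human participants with computational models can be sketched as follows. This is a minimal illustration under simplifying assumptions: each simulated participant is a Bernoulli agent with a fixed per-trial success probability, which stands in for a richer socio-cognitive model; the function names and effectiveness values are hypothetical, not the TAILOR models.

```python
import random

def simulate_participant(p_success, trials=20, rng=random):
    """One simulated agent's score under an intervention of given effectiveness."""
    return sum(rng.random() < p_success for _ in range(trials))

def simulated_ab_experiment(p_a, p_b, n_agents=200, seed=0):
    """Run both arms of a simulated A/B experiment and return the mean scores."""
    rng = random.Random(seed)
    scores_a = [simulate_participant(p_a, rng=rng) for _ in range(n_agents)]
    scores_b = [simulate_participant(p_b, rng=rng) for _ in range(n_agents)]
    return sum(scores_a) / n_agents, sum(scores_b) / n_agents

# Illustrative effectiveness values for two hypothetical interventions.
mean_a, mean_b = simulated_ab_experiment(p_a=0.55, p_b=0.65)
```

Because the agents are cheap to run, the same setup can be re-executed for any number of hypothetical or counterfactual interventions without recruiting new human participants.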
Knowledge tracing algorithms are embedded in Intelligent Tutoring Systems (ITS) to keep track of what students know and do not know, in order to better focus practice. Because in-classroom experiments are costly, we explore using machine learning models that can simulate students' learning process. We conduct experiments using such agents, generated by the Apprentice Learner (AL) Architecture, to investigate the online use of different knowledge tracing models (Bayesian Knowledge Tracing and the Streak model). We were able to successfully A/B test these different approaches using simulated students. An analysis of our experimental results revealed an error in the implementation of one of our knowledge tracing models that was not identified in our previous work, suggesting AL agents provide a practical means of evaluating knowledge tracing models prior to more costly classroom deployments. Additionally, our analysis found a positive correlation between the model parameters fit to human data and the parameters obtained from simulated learners. This finding suggests that it might be possible to initialize the parameters for knowledge tracing models using simulated data when no human student data is yet available.
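For readers unfamiliar with Bayesian Knowledge Tracing, the per-observation update can be sketched as below. The parameter names (initial knowledge, learn, slip, and guess probabilities) are the standard BKT parameters; the specific default values are illustrative assumptions, not the values used in the study.

```python
def bkt_update(p_known, correct, p_learn=0.1, p_slip=0.1, p_guess=0.2):
    """Return updated P(skill known) after observing one student answer."""
    if correct:
        # Bayes rule: a correct answer is evidence the skill is known,
        # discounted by the chance of a lucky guess.
        posterior = p_known * (1 - p_slip) / (
            p_known * (1 - p_slip) + (1 - p_known) * p_guess)
    else:
        # An incorrect answer is evidence the skill is unknown,
        # discounted by the chance of a slip.
        posterior = p_known * p_slip / (
            p_known * p_slip + (1 - p_known) * (1 - p_guess))
    # Account for learning on this practice opportunity.
    return posterior + (1 - posterior) * p_learn

# Trace a simulated student's mastery estimate over a short practice sequence.
p = 0.3
for outcome in [True, True, False, True]:
    p = bkt_update(p, outcome)
```

Running AL agents through such an update loop is what lets the two tracing models be compared head to head without a classroom deployment.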
Teachable AI (TAI) systems can significantly reduce the burden of creating AI systems by empowering non-programmers to author AI models. Our goal is to build teachable AI agents that can be taught, rather than programmed, using natural human interactions. The Natural Training Interaction (NTI) testbed is a system that lets us observe teaching and learning interactions between multiple participants, including humans and AI agents. Studying these interactions will enable us to understand the patterns and modalities that are effective when transferring knowledge between participants. We aim to eventually build TAI systems that utilize these natural interaction patterns to create human-centered AI technologies.