I am a scientist interested in applying machine learning, statistics and data visualization techniques to answer political, psychological and economic questions.
In my work I use Python (sklearn, matplotlib, SciPy, pandas, and other libraries) and R (caret, MASS, the tidyverse family, ggplot2, and other libraries). I am also skilled with SPSS and other statistical packages, such as Jamovi and JASP.
PhD in Cognition and Cognitive Neuroscience, 2016 - 2022
Texas A&M University
Graduate coursework, 2014 - 2016
University of Louisiana at Lafayette
BSc in Psychology, 2009 - 2013
National Research University - Higher School of Economics
Predicting equipment failures from sensor data
Mouse tracking, a new action-based measure of behavior, has advanced theories of decision making with the notion that cognitive and social decision making is fundamentally dynamic. Implicit in this theory is that people's decision strategies, such as discounting delayed rewards, are stable across task designs and that mouse trajectory features correspond to specific segments of decision making. By applying the hierarchical drift diffusion model and the Bayesian delay discounting model, we tested these assumptions. Specifically, we investigated the extent to which the "mouse-tracking" design of decision-making tasks (the delay discounting task, DDT, and the stop-signal task, SST) deviates from the standard "keypress" design. We found remarkable agreement in delay discounting rates (intertemporal impatience) obtained in the keypress and mouse-tracking versions of the DDT (ρ = .90), even though the tasks were administered about one week apart. Rates of evidence accumulation converged well in the two versions (DDT, ρ = .86; SST, ρ = .55). Omission and commission errors in the SST showed high agreement (ρ = .42 and ρ = .53, respectively). Mouse-motion features such as maximum velocity and AUC (area under the curve) correlated well with nondecision time (ρ = −.42) and boundary separation (ρ = .44), that is, the amount of evidence that must be accumulated before a response is made. These results indicate that response-time (RT) and motion-based decision tasks converge well at a fundamental level, and that mouse-tracking features such as AUC and maximum velocity do index the degree of decision conflict and impulsivity.
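Most of these analyses rest on simple trajectory features computed from raw cursor samples. Below is a minimal Python sketch of how AUC and maximum velocity might be derived for one trajectory and then related to a drift diffusion parameter with a Spearman correlation; the function, variable names, and placeholder data are illustrative assumptions, not the exact pipeline from the paper.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import spearmanr

def trajectory_features(x, y, t):
    """AUC and maximum velocity for a single mouse trajectory.

    x, y : cursor coordinates; t : timestamps in seconds.
    AUC is operationalized here as the area between the observed path
    and the straight line from the first to the last sample.
    """
    start = np.array([x[0], y[0]])
    line = np.array([x[-1], y[-1]]) - start
    pts = np.column_stack([x, y]) - start
    # Signed perpendicular deviation from the ideal straight-line path
    deviation = (line[0] * pts[:, 1] - line[1] * pts[:, 0]) / np.linalg.norm(line)
    auc = trapezoid(deviation, t)
    velocity = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
    return auc, velocity.max()

# Relating a trajectory feature to a DDM parameter across participants
# (placeholder arrays, one value per participant, for illustration only)
auc_scores = np.random.rand(50)
boundary_separation = np.random.rand(50)
rho, p = spearmanr(auc_scores, boundary_separation)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```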
The accurate detection of attention-deficit/hyperactivity disorder (ADHD) symptoms, such as inattentiveness and behavioral disinhibition, is crucial for delivering timely assistance and treatment. ADHD is commonly diagnosed and studied with specialized questionnaires and behavioral tests such as the stop-signal task. However, in cases of late-onset or mild forms of ADHD, behavioral measures often fail to capture the deficits that questionnaires highlight. To improve the sensitivity of behavioral tests, we propose a novel version of the stop-signal task (SST) that integrates mouse cursor tracking. In two studies, we investigated whether adding mouse movement measures to the stop-signal task improves associations with questionnaire-based measures, compared to the traditional (keypress-based) version of the SST. We also scrutinized how different parameters of the stop-signal task, such as the method of setting the stop-signal delay or the definition of response inhibition failure, influence these associations. Our results show that (a) the stop-signal reaction time (SSRT) has a weak association with impulsivity, whereas mouse movement measures have strong and significant associations with impulsivity; (b) machine learning models trained on mouse movement data from "known" participants using a nested cross-validation procedure can accurately predict the impulsivity ratings of "unknown" participants; (c) mouse movement features such as maximum acceleration and maximum velocity are among the most important predictors of impulsivity; and (d) using preset stop-signal delays prompts behavior that is more indicative of impulsivity.
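For readers curious what a nested cross-validation setup of this kind can look like in code, here is a hedged scikit-learn sketch: an inner loop tunes a ridge penalty while an outer loop estimates out-of-sample accuracy for predicting impulsivity ratings. The placeholder feature matrix, the choice of ridge regression, and the scoring metric are assumptions for illustration, not the exact specification used in the studies.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row per participant with mouse-movement features
# (e.g., maximum velocity, maximum acceleration, AUC); y: impulsivity ratings.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))          # placeholder feature matrix
y = rng.normal(size=100)               # placeholder questionnaire scores

# Inner loop: tune the ridge penalty; outer loop: estimate generalization to "unknown" participants
inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)

model = GridSearchCV(
    make_pipeline(StandardScaler(), Ridge()),
    param_grid={"ridge__alpha": np.logspace(-2, 3, 20)},
    cv=inner_cv,
)
scores = cross_val_score(model, X, y, cv=outer_cv, scoring="r2")
print(f"nested-CV R^2: {scores.mean():.2f} ± {scores.std():.2f}")
```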
Attention-deficit/hyperactivity disorder (ADHD) affects quality of life worldwide. It is commonly diagnosed and studied with specialized questionnaires and behavioral tests. However, in cases of late-onset or mild forms of ADHD, behavioral measures often fail to capture the deficits that questionnaires highlight. This lack of sensitivity in behavioral tests is problematic because it prevents researchers from studying the pathophysiology of ADHD along the continuum from normal to abnormal. To improve the sensitivity of behavioral tests, in the present study we propose a novel version of the stop-signal task (SST), a common behavioral test of ADHD, which integrates machine learning and mouse cursor tracking (ML-SST). In one experiment, we compared the ML-SST and a standard version of the SST (s-SST) in their ability to detect ADHD symptoms in an adult sample. Our results indicate that introducing mouse cursor tracking and ridge regression produces the strongest and most stable associations between questionnaire data and behavioral measures.
ADHD is frequently characterized as a disorder of executive function (EF). However, behavioral tests of EF, such as go/no-go tasks, often fail to capture the EF deficits revealed by questionnaire-based measures. This discrepancy is usually attributed to questionnaires and behavioral tasks assessing different constructs of executive function. We propose an additional explanation: we hypothesize that the problem stems from the lack of dynamic assessment of decision making (e.g., continuous monitoring of motor behavior such as velocity and acceleration in choice reaching) in classical versions of behavioral tasks. We test this hypothesis by introducing dynamic assessment, in the form of mouse cursor motion, to a go/no-go task. Our results indicate that, among healthy college students, self-report measures of ADHD symptoms become strongly associated with performance in behavioral tasks when continuous assessment (e.g., acceleration of the mouse cursor) is introduced.
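As a small illustration of what "continuous assessment" can mean in practice, the sketch below derives velocity and acceleration profiles from a timestamped cursor trace for a single trial. The sampling format, the synthetic example trace, and the specific summary features are assumptions, not the exact preprocessing used in the study.

```python
import numpy as np

def kinematic_profile(x, y, t):
    """Velocity and acceleration profiles for one trial's cursor trace.

    x, y : cursor coordinates in pixels; t : timestamps in seconds.
    Returns per-sample speed and acceleration plus their peaks, which can
    then be aggregated per participant and related to self-report measures.
    """
    vx = np.gradient(x, t)                 # horizontal velocity
    vy = np.gradient(y, t)                 # vertical velocity
    speed = np.hypot(vx, vy)               # tangential velocity (px/s)
    accel = np.gradient(speed, t)          # tangential acceleration (px/s^2)
    return {
        "max_velocity": speed.max(),
        "max_acceleration": accel.max(),
        "speed": speed,
        "acceleration": accel,
    }

# Short synthetic trace: an accelerating rightward movement with slight wobble
t = np.linspace(0.0, 1.2, 60)
x = 800 * (t / t[-1]) ** 2
y = 50 * np.sin(2 * np.pi * t)
features = kinematic_profile(x, y, t)
print(features["max_velocity"], features["max_acceleration"])
```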