
HUBBS Lab @ TAMU

Texas A&M University College of Engineering

Research

Research Projects

 

National Science Foundation (NSF) CAREER – Award #2046118

Enabling Trustworthy Speech Technologies for Mental Healthcare

This research aims to design reliable machine learning for speech-based diagnosis and monitoring of mental health, addressing three pillars of trustworthiness: explainability, privacy preservation, and fair decision-making. Trustworthiness is critical for both patients and clinicians: patients must be treated fairly and without the risk of re-identification, while clinical decision-making needs to rely on explainable and unbiased machine learning. This work therefore (1) designs novel speaker anonymization algorithms that retain mental health information while suppressing information related to the identity of the speaker; (2) improves the explainability of speech-based models for tracking mental health through novel convolutional architectures that learn interpretable spectrotemporal transformations grounded in speech production fundamentals; and (3) examines how bias in data and model design may perpetuate social disparities in mental health. Through a series of experiments, this work further contributes to understanding how human-machine partnerships are formed in mental healthcare settings along the dimensions of trust formation, maintenance, and repair. You can find more information here.
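
As an illustration of pillar (2), the following is a minimal sketch, assuming PyTorch and log-mel spectrogram inputs, of a convolutional front end whose first-layer kernels can be inspected as spectrotemporal modulation patterns; the SpectroTemporalNet name, layer sizes, and two-class output are hypothetical and not the project's actual architecture.

```python
# Illustrative sketch only: a CNN whose first layer applies 2-D spectrotemporal
# kernels (frequency x time) to a log-mel spectrogram, so the learned kernels
# can be visualized and interpreted directly.
import torch
import torch.nn as nn

class SpectroTemporalNet(nn.Module):
    def __init__(self, n_kernels=16, n_classes=2):
        super().__init__()
        self.spectemp = nn.Conv2d(1, n_kernels, kernel_size=(9, 15), padding=(4, 7))
        self.pool = nn.AdaptiveAvgPool2d((8, 8))
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_kernels * 8 * 8, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),   # e.g., depressed vs. non-depressed
        )

    def forward(self, logmel):          # logmel: (batch, 1, n_mels, n_frames)
        h = torch.relu(self.spectemp(logmel))
        return self.head(self.pool(h))

model = SpectroTemporalNet()
x = torch.randn(4, 1, 64, 300)          # dummy batch of log-mel spectrograms
logits = model(x)
kernels = model.spectemp.weight.detach()  # (16, 1, 9, 15): inspect as modulation patterns
print(logits.shape, kernels.shape)
```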

Student researchers: Kexin Feng, Abdullah Aman Tutul, Ehsanul Haque Nirjhar, Vinesh Ravuri, Michael Yang

Related Publications:

  • Feng & Chaspari, “Toward Knowledge-Driven Speech-Based Models of Depression: Leveraging Spectrotemporal Variations in Speech Vowels,” accepted to IEEE-EMBS BHI 2022
  • Ravuri et al., “Preserving Mental Health Information in Speech Anonymization,” accepted to 2nd Workshop on What’s Next in Affect Modeling, ACII 2022
  • Tutul et al., “Investigating Trust in Human-Machine Learning Collaboration: A Pilot Study on Estimating Public Anxiety from Speech,” ACM ICMI 2021

 

Air Force Office of Scientific Research (AFOSR) – FA9550-22-1-0010

Investigating Human Trust in AI: A Case Study of Human-AI Collaboration on a Speech-Based Data Analytics Task

This research investigates trust in artificial intelligence (AI) during a human-in-the-loop, collaborative speech-based data analytics task (DAT). Our work examines dimensions of trust in a human-AI collaboration paradigm in which human users collaborate with an AI system to detect deceptive or truthful speech, a challenging DAT of high relevance to the AFRL and the military. The objectives of this work are to: (1) investigate dimensions of trust in a human-AI collaborative DAT; (2) identify human- and system-related factors of trust in AI; and (3) build an evidence-based model of human trust in AI and its effect on human-AI teaming outcomes.

Student researchers: Abdullah Aman Tutul

Related Publications:

  • Tutul et al., “Investigating Trust in Human-Machine Learning Collaboration: A Pilot Study on Estimating Public Anxiety from Speech,” ACM ICMI 2021

 

National Aeronautics and Space Administration (NASA) – #80NSSC22K0775

Artificial Intelligence for Tracking Micro-behaviors in Longitudinal Data and Predicting Their Effect on Well-being and Team Performance

Future long-distance space exploration will pose a number of challenges that increase the risk of inadequate cooperation, coordination, and psychosocial adaptation, and can lead to decrements in behavioral health and performance. Micro-behaviors detected by artificial intelligence (AI) have the potential to provide unique insights into emotional reactivity and operationally relevant team performance, beyond the self-report measures of team functioning commonly used in NASA-funded research. Our research leverages advanced multimodal data analytics to detect micro-behaviors, including micro-aggressions, micro-conflicts, and micro-affirmations. It further identifies emotional reactivity to micro-behaviors and examines their effect on operationally relevant team functioning.

Student researchers: Projna Paromita

Related Publications:

  • Paromita et al., “Vocal markers of micro-behaviors between astronaut team members during analog space exploration missions,” under review.

 

National Science Foundation (NSF) S&CC – Award #2126045

Digital Twin City for Age-friendly Communities – Crowd-biosensing of Environmental Distress for Older Adults

Neighborhood environments in most communities can be the source of significant physical and emotional distress to older adults, thereby inhibiting their mobility and outdoor physical activity. This project thus aims to (1) create a digital twin city (DTC) model that reveals older adults’ collective distress and associated environmental conditions, and (2) leverage the DTC model to develop and implement technological and environmental interventions that alleviate such distress and promote older adults’ independent mobility and physical activity. The DTC model is constructed by matching or twinning crowdsourced biosignals with street-level visual data from participatory sensing and Google Street View, enabling the establishment of a city’s affective map. The project leverages the DTC model to design, implement, and/or evaluate stress-responsive interventions, in collaboration with local stakeholders and older adults in an underserved neighborhood in Houston, TX.
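
A minimal sketch of the twinning step, assuming crowdsourced records of the form (latitude, longitude, stress score, street-level image ID); the build_affective_map function, field names, and grid resolution are illustrative assumptions rather than the project's actual pipeline.

```python
# Illustrative sketch only: aggregate geotagged stress scores into map grid cells
# and keep the street-level imagery linked to each cell, yielding a simple
# "affective map" of collective distress.
from collections import defaultdict

def build_affective_map(records, cell_deg=0.0005):
    """Aggregate geotagged stress scores into grid cells of a city map."""
    cells = defaultdict(list)
    for lat, lon, stress, image_id in records:
        key = (round(lat / cell_deg), round(lon / cell_deg))
        cells[key].append((stress, image_id))
    return {
        key: {
            "mean_stress": sum(s for s, _ in vals) / len(vals),
            "n_samples": len(vals),
            "images": sorted({img for _, img in vals}),   # e.g., Street View frames
        }
        for key, vals in cells.items()
    }

records = [
    (29.7604, -95.3698, 0.82, "gsv_001"),   # dummy crowdsourced biosignal records
    (29.7605, -95.3697, 0.71, "gsv_001"),
    (29.7630, -95.3720, 0.15, "gsv_014"),
]
for cell, info in build_affective_map(records).items():
    print(cell, info)
```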

Postdoctoral researchers: Jinwoo Kim

Student researchers: Raquel Yupanqui, Ehsanul Haque Nirjhar

Related Publications:

  • Kim et al., “Pedestrians as Sensors for Walkable Built Environment: Location-based Collective Distress using Large-scale Biosignals in Real Life,” under review.
  • Nirjhar, Kim, et al., “Sensor-based detection of individual walkability perception to promote healthy communities,” under review.
  • Kim et al., “Capturing Environmental Distress of Pedestrians Using Multimodal Data: The Interplay of Biosignals and Image-Based Data,” ASCE Journal of Computing in Civil Engineering, 2022
  • Kim et al., “Can Pedestrians’ Physiological Signals Be Indicative of Urban Built Environment Conditions?” CRC 2020
  • Kim et al., “Environmental Distress and Physiological Signals: Examination of the Saliency Detection Method,” ASCE Journal of Computing in Civil Engineering, 2020

 

National Science Foundation (NSF) CHS – Award #1956021

Bio-Behavioral Data Analytics to Enable Personalized Training of Veterans for the Future Workforce

This project promotes fair and ethical treatment of veterans in the future job landscape by providing the empirical knowledge needed to remove implicit bias and misconceptions against veterans and to prepare veterans for obtaining and maintaining competitive positions in the future workforce. We are gathering empirical evidence to understand veterans’ common feelings, thoughts, and potential weaknesses in social effectiveness skills during civilian job interviews. We are further designing a preliminary assistive technology enabled by artificial intelligence for promoting veterans’ interview skills in a tailored and inclusive manner, ultimately preparing them for the future workforce and broadening their participation in fields where they are traditionally underrepresented, such as computing. Please follow this link for more information.

Student researchers: Ellen Hagen, Ehsanul Haque Nirjhar, Md Nazmus Sakib

Related Publications:

  • Hagen et al., “Interviewer Perceptions of Veterans in Civilian Employment Interviews and Suggested Interventions,” APA International Military Testing Association, 2022
  • Nirjhar et al., “A pilot study on self-reported and bio-behavioral measures of stress of U.S. veterans during civilian job interviews,” accepted in ACII 2022
  • Raether et al., “Evaluating Just-In-Time Vibrotactile Feedback for Communication Anxiety,” accepted in ACM ICMI 2022
  • Nirjhar et al., “Knowledge- and data-driven models of multimodal trajectories of public speaking anxiety in real and virtual settings,” ACM ICMI 2021
  • Agarwal et al., “Evaluating in-the-moment feedback in virtual reality based on physiological and vocal markers for personalized speaking training,” CONVR 2021

 

National Institutes of Health (NIH) – 1R42MH123368-01

The Development and Systematic Evaluation of an AI-Assisted Just-in-Time Adaptive Intervention for Improving Child Mental Health

Early intervention in maladaptive family relationships is crucial for preventing or offsetting negative developmental trajectories in at-risk children. Just-in-time adaptive interventions (JITAIs) use smartphones, wearables, and artificial intelligence (AI) to identify and respond to psychological processes and contextual events as they unfold in everyday life. In collaboration with the Technological Interventions for Ecological Systems (TIES) Lab at UT Austin and the Signal Analysis and Interpretation Laboratory at USC, this research project builds and tests a JITAI that provides opportune support to families in dynamic response to contextual events and shifting psychological states, in order to amplify attachment, regulate emotion, and intervene in maladaptive parent-child interactional patterns. More information can be found here.

Student researchers: Abdullah Aman Tutul

 

Engineering Information Foundation (EiF) – Grant 18.02

In-the-Moment Interventions for Public Speaking Anxiety

Public speaking skills are essential for effectively exchanging ideas, persuading others, and making a tangible impact, and they are a major factor in academic and professional success. A major cause of anxiety during public speaking is the novelty and uncertainty of the task, which can be alleviated through exposure to public speaking experiences and a gradual change in the negative perceptions associated with the situation. The goal of this research is to design in-the-moment virtual-reality interventions for public speaking that can predict momentary anxiety from bio-behavioral signals and automatically provide personalized feedback. You can see this video for more information.
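
A minimal sketch of the in-the-moment feedback idea, assuming windowed bio-behavioral features such as heart rate and skin conductance; the anxiety score, thresholds, and prompts below are hypothetical illustrations, not the study's actual model.

```python
# Illustrative sketch only: score momentary anxiety from windowed physiological
# features relative to a resting baseline, and trigger feedback when elevated.
from dataclasses import dataclass
from typing import Optional

@dataclass
class WindowFeatures:
    heart_rate_bpm: float
    skin_conductance_uS: float

def momentary_anxiety(current: WindowFeatures, baseline: WindowFeatures) -> float:
    """Score momentary anxiety as a normalized deviation from a resting baseline."""
    hr = (current.heart_rate_bpm - baseline.heart_rate_bpm) / baseline.heart_rate_bpm
    sc = (current.skin_conductance_uS - baseline.skin_conductance_uS) / baseline.skin_conductance_uS
    return max(0.0, 0.5 * hr + 0.5 * sc)

def in_the_moment_feedback(score: float) -> Optional[str]:
    """Return a personalized prompt only when momentary anxiety is elevated."""
    if score > 0.4:
        return "Pause and take a slow breath before your next sentence."
    if score > 0.2:
        return "Try slowing your speaking rate slightly."
    return None  # stay silent while the speaker is calm

baseline = WindowFeatures(heart_rate_bpm=70.0, skin_conductance_uS=2.0)
stream = [WindowFeatures(72.0, 2.1), WindowFeatures(95.0, 3.4), WindowFeatures(88.0, 2.9)]
for window in stream:                     # simulated stream of feature windows
    print(in_the_moment_feedback(momentary_anxiety(window, baseline)))
```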

Student researchers: Megha Yadav, Ehsanul Haque Nirjhar, Kexin Feng, Jason Raether

Related Publications:

  • von Ebers et al., “Predicting the Effectiveness of Systematic Desensitization Through Virtual Reality for Mitigating Public Speaking Anxiety,” ACM ICMI 2020
  • Nirjhar et al., “Exploring Bio-Behavioral Signal Trajectories of State Anxiety During Public Speaking,” IEEE ICASSP 2020
  • Yadav et al., “Virtual reality interfaces and population-specific models to mitigate public speaking anxiety,” IEEE ACII 2020 (nominated for best paper award)
  • Yadav et al., “Speak Up! Studying the interplay of individual and contextual factors to physiological-based models of public speaking anxiety,” TransAI 2019

Texas A&M PESCA Research Seed Grant Program

Privacy-Preserving Emotion Recognition

Voice-enabled communication is a major part of today’s cyberspace and relies on the transmission and sharing of speech signals. Monitoring speech patterns can significantly benefit people’s lives, since it can help track, predict, and potentially intervene in individuals’ physical and mental health. Yet speech carries sensitive and private information, while its sharing has become a pervasive practice for commercial, political, and cultural purposes, often with malicious intent. This work develops computational models of speech capable of preserving facets of information related to the human state (e.g., affect, pathology, emotion), while eliminating speech-dependent information related to the identity of the speaker.
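
A minimal sketch of one generic way to pursue this goal, assuming PyTorch: an encoder trained with an emotion loss while a speaker classifier behind a gradient-reversal layer discourages speaker-identifying information in the embedding. This illustrates the general idea only; it is not the lab's published Siamese architecture, and all dimensions are assumptions.

```python
# Illustrative sketch only: keep emotion information in the embedding while an
# adversarial speaker head (via gradient reversal) pushes speaker cues out of it.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad            # flipped gradients make the encoder hide speaker identity

encoder = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 64))
emotion_head = nn.Linear(64, 4)      # e.g., 4 emotion classes (assumed)
speaker_head = nn.Linear(64, 20)     # e.g., 20 training speakers (assumed)

features = torch.randn(8, 40)                    # dummy acoustic feature vectors
emotions = torch.randint(0, 4, (8,))
speakers = torch.randint(0, 20, (8,))

z = encoder(features)
loss = (nn.functional.cross_entropy(emotion_head(z), emotions)
        + nn.functional.cross_entropy(speaker_head(GradReverse.apply(z)), speakers))
loss.backward()                                   # one illustrative training step
print(float(loss))
```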

Student researchers: Vansh Narula

Related Publications:

  • Narula et al., “Preserving privacy in image-based emotion recognition through user anonymization,” ACM ICMI 2020
  • Arora et al., “Exploring Siamese Neural Network Architectures for Preserving Speaker Identity in Speech Emotion Classification,” Proc. Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction (M3HMI) Workshop, ICMI 2018.

Texas A&M Innovation[X] Program

Adaptive Responsive Environments

Light, colors, smells, and noises are among the environmental factors that consciously or unconsciously affect our mood, cognition, performance, and even physical and emotional health. For individuals with neurological abnormalities, such as children with autism spectrum disorder (ASD), environmental discomfort arising from noises, scents, light, and heat can feel like a continuous bombardment. Environmental sensation differs from person to person, and a single condition cannot fit all individuals, underlining the need for personalized solutions. In collaboration with Mechanical Engineering, Construction Science, and Psychological & Brain Sciences, we aim to design an intelligent and adaptive indoor living space that can continuously and unobtrusively “sense” each individual’s neuro-physiology, and then seamlessly and intuitively adjust the local environment (e.g., temperature, light) in a unique and personalized way to mitigate negative outcomes (e.g., increased stress). You can find more information on the project website and this video.
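
A minimal sketch of the sense-and-adapt loop, assuming a per-occupant stress estimate in [0, 1] derived from wearable neuro-physiological sensing; the adapt_environment function, preference names, and threshold rules are hypothetical and do not describe the deployed system.

```python
# Illustrative sketch only: map an occupant's momentary stress estimate to
# personalized light and temperature setpoints.
def adapt_environment(stress: float, preferences: dict) -> dict:
    """Choose environmental setpoints from a stress estimate and personal preferences."""
    if stress > 0.7:    # high stress: calm the space
        return {"light_level": preferences["dim_light"],
                "temperature_c": preferences["comfort_temp"]}
    if stress > 0.4:    # moderate stress: soften the lighting gradually
        return {"light_level": 0.5 * (preferences["dim_light"] + preferences["work_light"]),
                "temperature_c": preferences["comfort_temp"]}
    return {"light_level": preferences["work_light"],     # calm: keep task settings
            "temperature_c": preferences["work_temp"]}

prefs = {"dim_light": 0.3, "work_light": 0.8, "comfort_temp": 22.5, "work_temp": 21.0}
for stress in (0.2, 0.5, 0.9):   # simulated readings from wearable sensing
    print(stress, adapt_environment(stress, prefs))
```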

Student researchers: Shravani Sridhar

 

Texas A&M X-Grant and Texas A&M Triads for Transformation (T3) Program

Crowd-Biosensing of Physical and Emotional Distress for Walkable Built Environment

In collaboration with the Smart and Sustainable Construction (SSC) Research Group directed by Dr. Changbum R. Ahn, the goal of this project is to detect locations of emotional and physical distress in the built environment. Our team develops signal processing and machine learning algorithms in order to quantify pedestrians’ collective distress from physiological signals collected from wearable devices.

Student researchers: Prakhar Mohan, Jinwoo Kim, Ehsanul Haque Nirjhar

Related Publications:

  • Kim et al., “Saliency Detection Analysis of Physiological Responses of Pedestrians to Diagnose Built Environment Features in Neighborhood,” Advanced Engineering Informatics, 2020
  • Kim et al., “Can pedestrians’ physiological signals be indicative of urban built environment conditions?,” ASCE CRC 2020
  • Kim et al., “Saliency Detection Analysis of Pedestrians’ Physiological Responses to Assess Adverse Built Environment Features,” ASCE i3CE, 2019 (best paper award)
  • Yadav et al., “Capturing and quantifying emotional distress in the built environment,” Proc. Workshop on Human-Habitat for Health (H3), ICMI 2018.

 

Intelligence Advanced Research Projects Activity (IARPA) – #2017-17042800005

TILES – Tracking Individual Performance with Sensors

In collaboration with the Signal Analysis and Interpretation Laboratory directed by Dr. Shri Narayanan, this study aims to understand how individual differences, mental states, and well-being affect job performance by collecting physical information through wearable sensors, environmental information through environmental sensors, and behavioral information through surveys. More information about this project can be found here.

Student researcher: Projna Paromita (Psyche)

Related Publications:

  • Hadjiantonis et al., “Dynamical systems modeling of day-to-day signal-based patterns of emotional self-regulation and stress spillover in highly-demanding health professions,” IEEE EMBC 2020

 

Biomedical/physiological signal processing for wearable technology

Wearable biometric sensors are increasingly embedded into our everyday lives, yielding large amounts of biomedical/physiological data for which the presence of human experts is not always guaranteed. This underlines the need for robust physiological models that can efficiently analyze and interpret the acquired signals, with applications in daily life, well-being, healthcare, security, and human-computer interaction. The goal of this research is the development of robust algorithms for the reliable representation and interpretation of biomedical/physiological signals and of their co-evolution with other signal modalities and behavioral indices, centered around three main axes.
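
A minimal sketch of sparse coding with a knowledge-driven dictionary, in the spirit of the electrodermal activity (EDA) work cited below, assuming NumPy and scikit-learn: atoms shaped like skin-conductance responses placed at candidate onsets, with orthogonal matching pursuit selecting a few of them. All parameter values are illustrative.

```python
# Illustrative sketch only: represent an EDA window as a sparse combination of
# physiologically plausible skin-conductance-response atoms.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

fs, dur = 4, 60                             # 4 Hz EDA, 60-second window (assumed)
t = np.arange(0, dur, 1 / fs)

def scr_atom(onset, tau_rise=0.75, tau_decay=2.0):
    """Bateman-style skin-conductance-response shape starting at `onset` seconds."""
    s = t - onset
    a = np.where(s > 0, np.exp(-s / tau_decay) - np.exp(-s / tau_rise), 0.0)
    return a / (np.linalg.norm(a) + 1e-12)

onsets = np.arange(0, dur, 1.0)             # one candidate atom per second
D = np.stack([scr_atom(o) for o in onsets], axis=1)   # dictionary: samples x atoms

# Synthetic EDA: two responses plus noise (dummy data).
signal = 0.8 * scr_atom(12) + 0.5 * scr_atom(35) + 0.01 * np.random.randn(len(t))

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2)
omp.fit(D, signal)
active = np.flatnonzero(omp.coef_)
print("estimated response onsets (s):", onsets[active])
```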

Student researcher: Projna Paromita (Psyche)

Related Publications:

  • Goel et al., “Knowledge-driven dictionaries for sparse representation of continuous glucose monitoring signals,” IEEE EMBC 2018
  • Chaspari et al., “Markov Chain Monte Carlo Inference of Parametric Dictionaries for Sparse Bayesian Approximations,” IEEE TSP 2016
  • Chaspari et al., “Sparse Representation of Electrodermal Activity with Knowledge-Driven Dictionaries,” IEEE TBME 2015
  • Chaspari et al., “EDA-Gram: Designing Electrodermal Activity Fingerprints for Visualization and Feature Extraction,” EMBC 2016
  • Chaspari et al., “Quantifying EDA synchrony through joint sparse representation: A case-study of couples’ interactions,” ICASSP 2015

Acoustic analysis of emotion and behavior

Acoustic aspects of speech, such as intonation and prosody, are linked to emotion, affect, and several psychopathological factors. We have analyzed non-verbal vocalizations (e.g., laughter) in relation to children’s engagement patterns. We have further explored transfer learning techniques for leveraging the abundance of publicly available data. Finally, we have studied the co-regulation of acoustic patterns between children in relation to their engagement levels during speech-controlled, interactive robot-companion games.
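
A minimal sketch of the kinds of acoustic descriptors involved (intonation, energy, spectral shape), assuming librosa and a synthetic tone in place of recorded speech; the summary statistics are illustrative and not the lab's published feature set.

```python
# Illustrative sketch only: extract pitch, energy, and MFCC contours from audio
# and summarize them into an utterance-level feature vector for emotion modeling.
import numpy as np
import librosa

sr = 16000
t = np.linspace(0, 2.0, int(2.0 * sr), endpoint=False)
y = 0.1 * np.sin(2 * np.pi * 220 * (1 + 0.05 * np.sin(2 * np.pi * 2 * t)) * t)  # dummy "speech"

f0 = librosa.yin(y, fmin=80, fmax=400, sr=sr)            # intonation contour
rms = librosa.feature.rms(y=y)[0]                        # energy contour
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)       # spectral shape

# Utterance-level statistics of the kind fed to an emotion classifier.
features = np.concatenate([
    [np.nanmean(f0), np.nanstd(f0)],
    [rms.mean(), rms.std()],
    mfcc.mean(axis=1),
])
print(features.shape)    # (17,)
```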

Student researcher: Kexin Feng

Related Publications:

  • Feng & Chaspari, “A Siamese Neural Network with Modified Distance Loss For Transfer Learning in Speech Emotion Recognition,” AAAI AffCon Workshop, 2020
  • Gujral et al., “Leveraging transfer learning techniques for classifying infant vocalizations,” IEEE BHI 2019.
  • Chaspari et al., “Exploring Children’s Verbal and Acoustic Synchrony: Towards Promoting Engagement in Speech-Controlled Robot-Companion Games,” INTERPERSONAL@ICMI 2015
  • Chaspari et al., “Emotion classification of speech using modulation features,” EUSIPCO 2014
  • Chaspari et al., “An acoustic analysis of shared enjoyment in ECA interactions of children with Autism,” ICASSP 2012

 
