Chongyan Chen

Focus on HCI, Affective Computing, and mHealth

About Me

Hi, I'm a master's student at UT, majoring in Information Science. My research interests are HCI, Affective Computing (emotion AI), and Mobile Health. I would like to help people measure, analyze, and improve their long-term mental health. I am skilled in Signal Processing, Data Analysis, and Machine Learning, and I am currently learning Deep Learning and Knowledge Graphs. Here is a link to my resume.
I am self-disciplined and have great determination. I am also creative, good at thinking outside the box, and quick to identify potential solutions. I am always eager to learn and open to new things.
In my leisure time, I like drawing and playing basketball with my friends.


University of Texas at Austin, 2020-2024

Ph.D. in Information Science

University of Texas at Austin, 2018-2020

M.Sc. in Information Science, GPA: 3.96/4.0


  • INF 385T AI in Health
  • INF 385K Projects in HCI
  • INF 385T Intro to Machine Learning
  • INF 385T Virtual Environments
  • EE 382V Activity Sensing/Recognition
  • EE 382V Advanced Programming Tools
  • EE 360C Algorithms

South China University of Technology, 2014-2018

B.Eng. in Electronic Engineering, GPA: 3.5/4.0


  • Advanced Language Program Design III
  • Computer Networks
  • Data Structures
  • Digital Signal Processing II
  • Embedded Systems and Their Applications
  • Signals & Systems
  • Software Engineering


"Evaluation of Mental Stress and Heart Rate Variability Derived from Wrist-Based Photoplethysmography" Chongyan Chen, Chunhung Li, Chih-Wei Tsai, and Xinghua Deng. IEEE Eurasia Conference on Biomedical Engineering, Healthcare and Sustainability, 5/31/2019. Paper | Poster | Award
"Activity Recognition with Wristband Based on Histogram and Bayesian Classifiers" Yi-Cong Huang, Wing-Kuen Ling, Chi-Wa Cheng, Chun-Hung Li, Chong-Yan Chen. IEEE 5th International Conference on Signal and Image Processing (ICSIP), 7/19/2019. Paper


  • Kilgarlin Fellowship, 2020-2024
  • William and Margaret Kilgarlin Endowed Scholarship ($54,750), 2020-2021
  • Master's Thesis - Dean's Choice Award finalist, 5/8/2020
  • Best Conference Paper Award, IEEE ECBIOS, 6/2019
  • Winner in Intro to Machine Learning course VQA Answerability Competition, University of Texas at Austin, 4/2019
  • First Prize Award: 311 Calls and 500 Cities Hackathon, University of Texas at Austin, 10/2018
  • Outstanding Undergraduate Thesis, South China University of Technology, 6/2018
  • University Third-Class Scholarship, South China University of Technology, 4/2018
  • Honorable Mention Prize, Mathematical Contest in Modeling, 1/2017
    Skills

      Programming Languages
    • Python
    • Java
    • Kotlin
    • C
    • C++
      Artificial Intelligence
    • Deep Learning
    • Machine Learning
    • scikit-learn
    • Keras
    • PyTorch
    • TensorFlow
    • LSTM
    • BERT
    • CNN
    • GAN
      Web + Mobile development
    • React Native
    • Android development (Java, Kotlin)
    • HTML
    • CSS
    • JavaScript
    • PHP
    • jQuery
    • Ajax
      Backend + Systems
    • Linux
    • Azure
    • Google Cloud
    • AWS
    • Docker
      Data related
    • SQL
    • NoSQL (MongoDB)
    • Qlik
      Knowledge Graph + Other Tools
    • Gephi
    • Crowdsourcing
    • Git
    • MATLAB
    • Unit Test/Jenkins
    • LaTeX
    • Unity 3D
    • Sketch
    • InDesign
    • Photoshop
      Human Languages
    • Mandarin
    • English
    • Cantonese
      Recently, I have been learning the following...
    • Explainable AI
    • Adversarial Attacks
    • Network Compression
      I plan to learn the following in the next 3 months...
    • Transfer Learning
    • Auto-encoder
    • Meta Learning
    • Anomaly Detection
    • Lifelong Learning
    • Reinforcement Learning

    Technical Experience

    Algorithm Engineer (Intern), HUAWEI, Shenzhen, China, 06/2019 – 08/2019

    • Conducted URL pattern extraction using regex, used MD5 hashing to recognize page updates, and applied Naïve Bayes to recognize dead links.
    • Detected keywords and compared text similarity to extract real page titles; used DFS to reconstruct XPaths for text-content extraction, and applied hashing and dynamic programming to find repeated nodes and locate page boundaries.
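The page-update check above can be sketched in a few lines; this is a minimal illustration (the function names are mine, not the production code), assuming the page HTML is fetched elsewhere and passed in as a string:

```python
import hashlib

def page_fingerprint(html: str) -> str:
    """MD5 digest of the page content, used as a cheap change detector."""
    return hashlib.md5(html.encode("utf-8")).hexdigest()

def page_updated(stored_digest: str, html: str) -> bool:
    """A page counts as updated when its digest no longer matches the stored one."""
    return page_fingerprint(html) != stored_digest

# Detect that a page changed between two crawls
digest_v1 = page_fingerprint("<html><body>v1</body></html>")
print(page_updated(digest_v1, "<html><body>v2</body></html>"))  # True
```

Because only the digests are stored and compared, the crawler avoids keeping full copies of every previously seen page.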

    Algorithm Engineer (Intern), Add Care Ltd, Shenzhen, China, 11/2017 – 3/2019

  • Trained a CNN to detect utensils in videos, used Haar-like features and AdaBoost to detect human faces, and tracked them with a Kernelized Correlation Filter. Identified eating gestures by collision checking between the utensil paths and human faces.
  • Designed a stress-induction experiment. Collected and filtered ECG and wrist-based PPG signals and assessed signal quality. Designed peak-finding algorithms for PPG and ECG.
  • Calculated Heart Rate Variability to classify stress states. The overall leave-one-participant-out accuracy of wrist-based PPG with a 3-minute temporal window reached 80%.
  • Add Care official website | Glutrac - named one of the best health-tech products at CES 2020
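The HRV step above can be sketched with SciPy's generic peak finder standing in for the custom PPG/ECG peak-finding algorithms; the signal below is a synthetic, perfectly regular pulse, and the 0.4 s minimum peak distance is an illustrative threshold, not the project's actual parameter:

```python
import numpy as np
from scipy.signal import find_peaks

def rmssd(signal: np.ndarray, fs: float) -> float:
    """RMSSD, a common HRV feature, from a PPG-like waveform.

    Peaks approximate heartbeats; successive peak-to-peak distances
    approximate inter-beat intervals (IBIs) in seconds.
    """
    peaks, _ = find_peaks(signal, distance=int(0.4 * fs))  # peaks >= 0.4 s apart (~150 bpm cap)
    ibis = np.diff(peaks) / fs                             # inter-beat intervals (s)
    return float(np.sqrt(np.mean(np.diff(ibis) ** 2)))

# Synthetic, perfectly regular 60 bpm pulse sampled at 100 Hz
fs = 100.0
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 1.0 * t)
print(rmssd(sig, fs))  # 0.0 for a perfectly regular pulse
```

RMSSD is only one of several standard HRV features; the real pipeline also filtered the raw signals and checked signal quality before computing anything from the peaks.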

    Projects and Research


    ByteMe

  • Developed the ByteMe application for both Web (frontend: HTML + JS + Ajax; backend: Python + Flask) and Mobile platforms (React Native and Kotlin).
  • Built, deployed, and managed the application using Google App Engine, wrote a Python database API for MongoDB, and developed navigation, camera, and user-login features with Google Firebase.
  • Implemented “NewByte” page with the AutoFill function using Food 101 classification model based on Google Inception V3 model and Azure API.
  • Website | App made with Kotlin | App made with React Native

    VizWiz Project

  • Developed the app with speech-to-text (DeepSpeech) and image-quality detection algorithms.
  • Question Answerability: Extracted visual features with OpenCV and the Azure API, and text features with NLTK, to predict the answerability of a visual question.
  • Master's Thesis: Studied external knowledge (knowledge base + image search engine) for VQA. The results show that including external knowledge can largely improve VQA accuracy and suggest the possibility of answering "unanswerable" questions (those marked unanswerable by crowd workers).
  • Click here for more details:
      The field of computer vision has made significant advances in visual question answering (VQA) and image captioning. Sophisticated models work well on simple VQA and image-captioning tasks, but they perform poorly when the task requires common sense or external knowledge. Previous research has explored VQA with a knowledge base and image captioning with reverse image search. However, none have explored the benefits of multi-source external knowledge for these two areas on real tasks. Besides, to our knowledge, we are the first to propose using image search by text for these two areas.

      This thesis compares three kinds of external knowledge: knowledge base, reverse image search, and search by text, and evaluates them on two image-captioning datasets (COCO-Captions and VizWiz-Captions) as well as three visual question answering datasets (VQA v2, VizWiz-VQA, and OK-VQA). The results show that including external knowledge can largely improve the accuracy of VQA. This research confirms that reverse image search is suitable for the image-captioning task and suggests exploring knowledge bases for image captioning. It also suggests that a knowledge base is more suitable for traditional VQA, while search by text is more suitable for the VizWiz-VQA dataset. Besides, the results show the possibility of answering visual questions about low-quality images, or even answering "unanswerable" questions, by using external knowledge. Our research provides a greater understanding of the VizWiz Challenge and reveals a gap between traditional and real VQA/image-captioning tasks from the perspective of external knowledge.
    VizWiz website | Our Image & Video Computing Group
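One common image-quality feature for this kind of answerability prediction is blur, often estimated as the variance of the image Laplacian. The project used OpenCV and the Azure API; the NumPy-only sketch below illustrates the idea and is not the project's actual feature extractor:

```python
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Estimate sharpness as the variance of the discrete Laplacian.

    Low variance suggests a blurry image, which makes a visual
    question harder (or impossible) to answer.
    """
    # 4-neighbor discrete Laplacian, computed on the interior pixels
    lap = (
        gray[:-2, 1:-1] + gray[2:, 1:-1] +
        gray[1:-1, :-2] + gray[1:-1, 2:] -
        4 * gray[1:-1, 1:-1]
    )
    return float(lap.var())

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))        # high-frequency content -> high variance
blurry = np.ones((64, 64)) * 0.5    # flat image -> zero variance
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```

A feature like this is then fed, together with the text features, into a classifier that predicts whether the question is answerable at all.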

    Understanding Health-related Information Searching Behavior Through Eye Tracking

    Collected eye-tracking data (AOIs, TTFF, etc.) using a Tobii TX300 eye tracker and iMotions. Analyzed the data using the Kruskal-Wallis test, one-way ANOVA, and the Mann-Whitney U test. (Paper)

    Paper | Poster
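All three tests named above are available in SciPy; a minimal sketch on synthetic fixation-timing samples (the groups and numbers here are made up, not the study's data):

```python
import numpy as np
from scipy.stats import kruskal, f_oneway, mannwhitneyu

rng = np.random.default_rng(42)
# Hypothetical time-to-first-fixation samples (seconds) for three page layouts
group_a = rng.normal(1.2, 0.3, 30)
group_b = rng.normal(1.5, 0.3, 30)
group_c = rng.normal(1.4, 0.3, 30)

# Kruskal-Wallis: non-parametric comparison across all three groups
h_stat, p_kw = kruskal(group_a, group_b, group_c)
# One-way ANOVA: parametric counterpart, assumes normality
f_stat, p_anova = f_oneway(group_a, group_b, group_c)
# Mann-Whitney U: pairwise non-parametric comparison
u_stat, p_mw = mannwhitneyu(group_a, group_b)

for name, p in [("Kruskal-Wallis", p_kw), ("ANOVA", p_anova), ("Mann-Whitney U", p_mw)]:
    print(f"{name}: p = {p:.4f}")
```

The non-parametric tests (Kruskal-Wallis, Mann-Whitney U) are the usual choice when fixation metrics are skewed and the normality assumption behind ANOVA does not hold.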

    Activities Recognition in Self-Driving Car

    Collected five activities from ten people to address the take-over problem. Reduced individual differences, built a pose estimator to detect people's skeletons, and extracted secondary features to help classify similar activities. Ensembled these with an LSTM. (Paper)


    Virtual Presentation

  • Summary: We used Unity 3D to build a virtual presentation demo.
  • Why: Our design can help people with presentation anxiety improve their presentation skills. It also provides a solution for remote meetings.
  • Details: We designed different human-human interactions/attitudes for the virtual audience. For a positive attitude, some virtual audience members imitate the speaker's actions while the speaker is running an experiment, and others keep paying attention by turning their bodies toward the speaker. For a passive attitude, the audience simply ignores the speaker.
    Besides, we designed different human-object interactions: interacting with slides, popping up details of a displayed item when the user gets close to it, etc.
    2017 Mathematical Contest in Modeling - "Cooperate and navigate"

  • Summary: We analyzed the effects of allowing self-driving, cooperating cars on the roads in several states in the U.S., suggested the best percentage of self-driving cars, and proposed policy changes such as setting exclusive lanes.
  • Why: Self-driving, cooperating cars have been proposed as a way to increase highway capacity without adding lanes or roads. How these cars interact with the existing traffic flow and with each other is not yet well understood.
  • Details: We built a Phantom Traffic Jam Model to simulate traffic jams on highways with few intersections and accidents, and created a Smart Driver Model with versions for human drivers and smart cars.
    We predicted traffic conditions under varying road densities and smart-car proportions.
    We built a Global Decision Model to control smart-car proportions and provide optimal route plans for both human drivers and smart cars. Paper
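A toy illustration of the phantom-traffic-jam idea: cars on a ring road follow a simple time-headway rule, and one bunched-up gap sends a slowdown wave backward through the traffic. This is not the contest model itself; the headway rule and all constants are illustrative:

```python
# Cars on a circular road; each car's speed is capped by the gap to the
# car ahead divided by a fixed time headway (a crude car-following rule).
N, ROAD = 10, 500.0                    # number of cars, ring-road length (m)
DT, V_MAX, HEADWAY = 0.5, 30.0, 1.5    # time step (s), speed cap (m/s), time headway (s)

pos = [i * ROAD / N for i in range(N)]
pos[5] -= 30.0                         # perturbation: shrink the gap in front of car 4

vel = [V_MAX] * N
first_slow = [None] * N                # first step at which each car drops below V_MAX
for step in range(100):
    for i in range(N):
        gap = (pos[(i + 1) % N] - pos[i]) % ROAD   # distance to the car ahead
        vel[i] = min(V_MAX, gap / HEADWAY)
        if vel[i] < V_MAX and first_slow[i] is None:
            first_slow[i] = step
    pos = [(p + v * DT) % ROAD for p, v in zip(pos, vel)]

# The slowdown hits car 4 immediately, then propagates backward to car 3.
print(first_slow[4], first_slow[3])    # → 0 1
```

Even with no accident or obstacle, a single short gap forces the car behind it to brake, and that braking travels upstream, which is the mechanism a phantom-jam model captures.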
    Contact Me