At the end of my sophomore year, I happened to read a CHI paper: "Blocks4All: overcoming accessibility barriers to blocks programming for children with visual impairments." Written by Milne and her colleagues, the paper explores ways to overcome accessibility barriers in existing block-based programming tools. I was super intrigued by how HCI researchers work to understand user groups and translate design implications into an artifact.
Around that time, to my surprise, my supervisor asked if I was willing to lead a project developing an AAC (Augmentative and Alternative Communication) application. I said yes without hesitation, since as an undergrad with no prior research experience I had found it difficult to take part in any research project. From then on, I spent almost every night developing an iOS app while simultaneously conducting user studies, reading relevant papers, and even doing design work. Literally everything was new: from receiving research funding to getting IRB approval for the evaluation! By the time I finished the research, the CHI 2020 deadline was approaching; we submitted our work there, and it was accepted with an Honorable Mention Award.
To summarize the paper briefly: rather than treating the child as the sole stakeholder of AAC, we also viewed caregivers as essential to the child's communication. Based on this framing, we aimed to develop a system that helps caregivers collaborate to derive the best strategies for the child's communication. As a result, we developed TalkingBoogie, a mobile AAC system for children with developmental disabilities.
Usage flow of TalkingBoogie system
The TalkingBoogie system consists of two mobile apps: TalkingBoogie-AAC, which supports caregiver-child communication, and TalkingBoogie-Coach, which supports collaboration among caregivers.
Key screens of TalkingBoogie-AAC
Key screens of TalkingBoogie-Coach
Facilitating conversation between strangers using a chatbot with ML-infused personalized topic suggestion
Chatting with strangers is a prevalent behavior in online settings where people can easily gather. Yet people often find it difficult to initiate and maintain such conversations due to a lack of information about their partner.
Hence, we aimed to facilitate conversation between strangers with the help of machine learning (ML), and we present BlahBlahBot, an ML-infused chatbot that moderates conversations between strangers with personalized topics. Drawing on users' social media posts, BlahBlahBot supports the conversation by suggesting topics that are likely to be of mutual interest.
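The paper does not spell out the suggestion mechanism here, but the core idea of "topics likely to be of mutual interest" can be illustrated with a purely hypothetical sketch: score each candidate topic by how strongly it appears in both users' posts, and rank by that joint score. All names (`keyword_counts`, `mutual_topic_scores`) and the simple keyword-counting approach are my own assumptions for illustration, not BlahBlahBot's actual pipeline.

```python
# Hypothetical sketch of mutual-interest topic ranking (NOT BlahBlahBot's
# actual ML pipeline): rank candidate topics by the geometric mean of each
# topic's keyword frequency in both users' social media posts.
from collections import Counter

def keyword_counts(posts):
    """Count lowercase word occurrences across a user's posts."""
    words = [w.strip(".,!?").lower() for post in posts for w in post.split()]
    return Counter(words)

def mutual_topic_scores(posts_a, posts_b, candidate_topics):
    """Rank topics so that only topics present for BOTH users score highly."""
    ca, cb = keyword_counts(posts_a), keyword_counts(posts_b)
    scores = {t: (ca[t] * cb[t]) ** 0.5 for t in candidate_topics}
    return sorted(scores, key=scores.get, reverse=True)

posts_a = ["Went hiking this weekend, amazing views", "New camera for hiking trips"]
posts_b = ["Planning a hiking trip soon", "Trying a new pasta recipe"]
print(mutual_topic_scores(posts_a, posts_b, ["hiking", "cooking", "camera"]))
# "hiking" ranks first: it is the only topic appearing in both users' posts.
```

A real system would likely use embeddings or topic models rather than raw keyword overlap, but the ranking structure (score candidates, suggest the top shared interest) stays the same.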
A user study with three groups (control, random-topic chatbot, and BlahBlahBot; N=18) supported the feasibility of BlahBlahBot in increasing both conversation quality and closeness to the partner, and user interviews revealed the factors behind these increases. Overall, our preliminary results imply that an ML-infused conversational agent can be effective in augmenting dyadic conversation.
This work was published in CHI 2021 (Late-Breaking Work).
Framework of BlahBlahBot
Exploring the design space of Computer Vision Syndrome (CVS) intervention applications
Prolonged computer use has increased the prevalence of ocular problems, including eyestrain, tired eyes, irritation, redness, blurred vision, and double vision, collectively referred to as computer vision syndrome (CVS). In this study, we explored the interface elements of computer-based CVS interventions to derive design guidelines based on the pros and cons of each element.
In this project, I developed LiquidEye, an intervention application for macOS. To understand which design elements of an intervention app users prefer, we built the system to collect each user's preference settings as a time series. We then analyzed these logs jointly with self-reported satisfaction to statistically identify the significance of each element.
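The published analysis is not reproduced here, but the joint analysis of logged settings and self-reports can be sketched under stated assumptions: suppose each interface element's daily on/off state is logged alongside a daily satisfaction rating, and we check how strongly each element's usage tracks satisfaction. The log format, element names, and the use of a plain Pearson correlation are all illustrative assumptions, not the study's actual statistical method.

```python
# Hypothetical sketch (NOT the published LiquidEye analysis): correlate each
# logged interface setting with daily self-reported satisfaction to see
# which elements track satisfaction most strongly.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Assumed daily log: 1 = element enabled that day (element names invented).
element_logs = {
    "break_reminder": [1, 1, 0, 1, 1, 0, 1],
    "blue_light_filter": [0, 1, 1, 0, 0, 1, 0],
}
satisfaction = [4, 5, 2, 4, 5, 3, 4]  # daily self-reports on a 1-5 scale

for name, series in element_logs.items():
    print(f"{name}: r = {pearson(series, satisfaction):.2f}")
```

In practice a study like this would use a proper significance test (and correct for repeated measures across users), but the pairing of per-element usage series with satisfaction scores is the essential structure.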
This work was later published in Journal of Medical Internet Research (IF = 5.03).
My primary research area is human-computer interaction (HCI), where I mainly focus on developing social computing tools. Currently, I'm working with Kenneth Holstein (CoALA Lab) and Adam Perer (CMU DIG) on understanding human explanations to inform the design of AI explanations.
My research focuses on designing social computing systems in which automated algorithms facilitate user collaboration. Specifically, I aim to design systems where such algorithms help stakeholders cope with the various challenges that arise in collaboration.
Hover over the following diagram to view some relevant publications!
Collaborative mobile AAC system for non-verbal children with developmental disabilities