This Q&A highlight features Rachel Ma, an Honorable Mention in the 2023 CRA Outstanding Undergraduate Researchers award program. Rachel double majored in Computer Science and Music at Brown University and is now pursuing a PhD in Electrical Engineering and Computer Science (EECS) at MIT.
Keywords: Artificial Intelligence, NLP, Robotics, Skill Generalization.
What brought you to computing research?
I saw postings for Undergraduate Teaching and Research Award (UTRA) positions at Brown, which listed various research projects. The DuckieSky Drone project, which aimed to teach basic robotics, engineering, and programming skills to high school students from various demographic backgrounds, caught my interest. I contacted the professor who posted it, Professor Stefanie Tellex, and applied. I was accepted, and it became my first research project. During the first year of this project, I built my own drone, developed software, and worked with others to create lesson plans and teaching materials for high schools. These materials focused on building and flying a drone, its software and hardware principles, and ethics and safety considerations. In my second year, I stepped into a managerial and communications lead role, working with different high schools in Rhode Island to coordinate the program and organize technical support.
What can you tell us about your project?
Humans use verbs to describe actions that can be applied across various objects. Due to the importance of natural language in everyday life, robots that interact with humans should be able to follow commands expressed as verbs. Humans can easily adapt skills from one object to another (open, push, and pull different objects). Robots should be able to do the same to learn and generalize skills efficiently. For example, if the robot knows how to open a door, it should be able to generalize the verb “open” to a microwave or refrigerator with minimal additional learning. Most robotics research focuses on understanding adjectives or nouns for object retrieval or learning to do a single task (like opening a door), but few examine verb generalization. In my research project, I created an AI model to generalize robot skills (verbs) to novel/unseen object categories. Our method allows a robot to generate a path for a novel object based on a verb, which can then be used as input to a motion planner. We focus on object manipulation verbs that describe changes in physical states, such as ‘translate’ or ‘rotate’. These changes are reflected in the differences between the initial (pre-action) and termination (post-action) states of the object or its path. For example, using the verb “open,” the initial state would be a door fully shut, while the termination state would be the door ajar. This work was presented at the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
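To make the pre-action/post-action idea concrete, here is a minimal toy sketch of how a verb can be modeled as a transformation between an object's initial and termination states, with intermediate waypoints interpolated for a motion planner. Everything here (the `VERB_EFFECTS` table, `generate_trajectory`, the 1-D state representation) is a hypothetical illustration, not the actual model from the paper, which learns these effects from data rather than hard-coding them.

```python
import numpy as np

# Hypothetical sketch: each verb maps an object's initial (pre-action)
# state to its termination (post-action) state. States are toy 1-D
# articulation values (e.g. a door's hinge angle in radians); the real
# system operates on learned representations, not raw angles.
VERB_EFFECTS = {
    "open": lambda s: s + np.pi / 2,   # e.g. swing a shut door 90 degrees
    "close": lambda s: 0.0,            # return to the shut position
}

def generate_trajectory(verb, initial_state, num_steps=5):
    """Interpolate a path of intermediate states between the pre-action
    and post-action states for a given verb. Such a path could then be
    handed to a motion planner as a sequence of waypoints."""
    final_state = VERB_EFFECTS[verb](initial_state)
    return np.linspace(initial_state, final_state, num_steps)

# "Open" a fully shut door (angle 0.0): five waypoints from shut to ajar.
traj = generate_trajectory("open", initial_state=0.0, num_steps=5)
print(traj)
```

Generalizing to a novel object category (a microwave or refrigerator door) would then amount to reusing the same verb transformation on a new object's state, which is the intuition the learned model captures.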
How was the collaborative environment in your lab?
The teamwork in our lab was very active and supportive. I worked closely with REU student Lyndon Lam and my labmates, PhD students Benjamin Spiegel, Aditya Ganeshan, Roma Patel, Ben Abbatematteo; post-doc David Paulius; and a few undergraduate students from the Humans2Robot and Intelligent Robot Labs. Our advisors, Professor George Konidaris and Professor Stefanie Tellex, provided great guidance. This collaboration made projects go smoothly and helped us learn and grow our research abilities.
What challenges did you encounter?
One significant challenge was defining the scope of our project. After brainstorming sessions, multiple iterations were required to refine our ideas and ensure our project goals were clear and achievable. Another, technical, challenge was the lack of high-quality data for training our model; to address this, I had to create a new dataset. Additionally, our initial model configuration generated only two images per object trajectory, representing the initial and termination states before and after applying a verb. To enhance the model's functionality, I proposed modifications that enabled the generation of more detailed trajectories with additional intermediate timesteps, which significantly improved our results.
What are your favorite aspects of research?
Robotics is an inherently interdisciplinary field, and I really enjoyed exploring its different components, such as computer vision, natural language processing, and robot control, through discussions with my collaborators and mentors during my research. I became very passionate about AI, human-robot interaction, and the potential of integrating natural language processing with robotics. As a result, I decided to apply to PhD programs and am now at MIT as a first-year PhD student.
How do you balance research with other interests?
During my undergraduate studies at Brown, I was deeply involved in extracurriculars, peer mentoring, and student life. I served in various roles in the Computer Science department as an Undergraduate Teaching Assistant (TA), Head TA, and Meta TA. In addition to my major in computer science, I pursued a double major in music, where I was actively involved in performing and composing. I was also involved in course development and research in the Choreo-robotics project—integrating dance with robotics—which allowed me to merge my creative passions from music with my research in robotics. I also joined the Alpha Chi Omega sorority, where my engagement extended beyond social activities. This unique blend of interests enriched my experience and inspired other Computer Science students at Brown and members of my sorority to explore research opportunities.
Do you have any advice for other students looking to get into research?
Grab coffee with members of the lab you are interested in, attend a few of their general meetings to learn about their exciting ideas, and decide if you are interested in the topics or if you would love to join their projects!
— Edited by Yasra Chandio and Alejandro Velasco Dimate