Sriram Yenamandra

I recently graduated with a Master's degree in Computer Science (Machine Learning specialization) from Georgia Tech. During my Master's, I worked in Prof. Dhruv Batra's lab on embodied mobile manipulation tasks. At Georgia Tech, I also had the privilege of being advised by Prof. Judy Hoffman on problems in visual domain adaptation and bias identification.

Before coming to Georgia Tech, I earned my Bachelor's degree in Computer Science and Engineering with a minor in Applied Statistics from IIT Bombay, where I worked on image inpainting under the supervision of Prof. Suyash Awate.

Email  /  CV  /  Google Scholar  /  Twitter  /  GitHub

profile photo
Research Interests

My research interests broadly lie in computer vision, machine learning, and robotics. My goal is to develop general-purpose robotic assistants that can quickly learn to perform new tasks in novel environments. Building such generalist agents remains challenging, particularly due to the scarcity of real robot data and the lack of realism in simulated environments. I am interested in leveraging multiple synthetic data sources (e.g., simulations, diffusion models) and uncurated video datasets to solve real-world video, 3D, and embodied tasks. My previous work has contributed to this goal by developing a framework for using simulation to build real-world mobile manipulators, tackling visual domain shifts (such as sim2real), and employing generative models to pinpoint data slices that may need additional collection.

Publications

(* denotes equal contribution)

GOAT: GO to Any Thing
Matthew Chang*, Theophile Gervet*, Mukul Khanna*, Sriram Yenamandra*, Dhruv Shah, So Yeon Min, Kavit Shah, Chris Paxton, Saurabh Gupta, Dhruv Batra, Roozbeh Mottaghi, Jitendra Malik*, Devendra Singh Chaplot*
Preprint
arXiv / code

LANCE: Stress-testing Visual Models by Generating Language-guided Counterfactual Images
Viraj Prabhu, Sriram Yenamandra, Prithvijit Chattopadhyay, Judy Hoffman
NeurIPS 2023
arXiv / code

HomeRobot: Open Vocabulary Mobile Manipulation
Sriram Yenamandra*, Arun Ramachandran*, Karmesh Yadav*, Austin Wang, Mukul Khanna, Theophile Gervet, Tsung-Yen Yang, Vidhi Jain, Alexander William Clegg, John Turner, Zsolt Kira, Manolis Savva, Angel Chang, Devendra Singh Chaplot, Dhruv Batra, Roozbeh Mottaghi, Yonatan Bisk, Chris Paxton
CoRL 2023
arXiv / code

FACTS: First Amplify Correlations and Then Slice to Discover Bias
Sriram Yenamandra, Pratik Ramesh, Viraj Prabhu, Judy Hoffman
ICCV 2023
arXiv / code

Adapting Self-Supervised Vision Transformers by Probing Attention-Conditioned Masking Consistency
Viraj Prabhu*, Sriram Yenamandra*, Aaditya Singh, Judy Hoffman
NeurIPS 2022
arXiv / code

Housekeep: Tidying Virtual Households using Commonsense Reasoning
Yash Kant, Arun Ramachandran, Sriram Yenamandra, Igor Gilitschenski, Dhruv Batra, Andrew Szot*, Harsh Agrawal*
ECCV 2022
project page / arXiv / code / colab

Semi-Supervised Deep Expectation-Maximization for Low-Dose PET-CT
Vatsala Sharma, Ansh Khurana, Sriram Yenamandra, Suyash P. Awate
ISBI 2022 (Best Paper Award)
paper

Learning Image Inpainting from Incomplete Images using Self-Supervision
Sriram Yenamandra, Ansh Khurana, Rohit Jena, Suyash P. Awate
ICPR 2020
paper


(Design and CSS courtesy: Jon Barron)