Research
I am interested in representation learning. More recently, I have developed an interest in diffusion models (topics include alignment, test-time scaling, and controlled generation).
Please refer to my Google Scholar profile for details on the publications I have co-authored.
Conference tutorials
Invited talks, demos, etc.
- SoTA Diffusion Models with 🧨 diffusers (slides and recording)
- Controlling Text-to-Image Diffusion Models: Assorted Approaches
  - IBM Research (October 17, 2023)
  - The Dyson Robotics Lab, Imperial College London
  - Department of Statistics, University of Oxford
- 🧨 diffusers for research at VAL, Indian Institute of Science (IISc), June 12, 2023. Slides are here.
- Demo of 🧨 diffusers at ICCV 2023 (Tweet).
- A talk on diffusion models, ETH Zurich (May 06, 2024). Slides are here.
- Transformers in Diffusion Models for Image Generation and Beyond, CS25 v5, Stanford (May 27, 2025). Slides are here.
For regular talks, see here.
Teaching assistance
Served as a TA for Full Stack Deep Learning’s 2022 cohort.
Reviewing
- Conferences: ICCV’25, ICML’25, CVPR’25, ICLR’25, NeurIPS’24, AAAI’23, ICASSP’21 (sub-reviewer)
- Workshops: UDL workshop (ICML’21)
- Journals: TMLR, Artificial Intelligence (Elsevier), IEEE Access
Misc
- Released a dataset for large-scale multi-label text classification (joint work with Soumik Rakshit).
- An exploration of instruction-tuning Stable Diffusion.