Research
I am interested in a number of topics pertaining to Deep Learning:
- Self-supervised and semi-supervised representation learning
- Model robustness
- Model optimization
My Google Scholar profile is available here.
Selected publications and preprints:
- Gupta J., Paul S., Ghosh A. (2019) A Novel Transfer Learning-Based Missing Value Imputation on Discipline Diverse Real Test Datasets—A Comparative Study with Different Machine Learning Algorithms. In: Abraham A., Dutta P., Mandal J., Bhattacharya A., Dutta S. (eds) Emerging Technologies in Data Mining and Information Security. Advances in Intelligent Systems and Computing, vol 814. Springer, Singapore.
- Saptarshi Sengupta, Sanchita Basak, Pallabi Saikia, Sayak Paul, Vasilios Tsalavoutis, Frederick Atiah, Vadlamani Ravi, Alan Peters, A review of deep learning with special emphasis on architectures, applications and recent trends, Knowledge-Based Systems, Volume 194, 2020, 105596, ISSN 0950-7051.
- S. Chakraborty*, A. R. Gosthipaty* and S. Paul*, “G-SimCLR: Self-Supervised Contrastive Learning with Guided Projection via Pseudo Labelling,” 2020 International Conference on Data Mining Workshops (ICDMW), Sorrento, Italy, 2020, pp. 912-916, doi: 10.1109/ICDMW51313.2020.00131. arXiv copy of the paper is available here. *equal contribution.
- Andrey Ignatov, Grigory Malivenko, Radu Timofte, Sheng Chen, Xin Xia, Zhaoyan Liu, Yuwei Zhang, Feng Zhu, Jiashi Li, Xuefeng Xiao, Yuan Tian, Xinglong Wu, Christos Kyrkou, Yixin Chen, Zexin Zhang, Yunbo Peng, Yue Lin, Saikat Dutta, Sourya Dipta Das, Nisarg A. Shah, Himanshu Kumar, Chao Ge, Pei-Lin Wu, Jin-Hua Du, Andrew Batutin, Juan Pablo Federico, Konrad Lyda, Levon Khojoyan, Abhishek Thanki*, Sayak Paul*, and Shahid Siddiqui. “Fast and Accurate Quantized Camera Scene Detection on Smartphones, Mobile AI 2021 Challenge: Report.” (CVPR 2021) [1] arXiv:2105.08819 [cs, eess], May 2021, https://arxiv.org/abs/2105.08819. *equal contribution.
- Sayak Paul*, and Siddha Ganju*. “Flood Segmentation on Sentinel-1 SAR Imagery with Semi-Supervised Learning.” [2] arXiv:2107.08369 [cs], July 2021, https://arxiv.org/abs/2107.08369. *equal contribution. This work was also featured by NVIDIA in this blog post. It was accepted at the following NeurIPS 2021 workshops: AI for Science: Mind the Gaps, Tackling Climate Change with Machine Learning, Women in ML, and Machine Learning and the Physical Sciences. Additionally, we presented this work at PyTorch Developer Day 2021. The PyTorch team helped us create a beautiful poster for this work, which is available here.
- Sayak Paul*, and Pin-Yu Chen*. “Vision Transformers Are Robust Learners.” AAAI 2022, https://arxiv.org/abs/2105.07581. *equal contribution.
Others
- Tutorial organizer and presenter: Practical Adversarial Robustness in Deep Learning: Problems and Solutions (CVPR 2021).
- Reviewer: Uncertainty & Robustness in Deep Learning workshop (ICML 2021), ICASSP 2021 (sub-reviewer), Artificial Intelligence (Elsevier), IEEE Access.
- Released a dataset for large-scale multi-label text classification (joint work with Soumik Rakshit).
[1] This is our report for this CVPR 2021 competition. The report contains solution approaches from the top-placing teams, including ours.
[2] This paper describes the solution approach our team took to finish as the first runners-up at this competition, organized by the NASA Impact team. It was accepted for an oral presentation at the ESA-ECMWF Workshop 2021.