Recent and selected work
(* indicates equal contribution)
Linearizing Large Language Models
Jean Mercat*, Igor Vasiljevic*, Sedrick Keh*, Kushal Arora, Achal Dave, Adrien Gaidon, Thomas Kollar
COLM, 2024
DataComp-LM: In search of the next generation of training sets for language models
DataComp-LM Team
In submission
Language models scale reliably with over-training and on downstream tasks
Samir Yitzhak Gadre, Georgios Smyrnis, Vaishaal Shankar, Suchin Gururangan, Mitchell Wortsman, Rulin Shao, Jean Mercat, Alex Fang, Jeffrey Li, Sedrick Keh, Rui Xin, Marianna Nezhurina, Igor Vasiljevic, Jenia Jitsev, Luca Soldaini, Alexandros G. Dimakis, Gabriel Ilharco, Pang Wei Koh, Shuran Song, Thomas Kollar, Yair Carmon, Achal Dave, Reinhard Heckel, Niklas Muennighoff, Ludwig Schmidt
In submission
Transcrib3D: 3D Referring Expression Resolution through Large Language Models
Jiading Fang, Xiangshan Tan, Shengjie Lin, Igor Vasiljevic, Vitor Guizilini, Hongyuan Mei, Rares Ambrus, Gregory Shakhnarovich, Matthew R Walter
IROS, 2024
Towards Zero-Shot Scale-Aware Monocular Depth Estimation
Vitor Guizilini, Igor Vasiljevic, Dian Chen, Rares Ambrus, Adrien Gaidon
ICCV, 2023
Depth Field Networks for Generalizable Multi-view Scene Representation
Vitor Guizilini*, Igor Vasiljevic*, Jiading Fang*, Rares Ambrus, Greg Shakhnarovich, Matthew Walter, Adrien Gaidon
ECCV, 2022
Full Surround Monodepth from Multiple Cameras
Vitor Guizilini*, Igor Vasiljevic*, Rares Ambrus, Greg Shakhnarovich, Adrien Gaidon
RA-L, 2022
Neural Ray Surfaces for Self-Supervised Learning of Depth and Ego-motion
Igor Vasiljevic, Vitor Guizilini, Rares Ambrus, Sudeep Pillai, Wolfram Burgard, Greg Shakhnarovich, Adrien Gaidon
3DV, 2020 (Oral Presentation)
DIODE: A Dense Indoor and Outdoor DEpth Dataset
Igor Vasiljevic, Nick Kolkin, Shanyi Zhang, Ruotian Luo, Haochen Wang, Falcon Z. Dai, Andrea F. Daniele, Mohammadreza Mostajabi, Steven Basart, Matthew R. Walter, Gregory Shakhnarovich
CVPRW, 2020