Abhimitra (Abhi) Meka


I am a Research Scientist in the Augmented Reality Perception group at Google, where I work with Thabo Beeler, Christoph Rhemann and many other exceptional researchers, engineers and artists. My work lies at the intersection of computer graphics, computer vision and machine learning. I am particularly interested in acquiring, understanding and modifying the visual appearance of people and objects in images and videos to enable augmented reality.

I was a visiting postdoctoral scholar at Stanford University in 2020, working with Maneesh Agrawala and Gordon Wetzstein. Before that, I graduated summa cum laude with a Doctorate of Engineering from the Graphics, Vision and Video Group (now the Department of Visual Computing and Artificial Intelligence) at the Max Planck Institute for Informatics, advised by Christian Theobalt. My doctoral dissertation received an Honorable Mention for the Eurographics PhD Award 2021.

I encourage you to talk to me about inverse rendering, view synthesis and relighting for augmented reality applications, or to reach out if any of these topics interest you!

Research Projects

EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation and Relighting of Human Eyes

Li, Meka, Müller, Bühler, Hilliges, Beeler

ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2022

A volumetric synthesis model for high-quality photorealistic performance capture and animation of human eyes

Project Page Paper Video Presentation

VoLux-GAN: A Generative Model for 3D Face Synthesis with HDRI Relighting

Tan, Fanello, Meka, Orts-Escolano, Tang, Pandey, Taylor, Tan, Zhang

ACM SIGGRAPH 2022 Conference Proceedings

A generative model that synthesizes novel volumetric 3D human heads that can be photorealistically relit under desired environments

Project Page Paper Supplementary Code

VariTex: Variational Neural Face Textures

Bühler, Meka, Li, Beeler, Hilliges

International Conference on Computer Vision (ICCV) 2021

A generative model that synthesizes novel 3D human faces with fine-grained explicit control over extreme poses and expressions

Project Page Paper Code Presentation Demo Video Blog

Real-time Global Illumination Decomposition of Videos

Meka*, Shafiei*, Zollhoefer, Richardt, Theobalt

ACM Transactions on Graphics 2021 (Presented at SIGGRAPH 2021)

An optimization-based technique to decompose videos into per-frame reflectance and global illumination layers in real time

Project Page Paper Supplementary Video

Deep Relightable Textures: Volumetric Performance Capture with Neural Rendering

Meka*, Pandey*, Haene, Orts-Escolano, Barnum, Davidson, Erickson, Zhang, Taylor, Bouaziz, Legendre, Ma, Overbeck, Beeler, Debevec, Izadi, Theobalt, Rhemann, Fanello

ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia) 2020 

The first technique to perform high-quality view synthesis and relighting of dynamic full-body human performances captured in a Lightstage

Project Page Paper Video

Self-supervised Outdoor Scene Relighting

Yu, Meka, Elgharib, Seidel, Theobalt, Smith

European Conference on Computer Vision (ECCV) 2020

A neural rendering technique to relight outdoor scenes under desired lighting from a single image

Project Page Paper Dataset Code&Models Video

Deep Reflectance Fields: High-Quality Facial Reflectance Field Inference From Color Gradient Illumination

Meka, Haene, Pandey, Zollhoefer, Fanello, Fyffe, Kowdle, Yu, Busch, Dourgarian, Denny, Bouaziz, Lincoln, Whalen, Harvey, Taylor, Izadi, Debevec, Theobalt, Valentin, Rhemann

ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2019

A neural rendering technique to capture fully relightable high-resolution dynamic facial performances in a Lightstage  

Project Page Paper Presentation Video

LIME: Live Intrinsic Material Estimation

Meka, Maximov, Zollhöfer, Chatterjee, Seidel, Richardt, Theobalt

IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018 - Spotlight

A machine learning technique to estimate the high-frequency material of an object of arbitrary shape from a single image, and lighting from depth+video

Project Page Paper Supplementary Presentation Poster Code&Models Dataset (34 GB) Video

Live User-Guided Intrinsic Video For Static Scenes

Meka*, Fox*, Zollhöfer, Richardt, Theobalt

IEEE Transactions on Visualization and Computer Graphics (TVCG) 2017

Presented at International Symposium on Mixed and Augmented Reality (ISMAR) 2017

An interactive technique guided by 3D user strokes to perform geometry reconstruction and intrinsic decomposition of static scenes

Project Page Paper Presentation Poster Dataset Video

Live Intrinsic Video

Meka, Zollhöfer, Richardt, Theobalt

ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2016

The first technique to perform intrinsic decomposition of live video streams using fast non-linear GPU optimization

Project Page Paper Presentation Dataset Video