Abhimitra (Abhi) Meka

I am a Research Scientist in the Augmented Reality Perception group at Google, where I work with Thabo Beeler, Christoph Rhemann and many other exceptional researchers, engineers and artists. My work lies at the intersection of computer graphics, computer vision and machine learning. I am particularly interested in acquiring, understanding and modifying the visual appearance of people and objects in images and videos to enable augmented reality.

I was a visiting postdoctoral scholar at Stanford University in 2020, working with Maneesh Agrawala and Gordon Wetzstein. Before that, I graduated summa cum laude with a Doctorate of Engineering from the Graphics, Vision and Video group (now the Department of Visual Computing and Artificial Intelligence) at the Max Planck Institute for Informatics, advised by Christian Theobalt. My doctoral dissertation received an Honorable Mention for the Eurographics PhD Award 2021.

I encourage you to talk to me about Inverse Rendering for Augmented Reality applications, especially if you are already familiar with the area!

E-mail: abhijr@domainname.com, where domainname = googlemail

Research Interests


  • Inverse Rendering

  • Digital Humans

  • Augmented Reality Rendering

Research Projects

VoLux-GAN: A Generative Model for 3D Face Synthesis with HDRI Relighting

arXiv 2022

A generative model that synthesizes novel 3D human faces that can be photorealistically relit under desired environments

VariTex: Variational Neural Face Textures

International Conference on Computer Vision (ICCV) 2021

A generative model that synthesizes novel 3D human faces with fine-grained explicit control over extreme poses and expressions

Real-time Global Illumination Decomposition of Videos

ACM Transactions on Graphics 2021 (Presented at SIGGRAPH 2021)

An optimization-based technique to decompose videos into per-frame reflectance and global illumination layers in real time

Deep Relightable Textures: Volumetric Performance Capture with Neural Rendering

ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia) 2020

The first technique to perform high-quality view synthesis and relighting of dynamic full-body human performances in a Lightstage

Self-supervised Outdoor Scene Relighting

European Conference on Computer Vision (ECCV) 2020

A neural rendering technique to relight outdoor scenes under desired lighting from a single image

Deep Reflectance Fields: High-Quality Facial Reflectance Field Inference From Color Gradient Illumination

ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2019

A neural rendering technique to capture fully relightable high-resolution dynamic facial performances in a Lightstage

Live Intrinsic Material Estimation

IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018 - Spotlight

A machine learning technique to estimate the high-frequency material of an object of any shape from a single image, and lighting from RGB-D video

Live User-Guided Intrinsic Video For Static Scenes

IEEE Transactions on Visualization and Computer Graphics (TVCG) 2017

Presented at the International Symposium on Mixed and Augmented Reality (ISMAR) 2017

An interactive technique guided by 3D user strokes to perform geometry reconstruction and intrinsic decomposition of static scenes

Live Intrinsic Video

ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2016

The first technique to perform intrinsic decomposition of live video streams using fast non-linear GPU optimization