Abhimitra (Abhi) Meka

Email · LinkedIn

I am a Research Scientist in the Augmented Reality Perception group at Google, where I work with Thabo Beeler, Christoph Rhemann and many other exceptional researchers, engineers and artists. My work lies at the intersection of computer graphics, computer vision and machine learning. I am particularly interested in acquiring, understanding and modifying the visual appearance of people and objects in images and videos to enable augmented reality.

I encourage you to talk to me about inverse rendering, view synthesis and relighting for augmented reality applications. Or, if any of this sounds familiar, let's talk!

Research Interests

Inverse rendering, view synthesis and relighting for augmented reality, at the intersection of computer graphics, computer vision and machine learning.

Research Projects

Lite2Relight: 3D-aware Single Image Portrait Relighting

Rao, Fox, Meka, B R, Zhan, Weyrich, Bickel, Pfister, Matusik, Elgharib, Theobalt

ACM SIGGRAPH 2024 Conference Proceedings

An efficient feedforward encoder for volumetric 3D view synthesis and environmental relighting of a face from a single image

Project Page Paper Video Supplementary Code

FaceFolds: Meshed Radiance Manifolds for Efficient Volumetric Rendering of Dynamic Faces

Medin, Li, Du, Garbin, Davidson, Wornell, Beeler, Meka

ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D) 2024

A single multi-layer mesh plus video-texture representation that enables highly efficient rendering of dynamic volumetric face sequences on standard graphics platforms such as game engines, without any ML integration (a brief sketch of the layered compositing idea follows below)

Project Page Paper Video
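
The reason such a representation runs in a game engine without any ML integration is that, at render time, it only needs standard rasterization and alpha blending of a small stack of textured layers. The snippet below is a minimal, hypothetical illustration (not the paper's code) of compositing per-layer RGBA textures back-to-front with the standard over operator; the layer count, resolution and random layers array are placeholders, and in the actual method the colors and opacities would come from the baked video texture of the current frame.

import numpy as np

def composite_layers(layers_rgba: np.ndarray) -> np.ndarray:
    # layers_rgba: (L, H, W, 4) float array with straight alpha,
    # ordered back (index 0) to front. Returns an (H, W, 3) image.
    _, h, w, _ = layers_rgba.shape
    out = np.zeros((h, w, 3), dtype=np.float32)
    for layer in layers_rgba:                    # back to front
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = alpha * rgb + (1.0 - alpha) * out  # "over" operator
    return out

# Toy example: 8 hypothetical semi-transparent layers of a 256x256 face crop.
layers = np.random.rand(8, 256, 256, 4).astype(np.float32)
print(composite_layers(layers).shape)  # (256, 256, 3)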

GANtlitz: Ultra High Resolution Generative Model for Multi-Modal Face Textures

Gruber, Collins, Meka, Müller, Sarkar, Orts-Escolano, Prasso, Busch, Gross, Beeler

Computer Graphics Forum (Proceedings of Eurographics) 2024

A generative model that synthesizes ultra-high-resolution (6k × 4k) multi-modal face appearance maps for novel identities, trained from very sparse data (~100 identities).

Project Page Paper Video

ShellNeRF: Learning a Controllable High-resolution Model of the Eye and Periocular Region

Li, Sarkar, Meka, Bühler, Müller, Gotardo, Hilliges, Beeler

Computer Graphics Forum (Proceedings of Eurographics) 2024

A novel discretized volumetric representation for animation and synthesis of the eye and periocular region using concentric surfaces around a 3DMM face mesh

Project Page Paper

One2Avatar: Generative Implicit Head Avatar for Few-shot User Adaptation

Yu, Bai, Meka, Tan, Xu, Pandey, Fanello, Park, Zhang

arXiv 2024

A novel approach that generates an animatable, photo-realistic avatar from just one or a few images of the target person, using a 3D generative model learned from multi-view, multi-expression data

Project Page Paper

LitNeRF: Intrinsic Radiance Decomposition for High-Quality View Synthesis and Relighting of Faces

Sarkar, Bühler, Li, Wang, Vicini, Riviera, Zhang, Orts-Escolano, Gotardo, Beeler, Meka

ACM SIGGRAPH Asia 2023 Conference Proceedings

A volumetric formulation that achieves ultra-high-quality view synthesis and relighting of human heads captured in sparse multi-view, multi-light rigs (an illustrative formulation follows below)

Project Page Paper
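
One way to read the "intrinsic radiance decomposition" in the title: keep the standard volume-rendering quadrature, but factor each sample's radiance into an albedo term and a lighting-dependent shading term, so that relighting changes only the latter. The formulation below is an illustrative sketch in that spirit, not the paper's exact model; the symbols ρ_i and s_i are my own placeholders.

\[
C(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i,
\qquad
T_i = \exp\!\Big(-\sum_{j<i} \sigma_j \delta_j\Big),
\qquad
\mathbf{c}_i = \boldsymbol{\rho}_i \odot s_i(\boldsymbol{\omega}),
\]

where σ_i and δ_i are the per-sample density and step length along the ray r, ρ_i is albedo, and s_i(ω) is shading under lighting condition ω; view synthesis varies the ray while keeping the lighting fixed, and relighting keeps density and albedo fixed while varying ω.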

Preface: A Data-driven Volumetric Prior for Few-shot Ultra High-resolution Face Synthesis

Bühler, Sarkar, Shah, Li, Wang, Helminger, Orts-Escolano, Lagun, Hilliges, Beeler, Meka

International Conference on Computer Vision (ICCV) 2023

A novel data-driven volumetric human face prior that enables high-quality synthesis of ultra high-resolution novel views of human faces from very sparse input images

Project Page Paper Extended Webpage (Download)

Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold

Pan, Tewari, Leimkühler, Liu, Meka, Theobalt

ACM SIGGRAPH 2023 Conference Proceedings

An interactive point-and-drag image manipulation technique based on optimizing generative image features (a simplified sketch of the optimization step follows below)

Project Page Paper Code
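
The edit loop in the paper alternates motion supervision, which optimizes the GAN latent code so that generator features near a handle point move a small step toward the target point, with point tracking that re-locates the handle after each step. Below is a deliberately simplified, self-contained sketch of a single motion-supervision step; ToyGenerator is a random stand-in for the pretrained StyleGAN feature extractor, and the single-point loss, step radius and learning rate are illustrative simplifications of the patch-based loss used in the paper.

import torch

# Random stand-in for an intermediate feature map of a pretrained GAN
# generator; in the actual method this is a StyleGAN2 feature layer.
class ToyGenerator(torch.nn.Module):
    def __init__(self, latent_dim=64, channels=32, res=64):
        super().__init__()
        self.fc = torch.nn.Linear(latent_dim, channels * res * res)
        self.channels, self.res = channels, res

    def forward(self, w):
        return self.fc(w).view(self.channels, self.res, self.res)

def motion_supervision_step(gen, w, handle, target, radius=3.0, lr=0.01):
    # handle, target: (x, y) pixel coordinates in the feature map.
    w = w.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    feat = gen(w)
    p = torch.tensor(handle, dtype=torch.float32)
    d = torch.tensor(target, dtype=torch.float32) - p
    d = d / (d.norm() + 1e-8)                 # unit step toward the target
    src = p.round().long()
    dst = (p + radius * d).round().long()
    # Pull the features a small step ahead of the handle toward the handle's
    # current (detached) features, which drags image content along d.
    loss = (feat[:, dst[1], dst[0]] - feat[:, src[1], src[0]].detach()).abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return w.detach(), loss.item()

gen = ToyGenerator()
w = torch.randn(64)
w, loss = motion_supervision_step(gen, w, handle=(20, 30), target=(40, 30))
print(loss)

In the full method the loss is accumulated over a patch around each handle, an optional mask keeps the rest of the image fixed, and the handle is re-tracked after every step by a nearest-neighbor search in feature space.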

VoLux-GAN: A Generative Model for 3D Face Synthesis with HDRI Relighting

Tan, Fanello, Meka, Orts-Escolano, Tang, Pandey, Taylor, Tan, Zhang

ACM SIGGRAPH 2022 Conference Proceedings

A generative model that synthesizes novel volumetric 3D human heads that can be photorealistically relit under desired environments

Project Page Paper Supplementary Code

VariTex: Variational Neural Face Textures

Bühler, Meka, Li, Beeler, Hilliges

International Conference on Computer Vision (ICCV) 2021

A generative model that synthesizes novel 3D human faces with fine-grained, explicit control over extreme poses and expressions (a sketch of the texture-sampling idea follows below)

Project Page Paper Code Presentation Demo Video Blog
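
As I read the project, the neural face texture lives in UV space and is sampled into image space using UV coordinates rasterized from a parametric face mesh, so the mesh gives explicit pose and expression control while a variational latent controls appearance. The snippet below sketches just that sampling step; the random tensors are placeholders for the learned texture and the rasterized UV map, which in the real pipeline come from the trained model and a mesh renderer.

import torch
import torch.nn.functional as F

# Hypothetical learned neural texture in UV space: (1, C, Ht, Wt) features.
neural_texture = torch.randn(1, 16, 256, 256)

# Hypothetical UV map for one target view: (1, H, W, 2) in [-1, 1], which the
# real pipeline would obtain by rasterizing a 3DMM mesh under the target pose.
uv_map = torch.rand(1, 128, 128, 2) * 2.0 - 1.0

# Warp texture features into image space; a neural renderer then decodes
# these per-pixel features into the final RGB image and mask.
image_features = F.grid_sample(neural_texture, uv_map,
                               mode="bilinear", align_corners=False)
print(image_features.shape)  # torch.Size([1, 16, 128, 128])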

Real-time Global Illumination Decomposition of Videos

Meka*, Shafiei*, Zollhoefer, Richardt, Theobalt

ACM Transactions on Graphics 2021 (Presented at SIGGRAPH 2021)

An optimization-based technique to decompose videos into per-frame reflectance and global illumination layers in real time (a sketch of the underlying image formation model follows below)

Project Page Paper Supplementary Video
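
A hedged sketch of the kind of image-formation model such a decomposition assumes (the paper's exact parameterization may differ): each frame I_t is explained as per-pixel reflectance modulating the sum of a direct and an indirect (global) illumination layer, recovered per frame by non-linear least-squares optimization.

\[
I_t(p) = R_t(p)\,\bigl(L^{\mathrm{dir}}_t(p) + L^{\mathrm{ind}}_t(p)\bigr),
\qquad
\min_{R_t,\, L^{\mathrm{dir}}_t,\, L^{\mathrm{ind}}_t}\;
\sum_{p} \bigl\| I_t(p) - R_t(p)\bigl(L^{\mathrm{dir}}_t(p) + L^{\mathrm{ind}}_t(p)\bigr) \bigr\|^{2}
\;+\; \text{priors},
\]

with priors that typically favor piecewise-constant reflectance and spatially smooth illumination, solved fast enough on the GPU to keep up with the incoming video.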

Deep Relightable Textures: Volumetric Performance Capture with Neural Rendering

Meka*, Pandey*, Haene, Orts-Escolano, Barnum, Davidson, Erickson, Zhang, Taylor, Bouaziz, Legendre, Ma, Overbeck, Beeler, Debevec, Izadi, Theobalt, Rhemann, Fanello

ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia) 2020 

The first technique to perform high-quality view synthesis and relighting of dynamic full-body human performances captured in a Lightstage

Project Page Paper Video

Self-supervised Outdoor Scene Relighting

Yu, Meka, Elgharib, Seidel, Theobalt, Smith

European Conference on Computer Vision (ECCV) 2020

A neural rendering technique to relight outdoor scenes under desired lighting from a single image

Project Page Paper Dataset Code&Models Video

Deep Reflectance Fields: High-Quality Facial Reflectance Field Inference From Color Gradient Illumination

Meka, Haene, Pandey, Zollhoefer, Fanello, Fyffe, Kowdle, Yu, Busch, Dourgarian, Denny, Bouaziz, Lincoln, Whalen, Harvey, Taylor,  Izadi, Debevec, Theobalt, Valentin, Rhemann

ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2019

A neural rendering technique to capture fully relightable, high-resolution dynamic facial performances in a Lightstage (the underlying relighting identity is sketched below)

Project Page Paper Presentation Video
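
For context, the standard image-based relighting identity that such a reflectance field enables: once the one-light-at-a-time (OLAT) basis R_i(p) is known at every pixel p, relighting under any environment is a weighted sum of that basis. The contribution of the paper is inferring this basis for dynamic performances from color-gradient-illuminated input frames; the equation below is textbook Lightstage relighting rather than the network itself.

\[
L_{\text{relit}}(p) = \sum_{i=1}^{N} R_i(p)\, E_i,
\]

where R_i(p) is the OLAT image for Lightstage light i and E_i is the target environment's intensity (per color channel) sampled in that light's direction.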

LIME: Live Intrinsic Material Estimation

Meka, Maximov, Zollhöfer, Chatterjee, Seidel, Richardt, Theobalt

IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2018 - Spotlight

A machine learning technique to estimate the high-frequency material appearance of an object of arbitrary shape from a single image, and scene lighting from depth+video

Project Page Paper Supplementary Presentation Poster Code&Models Dataset(34 GB) Video

Live User-Guided Intrinsic Video For Static Scenes

Meka*, Fox*, Zollhöfer, Richardt, Theobalt

IEEE Transactions on Visualization and Computer Graphics (TVCG) 2017

Presented at International Symposium on Mixed and Augmented Reality (ISMAR) 2017

An interactive technique, guided by 3D user strokes, to perform geometry reconstruction and intrinsic decomposition of static scenes

Project Page Paper Presentation Poster Dataset Video

Live Intrinsic Video

Meka, Zollhöfer, Richardt, Theobalt

ACM Transactions on Graphics (Proceedings of SIGGRAPH) 2016

The first technique to perform intrinsic decomposition of live video streams using fast non-linear GPU optimization (a representative per-frame energy is sketched below)

Project Page Paper Presentation Dataset Video
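
Intrinsic decomposition factors each frame I into a reflectance layer R and a shading layer S with I ≈ R · S. The energy below is a representative Retinex-style formulation of the kind such real-time non-linear GPU solvers minimize per frame; it is illustrative, not the paper's exact objective.

\[
E(R, S) = \sum_{p} \bigl\| I(p) - R(p)\, S(p) \bigr\|^{2}
\;+\; \lambda_R \sum_{(p,q) \in \mathcal{N}} w_{pq}\, \bigl\| R(p) - R(q) \bigr\|^{2}
\;+\; \lambda_S \sum_{(p,q) \in \mathcal{N}} \bigl( S(p) - S(q) \bigr)^{2},
\]

where \mathcal{N} is the set of neighboring pixel pairs and the chromaticity-based weights w_{pq} allow reflectance edges only where the input chromaticity changes, pushing smooth intensity variation into the shading layer.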