
FaceScape

FaceScape provides large-scale, high-quality 3D face datasets, parametric models, documentation, and toolkits for 3D-face-related research. [CVPR2020 paper]   [extended arXiv report]    [supplementary]

This repository is updated regularly with our latest progress - [latest update: 2026/01/27]

Data

New: The data can be accessed at the new website https://nju-3dv.github.io/projects/FaceScape/. The old website (facescape.nju.edu.cn) will be decommissioned soon.

The available sources include:

| Item (Docs) | Description | Quantity | Quality |
| --- | --- | --- | --- |
| TU models | Topologically uniform 3D face models with displacement maps and texture maps. | 16,940 models (847 id × 20 exp) | Detailed geometry, 4K dp/tex maps |
| Multi-view data | Multi-view images, camera parameters, and corresponding 3D face meshes. | >400k images (359 id × 20 exp × ≈60 views) | 4M~12M pixels |
| Bilinear model | A statistical model that maps the base shape into a vector space. | 4 for different settings | Base shape only |
| Info list | Gender / age of the subjects. | 847 subjects | -- |

The datasets are released for non-commercial research use only. Because facial data involves the privacy of the participants, we use strict license terms to ensure that the dataset is not abused.

Benchmark for SVFR

We present a benchmark to evaluate the accuracy of single-view 3D face reconstruction (SVFR) methods; see here for details.
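Such reconstruction benchmarks typically compare a predicted surface against the ground-truth scan with a distance metric; a common choice is a chamfer-style distance between point sets. The sketch below is a minimal, hypothetical illustration of that metric in NumPy, not the benchmark's actual evaluation protocol:

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric mean nearest-neighbor distance between point sets a (N,3) and b (M,3)."""
    # Pairwise Euclidean distances, shape (N, M).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # Average each set's nearest-neighbor distance to the other, then sum.
    return float(d.min(axis=1).mean() + d.min(axis=0).mean())

pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(pts, pts))  # 0.0 for identical point sets
```

Real benchmarks usually add alignment (e.g. rigid ICP) and region masking before computing the metric; this sketch omits those steps.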

ToolKit

Get started with the Python toolkit here; the demos include:

  • bilinear_model-basic - use the FaceScape bilinear model to generate 3D mesh models.
  • bilinear_model-fit - fit the bilinear model to 2D/3D landmarks.
  • multi-view-project - project 3D models onto multi-view images.
  • landmark - extract landmarks using predefined vertex indices.
  • facial_mask - extract the facial region from the full-head TU models.
  • render - render TU models to color images and depth maps.
  • alignment - align all the multi-view models.
  • symmetry - get the left-to-right correspondence of vertices on TU models.
  • rig - rig 20 expressions to 52 expressions.
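As a rough illustration of how a bilinear face model of this kind is typically applied (a hypothetical sketch with made-up dimensions, not the toolkit's actual API or tensor sizes), a core tensor is contracted with an identity-coefficient vector and an expression-coefficient vector to produce the mesh vertices:

```python
import numpy as np

# Hypothetical dimensions -- the real FaceScape core tensor differs.
n_verts, n_id, n_exp = 100, 50, 20
rng = np.random.default_rng(0)
core = rng.standard_normal((3 * n_verts, n_id, n_exp))  # stand-in core tensor

id_vec = rng.standard_normal(n_id)            # identity coefficients
exp_vec = np.zeros(n_exp)
exp_vec[0] = 1.0                              # one-hot expression weights

# Contract over the id and exp modes, then reshape to per-vertex xyz.
verts = np.einsum('vie,i,e->v', core, id_vec, exp_vec).reshape(n_verts, 3)
print(verts.shape)  # (100, 3)
```

Fitting (as in the bilinear_model-fit demo) runs this generation in reverse: it optimizes id_vec and exp_vec so the generated mesh matches observed 2D/3D landmarks.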

More of Our Projects Related to FaceScape

Towards Native Generative Model for 3D Head Avatar (Fundamental Research 2026)
Yiyu Zhuang*, Hao Zhu*, Jiawei Zhang*, Yuxiao He*, Yanwen Wang, Jiahe Zhu, Yao Yao, Siyu Zhu, Xun Cao#

FATE: Full-head Gaussian Avatar with Textural Editing from Monocular Video (CVPR 2025)
Jiawei Zhang, Zijian Wu, Zhiyang Liang, Yicheng Gong, Dongfang Hu, Yao Yao, Xun Cao, Hao Zhu#

DicFace: Dirichlet-Constrained Variational Codebook Learning for Temporally Coherent Video Face Restoration (CVPR 2025)
Yan Chen*, Hanlin Shang*, Ce Liu, Yuxuan Chen, Hui Li, Weihao Yuan, Hao Zhu, Zilong Dong, Siyu Zhu#

VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior (3DV 2025)
Xusen Sun, Longhao Zhang, Hao Zhu#, Peng Zhang#, Bang Zhang, Xinya Ji, Kangneng Zhou, Daiheng Gao, Liefeng Bo, Xun Cao

Hallo2: Long-Duration and High-Resolution Audio-Driven Portrait Image Animation (ICLR 2025)
Jiahao Cui*, Hui Li*, Yao Yao, Hao Zhu, Hanlin Shang, Kaihui Cheng, Hang Zhou, Siyu Zhu#, Jingdong Wang

EmoTalk3D: High-Fidelity Free-View Synthesis of Emotional 3D Talking Head (ECCV 2024)
Qianyun He, Xinya Ji, Yicheng Gong, Yuanxun Lu, Zhengyu Diao, Linjia Huang, Yao Yao, Siyu Zhu, Zhan Ma, Songcen Xu, Xiaofei Wu, Zixiao Zhang, Xun Cao, Hao Zhu#

Head360: Learning a Parametric 3D Full-Head for Free-View Synthesis in 360° (ECCV 2024)
Yuxiao He, Yiyu Zhuang, Yanwen Wang, Yao Yao, Siyu Zhu, Xiaoyu Li, Qi Zhang, Xun Cao, Hao Zhu#

High-fidelity 3D Face Generation from Natural Language Descriptions (CVPR 2023)
Menghua Wu, Hao Zhu#, Linjia Huang, Yiyu Zhuang, Yuanxun Lu, Xun Cao

RAFaRe: Learning Robust and Accurate Non-parametric 3D Face Reconstruction from Pseudo 2D&3D Pairs (AAAI 2023)
Longwei Guo, Hao Zhu#, Yuanxun Lu, Menghua Wu, Xun Cao

Detailed Facial Geometry Recovery from Multi-view Images by Learning an Implicit Function (AAAI 2022)
Yunze Xiao*, Hao Zhu*, Haotian Yang, Zhengyu Diao, Xiangju Lu, Xun Cao

ChangeLog

  • 2026/01/27
    The download website for the FaceScape dataset has been relocated to https://nju-3dv.github.io/projects/FaceScape/. All data can now be accessed on the new site.
  • 2023/10/20
    Benchmark data and results have been updated to be consistent with the experiments in the latest journal version paper.
  • 2022/9/9
    A section introducing open-source projects that use FaceScape data or models has been added and will be updated continuously.
  • 2022/7/26
    The data for training and testing MoFaNeRF is added to the download page.
  • 2021/12/2
    A benchmark to evaluate single-view face reconstruction is available; see here for details.
  • 2021/8/16
    A share link on Google Drive is available after requesting the license key; see here for details.
  • 2021/5/13
    The fitting demo has been added to the toolkit. Please note that if you downloaded bilinear model v1.6 before 2021/5/13, you need to download it again, because some parameters required by the fitting demo have been added.
  • 2021/4/14
    The bilinear model has been updated to v1.6; check it here.
    The new bilinear model can now be downloaded from NJU Drive or Google Drive without requesting a license key. Check it here.
    ToolKit and Doc have been updated with new content.
    Some incorrect ages and genders in the info list have been corrected in "info_list_v2.txt".
  • 2020/9/27
    The code of detailed riggable 3D face prediction is released, check it here.
  • 2020/7/25
    Multi-view data is available for download.
    The bilinear model is updated to ver 1.3, with vertex-color added.
    Info list including gender and age is available on the download page.
    Tools and samples are added to this repository.
  • 2020/7/7
    The bilinear model is updated to ver 1.2.
  • 2020/6/13
    The website of FaceScape is online.
    3D models and bilinear models are available for download.
  • 2020/3/31
    The pre-print paper is available on arXiv.

Bibtex

If you find this project helpful to your research, please consider citing:

@article{zhu2023facescape,
  title={FaceScape: 3D Facial Dataset and Benchmark for Single-View 3D Face Reconstruction},
  author={Zhu, Hao and Yang, Haotian and Guo, Longwei and Zhang, Yidi and Wang, Yanru and Huang, Mingkai and Wu, Menghua and Shen, Qiu and Yang, Ruigang and Cao, Xun},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year={2023},
  publisher={IEEE}}
@inproceedings{yang2020facescape,
  author = {Yang, Haotian and Zhu, Hao and Wang, Yanru and Huang, Mingkai and Shen, Qiu and Yang, Ruigang and Cao, Xun},
  title = {FaceScape: A Large-Scale High Quality 3D Face Dataset and Detailed Riggable 3D Face Prediction},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month = {June},
  year = {2020},
  pages = {601--610}}

Acknowledgements

This project is supported by the CITE Lab of Nanjing University, Baidu Research, and Aiqiyi Inc. Student contributors: Shengyu Ji, Wei Jin, Mingkai Huang, Yanru Wang, Haotian Yang, Yidi Zhang, Yunze Xiao, Yuxin Ding, Longwei Guo, Menghua Wu, Yiyu Zhuang.

About

FaceScape (PAMI2023 & CVPR2020)
