Compress3D: a Compressed Latent Space for 3D Generation from a Single Image


Abstract

3D generation has witnessed significant advancements, yet efficiently producing high-quality 3D assets from a single image remains challenging. In this paper, we present a triplane autoencoder, which encodes 3D models into a compact triplane latent space to effectively compress both the 3D geometry and texture information. Within the autoencoder framework, we introduce a 3D-aware cross-attention mechanism, which utilizes low-resolution latent representations to query features from a high-resolution 3D feature volume, thereby enhancing the representation capacity of the latent space. Subsequently, we train a diffusion model on this refined latent space. In contrast to solely relying on image embedding for 3D generation, our proposed method advocates for the simultaneous utilization of both image embedding and shape embedding as conditions. Specifically, the shape embedding is estimated via a diffusion prior model conditioned on the image embedding. Through comprehensive experiments, we demonstrate that our method outperforms state-of-the-art algorithms, achieving superior performance while requiring less training data and time. Our approach enables the generation of high-quality 3D assets in merely 7 seconds on a single A100 GPU.
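To make the 3D-aware cross-attention concrete, below is a minimal PyTorch sketch of one possible reading of it: each low-resolution triplane cell issues a query that attends over the column of high-resolution 3D feature-volume features perpendicular to its plane. The module name, tensor shapes, and column-gathering convention are illustrative assumptions, not the paper's exact implementation.

import torch
from torch import nn


class TriplaneCrossAttention(nn.Module):
    """Low-resolution triplane tokens query a high-resolution 3D feature volume.

    Assumed convention: for a cell at (u, v) on, say, the XY plane, the keys and
    values are the volume features gathered along the perpendicular Z axis, so
    each triplane location can aggregate information from an entire 3D column.
    """

    def __init__(self, latent_dim: int, volume_dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(
            embed_dim=latent_dim, num_heads=num_heads,
            kdim=volume_dim, vdim=volume_dim, batch_first=True)

    def forward(self, plane_tokens: torch.Tensor, volume_columns: torch.Tensor):
        # plane_tokens:   (B*H*W, 1, latent_dim)  -- one query per triplane cell
        # volume_columns: (B*H*W, D, volume_dim)  -- volume features along the axis
        #                 perpendicular to that plane, gathered at the cell's (u, v)
        refined, _ = self.attn(plane_tokens, volume_columns, volume_columns)
        return refined


# Illustrative sizes only: 32x32 triplane cells, 128-deep feature columns.
attn = TriplaneCrossAttention(latent_dim=64, volume_dim=256)
queries = torch.randn(32 * 32, 1, 64)
columns = torch.randn(32 * 32, 128, 256)
out = attn(queries, columns)  # (1024, 1, 64)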


Method Overview

Compress3D consists of three components. (a) Triplane AutoEncoder: the triplane encoder encodes a colored point cloud into a low-resolution triplane latent space; a triplane decoder then reconstructs the 3D model from this triplane latent. (b) Triplane Diffusion Model: generates the triplane latent conditioned on both the shape embedding and the image embedding. (c) Diffusion Prior Model: generates the shape embedding conditioned on the image embedding.
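At inference time, the three components above can be chained as in the following sketch. The interfaces (the sample methods and the conditioning dictionary) are hypothetical placeholders standing in for the trained diffusion prior, triplane diffusion model, and triplane decoder; they are not the released API.

import torch


@torch.no_grad()
def generate_3d(image_embedding, prior_model, triplane_diffusion, triplane_decoder):
    # (c) Diffusion Prior Model: sample a shape embedding conditioned on the image embedding.
    shape_embedding = prior_model.sample(cond=image_embedding)

    # (b) Triplane Diffusion Model: sample a low-resolution triplane latent
    #     conditioned on both the image embedding and the shape embedding.
    triplane_latent = triplane_diffusion.sample(
        cond={"image": image_embedding, "shape": shape_embedding})

    # (a) Triplane Decoder: decode the latent into a textured 3D model.
    return triplane_decoder(triplane_latent)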


Comparison with Other Methods

Compared with other methods, Compress3D generates 3D models with more faithful texture and finer geometric details.


More Results


Ablation Study on Prior Network

To validate the importance of the diffusion prior model, we train a triplane diffusion model conditioned only on the image embedding and compare it with our full method.
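The two conditioning variants in this ablation differ only in whether the shape embedding is supplied, roughly as in the hypothetical sketch below (same illustrative interfaces as the pipeline sketch above).

def sample_triplane_latent(triplane_diffusion, image_embedding, shape_embedding=None):
    # Baseline: condition on the image embedding only.
    cond = {"image": image_embedding}
    # Full method: additionally condition on the shape embedding from the diffusion prior.
    if shape_embedding is not None:
        cond["shape"] = shape_embedding
    return triplane_diffusion.sample(cond=cond)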


In-the-Wild Images

In addition to evaluating our method on the test dataset, we also test it on images outside the dataset. Specifically, we collect images from the internet as input. The generation results below demonstrate that our method generalizes well to in-the-wild images.