CharacterFactory: Sampling Consistent Characters with GANs for Diffusion Models

¹Dalian University of Technology, ²Tianjin University, ³ZMO AI Inc.
* Corresponding Author

CharacterFactory can create unlimited new characters end-to-end for identity-consistent generation with diverse text prompts. Notably, it can be seamlessly combined with ControlNet (image), ModelScopeT2V (video), and LucidDreamer (3D). For brevity, we omit the placeholders \(s^*_1~s^*_2\) from prompts such as "a photo of \(s^*_1~s^*_2\)".

Abstract

Recent advances in text-to-image models have opened new frontiers in human-centric generation. However, these models cannot be directly employed to generate images with consistent, newly coined identities. In this work, we propose CharacterFactory, a framework that allows sampling new characters with consistent identities in the latent space of GANs for diffusion models. More specifically, we treat the word embeddings of celeb names as ground truths for the identity-consistent generation task and train a GAN model to learn the mapping from a latent space to the celeb embedding space. In addition, we design a context-consistent loss to ensure that the generated identity embeddings produce identity-consistent images in various contexts. Remarkably, the whole model takes only 10 minutes to train and can sample unlimited characters end-to-end during inference. Extensive experiments demonstrate the excellent performance of the proposed CharacterFactory on character creation in terms of identity consistency and editability. Furthermore, the generated characters can be seamlessly combined with off-the-shelf image/video/3D diffusion models. We believe the proposed CharacterFactory is an important step toward identity-consistent character generation.

Method

Overview of the proposed CharacterFactory. (a) We take the word embeddings of celeb names as ground truths for identity-consistent generation and train a GAN model built from MLPs to learn the mapping from \(z\) to the celeb embedding space. In addition, a context-consistent loss is designed to ensure that the generated pseudo identity exhibits consistency in various contexts. \(s^*_1~s^*_2\) are placeholders for \(v^*_1~v^*_2\). (b) Although no diffusion model is involved in training, IDE-GAN generates, end-to-end, embeddings that can be seamlessly inserted into diffusion models to achieve identity-consistent generation. A code sketch of this design follows below.
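The design in (a) is compact enough to sketch in code. Below is a minimal PyTorch sketch of the two MLP networks and a context-consistent loss. The latent size, hidden widths, and the exact form of the loss are our assumptions for illustration, not the paper's reported hyperparameters.

import torch
import torch.nn as nn

EMB_DIM = 768      # CLIP word-embedding size used by Stable Diffusion v1.x
NUM_TOKENS = 2     # two pseudo-word embeddings v*_1, v*_2
LATENT_DIM = 64    # assumed size of the sampled latent z

class Generator(nn.Module):
    """MLP mapping a latent z to two pseudo identity word embeddings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, NUM_TOKENS * EMB_DIM),
        )
    def forward(self, z):
        return self.net(z).view(-1, NUM_TOKENS, EMB_DIM)

class Discriminator(nn.Module):
    """MLP scoring whether an embedding pair looks like a celeb-name pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_TOKENS * EMB_DIM, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, 1),
        )
    def forward(self, e):
        return self.net(e.flatten(1))

def context_consistent_loss(text_encoder_states, placeholder_positions):
    """Penalize variation of the placeholder-token features across contexts.

    text_encoder_states: (num_contexts, seq_len, dim) CLIP text-encoder
    outputs for the same pseudo identity inserted into different prompts.
    """
    feats = text_encoder_states[:, placeholder_positions, :]  # (C, 2, dim)
    mean = feats.mean(dim=0, keepdim=True)
    return ((feats - mean) ** 2).mean()

# Training (not shown): standard adversarial losses on fake = Generator(z)
# vs. real = celeb-name word embeddings, plus the context-consistent loss.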

Experiments

Creating New Characters

More identity-consistent character generation results by the proposed CharacterFactory.


Story Illustration

The proposed CharacterFactory can illustrate a story with the same character.

We provide more identity-consistent illustrations for a longer continuous story to further demonstrate the practicality of this application.


Interpolation Property of IDE-GAN

We conduct linear interpolation between randomly sampled latents \(z_1\) and \(z_2\), and generate pseudo identity embeddings with IDE-GAN. To visualize the smooth variations in image space, we insert the generated embeddings into Stable Diffusion via the pipeline of Figure 2(b). The experiments in rows 1 and 3 use the same seeds, while rows 2 and 4 use random seeds. A sketch of the procedure follows below.
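The interpolation itself is just a convex combination in the GAN's input space. A sketch, reusing the hypothetical generator and LATENT_DIM from the Method sketch above (not the official code):

import torch

z1 = torch.randn(1, LATENT_DIM)
z2 = torch.randn(1, LATENT_DIM)
for alpha in torch.linspace(0.0, 1.0, steps=6):
    z = (1 - alpha) * z1 + alpha * z2
    pseudo_identity = generator(z)  # (1, 2, 768) word embeddings
    # Place the two embeddings at the s*_1 s*_2 token positions of a prompt,
    # e.g. "a photo of s*_1 s*_2", then run Stable Diffusion as in Figure 2(b).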


CharacterFactory & Image/Video/3D Models

More identity-consistent Image/Video/3D generation results by the proposed CharacterFactory with ControlNet, ModelScopeT2V, and LucidDreamer.

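One common way to plug such generated embeddings into an off-the-shelf diffusers pipeline is the textual-inversion mechanism: register placeholder tokens and overwrite their rows in the text encoder's embedding table. A sketch under that assumption, using ControlNet as the example; the model IDs, token names, and pose image are illustrative, and the paper does not necessarily use this exact code path:

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Assumed inputs: `pseudo_identity` is a (2, 768) tensor from IDE-GAN,
# and pose.png is a pose map prepared by the user.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
)
pose_image = load_image("pose.png")

# Register placeholder tokens and write the generated embeddings into the
# text encoder's input-embedding table (textual-inversion style).
placeholders = ["<s1*>", "<s2*>"]
pipe.tokenizer.add_tokens(placeholders)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
ids = pipe.tokenizer.convert_tokens_to_ids(placeholders)
with torch.no_grad():
    for i, tok_id in enumerate(ids):
        pipe.text_encoder.get_input_embeddings().weight[tok_id] = pseudo_identity[i]

image = pipe("a photo of <s1*> <s2*>", image=pose_image).images[0]

The same token-injection step works unchanged for text-to-video or 3D pipelines that share the Stable Diffusion text encoder, which is what makes the combination seamless.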

BibTeX

@article{wang2024characterfactory,
  title={CharacterFactory: Sampling Consistent Characters with GANs for Diffusion Models},
  author={Wang, Qinghe and Li, Baolu and Li, Xiaomin and Cao, Bing and Ma, Liqian and Lu, Huchuan and Jia, Xu},
  journal={arXiv preprint arXiv:2404.15677},
  year={2024}
}