Interpreting the Weight Space of Customized Diffusion Models
Amil Dravid, University of California, Berkeley
We investigate the space of weights spanned by a large collection of customized diffusion models. We populate this space by creating a dataset of over 60,000 model weights, each of which is fine-tuned to insert a different person's identity. Next, we model the underlying manifold of these weights as a subspace, which we term weights2weights. We demonstrate three immediate applications of this space: sampling, editing, and inversion. First, as each point in the space corresponds to an identity, sampling a set of weights from it results in a model encoding a novel identity. Next, we find linear directions in this space corresponding to semantic edits of the identity (e.g., adding a beard). These edits persist in appearance across generated samples. Finally, we show that inverting a single image into this space reconstructs a realistic identity, even if the input image is out of distribution (e.g., a painting). Our results indicate that the weight space of fine-tuned diffusion models behaves as an interpretable latent space of identities.
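The following is a minimal sketch of the idea described above, assuming each fine-tuned model is flattened into a weight vector and the subspace is obtained by a linear decomposition such as PCA; the array shapes, placeholder data, and variable names (W, basis, coords) are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: N fine-tuned models, each flattened to a D-dim weight vector.
N, D, K = 1000, 4096, 100          # models, flattened weight dim, subspace dim
W = rng.normal(size=(N, D))        # rows = per-identity weight vectors (placeholder data)

# Model the weight manifold as a low-dimensional linear subspace (here via SVD/PCA).
mean_w = W.mean(axis=0)
_, _, Vt = np.linalg.svd(W - mean_w, full_matrices=False)
basis = Vt[:K]                     # principal directions spanning the subspace
coords = (W - mean_w) @ basis.T    # each model as a K-dim coordinate vector

# 1) Sampling: draw subspace coordinates to synthesize weights for a novel identity.
novel_weights = mean_w + rng.normal(scale=coords.std(axis=0)) @ basis

# 2) Editing: move along a linear direction separating models with/without an attribute
#    (placeholder labels, e.g., "has beard") to edit one model's identity.
labels = rng.integers(0, 2, size=N)
direction = coords[labels == 1].mean(0) - coords[labels == 0].mean(0)
edited_weights = mean_w + (coords[0] + 1.5 * direction) @ basis

# 3) Inversion: constrain new weights to lie in the subspace. In the paper this is an
#    optimization against a single input image; here we only show the projection step.
w_new = rng.normal(size=D)
inverted_weights = mean_w + ((w_new - mean_w) @ basis.T) @ basis
```

The key design point the abstract relies on is that all three operations reduce to linear algebra in the same coordinate system: a point is an identity, a random point is a novel identity, a direction is a semantic edit, and projection onto the subspace regularizes inversion.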