LayoutVLM: Differentiable Optimization of 3D Layout via Vision-Language Models

1Stanford University 2Google Research
*Equal Contribution

Example input language instructions:

Bookstore. Airy and inviting space; create a central reading area surrounded by the bookcases.

LayoutVLM can generate novel layouts from open-ended language instructions.

All the tables are aligned to form a line, dividing the room into two halves; place all the chairs on one side of the line and the buffets on the other side.
Stack the tables vertically in the middle of the room. Arrange the utensils in a smaller circle around the tables, then position the chairs in a larger circle surrounding the utensils.
Tables are symmetrically placed in the room; each table should have two chairs on opposite sides of the table facing each other, ready for dining.
One table is placed in the middle of the room with all the plates and bowls placed on top of it; other tables are placed towards the corners with chairs on top of them.

Abstract

Open-universe 3D layout generation arranges unlabeled 3D assets conditioned on language instruction. Large language models (LLMs) struggle to generate physically plausible 3D scenes that adhere to input instructions, particularly in cluttered scenes. We introduce LayoutVLM, a framework and scene layout representation that exploits the semantic knowledge of Vision-Language Models (VLMs) and supports differentiable optimization to ensure physical plausibility. LayoutVLM employs VLMs to generate two mutually reinforcing representations from visually marked images, and uses a self-consistent decoding process to improve VLMs' spatial planning. Our experiments show that LayoutVLM addresses the limitations of existing LLM and constraint-based approaches, producing physically plausible 3D layouts that are better aligned with the semantic intent of the input language instructions. We also demonstrate that fine-tuning VLMs with the proposed scene layout representation, extracted from existing scene datasets, can improve their performance.

Method

How does LayoutVLM arrange unlabeled 3D assets according to open-ended language instructions?

Our approach employs vision-language models (VLMs) to generate code for our proposed scene layout representation, which specifies both an initial layout and a set of spatial relations between assets (and walls). This representation is then used to produce the final object placements through differentiable optimization, as sketched below.
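
To make the second stage concrete, here is a minimal, hypothetical PyTorch sketch assuming each asset's pose is a differentiable (x, y, theta) tensor. The helper names (distance_loss, facing_loss) and constraint forms are illustrative stand-ins, not the paper's actual representation or API; they only show how VLM-proposed initial poses and spatial relations can be jointly refined by gradient descent.

import torch

# Initial layout proposed by the VLM: an (x, y, theta) pose per asset,
# made differentiable so the optimizer can adjust it.
poses = {
    "table_0": torch.tensor([2.0, 2.0, 0.0], requires_grad=True),
    "chair_0": torch.tensor([1.0, 2.0, 0.0], requires_grad=True),
}

def distance_loss(a, b, target):
    # Penalize deviation from a target distance between two assets.
    d = torch.norm(a[:2] - b[:2])
    return (d - target) ** 2

def facing_loss(a, b):
    # Encourage asset `a` to rotate so that it faces asset `b`.
    # 1 - cos(delta) is zero only when the angles match exactly.
    direction = b[:2] - a[:2]
    target_theta = torch.atan2(direction[1], direction[0])
    return 1.0 - torch.cos(a[2] - target_theta)

# Spatial relations the VLM emitted alongside the initial layout
# (hypothetical examples: chair near the table, chair facing the table).
constraints = [
    lambda p: distance_loss(p["chair_0"], p["table_0"], target=0.6),
    lambda p: facing_loss(p["chair_0"], p["table_0"]),
]

# Differentiable optimization: sum the constraint penalties and descend.
optimizer = torch.optim.Adam(poses.values(), lr=0.05)
for step in range(500):
    optimizer.zero_grad()
    loss = sum(c(poses) for c in constraints)
    loss.backward()
    optimizer.step()

print({name: pose.detach() for name, pose in poses.items()})

Because every relation is expressed as a differentiable penalty on the pose tensors, an imperfect initial layout from the VLM can still be pulled into a consistent, physically plausible arrangement rather than being rejected outright.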


BibTeX

@article{sun2024layoutvlm,
  title={LayoutVLM: Differentiable Optimization of 3D Layout via Vision-Language Models},
  author={Sun, Fan-Yun and Liu, Weiyu and Gu, Siyi and Lim, Dylan and Bhat, Goutam and Tombari, Federico and Li, Manling and Haber, Nick and Wu, Jiajun},
  journal={arXiv preprint arXiv:2412.02193},
  year={2024}
}