LLMto3D

Generating Parametric Objects from Text Prompts

Bat-El Hizmi, Abraham Shkolnik, Guy Austern, Yoav Sterman
Preprint Article

Recent advancements in Machine Learning (ML) have significantly enhanced the capability to generate 3D objects from textual descriptions, offering considerable potential for design and manufacturing workflows. However, these models typically fail to meet practical requirements such as printability or manufacturability, and often cannot accurately control the dimensions and interrelations of elements within the generated 3D models.

This presents a major challenge for applying ML-generated designs in real-world settings. To address this gap, we introduce a novel method for translating natural language descriptions into parametric 3D objects using Large Language Models (LLMs). Our approach employs multiple agents, each an LLM fine-tuned for a specific task.

The first agent deconstructs the textual prompt into design elements and describes their geometry and spatial relations. The second agent translates these descriptions into code using the Rhino.Geometry library in the Rhino3D-Grasshopper modeling environment. A final agent assembles the generated code into a complete model and adds parametric control interfaces, enabling customizable outputs.
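
To make the pipeline concrete, the following minimal sketch chains three chat-completion calls through the OpenAI API, one per agent. The system prompts and the model name are illustrative placeholders, not the fine-tuned models used in this work.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def run_agent(system_prompt, user_input, model="gpt-4o"):
        # One agent = one LLM call with a task-specific system prompt.
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_input},
            ],
        )
        return response.choices[0].message.content

    prompt = "A kettle"
    # Object-Deconstructor: break the prompt into elements and relations.
    elements = run_agent("Deconstruct the object into design elements and "
                         "describe their geometry and spatial relations.", prompt)
    # Code-Writer: translate the element descriptions into geometry code.
    code = run_agent("Translate each element description into Python code "
                     "using the Rhino.Geometry library.", elements)
    # Program-Assembler: combine the fragments and expose parameters.
    program = run_agent("Assemble the code fragments into one program and "
                        "add parametric controls.", code)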

In this paper, we describe the method's architecture and the training methodologies used to fine-tune the models. The results demonstrate that the proposed method successfully generates code for variations of familiar objects, while challenges remain in creating more complex designs that diverge significantly from the training data. In the discussion, we outline future directions for improvement, including expanding the training dataset and exploring more advanced LLMs.

This work is a step towards making 3D modeling accessible to a broader audience, using everyday language to simplify the design process.

LLM Agents’ Flowchart
Flowchart of the LLM agents, illustrated with the example of building a kettle
Project Architecture
Project architecture: Grasshopper, Hops, OpenAI API
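
As a rough sketch of this wiring, a Hops component can expose an HTTP endpoint that Grasshopper calls with a prompt, which in turn queries the OpenAI API. The endpoint path, input/output names, and model are assumptions for illustration, not the project's actual configuration.

    from flask import Flask
    import ghhops_server as hs
    from openai import OpenAI

    app = Flask(__name__)
    hops = hs.Hops(app)
    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    @hops.component(
        "/llmto3d",  # hypothetical endpoint name
        name="LLMto3D",
        description="Send a prompt to an LLM agent and return generated code",
        inputs=[hs.HopsString("Prompt", "P", "Natural-language object description")],
        outputs=[hs.HopsString("Code", "C", "Generated Rhino.Geometry code")],
    )
    def llmto3d(prompt):
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder for the fine-tuned models
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        app.run()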
Object-Deconstructor Example
Input-output example for the Object-Deconstructor agent
Code-Writer Example
Input-output example for the Code-Writer agent
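
For a sense of the kind of output involved, here is a minimal hand-written sketch of Rhino.Geometry code for a prompt like "a vase with four radii to control along its height"; the function and parameter names are illustrative and not taken from the paper's dataset.

    import Rhino.Geometry as rg

    def vase(radii, height):
        # Place one circle per radius at evenly spaced heights, then loft.
        sections = []
        for i, r in enumerate(radii):
            z = height * i / (len(radii) - 1)
            plane = rg.Plane(rg.Point3d(0, 0, z), rg.Vector3d.ZAxis)
            sections.append(rg.Circle(plane, r).ToNurbsCurve())
        breps = rg.Brep.CreateFromLoft(
            sections, rg.Point3d.Unset, rg.Point3d.Unset,
            rg.LoftType.Normal, False)
        return breps[0] if breps else None

    # Example: a vase 120 mm tall with four controllable radii.
    vase_brep = vase([40.0, 25.0, 35.0, 15.0], 120.0)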
Program-Assembler Example
Input-output example for the Program-Assembler agent
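
One way an assembler stage could add a control interface is by instantiating Grasshopper number sliders from script. The sketch below assumes it runs inside a GhPython component; the helper name and parameter values are hypothetical.

    import System
    import System.Drawing
    import Grasshopper.Kernel.Special as ghks

    def add_slider(gh_doc, name, minimum, maximum, value, x, y):
        # Create a number slider on the canvas for one model parameter.
        slider = ghks.GH_NumberSlider()
        slider.CreateAttributes()  # give the slider canvas attributes
        slider.NickName = name
        slider.Slider.Minimum = System.Decimal(minimum)
        slider.Slider.Maximum = System.Decimal(maximum)
        slider.Slider.Value = System.Decimal(value)
        slider.Attributes.Pivot = System.Drawing.PointF(x, y)
        gh_doc.AddObject(slider, False)
        return slider

    # Inside a GhPython component:
    # doc = ghenv.Component.OnPingDocument()
    # add_slider(doc, "height", 50, 300, 120, 10.0, 10.0)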
Dataset
3D models created by the code examples from the dataset
Output
The output includes generated sliders for controlling the resulting 3D model
Results
Results of running the method on objects similar to those the model was trained on
Results
Results for complex objects whose structure was entirely new to the model
Results
3D-printed models produced from the following prompts: (a) ‘A mug,’ (b) ‘A vase with a polygonal shape twisted along its height,’ and (c) ‘A vase with four radii to control along its height.’