Current Assets and References

Vercel Site

https://creepy-building-blender-automated.vercel.app/

Code Repository

https://github.com/Bschuster3434/creepy-building-blender-automated


Introduction

With today’s AI tooling, it’s still unclear how far automation (versus human-in-the-loop workflows) can be pushed when going from sparse online photos—such as Google Maps, Street View, or Google Images—to a believable, walkable 3D environment. This is especially true for developers without a deep background in 3D modeling.

This document records an attempt, ranging from failure to partial success, to build a deterministic, AI-assisted workflow for reconstructing a walkable 3D environment from sparse online imagery. The goal is not to present a polished solution, but to document what actually worked, what broke down, and where automation meaningfully fails today.

Most AI-driven 3D generation tools currently optimize for high-end rendering rather than usable geometry. Others require hundreds of highly specific reference photos or significant manual cleanup to produce assets that can function in a real-time 3D environment. Given recent advances in large language models and AI tooling, I wanted to test how close off-the-shelf tools could get to producing workable 3D meshes under realistic constraints.

What follows is the closest workflow I was able to construct using widely available tools. While the results show early promise, they also expose clear gaps in validation, structure, and execution that limit how far automation can go without human guidance.

Audience

This write-up is aimed at developers and technically minded readers without a deep background in 3D modeling who want to see how far current AI tooling can take 3D scene reconstruction, and where automation holds up versus breaks down.

Abstract / Orientation

Problem Statement

Using standard off-the-shelf large language models and generative AI tools—including image generation, Claude Code, and Blender driven through the Model Context Protocol (MCP)—how close can we get to an automated workflow that takes sparse Google Maps imagery and produces a fully walkable 3D scene?
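
To make the target output concrete, the sketch below shows the kind of Blender Python (bpy) operations such a workflow would ultimately need to produce: a ground plane, a blocked-out building volume, and a glTF export suitable for a real-time, walkable scene. This is a minimal illustration only; the object names, dimensions, and export path are assumptions made for the example, not values from the actual project.

import bpy

# Clear the default scene so only the generated geometry remains.
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Ground plane for the player to walk on (illustrative 50 m x 50 m).
bpy.ops.mesh.primitive_plane_add(size=50, location=(0, 0, 0))
bpy.context.active_object.name = "Ground"

# Blocked-out building massing; the footprint and height are placeholder
# values of the kind the pipeline would have to estimate from map imagery.
bpy.ops.mesh.primitive_cube_add(size=1, location=(0, 0, 6))
building = bpy.context.active_object
building.name = "Building_Massing"
building.scale = (12, 18, 12)  # width, depth, height in meters

# Export as glTF so the result can be loaded by a real-time engine or web viewer.
bpy.ops.export_scene.gltf(filepath="/tmp/building_massing.glb")

In the workflow under test, operations like these would be issued by Claude Code through the Blender MCP connection rather than written by hand; the sketch only illustrates the level of geometric output being targeted.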


Why This Matters