FlashSight

A software–hardware system that uses AI to generate and project new textures onto real objects in real time, turning physical space into a living canvas.

[Image: FlashSight key visual]

Overview

FlashSight is a hybrid system that generates and projects dynamic textures onto real-world objects, transforming how surfaces are perceived. Rather than revealing what already exists, the system imagines and applies new visual layers onto physical matter—turning ordinary objects into continuously shifting visual artifacts.

Role

Concept & Technical Lead

Team

Heff Jin

Institution / Year

Harvard University, Graduate School of Design, 2026

Tools

Python | Stable Diffusion | OpenCV

Background

Our perception of objects is largely defined by their surface—color, texture, and material qualities that appear fixed and inherent. However, with advances in computer vision and generative AI, these visual properties can be detached from the object itself and redefined in real time. This project emerges from the question of whether material perception can be manipulated without altering the object physically, blurring the boundary between what is real and what is computationally generated.

Concept

The core idea is to treat physical objects as canvases for generative imagination. Using a camera-projector system, FlashSight captures the geometry of an object and overlays it with AI-generated textures that adapt to its form. This creates a condition where the object remains physically unchanged, but its perceived identity is constantly re-authored. The system shifts perception from recognition to reinterpretation—where seeing becomes an act of continuous transformation.
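
A minimal sketch of that loop, assuming OpenCV for capture and fullscreen projector output; the segmentation and generation stages are reduced to toy placeholders here (the real versions are described under Process):

```python
import cv2

def extract_object_mask(frame):
    """Toy placeholder: treat bright regions as the target object."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
    return mask

def generate_texture(frame, mask):
    """Toy placeholder for the generative stage: recolor the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.applyColorMap(gray, cv2.COLORMAP_JET)

def run_flashsight(camera_index=0, window="FlashSight"):
    """Capture -> generate -> project, one frame at a time."""
    cap = cv2.VideoCapture(camera_index)
    # Fullscreen window, intended to live on the projector's display.
    cv2.namedWindow(window, cv2.WINDOW_NORMAL)
    cv2.setWindowProperty(window, cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = extract_object_mask(frame)
        texture = generate_texture(frame, mask)
        # Light up only the object; the rest of the frame stays black.
        out = cv2.bitwise_and(texture, texture, mask=mask)
        cv2.imshow(window, out)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc exits
            break
    cap.release()
    cv2.destroyAllWindows()
```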

Process

The system integrates computer vision, real-time rendering, and projection mapping. A camera captures the target object and extracts its geometry or surface features, which are then used as input for generating context-aware textures. These textures are produced using AI models and aligned back onto the object through calibrated projection. Iterations focused on spatial alignment, texture coherence, and responsiveness, ensuring that generated visuals accurately adhere to the object’s form while maintaining fluid, real-time transformation.
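
The write-up names Stable Diffusion and OpenCV but not the exact models or calls, so the following is a sketch under assumptions: a Canny edge map of the camera frame serves as the "surface features" that condition a ControlNet-guided Stable Diffusion pipeline. The `lllyasviel/sd-controlnet-canny` and `runwayml/stable-diffusion-v1-5` checkpoints and the `diffusers` library are this sketch's choices, not necessarily the project's.

```python
import cv2
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Assumed model choices; the project may use different checkpoints.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

def generate_texture(frame_bgr, prompt, steps=20):
    """Generate a texture that follows the object's contours in the camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 100, 200)
    control = Image.fromarray(edges).convert("RGB")
    return pipe(prompt, image=control, num_inference_steps=steps).images[0]
```

Running full diffusion on every frame is far too slow for live projection, so responsiveness in practice means trading quality for speed: fewer inference steps, reusing a texture across frames until the object moves, or a distilled few-step model.

Calibrated projection then maps camera coordinates into projector coordinates. For a roughly planar surface, the minimal version is a homography estimated from a handful of known correspondences; the point values below are placeholders:

```python
import cv2
import numpy as np

# Corresponding points: where calibration markers appear in the camera image
# vs. where the projector must draw them (placeholder values).
camera_pts = np.float32([[120, 80], [520, 90], [510, 400], [130, 390]])
projector_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
H, _ = cv2.findHomography(camera_pts, projector_pts)

def align_to_object(texture_bgr, size=(1920, 1080)):
    """Warp a camera-space texture into projector space so it lands on the object."""
    return cv2.warpPerspective(texture_bgr, H, size)
```

Curved or complex geometry would need a denser camera-projector mapping, for example from structured-light calibration, in place of the single homography.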