UNUSABLE.Ai

Particle Disappointment

Mar 04, 2026 · live


I moved my prototype development into OpenAI Codex (desktop app).

Small win: it runs inside a proper git project folder, so versions stop evaporating.

Actual win: the ChatGPT 5.3 Codex model (quite an improvement for my usage). It can access files, run terminal commands, compile the code, and generally do the mechanical parts of software development that I would prefer not to do by hand.

Meaning: I can get to functional prototypes with very little manual work on my side, and spend more time judging whether an idea is worth keeping.

Audio plugins as a concrete example

The first plugin I made “release ready” was a Thursday-to-Monday job. That included building a webview interface with reusable components. By historical standards, that is already fast.

Then I reused that codebase as a base for a new plugin and handed most of the iteration loop to Codex.

I wrote a prompt describing what I wanted and what to do with the codebase - and I ended the prompt with my usual trick:

“Before you start, give me 10 questions that will make this a better product.”

From there it turned into a rhythm.

After ~10 revisions, the DSP part was where I wanted it. Codex had also put together a usable interface, so I could tweak it instead of constructing it from nothing.

The manual part I kept (on purpose)

UI tuning is still my manual process.

I spun up a local web server and copied the web UI files into a separate folder so I could edit them unzipped. Whenever I wanted to try the adjustments in the actual plugin, I asked Codex to sync those files back into the project.
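The sandbox-and-sync step can be sketched in a few shell commands; the folder names and the placeholder UI file here are hypothetical stand-ins for my actual layout:

```shell
# Hypothetical layout: ui-sandbox/ holds the editable copies,
# plugin/Resources/webui/ is the copy inside the project.
mkdir -p ui-sandbox plugin/Resources/webui
printf '<html></html>\n' > ui-sandbox/index.html   # stand-in for the real UI files

# Serve the sandbox locally for quick iteration (Python's built-in server):
# python3 -m http.server 8000 --directory ui-sandbox

# The "sync back" step I asked Codex to run each time:
cp -R ui-sandbox/. plugin/Resources/webui/
```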

It did. And it often compiled a fresh build without me explicitly asking for compilation, which is the kind of initiative I like.

In less than a workday from the start, I had another retail-ready plugin.

What this unlocked (and why this post has the title it does)

With this setup, a bunch of ideas I’ve kept “for later” can finally be prototyped.

One of those areas is particle simulation.

This obsession started ~15 years ago, when I used After Effects to make audio-reactive animations and thought there was more depth to it than “particles go brr.” There is. It also does not automatically mean “good DSP.”

So I tried four particle-based plugin ideas in under an hour.

None of them were interesting for sound.

That’s the disappointment. Also the value: I can now delete bad ideas quickly, instead of letting them live rent-free as “maybe someday.”

Particle DSP ideas I tested (and rejected)

Particle “Splat” Blur (spectral diffusion via particles)

A spectral blur that is literally a particle simulation in the STFT plane. Each particle carries a little bit of spectral energy (magnitude) and does a diffusion walk with optional drift fields (wind/vortices). Every frame, you “splat” the particles back onto the time–frequency grid with a smooth kernel and resynthesize.

The expected sound is a time-varying smear: transients turn into trails, harmonics bloom into fog, and the blur has motion coming from the sim’s dynamics rather than an LFO.
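A minimal numpy sketch of the mechanism, under my own simplifying assumptions: random magnitudes stand in for a real STFT frame, the drift field is a constant "wind," and the splat kernel is bilinear instead of a smooth Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)

# Time-frequency grid: freq bins x time frames
n_freq, n_frames = 257, 64
grid = np.zeros((n_freq, n_frames))

# Each particle carries a bit of spectral energy at a fractional bin position
n_particles = 2000
freq = rng.uniform(0, n_freq, n_particles)
time = rng.uniform(0, n_frames, n_particles)
mag = rng.random(n_particles)

def step(freq, time, diffusion=1.5, drift=0.3):
    """One diffusion step with a constant drift field (the 'wind')."""
    freq = freq + rng.normal(0, diffusion, freq.shape)
    time = time + drift + rng.normal(0, diffusion * 0.25, time.shape)
    return np.clip(freq, 0, n_freq - 1), np.clip(time, 0, n_frames - 1)

def splat(grid, freq, time, mag):
    """Bilinear 'splat' of particle energy back onto the grid."""
    f0, t0 = np.floor(freq).astype(int), np.floor(time).astype(int)
    f1, t1 = np.minimum(f0 + 1, n_freq - 1), np.minimum(t0 + 1, n_frames - 1)
    wf, wt = freq - f0, time - t0
    np.add.at(grid, (f0, t0), mag * (1 - wf) * (1 - wt))
    np.add.at(grid, (f0, t1), mag * (1 - wf) * wt)
    np.add.at(grid, (f1, t0), mag * wf * (1 - wt))
    np.add.at(grid, (f1, t1), mag * wf * wt)
    return grid

for _ in range(10):              # a few sim frames
    freq, time = step(freq, time)
    grid = splat(grid, freq, time, mag)

print(grid.shape, round(grid.sum(), 1))
```

The bilinear weights sum to one per particle, so each splat pass conserves the total spectral energy the particles carry; the motion of the smear comes entirely from the sim's dynamics.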

In practice (for my use): it was more “interesting concept” than “useful tool.”

Boids in Harmonic Space (flocking partials by ratio, not Hz)

Treat detected spectral peaks as boids living in log-frequency space. Neighbor relationships aren’t “near in Hz,” they’re “near a harmonic ratio” (2:1, 3:2, 5:4, etc.). Then run flocking rules, but instead of pulling peaks toward each other directly, you pull them toward ratio-consistent target positions so families of partials self-organize into harmonic stacks.

Sonically the hope was swarm-based harmonizer behavior: messy spectra collapsing into choral-ish harmonic clouds, with multiple “species” forming several simultaneous harmonic families.
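A rough sketch of the ratio-based neighbor rule, with made-up peak data and a hand-picked ratio set - everything here is illustrative, not the plugin's actual code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical detected peaks in Hz, treated as boids in log2-frequency space
peaks_hz = rng.uniform(100, 2000, 12)
pos = np.log2(peaks_hz)

# "Harmonic" intervals in log2 units: 2:1, 3:2, 5:4, 4:3, up and down, plus unison
ratios = np.array([2/1, 3/2, 5/4, 4/3])
intervals = np.concatenate([np.log2(ratios), -np.log2(ratios), [0.0]])

def flock_step(pos, strength=0.1, radius=1.0):
    """Pull each peak toward ratio-consistent positions relative to neighbors."""
    new = pos.copy()
    for i in range(len(pos)):
        d = pos - pos[i]                      # intervals to all other peaks
        mask = (np.abs(d) < radius) & (np.arange(len(pos)) != i)
        if not mask.any():
            continue
        # Nearest harmonic interval for each neighbor, and the error to it
        nearest = intervals[np.argmin(np.abs(d[mask, None] - intervals), axis=1)]
        err = d[mask] - nearest
        new[i] += strength * err.mean()       # drift toward ratio consistency
    return new

for _ in range(50):
    pos = flock_step(pos)
print(np.round(2.0 ** pos, 1))               # back to Hz
```

The key difference from plain boids is the target: peaks are not attracted to each other's positions but to the nearest ratio-consistent position, so a cluster settles into a harmonic stack rather than a unison pile-up.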

Result: it made things move. But not really in an interesting way.

Spectral Lensing (local frequency warps driven by moving lenses)

A particle-ish physics idea applied as a time-varying warp field over log-frequency. Place moving “mass” points (lenses) that generate a displacement field, then warp the spectral envelope through that field. If you recombine the warped envelope with the original fine structure, you get formant motion without fully dragging pitch.

Expected sound: alive timbre shifts (vowel regions sliding, splitting, hollowing, swelling). Push it into unsafe territory (negative mass, turbulence, non-monotonic mapping) and you get foldover-style spectral tearing and wormhole-ish timbre jumps.
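The warp field can be sketched like this; the lens kernel, the toy envelope, and the "fine structure" stand-in are all illustrative assumptions, not the real spectral pipeline:

```python
import numpy as np

n_bins = 256
logf = np.linspace(0.0, 1.0, n_bins)        # normalized log-frequency axis

# Hypothetical spectral envelope: two formant-ish bumps
env = (np.exp(-((logf - 0.3) / 0.05) ** 2)
       + 0.6 * np.exp(-((logf - 0.7) / 0.08) ** 2))

def displacement(logf, lenses):
    """Sum of smooth radial displacement kernels from (position, mass, width) lenses."""
    d = np.zeros_like(logf)
    for pos, mass, width in lenses:
        r = logf - pos
        d += mass * r * np.exp(-(r / width) ** 2)
    return d

# Two lenses; negative mass displaces in the opposite direction
lenses = [(0.35, 0.4, 0.1), (0.65, -0.25, 0.15)]
warped_axis = np.clip(logf + displacement(logf, lenses), 0.0, 1.0)

# Warp the envelope by reading it at the displaced positions
warped_env = np.interp(warped_axis, logf, env)

# Recombine with the original fine structure to keep pitch
fine = np.ones(n_bins)                      # stand-in for |X(f)| / env
out = warped_env * fine
print(out.shape, round(out.max(), 3))
```

Animating the lens positions over time is what turns this static warp into formant motion; the "unsafe territory" in the text corresponds to letting `warped_axis` become non-monotonic.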

Result: I could hear it doing stuff; that is about the best thing I can say about it.

Particle Reverb (particles modulate the reverb physics)

Instead of a fixed algorithmic tail, use a particle simulation as the control system for the space. Particles move in a 3D room, bounce, cluster, and flow through zones with different absorption/scatter. Their aggregated density/velocity fields continuously modulate a stable late-tail core (FDN/diffusion network): diffusion strength, damping per band, stereo decorrelation, and time variance.

The intent was a tail that behaves like a living medium, not a static decay.
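A toy version of the control path, with a hypothetical mapping from particle state to FDN parameters - the actual plugin exposed far more of these (70+), which is part of why it was untameable:

```python
import numpy as np

rng = np.random.default_rng(2)

# Particles in a unit-cube room
n = 500
pos = rng.random((n, 3))
vel = rng.normal(0, 0.01, (n, 3))

def step(pos, vel):
    """Move particles and bounce them off the room walls."""
    pos = pos + vel
    bounce = (pos < 0) | (pos > 1)
    vel[bounce] *= -1
    return np.clip(pos, 0, 1), vel

def control_fields(pos, vel):
    """Aggregate particle state into FDN control values (hypothetical mapping)."""
    density_low = np.mean(pos[:, 1] < 0.5)          # fraction in the lower half
    mean_speed = np.linalg.norm(vel, axis=1).mean()
    return {
        "diffusion": 0.4 + 0.5 * density_low,       # denser cloud -> more diffusion
        "hf_damping": 0.2 + 0.7 * (1 - density_low),
        "mod_depth": min(1.0, mean_speed * 50.0),   # faster cloud -> more time variance
    }

for _ in range(100):
    pos, vel = step(pos, vel)
params = control_fields(pos, vel)
print({k: round(v, 3) for k, v in params.items()})
```

The point of the design is that the FDN core itself stays stable; only its parameters are continuously driven by the aggregated density/velocity fields, so the tail can "breathe" without exploding.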

Result: With 70+ parameters and a random button, I couldn't get one nice-sounding reverb.


At this point, a reasonable person would stop.

I kept going.

Out of curiosity, I wanted to see how well Codex could build a particle simulator reacting to both live camera and live audio. (This is where the project leans into the “unusable.ai” label.)

It worked surprisingly well, assuming you enjoy watching hundreds of thousands of particles move around in semi-random patterns for a long time.

Maybe not useful. As a visual art instrument, it has potential.

Here’s its current form:

Metal Particle Art (live camera + mic sculpt a reactive 3D particle landscape)

The webcam feed becomes a continuously re-sampled spatial field where color, luma, contrast, gradients, and edges shape particle position, depth, motion vectors, and size. A live FFT/audio envelope (RMS, transients, multi-band energy) drives secondary dynamics on top of that image field: displacement strength, volumetric breathing, velocity response, and event-like camera behavior on sharp transients.
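A toy numpy version of the image-plus-audio field idea (the actual build is a Metal app; the luma frame and RMS value here are random stand-ins for the live camera and mic):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-ins for the live inputs: a downsampled camera luma frame and an audio envelope
frame = rng.random((48, 64))
audio_rms = 0.6

# Image gradients shape the motion vectors; luma shapes depth
gy, gx = np.gradient(frame)

n = 1000
p = rng.random((n, 2)) * [48 - 1, 64 - 1]   # particle positions on the image plane

def sample(field, p):
    """Nearest-neighbor sample of an image field at particle positions."""
    iy = np.clip(p[:, 0].round().astype(int), 0, field.shape[0] - 1)
    ix = np.clip(p[:, 1].round().astype(int), 0, field.shape[1] - 1)
    return field[iy, ix]

# Audio scales the displacement strength on top of the image field
displacement = audio_rms * np.stack([sample(gy, p), sample(gx, p)], axis=1) * 10
depth = sample(frame, p)                    # luma -> particle depth
p = p + displacement
print(p.shape, depth.shape)
```

The division of labor matches the description above: the video field defines where particles want to go, while the audio envelope only scales how hard they are pushed there.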

The result is not just “audio-reactive visuals,” but a hybrid instrument where video structure defines the geometry while sound injects force and instability, producing a cloud that feels alive rather than looped.

The camera rig orbits and drifts in ambient modes, but continuously tracks the moving cloud centroid so focus stays on the active mass. Trails and depth-of-field add persistence and spatial depth without breaking realtime performance.

It behaves like a visual medium you can play: the room, the face, the light, and the sound all become inputs to one evolving particle ecology.

If I ever push it further, the obvious next step is to feed it outside signals (open APIs, RSS) so the particle landscape reacts to something other than my desk lamp and breathing.