---
title: "Four Generations of Rendering 6,666 Characters at 60fps"
date: 2026-03-28
description: "From 6,500 DOM spans to variable-weight GLSL shaders — four generations of ASCII portrait rendering, each solving the last one's performance wall."
tags: ["webgl","performance","react","shaders","animations"]
readingTime: "12 min read"
url: https://alexmoening.com/dev-thoughts/four-generations-of-ascii-rendering.html
markdownUrl: https://alexmoening.com/dev-thoughts/four-generations-of-ascii-rendering.md
---

# Four Generations of Rendering 6,666 Characters at 60fps

[← Back to /dev/thoughts](/dev-thoughts/)

<p class="lead">This morning I stumbled onto Cheng Lou's <a href="https://chenglou.me/pretext/variable-typographic-ascii/">Variable Typographic ASCII</a> demo and couldn't stop staring at it. He renders ASCII art in Georgia at three font weights, selecting each character by brightness <em>and</em> width. The heavy characters pop forward. The thin ones recede. Weight becomes tone. I immediately wanted that for my homepage portrait.</p>

My homepage renders an interactive ASCII portrait — 6,666 colored characters that shimmer, scatter into particles on click, and reform. I've rebuilt the rendering engine four times now, and each version hit a performance wall that the next one solved. Four hours after seeing Cheng Lou's demo, I had variable-weight typography running on my portrait — but the road to get there was four generations long.

I wrote about the [architectural journey from P5.js to React](/dev-thoughts/why-react-for-everything) separately. This is the rendering half of that story: why each generation broke, and how Pretext gave me the idea for what to do once the GPU had headroom to spare.

### Generation 0: The DOM Wall

<p class="section-summary">6,500+ span elements. The browser spent more time on layout than rendering.</p>

The first version was the simplest possible approach. A Python script converted a photograph into colored ASCII art — each character wrapped in a `<span>` with an inline style for its RGB color. The result was a 265KB HTML file of nested spans that the browser rendered as text.
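The conversion step is simple enough to sketch. Here's a minimal JavaScript version of that pixel-to-span mapping (the original was a Python script; the character ramp and function name here are illustrative, not the original code):

```javascript
// Map one pixel to a colored <span> holding an ASCII character.
// Bright pixels get dense characters, dark pixels sparse ones.
// RAMP and the linear mapping are illustrative choices.
const RAMP = "@%#*+=-:. "; // dense → sparse

function pixelToSpan(r, g, b) {
  const lum = (0.299 * r + 0.587 * g + 0.114 * b) / 255; // 0..1
  const idx = Math.min(RAMP.length - 1, Math.floor((1 - lum) * RAMP.length));
  return `<span style="color:rgb(${r},${g},${b})">${RAMP[idx]}</span>`;
}
```

Multiply that by 6,500+ pixels and you get the 265KB file — every character carrying its own inline style.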

It worked. It was also a performance disaster.

Any modification to even a subset of those 6,500+ spans triggered a full grid reflow. Wave effects, interactivity — architecturally impossible. The rendering wasn't just slow. It couldn't be made fast.

<table class="data-table">
    <thead>
        <tr><th>Metric</th><th>Gen 0: DOM Spans</th></tr>
    </thead>
    <tbody>
        <tr><td>DOM nodes</td><td>6,500+</td></tr>
        <tr><td>Draw calls</td><td>N/A (browser layout engine)</td></tr>
        <tr><td>Interactivity</td><td>None (reflow too expensive)</td></tr>
        <tr><td>Frame budget used</td><td>100%+ on initial paint alone</td></tr>
        <tr><td>GPU involvement</td><td>Compositing only</td></tr>
    </tbody>
</table>

The portrait looked correct. It just couldn't move.

### Generation 1: Canvas 2D and Death by a Thousand Cuts

<p class="section-summary">Single canvas, 6,666 fillText() calls per frame — each one re-rasterizing the font from scratch.</p>

The fix seemed obvious: move to a `<canvas>` element and draw with P5.js. One DOM node instead of 6,500. The browser's layout engine could relax.

The P5.js sketch parsed the same 265KB HTML file into a data array, then on every frame called `p.text()` for each of the 6,666 characters. Wave effects became a nested loop: for each character, check distance to each active wave, compute a brightness boost, set the fill color, draw the character.
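That per-frame cost structure can be sketched as pure arithmetic. This is a hedged reconstruction (names, ring width, and falloff are my own, not the original sketch's) of the O(chars × waves) brightness pass that ran ahead of each text draw:

```javascript
// O(chars × waves) CPU pass: brightness boost for one character based on
// its distance to each expanding wave ring. Constants are illustrative.
function waveBoost(charX, charY, waves, ringWidth = 30) {
  let boost = 0;
  for (const w of waves) {
    const d = Math.hypot(charX - w.x, charY - w.y);
    const fromRing = Math.abs(d - w.radius); // distance to the ring itself
    if (fromRing < ringWidth) {
      boost += (1 - fromRing / ringWidth) * w.strength; // linear falloff
    }
  }
  return boost;
}

// Inside p.draw(), something like this ran for all 6,666 characters —
// the fill() + text() pair is where the font engine gets invoked each frame:
// for (const c of chars) {
//   const b = waveBoost(c.x, c.y, activeWaves);
//   p.fill(c.r + b * 255, c.g + b * 255, c.b + b * 255);
//   p.text(c.ch, c.x, c.y);
// }
```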

It worked at 60fps on my MacBook. Then I tested Firefox on Linux.

Mozilla's own bug tracker documents the problem with language I appreciate: Canvas 2D `fillText()` performance represents **"death by a thousand cuts"** ([Bug 1360222](https://bugzilla.mozilla.org/show_bug.cgi?id=1360222)). Each call invokes the platform font engine — CoreText on macOS, FreeType on Linux — to rasterize glyphs, compute sub-pixel positioning, apply anti-aliasing, and composite the result. There is no batching. There is no caching. Every character is rasterized fresh, every frame.

[Benchmarks from Mirko Sertic in 2015](https://www.mirkosertic.de/blog/2015/03/tuning-html5-canvas-filltext/) measured `fillText()` consuming **10ms per frame on Firefox Linux — 41% of the 16.67ms budget** for 60fps. Modern browsers have improved glyph caching since then, but the fundamental architecture hasn't changed — each `fillText()` still dispatches to the platform font engine, and at 6,666 calls per frame, the overhead compounds.

<table class="data-table">
    <thead>
        <tr><th>Metric</th><th>Gen 0: DOM</th><th>Gen 1: Canvas 2D</th></tr>
    </thead>
    <tbody>
        <tr><td>DOM nodes</td><td>6,500+</td><td>1 (canvas)</td></tr>
        <tr><td>Draw operations/frame</td><td>N/A</td><td>6,666 fillText() calls</td></tr>
        <tr><td>Wave cost/frame</td><td>N/A</td><td>O(chars × waves) on CPU</td></tr>
        <tr><td>Font rasterization</td><td>Once (browser layout)</td><td>6,666× per frame</td></tr>
        <tr><td>Threading</td><td>Main thread</td><td>Main thread (sequential)</td></tr>
        <tr><td>Interactivity</td><td>None</td><td>Click waves, random emissions</td></tr>
    </tbody>
</table>

The P5.js version added interactivity but inherited Canvas 2D's fundamental limitation: every text draw is a synchronous CPU operation. The GPU sits idle while the CPU does all the work, one character at a time.

### Generation 2: One Draw Call to Rule Them All

<p class="section-summary">WebGL instanced mesh — 6,666 characters in a single GPU draw call via a glyph atlas texture.</p>

The migration to React Three Fiber changed everything. Not because of React (I covered the [architecture story separately](/dev-thoughts/why-react-for-everything)), but because of what WebGL makes possible: **instanced rendering**.

The idea: don't draw 6,666 characters individually. Draw one quad, and tell the GPU to stamp it 6,666 times with different positions, colors, and texture coordinates. One draw call. One geometry. One material. The GPU's massively parallel architecture handles the rest.

The key innovation is the **glyph atlas** — a single texture containing every unique ASCII character pre-rendered once. Instead of rasterizing fonts per frame, each character instance just looks up its glyph in the atlas via UV coordinates. The font engine runs once at startup. The GPU does texture sampling from that point forward.
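With a monospace grid, the atlas lookup reduces to arithmetic. A sketch of the per-glyph UV computation, assuming a single-row atlas of ~50 glyphs (layout and count are illustrative; each instance would carry its UV offset as a per-instance attribute):

```javascript
// Compute the UV rectangle for one glyph in a single-band atlas texture.
// Assumes glyphCount glyphs laid out left-to-right; monospace means every
// cell is the same width, so no per-glyph measurement is needed.
function glyphUV(charIndex, glyphCount = 50) {
  return {
    u0: charIndex / glyphCount,       // left edge in [0, 1)
    u1: (charIndex + 1) / glyphCount, // right edge
    v0: 0,
    v1: 1,
  };
}
```

The font engine runs once to paint the atlas; after that, every frame is just texture sampling at these coordinates.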

[Daniel Velasquez's benchmarks](https://velasquezdaniel.com/blog/rendering-100k-spheres-instantianing-and-draw-calls/) quantify the difference: individual Three.js meshes cap out at **~7,000 objects at 60fps**, while instanced rendering handles **100,000+ objects** — approximately a **14x increase**, though exact gains are hardware-dependent. The bottleneck with individual objects isn't GPU rendering capacity. It's CPU-to-GPU communication overhead from thousands of draw calls.

Wave effects moved from CPU nested loops to GLSL fragment shaders. Instead of JavaScript calculating brightness per character per wave per frame, the shader runs the same math on the GPU — but across all 6,666 fragments simultaneously, in parallel. What was O(chars × waves) sequential CPU work became massively parallel GPU work that takes a fraction of the time.

<table class="data-table">
    <thead>
        <tr><th>Metric</th><th>Gen 1: Canvas 2D</th><th>Gen 2: WebGL</th></tr>
    </thead>
    <tbody>
        <tr><td>Draw calls/frame</td><td>6,666</td><td>1</td></tr>
        <tr><td>Font rasterization</td><td>6,666× per frame</td><td>Once at startup (atlas)</td></tr>
        <tr><td>Wave computation</td><td>CPU, sequential</td><td>GPU, massively parallel</td></tr>
        <tr><td>Visual effects</td><td>Brightness boost only</td><td>Shimmer, sparkle, bloom, 8 variants</td></tr>
        <tr><td>Physics</td><td>None</td><td>Scatter/reform, shockwaves, mouse gravity</td></tr>
        <tr><td>Postprocessing</td><td>None</td><td>Bloom (neon glow)</td></tr>
    </tbody>
</table>

The result: more effects, more physics, more visual complexity — and less of the frame budget consumed than the simpler P5.js version needed. The GPU is that much better at this kind of work.

### Generation 3: Weight as a Tonal Dimension

<p class="section-summary">Variable-weight font atlas where waves don't just brighten characters — they embolden them.</p>

Generation 2 solved the performance problem. Generation 3 asks: now that we have GPU headroom, what new visual dimensions can we add?

That question sat unanswered until I found [Pretext](https://chenglou.me/pretext/).

#### What Pretext does

[Pretext](https://github.com/chenglou/pretext) is Cheng Lou's text measurement library — it caches font metrics once and does all subsequent layout with pure arithmetic. No DOM, no `getBoundingClientRect()`, no layout reflow. Same philosophical move as the glyph atlas: measure once, compute forever. It's been getting a lot of well-deserved attention on X lately, and the [demos page](https://chenglou.me/pretext/) shows why — seven interactive showcases of what becomes possible when you eliminate the browser's measurement bottleneck.

The demo that stopped me was [Variable Typographic ASCII](https://chenglou.me/pretext/variable-typographic-ascii/). It renders an ASCII portrait in **Georgia** at **3 font weights × 2 styles**, with a particle-and-attractor brightness field driving the animation. Characters are selected by both **brightness and width** — because Georgia's `M` is wider than its `i`, you can't just swap characters by brightness alone without distorting the grid. Pretext's per-glyph width measurements make the substitution dimensionally correct.

The effect is striking. Heavy-weight characters advance visually. Thin-weight characters recede. The portrait gains depth — not from color or parallax, but from **typographic weight alone**. That's what I wanted.

#### What I took from it

My adaptation diverges from Pretext in two key ways. Pretext uses **proportional fonts** and **discrete weight steps** (3 levels). I use a **variable-weight monospace font** — JetBrains Mono with a continuous `wght` axis from 100 to 800 — and keep the fixed-width grid that the portrait depends on. Monospace means I don't need Pretext's width-aware character selection; every character occupies the same cell. But the concept that carries over is **weight as a visual dimension**: bright areas render heavy, dark areas render thin.

Here's how it works:

**The multi-weight glyph atlas.** Instead of rendering each character once, the atlas now renders the same ~50 unique characters at **8 weight levels** (100, 200, 300, 400, 500, 600, 700, 800). The atlas has 8 vertical bands — each containing the full character set at one weight. Total texture size: about 160×960 pixels, trivial for a modern GPU.

**Per-character base weight.** Each of the 6,666 character instances gets a `weight` attribute derived from its luminance: `(0.299×R + 0.587×G + 0.114×B) / 255`. Bright characters default heavy. Dark characters default thin.
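As a sketch of that mapping (the linear luminance-to-weight curve is an illustrative choice, not necessarily the site's exact one):

```javascript
// Map a character's RGB color to a base weight on the variable wght axis.
// Uses the standard Rec. 601 luminance weights from the text.
function baseWeight(r, g, b) {
  const lum = (0.299 * r + 0.587 * g + 0.114 * b) / 255; // 0..1
  return 100 + lum * 700; // bright → 800 (heavy), dark → 100 (thin)
}
```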

**Dual-band sampling.** The fragment shader computes an effective weight (base + wave boost + breathing), maps it to the atlas, and **samples two adjacent weight bands**, interpolating between them with GLSL `mix()`. This creates smooth weight transitions instead of discrete jumps.
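The band selection is a few lines of arithmetic. Here's the GLSL logic ported to JavaScript for illustration (the real version lives in the fragment shader; parameter names are mine):

```javascript
// Map an effective weight (100..800) onto the 8-band atlas: pick the two
// adjacent bands to sample and the interpolation factor for mix().
function weightBands(effectiveWeight, bandCount = 8, minW = 100, maxW = 800) {
  const t = (effectiveWeight - minW) / (maxW - minW);      // normalize to 0..1
  const scaled = Math.min(Math.max(t, 0), 1) * (bandCount - 1); // 0..7
  const lower = Math.floor(scaled);
  return {
    lower,                                      // first band to sample
    upper: Math.min(lower + 1, bandCount - 1),  // adjacent band
    mix: scaled - lower,                        // GLSL mix() factor
  };
}
```

A weight of 450 lands exactly between bands 3 and 4, so the shader blends the two glyph renderings 50/50 — that's what makes the transitions continuous instead of stepping through 8 discrete weights.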

**Wave weight boost.** When a wave ring passes through a character, it doesn't just get brighter — it gets **bolder**. The weight boost factor is 0.35 (vs 2.5 for brightness), deliberately gentler so the emboldening effect is visible but not overpowering. The visual effect: ripples that "pulse ink" through the portrait.

**Breathing.** The entire portrait subtly thickens and thins on a 5-minute sinusoidal cycle — `sin(time × 0.02094) × 0.05`. It's imperceptible on any given frame, matching the 300-second tagline pulse I use elsewhere on the site. Disabled when `prefers-reduced-motion` is active.
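The frequency constant checks out against the stated cycle: a sinusoid `sin(t × ω)` has period `2π/ω`, and with ω = 0.02094 that works out to roughly 300 seconds. A quick sketch (function name is mine):

```javascript
// sin(t × ω) has period 2π/ω; ω = 0.02094 gives ≈ 300 s — the 5-minute cycle.
const omega = 0.02094;
const periodSeconds = (2 * Math.PI) / omega; // ≈ 300.06

// Per-frame weight offset: a ±0.05 swing on the normalized weight range.
function breathingOffset(timeSeconds) {
  return Math.sin(timeSeconds * omega) * 0.05;
}
```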

<table class="data-table">
    <thead>
        <tr><th>Metric</th><th>Gen 2: Single Weight</th><th>Gen 3: Variable Weight</th></tr>
    </thead>
    <tbody>
        <tr><td>Atlas size</td><td>160×120px (1 band)</td><td>160×960px (8 bands)</td></tr>
        <tr><td>Texture samples/fragment</td><td>1</td><td>2 (adjacent bands + mix)</td></tr>
        <tr><td>Font file</td><td>None (Courier New system font)</td><td>72KB WOFF2 (JetBrains Mono Variable)</td></tr>
        <tr><td>Visual dimensions</td><td>Color + brightness</td><td>Color + brightness + weight</td></tr>
        <tr><td>Wave effect</td><td>Brightness boost</td><td>Brightness boost + weight boost</td></tr>
        <tr><td>Perf overhead</td><td>Baseline</td><td>&lt;5% (one extra texture sample)</td></tr>
    </tbody>
</table>

The performance cost of the second texture sample is negligible — the atlas fits comfortably in GPU texture cache, and the additional `mix()` math is trivial compared to the existing shimmer and sparkle calculations.

### The Full Picture

<p class="section-summary">Four generations, each solving the previous one's wall — from layout reflow to font rasterization to draw call overhead to visual expressiveness.</p>

<table class="data-table">
    <thead>
        <tr><th>Generation</th><th>Rendering</th><th>Draw Calls</th><th>Font Cost</th><th>Wave Engine</th><th>Wall Hit</th></tr>
    </thead>
    <tbody>
        <tr><td>0: DOM Spans</td><td>Browser layout</td><td>N/A</td><td>Once</td><td>None</td><td>Reflow made interactivity impossible</td></tr>
        <tr><td>1: P5.js Canvas</td><td>Canvas 2D</td><td>6,666/frame</td><td>6,666×/frame</td><td>CPU nested loop</td><td>fillText rasterization blew frame budget</td></tr>
        <tr><td>2: R3F WebGL</td><td>InstancedMesh</td><td>1</td><td>Once (atlas)</td><td>GPU fragment shader</td><td>Single visual dimension (brightness only)</td></tr>
        <tr><td>3: Variable Weight</td><td>InstancedMesh</td><td>1</td><td>Once (8-band atlas)</td><td>GPU + dual-band sampling</td><td>TBD</td></tr>
    </tbody>
</table>

Each generation's "wall" wasn't a failure — it was a ceiling that revealed the next opportunity. DOM spans couldn't move. Canvas 2D could move but burned the CPU on font work. WebGL moved the work to the GPU but used a single visual dimension. Variable-weight typography adds depth without meaningful performance cost because the GPU had headroom all along.

The pattern reminds me of something from the machine shop: you don't upgrade the tool because it broke. You upgrade it because you see a cut it can't make.

### What I Learned

<p class="section-summary">The GPU is embarrassingly underused by most web rendering approaches — and the best ideas come from seeing someone else's work and asking "what if."</p>

The biggest takeaway from this journey is how much GPU capacity goes unused in typical web rendering. Canvas 2D relies on the CPU for glyph rasterization — even in GPU-accelerated contexts, each `fillText()` dispatches to the platform font engine on the main thread. The browser's DOM renderer uses the GPU for compositing but not for the kind of parallel per-element computation that makes wave effects possible.

WebGL with instanced rendering unlocks that capacity. And once you're there, adding visual complexity (shimmer modes, sparkle effects, variable weight) costs almost nothing because the GPU processes all fragments in parallel. The jump from Gen 1 to Gen 2 was a **6,666:1 reduction in draw calls**. The jump from Gen 2 to Gen 3 merely **doubled texture samples per fragment, from one to two** — barely measurable.

But the performance story is only half the lesson. Generation 3 didn't come from profiling a bottleneck. It came from stumbling onto Cheng Lou's demo and having a visceral reaction: *that's what weight can do to text*. Three generations of engineering gave me the GPU headroom. A stranger's demo gave me the idea. The best optimizations aren't always about going faster — sometimes they're about creating space for an idea you haven't had yet.

Generation 4 will come the same way — from someone else's work that makes me stop scrolling.

---

*Working on GPU-accelerated text rendering or variable-font experiments? Cheng Lou's [Pretext demos](https://chenglou.me/pretext/) are worth an hour of your time. And if you want to compare notes — find me on [LinkedIn](https://www.linkedin.com/in/alexmoening/).*

---


*Copyright 2026 Alex Moening. Opinions expressed are my own.*
