Hermiona (part three)

Eleven point three. That was the average error after seven rounds of optimization. Two hundred hours of training on the M4. But it was a lab number — the model tested on data from the same pool. I wanted to see what happens on a live font. One Hermiona had never seen.

I took two fonts. One a geometric sans, clean, regular. The other humanistic — more character in the shapes, more decisions made by hand. Both with professional kerning. I compared value by value.

First problem: both fonts had loads of glyph variants that 066.KERN doesn’t support. Small caps, oldstyle numerals, tabular figures, ligatures. After filtering I was left with thirteen hundred pairs in one, three hundred seventy in the other. The rest — noise.
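The filtering can be sketched roughly like this. The actual set of variants 066.KERN rejects isn't spelled out in the post, so the suffix list and the ligature convention here are assumptions based on common glyph-naming practice:

```python
# Assumed conventions: small caps as ".sc", oldstyle figures as ".onum",
# tabular figures as ".tnum", ligatures as underscore-joined names ("f_f_i").
UNSUPPORTED_SUFFIXES = (".sc", ".onum", ".tnum", ".lnum")

def is_supported(glyph: str) -> bool:
    """Reject glyph variants the model wasn't trained on."""
    if glyph.endswith(UNSUPPORTED_SUFFIXES):
        return False
    if "_" in glyph:  # ligature names like "f_f_i"
        return False
    return True

def filter_pairs(kerning: dict[tuple[str, str], int]) -> dict[tuple[str, str], int]:
    """Keep only kerning pairs where both glyphs are supported."""
    return {
        pair: value
        for pair, value in kerning.items()
        if is_supported(pair[0]) and is_supported(pair[1])
    }
```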

On letters alone the model gives an error of thirteen units in the first font, sixteen in the second. Correlation with the original above seventy-five percent. Not bad. L+Y — original minus one thirty, model minus one thirty-two. V+period — minus one twenty versus minus one twenty-one. T+p — minus fifty versus minus forty-nine. On canonical pairs Hermiona hits within a unit.

The problem sits elsewhere. In pairs where the typographer made a decision against the average. T+Y in one of the fonts. The typographer set plus seventy-one. Pushed the letters apart instead of squeezing them. That’s rare. Hermiona gave minus five. Off by seventy-six. The model hadn’t seen that move in the data often enough to understand it.

Twenty-three percent of pairs had the sign reversed — the model gave minus where the original had plus. On the geometric sans only eight percent. On the humanistic — worse, because there was more room for individual decisions.
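The sign-reversal check is a strict disagreement count: a pair is flipped only when one value is positive and the other negative. A sketch, with the T+Y example as illustrative data:

```python
def sign_reversal_rate(original: dict, model: dict) -> float:
    """Share of shared pairs where model and original disagree in sign.
    Pairs where either value is zero don't count as reversed."""
    pairs = original.keys() & model.keys()
    flipped = sum(1 for p in pairs if original[p] * model[p] < 0)
    return flipped / len(pairs)

original = {("T", "Y"): 71, ("L", "Y"): -130}
model = {("T", "Y"): -5, ("L", "Y"): -132}
rate = sign_reversal_rate(original, model)  # T+Y flipped, L+Y agrees
```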

There’s also a systematic bias. Hermiona pulls everything about ten units toward minus. Original mean minus one, model mean minus eleven. As if it got used to the idea that kerning always means tightening. On fonts where most pairs are negative — invisible. On fonts with a lot of positive pairs — it breaks things.
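The bias check is just the gap between the two means over shared pairs. A sketch, with made-up values that reproduce the minus-one versus minus-eleven gap described above:

```python
from statistics import fmean

def mean_bias(original: dict, model: dict) -> float:
    """Mean model value minus mean original value over shared pairs.
    Negative result = the model pulls kerning toward minus."""
    pairs = original.keys() & model.keys()
    return fmean(model[p] for p in pairs) - fmean(original[p] for p in pairs)

original = {("A", "V"): -1, ("T", "a"): -1}
model = {("A", "V"): -11, ("T", "a"): -11}
bias = mean_bias(original, model)
```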

The model learned the craft. It didn’t learn the designer. Craft is the average of seven hundred fonts. A designer’s decision is the deviation from that average. To catch it, Hermiona would need to know something it doesn’t know now — how a pair behaves in the context of a word. Not T+a in isolation, but T+a in “Table”, “Tango”, “Task”. That’s what the next version is for.
