Ulyana Ratushnyak

@orangedog701350

Turka, Ukraine Joined Jan 2026

Ulyana Ratushnyak
@orangedog701350 · Jan 12, 2026

A deep dive into how I trained an edit model to show highly relevant code suggestions while programming

This is def interesting for all SWEs who would like to know what goes on behind the scenes in your code editor when you hit `Tab`. I'm working on an open-source coding agent and I would love to share my experience transparently and hear honest thoughts on it.
So for context, NES (Next Edit Suggestion) is designed to predict the next change your code needs, wherever it lives.
Honestly, when I started building this, I realised it's much harder than classic inline completion: NES considers the entire file plus your recent edit history and predicts how your code is likely to evolve — where the next change should happen, and what that change should be.
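To make that concrete, here's a minimal sketch of what the task shape looks like. All the names (`EditEvent`, `NesRequest`, `NesPrediction`, `apply_prediction`) are my own illustrative inventions, not the actual interfaces of any editor; the point is just that the input is (whole file + edit history) and the output is (location + replacement), not a single cursor-local completion:

```python
from dataclasses import dataclass, field

@dataclass
class EditEvent:
    """One recent edit the user made: a line range in the old file and its replacement."""
    start_line: int
    end_line: int
    replacement: str

@dataclass
class NesRequest:
    """Everything the model conditions on: the whole current file plus recent edits."""
    file_text: str
    history: list = field(default_factory=list)  # most-recent-last EditEvents

@dataclass
class NesPrediction:
    """What the model emits: where the next change goes, and what it should be."""
    start_line: int
    end_line: int
    replacement: str

def apply_prediction(file_text: str, pred: NesPrediction) -> str:
    """Apply a predicted edit by splicing the replacement over the target lines."""
    lines = file_text.splitlines()
    lines[pred.start_line:pred.end_line] = pred.replacement.splitlines()
    return "\n".join(lines)
```

So if you just renamed a variable on one line, the model might predict an `NesPrediction` targeting a *different* line where the old name is still used — that's the "wherever it lives" part.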
Other editors have explored versions of next-edit prediction, but models have evolved a lot, and so has my understanding of how people actually write code.
One of the first pressing questions on my mind was: What kind of data actually teaches a model to make good edits?
It turned out that real developer intent is surprisingly hard to capture. As anyone who's peeked at real commits knows, developer edits are messy: pull requests bundle unrelated changes, commit histories jump around, and the sequences of edits often skip the small, incremental steps engineers actually take while coding.
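One small illustration of the "bundled changes" problem: even a single-file diff in a commit often contains several unrelated hunks, and you can at least pull those apart before trying to learn from them. A hedged sketch using only the stdlib (`split_into_hunks` is a name I made up, not part of any real pipeline):

```python
import difflib

def split_into_hunks(before: str, after: str):
    """Split one (possibly bundled) file change into independent edit hunks.

    Returns a list of (old_start, old_end, new_text) tuples over line ranges,
    one per contiguous changed region — a crude first pass at recovering the
    smaller steps that a squashed commit hides.
    """
    a, b = before.splitlines(), after.splitlines()
    sm = difflib.SequenceMatcher(a=a, b=b)
    hunks = []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag != "equal":  # 'replace', 'delete', or 'insert'
            hunks.append((i1, i2, "\n".join(b[j1:j2])))
    return hunks
```

Of course this only separates *spatially* distinct changes; it says nothing about the order a human actually made them in, which is exactly the missing signal.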
