How Vibe Coding Makes Weird Platforms Worth Crossing
The issue was Monkey C.
More specifically, the issue was everything that comes with deciding to build something for a niche platform you do not already inhabit. Not because it is impossible. Not because it is even especially hard. But because it is just far enough outside your lane that the cognitive cost stops being worth paying.
I just built a small Garmin Connect IQ data field for my own running, called MovePause. It does one thing well: it tracks alternating run and recovery periods during freeform interval sessions. While running, it shows the current rep and progress against the previous one. While recovering, it shows the current rest period, keeps the previous running duration in view, and nudges every thirty seconds with haptics. No settings. No prebuilt workout. Just enough rhythm and context to be useful in the moment.
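The behaviour described above fits in a surprisingly small state machine. A hedged sketch of what that can look like as a Connect IQ data field, with all names and details hypothetical and simplified (this is not the actual MovePause source):

```monkeyc
using Toybox.WatchUi;
using Toybox.Activity;
using Toybox.Attention;
using Toybox.System;

// Hypothetical sketch: alternate between "running" and "recovering"
// based on the activity timer state, and buzz every 30 s of recovery.
class MovePauseSketch extends WatchUi.DataField {
    hidden var _running = false; // currently in a run rep?
    hidden var _phaseStart = 0;  // ms timestamp when this phase began
    hidden var _prevRunMs = 0;   // duration of the previous run rep
    hidden var _lastBuzz = 0;    // ms timestamp of the last nudge

    function initialize() {
        DataField.initialize();
    }

    // compute() is called roughly once per second during an activity.
    function compute(info) {
        var now = System.getTimer();
        var timerOn = (info.timerState == Activity.TIMER_STATE_ON);

        if (timerOn != _running) {
            // Phase flipped: remember how long the last run rep took.
            if (_running) { _prevRunMs = now - _phaseStart; }
            _running = timerOn;
            _phaseStart = now;
            _lastBuzz = now;
        }

        // Haptic nudge every 30 seconds of recovery.
        if (!_running && (now - _lastBuzz) >= 30000) {
            _lastBuzz = now;
            if (Attention has :vibrate) {
                Attention.vibrate([new Attention.VibeProfile(50, 500)]);
            }
        }
        return (now - _phaseStart) / 1000; // seconds into the current phase
    }
}
```

The `Attention has :vibrate` guard is the idiomatic Connect IQ way to cope with devices that lack a vibration motor; everything else is standard data field plumbing.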

It is a tiny piece of software. It is also exactly the sort of thing I would not have built before.
Not because I lacked the skill, but because I lacked the patience for the surrounding ecosystem cost. That cost has now collapsed enough to make projects like this rational, and I think that shift matters more than the loud headlines suggest.
The barrier was never capability
I have twenty years of engineering behind me. Java, JavaScript, Go, Python. Platform work, product engineering, event-driven microservices, cloud systems, integration headaches, operational realities. Learning Monkey C was never going to defeat me.
But learning just enough of a new language, SDK, runtime model, packaging toolchain, testing flow, and deployment mechanism takes time. Not deep-thinking time. Fiddly, context-heavy, mildly annoying exploration time. The kind that competes with everything else already on your plate.
A lot of the conversation around vibe coding focuses on non-programmers suddenly becoming able to build things. That is interesting, but it is not the bit I find most transformative. What interests me more is what happens for experienced engineers, because our limiting factor is rarely whether we can learn something. It is whether the learning is worth derailing everything else.
That is especially true for the long tail of small, high-friction ecosystems: smartwatch apps, browser extensions, obscure plugins, internal scripts on strange platforms, one-off integrations, tools for products you use but do not develop for.
The problem is not difficulty. The problem is activation energy.
What the agents removed
The first thing coding models eliminated was the bootstrap tax.
They helped with the initial app shape, reasoning about whether a data field or a full activity app made more sense, scaffolding the project, understanding Garmin timer state, proposing implementation approaches, structuring documentation, navigating build and packaging steps, and filling in the unglamorous connective tissue that normally kills momentum on day two.
That matters more than it sounds. The early phase of a niche-platform project is mostly not deep engineering. It is reconnaissance. Once the agents absorbed enough of that overhead, a small idea could actually survive long enough to become software.
What I still had to understand myself
This is the bit that makes the story useful rather than evangelical. The barrier did not disappear. It moved.
The product got simpler, not more complex
The first versions were overbuilt. Too configurable, too analytical. Moving time versus paused time. Recovery targets. Settings screens.
The better version emerged through iteration, and that iteration was mine. I was the one standing in the rain at the end of a rep, glancing at my wrist, realising I did not want analytics. I wanted to know when to go again. While running, I wanted a soft sense of where I was relative to the last effort. That was it.
No model could have simplified the product that way, because the simplification came from understanding the actual moment of use.
I had to understand the platform I was standing on
Coding agents can infer a lot, but eventually you bump into things that are genuinely platform-specific. In Garmin's case, that meant learning that a data field is not a full-screen app. It only owns its allocated rectangle within the watch layout. Simulator behaviour is not the same as real watch behaviour. Some ideas that feel obvious are simply not available in the data field model.
I had to understand, for example, that my field could not integrate with Garmin's global circular progress chrome. It could draw its own gauge inside its own bounds, but that was it.
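Concretely, that constraint shows up in the drawing code: a data field's onUpdate only receives a drawing context clipped to its own rectangle. A hedged fragment illustrating the idea, assuming a hypothetical _progress value between 0 and 1 and the surrounding DataField class (plus a using Toybox.Graphics; import):

```monkeyc
// Sketch: a data field can only draw inside dc.getWidth() x dc.getHeight().
function onUpdate(dc) {
    var w = dc.getWidth();
    var h = dc.getHeight();
    dc.setColor(Graphics.COLOR_WHITE, Graphics.COLOR_TRANSPARENT);
    dc.clear();
    // A progress arc drawn entirely within the field's own bounds,
    // not Garmin's full-screen chrome. _progress is hypothetical (0..1).
    var r = (w < h ? w : h) / 2 - 2;
    dc.drawArc(w / 2, h / 2, r, Graphics.ARC_CLOCKWISE,
               90, 90 - (360 * _progress).toNumber());
}
```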

This is an important pattern with agent-assisted work: the model can get you close, but it cannot repeal the actual constraints of the platform.
The ecosystem was still real
There was still a toolchain to wrangle: the Java dependency for the Monkey C language server, the SDK, the simulator, the distinction between a .prg for sideloading and an .iq package for the store. There was still hardware to deal with: Garmin Express competing with file transfer access, the watch exposing storage inconsistently on macOS. And there was still a publishing exercise: title, description, screenshots, metadata that makes sense to a stranger browsing the Connect IQ store.
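For orientation, the .prg versus .iq split looks roughly like this on the command line. Paths and the device name are placeholders; the flags are the standard ones from the Connect IQ SDK's monkeyc and monkeydo tools:

```shell
# Sideload build: a .prg you can copy straight onto the watch
monkeyc -f monkey.jungle -d fenix7 -y developer_key.der -o MovePause.prg

# Run it in the simulator
connectiq &                      # start the simulator
monkeydo MovePause.prg fenix7

# Store build: -e exports a signed .iq package for Connect IQ store upload
monkeyc -e -f monkey.jungle -y developer_key.der -o MovePause.iq
```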
None of this was insurmountable. The agents helped with all of it. But the final judgement still mattered at every step. What is this thing, really? How should it be described? What should be emphasised?
That is not a syntax problem. It is a product truth problem.
What changed, exactly?
The important change is not that coding models made the problem trivial. It is that they made it worth starting.
That is a very different claim, and I think a much more defensible one.
Without agents, the path looked something like this: decide whether the idea is worth a weekend, realise you need to learn just enough Monkey C, get the simulator working, understand deployment and packaging, learn the store flow, and maybe, eventually, get to the actual product work.
With agents, that became: define the idea, test whether the platform shape makes sense, scaffold quickly, iterate on the behaviour, dive into the platform details only where reality forced it, and ship.
The barrier moved from ecosystem bootstrapping to human judgement. That is a huge improvement.
Why this matters beyond Garmin
I do not think the interesting part of this story is that I built a Garmin data field.
The interesting part is that this sort of project now clears the threshold of worth doing. That has implications for a lot of software that used to die in the gap between "possible" and "worth the interruption."
The long tail of niche software just got much more buildable. Not because coding agents are perfect. Not because they eliminate understanding. But because they compress the amount of new ecosystem overhead you need to internalise before you can reach the useful part.
That is especially powerful for experienced engineers, because we already know where the real work is. We are not afraid of building things. We are selective about where to spend our cognitive budget. Coding agents change that budget.
The real promise
I do not think the real promise of vibe coding is that nobody will need to learn how software works.
I think the real promise is that many more useful things will now get built, because the overhead of entering unfamiliar ecosystems has dropped below the threshold of irritation.
The future is not just that more people will code. It is that more ideas will survive first contact with the toolchain.