AI Did Not Fix eLearning. It Exposed Bad Product Design
- Krzysztof Kosman

For years, the eLearning industry has had an easy excuse.
- Courses are boring because compliance content is boring.
- Learning platforms are clunky because enterprise software is clunky.
- Authoring tools are limited because accessibility matters.
- LMS systems are hard to use because training is complex.
Then AI arrived.
And suddenly that old logic started to look weaker than ever.
Because once software can generate a course draft in minutes, create visuals, add voiceover, suggest learning goals, and even help configure a large platform through conversation, the real bottleneck becomes impossible to ignore: the problem was never just content production. The problem was product design.
That was one of the most interesting ideas in my conversation with Emil Reisser-Weston, founder of Open eLMS. His perspective is unusual not because he talks about AI, but because he comes at the whole category through ergonomics and human factors design. In other words, he does not start with what the software can do. He starts with how people actually use systems, how they learn, and what makes interfaces feel natural instead of heavy.
That difference matters.
Because if you listen closely, his critique is not really about one tool, one LMS, or one vendor. It is about a much broader pattern in software.
Too many products are designed around internal logic, technical constraints, or category habits. Not around the real cognitive experience of the person using them.
The industry normalized boring
Emil’s criticism of mainstream eLearning is blunt. A lot of it, in his words, feels like endless vertical scrolling, website effects, and more of the same. His argument is not that designers are lazy. It is that the tools themselves shape the experience. When the dominant authoring paradigm is constrained, the output becomes constrained too.
That is a bigger idea than it may first appear.
In software, we often talk as if products are neutral containers. But they are not. Every product encodes assumptions. Every editor nudges behavior. Every interface rewards some actions and discourages others.
If your authoring tool makes it easiest to produce a long page of text with familiar blocks, then a huge share of the market will end up producing exactly that. If your LMS assumes learners should always come into one central system, consume information there, and only then apply it later, then your entire learning model starts to orbit around that structure.
Over time, those patterns stop looking like choices. They start looking like reality.
That is how weak design becomes industry common sense.
Good UX is still treated as suspicious in enterprise learning
One of the sharpest parts of the conversation was not about content at all. It was about why so many learning platforms still feel heavier than they should.
There is a strange belief in many categories of B2B software that if a product feels difficult, dense, or slightly unpleasant, it must be serious. If it feels simple, it risks looking lightweight. In learning, that bias seems especially stubborn.
Emil’s view is that many systems survive not because they are genuinely well designed, but because they are bought by people who do not use them day to day. Decisions get made above the user layer. Brand familiarity, procurement comfort, and “nobody got fired for choosing the big name” still shape the market. Meanwhile, the learners and administrators who actually live inside the product inherit the consequences.
This pattern is not unique to EdTech. It shows up in CRMs, HR systems, analytics tools, and internal enterprise platforms everywhere.
But learning software makes the problem especially visible, because the gap between what the system is supposed to enable and what it actually feels like to use is so personal. If learning feels dull, repetitive, and unnatural, people disengage fast. And once that happens, the problem is no longer just UX. It becomes a business problem, a culture problem, and a learning outcome problem.
AI did not magically solve this. It made the gap obvious.
It is tempting to frame AI as the solution to bad eLearning. But that is too simplistic.
AI can generate faster. It can summarize, suggest, draft, transform, and adapt. But if the surrounding system is still poorly designed, all you get is more output moving through the same weak structure.
That is why the most interesting part of Emil’s argument is not “AI can make courses quickly.” It is that AI becomes valuable when it is embedded inside a better product philosophy.
In the interview, he describes a workflow in which a PDF can become a first-pass learning experience with structure, visuals, voice, and interactivity in minutes rather than weeks. But even there, the real point is not speed for its own sake. The point is what speed unlocks. If the first draft can happen quickly, human effort can move higher up the stack: refinement, engagement, customization, tone, clarity, and relevance.
That is a much healthier way to think about AI-assisted software in general.
The best use of AI is rarely “replace the whole process.” It is usually “compress the low-leverage parts, so humans can spend more time on the parts that actually matter.”
Or, as Emil puts it in a memorable metaphor: bake the cake in ten minutes, then spend your time icing it.
Product design matters more when systems get smarter
There is another reason AI raises the stakes for product design.
As systems become more powerful, they usually become more complex under the hood. More options, more configurations, more workflows, more integrations, more edge cases. That complexity has to go somewhere.
Traditionally, a lot of it lands on the user.
Emil describes Open eLMS as a mature system with hundreds of features and a large amount of configuration depth. His point is that once a platform reaches that level of capability, the next interface cannot just be another admin panel. It has to become something simpler, more conversational, more adaptive. That is where he sees AI as a true front-end layer: not a gimmick, but a way to make a powerful system usable.
This should resonate far beyond learning software.
In many categories, the next UX leap will not come from prettier dashboards. It will come from better mediation between system complexity and human intent.
Not “here are 680 features, go configure them.” But: “tell me what kind of business you run, what you need, and what outcome you want.”
That shift is profound.
And it reveals something uncomfortable for product teams: AI does not reduce the need for design maturity. It increases it.
Because if the underlying model, workflow, architecture, and mental model are messy, AI will not hide that for long. It will amplify confusion just as efficiently as it amplifies clarity.
The real opportunity is not more content. It is more relevance.
Another strong thread from the conversation was personalization.
Generic training content has been tolerated for too long, partly because producing anything more specific was expensive and slow. AI changes that equation. If creating variations becomes cheap enough, then there is less excuse for broad, one-size-fits-all learning that barely reflects the role, context, or real questions of the learner.
That idea matters well beyond EdTech.
A lot of software categories are still built on industrial-era content logic: create once, distribute widely, hope it fits most use cases well enough. But modern users increasingly expect systems to meet them closer to their actual context.
In learning, that could mean different pathways for different roles. In enterprise software, it could mean role-aware workflows, adaptive onboarding, or explanation layers that appear exactly when needed. In AI products, it could mean systems that do not just generate more, but generate the right thing for the right moment.
The companies that understand this will not just ship faster. They will feel smarter.
The deeper lesson for software teams
The most useful takeaway from this conversation is not “AI in education is exciting.” That is obvious.
The more interesting takeaway is this:
When AI enters a category, it exposes whether the category has real product thinking behind it.
If the answer is yes, AI can become a multiplier. If the answer is no, AI just makes shallow products faster.
That is why this conversation is not only about EdTech. It is about software design in general.
The winners in the next wave will not be the teams that can produce the most AI-generated assets. They will be the teams that understand:
- how users actually think
- where friction is legitimate and where it is just inherited
- which workflows deserve automation and which deserve craft
- how to turn system complexity into felt simplicity
- and how to design products that are not only functional, but genuinely usable
For years, eLearning was allowed to hide weak product decisions behind process, legacy habits, and slow production cycles.
AI removed that cover.
Now the question is much harder, and much more interesting:
If software can generate almost anything, what exactly are you designing that is still worth using?