
The Architecture Behind Every Screen

Why traditional app design breaks down in chronic disease management, and how LLMs finally make human-centric design implementable.

By Christina Pavlopoulou and Peggy Szymanski


App use is deeply entrenched in our lives. We do our banking online, order dinner and groceries through an app, buy clothes, books, and gadgets without leaving the couch. Most apps are designed with a final goal in mind; in the examples above, that goal is the purchase of goods and services. They assume a deterministic set of steps that leads there: browse, select items, put them in the cart, enter payment information, hit buy.

When the assumed path is not followed, friction ensues. I might put sneakers in my basket not because I plan to buy them right now, but because I want to show them to my partner. Or I'm comparing prices. Or I just want to remember something interesting I found. Lo and behold, I start receiving a barrage of emails informing me that my shopping cart is feeling lonely. Cute as that is, it ends up being ridiculously annoying.

The map drawn before you arrive

This mismatch between the person's preferred path and the app's assumed path stems from a fundamental limitation: traditional software cannot easily represent all the paths a person might take, and expanding those paths is slow and expensive. Every app is built on a static architecture of actions the user needs to perform before reaching a prespecified goal. Think of it as a map drawn in advance: if what we need is not already on the map, it simply does not exist. Every new scenario means new screens, new paths, and new logic. The more complex the domain, the more tangled the map becomes.

You might think: so what? Banking apps, shopping apps, and food delivery apps still make life much easier. A little friction is worth the price of convenience. And that is often true in domains where the path is predictable enough. But it becomes a serious problem in domains where the path toward the goal is hard to define, ambiguous, and highly variable across people. This is where healthcare and wellness apps struggle, especially those dealing with chronic disease management, weight loss, and quality of life. Such apps are notoriously hard to build and suffer from low retention: we believe that the inability to model the path accurately is the source of these problems.

The destination is clear, the path is not

Consider chronic disease management, such as diabetes. The end goal is not in dispute: maintain healthy levels of blood sugar, cholesterol, and blood pressure. What matters, and what is hard, is the path toward it. Getting there involves lifestyle changes and often medication, none of which happen overnight. The person needs to gradually introduce new foods, build in exercise, improve sleep, and make sure their medication doesn't interact with whatever else they may be taking for other conditions. This amounts to countless daily decisions. It is emotionally taxing and cognitively heavy.

Existing apps are built with this reality in mind and borrow from behavioral science to address it. Meal and activity tracking aim to increase awareness, on the assumption that awareness leads to better choices. Habit tracking breaks large goals into small, repeatable actions. And this is exactly where the rigidity of the current UI/UX paradigm becomes a problem.

The first problem is that these apps cannot adapt to the complexity and unpredictability of a person's life. What helps someone on Monday may be exactly wrong for that same person on Thursday. You might spend one week tracking meals consistently, feeling motivated, ready to raise the bar. The next week, your mother is in the hospital, you are eating from vending machines, and the last thing you need is an app telling you to aim higher.

The second problem is that continuous input is taxing, boring, and error prone. Take a reasonable habit: fill half your plate with non-starchy vegetables at dinner three times a week. In a traditional app, this becomes a setup flow followed by a tracking loop — choose the habit, log completion, review the score, repeat next week.
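To make the rigidity concrete, here is a minimal sketch of the kind of schema such a tracking loop is built on. The names are illustrative, not any particular app's data model.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Habit:
    """A habit as a traditional tracker models it: a fixed rule plus a boolean log."""
    name: str                        # "half-plate non-starchy vegetables at dinner"
    target_per_week: int             # e.g. 3
    completions: list[date] = field(default_factory=list)

    def log(self, day: date, done: bool) -> None:
        # The only answer the schema can hold: done or not done.
        if done:
            self.completions.append(day)

    def weekly_score(self, week_start: date) -> str:
        done = sum(1 for d in self.completions if 0 <= (d - week_start).days < 7)
        return f"{done}/{self.target_per_week}"
```

Every question the next paragraph raises has nowhere to live in this structure: the schema admits a date and a boolean, and nothing else.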

From the person's perspective, though, things are far less clear. If the plate was only almost half vegetables, does it count? Which vegetables are starchy and which are not? If the habit was technically completed but the week felt exhausting and unsustainable, was it still the right goal? And if it was not, what should happen next — scale back, try again, or abandon the plan altogether?

These are not edge cases. They are the reality of living with a chronic condition. The path from diagnosis to effective self-management looks different for a single parent working two jobs than it does for a retiree with time and resources. We know this. We design as if we don't.

What we call good design

The tech industry has a favorite phrase: user-centric design. It was a meaningful step forward from what came before, which was essentially "here's the system, figure it out." But the term has an assumption baked into it that we rarely examine: it centers the user, not the person. And a user doesn't exist without a system. The system comes first; the person becomes a user by virtue of using it. Everything we then call "good design" is really just making that pre-built system more navigable.

What we would really like is human-centric design. The person comes first, and the system adapts to them — not to their stated preferences, which is what user-centric design already claims to do, but to their intent and context in the moment, and to where they are on the longer path they are trying to walk.

The difference matters in domains like chronic disease management. Consider the person who set a goal of cooking at home four nights a week, and then, one Sunday, tells the system they barely cooked at all. A user-centric app treats this as a tracking entry: goal missed, streak broken, try again next week. A human-centric system asks what happened. The person explains they were called out of town for work. The question is no longer "how do I cook at home this week" but "how do I eat reasonably while living out of a hotel" — a different question, with different answers, and one the person never had to articulate in advance.

But responding well in the moment is not enough. A person managing a chronic condition is walking a long path, not having a single conversation. What helps today depends on what came before and shapes what comes next. A human-centric system has to hold that path — to know where the person is in a longer process of change so that each interaction builds on the last.

These ideas are not new.¹ What is new is that they are finally implementable. The reason human-centric design has remained theoretical is not philosophical resistance but technological limitation: traditional software simply could not handle the number of cases that arise in a person's life. Every case needed a screen, and no team could build enough screens.

Human-centric design doesn't start with screens. The map is drawn in the moment, for this person, at this time, given the path behind them.

Enter LLMs

Large language models change the equation in several ways. They address the limitations described above: the inability to adapt to the complexity and unpredictability of a person's life, and the burden of continuous structured input. They also open a door to systems that support people continuously, getting us closer to human-centric design.

LLMs shift the unit of interaction from checkboxes and dropdowns to natural language — a much richer medium for holding a person's experience. Traditional systems force that experience into whatever the interface offers. Did you complete your habit? Yes or no. But real life is not binary. "I almost did it" or "I didn't do it but here is what I learned" are real and meaningful answers, and a checkbox can't hold either one. An LLM can. Partial completion under difficult circumstances is not a failure — it's information. A system that recognizes the difference can help a person see where they actually are, not just where they fell short.
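As a sketch of what that shift can look like, suppose the check-in arrives as free text and a model maps it onto a record richer than a checkbox. Both `call_llm` and the output fields below are assumptions for illustration, not any particular vendor's API.

```python
import json

def interpret_checkin(habit: str, message: str, call_llm) -> dict:
    """Read a free-text check-in instead of forcing a yes/no answer.

    `call_llm` stands in for whatever chat-completion client is used;
    the output schema is illustrative, not a fixed standard.
    """
    prompt = (
        f"The person is working on this habit: {habit}\n"
        f'They said: "{message}"\n'
        "Return JSON with fields: completion (full | partial | none), "
        "circumstances (a short phrase), lesson (what they learned, or null)."
    )
    return json.loads(call_llm(prompt))

# "I almost did it" becomes information rather than a broken streak, e.g.
# {"completion": "partial",
#  "circumstances": "stressful week, one takeout night",
#  "lesson": "prepping vegetables on Sunday made weeknights easier"}
```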

LLMs can carry a person's history with them. Traditional apps have very limited memory of the user, condensed into minimal predefined structures: number of habits logged, success streaks, badges. An LLM can hold context across conversations — not just what was said, but what it means when stitched together. You traveled last week. Your mother was hospitalized the week before. Before that, you had three strong weeks. The response to "I didn't track anything this week" should be completely different depending on which of those weeks it follows.
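Continuing the sketch, one plausible way to carry that history is to stitch recent structured check-ins into the prompt, so the same sentence is read differently depending on the weeks behind it. The record shape is again an assumption.

```python
def build_context(history: list[dict], latest: str) -> str:
    """Assemble recent weeks so that 'I didn't track anything this week'
    is read in light of what came before. `history` holds structured
    check-ins like the ones above, oldest first (an assumed shape)."""
    recent = "\n".join(
        f"- week of {h['week']}: {h['completion']} ({h['circumstances']})"
        for h in history[-4:]  # the last month is usually what matters most
    )
    return (
        f"Recent weeks:\n{recent}\n\n"
        f'This week the person says: "{latest}"\n'
        "Respond as a coach who knows this history."
    )
```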

LLMs can read intent — what a person needs, not just what they say. Once a system can hold history, it can also read the present moment. The same person who needed a push last Monday may need compassion this Monday. Traditional systems can't distinguish between these moments; they deliver the same response to the same input. An LLM can detect that this is not the week to suggest a more ambitious goal. It can meet the person where they are, not where the product roadmap assumes they should be.

LLMs can generate interfaces — the capability that dissolves the architectural constraint entirely. Not select from pre-built screens — generate them. If the person needs a summary of their week, the system produces it. If they need a comparison between two meal plans, it builds it on the spot. If they need something no product team ever anticipated — because no product team could have anticipated the specific intersection of this person's history, goals, and current state — the system creates it in the moment.
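A minimal version of this idea, assuming the model proposes a declarative spec and a conventional renderer (not shown) draws it; the component vocabulary here is invented for illustration:

```python
import json

UI_SPEC_PROMPT = """You are generating a one-off screen for a health app.
Given the person's request and context, return JSON of the form
{"title": str, "components": [{"type": "summary" | "comparison" | "chart", ...}]}.
Use only component types the renderer supports."""

def generate_screen(request: str, context: str, call_llm) -> dict:
    """Draw the screen in the moment instead of selecting from a fixed set.
    A renderer with a closed component vocabulary turns the spec into UI,
    which also bounds what a misbehaving model can put on screen."""
    reply = call_llm(f"{UI_SPEC_PROMPT}\n\nContext: {context}\nRequest: {request}")
    return json.loads(reply)
```

Keeping the renderer conventional and the vocabulary closed is one way to get in-the-moment interfaces without handing the model a blank canvas.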

The static map of predetermined screens is not augmented. It is replaced by a map drawn in real time, for this person, at this moment.

The other side of the equation

LLMs introduce, of course, their own problems. Three of them matter particularly in healthcare and wellness.

Hallucination. LLMs hallucinate. They produce information that sounds authoritative and is simply wrong. They are getting better fast, but the risk rises for unusual combinations of a person's characteristics and conversational context. In healthcare, this is not a cosmetic concern. Wrong information can lead to wrong decisions about medication, diet, or when to seek care — decisions that cause real harm. And even when the consequences are less severe, a single confident error can erode the trust that the whole relationship depends on. A system that is usually right but occasionally invents a drug interaction is not a system people will keep using, and they should not.

Memory. In chronic disease management, an AI could play the role of a coach, which requires building a relationship with the person over time. After a few sessions, the system should know what level of detail this person wants, what matters to them, and which strategies have and haven't worked. None of this is reliably possible yet. LLMs can do some of it within a single conversation, but as context grows, they lose precision and drift from what matters. Memory today is typically implemented through summarization and retrieval of fragments from past conversations — useful, but not sufficient for the kind of continuity a real coaching relationship requires.
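For concreteness, that summarize-and-retrieve approach might look like the sketch below, with `embed` standing in for any text-embedding function. The limitation is visible in the return value: it surfaces similar fragments, not a coach's sense of where the person is.

```python
import math

def recall(topic: str, session_summaries: list[str], embed, top_k: int = 3) -> list[str]:
    """Typical memory today: one summary per past session, embedded, and the
    few most similar to the current topic pulled back into context."""
    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    q = embed(topic)
    ranked = sorted(session_summaries, key=lambda s: cosine(embed(s), q), reverse=True)
    return ranked[:top_k]  # fragments of the past, not continuity
```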

Sycophancy. LLMs tend to tell people what they want to hear. If you would like to be a prophet, then prophet you shall be. In health, the consequences are obvious. If a person says that tracking their blood sugar is making them anxious, a sycophantic system will validate the feeling and suggest they stop — not because stopping is the right call, but because agreement is the path of least resistance. The logic sounds caring. The outcome can be dangerous. A coach who only ever tells you what you want to hear is not a coach.

Still, a better experience

Despite their current shortcomings, LLMs can already yield experiences better than what existing apps offer. They are not yet suitable for the bespoke work of a qualified dietitian or physician — diagnosing, prescribing, reasoning through medication interactions. But they can support someone navigating a chronic condition in ways no static app has ever managed: meeting them where they are, adjusting when life gets in the way, holding the thread across weeks when nothing is going to plan. That is not the finished version of human-centric design. But it is already closer than anything that came before.

Footnotes

  1. The term "human-centered design" was developed in the late 1980s by Don Norman, a cognitive scientist who had introduced "user-centered design" a few years earlier in User Centered System Design (1986). Norman later argued that "user" was too narrow — that design should start with the human in context, not with a role defined by the system. The work built on earlier human-computer interaction research at Xerox PARC, which produced the GUI, the mouse, and WYSIWYG editing. In the 1990s, the design firm IDEO popularized human-centered design as a commercial methodology through what they called design thinking. The distinction we draw here extends this tradition in two ways: by foregrounding intent (what the person needs in the moment, not just their general preferences) and path (where the person is on a longer trajectory). Both become architecturally possible with LLMs, which is the subject of this piece.