Lessons from evolving digital product strategy in an AI-first world
By Natalia Loza
-------------------------------------------------------------------------------------------
The past two years have changed how digital products are built as development tools powered by
large models now handle much of the heavy lifting in standard implementation work. Features that
previously took weeks are often assembled in a few days. Authentication, payment flows,
dashboards and workflow logic can be scaffolded quickly with tools like Cursor, v0, or Replit
Agent.
This shift has made it clear that the hardest part of building a product is less about getting the
features working and more about deciding what is worth building, in which order, and what you
expect to learn from each step. When the tools remove a large part of the engineering constraint, the
bottleneck moves into strategy, learning and long term coherence.
The reflections below draw on work with both AI heavy products and more traditional digital tools.
The details differ across sectors, yet the underlying lessons travel well. They are useful prompts for
anyone responsible for deciding what gets built next.
When building gets easier, strategy gets harder
Implementation speed for common features has multiplied, and you can see this by taking a recent
roadmap item and comparing how long it would have taken a few years ago with the time it would
take now. Database work, third party integrations, and standard flows all compress once you adopt
modern tooling.
This creates a shift in where teams get stuck. Shipping many things quickly is now possible but
understanding whether those things matter is where they slow down. When most ideas can be
implemented, the limiting factor changes from whether the team can build them to whether the team
has a clear reason to do so and a way to judge the result.
As speed increases, key decisions move closer to the start of the process. In earlier cycles of
software, some gaps in thinking only appeared as developers wrestled with implementation. Now
those gaps show up as soon as someone suggests a feature, because implementation no longer slows
the process down. That makes the quality of early thinking much more visible and much harder to
ignore.
Teams need to add structure at the beginning of each initiative. Before a line of code is written, ask
a small set of direct questions. Which behaviour are we trying to influence? How will we see
whether the change helped? Under what circumstances would we be willing to remove this feature in
a few months? These questions are not new in themselves, but higher build speed now makes it
immediately obvious when they have been skipped.
A related pattern is starting with conditions rather than screens. Instead of jumping straight into
page layouts or API specifications, we need to first define the conditions that need to hold for the
product to be useful. For instance, how the system should react when inputs are incomplete or
surprising, which signals it depends on, what level of consistency users need and which outcomes
are essential even as usage patterns change. Once those conditions are clear, interface design and
implementation flow with fewer reversals.
This way of working appears in categories that look quite ordinary from the outside, like workflow
tools, marketplaces, finance products or analytics platforms. It is mainly a response to increased
speed. When almost anything can be built, clarity on why something should be built matters more
than ever.
There is also a shift in risk for early stage teams. In many areas, technical execution is no longer the
main unknown; the critical risk is reaching product-market fit before others run through a large
number of experiments with the same users. That reality rewards product managers and tech
founders who treat learning as a separate system, not as a side effect of shipping.
Teams that do well tend to have repeatable habits around insight as well as code. They keep
standing time in the calendar for structured conversations with users; they review support tickets
and sales calls regularly, looking for patterns instead of isolated stories; and they spend time with
engaged customers exploring the wider workflow rather than only the current surface of the
product. These habits keep the learning side of the business scaled appropriately as implementation
speeds up.
If engineering capacity in your business has grown while your learning practices look much the
same as they did several years ago, you may now be building faster than you understand. That leads
directly into the next question, which is where defensibility comes from when everyone can ship
quickly.
How data becomes defensible
When basic implementation becomes cheap, advantage tends to move toward things that cannot be
recreated quickly. For digital products, this often lies in the patterns that emerge from real use rather
than in the code itself.
A simple starting point is to look carefully at what your product collects. Some tools store
straightforward activity logs such as actions, timestamps, and status changes. Time tracking or
simple document tools often fall into this category. These can be useful products, yet a competitor
can copy the features and gather similar data from their own users without facing much of a barrier.
Other tools create value from relationships and patterns across many participants. A marketplace
sees how buyers and sellers respond to each other; a collaboration product sees how roles interact
around shared work. In these cases, the structure of the data matters as much as volume. A new
entrant needs time and scale before similar patterns appear, which creates a more meaningful lead.
The strongest position often comes from repeating improvement loops. Usage makes the product
better, which attracts more usage, which then improves the product further. Each cycle strengthens
the system in ways that are difficult for others to match quickly.
Grammarly is a good example. Its underlying language models can now be approximated by other
providers; what is harder to copy is the years of feedback about which suggestions people accept
and which they ignore, in which context, and in which type of document. A phrase that works in
technical writing but feels wrong in marketing copy produces a clear judgement signal. That kind of
pattern only appears through sustained use and careful capture.
Similar loops appear in less obviously AI driven tools. For instance, a planning system that
improves its understanding of how a specific team estimates work, a budgeting product that
recognises where certain types of user regularly misjudge cost or timing, or a design system that
learns which component combinations repeatedly lead to successful outcomes.
A useful exercise is to sketch a simple map of entities and relationships in your product. Include the
types of users, artefacts, events and outcomes, and the connections between them. Then ask yourself
how long it would take to rebuild the most valuable part of that map if you had to start from zero
tomorrow. If the answer is that value appears as soon as the first user arrives, you are probably
dealing with basic log data; if it requires many people interacting over time, you are closer to
compounding value.
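The exercise above can be sketched in code. This is a minimal illustration, not a prescribed method: the entity and relationship names are entirely hypothetical, and the heuristic simply flags relationships that both span multiple users and accumulate over time, since those are the parts of the map a new entrant cannot rebuild from day one.

```python
# Hypothetical entity/relationship map for a workflow product.
# Each relationship records whether it spans multiple users and
# whether it accumulates over time -- a rough proxy for how hard
# it would be to rebuild the data from zero.
relationships = [
    {"from": "user", "to": "task", "name": "created", "multi_user": False, "accumulates": False},
    {"from": "user", "to": "task", "name": "reassigned", "multi_user": True, "accumulates": True},
    {"from": "task", "to": "outcome", "name": "resulted_in", "multi_user": False, "accumulates": True},
    {"from": "user", "to": "user", "name": "reviewed_work_of", "multi_user": True, "accumulates": True},
]

def compounding(rels):
    """Relationships that need many people interacting over time."""
    return [r["name"] for r in rels if r["multi_user"] and r["accumulates"]]

print(compounding(relationships))  # → ['reassigned', 'reviewed_work_of']
```

In this sketch, "created" appears as soon as the first user arrives, so it carries little defensive value; the review and reassignment patterns only emerge from many people working together over time.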
Many teams discover that their data model was designed mainly for short term functionality. Once
they see that, the conversation shifts from which features to add toward which recurring decisions
they could help users make better if they redesigned how information is captured and linked.
One practical choice is to specialise early. Narrowing the initial domain feels counterintuitive, yet it
often creates deeper value. A knowledge tool that focuses on one profession learns faster about that
profession’s language, habits, and pain points than a general note taking app. Breadth can often be
added later but depth requires focused time.
This line of thought applies even without custom models. Any product that could improve as it sees
more use should treat its data as a strategic asset. Once you recognise that, it becomes easier to see
which problems will remain interesting even as tools evolve, which leads directly to the class of
problems that general purpose tools are unlikely to erase.
Problems that tools won't make disappear
As individual productivity rises, some problem types stand out more clearly as they tend to involve
groups, institutions, and context rather than individual tasks. In many cases, better tools for
individuals make these problems more visible because people generate more work that has to be
aligned.
One such area is alignment across roles. Tools have made it simple for an individual engineer,
designer or marketer to move faster. Getting several departments to contribute information or
decisions at the right moment, and keeping that work coordinated, remains difficult.
You see similar dynamics in construction, where architects, contractors and clients all need to keep
a shared picture of the project current, and in supply chains where partners must share just enough
information with one another without giving away leverage. In these spaces, products succeed when
they make coordination possible, visible and accountable, rather than when they automate one
isolated task.
Accountability is another recurring theme. Products that do well treat audit trails, review steps and
escalation paths as primary design concerns. That work can feel unglamorous when compared with
new features, yet it often determines whether the product can be used in serious settings.
Operational knowledge transfer is a further example. Many valuable insights live in unwritten
habits and stories. A production operator knows how a machine behaves on damp days. An
experienced account manager has a feel for which hesitant customer will eventually convert and
which one is just polite. Tools that only reflect official process ignore this layer and usually
underperform, whereas tools that succeed build structured ways to capture and share those small,
practical insights alongside formal records.
Motivation completes the picture: systems frequently fail because people have little reason to feed
them with good information. Products that bake contribution into existing routines, provide clear
feedback on its impact or make visible who is carrying the load tend to see much higher
engagement than those that rely on static forms.
From a product discovery perspective, this suggests a simple filter. When looking at a problem, ask
whether the main value lies in saving time on individual work, or in solving coordination,
responsibility, tacit knowledge, and motivation between people. Both can be valid targets and the
latter set of problems tends to be harder to copy and more durable once you have a foothold, which
is important when planning in a landscape where the technical base is constantly moving.
Planning with moving foundations
The services and platforms that many products rely on now shift more frequently. Capabilities
appear, pricing moves, reliability changes and regulations evolve. This is true for AI services, but
also for payments, messaging, hosting, analytics and many other layers.
Roadmaps built on the quiet assumption that the technical base will stay roughly stable for a year
feel increasingly strained. Teams still need direction, yet pretending that a long list of features will
hold unchanged over twelve months no longer matches reality.
One practical response is to treat different time horizons differently. In the near term, usually up to
six weeks, you focus on commitments. This horizon contains work you can carry out with high
confidence, using current tools and knowledge. Everyone can align around it without many caveats.
In the middle horizon, covering the next few months, you maintain clear options rather than fixed
promises. You identify work you might do and attach it to explicit triggers. For example, you might
decide to invest in a custom component only if a third party solution fails to reach a certain quality,
or to expand a feature only if usage passes a specific threshold. This connection between work and
triggers makes your planning more robust against change.
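One way to keep these middle-horizon options honest is to write them down as data rather than prose. The sketch below is a hypothetical illustration of the pattern, not a tool recommendation; the option names, trigger thresholds, and metric names are all invented for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Option:
    """A middle-horizon work item gated by an explicit trigger."""
    name: str
    trigger: Callable[[Dict[str, float]], bool]
    rationale: str

# Hypothetical options and metrics; names are illustrative only.
options = [
    Option("build_custom_editor",
           lambda m: m["vendor_quality_score"] < 0.8,
           "invest only if the third-party editor stays below target quality"),
    Option("expand_reporting",
           lambda m: m["weekly_report_views"] > 500,
           "expand only once usage passes the threshold"),
]

current_metrics = {"vendor_quality_score": 0.72, "weekly_report_views": 310}

# An option activates only when its trigger fires against live metrics.
activated = [o.name for o in options if o.trigger(current_metrics)]
print(activated)  # → ['build_custom_editor']
```

Reviewing a list like this each cycle makes the plan self-adjusting: nothing in the middle horizon is promised, yet everyone can see exactly what would cause each piece of work to start.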
In the long horizon, stretching out a year or more, you write down beliefs instead of tasks. You
might expect that some capabilities will become cheaper or more reliable. You might expect a
regulation to clarify, or a platform to mature. These expectations influence foundational choices like
data models and internal interfaces, even while you deliver current features with present day tools.
This separation reduces friction when you need to adjust. People know what is fixed now, what
could change depending on conditions, and what is simply a view of the direction of travel.
Fast development also hides stacked dependencies. It becomes easy to add several features that all
rely on the same fragile assumption or external service; everything looks fine until that underlying
element shifts.
To counter this, some PMs now include light dependency mapping before a planning cycle. They
sketch which initiatives depend on which datasets, internal modules, third party providers or user
behaviours. Seeing these links on a single page often makes it obvious that several items depend on
the same weak point and should be sequenced rather than attempted in parallel.
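That one-page sketch can be as simple as a mapping from initiative to dependency. The example below is a hypothetical map with invented initiative and service names; the point is only that counting how many initiatives touch each dependency surfaces shared weak points mechanically.

```python
from collections import defaultdict

# Hypothetical initiative -> dependency map; a real one would be
# sketched from your own roadmap before a planning cycle.
initiatives = {
    "smart_search":   ["search_provider_x", "events_dataset"],
    "auto_tagging":   ["events_dataset", "ml_api_y"],
    "weekly_digest":  ["events_dataset", "email_provider"],
    "billing_revamp": ["payments_provider"],
}

def shared_weak_points(deps, threshold=2):
    """Dependencies that several initiatives rely on at once."""
    counts = defaultdict(list)
    for initiative, ds in deps.items():
        for d in ds:
            counts[d].append(initiative)
    return {d: users for d, users in counts.items() if len(users) >= threshold}

print(shared_weak_points(initiatives))
# 'events_dataset' is shared by three initiatives -> sequence them,
# don't attempt all three in parallel.
```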
Regular reviews then become part of the operating rhythm rather than emergency responses. Teams
set aside time to revisit their assumptions about key services, costs and user behaviour. They look
for places where new capabilities make old workarounds unnecessary, or where changes elsewhere
in the market have altered what is worth building.
Accepting that some features have short, deliberate lifespans also helps. A manual or semi-manual
feature that runs for six months while you learn how users actually work can be more valuable than
a complex build that never ships. The key is to design it with that temporary role in mind, so that it
generates insight instead of turning into accidental legacy. This focus on adaptability at the planning
level ties naturally into the technical foundations that support it.
Adaptable technical foundations
Most meaningful products now depend on an ecosystem of external APIs, cloud services and
integration partners. These services change in capability and economics, new ones appear and
others stagnate or consolidate. Designing as if you will rely on a single provider for the full life of
your product is increasingly risky.
Teams that want to keep their options open usually introduce a small layer between their core logic
and each external service. Rather than wiring application code directly to a particular API or storage
system, they define a clear internal interface for each job the system needs done, and then
implement adapters that connect that interface to individual providers.
This approach requires some extra design and maintenance, yet it pays off when a provider changes
direction. Once you have that internal contract, you can move a portion of workload to a new
provider, reroute specific tasks to cheaper or faster options, or replace a weak service altogether,
without changing user facing behaviour.
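As a minimal sketch of this pattern, assuming a hypothetical "transcription" job (the names `Transcriber`, `VendorATranscriber` and so on are invented for illustration), the internal contract is an abstract interface and each provider gets a thin adapter:

```python
from abc import ABC, abstractmethod

class Transcriber(ABC):
    """Internal contract for one job the system needs done.
    Application code depends on this, never on a vendor SDK."""
    @abstractmethod
    def transcribe(self, audio_id: str) -> str: ...

class VendorATranscriber(Transcriber):
    # Adapter: translates the internal contract into vendor A's API.
    # A real adapter would call the vendor SDK here.
    def transcribe(self, audio_id: str) -> str:
        return f"[vendor-a transcript of {audio_id}]"

class VendorBTranscriber(Transcriber):
    def transcribe(self, audio_id: str) -> str:
        return f"[vendor-b transcript of {audio_id}]"

def summarise_call(audio_id: str, transcriber: Transcriber) -> str:
    # Core logic sees only the interface; swapping providers is a
    # one-line change where the app is wired together.
    return transcriber.transcribe(audio_id).upper()

print(summarise_call("call-42", VendorATranscriber()))
```

Replacing `VendorATranscriber()` with `VendorBTranscriber()` at the composition root changes the provider without touching `summarise_call` or anything downstream of it.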
Some teams take this further and classify work before it leaves their system. Simple, repetitive tasks
go to basic services. Complex, sensitive or time critical tasks go to providers that excel along those
dimensions. The same pattern applies to storage, messaging, search, or analytics. This allows you to
tune cost and quality more precisely than if every request is treated the same.
Fallbacks are as important as routing. A sensible pattern is to have a primary service, a secondary
route, and a clear threshold for escalating to manual handling. That way, if a provider degrades or
fails, the product continues to operate and you gain time to decide how to respond.
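The classify-then-route idea and the fallback chain fit together naturally. The sketch below is a simplified illustration under invented assumptions: the classification rule, the provider functions and the failure signal (`RuntimeError`) are all placeholders for whatever your real services and error handling look like.

```python
def classify(task: dict) -> str:
    """Hypothetical classifier: sensitive or complex work goes to a
    premium tier, everything else to basic services."""
    if task.get("sensitive") or task.get("complexity", 0) > 7:
        return "premium"
    return "basic"

def handle(task: dict, providers: dict) -> str:
    """Try each route in order; escalate to manual handling last."""
    tier = classify(task)
    for provider in providers[tier]:
        try:
            return provider(task)
        except RuntimeError:
            continue  # provider degraded; try the next route
    return f"escalated to manual handling: {task['id']}"

def flaky(task):  # simulated degraded primary provider
    raise RuntimeError("provider timeout")

def stable(task):
    return f"handled {task['id']} automatically"

providers = {"basic": [flaky, stable], "premium": [flaky]}

print(handle({"id": "t1", "complexity": 2}, providers))   # secondary route succeeds
print(handle({"id": "t2", "sensitive": True}, providers)) # all routes failed
```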
To keep this real rather than theoretical, it helps to send a small amount of live traffic through
backup paths and to watch how alternative providers perform on your own workloads. This gives
you a grounded view of what switching would involve, rather than relying on marketing material.
Flexibility has a strategic dimension as well as a technical one. Products that are tightly bound to
one provider have fewer options in negotiation and less room to adapt when that provider’s
priorities change. Products that have invested in internal clarity and external flexibility can respond
more easily to shifts in the wider landscape, which is vital in an environment where behaviour is
just as important as flow.
Understanding product behaviour
Most teams can describe their product in terms of flows: the sequences of steps a user
follows through the interface. In modern products, it is just as important to understand behaviour,
meaning how the system responds to inputs over time and how those responses change.
Search, routing, automation, and pricing are good illustrations. A search flow can remain exactly the
same while the results become less relevant. A routing flow can still function while sending more
tasks to manual handling than it did previously. In each case, the interface changes little while the
usefulness of the outcome shifts significantly.
If you only check whether flows work, you can miss slow declines in behaviour. A feature appears
to be healthy from a design perspective while it slowly loses effectiveness in real use.
One remedy is to treat behaviour as something you observe continuously. You can track a few key
patterns over time, such as how often people refine a search, how many automated tasks require
correction, or how frequently a notification leads to the intended action. These metrics reveal
whether a feature is holding its value, improving or drifting.
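One such metric, the share of searches that get immediately refined, is easy to compute from an event stream. This is a hypothetical sketch: the event shape and field names are invented, and a real pipeline would read from your own analytics store.

```python
def refinement_rate(events):
    """Share of searches that were immediately refined -- a simple
    behaviour metric that can drift while the flow still 'works'."""
    searches = [e for e in events if e["type"] == "search"]
    refined = [e for e in searches if e.get("refined")]
    return len(refined) / len(searches) if searches else 0.0

# Hypothetical weekly snapshots; a rising rate suggests results are
# getting less relevant even though nothing is visibly broken.
week_a = [{"type": "search", "refined": False}] * 8 + [{"type": "search", "refined": True}] * 2
week_b = [{"type": "search", "refined": False}] * 5 + [{"type": "search", "refined": True}] * 5

print(refinement_rate(week_a))  # → 0.2
print(refinement_rate(week_b))  # → 0.5
```

Plotting this rate week over week turns a vague sense that "search feels worse" into a trend you can act on.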
Modern tools make it easier to spot such patterns. Session recordings highlight where people
hesitate, backtrack or abandon. Event streams reveal unexpected paths; clusters of support tickets
show where confusion or friction is concentrated. These signals help you decide where to
investigate further.
At the same time, behavioural data does not explain everything. It rarely captures the organisational
context that shapes how people work. A process that looks inefficient in your logs may be the only
viable path in an environment with rigid access controls, entrenched habits or internal politics.
Abrupt shifts in usage might reflect changes in a customer’s internal policy rather than anything in
your product.
The teams that make best use of behavioural insight tend to combine wide data with focused
conversations. Automated analysis runs in the background and points to where something
interesting is happening; human research explores why it is happening and what it means. Together,
this combination turns behaviour into a live input for product strategy rather than a static report.
A natural extension of this is to pay closer attention to how your product interprets user intent in the
first place, since misinterpretation is often what drives confusing behaviour.
Helping products interpret user intent
As interfaces become more flexible, users lean more on free text, command bars, mixed input and
improvised workflows. That freedom is useful, yet it increases the chance that the system
misunderstands what a user is trying to achieve.
You can think of each interaction as passing through a small chain. A person expresses something
they want or need; the system interprets that expression as a specific intention, chooses an action
and produces an outcome. Next time, it interprets similar expressions slightly differently based on
what seemed to work before.
Improving the interpretation step in this chain can have an outsized effect on how the product feels,
even if the screens never change. Many products can do this without advanced AI, simply by
making the rules of interpretation explicit and improving them over time.
In practice, this might mean handling ambiguous inputs by presenting a couple of likely readings
rather than silently guessing. It might involve recognising that different words or phrases often
point to the same underlying concept and treating them consistently. It might involve designing
clarifying questions that appear only when needed and are easy to answer, rather than blocking
progress when a field is incomplete.
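These three moves, normalising synonyms, acting only on unambiguous readings, and surfacing alternatives instead of silently guessing, can be sketched without any AI at all. The vocabulary below is hypothetical and deliberately tiny; a real interpretation layer would grow its synonym table and intent set from observed usage.

```python
# Hypothetical interpretation layer: normalise synonyms, then either
# act on an unambiguous reading or surface the likely alternatives.
SYNONYMS = {"remove": "delete", "erase": "delete", "dup": "duplicate", "copy": "duplicate"}
KNOWN_INTENTS = {"delete", "duplicate", "archive"}

def interpret(command: str):
    # Map different words for the same concept onto one canonical form.
    words = [SYNONYMS.get(w, w) for w in command.lower().split()]
    matches = sorted({w for w in words if w in KNOWN_INTENTS})
    if len(matches) == 1:
        return {"action": matches[0]}
    # Ambiguous or unknown: ask a clarifying question with the likely
    # readings, rather than blocking progress or guessing silently.
    return {"clarify": matches or sorted(KNOWN_INTENTS)}

print(interpret("erase this draft"))     # → {'action': 'delete'}
print(interpret("copy then remove it"))  # → {'clarify': ['delete', 'duplicate']}
```

Centralising the rules in one place like this, instead of scattering edge-case handling across features, is what makes the interpretation layer improvable over time.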
Search, configuration, and natural language features all benefit from this attention. In early stages, a
team might handle edge cases one by one. As usage grows, that approach becomes brittle, and
treating interpretation as a first-class design concern gives you a place to centralise that work.
Products that take interpretation seriously tend to scale more gracefully. They can absorb more
variety in how people ask for things without becoming harder to use. That in turn supports a clearer
product identity, which links into another important theme, the role of principles in positioning your
product.
Positioning through principles
As capabilities converge and features are easier to copy, what often remains distinctive is the set of
principles that quietly shape your product. Two tools can support similar tasks and look comparable
on a checklist, yet feel very different in daily use. That difference usually comes from consistent
choices grounded in a particular view of how work should happen.
Linear is a familiar example for product development teams. It is not the only tool that offers issues
and roadmaps, but its distinctiveness comes from a sustained preference for speed through reduction.
That preference shows up in deep keyboard support, restrained formatting and firm limits on
configuration that might slow teams down. These decisions may frustrate some users, yet they make
the product feel sharply tuned for others.
Notion grew from a different view, one that emphasises giving people blocks they can combine into
their own structures. That view led to the block model, linked databases and the decision to
prioritise flexibility even at the cost of a steeper learning curve. Again, this is about a coherent
pattern as opposed to any single feature.
For your own product, an honest way to surface principles is to look at the trade-offs you
consistently make: features you decline to build even though customers request them, options you
keep out of the interface, user groups you quietly decide not to optimise for. If these decisions line
up around a few clear ideas, you are probably operating from a real set of principles; if they do not,
the product may be drifting.
Writing these principles down in plain, specific language helps. They might include preferences
such as reducing configuration where possible, assuming the user is skilled at their job, or designing
workflows that favour depth in a few tasks over surface coverage of many. The value of these
principles lies in using them repeatedly when evaluating new ideas. Over time, this creates a
product that feels coherent in ways that competitors find hard to imitate.
Strong principles also make it easier to say no. In a world where building more features is relatively
easy, focus and consistency are often the scarcer resources. Clear principles give you a credible
explanation, internally and externally, for resisting features that might win you some users while
eroding the identity that keeps others loyal.
The foundation that doesn’t shift
Tools, platforms and methods will continue to move quickly. Some practices that feel cutting edge
in 2025 will look routine in a few years and that is a normal feature of working in technology.
Underneath that movement, the fundamentals of what users need change much more slowly. People
still want to make progress with less effort; they still want tools that respect their constraints and
their judgement; and they still care that the products they rely on will remain available and will
improve in ways that make sense to them.
Teams that manage to adapt without losing their bearings usually keep these two layers separate in
their thinking. On a regular cadence, often each quarter, they ask themselves what has changed in
the technical and market environment they depend on, and then what has stayed the same in the
problems their users face.
The first question keeps them from clinging to outdated approaches when better ones become
available or when risk profiles change. The second prevents them from chasing every new
possibility at the expense of their core purpose.
For PMs and tech founders, it helps to attach identity to a problem space rather than a specific
solution. A company that sees itself as helping small businesses run more predictable finances is
better able to endure changes in tooling than one that defines itself as a provider of a particular
reporting feature. This kind of framing makes it easier to replace parts of the stack without feeling
that the company has lost its direction.
Writing down a short list of truths that you expect to remain valid over many years can act as a
useful reference. These might be constraints your users will still face, tensions they will still need to
manage, or aspects of their world that are unlikely to simplify. When major product or platform
decisions are on the table, revisiting that list helps keep adaptation aligned with purpose.
In the end, product strategy in an AI shaped world still revolves around familiar skills. Careful
thinking about the problem, deliberate sequencing of work and alignment between what you build
and what people genuinely need remain central. The context has changed, the execution has
accelerated, and the environment moves in shorter cycles, which means weak thinking is exposed
earlier and copied faster.
Product leaders and digital tech founders who build strong habits around learning, interpretation, thoughtful use of data and principled decision making will be better prepared to create products that stay useful as the underlying tools and platforms continue to shift.
About the Author
Natalia Loza is a product and commercial expert with a strong track record of scaling digital technologies from concept to revenue and exit. She is also the founder of Connected.Ventures, the UK’s largest network of innovation ecosystem builders, who believe in tech as a force for good. Alongside her work, she speaks, judges, and advises at organisations like OneTech (including our JP Morgan programmes), Foundervine, Startup Bootcamp, and Techstars. As a UK Tech Innovation Delegate, she also represents the sector at international events, including major tech innovation conferences in the United States and Italy.