What's Working in AI

Insights from the full-day event by Octoco AI and Capitec


At What’s Working in AI, hosted by Octoco AI and Capitec, one theme surfaced across every talk, regardless of domain or perspective: AI is not limited by what it can produce. Organisations are limited by what they can process.

Throughout the day, engineers, product leaders, researchers, and data scientists shared how AI is already reshaping the way they work. Faster prototyping, rapid code generation, and near-instant insights are no longer theoretical - they’re operational.

Yet meaningful productivity gains remain inconsistent.

The gap isn’t capability. It’s capacity.

More Output, Same Bottleneck

This tension was captured clearly in insights shared by Capitec’s data leadership. As Azhar Said (Capitec) explained, tasks that previously took days can now be completed in seconds. But the number of questions being asked is growing just as quickly.

The bottleneck hasn’t disappeared. It has shifted.

AI has removed friction from production, but it has exposed a deeper constraint: the ability to evaluate what’s being produced. More output does not guarantee better outcomes. Without strong judgment, it simply creates more to sift through.

Azhar emphasised a principle that stood out across the day: keep an expert in the loop. Not just a human checkpoint, but someone accountable for the result - someone who takes ownership of the output as if AI wasn’t involved.

The System Can’t Keep Up

This idea was explored further by Andrew La Grange (Full Stack), who framed information as something that behaves like a system with limits.

Every organisation has a “carrying capacity” - a threshold for how much information it can meaningfully process. AI dramatically increases the volume of available input: code, analysis, designs, and content. But unless the system’s ability to interpret and act on that input evolves alongside it, the system becomes overloaded.

He introduced the idea of a threshold where the signal turns into noise. Below it, AI is useful and accelerates progress. Above it, the same output becomes harder to evaluate, reducing overall effectiveness.

This explains why many teams feel busier, but not necessarily more productive.

Rethinking How We Build

From a product and engineering perspective, this shift is already reshaping workflows.

Matthew Wridgway (Capitec) shared how AI is compressing traditional timelines. Research, design extraction, and development can now happen within a single day, resulting in functional prototypes rather than static mockups.

This changes more than speed. It challenges the structure of collaboration itself.

Traditional design-to-development handovers were built for humans - documents, mockups, and specifications. Increasingly, these need to evolve into formats that AI systems can interpret: structured data, constraints, and context.

The result is a tighter, more iterative loop between design, engineering, and real-world testing.

The advantage is not just faster delivery. It’s the ability to run more experiments and learn faster.

Humans and AI Play Different Roles

Several speakers approached the human-AI relationship from different angles, arriving at a similar conclusion.

G-J van Rooyen (Octoco) framed it simply: AI handles task execution, while humans are responsible for thinking. The value of engineering doesn’t lie in moving faster through tasks, but in the reasoning, trade-offs, and decisions behind them.

He compared it to training: using AI to remove all effort from a process misses the point. The development happens through the thinking, not just the output.

From an engineering lens, Herman Lintvelt (Octoco AI) offered a complementary perspective. He described modern AI tools as powerful accelerators - capable of dramatically increasing output, but equally capable of amplifying mistakes.

His focus: don’t optimise for speed alone. Optimise for learning. Build in a way that maximises feedback and understanding, not just delivery.

The Acceleration Curve

Zooming out further, Bruce Bassett (WITS) highlighted the rate at which AI itself is evolving.

AI capability is not improving linearly. It is accelerating, with performance increasing on a compounding curve. This means the gap between what AI can generate and what humans can effectively evaluate is not static - it’s widening.

The implication is clear: adapting once is not enough. Organisations need to continuously evolve how they work, how they validate, and how they make decisions.

What’s Actually Working

Across all these perspectives, a consistent pattern emerged.

The organisations seeing real impact from AI are not simply adopting tools. They are restructuring their systems to handle what those tools produce.

They:

  • Maintain strong expert oversight
  • Build processes for rapid evaluation and decision-making
  • Focus on learning cycles, not just output
  • Align their workflows with increased information flow

They expand their capacity alongside their capability.

Where This Leaves Us

AI is already embedded in how modern engineering teams operate. The question is no longer whether to use it, but how to use it well.

The takeaway from What’s Working in AI is that success lies in how organisations respond to increased output. Without the ability to absorb and evaluate it, more capability leads to diminishing returns.

With the right systems in place, it becomes a multiplier.

At Octoco, this is how we approach AI in engineering: not as a shortcut, but as a force that requires better thinking, stronger systems, and clearer ownership.

Because AI doesn’t replace the need for judgment.

It raises the standard for it.