SNI: WEEK 17
- Apr 24
- 7 min read

Welcome to all the AI news that matters this week – across tech, biopharma, medtech, advanced manufacturing and insurance. The wins, the fails and the somewhere in-betweens.
tl;dr: Everyone's delegating now
For 18 months, all the Big Tech Bros have been telling us that agents are genuine assistants, capable of autonomous work. 'Fire them up, let them go!'
For 17.75 months - especially for non-developers - that's pretty much been a lie. Albeit one of ever-decreasing proportions. But, given the events of the last week and a bit, Brightbeam is happy to call it. The 'agentic era' is here. For all.
Both OpenAI and Anthropic have converted their products from chatbots you converse with into colleagues you delegate to. They made that clear just seven days apart.
Here's how the agentic age finally came of age:
Anthropic launched Opus 4.7 on 16 April with long-running tasks and higher-resolution vision, then launched Claude Design one day later for prototypes, slides and one-pagers. The internet exploded. Sleep was lost. The industry played and admired the outputs.
Then OpenAI shipped GPT-5.5 'Spud' – its first fully retrained base model since GPT-4.5, for workflows rather than conversations. It wraps together computer use, knowledge work and research.
Few disagreed that Spud was also impressive. But which lab has the edge is abundantly clear. Anthropic's secondary-market valuation tipped over $1tn, eclipsing OpenAI's $880bn for the first time.
Suddenly, apparently a little spooked, Google DeepMind assembled a Sergey Brin-led strike team to close the agentic gap. A gap that might now be seen by some as a gaping hole. And something it seems enterprise is also eager to avoid.
Merck and Google Cloud announced a $1bn multi-year agentic deal across R&D, manufacturing, commercial and corporate functions.
ServiceNow and Google Cloud are unifying their agentic systems for enterprise.
And as if the events of the present were not enough to keep our thoughts occupied, the near-future also pushed its way in. Kemba Walden, the former US national cyber director, put on record that Anthropic's still-muzzled model Mythos 'can hack nearly anything'.
Is it ironic, then, that Mythos itself suffered unauthorised access for weeks before detection?
The FDA's first warning letter naming AI agents as a compliance violation in pharmaceutical manufacturing also reminded us of the potential dangers ahead. Dangers which are going to become ever more common. The UK MHRA's new clinical trial regulations, including in-silico trial acceptance, start on Tuesday - making the MHRA the first G7 regulator to formally admit AI-generated evidence into the approval pathway.
Apple then walked into all this as the counterpoint.
Tim Cook stepped down after 15 years, with hardware SVP John Ternus – architect of the M-chip platform – ascending. Whether Ternus can reorient Apple's culture for the agentic era is the question his board did not answer.
There are certainly those who believe he can.
But however you're going to call that, here's everything else worth reading this week:
AI & tech:
Meta is tracking employee usage across Google, LinkedIn and Wikipedia to source training data for its internal AI project: the precedent extends well beyond Meta, because any enterprise whose AI programme draws on employee-derived data now has a liability template it has to audit against.
Deutsche Bank CEO Christian Sewing dismisses the idea that Mythos poses an existential threat to the financial system: European financial institutions are reaching threat assessments that diverge from their US and UK counterparts, and the transatlantic gap is itself a governance signal.
Robinhood announces a $75m secondary investment in OpenAI: a retail finance brand takes a public position three weeks after OpenAI investors openly questioned the $852bn fundraising valuation. And despite Anthropic's rise.
Biopharma:
AI models are generating and testing hypotheses autonomously: OpenAI and Anthropic have released frontier models optimised for scientific research workflows, with direct implications for the rate at which AI-generated hypotheses can become regulatory evidence.
AI operational failures appear at the scale-up phase, not the pilot: latency spikes, cost overruns and reliability failures concentrate at the transition from proof-of-concept to production – the phase that pharmaceutical agentic deployments are entering now.
Silo Pharma acquires AI agent platform Qwikagents as its PTSD drug advances towards trials: a clinical-stage biotech acquiring AI orchestration infrastructure - rather than licensing it - signals a belief that proprietary agent capability cannot be safely outsourced in drug development.
Pharma services roundup: Among regulatory AI guidance, CRO consolidation and digital CMC developments, the quarter produced transatlantic alignment on AI evidence standards for regulatory submissions.
The AI drug discovery capital stack and research: structure-based, foundation-model-first and agentic-lab approaches carry different data-moat structures – distinctions worth knowing as the first AI-originated candidates approach later-stage trials.
Medtech:
Philips anchors global AI healthcare development in Bengaluru, not the US: the Dutch company's hub will develop AI diagnostics and clinical decision tools for global deployment – a talent and cost strategy that positions where European medical-device AI is built.
Healthcare providers and advocacy groups respond to White House national AI policy framework: provider bodies welcomed clarity on AI deployment scope; advocacy groups cited the absence of binding liability provisions for AI-driven clinical decisions as a gap the framework leaves unresolved.
Treehub AI Health Fund launches to back academic healthcare AI: the fund targets the gap between academic AI research and clinical deployment, focusing on precision medicine and diagnostics models that have academic validity but lack commercial development capital.
Advanced manufacturing:
Denham Capital and First American Nuclear partner for hyperscale AI data centres.
Broadcom deepens Meta and Google Cloud AI chip ties as custom silicon reshapes the supply chain around Nvidia: Broadcom's position as custom-silicon intermediary tightens exactly as hyperscalers look for inference-cost leverage, and the procurement implications for anyone buying GPU capacity in 2026–27 are immediate.
China's YMTC hits HBM3 roadblock as domestic memory makers rush to close the AI chip gap: HBM3 fabrication requires process technology China does not currently have, extending Western leverage over AI infrastructure supply chains while Chinese AI deployment scales.
BlackBerry QNX and NVIDIA deepen safety-critical edge AI collaboration across robotics, medical and industrial systems: the partnership provides a functional safety certification layer for AI at the industrial edge – the specific regulatory requirement that has constrained autonomous AI deployment in these categories.
Insurance:
Testudo launches AI underwriting platform backed by Lloyd's Lab: London-market backing converts AI liability into its own risk class rather than a tech-E&O rider – the pricing precedent that every 2026–27 AI-adjacent renewal will be measured against.
CFC pilots agentic AI underwriting system that processes applications in seconds: the London-based cyber insurer is deploying agentic underwriting in live operations – an economic bet that automation speed compounds better than human underwriter throughput in its risk category.
Boleron becomes first insurance broker approved by OpenAI to distribute through ChatGPT: first-mover position in ChatGPT's distribution layer is a platform-access claim – the same structural position Boleron's competitors will seek once premium conversion through the channel becomes measurable.
Legacy architecture is the specific technical constraint blocking insurers' agentic AI adoption: agentic AI requires real-time data access that batch-processing legacy systems cannot provide – a two-tier market is forming between carriers that have modernised and those still running inherited infrastructure.
AI data centre expansion is quietly raising insurance costs for industrial projects: underwriters are pricing fire suppression, cooling failure and power-density risk in AI data centres differently from conventional commercial property, with material premium increases appearing in renewal terms.
But what set podcast tongues a-wagging?
Three conversations brought the arrival of the agentic age into even sharper relief:
Demis Hassabis on Intelligence Squared gave compliance teams a vocabulary they had been missing.
Martin Shkreli on the a16z podcast framed the OpenAI–Anthropic contest as an economic problem, not an intelligence one.
Peter Diamandis with the Moonshots panel read the labour-market data and pointed out that the handover is already happening.
'Jagged intelligence' is now compliance vocabulary.
Hassabis's description of frontier AI as jagged – not uniformly capable, but capable in patches and unreliable in others – names the reality that many compliance teams in healthcare, insurance and pharmaceuticals have already been managing without a word for it. Some decisions are defensible to let models make, others need human review, and the two categories do not correlate with the model's headline capability. For a chief compliance officer or general counsel weighing deployment scope, jagged intelligence is the frame that lets procurement draw the line without over-deploying or over-restricting.
The competition between labs is economic, not technical.
Shkreli on a16z offered the sharpest read of the commercial dynamic: OpenAI under-monetises its consumer base while Anthropic over-bills its enterprise customers, and the revenue potential of fully monetised AI sits well above either company's current trajectory. The consequence for buyers is that the pricing model is now the most volatile part of the purchase decision – what cost $200 a seat last quarter can halt mid-quarter, as GitHub Copilot demonstrated, or move out of Pro, as Claude Code did, because per-user subscriptions break when the user is running a continuous process. Shkreli's separate point on pharma is the universal regulated-sector corollary: no amount of compute substitutes for a decade of clinical validation. For any sector where value depends on certified outcomes, compute is the easy part of the budget.
The labour-market numbers for the handover have already arrived.
The Moonshots panel anchored on Stanford HAI's AI Index 2026 data: coding-benchmark performance moved from 60% to 97% in one year, generative AI reached 53% global adoption in three years (faster than PC or internet), documented AI incidents rose from 233 to 362, and software developer employment for 22–25-year-olds is already in measurable decline. The pivot enterprise strategists are still planning against is the pivot labour-market participants have already been repricing. For any CEO committing to a 2026–27 hiring plan – in financial services, pharmaceutical operations, engineering, or any knowledge-work function – the early-career intake question is no longer a forward-looking scenario. It is the plan that needs to be reconciled with headcount already on the books.
Thank you for reading this week's report. Come back next week for all the AI news in your sector and beyond.