Past Stop 03 · London Agentic AI

Agentic AI + Evals at Databricks

A joint London Agentic AI and Agentic AI London meetup at Databricks, focused on agent evaluation, Mosaic AI, DSPy, context engineering and production-quality AI applications.

Eval Engineering · Mosaic AI · DSPy · production reliability
Date: Tue, Nov 18, 2025 · 6:00 PM GMT
Venue: Databricks London HQ
Status: Event completed
Original event artwork
Network partners

Companies In The Room

Hosts, sponsors and speaker organisations represented at this London Agentic AI stop.

Sponsors & Hosting Partners
1 company
Databricks

Speaker Organisations
4 companies
Databricks
EdisonWatch
IntentHQ
CometML
event focus

What This Stop Covered

The Databricks event focused on how teams can evaluate and improve agentic AI systems before they become production risk. The theme connected Mosaic AI, DSPy, context engineering and practical evaluation strategies for production-grade AI applications.

The session gave London AI builders a venue to discuss how agent behaviour, retrieval quality, workflow reliability and deployment constraints can be measured with real evaluation loops.

AI agent evals · Agent evaluation · Databricks Mosaic AI · DSPy · Context engineering · Production AI applications
event context

What Was Special About This Event

A concise briefing on the technical context behind the event: why the topic mattered to the London Agentic AI community and what sponsors, speakers and attendees came to discuss.

Event details

The public Meetup page confirms the title, Databricks London HQ venue and event timing.

The archive content positions this event around evals, Mosaic AI, DSPy and context engineering.

The event video is linked from the London Agentic AI YouTube archive.

attendee takeaway

What You Learned

01. How to think about evaluation loops for production AI agents.

02. Where Databricks Mosaic AI fits in the development and evaluation lifecycle.

03. How DSPy and compiler-style thinking relate to context and prompt optimisation.

04. How to discuss agent reliability with data, ML and platform teams.
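As a concrete illustration of takeaway 01, here is a minimal, framework-agnostic sketch of an evaluation loop: a small labelled dev set, a metric, and an aggregate score. The `toy_agent` function and the dev set are hypothetical placeholders for illustration only, not anything presented at the event.

```python
# Minimal evaluation loop for an agent: run each dev example through
# the agent, score the output with a metric, and aggregate the scores.

def exact_match(expected: str, predicted: str) -> float:
    """Return 1.0 on a normalised exact match, else 0.0."""
    return 1.0 if expected.strip().lower() == predicted.strip().lower() else 0.0

def evaluate(agent, devset, metric):
    """Score `agent` over `devset` and return the mean metric value."""
    scores = [metric(expected, agent(question)) for question, expected in devset]
    return sum(scores) / len(scores)

# Hypothetical stand-in agent: a lookup table instead of an LLM call.
def toy_agent(question: str) -> str:
    answers = {"capital of france?": "Paris"}
    return answers.get(question.lower(), "unknown")

devset = [
    ("Capital of France?", "Paris"),
    ("Capital of Spain?", "Madrid"),
]

print(evaluate(toy_agent, devset, exact_match))  # 0.5: one of two correct
```

In a production loop the same shape holds; only the agent call and the metric (e.g. an LLM judge or retrieval-grounded check) become more sophisticated.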

technical programme

Sessions And Speakers

Databricks

Mosaic AI Framework

A Databricks session on Mosaic AI patterns for building, evaluating and deploying production-quality AI applications.

Puneet Jain
Databricks community session

DSPy and Analogies with LLVM

A context-engineering talk connecting DSPy ideas to compiler-style optimisation and agent evaluation workflows.

Eito Miyamura
Databricks, EdisonWatch, IntentHQ and CometML

Evaluating AI Agents Panel

A panel moderated by Eito Miyamura on how to evaluate agent quality, reliability and operational behaviour before deployment.

Eito Miyamura · Sultan Al Awar · Kyra Wullfert · Sangram Reddy · Jacques Verre
event recording

Watch The Session

The recording is embedded here for context and linked as a source, so search engines, LLMs and attendees can connect this event page to the original video.

Open on YouTube →
route timing

Agenda

6:30 PM · Welcome and opening remarks
6:40 PM · Mosaic AI Framework
7:00 PM · DSPy and LLVM analogies
7:20 PM · Break
7:30 PM · Evaluating AI agents panel and audience Q&A
who should attend

Audience

AI engineers building agent systems
Data and ML practitioners evaluating LLM applications
Platform teams responsible for reliability and deployment
Founders and technical leaders choosing agent evaluation strategy
event links

Original Event Links

Original registration, archive and recording links for this London Agentic AI event.

London Agentic AI

The UK's original, high-signal Agentic AI community for AI engineers, agent builders, researchers, founders and technical leaders building production agents.

Subscribe on Luma →