Efficient LLM Inference at Scale
Reducing the cost and latency of large language model inference through speculative decoding, adaptive quantisation, and dynamic context compression – without measurable accuracy loss.
Reliable & Safe AI Agent Systems
Formal frameworks for measuring, testing, and guaranteeing the reliability of autonomous AI agents in enterprise environments – covering planning accuracy, tool-use safety, and graceful failure modes.
Advanced RAG & Knowledge Grounding
Improving retrieval-augmented generation beyond naive chunking – through hybrid dense-sparse retrieval, reranking architectures, and citation-grounded answer synthesis that minimises hallucination in high-stakes settings.
Deep Learning on Tabular & Time-Series Data
Investigating when and why transformer-based models outperform gradient boosting on enterprise tabular data – and developing hybrid architectures that combine the strengths of both paradigms.
Explainability for High-Stakes AI
Developing post-hoc and intrinsically interpretable AI methods that satisfy regulatory requirements in healthcare, finance, and insurance – without sacrificing predictive performance.
Federated & Privacy-Preserving ML
Building ML systems that learn from sensitive data distributed across organisations – using federated learning, differential privacy, and secure multi-party computation – without any raw data leaving its source.
Collaborate With Us
We partner with academic groups, industry labs, and enterprises on joint research that bridges frontier AI and real-world deployment.
Explore a Collaboration