
Data Platform Engineer

Build the data foundations that production AI depends on — vector databases, knowledge graphs, data governance, and RAG pipelines deployed in governed enterprise environments.

Engineering · London, UK (Hybrid) · Full-time
Express interest →

About Deliverance AI

Deliverance AI is the production AI platform company. We exist because 94% of enterprises fail to scale AI — not for lack of ambition or budget, but for lack of platform, governance, and delivery capability. We close the gap between AI investment and AI production for enterprises across regulated industries including pharma and biotech, financial services, retail, telecommunications, and logistics.

Our proprietary nine-layer platform — built around three core capabilities: Clarity (see everything), Govern (control everything), and Accelerate (ship everything) — is deployed inside customer environments with live workloads running on it from day one. We are not a consultancy that writes strategy decks. We are not a staffing firm that lends contractors. We are an engineering-led company with proprietary platform IP, a growing agent marketplace, and 15+ pre-built AI blueprints that cut months off delivery timelines.

Our engagement model is simple: Assess (4 weeks), Deploy (12–16 weeks), Operate (ongoing). Dedicated engineering pods own delivery end-to-end. Every deployment compounds the platform. Every use case ships faster than the last. Governed, observable, and delivering value from day one.

About the role

The Data Platform Engineer owns Layer 6 of our nine-layer architecture: Data & RAG. This is the layer that connects enterprise data to production AI agents — vector databases, hybrid search, knowledge graphs, ETL pipelines, and the data governance infrastructure that ensures everything stays compliant and auditable.

You will work as part of a dedicated engineering pod, embedded within customer engagements during the Deploy phase. Your job is to ensure that AI agents and models have access to clean, governed, relevant data — and that the data layer integrates seamlessly with the customer's existing data warehouses, lakes, and enterprise systems. In regulated industries like pharma (GxP), financial services (FCA), and government, data lineage, audit trails, and residency are not nice-to-haves — they are hard requirements that determine whether AI workloads can go live.

Every RAG pipeline you build, every knowledge graph you configure, every data integration you deliver enriches our blueprint library and makes the next engagement faster. This is a role where your work compounds.

What you will do

  • Design and implement the Data & RAG layer for enterprise AI deployments — vector databases, hybrid search, knowledge graphs, and embedding pipelines that feed production AI agents.
  • Integrate the Deliverance AI platform with customer data sources including enterprise data warehouses, data lakes, document management systems, and API-based feeds.
  • Build data governance, lineage tracking, and audit logging that satisfy regulatory requirements across pharma (GxP), financial services (FCA), and healthcare (MHRA), as well as EU AI Act compliance.
  • Implement data quality monitoring and alerting that ensure AI agents receive clean, timely, and representative data in production.
  • Configure and optimise vector databases (Pinecone, Weaviate, Milvus, pgvector) for retrieval performance, cost, and accuracy across different use-case patterns.
  • Work within engineering pods alongside AI Architects and ML Engineers during the Deploy phase, contributing to the four gated deployment phases.
  • Contribute to the Observability & FinOps layer (Layer 7) by building data flow telemetry, throughput monitoring, and cost attribution for data pipeline operations.
  • Document reusable data integration patterns that feed back into the blueprint library, accelerating future customer engagements.

What we are looking for

  • 4+ years of experience as a Data Engineer, Data Platform Engineer, or similar role building production data systems at scale.
  • Strong skills in Python and SQL, with hands-on experience building data pipelines using tools like Apache Spark, dbt, Airflow, or similar.
  • Experience with vector databases (Pinecone, Weaviate, Milvus, pgvector) and understanding of embedding-based retrieval, hybrid search, and knowledge graph patterns.
  • Solid knowledge of Kubernetes, cloud platforms (AWS, GCP, Azure), and infrastructure-as-code practices.
  • Understanding of data governance, privacy, and compliance requirements — particularly UK GDPR, EU AI Act data requirements, GxP validation, and audit trail design.
  • Experience working with enterprise customers, ideally in consulting or professional services contexts where you have been embedded within customer teams.
  • Familiarity with AI/ML data requirements including training data preparation, fine-tuning datasets, and RAG evaluation methodologies.
  • A collaborative mindset and strong communication skills — this role involves significant customer-facing interaction as part of a delivery pod.