Deliveroo is hiring a

Data Engineer

London, United Kingdom

We’re in the business of delivering awesome meals to your home.


We build and run a superb website, a suite of mobile apps, and a fleet of drivers, all so you can order from real restaurants and have the food delivered to your doorstep.


We work with hundreds of the UK's best-loved restaurants (Carluccio's, GBK, Nando's, Rossopomodoro, and many top-quality independents) to deliver their meals to homes and offices everywhere. Our customers are as passionate about great food as we are — and we satisfy them in just 32 minutes on average!


We also operate in France, Ireland, and Germany – with more to come soon.


As a logistics company, we build software that transports physical goods and solves real-world problems. We make rapid, real-time decisions about dispatching drivers, and we remove slow human think-time and inefficient information discovery from every interchange point in the order-processing workflow so food deliveries can be fulfilled as quickly as possible. Our total order fulfillment time, from the moment a customer submits an order until they receive their meal, currently averages 32 minutes, although we know we can do better.


Our product team is growing quickly: our breakneck-paced growth comes with many challenges, from our dispatch backend to excellence in front-end UX, and from software architecture to highly usable mobile applications.


We're uncompromising though: we like smart, ambitious software engineers, designers, and product managers to join us, and help us build and scale this lovable (and loved!) product.


We are looking for like-minded developers who truly love the craft of software engineering. We value a job well done rather than unreasonably long hours. We encourage our engineers to be curious, to put time into studying software development (learning new languages, studying design patterns or algorithms, experimenting with new open-source frameworks, etc.), and to continue developing new skills.


As a Data Engineer on our team, you will:

  • Maintain the existing ETL pipelines, built with Luigi; the backends are a mix of Postgres and third-party APIs (see the Luigi sketch after this list)
  • Oversee the generation of new data products (computed tables) for the BI team
  • Improve the performance of our data warehouse (currently Postgres, with a migration to Redshift just getting started)
  • Monitor and enhance the performance of the whole pipeline: monitoring with DataDog / New Relic, and working with data scientists to optimise modelling elements (e.g. code profiling and implementing Cython)
  • Migrate data execution from static EC2 instances to a continuously integrated solution, potentially Docker and Ansible on ECS
  • Contribute to integrating a new workflow-management tool such as Airflow (see the Airflow sketch after this list)
  • In the future, help us build our distributed compute pipeline. We're open to ideas here, but our starter for ten would be migrating to a container infrastructure on Mesos, with batch compute in Spark. We have a LOT of GPS data...
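
To give a flavour of the existing stack, here is a minimal sketch of a Luigi pipeline in the spirit of the first bullet: an extract task pulling a day of orders from Postgres and a downstream task producing a computed table for the BI team. The `orders` table, column names, connection details, and file paths are illustrative assumptions rather than our production schema.

```python
# Minimal Luigi sketch: Postgres extract -> computed table for BI.
# Schema, DSN, and paths below are illustrative assumptions.
import csv
import datetime
from collections import defaultdict

import luigi
import psycopg2


class ExtractOrders(luigi.Task):
    """Dump one day of orders from Postgres into a local CSV."""
    date = luigi.DateParameter()

    def output(self):
        return luigi.LocalTarget("data/orders_{}.csv".format(self.date))

    def run(self):
        conn = psycopg2.connect(host="localhost", dbname="analytics")  # hypothetical DSN
        cur = conn.cursor()
        cur.execute(
            "SELECT id, restaurant_id,"
            " EXTRACT(EPOCH FROM (delivered_at - placed_at)) / 60.0 AS fulfilment_minutes"
            " FROM orders WHERE placed_at::date = %s",
            (self.date,),
        )
        with self.output().open("w") as f:
            writer = csv.writer(f)
            writer.writerow(["id", "restaurant_id", "fulfilment_minutes"])
            writer.writerows(cur.fetchall())
        conn.close()


class AverageFulfilmentByRestaurant(luigi.Task):
    """Computed table for the BI team: mean fulfilment time per restaurant."""
    date = luigi.DateParameter()

    def requires(self):
        return ExtractOrders(date=self.date)

    def output(self):
        return luigi.LocalTarget("data/avg_fulfilment_{}.csv".format(self.date))

    def run(self):
        minutes = defaultdict(list)
        with self.input().open("r") as f:
            for row in csv.DictReader(f):
                minutes[row["restaurant_id"]].append(float(row["fulfilment_minutes"]))
        with self.output().open("w") as f:
            writer = csv.writer(f)
            writer.writerow(["restaurant_id", "avg_fulfilment_minutes"])
            for restaurant_id, values in sorted(minutes.items()):
                writer.writerow([restaurant_id, sum(values) / len(values)])


if __name__ == "__main__":
    # Local run; in production the tasks are driven by the central scheduler.
    luigi.build(
        [AverageFulfilmentByRestaurant(date=datetime.date.today())],
        local_scheduler=True,
    )
```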
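
And here is what a comparable daily dependency might look like in Airflow, the workflow-management tool mentioned above. The DAG id, schedule, and task callables are placeholders, and the import path assumes a recent Airflow release.

```python
# Minimal Airflow sketch mirroring the extract -> compute dependency above.
# DAG id, schedule, and callables are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Pull the day's orders out of Postgres (placeholder body).
    pass


def compute_bi_tables(**context):
    # Build the computed tables the BI team consumes (placeholder body).
    pass


with DAG(
    dag_id="orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    compute = PythonOperator(task_id="compute_bi_tables", python_callable=compute_bi_tables)

    extract >> compute  # compute runs only after the extract succeeds
```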