As a DevOps Engineer in Data, you will help design, build, and run our data infrastructure to increase the flexibility, scalability, and efficiency of the Lalamove data platform, which underpins business analytics, reporting, data engineering, and data science. You will build the CI/CD automation pipelines that streamline and optimize our development processes, deploy containerized applications to Kubernetes, and scale components of our data platform on AWS.
What we seek:
- Quick learner: you demonstrate the ability to pick up new technologies and frameworks quickly.
- Problem solver: you have strong critical-thinking skills and are willing to find creative solutions to difficult problems.
- High autonomy: self-organized and self-driven, with a can-do attitude; you take ownership of projects end to end and can work independently while remaining team-oriented.
What you’ll need:
- Experience in building CI/CD automation pipelines
- Experience with Docker for containerization and Kubernetes for orchestration and production scaling
- Experience deploying containerized applications with Helm charts
- Experience with AWS architecture and administration in production environments
A plus, but not required:
- Understanding of the data technologies that power data platforms (e.g., Spark, Kafka, Airflow, Avro, Redshift, ELK)
- Experience managing Hadoop clusters in the cloud (e.g., EMR)
- Programming experience in Python or Scala
- Experience in managing and scaling Tableau Server