Our client is a full-lifecycle product development services leader that combines chip-to-cloud software engineering expertise and vertical industry experience to help their customers design, build, and deliver their next-generation products and digital experiences.
The customer is a global technology and data company that builds verification, optimization, and analytics solutions for the advertising industry.
We’re looking for an engineer to join the Data Engineering team. If you are excited by technology that can handle hundreds of thousands of transactions per second, collect tens of billions of events each day, and evaluate thousands of data points in real time while responding within just a few milliseconds, then we are the place for you! In this role, you will build and expand the data science model testing framework and testing infrastructure for our core ad verification, analytics, and anti-ad-fraud software products. The ideal candidate is naturally curious, dedicated, and detail-oriented, with a strong desire to work with awesome people in a highly collaborative environment.
The project’s main stack:
- Java 60%
- Python 30%
- Scala 10%
Requirements:
- 5+ years of recent hands-on Java / Python / Scala experience
- Strong knowledge of collections, multi-threading, JVM memory model, etc.
- Great understanding of designing for performance, scalability, and reliability
- Superb understanding of algorithms, scalability, and the associated tradeoffs in a Big Data setting
- In-depth understanding of object-oriented programming concepts
- AWS experience, especially with EMR, Step Functions, Glue, and CDK
- Excellent interpersonal and communication skills
- Understanding of full software development life cycle, agile development, and continuous integration
- Good knowledge of Linux command-line tools
- Experience with Hadoop MapReduce, Spark, Airflow, Pig, and Hive
- Solid understanding of database fundamentals, good knowledge of SQL
- Exposure to messaging frameworks like Kafka or RabbitMQ
- Hands-on work with Big Data technologies such as Hadoop MapReduce, Kafka, and/or Spark, and with columnar databases
Responsibilities:
- Architect, design, code, and maintain components for aggregating tens of billions of daily transactions
- Migrate services from on-premises to the cloud
- Lead the entire software lifecycle, including hands-on development, code reviews, testing, deployment, and documentation, for streaming and batch ETLs and RESTful APIs
- Mentor junior team members
Conditions of work:
- An interesting and challenging opportunity at a large, fast-growing company
- Exciting projects involving the latest technologies
- Professional development opportunities
- Excellent compensation and benefits package, performance bonus program
- Modern and comfortable office facilities