Twilio is hiring an

AI/Machine Learning Senior Engineer (Java & Python)

San Francisco, United States

Twilio Understand is the brain of Twilio’s natural language understanding (NLU). It powers Twilio's conversational AI platform, which allows developers to create omnichannel AI assistants, bots, and conversational interactions.

The Twilio Understand platform takes a unique approach to designing, training, and rolling out AI that sets it apart from other conversational platforms on the market. Building with Twilio Assistant, developers and companies can take their AI assistants beyond the PoC stage and realize their promise in the real world.

About the job:

We're looking for a strong Senior Machine Learning Engineer who is excited about working on an early-stage product with huge potential. You will work side by side with our Data Science and NLP experts to build the technology that powers the next generation of ML-powered communication products.

As a senior engineer at Twilio, you are empowered to create impactful experiences for our customers, who are developers and builders. We believe products are a reflection of the people who build them, so we are looking for engineers with a strong sense of ownership and drive, and the ambition to accomplish what no one thought was possible.

You will create, own, and operate the REST APIs, data pipelines, and machine learning pipelines that power our AI systems. As a Senior Engineer, you will be responsible for building the Twilio Understand infrastructure, which comprises horizontally scalable Java and Python services, high-availability data stores, machine learning pipelines, distributed queues, and developer-facing APIs.

Your services will go from concept to sustained exponential growth in a very short period of time. You enjoy getting the MVP out the door, and you know that taking services to scale requires building a complex distributed platform while staying on top of availability, throughput, latency, and real-time responsiveness.

Responsibilities:

  • Join a small, high-impact, multi-talented engineering team.
  • Collaborate with Product Managers, Data Scientists, Architects, and Engineering leaders to define, architect and build new customer-facing features.
  • Own, operate, and maintain your team’s services in a distributed production environment. Employ Agile methodologies to continuously deliver value to customers.
  • Drive quality by writing unit, functional, load and performance tests.
  • Work in a full-ownership DevOps model to ensure services are reliable, scalable, manageable, and supportable. Develop diagnostic and troubleshooting tools made available via our developer portal and to our customer support organization.
  • Excel as an engineer and be a productive member of a team where leadership is a behavioral trait, not a title. Lead architecture, design, and code reviews, and mentor junior engineers.

Requirements:

  • 5+ years of hands-on experience developing distributed systems in Java.
  • Hands-on experience with cloud technologies such as AWS (Amazon Web Services), OpenStack, or Azure.
  • Extensive experience scaling production backend systems. You can design and develop systems that are horizontally scalable, resilient, and performant under load. You have scaled data tiers using a variety of SQL/NoSQL database and caching technologies.
  • You have worked in an Agile development environment. You are an expert in the practical aspects of running Scrum (or other agile methodologies) within a team and in a distributed cross-team environment.
  • Ideally, you have experience in a production DevOps environment where you ship rapidly and often.
  • A minimum of 5 years' experience building complex distributed systems across concerns of reliability, high availability, performance, scalability, capacity planning, and automation.
  • You believe test automation is the only scalable way to ensure quality, and have built and maintained a complex automated test suite.
  • A bachelor's degree in a computer science-related field is a minimum requirement.

Bonus points:

  • You have experience with data pipelines, especially with Kafka.
  • A passion for working closely with Data Science and an interest in learning how to build machine learning-based products.

About us:

Twilio's mission is to fuel the future of communications. Developers and businesses use Twilio to make communications relevant and contextual by embedding messaging, voice and video capabilities directly into their software applications. Founded in 2008, Twilio has over 650 employees, with headquarters in San Francisco and other offices in Bogotá, Dublin, Hong Kong, London, Madrid, Mountain View, Munich, New York City, Singapore and Tallinn.

Twilio is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal opportunity regardless of race, color, ancestry, religion, gender, gender identity, parental or pregnancy status, national origin, sexual orientation, age, citizenship, marital status, disability, or Veteran status and operate in compliance with the San Francisco Fair Chance Ordinance. #LI-POST
