Hive is hiring a

Systems Engineer / DevOps

About Hive

Hive is a full-stack deep learning platform helping to bring companies into the AI era. We take complex visual challenges and build custom machine learning models to solve them. For AI to work, companies need large volumes of high-quality training data. We generate this data through Hive Data, our proprietary data labeling platform with over 1,000,000 globally distributed workers, generating millions of high-quality pieces of data per day. We then use this training data to build machine learning models for verticals such as Media, Autonomous Driving, Security, and Retail. Today, we work with some of the largest companies in the world to redefine how they think about unstructured visual data. Together, we build solutions that incorporate AI into their businesses to completely transform industries.

We are fortunate that investors like Peter Thiel (Founders Fund), General Catalyst, 8VC, and others see Hive's potential to be groundbreaking in AI business solutions. We have over 160 talented individuals globally in our San Francisco and Delhi offices. Please reach out if you are interested in joining the AI revolution!

DevOps and Systems Team

Our unique machine learning needs led us to open our own datacenter in early 2016, with an emphasis on GPU resources. Even with our datacenter, we maintain a hybrid infrastructure extending into AWS to power parts of our consumer apps. As we continue to commercialize our machine learning models, we also need to grow our DevOps and Systems team to maintain the reliability of a SaaS offering for our customers. The ideal candidate doesn't need to be actively managed and takes automation seriously. You believe there is no task that can't be automated and no server fleet too large to manage. You get satisfaction from letting developers deploy their servers without fear of downtime!


Responsibilities

  • Develop automation tools for creating, provisioning, and deploying servers
  • Build tools to streamline the deployment of releases and hot-fixes
  • Configure and monitor data center infrastructure (load balancers, firewalls, switches, instances, etc.)
  • Help automate systems for 24x7 monitoring and failure recovery
  • Bring expertise on troubleshooting application, database, and networking performance and failures
  • Continually identify areas for process improvement in the production environment and develop appropriate resolutions


Requirements

  • You take automation seriously
  • You have fought fires at scale, wrestled with lost instances on AWS, and can troubleshoot misbehaving processes and servers with your eyes closed
  • You have at least 2 years of work experience in a Linux-based DevOps role
  • You have AWS cloud management experience
  • You know at least one language well, and enjoy writing code
  • Extremely proficient with the Unix command line, shell scripting, and configuring systems monitoring tools
  • Knowledge of all things networking: TCP/IP, ICMP, SSH, DNS, HTTP, SSL/TLS
  • Knowledge of storage systems: RAID, distributed file systems, NFS / iSCSI / CIFS
  • You have experience with configuration management, monitoring, and automation tools
  • Working knowledge of firewalls, VPN, routing, switching, load balancing, security, and DNS
  • Ability to compile and install Unix applications from source, test them, and create package-managed versions
  • You have a strong desire and ability to learn quickly
  • You are excited about providing services that affect tens of millions of users
  • A degree in computer science, or similar, is a plus!

What We Offer You

We are a group of ambitious individuals who are passionate about creating a revolutionary machine learning company. At Hive, you will have significant career development opportunities and a chance to contribute to one of the fastest-growing AI startups. The work you do here will have a noticeable and direct impact on the development of Hive.

Thank you for your interest in Hive.