Stripe is the best software platform for running an internet business. We handle billions of dollars every year for hundreds of thousands of businesses around the world. One third of Americans bought something on Stripe in the last year.
The internal data products group is responsible for making Stripes happy and productive while using data. We build a platform to accelerate Stripe's data productivity as Stripe scales. That includes an internal dashboarding and notebooking portal, an interface to analytical query engines, and data documentation and discovery tools. More than 80% of Stripes are active users of our platform, and more than 60% produced and shared data using it last year.
We’re looking for people with a strong background in, and interest in, building and improving data tools. The ideal candidate will combine technical expertise, a passion for solving data productivity problems, and a pragmatic ability to ship results iteratively.
- Help make the day-to-day life of Stripes working with data more enjoyable and efficient
- Deliver practical, useful, and reliable tools and processes to data users across Stripe
- Work with stakeholders across Stripe to balance and accommodate competing priorities
We’re looking for someone who:
- Has 5+ years of experience in an engineering or data science role, ideally with experience in building internal data tools
- Is comfortable with at least one of the following (modern JS frameworks, TypeScript, Scala, Java) and is open to working with others
- Has a focus on providing an excellent user experience
- Has the ability to communicate and work across teams to help users integrate with the platform
- Holds themselves and others to a high bar when working with production systems
- Enjoys working with a diverse group of people with different expertise
- Thinks in terms of systems and services and writes high-quality code. Languages can be learned: we care much more about your general engineering skill than knowledge of a particular language or framework
Nice to haves:
- Experience with SQL, statistics and data analysis
- Experience with data visualization (e.g., D3, R/Python, Superset)
- Experience building data pipelines (e.g., Spark, Scalding, Hive, Pig, Airflow)
Some things you might work on:
- Collaborative authoring of data documentation
- Dataset lineage and provenance explorer and integration with other internal data services
- Collaborate with teams (e.g. Experimentation) to build bespoke visualizations and dashboards
- Refactor backend services to improve full stack development velocity and scalability
Our stack spans full-stack TypeScript, Next.js, React, Postgres, Java, Trino, and Spark.