The payments landscape is changing rapidly, with hundreds of ways to accept payments online globally. Primer needs to be a constant in a world where new payment methods and payment services are unveiled regularly. Given the wealth of innovation in this space, payments have never been more complex to implement and maintain.
Data is a core part of Primer's product suite, helping to drive merchant success rates and provide valuable insights across a truly global, open-ended payments infrastructure. With a unique opportunity to gain a holistic view of payment patterns across the globe, Primer is perfectly positioned to use the data we acquire to shape the future of online payments.
We are looking for a skilled data engineer to join our growing data team. As a Data Engineer, you will be responsible for designing, building and optimising our data and data pipeline architecture, as well as improving data flow across the organisation. The ideal candidate will have deep experience building data pipelines and wrangling data, and will enjoy building systems from the ground up.
At Primer the product is everything, and data is no different. It will be used to enhance Primer's offering, so a product-focused mindset is vital. The data team will be in constant communication with domain experts from across the organisation in order to drive the product forward.
What you'll be doing
- Helping to build and provision a complete data infrastructure from scratch
- Creating and maintaining a hugely scalable data pipeline
- Building ETL infrastructure for data across a wide range of data sources
- Building analytical tools that utilise the data pipeline to provide actionable insights into operational performance, merchant efficiency and other key performance indicators
- Manipulating, processing and extracting value from large disconnected datasets
- Keeping data separated and secure across national boundaries
What we're looking for
- Proven experience in a similar data engineering role
- Experience building and optimising ‘big data’ pipelines, architectures and datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ stores
- Strong analytical skills for working with unstructured datasets
- Relevant experience with one or more of the following would be advantageous:
  - Big data tools: Hadoop, Spark, Kafka, etc.
  - Relational SQL and NoSQL databases: Postgres, DynamoDB, etc.
  - Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  - AWS cloud services: EC2, EMR, RDS, Redshift
  - Stream-processing systems: Storm, Spark Streaming, etc.
  - Object-oriented and functional scripting languages: Python, Java, etc.
What we offer
- Share options as part of your package
- Machine + peripherals of your choosing
- Up to £500 towards your home office setup
- Fully remote setup
- All-expenses-paid bi-annual get-togethers
- Learning budget: books and other learning resources on us
- Flexible working