ABOUT THE TEAM
Data sits at the heart of Revolut and plays a uniquely crucial role in what we do. With data we build intelligent real-time systems to personalise our product, tackle financial crime, automate reporting, track team performance, and enhance customer experiences.
Fundamentally, data underpins all operations at Revolut, and being part of the team gives you the chance to make a major impact across the company – apply today to join our world-class data department.
ABOUT THE ROLE
You will work with the Financial Crime - Transaction Monitoring team. We are a full-stack team of data engineers, data scientists, backend engineers and financial crime specialists solving some of the hardest anti-fraud and anti-money laundering problems in the world. Our team builds artificial intelligence (AI) driven systems that continuously learn the anomalous patterns typical of fraud. To that end, we need a senior data engineer to scale our various microservices and machine learning systems. If you love thinking analytically about big data computations at microsecond latencies, then this is your gig!
What you'll be doing:
- You will be responsible for writing scalable backend systems that use a machine learning model to score users in near real time.
- You will research and productionise stream processing algorithms that can efficiently compute various statistics (mean, standard deviation, percentiles) on a data stream.
- You will work in a small team of data engineers to standardise features across our many different models into a unified feature store.
- At Revolut, our data and backend engineers are also responsible for the DevOps of the systems they write; hence, you will be responsible for building and maintaining our Kubernetes-based deployment pipeline, along with PagerDuty and alerting instrumentation.
- Our data stack is based predominantly on Python on the backend, with Exasol as our data warehouse. We are hosted on Google Cloud Platform, and our data scientists and engineers rely heavily on Dataflow, BigQuery, Composer and Apache Beam for their machine learning pipelines.
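To give a flavour of the streaming-statistics work described above, here is a minimal sketch of Welford's online algorithm, which maintains the mean and standard deviation of a data stream in a single pass with constant memory. The `RunningStats` class and the sample transaction amounts are purely illustrative assumptions, not Revolut's actual implementation.

```python
import math


class RunningStats:
    """Incrementally tracks count, mean, and variance of a data stream
    using Welford's online algorithm (numerically stable, O(1) memory)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x: float) -> None:
        """Fold one new observation into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        """Population variance of all observations seen so far."""
        return self.m2 / self.n if self.n > 0 else 0.0

    @property
    def stddev(self) -> float:
        return math.sqrt(self.variance)


# Illustrative stream of transaction amounts
stats = RunningStats()
for amount in [10.0, 12.0, 9.0, 45.0, 11.0]:
    stats.update(amount)
```

In production such a fold would typically run inside an Apache Beam combiner rather than a plain loop, but the per-element update logic is the same.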
WHAT SKILLS YOU’LL NEED
- Fluency with Python
- 3 years (or more) experience in back-end development or data engineering
-Bachelor’s Degree (or above) in Computer Science/Maths/Physics/similar
-Previous background in working with machine learning or data engineering teams is a plus (not required)
- Quick learner with an ambitious, results-driven personality
- Ability to work well as part of a team in a fast-paced environment
-Excellent communication and organisational skills