KAR Global is looking to expand our data team as we continue to grow our data platform. You should have a strong background in Python and SQL. As a member of the data team, your main responsibilities will be implementing and maintaining Airflow ETL jobs, using Python to ingest external data sources into the Data Warehouse, and working closely with the Product and Data Science teams to deliver data in usable formats to the appropriate data sources.
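To give a sense of the day-to-day work, here is a minimal sketch of the kind of Airflow pipeline this involves: pulling an external JSON feed and staging it for the warehouse. It assumes Airflow 2.4+ and the requests library; the feed URL, task names, and file path are placeholders for illustration, not part of our actual stack.

# Illustrative Airflow DAG: extract an external feed, then stage it for loading.
# All names (URL, dag_id, output path) are hypothetical.
from datetime import datetime
import json

import requests
from airflow import DAG
from airflow.operators.python import PythonOperator

FEED_URL = "https://example.com/api/listings"  # hypothetical external source


def extract_feed(**context):
    # Fetch the feed; the return value is pushed to XCom for the next task.
    response = requests.get(FEED_URL, timeout=30)
    response.raise_for_status()
    return response.json()


def load_to_warehouse(ti, **context):
    # Pull the extracted records and write them somewhere the warehouse can ingest.
    # In practice this step would use the Snowflake provider or a COPY INTO from a stage.
    records = ti.xcom_pull(task_ids="extract_feed")
    with open("/tmp/listings.json", "w") as fh:
        json.dump(records, fh)


with DAG(
    dag_id="external_feed_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_feed", python_callable=extract_feed)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load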
We run a polyglot, cutting-edge data platform: Snowflake drives the Data Warehouse, Elasticsearch powers location-based searching and metrics, Postgres handles transactional data, and Apache Spark trains our models. All environments run on AWS EKS, and our data processing framework is written in Python.
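Location-based search in Elasticsearch typically comes down to a geo_distance filter. The sketch below, assuming the Elasticsearch 8.x Python client, shows the shape of such a query; the endpoint, index, and field names are illustrative only.

# Illustrative geo_distance query; cluster URL, index, and fields are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # hypothetical cluster endpoint

nearby = es.search(
    index="vehicle-locations",  # hypothetical index
    query={
        "bool": {
            "filter": {
                "geo_distance": {
                    "distance": "50mi",
                    # hypothetical geo_point field and coordinates
                    "pickup_location": {"lat": 39.77, "lon": -86.16},
                }
            }
        }
    },
)
for hit in nearby["hits"]["hits"]:
    print(hit["_source"])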
About Our Candidate:
This candidate should be a self-starter who is interested in learning new systems and environments and is passionate about developing high-quality, supportable data service solutions for internal and external customers. We value a natural curiosity about data and technology that drives results through quality, repeatable, and sustainable database and code development. The candidate should be highly dynamic and excited by the opportunity to learn many different products and data domains and how they drive business outcomes and value for our customers.
What You Will Be Doing:
You will participate in daily Agile sprint ceremonies to help design, plan, build, test, and support KAR data products and platforms consisting of Airflow ETL pipelines and Postgres, Redshift, DynamoDB, Elasticsearch, and Snowflake databases. Our team works in a shared-services delivery model supporting seven lines of business, including front-end customer-facing products, B2B portals, mobile applications, business analytics, and data science initiatives.