Job Summary
Design and build robust, secure data pipelines into a Snowflake data warehouse from on-premises and cloud data sources, working with AWS data stores such as Redshift, RDS, S3, and Athena.
JOB ACCOUNTABILITIES
- Translate functional specifications into technical requirements and designs
- Develop and implement data pipelines using Python and PySpark
- Work with business analysts and users to understand their needs and develop solutions
- Support maintenance, bug fixing, and performance analysis of data pipelines
- Contribute to knowledge building and sharing
MINIMUM KNOWLEDGE / EXPERIENCE / TRAINING / QUALIFICATIONS REQUIRED FOR POSITION
- Bachelor's degree in Computer Science, Engineering, Mathematics, or Statistics from a recognized university or institute.
- Minimum of 1 year of experience in data engineering.
- Experience with data warehousing, data lakes, and the BI landscape.
- Familiarity with AWS services such as Amazon Redshift, Glue, EC2, S3, Athena, RDS, Lambda, and EventBridge.
- Strong SQL skills, including complex querying for data extraction and analysis.
- Experience with Power BI, logistics systems, and SAP is an added advantage.