Sr. Data Engineer
NucleusTeq
| 2023-12-12
NucleusTeq is a company founded by engineers for engineers. We pride ourselves on our revolutionary implementation of digital solutions for business transformation.
If you've got the following skillset and experience, we'd love to see your resume:
Requirements
10+ years of experience in data engineering, with at least 5 spent building data pipelines and extracting data from SQL / Oracle / DB2 / mainframe sources and ingesting it into a Big Data or cloud platform such as Hadoop or Snowflake
Expertise in event streaming and in building data pipelines that ingest Kafka/MQ events and batch files into a cloud database such as Snowflake
Expert data design, modeling, and data warehousing experience in cloud database technologies such as MS Azure, AWS, and Snowflake. Exposure to Yugabyte and Google Cloud Platform is a plus
Experience in modern data design methodologies such as DDD (Domain-Driven Design), Bounded Contexts, and Data Mesh, building data warehouses for agility, scalability, and autonomy
Experience migrating data and models from on-premises environments into the cloud (such as Azure) using re-hosting, re-platforming, or refactoring techniques, depending on the use case
Expert development, analysis, and design skills in database technologies such as Oracle, MySQL, SQL Server, DB2, Cassandra, etc.
Experience in designing and building data assets for batch and NRT (near-real-time) Data Warehouses, Operational Data Stores, and BI&A universes. Well versed in industry-standard Kimball and Inmon practices
Experience in creating ER (Entity Relationship), Logical, Physical, and Conceptual data models for an enterprise. Hands-on experience with data modeling tools such as Erwin and/or PowerDesigner, and well versed in designing universes for reports / dashboards in tools such as Tableau, Cognos, etc.
Big Data development, design, and architecture for distributed and scalable file systems such as HDFS. Working experience in Hortonworks / Cloudera Hadoop environments with expert development skills in Hive
Hands-on experience integrating with object and relational databases, caches, and search engines (such as Oracle, MySQL, DB2, MongoDB, Cassandra, Redis, Elasticsearch, etc.)
Experience scripting with Python to extract data and run data procedures. Experience with Spark is a plus
Experience working in an agile methodology and facilitating sessions with partners, including Business, IT (engineers and developers), BI (Business Intelligence), and System of Record (SOR) SMEs, to assess business requirements and deliver design prototypes and models
Java Programming Skills – Candidate should have a proven track record of developing in Java and applying it to complex enterprise systems. Experience using the Spring framework and Spring Boot
Experience creating RESTful APIs based on the Java Spring framework that run as microservices on Kubernetes within Docker containers
Candidate should have a minimum of 2 years of Apache Kafka experience and a proven track record of applying it to complex enterprise systems
Experience with API development tools such as SwaggerHub and Stoplight
Experience with CI/CD using tools such as Jenkins
Experience working with VSAM files / Cobol copybooks and DB2 on a mainframe
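As a flavor of the mainframe-extraction work described above, here is a minimal Python sketch of decoding one fixed-width VSAM-style record whose layout would come from a COBOL copybook. The field names, offsets, and PIC clauses below are purely illustrative assumptions, not from any real system.

```python
# Hypothetical copybook layout: (field name, offset, length),
# mirroring PIC clauses such as PIC 9(6) or PIC X(20).
COPYBOOK_LAYOUT = [
    ("CUST-ID",   0,  6),   # PIC 9(6)
    ("CUST-NAME", 6, 20),   # PIC X(20)
    ("BALANCE",  26,  9),   # PIC 9(7)V99 (implied 2 decimal places)
]

def parse_record(raw: bytes, codec: str = "cp037") -> dict:
    """Decode one fixed-width record (EBCDIC cp037 by default) into a dict."""
    text = raw.decode(codec)
    out = {}
    for name, offset, length in COPYBOOK_LAYOUT:
        out[name] = text[offset:offset + length].rstrip()
    # Re-insert the decimal point implied by the V in PIC 9(7)V99.
    out["BALANCE"] = int(out["BALANCE"]) / 100
    return out

# Example: an EBCDIC record built here from an ASCII template for the demo.
record = ("000042" + "JANE DOE".ljust(20) + "000012345").encode("cp037")
print(parse_record(record))
```

Real extractions would also handle packed-decimal (COMP-3) fields and OCCURS clauses, which this sketch omits for brevity.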