Job Description
The Role:
Unified’s engineering team builds a data analytics platform that includes data ingestion, ETL processing, and an analytics reporting dashboard for the next-generation Unified Platform. You’d be working heavily in Java, building an entirely new platform on a microservices-based architecture. You will get exposure to many different types of databases, including graph, relational, NoSQL, and data warehouse. You will be working on a high-performance, high-availability stack using cutting-edge technology. Your opinion on architecture design decisions is welcome and expected. You will interface with many different tech and product teams in order to deliver these services.
What you'll do:
- Design data pipelines that transform raw data in ways that satisfy business requirements while remaining extremely performant and fault-tolerant
- Build data loading services to import data from numerous disparate internal and external data sources, including APIs, logs, and relational and non-relational databases
- Identify and anticipate the Kafka platform architecture solutions and features needed to meet the strategic needs of the organization
- Contribute to the continual improvement of the business’s data platforms through observation and well-researched knowledge
- Identify, evaluate, and discuss alternative technologies and techniques
- Practice and enforce disciplined software engineering (writing tests, code reviews, and pair programming)
- Develop and manage scalable data processing platforms used for exploratory data analysis and real-time analytics
- Collaborate with engineering teams to tease out implementation details and identify roadblocks before they occur
- Be accountable for complex stories or business requirements
- Maintain a quarterly view of defined, prioritized, and scoped work
- Work closely with cross-functional teams of data and backend engineers, analysts, DevOps and product managers
Who you are:
- You’re someone that has seen multiple code bases and wants to contribute from day one
- You’re great at some languages, solid in others
- You’d rather be working on the backend of the code and contributing to the core services that make our Platform run
- You’ve had experience developing software that actually goes to market, not just one-off apps, and you’re comfortable experimenting
- You love the fact that you have one of the most amazing skill sets in the world: writing code that bends to your will
- An ambitious self-starter who can work independently, meet deadlines, demonstrate out-of-the-box problem-solving skills, and is willing to learn and experiment with new technologies
- Excellent communicator and collaborator; able to work effectively with both technical and non-technical teams
Need to have:
- 6+ years of professional software development experience
- Hands-on experience implementing data pipeline infrastructure for data ingestion and transformation, with near-real-time availability of data for applications, BI analytics, and ML pipelines
- Expert-level working knowledge of data warehousing and structured/semi-structured data storage formats (Parquet, ORC, Avro)
- Experience designing optimized solutions for large datasets
- Knowledge and experience designing solutions with cloud-native AWS services (EC2, EMR, RDS, EKS, AWS Glue, etc.) and deploying alternative solutions for appropriate use cases
- Proven experience and knowledge in at least one big data technology such as Kafka, Spark, Hive, Hadoop, HBase, or Presto
- Strong knowledge of architecture/design of Event-driven and streaming systems
- Understanding of Kafka, Kafka Connect, Kafka Streams and KSQL
- Strong experience developing modern services and applications using languages such as Java, Python, or Scala
- In-depth knowledge of SQL or NoSQL and experience using a variety of data stores (Redshift, Neo4j, PostgreSQL, etc.)
- Experience in Open Source software development and CI/CD is desirable
- Experience in handling architectural and design considerations such as performance, scalability, reusability and flexibility issues
- Experience with Git and the command line
- Knowledge of testing methodologies
- Experience with the Agile/Scrum development methodology
- Ability to set technical and cultural standards for engineers
Nice to have:
- Experience working with Social Network APIs (Facebook, Twitter, LinkedIn etc.)
- Experience working with client-side MVC frameworks
- Familiarity with Vagrant & Docker
***************************
Unified helps marketers make informed and impactful decisions with the industry’s only business intelligence platform purposely designed for social advertising. With experience collecting and enriching over $3 billion of social investment data, Unified is passionate about providing Fortune 2000 brands and agencies greater transparency into their many teams, tools and strategies. The Unified Platform and service teams are specifically built to ensure data quality, optimize investments and answer critical business questions. Unified has offices in Austin, Atlanta, New York City, San Francisco, and Los Angeles.
For the last four years, Unified has been recognized by AdAge and Crain’s as one of the “Best Places To Work”. For more information, visit www.Unified.com or follow @Unified on Twitter. Unified is an equal opportunity employer.