Our client, an innovative technology-driven company, is looking for Data Engineers.
The company keeps its customers moving forward by designing, building and running their data landscapes. These landscapes are often developed in public cloud environments, but implementations vary: no two customers are the same. They run data lake environments on their own MCC (Mission Critical Cloud), AWS, Azure and more.
As a Data Engineer you are part of a multidisciplinary team. You will take on a variety of cases and find answers to questions such as:
- What data quality rules are there and how should they be implemented?
- Which storage solutions are required?
- How, and with which tooling, will we perform ETL?
- What ingestion types are required?
- How will governance of the entire landscape be done?
- What is required to train an ML model effectively?
As a Data Engineer, you get to actively explore, design, build and run data lakes in one of the public clouds. Colleagues working on different projects or with different customers will come to you and your team for knowledge and expertise. Your work is customer-focused, but it will also contribute to expanding and maturing the company’s data landscape architecture and its initiatives.
We’re looking for a Data Engineer with the following experience:
- Developing and maintaining data pipelines.
- Comfortable working in DevOps-driven teams.
- Configuring and managing tools like Spark, Glue, Kinesis, S3, Lambda, Kafka, Redshift, Athena, EventHub, etc.
- Supporting tooling such as GitLab (CI/CD), Ansible (automation) and Terraform (infrastructure).
- Affinity with scripting languages, preferably Python.
- Ability to self-organize; you are in control of your own direction, as there is no management layer.
- Capable of communicating proficiently with customers at various levels of expertise.
If you are the person we are looking for, please apply through our website.