Senior Data Engineer / Analytics Engineer (AWS)

<p class="___1et8vw6 f18zxyen f1mevb6">DVT is one of the top software development companies on the continent. Our engineers consult on cutting-edge platforms at leading companies across South Africa and globally. You'll work alongside some of the most established practitioners in the country, on the latest technologies in the modern data stack.</p><p style="min-height: 1.7em;"></p><p class="___1et8vw6 f18zxyen f1mevb6">We are proud of our culture of continuous learning, internal knowledge sharing, and sponsored technical events across the AWS and data ecosystem.</p><p style="min-height: 1.7em;"></p><p class="___1et8vw6 f18zxyen f1mevb6">We are looking for a <strong>Senior Data Engineer / Analytics Engineer</strong> to join our Data and Automation practice on a high-impact client engagement. You will help design, build, and operate a modern <strong>AWS-first data platform</strong> — moving data through S3 into Redshift Serverless, orchestrated by Airflow, modelled with dbt, and scripted in Python, with a likely evolution towards Snowflake.</p><p style="min-height: 1.7em;"></p><p class="___1et8vw6 f18zxyen f1mevb6">This is a client-facing role in a fully remote environment. You will own pipelines end to end, shape analytics engineering practices, and communicate clearly with distributed stakeholders. This is <strong>not</strong> a generic backend engineering role. 
Strong software engineers will only be considered if they bring credible, hands-on experience with a modern cloud data platform.</p><br><br><h3 class="___d1kctb0 fod5ikn fl43uef faaz57k fqcjy3b f1mevb6"><strong>DUTIES AND RESPONSIBILITIES</strong></h3><p class="___1et8vw6 f18zxyen f1mevb6"><strong>Data Platform & Pipelines</strong></p><ul class="___yduxjt0 fjh19gf f18zxyen f1mevb6 f1mswpri"><li><p>Design, build, and maintain robust <strong>ETL/ELT pipelines</strong> across AWS-native data environments</p></li><li><p>Own <strong>Airflow</strong> orchestration — scheduling, dependencies, retries, alerting, and operational support</p></li><li><p>Develop analytics-ready data models in <strong>dbt</strong>, using modular, warehouse-first transformation patterns</p></li><li><p>Work confidently across <strong>S3</strong> (raw, staged, curated) and <strong>Redshift Serverless</strong> for storage and warehousing</p></li><li><p>Contribute to the roadmap and potential migration toward <strong>Snowflake</strong> as a future warehouse</p></li></ul><p class="___1et8vw6 f18zxyen f1mevb6"><strong>Engineering & Quality</strong></p><ul class="___yduxjt0 fjh19gf f18zxyen f1mevb6 f1mswpri"><li><p>Write clean, maintainable <strong>Python</strong> for pipeline logic, scripting, and lightweight engineering tasks</p></li><li><p>Embed data quality, testing, and observability into every pipeline — not as an afterthought</p></li><li><p>Apply sound version control, code review, and CI/CD practices to data workloads</p></li></ul><p class="___1et8vw6 f18zxyen f1mevb6"><strong>Client & Collaboration</strong></p><ul class="___yduxjt0 fjh19gf f18zxyen f1mevb6 f1mswpri"><li><p>Engage directly with client stakeholders: gather requirements, present solutions, and advise on trade-offs</p></li><li><p>Partner with analysts, product teams, and other engineers in a distributed, remote-first setup</p></li><li><p>Contribute to architectural reviews, retrospectives, and continuous improvement of platform 
practices</p></li></ul><h3 class="___d1kctb0 fod5ikn fl43uef faaz57k fqcjy3b f1mevb6"><strong>REQUIRED EXPERIENCE AND SKILLS</strong></h3><ul class="___yduxjt0 fjh19gf f18zxyen f1mevb6 f1mswpri"><li><p><strong>5+ years</strong> in data engineering, analytics engineering, or closely related roles</p></li><li><p>Strong hands-on <strong>AWS data platform</strong> experience — S3-centred flows, cloud-native data workflows, warehouse-driven delivery</p></li><li><p><strong>Apache Airflow</strong> — proven experience designing, maintaining, and troubleshooting production pipelines</p></li><li><p><strong>dbt</strong> — solid analytics engineering patterns, modular models, testing, and documentation</p></li><li><p><strong>Warehouse experience</strong> — Redshift preferred; Snowflake highly desirable; comparable warehouse backgrounds considered if adaptable</p></li><li><p><strong>Python</strong> — confident scripting for pipelines, transformations, and automation</p></li><li><p>Strong understanding of data modelling (dimensional, wide tables, incremental strategies)</p></li><li><p>Excellent written and verbal <strong>communication</strong> — able to explain technical work credibly to non-technical audiences</p></li><li><p>Self-directed delivery in a <strong>fully remote, client-facing</strong> environment</p></li></ul><h3 class="___d1kctb0 fod5ikn fl43uef faaz57k fqcjy3b f1mevb6"><strong>NICE TO HAVE</strong></h3><ul class="___yduxjt0 fjh19gf f18zxyen f1mevb6 f1mswpri"><li><p><strong>Snowflake</strong> migration or implementation experience</p></li><li><p>Pipeline <strong>monitoring and observability</strong> (e.g. Datadog, Monte Carlo, CloudWatch, OpenLineage)</p></li><li><p>Experience implementing <strong>data quality frameworks</strong> (e.g. 
dbt tests, Great Expectations)</p></li><li><p>Background moving organisations from traditional warehouse-centric patterns toward modern <strong>analytics engineering</strong></p></li><li><p>Experience in <strong>fintech, lending, or financial services</strong> environments</p></li><li><p>Exposure to event-driven or streaming patterns (Kinesis, Kafka)</p></li></ul><h3 class="___d1kctb0 fod5ikn fl43uef faaz57k fqcjy3b f1mevb6"><strong>MINIMUM REQUIREMENTS</strong></h3><ul class="___yduxjt0 fjh19gf f18zxyen f1mevb6 f1mswpri"><li><p>Matric (Grade 12) certificate</p></li><li><p>Bachelor's degree in Computer Science, Information Systems, Engineering, Mathematics, or a related field (or equivalent practical experience)</p></li><li><p>AWS certification advantageous (e.g. Data Engineer – Associate, Solutions Architect – Associate/Professional)</p></li><li><p>Reliable home-office setup and connectivity suitable for a fully remote client engagement</p></li></ul><h3 class="___d1kctb0 fod5ikn fl43uef faaz57k fqcjy3b f1mevb6"><strong>WHAT WE'RE NOT LOOKING FOR</strong></h3><ul class="___yduxjt0 fjh19gf f18zxyen f1mevb6 f1mswpri"><li><p>Pure backend / application-only engineers with no production data platform work</p></li><li><p>Candidates with no real orchestration experience</p></li><li><p>Candidates with no warehouse or data modelling background</p></li><li><p>Profiles without AWS exposure</p></li><li><p>Candidates who cannot clearly articulate the data work they've shipped</p></li></ul>
