Location: Austin
Type: Permanent
Salary: $130,000 - $140,000 per annum
Reference: 39709
Telephone: +44 203 397 4565
You know quality is more than a checkbox. It’s the foundation of reliable, intelligent systems – and without trusted data, nothing else works.
As the client expands their asset intelligence capabilities following a recent acquisition, they’re building scalable pipelines and deploying on-prem network sensors that dramatically improve asset visibility. This is a pivotal moment to join as a Data Quality Engineer, where your work will ensure that the data powering ML and analytics pipelines is consistently accurate, complete, and trusted.
You’ll join a high-performing Quality Engineering team focused on embedding testing and validation into every layer of data and ML pipeline development. This is a hands-on, technical role – but one where your impact will be felt across the business, as the reliability of these systems is central to future growth.
You’ll lead on test automation across integrated data pipelines, drive CI/CD integration of quality checks, and help the company scale its data reliability as systems become more complex.
Responsibilities:
- Build automated data quality test frameworks across ML and analytics pipelines
- Implement and maintain end-to-end regression and integration tests in CI/CD (CircleCI, GitHub Actions)
- Deploy and test network sensors across multiple IT environments (TAP, SPAN, NetFlow, etc.)
- Validate integration of Redjack's pipelines with the wider architecture of the acquiring company
- Design monitoring dashboards, anomaly detection pipelines, and alerts for proactive quality management
- Collaborate with cross-functional teams to co-design test plans and evolve testing strategies
- Create test coverage across structured and unstructured data in production ML systems
What you'll bring:
- 4+ years' experience in Quality Engineering, ML Test Automation, or Data Quality
- Proficiency in Python and SQL to build validation tools and test frameworks
- Hands-on experience with CI/CD pipelines and orchestration tools (Airflow, MLflow, Kubeflow, GitHub Actions)
- Strong understanding of distributed systems (e.g. Kafka, APIs) and cloud infrastructure (AWS S3, Snowflake, BigQuery)
- Familiarity with data quality and validation frameworks such as Great Expectations, dbt, or Deequ

Desirable:
- Exposure to Rust or networking environments
- Experience with mocking libraries (Mockito, mountebank)
- Containerisation and orchestration tools (Docker, Kubernetes)
- Multi-cloud familiarity: AWS, GCP, and Azure
What's on offer:
- Competitive salary aligned with Austin market benchmarks
- Health insurance, paid time off, and hybrid working (must be Austin-based)
- Career growth within a scaling, global software organisation focused on AI and data intelligence
- Dedicated budget for continuous learning across AI reliability, data governance, and QE