We are Brainlabs, the High-Performance media agency, on a mission to become the world's biggest and best independent media agency. We plan and buy media that delivers profitable and sustained business impact. And what’s our formula? Superteams of Brainlabbers, fueled by data and enabled by technology.

Brainlabs has always been a culture-first company. In fact, from the very beginning of the agency, a set of shared principles, philosophies and values was documented in The Brainlabs Handbook, helping us create our unique culture.

As with everything here, we always seek to adapt and improve, so The Brainlabs Handbook has been fine-tuned to become The Brainlabs Culture Code.

This Culture Code consists of 12 codes that speak to what it means to be a Brainlabber. It’s a joint commitment to continuous development and to creating a company that we can all be proud of, where Brainlabbers can turn up to do great work, make great friends and win together.

You can read The Brainlabs Culture Code in full here.

Job Summary:

We are looking for a motivated and detail-oriented Data Engineer with 2 to 4 years of experience in designing, building, and managing scalable data solutions on Google Cloud Platform (GCP). The ideal candidate will have a strong background in data engineering and cloud-based architectures, along with proficiency in implementing data pipelines that transform raw data into actionable insights.

Key Responsibilities:

  • Data Pipeline Development:
    • Design, develop, and maintain ETL/ELT pipelines using GCP tools like Cloud Functions, Cloud Run, Dataflow, Dataproc, or Cloud Data Fusion.
    • Ensure data pipelines are scalable, efficient, and optimised for performance.
  • Data Integration and Storage:
    • Integrate data from various sources into GCP services such as BigQuery, Cloud Storage, and Cloud SQL.
    • Design and implement data warehouse/mart solutions using BigQuery for analytics and reporting.
  • Data Transformation & Optimisation:
    • Build transformation logic using SQL, Python, or Spark for preparing clean and structured data.
    • Optimise query performance and storage cost in BigQuery or other GCP storage systems.
  • Data Quality & Monitoring:
    • Develop processes to ensure data quality, integrity, and consistency across the pipeline.
    • Implement monitoring and logging systems using tools like Cloud Monitoring and Cloud Logging (formerly Stackdriver) or Looker.
  • Collaboration & Communication:
    • Work closely with cross-functional teams, including data analysts, data scientists, and business stakeholders, to understand requirements.
    • Provide technical guidance on GCP best practices and tools.
  • Documentation & Maintenance:
    • Maintain clear documentation of processes, workflows, and data architecture.
    • Ensure regular maintenance and version control of pipelines and scripts.
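To give a flavour of the transformation and data-quality work described above, here is a minimal, hypothetical sketch of one cleaning step in Python. All names (`transform`, `user_id`, `event_ts`, `revenue`) are invented for illustration; in practice this logic would run inside a GCP service such as Dataflow or Cloud Run before loading into BigQuery.

```python
from datetime import datetime, timezone

def transform(raw_records):
    """Clean raw event rows into a structured form ready for loading.

    Applies simple data-quality rules: drops rows missing a user_id,
    parses ISO-8601 timestamps into UTC, and normalises revenue to a
    float rounded to two decimal places.
    """
    cleaned = []
    for row in raw_records:
        if not row.get("user_id"):
            continue  # data-quality rule: user_id is required
        cleaned.append({
            "user_id": row["user_id"],
            "event_ts": datetime.fromisoformat(row["event_ts"]).astimezone(timezone.utc),
            "revenue": round(float(row.get("revenue", 0)), 2),
        })
    return cleaned

raw = [
    {"user_id": "u1", "event_ts": "2024-01-05T10:00:00+00:00", "revenue": "19.999"},
    {"user_id": None, "event_ts": "2024-01-05T10:01:00+00:00"},  # dropped: no user_id
]
rows = transform(raw)
```

The same shape of logic could equally be expressed in BigQuery SQL; the choice between in-pipeline Python and in-warehouse SQL is one of the design decisions this role would own.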

Skills & Qualifications:

1. Mandatory Skills:

    • Hands-on experience with GCP services like Cloud Functions, Cloud Run, Cloud Scheduler, BigQuery, Dataflow, Pub/Sub, and Cloud Storage.
    • Strong programming skills in Python and SQL.
    • Knowledge of data modelling, schema design, and query optimisation techniques.
    • Experience in building batch and streaming data pipelines.
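The batch/streaming distinction in the skills above can be illustrated with a toy tumbling-window aggregation in plain Python. This is not the Dataflow/Beam API, just a hypothetical sketch of the windowing concept that streaming pipelines rely on; all names here are invented for the example.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Count events per key in fixed, non-overlapping time windows.

    Each event is a (timestamp_seconds, key) pair. Events are assigned
    to the window containing their timestamp, mimicking the tumbling
    windows used in streaming frameworks such as Dataflow/Apache Beam.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_secs) * window_secs  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

events = [(5, "click"), (42, "click"), (61, "view"), (65, "click")]
result = tumbling_window_counts(events)
# windows: [0, 60) -> click: 2 ; [60, 120) -> view: 1, click: 1
```

A batch pipeline would instead process a bounded input (e.g. a day's worth of files in Cloud Storage) in one pass; the windowing logic is what lets the same aggregation run continuously over an unbounded stream.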

2. Preferred Skills:

    • Familiarity with orchestration tools like Apache Airflow, Cloud Composer, or similar.
    • Working experience with ETL on another cloud stack (AWS or Azure) is a plus.
    • Exposure to GCP’s AI/ML tools (e.g., Vertex AI, formerly AI Platform, and AutoML) is a plus.
    • Knowledge of CI/CD practices and tools like Git, Jenkins, or Terraform for pipeline deployments.
    • Understanding of data security, governance, and compliance practices on GCP.
    • Excellent communication and collaboration skills.
    • Ability to work in a fast-paced and dynamic environment.

Education & Certifications:

  • Bachelor’s degree in Computer Science or Engineering.
  • GCP Data Engineer or Associate Cloud Engineer certification (preferred but not mandatory).



What happens next?

We know searching for a job is tough and that you want to find the best career and employer for you. We also want to ensure that this position is the best fit for both you and us. Therefore, you will participate in a comprehensive interview process that includes skills interviews with our team. The goal of this process is to allow you to get to know us as we learn more about you.


Brainlabs actively seeks and encourages applications from candidates with diverse backgrounds and identities. We are proud to be an equal opportunity workplace: we are committed to equal opportunity for all applicants and employees regardless of age, disability, sex, gender reassignment, sexual orientation, pregnancy and maternity, race, religion, or belief, and marriage and civil partnerships. If you have a disability or special need that requires accommodation during the application process, please let us know!

Please note that we will never ask you to transfer cash or make any other payment to us in order to apply for a role or to work for Brainlabs. Any such asks are fraudulent and should be reported to the appropriate authorities in your area.
