About us 

Tamara is MENA’s leading payments innovator, focused on providing a seamless experience for merchants and customers through fair and transparent financial solutions. The company’s flagship Buy Now Pay Later platform lets shoppers split their payments online and in-store with no interest and no hidden fees. 

Tamara was founded in Riyadh, Saudi Arabia in late 2020 and has since grown to more than 200 employees, with offices in KSA, UAE, Germany, and Vietnam.

The company’s $110 million Series A round in 2021, led by Checkout.com, set a record as the largest ever in the Middle East. To date, Tamara has raised $216 million in equity and debt.

Tamara has over 3 million customers and more than 4,000 partner merchants, including leading global and regional brands like IKEA, SHEIN, Adidas, Namshi, and Jarir, as well as local SMEs.

About the role  

We are looking for a Senior/Mid-Level Data Engineer to join our team. The role is responsible for our data platform and data pipeline architecture, as well as for improving data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder who is comfortable figuring things out from the ground up.

What you will do

  • Develop and maintain an optimal data platform architecture (data warehouse, data lake, data governance, data protection).
  • Assemble large, complex data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and OCI (Oracle Cloud Infrastructure, comparable to AWS/GCP services) ‘big data’ technologies (see the sketch after this list).
  • Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data separated and secure across national boundaries through multiple data centers and OCI regions.
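
For a concrete flavor of the pipeline work above, here is a minimal sketch of a daily ETL job written with Airflow's TaskFlow API (Airflow 2.x; Airflow appears in the tooling list below). Everything in it, from the DAG id to the sample rows, is a hypothetical placeholder for illustration, not a description of Tamara's actual systems.

    # Hypothetical daily ETL sketch using Airflow's TaskFlow API (Airflow 2.x).
    from datetime import datetime, timezone

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def daily_orders_etl():
        @task
        def extract() -> list[dict]:
            # In practice: query yesterday's orders from an OLTP replica via SQL.
            return [{"order_id": 1, "amount_sar": 250.0}]

        @task
        def transform(rows: list[dict]) -> list[dict]:
            # Normalize rows and stamp them with load metadata.
            now = datetime.now(timezone.utc).isoformat()
            return [{**row, "loaded_at": now} for row in rows]

        @task
        def load(rows: list[dict]) -> None:
            # In practice: bulk-load into the warehouse; printing stands in here.
            print(f"would load {len(rows)} rows")

        load(transform(extract()))


    daily_orders_etl()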

What we are looking for 

  • Advanced working knowledge of SQL and experience with relational databases, including query authoring and working familiarity with a variety of database systems.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Experience building processes that support data transformation, data structures, metadata, and dependency and workload management.
  • A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and organizational skills.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • We are looking for a candidate with 5+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience with the following software and tools:
    • Experience designing and implementing big data platform architectures with open-source tools.
    • Experience with big data tools: Tableau, Snowflake, Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres or MySQL and Cassandra or MongoDB.
    • Experience with data pipeline and workflow management tools: Airflow, Superset, etc.
    • Experience with container services: Kubernetes (K8s), Helm.
    • Experience with stream-processing systems: Kafka, Spark Streaming, etc. (see the sketch after this list).
    • Experience with Python is a must-have for this role.
    • Experience with object-oriented/functional languages such as Java or Scala is good to have.
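
Because stream processing with Kafka and Spark comes up in the list above, here is a second hypothetical sketch: a Spark Structured Streaming job that consumes a Kafka topic and parses its JSON payload. The broker address, topic name, and schema are placeholders, and running it requires the spark-sql-kafka connector package on the Spark classpath.

    # Hypothetical sketch: consuming a Kafka topic with Spark Structured Streaming.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import DoubleType, StringType, StructField, StructType

    spark = SparkSession.builder.appName("orders-stream").getOrCreate()

    # Placeholder schema for the JSON messages on the topic.
    schema = StructType([
        StructField("order_id", StringType()),
        StructField("amount_sar", DoubleType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
        .option("subscribe", "orders")                     # placeholder topic
        .load()
        .select(from_json(col("value").cast("string"), schema).alias("event"))
        .select("event.*")
    )

    # Sink to the console for illustration; a real pipeline would write to
    # the lake or warehouse instead.
    events.writeStream.format("console").start().awaitTermination()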

Apply for this Job
