Tamara is the leading Buy Now, Pay Later provider in the MENA region. Our mission is to empower people to shop through an honest, transparent and inclusive financial solution that lets customers split their payments.
The company operates out of its HQ in Saudi Arabia and has offices across the UAE, Germany and Vietnam.
- Develop and maintain an optimal data platform architecture (data warehouse, data lake, data governance, data protection).
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and OCI (Oracle Cloud Infrastructure, comparable to AWS/GCP) ‘big data’ technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure across national boundaries through multiple data centers and OCI regions.
- Advanced SQL knowledge and experience with relational databases, including query authoring and familiarity with a variety of database systems.
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- We are looking for a candidate with 5+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field, along with experience using the following software/tools:
- Experience with designing and implementing the architecture of a big data platform with open source tools.
- Experience with Python is a must-have for this role.
- Experience with big data and analytics tools: Tableau, Snowflake, Hadoop, Spark, Kafka, etc.
- Experience with relational SQL and NoSQL databases, including Postgres or MySQL and Cassandra or MongoDB.
- Experience with data pipeline and workflow management tools: Airflow, Superset, etc.
- Experience with container orchestration tools: Kubernetes (K8s), Helm.
- Experience with stream-processing systems: Kafka, Spark Streaming, etc.
- Experience with object-oriented/functional languages such as Java or Scala is good to have.