As a Senior Software Engineer with expertise in Data Engineering at Convera, you will collaborate with business stakeholders, product management, and enterprise architecture to define and evolve the future-state data strategy, architecture, and data roadmap.
You will be responsible for:
Contributing to the resolution of complex, multi-faceted situations, applying a solid understanding of the function's policies, procedures, and compliance requirements to meet deliverables.
Reviewing and analyzing complex, multi-faceted, larger-scale, or longer-term software engineering challenges that require in-depth evaluation of multiple factors, including intangible or unprecedented ones.
Identifying, designing, and implementing process improvements that include building/re-engineering data models, data architectures, pipelines, and data applications.
Designing and building infrastructure for the extraction, transformation, and loading of data from a wide range of sources such as Snowflake and SQL Server (a minimal sketch follows this list).
Reviewing tools and technologies to create a data architecture that supports new data initiatives and next-generation products.
Increasing automation and building analytics solutions at scale to serve business requirements.
Managing and improving existing data platform capabilities to meet compliance and regulatory requirements.
Conducting thorough performance assessments of existing SSIS packages and identifying areas for optimization.
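To give a flavor of the extract-and-load work described above, here is a minimal Python sketch that copies rows from SQL Server into Snowflake. It is an illustration only: the connection parameters, credentials, and the orders table are hypothetical, and a production pipeline would typically stage files to S3 and bulk-load with Snowflake's COPY INTO rather than issue row inserts.

```python
# Minimal extract-and-load sketch: SQL Server -> Snowflake.
# All connection details and table/column names are hypothetical placeholders.
import pyodbc
import snowflake.connector

def copy_orders(batch_size: int = 10_000) -> None:
    # Source: a hypothetical SQL Server database reached over ODBC.
    src = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=sqlserver.example.com;DATABASE=sales;UID=etl_user;PWD=change_me"
    )
    # Target: a hypothetical Snowflake schema.
    dst = snowflake.connector.connect(
        account="example_account", user="etl_user", password="change_me",
        warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
    )
    try:
        src_cur = src.cursor()
        src_cur.execute("SELECT order_id, customer_id, amount FROM dbo.orders")
        dst_cur = dst.cursor()
        while True:
            # Fetch in batches so memory stays bounded on large tables.
            rows = src_cur.fetchmany(batch_size)
            if not rows:
                break
            dst_cur.executemany(
                "INSERT INTO orders (order_id, customer_id, amount) "
                "VALUES (%s, %s, %s)",
                [tuple(r) for r in rows],
            )
        dst.commit()
    finally:
        src.close()
        dst.close()
```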
You should apply if you have:
A bachelor's degree or equivalent, along with 8+ years of experience in data engineering, specifically in ETL development.
Experience with data warehousing, data architecture, ETL data pipelines, and/or data engineering environments at enterprise scale, built on Snowflake.
Experience with SSIS packages for large-scale data integration.
The ability to rewrite or enhance SSIS code to improve execution speed and resource utilization.
Experience leading the design, development, and implementation of ETL processes using Talend.
The ability to implement appropriate indexing and partitioning strategies to optimize data access.
Experience enhancing error-handling mechanisms to ensure data integrity and reliability.
Experience creating and maintaining optimal data pipeline architectures for data ingestion and processing using ETL tools such as SSIS and Talend.
Experience creating the infrastructure required for ETL jobs across a wide range of data sources.
Experience collecting data requirements and maintaining metadata about data.
Experience with data storage technologies such as Amazon S3, SQL, and NoSQL.
Experience designing normalized and denormalized database objects.
Experience optimizing AWS jobs from a cost and performance perspective.
Experience writing complex SQL queries and stored procedures.
Technical awareness of data modeling.
The ability to conduct research and identify tasks for automation.
Experience with integration using Talend and the Talend API.
Hands-on experience developing pipelines using Lambda, Glue, Athena, and other key AWS data services, including real-time data ingestion pipeline development.
Experience working with stakeholders across different time zones.
Experience with reporting tools such as Power BI and Tableau.
Hands-on experience setting up Airflow to consume messages through Kafka.
Strong experience implementing and managing Airflow for orchestration (a minimal DAG sketch follows this list).
Familiarity with performance tuning techniques for SQL Server databases.
A proficient understanding of code versioning tools such as Git, GitLab, and GitHub.
Knowledge of SDLC, testing, and CI/CD tooling such as Jenkins, BB, and JIRA.
Banking and finance domain experience is good to have.
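As a flavor of the Airflow orchestration experience called for above, here is a minimal DAG sketch. It assumes Airflow 2.4+; the DAG id, task ids, and stub callables are hypothetical placeholders that a real pipeline would replace with actual extract, transform, and load logic (for example, the Kafka-consuming or AWS Glue tasks mentioned above).

```python
# Minimal Airflow DAG sketch: a daily three-step ETL chain.
# Assumes Airflow 2.4+; the DAG id, task ids, and callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    # Placeholder: pull data from source systems (e.g., a Kafka topic or S3).
    pass

def transform() -> None:
    # Placeholder: apply business transformations.
    pass

def load() -> None:
    # Placeholder: load results into the warehouse.
    pass

with DAG(
    dag_id="daily_etl_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # `schedule_interval` on Airflow versions before 2.4
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Enforce a linear dependency: extract -> transform -> load.
    extract_task >> transform_task >> load_task
```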