Job Summary:
We are seeking a detail-oriented and highly analytical Data Engineer to join our growing team. This role focuses on designing, building, and maintaining reliable data pipelines for efficient data ingestion, transformation, and delivery. The ideal candidate will develop scalable workflows, optimize ETL/ELT processes, and ensure data quality across multiple sources. You will work closely with cross-functional teams, including product, finance, business, and compliance, to deliver accurate, timely, and actionable data that supports key decision-making and compliance requirements. Leveraging modern cloud technologies, you will keep our data infrastructure robust, scalable, and aligned with business needs.

Key Responsibilities:
- Analyze large, complex datasets to identify trends, anomalies, and inconsistencies that inform business decisions.
- Design, build, and maintain scalable data pipelines for ingestion, transformation, and storage.
- Develop and optimize ETL/ELT workflows using Python, SQL, Databricks, and AWS services.
- Collaborate with cross-functional teams to gather data requirements and deliver high-quality solutions.
- Ensure data accuracy and integrity through reconciliation processes and timely reporting.

Qualifications:
- Proficiency in SQL for data querying and transformation.
- Hands-on experience with Python for data processing and automation.
- Experience with Databricks or other data lake / data warehouse solutions.
- Familiarity with AWS services such as S3, Lambda, Glue, Step Functions, and Athena.

Nice to Have:
- Experience in the banking or fintech industry.