Discover your 100% YOU with MicroSourcing!

Position: Data Engineer
Work setup & shift: WFH / Dayshift

Why join MicroSourcing? You'll have:

- Competitive Rewards: Enjoy above-market compensation, healthcare coverage from day one for you and one or more dependents, paid time off with cash conversion, group life insurance, and performance bonuses.
- A Collaborative Spirit: Contribute to a positive and engaging work environment by participating in company-sponsored events and activities.
- Work-Life Harmony: Enjoy a balance between work and life that suits you with flexible work arrangements.
- Career Growth: Take advantage of opportunities for continuous learning and career advancement.
- Inclusive Teamwork: Be part of a team that celebrates diversity and fosters an inclusive culture.

Position Summary

The Data Engineer plays a foundational role in enabling enterprise-wide data access and analytics. As a key member of the small but high-impact Data & Analytics team, which includes a Power BI Developer and a Data Analyst, this role is responsible for acquiring, transforming, and integrating data from a diverse range of source systems, including multiple ERPs and SaaS applications. The ideal candidate combines strong technical skills with a deep curiosity for data and problem-solving across modern and legacy technologies.

This role will be responsible for designing, managing, and enhancing our Medallion Architecture within Microsoft Fabric, ensuring a structured, scalable, and high-quality data foundation that powers downstream reporting, advanced analytics, and business insight generation.

Key Responsibilities

Data Acquisition & Integration
- Design, build, and maintain data pipelines that ingest data from various sources, including Microsoft SQL Server, PostgreSQL, IBM DB2, SaaS APIs, and flat files.
- Prioritise data source integration based on business value, aligning with stakeholder needs and reporting requirements.
- Implement data ingestion patterns that support both batch and real-time pipelines.

Medallion Architecture & Microsoft Fabric Lakehouse
- Own and evolve the Medallion Architecture (Bronze, Silver, and Gold layers) within the Microsoft Fabric Lakehouse environment.
- Structure and transform raw ingested data (Bronze) into cleansed, conformed datasets (Silver) and curated, business-ready data models (Gold) to support reporting and analytics.
- Collaborate with the Power BI Developer and Data Analyst to ensure Gold-layer datasets align with semantic model and reporting needs.

Data Quality, Governance & Security
- Implement controls and monitoring to ensure data accuracy, freshness, and integrity at each stage of the data pipeline.
- Document data flows, transformations, and lineage from source through to Power BI.
- Apply data access governance and classification policies in accordance with enterprise standards.

Tooling, Automation & Innovation
- Use Fabric-native tools (Dataflows Gen2, Pipelines, Notebooks) and languages such as SQL, Python, or Spark to build scalable data workflows.
- Automate routine tasks and enable self-service data access where appropriate.
- Explore new methods to bridge modern and legacy system integration.

Support & Collaboration
- Troubleshoot pipeline failures and support operational continuity for business-critical data feeds.
- Engage with ERP/IT teams to understand schema changes and coordinate integration updates.
- Participate in Agile planning, sprint ceremonies, and the shared delivery roadmap.

Qualifications & Experience
- Bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related field (or equivalent experience).
- Certifications in Microsoft Azure or Fabric data engineering pathways are advantageous.
- 3+ years of experience in data engineering, with demonstrated exposure to hybrid cloud and on-premise ecosystems.
- Strong hands-on experience working with Microsoft SQL Server, PostgreSQL, IBM DB2, or similar platforms.
- Proven experience in building ELT/ETL pipelines across structured, semi-structured, and API-driven sources.
- Practical knowledge of the Medallion Architecture and Microsoft Fabric lakehouse principles (or similar lakehouse paradigms such as Databricks Delta).
- High proficiency in SQL, data modelling, and data transformation.
- Strong Python or Spark skills for data orchestration and enrichment.
- Pragmatic approach to solving complex integration problems with incomplete or legacy documentation.
- Collaborative mindset with strong communication skills across business and technical domains.
- Detail-oriented with a commitment to data quality and long-term maintainability.
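For candidates less familiar with the pattern, the Bronze/Silver/Gold flow described above can be sketched in plain Python. This is only a minimal illustration with made-up record names; in the role itself the equivalent logic would run as Spark, SQL, or Dataflows Gen2 inside Microsoft Fabric:

```python
# Minimal sketch of a Medallion-style Bronze -> Silver -> Gold flow.
# Plain Python stands in for Spark/Fabric; fields and values are illustrative.

# Bronze: raw records exactly as ingested from a source system (may be messy).
bronze = [
    {"order_id": "1001", "amount": " 250.00", "region": "apac"},
    {"order_id": "1002", "amount": "99.50", "region": "EMEA"},
    {"order_id": "1002", "amount": "99.50", "region": "EMEA"},  # duplicate row
]

def to_silver(rows):
    """Silver: cleanse and conform - trim, cast types, standardise codes, dedupe."""
    seen, silver = set(), []
    for r in rows:
        key = r["order_id"]
        if key in seen:          # drop duplicate source rows
            continue
        seen.add(key)
        silver.append({
            "order_id": int(key),
            "amount": float(r["amount"].strip()),
            "region": r["region"].upper(),
        })
    return silver

def to_gold(rows):
    """Gold: curated, business-ready aggregate - here, revenue per region."""
    totals = {}
    for r in rows:
        totals[r["region"]] = totals.get(r["region"], 0.0) + r["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # -> {'APAC': 250.0, 'EMEA': 99.5}
```

Each layer only ever reads from the layer before it, which is what keeps the lineage documented in the Data Quality responsibilities above easy to trace from source through to Power BI.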