Experience
Data Engineer — NatWest Bank
September 2025 – Present | Greater London, UK
Building and maintaining production data platforms that process millions of transactions daily, supporting analytics and regulatory reporting across the organization.
Key Responsibilities:
- Design and implement real-time data ingestion pipelines using Kafka and Amazon S3
- Develop distributed data processing workflows with PySpark for transformation and validation
- Build curated data models in Snowflake supporting downstream analytics and reporting
- Orchestrate end-to-end pipelines using Apache Airflow DAGs with dependency management and monitoring
- Collaborate with cross-functional teams on data architecture, governance, and documentation
- Optimize SQL and PySpark workloads for performance and cost-efficiency
- Publish business-ready datasets and dashboards via Tableau
Technologies: Kafka, PySpark, Amazon S3, Snowflake, Apache Airflow, Python, SQL, Tableau, Jira, Confluence
Impact: Enabled reliable data flows supporting critical business operations, analytics, and regulatory compliance
Data Engineer — Accenture
July 2023 – August 2025
Delivered large-scale cloud data engineering solutions for enterprise clients across multiple industries, working with both Azure and AWS cloud platforms.
Key Responsibilities:
- Designed and implemented scalable ETL/ELT pipelines using Azure Databricks, Azure Data Factory, and Snowflake
- Developed PySpark workflows for distributed data processing and transformation
- Built reusable transformation layers with dbt, ensuring consistent business logic and modular data models
- Automated pipeline deployments using CI/CD (GitHub Actions, Terraform)
- Implemented data quality checks, schema evolution strategies, and governance controls
- Optimized query performance across Databricks, Snowflake, and cloud data lakes
- Leveraged Microsoft Fabric for unified analytics workflows and lakehouse architecture
- Produced technical documentation and data flow diagrams supporting cross-team collaboration
Technologies: Azure Databricks, Azure Data Factory, Azure Data Lake, Snowflake, PySpark, dbt, Microsoft Fabric, AWS (S3, Glue), Python, SQL, Terraform, GitHub Actions
Impact: Delivered data platforms enabling analytics, reporting, and machine learning for Fortune 500 clients
Data Engineer — Dpoint Group
May 2022 – June 2023 | Barcelona, Spain
Contributed to business intelligence and analytics initiatives, building ETL pipelines and reporting solutions that supported operational decision-making.
Key Responsibilities:
- Developed and maintained ETL processes using SSIS to extract data from SAP BW
- Created interactive Power BI dashboards for executive insights and KPI monitoring
- Wrote optimized SQL queries, views, and stored procedures for reporting and analytics
- Automated recurring reporting workflows using Python and Excel VBA
- Supported migration of on-premises ETL processes to Azure Data Factory
- Implemented version control and documentation standards for data pipelines
Technologies: SSIS, SAP BW, Power BI, Azure Data Factory, Python, SQL, Excel VBA, Git
Impact: Improved reporting efficiency and reduced manual workload through automation
Education
Master of Business Administration (MBA)
York St John University, London, United Kingdom
2023 – 2024
Bachelor of Internet & Communication Technology
Tor Vergata University, Rome, Italy
2020 – 2023
Certifications
Microsoft Certified: Fabric Data Engineer Associate
Microsoft | January 2026 – January 2027
Credential ID: 1D3467-780915
Validated expertise in building and managing data engineering solutions on Microsoft Fabric, including:
- Data lakehouse architecture with OneLake and Delta Lake
- Building data pipelines with Data Factory and Dataflow Gen2
- Real-time analytics with KQL databases and Eventstream
- Data transformation with Notebooks and Spark
- Power BI integration and semantic modeling
- Data governance and security in Fabric workspaces
Additional Training & Certifications
In Progress:
- AWS Certified Solutions Architect
- Azure Data Engineer Associate
Completed:
- Apache Airflow workflow orchestration
- Databricks certification training for Apache Spark
- dbt Analytics Engineering fundamentals
Skills Summary
Programming: Python (PySpark, Pandas), SQL, Shell Scripting
Cloud Platforms: AWS, Azure, Microsoft Fabric
Data Engineering: Apache Kafka, Apache Airflow, Snowflake, dbt, ETL/ELT
Databases: Snowflake, Azure SQL, Redshift, PostgreSQL, MySQL
DevOps: Docker, Terraform, CI/CD, Git, GitHub Actions
BI Tools: Tableau, Power BI
Methodologies: Agile/Scrum, Data Modeling, Data Architecture