
JLL
JLL provides professional services in real estate and investment management worldwide.
Senior Data Engineer
Design & develop cloud data pipelines using SQL, PySpark, and Databricks.
Job Highlights
About the Role
The Senior Data Engineer will join the Enterprise Data team as an individual contributor, designing and developing strategic data solutions using the latest cloud technologies. The role operates globally, collaborating with JLLT teams across countries and regions, and works closely with data scientists, analysts, and other stakeholders to deliver scalable infrastructure. The engineer will be responsible for building and maintaining cloud-based data pipelines, ensuring data quality, and documenting solutions for smooth handovers.

- Design, develop, and maintain scalable and efficient cloud-based data infrastructure using SQL and PySpark
- Collaborate with cross-functional teams to understand data requirements, identify potential data sources, and define data ingestion architecture
- Design and implement efficient data pipeline frameworks, ensuring the smooth flow of data from various sources to data lakes, data warehouses, and analytical platforms (a minimal sketch follows this list)
- Troubleshoot and resolve issues related to data processing, data quality, and data pipeline performance
- Stay current with emerging technologies, tools, and best practices in cloud data engineering, SQL, and PySpark
- Collaborate with data scientists, analysts, and other stakeholders to understand their data requirements and deliver solutions that meet their needs
- Document data infrastructure, data pipelines, and ETL processes to ensure knowledge transfer and smooth handovers
- Create complex automated tests and integrate them into testing frameworks
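For a flavor of the day-to-day work, here is a minimal PySpark pipeline sketch of the kind this role involves. It is illustrative only: the paths, table, and column names are invented, not JLL's actual infrastructure.

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical batch pipeline: ingest raw CSV orders, apply basic
# data-quality rules, and write a partitioned curated table.
spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-bucket/raw/orders/")  # invented source path
)

cleaned = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropDuplicates(["order_id"])            # de-duplicate on the business key
    .filter(F.col("amount") > 0)             # drop obviously bad rows
)

(
    cleaned
    .withColumn("order_date", F.to_date("order_ts"))
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/orders/")  # invented sink path
)
```

On Databricks the sink would typically be a Delta table (`.format("delta").saveAsTable(...)`) rather than raw Parquet.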
Key Responsibilities
- Cloud infrastructure
- Data pipelines
- SQL & PySpark
- Data quality
- ETL testing
- Documentation
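As a hedged illustration of the data-quality responsibility above, a SQL-based validation check run from PySpark might look like the sketch below; the table and column names are invented.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

# Hypothetical data-quality gate: row count, NULL business keys, and
# duplicate business keys in a curated table (names are invented).
row = spark.sql("""
    SELECT
        COUNT(*)                                          AS row_count,
        SUM(CASE WHEN order_id IS NULL THEN 1 ELSE 0 END) AS null_keys,
        COUNT(*) - COUNT(DISTINCT order_id)               AS duplicate_keys
    FROM curated.orders
""").collect()[0]

# Fail loudly so the orchestrator (e.g. a scheduled Databricks job)
# marks the run as failed instead of silently publishing bad data.
assert row.null_keys == 0, f"{row.null_keys} rows have a NULL order_id"
assert row.duplicate_keys == 0, f"{row.duplicate_keys} duplicate order_id values"
```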
What You Bring
We are seeking a Senior Data Engineer who is a self-starter and can thrive in a diverse, fast-paced environment as part of our Enterprise Data team.

- Bachelor's degree in Computer Science, Data Engineering, or a related field (Master's degree preferred)
- 5+ years of experience in data engineering or full-stack development, with a focus on cloud-based environments
- Advanced expertise in big data technologies (Python, SQL, PySpark/Spark), with a proven track record on large-scale data projects
- Strong Databricks experience
- Advanced database/backend testing skills, including the ability to write complex SQL queries for data validation and integrity
- Strong streaming and real-time API/service validation skills, including automation
- Experience validating web services (SOAP/WSDL) and microservices (REST) using custom scripts and assertions for data validation and data-driven testing
- Experience with cloud platforms such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP)
- Proficiency in object-oriented programming and software design patterns
- Experience working in a DevOps model, including installing, configuring, and integrating automation scripts on CI/CD tools and GitHub for real-time test-suite execution and troubleshooting
- Experience with unit, functional, integration, user-acceptance, system, and security testing of data pipelines (a small pytest sketch follows this list)
- Strong experience designing and implementing data pipelines, ETL processes, and workflow automation
- Familiarity with data warehousing concepts, dimensional modeling, data governance best practices, and cloud-based data warehousing platforms (e.g., AWS Redshift, Google BigQuery, Snowflake)
- Familiarity with cutting-edge AI technologies and a demonstrated ability to learn and adapt to emerging concepts and frameworks quickly
- Strong problem-solving skills and the ability to analyze complex data-processing issues
- Excellent communication and interpersonal skills for effective collaboration with cross-functional teams
- Attention to detail and a commitment to delivering high-quality, reliable data solutions
- Ability to adapt to evolving technologies and work effectively in a fast-paced, dynamic environment
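The testing expectations above can be made concrete with a small pytest sketch for a pipeline transformation; the function under test and its data are invented for illustration.

```python
import pytest
from pyspark.sql import SparkSession, functions as F

def dedupe_and_filter(df):
    # Hypothetical transformation under test: drop duplicate business keys
    # and remove non-positive amounts.
    return df.dropDuplicates(["order_id"]).filter(F.col("amount") > 0)

@pytest.fixture(scope="session")
def spark():
    # Local Spark session so the suite can run on a CI agent.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_dedupe_and_filter(spark):
    df = spark.createDataFrame(
        [("a", 10.0), ("a", 10.0), ("b", -5.0)],
        ["order_id", "amount"],
    )
    out = dedupe_and_filter(df)
    assert out.count() == 1               # duplicate "a" collapsed, negative "b" dropped
    assert out.first().order_id == "a"
```

Wired into CI/CD (e.g. a GitHub Actions job running `pytest`), tests like this gate every change to the pipeline code.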
Requirements
- Python
- Spark
- Databricks
- AWS
- ETL
- Bachelor's degree
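For the service-validation experience listed under What You Bring, a custom assertion script against a REST endpoint might look like this sketch; the URL and response fields are invented.

```python
import requests

BASE_URL = "https://api.example.com/v1"  # invented endpoint

# Hypothetical smoke check: fetch one record and assert on schema and values.
resp = requests.get(f"{BASE_URL}/orders/123", timeout=10)
assert resp.status_code == 200, f"unexpected status {resp.status_code}"

body = resp.json()
assert {"order_id", "amount", "status"} <= body.keys(), "missing expected fields"
assert body["amount"] >= 0, "amount should be non-negative"
assert body["status"] in {"open", "shipped", "closed"}, "unknown status value"
```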
Work Environment
Office, Full-Time