Senior Data Engineer - New York New York
Company: Synechron Inc. Location: New York, New York
Posted On: 04/28/2024
We are:
At Synechron, we believe in the power of digital to transform businesses for the better. Our global consulting firm combines creativity and innovative technology to deliver industry-leading digital solutions. Synechron's progressive technologies and optimization strategies span end-to-end Artificial Intelligence, Consulting, Digital, Cloud & DevOps, Data, and Software Engineering, servicing an array of noteworthy financial services and technology firms. Through research and development initiatives in our FinLabs, we develop solutions for modernization, from Artificial Intelligence and Blockchain to Data Science models, Digital Underwriting, mobile-first applications, and more. Over the last 20+ years, our company has been honored with multiple employer awards recognizing our commitment to our talented teams. With top clients to boast about, Synechron has a global workforce of 13,950+ and 52 offices in 20 countries within key global markets.

Our challenge:
This position is for a Cloud Data Engineer with a background in Python, PySpark, SQL, and data warehousing for enterprise-level systems. The position calls for someone who is comfortable working with business users and who brings business analyst expertise.

Additional Information - New York Only*
The base salary for this position will vary based on geography and other factors. In accordance with New York law, the base salary for this role if filled within New York is $130k - $145k/year & benefits (see below).

The Role
Responsibilities:
- Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity.
- Design, develop, and deploy Spark programs in the Databricks environment to process and analyze large volumes of data.
- Apply experience with Delta Lake, data warehousing, data integration, cloud platforms, design, and data modeling.
- Develop programs proficiently in Python and SQL.
- Apply dimensional data modeling for data warehouses.
- Work with event-based/streaming technologies to ingest and process data.
- Work with structured, semi-structured, and unstructured data.
- Optimize Databricks jobs for performance and scalability to handle big data workloads.
- Monitor and troubleshoot Databricks jobs, identify and resolve issues or bottlenecks.
- Implement best practices for data management, security, and governance within the Databricks environment.
- Design and develop Enterprise Data Warehouse solutions.
- Write SQL queries and programs proficiently, including stored procedures, and reverse engineer existing processes.
- Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.

Requirements
You are:
- 9+ years of overall experience required.
- 5+ years of Python coding experience.
- 5+ years of SQL Server-based development with large datasets.
- 5+ years of experience developing and deploying ETL pipelines using Databricks and PySpark.
- Experience with a cloud data warehouse such as Synapse, BigQuery, Redshift, or Snowflake.
- Experience in data warehousing: OLTP, OLAP, dimensions, facts, and data modeling.
- Previous experience leading an enterprise-wide Cloud Data Platform migration with strong architectural and design skills.
- Experience with cloud-based data architectures, messaging, and analytics.
- Cloud certification(s).
- At minimum, a bachelor's degree in an engineering or computer science discipline.
- Master's degree strongly preferred.

It would be great if you also had:
- Experience with Airflow.

We can offer you: