Sr ETL Data Engineer: W2 role
Information Technology | Hybrid in New York, NY | Contract | From $60.00 to $70.00 per hour
Role: Sr ETL Data Engineer
Location: New York, NY (Hybrid)
Open to U.S. citizens and green card holders (USC/GC); W2 only
About the Role
You will architect and operate scalable, reliable data infrastructure that supports both operational and analytical workloads. This includes managing ETL pipelines, optimizing storage and access patterns, and supporting structured data modeling for analysts, data scientists, and application developers.
Responsibilities
· Design, build, and maintain robust data infrastructure to support analytics, reporting, and sustainability workflows
· Own the architecture and administration of relational and cloud-native databases (e.g., PostgreSQL, Snowflake, Redshift, MySQL)
· Build and manage ETL/ELT pipelines for ingesting and transforming data across systems and third-party sources (a representative sketch follows this list)
· Optimize database schemas, indexes, partitioning strategies, and storage for both query performance and cost-efficiency
· Enable and support analytics platforms by providing clean, well-documented, queryable datasets for downstream use (BI, dashboards, AI/ML)
· Implement and monitor data quality, governance, and access control policies
· Collaborate with data analysts, ESG specialists, and application developers to streamline data access and analytical readiness
· Automate and scale platform operations using infrastructure-as-code, containerization, and cloud-native orchestration tools
· Establish and manage data observability, pipeline monitoring, and alerting for reliability and integrity
· Support ESG data integration for internal and external sustainability disclosures, investment analysis, and regulatory reporting
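To give applicants a concrete feel for the pipeline work above, here is a minimal illustrative sketch, assuming Apache Airflow 2.4+ (one of the orchestrators named under Qualifications). The DAG id, task logic, and ESG field names are hypothetical placeholders, not an actual client pipeline.

```python
# Illustrative only: a minimal daily ETL DAG of the kind this role would
# own. Assumes Apache Airflow 2.4+ (the "schedule" parameter); the DAG id,
# task logic, and ESG field names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull raw records from a third-party source (placeholder data).
    return [{"id": 1, "emissions_tco2e": "12.5"}, {"id": 2}]


def transform(ti):
    # Cast types and drop malformed rows before loading.
    rows = ti.xcom_pull(task_ids="extract")
    return [
        {**r, "emissions_tco2e": float(r["emissions_tco2e"])}
        for r in rows
        if "emissions_tco2e" in r
    ]


def load(ti):
    # Placeholder: a real DAG would write to Snowflake/Redshift/PostgreSQL.
    print(ti.xcom_pull(task_ids="transform"))


with DAG(
    dag_id="esg_daily_ingest",  # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    (
        PythonOperator(task_id="extract", python_callable=extract)
        >> PythonOperator(task_id="transform", python_callable=transform)
        >> PythonOperator(task_id="load", python_callable=load)
    )
```

In practice the load step would target one of the warehouses listed below, and the DAG would carry the data-quality checks and alerting called out in the monitoring bullets.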
Qualifications
· 8+ years of experience (or 5+ with exceptional expertise) in data engineering, data platform infrastructure, or analytics-focused engineering
· Strong command of SQL and database optimization techniques
· Experience with data warehouse systems (e.g., Snowflake, Redshift, BigQuery) and OLAP/OLTP hybrid design
· Proficiency with Python or a similar language for scripting, transformation, and data operations
· Familiarity with data pipeline frameworks (e.g., Airflow, dbt, Luigi) and orchestration practices
· Hands-on experience with cloud platforms (e.g., AWS, Azure) and cloud-native data services
· Experience building data platforms for analytics, BI, or machine learning use cases
· Knowledge of CI/CD, Git, and Agile methodologies applied to data workflows (see the test sketch after this list)
· Solid understanding of data security, access control, and compliance
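As a hedged illustration of "CI/CD applied to data workflows," the sketch below shows the kind of unit test a CI runner would execute on each commit to a transformation repo. The normalize_emissions function is a hypothetical stand-in, not a real client transform.

```python
# Illustrative only: a unit test a CI runner (e.g., GitHub Actions) would
# execute on each commit to a transformation repo. Run with `pytest`.
# normalize_emissions is a hypothetical stand-in, not a real client function.

def normalize_emissions(rows):
    """Cast the emissions field to float; silently drop malformed rows."""
    out = []
    for r in rows:
        try:
            out.append({**r, "emissions_tco2e": float(r["emissions_tco2e"])})
        except (KeyError, TypeError, ValueError):
            continue  # malformed row: excluded rather than loaded downstream
    return out


def test_casts_numeric_strings():
    rows = [{"id": 1, "emissions_tco2e": "12.5"}]
    assert normalize_emissions(rows) == [{"id": 1, "emissions_tco2e": 12.5}]


def test_drops_malformed_rows():
    assert normalize_emissions([{"id": 2, "emissions_tco2e": "n/a"}]) == []
```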
Preferred Skills
· Experience working with ESG/sustainability datasets or regulatory data (e.g., MSCI, CDP, SFDR)
· Knowledge of investment data models, portfolio risk data, or financial instrument structures
· Exposure to data versioning, columnar formats (Parquet, ORC), or data lake architectures (a brief Parquet example follows this list)
· Familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation) and containerization (e.g., Docker, Kubernetes)
· Experience with data observability, monitoring pipelines, and alerting frameworks
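For candidates less familiar with columnar formats, here is a tiny sketch of round-tripping a table through Parquet, assuming the pyarrow library; the column names and output path are illustrative only.

```python
# Illustrative only: writing and reading a small table as Parquet with
# pyarrow. Column names and the output path are hypothetical.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "instrument_id": [101, 102],
    "esg_score": [7.4, 6.1],
})

pq.write_table(table, "esg_scores.parquet")  # columnar, compressed on disk
roundtrip = pq.read_table("esg_scores.parquet")
assert roundtrip.equals(table)
```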