Senior Data Engineer
Data Analytics | Remote in New York, NY | Full Time, Contract, and Temporary | From $90.00 to $110.00 per hour
Job Description
Senior Data Engineer 1295-1
- Hourly pay: $90-$110/hr (Pay varies based on the candidate's experience and location)
- Worksite: Leading music streaming company (Remote; candidates must be located in the United States, with Eastern and Central time zones preferred)
- W2 Employment, Group Medical, Dental, Vision, Life, Retirement Savings Program, PSL (Paid Sick Leave)
- 40 hours/week, 6 Month Assignment
A leading music streaming company seeks a Senior Data Engineer. The successful candidate will design and maintain scalable, high-performance data processing pipelines and real-time ingestion systems to support evolving business needs. The ideal candidate has 5+ years of experience in data engineering, strong expertise in Spark and Python, and a solid understanding of distributed systems, data modeling, and cloud-based data warehousing solutions.
Senior Data Engineer Responsibilities:
- Design, develop, and maintain scalable, high-performance data processing systems using tools like Spark, Flink, or Druid
- Build and support real-time and batch data ingestion pipelines
- Integrate data from various sources, ensuring accuracy, completeness, and consistency
- Develop and maintain ETL processes and data models for analytical and operational use
- Write complex SQL queries to support data analysis, reporting, and business intelligence
- Optimize pipeline performance, storage efficiency, and query execution
- Collaborate with data scientists, analysts, and engineering teams to understand requirements and deliver scalable solutions
- Manage data infrastructure, including storage and access systems
- Stay current with evolving data engineering technologies and best practices
- Contribute to system architecture and participate in code reviews to ensure high quality and reliability
Senior Data Engineer Qualifications:
- 5+ years of professional experience in data engineering or a related field
- Bachelor’s degree in Computer Science, Engineering, or a related technical field
- Proficiency in Spark and Python for large-scale data processing
- Strong understanding of distributed systems, data modeling, and ETL processes
- Hands-on experience with cloud-based data warehousing solutions (e.g., Snowflake, Amazon Redshift, Teradata)
- Experience writing and optimizing complex SQL queries
- Familiarity with workflow orchestration tools and data pipeline frameworks
- Strong problem-solving skills and high attention to detail
- Excellent verbal and written communication skills
- Ability to work independently and collaboratively in a cross-functional environment
- Experience with Scala and Databricks is a plus
- Knowledge of real-time data processing tools (e.g., Flink, Druid) is a bonus