Data Pipeline Engineer [Remote, NO C2C]
Engineering | Remote | Full Time
JOB TYPE: Freelance, Contract Position - No agencies (See notes below)
HOURLY RANGE: Our client is looking to pay $75 - $100 USD / HR
ESTIMATED DURATION: 40 hrs/week - Long-term, ongoing project
Braintrust (usebraintrust.com) is a user-controlled talent network where you keep 100% of what you earn and actually get to own the platform. We've been onboarding some big clients and need a Data Pipeline Engineer for one of them.
Our client's core engineering team is looking for a talented Data Pipeline Engineer to do foundational development work, helping to define and deliver against a coherent product roadmap.
As a Data Pipeline Engineer, you would have a hand in the design, implementation, deployment, and support of their Data Service project. As a member of the Data Service team, you will work on foundational data infrastructure.
You'd optimize their external visibility and help various third-party sources leverage their Data Service. You'd help architect real-time microservices that capture the client's blockchain events, and build systems that support their external API services, where users can monitor network health and other analytics over time.
As a team member, you would be responsible for creating technically viable software with a team of senior engineers specializing in devops, distributed systems, system architecture, testing, and other related fields. You would be collaborating with some of the most diligent minds in the cryptocurrency industry on product direction, both on the core team and among its partners, investors, and advisors. As an early team member, you must feel comfortable working in a fast-paced environment where the solutions aren’t already predefined.
Prior experience with blockchain projects is helpful, but they are primarily interested in your capacity to grow into the role. You should have prior experience developing high-quality backend architecture and a passing knowledge of how such architectural principles apply to blockchain data services.
Our client is looking for individuals who are passionate about being at the forefront of a new technological paradigm and who can lead the design and development of scalable applications.
WHAT YOU'LL BE WORKING ON:
- Build and support a data pipeline platform that allows a customer's behavioral data to directly impact their individualized experiences.
- Learn to evaluate multiple technical approaches and drive consensus with your engineering peers
- Use data to solve real world problems and assist both internal and external partners with data integrations
- Ensure access to data and tooling for the Core Engineering/Product team to leverage for direct customer use
- Develop with sound testing and debugging practices
- Create technical documentation and well-commented code for open source consumption
- Participate in open source development on shared resources with external development teams
REQUIREMENTS:
- Experience with data modeling, data warehousing, and building data pipelines
- Experience with SQL and with building time-series databases (e.g., Postgres)
- Knowledge of data management fundamentals and data storage principles
- Knowledge of distributed systems as they pertain to data storage and computing
- Proficiency in at least one modern scripting or programming language, such as Python or Node.js
- Proven success in communicating with users, other technical teams, and senior management to collect requirements, describe data modeling decisions and data engineering strategy
- Knowledge of software engineering best practices across the development lifecycle, including agile methodologies, coding standards, code reviews, source management, build processes, testing, and operations
- Strong understanding of distributed systems and RESTful APIs
- You’re opinionated about tooling and curious about new trends and technologies in the software development world.
- Independence and self-motivation
- 4+ years engineering experience
NICE TO HAVE:
- Data processing - experience building and maintaining large-scale and/or real-time data processing pipelines using Kafka, Hadoop, Hive, Storm, or ZooKeeper
- Experience with large-scale distributed storage and database systems (SQL or NoSQL, e.g., MySQL, Cassandra)
- Background in academic economics or finance
- Familiarity with Cosmos, Tendermint, or Thorchain
- Familiarity with the Rust or Go programming languages
- Experience in small startup environments
- Experience with a distributed team / remote work
ABOUT THE HIRING PROCESS:
Qualified candidates will be invited to do a screening interview with the Braintrust staff. We will answer your questions about the project and our platform. If we determine it is the right fit for both parties, we'll invite you to join the platform and create a profile to apply directly for this project.
C2C Candidates: This role is not available to C2C candidates working with an agency. If you are a professional contractor who has created an LLC/corp around your consulting practice, this is well aligned with Braintrust and we'd welcome your application.
Braintrust values the multitude of talents and perspectives that a diverse workforce brings. All qualified applicants will receive consideration for employment without regard to race, national origin, religion, age, color, sex, sexual orientation, gender identity, disability, or protected veteran status.