
Data Architect

Software Development | Cupertino, CA | Full Time

Job Description

At Globant, we dream and build Digital Journeys that matter to millions of users. We do that by leveraging engineering, design and innovation with our own industry-leading practices, like our Agile PODs and specialized Studios.

We want you to join us in creating these journeys for the biggest clients in tech, retail, travel, banking, ecommerce and media, revolutionizing and growing their core businesses while helping them (and you!) stay ahead of the curve.

WHAT ARE WE LOOKING FOR?

We are seeking an experienced Data Architect to join our team and build tools for a distributed data processing platform at a challenging high-tech company. You will develop new algorithms to process large-scale data efficiently, working with a large team of interdisciplinary engineers.

Responsibilities:

  • Define and execute ETLs using Apache Spark on Hadoop, among other Big Data technologies (see the sketch after this list)

  • Design and implement data pipelines for processing and aggregating data

  • Define message-oriented architectures using Kafka

  • Orchestrate data ingestion into data stores such as HBase

  • Work with stakeholders and a cross-functional team to understand requirements, evaluate design alternatives and architect complex solutions

  • Build collaborative partnerships with software architects, technical leads and key individuals within other functional organizations

  • Ensure code quality by actively participating in code reviews. Test solutions and ensure they meet specifications and performance requirements

  • Build and foster a high-performance engineering culture, mentor team members and provide your team with the tools and motivation to make things happen

  • Lead the analysis and design of quality technical solutions
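
To give a concrete flavor of the ETL work described in the first bullet, below is a minimal sketch of a Spark batch job in Scala. The HDFS paths, the event schema (userId, eventType, ts) and the job name are hypothetical and purely illustrative; actual pipelines will depend on the client's data:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object DailyEventCounts {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("daily-event-counts")
          .getOrCreate()

        // Extract: load raw event logs from HDFS (path is hypothetical).
        val events = spark.read.json("hdfs:///data/raw/events/")

        // Transform: count events per user per day.
        val daily = events
          .withColumn("day", to_date(col("ts")))
          .groupBy(col("userId"), col("day"))
          .agg(count("*").as("eventCount"))

        // Load: write the aggregates back as day-partitioned Parquet.
        daily.write
          .mode("overwrite")
          .partitionBy("day")
          .parquet("hdfs:///data/curated/daily_event_counts/")

        spark.stop()
      }
    }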

Requirements

  • BS or MS in Computer Science or related technical field or equivalent combination of education/experience

  • A minimum of 8 years of experience in global software development and deployment

  • Experience with Java or Scala

  • Attention to detail and a strong grounding in computer science algorithms, data structures, and distributed algorithms

  • Significant experience with the Hadoop ecosystem (Spark, Hive, HBase)

  • Knowledge of data streaming and message queue middleware such as Kafka

  • Deep knowledge of Extract, Transform, Load (ETL) and distributed processing techniques such as MapReduce (a classic example appears after this list)

  • Experience working in large-scale enterprise organizations with cross-functional teams

  • Excellent verbal and written communication skills

  • Great problem solving and analytical skills

  • Experience building solutions on any major cloud provider (AWS, Azure, GCP) is a plus

  • Experience with graph, data classification and clustering algorithms in distributed environments is a plus

  • Experience defining REST services and platform integrations is a plus
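
As a reference point for the MapReduce technique named above, here is the classic word count expressed in Spark's RDD API. The input and output paths are hypothetical; the sketch only illustrates the map/shuffle/reduce pattern:

    import org.apache.spark.sql.SparkSession

    object WordCount {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("word-count").getOrCreate()
        val sc = spark.sparkContext

        sc.textFile("hdfs:///data/raw/corpus/")            // read input splits
          .flatMap(_.split("\\s+"))                        // map: line -> words
          .map(word => (word, 1))                          // map: word -> (word, 1)
          .reduceByKey(_ + _)                              // reduce: sum per key after shuffle
          .saveAsTextFile("hdfs:///data/out/word_counts/") // one part file per partition

        spark.stop()
      }
    }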

We are interested in hard-working, fast-learning talent, and we have the know-how and scale to help you make your own career path. If you seek an entrepreneurial, flexible and team-oriented culture, come join us. We are ready.
