Your browser cookies must be enabled in order to apply for this job. Please contact us if you need further instructions on how to do that.

Internal Application: Hadoop Architect/Lead - Data Infrastructure

1111 - Technical Operations | Redwood City, CA | Full Time

Job Description

About Amobee:

Amobee is a technology company that transforms the way brands and agencies make marketing decisions. The Amobee Marketing Platform enables marketers to plan and activate cross-channel, programmatic media campaigns using real-time market research, proprietary audience data, advanced analytics, and more than 150 integrated partners, including Facebook, Instagram, Pinterest, Snapchat, and Twitter. Amobee is a wholly owned subsidiary of Singtel, one of the largest communications technology companies in the world, reaching over 640 million mobile subscribers. The company operates across North America, Europe, the Middle East, Asia, and Australia. For more information, visit our website or follow @amobee.


The Data Infrastructure team at Amobee is responsible for building and managing Big Data ecosystem components, including very large Hadoop clusters, Spark, rapidly changing RDBMS environments, and several types of storage environments. This is an agile environment that provides the opportunity to solve complex scaling challenges.

As a member of this team, you will be an integral part of managing 24x7 big data components. Our Hadoop environment is currently tens of petabytes in size. You will contribute to day-to-day operational activities, as well as to performance improvements and optimization of our Hadoop ecosystem components. You'll get the chance to take on complex and interesting problems as part of a fast-paced, highly collaborative team. We've built a complex analytic system around our Hadoop ecosystem for scalability and high availability, so it's imperative that you approach administration with an emphasis on repeatability, testability, and consistency. The demands on this system are increasing rapidly as new types of data ingestion grow and as we add more functionality and products.

The successful candidate for this position will be self-motivated, with an attitude of getting things done. They should be able to see the big picture while also deep-diving into details to solve complex problems.

  • Contribute actively to improving Amobee's Big Data ecosystem architecture.
  • Perform in-depth analysis of Hadoop-based workloads, take on project-based work, design solutions to issues, and evaluate their effectiveness.
  • Develop and maintain operational best practices for the smooth operation of large Hadoop clusters.
  • Design, develop, and manage diagnostic and instrumentation tools for troubleshooting and in-depth analysis.
  • Optimize and tune the Hadoop environment to meet performance requirements.
  • Partner with Hadoop developers to build best practices for Data Lake, Warehouse, and Analytics environments.
  • Investigate emerging technologies in the Hadoop ecosystem that are relevant to our needs, and implement them.
  • Prototype big data technologies to solve business use cases.

Required Qualifications

  • 5-7+ years of hands-on experience deploying and administering multi-petabyte-scale Hadoop clusters.
  • Strong problem-solving and troubleshooting skills.
  • Expert-level understanding of Hadoop design principles and the factors that affect distributed system performance.
  • Well versed in installing, administering, and managing Hadoop clusters running CDH5, YARN, Spark, and Cloudera Manager.
  • Sound understanding of Hadoop ecosystem products such as HDFS, MapReduce, Spark, Storm, Pig, Oozie, ZooKeeper, and Cloudera Manager.
  • Solid scripting experience with at least two of the following: Shell, Python, Ruby, or Perl.
  • Good knowledge of implementing metric collection for monitoring and alerting.
  • Hands-on experience with automation tools like Puppet.
  • BS/MS degree in computer science or a related field.
  • Good knowledge of Hadoop cluster connectivity and security.

Desired Qualifications

  • Experience with RDBMS, NoSQL databases, and Java programming.
  • Linux administration and troubleshooting skills.
  • Working knowledge of open-source projects like Git, Nagios, TSDB, Docker, and OpenStack.
  • Good knowledge of common ETL packages/libraries, data ingestion, and programming languages.

Location: Redwood City, CA

In addition to our great environment, we offer a competitive base salary, bonus program, stock options, employee development programs, and other comprehensive benefits. Please send a cover letter along with your resume when applying to the position of interest. We are an Equal Opportunity Employer. No phone calls and no recruiting agencies, please.