Big Data DevOps, Network Big Data CoE


Eagle is currently seeking a Big Data DevOps, Network Big Data CoE for a twelve (12) month contract position, scheduled to begin immediately.

Key Responsibilities

The successful candidate will be responsible for:

  • Participating in all aspects of Big Data solution delivery life cycle including analysis, design, development, testing, production deployment, and support;
  • Developing standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis; 
  • Ensuring Big Data practices integrate into overall data architectures and data management principles (e.g. data governance, data security, metadata, data quality); 
  • Creating formal written deliverables and other documentation, and ensuring designs, code, and documentation are aligned with enterprise direction, principles, and standards; 
  • Training and mentoring teams in the use of the fundamental components in the Hadoop stack; 
  • Assisting in the development of comprehensive and strategic business cases used at management and executive levels for funding and scoping decisions on Big Data solutions; 
  • Troubleshooting production issues within the Hadoop environment; and,
  • Performance tuning of Hadoop processes and applications.

Skills and Qualifications

The successful candidate must have:

  • Proven experience as a Hadoop Developer/Analyst in Business Intelligence; 
  • Strong communication skills, technology awareness, and the ability to interact and work with senior technology leaders;
  • Good knowledge of Agile methodology and the Scrum process; 
  • Delivery of high-quality work, on time and with little supervision;
  • A Bachelor's degree in Computer Science, Management Information Systems, or Computer Information Systems; 
  • Minimum of 4 years of building Java applications; 
  • Minimum of 2 years of building and coding applications using Hadoop components (HDFS, Hive, Impala, Sqoop, Flume, Kafka, StreamSets, HBase, etc.); 
  • Minimum of 2 years of coding in Scala/Spark, Spark Streaming, Java, Python, and HiveQL; 
  • Minimum of 4 years of experience with traditional ETL tools and Data Warehousing architecture; 
  • Strong personal leadership and collaborative skills, combined with comprehensive, practical experience and knowledge in end-to-end delivery of Big Data solutions;
  • Experience in Exadata and other RDBMS is a plus;
  • Proficiency in SQL/HiveQL; and, 
  • Hands-on expertise in Linux/Unix and shell scripting.

Don't miss out on this opportunity, apply online today!

Eagle is an equal opportunity employer and will provide accommodations during the recruitment process upon request. We thank all applicants for their interest; however, only candidates under consideration will be contacted. Please note that your application does not signify the beginning of employment with Eagle and that employment with Eagle will only commence when placed on an assignment as a temporary employee of Eagle. 

  • Posted On: May 17, 2018
  • Job Type: Contract
  • Job ID: 55266
  • Location: Toronto/GTA, ON