Big Data Developer
Eagle is assisting our client in the search for a Big Data Developer. This is a six (6) month contract position with a top-tier organization, to begin immediately.
As a Big Data Developer, you will be accountable for the design, development, testing, and implementation of high-quality solutions for the Enterprise Data Lake.
- Responsible for designing, developing, testing, tuning, and deploying software solutions within the Hadoop ecosystem;
- Designing and implementing product features in collaboration with business and IT stakeholders;
- Designing reusable Java components, frameworks and libraries;
- Working closely with the Architecture group and driving solutions;
- Implementing the data management framework for the Data Lake;
- Supporting the implementation and driving to a stable state in production;
- Reviewing code and providing feedback relative to best practices, performance improvements etc.;
- Demonstrating a substantial depth of knowledge and experience in a specific area of Big Data and Java development;
- Leveraging existing frameworks and standards, contributing ideas to and resolving issues with current framework owners, and creating frameworks or patterns where none exist;
- Professionally influencing and negotiating with other technical leaders to arrive at and implement the optimal solution, considering standards and project constraints;
- Mentoring junior and other team members in all related Big Data technologies;
- The above is a representative summary of the primary duties and responsibilities, not an all-inclusive list; additional duties may be assigned.
What you bring to the table:
- 3+ years of hands-on expertise with Big Data technologies (HBase, Hive, Sqoop, Pig);
- Demonstrated proficiency with Spark, Scala, Python, PySpark, MapReduce, HDFS, Tez;
- Experience building stream-processing systems using solutions such as Storm or Spark Streaming;
- Experience with integration of data from multiple data sources;
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB;
- Knowledge of various ETL techniques and frameworks, such as Flume and Sqoop;
- Experience with various messaging systems, such as Kafka or RabbitMQ;
- Experience with Big Data ML toolkits, such as Mahout and Spark MLlib;
- Good understanding of Lambda Architecture, along with its advantages and drawbacks;
- Experience with Hortonworks or Cloudera distribution; and,
- Proficient understanding of distributed computing principles.
Any of the following would be assets:
- Collaborative personality, able to engage in interactive discussions with the rest of the team;
- Excellent technical analysis/design skills;
- Ability to work with technical and business-oriented teams;
- Track record of delivering projects on time;
- Ability to work with non-technical members of the team to translate data needs into Big Data solutions using the appropriate tools;
- Skills to develop technical resources for methods, procedures, and standards to use during design, development, and unit testing phases of the project;
- Excellent communication skills (both written and oral) combined with strong interpersonal skills;
- Strong analytical skills, combined with the flexibility to work analytically in a problem-solving environment;
- Strong attention to detail; and,
- Strong organizational and multi-tasking skills.
Don’t miss out on this excellent career opportunity, apply online today!
Eagle is an equal opportunity employer and will provide accommodations during the recruitment process upon request. We thank all applicants for their interest; however, only candidates under consideration will be contacted. Please note that your application does not signify the beginning of employment with Eagle and that employment with Eagle will only commence when placed on an assignment as a temporary employee of Eagle.
JOB ID# 57665