Scala/Spark Data Engineer

Eagle is currently seeking a Scala/Spark Data Engineer for a three (3) month contract opportunity, scheduled to begin immediately.
The successful candidate will be responsible for:
- Analyzing the source data, preparing the data model and mappings, and developing Scala/Spark programs and related components in the Information Management ("IM") work areas relating to claims, insurance, and finance data;
- Using data pipelines to extract data from sources in various formats (viz., flat files, XML, relational tables, Oracle logs), and using tools such as StreamSets, Scala, and Spark programs to validate, transform, and store the data in a big data platform (data lake);
- Developing StreamSets, Spark and Scala programs for data ingestion;
- Transforming data as per the mapping document; and,
- Analyzing data as per requirements and developing data models/mappings.
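To give a flavour of the day-to-day work, the validate-then-transform step in the responsibilities above can be sketched in Scala. The `ClaimRecord` case class, the field layout, and the object name are hypothetical illustrations, not the client's actual data model; in a real pipeline this logic would run inside a Spark map over an RDD or Dataset.

```scala
// Hypothetical record type for a claims flat-file feed.
case class ClaimRecord(claimId: String, amount: Double, currency: String)

object ClaimIngest {
  // Parse one comma-delimited line from a flat-file source; return None on
  // malformed input so invalid rows can be filtered out before loading.
  def parseLine(line: String): Option[ClaimRecord] =
    line.split(',') match {
      case Array(id, amt, cur) if id.trim.nonEmpty =>
        amt.toDoubleOption.map(a => ClaimRecord(id.trim, a, cur.trim))
      case _ => None
    }

  // Apply the parse/validate step across a batch, keeping only valid rows.
  def ingest(lines: Seq[String]): Seq[ClaimRecord] =
    lines.flatMap(parseLine)
}
```

Dropping unparseable rows via `Option` keeps validation separate from transformation; a production job would typically route rejects to a quarantine area rather than silently discard them.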
The qualified candidate must have:
- Advanced skill in Scala;
- Applied knowledge of Big Data platforms, ideally with exposure to the Hadoop ecosystem (HDFS, Pig, Hive, Spark, Big SQL, NoSQL, YARN);
- Experience in designing efficient and robust pipelines;
- Intermediate skill in SQL development;
- Knowledge of data modeling and understanding of different data structures and their benefits and limitations under particular use cases;
- Hands-on experience with structured and unstructured databases; and,
- Experience with enterprise systems such as Guidewire Claim Center, Guidewire Policy Center, and SAP would be an asset.
Eagle is an equal opportunity employer and will provide accommodations during the recruitment process upon request. We thank all applicants for their interest; however, only candidates under consideration will be contacted. Please note that your application does not signify the beginning of employment with Eagle and that employment with Eagle will only commence when placed on an assignment as a temporary employee of Eagle.