*Need US Citizen (USC) or Green Card (GC) holder; open for CTC*


*Position: Lead Hadoop Big Data Developer / Data Lake*
Location: Basking Ridge, NJ
Duration: 6+ month contract
Interview process: Phone, Skype, or F2F (any combination of the three)


Project: Big Data platform (Data Lake)
Team: Part of a team of 5 developers

Responsibilities:

·         Lead the design and development of ETL data flows using Hive / Pig.

·         Load data from disparate data sets.

·         Pre-process data using Hive and Pig.

·         Translate complex functional and technical requirements into
detailed design.

·         Perform analysis of data sets and uncover insights.

·         Maintain security and data privacy.

·         Implement data flow scripts using Unix / Hive / Pig scripting (a minimal illustrative sketch follows this list).

·         Propose best practices/standards.

·         Work with the System Analyst and Development Manager on a day-to-day basis.

·         Work with other team members to accomplish key development tasks.

·         Work with the service delivery (support) team on transition and stabilization.
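
For illustration only, here is a minimal sketch of the kind of pre-processing step described above, written as a map-only Hadoop MapReduce job in Java rather than a Hive or Pig script; the class names, the pipe-delimited record layout, the expected field count, and the input/output paths are assumptions, not details from this posting.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

/**
 * Map-only job that drops malformed records from a delimited feed
 * before the data is handed to downstream Hive/Pig processing.
 * The field count and paths are illustrative assumptions.
 */
public class CleanFeedJob {

  public static class CleanMapper
      extends Mapper<Object, Text, NullWritable, Text> {

    private static final int EXPECTED_FIELDS = 12; // assumed record layout

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] fields = value.toString().split("\\|", -1);
      // Keep only rows with the expected number of pipe-delimited fields.
      if (fields.length == EXPECTED_FIELDS) {
        context.write(NullWritable.get(), value);
      } else {
        context.getCounter("CleanFeed", "MALFORMED").increment(1);
      }
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "clean-feed");
    job.setJarByClass(CleanFeedJob.class);
    job.setMapperClass(CleanMapper.class);
    job.setNumReduceTasks(0); // map-only: pass-through filter
    job.setOutputKeyClass(NullWritable.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

A comparable filter could just as well be expressed as a single Pig FILTER statement or a Hive INSERT ... SELECT ... WHERE query over the raw table.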


Required Software / Tools:

·         We are looking for a Lead Big Data Developer who will lead and work on collecting, storing, processing, and analyzing very large data sets.

·         The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.

·         You will also be responsible for integrating them with the
architecture used across the company.

·         Expertise and at least 5 years of hands-on experience in the Java Enterprise ecosystem (design, development, test, and deployment to production) are required.

·         At least 4 years of hands-on experience with Hadoop/Hive/MapReduce/Pig/Sqoop/Flume (design, development, test, and deployment on a production Hadoop cluster), a demonstrated ability to segment and organize data from disparate sources, and knowledge of data security and encryption models are desirable.

·         Experience working with Hadoop/HBase/Hive/MRv1/MRv2 and ETL processing with Hive/Pig scripts is required (see the Hive JDBC sketch after this list).

·         Demonstrated experience with Unix scripting is required.

·         Any experience in the health care insurance industry is a plus.
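
To make the Hive/ETL bullets above concrete, here is a minimal sketch of running a Hive query from a Java application over the HiveServer2 JDBC driver; the host, port, credentials, and the claims table are placeholders, not details from this posting.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Minimal HiveServer2 JDBC example: run a Hive query from a Java
 * application. The endpoint, credentials, and table are placeholders.
 */
public class HiveQueryExample {
  public static void main(String[] args) throws Exception {
    // HiveServer2 JDBC driver shipped with Apache Hive.
    Class.forName("org.apache.hive.jdbc.HiveDriver");

    String url = "jdbc:hive2://hive-host:10000/default"; // placeholder endpoint
    try (Connection conn = DriverManager.getConnection(url, "etl_user", "");
         Statement stmt = conn.createStatement()) {

      // Example aggregation over a hypothetical claims table.
      ResultSet rs = stmt.executeQuery(
          "SELECT claim_status, COUNT(*) AS cnt "
              + "FROM claims GROUP BY claim_status");

      while (rs.next()) {
        System.out.println(rs.getString("claim_status") + "\t" + rs.getLong("cnt"));
      }
    }
  }
}

In a production data flow the same query would more typically be invoked from a Unix shell script via beeline and scheduled alongside the rest of the pipeline.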


Must have skills/attributes:

·         Pig / Hive / Hadoop MR / Unix scripting

Nice to have skills/attributes:

·         Experience with HBase, Talend, and NoSQL databases; experience with Apache Spark or other streaming big data processing is preferred.

·         Previous experience in the insurance industry.


Thanks

sanj...@technocraftsol.com
