*Hi,*


Please review the position below and respond to me at
prave...@xperttech.com



*Job Title: Lead Data Engineer (Hadoop)*

Location: Sunnyvale, CA

Duration: 6+ Months

Pay Rate: $DOE/Hour



*INTERVIEW PROCESS: PHONE SCREEN AND IN-PERSON INTERVIEW - LOCAL
CANDIDATES ONLY*



*EXPERIENCE SKILL MATRIX:*

Total years of experience in IT: years

Total years of US experience: years



*1. COMPLETE SKILL MATRIX (YEARS OF EXPERIENCE IN EACH SKILL):*

1.       10 years developing software applications, including analysis,
design, coding, testing, deployment, and support of applications: years

2.       BS degree in Computer Science, Applied Mathematics, Physics,
Statistics or area of study related to data sciences and data mining:

3.       Hadoop ecosystem: years

4.       Hadoop: years

5.       NoSQL: years

6.       MapReduce programs against structured and unstructured data: years

7.       Loading data to Hive and writing software accessing Hive data:
years

8.       Loading external data to Hadoop environments using tools like
MapReduce, Sqoop, and Flume: years

9.       Scripting languages like Pig to manipulate data: years

10.   Working with very large data sets and building programs that leverage
the parallel capabilities of Hadoop and MPP platforms: years

11.   Hortonworks Hadoop distribution components and custom packages;
designing and deploying data solutions at web scale using the MapReduce
framework: years

12.   Designing solutions using Flume, Avro, and Thrift: years

13.   HDFS, Oozie, MapReduce, Hive, Hadoop, Cassandra, Mongo: years




*Minimum Requirements:*

1.       10 years’ experience developing software applications, including
analysis, design, coding, testing, deployment, and support of
applications.

2.       BS degree in Computer Science, Applied Mathematics, Physics,
Statistics or area of study related to data sciences and data mining.

3.       Proficient in application/software architecture (Definition,
Business Process Modeling, etc.).

4.       Understand application/software development and design.

5.       Collaborative personality, able to engage in interactive
discussions with the rest of the team.

6.       Inquisitive about Big Data technology; current on new ideas and tools.

7.       Good understanding of the Hadoop ecosystem and low level
constructs.

8.       Experience with end-to-end software development processes and
practices (agile/scrum experience preferred).



*Desired Requirements:*

1.       MS degree or PhD degree in Computer Science, Applied Mathematics,
Physics, Statistics or area of study related to data sciences and data
mining.

2.       Ability to work with technical and business-oriented teams.

3.       Experience building Big Data solutions using Hadoop and/or NoSQL
technology.

4.       Ability to work with non-technical resources on the team to
translate data needs into Big Data solutions using the appropriate tools.

5.       Extensive experience developing complex MapReduce programs against
structured and unstructured data.

6.       Experience with loading data to Hive and writing software
accessing Hive data.

7.       Experience loading external data to Hadoop environments using
tools like MapReduce, Sqoop, and Flume.

8.       Experience using scripting languages like Pig to manipulate data.

9.       Experience working with very large data sets and building
programs that leverage the parallel capabilities of Hadoop and MPP
platforms.

10.   Experience with Hortonworks Hadoop distribution components and custom
packages; experience designing and deploying data solutions at web scale
using the MapReduce framework.

11.   Experience designing solutions using Flume, Avro, and Thrift.

12.   Experience designing and delivering services-based solutions on
Hadoop.

13.   Experience interfacing with data-science products and creating
tools for easier deployment of data-science tools.

14.   Experience with designing and developing web-scale real-time systems.

15.   Experience with one or more of the following technologies: HDFS,
Oozie, MapReduce, Hive, Hadoop, Cassandra, Mongo.

16.   Experience in extending open-source Hadoop components.







Thanks/Regards,



*Praveen Kumar*



400 W Cummings Park

Suite #2850

Woburn, MA 01801

Email: prave...@xperttech.com

Phone: 781-797-1042

Fax: 978-405-5040

www.xperttech.com

-- 
You received this message because you are subscribed to the Google Groups 
"American Vendor--IT Consulting" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to sap-vendor+unsubscr...@googlegroups.com.
To post to this group, send email to sap-vendor@googlegroups.com.
Visit this group at https://groups.google.com/group/sap-vendor.
For more options, visit https://groups.google.com/d/optout.
