*Title: Hadoop Engineer*

*Location: Pleasanton, CA*

*Position Type: Contract 12 Months*

*Local candidates only; must be available for a face-to-face (F2F) interview*



*Scope:*

The Hadoop Engineer shall support the State Fund enterprise architecture
team.  The consultant will provide professional services in support of
long-term IT strategy and planning, including high-level analysis,
professional reports and presentations, and mentoring, support, and
training.



State Fund, at its discretion, may utilize consultant resources in any
capacity to meet the needs of State Fund.  Consultant resources may be
assigned by State Fund to lead teams, to work independently, or to work on
teams led by State Fund, by other consultant resources, or by other
consulting firms.



*Tasks*:

The tasks for the Hadoop Engineer include, but are not limited to, the
following:

·         Translate client user requirements into a technical architecture
vision and implementation plan.

·         Design and implement an integrated Big Data platform and
analytics solution.

·         Design and implement data collectors to collect and transport
data to the Big Data platform (a minimal illustrative sketch follows this
list).

·         Implement monitoring solution(s) for the Big Data platform to
monitor the health of the infrastructure.
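
For illustration only, here is a minimal sketch of the kind of data
collector the tasks above describe, written in Python with the
kafka-python client (an assumption on our part; the posting names Kafka
only among the nice-to-have skills). The broker address, topic name, and
log path are all hypothetical.

    # Illustrative collector: tail a log file and ship each new line to a
    # Kafka topic from which the Big Data platform can ingest it.
    # Broker address, topic, and file path below are hypothetical.
    import json
    import time

    from kafka import KafkaProducer  # pip install kafka-python

    producer = KafkaProducer(
        bootstrap_servers="broker-1:9092",          # hypothetical broker
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def follow(path):
        """Yield lines appended to a file, tail -f style."""
        with open(path) as f:
            f.seek(0, 2)                            # start at end of file
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.5)
                    continue
                yield line.rstrip("\n")

    for line in follow("/var/log/app/events.log"):  # hypothetical source
        producer.send("raw-events", {"line": line, "ts": time.time()})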



*Technical Knowledge and Skills:*

Consultant resources shall possess most of the following technical
knowledge and experience:

·         2+ years of hands-on development, deployment, and production
support experience in a Hadoop environment.

·         4-5 years of programming experience in Java, Scala, Spark, and
Python.


·         Proficient in SQL, relational database design, and methods for
data retrieval.

·         Knowledge of NoSQL systems such as HBase or Cassandra.

·         Hands-on experience with Cloudera Distribution (CDH) 5.x, Java,
and Solr.

·         Hands-on experience creating and indexing Solr collections in a
SolrCloud environment (see the first sketch after this list).

·         Hands-on experience building data pipelines using Hadoop
components: Sqoop, Hive, Pig, Solr, MR, Spark, and Spark SQL (see the
second sketch after this list).

·         Must have experience developing HiveQL and UDFs for analyzing
semi-structured/structured datasets.

·         Must have experience with the Spring Framework.

·         Hands-on experience ingesting and processing various file
formats such as Avro, Parquet, SequenceFile, and plain text.

·         Must have working experience with data warehousing and business
intelligence systems.

·         Expertise in Unix/Linux environments, including writing scripts
and scheduling/executing jobs.

·         Successful track record of building automation scripts/code
using Java, Bash, Python, etc., and experience with the production-support
issue-resolution process.
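
For illustration, a first minimal sketch: indexing and querying a Solr
collection from Python using the pysolr client (the client library is our
assumption; the posting does not name one). The node URL, collection name,
and fields are hypothetical; in a SolrCloud deployment the collection would
typically be created first via the Collections API (or solrctl in a CDH
environment).

    # Illustrative only: add documents to a Solr collection and query them
    # back with pysolr. URL, collection, and field names are hypothetical.
    import pysolr

    solr = pysolr.Solr(
        "http://solr-node-1:8983/solr/claims",  # hypothetical collection
        always_commit=True,                     # commit after each request
    )

    solr.add([
        {"id": "1", "title": "First claim document",  "status": "open"},
        {"id": "2", "title": "Second claim document", "status": "closed"},
    ])

    for doc in solr.search("status:open"):
        print(doc["id"], doc["title"])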
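
And a second sketch of a small batch pipeline of the kind the bullets
above describe: read a Hive table, apply a UDF (a Spark SQL UDF in PySpark
here, standing in for a Hive UDF for brevity), and land the result as
Parquet. The table, column, and path names are hypothetical.

    # Illustrative only: Hive table -> UDF -> Spark SQL aggregate -> Parquet.
    # Table, column, and output path names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StringType

    spark = (
        SparkSession.builder
        .appName("claims-pipeline")
        .enableHiveSupport()          # read tables from the Hive metastore
        .getOrCreate()
    )

    # Normalize a free-text status field (stand-in for a Hive UDF).
    @F.udf(returnType=StringType())
    def normalize_status(raw):
        return (raw or "").strip().lower()

    df = (
        spark.table("staging.claims")                 # hypothetical table
        .withColumn("status", normalize_status(F.col("raw_status")))
        .filter(F.col("status") != "")
    )

    df.createOrReplaceTempView("claims_clean")
    summary = spark.sql(
        "SELECT status, COUNT(*) AS n FROM claims_clean GROUP BY status"
    )
    summary.write.mode("overwrite").parquet("/data/curated/claim_counts")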



*Nice-To-Have Skills*

·         Hands-on experience with real-time analytics stacks such as
Spark/Kafka/Storm (see the sketch after this list).

·         Working knowledge of the Aspire Content Processing Platform and
Query Processing Language (QPL).

·         Experience working with users on requirements gathering.

·         Experience with Agile development methodologies.
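
For illustration, a minimal Spark Structured Streaming job (PySpark) that
reads a Kafka topic and keeps a running count per key, the flavor of
real-time analytics this list mentions. The broker, topic, and checkpoint
path are hypothetical, and the console sink is used only so the sketch is
self-contained; a production job would write to a durable sink.

    # Illustrative only: running counts over a Kafka stream.
    # Broker, topic, and checkpoint path are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("realtime-counts").getOrCreate()

    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker-1:9092")  # hypothetical
        .option("subscribe", "raw-events")                   # hypothetical
        .load()
    )

    counts = (
        events
        .select(F.col("key").cast("string").alias("event_key"))
        .groupBy("event_key")
        .count()
    )

    query = (
        counts.writeStream
        .outputMode("complete")              # emit the full table of counts
        .format("console")                   # demo sink only
        .option("checkpointLocation", "/tmp/chk/realtime-counts")
        .start()
    )
    query.awaitTermination()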







*Thanks & Regards,*



*Mohammad*

*Sr. Recruiter*

*Saicon Consultants, Inc.*

(408) 216-2646 Ext 149 (W)

(913) 273-0058 (F)

 Email: mash...@saiconinc.com

*SBA 8(a) Certified /WBE/MBE/DBE/SDB*

*Inc. 500 Company – 2006, 2007, 2008, 2009 & 2010*

*Ranked #1 "Fastest-Growing Area Businesses" - Kansas City Business
Journal - 2006*

*Ranked in Top 10 of Corporate 100 - Ingram's - 2006 & 2007*

*CMMI Level 3 *

*ISO 9001:2008 Certified*
