Dear Consultant,
I recently came across your resume posting online and was
wondering if you are still available for, or interested in
hearing about, other career opportunities. Below is a job description I
would like to discuss with you. If you or someone you
know might be interested in hearing more about this or other career
opportunities, please email me a Word version of your
resume along with the best time and number to reach you.


Title: Hadoop Engineer
Location: Pleasanton, CA
Duration: Long-Term Contract (up to 3 years)

NOTE: Candidates must be local for a face-to-face interview.

The tasks for the Hadoop Engineer include, but are not limited to, the following:
•Translate client user requirements into technical architecture vision and
implementation plan
•Design and implement an integrated Big Data platform and analytics solution
•Design and implement data collectors to collect and transport data to the
Big Data Platform.
•Implement monitoring solution(s) for the Big Data platform to monitor the
health of the infrastructure.

Consultant resources shall possess most of the following technical
knowledge and experience:
•3+ years' development experience in Hadoop.
•Proven expertise in Hadoop production software development.
•Successful track record of providing production support for large-scale
distributed systems, with experience in creating software/scripts to
automate production systems using some of the following: Java, bash,
Python, etc. Must have at least 1 year of experience supporting production
issues and problems.
•4+ years of experience programming in Java.
•Proficient in SQL, NoSQL, Spark and relational database design and methods
for efficiently retrieving data.
•Information retrieval using Solr and Lucene.
•Experience resolving complex search issues in and around the Lucene/Solr stack.
•Experience designing and developing code, scripts, and data pipelines that
leverage structured and unstructured data integrated from multiple sources
(TIFF/PDF, relational databases, web crawls).
•Must have experience in building analytics for structured and unstructured
data and managing large data ingestion using technologies like
Kafka/Flume/Avro/Thrift/Sqoop.
•Must have hands on experience in implementing solutions using MR, Hive,
Pig, or HBase.
•Software installation and configuration.
•Participating in requirements and design workshops with our users.
•Developing project deliverable documentation.
•Experience with Agile development methodologies.
•Must have a disciplined, methodical, and minimalist approach to designing
and building software.
•Experience in Machine Learning, Mahout, Tika, RDF/Triple Stores, Graph
•Experience with OCR tools
•Contributions to Oozie, HBase, ZooKeeper, or other open-source Apache
projects.

Dayle Wilson - IT Technical Recruiter
ITBrainiac Inc.
Direct : 201-448-4949

You received this message because you are subscribed to the Google Groups "SAP 
or Oracle Financials" group.