Hi
Please find the requirement below and let me know your interest.

*Role: Hadoop Engineer*
*Location: Pleasanton, CA*
*Phone and Skype interviews; locals preferred*
*Duration: 6 Months*

*Description:*
The scope of duties for the mid-level Data Architect includes, but is not limited to, the following: The consultant shall support our client's enterprise architecture team, providing professional services in support of long-term IT strategy and planning, including high-level analysis, professional reports and presentations, and mentoring, support, and training.

*Technical Knowledge and Skills:*
Consultant resources shall possess most of the following technical knowledge and experience:
• 3+ years' development experience in Hadoop.
• Proven expertise in Hadoop production software development.
• Successful track record of providing production support for large-scale distributed systems, with experience creating software/scripts to automate production systems using some of the following: Java, Bash, Python, etc. Must have at least 1 year of experience supporting production issues and problems.
• 4+ years of experience programming in Java.
• Proficient in SQL, NoSQL, Spark, and relational database design, including methods for efficiently retrieving data.
• Information retrieval using Solr and Lucene.
• Experience resolving complex search issues in and around the Lucene/Solr ecosystem.
• Experience designing and developing code, scripts, and data pipelines that leverage structured and unstructured data integrated from multiple sources (TIFF/PDF, relational, web crawl).
• Must have experience building analytics for structured and unstructured data and managing large data ingestion using technologies like Kafka/Flume/Avro/Thrift/Sqoop.
• Must have hands-on experience implementing solutions using MapReduce, Hive, Pig, or HBase.
• Software installation and configuration.
• Participating in requirements and design workshops with our users.
• Developing project deliverable documentation.
• Agile development methodologies.
• Must have a disciplined, methodical, and minimalist approach to designing and building software.
• Experience in Machine Learning, Mahout, Tika, RDF/triple stores, and graph databases.
• Experience with OCR tools.
• Contributions to Apache open-source projects such as Oozie, HBase, or ZooKeeper.

*Thanks and Regards,*
*Manu Priya*
*Sr. Technical Recruiter*
*IDC Technologies*
*1851 McCarthy Boulevard, Suite 116 | Milpitas, CA, USA 95035*
*408-459-5794 [Direct] | [email protected]*
*www.idctechnologies.com*
