*Please send replies/profiles to [email protected]*

*- All our requirements are direct client requirements.*
*- Please submit only qualified profiles.*
*- Please share resumes only after you have discussed with your consultant.*

I have multiple Hadoop positions with our various clients. Please review
the requirements below and let me know if you have any matching profiles.
Please mention the requirement number for which you are submitting profiles.



*Requirement #1:*

*Hadoop Operations Engineer*

*Los Altos, CA*

*Responsibilities*

Own and manage several Hadoop clusters in production and development
environments.

Work with engineering software developers to investigate problems and make
changes to the Hadoop environment and associated applications.

Develop monitoring and performance metrics for Hadoop clusters.

Automate deployment and management of Hadoop services. Investigate emerging
technologies relevant to our needs.

Contribute to the evolving architecture of our services to meet changing
requirements for scaling, reliability, performance, manageability, and
price.

Document designs and procedures for building and managing Hadoop clusters.

Train NOC staff to follow support and escalation procedures.



*Qualifications*

2+ years' experience operating Hadoop services, preferably in a large-scale
production environment.

Familiarity with use of standard Hadoop ecosystem features and applications
such as MapReduce, HBase, Hue, and Hive. Knowledge of OpenTSDB or Kafka
would be a plus.

Strong Linux administration and troubleshooting skills.

Solid grounding in systems management automation using popular open-source
tools (e.g., Puppet, Python, Bash).

Experience running and troubleshooting Java applications.

Java programming background would be useful.

Responsible and meticulous approach to work.

Team player with good communication skills and ability to perform under
fire.

Interest in engineering productivity and in supporting software developers.


*Requirement #2:*

*Hadoop/Big Data Engineer*

*Manhattan, NY*

*Required Skills:*

1-7 years of development experience

Strong object-oriented programming skills; Java and SQL experience

Strong analytical skills and the ability to pick up new technologies
quickly

Computer science or an advanced mathematics degree

Cloud/Big Data experience (MapReduce, Hive, Hadoop, AWS, etc.)



*Requirement #3:*

*Hadoop Administrator*

*Bellevue, WA*

*Mandatory Skills:*

Hadoop and Big Data ecosystem – Hadoop, MapReduce, Flume, Hive, HBase,
Pig, R; cluster monitoring and troubleshooting



*Detailed JD:*

Experience with one compiled language (Java, C/C++) and one interpreted
language (Perl, Python, etc.); database and SQL skills a strong plus.

Strong knowledge of, and deployment experience in, the Hadoop and Big
Data ecosystem – Hadoop, MapReduce, Flume, Hive, HBase, Pig, R, etc.

Experience building and supporting large-scale Hadoop environments,
including design, capacity planning, cluster setup, performance tuning,
and monitoring

Responsible for implementation and ongoing administration of Hadoop
infrastructure: cluster maintenance, creation and removal of nodes, and
HDFS support and maintenance.

Backups and restores; cluster monitoring and troubleshooting.

Managing and reviewing Hadoop log files; file system management and
monitoring. Designing, implementing, and maintaining security; data
capacity and node forecasting and planning.

Working closely with the infrastructure, network, database, application,
and business intelligence teams to ensure data quality and availability.

Working with application teams to install operating system and Hadoop
updates, patches, and version upgrades as required.

Extensive experience in TIBCO BW, BC, Axway, and EMS; a self-starter able
to contribute to the Integration Architecture team using the TIBCO tool set.

Extensive experience in implementing A2A, B2B integration.

Excellent communication skills.

Ability to handle the day-to-day activities of the CoE team and communicate
well in a multi-vendor delivery project.



*Requirement #4:*

*Hadoop Cluster Engineer*

*Iselin, NJ*

*Job Description:*

Enterprise Data Management is seeking a candidate to join the Data
Integration Engineering team to deliver platform capabilities to support
Big Data and complex analytics.

This position will design and grow data platform capabilities through
engineering and deploying Big Data software solutions in a stable,
scalable, high performing, cost effective, and secure manner.

The successful candidate will demonstrate the ability to quickly understand
big data software technologies and engineer the infrastructure to support
them.



*Responsibilities:*

Deliver core hardware, software and data infrastructure components in
support of business requirements

Lead platform engineering efforts building and delivering platforms to
support the use of Hadoop components – HDFS, HBase, Hive, Sqoop, Flume,
MapReduce, Python, Pig, HiveQL – and traditional ETL tools

Partner closely within a team and across IT organization

Grow and develop skills of other team members

Ensure quality of the technology solution delivered, e.g. stability,
scalability, availability, performance, cost, security

Provide thought leadership and dependable execution in building and
evolving big data platforms



*Requirements:*

3+ years of experience with Cloudera Hadoop implementations (a major plus)

5+ years of experience with large, complex data implementations

5+ years of experience with ETL and SQL

3+ years of experience in a mentorship role

Working knowledge of Big Data/analytical technologies, e.g. Cloudera
Hadoop, MapReduce, MongoDB

Demonstrated experience with data management tools: relational databases
(e.g. Oracle, Teradata, SQL Server) and data manipulation tools (e.g.
DataStage, Informatica).

Excellent relationship building and interpersonal skills.



*Requirement #5:*

*Hadoop Developer (2 positions)*

*Mountain View, CA (local candidates highly preferred)*

*Start Date: Immediate (expected Tue, Oct 28th)*

*Note: Please do not submit candidates who are on a project and will need
two weeks to join. We need candidates who can join immediately.*

*Job Description:*

This role is with the Content and Web Data team. If the convergence of big
data, mobile, search, and advertising excites you, then you will love us.

We are looking for talented developers who prefer to be a “jack of all
trades”.

Your mission: develop and manage our content pipeline at web scale.

You will be acquiring and transforming data from multiple sources while
making the process as efficient as possible.



*Core Skill Set*

- Pig Latin – 2 yrs experience

- XPath queries – 1-2 yrs experience

- Java – 1-2 yrs experience

- SPARQL queries – 1 yr experience

- Clojure scripting – 1 yr experience

- Regular expressions

- JavaScript – 1 yr experience




Please let me know if you need any more information.



Thanks & Regards,




*Mohan Ganti*

Mail: [email protected]


*Disclaimer: We respect your online privacy. This is not an unsolicited
mail. Under Bill 1618 Title III passed by the 105th U.S. Congress, this
mail cannot be considered spam as long as we include contact information
and a method to be removed from our mailing list. If you are not interested
in receiving our emails, please reply with "REMOVE" in the subject line and
list all the email addresses to be removed.*
