Greetings!


*Kindly share profiles to [email protected]*



US Photo ID proof is a must.



*REQT:*

*Location:* Mercer Island, Washington

*Experience:* 8 Years

*Role:* Sr. Hadoop Administrator

*Job Type:* Contract




Job Description

*As a successful Senior Hadoop Administrator*, you will build and enhance
the tooling needed to deploy and operate Hadoop clusters at scale, across
multiple data centers and cloud providers. You will build tooling to
maintain the health and operations of our data infrastructure. You will
have a constant focus on customer facing service availability, and design
systems that are resilient to failure within heavy growth. You will work
across engineering, development and release management teams to enhance
system operability.



*Responsibilities:*

- Responsible for the implementation and ongoing administration of the Hadoop infrastructure.

- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments.

- Working with data delivery teams to set up new Hadoop users. This includes setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users.

- Cluster maintenance, as well as creation and removal of nodes, using tools such as Ganglia, Nagios, Cloudera Manager Enterprise, and Dell OpenManage.

- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.

- Screening Hadoop cluster job performance and capacity planning.

- Monitoring Hadoop cluster connectivity and security.

- Managing and reviewing Hadoop log files.

- File system management and monitoring.

- HDFS support and maintenance.

- Diligently teaming with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.

- Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.

- Serving as the point of contact for vendor escalations.
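The user-onboarding duty above (Linux account, Kerberos principal, HDFS access test) might look roughly like the following sketch. The username `jdoe`, the realm `EXAMPLE.COM`, and the hostnames are illustrative assumptions, not details from this posting; the exact commands depend on the cluster's Kerberos and HDFS setup.

```shell
# Hypothetical onboarding of a new Hadoop user -- names are placeholders.
USER=jdoe
REALM=EXAMPLE.COM
PRINCIPAL="${USER}@${REALM}"

# 1. Create the Linux account on the gateway node.
sudo useradd -m "$USER"

# 2. Create a Kerberos principal for the user (requires kadmin privileges).
sudo kadmin -q "addprinc -randkey ${PRINCIPAL}"

# 3. Create the user's HDFS home directory and hand over ownership.
sudo -u hdfs hdfs dfs -mkdir -p "/user/${USER}"
sudo -u hdfs hdfs dfs -chown "${USER}:${USER}" "/user/${USER}"

# 4. Smoke-test HDFS and Hive access as the new user.
sudo -u "$USER" hdfs dfs -ls "/user/${USER}"
sudo -u "$USER" hive -e "SHOW DATABASES;"
```

A similar smoke test would be run for Pig and MapReduce access before handing the account to the data delivery team.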



*Skills Required:*

- General operational expertise: good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking.

- Hadoop ecosystem skills such as HBase, Hive, Pig, Mahout, etc.

- Most essential: the ability to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, schedule and configure jobs, and take backups.

- Good knowledge of Linux, as Hadoop runs on Linux.

- Familiarity with open-source configuration management and deployment tools such as Puppet or Chef, and with Linux scripting.

- Knowledge of troubleshooting core Java applications is a plus.
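For context on the NameNode high-availability item above, HA is typically configured in `hdfs-site.xml` along these lines. The nameservice id `mycluster` and the hostnames are placeholder assumptions for illustration only:

```xml
<!-- hdfs-site.xml (excerpt): two NameNodes behind one logical nameservice -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
```

Automatic failover additionally requires a ZooKeeper quorum (`ha.zookeeper.quorum` in `core-site.xml`) and shared edits storage such as a JournalNode quorum.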





*Kindly share the below details for quick process:*

Full Legal Name (as on Driver's License/Passport):

DOB (MM/DD/YYYY):

Current Location, City and State:

Mobile and Home Phone No:

Email ID:

US work authorization:

Highest Educational degree:

Year of Graduation:

Overall years of experience:

Currently on a project:

Willingness to relocate across US:

Passport Number:

Interview Availability:

Available to join from (Availability):

Skype Id:

Expected Hourly Rate (W2/1099/C2C):





Thanks,



N.Sivakumar

[email protected]

Terminal Contacts LLC



*Post REQUIREMENTS/HOT LISTS @ [email protected]*

-- 
You received this message because you are subscribed to the Google Groups "Hot 
List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/directclienteq.
For more options, visit https://groups.google.com/d/optout.
