Hadoop Administrator :: NYC, NY :: Contract Position

2016-11-30 Thread Sharad Rajvanshi
Hi,
I hope you are doing well!
Please share resumes at rarajeshkumar6...@gmail.com

Job Title  :: Hadoop Administrator
Location   :: NYC, NY
Duration   :: Contract Position
Experience :: 9+ Years

Description:
Experience with: Ranger, Kerberos, AD integration, Syncsort, Pivotal
HAWQ, Wandisco Fusion. This is the full skill set we're seeking:

· Hortonworks
· Ranger
· Pivotal HAWQ
· Wandisco Fusion
· Kerberos
· Ambari
· AD Integration
·         YARN/MapReduce
· Hive
· Hue
· MySQL
· High Availability
· PostgreSQL
· Syncsort
· Pig
· Spark
·         Red Hat Linux
· AWS

The Senior Hadoop Engineer will assist in the setup and production
readiness of the client's Data Lake. The candidate will work on the
installation and configuration of Pivotal HD 3.0, utilizing open-source
components such as Ranger and Ambari. The candidate should have knowledge
of concepts such as LDAP integration, Kerberos, and highly available
architectures.
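Kerberizing a cluster of this kind starts with enumerating a service principal for each service on each host. As a rough illustration only (the hostnames, service names, and realm below are hypothetical, not from this posting):

```python
# Illustrative sketch only: the hosts, services, and realm below are
# hypothetical, chosen to show the service/fqdn@REALM principal pattern
# used when kerberizing a Hadoop cluster.

def service_principals(hosts, services, realm):
    """Build Kerberos service principals of the form service/fqdn@REALM."""
    return [f"{svc}/{host}@{realm}" for host in hosts for svc in services]

principals = service_principals(
    hosts=["nn1.example.com", "dn1.example.com"],  # hypothetical FQDNs
    services=["nn", "dn", "HTTP"],                 # HTTP covers SPNEGO web UIs
    realm="EXAMPLE.COM",
)
# principals[0] == "nn/nn1.example.com@EXAMPLE.COM"
```

In practice each service principal would only be created for the hosts that actually run that service; management tools like Ambari generate this mapping automatically.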

Education: Bachelor's degree or equivalent work experience

Experience: Minimum 3 years as a Hadoop Engineer or Hadoop Administrator

Responsibilities:
·         Responsible for setup, administration, monitoring, tuning,
optimization, and governance of the Hadoop cluster and its components
·         Design and implement new components and emerging technologies in
the Hadoop ecosystem, and successfully execute various Proof-of-Technology
(PoT) exercises
·         Design and implement high-availability options for critical
components such as Kerberos, Ranger, Ambari, the Resource Manager, and
MySQL repositories
·         Collaborate with cross-functional teams (infrastructure, network,
database, and application) on activities such as deploying new
hardware/software, building environments, and capacity uplift
·         Work with various teams to set up new Hadoop users, security, and
platform governance
·         Create and execute a capacity planning strategy for the Hadoop
platform
·         Perform cluster maintenance, including creation and removal of
nodes, using tools such as Ganglia, Nagios, Cloudera Manager Enterprise,
and Ambari
· Performance tuning of Hadoop clusters and various Hadoop
components and routines.
·         Monitor job performance, file system/disk-space usage, cluster and
database connectivity, and log files; manage backups/security and
troubleshoot user issues
· Hadoop cluster performance monitoring and tuning, disk space
management
·         Harden the cluster to support use cases and self-service in a 24x7
model, and apply advanced troubleshooting techniques to critical, highly
complex customer problems
· Contribute to the evolving Hadoop architecture of our services to
meet changing requirements for scaling, reliability, performance,
manageability, and price.
·         Set up monitoring and alerting for the Hadoop cluster; create
dashboards, alerts, and weekly status reports covering uptime, usage,
issues, etc.
·         Design, implement, test, and document a performance benchmarking
strategy for the platform as well as for each use case
· Act as a liaison between the Hadoop cluster administrators and
the Hadoop application development team to identify and resolve issues
impacting application availability, scalability, performance, and data
throughput.
· Research Hadoop user issues in a timely manner and follow up
directly with the customer with recommendations and action plans
·         Work with project team members to propagate knowledge and
efficient use of the Hadoop tool suite, and participate in technical
communications within the team to share best practices and learn about new
technologies and other ecosystem applications
·         Automate deployment and management of Hadoop services, including
monitoring
·         Drive customer communication during critical events and
participate in or lead various operational improvement initiatives
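The capacity-planning responsibility above usually reduces to simple arithmetic over raw disk, HDFS replication factor, and a reserve for non-DFS use. A minimal sketch, with assumed example figures (not requirements from this posting):

```python
# Back-of-the-envelope HDFS capacity planning. All figures are assumed
# examples, not requirements from this posting.

def usable_hdfs_tb(nodes, disks_per_node, tb_per_disk,
                   replication=3, non_dfs_reserve=0.25):
    """Usable HDFS capacity (TB) after replication and a reserve for
    non-DFS use (OS, logs, shuffle/intermediate data)."""
    raw_tb = nodes * disks_per_node * tb_per_disk
    return raw_tb * (1 - non_dfs_reserve) / replication

# 20 nodes x 12 disks x 4 TB = 960 TB raw -> 240 TB usable at 3x replication
capacity = usable_hdfs_tb(nodes=20, disks_per_node=12, tb_per_disk=4)
```

Real capacity plans would also factor in growth rate, compression ratios, and headroom targets, but the shape of the calculation is the same.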

Required Skills:
·         Bachelor's degree in Computer Science, Information Science,
Information Technology, or a related engineering field
·         3 years of strong Hadoop/Big Data experience
·         Strong experience in administering and managing large-scale
Hadoop production clusters
·         Able to deploy a Hadoop cluster, add and remove nodes, keep track
of jobs, monitor critical parts of the cluster, configure high
availability, and schedule, configure, and take backups
·         Strong experience with the Hortonworks (HDP) or Pivotal (PHD)
Hadoop distribution and core Hadoop ecosystem components: MapReduce and HDFS
·         Strong experience with Hadoop cluster
management/administration/operations using Oozie, YARN, Ambari, ZooKeeper,
Tez, and Slider
·         Strong experience with Hadoop security and governance using
Ranger, Falcon, Kerberos, and security best practices
·         Strong experience with Hadoop ETL/Data
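For the cluster-management skills above, adding and removing nodes in Ambari-managed clusters is typically driven through Ambari's REST API. A hedged sketch of how such a request might be composed (the endpoint shape follows Ambari's v1 API, but the base URL, cluster, and host names are invented; consult the Ambari API documentation for the authoritative contract):

```python
# Hypothetical sketch: composing the Ambari REST request that registers a
# component (e.g. a DataNode) on a host. Endpoint shape follows Ambari's
# v1 API; the base URL, cluster, and host names are invented.

def add_host_component_request(base_url, cluster, host, component):
    """Return (method, url) for registering `component` on `host`.
    A follow-up PUT setting HostRoles/state to "INSTALLED" would install it."""
    url = (f"{base_url}/api/v1/clusters/{cluster}"
           f"/hosts/{host}/host_components/{component}")
    return ("POST", url)

method, url = add_host_component_request(
    "http://ambari.example.com:8080", "prod", "dn5.example.com", "DATANODE")
```

In day-to-day operations these calls are usually wrapped in automation (Ansible, scripts) rather than issued by hand, which is what the "automate deployment and management" responsibility refers to.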
