Hi,

If you are interested in and available for this job, please reply with your latest resume, expected pay rate, availability, and the other details required for submission to *n...@apetan.com*.



*Job Title:* Hadoop Admin

*Project Location:* DC/Maryland Area (Bowie, MD)

*Duration:* Long term


*Skills Required and Job Description:*

*Mode of Interview:* Skype

*Requirements:* Hadoop Design, Implementation and Support


We develop cutting-edge software solutions that are helping to revolutionize the Informatics industry. We are seeking technical and business professionals with advanced leadership skills to join our tight-knit team at our headquarters in Maryland. This is an opportunity to work with fellow best-in-class IT professionals to deploy new business solutions using the latest Big Data technologies, including a wide array of open-source tools.

This position requires extensive experience with the Hadoop platform, using Sqoop, Pig, Hive, and Flume to design, build, and support highly scalable data processing pipelines.

*Hadoop Administrator*

Responsibilities:
Work with Data Architects to plan and deploy new Hadoop environments and expand existing Hadoop clusters.
Design Big Data solutions capable of supporting and processing large sets of structured, semi-structured, and unstructured data.
Provide administration, management, and support for large-scale Big Data platforms on the Hadoop ecosystem.
Provide Hadoop cluster capacity planning, maintenance, performance tuning, and troubleshooting.
Install, configure, support, and manage Hadoop clusters using Apache and Cloudera (CDH3, CDH4) distributions and YARN.
Install and configure Hadoop ecosystem components such as Sqoop, Pig, Hive, Flume, and HBase, along with the Hadoop daemons on the cluster.
Monitor and follow proper backup and recovery strategies for high availability.
Configure the various property files such as core-site.xml, hdfs-site.xml, and mapred-site.xml (a minimal illustration follows this list).
Monitor multiple Hadoop cluster environments using Ganglia and Nagios, and monitor workload, job performance, and capacity using Cloudera Manager.
Define and schedule all Hadoop/Hive/Sqoop/HBase jobs.
Import and export data from web servers into HDFS using various tools.
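
As a minimal illustration of the property files named in the list above (the NameNode host and port are placeholders, not details from this posting), a core-site.xml might contain:

    <configuration>
      <property>
        <!-- placeholder NameNode address; the real value depends on the cluster -->
        <name>fs.defaultFS</name>
        <value>hdfs://namenode.example.com:8020</value>
      </property>
    </configuration>

hdfs-site.xml and mapred-site.xml use the same property name/value layout for HDFS settings (for example, dfs.replication) and MapReduce settings, respectively.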

Required Skills:
Extensive experience in Business Intelligence, data warehousing, analytics, and Big Data.
Experience with hardware architectural guidance, planning and estimating cluster capacity, and creating roadmaps for Hadoop cluster deployment.
Expertise in the design, installation, configuration, and administration of Hadoop ecosystem components such as Sqoop, Pig, Hive, Flume, and HBase, as well as the Hadoop daemons on the cluster.
Working knowledge of capacity planning, performance tuning, and optimization of the Hadoop environment.
Experience with HDFS data storage and support for running MapReduce jobs.
Experience in commissioning, decommissioning, balancing, and managing nodes on Hadoop clusters.
Experience with Hadoop cluster capacity planning, maintenance, performance tuning, and troubleshooting.
Good understanding of partitioning concepts and the different file formats supported in Hive and Pig.
Experience in importing and exporting data using Sqoop between HDFS and relational database systems or mainframes (a sample command follows this list).
Hands-on experience with data analytics tools such as Splunk, Cognos, and Tableau.
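
For illustration only, a Sqoop import of the kind described in the list above might look like the following; the JDBC connection string, username, table, and target directory are placeholders, not details from this posting:

    # pull a relational table into HDFS with 4 parallel map tasks
    sqoop import \
      --connect jdbc:mysql://dbhost.example.com/sales \
      --username etl_user -P \
      --table orders \
      --target-dir /data/raw/orders \
      --num-mappers 4

The corresponding sqoop export reverses the direction, reading files from an HDFS directory (--export-dir) and writing them into a relational table.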



-- 

*Nick G.* | *Technical Recruiter* | *Apetan Consulting LLC*

Tel: 201-620-9700 * 141 | 15 Union Avenue, Office #6, Rutherford, New Jersey 07070

Mail: n...@apetan.com | www.apetan.com <http://www.apetan.com/>

https://www.linkedin.com/in/nikhil-gupta-a4637391
