Hello Business Partners,
Hope you are doing great!

Please share your resume with mah...@sparinfosys.com

Position: Big Data Administrator
Location: New York, NY
Duration: 12 Months

*Job description:*

   - The Big Data Administrator is responsible for the build-out, day-to-day
   management, and support of Big Data clusters based on Hadoop and other
   technologies, both on-premises and in the cloud.
   - Responsible for cluster availability
   - Responsible for implementation and support of the Enterprise Hadoop
   environment.
   - This involves designing, capacity planning, cluster setup, monitoring,
   structure planning, scaling, and administration of Hadoop components (YARN,
   MapReduce, HDFS, HBase, ZooKeeper, Storm, Kafka, Spark, Pig, and Hive).
   - Work with core production support personnel in IT and Engineering to
   automate deployment and operation of the infrastructure. Manage, deploy and
   configure infrastructure with automation toolsets.
   - Work with data delivery teams to set up new Hadoop users. This includes
   setting up Linux users, setting up Kerberos principals, and testing HDFS,
   Hive, Pig, and MapReduce access for the new users (see the sketch after
   this list).
   - Responsible for implementation and ongoing administration of Hadoop
   infrastructure.
   - Align with the systems engineering team to propose and deploy new
   hardware and software environments required for Hadoop and to expand
   existing environments.
   - Identify hardware and software technical problems and storage or
   related system malfunctions.
   - Team diligently with the infrastructure, network, database, application,
   and business intelligence teams to guarantee high data quality and
   availability.
   - Leverage experience diagnosing network performance issues; support
   development and production deployments.
   - Set up, configure, and maintain security for Big Data clusters.
   - Expand and maintain our Hadoop environments (MapR distribution, HBase,
   Hive, YARN, ZooKeeper, Oozie, Spyglass, etc.) and Apache stack environments
   (Java, Spark/Scala, Kafka, Elasticsearch, Drill, Kylin, etc.).
   - Contribute to the evolving architecture of our storage service to meet
   changing requirements for scaling, reliability, performance, manageability,
   and price.
   - Collaborate with application teams to install operating system and
   Hadoop updates, patches, and version upgrades when required.
   - Work closely with Technology Leadership, Product Managers, and the
   Reporting Team to understand functional and system requirements.
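
To give a flavor of the user-onboarding duty above, here is a minimal
sketch of the kind of post-provisioning check an admin might script on a
Kerberos-secured cluster. The user name, keytab path, and realm below are
illustrative assumptions, not details from this posting.

```python
#!/usr/bin/env python3
"""Hypothetical smoke test for a newly provisioned Hadoop user.

Assumes a Kerberos-secured cluster with `kinit` and `hdfs` on PATH;
the user name, keytab path, and realm are illustrative placeholders.
"""
import subprocess
import sys

NEW_USER = "jdoe"                                     # assumed new user
KEYTAB = f"/etc/security/keytabs/{NEW_USER}.keytab"   # assumed keytab path
PRINCIPAL = f"{NEW_USER}@EXAMPLE.COM"                 # assumed realm

def run(cmd):
    """Run a command, echoing it first; raise on non-zero exit."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def main():
    # Obtain a Kerberos ticket as the new user.
    run(["kinit", "-kt", KEYTAB, PRINCIPAL])
    # Verify the user can list their HDFS home directory.
    run(["hdfs", "dfs", "-ls", f"/user/{NEW_USER}"])
    # Verify the user can write and remove a file in HDFS.
    run(["hdfs", "dfs", "-touchz", f"/user/{NEW_USER}/_access_check"])
    run(["hdfs", "dfs", "-rm", f"/user/{NEW_USER}/_access_check"])
    print(f"Basic HDFS access checks passed for {NEW_USER}")

if __name__ == "__main__":
    try:
        main()
    except subprocess.CalledProcessError as err:
        sys.exit(f"Access check failed: {err}")
```

The same pattern extends naturally to Hive, Pig, and MapReduce access
(e.g., running a trivial query through beeline as the new user), which is
why the posting calls out testing those services as well.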


Expertise in cluster maintenance, including the creation and removal of
nodes, using tools such as Ganglia, Nagios, and Amazon Web Services (a
hypothetical provisioning sketch follows below).
Excellent troubleshooting and problem-solving abilities.
Facilitate POCs for related new technology versions, toolsets, and
solutions, both built and bought, to prove viability for given business
cases.
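
For the node creation/removal point above, here is a hedged sketch of how
a worker node might be provisioned and retired on AWS using boto3 (the
official AWS SDK for Python). The region, AMI ID, instance type, and tags
are placeholders; registering or decommissioning the node in YARN/HDFS is
a separate Hadoop-side step not shown here.

```python
"""Hypothetical sketch of adding/removing a Hadoop worker node on AWS.

The region, AMI ID, subnet sizing, and tags below are placeholders,
not real values from this posting.
"""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

def add_worker_node():
    """Launch one EC2 instance tagged as a Hadoop worker; return its ID."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder cluster AMI
        InstanceType="m5.2xlarge",         # assumed worker sizing
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": "hadoop-worker"}],
        }],
    )
    return resp["Instances"][0]["InstanceId"]

def remove_worker_node(instance_id):
    """Terminate a worker node after it is decommissioned in Hadoop."""
    ec2.terminate_instances(InstanceIds=[instance_id])
```

In practice the instance ID returned by add_worker_node() would be handed
to remove_worker_node() only after the node has been drained of HDFS
blocks and YARN containers.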

Maintain user accounts, access requests, node
configurations/buildout/teardown, cluster maintenance, log files, file
systems, patches, upgrades, alerts, monitoring, HA, etc.
Manage and review Hadoop log files; handle file system management and
monitoring (a minimal log-review sketch follows below).
Must have great presentation skills and be able to work with little
direction. Experience within insurance is a plus.
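
As an illustration of the log-review duty above, a minimal sketch that
scans a Hadoop log directory for ERROR and FATAL entries. The log path
and file pattern are assumptions; defaults vary by distribution.

```python
"""Hypothetical Hadoop log review helper.

Scans a log directory for ERROR/FATAL lines; the log location and
file pattern are assumptions (defaults vary by distribution).
"""
from pathlib import Path

LOG_DIR = Path("/var/log/hadoop")   # assumed log location
PATTERN = "*.log"

def scan_logs(log_dir=LOG_DIR, pattern=PATTERN):
    """Yield (file, line_no, line) for every ERROR or FATAL entry."""
    for log_file in sorted(log_dir.rglob(pattern)):
        with open(log_file, errors="replace") as fh:
            for line_no, line in enumerate(fh, start=1):
                if " ERROR " in line or " FATAL " in line:
                    yield log_file, line_no, line.rstrip()

if __name__ == "__main__":
    for log_file, line_no, line in scan_logs():
        print(f"{log_file}:{line_no}: {line}")
```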

*Experience Required in:*
AWS
Hortonworks Data Platform (HDP)

-- 
Regards
Mahesh Kumar
Resourcing Specialist
Desk No: 201-528-5307  Ext: 422
mah...@sparinfosys.com
