I have a Hadoop Admin role in Southborough, MA. The pay rate is in the
$45-$47/hr range.

The contract is 12+ months.



- They just set up their dev environment with AWS clusters and *need
someone to come in and help continue establishing AWS clusters* in higher
environments (QA, production, etc.) with Hadoop.

- Someone with *past DBA or Unix Admin* experience and *recent Hadoop
Admin* experience is ideal for this role.

- Ideally we can find a *local candidate* to match this rate and skillset.
If we can't deliver a local resource within a few days, I will go back to
the manager and advise that we open it up; however, he expressed a desire
to start with local candidates only who can interview face-to-face.





*Title:* Hadoop Admin

*Max Submittals (if applicable):* N/A

*Company:* Global Atlantic

Global Atlantic Financial Group, through its subsidiaries, offers a broad
range of retirement, life, and reinsurance products designed to help our
customers address financial challenges with confidence. A variety of
options help Americans customize a strategy to fulfill their protection,
accumulation, income, wealth transfer, and end-of-life needs.

*Location*: 132 Turnpike Road, Suite 210, Southborough, MA 01772 (moving to
Brighton in Q4—candidates need to be comfortable with both locations); free
parking, no public transportation

*Team:* Database Operations & Support

*Team Size:* small, 3-4 people

*Why Open:* Need support as they set up AWS clusters

*Start:* ASAP, pending background check completion

*Duration:* 12 months, may extend

*Rate:* $45/hr MAX

*Important Skills*

*Technical –* Hadoop Admin, Hortonworks, AWS

*Soft –* communication skills; able to self-start and work with minimal
supervision



*Interview Process:* phone, then in-person; local candidates only for now.
Will consider Skype if they struggle to find local candidates.

*Background Check (if applicable):* criminal, employment verification,
education (must be completed prior to start)

*What the Environment / Sell:* newly formed group, setting up a new
environment, investing in technology

*Hours / Shift:* regular shift, 9am-5pm



*Must Haves:*

- Hadoop Admin skills

- Hortonworks experience



*Plus:*

- AWS clusters



*Day to Day:*

   - Responsible for the build-out, day-to-day management, and support of
   Big Data clusters based on Hadoop and other technologies, both
   on-premises and in the cloud. Responsible for cluster availability.
   - Responsible for implementation and support of the Enterprise Hadoop
   environment, involving designing, capacity planning, cluster setup,
   monitoring, structure planning, scaling, and administration of the
   Hadoop environment (YARN, MapReduce, HDFS, HBase, ZooKeeper, Storm,
   Kafka, Spark, Pig, and Hive).
   - Manage large scale Hadoop cluster environments, handling all Hadoop
   environment builds, including design, capacity planning, cluster setup,
   security, performance tuning and ongoing monitoring.
   - Evaluate and recommend systems software and hardware for the
   enterprise system including capacity modeling.
   - Contribute to the evolving architecture of our storage service to meet
   changing requirements for scaling, reliability, performance, manageability,
   and price.
   - Automate deployment and operation of the big data infrastructure.
   Manage, deploy and configure infrastructure with automation toolsets.
   - Work with data delivery teams to set up new Hadoop users, including
   setting up Linux users, setting up Kerberos principals, and testing
   HDFS, Hive, Pig, and MapReduce access for the new users.
   - Responsible for implementation and ongoing administration of Hadoop
   infrastructure.
   - Identify hardware and software technical problems, storage and/or
   related system malfunctions.
   - Team diligently with the infrastructure, network, database,
   application, and business intelligence teams to guarantee high data
   quality and availability.
   - Expand and maintain our Hadoop environments (MapR distro, HBase, Hive,
   YARN, ZooKeeper, Oozie, Spyglass, etc.) and Apache stack environments
   (Java, Spark/Scala, Kafka, Elasticsearch, Drill, Kylin, etc.).
   - Collaborate with application teams to install operating system and
   Hadoop updates, patches, and version upgrades when required.
   - Work closely with Technology Leadership, Product Managers, and the
   Reporting Team to understand the functional and system requirements.
   - Maintain clusters, including creation and removal of nodes, using
   tools like Ganglia, Nagios, Amazon Web Services, and others.
   - Excellent troubleshooting and problem-solving abilities.
   - Facilitate POCs for new related tech versions, toolsets, and
   solutions, both built and bought, to prove viability for given business
   cases.
   - Maintain user accounts, access requests, node
   configurations/buildout/teardown, cluster maintenance, log files, file
   systems, patches, upgrades, alerts, monitoring, HA, etc.
   - Manage and review Hadoop log files.
   - File system management and monitoring.



Qualifications:

   - 2 years of application database programming experience.
   - 5 years of professional experience working with Hadoop technology
   stack.
   - 5 years of proven experience in AWS and the Hortonworks distribution.
   - Prior experience with performance tuning, capacity planning, and
   workload mapping.
   - Expert experience with at least one of the following languages:
   Python, Unix shell scripting.
   - A deep understanding of Hadoop design principles, cluster
   connectivity, security, and the factors that affect distributed system
   performance.









Kind Regards,

*Fazal Mahmood*

*Aspire Systems Inc.*

*Certified Minority Owned Business Enterprise (MBE) *

*E-Verified Approved Employer*

*Four-Time Award Winner (2009, 2010, 2011 & 2012), Best of Danbury Award*

*Approved Inc. 500 Company*

*Corp Office:*

*36 Mill Plain Road, **Suite # 16, Danbury, CT 06811*

*Ph: 203 778 9992*

*Fax: 203 798 0060*

*Email:* fa...@aspiresystem.com

*Yahoo IM:* fazalmrecruiter

*URL:* www.aspiresystem.com

-- 
You received this message because you are subscribed to the Google Groups 
"java-core" group.
