Hello,

Hope you are doing well.

We have an immediate opening for the position below; kindly let me know
your interest by sending your updated resume to *[email protected]*.

*Lead - Core Hadoop Engineering*

*Contract*

*Austin, Texas / California -- hybrid working (2 days per week in office)*



Job Titles and Descriptions

*Senior - Hive Developer*

*(Hive 3.x and Tez, Bash, Python, Java)*

• Experience with Big Data and cloud platform services: Apache Hadoop,
Apache Hive.
• Hive-on-Tez migration experience and Hive code optimization.
• Basic knowledge of ETL and data pipelines using Python, shell scripts,
Ctrl-M, and Apache Airflow; building and populating data warehouses, and
querying with BI tools.
• Basic knowledge of RDBMS fundamentals: design and creation of databases,
schemas, and tables; DB administration and security; working with MySQL
and IBM Db2.
• Basic knowledge of the SQL query language, database functions, stored
procedures, working with multiple tables, joins, and transactions.

*Lead - Core Hadoop Engineering*

*(Hive, Hadoop Core)*

• Contributor/committer to one of the Big Data technologies: Hive, YARN,
or Hadoop/HDFS.
• Proficiency in engineering practices and writing high-quality code, with
expertise in either Java or Scala.

*Hadoop / Kafka DevOps/Admin*

*Open Source*

• Responsible for performing Big Data platform administration and
engineering activities on multiple Hadoop, Kafka, HBase, and Spark
clusters
• Perform performance tuning and increase operational efficiency on a
continuous basis
• Monitor platform health, generate performance reports, and drive
continuous improvements
• Work closely with development, engineering, and operations teams on key
deliverables, ensuring production scalability and stability
• Develop and enhance platform best practices
• Ensure the Hadoop platform can effectively meet performance and SLA
requirements
• Responsible for the Big Data production environment, which includes
Hadoop (HDFS and YARN), Hive, Spark, Livy, SOLR, Oozie, Kafka, Airflow,
NiFi, HBase, etc.
• Perform optimization, debugging, and capacity planning of a Big Data
cluster
• Perform security remediation, automation, and self-healing as required
• Hands-on experience in Hadoop administration, Hive, Spark, and Kafka;
experience in maintaining, optimizing, and resolving issues on large-scale
Big Data clusters, supporting business users and batch processes
• Hands-on experience with NoSQL databases (HBase) is a plus
• Prior experience with Linux/Unix OS services, administration, shell, and
awk scripting is a plus
• Excellent oral and written communication and presentation skills;
analytical and problem-solving skills
• Self-driven; able to work independently and as part of a team, with a
proven track record
• Experience with the Hortonworks distribution or open-source Hadoop
preferred

*Bharat Chhibber | Sr. Technical Recruiter*

*Direct: 919 626 9615 | Email: [email protected]*

-- 
You received this message because you are subscribed to the Google Groups 
"Android Discuss" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/android-discuss/CAEmgVe3h0B%2BAEtEvU5gR80J%2B2bR_%3DSK3SY1bc-ixY053S5PLyA%40mail.gmail.com.
