Big Data Engineer with strong Spark Streaming experience (Atlanta, GA)
Job Details:
Must Have Skills
• Apache Kafka
• Spark
• Hadoop
Detailed Job Description
• Seeking an experienced Big Data Engineer for Atlanta.
• Candidate must have Big Data engineering experience and must
demonstrate an affinity for working with others to create successful
solutions.
• Will work on NS Big Data platforms (Cloudera).
• Must be a strong communicator with experience working with business
areas to translate their data needs and questions into project
requirements.
• Will participate in all phases of the Data Engineering life cycle
Top 3 responsibilities the subcontractor will be expected to shoulder and execute:
• Must have Big Data engineering experience and must demonstrate an
affinity for working with others to create successful solutions
• Must be a strong communicator with experience working with business
areas to translate their data needs and questions into project
requirements
• Will participate in all phases of the Data Engineering life cycle
and will independently and collaboratively write project requirements,
architect solutions, and perform data ingestion development.
Skills and Experience:
Required:
• 6+ years of overall IT experience
• 3+ years of experience with high-velocity, high-volume stream
processing: Apache Kafka and Spark Streaming
o Experience with real-time data processing and streaming techniques
using Spark Structured Streaming and Kafka
o Deep knowledge of troubleshooting and tuning Spark applications
• 3+ years of experience with data ingestion from message queues
(Tibco, IBM, etc.) and with different file formats, such as JSON, XML,
and CSV, across different platforms
• 3+ years of experience with Big Data tools and technologies such as
Hadoop, Spark, Spark SQL, Kafka, Sqoop, Hive, S3, and HDFS, or cloud
platforms such as AWS and GCP
• 3+ years of experience building, testing, and optimizing Big Data
ingestion pipelines, architectures, and data sets
• 2+ years of experience with Python (and/or Scala) and PySpark
• 2+ years of experience with NoSQL databases, including HBase and/or
Cassandra
• Knowledge of Unix/Linux platform and shell scripting is a must
• Strong analytical and problem-solving skills
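The format-heterogeneous ingestion the role describes (JSON, XML, CSV) can be sketched with Python's standard-library parsers. The record layout and field names below are illustrative assumptions, not part of the job description; a production pipeline would do this inside Spark rather than plain Python:

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

def from_json(text):
    # Parse a single JSON record into a plain dict (field names assumed).
    rec = json.loads(text)
    return {"id": str(rec["id"]), "event": rec["event"]}

def from_xml(text):
    # Parse a single <record id="..."><event>...</event></record> element.
    root = ET.fromstring(text)
    return {"id": root.get("id"), "event": root.findtext("event")}

def from_csv(text):
    # Parse a one-row CSV payload with an "id,event" header.
    row = next(csv.DictReader(io.StringIO(text)))
    return {"id": row["id"], "event": row["event"]}

# The same logical record arriving in three formats normalizes identically.
json_msg = '{"id": 42, "event": "login"}'
xml_msg = '<record id="42"><event>login</event></record>'
csv_msg = "id,event\n42,login"

records = [from_json(json_msg), from_xml(xml_msg), from_csv(csv_msg)]
```

Normalizing each source format into one common record shape early is what lets the downstream pipeline (Spark transformations, Hive tables, etc.) stay format-agnostic.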
Preferred (Not Required):
• Experience with Cloudera/Hortonworks HDP and HDF platforms
• Experience with NiFi, Schema Registry, and NiFi Registry
• Strong SQL skills with the ability to write queries of intermediate
complexity
• Strong understanding of relational and dimensional modeling
• Experience with Git version-control software
• Experience with REST API and Web Services
• Good business-analysis and requirements-gathering/writing skills
Thanks & Regards,
Ankit Mendiratta | Senior Technical Recruiter - Recruitment
D 732-733-2115 | C 518-336-7878 | F 606-656-1391
[email protected]
200 Centennial Ave, Suite 204, Piscataway, NJ 08854
--
You received this message because you are subscribed to "rtc-linux".
Membership options at http://groups.google.com/group/rtc-linux .
Please read http://groups.google.com/group/rtc-linux/web/checklist
before submitting a driver.