Dear Friends,

Greetings of the day!

Please share suitable profiles at the earliest:

The job descriptions are as follows:

Position 1

*Role: Kafka Admin*

*Remote*

*Kafka Admin with Linux experience*

·         5+ years of solid Kafka administration experience

·         Design, build, assemble, and configure application or technical
architecture components using business requirements.

·         Hands-on experience with Kafka clusters hosted on Amazon cloud is
a plus.

·         Experience building Kafka pipelines using Ansible, CloudFormation
templates, shell scripts, etc.

·         Experience with Jenkins and GitHub

·         Experience implementing security and authorization (permission-based)
on Kafka clusters.

·         Experience with open-source Kafka, ZooKeeper, Kafka Connect,
Schema Registry, and Avro schemas.

·         High-availability cluster setup, maintenance, and ongoing support

·         Has good knowledge of best practices

·         Hands-on experience standing up and administering a Kafka platform,
including backup and mirroring of Kafka cluster brokers, broker sizing, topic
sizing, hardware sizing, performance monitoring, broker security, topic
security, and consumer/producer access management (ACLs)

·         Knowledge of the Kafka API (development experience is a plus)

·         Knowledge of best practices related to security, performance, and
disaster recovery.

Position 2

*Role: Sr. Data Engineer*

*Location: Deerfield, Illinois*

Looking for Data Engineers with Databricks experience. Candidates should be
capable of programming in Python, PySpark, Scala, and Spark SQL.

*Requirements:*
·      Capability of programming in one of the following languages:
o  Python/PySpark
o  Scala
o  Spark SQL

·         Deep understanding of distributed computing and Spark Architecture

·         Experience with Delta Lake/Tables within Databricks

·         Experience optimizing Spark queries and analyzing Spark DAGs

·         Understanding of both Batch and Streaming Jobs in Spark

·         Familiarity with different file formats such as CSV, Parquet, JSON,
etc.

·         Experience with other Azure data services such as Azure Data Lake
Gen2, Event Hubs, Synapse, etc.

·         Deep understanding of Azure data storage technologies such as
Azure Blob Storage, Azure Data Lake Gen2, Synapse, Azure SQL, and Azure Event
Hubs, and the frameworks applied to each.

·         Understanding of ETL/ELT practices, including data cleansing,
validation, and transformation.

·         Data modeling experience shaping and transforming data into
third normal form and dimensional models.


-- 

Thanks & Regards,

*Srikanth* | Technical Recruiter

HCL Global Systems, Inc.

Email ID: [email protected]

Certified Minority Business Enterprise (MBE)

-- 
You received this message because you are subscribed to the Google Groups 
"Android Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/android-developers/CAED9fV4APDVjQDrOwjkuuT82r%3DnDNLEUTAquNCYfKzbk55aYnQ%40mail.gmail.com.