Hi Partner,

For immediate consideration please reply to *[email protected]*

*Position: Hadoop + Spark Developer*

*Location: NJ*

*Duration: 6 Months*


• Experience on real-time Big Data projects with at least a 20-node
cluster.

• Exposure to Hive, MapReduce, Pig, and HDFS.

• Well versed in Hadoop concepts such as data ingestion, cleansing,
encryption/decryption, transformation, and data lineage.

• UBS projects are analytics-heavy, so programming skills are mandatory;
the candidate should know Java, Scala, or Python.

• Understanding of ResourceManager (YARN) concepts in Hadoop 2.

• Spark fundamentals, including the difference between standalone and
cluster deployment modes.

• Spark SQL is heavily used at UBS, along with the Parquet file
format.

• Spark job performance tuning: UBS has critical SLAs on its jobs, so
the candidate should have a good understanding of Spark internals in
order to tune them.

• Kryo serialization framework.

• Scala experience is a huge bonus.
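
As a rough illustration of the Spark tuning topics above (Kryo serialization, Spark SQL with Parquet, shuffle tuning), a candidate would typically be comfortable with settings like the following. This is a minimal sketch of a `spark-defaults.conf`; the specific values are illustrative assumptions, not UBS settings:

```
# spark-defaults.conf — illustrative sketch, values are assumptions
# Use Kryo instead of default Java serialization for faster, smaller payloads
spark.serializer                      org.apache.spark.serializer.KryoSerializer
spark.kryoserializer.buffer.max       128m
# Parquet compression codec used by Spark SQL writes
spark.sql.parquet.compression.codec   snappy
# Number of shuffle partitions — a common knob when tuning SQL jobs to meet SLAs
spark.sql.shuffle.partitions          200
```

The same settings can also be supplied per job via `SparkConf` or `spark-submit --conf`.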


Regards,


Sameer

[email protected]
