Send your resumes to adi...@unicomtec.com

*Title:* Data Warehouse ETL Developer

*Location:* Chicago, IL

*Rate:* Best Available Rate (all-inclusive)

*Duration:* 4+ months (will extend)



*Need a strong Data Warehouse ETL Developer with 8+ years of experience.*

·         Hadoop

·         Java / J2EE

·         Python

·         Teradata

·         DataStage

·         Excellent communication skills

·         Healthcare domain is a plus



*Candidate Profile:*



·         8+ years of hands-on programming experience, with 3+ years on the Hadoop platform

·         Proficiency with Java and at least one scripting language, such as Python

·         J2EE, EJB, WAS deployments, RESTful services

·         Good grasp of data movement approaches and techniques, and when to apply them

·         Strong hands-on experience with databases such as DB2 and Teradata

·         Flair for data, schemas, and data modeling, and for bringing efficiency to the big data life cycle

·         Ability to acquire, compute, store, and provision various types of datasets on the Hadoop platform

·         Understanding of various visualization platforms (Tableau, QlikView, and others)

·         Strong object-oriented design and analysis skills

·         Excellent technical and organizational skills

·         Excellent written and verbal communication skills

·         Top skill sets / technologies:

·         Java / Python

·         Sqoop / Flume / Kafka / Pig / Hive / (DataStage or similar ETL tool) / HBase / NoSQL / Datameer / MapReduce / Spark (see the sketch after this list)

·         Data integration / data management / data visualization experience

·         Responsible for the design of data movement into and throughout the TIL, including but not limited to the Operational Data Store, Atomic Data Warehouse, Dimensional Data Warehouse, and Master Data Management. Mentor designers on detailed design.

·         Develop the enterprise design view and apply it at the project level.
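
To give candidates a concrete sense of the stack above, here is a minimal, illustrative PySpark sketch: it reads a Hive table assumed to have been landed by an upstream tool (Sqoop, Flume, or DataStage), computes a simple rollup, and provisions the result as partitioned Parquet on HDFS. The table, columns, and paths are hypothetical, not the client's.

# Illustrative sketch only -- not the client's code. Assumes PySpark with
# Hive support; "raw.claims", its columns, and the output path are made up.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("claims-daily-rollup")  # hypothetical job name
    .enableHiveSupport()             # lets Spark read Hive metastore tables
    .getOrCreate()
)

# Acquire: a Hive table previously landed by Sqoop, Flume, or DataStage.
claims = spark.table("raw.claims")

# Compute: a daily claim count and billed total per provider.
rollup = (
    claims
    .withColumn("claim_date", F.to_date("claim_ts"))
    .groupBy("provider_id", "claim_date")
    .agg(
        F.count("*").alias("claim_count"),
        F.sum("billed_amount").alias("billed_total"),
    )
)

# Provision: partitioned Parquet on HDFS for downstream consumers
# (Tableau extracts, Datameer, other warehouse loads).
(
    rollup.write
    .mode("overwrite")
    .partitionBy("claim_date")
    .parquet("/data/warehouse/claims_daily_rollup")
)

spark.stop()

The same pattern scales from a sample table to the full warehouse, since Spark distributes the read, aggregation, and write across the cluster.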



*Essential Functions:*

·         Review all project-level data movement designs for adherence to standards and best practices

·         Suggest changes to project-level designs

·         Develop new data movement design patterns where required

·         Guide the coding and testing of standard, reusable data movement components



*Requirements:*

·         Strong analytical and problem-solving skills.

·         Build distributed, scalable, and reliable data pipelines that ingest and process data at scale and in real time (a streaming sketch follows this list).

·         Collaborate with other teams to design and develop data tools
that support both operations and product use cases.

·         Source huge volumes of data from diverse data platforms into the Hadoop platform

·         Perform offline analysis of large data sets using components from
the Hadoop ecosystem.

·         Evaluate big data technologies and prototype solutions to improve
our data processing architecture.

·         Knowledge of the healthcare domain is an added advantage
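
As an illustration of the real-time pipeline bullet above, here is a minimal Spark Structured Streaming sketch, assuming JSON events arriving on a Kafka topic. The broker address, topic name, event schema, and paths are all hypothetical, and the spark-sql-kafka-0-10 connector must be on the Spark classpath.

# Illustrative sketch only -- not the client's code. Assumes JSON events on
# a Kafka topic; broker, topic, schema, and paths are made up.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("claims-event-ingest").getOrCreate()

# Hypothetical shape of one event payload.
event_schema = StructType([
    StructField("member_id", StringType()),
    StructField("event_type", StringType()),
    StructField("amount", DoubleType()),
])

# Ingest: subscribe to the Kafka topic.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "claims-events")
    .load()
)

# Process: Kafka delivers bytes; cast to string and parse the JSON payload.
events = (
    raw
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Land: append to Parquet on HDFS; the checkpoint makes restarts safe.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/streams/claims_events")
    .option("checkpointLocation", "/checkpoints/claims_events")
    .outputMode("append")
    .start()
)
query.awaitTermination()

The checkpoint location is what lets the stream restart after a failure without losing or duplicating landed data.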



Thanks & Regards

Aditya


*Unicom Technologies Inc*

Email: adi...@unicomtec.com | URL: www.unicomtec.com

Gtalk: unicomaditya
  
