*Please send me the profiles at [email protected]*


*Position: ETL Developer*

*Location: Madison, WI*

*Duration: Long Term*



*Job Description*

   - Design ETL solutions to process and load source-system data into
   multiple databases
   - 3+ years of hands-on experience with dimensional and operational data
   modeling
   - 5+ years of hands-on lead experience architecting and designing ETL
   components
   - Strong grasp of the tradeoffs of various design options on multiple
   platforms
   - Experience with preparing technical artifacts to support design efforts
   - Ensures adherence to design/development standards
   - Ability to lead a team of technical people, including on/offshore
   vendor partners
   - Good communication skills and a proactive approach to daily activities
   - Positive, problem-solving attitude
   - Experience with Agile data delivery on technical projects
   - Ability to build collaborative relationships across the organization
   - Ability to plan, orchestrate, and coordinate technical release
   activities
   - Very strong analytical and complex problem-solving skills
   - Ability to communicate with non-technical stakeholders in an
   easy-to-understand manner

*Additional Job Requirements*

   - Experience with technologies in the Hadoop ecosystem such as: Hadoop,
   HDFS, Spark, MapReduce, Pig, Hive, Flume, Sqoop, HDInsight, Zookeeper,
   Oozie, Hue, and Kafka
   - Experience with high-availability configurations, Hadoop cluster
   connectivity and tuning, and Hadoop security configurations
   - Good understanding of operating systems (Unix/Linux) and networks,
   plus system administration experience
   - Experience with cloud data technologies (Azure, AWS, or Google Cloud)
   - Experience with Spark Streaming, Spark SQL, PySpark, and SparkR is
   important
   - Good understanding of Spark's RDD API
   - Good understanding of Spark's DataFrame API
   - Experience with and a good understanding of the Apache Spark Data
   Sources API
   - Knowledge of PowerShell scripting is needed to instantiate a cluster
   - Experience implementing a Hive data warehouse is important
   - Experience with a messaging queue tool (ActiveMQ, RabbitMQ, or Kafka)
   is nice to have
   - Experience with various data formats such as ORC, Parquet, or Avro
   - Experience with Microsoft Azure technologies such as Data Factory,
   PolyBase, and SQL Data Warehouse is desired
   - Experience implementing an MPP platform such as Teradata or Netezza
   is desired
   - Experience configuring Spark clusters (schedulers, queues) to
   allocate memory efficiently
   - Experience working with a data science team to implement predictive
   models is nice to have



*E-mail is the best way to reach me*




*Deepak Gulia* | Simplion – cloud. made simple

Direct: 000000000 | Fax: 408-935-8696 | Email: [email protected]

*GTALK: [email protected]*



*https://in.linkedin.com/in/deepak-gulia-308a2b9b*



*INC 500 | 5000* Honoree – 2013, 2012, 2011, 2010, 2009

*Fast Private Companies* award by Silicon Valley Business Journal – 2012,
2011, 2010, 2009, 2008

*Best Places to Work in Bay Area* by San Francisco Business Times – 2013,
2012, 2011, 2009

*Minority Business Enterprise *certified by NMSDC

*We are an E-Verified Company*

-- 
You received this message because you are subscribed to the Google Groups 
"Oracle Developers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/oradev.
For more options, visit https://groups.google.com/d/optout.
