*Job Details:*

*Position: Data Engineer with AWS Redshift and ETL*

*Location: New York, NY*

*Duration: 6 Months*



*Kindly share resumes to dee...@kanisol.com, or reach me at 609-651-4663.*


*Job Description*

·         Experienced data engineer building ETL and data pipelines; extensive prior experience with Informatica ETL tools is a must

·         Knowledge of other ETL tools such as SSIS or AWS Glue, or of warehouse platforms such as Snowflake, is a plus

·         Experience designing ETL data pipelines in a data warehouse environment: maintaining dimensions, facts, and staging areas

·         Understanding of data modeling, specifically dimensional modeling (slowly changing dimensions, fact tables); a Type 2 sketch follows this list

·         Extensive experience with relational databases; Redshift is strongly preferred, but strong knowledge of another relational database such as Oracle or SQL Server is acceptable

·         An understanding of database performance tuning is a must

·         Experience with AWS, S3, and Redshift is preferred; a COPY-based load sketch follows this list

·         Knowledge of analytical and reporting tools like Looker or
Tableau is a big plus.

·         Knowledge of Python is a plus.
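
As an illustration of the S3-and-Redshift bullet above, here is a minimal Python sketch of the COPY-into-staging load pattern. It is a sketch under assumed names only: the cluster endpoint, database, credentials, bucket, IAM role, and table are illustrative placeholders, not details from this posting.

import psycopg2

# Redshift's COPY command bulk-loads files from S3; the IAM role below is a
# placeholder and would need s3:GetObject rights on the bucket.
COPY_SQL = """
COPY staging.customer_updates
FROM 's3://example-bucket/customer_updates/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
FORMAT AS CSV
IGNOREHEADER 1;
"""

def load_staging():
    # Redshift speaks the Postgres wire protocol, so psycopg2 works as a client.
    conn = psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="etl_user",
        password="...",  # in practice, fetch this from a secrets manager
    )
    try:
        with conn.cursor() as cur:
            cur.execute("TRUNCATE staging.customer_updates;")
            cur.execute(COPY_SQL)
        conn.commit()
    finally:
        conn.close()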


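Continuing that sketch, the slowly-changing-dimension bullet can be illustrated with a Type 2 merge from the staging table into an assumed dw.dim_customer table: changed rows are expired, then fresh current versions are inserted. Again, every table and column name here is hypothetical.

# Close out the current version of any customer whose tracked attribute changed.
EXPIRE_SQL = """
UPDATE dw.dim_customer d
SET valid_to = GETDATE(), is_current = FALSE
FROM staging.customer_updates s
WHERE d.customer_id = s.customer_id
  AND d.is_current
  AND d.segment <> s.segment;
"""

# Insert a new current version for brand-new customers and for those just expired.
INSERT_SQL = """
INSERT INTO dw.dim_customer
    (customer_id, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.segment, GETDATE(), NULL, TRUE
FROM staging.customer_updates s
LEFT JOIN dw.dim_customer d
    ON d.customer_id = s.customer_id AND d.is_current
WHERE d.customer_id IS NULL OR d.segment <> s.segment;
"""

def apply_scd2(conn):
    # Both statements in one transaction, so readers never see a
    # half-applied dimension.
    with conn.cursor() as cur:
        cur.execute(EXPIRE_SQL)
        cur.execute(INSERT_SQL)
    conn.commit()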

*Title: Data Engineer*

*Location: New York, NY*

*Duration: Contract*



*Kindly share resumes to dee...@kanisol.com, or reach me at 609-651-4663.*


*Job Description:*

·         Collaborate with product teams, data analysts, and data scientists
to design and build data-forward solutions

·         Build and deploy streaming and batch data pipelines capable of
processing and storing petabytes of data quickly and reliably; a streaming
sketch follows this list

·         Integrate with a variety of data providers, including marketing,
web analytics, and consumer-device metrics

·         Build and maintain dimensional data warehouses in support of
business intelligence tools

·         Develop data catalogs and data validations to ensure clarity and
correctness of key business metrics

·         Drive and maintain a culture of quality, innovation, and
experimentation
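
In the spirit of the streaming bullet above, here is a minimal PySpark Structured Streaming sketch that reads a Kafka topic and lands events in S3 as date-partitioned Parquet. The broker, topic, bucket, and paths are assumed placeholder values.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-to-s3").getOrCreate()

# Kafka source: records arrive as binary key/value columns plus metadata.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "web-analytics-events")
    .load()
)

events = (
    raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
    .withColumn("event_date", F.to_date("timestamp"))
)

# Partitioning by date keeps downstream scans pruned to the days they need;
# the checkpoint directory is what makes restarts exactly-once for this sink.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .partitionBy("event_date")
    .outputMode("append")
    .start()
)

query.awaitTermination()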



*Required skills*

·         *Any of the following tools: Kafka, Spark, or Flink*

·         AWS-based data solutions with tools such as CloudFormation, IAM,
and Kinesis; a Kinesis sketch follows this list

·         Big-data solutions using technologies such as S3 and Spark, with an
in-depth understanding of data partitioning and sharding techniques

·         Experience loading and querying cloud-hosted databases such as
Redshift, and building streaming data pipelines using Kafka, Spark, or Flink
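
For the Kinesis and sharding bullets above, a minimal boto3 sketch: the partition key chooses the shard a record lands on, which is the sharding lever the list refers to. The stream name and record shape are illustrative assumptions.

import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def put_event(event):
    kinesis.put_record(
        StreamName="example-web-events",
        Data=json.dumps(event).encode("utf-8"),
        # Keying by user id keeps one user's events ordered within a single
        # shard while spreading distinct users across shards.
        PartitionKey=str(event["user_id"]),
    )

put_event({"user_id": 42, "action": "page_view", "page": "/pricing"})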

-- 



*Thanks and regards,*
*Deepak Kannoji*
*Kani Solutions Inc*
*Phone: 609-651-4663*
*Email: dee...@kanisol.com*
*Skype: kannoji.deepak*
*https://www.linkedin.com/in/deepakkannoji/*
