*Send available consultants' profiles to [email protected]*

*Hadoop Data and Integration Engineer* for a remote opportunity. We're
looking for someone to design and develop highly efficient and reliable
data pipelines that move terabytes of data into the Data Lake and other
landing zones. This person will use expert coding skills in HiveQL,
T-SQL, and PL/SQL to develop and implement data auditing strategies and
processes that ensure data accuracy and integrity.


*Responsibilities:*

   - Monitor database storage allocation and usage daily, along with other
   resource usage.
   - Assist with backup standards, schedules, and recovery procedures.
   - Develop, test, and maintain the security plan, the intent of which is
   to establish complete accountability for all use of the databases.
   - Provide performance tuning related to indexing, stored procedures,
   triggers, and database/server configuration.
   - Provide technical guidance to teammates.
   - Demonstrate excellent communication and presentation skills.
   - Design and build data processing pipelines for structured and
   unstructured data using tools and frameworks in the Hadoop ecosystem.
   - Implement and configure tools for Hadoop-based data lake
   implementations and proofs of concept.
   - Apply solid software engineering practices with excellent analytical
   and troubleshooting skills.
   - Work closely with Analysts to develop and implement data
   transformations within data lake systems, both on-prem and in the AWS
   environment.
   - Develop and/or consume web services for data integration and ingestion
   of source data.
   - Create and maintain workflows with an emphasis on reusability,
   scalability, optimization and parameterization in a variety of platforms.
   - Understand RDBMS and big data concepts and connectivity to various
   data sources and platforms.
   - Develop detailed flow charts detailing data lineage across ingestion
   and transformation of data as needed.
   - Analyze and document data from different platforms/products and
   determine appropriate transformations to standardize and potentially
   combine into a single destination.
   - Document and develop pipelines that preserve a data dictionary and
   maintain/save appropriate metadata.
   - Perform analysis on projects and provide project plans down to the
   task level and include time estimates for each task.
   - Provide status reports that give a detailed description of the current
   project’s progress and indicate time devoted to each task.
   - Analyze data as needed in various data environments, including but not
   limited to Oracle, SQL Server, Hadoop/Hive, RedShift and more.

*Qualifications:*

   - Design and develop extremely efficient and reliable data pipelines to
   move terabytes of data into the Data Lake and other landing zones.
   - Use expert coding skills in HiveQL, T-SQL, and PL/SQL.
   - Develop and implement data auditing strategies and processes to ensure
   data accuracy and integrity.
   - Mentor and teach others.
   - Solid Linux skills.
   - At least 2-3 years of hands-on experience with AWS.
   - Experience with a wide variety of tools, including Attunity products,
   DMS, and SnapLogic is a plus.






*Anuj Rai*
*Sr. Recruitment Manager*

*Zenith Tech Solutions*


*Desk: 518-621-0048*

*Fax: 518-244-4977*

*3 Park Hill*

*Albany, NY 12204*

*Email: [email protected]*

*Gmail: [email protected]*

*Website: www.zenithtechsolutions.com*

-- 
You received this message because you are subscribed to the Google Groups "IT 
RECURITER" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/it-recuriter.
For more options, visit https://groups.google.com/d/optout.
