Hi Team,

This is Utkarsh from 1Point System. We have an opening for a *Sr. Data
Engineer (AWS, ETL, Python) with SAP HANA Database experience* in Atlanta,
GA.

Please have a look at the complete JD below and let me know so that we can
proceed further.


*Job: ETL / AWS / Python Developer*

*Sr Data Engineer (AWS, ETL, Python) with SAP HANA Database experience*
*Long-Term Contract*
*Atlanta, GA | Onsite from day one*
*Visa: No H1B*


*Note from Client:*
This is a Sr. Data Engineer role focused on Data Engineering, with
experience handling *Linux systems and stored procedures in the SAP HANA
database.*
*They won’t consider resumes without SAP HANA Database experience.*
This person will assist in the design, development, and implementation of
end-to-end *complex ETL systems* using *Informatica, Alteryx*, and SAP HANA
tools.
Data ingestion to Azure/*AWS and data lake* and Lake Formation is a big
PLUS.

*Qualifications:*

   - 5 years of relevant experience.
   - Bachelor’s Degree in Computer Science, Information Systems, or a
   related field
   - Good experience in Python & SQL
   - Good understanding of cloud computing and AWS architecture and best
   practices
   - Development experience on Lambda with Python or Java
   - Must have experience using AWS service APIs, the AWS CLI, and SDKs
   - Working experience with AWS services such as EC2, S3, Route 53, and
   CloudWatch
   - Python/ETL developer with strong AWS experience
   - Use of AWS technologies for building, deploying, and operating
   applications (very critical requirement)
   - Understanding of core AWS services and basic AWS architecture best
   practices (S3, EBS, EC2, SQS/SNS, CloudFront, Route53, Lambda, CloudWatch,
   ECS Fargate, API Gateway)
   - Proficiency in developing, deploying, and debugging cloud-based
   applications using AWS ETL native tools
   - Understanding of the use of containers in the development process
   - Ability to understand ETL processes
   - Expertise in AWS ETL tools: Glue, EMR, Redshift, RDS, SNS/SQS
   - Ability to design technical workflow end to end using AWS services
   - Experience working with data lakes and data pipelines
   - Python, SQL, PySpark, Spark SQL, and NoSQL databases such as
   DocumentDB/MongoDB
   - Ability to use a CI/CD pipeline to deploy applications on AWS
   - Ability and willingness to mentor team members


*Responsibilities:*

   - Architecting and supporting ETL and data load processes
   - Designing processes that extract/receive data from various
   heterogeneous source systems
   - Performing data cleansing and transforming data according to business
   rules
   - Developing reusable frameworks for data extraction, loading, and
   cleansing
   - Designing and building change data capture processes and updating the
   DataMart accordingly
   - Building capabilities to exchange data with external partners in
   various file formats (Parquet, Avro, XML, CSV, flat file)



-- 

*Thanks and Regards*

*Utkarsh Dwivedi**| **1Point System LLC*
Direct: *__________ <__________>** • *[email protected]

115 Stone Village Drive • Suite C • Fort Mill, SC • 29708



*An E-Verified company | An Equal Opportunity Employer *

*DISCLAIMER: If you have received this email in error or prefer not to
receive such emails in the future, please notify by replying with a
''REMOVE'' in the subject line and your email address shall be removed
immediately from the mailer list.*
