Dear *Professional*,


I hope you are doing great today.



This is *BABU, BDM FROM PANTAR SOLUTIONS INC*. We are an Information
Technology and Business Consulting firm specializing in Project-based
Solutions and Professional Staffing Services. Please have a look at the
position below, which is with our client, and let me know your interest
ASAP. I would really appreciate it if you could send me your *MOST RECENT
UPDATED RESUME*:


*Job Title: Lead Data Engineer (Azure) with Databricks/Snowflake/Fabric,
SQL, ETL, Python, Spark, Hadoop, Data Architecture, Data Modeling,
Redshift, ADF, Retail Domain Exp.*

*Work Location: Hybrid (2 days onsite, 3 days work from home). Must be
able to reliably commute to Charlotte, NC (will allow three weeks for the
right candidate to relocate).*





*Job Type:* Contract To Hire (6 Months then convert to FTE) || 1099 only
*Work Authorization:* USC or Green Card ONLY
*Interview Process:* Introduction call with Prime Vendor plus 3 rounds with
End Customer (1 - HackerRank, 2 - Hiring Manager and Data Architect, 3 -
Director)


*Top skills, experience, background, etc.*

*Must have:* Solid 10 years of experience leading a team, previous
experience in retail, Python, SQL | *Good to have:* Spark
*Tech Stack:* Prefer Azure but open to others (AWS, GCP, etc.)
*Must have:* Databricks, Snowflake, or Fabric

How many years of experience are needed? Minimum 10.

Any nice-to-have skills that would make them exceptional? Certifications,
2+ years of data architecture, data engineering patterns, Git repo
experience, self-starter and go-getter attitude, software engineering
background, ML experience.

They are looking for a Lead with 10+ years of experience.
Candidates need to have retail experience.
Experience in an enterprise environment.
Strong Data Architecture, Data Modeling & Retail Domain Knowledge.
Strong leadership skills and a self-starter; must be able to provide
examples of the impact they have created.


*JOB DESCRIPTION:*

As the Technical Lead Data Engineer, your primary responsibility will be to
spearhead the design, development, and implementation of data solutions
that empower our organization to derive actionable insights from complex
datasets. You will lead a team of data engineers, foster collaboration with
cross-functional teams, and drive initiatives to strengthen our data
infrastructure, CI/CD pipelines, and analytics capabilities.

*Responsibilities:*

Apply advanced knowledge of Data Engineering principles, methodologies and
techniques to design and implement data loading and aggregation frameworks
across broad areas of the organization.
Gather and process raw, structured, semi-structured and unstructured data
using batch and real-time data processing frameworks.
Implement and optimize data solutions in enterprise data warehouses and big
data repositories, focusing primarily on movement to the cloud.
Drive new and enhanced capabilities to Enterprise Data Platform partners to
meet the needs of product / engineering / business.
Bring experience building enterprise systems, especially using Databricks,
Snowflake, and platforms like Azure, AWS, and GCP.
Leverage strong Python, Spark, SQL programming skills to construct robust
pipelines for efficient data processing and analysis.
Implement CI/CD pipelines for automating build, test, and deployment
processes to accelerate the delivery of data solutions.
Implement data modeling techniques to design and optimize data schemas,
ensuring data integrity and performance.
Drive continuous improvement initiatives to enhance performance,
reliability, and scalability of our data infrastructure.
Collaborate with data scientists, analysts, and other stakeholders to
understand business requirements and translate them into technical
solutions.
Implement best practices for data governance, security, and compliance to
ensure the integrity and confidentiality of our data assets.

*Qualifications:*

Bachelor’s or Master’s degree in Computer Science, Engineering, or a
related field.
Proven experience (10+ years) in a data engineering role, with expertise in
designing and building data pipelines, ETL processes, and data warehouses.
Strong proficiency in SQL, Python and Spark programming languages.
Strong experience with cloud platforms such as AWS, Azure, or GCP is a must.
Hands-on experience with big data technologies such as Hadoop, Spark,
Kafka, and distributed computing frameworks.
Knowledge of data lake and data warehouse solutions, including Databricks,
Snowflake, Amazon Redshift, Google BigQuery, Azure Data Factory, Airflow
etc.
Experience in implementing CI/CD pipelines for automating build, test, and
deployment processes.
Solid understanding of data modeling concepts, data warehousing
architectures, and data management best practices.
Excellent communication and leadership skills, with the ability to
effectively collaborate with cross-functional teams and drive consensus on
technical decisions.
Relevant certifications (e.g., Azure, Databricks, Snowflake) would be a
plus.





*PLEASE NOTE:*

If for any reason this does not interest you, or you feel uncomfortable
with any part of this email, I sincerely apologize. Please consider this
e-mail a request for referrals and feel free to forward it to anyone you
think might be a fit.

*Thanks & Regards,*

Babu
Pantar Solutions Inc
1908 Cox Rd, Weddington, NC  28104
Email: babu (dot) s (at) pantarsolutions (dot) com
<http://pantarsolutions.com>
