Dear *Professional*,
I hope you are doing great today. My name is *Babu*, and I am the *Business Development Manager* at *Pantar Solutions Inc*. We are an Information Technology and Business Consulting firm that specializes in project-based solutions and professional staffing services. Please review the position below with our client and let me know your interest as soon as possible. I would greatly appreciate it if you could send me your *most recent, updated resume*.

*Data Engineer with Hadoop, Spark, Kafka, Scala, HDFS, S3, ETL schedulers (Apache Airflow, Luigi, Oozie, AWS Glue), PostgreSQL/MySQL, Redshift/BigQuery/HBase/ClickHouse Exp.*
*Location: SFO, CA || Remote*
*Duration: 6+ months*
*Consultant's LinkedIn profile must have been created before 2018 || No junk profiles, please.*
*Need 9-10+ yrs of IT experience || Passport number, I-94, and travel history documents required at submission for H1/GC/GC EAD candidates.*

*Expertise:*
· 9+ years of relevant industry experience with a BS/Master's, or 2+ years with a PhD
· Experience with distributed processing technologies and frameworks such as Hadoop, Spark, and Kafka, and with distributed storage systems (e.g., HDFS, S3)
· Demonstrated ability to analyze large data sets to identify gaps and inconsistencies, provide data insights, and advance effective product solutions
· Expertise with ETL schedulers such as Apache Airflow, Luigi, Oozie, AWS Glue, or similar frameworks
· Solid understanding of data warehousing concepts and hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and columnar databases (e.g., Redshift, BigQuery, HBase, ClickHouse)
· Excellent written and verbal communication skills

*A Typical Day:*
· Design, build, and maintain robust and efficient data pipelines that collect, process, and store data from various sources, including user interactions, financial details, and external data feeds.
· Develop data models that enable efficient analysis and manipulation of data for merchandising optimization; ensure data quality, consistency, and accuracy.
· Build scalable data pipelines (SparkSQL & Scala) leveraging the Airflow scheduler/executor framework.
· Collaborate with cross-functional teams, including Data Scientists, Product Managers, and Software Engineers, to define data requirements and deliver data solutions that drive merchandising and sales improvements.
· Contribute to the broader Data Engineering community at the Client to influence tooling and standards and to improve culture and productivity.
· Improve code and data quality by leveraging and contributing to internal tools that automatically detect and mitigate issues.
· Skill sets: Python, SQL (expert level), Spark and Scala (intermediate).

*PLEASE NOTE:* If this opportunity doesn't interest you, or if any part of this email made you uncomfortable, I sincerely apologize. Please consider this email a request for referrals, and feel free to forward it to anyone you think might be a good fit.

*Thanks & Regards,*
Babu
Pantar Solutions Inc
1908 Cox Rd, Weddington, NC 28104
Email: babu (dot) s (at) pantarsolutions (dot) com
http://pantarsolutions.com
