Hi,

This is Manohar from IndSoft, Inc.



We have a Senior Big Data Engineer position with our client. Please send your
consultants' resumes to [email protected], or you can reach me at
630-524-0011.


*Position:* Senior Big Data Engineer

*Location:* Denver, CO

*Duration:* 6-12 Months



*Mandatory Skills:*

Big Data (Spark, Kafka), AWS, databases (SQL, MySQL, PostgreSQL), and
programming (Java or Scala). The rest are not essential.



*Job Description:*

·         Deploy Enterprise data-oriented solutions leveraging Data
Warehouse, Big Data and Machine Learning frameworks

·         Optimize data engineering and machine learning pipelines

·         Support data and cloud transformation initiatives

·         Contribute to our cloud strategy based on prior experience

·         Stay current with the latest technologies in a rapidly innovating
marketplace

·         Independently work with all stakeholders across the organization
to deliver point and strategic solutions



*Skills - Experience and Requirements*

·         Should have prior experience working as a Data Warehouse/Big Data
architect.

·         Experience with the Apache Spark processing framework, including
Spark programming in Scala or Python, with knowledge of shell scripting.

·         Coding experience in Java and/or Scala is a must.

·         Experience using AWS APIs (e.g., the Java API, Boto3) to
integrate different services

·         Should have experience in both functional programming and Spark
SQL, processing terabytes of data

·         Specifically, this experience must include writing big data
engineering jobs for large-scale data integration in AWS. Prior experience
writing machine learning data pipelines in Spark is an added advantage.

·         Advanced SQL experience, including SQL performance tuning, is a
must.

·         Experience in logical and physical table design in big data
environments to suit processing frameworks

·         Knowledge of using, setting up, and tuning Spark on EMR with a
resource management framework such as YARN, or standalone Spark.

·         Experience writing Spark Streaming jobs (producers/consumers)
using Apache Kafka or AWS Kinesis is required

·         Should have knowledge of a variety of data platforms such as
Redshift, S3, DynamoDB, and MySQL/PostgreSQL

·         Experience with AWS services such as EMR, Glue, Athena, IAM,
Lambda, CloudWatch, and Data Pipeline

·         Experience in AWS cloud transformation projects is required.



Thanks,

Manohar Reddy

IndSoft, Inc

630-524-0011

-- 
You received this message because you are subscribed to "rtc-linux".
Membership options at http://groups.google.com/group/rtc-linux .
Please read http://groups.google.com/group/rtc-linux/web/checklist
before submitting a driver.