Hi,

This is Manohar from IndSoft, Inc.
We have multiple requirements with our client. Please share your consultant resumes with [email protected], or you can reach me at 630-524-0011.

*Need more than 9 years of experience*
*Need passport number*
*No OPTs*

*Position 1:* Senior Java Application Developer
*Location:* Denver, CO
*Duration:* 6-12 months

*Project Description:*
· Enterprise application integration developer

*Job Description:*
· Strong experience in Java, including web services (both SOAP and REST).
· Experience creating APIs with the Spring/Spring Boot framework.
· Build and modify enterprise Java-based applications.
· AWS experience is a must for deploying cloud-based solutions.
· Experience using API Gateway backed by AWS Lambda.
· Some experience in Python is an added advantage.
· Experience creating high-level system solutions and detailed system architecture designs.
· Experience with relational database and object data model design.
· Proven experience in functional and technical architecture, design and development, testing support, and system integration.
· Experience with converged/virtualized environments.

*Position 2:* Senior Big Data Engineer
*Location:* Denver, CO
*Duration:* 6-12 months

*Mandatory Skills:* Big Data (Spark, Kafka), AWS, databases (SQL, MySQL, PostgreSQL), and programming (Java or Scala). Other skills are secondary.

*Job Description:*
· Deploy enterprise data-oriented solutions leveraging Data Warehouse, Big Data, and Machine Learning frameworks
· Optimize data engineering and machine learning pipelines
· Support data and cloud transformation initiatives
· Contribute to our cloud strategy based on prior experience
· Understand the latest technologies in a rapidly innovating marketplace
· Independently work with all stakeholders across the organization to deliver point and strategic solutions

*Skills - Experience and Requirements*
· Should have prior experience working as a data warehouse/Big Data architect.
· Experience with the advanced Apache Spark processing framework, including Spark programming in Scala or Python, with knowledge of shell scripting.
· Coding experience in Java and/or Scala is a must.
· Experience using AWS APIs (e.g., the Java API, Boto3, etc.) to integrate different services.
· Should have experience in both functional programming and Spark SQL programming, processing terabytes of data. Specifically, this experience must be in writing Big Data engineering jobs for large-scale data integration in AWS. Prior experience writing machine learning data pipelines in Spark is an added advantage.
· Advanced SQL experience, including SQL performance tuning, is a must.
· Experience in logical and physical table design in Big Data environments to suit processing frameworks.
· Knowledge of using, setting up, and tuning Spark on EMR with a resource management framework such as YARN or standalone Spark.
· Experience writing Spark Streaming jobs (producers/consumers) using Apache Kafka or AWS Kinesis is required.
· Should have knowledge of a variety of data platforms such as Redshift, S3, DynamoDB, and MySQL/PostgreSQL.
· Experience with AWS services such as EMR, Glue, Athena, IAM, Lambda, CloudWatch, and Data Pipeline.
· Experience in AWS cloud transformation projects is required.

Thanks,
Manohar
IndSoft, Inc
630-524-0011
[email protected]

--
You received this message because you are subscribed to "rtc-linux".
Membership options at http://groups.google.com/group/rtc-linux.
Please read http://groups.google.com/group/rtc-linux/web/checklist before submitting a driver.
