Hello,
Hope you are doing great!

Please find the job description below. If you are interested,
please reply with your updated resume to mah...@axiustek.com

*Position: Big Data Architect *
*Location: Milwaukee, WI *
*Duration: Long Term*

*Qualifications:*

   - 12+ years of total IT experience, including 3+ years of Big Data
   experience (Hadoop, Spark Streaming, Kafka, Spark SQL, HBase, Hive, and
   Sqoop). Hands-on experience with Big Data tools and technologies is mandatory.
   - Proven experience driving technology and architectural execution
   for enterprise-grade solutions built on Big Data platforms.
   - Has designed at least one Hadoop data lake end to end using the above Big
   Data technologies.
   - Experience designing Hive and HBase data models for storage and
   high-performance queries.
   - Knowledge of standard methodologies, concepts, best practices, and
   procedures within Big Data environment.
   - Proficient in Linux/Unix scripting.
   - Bachelor's degree in Engineering - Computer Science, or Information
   Technology. Master's degree in Finance, Computer Science, or Information
   Technology a plus.
   - Experience in Agile methodology is a must.
   - Experience with Storm and NoSQL databases (e.g., Cassandra) is desirable.
   - Experience with Oracle or any other RDBMS is desirable.
   - Familiarity with one of the leading Hadoop distributions, such as
   Hortonworks, Cloudera, or MapR, is desirable.
   - Exposure to infrastructure-as-a-service providers such as Google
   Compute Engine, Microsoft Azure, or Amazon AWS is a plus.
   - Self-starter, able to implement the solution independently.
   - Good communication and problem-solving skills.

*Job Description:*

   - Define big data solutions that deliver value to the customer;
   understand customer use cases and workflows and translate them into
   engineering deliverables.
   - Architect and design Hadoop solutions.
   - Actively participate in Scrum calls; work closely with the product owner
   and scrum master on sprint planning, estimates, and story points.
   - Break user stories into actionable technical stories and dependencies,
   and plan their execution across sprints.
   - Design batch and real-time load jobs from a broad variety of data
   sources into Hadoop, and design ETL jobs that read data from Hadoop and
   deliver it to a variety of consumers/destinations.
   - Perform analysis of vast data stores and uncover insights.
   - Maintain security and data privacy; create scalable,
   high-performance web services for data tracking.
   - Propose best practices and standards, and implement them in the
   deliverables.
   - Analyze long-running queries and jobs, and tune their performance
   using query optimization techniques and Spark code optimization.


-- 
Thanks & Regards
Mahesh Gurram
Sr. US IT Recruiter
Desk No: 703-738-6662 Ext: 2152
Email: mah...@axiustek.com <mah...@sparinfosys.com>
*www.axiustek.com* <http://www.axiustek.com/>

-- 
You received this message because you are subscribed to the Google Groups 
"VB.NET 2003 Group" group.