Hi,
Greetings from Metahorizon. We have the following requirement; please share consultant profiles that meet it.

Data Architect
Pittsburgh, PA
6+ months Contract

MapR experience would be great, but we will consider experience with any Hadoop distribution (Cloudera, Hortonworks, or MapR).

Big Data Solutions Architect/Specialist/Sr. Engineer

Responsibilities:
. Lead a development team of big data designers, developers, data scientists, and DevOps engineers
. Implement a big data enterprise warehouse, BI, and analytics system using Hive, HBase, and Kylin
. Develop and maintain processes to acquire, analyze, store, cleanse, and transform large datasets using tools such as Spark, Kafka, Sqoop, Hive, Talend, and Elasticsearch, Logstash, and Kibana (ELK)
. Provide recommendations, technical direction, and leadership for the selection and incorporation of new technologies into the Hadoop ecosystem
. Participate in regular status meetings to track progress, resolve issues, mitigate risks, and escalate concerns in a timely manner
. Contribute to the development, review, and maintenance of product requirements documents, technical design documents, and functional specifications
. Help design innovative, customer-centric solutions based on deep knowledge of large-scale, data-driven technology and the CPG industry
. Help develop and maintain enterprise data standards, best practices, security policies, and governance processes for the Hadoop ecosystem

Required Skills:
. Four-year degree in Computer Science/Software Engineering or a related degree program, or equivalent application development, implementation, and operations experience
. Advanced study or degrees preferred, such as an MBA, a Master's, or a PhD in Computer Science/Software Engineering or a related scientific degree program
. 5+ years of experience in large systems analysis and development, addressing unique issues of architecture and data management
. Experience working at the highest technical level of all phases of systems analysis and development, across the full scope of the system development cycle
. 3+ years of related experience on data warehousing and business intelligence projects
. 2+ years of implementation or development experience with the Hadoop ecosystem
. Working knowledge of the entire big data development stack
. Experience handling very large data sets (tens of terabytes and up preferred)
. Experience with secure RESTful web services
. Experience with Sqoop, Spark, Hive, Kafka, and HBase (Kylin preferred)
. Experience with Lucene-based enterprise search platforms (e.g., ELK or Solr)
. Experience with automated testing for big data platforms
. Experience with best practices for data integration, transformation, governance, and data quality
. Experience designing, developing, and coding complex ETL applications, including completing the programming, documentation, and testing (Talend and Scala preferred)
. Experience with the Agile software development process and development best practices
. Experience with big data text mining and big data analytics preferred
. Understanding of big data architecture and the tools used in the Hadoop ecosystem
. Ability to lead tool suite selection and lead proofs of concept
. Ability to share ideas within a collaborative team and to drive the team based on technical expertise and best practices from past experience

Best regards,
Ankur
Metahorizon Inc.
400 E Royal Lane, Suite III-212
Irving, Texas 75039
Direct: 214-628-4362
Email: [email protected]
Gtalk: [email protected]

This is not an unsolicited email. Under Bill 1618, Title III, passed by the 105th U.S. Congress, this email cannot be considered spam as long as we include our contact information and an option to be removed from our emailing list.
If you have received this message in error, or are not interested in receiving our emails, please reply with "remove" in the subject line and include your original email address.
