Hi,

Hope you are doing well.

This is Chaitanya from R2 Technologies. We have a requirement for a *Big
Data Consultant* in *Philadelphia, PA*. Please review the job description
below and, if you would like to pursue this opportunity, reply with a Word
copy of your latest resume along with a daytime phone number and your rate.
You can also reach me at *470-242-7345, Ext. 302*, or send suitable
profiles to *[email protected]*.



*Position: Big Data Consultant*

*Location: Philadelphia, PA*

*Duration: 6 months+*

*F2F Interview*



*Need to provide passport number for submission*


*Expert In:*

·         Object-oriented programming in Python and/or Scala is a must-have.

·         Build Custom Data Pipelines in Python and Scala that Clean,
Transform and Aggregate data from many different sources

·         Expert-level working experience with Spark and related big data
technologies such as MapReduce, Hadoop, HBase, Hive, and Elasticsearch

·         Data Structures and Data Processing Algorithms and Frameworks

·         Data Migration, high throughput Data Pipelines

·         Massively Parallel Processing using Python Tools

·         Big data modelling and ETL Processes

·         Ability to analyze Performance Issues in Big Data Environment

·         Data Modelling, Data Transfer and Storage, Partitioning, Indexing
and caching Techniques

·         Analytical views and data visualization

·         RDBMS, In-Memory, NoSQL, Columnar and Document Storage Systems

·         Batch Processing with Petabyte Scale data sets

·         Communication and Collaboration
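To give a flavor of the pipeline work listed above, here is a minimal plain-Python sketch of a clean/transform/aggregate flow; all records, field names, and values below are hypothetical examples, not from the client:

```python
from collections import defaultdict

# Hypothetical raw records from several sources; fields are illustrative.
raw_records = [
    {"source": "api", "region": "PA", "amount": "125.50"},
    {"source": "csv", "region": "PA", "amount": "74.50"},
    {"source": "csv", "region": "NJ", "amount": "not-a-number"},  # dirty row
    {"source": "api", "region": "NJ", "amount": "200.00"},
]

def clean(records):
    # Drop rows whose amount cannot be parsed as a number.
    for rec in records:
        try:
            float(rec["amount"])
        except ValueError:
            continue
        yield rec

def transform(records):
    # Cast the amount field from string to float.
    for rec in records:
        yield {**rec, "amount": float(rec["amount"])}

def aggregate(records):
    # Sum amounts per region.
    totals = defaultdict(float)
    for rec in records:
        totals[rec["region"]] += rec["amount"]
    return dict(totals)

totals = aggregate(transform(clean(raw_records)))
print(totals)  # {'PA': 200.0, 'NJ': 200.0}
```

In a production setting the same clean/transform/aggregate stages would typically run on Spark DataFrames or RDDs rather than in-memory Python lists.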

*Well Experienced With:*

·         Large Volumes of Data Processing and Developing Automated ETL
Processes

·         High Throughput Low Latency Systems

·         Large Scale Data Modelling from Big Data perspective

·         Big Data Structures in Python

·         PyData, Anaconda, NumPy, PyTables, DataFrames, RDDs, Jupyter
Notebook

·         PyHive, PySpark

·         JSON/Parquet Data formats

·         Real-time streaming with either Spark Streaming or Kafka

·         Develop APIs/Tools that help share data with the Enterprise

·         Debugging in Big Data environment

·         Version Control Systems: Git/GitHub/Bitbucket, etc.

·         DevOps Skills with Python, Shell Scripts, Ansible, Docker etc.

*Nice to have:*

·         Familiarity with PyPI

·         Workflow Management Tools such as Luigi, Apache Airflow,
Snowflow, Apache NiFi or similar

·         Experience Developing Predictive Models

·         Experience with Functional Programming is a Big Plus

*Non-Technical Experience:*

·         Effective Collaborator and Team Player

·         Quick Learner and Self-starter with excellent debugging skills

·         Ability to foresee scalability issues and make judgements



*Thanks & Regards,*

*Chaitu*

*Technical Recruiter **|**R2 Technologies*


*6515 Shiloh Rd, Unit 110, Alpharetta, GA 30005*

*Desk: 470-242-7345, Ext. 302 **|** Email: [email protected]*

*Gtalk: [email protected]*

*LinkedIn: https://www.linkedin.com/in/chaitanya-chaitu-480003148/*

-- 
You received this message because you are subscribed to the Google Groups 
"project managment" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at https://groups.google.com/group/project-managment.
For more options, visit https://groups.google.com/d/optout.
