*Position: Hadoop Developer*

*Location: New York, NY*

*Duration: 6+ Months*



*Need Local Consultants*



*Job Description:*



Client is looking for a strong developer who is interested in modeling
complex financial analytics using the Apache Hadoop & Spark ecosystem.

R&D's Foundational Services group brings together core services and
technology frameworks that are shared across the client's application
teams. The group aims to create standardized solutions to complex yet
common problems across R&D. It deals with the challenges of storage,
low-latency retrieval, high-volume requests, scalability, and high
availability over a distributed environment for enterprise-wide use.

The client has been looking to refresh and evolve many facets of its data
and analytics infrastructure. This includes PriceHistory, our canonical
end-of-day time-series datastore; the client's Query Platform, a
standardized query framework that allows users to express complex data
retrieval, analytics, and screening criteria; and the client's Data
Platform, an initiative to help structure data flows with a view towards
improving discoverability and data provenance.

We are looking for developers who can help evolve the architecture of our
analytics infrastructure, model datasets in ways that make them amenable to
high-speed distributed analytics, and define how our core analytic
workflows should be expressed using some of the newer programming
paradigms. Our challenges stem from the low-latency, high-throughput, and
high-availability contexts in which these applications need to work. A
successful candidate should have proven experience with the Hadoop stack
and NoSQL data stores (preferably HBase or Cassandra), experience working
on critical infrastructure, and a strong desire to drive a product forward.
The candidate must exhibit a passion for big data technologies and a
flexible, creative approach to problem solving.



*Responsibilities:*



·         Work with the various application teams at the client that are
looking to use these technology stacks, and understand how to model their
analytic workflows.

·         Develop clean and performant code acceptable to application teams.

·         Interface with the Apache Spark and Scientific Python
communities and be an influencer within them.



*Qualifications:*



·         Programming competence in Java and Scala, with Python being a
strong bonus.

·         Strong proficiency with Apache Hadoop and Apache Spark
programming paradigms.

·         Experience with Pandas, NumPy, and abstractions such as
distributed DataFrames is a strong bonus.

·         Knowledge of financial data and workflows is a strong plus,
though not a necessity.

·         Excellent problem-solving and communication skills; thrives in
a highly collaborative and dynamic work environment.









-- 

*Thanks & Regards,*

*Michael Williams || Technical Recruiter*

*Phone: 646 661 6122*

*Email: [email protected] <[email protected]>*

*Hangout: [email protected] <[email protected]>*


-- 
You received this message because you are subscribed to the Google Groups 
"CorptoCorp" group.
Visit this group at https://groups.google.com/group/corptocorp.
