Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Hadoop Wiki" for change 
notification.

The "Hbase/PoweredBy" page has been changed by danharvey.
http://wiki.apache.org/hadoop/Hbase/PoweredBy?action=diff&rev1=54&rev2=55

--------------------------------------------------

  
  [[http://www.meetup.com|Meetup]] is on a mission to help the world’s people 
self-organize into local groups.  We use Hadoop and HBase to power a site-wide, 
real-time activity feed system for all of our members and groups.  Group 
activity is written directly to HBase, and indexed per member, with the 
member's custom feed served directly from HBase for incoming requests.  We're 
running HBase 0.20.0 on an 11-node cluster.
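
  A minimal sketch of what such a feed schema could look like, written against the HBase 0.20-era Java client to match the version above; the `member_feeds` table name, `activity` column family, and inverted-timestamp row key are illustrative assumptions, not Meetup's actual schema:

{{{#!java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class FeedSketch {
  // Assumed layout: one table of feed entries, one column family.
  private static final byte[] FAMILY = Bytes.toBytes("activity");

  // Row key: memberId + "/" + inverted timestamp, so a forward scan
  // over a member's key prefix returns the newest entries first.
  static byte[] rowKey(String memberId, long eventTimeMs) {
    return Bytes.add(Bytes.toBytes(memberId + "/"),
                     Bytes.toBytes(Long.MAX_VALUE - eventTimeMs));
  }

  // Write one activity entry directly into a member's feed.
  static void writeActivity(HTable feeds, String memberId,
                            long when, String event) throws Exception {
    Put put = new Put(rowKey(memberId, when));
    put.add(FAMILY, Bytes.toBytes("event"), Bytes.toBytes(event));
    feeds.put(put);
  }

  // Serve the newest n entries by scanning the member's key prefix.
  // '0' sorts just after '/', so it works as an exclusive stop row.
  static void readFeed(HTable feeds, String memberId, int n) throws Exception {
    Scan scan = new Scan(Bytes.toBytes(memberId + "/"),
                         Bytes.toBytes(memberId + "0"));
    scan.addFamily(FAMILY);
    ResultScanner scanner = feeds.getScanner(scan);
    try {
      int seen = 0;
      for (Result r : scanner) {
        if (++seen > n) break;
        System.out.println(
            Bytes.toString(r.getValue(FAMILY, Bytes.toBytes("event"))));
      }
    } finally {
      scanner.close();
    }
  }

  public static void main(String[] args) throws Exception {
    HTable feeds = new HTable(new HBaseConfiguration(), "member_feeds");
    writeActivity(feeds, "member42", System.currentTimeMillis(), "joined group X");
    readFeed(feeds, "member42", 20);
  }
}
}}}

  Because HBase keeps rows sorted by key, inverting the timestamp turns "latest entries for this member" into a cheap prefix scan rather than a sort at read time.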
  
- [[http://www.mendeley.com|Mendeley]] We are creating a platform for 
researchers to collaborate and share their research online. HBase is helping us 
to create the worlds largest research paper collection and is being used to 
store all our raw imported data. We use a lot of map reduce jobs to process 
these papers into pages displayed on the site. We also use HBase with Pig to do 
analytics and produce the article statistics shown on the web site. You can 
find out more about how we use HBase in these slides 
[http://www.slideshare.net/danharvey/hbase-at-mendeley].
+ [[http://www.mendeley.com|Mendeley]] We are creating a platform for 
researchers to collaborate and share their research online. HBase is helping us 
to create the world's largest research paper collection and is being used to 
store all our raw imported data. We use many MapReduce jobs to process 
these papers into the pages displayed on the site. We also use HBase with Pig to 
do analytics and produce the article statistics shown on the website. You can 
find out more about how we use HBase in these slides: 
[[http://www.slideshare.net/danharvey/hbase-at-mendeley]].
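
  As a rough illustration of that kind of job, here is a sketch of a table-scanning MapReduce that counts papers per publication year using the HBase 0.20 `TableMapper` API; the `papers` table and `meta:year` column are made up for the example, not Mendeley's real layout:

{{{#!java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class PapersPerYear {
  // Map each row of the (assumed) "papers" table to (year, 1).
  static class YearMapper extends TableMapper<Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    @Override
    protected void map(ImmutableBytesWritable row, Result columns, Context ctx)
        throws IOException, InterruptedException {
      byte[] year = columns.getValue(Bytes.toBytes("meta"), Bytes.toBytes("year"));
      if (year != null) {
        ctx.write(new Text(Bytes.toString(year)), ONE);
      }
    }
  }

  // Sum the counts per year.
  static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text year, Iterable<IntWritable> counts, Context ctx)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable c : counts) sum += c.get();
      ctx.write(year, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new HBaseConfiguration();  // 0.20-era constructor
    Job job = new Job(conf, "papers-per-year");
    job.setJarByClass(PapersPerYear.class);
    Scan scan = new Scan();
    scan.addColumn(Bytes.toBytes("meta"), Bytes.toBytes("year"));
    TableMapReduceUtil.initTableMapperJob("papers", scan, YearMapper.class,
        Text.class, IntWritable.class, job);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileOutputFormat.setOutputPath(job, new Path(args[0]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
}}}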
  
  [[http://ning.com|Ning]] uses HBase to store and serve the results of 
processing user events and log files, which allows us to provide near real-time 
analytics and reporting. We use a small cluster of commodity machines with 4 
cores and 16GB of RAM per machine to handle all our analytics and reporting 
needs.
  
