Dear wiki user,

You have subscribed to the wiki page "Hadoop Wiki" for change notifications.

The page PoweredBy has been reverted to revision 240 by stack.
http://wiki.apache.org/hadoop/PoweredBy?action=diff&rev1=243&rev2=244

--------------------------------------------------

    * 532-node cluster (8 * 532 cores, 5.3PB).
    * Heavy usage of Java MapReduce, Pig, Hive, and HBase.
    * Using it for search optimization and research.
-   * 
[[http://www.profischnell.com/uebersetzung/uebersetzung-deutsch-englisch.html|Englisch
 Deutsch Übersetzung]] 
  
   * [[http://www.enormo.com/|Enormo]]
    * 4-node cluster (32 cores, 1TB).
@@ -444, +443 @@

  
   * [[http://www.thestocksprofit.com/|Technical analysis and Stock Research]]
    * Generating stock analysis on 23 nodes (dual 2.4GHz Xeon, 2GB RAM, 36GB hard drive).
-   * 
[[http://www.profi-fachuebersetzung.de/language-translation.html|Translation 
agency]] / [[http://www.profischnell.com|Übersetzung]]  
[[http://hemorrhoid.com/|Hemorrhoid]] [[http://www.profischnell.com|Deutsch]]
+ 
   * [[http://www.tid.es/about-us/research-groups/|Telefonica Research]]
    * We use Hadoop in our data mining and user modeling, multimedia, and internet research groups.
    * 6-node cluster with 96 total cores, 8GB RAM, and 2TB storage per machine.
@@ -477, +476 @@

  
   * [[http://t2.unl.edu|University of Nebraska Lincoln, Research Computing Facility]]
    . We currently run one medium-sized Hadoop cluster (200TB) to store and serve up physics data for the computing portion of the Compact Muon Solenoid (CMS) experiment. This requires a filesystem which can download data at multiple Gbps and process data at an even higher rate locally. Additionally, several of our students are involved in research projects on Hadoop.
- 
  
  = V =
   * [[http://www.veoh.com|Veoh]]
