Hi Ahmed,
I used the following BibTeX entry in my master's thesis:
@webpage{mahout,
Abstract = {Apache Mahout's goal is to build scalable machine learning
libraries. With scalable we mean: Scalable to reasonably large data sets. Our
core algorithms for clustering, classification and batch based collaborative
filtering are implemented on top of Apache Hadoop using the map/reduce
paradigm. However we do not restrict contributions to Hadoop based
implementations: Contributions that run on a single node or on a non-Hadoop
cluster are welcome as well. The core libraries are highly optimized to allow
for good performance also for non-distributed algorithms},
Author = {{Apache Software Foundation}},
Date-Added = {2011-03-15 13:39:56 +0100},
Date-Modified = {2011-04-29 14:12:11 +0200},
Description = {Mahout's goal is to build scalable machine learning
libraries. With scalable we mean: Scalable to reasonably large data sets. Our
core algorithms for clustering, classification and batch based collaborative
filtering are implemented on top of Apache Hadoop using the map/reduce
paradigm. However we do not restrict contributions to Hadoop based
implementations: Contributions that run on a single node or on a non-Hadoop
cluster are welcome as well. The core libraries are highly optimized to allow
for good performance also for non-distributed algorithms. Scalable to support
your business case. Mahout is distributed under a commercially friendly Apache
Software license. Scalable community. The goal of Mahout is to build a vibrant,
responsive, diverse community to facilitate discussions not only on the project
itself but also on potential use cases. Come to the mailing lists to find out
more.},
Lastchecked = {2011-03-15},
Title = {Apache Mahout: Scalable machine-learning and data-mining
library},
Url = {http://mahout.apache.org},
Bdsk-Url-1 = {http://mahout.apache.org}}
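Note that @webpage is a BibDesk type, not a standard BibTeX one, so some
styles may choke on it. If that happens, a trimmed @misc entry along these
lines should work as well (just a sketch, with the field values taken from
the entry above; \url needs the url or hyperref LaTeX package):

@misc{mahout,
  author       = {{Apache Software Foundation}},
  title        = {Apache Mahout: Scalable machine-learning and data-mining library},
  howpublished = {\url{http://mahout.apache.org}},
  note         = {Accessed: 2011-03-15}
}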
/Manuel
On 08.04.2012, at 21:24, Ahmed Abdeen Hamed wrote:
> Hello,
>
> Is there a specific format the Mahout developers would like for citing
> Mahout?
>
> Thanks very much,
>
> -Ahmed
--
Manuel Blechschmidt
Dortustr. 57
14467 Potsdam
Mobil: 0173/6322621
Twitter: http://twitter.com/Manuel_B