On 04.08.2011 11:58, David Cabanillas wrote:
Sorry Sebastian, I have only tried to get CF working with MySQL, but I don't have any example.
I am at an initial stage right now. My idea is to apply similarity in a job portal: recommend to unemployed users skills that other unemployed users have selected. For example, if many users know Java and Pascal and a new user selects Java, the system should recommend Pascal too.
How many data points will the system have to process for this? I'm not sure you really need to use Hadoop for that. An in-memory recommender might be a much easier solution to deploy.
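For a first experiment, something along these lines might already be enough (an untested sketch; it assumes your skill selections sit in a CSV file with one "userID,skillID" line per known skill, i.e. boolean data, and the file name is made up):

import java.io.File;

import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.TanimotoCoefficientSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.recommender.RecommendedItem;
import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

public class SkillRecommenderSketch {
  public static void main(String[] args) throws Exception {
    // one "userID,skillID" line per skill a user knows (boolean data, no preference values)
    DataModel model = new FileDataModel(new File("skills.csv"));   // made-up file name

    // Tanimoto (Jaccard) similarity works on the overlap of the users' skill sets
    ItemSimilarity similarity = new TanimotoCoefficientSimilarity(model);

    GenericItemBasedRecommender recommender =
        new GenericItemBasedRecommender(model, similarity);

    // recommend 5 skills to user 123
    for (RecommendedItem item : recommender.recommend(123L, 5)) {
      System.out.println(item.getItemID() + " : " + item.getValue());
    }
  }
}

With boolean data the Tanimoto coefficient is usually a reasonable starting point; swapping it for LogLikelihoodSimilarity later is a one-line change.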
--sebastian
Thanks for your support.
PS: If in the end I get the solution working (using Mahout with MySQL), I will publish it on your blog.
On Thu, Aug 4, 2011 at 11:48 AM, Sebastian Schelter <[email protected]> wrote:
David,
it is unhelpful and frustrating if you don't answer questions.
I think you are currently on the wrong track. To quote my blog post:
"Be aware that this is a guide intended for readers already familiar
with Collaborative Filtering and recommender systems that are
evaluating Mahout as a choice for building their production systems
on. The focus is on making the right engineering decisions rather
than on explaining algorithms here."
And please reply to the user mailing list and not to me personally. The purpose of Apache projects offering support is to have public conversations and give all readers the chance to learn, not to provide free private consultation by the committers.
--sebastian
On 04.08.2011 11:44, David Cabanillas wrote:
Right now I only want to connect Mahout to MySQL, and I have not found any example.
In the section *Putting the puzzle together* you said:
DataSource datasource = ...
But what does the ... mean?
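Is it maybe something like this? (Just my guess, using the MysqlDataSource from MySQL Connector/J; the database name and credentials are made up.)

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.impl.model.jdbc.MySQLJDBCDataModel;
import org.apache.mahout.cf.taste.model.DataModel;

import com.mysql.jdbc.jdbc2.optional.MysqlDataSource;

public class DataSourceGuess {

  static DataModel createDataModel() throws TasteException {
    // plain Connector/J DataSource pointing at the MySQL server (all values made up)
    MysqlDataSource dataSource = new MysqlDataSource();
    dataSource.setServerName("localhost");
    dataSource.setDatabaseName("jobportal");
    dataSource.setUser("mahout");
    dataSource.setPassword("secret");

    // MySQLJDBCDataModel should then read the preferences from the default
    // taste_preferences table (user_id, item_id, preference columns),
    // if I read the Javadoc correctly
    return new MySQLJDBCDataModel(dataSource);
  }
}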
On Thu, Aug 4, 2011 at 10:14 AM, Sebastian Schelter <[email protected]> wrote:
David, can you please give us some details about your use case?
It seems like you're trying to reimplement the system I described in
http://ssc.io/deploying-a-massively-scalable-recommender-system-with-apache-mahout/
That system is highly optimized for a certain class of use cases and only makes sense if you have something like 100+ million data points and 100+ requests/second to your recommender.
If you just want to start diving into recommendation mining and build a first system to play with, working with this article is definitely the wrong approach. In that case, I highly suggest you get a copy of "Mahout in Action", http://manning.com/owen/, which gives a superb introduction to recommendation mining with Mahout.
--sebastian
On 03.08.2011 14:59, David Cabanillas wrote:
Hello Sebastian,
Right now I have the precomputed item similarities; my problem is how to connect them to MySQL.
In the section *Setting up the infrastructure for the live recommender system* you suggest that we should use MySQLJDBCDataModel, but I don't understand how it works.
Do you have any code example connecting Mahout and MySQL?
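What I had imagined is roughly the following (surely wrong; the item_similarity table, its columns and the connection details are only made up). I load the precomputed similarities myself and hand them to Mahout:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

import org.apache.mahout.cf.taste.impl.similarity.GenericItemSimilarity;
import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

public class PrecomputedSimilarityLoader {

  // reads a (made-up) MySQL table item_similarity(item_id_a, item_id_b, similarity)
  // and turns it into an in-memory ItemSimilarity for Mahout
  static ItemSimilarity loadSimilarities() throws Exception {
    List<GenericItemSimilarity.ItemItemSimilarity> similarities =
        new ArrayList<GenericItemSimilarity.ItemItemSimilarity>();

    Connection connection = DriverManager.getConnection(
        "jdbc:mysql://localhost/jobportal", "mahout", "secret");  // made-up connection details
    try {
      Statement statement = connection.createStatement();
      ResultSet rs = statement.executeQuery(
          "SELECT item_id_a, item_id_b, similarity FROM item_similarity");
      while (rs.next()) {
        similarities.add(new GenericItemSimilarity.ItemItemSimilarity(
            rs.getLong(1), rs.getLong(2), rs.getDouble(3)));
      }
    } finally {
      connection.close();
    }

    // GenericItemSimilarity keeps all the pairs in memory
    return new GenericItemSimilarity(similarities);
  }
}

Is this roughly the intended way, or does Mahout already ship something for loading precomputed similarities from MySQL?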
Many thanks.
bye
--david