I'm running into another stack trace now that I've switched to an 
OracleDataSource.  AbstractJDBCDataModel creates its statements as 
forward-only (it only sets the fetch direction, to FETCH_UNKNOWN), and 
Oracle's driver rejects the ResultSet.isLast() call on a forward-only result 
set.  I think creating the statements with the TYPE_SCROLL_INSENSITIVE 
result set type should resolve this issue.  Do you think it makes sense to 
change this in the abstract data model class or is this a special case that 
requires a custom OracleJDBCDataModel?

Here's the trace:
java.sql.SQLException: Invalid operation for forward only resultset : isLast
        at oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
        at oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:131)
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:197)
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:261)
        at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:269)
        at oracle.jdbc.driver.OracleResultSetImpl.isLast(OracleResultSetImpl.java:368)
        at org.apache.mahout.cf.taste.impl.model.jdbc.AbstractJDBCDataModel$ResultSetUserIterator.hasNext(AbstractJDBCDataModel.java:577)
        at org.apache.mahout.cf.taste.impl.recommender.TopItems.getTopUsers(TopItems.java:81)
        at org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood.getUserNeighborhood(NearestNUserNeighborhood.java:102)
        at org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender.recommend(GenericUserBasedRecommender.java:83)
        at org.apache.mahout.cf.taste.impl.recommender.AbstractRecommender.recommend(AbstractRecommender.java:53)
        at org.apache.mahout.cf.taste.impl.recommender.CachingRecommender$RecommendationRetriever.get(CachingRecommender.java:198)
        at org.apache.mahout.cf.taste.impl.recommender.CachingRecommender$RecommendationRetriever.get(CachingRecommender.java:185)
        at org.apache.mahout.cf.taste.impl.common.Cache.getAndCacheValue(Cache.java:103)
        at org.apache.mahout.cf.taste.impl.common.Cache.get(Cache.java:77)
        at org.apache.mahout.cf.taste.impl.recommender.CachingRecommender.recommend(CachingRecommender.java:123)
        at org.apache.mahout.cf.taste.impl.recommender.CachingRecommender.recommend(CachingRecommender.java:99)
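
If changing the result set type feels too Oracle-specific, another option would be to drop the isLast() call entirely and answer hasNext() by reading one row ahead, which is legal even on forward-only result sets. Here's a rough sketch of what I mean; the RowSource interface and the names are mine, not Mahout's:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;

// Mimics ResultSet traversal: fetchNext() returns the next row, or null
// once the rows are exhausted (hypothetical helper interface).
interface RowSource<T> {
    T fetchNext();
}

// Answers hasNext() by buffering one row ahead instead of calling
// ResultSet.isLast(), so it works even on TYPE_FORWARD_ONLY result sets.
final class LookAheadIterator<T> implements Iterator<T> {
    private final RowSource<T> source;
    private T lookAhead;

    LookAheadIterator(RowSource<T> source) {
        this.source = source;
        lookAhead = source.fetchNext(); // prime the buffer
    }

    public boolean hasNext() {
        return lookAhead != null;
    }

    public T next() {
        if (lookAhead == null) {
            throw new NoSuchElementException();
        }
        T result = lookAhead;
        lookAhead = source.fetchNext(); // refill the buffer
        return result;
    }

    public void remove() {
        throw new UnsupportedOperationException();
    }
}
```

In the JDBC case the RowSource would wrap the ResultSet and return null when rs.next() reports no more rows.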


-Adam

-----Original Message-----
From: Sean Owen [mailto:[email protected]] 
Sent: Thursday, May 14, 2009 11:29 AM
To: [email protected]; Burnett, Adam
Subject: Re: Getting started

That's my bad. I just committed a fix that makes the result of these
methods in RandomUtils more reasonable for small values and should fix
this corner-ish case.
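
For anyone curious, the guard amounts to something like the following sketch; the committed RandomUtils code may differ in its details:

```java
// Sketch of a nextPrime() that tolerates small inputs; the actual
// RandomUtils implementation may differ.
final class PrimeSketch {

    static int nextPrime(int n) {
        if (n < 2) {
            return 2; // small-value guard: anything below 2 maps to the first prime
        }
        int candidate = n;
        while (!isPrime(candidate)) {
            candidate++;
        }
        return candidate;
    }

    private static boolean isPrime(int n) {
        if (n < 2) {
            return false;
        }
        // trial division up to sqrt(n); the long cast avoids i*i overflow
        for (int i = 2; (long) i * i <= n; i++) {
            if (n % i == 0) {
                return false;
            }
        }
        return true;
    }
}
```

The point is just that small arguments no longer fall through to a precondition check that throws IllegalArgumentException.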

The org.apache.mahout.cf.taste stuff itself does not require Hadoop;
the Hadoop bindings are all in org.apache.mahout.cf.taste.hadoop.

2009/5/14 Burnett, Adam <[email protected]>:
> Hello Mahout community.  I'm just getting started in exploring how to
> integrate Mahout into my application. I've written a quick sample but
> I'm getting the following exception when trying to use the
> SlopeOneRecommender.
>
> INFO: Building average diffs...
> Exception in thread "main" java.lang.IllegalArgumentException
>        at org.apache.mahout.cf.taste.impl.common.RandomUtils.isNotPrime(RandomUtils.java:93)
>        at org.apache.mahout.cf.taste.impl.common.RandomUtils.nextPrime(RandomUtils.java:81)
>        at org.apache.mahout.cf.taste.impl.common.RandomUtils.nextTwinPrime(RandomUtils.java:67)
>        at org.apache.mahout.cf.taste.impl.common.FastMap.rehash(FastMap.java:285)
>        at org.apache.mahout.cf.taste.impl.recommender.slopeone.MemoryDiffStorage.pruneInconsequentialDiffs(MemoryDiffStorage.java:254)
>        at org.apache.mahout.cf.taste.impl.recommender.slopeone.MemoryDiffStorage.buildAverageDiffs(MemoryDiffStorage.java:227)
>        at org.apache.mahout.cf.taste.impl.recommender.slopeone.MemoryDiffStorage.&lt;init&gt;(MemoryDiffStorage.java:119)
>        at org.apache.mahout.cf.taste.impl.recommender.slopeone.SlopeOneRecommender.&lt;init&gt;(SlopeOneRecommender.java:65)
> I built Mahout from revision 774566 of trunk.  I have a
> MySQLJDBCDataModel that reads a custom table from my database.  One
> question I have: do I need a Hadoop cluster running before
> attempting to use Mahout in my app?  Could not having it running cause
> the above error?  Any thoughts on how to proceed from here?  Thanks!
>
> Adam Burnett
>
>
>
