Hi Alex,

In my experience, you just need to wait. The last time I indexed just 400MB of data, Solr took around an hour. The server had 2GB of RAM and two processors, with no other software installed on it (Red Hat OS).
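If you want to see more of what solrindex is actually doing, it is usually easier to turn up the logging than to attach a debugger or run it in Eclipse. This is only a sketch from memory of a stock Nutch 1.0 layout, so check the exact package names against your own conf/log4j.properties:

    # conf/log4j.properties -- raise the level for the indexer classes
    # (package names assumed from a default Nutch 1.0 install)
    log4j.logger.org.apache.nutch.indexer=DEBUG
    log4j.logger.org.apache.nutch.indexer.solr=DEBUG

    # then follow the log while the job runs
    tail -f logs/hadoop.log

When run locally, Nutch normally writes its Hadoop job output to logs/hadoop.log, so that is where the per-segment progress should show up.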
Hope it helps,
Davide

-----Original Message-----
From: Alex McLintock [mailto:[email protected]]
Sent: Tuesday, August 11, 2009 9:11 PM
To: [email protected]
Subject: Nutch to SolR. First steps

I'm trying to send my Nutch crawl to Solr. I've "generated, fetched, updated" several times, and I've done an invertlinks. But when I try to do the solrindex, it just sits there for ages and doesn't seem to stress the Solr server at all.

I'm using Nutch 1.0, Sun Java 1.6, Ubuntu Linux 9.04.

/local/apps/software/nutch$ bin/nutch solrindex http://rio23:8983/solr/ crawl/crawldb crawl/linkdb crawl/segments/*

Is there some kind of "verbose" option so that I can better see what it is doing? I could maybe insert some extra debugging, or do I need to run this in Eclipse? The Java process seems to be using up most of a core's CPU time, so it seems to be doing *something*.

This is my first Solr project, so I have proved that it is up and running, but I haven't actually added any data to it yet...

Alex
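Also, a quick sanity check on the Solr side (assuming the default select handler is enabled) is to ask for the document count directly; if solrindex has committed anything, numFound will be non-zero:

    curl 'http://rio23:8983/solr/select?q=*:*&rows=0'

Until the job finishes and issues its commit, you may well see zero here even while the indexer is busy.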
