Hi All,
I am a Solr newbie. I find the Solr documentation easy to access and use, which is
a really good thing. My problem, though, is that I did not find a home-grown Solr
profiling/monitoring tool.
I set up the server as a multi-core server; each core has approximately 2GB
of index. And I need to update Solr and re
Look up highlighting. http://wiki.apache.org/solr/HighlightingParameters
Not sure if this is appropriate for this list, but I will try anyway and
hope to get a few pointers.
I am trying to help a Rehabilitation Research Center set up a document
search on their website (as a volunteer). They have a Word document with a
lot of information about resources and contact p
Hello all
According to the docs, I need to use solr.LowerCaseTokenizerFactory
Does anyone have any experience with it? Can anyone comment on pitfalls or
things to beware of?
Does anyone know of any examples I can look at?
Thanks
Mark
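For reference, a minimal schema.xml field type using that tokenizer might look like the sketch below (the fieldType name is made up; adapt it to your schema). One pitfall to beware of: LowerCaseTokenizerFactory both splits on non-letter characters and lowercases, so digits and punctuation are dropped from tokens entirely.

```xml
<!-- Hypothetical field type: tokenize on letters only, lowercasing each token.
     Behaves like LetterTokenizer followed by LowerCaseFilter. -->
<fieldType name="text_lower" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.LowerCaseTokenizerFactory"/>
  </analyzer>
</fieldType>
```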
: I have about 20k text files, some very small, but some up to 300MB, and
: would like to do text searching with highlighting.
:
: Imagine the text is the contents of your syslog.
:
: I would like to type in some terms, such as "error" and "mail", and have
: Solr return the syslog lines with t
I have rows=10. Good idea, I will set it to 1.
Should I expect a constant return time with rows=10, regardless of the total
number of documents found, since they aren't returned?
> Another thing to note is that QTime does not include the time it takes to
> retrieve the stored documents to include in the respo
Another thing to note is that QTime does not include the time it takes to
retrieve the stored documents to include in the response. So if you're using a
high rows value in your query, QTime may be much smaller than the actual time
Solr spends generating the response.
Try adding rows=1 to your quer
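To see that gap, you can time the call on the client side and compare it with the server-reported QTime (in SolrJ, QueryResponse.getQTime() returns it). Below is a minimal, self-contained sketch of the client-side measurement; a Thread.sleep stands in for the actual query plus response transfer, and the numbers are purely illustrative.

```java
public class QTimeVsWallClock {
    // Measures client-observed wall-clock time for an operation. The
    // server's QTime excludes stored-field retrieval and network transfer,
    // so wall-clock time can be noticeably larger for high rows values.
    public static long timeMillis(Runnable op) {
        long start = System.currentTimeMillis();
        op.run();
        return System.currentTimeMillis() - start;
    }

    public static void main(String[] args) {
        // Stand-in for query execution + response transfer (hypothetical).
        long wall = timeMillis(() -> {
            try { Thread.sleep(50); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        System.out.println("wall-clock ms: " + wall
                + " (server QTime would report only part of this)");
    }
}
```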
Guys, thank you for all the replies.
I think I figured out a partial solution to the problem on Friday
night. Adding a whole bunch of debug statements to the info stream showed
that every document is following the "update document" path instead of the
"add document" path. Meaning that all document I
That sounds like Nagle's algorithm.
http://en.wikipedia.org/wiki/Nagle's_algorithm#Interactions_with_real-time_systems
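If Nagle's algorithm is the culprit, the usual fix is enabling TCP_NODELAY on the client socket. SolrJ goes through HttpClient, so where you set this depends on your HttpClient configuration; the plain-socket sketch below is only a self-contained illustration of the flag itself.

```java
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;

public class NoDelayDemo {
    // Opens a loopback connection and disables Nagle's algorithm on it,
    // so small writes are sent immediately instead of being coalesced.
    public static boolean connectWithNoDelay() throws IOException {
        try (ServerSocket server =
                     new ServerSocket(0, 1, InetAddress.getLoopbackAddress());
             Socket client =
                     new Socket(server.getInetAddress(), server.getLocalPort())) {
            client.setTcpNoDelay(true); // sets TCP_NODELAY on the socket
            return client.getTcpNoDelay();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("TCP_NODELAY enabled: " + connectWithNoDelay());
    }
}
```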
On Sun, Oct 30, 2011 at 2:01 PM, wrote:
> Another interesting note. When I use the Solr Admin screen to perform the
> same query, it doesn't take as long. Only when using SolrJ
Another interesting note: when I use the Solr Admin screen to perform the
same query, it doesn't take as long. It is only slow when using SolrJ and an
HTTP Solr server connection.
>
>> I am running Solr 3.4 in a glassfish domain for
>> itself. I have about 12,500 documents with a 100 or so
>> fields with the
Yeah, I figured that. I guess I will have to dig deeper, because the data
transferred is only about 60k and everything is local on one machine. It
shouldn't take 13 seconds for that.
>
>> I am running Solr 3.4 in a glassfish domain for
>> itself. I have about 12,500 documents with a 100 or so
>> fields with t
: Follow up question: what if it is a string instead of number? While you can
: use [387 TO *] to find out all number that is bigger than 387, how do you
: find specific set of "string"?
Range queries should also work on strings, but I think the more general
solution to your problem is that you
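For what it's worth, range queries on string fields compare lexicographically, which is a common pitfall with numbers stored as strings: against "387", the string "9" sorts after it while "1000" sorts before it. A couple of hypothetical examples (field name made up):

```
name_s:[apple TO cherry]   inclusive lexicographic range
name_s:{387 TO *}          everything lexicographically greater than "387"
```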
> I am running Solr 3.4 in a glassfish domain for
> itself. I have about 12,500 documents with a 100 or so
> fields with the works (stored, termv's, etc).
>
> In my webtier code, I use SolrJ and execute a query as
> such:
>
> long querystart =
> new Date().getTime();
>
Hi,
I am running Solr 3.4 in a GlassFish domain of its own. I have about
12,500 documents with 100 or so fields, with the works (stored,
term vectors, etc).
In my webtier code, I use SolrJ and execute a query as such:
long querystart = new Date().getTime();
System.out.pr