Re: fq vs. q

2009-06-15 Thread Michael Ludwig

Ensdorf Ken wrote:


I ran into this very issue recently, as we are using a freshness
filter for our data that can be 6/12/18 months etc.  I discovered
that even though we were only indexing with day-level granularity, we
were specifying the query by computing a date down to the second, and
thus virtually every filter was unique.  It's amazing how something
this simple could bring Solr to its knees on a large data set.
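The usual fix for that kind of cache miss is to round the computed boundary to the indexed granularity before building the filter string, so identical requests reuse the cached filter. A minimal, self-contained Java sketch (the field name `pubdate` and the helper are hypothetical; inside Solr you would typically use date math such as `NOW/DAY-6MONTHS` instead):

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.temporal.ChronoUnit;

public class FreshnessFilter {
    // Build a filter-query string for "documents newer than N months",
    // rounding the boundary down to midnight UTC so every request made
    // on the same day produces the identical fq string (and thus a
    // filter-cache hit). The field name "pubdate" is hypothetical.
    static String freshnessFq(Instant now, int months) {
        Instant boundary = now.atZone(ZoneOffset.UTC)
                              .minusMonths(months)
                              .toInstant()
                              .truncatedTo(ChronoUnit.DAYS);
        return "pubdate:[" + boundary + " TO *]";
    }

    public static void main(String[] args) {
        Instant morning = Instant.parse("2009-06-15T08:30:45Z");
        Instant evening = Instant.parse("2009-06-15T22:01:02Z");
        // Same day, different seconds -> identical filter string.
        System.out.println(freshnessFq(morning, 6));
        System.out.println(freshnessFq(morning, 6).equals(freshnessFq(evening, 6)));
    }
}
```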


I want to retrieve documents (TV programs) by a particular date and
decided to convert the date to an integer, so I have:

* 20090615
* 20090616
* 20090617 etc.

I lose all date logic (timezones) for that date field, but it works for
this particular use case, as the date is merely a tag, and not a real
date I need to perform more logic on than an integer allows.

Also, an integer looks about as efficient as it gets, so I thought it
preferable to a date for this use case. YMMV.

I think if you truncate dates to incomplete dates, you effectively also
lose all the date logic. You may still apply it, but what would you take
the result to mean? You can't regain precision you've decided to drop.

The actual points in time where my TV programs start and end are
encoded as a UNIX timestamp with exactitude down to the second, also
stored as an integer, as I don't need sub-second precision.

This makes sense for my client, which is not Java, but PHP, so it uses
the C library strftime and friends, which need UNIX timestamps.

Bottom line, I think it may make perfect sense to store dates and times
in integers, depending on your use case and your client.
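To make the two encodings concrete, here is a small, self-contained Java sketch (class and method names are mine, purely illustrative, using the JDK's java.time API): a day-level "tag" as a yyyyMMdd integer, and an exact instant as a UNIX timestamp in seconds.

```java
import java.time.LocalDate;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;

public class DateAsInt {
    // Day-level "tag": encode a calendar date as yyyyMMdd.
    static int dayTag(LocalDate d) {
        return d.getYear() * 10000 + d.getMonthValue() * 100 + d.getDayOfMonth();
    }

    // Exact point in time: UNIX timestamp in seconds (sub-second dropped).
    static long unixSeconds(ZonedDateTime t) {
        return t.toEpochSecond();
    }

    public static void main(String[] args) {
        System.out.println(dayTag(LocalDate.of(2009, 6, 15))); // 20090615
        System.out.println(unixSeconds(
            ZonedDateTime.of(2009, 6, 15, 12, 0, 0, 0, ZoneOffset.UTC)));
    }
}
```

The epoch-seconds value is what a PHP client can feed straight into strftime and friends.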

Michael Ludwig


Re: fq vs. q

2009-06-15 Thread Shalin Shekhar Mangar
On Mon, Jun 15, 2009 at 4:39 PM, Michael Ludwig m...@as-guides.com wrote:

 I want to retrieve documents (TV programs) by a particular date and
 decided to convert the date to an integer, so I have:

 * 20090615
 * 20090616
 * 20090617 etc.

 I lose all date logic (timezones) for that date field, but it works for
 this particular use case, as the date is merely a tag, and not a real
 date I need to perform more logic on than an integer allows.

 Also, an integer looks about as efficient as it gets, so I thought it
 preferable to a date for this use case. YMMV.

 I think if you truncate dates to incomplete dates, you effectively also
 lose all the date logic. You may still apply it, but what would you take
 the result to mean? You can't regain precision you've decided to drop.


Note that with Trie search coming in (see example schema.xml in the nightly
builds), this rounding may not be necessary any more.
-- 
Regards,
Shalin Shekhar Mangar.


Problem with Query Parser?

2009-06-15 Thread Avlesh Singh
I noticed a strange behavior of the Query parser for the following query on
my index.
+(category_name:$ product_name:$ brand_name:$) +is_available:1
The fields category_name, product_name and brand_name are of type text, and
is_available is a string field storing 0 or 1 for each doc in the index.

When I perform the query *+(category_name:$ product_name:$
brand_name:$)*, I get no results (which is as expected).
However, when I perform the query *+(category_name:$ product_name:$
brand_name:$) +is_available:1*, I get results for all documents with
is_available=1. This is unexpected and undesired; the first half of the
query is simply ignored.

I have noticed this behaviour for pretty much all the special characters: $,
^, * etc. I am using the default text field analyzer.
Am I missing something, or is this a known bug in Solr?
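A plausible explanation, sketched below (this mimics typical text-field analysis; it is not Solr's or Lucene's actual analyzer code): tokenizers for text fields usually discard non-alphanumeric characters, so a term consisting only of $ analyzes to zero tokens. The parenthesized clause then contributes nothing, and the remaining +is_available:1 clause matches on its own.

```java
import java.util.ArrayList;
import java.util.List;

public class AnalyzerSketch {
    // Crude stand-in for a text-field analyzer: keep alphanumeric runs,
    // lowercase them, drop everything else (so "$" yields no tokens).
    static List<String> analyze(String input) {
        List<String> tokens = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        for (char c : input.toCharArray()) {
            if (Character.isLetterOrDigit(c)) {
                cur.append(Character.toLowerCase(c));
            } else if (cur.length() > 0) {
                tokens.add(cur.toString());
                cur.setLength(0);
            }
        }
        if (cur.length() > 0) tokens.add(cur.toString());
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(analyze("Nike Shoes")); // [nike, shoes]
        System.out.println(analyze("$"));          // [] -> the clause vanishes
    }
}
```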

Cheers
Avlesh


Re: Custom Request handler Error:

2009-06-15 Thread Noor

I want Solr to accept my custom class changes and run it.
Please, can anyone guide me on how to achieve this?

noor wrote:

Yes, I changed custom into /custom; now it calls my class.
But in the browser it shows
Null RequestHandler null.

So I need Solr to accept my changes. What do I need to do for that?
Please guide me.


Noble Paul wrote:

Register it as follows:

<requestHandler name="/custom" class="org.apache.solr.my.MyCustomHandler"/>

The request must be made to the URI /custom; only then will the requests
come to your handler.

On Sat, Jun 13, 2009 at 5:49 PM, noornoo...@opentechindia.com wrote:
 

Yes, I changed the requestHandler name to:

<requestHandler name="custom" class="org.apache.solr.my.MyCustomHandler"/>

Then also, in the statistics page, my custom handler's request count
under QueryHandler remains 0. It shows that the web request is not
coming to my class.

Noble Paul wrote:
   

register your handler under some other name and fire a request to that

On Fri, Jun 12, 2009 at 8:07 PM, noornoo...@opentechindia.com wrote:

 

I solved this NullPointerException by the following changes.

In the Java code:

public void handleRequestBody(SolrQueryRequest request, SolrQueryResponse response) throws Exception {
    SolrCore coreToRequest = request.getCore(); // was: coreContainer.getCore("core2")
    ...
}

and in solrconfig.xml:

<requestHandler name="/select" class="solr.my.MyCustomHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="q">tandem</str>
    <str name="debugQuery">true</str>
  </lst>
</requestHandler>

Now my webapp runs fine at
http://localhost:8983/mysearch
and searching is also working fine.
But these requests are not going through my custom handler, so I think
the searching is done by the wrong handler.
In the Solr admin statistics page, my custom handler's request count
under QueryHandler remains 0; it doesn't get incremented when I search
something. Rather, the standardRequestHandler's request count is
incremented.

And another thing: how do we debug Solr?
Please, anybody, help me to solve this...

Thanks in advance.

Noble Paul wrote:

is there any error on the console?

On Fri, Jun 12, 2009 at 4:26 PM, Noornoo...@opentechindia.com 
wrote:



 

hi,
I am new to Apache Solr.
I need to create a custom request handler class, so I created a new one
and changed the solrconfig.xml file as:

<requestHandler name="/select" class="solr.my.MyCustomHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="q">tandem</str>
    <str name="debugQuery">true</str>
  </lst>
</requestHandler>

And in my Java class, the code is:

public class MyCustomHandler extends RequestHandlerBase {
    public CoreContainer coreContainer;

    public void handleRequestBody(SolrQueryRequest request, SolrQueryResponse response) throws Exception {
        SolrCore coreToRequest = coreContainer.getCore("core2");
        ModifiableSolrParams params = new ModifiableSolrParams();
        params.set("echoParams", "explicit");
        params.set("q", "text");
        params.set("debugQuery", "true");
        request = new LocalSolrQueryRequest(coreToRequest, params);
        SolrRequestHandler reqHandler = coreToRequest.getRequestHandler("/select");
        coreToRequest.execute(reqHandler, request, response);
        coreToRequest.close();
        request.close();
    }

    // The abstract methods getDescription(), getSourceId(), getSource() and
    // getVersion() are overridden, but have empty implementations.
}


But if I search any text in my webapp from the browser, I get an HTTP 500
error.
I don't know how the CoreContainer is initialized.
Please, anyone, give me the solution...

thanks and regards,
Mohamed


Debug Solr in Netbeans..

2009-06-15 Thread noor
hi,
I am new to Apache Solr.
I got the Solr source code and created my own (custom) classes. Also, I
made the request reference to the newly created classes in
solrconfig.xml.

Now I need to debug my code when the Solr search calls my class, but I
don't know how to.

Please, anybody, help me to achieve this.

thanks and regards,
Noor




Re: Debug Solr in Netbeans..

2009-06-15 Thread Mark Miller

noor wrote:

hi, i am new to apache solr.
i got the solr source code, and i created my own (custom) classes.
Also, i make the request reference to that newly created classes in 
solr-config.xml.


now i need to debug my code, when the solr search calls my class..
So, for this, i dont know how to debug my code?

Please anybody help me to achieve this.

thanks and regards,
Noor



Make a file next to build.xml called build.properties.

Add to the empty file: example.debug=true

Run the ant target 'run-example' in build.xml.

Solr will run with the ability to connect with a remote debugger on port 
5005.


In NetBeans, from the main menu, select Debug > Attach Debugger... (in
NetBeans 6.1 and older, select Run > Attach Debugger...).


Follow the dialogue box prompts to connect to the running Solr example.
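For reference, the example.debug switch boils down to standard JDWP options on the example JVM; starting Jetty by hand with flags roughly like the following (the exact flags are set by the build file, so treat this as an approximation) gives the same debug listener on port 5005:

```
java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005 -jar start.jar
```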

--
- Mark

http://www.lucidimagination.com





Re: fq vs. q

2009-06-15 Thread Michael Ludwig

Fergus McMenemie wrote:


The article could explain the difference between fq= and
facet.query= and when you should use one in preference to
the other.


My understanding is that while these query modifiers rely on the
same implementation (cached filters) to boost performance, they
differ in what they do: fq limits the result set to your filter
criterion, whereas facet.query does not restrict the result but
enhances it with statistical information gained by intersecting
the result set with the facet query's filter.

It looks like facet.query is just a more flexible means of
defining a filter than is possible using a mere facet.field.

Would that be approximately correct?

A question of mine:

It appears to me that each facet.query invariably leads to one
boolean filter, so if you wanted to do range faceting for a given
field and obtain, say, results reduced from their actual continuum
of values to three ranges {A,B,C}, you'd have to define three
facet.query parameters accordingly. A mere facet.field, on the
other hand, creates as many filters as there are unique values in
the field. Is that correct?
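For instance (the field name price and the bucket boundaries are made up for illustration), three ranges mean three explicit facet.query parameters on the request, while facet.field=price would instead count every distinct price value:

```
...&q=*:*&facet=true
   &facet.query=price:[* TO 10]
   &facet.query=price:[10 TO 100]
   &facet.query=price:[100 TO *]
```

Each facet.query yields exactly one count in the facet_queries section of the response.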

Michael Ludwig


Re: Debug Solr in Netbeans..

2009-06-15 Thread noor

Solr starts running on port 8983.
I created build.properties in the project folder, where build.xml is,
and in that empty build.properties file I added only:
example.debug=true
In NetBeans, under Debug > Attach Debugger, I set:
- Debugger: Java Debugger (JPDA)
- Connector: SocketAttach (attaches by socket to other VMs)
- Host: localhost
- Port: 5005
- Timeout: empty

While Solr was running I set this, but the output screen shows
Connection is refused.

Are my changes correct, or do I need to change anything else?


thanks and regards,
Noor


Mark Miller wrote:


Make a file next to build.xml called build.properties.

Add to the empty file: example.debug=true

Run the ant target 'run-example' in build.xml.

Solr will run with the ability to connect with a remote debugger on 
port 5005.


In NetBeans, from the main menu, select Debug > Attach Debugger... (in
NetBeans 6.1 and older, select Run > Attach Debugger...).


Follow the dialogue box prompts to connect to the running Solr example.





Re: fq vs. q

2009-06-15 Thread Michael Ludwig

Shalin Shekhar Mangar wrote:

On Mon, Jun 15, 2009 at 4:39 PM, Michael Ludwig m...@as-guides.com
wrote:



I think if you truncate dates to incomplete dates, you effectively
also lose all the date logic. You may still apply it, but what would
you take the result to mean? You can't regain precision you've
decided to drop.


Note that with Trie search coming in (see example schema.xml in the
nightly builds), this rounding may not be necessary any more.


http://svn.apache.org/repos/asf/lucene/solr/trunk/example/solr/conf/schema.xml

Not sure I understand correctly, but this sounds as if given an
integer field and a @precisionStep of 3, the original value is stored
along with three copies that omit (1) the last bit, (2) the two last
bits, (3) the three last bits. So a given range query might be
optimized to an equality query. But I'm not sure I'm on the right
track here.
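For what it's worth, my hedged reading of the trie encoding is that with precisionStep=3 the extra terms drop trailing bits in steps of 3 (3, 6, 9, ... bits), not one bit at a time; a range query can then be covered by a few such coarse terms plus exact terms at the edges. A self-contained sketch of just the value masking (Lucene's actual implementation additionally tags each indexed term with its shift and uses a special byte encoding):

```java
public class TriePrecision {
    // Return the value with the lowest `shift` bits cleared, i.e. the
    // coarser-precision term a trie-encoded field would additionally index.
    static int atShift(int value, int shift) {
        return value & (~0 << shift);
    }

    public static void main(String[] args) {
        int v = 20090615;
        // With precisionStep=3 the extra precisions are shifts 3, 6, 9, ...
        for (int shift = 0; shift <= 9; shift += 3) {
            System.out.println("shift " + shift + ": " + atShift(v, shift));
        }
    }
}
```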

Michael Ludwig


Re: Joins or subselects in solr

2009-06-15 Thread Michael Ludwig

Nasseam Elkarra wrote:


I am storing items in an index. Each item has a comma-separated list
of related items. Is it possible to bring back an item and all of its
related items in one query? If so, how, and how would you distinguish
between the main item and the related ones?


Think about the data structure. You're saying there is a main item,
which suggests there is some regularity to the underlying data
structure, possibly a tree.

If there is a main item, each item should store a reference to the main
item. You could then perform a lookup specifying q=mainitem:12345. That
would retrieve all items related to 12345 and solve the problem more
efficiently than having each item store a list of all its related items.

I'm thinking of small or moderately sized trees here, such as they grow
in mailing lists or discussion boards.

If it's not a tree, but some less regular graph, then the notion of a
main item needs clarification.

Michael Ludwig


Re: Debug Solr in Netbeans..

2009-06-15 Thread noor

Addition to the previous reply:
I built my custom project, put the jar into the Solr webapp's lib
folder, and started Solr.
In NetBeans I made the changes as I said before, but it shows a
connection refused error.

Anybody, please give me the solution...



Re: Debug Solr in Netbeans..

2009-06-15 Thread Mark Miller
Do you see the following printed to std out when you start solr (using 
'run-example')?


Listening for transport dt_socket at address: 5005



--
- Mark

http://www.lucidimagination.com





Re: Debug Solr in Netbeans..

2009-06-15 Thread noor

No.
In NetBeans, the debugger console output shows:

Attaching to localhost:8983
handshake failed - connection prematurally closed

I don't know where the problem is.

Mark Miller wrote:
Do you see the following printed to std out when you start solr (using 
'run-example')?


Listening for transport dt_socket at address: 5005


Re: Debug Solr in Netbeans..

2009-06-15 Thread Mark Miller

If you don't see that, you may have build.properties in the wrong place.

When you run 'run-example' in debug mode, "Listening for transport
dt_socket at address: 5005" will be printed to stdout.


Once you have that working correctly, you want to attach to port 5005, 
not 8983. Solr runs on 8983, but the debugger is listening on 5005.



- Mark

noor wrote:

No.
In netbeans, debugger-console output shows,

Attaching to localhost:8983
handshake failed - connection prematurally closed

i dont know where the problem is ?


--
- Mark

http://www.lucidimagination.com





version of lucene

2009-06-15 Thread JCodina

I have the solr-nightly build of last week, and in the lib folder I can find
lucene-core-2.9-dev.jar.
I need to make some changes to the shingle filter in order to remove stopwords
from bigrams, but to do so I need to compile Lucene; the problem is that the
released Lucene is version 2.4, not 2.9.
If I check out version 2.4 with Subversion, then compiling Solr gives the
following error:
.../apache-solr-nightly/src/java/org/apache/solr/search/DocSetHitCollector.java:21:
cannot find symbol
[javac] symbol  : class Collector
[javac] location: package org.apache.lucene.search
[javac] import org.apache.lucene.search.Collector;

Any hints on the right version of Lucene/Solr to be able to use Solr 1.4?

Joan
-- 
View this message in context: 
http://www.nabble.com/version-of-lucene-tp24036137p24036137.html
Sent from the Solr - User mailing list archive at Nabble.com.



Re: Debug Solr in Netbeans..

2009-06-15 Thread noor

Now I put that build.properties file in the Solr location too, but I am
still getting:

Attaching to localhost:5005
Connection refused

Note:
The Solr lib folder contains my custom class's jar file, and in NetBeans
I am doing the attach-debugger process. In the browser, I am accessing
that class as
http://localhost:8983/solr/custom?q=searchText&debugQuery=true
and the browser page also gives a Null error.

Is this way correct?

For your information, please see my custom handler settings on the
following page:
http://markmail.org/message/uvm5xp3ld5mmd5or?q=custom+solr+handler+error:



Mark Miller wrote:

If you don't see that, you may have build.properties in the wrong place.

When you run 'run-example' in debug mode, "Listening for transport
dt_socket at address: 5005" will be printed to stdout.


Once you have that working correctly, you want to attach to port 5005, 
not 8983. Solr runs on 8983, but the debugger is listening on 5005.



- Mark




Re: version of lucene

2009-06-15 Thread Mark Miller


You want to build from svn trunk: http://svn.apache.org/viewvc/lucene/java/

You want revision r779312 because, as you can see in CHANGES.txt, the last
time Solr updated Lucene it was to Lucene 2.9-dev r779312.

--
- Mark

http://www.lucidimagination.com





LRUCache causing locked threads

2009-06-15 Thread CameronL

I've searched through the forums and seen a few similar problems to this, but
nothing that seemed to help much.  

We're running Solr 1.3 on Tomcat 6.0.16 and Java 6.  We've been having
performance problems with our search, causing long query times under normal
traffic.  We've taken a thread dump and have seen many threads locked or
waiting for LRUCache (see below).  Our cache values are as follows:

<filterCache class="solr.LRUCache" size="2" initialSize="1"
autowarmCount="1"/>
<queryResultCache class="solr.LRUCache" size="2" initialSize="1"
autowarmCount="5000"/>
<documentCache class="solr.LRUCache" size="25000" initialSize="1"
autowarmCount="0"/>


http-8983-99 daemon prio=10 tid=0x002beb3f5800 nid=0x2fb9 waiting for
monitor entry [0x47ea5000..0x47ea6c30]
   java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.solr.search.LRUCache.get(LRUCache.java:130)
- waiting to lock 0x002a9fb94be8 (a
org.apache.solr.search.LRUCache$1)
at
org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:605)
at
org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:1556)
at
org.apache.solr.request.SimpleFacets.getFacetTermEnumCounts(SimpleFacets.java:377)
at
org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:156)
at
org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:182)
at
org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:96)
at
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:70)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:169)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)
http-8983-83 daemon prio=10 tid=0x002bead1a000 nid=0x2f76 waiting for
monitor entry [0x46e95000..0x46e96c30]
   java.lang.Thread.State: BLOCKED (on object monitor)
at org.apache.solr.search.LRUCache.get(LRUCache.java:130)
- locked 0x002a9fb94be8 (a org.apache.solr.search.LRUCache$1)
at
org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:605)
at
org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:1556)
at
org.apache.solr.request.SimpleFacets.getFacetTermEnumCounts(SimpleFacets.java:377)
at
org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:156)
at
org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:182)
at
org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:96)
at
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:70)
at
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:169)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)

Has anyone else experienced this or does anyone have an idea of why this
might be happening?
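For background on why even gets block: Solr 1.3's LRUCache is essentially an access-ordered LinkedHashMap behind a single monitor, and in an access-ordered map a get() mutates internal state (it moves the entry to the tail), so reads cannot be lock-free. The class below is an illustrative stand-in under that assumption, not Solr's actual code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruSketch<K, V> {
    private final Map<K, V> map;

    public LruSketch(final int capacity) {
        // accessOrder=true: a get() moves the entry to the tail, which
        // mutates the internal linked list -- that is why even reads must
        // be synchronized, and why threads block on the cache monitor.
        map = new LinkedHashMap<K, V>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public synchronized V get(K key) { return map.get(key); }

    public synchronized void put(K key, V value) { map.put(key, value); }

    public static void main(String[] args) {
        LruSketch<String, Integer> cache = new LruSketch<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");      // touch "a" so "b" becomes the eldest entry
        cache.put("c", 3);   // evicts "b"
        System.out.println(cache.get("b")); // null
        System.out.println(cache.get("a")); // 1
    }
}
```

Under heavy faceting, every numDocs() call funnels through that one monitor, which matches the BLOCKED threads in the dump above.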
-- 
View this message in context: 
http://www.nabble.com/LRUCache-causing-locked-threads-tp24040421p24040421.html
Sent from the Solr - User mailing list archive at Nabble.com.



Re: LRUCache causing locked threads

2009-06-15 Thread Yonik Seeley
Solr 1.4 has a cache implementation that's lockless for gets, and
faster for gets.  There's a new faceting implementation as well.

-Yonik
http://www.lucidimagination.com




localsolr sort

2009-06-15 Thread Nirkhe, Chandra
Hi,

I am trying to sort local results in geo_distance order, but I am getting
an "Unknown sort order" error:

HTTP Status 400 - Unknown sort order: asc

Following is the HTTP request:

http://localhost:8080/solr/select?indent=on&q=*:*&qt=geo&lat=41.883784&long=-87.637668&radius=30&sort=geo_distance%20asc

Using Solr 1.5 latest trunk.

Any help is greatly appreciated.

Regards

Chandra



Re: LRUCache causing locked threads

2009-06-15 Thread CameronL

Unfortunately upgrading to 1.4 isn't an option for us at the moment. 

Since we're stuck using 1.3, is there anything in particular we need to do
to prevent these threads from locking (through configuration or something)
or is this sort of expected/unavoidable using 1.3?


Yonik Seeley-2 wrote:
 
 Solr 1.4 has a cache implementation that's lockless for gets, and
 faster for gets.  There's a new faceting implementation as well.
 
 -Yonik
 http://www.lucidimagination.com
 
 On Mon, Jun 15, 2009 at 2:39 PM, CameronLcameron.develo...@gmail.com
 wrote:

 I've searched through the forums and seen a few similar problems to this,
 but
 nothing that seemed to help much.

 We're running Solr 1.3 on Tomcat 6.0.16 and Java 6.  We've been having
 performance problems with our search, causing long query times under
 normal
 traffic.  We've taken a thread dump and have seen many threads locked or
 waiting for LRUCache (see below).  Our cache values are as follows:

 <filterCache class="solr.LRUCache" size="2" initialSize="1"
 autowarmCount="1"/>
 <queryResultCache class="solr.LRUCache" size="2" initialSize="1"
 autowarmCount="5000"/>
 <documentCache class="solr.LRUCache" size="25000" initialSize="1"
 autowarmCount="0"/>


 http-8983-99 daemon prio=10 tid=0x002beb3f5800 nid=0x2fb9 waiting
 for
 monitor entry [0x47ea5000..0x47ea6c30]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.solr.search.LRUCache.get(LRUCache.java:130)
        - waiting to lock 0x002a9fb94be8 (a
 org.apache.solr.search.LRUCache$1)
        at
 org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:605)
        at
 org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:1556)
        at
 org.apache.solr.request.SimpleFacets.getFacetTermEnumCounts(SimpleFacets.java:377)
        at
 org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:156)
        at
 org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:182)
        at
 org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:96)
        at
 org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:70)
        at
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:169)
        at
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
        at
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
        at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)
 http-8983-83 daemon prio=10 tid=0x002bead1a000 nid=0x2f76 waiting
 for
 monitor entry [0x46e95000..0x46e96c30]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.solr.search.LRUCache.get(LRUCache.java:130)
        - locked 0x002a9fb94be8 (a
 org.apache.solr.search.LRUCache$1)
        at
 org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:605)
        at
 org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:1556)
        at
 org.apache.solr.request.SimpleFacets.getFacetTermEnumCounts(SimpleFacets.java:377)
        at
 org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:156)
        at
 org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:182)
        at
 org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:96)
        at
 org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:70)
        at
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:169)
        at
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
        at
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
        at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)

 Has anyone else experienced this or does anyone have an idea of why this
 might be happening?
 --
 View this message in context:
 http://www.nabble.com/LRUCache-causing-locked-threads-tp24040421p24040421.html
 Sent from the Solr - User mailing list archive at Nabble.com.


 
 

-- 
View this message in context: 
http://www.nabble.com/LRUCache-causing-locked-threads-tp24040421p24040772.html
Sent from the Solr - User mailing list archive at Nabble.com.



Re: LRUCache causing locked threads

2009-06-15 Thread Yonik Seeley
On Mon, Jun 15, 2009 at 2:58 PM, CameronLcameron.develo...@gmail.com wrote:
 Unfortunately upgrading to 1.4 isn't an option for us at the moment.

 Since we're stuck using 1.3, is there anything in particular we need to do
 to prevent these threads from locking (through configuration or something)

Not really.

 or is this sort of expected/unavoidable using 1.3?

Throughput will be less when faceting with LRUCache, but not a lot
less under reasonable loads.  Just because you're seeing threads
blocked on LRUCache doesn't mean it would perform well if it were
lockless.
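For background on why even cache reads contend here: the thread dumps above show
get() holding or waiting on a monitor, and a common reason an LRU get cannot be
lock-free is that the cache is built on an access-ordered map, where every read
reorders an internal linked list. A minimal sketch of that pattern (an
illustration of the general design, not Solr's actual LRUCache code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache: accessOrder=true means get() mutates the map's
// internal ordering, so readers must serialize on the same lock as writers.
class SimpleLRUCache<K, V> {
    private final Map<K, V> map;

    SimpleLRUCache(final int maxSize) {
        map = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                // Evict the least-recently-used entry once over capacity.
                return size() > maxSize;
            }
        };
    }

    public synchronized V get(K key) { return map.get(key); }
    public synchronized void put(K key, V value) { map.put(key, value); }
}
```

In this design, removing the lock from get() would corrupt the ordering, which
is why a lockless cache (as mentioned earlier in the thread for 1.4) needs a
different implementation rather than just dropping the synchronization.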
How many CPU cores do you have on your box, and how many requests
typically execute at the same time?
What's your CPU utilization under load?
Does a single faceting request return in acceptable time when no other
requests are running?

-Yonik
http://www.lucidimagination.com




 Yonik Seeley-2 wrote:

 Solr 1.4 has a cache implementation that's lockless for gets, and
 faster for gets.  There's a new faceting implementation as well.

 -Yonik
 http://www.lucidimagination.com

 On Mon, Jun 15, 2009 at 2:39 PM, CameronLcameron.develo...@gmail.com
 wrote:

 I've searched through the forums and seen a few similar problems to this,
 but
 nothing that seemed to help much.

 We're running Solr 1.3 on Tomcat 6.0.16 and Java 6.  We've been having
 performance problems with our search, causing long query times under
 normal
 traffic.  We've taken a thread dump and have seen many threads locked or
 waiting for LRUCache (see below).  Our cache values are as follows:

 <filterCache class="solr.LRUCache" size="2" initialSize="1"
 autowarmCount="1"/>
 <queryResultCache class="solr.LRUCache" size="2" initialSize="1"
 autowarmCount="5000"/>
 <documentCache class="solr.LRUCache" size="25000" initialSize="1"
 autowarmCount="0"/>


 http-8983-99 daemon prio=10 tid=0x002beb3f5800 nid=0x2fb9 waiting
 for
 monitor entry [0x47ea5000..0x47ea6c30]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.solr.search.LRUCache.get(LRUCache.java:130)
        - waiting to lock 0x002a9fb94be8 (a
 org.apache.solr.search.LRUCache$1)
        at
 org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:605)
        at
 org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:1556)
        at
 org.apache.solr.request.SimpleFacets.getFacetTermEnumCounts(SimpleFacets.java:377)
        at
 org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:156)
        at
 org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:182)
        at
 org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:96)
        at
 org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:70)
        at
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:169)
        at
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
        at
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
        at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)
 http-8983-83 daemon prio=10 tid=0x002bead1a000 nid=0x2f76 waiting
 for
 monitor entry [0x46e95000..0x46e96c30]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.solr.search.LRUCache.get(LRUCache.java:130)
        - locked 0x002a9fb94be8 (a
 org.apache.solr.search.LRUCache$1)
        at
 org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:605)
        at
 org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:1556)
        at
 org.apache.solr.request.SimpleFacets.getFacetTermEnumCounts(SimpleFacets.java:377)
        at
 org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:156)
        at
 org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:182)
        at
 org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:96)
        at
 org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:70)
        at
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:169)
        at
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
        at
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
        at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)

 Has anyone else experienced this or does anyone have an idea of why this
 might be happening?
 --
 View this message in context:
 http://www.nabble.com/LRUCache-causing-locked-threads-tp24040421p24040421.html
 Sent from the Solr - User mailing list archive at Nabble.com.





 --
 View this message in context: 
 

Re: Problem with Query Parser?

2009-06-15 Thread Otis Gospodnetic

Hi,

It looks like the query parser is doing its job of removing certain characters 
from the query string.

Maybe you can use this method directly or at least mimic it in your application:

./src/solrj/org/apache/solr/client/solrj/util/ClientUtils.java:  public static 
String escapeQueryChars(String s) {
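For reference, a plain-Java sketch that mimics such an escaping method. The
character list here is taken from the Lucene query syntax, not copied from
ClientUtils, so treat it as an approximation:

```java
// Backslash-escape characters that have special meaning in the
// Lucene/Solr query syntax, so they are treated as literal text.
static String escapeQueryChars(String s) {
    StringBuilder sb = new StringBuilder(s.length());
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        // Query-syntax metacharacters that need escaping.
        if ("\\+-!():^[]\"{}~*?|&;".indexOf(c) >= 0) {
            sb.append('\\');
        }
        sb.append(c);
    }
    return sb.toString();
}
```

Note that escaping only protects characters from the query parser; an analyzer
can still drop a token such as $ entirely at analysis time.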


Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch



- Original Message 
 From: Avlesh Singh avl...@gmail.com
 To: solr-user@lucene.apache.org
 Sent: Monday, June 15, 2009 8:06:03 AM
 Subject: Problem with Query Parser?
 
 I noticed a strange behavior of the Query parser for the following query on
 my index.
 +(category_name:$ product_name:$ brand_name:$) +is_available:1
 Fields, category_name, product_name and brand_name are of type text and
 is_available is a string field, storing 0 or 1 for each doc in the index.
 
 When I perform the query: *+(category_name:$ product_name:$
 brand_name:$)*, i get no results (which is as expected);
 However, when I perform the query: *+(category_name:$ product_name:$
 brand_name:$) +is_available:1*, I get results for all is_available=1. This
 is unexpected and undesired, the first half of the query is simply ignored.
 
 I have noticed this behaviour for pretty much all the special characters: $,
 ^, * etc ... I am using the default text field analyzer.
 Am I missing something or is this a known bug in Solr?
 
 Cheers
 Avlesh



Possible Containers

2009-06-15 Thread Mukerjee, Neiloy (Neil)
Having tried Tomcat without much success (I'm using Tomcat 5.5 for other projects 
I'm working on, and I would be best off using Tomcat 6 for Solr v1.3.0), I am in 
search of another possible container. What have people used successfully that 
would be a good starting point for me to try out?


Re: Possible Containers

2009-06-15 Thread Andrew Oliver
I've had it running in Jetty and Tomcat.

Tomcat 6 + JDK6 have some nice performance semantics especially with
non-blocking IO, persistent connections, etc.

It is likely that it will run in Resin, though I haven't tried it.

It will also likely run in any of the Tomcat-based stuff (i.e. TC
Server from Spring Source, JBossAS from Red Hat)


-Andy

On Mon, Jun 15, 2009 at 2:25 PM, Mukerjee, Neiloy
(Neil)neil.muker...@alcatel-lucent.com wrote:
 Having tried Tomcat and not come to much success upon the realization that 
 I'm using Tomcat 5.5 for other projects I'm working on and that I would be 
 best off using Tomcat 6 for Solr v1.3.0, I am in search of another possible 
 container. What have people used successfully that would be a good starting 
 point for me to try out?



Re: Possible Containers

2009-06-15 Thread John Martyniak
I have been using jetty and have been really happy with the ease of  
use and performance.


-John

On Jun 15, 2009, at 3:41 PM, Andrew Oliver wrote:


I've had it running in Jetty and Tomcat.

Tomcat 6 + JDK6 have some nice performance semantics especially with
non-blocking IO, persistent connections, etc.

It is likely that it will run in Resin, though I haven't tried it.

It will also likely run in any of the Tomcat-based stuff (i.e. TC
Server from Spring Source, JBossAS from Red Hat)


-Andy

On Mon, Jun 15, 2009 at 2:25 PM, Mukerjee, Neiloy
(Neil)neil.muker...@alcatel-lucent.com wrote:
Having tried Tomcat and not come to much success upon the  
realization that I'm using Tomcat 5.5 for other projects I'm  
working on and that I would be best off using Tomcat 6 for Solr  
v1.3.0, I am in search of another possible container. What have  
people used successfully that would be a good starting point for me  
to try out?




John Martyniak
President/CEO
Before Dawn Solutions, Inc.
9457 S. University Blvd #266
Highlands Ranch, CO 80126
o: 877-499-1562
c: 303-522-1756
e: j...@beforedawnsoutions.com
w: http://www.beforedawnsolutions.com



UnInvertedField performance on faceted fields containing many unique terms

2009-06-15 Thread Kent Fitch
Hi,

This may be of interest to other users of SOLR's UnInvertedField who
have a very large number of unique terms in faceted fields.

Our setup is :

- about 34M lucene documents of bibliographic and full text content
- index currently 115GB, will at least double over next 6 months
- moving to support real-time-ish updates (maybe 5 min delay)

We facet on 8 fields, 6 of which are normal with small numbers of
distinct values.  But 2 faceted fields, creator and subject, are huge,
with 18M and 9M terms respectively.  (Whether we should be faceting on
such a huge number of values, and at the same time attempting to
provide real time-ish updates is another question!  Whether facets
derived from all of the hundreds of thousands of results regardless of
match quality which typically happens in a large full text index is
yet another question!).  The app is visible here:
http://sbdsproto.nla.gov.au/

On a server with 2xquad core AMD 2382 processors and 64GB memory, java
1.6.0_13-b03, 64 bit run with -Xmx15192M -Xms6000M -verbose:gc, with
the index on Intel X25M SSD, on start-up the elapsed time to create
the 8 facets is 306 seconds (best time).  Following an index reopen,
the time to recreate them is 318 seconds (best time).

[We have made an independent experimental change to create the facets
with 3 async threads, that is, in parallel, and also to decouple them
from the underlying index, so our facets lag the index changes by the
time to recreate the facets.  With our setup, the 3 threads reduced
facet creation elapsed time from about 450 secs to around 320 secs,
but this will depend a lot on IO capabilities of the device containing
the index, amount of file system caching, load, etc]

Anyway, we noticed that huge amounts of garbage were being collected
during facet generation of the creator and subject fields, and tracked
it down to this decision in UnInvertedField univert():

  if (termNum >= maxTermCounts.length) {
// resize, but conserve memory by not doubling
// resize at end??? we waste a maximum of 16K (average of 8K)
int[] newMaxTermCounts = new int[maxTermCounts.length+4096];
System.arraycopy(maxTermCounts, 0, newMaxTermCounts, 0, termNum);
maxTermCounts = newMaxTermCounts;
  }

So, we tried the obvious thing:

- allocate 10K terms initially, rather than 1K
- extend by doubling the current size, rather than adding a fixed 4K
- free unused space at the end (but only if unused space is
significant) by reallocating the array to the exact required size

And also:

- created a static HashMap lookup keyed on field name which remembers
the previous allocated size for maxTermCounts for that field, and
initially allocates that size + 1000 entries
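The growth strategy described above (amortized doubling while loading, then a
final trim) can be sketched as follows; ensureCapacity and trimToSize are
hypothetical helper names for illustration, not the actual patched code:

```java
// Grow an int[] by doubling (amortized O(1) per append) instead of
// extending by a fixed 4096 slots each time.
static int[] ensureCapacity(int[] counts, int termNum) {
    if (termNum >= counts.length) {
        int newSize = Math.max(counts.length * 2, termNum + 1);
        int[] grown = new int[newSize];
        System.arraycopy(counts, 0, grown, 0, counts.length);
        return grown;
    }
    return counts;
}

// Once the final term count is known, free the unused tail, but only
// when the waste is large enough to justify one more copy.
static int[] trimToSize(int[] counts, int used) {
    if (counts.length - used > 1024) {
        int[] exact = new int[used];
        System.arraycopy(counts, 0, exact, 0, used);
        return exact;
    }
    return counts;
}
```

Doubling turns the O(n^2/step) bytes copied by fixed-step growth into O(n),
which is where the reported GC savings come from.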

The second change is a minor optimisation, but the first change, by
eliminating thousands of array reallocations and copies, greatly
improved load times, down from 306 to 124 seconds on the initial load
and from 318 to 134 seconds on reloads after index updates.  About
60-70 secs is still spent in GC, but it is a significant improvement.

Unless you have very large numbers of facet values, this change won't
have any positive benefit.

Regards,

Kent Fitch


Re: Possible Containers

2009-06-15 Thread Eric Pugh
Can you highlight what problems you've had?  Solr doesn't have any  
really odd aspects about it that would prevent it from running in any  
kind of servlet container.


Eric

On Jun 15, 2009, at 6:18 PM, John Martyniak wrote:

I have been using jetty and have been really happy with the ease of  
use and performance.


-John

On Jun 15, 2009, at 3:41 PM, Andrew Oliver wrote:


I've had it running in Jetty and Tomcat.

Tomcat 6 + JDK6 have some nice performance semantics especially with
non-blocking IO, persistent connections, etc.

It is likely that it will run in Resin, though I haven't tried it.

It will also likely run in any of the Tomcat-based stuff (i.e. TC
Server from Spring Source, JBossAS from Red Hat)


-Andy

On Mon, Jun 15, 2009 at 2:25 PM, Mukerjee, Neiloy
(Neil)neil.muker...@alcatel-lucent.com wrote:
Having tried Tomcat and not come to much success upon the  
realization that I'm using Tomcat 5.5 for other projects I'm  
working on and that I would be best off using Tomcat 6 for Solr  
v1.3.0, I am in search of another possible container. What have  
people used successfully that would be a good starting point for  
me to try out?




John Martyniak
President/CEO
Before Dawn Solutions, Inc.
9457 S. University Blvd #266
Highlands Ranch, CO 80126
o: 877-499-1562
c: 303-522-1756
e: j...@beforedawnsoutions.com
w: http://www.beforedawnsolutions.com



-
Eric Pugh | Principal | OpenSource Connections, LLC | 434.466.1467 | 
http://www.opensourceconnections.com
Free/Busy: http://tinyurl.com/eric-cal






Re: UnInvertedField performance on faceted fields containing many unique terms

2009-06-15 Thread Yonik Seeley
Great writeup, Kent,

All the constants you see in UnInvertedField were a best guess - I
wasn't working with any real data.  It's surprising that a big array
allocation every 4096 terms is so significant - I had figured that the
work involved in processing that many terms would far outweigh
realloc+GC.

Could you open a JIRA issue with your recommended changes?  It's
simple enough we should have no problem getting it in for Solr 1.4.

Also, are you using a recent Solr build (within the last month)?
LUCENE-1596 should improve uninvert time for non-optimized indexes.

And don't forget to update http://wiki.apache.org/solr/PublicServers
when you go live!

-Yonik
http://www.lucidimagination.com



On Mon, Jun 15, 2009 at 7:43 PM, Kent Fitchkent.fi...@gmail.com wrote:
 Hi,

 This may be of interest to other users of SOLR's UnInvertedField who
 have a very large number of unique terms in faceted fields.

 Our setup is :

 - about 34M lucene documents of bibliographic and full text content
 - index currently 115GB, will at least double over next 6 months
 - moving to support real-time-ish updates (maybe 5 min delay)

 We facet on 8 fields, 6 of which are normal with small numbers of
 distinct values.  But 2 faceted fields, creator and subject, are huge,
 with 18M and 9M terms respectively.  (Whether we should be faceting on
 such a huge number of values, and at the same time attempting to
 provide real time-ish updates is another question!  Whether facets
 derived from all of the hundreds of thousands of results regardless of
 match quality which typically happens in a large full text index is
 yet another question!).  The app is visible here:
 http://sbdsproto.nla.gov.au/

 On a server with 2xquad core AMD 2382 processors and 64GB memory, java
 1.6.0_13-b03, 64 bit run with -Xmx15192M -Xms6000M -verbose:gc, with
 the index on Intel X25M SSD, on start-up the elapsed time to create
 the 8 facets is 306 seconds (best time).  Following an index reopen,
 the time to recreate them is 318 seconds (best time).

 [We have made an independent experimental change to create the facets
 with 3 async threads, that is, in parallel, and also to decouple them
 from the underlying index, so our facets lag the index changes by the
 time to recreate the facets.  With our setup, the 3 threads reduced
 facet creation elapsed time from about 450 secs to around 320 secs,
 but this will depend a lot on IO capabilities of the device containing
 the index, amount of file system caching, load, etc]

 Anyway, we noticed that huge amounts of garbage were being collected
 during facet generation of the creator and subject fields, and tracked
 it down to this decision in UnInvertedField univert():

      if (termNum >= maxTermCounts.length) {
        // resize, but conserve memory by not doubling
        // resize at end??? we waste a maximum of 16K (average of 8K)
        int[] newMaxTermCounts = new int[maxTermCounts.length+4096];
        System.arraycopy(maxTermCounts, 0, newMaxTermCounts, 0, termNum);
        maxTermCounts = newMaxTermCounts;
      }

 So, we tried the obvious thing:

 - allocate 10K terms initially, rather than 1K
 - extend by doubling the current size, rather than adding a fixed 4K
 - free unused space at the end (but only if unused space is
 significant) by reallocating the array to the exact required size

 And also:

 - created a static HashMap lookup keyed on field name which remembers
 the previous allocated size for maxTermCounts for that field, and
 initially allocates that size + 1000 entries

 The second change is a minor optimisation, but the first change, by
 eliminating thousands of array reallocations and copies, greatly
 improved load times, down from 306 to 124 seconds on the initial load
 and from 318 to 134 seconds on reloads after index updates.  About
 60-70 secs is still spent in GC, but it is a significant improvement.

 Unless you have very large numbers of facet values, this change won't
 have any positive benefit.

 Regards,

 Kent Fitch



EmbeddedSolrServer separate process

2009-06-15 Thread pof

Hi, one question: will EmbeddedSolrServer work in a separate Java process
from the Solr servlet (start.jar), as long as it runs on the same machine?

Thanks. 
-- 
View this message in context: 
http://www.nabble.com/EmbeddedSolrServer-seperate-process-tp24046680p24046680.html
Sent from the Solr - User mailing list archive at Nabble.com.



Re: UnInvertedField performance on faceted fields containing many unique terms

2009-06-15 Thread Kent Fitch
Hi Yonik,

On Tue, Jun 16, 2009 at 10:52 AM, Yonik
Seeleyyo...@lucidimagination.com wrote:

 All the constants you see in UnInvertedField were a best guess - I
 wasn't working with any real data.  It's surprising that a big array
 allocation every 4096 terms is so significant - I had figured that the
 work involved in processing that many terms would far outweigh
 realloc+GC.

Well, they were pretty good guesses!  The code is extremely fast for
reasonable sized term lists.
I think with our 18M terms, the increasingly long array of ints was
being reallocated, copied and garbage collected 18M / 4K ≈ 4,500 times,
creating 4,500 × (18M × 4 bytes) / 2 ≈ 162 GB of garbage to collect.

 Could you open a JIRA issue with your recommended changes?  It's
 simple enough we should have no problem getting it in for Solr 1.4.

Thanks - just added SOLR-1220.  I haven't mentioned the change to the
initial allocation of 10K (rather than 1024) because I don't think it
is significant.  I also haven't mentioned the remembering of sizes to
initially allocate, because the improvement is marginal compared to
this big change, and for all I know, a static hashmap with field names
could cause unwanted side effects from field name clashes if running
SOLR with multiple indices.

 Also, are you using a recent Solr build (within the last month)?
 LUCENE-1596 should improve uninvert time for non-optimized indexes.

We're not - but we'll upgrade to the latest version of 1.4 very soon.

 And don't forget to update http://wiki.apache.org/solr/PublicServers
 when you go live!

We will - thanks for your great work in improving SOLR performance
with 1.4 which makes such outrageous uses of facets even thinkable.

Regards,

Kent Fitch


Re: Problem with Query Parser?

2009-06-15 Thread Avlesh Singh
How does one explain this?
+myField:$ gives zero results
+myField:$ +city:Mumbai gives results for city:Mumbai

Cheers
Avlesh

On Tue, Jun 16, 2009 at 12:50 AM, Otis Gospodnetic 
otis_gospodne...@yahoo.com wrote:


 Hi,

 It looks like the query parser is doing its job of removing certain
 characters from the query string.

 Maybe you can use this method directly or at least mimic it in your
 application:

 ./src/solrj/org/apache/solr/client/solrj/util/ClientUtils.java:  public
 static String escapeQueryChars(String s) {


 Otis
 --
 Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch



 - Original Message 
  From: Avlesh Singh avl...@gmail.com
  To: solr-user@lucene.apache.org
  Sent: Monday, June 15, 2009 8:06:03 AM
  Subject: Problem with Query Parser?
 
  I noticed a strange behavior of the Query parser for the following query
 on
  my index.
  +(category_name:$ product_name:$ brand_name:$) +is_available:1
  Fields, category_name, product_name and brand_name are of type text and
  is_available is a string field, storing 0 or 1 for each doc in the
 index.
 
  When I perform the query: *+(category_name:$ product_name:$
  brand_name:$)*, i get no results (which is as expected);
  However, when I perform the query: *+(category_name:$ product_name:$
  brand_name:$) +is_available:1*, I get results for all is_available=1.
 This
  is unexpected and undesired, the first half of the query is simply
 ignored.
 
  I have noticed this behaviour for pretty much all the special characters:
 $,
  ^, * etc ... I am using the default text field analyzer.
  Am I missing something or is this a known bug in Solr?
 
  Cheers
  Avlesh




Re: Problem with Query Parser?

2009-06-15 Thread Avlesh Singh

 Probably the analyzer removed the $, leaving an empty term and causing
 the clause to be removed altogether.


I predicted this behavior while writing the mail yesterday, Yonik.
Does it sound logical and intuitive?

Cheers
Avlesh

On Tue, Jun 16, 2009 at 9:42 AM, Yonik Seeley yo...@lucidimagination.comwrote:

 On Mon, Jun 15, 2009 at 11:53 PM, Avlesh Singhavl...@gmail.com wrote:
  How does one explain this?
  +myField:$ give zero result
  +myField:$ +city:Mumbai gives result for city:Mumbai

 Probably the analyzer removed the $, leaving an empty term and
 causing the clause to be removed altogether.

 -Yonik
 http://www.lucidimagination.com



Re: Problem with Query Parser?

2009-06-15 Thread Avlesh Singh
And here's the debug info:
str name=rawquerystring+myField:$ +city:Mumbai/str
str name=querystring+myField:$ +city:Mumbai/str
str name=parsedquery+city:Mumbai/str
str name=parsedquery_toString+city:Mumbai/str
str name=QParserOldLuceneQParser/str

I found this unintuitive. "No results", rather than "all results", was the
expected behavior.

Cheers
Avlesh

On Tue, Jun 16, 2009 at 9:58 AM, Avlesh Singh avl...@gmail.com wrote:

 Probably the analyzer removed the $, leaving an empty term and causing
 the clause to be removed altogether.


 I predicted this behavior while writing the mail yesterday, Yonik.
 Does it sound logical and intuitive?

 Cheers
 Avlesh


 On Tue, Jun 16, 2009 at 9:42 AM, Yonik Seeley 
 yo...@lucidimagination.comwrote:

 On Mon, Jun 15, 2009 at 11:53 PM, Avlesh Singhavl...@gmail.com wrote:
  How does one explain this?
  +myField:$ give zero result
  +myField:$ +city:Mumbai gives result for city:Mumbai

 Probably the analyzer removed the $, leaving an empty term and
 causing the clause to be removed altogether.

 -Yonik
 http://www.lucidimagination.com





PNW Hadoop / Apache Cloud Stack Users' Meeting, Wed Jun 24th, Seattle

2009-06-15 Thread Bradford Stephens
Greetings,

On the heels of our smashing success last month, we're going to be
convening the Pacific Northwest (Oregon and Washington)
Hadoop/HBase/Lucene/etc. meetup on the last Wednesday of June, the
24th.  The meeting should start at 6:45, organized chats will end
around  8:00, and then there shall be discussion and socializing :)

The meeting will probably be at the University of Washington in
Seattle again -- a (better) map and directions shall be provided when
the location is confirmed.

If you've ever wanted to learn more about distributed computing, or
just see how other people are innovating with Hadoop, you can't miss
this opportunity. Our focus is on learning and education, so every
presentation must end with a few questions for the group to research
and discuss. (But if you're an introvert, we won't mind).

The format is two or three 15-minute deep dive talks, followed by
several 5 minute lightning chats. We had a few interesting topics
last month:

-Building a Social Media Analysis company on the Apache Cloud Stack
-Cancer detection in images using Hadoop
-Real-time OLAP on HBase -- is it possible?
-Video and Network Flow Analysis in Hadoop vs. Distributed RDBMS
-Custom Ranking in Lucene

We already have one deep dive scheduled this month, on truly
scalable Lucene with Katta. If you've been looking for a way to handle
those large Lucene indices, this is a must-attend!

Looking forward to seeing everyone there again.

Cheers,
Bradford

http://www.roadtofailure.com -- The Fringes of Distributed Computing,
Computer Science, and Social Media.


Re: Problem with Query Parser?

2009-06-15 Thread Avlesh Singh

 Maybe you can use this method directly or at least mimic it in your
 application:
 ./src/solrj/org/apache/solr/client/solrj/util/ClientUtils.java:  public
 static String escapeQueryChars(String s)


That does not help either, Otis.
(+myField:$ +city:Mumbai) at best gets converted into (+myField:\\$
+city:Mumbai).
The output remains the same: all results rather than the expected no results.

Cheers
Avlesh

On Tue, Jun 16, 2009 at 12:50 AM, Otis Gospodnetic 
otis_gospodne...@yahoo.com wrote:


 Hi,

 It looks like the query parser is doing its job of removing certain
 characters from the query string.

 Maybe you can use this method directly or at least mimic it in your
 application:

 ./src/solrj/org/apache/solr/client/solrj/util/ClientUtils.java:  public
 static String escapeQueryChars(String s) {


 Otis
 --
 Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch



 - Original Message 
  From: Avlesh Singh avl...@gmail.com
  To: solr-user@lucene.apache.org
  Sent: Monday, June 15, 2009 8:06:03 AM
  Subject: Problem with Query Parser?
 
  I noticed a strange behavior of the Query parser for the following query
 on
  my index.
  +(category_name:$ product_name:$ brand_name:$) +is_available:1
  Fields, category_name, product_name and brand_name are of type text and
  is_available is a string field, storing 0 or 1 for each doc in the
 index.
 
  When I perform the query *+(category_name:$ product_name:$
  brand_name:$)*, I get no results (which is as expected).
  However, when I perform the query *+(category_name:$ product_name:$
  brand_name:$) +is_available:1*, I get results for all is_available=1.
  This is unexpected and undesired; the first half of the query is simply
  ignored.
 
  I have noticed this behaviour for pretty much all the special characters:
 $,
  ^, * etc ... I am using the default text field analyzer.
  Am I missing something or is this a known bug in Solr?
 
  Cheers
  Avlesh




Re: EmbeddedSolrServer separate process

2009-06-15 Thread Shalin Shekhar Mangar
On Tue, Jun 16, 2009 at 8:15 AM, pof melbournebeerba...@gmail.com wrote:


 Hi, one question: will EmbeddedSolrServer work in a Java process separate
 from the Solr servlet (start.jar), as long as it runs on the same machine?


EmbeddedSolrServer runs in the same process as the application which uses
it. CommonsHttpSolrServer is used by an application to communicate over HTTP
with Solr running on a separate host (or in a separate JVM). The start.jar
is the Jetty servlet container which can be used to host Solr.

Does that answer your question?

-- 
Regards,
Shalin Shekhar Mangar.


Re: LRUCache causing locked threads

2009-06-15 Thread Noble Paul നോബിള്‍ नोब्ळ्
The FastLRUCache can be used in 1.3 if it is compiled and added to
solr.home/lib.
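
Switching over is then just a matter of changing the class attribute in
solrconfig.xml. A sketch, assuming the FastLRUCache class has been compiled
and dropped into solr.home/lib as described above (the size values here are
purely illustrative):

```xml
<filterCache class="solr.FastLRUCache" size="16384" initialSize="4096" autowarmCount="4096"/>
```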

On Tue, Jun 16, 2009 at 12:40 AM, Yonik
Seeleyyo...@lucidimagination.com wrote:
 On Mon, Jun 15, 2009 at 2:58 PM, CameronLcameron.develo...@gmail.com wrote:
 Unfortunately upgrading to 1.4 isn't an option for us at the moment.

 Since we're stuck using 1.3, is there anything in particular we need to do
 to prevent these threads from locking (through configuration or something)

 Not really.

 or is this sort of expected/unavoidable using 1.3?

 Throughput will be less when faceting with LRUCache, but not a lot
 less under reasonable loads.  Just because you're seeing threads
 blocked on LRUCache doesn't mean it would perform well if it were
 lockless.
 How many CPU cores do you have on your box, and how many requests
 typically execute at the same time?
 What's your CPU utilization under load?
 Does a single faceting request return in acceptable time when no other
 requests are running?

 -Yonik
 http://www.lucidimagination.com




 Yonik Seeley-2 wrote:

 Solr 1.4 has a cache implementation that's lockless and faster for
 gets.  There's a new faceting implementation as well.

 -Yonik
 http://www.lucidimagination.com

 On Mon, Jun 15, 2009 at 2:39 PM, CameronLcameron.develo...@gmail.com
 wrote:

 I've searched through the forums and seen a few similar problems to this,
 but
 nothing that seemed to help much.

 We're running Solr 1.3 on Tomcat 6.0.16 and Java 6.  We've been having
 performance problems with our search, causing long query times under
 normal
 traffic.  We've taken a thread dump and have seen many threads locked or
 waiting for LRUCache (see below).  Our cache values are as follows:

 <filterCache class="solr.LRUCache" size="2" initialSize="1"
 autowarmCount="1"/>
 <queryResultCache class="solr.LRUCache" size="2" initialSize="1"
 autowarmCount="5000"/>
 <documentCache class="solr.LRUCache" size="25000" initialSize="1"
 autowarmCount="0"/>


 http-8983-99 daemon prio=10 tid=0x002beb3f5800 nid=0x2fb9 waiting
 for
 monitor entry [0x47ea5000..0x47ea6c30]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.solr.search.LRUCache.get(LRUCache.java:130)
        - waiting to lock 0x002a9fb94be8 (a
 org.apache.solr.search.LRUCache$1)
        at
 org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:605)
        at
 org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:1556)
        at
 org.apache.solr.request.SimpleFacets.getFacetTermEnumCounts(SimpleFacets.java:377)
        at
 org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:156)
        at
 org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:182)
        at
 org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:96)
        at
 org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:70)
        at
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:169)
        at
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
        at
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
        at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)
 http-8983-83 daemon prio=10 tid=0x002bead1a000 nid=0x2f76 waiting
 for
 monitor entry [0x46e95000..0x46e96c30]
   java.lang.Thread.State: BLOCKED (on object monitor)
        at org.apache.solr.search.LRUCache.get(LRUCache.java:130)
        - locked 0x002a9fb94be8 (a
 org.apache.solr.search.LRUCache$1)
        at
 org.apache.solr.search.SolrIndexSearcher.getPositiveDocSet(SolrIndexSearcher.java:605)
        at
 org.apache.solr.search.SolrIndexSearcher.numDocs(SolrIndexSearcher.java:1556)
        at
 org.apache.solr.request.SimpleFacets.getFacetTermEnumCounts(SimpleFacets.java:377)
        at
 org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:156)
        at
 org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:182)
        at
 org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:96)
        at
 org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:70)
        at
 org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:169)
        at
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:1204)
        at
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:303)
        at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:232)

 Has anyone else experienced this or does anyone have an idea of why this
 might be happening?

Re: EmbeddedSolrServer separate process

2009-06-15 Thread pof

It certainly does, thank you.


Shalin Shekhar Mangar wrote:
 
 On Tue, Jun 16, 2009 at 8:15 AM, pof melbournebeerba...@gmail.com wrote:
 

 Hi, one question: will EmbeddedSolrServer work in a Java process separate
 from the Solr servlet (start.jar), as long as it runs on the same
 machine?


 EmbeddedSolrServer runs in the same process as the application which
 uses
 it. CommonsHttpSolrServer is used by an application to communicate over
 HTTP with Solr running on a separate host (or in a separate JVM). The
 start.jar is the Jetty servlet container which can be used to host Solr.
 
 Does that answer your question?
 
 -- 
 Regards,
 Shalin Shekhar Mangar.
 
 

-- 
View this message in context: 
http://www.nabble.com/EmbeddedSolrServer-seperate-process-tp24046680p24047849.html
Sent from the Solr - User mailing list archive at Nabble.com.



Re: Debug Solr in Netbeans..

2009-06-15 Thread noor

Addition to my previous reply:

I am running Solr via start.jar, which has my custom class's jar file in
its lib folder.

Also, in NetBeans, the custom class source has a breakpoint set for
debugging, and I created build.properties in the project folder.
So I configured the Attach Debugger settings, but it gives a
"connection refused" error.

I don't know whether I am doing this correctly.
Please, can anyone help me solve this...

thanks and regards
Noorulla

noor wrote:

Now I put that build.properties file in the Solr location too,
but I am still getting:

Attaching to localhost:5005
Connection refused

Note:
The Solr lib folder contains my custom class's jar file,
and in NetBeans I am doing the attach-debugger process.
In the browser, I am accessing that class as
http://localhost:8983/solr/custom?q=searchText&debugQuery=true
and the browser page also gives a null error.

Is this the correct way?

For your information,
please see my custom handler settings on the following page:
http://markmail.org/message/uvm5xp3ld5mmd5or?q=custom+solr+handler+error:



Mark Miller wrote:

If you don't see that, you may have build.properties in the wrong place.

When you run 'run-example' in debug mode, "Listening for transport
dt_socket at address: 5005" will be printed to stdout.


Once you have that working correctly, you want to attach to port 
5005, not 8983. Solr runs on 8983, but the debugger is listening on 
5005.



- Mark

noor wrote:

No.
In NetBeans, the debugger console output shows:

Attaching to localhost:8983
handshake failed - connection prematurally closed

I don't know where the problem is.

Mark Miller wrote:
Do you see the following printed to stdout when you start Solr
(using 'run-example')?


Listening for transport dt_socket at address: 5005

noor wrote:

Addition to the previous reply:
I built my custom project, put it into the Solr webapp's lib folder,
and started running Solr.
In NetBeans, I made the changes as I said before,
but it still shows a "connection refused" error.

Can anybody please give me the solution...

noor wrote:

Solr starts running on port 8983.
I created build.properties in the project folder, where the
build.xml is.

In that empty build.properties file I added only:
example.debug=true

In NetBeans, under Debug > Attach Debugger:
- Debugger is Java Debugger (JPDA)
- Connector is SocketAttach (attaches by socket to other VMs)
- Host is localhost
- Port is 5005
- Timeout is empty

I set this while Solr was running, but the output screen shows
"Connection refused".

Are my changes correct, or do I need to change anything else?

thanks and regards,
Noor


Mark Miller wrote:

noor wrote:

Hi, I am new to Apache Solr.
I got the Solr source code, and I created my own (custom) classes.
I also added request handler references to the newly created
classes in solrconfig.xml.


Now I need to debug my code when a Solr search calls my class,
but I don't know how to do that.

Can anybody please help me achieve this?

thanks and regards,
Noor



Make a file next to build.xml called build.properties.

Add to the empty file: example.debug=true

Run the ant target 'run-example' in build.xml.

Solr will run with the ability to connect with a remote debugger 
on port 5005.


In NetBeans, from the main menu, select Debug > Attach
Debugger... (in NetBeans 6.1 and older, select Run > Attach
Debugger...).


Follow the dialogue box prompts to connect to the running Solr 
example.
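
Under the hood, example.debug=true makes the ant target start the example
JVM with the standard JPDA agent options. From memory (so treat the exact
flags as an approximation), the resulting command is roughly:

```
java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5005 -jar start.jar
```

With server=y the JVM itself listens on port 5005 for a debugger to attach,
which is why NetBeans must connect to 5005 and not to Solr's HTTP port 8983.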