Re: Apache Solr Quiz

2012-10-31 Thread Yulia Crowder
I would love to cover other parts of SOLR management.
If you have any Solr quiz question in mind, please send it to me and I will
add it to the quiz (and to the site).
Together we can build a good set of SOLR quiz questions for the community.

Thank you all.
Yulia
http://www.quizmeup.com/quiz/apache-solr-configuration



On Fri, Oct 19, 2012 at 2:37 PM, Dmitry Kan dmitry@gmail.com wrote:

 Thanks for the quiz. It is refreshing. Do you plan on covering other parts
 of SOLR management, like various handlers, scoring, plugins, sharding etc?

 Dmitry

 On Wed, Oct 17, 2012 at 7:12 PM, Yulia Crowder yulia.crow...@gmail.com
 wrote:

  I love Solr!
  I have searched for a quiz about Solr and didn't find any on the net.
  I am pleased to say that I have put together a quiz about Solr:
 
  http://www.quizmeup.com/quiz/apache-solr-configuration
 
  It is built on a free wiki-based quiz site. You can, and are welcome to,
  improve my questions and add new ones.
  I hope you find it a useful and enjoyable way to learn about Solr.
  Comments?
 



Re: SolrCloud AutoSharding? In enterprise environment?

2012-10-31 Thread joseph_12345
Thanks Otis for the response.

1. Is there any performance impact if the client invokes the Solr index
using the VIP URL instead of individual shard URLs? If SOLR's default sharding
is based on uniqueId.hashcode % numServers, how does SOLR
identify which shard to get the data from when the client queries by some name/value
of a document (the unique id is not passed in the URL)? Is ZooKeeper doing this
logic of finding out which shard to go to for the data? Sorry to dig into
the details, but I would like to know.

2. I have followed the steps at
http://wiki.apache.org/solr/SolrCloud#Getting_Started, set up multiple
shards on my local box, and indexed some documents. But I am still not
clear on the index and document file system structure; I mean, how would I
verify that the data is really distributed? Can you please point me to some
good documentation of the folder structure where the index files are
created in each shard? When I indexed a few documents by pointing to one
shard, I saw a few files created under
apache-solr-4.0.0\example\solr\mycollection\data\index. Is this the complete
index file location?

Thanks
Jaino



--
View this message in context: 
http://lucene.472066.n3.nabble.com/SolrCloud-AutoSharding-In-enterprise-environment-tp4017036p4017201.html
Sent from the Solr - User mailing list archive at Nabble.com.


RE: Unable to build trunk

2012-10-31 Thread Markus Jelsma
Hi,

Where is that lock file located? I triggered it again (in another contrib) and
will trigger it again in the future, and I don't want to remove my ivy cache each
time :)

Thanks
 
 
-Original message-
 From:Robert Muir rcm...@gmail.com
 Sent: Tue 30-Oct-2012 15:14
 To: solr-user@lucene.apache.org
 Subject: Re: Unable to build trunk
 
 It's not wonky. You just have to ensure you have nothing else (like
 some IDE, or a build somewhere else) using Ivy; then it's safe to remove
 the .lck file there.
 
 I turned on this locking so that it hangs instead of causing cache
 corruption, but Ivy only has a simple lock strategy, so if you ^C at the
 wrong time, it might leave a .lck file behind.
 
 On Tue, Oct 30, 2012 at 9:27 AM, Erick Erickson erickerick...@gmail.com 
 wrote:
  Not sure if it's relevant, but sometimes the ivy caches are wonky. Try
  deleting (on OS X) ~/.ivy2 recursively and building again? Of course
  your next build will download a bunch of jars...
 
  FWIW,
  Erick
 
  On Tue, Oct 30, 2012 at 5:38 AM, Markus Jelsma
  markus.jel...@openindex.io wrote:
  Hi,
 
  Since yesterday we're unable to build trunk and also a clean check out 
  from trunk. We can compile the sources but not the example or dist.
 
  It hangs on resolve and after a while prints the following:
 
  resolve:
 
  [ivy:retrieve]
  [ivy:retrieve] :: problems summary ::
  [ivy:retrieve]  WARNINGS
  [ivy:retrieve]  module not found: 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4
  [ivy:retrieve]   local: tried
  [ivy:retrieve]
  /home/markus/.ivy2/local/com.carrotsearch.randomizedtesting/randomizedtesting-runner/2.0.4/ivys/ivy.xml
  [ivy:retrieve]-- artifact 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4!randomizedtesting-runner.jar:
  [ivy:retrieve]
  /home/markus/.ivy2/local/com.carrotsearch.randomizedtesting/randomizedtesting-runner/2.0.4/jars/randomizedtesting-runner.jar
  [ivy:retrieve]   shared: tried
  [ivy:retrieve]
  /home/markus/.ivy2/shared/com.carrotsearch.randomizedtesting/randomizedtesting-runner/2.0.4/ivys/ivy.xml
  [ivy:retrieve]-- artifact 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4!randomizedtesting-runner.jar:
  [ivy:retrieve]
  /home/markus/.ivy2/shared/com.carrotsearch.randomizedtesting/randomizedtesting-runner/2.0.4/jars/randomizedtesting-runner.jar
  [ivy:retrieve]   public: tried
  [ivy:retrieve]
  http://repo1.maven.org/maven2/com/carrotsearch/randomizedtesting/randomizedtesting-runner/2.0.4/randomizedtesting-runner-2.0.4.pom
  [ivy:retrieve]   sonatype-releases: tried
  [ivy:retrieve]
  http://oss.sonatype.org/content/repositories/releases/com/carrotsearch/randomizedtesting/randomizedtesting-runner/2.0.4/randomizedtesting-runner-2.0.4.pom
  [ivy:retrieve]   working-chinese-mirror: tried
  [ivy:retrieve]
  http://mirror.netcologne.de/maven2/com/carrotsearch/randomizedtesting/randomizedtesting-runner/2.0.4/randomizedtesting-runner-2.0.4.pom
  [ivy:retrieve]-- artifact 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4!randomizedtesting-runner.jar:
  [ivy:retrieve]
  http://mirror.netcologne.de/maven2/com/carrotsearch/randomizedtesting/randomizedtesting-runner/2.0.4/randomizedtesting-runner-2.0.4.jar
  [ivy:retrieve]  ::
  [ivy:retrieve]  ::  UNRESOLVED DEPENDENCIES ::
  [ivy:retrieve]  ::
  [ivy:retrieve]  :: 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4: not 
  found
  [ivy:retrieve]  ::
  [ivy:retrieve]  ERRORS
  [ivy:retrieve]  impossible to acquire lock for 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4
  [ivy:retrieve]  impossible to acquire lock for 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4
  [ivy:retrieve]  impossible to acquire lock for 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4
  [ivy:retrieve]  impossible to acquire lock for 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4
  [ivy:retrieve]  impossible to acquire lock for 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4
  [ivy:retrieve]  impossible to acquire lock for 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4
  [ivy:retrieve]  impossible to acquire lock for 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4
  [ivy:retrieve]  impossible to acquire lock for 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4
  [ivy:retrieve]  impossible to acquire lock for 
  com.carrotsearch.randomizedtesting#randomizedtesting-runner;2.0.4
  [ivy:retrieve]
  [ivy:retrieve] :: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
 
  BUILD FAILED
  /home/markus/src/solr/trunk/solr/build.xml:336: The following error 
  occurred while 

Re: Unable to build trunk

2012-10-31 Thread Robert Muir
You will have to use 'find' on your .ivy2!
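For the archives, the cleanup Robert describes (removing stale `.lck` files under the Ivy cache) can be scripted. A sketch in Python rather than `find`; the `~/.ivy2` location and `*.lck` pattern come from the thread, everything else is illustrative:

```python
from pathlib import Path


def remove_ivy_locks(cache_dir):
    """Delete stale Ivy .lck files under cache_dir and return what was removed.

    Only safe to run when nothing else (an IDE, another build) is
    resolving through the same cache.
    """
    cache_dir = Path(cache_dir)
    if not cache_dir.exists():
        return []
    removed = []
    for lck in sorted(cache_dir.rglob("*.lck")):
        lck.unlink()
        removed.append(lck)
    return removed


if __name__ == "__main__":
    # Default Ivy cache location mentioned in the thread.
    for path in remove_ivy_locks(Path.home() / ".ivy2"):
        print("removed", path)
```

This is equivalent to `find ~/.ivy2 -name '*.lck' -delete`, with the same caveat: make sure no other process holds the lock legitimately before deleting it.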

On Wed, Oct 31, 2012 at 6:32 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
 Hi,

 Where is that lock file located? I triggered it again (in another contrib)
 and will trigger it again in the future, and I don't want to remove my ivy cache
 each time :)

 Thanks



Re: Are there any limitations on multi-value field joins?

2012-10-31 Thread Erick Erickson
As I remember, the underlying algorithm enumerates all the unique values
in the field when doing the join (or something like that). So when the
field you're joining on has many unique values, it performs poorly. Worse,
it'll be fine on small data sets, the kind we usually develop with. But then
when you put your real data set in (usually orders of magnitude more
data than we test with) you set yourself up for surprises...
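A deliberately naive sketch of that enumeration step (illustrative only, not Solr's actual join code): the first pass collects every unique value of the "from" field, so the cost grows with the field's unique-value count, not with the number of matching documents.

```python
def join_ids(from_docs, from_field, to_docs, to_field):
    """Naive field-value join: return the ids of to_docs whose to_field
    value matches any from_field value of any from_doc.
    """
    def values_of(doc, field):
        v = doc.get(field, [])
        return v if isinstance(v, list) else [v]

    # Step 1: enumerate all unique values of the from-field -- this is
    # the part that scales with the number of unique values.
    unique_values = set()
    for doc in from_docs:
        unique_values.update(values_of(doc, from_field))

    # Step 2: keep to-docs whose to-field intersects that set.
    return [d["id"] for d in to_docs
            if any(v in unique_values for v in values_of(d, to_field))]
```

On a development-sized corpus the set stays tiny; with millions of unique ids, step 1 alone dominates every query.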

Trying to use Solr in an RDBMS-like manner is simply not playing to Solr's
strengths. An army of very bright people put a lot of work into making
DBs performant in their problem space, and that work isn't applicable
to a search engine...

FWIW
Erick

On Tue, Oct 30, 2012 at 6:39 PM, Steven Livingstone Pérez
webl...@hotmail.com wrote:
 Thanks. Can you explain a bit more about your second point below?
 Specifically, what makes it a bad fit (design-wise, performance-wise)?

 Thanks again.
 Steven

 Sent from my Windows Phone
 
 From: Erick Erickson
 Sent: 30/10/2012 22:22
 To: solr-user@lucene.apache.org
 Subject: Re: Are there any limitations on multi-value field joins?

 Whenever anyone starts talking about using Solr to perform what
 would be multi-way DB joins I break out in hives.

 First of all, the limited join capability in Solr only returns the
 values from ONE of the documents. There's no way to return values
 from both the from and to documents.

 Second, Solr's join capability is a poor fit if the fields being joined have
 many unique values, so that's something to be careful of

 I'd advise that you see if you can flatten (de-normalize) your data such
 that you can make simple queries rather than try to use Solr like you
 would a DB...

 FWIW,
 Erick

 On Tue, Oct 30, 2012 at 7:20 AM, Steven Livingstone Pérez
 webl...@hotmail.com wrote:
 Hi - I've done quite a bit of Googling and reading but can't find a 
 definitive answer to this.
 I would like to have a list of key data rows, each with a unique id and some
 data:
 
   datarow1 a b c
   datarow2 x y z
   datarow3 m n o
   ...
 
 I'd then like to have other rows that point to one or more of the data rows,
 with a multi-valued field that can contain one or many of the unique
 ids above:
 
   User1 datarow1, datarow2, datarow3 etc
   User2 datarow4, datarow21, datarow43 etc
   ...
 
 Then I will join from the User1 row to the data row.
 My question is simply: are there *any* limitations on doing this kind of join?
 I believe there are some geo-spatial and sorting issues (I don't need to
 sort on the id), but before I jump fully into this approach I'd like to
 understand anything I may run into - or whether it is better to have them as
 individual rows and join them that way.
 many thanks,
 /steven


Re: need help on solr search

2012-10-31 Thread Erick Erickson
You need to provide significantly more information than you have.
What are your perf requirements? How big is your data set? What
kinds of searches are you talking about here? How are you
measuring response?

This really feels like an XY problem.

Best
Erick

On Wed, Oct 31, 2012 at 1:33 AM, jchen2000 jchen...@yahoo.com wrote:
 Hi Solr experts,

 Our documents as well as queries consist of 10 properties in a particular
 order. Because of stringent requirements on search latency, we grouped them
 into only 2 fields with 5 properties each (we may use just 1 field; a field
 count over 3 seems too slow), and each property value is split into
 fixed-length terms (like n-grams, hopefully to save search time) and prefixed
 with the property name. What we want is to find out how similar the query is to
 the documents by comparing terms. We can't use the default OR operator since
 it's slow; we want to take advantage of the prefix and the defined order.

 My questions are:
 1) Can we do this simply through solr configuration, and how if possible?
 2) If we need to customize solr request handler or anything else, where to
 start?

 Thanks a lot!

 Jeremy



 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/need-help-on-solr-search-tp4017191.html
 Sent from the Solr - User mailing list archive at Nabble.com.
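As I read the question above, the tokenization being described works roughly like this (a sketch under my assumptions about the scheme; the names and chunk length are made up):

```python
def tokenize(properties, chunk_len=3):
    """Split each property value into fixed-length chunks, each prefixed
    with its property name, so chunks from different properties can never
    collide and the prefix records which property a chunk came from.
    """
    terms = []
    for name, value in properties.items():
        for i in range(0, len(value), chunk_len):
            terms.append(f"{name}:{value[i:i + chunk_len]}")
    return terms
```

Similarity between a query and a document then reduces to counting shared prefixed terms, which is exactly the kind of detail (data size, latency target, scoring needed) the question would have to pin down.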


Re: SolrCloud AutoSharding? In enterprise environment?

2012-10-31 Thread Erick Erickson
There's no real magic about the index structure, it's just the
same as non-cloud Solr. So presumably you have
example and example2 directories or some such.
They're just standard Solr installations with the
index in the usual place, the docs for the particular
shard are stored in the data/index directories under
example and example2.

SolrCloud is simply doing the usual querying for you: it
sends the request to _all_ shards and assembles the
response. The only time hashing on the uniqueKey
comes into play is when SolrCloud is deciding which
shard to send a document to during indexing, AFAIK.
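The uniqueKey hashing mentioned here can be sketched as follows (illustrative only; a toy deterministic string hash stands in for whatever hash Solr actually applies to the uniqueKey):

```python
def shard_for(unique_id, num_shards):
    """Pick the shard for a document by hashing its unique key.

    Uses a djb2-style string hash so the example is reproducible;
    Solr's real implementation differs in detail.
    """
    h = 5381
    for ch in unique_id:
        h = ((h * 33) + ord(ch)) & 0xFFFFFFFF  # keep it in 32 bits
    return h % num_shards
```

The point is that routing is deterministic per document id at index time; queries don't need the id at all because they fan out to every shard.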

Likewise, going to the specific admin pages for each node will
show you stats for that particular node. If you really want to,
you can probably just fire up the Solr node without any
of the ZK parameters and it'll be out of the cloud. But an easier
way is to go into the admin cloud page and click on the
nodes displayed. Then click on the core (probably collection1)
and you'll see statistics for that particular node.

Best
Erick

On Wed, Oct 31, 2012 at 4:07 AM, joseph_12345
hellojoseph_12...@yahoo.com wrote:
 Thanks Otis for the response.



Query regarding range search on int fields

2012-10-31 Thread Leena Jawale
Hi,

I have created a Solr XML data source. In it I want to run range queries on an int
field, price.
By default the field is treated as text, so I added Price in
the fields section, making price an int:
Indexed=true, Search by default=true, Stored=true, Include in results=true.
After that, when I go into the Solr Admin and open the Schema Browser, it shows an
error like:

Please wait... loading and parsing Schema Information from LukeRequestHandler

If it does not load or your browser is not javascript or ajax-capable, you may
wish to examine your schema using the server-side transformed LukeRequestHandler
(http://127.0.0.1:/solr/collection2/admin/luke?wt=xslt&tr=luke.xsl)
or the raw schema.xml
(http://127.0.0.1:/solr/collection2/admin/file/?file=schema.xml)
instead.
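For reference, in this generation of Solr the usual way to make a price field rangeable is a trie-based numeric type in schema.xml. A sketch (the type and field names here are assumptions, not taken from your config):

```xml
<!-- schema.xml: a numeric type that supports fast range queries -->
<fieldType name="tint" class="solr.TrieIntField" precisionStep="8"
           positionIncrementGap="0"/>

<!-- the price field itself -->
<field name="price" type="tint" indexed="true" stored="true"/>
```

A range query then looks like q=price:[100 TO 500]. Note that documents indexed while the field was still text keep their old terms, so a full re-index is needed after changing the type.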

Could you help me in this?
Thanks,
Leena Jawale










SOLR 3.5 sometimes throws java.lang.NumberFormatException: For input string: java.math.BigDecimal:1848.66

2012-10-31 Thread Marcin Pilaczynski
Hello all,

We have a very strange problem with SOLR 3.5. It SOMETIMES throws exceptions:

2012-10-31 10:20:06,408 SEVERE [org.apache.solr.core.SolrCore:185]
(http-10.205.49.74-8080-155) org.apache.solr.common.SolrException:
ERROR: [doc=MyDoc # d3mo1351674222122-1 # 2012-10-31 08:03:42.122]
Error adding field 'AMOUNT'='java.math.BigDecimal:1848.66'
at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:324)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:60)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:53)
at 
my.package.solr.dihutils.DateCheckUpdateProcessorFactory$DateCheckUpdateProcessor.processAdd(DateCheckUpdateProcessorFactory.java:91)
at 
org.apache.solr.handler.BinaryUpdateRequestHandler$2.document(BinaryUpdateRequestHandler.java:79)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$2.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:139)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$2.readIterator(JavaBinUpdateRequestCodec.java:129)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:211)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$2.readNamedList(JavaBinUpdateRequestCodec.java:114)
at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:176)
at 
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:102)
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:144)
at 
org.apache.solr.handler.BinaryUpdateRequestHandler.parseAndLoadDocs(BinaryUpdateRequestHandler.java:69)
at 
org.apache.solr.handler.BinaryUpdateRequestHandler.access$000(BinaryUpdateRequestHandler.java:45)
at 
org.apache.solr.handler.BinaryUpdateRequestHandler$1.load(BinaryUpdateRequestHandler.java:56)
at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:58)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1372)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at 
org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:190)
at 
org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92)
at 
org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
at 
org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at 
org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
at 
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.lang.NumberFormatException: For input string:
java.math.BigDecimal:1848.66
at 
sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1222)
at java.lang.Double.parseDouble(Double.java:510)
at org.apache.solr.schema.TrieField.createField(TrieField.java:418)
at org.apache.solr.schema.SchemaField.createField(SchemaField.java:104)
at 
org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:203)
at 
org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:281)
... 36 more

The exceptions are always thrown for different BigDecimal values (so the
problem is not tied to one particular BigDecimal value).
We have no idea what's going on. Any ideas?
Greetings

-- 
Marcin P


Re: SOLR 3.5 sometimes throws java.lang.NumberFormatException: For input string: java.math.BigDecimal:1848.66

2012-10-31 Thread Rafał Kuć
Hello!

Look at what Solr returns in the error - you are sending the value
'java.math.BigDecimal:1848.66'. Remove the 'java.math.BigDecimal:' prefix
and your problem should be gone.

-- 
Regards,
 Rafał Kuć
 Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch

 Hello all,

 We have a very strange problem with SOLR 3.5. It SOMETIMES throws exceptions:

 2012-10-31 10:20:06,408 SEVERE [org.apache.solr.core.SolrCore:185]
 (http-10.205.49.74-8080-155) org.apache.solr.common.SolrException:
 ERROR: [doc=MyDoc # d3mo1351674222122-1 # 2012-10-31 08:03:42.122]
 Error adding field 'AMOUNT'='java.math.BigDecimal:1848.66'

Re: SOLR 3.5 sometimes throws java.lang.NumberFormatException: For input string: java.math.BigDecimal:1848.66

2012-10-31 Thread Erick Erickson
It _looks_ like the string you're sending as a BigDecimal is,
literally, 'java.math.BigDecimal:1848.66' rather than '1848.66'. How are
you generating the field value? I'm guessing that your (SolrJ?) program is
somehow messing this up...
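A guess at the client-side bug, sketched in Python for brevity (the field value is from the log; the two rendering functions are hypothetical stand-ins for whatever the indexing client does): the object is being serialized with its class name attached instead of as a bare numeric string.

```python
from decimal import Decimal

AMOUNT = Decimal("1848.66")  # stand-in for the Java BigDecimal


def buggy_render(value):
    # Accidentally includes the type name -- producing exactly the string
    # the stack trace shows arriving at Solr's numeric-field parser.
    return f"java.math.BigDecimal:{value}"


def fixed_render(value):
    # Send only the plain numeric string; a double parser can read it.
    return str(value)
```

Whatever toString/serialization path produces the "ClassName:value" form is the place to look in the client code.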

Best
Erick


On Wed, Oct 31, 2012 at 7:28 AM, Marcin Pilaczynski marcin@gmail.comwrote:

 Hello all,

 We have a very strange problem with SOLR 3.5. It SOMETIMES throws
 exceptions:

 2012-10-31 10:20:06,408 SEVERE [org.apache.solr.core.SolrCore:185]
 (http-10.205.49.74-8080-155) org.apache.solr.common.SolrException:
 ERROR: [doc=MyDoc # d3mo1351674222122-1 # 2012-10-31 08:03:42.122]
 Error adding field 'AMOUNT'='java.math.BigDecimal:1848.66'
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.NumberFormatException: For input string:
 java.math.BigDecimal:1848.66
 at
 sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1222)
 at java.lang.Double.parseDouble(Double.java:510)
 at org.apache.solr.schema.TrieField.createField(TrieField.java:418)
 at
 org.apache.solr.schema.SchemaField.createField(SchemaField.java:104)
 at
 org.apache.solr.update.DocumentBuilder.addField(DocumentBuilder.java:203)

Re: Query regarding range search on int fields

2012-10-31 Thread Erick Erickson
Is this LucidWorks? You'll need to ask on the LucidWorks forums if so.

Best
Erick


On Wed, Oct 31, 2012 at 7:26 AM, Leena Jawale
leena.jaw...@lntinfotech.comwrote:

 Hi,

 I have created Solr XML data source. In that I want to make range query on
 int field which is price.
 But by default it is treated as text, so I have added Price
 in the fields section, making it an int with
 Indexed=true, Search by default=true, Stored=true, Include in results=true.
 After that when I go in Solr Admin and see the Schema Browser it is
 showing an error like
 Please wait...loading and parsing Schema Information from
 LukeRequestHandler

 If it does not load or your browser is not javascript or ajax-capable, you
 may wish to examine your schema using the Server side transformed
 LukeRequestHandler
 http://127.0.0.1:/solr/collection2/admin/luke?wt=xslt&tr=luke.xsl or
 the raw schema.xml
 http://127.0.0.1:/solr/collection2/admin/file/?file=schema.xml
 instead.

 Could you help me with this?
 Thanks,
 Leena Jawale







 
 The contents of this e-mail and any attachment(s) may contain confidential
 or privileged information for the intended recipient(s). Unintended
 recipients are prohibited from taking action on the basis of information in
 this e-mail and using or disseminating the information, and must notify the
 sender and delete it from their system. L&T Infotech will not accept
 responsibility or liability for the accuracy or completeness of, or the
 presence of any virus or disabling code in this e-mail



No lockType configured for NRTCachingDirectory

2012-10-31 Thread Markus Jelsma
Hi,

Besides replication issues (see other thread) we're also seeing these warnings 
in the logs on all 10 nodes and for all cores using today's or yesterday's 
trunk.

2012-10-31 11:01:03,328 WARN [solr.core.CachingDirectoryFactory] - [main] - : 
No lockType configured for 
NRTCachingDirectory(org.apache.lucene.store.MMapDirectory@/opt/solr/cores/shard_h/data
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@5dd183b7; 
maxCacheMB=48.0 maxMergeSizeMB=4.0) assuming 'simple'

The factory is configured like:

<config>
  <luceneMatchVersion>LUCENE_50</luceneMatchVersion>
  <directoryFactory name="DirectoryFactory"
    class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>

  ..
</config>

And the locking mechanism is configured like:

<indexConfig>
..
<lockType>native</lockType>
..
</indexConfig>

Any ideas as to why it doesn't seem to see my lockType?

Thanks
Markus


Re: Query regarding range search on int fields

2012-10-31 Thread Gora Mohanty
On 31 October 2012 16:56, Leena Jawale leena.jaw...@lntinfotech.com wrote:

 Hi,

 I have created Solr XML data source. In that I want to make range query on
 int field which is price.
 But by default it is considering that field as text. So I have added Price
 in the fields section making price as int,
 Indexed=true, Search by default=true, Stored=true, Include in results=true.
 After that when I go in Solr Admin and see the Schema Browser it is
 showing the error like
 Please wait...loading and parsing Schema Information from
 LukeRequestHandler

 If it does not load or your browser is not javascript or ajax-capable, you
 may wish to examine your schema using the Server side transformed
 LukeRequestHandler
 http://127.0.0.1:/solr/collection2/admin/luke?wt=xslt&tr=luke.xsl or
 the raw schema.xml
 http://127.0.0.1:/solr/collection2/admin/file/?file=schema.xml
 instead.


There is probably an error in your Solr schema. Please share schema.xml
with us.

Also, it would be good if you could follow up in one thread to people
responding to your post. It makes things difficult if the information is
spread over multiple threads.

Regards,
Gora


Re: SOLR 3.5 sometimes throws java.lang.NumberFormatException: For input string: java.math.BigDecimal:1848.66

2012-10-31 Thread Marcin Pilaczynski
First we were adding BigDecimal object to SolrInputDocument directly
as field value.
Now we are adding BigDecimal.toPlainString() as field value.

SOLR relies on the JavaBinCodec class, which does de/serialization in its
own way - some kind of bug in there?

What is the proper way to handle BigDecimal values in
SOLR 3.5, after all?

2012/10/31 Erick Erickson erickerick...@gmail.com:
 It _looks_ like somehow the string you're sending as a BigDecimal is,
 literally, java.math.BigDecimal:1848.66 rather than 1848.66. How are
 you generating the field value? I'm guessing that your (SolrJ?) program is
 somehow messing this up...

 Best
 Erick


 On Wed, Oct 31, 2012 at 7:28 AM, Marcin Pilaczynski 
 marcin@gmail.comwrote:

 Welcome all,

 We have a very strange problem with SOLR 3.5. It SOMETIMES throws
 exceptions:

 2012-10-31 10:20:06,408 SEVERE [org.apache.solr.core.SolrCore:185]
 (http-10.205.49.74-8080-155) org.apache.solr.common.SolrException:
 ERROR: [doc=MyDoc # d3mo1351674222122-1 # 2012-10-31 08:03:42.122]
 Error adding field 'AMOUNT'='java.math.BigDecimal:1848.66'
 at
 org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:324)
 at
 org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:60)
 at
 org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:53)
 at
 my.package.solr.dihutils.DateCheckUpdateProcessorFactory$DateCheckUpdateProcessor.processAdd(DateCheckUpdateProcessorFactory.java:91)
 at
 org.apache.solr.handler.BinaryUpdateRequestHandler$2.document(BinaryUpdateRequestHandler.java:79)
 at
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$2.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:139)
 at
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$2.readIterator(JavaBinUpdateRequestCodec.java:129)
 at
 org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:211)
 at
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$2.readNamedList(JavaBinUpdateRequestCodec.java:114)
 at
 org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:176)
 at
 org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:102)
 at
 org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:144)
 at
 org.apache.solr.handler.BinaryUpdateRequestHandler.parseAndLoadDocs(BinaryUpdateRequestHandler.java:69)
 at
 org.apache.solr.handler.BinaryUpdateRequestHandler.access$000(BinaryUpdateRequestHandler.java:45)
 at
 org.apache.solr.handler.BinaryUpdateRequestHandler$1.load(BinaryUpdateRequestHandler.java:56)
 at
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:58)
 at
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1372)
 at
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:356)
 at
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:252)
 at
 org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at
 org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at
 org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:235)
 at
 org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at
 org.jboss.web.tomcat.security.SecurityAssociationValve.invoke(SecurityAssociationValve.java:190)
 at
 org.jboss.web.tomcat.security.JaccContextValve.invoke(JaccContextValve.java:92)
 at
 org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.process(SecurityContextEstablishmentValve.java:126)
 at
 org.jboss.web.tomcat.security.SecurityContextEstablishmentValve.invoke(SecurityContextEstablishmentValve.java:70)
 at
 org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at
 org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at
 org.jboss.web.tomcat.service.jca.CachedConnectionValve.invoke(CachedConnectionValve.java:158)
 at
 org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at
 org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:330)
 at
 org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:829)
 at
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:598)
 at
 org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.lang.NumberFormatException: For input string:
 

Re: Grouping based on multiple criteria

2012-10-31 Thread Marcio Ghiraldelli
I have a similar issue, and I am solving it by implementing my own
components and dealing with the resulting doc IDs:

public class MerchantShuffleComponent extends SearchComponent {
    ...
    ctx.docs = shuffledDocList;

    rb.rsp.getValues().remove("response");
    rb.rsp.add("response", ctx);
    ...
}

<searchComponent name="merchant_shuffle_query"
    class="solr.MerchantShuffleComponent" />

<requestHandler name="/shuffle" class="solr.SearchHandler">
  <arr name="components">
    <str>query</str>
    <str>merchant_shuffle_query</str>
    <str>facet</str>
    <str>mlt</str>
    <str>highlight</str>
    <str>stats</str>
    <str>debug</str>
  </arr>
</requestHandler>

Best regards,
Marcio Ghiraldelli


2012/10/30 Alan Woodward alan.woodw...@romseysoftware.co.uk

 Hi list,

 I'd like to be able to present a list of results which are grouped on a
 single field, but then show various members of each group according to
 several different criteria.  So for example, for e-commerce search, we
 group at the top level by the vendor, but then show the most expensive
 item, least expensive item, most heavily discounted item, etc.

 I can't find anything that would let me do this in the current grouping
 code.  I'm thinking I'd need to implement a form of TopFieldCollector that
 maintained multiple sort orders that could be used for the second pass
 collector, but there doesn't seem to be anywhere to plug that in easily.

 Is there anything already out there that I'm missing, or do I have to do
 some actual work?  :-)

 Thanks, Alan


After to SOLR 4.0.0 Upgrade - ClusterState says we are the leader, but locally we don't think so

2012-10-31 Thread balaji.gandhi
Hi,

After upgrading from Solr 4.0.0-Beta to Solr 4.0.0 we are getting this error
from ALL the leader nodes:-

Oct 31, 2012 6:44:03 AM
org.apache.solr.update.processor.DistributedUpdateProcessor
doDefensiveChecks
SEVERE: ClusterState says we are the leader, but locally we don't think so

Is there a configuration/schema change we are missing? Please let us know.

Thanks,
Balaji



--
View this message in context: 
http://lucene.472066.n3.nabble.com/After-to-SOLR-4-0-0-Upgrade-ClusterState-says-we-are-the-leader-but-locally-we-don-t-think-so-tp4017277.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Datefaceting on multiple value in solr

2012-10-31 Thread Sagar Joshi1304
Thanks Chris,

seems it is working fine, below is the query

http://localhost:8993/solr/select?q=*:*&fq={!tag=test}name:test&fq={!tag=test1}name:test1&fq={!tag=test3}name:test3&facet=true&facet.range={!key=test
ex=test1,test3}Admission_Date&facet.range={!key=test1
ex=test,test3}Admission_Date&facet.range={!key=test3
ex=test,test1}Admission_Date&
facet.range.start=2012-06-01T12:00:00Z&facet.range.end=2012-06-30T17:00:00Z&facet.range.gap=%2B1DAY&rows=0

please correct me if the query is wrong; but suppose I have 100 names - then I
have to add fq 100 times, and in each fq I have to add exclusions for the other
99. Is there any other parameter or simpler way?

sorry, to bother you more.





--
View this message in context: 
http://lucene.472066.n3.nabble.com/Datefaceting-on-multiple-value-in-solr-tp4014021p4017281.html


Re: No lockType configured for NRTCachingDirectory

2012-10-31 Thread Mark Miller
By trunk do you mean 4X or 5X?

On Wed, Oct 31, 2012 at 7:47 AM, Markus Jelsma
markus.jel...@openindex.io wrote:
 Hi,

 Besides replication issues (see other thread) we're also seeing these 
 warnings in the logs on all 10 nodes and for all cores using today's or 
 yesterday's trunk.

 2012-10-31 11:01:03,328 WARN [solr.core.CachingDirectoryFactory] - [main] - : 
 No lockType configured for 
 NRTCachingDirectory(org.apache.lucene.store.MMapDirectory@/opt/solr/cores/shard_h/data
  lockFactory=org.apache.lucene.store.NativeFSLockFactory@5dd183b7; 
 maxCacheMB=48.0 maxMergeSizeMB=4.0) assuming 'simple'

 The factory is configured like:

 <config>
   <luceneMatchVersion>LUCENE_50</luceneMatchVersion>
   <directoryFactory name="DirectoryFactory"
     class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>

 ..
 </config>

 And the locking mechanism is configured like:

 <indexConfig>
 ..
 <lockType>native</lockType>
 ..
 </indexConfig>

 Any ideas to why it doesn't seem to see my lockType?

 Thanks
 Markus



-- 
- Mark


RE: No lockType configured for NRTCachingDirectory

2012-10-31 Thread Markus Jelsma
That's 5, the actual trunk.
 
-Original message-
 From:Mark Miller markrmil...@gmail.com
 Sent: Wed 31-Oct-2012 16:29
 To: solr-user@lucene.apache.org
 Subject: Re: No lockType configured for NRTCachingDirectory
 
 By trunk do you mean 4X or 5X?
 
 On Wed, Oct 31, 2012 at 7:47 AM, Markus Jelsma
 markus.jel...@openindex.io wrote:
  Hi,
 
  Besides replication issues (see other thread) we're also seeing these 
  warnings in the logs on all 10 nodes and for all cores using today's or 
  yesterday's trunk.
 
  2012-10-31 11:01:03,328 WARN [solr.core.CachingDirectoryFactory] - [main] - 
  : No lockType configured for 
  NRTCachingDirectory(org.apache.lucene.store.MMapDirectory@/opt/solr/cores/shard_h/data
   lockFactory=org.apache.lucene.store.NativeFSLockFactory@5dd183b7; 
  maxCacheMB=48.0 maxMergeSizeMB=4.0) assuming 'simple'
 
  The factory is configured like:
 
  <config>
    <luceneMatchVersion>LUCENE_50</luceneMatchVersion>
    <directoryFactory name="DirectoryFactory"
      class="${solr.directoryFactory:solr.NRTCachingDirectoryFactory}"/>

  ..
  </config>
 
  And the locking mechanism is configured like:

  <indexConfig>
  ..
  <lockType>native</lockType>
  ..
  </indexConfig>
 
  Any ideas to why it doesn't seem to see my lockType?
 
  Thanks
  Markus
 
 
 
 -- 
 - Mark
 


Re: Solr4.0 / SolrCloud queries

2012-10-31 Thread Mark Miller
If you can share any logs, that would help as well.

- Mark


Re: Solr Swap Function doesn't work when using Solr Cloud - SOLR3866

2012-10-31 Thread Andre Bois-Crettez

Hello,

Same as Sam, I believe the SWAP command is important for several use
cases.

For example, with Solr 3, we do use Current and Temp cores, so that
incremental updates to the index are done live on Current, as well as
searches.
Whenever a full/baseline/from-scratch index needs to be performed,
incremental indexing is stopped, a delete *:* on Temp is performed, and all
documents are fed into Temp. Once done and committed, a SWAP between Temp
and Current is performed, and we can continue in nominal incremental
mode. All this time the searches on Current were working and mostly
correct, minus not seeing updates during the full index.

1) currently with SolrCloud, how is it possible to rebuild a new index
from scratch, without too much disruption of live searches?
Is it realistic to stop any commit/softcommit, do the delete *:*, then
feed all documents, and only then commit? It seems the transaction log
would be way too large ? Our use case for each collection is around 10
million of documents, a dozen kB each.

2) maybe a workaround is possible using collections, ie. search on
collection1 while indexing collection2, then once done search on
collection2 ?

3) and in what ways could we help to implement a solution on SOLR-3866 ?


André

On 09/25/2012 03:17 AM, sam fang wrote:

Hi Mark,

If it can be supported in the future, I think that would be great. It's a really
useful feature. For example, a user can use it to refresh with a totally new core:
build the index on one core and, after the build is done, swap the old core and
the new core. Then you get a totally new core for search.

It can also be used for backup: if one core has crashed, you can easily swap in
the backup core and quickly serve search requests.

Best Regards,
Sam

On Sun, Sep 23, 2012 at 2:51 PM, Mark Miller markrmil...@gmail.com wrote:


FYI swap is def not supported in SolrCloud right now - even though it may
work, it's not been thought about and there are no tests.

If you would like to see support, I'd add a JIRA issue along with any
pertinent info from this thread about what the behavior needs to be changed
to.

- Mark

On Sep 21, 2012, at 6:49 PM, sam fang sam.f...@gmail.com wrote:


Hi Chris,

Thanks for your help. Today I tried again and try to figure out the

reason.

1. set up an external zookeeper server.

2. change /opt/solr/apache-solr-4.0.0-BETA/example/solr/solr.xml

persistent

to true. and run below command to upload config to zk. (renamed multicore
to solr, and need to put zkcli.sh related jar package.)
/opt/solr/apache-solr-4.0.0-BETA/example/cloud-scripts/zkcli.sh -cmd
upconfig -confdir

/opt/solr/apache-solr-4.0.0-BETA/example/solr/core0/conf/

-confname
core0 -z localhost:2181
/opt/solr/apache-solr-4.0.0-BETA/example/cloud-scripts/zkcli.sh -cmd
upconfig -confdir

/opt/solr/apache-solr-4.0.0-BETA/example/solr/core1/conf/

-confname
core1 -z localhost:2181

3. Start jetty server
cd /opt/solr/apache-solr-4.0.0-BETA/example
java -DzkHost=localhost:2181 -jar start.jar

4. publish message to core0
/opt/solr/apache-solr-4.0.0-BETA/example/solr/exampledocs
cp ../../exampledocs/post.jar ./
java -Durl=http://localhost:8983/solr/core0/update -jar post.jar
ipod_video.xml

5. query to core0 and core1 is ok.

6. Click swap in the admin page; the queries to core0 and core1
change. Previously I saw it sometimes return 0 results and sometimes return 1
result. Today
it seems core0 still returns 1 result and core1 returns 0 results.

7. Then click reload in the admin page and query core0 and core1.
They sometimes return 1 result and sometimes return nothing. I can also see
that the zk
configuration also changed.

8. Restart the jetty server. If I do the query, it's the same as what I saw in

step 7.

9. Stop the jetty server, log into zkCli.sh, and run the command set
/clusterstate.json {}, then start jetty again. Everything goes back to
normal,
that is, what swap previously did in solr 3.6 or solr 4.0 w/o cloud.


 From my observation, after the swap it seems to put shard information into
actualShards; when a user requests a search, it will use all of the shard
information to do the
search. But the user can't see the zk update until clicking the reload button
in the admin page. When the web server is restarted, this shard information
eventually goes to
zk, and the search goes to all shards.

I found there is an option distrib, and used a url like
http://host1:18000/solr/core0/select?distrib=false&q=*%3A*&wt=xml, then
only get the data on the
core0. Digged in the code (handleRequestBody method in SearchHandler

class,

seems it make sense)

I tried to stop tomcat server, then use command set /clusterstate.json

{}

to clean all cluster state, then use command cloud-scripts/zkcli.sh -cmd
upconfig to upload config to zk server, and start tomcat server. It
rebuild the right shard information in zk. then search function back to
normal like what
we saw in 3.6 or 4.0 w/o cloud.

Seems solr always add shard information into zk.

I tested cloud swap on single machine, if each core have one shard in the
zk, after swap, eventually zk has 2 slices(shards) for that core because
now only
do 

Re: After to SOLR 4.0.0 Upgrade - ClusterState says we are the leader, but locally we don't think so

2012-10-31 Thread balaji.gandhi
Mark,

We have tried the following:-
1. Removing everything in the ZooKeeper snapshot directory
2. Removing the indexes

We get the same error in both cases.

Attached the cloud dumps:-  cloud_dump.json
http://lucene.472066.n3.nabble.com/file/n4017315/cloud_dump.json  

Thanks,
Balaji



--
View this message in context: 
http://lucene.472066.n3.nabble.com/After-to-SOLR-4-0-0-Upgrade-ClusterState-says-we-are-the-leader-but-locally-we-don-t-think-so-tp4017277p4017315.html


Re: SolrJ 4.0.0 addFilterQuery() issue ?

2012-10-31 Thread Indika Tantrigoda
Just created the defect in Jira.
https://issues.apache.org/jira/browse/SOLR-4020

Thanks.

On 31 October 2012 10:47, Indika Tantrigoda indik...@gmail.com wrote:

 Thanks for the reply Chris.

 Yes you are correct, SolrJ is serializing a String[] instead of the
 separate String values.

 Using solrQuery.add("fq", "your first filter"); and solrQuery.add("fq",
 "your second filter"); has the same effect, because it calls the add()
 method in the ModifiableSolrParams.java class (similar to
 solrQuery.setFilterQueries()).

 Yes, I will open a Jira issue for this with more information.

 Thanks,
 Indika


 On 31 October 2012 05:08, Chris Hostetter hossman_luc...@fucit.orgwrote:


 : org.apache.solr.common.SolrException:
 : org.apache.lucene.queryparser.classic.ParseException: Cannot parse
 : '[Ljava.lang.String;@1ec278b5': Encountered EOF at line 1, column
 28.

 Hmmm.. that looks like a pretty anoying bug -- somehwere SolrJ is
 serializing a String[] instead of sending the individual String values.

 can you please open a jira for this with these details?

 : Is there a new/alternate way in SolrJ 4 that this is done ?

 I would say that one possible workarround may be to
 use...
  solrQuery.add("fq", "your first filter");
  solrQuery.add("fq", "your second filter");

 ...but i don't know where the bug is to know if that will actally work.
 if you could try that also and mention the results in a comment in the
 Jira you open that would be helpful.

 -Hoss
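For anyone hitting the same error: the garbage string in the ParseException above is just what Java's default Object.toString() produces for an array. A minimal standalone sketch (plain Java, no SolrJ required, variable names made up) reproducing the symptom:

```java
public class ArrayToStringPitfall {
    public static void main(String[] args) {
        String[] filters = { "type:book", "price:[10 TO 20]" };

        // Serializing the array reference itself falls back to Object.toString(),
        // which yields "[Ljava.lang.String;@<hex>" - the exact garbage the query
        // parser choked on in the exception above.
        String bad = String.valueOf(filters);
        System.out.println(bad.startsWith("[Ljava.lang.String;")); // prints true

        // What the client should do instead: send each element as its own value.
        for (String f : filters) {
            System.out.println("fq=" + f);
        }
    }
}
```

This is only an illustration of the symptom; the actual fix belongs in SolrJ's parameter serialization, as discussed in SOLR-4020.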





Re: need help on solr search

2012-10-31 Thread jchen2000
Sure. Here are some more details:
1) we are having 30M ~ 60M documents per node (right now we have 4 nodes,
but that will increase in the future).  Documents are relatively small
(around 3K), but 99% of searches must be returned within 200ms, and this is
measured by test drivers sitting right in front of solr servers. 

2) throughput requirement right now is about 300 qps. The machines we use
are quite powerful with 16 cores, lots of memory and with ssd drives. We
haven't really achieved this throughput, but search latency is more of an
issue

3) one property value may overlap with a value in another property,
but we don't want to match those, so we prefixed terms with the property name

Thanks,
Fang  



--
View this message in context: 
http://lucene.472066.n3.nabble.com/need-help-on-solr-search-tp4017191p4017341.html


Re: SOLR 3.5 sometimes throws java.lang.NumberFormatException: For input string: java.math.BigDecimal:1848.66

2012-10-31 Thread Chris Hostetter

: SOLR relies on JavaBinCodec class which does de/serialization in it's
: own way - some kind of bug in there?
: 
: I don't know what is the proper way to handle BigDecimal values in
: SOLR 3.5 after all?

The safe thing to do is only add primitive java objects that Solr 
understands natively - String, Long, Integer, Float, Double, Boolean,
and Date.  The JavaBinCodec has some logic for trying to deal with other
types of objects -- but i *thought* its fallback was to just rely on
toString for any class it doesn't recognize -- so it does seem like
there is a bug in there somewhere...

https://issues.apache.org/jira/browse/SOLR-4021


-Hoss
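To illustrate the point above with plain Java (no Solr dependencies): converting the BigDecimal to one of the natively supported types, or to its plain string form, before adding it to the document avoids the bad toString() round-trip. Variable names here are illustrative only:

```java
import java.math.BigDecimal;

public class BigDecimalFieldValue {
    public static void main(String[] args) {
        BigDecimal amount = new BigDecimal("1848.66");

        // Safe options: a supported primitive, or the plain decimal string.
        double asDouble = amount.doubleValue();
        String asString = amount.toPlainString();     // "1848.66"

        // Both survive the parse that TrieField.createField performs:
        System.out.println(Double.parseDouble(asString) == asDouble); // prints true

        // By contrast, the string from the failed run cannot be parsed:
        try {
            Double.parseDouble("java.math.BigDecimal:1848.66");
        } catch (NumberFormatException expected) {
            System.out.println("NumberFormatException, as in the stack trace above");
        }
    }
}
```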


Re: SolrCloud Tomcat configuration: problems and doubts.

2012-10-31 Thread Mark Miller
A big difference if you are using tomcat is that you still need to
specify jetty.port - unless you change the name of that sys prop in
solr.xml.

Some more below:

On Wed, Oct 31, 2012 at 2:09 PM, Luis Cappa Banda luisca...@gmail.com wrote:
 Hello!

 How are you? I followed the SolrCloud Wiki tutorial and noticed that everything
 worked perfectly with Jetty and with a very basic configuration. My first
 impression was that SolrCloud is amazing, and I'm interested in deploying a
 more complex, near-production SolrCloud architecture for
 testing purposes. I'm using Tomcat as the application server, so I've started
 testing with it.

 I've installed the ZooKeeper service on a single machine and started it up with the
 following configuration:

 *1.)*

 ~zookeperhome/conf/zoo.cfg

 *tickTime=2000*
 *initLimit=10*
 *syncLimit=5*
 *dataDir=~zookeperhome/data/*
 *clientPort=9000*

 *2.)* I am testing with a single-core Solr server called 'items_en'. The
 configuration is as follows:

 *Indexes conf/data tree*: /mnt/data-store*/solr/*
/solr.xml
/zoo.cfg
/items_en/
  /conf/

 schema.xml

 solrconfig.xml
 etc.

 So we have a simple configuration where the conf files and the index data files
 are in the same path.

 *3.)* OK, so we have the Solr server configured, but I have to save the
 configuration into ZooKeeper. I do so as follows:

 *./bin/zkcli.sh -cmd upconfig -zkhost 127.0.0.1:9000 -confdir *
 /mnt/data-store/solr/*items_en/conf -collection items_en -confname items_en
 *

 And it seems to work perfectly, because if I use the ZooKeeper client and execute
 the 'ls' command, the files appear:

 *./bin/zkCli.sh -server localhost:9000
 *
 *
 *
 *[zk: localhost:9000(CONNECTED) 1] ls /configs/items_en*
 *[admin-extra.menu-top.html, currency.xml, protwords.txt,
 mapping-FoldToASCII.txt, solrconfig.xml, lang, spellings.txt,
 mapping-ISOLatin1Accent.txt, admin-extra.html, xslt, scripts.conf,
 synonyms.txt, update-script.js, velocity, elevate.xml, zoo.cfg,
 admin-extra.menu-bottom.html, stopwords_en.txt, schema.xml]*
 *
 *
 *
 *
 *4.) *I would like all the Solr servers deployed in that Tomcat
 instance to point to the ZooKeeper service on port 9000, so I included the following
 JAVA_OPTS hoping that they'll make that possible:

 *JAVA_OPTS=-DzkHost=127.0.0.1:9000 -Dcollection.configName=items_en
 -DnumShards=2 *
 *
 *
 *Question 1: supposing that the JAVA_OPTS are OK, do you think there exists a
 more flexible and less fixed way to indicate to each Solr server instance
 which is its ZooKeeper service?*

Your zkHost should actually be a comma sep list of the zk hosts. Yes,
we hope to improve this in the future as zookeeper becomes more
flexible.

 *
 *
 *Question 2: can you increase numShards later, even after
 indexing? Example: imagine that you have millions of documents and you
 want to expand from two to four shards and also increase the number of
 Solr servers*

You can't change the number of shards yet - there is an open jira
issue for this and ongoing work. It's been called shard splitting.

 *
 *
 *Question 3: again supposing that JAVA_OPTS is OK (or nearly OK), is
 it always necessary to include -DnumShards for each Tomcat server? Can't
 this confuse the ZooKeeper instance?*

It depends on how you start your instances. The first one is the only
one that matters - it only makes sense to specify for each instance if
you plan on starting them all at the same time and are not sure which
the first to register in zk will be.

 *
 *
 *Question 4: **imagine that we have three Zookeeper instances to manage
 config files in production environment. The parameter -DzkHost should be
 like this? -DzkHost=host1:port1,host2:port2,host3:port3.*

Yes.

 *
 *
 *5.) *I started *Tomcat (port 8080)* with a single Solr server and
 everything seems to be OK: there is a single core set as 'items_en' and the
 Cloud button is active. The graph is a simple tree with shard1 and shard2.
 The current instance is connected to shard1. *Also, if I execute any query
 I just receive a 503 error code: no servers hosting.*
 *

Not sure why offhand - if you are not passing jetty.port (or something
else if you have renamed it - like tomcat.port), that will be a
problem.

 *
 *
 *
 *6.) *I started another Solr server in a* second Tomcat instance (port
 9080). *Its Solr home is in the following path:

 *Indexes conf/data tree*: /mnt/data-store*/solr2/*
/solr.xml
/zoo.cfg
/items_en/
  /conf/

 schema.xml

 solrconfig.xml
 

Re: Anyone working on adapting AnalyzingQueryParser to solr?

2012-10-31 Thread balaji.gandhi
Hi,

Is the AnalyzingQueryParser ported to SOLR? I read that it is available in
Lucene. Not sure about SOLR.

We are trying to workaround this limitation:-

On wildcard and fuzzy searches, no text analysis is performed on the search
word.

http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters

Thanks,
Balaji



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Anyone-working-on-adapting-AnalyzingQueryParser-to-solr-tp500199p4017364.html


Re: need help on solr search

2012-10-31 Thread Otis Gospodnetic
Hi,

Not sure if I follow your requirements correctly, but it sounds like
you may be looking for phrase queries (as opposed to term/keyword
queries).

Otis
--
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring - http://sematext.com/spm/index.html


On Wed, Oct 31, 2012 at 1:33 AM, jchen2000 jchen...@yahoo.com wrote:
 Hi Solr experts,

 Our documents as well as queries consist of 10 properties in a particular
 order. Because of stringent requirements on search latency, we grouped them
 into only 2 fields with 5 properties each (we may use just 1 field, field
 number over 3 seems too slow), and each property value is split into
 fixed-length terms (like n-gram, hopefully to save search time) and prefixed
 with property name. What we want is to find out how similar the query is to
 the documents by comparing terms. We can't use the default OR operator since
 it's slow, we wanted to take advantage of the prefix and the defined order.

 My questions are:
 1) Can we do this simply through solr configuration, and how if possible?
 2) If we need to customize solr request handler or anything else, where to
 start?

 Thanks a lot!

 Jeremy
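For illustration only (this sketch is not from the thread): the fixed-length, property-prefixed term scheme described above could look like the following. The chunk size, the `_` separator, and all names here are hypothetical assumptions.

```java
import java.util.ArrayList;
import java.util.List;

public class PrefixedGrams {
    // Split a property value into fixed-length chunks (n-gram-like) and
    // prefix each chunk with the property name, as the scheme above describes.
    // The "_" separator and the chunk size are illustrative assumptions.
    static List<String> terms(String property, String value, int size) {
        List<String> out = new ArrayList<String>();
        for (int i = 0; i < value.length(); i += size) {
            out.add(property + "_"
                    + value.substring(i, Math.min(i + size, value.length())));
        }
        return out;
    }

    public static void main(String[] args) {
        // "darkblue" in 4-char chunks -> color_dark, color_blue
        System.out.println(terms("color", "darkblue", 4));
    }
}
```

Each document field would then hold the concatenation of such terms for its five properties, so exact chunk matches (rather than OR over raw values) drive the similarity comparison.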





Re: Datefaceting on multiple value in solr

2012-10-31 Thread Otis Gospodnetic
Hi,

I didn't follow the thread, but maybe you are looking for fq=(name1
OR name2 OR ...) for those 100 names you mentioned, so that one doesn't
filter out the other 99.

Otis
--
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring - http://sematext.com/spm/index.html


On Wed, Oct 31, 2012 at 10:02 AM, Sagar Joshi1304
sagar.jo...@amultek.com wrote:
 Thanks Chris,

 seems it is working fine, below is the query

 http://localhost:8993/solr/select?q=*:*&fq={!tag=test}name:test
 &fq={!tag=test1}name:test1&fq={!tag=test3}name:test3
 &facet=true&facet.range={!key=test ex=test1,test3}Admission_Date
 &facet.range={!key=test1 ex=test,test3}Admission_Date
 &facet.range={!key=test3 ex=test,test1}Admission_Date
 &facet.range.start=2012-06-01T12:00:00Z&facet.range.end=2012-06-30T17:00:00Z
 &facet.range.gap=%2B1DAY&rows=0

 Please correct me if the query is wrong, but if I have, say, 100 names then I
 have to add fq 100 times, and in each fq I have to add an exclusion of the
 other 99; is there any other parameter or simpler way?

 Sorry to bother you more.







Solr 4.0 admin panel

2012-10-31 Thread Tannen, Lev (USAEO) [Contractor]
Hi,
I apologize for the trivial question, but I cannot find out what is wrong. I 
am trying to switch from Solr 3.6 to Solr 4.0. All I have done is download and 
unzip the official binary file for Windows (32 bit) and run just an example, 
and it does not work.
In Solr 3.6 the request http://localhost:8983/solr/admin returns an 
administration panel. In Solr 4.0 it returns just a generic Apache Solr page 
with dead links. Dead links means that when I click on them nothing happens.
I have tried to run the multicore example, and 
http://localhost:8983/solr/core0/admin returns "not found".
An attempt to run the cloud example also returns a generic page. In all cases 
search works. I can even add a document to the index and search for it. Only 
admin does not work.

Does Solr 4.0 work differently from 3.6?
Please advise.
Thank you.
Lev Tannen

Info: Operating system --- Windows 7 enterprise
  Java---  jre6 or jdk6

Log:
C:\myWork\apache-solr-4.0.0\examplejava -jar start.jar
2012-10-31 11:44:44.526:INFO:oejs.Server:jetty-8.1.2.v20120308
2012-10-31 11:44:44.526:INFO:oejdp.ScanningAppProvider:Deployment monitor 
C:\myWork\apache-solr-4.0.0\example\contexts at interval 0
2012-10-31 11:44:44.542:INFO:oejd.DeploymentManager:Deployable added: 
C:\myWork\apache-solr-4.0.0\example\contexts\solr.xml
2012-10-31 11:44:45.290:INFO:oejw.StandardDescriptorProcessor:NO JSP Support 
for /solr, did not find org.apache.jasper.servlet.JspServlet
2012-10-31 11:44:45.322:INFO:oejsh.ContextHandler:started 
o.e.j.w.WebAppContext{/solr,file:/C:/myWork/apache-solr-4.0.0/example/solr-webapp/webapp/},C:\myWork\a
pache-solr-4.0.0\example/webapps/solr.war
2012-10-31 11:44:45.322:INFO:oejsh.ContextHandler:started 
o.e.j.w.WebAppContext{/solr,file:/C:/myWork/apache-solr-4.0.0/example/solr-webapp/webapp/},C:\myWork\a
pache-solr-4.0.0\example/webapps/solr.war
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: JNDI not configured for solr (NoInitialContextEx)
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader init
INFO: new SolrResourceLoader for deduced Solr Home: 'solr/'
Oct 31, 2012 11:44:45 AM org.apache.solr.servlet.SolrDispatchFilter init
INFO: SolrDispatchFilter.init()
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: JNDI not configured for solr (NoInitialContextEx)
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer$Initializer 
initialize
INFO: looking for solr.xml: C:\myWork\apache-solr-4.0.0\example\solr\solr.xml
Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer init
INFO: New CoreContainer 2091149
Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer load
INFO: Loading CoreContainer using Solr Home: 'solr/'
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader init
INFO: new SolrResourceLoader for directory: 'solr/'
Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer load
INFO: Registering Log Listener
Oct 31, 2012 11:44:45 AM org.apache.solr.core.CoreContainer create
INFO: Creating SolrCore 'collection1' using instanceDir: solr\collection1
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader init
INFO: new SolrResourceLoader for directory: 'solr\collection1\'
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrConfig initLibs
INFO: Adding specified lib dirs to ClassLoader
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader 
replaceClassLoader
INFO: Adding 
'file:/C:/myWork/apache-solr-4.0.0/contrib/extraction/lib/apache-mime4j-core-0.7.2.jar'
 to classloader
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader 
replaceClassLoader
INFO: Adding 
'file:/C:/myWork/apache-solr-4.0.0/contrib/extraction/lib/apache-mime4j-dom-0.7.2.jar'
 to classloader
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader 
replaceClassLoader
INFO: Adding 
'file:/C:/myWork/apache-solr-4.0.0/contrib/extraction/lib/bcmail-jdk15-1.45.jar'
 to classloader
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader 
replaceClassLoader
INFO: Adding 
'file:/C:/myWork/apache-solr-4.0.0/contrib/extraction/lib/bcprov-jdk15-1.45.jar'
 to classloader
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader 
replaceClassLoader
INFO: Adding 
'file:/C:/myWork/apache-solr-4.0.0/contrib/extraction/lib/boilerpipe-1.1.0.jar' 
to classloader
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader 
replaceClassLoader
INFO: Adding 
'file:/C:/myWork/apache-solr-4.0.0/contrib/extraction/lib/commons-compress-1.4.1.jar'
 to classloader
Oct 31, 2012 11:44:45 AM org.apache.solr.core.SolrResourceLoader 
replaceClassLoader
INFO: Adding 

Re: Anyone working on adapting AnalyzingQueryParser to solr?

2012-10-31 Thread Ahmet Arslan

 We are trying to work around this limitation:

 On wildcard and fuzzy searches, no text analysis is performed on the
 search word.

I think http://wiki.apache.org/solr/MultitermQueryAnalysis is a more elegant 
way to deal with this.




Re: Solr 4.0 admin panel

2012-10-31 Thread Péter Király
Dear Lev,

core0 is only available in the multicore environment. You should start Solr as

java -Dsolr.solr.home=multicore -jar start.jar

cheers,
Péter


Re: Solr 4.0 admin panel

2012-10-31 Thread James Ji
In Solr 4.0 the right address is http://localhost:8983/solr/.
http://localhost:8983/solr/admin is not mapped to anything if you check the
servlet configuration.

Cheers

James


Re: Eliminate or reduce fieldNorm as a consideration.

2012-10-31 Thread Jack Krupansky
Scoring or ranking of document relevancy is called similarity. You can 
create your own similarity class, or even have a field-specific similarity 
class.


See, for example:
http://lucene.apache.org/core/4_0_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html
http://lucene.apache.org/core/4_0_0/core/org/apache/lucene/search/similarities/Similarity.html

and

http://wiki.apache.org/solr/SchemaXml#Similarity
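A sketch of the schema.xml hookup described on that page; the class name here is a placeholder for your own implementation, not a real class:

```xml
<!-- global similarity for the whole schema; com.example.MySimilarity is hypothetical -->
<similarity class="com.example.MySimilarity"/>

<!-- or per field type -->
<fieldType name="text_custom" class="solr.TextField">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
  </analyzer>
  <similarity class="com.example.MySimilarity"/>
</fieldType>
```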

-- Jack Krupansky

-Original Message- 
From: Dotan Cohen

Sent: Wednesday, October 31, 2012 5:29 PM
To: solr-user@lucene.apache.org
Subject: Eliminate or reduce fieldNorm as a consideration.

I would like to lower or eliminate the contribution of the fieldNorm
on some searches. I figured that a LocalParam might help, but I cannot
find any documentation on it. Is there documentation on how to reduce
the consideration for tf, idf, fieldNorm, and coord? Where is that?

Thanks.

--
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com 



Re: Eliminate or reduce fieldNorm as a consideration.

2012-10-31 Thread Ahmet Arslan

 I would like to lower or eliminate the contribution of the fieldNorm
 on some searches. I figured that a LocalParam might help, but I cannot
 find any documentation on it. Is there documentation on how to reduce
 the consideration for tf, idf, fieldNorm, and coord? Where is that?

omitNorms=true|false

This is arguably an advanced option. Set to true to omit the norms associated 
with this field (this disables length normalization and index-time boosting for 
the field, and saves some memory). Only full-text fields or fields that need an 
index-time boost need norms. 

http://wiki.apache.org/solr/SchemaXml
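A minimal example of that schema setting (field and type names are illustrative):

```xml
<!-- disables length normalization (fieldNorm) and index-time boosts for this field -->
<field name="body" type="text_general" indexed="true" stored="true" omitNorms="true"/>
```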


Re: Eliminate or reduce fieldNorm as a consideration.

2012-10-31 Thread Dotan Cohen
On Wed, Oct 31, 2012 at 11:50 PM, Ahmet Arslan iori...@yahoo.com wrote:
 omitNorms=true|false

 This is arguably an advanced option. Set to true to omit the norms associated 
 with this field (this disables length normalization and index-time boosting 
 for the field, and saves some memory). Only full-text fields or fields that 
 need an index-time boost need norms. 

 http://wiki.apache.org/solr/SchemaXml

Thank you, but I am looking for a query-time modifier. I do need the
fieldNorm enabled in the general sense.

-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com


Re: Eliminate or reduce fieldNorm as a consideration.

2012-10-31 Thread Jack Krupansky
You could write a custom search component that checked for your desired 
request parameters, and then it could set them for a custom similarity 
class, which you would also have to write.


-- Jack Krupansky




Re: Eliminate or reduce fieldNorm as a consideration.

2012-10-31 Thread Dotan Cohen
On Wed, Oct 31, 2012 at 11:44 PM, Jack Krupansky
j...@basetechnology.com wrote:
 Scoring or ranking of document relevancy is called similarity. You can
 create your own similarity class, or even have a field-specific similarity
 class.

 See, for example:
 http://lucene.apache.org/core/4_0_0/core/org/apache/lucene/search/similarities/TFIDFSimilarity.html
 http://lucene.apache.org/core/4_0_0/core/org/apache/lucene/search/similarities/Similarity.html

 and

 http://wiki.apache.org/solr/SchemaXml#Similarity


Thank you Jack. That seems extraordinarily rigid, in the sense that
one could not apply score-computation coefficients on the fly. Surely
I'm not the first dev to run into an issue with the default scoring
algorithm and want to tweak it only on specific queries!

-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com


Re: Solr 4.0 admin panel

2012-10-31 Thread Chris Hostetter
: The right address to go is http://localhost:8983/solr/ on Solr 4.0.
: http://localhost:8983/solr/admin links to nothing if you go check the
: servlet.

For back compat, http://localhost:8983/solr/admin should automatically 
redirect to http://localhost:8983/solr/#/ -- regardless of whether you are in 
legacy single core mode or not (unless perhaps you have a Solr core named 
admin? ... haven't tried that).

:  administration panel. In Solr4.0 it returns just a general Apache Solr page
:  with dead links. Dead links means that when I click on them nothing
:  happends.

Can you elaborate on what exactly you are seeing?  A Jira issue with a 
screenshot attached would be helpful.  It would also be good to know which 
browser you are using; I think there are bugs with how the new UI 
JavaScript works in IE9 that have never been addressed, because the 
relative usage of IE by Solr users is so low there was no strong push to 
invest time in trying to figure them out.  Yeah, here's the issue...

https://issues.apache.org/jira/browse/SOLR-3876

-Hoss


Re: Eliminate or reduce fieldNorm as a consideration.

2012-10-31 Thread Dotan Cohen
On Thu, Nov 1, 2012 at 12:16 AM, Jack Krupansky j...@basetechnology.com wrote:
 You could write a custom search component that checked for your desired
 request parameters, and then it could set them for a custom similarity
 class, which you would also have to write.


Perhaps, but if I'm going that route I would have it recognize some
LocalParams (such as omitNorms=true right there) to be flexible at
query time. I'm actually surprised that this doesn't yet exist.

Thanks.

-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com


Re: Anyone working on adapting AnalyzingQueryParser to solr?

2012-10-31 Thread balaji.gandhi
Hi iorixxx, this is how our email field is defined.

<fieldType name="text_email" class="solr.TextField"
           positionIncrementGap="100">
  <analyzer>
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.PatternReplaceFilterFactory"
            pattern="\." replacement=" DOT " replace="all"/>
    <filter class="solr.PatternReplaceFilterFactory" pattern="@"
            replacement=" AT " replace="all"/>
    <filter class="solr.WordDelimiterFilterFactory"
            generateWordParts="1" generateNumberParts="1"
            catenateWords="0" catenateNumbers="0"
            catenateAll="0" splitOnCaseChange="0"/>
  </analyzer>
  <analyzer type="multiterm">
    <tokenizer class="solr.KeywordTokenizerFactory"/>
  </analyzer>
</fieldType>

So a query like [emailAddress : bob*] would match b...@bob.com, but queries
which include the special character, like [bob@*] and [bob@bob.*], will not
match any email addresses.

Yes, I tried the multiterm analyzer and it does not fix the issue. Any thoughts?





Re: solr blocking on commit

2012-10-31 Thread dbabits
I second the original poster- all selects are blocked during commits.
I have Master replicating to Slave.
Indexing happens to Master, a few docs about every 30 secs.
Selects are run against Slave.

This is the pattern from the Slave log:

Oct 30, 2012 12:33:23 AM org.apache.solr.core.SolrDeletionPolicy
updateCommits
INFO: newest commit = 1349195567630
Oct 30, 2012 12:33:42 AM org.apache.solr.core.SolrCore execute
INFO: [core3] webapp=/solr path=/select

During the 19 seconds that you see between the 2 lines, the /select is
blocked, until the commit is done.
This has nothing to do with the JVM; I'm monitoring the memory and GC stats with
jConsole and the log.
I played with all settings imaginable: commitWithin, commit=true,
useColdSearcher, autowarming settings from 0 up; nothing helps.

The environment is: 3.6.0, RHEL Linux 5.3.2, 64-bit, 96G RAM, 6 CPU cores,
java 1.6.0_24, ~70 million docs.
As soon as I suspend replication (command=disablepoll), everything becomes
fast.
As soon as I enable it, it pretty much becomes useless.
Querying Master directly exhibits the same problem, of course.

Thanks a lot for your help.
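For readers following along: the warming-related settings mentioned above live in solrconfig.xml. A sketch with illustrative values only (these show where the knobs are, not recommended numbers):

```xml
<!-- solrconfig.xml sketch: settings referenced above -->
<query>
  <!-- true = serve requests from a still-warming searcher instead of blocking -->
  <useColdSearcher>false</useColdSearcher>
  <!-- autowarmCount controls how many cache entries are regenerated
       when a new searcher opens after a commit/replication -->
  <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
  <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
</query>
```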





Re: After adding field to schema, the field is not being returned in results.

2012-10-31 Thread Otis Gospodnetic
Hi,

That should work just fine.  It's either a bug or you are doing something
you didn't mention.  Maybe you can provide a small, self-contained unit test
and stick it in JIRA?

Otis
--
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring - http://sematext.com/spm/index.html


On Wed, Oct 31, 2012 at 8:42 PM, Dotan Cohen dotanco...@gmail.com wrote:

 I had stopped Solr 4.0, added a new stored and indexed field to
 schema.xml, and then restarted Solr. I see that my application is
 adding documents and I can query for and retrieve documents just fine.
 However, the new field is not being returned in the XML. I can see
 that if I try to sort on the field I get no error, whereas if I try to
 sort on a nonexistent field I do get an error, so it seems that Solr
 does recognize the new field as actually existing. However even for
 new documents I cannot get the field added to my XML output from
 queries to Solr, even if the field is explicitly requested with 'fl'.

 What must I do to have Solr return this new field in the XML output?

 Thanks.

 --
 Dotan Cohen

 http://gibberish.co.il
 http://what-is-what.com



Re: After adding field to schema, the field is not being returned in results.

2012-10-31 Thread Alexandre Rafalovitch
And - just to get stupid options out of the way - you don't have any
parameters defined on the handlers that may list the fields to return?

Regards,
   Alex.
Personal blog: http://blog.outerthoughts.com/
LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch
- Time is the quality of nature that keeps events from happening all at
once. Lately, it doesn't seem to be working.  (Anonymous  - via GTD book)





Re: After adding field to schema, the field is not being returned in results.

2012-10-31 Thread Dotan Cohen
On Thu, Nov 1, 2012 at 2:52 AM, Otis Gospodnetic
otis.gospodne...@gmail.com wrote:
 Hi,

 That should work just fine.  It;s either a bug or you are doing something
 you didn't mention.  Maybe you can provide a small, self-enclosed unit test
 and stick it in JIRA?


I would assume that it's me doing something wrong! How does this look:

/solr/select?q=*:*&rows=1&sort=created_iso8601%20desc&fl=created_iso8601,created

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">1</int>
    <lst name="params">
      <str name="q">*:*</str>
      <str name="rows">1</str>
      <str name="fl">created_iso8601,created</str>
    </lst>
  </lst>
  <result name="response" numFound="1037937" start="0">
    <doc>
      <int name="created">1350854389</int>
    </doc>
  </result>
</response>

Surely the sort parameter would throw an error if the
created_iso8601 field did not exist. That field is indexed and stored,
with no parameters defined on handlers that may list the fields to
return, as Alexandre had mentioned.


-- 
Dotan Cohen

http://gibberish.co.il
http://what-is-what.com


SOLRJ - Error while using CommonsHttpSolrServer

2012-10-31 Thread Jegannathan Mehalingam
- I tried using HttpSolrServer, but had some problems, and some news 
groups mentioned that it is buggy and I should be using 
CommonsHttpSolrServer. So I am using CommonsHttpSolrServer, but neither 
approach works.

- I have tried using SOLR 4.0 as well as 3.6.1. I get errors in both cases.
- I started the SOLR server from C:\apche_solr\apache-solr-4.0.0\example 
using the command: java -Dsolr.solr.home=./example-DIH/solr/ -jar 
start.jar 1>logs/dih.log 2>&1


Here is my code which uses CommonsHttpSolrServer:

String url = "http://localhost:8983/solr/#/solr/update/";

  try {

  CommonsHttpSolrServer server = new CommonsHttpSolrServer( url );
  server.setSoTimeout(1000);  // socket read timeout
server.setConnectionTimeout(100);
server.setDefaultMaxConnectionsPerHost(100);
server.setMaxTotalConnections(100);
server.setFollowRedirects(false);  // defaults to false
server.setAllowCompression(false);
  server.setMaxRetries(1); // defaults to 0.   1 not recommended.
  server.setParser(new XMLResponseParser()); // binary parser is used 
by default


SolrInputDocument doc2 = new SolrInputDocument();
doc2.addField( OR, 119.96);
doc2.addField( M, 1);
doc2.addField( PT, 94.5946);
doc2.addField( PY , 12);
doc2.addField( LR , 118);
doc2.addField( PC, FST 04/26/12);
doc2.addField( SN , OTHER);
doc2.addField( CE, 2012-05-04T04:00:00Z);
doc2.addField( VC, 184);
doc2.addField( VR, 24563);
doc2.addField( PR, 4539673);
doc2.addField( SE, 2012-04-30T04:00:00Z);
doc2.addField( IE, 2012-05-08T04:00:00Z);
doc2.addField( ZR, 4539673);
doc2.addField( PNT, 94.61079);
doc2.addField( GY, 17);
doc2.addField( id, 111_111_1_11_2);
doc2.addField( PAR, 359.88);
doc2.addField( CTE, 2012-04-26T04:00:00Z);
doc2.addField( LE, PDS LOCATION);
doc2.addField( DR, 691);
doc2.addField( OY, 4);
doc2.addField( SS, 10);

Collection<SolrInputDocument> docs = new ArrayList<SolrInputDocument>();
docs.add( doc2 );
UpdateResponse resp = server.add( docs );
System.out.println(resp.getResponse());
server.commit(true, true);

}
catch (Exception e) {
System.out.println(e);
e.printStackTrace();
}


When I run this code, I get the following exception:

org.apache.solr.common.SolrException: parsing error
org.apache.solr.common.SolrException: parsing error
at 
org.apache.solr.client.solrj.impl.XMLResponseParser.processResponse(XMLResponseParser.java:143)
at 
org.apache.solr.client.solrj.impl.XMLResponseParser.processResponse(XMLResponseParser.java:104)
at 
org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:469)
at 
org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:249)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)

at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:69)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
at SolrIndexClient.indexTest(SolrIndexClient.java:117)
at SolrIndexClient.main(SolrIndexClient.java:139)
Caused by: java.lang.Exception: really needs to be <response> or <result>. 
 not: <html>
at 
org.apache.solr.client.solrj.impl.XMLResponseParser.processResponse(XMLResponseParser.java:134)



Any idea what is going on? I do not see anything in the log file. I am 
not sure if my request is even hitting the server. In our 
implementation, we need to update the indexes programmatically in 
near-real time. If we can't do this, then SOLR is unusable for us.


Thanks in advance,
Jegan Mehalingam