Re: solr search speed is so slow.

2012-02-11 Thread Jan Høydahl
Hi,

I got your private email but will reply in public for everyone's benefit.

I'm sure you can fine-tune various things to gain some milliseconds.
But my first and best advice to you is to increase the amount of RAM in your
computer. Your machine is weak and under-specced for search. I don't know your
application's use and business value, but this looks more like a desktop
computer than a production server, and you cannot really measure performance
on anything other than your target production environment.

You say your index size is 1.4GB, which will double to 2.8GB in the future. Put
8GB of RAM in your server and allocate 2GB to Java/Solr (-Xmx2g). The remaining
6GB will be used by Windows and for disk-caching your index. Your entire index
will then be cached in RAM and you are no longer dependent on your (slow) disk
for search.
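As a quick sanity check of the sizing above (the numbers are this thread's, not universal constants), the heap plus the page-cached index must fit inside physical RAM:

```shell
# Budget check using the thread's numbers: heap + page-cached index <= total RAM.
total_ram_gb=8
solr_heap_gb=2        # -Xmx2g
future_index_gb=3     # 2.8 GB, rounded up
cache_gb=$((total_ram_gb - solr_heap_gb))
echo "left for OS + disk cache: ${cache_gb} GB"
```

With 6GB left over, the whole 2.8GB index fits in the OS page cache with room to spare.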

Aside from this, have you profiled what takes the most time in your search? Try
adding debugQuery=true and look at the timing section at the bottom of the
response to see whether it's the query phase or perhaps the highlighting phase
which spends the most time. How large are your PDF docs (text, not binary) on
average? Also, newer versions of Solr may have optimizations for faster
highlighting.

Another thing about your requestHandler config: you use maxAnalyzedChars=-1.
The correct parameter name is hl.maxAnalyzedChars, and it only works with the
original highlighter, not with the FastVectorHighlighter.
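For reference, a corrected defaults entry might look like this (a sketch based on the config quoted below; the -1 value is kept as in the original, and it only takes effect if hl.useFastVectorHighlighter is false):

```xml
<!-- hl.maxAnalyzedChars (not maxAnalyzedChars) - honored by the original
     highlighter only, so hl.useFastVectorHighlighter must be false -->
<int name="hl.maxAnalyzedChars">-1</int>
```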

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Solr Training - www.solrtraining.com

On 10. feb. 2012, at 05:15, Rong Kang wrote:

 Thanks for your reply.
 
 I didn't use any other params except q (for example
 http://localhost:8080/solr/search?q=drugs): no facet, no sort.
 I don't think configuring newSearcher or firstSearcher can help, because I want
 every query to be fast. Do you have another solution?
 I think 460ms is too slow even for the first search of a term.
 
 
 My computer's specs:
 CPU: AMD 5000, 2.2GHz, 1 CPU with 2 cores
 Main memory: 2GB, 800MHz
 Disk drive: 7200 RPM
 
 This is my  full search configuration:
 
 
<requestHandler name="/search" class="org.apache.solr.handler.component.SearchHandler">
  <lst name="defaults">
    <str name="wt">xslt</str>
    <str name="tr">dismaxdoc.xsl</str>
    <int name="maxAnalyzedChars">-1</int>
    <str name="echoParams">all</str>
    <str name="indent">off</str>
    <str name="fl">filename</str>
    <int name="rows">10</int>
    <str name="defType">dismax</str>
    <str name="qf">filename^5.0 text^1.5</str>
    <str name="q.alt">*:*</str>
    <str name="hl">on</str>
    <str name="hl.fl">filename text</str>
    <bool name="hl.useFastVectorHighlighter">true</bool>
    <str name="hl.tag.pre"><![CDATA[<b style="color:red">]]></str>
    <str name="hl.tag.post"><![CDATA[</b>]]></str>
    <int name="hl.fragsize">100</int>
    <int name="f.filename.hl.fragsize">100</int>
    <str name="f.filename.hl.alternateField">filename</str>
    <int name="f.text.hl.fragsize">100</int>
    <int name="f.text.hl.snippets">3</int>
  </lst>
</requestHandler>
 
 
 and my schema.xml 
 
 
<fields>
  <field name="text" type="text" indexed="true" multiValued="true"
         termVectors="true" termPositions="true" termOffsets="true"/>
  <field name="filename" type="filenametext" indexed="true" required="true"
         termVectors="true" termPositions="true" termOffsets="true"/>
  <field name="id" type="string" stored="true"/>
</fields>
<defaultSearchField>text</defaultSearchField>
<uniqueKey>id</uniqueKey>
<copyField source="filename" dest="text"/>
 
 
 and 
 
 
<fieldType name="filenametext" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
            generateNumberParts="1" catenateWords="1" catenateNumbers="1"
            catenateAll="0" splitOnCaseChange="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt"
            ignoreCase="true" expand="true"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
            generateNumberParts="1" catenateWords="0" catenateNumbers="0"
            catenateAll="0" splitOnCaseChange="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
</fieldType>
<fieldType name="text" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.WhitespaceTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true"
            words="stopwords.txt"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1"
            generateNumberParts="1" catenateWords="1" catenateNumbers="1"
            catenateAll="0" splitOnCaseChange="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter

Re: Setting up logging for a Solr project that isn't in tomcat/webapps/solr

2012-02-11 Thread Jan Høydahl
You can unpack your war (jar -xvf solr.war), change logging.properties, and then
pack it again (jar -cvf solr.war . from inside the unpacked directory).
You can also try specifying a new folder in a <lib .../> tag in solrconfig.xml
and putting your properties file there.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com
Solr Training - www.solrtraining.com

On 11. feb. 2012, at 01:07, Mike O'Leary wrote:

 I set up a Solr project to run with Tomcat for indexing contents of a 
 database by following a web tutorial that described how to put the project 
 directory anywhere you want and then put a file called projectname.xml in 
 the tomcat/conf/Catalina/localhost directory that contains contents like this:
 
<?xml version="1.0" encoding="utf-8"?>
<Context docBase="C:/projects/solr_apps/solr_db/solr.war" crossContext="true">
  <Environment name="solr/home" type="java.lang.String"
               value="C:/projects/solr_apps/solr_db" override="true"/>
</Context>
 
 I got this working, and now I would like to create a logging.properties file 
 for Solr only, as described in the Apache Solr Reference Guide distributed by 
 Lucid. It says:
 
 To change logging settings for Solr only, edit 
 tomcat/webapps/solr/WEB-INF/classes/logging.properties. You will need to 
 create the classes directory and the logging.properties file. You can set 
 levels from FINEST to SEVERE for a class or an entire package. Here are a 
 couple of examples:
 org.apache.commons.digester.Digester.level = FINEST
 org.apache.solr.level = WARNING
 
 I think this explanation assumes that the Solr project is in 
 tomcat/webapps/solr. I tried putting a logging.properties file in various 
 locations where I hoped Tomcat would pick it up, but none of them worked. If 
 I have a solr_db.xml file in tomcat/conf/Catalina/localhost that points to a 
 Solr project in C:/projects/solr_apps/solr_db (that was created by copying 
 the contents of the apache-solr-3.5.0/example/solr directory to 
 C:/projects/solr_apps/solr_db and going from there), where is the right place 
 to put a Solr-only logging.properties file?
 Thanks,
 Mike



Re: indexing with DIH (and with problems)

2012-02-11 Thread alessio crisantemi
Dear all,
I updated my Solr to version 3.5, but now I have this problem:
Grave: Full Import failed
org.apache.solr.handler.dataimport.DataImportHandlerException:
java.lang.NoSuchMethodError:
org.apache.solr.core.SolrResourceLoader.getClassLoader()Ljava/lang/ClassLoader;
 at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:424)
 at
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:242)
 at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:180)
 at
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:331)
 at
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:389)
 at
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:370)
Caused by: java.lang.NoSuchMethodError:
org.apache.solr.core.SolrResourceLoader.getClassLoader()Ljava/lang/ClassLoader;
 at
org.apache.solr.handler.dataimport.TikaEntityProcessor.firstInit(TikaEntityProcessor.java:72)
 at
org.apache.solr.handler.dataimport.EntityProcessorBase.init(EntityProcessorBase.java:59)
 at
org.apache.solr.handler.dataimport.EntityProcessorWrapper.init(EntityProcessorWrapper.java:71)
 at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:319)
 at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:383)
 ... 5 more
feb 11, 2012 12:27:26 PM org.apache.solr.update.DirectUpdateHandler2
rollback
Informazioni: start rollback
feb 11, 2012 12:27:26 PM org.apache.solr.update.DirectUpdateHandler2
rollback
Informazioni: end_rollback
feb 11, 2012 12:27:27 PM org.apache.solr.handler.dataimport.DataImporter
doFullImport
Informazioni: Starting Full Import
feb 11, 2012 12:27:27 PM org.apache.solr.core.SolrCore execute
Informazioni: [] webapp=/solr path=/select
params={clean=falsecommit=truecommand=full-importqt=/dataimport}
status=0 QTime=0
feb 11, 2012 12:27:27 PM org.apache.solr.handler.dataimport.SolrWriter
readIndexerProperties
Avvertenza: Unable to read: dataimport.properties
feb 11, 2012 12:27:28 PM org.apache.solr.handler.dataimport.DataImporter
doFullImport
Grave: Full Import failed
org.apache.solr.handler.dataimport.DataImportHandlerException:
java.lang.NoSuchMethodError:
org.apache.solr.core.SolrResourceLoader.getClassLoader()Ljava/lang/ClassLoader;
 at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:424)
 at
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:242)
 at
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:180)
 at
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:331)
 at
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:389)
 at
org.apache.solr.handler.dataimport.DataImporter$1.run(DataImporter.java:370)
Caused by: java.lang.NoSuchMethodError:
org.apache.solr.core.SolrResourceLoader.getClassLoader()Ljava/lang/ClassLoader;
 at
org.apache.solr.handler.dataimport.TikaEntityProcessor.firstInit(TikaEntityProcessor.java:72)
 at
org.apache.solr.handler.dataimport.EntityProcessorBase.init(EntityProcessorBase.java:59)
 at
org.apache.solr.handler.dataimport.EntityProcessorWrapper.init(EntityProcessorWrapper.java:71)
 at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:319)
 at
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:383)
 ... 5 more
feb 11, 2012 12:27:28 PM org.apache.solr.update.DirectUpdateHandler2
rollback
Informazioni: start rollback
feb 11, 2012 12:27:28 PM org.apache.solr.update.DirectUpdateHandler2
rollback
Informazioni: end_rollback
I don't know what to do.
Suggestions?
Best,
a.

2012/2/10 Gora Mohanty g...@mimirtech.com

 On 10 February 2012 04:15, alessio crisantemi
 alessio.crisant...@gmail.com wrote:
  hi all,
  I would like to index in Solr my PDF files, which are included in my directory
 c:\myfile\
 
  so, I added to my solr/conf directory the file data-config.xml, like the
  following:
 [...]

  but this is the result:
 [...]

 Your Solr URL for dataimport looks a little odd: You seem to be
 doing a delta-import. Normally, one would start with a full import:
 http://solr-host:port/solr/dataimport?command=full-import

 Have you looked in the Solr logs for the cause of the exception?
 Please share that with us.

 Regards,
 Gora



Re: Highlighting stopwords

2012-02-11 Thread O. Klein

Koji Sekiguchi wrote
 
 (12/01/24 9:31), O. Klein wrote:
 Let's say I search for spellcheck solr on a website that only contains
 info about Solr, so solr was added to the stopwords.txt. The query that
 will be parsed then (dismax) will not contain the term solr.

 So fragments won't contain highlights of the term solr. So when a
 fragment
 with the highlighted term spellcheck is generated, it would be less
 confusing for people who don't know how search engines work to also
 highlight the term solr.

 So my first test was to have a field with StopFilterFactory and search on
 that field, while using another field without StopFilterFactory to
 highlight
 on. This didn't do the trick.
 
 Are you saying that using hl.q parameter on highlight field while using q
 on
 the search field that has StopFilter and hl.q doesn't work for you?
 
 koji
 -- 
 http://www.rondhuit.com/en/
 

At first glance, using hl.q did the trick. I just have problems when I am using
terms with uppercase. Even though I use <filter class="solr.LowerCaseFilterFactory"/>
on the highlighted field in both query and index analyzers, I do get search
results, but just no highlights (lowercasing the terms fixes the problem).

Can someone confirm whether this is a bug?

Thank you. 



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Highlighting-stopwords-tp3681901p3734892.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: SolrCloud Replication Question

2012-02-11 Thread Mark Miller

On Feb 10, 2012, at 9:40 PM, Jamie Johnson wrote:

 
 
 how'd you resolve this issue?
 


I was basing my guess on seeing JamiesMac.local and jamiesmac in your first 
cluster state dump - your latest doesn't seem to mismatch like that though.

- Mark Miller
lucidimagination.com


Help with MMapDirectoryFactory in 3.5

2012-02-11 Thread Bill Bell
 I am using Solr 3.5.

I noticed in solrconfig.xml:

<directoryFactory name="DirectoryFactory"
                  class="${solr.directoryFactory:solr.StandardDirectoryFactory}"/>

I don't see this parameter taking effect when I set
-Dsolr.directoryFactory=solr.MMapDirectoryFactory

How do I see the setting in the log or in stats.jsp ? I cannot find a place
that indicates it is set or not.

I would assume StandardDirectoryFactory is being used, but I see the following
whether I set it or not:

name: searcher  class: org.apache.solr.search.SolrIndexSearcher  version: 1.0  description: index searcher
stats:
searcherName : Searcher@71fc3828 main
caching : true
numDocs : 2121163
maxDoc : 2121163
reader : SolrIndexReader{this=1867ec28,r=ReadOnlyDirectoryReader@1867ec28,refCnt=1,segments=1}
readerDir : org.apache.lucene.store.MMapDirectory@C:\solr\jetty\example\solr\providersearch\data\index
lockFactory=org.apache.lucene.store.NativeFSLockFactory@45c1cfc1
indexVersion : 1324594650551
openedAt : Sat Feb 11 09:49:31 MST 2012
registeredAt : Sat Feb 11 09:49:31 MST 2012
warmupTime : 0

Also, how do I set unmap, and what is the purpose of chunkSize?
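One way to take the system property out of the equation (a sketch: it hard-codes the factory instead of relying on the ${...} substitution quoted above, so there is no doubt which one is loaded):

```xml
<!-- Hard-coded choice - no -Dsolr.directoryFactory needed -->
<directoryFactory name="DirectoryFactory" class="solr.MMapDirectoryFactory"/>
```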




Re: Help with MMapDirectoryFactory in 3.5

2012-02-11 Thread Bill Bell
Also, does someone have an example of using unmap in 3.5 and chunksize?

From:  Bill Bell billnb...@gmail.com
Date:  Sat, 11 Feb 2012 10:39:56 -0700
To:  solr-user@lucene.apache.org
Subject:  Help with MMapDirectoryFactory in 3.5





Distributed search: RequestHandler

2012-02-11 Thread ku3ia
Hi!

I'm using Solr 3.5. I have two shards. Currently I use the default handler plus
my own request handler to search across these shards:

<requestHandler name="distributed" class="solr.SearchHandler" default="false">
  <lst name="defaults">
    <str name="shards">192.168.1.1:8080/solr,192.168.1.2:8080/solr</str>
  </lst>
</requestHandler>

So, the URLs I have:
http://192.168.1.1:8080/solr/select/?q=test&rows=0&qt=distributed
{"responseHeader":{"status":0,"QTime":1},"response":{"numFound":20,"start":0,"docs":[]}}

http://192.168.1.1:8080/solr/select/?q=test&rows=0
{"responseHeader":{"status":0,"QTime":1},"response":{"numFound":13,"start":0,"docs":[]}}

http://192.168.1.2:8080/solr/select/?q=test&rows=0
{"responseHeader":{"status":0,"QTime":1},"response":{"numFound":7,"start":0,"docs":[]}}

Can I configure my solrconfig so that the default search handler does this? For
example:

<requestHandler name="search" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <int name="rows">10</int>
  </lst>
  <lst name="appends">
    <str name="qt">distributed</str>
  </lst>
</requestHandler>

<requestHandler name="standalone" class="solr.SearchHandler" default="false"/>

<requestHandler name="distributed" class="solr.SearchHandler" default="false">
  <lst name="defaults">
    <str name="shards">192.168.1.1:8080/solr,192.168.1.2:8080/solr</str>
    <str name="qt">standalone</str>
  </lst>
</requestHandler>

but unfortunately this doesn't work :-(

The goal is to use only one core on each server and to avoid the qt parameter
in requests:
http://192.168.1.1:8080/solr/select/?q=test&rows=0
{"responseHeader":{"status":0,"QTime":1},"response":{"numFound":20,"start":0,"docs":[]}}

P.S. No SolrCloud, only Solr 3.5.
P.P.S. Maybe it can be configured for core at solr.xml file?

Thanks.

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Distributed-search-RequestHandler-tp3735621p3735621.html
Sent from the Solr - User mailing list archive at Nabble.com.
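One configuration that is sometimes used for this kind of setup (a sketch, not a tested answer: it assumes shards.qt is honored in 3.5, so that the per-shard sub-requests go to a plain non-distributed handler instead of recursing back into the distributed one):

```xml
<requestHandler name="search" class="solr.SearchHandler" default="true">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <int name="rows">10</int>
    <str name="shards">192.168.1.1:8080/solr,192.168.1.2:8080/solr</str>
    <!-- send the fan-out sub-requests to the plain handler below -->
    <str name="shards.qt">standalone</str>
  </lst>
</requestHandler>

<requestHandler name="standalone" class="solr.SearchHandler" default="false"/>
```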


Re: SolrCloud Replication Question

2012-02-11 Thread Jamie Johnson
I wiped the zk and started over (when I switch networks I get
different host names and honestly haven't dug into why).  That being
said the latest state shows all in sync, why would the cores show up
as down?

On Sat, Feb 11, 2012 at 11:08 AM, Mark Miller markrmil...@gmail.com wrote:

 On Feb 10, 2012, at 9:40 PM, Jamie Johnson wrote:



 how'd you resolve this issue?



 I was basing my guess on seeing JamiesMac.local and jamiesmac in your 
 first cluster state dump - your latest doesn't seem to mismatch like that 
 though.

 - Mark Miller
 lucidimagination.com


Re: SolrCloud Replication Question

2012-02-11 Thread Mark Miller

On Feb 11, 2012, at 3:08 PM, Jamie Johnson wrote:

 I wiped the zk and started over (when I switch networks I get
 different host names and honestly haven't dug into why).  That being
 said the latest state shows all in sync, why would the cores show up
 as down?


If recovery fails X times (say, because the leader can't be reached from the
replica), a node is marked as down. It can't be active, and technically it has
stopped trying to recover (it tries X times and eventually gives up until you
restart it).

Side note, I recently ran into this issue: SOLR-3122 - fix coming soon. Not 
sure if you have looked at your logs or not, but perhaps it's involved.

- Mark Miller
lucidimagination.com


SolrJ + SolrCloud

2012-02-11 Thread Darren Govoni
Hi,
  Do all the normal facilities of Solr work with SolrCloud from SolrJ?
Things like /mlt, /cluster, facets, tvf's, etc.

Darren



boost question. need boost to take a query like bq

2012-02-11 Thread Bill Bell


We like the boost parameter in Solr 3.5 with eDismax.

We would like to replace bq with boost, but we run into the multi-valued field
issue when we try the equivalent queries:
HTTP ERROR 400
Problem accessing /solr/providersearch/select. Reason:
can not use FieldCache on multivalued field: specialties_ids


q=*:*&bq=multi_field:87^2&defType=dismax

How do you do this using boost?

q=*:*&boost=multi_field:87&defType=edismax

We know we can use bq with edismax, but we like the multiply feature of
boost.

If I change it to a single-valued field I get results, but they all score 1.0:

<str name="YFFL5">
1.0 = (MATCH) MatchAllDocsQuery, product of:
  1.0 = queryNorm
</str>

q=*:*&boost=single_field:87&defType=edismax  // this works, but I need it on a
multi-valued field






FW: boost question. need boost to take a query like bq

2012-02-11 Thread Bill Bell


I did find a solution, but the output is horrible. Why does the explain output
look so bad?

<lst name="explain">
  <str name="2H7DF">
6.351252 = (MATCH) boost(*:*,query(specialties_ids: #1;#0;#0;#0;#0;#0;#0;#0;#0; ,def=0.0)), product of:
  1.0 = (MATCH) MatchAllDocsQuery, product of:
    1.0 = queryNorm
  6.351252 = query(specialties_ids: #1;#0;#0;#0;#0;#0;#0;#0;#0; ,def=0.0)=6.351252
  </str>
</lst>

defType=edismax&boost=query($param)&param=multi_field:87
--








Re: SolrJ + SolrCloud

2012-02-11 Thread Mark Miller

On Feb 11, 2012, at 6:02 PM, Darren Govoni wrote:

 Hi,
  Do all the normal facilities of Solr work with SolrCloud from SolrJ?
 Things like /mlt, /cluster, facets , tvf's, etc.
 
 Darren
 


SolrJ works the same in SolrCloud mode as it does in non-SolrCloud mode - it's
fully supported. There is even a new SolrJ client called CloudSolrServer that
has built-in cluster awareness and load balancing.

In terms of what is supported - anything that is supported with distributed 
search - that is most things, but there is the odd man out - like MLT - looks 
like an issue is open here: https://issues.apache.org/jira/browse/SOLR-788 but 
it's not resolved yet.

- Mark Miller
lucidimagination.com


Re: Highlighting stopwords

2012-02-11 Thread Koji Sekiguchi

(12/02/11 21:19), O. Klein wrote:


Koji Sekiguchi wrote


(12/01/24 9:31), O. Klein wrote:

Let's say I search for spellcheck solr on a website that only contains
info about Solr, so solr was added to the stopwords.txt. The query that
will be parsed then (dismax) will not contain the term solr.

So fragments won't contain highlights of the term solr. So when a
fragment
with the highlighted term spellcheck is generated, it would be less
confusing for people who don't know how search engines work to also
highlight the term solr.

So my first test was to have a field with StopFilterFactory and search on
that field, while using another field without StopFilterFactory to
highlight
on. This didn't do the trick.


Are you saying that using hl.q parameter on highlight field while using q
on
the search field that has StopFilter and hl.q doesn't work for you?

koji
--
http://www.rondhuit.com/en/



At first glance, using hl.q did the trick. I just have problems when I am using
terms with uppercase. Even though I use <filter class="solr.LowerCaseFilterFactory"/>
on the highlighted field in both query and index analyzers, I do get search
results, but just no highlights (lowercasing the terms fixes the problem).

Can someone confirm whether this is a bug?


I don't quite follow your situation. Giving us concrete examples (especially
request parameters, including q and hl.q) would help a lot!

koji
--
http://www.rondhuit.com/en/


Re: SolrCloud Replication Question

2012-02-11 Thread Jamie Johnson
I didn't see anything in the logs, would it be an error?

On Sat, Feb 11, 2012 at 3:58 PM, Mark Miller markrmil...@gmail.com wrote:

 On Feb 11, 2012, at 3:08 PM, Jamie Johnson wrote:

 I wiped the zk and started over (when I switch networks I get
 different host names and honestly haven't dug into why).  That being
 said the latest state shows all in sync, why would the cores show up
 as down?


 If recovery fails X times (say because the leader can't be reached from the 
 replica), a node is marked as down. It can't be active, and technically it 
 has stopped trying to recover (it tries X times and eventually give up until 
 you restart it).

 Side note, I recently ran into this issue: SOLR-3122 - fix coming soon. Not 
 sure if you have looked at your logs or not, but perhaps it's involved.

 - Mark Miller
 lucidimagination.com



Re: SolrCloud Replication Question

2012-02-11 Thread Mark Miller
Yeah, that is what I would expect - for a node to be marked as down, it either
didn't finish starting or it gave up recovering; either case should be logged.
You might try searching for the recover keyword and see if there are any
interesting bits around that.

Meanwhile, I have dug up a couple issues around recovery and committed fixes to 
trunk - still playing around...
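The suggested log search, demonstrated on a stand-in log file (the file name and contents here are invented; point grep at wherever your container actually writes the Solr log):

```shell
# Stand-in log file; a real install logs wherever java.util.logging is configured to write.
printf 'INFO: start recovery\nWARN: recovery failed, giving up\n' > solr-sample.log
# Case-insensitive search with line numbers, as suggested above.
grep -in "recover" solr-sample.log
```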

On Feb 11, 2012, at 8:44 PM, Jamie Johnson wrote:

 I didn't see anything in the logs, would it be an error?
 
 On Sat, Feb 11, 2012 at 3:58 PM, Mark Miller markrmil...@gmail.com wrote:
 
 On Feb 11, 2012, at 3:08 PM, Jamie Johnson wrote:
 
 I wiped the zk and started over (when I switch networks I get
 different host names and honestly haven't dug into why).  That being
 said the latest state shows all in sync, why would the cores show up
 as down?
 
 
 If recovery fails X times (say because the leader can't be reached from the 
 replica), a node is marked as down. It can't be active, and technically it 
 has stopped trying to recover (it tries X times and eventually give up until 
 you restart it).
 
 Side note, I recently ran into this issue: SOLR-3122 - fix coming soon. Not 
 sure if you have looked at your logs or not, but perhaps it's involved.
 
 - Mark Miller
 lucidimagination.com
 
 
 
 
 
 
 
 
 
 
 

- Mark Miller
lucidimagination.com


Re: indexing with DIH (and with problems)

2012-02-11 Thread Shawn Heisey

On 2/11/2012 4:33 AM, alessio crisantemi wrote:

dear all,
I update my solr at 3.5 version but now I have this problem:

Grave: Full Import failed
org.apache.solr.handler.dataimport.DataImportHandlerException:
java.lang.NoSuchMethodError:


The data import handler has always been a contrib module, but it used to be
included in the .war file. That has changed; now it's in separate jar files.


When you downloaded or compiled 3.5.0, the dist directory should have contained
dataimporthandler and dataimporthandler-extras jar files. Mine, which I
compiled myself from the 3.5 svn branch, are named as follows:


apache-solr-dataimporthandler-3.5-SNAPSHOT.jar
apache-solr-dataimporthandler-extras-3.5-SNAPSHOT.jar

At minimum, put the first jar file in a lib folder referenced in your
solrconfig.xml file. I couldn't tell you whether you'll need the -extras file
as well; you'll have to experiment.
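In solrconfig.xml that reference might look like this (a sketch; the dir and regex values assume the stock download layout and will need adjusting to your actual paths):

```xml
<!-- Load the DIH jars from the dist/ directory of the Solr download -->
<lib dir="../../dist/" regex="apache-solr-dataimporthandler-.*\.jar" />
```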


Thanks,
Shawn