Re: DisMax search

2011-10-27 Thread jyn7
Sorry, my bad :(. Thanks for the help; it worked. I had completely overlooked the
defType parameter.
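
For the archives, the working request looks something like this (the field name here is just an example):

  q=9065&defType=dismax&qf=product_code&start=0&rows=10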



Re: DisMax search

2011-10-26 Thread jyn7
I am searching for 9065, so it is not a case-sensitivity issue. The search is
matching across all the fields instead of being limited to the one field
specified in the qf param (I am using deftype dismax).



Exact Match using Copy Fields

2011-08-18 Thread jyn7
Hi,

I am trying to achieve an exact-match search on a text field. I am using a
copyField to copy it into a string field and searching against that.

  <field name="imprint" type="text" indexed="true" stored="true"/>
  <field name="author" type="text" indexed="true" stored="true"/>
  <field name="author_exact" type="string" indexed="true" stored="false"/>
  <field name="imprint_exact" type="string" indexed="true" stored="false"/>

  <copyField source="author" dest="author_exact"/>
  <copyField source="imprint" dest="imprint_exact"/>

Now I want to do an exact match on the imprint field. I am searching with the
request below, but the results are not limited to imprint_exact; I even get
results where author_exact contains the queried value:

  facet=true&qf=imprint_exact&fl=*,score&fq=published_on:[* TO NOW]&q=Cris Williamson&start=0&rows=10

Can anyone help me correct this?
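
My current guess is that qf only takes effect with defType=dismax, and that
with the standard parser I would instead filter directly on the string field,
something like this (I have not confirmed this is right):

  q=*:*&fq=imprint_exact:"Cris Williamson"&fq=published_on:[* TO NOW]&facet=true&fl=*&start=0&rows=10

Is that the right approach?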

Thanks.



Query Results Differ

2011-06-24 Thread jyn7
Hi,

I am trying to understand why the two queries below return different results.
They look similar to me; can someone help me understand the difference in the
results?

Query 1:
facet=true&q=time&fq=supplierid:1001&start=0&rows=10&sort=published_on desc

Query 2:
facet=true&q=time&fq=supplierid:1001+published_on:[* TO NOW]&start=0&rows=10&sort=published_on desc

The first query returns only 44 rows while the second one returns 200,000
rows. When I don't have the filter on published_on, I assume Solr should
return all the results with supplierid 1001, so Query 1 should have returned
more results (or at least the same number) than the second query.

Thanks.


Re: Query Results Differ

2011-06-24 Thread jyn7
So if I use a second fq parameter, will Solr apply an AND across both fq
parameters?
I have multiple indexed fields, so when I search for q=time, does Solr
return results with "time" in any of the indexed fields? Sorry for the silly
questions.
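
For example, is this the right way to express both filters so they intersect?

  facet=true&q=time&fq=supplierid:1001&fq=published_on:[* TO NOW]&start=0&rows=10&sort=published_on desc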




Re: Query Results Differ

2011-06-24 Thread jyn7
Thanks Stefan.



Re: Solr -- Out of Memory exception

2011-06-17 Thread jyn7
I did that, but when I split the data into files of 5 million records each,
the first file went through fine; when I started processing the second file,
Solr hit an OOM again:
org.apache.solr.common.SolrException log
SEVERE: java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.<init>(FreqProxTermsWriterPerField.java:184)
        at org.apache.lucene.index.FreqProxTermsWriterPerField$FreqProxPostingsArray.newInstance(FreqProxTermsWriterPerField.java:194)
        at org.apache.lucene.index.ParallelPostingsArray.grow(ParallelPostingsArray.java:48)
        at org.apache.lucene.index.TermsHashPerField.growParallelPostingsArray(TermsHashPerField.java:137)
        at org.apache.lucene.index.TermsHashPerField.add(TermsHashPerField.java:440)
        at org.apache.lucene.index.DocInverterPerField.processFields(DocInverterPerField.java:169)
        at org.apache.lucene.index.DocFieldProcessorPerThread.processDocument(DocFieldProcessorPerThread.java:248)
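
For context, I split the original file with the standard split utility,
roughly like this (the filename is an example):

  split -l 5000000 records.csv records_part_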



Re: Solr -- Out of Memory exception

2011-06-17 Thread jyn7
I commented out the autoCommit option and tried uploading the file (a smaller
file now, 5 million records) and hit an OOM again:

Jun 17, 2011 2:32:59 PM org.apache.solr.common.SolrException log
SEVERE: java.lang.OutOfMemoryError: Java heap space
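
For reference, this is roughly how I commented it out in solrconfig.xml (same
values as in my original post below):

  <!--
  <autoCommit>
    <maxDocs>1</maxDocs>
    <maxTime>6</maxTime>
  </autoCommit>
  -->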



Solr -- Out of Memory exception

2011-06-16 Thread jyn7
We just started using Solr. I am trying to load a single file with 20 million
records into Solr using the CSV uploader. I keep getting an Out of Memory
error after loading 7 million records. Here is the config:

<autoCommit>
  <maxDocs>1</maxDocs>
  <maxTime>6</maxTime>
</autoCommit>
I also encountered a LockObtainFailedException:

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out:
NativeFSLock@D:\work\solr\.\data\index\write.lock
        at org.apache.lucene.store.Lock.obtain(Lock.java:84)
        at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1097)

So I changed the lockType to single; now I am again getting an Out of Memory
exception. I also increased the JVM heap space to 2048M but am still getting
an Out of Memory.
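
For reference, this is how I am setting the heap (assuming the stock Jetty
start.jar; the -Xmx flag is the relevant part):

  java -Xmx2048m -jar start.jar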



Re: Solr -- Out of Memory exception

2011-06-16 Thread jyn7
Yes Eric, after changing the lock type to single, I got an OOM after loading
5.5 million records. I am using the curl command to upload the CSV.
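
The command is along these lines (host, core path, and filename are examples):

  curl 'http://localhost:8983/solr/update/csv?commit=true' --data-binary @records.csv -H 'Content-type: text/csv; charset=utf-8'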
