Re: Problem with facet.fields

2012-01-05 Thread Marc SCHNEIDER
Hello,

Thanks a lot for your answers.

Sorry, I typed it wrong; it was:
q=*:*&facet=true&facet.field=foo&facet.field=lom.classification.ddc.id
which caused an error.

That said, I added echoParams to the request and only got:
<str name="facet.field">lom.classification.ddc.id</str>

So multivalued URL params are not taken into account.
I'm using Jetty and Solrj with EmbeddedSolrServer implementation.
Trying it using the normal http version does work, so you're right
it's a problem with the client library.

Any idea why it would refuse multivalued parameters?
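
The failure mode Hoss suggests below - a client collapsing repeated URL parameters into a plain map - is easy to demonstrate outside of Solr. A minimal Python sketch (illustrative only; neither SolrJ nor EmbeddedSolrServer is involved here):

```python
from urllib.parse import parse_qs

query = "q=*:*&facet=true&facet.field=foo&facet.field=lom.classification.ddc.id"

# parse_qs keeps every occurrence of a repeated key...
multi = parse_qs(query)
print(multi["facet.field"])  # ['foo', 'lom.classification.ddc.id']

# ...while a naive dict built from key=value pairs keeps only the last one,
# which is exactly what a broken client library would end up sending to Solr.
naive = dict(pair.split("=", 1) for pair in query.split("&"))
print(naive["facet.field"])  # 'lom.classification.ddc.id'
```

Whatever the client does internally, echoParams on the Solr side shows which of the two behaviors actually reached the server.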

Marc.

On Wed, Jan 4, 2012 at 9:23 PM, Chris Hostetter
hossman_luc...@fucit.org wrote:


 : If I put : q=*:*&facet=true&facet.field=lom.classification.ddc.id
 : => I have results for facet fields
 : If I put : q=*:*&facet=true&facet.field=lom.educational.context
 : => I have results for facet fields
 :
 : But if I put : q=*:*&facet=true&facet.field=lom.classification.ddc.id
 : &facet.field=lom.educational.context
 :
 : I have only facet results for the first field. I tried to invert them and
 : also got only results for the first field. It is like the second field
 : was ignored.

 How are you doing these queries?  Are you using some sort of client
 library via some language, or are you pasting those URL snippets into a
 browser?  What does the responseHeader section of the results look like
 if you add echoParams=all to the URL?  What servlet container are you using?

 My suspicion is that you are using some client library that doesn't
 understand multivalued URL params and is just putting them into a map, the
 responseHeader w/echoParams turned on will tell you exactly what Solr is
 getting.

 I just tried this using the example schema in Solr 3.5 and it worked
 fine...

 http://localhost:8983/solr/select?echoParams=all&q=*:*&facet=true&facet.field=lom.classification.ddc.id_s&facet.field=lom.educational.context_s

 : Furthermore I tried :
 : q=*:*&facet=true&facet.field=lom.classification.ddc.id&facet.field=foo
 : => it works although the 'foo' field doesn't exist
 : q=*:*&facet=true&facet.field=lom.classification.ddc.id&facet.field=foo
 : => gives me an error telling me the 'foo' field doesn't exist.

 i don't understand these last examples at all ... both of those lines are
 identical


 -Hoss


Re: How to index documents in SOLR running in Windows XP environment

2012-01-05 Thread dsy99
Dear Gora and all,
Thank you very much for replying.
My question is how to index documents (.xml, .pdf, .doc files) in Solr. I
was trying to use curl but it is not working in a Windows XP environment.
Does any one of you have a ready-made program/DIH that I can use to index
these types of files?

Regds:
Divakar

--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-index-documents-in-SOLR-running-in-Window-XP-envronment-tp3632488p3634507.html
Sent from the Solr - User mailing list archive at Nabble.com.


Intermittent connection timeouts to Solr server using SolrNet

2012-01-05 Thread Ian Grainger
Hi - I have also posted this question on SO:
http://stackoverflow.com/questions/8741080/intermittent-connection-timeouts-to-solr-server-using-solrnet


I have a production webserver hosting a search webpage, which uses SolrNet
to connect to another machine which hosts the Solr search server (on a
subnet which is in the same room, so no network problems). All is fine 90%
of the time, but I consistently get a small number of "The operation has
timed out" errors.

I've increased the timeout in the SolrNet init to *30* seconds (!)

SolrNet.Startup.Init<SolrDataObject>(
    new SolrNet.Impl.SolrConnection(
        System.Configuration.ConfigurationManager.AppSettings["URL"]
    ) { Timeout = 30000 }
);

...but all that happened is I started getting this message instead of "Unable
to connect to the remote server" which I was seeing before. It seems to have
made no difference to the number of timeout errors.

I can see *nothing* in *any* log (believe me I've looked!) and clearly my
configuration is correct because it works most of the time. Anyone any
ideas how I can find more information on this problem?

Thanks!


-- 
Ian

i...@isfluent.com a...@endissolutions.com
+44 (0)1223 257903


Re: How to index documents in SOLR running in Windows XP environment

2012-01-05 Thread Dan McGinn-Combs
 Look in the Example directory for a POST.SH and POST.JAR. These could be
used to do the job on Windows. But to be honest, I didn't have any problems
using CURL on Windows. You just have to be careful to double quote rather
than single quote and use the right kind of slashes for directories.

Dan

On Thursday, January 5, 2012, dsy99 ds...@rediffmail.com wrote:
 Dear Gora and all,
 Thank you very much for replying.
 My question is how to index documents (.XML, .pdf, .doc files) in Solr. I
 was trying using curl but it is not working in Windows XP environment. Do
 any one of you have any ready made program/DIH which I can use to index
 these types of files.

 Regds:
 Divakar

 --
 View this message in context:
http://lucene.472066.n3.nabble.com/How-to-index-documents-in-SOLR-running-in-Window-XP-envronment-tp3632488p3634507.html
 Sent from the Solr - User mailing list archive at Nabble.com.


-- 
Dan McGinn-Combs
dgco...@gmail.com
Google Voice: +1 404 492 7532
Peachtree City, Georgia USA



SmartChineseAnalyzer and stopwords.txt

2012-01-05 Thread Delbosc, Sylvain
Hello,

I would like to know how to use stopwords with SmartChineseAnalyzer.
Following what is described at 
http://lucene.apache.org/java/2_9_0/api/contrib-smartcn/org/apache/lucene/analysis/cn/smart/SmartChineseAnalyzer.html
 it seems to be possible but I do not manage to make it work.

Presently I am defining my analyzer like this but the stopwords.txt file 
located in the same directory as schema.xml does not seem to be taken into 
account.
  <analyzer class="org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer"/>

Has somebody managed to make this work?

NB: I am using Solr 1.4 and I am using several cores.

Best Regards,
_

Sylvain DELBOSC/ Capgemini Sud / Toulouse
Application Architect Senior / TIC - ADC

Tel.: +33 5 61 31 55 70 / www.capgemini.com
Fax: +33 5 61 31 53 85

15, avenue du Docteur Grynfogel
BP 53655 - 31036 Toulouse Cedex 1
_


prevent PlainTextEntityProcessor to encode text

2012-01-05 Thread meghana
Hi all,

I am importing one field into Solr using PlainTextEntityProcessor from a
text file which has text in XML format. After importing, some of the
text gets encoded (e.g. it converts the quotation mark (") to &quot;).

Can I prevent it from encoding the XML and keep it as it is (i.e. not
convert the quotation mark to &quot;)?

Please help me.
Thanks 
Meghana
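
If the escaping cannot be switched off at import time, one workaround is to decode the entities client-side after retrieval. A minimal sketch in Python (illustrative only; this is not a DIH setting):

```python
import html

# A stored value that came back with XML entities in it.
stored = "He said &quot;hello&quot; &amp; left"

# html.unescape reverses the standard XML/HTML entity encoding.
decoded = html.unescape(stored)
print(decoded)  # He said "hello" & left
```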

--
View this message in context: 
http://lucene.472066.n3.nabble.com/prevent-PlainTextEntityProcessor-to-encode-text-tp3634744p3634744.html
Sent from the Solr - User mailing list archive at Nabble.com.


Solr Search issue while making multivalued field to single valued.

2012-01-05 Thread meghana
I have one text field.

Previously it was a multivalued field imported using XPathEntityProcessor,
and it was working fine; now I have changed it to a single-valued field and
use PlainTextEntityProcessor.

When I search on it, the current setup (single value - PlainTextEntityProcessor)
returns fewer documents than before (multivalued - XPathEntityProcessor).

What could be the cause of this?
And what changes are needed so that it works as before?

Thanks
Meghana

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Search-issue-while-making-multivalued-field-to-signle-valued-tp3634764p3634764.html
Sent from the Solr - User mailing list archive at Nabble.com.


LUCENE-995 in 3.x

2012-01-05 Thread Ingo Renner
Hi all,

I've backported LUCENE-995 to 3.x and the unit test for TestQueryParser is 
green. 

What would be the workflow to actually get it into 3.x now?
- attach the patch to the original issue or
- create a new issue attaching the patch there?


best
Ingo

-- 
Ingo Renner
TYPO3 Core Developer, Release Manager TYPO3 4.2, Admin Google Summer of Code

TYPO3
Open Source Enterprise Content Management System
http://typo3.org










Re: soft commit 2

2012-01-05 Thread Erick Erickson
What is your evidence that it doesn't work
when you specify it in solrconfig.xml? You
haven't provided enough information about
what you've tried to give us much to go on.

It might help to review:
http://wiki.apache.org/solr/UsingMailingLists

Best
Erick

On Tue, Jan 3, 2012 at 8:17 AM, ramires uy...@beriltech.com wrote:
 hi

 softCommit works with the command below but doesn't work in solrconfig.xml.
 What is wrong with the XML part below?

 curl http://localhost:8984/solr/update -H 'Content-Type: text/xml'
 --data-binary '<commit softCommit="true" waitFlush="false"
 waitSearcher="false"/>'

  <updateHandler class="solr.DirectUpdateHandler2">
    <autoSoftCommit>
      <maxTime>1000</maxTime>
    </autoSoftCommit>
  </updateHandler>


 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/soft-commit-2-tp3628975p3628975.html
 Sent from the Solr - User mailing list archive at Nabble.com.


Re: doing snapshot after optimize - rotation parameter?

2012-01-05 Thread Erick Erickson
Have you looked at deletionPolicy and maxCommitsToKeep?

Best
Erick

On Tue, Jan 3, 2012 at 8:32 AM, Torsten Krah
tk...@fachschaft.imn.htwk-leipzig.de wrote:
 Hi,

 i am taking snapshots of my master index after optimize calls (run each
 day once), to get a clean backup of the index.
 Is there a parameter to tell the replication handler how many snapshots
 to keep and the rest should be deleted? Or must i use a custom script
 via cron?

 regards

 Torsten


Re: Commit without an update handler?

2012-01-05 Thread Erick Erickson
Hmmm, does it work just to put this in the master's index and let
replication do its tricks and issue your commit on the master?

Or am I missing something here?

Best
Erick

On Tue, Jan 3, 2012 at 1:33 PM, Martin Koch m...@issuu.com wrote:
 Hi List

 I have a Solr cluster set up in a master/slave configuration where the
 master acts as an indexing node and the slaves serve user requests.

 To avoid accidental posts of new documents to the slaves, I have disabled
 the update handlers.

 However, I use an externalFileField. When the file is updated, I need to
 issue a commit to reload the new file. This requires an update handler. Is
 there an update handler that doesn't accept new documents, but will effect
 a commit?

 Thanks,
 /Martin


Re: Do Hignlighting + proximity using surround query parser

2012-01-05 Thread Erick Erickson
Please review:
http://wiki.apache.org/solr/UsingMailingLists

You haven't provided enough information for
anyone to provide much help.

Best
Erick

On Wed, Jan 4, 2012 at 8:28 AM, reachpratik pra...@reach1to1.com wrote:
 Hello,
 I am not able to do highlighting with surround query parser on the returned
 results.
 I have tried the highlighting component but it does not return highlighted
 results.

 Any suggestions would help.


 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/Do-Hignlighting-proximity-using-surround-query-parser-tp3631827p3631827.html
 Sent from the Solr - User mailing list archive at Nabble.com.


Re: LUCENE-995 in 3.x

2012-01-05 Thread Michael McCandless
Thank you Ingo!

I think post the 3.x patch directly on the issue?

I'm not sure why this wasn't backported to 3.x the first time around...

Mike McCandless

http://blog.mikemccandless.com

On Thu, Jan 5, 2012 at 8:15 AM, Ingo Renner i...@typo3.org wrote:
 Hi all,

 I've backported LUCENE-995 to 3.x and the unit test for TestQueryParser is 
 green.

 What would be the workflow to actually get it into 3.x now?
 - attach the patch to the original issue or
 - create a new issue attaching the patch there?


 best
 Ingo

 --
 Ingo Renner
 TYPO3 Core Developer, Release Manager TYPO3 4.2, Admin Google Summer of Code

 TYPO3
 Open Source Enterprise Content Management System
 http://typo3.org










Re: Generic RemoveDuplicatesTokenFilter

2012-01-05 Thread Erick Erickson
@Pravesh
That looks reasonable. Of course you could extend it
to do many things. I'm assuming you've just created
a plugin that you use rather than compile this into
the Solr code, right?

@astubbs
I'd probably use a TokenFilter(Factory) implementation
as a plugin as I think pravesh has.
It would also be possible to use a TokenizerFactory,
depending on where you need this to happen, but FilterFactory
is my first choice, they allow more flexibility.

I rather doubt a patch on this order will make it into the code,
it's rather special-purpose.
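
For the record, the core of such a filter is tiny. A language-neutral sketch in Python of removing duplicate tokens from a stream (the real implementation would be a Lucene TokenFilter plus a factory, as discussed above; this is only an illustration of the logic):

```python
def remove_duplicates(tokens):
    """Yield each token text only the first time it appears in the stream,
    regardless of position - useful when phonetic encoding of ngrams
    produces many identical tokens."""
    seen = set()
    for tok in tokens:
        if tok not in seen:
            seen.add(tok)
            yield tok

# Duplicate phonetic codes collapse to one occurrence each.
print(list(remove_duplicates(["KRT", "KRT", "XMT", "KRT"])))  # ['KRT', 'XMT']
```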

Best
Erick

On Wed, Jan 4, 2012 at 12:32 PM, astubbs antony.stu...@gmail.com wrote:
 That's exactly what I need. I'm using phonetic tokens on ngrams, and there's
 lots of dupes. Can you submit it as a patch? What's the easiest way to get
 this into my solr?

 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/Generic-RemoveDuplicatesTokenFilter-tp3581656p3632499.html
 Sent from the Solr - User mailing list archive at Nabble.com.


Re: LUCENE-995 in 3.x

2012-01-05 Thread Ingo Renner

On 05.01.2012 at 15:05, Michael McCandless wrote:

 Thank you Ingo!
 
 I think post the 3.x patch directly on the issue?

thanks for the advice Michael, patch is attached: 
https://issues.apache.org/jira/browse/LUCENE-995


Ingo

-- 
Ingo Renner
TYPO3 Core Developer, Release Manager TYPO3 4.2, Admin Google Summer of Code

TYPO3
Open Source Enterprise Content Management System
http://typo3.org










Re: Detecting query errors with SolrJ

2012-01-05 Thread Erick Erickson
Shawn:

Somewhere you have access to a CommonsHttpSolrServer, right? There's a
getHttpClient call that returns an org.apache.commons.httpclient that
might get you the information you need.

Best
Erick

On Wed, Jan 4, 2012 at 1:03 PM, Shawn Heisey s...@elyograg.org wrote:
 When doing Solr queries in a browser, it's pretty easy to see an HTTP error
 status and read the reason, but I would like to do the same thing in a
 deterministic way with SolrJ.  Can anyone point me to examples that show how
 to retrieve the HTTP status code and the reason for the error?  I would like
 to inform the end user that there was a problem and offer ideas about how
 they might be able to fix it, rather than simply show an empty result grid.

 Thanks,
 Shawn



XPathEntityProcessor append text in foreach

2012-01-05 Thread meghana
Hi all, 

I have one non-multivalued field which I want to import from a file using
XPathEntityProcessor.

 <entity name="x" onError="continue" processor="XPathEntityProcessor"
         transformer="TemplateTransformer" forEach="/tt/body/div/p"
         url="${SRC.FileName}" dataSource="FS">
   <field column="Mfld" xpath="/xa/xb"/>
 </entity>

It works fine for a multi-valued field but, for a single-valued field, it
just assigns one value. Can I append the text of '/xa/xb' into 'Mfld'?

Thanks 
Meghana
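
If the concatenation can happen outside DIH, one approach is to collect every matching node and join the texts into a single value before indexing. A rough Python illustration (the element names follow the snippet above; the space separator is an arbitrary choice):

```python
import xml.etree.ElementTree as ET

doc = "<xa><xb>first part</xb><xb>second part</xb></xa>"
root = ET.fromstring(doc)

# Gather all /xa/xb text nodes and concatenate them into one string,
# which is what a non-multivalued field needs.
single_value = " ".join(el.text for el in root.findall("xb"))
print(single_value)  # first part second part
```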

--
View this message in context: 
http://lucene.472066.n3.nabble.com/XPathEntityProcessor-append-text-in-foreach-tp3635022p3635022.html
Sent from the Solr - User mailing list archive at Nabble.com.


matching on month

2012-01-05 Thread Don Hill
Hi,

I am trying to return results based on the month of a date field. Is this
possible?
I know I can do ranges using the field:[date TO date] but now I have a
requirement to return records based on just the month part of a date

so if I have records with these dates and search on May/05:

<date name="effDate_tdt">2011-05-01T00:00:00Z</date>
<date name="effDate_tdt">2006-05-01T00:00:00Z</date>
<date name="effDate_tdt">2004-05-01T00:00:00Z</date>
<date name="effDate_tdt">1995-07-01T00:00:00Z</date>

the query would only return these:

<date name="effDate_tdt">2011-05-17T00:00:00Z</date>
<date name="effDate_tdt">2006-05-30T00:00:00Z</date>
<date name="effDate_tdt">2004-05-03T00:00:00Z</date>


Re: doing snapshot after optimize - rotation parameter?

2012-01-05 Thread Torsten Krah
On Thursday, 05.01.2012, at 08:48 -0500, Erick Erickson wrote:
 Have you looked at deletionPolicy and maxCommitsToKeep?

Hm, but that are deletion policy parameters for the running index, how
much commit points should be kept - the internal ones from lucene:

#
<!-- configure deletion policy here -->
<deletionPolicy class="solr.SolrIndexDeletionPolicy">
  <!-- Store only the commits with optimize. Non-optimized commits
       will get deleted by Lucene when the last IndexWriter/IndexReader
       using this commit point is closed -->
  <str name="keepOptimizedOnly">true</str>
  <!-- Maximum no. of commit points stored. Older ones will be
       cleaned when they go out of scope -->
  <str name="maxCommitsToKeep"></str>
  <!-- max age of a stored commit -->
  <str name="maxCommitAge"></str>
</deletionPolicy>
#

A rotated snapshot is always out of scope - it's like a backup, so
maxCommitsToKeep would not make any sense here, right?
Reading this: https://issues.apache.org/jira/browse/SOLR-617 it sounds
like a different use case.

Are those really meant to be used for rotating the snapshot directories?
Reading the comments, it does not sound like what I am looking for - am
I right?

regards

Torsten
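
If it does come down to a custom cron script, the rotation logic itself is small. A hedged Python sketch that keeps the N newest snapshot.* directories and deletes the rest (the snapshot.* naming follows the replication handler's convention; the path and retention count are placeholders):

```python
import shutil
from pathlib import Path

def rotate_snapshots(index_dir, keep=7):
    """Delete all but the `keep` newest snapshot.* directories under index_dir."""
    snaps = sorted(
        (p for p in Path(index_dir).glob("snapshot.*") if p.is_dir()),
        key=lambda p: p.stat().st_mtime,  # newest first after reverse sort
        reverse=True,
    )
    for old in snaps[keep:]:
        shutil.rmtree(old)
    return [p.name for p in snaps[:keep]]
```

Run from cron after the nightly optimize/snapshot, e.g. `rotate_snapshots("/var/solr/data", keep=7)`.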

 
 Best
 Erick
 
 On Tue, Jan 3, 2012 at 8:32 AM, Torsten Krah
 tk...@fachschaft.imn.htwk-leipzig.de wrote:
  Hi,
 
  i am taking snapshots of my master index after optimize calls (run each
  day once), to get a clean backup of the index.
  Is there a parameter to tell the replication handler how many snapshots
  to keep and the rest should be deleted? Or must i use a custom script
  via cron?
 
  regards
 
  Torsten





Re: Commit without an update handler?

2012-01-05 Thread Martin Koch
Yes.

However, something must actually have been updated in the index before a
commit on the master causes the slave to update (this is what was confusing
me).

Since I'll be updating the index fairly often, this will not be a problem
for me.

If, however, the external file field is updated often, but the index proper
isn't, this could be a problem.

Thanks,
/Martin

On Thu, Jan 5, 2012 at 2:56 PM, Erick Erickson erickerick...@gmail.comwrote:

 Hmmm, does it work just to put this in the master's index and let
 replication do its tricks and issue your commit on the master?

 Or am I missing something here?

 Best
 Erick

 On Tue, Jan 3, 2012 at 1:33 PM, Martin Koch m...@issuu.com wrote:
  Hi List
 
  I have a Solr cluster set up in a master/slave configuration where the
  master acts as an indexing node and the slaves serve user requests.
 
  To avoid accidental posts of new documents to the slaves, I have disabled
  the update handlers.
 
  However, I use an externalFileField. When the file is updated, I need to
  issue a commit to reload the new file. This requires an update handler.
 Is
  there an update handler that doesn't accept new documents, but will
 effect
  a commit?
 
  Thanks,
  /Martin



Re: matching on month

2012-01-05 Thread Sethi, Parampreet
Hi Don,

You can try 


<date name="effDate_tdt">2011-05-01T00:00:00Z TO
2011-05-30T00:00:00Z</date>
<date name="effDate_tdt">2006-05-01T00:00:00Z TO
2006-05-30T00:00:00Z</date>
 And so on.


I am not sure if month query is available, probably other group members
can shed more light on the same. But this can be used as quick fix.

-param


On 1/5/12 9:41 AM, Don Hill justj...@gmail.com wrote:

Hi,

I am trying to return results based on the month of a date field, Is this
possible.
I know I can do ranges using the field:[date TO date] but now I have a
requirement to return records based on just the month part of a date

so if I have records with these dates and search on May/05:

<date name="effDate_tdt">2011-05-01T00:00:00Z</date>
<date name="effDate_tdt">2006-05-01T00:00:00Z</date>
<date name="effDate_tdt">2004-05-01T00:00:00Z</date>
<date name="effDate_tdt">1995-07-01T00:00:00Z</date>

the query would only return these:

<date name="effDate_tdt">2011-05-17T00:00:00Z</date>
<date name="effDate_tdt">2006-05-30T00:00:00Z</date>
<date name="effDate_tdt">2004-05-03T00:00:00Z</date>



RE: How to index documents in SOLR running in Windows XP environment

2012-01-05 Thread Dyer, James
Just be sure to download the correct binary for your version of Windows.  Then 
unzip the file somewhere and add curl.exe to your PATH.  It should then just 
work from the command line like the examples.  If you need more curl help, you 
might need to ask elsewhere.

With curl you can upload simple .xml files to solr ... See 
http://wiki.apache.org/solr/UpdateXmlMessages .  For other xml formats or 
different file types (you had mentioned .pdf and .doc), see the instructions at 
http://wiki.apache.org/solr/ExtractingRequestHandler . 

Keep in mind that for a lot of uses, curl is a good solution only for rapid 
development but other solutions (DIH or SolrJ) will be better for the long 
haul.  

James Dyer
E-Commerce Systems
Ingram Content Group
(615) 213-4311


-Original Message-
From: dsy99 [mailto:ds...@rediffmail.com] 
Sent: Thursday, January 05, 2012 7:17 AM
To: solr-user@lucene.apache.org
Subject: RE: How to index documents in SOLR running in Windows XP environment

Dear James,

Can you please list the steps to be followed to execute curl for indexing
files in Solr, after downloading it from the site
"http://curl.haxx.se/download.html".
After downloading it, I am unable to proceed further from the link
provided.

Thank you very much in advance.

Regds:
Divakar

--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-index-documents-in-SOLR-running-in-Window-XP-envronment-tp3632488p3634775.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: LUCENE-995 in 3.x

2012-01-05 Thread Michael McCandless
Awesome, thanks Ingo... I'll have a look!

Mike McCandless

http://blog.mikemccandless.com

On Thu, Jan 5, 2012 at 9:23 AM, Ingo Renner i...@typo3.org wrote:

 On 05.01.2012 at 15:05, Michael McCandless wrote:

 Thank you Ingo!

 I think post the 3.x patch directly on the issue?

 thanks for the advice Michael, patch is attached: 
 https://issues.apache.org/jira/browse/LUCENE-995


 Ingo

 --
 Ingo Renner
 TYPO3 Core Developer, Release Manager TYPO3 4.2, Admin Google Summer of Code

 TYPO3
 Open Source Enterprise Content Management System
 http://typo3.org










Re: matching on month

2012-01-05 Thread Erick Erickson
The query would actually look like
fq=effDate_tdt:[2011-05-01T00:00:00Z TO 2011-05-31T00:00:00Z]

and you need to be a little careful with the end date, this would
actually skip documents on 31 May, you'd need to do something like:
fq=effDate_tdt:[2011-05-01T00:00:00Z TO 2011-05-31T23:59:59.999Z]

As of 4.0, you can mix inclusive/exclusive as
q=effDate_tdt:[2011-05-01T00:00:00Z TO 2011-06-01T00:00:00Z}
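
Computing those month endpoints by hand is error-prone; a small helper can derive them. A Python sketch (assumes Solr's ISO-8601 date syntax and the effDate_tdt field from this thread):

```python
import calendar

def month_range(year, month):
    """Return an inclusive Solr range query covering one calendar month."""
    last_day = calendar.monthrange(year, month)[1]  # handles Feb, leap years, etc.
    start = f"{year:04d}-{month:02d}-01T00:00:00Z"
    end = f"{year:04d}-{month:02d}-{last_day:02d}T23:59:59.999Z"
    return f"effDate_tdt:[{start} TO {end}]"

print(month_range(2011, 5))
# effDate_tdt:[2011-05-01T00:00:00Z TO 2011-05-31T23:59:59.999Z]
```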

Best
Erick

On Thu, Jan 5, 2012 at 10:04 AM, Sethi, Parampreet
parampreet.se...@teamaol.com wrote:
 Hi Don,

 You can try


 <date name="effDate_tdt">2011-05-01T00:00:00Z TO
 2011-05-30T00:00:00Z</date>
 <date name="effDate_tdt">2006-05-01T00:00:00Z TO
 2006-05-30T00:00:00Z</date>
  And so on.


 I am not sure if month query is available, probably other group members
 can shed more light on the same. But this can be used as quick fix.

 -param


 On 1/5/12 9:41 AM, Don Hill justj...@gmail.com wrote:

Hi,

I am trying to return results based on the month of a date field, Is this
possible.
I know I can do ranges using the field:[date TO date] but now I have a
requirement to return records based on just the month part of a date

so if I have records with these dates and search on May/05:

<date name="effDate_tdt">2011-05-01T00:00:00Z</date>
<date name="effDate_tdt">2006-05-01T00:00:00Z</date>
<date name="effDate_tdt">2004-05-01T00:00:00Z</date>
<date name="effDate_tdt">1995-07-01T00:00:00Z</date>

the query would only return these:

<date name="effDate_tdt">2011-05-17T00:00:00Z</date>
<date name="effDate_tdt">2006-05-30T00:00:00Z</date>
<date name="effDate_tdt">2004-05-03T00:00:00Z</date>



Re: SOLR results case

2012-01-05 Thread Juan Grande
Hi Dave,

The stored content (which is returned in the results) isn't modified by the
analyzers, so this shouldn't be a problem. Could you describe in more
detail what you are doing and the results that you're getting?

Thanks,

*Juan*



On Thu, Jan 5, 2012 at 2:17 PM, Dave dla...@gmail.com wrote:

 I'm running all of my indexed data and queries through a
 LowerCaseFilterFactory because I don't want to worry about case when
 matching. All of my results are titles - is there an easy way to restore
 case or convert all results to Title Case when returning them? My results
 are returned as JSON if that makes any difference.

 Thanks,
 Dave



Re: SOLR results case

2012-01-05 Thread Dave
Hi Juan,

When I'm storing the content, the field has a LowerCaseFilterFactory
filter, so that when I'm searching it's not case sensitive. Is there a way
to re-filter the data when it's presented as a result to restore the case
or convert to Title Case?

Thanks,
Dave

On Thu, Jan 5, 2012 at 12:41 PM, Juan Grande juan.gra...@gmail.com wrote:

 Hi Dave,

 The stored content (which is returned in the results) isn't modified by the
 analyzers, so this shouldn't be a problem. Could you describe in more
 detail what you are doing and the results that you're getting?

 Thanks,

 *Juan*



 On Thu, Jan 5, 2012 at 2:17 PM, Dave dla...@gmail.com wrote:

  I'm running all of my indexed data and queries through a
  LowerCaseFilterFactory because I don't want to worry about case when
  matching. All of my results are titles - is there an easy way to restore
  case or convert all results to Title Case when returning them? My results
  are returned as JSON if that makes any difference.
 
  Thanks,
  Dave
 



Heads Up - Index File Format Change on Trunk

2012-01-05 Thread Simon Willnauer
Folks,

I just committed LUCENE-3628 [1] which cuts over Norms to DocVaues.
This is an index file format change and if you are using trunk you
need to reindex before updating.

happy indexing :)

simon

[1] https://issues.apache.org/jira/browse/LUCENE-3628


Re: SOLR results case

2012-01-05 Thread Juan Grande
Hi Dave,

Have you tried running a query and taking a look at the results?

The filters that you define in the fieldType don't affect the way the data
is *stored*, it affects the way the data is *indexed*. With this I mean
that the filters affect the way that a query matches a document, and will
affect other features that rely on the *indexed* values (like faceting) but
won't affect the way in which results are shown, which depends on the
*stored* value.
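
A consequence of the indexed-vs-stored distinction: if the stored values really were lowercased before they ever reached Solr, the original casing is gone and any Title Case display has to be a best-effort client-side pass over the results. A rough sketch (lossy - acronyms and names like McCartney won't round-trip):

```python
def title_case(s):
    # Capitalize the first letter of each space-separated word; crude, but
    # often good enough for display when the original casing was lost.
    return " ".join(w[:1].upper() + w[1:] for w in s.split(" "))

# Post-process the documents of a JSON response before rendering.
docs = [{"title": "a brief history of time"}, {"title": "solr in action"}]
for doc in docs:
    doc["title"] = title_case(doc["title"])
print(docs[0]["title"])  # A Brief History Of Time
```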

*Juan*



On Thu, Jan 5, 2012 at 3:19 PM, Dave dla...@gmail.com wrote:

 Hi Juan,

 When I'm storing the content, the field has a LowerCaseFilterFactory
 filter, so that when I'm searching it's not case sensitive. Is there a way
 to re-filter the data when it's presented as a result to restore the case
 or convert to Title Case?

 Thanks,
 Dave

 On Thu, Jan 5, 2012 at 12:41 PM, Juan Grande juan.gra...@gmail.com
 wrote:

  Hi Dave,
 
  The stored content (which is returned in the results) isn't modified by
 the
  analyzers, so this shouldn't be a problem. Could you describe in more
  detail what you are doing and the results that you're getting?
 
  Thanks,
 
  *Juan*
 
 
 
  On Thu, Jan 5, 2012 at 2:17 PM, Dave dla...@gmail.com wrote:
 
   I'm running all of my indexed data and queries through a
   LowerCaseFilterFactory because I don't want to worry about case when
   matching. All of my results are titles - is there an easy way to
 restore
   case or convert all results to Title Case when returning them? My
 results
   are returned as JSON if that makes any difference.
  
   Thanks,
   Dave
  
 



Re: matching on month

2012-01-05 Thread Chris Hostetter

: The query would actually look like
: fq=effDate_tdt:[2011-05-01T00:00:00Z TO 2011-05-31T00:00:00Z]

i think you're overlooking part of the question ... Don seems to be asking 
how to query if the value of a date field contains a day in the month of 
May, regardless of year...

: the query would only return these:
: 
: <date name="effDate_tdt">2011-05-17T00:00:00Z</date>
: <date name="effDate_tdt">2006-05-30T00:00:00Z</date>
: <date name="effDate_tdt">2004-05-03T00:00:00Z</date>

...this isn't possible with the solr date field.  you'll need to create a 
special month field and index just the month value there.
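
Concretely, the special month field only needs the month pulled out of the date at index time. A minimal sketch (the field name effMonth_i is made up for illustration; the query would then be fq=effMonth_i:5 for May across all years):

```python
def with_month_field(doc, date_field="effDate_tdt", month_field="effMonth_i"):
    """Copy the month out of an ISO-8601 date into a separate integer field."""
    doc = dict(doc)
    doc[month_field] = int(doc[date_field][5:7])  # "2011-05-01T..." -> 5
    return doc

doc = with_month_field({"effDate_tdt": "2011-05-01T00:00:00Z"})
print(doc["effMonth_i"])  # 5
```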



-Hoss


Solr Scoring question

2012-01-05 Thread Christopher Gross
I'm getting different results running these queries:

http://localhost:8080/solr/select?q=*:*&fq=source:wiki&fq=tag:car&sort=score+desc,dateSubmitted+asc&fl=title,score,dateSubmitted&rows=100

http://localhost:8080/solr/select?fq=source:wiki&q=tag:car&sort=score+desc,dateSubmitted+desc&fl=title,score,dateSubmitted&rows=100

They return the same number of results (and I'm assuming the same
ones) -- but the first one (with q=*:*) has a score of 1 for all
results, making it only sort by dateSubmitted.  The second one has
scores, and it properly sorts them.

I was thinking that the two would be equivalent and give the same
results in the same order, but I'm guessing that there is something
happening behind the scenes in Solr (Lucene?) that makes the *:* give
me a score of 1.0 for everything.  I tried to find some documentation
to figure out if this is the case, but I'm not having much luck for
that.

I have a JSP file that will take in parameters, do some work on them
to make them appropriate for Solr, then pass the query it builds to
Solr.  Should I just put more brains in that to avoid using a *:*
(we're trying to verify results and we ran into this oddity).

This is for Solr 3.4, running Tomcat 5.5.25 on Java 1.5.

Thanks!  Let me know if I need to clarify anything...

-- Chris


Re: Solr Scoring question

2012-01-05 Thread Simon Willnauer
hey,

On Thu, Jan 5, 2012 at 9:31 PM, Christopher Gross cogr...@gmail.com wrote:
 I'm getting different results running these queries:

 http://localhost:8080/solr/select?q=*:*&fq=source:wiki&fq=tag:car&sort=score+desc,dateSubmitted+asc&fl=title,score,dateSubmitted&rows=100

 http://localhost:8080/solr/select?fq=source:wiki&q=tag:car&sort=score+desc,dateSubmitted+desc&fl=title,score,dateSubmitted&rows=100

 They return the same number of results (and I'm assuming the same
 ones) -- but the first one (with q=*:*) has a score of 1 for all
 results, making it sort only by dateSubmitted.  The second one has
 real scores and sorts them properly.

 I was thinking that the two would be equivalent and give the same
 results in the same order, but I'm guessing that there is something
 happening behind the scenes in Solr (Lucene?) that makes the *:* give
 me a score of 1.0 for everything.  I tried to find some documentation
 to figure out if this is the case, but I'm not having much luck for
 that.

q=*:* is a constant-score query that retrieves all documents in your
index. The issue here is that with *:* you don't have anything to
score, while with q=tag:car you can score the term car with tf-idf etc.

does that make sense?

simon
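
Simon's point can be sketched numerically. The snippet below is a hedged,
simplified tf-idf (deliberately not Lucene's exact Similarity formula) over
two made-up documents; it only illustrates why a term query produces
differentiated scores while a match-all query gives every document the same
constant score:

```python
import math

# Two toy documents; "car" appears twice in the first, once in the second.
docs = [
    {"tag": ["car", "car", "truck"]},
    {"tag": ["car", "truck"]},
]

def tf_idf(term, doc, corpus):
    tf = doc["tag"].count(term)                      # term frequency in this doc
    df = sum(1 for d in corpus if term in d["tag"])  # document frequency
    idf = 1 + math.log(len(corpus) / (1 + df))
    return math.sqrt(tf) * idf

term_scores = [tf_idf("car", d, docs) for d in docs]
match_all_scores = [1.0 for _ in docs]  # *:* is a constant-score query

# term_scores differ (higher tf -> higher score); match_all_scores are all
# equal, so a sort on score alone falls through to the secondary sort key.
```

With identical scores everywhere, `sort=score desc,dateSubmitted asc` is
effectively a sort on dateSubmitted alone, which matches what Christopher
observed.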

 I have a JSP file that takes in parameters, does some work on them
 to make them appropriate for Solr, then passes the query it builds to
 Solr.  Should I just put more brains in it to avoid using *:*?
 (We're trying to verify results and ran into this oddity.)

 This is for Solr 3.4, running Tomcat 5.5.25 on Java 1.5.

 Thanks!  Let me know if I need to clarify anything...

 -- Chris


Round Robin concept in distributed Solr

2012-01-05 Thread Suneel
So scenario A (round-robin):

query 1: /solr-shard-1/select?q=dog...&shards=shard-1,shard-2
query 2: /solr-shard-2/select?q=dog...&shards=shard-1,shard-2
query 3: /solr-shard-1/select?q=dog...&shards=shard-1,shard-2
etc.

or scenario B (fixed):

query 1: /solr-shard-1/select?q=dog...&shards=shard-1,shard-2
query 2: /solr-shard-1/select?q=dog...&shards=shard-1,shard-2
query 3: /solr-shard-1/select?q=dog...&shards=shard-1,shard-2
etc.

I want to use round-robin for load balancing and found this piece of code.
Could anyone please explain how these queries work?
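
As a hedged sketch of scenario A (host names, ports, and shard labels here
are assumptions, not from the original post): the client rotates which node
receives each request, while the shards= parameter still fans every query
out to all shards. Round-robin therefore only spreads the coordinating and
result-merging work, not the per-shard search work:

```python
import itertools

# Entry points for the two shard nodes (assumed URLs).
entry_points = [
    "http://solr-shard-1:8983/solr",
    "http://solr-shard-2:8983/solr",
]
rotation = itertools.cycle(entry_points)  # endless round-robin iterator

def next_query_url(q):
    # Pick the next coordinator; the shards= param makes the query
    # distributed regardless of which node we hit.
    base = next(rotation)
    return base + "/select?q=" + q + "&shards=shard-1,shard-2"

urls = [next_query_url("dog") for _ in range(3)]
# urls alternate: shard-1's endpoint, shard-2's, shard-1's again.
```

In scenario B every query hits the same coordinator, which concentrates the
merge load on one node; scenario A balances it.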

-
Suneel Pandey
Sr. Software Developer
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Round-Robin-concept-in-distributed-Solr-tp3636345p3636345.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: LUCENE-995 in 3.x

2012-01-05 Thread Ingo Renner

Am 05.01.2012 um 16:27 schrieb Michael McCandless:

 Awesome, thanks Ingo... I'll have a look!

Thank YOU for taking the time and looking into it!


Ingo

-- 
Ingo Renner
TYPO3 Core Developer, Release Manager TYPO3 4.2, Admin Google Summer of Code

TYPO3
Open Source Enterprise Content Management System
http://typo3.org

Re: [Solr Event Listener plug-in] Execute query search from SolrCore - Java Code

2012-01-05 Thread Chris Hostetter

: I have tried to open a new searcher and make a forced commit inside the
: postCommit method of the listener, but it caused many issues.
: How can I complete the commit and then call the postCommit method of the
: listener with the logic inside ( with a lot of queries on the last
: committed docs)?

this is the chicken-and-egg problem that i think i mentioned before, 
and the reason why most people deal with this type of situation externally 
from solr -- plugins like UpdateProcessors can get a searcher from the 
SolrCore, but a searcher always represents a snapshot in time of the 
most recent commit prior to asking the SolrCore for it -- it doesn't 
see any changes made while the searcher is in use (if it did, queries 
that do things like faceting and highlighting would make no sense)

your UpdateProcessor could conceivably ask for a new searcher for *every* 
document it wants to check, but that still wouldn't help you find 
documents that haven't been committed yet -- the index can't tell you about 
any doc until it's committed.

You either need to keep track of the uncommitted docs yourself, or if 
you're willing to depend on Solr trunk and use unreleased 4x code, you 
*might* be able to leverage the realtime get stuff that uses a 
transaction log of added docs to make it possible to ask for docs by ID 
even if they haven't been committed yet...

http://www.lucidimagination.com/blog/2011/09/07/realtime-get/
https://wiki.apache.org/solr/RealTimeGet

...but i have no idea if you'll run into any sort of problems reading from 
the transaction log from an UpdateProcessor (not sure what the internal 
API looks like)

-Hoss


Re: edismax ignores the quoted sub phrase query ?

2012-01-05 Thread Chris Hostetter

: Is this the intended behavior of edismax, or am I missing anything ?

it definitely looks like a bug to me; that pf clause is nonsensical.

I've opened a jira to track this, but i'm afraid i can't offer any advice 
on how to fix it...

https://issues.apache.org/jira/browse/SOLR-3008


-Hoss


Solr TransformerException, SocketException: Broken pipe

2012-01-05 Thread bhawna singh
Hi Guys,
We are experiencing SEVERE exceptions in Solr (stack trace below).
Please let me know if anyone has experienced this and has some insight or
pointers on where and what I should look for to resolve it.
ERROR [solr.servlet.SolrDispatchFilter] - : java.io.IOException: XSLT
transformation error

After the exception, Solr goes into an unstable state and the response time
increases from under 50ms to more than 5000ms.

Thanks,
Bhawna

Stack Trace:
ERROR [solr.servlet.SolrDispatchFilter] - [http-bio-8080-exec-10138] :
java.io.IOException: XSLT transformation error
at
org.apache.solr.response.XSLTResponseWriter.write(XSLTResponseWriter.java:108)
at
org.apache.solr.servlet.SolrDispatchFilter.writeResponse(SolrDispatchFilter.java:340)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:261)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:224)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175)
at
org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:462)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:164)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:100)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:405)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:278)
at
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:515)
at
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:302)
at
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: javax.xml.transform.TransformerException: ClientAbortException:
java.net.SocketException: Broken pipe
at
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:719)
at
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:313)
at
org.apache.solr.response.XSLTResponseWriter.write(XSLTResponseWriter.java:106)
... 17 more
Caused by: ClientAbortException:  java.net.SocketException: Broken pipe
at
com.sun.org.apache.xml.internal.serializer.ToStream.flushWriter(ToStream.java:299)
at
com.sun.org.apache.xml.internal.serializer.ToXMLStream.endDocument(ToXMLStream.java:194)
at
com.sun.org.apache.xml.internal.serializer.ToUnknownStream.endDocument(ToUnknownStream.java:825)
at dev.transform()
at
com.sun.org.apache.xalan.internal.xsltc.runtime.AbstractTranslet.transform(AbstractTranslet.java:603)
at
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:709)
... 19 more
Caused by: ClientAbortException:  java.net.SocketException: Broken pipe
at
org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:373)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:437)
at
org.apache.catalina.connector.OutputBuffer.doFlush(OutputBuffer.java:321)
at
org.apache.catalina.connector.OutputBuffer.flush(OutputBuffer.java:299)
at
org.apache.catalina.connector.CoyoteOutputStream.flush(CoyoteOutputStream.java:103)
at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:278)
at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:122)
at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:212)
at org.apache.solr.common.util.FastWriter.flush(FastWriter.java:115)
at
com.sun.org.apache.xml.internal.serializer.ToStream.flushWriter(ToStream.java:294)
... 24 more
Caused by: java.net.SocketException: Broken pipe
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:92)
at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
at
org.apache.coyote.http11.InternalOutputBuffer.realWriteBytes(InternalOutputBuffer.java:218)
at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:437)
at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:351)
at
org.apache.coyote.http11.InternalOutputBuffer$OutputStreamOutputBuffer.doWrite(InternalOutputBuffer.java:243)
at
org.apache.coyote.http11.filters.ChunkedOutputFilter.doWrite(ChunkedOutputFilter.java:119)
at
org.apache.coyote.http11.AbstractOutputBuffer.doWrite(AbstractOutputBuffer.java:190)
at org.apache.coyote.Response.doWrite(Response.java:533)
at

no such core error with EmbeddedSolrServer

2012-01-05 Thread Phillip Rhodes
Hi all, I'm having an issue that I hope someone can shed some light on.

I have a Groovy program, using Solr 3.5, where I am attempting to use
EmbeddedSolrServer using the instructions shown here:

http://wiki.apache.org/solr/Solrj#EmbeddedSolrServer

to that end, I have code setup like this:

...

System.setProperty('solr.solr.home',
'/usr/servers/solr/apache-solr-3.5.0/example/heceta');
CoreContainer.Initializer initializer = new CoreContainer.Initializer();
CoreContainer coreContainer = initializer.initialize();

EmbeddedSolrServer solrServer = new EmbeddedSolrServer(coreContainer, '' );

to initialize the solrServer.  But when I try to add a doc using the solrServer
instance, I get an exception:

org.apache.solr.common.SolrException: No such core:
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:104)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:105)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:121)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:106)
at org.apache.solr.client.solrj.SolrServer$add.call(Unknown Source)
at 
org.fogbeam.exp.IndexerWithOwnerInfo.indexFile(IndexerWithOwnerInfo.groovy:150)


My solr.xml looks like this:

<solr persistent="false">

  <!--
  adminPath: RequestHandler path to manage cores.
    If 'null' (or absent), cores will not be manageable via request handler
  -->
  <cores adminPath="/admin/cores" defaultCoreName="collection1">
    <core name="collection1" instanceDir="." />
  </cores>
</solr>



Can somebody tell me if the documentation on how to set this up is
wrong, or if there is a solr bug, or if there
is just something I've missed in my configuration?


Thanks,


Phillip


Re: doing snapshot after optimize - rotation parameter?

2012-01-05 Thread Chris Hostetter
: i am taking snapshots of my master index after optimize calls (run each

please define what you mean by taking snapshots -- the replication 
handler has no snapshot command that i know of.  Are you using the 
old snapshooter scripts from Solr 1.4 style replication, or are you 
doing something else outside of Solr?

: day once), to get a clean backup of the index.

If your goal is to create backups, use the backup command of the 
replication handler.

: Is there a parameter to tell the replication handler how many snapshots
: to keep and the rest should be deleted? Or must i use a custom script
: via cron?

As of Solr 3.5 the backup command supports a numberToKeep param...

https://wiki.apache.org/solr/SolrReplication#HTTP_API
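
As a hedged illustration (host, port, and handler path are assumptions for a
typical setup), the backup command is just an HTTP GET against the
replication handler; the sketch below only builds the URL:

```python
from urllib.parse import urlencode

# Assumed master URL; adjust host/port/core path for a real deployment.
base = "http://localhost:8983/solr/replication"
params = {"command": "backup", "numberToKeep": 3}  # keep 3 backups, prune older
url = base + "?" + urlencode(params)
# Send with e.g. urllib.request.urlopen(url) against a live master.
```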


-Hoss


Re: Query regarding solr custom sort order

2012-01-05 Thread umaswayam
Hi Bernd,

The column that comes from the database is a string, and that is what gets
populated by default. How do I convert it to a double, given that the values
are in the format 1.00, 2.00, 3.00 in the database? I need it converted to a
double for sorting.
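
One common approach, as a hedged sketch (the field names "price" and
"price_d" are assumptions, and "tdouble" assumes a trie double type is
defined in the schema): copy the string into a numeric field at index time
and sort on that field instead; a value like 1.00 parses cleanly as a double.

```xml
<!-- schema.xml sketch: "price" stands in for the existing string field -->
<field name="price_d" type="tdouble" indexed="true" stored="false"/>
<copyField source="price" dest="price_d"/>
```

Queries would then use sort=price_d+asc rather than sorting on the string
field (which sorts lexicographically).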

Thanks,
Uma Shankar

--
View this message in context: 
http://lucene.472066.n3.nabble.com/Query-regarding-solr-custom-sort-order-tp3631854p3637181.html
Sent from the Solr - User mailing list archive at Nabble.com.


Indexing Failed.Rolled back all changes Issue

2012-01-05 Thread Rajdeep Alapati
Hi,

I am new to Solr. I have been digging into the DataImportHandler for the past
few days and am now doing a proof of concept with a downloaded Solr server. I
have a table called files and want to perform a full import as part of the POC.
I changed data-config.xml and registered it in solrconfig.xml under conf/. I
hit the HTTP request for full-import. Even though my query returns 3 rows when
run outside Solr, the import does not index any rows. When I check the
/dataimport status I see the error message "Indexing Failed. Rolled back all
changes." After some digging into the issue, I found that I was missing the
dataimport.properties file in conf/. Please help me here.

1) Can anybody send me a sample showing what the dataimport.properties file
should look like?
2) Even though I am not performing a delta import, do I need dataimport.properties?
3) Why am I not able to see any rows? Which attribute can I check to tell
whether the data import was successful, and in which file will I find this
information? I think the solr /dataimport HTTP request will help me?
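
On question 1, a hedged sample (the timestamp value is purely illustrative):
dataimport.properties is a small properties file that DIH normally writes
itself after a successful import; it records the last index time for delta
imports, so its absence should not normally matter for a full-import.

```
# dataimport.properties sketch -- written/updated by DataImportHandler
# after each run; delta-import substitutes this value into deltaQuery
# as ${dataimporter.last_index_time}
last_index_time=2012-01-05 10\:15\:23
```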

Can anybody help me? I know this is probably a basic problem, but I am a beginner.

Thanks
Raj Deep