Re: Setting up two cores in solr.xml for Solr 4.0

2012-09-04 Thread veena rani



  
Try the above code snippet in solr.xml.
But it works on Tomcat.

On Wed, Sep 5, 2012 at 1:10 AM, Chris Hostetter wrote:

>
> :   
>
> I'm pretty sure what you have above tells solr that for core MYCORE_test it
> should use the instanceDir MYCORE but ignore the <dataDir> in that
> solrconfig.xml and use the one you specified.
>
> This on the other hand...
>
> : >   
> : > 
> : >
>
> ...tells solr that the MYCORE_test SolrCore should use the instanceDir
> MYCORE, and when parsing that solrconfig.xml file it should set the
> variable ${dataDir} to be "MYCORE_test" -- but if your solrconfig.xml file
> does not ever refer to the ${dataDir} variable, it would not have any effect.
>
> so the question becomes -- what does your solrconfig.xml look like?
>
>
> -Hoss
>



-- 
Regards,
Veena.
Banglore.


Re: Solr 4.0 BETA Replication problems on Tomcat

2012-09-04 Thread Sami Siren
I opened SOLR-3789. As a workaround you can remove "internal" (the compression setting) from the config and it should work.

--
 Sami Siren
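
For reference, a slave-side replication handler with the compression setting removed might look roughly like this (element names follow the Solr replication wiki; the masterUrl and timeouts are placeholders, not the poster's actual values, which were stripped by the archive):

```xml
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <!-- point at the master core's replication handler -->
    <str name="masterUrl">http://master:8080/solr/mycore/replication</str>
    <str name="pollInterval">00:00:50</str>
    <!-- the <str name="compression">internal</str> line is omitted,
         per the SOLR-3789 workaround -->
    <str name="httpConnTimeout">5000</str>
  </lst>
</requestHandler>
```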

On Wed, Sep 5, 2012 at 5:58 AM, Ravi Solr  wrote:
> Hello,
> I have a very simple setup: one master and one slave configured
> as below, but replication keeps failing with the stacktrace shown
> below. Note that 3.6 works fine on the same machines, so I am thinking
> that I am missing something in configuration with regards to solr
> 4.0... can somebody kindly let me know if I am missing something? I am
> running SOLR 4.0 on Tomcat-7.0.29 with Java6. FYI, I never had any
> problem with SOLR on glassfish; this is the first time I am using it on
> Tomcat.
>
> On Master
>
> 
>  
>   commit
>   optimize
>   schema.xml,stopwords.txt,synonyms.txt
>   00:00:10
>   
> 
>
> On Slave
>
> 
>  
>  name="masterUrl">http://testslave:8080/solr/mycore/replication
>
> 00:00:50
> internal
> 5000
> 1
>  
> 
>
>
> Error
>
> 22:44:10WARNING SnapPuller  Error in fetching packets
>
> java.util.zip.ZipException: unknown compression method
> at 
> java.util.zip.InflaterInputStream.read(InflaterInputStream.java:147)
> at 
> org.apache.solr.common.util.FastInputStream.readWrappedStream(FastInputStream.java:79)
> at 
> org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:88)
> at 
> org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:124)
> at 
> org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:149)
> at 
> org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:144)
> at 
> org.apache.solr.handler.SnapPuller$FileFetcher.fetchPackets(SnapPuller.java:1024)
> at 
> org.apache.solr.handler.SnapPuller$FileFetcher.fetchFile(SnapPuller.java:985)
> at 
> org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:627)
> at 
> org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:331)
> at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:297)
> at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:175)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at 
> java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
>
> 22:44:10SEVERE  ReplicationHandler  SnapPull failed
> :org.apache.solr.common.SolrException: Unable to download
> _3_Lucene40_0.tip completely. Downloaded 0!=170 at
> org.apache.solr.handler.SnapPuller$FileFetcher.cleanup(SnapPuller.java:1115)
> at 
> org.apache.solr.handler.SnapPuller$FileFetcher.fetchFile(SnapPuller.java:999)
> at org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:627)
> at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:331)
> at 
> org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:297)
> at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:175) at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150) at
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)


Problem after replication using solr 1.4

2012-09-04 Thread ravicv
 Hi,

We have configured replication in our solr setup.

After replication, the master index size grows to double even though
maxNumberOfBackups is not configured in my solrconfig.xml.

 Master replication handler



optimize



 Slave replication handler



${solr.master.security_pricing.url}
00:00:20
5000
1
 


Does solr take the default maxNumberOfBackups as 1? If so, how can I keep
my index at its original size, or is there a way to delete the backup
after replication is completed?

My index size at master after the first replication is: 246MB.
My index size at master after the second replication is: 492MB.

Is there any way to clear backups on the master after replication is done?

 I am using solr 1.4.

Thanks,
Ravi




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Problem-after-replication-using-solr-1-4-tp4005501.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: AW: AW: auto completion search with solr using NGrams in SOLR

2012-09-04 Thread aniljayanti
Hi,

thanks,

I'm sending my whole configuration from the schema.xml and solrconfig.xml files.


schema.xml
---



  
  
  
  


 
 
 

  







 

*
solrconfig.xml
-


  suggest
  org.apache.solr.spelling.suggest.Suggester
  org.apache.solr.spelling.suggest.fst.FSTLookup  
  suggest
  autocomplete_text
  true
  0.005
  true
  true

   
  jarowinkler 
  lowerfilt 
  org.apache.lucene.search.spell.JaroWinklerDistance 
  spellchecker 
   
 edgytext 
  
  
  


  true
  suggest
  true
  5
  false
  5
  1000
  true


  suggest
  query

  

URL : suggest/?q="michael b"
-
Response : 

 
 
 
  0 
  3 
  
   
 
 
 
  10 
  1 
  8 
  
  michael bully herbig 
  michael bolton 
  michael bolton: arias 
  michael falch 
  michael holm 
  michael jackson 
  michael neale 
  michael penn 
  michael salgado 
  michael w. smith 
  
  
 
  10 
  9 
  10 
  
  b in the mix - the remixes 
  b2k 
  backstreet boys 
  backyard babies 
  banda maguey 
  barbra streisand 
  barry manilow 
  benny goodman 
  beny more 
  beyonce 
  
  
  "michael bully herbig b in the mix - the
remixes" 
  
  
  



--
View this message in context: 
http://lucene.472066.n3.nabble.com/auto-completion-search-with-solr-using-NGrams-in-SOLR-tp3998559p4005490.html
Sent from the Solr - User mailing list archive at Nabble.com.
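
Since the XML in the post above was stripped by the archive, here is a rough sketch of what a Suggester configuration of this kind typically looks like in Solr 3.x (component and handler names are assumed from the values that survived, not the poster's exact settings):

```xml
<searchComponent name="suggest" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">suggest</str>
    <str name="classname">org.apache.solr.spelling.suggest.Suggester</str>
    <str name="lookupImpl">org.apache.solr.spelling.suggest.fst.FSTLookup</str>
    <str name="field">autocomplete_text</str>
    <float name="threshold">0.005</float>
    <str name="buildOnCommit">true</str>
  </lst>
</searchComponent>

<requestHandler name="/suggest" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">suggest</str>
    <str name="spellcheck.count">5</str>
    <str name="spellcheck.collate">true</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```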


Solr 4.0 BETA Replication problems on Tomcat

2012-09-04 Thread Ravi Solr
Hello,
I have a very simple setup: one master and one slave configured
as below, but replication keeps failing with the stacktrace shown
below. Note that 3.6 works fine on the same machines, so I am thinking
that I am missing something in configuration with regards to solr
4.0... can somebody kindly let me know if I am missing something? I am
running SOLR 4.0 on Tomcat-7.0.29 with Java6. FYI, I never had any
problem with SOLR on glassfish; this is the first time I am using it on
Tomcat.

On Master


 
  commit
  optimize
  schema.xml,stopwords.txt,synonyms.txt
  00:00:10
  


On Slave


 
http://testslave:8080/solr/mycore/replication

00:00:50
internal
5000
1
 



Error

22:44:10WARNING SnapPuller  Error in fetching packets

java.util.zip.ZipException: unknown compression method
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:147)
at 
org.apache.solr.common.util.FastInputStream.readWrappedStream(FastInputStream.java:79)
at 
org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:88)
at 
org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:124)
at 
org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:149)
at 
org.apache.solr.common.util.FastInputStream.readFully(FastInputStream.java:144)
at 
org.apache.solr.handler.SnapPuller$FileFetcher.fetchPackets(SnapPuller.java:1024)
at 
org.apache.solr.handler.SnapPuller$FileFetcher.fetchFile(SnapPuller.java:985)
at 
org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:627)
at 
org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:331)
at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:297)
at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:175)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at 
java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)

22:44:10SEVERE  ReplicationHandler  SnapPull failed
:org.apache.solr.common.SolrException: Unable to download
_3_Lucene40_0.tip completely. Downloaded 0!=170 at
org.apache.solr.handler.SnapPuller$FileFetcher.cleanup(SnapPuller.java:1115)
at org.apache.solr.handler.SnapPuller$FileFetcher.fetchFile(SnapPuller.java:999)
at org.apache.solr.handler.SnapPuller.downloadIndexFiles(SnapPuller.java:627)
at org.apache.solr.handler.SnapPuller.fetchLatestIndex(SnapPuller.java:331)
at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:297)
at org.apache.solr.handler.SnapPuller$1.run(SnapPuller.java:175) at
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRunAndReset(FutureTask.java:317)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:150) at
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$101(ScheduledThreadPoolExecutor.java:98)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.runPeriodic(ScheduledThreadPoolExecutor.java:180)
at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:204)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)


Re: Using a sum of fields in a filter query

2012-09-04 Thread Chris Hostetter

: The piece I was also missing as well was to add:
: 

a) the FunctionQParserPlugin is already registered by default using the 
name "func" -- you shouldn't need to register it explicitly unless you 
want to use it with a custom name.

b) the FunctionQParserPlugin is not even required in order to use 
"FunctionRangeQParserPlugin" (aka: "frange") that Rafał suggested for 
filtering by function range...

: > fq={!frange l=0 u=100}sum(fielda, fieldb, fieldc)



-Hoss
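
Putting Rafał's suggestion into a full request, a filter on the sum of three numeric fields looks roughly like this (host, core, and field names are placeholders):

```
http://localhost:8983/solr/select?q=*:*&fq={!frange l=0 u=100}sum(fielda,fieldb,fieldc)
```

Note that the fq value usually needs URL-encoding (spaces, braces) when sent from client code rather than typed into a browser.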

Re: Using a sum of fields in a filter query

2012-09-04 Thread Mark Mandel
Thanks!

The piece I was also missing as well was to add:


To my solrconfig.xml.

Once I did that, it all worked perfectly!

Much appreciated!

Mark



On Tue, Sep 4, 2012 at 5:25 PM, Rafał Kuć  wrote:

> Hello!
>
> Try something like
>
> fq={!frange l=0 u=100}sum(fielda, fieldb, fieldc)
>
> --
> Regards,
>  Rafał Kuć
>  Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch
>
> > Hey all,
>
> > I have a series of fields with numeric values in my solr index.
>
> > What I want to be able to do is the equivalent of something like below in
> > my filter query (fq) parameter:
>
> > sum(fielda, fieldb, fieldc):[0 to 100]
>
> > So the sum of the fields is between 0 and 100.
>
> > Is there some way to do this in SOLR on a FQ?  I've hunted high and low
> and
> > haven't been able to see the correct syntax for it.
>
> > If someone could point me in the right direction, I would greatly
> > appreciated it.
>
> > Thanks!
>
> > Mark
>
>


-- 
E: mark.man...@gmail.com
T: http://www.twitter.com/neurotic
W: www.compoundtheory.com

2 Devs from Down Under Podcast
http://www.2ddu.com/


Re: Adding config to SolrCloud without creating any shards/slices

2012-09-04 Thread Mark Miller
FYI - this should be no problem now. You can upload config and make
config -> collection links before starting Solr - using the ZkCLI cmd
tool or just modifying zk yourself.
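
As a concrete illustration of the ZkCLI route mentioned above, uploading a config set and linking it to a collection looks roughly like this in Solr 4.x (the classpath and zkhost values are placeholders for your own installation):

```
java -classpath "solr/WEB-INF/lib/*" org.apache.solr.cloud.ZkCLI \
  -cmd upconfig -zkhost localhost:2181 -confdir /path/to/conf -confname myconf

java -classpath "solr/WEB-INF/lib/*" org.apache.solr.cloud.ZkCLI \
  -cmd linkconfig -zkhost localhost:2181 -collection collection1 -confname myconf
```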

On Fri, May 18, 2012 at 10:12 AM, Mark Miller  wrote:
>
> On May 18, 2012, at 3:06 AM, Per Steffensen wrote:
>
>> First of all, sorry about the subject of this discussion. It should have 
>> been something like "Adding config to SolrCloud without starting a Solr 
>> server"
>>
>> Mark Miller skrev:
>>> k
>>> On May 16, 2012, at 5:35 AM, Per Steffensen wrote:
>>>
>>>
 Hi

 We want to create a Solr config in ZK during installation of our product, 
 but we dont want to create any shards in that phase. We will create shards 
 from our application when it starts up and also automatically maintain the 
 set of shards from our application (which uses SolrCloud). The only way we 
 know to create a Solr config in ZK is to spin up a  Solr with  system 
 properties zkHost, bootstrap_confdir and collection.configName. Is there 
 another, more API-ish, way of creating a Solr config in ZK?

 Regards, Per Steffensen

>>>
>>> I've started some work on this, but I have not finished.
>>>
>>> There is a main method in ZkController that has some initial code. 
>>> Currently it just lets you upload a specifically named config set directory 
>>> - I would also like to add the same multi core config set upload option we 
>>> have on startup - where it reads solr.xml, finds all the config dirs and 
>>> uploads them, and links each collection to a config set named after it.
>>>
>> Yeah ok, I just want the config created - no shards/slices/collections.
>
> That's all that is created.
>
>>> Technically, you could use any tool to set this up - there are a variety of 
>>> options in the zk world - you just have to place the config files under the 
>>> right node.
>> I would really want to do it through Solr. This is the correct way, I think. 
>> So that, when you change your "strategy" e.g. location or format of configs 
>> in ZK, I will automatically inherit that.
>
> We have to commit to some level of back compat support with our ZK layout 
> regardless. We expect to expose it.
>
>>> There is one other tricky part though - the collection has to be set to the 
>>> right config set name. This is specified on the collection node in 
>>> ZooKeeper. When creating a new collection, you can specify this as a param. 
>>> If none is set and there is only one config set, that one config set is 
>>> used. However, some link must be made, and it is not done automatically 
>>> with your initial collections in solr.xml unless there is only one config 
>>> set.
>>>
>> I know about that, and will use Solr to create collections. I just want the 
>> config established in ZK before that, and not create the config "during the 
>> process of creating a collection".
>
> Yeah, but doing what you want is tricky because of that point.
>
>>> So now I'm thinking perhaps we should default the config set name to the 
>>> collection name. Then if you simply use the collection name initially when 
>>> you upload the set, no new linking is needed. If you don't like that, you 
>>> can explicitly override what config set to use. Convention would be to name 
>>> your config sets after your collection name, but extra work would allow you 
>>> to do whatever you want.
>>>
>> I want several collections to use the same config, so I would have to do 
>> that extra work.
>
> I'm not sure I have a great solution yet then. How are you creating your 
> initial collections?
>
> If you are creating them on the fly with solrj (a collections api coming soon 
> by the way), then you can simply give the collection set name to use when you 
> do.
>
> If you are creating them in solr.xml so that they exist on startup, and some 
> have to share config sets, I think we need to add something else. Perhaps a 
> hint property you could add to each core in solr.xml that caused a link to be 
> made when the core is first started? Since the config sets will be uploaded 
> first, we need some way of indicating to each collection which set to end up 
> using.
>
>>> You can find an example of the ZkController main method being used in 
>>> solr/cloud-dev scripts. The one caveat is that we pass an extra param to 
>>> solrhome and briefly run a ZkServer within the ZkController#main method 
>>> since we don't have an external ensemble. Normally this would not make 
>>> sense and you would want to leave that out. I need to clean this all up 
>>> (the ZkController params) and document it on the wiki as soon as I make 
>>> these couple tweaks though.
>>>
>> Ok
>
> I've actually made the changes that I said I would. So now, this would be 
> pretty easy if each collection had its own config set. Let's work out how to 
> make your case a little easier as well.
>
>>
>> Thanks, Mark
>>> - Mark Miller
>>> lucidimagination.com
>>>
>>
>> Regards, Per Steffensen
>>>
>>>
>>>
>>>
>>>
>>>
>>>

Re: Replication lag after cache optimizations

2012-09-04 Thread Chris Hostetter

: However, with these modifications we noticed an important replication 

I'm not sure how exactly you are measuring/defining "replication lag" but 
if you mean "lag in how long until the newly replicated documents are 
visible in searches" then that _may_ be fairly easy to explain...

: My previous cache settings (fieldValueCache was disabled):

FYI: for historical reasons, there is always a fieldValueCache, even if 
you don't declare one

https://wiki.apache.org/solr/SolrCaching#fieldValueCache

: 

you have gone from using the hardcoded default fieldValueCache (which had 
no warming configured at all) to configuring an autowarmCount of 1024 -- 
you should easily be able to see in the logs that the "newSearcher" time 
on your machines is much longer since this change, as it autowarms those 
fieldValueCache entries.

This means that, compared to your previous settings, the "first 
request" that attempts to use those fieldValueCache entries should be 
much faster than before, but the trade-off is that you are spending 
the time to generate those cache entries "up-front" before you allow any 
requests to see the updated index at all.  

FWIW: the entries in the fieldValueCache are keyed off of field name (they 
are very big UnInvertedField objects, and there are typically very few of them 
-- this is why, IIRC, yonik recommends no autowarming of fieldValueCache 
at all) so having a size of 16384 and an autowarmCount of 1024 is probably 
overkill ... I suspect if you check the actual size at runtime you'll see 
that there are way fewer entries than that -- if you have anywhere close 
to 16384 entries I would love to hear more about your use case.


-Hoss
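
Following the sizing advice above, a more conservative declaration would look something like this (an illustrative sketch based on the Solr caching wiki, not the poster's config, which was lost in archiving):

```xml
<!-- small size and no autowarming: entries are per-field UnInvertedField
     objects, so only a handful ever exist -->
<fieldValueCache class="solr.FastLRUCache"
                 size="64"
                 autowarmCount="0"
                 showItems="16"/>
```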


Re: Missing Features - AndMaybe and Otherwise

2012-09-04 Thread Lance Norskog
Solr uses Lucene - everything I described works in Solr with text queries.

- Original Message -
| From: "Ramzi Alqrainy" 
| To: solr-user@lucene.apache.org
| Sent: Tuesday, September 4, 2012 12:30:44 AM
| Subject: Re: Missing Features - AndMaybe and Otherwise
| 
| Many thanks for your email, but what about Solr? and how we can
| handle my
| case ?
| 
| Thanks,
| 
| 
| 
| --
| View this message in context:
| 
http://lucene.472066.n3.nabble.com/Missing-Features-AndMaybe-and-Otherwise-tp4005059p4005163.html
| Sent from the Solr - User mailing list archive at Nabble.com.
| 


Re: SolrCloud - Basic Auth - 401 error

2012-09-04 Thread Sudhakar Maddineni
Thanks Mark!

>  I changed the filter url pattern to {my core name}/admin/* instead of
> /admin/* and it worked.


-Sudhakar.

On Tue, Sep 4, 2012 at 12:33 PM, Mark Miller  wrote:

> Don't protect /admin/cores or (admin/collections probably).
>
> On Tue, Sep 4, 2012 at 2:54 PM, Sudhakar Maddineni
>  wrote:
> > Hi,
> >   I set up a two shard cluster using tomcat 6.0.35 with solr 4.0.0-BETA
> > version and zookeeper 3.3.4. I wanted to secure the solr admin page and
> > added BASIC auth to the container so that all admin requests to the index
> > will be protected. I did this by adding the security constraint tag below in
> > web.xml in the {tomcat_home}/conf directory. Also, I defined the corresponding
> > roles and user credentials in tomcat-users.xml at the same location. After
> > doing this, I could see that the admin page is successfully secured.
> >
> > ISSUE: But, the issue is that replication is not working, getting *401-
> > unauthorized* access when the replica tries to connect to the leader. Is there
> > any workaround to fix this issue?
> >
> > Appreciate your help.
> >
> > 
> > 
> >   
> > Solr authenticated application
> >   
> >   /admin/*
> >   GET
> >   POST
> > 
> > 
> >   solradmin
> > 
> >   
> >
> > BASIC
> > Basic Authentication
> >   
> >   
> > My role
> > solradmin
> >   
> >
> > Thanks,Sudhakar.
>
>
>
> --
> - Mark
>


Re: Maximum index size on single instance of Solr

2012-09-04 Thread Michael Brandt
Thanks everyone!

On Thu, Aug 30, 2012 at 11:11 AM, pravesh  wrote:

> We have a 48GB index size on a single shard. 20+ million documents.
> Recently
> migrated to SOLR 3.5
> But we have a cluster of SOLR servers for hosting searches. But I do see a need
> to migrate to SOLR sharding going forward.
>
>
> Thanx
> Pravesh
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Maximum-index-size-on-single-instance-of-Solr-tp4004171p4004418.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: Setting up two cores in solr.xml for Solr 4.0

2012-09-04 Thread Chris Hostetter

:   

I'm pretty sure what you have above tells solr that for core MYCORE_test it 
should use the instanceDir MYCORE but ignore the <dataDir> in that 
solrconfig.xml and use the one you specified.

This on the other hand...
 
: >   
: > 
: >

...tells solr that the MYCORE_test SolrCore should use the instanceDir 
MYCORE, and when parsing that solrconfig.xml file it should set the 
variable ${dataDir} to be "MYCORE_test" -- but if your solrconfig.xml file 
does not ever refer to the ${dataDir} variable, it would not have any effect.

so the question becomes -- what does your solrconfig.xml look like?


-Hoss
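
To make the two variants above concrete, here is a hedged sketch of both forms in solr.xml (core and path names are illustrative, not taken from the poster's setup):

```xml
<cores adminPath="/admin/cores">
  <core name="MYCORE" instanceDir="MYCORE"/>

  <!-- form 1: dataDir attribute on the core element overrides
       whatever <dataDir> says in solrconfig.xml -->
  <core name="MYCORE_test" instanceDir="MYCORE" dataDir="MYCORE_test/data"/>

  <!-- form 2: a nested property only takes effect if solrconfig.xml
       actually references ${dataDir} somewhere -->
  <!--
  <core name="MYCORE_test" instanceDir="MYCORE">
    <property name="dataDir" value="MYCORE_test/data"/>
  </core>
  -->
</cores>
```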


Re: Setting up two cores in solr.xml for Solr 4.0

2012-09-04 Thread Mark Miller
Sounds weird. It's just parsing xml with an xml parser, so offhand, I
don't see why that should matter.

On Tue, Sep 4, 2012 at 3:09 PM, Paul  wrote:
> By trial and error, I found that you evidently need to put that
> property inline, so this version works:
>
> 
>   
>   
> 
>
> Is the documentation here in error? http://wiki.apache.org/solr/CoreAdmin
>
> On Tue, Sep 4, 2012 at 2:50 PM, Paul  wrote:
>> I'm trying to set up two cores that share everything except their
>> data. (This is for testing: I want to create a parallel index that is
>> used when running my testing scripts.) I thought that would be
>> straightforward, and according to the documentation, I thought the
>> following would work:
>>
>> 
>>   
>>   
>> 
>>
>> 
>>
>> I thought that would create a directory structure like this:
>>
>> solr
>>   MYCORE
>> conf
>> data
>>   index
>> MYCORE_test
>>   index
>>
>> But it looks like both of the cores are sharing the same index and the
>> MYCORE_test directory is not created. In addition, I get the following
>> in the log file:
>>
>> INFO: [MYCORE_test] Opening new SolrCore at solr/MYCORE/,
>> dataDir=solr/MYCORE/data/
>> ...
>> WARNING: New index directory detected: old=null new=solr/MYCORE/data/index/
>>
>> What am I not understanding?



-- 
- Mark


Re: SolrCloud - Basic Auth - 401 error

2012-09-04 Thread Mark Miller
Don't protect /admin/cores or (admin/collections probably).

On Tue, Sep 4, 2012 at 2:54 PM, Sudhakar Maddineni
 wrote:
> Hi,
>   I set up a two shard cluster using tomcat 6.0.35 with solr 4.0.0-BETA
> version and zookeeper 3.3.4. I wanted to secure the solr admin page and
> added BASIC auth to the container so that all admin requests to the index
> will be protected. I did this by adding the security constraint tag below in
> web.xml in the {tomcat_home}/conf directory. Also, I defined the corresponding
> roles and user credentials in tomcat-users.xml at the same location. After
> doing this, I could see that the admin page is successfully secured.
>
> ISSUE: But, the issue is that replication is not working, getting *401-
> unauthorized* access when the replica tries to connect to the leader. Is there
> any workaround to fix this issue?
>
> Appreciate your help.
>
> 
> 
>   
> Solr authenticated application
>   
>   /admin/*
>   GET
>   POST
> 
> 
>   solradmin
> 
>   
>
> BASIC
> Basic Authentication
>   
>   
> My role
> solradmin
>   
>
> Thanks,Sudhakar.



-- 
- Mark


Re: Setting up two cores in solr.xml for Solr 4.0

2012-09-04 Thread Paul
By trial and error, I found that you evidently need to put that
property inline, so this version works:


  
  


Is the documentation here in error? http://wiki.apache.org/solr/CoreAdmin

On Tue, Sep 4, 2012 at 2:50 PM, Paul  wrote:
> I'm trying to set up two cores that share everything except their
> data. (This is for testing: I want to create a parallel index that is
> used when running my testing scripts.) I thought that would be
> straightforward, and according to the documentation, I thought the
> following would work:
>
> 
>   
>   
> 
>
> 
>
> I thought that would create a directory structure like this:
>
> solr
>   MYCORE
> conf
> data
>   index
> MYCORE_test
>   index
>
> But it looks like both of the cores are sharing the same index and the
> MYCORE_test directory is not created. In addition, I get the following
> in the log file:
>
> INFO: [MYCORE_test] Opening new SolrCore at solr/MYCORE/,
> dataDir=solr/MYCORE/data/
> ...
> WARNING: New index directory detected: old=null new=solr/MYCORE/data/index/
>
> What am I not understanding?


SolrCloud - Basic Auth - 401 error

2012-09-04 Thread Sudhakar Maddineni
Hi,
  I set up a two shard cluster using tomcat 6.0.35 with solr 4.0.0-BETA
version and zookeeper 3.3.4. I wanted to secure the solr admin page and
added BASIC auth to the container so that all admin requests to the index
will be protected. I did this by adding the security constraint tag below in
web.xml in the {tomcat_home}/conf directory. Also, I defined the corresponding
roles and user credentials in tomcat-users.xml at the same location. After
doing this, I could see that the admin page is successfully secured.

ISSUE: But, the issue is that replication is not working, getting *401-
unauthorized* access when the replica tries to connect to the leader. Is there
any workaround to fix this issue?

Appreciate your help.



  
Solr authenticated application
  
  /admin/*
  GET
  POST


  solradmin

  
   
BASIC
Basic Authentication
  
  
My role
solradmin
  

Thanks,Sudhakar.
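
The constraint listing above was stripped by the archive; it corresponds to a standard Servlet security constraint. A minimal web.xml sketch (element names taken from the Servlet spec, values from the surviving text) would be:

```xml
<security-constraint>
  <web-resource-collection>
    <web-resource-name>Solr authenticated application</web-resource-name>
    <url-pattern>/admin/*</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
    <role-name>solradmin</role-name>
  </auth-constraint>
</security-constraint>

<login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>Basic Authentication</realm-name>
</login-config>

<security-role>
  <description>My role</description>
  <role-name>solradmin</role-name>
</security-role>
```

As noted in the replies, the broad /admin/* pattern also catches /admin/cores, which replication and SolrCloud requests need; scoping the pattern to the core (e.g. /mycore/admin/*) avoids the 401.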


Setting up two cores in solr.xml for Solr 4.0

2012-09-04 Thread Paul
I'm trying to set up two cores that share everything except their
data. (This is for testing: I want to create a parallel index that is
used when running my testing scripts.) I thought that would be
straightforward, and according to the documentation, I thought the
following would work:


  
  

   


I thought that would create a directory structure like this:

solr
  MYCORE
conf
data
  index
MYCORE_test
  index

But it looks like both of the cores are sharing the same index and the
MYCORE_test directory is not created. In addition, I get the following
in the log file:

INFO: [MYCORE_test] Opening new SolrCore at solr/MYCORE/,
dataDir=solr/MYCORE/data/
...
WARNING: New index directory detected: old=null new=solr/MYCORE/data/index/

What am I not understanding?


Re: AW: AW: auto completion search with solr using NGrams in SOLR

2012-09-04 Thread Kiran Jayakumar
I wonder why; I had a similar use case and it works great for me. If you can
send a snapshot of the analysis for a sample string (say "hello world " for
indexing, "hel" - positive case, "wo" - negative case for querying), then
we can see what's going on. Also the debug query output would be helpful.


On Fri, Aug 31, 2012 at 10:28 PM, aniljayanti wrote:

> Hi,
>
> Thanks,
>
> As I already used "KeywordTokenizerFactory" in my earlier posts.
>
>  positionIncrementGap="100"
> omitNorms="true">
> 
>   *
>   
>replacement=" " replace="all"/>
>maxGramSize="15" side="front" />
> *   
>
>  *
>  
>   replacement=" " replace="all"/>
> *   
>   
>
> getting same results.
>
> AnilJayanti
>
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/auto-completion-search-with-solr-using-NGrams-in-SOLR-tp3998559p4004871.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


add dictionary

2012-09-04 Thread Emiliana Suci
How do I add a dictionary in lucene? Please give an example.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/add-dictionary-tp4005319.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: stem porter with tokenizer..

2012-09-04 Thread Emiliana Suci
thanx a lot :)



--
View this message in context: 
http://lucene.472066.n3.nabble.com/stem-porter-with-tokenizer-tp4004913p4005316.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Maximum Phrase Length in edismax

2012-09-04 Thread Jack Krupansky

I tried the following query successfully with both Solr 3.6.1 and 4.0-BETA:

http://localhost:8983/solr/select/?debugQuery=true&defType=edismax&qf=name+features+id&q="Notes,+Calendar,+Phone+book,+Hold+button,+Date+display,+Photo+wallet,+Built-in+games,+JPEG+photo+playback,+Upgradeable+firmware,+USB+2.0+compatibility,+Playback+speed+control,+Rechargeable+capability,+Battery+level+indication";

That certainly has more than 128 characters in the quoted phrase.

Can you give us a query that fails? Also provide the relevant field types. 
Maybe you have an analyzer that generates different terms between index and 
query analysis.


Also add &debugQuery=true to your Solr query request so we can see what 
Lucene query was generated.


-- Jack Krupansky
-Original Message- 
From: llee

Sent: Tuesday, September 04, 2012 11:07 AM
To: solr-user@lucene.apache.org
Subject: Maximum Phrase Length in edismax

I have a site where users need to be able to execute queries that contain
long quoted strings. The site is using Apache Solr with the EDismax parser
enabled. When users enter phrases that have more than ~128 characters, Solr
fails to return any results. When they enter shorter phrases, Solr returns
valid results. It appears that Solr is imposing a limit on phrase lengths.
Is this interpretation correct? If so, is it possible to increase this
limit?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Maximum-Phrase-Length-in-edismax-tp4005290.html



Re: StreamingUpdateSolrServer - Failure during indexing

2012-09-04 Thread Mark Miller
You can override the method that logs the error, then parse the message
for the doc ids?

On Tue, Sep 4, 2012 at 6:03 AM, Kissue Kissue  wrote:
> Hi Lance,
>
> As far as i can see, one document failing does not fail the entire update.
> From my logs i can see the error logged in the logs but indexing just
> continues to the next document. This happens with the
> StreamingUpdateSolrServer which is multithreaded.
>
> Thanks.
>
> On Tue, Jun 19, 2012 at 9:58 AM, Lance Norskog  wrote:
>
>> When one document fails, the entire update fails, right? Is there now
>> a mode where successful documents are added and failed docs are
>> dropped?
>>
>> If you want to know if a document is in the index, search for it!
>> There is no other guaranteed way.
>>
>> On Sun, Jun 17, 2012 at 3:14 PM, Jack Krupansky 
>> wrote:
>> > You could instantiate an anonymous instance of StreamingUpdateSolrServer
>> > that has a "handleError" method that then parses the exception message to
>> > get the request URI. If there isn't enough information there, you could
>> add
>> > a dummy request option to your original request that was a document
>> > identifier of your own.
>> >
>> > Pseudo code:
>> >
>> >   StreamingUpdateSolrServer myServer = new
>> StreamingUpdateSolrServer(...){
>> > void handleError( Throwable ex ){
>> >   super.handleError(ex);
>> >   // extract text from ex.getMessage()
>> > }
>> >   };
>> >
>> > Included in the message text is "request: " followed by the URI for the
>> HTTP
>> > method, which presumably has the request options (unless they were
>> encoded
>> > in the body of the request as multipart form data.)
>> >
>> > -- Jack Krupansky
>> >
>> > -Original Message- From: Kissue Kissue
>> > Sent: Sunday, June 17, 2012 7:40 AM
>> > To: solr-user@lucene.apache.org
>> > Subject: StreamingUpdateSolrServer - Failure during indexing
>> >
>> >
>> > Hi,
>> >
>> > Using the StreamingUpdateSolrServer, does anybody know how i can get the
>> > list of documents that failed during indexing so maybe i can index them
>> > later? Is it possible? I am using Solr 3.5 with SolrJ.
>> >
>> > Thanks.
>>
>>
>>
>> --
>> Lance Norskog
>> goks...@gmail.com
>>



-- 
- Mark

http://www.lucidworks.com
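Putting the two suggestions together, the message-parsing step might look like the stdlib-only sketch below. The "request: <uri>" message shape and the myDocId parameter are assumptions drawn from this thread, not a documented SolrJ contract; in practice you would call something like this from an overridden handleError(Throwable), as in the pseudocode above.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of the message-parsing step suggested in this thread. The
// "request: <uri>" shape is an assumption based on the discussion above,
// not a documented SolrJ contract; adjust the pattern to whatever your
// logs actually contain.
public class RequestUriParser {
    private static final Pattern REQUEST_URI = Pattern.compile("request:\\s*(\\S+)");

    // Returns the URI following "request: " in the message, or null if absent.
    public static String extractRequestUri(String message) {
        Matcher m = REQUEST_URI.matcher(message);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        String msg = "error while flushing, request: "
                   + "http://localhost:8983/solr/update?myDocId=doc-42";
        System.out.println(extractRequestUri(msg));
        // -> http://localhost:8983/solr/update?myDocId=doc-42
    }
}
```

From the extracted URI you could then pull out your own dummy request parameter (myDocId above) and re-queue that document for a later retry.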


Maximum Phrase Length in edismax

2012-09-04 Thread llee
I have a site where users need to be able to execute queries that contain
long quoted strings. The site is using Apache Solr with the EDismax parser
enabled. When users enter phrases that have more than ~128 characters, Solr
fails to return any results. When they enter shorter phrases, Solr returns
valid results. It appears that Solr is imposing a limit on phrase lengths.
Is this interpretation correct? If so, is it possible to increase this
limit?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Maximum-Phrase-Length-in-edismax-tp4005290.html


Re: New SearchRaquestHandler (distinct field value in successive results)

2012-09-04 Thread Jamel ESSOUSSI
I have tested this (grouping), but the problem is:

The grouping does not give me the results one by one.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/New-SearchRaquestHandler-distinct-field-value-in-successive-results-tp4005253p4005256.html


Re: New SearchRaquestHandler (distinct field value in successive results)

2012-09-04 Thread Rafał Kuć
Hello!

Look at http://wiki.apache.org/solr/FieldCollapsing

-- 
Regards,
 Rafał Kuć
 Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch

> Hi,

> I have the following schema

> field1 = shop_name
> field2 = offer_id
> field3 = offer_title
> field4 = offer_price


> In the search result, I would not have tow successive results that have the
> same shop_name.

> I should develop a new SearchRequestHandler ?



> Thanks for your help





> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/New-SearchRaquestHandler-distinct-field-value-in-successive-results-tp4005253.html
> Sent from the Solr - User mailing list archive at Nabble.com.
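Concretely, the field collapsing parameters from that wiki page can return at most one document per shop; the URL below is only illustrative (host, core, and field names are assumptions):

```
http://localhost:8983/solr/select?q=*:*&group=true&group.field=shop_name&group.limit=1
```

Note that this collapses each shop to a single result rather than merely preventing adjacent duplicates, and grouping expects shop_name to be indexed as a single term per document (e.g. a string field).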



New SearchRaquestHandler (distinct field value in successive results)

2012-09-04 Thread Jamel ESSOUSSI
Hi,

I have the following schema

field1 = shop_name
field2 = offer_id
field3 = offer_title
field4 = offer_price


In the search results, I would not like two successive results that have the
same shop_name.

Should I develop a new SearchRequestHandler?



Thanks for your help





--
View this message in context: 
http://lucene.472066.n3.nabble.com/New-SearchRaquestHandler-distinct-field-value-in-successive-results-tp4005253.html


exception in highlighter when using phrase search

2012-09-04 Thread Yoni Amir
I hit this problem with Solr 4 BETA and the highlighting component.

When I search for a phrase, such as "foo bar", everything works OK.
When I add highlighting, I get the exception below.
You can see from the first log line that I am searching only one field  
(all_text), but what is not visible in the log is that I am highlighting on all 
fields in the document, with hl.requireFieldMatch=false and hl.fl=*.

INFO  (SolrCore.java:1670) - [rcmCore] webapp=/solr path=/select 
params={fq={!edismax}module:"Alerts"+and+bu:"abcd+Region1"&qf=attachment&qf=all_text&version=2&rows=20&wt=javabin&start=0&q="foo
 bar"} hits=103 status=500 QTime=38 
ERROR (SolrException.java:104) - null:java.lang.NullPointerException
   at 
org.apache.lucene.analysis.util.CharacterUtils$Java5CharacterUtils.fill(CharacterUtils.java:191)
   at 
org.apache.lucene.analysis.util.CharTokenizer.incrementToken(CharTokenizer.java:152)
   at 
org.apache.lucene.analysis.miscellaneous.WordDelimiterFilter.incrementToken(WordDelimiterFilter.java:209)
   at 
org.apache.lucene.analysis.util.FilteringTokenFilter.incrementToken(FilteringTokenFilter.java:50)
   at 
org.apache.lucene.analysis.miscellaneous.RemoveDuplicatesTokenFilter.incrementToken(RemoveDuplicatesTokenFilter.java:54)
   at 
org.apache.lucene.analysis.core.LowerCaseFilter.incrementToken(LowerCaseFilter.java:54)
   at 
org.apache.solr.highlight.TokenOrderingFilter.incrementToken(DefaultSolrHighlighter.java:629)
   at 
org.apache.lucene.analysis.CachingTokenFilter.fillCache(CachingTokenFilter.java:78)
   at 
org.apache.lucene.analysis.CachingTokenFilter.incrementToken(CachingTokenFilter.java:50)
   at 
org.apache.lucene.search.highlight.Highlighter.getBestTextFragments(Highlighter.java:225)
   at 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlightingByHighlighter(DefaultSolrHighlighter.java:510)
   at 
org.apache.solr.highlight.DefaultSolrHighlighter.doHighlighting(DefaultSolrHighlighter.java:401)
   at 
org.apache.solr.handler.component.HighlightComponent.process(HighlightComponent.java:136)
   at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:206)
   at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1656)
   at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:454)
   at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:275)
   at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
   at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
   at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
   at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
   at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
   at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
   at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
   at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
   at 
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
   at 
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
   at 
org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
   at java.lang.Thread.run(Thread.java:736)

Any idea?

Thanks,
Yoni


Re: Start up errors

2012-09-04 Thread Jack Krupansky
As the exception suggests, you have an XML syntax error. Look for "{1}" and 
"source" and correct the error. Compare your schema.xml to the Solr example 
schema.xml to see how it is different.
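As a hypothetical illustration (the actual attribute in your schema.xml will differ), this is the kind of mistake that triggers the "Open quote is expected" error:

```xml
<!-- Broken: the source attribute value is missing its quotes -->
<copyField source=name dest="text"/>

<!-- Fixed -->
<copyField source="name" dest="text"/>
```

The parser's "{1}" placeholder in the message stands for the offending attribute, so search your schema.xml for an unquoted "source" attribute.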


-- Jack Krupansky

-Original Message- 
From: Tolga

Sent: Tuesday, September 04, 2012 6:22 AM
To: solr-user@lucene.apache.org
Subject: Start up errors

Hi,

When I started Solr, I got the following errors. The same are at
http://www.example.com:8983/solr

SEVERE: Exception during parsing file:
schema:org.xml.sax.SAXParseException: Open quote is expected for
attribute "{1}" associated with an  element type  "source".
at
com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:195)
at
com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:174)
at
com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:388)
at
com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1414)
at
com.sun.org.apache.xerces.internal.impl.XMLScanner.scanAttributeValue(XMLScanner.java:807)
at
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanAttribute(XMLNSDocumentScannerImpl.java:460)
at
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:277)
at
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2756)
at
com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:647)
at
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:140)
at
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
at
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
at
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
at
com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
at
com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:232)
at
com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)
at org.apache.solr.core.Config.(Config.java:159)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:418)
at org.apache.solr.schema.IndexSchema.(IndexSchema.java:123)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:478)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:332)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:216)
at
org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:161)
at
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96)
at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
at
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at
org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
at
org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at
org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at
org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at
org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at
org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at
org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at
org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
at org.mortbay.jetty.Server.doStart(Server.java:224)
at
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.mortbay.start.Main.invokeMain(Main.java:194)
at org.mortbay.start.Main.start(Main.java:534)
at org.mortbay.start.Main.start(Main.java:441)
at org.mortbay.start.Main.main(Main.java:119)

4/09/2012 1:14:29 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Schema Parsing Failed:
Open quote is expected for attribute "{1}

Doubts in Result Grouping in solr 3.6.1

2012-09-04 Thread mechravi25
Hi,

I am currently using Solr 3.6.1 and, for indexing data, I am using the data
import handler from 3.5 because of the reason posted in the following forum
link:
http://lucene.472066.n3.nabble.com/Dataimport-Handler-in-solr-3-6-1-td4001149.html

I am trying to achieve result grouping based on a field "grpValue" which has
values like "Name XYZ|Company". In total 359 docs were indexed, and the field
"grpValue" in all 359 docs contains the word "Company" in its value.

I put the following in my schema.xml to split the words at both index and
query time:


   


  

 
 

 
  

 



I am trying to split the words on a single space or a “|” symbol using
pattern="\s+|\|" in the PatternTokenizerFactory.

When I used the analysis option in Solr, the sample value was split into 3
words, "Name", "XYZ" and "Company", in both my index and query analyzers.

When I used the following URL:

http://localhost:8080/solr/core1/select/?q=*%3A*&version=2.2&start=0&rows=359&indent=on&group=true&group.field=grpValue&group.limit=0

I noticed that I have a group named "Company" which has numFound of 73, even
though the field "grpValue" contains the word "Company" in all 359 docs.
Ideally, I should have got 359 docs as numFound under my group:

- 
- 
  359 
- 
- 
  Company 
   
  

Could someone please explain why only 73 docs are present in that group
instead of 359.

I also noticed that when I added up numFound across all the groups, the total
came to 359.

Please guide me on this; I am not sure what I am missing. Please let me know
in case more details are needed.

Thanks in advance.






--
View this message in context: 
http://lucene.472066.n3.nabble.com/Doubts-in-Result-Grouping-in-solr-3-6-1-tp4005239.html


Re: solr issue with seaching words

2012-09-04 Thread Dikchant Sahi
Try debugging it using the analysis page or by running the query in debug mode
(&debugQuery=true).

In the analysis page, add 'RCA-Jack/' on the index side and 'jacke' on the
query side. This might help you understand the behavior.

If you are still unable to debug it, some additional information would be
required to help.

On Tue, Sep 4, 2012 at 3:38 PM, zainu  wrote:

> I am facing a strange problem. I am searching for word "jacke" but solr
> also
> returns result where my description contains 'RCA-Jack/'. Íf i search
> "jacka" or "jackc" or "jackd", it works fine and does not return me any
> result which is what i am expecting in this case.
>
> Only when there is "jacke", it return me result with "RCA-Jack/". So there
> seems some kind of relationshio between "e" and "/" and it considers e as
> "/".
>
> Any help?
>
>
>
> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/solr-issue-with-seaching-words-tp4005200.html
> Sent from the Solr - User mailing list archive at Nabble.com.
>


Re: solr issue with seaching words

2012-09-04 Thread Rafał Kuć
Hello!

I suppose you may have a word delimiter along with stemming in your
configuration. You can see how your analysis chain works in the Solr
analysis pages (admin panel). If you paste the type configuration for
the field you are experiencing issues with, we will be able to see what
is happening.

-- 
Regards,
 Rafał Kuć
 Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch

> I am facing a strange problem. I am searching for word "jacke" but solr also
> returns result where my description contains 'RCA-Jack/'. Íf i search
> "jacka" or "jackc" or "jackd", it works fine and does not return me any
> result which is what i am expecting in this case.

> Only when there is "jacke", it return me result with "RCA-Jack/". So there
> seems some kind of relationshio between "e" and "/" and it considers e as
> "/". 

> Any help?



> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/solr-issue-with-seaching-words-tp4005200.html
> Sent from the Solr - User mailing list archive at Nabble.com.



solr issue with seaching words

2012-09-04 Thread zainu
I am facing a strange problem. I am searching for the word "jacke" but Solr
also returns results where my description contains 'RCA-Jack/'. If I search
"jacka", "jackc" or "jackd", it works fine and does not return any results,
which is what I expect in this case.

Only when I search "jacke" does it return results with "RCA-Jack/". So there
seems to be some kind of relationship between "e" and "/", and it considers
"e" as "/".

Any help?



--
View this message in context: 
http://lucene.472066.n3.nabble.com/solr-issue-with-seaching-words-tp4005200.html


Solr Clustering

2012-09-04 Thread Denis Kuzmenok
Hi, all.
I know there are Carrot2 and Mahout for clustering. I want to implement the 
following:
I fetch documents and want to group them into clusters as they are added to 
the index (I want to filter "similar" documents, for example within one 
week). I need these documents quickly, so I can't rely on postponed 
calculations. Each document should have an assigned cluster id (i.e., group 
similar documents into clusters and assign each document its cluster id).
It's similar to news aggregators like Google News. I don't need to search for 
clusters with documents older than one week (for example). Each document will 
have its unique id and will be saved into the DB, but Solr will also have a 
cluster id field.
Is it possible to implement this with Solr/Carrot2/Mahout?

Re: Solr Cloud Implementation with Apache Tomcat

2012-09-04 Thread Rafał Kuć
Hello!

Try starting a standalone Zookeeper, which is very simple - look at 
http://zookeeper.apache.org/doc/r3.4.3/zookeeperStarted.html

-- 
Regards,
 Rafał Kuć
 Sematext :: http://sematext.com/ :: Solr - Lucene - Nutch - ElasticSearch

> Hi folks,
> I am trying the Solr cloud using Apache Tomcat.
> In first I tried with Jetty server, which is mentioned in wiki.
> It is working fine, but while I am trying with Tomcat it is failure.
> Means solr is working, but solr cloud is not working.
> My doubt is how to configure the zookeeper in Tomcat.


> Thanks,
> Guru




> --
> View this message in context:
> http://lucene.472066.n3.nabble.com/Solr-Cloud-Implementation-with-Apache-Tomcat-tp4005209.html
> Sent from the Solr - User mailing list archive at Nabble.com.



Solr Cloud Implementation with Apache Tomcat

2012-09-04 Thread bsargurunathan
Hi folks,
I am trying out SolrCloud using Apache Tomcat.
First I tried with the Jetty server, as described in the wiki.
That works fine, but when I try with Tomcat it fails.
That is, Solr works, but SolrCloud does not.
My question is how to configure ZooKeeper with Tomcat.


Thanks,
Guru




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Cloud-Implementation-with-Apache-Tomcat-tp4005209.html


Start up errors

2012-09-04 Thread Tolga

Hi,

When I started Solr, I got the following errors. The same are at 
http://www.example.com:8983/solr


SEVERE: Exception during parsing file: 
schema:org.xml.sax.SAXParseException: Open quote is expected for 
attribute "{1}" associated with an  element type  "source".
at 
com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:195)
at 
com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.fatalError(ErrorHandlerWrapper.java:174)
at 
com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:388)
at 
com.sun.org.apache.xerces.internal.impl.XMLScanner.reportFatalError(XMLScanner.java:1414)
at 
com.sun.org.apache.xerces.internal.impl.XMLScanner.scanAttributeValue(XMLScanner.java:807)
at 
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanAttribute(XMLNSDocumentScannerImpl.java:460)
at 
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:277)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:2756)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:647)
at 
com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:140)
at 
com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:511)
at 
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:808)
at 
com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
at 
com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:119)
at 
com.sun.org.apache.xerces.internal.parsers.DOMParser.parse(DOMParser.java:232)
at 
com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderImpl.parse(DocumentBuilderImpl.java:284)

at org.apache.solr.core.Config.(Config.java:159)
at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:418)
at org.apache.solr.schema.IndexSchema.(IndexSchema.java:123)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:478)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:332)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:216)
at 
org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:161)
at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:96)

at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)

at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
at 
org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at 
org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at 
org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)

at org.mortbay.jetty.Server.doStart(Server.java:224)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)

at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:985)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)

at java.lang.reflect.Method.invoke(Method.java:597)
at org.mortbay.start.Main.invokeMain(Main.java:194)
at org.mortbay.start.Main.start(Main.java:534)
at org.mortbay.start.Main.start(Main.java:441)
at org.mortbay.start.Main.main(Main.java:119)

4/09/2012 1:14:29 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: Schema Parsing Failed: 
Open quote is expected for attribute "{1}" associated with an element 
type  "source".

at org.apache.solr.schema.IndexSchema.readSchema(IndexSchema.java:688)
at org.apache.solr.schema.IndexSchema.(IndexSchema.java:123)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:478)
at org.apache.solr.core.CoreContainer.load(Cor

Solr index doubles after replication

2012-09-04 Thread ravicv
Hi, 

We have configured replication in our solr setup. 

After replication, the master index size doubles even though
maxNumberOfBackups is not configured in my solrconfig.xml.

 *Master replication handler*



optimize



 *Slave replication handler*



${solr.master.security_pricing.url}
00:00:20
5000
1
 


Does Solr take a default maxNumberOfBackups of 1? If so, how can I keep the
index at its original size, or is there a way to delete the backup after
replication is completed?

My index size at the master after the first replication is 246 MB.
My index size at the master after the second replication is 492 MB.

Is there any way to clear backups on the master after replication is done?

I am using Solr 1.4.

Thanks, 
Ravi 
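(Since the archive stripped the XML tags above, a typical Solr 1.4 master/slave replication configuration looks roughly like the sketch below; handler names and URLs are illustrative. Note that backups on the master are normally only created by a backupAfter event or an explicit backup command, not by replication itself.)

```xml
<!-- Master solrconfig.xml — illustrative sketch -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="master">
    <str name="replicateAfter">optimize</str>
  </lst>
</requestHandler>

<!-- Slave solrconfig.xml — illustrative sketch -->
<requestHandler name="/replication" class="solr.ReplicationHandler">
  <lst name="slave">
    <str name="masterUrl">http://master-host:8080/solr/replication</str>
    <str name="pollInterval">00:00:20</str>
  </lst>
</requestHandler>
```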



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-index-doubles-after-replication-tp4005203.html


Re: StreamingUpdateSolrServer - Failure during indexing

2012-09-04 Thread Kissue Kissue
Hi Lance,

As far as I can see, one document failing does not fail the entire update.
From my logs I can see the error logged, but indexing just continues to the
next document. This happens with the StreamingUpdateSolrServer, which is
multithreaded.

Thanks.

On Tue, Jun 19, 2012 at 9:58 AM, Lance Norskog  wrote:

> When one document fails, the entire update fails, right? Is there now
> a mode where successful documents are added and failed docs are
> dropped?
>
> If you want to know if a document is in the index, search for it!
> There is no other guaranteed way.
>
> On Sun, Jun 17, 2012 at 3:14 PM, Jack Krupansky 
> wrote:
> > You could instantiate an anonymous instance of StreamingUpdateSolrServer
> > that has a "handleError" method that then parses the exception message to
> > get the request URI. If there isn't enough information there, you could
> add
> > a dummy request option to your original request that was a document
> > identifier of your own.
> >
> > Pseudo code:
> >
> >   StreamingUpdateSolrServer myServer = new
> StreamingUpdateSolrServer(...){
> > void handleError( Throwable ex ){
> >   super.handleError(ex);
> >   // extract text from ex.getMessage()
> > }
> >   };
> >
> > Included in the message text is "request: " followed by the URI for the
> HTTP
> > method, which presumably has the request options (unless they were
> encoded
> > in the body of the request as multipart form data.)
> >
> > -- Jack Krupansky
> >
> > -Original Message- From: Kissue Kissue
> > Sent: Sunday, June 17, 2012 7:40 AM
> > To: solr-user@lucene.apache.org
> > Subject: StreamingUpdateSolrServer - Failure during indexing
> >
> >
> > Hi,
> >
> > Using the StreamingUpdateSolrServer, does anybody know how i can get the
> > list of documents that failed during indexing so maybe i can index them
> > later? Is it possible? I am using Solr 3.5 with SolrJ.
> >
> > Thanks.
>
>
>
> --
> Lance Norskog
> goks...@gmail.com
>


Re: Solr Clustering

2012-09-04 Thread Chandan Tamrakar
Yes, there is a Solr component for clustering Solr documents; check the
following link:
http://wiki.apache.org/solr/ClusteringComponent

Carrot2 might be good if you want to cluster a few thousand documents, for
example clustering the search results when a user searches Solr.

Mahout is much more scalable, and you will probably need Hadoop for that.


thanks
chandan

On Tue, Sep 4, 2012 at 2:10 PM, Denis Kuzmenok  wrote:

>
>
>  Original Message 
> Subject: Solr Clustering
> From: Denis Kuzmenok 
> To: solr-user@lucene.apache.org
> CC:
>
> Hi, all.
> I know there is carrot2 and mahout for clustering. I want to implement
> such thing:
> I fetch documents and want to group them into clusters when they are added
> to index (i want to filter "similar" documents for example for 1 week). i
> need these documents quickly, so i cant rely on some postponed
> calculations. Each document should have assigned cluster id (like group
> similar documents into clusters and assign each document its cluster id.
> It's something similar to news aggregators like google news. I dont need
> to search for clusters with documents older than 1 week (for example). Each
> document will have its unique id and saved into DB. But solr will have
> cluster id field also.
> Is it possible to implement this with solr/carrot/mahout?




-- 
Chandan Tamrakar
*
*
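For the ClusteringComponent route from the wiki link above, the solrconfig.xml wiring looks roughly like this trimmed sketch (the field names in carrot.title/carrot.snippet are assumptions, not part of the thread):

```xml
<searchComponent name="clustering" class="solr.clustering.ClusteringComponent">
  <lst name="engine">
    <str name="name">default</str>
    <str name="carrot.algorithm">org.carrot2.clustering.lingo.LingoClusteringAlgorithm</str>
  </lst>
</searchComponent>

<requestHandler name="/clustering" class="solr.SearchHandler">
  <lst name="defaults">
    <bool name="clustering">true</bool>
    <str name="carrot.title">title</str>
    <str name="carrot.snippet">body</str>
  </lst>
  <arr name="last-components">
    <str>clustering</str>
  </arr>
</requestHandler>
```

Keep in mind this clusters search results at query time; for a persistent cluster-id field assigned at index time (the use case in this thread), the clustering would have to happen outside Solr (e.g. Mahout) with the resulting id indexed as a normal field.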


Solr Clustering

2012-09-04 Thread Denis Kuzmenok
Hi, all. I know there are Carrot2 and Mahout for clustering. I want to implement 
the following: I fetch documents and want to group them into clusters as they 
are added to the index (I want to filter "similar" documents, for example within 
one week). I need these documents quickly, so I can't rely on postponed 
calculations. Each document should have an assigned cluster id (i.e., group similar 
documents into clusters and assign each document its cluster id). It's similar 
to news aggregators like Google News. I don't need to search for 
clusters with documents older than one week (for example). Each document will 
have its unique id and will be saved into the DB, but Solr will also have a cluster 
id field. Is it possible to implement this with Solr/Carrot2/Mahout?

Solr : Condition Before Group By

2012-09-04 Thread Ramzi Alqrainy
Hi, 

I would like some help with a certain problem.

My problem is:

I have documents and I do a group-by on a certain field (e.g. Field1). Within
each group I want to get documents whose Field2 is 3, 9 or 12 if such
documents exist; otherwise any document from the group. Please see the
example below.

D1:
Field1 : 1
Field2 : 3   -> D1 (grouping on Field1; Field2 is 3)

D2:
Field1 : 1
Field2 : 4

D3:
Field1 : 2
Field2 : 5   -> any document, D3 or D4

D4:
Field1 : 2
Field2 : 7

I want to get the results like below:
D1 (mandatory)
(D3 OR D4)



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-Condition-Before-Group-By-tp4005179.html


Solr Clustering

2012-09-04 Thread Denis Kuzmenok


 Original Message 
Subject: Solr Clustering
From: Denis Kuzmenok 
To: solr-user@lucene.apache.org
CC: 

Hi, all.
I know there are Carrot2 and Mahout for clustering. I want to implement the 
following:
I fetch documents and want to group them into clusters as they are added to 
the index (I want to filter "similar" documents, for example within one 
week). I need these documents quickly, so I can't rely on postponed 
calculations. Each document should have an assigned cluster id (i.e., group 
similar documents into clusters and assign each document its cluster id).
It's similar to news aggregators like Google News. I don't need to search for 
clusters with documents older than one week (for example). Each document will 
have its unique id and will be saved into the DB, but Solr will also have a 
cluster id field.
Is it possible to implement this with Solr/Carrot2/Mahout?

Re: Is there any special meaning for # symbol in solr.

2012-09-04 Thread Oliver Schihin

You are not using a string type, but a TextField, and in your analysis chain the
StandardTokenizer strips the number sign (#). You can check this in the "Analysis" 
part of the Solr admin backend.

You can either use a string type for searches like C#, C++ and the like, or map the
characters to something textual *before* tokenizing. My solution goes something 
like this:

<charFilter class="solr.MappingCharFilterFactory" mapping="mapping-chars.txt"/>
while mapping-chars.txt is:
*
# 
# Specials
# 

# C+ => Cplus
# C++ => Cplusplus
"\u0043\u002B" => "Cplus"
"\u0063\u002B" => "Cplus"
"\u0043\u002B\u002B" => "Cplusplus"
"\u0063\u002B\u002B" => "Cplusplus"

# C#, C♯ => Csharp
"\u0043\u0023" => "Csharp"
"\u0063\u0023" => "Csharp"
"\u0043\u266f" => "Csharp"
"\u0063\u266f" => "Csharp"

# F#, F♯ => Fsharp
"\u0046\u0023" => "Fsharp"
"\u0066\u0023" => "Fsharp"
"\u0046\u266f" => "Fsharp"
"\u0066\u266f" => "Fsharp"

# J#, J♯ => Jsharp
"\u004A\u0023" => "Jsharp"
"\u006A\u0023" => "Jsharp"
"\u004A\u266f" => "Jsharp"
"\u006A\u266f" => "Jsharp"

# ♭ => b
"\u266d" => "b"

# @ => at
"\u0040" => "at"
***

Then use any tokenizer after the char filter.
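The mapping file above behaves like a longest-match substring replacement applied to the input before tokenization (so `C++` wins over `C+`). A rough Python sketch of the effect, purely illustrative and not Solr's actual MappingCharFilter implementation:

```python
# Illustrative subset of the mappings above; longer source sequences are
# tried first, mimicking the char filter's longest-match behavior.
MAPPINGS = {
    "C++": "Cplusplus", "c++": "Cplusplus",
    "C+": "Cplus", "c+": "Cplus",
    "C#": "Csharp", "c#": "Csharp",
    "F#": "Fsharp", "f#": "Fsharp",
    "@": "at",
}

def apply_mappings(text: str) -> str:
    """Replace each source sequence, trying longer sequences before shorter ones."""
    for src in sorted(MAPPINGS, key=len, reverse=True):
        text = text.replace(src, MAPPINGS[src])
    return text

print(apply_mappings("c# and C++ developer"))  # Csharp and Cplusplus developer
```

After this substitution, any tokenizer will see plain word characters, so "Csharp" survives tokenization where "c#" would not.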



 Original Message 
Subject: Re: Is there any special meaning for # symbol in solr.
From: veena rani 
To: solr-user@lucene.apache.org
CC: te 
Date: 04.09.2012 09:49


This is the field type I'm using for techskill:

<field name="techskill" type="text_general" indexed="true" stored="true"/>

<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

On Tue, Sep 4, 2012 at 1:16 PM, veena rani  wrote:


No, # is not a stop word.


On Tue, Sep 4, 2012 at 12:59 PM, 李赟  wrote:


Is "#" in your stop words list ?


2012-09-04



Li Yun
Software Engineer @ Netease
Mail: liyun2...@corp.netease.com
MSN: rockiee...@gmail.com




From: veena rani
Sent: 2012-09-04 12:57:26
To: solr-user; te
CC:
Subject: Re: Is there any special meaning for # symbol in solr.

If I use this link,
http://localhost:8080/solr/select?&q=(techskill%3Ac%23)
Solr is going to display the techskill:c result,
but I want to display only the techskill:c# result.
On Mon, Sep 3, 2012 at 7:23 PM, Toke Eskildsen 
wrote:
On Mon, 2012-09-03 at 13:39 +0200, veena rani wrote:

 I have an issue with the # symbol, in solr,
 I m trying to search for string ends up with # , Eg:c#, it is

throwing

 error Like, org.apache.lucene.queryparser.classic.ParseException:

Cannot

 parse '(techskill:c': Encountered "<EOF>" at line 1, column 12.

Solr only received '(techskill:c', which has unbalanced parentheses.
My guess is that you do not perform a URL-encode of '#' and that you
were sending something like
http://localhost:8080/solr/select?&q=(techskill:c#)
when you should have been sending
http://localhost:8080/solr/select?&q=(techskill%3Ac%23)
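Toke's diagnosis can be reproduced outside Solr: in a URL, an unencoded `#` starts the fragment, which HTTP clients never send to the server. A quick sketch with Python's standard library, using the query from this thread (with a simplified `?q=` query string):

```python
from urllib.parse import quote, urlsplit

query = "(techskill:c#)"

# Unencoded: '#' begins the URL fragment, so the server only sees '(techskill:c'
raw = "http://localhost:8080/solr/select?q=" + query
sent_to_server = urlsplit(raw).query
print(sent_to_server)           # q=(techskill:c

# Encoded: quote() percent-encodes ':' and '#', so the full query survives
encoded = "http://localhost:8080/solr/select?q=" + quote(query, safe="()")
print(urlsplit(encoded).query)  # q=(techskill%3Ac%23)
```

The truncated `q=(techskill:c` is exactly the unbalanced-parenthesis string that triggered the ParseException.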



--
Regards,
Veena.
Banglore.




--
Regards,
Veena.
Banglore.









Re: Re: Is there any special meaning for # symbol in solr.

2012-09-04 Thread Ahmet Arslan
> this is the field type i m using for techskill,
>
> <field name="techskill" type="text_general" indexed="true" stored="true"/>
>
> <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
>   <analyzer type="index">
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
>   <analyzer type="query">
>     <tokenizer class="solr.StandardTokenizerFactory"/>
>     <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true"/>
>     <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
>     <filter class="solr.LowerCaseFilterFactory"/>
>   </analyzer>
> </fieldType>

The StandardTokenizer (ST) eats up the # character: c# becomes c. You can verify this 
using the Analysis page. You need an analysis chain that preserves #. 

There are different ways to do it. WhitespaceTokenizer with WordDelimiterFilter can 
be an option.

A MappingCharFilter with "#" => "SHARP" can be another one.
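The difference between the two tokenizer families can be shown with a toy sketch: a word-character split (roughly what StandardTokenizer does to `c#`) drops the `#`, while a whitespace split keeps it. This is only an approximation of the real Lucene tokenizers, not their actual logic:

```python
import re

text = "Expert in c# and Java"

# A StandardTokenizer-like split on word characters drops the '#':
standard_like = re.findall(r"\w+", text)
print(standard_like)   # ['Expert', 'in', 'c', 'and', 'Java']

# A whitespace split (like WhitespaceTokenizer) preserves it:
whitespace = text.split()
print(whitespace)      # ['Expert', 'in', 'c#', 'and', 'Java']
```

With a whitespace-based chain, a WordDelimiterFilter can then control how the remaining punctuation inside tokens is handled.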


Re: Re: Is there any special meaning for # symbol in solr.

2012-09-04 Thread veena rani
This is the field type I'm using for techskill:

<field name="techskill" type="text_general" indexed="true" stored="true"/>

<fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" enablePositionIncrements="true"/>
    <filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

On Tue, Sep 4, 2012 at 1:16 PM, veena rani  wrote:

> No, # is not a stop word.
>
>
> On Tue, Sep 4, 2012 at 12:59 PM, 李赟 wrote:
>
>> Is "#" in your stop words list ?
>>
>>
>> 2012-09-04
>>
>>
>>
>> Li Yun
>> Software Engineer @ Netease
>> Mail: liyun2...@corp.netease.com
>> MSN: rockiee...@gmail.com
>>
>>
>>
>>
>> From: veena rani
>> Sent: 2012-09-04 12:57:26
>> To: solr-user; te
>> CC:
>> Subject: Re: Is there any special meaning for # symbol in solr.
>>
>> if i use this link ,
>> http://localhost:8080/solr/select?&q=(techskill%3Ac%23)
>> , solr is going to display techskill:c result.
>> But i want to display only techskill:c#  result.
>> On Mon, Sep 3, 2012 at 7:23 PM, Toke Eskildsen > >wrote:
>> > On Mon, 2012-09-03 at 13:39 +0200, veena rani wrote:
>> > > >  I have an issue with the # symbol, in solr,
>> > > >  I m trying to search for string ends up with # , Eg:c#, it is
>> throwing
>> > > >  error Like, org.apache.lucene.queryparser.classic.ParseException:
>> > Cannot
>> > > >  parse '(techskill:c': Encountered "<EOF>" at line 1, column 12.
>> >
>> > Solr only received '(techskill:c', which has unbalanced parentheses.
>> > My guess is that you do not perform a URL-encode of '#' and that you
>> > were sending something like
>> > http://localhost:8080/solr/select?&q=(techskill:c#)
>> > when you should have been sending
>> > http://localhost:8080/solr/select?&q=(techskill%3Ac%23)
>> >
>> >
>> --
>> Regards,
>> Veena.
>> Banglore.
>>
>
>
>
> --
> Regards,
> Veena.
> Banglore.
>
>


-- 
Regards,
Veena.
Banglore.


Re: Re: Is there any special meaning for # symbol in solr.

2012-09-04 Thread veena rani
No, # is not a stop word.


On Tue, Sep 4, 2012 at 12:59 PM, 李赟 wrote:

> Is "#" in your stop words list ?
>
>
> 2012-09-04
>
>
>
> Li Yun
> Software Engineer @ Netease
> Mail: liyun2...@corp.netease.com
> MSN: rockiee...@gmail.com
>
>
>
>
> From: veena rani
> Sent: 2012-09-04 12:57:26
> To: solr-user; te
> CC:
> Subject: Re: Is there any special meaning for # symbol in solr.
>
> if i use this link ,
> http://localhost:8080/solr/select?&q=(techskill%3Ac%23)
> , solr is going to display techskill:c result.
> But i want to display only techskill:c#  result.
> On Mon, Sep 3, 2012 at 7:23 PM, Toke Eskildsen  >wrote:
> > On Mon, 2012-09-03 at 13:39 +0200, veena rani wrote:
> > > >  I have an issue with the # symbol, in solr,
> > > >  I m trying to search for string ends up with # , Eg:c#, it is
> throwing
> > > >  error Like, org.apache.lucene.queryparser.classic.ParseException:
> > Cannot
> > > >  parse '(techskill:c': Encountered "<EOF>" at line 1, column 12.
> >
> > Solr only received '(techskill:c', which has unbalanced parentheses.
> > My guess is that you do not perform a URL-encode of '#' and that you
> > were sending something like
> > http://localhost:8080/solr/select?&q=(techskill:c#)
> > when you should have been sending
> > http://localhost:8080/solr/select?&q=(techskill%3Ac%23)
> >
> >
> --
> Regards,
> Veena.
> Banglore.
>



-- 
Regards,
Veena.
Banglore.


Re: Missing Features - AndMaybe and Otherwise

2012-09-04 Thread Ramzi Alqrainy
Many thanks for your email, but what about Solr? And how can we handle my
case?

Thanks,



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Missing-Features-AndMaybe-and-Otherwise-tp4005059p4005163.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Re: Is there any special meaning for # symbol in solr.

2012-09-04 Thread 李赟
Is "#" in your stop words list ?


2012-09-04 



Li Yun
Software Engineer @ Netease
Mail: liyun2...@corp.netease.com
MSN: rockiee...@gmail.com




From: veena rani 
Sent: 2012-09-04 12:57:26 
To: solr-user; te 
CC: 
Subject: Re: Is there any special meaning for # symbol in solr. 
 
If I use this link, http://localhost:8080/solr/select?&q=(techskill%3Ac%23)
Solr is going to display the techskill:c result,
but I want to display only the techskill:c# result.
On Mon, Sep 3, 2012 at 7:23 PM, Toke Eskildsen wrote:
> On Mon, 2012-09-03 at 13:39 +0200, veena rani wrote:
> > >  I have an issue with the # symbol, in solr,
> > >  I m trying to search for string ends up with # , Eg:c#, it is throwing
> > >  error Like, org.apache.lucene.queryparser.classic.ParseException:
> Cannot
> > >  parse '(techskill:c': Encountered "<EOF>" at line 1, column 12.
>
> Solr only received '(techskill:c', which has unbalanced parentheses.
> My guess is that you do not perform a URL-encode of '#' and that you
> were sending something like
> http://localhost:8080/solr/select?&q=(techskill:c#)
> when you should have been sending
> http://localhost:8080/solr/select?&q=(techskill%3Ac%23)
>
>
-- 
Regards,
Veena.
Banglore.