Re: Facet on multicore search when one field exists only in one of cores

2019-06-14 Thread Claudio R
 Hi Shawn,

Thank you very much for your quick answer. The use of the dynamic field
"ignored" works, but it does not seem to be the correct way to solve the
problem. We will try to align the schemas of the two cores.

On Friday, June 14, 2019, 11:18:40 BRT, Shawn Heisey
wrote:
 
 On 6/14/2019 7:54 AM, Claudio R wrote:
> When I try this request to get facets for fieldA, fieldB and fieldC on a
> multicore search, I get an error:
> 
http://localhost:8983/solr/core1/select?q=*:*&shards=localhost:8983/solr/core1,localhost:8983/solr/core2&fl=*,[shard]&facet=true&facet.field=fieldA&facet.field=fieldB&facet.field=fieldC
> 
> Error from server at http://localhost:8983/solr/core2: 
> undefined field: "fieldB"
> 400
> 
> Is there any config/parameter in Solr to avoid this exception being thrown
> when faceting a multicore search and a field does not exist in one core?

Distributed queries must have compatible schemas in all the shards 
referenced.  If all fields referenced in the query are not covered by 
every one of those schemas, you're going to get an error.  This is just 
basic error-checking.

The only fix would be to correct the schemas.  You could add the missing 
field, or if you want all invalid fields ignored, simply set up a 
dynamicField named "*" that connects to a type that's ignored.  This 
should work:

    <dynamicField name="*" type="ignored" multiValued="true" />
    <fieldType name="ignored" stored="false" indexed="false"
     docValues="false" multiValued="true" class="solr.StrField" />

I haven't actually tried this so I can't be SURE it will work, but I 
think it would.

A note for devs:  In the 8.1.0 _default configset, the "ignored" type 
does not have docValues="false" ... and I think docValues defaults to 
true on the StrField class.  I think that MIGHT be a problem.  Worth an 
issue?

Thanks,
Shawn
  

Re: Facet on multicore search when one field exists only in one of cores

2019-06-14 Thread Shawn Heisey

On 6/14/2019 7:54 AM, Claudio R wrote:

When I try this request to get facets for fieldA, fieldB and fieldC on a
multicore search, I get an error:

http://localhost:8983/solr/core1/select?q=*:*&shards=localhost:8983/solr/core1,localhost:8983/solr/core2&fl=*,[shard]&facet=true&facet.field=fieldA&facet.field=fieldB&facet.field=fieldC

Error from server at http://localhost:8983/solr/core2: undefined field: 
"fieldB"
400

Is there any config/parameter in Solr to avoid this exception being thrown
when faceting a multicore search and a field does not exist in one core?


Distributed queries must have compatible schemas in all the shards 
referenced.  If all fields referenced in the query are not covered by 
every one of those schemas, you're going to get an error.  This is just 
basic error-checking.


The only fix would be to correct the schemas.  You could add the missing 
field, or if you want all invalid fields ignored, simply set up a 
dynamicField named "*" that connects to a type that's ignored.  This 
should work:



<dynamicField name="*" type="ignored" multiValued="true" />
<fieldType name="ignored" stored="false" indexed="false"
 docValues="false" multiValued="true" class="solr.StrField" />


I haven't actually tried this so I can't be SURE it will work, but I 
think it would.


A note for devs:  In the 8.1.0 _default configset, the "ignored" type 
does not have docValues="false" ... and I think docValues defaults to 
true on the StrField class.  I think that MIGHT be a problem.  Worth an 
issue?


Thanks,
Shawn


Facet on multicore search when one field exists only in one of cores

2019-06-14 Thread Claudio R
Hi,
I am using Solr 6.6.0 in mode standalone with 2 cores.
The first core has the schema:

id
fieldA
fieldB

The second core has the schema:

id
fieldA
fieldC

When I try this request to get facets for fieldA, fieldB and fieldC on a
multicore search, I get an error:

http://localhost:8983/solr/core1/select?q=*:*&shards=localhost:8983/solr/core1,localhost:8983/solr/core2&fl=*,[shard]&facet=true&facet.field=fieldA&facet.field=fieldB&facet.field=fieldC

Error from server at http://localhost:8983/solr/core2: 
undefined field: "fieldB"
400            

Is there any config/parameter in Solr to avoid this exception being thrown
when faceting a multicore search and a field does not exist in one core?

If I have this documents on cores:

core1:
id: "10"
fieldA: "productA"
fieldB: "value1"

core2:
id: "23"
fieldA: "productA"
fieldC: "value2"

I wish to get a response like this:

Facet
fieldA
  productA (2)
fieldB
  value1 (1)
fieldC
  value2 (1)
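
In Solr's JSON response format, that would correspond to a facet_counts
section roughly like this (a sketch using the field and term names above):

  "facet_counts": {
    "facet_fields": {
      "fieldA": ["productA", 2],
      "fieldB": ["value1", 1],
      "fieldC": ["value2", 1]
    }
  }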


Re: Solr multicore join performance tuning

2016-01-25 Thread Mikhail Khludnev
What Solr version, query parameters and debug output?
On 26.01.2016 at 6:38, "Bhawna Asnani" <bhawna.asn...@gmail.com>
wrote:

> Hi,
> I am using solr multicore join queries for some admin filters. The queries
> are really slow, taking up to 40-60 seconds in some cases.
>
> I recently read that the schema field used to join to should have
> 'docValues=true'.
> Besides that, any suggestion to improve the performance?
>
> -Bhawna
>
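
For context, cross-core joins in standalone Solr go through the join query
parser; a minimal sketch, with hypothetical core and field names:

http://localhost:8983/solr/products/select?q={!join from=product_id to=id fromIndex=skus}color:red

The Solr version, query parameters and debug=true output asked for above
would show where the time actually goes.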


Solr multicore join performance tuning

2016-01-25 Thread Bhawna Asnani
Hi,
I am using solr multicore join queries for some admin filters. The queries
are really slow, taking up to 40-60 seconds in some cases.

I recently read that the schema field used to join to should have
'docValues=true'.
Besides that, any suggestion to improve the performance?

-Bhawna


Re: solr multicore vs sharding vs 1 big collection

2015-08-04 Thread Shawn Heisey
On 8/4/2015 3:30 PM, Jay Potharaju wrote:
 For the last few days I have been trying to correlate the timeouts with GC.
 I noticed in the GC logs that a full GC takes a long time once in a while. Does
 this mean that the JVM memory is set too high or too low?

snip

 1973953.560: [GC 4474277K->3300411K(4641280K), 0.0423129 secs]
 1973960.674: [GC 4536894K->3371225K(4630016K), 0.0560341 secs]
 1973960.731: [Full GC 3371225K->3339436K(5086208K), 15.5285889 secs]
 1973990.516: [GC 4548268K->3405111K(5096448K), 0.0657788 secs]
 1973998.191: [GC 4613934K->3527257K(5086208K), 0.1304232 secs]

Based on what I can see there, it looks like 6GB might be enough heap. 
Your low points are all in the 3GB range, which is only half of that.  A
6GB heap is not very big in the Solr world.

Based on that GC log and my own experiences, I'm guessing that your GC
isn't tuned.  The default collector that Java chooses is *terrible* for
Solr.  Even just switching collectors to CMS or G1 will not improve the
situation.  Solr requires extensive GC tuning for good performance.

The SolrPerformanceProblems wiki page that I pointed you to previously
contains a little bit of info on GC tuning, and it also links to the
following page, which is my personal page on the wiki, and documents
some of my garbage collection journey with Solr:

https://wiki.apache.org/solr/ShawnHeisey
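
As a rough illustration of the kind of tuning involved (an assumption on my
part, not the exact settings from that page), a CMS-based set of JVM flags
for a 6GB heap might look like:

-Xms6g -Xmx6g
-XX:+UseConcMarkSweepGC
-XX:+UseParNewGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-XX:NewRatio=3
-XX:+ParallelRefProcEnabled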

Thanks,
Shawn



Re: solr multicore vs sharding vs 1 big collection

2015-08-04 Thread Jay Potharaju
For the last few days I have been trying to correlate the timeouts with GC.
I noticed in the GC logs that a full GC takes a long time once in a while. Does
this mean that the JVM memory is set too high or too low?


 [GC 4730643K->3552794K(4890112K), 0.0433146 secs]
1973853.751: [Full GC 3552794K->2926402K(4635136K), 0.3123954 secs]
1973864.170: [GC 4127554K->2972129K(4644864K), 0.0418248 secs]
1973873.341: [GC 4185569K->2990123K(4640256K), 0.0451723 secs]
1973882.452: [GC 4201770K->2999178K(4645888K), 0.0611839 secs]
1973890.684: [GC 4220298K->3010751K(4646400K), 0.0302890 secs]
1973900.539: [GC 4229514K->3015049K(4646912K), 0.0470857 secs]
1973911.179: [GC 4237193K->3040837K(4646912K), 0.0373900 secs]
1973920.822: [GC 4262981K->3072045K(4655104K), 0.0450480 secs]
1973927.136: [GC 4307501K->3129835K(4635648K), 0.0392559 secs]
1973933.057: [GC 4363058K->3178923K(4647936K), 0.0426612 secs]
1973940.981: [GC 4405163K->3210677K(4648960K), 0.0557622 secs]
1973946.680: [GC 4436917K->3239408K(4656128K), 0.0430889 secs]
1973953.560: [GC 4474277K->3300411K(4641280K), 0.0423129 secs]
1973960.674: [GC 4536894K->3371225K(4630016K), 0.0560341 secs]
1973960.731: [Full GC 3371225K->3339436K(5086208K), 15.5285889 secs]
1973990.516: [GC 4548268K->3405111K(5096448K), 0.0657788 secs]
1973998.191: [GC 4613934K->3527257K(5086208K), 0.1304232 secs]
1974006.505: [GC 4723801K->3597899K(5132800K), 0.0899599 secs]
1974014.748: [GC 4793955K->3654280K(5163008K), 0.0989430 secs]
1974025.349: [GC 4880823K->3672457K(5182464K), 0.0683296 secs]
1974037.517: [GC 4899721K->3681560K(5234688K), 0.1028356 secs]
1974050.066: [GC 4938520K->3718901K(5256192K), 0.0796073 secs]
1974061.466: [GC 4974356K->3726357K(5308928K), 0.1324846 secs]
1974071.726: [GC 5003687K->3757516K(5336064K), 0.0734227 secs]
1974081.917: [GC 5036492K->3777662K(5387264K), 0.1475958 secs]
1974091.853: [GC 5074558K->3800799K(5421056K), 0.0799311 secs]
1974101.882: [GC 5097363K->3846378K(5434880K), 0.3011178 secs]
1974109.234: [GC 5121936K->3930457K(5478912K), 0.0956342 secs]
1974116.082: [GC 5206361K->3974011K(5215744K), 0.1967284 secs]

Thanks
Jay

On Mon, Aug 3, 2015 at 1:53 PM, Bill Bell billnb...@gmail.com wrote:

 Yeah, a separate core by month or year is good and can really help in this case.

 Bill Bell
 Sent from mobile


  On Aug 2, 2015, at 5:29 PM, Jay Potharaju jspothar...@gmail.com wrote:
 
  Shawn,
  Thanks for the feedback. I agree that increasing timeout might alleviate
  the timeout issue. The main problem with increasing timeout is the
  detrimental effect it will have on the user experience, therefore can't
  increase it.
  I have looked at the queries that threw errors, next time I try it
  everything seems to work fine. Not sure how to reproduce the error.
  My concern with increasing the memory to 32GB is what happens when the
  index size grows over the next few months.
  One of the other solutions I have been thinking about is to rebuild
  index(weekly) and create a new collection and use it. Are there any good
  references for doing that?
  Thanks
  Jay
 
  On Sun, Aug 2, 2015 at 10:19 AM, Shawn Heisey apa...@elyograg.org
 wrote:
 
  On 8/2/2015 8:29 AM, Jay Potharaju wrote:
  The documents contain around 30 fields and have stored set to true for
  almost 15 of them. And these stored fields are queried and updated all
  the
  time. You will notice that the deleted documents are almost 30% of the
  docs.  And it has stayed around that percent and has not come down.
  I did try optimize but that was disruptive as it caused search errors.
  I have been playing with merge factor to see if that helps with deleted
  documents or not. It is currently set to 5.
 
  The server has 24 GB of memory out of which memory consumption is
 around
  23
  GB normally and the jvm is set to 6 GB. And have noticed that the
  available
  memory on the server goes to 100 MB at times during a day.
  All the updates are run through DIH.
 
  Using all available memory is completely normal operation for ANY
  operating system.  If you hold up Windows as an example of one that
  doesn't ... it lies to you about available memory.  All modern
  operating systems will utilize memory that is not explicitly allocated
  for the OS disk cache.
 
  The disk cache will instantly give up any of the memory it is using for
  programs that request it.  Linux doesn't try to hide the disk cache from
  you, but older versions of Windows do.  In the newer versions of Windows
  that have the Resource Monitor, you can go there to see the actual
  memory usage including the cache.
 
   Every day at least once I see the following error, which results in
 search
  errors on the front end of the site.
 
  ERROR org.apache.solr.servlet.SolrDispatchFilter -
  null:org.eclipse.jetty.io.EofException
 
  From what I have read these are mainly due to timeout and my timeout is
  set
   to 30 seconds and can't set it to a higher number. I was thinking maybe
  due
  to high memory usage, sometimes it leads to bad performance/errors.
 
  Although this 

Re: solr multicore vs sharding vs 1 big collection

2015-08-03 Thread Upayavira
There are two things that are likely to cause the timeouts you are
seeing, I'd say.

Firstly, your server is overloaded - that can be handled by adding
additional replicas.

However, it doesn't seem like this is the case, because the second query
works fine.

Secondly, you are hitting garbage collection issues. This seems more
likely to me. You have 40m documents inside a 6GB heap. That seems
relatively tight to me. What that means is that Java may well not have
enough space to create all the objects it needs inside a single commit
cycle, forcing a garbage collection which can cause application pauses,
which would fit with what you are seeing.

I'd suggest using the jstat -gcutil command (I think I have that right)
to watch the number of garbage collections taking place. You will
quickly see from that if garbage collection is your issue. The
simplistic remedy would be to allow your JVM a bit more memory.
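
A sketch of that invocation (the PID is a placeholder; the interval is in
milliseconds):

  jstat -gcutil <solr-pid> 5000

Each row reports eden/survivor/old-generation occupancy percentages plus
cumulative young (YGC) and full (FGC) collection counts and times; a rapidly
climbing FGC column confirms the diagnosis.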

The other concern I have is that Solr (and Lucene) is intended for high
read/low write scenarios. Its index structure is highly tuned for this
scenario. If you are doing a lot of writes, then you will be creating a
lot of index churn which will require more frequent merges, consuming
both CPU and memory in the process. It may be worth looking at *how* you
use Solr, and see whether, for example, you can separate your documents
into slow moving, and fast moving parts, to better suit the Lucene index
structures. Or to consider whether a Lucene based system is best for
what you are attempting to achieve.

For garbage collection, see here for a good Solr related write-up:

  http://lucidworks.com/blog/garbage-collection-bootcamp-1-0/

Upayavira

On Mon, Aug 3, 2015, at 12:29 AM, Jay Potharaju wrote:
 Shawn,
 Thanks for the feedback. I agree that increasing timeout might alleviate
 the timeout issue. The main problem with increasing timeout is the
 detrimental effect it will have on the user experience, therefore can't
 increase it.
 I have looked at the queries that threw errors, next time I try it
 everything seems to work fine. Not sure how to reproduce the error.
 My concern with increasing the memory to 32GB is what happens when the
 index size grows over the next few months.
 One of the other solutions I have been thinking about is to rebuild
 index(weekly) and create a new collection and use it. Are there any good
 references for doing that?
 Thanks
 Jay
 
 On Sun, Aug 2, 2015 at 10:19 AM, Shawn Heisey apa...@elyograg.org
 wrote:
 
  On 8/2/2015 8:29 AM, Jay Potharaju wrote:
   The documents contain around 30 fields and have stored set to true for
   almost 15 of them. And these stored fields are queried and updated all
  the
   time. You will notice that the deleted documents are almost 30% of the
   docs.  And it has stayed around that percent and has not come down.
   I did try optimize but that was disruptive as it caused search errors.
   I have been playing with merge factor to see if that helps with deleted
   documents or not. It is currently set to 5.
  
   The server has 24 GB of memory out of which memory consumption is around
  23
   GB normally and the jvm is set to 6 GB. And have noticed that the
  available
   memory on the server goes to 100 MB at times during a day.
   All the updates are run through DIH.
 
  Using all available memory is completely normal operation for ANY
  operating system.  If you hold up Windows as an example of one that
  doesn't ... it lies to you about available memory.  All modern
  operating systems will utilize memory that is not explicitly allocated
  for the OS disk cache.
 
  The disk cache will instantly give up any of the memory it is using for
  programs that request it.  Linux doesn't try to hide the disk cache from
  you, but older versions of Windows do.  In the newer versions of Windows
  that have the Resource Monitor, you can go there to see the actual
  memory usage including the cache.
 
   Every day at least once I see the following error, which results in search
   errors on the front end of the site.
  
   ERROR org.apache.solr.servlet.SolrDispatchFilter -
   null:org.eclipse.jetty.io.EofException
  
   From what I have read these are mainly due to timeout and my timeout is
  set
   to 30 seconds and can't set it to a higher number. I was thinking maybe
  due
   to high memory usage, sometimes it leads to bad performance/errors.
 
  Although this error can be caused by timeouts, it has a specific
  meaning.  It means that the client disconnected before Solr responded to
  the request, so when Solr tried to respond (through jetty), it found a
  closed TCP connection.
 
  Client timeouts need to either be completely removed, or set to a value
  much longer than any request will take.  Five minutes is a good starting
  value.
 
  If all your client timeout is set to 30 seconds and you are seeing
  EofExceptions, that means that your requests are taking longer than 30
  seconds, and you likely have some performance issues.  It's also
  possible that 

Re: solr multicore vs sharding vs 1 big collection

2015-08-03 Thread Bill Bell
Yeah, a separate core by month or year is good and can really help in this case.

Bill Bell
Sent from mobile


 On Aug 2, 2015, at 5:29 PM, Jay Potharaju jspothar...@gmail.com wrote:
 
 Shawn,
 Thanks for the feedback. I agree that increasing timeout might alleviate
 the timeout issue. The main problem with increasing timeout is the
 detrimental effect it will have on the user experience, therefore can't
 increase it.
 I have looked at the queries that threw errors, next time I try it
 everything seems to work fine. Not sure how to reproduce the error.
 My concern with increasing the memory to 32GB is what happens when the
 index size grows over the next few months.
 One of the other solutions I have been thinking about is to rebuild
 index(weekly) and create a new collection and use it. Are there any good
 references for doing that?
 Thanks
 Jay
 
 On Sun, Aug 2, 2015 at 10:19 AM, Shawn Heisey apa...@elyograg.org wrote:
 
 On 8/2/2015 8:29 AM, Jay Potharaju wrote:
 The documents contain around 30 fields and have stored set to true for
 almost 15 of them. And these stored fields are queried and updated all
 the
 time. You will notice that the deleted documents are almost 30% of the
 docs.  And it has stayed around that percent and has not come down.
 I did try optimize but that was disruptive as it caused search errors.
 I have been playing with merge factor to see if that helps with deleted
 documents or not. It is currently set to 5.
 
 The server has 24 GB of memory out of which memory consumption is around
 23
 GB normally and the jvm is set to 6 GB. And have noticed that the
 available
 memory on the server goes to 100 MB at times during a day.
 All the updates are run through DIH.
 
 Using all available memory is completely normal operation for ANY
 operating system.  If you hold up Windows as an example of one that
 doesn't ... it lies to you about available memory.  All modern
 operating systems will utilize memory that is not explicitly allocated
 for the OS disk cache.
 
 The disk cache will instantly give up any of the memory it is using for
 programs that request it.  Linux doesn't try to hide the disk cache from
 you, but older versions of Windows do.  In the newer versions of Windows
 that have the Resource Monitor, you can go there to see the actual
 memory usage including the cache.
 
 Every day at least once I see the following error, which results in search
 errors on the front end of the site.
 
 ERROR org.apache.solr.servlet.SolrDispatchFilter -
 null:org.eclipse.jetty.io.EofException
 
 From what I have read these are mainly due to timeout and my timeout is
 set
 to 30 seconds and can't set it to a higher number. I was thinking maybe
 due
 to high memory usage, sometimes it leads to bad performance/errors.
 
 Although this error can be caused by timeouts, it has a specific
 meaning.  It means that the client disconnected before Solr responded to
 the request, so when Solr tried to respond (through jetty), it found a
 closed TCP connection.
 
 Client timeouts need to either be completely removed, or set to a value
 much longer than any request will take.  Five minutes is a good starting
 value.
 
 If all your client timeout is set to 30 seconds and you are seeing
 EofExceptions, that means that your requests are taking longer than 30
 seconds, and you likely have some performance issues.  It's also
 possible that some of your client timeouts are set a lot shorter than 30
 seconds.
 
 My objective is to stop the errors, adding more memory to the server is
 not
 a good scaling strategy. That is why I was thinking maybe there is an
 issue
 with the way things are set up and need to be revisited.
 
 You're right that adding more memory to the servers is not a good
 scaling strategy for the general case ... but in this situation, I think
 it might be prudent.  For your index and heap sizes, I would want the
 company to pay for at least 32GB of RAM.
 
 Having said that ... I've seen Solr installs work well with a LOT less
 memory than the ideal.  I don't know that adding more memory is
 necessary, unless your system (CPU, storage, and memory speeds) is
 particularly slow.  Based on your document count and index size, your
 documents are quite small, so I think your memory size is probably good
 -- if the CPU, memory bus, and storage are very fast.  If one or more of
 those subsystems aren't fast, then make up the difference with lots of
 memory.
 
 Some light reading, where you will learn why I think 32GB is an ideal
 memory size for your system:
 
 https://wiki.apache.org/solr/SolrPerformanceProblems
 
 It is possible that your 6GB heap is not quite big enough for good
 performance, or that your GC is not well-tuned.  These topics are also
 discussed on that wiki page.  If you increase your heap size, then the
 likelihood of needing more memory in the system becomes greater, because
 there will be less memory available for the disk cache.
 
 Thanks,
 Shawn
 
 
 -- 
 Thanks
 Jay Potharaju


Re: solr multicore vs sharding vs 1 big collection

2015-08-02 Thread Jay Potharaju
The documents contain around 30 fields and have stored set to true for
almost 15 of them. And these stored fields are queried and updated all the
time. You will notice that the deleted documents are almost 30% of the
docs.  And it has stayed around that percent and has not come down.
I did try optimize but that was disruptive as it caused search errors.
I have been playing with merge factor to see if that helps with deleted
documents or not. It is currently set to 5.

The server has 24 GB of memory out of which memory consumption is around 23
GB normally and the jvm is set to 6 GB. And have noticed that the available
memory on the server goes to 100 MB at times during a day.
All the updates are run through DIH.

Every day at least once I see the following error, which results in search
errors on the front end of the site.

ERROR org.apache.solr.servlet.SolrDispatchFilter -
null:org.eclipse.jetty.io.EofException

From what I have read these are mainly due to timeouts, and my timeout is set
to 30 seconds and I can't set it to a higher number. I was thinking maybe due
to high memory usage, sometimes it leads to bad performance/errors.

My objective is to stop the errors, adding more memory to the server is not
a good scaling strategy. That is why I was thinking maybe there is an issue
with the way things are set up and they need to be revisited.

Thanks


On Sat, Aug 1, 2015 at 7:06 PM, Shawn Heisey apa...@elyograg.org wrote:

 On 8/1/2015 6:49 PM, Jay Potharaju wrote:
  I currently have a single collection with 40 million documents and index
  size of 25 GB. The collections gets updated every n minutes and as a
 result
  the number of deleted documents is constantly growing. The data in the
  collection is an amalgamation of more than 1000+ customer records. The
  number of documents per each customer is around 100,000 records on
 average.
 
  Now that being said, I'm trying to get a handle on the growing deleted
  document size. Because of the growing index size both the disk space and
  memory is being used up. And would like to reduce it to a manageable
 size.
 
  I have been thinking of splitting the data into multiple cores, one for each
  customer. This would allow me to manage the smaller collections easily and
 can
  create/update the collection also fast. My concern is that number of
  collections might become an issue. Any suggestions on how to address this
  problem. What are my other alternatives to moving to a multicore
  collections?
 
  Solr: 4.9
  Index size:25 GB
  Max doc: 40 million
  Doc count:29 million
 
  Replication:4
 
  4 servers in solrcloud.

 Creating 1000+ collections in SolrCloud is definitely problematic.  If
 you need to choose between a lot of shards and a lot of collections, I
 would definitely go with a lot of shards.  I would also want a lot of
 servers for an index with that many pieces.

 https://issues.apache.org/jira/browse/SOLR-7191

 I don't think it would matter how many collections or shards you have
 when it comes to how many deleted documents are in your index.  If you
 want to clean up a large number of deletes in an index, the best option
 is an optimize.  An optimize requires a large amount of disk I/O, so it
 can be extremely disruptive if the query volume is high.  It should be
 done when the query volume is at its lowest.  For the index you
 describe, a nightly or weekly optimize seems like a good option.

 Aside from having a lot of deleted documents in your index, what kind of
 problems are you trying to solve?

 Thanks,
 Shawn




-- 
Thanks
Jay Potharaju


Re: solr multicore vs sharding vs 1 big collection

2015-08-02 Thread Shawn Heisey
On 8/2/2015 8:29 AM, Jay Potharaju wrote:
 The documents contain around 30 fields and have stored set to true for
 almost 15 of them. And these stored fields are queried and updated all the
 time. You will notice that the deleted documents are almost 30% of the
 docs.  And it has stayed around that percent and has not come down.
 I did try optimize but that was disruptive as it caused search errors.
 I have been playing with merge factor to see if that helps with deleted
 documents or not. It is currently set to 5.
 
 The server has 24 GB of memory out of which memory consumption is around 23
 GB normally and the jvm is set to 6 GB. And have noticed that the available
 memory on the server goes to 100 MB at times during a day.
 All the updates are run through DIH.

Using all available memory is completely normal operation for ANY
operating system.  If you hold up Windows as an example of one that
doesn't ... it lies to you about available memory.  All modern
operating systems will utilize memory that is not explicitly allocated
for the OS disk cache.

The disk cache will instantly give up any of the memory it is using for
programs that request it.  Linux doesn't try to hide the disk cache from
you, but older versions of Windows do.  In the newer versions of Windows
that have the Resource Monitor, you can go there to see the actual
memory usage including the cache.

 Every day at least once I see the following error, which results in search
 errors on the front end of the site.
 
 ERROR org.apache.solr.servlet.SolrDispatchFilter -
 null:org.eclipse.jetty.io.EofException
 
 From what I have read these are mainly due to timeouts, and my timeout is set
 to 30 seconds and I can't set it to a higher number. I was thinking maybe due
 to high memory usage, sometimes it leads to bad performance/errors.

Although this error can be caused by timeouts, it has a specific
meaning.  It means that the client disconnected before Solr responded to
the request, so when Solr tried to respond (through jetty), it found a
closed TCP connection.

Client timeouts need to either be completely removed, or set to a value
much longer than any request will take.  Five minutes is a good starting
value.
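
For a SolrJ client of that vintage, a minimal sketch of such settings (the
URL is a placeholder):

 HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/collection1");
 solr.setConnectionTimeout(5000);  // 5 seconds to establish the connection
 solr.setSoTimeout(300000);        // 5 minutes before abandoning a response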

If all your client timeout is set to 30 seconds and you are seeing
EofExceptions, that means that your requests are taking longer than 30
seconds, and you likely have some performance issues.  It's also
possible that some of your client timeouts are set a lot shorter than 30
seconds.

 My objective is to stop the errors, adding more memory to the server is not
 a good scaling strategy. That is why I was thinking maybe there is an issue
 with the way things are set up and need to be revisited.

You're right that adding more memory to the servers is not a good
scaling strategy for the general case ... but in this situation, I think
it might be prudent.  For your index and heap sizes, I would want the
company to pay for at least 32GB of RAM.

Having said that ... I've seen Solr installs work well with a LOT less
memory than the ideal.  I don't know that adding more memory is
necessary, unless your system (CPU, storage, and memory speeds) is
particularly slow.  Based on your document count and index size, your
documents are quite small, so I think your memory size is probably good
-- if the CPU, memory bus, and storage are very fast.  If one or more of
those subsystems aren't fast, then make up the difference with lots of
memory.

Some light reading, where you will learn why I think 32GB is an ideal
memory size for your system:

https://wiki.apache.org/solr/SolrPerformanceProblems

It is possible that your 6GB heap is not quite big enough for good
performance, or that your GC is not well-tuned.  These topics are also
discussed on that wiki page.  If you increase your heap size, then the
likelihood of needing more memory in the system becomes greater, because
there will be less memory available for the disk cache.

Thanks,
Shawn



Re: solr multicore vs sharding vs 1 big collection

2015-08-02 Thread Jay Potharaju
Shawn,
Thanks for the feedback. I agree that increasing timeout might alleviate
the timeout issue. The main problem with increasing timeout is the
detrimental effect it will have on the user experience, therefore can't
increase it.
I have looked at the queries that threw errors; the next time I try them,
everything seems to work fine. Not sure how to reproduce the error.
My concern with increasing the memory to 32GB is what happens when the
index size grows over the next few months.
One of the other solutions I have been thinking about is to rebuild the
index (weekly), create a new collection and use it. Are there any good
references for doing that?
Thanks
Jay

On Sun, Aug 2, 2015 at 10:19 AM, Shawn Heisey apa...@elyograg.org wrote:

 On 8/2/2015 8:29 AM, Jay Potharaju wrote:
  The documents contain around 30 fields and have stored set to true for
  almost 15 of them. And these stored fields are queried and updated all
 the
  time. You will notice that the deleted documents are almost 30% of the
  docs.  And it has stayed around that percent and has not come down.
  I did try optimize but that was disruptive as it caused search errors.
  I have been playing with merge factor to see if that helps with deleted
  documents or not. It is currently set to 5.
 
  The server has 24 GB of memory out of which memory consumption is around
 23
  GB normally and the jvm is set to 6 GB. And have noticed that the
 available
  memory on the server goes to 100 MB at times during a day.
  All the updates are run through DIH.

 Using all available memory is completely normal operation for ANY
 operating system.  If you hold up Windows as an example of one that
 doesn't ... it lies to you about available memory.  All modern
 operating systems will utilize memory that is not explicitly allocated
 for the OS disk cache.

 The disk cache will instantly give up any of the memory it is using for
 programs that request it.  Linux doesn't try to hide the disk cache from
 you, but older versions of Windows do.  In the newer versions of Windows
 that have the Resource Monitor, you can go there to see the actual
 memory usage including the cache.

  Every day at least once I see the following error, which results in search
  errors on the front end of the site.
 
  ERROR org.apache.solr.servlet.SolrDispatchFilter -
  null:org.eclipse.jetty.io.EofException
 
  From what I have read these are mainly due to timeout and my timeout is
 set
  to 30 seconds and can't set it to a higher number. I was thinking maybe
 due
  to high memory usage, sometimes it leads to bad performance/errors.

 Although this error can be caused by timeouts, it has a specific
 meaning.  It means that the client disconnected before Solr responded to
 the request, so when Solr tried to respond (through jetty), it found a
 closed TCP connection.

 Client timeouts need to either be completely removed, or set to a value
 much longer than any request will take.  Five minutes is a good starting
 value.

 If all your client timeout is set to 30 seconds and you are seeing
 EofExceptions, that means that your requests are taking longer than 30
 seconds, and you likely have some performance issues.  It's also
 possible that some of your client timeouts are set a lot shorter than 30
 seconds.

  My objective is to stop the errors, adding more memory to the server is
 not
  a good scaling strategy. That is why I was thinking maybe there is an
 issue
  with the way things are set up and need to be revisited.

 You're right that adding more memory to the servers is not a good
 scaling strategy for the general case ... but in this situation, I think
 it might be prudent.  For your index and heap sizes, I would want the
 company to pay for at least 32GB of RAM.

 Having said that ... I've seen Solr installs work well with a LOT less
 memory than the ideal.  I don't know that adding more memory is
 necessary, unless your system (CPU, storage, and memory speeds) is
 particularly slow.  Based on your document count and index size, your
 documents are quite small, so I think your memory size is probably good
 -- if the CPU, memory bus, and storage are very fast.  If one or more of
 those subsystems aren't fast, then make up the difference with lots of
 memory.

 Some light reading, where you will learn why I think 32GB is an ideal
 memory size for your system:

 https://wiki.apache.org/solr/SolrPerformanceProblems

 It is possible that your 6GB heap is not quite big enough for good
 performance, or that your GC is not well-tuned.  These topics are also
 discussed on that wiki page.  If you increase your heap size, then the
 likelihood of needing more memory in the system becomes greater, because
 there will be less memory available for the disk cache.

 Thanks,
 Shawn




-- 
Thanks
Jay Potharaju


Re: solr multicore vs sharding vs 1 big collection

2015-08-01 Thread Shawn Heisey
On 8/1/2015 6:49 PM, Jay Potharaju wrote:
 I currently have a single collection with 40 million documents and index
 size of 25 GB. The collections gets updated every n minutes and as a result
 the number of deleted documents is constantly growing. The data in the
 collection is an amalgamation of more than 1000+ customer records. The
 number of documents per each customer is around 100,000 records on average.
 
 Now that being said, I'm trying to get a handle on the growing deleted
 document size. Because of the growing index size both the disk space and
 memory is being used up. And would like to reduce it to a manageable size.
 
 I have been thinking of splitting the data into multiple cores, one for each
 customer. This would allow me to manage the smaller collections easily and can
 create/update the collection also fast. My concern is that number of
 collections might become an issue. Any suggestions on how to address this
 problem. What are my other alternatives to moving to a multicore
 collections?
 
 Solr: 4.9
 Index size:25 GB
 Max doc: 40 million
 Doc count:29 million
 
 Replication:4
 
 4 servers in solrcloud.

Creating 1000+ collections in SolrCloud is definitely problematic.  If
you need to choose between a lot of shards and a lot of collections, I
would definitely go with a lot of shards.  I would also want a lot of
servers for an index with that many pieces.

https://issues.apache.org/jira/browse/SOLR-7191

I don't think it would matter how many collections or shards you have
when it comes to how many deleted documents are in your index.  If you
want to clean up a large number of deletes in an index, the best option
is an optimize.  An optimize requires a large amount of disk I/O, so it
can be extremely disruptive if the query volume is high.  It should be
done when the query volume is at its lowest.  For the index you
describe, a nightly or weekly optimize seems like a good option.
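
A hedged example of kicking that off from a nightly cron job (host,
collection name and segment count are placeholders):

 curl "http://localhost:8983/solr/collection1/update?optimize=true&maxSegments=1"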

Aside from having a lot of deleted documents in your index, what kind of
problems are you trying to solve?

Thanks,
Shawn



solr multicore vs sharding vs 1 big collection

2015-08-01 Thread Jay Potharaju
Hi

I currently have a single collection with 40 million documents and index
size of 25 GB. The collections gets updated every n minutes and as a result
the number of deleted documents is constantly growing. The data in the
collection is an amalgamation of more than 1000+ customer records. The
number of documents per each customer is around 100,000 records on average.

Now that being said, I'm trying to get a handle on the growing deleted
document size. Because of the growing index size both the disk space and
memory is being used up. And would like to reduce it to a manageable size.

I have been thinking of splitting the data into multiple cores, one for each
customer. This would allow me to manage the smaller collections easily and can
create/update the collection also fast. My concern is that number of
collections might become an issue. Any suggestions on how to address this
problem. What are my other alternatives to moving to a multicore
collections?

Solr: 4.9
Index size:25 GB
Max doc: 40 million
Doc count:29 million

Replication:4

4 servers in solrcloud.

Thanks
Jay


Re: solr multicore vs sharding vs 1 big collection

2015-08-01 Thread Erick Erickson
40 million docs isn't really very many by modern standards,
although if they're huge documents then that might be an issue.

So is this a single shard or multiple shards? If you're really facing
performance issues, simply making a new collection with more
than one shard (independent of how many replicas each has) is
probably simplest.

The number of deleted documents really shouldn't be a problem.
Typically the deleted documents are purged during segment
merging that happens automatically as you add documents. I often
see 10-15% of the corpus consist of deleted documents.

You can force these by doing a force merge (aka optimization), but that
is usually not recommended unless you have a strange situation where
you have lots and lots of docs that have been deleted as measured
by the Admin UI page, the deleted docs entry relative to the maxDoc
number (again on the admin UI page).

So show us what you're seeing that's concerning. Typically, especially
on an index that's continually getting updates, it's adequate to just
let the background segment merging take care of things.

Best,
Erick

On Sat, Aug 1, 2015 at 8:49 PM, Jay Potharaju jspothar...@gmail.com wrote:
 Hi

 I currently have a single collection with 40 million documents and index
 size of 25 GB. The collections gets updated every n minutes and as a result
 the number of deleted documents is constantly growing. The data in the
 collection is an amalgamation of more than 1000+ customer records. The
 number of documents per each customer is around 100,000 records on average.

 Now that being said, I'm trying to get a handle on the growing deleted
 document size. Because of the growing index size both the disk space and
 memory is being used up. And would like to reduce it to a manageable size.

 I have been thinking of splitting the data into multiple cores, one for each
 customer. This would allow me to manage the smaller collections easily and can
 create/update the collection also fast. My concern is that number of
 collections might become an issue. Any suggestions on how to address this
 problem. What are my other alternatives to moving to a multicore
 collections?

 Solr: 4.9
 Index size:25 GB
 Max doc: 40 million
 Doc count:29 million

 Replication:4

 4 servers in solrcloud.

 Thanks
 Jay


Re: Creating Solr servers dynamically in Multicore folder

2014-09-10 Thread Erick Erickson
You should be good to go. Do note that you can set the variables that were
defined in your schema.xml in the individual core.properties file for
the core in question if you need to, although the defaults work for
most people's needs.


Best,
Erick

On Tue, Sep 9, 2014 at 9:15 PM, nishwanth nishwanth.vupp...@gmail.com wrote:
 Hello Erick,

 Thanks for the response. My cores got created after removing the
 core.properties file in this location and the existing core folders.

 Also I commented out the core-related information in solr.xml. Are there going
 to be any further problems with the approach I followed?

 For the new cores I created, I could see the conf, data and core.properties
 files getting created.

 Thanks..






 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/Creating-Solr-servers-dynamically-in-Multicore-folder-tp4157550p4157747.html
 Sent from the Solr - User mailing list archive at Nabble.com.


Re: Creating Solr servers dynamically in Multicore folder

2014-09-10 Thread nishwanth
Hello Erick,

Thanks for the response.

I have attached the core.properties and solr.xml for your reference.

solr.xml: http://lucene.472066.n3.nabble.com/file/n4158124/solr.xml
core.properties: http://lucene.472066.n3.nabble.com/file/n4158124/core.properties

Below is our plan on the creating cores.

Every tenant (user) is bound to some Contacts, Sales, Orders and other
information. The number of tenants for our application will be approximately
10,000.

We are planning to create a core for every tenant and maintain the
Contacts, Sales, Orders and other information as a collection. So every time
a tenant logs in, this information will be used.

Could you please let us know your thoughts on this approach.

Regards,
Nishwanth




--
View this message in context: 
http://lucene.472066.n3.nabble.com/Creating-Solr-servers-dynamically-in-Multicore-folder-tp4157550p4158124.html
Sent from the Solr - User mailing list archive at Nabble.com.


Creating Solr servers dynamically in Multicore folder

2014-09-09 Thread nishwanth
Hello ,

I am using Solr version 4.8.1 and I am trying to create the cores
dynamically on server startup using the following piece of code.

HttpSolrServer s = new HttpSolrServer(url);
s.setParser(new BinaryResponseParser());
s.setRequestWriter(new BinaryRequestWriter());
SolrServer server = s;
String instanceDir = "/opt/solr/core/multicore/";
CoreAdminResponse e = new CoreAdminRequest().createCore(name, instanceDir,
    server, "/opt/solr/core/multicore/solrconfig.xml",
    "/opt/solr/core/multicore/schema.xml");

I am getting the error:

org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error
CREATEing SolrCore 'hellocore': Could not create a new core in
/opt/solr/core/multicore/ as another core is already defined there
        at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:554)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
        at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
        at org.apache.solr.client.solrj.request.CoreAdminRequest.process(CoreAdminRequest.java:503)
        at org.apache.solr.client.solrj.request.CoreAdminRequest.createCore(CoreAdminRequest.java:580)
        at org.apache.solr.client.solrj.request.CoreAdminRequest.createCore(CoreAdminRequest.java:560)
        at app.services.OperativeAdminScheduler.scheduleTask(OperativeAdminScheduler.java:154)
        at Global.onStart(Global.java:31)

I am still getting the above error even though the core0 and core1 folders
in multicore are deleted and the same is commented out in
/opt/solr/core/multicore/solrconfig.xml. Also I enabled persistent=true in
the solrconfig.xml.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Creating-Solr-servers-dynamically-in-Multicore-folder-tp4157550.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Creating Solr servers dynamically in Multicore folder

2014-09-09 Thread Erick Erickson
Well, you already have a core.properties file defined in that
location. I presume you're operating in core discovery mode. Your
cores would all be very confused if new cores were defined over top of
old cores.

It is a little clumsy at this point in that you have to have a conf
directory in place but _not_ a core.properties file to create a core
like this. Config sets will eventually fix this.
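
For reference, a minimal SolrJ sketch of that arrangement (path and core
name are hypothetical; the instance directory must already contain
conf/solrconfig.xml and conf/schema.xml but no core.properties file):

import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.request.CoreAdminRequest;
import org.apache.solr.client.solrj.response.CoreAdminResponse;

// CREATE writes core.properties into instanceDir itself; if the file is
// already there, Solr refuses with "another core is already defined there".
SolrServer server = new HttpSolrServer("http://localhost:8983/solr");
CoreAdminResponse resp = CoreAdminRequest.createCore(
    "core3",                           // new core name
    "/opt/solr/core/multicore/core3",  // per-core instanceDir, not the shared parent
    server);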

Best,
Erick

On Mon, Sep 8, 2014 at 11:00 PM, nishwanth nishwanth.vupp...@gmail.com wrote:
 Hello ,

 I am using Solr version 4.8.1 and I am trying to create the cores
 dynamically on server startup using the following piece of code.

 HttpSolrServer s = new HttpSolrServer(url);
 s.setParser(new BinaryResponseParser());
 s.setRequestWriter(new BinaryRequestWriter());
 SolrServer server = s;
 String instanceDir = "/opt/solr/core/multicore/";
 CoreAdminResponse e = new CoreAdminRequest().createCore(name, instanceDir,
     server, "/opt/solr/core/multicore/solrconfig.xml",
     "/opt/solr/core/multicore/schema.xml");

 I am getting the error

 org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error
 CREATEing SolrCore 'hellocore': Could not create a new core in
 /opt/solr/core/multicore/ as another core is already defined there
         at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:554)
         at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
         at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
         at org.apache.solr.client.solrj.request.CoreAdminRequest.process(CoreAdminRequest.java:503)
         at org.apache.solr.client.solrj.request.CoreAdminRequest.createCore(CoreAdminRequest.java:580)
         at org.apache.solr.client.solrj.request.CoreAdminRequest.createCore(CoreAdminRequest.java:560)
         at app.services.OperativeAdminScheduler.scheduleTask(OperativeAdminScheduler.java:154)
         at Global.onStart(Global.java:31)

 I am still getting the above error even though the core0 and core1 folders
 in multicore are deleted and the same is commented out in
 /opt/solr/core/multicore/solrconfig.xml. Also I enabled persistent=true in
 the solrconfig.xml.



 --
 View this message in context: 
 http://lucene.472066.n3.nabble.com/Creating-Solr-servers-dynamically-in-Multicore-folder-tp4157550.html
 Sent from the Solr - User mailing list archive at Nabble.com.


Re: Creating Solr servers dynamically in Multicore folder

2014-09-09 Thread nishwanth
Hello Erick,

Thanks for the response. My cores got created after removing the
core.properties file in this location and the existing core folders.

Also I commented out the core-related information in solr.xml. Are there going
to be any further problems with the approach I followed?

For the new cores I created, I could see the conf, data and core.properties
files getting created.

Thanks..






--
View this message in context: 
http://lucene.472066.n3.nabble.com/Creating-Solr-servers-dynamically-in-Multicore-folder-tp4157550p4157747.html
Sent from the Solr - User mailing list archive at Nabble.com.


jetty solr multicore log files

2014-04-17 Thread Kim, Soonho (IFPRI)

Dear all;

I have a quick question about the log files of Solr 3.4 (multicore) in Jetty
(Linux). I am using this Solr search linked with Drupal 6.
I tried to find the log files for this multicore setup (live/dev) but I couldn't
find them.
For the single core, I found it under apache-solr-3.4/example/logs/jetty.log. 
For the multicore, when I run this using java -Dsolr.solr.home=multicore -jar 
start.jar, then I can see some output from the system such as
 
Apr 16, 2014 5:37:29 PM 
org.apache.solr.handler.component.SpellCheckComponent$SpellCheckerListener 
newSearcher
INFO: Loading spell index for spellchecker: jarowinkler
2014-04-16 17:37:29.413:INFO::Started SocketConnector@0.0.0.0:8984
Apr 16, 2014 5:37:29 PM org.apache.solr.core.SolrCore registerSearcher
INFO: [live] Registered new searcher Searcher@4f8953fb main
Apr 16, 2014 5:37:36 PM org.apache.solr.request.UnInvertedField uninvert
INFO: UnInverted multi-valued field 
{field=im_cck_field_dataset_country,memSize=16758,tindexSize=46,time=2,phase1=2,nTerms=38,bigTerms=0,termIn
 stances=190,uses=0}
Apr 16, 2014 5:37:36 PM org.apache.solr.request.UnInvertedField uninvert
INFO: UnInverted multi-valued field 
{field=im_cck_field_model_country,memSize=16976,tindexSize=48,time=2,phase1=2,nTerms=71,bigTerms=0,termInst
 ances=214,uses=0}
Apr 16, 2014 5:37:36 PM org.apache.solr.request.UnInvertedField uninvert
INFO: UnInverted multi-valued field 
{field=im_cck_field_resource_country,memSize=17162,tindexSize=48,time=2,phase1=2,nTerms=71,bigTerms=0,termI
 nstances=615,uses=0}
Apr 16, 2014 5:37:36 PM org.apache.solr.request.UnInvertedField uninvert
INFO: UnInverted multi-valued field 
{field=im_vid_3,memSize=16754,tindexSize=42,time=1,phase1=1,nTerms=9,bigTerms=0,termInstances=245,uses=0}
Apr 16, 2014 5:37:36 PM org.apache.solr.request.UnInvertedField uninvert
INFO: UnInverted multi-valued field 
{field=im_vid_6,memSize=16756,tindexSize=44,time=0,phase1=0,nTerms=5,bigTerms=0,termInstances=14,uses=0}
Apr 16, 2014 5:37:36 PM org.apache.solr.request.UnInvertedField uninvert
INFO: UnInverted multi-valued field 
{field=im_vid_1,memSize=16758,tindexSize=46,time=1,phase1=0,nTerms=12,bigTerms=0,termInstances=450,uses=0}
Apr 16, 2014 5:37:36 PM org.apache.solr.request.UnInvertedField uninvert
INFO: UnInverted multi-valued field 
{field=im_vid_7,memSize=4224,tindexSize=32,time=0,phase1=0,nTerms=0,bigTerms=0,termInstances=0,uses=0}
Apr 16, 2014 5:37:36 PM org.apache.solr.request.UnInvertedField uninvert
INFO: UnInverted multi-valued field 
{field=im_vid_4,memSize=16852,tindexSize=44,time=2,phase1=2,nTerms=23,bigTerms=1,termInstances=549,uses=0}
Apr 16, 2014 5:37:36 PM org.apache.solr.request.UnInvertedField uninvert
INFO: UnInverted multi-valued field 
{field=im_vid_2,memSize=16756,tindexSize=44,time=0,phase1=0,nTerms=3,bigTerms=0,termInstances=15,uses=0}
Apr 16, 2014 5:37:36 PM org.apache.solr.core.SolrCore execute

Is it possible to write these messages to a log file instead of printing them
on the command line (System.out.println)?
Thanks for your answer in advance. : ) 

Best,
Soonho


Re: jetty solr multicore log files

2014-04-17 Thread Shawn Heisey
On 4/17/2014 4:35 AM, Kim, Soonho (IFPRI) wrote:
 I have a quick question about the log files of Solr 3.4 (multicore) in Jetty 
 (Linux). I am using this Solr search linked with Drupal 6.
 I tried to find the log files for this multicore setup (live/dev) but I couldn't 
 find them.
 For the single core, I found it under apache-solr-3.4/example/logs/jetty.log. 
 For the multicore, when I run this using java -Dsolr.solr.home=multicore -jar 
 start.jar, then I can see some output from the system such as

Solr 3.x has the logging jars included in the .war file, and they bind
to java.util.logging.  The default for this logging is stdout, which is
why you see them on your screen.

The situation changed with Solr 4.3.0 -- the logging jars were moved out
of the .war file and the binding was changed from java.util.logging to
log4j.  A config file for the logging was also provided in the example,
one that logs to a file as well as stdout.

Since you're running a release before that change, you'll need to create
a logging config file for java.util.logging (typically named
logging.properties, but any name is possible) and add a system property
to your java commandline:

-Djava.util.logging.config.file=myLoggingConfigFilePath

The Solr wiki has an example config file:

http://wiki.apache.org/solr/LoggingInDefaultJettySetup
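
A sketch of what such a file can contain (the path and level here are
placeholders; the wiki page above has the canonical example):

# logging.properties -- route java.util.logging output to a file
handlers = java.util.logging.FileHandler
.level = INFO
java.util.logging.FileHandler.pattern = /var/solr/logs/solr.log
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter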

If you can do it, I would strongly recommend using a much newer version
of Solr.

Thanks,
Shawn



Issues with multicore management

2014-03-04 Thread bengates
Hello,

I'm having issues with multicore management. 
What I want to do :
*1st point :* Create new cores on the fly without restarting the Solr
instance
*2nd point :* Have these new cores registered in case of restarting Solr
instance

So, I tried *config A* :
/solr.xml/ :



Then I duplicated the /example/multicore/core2/ directory to
/example/multicore/core3/ and ran the following URL :
http://localhost:8983/solr/admin/cores?wt=json&indexInfo=false&action=CREATE&name=core3&instanceDir=core3&dataDir=data&config=solrconfig.xml&schema=managed-schema.xml
This was *successful*. My core was then listed in the Solr Admin UI
http://localhost:8983/solr/#/~cores
Problem : after restarting the start.jar, *only core1 and core2 appear*. 
This config handles the *1st point* but not the *2nd one*.

So I tried *config B* :
/solr.xml/ :


Then I put a /core.properties/ file in each core directory with
/name=core1/, /name=core2/.
core1 and core2 are *automatically discovered*. 
I duplicated core2 to core3 directory and changed /core.properties/ with
/name=core3/.
Then I ran the CREATE URL :
http://localhost:8983/solr/admin/cores?wt=json&indexInfo=false&action=CREATE&name=core3&instanceDir=core3&dataDir=data&config=solrconfig.xml&schema=managed-schema.xml
*Successful*. BUT: my core *isn't listed* in the Admin UI. So, I tried to
reload it :
http://localhost:8983/solr/admin/cores?action=RELOAD&core=core3&wt=json&indent=true
{
  "responseHeader":{
    "status":400,
    "QTime":3},
  "error":{
    "msg":"Core with core name [core3] does not exist.",
    "code":400}}

However :
http://localhost:8983/solr/admin/cores?action=STATUS&core=core3&wt=json&indent=true
{
  "responseHeader":{
    "status":0,
    "QTime":1},
  "initFailures":{},
  "status":{
    "core3":{}}}

When I restart start.jar, the core is OK.
So, this config handles the *2nd point* but not the *1st one*.

What am I doing wrong ?
I'm running Solr 4.7.0.

Thanks,
Ben



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Issues-with-multicore-management-tp4121107.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Issues with multicore management

2014-03-04 Thread Dmitry Kan
Hi,

do you have persistent=true in your solr.xml in the root element?
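
For reference, a sketch of a legacy-style solr.xml carrying that attribute
(attributes other than the core names from your config A are placeholders):

<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core1" instanceDir="core1" />
    <core name="core2" instanceDir="core2" />
  </cores>
</solr>

With persistent="true", Solr writes cores created through the CoreAdmin API
back into this file, which is what lets them survive a restart.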

Dmitry


On Tue, Mar 4, 2014 at 3:30 PM, bengates benga...@aliceadsl.fr wrote:

 Hello,

 I'm having issues with multicore management.
 What I want to do :
 *1st point :* Create new cores on the fly without restarting the Solr
 instance
 *2nd point :* Have these new cores registered in case of restarting Solr
 instance

 So, I tried *config A* :
 /solr.xml/ :



 Then I duplicated the /example/multicore/core2/ directory to
 /example/multicore/core3/ and ran the following URL :

  http://localhost:8983/solr/admin/cores?wt=json&indexInfo=false&action=CREATE&name=core3&instanceDir=core3&dataDir=data&config=solrconfig.xml&schema=managed-schema.xml
  This was *successful*. My core was then listed in the Solr Admin UI
 http://localhost:8983/solr/#/~cores
 Problem : after restarting the start.jar, *only core1 and core2 appear*.
 This config handles the *1st point* but not the *2nd one*.

 So I tried *config B* :
 /solr.xml/ :


 Then I put a /core.properties/ file in each core directory with
 /name=core1/, /name=core2/.
 core1 and core2 are *automatically discovered*.
 I duplicated core2 to core3 directory and changed /core.properties/ with
 /name=core3/.
 Then I ran the CREATE URL :

  http://localhost:8983/solr/admin/cores?wt=json&indexInfo=false&action=CREATE&name=core3&instanceDir=core3&dataDir=data&config=solrconfig.xml&schema=managed-schema.xml
  *Successful*. BUT: my core *isn't listed* in the Admin UI. So, I tried to
 reload it :

  http://localhost:8983/solr/admin/cores?action=RELOAD&core=core3&wt=json&indent=true
  {
    "responseHeader":{
      "status":400,
      "QTime":3},
    "error":{
      "msg":"Core with core name [core3] does not exist.",
      "code":400}}

 However :

  http://localhost:8983/solr/admin/cores?action=STATUS&core=core3&wt=json&indent=true
  {
    "responseHeader":{
      "status":0,
      "QTime":1},
    "initFailures":{},
    "status":{
      "core3":{}}}

 When I restart start.jar, the core is OK.
 So, this config handles the *2nd point* but not the *1st one*.

 What am I doing wrong ?
 I'm running Solr 4.7.0.

 Thanks,
 Ben



 --
 View this message in context:
 http://lucene.472066.n3.nabble.com/Issues-with-multicore-management-tp4121107.html
 Sent from the Solr - User mailing list archive at Nabble.com.




-- 
Dmitry
Blog: http://dmitrykan.blogspot.com
Twitter: twitter.com/dmitrykan


Deciding how to correctly use Solr multicore

2014-02-09 Thread Pisarev, Vitaliy
Hello!

We are evaluating Solr usage in our organization and have come to the point 
where we are past the functional tests and are now looking at choosing the best 
deployment topology.
Here are some details about the structure of the problem: The application deals 
with storing and retrieving artifacts of various types. The artifacts are stored 
in projects. Each project can have hundreds of thousands of artifacts (total 
across all types) and our largest customers have hundreds of projects (~300-800), 
though the vast majority have tens of projects (~30-100).

Core granularity
In terms of core granularity, it seems to me that a core per project is 
sensible, as pushing everything into a single core will probably be too much. The 
entities themselves will have a special type field for distinction.
Moreover, it may be that not all of the projects are active at a given time, so 
this allows their indexes to remain latent on disk.


Availability and synchronization
Our application is deployed on premises at our customers' sites - we cannot go too 
crazy about the amount of extra resources we demand from them, e.g. dedicated 
indexing servers. We pretty much need to make do with what is already there.

For now, we are planning to use the DIH to maintain the index. Each node in the 
app cluster will have its own local index. When a project is created (or 
the feature is enabled on an existing project), a core is created for it on 
each one of the nodes, a full import is executed and then a delta import is 
scheduled to run on each one of the nodes. This gives us simplicity, but I am 
wondering about the performance and memory consumption costs. Also, I am 
wondering whether we should use replication for this purpose. The requirement 
is for the index to be updated once every 30 seconds - are delta imports designed 
for this?
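
As an illustration of the scheduling side, a delta import is normally triggered over HTTP against each core's DIH handler, roughly like the sketch below (assuming the conventional /dataimport handler name and a hypothetical per-project core called projectX):

    http://localhost:8983/solr/projectX/dataimport?command=delta-import&clean=false&commit=true

A cron job on each node issuing this every 30 seconds is the usual low-tech approach; whether it keeps up depends mostly on how cheap the delta query is on the database side.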

I understand that this is a very complex problem in general. I tried to 
highlight the most significant aspects and would appreciate some initial 
guidance. Note that we are planning to execute performance and stress testing 
no matter what, but the assumption is that the topology of the solution can be 
predetermined with the existing data.






Re: Deciding how to correctly use Solr multicore

2014-02-09 Thread Jack Krupansky
The first question I always ask is: how do you want to query the data - 
what is the full range of query use cases?


For example, might a customer ever want to query across all of their 
projects?


You didn't say how many customers you must be able to support. This leads to 
questions about how many customers or projects run on a single Solr server. 
It sounds like you may require quite a number of Solr servers, each 
multi-core. And in some cases a single customer might not fit on a single 
Solr server. SolrCloud might begin to make sense even though it sounds like 
a single collection would rarely need to be sharded.


You didn't speak at all about HA (High Availability) requirements or 
replication.


Or about query latency requirements or query load - which can impact 
replication requirements.


-- Jack Krupansky

-Original Message- 
From: Pisarev, Vitaliy

Sent: Sunday, February 9, 2014 4:22 AM
To: solr-user@lucene.apache.org
Subject: Deciding how to correctly use Solr multicore

Hello!

We are evaluating Solr usage in our organization and have come to the point 
where we are past the functional tests and are now looking in choosing the 
best deployment topology.
Here are some details about the structure of the problem: The application 
deals with storing and retrieving artifacts of various types. The artifact 
are stored in Projects. Each project can have hundreds of thousands of 
artifacts (total on all types) and our largest customers have hundreds of 
projects (~300-800) though the vast majority have tens of project (~30-100).


Core granularity
In terms of Core granularity- it seems to me that a core per project is 
sensible, as pushing everything to a single core will probably be too much. 
The entities themselves will have a special type field for distinction.
Moreover, it may be that not all of the project are active in a given time 
so this allows their indexes to remain on latent on disk.



Availability and synchronization
Our application is deployed on premise on our customers sites- we cannot go 
too crazy as to the amount of extra resources we demand from them- e.g. 
dedicated indexing servers. We pretty much need to make do with what is 
already there.


For now, we are planning to use the DIH to maintain the index. Each node the 
cluster on the app will have its own local index. When a project is created 
(or the feature is enabled on an existing project), a core is created for it 
on each one of the nodes, a full import is executed and then a delta import 
is scheduled to run on each one of the nodes. This gives us simplicity but I 
am wondering about the performance and memory consumption costs? Also, I am 
wondering whether we should use replication for this purpose. The 
requirement is for the index to be updated once in 30 seconds - are delta 
imports design for this?


I understand that this is a very complex problem in general. I tried to 
highlight all the most significant aspects and will appreciate some initial 
guidance. Note that we are planning to execute performance and stress 
testing no matter what but the assumption is that the topology of the 
solution can be predetermined with the existing data.







Re: Deciding how to correctly use Solr multicore

2014-02-09 Thread Erick Erickson
You might also get some mileage out of the transient core concept, see:
http://wiki.apache.org/solr/LotsOfCores

The underlying idea is to allow N cores to be active simultaneously, aged out
on an LRU basis. The penalty here is that the first request for a core that's
not already loaded will take the time needed to load it up, which can be noticeable.

Also, Solr easily handles 10s of millions of documents. An alternate design is
to simply index everything in a single core with a type field (which
I think you
already have). Then you restrict results with simple fq clauses, like
fq=type:whatever

These are cached in the filterCache which you control through solrconfig.xml.
There are nuances around document relevance etc, but we'll leave that for later.

NOTE: there is some overhead to having multiple cores rather than all-in-one.
That said, I know of a bunch of organizations that use the many-core approach,
so it's not an "X is always better" kind of thing.

Best,
Erick
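
As a sketch of that transient-core setup (assuming core discovery and the property names documented on the LotsOfCores wiki page), each core's core.properties would contain something like:

    name=project42
    transient=true
    loadOnStartup=false

together with a transientCacheSize setting in solr.xml that caps how many transient cores stay loaded at once. The single-core alternative is then just a filtered query, e.g. .../select?q=widget&fq=type:requirement, with hypothetical field values.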

On Sun, Feb 9, 2014 at 6:04 AM, Jack Krupansky j...@basetechnology.com wrote:
 The first question I always is ask is how do you want to query the data -
 what is the full range of query use cases?

 For example, might a customer every want to query across all of their
 projects?

 You didn't say how many customers you must be able to support. This leads to
 questions about how many customers or projects run on a single Solr server.
 It sounds like you may require quite a number of Solr servers, each
 multi-core. And in some cases a single customer might not fit on a single
 Solr server. SolrCloud might begin to make sense even though it sounds like
 a single collection would rarely need to be sharded.

 You didn't speak at all about HA (High Availability) requirements or
 replication.

 Or about query latency requirements or query load - which can impact
 replication requirements.

 -- Jack Krupansky

 -Original Message- From: Pisarev, Vitaliy
 Sent: Sunday, February 9, 2014 4:22 AM
 To: solr-user@lucene.apache.org
 Subject: Deciding how to correctly use Solr multicore


 Hello!

 We are evaluating Solr usage in our organization and have come to the point
 where we are past the functional tests and are now looking in choosing the
 best deployment topology.
 Here are some details about the structure of the problem: The application
 deals with storing and retrieving artifacts of various types. The artifact
 are stored in Projects. Each project can have hundreds of thousands of
 artifacts (total on all types) and our largest customers have hundreds of
 projects (~300-800) though the vast majority have tens of project (~30-100).

 Core granularity
 In terms of Core granularity- it seems to me that a core per project is
 sensible, as pushing everything to a single core will probably be too much.
 The entities themselves will have a special type field for distinction.
 Moreover, it may be that not all of the project are active in a given time
 so this allows their indexes to remain on latent on disk.


 Availability and synchronization
 Our application is deployed on premise on our customers sites- we cannot go
 too crazy as to the amount of extra resources we demand from them- e.g.
 dedicated indexing servers. We pretty much need to make do with what is
 already there.

 For now, we are planning to use the DIH to maintain the index. Each node the
 cluster on the app will have its own local index. When a project is created
 (or the feature is enabled on an existing project), a core is created for it
 on each one of the nodes, a full import is executed and then a delta import
 is scheduled to run on each one of the nodes. This gives us simplicity but I
 am wondering about the performance and memory consumption costs? Also, I am
 wondering whether we should use replication for this purpose. The
 requirement is for the index to be updated once in 30 seconds - are delta
 imports design for this?

 I understand that this is a very complex problem in general. I tried to
 highlight all the most significant aspects and will appreciate some initial
 guidance. Note that we are planning to execute performance and stress
 testing no matter what but the assumption is that the topology of the
 solution can be predetermined with the existing data.






Default core for updates in multicore setup

2014-02-05 Thread Tom Burton-West
Hello,

I'm running the example setup for Solr 4.6.1.

In the ../example/solr/  directory, I set up a second core.  I  wanted to
send updates to that core.

  I looked at  .../exampledocs/post.sh and expected to see the URL as:  URL=
http://localhost:8983/solr/collection1/update
However it does not have the core name:
URL=http://localhost:8983/solr/update
Solr however accepts updates with that URL in the core named collection1.

I then tried to locate some config somewhere that would specify that the
default core would be collection1, but could not find it.

1) Is there somewhere where the default core for  the xx/solr/update URL is
configured?

2) I ran across SOLR-545 which seems to imply that the current behavior
(dispatching the update requests to the core named collection1) is a bug
which was fixed in Solr 1.3.   Is this a new bug or a change in design?

https://issues.apache.org/jira/browse/SOLR-545

Tom


Re: Default core for updates in multicore setup

2014-02-05 Thread Chris Hostetter

: I then tried to locate some config somewhere that would specify that the
: default core would be collection1, but could not find it.

in the older style solr.xml, you can specify a defaultCoreName.  Moving 
forward, relying on the default core name is discouraged (and will 
hopefully be removed before 5.0) so it's not possible to configure it in 
the new core discovery style of solr.xml...

https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml
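
As a concrete sketch of that legacy syntax (attribute names per the pre-discovery solr.xml format; the surrounding file is elided):

    <cores adminPath="/admin/cores" defaultCoreName="collection1">
      <core name="collection1" instanceDir="collection1" />
    </cores>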

For now however, the hardcoded default of collection1 is still used for 
backcompat when there is no defaultCoreName configured by the user.  
And things like post.sh, post.jar, and the tutorial have not really been 
updated yet to reflect that the use of the default core name is 
deprecated.

: 2) I ran across SOLR-545 which seems to imply that the current behavior
: (dispatching the update requests to the core named collection1) is a bug

Yeah, a lot of things have changed since 1.3 ... not sure when exactly the 
configurable defaultCoreName was added, but it was sometime after that 
issue, I believe.


-Hoss
http://www.lucidworks.com/


Re: Default core for updates in multicore setup

2014-02-05 Thread Tom Burton-West
Thanks Hoss,

hardcoded default of collection1 is still used for
backcompat when there is no defaultCoreName configured by the user.

Aha, it's hardcoded if there is nothing set in a config.  No wonder I
couldn't find it by grepping around the config files.

I'm still trying to sort out the old and new style solr.xml/core
configuration stuff.  Thanks for your help.

Tom




On Wed, Feb 5, 2014 at 4:31 PM, Chris Hostetter hossman_luc...@fucit.orgwrote:


 : I then tried to locate some config somewhere that would specify that the
 : default core would be collection1, but could not find it.

 in the older style solr.xml, you can specify a defaultCoreName.  Moving
 forward, relying on the default core name is discouraged (and will
 hopefully be removed before 5.0) so it's not possible to configure it in
 the new core discovery style of solr.xml...

 https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml

 For now however, the hardcoded default of collection1 is still used for
 backcompat when there is no defaultCoreName configured by the user.
 and things like post.sh, post.jar, and the tutorial have not really been
 updated yet to reflect that the use of the default core name is
 deprecated.

 : 2) I ran across SOLR-545 which seems to imply that the current behavior
 : (dispatching the update requests to the core named collection1) is a bug

 Yeah, A lot of things have changed since 1.3 ... not sure when exactly the
 configurable defaultCoreName was added, but it was sometime after that
 issue i believe.


 -Hoss
 http://www.lucidworks.com/



Re: Default core for updates in multicore setup

2014-02-05 Thread Jack Krupansky
Tom, I did make an effort to sort out both the old and newer solr.xml 
features in my Solr 4.x Deep Dive e-book.


-- Jack Krupansky

-Original Message- 
From: Tom Burton-West

Sent: Wednesday, February 5, 2014 5:56 PM
To: solr-user@lucene.apache.org
Subject: Re: Default core for updates in multicore setup

Thanks Hoss,


hardcoded default of collection1 is still used for

backcompat when there is no defaultCoreName configured by the user.

Aha, it's hardcoded if there is nothing set in a config.  No wonder I
couldn't find it by grepping around the config files.

I'm still trying to sort out the old and new style solr.xml/core
configuration stuff.  Thanks for your help.

Tom




On Wed, Feb 5, 2014 at 4:31 PM, Chris Hostetter 
hossman_luc...@fucit.orgwrote:




: I then tried to locate some config somewhere that would specify that the
: default core would be collection1, but could not find it.

in the older style solr.xml, you can specify a defaultCoreName.  Moving
forward, relying on the default core name is discouraged (and will
hopefully be removed before 5.0) so it's not possible to configure it in
the new core discovery style of solr.xml...

https://cwiki.apache.org/confluence/display/solr/Solr+Cores+and+solr.xml

For now however, the hardcoded default of collection1 is still used for
backcompat when there is no defaultCoreName configured by the user.
and things like post.sh, post.jar, and the tutorial have not really been
updated yet to reflect that the use of the default core name is
deprecated.

: 2) I ran across SOLR-545 which seems to imply that the current behavior
: (dispatching the update requests to the core named collection1) is a bug

Yeah, A lot of things have changed since 1.3 ... not sure when exactly the
configurable defaultCoreName was added, but it was sometime after that
issue i believe.


-Hoss
http://www.lucidworks.com/





Re: How to share Schema between multicore on Solr 4.4

2013-10-09 Thread Erick Erickson
Shawn:

Hmmm, I hadn't thought about that before. The shareSchema
stuff is keyed off the absolute directory (and timestamp) of
the schema.xml file associated with a core and is about
sharing the internal object that holds the parsed schema.

Do you know for sure if the fact that this is coming from ZK
actually shares the schema object? 'Cause I've never
looked to see and it would be a good thing to have in my
head...


Thanks!
Erick

On Tue, Oct 8, 2013 at 8:33 PM, Shawn Heisey s...@elyograg.org wrote:
 On 10/7/2013 6:02 AM, Dharmendra Jaiswal wrote:

 I am using Solr 4.4 version with SolrCloud on Windows machine.
 Somehow i am not able to share schema between multiple core.


 If you're in SolrCloud mode, then you already *are* sharing your schema.
 You are also sharing your configuration.  Both of them are in zookeeper.
 All collections (and all shards within a collection) which use a given
 config name are using the same copy.

 Any copies of your config/schema that might be on your disk are *NOT* being
 used.  If you are starting Solr with any bootstrap options, then the config
 set that is in zookeeper might be getting overwritten by what's on your disk
 when Solr restarts, but otherwise SolrCloud *only* uses zookeeper for
 config/schema. The bootstrap options are meant to be used once, and I
 actually prefer to get SolrCloud operational without using bootstrap options
 at all.

 Thanks,
 Shawn



Re: How to share Schema between multicore on Solr 4.4

2013-10-09 Thread Shawn Heisey

On 10/9/2013 6:24 AM, Erick Erickson wrote:

Hmmm, I hadn't thought about that before. The shareSchema
stuff is keyed off the absolute directory (and timestamp) of
the schema.xml file associated with a core and is about
sharing the internal object that holds the parsed schema.

Do you know for sure if the fact that this is coming from ZK
actually shares the schema object? 'Cause I've never
looked to see and it would be a good thing to have in my
head...


With SolrCloud, I have no idea whether the actual internal objects are 
shared.  Just now I tried to figure that out from the code, but I don't 
already have an understanding of how that code works, and a quick glance 
isn't enough to gain that knowledge. I can guarantee that you have a much 
deeper understanding of those internals than I do!


My comments were to indicate that SolrCloud creates a situation where 
the config/schema are shared in the sense that there's only one 
canonical copy.


Thanks,
Shawn



Re: How to share Schema between multicore on Solr 4.4

2013-10-09 Thread Erick Erickson
bq: ...in the sense that there's only one canonical copy.

Agreed, and as you say that copy is kept in ZooKeeper.

And I pretty much guarantee that the internal solrconfig object
is NOT shared. I doubt the schema object is shared, but it seems
like it could be with some work.

But the savings potential here is rather small unless you have a
large number of cores. The LotsOfCores option is really, at this
point, orthogonal to SolrCloud; I don't think they play nice together
(and we have some anecdotal evidence that they don't).

Erick

On Wed, Oct 9, 2013 at 12:17 PM, Shawn Heisey s...@elyograg.org wrote:
 On 10/9/2013 6:24 AM, Erick Erickson wrote:

 Hmmm, I hadn't thought about that before. The shareSchema
 stuff is keyed off the absolute directory (and timestamp) of
 the schema.xml file associated with a core and is about
 sharing the internal object that holds the parsed schema.

 Do you know for sure if the fact that this is coming from ZK
 actually shares the schema object? 'Cause I've never
 looked to see and it would be a good thing to have in my
 head...


 With SolrCloud, I have no idea whether the actual internal objects are
 shared.  Just now I tried to figure that out from the code, but I don't
 already have an understanding of how that code works, and a quick glance
 isn't enough to gain that knowledge. I can guarantee that you have a much
 deeper understanding of those internals than I do!

 My comments were to indicate that SolrCloud creates a situation where the
 config/schema are shared in the sense that there's only one canonical copy.

 Thanks,
 Shawn



Re: How to share Schema between multicore on Solr 4.4

2013-10-08 Thread Shawn Heisey

On 10/7/2013 6:02 AM, Dharmendra Jaiswal wrote:

I am using Solr 4.4 version with SolrCloud on Windows machine.
Somehow i am not able to share schema between multiple core.


If you're in SolrCloud mode, then you already *are* sharing your 
schema.  You are also sharing your configuration.  Both of them are in 
zookeeper.  All collections (and all shards within a collection) which 
use a given config name are using the same copy.


Any copies of your config/schema that might be on your disk are *NOT* 
being used.  If you are starting Solr with any bootstrap options, then 
the config set that is in zookeeper might be getting overwritten by 
what's on your disk when Solr restarts, but otherwise SolrCloud *only* 
uses zookeeper for config/schema. The bootstrap options are meant to be 
used once, and I actually prefer to get SolrCloud operational without 
using bootstrap options at all.


Thanks,
Shawn



How to share Schema between multicore on Solr 4.4

2013-10-07 Thread Dharmendra Jaiswal
I am using Solr 4.4 version with SolrCloud on a Windows machine.
Somehow I am not able to share the schema between multiple cores.

My solr.xml file looks like:

<solr>
  <str name="shareSchema">${shareSchema:true}</str>
  <solrcloud>
    <str name="hostContext">${hostContext:SolrEngine}</str>
    <int name="hostPort">${tomcat.port:8080}</int>
    <int name="zkClientTimeout">${zkClientTimeout:15000}</int>
  </solrcloud>
</solr>

I have used a core.properties file for each core. One of the cores (say
collection1) contains the schema.xml file and the rest have all the config
files excluding schema.xml.

core.properties file contains
name=corename

After deployment
I am getting the following error:

collection2:
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
Error loading schema resource schema.xml 

Please note that I have provided shareSchema=true in the solr.xml file.

Please let me know if anything is missing.
Any pointer will be helpful.

Thanks,
Dharmendra Jaiswal





Updating solrconfig and schema.xml for solrcloud in multicore setup

2013-06-25 Thread Utkarsh Sengar
Hello,

I am trying to update schema.xml for a core in a multicore setup and this
is what I do to update it:

I have 3 nodes in my solr cluster.

1. Pick node1 and manually update schema.xml

2. Restart node1 with -Dbootstrap_conf=true
java -Dsolr.solr.home=multicore -DnumShards=3 -Dbootstrap_conf=true
-DzkHost=localhost:2181 -DSTOP.PORT=8079 -DSTOP.KEY=mysecret -jar start.jar

3. Restart the other 2 nodes using this command (without
-Dbootstrap_conf=true since these should pull from zk).:
java -Dsolr.solr.home=multicore -DnumShards=3 -DzkHost=localhost:2181
-DSTOP.PORT=8079 -DSTOP.KEY=mysecret -jar start.jar

But when I do that, node1 displays all of my cores and the other 2 nodes
display just one core.

Then, I found this:
http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E
Which says bootstrap_conf is used for multicore setup.


But if I use bootstrap_conf for every node, then I will have to manually
update schema.xml (or any config file) everywhere? That does not sound
like an efficient way of managing configuration, right?


-- 
Thanks,
-Utkarsh


Re: Updating solrconfig and schema.xml for solrcloud in multicore setup

2013-06-25 Thread Jan Høydahl
Hi,

The -Dbootstrap_confdir option is really only meant for a first-time bootstrap 
for your development environment, not for serious use.

Once you got your config into ZK you should modify the config directly in ZK.
There are many tools (also 3rd party) for this. But your best choice is 
probably zkCli shipping with Solr.
See http://wiki.apache.org/solr/SolrCloud#Command_Line_Util
This means you will NOT need to start Solr with -Dbootstrap_confdir at all.
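
For example, with the zkcli script that ships under Solr's cloud-scripts directory, pushing a changed config up to ZooKeeper looks roughly like this (a sketch; the paths and confname must match what your collection is linked to):

    ./zkcli.sh -zkhost localhost:2181 -cmd upconfig -confdir /path/to/conf -confname myconf

followed by a RELOAD of the affected cores/collection so the running Solr picks up the change.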

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

25. juni 2013 kl. 10:29 skrev Utkarsh Sengar utkarsh2...@gmail.com:

 Hello,
 
 I am trying to update schema.xml for a core in a multicore setup and this
 is what I do to update it:
 
 I have 3 nodes in my solr cluster.
 
 1. Pick node1 and manually update schema.xml
 
 2. Restart node1 with -Dbootstrap_conf=true
 java -Dsolr.solr.home=multicore -DnumShards=3 -Dbootstrap_conf=true
 -DzkHost=localhost:2181 -DSTOP.PORT=8079 -DSTOP.KEY=mysecret -jar start.jar
 
 3. Restart the other 2 nodes using this command (without
 -Dbootstrap_conf=true since these should pull from zk).:
 java -Dsolr.solr.home=multicore -DnumShards=3 -DzkHost=localhost:2181
 -DSTOP.PORT=8079 -DSTOP.KEY=mysecret -jar start.jar
 
 But, when I do that. node1 displays all of my cores and the other 2 nodes
 displays just one core.
 
 Then, I found this:
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E
 Which says bootstrap_conf is used for multicore setup.
 
 
 But if I use bootstrap_conf for every node, then I will have to manually
 update schema.xml (for any config file) everywhere? That does not sound
 like an efficient way of managing configuration right?
 
 
 -- 
 Thanks,
 -Utkarsh



Re: Updating solrconfig and schema.xml for solrcloud in multicore setup

2013-06-25 Thread Utkarsh Sengar
But when I launch a Solr instance without -Dbootstrap_conf=true, just
one core is launched and I cannot see the other core.

This behavior is the same as Mark's reply here:
http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E

- bootstrap_conf: you pass it true and it reads solr.xml and uploads
the conf set for each
SolrCore it finds, gives the conf set the name of the collection and
associates each collection
with the same named config set.

So the first just lets you boot strap one collection easily...but what
if you start with a
multi-core, multi-collection setup that you want to bootstrap into
SolrCloud? And they don't
share a common config set? That's what the second command is for. You
can setup 30 local SolrCores
in solr.xml and then just bootstrap all 30 different config sets up
and have them fully linked
with each collection just by passing bootstrap_conf=true.



Note: I am using -Dbootstrap_conf=true and not -Dbootstrap_confdir


Thanks,
-Utkarsh


On Tue, Jun 25, 2013 at 2:14 AM, Jan Høydahl jan@cominvent.com wrote:

 Hi,

 The -Dbootstrap_confdir option is really only meant for a first-time
 bootstrap for your development environment, not for serious use.

 Once you got your config into ZK you should modify the config directly in
 ZK.
 There are many tools (also 3rd party) for this. But your best choice is
 probably zkCli shipping with Solr.
 See http://wiki.apache.org/solr/SolrCloud#Command_Line_Util
 This means you will NOT need to start Solr with -Dboostrap_confdir at all.

 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com

 25. juni 2013 kl. 10:29 skrev Utkarsh Sengar utkarsh2...@gmail.com:

  Hello,
 
  I am trying to update schema.xml for a core in a multicore setup and this
  is what I do to update it:
 
  I have 3 nodes in my solr cluster.
 
  1. Pick node1 and manually update schema.xml
 
  2. Restart node1 with -Dbootstrap_conf=true
  java -Dsolr.solr.home=multicore -DnumShards=3 -Dbootstrap_conf=true
  -DzkHost=localhost:2181 -DSTOP.PORT=8079 -DSTOP.KEY=mysecret -jar
 start.jar
 
  3. Restart the other 2 nodes using this command (without
  -Dbootstrap_conf=true since these should pull from zk).:
  java -Dsolr.solr.home=multicore -DnumShards=3 -DzkHost=localhost:2181
  -DSTOP.PORT=8079 -DSTOP.KEY=mysecret -jar start.jar
 
  But, when I do that. node1 displays all of my cores and the other 2 nodes
  displays just one core.
 
  Then, I found this:
 
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E
  Which says bootstrap_conf is used for multicore setup.
 
 
  But if I use bootstrap_conf for every node, then I will have to manually
  update schema.xml (for any config file) everywhere? That does not sound
  like an efficient way of managing configuration right?
 
 
  --
  Thanks,
  -Utkarsh




-- 
Thanks,
-Utkarsh


Re: Updating solrconfig and schema.xml for solrcloud in multicore setup

2013-06-25 Thread Jan Høydahl
Hi,

As I understand it, your initial bootstrap works ok (bootstrap_conf). What you want 
help with is *changing* the config on a live system.
That's when you are encouraged to use zkCli and not mess with trying to let 
Solr bootstrap things - after all, it's not a bootstrap anymore, it's a change.

Did you try updating schema.xml for a specific collection using zkCli? Any 
issues?

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

25. juni 2013 kl. 11:24 skrev Utkarsh Sengar utkarsh2...@gmail.com:

 But as when I launch a solr instance without -Dbootstrap_conf=true, just
 once core is launched and I cannot see the other core.
 
 This behavior is the same as Mark's reply here:
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E
 
 - bootstrap_conf: you pass it true and it reads solr.xml and uploads
 the conf set for each
 SolrCore it finds, gives the conf set the name of the collection and
 associates each collection
 with the same named config set.
 
 So the first just lets you boot strap one collection easily...but what
 if you start with a
 multi-core, multi-collection setup that you want to bootstrap into
 SolrCloud? And they don't
 share a common config set? That's what the second command is for. You
 can setup 30 local SolrCores
 in solr.xml and then just bootstrap all 30 different config sets up
 and have them fully linked
 with each collection just by passing bootstrap_conf=true.
 
 
 
 Note: I am using -Dbootstrap_conf=true and not -Dbootstrap_confdir
 
 
 Thanks,
 -Utkarsh
 
 
 On Tue, Jun 25, 2013 at 2:14 AM, Jan Høydahl jan@cominvent.com wrote:
 
 Hi,
 
 The -Dbootstrap_confdir option is really only meant for a first-time
 bootstrap for your development environment, not for serious use.
 
 Once you got your config into ZK you should modify the config directly in
 ZK.
 There are many tools (also 3rd party) for this. But your best choice is
 probably zkCli shipping with Solr.
 See http://wiki.apache.org/solr/SolrCloud#Command_Line_Util
 This means you will NOT need to start Solr with -Dboostrap_confdir at all.
 
 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com
 
 25. juni 2013 kl. 10:29 skrev Utkarsh Sengar utkarsh2...@gmail.com:
 
 Hello,
 
 I am trying to update schema.xml for a core in a multicore setup and this
 is what I do to update it:
 
 I have 3 nodes in my solr cluster.
 
 1. Pick node1 and manually update schema.xml
 
 2. Restart node1 with -Dbootstrap_conf=true
 java -Dsolr.solr.home=multicore -DnumShards=3 -Dbootstrap_conf=true
 -DzkHost=localhost:2181 -DSTOP.PORT=8079 -DSTOP.KEY=mysecret -jar
 start.jar
 
 3. Restart the other 2 nodes using this command (without
 -Dbootstrap_conf=true since these should pull from zk).:
 java -Dsolr.solr.home=multicore -DnumShards=3 -DzkHost=localhost:2181
 -DSTOP.PORT=8079 -DSTOP.KEY=mysecret -jar start.jar
 
 But, when I do that. node1 displays all of my cores and the other 2 nodes
 displays just one core.
 
 Then, I found this:
 
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E
 Which says bootstrap_conf is used for multicore setup.
 
 
 But if I use bootstrap_conf for every node, then I will have to manually
 update schema.xml (for any config file) everywhere? That does not sound
 like an efficient way of managing configuration right?
 
 
 --
 Thanks,
 -Utkarsh
 
 
 
 
 -- 
 Thanks,
 -Utkarsh



Re: Updating solrconfig and schema.xml for solrcloud in multicore setup

2013-06-25 Thread Utkarsh Sengar
Yes, I have tried zkCli and it works.
But I also need to restart solr after the schema change right?

I tried to reload the core, but I think there is an open bug where a core
reload is successful but a shard goes down for that core. I just tried it
out, i.e tried to reload a core after config change via zkCli and a shard
went down.

Since I am not able to reload a core, I am restarting the whole Solr
process to make the change.

Thanks,
-Utkarsh


On Tue, Jun 25, 2013 at 2:46 AM, Jan Høydahl jan@cominvent.com wrote:

 Hi,

 As I understand, your initial bootstrap works ok (boostrap_conf). What you
 want help with is *changing* the config on a live system.
 That's when you are encouraged to use zkCli and don't mess with trying to
 let Solr bootstrap things - after all it's not a bootstrap anymore, it's a
 change.

 Did you try updating schema.xml for a specific collection using zkCli? Any
 issues?

 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com

 25. juni 2013 kl. 11:24 skrev Utkarsh Sengar utkarsh2...@gmail.com:

  But as when I launch a solr instance without -Dbootstrap_conf=true,
 just
  once core is launched and I cannot see the other core.
 
  This behavior is the same as Mark's reply here:
 
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E
 
  - bootstrap_conf: you pass it true and it reads solr.xml and uploads
  the conf set for each
  SolrCore it finds, gives the conf set the name of the collection and
  associates each collection
  with the same named config set.
 
  So the first just lets you boot strap one collection easily...but what
  if you start with a
  multi-core, multi-collection setup that you want to bootstrap into
  SolrCloud? And they don't
  share a common config set? That's what the second command is for. You
  can setup 30 local SolrCores
  in solr.xml and then just bootstrap all 30 different config sets up
  and have them fully linked
  with each collection just by passing bootstrap_conf=true.
 
 
 
  Note: I am using -Dbootstrap_conf=true and not -Dbootstrap_confdir
 
 
  Thanks,
  -Utkarsh
 
 
  On Tue, Jun 25, 2013 at 2:14 AM, Jan Høydahl jan@cominvent.com
 wrote:
 
  Hi,
 
  The -Dbootstrap_confdir option is really only meant for a first-time
  bootstrap for your development environment, not for serious use.
 
  Once you got your config into ZK you should modify the config directly
 in
  ZK.
  There are many tools (also 3rd party) for this. But your best choice is
  probably zkCli shipping with Solr.
  See http://wiki.apache.org/solr/SolrCloud#Command_Line_Util
  This means you will NOT need to start Solr with -Dboostrap_confdir at
 all.
 
  --
  Jan Høydahl, search solution architect
  Cominvent AS - www.cominvent.com
 
  25. juni 2013 kl. 10:29 skrev Utkarsh Sengar utkarsh2...@gmail.com:
 
  Hello,
 
  I am trying to update schema.xml for a core in a multicore setup and
 this
  is what I do to update it:
 
  I have 3 nodes in my solr cluster.
 
  1. Pick node1 and manually update schema.xml
 
  2. Restart node1 with -Dbootstrap_conf=true
  java -Dsolr.solr.home=multicore -DnumShards=3 -Dbootstrap_conf=true
  -DzkHost=localhost:2181 -DSTOP.PORT=8079 -DSTOP.KEY=mysecret -jar
  start.jar
 
  3. Restart the other 2 nodes using this command (without
  -Dbootstrap_conf=true since these should pull from zk).:
  java -Dsolr.solr.home=multicore -DnumShards=3 -DzkHost=localhost:2181
  -DSTOP.PORT=8079 -DSTOP.KEY=mysecret -jar start.jar
 
  But, when I do that. node1 displays all of my cores and the other 2
 nodes
  displays just one core.
 
  Then, I found this:
 
 
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E
  Which says bootstrap_conf is used for multicore setup.
 
 
  But if I use bootstrap_conf for every node, then I will have to
 manually
  update schema.xml (for any config file) everywhere? That does not sound
  like an efficient way of managing configuration right?
 
 
  --
  Thanks,
  -Utkarsh
 
 
 
 
  --
  Thanks,
  -Utkarsh




-- 
Thanks,
-Utkarsh


Re: Updating solrconfig and schema.xml for solrcloud in multicore setup

2013-06-25 Thread Utkarsh Sengar
I believe I am hitting this bug:
https://issues.apache.org/jira/browse/SOLR-4805
I am using solr 4.3.1


-Utkarsh


On Tue, Jun 25, 2013 at 2:56 AM, Utkarsh Sengar utkarsh2...@gmail.comwrote:

 Yes, I have tried zkCli and it works.
 But I also need to restart solr after the schema change right?

 I tried to reload the core, but I think there is an open bug where a core
 reload is successful but a shard goes down for that core. I just tried it
 out, i.e tried to reload a core after config change via zkCli and a shard
 went down.

 Since I am not able to reload a core, I am restarting the whole solr
 process for make the change.

 Thanks,
 -Utkarsh


 On Tue, Jun 25, 2013 at 2:46 AM, Jan Høydahl jan@cominvent.comwrote:

 Hi,

 As I understand, your initial bootstrap works ok (boostrap_conf). What
 you want help with is *changing* the config on a live system.
 That's when you are encouraged to use zkCli and don't mess with trying to
 let Solr bootstrap things - after all it's not a bootstrap anymore, it's a
 change.

 Did you try updating schema.xml for a specific collection using zkCli?
 Any issues?

 --
 Jan Høydahl, search solution architect
 Cominvent AS - www.cominvent.com

 25. juni 2013 kl. 11:24 skrev Utkarsh Sengar utkarsh2...@gmail.com:

  But as when I launch a solr instance without -Dbootstrap_conf=true,
 just
  once core is launched and I cannot see the other core.
 
  This behavior is the same as Mark's reply here:
 
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E
 
  - bootstrap_conf: you pass it true and it reads solr.xml and uploads
  the conf set for each
  SolrCore it finds, gives the conf set the name of the collection and
  associates each collection
  with the same named config set.
 
  So the first just lets you boot strap one collection easily...but what
  if you start with a
  multi-core, multi-collection setup that you want to bootstrap into
  SolrCloud? And they don't
  share a common config set? That's what the second command is for. You
  can setup 30 local SolrCores
  in solr.xml and then just bootstrap all 30 different config sets up
  and have them fully linked
  with each collection just by passing bootstrap_conf=true.
 
 
 
  Note: I am using -Dbootstrap_conf=true and not -Dbootstrap_confdir
 
 
  Thanks,
  -Utkarsh
 
 
  On Tue, Jun 25, 2013 at 2:14 AM, Jan Høydahl jan@cominvent.com
 wrote:
 
  Hi,
 
  The -Dbootstrap_confdir option is really only meant for a first-time
  bootstrap for your development environment, not for serious use.
 
  Once you got your config into ZK you should modify the config directly
 in
  ZK.
  There are many tools (also 3rd party) for this. But your best choice is
  probably zkCli shipping with Solr.
  See http://wiki.apache.org/solr/SolrCloud#Command_Line_Util
  This means you will NOT need to start Solr with -Dboostrap_confdir at
 all.
 
  --
  Jan Høydahl, search solution architect
  Cominvent AS - www.cominvent.com
 
  25. juni 2013 kl. 10:29 skrev Utkarsh Sengar utkarsh2...@gmail.com:
 
  Hello,
 
  I am trying to update schema.xml for a core in a multicore setup and
 this
  is what I do to update it:
 
  I have 3 nodes in my solr cluster.
 
  1. Pick node1 and manually update schema.xml
 
  2. Restart node1 with -Dbootstrap_conf=true
  java -Dsolr.solr.home=multicore -DnumShards=3 -Dbootstrap_conf=true
  -DzkHost=localhost:2181 -DSTOP.PORT=8079 -DSTOP.KEY=mysecret -jar
  start.jar
 
  3. Restart the other 2 nodes using this command (without
  -Dbootstrap_conf=true since these should pull from zk).:
  java -Dsolr.solr.home=multicore -DnumShards=3 -DzkHost=localhost:2181
  -DSTOP.PORT=8079 -DSTOP.KEY=mysecret -jar start.jar
 
  But, when I do that. node1 displays all of my cores and the other 2
 nodes
  displays just one core.
 
  Then, I found this:
 
 
 http://mail-archives.apache.org/mod_mbox/lucene-dev/201205.mbox/%3cbb7ad9bf-389b-4b94-8c1b-bbfc4028a...@gmail.com%3E
  Which says bootstrap_conf is used for multicore setup.
 
 
  But if I use bootstrap_conf for every node, then I will have to
 manually
  update schema.xml (for any config file) everywhere? That does not
 sound
  like an efficient way of managing configuration right?
 
 
  --
  Thanks,
  -Utkarsh
 
 
 
 
  --
  Thanks,
  -Utkarsh




 --
 Thanks,
 -Utkarsh




-- 
Thanks,
-Utkarsh


Re: multicore vs multi collection

2013-03-28 Thread hupadhyay
Does that mean I can create multiple collections with different
configurations?
Can you please outline the basic steps to create multiple collections, because I
am not able to create them on Solr 4.0.





Re: multicore vs multi collection

2013-03-28 Thread Jack Krupansky

Unable? In what way?

Did you look at the Solr example?

Did you look at solr.xml?

Did you see the core element? (Needs to be one per core/collection.)

Did you see the multicore directory in the example?

Did you look at the solr.xml file in multicore?

Did you see how there are separate directories for each collection/core in 
multicore?


Did you see how there is a core element in solr.xml in multicore, one for 
each collection directory (instance)?


Did you try setting up your own test directory parallel to multicore in 
example?


Did you read the README.txt files in the Solr example directories?

Did you see the command to start Solr with a specific Solr home 
directory? -


   java -Dsolr.solr.home=multicore -jar start.jar

Did you try that for your own test solr home directory created above?

So... what exactly was the problem you were encountering? Be specific.

My guess is that you simply need to re-read the README.txt files more 
carefully in the Solr example directories.


If you have questions about what the README.txt files say, please ask them, 
but please be specific.


-- Jack Krupansky

-Original Message- 
From: hupadhyay

Sent: Thursday, March 28, 2013 5:35 AM
To: solr-user@lucene.apache.org
Subject: Re: multicore vs multi collection

Does that means i can create multiple collections with different
configurations ?
can you please outline basic steps to create multiple collections,cause i am
not able to
create them on solr 4.0






Accessing multicore setup using solrj

2013-03-26 Thread J Mohamed Zahoor
Hi, I am having a multicore setup with 2 cores, core0 and core1.
How do I insert a doc into core1?

I am using as below.

  searchServer = new CloudSolrServer(zooQourumUrl);
  searchServer.setDefaultCollection("core1");
  searchServer.connect();

and I get a "No live solr servers" exception,
but I can see both cores up and running in the UI.


Am I missing something?

./zahoor





multicore vs multi collection

2013-03-26 Thread J Mohamed Zahoor
Hi

I am kind of confused between multicore and multi-collection.
The docs don't seem to clarify this... can someone enlighten me on the
difference between a core and a collection?
Are they the same?

./zahoor

Re: multicore vs multi collection

2013-03-26 Thread Furkan KAMACI
Did you check that document:
http://wiki.apache.org/solr/SolrCloud#A_little_about_SolrCores_and_Collections
It says:
On a single instance, Solr has something called a SolrCore that is essentially a
single index. If you want multiple indexes, you create multiple SolrCores. With
SolrCloud, a single index can span multiple Solr instances. This means that
a single index can be made up of multiple SolrCores on different machines.
We call all of these SolrCores that make up one logical index a collection.
A collection is essentially a single index that spans many SolrCores,
both for index scaling as well as redundancy. If you wanted to move your 2
SolrCore Solr setup to SolrCloud, you would have 2 collections, each made up
of multiple individual SolrCores.


2013/3/26 J Mohamed Zahoor zah...@indix.com

 Hi

 I am kind of confuzed between multi core and multi collection.
 Docs dont seem to clarify this.. can someone enlighten me what is ther
 difference between a core and a collection?
 Are they same?

 ./zahoor


Re: multicore vs multi collection

2013-03-26 Thread J Mohamed Zahoor
Thanks.

This makes it clearer than the wiki.

How do you create multiple collections which can have different schemas?

./zahoor
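
One way to do this (a sketch, assuming SolrCloud with the Collections API and two config sets already uploaded to ZooKeeper under the hypothetical names confA and confB):

    http://localhost:8983/solr/admin/collections?action=CREATE&name=collA&numShards=1&collection.configName=confA
    http://localhost:8983/solr/admin/collections?action=CREATE&name=collB&numShards=1&collection.configName=confB

Each collection is linked to its own config set, so the two schemas can differ freely.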

On 26-Mar-2013, at 3:52 PM, Furkan KAMACI furkankam...@gmail.com wrote:

 Did you check that document:
 http://wiki.apache.org/solr/SolrCloud#A_little_about_SolrCores_and_Collections
 It says:
 On a single instance, Solr has something called a SolrCore that is essentially a
 single index. If you want multiple indexes, you create multiple SolrCores. With
 SolrCloud, a single index can span multiple Solr instances. This means that
 a single index can be made up of multiple SolrCores on different machines.
 We call all of these SolrCores that make up one logical index a collection.
 A collection is essentially a single index that spans many SolrCores,
 both for index scaling as well as redundancy. If you wanted to move your 2
 SolrCore Solr setup to SolrCloud, you would have 2 collections, each made up
 of multiple individual SolrCores.
 
 
 2013/3/26 J Mohamed Zahoor zah...@indix.com
 
 Hi
 
 I am kind of confuzed between multi core and multi collection.
 Docs dont seem to clarify this.. can someone enlighten me what is ther
 difference between a core and a collection?
 Are they same?
 
 ./zahoor



Re: Accessing multicore setup using solrj

2013-03-26 Thread Mark Miller
Are you using SolrCloud mode?

- Mark

On Mar 26, 2013, at 4:49 AM, J Mohamed Zahoor zah...@indix.com wrote:

 Hi I am having a multi core setup with 2 core core0 and core1.
 How do i insert doc in core 1?
 
 I am using as below.
 
 searchServer = new CloudSolrServer(zooQourumUrl);
 searchServer.setDefaultCollection(core1);
 searchServer.connect();
 
 and i get No live solr servers exception.
 But i could see both the cores in UI up and running.
 
 
 am i missing something.?
 
 ./zahoor
 
 
 

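If the setup is plain multicore rather than SolrCloud, CloudSolrServer (which reads cluster state from ZooKeeper) is the wrong client; the per-core HTTP client would be used instead. A sketch with SolrJ 4.x (classes from org.apache.solr.client.solrj.impl and org.apache.solr.common), assuming the port and core names from the question:

    // Point HttpSolrServer at the specific core's URL
    HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr/core1");
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");   // hypothetical document
    server.add(doc);               // indexes into core1
    server.commit();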


Re: multicore vs multi collection

2013-03-26 Thread Furkan KAMACI
Also from there (http://wiki.apache.org/solr/SolrCloud):

*Q:* What is the difference between a Collection and a SolrCore?

*A:* In classic single node Solr, a SolrCore is basically equivalent
to a Collection. It presents one logical index. In SolrCloud, the SolrCores on
multiple nodes form a Collection. This is still just one logical index, but
multiple SolrCores host different 'shards' of the full collection. So a
SolrCore encapsulates a single physical index on an instance. A Collection is a
combination of all of the SolrCores that together provide a logical
index that is distributed across many nodes.

2013/3/26 J Mohamed Zahoor zah...@indix.com

 Thanks.

 This make it clear than the wiki.

 How do you create multiple collection which can have different schema?

 ./zahoor

 On 26-Mar-2013, at 3:52 PM, Furkan KAMACI furkankam...@gmail.com wrote:

  Did you check that document:
  http://wiki.apache.org/solr/SolrCloud#A_little_about_SolrCores_and_Collections
  It says:
  On a single instance, Solr has something called a SolrCore that is essentially a
  single index. If you want multiple indexes, you create multiple SolrCores. With
  SolrCloud, a single index can span multiple Solr instances. This means that
  a single index can be made up of multiple SolrCores on different machines.
  We call all of these SolrCores that make up one logical index a collection.
  A collection is essentially a single index that spans many SolrCores,
  both for index scaling as well as redundancy. If you wanted to move your 2
  SolrCore Solr setup to SolrCloud, you would have 2 collections, each made up
  of multiple individual SolrCores.
 
 
  2013/3/26 J Mohamed Zahoor zah...@indix.com
 
  Hi
 
  I am kind of confuzed between multi core and multi collection.
  Docs dont seem to clarify this.. can someone enlighten me what is ther
  difference between a core and a collection?
  Are they same?
 
  ./zahoor




Multicore Master - Slave - solr 3.6.1

2013-02-27 Thread Sujatha Arun
We have a multicore setup with more than 200 cores. Some of the cores have
different schemas based on the search type/language.

While trying to migrate to a Master/Slave setup:

I see that we can specify the Master/Slave properties in the
solrcore.properties file. However, does this have to be done at a core level?
What are the options for defining this globally across the cores, so that
when promoting a slave to master (or vice versa) I do not have to do this for
each core?

I tried the following:

1) Added the properties as name-value pairs in solr.xml - *but these
values are lost on server restart*

2) Tried defining the properties in a single file and tried to reference
the same file for every core at core creation with the command below. *But
this is not picked up and not reflected in solr.xml*

Command:
http://localhost:8983/solr/admin/cores?action=CREATE&name=coreX&instanceDir=path_to_instance_directory&config=config_file_name.xml&schema=schema_file_name.xml&dataDir=data&properties=path
to common properties file

3) Tried adding the solrcore.properties at the level of the solr.xml file, but
this also does not work.

The last 2 methods, if they worked, would I guess involve a server restart for
any changes, as opposed to a core reload.


4) I do not want to share the same common instanceDir, as we
have different types of schemas and this will add confusion for the OPS team
when creating the cores.


So, any pointers on how we can define a global solrcore.properties file?
Thanks

Regards
Sujatha
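
For reference, the usual per-core pattern (a sketch; the property names here are illustrative and must match whatever your solrconfig.xml replication handler actually references) is a solrcore.properties along these lines:

    enable.master=true
    enable.slave=false
    master.url=http://master-host:8983/solr/corename/replication

with the /replication handler in solrconfig.xml switching its master/slave sections on via ${enable.master:false} and ${enable.slave:false}. Promoting a slave then means flipping these properties and reloading the core, which is exactly the per-core toil the question is about.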


Re: Multicore Master - Slave - solr 3.6.1

2013-02-27 Thread Michael Della Bitta
On Wed, Feb 27, 2013 at 7:01 AM, Sujatha Arun suja.a...@gmail.com wrote:
 1) Added the properties as name value pairs in the solr.xml  - *But these
 values are lost on Server Restart*

This is how you do it in my experience. Just make sure
persistent=true is set, and don't edit the file while the server is
running...


Michael Della Bitta


Appinions
18 East 41st Street, 2nd Floor
New York, NY 10017-6271

www.appinions.com

Where Influence Isn’t a Game


Re: Multicore search with ManifoldCF security not working

2013-02-05 Thread Ahmet Arslan

Hello,

Aha, so you are using Nabble. Please follow the instructions described here: 
http://manifoldcf.apache.org/en_US/mail.html

And subscribe to the 'ManifoldCF User Mailing List' and send your question there.

Ahmet
--- On Mon, 1/28/13, eShard zim...@yahoo.com wrote:

 From: eShard zim...@yahoo.com
 Subject: Re: Multicore search with ManifoldCF security not working
 To: solr-user@lucene.apache.org
 Date: Monday, January 28, 2013, 8:26 PM
 I'm sorry, I don't know what you
 mean.
 I clicked on the hidden email link, filled out the form and
 when I hit
 submit; 
 I got this error:
 Domain starts with dot
 Please fix the error and try again.
 
 Who exactly am I sending this to and how do I get the form
 to work?
 
 
 
 


Multicore search with ManifoldCF security not working

2013-01-28 Thread eShard
Good morning,
I used this post to search 2 different cores and return one
result set:
http://stackoverflow.com/questions/2139030/search-multiple-solr-cores-and-return-one-result-set
The good news is that it worked!
The bad news is that one of the cores is Opentext and the ManifoldCF
security check isn't firing!
So users could see documents that they aren't supposed to.
The Opentext security works if I call the core handler individually; it
fails for the merged result.
I need to find a way to get the AuthenticatedUserName parameter to the
Opentext core.
Here's my /query handler for the merged result:
  <requestHandler name="/query" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="q.alt">*:*</str>
      <str name="fl">id, attr_general_name, attr_general_owner,
        attr_general_creator, attr_general_modifier, attr_general_description,
        attr_general_creationdate, attr_general_modifydate, solr.title,
        content, category, link, pubdateiso
      </str>
      <str name="shards">localhost:8080/solr/opentext/,localhost:8080/solr/Profiles/</str>
    </lst>
    <arr name="last-components">
      <str>manifoldCFSecurity</str>
    </arr>
  </requestHandler>

As you can see, I tried calling manifoldCFSecurity first and it didn't work.
I was thinking perhaps I could call the shards directly in the URL and put the
AuthenticatedUserName on the Opentext shard, but I'm getting pulled in
different directions currently.

Can anyone point me in the right direction?
Thanks,








Re: Multicore search with ManifoldCF security not working

2013-01-28 Thread Ahmet Arslan
Hello,

Can you post this question to u...@manifoldcf.apache.org too?



--- On Mon, 1/28/13, eShard zim...@yahoo.com wrote:

 From: eShard zim...@yahoo.com
 Subject: Multicore search with ManifoldCF security not working
 To: solr-user@lucene.apache.org
 Date: Monday, January 28, 2013, 6:16 PM
 Good morning,
 I used this post here to join to search 2 different cores
 and return one
 data set.
 http://stackoverflow.com/questions/2139030/search-multiple-solr-cores-and-return-one-result-set
 The good news is that it worked!
 The bad news is that one of the cores is Opentext and the
 ManifoldCF
 security check isn't firing!
 So users could see documents that they aren't supposed to.
 The opentext security works if I call the core handler
 individually. it
 fails for the merged result.
 I need to find a way to get the AuthenticatedUserName
 parameter to the
 opentext core.
 Here's my /query handler for the merged result
  <requestHandler name="/query" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="q.alt">*:*</str>
      <str name="fl">id, attr_general_name, attr_general_owner,
        attr_general_creator, attr_general_modifier, attr_general_description,
        attr_general_creationdate, attr_general_modifydate, solr.title,
        content, category, link, pubdateiso
      </str>
      <str name="shards">localhost:8080/solr/opentext/,localhost:8080/solr/Profiles/</str>
    </lst>
    <arr name="last-components">
      <str>manifoldCFSecurity</str>
    </arr>
  </requestHandler>
 
 As you can see, I tried calling manifoldCFSecurity first and
 it didn't work. 
 I was thinking perhaps I can call the shards directly in the
 URL and put the
 AuthenticatedUserName on the opentext shard but I'm getting
 pulled in
 different directions currently.
 
 Can anyone point me in the right direction?
 Thanks,
 
 
 
 
 
 



Re: Multicore search with ManifoldCF security not working

2013-01-28 Thread eShard
I'm sorry, I don't know what you mean.
I clicked on the hidden email link, filled out the form and when I hit
submit; 
I got this error:
Domain starts with dot
Please fix the error and try again.

Who exactly am I sending this to and how do I get the form to work?





Re: Velocity in Multicore

2013-01-18 Thread Erik Hatcher
Paul -

In case you haven't sussed this one out already, the likely issue is 
that each core is separately configured and only the single-core example 
collection1 core comes with the VelocityResponseWriter wired in fully.  You 
need these lines (paths likely need adjusting!) in your solrconfig.xml:

  <lib dir="../../../contrib/velocity/lib" regex=".*\.jar" />
  <lib dir="../../../dist/" regex="apache-solr-velocity-\d.*\.jar" />

and 

  <queryResponseWriter name="velocity" class="solr.VelocityResponseWriter" 
    startup="lazy"/>


I used the example-DIH, launching via (java 
-Dsolr.solr.home=./example-DIH/solr/ -jar start.jar), after adding the above 
to the db/conf/solrconfig.xml file, and this works:

  http://localhost:8983/solr/db/select?q=*:*&wt=velocity&v.template=hello&v.template.hello=Hello%20World!

Before the VrW was registered, Solr fell back to the XML response writer 
as you experienced.
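
In other words, each core is wired up separately; a sketch of the per-core
layout this implies (paths illustrative):

  multicore/
    core0/
      conf/
        solrconfig.xml       (with the <lib/> and <queryResponseWriter/> lines above)
        velocity/
          hello.vm
    core1/
      conf/
        ...                  (same additions, if this core should render templates)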

Erik




On Jan 14, 2013, at 14:05 , Ramirez, Paul M (388J) wrote:

 Hi,
 
 I've been unable to get the velocity response writer to work in a multicore 
 environment. Working from the examples that are distributed with Solr, I 
 simply started from the multicore example and added a hello.vm into the 
 core0/conf/velocity directory. I then updated the solrconfig.xml to add a new 
 request handler as shown below. I've tried to use v.base_dir, with no 
 success. Essentially what I always end up with is the default solr response. 
 Has anyone been able to get the velocity response writer to work in a 
 multicore environment? If so, could you point me to the documentation on how 
 to do so?
 
 hello.vm
 
 Hello World!
 
 solrconfig.xml
 ===
 …
  <requestHandler name="/hello" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <!-- VelocityResponseWriter settings -->
      <str name="wt">velocity</str>
      <str name="v.template">hello</str>
      <!-- I've tried all the following in addition to not specifying any. -->
      <!-- <str name="v.base_dir">core0/conf/velocity</str> -->
      <!-- <str name="v.base_dir">conf/velocity</str> -->
      <!-- <str name="v.base_dir">multicore/core0/conf/velocity</str> -->
    </lst>
  </requestHandler>
 …
 
 
 
 Regards,
 Paul Ramirez



Solr multicore aborts with socket timeout exceptions

2013-01-17 Thread eShard
I'm currently running Solr 4.0 final on tomcat v7.0.34 with ManifoldCF v1.2
dev running on Jetty.

I have solr multicore set up with 10 cores. (Is this too much?)
So I also have at least 10 connectors set up in ManifoldCF (1 per core, 10
JVMs per connection).
From the look of it, Solr couldn't handle all the data that ManifoldCF was
sending it, and the connections would abort with socket timeout exceptions.
I tried increasing maxThreads to 200 on tomcat and it didn't work.
In the ManifoldCF throttling section, I decreased the number of JVMs per
connection from 10 down to 1, and not only did the crawl speed up
significantly, the socket exceptions went away (for the most part).
Here's the ticket for this issue:
https://issues.apache.org/jira/browse/CONNECTORS-608

My question is this: how do I increase the number of connections on the solr
side so I can run multiple ManifoldCF jobs concurrently without aborts or
timeouts?

The ManifoldCF team did mention that there was a committer who had socket
timeout exceptions in a newer version of Solr and he fixed it by increasing
the timeout window. I'm looking for that patch if available.
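
For reference, the server-side knobs live on the HTTP connector in Tomcat's
server.xml; a sketch with illustrative values (not a tested recommendation):

  <Connector port="8080" protocol="HTTP/1.1"
             maxThreads="400"
             acceptCount="200"
             connectionTimeout="60000" />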

Thanks,



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Solr-multicore-aborts-with-socket-timeout-exceptions-tp4034250.html
Sent from the Solr - User mailing list archive at Nabble.com.


Multicore configuration

2013-01-15 Thread Bruno Dusausoy

Hi,

I'd like to use two separate indexes (Solr 3.6.1).
I've read several wiki pages and looked at the multicore example bundled 
with the distribution, but it seems I'm missing something.



I have this hierarchy:

solr-home/
|-- conf/
|   |-- solr.xml
|   |-- solrconfig.xml (if I don't put it, solr complains)
|   |-- schema.xml (idem)
|   `-- ...
`-- cores/
    |-- dossier/
    |   |-- conf/
    |   |   |-- dataconfig.xml
    |   |   |-- schema.xml
    |   |   `-- solrconfig.xml
    |   `-- data/
    `-- procedure/
        |-- conf/
        |   |-- dataconfig.xml
        |   |-- schema.xml
        |   `-- solrconfig.xml
        `-- data/

Here's the content of my solr.xml file :
http://paste.debian.net/224818/

And I launch my servlet container with 
-Dsolr.solr.home=my-directory/solr-home.


I've put nearly nothing in my solr-home/conf/schema.xml so Solr 
complains, but that's not the point.


When I go to the admin page of the dossier core,
http://localhost:8080/solr/dossier/admin, the container says it doesn't 
exist.
But when I go to http://localhost:8080/solr/admin it finds it, which 
makes me guess that Solr is still in single core mode.


What am I missing ?

Regards.
--
Bruno Dusausoy
Software Engineer
YP5 Software
--
Pensez environnement : limitez l'impression de ce mail.
Please don't print this e-mail unless you really need to.


Re: Multicore configuration

2013-01-15 Thread Dariusz Borowski
Hi Bruno,

Maybe this helps. I wrote something about it:
http://www.coderthing.com/solr-with-multicore-and-database-hook-part-1/

Dariusz



On Tue, Jan 15, 2013 at 9:52 AM, Bruno Dusausoy bdusau...@yp5.be wrote:

 Hi,

 I'd like to use two separate indexes (Solr 3.6.1).
 I've read several wiki pages and looked at the multicore example bundled
 with the distribution, but it seems I'm missing something.


 I have this hierarchy:

 solr-home/
 |-- conf/
 |   |-- solr.xml
 |   |-- solrconfig.xml (if I don't put it, solr complains)
 |   |-- schema.xml (idem)
 |   `-- ...
 `-- cores/
     |-- dossier/
     |   |-- conf/
     |   |   |-- dataconfig.xml
     |   |   |-- schema.xml
     |   |   `-- solrconfig.xml
     |   `-- data/
     `-- procedure/
         |-- conf/
         |   |-- dataconfig.xml
         |   |-- schema.xml
         |   `-- solrconfig.xml
         `-- data/

 Here's the content of my solr.xml file:
 http://paste.debian.net/224818/

 And I launch my servlet container with
 -Dsolr.solr.home=my-directory/solr-home.

 I've put nearly nothing in my solr-home/conf/schema.xml so Solr complains,
 but that's not the point.

 When I go to the admin page of the dossier core,
 http://localhost:8080/solr/dossier/admin, the container says it doesn't
 exist.
 But when I go to http://localhost:8080/solr/admin it finds it, which
 makes me guess that Solr is still in single core mode.

 What am I missing ?

 Regards.
 --
 Bruno Dusausoy
 Software Engineer
 YP5 Software
 --
 Pensez environnement : limitez l'impression de ce mail.
 Please don't print this e-mail unless you really need to.



Re: Multicore configuration

2013-01-15 Thread Upayavira
You should put your solr.xml into your 'cores' directory, and set
-Dsolr.solr.home=cores

That should get you going. 'cores' *is* your Solr Home. Otherwise, your
instanceDir entries in your current solr.xml will need correct paths to
../cores/procedure/ etc.

Upayavira

On Tue, Jan 15, 2013, at 08:52 AM, Bruno Dusausoy wrote:
 Hi,
 
 I'd like to use two separate indexes (Solr 3.6.1).
 I've read several wiki pages and looked at the multicore example bundled 
 with the distribution, but it seems I'm missing something.
 
 
 I have this hierarchy:

 solr-home/
 |-- conf/
 |   |-- solr.xml
 |   |-- solrconfig.xml (if I don't put it, solr complains)
 |   |-- schema.xml (idem)
 |   `-- ...
 `-- cores/
     |-- dossier/
     |   |-- conf/
     |   |   |-- dataconfig.xml
     |   |   |-- schema.xml
     |   |   `-- solrconfig.xml
     |   `-- data/
     `-- procedure/
         |-- conf/
         |   |-- dataconfig.xml
         |   |-- schema.xml
         |   `-- solrconfig.xml
         `-- data/
 
 Here's the content of my solr.xml file :
 http://paste.debian.net/224818/
 
 And I launch my servlet container with 
 -Dsolr.solr.home=my-directory/solr-home.
 
 I've put nearly nothing in my solr-home/conf/schema.xml so Solr 
 complains, but that's not the point.
 
 When I go to the admin page of the dossier core,
 http://localhost:8080/solr/dossier/admin, the container says it doesn't 
 exist.
 But when I go to http://localhost:8080/solr/admin it finds it, which 
 makes me guess that Solr is still in single core mode.
 
 What am I missing ?
 
 Regards.
 -- 
 Bruno Dusausoy
 Software Engineer
 YP5 Software
 --
 Pensez environnement : limitez l'impression de ce mail.
 Please don't print this e-mail unless you really need to.


Re: Multicore configuration

2013-01-15 Thread Bruno Dusausoy

Dariusz Borowski wrote:

Hi Bruno,

Maybe this helps. I wrote something about it:
http://www.coderthing.com/solr-with-multicore-and-database-hook-part-1/


Hi Dariusz,

Thanks for the link.
I've found my - terrible - mistake: solr.xml was not in the solr.home dir 
but in the solr.home/conf dir, so it wasn't picked up :-/

It works perfectly now.

Sorry for the noise.

Regards.
--
Bruno Dusausoy
Software Engineer
YP5 Software
--
Pensez environnement : limitez l'impression de ce mail.
Please don't print this e-mail unless you really need to.
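
For anyone hitting the same thing: a minimal 3.x-style solr.xml, placed
directly in the solr home (not in conf/), might look like this (core names
taken from the layout above; paths illustrative):

  <solr persistent="true">
    <cores adminPath="/admin/cores">
      <core name="dossier" instanceDir="cores/dossier" />
      <core name="procedure" instanceDir="cores/procedure" />
    </cores>
  </solr>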


Velocity in Multicore

2013-01-14 Thread Ramirez, Paul M (388J)
Hi,

I've been unable to get the velocity response writer to work in a multicore 
environment. Working from the examples that are distributed with Solr, I simply 
started from the multicore example and added a hello.vm into the 
core0/conf/velocity directory. I then updated the solrconfig.xml to add a new 
request handler as shown below. I've tried to use v.base_dir, with no success. 
Essentially what I always end up with is the default solr response. Has anyone 
been able to get the velocity response writer to work in a multicore 
environment? If so, could you point me to the documentation on how to do so?

hello.vm

Hello World!

solrconfig.xml
===
…
<requestHandler name="/hello" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <!-- VelocityResponseWriter settings -->
    <str name="wt">velocity</str>
    <str name="v.template">hello</str>
    <!-- I've tried all the following in addition to not specifying any. -->
    <!-- <str name="v.base_dir">core0/conf/velocity</str> -->
    <!-- <str name="v.base_dir">conf/velocity</str> -->
    <!-- <str name="v.base_dir">multicore/core0/conf/velocity</str> -->
  </lst>
</requestHandler>
…



Regards,
Paul Ramirez


Re: Unable to run two multicore Solr instances under Tomcat

2012-11-15 Thread Erick Erickson
Thanks for wrapping this up, it's always nice to get closure, especially
when it comes to googling <G>...


On Wed, Nov 14, 2012 at 5:34 AM, Adam Neal an...@mass.co.uk wrote:

 Just to wrap up this one. Previously all the lib jars were located in the
 war file on our setup, this was mainly to ease deployment as it's just a
 single file. Moving the lib directory external to the war seems to have
 fixed the issue.

 Thanks for the pointer Erick.


 -Original Message-
 From: Erick Erickson [mailto:erickerick...@gmail.com]
 Sent: Tue 13/11/2012 12:05
 To: solr-user@lucene.apache.org
 Subject: Re: Unable to run two multicore Solr instances under Tomcat


 At a guess you have leftover jars from your earlier installation in your
 classpath that are being picked up. I've always found that figuring out how
 _that_ happened is...er... interesting...

 Best
 Erick


 On Mon, Nov 12, 2012 at 7:44 AM, Adam Neal an...@mass.co.uk wrote:

  Hi,
 
  I have been running two multicore Solr instances under Tomcat using a
  nightly build of 4.0 from September 2011. This has been running fine but
  when I try to update these instances to the release version of 4.0 I'm
  hitting problems when the second instance starts up. If I have one
 instance
  on the release version and one on the nightly build it also works fine.
 
  It's running on a Solaris 10 box using Tomcat 6.0.26 and Java 1.6.0_20
 
  I can run up either instance on it's own and it works fine, it's just
 when
  starting both together so I'm pretty sure my configs aren't the issue.
 
  Snippet from the log is below, please note that I have had to type this
  out so there may be some typos, hopefully not!
 
  Any ideas?
 
  Adam
 
 
  12-Nov-2012 09:58:50 org.apache.solr.core.SolrResourceLoader
 locateSolrHome
  INFO: Using JNDI solr.home: /conf_solr/instance2
  12-Nov-2012 09:58:50 org.apache.solr.core.SolrResourceLoader init
  INFO: new SolrResourceLoader for deduced Solr Home:
 '/conf_solr/instance2/'
  12-Nov-2012 09:58:52 org.apache.solr.servlet.SolrDispatchFilter init
  INFO: SolrDispatchFilter.init()
  12-Nov-2012 09:58:52 org.apache.solr.core.SolrResourceLoader
 locateSolrHome
  INFO: Using JNDI solr.home /conf_solr/instance2
  12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer$Initializer
  initialize
  INFO: looking for solr.xml: /conf_solr/instance2/solr.xml
  12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer init
  INFO: New CoreContainer 15471347
  12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer load
  INFO: Loading CoreContainer using Solr Home: '/conf_solr/instance2/'
  12-Nov-2012 09:58:52 org.apache.solr.core.SolrResourceLoader init
  INFO: new SOlrResourceLoader for directory: '/conf_solr/instance2/'
  12-Nov-2012 09:58:52 org.apache.solr.servlet.SolrDispatchFilter init
  SEVERE: Could not start Solr. Check solr/home property and the logs
  12-Nov-2012 09:58:52 org.apache.solr.common.SolrException log
  SEVERE: null:java.lang.ClassCastException:
  org.apache.xerces.parsers.XIncludeAwareParserConfiguration cannot be cast
  to org.apache.xerces.xni.parser.XMLParserConfiguration
  at org.apache.xerces.parsers.DOMParser.init(Unknown Source)
  at org.apache.xerces.parsers.DOMParser.init(Unknown Source)
  at org.apache.xerces.jaxp.DocumentBuilderImpl.init(Unknown
  Source)
  at
 
 org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown
  Source)
  at
 
 com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.createDocument(SAX2DOM.java:324)
  at
 
 com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.init(SAX2DOM.java:84)
  at
 
 com.sun.org.apache.xalan.internal.xsltc.runtime.output.TranslateOutputHandlerFactory.getSerializationHanlder(TransletOutputHandlerFactory.java:187)
  at
 
 com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.getOutputHandler(TransformerImpl.java:392)
  at
 
 com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:298)
  at
  org.apache.solr.core.CoreContainer.copyDoc(CoreContainer.java:551)
  at
 org.apache.solr.core.CoreContainer.load(CoreContainer.java:381)
  at
 org.apache.solr.core.CoreContainer.load(CoreContainer.java:356)
  at
 
 org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:308)
  at
 
 org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:107)
  at
 
 org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:295)
  at
 
 org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:422)
  at
 
 org.apache.catalina.core.ApplicationFilterConfig.init(ApplicationFilterConfig.java:115)
  at
 
 org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:3838)
  at
  org.apache.catalina.core.StandardContext.start(StandardContext.java:4488

RE: Unable to run two multicore Solr instances under Tomcat

2012-11-14 Thread Adam Neal
Just to wrap up this one. Previously all the lib jars were located in the war 
file on our setup, this was mainly to ease deployment as it's just a single 
file. Moving the lib directory external to the war seems to have fixed the 
issue.

Thanks for the pointer Erick.
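
One way to keep such jars out of the war is the sharedLib attribute in
solr.xml, so a single classloader shared by all cores sees them; an
illustrative sketch (lib/ is resolved relative to the solr home, and the
core name is hypothetical):

  <solr persistent="true" sharedLib="lib">
    <cores adminPath="/admin/cores">
      <core name="core0" instanceDir="core0" />
    </cores>
  </solr>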


-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tue 13/11/2012 12:05
To: solr-user@lucene.apache.org
Subject: Re: Unable to run two multicore Solr instances under Tomcat
 

At a guess you have leftover jars from your earlier installation in your
classpath that are being picked up. I've always found that figuring out how
_that_ happened is...er... interesting...

Best
Erick


On Mon, Nov 12, 2012 at 7:44 AM, Adam Neal an...@mass.co.uk wrote:

 Hi,

 I have been running two multicore Solr instances under Tomcat using a
 nightly build of 4.0 from September 2011. This has been running fine but
 when I try to update these instances to the release version of 4.0 I'm
 hitting problems when the second instance starts up. If I have one instance
 on the release version and one on the nightly build it also works fine.

 It's running on a Solaris 10 box using Tomcat 6.0.26 and Java 1.6.0_20

 I can run up either instance on it's own and it works fine, it's just when
 starting both together so I'm pretty sure my configs aren't the issue.

 Snippet from the log is below, please note that I have had to type this
 out so there may be some typos, hopefully not!

 Any ideas?

 Adam


 12-Nov-2012 09:58:50 org.apache.solr.core.SolrResourceLoader locateSolrHome
 INFO: Using JNDI solr.home: /conf_solr/instance2
 12-Nov-2012 09:58:50 org.apache.solr.core.SolrResourceLoader init
 INFO: new SolrResourceLoader for deduced Solr Home: '/conf_solr/instance2/'
 12-Nov-2012 09:58:52 org.apache.solr.servlet.SolrDispatchFilter init
 INFO: SolrDispatchFilter.init()
 12-Nov-2012 09:58:52 org.apache.solr.core.SolrResourceLoader locateSolrHome
 INFO: Using JNDI solr.home /conf_solr/instance2
 12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer$Initializer
 initialize
 INFO: looking for solr.xml: /conf_solr/instance2/solr.xml
 12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer init
 INFO: New CoreContainer 15471347
 12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer load
 INFO: Loading CoreContainer using Solr Home: '/conf_solr/instance2/'
 12-Nov-2012 09:58:52 org.apache.solr.core.SolrResourceLoader init
 INFO: new SOlrResourceLoader for directory: '/conf_solr/instance2/'
 12-Nov-2012 09:58:52 org.apache.solr.servlet.SolrDispatchFilter init
 SEVERE: Could not start Solr. Check solr/home property and the logs
 12-Nov-2012 09:58:52 org.apache.solr.common.SolrException log
 SEVERE: null:java.lang.ClassCastException:
 org.apache.xerces.parsers.XIncludeAwareParserConfiguration cannot be cast
 to org.apache.xerces.xni.parser.XMLParserConfiguration
 at org.apache.xerces.parsers.DOMParser.init(Unknown Source)
 at org.apache.xerces.parsers.DOMParser.init(Unknown Source)
 at org.apache.xerces.jaxp.DocumentBuilderImpl.init(Unknown
 Source)
 at
 org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown
 Source)
 at
 com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.createDocument(SAX2DOM.java:324)
 at
 com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.init(SAX2DOM.java:84)
 at
 com.sun.org.apache.xalan.internal.xsltc.runtime.output.TranslateOutputHandlerFactory.getSerializationHanlder(TransletOutputHandlerFactory.java:187)
 at
 com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.getOutputHandler(TransformerImpl.java:392)
 at
 com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:298)
 at
 org.apache.solr.core.CoreContainer.copyDoc(CoreContainer.java:551)
 at org.apache.solr.core.CoreContainer.load(CoreContainer.java:381)
 at org.apache.solr.core.CoreContainer.load(CoreContainer.java:356)
 at
 org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:308)
 at
 org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:107)
 at
 org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:295)
 at
 org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:422)
 at
 org.apache.catalina.core.ApplicationFilterConfig.init(ApplicationFilterConfig.java:115)
 at
 org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:3838)
 at
 org.apache.catalina.core.StandardContext.start(StandardContext.java:4488)
 at
 org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
 at
 org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
 at
 org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546

Re: Unable to run two multicore Solr instances under Tomcat

2012-11-13 Thread Erick Erickson
At a guess you have leftover jars from your earlier installation in your
classpath that are being picked up. I've always found that figuring out how
_that_ happened is...er... interesting...

Best
Erick


On Mon, Nov 12, 2012 at 7:44 AM, Adam Neal an...@mass.co.uk wrote:

 Hi,

 I have been running two multicore Solr instances under Tomcat using a
 nightly build of 4.0 from September 2011. This has been running fine but
 when I try to update these instances to the release version of 4.0 I'm
 hitting problems when the second instance starts up. If I have one instance
 on the release version and one on the nightly build it also works fine.

 It's running on a Solaris 10 box using Tomcat 6.0.26 and Java 1.6.0_20

 I can run up either instance on it's own and it works fine, it's just when
 starting both together so I'm pretty sure my configs aren't the issue.

 Snippet from the log is below, please note that I have had to type this
 out so there may be some typos, hopefully not!

 Any ideas?

 Adam


 12-Nov-2012 09:58:50 org.apache.solr.core.SolrResourceLoader locateSolrHome
 INFO: Using JNDI solr.home: /conf_solr/instance2
 12-Nov-2012 09:58:50 org.apache.solr.core.SolrResourceLoader init
 INFO: new SolrResourceLoader for deduced Solr Home: '/conf_solr/instance2/'
 12-Nov-2012 09:58:52 org.apache.solr.servlet.SolrDispatchFilter init
 INFO: SolrDispatchFilter.init()
 12-Nov-2012 09:58:52 org.apache.solr.core.SolrResourceLoader locateSolrHome
 INFO: Using JNDI solr.home /conf_solr/instance2
 12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer$Initializer
 initialize
 INFO: looking for solr.xml: /conf_solr/instance2/solr.xml
 12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer init
 INFO: New CoreContainer 15471347
 12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer load
 INFO: Loading CoreContainer using Solr Home: '/conf_solr/instance2/'
 12-Nov-2012 09:58:52 org.apache.solr.core.SolrResourceLoader init
 INFO: new SOlrResourceLoader for directory: '/conf_solr/instance2/'
 12-Nov-2012 09:58:52 org.apache.solr.servlet.SolrDispatchFilter init
 SEVERE: Could not start Solr. Check solr/home property and the logs
 12-Nov-2012 09:58:52 org.apache.solr.common.SolrException log
 SEVERE: null:java.lang.ClassCastException:
 org.apache.xerces.parsers.XIncludeAwareParserConfiguration cannot be cast
 to org.apache.xerces.xni.parser.XMLParserConfiguration
 at org.apache.xerces.parsers.DOMParser.init(Unknown Source)
 at org.apache.xerces.parsers.DOMParser.init(Unknown Source)
 at org.apache.xerces.jaxp.DocumentBuilderImpl.init(Unknown
 Source)
 at
 org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown
 Source)
 at
 com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.createDocument(SAX2DOM.java:324)
 at
 com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.init(SAX2DOM.java:84)
 at
 com.sun.org.apache.xalan.internal.xsltc.runtime.output.TranslateOutputHandlerFactory.getSerializationHanlder(TransletOutputHandlerFactory.java:187)
 at
 com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.getOutputHandler(TransformerImpl.java:392)
 at
 com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:298)
 at
 org.apache.solr.core.CoreContainer.copyDoc(CoreContainer.java:551)
 at org.apache.solr.core.CoreContainer.load(CoreContainer.java:381)
 at org.apache.solr.core.CoreContainer.load(CoreContainer.java:356)
 at
 org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:308)
 at
 org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:107)
 at
 org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:295)
 at
 org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:422)
 at
 org.apache.catalina.core.ApplicationFilterConfig.init(ApplicationFilterConfig.java:115)
 at
 org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:3838)
 at
 org.apache.catalina.core.StandardContext.start(StandardContext.java:4488)
 at
 org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
 at
 org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
 at
 org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546)
 at
 org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:637)
 at
 org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:563)
 at
 org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:498)
 at
 org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277)
 at
 org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:321)
 at
 org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119

RE: Unable to run two multicore Solr instances under Tomcat

2012-11-13 Thread Adam Neal
Hi Erick,

Thanks for the info. I figured out that it was a jar problem earlier today, but 
I don't think it is an old jar. Both of the instances I ran included the 
extraction libraries, and it appears that the problem is due to 
xercesImpl-2.9.1.jar. If I remove the extraction tool jars from one of the 
instances, or even just that specific jar, then everything works as normal. 
Fortunately I only need the extraction tools in one of my instances, so this 
workaround is good for now.

I can't see any old jars that would interfere, I will try and test this at some 
point on a clean install of 4.0 and see if the same problem occurs.


-Original Message-
From: Erick Erickson [mailto:erickerick...@gmail.com]
Sent: Tue 13/11/2012 12:05
To: solr-user@lucene.apache.org
Subject: Re: Unable to run two multicore Solr instances under Tomcat
 
At a guess you have leftover jars from your earlier installation in your
classpath that are being picked up. I've always found that figuring out how
_that_ happened is...er... interesting...

Best
Erick


On Mon, Nov 12, 2012 at 7:44 AM, Adam Neal an...@mass.co.uk wrote:

 Hi,

 I have been running two multicore Solr instances under Tomcat using a
 nightly build of 4.0 from September 2011. This has been running fine but
 when I try to update these instances to the release version of 4.0 I'm
 hitting problems when the second instance starts up. If I have one instance
 on the release version and one on the nightly build it also works fine.

 It's running on a Solaris 10 box using Tomcat 6.0.26 and Java 1.6.0_20

 I can run up either instance on it's own and it works fine, it's just when
 starting both together so I'm pretty sure my configs aren't the issue.

 Snippet from the log is below, please note that I have had to type this
 out so there may be some typos, hopefully not!

 Any ideas?

 Adam


 12-Nov-2012 09:58:50 org.apache.solr.core.SolrResourceLoader locateSolrHome
 INFO: Using JNDI solr.home: /conf_solr/instance2
 12-Nov-2012 09:58:50 org.apache.solr.core.SolrResourceLoader init
 INFO: new SolrResourceLoader for deduced Solr Home: '/conf_solr/instance2/'
 12-Nov-2012 09:58:52 org.apache.solr.servlet.SolrDispatchFilter init
 INFO: SolrDispatchFilter.init()
 12-Nov-2012 09:58:52 org.apache.solr.core.SolrResourceLoader locateSolrHome
 INFO: Using JNDI solr.home /conf_solr/instance2
 12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer$Initializer
 initialize
 INFO: looking for solr.xml: /conf_solr/instance2/solr.xml
 12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer init
 INFO: New CoreContainer 15471347
 12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer load
 INFO: Loading CoreContainer using Solr Home: '/conf_solr/instance2/'
 12-Nov-2012 09:58:52 org.apache.solr.core.SolrResourceLoader init
 INFO: new SOlrResourceLoader for directory: '/conf_solr/instance2/'
 12-Nov-2012 09:58:52 org.apache.solr.servlet.SolrDispatchFilter init
 SEVERE: Could not start Solr. Check solr/home property and the logs
 12-Nov-2012 09:58:52 org.apache.solr.common.SolrException log
 SEVERE: null:java.lang.ClassCastException:
 org.apache.xerces.parsers.XIncludeAwareParserConfiguration cannot be cast
 to org.apache.xerces.xni.parser.XMLParserConfiguration
 at org.apache.xerces.parsers.DOMParser.init(Unknown Source)
 at org.apache.xerces.parsers.DOMParser.init(Unknown Source)
 at org.apache.xerces.jaxp.DocumentBuilderImpl.init(Unknown
 Source)
 at
 org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown
 Source)
 at
 com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.createDocument(SAX2DOM.java:324)
 at
 com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.init(SAX2DOM.java:84)
 at
 com.sun.org.apache.xalan.internal.xsltc.runtime.output.TranslateOutputHandlerFactory.getSerializationHanlder(TransletOutputHandlerFactory.java:187)
 at
 com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.getOutputHandler(TransformerImpl.java:392)
 at
 com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:298)
 at
 org.apache.solr.core.CoreContainer.copyDoc(CoreContainer.java:551)
 at org.apache.solr.core.CoreContainer.load(CoreContainer.java:381)
 at org.apache.solr.core.CoreContainer.load(CoreContainer.java:356)
 at
 org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:308)
 at
 org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:107)
 at
 org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:295)
 at
 org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:422)
 at
 org.apache.catalina.core.ApplicationFilterConfig.init(ApplicationFilterConfig.java:115)
 at
 org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:3838

Unable to run two multicore Solr instances under Tomcat

2012-11-12 Thread Adam Neal
Hi,

I have been running two multicore Solr instances under Tomcat using a nightly 
build of 4.0 from September 2011. This has been running fine but when I try to 
update these instances to the release version of 4.0 I'm hitting problems when 
the second instance starts up. If I have one instance on the release version 
and one on the nightly build it also works fine.

It's running on a Solaris 10 box using Tomcat 6.0.26 and Java 1.6.0_20

I can run up either instance on it's own and it works fine, it's just when 
starting both together so I'm pretty sure my configs aren't the issue.

Snippet from the log is below, please note that I have had to type this out so 
there may be some typos, hopefully not!

Any ideas?

Adam


12-Nov-2012 09:58:50 org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: Using JNDI solr.home: /conf_solr/instance2
12-Nov-2012 09:58:50 org.apache.solr.core.SolrResourceLoader init
INFO: new SolrResourceLoader for deduced Solr Home: '/conf_solr/instance2/'
12-Nov-2012 09:58:52 org.apache.solr.servlet.SolrDispatchFilter init
INFO: SolrDispatchFilter.init()
12-Nov-2012 09:58:52 org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: Using JNDI solr.home /conf_solr/instance2
12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer$Initializer initialize
INFO: looking for solr.xml: /conf_solr/instance2/solr.xml
12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer init
INFO: New CoreContainer 15471347
12-Nov-2012 09:58:52 org.apache.solr.core.CoreContainer load
INFO: Loading CoreContainer using Solr Home: '/conf_solr/instance2/'
12-Nov-2012 09:58:52 org.apache.solr.core.SolrResourceLoader init
INFO: new SOlrResourceLoader for directory: '/conf_solr/instance2/'
12-Nov-2012 09:58:52 org.apache.solr.servlet.SolrDispatchFilter init
SEVERE: Could not start Solr. Check solr/home property and the logs
12-Nov-2012 09:58:52 org.apache.solr.common.SolrException log
SEVERE: null:java.lang.ClassCastException: 
org.apache.xerces.parsers.XIncludeAwareParserConfiguration cannot be cast to 
org.apache.xerces.xni.parser.XMLParserConfiguration
at org.apache.xerces.parsers.DOMParser.<init>(Unknown Source)
at org.apache.xerces.parsers.DOMParser.<init>(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.<init>(Unknown Source)
at 
org.apache.xerces.jaxp.DocumentBuilderFactoryImpl.newDocumentBuilder(Unknown 
Source)
at 
com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.createDocument(SAX2DOM.java:324)
at 
com.sun.org.apache.xalan.internal.xsltc.trax.SAX2DOM.<init>(SAX2DOM.java:84)
at 
com.sun.org.apache.xalan.internal.xsltc.runtime.output.TransletOutputHandlerFactory.getSerializationHandler(TransletOutputHandlerFactory.java:187)
at 
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.getOutputHandler(TransformerImpl.java:392)
at 
com.sun.org.apache.xalan.internal.xsltc.trax.TransformerImpl.transform(TransformerImpl.java:298)
at org.apache.solr.core.CoreContainer.copyDoc(CoreContainer.java:551)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:381)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:356)
at 
org.apache.solr.core.CoreContainer$Initializer.initialize(CoreContainer.java:308)
at 
org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:107)
at 
org.apache.catalina.core.ApplicationFilterConfig.getFilter(ApplicationFilterConfig.java:295)
at 
org.apache.catalina.core.ApplicationFilterConfig.setFilterDef(ApplicationFilterConfig.java:422)
at 
org.apache.catalina.core.ApplicationFilterConfig.<init>(ApplicationFilterConfig.java:115)
at 
org.apache.catalina.core.StandardContext.filterStart(StandardContext.java:3838)
at 
org.apache.catalina.core.StandardContext.start(StandardContext.java:4488)
at 
org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791)
at 
org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546)
at 
org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:637)
at 
org.apache.catalina.startup.HostConfig.deployDescriptors(HostConfig.java:563)
at 
org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:498)
at org.apache.catalina.startup.HostConfig.start(HostConfig.java:1277)
at 
org.apache.catalina.startup.HostConfig.lifecycleEvent(HostConfig.java:321)
at 
org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
at org.apache.catalina.core.StandardHost.start(StandardHost.java:785)
at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1045)
at 
org.apache.catalina.core.StandardEngine.start(StandardEngine.java:443

Re: Multicore setup is ignored when deploying solr.war on Tomcat 5/6/7

2012-10-20 Thread Rogerio Pereira
Here's the catalina.out contents:

Out 20, 2012 12:55:58 PM org.apache.solr.core.SolrResourceLoader
locateSolrHome
INFO: using system property solr.solr.home: /home/rogerio/Dados/salutisvitae
Out 20, 2012 12:55:58 PM org.apache.solr.core.SolrResourceLoader init
INFO: new SolrResourceLoader for deduced Solr Home:
'/home/rogerio/Dados/salutisvitae/'
Out 20, 2012 12:55:58 PM org.apache.solr.servlet.SolrDispatchFilter init
INFO: SolrDispatchFilter.init()
Out 20, 2012 12:55:58 PM org.apache.solr.core.SolrResourceLoader
locateSolrHome
INFO: No /solr/home in JNDI
Out 20, 2012 12:55:58 PM org.apache.solr.core.SolrResourceLoader
locateSolrHome
INFO: using system property solr.solr.home: /home/rogerio/Dados/salutisvitae
Out 20, 2012 12:55:58 PM org.apache.solr.core.CoreContainer$Initializer
initialize
INFO: looking for solr.xml: /home/rogerio/Dados/salutisvitae/solr.xml
Out 20, 2012 12:55:58 PM org.apache.solr.core.CoreContainer init
INFO: New CoreContainer 1806276996

/home/rogerio/Dados/salutisvitae really exists and has two core dirs,
collection1 and collection2, but only collection1 is initialized as we can
see below:

INFO: unique key field: id
Out 20, 2012 12:56:29 PM org.apache.solr.core.SolrCore init
INFO: [collection1] Opening new SolrCore at
/home/rogerio/Dados/salutisvitae/collection1/,
dataDir=/home/rogerio/Dados/salutisvitae/collection1/data/
Out 20, 2012 12:56:29 PM org.apache.solr.core.SolrCore init
INFO: JMX monitoring not detected for core: collection1
Out 20, 2012 12:56:29 PM org.apache.solr.core.SolrCore getNewIndexDir
WARNING: New index directory detected: old=null
new=/home/rogerio/Dados/salutisvitae/collection1/data/index/
Out 20, 2012 12:56:29 PM org.apache.solr.core.CachingDirectoryFactory get
INFO: return new directory for
/home/rogerio/Dados/salutisvitae/collection1/data/index forceNew:false

No more cores are initialized after collection1.

Note, I'm just making a simple copy of the multicore example
to /home/rogerio/Dados/salutisvitae, renaming core1 to collection1,
copying collection1 to collection2, and making the configuration changes in
solrconfig.xml. To set the path above I'm using the solr.solr.home
system property, with the solr admin deployed on tomcat from solr.war.

I'm getting the same strange behavior on both Xubuntu 10.04 and Ubuntu 12.10.

2012/10/16 Chris Hostetter hossman_luc...@fucit.org

 : To answer your question, I tried both -Dsolr.solr.home and solr/home JNDI
 : variable, in both cases I got the same result.
 :
 : I checked the logs several times, solr always only loads up the
 collection1,

 That doesn't really answer any of the questions I was asking you.

 *Before* solr logs anything about loading collection1, it will log
 information about how/where it is locating the solr home dir and
 solr.xml

 : if you look at the logging when solr first starts up, you should see
 : several messages about how/where it's trying to locate the Solr Home Dir
 : ... please double check that it's finding the one you intended.
 :
 : Please give us more details about those log messages related to the solr
 : home dir, as well as how you are trying to set it, and what your
 directory
 : structure looks like in tomcat.

 For example, this is what Solr logs if it can't detect either the system
 property, or JNDI, and is assuming it should use ./solr ...

 Oct 16, 2012 8:48:52 AM org.apache.solr.core.SolrResourceLoader
 locateSolrHome
 INFO: JNDI not configured for solr (NoInitialContextEx)
 Oct 16, 2012 8:48:52 AM org.apache.solr.core.SolrResourceLoader
 locateSolrHome
 INFO: solr home defaulted to 'solr/' (could not find system property or
 JNDI)
 Oct 16, 2012 8:48:52 AM org.apache.solr.core.SolrResourceLoader init
 INFO: new SolrResourceLoader for deduced Solr Home: 'solr/'
 Oct 16, 2012 8:48:53 AM org.apache.solr.servlet.SolrDispatchFilter init
 INFO: SolrDispatchFilter.init()
 Oct 16, 2012 8:48:53 AM org.apache.solr.core.SolrResourceLoader
 locateSolrHome
 INFO: JNDI not configured for solr (NoInitialContextEx)
 Oct 16, 2012 8:48:53 AM org.apache.solr.core.SolrResourceLoader
 locateSolrHome
 INFO: solr home defaulted to 'solr/' (could not find system property or
 JNDI)
 Oct 16, 2012 8:48:53 AM org.apache.solr.core.CoreContainer$Initializer
 initialize
 INFO: looking for solr.xml:
 /home/hossman/lucene/dev/solr/example/solr/solr.xml

 What do your startup logs look like as far as finding the solr home dir?

 because my suspicion is that the reason it's not loading your
 multicore setup, or complaining about malformed xml in your solr.xml
 file, is because it's not finding the directory you want at all.



 -Hoss




-- 
Regards,

Rogério Pereira Araújo

Blogs: http://faces.eti.br, http://ararog.blogspot.com
Twitter: http://twitter.com/ararog
Skype: rogerio.araujo
MSN: ara...@hotmail.com
Gtalk/FaceTime: rogerio.ara...@gmail.com

(0xx62) 8240 7212
(0xx62) 3920 2666


Re: Multicore setup is ignored when deploying solr.war on Tomcat 5/6/7

2012-10-16 Thread Rogério Pereira Araújo

Hi Chris,

To answer your question, I tried both -Dsolr.solr.home and the solr/home JNDI 
variable; in both cases I got the same result.


I checked the logs several times; solr always only loads up collection1. 
If I rename the cores in solr.xml to anything else, or add more cores, 
nothing happens.


Even if I put some garbage on solr.xml, by removing closing tags, no 
exception is generated.


I'm running Tomcat 7 and Solr 4 on Xubuntu 10.04, but I don't think the OS 
is the problem, I'll do the same test on other OSes.


-Mensagem Original- 
From: Chris Hostetter

Sent: Monday, October 15, 2012 5:38 PM
To: solr-user@lucene.apache.org ; rogerio.ara...@gmail.com
Subject: Re: Multicore setup is ignored when deploying solr.war on Tomcat 
5/6/7



: on Tomcat I set up the system property pointing to the solr/home path,
: unfortunately when I start tomcat the solr.xml is ignored and only the

Please elaborate on how exactly you pointed tomcat at your solr/home.

you mentioned system property but when using system properties to set
the Solr Home you want to set solr.solr.home .. solr/home is the JNDI
variable name used as an alternative.

if you look at the logging when solr first starts up, you should see
several messages about how/where it's trying to locate the Solr Home Dir
... please double check that it's finding the one you intended.

Please give us more details about those log messages related to the solr
home dir, as well as how you are trying to set it, and what your directory
structure looks like in tomcat.

If you haven't seen it yet...

https://wiki.apache.org/solr/SolrTomcat



-Hoss 



Re: Multicore setup is ignored when deploying solr.war on Tomcat 5/6/7

2012-10-16 Thread Chris Hostetter
: To answer your question, I tried both -Dsolr.solr.home and solr/home JNDI
: variable, in both cases I got the same result.
: 
: I checked the logs several times, solr always only loads up the collection1,

That doesn't really answer any of the questions I was asking you.

*Before* solr logs anything about loading collection1, it will log 
information about how/where it is locating the solr home dir and 
solr.xml

: if you look at the logging when solr first starts up, you should see
: several messages about how/where it's trying to locate the Solr Home Dir
: ... please double check that it's finding the one you intended.
: 
: Please give us more details about those log messages related to the solr
: home dir, as well as how you are trying to set it, and what your directory
: structure looks like in tomcat.

For example, this is what Solr logs if it can't detect either the system 
property, or JNDI, and is assuming it should use ./solr ...

Oct 16, 2012 8:48:52 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: JNDI not configured for solr (NoInitialContextEx)
Oct 16, 2012 8:48:52 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
Oct 16, 2012 8:48:52 AM org.apache.solr.core.SolrResourceLoader init
INFO: new SolrResourceLoader for deduced Solr Home: 'solr/'
Oct 16, 2012 8:48:53 AM org.apache.solr.servlet.SolrDispatchFilter init
INFO: SolrDispatchFilter.init()
Oct 16, 2012 8:48:53 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: JNDI not configured for solr (NoInitialContextEx)
Oct 16, 2012 8:48:53 AM org.apache.solr.core.SolrResourceLoader locateSolrHome
INFO: solr home defaulted to 'solr/' (could not find system property or JNDI)
Oct 16, 2012 8:48:53 AM org.apache.solr.core.CoreContainer$Initializer 
initialize
INFO: looking for solr.xml: /home/hossman/lucene/dev/solr/example/solr/solr.xml

What do your startup logs look like as far as finding the solr home dir?

because my suspicion is that the reason it's not loading your 
multicore setup, or complaining about malformed xml in your solr.xml 
file, is because it's not finding the directory you want at all.



-Hoss


Re: Multicore setup is ignored when deploying solr.war on Tomcat 5/6/7

2012-10-15 Thread Vadim Kisselmann
Hi Rogerio,
I can imagine what it is. Tomcat extracts the war files in
/var/lib/tomcatXX/webapps.
If you already ran an older Solr version on your server, the old
extracted Solr war could still be there (keyword: tomcat cache).
Delete the /var/lib/tomcatXX/webapps/solr folder and restart tomcat,
at which point Tomcat should deploy your new war file.
Best regards
Vadim
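
i.e., something along these lines (paths depend on the distribution; stop
Tomcat first):

  rm -rf /var/lib/tomcatXX/webapps/solr
  rm -rf /var/lib/tomcatXX/work/Catalina/localhost/solr   # stale work dir, if present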



2012/10/14 Rogerio Pereira rogerio.ara...@gmail.com:
 I'll try to be more specific, Jack.

 I just downloaded apache-solr-4.0.0.zip; from this archive I took the
 core1 and core2 folders from the multicore example and renamed them to
 collection1 and collection2. I also made all the necessary changes in
 solr.xml, solrconfig.xml and schema.xml on these two cores to reflect the
 new names.

 After this step I just tried to deploy the war file on tomcat, pointing to
 the directory (solr/home) where these two cores are located; solr.xml
 is there, with collection1 and collection2 properly configured.

 The question is, no matter what is contained in solr.xml, this file isn't
 read at Tomcat startup. I tried to cause a parser error in solr.xml by
 removing closing tags, but even with this change I can't get so much as a
 parser error.

 I hope to be clear now.


 2012/10/14 Jack Krupansky j...@basetechnology.com

 I can't quite parse the same multicore deployment as we have on apache
 solr 4.0 distribution archive. Could you rephrase and be more specific.
 What archive?

 Were you already using 4.0-ALPHA or BETA (or some snapshot of 4.0) or are
 you moving from pre-4.0 to 4.0? The directory structure did change in 4.0.
 Look at the example/solr directory.

 -- Jack Krupansky

 -Original Message- From: Rogerio Pereira
 Sent: Sunday, October 14, 2012 10:01 AM
 To: solr-user@lucene.apache.org
 Subject: Multicore setup is ignored when deploying solr.war on Tomcat 5/6/7


 Hi,

 I tried to perform the same multicore deployment as we have in the apache solr
 4.0 distribution archive. I created a directory for solr/home with solr.xml
 inside and two subdirectories, collection1 and collection2; these two cores
 are properly configured with a conf folder and solrconfig.xml and schema.xml.
 On Tomcat I set up the system property pointing to the solr/home path;
 unfortunately, when I start tomcat the solr.xml is ignored and only the
 default collection1 is loaded.

 As a test, I made changes to solr.xml to cause parser errors, and guess
 what? These errors aren't reported on tomcat startup.

 The same thing doesn't happen with the multicore example that comes in the
 distribution archive, so now I'm trying to figure out what black magic is
 happening.

 Let me do the same kind of deployment on Windows and Mac OSX; if it persists,
 I'll update this thread.

 Regards,

 Rogério




 --
 Regards,

 Rogério Pereira Araújo

 Blogs: http://faces.eti.br, http://ararog.blogspot.com
 Twitter: http://twitter.com/ararog
 Skype: rogerio.araujo
 MSN: ara...@hotmail.com
 Gtalk/FaceTime: rogerio.ara...@gmail.com

 (0xx62) 8240 7212
 (0xx62) 3920 2666


Re: Multicore setup is ignored when deploying solr.war on Tomcat 5/6/7

2012-10-15 Thread Rogério Pereira Araújo

Hi Vadim,

In fact tomcat is running in another non standard path, there's no old 
version deployed on tomcat, I double checked it.


Let me try in another environment.

-Mensagem Original- 
From: Vadim Kisselmann

Sent: Monday, October 15, 2012 6:01 AM
To: solr-user@lucene.apache.org ; rogerio.ara...@gmail.com
Subject: Re: Multicore setup is ignored when deploying solr.war on Tomcat 
5/6/7


Hi Rogerio,
I can imagine what it is. Tomcat extracts the war files in
/var/lib/tomcatXX/webapps.
If you already ran an older Solr version on your server, the old
extracted Solr war could still be there (keyword: tomcat cache).
Delete the /var/lib/tomcatXX/webapps/solr folder and restart tomcat,
at which point Tomcat should deploy your new war file.
Best regards
Vadim



2012/10/14 Rogerio Pereira rogerio.ara...@gmail.com:

I'll try to be more specific, Jack.

I just downloaded apache-solr-4.0.0.zip; from this archive I took the
core1 and core2 folders from the multicore example and renamed them to
collection1 and collection2. I also made all the necessary changes in
solr.xml, solrconfig.xml and schema.xml on these two cores to reflect the
new names.

After this step I just tried to deploy the war file on tomcat, pointing to
the directory (solr/home) where these two cores are located; solr.xml
is there, with collection1 and collection2 properly configured.

The question is, no matter what is contained in solr.xml, this file isn't
read at Tomcat startup. I tried to cause a parser error in solr.xml by
removing closing tags, but even with this change I can't get so much as a
parser error.

I hope to be clear now.


2012/10/14 Jack Krupansky j...@basetechnology.com


I can't quite parse the same multicore deployment as we have on apache
solr 4.0 distribution archive. Could you rephrase and be more specific.
What archive?

Were you already using 4.0-ALPHA or BETA (or some snapshot of 4.0) or are
you moving from pre-4.0 to 4.0? The directory structure did change in 
4.0.

Look at the example/solr directory.

-- Jack Krupansky

-Original Message- From: Rogerio Pereira
Sent: Sunday, October 14, 2012 10:01 AM
To: solr-user@lucene.apache.org
Subject: Multicore setup is ignored when deploying solr.war on Tomcat 
5/6/7



Hi,

I tried to perform the same multicore deployment as we have in the apache solr
4.0 distribution archive. I created a directory for solr/home with solr.xml
inside and two subdirectories, collection1 and collection2; these two cores
are properly configured with a conf folder and solrconfig.xml and schema.xml.
On Tomcat I set up the system property pointing to the solr/home path;
unfortunately, when I start tomcat the solr.xml is ignored and only the
default collection1 is loaded.

As a test, I made changes to solr.xml to cause parser errors, and guess
what? These errors aren't reported on tomcat startup.

The same thing doesn't happen with the multicore example that comes in the
distribution archive, so now I'm trying to figure out what black magic is
happening.

Let me do the same kind of deployment on Windows and Mac OSX; if it persists,
I'll update this thread.

Regards,

Rogério





--
Regards,

Rogério Pereira Araújo

Blogs: http://faces.eti.br, http://ararog.blogspot.com
Twitter: http://twitter.com/ararog
Skype: rogerio.araujo
MSN: ara...@hotmail.com
Gtalk/FaceTime: rogerio.ara...@gmail.com

(0xx62) 8240 7212
(0xx62) 3920 2666 




Re: Multicore setup is ignored when deploying solr.war on Tomcat 5/6/7

2012-10-15 Thread Chris Hostetter

: on Tomcat I set up the system property pointing to the solr/home path,
: unfortunately when I start tomcat the solr.xml is ignored and only the

Please elaborate on how exactly you pointed tomcat at your solr/home.

you mentioned system property but when using system properties to set 
the Solr Home you want to set solr.solr.home .. solr/home is the JNDI 
variable name used as an alternative.

if you look at the logging when solr first starts up, you should see 
several messages about how/where it's trying to locate the Solr Home Dir 
... please double check that it's finding the one you intended.

Please give us more details about those log messages related to the solr 
home dir, as well as how you are trying to set it, and what your directory 
structure looks like in tomcat.

If you haven't seen it yet...

https://wiki.apache.org/solr/SolrTomcat



-Hoss
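
Concretely, the two approaches look something like this (paths illustrative):
either export the system property before starting Tomcat,

  JAVA_OPTS="$JAVA_OPTS -Dsolr.solr.home=/opt/solr/home"

or declare the JNDI variable in the webapp's context fragment:

  <Context docBase="/opt/solr/solr.war" debug="0" crossContext="true">
    <Environment name="solr/home" type="java.lang.String"
                 value="/opt/solr/home" override="true" />
  </Context>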


Re: Multicore setup is ignored when deploying solr.war on Tomcat 5/6/7

2012-10-14 Thread Jack Krupansky
I can't quite parse the same multicore deployment as we have on apache solr 
4.0 distribution archive. Could you rephrase and be more specific. What 
archive?


Were you already using 4.0-ALPHA or BETA (or some snapshot of 4.0) or are 
you moving from pre-4.0 to 4.0? The directory structure did change in 4.0. 
Look at the example/solr directory.


-- Jack Krupansky

-Original Message- 
From: Rogerio Pereira

Sent: Sunday, October 14, 2012 10:01 AM
To: solr-user@lucene.apache.org
Subject: Multicore setup is ignored when deploying solr.war on Tomcat 5/6/7

Hi,

I tried to perform the same multicore deployment as we have in the apache solr
4.0 distribution archive. I created a directory for solr/home with solr.xml
inside and two subdirectories, collection1 and collection2; these two cores
are properly configured with a conf folder and solrconfig.xml and schema.xml.
On Tomcat I set up the system property pointing to the solr/home path;
unfortunately, when I start tomcat the solr.xml is ignored and only the
default collection1 is loaded.

As a test, I made changes to solr.xml to cause parser errors, and guess
what? These errors aren't reported on tomcat startup.

The same thing doesn't happen with the multicore example that comes in the
distribution archive, so now I'm trying to figure out what black magic is
happening.

Let me do the same kind of deployment on Windows and Mac OSX; if it persists,
I'll update this thread.

Regards,

Rogério 



Re: Multicore setup is ignored when deploying solr.war on Tomcat 5/6/7

2012-10-14 Thread Rogerio Pereira
I'll try to be more specific, Jack.

I just downloaded apache-solr-4.0.0.zip; from this archive I took the
core1 and core2 folders from the multicore example and renamed them to
collection1 and collection2. I also made all the necessary changes in
solr.xml, solrconfig.xml and schema.xml on these two cores to reflect the
new names.

After this step I just tried to deploy the war file on tomcat, pointing to
the directory (solr/home) where these two cores are located; solr.xml
is there, with collection1 and collection2 properly configured.

The question is, no matter what is contained in solr.xml, this file isn't
read at Tomcat startup. I tried to cause a parser error in solr.xml by
removing closing tags, but even with this change I can't get so much as a
parser error.

I hope to be clear now.


2012/10/14 Jack Krupansky j...@basetechnology.com

 I can't quite parse the same multicore deployment as we have on apache
 solr 4.0 distribution archive. Could you rephrase and be more specific.
 What archive?

 Were you already using 4.0-ALPHA or BETA (or some snapshot of 4.0) or are
 you moving from pre-4.0 to 4.0? The directory structure did change in 4.0.
 Look at the example/solr directory.

 -- Jack Krupansky

 -Original Message- From: Rogerio Pereira
 Sent: Sunday, October 14, 2012 10:01 AM
 To: solr-user@lucene.apache.org
 Subject: Multicore setup is ignored when deploying solr.war on Tomcat 5/6/7


 Hi,

 I tried to perform the same multicore deployment as we have in the apache solr
 4.0 distribution archive. I created a directory for solr/home with solr.xml
 inside and two subdirectories, collection1 and collection2; these two cores
 are properly configured with a conf folder and solrconfig.xml and schema.xml.
 On Tomcat I set up the system property pointing to the solr/home path;
 unfortunately, when I start tomcat the solr.xml is ignored and only the
 default collection1 is loaded.

 As a test, I made changes to solr.xml to cause parser errors, and guess
 what? These errors aren't reported on tomcat startup.

 The same thing doesn't happen with the multicore example that comes in the
 distribution archive, so now I'm trying to figure out what black magic is
 happening.

 Let me do the same kind of deployment on Windows and Mac OSX; if it persists,
 I'll update this thread.

 Regards,

 Rogério




-- 
Regards,

Rogério Pereira Araújo

Blogs: http://faces.eti.br, http://ararog.blogspot.com
Twitter: http://twitter.com/ararog
Skype: rogerio.araujo
MSN: ara...@hotmail.com
Gtalk/FaceTime: rogerio.ara...@gmail.com

(0xx62) 8240 7212
(0xx62) 3920 2666


solr multicore problem on SLES 11

2012-09-17 Thread Jochen Lienhard

Hello,

I have a problem with solr and multicores on SLES 11 SP 2.

I have 3 cores, each with more than 20 segments.
When I try to start tomcat6, it cannot start the CoreContainer.
Caused by: java.lang.OutOfMemoryError: Map failed
at sun.nio.ch.FileChannelImpl.map0(Native Method)

I have read a lot about this problem, but I have not found a solution.

The strange thing is:

It works fine under openSuSE 12.x, tomcat6, openjdk.

But the virtual machine with SLES 11 SP 2, tomcat6, and openjdk 
crashes.


Both tomcat/java configurations are the same.

Does anybody have an idea how to solve this problem?

I have another SLES machine with 5 cores, but each has only 1 segment 
(very small index), and this machine runs fine.


Greetings

Jochen

--
Dr. rer. nat. Jochen Lienhard
Dezernat EDV

Albert-Ludwigs-Universität Freiburg
Universitätsbibliothek
Rempartstr. 10-16  | Postfach 1629
79098 Freiburg | 79016 Freiburg

Telefon: +49 761 203-3908
E-Mail: lienh...@ub.uni-freiburg.de
Internet: www.ub.uni-freiburg.de






AW: solr multicore problem on SLES 11

2012-09-17 Thread André Widhani
The first thing I would check is the virtual memory limit (ulimit -v; check 
this for the operating system user that runs Tomcat/Solr).

It should be set to unlimited, which as far as I remember is not the 
default setting on SLES 11.

Since 3.1, Solr maps the index files into virtual memory. So if your 
index files are larger than the allowed virtual memory, it may fail.

Regards,
André
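
To check and, if needed, lift the limit for that user (illustrative commands;
the permanent place for this varies, e.g. the tomcat init script or
/etc/security/limits.conf):

  su - tomcat6 -s /bin/sh -c 'ulimit -v'   # show the current limit
  ulimit -v unlimited                      # in the shell/script that launches Tomcat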


Von: Jochen Lienhard [lienh...@ub.uni-freiburg.de]
Gesendet: Montag, 17. September 2012 09:17
An: solr-user@lucene.apache.org
Betreff: solr multicore problem on SLES 11

Hello,

I have a problem with solr and multicores on SLES 11 SP 2.

I have 3 cores, each with more than 20 segments.
When I try to start tomcat6, it cannot start the CoreContainer.
Caused by: java.lang.OutOfMemoryError: Map failed
 at sun.nio.ch.FileChannelImpl.map0(Native Method)

I read a lot about this problem, but I have not found a solution.

The strange thing is:

It works fine under openSuSE 12.x, tomcat6, openjdk.

But on the virtual machine with SLES 11 SP 2, tomcat6 and openjdk, it
crashes.

Both tomcat/java configurations are the same.

Does anybody have an idea how to solve this problem?

I have another SLES machine with 5 cores, but each has only 1 segment
(a very small index), and that machine runs fine.

Greetings

Jochen

--
Dr. rer. nat. Jochen Lienhard
Dezernat EDV

Albert-Ludwigs-Universität Freiburg
Universitätsbibliothek
Rempartstr. 10-16  | Postfach 1629
79098 Freiburg | 79016 Freiburg

Telefon: +49 761 203-3908
E-Mail: lienh...@ub.uni-freiburg.de
Internet: www.ub.uni-freiburg.de




Re: solr multicore problem on SLES 11

2012-09-17 Thread Jochen Lienhard

Great. Thanks.
That solves my problem.

Greetings

Jochen

André Widhani wrote:

The first thing I would check is the virtual memory limit (ulimit -v; check
this for the operating system user that runs Tomcat/Solr).

It should be set to unlimited, but as far as I remember that is not the
default setting on SLES 11.

Since 3.1, Solr maps the index files into virtual memory. So if your
index files are larger than the allowed virtual memory, it may fail.

Regards,
André


From: Jochen Lienhard [lienh...@ub.uni-freiburg.de]
Sent: Monday, 17 September 2012 09:17
To: solr-user@lucene.apache.org
Subject: solr multicore problem on SLES 11

Hello,

I have a problem with solr and multicores on SLES 11 SP 2.

I have 3 cores, each with more than 20 segments.
When I try to start tomcat6, it cannot start the CoreContainer.
Caused by: java.lang.OutOfMemoryError: Map failed
  at sun.nio.ch.FileChannelImpl.map0(Native Method)

I read a lot about this problem, but I have not found a solution.

The strange thing is:

It works fine under openSuSE 12.x, tomcat6, openjdk.

But on the virtual machine with SLES 11 SP 2, tomcat6 and openjdk, it
crashes.

Both tomcat/java configurations are the same.

Does anybody have an idea how to solve this problem?

I have another SLES machine with 5 cores, but each has only 1 segment
(a very small index), and that machine runs fine.

Greetings

Jochen

--
Dr. rer. nat. Jochen Lienhard
Dezernat EDV

Albert-Ludwigs-Universität Freiburg
Universitätsbibliothek
Rempartstr. 10-16  | Postfach 1629
79098 Freiburg | 79016 Freiburg

Telefon: +49 761 203-3908
E-Mail: lienh...@ub.uni-freiburg.de
Internet: www.ub.uni-freiburg.de






--
Dr. rer. nat. Jochen Lienhard
Dezernat EDV

Albert-Ludwigs-Universität Freiburg
Universitätsbibliothek
Rempartstr. 10-16  | Postfach 1629
79098 Freiburg | 79016 Freiburg

Telefon: +49 761 203-3908
E-Mail: lienh...@ub.uni-freiburg.de
Internet: www.ub.uni-freiburg.de






Re: Atomic Multicore Operations - E.G. Move Docs

2012-08-20 Thread Nicholas Ball

Hi Lance,

How would that work? Generation is essentially versioning, right?
I also don't see why you need to use ZK to do this, as it's all on a single
machine; I was hoping for a simpler solution :)

On Sun, 19 Aug 2012 19:26:41 -0700, Lance Norskog goks...@gmail.com
wrote:
 I would use generation numbers on documents, and communicate a global
 generation number in ZK.
 
 On Thu, Aug 16, 2012 at 2:22 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 I've been close to implementing a 2PC protocol before for something else;
 however, for this it's not needed.
 As the move operation will be done on a single node which has both the
 cores, this could be done differently. Just not entirely sure how to do it.

 When a commit is done at the moment, the core must get locked somehow; it
 is at this point that we should lock the other core too if a move
 operation is being executed.

 Nick

 On Thu, 16 Aug 2012 10:32:10 +0800, Li Li fancye...@gmail.com wrote:


http://zookeeper.apache.org/doc/r3.3.6/recipes.html#sc_recipes_twoPhasedCommit

 On Thu, Aug 16, 2012 at 7:41 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 Haven't managed to find a good way to do this yet. Does anyone have
any
 ideas on how I could implement this feature?
 Really need to move docs across from one core to another atomically.

 Many thanks,
 Nicholas

 On Mon, 02 Jul 2012 04:37:12 -0600, Nicholas Ball
 nicholas.b...@nodelay.com wrote:
 That could work, but then how do you ensure commit is called on the
 two
 cores at the exact same time?

 Cheers,
 Nicholas

 On Sat, 30 Jun 2012 16:19:31 -0700, Lance Norskog
goks...@gmail.com
 wrote:
 Index all documents to both cores, but do not call commit until
both
 report that indexing worked. If one of the cores throws an
exception,
 call roll back on both cores.

 On Sat, Jun 30, 2012 at 6:50 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 Hey all,

 Trying to figure out the best way to perform an atomic operation across
 multiple cores on the same solr instance, i.e. a multi-core environment.

 An example would be to move a set of docs from one core onto another core
 and ensure that a softcommit is done at the exact same time. If one were to
 fail, so would the other.
 Obviously this would probably require some customization, but I wanted to
 know what the best way to tackle this would be and where I should be
 looking in the source.

 Many thanks for the help in advance,
 Nicholas a.k.a. incunix


Re: Atomic Multicore Operations - E.G. Move Docs

2012-08-20 Thread Lance Norskog
Yes, by generations I meant versioning. The problem is that you have
to have a central holder of the current generation number. ZK does
this very well. It is a distributed synchronized file system for very
small files. If you have a more natural place to store the current
generation number, that's fine also.
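
If it helps, a rough sketch of the ZK side (untested; it assumes a znode at
the made-up path /solr/generation already exists and holds a number):

    import org.apache.zookeeper.ZooKeeper;
    import org.apache.zookeeper.data.Stat;

    public class GenerationCounter {
        private static final String PATH = "/solr/generation"; // made-up path
        private final ZooKeeper zk;

        public GenerationCounter(ZooKeeper zk) { this.zk = zk; }

        // Read the current global generation number.
        public long current() throws Exception {
            return Long.parseLong(new String(zk.getData(PATH, false, new Stat()), "UTF-8"));
        }

        // Bump the generation. Passing the version we read makes setData fail
        // if someone else changed the znode in between (optimistic locking).
        public long increment() throws Exception {
            Stat stat = new Stat();
            long next = Long.parseLong(new String(zk.getData(PATH, false, stat), "UTF-8")) + 1;
            zk.setData(PATH, Long.toString(next).getBytes("UTF-8"), stat.getVersion());
            return next;
        }
    }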

On Mon, Aug 20, 2012 at 2:47 PM, Nicholas Ball
nicholas.b...@nodelay.com wrote:

 Hi Lance,

 How would that work? Generation is essentially versioning, right?
 I also don't see why you need to use ZK to do this, as it's all on a single
 machine; I was hoping for a simpler solution :)

 On Sun, 19 Aug 2012 19:26:41 -0700, Lance Norskog goks...@gmail.com
 wrote:
 I would use generation numbers on documents, and communicate a global
 generation number in ZK.

 On Thu, Aug 16, 2012 at 2:22 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 I've been close to implementing a 2PC protocol before for something else;
 however, for this it's not needed.
 As the move operation will be done on a single node which has both the
 cores, this could be done differently. Just not entirely sure how to do it.

 When a commit is done at the moment, the core must get locked somehow; it
 is at this point that we should lock the other core too if a move
 operation is being executed.

 Nick

 On Thu, 16 Aug 2012 10:32:10 +0800, Li Li fancye...@gmail.com wrote:


 http://zookeeper.apache.org/doc/r3.3.6/recipes.html#sc_recipes_twoPhasedCommit

 On Thu, Aug 16, 2012 at 7:41 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 Haven't managed to find a good way to do this yet. Does anyone have
 any
 ideas on how I could implement this feature?
 Really need to move docs across from one core to another atomically.

 Many thanks,
 Nicholas

 On Mon, 02 Jul 2012 04:37:12 -0600, Nicholas Ball
 nicholas.b...@nodelay.com wrote:
 That could work, but then how do you ensure commit is called on the
 two
 cores at the exact same time?

 Cheers,
 Nicholas

 On Sat, 30 Jun 2012 16:19:31 -0700, Lance Norskog
 goks...@gmail.com
 wrote:
 Index all documents to both cores, but do not call commit until
 both
 report that indexing worked. If one of the cores throws an
 exception,
 call roll back on both cores.

 On Sat, Jun 30, 2012 at 6:50 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 Hey all,

 Trying to figure out the best way to perform an atomic operation across
 multiple cores on the same solr instance, i.e. a multi-core environment.

 An example would be to move a set of docs from one core onto another core
 and ensure that a softcommit is done at the exact same time. If one were to
 fail, so would the other.
 Obviously this would probably require some customization, but I wanted to
 know what the best way to tackle this would be and where I should be
 looking in the source.

 Many thanks for the help in advance,
 Nicholas a.k.a. incunix



-- 
Lance Norskog
goks...@gmail.com


Re: Atomic Multicore Operations - E.G. Move Docs

2012-08-19 Thread Lance Norskog
I would use generation numbers on documents, and communicate a global
generation number in ZK.

On Thu, Aug 16, 2012 at 2:22 AM, Nicholas Ball
nicholas.b...@nodelay.com wrote:

 I've been close to implementing a 2PC protocol before for something else;
 however, for this it's not needed.
 As the move operation will be done on a single node which has both the
 cores, this could be done differently. Just not entirely sure how to do it.

 When a commit is done at the moment, the core must get locked somehow; it
 is at this point that we should lock the other core too if a move
 operation is being executed.

 Nick

 On Thu, 16 Aug 2012 10:32:10 +0800, Li Li fancye...@gmail.com wrote:

 http://zookeeper.apache.org/doc/r3.3.6/recipes.html#sc_recipes_twoPhasedCommit

 On Thu, Aug 16, 2012 at 7:41 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 Haven't managed to find a good way to do this yet. Does anyone have any
 ideas on how I could implement this feature?
 Really need to move docs across from one core to another atomically.

 Many thanks,
 Nicholas

 On Mon, 02 Jul 2012 04:37:12 -0600, Nicholas Ball
 nicholas.b...@nodelay.com wrote:
 That could work, but then how do you ensure commit is called on the
 two
 cores at the exact same time?

 Cheers,
 Nicholas

 On Sat, 30 Jun 2012 16:19:31 -0700, Lance Norskog goks...@gmail.com
 wrote:
 Index all documents to both cores, but do not call commit until both
 report that indexing worked. If one of the cores throws an exception,
 call roll back on both cores.

 On Sat, Jun 30, 2012 at 6:50 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 Hey all,

 Trying to figure out the best way to perform an atomic operation across
 multiple cores on the same solr instance, i.e. a multi-core environment.

 An example would be to move a set of docs from one core onto another core
 and ensure that a softcommit is done at the exact same time. If one were to
 fail, so would the other.
 Obviously this would probably require some customization, but I wanted to
 know what the best way to tackle this would be and where I should be
 looking in the source.

 Many thanks for the help in advance,
 Nicholas a.k.a. incunix



-- 
Lance Norskog
goks...@gmail.com


Re: Atomic Multicore Operations - E.G. Move Docs

2012-08-16 Thread Nicholas Ball

I've been close to implementing a 2PC protocol before for something else;
however, for this it's not needed.
As the move operation will be done on a single node which has both the
cores, this could be done differently. Just not entirely sure how to do it.

When a commit is done at the moment, the core must get locked somehow; it
is at this point that we should lock the other core too if a move
operation is being executed.

Nick

On Thu, 16 Aug 2012 10:32:10 +0800, Li Li fancye...@gmail.com wrote:

http://zookeeper.apache.org/doc/r3.3.6/recipes.html#sc_recipes_twoPhasedCommit
 
 On Thu, Aug 16, 2012 at 7:41 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 Haven't managed to find a good way to do this yet. Does anyone have any
 ideas on how I could implement this feature?
 Really need to move docs across from one core to another atomically.

 Many thanks,
 Nicholas

 On Mon, 02 Jul 2012 04:37:12 -0600, Nicholas Ball
 nicholas.b...@nodelay.com wrote:
 That could work, but then how do you ensure commit is called on the
two
 cores at the exact same time?

 Cheers,
 Nicholas

 On Sat, 30 Jun 2012 16:19:31 -0700, Lance Norskog goks...@gmail.com
 wrote:
 Index all documents to both cores, but do not call commit until both
 report that indexing worked. If one of the cores throws an exception,
 call roll back on both cores.

 On Sat, Jun 30, 2012 at 6:50 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 Hey all,

 Trying to figure out the best way to perform an atomic operation across
 multiple cores on the same solr instance, i.e. a multi-core environment.

 An example would be to move a set of docs from one core onto another core
 and ensure that a softcommit is done at the exact same time. If one were to
 fail, so would the other.
 Obviously this would probably require some customization, but I wanted to
 know what the best way to tackle this would be and where I should be
 looking in the source.

 Many thanks for the help in advance,
 Nicholas a.k.a. incunix


Re: Atomic Multicore Operations - E.G. Move Docs

2012-08-15 Thread Nicholas Ball

Haven't managed to find a good way to do this yet. Does anyone have any
ideas on how I could implement this feature?
Really need to move docs across from one core to another atomically.

Many thanks,
Nicholas

On Mon, 02 Jul 2012 04:37:12 -0600, Nicholas Ball
nicholas.b...@nodelay.com wrote:
 That could work, but then how do you ensure commit is called on the two
 cores at the exact same time?
 
 Cheers,
 Nicholas
 
 On Sat, 30 Jun 2012 16:19:31 -0700, Lance Norskog goks...@gmail.com
 wrote:
 Index all documents to both cores, but do not call commit until both
 report that indexing worked. If one of the cores throws an exception,
 call roll back on both cores.
 
 On Sat, Jun 30, 2012 at 6:50 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 Hey all,

 Trying to figure out the best way to perform an atomic operation across
 multiple cores on the same solr instance, i.e. a multi-core environment.

 An example would be to move a set of docs from one core onto another core
 and ensure that a softcommit is done at the exact same time. If one were to
 fail, so would the other.
 Obviously this would probably require some customization, but I wanted to
 know what the best way to tackle this would be and where I should be
 looking in the source.

 Many thanks for the help in advance,
 Nicholas a.k.a. incunix


Re: Atomic Multicore Operations - E.G. Move Docs

2012-08-15 Thread Li Li
On 2012-7-2 at 6:37 PM, Nicholas Ball nicholas.b...@nodelay.com wrote:


 That could work, but then how do you ensure commit is called on the two
 cores at the exact same time?
That may need something like a two-phase commit in a relational DB. Lucene
has prepareCommit, but to implement 2PC, many things need to be done.
 Also, any way to commit a specific update rather than all the back-logged
 ones?

 Cheers,
 Nicholas

 On Sat, 30 Jun 2012 16:19:31 -0700, Lance Norskog goks...@gmail.com
 wrote:
  Index all documents to both cores, but do not call commit until both
  report that indexing worked. If one of the cores throws an exception,
  call roll back on both cores.
 
  On Sat, Jun 30, 2012 at 6:50 AM, Nicholas Ball
  nicholas.b...@nodelay.com wrote:
 
  Hey all,
 
  Trying to figure out the best way to perform an atomic operation across
  multiple cores on the same solr instance, i.e. a multi-core environment.
 
  An example would be to move a set of docs from one core onto another core
  and ensure that a softcommit is done at the exact same time. If one were to
  fail, so would the other.
  Obviously this would probably require some customization, but I wanted to
  know what the best way to tackle this would be and where I should be
  looking in the source.
 
  Many thanks for the help in advance,
  Nicholas a.k.a. incunix


Re: Atomic Multicore Operations - E.G. Move Docs

2012-08-15 Thread Li Li
Do you really need this?
Distributed transactions are a difficult problem. In 2PC, every node could
fail, including the coordinator. Something like leader election is needed to
make sure it works; you could try ZooKeeper.
But if the transaction is not critically important (like transferring money
in a bank), you can do something like this.
coordinator:
On 2012-8-16 at 7:42 AM, Nicholas Ball nicholas.b...@nodelay.com wrote:


 Haven't managed to find a good way to do this yet. Does anyone have any
 ideas on how I could implement this feature?
 Really need to move docs across from one core to another atomically.

 Many thanks,
 Nicholas

 On Mon, 02 Jul 2012 04:37:12 -0600, Nicholas Ball
 nicholas.b...@nodelay.com wrote:
  That could work, but then how do you ensure commit is called on the two
  cores at the exact same time?
 
  Cheers,
  Nicholas
 
  On Sat, 30 Jun 2012 16:19:31 -0700, Lance Norskog goks...@gmail.com
  wrote:
  Index all documents to both cores, but do not call commit until both
  report that indexing worked. If one of the cores throws an exception,
  call roll back on both cores.
 
  On Sat, Jun 30, 2012 at 6:50 AM, Nicholas Ball
  nicholas.b...@nodelay.com wrote:
 
  Hey all,
 
  Trying to figure out the best way to perform an atomic operation across
  multiple cores on the same solr instance, i.e. a multi-core environment.
 
  An example would be to move a set of docs from one core onto another core
  and ensure that a softcommit is done at the exact same time. If one were to
  fail, so would the other.
  Obviously this would probably require some customization, but I wanted to
  know what the best way to tackle this would be and where I should be
  looking in the source.
 
  Many thanks for the help in advance,
  Nicholas a.k.a. incunix



Re: Atomic Multicore Operations - E.G. Move Docs

2012-08-15 Thread Li Li
http://zookeeper.apache.org/doc/r3.3.6/recipes.html#sc_recipes_twoPhasedCommit

On Thu, Aug 16, 2012 at 7:41 AM, Nicholas Ball
nicholas.b...@nodelay.com wrote:

 Haven't managed to find a good way to do this yet. Does anyone have any
 ideas on how I could implement this feature?
 Really need to move docs across from one core to another atomically.

 Many thanks,
 Nicholas

 On Mon, 02 Jul 2012 04:37:12 -0600, Nicholas Ball
 nicholas.b...@nodelay.com wrote:
 That could work, but then how do you ensure commit is called on the two
 cores at the exact same time?

 Cheers,
 Nicholas

 On Sat, 30 Jun 2012 16:19:31 -0700, Lance Norskog goks...@gmail.com
 wrote:
 Index all documents to both cores, but do not call commit until both
 report that indexing worked. If one of the cores throws an exception,
 call roll back on both cores.

 On Sat, Jun 30, 2012 at 6:50 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 Hey all,

 Trying to figure out the best way to perform an atomic operation across
 multiple cores on the same solr instance, i.e. a multi-core environment.

 An example would be to move a set of docs from one core onto another core
 and ensure that a softcommit is done at the exact same time. If one were to
 fail, so would the other.
 Obviously this would probably require some customization, but I wanted to
 know what the best way to tackle this would be and where I should be
 looking in the source.

 Many thanks for the help in advance,
 Nicholas a.k.a. incunix


Re: How config multicore using solr cloud feature

2012-08-03 Thread Mark Miller
Configure all your cores as you would in a single-node setup. Then use
-Dbootstrap_conf=true rather than the bootstrap option where you point at
one directory and give a config set name. That will bootstrap all of your
cores with the config they have locally, naming the config sets created
after the collection name.
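
For example, from the example directory, something like this (ports and
paths are illustrative):

    java -Dbootstrap_conf=true -DzkRun -jar start.jar

-DzkRun starts an embedded ZooKeeper; if you already have an external
ensemble, point at it with -DzkHost=host:port instead.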

The other option is to use the new collections API to create further
collections - but I have not gotten around to documenting it on the wiki
yet - I will shortly.

I'm not positive whether the collections API is in the 4.0 ALPHA without
looking, but I think it may be.

If not, the bootstrap method is pretty simple as well.

On Sun, Jul 29, 2012 at 11:00 PM, Qun Wang qun.w...@morningstar.com wrote:

 Hi,
  I'm a new user and our program need use multicore to manage
 index. I found that Solr 4.0 ALPHA has Solr cloud feature which I could use
 for load balance in query and sync for update. But the wiki for Solr cloud
 just tell me how to use single core for sync. For my requirement should use
 it for multicore synchronized in update. Could someone tell me how to
 configure it?

 Thanks.




-- 
- Mark

http://www.lucidimagination.com


How config multicore using solr cloud feature

2012-07-29 Thread Qun Wang
Hi,
 I'm a new user and our program needs to use multicore to manage indexes. I
found that Solr 4.0 ALPHA has the SolrCloud feature, which I could use for load
balancing on queries and synchronization on updates. But the SolrCloud wiki only
tells me how to use a single core for sync. For my requirement I should use it
with multiple cores synchronized on update. Could someone tell me how to
configure it?

Thanks.


Configuring Apache SOLR with Multicore on IBM Websphere Application Server

2012-07-23 Thread Senthil Kk Mani

Hi,

I currently have Apache SOLR 3.6 running on Tomcat 7.0.27. I was able to
successfully configure multicores too. This was my development environment,
and hence I used Tomcat; however, the production environment is WAS. I need
to migrate the existing multicore SOLR indexes from Tomcat to WAS. Is there
any documentation available on how to install SOLR on WebSphere and
configure the multicores?

Thanks,
-Senthil



Re: Multicore admin problem in Websphere

2012-07-23 Thread kmsenthil
Hi,

I am currently looking for some information on how to host multiple SOLR
indexes on Websphere. I have this already working on tomcat. 

Do you have any documentation on how to set it up on websphere?

Thanks
Senthil



--
View this message in context: 
http://lucene.472066.n3.nabble.com/Multicore-admin-problem-in-Websphere-tp764471p3996691.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Atomic Multicore Operations - E.G. Move Docs

2012-07-02 Thread Nicholas Ball

That could work, but then how do you ensure commit is called on the two
cores at the exact same time?
Also, any way to commit a specific update rather than all the back-logged
ones?

Cheers,
Nicholas

On Sat, 30 Jun 2012 16:19:31 -0700, Lance Norskog goks...@gmail.com
wrote:
 Index all documents to both cores, but do not call commit until both
 report that indexing worked. If one of the cores throws an exception,
 call roll back on both cores.
 
 On Sat, Jun 30, 2012 at 6:50 AM, Nicholas Ball
 nicholas.b...@nodelay.com wrote:

 Hey all,

 Trying to figure out the best way to perform an atomic operation across
 multiple cores on the same solr instance, i.e. a multi-core environment.

 An example would be to move a set of docs from one core onto another core
 and ensure that a softcommit is done at the exact same time. If one were to
 fail, so would the other.
 Obviously this would probably require some customization, but I wanted to
 know what the best way to tackle this would be and where I should be
 looking in the source.

 Many thanks for the help in advance,
 Nicholas a.k.a. incunix


Atomic Multicore Operations - E.G. Move Docs

2012-06-30 Thread Nicholas Ball

Hey all,

Trying to figure out the best way to perform an atomic operation across
multiple cores on the same solr instance, i.e. a multi-core environment.

An example would be to move a set of docs from one core onto another core
and ensure that a softcommit is done at the exact same time. If one were to
fail, so would the other.
Obviously this would probably require some customization, but I wanted to
know what the best way to tackle this would be and where I should be
looking in the source.

Many thanks for the help in advance,
Nicholas a.k.a. incunix


Re: Atomic Multicore Operations - E.G. Move Docs

2012-06-30 Thread Lance Norskog
Index all documents to both cores, but do not call commit until both
report that indexing worked. If one of the cores throws an exception,
call roll back on both cores.
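
A rough SolrJ (4.x) sketch of that idea -- untested, the core URLs and the doc
are made up, and note it only narrows the failure window rather than making
the pair of commits truly atomic:

    import org.apache.solr.client.solrj.SolrServer;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.common.SolrInputDocument;

    public class MoveDoc {
        public static void main(String[] args) throws Exception {
            SolrServer source = new HttpSolrServer("http://localhost:8983/solr/core1");
            SolrServer target = new HttpSolrServer("http://localhost:8983/solr/core2");

            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "42"); // hypothetical doc being moved

            try {
                target.add(doc);         // copy to the target core
                source.deleteById("42"); // remove from the source core
                target.commit();         // commit both only after both ops succeeded
                source.commit();
            } catch (Exception e) {
                target.rollback();       // rollback discards uncommitted changes
                source.rollback();       // on each core
                throw e;
            }
        }
    }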

On Sat, Jun 30, 2012 at 6:50 AM, Nicholas Ball
nicholas.b...@nodelay.com wrote:

 Hey all,

 Trying to figure out the best way to perform an atomic operation across
 multiple cores on the same solr instance, i.e. a multi-core environment.

 An example would be to move a set of docs from one core onto another core
 and ensure that a softcommit is done at the exact same time. If one were to
 fail, so would the other.
 Obviously this would probably require some customization, but I wanted to
 know what the best way to tackle this would be and where I should be
 looking in the source.

 Many thanks for the help in advance,
 Nicholas a.k.a. incunix



-- 
Lance Norskog
goks...@gmail.com


Multicore master-slaver replication in Solr Cloud

2012-06-19 Thread fabio curti
Hi,
I tried to set up a multicore master-slave replication in SolrCloud, as
described in this post:
http://pulkitsinghal.blogspot.it/2011/09/multicore-master-slave-replication-in.html
but I get the following problem:

SEVERE: Error while trying to recover.
org.apache.solr.client.solrj.SolrServerException: Server at
http://myserver:8983/solr was not found (404).
at
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:372)
at
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:182)
at
org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:192)
at
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:303)
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:213)
Jun 19, 2012 3:17:49 PM org.apache.solr.cloud.RecoveryStrategy doRecovery
SEVERE: Recovery failed - trying again...

The infrastructure will look like:

   - Solr-Instance-A
  - master1 (indexes changes for shard1)
  - slave1-master2 (replicates changes from shard2)
  - slave2-master2 (replicates changes from shard2)
   - Solr-Instance-B
  - master2 (indexes changes for shard2)
  - slave1-master1 (replicates changes from shard1)
  - slave2-master1 (replicates changes from shard1)


Any idea?


Re: Multicore master-slaver replication in Solr Cloud

2012-06-19 Thread Mark Miller

On Jun 19, 2012, at 9:59 AM, fabio curti wrote:

 Hi,
 I tried to set up a multicore master-slave replication in SolrCloud, as
 described in this post:
 http://pulkitsinghal.blogspot.it/2011/09/multicore-master-slave-replication-in.html
 but I get the following problem:
 
 SEVERE: Error while trying to recover.
 org.apache.solr.client.solrj.SolrServerException: Server at
 http://myserver:8983/solr was not found (404).
 at
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:372)
 at
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:182)
 at
 org.apache.solr.cloud.RecoveryStrategy.sendPrepRecoveryCmd(RecoveryStrategy.java:192)
 at
 org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:303)
 at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:213)
 Jun 19, 2012 3:17:49 PM org.apache.solr.cloud.RecoveryStrategy doRecovery
 SEVERE: Recovery failed - trying again...
 
 The infrastructure will look like:
 
   - Solr-Instance-A
  - master1 (indexes changes for shard1)
  - slave1-master2 (replicates changes from shard2)
  - slave2-master2 (replicates changes from shard2)
   - Solr-Instance-B
  - master2 (indexes changes for shard2)
  - slave1-master1 (replicates changes from shard1)
  - slave2-master1 (replicates changes from shard1)
 
 
 Any idea?


You don't want to explicitly set up master-slave replication when using
SolrCloud. Just define an empty replication handler (and make sure you have
the other required config) and the rest is automatic.

http://wiki.apache.org/solr/SolrCloud#Required_Config
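
For reference, the handler itself can be as bare as this in solrconfig.xml
(the wiki page above lists the rest of the required config, e.g. the update
log):

    <requestHandler name="/replication" class="solr.ReplicationHandler" />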

- Mark Miller
lucidimagination.com


Re: Multicore Issue - Server Restart

2012-05-30 Thread Sujatha Arun
Yes, that is correct.

Regards
Sujatha

On Tue, May 29, 2012 at 7:23 PM, lboutros boutr...@gmail.com wrote:

 Hi Sujatha,

 does each webapp have its own solr home?

 Ludovic.

 -
 Jouve
 France.



Re: Multicore Issue - Server Restart

2012-05-30 Thread Siva Kommuri
Hi Sujatha,

Which version of Solr are you using?

Best Wishes,
Siva

On Wed, May 30, 2012 at 12:22 AM, Sujatha Arun suja.a...@gmail.com wrote:

 Yes, that is correct.

 Regards
 Sujatha

 On Tue, May 29, 2012 at 7:23 PM, lboutros boutr...@gmail.com wrote:

  Hi Sujatha,
 
  does each webapp have its own solr home?
 
  Ludovic.
 
  -
  Jouve
  France.
 



Re: Multicore Issue - Server Restart

2012-05-30 Thread Sujatha Arun
solr 1.3

Regards
Sujatha

On Wed, May 30, 2012 at 8:26 PM, Siva Kommuri snv.komm...@gmail.com wrote:

 Hi Sujatha,

 Which version of Solr are you using?

 Best Wishes,
 Siva

 On Wed, May 30, 2012 at 12:22 AM, Sujatha Arun suja.a...@gmail.com
 wrote:

  Yes, that is correct.
 
  Regards
  Sujatha
 
  On Tue, May 29, 2012 at 7:23 PM, lboutros boutr...@gmail.com wrote:
 
   Hi Sujatha,
  
   does each webapp have its own solr home?
  
   Ludovic.
  
   -
   Jouve
   France.
  
 



Re: Multicore Issue - Server Restart

2012-05-29 Thread lboutros
Hi Sujatha,

does each webapp have its own solr home?

Ludovic.

-
Jouve
France.
--
View this message in context: 
http://lucene.472066.n3.nabble.com/Multicore-Issue-Server-Restart-tp3986516p3986602.html
Sent from the Solr - User mailing list archive at Nabble.com.

