IndexWriter has closed

2019-03-27 Thread Aroop Ganguly
Hi Everyone

My indexing jobs are failing with “this IndexWriter has closed” errors.
This is a Solr 7.5 setup with an NRT index.

Digging deeper into the logs, I see exceptions like the one below.
Any idea what could have caused this?

o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: java.io.IOException: Input/output error
at org.apache.solr.update.TransactionLog.writeCommit(TransactionLog.java:477)
at org.apache.solr.update.UpdateLog.postCommit(UpdateLog.java:833)
at org.apache.solr.update.UpdateLog.preCommit(UpdateLog.java:817)
at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:669)
at org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:93)
at org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:68)
at org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1959)
at org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1935)
at org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processCommit(LogUpdateProcessorFactory.java:160)
at org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:62)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:531)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
at

Questions on nested child document split using the "split" parameter

2019-03-27 Thread Zheng Lin Edwin Yeo
Hi,

I am trying to find more information regarding this change in Solr 8.0.0,
but I could not really understand what it means, and I couldn't find much
information on it, especially since the Solr Reference Guide for 8.0.0 has
not been released yet.

SOLR-12633: When JSON
data is sent to Solr with nested child documents split using the "split"
parameter, the child docs will now be associated to their parents by the
field/label string used in the JSON instead of anonymously. Most users
probably won't notice the distinction since the label is lost any way
unless special fields are in the schema. This choice used to be toggleable
with an internal/expert "anonChildDocs" parameter flag which is now gone.
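
For concreteness, the change applies to requests along these lines (a hedged
sketch; the collection name, fields, and values below are placeholders, not
taken from the issue). With SOLR-12633, the child documents sent under
"orders" are associated with their parent via that "orders" label rather than
anonymously:

# split=/|/orders indexes the top-level document plus each element of
# "orders" as a nested child document
curl 'http://localhost:8983/solr/mycoll/update/json/docs?split=/|/orders&commit=true' \
  -H 'Content-Type: application/json' \
  -d '{
    "id": "parent-1",
    "name_s": "Alice",
    "orders": [
      { "id": "child-1", "total_i": 20 },
      { "id": "child-2", "total_i": 35 }
    ]
  }'

If I read the issue text correctly, the label is only kept when the special
nest-related fields are in the schema, so for most schemas the query output
should be unchanged.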

Will it affect the query or output of the nested child documents?

Regards,
Edwin


Re: coreNodeName core_node2 does not exist in shard shard1, ignore the exception if the replica was deleted Solr 8.0.0

2019-03-27 Thread Zheng Lin Edwin Yeo
Normally I do not remove the version-2 folder in ZooKeeper when making
changes to schema.xml or solrconfig.xml.
I just upconfig the new configuration to ZooKeeper and then reload the
collection in Solr.
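
For reference, that sequence looks roughly like this (a sketch only; the
ZooKeeper hosts, paths, and collection name are taken from the commands
quoted below, and the reload uses the standard Collections API):

# Push the updated config to ZooKeeper, overwriting the named configset
zkcli.bat -zkhost 192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183 -cmd upconfig -confdir F:/SolrCloud-8-0-0/solr-8.0.0-shard-1/server/solr/configsets/product/conf -confname product

# Reload the collection so every replica picks up the new config
curl "http://192.168.100.222:7992/solr/admin/collections?action=RELOAD&name=product"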

Regards,
Edwin

On Wed, 27 Mar 2019 at 20:53, vishal patel 
wrote:

> The first time, I successfully made the product collection using the GUI
> admin panel in Solr 8.0.0.
> After some changes in schema.xml, I removed the version-2 folder from
> ZooKeeper and ran upconfig again using the command below:
>
> zkcli.bat -zkhost 192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183 -cmd upconfig -confdir F:/SolrCloud-8-0-0/solr-8.0.0-shard-1/server/solr/configsets/product/conf -confname product
>
> Solr starts, but I cannot find the product collection, and I see the error
> below:
>
> 2019-03-27 11:48:17.276 ERROR (coreContainerWorkExecutor-2-thread-1-processing-n:192.168.100.222:7992_solr) [   ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on startup
> org.apache.solr.cloud.ZkController$NotInClusterStateException: coreNodeName core_node2 does not exist in shard shard1, ignore the exception if the replica was deleted
> at org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1830) ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
> at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1729) ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
> at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1182) ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
> at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:695) ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
> at org.apache.solr.core.CoreContainer$$Lambda$258/1816397102.call(Unknown Source) ~[?:?]
> at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) ~[metrics-core-3.2.6.jar:3.2.6]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_45]
> at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:10]
> at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$46/1324551716.run(Unknown Source) [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:10]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_45]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_45]
> at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]
>
> Is it necessary to remove the version-2 folder in ZooKeeper whenever
> schema.xml changes? Is there any way to have the coreNodeName (core_node2)
> created automatically on upconfig and Solr start?
> 
>


Re: Problem with white space or special characters in function queries

2019-03-27 Thread shamik
I'm using Solr 7.5, here's the query:

q=line=language:"english"=Source2:("topicarticles"+OR+"sfdcarticles")=url,title=ADSKFeature:"CUI+(Command)"^7=recip(ms(NOW/DAY,PublishDate),3.16e-11,1,1)^2+if(termfreq(ADSKFeature,'CUI
(Command)'),log(CaseCount),sqrt(CaseCount))=10



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Document deletes by ID on sharded collection

2019-03-27 Thread Brian Panulla
I have a near-real-time search implementation on SolrCloud 7.5, and I'm
having an issue with deleting documents from a sharded collection.

I'm deleting documents right now using a query for the document ID, and
everything seems to be working properly, aside from the fact that deletes
are *really slow*. Delete requests come in via a message queue, and
overnight, when most of our document removals happen, the queue backs up,
sometimes taking several hours to clear a few thousand documents.

{
  "delete": {
    "query": "+id:12345678"
  }
}

We've seen suggestions that document deletes by ID are preferred for
performance, so I built an implementation that works correctly with a
single unsharded core on a standalone Solr server. But when I try deletes
by ID on our sharded development SolrCloud cluster, the deletes are
unreliable; it's not clear to me whether they happen at all. I've had some
luck sending the delete directly to the node where the document lives, but
even that doesn't seem to work consistently. The cluster is sharded using
the compositeId router on a second field common to all of our records, if
that matters.

{
  "delete": {
    "id": "12345678"
  }
}

Does anyone have any advice about what *should* work? Is there some
contributing factor I'm missing, like how we're sharding? We're trying to
move away from full DataImportHandler reindexes as a means of removing
deleted documents, but we need a way to delete specific documents directly
and efficiently before that can become a reality.
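
A hedged sketch of a variant that may matter here (an assumption on my part,
not a confirmed diagnosis): since the collection routes on a second field,
the ID alone may not tell Solr which shard holds the document, and the
delete-by-id request can carry the routing value explicitly via the _route_
request parameter. The collection name and the "tenant-a" routing value
below are placeholders:

# Delete by ID, telling Solr which route key the document was indexed under
curl 'http://localhost:8983/solr/mycoll/update?_route_=tenant-a&commit=true' \
  -H 'Content-Type: application/json' \
  -d '{ "delete": { "id": "12345678" } }'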


coreNodeName core_node2 does not exist in shard shard1, ignore the exception if the replica was deleted Solr 8.0.0

2019-03-27 Thread vishal patel
The first time, I successfully made the product collection using the GUI admin
panel in Solr 8.0.0.
After some changes in schema.xml, I removed the version-2 folder from ZooKeeper
and ran upconfig again using the command below:

zkcli.bat -zkhost 192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183 -cmd upconfig -confdir F:/SolrCloud-8-0-0/solr-8.0.0-shard-1/server/solr/configsets/product/conf -confname product

Solr starts, but I cannot find the product collection, and I see the error
below:

2019-03-27 11:48:17.276 ERROR (coreContainerWorkExecutor-2-thread-1-processing-n:192.168.100.222:7992_solr) [   ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on startup
org.apache.solr.cloud.ZkController$NotInClusterStateException: coreNodeName core_node2 does not exist in shard shard1, ignore the exception if the replica was deleted
at org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1830) ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1729) ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1182) ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:695) ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
at org.apache.solr.core.CoreContainer$$Lambda$258/1816397102.call(Unknown Source) ~[?:?]
at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) ~[metrics-core-3.2.6.jar:3.2.6]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_45]
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:10]
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$46/1324551716.run(Unknown Source) [solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:10]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_45]

Is it necessary to remove the version-2 folder in ZooKeeper whenever
schema.xml changes? Is there any way to have the coreNodeName (core_node2)
created automatically on upconfig and Solr start?



Re: SolrCore Initialization Failures in Solr 8.0.0

2019-03-27 Thread vishal patel
Solr 6.1.0 folder structure:

F:\SolrCloud-6.1.0\solr-6.1.0-shard-1\server\solr\
--- product
------ conf
--------- schema.xml
--------- solrconfig.xml
------ core.properties
--- solr.xml
--- zoo.cfg

Note: core.properties contains:

name=product
shard=shard1
collection=product

upconfig command:

zkcli.bat -cmd bootstrap -solrhome F:\SolrCloud-6.1.0\solr-6.1.0-shard-1\server\solr -z 192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183

Solr start command:

solr start -p 7992

*

Now I am upgrading to Solr 8.0.0 and have made a folder structure like this:

F:\SolrCloud-8-0-0\solr-8.0.0-shard-1\server\solr
--- product
------ data
------ core.properties
--- configsets
------ product
--------- conf
------------ schema.xml
------------ solrconfig.xml
--- solr.xml
--- zoo.cfg

Note: core.properties contains:
collection.configName=product
name=product
shard=shard1
collection=product
coreNodeName=core_node2

upconfig command:

zkcli.bat -zkhost 192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183 -cmd upconfig -confdir F:/SolrCloud-8-0-0/solr-8.0.0-shard-1/server/solr/configsets/product/conf -confname product

Solr start command:

solr start -p 7992

It works if I make the folder structure like this.

Why can't I keep the same folder structure when upgrading to Solr 8.0.0? Is
it necessary to create a configset? We came up successfully without creating
a configset in Solr 6.1.0.


From: Erick Erickson 
Sent: Tuesday, March 26, 2019 8:16 PM
To: solr-user@lucene.apache.org
Subject: Re: SolrCore Initialization Failures in Solr 8.0.0

How did you create your “product” collection? It looks like you have the config 
resident on your local disk and _not_ on ZooKeeper.

Your configset has to be in ZooKeeper when you create your collection of 
course. Do not try to individually edit the core.properties files, that’ll be 
very difficult to do correctly.

And you’ll have to completely re-index anyway since Lucene 8.x will not open an 
index created with 6.x, so why not just start completely anew?
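
To make that concrete, a sketch of the supported path (the ZooKeeper hosts,
Solr port, and names are taken from the commands earlier in the thread;
treat the exact flags as illustrative):

# 1. Upload the configset to ZooKeeper first
zkcli.bat -zkhost 192.168.100.222:3181,192.168.100.222:3182,192.168.100.222:3183 -cmd upconfig -confdir F:/SolrCloud-8-0-0/solr-8.0.0-shard-1/server/solr/configsets/product/conf -confname product

# 2. Let Solr create the collection (and its core.properties) from that configset
curl "http://192.168.100.222:7992/solr/admin/collections?action=CREATE&name=product&numShards=1&replicationFactor=1&collection.configName=product"

# 3. Re-index from scratch, since Lucene 8.x will not open a 6.x index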

Best,
Erick

> On Mar 26, 2019, at 6:49 AM, vishal patel  
> wrote:
>
>
> My previous Solr version was 6.1.0 and ZooKeeper version was 3.4.6. Now I am
> upgrading to Solr 8.0.0 and ZooKeeper 3.4.13.
> In Solr 6.1.0 my collection (product) folder is server\solr\product:
> conf
> schema.xml
> solrconfig.xml
> core.properties
>
> In core.properties:
> name=product
> shard=shard1
> collection=product
>
> In Solr 8.0.0, I changed only solrconfig.xml and kept everything else the
> same. I created 3 ZooKeeper nodes and one shard. First I start all 3
> ZooKeeper nodes and then start Solr, and the ERROR below comes up:
>
>
> 2019-03-26 13:06:49.367 ERROR (coreLoadExecutor-13-thread-1-processing-n:192.168.100.145:7991_solr) [c:product s:shard1  x:product] o.a.s.c.ZkController
> org.apache.solr.common.SolrException: Could not find collection : product
> at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118) ~[solr-solrj-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:10]
> at org.apache.solr.core.CoreContainer.repairCoreProperty(CoreContainer.java:1854) ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
> at org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1790) ~[solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
> at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1729) [solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
> at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1182) [solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
> at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:695) [solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
> at org.apache.solr.core.CoreContainer$$Lambda$259/523051393.call(Unknown Source) [solr-core-8.0.0.jar:8.0.0 2ae4746365c1ee72a0047ced7610b2096e438979 - jimczi - 2019-03-08 12:06:06]
> at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197) [metrics-core-3.2.6.jar:3.2.6]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_45]
> at

Re: Autoscaling rack awareness

2019-03-27 Thread Richard Goodman
So I managed to get this working by the following policy:

{"replica":"<2","shard":"#EACH","sysprop.racklocation": "#EACH"}


On Tue, 26 Mar 2019 at 14:03, Richard Goodman 
wrote:

> Hi, I'm currently running into some trouble trying to set up rack
> awareness as a cluster policy.
>
> I run my cluster with 3-way replication; currently a few collection-shards
> have 4 replicas, which show up as violations under my currently set policies:
>
> {
>   "set-cluster-policy": [
>     {
>       "replica": "<2",
>       "shard": "#EACH",
>       "node": "#ANY"
>     },
>     {
>       "replica": 0,
>       "freedisk": "<50",
>       "strict": false
>     }
>   ]
> }
>
> {
>   "collection": "collection_name_one",
>   "shard": "shard12",
>   "node": "1.2.3.4:8080_solr",
>   "tagKey": "1.2.3.4:8080_solr",
>   "violation": {
>     "replica": "org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  \"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
>     "delta": 1
>   },
>   "clause": {
>     "replica": "<2",
>     "shard": "#EACH",
>     "node": "#ANY",
>     "collection": "collection_name_one"
>   }
> },
>
> I want to implement rack awareness as a policy. There are examples of
> availability-zone policies, but not really anything for rack awareness.
> Currently we set this when creating a collection:
>
> sysprop.racklocation:*,shard:*,replica:<2
>
> So I tried to implement this via the following policy rule
>
> {"replica": "<2", "shard": "#EACH", "sysprop.racklocation": "*"}
>
> However, this hasn't worked *(because with the extra replication I have
> at the moment, it would certainly raise this as a violation)*, so I'm not
> sure how I can implement this.
> I saw the following example in the 7.7 docs:
> {"replica":"#ALL", "shard":"shard1", "sysprop.rack":"730"}
> However, this forces all replicas of shard1 to belong to a certain rack,
> which I don't want. I'd rather the replicas have free choice of where they
> are placed, provided that if two replicas appear on the same racklocation,
> it raises a violation.
>
> Has anyone had experience of setting something like this up, or have any
> advice / see an error in my policy set up?
>
> *(Currently running Solr 7.4)*
>
> Thanks,
> Richard
>


-- 

Richard Goodman | Data Infrastructure Engineer

richa...@brandwatch.com


NEW YORK   | BOSTON  | BRIGHTON   | LONDON   | BERLIN   |   STUTTGART   |
SINGAPORE   | SYDNEY | PARIS





bin/post command not working when run from crontab

2019-03-27 Thread Carsten Agger
I'm working with a script where I want to send a command to delete all
elements in an index; notably:


/opt/solr/bin/post -c  -d  "*:*"


When run interactively, this works fine.

However, when run automatically as a cron job, it gives this interesting
output:


Unrecognized argument:   "*:*"

If this was intended to be a data file, it does not exist relative to /root

The culprit seems to be these lines, 143-148:

if [[ ! -t 0 ]]; then
  MODE="stdin"
else
  # when no stdin exists and -d specified, the rest of the arguments
  # are assumed to be strings to post as-is
  MODE="args"

This code seems to do the opposite of what the comment says: it sets
MODE="stdin" if stdin is NOT a terminal (which is the case under cron), but
if stdin IS a terminal it assumes the rest of the args can be posted as-is.

On the other hand, if the condition is reversed, my command fails
interactively but not when run as a cron job. Both options are, of course,
unsatisfactory.

It /will/ actually work in both cases if the command to delete the contents
of the index is instead written as:

echo "*:*" | /opt/solr/bin/post -c departments -d
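
Another cron-safe option that sidesteps the stdin detection entirely is to
send the delete straight to the update handler over HTTP; a sketch assuming
a local Solr and the same "departments" collection:

# Equivalent delete-by-query issued directly with curl, so bin/post's
# tty check never comes into play
curl 'http://localhost:8983/solr/departments/update?commit=true' \
  -H 'Content-Type: application/json' \
  -d '{ "delete": { "query": "*:*" } }'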


I've seen this bug in Solr 7.5.0 and 7.7.1. Should I report it as a bug,
or is there an easy explanation?


Best

Carsten Agger


-- 
Carsten Agger

Chief Technologist
Magenta ApS
Skt. Johannes Allé 2
8000 Århus C

Tlf  +45 5060 1476
http://www.magenta-aps.dk
carst...@magenta-aps.dk


Re: Re: solr _route_ key now working

2019-03-27 Thread Jay Potharaju
I was reading the debug info incorrectly; it is working as expected.
Thanks for the help.
Thanks
Jay Potharaju
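
For the archive, a sketch of the pattern discussed downthread (the collection
name and values are placeholders; the ":" in the route prefix is
percent-encoded in the URL):

# Documents with composite IDs like "123:456!789" are routed by the
# "123:456!" prefix; passing the same prefix as _route_ restricts the
# query to the shard(s) that hold it
curl 'http://localhost:8983/solr/mycoll/select?q=*:*&_route_=123%3A456!'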



On Tue, Mar 26, 2019 at 10:58 PM Jay Potharaju 
wrote:

> Edwin, I tried escaping the special characters but it does not seem to
> work. I am using 7.7.
> Thanks Jeremy for the example.
> id:123:456!789
> I can see that all the data for the same key is co-located in the same
> shard when querying the shard directly:
> fq=fieldB:456=shard1.
>
> Any suggestions why that would not be working when using _route_ to query
> the documents.
>
> Thanks
> Jay Potharaju
>
>
>
> On Tue, Mar 26, 2019 at 5:58 AM Branham, Jeremy (Experis) <
> jb...@allstate.com> wrote:
>
>> Jay –
>> I’m not familiar with the document ID format you mention [having a “:” in
>> the prefix], but it looks similar to the composite ID routing I’m using.
>> Document Id format: “a/1!id”
>>
>> Then I can use a _route_ value of “a/1!” when querying.
>>
>> Example Doc IDs:
>> a/1!768456
>> a/1!563575
>> b/1!456234
>> b/1!245698
>>
>> The document ID prefix “x/1!” tells Solr to spread the documents over ½
>> of the available shards. When querying with the same value for _route_ it
>> will retrieve documents only from those shards.
>>
>> Jeremy Branham
>> jb...@allstate.com
>>
>> On 3/25/19, 9:13 PM, "Zheng Lin Edwin Yeo"  wrote:
>>
>> Hi,
>>
>> Sorry, didn't see that you have an exclamation mark in your query as
>> well.
>> You will need to escape the exclamation mark as well.
>> So you can try it with the query _route_=“123\:456\!”
>>
>> You can refer to the message in the link on which special characters
>> requires escaping.
>>
>> https://stackoverflow.com/questions/21914956/which-special-characters-need-escaping-in-a-solr-query
>>
>> By the way, which Solr version are you using?
>>
>> Regards,
>> Edwin
>>
>> On Tue, 26 Mar 2019 at 01:12, Jay Potharaju 
>> wrote:
>>
>> > That did not work. Any other suggestions?
>> > My id is 123:456!678
>> > Tried running the query as _route_=“123\:456!” but it didn’t give the
>> > expected results.
>> > Thanks
>> > Jay
>> >
>> > > On Mar 24, 2019, at 8:30 PM, Zheng Lin Edwin Yeo <
>> edwinye...@gmail.com>
>> > wrote:
>> > >
>> > > Hi,
>> > >
>> > > The character ":" is a special character, so it requires escaping
>> > > during the search.
>> > > You can try to search with query _route_="a\:b!".
>> > >
>> > > Regards,
>> > > Edwin
>> > >
>> > >> On Mon, 25 Mar 2019 at 07:59, Jay Potharaju <jspothar...@gmail.com> wrote:
>> > >>
>> > >> Hi,
>> > >> My document id has a format of a:b!c, when I query _route_="a:b!" it
>> > >> does not return any values. Any suggestions?
>> > >>
>> > >> Thanks
>> > >> Jay Potharaju
>> > >>
>> >
>>
>>
>>