[jira] [Created] (SOLR-13240) UTILIZENODE action results in an exception

2019-02-10 Thread Hendrik Haddorp (JIRA)
Hendrik Haddorp created SOLR-13240:
--

 Summary: UTILIZENODE action results in an exception
 Key: SOLR-13240
 URL: https://issues.apache.org/jira/browse/SOLR-13240
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.6
Reporter: Hendrik Haddorp


When I invoke the UTILIZENODE action, the REST call fails like this after it
has moved a few replicas:
{
  "responseHeader":{
    "status":500,
    "QTime":40220},
  "Operation utilizenode caused exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException: Comparison method violates its general contract!",
  "exception":{
    "msg":"Comparison method violates its general contract!",
    "rspCode":-1},
  "error":{
    "metadata":[
      "error-class","org.apache.solr.common.SolrException",
      "root-error-class","org.apache.solr.common.SolrException"],
    "msg":"Comparison method violates its general contract!",
    "trace":"org.apache.solr.common.SolrException: Comparison method violates its general contract!
  at org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)
  at org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)
  at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)
  at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)
  at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)
  at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)
  at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
  at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
  at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
  at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
  at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
  at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
  at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
  at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
  at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
  at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
  at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
  at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
  at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
  at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
  at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
  at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
  at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
  at org.eclipse.jetty.server.Server.handle(Server.java:531)
  at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
  at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
  at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
  at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
  at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
  at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
  at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
  at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
  at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
  at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
  at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)
  at java.lang.Thread.run(Thread.java:748)",
    "code":500}}

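For context: this error is raised by Java's TimSort when it detects a Comparator that violates transitivity or symmetry. A minimal, hypothetical Java example that can trigger the same IllegalArgumentException (unrelated to the actual comparator used by the autoscaling code):

{code:java}
import java.util.ArrayList;
import java.util.List;

// Hypothetical, minimal illustration -- NOT the Solr comparator. The lambda
// below is intentionally non-transitive (0 < 1, 1 < 2, but 2 < 0), so
// TimSort may detect the inconsistency during a merge and throw
// "Comparison method violates its general contract!".
public class BrokenComparatorDemo {
    public static void main(String[] args) {
        List<Integer> values = new ArrayList<>();
        for (int i = 0; i < 64; i++) {
            values.add(i % 3);
        }
        values.sort((a, b) -> (a + 1) % 3 == b ? -1 : (a.equals(b) ? 0 : 1));
        System.out.println(values);
    }
}
{code}

The trace above only shows Solr's generic error handling, so the offending comparator has to be somewhere in the UTILIZENODE/autoscaling code path.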
The logs show this:

[jira] [Created] (SOLR-13239) CollectionStateWatcher reports new collections before they really exist

2019-02-10 Thread Hendrik Haddorp (JIRA)
Hendrik Haddorp created SOLR-13239:
--

 Summary: CollectionStateWatcher reports new collections before 
they really exist
 Key: SOLR-13239
 URL: https://issues.apache.org/jira/browse/SOLR-13239
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: clients - java
Affects Versions: 7.6
Reporter: Hendrik Haddorp


A CollectionStateWatcher registered via 
org.apache.solr.common.cloud.ZkStateReader.registerCloudCollectionsListener 
gets invoked as soon as the CloudSolrClient detects a new collection. This is 
based on having a watch on the /collections znode. When the 
CollectionStateWatcher tries to read out information about the new collection 
via zkStateReader.getClusterState() there is a good chance that no 
DocCollection can be found. The reason is that a DocCollection is built from
the state.json znode below the collection. As this znode is below the
collection znode, it necessarily gets created a bit later, so there is a race
condition.

One can run into the same problem if one tries to register a
CollectionStateWatcher via ZkStateReader.registerCollectionStateWatcher
straight after a new collection is found. The watcher is then invoked with the
DocCollection set to null, as it cannot find the DocCollection object either.
Null, however, is supposed to indicate that the collection has been deleted.
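A client-side workaround, as a rough sketch (assuming SolrJ 7.x; the timeout and the surrounding method are illustrative), is to wait until the collection's state.json has actually been published before reading the cluster state:

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.solr.common.cloud.DocCollection;
import org.apache.solr.common.cloud.ZkStateReader;

// Sketch: after the /collections watch fires, wait for the DocCollection to
// become non-null instead of reading the cluster state immediately. This only
// works around the race described above; it is not a fix.
public class NewCollectionHandler {
    void onNewCollection(ZkStateReader zkStateReader, String collection) throws Exception {
        zkStateReader.waitForState(collection, 30, TimeUnit.SECONDS,
            (liveNodes, docCollection) -> docCollection != null);
        DocCollection coll = zkStateReader.getClusterState().getCollection(collection);
        // ... use coll ...
    }
}
{code}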

see also the mail thread about this:
https://www.mail-archive.com/search?l=solr-u...@lucene.apache.org=subject:%22Re%5C%3A+CloudSolrClient+getDocCollection%22=newest=1






[jira] [Commented] (SOLR-12467) allow to change the autoscaling configuration via SolrJ

2018-06-08 Thread Hendrik Haddorp (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16505921#comment-16505921 ]

Hendrik Haddorp commented on SOLR-12467:


If updating the config straight in ZK is not desired, it would still be nice if
SolrJ offered an update request. Given that this is not done all the time, this
would be fine for me as well. Right now the SolrJ API just looks a bit
incomplete ;-)
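Until a dedicated request type exists, something like the following sketch should work from SolrJ by going through the generic V2 API (hedged: the set-cluster-policy payload is only an example, and V2Request usage may differ slightly between versions):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.request.V2Request;

// Sketch: update the autoscaling config through the generic V2 API, since
// SolrJ offers no dedicated request type. The payload is an example command.
public class AutoscalingUpdate {
    void updateAutoscaling(SolrClient client) throws Exception {
        String payload = "{ \"set-cluster-policy\": ["
            + "{\"replica\": \"<2\", \"shard\": \"#EACH\", \"node\": \"#ANY\"}"
            + "] }";
        new V2Request.Builder("/cluster/autoscaling")
            .withMethod(SolrRequest.METHOD.POST)
            .withPayload(payload)
            .build()
            .process(client);
    }
}
{code}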

> allow to change the autoscaling configuration via SolrJ
> ---
>
> Key: SOLR-12467
> URL: https://issues.apache.org/jira/browse/SOLR-12467
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.3.1
>Reporter: Hendrik Haddorp
>Priority: Minor
>
> Using SolrJ's CloudSolrClient it is possible to read the autoscaling 
> configuration:
> cloudSolrClient.getZkStateReader().getAutoScalingConfig()
> There is however no way to update it. One can only read out the list of live
> nodes and then make a call to Solr using, for example, the LBHttpSolrClient.
> Given that the config is stored in ZooKeeper, and thus could be written
> directly even when no Solr instance is running, this is not optimal.






[jira] [Created] (SOLR-12467) allow to change the autoscaling configuration via SolrJ

2018-06-08 Thread Hendrik Haddorp (JIRA)
Hendrik Haddorp created SOLR-12467:
--

 Summary: allow to change the autoscaling configuration via SolrJ
 Key: SOLR-12467
 URL: https://issues.apache.org/jira/browse/SOLR-12467
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
Affects Versions: 7.3.1
Reporter: Hendrik Haddorp


Using SolrJ's CloudSolrClient it is possible to read the autoscaling 
configuration:
cloudSolrClient.getZkStateReader().getAutoScalingConfig()

There is however no way to update it. One can only read out the list of live
nodes and then make a call to Solr using, for example, the LBHttpSolrClient.
Given that the config is stored in ZooKeeper, and thus could be written
directly even when no Solr instance is running, this is not optimal.
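For illustration, the read path and the direct-ZooKeeper workaround could look roughly like this (a sketch; the /autoscaling.json znode path and the AutoScalingConfig import location are assumptions based on 7.x):

{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.cloud.SolrZkClient;

// Sketch: reading works through SolrJ today; the write goes straight to
// ZooKeeper, which is exactly the workaround this issue would like to avoid.
public class AutoscalingConfigAccess {
    void readAndWrite(CloudSolrClient cloudSolrClient, String newConfigJson) throws Exception {
        AutoScalingConfig config = cloudSolrClient.getZkStateReader().getAutoScalingConfig();
        System.out.println("current autoscaling config: " + config);

        SolrZkClient zkClient = cloudSolrClient.getZkStateReader().getZkClient();
        // "/autoscaling.json" is assumed to be the znode the config lives under.
        zkClient.setData("/autoscaling.json",
            newConfigJson.getBytes(StandardCharsets.UTF_8), true);
    }
}
{code}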






[jira] [Commented] (SOLR-12224) there is no API to read collection properties

2018-04-15 Thread Hendrik Haddorp (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16438831#comment-16438831 ]

Hendrik Haddorp commented on SOLR-12224:


An option could also be to control how much detail CLUSTERSTATUS returns. But
getting the information via COLLECTIONPROP is also fine with me.
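For reference, this is roughly how CLUSTERSTATUS is fetched via SolrJ today (a sketch; a parameter to control the level of detail does not exist):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.CollectionAdminResponse;

// Sketch: fetch CLUSTERSTATUS and print the cluster section, which is where
// the collection properties would naturally show up if they were included.
public class ClusterStatusDemo {
    void printClusterStatus(SolrClient client) throws Exception {
        CollectionAdminResponse rsp = CollectionAdminRequest.getClusterStatus().process(client);
        System.out.println(rsp.getResponse().get("cluster"));
    }
}
{code}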

> there is no API to read collection properties
> -
>
> Key: SOLR-12224
> URL: https://issues.apache.org/jira/browse/SOLR-12224
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3
>Reporter: Hendrik Haddorp
>Priority: Major
>
> Solr 7.3 added the COLLECTIONPROP API call 
> (https://lucene.apache.org/solr/guide/7_3/collections-api.html#collectionprop)
>  that allows setting arbitrary properties on a collection. There is however no
> API call that returns the data. The only option is to manually read out the
> collectionprops.json file in ZK below the collection.
> Options could be to give the COLLECTIONPROP command an option to retrieve
> properties, to have a special command to list the properties, and/or to have
> the properties listed in the clusterstatus output for a collection.
> It would be great if this were also supported in SolrJ.






[jira] [Commented] (SOLR-12224) there is no API to read collection properties

2018-04-15 Thread Hendrik Haddorp (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16438639#comment-16438639 ]

Hendrik Haddorp commented on SOLR-12224:


As the clusterstatus output already contains the cluster properties, it would
make sense to me to also include the collection properties.

> there is no API to read collection properties
> -
>
> Key: SOLR-12224
> URL: https://issues.apache.org/jira/browse/SOLR-12224
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3
>Reporter: Hendrik Haddorp
>Priority: Major
>
> Solr 7.3 added the COLLECTIONPROP API call 
> (https://lucene.apache.org/solr/guide/7_3/collections-api.html#collectionprop)
>  that allows setting arbitrary properties on a collection. There is however no
> API call that returns the data. The only option is to manually read out the
> collectionprops.json file in ZK below the collection.
> Options could be to give the COLLECTIONPROP command an option to retrieve
> properties, to have a special command to list the properties, and/or to have
> the properties listed in the clusterstatus output for a collection.
> It would be great if this were also supported in SolrJ.






[jira] [Commented] (SOLR-12224) there is no API to read collection properties

2018-04-14 Thread Hendrik Haddorp (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-12224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16438392#comment-16438392 ]

Hendrik Haddorp commented on SOLR-12224:


There is actually a way to read the collection properties, using
CloudSolrClient.getZkStateReader().getCollectionProperties(collection).
A REST API would still be nice, but for me personally this is already enough.
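A small sketch of that workaround (assuming SolrJ 7.3+, where getCollectionProperties was added):

{code:java}
import java.util.Map;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

// Sketch: read the collection properties through ZkStateReader instead of
// a (not yet existing) REST call.
public class CollectionPropsReader {
    void printCollectionProperties(CloudSolrClient client, String collection) {
        Map<String, String> props =
            client.getZkStateReader().getCollectionProperties(collection);
        props.forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
{code}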

> there is no API to read collection properties
> -
>
> Key: SOLR-12224
> URL: https://issues.apache.org/jira/browse/SOLR-12224
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.3
>Reporter: Hendrik Haddorp
>Priority: Major
>
> Solr 7.3 added the COLLECTIONPROP API call 
> (https://lucene.apache.org/solr/guide/7_3/collections-api.html#collectionprop)
>  that allows setting arbitrary properties on a collection. There is however no
> API call that returns the data. The only option is to manually read out the
> collectionprops.json file in ZK below the collection.
> Options could be to give the COLLECTIONPROP command an option to retrieve
> properties, to have a special command to list the properties, and/or to have
> the properties listed in the clusterstatus output for a collection.
> It would be great if this were also supported in SolrJ.






[jira] [Created] (SOLR-12224) there is no API to read collection properties

2018-04-14 Thread Hendrik Haddorp (JIRA)
Hendrik Haddorp created SOLR-12224:
--

 Summary: there is no API to read collection properties
 Key: SOLR-12224
 URL: https://issues.apache.org/jira/browse/SOLR-12224
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 7.3
Reporter: Hendrik Haddorp


Solr 7.3 added the COLLECTIONPROP API call 
(https://lucene.apache.org/solr/guide/7_3/collections-api.html#collectionprop) 
that allows setting arbitrary properties on a collection. There is however no
API call that returns the data. The only option is to manually read out the
collectionprops.json file in ZK below the collection.

Options could be to give the COLLECTIONPROP command an option to retrieve
properties, to have a special command to list the properties, and/or to have
the properties listed in the clusterstatus output for a collection.

It would be great if this were also supported in SolrJ.
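Until such an API exists, reading the file through the ZK client that SolrJ already exposes could look roughly like this (a sketch; the znode path follows the description above):

{code:java}
import java.nio.charset.StandardCharsets;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

// Sketch of the manual workaround: read the collectionprops.json znode
// below the collection directly from ZooKeeper.
public class CollectionPropsZkReader {
    String readCollectionProps(CloudSolrClient client, String collection) throws Exception {
        byte[] data = client.getZkStateReader().getZkClient()
            .getData("/collections/" + collection + "/collectionprops.json", null, null, true);
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }
}
{code}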






[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2018-04-08 Thread Hendrik Haddorp (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16429676#comment-16429676 ]

Hendrik Haddorp commented on SOLR-6305:
---

[~elyograg] When you store a file in HDFS it ends up being stored in blocks, and
these blocks get replicated to multiple nodes for increased safety. You can
configure a default block replication factor, but you can also create files with
a specific replication factor. The problem in Solr is that some parts take the
replication factor as it is defined on the HDFS name node while others take the
client-side default. The client-side default is 3 unless you tell Solr that you
have a local HDFS configuration (using solr.hdfs.confdir). So when you set the
default HDFS replication factor (done on the name node) to 1, Solr still ends up
creating files that want to have a replication factor of 3. In case you are
using a small HDFS test setup that only has one node (data node, to be exact),
your blocks are still being created, but they are under-replicated.

When you tell SolrCloud to use a replicationFactor of 3, Solr creates 3 copies
of the collection files in HDFS, just like it does in the local case. So yes, in
your case one could say that the data exists 9 times. One could also see Solr on
HDFS like Solr on a shared RAID filesystem. In RAID, however, all files are
replicated in the same way, while in HDFS the replication factor of the files
can differ and be changed dynamically.

In my opinion the only problem is that Solr does not create all files in HDFS in
the same way: some parts pick the replication factor as defined on the HDFS name
node while others don't.
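For illustration, this is roughly how a per-file replication factor is chosen with the plain HDFS client API (a sketch of the API semantics; paths and values are examples, not the Solr code paths discussed here):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: every HDFS file can carry its own replication factor. If the
// caller does not pass one, the client-side default (dfs.replication,
// normally 3) wins -- which is the inconsistency described above.
public class ReplicationDemo {
    void writeWithReplication(Configuration conf) throws Exception {
        FileSystem fs = FileSystem.get(conf);
        short replication = 1;  // explicit per-file factor
        try (FSDataOutputStream out = fs.create(new Path("/solr/example/file.dat"), replication)) {
            out.writeBytes("hello");
        }
    }
}
{code}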

> Ability to set the replication factor for index files created by 
> HDFSDirectoryFactory
> -
>
> Key: SOLR-6305
> URL: https://issues.apache.org/jira/browse/SOLR-6305
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
> Environment: hadoop-2.2.0
>Reporter: Timothy Potter
>Priority: Major
> Attachments: 
> 0001-OIQ-23224-SOLR-6305-Fixed-SOLR-6305-by-reading-the-r.patch
>
>
> HdfsFileWriter doesn't allow us to create files in HDFS with a different 
> replication factor than the configured DFS default because it uses: 
> {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}}
> Since we have two forms of replication going on when using 
> HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication 
> factor for the Solr directories to a lower value than the default. I realize 
> this might reduce the chance of data locality but since Solr cores each have 
> their own path in HDFS, we should give operators the option to reduce it.
> My original thinking was to just use Hadoop setrep to customize the 
> replication factor, but that's a one-time shot and doesn't affect new files 
> created. For instance, I did:
> {{hadoop fs -setrep -R 1 solr49/coll1}}
> My default dfs replication is set to 3 ^^ I'm setting it to 1 just as an 
> example
> Then added some more docs to the coll1 and did:
> {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}}
> 3 <-- should be 1
> So it looks like new files don't inherit the repfact from their parent 
> directory.
> Not sure if we need to go as far as allowing different replication factor per 
> collection but that should be considered if possible.
> I looked at the Hadoop 2.2.0 code to see if there was a way to work through 
> this using the Configuration object but nothing jumped out at me ... and the 
> implementation for getServerDefaults(path) is just:
>   public FsServerDefaults getServerDefaults(Path p) throws IOException {
>     return getServerDefaults();
>   }
> Path is ignored ;-)






[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2018-04-04 Thread Hendrik Haddorp (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16425097#comment-16425097 ]

Hendrik Haddorp commented on SOLR-6305:
---

I had just tested Solr 7.2.1 with an HDFS setup that had only one node. There I
also saw that some files got created with a replication factor of 3, as that is
the client-side default if nothing else is configured, while others got created
with a factor of 1, as configured on the name node. I then started Solr with the
system property "solr.hdfs.confdir" pointing to a directory containing just the
file "hdfs-site.xml". In that file I set "dfs.replication" to 1. After that I
did not find any files in HDFS anymore that had a replication factor of 3.
It would however be nice if the replication factor could consistently be
controlled from the client side.
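The hdfs-site.xml used for this workaround would look something like this sketch:

{code:xml}
<?xml version="1.0"?>
<!-- Placed in the directory that solr.hdfs.confdir points to, so the HDFS
     client inside Solr picks up a consistent replication default. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
{code}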

> Ability to set the replication factor for index files created by 
> HDFSDirectoryFactory
> -
>
> Key: SOLR-6305
> URL: https://issues.apache.org/jira/browse/SOLR-6305
> Project: Solr
>  Issue Type: Improvement
>  Components: hdfs
> Environment: hadoop-2.2.0
>Reporter: Timothy Potter
>Priority: Major
> Attachments: 
> 0001-OIQ-23224-SOLR-6305-Fixed-SOLR-6305-by-reading-the-r.patch
>
>
> HdfsFileWriter doesn't allow us to create files in HDFS with a different 
> replication factor than the configured DFS default because it uses: 
> {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}}
> Since we have two forms of replication going on when using 
> HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication 
> factor for the Solr directories to a lower value than the default. I realize 
> this might reduce the chance of data locality but since Solr cores each have 
> their own path in HDFS, we should give operators the option to reduce it.
> My original thinking was to just use Hadoop setrep to customize the 
> replication factor, but that's a one-time shot and doesn't affect new files 
> created. For instance, I did:
> {{hadoop fs -setrep -R 1 solr49/coll1}}
> My default dfs replication is set to 3 ^^ I'm setting it to 1 just as an 
> example
> Then added some more docs to the coll1 and did:
> {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}}
> 3 <-- should be 1
> So it looks like new files don't inherit the repfact from their parent 
> directory.
> Not sure if we need to go as far as allowing different replication factor per 
> collection but that should be considered if possible.
> I looked at the Hadoop 2.2.0 code to see if there was a way to work through 
> this using the Configuration object but nothing jumped out at me ... and the 
> implementation for getServerDefaults(path) is just:
>   public FsServerDefaults getServerDefaults(Path p) throws IOException {
>     return getServerDefaults();
>   }
> Path is ignored ;-)






[jira] [Created] (SOLR-11707) allow to configure the HDFS block size

2017-11-30 Thread Hendrik Haddorp (JIRA)
Hendrik Haddorp created SOLR-11707:
--

 Summary: allow to configure the HDFS block size
 Key: SOLR-11707
 URL: https://issues.apache.org/jira/browse/SOLR-11707
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: hdfs
Reporter: Hendrik Haddorp
Priority: Minor


Currently index files are created in HDFS with the block size that is defined
on the namenode. For that, the HdfsFileWriter reads out the config from the
server and then specifies the size (and replication factor) in the
FileSystem.create call.

For the write.lock files things work slightly differently. These are created
by the HdfsLockFactory without specifying a block size (or replication
factor). This results in a default being picked by the HDFS client, which is
128MB.

So currently files are created with different block sizes if the namenode is
configured to something other than 128MB. It would be good if Solr allowed
configuring the block size to be used. This is especially useful if the Solr
admin is not the HDFS admin, and if you have different applications using HDFS
that have different requirements for their block size.
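A sketch of the difference in terms of the HDFS client API (illustrative paths and values, not the actual Solr code):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: one create path passes an explicit block size (and replication
// factor), the other relies on the HDFS client defaults (e.g. 128MB blocks).
public class BlockSizeDemo {
    void illustrate(Configuration conf) throws Exception {
        FileSystem fs = FileSystem.get(conf);

        // Explicit: roughly what HdfsFileWriter does with the values read
        // from the server defaults.
        fs.create(new Path("/solr/index/_0.cfs"), true, 4096,
                  (short) 3, 256L * 1024 * 1024).close();

        // Implicit: what a plain create (as in the write.lock case) does;
        // the client-side defaults apply.
        fs.create(new Path("/solr/index/write.lock")).close();
    }
}
{code}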






[jira] [Updated] (SOLR-10092) HDFS: AutoAddReplica fails

2017-02-24 Thread Hendrik Haddorp (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hendrik Haddorp updated SOLR-10092:
---
Attachment: SOLR-10092.patch

This new patch works correctly for me with legacyCloud=false, but I must admit
that I do not fully understand what the code tries to do.

The flow in Solr is like this:
1) OverseerAutoReplicaFailoverThread decides to create a new core to replace a
failed one
2) CoreContainer.create(String coreName, Path instancePath, Map<String, String>
parameters, boolean newCollection) gets invoked
3) CoreContainer.create(CoreDescriptor dcore, boolean publishState, boolean
newCollection)
4) ZkController.preRegister
5) ZkController.checkStateInZk

If legacyCloud mode is on, nothing at all happens in step 5, and one check in
step 2 is also not made.

When legacyCloud mode is on things work, but if it is off the code fails in
step 5 because no shardId is set in the create-core call done from the
Overseer. This I fixed in my first patch, so that the shard id/name gets passed
into the core creation. The code in step 5 checks whether the core creation
data matches what is stored in ZK. This can however not work in this case, as
the "baseUrl" will of course not match, since we are trying to replace the
core with a new one. So I now removed the baseUrl comparison, and everything
seems to work fine with legacyCloud on and off. Given that I don't really
understand what check is done here and why it is only done when
legacyCloud=false, my fix might not be correct and should perhaps be done
differently. But my patched version works at least ;-)

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
> Attachments: SOLR-10092.patch, SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






[jira] [Commented] (SOLR-10092) HDFS: AutoAddReplica fails

2017-02-23 Thread Hendrik Haddorp (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15880651#comment-15880651 ]

Hendrik Haddorp commented on SOLR-10092:


Sorry for the spam, looks like I tested my patch incorrectly last time. Solr
6.3 on HDFS with legacyCloud=false fails with the stated exception, but just
using my patch does not fix that. The exception is gone but then I get:
org.apache.solr.common.SolrException: coreNodeName core_node1 exists, but does
not match expected node or core name:
DocCollection(test.test3//collections/test.test3/state.json/50)={

}
  at org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1562)
  at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1488)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:837)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:779)

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
> Attachments: SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






[jira] [Commented] (SOLR-10092) HDFS: AutoAddReplica fails

2017-02-23 Thread Hendrik Haddorp (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15880421#comment-15880421 ]

Hendrik Haddorp commented on SOLR-10092:


OK, tested again: plain Solr 6.3 works in the default legacyCloud mode, but
once I set legacyCloud=false it fails with the exception above.
After some more testing I now actually have a different problem: I do not get
the exception from above but the following message, and no failover happens:
2017-02-23 13:29:07.876 WARN  (OverseerHdfsCoreFailoverThread-25449593782534174-search-integration.rtp.raleigh.ibm.com:9005_solr-n_15) [   ] o.a.s.c.OverseerAutoReplicaFailoverThread Could not find dataDir or ulogDir in cluster state.

Sorry, all quite inconsistent at the moment.

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
> Attachments: SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






[jira] [Commented] (SOLR-10092) HDFS: AutoAddReplica fails

2017-02-22 Thread Hendrik Haddorp (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15880065#comment-15880065 ]

Hendrik Haddorp commented on SOLR-10092:


For a setup using a local filesystem I did not see this code being triggered at
all. But I was just trying to reproduce this on an unpatched installation, and
for some reason it looks like it worked now as well, so I am going to recheck.
From what I saw in the code, it looked like the code requires the shard id/name
to be set, which is also what the exception says, but the
OverseerAutoReplicaFailoverThread is not doing that.

Regarding the instance dir: I'm seeing this in the logs:
2017-02-23 06:43:13.968 INFO  (qtp1224347463-12) [c:test.test s:shard1 r:core_node3 x:test.test_shard1_replica1] o.a.s.c.SolrCore [[test.test_shard1_replica1] ] Opening new SolrCore at [/var/opt/solr/test.test_shard1_replica1], dataDir=[hdfs://my-hdfs-namenode:8000/solr/test.test/core_node3/data/]
So even for HDFS there is local information. The folder only contains a
core.properties file, which seems to contain everything required to determine
the replica. Not sure why this is not taken from ZooKeeper though.
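For reference, such a core.properties typically looks something like this (illustrative values, not taken from the failing setup):

{code}
# Illustrative core.properties of a SolrCloud replica (example values).
name=test.test_shard1_replica1
collection=test.test
shard=shard1
coreNodeName=core_node3
{code}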

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
> Attachments: SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






[jira] [Updated] (SOLR-10092) HDFS: AutoAddReplica fails

2017-02-21 Thread Hendrik Haddorp (JIRA)

 [ https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hendrik Haddorp updated SOLR-10092:
---
Attachment: SOLR-10092.patch

With this patch the automatic replica failover worked for me on Solr 6.3.

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
> Attachments: SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






[jira] [Created] (SOLR-10092) HDFS: AutoAddReplica fails

2017-02-03 Thread Hendrik Haddorp (JIRA)
Hendrik Haddorp created SOLR-10092:
--

 Summary: HDFS: AutoAddReplica fails
 Key: SOLR-10092
 URL: https://issues.apache.org/jira/browse/SOLR-10092
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: hdfs
Affects Versions: 6.3
Reporter: Hendrik Haddorp


OverseerAutoReplicaFailoverThread fails to create replacement core with this
exception:

o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new replica on http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://...:9000/solr: Error CREATEing SolrCore 'test2.collection-09_shard1_replica1': Unable to create core [test2.collection-09_shard1_replica1] Caused by: No shard id for CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
  at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
  at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
  at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
  at org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)

also see this mail thread about the issue:
https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






[jira] [Created] (SOLR-8042) SolrJ CollectionAdminRequest.Reload fails

2015-09-11 Thread Hendrik Haddorp (JIRA)
Hendrik Haddorp created SOLR-8042:
-

 Summary: SolrJ CollectionAdminRequest.Reload fails
 Key: SOLR-8042
 URL: https://issues.apache.org/jira/browse/SOLR-8042
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.3
Reporter: Hendrik Haddorp
Priority: Minor


The following code fails stating that the collection name must be set:
CollectionAdminRequest.Reload reloadReq = new CollectionAdminRequest.Reload();
reloadReq.process(client, collection);

call stack is:

Caused by: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://xxx.xxx.xxx.xxx:10001/solr: Missing required parameter: name
  at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
  at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
  at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
  at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1085)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
  at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)

This can be prevented by adding
reloadReq.setCollectionName(collection);
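For completeness, a working sketch (against SolrJ 5.3):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

// Working variant: the collection name must be set on the request itself;
// the collection argument of process(client, collection) is not applied to
// the "name" parameter of the RELOAD call.
public class ReloadDemo {
    void reload(SolrClient client, String collection) throws Exception {
        CollectionAdminRequest.Reload reloadReq = new CollectionAdminRequest.Reload();
        reloadReq.setCollectionName(collection);
        reloadReq.process(client);
    }
}
{code}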


