[jira] [Commented] (SOLR-13297) StressHdfsTest and HdfsUnloadDistributedZkTest fail more often after Hadoop 3

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795873#comment-16795873
 ] 

ASF subversion and git services commented on SOLR-13297:


Commit 6064b03ac6f61b077dcfc6262568e466f2bf6467 in lucene-solr's branch 
refs/heads/branch_8x from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6064b03ac ]

SOLR-13330: Improve HDFS tests

Related JIRAs:
* SOLR-11010
* SOLR-11381
* SOLR-12040
* SOLR-13297

Changes:
* Consolidate hdfs configuration into HdfsTestUtil
* Ensure socketTimeout long enough for HDFS tests
* Ensure HdfsTestUtil.getClientConfiguration used in tests
* Replace deprecated HDFS calls
* Use try-with-resources to ensure closing of HDFS resources

Signed-off-by: Kevin Risden 
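[Editor's note] The last change in the list above relies on Java's try-with-resources; a minimal, self-contained sketch of the idiom follows. StubFileSystem is a hypothetical stand-in for org.apache.hadoop.fs.FileSystem, which is likewise Closeable and is closed the same way.

```java
// Sketch of the try-with-resources idiom used to close HDFS handles.
// StubFileSystem is a hypothetical stand-in for org.apache.hadoop.fs.FileSystem.
public class TryWithResourcesSketch {
    static class StubFileSystem implements AutoCloseable {
        boolean closed = false;

        void listStatus() {
            if (closed) throw new IllegalStateException("Filesystem closed");
        }

        @Override
        public void close() {
            closed = true;
        }
    }

    public static void main(String[] args) {
        StubFileSystem fs = new StubFileSystem();
        // The resource is closed automatically when the try block exits,
        // even if listStatus() were to throw.
        try (StubFileSystem handle = fs) {
            handle.listStatus(); // ok: still open
        }
        System.out.println("closed after try block: " + fs.closed);
    }
}
```

Compared with a manual finally block, this guarantees the close even on early returns or exceptions, which is exactly the leak the HDFS tests were hitting.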


> StressHdfsTest and HdfsUnloadDistributedZkTest fail more often after Hadoop 3
> ------------------------------------------------------------------------------
>
> Key: SOLR-13297
> URL: https://issues.apache.org/jira/browse/SOLR-13297
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: builds.a.o_solr_badapple_nightly_8x_7.txt.gz, 
> builds.a.o_solr_badapple_nightly_master_51.txt.gz, 
> builds.a.o_solr_nightly_8x_36.txt.gz, builds.a.o_solr_repro_2965.txt.gz
>
>
> As reported by [~hossman] on SOLR-9515, StressHdfsTest and 
> HdfsUnloadDistributedZkTest are failing more regularly. We should figure out 
> what is going on here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11381) HdfsDirectoryFactory throws NPE on cleanup because file system has been closed

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795871#comment-16795871
 ] 

ASF subversion and git services commented on SOLR-11381:


Commit 6064b03ac6f61b077dcfc6262568e466f2bf6467 in lucene-solr's branch 
refs/heads/branch_8x from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6064b03ac ]

SOLR-13330: Improve HDFS tests

Related JIRAs:
* SOLR-11010
* SOLR-11381
* SOLR-12040
* SOLR-13297

Changes:
* Consolidate hdfs configuration into HdfsTestUtil
* Ensure socketTimeout long enough for HDFS tests
* Ensure HdfsTestUtil.getClientConfiguration used in tests
* Replace deprecated HDFS calls
* Use try-with-resources to ensure closing of HDFS resources

Signed-off-by: Kevin Risden 


> HdfsDirectoryFactory throws NPE on cleanup because file system has been closed
> ------------------------------------------------------------------------------
>
> Key: SOLR-11381
> URL: https://issues.apache.org/jira/browse/SOLR-11381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Shalin Shekhar Mangar
>Priority: Trivial
> Fix For: 8.1, master (9.0)
>
>
> I saw this happening on tests related to autoscaling. The old directory clean 
> up is triggered on core close in a separate thread. This can cause a race 
> condition where the filesystem is closed before the cleanup starts running. 
> Then an NPE is thrown and cleanup fails.
> Fixing the NPE is simple, but I think this is a real bug where old directories 
> can be left around on HDFS. I don't know enough about HDFS to investigate 
> further. Leaving it here for interested people to pitch in.
> {code}
> 105029 ERROR 
> (OldIndexDirectoryCleanupThreadForCore-control_collection_shard1_replica_n1) 
> [n:127.0.0.1:58542_ c:control_collection s:shard1 r:core_node2 
> x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Error 
> checking for old index directories to clean-up.
> java.io.IOException: Filesystem closed
>   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:808)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2083)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2069)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:791)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:860)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1517)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1557)
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.cleanupOldIndexDirectories(HdfsDirectoryFactory.java:540)
>   at 
> org.apache.solr.core.SolrCore.lambda$cleanupOldIndexDirectories$32(SolrCore.java:3019)
>   at java.lang.Thread.run(Thread.java:745)
> 105030 ERROR 
> (OldIndexDirectoryCleanupThreadForCore-control_collection_shard1_replica_n1) 
> [n:127.0.0.1:58542_ c:control_collection s:shard1 r:core_node2 
> x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore Failed to cleanup 
> old index directories for core control_collection_shard1_replica_n1
> java.lang.NullPointerException
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.cleanupOldIndexDirectories(HdfsDirectoryFactory.java:558)
>   at 
> org.apache.solr.core.SolrCore.lambda$cleanupOldIndexDirectories$32(SolrCore.java:3019)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
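[Editor's note] The "simple" NPE fix the report alludes to is a null check on the directory listing. A hypothetical, self-contained sketch follows; the method names and return values are illustrative, not Solr's actual code.

```java
// Hypothetical sketch of a null-safe cleanup pass. listOldIndexDirs models
// the listStatus() call that yields nothing once the filesystem is closed;
// cleanupOldIndexDirectories returns how many directories it would remove.
public class CleanupGuardSketch {
    static String[] listOldIndexDirs(boolean filesystemClosed) {
        // On a closed filesystem the listing is unavailable; surface that
        // as null rather than throwing from deep inside the cleanup thread.
        if (filesystemClosed) {
            return null;
        }
        return new String[] {"index.20190101000000000", "index.20190201000000000"};
    }

    static int cleanupOldIndexDirectories(boolean filesystemClosed) {
        String[] oldDirs = listOldIndexDirs(filesystemClosed);
        if (oldDirs == null) {
            // Guard that prevents the NullPointerException in the trace above:
            // a missing listing means there is nothing we can clean up now.
            return 0;
        }
        return oldDirs.length; // each entry would be deleted here
    }

    public static void main(String[] args) {
        System.out.println(cleanupOldIndexDirectories(true));  // filesystem already closed
        System.out.println(cleanupOldIndexDirectories(false)); // normal case
    }
}
```

Note the reporter's caveat: silently skipping cleanup can still leave stale directories on HDFS, which is presumably why the eventual work (SOLR-13330) also tightened filesystem lifetime handling rather than only adding a null check.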






[jira] [Commented] (SOLR-13330) Improve HDFS tests

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795869#comment-16795869
 ] 

ASF subversion and git services commented on SOLR-13330:


Commit 6064b03ac6f61b077dcfc6262568e466f2bf6467 in lucene-solr's branch 
refs/heads/branch_8x from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6064b03ac ]

SOLR-13330: Improve HDFS tests

Related JIRAs:
* SOLR-11010
* SOLR-11381
* SOLR-12040
* SOLR-13297

Changes:
* Consolidate hdfs configuration into HdfsTestUtil
* Ensure socketTimeout long enough for HDFS tests
* Ensure HdfsTestUtil.getClientConfiguration used in tests
* Replace deprecated HDFS calls
* Use try-with-resources to ensure closing of HDFS resources

Signed-off-by: Kevin Risden 


> Improve HDFS tests
> ------------------
>
> Key: SOLR-13330
> URL: https://issues.apache.org/jira/browse/SOLR-13330
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13330.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Currently, HDFS tests fail intermittently. Some of these failures are due to 
> resource constraints on the Apache Jenkins machines; others are due to poor 
> test assumptions.
> Related JIRAs:
> * SOLR-11010
> * SOLR-11381
> * SOLR-12040
> * SOLR-13297
> Changes:
> * Consolidate hdfs configuration into HdfsTestUtil
> * Ensure socketTimeout long enough for HDFS tests
> * Ensure HdfsTestUtil.getClientConfiguration used in tests
> * Replace deprecated HDFS calls
> * Use try-with-resources to ensure closing of HDFS resources






[jira] [Commented] (SOLR-12040) HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every jenkins run

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795872#comment-16795872
 ] 

ASF subversion and git services commented on SOLR-12040:


Commit 6064b03ac6f61b077dcfc6262568e466f2bf6467 in lucene-solr's branch 
refs/heads/branch_8x from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6064b03ac ]

SOLR-13330: Improve HDFS tests

Related JIRAs:
* SOLR-11010
* SOLR-11381
* SOLR-12040
* SOLR-13297

Changes:
* Consolidate hdfs configuration into HdfsTestUtil
* Ensure socketTimeout long enough for HDFS tests
* Ensure HdfsTestUtil.getClientConfiguration used in tests
* Replace deprecated HDFS calls
* Use try-with-resources to ensure closing of HDFS resources

Signed-off-by: Kevin Risden 


> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every 
> jenkins run
> ----------------------------------------------------------------------------
>
> Key: SOLR-12040
> URL: https://issues.apache.org/jira/browse/SOLR-12040
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Hoss Man
>Assignee: Mark Miller
>Priority: Major
>
> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 are thin subclasses of 
> BasicDistributedZkTest & BasicDistributedZk2 that just tweak the setup to use 
> HDFS, and only run @Nightly.
> These tests fail virtually every time they are run by jenkins - either 
> at the method level or at the suite level (due to threadleaks, timeouts, etc.), 
> yet their non-HDFS superclasses virtually never fail.
> Per the jenkins failure-rate reports I've set up, here are the failure rates of 
> all tests matching "BasicDistributed" for the past 7 days (note that the 
> non-HDFS tests aren't even listed, because they haven't failed at all even 
> though they are non-nightly and have cumulatively run ~750 times in the past 
> 7 days):
> http://fucit.org/solr-jenkins-reports/failure-report.html
> {noformat}
> "Suite?","Class","Method","Rate","Runs","Fails"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","","53.3","15","8"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","test","18.75","16","3"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","","46.1538461538462","13","6"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","test","7.69230769230769","13","1"
> {noformat}






[jira] [Updated] (SOLR-11876) InPlace update fails when resolving from Tlog if schema has a required field

2019-03-19 Thread Ishan Chattopadhyaya (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-11876:

Fix Version/s: (was: 7.7.1)

> InPlace update fails when resolving from Tlog if schema has a required field
> ----------------------------------------------------------------------------
>
> Key: SOLR-11876
> URL: https://issues.apache.org/jira/browse/SOLR-11876
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: OSX High Sierra
> java version "1.8.0_152"
> Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
> Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
>Reporter: Justin Deoliveira
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11876.patch, SOLR-11876.patch, SOLR-11876.patch, 
> SOLR-11876.patch
>
>
> The situation is doing an in-place update of a non-indexed/stored numeric doc 
> values field multiple times in fast succession. The schema 
> has a required field ("name") in it. On the third update request, the update 
> fails complaining "missing required field: name". It seems this happens 
> when the update document is being resolved from the TLog.
> To reproduce:
> 1. Setup a schema that has:
>     - A required field other than the uniquekey field, in my case it's called 
> "name"
>     - A numeric doc values field suitable for in place update (non-indexed, 
> non-stored), in my case it's called "likes"
> 2. Execute an in place update of the document a few times in fast succession:
> {noformat}
> for i in `seq 10`; do
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/core1/update' --data-binary '
> [{
>  "id": "1",
>  "likes": { "inc": 1 }
> }]'
> done{noformat}
> The resulting stack trace:
> {noformat}
> 2018-01-19 21:27:26.644 ERROR (qtp1873653341-14) [ x:core1] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: [doc=1] 
> missing required field: name
>  at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:233)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.toSolrDoc(RealTimeGetComponent.java:767)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.resolveFullDocument(RealTimeGetComponent.java:423)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocumentFromTlog(RealTimeGetComponent.java:551)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocument(RealTimeGetComponent.java:609)
>  at 
> org.apache.solr.update.processor.AtomicUpdateDocumentMerger.doInPlaceUpdateMerge(AtomicUpdateDocumentMerger.java:253)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.getUpdatedDocument(DistributedUpdateProcessor.java:1279)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1008)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:617)
>  at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> 

[jira] [Assigned] (SOLR-11381) HdfsDirectoryFactory throws NPE on cleanup because file system has been closed

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-11381:
---

Assignee: Kevin Risden

> HdfsDirectoryFactory throws NPE on cleanup because file system has been closed
> ------------------------------------------------------------------------------
>
> Key: SOLR-11381
> URL: https://issues.apache.org/jira/browse/SOLR-11381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Shalin Shekhar Mangar
>Assignee: Kevin Risden
>Priority: Trivial
> Fix For: 8.1, master (9.0)
>
>
> I saw this happening on tests related to autoscaling. The old directory clean 
> up is triggered on core close in a separate thread. This can cause a race 
> condition where the filesystem is closed before the cleanup starts running. 
> Then an NPE is thrown and cleanup fails.
> Fixing the NPE is simple, but I think this is a real bug where old directories 
> can be left around on HDFS. I don't know enough about HDFS to investigate 
> further. Leaving it here for interested people to pitch in.
> {code}
> 105029 ERROR 
> (OldIndexDirectoryCleanupThreadForCore-control_collection_shard1_replica_n1) 
> [n:127.0.0.1:58542_ c:control_collection s:shard1 r:core_node2 
> x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Error 
> checking for old index directories to clean-up.
> java.io.IOException: Filesystem closed
>   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:808)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2083)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2069)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:791)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:860)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1517)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1557)
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.cleanupOldIndexDirectories(HdfsDirectoryFactory.java:540)
>   at 
> org.apache.solr.core.SolrCore.lambda$cleanupOldIndexDirectories$32(SolrCore.java:3019)
>   at java.lang.Thread.run(Thread.java:745)
> 105030 ERROR 
> (OldIndexDirectoryCleanupThreadForCore-control_collection_shard1_replica_n1) 
> [n:127.0.0.1:58542_ c:control_collection s:shard1 r:core_node2 
> x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore Failed to cleanup 
> old index directories for core control_collection_shard1_replica_n1
> java.lang.NullPointerException
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.cleanupOldIndexDirectories(HdfsDirectoryFactory.java:558)
>   at 
> org.apache.solr.core.SolrCore.lambda$cleanupOldIndexDirectories$32(SolrCore.java:3019)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Assigned] (SOLR-11010) OutOfMemoryError in tests when using HDFS BlockCache

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-11010:
---

Assignee: Kevin Risden

> OutOfMemoryError in tests when using HDFS BlockCache
> ----------------------------------------------------
>
> Key: SOLR-11010
> URL: https://issues.apache.org/jira/browse/SOLR-11010
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Affects Versions: 7.0, 8.0
>Reporter: Andrzej Bialecki 
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>
> Spin-off from SOLR-10878: the newly added {{MoveReplicaHDFSTest}} fails on 
> jenkins (but rarely locally) with the following stacktrace:
> {code}
>[junit4]   2> 13619 ERROR (qtp1885193567-48) [n:127.0.0.1:50324_solr 
> c:movereplicatest_coll s:shard2 r:core_node4 
> x:movereplicatest_coll_shard2_replica_n2] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 
> 'movereplicatest_coll_shard2_replica_n2': Unable to create core 
> [movereplicatest_coll_shard2_replica_n2] Caused by: Direct buffer memory
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.create(CoreContainer.java:938)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$164(CoreAdminOperation.java:91)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
>[junit4]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:745)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:726)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:507)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:322)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>[junit4]   2>  at 
> org.eclipse.jetty.server.Server.handle(Server.java:534)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>[junit4]   2>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>[junit4]   2>  at 
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>[junit4]   2>  at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>[junit4]   2>  at 
> 

[jira] [Updated] (SOLR-11010) OutOfMemoryError in tests when using HDFS BlockCache

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11010:

Fix Version/s: master (9.0)
   8.1

> OutOfMemoryError in tests when using HDFS BlockCache
> ----------------------------------------------------
>
> Key: SOLR-11010
> URL: https://issues.apache.org/jira/browse/SOLR-11010
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Affects Versions: 7.0, 8.0
>Reporter: Andrzej Bialecki 
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>
> Spin-off from SOLR-10878: the newly added {{MoveReplicaHDFSTest}} fails on 
> jenkins (but rarely locally) with the following stacktrace:
> {code}
>[junit4]   2> 13619 ERROR (qtp1885193567-48) [n:127.0.0.1:50324_solr 
> c:movereplicatest_coll s:shard2 r:core_node4 
> x:movereplicatest_coll_shard2_replica_n2] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 
> 'movereplicatest_coll_shard2_replica_n2': Unable to create core 
> [movereplicatest_coll_shard2_replica_n2] Caused by: Direct buffer memory
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.create(CoreContainer.java:938)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$164(CoreAdminOperation.java:91)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
>[junit4]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:745)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:726)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:507)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:322)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>[junit4]   2>  at 
> org.eclipse.jetty.server.Server.handle(Server.java:534)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>[junit4]   2>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>[junit4]   2>  at 
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>[junit4]   2>  at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>[junit4]   2>  at 
> 

[jira] [Updated] (SOLR-7360) Enable HDFS HA NameNode setup and fail-over testing added in SOLR-7311.

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-7360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-7360:
---
Component/s: Hadoop Integration

> Enable HDFS HA NameNode setup and fail-over testing added in SOLR-7311.
> -----------------------------------------------------------------------
>
> Key: SOLR-7360
> URL: https://issues.apache.org/jira/browse/SOLR-7360
> Project: Solr
>  Issue Type: Test
>  Components: Hadoop Integration, hdfs
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>







[jira] [Updated] (SOLR-10222) Remove per-core blockcache

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-10222:

Component/s: Hadoop Integration

> Remove per-core blockcache
> --------------------------
>
> Key: SOLR-10222
> URL: https://issues.apache.org/jira/browse/SOLR-10222
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Mike Drob
>Priority: Major
>
> We should clean up some of the details around the use of the block cache.
> Can we deprecate the per-core blockcache usage in Solr 6.x and remove it from 
> 7? Or does that need to happen in 7 and 8?
> Maybe it makes sense to move the configuration to solr.xml at the same time.






[jira] [Updated] (SOLR-11707) allow to configure the HDFS block size

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11707:

Component/s: Hadoop Integration

> allow to configure the HDFS block size
> --------------------------------------
>
> Key: SOLR-11707
> URL: https://issues.apache.org/jira/browse/SOLR-11707
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Hendrik Haddorp
>Priority: Minor
>
> Currently, index files are created in HDFS with the block size that is defined 
> on the namenode. For that, the HdfsFileWriter reads the config from the 
> server and then specifies the size (and replication factor) in the 
> FileSystem.create call.
> For the write.lock files, things work slightly differently. These are 
> created by the HdfsLockFactory without specifying a block size (or 
> replication factor). This results in a default being picked by the HDFS 
> client, which is 128MB.
> So currently, files are created with different block sizes if the 
> namenode is configured to something other than 128MB. It would be good if Solr 
> allowed configuring the block size to be used. This is especially useful 
> if the Solr admin is not the HDFS admin, and if you have different 
> applications using HDFS with different requirements for their block size.
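[Editor's note] To make the asymmetry concrete, a stand-alone sketch follows. It has no Hadoop dependency; effectiveBlockSize only models the behavior the issue describes and is not the real FileSystem.create API.

```java
// Models the block-size selection described above: index files pass the
// namenode's configured size explicitly, while write.lock files pass nothing
// and fall back to the HDFS client default of 128MB.
public class BlockSizeSketch {
    static final long CLIENT_DEFAULT = 128L * 1024 * 1024;

    // explicitBlockSize <= 0 means "caller did not specify", as with
    // HdfsLockFactory creating write.lock files.
    static long effectiveBlockSize(long explicitBlockSize) {
        return explicitBlockSize > 0 ? explicitBlockSize : CLIENT_DEFAULT;
    }

    public static void main(String[] args) {
        long namenodeDefault = 256L * 1024 * 1024; // e.g. namenode configured to 256MB

        // HdfsFileWriter path: reads server defaults and passes them explicitly.
        long indexFileBlockSize = effectiveBlockSize(namenodeDefault);
        // write.lock path: no explicit size, so the client default wins.
        long lockFileBlockSize = effectiveBlockSize(0);

        // Whenever the namenode default differs from 128MB, the two disagree.
        System.out.println(indexFileBlockSize == lockFileBlockSize);
    }
}
```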






[jira] [Updated] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-6305:
---
Component/s: Hadoop Integration

> Ability to set the replication factor for index files created by 
> HDFSDirectoryFactory
> -
>
> Key: SOLR-6305
> URL: https://issues.apache.org/jira/browse/SOLR-6305
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
> Environment: hadoop-2.2.0
>Reporter: Timothy Potter
>Priority: Major
> Attachments: 
> 0001-OIQ-23224-SOLR-6305-Fixed-SOLR-6305-by-reading-the-r.patch
>
>
> HdfsFileWriter doesn't allow us to create files in HDFS with a different 
> replication factor than the configured DFS default because it uses: 
> {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}}
> Since we have two forms of replication going on when using 
> HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication 
> factor for the Solr directories to a lower value than the default. I realize 
> this might reduce the chance of data locality but since Solr cores each have 
> their own path in HDFS, we should give operators the option to reduce it.
> My original thinking was to just use Hadoop setrep to customize the 
> replication factor, but that's a one-time shot and doesn't affect new files 
> created. For instance, I did:
> {{hadoop fs -setrep -R 1 solr49/coll1}}
> My default dfs replication is set to 3 ^^ I'm setting it to 1 just as an 
> example
> Then added some more docs to the coll1 and did:
> {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}}
> 3 <-- should be 1
> So it looks like new files don't inherit the repfact from their parent 
> directory.
> Not sure if we need to go as far as allowing a different replication factor 
> per collection but that should be considered if possible.
> I looked at the Hadoop 2.2.0 code to see if there was a way to work through 
> this using the Configuration object but nothing jumped out at me ... and the 
> implementation for getServerDefaults(path) is just:
>   public FsServerDefaults getServerDefaults(Path p) throws IOException {
> return getServerDefaults();
>   }
> Path is ignored ;-)






[jira] [Updated] (SOLR-9958) The FileSystem used by HdfsBackupRepository gets closed before the backup completes.

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9958:
---
Component/s: hdfs

> The FileSystem used by HdfsBackupRepository gets closed before the backup 
> completes.
> 
>
> Key: SOLR-9958
> URL: https://issues.apache.org/jira/browse/SOLR-9958
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>Priority: Critical
> Attachments: SOLR-9958.patch
>
>
> My shards get backed up correctly, but then it fails when backing up the 
> state from ZK. From the logs, it looks like the underlying FS gets closed 
> before the config stuff is written:
> {code}
> DEBUG - 2017-01-11 22:39:12.889; [   ] 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase; GHFS.close:=> 
> INFO  - 2017-01-11 22:39:12.889; [   ] org.apache.solr.handler.SnapShooter; 
> Done creating backup snapshot: shard1 at 
> gs://master-sector-142100.appspot.com/backups2/tim5
> INFO  - 2017-01-11 22:39:12.889; [   ] org.apache.solr.servlet.HttpSolrCall; 
> [admin] webapp=null path=/admin/cores 
> params={core=gettingstarted_shard1_replica1=/admin/cores=shard1=BACKUPCORE=gs://master-sector-142100.appspot.com/backups2/tim5=javabin=2}
>  status=0 QTime=24954
> INFO  - 2017-01-11 22:39:12.890; [   ] org.apache.solr.cloud.BackupCmd; 
> Starting to backup ZK data for backupName=tim5
> INFO  - 2017-01-11 22:39:12.890; [   ] 
> org.apache.solr.common.cloud.ZkStateReader; Load collection config from: 
> [/collections/gettingstarted]
> INFO  - 2017-01-11 22:39:12.891; [   ] 
> org.apache.solr.common.cloud.ZkStateReader; 
> path=[/collections/gettingstarted] [configName]=[gettingstarted] specified 
> config exists in ZooKeeper
> ERROR - 2017-01-11 22:39:12.892; [   ] org.apache.solr.common.SolrException; 
> Collection: gettingstarted operation: backup failed:java.io.IOException: 
> GoogleHadoopFileSystem has been closed or not initialized.
> at 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> at 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.mkdirs(GoogleHadoopFileSystemBase.java:1367)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
> at 
> org.apache.solr.core.backup.repository.HdfsBackupRepository.createDirectory(HdfsBackupRepository.java:153)
> at 
> org.apache.solr.core.backup.BackupManager.downloadConfigDir(BackupManager.java:186)
> at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:111)
> at 
> org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:222)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Updated] (SOLR-11335) HdfsDirectory & Factory should not close the FileSystem object retrieved with get

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11335?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11335:

Component/s: hdfs

> HdfsDirectory & Factory should not close the FileSystem object retrieved with 
> get
> -
>
> Key: SOLR-11335
> URL: https://issues.apache.org/jira/browse/SOLR-11335
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Timothy Potter
>Priority: Minor
>
> I'm seeing issues where the Hadoop FileSystem instance is closed out from 
> under other objects. From what I understand, the Hadoop FileSystem object 
> (org.apache.hadoop.fs.FileSystem) retrieved with {{FileSystem.get}} as is 
> done in HdfsDirectory's ctor is a shared object that if closed, can affect 
> other code using that same shared instance. You can see this is a cached, 
> shared object here -> 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java#L455
> Thus, I suspect Solr should not be closing any FileSystem instance retrieved 
> with get. It's important to mention that if I set the 
> {{fs.$SCHEME.impl.disable.cache}} to true, then my problems go away, which 
> seems to confirm that Solr is using the API incorrectly. That being said, I'm 
> surprised this hasn't been raised before, so maybe I've missed something 
> basic in Solr's use of HDFS?
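The {{fs.$SCHEME.impl.disable.cache}} workaround mentioned above can be expressed as a Hadoop configuration entry. The fragment below is illustrative only, for the hdfs scheme: it trades the shared-instance problem for extra FileSystem instances (and connections), so it is a diagnostic aid rather than a recommended fix.

```xml
<!-- Illustrative core-site.xml fragment: opt the hdfs scheme out of the
     shared FileSystem cache, so each FileSystem.get() call returns a
     private instance that is safe to close independently. -->
<property>
  <name>fs.hdfs.impl.disable.cache</name>
  <value>true</value>
</property>
```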






[jira] [Updated] (SOLR-8169) Need LockFactory impl that uses ZooKeeper as replacement for HdfsLockFactory

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8169:
---
Component/s: hdfs

> Need LockFactory impl that uses ZooKeeper as replacement for HdfsLockFactory
> 
>
> Key: SOLR-8169
> URL: https://issues.apache.org/jira/browse/SOLR-8169
> Project: Solr
>  Issue Type: New Feature
>  Components: Hadoop Integration, hdfs
>Reporter: Timothy Potter
>Priority: Major
>
> It would be good to have an option to use a ZooKeeper backed LockFactory 
> implementation as a replacement for the HdfsLockFactory. FWIW - I've seen 
> instances in Solr on YARN environments where the lock file doesn't get 
> cleaned up correctly, which prevents using the index w/o some manual 
> intervention.






[jira] [Commented] (LUCENE-8166) Require merge instances to be consumed in the thread that created them

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795936#comment-16795936
 ] 

ASF subversion and git services commented on LUCENE-8166:
-

Commit 577bef53dd85734877e598539e7b528b2c1af179 in lucene-solr's branch 
refs/heads/master from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=577bef5 ]

LUCENE-8166: Require merge instances to be consumed in the thread that created 
them.


> Require merge instances to be consumed in the thread that created them
> --
>
> Key: LUCENE-8166
> URL: https://issues.apache.org/jira/browse/LUCENE-8166
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
>
> I would like to improve AssertingCodec to add the restriction that merge 
> instances can only be consumed from the thread that pulled them. This is 
> something that I relied on in order to avoid cloning index inputs too much 
> when implementing LUCENE-4198, but this is neither documented nor checked.
> For the record, I found that it was already an issue before LUCENE-4198 was 
> merged, for instance if you would pull a merge instance of the default stored 
> fields reader and clone it (for use in another thread), it would no longer be 
> a merge instance. So I think this new restriction makes sense?
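The proposed restriction can be sketched independently of Lucene: record the thread that pulled the instance and reject use from any other thread. The class and method names below are illustrative stand-ins for what an AssertingCodec-style wrapper might check, not Lucene's actual API.

```java
public class ThreadBoundReader {
    // Remember the thread that created ("pulled") this merge instance.
    private final Thread creator = Thread.currentThread();

    // Illustrative assertion: reject consumption from any other thread.
    private void checkThread() {
        if (Thread.currentThread() != creator) {
            throw new IllegalStateException("merge instance used from wrong thread: "
                + Thread.currentThread().getName());
        }
    }

    public int readField() {
        checkThread();
        return 42; // placeholder payload
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadBoundReader reader = new ThreadBoundReader();
        System.out.println(reader.readField()); // same thread: prints 42

        Thread other = new Thread(() -> {
            try {
                reader.readField();
                System.out.println("no check triggered");
            } catch (IllegalStateException e) {
                System.out.println("rejected cross-thread use");
            }
        });
        other.start();
        other.join(); // prints "rejected cross-thread use"
    }
}
```

A clone made for another thread would get its own `creator`, which mirrors the observation above that a cloned merge instance is no longer a merge instance.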






[jira] [Commented] (LUCENE-8138) Check that dv producers return the same values with advanceExact

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795937#comment-16795937
 ] 

ASF subversion and git services commented on LUCENE-8138:
-

Commit 07f35357939b0ba391c3be86808279138db0de46 in lucene-solr's branch 
refs/heads/master from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=07f3535 ]

LUCENE-8138: Check that dv producers's next/advance and advanceExact impls are 
consistent.


> Check that dv producers return the same values with advanceExact
> 
>
> Key: LUCENE-8138
> URL: https://issues.apache.org/jira/browse/LUCENE-8138
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8138.patch
>
>
> Follow-up of LUCENE-8117. I'd like to make CheckIndex verify that doc values 
> producers return the same values regardless of whether the iterator was moved 
> with nextDoc/advance or advanceExact.






[jira] [Updated] (SOLR-11473) Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11473:

Component/s: Hadoop Integration

> Make HDFSDirectoryFactory support other prefixes (besides hdfs:/)
> -
>
> Key: SOLR-11473
> URL: https://issues.apache.org/jira/browse/SOLR-11473
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.6.1
>Reporter: Radu Gheorghe
>Priority: Minor
> Attachments: SOLR-11473.patch
>
>
> Not sure if it's a bug or a missing feature :) I'm trying to make Solr work 
> on Alluxio, as described by [~thelabdude] in 
> https://www.slideshare.net/thelabdude/running-solr-in-the-cloud-at-memory-speed-with-alluxio/1
> The problem I'm facing here is with autoAddReplicas. If I have 
> replicationFactor=1 and the node with that replica dies, the node taking over 
> incorrectly assigns the data directory. For example:
> before
> {code}"dataDir":"alluxio://localhost:19998/solr/test/",{code}
> after
> {code}"dataDir":"alluxio://localhost:19998/solr/test/core_node1/alluxio://localhost:19998/solr/test/",{code}
> The same happens for ulogDir. Apparently, this has to do with this bit from 
> HDFSDirectoryFactory:
> {code}  public boolean isAbsolute(String path) {
> return path.startsWith("hdfs:/");
>   }{code}
> If I add "alluxio:/" in there, the paths are correct and the index is 
> recovered.
> I see a few options here:
> * add "alluxio:/" to the list there
> * add a regular expression along the lines of \[a-z]*:/ I hope that's not too 
> expensive, I'm not sure how often this method is called
> * don't do anything and expect alluxio to work with an "hdfs:/" path? I 
> actually tried that and didn't manage to make it work
> * have a different DirectoryFactory or something else?
> What do you think?
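The regular-expression option above can be sketched in plain Java. `SchemePathCheck` and its pattern are illustrative, not Solr's actual code; the pattern is precompiled once, so the per-call cost of `isAbsolute` is a single match.

```java
import java.util.regex.Pattern;

public class SchemePathCheck {
    // Matches any URI-style scheme prefix such as "hdfs:/" or "alluxio://",
    // instead of hard-coding "hdfs:/" as HdfsDirectoryFactory currently does.
    private static final Pattern SCHEME =
        Pattern.compile("^[a-zA-Z][a-zA-Z0-9+.-]*:/.*");

    public static boolean isAbsolute(String path) {
        return SCHEME.matcher(path).matches();
    }

    public static void main(String[] args) {
        System.out.println(isAbsolute("hdfs://nn:8020/solr"));                   // true
        System.out.println(isAbsolute("alluxio://localhost:19998/solr/test/")); // true
        System.out.println(isAbsolute("/var/solr/data"));                       // false
    }
}
```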






[jira] [Comment Edited] (SOLR-12040) HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every jenkins run

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795952#comment-16795952
 ] 

Kevin Risden edited comment on SOLR-12040 at 3/19/19 10:27 AM:
---

While I was looking at SOLR-13330 (and related HDFS tests), I think I saw this 
fail due to a race condition shutting down ZK client. I'll have to check again 
but the gist was:
* connection manager shutdown
* shutdown ZK client
* zk client stuck in trying to send / read packet during shutdown
* suite eventually times out from not being able to make any progress.


was (Author: risdenk):
While I was looking at SOLR-13330 (and related HDFS tests), I think I saw this 
fail due to a race condition shutting down ZK client. I'll have to check again 
but the gist was:
* connection manager shutdown
* shutdown ZK client
* zk client stuck in trying to send / read packet during shutdown

* suite eventually times out from not being able to make any progress.

> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every 
> jenkins run
> 
>
> Key: SOLR-12040
> URL: https://issues.apache.org/jira/browse/SOLR-12040
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Hoss Man
>Assignee: Mark Miller
>Priority: Major
>
> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 are thin subclasses of 
> BasicDistributedZkTest & BasicDistributedZk2 that just tweak the setup to use 
> HDFS, and only run @Nightly.
> These tests are failing virtually every time they are run by jenkins - either 
> at a method level, or at a suite level (due to threadleaks, timeouts, etc...) 
> yet their non-HDFS superclasss virtually never fail.
> Per the Jenkins failure-rate reports I've set up, here are the failure rates of 
> all tests matching "BasicDistributed" for the past 7 days (note that the 
> non-HDFS tests aren't even listed, because they haven't failed at all even 
> though they are non-nightly and have cumulatively run ~750 times in the past 
> 7 days)
> http://fucit.org/solr-jenkins-reports/failure-report.html
> {noformat}
> "Suite?","Class","Method","Rate","Runs","Fails"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","","53.3","15","8"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","test","18.75","16","3"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","","46.1538461538462","13","6"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","test","7.69230769230769","13","1"
> {noformat}






[jira] [Commented] (SOLR-13331) Atomic Update Multivalue remove does not work

2019-03-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795857#comment-16795857
 ] 

Thomas Wöckinger commented on SOLR-13331:
-

The used field was a TextField. I think unit tests should be added for known 
conversions.

> Atomic Update Multivalue remove does not work
> -
>
> Key: SOLR-13331
> URL: https://issues.apache.org/jira/browse/SOLR-13331
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: 7.7, 7.7.1, 8.0
> Environment: Standalone Solr Server
>Reporter: Thomas Wöckinger
>Priority: Critical
>
> When using JavaBinCodec the values of collections are of type 
> ByteArrayUtf8CharSequence, existing field values are Strings, so the remove 
> operation does not have any effect.
> The relevant code is located in class AtomicUpdateDocumentMerger method 
> doRemove.
> The method parameter fieldVal contains the collection values of type 
> ByteArrayUtf8CharSequence, the variable original contains the collection of 
> Strings






[jira] [Updated] (LUCENE-8150) Remove references to segments.gen.

2019-03-19 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8150:
-
Attachment: LUCENE-8150.patch

> Remove references to segments.gen.
> --
>
> Key: LUCENE-8150
> URL: https://issues.apache.org/jira/browse/LUCENE-8150
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: LUCENE-8150.patch, LUCENE-8150.patch
>
>
> This was the way we wrote pending segment files before we switched to 
> {{pending_segments_N}} in LUCENE-5925.






[jira] [Commented] (LUCENE-8150) Remove references to segments.gen.

2019-03-19 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795886#comment-16795886
 ] 

Adrien Grand commented on LUCENE-8150:
--

Here is a new patch based on the above comments. The "segments.gen" string only 
exists in SegmentInfos now.

> Remove references to segments.gen.
> --
>
> Key: LUCENE-8150
> URL: https://issues.apache.org/jira/browse/LUCENE-8150
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: LUCENE-8150.patch, LUCENE-8150.patch
>
>
> This was the way we wrote pending segment files before we switched to 
> {{pending_segments_N}} in LUCENE-5925.






[jira] [Commented] (SOLR-11876) InPlace update fails when resolving from Tlog if schema has a required field

2019-03-19 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795915#comment-16795915
 ] 

Ishan Chattopadhyaya commented on SOLR-11876:
-

Oops! I think fix version 7.7.1 was added to the issue before I committed here, 
and I didn't realize it. I've removed fix version 7.7.1; I can backport this to 
branch_7_7 and branch_7x so that it can be included in a 7.7.2 (whenever that 
happens). What do you think?

> InPlace update fails when resolving from Tlog if schema has a required field
> 
>
> Key: SOLR-11876
> URL: https://issues.apache.org/jira/browse/SOLR-11876
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: OSX High Sierra
> java version "1.8.0_152"
> Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
> Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
>Reporter: Justin Deoliveira
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11876.patch, SOLR-11876.patch, SOLR-11876.patch, 
> SOLR-11876.patch
>
>
> The situation is doing an in place update of a non-indexed/stored numeric doc 
> values field multiple times in fast succession. The schema 
> has a required field ("name") in it. On the third update request the update 
> fails complaining "missing required field: name". It seems this happens 
> when the update document is being resolved from the TLog.
> To reproduce:
> 1. Setup a schema that has:
>     - A required field other than the uniquekey field, in my case it's called 
> "name"
>     - A numeric doc values field suitable for in place update (non-indexed, 
> non-stored), in my case it's called "likes"
> 2. Execute an in place update of the document a few times in fast succession:
> {noformat}
> for i in `seq 10`; do
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/core1/update' --data-binary '
> [{
>  "id": "1",
>  "likes": { "inc": 1 }
> }]'
> done{noformat}
> The resulting stack trace:
> {noformat}
> 2018-01-19 21:27:26.644 ERROR (qtp1873653341-14) [ x:core1] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: [doc=1] 
> missing required field: name
>  at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:233)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.toSolrDoc(RealTimeGetComponent.java:767)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.resolveFullDocument(RealTimeGetComponent.java:423)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocumentFromTlog(RealTimeGetComponent.java:551)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocument(RealTimeGetComponent.java:609)
>  at 
> org.apache.solr.update.processor.AtomicUpdateDocumentMerger.doInPlaceUpdateMerge(AtomicUpdateDocumentMerger.java:253)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.getUpdatedDocument(DistributedUpdateProcessor.java:1279)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1008)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:617)
>  at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> 

[jira] [Resolved] (SOLR-13297) StressHdfsTest and HdfsUnloadDistributedZkTest fail more often after Hadoop 3

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-13297.
-
Resolution: Fixed

> StressHdfsTest and HdfsUnloadDistributedZkTest fail more often after Hadoop 3
> -
>
> Key: SOLR-13297
> URL: https://issues.apache.org/jira/browse/SOLR-13297
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: builds.a.o_solr_badapple_nightly_8x_7.txt.gz, 
> builds.a.o_solr_badapple_nightly_master_51.txt.gz, 
> builds.a.o_solr_nightly_8x_36.txt.gz, builds.a.o_solr_repro_2965.txt.gz
>
>
> As reported by [~hossman] on SOLR-9515, StressHdfsTest and 
> HdfsUnloadDistributedZkTest are failing more regularly. Should figure out 
> what is going on here.






[GitHub] [lucene-solr] risdenk closed pull request #34: Move hdfs stuff out into a new contrib

2019-03-19 Thread GitBox
risdenk closed pull request #34: Move hdfs stuff out into a new contrib
URL: https://github.com/apache/lucene-solr/pull/34
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (SOLR-9958) The FileSystem used by HdfsBackupRepository gets closed before the backup completes.

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795919#comment-16795919
 ] 

Kevin Risden commented on SOLR-9958:


Relates to SOLR-11473

> The FileSystem used by HdfsBackupRepository gets closed before the backup 
> completes.
> 
>
> Key: SOLR-9958
> URL: https://issues.apache.org/jira/browse/SOLR-9958
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>Priority: Critical
> Attachments: SOLR-9958.patch
>
>
> My shards get backed up correctly, but then it fails when backing up the 
> state from ZK. From the logs, it looks like the underlying FS gets closed 
> before the config stuff is written:
> {code}
> DEBUG - 2017-01-11 22:39:12.889; [   ] 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase; GHFS.close:=> 
> INFO  - 2017-01-11 22:39:12.889; [   ] org.apache.solr.handler.SnapShooter; 
> Done creating backup snapshot: shard1 at 
> gs://master-sector-142100.appspot.com/backups2/tim5
> INFO  - 2017-01-11 22:39:12.889; [   ] org.apache.solr.servlet.HttpSolrCall; 
> [admin] webapp=null path=/admin/cores 
> params={core=gettingstarted_shard1_replica1=/admin/cores=shard1=BACKUPCORE=gs://master-sector-142100.appspot.com/backups2/tim5=javabin=2}
>  status=0 QTime=24954
> INFO  - 2017-01-11 22:39:12.890; [   ] org.apache.solr.cloud.BackupCmd; 
> Starting to backup ZK data for backupName=tim5
> INFO  - 2017-01-11 22:39:12.890; [   ] 
> org.apache.solr.common.cloud.ZkStateReader; Load collection config from: 
> [/collections/gettingstarted]
> INFO  - 2017-01-11 22:39:12.891; [   ] 
> org.apache.solr.common.cloud.ZkStateReader; 
> path=[/collections/gettingstarted] [configName]=[gettingstarted] specified 
> config exists in ZooKeeper
> ERROR - 2017-01-11 22:39:12.892; [   ] org.apache.solr.common.SolrException; 
> Collection: gettingstarted operation: backup failed:java.io.IOException: 
> GoogleHadoopFileSystem has been closed or not initialized.
> at 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> at 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.mkdirs(GoogleHadoopFileSystemBase.java:1367)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
> at 
> org.apache.solr.core.backup.repository.HdfsBackupRepository.createDirectory(HdfsBackupRepository.java:153)
> at 
> org.apache.solr.core.backup.BackupManager.downloadConfigDir(BackupManager.java:186)
> at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:111)
> at 
> org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:222)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}






[GitHub] [lucene-solr] risdenk commented on issue #34: Move hdfs stuff out into a new contrib

2019-03-19 Thread GitBox
risdenk commented on issue #34: Move hdfs stuff out into a new contrib
URL: https://github.com/apache/lucene-solr/pull/34#issuecomment-474278148
 
 
   There hasn't been any movement on this in ~2 years and the Hadoop 3 upgrade 
definitely affects the same files. Closing this as stale.





[jira] [Commented] (SOLR-11876) InPlace update fails when resolving from Tlog if schema has a required field

2019-03-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795920#comment-16795920
 ] 

Jan Høydahl commented on SOLR-11876:


I'm merging to branch_7_7 now and I can commit it. I believe branch_7x will be 
deleted, since there will be no 7.8 release.

> InPlace update fails when resolving from Tlog if schema has a required field
> 
>
> Key: SOLR-11876
> URL: https://issues.apache.org/jira/browse/SOLR-11876
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: OSX High Sierra
> java version "1.8.0_152"
> Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
> Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
>Reporter: Justin Deoliveira
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11876.patch, SOLR-11876.patch, SOLR-11876.patch, 
> SOLR-11876.patch
>
>
> The situation is doing an in place update of a non-indexed/stored numeric doc 
> values field multiple times in fast succession. The schema 
> has a required field ("name") in it. On the third update request the update 
> fails complaining "missing required field: name". It seems this happens 
> when the update document is being resolved from the TLog.
> To reproduce:
> 1. Set up a schema that has:
>     - A required field other than the uniquekey field, in my case it's called 
> "name"
>     - A numeric doc values field suitable for in place update (non-indexed, 
> non-stored), in my case it's called "likes"
> 2. Execute an in place update of the document a few times in fast succession:
> {noformat}
> for i in `seq 10`; do
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/core1/update' --data-binary '
> [{
>  "id": "1",
>  "likes": { "inc": 1 }
> }]'
> done{noformat}
> The resulting stack trace:
> {noformat}
> 2018-01-19 21:27:26.644 ERROR (qtp1873653341-14) [ x:core1] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: [doc=1] 
> missing required field: name
>  at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:233)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.toSolrDoc(RealTimeGetComponent.java:767)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.resolveFullDocument(RealTimeGetComponent.java:423)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocumentFromTlog(RealTimeGetComponent.java:551)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocument(RealTimeGetComponent.java:609)
>  at 
> org.apache.solr.update.processor.AtomicUpdateDocumentMerger.doInPlaceUpdateMerge(AtomicUpdateDocumentMerger.java:253)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.getUpdatedDocument(DistributedUpdateProcessor.java:1279)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1008)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:617)
>  at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> 

[jira] [Updated] (SOLR-13297) StressHdfsTest and HdfsUnloadDistributedZkTest fail more often after Hadoop 3

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13297:

Fix Version/s: master (9.0)
   8.1

> StressHdfsTest and HdfsUnloadDistributedZkTest fail more often after Hadoop 3
> -
>
> Key: SOLR-13297
> URL: https://issues.apache.org/jira/browse/SOLR-13297
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: builds.a.o_solr_badapple_nightly_8x_7.txt.gz, 
> builds.a.o_solr_badapple_nightly_master_51.txt.gz, 
> builds.a.o_solr_nightly_8x_36.txt.gz, builds.a.o_solr_repro_2965.txt.gz
>
>
> As reported by [~hossman] on SOLR-9515, StressHdfsTest and 
> HdfsUnloadDistributedZkTest are failing more regularly. Should figure out 
> what is going on here.






[jira] [Reopened] (SOLR-11876) InPlace update fails when resolving from Tlog if schema has a required field

2019-03-19 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reopened SOLR-11876:


Reopening for 7.7.2 backport

> InPlace update fails when resolving from Tlog if schema has a required field
> 
>
> Key: SOLR-11876
> URL: https://issues.apache.org/jira/browse/SOLR-11876
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: OSX High Sierra
> java version "1.8.0_152"
> Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
> Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
>Reporter: Justin Deoliveira
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11876.patch, SOLR-11876.patch, SOLR-11876.patch, 
> SOLR-11876.patch
>
>
> The situation is doing an in place update of a non-indexed/stored numeric doc 
> values field multiple times in fast succession. The schema 
> has a required field ("name") in it. On the third update request the update 
> fails complaining "missing required field: name". It seems this happens 
> when the update document is being resolved from the TLog.
> To reproduce:
> 1. Set up a schema that has:
>     - A required field other than the uniquekey field, in my case it's called 
> "name"
>     - A numeric doc values field suitable for in place update (non-indexed, 
> non-stored), in my case it's called "likes"
> 2. Execute an in place update of the document a few times in fast succession:
> {noformat}
> for i in `seq 10`; do
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/core1/update' --data-binary '
> [{
>  "id": "1",
>  "likes": { "inc": 1 }
> }]'
> done{noformat}
> The resulting stack trace:
> {noformat}
> 2018-01-19 21:27:26.644 ERROR (qtp1873653341-14) [ x:core1] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: [doc=1] 
> missing required field: name
>  at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:233)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.toSolrDoc(RealTimeGetComponent.java:767)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.resolveFullDocument(RealTimeGetComponent.java:423)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocumentFromTlog(RealTimeGetComponent.java:551)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocument(RealTimeGetComponent.java:609)
>  at 
> org.apache.solr.update.processor.AtomicUpdateDocumentMerger.doInPlaceUpdateMerge(AtomicUpdateDocumentMerger.java:253)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.getUpdatedDocument(DistributedUpdateProcessor.java:1279)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1008)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:617)
>  at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> 

[jira] [Updated] (SOLR-11082) MoveReplica API for shared file systems should not delete the old replica if the source node is not live

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11082:

Fix Version/s: (was: 8.1)
   (was: master (9.0))

> MoveReplica API for shared file systems should not delete the old replica if 
> the source node is not live
> 
>
> Key: SOLR-11082
> URL: https://issues.apache.org/jira/browse/SOLR-11082
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Priority: Major
>
> MoveReplica API for shared file systems attempts to delete the old replica 
> and then creates a new replica (with same core and coreNode name) on the 
> target node. If the overseer fails between the two operations then the 
> replica is lost. The API should detect that if the source is not live, it 
> only needs to create the new replica. Then the old replica (upon coming back 
> online) auto-detects a replacement and unloads itself. This is also how 
> OverseerAutoReplicaFailoverThread works today.
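The ordering the issue asks for can be sketched as a small decision function: skip the delete step whenever the source node is not live, so an overseer crash between the two steps cannot lose the replica. This is a hypothetical toy model of the requested behavior, not Solr's Overseer API; the enum and method names are invented.

```java
import java.util.*;

public class MoveReplicaPlan {
    enum Step { DELETE_OLD_REPLICA, CREATE_NEW_REPLICA }

    /** Proposed ordering for shared-FS moves: only delete first when the source
     *  node is live; a dead source's replica unloads itself when it returns. */
    static List<Step> plan(String sourceNode, Set<String> liveNodes) {
        List<Step> steps = new ArrayList<>();
        if (liveNodes.contains(sourceNode)) {
            steps.add(Step.DELETE_OLD_REPLICA); // safe: source can be unloaded now
        }
        // with a shared file system, the new replica reuses the existing index
        steps.add(Step.CREATE_NEW_REPLICA);
        return steps;
    }

    public static void main(String[] args) {
        Set<String> live = new HashSet<>(Arrays.asList("node2", "node3"));
        System.out.println(plan("node1", live)); // source down: create only
        System.out.println(plan("node2", live)); // source live: delete then create
    }
}
```

The key property is that the create step is last, so a crash at any point leaves at least one copy of the replica's data reachable.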






[jira] [Updated] (SOLR-9169) External file fields do not work with HDFS

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9169:
---
Component/s: Hadoop Integration

> External file fields do not work with HDFS
> --
>
> Key: SOLR-9169
> URL: https://issues.apache.org/jira/browse/SOLR-9169
> Project: Solr
>  Issue Type: Bug
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.0
>Reporter: David Johnson
>Priority: Major
>
> The external file fields do not currently have HDFS support.  They attempt to 
> read using the VersionedFile class, which only uses the basic Java IO 
> classes, resulting in an "unable to open file" / "file not found" error.






[jira] [Updated] (SOLR-8033) useless if branch (commented out log.debug in HdfsTransactionLog constructor)

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8033:
---
Fix Version/s: master (9.0)
   8.1

> useless if branch (commented out log.debug in HdfsTransactionLog constructor)
> -
>
> Key: SOLR-8033
> URL: https://issues.apache.org/jira/browse/SOLR-8033
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Affects Versions: 5.0, 5.1
>Reporter: songwanging
>Assignee: Kevin Risden
>Priority: Minor
>  Labels: newdev
> Fix For: 8.1, master (9.0)
>
>
> In method HdfsTransactionLog() of class HdfsTransactionLog 
> (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java)
> The if branch presented in the following code snippet performs no actions; we 
> should either add code to handle this case or delete the branch outright.
> HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection<String> 
> globalStrings, boolean openExisting) {
>   ...
> try {
>   if (debug) {
> //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + 
> tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + 
> openExisting);
>   }
> ...
> }
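The committed fix simply removes the dead guard. As an aside, the usual reason such guards existed — avoiding eager construction of an expensive log message — can be handled without call-site guards; SLF4J's {} placeholders do this, and the idea can be sketched with a plain Supplier. The toy logger below is an assumption for illustration, not Solr's logging code.

```java
import java.util.function.Supplier;

public class LazyLog {
    static boolean debugEnabled = false;
    static int buildCount = 0; // counts how often the message string is built

    // eager style: the message is built at the call site even when debug is off
    static void debugEager(String msg) {
        if (debugEnabled) System.out.println(msg);
    }

    // lazy style: the Supplier runs only when debug is on, so callers need no guard
    static void debugLazy(Supplier<String> msg) {
        if (debugEnabled) System.out.println(msg.get());
    }

    static String expensive() {
        buildCount++;
        return "New TransactionLog file=... size=... openExisting=...";
    }

    public static void main(String[] args) {
        debugEager(expensive());       // builds the string regardless of the level
        debugLazy(LazyLog::expensive); // skipped entirely while debug is off
        System.out.println("strings built: " + buildCount);
    }
}
```

With debug disabled, only the eager call pays the string-construction cost, which is why guards around parameterized or supplier-based logging are redundant.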






[jira] [Commented] (SOLR-8033) Remove debug if branch in HdfsTransactionLog

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795939#comment-16795939
 ] 

ASF subversion and git services commented on SOLR-8033:
---

Commit 9fea3060b928cae4a87c6da9895602ce075775a8 in lucene-solr's branch 
refs/heads/master from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9fea306 ]

SOLR-8033: Remove debug if branch in HdfsTransactionLog

Signed-off-by: Kevin Risden 


> Remove debug if branch in HdfsTransactionLog
> 
>
> Key: SOLR-8033
> URL: https://issues.apache.org/jira/browse/SOLR-8033
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Affects Versions: 5.0, 5.1
>Reporter: songwanging
>Assignee: Kevin Risden
>Priority: Minor
>  Labels: newdev
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-8033.patch
>
>
> In method HdfsTransactionLog() of class HdfsTransactionLog 
> (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java)
> The if branch presented in the following code snippet performs no actions; we 
> should either add code to handle this case or delete the branch outright.
> HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection<String> 
> globalStrings, boolean openExisting) {
>   ...
> try {
>   if (debug) {
> //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + 
> tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + 
> openExisting);
>   }
> ...
> }






[jira] [Commented] (SOLR-12040) HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every jenkins run

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795952#comment-16795952
 ] 

Kevin Risden commented on SOLR-12040:
-

While I was looking at SOLR-13330 (and the related HDFS tests), I think I saw this 
fail due to a race condition while shutting down the ZK client. I'll have to check 
again, but the gist was:
* connection manager shuts down
* ZK client shutdown begins
* ZK client gets stuck trying to send/read a packet during shutdown
* suite eventually times out from not being able to make any progress

> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every 
> jenkins run
> 
>
> Key: SOLR-12040
> URL: https://issues.apache.org/jira/browse/SOLR-12040
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Hoss Man
>Assignee: Mark Miller
>Priority: Major
>
> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 are thin subclasses of 
> BasicDistributedZkTest & BasicDistributedZk2 that just tweak the setup to use 
> HDFS, and only run @Nightly.
> These tests are failing virtually every time they are run by jenkins - either 
> at a method level, or at a suite level (due to threadleaks, timeouts, etc...) 
> yet their non-HDFS superclasses virtually never fail.
> Per the Jenkins failure-rate reports I've set up, here are the failure rates of 
> all tests matching "BasicDistributed" for the past 7 days (note that the
> non-HDFS tests aren't even listed, because they haven't failed at all even 
> though they are non-nightly and have cumulatively run ~750 times in the past 
> 7 days)
> http://fucit.org/solr-jenkins-reports/failure-report.html
> {noformat}
> "Suite?","Class","Method","Rate","Runs","Fails"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","","53.3","15","8"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","test","18.75","16","3"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","","46.1538461538462","13","6"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","test","7.69230769230769","13","1"
> {noformat}






[jira] [Commented] (LUCENE-7958) Give TermInSetQuery better advancing capabilities

2019-03-19 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795905#comment-16795905
 ] 

Adrien Grand commented on LUCENE-7958:
--

Thanks for sharing [~hermes]. I should resurrect the above patch when I have 
some time!

> Give TermInSetQuery better advancing capabilities
> -
>
> Key: LUCENE-7958
> URL: https://issues.apache.org/jira/browse/LUCENE-7958
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7958.patch
>
>
> If a TermInSetQuery has more than 15 matching terms on a given segment, then 
> we consume all postings lists into a bitset and return an iterator over this 
> bitset as a scorer. I would like to change it so that we keep the 15 postings 
> lists that have the largest document frequencies and consume all other 
> (shorter) postings lists into a bitset. In the end we return a disjunction 
> over the N longest postings lists and the bit set. This could help consume 
> fewer doc ids if the TermInSetQuery is intersected with other queries, 
> especially if the document frequencies of the terms it wraps have a zipfian 
> distribution.
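The partitioning heuristic described above (keep the N largest-docFreq postings lists as lazy iterators, eagerly OR everything shorter into a bitset) can be sketched with plain collections. This is a toy model, not Lucene's Scorer/PostingsEnum API; the threshold of 15 comes from the issue text, and all names here are invented for illustration.

```java
import java.util.*;

public class TermPartition {
    static final int THRESHOLD = 15; // keep this many postings lists unconsumed

    /** Partitions terms: the THRESHOLD highest-docFreq lists are added to
     *  keptTerms (left lazy), all shorter lists are consumed into one BitSet. */
    static BitSet consumeShortLists(Map<String, int[]> postings,
                                    List<String> keptTerms) {
        // sort terms by descending document frequency (here: postings length)
        List<String> byFreq = new ArrayList<>(postings.keySet());
        byFreq.sort((a, b) -> postings.get(b).length - postings.get(a).length);

        BitSet bits = new BitSet();
        for (int i = 0; i < byFreq.size(); i++) {
            String term = byFreq.get(i);
            if (i < THRESHOLD) {
                keptTerms.add(term); // long list: keep for lazy advancing
            } else {
                for (int doc : postings.get(term)) bits.set(doc); // short: consume
            }
        }
        return bits; // a disjunction over keptTerms plus this bitset follows
    }

    public static void main(String[] args) {
        Map<String, int[]> postings = new HashMap<>();
        for (int t = 0; t < 20; t++) {
            int[] docs = new int[t + 1]; // term t matches docs 0..t
            for (int d = 0; d <= t; d++) docs[d] = d;
            postings.put("term" + t, docs);
        }
        List<String> kept = new ArrayList<>();
        BitSet bits = consumeShortLists(postings, kept);
        System.out.println(kept.size() + " lists kept, "
                + bits.cardinality() + " docs in bitset");
    }
}
```

The point of the heuristic is that the long lists retain real advance() capability for intersections, while only the cheap-to-consume short lists lose it.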






[jira] [Commented] (SOLR-11876) InPlace update fails when resolving from Tlog if schema has a required field

2019-03-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795910#comment-16795910
 ] 

Jan Høydahl commented on SOLR-11876:


[~ichattopadhyaya], this is listed as fixed in 7.7.1, but it seems not to have 
been merged after all, even though the issue was resolved two weeks before the 
7.7.1 release. Can you confirm and correct the "Fix Version" field of this JIRA? 
I was really hoping to upgrade a customer to 7.7.1 to fix this bug; I guess that 
must wait until 7.7.2 then.

> InPlace update fails when resolving from Tlog if schema has a required field
> 
>
> Key: SOLR-11876
> URL: https://issues.apache.org/jira/browse/SOLR-11876
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: OSX High Sierra
> java version "1.8.0_152"
> Java(TM) SE Runtime Environment (build 1.8.0_152-b16)
> Java HotSpot(TM) 64-Bit Server VM (build 25.152-b16, mixed mode)
>Reporter: Justin Deoliveira
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 7.7.1, 8.1, master (9.0)
>
> Attachments: SOLR-11876.patch, SOLR-11876.patch, SOLR-11876.patch, 
> SOLR-11876.patch
>
>
> The situation is doing an in place update of a non-indexed/stored numeric doc 
> values field multiple times in fast succession. The schema 
> has a required field ("name") in it. On the third update request the update 
> fails complaining "missing required field: name". It seems this happens 
> when the update document is being resolved from the TLog.
> To reproduce:
> 1. Set up a schema that has:
>     - A required field other than the uniquekey field, in my case it's called 
> "name"
>     - A numeric doc values field suitable for in place update (non-indexed, 
> non-stored), in my case it's called "likes"
> 2. Execute an in place update of the document a few times in fast succession:
> {noformat}
> for i in `seq 10`; do
> curl -X POST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/core1/update' --data-binary '
> [{
>  "id": "1",
>  "likes": { "inc": 1 }
> }]'
> done{noformat}
> The resulting stack trace:
> {noformat}
> 2018-01-19 21:27:26.644 ERROR (qtp1873653341-14) [ x:core1] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: [doc=1] 
> missing required field: name
>  at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:233)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.toSolrDoc(RealTimeGetComponent.java:767)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.resolveFullDocument(RealTimeGetComponent.java:423)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocumentFromTlog(RealTimeGetComponent.java:551)
>  at 
> org.apache.solr.handler.component.RealTimeGetComponent.getInputDocument(RealTimeGetComponent.java:609)
>  at 
> org.apache.solr.update.processor.AtomicUpdateDocumentMerger.doInPlaceUpdateMerge(AtomicUpdateDocumentMerger.java:253)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.getUpdatedDocument(DistributedUpdateProcessor.java:1279)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1008)
>  at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:617)
>  at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
>  at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
>  at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
>  at 
> 

[jira] [Commented] (SOLR-9958) The FileSystem used by HdfsBackupRepository gets closed before the backup completes.

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795917#comment-16795917
 ] 

Kevin Risden commented on SOLR-9958:


Pretty sure this is also related to "fs.SCHEME.impl.disable.cache" = true.

> The FileSystem used by HdfsBackupRepository gets closed before the backup 
> completes.
> 
>
> Key: SOLR-9958
> URL: https://issues.apache.org/jira/browse/SOLR-9958
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.2.1
>Reporter: Timothy Potter
>Priority: Critical
> Attachments: SOLR-9958.patch
>
>
> My shards get backed up correctly, but then it fails when backing up the 
> state from ZK. From the logs, it looks like the underlying FS gets closed 
> before the config stuff is written:
> {code}
> DEBUG - 2017-01-11 22:39:12.889; [   ] 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase; GHFS.close:=> 
> INFO  - 2017-01-11 22:39:12.889; [   ] org.apache.solr.handler.SnapShooter; 
> Done creating backup snapshot: shard1 at 
> gs://master-sector-142100.appspot.com/backups2/tim5
> INFO  - 2017-01-11 22:39:12.889; [   ] org.apache.solr.servlet.HttpSolrCall; 
> [admin] webapp=null path=/admin/cores 
> params={core=gettingstarted_shard1_replica1=/admin/cores=shard1=BACKUPCORE=gs://master-sector-142100.appspot.com/backups2/tim5=javabin=2}
>  status=0 QTime=24954
> INFO  - 2017-01-11 22:39:12.890; [   ] org.apache.solr.cloud.BackupCmd; 
> Starting to backup ZK data for backupName=tim5
> INFO  - 2017-01-11 22:39:12.890; [   ] 
> org.apache.solr.common.cloud.ZkStateReader; Load collection config from: 
> [/collections/gettingstarted]
> INFO  - 2017-01-11 22:39:12.891; [   ] 
> org.apache.solr.common.cloud.ZkStateReader; 
> path=[/collections/gettingstarted] [configName]=[gettingstarted] specified 
> config exists in ZooKeeper
> ERROR - 2017-01-11 22:39:12.892; [   ] org.apache.solr.common.SolrException; 
> Collection: gettingstarted operation: backup failed:java.io.IOException: 
> GoogleHadoopFileSystem has been closed or not initialized.
> at 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.checkOpen(GoogleHadoopFileSystemBase.java:1927)
> at 
> com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.mkdirs(GoogleHadoopFileSystemBase.java:1367)
> at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1877)
> at 
> org.apache.solr.core.backup.repository.HdfsBackupRepository.createDirectory(HdfsBackupRepository.java:153)
> at 
> org.apache.solr.core.backup.BackupManager.downloadConfigDir(BackupManager.java:186)
> at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:111)
> at 
> org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:222)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Commented] (SOLR-9075) Look at using hdfs-client jar for smaller core dependency

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795924#comment-16795924
 ] 

Kevin Risden commented on SOLR-9075:


We might be able to look at this now that we've upgraded to Hadoop 3 in SOLR-9515.

> Look at using hdfs-client jar for smaller core dependency
> -
>
> Key: SOLR-9075
> URL: https://issues.apache.org/jira/browse/SOLR-9075
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Reporter: Mark Miller
>Priority: Major
>







[jira] [Updated] (SOLR-11083) MoveReplica API can lose replicas for shared file systems on overseer restart if source node is live

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11083:

Fix Version/s: (was: 8.1)
   (was: master (9.0))

> MoveReplica API can lose replicas for shared file systems on overseer restart 
> if source node is live
> 
>
> Key: SOLR-11083
> URL: https://issues.apache.org/jira/browse/SOLR-11083
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: hdfs, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Priority: Major
>
> MoveReplica unloads the old replica and creates a new one for shared file 
> systems. But if the overseer restarts between the two operations then the old 
> replica is lost. It is up to the user to detect the failure (using the request
> status API) and retry.






[jira] [Updated] (SOLR-11083) MoveReplica API can lose replicas for shared file systems on overseer restart if source node is live

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-11083:

Component/s: (was: hdfs)

> MoveReplica API can lose replicas for shared file systems on overseer restart 
> if source node is live
> 
>
> Key: SOLR-11083
> URL: https://issues.apache.org/jira/browse/SOLR-11083
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Priority: Major
>
> MoveReplica unloads the old replica and creates a new one for shared file 
> systems. But if the overseer restarts between the two operations then the old 
> replica is lost. It is up to the user to detect the failure (using the request
> status API) and retry.






[jira] [Commented] (SOLR-8033) Remove debug if branch in HdfsTransactionLog

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795943#comment-16795943
 ] 

ASF subversion and git services commented on SOLR-8033:
---

Commit 105979fb4cf59391af3906ad7e1f1a60ef179988 in lucene-solr's branch 
refs/heads/branch_8x from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=105979f ]

SOLR-8033: Remove debug if branch in HdfsTransactionLog

Signed-off-by: Kevin Risden 


> Remove debug if branch in HdfsTransactionLog
> 
>
> Key: SOLR-8033
> URL: https://issues.apache.org/jira/browse/SOLR-8033
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Affects Versions: 5.0, 5.1
>Reporter: songwanging
>Assignee: Kevin Risden
>Priority: Minor
>  Labels: newdev
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-8033.patch
>
>
> In method HdfsTransactionLog() of class HdfsTransactionLog 
> (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java)
> The if branch in the following code snippet performs no actions; we should 
> either add code to handle this case or delete the branch entirely.
> HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection<String> 
> globalStrings, boolean openExisting) {
>   ...
> try {
>   if (debug) {
> //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + 
> tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + 
> openExisting);
>   }
> ...
> }
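Rather than keeping an empty `if (debug)` guard, modern logging APIs defer message construction until the level is actually enabled. A minimal sketch using `java.util.logging` Supplier-based logging (the `describe` helper and its arguments are hypothetical stand-ins, not Solr's actual code):

```java
import java.util.logging.Logger;

public class LazyLoggingDemo {
    private static final Logger log = Logger.getLogger(LazyLoggingDemo.class.getName());

    // Builds the (potentially expensive) debug message; hypothetical helper.
    static String describe(String tlogFile, long size, boolean openExisting) {
        return "New TransactionLog file=" + tlogFile + ", size=" + size
                + ", openExisting=" + openExisting;
    }

    public static void main(String[] args) {
        // No `if (debug)` guard needed: the Supplier lambda is only
        // evaluated when FINE logging is actually enabled.
        log.fine(() -> describe("tlog.0000000", 128L, false));
        System.out.println(describe("tlog.0000000", 128L, false));
    }
}
```

With this idiom the dead guard can simply be deleted, which is what the attached patch does.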






[jira] [Commented] (SOLR-12040) HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every jenkins run

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795867#comment-16795867
 ] 

ASF subversion and git services commented on SOLR-12040:


Commit cf828163bdfa010c87f1171b6919e444bd0ff01c in lucene-solr's branch 
refs/heads/master from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cf82816 ]

SOLR-13330: Improve HDFS tests

Related JIRAs:
* SOLR-11010
* SOLR-11381
* SOLR-12040
* SOLR-13297

Changes:
* Consolidate hdfs configuration into HdfsTestUtil
* Ensure socketTimeout long enough for HDFS tests
* Ensure HdfsTestUtil.getClientConfiguration used in tests
* Replace deprecated HDFS calls
* Use try-with-resources to ensure closing of HDFS resources

Signed-off-by: Kevin Risden 


> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 fail on virtually every 
> jenkins run
> 
>
> Key: SOLR-12040
> URL: https://issues.apache.org/jira/browse/SOLR-12040
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Hoss Man
>Assignee: Mark Miller
>Priority: Major
>
> HdfsBasicDistributedZkTest & HdfsBasicDistributedZk2 are thin subclasses of 
> BasicDistributedZkTest & BasicDistributedZk2 that just tweak the setup to use 
> HDFS, and only run @Nightly.
> These tests are failing virtually every time they are run by jenkins - either 
> at a method level, or at a suite level (due to thread leaks, timeouts, etc.) 
> yet their non-HDFS superclasses virtually never fail.
> Per the jenkins failure rate reports I've set up, here are the failure rates of 
> all tests matching "BasicDistributed" for the past 7 days (note that the 
> non-HDFS tests aren't even listed, because they haven't failed at all even 
> though they are non-nightly and have cumulatively run ~750 times in the past 
> 7 days)
> http://fucit.org/solr-jenkins-reports/failure-report.html
> {noformat}
> "Suite?","Class","Method","Rate","Runs","Fails"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","","53.3","15","8"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test","test","18.75","16","3"
> "true","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","","46.1538461538462","13","6"
> "false","org.apache.solr.cloud.hdfs.HdfsBasicDistributedZkTest","test","7.69230769230769","13","1"
> {noformat}






[jira] [Commented] (SOLR-11010) OutOfMemoryError in tests when using HDFS BlockCache

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795865#comment-16795865
 ] 

ASF subversion and git services commented on SOLR-11010:


Commit cf828163bdfa010c87f1171b6919e444bd0ff01c in lucene-solr's branch 
refs/heads/master from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cf82816 ]

SOLR-13330: Improve HDFS tests

Related JIRAs:
* SOLR-11010
* SOLR-11381
* SOLR-12040
* SOLR-13297

Changes:
* Consolidate hdfs configuration into HdfsTestUtil
* Ensure socketTimeout long enough for HDFS tests
* Ensure HdfsTestUtil.getClientConfiguration used in tests
* Replace deprecated HDFS calls
* Use try-with-resources to ensure closing of HDFS resources

Signed-off-by: Kevin Risden 


> OutOfMemoryError in tests when using HDFS BlockCache
> 
>
> Key: SOLR-11010
> URL: https://issues.apache.org/jira/browse/SOLR-11010
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Affects Versions: 7.0, 8.0
>Reporter: Andrzej Bialecki 
>Priority: Major
>
> Spin-off from SOLR-10878: the newly added {{MoveReplicaHDFSTest}} fails on 
> jenkins (but rarely locally) with the following stacktrace:
> {code}
>[junit4]   2> 13619 ERROR (qtp1885193567-48) [n:127.0.0.1:50324_solr 
> c:movereplicatest_coll s:shard2 r:core_node4 
> x:movereplicatest_coll_shard2_replica_n2] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 
> 'movereplicatest_coll_shard2_replica_n2': Unable to create core 
> [movereplicatest_coll_shard2_replica_n2] Caused by: Direct buffer memory
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.create(CoreContainer.java:938)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$164(CoreAdminOperation.java:91)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
>[junit4]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:745)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:726)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:507)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:322)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>[junit4]   2>  at 
> org.eclipse.jetty.server.Server.handle(Server.java:534)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>[junit4]   2>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>[junit4]   2>  at 
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>[junit4]   

[jira] [Commented] (SOLR-13297) StressHdfsTest and HdfsUnloadDistributedZkTest fail more often after Hadoop 3

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795868#comment-16795868
 ] 

ASF subversion and git services commented on SOLR-13297:


Commit cf828163bdfa010c87f1171b6919e444bd0ff01c in lucene-solr's branch 
refs/heads/master from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cf82816 ]

SOLR-13330: Improve HDFS tests

Related JIRAs:
* SOLR-11010
* SOLR-11381
* SOLR-12040
* SOLR-13297

Changes:
* Consolidate hdfs configuration into HdfsTestUtil
* Ensure socketTimeout long enough for HDFS tests
* Ensure HdfsTestUtil.getClientConfiguration used in tests
* Replace deprecated HDFS calls
* Use try-with-resources to ensure closing of HDFS resources

Signed-off-by: Kevin Risden 


> StressHdfsTest and HdfsUnloadDistributedZkTest fail more often after Hadoop 3
> -
>
> Key: SOLR-13297
> URL: https://issues.apache.org/jira/browse/SOLR-13297
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: builds.a.o_solr_badapple_nightly_8x_7.txt.gz, 
> builds.a.o_solr_badapple_nightly_master_51.txt.gz, 
> builds.a.o_solr_nightly_8x_36.txt.gz, builds.a.o_solr_repro_2965.txt.gz
>
>
> As reported by [~hossman] on SOLR-9515, StressHdfsTest and 
> HdfsUnloadDistributedZkTest are failing more regularly. Should figure out 
> what is going on here.






[jira] [Commented] (SOLR-13330) Improve HDFS tests

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795864#comment-16795864
 ] 

ASF subversion and git services commented on SOLR-13330:


Commit cf828163bdfa010c87f1171b6919e444bd0ff01c in lucene-solr's branch 
refs/heads/master from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cf82816 ]

SOLR-13330: Improve HDFS tests

Related JIRAs:
* SOLR-11010
* SOLR-11381
* SOLR-12040
* SOLR-13297

Changes:
* Consolidate hdfs configuration into HdfsTestUtil
* Ensure socketTimeout long enough for HDFS tests
* Ensure HdfsTestUtil.getClientConfiguration used in tests
* Replace deprecated HDFS calls
* Use try-with-resources to ensure closing of HDFS resources

Signed-off-by: Kevin Risden 


> Improve HDFS tests
> --
>
> Key: SOLR-13330
> URL: https://issues.apache.org/jira/browse/SOLR-13330
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13330.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently the HDFS tests fail intermittently. Some of these failures are due 
> to resource constraints on the Apache Jenkins machines; others are due to poor 
> test assumptions.
> Related JIRAs:
> * SOLR-11010
> * SOLR-11381
> * SOLR-12040
> * SOLR-13297
> Changes:
> * Consolidate hdfs configuration into HdfsTestUtil
> * Ensure socketTimeout long enough for HDFS tests
> * Ensure HdfsTestUtil.getClientConfiguration used in tests
> * Replace deprecated HDFS calls
> * Use try-with-resources to ensure closing of HDFS resources
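The last change above (try-with-resources) can be sketched generically. A minimal illustration of the idiom using `java.io` stand-ins rather than the actual HDFS FileSystem API (the class and method names here are hypothetical):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;

public class TryWithResourcesDemo {
    // Reads the first line from a reader, closing it automatically.
    // The same pattern applies to HDFS handles such as FileSystem and
    // FSDataInputStream, which can leak when a test throws before an
    // explicit close() call is reached.
    static String firstLine(String text) {
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            return reader.readLine();
        } catch (IOException e) {
            // reader.close() has already run at this point, even on failure
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(firstLine("hello\nworld")); // prints "hello"
    }
}
```

The appeal for tests is that cleanup no longer depends on every failure path remembering to close the resource.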






[jira] [Commented] (SOLR-11381) HdfsDirectoryFactory throws NPE on cleanup because file system has been closed

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795866#comment-16795866
 ] 

ASF subversion and git services commented on SOLR-11381:


Commit cf828163bdfa010c87f1171b6919e444bd0ff01c in lucene-solr's branch 
refs/heads/master from Kevin Risden
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cf82816 ]

SOLR-13330: Improve HDFS tests

Related JIRAs:
* SOLR-11010
* SOLR-11381
* SOLR-12040
* SOLR-13297

Changes:
* Consolidate hdfs configuration into HdfsTestUtil
* Ensure socketTimeout long enough for HDFS tests
* Ensure HdfsTestUtil.getClientConfiguration used in tests
* Replace deprecated HDFS calls
* Use try-with-resources to ensure closing of HDFS resources

Signed-off-by: Kevin Risden 


> HdfsDirectoryFactory throws NPE on cleanup because file system has been closed
> --
>
> Key: SOLR-11381
> URL: https://issues.apache.org/jira/browse/SOLR-11381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Shalin Shekhar Mangar
>Priority: Trivial
> Fix For: 8.1, master (9.0)
>
>
> I saw this happening on tests related to autoscaling. The old directory 
> cleanup is triggered on core close in a separate thread. This can cause a race 
> condition where the filesystem is closed before the cleanup starts running. 
> Then an NPE is thrown and cleanup fails.
> Fixing the NPE is simple but I think this is a real bug where old directories 
> can be left around on HDFS. I don't know enough about HDFS to investigate 
> further. Leaving it here for interested people to pitch in.
> {code}
> 105029 ERROR 
> (OldIndexDirectoryCleanupThreadForCore-control_collection_shard1_replica_n1) 
> [n:127.0.0.1:58542_ c:control_collection s:shard1 r:core_node2 
> x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Error 
> checking for old index directories to clean-up.
> java.io.IOException: Filesystem closed
>   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:808)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2083)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2069)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:791)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:860)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1517)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1557)
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.cleanupOldIndexDirectories(HdfsDirectoryFactory.java:540)
>   at 
> org.apache.solr.core.SolrCore.lambda$cleanupOldIndexDirectories$32(SolrCore.java:3019)
>   at java.lang.Thread.run(Thread.java:745)
> 105030 ERROR 
> (OldIndexDirectoryCleanupThreadForCore-control_collection_shard1_replica_n1) 
> [n:127.0.0.1:58542_ c:control_collection s:shard1 r:core_node2 
> x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore Failed to cleanup 
> old index directories for core control_collection_shard1_replica_n1
> java.lang.NullPointerException
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.cleanupOldIndexDirectories(HdfsDirectoryFactory.java:558)
>   at 
> org.apache.solr.core.SolrCore.lambda$cleanupOldIndexDirectories$32(SolrCore.java:3019)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
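The second stack trace above is the NPE that a defensive null check would avoid; a minimal sketch with plain collections (the names below are hypothetical stand-ins, not the actual HdfsDirectoryFactory code):

```java
import java.util.Collections;
import java.util.List;

public class CleanupGuardDemo {
    // Hypothetical stand-in for the directory listing obtained from
    // FileSystem.listStatus(): when the filesystem is closed underneath
    // the cleanup thread, the listing the cleanup code holds can be null.
    static List<String> oldIndexDirs(List<String> listing) {
        if (listing == null) {
            // Defensive guard: treat a missing listing as "nothing to clean up"
            // instead of dereferencing null and aborting the cleanup thread.
            return Collections.emptyList();
        }
        return listing;
    }

    public static void main(String[] args) {
        System.out.println(oldIndexDirs(null).size()); // prints 0
    }
}
```

As the description notes, the guard only hides the symptom; the underlying race can still leave stale directories behind on HDFS.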






[GitHub] [lucene-solr] risdenk merged pull request #609: SOLR-13330: Improve HDFS tests

2019-03-19 Thread GitBox
risdenk merged pull request #609: SOLR-13330: Improve HDFS tests
URL: https://github.com/apache/lucene-solr/pull/609
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




Solr CHANGES section for 8.0.0 lacks 7.7.1 entries

2019-03-19 Thread Jan Høydahl
Hi

Just realized that the 8.0.0 Changes 
(https://lucene.apache.org/solr/8_0_0/changes/Changes.html) has not merged in 
the two bug fixes that were released in 7.7.1 just before the 8.0 release. 
Guess this is a minor issue.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com





[jira] [Resolved] (SOLR-11381) HdfsDirectoryFactory throws NPE on cleanup because file system has been closed

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-11381.
-
Resolution: Fixed

> HdfsDirectoryFactory throws NPE on cleanup because file system has been closed
> --
>
> Key: SOLR-11381
> URL: https://issues.apache.org/jira/browse/SOLR-11381
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Reporter: Shalin Shekhar Mangar
>Assignee: Kevin Risden
>Priority: Trivial
> Fix For: 8.1, master (9.0)
>
>
> I saw this happening on tests related to autoscaling. The old directory 
> cleanup is triggered on core close in a separate thread. This can cause a race 
> condition where the filesystem is closed before the cleanup starts running. 
> Then an NPE is thrown and cleanup fails.
> Fixing the NPE is simple but I think this is a real bug where old directories 
> can be left around on HDFS. I don't know enough about HDFS to investigate 
> further. Leaving it here for interested people to pitch in.
> {code}
> 105029 ERROR 
> (OldIndexDirectoryCleanupThreadForCore-control_collection_shard1_replica_n1) 
> [n:127.0.0.1:58542_ c:control_collection s:shard1 r:core_node2 
> x:control_collection_shard1_replica_n1] o.a.s.c.HdfsDirectoryFactory Error 
> checking for old index directories to clean-up.
> java.io.IOException: Filesystem closed
>   at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:808)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2083)
>   at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:2069)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:791)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:860)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1517)
>   at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1557)
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.cleanupOldIndexDirectories(HdfsDirectoryFactory.java:540)
>   at 
> org.apache.solr.core.SolrCore.lambda$cleanupOldIndexDirectories$32(SolrCore.java:3019)
>   at java.lang.Thread.run(Thread.java:745)
> 105030 ERROR 
> (OldIndexDirectoryCleanupThreadForCore-control_collection_shard1_replica_n1) 
> [n:127.0.0.1:58542_ c:control_collection s:shard1 r:core_node2 
> x:control_collection_shard1_replica_n1] o.a.s.c.SolrCore Failed to cleanup 
> old index directories for core control_collection_shard1_replica_n1
> java.lang.NullPointerException
>   at 
> org.apache.solr.core.HdfsDirectoryFactory.cleanupOldIndexDirectories(HdfsDirectoryFactory.java:558)
>   at 
> org.apache.solr.core.SolrCore.lambda$cleanupOldIndexDirectories$32(SolrCore.java:3019)
>   at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Resolved] (SOLR-11010) OutOfMemoryError in tests when using HDFS BlockCache

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-11010.
-
Resolution: Fixed

> OutOfMemoryError in tests when using HDFS BlockCache
> 
>
> Key: SOLR-11010
> URL: https://issues.apache.org/jira/browse/SOLR-11010
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs, Tests
>Affects Versions: 7.0, 8.0
>Reporter: Andrzej Bialecki 
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>
> Spin-off from SOLR-10878: the newly added {{MoveReplicaHDFSTest}} fails on 
> jenkins (but rarely locally) with the following stacktrace:
> {code}
>[junit4]   2> 13619 ERROR (qtp1885193567-48) [n:127.0.0.1:50324_solr 
> c:movereplicatest_coll s:shard2 r:core_node4 
> x:movereplicatest_coll_shard2_replica_n2] o.a.s.h.RequestHandlerBase 
> org.apache.solr.common.SolrException: Error CREATEing SolrCore 
> 'movereplicatest_coll_shard2_replica_n2': Unable to create core 
> [movereplicatest_coll_shard2_replica_n2] Caused by: Direct buffer memory
>[junit4]   2>  at 
> org.apache.solr.core.CoreContainer.create(CoreContainer.java:938)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$164(CoreAdminOperation.java:91)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
>[junit4]   2>  at 
> org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
>[junit4]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:745)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:726)
>[junit4]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:507)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:378)
>[junit4]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:322)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>[junit4]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>[junit4]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)
>[junit4]   2>  at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>[junit4]   2>  at 
> org.eclipse.jetty.server.Server.handle(Server.java:534)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>[junit4]   2>  at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>[junit4]   2>  at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>[junit4]   2>  at 
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>[junit4]   2>  at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>[junit4]   2>  at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>[junit4]   2>  at 
> 

[jira] [Assigned] (SOLR-9075) Look at using hdfs-client jar for smaller core dependency

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-9075:
--

Assignee: (was: Mark Miller)

> Look at using hdfs-client jar for smaller core dependency
> -
>
> Key: SOLR-9075
> URL: https://issues.apache.org/jira/browse/SOLR-9075
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Reporter: Mark Miller
>Priority: Major
>







[jira] [Updated] (SOLR-8033) Remove debug if branch in HdfsTransactionLog

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8033:
---
Attachment: SOLR-8033.patch

> Remove debug if branch in HdfsTransactionLog
> 
>
> Key: SOLR-8033
> URL: https://issues.apache.org/jira/browse/SOLR-8033
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Affects Versions: 5.0, 5.1
>Reporter: songwanging
>Assignee: Kevin Risden
>Priority: Minor
>  Labels: newdev
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-8033.patch
>
>
> In method HdfsTransactionLog() of class HdfsTransactionLog 
> (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java)
> The if branch in the following code snippet performs no actions; we should 
> either add code to handle this case or delete the branch entirely.
> HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection<String> 
> globalStrings, boolean openExisting) {
>   ...
> try {
>   if (debug) {
> //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + 
> tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + 
> openExisting);
>   }
> ...
> }






[jira] [Assigned] (SOLR-8033) useless if branch (commented out log.debug in HdfsTransactionLog constructor)

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-8033:
--

Assignee: Kevin Risden

> useless if branch (commented out log.debug in HdfsTransactionLog constructor)
> -
>
> Key: SOLR-8033
> URL: https://issues.apache.org/jira/browse/SOLR-8033
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Affects Versions: 5.0, 5.1
>Reporter: songwanging
>Assignee: Kevin Risden
>Priority: Minor
>  Labels: newdev
>
> In method HdfsTransactionLog() of class HdfsTransactionLog 
> (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java)
> The if branch in the following code snippet performs no actions; we should 
> either add code to handle this case or delete the branch entirely.
> HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection<String> 
> globalStrings, boolean openExisting) {
>   ...
> try {
>   if (debug) {
> //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + 
> tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + 
> openExisting);
>   }
> ...
> }






[jira] [Updated] (SOLR-8033) Remove debug if branch in HdfsTransactionLog

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-8033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-8033:
---
Summary: Remove debug if branch in HdfsTransactionLog  (was: useless if 
branch (commented out log.debug in HdfsTransactionLog constructor))

> Remove debug if branch in HdfsTransactionLog
> 
>
> Key: SOLR-8033
> URL: https://issues.apache.org/jira/browse/SOLR-8033
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Affects Versions: 5.0, 5.1
>Reporter: songwanging
>Assignee: Kevin Risden
>Priority: Minor
>  Labels: newdev
> Fix For: 8.1, master (9.0)
>
>
> In method HdfsTransactionLog() of class HdfsTransactionLog 
> (solr\core\src\java\org\apache\solr\update\HdfsTransactionLog.java)
> The if branch in the following code snippet performs no actions; we should 
> either add code to handle this case or delete the branch entirely.
> HdfsTransactionLog(FileSystem fs, Path tlogFile, Collection<String> 
> globalStrings, boolean openExisting) {
>   ...
> try {
>   if (debug) {
> //log.debug("New TransactionLog file=" + tlogFile + ", exists=" + 
> tlogFile.exists() + ", size=" + tlogFile.length() + ", openExisting=" + 
> openExisting);
>   }
> ...
> }






[jira] [Updated] (SOLR-10092) HDFS: AutoAddReplica fails

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-10092:

Component/s: Hadoop Integration

> HDFS: AutoAddReplica fails
> --
>
> Key: SOLR-10092
> URL: https://issues.apache.org/jira/browse/SOLR-10092
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration, hdfs
>Affects Versions: 6.3
>Reporter: Hendrik Haddorp
>Assignee: Mark Miller
>Priority: Major
> Attachments: SOLR-10092.patch, SOLR-10092.patch
>
>
> OverseerAutoReplicaFailoverThread fails to create replacement core with this 
> exception:
> o.a.s.c.OverseerAutoReplicaFailoverThread Exception trying to create new 
> replica on 
> http://...:9000/solr:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
>  Error from server at http://...:9000/solr: Error CREATEing SolrCore 
> 'test2.collection-09_shard1_replica1': Unable to create core 
> [test2.collection-09_shard1_replica1] Caused by: No shard id for 
> CoreDescriptor[name=test2.collection-09_shard1_replica1;instanceDir=/var/opt/solr/test2.collection-09_shard1_replica1]
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:593)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
> at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
> at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.createSolrCore(OverseerAutoReplicaFailoverThread.java:456)
> at 
> org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.lambda$addReplica$0(OverseerAutoReplicaFailoverThread.java:251)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745) 
> also see this mail thread about the issue: 
> https://lists.apache.org/thread.html/%3CCAA70BoWyzbvQuJTyzaG4Kx1tj0Djgcm+MV=x_hoac1e6cse...@mail.gmail.com%3E






[GitHub] [lucene-solr] s1monw opened a new pull request #610: Load FST off-heap if reader is not opened from an index writer

2019-03-19 Thread GitBox
s1monw opened a new pull request #610: Load FST off-heap if reader is not 
opened from an index writer
URL: https://github.com/apache/lucene-solr/pull/610
 
 
   Today we never load FSTs of ID-like fields off-heap since we need
   very fast access for updates. Yet, a reader that is not opened from
   an index writer can also leave the FST on disk. This change adds
   this information to SegmentReadState to allow the postings format
   to make this decision without configuration.
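Stripped of Lucene internals, the decision described above can be modeled as a pure predicate. The names and parameters below are illustrative stand-ins for the fields involved (mmap-backed input, docCount vs. sumDocFreq, the new opened-from-writer flag), not the actual FieldReader/SegmentReadState API:

```java
public class OffHeapDecision {
    // Hypothetical stand-in for the check in FieldReader.
    static boolean loadFstOffHeap(boolean mmapInput, long docCount,
                                  long sumDocFreq, boolean openedFromWriter) {
        // ID-like fields have docCount == sumDocFreq (one term per doc);
        // keep those on-heap only when the reader serves an IndexWriter,
        // which needs fast ID lookups for deletes/updates.
        return mmapInput && (docCount != sumDocFreq || !openedFromWriter);
    }

    public static void main(String[] args) {
        // ID-like field, reader opened from a writer: stays on-heap.
        System.out.println(loadFstOffHeap(true, 100, 100, true));
        // ID-like field, plain reader: can go off-heap.
        System.out.println(loadFstOffHeap(true, 100, 100, false));
        // Non-mmap directory: never off-heap.
        System.out.println(loadFstOffHeap(false, 100, 120, false));
    }
}
```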


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [lucene-solr] s1monw closed pull request #606: LUCENE-8671: Allow more fine-grained control over off-heap term dictionaries

2019-03-19 Thread GitBox
s1monw closed pull request #606: LUCENE-8671: Allow more fine-grained control 
over off-heap term dictionaries
URL: https://github.com/apache/lucene-solr/pull/606
 
 
   





[GitHub] [lucene-solr] s1monw commented on issue #606: LUCENE-8671: Allow more fine-grained control over off-heap term dictionaries

2019-03-19 Thread GitBox
s1monw commented on issue #606: LUCENE-8671: Allow more fine-grained control 
over off-heap term dictionaries
URL: https://github.com/apache/lucene-solr/pull/606#issuecomment-474409410
 
 
   @mikemccand I agree - I'll close this one and try to break it into smaller 
chunks and different solutions; see #610





[jira] [Updated] (SOLR-9079) Upgrade commons-lang to version 3.x

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9079:
---
Issue Type: Improvement  (was: Wish)

> Upgrade commons-lang to version 3.x
> ---
>
> Key: SOLR-9079
> URL: https://issues.apache.org/jira/browse/SOLR-9079
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9079.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Current version used is [/commons-lang/commons-lang = 
> 2.6|https://github.com/apache/lucene-solr/blob/master/lucene/ivy-versions.properties#L68]
>  and a key motivation would be to have 
> [commons.lang3|http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/package-summary.html]
>  APIs available e.g. 
> [org.apache.commons.lang3.tuple.Pair|http://commons.apache.org/proper/commons-lang/apidocs/index.html?org/apache/commons/lang3/tuple/Pair.html]
>  as an alternative to 
> [org.apache.solr.common.util.Pair|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Pair.java]
>  variant.
> [This|http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e]
>  dev list posting reports on exploring use of 3.4 instead of 2.6 and 
> concludes with the discovery of an optional zookeeper dependency on 
> commons-lang-2.4 version.
> So upgrading commons-lang can't happen anytime soon but this ticket here to 
> track motivations and findings so far for future reference.
> selected links into other relevant dev list threads:
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CA9C1B04B-EA67-4F2F-A9F3-B24A2AFB8598%40gmail.com%3E
> *  
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdSrZXDJk7VwuVzxDeqdocagS33Fx%2BstYD3yTx5--WXiA%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdWmCDSzXV40-wz1sr766GSwONGFem7UutkdXnsy0%2BXrg%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e






[jira] [Updated] (SOLR-9079) Remove commons-lang as a dependency

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9079:
---
Summary: Remove commons-lang as a dependency  (was: Upgrade commons-lang to 
version 3.x)

> Remove commons-lang as a dependency
> ---
>
> Key: SOLR-9079
> URL: https://issues.apache.org/jira/browse/SOLR-9079
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9079.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Current version used is [/commons-lang/commons-lang = 
> 2.6|https://github.com/apache/lucene-solr/blob/master/lucene/ivy-versions.properties#L68]
>  and a key motivation would be to have 
> [commons.lang3|http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/package-summary.html]
>  APIs available e.g. 
> [org.apache.commons.lang3.tuple.Pair|http://commons.apache.org/proper/commons-lang/apidocs/index.html?org/apache/commons/lang3/tuple/Pair.html]
>  as an alternative to 
> [org.apache.solr.common.util.Pair|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Pair.java]
>  variant.
> [This|http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e]
>  dev list posting reports on exploring use of 3.4 instead of 2.6 and 
> concludes with the discovery of an optional zookeeper dependency on 
> commons-lang-2.4 version.
> So upgrading commons-lang can't happen anytime soon but this ticket here to 
> track motivations and findings so far for future reference.
> selected links into other relevant dev list threads:
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CA9C1B04B-EA67-4F2F-A9F3-B24A2AFB8598%40gmail.com%3E
> *  
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdSrZXDJk7VwuVzxDeqdocagS33Fx%2BstYD3yTx5--WXiA%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdWmCDSzXV40-wz1sr766GSwONGFem7UutkdXnsy0%2BXrg%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e






[jira] [Commented] (SOLR-9079) Remove commons-lang as a dependency

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796193#comment-16796193
 ] 

Kevin Risden commented on SOLR-9079:


Replaced commons-lang with commons-lang3 where appropriate. Opened a PR 
https://github.com/apache/lucene-solr/pull/611 for easier review. Replaced 
deprecated usages as appropriate as well.
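For most call sites, a migration like this is an import swap, since commons-lang3 kept the majority of class names under a new package; a hypothetical example hunk (not taken from the actual PR):

```diff
-import org.apache.commons.lang.StringUtils;
+import org.apache.commons.lang3.StringUtils;
```

Classes that were deprecated or removed in lang3 (rather than merely repackaged) need individual replacements, which is why deprecated usages are handled case by case.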

> Remove commons-lang as a dependency
> ---
>
> Key: SOLR-9079
> URL: https://issues.apache.org/jira/browse/SOLR-9079
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9079.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Current version used is [/commons-lang/commons-lang = 
> 2.6|https://github.com/apache/lucene-solr/blob/master/lucene/ivy-versions.properties#L68]
>  and a key motivation would be to have 
> [commons.lang3|http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/package-summary.html]
>  APIs available e.g. 
> [org.apache.commons.lang3.tuple.Pair|http://commons.apache.org/proper/commons-lang/apidocs/index.html?org/apache/commons/lang3/tuple/Pair.html]
>  as an alternative to 
> [org.apache.solr.common.util.Pair|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Pair.java]
>  variant.
> [This|http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e]
>  dev list posting reports on exploring use of 3.4 instead of 2.6 and 
> concludes with the discovery of an optional zookeeper dependency on 
> commons-lang-2.4 version.
> So upgrading commons-lang can't happen anytime soon but this ticket here to 
> track motivations and findings so far for future reference.
> selected links into other relevant dev list threads:
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CA9C1B04B-EA67-4F2F-A9F3-B24A2AFB8598%40gmail.com%3E
> *  
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdSrZXDJk7VwuVzxDeqdocagS33Fx%2BstYD3yTx5--WXiA%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdWmCDSzXV40-wz1sr766GSwONGFem7UutkdXnsy0%2BXrg%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e






[jira] [Commented] (LUCENE-8150) Remove references to segments.gen.

2019-03-19 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796199#comment-16796199
 ] 

Michael McCandless commented on LUCENE-8150:


+1

> Remove references to segments.gen.
> --
>
> Key: LUCENE-8150
> URL: https://issues.apache.org/jira/browse/LUCENE-8150
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: LUCENE-8150.patch, LUCENE-8150.patch
>
>
> This was the way we wrote pending segment files before we switched to 
> {{pending_segments_N}} in LUCENE-5925.






[GitHub] [lucene-solr] janhoy commented on issue #611: SOLR-9079: Remove commons-lang as a dependency

2019-03-19 Thread GitBox
janhoy commented on issue #611: SOLR-9079: Remove commons-lang as a dependency
URL: https://github.com/apache/lucene-solr/pull/611#issuecomment-474427162
 
 
   Looks good, have not tried to compile...





SOLR-13240.patch

2019-03-19 Thread Richard Goodman
Hi there,

I've commented on this ticket before, witnessing the same kind of exception
being thrown when using the Solr autoscaling API. I've got a patch that I
have tested on my own local 5-box cluster and no longer get the exception *(as
well as supplying a new unit test case)*. This is my first contribution to
Solr, so sorry if I haven't got the patch submission format right.


-- 

Richard Goodman|Data Infrastructure Engineer

richa...@brandwatch.com


NEW YORK   | BOSTON  | BRIGHTON   | LONDON   | BERLIN   |   STUTTGART   |
SINGAPORE   | SYDNEY | PARIS





SOLR-13240.patch
Description: Binary data


[GitHub] [lucene-solr] s1monw commented on issue #610: LUCENE-8671: Load FST off-heap if reader is not opened from an index writer

2019-03-19 Thread GitBox
s1monw commented on issue #610: LUCENE-8671: Load FST off-heap if reader is not 
opened from an index writer
URL: https://github.com/apache/lucene-solr/pull/610#issuecomment-474409691
 
 
   @mikemccand FYI





[jira] [Assigned] (SOLR-9079) Upgrade commons-lang to version 3.x

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-9079:
--

 Assignee: Kevin Risden
Fix Version/s: master (9.0)
   8.1
   Attachment: SOLR-9079.patch

> Upgrade commons-lang to version 3.x
> ---
>
> Key: SOLR-9079
> URL: https://issues.apache.org/jira/browse/SOLR-9079
> Project: Solr
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9079.patch
>
>
> Current version used is [/commons-lang/commons-lang = 
> 2.6|https://github.com/apache/lucene-solr/blob/master/lucene/ivy-versions.properties#L68]
>  and a key motivation would be to have 
> [commons.lang3|http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/package-summary.html]
>  APIs available e.g. 
> [org.apache.commons.lang3.tuple.Pair|http://commons.apache.org/proper/commons-lang/apidocs/index.html?org/apache/commons/lang3/tuple/Pair.html]
>  as an alternative to 
> [org.apache.solr.common.util.Pair|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Pair.java]
>  variant.
> [This|http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e]
>  dev list posting reports on exploring use of 3.4 instead of 2.6 and 
> concludes with the discovery of an optional zookeeper dependency on 
> commons-lang-2.4 version.
> So upgrading commons-lang can't happen anytime soon but this ticket here to 
> track motivations and findings so far for future reference.
> selected links into other relevant dev list threads:
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CA9C1B04B-EA67-4F2F-A9F3-B24A2AFB8598%40gmail.com%3E
> *  
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdSrZXDJk7VwuVzxDeqdocagS33Fx%2BstYD3yTx5--WXiA%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdWmCDSzXV40-wz1sr766GSwONGFem7UutkdXnsy0%2BXrg%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e






[GitHub] [lucene-solr] risdenk opened a new pull request #611: SOLR-9079: Remove commons-lang as a dependency

2019-03-19 Thread GitBox
risdenk opened a new pull request #611: SOLR-9079: Remove commons-lang as a 
dependency
URL: https://github.com/apache/lucene-solr/pull/611
 
 
   Removes commons-lang as a dependency. Migrates to commons-lang3 where 
appropriate. Removes deprecated usages where appropriate as well.





[jira] [Commented] (SOLR-9079) Remove commons-lang as a dependency

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796195#comment-16796195
 ] 

Kevin Risden commented on SOLR-9079:


ping [~janhoy] since you created the original commons-lang patch on SOLR-9459

> Remove commons-lang as a dependency
> ---
>
> Key: SOLR-9079
> URL: https://issues.apache.org/jira/browse/SOLR-9079
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9079.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Current version used is [/commons-lang/commons-lang = 
> 2.6|https://github.com/apache/lucene-solr/blob/master/lucene/ivy-versions.properties#L68]
>  and a key motivation would be to have 
> [commons.lang3|http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/package-summary.html]
>  APIs available e.g. 
> [org.apache.commons.lang3.tuple.Pair|http://commons.apache.org/proper/commons-lang/apidocs/index.html?org/apache/commons/lang3/tuple/Pair.html]
>  as an alternative to 
> [org.apache.solr.common.util.Pair|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Pair.java]
>  variant.
> [This|http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e]
>  dev list posting reports on exploring use of 3.4 instead of 2.6 and 
> concludes with the discovery of an optional zookeeper dependency on 
> commons-lang-2.4 version.
> So upgrading commons-lang can't happen anytime soon but this ticket here to 
> track motivations and findings so far for future reference.
> selected links into other relevant dev list threads:
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CA9C1B04B-EA67-4F2F-A9F3-B24A2AFB8598%40gmail.com%3E
> *  
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdSrZXDJk7VwuVzxDeqdocagS33Fx%2BstYD3yTx5--WXiA%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdWmCDSzXV40-wz1sr766GSwONGFem7UutkdXnsy0%2BXrg%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e






[JENKINS] Lucene-Solr-BadApples-master-Linux (64bit/jdk1.8.0_172) - Build # 179 - Failure!

2019-03-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/179/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.testPreemptiveCreation

Error Message:
took over 10 seconds after collection creation to update aliases

Stack Trace:
java.lang.AssertionError: took over 10 seconds after collection creation to 
update aliases
at 
__randomizedtesting.SeedInfo.seed([A787E64CF3B300E5:CAED63E7CBD1EDD1]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.update.processor.RoutedAliasUpdateProcessorTest.waitColAndAlias(RoutedAliasUpdateProcessorTest.java:77)
at 
org.apache.solr.update.processor.TimeRoutedAliasUpdateProcessorTest.testPreemptiveCreation(TimeRoutedAliasUpdateProcessorTest.java:476)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 15574 lines...]
   [junit4] 

[GitHub] [lucene-solr] mikemccand commented on a change in pull request #610: LUCENE-8671: Load FST off-heap if reader is not opened from an index writer

2019-03-19 Thread GitBox
mikemccand commented on a change in pull request #610: LUCENE-8671: Load FST 
off-heap if reader is not opened from an index writer
URL: https://github.com/apache/lucene-solr/pull/610#discussion_r266947394
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/codecs/blocktree/FieldReader.java
 ##
 @@ -91,7 +92,8 @@
   clone.seek(indexStartFP);
   // Initialize FST offheap if index is MMapDirectory and
   // docCount != sumDocFreq implying field is not primary key
-  if (clone instanceof ByteBufferIndexInput && this.docCount != this.sumDocFreq) {
+  isFSTOffHeap = clone instanceof ByteBufferIndexInput && ((this.docCount != this.sumDocFreq) || openedFromWriter == false);
 
 Review comment:
   Is the idea here that it's only `IndexWriter` that needs fast ID lookups 
(since it uses this when deleting docs by ID field)?  E.g. apps that open 
`IndexReader` themselves (outside of `IndexWriter`) don't need fast ID lookups 
by default?





[GitHub] [lucene-solr] mikemccand commented on a change in pull request #610: LUCENE-8671: Load FST off-heap if reader is not opened from an index writer

2019-03-19 Thread GitBox
mikemccand commented on a change in pull request #610: LUCENE-8671: Load FST 
off-heap if reader is not opened from an index writer
URL: https://github.com/apache/lucene-solr/pull/610#discussion_r266948427
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/index/SegmentReadState.java
 ##
 @@ -49,23 +49,29 @@
*  {@link IndexFileNames#segmentFileName(String,String,String)}). */
   public final String segmentSuffix;
 
+  /**
+   * True iff this SegmentReadState is opened from an index writer.
 
 Review comment:
   s/`index writer`/`IndexWriter`





[GitHub] [lucene-solr] risdenk commented on issue #611: SOLR-9079: Remove commons-lang as a dependency

2019-03-19 Thread GitBox
risdenk commented on issue #611: SOLR-9079: Remove commons-lang as a dependency
URL: https://github.com/apache/lucene-solr/pull/611#issuecomment-474428275
 
 
   Compiles and precommit passed for me. Running tests locally. The Solr patch 
review (patch attached to the JIRA) should catch any egregious errors as well. 





[jira] [Commented] (SOLR-13331) Atomic Update Multivalue remove does not work

2019-03-19 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796085#comment-16796085
 ] 

Thomas Wöckinger commented on SOLR-13331:
-

I can provide a patch for TextField, but i have to do more testing for all the 
other types. So i am not sure that toNativeType is the right place to fix this 
issue.

> Atomic Update Multivalue remove does not work
> -
>
> Key: SOLR-13331
> URL: https://issues.apache.org/jira/browse/SOLR-13331
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: 7.7, 7.7.1, 8.0
> Environment: Standalone Solr Server
>Reporter: Thomas Wöckinger
>Priority: Critical
>
> When using JavaBinCodec the values of collections are of type 
> ByteArrayUtf8CharSequence, while existing field values are Strings, so the remove 
> operation does not have any effect.
> The relevant code is located in class AtomicUpdateDocumentMerger, method 
> doRemove.
> The method parameter fieldVal contains the collection values of type 
> ByteArrayUtf8CharSequence, while the variable original contains the collection of 
> Strings.
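The failure mode is easy to reproduce with plain JDK types. Below, StringBuilder stands in for ByteArrayUtf8CharSequence (which is Solr-internal): Collection.remove relies on equals(), and a non-String CharSequence is never equal to a String, so the remove silently does nothing until the value is converted:

```java
import java.util.ArrayList;
import java.util.List;

public class RemoveMismatch {
    public static void main(String[] args) {
        // Existing field values are plain Strings...
        List<Object> original = new ArrayList<>(List.of("a", "b"));
        // ...but the incoming values to remove are a different CharSequence
        // implementation (StringBuilder here stands in for
        // ByteArrayUtf8CharSequence).
        CharSequence incoming = new StringBuilder("a");
        // List.remove uses equals(); StringBuilder does not override it,
        // so nothing matches and nothing is removed.
        System.out.println(original.remove(incoming));
        // Converting to String first makes the remove take effect.
        System.out.println(original.remove(incoming.toString()));
        System.out.println(original);
    }
}
```

This is why a fix that normalizes incoming values to their native type (e.g. via toNativeType) before comparison makes the remove operation work again.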



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (SOLR-11127) Add a Collections API command to migrate the .system collection schema from Trie-based (pre-7.0) to Points-based (7.0+)

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796080#comment-16796080
 ] 

ASF subversion and git services commented on SOLR-11127:


Commit b778417054e735cf323139a43e84d6262ce9dcd7 in lucene-solr's branch 
refs/heads/branch_8x from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b778417 ]

SOLR-11127: REINDEXCOLLECTION command for re-indexing of existing collections.


> Add a Collections API command to migrate the .system collection schema from 
> Trie-based (pre-7.0) to Points-based (7.0+)
> ---
>
> Key: SOLR-11127
> URL: https://issues.apache.org/jira/browse/SOLR-11127
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
>Priority: Blocker
>  Labels: numeric-tries-to-points
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11127.patch, SOLR-11127.patch, SOLR-11127.patch, 
> SOLR-11127.patch
>
>
> SOLR-9 will switch the Trie fieldtypes in the .system collection's schema 
> to Points.
> Users with pre-7.0 .system collections will no longer be able to use them 
> once Trie fields have been removed (8.0).
> Solr should provide a Collections API command MIGRATESYSTEMCOLLECTION to 
> automatically convert a Trie-based .system collection to a Points-based one.






[jira] [Created] (SOLR-13334) Expose FeatureField

2019-03-19 Thread Adrien Grand (JIRA)
Adrien Grand created SOLR-13334:
---

 Summary: Expose FeatureField
 Key: SOLR-13334
 URL: https://issues.apache.org/jira/browse/SOLR-13334
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Adrien Grand


It'd be nice to expose Lucene's FeatureField. This is especially useful in 
conjunction with SOLR-13289 since FeatureField can skip non-competitive hits, 
which makes it realistic to apply custom scoring over an entire collection 
rather than only via a rescorer.






RE: 8.0 jobs disabled on ASF Jenkins

2019-03-19 Thread Uwe Schindler
I did the same for the Policeman Jenkins last weekend when I updated JDK 
versions.

Uwe

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Adrien Grand 
> Sent: Tuesday, March 19, 2019 1:55 PM
> To: Lucene Dev 
> Subject: 8.0 jobs disabled on ASF Jenkins
> 
> FYI I disabled 8.0 jobs on ASF Jenkins except the one about the reference
> guide.
> 
> --
> Adrien
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org





[jira] [Commented] (SOLR-9079) Upgrade commons-lang to version 3.x

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796136#comment-16796136
 ] 

Kevin Risden commented on SOLR-9079:


On current master there are 63 usages of org.apache.commons.lang, 2 of which 
are in Lucene javadoc. There is an old patch from 2016 on SOLR-9459 to 
upgrade to lang3.

git grep -F org.apache.commons.lang. | wc -l
  63

> Upgrade commons-lang to version 3.x
> ---
>
> Key: SOLR-9079
> URL: https://issues.apache.org/jira/browse/SOLR-9079
> Project: Solr
>  Issue Type: Wish
>Reporter: Christine Poerschke
>Priority: Minor
>
> Current version used is [/commons-lang/commons-lang = 
> 2.6|https://github.com/apache/lucene-solr/blob/master/lucene/ivy-versions.properties#L68]
>  and a key motivation would be to have 
> [commons.lang3|http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/package-summary.html]
>  APIs available e.g. 
> [org.apache.commons.lang3.tuple.Pair|http://commons.apache.org/proper/commons-lang/apidocs/index.html?org/apache/commons/lang3/tuple/Pair.html]
>  as an alternative to 
> [org.apache.solr.common.util.Pair|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Pair.java]
>  variant.
> [This|http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e]
>  dev list posting reports on exploring use of 3.4 instead of 2.6 and 
> concludes with the discovery of an optional zookeeper dependency on 
> commons-lang-2.4 version.
> So upgrading commons-lang can't happen anytime soon but this ticket here to 
> track motivations and findings so far for future reference.
> selected links into other relevant dev list threads:
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CA9C1B04B-EA67-4F2F-A9F3-B24A2AFB8598%40gmail.com%3E
> *  
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdSrZXDJk7VwuVzxDeqdocagS33Fx%2BstYD3yTx5--WXiA%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdWmCDSzXV40-wz1sr766GSwONGFem7UutkdXnsy0%2BXrg%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




Re: BadApple for today Woweeeeeee!!!!!! Nothing reported!!!!

2019-03-19 Thread Michael Sokolov
Oh that is great! It's work just to *keep* things passing. Climbing out
from under a big pile of failures as folks have been doing here is extra
hard, so thank you!

On Mon, Mar 18, 2019 at 1:02 PM Erick Erickson 
wrote:

> There are still annoying failing tests, but apparently nothing annotated
> that’s failed consistently over the last 4 weeks.
>
> Wowwee!! There aren’t any to annotate! Which makes me wonder if my
> scanning program is working correctly….
>
> If you don’t occasionally glance at Hoss’ report page, you should. This is
> the “last 7 days” link.
> http://fucit.org/solr-jenkins-reports/failure-report.html
>
>
>
>
> Full report attached.
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-9.0.4) - Build # 119 - Unstable!

2019-03-19 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/119/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
ObjectTracker found 3 object(s) that were not released!!! [SolrCore, 
MockDirectoryWrapper, MockDirectoryWrapper] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1063)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1193)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1103)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
  at java.base/java.lang.Thread.run(Thread.java:844)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:99)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:779)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:976)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1193)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1103)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
  at java.base/java.lang.Thread.run(Thread.java:844)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:368)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:747)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:976)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1193)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1103)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
  at java.base/java.lang.Thread.run(Thread.java:844)   expected null, but 
was:(SolrCore.java:1063)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:883)  at 

autoGeneratePhraseQuery throws an error when added to a definition

2019-03-19 Thread Erick Erickson
Is this intended or should I create a JIRA?

If I put autoGeneratePhraseQuery on a <fieldType>, it works just fine. But 
putting it on a <field> generates an error on core load. Is this intentional?

If not I’ll raise a JIRA.



[jira] [Commented] (SOLR-12955) Refactor DistributedUpdateProcessor

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796310#comment-16796310
 ] 

ASF subversion and git services commented on SOLR-12955:


Commit 5b7866b0851eff66cb7e929beef5249e3c72ac36 in lucene-solr's branch 
refs/heads/master from Bar Rotstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5b7866b ]

SOLR-12955: Refactored DistributedUpdateProcessor to put SolrCloud specifics 
into a subclass
Closes #528


> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12955-no-commit.patch, SOLR-12955.patch, 
> SOLR-12955.patch, SOLR-12955.patch, SOLR-12955.patch
>
>  Time Spent: 9h 40m
>  Remaining Estimate: 0h
>
> Lately, as I was skimming through Solr's code base, I noticed that 
> DistributedUpdateProcessor has a lot of nested if-else statements, which 
> hampers code readability.






[GitHub] [lucene-solr] asfgit closed pull request #528: SOLR-12955 2

2019-03-19 Thread GitBox
asfgit closed pull request #528: SOLR-12955 2
URL: https://github.com/apache/lucene-solr/pull/528
 
 
   





[jira] [Commented] (SOLR-12955) Refactor DistributedUpdateProcessor

2019-03-19 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796312#comment-16796312
 ] 

ASF subversion and git services commented on SOLR-12955:


Commit de58717183d0690254daa56d7dad7692bb435c4a in lucene-solr's branch 
refs/heads/branch_8x from Bar Rotstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=de58717 ]

SOLR-12955: Refactored DistributedUpdateProcessor to put SolrCloud specifics 
into a subclass
Closes #528

(cherry picked from commit 5b7866b0851eff66cb7e929beef5249e3c72ac36)


> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12955-no-commit.patch, SOLR-12955.patch, 
> SOLR-12955.patch, SOLR-12955.patch, SOLR-12955.patch
>
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> Lately, as I was skimming through Solr's code base, I noticed that 
> DistributedUpdateProcessor has a lot of nested if-else statements, which 
> hampers code readability.






[GitHub] [lucene-solr] jpountz commented on a change in pull request #610: LUCENE-8671: Load FST off-heap if reader is not opened from an index writer

2019-03-19 Thread GitBox
jpountz commented on a change in pull request #610: LUCENE-8671: Load FST 
off-heap if reader is not opened from an index writer
URL: https://github.com/apache/lucene-solr/pull/610#discussion_r267016239
 
 

 ##
 File path: 
lucene/core/src/java/org/apache/lucene/codecs/blocktree/FieldReader.java
 ##
 @@ -91,7 +92,8 @@
   clone.seek(indexStartFP);
   // Initialize FST offheap if index is MMapDirectory and
   // docCount != sumDocFreq implying field is not primary key
-  if (clone instanceof ByteBufferIndexInput && this.docCount != this.sumDocFreq) {
+  isFSTOffHeap = clone instanceof ByteBufferIndexInput && ((this.docCount != this.sumDocFreq) || openedFromWriter == false);
 
 Review comment:
   I guess some users need fast lookups on read-only indices, but this proposal 
looks like a good default to me.





[jira] [Closed] (SOLR-9075) Look at using hdfs-client jar for smaller core dependency

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden closed SOLR-9075.
--

> Look at using hdfs-client jar for smaller core dependency
> -
>
> Key: SOLR-9075
> URL: https://issues.apache.org/jira/browse/SOLR-9075
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Reporter: Mark Miller
>Priority: Major
>







[jira] [Resolved] (SOLR-9075) Look at using hdfs-client jar for smaller core dependency

2019-03-19 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden resolved SOLR-9075.

Resolution: Fixed

hadoop-hdfs-client was used during the upgrade to Hadoop 3 in SOLR-9515

> Look at using hdfs-client jar for smaller core dependency
> -
>
> Key: SOLR-9075
> URL: https://issues.apache.org/jira/browse/SOLR-9075
> Project: Solr
>  Issue Type: Improvement
>  Components: Hadoop Integration, hdfs
>Reporter: Mark Miller
>Priority: Major
>







[jira] [Commented] (LUCENE-8150) Remove references to segments.gen.

2019-03-19 Thread Uwe Schindler (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796250#comment-16796250
 ] 

Uwe Schindler commented on LUCENE-8150:
---

Hi, looks fine to me; the simple check in SegmentInfos is enough. Just because 
I am interested: can't we throw the exception earlier, on opening the index?

> Remove references to segments.gen.
> --
>
> Key: LUCENE-8150
> URL: https://issues.apache.org/jira/browse/LUCENE-8150
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: LUCENE-8150.patch, LUCENE-8150.patch
>
>
> This was the way we wrote pending segment files before we switch to 
> {{pending_segments_N}} in LUCENE-5925.






[GitHub] [lucene-solr] risdenk commented on issue #611: SOLR-9079: Remove commons-lang as a dependency

2019-03-19 Thread GitBox
risdenk commented on issue #611: SOLR-9079: Remove commons-lang as a dependency
URL: https://github.com/apache/lucene-solr/pull/611#issuecomment-474466592
 
 
   Sigh all went well except for the solr/contrib/velocity tests. 
   
   ```
  [junit4]> Throwable #1: java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
  [junit4]> at __randomizedtesting.SeedInfo.seed([7014262216574C1B:AE369B21B26146B8]:0)
  [junit4]> at org.apache.velocity.runtime.resource.ResourceManagerImpl.initialize(ResourceManagerImpl.java:161)
  [junit4]> at org.apache.velocity.runtime.RuntimeInstance.initializeResourceManager(RuntimeInstance.java:730)
  [junit4]> at org.apache.velocity.runtime.RuntimeInstance.init(RuntimeInstance.java:263)
  [junit4]> at org.apache.velocity.runtime.RuntimeInstance.init(RuntimeInstance.java:646)
  [junit4]> at org.apache.velocity.app.VelocityEngine.init(VelocityEngine.java:116)
  [junit4]> at org.apache.solr.response.VelocityResponseWriter.createEngine(VelocityResponseWriter.java:345)
  [junit4]> at org.apache.solr.response.VelocityResponseWriter.write(VelocityResponseWriter.java:153)
  [junit4]> at org.apache.solr.velocity.VelocityResponseWriterTest.testCustomParamTemplate(VelocityResponseWriterTest.java:57)
  [junit4]> at java.lang.Thread.run(Thread.java:748)
   ```





[jira] [Commented] (SOLR-9079) Remove commons-lang as a dependency

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796274#comment-16796274
 ] 

Kevin Risden commented on SOLR-9079:


So a few issues:
* solr/contrib/velocity/ivy.xml doesn't even reference commons-lang
* velocity 1.7 was released on 2010-11-29
* LUCENE-5249 from 2013 was the last time velocity was changed in 
lucene/ivy-versions.properties

> Remove commons-lang as a dependency
> ---
>
> Key: SOLR-9079
> URL: https://issues.apache.org/jira/browse/SOLR-9079
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9079.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Current version used is [/commons-lang/commons-lang = 
> 2.6|https://github.com/apache/lucene-solr/blob/master/lucene/ivy-versions.properties#L68]
>  and a key motivation would be to have 
> [commons.lang3|http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/package-summary.html]
>  APIs available e.g. 
> [org.apache.commons.lang3.tuple.Pair|http://commons.apache.org/proper/commons-lang/apidocs/index.html?org/apache/commons/lang3/tuple/Pair.html]
>  as an alternative to 
> [org.apache.solr.common.util.Pair|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Pair.java]
>  variant.
> [This|http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e]
>  dev list posting reports on exploring use of 3.4 instead of 2.6 and 
> concludes with the discovery of an optional zookeeper dependency on 
> commons-lang-2.4 version.
> So upgrading commons-lang can't happen anytime soon but this ticket here to 
> track motivations and findings so far for future reference.
> selected links into other relevant dev list threads:
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CA9C1B04B-EA67-4F2F-A9F3-B24A2AFB8598%40gmail.com%3E
> *  
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdSrZXDJk7VwuVzxDeqdocagS33Fx%2BstYD3yTx5--WXiA%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdWmCDSzXV40-wz1sr766GSwONGFem7UutkdXnsy0%2BXrg%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e






[jira] [Comment Edited] (SOLR-9079) Remove commons-lang as a dependency

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796278#comment-16796278
 ] 

Kevin Risden edited comment on SOLR-9079 at 3/19/19 4:57 PM:
-

* velocity-tools 2.0 has an optional dependency on commons-lang
* velocity 1.7 has a hard dependency on commons-lang.

Upgrading velocity 1.7 -> 2.0
* http://velocity.apache.org/engine/2.0/upgrading.html
* Change velocity to velocity-engine-core
* upgrades commons-lang to commons-lang3

So if we want to finish removing commons-lang, we need to upgrade velocity.


was (Author: risdenk):
* velocity-tools 2.0 has an optional dependency on commons-lang
*velocity 1.7 has a hard dependency on commons-lang.

Upgrading velocity 1.7 -> 2.0
* http://velocity.apache.org/engine/2.0/upgrading.html
* Change velocity to velocity-engine-core
* upgrades commons-lang to commons-lang3

So if we want to finish removing commons-lang, we need to upgrade velocity.

> Remove commons-lang as a dependency
> ---
>
> Key: SOLR-9079
> URL: https://issues.apache.org/jira/browse/SOLR-9079
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9079.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Current version used is [/commons-lang/commons-lang = 
> 2.6|https://github.com/apache/lucene-solr/blob/master/lucene/ivy-versions.properties#L68]
>  and a key motivation would be to have 
> [commons.lang3|http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/package-summary.html]
>  APIs available e.g. 
> [org.apache.commons.lang3.tuple.Pair|http://commons.apache.org/proper/commons-lang/apidocs/index.html?org/apache/commons/lang3/tuple/Pair.html]
>  as an alternative to 
> [org.apache.solr.common.util.Pair|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Pair.java]
>  variant.
> [This|http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e]
>  dev list posting reports on exploring use of 3.4 instead of 2.6 and 
> concludes with the discovery of an optional zookeeper dependency on 
> commons-lang-2.4 version.
> So upgrading commons-lang can't happen anytime soon but this ticket here to 
> track motivations and findings so far for future reference.
> selected links into other relevant dev list threads:
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CA9C1B04B-EA67-4F2F-A9F3-B24A2AFB8598%40gmail.com%3E
> *  
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdSrZXDJk7VwuVzxDeqdocagS33Fx%2BstYD3yTx5--WXiA%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdWmCDSzXV40-wz1sr766GSwONGFem7UutkdXnsy0%2BXrg%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e






[jira] [Commented] (SOLR-9079) Remove commons-lang as a dependency

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796278#comment-16796278
 ] 

Kevin Risden commented on SOLR-9079:


* velocity-tools 2.0 has an optional dependency on commons-lang
* velocity 1.7 has a hard dependency on commons-lang.

Upgrading velocity 1.7 -> 2.0
* http://velocity.apache.org/engine/2.0/upgrading.html
* Change velocity to velocity-engine-core
* upgrades commons-lang to commons-lang3

So if we want to finish removing commons-lang, we need to upgrade velocity.
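
The steps above amount to a one-line dependency swap. As a hedged sketch in Ivy syntax (coordinates as published on Maven Central; the actual lucene-solr ivy.xml layout and attribute usage may differ):

```xml
<!-- before: Velocity 1.7, which has a hard dependency on commons-lang 2.x -->
<dependency org="org.apache.velocity" name="velocity" rev="1.7"/>

<!-- after: Velocity 2.0, repackaged as velocity-engine-core and depending on commons-lang3 -->
<dependency org="org.apache.velocity" name="velocity-engine-core" rev="2.0"/>
```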

> Remove commons-lang as a dependency
> ---
>
> Key: SOLR-9079
> URL: https://issues.apache.org/jira/browse/SOLR-9079
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9079.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Current version used is [/commons-lang/commons-lang = 
> 2.6|https://github.com/apache/lucene-solr/blob/master/lucene/ivy-versions.properties#L68]
>  and a key motivation would be to have 
> [commons.lang3|http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/package-summary.html]
>  APIs available e.g. 
> [org.apache.commons.lang3.tuple.Pair|http://commons.apache.org/proper/commons-lang/apidocs/index.html?org/apache/commons/lang3/tuple/Pair.html]
>  as an alternative to 
> [org.apache.solr.common.util.Pair|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Pair.java]
>  variant.
> [This|http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e]
>  dev list posting reports on exploring use of 3.4 instead of 2.6 and 
> concludes with the discovery of an optional zookeeper dependency on 
> commons-lang-2.4 version.
> So upgrading commons-lang can't happen anytime soon but this ticket here to 
> track motivations and findings so far for future reference.
> selected links into other relevant dev list threads:
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CA9C1B04B-EA67-4F2F-A9F3-B24A2AFB8598%40gmail.com%3E
> *  
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdSrZXDJk7VwuVzxDeqdocagS33Fx%2BstYD3yTx5--WXiA%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdWmCDSzXV40-wz1sr766GSwONGFem7UutkdXnsy0%2BXrg%40mail.gmail.com%3E
> * 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e






[GitHub] [lucene-solr] cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas Patch)

2019-03-19 Thread GitBox
cpoerschke commented on a change in pull request #300: SOLR-11831: Skip second 
grouping step if group.limit is 1 (aka Las Vegas Patch)
URL: https://github.com/apache/lucene-solr/pull/300#discussion_r266972873
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java
 ##
 @@ -532,6 +578,12 @@ protected int groupedDistributedProcess(ResponseBuilder rb) {
   nextStage = ResponseBuilder.STAGE_DONE;
 }
 
+if (rb.stage == ResponseBuilder.STAGE_EXECUTE_QUERY && rb.getGroupingSpec().isSkipSecondGroupingStep()) {
+  shardRequestFactory = new StoredFieldsShardRequestFactory();
+  nextStage = ResponseBuilder.STAGE_DONE;
+  rb.stage = ResponseBuilder.STAGE_GET_FIELDS;
 
 Review comment:
   **Background re: existing code:**
   
   [SearchHandler.handleRequestBody](https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.0.0/solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java#L345-L349) calls `distributedProcess` on all configured components `c`, and via the
   ```
   nextStage = Math.min(nextStage, c.distributedProcess(rb));
   ```
   formula each component thus gets a say in what `nextStage` is. The overall decision is then effected via the [rb.stage = nextStage;](https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.0.0/solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java#L342) assignment.
   
   **Observation re: this proposed `QueryComponent.groupedDistributedProcess` change:**
   
   If `QueryComponent` also assigns to `rb.stage`, that could 'confuse' the `SearchHandler` logic: specifically, components running after the `QueryComponent` would not have an opportunity to do anything in the `ResponseBuilder.STAGE_EXECUTE_QUERY` stage.
   
   **Question/Suggestion:**
   
   Might an alternative be to turn the `ResponseBuilder.STAGE_EXECUTE_QUERY` stage into a 'no-op' as far as the `QueryComponent` (in `isSkipSecondGroupingStep==true` circumstances) is concerned? https://github.com/bloomberg/lucene-solr/pull/229 aims to illustrate an alternative `QueryComponent.groupedDistributedProcess` modification; changes elsewhere might be needed too (I haven't looked into that yet).


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9079) Remove commons-lang as a dependency

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796268#comment-16796268
 ] 

Kevin Risden commented on SOLR-9079:


Sigh, all went well except for the solr/contrib/velocity tests.

{code:java}
   [junit4]> Throwable #1: java.lang.NoClassDefFoundError: org/apache/commons/lang/StringUtils
   [junit4]>at __randomizedtesting.SeedInfo.seed([7014262216574C1B:AE369B21B26146B8]:0)
   [junit4]>at org.apache.velocity.runtime.resource.ResourceManagerImpl.initialize(ResourceManagerImpl.java:161)
   [junit4]>at org.apache.velocity.runtime.RuntimeInstance.initializeResourceManager(RuntimeInstance.java:730)
   [junit4]>at org.apache.velocity.runtime.RuntimeInstance.init(RuntimeInstance.java:263)
   [junit4]>at org.apache.velocity.runtime.RuntimeInstance.init(RuntimeInstance.java:646)
   [junit4]>at org.apache.velocity.app.VelocityEngine.init(VelocityEngine.java:116)
   [junit4]>at org.apache.solr.response.VelocityResponseWriter.createEngine(VelocityResponseWriter.java:345)
   [junit4]>at org.apache.solr.response.VelocityResponseWriter.write(VelocityResponseWriter.java:153)
   [junit4]>at org.apache.solr.velocity.VelocityResponseWriterTest.testCustomParamTemplate(VelocityResponseWriterTest.java:57)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{code}
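
One way to see why this surfaces only at test time: commons-lang 2.x was a runtime (transitive) dependency of Velocity, so removing the jar breaks nothing at compile time and fails only when the class is first loaded. A hedged sketch of a pre-flight classpath probe follows; `ClasspathCheck` and `isPresent` are illustrative names, not part of the Solr build.

```java
// Hypothetical pre-flight probe: check whether a class that a third-party
// library needs at runtime is actually on the classpath. Class.forName with
// initialize=false only loads the class (no static initializers run), so a
// missing transitive dependency is reported without side effects.
public class ClasspathCheck {
    static boolean isPresent(String className) {
        try {
            Class.forName(className, false, ClasspathCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isPresent("java.lang.String"));                    // prints "true"
        System.out.println(isPresent("org.apache.commons.lang.StringUtils")); // "false" unless commons-lang 2.x is on the classpath
    }
}
```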


> Remove commons-lang as a dependency
> ---
>
> Key: SOLR-9079
> URL: https://issues.apache.org/jira/browse/SOLR-9079
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9079.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Current version used is [/commons-lang/commons-lang = 2.6|https://github.com/apache/lucene-solr/blob/master/lucene/ivy-versions.properties#L68] and a key motivation would be to have [commons.lang3|http://commons.apache.org/proper/commons-lang/apidocs/org/apache/commons/lang3/package-summary.html] APIs available, e.g. [org.apache.commons.lang3.tuple.Pair|http://commons.apache.org/proper/commons-lang/apidocs/index.html?org/apache/commons/lang3/tuple/Pair.html] as an alternative to the [org.apache.solr.common.util.Pair|https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/common/util/Pair.java] variant.
> [This|http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e] dev list posting reports on exploring use of 3.4 instead of 2.6 and concludes with the discovery of an optional ZooKeeper dependency on the commons-lang 2.4 version.
> So upgrading commons-lang can't happen anytime soon, but this ticket is here to track motivations and findings so far for future reference.
> Selected links to other relevant dev list threads:
> * http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CA9C1B04B-EA67-4F2F-A9F3-B24A2AFB8598%40gmail.com%3E
> * http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdSrZXDJk7VwuVzxDeqdocagS33Fx%2BstYD3yTx5--WXiA%40mail.gmail.com%3E
> * http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3CCAN4YXvdWmCDSzXV40-wz1sr766GSwONGFem7UutkdXnsy0%2BXrg%40mail.gmail.com%3E
> * http://mail-archives.apache.org/mod_mbox/lucene-dev/201605.mbox/%3cc6c4e67c-9506-cb1f-1ca5-cfa6fc880...@elyograg.org%3e






[jira] [Commented] (SOLR-9079) Remove commons-lang as a dependency

2019-03-19 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796286#comment-16796286
 ] 

Kevin Risden commented on SOLR-9079:


SOLR-10705 is the only JIRA I found with a quick search that even mentions upgrading/removing Velocity.

> Remove commons-lang as a dependency
> ---
>
> Key: SOLR-9079
> URL: https://issues.apache.org/jira/browse/SOLR-9079
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-9079.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>





