[jira] [Commented] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486754#comment-16486754
 ] 

ASF subversion and git services commented on LUCENE-8328:
---------------------------------------------------------

Commit 245d6cd6ad2c3babed1c48c8a79b598506f20f4f in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=245d6cd ]

LUCENE-8328: Ensure ReadersAndUpdates consistently executes under lock


> ReadersAndUpdates#getLatestReader should execute under lock
> ---
>
> Key: LUCENE-8328
> URL: https://issues.apache.org/jira/browse/LUCENE-8328
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Priority: Major
> Attachments: LUCENE-8328.patch, LUCENE-8328.patch
>
>
> It's possible for a merge thread to acquire an index reader that is closed 
> before the thread can incRef it.
> {noformat}
> Merge stack trace:
> at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0)
> at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257)
> at org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
> at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355)
> at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043)
> at 
> org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
> at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140)
> Refresh stack trace:
> at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
> at 
> org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
> at 
> org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
> at 
> org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
> at 
> org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
> at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
> at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
> The problem is that `ReadersAndUpdates#getLatestReader` is executed 
> concurrently without holding the lock.
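The race above (a reader fully released between the lookup and the incRef) can be sketched outside Lucene with a minimal reference-counted reader. The class and method names below are illustrative, not Lucene's API; the point is that acquiring and swapping synchronize on the same monitor, which is the kind of consistency the fix enforces in ReadersAndUpdates.

```java
// Minimal sketch of the race and its fix (illustrative names, not Lucene's API):
// acquireLatest() and swap() share one monitor, so a reader can never be fully
// released between the field read and the tryIncRef call.
import java.util.concurrent.atomic.AtomicInteger;

class RefCountedReader {
    private final AtomicInteger refCount = new AtomicInteger(1); // holder's ref

    /** Atomically increment the count unless it has already dropped to zero. */
    boolean tryIncRef() {
        int count;
        while ((count = refCount.get()) > 0) {
            if (refCount.compareAndSet(count, count + 1)) {
                return true;
            }
        }
        return false; // reader already closed
    }

    void decRef() {
        refCount.decrementAndGet();
    }
}

class ReaderHolder {
    private RefCountedReader current = new RefCountedReader();

    /** Without 'synchronized' here, a concurrent swap() could release
     *  'current' between the field read and the tryIncRef call. */
    synchronized RefCountedReader acquireLatest() {
        RefCountedReader reader = current;
        if (!reader.tryIncRef()) {
            throw new IllegalStateException("reader was closed concurrently");
        }
        return reader;
    }

    synchronized void swap() {
        RefCountedReader old = current;
        current = new RefCountedReader();
        old.decRef(); // drop the holder's reference to the old reader
    }
}
```

An acquired reader stays valid across a swap until the caller calls decRef, because the caller's own reference keeps the count above zero.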



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8324) Unreferenced files of dropped segments should be released

2018-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486755#comment-16486755
 ] 

ASF subversion and git services commented on LUCENE-8324:
---------------------------------------------------------

Commit 3ed9f98ed8083716e24bf0aa5d72138da2d8b518 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3ed9f98 ]

LUCENE-8324: Fix test to exclude the write.lock in expected files


> Unreferenced files of dropped segments should be released
> -
>
> Key: LUCENE-8324
> URL: https://issues.apache.org/jira/browse/LUCENE-8324
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Priority: Major
> Attachments: LUCENE-8324.patch, release-files.patch
>
>
> {quote} This has the side-effect that flushed segments that are 100% hard 
> deleted are also
> cleaned up right after they are flushed, previously these segments were 
> sticking
> around for a while until they got picked for a merge or received another 
> delete.{quote}
>  
> Since LUCENE-8253, a fully deleted segment is dropped immediately when it is 
> flushed; however, its files might be kept around even after a commit. In 
> other words, we may have unreferenced files which are retained by the Deleter.
> I am not entirely sure if we should fix this, but it would be nice to keep 
> the current files consistent with the commit points, as before.
> I attached a failing test for this.
> /cc [~simonw]
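The invariant the issue asks for can be stated as a small check: every file in the index directory, apart from the lock file, should be referenced by a commit point or a live segment. The helper below is illustrative, not a Lucene API; the write.lock exclusion mirrors the test fix in the commits above.

```java
// Illustrative helper (not a Lucene API): report files present in the index
// directory that no commit point or live segment references. The issue argues
// this set should be empty once dropped segments release their files.
import java.util.HashSet;
import java.util.Set;

class UnreferencedFiles {
    static Set<String> find(Set<String> directoryFiles, Set<String> referencedFiles) {
        Set<String> leftovers = new HashSet<>(directoryFiles);
        leftovers.removeAll(referencedFiles);
        leftovers.remove("write.lock"); // the lock file is expected and excluded
        return leftovers;
    }
}
```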






[jira] [Commented] (LUCENE-8324) Unreferenced files of dropped segments should be released

2018-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486753#comment-16486753
 ] 

ASF subversion and git services commented on LUCENE-8324:
---------------------------------------------------------

Commit 14a7cd1159bacec38fc1efc8a772f3fbd2abc6ed in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=14a7cd1 ]

LUCENE-8324: Fix test to exclude the write.lock in expected files








[jira] [Commented] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486752#comment-16486752
 ] 

ASF subversion and git services commented on LUCENE-8328:
---------------------------------------------------------

Commit b54e5946debdbf72b4772f1357d9bc6df8b5a3a7 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b54e594 ]

LUCENE-8328: Ensure ReadersAndUpdates consistently executes under lock








[jira] [Commented] (SOLR-3567) Spellcheck custom parameters not being passed through due to wrong prefix creation

2018-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486745#comment-16486745
 ] 

ASF subversion and git services commented on SOLR-3567:
-------------------------------------------------------

Commit e0ccf88100dd225fdc69c5d14dec269c05e572ac in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e0ccf88 ]

SOLR-3567: Spellcheck custom parameters not being passed through due to wrong 
prefix creation

(cherry picked from commit 9b1cb66)


> Spellcheck custom parameters not being passed through due to wrong prefix 
> creation
> --
>
> Key: SOLR-3567
> URL: https://issues.apache.org/jira/browse/SOLR-3567
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.0-ALPHA
>Reporter: josh lucas
>Assignee: Shalin Shekhar Mangar
>Priority: Major
>  Labels: patch, spellchecker
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-3567.patch, SOLR-3567.patch
>
>   Original Estimate: 0.5h
>  Remaining Estimate: 0.5h
>







[jira] [Resolved] (SOLR-3567) Spellcheck custom parameters not being passed through due to wrong prefix creation

2018-05-22 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-3567.
-----------------------------------------
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4








[jira] [Commented] (SOLR-3567) Spellcheck custom parameters not being passed through due to wrong prefix creation

2018-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486744#comment-16486744
 ] 

ASF subversion and git services commented on SOLR-3567:
-------------------------------------------------------

Commit 9b1cb6646f5e7ab13df2b95e38b2a862bde87e0c in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9b1cb66 ]

SOLR-3567: Spellcheck custom parameters not being passed through due to wrong 
prefix creation









[jira] [Updated] (SOLR-3567) Spellcheck custom parameters not being passed through due to wrong prefix creation

2018-05-22 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-3567:

Summary: Spellcheck custom parameters not being passed through due to wrong 
prefix creation  (was: Spellcheck custom parameters not being passed thru)








[jira] [Assigned] (SOLR-3567) Spellcheck custom parameters not being passed through due to wrong prefix creation

2018-05-22 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar reassigned SOLR-3567:
-------------------------------------------

Assignee: Shalin Shekhar Mangar








[jira] [Commented] (SOLR-3567) Spellcheck custom parameters not being passed thru

2018-05-22 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486742#comment-16486742
 ] 

Shalin Shekhar Mangar commented on SOLR-3567:
---------------------------------------------

I ran into this bug, which had escaped our attention for a long time. Patch 
updated with a fix to the test.








[jira] [Updated] (SOLR-3567) Spellcheck custom parameters not being passed thru

2018-05-22 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-3567:

Attachment: SOLR-3567.patch








[jira] [Commented] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486734#comment-16486734
 ] 

Simon Willnauer commented on LUCENE-8328:
-----------------------------------------

This looks great, [~dnhatn]. I will push this today.







[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-11-ea+14) - Build # 7333 - Still Unstable!

2018-05-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7333/
Java: 64bit/jdk-11-ea+14 -XX:-UseCompressedOops -XX:+UseG1GC

19 tests failed.
FAILED:  
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection

Error Message:
Timeout waiting for new leader null Live Nodes: [127.0.0.1:59543_solr, 
127.0.0.1:59559_solr, 127.0.0.1:59575_solr] Last available state: 
DocCollection(collection1//collections/collection1/state.json/14)={   
"pullReplicas":"0",   "replicationFactor":"3",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node62":{   "core":"collection1_shard1_replica_n61",   
"base_url":"http://127.0.0.1:59527/solr;,   
"node_name":"127.0.0.1:59527_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false"}, "core_node64":{ 
  "core":"collection1_shard1_replica_n63",   
"base_url":"http://127.0.0.1:59543/solr;,   
"node_name":"127.0.0.1:59543_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false"}, "core_node66":{ 
  "core":"collection1_shard1_replica_n65",   
"base_url":"http://127.0.0.1:59559/solr;,   
"node_name":"127.0.0.1:59559_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"3",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Timeout waiting for new leader
null
Live Nodes: [127.0.0.1:59543_solr, 127.0.0.1:59559_solr, 127.0.0.1:59575_solr]
Last available state: 
DocCollection(collection1//collections/collection1/state.json/14)={
  "pullReplicas":"0",
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node62":{
  "core":"collection1_shard1_replica_n61",
  "base_url":"http://127.0.0.1:59527/solr;,
  "node_name":"127.0.0.1:59527_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false"},
"core_node64":{
  "core":"collection1_shard1_replica_n63",
  "base_url":"http://127.0.0.1:59543/solr;,
  "node_name":"127.0.0.1:59543_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false"},
"core_node66":{
  "core":"collection1_shard1_replica_n65",
  "base_url":"http://127.0.0.1:59559/solr;,
  "node_name":"127.0.0.1:59559_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"3",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([C9357BF12963224C:6129674BEB231666]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:278)
at 
org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection(LeaderVoteWaitTimeoutTest.java:187)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Commented] (SOLR-12354) org.apache.solr.security.PKIAuthenticationPlugin does not check response code when retrieving remotePublicKey

2018-05-22 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486648#comment-16486648
 ] 

Noble Paul commented on SOLR-12354:
---

It is not very useful to handle this exception because it is not supposed to 
happen in the first place. Can you give steps to reproduce this?

 

> org.apache.solr.security.PKIAuthenticationPlugin does not check response code 
> when retrieving remotePublicKey
> -
>
> Key: SOLR-12354
> URL: https://issues.apache.org/jira/browse/SOLR-12354
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 6.6.2, 6.6.3
>Reporter: hamada
>Priority: Major
>
> In decipherHeader(), if keyCache does not contain the key of interest, a 
> remote call is made to retrieve the key from the remote host by calling 
> getRemotePublicKey, which fails if the server returns an HTML error page, 
> e.g.:
> org.noggit.JSONParser$ParseException: JSON Parse Error: char=<,position=0 
> BEFORE='<' AFTER='html>
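A minimal sketch of the guard the report implies; the class and method names here are hypothetical, not the plugin's actual API. Checking the HTTP status, and that the body at least looks like JSON, before parsing turns the opaque ParseException into an actionable error.

```java
// Hypothetical guard (not PKIAuthenticationPlugin's real method): validate the
// response before handing it to a JSON parser, so an HTML error page produces
// a clear failure instead of a JSONParser$ParseException.
class RemoteKeyGuard {
    static String requireJsonBody(int statusCode, String body) {
        if (statusCode != 200) {
            throw new IllegalStateException("key fetch failed: HTTP " + statusCode);
        }
        String trimmed = body.trim();
        if (!trimmed.startsWith("{")) {
            throw new IllegalStateException("key fetch returned non-JSON body");
        }
        return trimmed;
    }
}
```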

[JENKINS] Lucene-Solr-repro - Build # 680 - Still Unstable

2018-05-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/680/

[...truncated 39 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/619/consoleText

[repro] Revision: cc2ee2305001a49536886653d2133ee1a3b51b82

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=74D58E81841AE715 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-SD 
-Dtests.timezone=Europe/Gibraltar -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMergeIntegration -Dtests.seed=74D58E81841AE715 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-SD 
-Dtests.timezone=Europe/Gibraltar -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testTriggerThrottling -Dtests.seed=74D58E81841AE715 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=mk 
-Dtests.timezone=Antarctica/Davis -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
0a730d4c1a74b6a090e685990e620f482139303f
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout cc2ee2305001a49536886653d2133ee1a3b51b82

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro]   TestTriggerIntegration
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.IndexSizeTriggerTest|*.TestTriggerIntegration" 
-Dtests.showOutput=onerror  -Dtests.seed=74D58E81841AE715 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=ar-SD -Dtests.timezone=Europe/Gibraltar 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 10032 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   3/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro] git checkout 0a730d4c1a74b6a090e685990e620f482139303f

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Commented] (SOLR-12366) Avoid SlowAtomicReader.getLiveDocs -- it's slow

2018-05-22 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486633#comment-16486633
 ] 

Lucene/Solr QA commented on SOLR-12366:
---------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green}  2m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green}  2m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green}  2m 14s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}163m 20s{color} | {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}171m 27s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.autoscaling.sim.TestGenericDistributedQueue |
|   | solr.security.BasicAuthIntegrationTest |
|   | solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest |
|   | solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster |
|   | solr.cloud.autoscaling.sim.TestLargeCluster |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12366 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924480/SOLR-12366.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / af59c46 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/101/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/101/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/101/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Avoid SlowAtomicReader.getLiveDocs -- it's slow
> ---
>
> Key: SOLR-12366
> URL: https://issues.apache.org/jira/browse/SOLR-12366
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12366.patch, SOLR-12366.patch, SOLR-12366.patch, 
> SOLR-12366.patch
>
>
> SlowAtomicReader is of course slow, and its getLiveDocs (based on MultiBits) 
> is especially slow, since it performs a binary search for every lookup.  There 
> are various places in Solr that call SolrIndexSearcher.getSlowAtomicReader and 
> then get the liveDocs.  Most of these places ought to use SolrIndexSearcher's 
> getLiveDocs method instead.
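
To see why the per-lookup cost matters, here is a minimal plain-Java sketch of the idea behind a MultiBits-style merged live-docs view (the class, field, and method names are hypothetical, not Lucene's actual API): each get(docID) must first binary-search for the segment that owns the document before delegating, whereas a per-leaf lookup is a direct array access.

```java
import java.util.Arrays;

/** Hypothetical sketch of a MultiBits-style merged live-docs view. */
public class MultiLiveDocs {
    private final boolean[][] perSegment; // live flag per doc, per segment
    private final int[] starts;           // first global docID of each segment

    MultiLiveDocs(boolean[][] perSegment) {
        this.perSegment = perSegment;
        this.starts = new int[perSegment.length];
        int base = 0;
        for (int i = 0; i < perSegment.length; i++) {
            starts[i] = base;
            base += perSegment[i].length;
        }
    }

    /** Every call pays a binary search to locate the owning segment. */
    boolean get(int globalDoc) {
        int idx = Arrays.binarySearch(starts, globalDoc);
        if (idx < 0) {
            idx = -idx - 2; // insertion point -> segment whose range holds the doc
        }
        return perSegment[idx][globalDoc - starts[idx]];
    }

    public static void main(String[] args) {
        // Segment 0 holds global docs 0-2, segment 1 holds docs 3-4.
        MultiLiveDocs live = new MultiLiveDocs(
            new boolean[][] {{true, false, true}, {true, true}});
        System.out.println(live.get(1)); // false: doc 1 is deleted
        System.out.println(live.get(3)); // true
    }
}
```

A per-leaf live-docs lookup avoids that search entirely, which is why working against the per-segment readers is preferable in hot loops.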



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (LUCENE-8325) smartcn analyzer can't handle SURROGATE char

2018-05-22 Thread chengpohi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chengpohi updated LUCENE-8325:
--
Comment: was deleted

(was: Thanks [~rcmuir] review, I have updated the patch by feedback, please 
help review again.)

> smartcn analyzer can't handle SURROGATE char
> 
>
> Key: LUCENE-8325
> URL: https://issues.apache.org/jira/browse/LUCENE-8325
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: chengpohi
>Priority: Minor
>  Labels: newbie, patch
> Attachments: handle_surrogate_char_for_smartcn_2018-05-23.patch
>
>
> This issue comes from [https://github.com/elastic/elasticsearch/issues/30739]:
> the smartcn analyzer can't handle surrogate pairs. Example:
>  
> {code:java}
> Analyzer ca = new SmartChineseAnalyzer();
> String sentence = "\uD862\uDE0F"; // a single character encoded as a surrogate pair
> TokenStream tokenStream = ca.tokenStream("", sentence);
> CharTermAttribute charTermAttribute =
>     tokenStream.addAttribute(CharTermAttribute.class);
> tokenStream.reset();
> while (tokenStream.incrementToken()) {
>   String term = charTermAttribute.toString();
>   System.out.println(term);
> }
> {code}
>  
> The above code snippet outputs:
> {code:java}
> ?
> ?
> {code}
>  
> I have created a *PATCH* to try to fix this; please help review. (Since 
> *smartcn* only supports *GBK* characters, the patch simply handles the pair as 
> a *single char*.)
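
For background, a plain-Java sketch (no Lucene dependency; the class name is hypothetical) of why char-by-char tokenization breaks here: a supplementary character occupies two Java chars, and emitting each char separately produces unpaired surrogates that render as "?", while code-point iteration keeps the character intact.

```java
public class SurrogateDemo {
    public static void main(String[] args) {
        // One supplementary character (outside the BMP), encoded in UTF-16
        // as a high/low surrogate pair -- so String.length() is 2.
        String s = "\uD862\uDE0F";

        // Char-by-char tokenization: each half of the pair is an unpaired
        // surrogate, which is meaningless on its own.
        for (int i = 0; i < s.length(); i++) {
            System.out.println("char token, isSurrogate="
                + Character.isSurrogate(s.charAt(i)));
        }

        // Code-point-aware iteration keeps the pair together as one token.
        s.codePoints().forEach(cp ->
            System.out.println("code point U+"
                + Integer.toHexString(cp).toUpperCase()
                + " -> " + new String(Character.toChars(cp))));
    }
}
```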



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8325) smartcn analyzer can't handle SURROGATE char

2018-05-22 Thread chengpohi (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486624#comment-16486624
 ] 

chengpohi commented on LUCENE-8325:
---

Thanks for the review, [~rcmuir]. I have updated the patch based on your 
feedback; please take another look.

> smartcn analyzer can't handle SURROGATE char
> 
>
> Key: LUCENE-8325
> URL: https://issues.apache.org/jira/browse/LUCENE-8325
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: chengpohi
>Priority: Minor
>  Labels: newbie, patch
> Attachments: handle_surrogate_char_for_smartcn_2018-05-23.patch
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org





[jira] [Updated] (LUCENE-8325) smartcn analyzer can't handle SURROGATE char

2018-05-22 Thread chengpohi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chengpohi updated LUCENE-8325:
--
Attachment: handle_surrogate_char_for_smartcn_2018-05-23.patch

> smartcn analyzer can't handle SURROGATE char
> 
>
> Key: LUCENE-8325
> URL: https://issues.apache.org/jira/browse/LUCENE-8325
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: chengpohi
>Priority: Minor
>  Labels: newbie, patch
> Attachments: handle_surrogate_char_for_smartcn_2018-05-23.patch
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8325) smartcn analyzer can't handle SURROGATE char

2018-05-22 Thread chengpohi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chengpohi updated LUCENE-8325:
--
Attachment: (was: handle-surrogate-char-for-smartcn.patch)

> smartcn analyzer can't handle SURROGATE char
> 
>
> Key: LUCENE-8325
> URL: https://issues.apache.org/jira/browse/LUCENE-8325
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: chengpohi
>Priority: Minor
>  Labels: newbie, patch



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12247) NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor to fire on first run!

2018-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486617#comment-16486617
 ] 

ASF subversion and git services commented on SOLR-12247:


Commit 8db3912cac9ba73ff391cb1e9eead0c527f018f8 in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8db3912 ]

SOLR-12247: NodeAddedTriggerTest.testRestoreState() failure: Did not expect the 
processor to fire on first run


> NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor 
> to fire on first run!
> ---
>
> Key: SOLR-12247
> URL: https://issues.apache.org/jira/browse/SOLR-12247
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, Tests
>Reporter: Steve Rowe
>Assignee: Cao Manh Dat
>Priority: Major
>
> 100% reproducing seed from 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/203/]:
> {noformat}
> Checking out Revision 1b5690203de6d529f1eda671f84d710abd561bea 
> (refs/remotes/origin/branch_7x)
> [...]
>[smoker][junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=NodeAddedTriggerTest -Dtests.method=testRestoreState 
> -Dtests.seed=B9D447011147FCB6 -Dtests.multiplier=2 -Dtests.locale=fr-BE 
> -Dtests.timezone=MIT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[smoker][junit4] FAILURE 3.38s J2 | 
> NodeAddedTriggerTest.testRestoreState <<<
>[smoker][junit4]> Throwable #1: java.lang.AssertionError: Did not 
> expect the processor to fire on first run! event={
>[smoker][junit4]>   
> "id":"16bf1f58bda2d8Ta3xzeiz95jejbcrchofogpdj2",
>[smoker][junit4]>   "source":"node_added_trigger",
>[smoker][junit4]>   "eventTime":6402590841348824,
>[smoker][junit4]>   "eventType":"NODEADDED",
>[smoker][junit4]>   "properties":{
>[smoker][junit4]> "eventTimes":[6402590841348824],
>[smoker][junit4]> "nodeNames":["127.0.0.1:40637_solr"]}}
>[smoker][junit4]>  at 
> __randomizedtesting.SeedInfo.seed([B9D447011147FCB6:777AE392E97E84A0]:0)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
>[smoker][junit4]>  at 
> org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
>[smoker][junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[smoker][junit4]   2> NOTE: test params are: 
> codec=Asserting(Lucene70), sim=RandomSimilarity(queryNorm=true): {}, 
> locale=fr-BE, timezone=MIT
>[smoker][junit4]   2> NOTE: Linux 4.4.0-112-generic amd64/Oracle 
> Corporation 1.8.0_152 (64-bit)/cpus=4,threads=1,free=70702960,total=428867584
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12247) NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor to fire on first run!

2018-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486616#comment-16486616
 ] 

ASF subversion and git services commented on SOLR-12247:


Commit 0a730d4c1a74b6a090e685990e620f482139303f in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0a730d4 ]

SOLR-12247: NodeAddedTriggerTest.testRestoreState() failure: Did not expect the 
processor to fire on first run


> NodeAddedTriggerTest.testRestoreState() failure: Did not expect the processor 
> to fire on first run!
> ---
>
> Key: SOLR-12247
> URL: https://issues.apache.org/jira/browse/SOLR-12247
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, Tests
>Reporter: Steve Rowe
>Assignee: Cao Manh Dat
>Priority: Major
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 679 - Unstable

2018-05-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/679/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/61/consoleText

[repro] Revision: 0bf1eae92c4117659e2608111a8d64294009cc98

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=71E476A51D330AAC -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=el-GR 
-Dtests.timezone=America/Anguilla -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testSearchRate -Dtests.seed=71E476A51D330AAC 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=en-IE -Dtests.timezone=Asia/Oral -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestGenericDistributedQueue 
-Dtests.method=testDistributedQueue -Dtests.seed=71E476A51D330AAC 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=en-IN -Dtests.timezone=Indian/Comoro -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestGenericDistributedQueue 
-Dtests.seed=71E476A51D330AAC -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=en-IN -Dtests.timezone=Indian/Comoro 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
af59c46363f3497d44548021e4ff15d924ddbec3
[repro] git fetch
[repro] git checkout 0bf1eae92c4117659e2608111a8d64294009cc98

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestGenericDistributedQueue
[repro]   TestLargeCluster
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.TestGenericDistributedQueue|*.TestLargeCluster|*.IndexSizeTriggerTest"
 -Dtests.showOutput=onerror  -Dtests.seed=71E476A51D330AAC -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-IN 
-Dtests.timezone=Indian/Comoro -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 33022 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestGenericDistributedQueue
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster

[repro] Re-testing 100% failures at the tip of master
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestLargeCluster
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.TestLargeCluster|*.IndexSizeTriggerTest" 
-Dtests.showOutput=onerror  -Dtests.seed=71E476A51D330AAC -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en-IE 
-Dtests.timezone=Asia/Oral -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 33921 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster

[repro] Re-testing 100% failures at the tip of master without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestLargeCluster
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestLargeCluster" -Dtests.showOutput=onerror  
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=en-IE -Dtests.timezone=Asia/Oral -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 20998 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master without a seed:
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro] git checkout af59c46363f3497d44548021e4ff15d924ddbec3

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6733) Umbrella issue - Solr as a standalone application

2018-05-22 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486608#comment-16486608
 ] 

Shawn Heisey commented on SOLR-6733:


Is there anyone out there with significant Ant experience, and possibly a 
significant understanding of Solr's build system in particular, who could help 
me write a build.xml for a "start" module and integrate it into the overall 
build system?

> Umbrella issue - Solr as a standalone application
> -
>
> Key: SOLR-6733
> URL: https://issues.apache.org/jira/browse/SOLR-6733
> Project: Solr
>  Issue Type: New Feature
>Reporter: Shawn Heisey
>Priority: Major
>
> Umbrella issue.
> Solr should be a standalone application, where the main method is provided by 
> Solr source code.
> Here are the major tasks I envision, if we choose to embed Jetty:
>  * Create org.apache.solr.start.Main (and possibly other classes in the same 
> package), to be placed in solr-start.jar.  The Main class will contain the 
> main method that starts the embedded Jetty and Solr.  I do not know how to 
> adjust the build system to do this successfully.
>  * Handle central configurations in code -- TCP port, SSL, and things like 
> web.xml.
>  * For each of these steps, clean up any test fallout.
>  * Handle cloud-related configurations in code -- port, hostname, protocol, 
> etc.  Use the same information as the central configurations.
>  * Consider whether things like authentication need changes.
>  * Handle any remaining container configurations.
> I am currently imagining this work happening in a new branch and ultimately 
> being applied only to master, not the stable branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 641 - Unstable!

2018-05-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/641/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState

Error Message:
Did not expect the processor to fire on first run! event={   
"id":"82f3c263a97d0Tem0lr4npep3x95anmd5gaq9q",   "source":"node_added_trigger", 
  "eventTime":2303735199602640,   "eventType":"NODEADDED",   "properties":{ 
"eventTimes":[   2303735199602640,   2303735199604860,   
2303735199606508,   2303735199607995], "nodeNames":[   
"127.0.0.1:61925_solr",   "127.0.0.1:44485_solr",   
"127.0.0.1:53948_solr",   "127.0.0.1:46194_solr"]}}

Stack Trace:
java.lang.AssertionError: Did not expect the processor to fire on first run! 
event={
  "id":"82f3c263a97d0Tem0lr4npep3x95anmd5gaq9q",
  "source":"node_added_trigger",
  "eventTime":2303735199602640,
  "eventType":"NODEADDED",
  "properties":{
"eventTimes":[
  2303735199602640,
  2303735199604860,
  2303735199606508,
  2303735199607995],
"nodeNames":[
  "127.0.0.1:61925_solr",
  "127.0.0.1:44485_solr",
  "127.0.0.1:53948_solr",
  "127.0.0.1:46194_solr"]}}
at 
__randomizedtesting.SeedInfo.seed([BA3D4752AB1F3B9A:7493E3C15326438C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.lambda$new$0(NodeAddedTriggerTest.java:49)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTrigger.run(NodeAddedTrigger.java:161)
at 
org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState(NodeAddedTriggerTest.java:257)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 1959 - Unstable!

2018-05-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1959/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.lucene.classification.SimpleNaiveBayesClassifierTest.testPerformance

Error Message:
java.util.concurrent.ExecutionException: java.lang.AssertionError

Stack Trace:
java.lang.RuntimeException: java.util.concurrent.ExecutionException: 
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([3F7FB601C285CC38:F89E4423A931F497]:0)
at 
org.apache.lucene.classification.utils.ConfusionMatrixGenerator.getConfusionMatrix(ConfusionMatrixGenerator.java:131)
at 
org.apache.lucene.classification.SimpleNaiveBayesClassifierTest.testPerformance(SimpleNaiveBayesClassifierTest.java:104)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
at 
org.apache.lucene.classification.utils.ConfusionMatrixGenerator.getConfusionMatrix(ConfusionMatrixGenerator.java:94)
... 36 more
Caused by: java.lang.AssertionError
at 
java.base/java.util.HashMap$TreeNode.moveRootToFront(HashMap.java:1901)
at 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1549 - Still Unstable

2018-05-22 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1549/

5 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([28C437F3E0D5B714:7B7D754302C422EE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([28C437F3E0D5B714:4B0F0171791AC439]:0)
at 

[jira] [Commented] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486519#comment-16486519
 ] 

Nhat Nguyen commented on LUCENE-8328:
-

[~simonw] I've updated the test.

> ReadersAndUpdates#getLatestReader should execute under lock
> ---
>
> Key: LUCENE-8328
> URL: https://issues.apache.org/jira/browse/LUCENE-8328
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Priority: Major
> Attachments: LUCENE-8328.patch, LUCENE-8328.patch
>
>
> It's possible for a merge thread to acquire an index reader which is closed 
> before it can incRef.
> {noformat}
> Merge stack trace:
> at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0)
> at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257)
> at org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
> at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355)
> at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043)
> at 
> org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
> at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140)
> Refresh stack trace:
> at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
> at 
> org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
> at 
> org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
> at 
> org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
> at 
> org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
> at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
> at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
> The problem is that `ReadersAndUpdates#getLatestReader` can be executed 
> concurrently without holding the lock.
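The interleaving in the two stack traces above can be reproduced with a toy model. The sketch below is illustrative only: `MockReader`, `ReaderHolder`, and their methods are stand-ins invented here, not Lucene's actual classes. The point it demonstrates is the fix the commit enforces: the check-then-incRef in `acquire()` and the swap/decRef in `swap()` run under the same monitor, so a refresh thread can no longer close the reader between the merge thread's lookup and its incRef.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the LUCENE-8328 race; names are illustrative, not Lucene's.
public class ReaderLockDemo {

    // Stands in for a ref-counted SegmentReader.
    static final class MockReader {
        private final AtomicInteger refCount = new AtomicInteger(1);

        void incRef() { // throws if the reader was already closed
            int c;
            do {
                c = refCount.get();
                if (c <= 0) throw new IllegalStateException("already closed");
            } while (!refCount.compareAndSet(c, c + 1));
        }

        void decRef() { refCount.decrementAndGet(); }
    }

    // Plays the role of ReadersAndUpdates.
    static final class ReaderHolder {
        private MockReader current = new MockReader();

        // The fix: check-then-incRef is atomic with respect to swap(),
        // so a refresh thread cannot close `current` in between.
        synchronized MockReader acquire() {
            current.incRef();
            return current;
        }

        // Models swapNewReaderWithLatestLiveDocs: install a new reader
        // under the same lock, then drop the reference to the old one.
        synchronized void swap() {
            MockReader old = current;
            current = new MockReader();
            old.decRef();
        }
    }

    // Hammers the holder from a merge-like and a refresh-like thread;
    // returns how many acquires observed a closed reader.
    static int stress(int rounds) {
        ReaderHolder holder = new ReaderHolder();
        AtomicInteger failures = new AtomicInteger();
        Thread refresher = new Thread(() -> {
            for (int i = 0; i < rounds; i++) holder.swap();
        });
        Thread merger = new Thread(() -> {
            for (int i = 0; i < rounds; i++) {
                try {
                    MockReader r = holder.acquire();
                    r.decRef();
                } catch (IllegalStateException e) {
                    failures.incrementAndGet();
                }
            }
        });
        refresher.start();
        merger.start();
        try {
            refresher.join();
            merger.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return failures.get();
    }

    public static void main(String[] args) {
        System.out.println("failed acquires: " + stress(100_000));
    }
}
```

Removing `synchronized` from either method reintroduces the window in which `incRef` can observe an already-closed reader, which is exactly the `ensureOpen` failure shown in the merge stack trace.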



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1885 - Unstable!

2018-05-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1885/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

11 tests failed.
FAILED:  
org.apache.lucene.index.TestIndexWriterWithThreads.testIOExceptionDuringAbortWithThreads

Error Message:
MockDirectoryWrapper: cannot close: there are still 84 open files: 
{_b_LuceneVarGapDocFreqInterval_0.doc=1, 
_5_LuceneVarGapDocFreqInterval_0.tib=1, _8_LuceneVarGapDocFreqInterval_0.pos=1, 
_8_Lucene70_0.dvd=1, _1.tvd=1, _7_LuceneVarGapDocFreqInterval_0.doc=1, 
_1.nvd=1, _3_Lucene70_0.dvd=1, _8.fdt=1, _b.nvd=1, 
_0_LuceneVarGapDocFreqInterval_0.tib=1, _0.nvd=1, 
_3_LuceneVarGapDocFreqInterval_0.pos=1, _2_LuceneVarGapDocFreqInterval_0.doc=1, 
_b.tvd=1, _2.nvd=1, _2.tvd=1, _6.fdt=1, _9.fdt=1, 
_1_LuceneVarGapDocFreqInterval_0.doc=1, _6_LuceneVarGapDocFreqInterval_0.doc=1, 
_0_Lucene70_0.dvd=1, _5_Lucene70_0.dvd=1, _a.nvd=1, 
_b_LuceneVarGapDocFreqInterval_0.pos=1, _a.tvd=1, 
_9_LuceneVarGapDocFreqInterval_0.tib=1, _5.fdt=1, _1_Lucene70_0.dvd=1, 
_8.tvd=1, _1_LuceneVarGapDocFreqInterval_0.pos=1, 
_3_LuceneVarGapDocFreqInterval_0.doc=1, _8_LuceneVarGapDocFreqInterval_0.doc=1, 
_7_LuceneVarGapDocFreqInterval_0.pos=1, _8.nvd=1, 
_2_LuceneVarGapDocFreqInterval_0.pos=1, _6_LuceneVarGapDocFreqInterval_0.pos=1, 
_0.tvd=1, _4.fdt=1, _2_Lucene70_0.dvd=1, _9.tvd=1, 
_4_LuceneVarGapDocFreqInterval_0.tib=1, _a_LuceneVarGapDocFreqInterval_0.tib=1, 
_3.fdt=1, _9.nvd=1, _0_LuceneVarGapDocFreqInterval_0.pos=1, 
_4_LuceneVarGapDocFreqInterval_0.doc=1, _6.tvd=1, _2.fdt=1, _6.nvd=1, 
_3_LuceneVarGapDocFreqInterval_0.tib=1, _4_Lucene70_0.dvd=1, _7.tvd=1, 
_9_LuceneVarGapDocFreqInterval_0.doc=1, _5_LuceneVarGapDocFreqInterval_0.pos=1, 
_8_LuceneVarGapDocFreqInterval_0.tib=1, _b_LuceneVarGapDocFreqInterval_0.tib=1, 
_7.nvd=1, _9_Lucene70_0.dvd=1, _1.fdt=1, 
_1_LuceneVarGapDocFreqInterval_0.tib=1, _4_LuceneVarGapDocFreqInterval_0.pos=1, 
_3.nvd=1, _a.fdt=1, _4.tvd=1, _a_LuceneVarGapDocFreqInterval_0.doc=1, _3.tvd=1, 
_b_Lucene70_0.dvd=1, _6_Lucene70_0.dvd=1, _4.nvd=1, _0.fdt=1, 
_6_LuceneVarGapDocFreqInterval_0.tib=1, _7_Lucene70_0.dvd=1, 
_a_LuceneVarGapDocFreqInterval_0.pos=1, _7_LuceneVarGapDocFreqInterval_0.tib=1, 
_2_LuceneVarGapDocFreqInterval_0.tib=1, _5.tvd=1, _5.nvd=1, _7.fdt=1, 
_9_LuceneVarGapDocFreqInterval_0.pos=1, _5_LuceneVarGapDocFreqInterval_0.doc=1, 
_b.fdt=1, _a_Lucene70_0.dvd=1, _0_LuceneVarGapDocFreqInterval_0.doc=1}

Stack Trace:
java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 
84 open files: {_b_LuceneVarGapDocFreqInterval_0.doc=1, 
_5_LuceneVarGapDocFreqInterval_0.tib=1, _8_LuceneVarGapDocFreqInterval_0.pos=1, 
_8_Lucene70_0.dvd=1, _1.tvd=1, _7_LuceneVarGapDocFreqInterval_0.doc=1, 
_1.nvd=1, _3_Lucene70_0.dvd=1, _8.fdt=1, _b.nvd=1, 
_0_LuceneVarGapDocFreqInterval_0.tib=1, _0.nvd=1, 
_3_LuceneVarGapDocFreqInterval_0.pos=1, _2_LuceneVarGapDocFreqInterval_0.doc=1, 
_b.tvd=1, _2.nvd=1, _2.tvd=1, _6.fdt=1, _9.fdt=1, 
_1_LuceneVarGapDocFreqInterval_0.doc=1, _6_LuceneVarGapDocFreqInterval_0.doc=1, 
_0_Lucene70_0.dvd=1, _5_Lucene70_0.dvd=1, _a.nvd=1, 
_b_LuceneVarGapDocFreqInterval_0.pos=1, _a.tvd=1, 
_9_LuceneVarGapDocFreqInterval_0.tib=1, _5.fdt=1, _1_Lucene70_0.dvd=1, 
_8.tvd=1, _1_LuceneVarGapDocFreqInterval_0.pos=1, 
_3_LuceneVarGapDocFreqInterval_0.doc=1, _8_LuceneVarGapDocFreqInterval_0.doc=1, 
_7_LuceneVarGapDocFreqInterval_0.pos=1, _8.nvd=1, 
_2_LuceneVarGapDocFreqInterval_0.pos=1, _6_LuceneVarGapDocFreqInterval_0.pos=1, 
_0.tvd=1, _4.fdt=1, _2_Lucene70_0.dvd=1, _9.tvd=1, 
_4_LuceneVarGapDocFreqInterval_0.tib=1, _a_LuceneVarGapDocFreqInterval_0.tib=1, 
_3.fdt=1, _9.nvd=1, _0_LuceneVarGapDocFreqInterval_0.pos=1, 
_4_LuceneVarGapDocFreqInterval_0.doc=1, _6.tvd=1, _2.fdt=1, _6.nvd=1, 
_3_LuceneVarGapDocFreqInterval_0.tib=1, _4_Lucene70_0.dvd=1, _7.tvd=1, 
_9_LuceneVarGapDocFreqInterval_0.doc=1, _5_LuceneVarGapDocFreqInterval_0.pos=1, 
_8_LuceneVarGapDocFreqInterval_0.tib=1, _b_LuceneVarGapDocFreqInterval_0.tib=1, 
_7.nvd=1, _9_Lucene70_0.dvd=1, _1.fdt=1, 
_1_LuceneVarGapDocFreqInterval_0.tib=1, _4_LuceneVarGapDocFreqInterval_0.pos=1, 
_3.nvd=1, _a.fdt=1, _4.tvd=1, _a_LuceneVarGapDocFreqInterval_0.doc=1, _3.tvd=1, 
_b_Lucene70_0.dvd=1, _6_Lucene70_0.dvd=1, _4.nvd=1, _0.fdt=1, 
_6_LuceneVarGapDocFreqInterval_0.tib=1, _7_Lucene70_0.dvd=1, 
_a_LuceneVarGapDocFreqInterval_0.pos=1, _7_LuceneVarGapDocFreqInterval_0.tib=1, 
_2_LuceneVarGapDocFreqInterval_0.tib=1, _5.tvd=1, _5.nvd=1, _7.fdt=1, 
_9_LuceneVarGapDocFreqInterval_0.pos=1, _5_LuceneVarGapDocFreqInterval_0.doc=1, 
_b.fdt=1, _a_Lucene70_0.dvd=1, _0_LuceneVarGapDocFreqInterval_0.doc=1}
at 
__randomizedtesting.SeedInfo.seed([12A313D6AE5EEE13:730E1B5972706DAF]:0)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:841)
at 
org.apache.lucene.index.TestIndexWriterWithThreads._testMultipleThreadsFailure(TestIndexWriterWithThreads.java:341)
at 

[jira] [Updated] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nhat Nguyen updated LUCENE-8328:

Attachment: LUCENE-8328.patch







[jira] [Commented] (SOLR-12337) Remove QueryWrapperFilter

2018-05-22 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16486494#comment-16486494
 ] 

Lucene/Solr QA commented on SOLR-12337:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
12s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 51s{color} | {color:green} Release audit (RAT) rat-sources 
passed {color} |
| {color:red}-1{color} | {color:red} Release audit (RAT) {color} | {color:red}  
0m  6s{color} | {color:red} Release audit (RAT) rat-sources failed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
36s{color} | {color:green} analytics in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m  8s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.HttpPartitionTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12337 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12924472/SOLR-12337.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / af59c46 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_172 |
| Release audit (RAT) | 
https://builds.apache.org/job/PreCommit-SOLR-Build/100/artifact/out/patch-rat-sources-solr_core.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/100/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/100/testReport/ |
| modules | C: solr/contrib/analytics solr/core U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/100/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Remove QueryWrapperFilter
> -
>
> Key: SOLR-12337
> URL: https://issues.apache.org/jira/browse/SOLR-12337
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.4
>
> Attachments: SOLR-12337.patch, SOLR-12337.patch
>
>
> QueryWrapperFilter has not been needed ever since Filter was changed to 
> extend Query -- LUCENE-1518.  It was retained because there was at least one 
> place in Lucene that had a Filter/Query distinction, but it was forgotten 
> when Filter moved to Solr.  It contains some code that creates a temporary 
> IndexSearcher but forgets to null out the cache on it, and so 
> QueryWrapperFilter can add non-trivial overhead.  We should simply remove it 
> altogether.






[jira] [Updated] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-12378:
---
Attachment: (was: SOLR-12378.patch)

> Support missing versionField on indexed docs in DocBasedVersionConstraintsURP
> -
>
> Key: SOLR-12378
> URL: https://issues.apache.org/jira/browse/SOLR-12378
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UpdateRequestProcessors
>Affects Versions: master (8.0)
>Reporter: Oliver Bates
>Assignee: Mark Miller
>Priority: Minor
>  Labels: features, patch
> Attachments: SOLR-12378.patch, SOLR-12378.patch, 
> supportMissingVersionOnOldDocs-v1.patch
>
>
> -If we want to start using DocBasedVersionConstraintsUpdateRequestProcessor 
> on an existing index, we have to reindex everything to set value for the 
> 'versionField' field, otherwise- We can't start using 
> DocBasedVersionConstraintsUpdateRequestProcessor on an existing index because 
> we get this line throwing shade:
> {code:java}
> throw new SolrException(SERVER_ERROR,
> "Doc exists in index, but has null versionField: "
> + versionFieldName);
> {code}
> We have to reindex everything into a new collection, which isn't always 
> practical/possible. The proposal here is to have an option to allow the 
> existing docs to be missing this field and to simply treat those docs as 
> older than anything coming in with that field set.
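A hedged sketch of how such an option might be wired into an update chain in solrconfig.xml. The parameter name below is an assumption inferred from the attached patch's file name (supportMissingVersionOnOldDocs), and the field name is a placeholder; neither is confirmed by this issue:

{code:xml}
<updateRequestProcessorChain name="versioned-updates">
  <processor class="solr.DocBasedVersionConstraintsProcessorFactory">
    <str name="versionField">my_version_l</str>
    <!-- assumed option name, taken from the attached patch's file name:
         treat indexed docs missing versionField as older than any update
         that carries a version -->
    <bool name="supportMissingVersionOnOldDocs">true</bool>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
{code}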






[jira] [Updated] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-12378:
---
Attachment: SOLR-12378.patch







[jira] [Commented] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-22 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16485777#comment-16485777
 ] 

Mark Miller commented on SOLR-12378:


Looks like we have doc on this URP in the ref guide - new patch with an entry 
for this option.







[jira] [Updated] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-12378:
---
Attachment: SOLR-12378.patch







[jira] [Commented] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-22 Thread Oliver Bates (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484772#comment-16484772
 ] 

Oliver Bates commented on SOLR-12378:
-

Thanks Mark :)




[jira] [Assigned] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-12378:
--

Assignee: Mark Miller

I can take this - updated patch to trunk. Option looks clean, added testing 
looks good. Thanks Oliver!




[jira] [Updated] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-22 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-12378:
---
Attachment: SOLR-12378.patch




[jira] [Commented] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-22 Thread Oliver Bates (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484694#comment-16484694
 ] 

Oliver Bates commented on SOLR-12378:
-

{quote}Did you intentionally misspell an existing comment RE "identiy"?
{quote}
Lol yeah I'm just here to wreak havoc. Actually that typo existed in an older 
version of the file, which my original patch was based on (and I had actually 
fixed that typo too!) but I clearly messed something up when I rebased on the 
latest** master.
{quote}Can you please update your patch for master?  This file was recently 
split in two.
{quote}
**Obviously not latest enough.

Sorry about that. [~tomasflobbe] actually mentioned to me a couple days ago 
that the files had changed but I didn't get a chance to fix it yet. Will upload 
a new patch later today.




[jira] [Commented] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484618#comment-16484618
 ] 

Nhat Nguyen commented on LUCENE-8328:
-

Sure. I will look at that test class.

> ReadersAndUpdates#getLatestReader should execute under lock
> ---
>
> Key: LUCENE-8328
> URL: https://issues.apache.org/jira/browse/LUCENE-8328
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Priority: Major
> Attachments: LUCENE-8328.patch
>
>
> It's possible for a merge thread to acquire an index reader which is closed 
> before it can incRef.
> {noformat}
> Merge stack trace:
> at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0)
> at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257)
> at org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
> at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355)
> at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043)
> at 
> org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
> at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140)
> Refresh stack trace:
> at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
> at 
> org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
> at 
> org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
> at 
> org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
> at 
> org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
> at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
> at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
> The problem is that `ReadersAndUpdates#getLatestReader` can be executed 
> concurrently without holding the lock.
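The race and the fix can be illustrated with a minimal sketch (not the actual Lucene code): the check-then-incRef in the merge path and the reader swap in the refresh path must run under the same monitor, otherwise the swap (which decRefs and closes the old reader) can interleave between another thread reading the field and calling incRef.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified sketch of the race: getReader must read the current reader
// and incRef it atomically with respect to the swap done by the refresh
// path, otherwise it can incRef a reader that was just closed.
class ReaderHandle {
    final AtomicInteger refCount = new AtomicInteger(1);
    void incRef() {
        if (refCount.getAndIncrement() <= 0) {
            throw new IllegalStateException("reader is closed");
        }
    }
    void decRef() { refCount.decrementAndGet(); }
}

class PoolSketch {
    private ReaderHandle current = new ReaderHandle();

    // Merge path: holds the lock so the swap cannot interleave between
    // reading 'current' and incRef'ing it.
    synchronized ReaderHandle getReader() {
        current.incRef();
        return current;
    }

    // Refresh path: swap in a new reader and release the old one,
    // under the same lock.
    synchronized void swapNewReader() {
        ReaderHandle newer = new ReaderHandle();
        current.decRef();
        current = newer;
    }
}
```

Without `synchronized` on both methods, a thread could fetch `current`, lose the CPU, and then incRef a handle whose refCount already hit zero, which is the `ensureOpen` failure in the stack trace above.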






[jira] [Commented] (SOLR-12378) Support missing versionField on indexed docs in DocBasedVersionConstraintsURP

2018-05-22 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484612#comment-16484612
 ] 

David Smiley commented on SOLR-12378:
-

Did you intentionally misspell an existing comment RE "identiy"?

Can you please update your patch for master?  This file was recently split in 
two.




[jira] [Commented] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484603#comment-16484603
 ] 

Simon Willnauer commented on LUCENE-8328:
-

+1 good catch. I do wonder if we can get a test that is more explicit and 
doesn't run for 4 seconds? Maybe in _TestReaderPool_.




[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-22 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484599#comment-16484599
 ] 

David Smiley commented on SOLR-11779:
-

Maybe it doesn't make sense in 7x to have enable=true by default (thus 
enable=false)? Or would it be completely benign, so no big deal? Would an 
existing collection suddenly start getting metrics collected for it?

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch, SOLR-11779.patch, SOLR-11779.patch, 
> c1.png, c2.png, core.json, d1.png, d2.png, d3.png, jvm-list.json, 
> jvm-string.json, jvm.json, o1.png, u1.png
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (eg. using RRD4j) to keep the size of the historic 
> data constant (eg. ~64kB per metric), while at the same time providing out 
> of the box useful insights into the basic system behavior over time. This 
> data could be persisted to the {{.system}} collection as blobs, and it could 
> also be presented in the Admin UI as graphs.
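The constant-size round-robin idea can be sketched as a plain ring buffer: old samples are overwritten once capacity is reached, so storage stays fixed no matter how long the metric is tracked. This is illustrative only; the actual proposal uses RRD4j, which adds consolidation functions and multiple archives on top of the same principle.

```java
// Illustrative fixed-size ring buffer for one metric: the RRD idea in
// miniature. Storage is bounded by 'capacity' regardless of how many
// samples are recorded over time.
class MetricHistorySketch {
    private final double[] samples;
    private int next = 0;
    private int size = 0;

    MetricHistorySketch(int capacity) {
        this.samples = new double[capacity];
    }

    void record(double value) {
        samples[next] = value;          // overwrite oldest slot when full
        next = (next + 1) % samples.length;
        if (size < samples.length) size++;
    }

    int size() { return size; }

    /** Average over the retained window. */
    double average() {
        double sum = 0;
        for (int i = 0; i < size; i++) sum += samples[i];
        return sum / size;
    }
}
```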






[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-22 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484591#comment-16484591
 ] 

David Smiley commented on SOLR-12386:
-

Analysis strategy:
 * Enumerate each test that has failed this way (share here).  What do they 
have in common?
 * Try to make a test that reproduces the failure easily; maybe with _some_ 
beasting.
 * ? go digging; come up with hare-brained theories ?

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have sporadically 
> failed with the message "Can't find resource" pertaining to a 
> file that is in the default ConfigSet yet mysteriously can't be found.  This 
> happens when a collection is being created that ultimately fails for this 
> reason.






[jira] [Assigned] (SOLR-9685) tag a query in JSON syntax

2018-05-22 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned SOLR-9685:
--

Assignee: Mikhail Khludnev

> tag a query in JSON syntax
> --
>
> Key: SOLR-9685
> URL: https://issues.apache.org/jira/browse/SOLR-9685
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, JSON Request API
>Reporter: Yonik Seeley
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-9685.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> There should be a way to tag a query/filter in JSON syntax.
> Perhaps these two forms could be equivalent:
> {code}
> "{!tag=COLOR}color:blue"
> { tagged : { COLOR : "color:blue" } }
> {code}
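One way to realize the equivalence is to rewrite the JSON form into the existing local-params syntax. This is a hypothetical sketch; the real JSON Request API plumbing would parse the request body and hand off to the query parser framework rather than build a string.

```java
import java.util.Map;

// Sketch: rewrite { tagged : { COLOR : "color:blue" } } into the
// equivalent local-params form "{!tag=COLOR}color:blue".
class TaggedQuerySketch {
    static String toLocalParams(Map<String, String> tagged) {
        if (tagged.size() != 1) {
            throw new IllegalArgumentException("expected exactly one tag");
        }
        Map.Entry<String, String> e = tagged.entrySet().iterator().next();
        return "{!tag=" + e.getKey() + "}" + e.getValue();
    }
}
```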






Re: BadApple candidates

2018-05-22 Thread David Smiley
Please don't bad-apple:
* CreateRoutedAliasTest.  The failure I observed (thetaphi build 226) was
on branch_7_3, which does not have SOLR-12308 (which is on master and
branch_7x) and which I think solves that failure.

Could you change your failure reporting to only consider master & branch_7x?

I just now applied @AwaitsFix to ConcurrentCreateRoutedAliasTest (similar
name but not same) after filing SOLR-12386 for it -- the infamous (to us)
"Can't find resource" relating to a configset file that ought to be there.

AFAICT there are no other "alias"-related failures pending.

On Mon, May 21, 2018 at 11:01 AM Erick Erickson 
wrote:

> I'm going to change how I collect the badapple candidates. After
> getting a little
> overwhelmed by the number of failure e-mails (even ignoring the ones with
> BadApple enabled), "It come to me in a vision! In a flash!"" (points if you
> know where that comes from, hint: Old music involving a pickle).
>
> Since I collect failures for a week and then filter them by what's also in
> Hoss's results from two weeks ago, that's really equivalent to creating the
> candidate list from the intersection of the most recent week of Hoss's
> results and the results from _three_ weeks ago. Much faster too. Thanks Hoss!
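The shortcut described above is just a set intersection over test names. A sketch (the test names in the usage example are hypothetical):

```java
import java.util.HashSet;
import java.util.Set;

class BadAppleCandidates {
    // Candidates = tests failing in the most recent week of reports AND
    // also present in the report from three weeks ago.
    static Set<String> candidates(Set<String> recentWeek,
                                  Set<String> threeWeeksAgo) {
        Set<String> out = new HashSet<>(recentWeek);
        out.retainAll(threeWeeksAgo); // keep only tests in both reports
        return out;
    }
}
```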
>
> So that's what I'll do going forward.
>
> Meanwhile, here's the list for this Thursday.
>
> BadApple candidates: I'll BadApple these on Thursday unless there are
> objections
>   org.apache.lucene.search.TestLRUQueryCache.testBulkScorerLocking
>org.apache.solr.TestDistributedSearch.test
>org.apache.solr.cloud.AddReplicaTest.test
>org.apache.solr.cloud.AssignBackwardCompatibilityTest.test
>
>  org.apache.solr.cloud.CreateRoutedAliasTest.testCollectionNamesMustBeAbsent
>org.apache.solr.cloud.CreateRoutedAliasTest.testTimezoneAbsoluteDate
>org.apache.solr.cloud.CreateRoutedAliasTest.testV1
>org.apache.solr.cloud.CreateRoutedAliasTest.testV2
>
>  
> org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica
>org.apache.solr.cloud.LIRRollingUpdatesTest.testNewReplicaOldLeader
>org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.basicTest
>
>  
> org.apache.solr.cloud.LeaderVoteWaitTimeoutTest.testMostInSyncReplicasCanWinElection
>org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove
>org.apache.solr.cloud.RestartWhileUpdatingTest.test
>
>  
> org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader
>org.apache.solr.cloud.TestPullReplica.testCreateDelete
>org.apache.solr.cloud.TestPullReplica.testKillLeader
>org.apache.solr.cloud.TestSolrCloudWithKerberosAlt.testBasics
>org.apache.solr.cloud.UnloadDistributedZkTest.test
>
>  
> org.apache.solr.cloud.api.collections.CollectionsAPIAsyncDistributedZkTest.testAsyncRequests
>
>  
> org.apache.solr.cloud.api.collections.CustomCollectionTest.testCustomCollectionsAPI
>org.apache.solr.cloud.autoscaling.ComputePlanActionTest.testNodeLost
>
>  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration
>
>  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration
>org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger
>org.apache.solr.cloud.autoscaling.NodeAddedTriggerTest.testRestoreState
>
>  
> org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testBelowSearchRate
>
>  
> org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode
>org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger
>org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test
>org.apache.solr.cloud.hdfs.StressHdfsTest.test
>org.apache.solr.handler.TestSQLHandler.doTest
>org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth
>org.apache.solr.uninverting.TestDocTermOrds.testTriggerUnInvertLimit
>org.apache.solr.update.TestHdfsUpdateLog.testFSThreadSafety
>org.apache.solr.update.TestInPlaceUpdatesDistrib.test
>
>
> Number of AwaitsFix: 21 Number of BadApples: 99
>
> *AwaitsFix Annotations:
>
>
> Lucene AwaitsFix
> GeoPolygonTest.java
>testLUCENE8276_case3()
>//@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8276
> ")
>
> GeoPolygonTest.java
>testLUCENE8280()
>//@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8280
> ")
>
> GeoPolygonTest.java
>testLUCENE8281()
>//@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281
> ")
>
> RandomGeoPolygonTest.java
>testCompareBigPolygons()
>//@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281
> ")
>
> RandomGeoPolygonTest.java
>testCompareSmallPolygons()
>//@AwaitsFix(bugUrl="https://issues.apache.org/jira/browse/LUCENE-8281
> ")
>
> TestControlledRealTimeReopenThread.java
>testCRTReopen()
>@AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-5737
> ")
>
> TestICUNormalizer2CharFilter.java
>

[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484582#comment-16484582
 ] 

ASF subversion and git services commented on SOLR-12386:


Commit 982268efd14147ab99ab5b3e152fd4106e6581f1 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=982268e ]

SOLR-12386: Apply AwaitsFix to ConcurrentCreateRoutedAliasTest

(cherry picked from commit af59c46)





[jira] [Commented] (SOLR-9685) tag a query in JSON syntax

2018-05-22 Thread Dmitry Tikhonov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484580#comment-16484580
 ] 

Dmitry Tikhonov commented on SOLR-9685:
---

A patch with new tests is attached. Please review.




[jira] [Updated] (SOLR-9685) tag a query in JSON syntax

2018-05-22 Thread Dmitry Tikhonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Tikhonov updated SOLR-9685:
--
Attachment: SOLR-9685.patch




[jira] [Commented] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484575#comment-16484575
 ] 

ASF subversion and git services commented on SOLR-12386:


Commit af59c46363f3497d44548021e4ff15d924ddbec3 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=af59c46 ]

SOLR-12386: Apply AwaitsFix to ConcurrentCreateRoutedAliasTest





[jira] [Updated] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nhat Nguyen updated LUCENE-8328:

Attachment: (was: LUCENE-8328.patch)

> at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
> The problem is that `ReadersAndUpdates#getLatestReader` is executed 
> concurrently without holding the lock.
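
The race above can be sketched with a minimal, hypothetical ref-counted pool. These are illustrative stand-in classes, not Lucene's actual ReadersAndUpdates API: the point is only that `getLatestReader` must run under the same monitor as `getReader`, so a merge thread can never incRef a reader whose last reference the refresh thread has already dropped.

```java
// Hypothetical sketch of the locking fix; class and method names are
// illustrative, not the real Lucene classes.
import java.util.concurrent.atomic.AtomicInteger;

class MockReader {
    final AtomicInteger refCount = new AtomicInteger(1);

    void incRef() {
        // Mirrors IndexReader.incRef: throws if the reader is already closed.
        if (refCount.getAndIncrement() <= 0) {
            refCount.decrementAndGet();
            throw new IllegalStateException("this reader is closed");
        }
    }

    void decRef() {
        refCount.decrementAndGet(); // reaching 0 means "closed"
    }
}

class ReaderPool {
    private MockReader current = new MockReader();

    // Merge path: take a reference to the pooled reader.
    synchronized MockReader getReader() {
        current.incRef();
        return current;
    }

    // Refresh path: swap in a fresh reader and drop the pool's reference
    // to the old one. Because this holds the same monitor as getReader(),
    // no merge thread can incRef the old reader after its ref is dropped;
    // without "synchronized" here, decRef() races with incRef() above.
    synchronized MockReader getLatestReader() {
        MockReader fresh = new MockReader();
        current.decRef();
        current = fresh;
        return current;
    }
}

public class LockDemo {
    public static void main(String[] args) {
        ReaderPool pool = new ReaderPool();
        MockReader r = pool.getReader();   // merge thread holds a ref
        pool.getLatestReader();            // refresh swaps in a new reader
        r.decRef();                        // merge releases its ref safely
        System.out.println("ok");
    }
}
```

Under this sketch, the merge thread's reference keeps the old reader alive (refCount stays above zero) until the merge itself releases it, which is the invariant the unlocked `getLatestReader` violated.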






[jira] [Commented] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484572#comment-16484572
 ] 

Nhat Nguyen commented on LUCENE-8328:
-

/cc [~simonw]

> ReadersAndUpdates#getLatestReader should execute under lock
> ---
>
> Key: LUCENE-8328
> URL: https://issues.apache.org/jira/browse/LUCENE-8328
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Priority: Major
> Attachments: LUCENE-8328.patch
>
>
> It's possible for a merge thread to acquire an index reader which is closed 
> before it can incRef.
> {noformat}
> Merge stack trace:
> at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0)
> at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257)
> at org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
> at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355)
> at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043)
> at 
> org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
> at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140)
> Refresh stack trace:
> at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
> at 
> org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
> at 
> org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
> at 
> org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
> at 
> org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
> at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
> at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
> The problem is that `ReadersAndUpdates#getLatestReader` is executed 
> concurrently without holding the lock.






[jira] [Updated] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nhat Nguyen updated LUCENE-8328:

Description: 
It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.
{noformat}
Merge stack trace:

at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0)
at org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257)
at org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198)
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355)
at org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043)
at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140)

Refresh stack trace:
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 
org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
at 
org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
at 
org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
at 
org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
The problem is that `ReadersAndUpdates#getLatestReader` is executed 
concurrently without holding the lock.

  was:
It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.

*Merge stack trace:*
{noformat}
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
closed at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198) 
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
 at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
 

*Refresh stack trace:*
{noformat}
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 
org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
at 
org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
at 
org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
at 

[jira] [Updated] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nhat Nguyen updated LUCENE-8328:

Attachment: LUCENE-8328.patch

> ReadersAndUpdates#getLatestReader should execute under lock
> ---
>
> Key: LUCENE-8328
> URL: https://issues.apache.org/jira/browse/LUCENE-8328
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.4, master (8.0)
>Reporter: Nhat Nguyen
>Priority: Major
> Attachments: LUCENE-8328.patch
>
>
> It's possible for a merge thread to acquire an index reader which is closed 
> before it can incRef.
> *Merge stack trace:*
> {noformat}
> Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader 
> is closed at 
> org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198)
>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
>  at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
> org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
>  at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
> org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
>  at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
>  at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
>  at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
>  
> *Refresh stack trace:*
> {noformat}
> at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
> at 
> org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
> at 
> org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
> at 
> org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
> at 
> org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
> at 
> org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
> at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
> at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
> at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
> at 
> org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
> at 
> org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
> at 
> org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
>  
> The problem is that `ReadersAndUpdates#getLatestReader` is executed 
> concurrently without holding the lock.






[jira] [Updated] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nhat Nguyen updated LUCENE-8328:

Description: 
It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.

*Merge stack trace:*
{noformat}
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
closed at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198) 
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
 at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
 

*Refresh stack trace:*
{noformat}
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 
org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
at 
org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
at 
org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
at 
org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
 

The problem is that `ReadersAndUpdates#getLatestReader` is executed 
concurrently without holding the lock.

  was:
It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.

*Merge stack trace:*
{noformat}
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
closed at 

__randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0) at 
org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257) at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198) 
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
 at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140) 
at 

{noformat}
 

*Refresh stack trace:*
{noformat}
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 
org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
at 
org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
at 
org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
at 

[jira] [Updated] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nhat Nguyen updated LUCENE-8328:

Description: 
It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.

*Merge stack trace:*
{noformat}
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
closed at 

__randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0) at 
org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257) at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198) 
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
 at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140) 
at 

{noformat}
 

*Refresh stack trace:*
{noformat}
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 
org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
at 
org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
at 
org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
at 
org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
 

The problem is that `ReadersAndUpdates#getLatestReader` is executed 
concurrently without holding the lock.

  was:
It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.

*Merge stack trace:*

{noformat}
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
closed at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0) at 
org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257) at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198) 
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
 at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140) 
at 

{noformat}
 

*Refresh stack trace:*
{noformat}
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 
org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
at 
org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
at 
org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
at 

[jira] [Updated] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nhat Nguyen updated LUCENE-8328:

Description: 
It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.

*Merge stack trace:*

{noformat}
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
closed at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0) at 
org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257) at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198) 
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
 at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140) 
at 

{noformat}
 

*Refresh stack trace:*
{noformat}
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 
org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
at 
org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
at 
org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
at 
org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
 

The problem is that `ReadersAndUpdates#getLatestReader` is executed 
concurrently without holding the lock.

  was:
It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.

*Merge stack trace:*
{noformat}
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
closed at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0) at 
org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257) at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198) 
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
 at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140) 
at 
org.apache.lucene.index.TestSoftDeletesRetentionMergePolicy.lambda$testMergeAndRefreshDeletedSegmentsConcurrently$21(TestSoftDeletesRetentionMergePolicy.java:597)
 ... 1 more
{noformat}
 

*Refresh stack trace:*
{noformat}
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 
org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
at 

[jira] [Updated] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nhat Nguyen updated LUCENE-8328:

Description: 
It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.

 

 
{noformat}
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
closed at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0) at 
org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257) at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198) 
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
 at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140) 
at 
org.apache.lucene.index.TestSoftDeletesRetentionMergePolicy.lambda$testMergeAndRefreshDeletedSegmentsConcurrently$21(TestSoftDeletesRetentionMergePolicy.java:597)
 ... 1 more
{noformat}
 
{noformat}
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 
org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
at 
org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
at 
org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
at 
org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
 

The problem is that `ReadersAndUpdates#getLatestReader` can be executed 
concurrently without holding the lock.
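Not the actual Lucene code, but a minimal Java sketch of the race and the fix: both the pin (lookup plus incRef) and the refresh-time swap run under the same lock, so the reader handed out cannot be closed underneath the caller. All class and method names here are illustrative.

```java
// Hypothetical sketch of the locking fix; simplified stand-in for
// Lucene's reference-counted reader handling, not the real classes.
class ReaderRef {
    private int refCount = 1;          // reader starts open with one reference
    private boolean closed = false;

    // Returns false instead of throwing AlreadyClosedException, for brevity.
    synchronized boolean tryIncRef() {
        if (closed) return false;
        refCount++;
        return true;
    }

    synchronized void decRef() {
        if (--refCount == 0) closed = true;
    }

    synchronized boolean isClosed() { return closed; }
}

class ReadersAndUpdatesSketch {
    private ReaderRef current = new ReaderRef();

    // The fix: look up and pin the latest reader in one synchronized
    // section, so a concurrent swap cannot close it in between.
    synchronized ReaderRef getLatestReader() {
        if (!current.tryIncRef()) {
            throw new IllegalStateException("reader closed");
        }
        return current;
    }

    // Refresh path: swap in a new reader and drop the old reference,
    // under the same lock as getLatestReader.
    synchronized void swapNewReader() {
        ReaderRef old = current;
        current = new ReaderRef();
        old.decRef();
    }
}
```

Without the shared lock, the refresh thread's decRef could drop the last reference between the merge thread's lookup and its incRef, producing exactly the AlreadyClosedException in the traces above.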

  was:
It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.

 

*Merge stack trace:*

 
{noformat}
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
closed at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0) at 
org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257) at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198) 
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
 at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140) 
at 
org.apache.lucene.index.TestSoftDeletesRetentionMergePolicy.lambda$testMergeAndRefreshDeletedSegmentsConcurrently$21(TestSoftDeletesRetentionMergePolicy.java:597)
 ... 1 more
{noformat}
 

*Refresh stack trace:*
{noformat}
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 

[jira] [Created] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)
Nhat Nguyen created LUCENE-8328:
---

 Summary: ReadersAndUpdates#getLatestReader should execute under 
lock
 Key: LUCENE-8328
 URL: https://issues.apache.org/jira/browse/LUCENE-8328
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 7.4, master (8.0)
Reporter: Nhat Nguyen


It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.

 

*Merge stack trace:*

 
{noformat}
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
closed at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0) at 
org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257) at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198) 
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
 at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140) 
at 
org.apache.lucene.index.TestSoftDeletesRetentionMergePolicy.lambda$testMergeAndRefreshDeletedSegmentsConcurrently$21(TestSoftDeletesRetentionMergePolicy.java:597)
 ... 1 more
{noformat}
 

*Refresh stack trace:*
{noformat}
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 
org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
at 
org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
at 
org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
at 
org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
 

The problem is that `ReadersAndUpdates#getLatestReader` can be executed 
concurrently without holding the lock.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8328) ReadersAndUpdates#getLatestReader should execute under lock

2018-05-22 Thread Nhat Nguyen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nhat Nguyen updated LUCENE-8328:

Description: 
It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.

*Merge stack trace:*
{noformat}
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
closed at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0) at 
org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257) at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198) 
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
 at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140) 
at 
org.apache.lucene.index.TestSoftDeletesRetentionMergePolicy.lambda$testMergeAndRefreshDeletedSegmentsConcurrently$21(TestSoftDeletesRetentionMergePolicy.java:597)
 ... 1 more
{noformat}
 

*Refresh stack trace:*
{noformat}
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 
org.apache.lucene.index.ReadersAndUpdates.swapNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:686)
at 
org.apache.lucene.index.ReadersAndUpdates.getLatestReader(ReadersAndUpdates.java:260)
at 
org.elasticsearch.index.shard.ElasticsearchMergePolicy.keepFullyDeletedSegment(ElasticsearchMergePolicy.java:143)
at 
org.apache.lucene.index.ReadersAndUpdates.keepFullyDeletedSegment(ReadersAndUpdates.java:769)
at org.apache.lucene.index.IndexWriter.isFullyDeleted(IndexWriter.java:5124)
at org.apache.lucene.index.IndexWriter.writeReaderPool(IndexWriter.java:3306)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:514)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
at 
org.apache.lucene.index.FilterDirectoryReader.doOpenIfChanged(FilterDirectoryReader.java:104)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140){noformat}
 

The problem is that `ReadersAndUpdates#getLatestReader` can be executed 
concurrently without holding the lock.

  was:
It's possible for a merge thread to acquire an index reader which is closed 
before it can incRef.

 

 
{noformat}
Caused by: org.apache.lucene.store.AlreadyClosedException: this IndexReader is 
closed at __randomizedtesting.SeedInfo.seed([136983A068AA2F9D]:0) at 
org.apache.lucene.index.IndexReader.ensureOpen(IndexReader.java:257) at 
org.apache.lucene.index.IndexReader.incRef(IndexReader.java:184) at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:198) 
at 
org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:728)
 at org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4355) at 
org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:4043) at 
org.apache.lucene.index.SerialMergeScheduler.merge(SerialMergeScheduler.java:40)
 at org.apache.lucene.index.IndexWriter.maybeMerge(IndexWriter.java:2145) at 
org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:542) at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:288)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:263)
 at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:253)
 at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140) 
at 
org.apache.lucene.index.TestSoftDeletesRetentionMergePolicy.lambda$testMergeAndRefreshDeletedSegmentsConcurrently$21(TestSoftDeletesRetentionMergePolicy.java:597)
 ... 1 more
{noformat}
 
{noformat}
at org.apache.lucene.index.IndexReader.decRef(IndexReader.java:238)
at 
org.apache.lucene.index.ReadersAndUpdates.createNewReaderWithLatestLiveDocs(ReadersAndUpdates.java:675)
at 

[jira] [Updated] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-22 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12386:

Description: Some tests, especially ConcurrentCreateRoutedAliasTest, have 
sporadically failed with the message "Can't find resource" pertaining to a file 
that is in the default ConfigSet yet mysteriously can't be found.  This happens 
when a collection is being created that ultimately fails for this reason.

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>
> Some tests, especially ConcurrentCreateRoutedAliasTest, have sporadically 
> failed with the message "Can't find resource" pertaining to a file that is in 
> the default ConfigSet yet mysteriously can't be found.  This happens when a 
> collection is being created that ultimately fails for this reason.






[jira] [Updated] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-22 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12386:

Environment: (was: Some tests, especially ConcurrentCreateRoutedAliasTest, 
have sporadically failed with the message "Can't find resource" pertaining to a 
file that is in the default ConfigSet yet mysteriously can't be found.  This 
happens when a collection is being created that ultimately fails for this 
reason.)

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>







[jira] [Updated] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-22 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12386:

Attachment: cant find resource, stacktrace.txt

> Test fails for "Can't find resource" for files in the _default configset
> 
>
> Key: SOLR-12386
> URL: https://issues.apache.org/jira/browse/SOLR-12386
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
> Environment: Some tests, especially ConcurrentCreateRoutedAliasTest, have 
> sporadically failed with the message "Can't find resource" pertaining to a 
> file that is in the default ConfigSet yet mysteriously can't be found.  This 
> happens when a collection is being created that ultimately fails for this 
> reason.
>Reporter: David Smiley
>Priority: Minor
> Attachments: cant find resource, stacktrace.txt
>
>







[jira] [Updated] (SOLR-12283) Unable To Load ZKPropertiesWriter when dih.jar is added as runtimelib BLOB in .system collection

2018-05-22 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12283:

Attachment: (was: image.png)

> Unable To Load ZKPropertiesWriter when dih.jar is added as runtimelib BLOB in 
> .system collection
> 
>
> Key: SOLR-12283
> URL: https://issues.apache.org/jira/browse/SOLR-12283
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 6.6.1, 7.3
> Environment: Debian
> SolrCloud
>Reporter: Maxence SAUNIER
>Priority: Blocker
> Attachments: indexation_events.xml, modified-DIH.zip, 
> modified-DIH.zip, mysql-connector-java-5.1.46-bin.jar, 
> mysql-connector-java-5.1.46.jar, request_handler_config.json, 
> solr-core-7.3.0.jar, solr-dataimporthandler-7.3.0.jar, 
> solr-dataimporthandler-extras-7.3.0.jar, solr-solrj-7.3.0.jar, solr.log, 
> solr.log, solr.log
>
>
> Hello,
> For two weeks I have been trying to solve this problem with the solr-user 
> community, without success. I seriously wonder whether this is a problem in 
> the code. I do not have the impression that many people use DIH with Solr's 
> cloud version.
> I found no similar problem on the Internet.
> For information, the following DIH configuration comes from DIHs that work 
> in production on a single Solr server. The connections to the databases are 
> therefore correct.
> *Errors messages:*
> {panel:title=DataImporter}
> {code:java}
> Full Import 
> failed:org.apache.solr.handler.dataimport.DataImportHandlerException: Unable 
> to PropertyWriter implementation:ZKPropertiesWriter
>   at 
> org.apache.solr.handler.dataimport.DataImporter.createPropertyWriter(DataImporter.java:339)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:420)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:483)
>   at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:183)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at org.eclipse.jetty.server.Server.handle(Server.java:530)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:256)
>   at 
> 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4651 - Unstable!

2018-05-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4651/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseG1GC

6 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
waitFor not elapsed but produced an event

Stack Trace:
java.lang.AssertionError: waitFor not elapsed but produced an event
at 
__randomizedtesting.SeedInfo.seed([28243E0556A97653:4BEF0887CF66057E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger(IndexSizeTriggerTest.java:180)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
waitFor not elapsed but produced an event

Stack Trace:
java.lang.AssertionError: waitFor not elapsed but produced an event
at 

[jira] [Updated] (SOLR-12283) Unable To Load ZKPropertiesWriter when dih.jar is added as runtimelib BLOB in .system collection

2018-05-22 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12283:

Attachment: image.png

> Unable To Load ZKPropertiesWriter when dih.jar is added as runtimelib BLOB in 
> .system collection
> 
>
> Key: SOLR-12283
> URL: https://issues.apache.org/jira/browse/SOLR-12283
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 6.6.1, 7.3
> Environment: Debian
> SolrCloud
>Reporter: Maxence SAUNIER
>Priority: Blocker
> Attachments: image.png, indexation_events.xml, modified-DIH.zip, 
> modified-DIH.zip, mysql-connector-java-5.1.46-bin.jar, 
> mysql-connector-java-5.1.46.jar, request_handler_config.json, 
> solr-core-7.3.0.jar, solr-dataimporthandler-7.3.0.jar, 
> solr-dataimporthandler-extras-7.3.0.jar, solr-solrj-7.3.0.jar, solr.log, 
> solr.log, solr.log
>
>
> Hello,
> For two weeks I have been trying to solve this problem with the solr-user 
> community, without success. I seriously wonder whether this is a problem in 
> the code. I do not have the impression that many people use DIH with Solr's 
> cloud version.
> I found no similar problem on the Internet.
> For information, the following DIH configuration comes from DIHs that work 
> in production on a single Solr server. The connections to the databases are 
> therefore correct.
> *Errors messages:*
> {panel:title=DataImporter}
> {code:java}
> Full Import 
> failed:org.apache.solr.handler.dataimport.DataImportHandlerException: Unable 
> to PropertyWriter implementation:ZKPropertiesWriter
>   at 
> org.apache.solr.handler.dataimport.DataImporter.createPropertyWriter(DataImporter.java:339)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:420)
>   at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:483)
>   at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:183)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
>   at org.eclipse.jetty.server.Server.handle(Server.java:530)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:256)
>   at 
> 

[jira] [Created] (SOLR-12386) Test fails for "Can't find resource" for files in the _default configset

2018-05-22 Thread David Smiley (JIRA)
David Smiley created SOLR-12386:
---

 Summary: Test fails for "Can't find resource" for files in the 
_default configset
 Key: SOLR-12386
 URL: https://issues.apache.org/jira/browse/SOLR-12386
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
 Environment: Some tests, especially ConcurrentCreateRoutedAliasTest, 
have sporadically failed with the message "Can't find resource" pertaining to a 
file that is in the default ConfigSet yet mysteriously can't be found.  This 
happens when a collection is being created that ultimately fails for this 
reason.
Reporter: David Smiley









[jira] [Commented] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-22 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484495#comment-16484495
 ] 

Andrzej Bialecki  commented on SOLR-11779:
--

Updated patch:

* Overseer leader node now collects aggregated metrics from all nodes. The 
aggregation method is just addition for now, which makes sense for the default 
metrics that this patch collects. There is one history DB per collection, and 
one each for {{solr.jvm}} and {{solr.node}}, representing a sort of global view 
of the cluster.
* added support for configuration via {{solr.xml:/solr/metrics/history}} or via 
{{/clusterprops.json:/metrics/history}}, which overrides values specified in 
{{solr.xml}}. Currently supported config options are:
** enable - boolean, default is true: enables collection of metrics history 
(note that it's always possible to retrieve existing metrics history even when 
enable == false)
** enableReplicas - boolean, default is false: enables collection of local 
per-replica (core) metrics history
** enableNodes - boolean, default is false: enables collection of local node 
and jvm metrics history.
** collectPeriod - int, in seconds, default is 60: metrics are collected and 
updated this often.
** syncPeriod - int, in seconds, default is 60: in-memory DBs are persisted at 
most this often (if modified).
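
Under those options, an illustrative {{solr.xml}} fragment might look like the 
following (element and attribute names here are my guess at the 
{{/solr/metrics/history}} syntax, not necessarily the committed form):

{code:xml}
<metrics>
  <history>
    <bool name="enable">true</bool>
    <bool name="enableReplicas">false</bool>
    <bool name="enableNodes">false</bool>
    <int name="collectPeriod">60</int>
    <int name="syncPeriod">60</int>
  </history>
</metrics>
{code}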

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch, SOLR-11779.patch, SOLR-11779.patch, 
> c1.png, c2.png, core.json, d1.png, d2.png, d3.png, jvm-list.json, 
> jvm-string.json, jvm.json, o1.png, u1.png
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (e.g. using RRD4j) to keep the size of the historic 
> data constant (e.g. ~64kB per metric), while at the same time providing out of the 
> box useful insights into the basic system behavior over time. This data could 
> be persisted to the {{.system}} collection as blobs, and it could be also 
> presented in the Admin UI as graphs.
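
The round-robin idea above can be sketched with a tiny self-contained Java class 
(a toy stand-in for an RRD library such as RRD4j, not actual patch code): once the 
buffer is full, each new sample overwrites the oldest, so the memory footprint 
stays constant no matter how long metrics are collected.

{code:java}
import java.util.Arrays;

/** Toy fixed-size round-robin history: newest samples overwrite the oldest. */
public class RoundRobinHistory {
    private final double[] samples;  // fixed-size backing store
    private long count = 0;          // total samples ever recorded

    public RoundRobinHistory(int capacity) {
        samples = new double[capacity];
    }

    public void record(double value) {
        samples[(int) (count % samples.length)] = value;  // overwrite oldest slot
        count++;
    }

    /** Returns the retained window, oldest to newest. */
    public double[] window() {
        int n = (int) Math.min(count, samples.length);
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            out[i] = samples[(int) ((count - n + i) % samples.length)];
        }
        return out;
    }

    public static void main(String[] args) {
        RoundRobinHistory h = new RoundRobinHistory(3);
        for (int v = 1; v <= 5; v++) h.record(v);
        System.out.println(Arrays.toString(h.window()));  // [3.0, 4.0, 5.0]
    }
}
{code}

A real RRD additionally consolidates samples into coarser archives (e.g. per-hour 
averages), which is what keeps a long history useful in a small constant space.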



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11779) Basic long-term collection of aggregated metrics

2018-05-22 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11779:
-
Attachment: SOLR-11779.patch

> Basic long-term collection of aggregated metrics
> 
>
> Key: SOLR-11779
> URL: https://issues.apache.org/jira/browse/SOLR-11779
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Attachments: SOLR-11779.patch, SOLR-11779.patch, SOLR-11779.patch, 
> c1.png, c2.png, core.json, d1.png, d2.png, d3.png, jvm-list.json, 
> jvm-string.json, jvm.json, o1.png, u1.png
>
>
> Tracking the key metrics over time is very helpful in understanding the 
> cluster and user behavior.
> Currently even basic metrics tracking requires setting up an external system 
> and either polling {{/admin/metrics}} or using {{SolrMetricReporter}}-s. The 
> advantage of this setup is that these external tools usually provide a lot of 
> sophisticated functionality. The downside is that they don't ship out of the 
> box with Solr and require additional admin effort to set up.
> Solr could collect some of the key metrics and keep their historical values 
> in a round-robin database (e.g. using RRD4j) to keep the size of the historic 
> data constant (e.g. ~64kB per metric), while at the same time providing out of the 
> box useful insights into the basic system behavior over time. This data could 
> be persisted to the {{.system}} collection as blobs, and it could be also 
> presented in the Admin UI as graphs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11616) Backup failing on a constantly changing index with NoSuchFileException

2018-05-22 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484468#comment-16484468
 ] 

David Smiley commented on SOLR-11616:
-

I simply found it by code inspection, reviewing all callers of getSearcher.  See 
SOLR-12374; we can continue this discussion there.

> Backup failing on a constantly changing index with NoSuchFileException
> --
>
> Key: SOLR-11616
> URL: https://issues.apache.org/jira/browse/SOLR-11616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11616.patch, SOLR-11616.patch, solr-6.3.log, 
> solr-7.1.log
>
>
> As reported by several users on SOLR-9120, Solr backups fail with 
> NoSuchFileException on a constantly changing index. 
> Users linked SOLR-9120 to the root cause, as the stack trace is the same, but 
> the fix proposed there won't stop the backups from failing.
> We need to implement a similar fix in {{SnapShooter#createSnapshot}} to fix 
> the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12353) SolrDispatchFilter expensive non-conditional debug line degrades performance

2018-05-22 Thread Pascal Proulx (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484441#comment-16484441
 ] 

Pascal Proulx edited comment on SOLR-12353 at 5/22/18 7:19 PM:
---

Sorry I thought I had replied. I believe any of that would remedy the issue in 
our case. I suppose I'd agree about hosts file on prod machines. But in our 
case it affects developer(s) as well, where that's impractical, so it does 
happen. Thanks


was (Author: pplx):
Sorry I thought I had replied. I believe any of that would remedy the issue in 
our case. I suppose I'd agree about hosts file on prod machines. But in our 
case it affects developer(s) as well, so it does happen. Thanks

> SolrDispatchFilter expensive non-conditional debug line degrades performance
> 
>
> Key: SOLR-12353
> URL: https://issues.apache.org/jira/browse/SOLR-12353
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, Authentication, logging
>Affects Versions: 6.6.3
>Reporter: Pascal Proulx
>Assignee: Erick Erickson
>Priority: Major
>
> Hello,
> We use Solr 6.6.3. Recently, on one network, after switching on authentication 
> (security.json), we began experiencing significant delays (5-10 seconds) in 
> fulfilling each request to the /solr index.
> I debugged the issue and it was essentially triggered by line 456 of 
> SolrDispatchFilter.java:
> {code:java}
> log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
> request.getLocalName(), request.getLocalPort());
> {code}
> The issue is that on machines and networks with poor configuration, or with DNS 
> issues in particular, request.getLocalName() can trigger expensive reverse 
> DNS queries for the ethernet interfaces, and will only return within a 
> reasonable timeframe if the hostname is manually written into /etc/hosts.
> More to the point, request.getLocalName() should be considered an expensive 
> operation in general, and in SolrDispatchFilter it runs unconditionally even 
> if debug is disabled.
> I would suggest to either replace request.getLocalName/Port here, or at the 
> least, wrap the debug operation so it doesn't affect any production systems:
> {code:java}
> if (log.isDebugEnabled()) {
> log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
> request.getLocalName(), request.getLocalPort());
> }
> {code}
> The authenticateRequest method in question is private so we could not 
> override it and making another HttpServletRequestWrapper to circumvent the 
> servlet API was doubtful.
> Thank you
>  
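
The cost described above is easy to reproduce in isolation: Java evaluates every 
argument of log.debug() before the logger can check its level, so the expensive 
lookup runs even when debug output is discarded. A minimal self-contained sketch 
(the logger and lookup here are stand-ins, not SLF4J or the servlet API):

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

/** Shows that log arguments are evaluated even when the log level is disabled. */
public class DebugGuardDemo {
    static final AtomicInteger calls = new AtomicInteger();
    static final boolean DEBUG_ENABLED = false;  // stand-in for log.isDebugEnabled()

    /** Stand-in for an expensive call like request.getLocalName(). */
    static String expensiveLookup() {
        calls.incrementAndGet();
        return "host.example";
    }

    /** Stand-in logger: discards the message when debug is disabled. */
    static void debug(String msg, Object... args) {
        if (DEBUG_ENABLED) System.out.println(msg + " " + java.util.Arrays.toString(args));
    }

    public static void main(String[] args) {
        debug("unguarded: {}", expensiveLookup());  // lookup runs anyway
        if (DEBUG_ENABLED) {                        // guarded: lookup is skipped
            debug("guarded: {}", expensiveLookup());
        }
        System.out.println("expensive lookups: " + calls.get());  // prints 1, not 2
    }
}
{code}

With the guard, the expensive call disappears entirely when debug logging is off, 
which is exactly what wrapping the SolrDispatchFilter line in isDebugEnabled() buys.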



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12353) SolrDispatchFilter expensive non-conditional debug line degrades performance

2018-05-22 Thread Pascal Proulx (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484441#comment-16484441
 ] 

Pascal Proulx commented on SOLR-12353:
--

Sorry I thought I had replied. I believe any of that would remedy the issue in 
our case. I suppose I'd agree about hosts file on prod machines. But in our 
case it affects developer(s) as well, so it does happen.

> SolrDispatchFilter expensive non-conditional debug line degrades performance
> 
>
> Key: SOLR-12353
> URL: https://issues.apache.org/jira/browse/SOLR-12353
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, Authentication, logging
>Affects Versions: 6.6.3
>Reporter: Pascal Proulx
>Assignee: Erick Erickson
>Priority: Major
>
> Hello,
> We use Solr 6.6.3. Recently, on one network, after switching on authentication 
> (security.json), we began experiencing significant delays (5-10 seconds) in 
> fulfilling each request to the /solr index.
> I debugged the issue and it was essentially triggered by line 456 of 
> SolrDispatchFilter.java:
> {code:java}
> log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
> request.getLocalName(), request.getLocalPort());
> {code}
> The issue is that on machines and networks with poor configuration, or with DNS 
> issues in particular, request.getLocalName() can trigger expensive reverse 
> DNS queries for the ethernet interfaces, and will only return within a 
> reasonable timeframe if the hostname is manually written into /etc/hosts.
> More to the point, request.getLocalName() should be considered an expensive 
> operation in general, and in SolrDispatchFilter it runs unconditionally even 
> if debug is disabled.
> I would suggest to either replace request.getLocalName/Port here, or at the 
> least, wrap the debug operation so it doesn't affect any production systems:
> {code:java}
> if (log.isDebugEnabled()) {
> log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
> request.getLocalName(), request.getLocalPort());
> }
> {code}
> The authenticateRequest method in question is private so we could not 
> override it and making another HttpServletRequestWrapper to circumvent the 
> servlet API was doubtful.
> Thank you
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12353) SolrDispatchFilter expensive non-conditional debug line degrades performance

2018-05-22 Thread Pascal Proulx (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484441#comment-16484441
 ] 

Pascal Proulx edited comment on SOLR-12353 at 5/22/18 7:19 PM:
---

Sorry I thought I had replied. I believe any of that would remedy the issue in 
our case. I suppose I'd agree about hosts file on prod machines. But in our 
case it affects developer(s) as well, so it does happen. Thanks


was (Author: pplx):
Sorry I thought I had replied. I believe any of that would remedy the issue in 
our case. I suppose I'd agree about hosts file on prod machines. But in our 
case it affects developer(s) as well, so it does happen.

> SolrDispatchFilter expensive non-conditional debug line degrades performance
> 
>
> Key: SOLR-12353
> URL: https://issues.apache.org/jira/browse/SOLR-12353
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, Authentication, logging
>Affects Versions: 6.6.3
>Reporter: Pascal Proulx
>Assignee: Erick Erickson
>Priority: Major
>
> Hello,
> We use Solr 6.6.3. Recently, on one network, after switching on authentication 
> (security.json), we began experiencing significant delays (5-10 seconds) in 
> fulfilling each request to the /solr index.
> I debugged the issue and it was essentially triggered by line 456 of 
> SolrDispatchFilter.java:
> {code:java}
> log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
> request.getLocalName(), request.getLocalPort());
> {code}
> The issue is that on machines and networks with poor configuration, or with DNS 
> issues in particular, request.getLocalName() can trigger expensive reverse 
> DNS queries for the ethernet interfaces, and will only return within a 
> reasonable timeframe if the hostname is manually written into /etc/hosts.
> More to the point, request.getLocalName() should be considered an expensive 
> operation in general, and in SolrDispatchFilter it runs unconditionally even 
> if debug is disabled.
> I would suggest to either replace request.getLocalName/Port here, or at the 
> least, wrap the debug operation so it doesn't affect any production systems:
> {code:java}
> if (log.isDebugEnabled()) {
> log.debug("Request to authenticate: {}, domain: {}, port: {}", request, 
> request.getLocalName(), request.getLocalPort());
> }
> {code}
> The authenticateRequest method in question is private so we could not 
> override it and making another HttpServletRequestWrapper to circumvent the 
> servlet API was doubtful.
> Thank you
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11616) Backup failing on a constantly changing index with NoSuchFileException

2018-05-22 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484430#comment-16484430
 ] 

Varun Thacker commented on SOLR-11616:
--

+1. Curious how you ran into this? 

> Backup failing on a constantly changing index with NoSuchFileException
> --
>
> Key: SOLR-11616
> URL: https://issues.apache.org/jira/browse/SOLR-11616
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11616.patch, SOLR-11616.patch, solr-6.3.log, 
> solr-7.1.log
>
>
> As reported by several users on SOLR-9120, Solr backups fail with 
> NoSuchFileException on a constantly changing index. 
> Users linked SOLR-9120 to the root cause, as the stack trace is the same, but 
> the fix proposed there won't stop the backups from failing.
> We need to implement a similar fix in {{SnapShooter#createSnapshot}} to fix 
> the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_172) - Build # 607 - Still Unstable!

2018-05-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/607/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
NanoTimeSource time diff=330300

Stack Trace:
java.lang.AssertionError: NanoTimeSource time diff=330300
at 
__randomizedtesting.SeedInfo.seed([252CC9674EFB50C2:1D40BA42DA2BF284]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:52)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:31)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 15888 lines...]
   [junit4] Suite: org.apache.solr.common.util.TestTimeSource
   [junit4]   2> 84491 INFO  
(SUITE-TestTimeSource-seed#[252CC9674EFB50C2]-worker) [] 

[jira] [Commented] (SOLR-12385) Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and unique SolrTestCaseHS functionality should be folded into the standard test infrastructure

2018-05-22 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484420#comment-16484420
 ] 

Steve Rowe commented on SOLR-12385:
---

Yonik commented about the need to fold SolrTestCaseHS into the standard 
framework [on 
SOLR-7214|https://issues.apache.org/jira/browse/SOLR-7214?focusedCommentId=14379043=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14379043]:

{quote}
We still need to handle SolrTestCaseHS as well (HS stands for HelioSearch). 
Some of that class should prob just go back into SolrTestCaseJ4, but some of it 
(the client stuff) might make sense somewhere else.
{quote}

> Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and 
> unique SolrTestCaseHS functionality should be folded into the standard test 
> infrastructure
> --
>
> Key: SOLR-12385
> URL: https://issues.apache.org/jira/browse/SOLR-12385
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Priority: Minor
>
> SolrTestCaseHS is extended only by JSON facet and JSON request tests.
> SolrTestCaseHS was introduced with JSON faceting in SOLR-7214 (I believe "HS" 
> is short for HelioSearch, where the functionality was originally developed). 
> It appears to enable a primitive distributed functionality, with no 
> ZooKeeper, allowing tight control of document distribution, like SolrCloud's 
> implicit routing.  Some JSON-specific handling stuff in there too, which I 
> think could be relocated to JSONTestUtil.
> Alan Woodward and others did a bunch of test conversions to SolrCloudTestCase 
> (e.g. SOLR-9132, SOLR-9110, SOLR-9065), but AFAICT never mentioned 
> SolrTestCaseHS-based tests as a target.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484367#comment-16484367
 ] 

ASF subversion and git services commented on SOLR-9480:
---

Commit 4ea26fbca2f14cb45aa38f00d821c99079939120 in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4ea26fb ]

SOLR-9480: minor cleanup of nits found by sarowe

(cherry picked from commit f9091473e0587a3470751f705e143e5b5796714c)


> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch, 
> SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11453) Create separate logger for slow requests

2018-05-22 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484356#comment-16484356
 ] 

Shawn Heisey commented on SOLR-11453:
-

I have been having a severe lack of free time lately.  Haven't had a chance to 
look at the latest.  Feel free to take over!

Because of the duplicate logging that would result if we don't send the slow 
logger's output to a separate log by default, I think it should default to a 
separate file.

Something I'm thinking about for a separate issue: offer an option (not 
enabled by default) that would send all of the standard request logging 
(org.apache.solr.core.SolrCore.Request) to a separate file.  I think it might 
significantly reduce the overall size of the main solr.log.


> Create separate logger for slow requests
> 
>
> Key: SOLR-11453
> URL: https://issues.apache.org/jira/browse/SOLR-11453
> Project: Solr
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 7.0.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-11453.patch, SOLR-11453.patch, SOLR-11453.patch, 
> SOLR-11453.patch, slowlog-informational.patch, slowlog-informational.patch, 
> slowlog-informational.patch
>
>
> There is some desire on the mailing list to create a separate logfile for 
> slow queries.  Currently it is not possible to do this cleanly, because the 
> WARN level used by slow query logging within the SolrCore class is also used 
> for other events that SolrCore can log.  Those messages would be out of place 
> in a slow query log; they should typically stay in the main solr logfile.
> I propose creating a custom logger for slow queries, similar to what has been 
> set up for request logging.  In the SolrCore class, which is 
> org.apache.solr.core.SolrCore, there is a special logger at 
> org.apache.solr.core.SolrCore.Request.  This is not a real class, just a 
> logger which makes it possible to handle those log messages differently than 
> the rest of Solr's logging.  I propose setting up another custom logger 
> within SolrCore which could be org.apache.solr.core.SolrCore.SlowRequest.
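
With such a logger in place, routing it to its own file becomes a pure logging 
configuration concern. An illustrative log4j2 fragment (appender and file names 
are hypothetical):

{code:xml}
<!-- Inside <Appenders>: a dedicated rolling file for slow queries. -->
<RollingFile name="SlowFile"
             fileName="${sys:solr.log.dir}/solr_slow_requests.log"
             filePattern="${sys:solr.log.dir}/solr_slow_requests.log.%i">
  <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p (%t) %c{1.} %m%n"/>
  <Policies>
    <SizeBasedTriggeringPolicy size="32 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="10"/>
</RollingFile>

<!-- Inside <Loggers>: additivity="false" keeps these entries out of the main solr.log. -->
<Logger name="org.apache.solr.core.SolrCore.SlowRequest" level="warn" additivity="false">
  <AppenderRef ref="SlowFile"/>
</Logger>
{code}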



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12385) Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and unique SolrTestCaseHS functionality should be folded into the standard test infrastructure

2018-05-22 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12385:
--
Description: 
SolrTestCaseHS is extended only by JSON facet and JSON request tests.

SolrTestCaseHS was introduced with JSON faceting in SOLR-7214 (I believe "HS" 
is short for HelioSearch, where the functionality was originally developed). It 
appears to enable a primitive distributed functionality, with no ZooKeeper, 
allowing tight control of document distribution, like SolrCloud's implicit 
routing.  Some JSON-specific handling stuff in there too, which I think could 
be relocated to JSONTestUtil.

Alan Woodward and others did a bunch of test conversions to SolrCloudTestCase 
(e.g. SOLR-9132, SOLR-9110, SOLR-9065), but AFAICT never mentioned 
SolrTestCaseHS-based tests as a target.

  was:
SolrTestCaseHS is extended only by JSON facet and JSON request tests.

SolrTestCaseHS was introduced with JSON faceting in SOLR-7214 (I believe "HS" 
is short for HelioSearch, where the functionality was originally developed). It 
appears to enable a primitive distributed functionality, with no ZooKeeper, 
allowing tight control of document distribution, like SolrCloud's implicit 
routing.  Some JSON-specific handling stuff in there too, which I think could 
be relocated to JSONTestUtil.

Alan Woodward did a bunch of test conversions to SolrCloudTestCase, but AFAICT 
never mentioned SolrTestCaseHS-based tests as a target.


> Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and 
> unique SolrTestCaseHS functionality should be folded into the standard test 
> infrastructure
> --
>
> Key: SOLR-12385
> URL: https://issues.apache.org/jira/browse/SOLR-12385
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Priority: Minor
>
> SolrTestCaseHS is extended only by JSON facet and JSON request tests.
> SolrTestCaseHS was introduced with JSON faceting in SOLR-7214 (I believe "HS" 
> is short for HelioSearch, where the functionality was originally developed). 
> It appears to enable a primitive distributed functionality, with no 
> ZooKeeper, allowing tight control of document distribution, like SolrCloud's 
> implicit routing.  Some JSON-specific handling stuff in there too, which I 
> think could be relocated to JSONTestUtil.
> Alan Woodward and others did a bunch of test conversions to SolrCloudTestCase 
> (e.g. SOLR-9132, SOLR-9110, SOLR-9065), but AFAICT never mentioned 
> SolrTestCaseHS-based tests as a target.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12385) Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and unique SolrTestCaseHS functionality should be folded into the standard test infrastructure

2018-05-22 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12385:
--
Environment: (was: SolrTestCaseHS is extended only by JSON facet and 
JSON request tests.

SolrTestCaseHS was introduced with JSON faceting in SOLR-7214 (I believe "HS" 
is short for HelioSearch, where the functionality was originally developed). It 
appears to enable a primitive distributed functionality, with no ZooKeeper, 
allowing tight control of document distribution, like SolrCloud's implicit 
routing.  Some JSON-specific handling stuff in there too, which I think could 
be relocated to JSONTestUtil.

Alan Woodward did a bunch of test conversions to SolrCloudTestCase, but AFAICT 
never mentioned SolrTestCaseHS-based tests as a target.)

> Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and 
> unique SolrTestCaseHS functionality should be folded into the standard test 
> infrastructure
> --
>
> Key: SOLR-12385
> URL: https://issues.apache.org/jira/browse/SOLR-12385
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Priority: Major
>







[jira] [Commented] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-22 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484342#comment-16484342
 ] 

ASF subversion and git services commented on SOLR-9480:
---

Commit f9091473e0587a3470751f705e143e5b5796714c in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f909147 ]

SOLR-9480: minor cleanup of nits found by sarowe


> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch, 
> SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.






[jira] [Updated] (SOLR-12385) Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and unique SolrTestCaseHS functionality should be folded into the standard test infrastructure

2018-05-22 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12385:
--
Issue Type: Task  (was: Bug)

> Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and 
> unique SolrTestCaseHS functionality should be folded into the standard test 
> infrastructure
> --
>
> Key: SOLR-12385
> URL: https://issues.apache.org/jira/browse/SOLR-12385
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Priority: Major
>
> SolrTestCaseHS is extended only by JSON facet and JSON request tests.
> SolrTestCaseHS was introduced with JSON faceting in SOLR-7214 (I believe "HS" 
> is short for HelioSearch, where the functionality was originally developed). 
> It appears to enable primitive distributed functionality, without ZooKeeper, 
> allowing tight control of document distribution, similar to SolrCloud's 
> implicit routing. There is also some JSON-specific handling in there, which I 
> think could be relocated to JSONTestUtil.
> Alan Woodward did a bunch of test conversions to SolrCloudTestCase, but 
> AFAICT never mentioned SolrTestCaseHS-based tests as a target.






[jira] [Updated] (SOLR-12385) Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and unique SolrTestCaseHS functionality should be folded into the standard test infrastructure

2018-05-22 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12385:
--
Description: 
SolrTestCaseHS is extended only by JSON facet and JSON request tests.

SolrTestCaseHS was introduced with JSON faceting in SOLR-7214 (I believe "HS" 
is short for HelioSearch, where the functionality was originally developed). It 
appears to enable primitive distributed functionality, without ZooKeeper, 
allowing tight control of document distribution, similar to SolrCloud's implicit 
routing. There is also some JSON-specific handling in there, which I think could 
be relocated to JSONTestUtil.

Alan Woodward did a bunch of test conversions to SolrCloudTestCase, but AFAICT 
never mentioned SolrTestCaseHS-based tests as a target.

> Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and 
> unique SolrTestCaseHS functionality should be folded into the standard test 
> infrastructure
> --
>
> Key: SOLR-12385
> URL: https://issues.apache.org/jira/browse/SOLR-12385
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Priority: Major
>
> SolrTestCaseHS is extended only by JSON facet and JSON request tests.
> SolrTestCaseHS was introduced with JSON faceting in SOLR-7214 (I believe "HS" 
> is short for HelioSearch, where the functionality was originally developed). 
> It appears to enable primitive distributed functionality, without ZooKeeper, 
> allowing tight control of document distribution, similar to SolrCloud's 
> implicit routing. There is also some JSON-specific handling in there, which I 
> think could be relocated to JSONTestUtil.
> Alan Woodward did a bunch of test conversions to SolrCloudTestCase, but 
> AFAICT never mentioned SolrTestCaseHS-based tests as a target.






[jira] [Updated] (SOLR-12385) Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and unique SolrTestCaseHS functionality should be folded into the standard test infrastructure

2018-05-22 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12385:
--
Priority: Minor  (was: Major)

> Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and 
> unique SolrTestCaseHS functionality should be folded into the standard test 
> infrastructure
> --
>
> Key: SOLR-12385
> URL: https://issues.apache.org/jira/browse/SOLR-12385
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Priority: Minor
>
> SolrTestCaseHS is extended only by JSON facet and JSON request tests.
> SolrTestCaseHS was introduced with JSON faceting in SOLR-7214 (I believe "HS" 
> is short for HelioSearch, where the functionality was originally developed). 
> It appears to enable primitive distributed functionality, without ZooKeeper, 
> allowing tight control of document distribution, similar to SolrCloud's 
> implicit routing. There is also some JSON-specific handling in there, which I 
> think could be relocated to JSONTestUtil.
> Alan Woodward did a bunch of test conversions to SolrCloudTestCase, but 
> AFAICT never mentioned SolrTestCaseHS-based tests as a target.






[jira] [Created] (SOLR-12385) Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and unique SolrTestCaseHS functionality should be folded into the standard test infrastructure

2018-05-22 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-12385:
-

 Summary: Tests extending SolrTestCaseHS should be cut over to 
SolrCloudTestCase, and unique SolrTestCaseHS functionality should be folded 
into the standard test infrastructure
 Key: SOLR-12385
 URL: https://issues.apache.org/jira/browse/SOLR-12385
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
 Environment: SolrTestCaseHS is extended only by JSON facet and JSON 
request tests.

SolrTestCaseHS was introduced with JSON faceting in SOLR-7214 (I believe "HS" 
is short for HelioSearch, where the functionality was originally developed). It 
appears to enable primitive distributed functionality, without ZooKeeper, 
allowing tight control of document distribution, similar to SolrCloud's implicit 
routing. There is also some JSON-specific handling in there, which I think could 
be relocated to JSONTestUtil.

Alan Woodward did a bunch of test conversions to SolrCloudTestCase, but AFAICT 
never mentioned SolrTestCaseHS-based tests as a target.
Reporter: Steve Rowe









[jira] [Updated] (SOLR-12385) Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and unique SolrTestCaseHS functionality should be folded into the standard test infrastructure

2018-05-22 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12385:
--
Component/s: Tests

> Tests extending SolrTestCaseHS should be cut over to SolrCloudTestCase, and 
> unique SolrTestCaseHS functionality should be folded into the standard test 
> infrastructure
> --
>
> Key: SOLR-12385
> URL: https://issues.apache.org/jira/browse/SOLR-12385
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Priority: Minor
>
> SolrTestCaseHS is extended only by JSON facet and JSON request tests.
> SolrTestCaseHS was introduced with JSON faceting in SOLR-7214 (I believe "HS" 
> is short for HelioSearch, where the functionality was originally developed). 
> It appears to enable primitive distributed functionality, without ZooKeeper, 
> allowing tight control of document distribution, similar to SolrCloud's 
> implicit routing. There is also some JSON-specific handling in there, which I 
> think could be relocated to JSONTestUtil.
> Alan Woodward did a bunch of test conversions to SolrCloudTestCase, but 
> AFAICT never mentioned SolrTestCaseHS-based tests as a target.






[jira] [Commented] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-22 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484313#comment-16484313
 ] 

Alessandro Benedetti commented on SOLR-9480:


+1, very interesting!
I opened a Jira issue a long time ago (and never worked on it) which seems quite 
related [1].
I remember at the time I investigated some different relatedness metrics (some 
of them are available in Elasticsearch [2]).

Great work, I am curious to take a look at the implementation!

[1] https://issues.apache.org/jira/browse/SOLR-9851
[2] https://www.elastic.co/blog/significant-terms-aggregation
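(One family of relatedness metrics behind Elasticsearch's significant-terms aggregation compares a term's share of the foreground (query result) set against its share of the background (whole index). The sketch below is illustrative only, with assumed names and a JLH-style heuristic; it is not the actual Elasticsearch or Solr implementation.)

```python
def jlh_score(fg_count, fg_total, bg_count, bg_total):
    """Toy JLH-style significance score: (fg% - bg%) * (fg% / bg%).

    fg_count/fg_total: term's document count within the foreground set.
    bg_count/bg_total: term's document count within the background index.
    Returns 0.0 for terms that are no more common in the foreground.
    """
    fg_pct = fg_count / fg_total
    bg_pct = bg_count / bg_total
    if bg_pct == 0.0 or fg_pct <= bg_pct:
        return 0.0
    # Absolute lift rewards common terms; relative lift rewards rare ones.
    return (fg_pct - bg_pct) * (fg_pct / bg_pct)

# A term in 10% of foreground docs but only 0.1% of the index scores highly...
high = jlh_score(10, 100, 100, 100_000)
# ...while a term with the same share in both sets scores zero.
flat = jlh_score(10, 100, 10_000, 100_000)
```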

> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch, 
> SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.






[jira] [Comment Edited] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-22 Thread Alessandro Benedetti (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484313#comment-16484313
 ] 

Alessandro Benedetti edited comment on SOLR-9480 at 5/22/18 5:21 PM:
-

+1, very interesting!
 I opened a Jira issue a long time ago (and never worked on it) which seems 
quite related [1].
 I remember at the time I investigated some different relatedness metrics (some 
of them are available in Elasticsearch [2]).

Great work, I am curious to take a look at the implementation!

[1] https://issues.apache.org/jira/browse/SOLR-9851
 [2] https://www.elastic.co/blog/significant-terms-aggregation


was (Author: alessandro.benedetti):
+1 very interesting !
I opened a Jira issue long time ago ( and nver worked on it, which seems quite 
related [1] )
I remember at the time I investigate some different relatedness metrices ( some 
of them are available in Elasticsearch [2]) 

Great work, I am curious to take a look to the implementation!

 [1]  https://issues.apache.org/jira/browse/SOLR-9851
[2]  [https://www.elastic.co/blog/significant-terms-aggregation]

> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch, 
> SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10) - Build # 1956 - Unstable!

2018-05-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1956/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([848A4E9C1641F81D:E741781E8F8E8B30]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger(SearchRateTriggerTest.java:133)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 14292 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest
   [junit4]   2> 

[jira] [Commented] (SOLR-9480) Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)

2018-05-22 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484301#comment-16484301
 ] 

Steve Rowe commented on SOLR-9480:
--

+1, I found a couple nits in a quick review:
 * In {{RelatednessAgg.SKGSlotAcc}}, {{fgFilters}} and {{bgSet}} are assigned 
but never used (never referenced outside the ctor)
 * In {{RelatednessAgg.SKGSlotAcc.processSlot()}}, a code comment includes a 
reference to a function named {{skg()}}, which has since been renamed to 
{{relatedness()}}. 
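(For context on what a {{relatedness()}}-style aggregation computes: the SKG paper scores a term by how far its observed foreground count deviates from what the term's background rate would predict. The z-score-style sketch below is illustrative only, with assumed names; it is not the plugin's exact formula.)

```python
import math

def relatedness_z(fg_count, fg_total, bg_count, bg_total):
    """Toy z-score of seeing fg_count hits among fg_total foreground
    docs, given the term's background probability. Positive values mean
    the term is over-represented in the foreground set."""
    p = bg_count / bg_total          # background probability of the term
    expected = p * fg_total          # expected foreground count
    variance = fg_total * p * (1.0 - p)
    if variance == 0.0:
        return 0.0
    return (fg_count - expected) / math.sqrt(variance)

# A term seen 50 times in a 100-doc foreground, vs a 1% background rate,
# is strongly related; a term hit exactly as often as expected scores 0.
score = relatedness_z(50, 100, 1_000, 100_000)
baseline = relatedness_z(1, 100, 1_000, 100_000)
```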

> Graph Traversal for Significantly Related Terms (Semantic Knowledge Graph)
> --
>
> Key: SOLR-9480
> URL: https://issues.apache.org/jira/browse/SOLR-9480
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch, 
> SOLR-9480.patch, SOLR-9480.patch, SOLR-9480.patch
>
>
> This issue is to track the contribution of the Semantic Knowledge Graph Solr 
> Plugin (request handler), which exposes a graph-like interface for 
> discovering and traversing significant relationships between entities within 
> an inverted index.
> This data model has been described in the following research paper: [The 
> Semantic Knowledge Graph: A compact, auto-generated model for real-time 
> traversal and ranking of any relationship within a 
> domain|https://arxiv.org/abs/1609.00464], as well as in presentations I gave 
> in October 2015 at [Lucene/Solr 
> Revolution|http://www.slideshare.net/treygrainger/leveraging-lucenesolr-as-a-knowledge-graph-and-intent-engine]
>  and November 2015 at the [Bay Area Search 
> Meetup|http://www.treygrainger.com/posts/presentations/searching-on-intent-knowledge-graphs-personalization-and-contextual-disambiguation/].
> The source code for this project is currently available at 
> [https://github.com/careerbuilder/semantic-knowledge-graph], and the folks at 
> CareerBuilder (where this was built) have given me the go-ahead to now 
> contribute this back to the Apache Solr Project, as well.
> Check out the Github repository, research paper, or presentations for a more 
> detailed description of this contribution. Initial patch coming soon.






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_172) - Build # 7332 - Still Unstable!

2018-05-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7332/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

10 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([3C8D4D73B73F1BDC:6F340FC3552E8E26]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:425)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 22078 - Unstable!

2018-05-22 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22078/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.TestCloudRecovery.leaderRecoverFromLogOnStartupTest

Error Message:
expected:<4> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<4> but was:<2>
at __randomizedtesting.SeedInfo.seed([6006467E943A6054:14F6A73BD01E27DB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at org.apache.solr.cloud.TestCloudRecovery.leaderRecoverFromLogOnStartupTest(TestCloudRecovery.java:103)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger

Error Message:
expected:<1> but was:<2>

Stack Trace:

[jira] [Updated] (SOLR-12340) Solr 7 does not do a phrase search by default for certain queries.

2018-05-22 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12340:
-
Fix Version/s: (was: 7.2.1)

> Solr 7 does not do a phrase search by default for certain queries.
> --
>
> Key: SOLR-12340
> URL: https://issues.apache.org/jira/browse/SOLR-12340
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.2
> Environment: windows 7 x64 
> solr-spec 5.2.1
> lucene-spec 5.2.1
> java.runtime.version 1.8.0_112-b15
> jetty.version 9.3.8.v20160314
> solr-spec 7.2.1
> lucene-impl 7.2.1
> java.version 9.0.4
> jetty.version 9.3.8.v20160314
>Reporter: piyush nayak
>Priority: Major
> Attachments: managed-schema-solr7, schema-solr5.xml
>
>
> We recently upgraded from Solr 5 to Solr 7 and are running into the change of 
> behavior detailed below:
> For the term "test3" Solr7 splits the numeric and alphabetical components and 
> does a simple term search while Solr 5 did a phrase search.
> ---
> lucene/solr-spec: 7.2.1
> [http://localhost:8991/solr/solr4/select?q=test3=test=json=true=true]
>  
> "debug":{
>     "rawquerystring":"test3",
>     "querystring":"test3",
>     "parsedquery":"contents:test contents:3",
>     "parsedquery_toString":"contents:test contents:3",
>  
> ---
> lucene/solr-spec 5.2.1
> [http://localhost:8989/solr/solr4/select?q=test3=test=json=true=true]
>  
> "debug":{
>     "rawquerystring":"test3",
>     "querystring":"test3",
>     "parsedquery":"PhraseQuery(contents:\"test 3\")",
>     "parsedquery_toString":"contents:\"test 3\"",
> 
> passing "sow=true" in the URL for Solr 7 makes it behave like 5.
> The schema.xml in both Solr versions for me is the one that gets copied from 
> the default template folder to the collection's conf folder.
> The field type that corresponds to the field "contents" is "text", and the 
> definition of the "text" type is the same in Solr 5 and in the schema backup on 7.
>  
> In the Analysis tab, all the classes (WT, SF ...) in 7 list a 
> property (termFrequency = 1) that is missing in 5.
> Attaching the schemas for Solr 5 and 7.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11453) Create separate logger for slow requests

2018-05-22 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484153#comment-16484153
 ] 

Varun Thacker commented on SOLR-11453:
--

Hi Shawn,

Do you want me to pick this up and commit it or is this still on your radar?

> Create separate logger for slow requests
> 
>
> Key: SOLR-11453
> URL: https://issues.apache.org/jira/browse/SOLR-11453
> Project: Solr
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 7.0.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-11453.patch, SOLR-11453.patch, SOLR-11453.patch, 
> SOLR-11453.patch, slowlog-informational.patch, slowlog-informational.patch, 
> slowlog-informational.patch
>
>
> There is some desire on the mailing list to create a separate logfile for 
> slow queries.  Currently it is not possible to do this cleanly, because the 
> WARN level used by slow query logging within the SolrCore class is also used 
> for other events that SolrCore can log.  Those messages would be out of place 
> in a slow query log.  They should typically stay in the main Solr logfile.
> I propose creating a custom logger for slow queries, similar to what has been 
> set up for request logging.  In the SolrCore class, which is 
> org.apache.solr.core.SolrCore, there is a special logger at 
> org.apache.solr.core.SolrCore.Request.  This is not a real class, just a 
> logger which makes it possible to handle those log messages differently than 
> the rest of Solr's logging.  I propose setting up another custom logger 
> within SolrCore which could be org.apache.solr.core.SolrCore.SlowRequest.
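The key idea, a logger whose name looks like a class but is not one, can be sketched with plain java.util.logging (Solr itself uses SLF4J/Log4j; the handler wiring below is illustrative only, and the console handler stands in for the dedicated logfile appender a real setup would configure):

```java
import java.util.logging.ConsoleHandler;
import java.util.logging.Level;
import java.util.logging.Logger;

public class SlowRequestLoggerSketch {
    // Logging frameworks treat logger names as plain hierarchical strings,
    // so "org.apache.solr.core.SolrCore.SlowRequest" need not be a real class
    // and can be routed independently of org.apache.solr.core.SolrCore.
    static final Logger SLOW_LOG =
        Logger.getLogger("org.apache.solr.core.SolrCore.SlowRequest");

    public static void main(String[] args) {
        // Detach from parent handlers and attach a dedicated one, mimicking
        // a separate slow-query logfile.
        SLOW_LOG.setUseParentHandlers(false);
        SLOW_LOG.addHandler(new ConsoleHandler());
        SLOW_LOG.setLevel(Level.WARNING);
        SLOW_LOG.warning("slow query: q=*:* qtime=1234ms");
        System.out.println("logger name = " + SLOW_LOG.getName());
    }
}
```

With this in place, a logging configuration can route the `SlowRequest` logger to its own file while the rest of `org.apache.solr.core.SolrCore` output stays in the main log.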






[jira] [Commented] (LUCENE-8311) Leverage impacts for phrase queries

2018-05-22 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16484120#comment-16484120
 ] 

Adrien Grand commented on LUCENE-8311:
--

Here is a run with DFR I(ne)L1:

{noformat}
            LowPhrase   19.89  (1.2%)   16.59  (1.0%)  -16.6% ( -18% -  -14%)
            MedPhrase   15.94  (1.2%)   13.36  (1.1%)  -16.1% ( -18% -  -14%)
    HighTermMonthSort   90.26 (10.9%)   81.72 (11.6%)   -9.5% ( -28% -   14%)
     HighSloppyPhrase    1.84  (1.9%)    1.69  (2.2%)   -7.9% ( -11% -   -3%)
      LowSloppyPhrase    7.87  (2.0%)    7.28  (2.5%)   -7.4% ( -11% -   -3%)
      MedSloppyPhrase   10.17  (1.6%)    9.43  (2.0%)   -7.3% ( -10% -   -3%)
HighTermDayOfYearSort   64.33 (11.6%)   60.25 (10.4%)   -6.3% ( -25% -   17%)
             HighTerm  476.13  (2.5%)  452.30  (1.8%)   -5.0% (  -9% -    0%)
               Fuzzy1  211.47  (4.1%)  203.28  (3.3%)   -3.9% ( -10% -    3%)
               IntNRQ   31.99  (2.5%)   30.96  (7.6%)   -3.2% ( -12% -    6%)
              MedTerm  653.93  (2.4%)  634.02  (1.8%)   -3.0% (  -7% -    1%)
               Fuzzy2  218.64  (5.9%)  212.25  (5.4%)   -2.9% ( -13% -    8%)
           OrHighHigh   17.28  (1.6%)   16.93  (1.7%)   -2.0% (  -5% -    1%)
              LowTerm 1405.19  (2.9%) 1380.15  (2.3%)   -1.8% (  -6% -    3%)
          AndHighHigh   21.96  (2.1%)   21.62  (2.5%)   -1.5% (  -5% -    3%)
            OrHighMed   59.73  (1.5%)   58.89  (1.7%)   -1.4% (  -4% -    1%)
              Prefix3   73.07  (4.8%)   72.07  (5.8%)   -1.4% ( -11% -    9%)
             Wildcard   64.42  (3.6%)   63.72  (4.5%)   -1.1% (  -8% -    7%)
              Respell  181.31  (2.4%)  180.69  (2.3%)   -0.3% (  -4% -    4%)
           AndHighLow  982.32  (2.5%)  981.63  (3.1%)   -0.1% (  -5% -    5%)
           AndHighMed   47.62  (2.0%)   47.60  (2.5%)   -0.0% (  -4% -    4%)
          LowSpanNear   49.59  (3.4%)   49.65  (3.0%)    0.1% (  -6% -    6%)
            OrHighLow  314.16  (2.2%)  314.60  (1.7%)    0.1% (  -3% -    4%)
         HighSpanNear    5.92  (4.6%)    5.98  (4.1%)    1.0% (  -7% -   10%)
          MedSpanNear    5.53  (6.7%)    5.66  (5.5%)    2.2% (  -9% -   15%)
           HighPhrase    3.87  (1.5%)    4.36  (1.6%)   12.6% (   9% -   15%)
{noformat}

> Leverage impacts for phrase queries
> ---
>
> Key: LUCENE-8311
> URL: https://issues.apache.org/jira/browse/LUCENE-8311
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8311.patch
>
>
> Now that we expose raw impacts, we could leverage them for phrase queries.
> For instance for exact phrases, we could take the minimum term frequency for 
> each unique norm value in order to get upper bounds of the score for the 
> phrase.
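The "minimum term frequency per unique norm value" idea can be sketched outside Lucene. Below, each map is a hypothetical stand-in for one term's impacts (norm value mapped to the maximum term frequency recorded at that norm); in any document, the exact-phrase frequency cannot exceed the smallest of its terms' frequencies, so taking the minimum of the per-term maxima at each norm yields a safe upper bound on phrase frequency, and hence on the score:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PhraseImpactSketch {
    // termImpacts: one map per phrase term, norm -> max term freq at that norm.
    // Returns, per norm, an upper bound on the exact-phrase frequency.
    static Map<Integer, Integer> phraseFreqUpperBounds(List<Map<Integer, Integer>> termImpacts) {
        Map<Integer, Integer> bounds = new HashMap<>();
        for (Map.Entry<Integer, Integer> e : termImpacts.get(0).entrySet()) {
            int norm = e.getKey();
            int min = e.getValue();
            boolean presentForAll = true;
            for (int i = 1; i < termImpacts.size() && presentForAll; i++) {
                Integer f = termImpacts.get(i).get(norm);
                if (f == null) presentForAll = false; // a term never occurs at this norm
                else min = Math.min(min, f);
            }
            if (presentForAll) bounds.put(norm, min);
        }
        return bounds;
    }

    public static void main(String[] args) {
        // "test": freq up to 7 at norm 1, up to 3 at norm 2; "3": up to 2 and 5.
        Map<Integer, Integer> test = Map.of(1, 7, 2, 3);
        Map<Integer, Integer> three = Map.of(1, 2, 2, 5);
        // Upper bound for the phrase "test 3": 2 at norm 1, 3 at norm 2.
        System.out.println(phraseFreqUpperBounds(List.of(test, three)));
    }
}
```

Feeding these bounds to the similarity in place of the real phrase frequency gives score upper bounds that let a scorer skip blocks that cannot compete, which is the mechanism behind the speedup/slowdown mix in the benchmark above.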






[jira] [Assigned] (SOLR-12358) Autoscaling suggestions fail randomly and for certain policies

2018-05-22 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-12358:
-

Assignee: Noble Paul

> Autoscaling suggestions fail randomly and for certain policies
> --
>
> Key: SOLR-12358
> URL: https://issues.apache.org/jira/browse/SOLR-12358
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.3.1
>Reporter: Jerry Bao
>Assignee: Noble Paul
>Priority: Critical
> Attachments: SOLR-12358.patch, diagnostics, nodes
>
>
> For the following policy
> {code:java}
> {"cores": "<4","node": "#ANY"}{code}
> the suggestions endpoint fails
> {code:java}
> "error": {"msg": "Comparison method violates its general contract!","trace": 
> "java.lang.IllegalArgumentException: Comparison method violates its general 
> contract!\n\tat java.util.TimSort.mergeHi(TimSort.java:899)\n\tat 
> java.util.TimSort.mergeAt(TimSort.java:516)\n\tat 
> java.util.TimSort.mergeCollapse(TimSort.java:441)\n\tat 
> java.util.TimSort.sort(TimSort.java:245)\n\tat 
> java.util.Arrays.sort(Arrays.java:1512)\n\tat 
> java.util.ArrayList.sort(ArrayList.java:1462)\n\tat 
> java.util.Collections.sort(Collections.java:175)\n\tat 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.setApproxValuesAndSortNodes(Policy.java:363)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.applyRules(Policy.java:310)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.(Policy.java:272)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.createSession(Policy.java:376)\n\tat
>  
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getSuggestions(PolicyHelper.java:214)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleSuggestions(AutoScalingHandler.java:158)\n\tat
>  
> org.apache.solr.cloud.autoscaling.AutoScalingHandler.handleRequestBody(AutoScalingHandler.java:133)\n\tat
>  
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)\n\tat
>  org.apache.solr.api.ApiBag$ReqHandlerToApi.call(ApiBag.java:242)\n\tat 
> org.apache.solr.api.V2HttpCall.handleAdmin(V2HttpCall.java:311)\n\tat 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)\n\tat
>  org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:498)\n\tat 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)\n\tat
>  
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1629)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
>  
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:190)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)\n\tat
>  
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)\n\tat
>  
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:166)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1155)\n\tat
>  
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
>  
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
>  
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
>  org.eclipse.jetty.server.Server.handle(Server.java:530)\n\tat 
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:347)\n\tat 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:256)\n\tat
>  
> 
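The "Comparison method violates its general contract!" message is TimSort detecting a Comparator that breaks symmetry or transitivity during a merge. A minimal illustration of the failure mode (unrelated to the Solr sorting code, purely to show what the contract requires):

```java
import java.util.Comparator;

public class ComparatorContractDemo {
    public static void main(String[] args) {
        // Broken: never returns a negative value, so for unequal elements
        // sgn(compare(a, b)) != -sgn(compare(b, a)). TimSort may throw
        // IllegalArgumentException when it observes such an inconsistency.
        Comparator<Integer> broken = (a, b) -> a.equals(b) ? 0 : 1;
        System.out.println(broken.compare(1, 2)); // 1
        System.out.println(broken.compare(2, 1)); // 1 -> symmetry violated

        // Fixed: Integer.compare is symmetric, transitive, and overflow-safe
        // (unlike the common "(a, b) -> a - b" idiom).
        Comparator<Integer> fixed = Integer::compare;
        System.out.println(fixed.compare(1, 2)); // -1
        System.out.println(fixed.compare(2, 1)); // 1
    }
}
```

In the autoscaling case the comparator's inputs are presumably mutated or re-read mid-sort, which produces the same kind of inconsistent answers.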

[jira] [Updated] (SOLR-12384) Auto-created ".system" collection may still reject the first update(s)

2018-05-22 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-12384:
-
Summary: Auto-created ".system" collection may still reject the first 
update(s)  (was: Auto-created ".system" may still reject the first update(s))

> Auto-created ".system" collection may still reject the first update(s)
> --
>
> Key: SOLR-12384
> URL: https://issues.apache.org/jira/browse/SOLR-12384
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Andrzej Bialecki 
>Priority: Major
> Attachments: test.log
>
>
> The first update request to {{.system}} collection is supposed to 
> automatically create this collection, if it's missing, and then proceed to 
> process the update request.
> However, I encountered a scenario shown in the attached log, where the 
> collection is created but the first request fails.






[jira] [Updated] (SOLR-12384) Auto-created ".system" may still reject the first update(s)

2018-05-22 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-12384:
-
Summary: Auto-created ".system" may still reject the first update(s)  (was: 
Auto-created ".system" may still reject the first update)

> Auto-created ".system" may still reject the first update(s)
> ---
>
> Key: SOLR-12384
> URL: https://issues.apache.org/jira/browse/SOLR-12384
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Andrzej Bialecki 
>Priority: Major
> Attachments: test.log
>
>
> The first update request to {{.system}} collection is supposed to 
> automatically create this collection, if it's missing, and then proceed to 
> process the update request.
> However, I encountered a scenario shown in the attached log, where the 
> collection is created but the first request fails.






[jira] [Updated] (SOLR-12384) Auto-created ".system" may still reject the first update

2018-05-22 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12384?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-12384:
-
Description: 
The first update request to {{.system}} collection is supposed to automatically 
create this collection, if it's missing, and then proceed to process the update 
request.

However, I encountered a scenario shown in the attached log, where the 
collection is created but the first request fails.

> Auto-created ".system" may still reject the first update
> 
>
> Key: SOLR-12384
> URL: https://issues.apache.org/jira/browse/SOLR-12384
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Andrzej Bialecki 
>Priority: Major
> Attachments: test.log
>
>
> The first update request to {{.system}} collection is supposed to 
> automatically create this collection, if it's missing, and then proceed to 
> process the update request.
> However, I encountered a scenario shown in the attached log, where the 
> collection is created but the first request fails.





