[
https://issues.apache.org/jira/browse/LUCENE-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15186885#comment-15186885
]
ASF subversion and git services commented on LUCENE-7080:
---------------------------------------------------------
Commit 588aeeaab731f34af9063ec0dedb714f8740e0b2 in lucene-solr's branch
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=588aeea ]
LUCENE-7080: Sort files to corrupt to prevent HashSet iteration order issues
across JVMs
> MockDirectoryWrapper relies on HashSet iteration order
> ------------------------------------------------------
>
> Key: LUCENE-7080
> URL: https://issues.apache.org/jira/browse/LUCENE-7080
> Project: Lucene - Core
> Issue Type: Bug
> Components: general/test
> Affects Versions: 5.5, 6.0
> Reporter: Simon Willnauer
> Assignee: Simon Willnauer
> Attachments: LUCENE-7080.patch
>
>
> MDW relies on HashSet iteration order in:
> {code}
> public synchronized void corruptFiles(Collection<String> files) throws IOException {
>   // Must make a copy because we change the incoming unsyncedFiles
>   // when we create temp files, delete, etc., below:
>   for (String name : new ArrayList<>(files)) { // <<<<< this should be sorted
>     int damage = randomState.nextInt(6);
> {code}
> This causes reproducibility issues when files are corrupted: a failing seed may not reproduce the same corruption order on a different JVM.
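The committed fix sorts the file names before iterating, so the corruption sequence depends only on the random seed, not on the JVM's HashSet layout. A minimal standalone sketch of the idea (class and method names here are illustrative, not the actual MockDirectoryWrapper code):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;

public class SortedCorruptionOrder {

    // Copy the incoming collection (which may be a HashSet with
    // JVM-dependent iteration order) into a list and sort it, so
    // iteration order is deterministic across JVMs.
    static List<String> filesInDeterministicOrder(Collection<String> files) {
        List<String> sorted = new ArrayList<>(files);
        Collections.sort(sorted);
        return sorted;
    }

    public static void main(String[] args) {
        Collection<String> files = new HashSet<>();
        files.add("segments_1");
        files.add("_0.si");
        files.add("_0.cfs");
        // Sorted lexicographically, regardless of HashSet internals:
        System.out.println(filesInDeterministicOrder(files));
        // prints [_0.cfs, _0.si, segments_1]
    }
}
```

With the sorted copy, replaying the same random seed corrupts the same files in the same order on every JVM, which is what makes the test failure reproducible.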
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]