[ https://issues.apache.org/jira/browse/LUCENE-7080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15186896#comment-15186896 ]
ASF subversion and git services commented on LUCENE-7080:
---------------------------------------------------------
Commit 6aa9aa66e334b8c415fa6d9976bbef581ea352c9 in lucene-solr's branch
refs/heads/branch_6_0 from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6aa9aa6 ]
LUCENE-7080: Sort files to corrupt to prevent HashSet iteration order issues
across JVMs
> MockDirectoryWrapper relies on HashSet iteration order
> ------------------------------------------------------
>
> Key: LUCENE-7080
> URL: https://issues.apache.org/jira/browse/LUCENE-7080
> Project: Lucene - Core
> Issue Type: Bug
> Components: general/test
> Affects Versions: 5.5, 6.0
> Reporter: Simon Willnauer
> Assignee: Simon Willnauer
> Attachments: LUCENE-7080.patch
>
>
> MDW relies on HashSet iteration order in
> {code}
> public synchronized void corruptFiles(Collection<String> files) throws IOException {
>   // Must make a copy because we change the incoming unsyncedFiles
>   // when we create temp files, delete, etc., below:
>   for (String name : new ArrayList<>(files)) { // <<<<< this should be sorted
>     int damage = randomState.nextInt(6);
> {code}
> This causes reproducibility issues when files get corrupted, because HashSet iteration order can differ across JVMs even for the same random seed.
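For illustration only (this is not the committed patch), here is a minimal sketch of the idea behind the fix: iterating a sorted copy of the file names instead of relying on HashSet order makes the seeded-random "damage" decisions identical on every JVM. The class and method names (`SortBeforeCorrupt`, `damagePlan`) are hypothetical stand-ins for MockDirectoryWrapper's corruptFiles.

```java
import java.util.*;

public class SortBeforeCorrupt {

    // Hypothetical stand-in for MDW's corruptFiles: instead of actually
    // corrupting files, return the per-file "damage" choices so the
    // determinism is easy to observe.
    static List<Integer> damagePlan(Collection<String> files, long seed) {
        Random randomState = new Random(seed);
        // The fix: sort a copy of the incoming collection so iteration
        // order no longer depends on HashSet internals.
        List<String> sorted = new ArrayList<>(files);
        Collections.sort(sorted);
        List<Integer> damage = new ArrayList<>();
        for (String name : sorted) {
            damage.add(randomState.nextInt(6));
        }
        return damage;
    }

    public static void main(String[] args) {
        // Two sets with the same elements but different insertion order
        // now yield the same damage plan for the same seed.
        Set<String> a = new HashSet<>(Arrays.asList("_0.cfs", "_1.si", "segments_2"));
        Set<String> b = new LinkedHashSet<>(Arrays.asList("segments_2", "_1.si", "_0.cfs"));
        System.out.println(damagePlan(a, 42L).equals(damagePlan(b, 42L)));
    }
}
```

Without the sort, the pairing of file name to `nextInt(6)` result depends on hash-bucket layout, so a reproduced seed can corrupt different files than the original failing run did.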
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]