[jira] [Updated] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Greg Bowyer updated LUCENE-4332:
    Attachment: LUCENE-4332-Integrate-PiTest-mutation-coverage-tool-into-build.patch

Integrate PiTest mutation coverage tool into build
--
Key: LUCENE-4332
URL: https://issues.apache.org/jira/browse/LUCENE-4332
Project: Lucene - Core
Issue Type: Improvement
Affects Versions: 4.1, 5.0
Reporter: Greg Bowyer
Assignee: Greg Bowyer
Labels: build
Attachments: LUCENE-4332-Integrate-PiTest-mutation-coverage-tool-into-build.patch

As discussed briefly on the mailing list, this patch is an attempt to integrate the PiTest mutation coverage tool into the Lucene build.

--
This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators. For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Test coverage or testing the tests
Ok, first cut at a version of this in the build: https://issues.apache.org/jira/browse/LUCENE-4332

On 27/08/12 18:05, Greg Bowyer wrote:
> On 27/08/12 17:30, Chris Hostetter wrote:
> > : This is cool. I'd say lets get it up and going on jenkins (even weekly
> > : or something). why worry about the imperfections in any of these
> > : coverage tools, whats way more important is when the results find
> > : situations where you thought you were testing something, but really
> >
> > +1. Even if it hammers the machine so bad it can't be run on mortal
> > hardware, it's still worth it to hook it into the build system so people
> > with god-like hardware can easily run it and file bugs based on what
> > they see.
> >
> > -Hoss
>
> The machine I ran it on cost me $5 from ec2 :D
[jira] [Updated] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Greg Bowyer updated LUCENE-4332:
    Attachment: LUCENE-4332-Integrate-PiTest-mutation-coverage-tool-into-build.patch

Corrected jcommander license.
[jira] [Updated] (SOLR-3763) Make solr use lucene filters directly
[ https://issues.apache.org/jira/browse/SOLR-3763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Greg Bowyer updated SOLR-3763:
--
Description:

Presently Solr uses bitsets, queries and collectors to implement the concept of filters. This has proven to be very powerful, but it comes at the cost of introducing a large body of code into Solr, making it harder to optimise and maintain. Another issue is that filters currently cache sub-optimally given the changes in Lucene towards atomic readers.

Rather than patch these issues, this is an attempt to rework the filters in Solr to leverage the Filter subsystem from Lucene as much as possible. In good time the aim is to get this to do the following:

∘ Handle setting up filter implementations that are able to cache correctly with reference to the AtomicReader that they are caching for, rather than for the entire index at large.

∘ Get the post filters working. I am thinking that this can be done via Lucene's chained filter, with the "expensive" filters being put towards the end of the chain - this has different semantics internally to the original implementation but IMHO should have the same result for end users.

∘ Learn how to create filters that are potentially more efficient. At present Solr basically runs a simple query that gathers a DocSet covering the documents that we want filtered; it would be interesting to make use of filter implementations that are in theory faster than query filters (for instance there are filters that are able to query the FieldCache).

∘ Learn how to decompose filters so that a complex filter query can (potentially) be cached as its constituent parts; for example the filter below currently needs love, care and feeding to ensure that the filter cache is not unduly stressed:

{code}
'category:(100) OR category:(200) OR category:(300)'
{code}

Really there is no reason not to express this in a cached form as:

{code}
BooleanFilter(
    FilterClause(CachedFilter(TermFilter(Term(category, 100))), SHOULD),
    FilterClause(CachedFilter(TermFilter(Term(category, 200))), SHOULD),
    FilterClause(CachedFilter(TermFilter(Term(category, 300))), SHOULD)
)
{code}

This would yield better cache usage, I think, as we can reuse docsets across multiple queries as well as avoid issues when filters are presented in differing orders.

∘ Instead of end users providing costing we might (and this is a big might FWIW) be able to create a sort of execution plan of filters, leveraging a combination of what the index is able to tell us as well as sampling and "educated guesswork"; in essence this is what some DBMS software does - postgresql, for example, uses a genetic algorithm that attempts to solve the travelling-salesman-style join-ordering problem - to great effect.

∘ I am sure I will probably come up with other ambitious ideas to plug in here. :S

Patches obviously forthcoming, but the bulk of the work can be followed here: https://github.com/GregBowyer/lucene-solr/commits/solr-uses-lucene-filters
[jira] [Created] (SOLR-3763) Make solr use lucene filters directly
Greg Bowyer created SOLR-3763:
-
Summary: Make solr use lucene filters directly
Key: SOLR-3763
URL: https://issues.apache.org/jira/browse/SOLR-3763
Project: Solr
Issue Type: Improvement
Affects Versions: 4.0, 4.1, 5.0
Reporter: Greg Bowyer
Assignee: Greg Bowyer
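The cached decomposition described in SOLR-3763 can be illustrated with a small, self-contained sketch. Everything here (TermFilterCache, docsFor, the BitSet stand-in for a docset) is hypothetical and only mimics the caching idea; real Lucene filters produce DocIdSets per AtomicReader rather than global BitSets:

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for a per-term filter cache. The names below are
// illustrative only, not real Lucene/Solr APIs: the point is that each
// term's docset is cached independently and the OR is computed by union.
public class TermFilterCache {
    // term -> cached docset; in Lucene this would be keyed per AtomicReader
    private final Map<String, BitSet> cache = new HashMap<>();
    private final Map<String, BitSet> index; // simulated inverted index

    public TermFilterCache(Map<String, BitSet> index) {
        this.index = index;
    }

    // Fetch (or compute and cache) the docset for one term.
    public BitSet docsFor(String term) {
        return cache.computeIfAbsent(term,
                t -> (BitSet) index.getOrDefault(t, new BitSet()).clone());
    }

    // OR-union of per-term docsets: the cached analogue of
    // 'category:(100) OR category:(200) OR category:(300)'.
    public BitSet union(List<String> terms) {
        BitSet result = new BitSet();
        for (String term : terms) {
            result.or(docsFor(term)); // each clause hits the cache on its own
        }
        return result;
    }
}
```

Because each clause is cached independently, repeating the query with the clauses in a different order reuses the same per-term docsets, which is exactly the ordering-insensitivity the issue description argues for.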
[jira] [Reopened] (LUCENE-3923) fail the build on wrong svn:eol-style
[ https://issues.apache.org/jira/browse/LUCENE-3923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler reopened LUCENE-3923:
---
Assignee: Uwe Schindler

I have a new patch, which is very fast without a custom task.

fail the build on wrong svn:eol-style
-
Key: LUCENE-3923
URL: https://issues.apache.org/jira/browse/LUCENE-3923
Project: Lucene - Core
Issue Type: Task
Components: general/build
Reporter: Robert Muir
Assignee: Uwe Schindler
Fix For: 5.0, 4.0
Attachments: LUCENE-3923.patch

I'm tired of fixing this before releases. Jenkins should detect and fail on this.
[jira] [Updated] (SOLR-3763) Make solr use lucene filters directly
[ https://issues.apache.org/jira/browse/SOLR-3763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Greg Bowyer updated SOLR-3763:
--
Attachment: SOLR-3763-Make-solr-use-lucene-filters-directly.patch

Initial version; this has some hacks in it and does not pass testing for caches, since that needs to be reworked.
[jira] [Updated] (LUCENE-3923) fail the build on wrong svn:eol-style
[ https://issues.apache.org/jira/browse/LUCENE-3923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler updated LUCENE-3923:
--
Attachment: LUCENE-3923.patch

The attached task fixes the slowness problem on Windows and ASF Jenkins, and speeds up Linux, too:
- The root build.xml file now has a combined check-svn-working-copy target that looks for unversioned files (leftovers after tests) and checks the svn props.
- The work is done by JavaScript using SvnKit. SvnKit's license is not ASF-conformant, but we neither link against it nor ship it; it is just a tool downloaded to an ivy:cachepath. This is no different from a GNU/Linux distribution whose tools (ls, ...) are GPL.

I will commit soon to get Jenkins running better.
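As a rough, self-contained illustration of the symptom that an svn:eol-style check guards against (this is not the actual SvnKit-based property check from the patch; the EolCheck class and hasCarriageReturn method are made-up names for the sketch):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustration only: the real LUCENE-3923 check inspects svn properties via
// SvnKit. This standalone helper merely flags carriage-return bytes in a
// file, the symptom that svn:eol-style=native is meant to normalize away.
public class EolCheck {
    // Returns true if the file contains a carriage-return (CR, 0x0D) byte.
    public static boolean hasCarriageReturn(Path file) throws IOException {
        byte[] bytes = Files.readAllBytes(file);
        for (byte b : bytes) {
            if (b == '\r') {
                return true;
            }
        }
        return false;
    }
}
```

A build check along these lines would walk the working copy and fail the build when a versioned text file trips the test; the committed implementation instead asks svn itself for the property values.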
[jira] [Commented] (LUCENE-3923) fail the build on wrong svn:eol-style
[ https://issues.apache.org/jira/browse/LUCENE-3923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13442998#comment-13442998 ]

Uwe Schindler commented on LUCENE-3923:
---
The patch also fixes some other problems with svn usage (svnversion returns a different string in SVN 1.7 when the dir is not a working copy...) and cleans up the code a bit. The main part is in extra-targets.xml.
[jira] [Resolved] (LUCENE-3923) fail the build on wrong svn:eol-style
[ https://issues.apache.org/jira/browse/LUCENE-3923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler resolved LUCENE-3923.
---
Resolution: Fixed

Committed trunk revision: 1377991
Committed 3.x revision: 1377992

BTW: I also tried to convert the internal svnversion calls to simple java fork=false tasks (or scripts), but this failed due to the well-known ANT permgen issue. I will look into this another time; for now we still need the svn.exe and svnversion.exe sysprops.
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443017#comment-13443017 ]

Uwe Schindler commented on LUCENE-4332:
---
What is the difference to running ant clover, which is already integrated and done by a 10-line build.xml task? I see no real difference, and the reports look much more terse than Clover's.
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443018#comment-13443018 ]

Dawid Weiss commented on LUCENE-4332:
-
It is mutation testing overlaid on coverage testing. I have some doubts about mutation testing, but I don't have any practical experience with it, so I'll just sit and watch :)
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443019#comment-13443019 ]

Uwe Schindler commented on LUCENE-4332:
---
I agree with adding this if it helps to find ineffective tests, but can't this be done like clover, without downloading the JAR files into the lib folder? License files are not needed as it's a build tool only and we don't ship those licenses. So I would opt to do a simple transitive ivy:cachepath to build pit's classpath.
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443026#comment-13443026 ]

Uwe Schindler commented on LUCENE-4332:
---
Hi, I looked into the reports (somehow the report for Lucene core is missing?). My problem with pitest is the following (and it really makes me afraid, so I will not run it on my Jenkins server at all!): it manipulates the code by adding random modifications to the Lucene bytecode. As Lucene is heavily file-system related, it can happen that the tool removes some code part or changes the parameters of a method call (it also does this!), so it could suddenly delete or modify files outside the working copy (e.g., in our heavy crazy code, deleting index files). Damage to the OS can be prevented by running the tests as a separate user, but it can still crash and corrupt my whole Jenkins installation.
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443037#comment-13443037 ]

Uwe Schindler commented on LUCENE-4332:
---
Another thing to take into account: mutation testing runs the tests twice, once without mutations and once with. For this to work correctly, both test runs must use the same tests.seed value!
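The seed requirement can be illustrated with a minimal sketch (the SeededRun class and its method are hypothetical, not part of the Lucene test framework): two runs that share one seed draw identical pseudo-random inputs, so the baseline run and the mutated run exercise comparable code paths.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustration of why the baseline and mutated runs must share one
// tests.seed: a randomized test only repeats the same inputs when both
// runs start from the same seed. Names here are made up for the sketch.
public class SeededRun {
    // Simulate one randomized test run: draw n pseudo-random "documents".
    public static List<Integer> randomizedInputs(long testsSeed, int n) {
        Random random = new Random(testsSeed); // fixed seed => fixed sequence
        List<Integer> inputs = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            inputs.add(random.nextInt(1000));
        }
        return inputs;
    }
}
```

If the mutated run were given a fresh seed, a surviving mutant could simply mean the second run happened to pick inputs that never reach the mutated code, not that the tests are weak.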
[jira] [Created] (LUCENE-4333) NPE in TermGroupFacetCollector when faceting on mv fields
Martijn van Groningen created LUCENE-4333:
-
Summary: NPE in TermGroupFacetCollector when faceting on mv fields
Key: LUCENE-4333
URL: https://issues.apache.org/jira/browse/LUCENE-4333
Project: Lucene - Core
Issue Type: Bug
Affects Versions: 4.0-BETA, 4.0-ALPHA
Reporter: Martijn van Groningen
Assignee: Martijn van Groningen
Fix For: 4.0
[jira] [Commented] (SOLR-3762) NullPointerException when using grouping
[ https://issues.apache.org/jira/browse/SOLR-3762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443055#comment-13443055 ]

Martijn van Groningen commented on SOLR-3762:
-
This is actually a bug in the Lucene grouping module. It is easy to fix.

NullPointerException when using grouping
Key: SOLR-3762
URL: https://issues.apache.org/jira/browse/SOLR-3762
Project: Solr
Issue Type: Bug
Affects Versions: 4.0-BETA
Reporter: Jesse MacVicar

The initial index is fine; the problem seems to occur after additional documents have been added/deleted. Simple index using grouping and group.facet. Full error posted below.
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443059#comment-13443059 ]

Robert Muir commented on LUCENE-4332:
-
Uwe, I agree there is some risk, but I think we should set it up in the build to get started (maybe someone volunteers to run it in a sandboxed Jenkins once a week, or whatever). It doesn't hurt anything to set it up in build.xml, though I agree we should use an ivy:cachepath instead of introducing so many third-party dependencies for a task/tool that our actual codebase doesn't rely on. I also agree tests should somehow be rerun with the same seed through this thing; maybe the ant task for this can just generate a random seed itself and pass it with a -D. Eventually, once someone gets it going, I'm sure some tuning will take place; e.g., it should be necessary to set parameters to ignore MMapDirectory etc. (wrong file to use mutation testing with, sorry).
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443060#comment-13443060 ]

Robert Muir commented on LUCENE-4332:
-
{quote}
License files are not needed as it's a build tool only and we don't ship those licenses. So I would opt to do a simple transitive ivy:cachepath to build pit's classpath.
{quote}
This also applies to the asm jar (asm-debug-all-4.0.jar.sha1)!
[jira] [Updated] (LUCENE-4333) NPE in TermGroupFacetCollector when faceting on mv fields
[ https://issues.apache.org/jira/browse/LUCENE-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Martijn van Groningen updated LUCENE-4333: -- Attachment: LUCENE-4333.patch NPE in TermGroupFacetCollector when faceting on mv fields - Key: LUCENE-4333 URL: https://issues.apache.org/jira/browse/LUCENE-4333 Project: Lucene - Core Issue Type: Bug Affects Versions: 4.0-ALPHA, 4.0-BETA Reporter: Martijn van Groningen Assignee: Martijn van Groningen Fix For: 4.0 Attachments: LUCENE-4333.patch
[jira] [Resolved] (LUCENE-4333) NPE in TermGroupFacetCollector when faceting on mv fields
[ https://issues.apache.org/jira/browse/LUCENE-4333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Martijn van Groningen resolved LUCENE-4333. --- Resolution: Fixed Committed in trunk and branch 4.x
[jira] [Commented] (SOLR-3762) NullPointerException when using grouping
[ https://issues.apache.org/jira/browse/SOLR-3762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443073#comment-13443073 ] Martijn van Groningen commented on SOLR-3762: - I committed a fix for this via LUCENE-4333. Can you check if this issue still occurs in your setup? NullPointerException when using grouping Key: SOLR-3762 URL: https://issues.apache.org/jira/browse/SOLR-3762 Project: Solr Issue Type: Bug Affects Versions: 4.0-BETA Reporter: Jesse MacVicar Initial index is fine, seems to occur after additional documents have been added/deleted. Simple index using grouping and group.facet. Full error posted below.
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443076#comment-13443076 ] Uwe Schindler commented on LUCENE-4332: --- bq. This also applies to the asm jar (asm-debug-all-4.0.jar.sha1) ! I partially agree here, but we actually compile our own code (the ANT TASK) against it. This is different from using the JAR file. Also we need the JAR file in our lib-structure to make development of the ANT TASK possible with eclipse & co.
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443081#comment-13443081 ] Uwe Schindler commented on LUCENE-4332: --- bq. I also agree tests should somehow be rerun with the same seed thru this thing. maybe the ant task for this can just generate a random seed itself, and pass that with a -D. That's the only way to go. A good old Javascript <script/> inside ant can do this. I was thinking about the same for my Jenkins infrastructure, because currently you cannot tell my SDDS Jenkins instance to repeat the same test-run. I was thinking about a similar thing (a parametrizable build that passes a random master seed to ant test). The Groovy code in Jenkins doing the JDK randomization will do this. I just had no time, but it is on my todo list. bq. it doesnt hurt anything to set it up in build.xml: though I agree we should instead use an ivy:cachepath instead of introducing so many third party dependencies for a task/tool that our actual codebase doesn't rely on. That is what I am opting for. The extra test-framework/ivy.xml additions should not be there and the cachepath should be used directly in inline mode: - revert test-framework/ivy.xml - add the dependency inline to ivy:cachepath, or use a separate pitest-ivy.xml referenced from cachepath only (not resolve). bq. Uwe I agree there is some risk, but I think we should set it up in the build to get started (maybe someone volunteers to run it in a sandbox'ed jenkins once a week, or whatever). I would take care of a sandbox. The windows tests on SDDS Jenkins are running in a VirtualBox. The Jenkins VirtualBox plugin has some options for starting/shutting down machines. I would create a minimal Linux VBox instance (32 bit, with just enough RAM to run tests, or the like) and make a virtual harddisk snapshot. Whenever the pitest job runs weekly, Jenkins starts a new instance from the saved snapshot (which is plain, empty and clean), runs pitest and then shuts it down again, losing all changed data on the virtual disk.
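The "generate a random master seed once, then pass it to every forked JVM with a -D" idea discussed above can be sketched in a few lines of Java. This is only an illustration — the real seed handling lives in randomizedtesting's junit4 runner, and the `MasterSeed` class and `newSeed` method here are made-up names:

```java
import java.util.Random;

public class MasterSeed {
    /** Generate one master seed up front so a whole run can be repeated later. */
    static String newSeed() {
        long seed = new Random().nextLong();
        // randomizedtesting prints seeds as upper-case hex; mimic that format here
        return Long.toHexString(seed).toUpperCase();
    }

    public static void main(String[] args) {
        String seed = newSeed();
        // Every forked test JVM would then be launched with the same value, e.g.:
        //   ant test -Dtests.seed=<seed>
        System.out.println(seed.matches("[0-9A-F]+"));
    }
}
```

Because the seed is fixed before any test forks, a mutation-testing run and a plain test run given the same -D value exercise the same randomized paths.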
[jira] [Updated] (LUCENE-4334) remove unnecessary ant-junit dependency
[ https://issues.apache.org/jira/browse/LUCENE-4334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir updated LUCENE-4334: Attachment: LUCENE-4334.patch remove unnecessary ant-junit dependency --- Key: LUCENE-4334 URL: https://issues.apache.org/jira/browse/LUCENE-4334 Project: Lucene - Core Issue Type: Bug Components: general/build Reporter: Robert Muir Attachments: LUCENE-4334.patch We don't use this integration anymore for running tests: we use randomizedtesting's junit4.
[jira] [Created] (LUCENE-4334) remove unnecessary ant-junit dependency
Robert Muir created LUCENE-4334: --- Summary: remove unnecessary ant-junit dependency Key: LUCENE-4334 URL: https://issues.apache.org/jira/browse/LUCENE-4334 Project: Lucene - Core Issue Type: Bug Components: general/build Reporter: Robert Muir We don't use this integration anymore for running tests: we use randomizedtesting's junit4.
[jira] [Resolved] (LUCENE-4334) remove unnecessary ant-junit dependency
[ https://issues.apache.org/jira/browse/LUCENE-4334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir resolved LUCENE-4334. - Resolution: Fixed Fix Version/s: 4.0 5.0 seemed straightforward to me. Hope I'm not missing anything. We should look for any other unused jars.
[jira] [Updated] (SOLR-3507) Refactor parts of solr doing inter node communication to use shardhandlerfactory/shardhandler
[ https://issues.apache.org/jira/browse/SOLR-3507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sami Siren updated SOLR-3507: - Attachment: SOLR-3507.patch new patch updated to latest trunk with cleanups. Refactor parts of solr doing inter node communication to use shardhandlerfactory/shardhandler - Key: SOLR-3507 URL: https://issues.apache.org/jira/browse/SOLR-3507 Project: Solr Issue Type: Improvement Reporter: Sami Siren Assignee: Sami Siren Priority: Minor Attachments: SOLR-3507.patch, SOLR-3507.patch, SOLR-3507.patch Sequel to SOLR-3480; the aim is to change most (all?) parts of solr that need to talk to different nodes to use ShardHandlerFactory from corecontainer.
[jira] [Commented] (SOLR-3659) non-reproducible failures from RecoveryZkTest - mostly NRTCachingDirectory.deleteFile
[ https://issues.apache.org/jira/browse/SOLR-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443160#comment-13443160 ] Mark Miller commented on SOLR-3659: --- This should be fixed now. non-reproducible failures from RecoveryZkTest - mostly NRTCachingDirectory.deleteFile - Key: SOLR-3659 URL: https://issues.apache.org/jira/browse/SOLR-3659 Project: Solr Issue Type: Bug Reporter: Hoss Man Attachments: just-failures.txt, RecoveryZkTest.testDistribSearch-100-tests-failures.txt.tgz Since getting my new laptop, i've noticed some sporadic failures from RecoveryZkTest, so last night tried running 100 iterations against trunk (r1363555), and got 5 errors/failures... * 3 assertion failures from NRTCachingDirectory.deleteFile * 1 node recovery assertion from AbstractDistributedZkTestCase.waitForRecoveriesToFinish caused by OOM * 1 searcher leak assertion: opens=1658 closes=1652 (possibly lingering effects from OOM?) see comments/attachments for details
[jira] [Assigned] (SOLR-3465) Replication Causes Two Searcher Warmups
[ https://issues.apache.org/jira/browse/SOLR-3465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller reassigned SOLR-3465: - Assignee: Mark Miller Replication Causes Two Searcher Warmups Key: SOLR-3465 URL: https://issues.apache.org/jira/browse/SOLR-3465 Project: Solr Issue Type: Bug Components: replication (java) Affects Versions: 4.0-ALPHA Reporter: Michael Garski Assignee: Mark Miller Fix For: 4.0, 5.0 I'm doing some testing with the current trunk, and am seeing that when a slave retrieves index updates from the master the warmup searcher registration is performed twice. Here is a snippet of the log that demonstrates this: May 16, 2012 6:02:02 PM org.apache.solr.handler.SnapPuller fetchLatestIndex INFO: Total time taken for download : 92 secs May 16, 2012 6:02:02 PM org.apache.solr.core.SolrDeletionPolicy onInit INFO: SolrDeletionPolicy.onInit: commits:num=2 commit{dir=/Users/mgarski/Code/indexes/solr2/geo/index,segFN=segments_1,generation=1,filenames=[segments_1] commit{dir=/Users/mgarski/Code/indexes/solr2/geo/index,segFN=segments_10,generation=36,filenames=[_45_0.tim, _45.fdt, segments_10, _45_0.tip, _45.fdx, _45.fnm, _45_0.frq, _45.per, _45_0.prx] May 16, 2012 6:02:02 PM org.apache.solr.core.SolrDeletionPolicy updateCommits INFO: newest commit = 36 May 16, 2012 6:02:02 PM org.apache.solr.search.SolrIndexSearcher init INFO: Opening Searcher@559fe5e6 main May 16, 2012 6:02:02 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener sending requests to Searcher@559fe5e6 main{StandardDirectoryReader(segments_10:335:nrt _45(4.0):C1096375)} May 16, 2012 6:02:02 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener done. 
May 16, 2012 6:02:02 PM org.apache.solr.core.SolrCore registerSearcher INFO: [geo] Registered new searcher Searcher@559fe5e6 main{StandardDirectoryReader(segments_10:335:nrt _45(4.0):C1096375)} May 16, 2012 6:02:02 PM org.apache.solr.update.DirectUpdateHandler2 commit INFO: start commit{flags=0,version=0,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false} May 16, 2012 6:02:02 PM org.apache.solr.search.SolrIndexSearcher init INFO: Opening Searcher@42101da9 main May 16, 2012 6:02:02 PM org.apache.solr.update.DirectUpdateHandler2 commit INFO: end_commit_flush May 16, 2012 6:02:02 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener sending requests to Searcher@42101da9 main{StandardDirectoryReader(segments_10:335:nrt _45(4.0):C1096375)} May 16, 2012 6:02:02 PM org.apache.solr.core.QuerySenderListener newSearcher INFO: QuerySenderListener done. May 16, 2012 6:02:02 PM org.apache.solr.core.SolrCore registerSearcher INFO: [geo] Registered new searcher Searcher@42101da9 main{StandardDirectoryReader(segments_10:335:nrt _45(4.0):C1096375)} I am trying to determine the cause, does anyone have any idea where to start?
[jira] [Updated] (SOLR-3465) Replication Causes Two Searcher Warmups
[ https://issues.apache.org/jira/browse/SOLR-3465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3465: -- Fix Version/s: 5.0 4.0
[jira] [Updated] (SOLR-3347) deleteByQuery failing with SolrCloud without _version_ in schema.xml
[ https://issues.apache.org/jira/browse/SOLR-3347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3347: -- Fix Version/s: 5.0 deleteByQuery failing with SolrCloud without _version_ in schema.xml Key: SOLR-3347 URL: https://issues.apache.org/jira/browse/SOLR-3347 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Benson Margulies Fix For: 4.0, 5.0 Attachments: 0001-Attempt-to-repro-problem-with-del-and-SolrCloud.patch, provision-and-start.sh, schema.xml, solrconfig.xml Distributed execution of deleteByQuery(\*:\*) depends on the existence of a field \_version\_ in the schema. The default schema has no comment on this field to indicate its importance or relevance to SolrCloud, and no message is logged nor error status returned when there is no such field. The code in DistributedUpdateProcessor just has an if statement that never does any local deleting without it. I don't know whether the intention was that this should work or not. If someone would clue me in, I'd make a patch for schema.xml to add comments, or a patch to D-U-P to add logging. If it was supposed to work, I'm probably not qualified to make the fix to make it work.
[jira] [Commented] (SOLR-3659) non-reproducible failures from RecoveryZkTest - mostly NRTCachingDirectory.deleteFile
[ https://issues.apache.org/jira/browse/SOLR-3659?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443162#comment-13443162 ] Dawid Weiss commented on SOLR-3659: --- Thanks Mark.
[jira] [Resolved] (SOLR-3088) create shard placeholders
[ https://issues.apache.org/jira/browse/SOLR-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-3088. --- Resolution: Fixed Assignee: Sami Siren create shard placeholders - Key: SOLR-3088 URL: https://issues.apache.org/jira/browse/SOLR-3088 Project: Solr Issue Type: New Feature Components: SolrCloud Reporter: Yonik Seeley Assignee: Sami Siren Fix For: 4.1 Attachments: SOLR-3088.patch, SOLR-3088.patch When creating a new collection, a placeholder for each shard should be created.
[jira] [Updated] (SOLR-3088) create shard placeholders
[ https://issues.apache.org/jira/browse/SOLR-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-3088: -- Fix Version/s: (was: 4.1) 5.0 4.0
RE: svn commit: r1378140 - /lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java
Hi Mike, The easier way (and also looks more code-style like): {@code Collection<Query>} - This escapes automatically and you can read it better. It also added <code></code> around! See my recent additions to WeakIdentityMap :-) Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: mikemcc...@apache.org [mailto:mikemcc...@apache.org] Sent: Tuesday, August 28, 2012 4:02 PM To: comm...@lucene.apache.org Subject: svn commit: r1378140 - /lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java Author: mikemccand Date: Tue Aug 28 14:02:19 2012 New Revision: 1378140 URL: http://svn.apache.org/viewvc?rev=1378140&view=rev Log: escape generics to HTML Modified: lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java URL: http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java?rev=1378140&r1=1378139&r2=1378140&view=diff == --- lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java (original) +++ lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java Tue Aug 28 14:02:19 2012 @@ -61,7 +61,7 @@ public class DisjunctionMaxQuery extends /** * Creates a new DisjunctionMaxQuery - * @param disjuncts a Collection<Query> of all the disjuncts to add + * @param disjuncts a Collection&lt;Query&gt; of all the disjuncts to add * @param tieBreakerMultiplier the weight to give to each matching non-maximum disjunct */ public DisjunctionMaxQuery(Collection<Query> disjuncts, float tieBreakerMultiplier) { @@ -77,14 +77,14 @@ public class DisjunctionMaxQuery extends } /** Add a collection of disjuncts to this disjunction - * via Iterable<Query> + * via Iterable&lt;Query&gt; * @param disjuncts a collection of queries to add as disjuncts. 
*/ public void add(Collection<Query> disjuncts) { this.disjuncts.addAll(disjuncts); } - /** @return An Iterator<Query> over the disjuncts */ + /** @return An Iterator&lt;Query&gt; over the disjuncts */ public Iterator<Query> iterator() { return disjuncts.iterator(); }
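The two escaping styles Uwe compares can be shown side by side in a tiny self-contained class. This is illustrative only — `CodeTagDemo` and `count` are made-up names, not part of the Lucene patch:

```java
import java.util.Collection;
import java.util.List;

public class CodeTagDemo {
    /**
     * With {@code Collection<Query>} the generics need no manual escaping and
     * render in a code font; the raw-HTML alternative is the noisier
     * <code>Collection&lt;Query&gt;</code>.
     *
     * @param disjuncts a {@code Collection<String>} standing in for the real disjuncts
     * @return the number of disjuncts
     */
    static int count(Collection<String> disjuncts) {
        return disjuncts.size();
    }

    public static void main(String[] args) {
        System.out.println(count(List.of("a", "b")));
    }
}
```

Running `javadoc` over either form produces the same rendered text; {@code ...} simply pushes the escaping burden onto the tool instead of the author.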
[jira] [Commented] (SOLR-3763) Make solr use lucene filters directly
[ https://issues.apache.org/jira/browse/SOLR-3763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443202#comment-13443202 ] Yonik Seeley commented on SOLR-3763: Interesting work Greg! A few points: bq. Another issue here is that filters currently cache sub-optimally given the changes in lucene towards atomic readers. This really depends on the problem - sometimes top-level cache is more optimal, and sometimes per-segment caches are more optimal. IMO, we shouldn't force either, but add the ability to cache per-segment. There are already issues open for caching disjunction clauses separately too - it's a rather orthogonal issue. It might be a better idea to start off small: we could make a QParser that creates a CachingWrapperFilter wrapped in a FilteredQuery and hence will cache per-segment. That should be simple and non-invasive enough to make it into 4.0 Make solr use lucene filters directly - Key: SOLR-3763 URL: https://issues.apache.org/jira/browse/SOLR-3763 Project: Solr Issue Type: Improvement Affects Versions: 4.0, 4.1, 5.0 Reporter: Greg Bowyer Assignee: Greg Bowyer Attachments: SOLR-3763-Make-solr-use-lucene-filters-directly.patch Presently solr uses bitsets, queries and collectors to implement the concept of filters. This has proven to be very powerful, but does come at the cost of introducing a large body of code into solr making it harder to optimise and maintain. Another issue here is that filters currently cache sub-optimally given the changes in lucene towards atomic readers. Rather than patch these issues, this is an attempt to rework the filters in solr to leverage the Filter subsystem from lucene as much as possible. 
In good time the aim is to get this to do the following: ∘ Handle setting up filter implementations that are able to correctly cache with reference to the AtomicReader that they are caching for, rather than for the entire index at large ∘ Get the post filters working. I am thinking that this can be done via lucene's chained filter, with the ‟expensive” filters being put towards the end of the chain - this has different semantics internally to the original implementation but IMHO should have the same result for end users ∘ Learn how to create filters that are potentially more efficient. At present solr basically runs a simple query that gathers a DocSet that relates to the documents that we want filtered; it would be interesting to make use of filter implementations that are in theory faster than query filters (for instance there are filters that are able to query the FieldCache) ∘ Learn how to decompose filters so that a complex filter query can be cached (potentially) as its constituent parts; for example the filter below currently needs love, care and feeding to ensure that the filter cache is not unduly stressed {code} 'category:(100) OR category:(200) OR category:(300)' {code} Really there is no reason not to express this in a cached form as {code} BooleanFilter( FilterClause(CachedFilter(TermFilter(Term(category, 100))), SHOULD), FilterClause(CachedFilter(TermFilter(Term(category, 200))), SHOULD), FilterClause(CachedFilter(TermFilter(Term(category, 300))), SHOULD) ) {code} This would yield better cache usage, I think, as we can reuse docsets across multiple queries as well as avoid issues when filters are presented in differing orders ∘ Instead of end users providing costing we might (and this is a big might FWIW) be able to create a sort of execution plan of filters, leveraging a combination of what the index is able to tell us as well as sampling and ‟educated guesswork”; in essence this is what some DBMS software does - for example postgresql has a genetic algo that attempts to solve the travelling salesman problem - to great effect ∘ I am sure I will probably come up with other ambitious ideas to plug in here :S Patches obviously forthcoming but the bulk of the work can be followed here https://github.com/GregBowyer/lucene-solr/commits/solr-uses-lucene-filters
RE: large messages from Jenkins failures
Hi Dawid, Unfortunately, -Dtests.showOutput=never penalizes all tests that don't have megabytes of failure output because some do. What do you think of adding an option to limit output size (e.g. -Dtests.outputLimitKB=10), and truncating to that size if it's exceeded? If you think this would be reasonable, I'm willing to (try to) do the work. Steve -Original Message- From: Steven A Rowe [mailto:sar...@syr.edu] Sent: Monday, August 20, 2012 4:10 PM To: dev@lucene.apache.org Subject: RE: large messages from Jenkins failures +1 to using -Dtests.showOutput=never for Jenkins jobs. - Steve -Original Message- From: dawid.we...@gmail.com [mailto:dawid.we...@gmail.com] On Behalf Of Dawid Weiss Sent: Monday, August 20, 2012 2:20 PM To: dev@lucene.apache.org Subject: Re: large messages from Jenkins failures This is partially mitigated by solr failure logs (the output for successful suites is not emitted to the console). As for myself I don't look at those e-mails directly; I typically click on the jenkins link to see the full output. Alternatively we could suppress the console output for failures too (it would still show the stack trace and everything, just not the stdout/sysouts) -- this is relatively easy to override even from jenkins level: -Dtests.showOutput=never Dawid On Fri, Aug 17, 2012 at 5:04 PM, Dyer, James james.d...@ingramcontent.com wrote: Is there any way we can limit the size of the messages Jenkins emails this list? Responding to a your mailbox is full warning, I found I had 32 recent Jenkins messages all over 1mb (a few were 10mb). A few weeks ago I returned from vacation to find my mail account partially disabled because Jenkins had used up most of my storage. Maybe, if the log is more than so many lines, just supply a link to it rather than have the whole thing in the email? I realize a lot of you have unlimited storage on your email accounts, but unfortunately I do not.
James Dyer E-Commerce Systems Ingram Content Group (615) 213-4311 - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
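Steve's proposed -Dtests.outputLimitKB could boil down to a truncation step like the sketch below. This is purely illustrative: the option name and the helper class are hypothetical, nothing like this exists in the test runner yet.

```java
import java.nio.charset.StandardCharsets;

/**
 * Illustrative sketch only: cap captured test output at a configurable
 * number of kilobytes, keeping the head of the output and noting how
 * many bytes were dropped. Class and method names are hypothetical.
 */
public class OutputTruncator {
    static String truncate(String output, int limitKB) {
        int limitBytes = limitKB * 1024;
        byte[] bytes = output.getBytes(StandardCharsets.UTF_8);
        if (bytes.length <= limitBytes) {
            return output; // under the limit: pass through untouched
        }
        // keep the first limitBytes bytes, then append a truncation marker
        String head = new String(bytes, 0, limitBytes, StandardCharsets.UTF_8);
        return head + "\n... [" + (bytes.length - limitBytes) + " bytes truncated]";
    }
}
```

A Jenkins job could then still link to the full tests-report.txt on disk while the mailed report stays small.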
Re: svn commit: r1378140 - /lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java
Aha! Most excellent. I'll switch to {@code ...}. Thanks for the pointer. Mike McCandless http://blog.mikemccandless.com On Tue, Aug 28, 2012 at 10:35 AM, Uwe Schindler u...@thetaphi.de wrote: Hi Mike, The easier way (and also looks more code-style like): {@code Collection<Query>} - This escapes automatically and you can read it better. It also adds <code>...</code> around it! See my recent additions to WeakIdentityMap :-) Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: mikemcc...@apache.org [mailto:mikemcc...@apache.org] Sent: Tuesday, August 28, 2012 4:02 PM To: comm...@lucene.apache.org Subject: svn commit: r1378140 - /lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java Author: mikemccand Date: Tue Aug 28 14:02:19 2012 New Revision: 1378140 URL: http://svn.apache.org/viewvc?rev=1378140&view=rev Log: escape generics to HTML Modified: lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java URL: http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java?rev=1378140&r1=1378139&r2=1378140&view=diff ============ --- lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java (original) +++ lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java Tue Aug 28 14:02:19 2012 @@ -61,7 +61,7 @@ public class DisjunctionMaxQuery extends /** * Creates a new DisjunctionMaxQuery - * @param disjuncts a Collection<Query> of all the disjuncts to add + * @param disjuncts a Collection&lt;Query&gt; of all the disjuncts to add * @param tieBreakerMultiplier the weight to give to each matching non-maximum disjunct */ public DisjunctionMaxQuery(Collection<Query> disjuncts, float tieBreakerMultiplier) { @@ -77,14 +77,14 @@ public class DisjunctionMaxQuery extends } /** Add a collection of disjuncts to this disjunction - * via Iterable<Query> + * via Iterable&lt;Query&gt; * @param disjuncts a collection of queries to add as disjuncts. */ public void add(Collection<Query> disjuncts) { this.disjuncts.addAll(disjuncts); } - /** @return An Iterator<Query> over the disjuncts */ + /** @return An Iterator&lt;Query&gt; over the disjuncts */ public Iterator<Query> iterator() { return disjuncts.iterator(); }
[jira] [Commented] (LUCENE-4322) Can we make oal.util.packed.BulkOperation* smaller?
[ https://issues.apache.org/jira/browse/LUCENE-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443246#comment-13443246 ] Adrien Grand commented on LUCENE-4322: -- It looks like one of the reasons for the size increase of lucene-core.jar is its number of classes. Although some code has been moved to modules, the number of classes has increased from 972 in 3.6.1 to 1471 in trunk. Can we make oal.util.packed.BulkOperation* smaller? --- Key: LUCENE-4322 URL: https://issues.apache.org/jira/browse/LUCENE-4322 Project: Lucene - Core Issue Type: Bug Reporter: Michael McCandless Fix For: 5.0, 4.0 These source files add up to a lot of sources ... it caused problems when compiling under Maven and IntelliJ. I committed a change to make separate files, but in aggregate this is still a lot ... EG maybe we don't need to specialize encode?
[jira] [Updated] (LUCENE-4322) Can we make oal.util.packed.BulkOperation* smaller?
[ https://issues.apache.org/jira/browse/LUCENE-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adrien Grand updated LUCENE-4322: - Attachment: LUCENE-4322.patch Patch that tries to reduce the JAR size: - unspecialized encode methods, - specialized decode methods only when 0 < bitsPerValue <= 24. Overall, it makes the core jar 361 KB smaller (2700542 bytes before applying the patch, 2330514 after). I ran a quick run of lucene-util in debug mode with blockPostingsFormat=For and it showed no performance difference.
[jira] [Commented] (LUCENE-4322) Can we make oal.util.packed.BulkOperation* smaller?
[ https://issues.apache.org/jira/browse/LUCENE-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443267#comment-13443267 ] Michael McCandless commented on LUCENE-4322: Patch looks good! That's a nice reduction. Too bad we have to duplicate decode 4 times (from long[]/byte[] to long[]/int[]). We could still shrink things further by doing less loop unrolling ourselves? Eg for BulkOperationPacked2, when decoding from byte[], that code is replicated 8 times but could be done as just 8 loop iterations.
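Mike's unrolling point can be illustrated with a small, self-contained sketch. This is not the actual BulkOperationPacked2 code, and the bit layout assumed here (four 2-bit values per byte, most-significant bits first) is an assumption for illustration; the real generated decoder may differ. The idea is simply that the replicated, unrolled statements can collapse into a plain loop:

```java
// Illustrative only: decode packed 2-bit values from a byte[] with a
// plain loop, rather than the fully-unrolled generated code discussed
// above. Each byte is assumed to hold four 2-bit values, MSB first.
public class Packed2Decoder {
    static int[] decode(byte[] blocks) {
        int[] values = new int[blocks.length * 4];
        for (int i = 0; i < blocks.length; i++) {
            int b = blocks[i] & 0xFF;
            for (int j = 0; j < 4; j++) {
                // shift down by 6, 4, 2, 0 bits and mask off two bits
                values[i * 4 + j] = (b >>> (6 - 2 * j)) & 0x3;
            }
        }
        return values;
    }
}
```

The looped form is far smaller in bytecode; whether the JIT recovers the unrolled form's speed is exactly the kind of thing a lucene-util benchmark run would have to confirm.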
[jira] [Commented] (LUCENE-4322) Can we make oal.util.packed.BulkOperation* smaller?
[ https://issues.apache.org/jira/browse/LUCENE-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443284#comment-13443284 ] Adrien Grand commented on LUCENE-4322: -- bq. We could still shrink things further by doing less loop unrolling ourselves? Eg for BulkOperationPacked2, when decoding from byte[], that code is replicated 8 times but could be done as just 8 loop iterations. Yes, I had planned to work on it too! Unless someone objects to my last patch, I'll commit it and will start working on this loop unrolling issue soon...
[jira] [Commented] (LUCENE-4322) Can we make oal.util.packed.BulkOperation* smaller?
[ https://issues.apache.org/jira/browse/LUCENE-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443297#comment-13443297 ] Robert Muir commented on LUCENE-4322: - +1 for the first iteration.
[jira] [Commented] (LUCENE-4324) extend checkJavaDocs.py to methods,constants,fields
[ https://issues.apache.org/jira/browse/LUCENE-4324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443299#comment-13443299 ] Robert Muir commented on LUCENE-4324: - I think this is working pretty well now as far as output goes. Also the overridden-method situation mentioned by Uwe is resolved, and Mike has it detecting more broken HTML. I think we should port it to Java 6 now, before it gets any harder. If no one beats me to it I'll try to look at this later tonight (but please feel free; otherwise I will be hacking on filling in the missing docs themselves) extend checkJavaDocs.py to methods,constants,fields --- Key: LUCENE-4324 URL: https://issues.apache.org/jira/browse/LUCENE-4324 Project: Lucene - Core Issue Type: New Feature Components: general/build Reporter: Robert Muir Attachments: LUCENE-4322.patch, LUCENE-4324_crawl.patch We have a large number of classes in the source code; it's nice that we have checkJavaDocs.py to ensure packages and classes have some human-level description, but I think we need it for methods etc. too. (It is also part of our contribution/style guidelines: http://wiki.apache.org/lucene-java/HowToContribute#Making_Changes) The reason is that, like classes and packages, once we can enforce this in the build, people will quickly add forgotten documentation soon after their commit while it's fresh in their mind. Otherwise, it's likely to never happen.
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443304#comment-13443304 ] Greg Bowyer commented on LUCENE-4332: - Wow, lots of interest. I will try to answer some of the salient points. Core was missing until today because one test (TestLuceneConstantVersion) didn't run correctly, as it was lacking the Lucene version system property. Currently pit refuses to run unless the underlying suite is all green (a good thing IMHO), so I didn't have core from my first run (it's there now). This takes a long time to run: all of the ancillary Lucene packages take roughly 4 hours on the largest-CPU ec2 instance, and core takes 8 hours (this was the other reason core was missing; I was waiting for it to finish crunching). As to the random seed, I completely agree, and it was one of the things I mentioned on the mailing list that makes the output of this tool not perfect. I do feel that the randomised tests typically do a better job at gaining coverage, but it's a good idea to stabilise the seed. Jars and build.xml: I have no problem changing this to whatever people think fits best into the build. My impression was that clover is handled the way it is because it is not technically open source and as a result has screwball licensing concerns; essentially I didn't know any better :S I will try to get a chance to make it use the ivy:cachepath approach. Regarding the risks posed by mutations, I cannot prove or say there are no risks; however, mutation testing is not random in the mutations applied - they are formulaic and quite simple. It will not permute arguments, nor will it mutate complex objects (it can and does mess with object references, turning references in arguments to nulls). I can conceive of ways in which it could screw up mutated code, making it possible to delete random files, but I don't think those are extremely likely situations.
FWIW I would be less worried about this deleting something on the filesystem and far more worried about it accidentally leaving corpses of undeleted files. Sandboxing it could solve that issue; if that is too much effort, another approach might be to work with the pitest team and build a security manager that is militant about file access, disallowing anything that canonicalises outside of a given path. Oh, and as Robert suggested, we can always point it away from key things. Integrate PiTest mutation coverage tool into build -- Key: LUCENE-4332 URL: https://issues.apache.org/jira/browse/LUCENE-4332 Project: Lucene - Core Issue Type: Improvement Affects Versions: 4.1, 5.0 Reporter: Greg Bowyer Assignee: Greg Bowyer Labels: build Attachments: LUCENE-4332-Integrate-PiTest-mutation-coverage-tool-into-build.patch, LUCENE-4332-Integrate-PiTest-mutation-coverage-tool-into-build.patch As discussed briefly on the mailing list, this patch is an attempt to integrate the PiTest mutation coverage tool into the lucene build
[jira] [Comment Edited] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443304#comment-13443304 ] Greg Bowyer edited comment on LUCENE-4332 at 8/29/12 4:51 AM: -- Wow, lots of interest. I will try to answer some of the salient points. Core was missing until today because one test (TestLuceneConstantVersion) didn't run correctly, as it was lacking the Lucene version system property. Currently pit refuses to run unless the underlying suite is all green (a good thing IMHO), so I didn't have core from my first run (it's there now). This takes a long time to run: all of the ancillary Lucene packages take roughly 4 hours on the largest-CPU ec2 instance, and core takes 8 hours (this was the other reason core was missing; I was waiting for it to finish crunching). As to the random seed, I completely agree, and it was one of the things I mentioned on the mailing list that makes the output of this tool not perfect. I do feel that the randomised tests typically do a better job at gaining coverage, but it's a good idea to stabilise the seed. Jars and build.xml: I have no problem changing this to whatever people think fits best into the build. My impression was that clover is handled the way it is because it is not technically open source and as a result has screwball licensing concerns; essentially I didn't know any better :S I will try to get a chance to make it use the ivy:cachepath approach. Regarding the risks posed by mutations, I cannot prove or say there are no risks; however, mutation testing is not random in the mutations applied - they are formulaic and quite simple. It will not permute arguments, nor will it mutate complex objects (it can and does mess with object references, turning references in arguments to nulls). I can conceive of ways in which it could screw up mutated code, making it possible to delete random files, but I don't think those are extremely likely situations.
FWIW I would be less worried about this deleting something on the filesystem and far more worried about it accidentally leaving corpses of undeleted files. Sandboxing it could solve that issue; if that is too much effort, another approach might be to work with the pitest team and build a security manager that is militant about file access, disallowing anything that canonicalises outside of a given path. Oh, and as Robert suggested, we can always point it away from key things. At the end of the day it's a tool like any other; I have exactly the same feelings as Robert on this {quote} This is cool. I'd say lets get it up and going on jenkins (even weekly or something). why worry about the imperfections in any of these coverage tools, whats way more important is when the results find situations where you thought you were testing something, but really arent, etc (here was a recent one found by clover http://svn.apache.org/viewvc?rev=1376722&view=rev). so imo just another tool to be able to identify serious gaps/test-bugs after things are up and running. and especially looking at deltas from line coverage to identify stuff thats 'executing' but not actually being tested. {quote}
[jira] [Commented] (LUCENE-4322) Can we make oal.util.packed.BulkOperation* smaller?
[ https://issues.apache.org/jira/browse/LUCENE-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443313#comment-13443313 ] Michael McCandless commented on LUCENE-4322: +1 for the first iteration.
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443332#comment-13443332 ] Hoss Man commented on LUCENE-4332: -- I had never heard of this technique until Greg mentioned it on the list, but the key thing that really impresses me about it is that (from what I can see) it can help us find code whose behavior does not affect the outcome of the tests -- this is something no code coverage tool like Clover or Emma can do. Clover is great for reporting that when the tests are run, method bar() is executed 200 times but foo() is never executed at all, but that doesn't tell us anything about whether the success of a test is actually dependent on the results of bar() being correct. With this kind of mutation testing, we will be able to see reports that say bar() was executed 200 times, but when I munged the result of bar() it didn't cause any tests to fail -- which could be a big help for identifying tests where we assert the results of method calls, but forget to assert the side effects of those calls.
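A toy illustration of the gap Hoss describes (hypothetical code, not from Lucene): the "test" below executes bar(), so a line-coverage tool like Clover reports it as covered, yet it never asserts on bar()'s return value - so a mutant that corrupts that return value would survive, and only a mutation-coverage tool would flag it.

```java
// Hypothetical example of code that is "covered" but whose return value
// is never actually tested, so a PiTest-style mutation of the return
// expression would survive.
public class Counter {
    private int count;

    int bar() {
        count++;          // side effect that the test below does check
        return count * 2; // return value the test below never checks
    }

    int getCount() {
        return count;
    }
}

// A "test" in this style covers every line of bar() without pinning
// down its return value:
//     Counter c = new Counter();
//     c.bar();                       // return value discarded
//     assertEquals(1, c.getCount()); // only the side effect is asserted
```

Mutating `count * 2` to, say, `count * 3` would leave such a test green, which is exactly the report mutation coverage produces.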
RE: svn commit: r1378140 - /lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java
Glad to help you! - {@code ...} creates a <code>...</code> environment, so it's formatted like code. - there is also {@literal ...} which makes the thing escaped, but plain text (so it's the same font as the remaining text) Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: Michael McCandless [mailto:luc...@mikemccandless.com] Sent: Tuesday, August 28, 2012 6:16 PM To: dev@lucene.apache.org Subject: Re: svn commit: r1378140 - /lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java Aha! Most excellent. I'll switch to {@code ...}. Thanks for the pointer. Mike McCandless http://blog.mikemccandless.com On Tue, Aug 28, 2012 at 10:35 AM, Uwe Schindler u...@thetaphi.de wrote: Hi Mike, The easier way (and also looks more code-style like): {@code Collection<Query>} - This escapes automatically and you can read it better. It also adds <code>...</code> around it! See my recent additions to WeakIdentityMap :-) Uwe - Uwe Schindler H.-H.-Meier-Allee 63, D-28213 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de -Original Message- From: mikemcc...@apache.org [mailto:mikemcc...@apache.org] Sent: Tuesday, August 28, 2012 4:02 PM To: comm...@lucene.apache.org Subject: svn commit: r1378140 - /lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java Author: mikemccand Date: Tue Aug 28 14:02:19 2012 New Revision: 1378140 URL: http://svn.apache.org/viewvc?rev=1378140&view=rev Log: escape generics to HTML Modified: lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java URL: http://svn.apache.org/viewvc/lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java?rev=1378140&r1=1378139&r2=1378140&view=diff ============ --- lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java
(original) +++ lucene/dev/trunk/lucene/core/src/java/org/apache/lucene/search/DisjunctionMaxQuery.java Tue Aug 28 14:02:19 2012 @@ -61,7 +61,7 @@ public class DisjunctionMaxQuery extends /** * Creates a new DisjunctionMaxQuery - * @param disjuncts a Collection<Query> of all the disjuncts to add + * @param disjuncts a Collection&lt;Query&gt; of all the disjuncts to add * @param tieBreakerMultiplier the weight to give to each matching non-maximum disjunct */ public DisjunctionMaxQuery(Collection<Query> disjuncts, float tieBreakerMultiplier) { @@ -77,14 +77,14 @@ public class DisjunctionMaxQuery extends } /** Add a collection of disjuncts to this disjunction - * via Iterable<Query> + * via Iterable&lt;Query&gt; * @param disjuncts a collection of queries to add as disjuncts. */ public void add(Collection<Query> disjuncts) { this.disjuncts.addAll(disjuncts); } - /** @return An Iterator<Query> over the disjuncts */ + /** @return An Iterator&lt;Query&gt; over the disjuncts */ public Iterator<Query> iterator() { return disjuncts.iterator(); }
[jira] [Commented] (LUCENE-4322) Can we make oal.util.packed.BulkOperation* smaller?
[ https://issues.apache.org/jira/browse/LUCENE-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443343#comment-13443343 ] Dawid Weiss commented on LUCENE-4322: - Good idea, Adrien. More classes are definitely worse than more static methods; this is a normal result of how the ZIP format works (each file is encoded individually, so compression dictionaries are inefficient for many small files).
Re: large messages from Jenkins failures
Unfortunately, -Dtests.showOutput=never penalizes all tests that don't have megabytes of failure output because some do. It doesn't penalize them, it says exactly what it does. A full report is always written to disk, including all sysouts -- look at tests-report.txt if I recall right. What do you think of adding an option to limit output size (e.g. -Dtests.outputLimitKB=10), and truncating to that size if it's exceeded? If you think this would be reasonable, I'm willing to (try to) do the work. I don't know... this seems like monkey patching for something that is wrong in the first place. Here are my thoughts on this: 1) the problem is not really in big e-mails but that they're frequent failures resulting from pretty much a fixed set of classes that we don't know how to stabilize. 2) I think Solr emits a LOT of logging information to the console. I don't know if all of it is really useful -- I doubt it, really. The solutions I see are simple -- disable the tests that fail 3-5 times and we still don't know what causes the problem. Disable them and file a JIRA issue. An alternative is to redirect these logs on Solr tests to a file or a circular memory buffer and only emit like a tail of N most recent messages if we know a test failed (which is easy to do with a simple rule). Patching the test runner to truncate log output is doable of course but I think it's powdering the corpse or whatever the English idiom for that is, you get me. Dawid - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-4322) Can we make oal.util.packed.BulkOperation* smaller?
[ https://issues.apache.org/jira/browse/LUCENE-4322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443388#comment-13443388 ] Uwe Schindler commented on LUCENE-4322: --- +1!! Smaller, smaller, smaller :-)
Re: large messages from Jenkins failures
On Tue, Aug 28, 2012 at 2:43 PM, Dawid Weiss dawid.we...@cs.put.poznan.pl wrote: 2) I think Solr emits a LOT of logging information to the console. I don't know if all of it is really useful -- I doubt it, really. The solutions I see are simple -- disable the tests that fail 3-5 times and we still don't know what causes the problem. Disable them and file a JIRA issue. Another option is to redirect solr fails to a different mailing list that only those that care about solr development can follow. Tests that fail a small percent of the time are still hugely valuable (i.e. when they fail for a different reason than usual, or they start failing much more often). Simply disabling them is far worse for the project. -Yonik http://lucidworks.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: large messages from Jenkins failures
On Mon, Aug 20, 2012 at 2:22 PM, Dawid Weiss dawid.we...@cs.put.poznan.pl wrote: Oh, one more thing -- if we suppress the console output we would absolutely have to keep (at jenkins) multiple tests-report.txt files because these always contain full output dumps (regardless of console settings). Otherwise we'd suppress potentially important info. +1 to not forward truckloads of info to the mailing lists, as long as we can easily get at it via jenkins or some other mechanism. -Yonik http://lucidworks.com - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-3763) Make solr use lucene filters directly
[ https://issues.apache.org/jira/browse/SOLR-3763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443409#comment-13443409 ] Greg Bowyer commented on SOLR-3763: --- I guess my next step is to get caching working, I am not sure quite how to take baby steps with this beyond getting to feature parity. Make solr use lucene filters directly - Key: SOLR-3763 URL: https://issues.apache.org/jira/browse/SOLR-3763 Project: Solr Issue Type: Improvement Affects Versions: 4.0, 4.1, 5.0 Reporter: Greg Bowyer Assignee: Greg Bowyer Attachments: SOLR-3763-Make-solr-use-lucene-filters-directly.patch Presently solr uses bitsets, queries and collectors to implement the concept of filters. This has proven to be very powerful, but does come at the cost of introducing a large body of code into solr making it harder to optimise and maintain. Another issue here is that filters currently cache sub-optimally given the changes in lucene towards atomic readers. Rather than patch these issues, this is an attempt to rework the filters in solr to leverage the Filter subsystem from lucene as much as possible. 
In good time the aim is to get this to do the following:
∘ Handle setting up filter implementations that are able to correctly cache with reference to the AtomicReader that they are caching for, rather than for the entire index at large
∘ Get the post filters working. I am thinking that this can be done via lucene's ChainedFilter, with the “expensive” filters being put towards the end of the chain - this has different semantics internally to the original implementation but IMHO should have the same result for end users
∘ Learn how to create filters that are potentially more efficient; at present solr basically runs a simple query that gathers a DocSet that relates to the documents that we want filtered. It would be interesting to make use of filter implementations that are in theory faster than query filters (for instance there are filters that are able to query the FieldCache)
∘ Learn how to decompose filters so that a complex filter query can be cached (potentially) as its constituent parts; for example the filter below currently needs love, care and feeding to ensure that the filter cache is not unduly stressed
{code}
'category:(100) OR category:(200) OR category:(300)'
{code}
Really there is no reason not to express this in a cached form as
{code}
BooleanFilter(
  FilterClause(CachedFilter(TermFilter(Term(category, 100))), SHOULD),
  FilterClause(CachedFilter(TermFilter(Term(category, 200))), SHOULD),
  FilterClause(CachedFilter(TermFilter(Term(category, 300))), SHOULD)
)
{code}
This would yield better cache usage, I think, as we can reuse docsets across multiple queries as well as avoid issues when filters are presented in differing orders
∘ Instead of end users providing costing we might (and this is a big might FWIW) be able to create a sort of execution plan of filters, leveraging a combination of what the index is able to tell us as well as sampling and “educated guesswork”; in essence this is what some DBMS software does - postgresql, for example, has a genetic algo that attempts to solve the travelling salesman problem - to great effect
∘ I am sure I will probably come up with other ambitious ideas to plug in here. :S
Patches obviously forthcoming, but the bulk of the work can be followed here https://github.com/GregBowyer/lucene-solr/commits/solr-uses-lucene-filters
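The per-clause caching idea above can be sketched outside of Lucene/Solr with a toy bitset cache. All class and method names here are illustrative only, not Solr or Lucene API:

```java
import java.util.BitSet;
import java.util.HashMap;
import java.util.Map;

/** Toy model: cache one BitSet per term so OR-filters reuse per-term results. */
public class CachedTermFilters {
    private final Map<String, BitSet> cache = new HashMap<String, BitSet>();
    private final Map<String, BitSet> index; // term -> matching docs (stands in for the index)

    public CachedTermFilters(Map<String, BitSet> index) {
        this.index = index;
    }

    /** Docset for a single term, computed once and then served from the cache. */
    public BitSet termFilter(String term) {
        BitSet cached = cache.get(term);
        if (cached == null) {
            BitSet docs = index.get(term);
            cached = (docs == null) ? new BitSet() : (BitSet) docs.clone();
            cache.put(term, cached);
        }
        return cached;
    }

    /** category:(100) OR category:(200) -> union of the cached per-term bitsets. */
    public BitSet shouldFilter(String... terms) {
        BitSet result = new BitSet();
        for (String t : terms) {
            result.or(termFilter(t));
        }
        return result;
    }
}
```

Because each term's docset is cached independently, a later filter such as category:(200) OR category:(300) reuses the entry for category:(200) no matter what order the clauses arrive in, which is exactly the benefit claimed above.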
[jira] [Created] (LUCENE-4335) Builds should regenerate all generated sources
Michael McCandless created LUCENE-4335: -- Summary: Builds should regenerate all generated sources Key: LUCENE-4335 URL: https://issues.apache.org/jira/browse/LUCENE-4335 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless We have more and more sources that are generated programmatically (query parsers, fuzzy levN tables from Moman, packed ints specialized decoders, etc.), and it's dangerous because developers may directly edit the generated sources and forget to edit the meta-source. It's happened to me several times ... most recently just after landing the BlockPostingsFormat branch. I think we should re-gen all of these in our builds and fail the build if this creates a difference. I know some generators (eg JavaCC) embed timestamps and so always create mods ... we can leave them out of this for starters (or maybe post-process the sources to remove the timestamps) ...
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443424#comment-13443424 ] Greg Bowyer commented on LUCENE-4332: - {quote} Thats a cool idea also for our own tests! We should install a SecurityManager always and only allow files in build/test. LuceneTestCase can enforce this SecurityManager installed! And if a test writes outside, fail it! {quote} Should we split that out as a separate thing and get a security manager built that hooks into the awesome carrot testing stuff? Integrate PiTest mutation coverage tool into build -- Key: LUCENE-4332 URL: https://issues.apache.org/jira/browse/LUCENE-4332 Project: Lucene - Core Issue Type: Improvement Affects Versions: 4.1, 5.0 Reporter: Greg Bowyer Assignee: Greg Bowyer Labels: build Attachments: LUCENE-4332-Integrate-PiTest-mutation-coverage-tool-into-build.patch, LUCENE-4332-Integrate-PiTest-mutation-coverage-tool-into-build.patch As discussed briefly on the mailing list, this patch is an attempt to integrate the PiTest mutation coverage tool into the lucene build
[jira] [Commented] (LUCENE-4335) Builds should regenerate all generated sources
[ https://issues.apache.org/jira/browse/LUCENE-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443427#comment-13443427 ] Robert Muir commented on LUCENE-4335: - I think we should use replaceRegexp commands (like those already there) to remove the various system information (time, paths, etc) that jflex/javacc/etc add to the generated code. then we could have an 'ant regenerate' command that regens all sources, and our usual 'svn status' check would ensure nothing changed.
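The "strip volatile headers, then diff" idea can be sketched in plain Java. The header pattern below is an assumption about the kind of timestamp comment JavaCC-style generators emit; it is not verified against any particular generator version:

```java
import java.util.regex.Pattern;

/** Strip volatile generator headers (timestamps, paths) before comparing regenerated sources. */
public class NormalizeGenerated {
    // Assumed shape of a generator header comment carrying a timestamp (hypothetical).
    private static final Pattern VOLATILE_HEADER =
        Pattern.compile("(?m)^/\\* Generated By:.*\\*/\\s*");

    public static String normalize(String source) {
        return VOLATILE_HEADER.matcher(source).replaceAll("");
    }

    /** Two regenerated files "match" if they agree after normalization. */
    public static boolean sameModuloVolatile(String a, String b) {
        return normalize(a).equals(normalize(b));
    }
}
```

A build check would regenerate the sources, normalize both the committed and the freshly generated copies this way, and fail only when they differ after normalization.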
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443430#comment-13443430 ] Robert Muir commented on LUCENE-4332: - i think a security manager would be useful for that reason separately. also we should ask Dawid if each forked jvm can get its own sandbox'ed java.io.tmpdir in build/test: this would only prevent problems and would be nice for various external libs that might write to java.io.tmpdir or whatever.
[jira] [Created] (LUCENE-4336) javacc tasks should use ivy
Robert Muir created LUCENE-4336: --- Summary: javacc tasks should use ivy Key: LUCENE-4336 URL: https://issues.apache.org/jira/browse/LUCENE-4336 Project: Lucene - Core Issue Type: Task Reporter: Robert Muir its a hassle to set this up currently. we should be able to just download javacc this way to run those tasks instead of making you download it yourself from the java.net site and setting build.properties options and stuff.
[jira] [Commented] (SOLR-139) Support updateable/modifiable documents
[ https://issues.apache.org/jira/browse/SOLR-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443446#comment-13443446 ] Dilip Maddi commented on SOLR-139: -- Christopher, Here is how I am able to update a document by posting an XML add:
{code:xml}
<add>
  <doc>
    <field name="id">VA902B</field>
    <field name="price" update="set">300</field>
  </doc>
</add>
{code}
Support updateable/modifiable documents --- Key: SOLR-139 URL: https://issues.apache.org/jira/browse/SOLR-139 Project: Solr Issue Type: New Feature Components: update Reporter: Ryan McKinley Attachments: Eriks-ModifiableDocument.patch, Eriks-ModifiableDocument.patch, Eriks-ModifiableDocument.patch, Eriks-ModifiableDocument.patch, Eriks-ModifiableDocument.patch, Eriks-ModifiableDocument.patch, getStoredFields.patch, getStoredFields.patch, getStoredFields.patch, getStoredFields.patch, getStoredFields.patch, SOLR-139_createIfNotExist.patch, SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, SOLR-139-ModifyInputDocuments.patch, SOLR-139-ModifyInputDocuments.patch, SOLR-139-ModifyInputDocuments.patch, SOLR-139-ModifyInputDocuments.patch, SOLR-139.patch, SOLR-139.patch, SOLR-139-XmlUpdater.patch, SOLR-269+139-ModifiableDocumentUpdateProcessor.patch It would be nice to be able to update some fields on a document without having to insert the entire document. Given the way lucene is structured, (for now) one can only modify stored fields. While we are at it, we can support incrementing an existing value - I think this only makes sense for numbers.
for background, see: http://www.nabble.com/loading-many-documents-by-ID-tf3145666.html#a8722293
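For illustration, the set-update XML from Dilip's comment can be built programmatically before being posted to Solr's /update handler. This tiny helper is hypothetical (plain string building, no XML escaping; the id and field values are taken from the example above):

```java
/** Build the kind of atomic set-update XML shown above (illustrative helper, no escaping). */
public class AtomicUpdateXml {
    public static String setFieldUpdate(String id, String field, String value) {
        return "<add><doc>"
             + "<field name=\"id\">" + id + "</field>"
             + "<field name=\"" + field + "\" update=\"set\">" + value + "</field>"
             + "</doc></add>";
    }
}
```

For example, AtomicUpdateXml.setFieldUpdate("VA902B", "price", "300") produces the same document shape as the hand-written XML in the comment.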
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443449#comment-13443449 ] Greg Bowyer commented on LUCENE-4332: - Following up it turns out to be *very* simple to do the security manager trick
{code:java}
import java.io.File;

public class Test {
    public static void main(String... args) {
        System.setSecurityManager(new SecurityManager() {
            public void checkDelete(String file) throws SecurityException {
                File fp = new File(file);
                String path = fp.getAbsolutePath();
                if (!path.startsWith("/tmp")) {
                    throw new SecurityException("Bang!");
                }
            }
        });
        new File("/home/greg/test").delete();
    }
}
{code}
{code}
Exception in thread "main" java.lang.SecurityException: Bang!
	at Test$1.checkDelete(Test.java:12)
	at java.io.File.delete(File.java:971)
	at Test.main(Test.java:17)
{code}
[jira] [Commented] (LUCENE-4336) javacc tasks should use ivy
[ https://issues.apache.org/jira/browse/LUCENE-4336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443448#comment-13443448 ] Uwe Schindler commented on LUCENE-4336: --- +1, I can look into this! I hope this can be solved from maven repo with a simple taskdef using cachepath.
[jira] [Comment Edited] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443449#comment-13443449 ] Greg Bowyer edited comment on LUCENE-4332 at 8/29/12 6:48 AM: -- Following up it turns out to be *very* simple to do the security manager trick
{code:java}
import java.io.File;

public class Test {
    public static void main(String... args) {
        System.setSecurityManager(new SecurityManager() {
            public void checkDelete(String file) throws SecurityException {
                File fp = new File(file);
                String path = fp.getAbsolutePath();
                if (!path.startsWith("/tmp")) {
                    throw new SecurityException("Bang!");
                }
            }
        });
        new File("/home/greg/test").delete();
    }
}
{code}
{code}
Exception in thread "main" java.lang.SecurityException: Bang!
	at Test$1.checkDelete(Test.java:12)
	at java.io.File.delete(File.java:971)
	at Test.main(Test.java:17)
{code}
There is a lot of scope here if you want to abuse checking for all sorts of things (files, sockets, threads etc)
[jira] [Commented] (LUCENE-4335) Builds should regenerate all generated sources
[ https://issues.apache.org/jira/browse/LUCENE-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443455#comment-13443455 ] Uwe Schindler commented on LUCENE-4335: --- That's a good idea; there is one problem with one of the tools, not sure if jflex or javacc. It happens that one of these tools reorders the switch statement's case XX: labels, creating different source. This seems to depend on the JDK version used: if you regen again it's the same, but often when I changed the metafile (like fixing /** to /* for license) and regened, the order was different. The pattern looks like one of these tools uses a HashSet/HashMap of case statements, where the order is undefined. We should check what causes this. bq. then we could have an 'ant regenerate' command that regens all sources, and our usual 'svn status' check would ensure nothing changed. We have to extend that one to also detect modifications. The current checker task only looks for unversioned files and checks properties. By this you can run it before commit. This one would need to check for mods, too.
[jira] [Comment Edited] (SOLR-3755) shard splitting
[ https://issues.apache.org/jira/browse/SOLR-3755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13440698#comment-13440698 ] Yonik Seeley edited comment on SOLR-3755 at 8/29/12 6:51 AM: - We need to associate hash ranges with shards and allow overlapping shards (i.e. 1-10, 1-5,6-10) General Strategy for splitting w/ no service interruptions: - Bring up 2 new cores on the same node, covering the new hash ranges - Both cores should go into recovery mode (i.e. leader should start forwarding updates) - leaders either need to consider these new smaller shards as replicas, or they need to forward to the leader for the new smaller shard - searches should no longer go across all shards, but should just span the complete hash range - leader does a hard commit and splits the index - Smaller indexes are installed on the new cores - Overseer should create new replicas for new shards - Mark old shard as “retired” – some mechanism to shut it down (after there is an acceptable amount of coverage of the new shards via replicas) Future: allow splitting even with “custom” shards was (Author: ysee...@gmail.com): We need to associate hash ranges with shards and allow overlapping shards (i.e. 1-10, 1-5,6-10) General Strategy for splitting w/ no service interruptions: - Bring up 2 new cores on the same node, covering the new hash ranges - Both cores should go into recovery mode (i.e. 
leader should start forwarding updates) - leader does a hard commit and splits the index - Smaller indexes are installed on the new cores - Overseer should create new replicas for new shards - Mark old shard as “retired” – some mechanism to shut it down (after there is an acceptable amount of coverage of the new shards via replicas) Future: allow splitting even with “custom” shards shard splitting --- Key: SOLR-3755 URL: https://issues.apache.org/jira/browse/SOLR-3755 Project: Solr Issue Type: New Feature Components: SolrCloud Reporter: Yonik Seeley We can currently easily add replicas to handle increases in query volume, but we should also add a way to add additional shards dynamically by splitting existing shards.
RE: large messages from Jenkins failures
Actually, after discussing with Uwe on #lucene-dev IRC, I'm looking into another mechanism to reduce the size of email messages: the Jenkins Email-Ext plugin has a per-build-job configuration item named "Pre-send script" that allows you to modify the MimeMessage object representing an email via a Groovy script. Here's what I've got so far - I'm going to enable this now on all the jobs on Uwe's Jenkins (the msg variable, of type MimeMessage, is made available by the plugin to the script):
maxLength = 20;
trailingLength = 1;
content = msg.getContent(); // assumption: mime type is text/plain
contentLength = content.length();
if (content.length() > maxLength) {
    text = content.substring(0, maxLength - trailingLength)
         + "\n\n[... truncated too long message ...]\n\n"
         + content.substring(contentLength - trailingLength);
    msg.setText(text, "UTF-8");
}
Steve -Original Message- From: ysee...@gmail.com [mailto:ysee...@gmail.com] On Behalf Of Yonik Seeley Sent: Tuesday, August 28, 2012 3:11 PM To: dev@lucene.apache.org Subject: Re: large messages from Jenkins failures On Mon, Aug 20, 2012 at 2:22 PM, Dawid Weiss dawid.we...@cs.put.poznan.pl wrote: Oh, one more thing -- if we suppress the console output we would absolutely have to keep (at jenkins) multiple tests-report.txt files because these always contain full output dumps (regardless of console settings). Otherwise we'd suppress potentially important info. +1 to not forward truckloads of info to the mailing lists, as long as we can easily get at it via jenkins or some other mechanism. -Yonik http://lucidworks.com
Re: large messages from Jenkins failures
Another option is to redirect solr fails to a different mailing list that only those that care about solr development can follow. I don't make a distinction between solr and lucene development, call me odd. I did try to help with those few tests (and I fixed some others) but no luck. Tests that fail a small percent of the time are still hugely valuable (i.e. when they fail for a different reason than usual, or they start failing much more often). Simply disabling them is far worse for the project. I don't agree with you here. I think having two or three failures daily from the same test (and typically with the same message) is far worse than not having it at all. You get used to having failing tests and this is bad. A test failure should be a red flag, something you eagerly look into because you're curious about what happened. I stopped having that feeling after a while, this seems bad to me. Dawid
[jira] [Commented] (LUCENE-4335) Builds should regenerate all generated sources
[ https://issues.apache.org/jira/browse/LUCENE-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443464#comment-13443464 ] Robert Muir commented on LUCENE-4335: - {quote} We should check what causes this. {quote} I agree, this is always scary when it happens. It makes it harder to tell if something really changed.
[jira] [Commented] (LUCENE-4336) javacc tasks should use ivy
[ https://issues.apache.org/jira/browse/LUCENE-4336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443465#comment-13443465 ] Robert Muir commented on LUCENE-4336: - The javacc 5.0 is in maven repo (I upgraded to it locally because it fixes some bugs in generated javadocs code). So it seems possible. Jflex is harder, i think 1.5 is still unreleased? But we can do just javacc for now: its an improvement.
Re: large messages from Jenkins failures
On Tue, Aug 28, 2012 at 3:04 PM, Yonik Seeley yo...@lucidworks.com wrote: On Tue, Aug 28, 2012 at 2:43 PM, Dawid Weiss dawid.we...@cs.put.poznan.pl wrote: 2) I think Solr emits a LOT of logging information to the console. I don't know if all of it is really useful -- I doubt it, really. The solutions I see are simple -- disable the tests that fail 3-5 times and we still don't know what causes the problem. Disable them and file a JIRA issue. Another option is to redirect solr fails to a different mailing list that only those that care about solr development can follow. I don't think splintering the dev community is healthy. What I really want is for the tests (or the bugs in Solr/Lucene causing the test failures) to be fixed, for a Solr dev who understands the test to dig into it. Tests that fail a small percent of the time are still hugely valuable (i.e. when they fail for a different reason than usual, or they start failing much more often). Simply disabling them is far worse for the project. I agree, for tests that don't fail frequently. This is the power/purpose of having a test. The problem is certain Solr tests fail very frequently and nobody jumps on those failures / we become complacent: such failures quickly stop being helpful. I know Mark has jumped on some of the test failures (thank you!), but he's only one person and we still have certain Solr tests failing frequently. This really reflects a deeper problem: Solr doesn't have enough dev coverage, or devs that have time/itch/energy to dig into hard test failures. When a test fails devs should be eager to fix it. That's the polar opposite of Solr's failures today. Mike McCandless http://blog.mikemccandless.com
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13443471#comment-13443471 ] Dawid Weiss commented on LUCENE-4332: - bq. toFacyDawidWeissSeedFormat That's just a hex representation of eight bytes (unsigned long), nothing fancy. bq. We should install a SecurityManager always and only allow files in build/test I didn't cater for the presence of a security manager in the runner and it will probably break things in the runner that will be tough to debug. Just a fair warning. You will probably have to give the runner a policy of being able to do everything and it still may fail to run. bq. also we should ask Dawid if each forked jvm can get its own sandbox'ed java.io.tmpdir in build/test: They already do I think because tmpdir property is overridden with . and cwd is set to J0/J1/J2/JN under the test dir.
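Dawid's "hex representation of eight bytes (unsigned long)" can be illustrated in plain Java; the helper class and method name here are made up for the sketch:

```java
/** Render a 64-bit seed as 16 hex digits, treating the long's bit pattern as unsigned. */
public class SeedFormat {
    public static String toHexSeed(long seed) {
        // %016X zero-pads to 16 digits; Java's %X already prints the long's bits unsigned.
        return String.format("%016X", seed);
    }
}
```

For example, the seed -1L renders as FFFFFFFFFFFFFFFF rather than a negative decimal, which matches the "nothing fancy" description of the runner's seed strings.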
Re: large messages from Jenkins failures
On Tue, Aug 28, 2012 at 3:57 PM, Dawid Weiss dawid.we...@cs.put.poznan.pl wrote: I don't agree with you here. I think having two or three failures daily from the same test (and typically with the same message) is far worse than not having it at all. Imperfect test coverage is better than no test coverage? Seems like we could simply disable all of our tests and then be happy because they will never fail ;-) Some of these tests fail because of threads left over that are hard to control - we have a lot more moving parts like jetty and zookeeper. Some tests started failing more often because of more stringent checks (like threads left over after a test). If these can't be fixed in a timely manner, it seems like the most logical thing to do is relax the checks - that maximises test coverage. You get used to having failing tests and this is bad. A test failure should be a red flag, something you eagerly look into because you're curious about what happened. I stopped having that feeling after a while, this seems bad to me. It is bad, but disabling seems even worse, unless we're just not worried about test code coverage at all. -Yonik http://lucidworks.com
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443477#comment-13443477 ] Robert Muir commented on LUCENE-4332: - {quote} They already do I think because tmpdir property is overridden with . and cwd is set to J0/J1/J2/JN under the test dir. {quote} OK, cool: its something we never did before that I always thought should be done...
[jira] [Created] (LUCENE-4337) Create Java security manager for forcible asserting behaviours in testing
Greg Bowyer created LUCENE-4337: --- Summary: Create Java security manager for forcible asserting behaviours in testing Key: LUCENE-4337 URL: https://issues.apache.org/jira/browse/LUCENE-4337 Project: Lucene - Core Issue Type: Bug Affects Versions: 4.1, 5.0, 4.0 Reporter: Greg Bowyer Assignee: Greg Bowyer Following on from conversations about mutation testing, there is an interest in building a Java security manager that is able to assert / guarantee certain behaviours
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443489#comment-13443489 ] Dawid Weiss commented on LUCENE-4332: - bq. OK, cool: its something we never did before that I always thought should be done... Yes, I checked. Don't let this confuse you: {code} <junit4:junit4 dir="@{tempDir}" tempdir="@{tempDir}"> {code} the tempdir is a temporary folder for all the extra files the runner needs (spills, event files, etc.), forked JVMs get their own subfolder. See the isolateWorkingDirectories parameter here; it's true by default so omitted: http://labs.carrotsearch.com/download/randomizedtesting/2.0.0-SNAPSHOT/docs/junit4-ant/Tasks/junit4.html I looked at the common-build.xml file though and I see only this: {code} <!-- Temporary directory in the cwd. --> <sysproperty key="tempDir" value="./" /> {code} I was wrong then: the default 'java.io.tmpdir' is not overridden here and I think it should be. I wrote a small test asking for File.createTempFile and it did use the global temp dir (not good).
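Dawid's quick probe above can be reproduced with a few lines of plain Java. This is a hedged sketch (the class and method names are mine, not from any patch) of checking whether `File.createTempFile` lands under the `java.io.tmpdir` the build sets:

```java
import java.io.File;
import java.io.IOException;

// Hedged sketch (illustration only, not the committed test): check whether a
// file created via File.createTempFile lands under the java.io.tmpdir that
// the build (should have) redirected into build/test.
public class TmpDirCheck {
    // True if 'f' resolves to a path underneath directory 'dir'.
    static boolean isUnder(File f, File dir) {
        try {
            return f.getCanonicalPath().startsWith(dir.getCanonicalPath() + File.separator);
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        File tmpDir = new File(System.getProperty("java.io.tmpdir"));
        File probe = File.createTempFile("lucene-probe", ".tmp");
        probe.deleteOnExit();
        // Prints true when java.io.tmpdir is honoured; with only the tempDir
        // sysproperty set, probes instead land in the global temp directory.
        System.out.println(isUnder(probe, tmpDir));
    }
}
```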
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443491#comment-13443491 ] Robert Muir commented on LUCENE-4332: - {quote} I didn't cater for the presence of a security manager in the runner and it will probably break things in the runner that will be tough to debug. Just a fair warning. You will probably have to give the runner a policy of being able to do everything and it still may fail to run. {quote} Well, if it's for test purposes only and not enforcing actual security, we should be able to give the runner a nice backdoor (e.g. static boolean BACKDOORED) if we really need to, right?
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443497#comment-13443497 ] Dawid Weiss commented on LUCENE-4332: - As for generating a common seed for all runs -- if you slurp junit4 taskdefs at the main level somewhere then you can do: {code} <!-- Pick the random seed now (unless already set). --> <junit4:pickseed property="tests.seed" /> {code} and as long as this property is passed to subants it will remain the same. But you can just as well generate it, even from the shell level: {code} openssl rand -hex 8 {code}
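As Dawid notes earlier in the thread, the seed is nothing more than eight random bytes (an unsigned long) printed as sixteen hex digits. A minimal illustration in Java, under the assumption that the class and method names here are hypothetical and not the runner's real API:

```java
import java.security.SecureRandom;

// Hedged sketch: render a long as a 16-digit, zero-padded, uppercase
// unsigned hex string -- the same shape as the runner's test seeds.
public class SeedFormat {
    static String format(long seed) {
        // %X on a long prints the two's-complement bits, i.e. unsigned hex.
        return String.format("%016X", seed);
    }

    public static void main(String[] args) {
        long seed = new SecureRandom().nextLong();
        System.out.println(format(seed)); // sixteen hex digits, random each run
    }
}
```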
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443501#comment-13443501 ] Dawid Weiss commented on LUCENE-4332: - bq. Well if its for test purposes only and not enforcing actual security, we should be able to give the runner a nice backdoor (e.g. static boolean BACKDOORED) if we really need to right? I don't know what it will do and how it will fail. I never run with a security manager; I think the design of SM is complex, weird and poorly documented (as evidenced by the security vuln. published today...). I just don't think of it much because I know very few people that actually run with a SM enabled. I am fairly sure I do many things in the code that I know won't work with an installed SM -- scanning properties, opening files, etc. I'm sure it is possible to configure it to grant permissions to certain packages but I've no idea how to do it. I'm not saying no to the idea, but I'm just letting you know that things may break in unexpected ways and I will have limited time to learn SM internals... Not that I'm especially looking forward to that. :)
Re: large messages from Jenkins failures
On Tue, Aug 28, 2012 at 4:03 PM, Michael McCandless luc...@mikemccandless.com wrote: Another option is to redirect solr fails to a different mailing list that only those that care about solr development can follow. I don't think splintering the dev community is healthy. Well, it seems like some people would prefer tests that fail sometimes to be disabled so they don't see the failure messages. Others (like me) find those tests to be extremely valuable since they represent coverage for key features. How else to resolve that? Just fix the test isn't an answer... unless one is personally committing the time to do it themselves. -Yonik http://lucidworks.com
Re: large messages from Jenkins failures
Imperfect test coverage is better than no test coverage? Seems like we could simply disable all of our tests and then be happy because they will never fail ;-) I didn't say that. I said the opposite - that having imperfect tests (or rather tests that cannot be fixed for whatever reason) discourages one from looking at test failures and makes one just unsubscribe from the jenkins mails. If this is the case then yes, I think not having a test like that at all is better than having it. Some of these tests fail because of threads left over that are hard to control - we have a lot more moving parts like jetty and zookeeper. I understand that, but these tests have been failing long before those checks were added. I also understand the complexity involved -- like I said, I also tried to fix those tests and failed. timely manner, it seems like the most logical thing to do is relax the checks - that maximises test coverage. These thread leak checks are meant to isolate test suites from each other and I think they do a good job at it. It is bad, but disabling seems even worse, unless we're just not worried about test code coverage at all. We have different viewpoints on this, sorry. Dawid
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443509#comment-13443509 ] Robert Muir commented on LUCENE-4332: - I agree with you, I don't know anything about security managers either. But it seems like we could use such a thing to find test bugs. Of course our security manager would have vulnerabilities (possibly introduced intentionally in case we need to backdoor it). But this is more like locking your front door so it won't blow open when it's windy outside.
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443512#comment-13443512 ] Uwe Schindler commented on LUCENE-4332: --- You would need a simple security manager that allows everything, except creating/deleting some files outside a working directory. We can check this out (by enabling a no-op security manager); if it then still passes we are fine. Then we can go forward. The security manager would only be activated by LTC before a test class and then disabled again afterwards. I would not restrict file access too much. For PiTest, I would only disallow everything outside the working directory root and later add more restrictions. I think a simple restriction to the build/ dir would also help to prevent Solr from creating files in the test-files src folder. An all-allowing security manager is easy; a template is available to extend. The problems you are talking about are the complex security restrictions dictated by J2EE (that limit things like creating classes or classloaders) - we don't want to do this, we only want a hook into file creation (new FileOutputStream) and want to throw an exception on a wrong path. If you allow all other security manager requests, there is no issue.
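Uwe's description maps onto a small SecurityManager subclass that no-ops every permission check and only vets file write/delete paths. The sketch below is an illustration only -- the class name and policy are assumptions on my part, not the code that ended up in LUCENE-4337:

```java
import java.io.File;
import java.io.IOException;

// Hedged sketch: a security manager that allows every request except file
// writes/deletes that resolve outside a sandbox directory.
public class SandboxSecurityManager extends SecurityManager {
    private final String sandboxPrefix;

    public SandboxSecurityManager(File sandbox) {
        String path;
        try {
            path = sandbox.getCanonicalPath();
        } catch (IOException e) {
            path = sandbox.getAbsolutePath();
        }
        this.sandboxPrefix = path + File.separator;
    }

    // Allow everything else: no J2EE-style restrictions on classloaders etc.
    @Override
    public void checkPermission(java.security.Permission perm) {}

    @Override
    public void checkPermission(java.security.Permission perm, Object context) {}

    private void checkPath(String file) {
        String resolved;
        try {
            resolved = new File(file).getCanonicalPath();
        } catch (IOException e) {
            resolved = new File(file).getAbsolutePath();
        }
        if (!resolved.startsWith(sandboxPrefix)) {
            throw new SecurityException("file access outside sandbox: " + file);
        }
    }

    // The hooks triggered by e.g. new FileOutputStream and File.delete.
    @Override
    public void checkWrite(String file) { checkPath(file); }

    @Override
    public void checkDelete(String file) { checkPath(file); }

    public static void main(String[] args) {
        File sandbox = new File(System.getProperty("java.io.tmpdir"));
        SandboxSecurityManager sm = new SandboxSecurityManager(sandbox);
        sm.checkWrite(new File(sandbox, "ok.txt").getPath()); // allowed
        try {
            sm.checkWrite(File.separator); // filesystem root: denied
        } catch (SecurityException expected) {
            System.out.println("denied: " + expected.getMessage());
        }
    }
}
```

Installing it for real (System.setSecurityManager) is where the breakage Dawid warns about would surface; the sketch only shows the path check itself.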
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443513#comment-13443513 ] Greg Bowyer commented on LUCENE-4332: - I can codify a security manager; they are somewhat complex, but I see our needs here as very simple (essentially asserting file paths)
[jira] [Commented] (LUCENE-4332) Integrate PiTest mutation coverage tool into build
[ https://issues.apache.org/jira/browse/LUCENE-4332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443520#comment-13443520 ] Dawid Weiss commented on LUCENE-4332: - Sure, all I'm saying is that I know very little about SMs so if something breaks I'll just stare at you blindly ;)
[jira] [Created] (LUCENE-4338) Redirect java.io.tmpdir so that each JVM gets their own tmpdir under a build/test
Dawid Weiss created LUCENE-4338: --- Summary: Redirect java.io.tmpdir so that each JVM gets their own tmpdir under a build/test Key: LUCENE-4338 URL: https://issues.apache.org/jira/browse/LUCENE-4338 Project: Lucene - Core Issue Type: Task Reporter: Dawid Weiss Assignee: Dawid Weiss
[jira] [Updated] (LUCENE-4338) Redirect java.io.tmpdir so that each JVM gets their own tmpdir under a build/test
[ https://issues.apache.org/jira/browse/LUCENE-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-4338: Attachment: LUCENE-4338.patch
[jira] [Commented] (LUCENE-4338) Redirect java.io.tmpdir so that each JVM gets their own tmpdir under a build/test
[ https://issues.apache.org/jira/browse/LUCENE-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443534#comment-13443534 ] Robert Muir commented on LUCENE-4338: - I don't know why I thought it would be trickier than that :) commit it!
[jira] [Commented] (LUCENE-4338) Redirect java.io.tmpdir so that each JVM gets their own tmpdir under a build/test
[ https://issues.apache.org/jira/browse/LUCENE-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443536#comment-13443536 ] Dawid Weiss commented on LUCENE-4338: - Simple but does the job. I'm running tests right now to see if nothing breaks, will commit soon.
[jira] [Resolved] (LUCENE-4338) Redirect java.io.tmpdir so that each JVM gets their own tmpdir under a build/test
[ https://issues.apache.org/jira/browse/LUCENE-4338?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss resolved LUCENE-4338. - Resolution: Fixed Fix Version/s: 4.0 5.0
[jira] [Commented] (LUCENE-4335) Builds should regenerate all generated sources
[ https://issues.apache.org/jira/browse/LUCENE-4335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443560#comment-13443560 ] Steven Rowe commented on LUCENE-4335: - I'm not sure about Javacc, but I've seen JFlex reorder cases in switch statements, even when there are no .jflex source changes, when run under different JVM versions. I recall seeing this specifically when generating under Java5 and Java6, both Oracle JVMs on Windows. I'll look into the generator to see how to fix the output order. Builds should regenerate all generated sources -- Key: LUCENE-4335 URL: https://issues.apache.org/jira/browse/LUCENE-4335 Project: Lucene - Core Issue Type: Improvement Reporter: Michael McCandless We have more and more sources that are generated programmatically (query parsers, fuzzy levN tables from Moman, packed ints specialized decoders, etc.), and it's dangerous because developers may directly edit the generated sources and forget to edit the meta-source. It's happened to me several times ... most recently just after landing the BlockPostingsFormat branch. I think we should re-gen all of these in our builds and fail the build if this creates a difference. I know some generators (eg JavaCC) embed timestamps and so always create mods ... we can leave them out of this for starters (or maybe post-process the sources to remove the timestamps) ...
Re: large messages from Jenkins failures
: I didn't say that. I said the opposite - that having imperfect tests : (or rather tests that cannot be fixed for whatever reason) discourages : from looking at test failures and makes one just unsubscribe from the : jenkins mails. If this is the case then yes, I think not having a test : like that at all is better than having it. As I've said before... Running these problematic tests in jenkins on machines like builds.apache.org is still very helpful because in many cases folks are unable to reproduce the failures anywhere else (or in some cases: some people can reproduce them, but not the people who have the knowledge/energy to fix them) If folks are concerned that certain tests fail too frequently to be considered stable and included in the main build, then let's: 1) slap a special @UnstableTest annotation on them 2) set up a new jenkins job that *only* runs these @UnstableTest jobs 3) configure this new jenkins job to not send any email ...seems like that would satisfy everyone, right? -Hoss
Re: large messages from Jenkins failures
1) slap a special @UnstableTest annotation on them 2) set up a new jenkins job that *only* runs these @UnstableTest jobs 3) configure this new jenkins job to not send any email ...seems like that would satisfy everyone right? I'm all for it. We can rename @BadApple to @Unstable and make it disabled by default. As for (2) this will be tricky because there's no way to run just a specific group. I like this idea as a feature though so if there's no vetos I'll add it to the runner. Dawid
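The @UnstableTest idea sketched in this exchange could look roughly like the following. The annotation name and the tests.unstable system property are hypothetical stand-ins; in practice the runner's group-filtering feature Dawid mentions would do the gating:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hedged sketch: an @UnstableTest marker plus a gate that skips annotated
// suites unless unstable runs are explicitly enabled.
public class UnstableTests {
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.TYPE, ElementType.METHOD})
    public @interface UnstableTest {}

    // A suite runs unless it is marked unstable and -Dtests.unstable=true is absent.
    static boolean shouldRun(Class<?> testClass) {
        boolean unstable = testClass.isAnnotationPresent(UnstableTest.class);
        return !unstable || Boolean.getBoolean("tests.unstable");
    }

    @UnstableTest
    static class FlakyZkTest {}

    static class StableTest {}

    public static void main(String[] args) {
        System.out.println(shouldRun(StableTest.class));  // true
        System.out.println(shouldRun(FlakyZkTest.class)); // false unless -Dtests.unstable=true
    }
}
```

A dedicated Jenkins job would then simply run with -Dtests.unstable=true and mail nobody, matching Hoss's points (2) and (3).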
Re: [JENKINS-MAVEN] Lucene-Solr-Maven-4.x #64: POMs out of sync
On Fri, Aug 17, 2012 at 9:42 AM, Steven A Rowe sar...@syr.edu wrote: Five out of the last seven builds have failed with this exact same ERROR. I can't reproduce on my Win7+Cygwin environment. Does anybody know what's happening here? If not, I'll ignore this test under Maven. Pretty strange. I can't reproduce locally. multiple values encountered for non multiValued field val_i: [10, 20] This should be very deterministic (i.e. it should always fail if it were actually a non multiValued field). The *_i fields are multivalued according to schema.xml, so this exception should not happen (the version=1.0 in schema.xml means multiValued=true by default). Off the top of my head, the only thing I can figure is that the maven-based tests are somehow getting the wrong schema sometimes. Maybe if there's some difference in how solr homes are set between ant and maven? -Yonik http://lucidworks.com
Re: [JENKINS-MAVEN] Lucene-Solr-Maven-4.x #64: POMs out of sync
: Off of the top of my head, the only thing I can figure is that the : maven based tests are somehow getting the wrong schema sometimes. : Maybe if there's some different with how solr homes are set between : ant and maven? that should be easy to sanity-check, right? add something like this into the @Before method... assertEquals("test-schema-1.0", core.getSchema().getSchemaName()) ...and then double check that all of the test schema files have unique name attributes in their XML. -Hoss
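The second half of Hoss's suggestion -- checking that the test schema files carry unique name attributes -- can be sketched without any Solr dependencies. The class, the regex-based attribute extraction, and the sample names below are all my own illustration, not code from the Solr test harness:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hedged sketch: given the name="..." attribute from each test schema's root
// element, flag any duplicates that would make the @Before assertion ambiguous.
public class SchemaNameCheck {
    // Returns the set of names that occur more than once.
    static Set<String> duplicates(List<String> names) {
        Set<String> seen = new HashSet<>(), dups = new HashSet<>();
        for (String n : names) {
            if (!seen.add(n)) dups.add(n);
        }
        return dups;
    }

    // Extracts the name="..." attribute value from a schema root element.
    static String schemaName(String schemaXml) {
        Matcher m = Pattern.compile("<schema[^>]*\\bname=\"([^\"]+)\"").matcher(schemaXml);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(schemaName("<schema name=\"test-schema-1.0\" version=\"1.0\">"));
        System.out.println(duplicates(List.of("a", "b", "a"))); // [a]
    }
}
```

Reading the actual files from the test-files tree is left out; feeding each file's root element through schemaName and the results through duplicates is the whole check.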
[jira] [Assigned] (LUCENE-4336) javacc tasks should use ivy
[ https://issues.apache.org/jira/browse/LUCENE-4336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler reassigned LUCENE-4336: - Assignee: Uwe Schindler javacc tasks should use ivy --- Key: LUCENE-4336 URL: https://issues.apache.org/jira/browse/LUCENE-4336 Project: Lucene - Core Issue Type: Task Reporter: Robert Muir Assignee: Uwe Schindler its a hassle to set this up currently. we should be able to just download javacc this way to run those tasks instead of making you download it yourself from the java.net site and setting build.properties options and stuff.
[jira] [Updated] (LUCENE-4336) javacc tasks should use ivy
[ https://issues.apache.org/jira/browse/LUCENE-4336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-4336: -- Attachment: LUCENE-4336.patch Here is the patch. The javacc task is a little bit complicated (needed some help from blogs,...) because javacc.jar does not contain the ant task. The task is shipped with ANT, and ANT expects an attribute called javacchome that must point to the folder of an extracted javacc distribution (although it only uses the JAR). <javacc/> looks inside this dir for bin/lib and inside that dir for a file named javacc.jar (without version). So we cannot use ivy:cachepath, and ivy:retrieve is too inflexible (it allows you to set up the directory layout, but the filename always contains the version). The trick here is a handwritten resolve/rewrite: - get ivy:cachefileset for javacc-5.0.jar - create a fake release folder in ${build.dir} and then copy the cachefileset into it with the mergemapper (to javacc.jar). This patch also cleans up javacc usage in general. We only use javacc in the queryparser module so I moved all ant logic there. I also removed some unused tasks and properties. I will commit soon.
[jira] [Comment Edited] (LUCENE-4336) javacc tasks should use ivy
[ https://issues.apache.org/jira/browse/LUCENE-4336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13443644#comment-13443644 ] Uwe Schindler edited comment on LUCENE-4336 at 8/29/12 10:21 AM:

Here is the patch. The javacc task is a little complicated (I needed some help from blogs) because javacc.jar does not contain the Ant task. The task is shipped with Ant, and Ant expects an attribute called javacchome that must point to the home folder of an extracted javacc distribution (although it only uses the JAR). The javacc task looks inside this dir for bin/lib, and inside that dir for a file named javacc.jar (without version). So we cannot use ivy:cachepath, and ivy:retrieve is too inflexible (it lets you set up the directory layout, but the filename always contains the version). The trick here is a handwritten resolve/rewrite:
- get ivy:cachefileset for the javacc-5.0.jar artifact
- create a fake release folder in ${build.dir} and then copy the cachefileset into it with a mergemapper to=javacc.jar
- the javacc.home dir is then located in ${build.dir} and nuked on ant clean

This patch also cleans up the javacc setup overall. We only use javacc in the queryparser module, so I moved all Ant logic there. I also removed some unused tasks and properties. I will commit soon.
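The resolve/rewrite trick described in the comments above can be sketched roughly as the following Ant fragment. This is an illustrative reconstruction, not the actual patch: the target names, the ivy setid, the .jj path, and the ivy configuration are assumptions.

```xml
<!-- Sketch of the resolve/rewrite trick (illustrative names, not the patch). -->
<target name="resolve-javacc">
  <!-- Resolve javacc into the local ivy cache and expose it as a fileset.
       (ivy:retrieve is not used because it always keeps the versioned
       filename, e.g. javacc-5.0.jar.) -->
  <ivy:cachefileset setid="javacc.fileset" conf="default"/>
  <!-- Build a fake "javacc home": the Ant <javacc> task expects
       ${javacc.home}/bin/lib/javacc.jar without a version suffix,
       so copy the cached jar there with a mergemapper rename. -->
  <copy todir="${build.dir}/javacc/bin/lib">
    <fileset refid="javacc.fileset"/>
    <mergemapper to="javacc.jar"/>
  </copy>
</target>

<target name="javacc" depends="resolve-javacc">
  <!-- javacchome points at the fake home under ${build.dir}, which is
       removed again by "ant clean". -->
  <javacc target="src/java/.../QueryParser.jj"
          javacchome="${build.dir}/javacc"/>
</target>
```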
[jira] [Resolved] (LUCENE-4336) javacc tasks should use ivy
[ https://issues.apache.org/jira/browse/LUCENE-4336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler resolved LUCENE-4336. --- Resolution: Fixed Fix Version/s: 4.0, 5.0

Committed trunk revision: 1378381
Committed 4.x revision: 1378382