[jira] [Commented] (SOLR-11205) Make arbitrary metrics values available for policies
[ https://issues.apache.org/jira/browse/SOLR-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119502#comment-16119502 ]

Noble Paul commented on SOLR-11205:
-----------------------------------

We should have a 1:1 mapping between this syntax and the metrics API. We should add a parameter which returns exactly that one value: SOLR-11215.

> Make arbitrary metrics values available for policies
> ----------------------------------------------------
>
>                 Key: SOLR-11205
>                 URL: https://issues.apache.org/jira/browse/SOLR-11205
>             Project: Solr
>          Issue Type: Sub-task
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Noble Paul
>            Assignee: Noble Paul
>
> Any metric available in the metrics API should be available for policy
> configurations.
> Example:
> {code}
> {'replica': 0, 'metrics:solr.jvm/os.systemLoadAverage': '<0.5'}
> {code}
> So the syntax to use a metric is:
> {code}
> metrics:/
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-11205) Make arbitrary metrics values available for policies
[ https://issues.apache.org/jira/browse/SOLR-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Noble Paul reassigned SOLR-11205:
---------------------------------

    Assignee: Noble Paul  (was: Andrzej Bialecki)
[jira] [Assigned] (SOLR-11215) Make a metric accessible through a single param
[ https://issues.apache.org/jira/browse/SOLR-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Noble Paul reassigned SOLR-11215:
---------------------------------

    Assignee: Andrzej Bialecki

> Make a metric accessible through a single param
> -----------------------------------------------
>
>                 Key: SOLR-11215
>                 URL: https://issues.apache.org/jira/browse/SOLR-11215
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Noble Paul
>            Assignee: Andrzej Bialecki
>
> Example:
> {code}
> /admin/metrics?key=solr.jvm:classes.loaded&key=solr.jvm:system.properties:java.specification.version
> {code}
> The above request must return just the two items at their corresponding paths.
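The filtering the issue asks for can be sketched as below. Note this is only an illustration of the idea: the `MetricKeyFilter` class and the nested-map response layout are hypothetical, not Solr's actual implementation. Each colon-separated segment of a key walks one level down the metrics tree, and only the requested leaves are returned.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MetricKeyFilter {

    // Resolve one colon-separated key (e.g. "solr.jvm:classes.loaded")
    // against a nested map: the first segment selects the registry, and
    // each remaining segment walks into the next nested map.
    @SuppressWarnings("unchecked")
    static Object resolve(Map<String, Object> metrics, String key) {
        Object node = metrics;
        for (String part : key.split(":")) {
            if (!(node instanceof Map)) {
                return null; // path runs past a leaf
            }
            node = ((Map<String, Object>) node).get(part);
        }
        return node;
    }

    // Return only the requested keys, each mapped to its resolved value;
    // unknown keys are silently skipped.
    static Map<String, Object> filter(Map<String, Object> metrics, String... keys) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (String key : keys) {
            Object value = resolve(metrics, key);
            if (value != null) {
                out.put(key, value);
            }
        }
        return out;
    }
}
```

With a response map containing a `solr.jvm` registry, `filter(metrics, "solr.jvm:classes.loaded", "solr.jvm:system.properties:java.specification.version")` would return exactly those two entries and nothing else, matching the behavior the issue requests.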
[jira] [Assigned] (SOLR-11205) Make arbitrary metrics values available for policies
[ https://issues.apache.org/jira/browse/SOLR-11205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Noble Paul reassigned SOLR-11205:
---------------------------------

    Assignee: Andrzej Bialecki  (was: Noble Paul)
[jira] [Created] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
Thomas Poppe created LUCENE-7921:
------------------------------------

             Summary: More efficient way to transform a RegExp to an Automaton
                 Key: LUCENE-7921
                 URL: https://issues.apache.org/jira/browse/LUCENE-7921
             Project: Lucene - Core
          Issue Type: Improvement
    Affects Versions: 6.5.1
            Reporter: Thomas Poppe
            Priority: Minor

Consider the following example:

    public static void main(String[] args) {
        org.apache.lucene.util.automaton.RegExp regExp =
            new org.apache.lucene.util.automaton.RegExp("[a-z]{1,13}x[a-z][a-z]?[a-z]?[a-z]?[a-z]?[a-z]{0,8}");
        org.apache.lucene.util.automaton.Automaton automaton = regExp.toAutomaton();
        System.out.println("states: " + automaton.getNumStates());
        System.out.println("transitions: " + automaton.getNumTransitions());
        System.out.println("---");
        try {
            regExp = new org.apache.lucene.util.automaton.RegExp("[a-z]{1,13}x[a-z]{1,13}");
            automaton = regExp.toAutomaton();
            System.out.println("Will not happen...");
        } catch (org.apache.lucene.util.automaton.TooComplexToDeterminizeException e) {
            automaton = regExp.toAutomaton(1_000_000);
            System.out.println("states: " + automaton.getNumStates());
            System.out.println("transitions: " + automaton.getNumTransitions());
            System.out.println("---");
        }
    }

Both regular expressions are equivalent, but it's much more efficient to "unroll" the repetition. It might be possible to optimize the RegExp#toAutomaton() method to handle this repetition without going over the default number of determinized states, and using less memory and CPU?
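The claim that the two patterns are equivalent can be checked without Lucene at all. The sketch below uses the JDK's backtracking `java.util.regex` engine (so it says nothing about Lucene's automaton determinization, only that the two patterns describe the same language on the tested inputs):

```java
import java.util.regex.Pattern;

public class RegexEquivalence {
    // The "unrolled" variant from the issue: 1 mandatory trailing letter,
    // 4 optional letters, then 0-8 more.
    static final Pattern UNROLLED = Pattern.compile(
        "[a-z]{1,13}x[a-z][a-z]?[a-z]?[a-z]?[a-z]?[a-z]{0,8}");

    // The compact min-max variant: 1-13 trailing letters.
    static final Pattern MINMAX = Pattern.compile(
        "[a-z]{1,13}x[a-z]{1,13}");

    // True when both patterns give the same verdict on the input.
    static boolean agree(String s) {
        return UNROLLED.matcher(s).matches() == MINMAX.matcher(s).matches();
    }
}
```

Spot-checking inputs such as `"abcxdef"` (matched by both) and `"ax" + "b".repeat(14)` (rejected by both, since at most 13 letters may follow the `x`) shows the patterns agreeing, which is the premise of the issue: equal languages, very different determinization cost in Lucene.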
[jira] [Updated] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
[ https://issues.apache.org/jira/browse/LUCENE-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thomas Poppe updated LUCENE-7921:
---------------------------------

    Description:
Consider the following example:
{code:title=ToAutomatonExample.java|borderStyle=solid}
public static void main(String[] args) {
    org.apache.lucene.util.automaton.RegExp regExp =
        new org.apache.lucene.util.automaton.RegExp("[a-z]{1,13}x[a-z][a-z]?[a-z]?[a-z]?[a-z]?[a-z]{0,8}");
    org.apache.lucene.util.automaton.Automaton automaton = regExp.toAutomaton();
    System.out.println("states: " + automaton.getNumStates());
    System.out.println("transitions: " + automaton.getNumTransitions());
    System.out.println("---");
    try {
        regExp = new org.apache.lucene.util.automaton.RegExp("[a-z]{1,13}x[a-z]{1,13}");
        automaton = regExp.toAutomaton();
        System.out.println("Will not happen...");
    } catch (org.apache.lucene.util.automaton.TooComplexToDeterminizeException e) {
        automaton = regExp.toAutomaton(1_000_000);
        System.out.println("states: " + automaton.getNumStates());
        System.out.println("transitions: " + automaton.getNumTransitions());
        System.out.println("---");
    }
}
{code}
Both regular expressions are equivalent, but it's much more efficient to "unroll" the repetition. It might be possible to optimize the RegExp#toAutomaton() method to handle this repetition without going over the default number of determinized states, and using less memory and CPU?
[jira] [Commented] (SOLR-10126) PeerSyncReplicationTest is a flakey test.
[ https://issues.apache.org/jira/browse/SOLR-10126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119509#comment-16119509 ]

Cao Manh Dat commented on SOLR-10126:
-------------------------------------

[~shalinmangar] It doesn't actually fix the problem, but it makes the test much harder to fail. The test fails when the requestVersions response contains a new update that is not present in the replica's recentUpdates (in the above example, this is update 9). Therefore, by putting a sleep between requestVersions and getting recentUpdates, we make sure that update 9 will be present in the replica's recentUpdates.

> PeerSyncReplicationTest is a flakey test.
> -----------------------------------------
>
>                 Key: SOLR-10126
>                 URL: https://issues.apache.org/jira/browse/SOLR-10126
>             Project: Solr
>          Issue Type: Test
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Mark Miller
>         Attachments: faillogs.tar.gz, SOLR-10126.patch
>
>
> Could be related to SOLR-9555, but I will see what else pops up under
> beasting.
[jira] [Commented] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
[ https://issues.apache.org/jira/browse/LUCENE-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119512#comment-16119512 ]

Dawid Weiss commented on LUCENE-7921:
-------------------------------------

Two identical regexps have an identical minimal deterministic automaton, so no unrolling will get you a benefit? I don't quite understand why you get the "too complex" exception in one case vs. another, but it has to be a side effect of how this check is implemented (didn't look at the code); in the general sense both should be throwing the exception, I think.
[jira] [Commented] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
[ https://issues.apache.org/jira/browse/LUCENE-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119523#comment-16119523 ]

Thomas Poppe commented on LUCENE-7921:
--------------------------------------

It's the opposite: unrolling gets you the benefit. I was hoping more for the conclusion that none of the cases should be throwing the exception, as the regexp is not that complex, and neither is the resulting automaton. Elasticsearch has no problems executing it (in the unrolled variant).
[jira] [Commented] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
[ https://issues.apache.org/jira/browse/LUCENE-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119534#comment-16119534 ]

Dawid Weiss commented on LUCENE-7921:
-------------------------------------

No, what I meant is this:
{code}
states: 118
transitions: 319
---
states: 118
transitions: 319
---
{code}
As you see, the two final minimal representations are identical -- the code that converts the automaton to a minimal deterministic automaton should be looked into as to why it explodes in the second case; the state count check shouldn't explode then, just as in the first example. So I'm not saying you're wrong, but it's not about optimizing or rewriting the regexp, it's about fixing the determinization routine.

bq. Elasticsearch has no problems executing it (in the unrolled variant).

ES uses Lucene code underneath (correct me if I'm wrong), so if you use the same Lucene version you should observe the same result. There were some recent commits to this expansion check -- perhaps it's a regression.
[jira] [Commented] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
[ https://issues.apache.org/jira/browse/LUCENE-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119540#comment-16119540 ]

Thomas Poppe commented on LUCENE-7921:
--------------------------------------

Thanks for your comment, Dawid. One more thing I would like to note: the second case also takes more memory and CPU to convert to an automaton, so there might be an opportunity to optimize.
[jira] [Comment Edited] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
[ https://issues.apache.org/jira/browse/LUCENE-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119540#comment-16119540 ]

Thomas Poppe edited comment on LUCENE-7921 at 8/9/17 7:56 AM:
--------------------------------------------------------------

Thanks for your comment, Dawid. One more thing I would like to note: the second case also takes more memory and CPU to convert to an automaton, so there might be an opportunity to optimize - but I guess you were already suggesting that.
[jira] [Commented] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
[ https://issues.apache.org/jira/browse/LUCENE-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119548#comment-16119548 ]

Dawid Weiss commented on LUCENE-7921:
-------------------------------------

Automaton determinisation is a long topic... And has been for a long time too. :)

I quickly looked at the code - the difference is in:
{code}
private Automaton toAutomatonInternal(Map automata,
    AutomatonProvider automaton_provider, int maxDeterminizedStates)
{code}
the two representations will undergo different paths. I wonder if we could minimize subautomatons before we apply repeat, for example here (but in multiple places, really):
{code}
case REGEXP_REPEAT_MINMAX:
    a = Operations.repeat(
        exp1.toAutomatonInternal(automata, automaton_provider, maxDeterminizedStates),
        min, max);
    a = MinimizationOperations.minimize(a, maxDeterminizedStates);
    break;
{code}
[jira] [Created] (SOLR-11216) Make PeerSync more robust
Cao Manh Dat created SOLR-11216:
-----------------------------------

             Summary: Make PeerSync more robust
                 Key: SOLR-11216
                 URL: https://issues.apache.org/jira/browse/SOLR-11216
             Project: Solr
          Issue Type: Improvement
      Security Level: Public (Default Security Level. Issues are Public)
            Reporter: Cao Manh Dat

First of all, I will change the issue's title to a better name when I have one.

While digging into SOLR-10126, I found a case that can make peerSync fail:
* leader and replica receive updates 1 to 4
* replica stops
* replica misses updates 5, 6
* replica starts recovery
## replica buffers updates 7, 8
## replica requests versions from the leader
## replica gets the recent versions, which are 1, 2, 3, 4, 7, 8
## at the same time the leader receives update 9, so it returns updates 1 to 9 (for the versions request)
## replica does peerSync and requests updates 5, 6, 9 from the leader
## replica applies updates 5, 6, 9; its index does not have updates 7, 8, and maxVersionSpecified for the fingerprint is 9, therefore the fingerprint comparison fails

My question here is: why does the replica request update 9 (step 6) while it knows that updates with lower versions (updates 7, 8) are in its buffering tlog? Should we request only updates lower than the lowest update in its buffering tlog (< 7)?

Someone may ask: what if the replica never receives update 9? In that case, the leader will put the replica into LIR state, so the replica will run the recovery process again.
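The proposed rule, "request only updates lower than the lowest version buffered in the tlog", can be sketched as below. This is a hypothetical helper illustrating the version-selection idea only, not Solr's actual PeerSync code; the method name and the plain `Set<Long>` inputs are inventions for the example.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class PeerSyncSketch {

    // Pick the updates to request from the leader: skip anything the
    // replica already has and, per the proposal, anything at or above
    // the lowest version sitting in the replica's buffering tlog
    // (those will be applied when buffered updates are replayed).
    static List<Long> versionsToRequest(Set<Long> leaderVersions,
                                        Set<Long> replicaVersions,
                                        Set<Long> bufferedVersions) {
        long lowestBuffered = bufferedVersions.isEmpty()
            ? Long.MAX_VALUE                     // nothing buffered: no upper cutoff
            : Collections.min(bufferedVersions);
        List<Long> toRequest = new ArrayList<>();
        for (long v : new TreeSet<>(leaderVersions)) { // ascending order
            if (!replicaVersions.contains(v) && v < lowestBuffered) {
                toRequest.add(v);
            }
        }
        return toRequest;
    }
}
```

In the scenario above (leader has 1-9, replica has 1-4, tlog buffers 7 and 8), this selects only updates 5 and 6, avoiding the fingerprint mismatch that requesting update 9 causes.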
[jira] [Commented] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
[ https://issues.apache.org/jira/browse/LUCENE-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119564#comment-16119564 ]

Dawid Weiss commented on LUCENE-7921:
-------------------------------------

Ooops, sorry -- it calls {{toAutomatonInternal}}, so it's already minimized. Still, your observation is right: the minimization there should be handled more efficiently for this type of automata (so that it's not fully expanded).
When runTestEdisMaxSolrFeature runs, collection1 uses the wrong schema
Hi,

* The error: [image: inline image 1]
* a. Debugging found: the schema in "C:\Users\Administrator\git\lucene-solr\eclipse-build\main\solr\collection1\conf" is wrong.
* b. When I clean the project, here is the schema.
* c. Environment: Eclipse, Windows, latest Solr master branch.
* The other JUnit tests pass; I am not sure if this is a problem?

Thanks a lot!
[jira] [Updated] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
[ https://issues.apache.org/jira/browse/LUCENE-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-7921: Attachment: capture-7.png capture-8.png A smaller example that shows what's going on. {{REGEXP_CONCATENATION}} expands elements of a concatenated regexp; the repeated-min-max on its own is much larger than a sequence of optionals. > More efficient way to transform a RegExp to an Automaton > > > Key: LUCENE-7921 > URL: https://issues.apache.org/jira/browse/LUCENE-7921 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 6.5.1 >Reporter: Thomas Poppe >Priority: Minor > Attachments: capture-7.png, capture-8.png > > > Consider the following example: > {code:title=ToAutomatonExample.java|borderStyle=solid} > public static void main(String[] args) { > org.apache.lucene.util.automaton.RegExp regExp = > new > org.apache.lucene.util.automaton.RegExp("[a-z]{1,13}x[a-z][a-z]?[a-z]?[a-z]?[a-z]?[a-z]{0,8}"); > org.apache.lucene.util.automaton.Automaton automaton = > regExp.toAutomaton(); > System.out.println("states: " + automaton.getNumStates()); > System.out.println("transitions: " + automaton.getNumTransitions()); > System.out.println("---"); > try { > regExp = new > org.apache.lucene.util.automaton.RegExp("[a-z]{1,13}x[a-z]{1,13}"); > automaton = regExp.toAutomaton(); > System.out.println("Will not happen..."); > } catch > (org.apache.lucene.util.automaton.TooComplexToDeterminizeException e) { > automaton = regExp.toAutomaton(1_000_000); > System.out.println("states: " + automaton.getNumStates()); > System.out.println("transitions: " + > automaton.getNumTransitions()); > System.out.println("---"); > } > } > {code} > Both regular expressions are equivalent, but it's much more efficient to > "unroll" the repetition. It might be possible to optimize the > Regex#toAutomaton() method to handle this repetition without going over the > default number of determinized states, and using less memory and CPU? 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
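The report's premise is that the two expressions accept exactly the same language. A quick sanity check of that claim, using java.util.regex purely as a stand-in oracle (not Lucene's automaton package) so the snippet stays self-contained:

```java
import java.util.regex.Pattern;

// Sanity check for the report's premise: the compact {1,13} form and the
// "unrolled" sequence-of-optionals form accept exactly the same strings.
// java.util.regex stands in for Lucene's automaton package here.
class RegexEquivalence {
    static final Pattern COMPACT =
        Pattern.compile("[a-z]{1,13}x[a-z]{1,13}");
    static final Pattern UNROLLED =
        Pattern.compile("[a-z]{1,13}x[a-z][a-z]?[a-z]?[a-z]?[a-z]?[a-z]{0,8}");

    // True when both patterns give the same verdict on the input.
    static boolean agree(String s) {
        return COMPACT.matcher(s).matches() == UNROLLED.matcher(s).matches();
    }
}
```

The unrolled tail `[a-z]` + five optionals + `[a-z]{0,8}` spans 1 to 13 letters, matching `[a-z]{1,13}` clause for clause.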
[jira] [Updated] (LUCENE-7917) Wildcard query parser of MultiFieldQueryParser should support boosts
[ https://issues.apache.org/jira/browse/LUCENE-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yegor Dovganich updated LUCENE-7917: Attachment: (was: JIRA-7919.patch) > Wildcard query parser of MultiFieldQueryParser should support boosts > > > Key: LUCENE-7917 > URL: https://issues.apache.org/jira/browse/LUCENE-7917 > Project: Lucene - Core > Issue Type: Improvement > Components: core/queryparser >Reporter: Yegor Dovganich > Attachments: LUCENE-7919.patch > > > https://stackoverflow.com/questions/45454710/getwildcardquery-method-of-multifieldqueryparser-doesnt-process-boosts-map-as-g > For some reason getWildcardQuery of MultiFieldQueryParser doesn't handle > boosts map as getFieldQuery does. But it'd be great if getWildcardQuery does > it as well. > The patch in attachments. Please, check it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-7917) Wildcard query parser of MultiFieldQueryParser should support boosts
[ https://issues.apache.org/jira/browse/LUCENE-7917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yegor Dovganich updated LUCENE-7917: Attachment: LUCENE-7919.patch > Wildcard query parser of MultiFieldQueryParser should support boosts > > > Key: LUCENE-7917 > URL: https://issues.apache.org/jira/browse/LUCENE-7917 > Project: Lucene - Core > Issue Type: Improvement > Components: core/queryparser >Reporter: Yegor Dovganich > Attachments: LUCENE-7919.patch > > > https://stackoverflow.com/questions/45454710/getwildcardquery-method-of-multifieldqueryparser-doesnt-process-boosts-map-as-g > For some reason getWildcardQuery of MultiFieldQueryParser doesn't handle > boosts map as getFieldQuery does. But it'd be great if getWildcardQuery does > it as well. > The patch in attachments. Please, check it. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+178) - Build # 20286 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20286/ Java: 32bit/jdk-9-ea+178 -client -XX:+UseSerialGC --illegal-access=deny 1 tests failed. FAILED: org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream Error Message: Error from server at https://127.0.0.1:46513/solr/workQueue_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404HTTP ERROR: 404 Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not find: /solr/workQueue_shard2_replica_n3/update http://eclipse.org/jetty";>Powered by Jetty:// 9.3.14.v20161028 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:46513/solr/workQueue_shard2_replica_n3: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/workQueue_shard2_replica_n3/update. Reason: Can not find: /solr/workQueue_shard2_replica_n3/update http://eclipse.org/jetty";>Powered by Jetty:// 9.3.14.v20161028 at __randomizedtesting.SeedInfo.seed([A4B1321301176E87:8671B3E8227D4497]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testExecutorStream(StreamExpressionTest.java:6798) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:564) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverrid
[jira] [Comment Edited] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
[ https://issues.apache.org/jira/browse/LUCENE-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119580#comment-16119580 ] Dawid Weiss edited comment on LUCENE-7921 at 8/9/17 8:41 AM: - I think we should optimize {{Operations.repeat}} so that it produces saner input for minimization (single start state and epsilon arcs if x>=1 in \{x,y\}). It still wouldn't be the same behavior (concatenation would see different input clauses of a larger regexp), but it should be less costly (fewer states). was (Author: dweiss): I think we should optimize {{Operations.repeat}} so that it produces saner input for minimization (single start state and epsilon arcs if x>=1 in {x,y}). It still wouldn't be the same behavior (concatenation would see different input clauses of a larger regexp), but it should be less costly (fewer states). > More efficient way to transform a RegExp to an Automaton > > > Key: LUCENE-7921 > URL: https://issues.apache.org/jira/browse/LUCENE-7921 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 6.5.1 >Reporter: Thomas Poppe >Priority: Minor > Attachments: capture-7.png, capture-8.png > > > Consider the following example: > {code:title=ToAutomatonExample.java|borderStyle=solid} > public static void main(String[] args) { > org.apache.lucene.util.automaton.RegExp regExp = > new > org.apache.lucene.util.automaton.RegExp("[a-z]{1,13}x[a-z][a-z]?[a-z]?[a-z]?[a-z]?[a-z]{0,8}"); > org.apache.lucene.util.automaton.Automaton automaton = > regExp.toAutomaton(); > System.out.println("states: " + automaton.getNumStates()); > System.out.println("transitions: " + automaton.getNumTransitions()); > System.out.println("---"); > try { > regExp = new > org.apache.lucene.util.automaton.RegExp("[a-z]{1,13}x[a-z]{1,13}"); > automaton = regExp.toAutomaton(); > System.out.println("Will not happen..."); > } catch > (org.apache.lucene.util.automaton.TooComplexToDeterminizeException e) { > automaton = 
regExp.toAutomaton(1_000_000); > System.out.println("states: " + automaton.getNumStates()); > System.out.println("transitions: " + > automaton.getNumTransitions()); > System.out.println("---"); > } > } > {code} > Both regular expressions are equivalent, but it's much more efficient to > "unroll" the repetition. It might be possible to optimize the > RegExp#toAutomaton() method to handle this repetition without going over the > default number of determinized states, and using less memory and CPU?
[jira] [Commented] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
[ https://issues.apache.org/jira/browse/LUCENE-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119580#comment-16119580 ] Dawid Weiss commented on LUCENE-7921: - I think we should optimize {{Operations.repeat}} so that it produces saner input for minimization (single start state and epsilon arcs if x>=1 in {x,y}). It still wouldn't be the same behavior (concatenation would see different input clauses of a larger regexp), but it should be less costly (fewer states). > More efficient way to transform a RegExp to an Automaton > > > Key: LUCENE-7921 > URL: https://issues.apache.org/jira/browse/LUCENE-7921 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 6.5.1 >Reporter: Thomas Poppe >Priority: Minor > Attachments: capture-7.png, capture-8.png > > > Consider the following example: > {code:title=ToAutomatonExample.java|borderStyle=solid} > public static void main(String[] args) { > org.apache.lucene.util.automaton.RegExp regExp = > new > org.apache.lucene.util.automaton.RegExp("[a-z]{1,13}x[a-z][a-z]?[a-z]?[a-z]?[a-z]?[a-z]{0,8}"); > org.apache.lucene.util.automaton.Automaton automaton = > regExp.toAutomaton(); > System.out.println("states: " + automaton.getNumStates()); > System.out.println("transitions: " + automaton.getNumTransitions()); > System.out.println("---"); > try { > regExp = new > org.apache.lucene.util.automaton.RegExp("[a-z]{1,13}x[a-z]{1,13}"); > automaton = regExp.toAutomaton(); > System.out.println("Will not happen..."); > } catch > (org.apache.lucene.util.automaton.TooComplexToDeterminizeException e) { > automaton = regExp.toAutomaton(1_000_000); > System.out.println("states: " + automaton.getNumStates()); > System.out.println("transitions: " + > automaton.getNumTransitions()); > System.out.println("---"); > } > } > {code} > Both regular expressions are equivalent, but it's much more efficient to > "unroll" the repetition. 
It might be possible to optimize the > RegExp#toAutomaton() method to handle this repetition without going over the > default number of determinized states, and using less memory and CPU?
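The optimization Dawid describes amounts to producing the sequence-of-optionals shape directly. As a rough textual illustration only (`unrollRepeat` is a hypothetical helper, not a Lucene method), rewriting `atom{min,max}` into min mandatory copies plus (max - min) optional copies gives the form the reporter found cheap to determinize:

```java
// Hypothetical helper, not part of Lucene: rewrite atom{min,max} as min
// mandatory copies followed by (max - min) optional copies. The unrolled
// form determinizes into far fewer states than the naive repeat expansion.
class RepeatUnroller {
    static String unrollRepeat(String atom, int min, int max) {
        if (min < 0 || max < min) {
            throw new IllegalArgumentException("need 0 <= min <= max");
        }
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < min; i++) {
            sb.append(atom);              // mandatory copy
        }
        for (int i = min; i < max; i++) {
            sb.append(atom).append('?');  // optional copy
        }
        return sb.toString();
    }
}
```

For example, `unrollRepeat("[a-z]", 1, 13)` yields one mandatory `[a-z]` followed by twelve `[a-z]?` copies.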
[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.
[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119585#comment-16119585 ] Mark Miller commented on SOLR-10032: Yeah, I'll move it to a public repo at some point. First report is done, they will show up here: http://apache-solr.bitballoon.com/ One thing to note is that SharedFSAutoReplicaFailoverTest looks broken. That may be the same on the 7.0 release branch. One of the things I'm looking forward to with this is some more visibility on nightlies - too easy to break them and not care or notice as it is. > Create report to assess Solr test quality at a commit point. > > > Key: SOLR-10032 > URL: https://issues.apache.org/jira/browse/SOLR-10032 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller > Attachments: Lucene-Solr Master Test Beast Results > 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 > iterations, 12 at a time .pdf, Lucene-Solr Master Test Beasults > 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30 > iterations, 10 at a time.pdf, Lucene-Solr Master Test Beasults > 02-08-2017-6696eafaae18948c2891ce758c7a2ec09873dab8 Level Medium+- Running 30 > iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults > 02-14-2017- Level Medium+-a1f114f70f3800292c25be08213edf39b3e37f6a Running 30 > iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults > 02%2F17%2F2017-19c8ec2bf1882bed1bb34d0b55198d03f2018838 Level Hard Running > 100 iterations, 12 at a time, 8 cores.pdf > > > We have many Jenkins instances blasting tests, some official, some policeman, > I and others have or had their own, and the email trail proves the power of > the Jenkins cluster to find test fails. > However, I still have a very hard time with some basic questions: > what tests are flakey right now? 
which test fails actually affect devs most? > did I break it? was that test already flakey? is that test still flakey? what > are our worst tests right now? is that test getting better or worse? > We really need a way to see exactly what tests are the problem, not because > of OS or environmental issues, but more basic test quality issues. Which > tests are flakey and how flakey are they at any point in time. > Reports: > https://drive.google.com/drive/folders/0ByYyjsrbz7-qa2dOaU1UZDdRVzg?usp=sharing > 01/24/2017 - > https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing > 02/01/2017 - > https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing > 02/08/2017 - > https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing > 02/14/2017 - > https://docs.google.com/spreadsheets/d/1eZ9_ds_0XyqsKKp8xkmESrcMZRP85jTxSKkNwgtcUn0/edit?usp=sharing > 02/17/2017 - > https://docs.google.com/spreadsheets/d/1LEPvXbsoHtKfIcZCJZ3_P6OHp7S5g2HP2OJgU6B2sAg/edit?usp=sharing -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-10032) Create report to assess Solr test quality at a commit point.
[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-10032. Resolution: Fixed Fix Version/s: master (8.0) 7.0 > Create report to assess Solr test quality at a commit point. > > > Key: SOLR-10032 > URL: https://issues.apache.org/jira/browse/SOLR-10032 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 7.0, master (8.0) > > Attachments: Lucene-Solr Master Test Beast Results > 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 > iterations, 12 at a time .pdf, Lucene-Solr Master Test Beasults > 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30 > iterations, 10 at a time.pdf, Lucene-Solr Master Test Beasults > 02-08-2017-6696eafaae18948c2891ce758c7a2ec09873dab8 Level Medium+- Running 30 > iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults > 02-14-2017- Level Medium+-a1f114f70f3800292c25be08213edf39b3e37f6a Running 30 > iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults > 02%2F17%2F2017-19c8ec2bf1882bed1bb34d0b55198d03f2018838 Level Hard Running > 100 iterations, 12 at a time, 8 cores.pdf > > > We have many Jenkins instances blasting tests, some official, some policeman, > I and others have or had their own, and the email trail proves the power of > the Jenkins cluster to find test fails. > However, I still have a very hard time with some basic questions: > what tests are flakey right now? which test fails actually affect devs most? > did I break it? was that test already flakey? is that test still flakey? what > are our worst tests right now? is that test getting better or worse? > We really need a way to see exactly what tests are the problem, not because > of OS or environmental issues, but more basic test quality issues. 
Which > tests are flakey and how flakey are they at any point in time. > Reports: > https://drive.google.com/drive/folders/0ByYyjsrbz7-qa2dOaU1UZDdRVzg?usp=sharing > 01/24/2017 - > https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing > 02/01/2017 - > https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing > 02/08/2017 - > https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing > 02/14/2017 - > https://docs.google.com/spreadsheets/d/1eZ9_ds_0XyqsKKp8xkmESrcMZRP85jTxSKkNwgtcUn0/edit?usp=sharing > 02/17/2017 - > https://docs.google.com/spreadsheets/d/1LEPvXbsoHtKfIcZCJZ3_P6OHp7S5g2HP2OJgU6B2sAg/edit?usp=sharing -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
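The beasting reports above run each test for a fixed number of iterations. One simple way to frame the flakiness question the issue asks (a sketch only, with illustrative names, not the report generator's actual code):

```java
// Sketch only (not the report generator's actual code): a test is "flakey"
// when it both passes and fails across beasting iterations; a test that
// fails every run is consistently broken, not flakey.
class FlakeyClassifier {
    static boolean isFlakey(int failures, int iterations) {
        return failures > 0 && failures < iterations;
    }

    static double failureRate(int failures, int iterations) {
        if (iterations <= 0) {
            throw new IllegalArgumentException("iterations must be positive");
        }
        return (double) failures / iterations;
    }
}
```

Tracking failureRate per test across report dates is what answers "is that test getting better or worse?"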
[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_141) - Build # 6809 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6809/ Java: 64bit/jdk1.8.0_141 -XX:-UseCompressedOops -XX:+UseSerialGC 2 tests failed. FAILED: org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader Error Message: Doc with id=1 not found in http://127.0.0.1:56501/forceleader_test_collection due to: Path not found: /id; rsp={doc=null} Stack Trace: java.lang.AssertionError: Doc with id=1 not found in http://127.0.0.1:56501/forceleader_test_collection due to: Path not found: /id; rsp={doc=null} at __randomizedtesting.SeedInfo.seed([98C62E1EFCD43A82:7E511ADEC556C3E3]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603) at org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:556) at org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:142) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterM
LUCENE-7917: Wildcard query parser of MultiFieldQueryParser should support boosts
Hello! For some reason getWildcardQuery of MultiFieldQueryParser doesn't handle the boosts map the way getFieldQuery does, but it would be great if it did. I created the issue in Jira and attached a patch. https://issues.apache.org/jira/browse/LUCENE-7917 -- Regards, Yegor Dovganich
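The requested behavior is essentially that wildcard clauses consult the per-field boosts map the same way getFieldQuery already does for term clauses. Schematically, in plain Java with hypothetical names (this is not MultiFieldQueryParser's actual code):

```java
import java.util.Map;

// Schematic only, with hypothetical names: fields absent from the boosts
// map keep the default boost of 1.0, mirroring how getFieldQuery treats
// unboosted fields.
class BoostLookup {
    static float boostFor(Map<String, Float> boosts, String field) {
        if (boosts == null) {
            return 1.0f; // no boosts configured at all
        }
        Float boost = boosts.get(field);
        return (boost == null) ? 1.0f : boost;
    }
}
```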
[jira] [Commented] (SOLR-11196) Solr 6.5.0 consuming entire Heap suddenly while working smoothly on Solr 6.1.0
[ https://issues.apache.org/jira/browse/SOLR-11196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119629#comment-16119629 ] Amit commented on SOLR-11196: - This is a Master-Slave architecture. Indexing happens on master only. While searching is on both master and slave through a load balancer. Both Master and Slave gets OOM frequently. Both master and slave works smoothly on 6.1.0 with the same configurations. > Solr 6.5.0 consuming entire Heap suddenly while working smoothly on Solr 6.1.0 > -- > > Key: SOLR-11196 > URL: https://issues.apache.org/jira/browse/SOLR-11196 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.5, 6.6 >Reporter: Amit > > Please note, this issue does not occurs on Solr-6.1.0 while the same occurs > on Solr-6.5.0 and above. To fix this we had to move back to Solr-6.1.0 > version. > We have been hit by a Solr Behavior in production which we are unable to > debug. To start with here are the configurations for solr: > Solr Version: 6.5, Master with 1 Slave of the same configuration as mentioned > below. > *JVM Config:* > > {code:java} > -Xms2048m > -Xmx4096m > -XX:+ParallelRefProcEnabled > -XX:+UseCMSInitiatingOccupancyOnly > -XX:CMSInitiatingOccupancyFraction=50 > {code} > Rest all are default values. > *Solr Config* : > > {code:java} > > > {solr.autoCommit.maxTime:30} > false > > > > {solr.autoSoftCommit.maxTime:90} > > > > 1024 >autowarmCount="0" /> >autowarmCount="0" /> >autowarmCount="0" /> >initialSize="0" autowarmCount="10" regenerator="solr.NoOpRegenerator" /> > true > 20 > ${solr.query.max.docs:40} > > false > 2 > > {code} > *The Host (AWS) configurations are:* > RAM: 7.65GB > Cores: 4 > Now, our solr works perfectly fine for hours and sometimes for days but > sometimes suddenly memory jumps up and the GC kicks in causing long big > pauses with not much to recover. 
We are seeing this happening most often when > one or multiple segments gets added or deleted post a hard commit. It doesn't > matter how many documents got indexed. The images attached shows that just 1 > document was indexed, causing an addition of one segment and it all got > messed up till we restarted the Solr. > Here are the images from NewRelic and Sematext (Kindly click on the links to > view): > [JVM Heap Memory Image | https://i.stack.imgur.com/9dQAy.png] > [1 Document and 1 Segment addition Image | > https://i.stack.imgur.com/6N4FC.png] > Update: Here is the JMap output when SOLR last died, we have now increased > the JVM memory to xmx of 12GB: > > {code:java} > num #instances #bytes class name > -- > 1: 11210921 1076248416 > org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat$IntBlockTermState > 2: 10623486 934866768 [Lorg.apache.lucene.index.TermState; > 3: 15567646 475873992 [B > 4: 10623485 424939400 > org.apache.lucene.search.spans.SpanTermQuery$SpanTermWeight > 5: 15508972 372215328 org.apache.lucene.util.BytesRef > 6: 15485834 371660016 org.apache.lucene.index.Term > 7: 15477679 371464296 > org.apache.lucene.search.spans.SpanTermQuery > 8: 10623486 339951552 org.apache.lucene.index.TermContext > 9: 1516724 150564320 [Ljava.lang.Object; > 10:724486 50948800 [C > 11: 1528110 36674640 java.util.ArrayList > 12:849884 27196288 > org.apache.lucene.search.spans.SpanNearQuery > 13:582008 23280320 > org.apache.lucene.search.spans.SpanNearQuery$SpanNearWeight > 14:481601 23116848 org.apache.lucene.document.FieldType > 15:623073 19938336 org.apache.lucene.document.StoredField > 16:721649 17319576 java.lang.String > 17: 327297329640 [J > 18: 146435788376 [F > {code} > The load on Solr is not much - max it goes to 2000 requests per minute. The > indexing load can sometimes be in burst but most of the time its pretty low. > But as mentioned above sometimes even a single document indexing can put solr > into tizzy and sometimes it just works like a charm. 
> Edit: > The last configuration on which 6.1 works but not 6.5 is: > *JVM Config:* > > {code:java} > Xms: 2 GB > Xmx: 12 GB > {code} > *Solr Config:* > We also removed soft commit. > {code:java} > <autoCommit> > <maxTime>${solr.autoCommit.maxTime:30}</maxTime> > <openSearcher>true</openSearcher> > </autoCommit> > {code} > *The Host (AWS) configurations:* > RAM: 16GB >
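The jmap histogram quoted above has columns `num #instances #bytes class name`; totalling the #bytes column shows how much of the heap the TermState-related entries alone account for. A small parsing sketch, with its format assumptions noted in comments:

```java
// Sketch: sum the #bytes column of "jmap -histo"-style lines, e.g.
// "1: 11210921 1076248416 some.Class". Assumes whitespace-separated
// columns with #bytes in the third position; the header row and any
// malformed lines are skipped.
class HistogramSum {
    static long totalBytes(String[] lines) {
        long total = 0;
        for (String line : lines) {
            String[] cols = line.trim().split("\\s+");
            if (cols.length >= 4) {
                try {
                    total += Long.parseLong(cols[2]); // #bytes column
                } catch (NumberFormatException ignored) {
                    // header or malformed line: skip it
                }
            }
        }
        return total;
    }
}
```

Applied to the top two entries above, the IntBlockTermState and TermState[] rows together already exceed 2 GB.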
[jira] [Comment Edited] (LUCENE-7921) More efficient way to transform a RegExp to an Automaton
[ https://issues.apache.org/jira/browse/LUCENE-7921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119540#comment-16119540 ] Thomas Poppe edited comment on LUCENE-7921 at 8/9/17 9:46 AM: -- Thanks for your comment Dawid. One more thing I would like to note: the second case also takes more memory and CPU to convert to an automaton, so there might be an opportunity to optimize - but I guess you were already suggesting that. was (Author: thomaspoppe): Thanks for your comment Dawid. One more think I would like to note: the second case also takes more memory and CPU to convert to an automaton, so there might be an opportunity to optimize - but I guess you were already suggesting that. > More efficient way to transform a RegExp to an Automaton > > > Key: LUCENE-7921 > URL: https://issues.apache.org/jira/browse/LUCENE-7921 > Project: Lucene - Core > Issue Type: Improvement >Affects Versions: 6.5.1 >Reporter: Thomas Poppe >Priority: Minor > Attachments: capture-7.png, capture-8.png > > > Consider the following example: > {code:title=ToAutomatonExample.java|borderStyle=solid} > public static void main(String[] args) { > org.apache.lucene.util.automaton.RegExp regExp = > new > org.apache.lucene.util.automaton.RegExp("[a-z]{1,13}x[a-z][a-z]?[a-z]?[a-z]?[a-z]?[a-z]{0,8}"); > org.apache.lucene.util.automaton.Automaton automaton = > regExp.toAutomaton(); > System.out.println("states: " + automaton.getNumStates()); > System.out.println("transitions: " + automaton.getNumTransitions()); > System.out.println("---"); > try { > regExp = new > org.apache.lucene.util.automaton.RegExp("[a-z]{1,13}x[a-z]{1,13}"); > automaton = regExp.toAutomaton(); > System.out.println("Will not happen..."); > } catch > (org.apache.lucene.util.automaton.TooComplexToDeterminizeException e) { > automaton = regExp.toAutomaton(1_000_000); > System.out.println("states: " + automaton.getNumStates()); > System.out.println("transitions: " + > 
automaton.getNumTransitions()); > System.out.println("---"); > } > } > {code} > Both regular expressions are equivalent, but it's much more efficient to > "unroll" the repetition. It might be possible to optimize the > RegExp#toAutomaton() method to handle this repetition without going over the > default number of determinized states, and using less memory and CPU?
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_141) - Build # 229 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/229/ Java: 64bit/jdk1.8.0_141 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 3 tests failed. FAILED: org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader Error Message: Doc with id=1 not found in http://127.0.0.1:34099/_evt/forceleader_test_collection due to: Path not found: /id; rsp={doc=null} Stack Trace: java.lang.AssertionError: Doc with id=1 not found in http://127.0.0.1:34099/_evt/forceleader_test_collection due to: Path not found: /id; rsp={doc=null} at __randomizedtesting.SeedInfo.seed([DA0FE0A9B4AF86E:EB37CACAA2C8010F]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.assertTrue(Assert.java:43) at org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603) at org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:556) at org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:142) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleI
[jira] [Updated] (SOLR-11215) Make a metric accessible through a single param
[ https://issues.apache.org/jira/browse/SOLR-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki updated SOLR-11215: - Fix Version/s: 7.2 > Make a metric accessible through a single param > --- > > Key: SOLR-11215 > URL: https://issues.apache.org/jira/browse/SOLR-11215 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Noble Paul >Assignee: Andrzej Bialecki > Fix For: 7.2 > > > example > {code} > /admin/metrics?key=solr.jvm:classes.loaded&key=solr.jvm:system.properties:java.specification.version > {code} > The above request must return just the two items in their corresponding path -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11215) Make a metric accessible through a single param
[ https://issues.apache.org/jira/browse/SOLR-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119672#comment-16119672 ] Andrzej Bialecki commented on SOLR-11215: -- Currently users of the {{MetricsHandler}} API have to specify group, prefix, and property separately, and it's difficult to select multiple items precisely without inadvertently pulling in other partially matching items. Additionally, the {{prefix}} syntax always requires a scan through all available metrics in a registry, whereas accessing a single concrete metric by key does not.
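The difference described in the comment above can be sketched with a plain map standing in for a metrics registry (names and structure here are hypothetical, not Solr's actual MetricsHandler code): a concrete key is a single O(1) lookup returning exactly one item, while a prefix forces a scan of every registered metric and can match neighbours the caller never asked for.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of key-based vs prefix-based metric access;
// illustrative only, not Solr's MetricsHandler implementation.
public class MetricsLookup {
    static final Map<String, Object> REGISTRY = new LinkedHashMap<>();
    static {
        REGISTRY.put("solr.jvm:classes.loaded", 4321);
        REGISTRY.put("solr.jvm:classes.unloaded", 7); // also matches prefix "classes"
        REGISTRY.put("solr.jvm:os.systemLoadAverage", 0.42);
    }

    // key= style: one O(1) lookup, exactly one metric (or null).
    static Object byKey(String key) {
        return REGISTRY.get(key);
    }

    // prefix= style: must scan the whole registry, and may pull in
    // partially matching items the caller did not want.
    static long byPrefixCount(String prefix) {
        return REGISTRY.keySet().stream()
                .filter(k -> k.startsWith("solr.jvm:" + prefix))
                .count();
    }

    public static void main(String[] args) {
        System.out.println(byKey("solr.jvm:classes.loaded")); // exactly one value
        System.out.println(byPrefixCount("classes"));         // over-matches: 2 items
    }
}
```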
[jira] [Commented] (SOLR-11090) add Replica.getProperty accessor
[ https://issues.apache.org/jira/browse/SOLR-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119673#comment-16119673 ] ASF subversion and git services commented on SOLR-11090: Commit 8e2dab7315739a0f5194600ee524f6a2ea616af6 in lucene-solr's branch refs/heads/master from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8e2dab7 ] SOLR-11090: Add Replica.getProperty accessor. > add Replica.getProperty accessor > > > Key: SOLR-11090 > URL: https://issues.apache.org/jira/browse/SOLR-11090 > Project: Solr > Issue Type: Task >Reporter: Christine Poerschke >Assignee: Christine Poerschke >Priority: Minor > Attachments: SOLR-11090.patch, SOLR-11090.patch > > > {code} > ?action=ADDREPLICAPROP&...&property=propertyName&property.value=value > {code} > and > {code} > ?action=ADDREPLICAPROP&...&property=property.propertyName&property.value=value > {code} > are equivalent forms for use of the > [ADDREPLICAPROP|https://lucene.apache.org/solr/guide/6_6/collections-api.html] > collection API action. > At present within the code only the generic getStr i.e. > {code} > replica.getStr("property.propertyName") > {code} > is available to obtain a replica property. > This ticket proposes a {{replica.getProperty(String)}} accessor which > supports both equivalent forms i.e. > {code} > replica.getProperty("propertyName") > {code} > and > {code} > replica.getProperty("property.propertyName") > {code} > to be used. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11090) add Replica.getProperty accessor
[ https://issues.apache.org/jira/browse/SOLR-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119691#comment-16119691 ] ASF subversion and git services commented on SOLR-11090: Commit 18616c66d2e48c803cac75332d00f382e30530da in lucene-solr's branch refs/heads/branch_7x from [~cpoerschke] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=18616c6 ] SOLR-11090: Add Replica.getProperty accessor.
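The accessor proposed in SOLR-11090 can be sketched as follows (a minimal map-backed stand-in for Solr's Replica, assuming properties are stored under a {{property.}} prefix; this is not the committed implementation): both the bare name and the prefixed form resolve to the same stored key.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the two equivalent lookup forms from the ticket;
// a stand-in for Solr's map-backed Replica, not the committed code.
public class ReplicaSketch {
    private final Map<String, Object> propMap = new HashMap<>();

    public ReplicaSketch() {
        // Assumption: replica properties live under a "property." prefix.
        propMap.put("property.preferredLeader", "true");
    }

    // Analogous to replica.getStr(key): raw map access by exact key.
    public String getStr(String key) {
        Object v = propMap.get(key);
        return v == null ? null : v.toString();
    }

    // Accepts both "propertyName" and "property.propertyName",
    // normalizing to the prefixed storage key.
    public String getProperty(String name) {
        String key = name.startsWith("property.") ? name : "property." + name;
        return getStr(key);
    }

    public static void main(String[] args) {
        ReplicaSketch r = new ReplicaSketch();
        System.out.println(r.getProperty("preferredLeader"));          // true
        System.out.println(r.getProperty("property.preferredLeader")); // true
    }
}
```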
[jira] [Commented] (SOLR-10126) PeerSyncReplicationTest is a flakey test.
[ https://issues.apache.org/jira/browse/SOLR-10126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119719#comment-16119719 ] Shalin Shekhar Mangar commented on SOLR-10126: -- Thanks for explaining. I see that you have created a follow-up issue to fix the root cause. I have linked SOLR-11216 to this issue. > PeerSyncReplicationTest is a flakey test. > - > > Key: SOLR-10126 > URL: https://issues.apache.org/jira/browse/SOLR-10126 > Project: Solr > Issue Type: Test > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Mark Miller > Attachments: faillogs.tar.gz, SOLR-10126.patch > > > Could be related to SOLR-9555, but I will see what else pops up under > beasting. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-7922) Remove packed FST support?
Dawid Weiss created LUCENE-7922: --- Summary: Remove packed FST support? Key: LUCENE-7922 URL: https://issues.apache.org/jira/browse/LUCENE-7922 Project: Lucene - Core Issue Type: Task Reporter: Dawid Weiss Assignee: Dawid Weiss Fix For: 7.0 I've been looking at the FST code we have today. Complex to read, even more complex to modify. I think it could benefit if we cleaned it up a bit (there are a few issues out there already that mention this). The first baby step would be to remove the "packed" representation of FSTs -- I searched the codebase and I don't see a single place where {{pack}} would actually be {{true}}. The overhead associated with node packing seems to be not worth it in practice (since most FSTs are already fairly small). It'd be a breaking API change, but it's probably something worth undertaking for 7.0, unless I'm missing some use cases. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (LUCENE-7922) Remove packed FST support?
[ https://issues.apache.org/jira/browse/LUCENE-7922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss resolved LUCENE-7922. - Resolution: Duplicate Fix Version/s: (was: 7.0) Argh. Sorry, duplicate and already done. I was looking at 6.x branch.
[jira] [Commented] (LUCENE-7922) Remove packed FST support?
[ https://issues.apache.org/jira/browse/LUCENE-7922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119787#comment-16119787 ] Michael McCandless commented on LUCENE-7922: But please keep looking for simplifications!
[jira] [Updated] (LUCENE-7922) Remove packed FST support?
[ https://issues.apache.org/jira/browse/LUCENE-7922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-7922: Attachment: node.patch Didn't run tests, but I don't think it'll cause any harm to remove it.
[jira] [Commented] (LUCENE-7922) Remove packed FST support?
[ https://issues.apache.org/jira/browse/LUCENE-7922?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119794#comment-16119794 ] Dawid Weiss commented on LUCENE-7922: - I'd love to. :) I don't have much time these days, unfortunately. But wait. I do have a contribution: we can remove the 'node' field which isn't used anywhere. :)
[jira] [Closed] (LUCENE-7922) Remove packed FST support?
[ https://issues.apache.org/jira/browse/LUCENE-7922?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss closed LUCENE-7922. ---
[jira] [Created] (LUCENE-7923) Remove FST.Arc.node field (not used anywhere)
Dawid Weiss created LUCENE-7923: --- Summary: Remove FST.Arc.node field (not used anywhere) Key: LUCENE-7923 URL: https://issues.apache.org/jira/browse/LUCENE-7923 Project: Lucene - Core Issue Type: Task Reporter: Dawid Weiss Assignee: Dawid Weiss Priority: Trivial Fix For: 7.0 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-7923) Remove FST.Arc.node field (not used anywhere)
[ https://issues.apache.org/jira/browse/LUCENE-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss updated LUCENE-7923: Attachment: LUCENE-7923.patch
[jira] [Resolved] (LUCENE-7923) Remove FST.Arc.node field (not used anywhere)
[ https://issues.apache.org/jira/browse/LUCENE-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dawid Weiss resolved LUCENE-7923. - Resolution: Fixed
[jira] [Commented] (LUCENE-7923) Remove FST.Arc.node field (not used anywhere)
[ https://issues.apache.org/jira/browse/LUCENE-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119828#comment-16119828 ] ASF subversion and git services commented on LUCENE-7923: - Commit bd94c62a88b93db84a8378c9a80ab0b2886e41e5 in lucene-solr's branch refs/heads/branch_7_0 from [~dawid.weiss] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bd94c62 ] LUCENE-7923: Removed FST.Arc.node field (unused).
[jira] [Commented] (LUCENE-7923) Remove FST.Arc.node field (not used anywhere)
[ https://issues.apache.org/jira/browse/LUCENE-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119829#comment-16119829 ] ASF subversion and git services commented on LUCENE-7923: - Commit c5a09c446f5849bc8337d2b7f0a117fece7acd82 in lucene-solr's branch refs/heads/branch_7x from [~dawid.weiss] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c5a09c4 ] LUCENE-7923: Removed FST.Arc.node field (unused).
[jira] [Commented] (LUCENE-7923) Remove FST.Arc.node field (not used anywhere)
[ https://issues.apache.org/jira/browse/LUCENE-7923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119831#comment-16119831 ] ASF subversion and git services commented on LUCENE-7923: - Commit 5a36775d6517cbb36429981ccf4eb923dc1c7b33 in lucene-solr's branch refs/heads/master from [~dawid.weiss] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5a36775 ] LUCENE-7923: Removed FST.Arc.node field (unused).
[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 100 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/100/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC 4 tests failed. FAILED: org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteCollection Error Message: Stack Trace: java.lang.AssertionError at __randomizedtesting.SeedInfo.seed([FBAD443757FF6F3C:FC78457C706B0A31]:0) at org.junit.Assert.fail(Assert.java:92) at org.junit.Assert.assertTrue(Assert.java:43) at org.junit.Assert.assertTrue(Assert.java:54) at org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateAndDeleteCollection(CollectionsAPISolrJTest.java:123) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth Error Message: KeeperErrorCode = AuthFailed for /solr Stack Trace: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /solr a
[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.
[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119899#comment-16119899 ] Cassandra Targett commented on SOLR-10032: -- [~markrmil...@gmail.com] - This is really great, thank you. I have one question, out of curiosity: Why do a few tests show up as failing more than 100% of the time? > Create report to assess Solr test quality at a commit point. > > > Key: SOLR-10032 > URL: https://issues.apache.org/jira/browse/SOLR-10032 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 7.0, master (8.0) > > Attachments: Lucene-Solr Master Test Beast Results > 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 > iterations, 12 at a time .pdf, Lucene-Solr Master Test Beasults > 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30 > iterations, 10 at a time.pdf, Lucene-Solr Master Test Beasults > 02-08-2017-6696eafaae18948c2891ce758c7a2ec09873dab8 Level Medium+- Running 30 > iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults > 02-14-2017- Level Medium+-a1f114f70f3800292c25be08213edf39b3e37f6a Running 30 > iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults > 02%2F17%2F2017-19c8ec2bf1882bed1bb34d0b55198d03f2018838 Level Hard Running > 100 iterations, 12 at a time, 8 cores.pdf > > > We have many Jenkins instances blasting tests, some official, some policeman, > I and others have or had their own, and the email trail proves the power of > the Jenkins cluster to find test fails. > However, I still have a very hard time with some basic questions: > what tests are flakey right now? which test fails actually affect devs most? > did I break it? was that test already flakey? is that test still flakey? what > are our worst tests right now? is that test getting better or worse? 
> We really need a way to see exactly what tests are the problem, not because > of OS or environmental issues, but more basic test quality issues. Which > tests are flakey and how flakey are they at any point in time. > Reports: > https://drive.google.com/drive/folders/0ByYyjsrbz7-qa2dOaU1UZDdRVzg?usp=sharing > 01/24/2017 - > https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing > 02/01/2017 - > https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing > 02/08/2017 - > https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing > 02/14/2017 - > https://docs.google.com/spreadsheets/d/1eZ9_ds_0XyqsKKp8xkmESrcMZRP85jTxSKkNwgtcUn0/edit?usp=sharing > 02/17/2017 - > https://docs.google.com/spreadsheets/d/1LEPvXbsoHtKfIcZCJZ3_P6OHp7S5g2HP2OJgU6B2sAg/edit?usp=sharing -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-7.0 - Build # 105 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.0/105/ 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth Error Message: KeeperErrorCode = AuthFailed for /solr Stack Trace: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /solr at __randomizedtesting.SeedInfo.seed([206B883F6041D1BE]:0) at org.apache.zookeeper.KeeperException.create(KeeperException.java:123) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1102) at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:306) at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:303) at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60) at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:303) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:512) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:467) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:454) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:441) at org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:233) at org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190) at org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth.setupClass(TestZkAclsWithHadoopAuth.java:69) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth Error Message: 5 threads leaked from SUITE scope at org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth: 1) Thread[id=23348, 
name=NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0, state=RUNNABLE, group=TGRP-TestZkAclsWithHadoopAuth] at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:173) at java.lang.Thread.run(Thread.java:748)2) Thread[id=23349, name=SessionTracker, state=TIMED_WAITING, grou
[jira] [Commented] (SOLR-11061) Add a spins metric for all directory paths
[ https://issues.apache.org/jira/browse/SOLR-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119951#comment-16119951 ] ASF subversion and git services commented on SOLR-11061: Commit d4b4782943f79787d0931b24b839e9cc99e81c20 in lucene-solr's branch refs/heads/master from [~ab] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d4b4782 ] SOLR-11061: Add a spins metric for data directory paths. > Add a spins metric for all directory paths > -- > > Key: SOLR-11061 > URL: https://issues.apache.org/jira/browse/SOLR-11061 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Reporter: Shalin Shekhar Mangar >Assignee: Andrzej Bialecki > Fix For: 7.1 > > Attachments: SOLR-11061.patch > > > See org.apache.lucene.util.IOUtils.spins. It currently only works for linux > and is used by ConcurrentMergeScheduler to set defaults for maxThreadCount > and maxMergeCount. > We should expose this as a metric for solr.data.home and each core's data > dir. One thing to note is that the CMS overrides the value detected by the > spins method using {{lucene.cms.override_spins}} system property. This > property is supposed to be for tests but if it is set then the metrics API > should also take that into account. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
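The override behavior described in the issue (the `lucene.cms.override_spins` system property taking precedence over the detected value) can be sketched as below. This is a hypothetical illustration, not Solr's actual implementation; the class and method names are invented, and `detected` stands in for whatever `IOUtils.spins` would report for a path.

```java
// Hypothetical sketch: how a spins gauge might honor the
// lucene.cms.override_spins system property, as the issue suggests the
// metrics API should. Not the actual Solr code.
public class SpinsMetricSketch {

    // 'detected' stands in for the value IOUtils.spins would return
    // for a given data directory path (Linux-only detection).
    public static boolean resolveSpins(boolean detected) {
        String override = System.getProperty("lucene.cms.override_spins");
        // If the test-only override is set, it wins; otherwise use detection.
        return override != null ? Boolean.parseBoolean(override) : detected;
    }

    public static void main(String[] args) {
        System.out.println(resolveSpins(true)); // no override set: true
        System.setProperty("lucene.cms.override_spins", "false");
        System.out.println(resolveSpins(true)); // override wins: false
        System.clearProperty("lucene.cms.override_spins");
    }
}
```

Keeping the override check in one helper like this is what lets ConcurrentMergeScheduler and the metrics API stay consistent with each other, which is the point of the issue's last paragraph.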
[jira] [Resolved] (SOLR-11061) Add a spins metric for all directory paths
[ https://issues.apache.org/jira/browse/SOLR-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki resolved SOLR-11061. -- Resolution: Fixed > Add a spins metric for all directory paths > -- > > Key: SOLR-11061 > URL: https://issues.apache.org/jira/browse/SOLR-11061 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Reporter: Shalin Shekhar Mangar >Assignee: Andrzej Bialecki > Fix For: master (8.0), 7.1 > > Attachments: SOLR-11061.patch > > > See org.apache.lucene.util.IOUtils.spins. It currently only works for linux > and is used by ConcurrentMergeScheduler to set defaults for maxThreadCount > and maxMergeCount. > We should expose this as a metric for solr.data.home and each core's data > dir. One thing to note is that the CMS overrides the value detected by the > spins method using {{lucene.cms.override_spins}} system property. This > property is supposed to be for tests but if it is set then the metrics API > should also take that into account. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11061) Add a spins metric for all directory paths
[ https://issues.apache.org/jira/browse/SOLR-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrzej Bialecki updated SOLR-11061: - Fix Version/s: master (8.0) > Add a spins metric for all directory paths > -- > > Key: SOLR-11061 > URL: https://issues.apache.org/jira/browse/SOLR-11061 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Reporter: Shalin Shekhar Mangar >Assignee: Andrzej Bialecki > Fix For: master (8.0), 7.1 > > Attachments: SOLR-11061.patch > > > See org.apache.lucene.util.IOUtils.spins. It currently only works for linux > and is used by ConcurrentMergeScheduler to set defaults for maxThreadCount > and maxMergeCount. > We should expose this as a metric for solr.data.home and each core's data > dir. One thing to note is that the CMS overrides the value detected by the > spins method using {{lucene.cms.override_spins}} system property. This > property is supposed to be for tests but if it is set then the metrics API > should also take that into account. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11061) Add a spins metric for all directory paths
[ https://issues.apache.org/jira/browse/SOLR-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119952#comment-16119952 ] ASF subversion and git services commented on SOLR-11061: Commit 6a4e3c3564fe16d4be345686aac7dcd42c413ac3 in lucene-solr's branch refs/heads/branch_7x from [~ab] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6a4e3c3 ] SOLR-11061: Add a spins metric for data directory paths. > Add a spins metric for all directory paths > -- > > Key: SOLR-11061 > URL: https://issues.apache.org/jira/browse/SOLR-11061 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Reporter: Shalin Shekhar Mangar >Assignee: Andrzej Bialecki > Fix For: master (8.0), 7.1 > > Attachments: SOLR-11061.patch > > > See org.apache.lucene.util.IOUtils.spins. It currently only works for linux > and is used by ConcurrentMergeScheduler to set defaults for maxThreadCount > and maxMergeCount. > We should expose this as a metric for solr.data.home and each core's data > dir. One thing to note is that the CMS overrides the value detected by the > spins method using {{lucene.cms.override_spins}} system property. This > property is supposed to be for tests but if it is set then the metrics API > should also take that into account. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.
[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119959#comment-16119959 ] Mark Miller commented on SOLR-10032: Those are special codes to order and identify tests with annotations. So if a test is ignored, it's not run at all and gets a 122 or whatever. If it's @BadApple and fails 100%, it gets a 112, if it's @AwaitFix and fails 100% it gets a 113. So those 100% fails are basically expected. If it is 100%, it's a test that won't pass and doesn't have one of these annotations, so really bad. > Create report to assess Solr test quality at a commit point. > > > Key: SOLR-10032 > URL: https://issues.apache.org/jira/browse/SOLR-10032 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 7.0, master (8.0) > > Attachments: Lucene-Solr Master Test Beast Results > 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 > iterations, 12 at a time .pdf, Lucene-Solr Master Test Beasults > 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30 > iterations, 10 at a time.pdf, Lucene-Solr Master Test Beasults > 02-08-2017-6696eafaae18948c2891ce758c7a2ec09873dab8 Level Medium+- Running 30 > iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults > 02-14-2017- Level Medium+-a1f114f70f3800292c25be08213edf39b3e37f6a Running 30 > iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults > 02%2F17%2F2017-19c8ec2bf1882bed1bb34d0b55198d03f2018838 Level Hard Running > 100 iterations, 12 at a time, 8 cores.pdf > > > We have many Jenkins instances blasting tests, some official, some policeman, > I and others have or had their own, and the email trail proves the power of > the Jenkins cluster to find test fails. 
> However, I still have a very hard time with some basic questions: > what tests are flakey right now? which test fails actually affect devs most? > did I break it? was that test already flakey? is that test still flakey? what > are our worst tests right now? is that test getting better or worse? > We really need a way to see exactly what tests are the problem, not because > of OS or environmental issues, but more basic test quality issues. Which > tests are flakey and how flakey are they at any point in time. > Reports: > https://drive.google.com/drive/folders/0ByYyjsrbz7-qa2dOaU1UZDdRVzg?usp=sharing > 01/24/2017 - > https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing > 02/01/2017 - > https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing > 02/08/2017 - > https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing > 02/14/2017 - > https://docs.google.com/spreadsheets/d/1eZ9_ds_0XyqsKKp8xkmESrcMZRP85jTxSKkNwgtcUn0/edit?usp=sharing > 02/17/2017 - > https://docs.google.com/spreadsheets/d/1LEPvXbsoHtKfIcZCJZ3_P6OHp7S5g2HP2OJgU6B2sAg/edit?usp=sharing -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
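Mark Miller's status-code scheme above can be sketched as a simple mapping. This is a hypothetical illustration only: the values 122, 112, and 113 are taken from his comment (he himself hedges the 122 with "or whatever"), and the class and method names are invented, not from the report tool's source.

```java
// Hypothetical sketch of the report's annotation status codes as described
// in the comment above; values and semantics come from Mark Miller's
// comment, not from inspecting the actual tool.
public class TestStatusCodes {

    public static int statusCode(boolean ignored, boolean badApple, boolean awaitsFix) {
        if (ignored) return 122;   // ignored test: not run at all
        if (badApple) return 112;  // @BadApple and fails 100% of iterations
        if (awaitsFix) return 113; // @AwaitsFix and fails 100% of iterations
        return 0;                  // unannotated: report the raw failure rate instead
    }
}
```

Under this scheme a 100% failure with no special code is the alarming case: a test that never passes and carries none of the excusing annotations.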
[jira] [Commented] (SOLR-11199) Support OR queries in the Payload Score Parser
[ https://issues.apache.org/jira/browse/SOLR-11199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16119966#comment-16119966 ] Erik Hatcher commented on SOLR-11199: - Nice work Varun! Both `sum` and `phrase`/`or` - handy improvements! > Support OR queries in the Payload Score Parser > --- > > Key: SOLR-11199 > URL: https://issues.apache.org/jira/browse/SOLR-11199 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Varun Thacker > Fix For: master (8.0), 7.1 > > Attachments: SOLR-11199.patch, SOLR-11199.patch > > > PayloadScoreQParserPlugin always creates a SpanNearQuery. > In my use-case I want to be able to do an OR query and then use a sum > function to sum up all the scores. > So if the PayloadScoreQParserPlugin supported an operator param which could > be used to pick between phrase searches ( the default currently ) OR and ANDs -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
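The `operator` param discussed above would be used roughly as follows in a local-params query. This is a hedged sketch based only on the issue text (an `operator` param selecting phrase vs. OR semantics, with a `sum` aggregation function); the field name and terms are illustrative, not from the patch.

```
q={!payload_score f=my_payload_field func=sum operator=or}A B C
```

With `operator=or` each matching term contributes its payload, and `func=sum` adds the scores up, instead of the default behavior of treating `A B C` as a single SpanNearQuery phrase.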
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_141) - Build # 230 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/230/ Java: 64bit/jdk1.8.0_141 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth Error Message: KeeperErrorCode = AuthFailed for /solr Stack Trace: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /solr at __randomizedtesting.SeedInfo.seed([F90A02903FB99AA7]:0) at org.apache.zookeeper.KeeperException.create(KeeperException.java:123) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1102) at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:306) at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:303) at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60) at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:303) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:512) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:467) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:454) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:441) at org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:233) at org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190) at org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth.setupClass(TestZkAclsWithHadoopAuth.java:69) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth Error Message: 5 threads leaked from 
SUITE scope at org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth: 1) Thread[id=13789, name=SUITE-TestZkAclsWithHadoopAuth-seed#[F90A02903FB99AA7]-worker-EventThread, state=WAITING, group=TGRP-TestZkAclsWithHadoopAuth] at sun.misc.Unsafe.park(Native Method) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039) at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:501) 2) Thread[id=13784, name=NIOServerCxn
[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.
[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120050#comment-16120050 ] Erick Erickson commented on SOLR-10032: --- This is excellent, thanks for all your hard work here! > Create report to assess Solr test quality at a commit point. > > > Key: SOLR-10032 > URL: https://issues.apache.org/jira/browse/SOLR-10032 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 7.0, master (8.0) > > Attachments: Lucene-Solr Master Test Beast Results > 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 > iterations, 12 at a time .pdf, Lucene-Solr Master Test Beasults > 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30 > iterations, 10 at a time.pdf, Lucene-Solr Master Test Beasults > 02-08-2017-6696eafaae18948c2891ce758c7a2ec09873dab8 Level Medium+- Running 30 > iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults > 02-14-2017- Level Medium+-a1f114f70f3800292c25be08213edf39b3e37f6a Running 30 > iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults > 02%2F17%2F2017-19c8ec2bf1882bed1bb34d0b55198d03f2018838 Level Hard Running > 100 iterations, 12 at a time, 8 cores.pdf > > > We have many Jenkins instances blasting tests, some official, some policeman, > I and others have or had their own, and the email trail proves the power of > the Jenkins cluster to find test fails. > However, I still have a very hard time with some basic questions: > what tests are flakey right now? which test fails actually affect devs most? > did I break it? was that test already flakey? is that test still flakey? what > are our worst tests right now? is that test getting better or worse? 
> We really need a way to see exactly what tests are the problem, not because > of OS or environmental issues, but more basic test quality issues. Which > tests are flakey and how flakey are they at any point in time. > Reports: > https://drive.google.com/drive/folders/0ByYyjsrbz7-qa2dOaU1UZDdRVzg?usp=sharing > 01/24/2017 - > https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing > 02/01/2017 - > https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing > 02/08/2017 - > https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing > 02/14/2017 - > https://docs.google.com/spreadsheets/d/1eZ9_ds_0XyqsKKp8xkmESrcMZRP85jTxSKkNwgtcUn0/edit?usp=sharing > 02/17/2017 - > https://docs.google.com/spreadsheets/d/1LEPvXbsoHtKfIcZCJZ3_P6OHp7S5g2HP2OJgU6B2sAg/edit?usp=sharing -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-7.0-Linux (32bit/jdk1.8.0_144) - Build # 173 - Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.0-Linux/173/ Java: 32bit/jdk1.8.0_144 -client -XX:+UseSerialGC 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth Error Message: KeeperErrorCode = AuthFailed for /solr Stack Trace: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /solr at __randomizedtesting.SeedInfo.seed([9BDCE853B68E8258]:0) at org.apache.zookeeper.KeeperException.create(KeeperException.java:123) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1102) at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:306) at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:303) at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60) at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:303) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:512) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:467) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:454) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:441) at org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:233) at org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190) at org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth.setupClass(TestZkAclsWithHadoopAuth.java:69) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth Error Message: 5 threads leaked from 
SUITE scope at org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth: 1) Thread[id=9435, name=Thread-2455, state=WAITING, group=TGRP-TestZkAclsWithHadoopAuth] at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1252) at java.lang.Thread.join(Thread.java:1326) at org.apache.zookeeper.server.NIOServerCnxnFactory.join(NIOServerCnxnFactory.java:297) at org.apache.solr.cloud.ZkTestServer$ZKServerMain.runFromConfig(ZkTestServer.java:309) at org.apache.solr.cloud.ZkTestServer$2.run(ZkTestServer.java:490) 2) Thread[id=9439, name=ProcessThread(sid:0 cport:43009):, state=WAITING, group=TGRP-TestZkAclsWithHadoopAuth] at
[JENKINS] Lucene-Solr-Tests-7.x - Build # 129 - Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/129/ 3 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth Error Message: KeeperErrorCode = AuthFailed for /solr Stack Trace: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /solr at __randomizedtesting.SeedInfo.seed([2F199689F10BCB56]:0) at org.apache.zookeeper.KeeperException.create(KeeperException.java:123) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1102) at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:306) at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:303) at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60) at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:303) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:512) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:467) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:454) at org.apache.solr.common.cloud.SolrZkClient.makePath(SolrZkClient.java:441) at org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:233) at org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190) at org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth.setupClass(TestZkAclsWithHadoopAuth.java:69) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847) at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth Error Message: 5 threads leaked from SUITE scope at org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth: 1) Thread[id=6639, 
name=Thread-1242, state=WAITING, group=TGRP-TestZkAclsWithHadoopAuth] at java.lang.Object.wait(Native Method) at java.lang.Thread.join(Thread.java:1252) at java.lang.Thread.join(Thread.java:1326) at org.apache.zookeeper.server.NIOServerCnxnFactory.join(NIOServerCnxnFactory.java:297) at org.apache.solr.cloud.ZkTestServer$ZKServerMain.runFromConfig(ZkTestServer.java:309) at org.apache.solr.cloud.ZkTestServer$2.run(ZkTestServer.java:490) 2) Thread[id=6643, name=ProcessThread(sid:0 cport:42672):, state=WAITING, group=TGRP-TestZkAclsWithHadoopAuth] at sun.misc.Unsafe.park(Native Method) at ja
[jira] [Commented] (SOLR-11061) Add a spins metric for all directory paths
[ https://issues.apache.org/jira/browse/SOLR-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120057#comment-16120057 ] ASF subversion and git services commented on SOLR-11061: Commit f27e4b94441cabf00c72ef57c6d5f659f82faad2 in lucene-solr's branch refs/heads/branch_7x from [~ab] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f27e4b9 ] SOLR-11061: Fix incorrect metric path. > Add a spins metric for all directory paths > -- > > Key: SOLR-11061 > URL: https://issues.apache.org/jira/browse/SOLR-11061 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Reporter: Shalin Shekhar Mangar >Assignee: Andrzej Bialecki > Fix For: master (8.0), 7.1 > > Attachments: SOLR-11061.patch > > > See org.apache.lucene.util.IOUtils.spins. It currently only works for linux > and is used by ConcurrentMergeScheduler to set defaults for maxThreadCount > and maxMergeCount. > We should expose this as a metric for solr.data.home and each core's data > dir. One thing to note is that the CMS overrides the value detected by the > spins method using {{lucene.cms.override_spins}} system property. This > property is supposed to be for tests but if it is set then the metrics API > should also take that into account. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11061) Add a spins metric for all directory paths
[ https://issues.apache.org/jira/browse/SOLR-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120058#comment-16120058 ] ASF subversion and git services commented on SOLR-11061: Commit 915b36564fcb728f467949775a4c18b274a933a7 in lucene-solr's branch refs/heads/master from [~ab] [ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=915b365 ] SOLR-11061: Fix incorrect metric path. > Add a spins metric for all directory paths > -- > > Key: SOLR-11061 > URL: https://issues.apache.org/jira/browse/SOLR-11061 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: metrics >Reporter: Shalin Shekhar Mangar >Assignee: Andrzej Bialecki > Fix For: master (8.0), 7.1 > > Attachments: SOLR-11061.patch > > > See org.apache.lucene.util.IOUtils.spins. It currently only works for linux > and is used by ConcurrentMergeScheduler to set defaults for maxThreadCount > and maxMergeCount. > We should expose this as a metric for solr.data.home and each core's data > dir. One thing to note is that the CMS overrides the value detected by the > spins method using {{lucene.cms.override_spins}} system property. This > property is supposed to be for tests but if it is set then the metrics API > should also take that into account. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10032) Create report to assess Solr test quality at a commit point.
[ https://issues.apache.org/jira/browse/SOLR-10032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120074#comment-16120074 ] Mark Miller commented on SOLR-10032: Thanks! It actually ended up being a ton of work. It wasn't so bad just to stitch something together for Solr with me to fill in gaps, but to make it generic for any project using docker, to allow it to have tests (docker within docker!), to allow you to point it at 10 freshly provisioned machines with no setup on your part, to make it easy to debug and add new project support easily, etc, was actually many, many, many hours of effort. Still some polish and minor things to do, but very happy it's ready to start pushing out reports regularly now. > Create report to assess Solr test quality at a commit point. > > > Key: SOLR-10032 > URL: https://issues.apache.org/jira/browse/SOLR-10032 > Project: Solr > Issue Type: Task > Security Level: Public(Default Security Level. Issues are Public) > Components: Tests >Reporter: Mark Miller >Assignee: Mark Miller > Fix For: 7.0, master (8.0) > > Attachments: Lucene-Solr Master Test Beast Results > 01-24-2017-9899cbd031dc3fc37a384b1f9e2b379e90a9a3a6 Level Medium- Running 30 > iterations, 12 at a time .pdf, Lucene-Solr Master Test Beasults > 02-01-2017-bbc455de195c83d9f807980b510fa46018f33b1b Level Medium- Running 30 > iterations, 10 at a time.pdf, Lucene-Solr Master Test Beasults > 02-08-2017-6696eafaae18948c2891ce758c7a2ec09873dab8 Level Medium+- Running 30 > iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults > 02-14-2017- Level Medium+-a1f114f70f3800292c25be08213edf39b3e37f6a Running 30 > iterations, 10 at a time, 8 cores.pdf, Lucene-Solr Master Test Beasults > 02%2F17%2F2017-19c8ec2bf1882bed1bb34d0b55198d03f2018838 Level Hard Running > 100 iterations, 12 at a time, 8 cores.pdf > > > We have many Jenkins instances blasting tests, some official, some policeman, > I and others have or had their own, and the 
email trail proves the power of > the Jenkins cluster to find test fails. > However, I still have a very hard time with some basic questions: > what tests are flakey right now? which test fails actually affect devs most? > did I break it? was that test already flakey? is that test still flakey? what > are our worst tests right now? is that test getting better or worse? > We really need a way to see exactly what tests are the problem, not because > of OS or environmental issues, but more basic test quality issues. Which > tests are flakey and how flakey are they at any point in time. > Reports: > https://drive.google.com/drive/folders/0ByYyjsrbz7-qa2dOaU1UZDdRVzg?usp=sharing > 01/24/2017 - > https://docs.google.com/spreadsheets/d/1JySta2j2s7A_p16wA1UO-l6c4GsUHBIb4FONS2EzW9k/edit?usp=sharing > 02/01/2017 - > https://docs.google.com/spreadsheets/d/1FndoyHmihaOVL2o_Zns5alpNdAJlNsEwQVoJ4XDWj3c/edit?usp=sharing > 02/08/2017 - > https://docs.google.com/spreadsheets/d/1N6RxH4Edd7ldRIaVfin0si-uSLGyowQi8-7mcux27S0/edit?usp=sharing > 02/14/2017 - > https://docs.google.com/spreadsheets/d/1eZ9_ds_0XyqsKKp8xkmESrcMZRP85jTxSKkNwgtcUn0/edit?usp=sharing > 02/17/2017 - > https://docs.google.com/spreadsheets/d/1LEPvXbsoHtKfIcZCJZ3_P6OHp7S5g2HP2OJgU6B2sAg/edit?usp=sharing -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-11217) Mathematical notation not supported in Solr Ref Guide
Houston Putman created SOLR-11217: - Summary: Mathematical notation not supported in Solr Ref Guide Key: SOLR-11217 URL: https://issues.apache.org/jira/browse/SOLR-11217 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: documentation Reporter: Houston Putman Priority: Minor The template used to build the Solr Ref Guide from the asciidoctor pages removes the needed JavaScript for mathematical notation. When building the webpage, asciidoctor puts a tag like the one below at the bottom of the html {code:html} {code} and some other tags as well. However, these are not included in the sections that are inserted into the template, so they are left out and the mathematical notation is never rendered by MathJax in the browser. This can be tested by adding any stem notation to an asciidoctor solr-ref-guide page, such as the following text: {code} asciimath:[sqrt(4) = 2]. {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
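For reference, the tag stripped from the code block above is a MathJax loader. When stem support is enabled, asciidoctor emits a script tag of roughly this shape at the bottom of the page — the exact CDN URL and config name vary by asciidoctor version, so treat this fragment as illustrative rather than the exact tag:

```
<script type="text/javascript"
  src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-MML-AM_HTMLorMML">
</script>
```

If a tag like this is carried over into the templated page, the {{asciimath:[...]}} test above should render in the browser.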
[jira] [Updated] (SOLR-11144) Analytics Component Documentation
[ https://issues.apache.org/jira/browse/SOLR-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Houston Putman updated SOLR-11144: -- Affects Version/s: (was: 7.0) > Analytics Component Documentation > - > > Key: SOLR-11144 > URL: https://issues.apache.org/jira/browse/SOLR-11144 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Affects Versions: 7.1 >Reporter: Houston Putman >Priority: Critical > > Adding a Solr Reference Guide page for the Analytics Component. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-Tests-master - Build # 2069 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2069/ 4 tests failed. FAILED: org.apache.solr.cloud.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete Error Message: Error from server at http://127.0.0.1:41595/solr/testcollection_shard1_replica_n2: Expected mime type application/octet-stream but got text/html.Error 404HTTP ERROR: 404 Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason: Can not find: /solr/testcollection_shard1_replica_n2/update http://eclipse.org/jetty";>Powered by Jetty:// 9.3.14.v20161028 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at http://127.0.0.1:41595/solr/testcollection_shard1_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason: Can not find: /solr/testcollection_shard1_replica_n2/update http://eclipse.org/jetty";>Powered by Jetty:// 9.3.14.v20161028 at __randomizedtesting.SeedInfo.seed([2656ADCC0CFAF6DA:85AC03698B121C7F]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.cloud.TestCollectionsAPIViaSolrCloudCluster.testCollectionCreateSearchDelete(TestCollectionsAPIViaSolrCloudCluster.java:167) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting
[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 100 - Still unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/100/ Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC 5 tests failed. FAILED: org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation.testForwarding Error Message: Error from server at http://127.0.0.1:53980/solr: KeeperErrorCode = Session expired for /overseer/collection-queue-work/qnr- Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:53980/solr: KeeperErrorCode = Session expired for /overseer/collection-queue-work/qnr- at __randomizedtesting.SeedInfo.seed([CD0CAEA214D512FA:2C8AC75C09E3F413]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195) at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation.create1ShardCollection(TestSolrCloudWithSecureImpersonation.java:185) at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation.testForwarding(TestSolrCloudWithSecureImpersonation.java:342) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch
[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120198#comment-16120198 ] Nawab Zada Asad iqbal commented on SOLR-11200: -- [~sarkaramr...@gmail.com] I just reviewed your patch, it looks good. For the name, what about `enableIOThrottle` ? The word `Auto` does not seem necessary. I will test it now and report if I see any issues. > provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle > --- > > Key: SOLR-11200 > URL: https://issues.apache.org/jira/browse/SOLR-11200 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Nawab Zada Asad iqbal >Priority: Minor > Attachments: SOLR-11200.patch, SOLR-11200.patch > > > This config can be useful while bulk indexing. Lucene introduced it > https://issues.apache.org/jira/browse/LUCENE-6119 . -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120220#comment-16120220 ] Nawab Zada Asad iqbal commented on SOLR-11200: -- [~sarkaramr...@gmail.com] which branch are you working from 6.6 or 7? I am getting an error while applying the patch. > provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle > --- > > Key: SOLR-11200 > URL: https://issues.apache.org/jira/browse/SOLR-11200 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Nawab Zada Asad iqbal >Priority: Minor > Attachments: SOLR-11200.patch, SOLR-11200.patch > > > This config can be useful while bulk indexing. Lucene introduced it > https://issues.apache.org/jira/browse/LUCENE-6119 . -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
RE: Ready for JDK 9 ?
Hi Rory, Thank you for the heads-up. I installed JDK 8 update 144 and Java 9 build 181 a minute ago. Once the first runs have succeeded, I'll report back. About the current state: - Apache Lucene 6.6 and the coming Apache Lucene 7.0 are fully compliant with Java 9 and work with "--illegal-access=deny", so the "kill switch" is not needed. - Apache Solr 6.6 and Apache Solr 7.0 work (Java wise), but the startup (shell) scripts don't detect the Java version correctly. I think a fix is in the works (for Windows and Linux). But if you ignore the startup scripts and do it yourself, it works. We applied some fixes for third-party libraries that don't work correctly (e.g. Hadoop in the version we use). Older Solr and Lucene versions may still have problems, as the Module system changed some internal APIs we need for unmapping files, but generally they should work, though not everything may run at best performance (e.g. it chooses the slow NIOFSDirectory instead of memory mapping). We currently do not support Lucene with automatic modules, so you *have* to use Lucene on the classpath. The reason is that the JAR files share the same packages. So you cannot make modules out of Lucene or Solr. We may support this in later versions, but that's not an important reason for us. You can still combine all of Lucene and Solr and make one huge "Uber Module" out of it (and that's what I personally recommend), but that's up to the user. Uwe - Uwe Schindler Achterdiek 19, D-28357 Bremen http://www.thetaphi.de eMail: u...@thetaphi.de > -Original Message- > From: Rory O'Donnell [mailto:rory.odonn...@oracle.com] > Sent: Tuesday, August 8, 2017 12:04 PM > To: Dawid Weiss ; Uwe Schindler > > Cc: rory.odonn...@oracle.com; Dalibor Topic ; > Balchandra Vaidya ; Muneer Kolarkunnu > ; dev@lucene.apache.org > Subject: Ready for JDK 9 ? > > > Hi Uwe & Dawid, > > Thank you very much for all your testing of JDK 9 during its > development! Such contributions have significantly helped shape and > improve JDK 9. 
> > Now that we have reached the JDK 9 Final Release Candidate phase [1] , I > would like to ask if your project can be considered to be 'ready for JDK > 9', or if there are any remaining show stopper issues which you've > encountered when testing with the JDK 9 release candidate. > > JDK 9 b181 is available at http://jdk.java.net/9/ > > If you have a public web page, mailing list post, or even a tweet > announcing you project's readiness for JDK 9, I'd love to add the URL to > the upcoming JDK 9 readiness page on the Quality Outreach wiki. > > > Looking forward to hearing from you, > Rory > > [1] http://openjdk.java.net/projects/jdk9/ > > -- > Rgds,Rory O'Donnell > Quality Engineering Manager > Oracle EMEA , Dublin, Ireland > > > - > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org > For additional commands, e-mail: dev-h...@lucene.apache.org - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
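The version-detection problem Uwe mentions stems from JDK 9's new version-string scheme: {{java.version}} is now "9" (or "9.0.1", "9-ea") rather than "1.8.0_144", which breaks scripts that assume the old "1.x" prefix. A minimal sketch of parsing that handles both schemes — an illustration, not the actual fix in the Solr start scripts:

```java
public class JavaVersionCheck {
    // Extract the major Java version from a java.version string.
    // Pre-JDK 9 strings look like "1.8.0_144"; JDK 9+ like "9", "9.0.1" or "9-ea".
    static int majorVersion(String version) {
        String[] parts = version.split("\\.");
        int first = Integer.parseInt(parts[0].split("-")[0]); // strip "-ea" etc.
        return first == 1 ? Integer.parseInt(parts[1]) : first;
    }

    public static void main(String[] args) {
        System.out.println(majorVersion(System.getProperty("java.version")));
    }
}
```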
[JENKINS] Lucene-Solr-Tests-7.0 - Build # 106 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.0/106/ 4 tests failed. FAILED: junit.framework.TestSuite.org.apache.solr.analytics.NoFacetCloudTest Error Message: org.apache.http.ParseException: Invalid content type: Stack Trace: org.apache.solr.client.solrj.SolrServerException: org.apache.http.ParseException: Invalid content type: at __randomizedtesting.SeedInfo.seed([4188D01F7FAE0A7D]:0) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:523) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1121) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195) at org.apache.solr.analytics.AbstractAnalyticsStatsCloudTest.setupCluster(AbstractAnalyticsStatsCloudTest.java:75) at org.apache.solr.analytics.NoFacetCloudTest.populate(NoFacetCloudTest.java:62) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at java.lang.Thread.run(Thread.java:748) Caused by: org.apache.http.ParseException: Invalid content type: at org.apache.http.entity.ContentType.parse(ContentType.java:273) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242) at org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483) ... 
32 more FAILED: junit.framework.TestSuite.org.apache.solr.security.hadoop.TestZkAclsWithHadoopAuth Error Message: KeeperErrorCode = AuthFailed for /solr Stack Trace: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /solr at __randomizedtesting.SeedInfo.seed([440A9D9DD209B6A4]:0) at org.apache.zookeeper.KeeperException.create(KeeperException.java:123) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1102) at org.apache.solr.common.cloud.SolrZkClient$4.execute(SolrZkClient.java:306) at org.apache.solr.comm
[jira] [Comment Edited] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle
[ https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120198#comment-16120198 ] Nawab Zada Asad iqbal edited comment on SOLR-11200 at 8/9/17 4:53 PM: -- [~sarkaramr...@gmail.com] I just reviewed your patch, it looks good. For the name, what about `enableAutoIOThrottle` ? I will test it now and report if I see any issues. PS: I edited it after realizing that the long config name is initiating from LUCENE code. My previous suggestion was `enableIOThrottle` was (Author: niqbal): [~sarkaramr...@gmail.com] I just reviewed your patch, it looks good. For the name, what about `enableIOThrottle` ? The word `Auto` does not seem necessary. I will test it now and report if I see any issues. > provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle > --- > > Key: SOLR-11200 > URL: https://issues.apache.org/jira/browse/SOLR-11200 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Nawab Zada Asad iqbal >Priority: Minor > Attachments: SOLR-11200.patch, SOLR-11200.patch > > > This config can be useful while bulk indexing. Lucene introduced it > https://issues.apache.org/jira/browse/LUCENE-6119 . -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
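As background for the patch discussion above: Lucene's ConcurrentMergeScheduler already exposes enableAutoIOThrottle()/disableAutoIOThrottle(), and the patch would surface that switch in solrconfig.xml. A hypothetical fragment of what such a configuration could look like — the option name is exactly what this thread is still debating, so treat it as a placeholder:

```
<indexConfig>
  <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
    <int name="maxMergeCount">9</int>
    <int name="maxThreadCount">4</int>
    <!-- placeholder name, still under discussion in this thread -->
    <bool name="enableAutoIOThrottle">false</bool>
  </mergeScheduler>
</indexConfig>
```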
[jira] [Created] (LUCENE-7924) dynamic infobox on javadocs/tutorials/ref-guide html pages when URL doesn't match "latest" version
Hoss Man created LUCENE-7924: Summary: dynamic infobox on javadocs/tutorials/ref-guide html pages when URL doesn't match "latest" version Key: LUCENE-7924 URL: https://issues.apache.org/jira/browse/LUCENE-7924 Project: Lucene - Core Issue Type: Wish Reporter: Hoss Man Spinning this idea out of some comments/concerns in SOLR-10595... It would be nice if all the various "version specific" pages we have (ie: javadocs, tutorials, solr ref-guide) could include some standard snippet of text drawing users attention to the fact that they are looking at docs for an "older" version of lucene/solr -- ideally with a link to the current version. ala... {panel} This page is part of the documentation refers to Lucene 5.4.0. The current version of [Lucene is 6.6.0|http://lucene.apache.org/core/6_6_0/core/]. {panel} The details of how this could work aren't clear cut -- particularly since for any arbitrary URL the "latest" version of those docs may not contain the exact same path/file (ie: deprecated/moved classes in future releases, etc...) but ideally it would be some very generic mod_include / javascript directive that could be included in all generated HTML, that would only "activate" when the page was loaded from lucene.apache.org and would only inject the "warning" into the page based on the version number in the URL compared to some server side configured version number (ex: the way we already have the "latest" version# hardcoded in our .htaccess file for redirects) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[GitHub] lucene-solr pull request #229: SOLR-11144: Initial version of the analytics ...
GitHub user HoustonPutman opened a pull request: https://github.com/apache/lucene-solr/pull/229 SOLR-11144: Initial version of the analytics component reference. You can merge this pull request into a Git repository by running: $ git pull https://github.com/HoustonPutman/lucene-solr analytics-solr_ref_guide Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/229.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #229 commit 54410ff9d13efcf174cff3ad0d8667cbe84e75a1 Author: Houston Putman Date: 2017-08-03T16:33:00Z SOLR-11144: Initial version of the analytics component reference. --- If your project is set up for it, you can reply to this email and have your reply appear on GitHub as well. If your project does not have this feature enabled and wishes so, or if the feature is enabled but not working, please contact infrastructure at infrastruct...@apache.org or file a JIRA ticket with INFRA. --- - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11144) Analytics Component Documentation
[ https://issues.apache.org/jira/browse/SOLR-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120305#comment-16120305 ] ASF GitHub Bot commented on SOLR-11144: --- GitHub user HoustonPutman opened a pull request: https://github.com/apache/lucene-solr/pull/229 SOLR-11144: Initial version of the analytics component reference. You can merge this pull request into a Git repository by running: $ git pull https://github.com/HoustonPutman/lucene-solr analytics-solr_ref_guide Alternatively you can review and apply these changes as the patch at: https://github.com/apache/lucene-solr/pull/229.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #229 commit 54410ff9d13efcf174cff3ad0d8667cbe84e75a1 Author: Houston Putman Date: 2017-08-03T16:33:00Z SOLR-11144: Initial version of the analytics component reference. > Analytics Component Documentation > - > > Key: SOLR-11144 > URL: https://issues.apache.org/jira/browse/SOLR-11144 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Affects Versions: 7.1 >Reporter: Houston Putman >Priority: Critical > > Adding a Solr Reference Guide page for the Analytics Component. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-11144) Analytics Component Documentation
[ https://issues.apache.org/jira/browse/SOLR-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Cassandra Targett reassigned SOLR-11144: Assignee: Cassandra Targett > Analytics Component Documentation > - > > Key: SOLR-11144 > URL: https://issues.apache.org/jira/browse/SOLR-11144 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Affects Versions: 7.1 >Reporter: Houston Putman >Assignee: Cassandra Targett >Priority: Critical > > Adding a Solr Reference Guide page for the Analytics Component. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-7924) dynamic infobox on javadocs/tutorials/ref-guide html pages when URL doesn't match "latest" version
[ https://issues.apache.org/jira/browse/LUCENE-7924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120323#comment-16120323 ] Hoss Man commented on LUCENE-7924: -- Rough (untested) sketch of how this might work...
* Generated HTML documents can be tweaked to start including something like {{<!--#include virtual="../../../latest-warning.html" -->}} in all pages -- where the relative path {{../../../}} is based on how deep the generated HTML doc is in its "set" of docs (ie: relative to the 'root' of the javadocs for this version, or the 'root' of this version of the ref-guide)
** the generated docs can/should include an empty {{latest-warning.html}} file at that path, so external users who host their own copy don't get mod_include errors for a missing file
* the .htaccess file(s) used on lucene.apache.org can use mod_rewrite rules to route any request for {{latest-warning.html}} to a new CGI, preserving the (resolved) path from the mod_include request as a "request param" for the CGI to use
* the CGI can look at the version# in the path and compare it to the "latest" version (which we can start setting in an .htaccess SetEnv variable), outputting HTML as needed if they don't match
** the generated HTML can use the original (resolved) path from the request for {{latest-warning.html}} to know where to link people to.
* once this is set up and working, it could be backported as far back as we want to go, and the hosted javadocs/ref-guides could be regenerated & re-published. 
So for example:
* https://lucene.apache.org/core/5_2_0/queries/org/apache/lucene/queries/TermsQuery.html
** {{<!--#include virtual="../../../../../latest-warning.html" -->}}
* .htaccess rewrites https://lucene.apache.org/core/5_2_0/latest-warning.html to something like https://lucene.apache.org/latest-warning.cgi?path=core/5_2_0/
* latest-warning.cgi extracts "5_2_0" from {{$path}} and compares it to some env variable (currently) set to "6_6_0" and decides to output a warning
** in that generated warning HTML, the URL to link to is built by replacing "5_2_0" with "6_6_0" -- ie: https://lucene.apache.org/core/6_6_0/
* if the {{$path}} already matches the latest version, then the CGI generates blank output
> dynamic infobox on javadocs/tutorials/ref-guide html pages when URL doesn't > match "latest" version > -- > > Key: LUCENE-7924 > URL: https://issues.apache.org/jira/browse/LUCENE-7924 > Project: Lucene - Core > Issue Type: Wish >Reporter: Hoss Man > > Spinning this idea out of some comments/concerns in SOLR-10595... > It would be nice if all the various "version specific" pages we have (ie: > javadocs, tutorials, solr ref-guide) could include some standard snippet of > text drawing users attention to the fact that they are looking at docs for an > "older" version of lucene/solr -- ideally with a link to the current version. > ala... > {panel} > This page is part of the documentation refers to Lucene 5.4.0. The current > version of [Lucene is 6.6.0|http://lucene.apache.org/core/6_6_0/core/]. > {panel} > The details of how this could work aren't clear cut -- particularly since for > any arbitrary URL the "latest" version of those docs may not contain the > exact same path/file (ie: deprecated/moved classes in future releases, > etc...) 
but ideally it would be some very generic mod_include / javascript > directive that could be included in all generated HTML, that would only > "activate" when the page was loaded from lucene.apache.org and would only > inject the "warning" into the page based on the version number in the URL > compared to some server side configured version number (ex: the way we > already have the "latest" version# hardcoded in our .htaccess file for > redirects) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
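A minimal sketch of the comparison step the proposed CGI would perform (untested; the {{LATEST_VERSION}} variable name, warning wording, and HTML structure are illustrative assumptions, not anything decided on this issue):

```python
import re

def warning_html(path, latest):
    """Return infobox HTML for a request path like 'core/5_2_0/', or an
    empty string when the path already points at the latest version.
    `latest` would come from a server-side SetEnv'd value, e.g. '6_6_0'."""
    m = re.search(r'\d+_\d+_\d+', path)
    if m is None or m.group() == latest:
        return ''  # nothing to warn about: emit blank output
    # build the link target by swapping the old version for the latest one
    current_url = 'https://lucene.apache.org/' + path.replace(m.group(), latest)
    return ('<div class="latest-warning">This documentation is for version %s. '
            'The latest version is <a href="%s">%s</a>.</div>'
            % (m.group().replace('_', '.'), current_url, latest.replace('_', '.')))

# In the real CGI, `path` would arrive as the ?path=... request param set up by
# the mod_rewrite rule, and `latest` from the environment the .htaccess exports.
```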
[jira] [Commented] (SOLR-11144) Analytics Component Documentation
[ https://issues.apache.org/jira/browse/SOLR-11144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120325#comment-16120325 ] Cassandra Targett commented on SOLR-11144: -- Thanks for the pull request! I'll assign this to myself and will try to get you feedback in the next couple of days. > Analytics Component Documentation > - > > Key: SOLR-11144 > URL: https://issues.apache.org/jira/browse/SOLR-11144 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Affects Versions: 7.1 >Reporter: Houston Putman >Assignee: Cassandra Targett >Priority: Critical > > Adding a Solr Reference Guide page for the Analytics Component. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10595) Redirect Confluence pages to new HTML Guide
[ https://issues.apache.org/jira/browse/SOLR-10595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120341#comment-16120341 ] Hoss Man commented on SOLR-10595: - bq. But that can be mitigated by flashing a warning that a newer version exists, and preferably offer a link to corresponding page for that version. Spun off into LUCENE-7924 bq. Doing a redirect makes it harder to find, say ADDREPLICA for instance. Does it make sense to sequence redirects after search works on the new site? Adding search to the new ref-guide seems like an orthogonal issue to adding redirects. But for the sake of argument, let's assume for now they should be considered part and parcel... To you, today, as an experienced user of cwiki: adding redirects may make it harder to find the docs on ADDREPLICA because you have preconceived impressions that going to an existing page on cwiki.apache.org and doing a search in that search box will help you find it -- but the docs you find that way are stale and out of date. A new user -- even if you deliberately instilled in them the preconceived knowledge that going to cwiki is the best way to find docs -- may get frustrated when they can't find docs on commands/features added *after* the ref-guide migration using that same approach (and the likelihood of that happening will only increase -- never decrease -- as time goes on and more docs are added/changed). In the more general case that a new user does *NOT* already have preconceived knowledge that going to cwiki is the best way to find docs, they are most likely to try and find docs using google/web-search -- in which case the (current) lack of redirects means they are in roughly the same boat: they are very likely to first find stale / out of date (and growing more out of date daily) documentation. 
adding cwiki->lucene.apache.org redirects seems like it can only improve the situation for most users -- independent of the question of when/how we add new (explicit) search functionality for the current hosted ref-guide. I'll prep some mapping files and file an INFRA link soon. > Redirect Confluence pages to new HTML Guide > --- > > Key: SOLR-10595 > URL: https://issues.apache.org/jira/browse/SOLR-10595 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Cassandra Targett >Assignee: Hoss Man > Attachments: new-page-urls.txt, page-tree.xml > > > Once the new Ref Guide is live, we may want to redirect pages from Confluence > to the new HTML version. > I'm undecided if this is the best idea, I can see pros and cons to it. On the > pro side, I think it helps firmly establish the move away from Confluence and > helps users adjust to the new location. But I could see the argument that > redirecting is overly invasive or unnecessary and we should just add a big > warning to the page instead. > At any rate, if we do decide to do it, I found some Javascript we could tell > Confluence to add to the HEAD of each page to auto-redirect. With some > probably simple modifications to it, we could get people to the right page in > the HTML site: > https://community.atlassian.com/t5/Confluence-questions/How-to-apply-redirection-on-all-pages-on-a-space/qaq-p/229949 > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-7924) dynamic infobox on javadocs/tutorials/ref-guide html pages when URL doesn't match "latest" version
[ https://issues.apache.org/jira/browse/LUCENE-7924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hoss Man updated LUCENE-7924: - Component/s: general/website > dynamic infobox on javadocs/tutorials/ref-guide html pages when URL doesn't > match "latest" version > -- > > Key: LUCENE-7924 > URL: https://issues.apache.org/jira/browse/LUCENE-7924 > Project: Lucene - Core > Issue Type: Wish > Components: general/website >Reporter: Hoss Man > > Spinning this idea out of some comments/concerns in SOLR-10595... > It would be nice if all the various "version specific" pages we have (ie: > javadocs, tutorials, solr ref-guide) could include some standard snippet of > text drawing users attention to the fact that they are looking at docs for an > "older" version of lucene/solr -- ideally with a link to the current version. > ala... > {panel} > This page is part of the documentation refers to Lucene 5.4.0. The current > version of [Lucene is 6.6.0|http://lucene.apache.org/core/6_6_0/core/]. > {panel} > The details of how this could work aren't clear cut -- particularly since for > any arbitrary URL the "latest" version of those docs may not contain the > exact same path/file (ie: deprecated/moved classes in future releases, > etc...) but ideally it would be some very generic mod_include / javascript > directive that could be included in all generated HTML, that would only > "activate" when the page was loaded from lucene.apache.org and would only > inject the "warning" into the page based on the version number in the URL > compared to some server side configured version number (ex: the way we > already have the "latest" version# hardcoded in our .htaccess file for > redirects) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-9177) Support oom hook when running Solr in foreground mode
[ https://issues.apache.org/jira/browse/SOLR-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey reassigned SOLR-9177: -- Assignee: Shawn Heisey > Support oom hook when running Solr in foreground mode > - > > Key: SOLR-9177 > URL: https://issues.apache.org/jira/browse/SOLR-9177 > Project: Solr > Issue Type: New Feature >Reporter: Anshum Gupta >Assignee: Shawn Heisey > > After reading through the comments on SOLR-8145 and from my own experience, > seems like a reasonable number of people run Solr in foreground mode in > production. > To give some more context, I've seen Solr hit OOM, which leads to IW being > closed by Lucene. The Solr process hangs in there and without the oom killer, > while all queries continue to work, all update requests start failing. > I think it makes sense to add support to the bin/solr script to add the oom > hook when running in fg mode. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-9177) Support oom hook when running Solr in foreground mode
[ https://issues.apache.org/jira/browse/SOLR-9177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shawn Heisey updated SOLR-9177: --- Attachment: SOLR-9177.patch Patch with fix, CHANGES.txt entry in 7.0.0. > Support oom hook when running Solr in foreground mode > - > > Key: SOLR-9177 > URL: https://issues.apache.org/jira/browse/SOLR-9177 > Project: Solr > Issue Type: New Feature >Reporter: Anshum Gupta >Assignee: Shawn Heisey > Attachments: SOLR-9177.patch > > > After reading through the comments on SOLR-8145 and from my own experience, > seems like a reasonable number of people run Solr in foreground mode in > production. > To give some more context, I've seen Solr hit OOM, which leads to IW being > closed by Lucene. The Solr process hangs in there and without the oom killer, > while all queries continue to work, all update requests start failing. > I think it makes sense to add support to the bin/solr script to add the oom > hook when running in fg mode. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-10595) Redirect Confluence pages to new HTML Guide
[ https://issues.apache.org/jira/browse/SOLR-10595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120388#comment-16120388 ] Cassandra Targett commented on SOLR-10595: -- bq. I'll prep some mapping files and file an INFRA link soon. +1, I'm on board with what you've outlined so far. > Redirect Confluence pages to new HTML Guide > --- > > Key: SOLR-10595 > URL: https://issues.apache.org/jira/browse/SOLR-10595 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Cassandra Targett >Assignee: Hoss Man > Attachments: new-page-urls.txt, page-tree.xml > > > Once the new Ref Guide is live, we may want to redirect pages from Confluence > to the new HTML version. > I'm undecided if this is the best idea, I can see pros and cons to it. On the > pro side, I think it helps firmly establish the move away from Confluence and > helps users adjust to the new location. But I could see the argument that > redirecting is overly invasive or unnecessary and we should just add a big > warning to the page instead. > At any rate, if we do decide to do it, I found some Javascript we could tell > Confluence to add to the HEAD of each page to auto-redirect. With some > probably simple modifications to it, we could get people to the right page in > the HTML site: > https://community.atlassian.com/t5/Confluence-questions/How-to-apply-redirection-on-all-pages-on-a-space/qaq-p/229949 > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11146) Analytics Component 2.0 Bug Fixes
[ https://issues.apache.org/jira/browse/SOLR-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Houston Putman updated SOLR-11146: -- Issue Type: Bug (was: Improvement) > Analytics Component 2.0 Bug Fixes > - > > Key: SOLR-11146 > URL: https://issues.apache.org/jira/browse/SOLR-11146 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.1 >Reporter: Houston Putman >Priority: Critical > Fix For: 7.0 > > > The new Analytics Component has several small bugs in mapping functions and > other places. This ticket is a fix for a large number of them. This patch > should allow all unit tests created in SOLR-11145 to pass. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: What should we do with the 6x code line?
+1 I think a 6.7 release would be very good. On Tue, Aug 8, 2017 at 5:46 PM Mike Drob wrote: > +1 > > Release early, release often! > > On Tue, Aug 8, 2017 at 4:27 PM, Erick Erickson > wrote: > >> Solr and Lucene have had fixes backported to 6x (not 6.6) since the >> 7.0 label was set, most in Solr. Some of the fixes are useful "in the >> field", I've back-ported some of them myself. >> >> What objections are there to a 6.7 release? We'd always prefer to >> release nothing except important bug fixes on a prior branch, but the >> release process for 7.0 has taken some time and changes have >> accumulated. >> >> This might be the last, best time to wrap up 6x with a 6.7 as much as >> we can before officially releasing 7.0. >> >> What do people think? >> >> Erick >> >> - >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org >> For additional commands, e-mail: dev-h...@lucene.apache.org >> >> > -- Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.solrenterprisesearchserver.com
[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 6810 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6810/ Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC 1 tests failed. FAILED: org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery Error Message: Expected a collection with one shard and two replicas null Live Nodes: [127.0.0.1:55396_solr, 127.0.0.1:55397_solr] Last available state: DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"MissingSegmentRecoveryTest_shard1_replica_n1", "base_url":"https://127.0.0.1:55396/solr";, "node_name":"127.0.0.1:55396_solr", "state":"down", "type":"NRT"}, "core_node4":{ "core":"MissingSegmentRecoveryTest_shard1_replica_n2", "base_url":"https://127.0.0.1:55397/solr";, "node_name":"127.0.0.1:55397_solr", "state":"active", "type":"NRT", "leader":"true", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} Stack Trace: java.lang.AssertionError: Expected a collection with one shard and two replicas null Live Nodes: [127.0.0.1:55396_solr, 127.0.0.1:55397_solr] Last available state: DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={ "pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ "range":"8000-7fff", "state":"active", "replicas":{ "core_node3":{ "core":"MissingSegmentRecoveryTest_shard1_replica_n1", "base_url":"https://127.0.0.1:55396/solr";, "node_name":"127.0.0.1:55396_solr", "state":"down", "type":"NRT"}, "core_node4":{ "core":"MissingSegmentRecoveryTest_shard1_replica_n2", "base_url":"https://127.0.0.1:55397/solr";, "node_name":"127.0.0.1:55397_solr", "state":"active", "type":"NRT", "leader":"true", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} at 
__randomizedtesting.SeedInfo.seed([891B1F51BD4EB834:D94E8752E46F0E29]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269) at org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(Sta
[JENKINS] Lucene-Solr-Tests-7.x - Build # 130 - Still Unstable
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/130/ 4 tests failed. FAILED: org.apache.solr.cloud.TestAuthenticationFramework.testBasics Error Message: Error from server at https://127.0.0.1:38162/solr/testcollection_shard1_replica_n2: Expected mime type application/octet-stream but got text/html.Error 404HTTP ERROR: 404 Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason: Can not find: /solr/testcollection_shard1_replica_n2/update http://eclipse.org/jetty";>Powered by Jetty:// 9.3.14.v20161028 Stack Trace: org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from server at https://127.0.0.1:38162/solr/testcollection_shard1_replica_n2: Expected mime type application/octet-stream but got text/html. Error 404 HTTP ERROR: 404 Problem accessing /solr/testcollection_shard1_replica_n2/update. Reason: Can not find: /solr/testcollection_shard1_replica_n2/update http://eclipse.org/jetty";>Powered by Jetty:// 9.3.14.v20161028 at __randomizedtesting.SeedInfo.seed([9EA2F281189DD6EA:A37A5CAD2073889A]:0) at org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:539) at org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:993) at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:862) at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:793) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178) at org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233) at org.apache.solr.cloud.TestAuthenticationFramework.collectionCreateSearchDeleteTwice(TestAuthenticationFramework.java:126) at org.apache.solr.cloud.TestAuthenticationFramework.testBasics(TestAuthenticationFramework.java:74) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907) at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943) at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule
[jira] [Commented] (SOLR-11217) Mathematical notation not supported in Solr Ref Guide
[ https://issues.apache.org/jira/browse/SOLR-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120453#comment-16120453 ] Cassandra Targett commented on SOLR-11217: -- I hadn't realized until now that Asciidoctor had stem notation support built into it. According to Asciidoctor docs (http://asciidoctor.org/docs/user-manual/#activating-stem-support), it's as simple as adding a {{:stem:}} attribute to a page (or globally, which we would prefer). If I add that to a page and use Asciidoctor itself to convert the page to HTML it works well. In order to support it for Jekyll, we'd change the attribute slightly to {{:page-stem:}}. Note we're not using Asciidoctor's converters to build our HTML, we're using Jekyll's with a plugin from the Asciidoctor project to allow Jekyll to support AsciiDoc formatted files. Adding the attribute to the page front-matter (the stuff at the top of each file), however, has no impact whatsoever. It may be that this is not yet supported, or we need to add something as an extension or plugin to Jekyll, or we may need to modify the templates as in your suggestion. I've asked the jekyll-asciidoc project what their recommendation is: https://github.com/asciidoctor/jekyll-asciidoc/issues/163 > Mathematical notation not supported in Solr Ref Guide > - > > Key: SOLR-11217 > URL: https://issues.apache.org/jira/browse/SOLR-11217 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: documentation >Reporter: Houston Putman >Priority: Minor > > The template used to build the Solr Ref Guide from the asciidoctor pages > removes the needed javascript for mathematical notation. > When building the webpage, asciidoctor puts a tag like the one below at the > bottom of the html > {code:html} > <script src="#{cdn_base}/mathjax/2.6.0/MathJax.js?config=TeX-MML-AM_HTMLorMML"></script> > {code} > and some other tags as well. 
> However these are not included in the sections that are inserted into the > template, so they are left out and the mathematical notation is not converted > to MathJax that can be viewed in a browser. > This can be tested by adding any stem notation in an asciidoctor > solr-ref-page, such as the following text: > {code} > asciimath:[sqrt(4) = 2]. > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
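For reference, the two attribute spellings discussed in the comment differ only in the {{page-}} prefix that the jekyll-asciidoc plugin uses for page-level attributes (a sketch of where each would go, not the final configuration):

```asciidoc
// Plain Asciidoctor: activate stem support via a document (or global) attribute
:stem: asciimath

// Under jekyll-asciidoc, page attributes in the front-matter carry a `page-` prefix
:page-stem: asciimath
```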
[jira] [Created] (SOLR-11218) Add a test that insures that you can delete the underlying collection if you have an alias of the same name pointing to a different collection
Erick Erickson created SOLR-11218: - Summary: Add a test that insures that you can delete the underlying collection if you have an alias of the same name pointing to a different collection Key: SOLR-11218 URL: https://issues.apache.org/jira/browse/SOLR-11218 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Reporter: Erick Erickson -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (SOLR-11218) Add a test that insures that you can delete the underlying collection if you have an alias of the same name pointing to a different collection
[ https://issues.apache.org/jira/browse/SOLR-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erick Erickson reassigned SOLR-11218: - Assignee: Erick Erickson > Add a test that insures that you can delete the underlying collection if you > have an alias of the same name pointing to a different collection > -- > > Key: SOLR-11218 > URL: https://issues.apache.org/jira/browse/SOLR-11218 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Assignee: Erick Erickson > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11218) Add a test that insures that you can delete the underlying collection if you have an alias of the same name pointing to a different collection
[ https://issues.apache.org/jira/browse/SOLR-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120470#comment-16120470 ] Erick Erickson commented on SOLR-11218: --- It's common to recommend when people need to re-index for any reason that they:
1> create a new collection
2> index the corpus to the new collection and verify it
3> create an alias pointing to the new collection as their original collection
4> delete the old collection.
They may or may not have an alias already pointing to the old collection that's being replaced. If they don't already have an alias, this leaves them with:
> a collection named old_collection
> a collection named new_collection
> an alias old_collection->new_collection
What happens when they delete old_collection now? Current behavior is that delete "does the right thing" and deletes old_collection rather than new_collection, but if this behavior ever changes it could be disastrous for users, so this test ensures that behavior is preserved. I have a test patch in progress I'll commit today if it works. > Add a test that insures that you can delete the underlying collection if you > have an alias of the same name pointing to a different collection > -- > > Key: SOLR-11218 > URL: https://issues.apache.org/jira/browse/SOLR-11218 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Assignee: Erick Erickson > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
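The re-index recipe in the comment can be sketched against the Collections API (a sketch only -- {{solr_url}} and the collection names are placeholders, and this builds the request URLs rather than issuing them):

```python
from urllib.parse import urlencode

def reindex_behind_alias_urls(solr_url, old='old_collection', new='new_collection'):
    """URLs for: create the new collection, alias the old name to it, then
    delete the old collection.  The behavior under test is that the final
    DELETE removes the *collection* named old_collection, rather than
    following the alias and deleting new_collection."""
    steps = [
        ('CREATE',      {'name': new, 'numShards': 1, 'replicationFactor': 1}),
        # ... step 2: index the corpus to `new` and verify it ...
        ('CREATEALIAS', {'name': old, 'collections': new}),
        ('DELETE',      {'name': old}),
    ]
    return ['%s/admin/collections?action=%s&%s' % (solr_url, action, urlencode(params))
            for action, params in steps]

urls = reindex_behind_alias_urls('http://localhost:8983/solr')
```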
[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-9-ea+181) - Build # 231 - Still Unstable!
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/231/
Java: 64bit/jdk-9-ea+181 -XX:+UseCompressedOops -XX:+UseParallelGC --illegal-access=deny

1 tests failed.
FAILED: org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
Doc with id=1 not found in http://127.0.0.1:42565/forceleader_test_collection due to: Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=1 not found in http://127.0.0.1:42565/forceleader_test_collection due to: Path not found: /id; rsp={doc=null}
        at __randomizedtesting.SeedInfo.seed([52C5C44F60220C91:B452F08F59A0F5F0]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.apache.solr.cloud.HttpPartitionTest.assertDocExists(HttpPartitionTest.java:603)
        at org.apache.solr.cloud.HttpPartitionTest.assertDocsExistInAllReplicas(HttpPartitionTest.java:556)
        at org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:142)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64
[jira] [Commented] (SOLR-11217) Mathematical notation not supported in Solr Ref Guide
[ https://issues.apache.org/jira/browse/SOLR-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120485#comment-16120485 ]

Cassandra Targett commented on SOLR-11217:
------------------------------------------
OK, got a quick reply. The Asciidoctor part is working properly if {{:stem:}} is added to the page front-matter, in the sense that it wraps the notation in markup for MathJax to do its part rendering it. So, we need to add MathJax to our templates as you thought. The question now is how to add it - I tried a couple of variations of simply adding it as a script in the header or page template, but it's going to be a little bit more complex than that. I'll see if I can find time in the coming days to work some more on getting this to work.

> Mathematical notation not supported in Solr Ref Guide
> -----------------------------------------------------
>
> Key: SOLR-11217
> URL: https://issues.apache.org/jira/browse/SOLR-11217
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: documentation
> Reporter: Houston Putman
> Priority: Minor
>
> The template used to build the Solr Ref Guide from the asciidoctor pages removes the needed javascript for mathematical notation.
> When building the webpage, asciidoctor puts a tag like the one below at the bottom of the html
> {code:html}
> <script src="#{cdn_base}/mathjax/2.6.0/MathJax.js?config=TeX-MML-AM_HTMLorMML"></script>
> {code}
> and some other tags as well.
> However these are not included in the sections that are inserted into the template, so they are left out and the mathematical notation is not converted to MathJax that can be viewed in a browser.
> This can be tested by adding any stem notation in an asciidoctor solr-ref-page, such as the following text:
> {code}
> asciimath:[sqrt(4) = 2].
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
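Once the templates do include MathJax, the missing piece would look roughly like the fragment below. This is a sketch of the kind of include under discussion, not the committed fix: the {{#{cdn_base}}} placeholder and the 2.6.0 combined config name come from the tag quoted in the issue, while the idea of placing it in the page template's head is an assumption.

```html
<!-- Hypothetical template fragment (not the committed fix): load MathJax so
     the stem markup Asciidoctor emits is rendered in the browser. The
     cdn_base placeholder and config name are quoted from SOLR-11217. -->
<head>
  <!-- ... existing Ref Guide template head content ... -->
  <script type="text/javascript" async
          src="#{cdn_base}/mathjax/2.6.0/MathJax.js?config=TeX-MML-AM_HTMLorMML">
  </script>
</head>
```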
[jira] [Updated] (SOLR-11190) GraphQuery not working for string fields that has only docValues
[ https://issues.apache.org/jira/browse/SOLR-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Varun Thacker updated SOLR-11190:
---------------------------------
Attachment: SOLR-11190.patch

Updated Karthik's patch with some more validation. All tests pass. I'll give it another review and commit it

> GraphQuery not working for string fields that has only docValues
> ----------------------------------------------------------------
>
> Key: SOLR-11190
> URL: https://issues.apache.org/jira/browse/SOLR-11190
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: query parsers
> Affects Versions: 6.6
> Reporter: Karthik Ramachandran
> Assignee: Karthik Ramachandran
> Attachments: SOLR-11190.patch, SOLR-11190.patch, SOLR-11190.patch, SOLR-11190.patch, SOLR-11190.patch
>
> Graph traversal is not working if string field has only docValues since the construction of leaf or parent node queries uses only TermQuery.
>
> {code:xml|title=managed-schema|borderStyle=solid}
> docValues="true" />
> docValues="true" />
> docValues="true" />
> docValues="true" />
> id
> precisionStep="0" positionIncrementGap="0"/>
> {code}
> {code}
> curl -XPOST -H 'Content-Type: application/json' 'http://localhost:8983/solr/graph/update' --data-binary ' {
> "add" : { "doc" : { "id" : "1", "name" : "Root1" } },
> "add" : { "doc" : { "id" : "2", "name" : "Root2" } },
> "add" : { "doc" : { "id" : "3", "name" : "Root3" } },
> "add" : { "doc" : { "id" : "11", "parentid" : "1", "name" : "Root1 Child1" } },
> "add" : { "doc" : { "id" : "12", "parentid" : "1", "name" : "Root1 Child2" } },
> "add" : { "doc" : { "id" : "13", "parentid" : "1", "name" : "Root1 Child3" } },
> "add" : { "doc" : { "id" : "21", "parentid" : "2", "name" : "Root2 Child1" } },
> "add" : { "doc" : { "id" : "22", "parentid" : "2", "name" : "Root2 Child2" } },
> "add" : { "doc" : { "id" : "121", "parentid" : "12", "name" : "Root12 Child1" } },
> "add" : { "doc" : { "id" : "122", "parentid" : "12", "name" : "Root12 Child2" } },
> "add" : { "doc" : { "id" : "131", "parentid" : "13", "name" : "Root13 Child1" } },
> "commit" : {}
> }'
> {code}
> {code}
> http://localhost:8983/solr/graph/select?q=*:*&fq={!graph from=parentid to=id}id:1
> or
> http://localhost:8983/solr/graph/select?q=*:*&fq={!graph from=id to=parentid}id:122
> {code}
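The root cause described in SOLR-11190 (leaf and parent node queries built only as term queries, which consult the inverted index) can be illustrated with a toy model. This is a hedged, illustrative sketch only: the function names and data layout below are invented for the example and are not Solr or Lucene internals.

```python
# Illustrative-only sketch (not Solr/Lucene internals): why a graph walk that
# issues plain term queries finds no edges when the join field is
# docValues=true / indexed=false.

def build_segment(docs, indexed, docvalues):
    """Build a tiny 'segment': an inverted index for indexed fields only,
    and a per-field column store (the docValues analogue)."""
    seg = {"inverted": {}, "columns": {}}
    for doc_id, doc in enumerate(docs):
        for field, value in doc.items():
            if field in indexed:
                seg["inverted"].setdefault((field, value), set()).add(doc_id)
            if field in docvalues:
                seg["columns"].setdefault(field, {})[doc_id] = value
    return seg

def term_query(seg, field, value):
    # TermQuery-style path: consults the inverted index only.
    return seg["inverted"].get((field, value), set())

def docvalues_query(seg, field, value):
    # DocValues-style path: scans the column store instead.
    return {d for d, v in seg["columns"].get(field, {}).items() if v == value}

docs = [
    {"id": "1", "name": "Root1"},
    {"id": "11", "parentid": "1", "name": "Root1 Child1"},
    {"id": "12", "parentid": "1", "name": "Root1 Child2"},
]
# parentid is docValues-only, as in the bug report's schema.
seg = build_segment(docs, indexed={"id", "name"}, docvalues={"id", "parentid"})

print(term_query(seg, "parentid", "1"))       # set() -- no postings, the walk stalls
print(docvalues_query(seg, "parentid", "1"))  # {1, 2} -- both children found
```

The fix committed for SOLR-11190 amounts to choosing the second kind of lookup when the field has docValues but no indexed terms.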
[jira] [Commented] (SOLR-11181) Deploying Maven artifacts (generate-maven-artifacts) pushes the same artifacts multiple times
[ https://issues.apache.org/jira/browse/SOLR-11181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120515#comment-16120515 ]

Steve Rowe commented on SOLR-11181:
-----------------------------------
[~lmonson], I ran the following with your patch applied to upload 8.0-SNAPSHOT artifacts to the Apache Snapshot Repository.

{noformat}
ant -Dm2.repository.id=apache.snapshots.https -Dm2.repository.url=https://repository.apache.org/content/repositories/snapshots generate-maven-artifacts
{noformat}

You can see the result for {{lucene-core}} here (artifacts share the infix {{-8.0.0-20170809.185958-28.}}): https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-core/8.0.0-SNAPSHOT/

Unlike all previous runs, neither {{-sources}} nor {{-javadoc}} artifacts were uploaded when I used your patch. This is not acceptable. When you run with your patch, do these artifacts get uploaded?

Next I'll try to reproduce the problem you're trying to solve with the unpatched build.

> Deploying Maven artifacts (generate-maven-artifacts) pushes the same artifacts multiple times
> ---------------------------------------------------------------------------------------------
>
> Key: SOLR-11181
> URL: https://issues.apache.org/jira/browse/SOLR-11181
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: Build
> Affects Versions: 6.6, master (8.0), 7.1
> Reporter: Lynn Monson
> Assignee: Steve Rowe
> Priority: Minor
> Attachments: SOLR-11181.patch
>
> When following the instructions in the README.maven file, and watching the wire traffic, the build system issues HTTP PUT operations for the same artifacts multiple times. For example, issuing this command:
> ant -Dm2.repository.id=my-repo-id \
> -Dm2.repository.url=http://example.org/my/repo \
> generate-maven-artifacts
> from the lucene/ directory will generate redundant puts.
> For example:
> PUT //org/apache/lucene/lucene-core//lucene-core-4.10.4-fs.31-sources.jar
> PUT //org/apache/lucene/lucene-core//lucene-core-4.10.4-fs.31-sources.jar.sha1
> PUT //org/apache/lucene/lucene-core//lucene-core-4.10.4-fs.31-sources.jar.md5
> ...
> PUT //org/apache/lucene/lucene-core//lucene-core-4.10.4-fs.31-sources.jar
> ...
> The maven repo I am using does not allow the second PUT and, hence, the build fails.
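The failure mode in SOLR-11181 (a repository that rejects a second PUT to the same path) suggests the shape of a remedy: deduplicate the upload list by target path before deploying, so each artifact is pushed at most once. The sketch below is a hypothetical illustration of that idea only; it is not the Lucene/Solr build's actual logic, and the paths are invented for the example.

```python
# Hypothetical sketch (not the actual build logic): collapse a redundant
# upload plan so each target path is PUT at most once, preserving order.

def dedupe_uploads(paths):
    seen = set()
    unique = []
    for p in paths:
        if p not in seen:      # keep only the first PUT for each path
            seen.add(p)
            unique.append(p)
    return unique

plan = [
    "/org/apache/lucene/lucene-core/lucene-core-sources.jar",
    "/org/apache/lucene/lucene-core/lucene-core-sources.jar.sha1",
    "/org/apache/lucene/lucene-core/lucene-core-sources.jar",  # redundant PUT
]
print(dedupe_uploads(plan))  # the redundant third entry is dropped
```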
[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 833 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/833/

No tests ran.

Build Log:
[...truncated 25698 lines...]
prepare-release-no-sign:
    [mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
     [copy] Copying 476 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
     [copy] Copying 215 files to /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker]
   [smoker] Load release URL "file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker]
   [smoker] Test Lucene...
   [smoker] test basics...
   [smoker] get KEYS
   [smoker] 0.2 MB in 0.02 sec (13.1 MB/sec)
   [smoker] check changes HTML...
   [smoker] download lucene-8.0.0-src.tgz...
   [smoker] 29.0 MB in 0.06 sec (509.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker] download lucene-8.0.0.tgz...
   [smoker] 68.9 MB in 0.13 sec (548.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker] download lucene-8.0.0.zip...
   [smoker] 79.2 MB in 0.14 sec (557.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker] got 6136 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker] unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker] got 6136 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker] unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker] got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker]
   [smoker] Crawl/parse...
   [smoker]
   [smoker] Verify...
   [smoker] confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker]
   [smoker] Test Solr...
   [smoker] test basics...
   [smoker] get KEYS
   [smoker] 0.2 MB in 0.00 sec (276.2 MB/sec)
   [smoker] check changes HTML...
   [smoker] download solr-8.0.0-src.tgz...
   [smoker] 49.8 MB in 0.09 sec (549.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker] download solr-8.0.0.tgz...
   [smoker] 142.4 MB in 0.28 sec (507.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker] download solr-8.0.0.zip...
   [smoker] 143.4 MB in 0.26 sec (559.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker] unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker] **WARNING**: skipping check of /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar: it has javax.* classes
   [smoker] **WARNING**: skipping check of /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar: it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker] start Solr instance (log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker] Running techproducts example on port 8983 from /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] Creating Solr home directory /home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/example/techproducts/solr
   [smoker]
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker]
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|] [/] [-] [\] [|] [/] [-] [\] [|] [
[jira] [Commented] (SOLR-11190) GraphQuery not working for string fields that has only docValues
[ https://issues.apache.org/jira/browse/SOLR-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120533#comment-16120533 ]

Varun Thacker commented on SOLR-11190:
--------------------------------------
Previous patch seemed to remove a test
{code}
-doGraph( params("node_id","node_dps", "edge_id","edge_dps") );
{code}
Adding it back and uploading the patch which I will commit
[jira] [Updated] (SOLR-11190) GraphQuery not working for string fields that has only docValues
[ https://issues.apache.org/jira/browse/SOLR-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Varun Thacker updated SOLR-11190:
---------------------------------
Attachment: SOLR-11190.patch
[jira] [Commented] (SOLR-11190) GraphQuery not working for string fields that has only docValues
[ https://issues.apache.org/jira/browse/SOLR-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120542#comment-16120542 ]

ASF subversion and git services commented on SOLR-11190:
--------------------------------------------------------
Commit e7062b6f91c161965aec0cef5a9ae68280f306a4 in lucene-solr's branch refs/heads/master from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e7062b6 ]

SOLR-11190: GraphQuery also supports string fields which are indexed=false and docValues=true
[jira] [Commented] (SOLR-11190) GraphQuery not working for string fields that has only docValues
[ https://issues.apache.org/jira/browse/SOLR-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120544#comment-16120544 ]

ASF subversion and git services commented on SOLR-11190:
--------------------------------------------------------
Commit 2d3f4d5c29d2ee920a6e8a35d80ee175c743deb3 in lucene-solr's branch refs/heads/branch_7x from [~varunthacker]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2d3f4d5 ]

SOLR-11190: GraphQuery also supports string fields which are indexed=false and docValues=true
[jira] [Resolved] (SOLR-11190) GraphQuery not working for string fields that has only docValues
[ https://issues.apache.org/jira/browse/SOLR-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Varun Thacker resolved SOLR-11190.
----------------------------------
Resolution: Fixed
Fix Version/s: 7.1
               master (8.0)

Thanks Karthik for the patches and Yonik for the review!