I have reproduced the stack trace above and the condition where collections
cannot be created after upgrading an 8.6.0 install to 8.6.1. I'm filing an
issue right now and changing my vote from +0 to -1.
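
For anyone who wants to try this, the steps below are roughly what I ran
(the port is from my local setup, and I'm paraphrasing from my shell
history, so treat it as a sketch rather than an exact transcript):

  # start an 8.6.0 node in cloud mode against a local ZooKeeper
  bin/solr start -cloud -p 8981 -z localhost:2181

  # set the cluster policy quoted below via the autoscaling API
  curl -X POST 'http://localhost:8981/api/cluster/autoscaling' \
    -H 'Content-Type: application/json' \
    -d '{"set-cluster-policy": [
          {"replica":"<2", "shard":"#EACH", "node":"#ANY", "strict":"false"},
          {"replica":"#EQUAL", "node":"#ANY", "strict":"false"},
          {"cores":"#EQUAL", "node":"#ANY", "strict":"false"}]}'

  # stop the node and restart it from the 8.6.1 install, same ZooKeeper
  bin/solr stop -p 8981
  bin/solr start -cloud -p 8981 -z localhost:2181

  # then create a collection (single shard, _default config) -- it fails
  # with the stack trace quoted below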


On Mon, Aug 3, 2020 at 8:20 AM Jan Høydahl <[email protected]> wrote:

> I keep getting HDFS related test failures and timeouts, so I cannot vote.
> (macOS)
>
> Jan
>
> > 3. aug. 2020 kl. 09:43 skrev Atri Sharma <[email protected]>:
> >
> > +1
> >
> > SUCCESS! [1:27:33.14892]
> >
> > On Mon, Aug 3, 2020 at 1:11 PM Marcus Eagan <[email protected]>
> wrote:
> >>
> >> Community,
> >>
> >> Results from my local smoke test (Mac OS 10.15.5 | 1.8.0_265, x86_64:
> "Amazon Corretto 8"):
> >>
> >> SUCCESS! [1:33:51.132902]
> >>
> >> I'm still going through and checking a few aforementioned issues, but
> non-binding +1 from me. Wanted to share with the community because most
> probably are not running Corretto.
> >>
> >> Hope this helps.
> >>
> >> marcus
> >>
> >>
> >>
> >> On Sun, Aug 2, 2020 at 9:36 PM Gus Heck <[email protected]> wrote:
> >>>
> >>> Digging a little further, I notice that the deployment that had the
> error has this autoscaling config (whereas the working deployment does not):
> >>>
> >>> "cluster-preferences":[{
> >>> "minimize":"cores",
> >>> "precision":1},
> >>> {"maximize":"freedisk"}],
> >>> "cluster-policy":[
> >>> {
> >>> "replica":"<2",
> >>> "shard":"#EACH",
> >>> "node":"#ANY",
> >>> "strict":"false"},
> >>> {
> >>> "replica":"#EQUAL",
> >>> "node":"#ANY",
> >>> "strict":"false"},
> >>> {
> >>> "cores":"#EQUAL",
> >>> "node":"#ANY",
> >>> "strict":"false"}],
> >>>
> >>> So this may raise the question of whether we have an issue upgrading
> an 8.6.0 install to 8.6.1. Also, I'm not very familiar with autoscaling's
> error messages, but this one looks dodgy too: "one extra tag ... for the
> tag cores" seems to refer to a cores attribute that has only one value,
> though I have no idea yet whether I'm reading it right. As to how I got
> that config, I'm pretty sure it was one of the times my edits to cloud.sh
> errored out and deployed an existing branch_8x build; ZK probably wasn't
> clean and retained the old config.
> >>>
> >>> It's late here now, so tomorrow I'll try to deploy 8_6_0, upgrade it
> to 8_6_1, and see if I get a similar result.
> >>>
> >>>
> >>> On Sun, Aug 2, 2020 at 11:59 PM Gus Heck <[email protected]> wrote:
> >>>>
> >>>> I got:
> >>>>
> >>>> Ubuntu 18.04.4 LTS:
> >>>> SUCCESS! [0:53:02.203047]
> >>>>
> >>>> Mac OS 10.13:
> >>>> SUCCESS! [1:00:57.938586]
> >>>>
> >>>>
> >>>> BUT... when I deployed the tarball locally and tried to create a
> collection (single shard, _default config, via the Solr UI), I got:
> >>>>
> >>>>
> >>>> 2020-08-03 02:55:15.585 INFO  (zkCallback-14-thread-1) [   ]
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (1) -> (2)
> >>>>
> >>>> 2020-08-03 02:55:21.288 INFO  (zkCallback-14-thread-1) [   ]
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (2) -> (3)
> >>>>
> >>>> 2020-08-03 02:55:26.705 INFO  (zkCallback-14-thread-1) [   ]
> o.a.s.c.c.ZkStateReader Updated live nodes from ZooKeeper... (3) -> (4)
> >>>>
> >>>> 2020-08-03 03:00:07.521 INFO
> (OverseerThreadFactory-22-thread-1-processing-n:192.168.2.106:8981_solr)
> [   ] o.a.s.c.a.c.CreateCollectionCmd Create collection test
> >>>>
> >>>> 2020-08-03 03:00:07.672 ERROR
> (OverseerThreadFactory-22-thread-1-processing-n:192.168.2.106:8981_solr)
> [   ] o.a.s.c.a.c.OverseerCollectionMessageHandler Collection: test
> operation: create failed:org.apache.solr.common.SolrException
> >>>>
> >>>> at
> org.apache.solr.cloud.api.collections.CreateCollectionCmd.call(CreateCollectionCmd.java:347)
> >>>>
> >>>> at
> org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:264)
> >>>>
> >>>> at
> org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:517)
> >>>>
> >>>> at
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:212)
> >>>>
> >>>> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> >>>>
> >>>> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> >>>>
> >>>> at java.lang.Thread.run(Thread.java:745)
> >>>>
> >>>> Caused by: java.lang.RuntimeException: Only one extra tag supported
> for the tag cores in {
> >>>>
> >>>>  "cores":"#EQUAL",
> >>>>
> >>>>  "node":"#ANY",
> >>>>
> >>>>  "strict":"false"}
> >>>>
> >>>> at
> org.apache.solr.client.solrj.cloud.autoscaling.Clause.<init>(Clause.java:122)
> >>>>
> >>>> at
> org.apache.solr.client.solrj.cloud.autoscaling.Clause.create(Clause.java:235)
> >>>>
> >>>> at
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> >>>>
> >>>> at
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
> >>>>
> >>>> at
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
> >>>>
> >>>> at
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
> >>>>
> >>>> at
> java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
> >>>>
> >>>> at
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> >>>>
> >>>> at
> java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
> >>>>
> >>>> at
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.<init>(Policy.java:144)
> >>>>
> >>>> at
> org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig.getPolicy(AutoScalingConfig.java:372)
> >>>>
> >>>> at
> org.apache.solr.cloud.api.collections.Assign.usePolicyFramework(Assign.java:300)
> >>>>
> >>>> at
> org.apache.solr.cloud.api.collections.Assign.usePolicyFramework(Assign.java:277)
> >>>>
> >>>> at
> org.apache.solr.cloud.api.collections.Assign$AssignStrategyFactory.create(Assign.java:661)
> >>>>
> >>>> at
> org.apache.solr.cloud.api.collections.CreateCollectionCmd.buildReplicaPositions(CreateCollectionCmd.java:415)
> >>>>
> >>>> at
> org.apache.solr.cloud.api.collections.CreateCollectionCmd.call(CreateCollectionCmd.java:192)
> >>>>
> >>>> ... 6 more
> >>>>
> >>>>
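> >>>> (FWIW, the UI create should be equivalent to a Collections API call
> along these lines -- paraphrasing, so the exact parameters may differ:)
> >>>>
> >>>> curl 'http://localhost:8981/solr/admin/collections?action=CREATE&name=test&numShards=1&replicationFactor=1&collection.configName=_default'
> >>>>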
> >>>> However, when I re-did everything a second time to double-check,
> creating a collection worked just fine, and now I can't seem to reproduce
> this.
> >>>>
> >>>>
> >>>> If nobody else gets this, I'll figure I just managed to mangle
> something while working on
> https://issues.apache.org/jira/browse/SOLR-14704
> >>>>
> >>>>
> >>>> But others should perhaps give it a spin to look for this, so I'll
> give it a +0.
> >>>>
> >>>>
> >>>>
> >>>> On Fri, Jul 31, 2020 at 8:14 AM Noble Paul <[email protected]>
> wrote:
> >>>>>
> >>>>> SUCCESS! [1:03:21.786536]
> >>>>> Ubuntu 20.04 LTS
> >>>>>
> >>>>>
> >>>>> On Fri, Jul 31, 2020 at 7:34 AM Houston Putman <
> [email protected]> wrote:
> >>>>>>
> >>>>>> Due to the weekend, the vote will be open until 2020-08-03 22:00
> UTC. That's 96 hours, including two business days.
> >>>>>>
> >>>>>> I can leave the vote open for longer if people want an additional
> business day, but will end it on Monday otherwise.
> >>>>>>
> >>>>>> - Houston
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Thu, Jul 30, 2020 at 5:07 PM Houston Putman <
> [email protected]> wrote:
> >>>>>>>
> >>>>>>> Please vote for release candidate 1 for Lucene/Solr 8.6.1
> >>>>>>>
> >>>>>>> The artifacts can be downloaded from:
> >>>>>>>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.6.1-RC1-reva32a3ac4e43f629df71e5ae30a3330be94b095f2
> >>>>>>>
> >>>>>>> You can run the smoke tester directly with this command:
> >>>>>>>
> >>>>>>> python3 -u dev-tools/scripts/smokeTestRelease.py \
> >>>>>>>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.6.1-RC1-reva32a3ac4e43f629df71e5ae30a3330be94b095f2
> >>>>>>>
> >>>>>>> The vote will be open for at least 72 hours, i.e., until 2020-08-02
> 22:00 UTC.
> >>>>>>>
> >>>>>>> [ ] +1  approve
> >>>>>>> [ ] +0  no opinion
> >>>>>>> [ ] -1  disapprove (and reason why)
> >>>>>>>
> >>>>>>> Here is my +1
> >>>>>
> >>>>>
> >>>>>
> >>>>> --
> >>>>> -----------------------------------------------------
> >>>>> Noble Paul
> >>>>>
> >>>>>
> >>>>
> >>>>
> >>>> --
> >>>> http://www.needhamsoftware.com (work)
> >>>> http://www.the111shift.com (play)
> >>>
> >>>
> >>>
> >>> --
> >>> http://www.needhamsoftware.com (work)
> >>> http://www.the111shift.com (play)
> >>
> >>
> >>
> >> --
> >> Marcus Eagan
> >>
> >
> >
> > --
> > Regards,
> >
> > Atri
> > Apache Concerted
> >
> >
>
>
>
>

-- 
http://www.needhamsoftware.com (work)
http://www.the111shift.com (play)
