Hi All,

Voting has ended with:
0 binding +1s
0 non-binding +1s (including mine)
0 binding -1s
0 non-binding -1s

I'm going to proceed and call a new vote on RC1 soon.

Regards,
Gabor

On Wed, May 13, 2020 at 8:17 AM Mukund Madhav Thakur <mtha...@cloudera.com>
wrote:

> Yes, I have azure-auth-keys only, and the abfs tests work fine on the
> current trunk branch with the same azure-auth-keys.xml file.
> FYI: I am running mvn -T 1C -Dparallel-tests=abfs clean verify
> *One thing I noticed here is that in this branch there are no ABFS-related
> classes.*
>
> @Gabor Bota <gabor.b...@cloudera.com>  I also did the packaging of the
> release using
>
> mvn package -Pdist -DskipTests -Dmaven.javadoc.skip=true -DskipShade
> and ran some hadoop fs commands. All good there.
>
> On Wed, May 13, 2020 at 9:10 AM Masatake Iwasaki <
> iwasak...@oss.nttdata.co.jp> wrote:
>
>> > Also I am trying to run the abfs tests but all tests are getting skipped
>> > even if I think I have right auth-keys.xml. I will debug this and
>> update.
>>
>> hadoop-azure expects azure-auth-keys.xml instead of auth-keys.xml?
>> I think it should be consistent with the other FS modules.
>>
>> Masatake Iwasaki
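For anyone else whose abfs tests are all skipped, a minimal sketch of what an azure-auth-keys.xml for the hadoop-azure module typically looks like. The property names below are my recollection of the hadoop-azure test setup, not verified against this branch, and ACCOUNT, CONTAINER, and ACCESS_KEY are placeholders:

```xml
<!-- Hypothetical sketch of hadoop-tools/hadoop-azure/src/test/resources/azure-auth-keys.xml.
     Property names are assumptions; replace ACCOUNT/CONTAINER/ACCESS_KEY
     with real values. Tests are skipped when this file is missing. -->
<configuration>
  <property>
    <name>fs.azure.account.key.ACCOUNT.dfs.core.windows.net</name>
    <value>ACCESS_KEY</value>
  </property>
  <property>
    <name>fs.contract.test.fs.abfs</name>
    <value>abfs://CONTAINER@ACCOUNT.dfs.core.windows.net</value>
  </property>
</configuration>
```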
>>
>> On 2020/05/13 0:33, Mukund Madhav Thakur wrote:
>> > I compiled and ran s3 tests using
>> >
>> > mvn clean verify -Ds3guard -Ddynamo -Dauth. I see some failures. I ran
>> > these separately as well but it still fails for me.
>> >
>> >
>> > Also I am trying to run the abfs tests but all tests are getting skipped
>> > even if I think I have right auth-keys.xml. I will debug this and
>> update.
>> >
>> >
>> >
>> > [*ERROR*] *Failures: *
>> >
>> > [*ERROR*] *
>> >
>> > ITestS3AMiscOperations.testEmptyFileChecksums:147->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
>> > checksums expected:<etag: "4eb50cc3151f8d174a2ff79a7c16277e"> but
>> > was:<etag: "cb190410cc285024b2066360c44c006a">*
>> >
>> > [*ERROR*] *
>> >
>> > ITestS3AMiscOperations.testNonEmptyFileChecksumsUnencrypted:199->Assert.assertEquals:118->Assert.failNotEquals:743->Assert.fail:88
>> > checksums expected:<etag: "90d890f880e286b6d62534dfde5ce720"> but
>> > was:<etag: "e9181a730651153656d3570226667ef0">*
>> >
>> > [*INFO*]
>> >
>> > [*ERROR*] *Tests run: 12, Failures: 2, Errors: 0, Skipped: 0*
>> >
>> >
>> > [*ERROR*] *Errors: *
>> >
>> > [*ERROR*] *
>> >
>> > ITestS3GuardToolDynamoDB.testDynamoDBInitDestroyCycle:224->AbstractS3GuardToolTestBase.exec:286->AbstractS3GuardToolTestBase.exec:308
>> > » AWSServiceIO*
>> >
>> >
>> >
>> >
>> > On Tue, May 5, 2020 at 11:48 PM Steve Loughran
>> <ste...@cloudera.com.invalid>
>> > wrote:
>> >
>> >> mvn -T 1  -Phadoop-3.2 -Dhadoop.version=3.1.4 -Psnapshots-and-staging
>> >> -Phadoop-cloud,yarn,kinesis-asl clean install -DskipTests
>> >>
>> >> Then a test run of the cloud bits
>> >>
>> >> mvn -T 1  -Phadoop-3.2 -Dhadoop.version=3.1.4 -Psnapshots-and-staging
>> >> -Phadoop-cloud,yarn,kinesis-asl test -pl hadoop-cloud
>> >>
>> >> And I got a guava binding stack trace (joy!)
>> >>
>> >>
>> >> CommitterBindingSuite:
>> >> *** RUN ABORTED ***
>> >>    java.lang.NoSuchMethodError:
>> >> com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
>> >>    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
>> >>    at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
>> >>    at org.apache.spark.internal.io.cloud.CommitterBindingSuite.newJob(CommitterBindingSuite.scala:89)
>> >>    at org.apache.spark.internal.io.cloud.CommitterBindingSuite.$anonfun$new$1(CommitterBindingSuite.scala:55)
>> >>    at org.scalatest.OutcomeOf.outcomeOf(OutcomeOf.scala:85)
>> >>    at org.scalatest.OutcomeOf.outcomeOf$(OutcomeOf.scala:83)
>> >>    at org.scalatest.OutcomeOf$.outcomeOf(OutcomeOf.scala:104)
>> >>    at org.scalatest.Transformer.apply(Transformer.scala:22)
>> >>    at org.scalatest.Transformer.apply(Transformer.scala:20)
>> >>    at org.scalatest.FunSuiteLike$$anon$1.apply(FunSuiteLike.scala:186)
>> >>    ...
>> >>
>> >> Fix: spark's guava version needs to be bumped up. There's a bigger
>> >> patch for that, but I've got a minor PR which lets someone change it
>> >> on the maven CLI to match the hadoop version, e.g. -Dguava.version=27.0-jre
>> >>
>> >> https://issues.apache.org/jira/browse/SPARK-31644
>> >>
>> >> There's nothing which can be done except highlight to all that this has
>> >> happened and what that stack trace means.
>> >>
>> >> On Tue, 5 May 2020 at 02:08, Wei-Chiu Chuang
>> <weic...@cloudera.com.invalid
>> >> wrote:
>> >>
>> >>> Gabor, I'm sorry there's a test failure in branch-3.1 HDFS-14599
>> >>> <https://issues.apache.org/jira/browse/HDFS-14599>
>> >>>
>> >>> I just cherrypicked the fix to branch-3.2 and branch-3.1. It's a
>> >>> test-only fix, so technically I could live with it. But it would be
>> >>> best to add the fix to 3.1.4 as well.
>> >>>
>> >>>
>> >>> On Mon, May 4, 2020 at 3:20 PM Gabor Bota <gab...@apache.org> wrote:
>> >>>
>> >>>> Hi folks,
>> >>>>
>> >>>> I have put together a release candidate (RC0) for Hadoop 3.1.4.
>> >>>>
>> >>>> The RC is available at:
>> >>>> http://people.apache.org/~gabota/hadoop-3.1.4-RC0/
>> >>>> The RC tag in git is here:
>> >>>> https://github.com/apache/hadoop/releases/tag/release-3.1.4-RC0
>> >>>> The maven artifacts are staged at
>> >>>> https://repository.apache.org/content/repositories/orgapachehadoop-1266/
>> >>>> You can find my public key at:
>> >>>> https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
>> >>>> and http://keys.gnupg.net/pks/lookup?op=get&search=0xB86249D83539B38C
>> >>>>
>> >>>> Please try the release and vote. The vote will run for 5 weekdays,
>> >>>> until May 11, 2020, 23:00 CET.
>> >>>>
>> >>>> Thanks,
>> >>>> Gabor
>> >>>>
>> >>>> ---------------------------------------------------------------------
>> >>>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> >>>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>> >>>>
>> >>>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
>> For additional commands, e-mail: common-dev-h...@hadoop.apache.org
>>
>>
