Hi Jonathan,

I have updated the artifacts, so now

https://repository.apache.org/#nexus-search;gav~org.apache.hadoop~~3.0.2~~
https://repository.apache.org/#nexus-search;gav~org.apache.hadoop~~3.0.3~~

are consistent with each other, except that 3.0.3 has an extra entry for rbf.
Would you please try again?
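
In case it helps with the retry: here is a rough sketch of how to force Maven to
resolve the 3.0.3 parent pom straight from the ASF releases repository (this just
uses the stock maven-dependency-plugin goal; the coordinates below are only an
example, not the full list of modules):

  mvn dependency:get \
    -Dartifact=org.apache.hadoop:hadoop-project:3.0.3:pom \
    -DremoteRepositories=https://repository.apache.org/content/repositories/releases

If that command succeeds, the artifacts should be resolvable by downstream builds.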

The propagation to
https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-project
will take some time. I did nothing differently from last time, so fingers
crossed that it will propagate there.

Thanks to Sammi Chen and Andrew Wang for the info and advice, and sorry again
for the inconvenience.

Best,

--Yongjun

On Mon, Jul 2, 2018 at 9:30 AM, Jonathan Eagles <jeag...@gmail.com> wrote:

> Release 3.0.3 is still broken due to the missing artifacts. Any update on
> when these artifacts will be published?
>
> On Wed, Jun 27, 2018 at 8:25 PM, Chen, Sammi <sammi.c...@intel.com> wrote:
>
>> Hi Yongjun,
>>
>>
>>
>>
>>
>> The artifacts will be pushed to
>> https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-project
>> after step 6 of the Publishing steps.
>>
>>
>> For 2.9.1, I remember I definitely did that step before. I redid step 6
>> today, and now 2.9.1 is pushed to the mvn repo.
>>
>> You can double-check it. I suspect Nexus may sometimes fail to notify the
>> user when there are unexpected failures.
>>
>>
>>
>>
>>
>> Bests,
>>
>> Sammi
>>
>> *From:* Yongjun Zhang [mailto:yzh...@cloudera.com]
>> *Sent:* Sunday, June 17, 2018 12:17 PM
>> *To:* Jonathan Eagles <jeag...@gmail.com>; Chen, Sammi <
>> sammi.c...@intel.com>
>> *Cc:* Eric Payne <erichadoo...@yahoo.com>; Hadoop Common <
>> common-dev@hadoop.apache.org>; Hdfs-dev <hdfs-...@hadoop.apache.org>;
>> mapreduce-...@hadoop.apache.org; yarn-...@hadoop.apache.org
>> *Subject:* Re: [VOTE] Release Apache Hadoop 3.0.3 (RC0)
>>
>>
>>
>> + Junping, Sammi
>>
>>
>>
>> Hi Jonathan,
>>
>>
>>
>> Many thanks for reporting the issues and sorry for the inconvenience.
>>
>>
>>
>> 1. Shouldn't the build be looking for artifacts in
>>
>>
>>
>> https://repository.apache.org/content/repositories/releases
>>
>> rather than
>>
>>
>>
>> https://repository.apache.org/content/repositories/snapshots ?
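>>
>> (In case it is useful for narrowing this down, a quick way to check which of the
>> two repositories actually serves the pom, assuming the standard Maven repository
>> layout, is a HEAD request against each:
>>
>>   curl -sI https://repository.apache.org/content/repositories/releases/org/apache/hadoop/hadoop-project/3.0.3/hadoop-project-3.0.3.pom
>>   curl -sI https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-project/3.0.3/hadoop-project-3.0.3.pom
>>
>> An HTTP 200 from the first and a 404 from the second would mean the build just
>> needs to look in releases rather than snapshots.)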
>>
>>
>>
>> 2.
>>
>> Not seeing the artifact published here either.
>>
>> https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-project
>>
>>
>>
>> Indeed, I did not see 2.9.1 there either, so I have included Sammi Chen.
>>
>>
>>
>> Hi Junping, would you please share which step in
>>
>> https://wiki.apache.org/hadoop/HowToRelease
>>
>> should have published these artifacts?
>>
>>
>>
>> Thanks a lot.
>>
>>
>>
>> --Yongjun
>>
>>
>>
>> On Fri, Jun 15, 2018 at 10:52 PM, Jonathan Eagles <jeag...@gmail.com>
>> wrote:
>>
>> Upgraded the Tez dependency to Hadoop 3.0.3 and hit this issue. Anyone else
>> seeing it?
>>
>>
>>
>> [ERROR] Failed to execute goal on project hadoop-shim: Could not resolve
>> dependencies for project org.apache.tez:hadoop-shim:jar:0.10.0-SNAPSHOT:
>> Failed to collect dependencies at org.apache.hadoop:hadoop-yarn-api:jar:3.0.3:
>> Failed to read artifact descriptor for org.apache.hadoop:hadoop-yarn-api:jar:3.0.3:
>> Could not find artifact org.apache.hadoop:hadoop-project:pom:3.0.3 in
>> apache.snapshots.https
>> (https://repository.apache.org/content/repositories/snapshots) -> [Help 1]
>>
>> [ERROR]
>>
>> [ERROR] To see the full stack trace of the errors, re-run Maven with the
>> -e switch.
>>
>> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
>>
>> [ERROR]
>>
>> [ERROR] For more information about the errors and possible solutions,
>> please read the following articles:
>>
>> [ERROR] [Help 1]
>> http://cwiki.apache.org/confluence/display/MAVEN/DependencyResolutionException
>>
>> [ERROR]
>>
>> [ERROR] After correcting the problems, you can resume the build with the
>> command
>>
>> [ERROR]   mvn <goals> -rf :hadoop-shim
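>>
>> Side note, in case it saves a retry loop: once hadoop-project:3.0.3 is actually
>> published, Maven may still hold a cached "not found" result for it, so a forced
>> re-resolution is probably needed. A rough sketch, assuming the default local
>> repository location:
>>
>>   # drop the cached failed lookup, then make Maven re-check the remote repos
>>   rm -rf ~/.m2/repository/org/apache/hadoop/hadoop-project/3.0.3
>>   mvn -U clean install -rf :hadoop-shim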
>>
>>
>>
>> Not seeing the artifact published here either.
>>
>> https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-project
>>
>>
>>
>> On Tue, Jun 12, 2018 at 6:44 PM, Yongjun Zhang <yzh...@cloudera.com>
>> wrote:
>>
>> Thanks Eric!
>>
>> --Yongjun
>>
>>
>> On Mon, Jun 11, 2018 at 8:05 AM, Eric Payne <erichadoo...@yahoo.com>
>> wrote:
>>
>> > Sorry, Yongjun. My +1 is also binding
>> > +1 (binding)
>> > -Eric Payne
>> >
>> > On Friday, June 1, 2018, 12:25:36 PM CDT, Eric Payne <
>> > eric.payne1...@yahoo.com> wrote:
>> >
>> >
>> >
>> >
>> > Thanks a lot, Yongjun, for your hard work on this release.
>> >
>> > +1
>> > - Built from source
>> > - Installed on 6 node pseudo cluster
>> >
>> >
>> > Tested the following in the Capacity Scheduler:
>> > - Verified that running apps in labelled queues restricts tasks to the
>> > labelled nodes.
>> > - Verified that various queue config properties for CS are refreshable
>> > - Verified streaming jobs work as expected
>> > - Verified that user weights work as expected
>> > - Verified that FairOrderingPolicy in a CS queue will evenly assign
>> > resources
>> > - Verified that running a YARN shell application works as expected
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> > On Friday, June 1, 2018, 12:48:26 AM CDT, Yongjun Zhang <
>> > yjzhan...@apache.org> wrote:
>> >
>> >
>> >
>> >
>> >
>> > Greetings all,
>> >
>> > I've created the first release candidate (RC0) for Apache Hadoop 3.0.3.
>> > This is our next maintenance release, following up on 3.0.2. It includes
>> > about 249 important fixes and improvements, 8 of which are blockers. See
>> > https://issues.apache.org/jira/issues/?filter=12343997
>> >
>> > The RC artifacts are available at:
>> > https://dist.apache.org/repos/dist/dev/hadoop/3.0.3-RC0/
>> >
>> > The maven artifacts are available via
>> > https://repository.apache.org/content/repositories/orgapachehadoop-1126
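>> >
>> > If you want to test the Maven artifacts before they are promoted, a minimal
>> > sketch for pulling one of them directly from that staging repository (pick
>> > whichever module you actually depend on; hadoop-yarn-api here is just an
>> > example):
>> >
>> >   mvn dependency:get \
>> >     -Dartifact=org.apache.hadoop:hadoop-yarn-api:3.0.3 \
>> >     -DremoteRepositories=https://repository.apache.org/content/repositories/orgapachehadoop-1126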
>> >
>> > Please try the release and vote; the vote will run for the usual 5 working
>> > days, ending on 06/07/2018 (PST). I would really appreciate your
>> > participation.
>> >
>> > I bumped into quite a few issues along the way; many thanks to the people
>> > who helped, especially Sammi Chen, Andrew Wang, Junping Du, and Eddy Xu.
>> >
>> > Thanks,
>> >
>> > --Yongjun
>> >
>>
>>
>>
>>
>>
>
>
