[jira] [Created] (HADOOP-15930) Exclude MD5 checksum files from release artifact

2018-11-13 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15930:
--

 Summary: Exclude MD5 checksum files from release artifact
 Key: HADOOP-15930
 URL: https://issues.apache.org/jira/browse/HADOOP-15930
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


The create-release script creates an MD5 checksum file, but it is no longer needed.

https://www.apache.org/dev/release-distribution.html#sigs-and-sums
bq. For new releases, PMCs MUST supply SHA-256 and/or SHA-512; and SHOULD NOT 
supply MD5 or SHA-1. Existing releases do not need to be changed.
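
A hedged sketch of the SHA-512 step that would replace the MD5 one (the artifact here is a stand-in file, not a real release tarball):

```shell
# Create a stand-in artifact, generate its SHA-512 sidecar file, and verify it.
artifact=hadoop-x.y.z.tar.gz
printf 'dummy payload' > "$artifact"
sha512sum "$artifact" > "$artifact.sha512"   # publish this instead of .md5
sha512sum -c "$artifact.sha512"              # prints "hadoop-x.y.z.tar.gz: OK"
```

The same `sha512sum -c` invocation is what downstream users run against the published artifact.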



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[VOTE] Release Apache Hadoop 2.9.2 (RC0)

2018-11-13 Thread Akira Ajisaka
Hi folks,

I have put together a release candidate (RC0) for Hadoop 2.9.2. It
includes 204 bug fixes and improvements since 2.9.1. [1]

The RC is available at http://home.apache.org/~aajisaka/hadoop-2.9.2-RC0/
The signed git tag is release-2.9.2-RC0, and its commit checksum is
826afbeae31ca687bc2f8471dc841b66ed2c6704
The maven artifacts are staged at
https://repository.apache.org/content/repositories/orgapachehadoop-1166/

You can find my public key at:
https://dist.apache.org/repos/dist/release/hadoop/common/KEYS
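
One way a voter can check that the tag resolves to the announced checksum is `git rev-parse <tag>^{}` on a clone; a self-contained sketch on a throwaway repository (names illustrative):

```shell
# On a scratch repo: create a commit, tag it, and resolve the tag back to the
# commit hash -- against the real clone, compare this output with the
# announced checksum 826afbeae31ca687bc2f8471dc841b66ed2c6704.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git -c user.email=dev@example.org -c user.name=dev \
    commit -q --allow-empty -m 'rc commit'
git tag release-2.9.2-RC0
git rev-parse 'release-2.9.2-RC0^{}'   # hash of the commit the tag points at
```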

Please try the release and vote. The vote will run for 5 days.

[1] https://s.apache.org/2.9.2-fixed-jiras

Thanks,
Akira




Re: branch-2.9.2 is almost closed for commit

2018-11-13 Thread Akira Ajisaka
Hi all,

I intended to push the release-2.9.2-RC0 tag but mistakenly pushed all my
local tags with 'git push --tags'. I deleted some of them, but I couldn't
delete the 'rel/release-' tag because all tags starting with 'rel/' are
protected. I'll ask infra to delete this tag.
Sorry for pushing the wrong tags.
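
For the record, the local half of that cleanup can be sketched as follows (tag name illustrative; the remote-side delete of a protected 'rel/' tag is exactly the part that needs infra):

```shell
# Scratch repo: create an accidental tag, then delete it locally.
# The remote-side delete would be:  git push origin :refs/tags/rel/release-oops
# which gitbox rejects for protected 'rel/' tags.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.email=dev@example.org -c user.name=dev \
    commit -q --allow-empty -m 'base'
git tag rel/release-oops     # the accidental tag
git tag -d rel/release-oops  # local delete succeeds
```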

-Akira

2018年11月13日(火) 16:07 Akira Ajisaka :
>
> Recently I hit the gpg-agent cache ttl issue while creating 2.9.2 RC0,
> and the issue was fixed by HADOOP-15923.
> I'll create RC0 and start the vote by the end of this week. Sorry for the 
> delay.
>
> Thanks,
> Akira
> 2018年11月6日(火) 13:51 Akira Ajisaka :
> >
> > Hi folks,
> >
> > Now there is only one critical issue targeted for 2.9.2 (YARN-8233), so
> > I'd like to close branch-2.9.2 except for YARN-8233.
> > I'll create RC0 as soon as YARN-8233 is committed to branch-2.9.2.
> >
> > Thanks,
> > Akira




[jira] [Created] (HADOOP-15929) org.apache.hadoop.ipc.TestIPC fail

2018-11-13 Thread Elaine Ang (JIRA)
Elaine Ang created HADOOP-15929:
---

 Summary: org.apache.hadoop.ipc.TestIPC fail
 Key: HADOOP-15929
 URL: https://issues.apache.org/jira/browse/HADOOP-15929
 Project: Hadoop Common
  Issue Type: Test
  Components: common
Affects Versions: 2.8.5
Reporter: Elaine Ang
 Attachments: org.apache.hadoop.ipc.TestIPC-output.txt

The unit tests for module hadoop-common-project/hadoop-common (version 2.8.5,
checked out from GitHub) failed.

Reproduce:
 # Clone the [Hadoop GitHub repo|https://github.com/apache/hadoop] and check out tag release-2.8.5-RC0
 # Compile:
{noformat}
mvn clean compile{noformat}
 # Run the hadoop-common unit tests:
{noformat}
cd hadoop-common-project/hadoop-common/
mvn test{noformat}

Below is the failed test log when running as a non-root user.

 
{noformat}
Failed tests:
 
TestSymlinkLocalFSFileSystem>TestSymlinkLocalFS.testSetTimesSymlinkToDir:233->SymlinkBaseTest.testSetTimesSymlinkToDir:1395
 expected:<3000> but was:<1542140218000>
 TestIPC.testUserBinding:1495->checkUserBinding:1516
Wanted but not invoked:
socket.bind(OptiPlex/127.0.1.1:0);
-> at org.apache.hadoop.ipc.TestIPC.checkUserBinding(TestIPC.java:1516)

However, there were other interactions with this mock:
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:645)
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:646)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:515)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
-> at org.apache.hadoop.ipc.Client$Connection.closeConnection(Client.java:872)

 TestIPC.testProxyUserBinding:1500->checkUserBinding:1516
Wanted but not invoked:
socket.bind(OptiPlex/127.0.1.1:0);
-> at org.apache.hadoop.ipc.TestIPC.checkUserBinding(TestIPC.java:1516)

However, there were other interactions with this mock:
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:645)
-> at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:646)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:515)
-> at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
-> at 
org.apache.hadoop.ipc.Client$Connection.closeConnection(Client.java:872){noformat}
 

Attached is a more verbose test output:
[^org.apache.hadoop.ipc.TestIPC-output.txt]

Suggestions regarding how to resolve this would be helpful.






YARN-8789

2018-11-13 Thread dam6923
Is anyone able to assist in getting YARN-8789 reviewed and committed?

Thanks!


Re: Hadoop 3.2 Release Plan proposal

2018-11-13 Thread Sunil G
Hi Folks,

All blockers were closed by last weekend, and the corresponding jiras have
been corrected as well. I am preparing RC0 now, but I hit an issue with the
shaded jar file sizes, so I am respinning.
I plan to complete this by the end of this week.

Thanks
Sunil

On Fri, Oct 26, 2018 at 2:34 AM Konstantin Shvachko wrote:

> Another thing is that I see a bunch of jiras under HDFS-8707, which don't
> have the Fix Version field listing 3.2, some have it just empty.
> This means they will not be populated into release notes.
>
> Thanks,
> --Konstantin
>
> On Thu, Oct 25, 2018 at 7:59 AM Sunil G  wrote:
>
> > Thanks Konstantin for pointing out.
> > As 3.2 is pretty much at the RC level, it's better we try to find a good
> > solution to this issue.
> >
> > I'll follow up on this in the jira.
> >
> > - Sunil
> >
> > On Thu, Oct 25, 2018 at 11:35 AM Konstantin Shvachko <shv.had...@gmail.com> wrote:
> >
> >> I've tried to attract attention to an incompatibility issue through the
> >> jira, but it didn't work, so I'm pitching in on this thread.
> >> https://issues.apache.org/jira/browse/HDFS-12026
> >> It introduced binary incompatibility, which will prevent people from
> >> upgrading from 3.1 to 3.2.
> >> I think it can get messy if we release anything with this feature.
> >>
> >> Thanks,
> >> --Konstantin
> >>
> >> On Mon, Oct 22, 2018 at 5:01 AM Steve Loughran wrote:
> >>
> >>> its in.
> >>>
> >>> good catch!
> >>>
> >>> On 20 Oct 2018, at 01:35, Wei-Chiu Chuang <weic...@cloudera.com> wrote:
> >>>
> >>> Thanks Sunil G for driving the release,
> >>> I filed HADOOP-15866 (https://issues.apache.org/jira/browse/HADOOP-15866)
> >>> for a compat fix. If anyone has cycles, please review it, as I think it
> >>> is needed for 3.2.0.
> >>>
> >>> On Thu, Oct 18, 2018 at 4:43 AM Sunil G <sun...@apache.org> wrote:
> >>> Hi Folks,
> >>>
> >>> As we previously communicated for the 3.2.0 release, we have been
> >>> delayed due to a few blockers in our gate.
> >>>
> >>> I just cut branch-3.2.0 for release purposes. branch-3.2 will be open
> >>> for all bug fixes.
> >>>
> >>> - Sunil
> >>>
> >>>
> >>> On Tue, Oct 16, 2018 at 8:59 AM Sunil G <sun...@apache.org> wrote:
> >>>
> >>> > Hi Folks,
> >>> >
> >>> > We are now close to an RC, as the other blocker issues have been merged
> >>> > to trunk and branch-3.2. The last 2 critical issues are close to merging
> >>> > and will be committed in a few hours.
> >>> > With this, I will be creating the 3.2.0 branch today and will go ahead
> >>> > with the RC-related process.
> >>> >
> >>> > - Sunil
> >>> >
> >>> > On Mon, Oct 15, 2018 at 11:43 PM Jonathan Bender <jonben...@stripe.com>
> >>> > wrote:
> >>> >
> >>> >> Hello, were there any updates on the 3.2.0 RC timing? All the current
> >>> >> blockers I see are related to the new Submarine subproject; I wasn't
> >>> >> sure if that is what is holding things up.
> >>> >>
> >>> >> Cheers,
> >>> >> Jon
> >>> >>
> >>> >> On Tue, Oct 2, 2018 at 7:13 PM, Sunil G <sun...@apache.org> wrote:
> >>> >>
> >>> >>> Thanks Robert and Haibo for quickly correcting this.
> >>> >>> Sigh, I somehow missed one file while committing the change. Sorry
> >>> >>> for the trouble.
> >>> >>>
> >>> >>> - Sunil
> >>> >>>
> >>> >>> On Wed, Oct 3, 2018 at 5:22 AM Robert Kanter wrote:
> >>> >>>
> >>> >>> > Looks like there are two that weren't updated:
> >>> >>> > >> [115] 16:32 : hadoop-common (trunk) :: grep "3.2.0-SNAPSHOT" . -r --include=pom.xml
> >>> >>> > ./hadoop-project/pom.xml: 3.2.0-SNAPSHOT
> >>> >>> > ./pom.xml: 3.2.0-SNAPSHOT
> >>> >>> >
> >>> >>> > I've just pushed in an addendum commit to fix those.
> >>> >>> > In the future, please make sure to do a sanity compile when
> >>> >>> > updating poms.
> >>> >>> >
> >>> >>> > thanks
> >>> >>> > - Robert
> >>> >>> >
> >>> >>> > On Tue, Oct 2, 2018 at 11:44 AM Aaron Fabbri <fab...@cloudera.com.invalid> wrote:
> >>> >>> >
> >>> >>> >> Trunk is not building for me. Did you miss a 3.2.0-SNAPSHOT in
> >>> >>> >> the top-level pom.xml?
> >>> >>> >>
> >>> >>> >>
> >>> >>> >> On Tue, Oct 2, 2018 at 10:16 AM Sunil G wrote:
> >>> >>> >>
> >>> >>> >> > Hi All
> >>> >>> >> >
> >>> >>> >> > As mentioned in the earlier mail, I have cut branch-3.2 and reset
> >>> >>> >> > trunk to 3.3.0-SNAPSHOT. I will share the RC details soon, once all
> >>> >>> >> > necessary patches are pulled into branch-3.2.
> >>> >>> >> >
> >>> >>> >> > Thank You
> >>> >>> >> > - Sunil
> >>> >>> >> >
> >>> >>> >> >
> >>> >>> >> > On Mon, Sep 24, 2018 at 2:00 PM Sunil G wrote:
> >>> >>> >> >
> >>> >>> >> > > Hi All
> >>> >>> >> > >
> >>> >>> >> > > We are now down to the last blocker, and HADOOP-15407 is merged
> >>> >>> >> > > to trunk. Thanks for the support.
> >>> >>> >> > >
> 

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-11-13 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/

[Nov 12, 2018 6:27:02 AM] (sunilg) YARN-8902. [CSI] Add volume manager that 
manages CSI volume lifecycle.
[Nov 12, 2018 9:54:41 AM] (elek) HDDS-767. OM should not search for STDOUT root 
logger for audit logging.
[Nov 12, 2018 10:18:23 AM] (wwei) YARN-8987. Usability improvements 
node-attributes CLI. Contributed by 
[Nov 12, 2018 12:58:05 PM] (stevel) HADOOP-15110. Gauges are getting logged in 
exceptions from
[Nov 12, 2018 3:39:30 PM] (sunilg) YARN-8877. [CSI] Extend service spec to 
allow setting resource
[Nov 12, 2018 6:03:11 PM] (nanda) HDDS-576. Move ContainerWithPipeline creation 
to RPC endpoint.
[Nov 12, 2018 6:42:30 PM] (billie) YARN-8776. Implement Container Exec feature 
in LinuxContainerExecutor.
[Nov 12, 2018 10:08:39 PM] (jitendra) HDDS-709. Modify Close Container handling 
sequence on datanodes.
[Nov 12, 2018 11:06:43 PM] (gifuma) YARN-8997. [Submarine] Small refactors of 
modifier, condition check and
[Nov 12, 2018 11:31:42 PM] (arp) HDFS-14065. Failed Storage Locations shows 
nothing in the Datanode
[Nov 13, 2018 12:53:10 AM] (eyang) YARN-8761. Service AM support for 
decommissioning component instances.  




-1 overall


The following subsystems voted -1:
asflicense findbugs hadolint pathlen unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.util.TestDiskCheckerWithDiskIo 
   hadoop.util.TestReadWriteDiskValidator 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.resourcemanager.TestCapacitySchedulerMetrics 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   hadoop.yarn.sls.TestSLSRunner 
   hadoop.yarn.sls.TestSLSStreamAMSynth 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/diff-compile-javac-root.txt
  [324K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/diff-checkstyle-root.txt
  [17M]

   hadolint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/diff-patch-hadolint.txt
  [4.0K]

   pathlen:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/pathlen.txt
  [12K]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/diff-patch-pylint.txt
  [40K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/diff-patch-shellcheck.txt
  [68K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/whitespace-eol.txt
  [9.3M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/whitespace-tabs.txt
  [1.1M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/branch-findbugs-hadoop-hdds_framework.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/branch-findbugs-hadoop-ozone_objectstore-service.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/branch-findbugs-hadoop-ozone_ozone-manager.txt
  [4.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/branch-findbugs-hadoop-ozone_ozonefs.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/956/artifact/out/branch-findbugs-hadoop-ozone_s3gateway.txt
  [4.0K]
   

[VOTE] Release Apache Hadoop Ozone 0.3.0-alpha (RC0)

2018-11-13 Thread Elek, Marton
Hi all,

I've created the first release candidate (RC0) for Apache Hadoop Ozone
0.3.0-alpha according to the plans shared here previously.

This is the second release of Apache Hadoop Ozone. Notable changes since
the first release:

* A new S3-compatible REST server has been added. Ozone can be used with any
S3-compatible tool (HDDS-434)
* Ozone Hadoop file system URL prefix is renamed from o3:// to o3fs://
(HDDS-651)
* Extensive testing and stability improvements of OzoneFs.
* Spark, YARN and Hive support and stability improvements.
* Improved Pipeline handling and recovery.
* Separated/dedicated classpath definitions for all the Ozone
components. (HDDS-447)

The RC artifacts are available from:
https://home.apache.org/~elek/ozone-0.3.0-alpha-rc0/

The RC tag in git is: ozone-0.3.0-alpha-RC0 (dc661083683)

Please try it out, vote, or just give us feedback.

The vote will run for 5 days, ending on November 18, 2018 13:00 UTC.


Thank you very much,
Marton

PS:

The easiest way to try it out is:

1. Download the binary artifact
2. Read the docs from ./docs/index.html
3. TLDR; cd compose/ozone && docker-compose up -d
4. open localhost:9874 or localhost:9876



The easiest way to try it out from the source:

1. mvn  install -DskipTests -Pdist -Dmaven.javadoc.skip=true -Phdds
-DskipShade -am -pl :hadoop-ozone-dist
2. cd hadoop-ozone/dist/target/ozone-0.3.0-alpha && docker-compose up -d



The easiest way to test basic functionality (with acceptance tests):

1. mvn  install -DskipTests -Pdist -Dmaven.javadoc.skip=true -Phdds
-DskipShade -am -pl :hadoop-ozone-dist
2. cd hadoop-ozone/dist/target/ozone-0.3.0-alpha/smoketest
3. ./test.sh




[jira] [Resolved] (HADOOP-9973) wrong dependencies

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9973.

Resolution: Won't Fix

I think I'm going to resolve this as a wontfix, I'm afraid, on account of the
age of the JIRA. That said, "wrong dependencies" is probably an eternal JIRA,
the dark twin of HADOOP-9991.

> wrong dependencies
> --
>
> Key: HADOOP-9973
> URL: https://issues.apache.org/jira/browse/HADOOP-9973
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta, 2.1.1-beta
>Reporter: Nicolas Liochon
>Priority: Minor
>
> See HBASE-9557 for the impact: for some of them, it seems it's pushing these 
> dependencies to the client applications even if they are not used.
> mvn dependency:analyze -pl hadoop-common
> [WARNING] Used undeclared dependencies found:
> [WARNING]com.google.code.findbugs:jsr305:jar:1.3.9:compile
> [WARNING]commons-collections:commons-collections:jar:3.2.1:compile
> [WARNING] Unused declared dependencies found:
> [WARNING]com.sun.jersey:jersey-json:jar:1.9:compile
> [WARNING]tomcat:jasper-compiler:jar:5.5.23:runtime
> [WARNING]tomcat:jasper-runtime:jar:5.5.23:runtime
> [WARNING]javax.servlet.jsp:jsp-api:jar:2.1:runtime
> [WARNING]commons-el:commons-el:jar:1.0:runtime
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.5:runtime
> mvn dependency:analyze -pl hadoop-yarn-client
> [WARNING] Used undeclared dependencies found:
> [WARNING]org.mortbay.jetty:jetty-util:jar:6.1.26:provided
> [WARNING]log4j:log4j:jar:1.2.17:compile
> [WARNING]com.google.guava:guava:jar:11.0.2:provided
> [WARNING]commons-lang:commons-lang:jar:2.5:provided
> [WARNING]commons-logging:commons-logging:jar:1.1.1:provided
> [WARNING]commons-cli:commons-cli:jar:1.2:provided
> [WARNING]
> org.apache.hadoop:hadoop-yarn-server-common:jar:2.1.2-SNAPSHOT:test
> [WARNING] Unused declared dependencies found:
> [WARNING]org.slf4j:slf4j-api:jar:1.7.5:compile
> [WARNING]org.slf4j:slf4j-log4j12:jar:1.7.5:compile
> [WARNING]com.google.inject.extensions:guice-servlet:jar:3.0:compile
> [WARNING]io.netty:netty:jar:3.6.2.Final:compile
> [WARNING]com.google.protobuf:protobuf-java:jar:2.5.0:compile
> [WARNING]commons-io:commons-io:jar:2.1:compile
> [WARNING]org.apache.hadoop:hadoop-hdfs:jar:2.1.2-SNAPSHOT:test
> [WARNING]com.google.inject:guice:jar:3.0:compile
> [WARNING]
> com.sun.jersey.jersey-test-framework:jersey-test-framework-core:jar:1.9:test
> [WARNING]
> com.sun.jersey.jersey-test-framework:jersey-test-framework-grizzly2:jar:1.9:compile
> [WARNING]com.sun.jersey:jersey-server:jar:1.9:compile
> [WARNING]com.sun.jersey:jersey-json:jar:1.9:compile
> [WARNING]com.sun.jersey.contribs:jersey-guice:jar:1.9:compile






[jira] [Resolved] (HADOOP-9911) hadoop 2.1.0-beta tarball only contains 32bit native libraries

2018-11-13 Thread Steve Loughran (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-9911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-9911.

Resolution: Won't Fix

I think this isn't going to be fixed.

> hadoop 2.1.0-beta tarball only contains 32bit native libraries
> --
>
> Key: HADOOP-9911
> URL: https://issues.apache.org/jira/browse/HADOOP-9911
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.1.0-beta, 2.2.0
>Reporter: André Kelpe
>Priority: Major
>
> I am setting up a cluster on 64-bit Linux and I noticed that the tarball
> only ships with 32-bit libraries:
> $ pwd
> /opt/hadoop-2.1.0-beta/lib/native
> $ ls -al
> total 2376
> drwxr-xr-x 2 67974 users   4096 Aug 15 20:59 .
> drwxr-xr-x 3 67974 users   4096 Aug 15 20:59 ..
> -rw-r--r-- 1 67974 users 598578 Aug 15 20:59 libhadoop.a
> -rw-r--r-- 1 67974 users 764772 Aug 15 20:59 libhadooppipes.a
> lrwxrwxrwx 1 67974 users 18 Aug 15 20:59 libhadoop.so -> 
> libhadoop.so.1.0.0
> -rwxr-xr-x 1 67974 users 407568 Aug 15 20:59 libhadoop.so.1.0.0
> -rw-r--r-- 1 67974 users 304632 Aug 15 20:59 libhadooputils.a
> -rw-r--r-- 1 67974 users 184414 Aug 15 20:59 libhdfs.a
> lrwxrwxrwx 1 67974 users 16 Aug 15 20:59 libhdfs.so -> libhdfs.so.0.0.0
> -rwxr-xr-x 1 67974 users 149556 Aug 15 20:59 libhdfs.so.0.0.0
> $ file *
> libhadoop.a:current ar archive
> libhadooppipes.a:   current ar archive
> libhadoop.so:   symbolic link to `libhadoop.so.1.0.0'
> libhadoop.so.1.0.0: ELF 32-bit LSB shared object, Intel 80386, version 1 
> (SYSV), dynamically linked, 
> BuildID[sha1]=0x527e88ec3e92a95389839bd3fc9d7dbdebf654d6, not stripped
> libhadooputils.a:   current ar archive
> libhdfs.a:  current ar archive
> libhdfs.so: symbolic link to `libhdfs.so.0.0.0'
> libhdfs.so.0.0.0:   ELF 32-bit LSB shared object, Intel 80386, version 1 
> (SYSV), dynamically linked, 
> BuildID[sha1]=0xddb2abae9272f584edbe22c76b43fcda9436f685, not stripped
> $ hadoop checknative
> 13/08/28 18:11:17 WARN util.NativeCodeLoader: Unable to load native-hadoop 
> library for your platform... using builtin-java classes where applicable
> Native library checking:
> hadoop: false 
> zlib:   false 
> snappy: false 
> lz4:false 
> bzip2:  false 
> 13/08/28 18:11:17 INFO util.ExitUtil: Exiting with status 1






[jira] [Created] (HADOOP-15927) Add @threadSafe annotation to hadoop-maven-plugins to enable Maven parallel build

2018-11-13 Thread Akira Ajisaka (JIRA)
Akira Ajisaka created HADOOP-15927:
--

 Summary: Add @threadSafe annotation to hadoop-maven-plugins to 
enable Maven parallel build
 Key: HADOOP-15927
 URL: https://issues.apache.org/jira/browse/HADOOP-15927
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira Ajisaka


Maven 3.x can build modules in parallel:
https://cwiki.apache.org/confluence/display/MAVEN/Parallel+builds+in+Maven+3
When trying this feature, I got the following warning:
{noformat}
[WARNING] *
[WARNING] * Your build is requesting parallel execution, but project  *
[WARNING] * contains the following plugin(s) that have goals not marked   *
[WARNING] * as @threadSafe to support parallel building.  *
[WARNING] * While this /may/ work fine, please look for plugin updates*
[WARNING] * and/or request plugins be made thread-safe.   *
[WARNING] * If reporting an issue, report it against the plugin in*
[WARNING] * question, not against maven-core  *
[WARNING] *
[WARNING] The following plugins are not marked @threadSafe in Apache Hadoop 
Common:
[WARNING] org.apache.hadoop:hadoop-maven-plugins:3.3.0-SNAPSHOT
[WARNING] Enable debug to see more precisely which goals are not marked 
@threadSafe.
[WARNING] *
{noformat}
Let's mark hadoop-maven-plugins as @threadSafe to remove the warning.
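
For reference, parallel builds are requested with the -T flag (thread counts illustrative; this is a usage sketch, not a recommended setting):

```shell
# Build with 4 threads, or with one thread per CPU core.
mvn -T 4 clean install -DskipTests
mvn -T 1C clean install -DskipTests
```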


