[jira] [Created] (HDDS-50) EventQueue: Add a priority based execution model for events in eventqueue.

2018-05-11 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-50:
-

 Summary: EventQueue: Add a priority based execution model for 
events in eventqueue.
 Key: HDDS-50
 URL: https://issues.apache.org/jira/browse/HDDS-50
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Affects Versions: 0.2.1
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: 0.2.1


Currently, all events in SCM are executed with the same priority. This jira 
will add a priority-based execution model in which an event's "niceness" value 
determines the priority with which the event is executed.
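The niceness idea above can be sketched with a JDK PriorityBlockingQueue; the class and field names below (NicenessEventQueue, niceness) are illustrative assumptions, not the actual SCM EventQueue API:

```java
import java.util.concurrent.PriorityBlockingQueue;

/**
 * Minimal sketch of a niceness-based event queue: events with a lower
 * niceness value are dequeued, and therefore executed, first.
 */
public class NicenessEventQueue {

  /** An event tagged with a niceness value; lower runs first. */
  public static final class Event implements Comparable<Event> {
    final String name;
    final int niceness;

    public Event(String name, int niceness) {
      this.name = name;
      this.niceness = niceness;
    }

    @Override
    public int compareTo(Event other) {
      // Natural ordering by niceness drives the queue's priority.
      return Integer.compare(this.niceness, other.niceness);
    }
  }

  private final PriorityBlockingQueue<Event> queue =
      new PriorityBlockingQueue<>();

  public void fireEvent(Event e) {
    queue.put(e);
  }

  /** Returns the lowest-niceness event, or null if the queue is empty. */
  public Event poll() {
    return queue.poll();
  }
}
```

Executor threads draining such a queue would naturally run high-priority (low-niceness) events first, which is the behavior the jira describes.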



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDDS-49) Standalone protocol should use grpc in place of netty.

2018-05-11 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HDDS-49:
-

 Summary: Standalone protocol should use grpc in place of netty.
 Key: HDDS-49
 URL: https://issues.apache.org/jira/browse/HDDS-49
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Datanode
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
 Fix For: 0.2.1


Currently, an Ozone client in standalone mode communicates with the datanode 
over Netty. However, when using Ratis, gRPC is the default protocol.

In order to reduce the number of RPC protocols and their handling, this jira 
aims to convert the standalone protocol to use gRPC.
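The consolidation can be pictured with a hypothetical transport-neutral client factory; the names below (TransportSketch, forPipeline) are illustrative, not the actual Ozone client classes. After the change, both the standalone and Ratis paths resolve to the same gRPC transport, leaving a single protocol stack to maintain:

```java
/**
 * Illustrative only: one client abstraction so that both the
 * standalone and Ratis pipelines share a single wire protocol.
 */
public class TransportSketch {

  /** Wire protocols a datanode client might speak. */
  public enum Transport { GRPC, NETTY }

  interface ContainerClient {
    Transport transport();
  }

  /**
   * Previously: standalone -> NETTY, Ratis -> GRPC.
   * After this jira, both pipelines use gRPC.
   */
  public static ContainerClient forPipeline(boolean useRatis) {
    return () -> Transport.GRPC;
  }
}
```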






[jira] [Resolved] (HDDS-41) Ozone: C/C++ implementation of ozone client using curl

2018-05-11 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDDS-41?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer resolved HDDS-41.
--
Resolution: Incomplete

> Ozone: C/C++ implementation of ozone client using curl
> --
>
> Key: HDDS-41
> URL: https://issues.apache.org/jira/browse/HDDS-41
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Native
>Affects Versions: 0.2.1
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: OzonePostMerge
> Fix For: 0.2.1
>
> Attachments: HDFS-12340-HDFS-7240.001.patch, 
> HDFS-12340-HDFS-7240.002.patch, HDFS-12340-HDFS-7240.003.patch, 
> HDFS-12340-HDFS-7240.004.patch, main.C, ozoneClient.C, ozoneClient.h
>
>
> This jira introduces a C/C++ implementation of the Ozone client using the 
> curl library.
> All these calls will use the HTTP protocol and require libcurl. The libcurl 
> API is documented here:
> https://curl.haxx.se/libcurl/
> Additional details will be posted along with the patches.






[jira] [Created] (HDDS-48) ContainerIO - Storage Management

2018-05-11 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HDDS-48:
--

 Summary: ContainerIO - Storage Management
 Key: HDDS-48
 URL: https://issues.apache.org/jira/browse/HDDS-48
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
 Attachments: ContainerIO-StorageManagement-DesignDoc.pdf

We propose refactoring the HDDS DataNode IO path to enforce clean separation 
between the Container management and the Storage layers. All components 
requiring access to HDDS containers on a Datanode should do so via this Storage 
layer.

The proposed Storage layer would be responsible for end-to-end disk and volume 
management. This involves running disk checks and detecting disk failures, 
distributing data across disks as per the configured policy, collecting 
performance statistics and verifying the integrity of the data. 

The attached design doc gives an overview of the proposed class diagram.
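As a rough illustration of that separation, the responsibilities listed above could sit behind a single interface that container code calls instead of touching disks directly. StorageLayer, VolumeHealth, and RoundRobinStorage below are hypothetical names, not taken from the attached design doc:

```java
import java.util.List;

/**
 * Sketch of the proposed Storage layer: container management code
 * depends on this interface, never on raw disk paths.
 */
public class StorageLayerSketch {

  public enum VolumeHealth { HEALTHY, FAILED }

  /** The responsibilities from the description, as one contract. */
  public interface StorageLayer {
    VolumeHealth checkVolume(String volumePath);        // disk checks / failure detection
    String chooseVolume(List<String> volumes);          // placement policy
    boolean verifyChecksum(byte[] data, long expected); // data integrity
  }

  /** Trivial round-robin implementation to make the contract concrete. */
  public static class RoundRobinStorage implements StorageLayer {
    private int next = 0;

    @Override
    public VolumeHealth checkVolume(String volumePath) {
      return VolumeHealth.HEALTHY; // a real check would probe the disk
    }

    @Override
    public String chooseVolume(List<String> volumes) {
      return volumes.get(next++ % volumes.size()); // spread data across disks
    }

    @Override
    public boolean verifyChecksum(byte[] data, long expected) {
      long sum = 0;
      for (byte b : data) sum += b & 0xff; // stand-in for a real checksum
      return sum == expected;
    }
  }
}
```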






[jira] [Created] (HDFS-13547) Add ingress port based sasl resolver

2018-05-11 Thread Chen Liang (JIRA)
Chen Liang created HDFS-13547:
-

 Summary: Add ingress port based sasl resolver
 Key: HDFS-13547
 URL: https://issues.apache.org/jira/browse/HDFS-13547
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: security
Reporter: Chen Liang
Assignee: Chen Liang


This jira extends the SASL properties resolver interface to take an ingress 
port parameter, and also adds an implementation based on it.
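A minimal sketch of the idea, assuming a hypothetical per-port QOP map rather than Hadoop's actual SaslPropertiesResolver API:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Illustrative port-aware SASL properties resolver: the QOP returned
 * depends on which server port the client connected to.
 */
public class IngressPortSaslSketch {

  /** QOP values as used by javax.security.sasl: "auth", "auth-int", "auth-conf". */
  private final Map<Integer, String> qopByPort = new HashMap<>();
  private final String defaultQop;

  public IngressPortSaslSketch(String defaultQop) {
    this.defaultQop = defaultQop;
  }

  public void setQopForPort(int port, String qop) {
    qopByPort.put(port, qop);
  }

  /** Resolve SASL properties based on the ingress port. */
  public Map<String, String> getServerProperties(int ingressPort) {
    Map<String, String> props = new HashMap<>();
    props.put("javax.security.sasl.qop",
        qopByPort.getOrDefault(ingressPort, defaultQop));
    return props;
  }
}
```

This lets, for example, one NameNode port require full encryption ("auth-conf") while another accepts authentication only.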






Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-05-11 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/464/

[May 10, 2018 4:31:59 PM] (rkanter) YARN-8202. DefaultAMSProcessor should 
properly check units of requested
[May 10, 2018 4:41:16 PM] (inigoiri) HADOOP-15454. 
TestRollingFileSystemSinkWithLocal fails on Windows.
[May 10, 2018 5:46:55 PM] (haibochen) MAPREDUCE-7095. Race conditions in 
closing FadvisedChunkedFile. (Miklos
[May 10, 2018 6:01:01 PM] (haibochen) YARN-7715. Support NM promotion/demotion 
of running containers. (Miklos
[May 10, 2018 6:36:25 PM] (aengineer) HDDS-30. Fix TestContainerSQLCli. 
Contributed by Shashikant Banerjee.
[May 10, 2018 6:44:14 PM] (aengineer) HDDS-42. Inconsistent module names and 
descriptions. Contributed by Tsz
[May 10, 2018 7:43:13 PM] (aengineer) HDDS-31. Fix TestSCMCli. Contributed by 
Lokesh Jain.
[May 10, 2018 9:49:58 PM] (xyao) HDDS-16. Remove Pipeline from Datanode 
Container Protocol protobuf
[May 10, 2018 11:27:21 PM] (aengineer) HDDS-37. Remove dependency of 
hadoop-hdds-common and
[May 11, 2018 12:08:26 AM] (aengineer) HDDS-34. Remove .meta file during 
creation of container Contributed by
[May 11, 2018 12:24:40 AM] (Bharat) HDDS-43: Rename hdsl to hdds in 
hadoop-ozone/acceptance-test/README.md.
[May 11, 2018 2:05:35 AM] (vinodkv) YARN-8249. Fixed few REST APIs in 
RMWebServices to have static-user
[May 11, 2018 2:47:04 AM] (wwei) YARN-7003. DRAINING state of queues is not 
recovered after RM restart.
[May 11, 2018 6:51:30 AM] (yqlin) HDFS-13346. RBF: Fix synchronization of 
router quota and nameservice




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


Specific tests:

Failed junit tests :

   hadoop.security.authentication.util.TestZKSignerSecretProvider 
   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestDelegationTokenRenewer 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.fs.TestLocalFSFileContextMainOperations 
   hadoop.fs.TestRawLocalFileSystemContract 
   hadoop.fs.TestTrash 
   hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.security.TestGroupsCaching 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.delegation.TestZKDelegationTokenSecretManager 
   hadoop.security.token.TestDtUtilShell 
   hadoop.util.curator.TestChildReaper 
   hadoop.util.TestNativeCodeLoader 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.fs.http.server.TestHttpFSServerWebServer 
   hadoop.hdfs.nfs.nfs3.TestRpcProgramNfs3 
   hadoop.hdfs.nfs.nfs3.TestWrites 
   hadoop.fs.contract.router.TestRouterHDFSContractOpen 
   hadoop.fs.contract.router.TestRouterHDFSContractRename 
   hadoop.fs.contract.router.TestRouterHDFSContractRootDirectory 
   hadoop.fs.contract.router.TestRouterHDFSContractSeek 
   hadoop.fs.contract.router.TestRouterHDFSContractSetTimes 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractAppend 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractConcat 
   hadoop.fs.contract.router.web.TestRouterWebHDFSContractCreate 
   hadoop.yarn.server.webproxy.amfilter.TestAmFilter 
  

   mvninstall:

   
https://builds.apache.org/job/hadoop-trunk-win/464/artifact/out/patch-mvninstall-root.txt
  [824K]

   compile:

   
https://builds.apache.org/job/hadoop-trunk-win/464/artifact/out/patch-compile-root.txt
  [272K]

   cc:

   
https://builds.apache.org/job/hadoop-trunk-win/464/artifact/out/patch-compile-root.txt
  [272K]

   javac:

   
https://builds.apache.org/job/hadoop-trunk-win/464/artifact/out/patch-compile-root.txt
  [272K]

   unit:

   
https://builds.apache.org/job/hadoop-trunk-win/464/artifact/out/patch-unit-hadoop-common-project_hadoop-auth.txt
  [28K]
   
https://builds.apache.org/job/hadoop-trunk-win/464/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [372K]
   
https://builds.apache.org/job/hadoop-trunk-win/464/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [740K]
   
https://builds.apache.org/job/hadoop-trunk-win/464/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-native-client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-trunk-win/464/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt
  [20K]
   
https://builds.apache.org/job/hadoop-trunk-win/464/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-nfs.txt
  [16K]
   

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2018-05-11 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/

[May 10, 2018 3:45:46 AM] (bibinchundatt) YARN-8201. Skip stacktrace of few 
exception from ClientRMService.
[May 10, 2018 5:17:48 AM] (vrushali) YARN-8247 Incorrect HTTP status code 
returned by ATSv2 for
[May 10, 2018 6:00:13 AM] (aengineer) HDDS-18. Ozone Shell should use 
RestClient and RpcClient. Contributed by
[May 10, 2018 9:38:08 AM] (aajisaka) HADOOP-15354. hadoop-aliyun & hadoop-azure 
modules to mark hadoop-common
[May 10, 2018 4:31:59 PM] (rkanter) YARN-8202. DefaultAMSProcessor should 
properly check units of requested
[May 10, 2018 4:41:16 PM] (inigoiri) HADOOP-15454. 
TestRollingFileSystemSinkWithLocal fails on Windows.
[May 10, 2018 5:46:55 PM] (haibochen) MAPREDUCE-7095. Race conditions in 
closing FadvisedChunkedFile. (Miklos
[May 10, 2018 6:01:01 PM] (haibochen) YARN-7715. Support NM promotion/demotion 
of running containers. (Miklos
[May 10, 2018 6:36:25 PM] (aengineer) HDDS-30. Fix TestContainerSQLCli. 
Contributed by Shashikant Banerjee.
[May 10, 2018 6:44:14 PM] (aengineer) HDDS-42. Inconsistent module names and 
descriptions. Contributed by Tsz
[May 10, 2018 7:43:13 PM] (aengineer) HDDS-31. Fix TestSCMCli. Contributed by 
Lokesh Jain.
[May 10, 2018 9:49:58 PM] (xyao) HDDS-16. Remove Pipeline from Datanode 
Container Protocol protobuf
[May 10, 2018 11:27:21 PM] (aengineer) HDDS-37. Remove dependency of 
hadoop-hdds-common and
[May 11, 2018 12:08:26 AM] (aengineer) HDDS-34. Remove .meta file during 
creation of container Contributed by
[May 11, 2018 12:24:40 AM] (Bharat) HDDS-43: Rename hdsl to hdds in 
hadoop-ozone/acceptance-test/README.md.




-1 overall


The following subsystems voted -1:
asflicense compile findbugs unit xml


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-hdds/common 
   Found reliance on default encoding in 
org.apache.hadoop.utils.MetadataKeyFilters$KeyPrefixFilter.filterKey(byte[], 
byte[], byte[]):in 
org.apache.hadoop.utils.MetadataKeyFilters$KeyPrefixFilter.filterKey(byte[], 
byte[], byte[]): String.getBytes() At MetadataKeyFilters.java:[line 97] 

Failed junit tests :

   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.TestRollingUpgrade 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   
hadoop.yarn.server.resourcemanager.reservation.TestCapacityOverTimePolicy 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/patch-compile-root.txt
  [572K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/patch-compile-root.txt
  [572K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/patch-compile-root.txt
  [572K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/diff-checkstyle-root.txt
  [17M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/diff-patch-pylint.txt
  [24K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/diff-patch-shelldocs.txt
  [12K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/whitespace-eol.txt
  [9.4M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/whitespace-tabs.txt
  [1.1M]

   xml:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/xml.txt
  [4.0K]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/branch-findbugs-hadoop-hdds_client.txt
  [28K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/branch-findbugs-hadoop-hdds_common-warnings.html
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/branch-findbugs-hadoop-hdds_container-service.txt
  [36K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/branch-findbugs-hadoop-hdds_server-scm.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/branch-findbugs-hadoop-hdds_tools.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/branch-findbugs-hadoop-ozone_client.txt
  [8.0K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/778/artifact/out/branch-findbugs-hadoop-ozone_common.txt
  

[jira] [Created] (HDDS-47) Add acceptance tests for Ozone Shell

2018-05-11 Thread Lokesh Jain (JIRA)
Lokesh Jain created HDDS-47:
---

 Summary: Add acceptance tests for Ozone Shell
 Key: HDDS-47
 URL: https://issues.apache.org/jira/browse/HDDS-47
 Project: Hadoop Distributed Data Store
  Issue Type: Test
Reporter: Lokesh Jain
Assignee: Lokesh Jain









RE: [VOTE] Release Apache Hadoop 2.8.4 (RC0)

2018-05-11 Thread Chen, Sammi
Thanks Junping for driving the release!

+1 (binding)
 
   - Download src tarball, verified checksums, verified signature
   - Build successfully from the source code 
   - Start a pseudo distributed hdfs and YARN cluster on Mac
   - verify hadoop runtime version
   - Check NN, RM webUI
   - Run Pi , wordcount
   - Verify basic hdfs operations
   - Verify basic HDFS storage policy commands
   - Verify copy/download file between HDFS and local filesystem 

Bests,
Sammi

-Original Message-
From: Zsolt Venczel [mailto:zvenc...@cloudera.com] 
Sent: Thursday, May 10, 2018 9:08 PM
To: Takanobu Asanuma 
Cc: Gabor Bota ; junping...@apache.org; 
ajay.ku...@hortonworks.com; Hadoop Common ; 
Hdfs-dev ; mapreduce-...@hadoop.apache.org; 
yarn-...@hadoop.apache.org
Subject: Re: [VOTE] Release Apache Hadoop 2.8.4 (RC0)

Thanks Junping for working on this!

+1 (non-binding)
  - checked out git tag release-2.8.4-RC0
  - successfully run "mvn clean package -Pdist,native -Dtar -DskipTests"
(Ubuntu 16.04.4 LTS)
  - started hadoop cluster with 1 master and 2 slaves
  - run pi estimator, teragen, terasort, teravalidate
  - verified Web UI (file browser)

Thanks,
Zsolt



On Thu, May 10, 2018 at 8:50 AM, Takanobu Asanuma 
wrote:

> Thanks for working on this, Junping!
>
> +1 (non-binding)
>- verified checksums
>- succeeded "mvn clean package -Pdist,native -Dtar -DskipTests" 
> (CentOS
> 7)
>- started hadoop cluster with 1 master and 5 slaves
>- run TeraGen/TeraSort
>- verified Web UI (NN, RM, JobHistory, Timeline)
>- verified some Archival Storage operations
>
> Thanks,
> -Takanobu
>
> > -Original Message-
> > From: Gabor Bota [mailto:gabor.b...@cloudera.com]
> > Sent: Wednesday, May 09, 2018 5:20 PM
> > To: ajay.ku...@hortonworks.com
> > Cc: junping...@apache.org; Hadoop Common 
> > ; Hdfs-dev 
> > ; mapreduce-...@hadoop.apache.org; 
> > yarn-...@hadoop.apache.org
> > Subject: Re: [VOTE] Release Apache Hadoop 2.8.4 (RC0)
> >
> >   Thanks for the work Junping!
> >
> >   +1 (non-binding)
> >
> >-   checked out git tag release-2.8.4-RC0
> >-   hadoop-aws unit tests ran successfully
> >-   built from source on Mac OS X 10.13.4, java 8.0.171-oracle
> >-   deployed on a 3 node cluster (HDFS HA, Non-HA YARN)
> >-   verified pi job (yarn), teragen, terasort and teravalidate
> >
> >
> >   Regards,
> >   Gabor Bota
> >
> > On Wed, May 9, 2018 at 12:14 AM Ajay Kumar 
> > 
> > wrote:
> >
> > > Thanks for work on this, Junping!!
> > >
> > > +1 (non-binding)
> > >   - verified binary checksum
> > > - built from source and setup 4 node cluster
> > > - run basic hdfs command
> > > - run wordcount, pi & TestDFSIO (read/write)
> > > - basic check for NN UI
> > >
> > > Best,
> > > Ajay
> > >
> > > On 5/8/18, 10:41 AM, "俊平堵"  wrote:
> > >
> > > Hi all,
> > >  I've created the first release candidate (RC0) for Apache
> Hadoop
> > > 2.8.4. This is our next maint release to follow up 2.8.3. It
> includes
> > > 77
> > > important fixes and improvements.
> > >
> > > The RC artifacts are available at:
> > > http://home.apache.org/~junping_du/hadoop-2.8.4-RC0
> > >
> > > The RC tag in git is: release-2.8.4-RC0
> > >
> > > The maven artifacts are available via repository.apache.org<
> > > http://repository.apache.org> at:
> > >
> > >
> > https://repository.apache.org/content/repositories/orgapachehadoop-1
> > 11
> > 8
> > >
> > > Please try the release and vote; the vote will run for the
> usual
> > 5
> > > working days, ending on 5/14/2018 PST time.
> > >
> > > Thanks,
> > >
> > > Junping
> > >
> > >
> > >
> > > --
> > > --- To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> > > For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
> > >
>
> -
> To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
> For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org
>
>


REMINDER: Apache EU Roadshow 2018 schedule announced!

2018-05-11 Thread sharan

Hello Apache Supporters and Enthusiasts

This is a reminder that the schedule for the Apache EU Roadshow 2018 in 
Berlin has been announced.


http://apachecon.com/euroadshow18/schedule.html

Please note that we will not be running an ApacheCon in Europe this year 
which means that this Apache EU Roadshow will be the main Apache event 
in Europe for 2018.


The Apache EU Roadshow tracks take place on the 13th and 14th June 2018, 
and will feature 28 sessions across the following themes: Apache Tomcat, 
IoT, Cloud Technologies, Microservices and Apache Httpd Server.


Please note that the Apache EU Roadshow is co-located with FOSS 
Backstage, whose schedule (https://foss-backstage.de/sessions) 
includes many Apache-related sessions such as Incubator, Apache Way, 
Open Source Governance, Legal and Trademarks, as well as a full range of 
community-related presentations and panel discussions.


One single registration gives you access to both events - the Apache EU 
Roadshow and FOSS Backstage.


Registration includes catering (breakfast & lunch both days) and also an 
attendee evening event. And if you want to have a project meet-up, hack 
or simply spend time and relax in our on-site Apache Lounge between 
sessions, then you are more than welcome.


We look forward to seeing you in Berlin!

Thanks
Sharan Foga, VP Apache Community Development

PLEASE NOTE: You are receiving this message because you are subscribed 
to a user@ or dev@ list of one or more Apache Software Foundation projects.





[jira] [Created] (HDFS-13546) local replica can't sync directory on Windows

2018-05-11 Thread SonixLegend (JIRA)
SonixLegend created HDFS-13546:
--

 Summary: local replica can't sync directory on Windows
 Key: HDFS-13546
 URL: https://issues.apache.org/jira/browse/HDFS-13546
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode
Affects Versions: 3.1.0
 Environment: Windows 10 64bit

JDK 1.8.0_172

Hadoop 3.1.0

HBase 2.0.0
Reporter: SonixLegend


I run Hadoop and HBase on Windows in a development environment, but I got an 
error when I started the HBase master node on the same machine that was running 
HDFS.
{code:java}
2018-05-11 18:34:52,320 INFO datanode.DataNode: PacketResponder: 
BP-471749493-192.168.154.244-1526032382905:blk_1073741850_1026, 
type=LAST_IN_PIPELINE: Thread is interrupted.
2018-05-11 18:34:52,320 INFO datanode.DataNode: PacketResponder: 
BP-471749493-192.168.154.244-1526032382905:blk_1073741850_1026, 
type=LAST_IN_PIPELINE terminating
2018-05-11 18:34:52,321 INFO datanode.DataNode: opWriteBlock 
BP-471749493-192.168.154.244-1526032382905:blk_1073741850_1026 received 
exception java.io.IOException: Failed to sync 
C:\hadoop\data\hdfs\data1\current\BP-471749493-192.168.154.244-1526032382905\current\rbw
2018-05-11 18:34:52,321 ERROR datanode.DataNode: 
LAPTOP-460HNFM9:9866:DataXceiver error processing WRITE_BLOCK operation  src: 
/127.0.0.1:10842 dst: /127.0.0.1:9866
java.io.IOException: Failed to sync 
C:\hadoop\data\hdfs\data1\current\BP-471749493-192.168.154.244-1526032382905\current\rbw
    at 
org.apache.hadoop.hdfs.server.datanode.LocalReplica.fsyncDirectory(LocalReplica.java:523)
    at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.flushOrSync(BlockReceiver.java:429)
    at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:809)
    at 
org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:971)
    at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:890)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
    at 
org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
    at 
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:289)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.file.AccessDeniedException: 
C:\hadoop\data\hdfs\data1\current\BP-471749493-192.168.154.244-1526032382905\current\rbw
    at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
    at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
    at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
    at 
sun.nio.fs.WindowsFileSystemProvider.newFileChannel(WindowsFileSystemProvider.java:115)
    at java.nio.channels.FileChannel.open(FileChannel.java:287)
    at java.nio.channels.FileChannel.open(FileChannel.java:335)
    at org.apache.hadoop.io.IOUtils.fsync(IOUtils.java:421)
    at 
org.apache.hadoop.hdfs.server.datanode.FileIoProvider.dirSync(FileIoProvider.java:169)
    at 
org.apache.hadoop.hdfs.server.datanode.LocalReplica.fsyncDirectory(LocalReplica.java:521)
    ... 8 more
{code}
I never got this error with Hadoop 3.0.0 and HBase 1.4.x. I found this is a 
Windows issue: Windows cannot sync a directory, and FileChannel cannot open a 
directory with any permission option. I changed the code in IOUtils.fsync, and 
it works for me.
{code:java}
// Skip the directory fsync on Windows, where FileChannel.open on a
// directory fails with AccessDeniedException.
if (!isDir || !Shell.WINDOWS) {
  try (FileChannel channel = FileChannel.open(fileToSync.toPath(),
      isDir ? StandardOpenOption.READ : StandardOpenOption.WRITE)) {
    fsync(channel, isDir);
  }
}
{code}
 






[jira] [Created] (HDFS-13545) "guarded" is misspelled as "gaurded" in FSPermissionChecker.java

2018-05-11 Thread Jianchao Jia (JIRA)
Jianchao Jia created HDFS-13545:
---

 Summary:  "guarded" is misspelled as "gaurded" in 
FSPermissionChecker.java
 Key: HDFS-13545
 URL: https://issues.apache.org/jira/browse/HDFS-13545
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs
Reporter: Jianchao Jia


 "guarded" is misspelled as "gaurded"






Apache Hadoop qbt Report: trunk+JDK8 on Windows/x64

2018-05-11 Thread Apache Jenkins Server
For more details, see https://builds.apache.org/job/hadoop-trunk-win/463/

[May 9, 2018 1:36:07 PM] (msingh) HDDS-20. Ozone: Add support for rename key 
within a bucket for rpc
[May 9, 2018 2:50:21 PM] (msingh) HDDS-28. Ozone:Duplicate declaration in
[May 9, 2018 5:32:51 PM] (eyang) YARN-8261.  Fixed a bug in creation of 
localized container directory.   
[May 9, 2018 9:15:51 PM] (mackrorysd) HADOOP-15356. Make HTTP timeout 
configurable in ADLS connector.
[May 9, 2018 11:52:09 PM] (inigoiri) HDFS-13537. TestHdfsHelper does not 
generate jceks path properly for
[May 10, 2018 3:45:46 AM] (bibinchundatt) YARN-8201. Skip stacktrace of few 
exception from ClientRMService.
[May 10, 2018 5:17:48 AM] (vrushali) YARN-8247 Incorrect HTTP status code 
returned by ATSv2 for
[May 10, 2018 6:00:13 AM] (aengineer) HDDS-18. Ozone Shell should use 
RestClient and RpcClient. Contributed by
[May 10, 2018 9:38:08 AM] (aajisaka) HADOOP-15354. hadoop-aliyun & hadoop-azure 
modules to mark hadoop-common




-1 overall


The following subsystems voted -1:
compile mvninstall unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h 00m 00s)
unit


Specific tests:

Failed junit tests :

   hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec 
   hadoop.fs.contract.rawlocal.TestRawlocalContractAppend 
   hadoop.fs.TestFsShellCopy 
   hadoop.fs.TestFsShellList 
   hadoop.fs.TestLocalFileSystem 
   hadoop.fs.TestRawLocalFileSystemContract 
   hadoop.fs.TestSymlinkLocalFSFileContext 
   hadoop.fs.TestTrash 
   hadoop.http.TestHttpServer 
   hadoop.http.TestHttpServerLogs 
   hadoop.io.nativeio.TestNativeIO 
   hadoop.ipc.TestCallQueueManager 
   hadoop.ipc.TestProtoBufRpc 
   hadoop.ipc.TestSocketFactory 
   hadoop.metrics2.impl.TestStatsDMetrics 
   hadoop.metrics2.sink.TestRollingFileSystemSinkWithLocal 
   hadoop.security.TestSecurityUtil 
   hadoop.security.TestShellBasedUnixGroupsMapping 
   hadoop.security.token.TestDtUtilShell 
   hadoop.test.TestLambdaTestUtils 
   hadoop.util.TestNativeCodeLoader 
   hadoop.util.TestNodeHealthScriptRunner 
   hadoop.util.TestWinUtils 
   hadoop.fs.TestResolveHdfsSymlink 
   hadoop.hdfs.client.impl.TestBlockReaderLocal 
   hadoop.hdfs.crypto.TestHdfsCryptoStreams 
   hadoop.hdfs.qjournal.client.TestQuorumJournalManager 
   hadoop.hdfs.qjournal.server.TestJournalNode 
   hadoop.hdfs.qjournal.server.TestJournalNodeSync 
   hadoop.hdfs.server.blockmanagement.TestNameNodePrunesMissingStorages 
   hadoop.hdfs.server.blockmanagement.TestOverReplicatedBlocks 
   
hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness
 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestFsDatasetImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistLockedMemory 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistPolicy 
   
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement 
   
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestProvidedImpl 
   hadoop.hdfs.server.datanode.fsdataset.impl.TestSpaceReservation 
   hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage 
   hadoop.hdfs.server.datanode.TestBlockRecovery 
   hadoop.hdfs.server.datanode.TestBlockScanner 
   hadoop.hdfs.server.datanode.TestDataNodeFaultInjector 
   hadoop.hdfs.server.datanode.TestDataNodeMetrics 
   hadoop.hdfs.server.datanode.TestDataNodeUUID 
   hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure 
   hadoop.hdfs.server.datanode.TestDirectoryScanner 
   hadoop.hdfs.server.datanode.TestHSync 
   hadoop.hdfs.server.datanode.TestStorageReport 
   hadoop.hdfs.server.datanode.web.TestDatanodeHttpXFrame 
   hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand 
   hadoop.hdfs.server.diskbalancer.TestDiskBalancerRPC 
   hadoop.hdfs.server.mover.TestStorageMover 
   hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA 
   hadoop.hdfs.server.namenode.ha.TestEditLogTailer 
   hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA 
   hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics 
   
hadoop.hdfs.server.namenode.snapshot.TestINodeFileUnderConstructionWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots 
   hadoop.hdfs.server.namenode.snapshot.TestSnapRootDescendantDiff 
   hadoop.hdfs.server.namenode.snapshot.TestSnapshotDiffReport 
   hadoop.hdfs.server.namenode.TestAddBlock