[jira] [Updated] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB
    [ https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Varada Hemeswari updated HADOOP-14768:
--------------------------------------
    Attachment: HADOOP-14768.005.patch

> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-14768
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14768
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Varada Hemeswari
>            Assignee: Varada Hemeswari
>              Labels: fs, secure, wasb
>         Attachments: HADOOP-14768.001.patch, HADOOP-14768.002.patch, HADOOP-14768.003.patch, HADOOP-14768.003.patch, HADOOP-14768.004.patch, HADOOP-14768.004.patch, HADOOP-14768.005.patch
>
> When authorization is enabled in the WASB filesystem, a sticky bit is needed in cases where multiple users can create files under a shared directory. This additional check for the sticky bit is required because any user can delete another user's file when the parent directory has WRITE permission for all users.
> The purpose of this JIRA is to implement a sticky-bit equivalent for the 'delete' call when authorization is enabled.
> Note: sticky-bit implementation for the 'Rename' operation is not done as part of this JIRA.

--
This message was sent by Atlassian JIRA (v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
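The sticky-bit rule the issue describes (in a world-writable directory with the sticky bit set, only the file owner or the directory owner may delete a file) can be sketched as a small self-contained helper. This is an illustrative sketch, not the actual WASB patch code; all names here are hypothetical.

```java
// Hypothetical sketch of the sticky-bit deletion rule: when the parent
// directory has the sticky bit set, only the file owner, the directory
// owner, or a superuser may delete the file. Without the sticky bit the
// normal WRITE-permission check on the parent applies.
public class StickyBitCheck {
    public static boolean isDeleteAllowed(boolean parentStickyBit,
                                          String parentOwner,
                                          String fileOwner,
                                          String requestingUser,
                                          boolean isSuperUser) {
        if (!parentStickyBit || isSuperUser) {
            // no sticky bit, or superuser: fall back to the usual authorization check
            return true;
        }
        return requestingUser.equals(fileOwner) || requestingUser.equals(parentOwner);
    }
}
```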
[jira] [Updated] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB
    [ https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Varada Hemeswari updated HADOOP-14768:
--------------------------------------
    Attachment: (was: HADOOP-14768.005.patch)

> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-14768
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14768
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Varada Hemeswari
>            Assignee: Varada Hemeswari
>              Labels: fs, secure, wasb
>         Attachments: HADOOP-14768.001.patch, HADOOP-14768.002.patch, HADOOP-14768.003.patch, HADOOP-14768.003.patch, HADOOP-14768.004.patch, HADOOP-14768.004.patch, HADOOP-14768.005.patch
>
> When authorization is enabled in the WASB filesystem, a sticky bit is needed in cases where multiple users can create files under a shared directory. This additional check for the sticky bit is required because any user can delete another user's file when the parent directory has WRITE permission for all users.
> The purpose of this JIRA is to implement a sticky-bit equivalent for the 'delete' call when authorization is enabled.
> Note: sticky-bit implementation for the 'Rename' operation is not done as part of this JIRA.
[jira] [Commented] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB
[ https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176235#comment-16176235 ] Hadoop QA commented on HADOOP-14768: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
0m 23s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 12s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 7 new + 24 unchanged - 1 fixed = 31 total (was 25) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 2s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 4s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14768 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888472/HADOOP-14768.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 26ba839d52ee 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c71d137 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13354/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/13354/artifact/patchprocess/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13354/testReport/ | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13354/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Honoring sticky bit during Deletion when authorization is enabled in WASB > - > > Key: HADOOP-14768 > URL: https://issues.apache.org/jira/browse/HADOOP-14768 >
[jira] [Commented] (HADOOP-14895) Consider exposing SimpleCopyListing#computeSourceRootPath() for downstream project
    [ https://issues.apache.org/jira/browse/HADOOP-14895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176233#comment-16176233 ]

Steve Loughran commented on HADOOP-14895:
-----------------------------------------
Ted, what's the component, version, etc.?

> Consider exposing SimpleCopyListing#computeSourceRootPath() for downstream project
> ----------------------------------------------------------------------------------
>
>                 Key: HADOOP-14895
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14895
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Ted Yu
>
> Over in HBASE-18843, [~vrodionov] needs to override SimpleCopyListing#computeSourceRootPath().
> Since the method is private, some duplicated code appears in HBase.
> We should consider exposing SimpleCopyListing#computeSourceRootPath() so that its behavior can be overridden.
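The change being requested above, widening a private helper so a downstream project can override it instead of duplicating it, can be sketched in miniature. This is a hypothetical illustration of the visibility change; the class bodies and the downstream subclass name are assumptions, not the real DistCp or HBase code.

```java
// Hedged sketch: widening a helper from private to protected lets a
// downstream subclass customize it via @Override rather than copying code.
class SimpleCopyListingSketch {
    // was: private ... computeSourceRootPath(...); now overridable
    protected String computeSourceRootPath(String sourcePath) {
        return sourcePath; // illustrative default behavior
    }
}

class DownstreamCopyListing extends SimpleCopyListingSketch {
    @Override
    protected String computeSourceRootPath(String sourcePath) {
        // downstream project (e.g. an HBase backup tool) customizes the root path
        return "/custom-root" + sourcePath;
    }
}
```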
[jira] [Commented] (HADOOP-14531) Improve S3A error handling & reporting
    [ https://issues.apache.org/jira/browse/HADOOP-14531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176228#comment-16176228 ]

Steve Loughran commented on HADOOP-14531:
-----------------------------------------
You can sometimes get a connection on port 443 that fails with no response. Retryable on idempotent calls only.

{code}
2017-02-07 10:00:44,200 INFO [main] org.apache.hadoop.mapred.Task: Task attempt_1486454881801_0009_m_24_0 is allowed to commit now
2017-02-07 10:01:07,950 INFO [s3a-transfer-shared-pool1-t7] com.amazonaws.http.AmazonHttpClient: Unable to execute HTTP request: hwdev-rajesh-new2.s3.amazonaws.com:443 failed to respond
org.apache.http.NoHttpResponseException: hwdev-rajesh-new2.s3.amazonaws.com:443 failed to respond
	at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:143)
	at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
	at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
	at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
	at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:259)
	at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:209)
	at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
	at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doReceiveResponse(SdkHttpRequestExecutor.java:66)
	at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
	at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:686)
	at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:488)
	at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:884)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
	at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
	at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:728)
	at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:489)
	at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:310)
	at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3785)
	at com.amazonaws.services.s3.AmazonS3Client.copyPart(AmazonS3Client.java:1731)
	at com.amazonaws.services.s3.transfer.internal.CopyPartCallable.call(CopyPartCallable.java:41)
	at com.amazonaws.services.s3.transfer.internal.CopyPartCallable.call(CopyPartCallable.java:28)
	at org.apache.hadoop.fs.s3a.SemaphoredDelegatingExecutor$CallableWithPermitRelease.call(SemaphoredDelegatingExecutor.java:222)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
{code}

> Improve S3A error handling & reporting
> --------------------------------------
>
>                 Key: HADOOP-14531
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14531
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>    Affects Versions: 2.8.1
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Blocker
>
> Improve S3A error handling and reporting. This includes:
> # looking at error codes and translating to more specific exceptions
> # better retry logic where present
> # adding retry logic where not present
> # more diagnostics in exceptions
> # docs
> Overall goals:
> * things that can be retried and will go away are retried for a bit
> * things that don't go away when retried fail fast (302, no auth, unknown host, connection refused)
> * meaningful exceptions are built in translateException
> * diagnostics are included, where possible
> * our troubleshooting docs are expanded with new failures we encounter
> AWS S3 error codes: http://docs.aws.amazon.com/AmazonS3/latest/API/ErrorResponses.html
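The "retryable on idempotent calls only" rule from the comment above can be sketched as a tiny retry wrapper: a connection failure with no HTTP response is safe to retry only for calls that can be repeated without side effects. This is a hedged, self-contained sketch, not the S3A retry policy itself; all names are hypothetical.

```java
import java.util.concurrent.Callable;

// Hedged sketch of retry-on-idempotent-only: non-idempotent calls fail fast
// on a connection error, idempotent calls are retried up to maxAttempts.
public class IdempotentRetry {
    public static <T> T invoke(Callable<T> call, boolean idempotent, int maxAttempts)
            throws Exception {
        Exception last = null;
        // non-idempotent calls get exactly one attempt
        int attempts = idempotent ? Math.max(1, maxAttempts) : 1;
        for (int i = 0; i < attempts; i++) {
            try {
                return call.call();
            } catch (java.io.IOException e) {
                last = e; // e.g. "failed to respond": no response was received
            }
        }
        throw last;
    }
}
```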
[jira] [Updated] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB
    [ https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Varada Hemeswari updated HADOOP-14768:
--------------------------------------
    Attachment: HADOOP-14768.005.patch

> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -------------------------------------------------------------------------
>
>                 Key: HADOOP-14768
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14768
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Varada Hemeswari
>            Assignee: Varada Hemeswari
>              Labels: fs, secure, wasb
>         Attachments: HADOOP-14768.001.patch, HADOOP-14768.002.patch, HADOOP-14768.003.patch, HADOOP-14768.003.patch, HADOOP-14768.004.patch, HADOOP-14768.004.patch, HADOOP-14768.005.patch
>
> When authorization is enabled in the WASB filesystem, a sticky bit is needed in cases where multiple users can create files under a shared directory. This additional check for the sticky bit is required because any user can delete another user's file when the parent directory has WRITE permission for all users.
> The purpose of this JIRA is to implement a sticky-bit equivalent for the 'delete' call when authorization is enabled.
> Note: sticky-bit implementation for the 'Rename' operation is not done as part of this JIRA.
[jira] [Commented] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176200#comment-16176200 ] Hadoop QA commented on HADOOP-14872: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 56s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 7s{color} | {color:orange} root: The patch generated 1 new + 70 unchanged - 0 fixed = 71 total (was 70) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 16s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 52s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}182m 17s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14872 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888445/HADOOP-14872.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1d551f98740b 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c71d137 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13352/artifact/patchprocess/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13352/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Commented] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1
    [ https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176153#comment-16176153 ]

Steve Loughran commented on HADOOP-14799:
-----------------------------------------
No problem with this as a follow-up; I just do want us to fix that version in so that we don't get randomness. I don't even know why/where the search for the 2.3-SNAPSHOT is coming from (i.e. where the pom which is triggering this message is being picked up). But at least it's not actually being included in the binaries as of now.

> Update nimbus-jose-jwt to 4.41.1
> --------------------------------
>
>                 Key: HADOOP-14799
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14799
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Ray Chiang
>            Assignee: Ray Chiang
>             Fix For: 3.0.0-beta1
>
>         Attachments: HADOOP-14799.001.patch, HADOOP-14799.002.patch, HADOOP-14799.003.patch
>
> Update the dependency
> com.nimbusds:nimbus-jose-jwt:3.9
> to the latest (4.41.1)
[jira] [Created] (HADOOP-14898) Create official Docker images for development and testing features
Elek, Marton created HADOOP-14898:
-------------------------------------
             Summary: Create official Docker images for development and testing features
                 Key: HADOOP-14898
                 URL: https://issues.apache.org/jira/browse/HADOOP-14898
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: Elek, Marton
            Assignee: Elek, Marton

This is the original mail from the mailing list:

{code}
TL;DR: I propose to create official hadoop images and upload them to the dockerhub.

GOAL/SCOPE: I would like to improve the existing documentation with easy-to-use docker based recipes to start hadoop clusters with various configurations.

The images could also be used to test experimental features. For example ozone could be tested easily with this compose file and configuration:

https://gist.github.com/elek/1676a97b98f4ba561c9f51fce2ab2ea6

Or the configuration could even be included in the compose file:

https://github.com/elek/hadoop/blob/docker-2.8.0/example/docker-compose.yaml

I would like to create separate example compose files for federation, ha, metrics usage, etc. to make it easier to try out and understand the features.

CONTEXT: There is an existing Jira, https://issues.apache.org/jira/browse/HADOOP-13397, but it's about a tool to generate production-quality docker images (multiple types, in a flexible way). If no objections, I will create a separate issue to create simplified docker images for rapid prototyping and investigating new features, and register the branch on the dockerhub to create the images automatically.

MY BACKGROUND: I have been working with docker based hadoop/spark clusters for quite a while and have run them successfully in different environments (kubernetes, docker-swarm, nomad-based scheduling, etc.). My work is available from here: https://github.com/flokkr but it handles more complex use cases (e.g. instrumenting java processes with btrace, or reading/reloading configuration from consul).

And IMHO in the official hadoop documentation it's better to suggest using official apache docker images and not external ones (which could change).
{code}

The next list enumerates the key decision points regarding docker image creation.

A. Automated dockerhub build vs. jenkins build

Docker images could be built on the dockerhub (a branch pattern should be defined for a github repository, along with the location of the Docker files), or could be built on a CI server and pushed. The second option is more flexible (it's easier to create a matrix build, for example). The first has the advantage that we get an additional flag on the dockerhub indicating that the build is automated (and built from source by the dockerhub). The decision is easy, as ASF supports the first approach (see https://issues.apache.org/jira/browse/INFRA-12781?focusedCommentId=15824096=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15824096).

B. Source: binary distribution or source build

The second question is about creating the docker image. One option is to build the software on the fly during the creation of the docker image; the other is to use the binary releases. I suggest the second approach, as:

1. In that case the hadoop:2.7.3 image could contain exactly the same hadoop distribution as the downloadable one.
2. We don't need to add development tools to the image, so the image can be smaller (which is important, as the goal of this image is getting started as fast as possible).
3. The docker definition will be simpler (and easier to maintain).

This approach is usually used in other projects (I checked Apache Zeppelin and Apache Nutch).

C. Branch usage

Another question is the location of the Dockerfile. It could live on the official source-code branches (branch-2, trunk, etc.), or we could create separate branches for the dockerhub (e.g. docker/2.7, docker/2.8, docker/3.0). With the first approach it's easier to find the docker images, but it's less flexible. For example, if we had a Dockerfile on the source code branch, it would have to be used for every release (for example, the Dockerfile from the tag release-3.0.0 would be used for the 3.0 hadoop docker image). In that case the release process becomes much harder: on a Dockerfile error (which can be tested on dockerhub only after the tagging), a new release would have to be made after fixing the Dockerfile. Another problem is that with tags it's not possible to improve the Dockerfiles. I can imagine that we would like to improve, for example, the hadoop:2.7 images (say, adding smarter startup scripts) while using exactly the same hadoop 2.7 distribution. Finally, with the tag-based approach we can't create images for the older releases (2.8.1 for example). So I suggest creating separate branches for the Dockerfiles.

D. Versions

We can create a separate branch for every version (2.7.1/2.7.2/2.7.3) or just for the main version
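The "easy-to-use docker based recipes" proposed in this message could look something like the following minimal compose file. This is a hedged sketch only: the image name `apache/hadoop:2.7.3`, the commands, and the ports are assumptions for illustration; no such official image is implied to exist.

```yaml
# Hypothetical minimal docker-compose recipe of the kind proposed above:
# a one-NameNode, one-DataNode cluster for trying out a feature.
# Image name and ports are illustrative assumptions.
version: "3"
services:
  namenode:
    image: apache/hadoop:2.7.3
    command: ["hdfs", "namenode"]
    ports:
      - "9870:9870"   # NameNode web UI (assumed port)
  datanode:
    image: apache/hadoop:2.7.3
    command: ["hdfs", "datanode"]
    depends_on:
      - namenode
```

Separate compose files of this shape could then be written per feature (federation, HA, metrics), as the mail suggests.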
[jira] [Commented] (HADOOP-14836) multiple versions of maven-clean-plugin in use
[ https://issues.apache.org/jira/browse/HADOOP-14836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176129#comment-16176129 ] Hadoop QA commented on HADOOP-14836: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 
0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 30s{color} | {color:green} hadoop-yarn-ui in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HADOOP-14836 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12888416/HADOOP-14836.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 72043292dd5c 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c71d137 | | Default Java | 1.8.0_144 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13353/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-ui | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13353/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> multiple versions of maven-clean-plugin in use
> ----------------------------------------------
>
>                 Key: HADOOP-14836
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14836
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 3.0.0-beta1
>            Reporter: Allen Wittenauer
>            Assignee: Huafeng Wang
>         Attachments: HADOOP-14836.001.patch
>
> hadoop-yarn-ui re-declares maven-clean-plugin with 3.0 while the rest of the source tree uses 2.5. This should get synced up.
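One common way to keep a plugin version in sync across a multi-module Maven build, which is the kind of fix this issue asks for, is to declare it once in the parent POM's `pluginManagement` so child modules inherit it instead of re-declaring their own version. A hedged sketch (the version `2.5` comes from the issue text; the surrounding POM context is omitted):

```xml
<!-- Sketch: pin maven-clean-plugin once in the parent POM's pluginManagement;
     child modules then omit the <version> element and inherit this one. -->
<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-clean-plugin</artifactId>
        <version>2.5</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>
```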
[jira] [Updated] (HADOOP-14836) multiple versions of maven-clean-plugin in use
    [ https://issues.apache.org/jira/browse/HADOOP-14836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Huafeng Wang updated HADOOP-14836:
----------------------------------
    Status: Patch Available  (was: Open)

> multiple versions of maven-clean-plugin in use
> ----------------------------------------------
>
>                 Key: HADOOP-14836
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14836
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: build
>    Affects Versions: 3.0.0-beta1
>            Reporter: Allen Wittenauer
>            Assignee: Huafeng Wang
>         Attachments: HADOOP-14836.001.patch
>
> hadoop-yarn-ui re-declares maven-clean-plugin with 3.0 while the rest of the source tree uses 2.5. This should get synced up.
[jira] [Updated] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

John Zhuge updated HADOOP-14872:
--------------------------------
    Attachment: HADOOP-14872.004.patch

Patch 004:
* Add enum values SETDROPBEHIND, SETREADAHEAD, and UNBUFFER to StreamCapabilities
* Implement StreamCapabilities in FSDataInputStream, CryptoInputStream, CryptoOutputStream, and HarFsInputStream

[~steve_l] Is this something you have in mind? Focusing on the hadoop-common changes for now; the HDFS changes will go into a separate JIRA.

> CryptoInputStream should implement unbuffer
> -------------------------------------------
>
>                 Key: HADOOP-14872
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14872
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 2.6.4
>            Reporter: John Zhuge
>            Assignee: John Zhuge
>         Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch, HADOOP-14872.003.patch, HADOOP-14872.004.patch
>
>
> Discovered in IMPALA-5909.
> Opening an encrypted HDFS file returns a chain of wrapped input streams:
> {noformat}
> HdfsDataInputStream
> CryptoInputStream
> DFSInputStream
> {noformat}
> If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer, FSDataInputStream#unbuffer will be called:
> {code:java}
> try {
>   ((CanUnbuffer)in).unbuffer();
> } catch (ClassCastException e) {
>   throw new UnsupportedOperationException("this stream does not " +
>       "support unbuffering.");
> }
> {code}
> If the {{in}} class does not implement CanUnbuffer, an UnsupportedOperationException (UOE) will be thrown. If the application is not careful, the logs fill up with UOEs.
> In comparison, opening a non-encrypted HDFS file returns this chain:
> {noformat}
> HdfsDataInputStream
> DFSInputStream
> {noformat}
> DFSInputStream implements CanUnbuffer.
> It is good for CryptoInputStream to implement CanUnbuffer for three reasons:
> * It can release buffers, caches, or any other resources when instructed
> * It can call unbuffer on its wrapped DFSInputStream
> * It avoids the UOE described above; applications may not handle the UOE very well.
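The delegation pattern the ticket asks for can be sketched as follows. This is a minimal, simplified illustration, not the actual Hadoop implementation: `WrappingStream` and `RecordingStream` are hypothetical names, and the real `CanUnbuffer` lives in `org.apache.hadoop.fs`. The point is that a wrapping stream implements the capability interface itself and forwards the call inward when the inner stream supports it, instead of letting the `ClassCastException` path above surface as a UOE.

```java
import java.io.IOException;
import java.io.InputStream;

// Simplified stand-in for org.apache.hadoop.fs.CanUnbuffer.
interface CanUnbuffer {
    void unbuffer();
}

// Hypothetical wrapper playing the role of CryptoInputStream: it
// implements CanUnbuffer itself and delegates to the wrapped stream.
class WrappingStream extends InputStream implements CanUnbuffer {
    private final InputStream in;

    WrappingStream(InputStream in) {
        this.in = in;
    }

    @Override
    public int read() throws IOException {
        return in.read();
    }

    @Override
    public void unbuffer() {
        // A real implementation would first release its own decryption
        // buffers here, then delegate. If the inner stream cannot
        // unbuffer, do nothing rather than throw.
        if (in instanceof CanUnbuffer) {
            ((CanUnbuffer) in).unbuffer();
        }
    }
}

// Inner stream that records whether unbuffer reached it, playing the
// role of DFSInputStream in the chain described above.
class RecordingStream extends InputStream implements CanUnbuffer {
    boolean unbuffered = false;

    @Override
    public int read() {
        return -1;
    }

    @Override
    public void unbuffer() {
        unbuffered = true;
    }
}
```

With this shape, `new WrappingStream(inner).unbuffer()` reaches the inner stream whenever it supports unbuffering, and is a silent no-op otherwise.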
[jira] [Updated] (HADOOP-14897) Loosen compatibility guidelines for native dependencies
[ https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Templeton updated HADOOP-14897:
--------------------------------------
    Issue Type: Bug  (was: Improvement)

> Loosen compatibility guidelines for native dependencies
> -------------------------------------------------------
>
>                 Key: HADOOP-14897
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14897
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: documentation, native
>            Reporter: Chris Douglas
>            Priority: Blocker
>
> Within a major version, the compatibility guidelines forbid raising the minimum required version of any native dependency or tool required to build native components.
[jira] [Assigned] (HADOOP-14897) Loosen compatibility guidelines for native dependencies
[ https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Templeton reassigned HADOOP-14897:
-----------------------------------------
    Assignee: Daniel Templeton

> Loosen compatibility guidelines for native dependencies
> -------------------------------------------------------
>
>                 Key: HADOOP-14897
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14897
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: documentation, native
>            Reporter: Chris Douglas
>            Assignee: Daniel Templeton
>            Priority: Blocker
>
> Within a major version, the compatibility guidelines forbid raising the minimum required version of any native dependency or tool required to build native components.
[jira] [Commented] (HADOOP-14897) Loosen compatibility guidelines for native dependencies
[ https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16176005#comment-16176005 ]

Daniel Templeton commented on HADOOP-14897:
-------------------------------------------
Thanks for the JIRA. I'll get a patch posted soon so that we have time to review.

> Loosen compatibility guidelines for native dependencies
> -------------------------------------------------------
>
>                 Key: HADOOP-14897
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14897
>             Project: Hadoop Common
>          Issue Type: Task
>          Components: documentation, native
>            Reporter: Chris Douglas
>
> Within a major version, the compatibility guidelines forbid raising the minimum required version of any native dependency or tool required to build native components.
[jira] [Updated] (HADOOP-14897) Loosen compatibility guidelines for native dependencies
[ https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Templeton updated HADOOP-14897:
--------------------------------------
    Issue Type: Improvement  (was: Task)

> Loosen compatibility guidelines for native dependencies
> -------------------------------------------------------
>
>                 Key: HADOOP-14897
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14897
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: documentation, native
>            Reporter: Chris Douglas
>            Priority: Blocker
>
> Within a major version, the compatibility guidelines forbid raising the minimum required version of any native dependency or tool required to build native components.
[jira] [Updated] (HADOOP-14897) Loosen compatibility guidelines for native dependencies
[ https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Templeton updated HADOOP-14897:
--------------------------------------
    Priority: Blocker  (was: Major)

> Loosen compatibility guidelines for native dependencies
> -------------------------------------------------------
>
>                 Key: HADOOP-14897
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14897
>             Project: Hadoop Common
>          Issue Type: Task
>          Components: documentation, native
>            Reporter: Chris Douglas
>            Priority: Blocker
>
> Within a major version, the compatibility guidelines forbid raising the minimum required version of any native dependency or tool required to build native components.
[jira] [Commented] (HADOOP-14897) Loosen compatibility guidelines for native dependencies
[ https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16176001#comment-16176001 ]

Chris Douglas commented on HADOOP-14897:
----------------------------------------
As [mentioned|https://issues.apache.org/jira/browse/HADOOP-13714?focusedCommentId=16175844&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16175844] in the original ticket, this could interfere with routine maintenance and prevent us from accepting new contributions of native code.

> Loosen compatibility guidelines for native dependencies
> -------------------------------------------------------
>
>                 Key: HADOOP-14897
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14897
>             Project: Hadoop Common
>          Issue Type: Task
>          Components: documentation, native
>            Reporter: Chris Douglas
>
> Within a major version, the compatibility guidelines forbid raising the minimum required version of any native dependency or tool required to build native components.
[jira] [Commented] (HADOOP-13714) Tighten up our compatibility guidelines for Hadoop 3
[ https://issues.apache.org/jira/browse/HADOOP-13714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175993#comment-16175993 ]

Chris Douglas commented on HADOOP-13714:
----------------------------------------
Filed HADOOP-14897

> Tighten up our compatibility guidelines for Hadoop 3
> ----------------------------------------------------
>
>                 Key: HADOOP-13714
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13714
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: documentation
>    Affects Versions: 2.7.3
>            Reporter: Karthik Kambatla
>            Assignee: Daniel Templeton
>            Priority: Blocker
>             Fix For: 3.0.0-beta1
>
>         Attachments: Compatibility.pdf, HADOOP-13714.001.patch, HADOOP-13714.002.patch, HADOOP-13714.003.patch, HADOOP-13714.004.patch, HADOOP-13714.005.patch, HADOOP-13714.006.patch, HADOOP-13714.007.patch, HADOOP-13714.008.patch, HADOOP-13714.WIP-001.patch, InterfaceClassification.pdf
>
>
> Our current compatibility guidelines are incomplete and loose. For many categories, we do not have a policy. It would be nice to actually define those policies so our users know what to expect and developers know which releases to target with their changes.
[jira] [Created] (HADOOP-14897) Loosen compatibility guidelines for native dependencies
Chris Douglas created HADOOP-14897:
--------------------------------------

             Summary: Loosen compatibility guidelines for native dependencies
                 Key: HADOOP-14897
                 URL: https://issues.apache.org/jira/browse/HADOOP-14897
             Project: Hadoop Common
          Issue Type: Task
          Components: documentation, native
            Reporter: Chris Douglas


Within a major version, the compatibility guidelines forbid raising the minimum required version of any native dependency or tool required to build native components.
[jira] [Updated] (HADOOP-14885) DistSum should use Time.monotonicNow for measuring durations
[ https://issues.apache.org/jira/browse/HADOOP-14885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-14885:
-----------------------------------
    Issue Type: Bug  (was: Sub-task)
        Parent: (was: HADOOP-14713)

> DistSum should use Time.monotonicNow for measuring durations
> ------------------------------------------------------------
>
>                 Key: HADOOP-14885
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14885
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Chetna Chaudhari
>            Assignee: Chetna Chaudhari
>            Priority: Minor
>         Attachments: HADOOP-14885.patch
[jira] [Commented] (HADOOP-14885) DistSum should use Time.monotonicNow for measuring durations
[ https://issues.apache.org/jira/browse/HADOOP-14885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16175965#comment-16175965 ]

Akira Ajisaka commented on HADOOP-14885:
----------------------------------------
+1

> DistSum should use Time.monotonicNow for measuring durations
> ------------------------------------------------------------
>
>                 Key: HADOOP-14885
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14885
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Chetna Chaudhari
>            Assignee: Chetna Chaudhari
>            Priority: Minor
>         Attachments: HADOOP-14885.patch
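The pattern HADOOP-14885 applies can be illustrated with a short sketch. Hadoop's `org.apache.hadoop.util.Time.monotonicNow()` returns milliseconds derived from `System.nanoTime()`; the sketch below uses `System.nanoTime()` directly so it is self-contained (`Durations` is a hypothetical helper, not a Hadoop class). The rationale: `System.currentTimeMillis()` tracks the wall clock and can jump backwards or forwards when the clock is adjusted (e.g. by NTP), so differences of it are unreliable for measuring elapsed time, while a monotonic clock only ever moves forward.

```java
// Sketch of duration measurement with a monotonic clock rather than
// System.currentTimeMillis(). "Durations" is an illustrative helper;
// in Hadoop code the equivalent is org.apache.hadoop.util.Time.monotonicNow().
final class Durations {
    private Durations() {
    }

    // Milliseconds from a monotonic source. Differences between two
    // readings are meaningful; the absolute value is not (it has an
    // arbitrary origin, unlike a wall-clock timestamp).
    static long monotonicNowMillis() {
        return System.nanoTime() / 1_000_000L;
    }

    static long elapsedMillis(long startMillis) {
        return monotonicNowMillis() - startMillis;
    }
}
```

Usage follows the shape of the patch: take a reading before the work, then compute `elapsedMillis(start)` after it; the result is never negative, which cannot be guaranteed when subtracting two wall-clock timestamps.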