RE: Updated 2.8.0-SNAPSHOT artifact

2016-10-17 Thread Brahma Reddy Battula
Hi Vinod,

Are there any plans for a first RC for branch-2.8? It has been a long time.




--Brahma Reddy Battula

-----Original Message-----
From: Vinod Kumar Vavilapalli [mailto:vino...@apache.org] 
Sent: 20 August 2016 00:56
To: Jonathan Eagles
Cc: common-...@hadoop.apache.org
Subject: Re: Updated 2.8.0-SNAPSHOT artifact

Jon,

That is around the time I branched 2.8, so I guess you had been getting 
SNAPSHOT artifacts from the branch-2 nightly builds until then.

If you need it, we can set up SNAPSHOT builds. Or just wait for the first RC, 
which is around the corner.

+Vinod

> On Jul 28, 2016, at 4:27 PM, Jonathan Eagles  wrote:
> 
> Latest snapshot is uploaded in Nov 2015, but checkins are still coming 
> in quite frequently.
> https://repository.apache.org/content/repositories/snapshots/org/apache/hadoop/hadoop-yarn-api/
> 
> Are there any plans to start producing updated SNAPSHOT artifacts for 
> current hadoop development lines?


-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org


-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



Re: Chrome extension to collapse JIRA comments

2016-10-17 Thread Karthik Kambatla
Never included the link :)

https://github.com/gezapeti/jira-comment-collapser


On Mon, Oct 17, 2016 at 6:46 PM, Karthik Kambatla 
wrote:

> Hi folks
>
> Sorry for the widespread email, but thought you would find this useful.
>
> My colleague, Peter, has put together a Chrome extension that collapses
> comments from certain users (HadoopQA, Githubbot), which makes tracking
> conversations in JIRAs much easier.
>
> Cheers!
> Karthik
>
>
>


Chrome extension to collapse JIRA comments

2016-10-17 Thread Karthik Kambatla
Hi folks

Sorry for the widespread email, but thought you would find this useful.

My colleague, Peter, has put together a Chrome extension that collapses
comments from certain users (HadoopQA, Githubbot), which makes tracking
conversations in JIRAs much easier.

Cheers!
Karthik


[jira] [Resolved] (HDFS-8662) In failover state, HDFS commands fail on "Retrying connect to server:"

2016-10-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-8662.
---
Resolution: Not A Problem

Resolving per Nicholas' comment, this looks like it's working as intended.

> In failover state, HDFS commands fail on "Retrying connect to 
> server:"
> ---
>
> Key: HDFS-8662
> URL: https://issues.apache.org/jira/browse/HDFS-8662
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.3.0
> Environment: 3-master cluster with 3 data nodes and 1 edge node.
>Reporter: Lakshmi VS
>Priority: Blocker
> Attachments: hadoop-hdfs-journalnode-infinity2.log, 
> hadoop-hdfs-namenode-infinity2.log
>
>
> Steps (on a 3M cluster):
> 1. Poweroff master1.
> 2. Failover to master2.
> 3. On HDFS commands, there are messages indicating failed attempt to connect 
> to master1, though namenode is active and running on master2.
> {code}
> infinity2:~ # su - hdfs -c "hdfs haadmin -getServiceState nn2"
> active
> infinity2:~ # su hdfs -c "hdfs dfs -ls /"
> 15/06/24 15:28:15 INFO ipc.Client: Retrying connect to server: 
> infinity1.labs.teradata.com/39.0.24.1:8020. Already tried 0 time(s); retry 
> policy is RetryPolicy[MultipleLinearRandomRetry[6x1ms, 10x6ms], 
> TryOnceThenFail]
> 15/06/24 15:28:31 INFO ipc.Client: Retrying connect to server: 
> infinity1.labs.teradata.com/39.0.24.1:8020. Already tried 1 time(s); retry 
> policy is RetryPolicy[MultipleLinearRandomRetry[6x1ms, 10x6ms], 
> TryOnceThenFail]
> Found 7 items
> drwxrwxrwx   - yarn   hadoop  0 2015-06-24 14:42 /app-logs
> drwxr-xr-x   - hdfs   hdfs0 2015-06-24 06:16 /apps
> drwxr-xr-x   - hdfs   hdfs0 2015-06-24 06:14 /hdp
> drwxr-xr-x   - mapred hdfs0 2015-06-24 06:15 /mapred
> drwxrwxrwx   - mapred hadoop  0 2015-06-24 06:15 /mr-history
> drwxrwxrwx   - hdfs   hdfs0 2015-06-24 06:16 /tmp
> drwxr-xr-x   - hdfs   hdfs0 2015-06-24 06:17 /user
> infinity2:~ # ps -fu hdfs
> UIDPID  PPID  C STIME TTY  TIME CMD
> hdfs 16318 1  0 06:25 ?00:00:40 
> /opt/teradata/jvm64/jdk8/bin/java -Dproc_journalnode -Xmx4096m 
> -Dhdp.version=2.3.0.0-2462 -Djava.net.prefe
> hdfs 16859 1  0 06:26 ?00:00:29 
> /opt/teradata/jvm64/jdk8/bin/java -Dproc_zkfc -Xmx4096m 
> -Dhdp.version=2.3.0.0-2462 -Djava.net.preferIPv4St
> hdfs 17791 1  0 06:26 ?00:02:34 
> /opt/teradata/jvm64/jdk8/bin/java -Dproc_namenode -Xmx4096m 
> -Dhdp.version=2.3.0.0-2462 -Djava.net.preferIP
> infinity2:~ #
> {code}
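The retry policy string in the log above (MultipleLinearRandomRetry with a spec such as "6x1ms, 10x6ms") encodes a schedule of retry groups. As a hedged illustration, a spec of that shape can be read as "N retries at roughly T ms each". This is a simplified re-implementation for explanation only, not Hadoop's actual parser, and the helper names are invented; Hadoop's real policy also randomizes each sleep interval.

```python
def parse_retry_spec(spec):
    """Parse a schedule like '6x1ms, 10x6ms' into (retries, sleep_ms) pairs.

    Illustrative sketch only: the real MultipleLinearRandomRetry also
    randomizes each sleep; this version keeps the nominal values.
    """
    pairs = []
    for part in spec.split(","):
        count, sleep = part.strip().rstrip("ms").split("x")
        pairs.append((int(count), int(sleep)))
    return pairs


def sleep_times(spec):
    # Expand the schedule into one nominal sleep duration per retry attempt.
    return [sleep for count, sleep in parse_retry_spec(spec) for _ in range(count)]
```

Under that reading, the client in the log would make one group of quick retries and then a second group of slower ones against the dead NameNode before giving up and trying the other one, which matches the "Retrying connect to server" lines being expected behavior.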



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-3153) For HA, a logical name is visible in URIs - add an explicit logical name

2016-10-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-3153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-3153.
---
Resolution: Won't Fix

I'm going to WONTFIX this one per ATM's last comment, and since it involves 
changing configs and missed the 2.x GA release that introduced federation.

If you'd like to resume work on this, feel free to re-open and update the 
target versions.

> For HA, a logical name is visible in URIs - add an explicit logical name
> 
>
> Key: HDFS-3153
> URL: https://issues.apache.org/jira/browse/HDFS-3153
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Affects Versions: 2.0.0-alpha
>Reporter: Sanjay Radia
>Assignee: Tsz Wo Nicholas Sze
>Priority: Critical
>
> Please see this 
> [comment|https://issues.apache.org/jira/browse/HDFS-2839?focusedCommentId=13227729&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13227729]
>  for a discussion of logical names.






[jira] [Resolved] (HDFS-4368) Backport HDFS-3553 (hftp proxy tokens) to branch-1

2016-10-17 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-4368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HDFS-4368.
---
Resolution: Won't Fix

I'm going to WONTFIX this one, since it's been alive for almost 4 years, and 
HFTP has been deprecated (in 2.x) and removed (in 3.x) in favor of WebHDFS. 
Feel free to re-open if you intend to work on this.

> Backport HDFS-3553 (hftp proxy tokens) to branch-1
> --
>
> Key: HDFS-4368
> URL: https://issues.apache.org/jira/browse/HDFS-4368
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 1.0.2
>Reporter: Daryn Sharp
>Priority: Critical
> Attachments: HDFS-4368-branch-1.1.patch
>
>
> Proxy tokens are broken for hftp.  The impact is that systems using proxy 
> tokens, such as Oozie jobs, cannot use hftp.






[jira] [Created] (HDFS-11023) I/O based throttling of DN replication work

2016-10-17 Thread Andrew Wang (JIRA)
Andrew Wang created HDFS-11023:
--

 Summary: I/O based throttling of DN replication work
 Key: HDFS-11023
 URL: https://issues.apache.org/jira/browse/HDFS-11023
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: erasure-coding
Affects Versions: 3.0.0-alpha1
Reporter: Andrew Wang


Datanode recovery work is currently throttled based on the number of blocks to 
be recovered. This is fine in a replicated world, since each block is equally 
expensive to recover. However, EC blocks are much more expensive to recover, 
and the amount depends on the EC policy.

It'd be better to have recovery throttles that accounted for the amount of I/O 
rather than just the # of blocks.
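As a rough sketch of the idea (illustrative Python, not DataNode code; the cost weights, names, and the RS(6,3)-style policy are assumptions for the example), an I/O-aware throttle would weight each recovery task by the data it must read, instead of counting every block as cost 1:

```python
REPLICATION_COST = 1  # copying a replica reads roughly one block's worth of data


def recovery_cost(block, data_units=6):
    """Return a relative I/O cost for recovering one block.

    For a replicated block, one source replica is read. For an erasure-coded
    block under a policy with `data_units` data fragments (e.g. RS(6,3)),
    up to `data_units` fragments must be read to rebuild a lost one, so the
    cost is weighted accordingly. Weights here are illustrative only.
    """
    if block.get("erasure_coded"):
        return data_units
    return REPLICATION_COST


def schedule_recovery(blocks, io_budget):
    # Admit work until the summed I/O weight reaches the budget,
    # rather than admitting a fixed number of blocks.
    scheduled, used = [], 0
    for b in blocks:
        c = recovery_cost(b)
        if used + c > io_budget:
            break
        scheduled.append(b)
        used += c
    return scheduled
```

With a budget of 7 units, one replicated block plus one RS(6,3) block fills the window, whereas a pure block-count throttle would treat them as equally cheap.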






[jira] [Created] (HDFS-11022) DataNode unable to remove corrupt block replica due to race condition

2016-10-17 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HDFS-11022:
--

 Summary: DataNode unable to remove corrupt block replica due to 
race condition
 Key: HDFS-11022
 URL: https://issues.apache.org/jira/browse/HDFS-11022
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: datanode, namenode
Affects Versions: 2.6.0
 Environment: CDH5.7.0
Reporter: Wei-Chiu Chuang
Priority: Critical



Scenario:
# A client reads a replica blk_A_x from a data node and detected corruption.
# In the meantime, the replica is appended, updating its generation stamp from 
x to y.
# The client tells NN to mark the replica blk_A_x corrupt.
# NN tells the data node to (1) delete replica blk_A_x and (2) replicate the 
newer replica blk_A_y from another datanode. Due to block placement policy, 
blk_A_y is replicated to the same node. (It's a small cluster)
# DN is unable to receive the newer replica blk_A_y, because a replica of the 
block already exists.
# DN is also unable to delete replica blk_A_x, because a replica with 
generation stamp x no longer exists.
# The replica on the DN is not part of the data pipeline, so it becomes stale.

If another replica becomes corrupt and NameNode wants to replicate a healthy 
replica to this DataNode, it can't, because a stale replica exists. Because 
this is a small cluster, soon enough (in a matter of an hour) no DataNode is 
able to receive a healthy replica.

This cluster also suffers from HDFS-11019, so even though DataNode later 
detected data corruption, it was unable to report to NameNode.

Note that we are still investigating the root cause of the corruption.
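The stuck state in steps 5 and 6 can be sketched with a toy model (illustrative Python, not HDFS code; class and method names are invented) in which replicas are keyed by (block id, generation stamp), incoming replicas are refused if any generation of the block exists, but deletions are keyed by the exact, now-stale generation stamp:

```python
class ToyDataNode:
    """Toy model of the replica-map behavior described above (not real HDFS)."""

    def __init__(self):
        self.replicas = {}  # (block_id, gen_stamp) -> replica data

    def receive_replica(self, block_id, gen_stamp):
        # Step 5: receiving is refused if ANY generation of the block exists.
        if any(bid == block_id for bid, _ in self.replicas):
            raise ValueError("replica already exists")
        self.replicas[(block_id, gen_stamp)] = b""

    def append(self, block_id, old_gs, new_gs):
        # Step 2: an append bumps the generation stamp from old_gs to new_gs.
        self.replicas[(block_id, new_gs)] = self.replicas.pop((block_id, old_gs))

    def delete_replica(self, block_id, gen_stamp):
        # Step 6: deletion is keyed by the exact (now stale) generation stamp.
        if (block_id, gen_stamp) not in self.replicas:
            raise KeyError("no such replica")
        del self.replicas[(block_id, gen_stamp)]
```

Once blk_A at generation x is appended to y, delete_replica("blk_A", "x") fails and receive_replica("blk_A", "y") also fails, so the stale copy can neither be removed nor replaced.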






Planning for 3.0.0-alpha2

2016-10-17 Thread Andrew Wang
Hi folks,

It's been a month since 3.0.0-alpha1, and we've been incorporating fixes
based on downstream feedback. Thus, it's getting to be time for
3.0.0-alpha2. I'm using this JIRA query to track open issues:

https://issues.apache.org/jira/issues/?jql=project%20in%20(HADOOP%2C%20HDFS%2C%20MAPREDUCE%2C%20YARN)%20AND%20%22Target%20Version%2Fs%22%20in%20(3.0.0-alpha2%2C%203.0.0-beta1%2C%202.8.0)%20AND%20statusCategory%20not%20in%20(Complete)%20ORDER%20BY%20priority

If alpha2 goes well, we can declare feature freeze, cut branch-3, and move
on to beta1. My plan for the 3.0.0 release timeline looks like this:

* alpha2 in early November
* beta1 in early Jan
* GA in early March

I'd appreciate everyone's help in resolving blocker and critical issues on
the above JIRA search.

Thanks,
Andrew


Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2016-10-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/

[Oct 17, 2016 12:04:49 PM] (weichiu) HADOOP-13661. Upgrade HTrace version. 
Contributed by Sean Mackrory.




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl 
   hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport 
   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   
hadoop.yarn.server.nodemanager.containermanager.logaggregation.TestLogAggregationService
 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   
hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timelineservice.storage.common.TestRowKeys 
   hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters 
   hadoop.yarn.server.timelineservice.storage.common.TestSeparator 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction
 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   
hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl
 
   
hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage
 
   
hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 

Timed out junit tests :

   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.mapred.TestMRIntermediateDataEncryption 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-compile-root.txt
  [312K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-compile-root.txt
  [312K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-compile-root.txt
  [312K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [196K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [52K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timelineservice.txt
  [20K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
  [72K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/127/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-timeline-pluginstorage.txt
  [28K]
   

[jira] [Created] (HDFS-11021) Add FSNamesystemLock metrics for BlockManager operations

2016-10-17 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-11021:
--

 Summary: Add FSNamesystemLock metrics for BlockManager operations
 Key: HDFS-11021
 URL: https://issues.apache.org/jira/browse/HDFS-11021
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Erik Krogen
Assignee: Erik Krogen


Right now the operations that the {{BlockManager}} issues to the 
{{Namesystem}} do not emit metrics about which operation caused the 
{{FSNamesystemLock}} to be held; they are all grouped under "OTHER". We should 
fix this, since the {{BlockManager}} performs many acquisitions of both the 
read and write locks. 
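The fix amounts to tagging each lock acquisition with the operation name so that hold times can be reported per operation instead of under "OTHER". A minimal sketch of the pattern (illustrative Python, not the FSNamesystemLock implementation; the names are assumptions):

```python
import time
from collections import defaultdict
from contextlib import contextmanager


class LockHoldMetrics:
    """Accumulate lock hold time per named operation (toy sketch)."""

    def __init__(self):
        self.hold_nanos = defaultdict(int)

    @contextmanager
    def write_lock(self, op_name="OTHER"):
        # Callers pass an operation name; untagged callers fall into "OTHER",
        # which is exactly the bucket the JIRA wants BlockManager calls out of.
        start = time.monotonic_ns()
        try:
            yield
        finally:
            self.hold_nanos[op_name] += time.monotonic_ns() - start
```

A BlockManager-style caller would then use `with metrics.write_lock("processBlockReport"): ...`, and its hold time shows up under its own key rather than the catch-all bucket.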






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-10-17 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/

No changes




-1 overall


The following subsystems voted -1:
asflicense findbugs unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

FindBugs :

   module:hadoop-common-project/hadoop-kms 
   Exception is caught when Exception is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map) At KMS.java:is not 
thrown in org.apache.hadoop.crypto.key.kms.server.KMS.createKey(Map) At 
KMS.java:[line 169] 
   Exception is caught when Exception is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.generateEncryptedKeys(String, 
String, int) At KMS.java:is not thrown in 
org.apache.hadoop.crypto.key.kms.server.KMS.generateEncryptedKeys(String, 
String, int) At KMS.java:[line 501] 

Failed junit tests :

   hadoop.ha.TestZKFailoverController 
   hadoop.hdfs.server.namenode.snapshot.TestOpenFilesWithSnapshot 
   
hadoop.yarn.server.nodemanager.containermanager.queuing.TestQueuingContainerManager
 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/whitespace-tabs.txt
  [1.3M]

   findbugs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/branch-findbugs-hadoop-common-project_hadoop-kms-warnings.html
  [8.0K]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
  [120K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [148K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [40K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [124K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/197/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org



-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org

[jira] [Created] (HDFS-11020) Enable HDFS transparent encryption doc

2016-10-17 Thread Yi Liu (JIRA)
Yi Liu created HDFS-11020:
-

 Summary: Enable HDFS transparent encryption doc
 Key: HDFS-11020
 URL: https://issues.apache.org/jira/browse/HDFS-11020
 Project: Hadoop HDFS
  Issue Type: Task
  Components: documentation, encryption, fs
Reporter: Yi Liu
Assignee: Yi Liu
Priority: Minor


We need a version of OpenSSL that supports hardware acceleration of AES-CTR.
Let's add more documentation about how to configure the correct OpenSSL.
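One piece such a doc might include is a simple version gate, since hardware-accelerated AES-CTR needs a sufficiently new OpenSSL. A hedged sketch (illustrative Python; the (1, 0, 1) threshold is an assumption for the example, and this inspects whatever OpenSSL the Python runtime links, not necessarily the one Hadoop's native library uses):

```python
import ssl


def openssl_supports_aes_ctr(version_info=None):
    """Return True if the linked OpenSSL looks new enough for AES-CTR EVP use.

    The (1, 0, 1) threshold is assumed for illustration only.
    """
    if version_info is None:
        version_info = ssl.OPENSSL_VERSION_INFO
    return version_info[:3] >= (1, 0, 1)


print(ssl.OPENSSL_VERSION, "->", openssl_supports_aes_ctr())
```

On a real cluster the authoritative check is Hadoop's own native-library report rather than a script like this.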


