[jira] [Created] (HDFS-10899) Add functionality to re-encrypt EDEKs.

2016-09-23 Thread Xiao Chen (JIRA)
Xiao Chen created HDFS-10899:


 Summary: Add functionality to re-encrypt EDEKs.
 Key: HDFS-10899
 URL: https://issues.apache.org/jira/browse/HDFS-10899
 Project: Hadoop HDFS
  Issue Type: New Feature
  Components: encryption
Reporter: Xiao Chen
Assignee: Xiao Chen


Currently, when an encryption zone (EZ) key is rotated, it only takes effect on 
new EDEKs. We should provide a utility to re-encrypt existing EDEKs after the EZ 
key rotation, for improved security.
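This is not HDFS/KMS code, but the envelope-encryption idea behind re-encryption can be sketched with plain JDK crypto (class and method names below are invented for illustration): the per-file DEK itself never changes; re-encryption only swaps which EZ key version wraps it.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Arrays;

// Illustrative sketch only, not the HDFS/KMS API. AES keys stand in for
// KMS-managed EZ key versions; the EDEK is the DEK encrypted under an EZ key.
public class EdekReencryptSketch {
    static byte[] encrypt(SecretKey key, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES");
        c.init(Cipher.ENCRYPT_MODE, key);
        return c.doFinal(data);
    }

    static byte[] decrypt(SecretKey key, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES");
        c.init(Cipher.DECRYPT_MODE, key);
        return c.doFinal(data);
    }

    // Re-encrypt an EDEK: unwrap with the old EZ key version, wrap with the new.
    static byte[] reencrypt(SecretKey oldEzKey, SecretKey newEzKey, byte[] edek)
            throws Exception {
        return encrypt(newEzKey, decrypt(oldEzKey, edek));
    }

    // Shows that the plaintext DEK recovered via the new EDEK is unchanged.
    public static boolean roundTrip() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            SecretKey oldEz = kg.generateKey();  // EZ key, old version
            SecretKey newEz = kg.generateKey();  // EZ key, rotated version
            SecretKey dek = kg.generateKey();    // per-file data encryption key

            byte[] edek = encrypt(oldEz, dek.getEncoded());
            byte[] newEdek = reencrypt(oldEz, newEz, edek);
            return Arrays.equals(decrypt(newEz, newEdek), dek.getEncoded());
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("DEK preserved: " + roundTrip());
    }
}
```

In the real feature the unwrap/wrap would happen inside the KMS so the EZ key material never leaves it; the sketch only shows why a re-encryption pass leaves file data readable.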



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: hdfs-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-h...@hadoop.apache.org



[jira] [Created] (HDFS-10898) libhdfs++: Make logs more informative and consistent

2016-09-23 Thread James Clampffer (JIRA)
James Clampffer created HDFS-10898:
--

 Summary: libhdfs++: Make logs more informative and consistent
 Key: HDFS-10898
 URL: https://issues.apache.org/jira/browse/HDFS-10898
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: James Clampffer
Assignee: James Clampffer
Priority: Trivial


Most of the public C++ FileHandle/FileSystem operations have a LOG_TRACE-level 
message about the parameters passed in, etc. However, many methods use LOG_DEBUG 
and a couple use LOG_INFO.

We most likely want FS operations that happen a lot (read/open/seek/stat) to 
stick to LOG_DEBUG consistently, and only use LOG_INFO for things like 
FileSystem::Connect or RpcConnection:: that don't get called often and are 
important enough to warrant showing up in the log.  LOG_TRACE can be reserved 
for things happening deeper inside public methods, and for methods that aren't 
part of the public API.

Related improvements that could be brought into this to avoid opening a ton of 
small Jiras:
-Print the "this" pointer address in the log message to make it easier to 
correlate objects when there's concurrent work being done.  This has been very 
helpful in the past but often got stripped out before patches went in.  People 
just need to be aware that operator new may eventually place an object of the 
same type at the same address sometime in the future.
-For objects owned by other objects but created on the fly, include a pointer 
back to the parent/creator object if that pointer is already being tracked (see 
the nested structs in BlockReaderImpl).
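The "this"-pointer tagging idea above can be illustrated with a Java stand-in, System.identityHashCode (the class and method names here are invented for illustration, not libhdfs++ API): prefix each log line with a per-instance tag so interleaved lines from concurrent operations can be pulled apart.

```java
// Sketch of tagging log lines with an object identity so concurrent work on
// different instances can be correlated in a shared log. identityHashCode is
// the Java analog of logging the C++ "this" pointer; like a reused heap
// address, it can in principle repeat after an object dies, so treat tags as
// correlation hints, not permanent unique IDs.
public class IdentityLogSketch {
    public static String tag(Object o, String msg) {
        // e.g. "FileHandle@1b6d3586: open"
        return o.getClass().getSimpleName() + "@"
                + Integer.toHexString(System.identityHashCode(o)) + ": " + msg;
    }

    public static class FileHandle {}

    public static void main(String[] args) {
        FileHandle a = new FileHandle();
        FileHandle b = new FileHandle();
        // Interleaved output from two handles stays attributable to each one.
        System.out.println(tag(a, "open"));
        System.out.println(tag(b, "open"));
        System.out.println(tag(a, "read 4096 bytes"));
    }
}
```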







Apache Hadoop qbt Report: trunk+JDK8 on Linux/ppc64le

2016-09-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/103/

[Sep 22, 2016 4:37:06 PM] (ozawa) HADOOP-13602. Fix some warnings by findbugs 
in hadoop-maven-plugin.
[Sep 22, 2016 6:43:11 PM] (wang) HDFS-10877. Make 
RemoteEditLogManifest.committedTxnId optional in
[Sep 22, 2016 11:12:56 PM] (rkanter) MAPREDUCE-6632. Master.getMasterAddress() 
should be updated to use
[Sep 22, 2016 11:45:34 PM] (rkanter) YARN-4973. YarnWebParams 
next.fresh.interval should be
[Sep 23, 2016 1:00:49 AM] (naganarasimha_gr) YARN-3692. Allow REST API to set a 
user generated message when killing
[Sep 23, 2016 2:36:16 AM] (aengineer) HDFS-10871. DiskBalancerWorkItem should 
not import jackson relocated by
[Sep 23, 2016 7:53:54 AM] (varunsaxena) TimelineClient failed to retry on 
java.net.SocketTimeoutException: Read
[Sep 23, 2016 7:55:46 AM] (varunsaxena) Revert "TimelineClient failed to retry 
on
[Sep 23, 2016 7:57:31 AM] (varunsaxena) YARN-5539. TimelineClient failed to 
retry on
[Sep 23, 2016 9:01:30 AM] (stevel) HADOOP-13643. Math error in 
AbstractContractDistCpTest. Contributed by




-1 overall


The following subsystems voted -1:
compile unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc javac


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer 
   hadoop.hdfs.server.namenode.ha.TestRequestHedgingProxyProvider 
   hadoop.hdfs.TestDatanodeRegistration 
   hadoop.hdfs.web.TestWebHdfsTimeouts 
   hadoop.yarn.server.nodemanager.recovery.TestNMLeveldbStateStoreService 
   hadoop.yarn.server.nodemanager.TestNodeManagerShutdown 
   hadoop.yarn.server.timeline.TestRollingLevelDB 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.timeline.TestTimelineDataManager 
   hadoop.yarn.server.timeline.TestLeveldbTimelineStore 
   hadoop.yarn.server.timeline.recovery.TestLeveldbTimelineStateStore 
   hadoop.yarn.server.timeline.TestRollingLevelDBTimelineStore 
   hadoop.yarn.server.applicationhistoryservice.TestApplicationHistoryServer 
   hadoop.yarn.server.timelineservice.storage.common.TestRowKeys 
   hadoop.yarn.server.timelineservice.storage.common.TestKeyConverters 
   hadoop.yarn.server.timelineservice.storage.common.TestSeparator 
   hadoop.yarn.server.resourcemanager.recovery.TestLeveldbRMStateStore 
   hadoop.yarn.server.resourcemanager.TestRMRestart 
   hadoop.yarn.server.resourcemanager.TestResourceTrackerService 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
   hadoop.yarn.client.api.impl.TestNMClient 
   hadoop.yarn.server.timeline.TestLevelDBCacheTimelineStore 
   hadoop.yarn.server.timeline.TestOverrideTimelineStoreYarnClient 
   hadoop.yarn.server.timeline.TestEntityGroupFSTimelineStore 
   hadoop.yarn.server.timelineservice.storage.TestHBaseTimelineStorage 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRunCompaction 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowRun 
   hadoop.yarn.server.timelineservice.storage.TestPhoenixOfflineAggregationWriterImpl 
   hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage 
   hadoop.yarn.server.timelineservice.storage.flow.TestHBaseStorageFlowActivity 
   hadoop.yarn.applications.distributedshell.TestDistributedShell 
   hadoop.mapred.TestShuffleHandler 
   hadoop.mapreduce.v2.hs.TestHistoryServerLeveldbStateStoreService 
   hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers 

Timed out junit tests :

   org.apache.hadoop.hdfs.TestFileChecksum 
   org.apache.hadoop.hdfs.TestReconstructStripedFile 
   org.apache.hadoop.hdfs.TestWriteReadStripedFile 
   org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCache 
   org.apache.hadoop.mapred.TestMRIntermediateDataEncryption 
   org.apache.hadoop.mapred.TestMROpportunisticMaps 
  

   compile:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/103/artifact/out/patch-compile-root.txt
  [308K]

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/103/artifact/out/patch-compile-root.txt
  [308K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/103/artifact/out/patch-compile-root.txt
  [308K]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/103/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [196K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-ppc/103/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt
  [52K]
   
https://builds.apache.org/job/hadoop-q

[jira] [Created] (HDFS-10897) Ozone: SCM: Add NodeManager

2016-09-23 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-10897:
---

 Summary: Ozone: SCM: Add NodeManager
 Key: HDFS-10897
 URL: https://issues.apache.org/jira/browse/HDFS-10897
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: ozone
Affects Versions: HDFS-7240
Reporter: Anu Engineer
Assignee: Anu Engineer


Add a NodeManager class that will eventually be used by the Storage Container 
Manager (SCM).






Re: HADOOP-13636 and io.bytes.per.checksum

2016-09-23 Thread Andrew Wang
Have you git blamed to dig up the original JIRA conversation? I think that
deprecation predates many of us, so you might not get much historical
perspective from the mailing list.

I'm happy to lend a +1 though, since like you said, it doesn't seem like
that config key is going anywhere.

On Fri, Sep 23, 2016 at 1:52 AM, Steve Loughran 
wrote:

> I got silence from HDFS dev here, so I'm raising it on common dev.
>
> Why is HDFS tagging " io.bytes.per.checksum " as deprecated, given it's an
> option being set in core-default, and used by other filesystems?
>
>
> >INFO  Configuration.deprecation 
> >(Configuration.java:warnOnceIfDeprecated(1182))
> - io.bytes.per.checksum is deprecated. Instead, use dfs.bytes-per-checksum
>
> I don't see why it should be deprecated. If it isn't what HDFS likes, then
> the code there could be smarter: look for the dfs value, and if not found,
> fall back to the io.bytes one, warning the user.
>
> I will volunteer to write this code if I get a promise that someone agrees
> with the premise and is willing to help nurture it in.
>
> Begin forwarded message:
>
> From: Steve Loughran <ste...@hortonworks.com>
> Subject: HADOOP-13636 and io.bytes.per.checksum
> Date: 21 September 2016 at 17:12:00 BST
> To: "hdfs-dev@hadoop.apache.org" <hdfs-dev@hadoop.apache.org>
>
> I'm getting told off for using the deprecated option: io.bytes.per.checksum
>
> https://issues.apache.org/jira/browse/HADOOP-13636
>
> Except: I'm not. FileSystem.getServerDefaults() is, which is used by Trash
> to work out where to delete things.
>
> It strikes me that the system is inconsistent: HdfsConfiguration is
> deprecating a property that everything else is happy to use; I see it in
> four places in production, and various tests, plus core-default.xml
>
> Is it really deprecated? If so, are there any volunteers to remove it from
> the codebase, while pulling up the default value into core-default?
>
> Otherwise: how about turning the complaint off?
>
>


[jira] [Created] (HDFS-10896) Move lock logging logic from FSNamesystem into FSNamesystemLock

2016-09-23 Thread Erik Krogen (JIRA)
Erik Krogen created HDFS-10896:
--

 Summary: Move lock logging logic from FSNamesystem into 
FSNamesystemLock
 Key: HDFS-10896
 URL: https://issues.apache.org/jira/browse/HDFS-10896
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: namenode
Reporter: Erik Krogen
Assignee: Erik Krogen


There are a number of tickets (HDFS-10742, HDFS-10817, HDFS-10713, and this 
subtask's story HDFS-10475) which are adding/improving logging/metrics around 
the {{FSNamesystemLock}}. All of this is done in {{FSNamesystem}} right now, 
which pollutes the namesystem with ThreadLocal variables, timing counters, etc. 
that are only relevant to the lock itself, and the number of these increases as 
the logging/metrics become more sophisticated. It would be best to move all of 
this into {{FSNamesystemLock}} to keep the metrics/logging tied directly to the 
item of interest.
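The proposed shape can be sketched roughly as follows (this is illustrative only, not the actual FSNamesystemLock API; the class name, threshold, and metric are invented): the lock class owns its own ThreadLocal timing state and reporting, so the namesystem just calls lock/unlock.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch of keeping lock-hold instrumentation inside a dedicated lock class
// instead of scattering ThreadLocals and counters through the namesystem.
public class InstrumentedLockSketch {
    private final ReentrantLock lock = new ReentrantLock(true);
    private final long warnThresholdMs;
    // Per-thread timestamp of when this thread acquired the lock.
    private final ThreadLocal<Long> acquiredAt = new ThreadLocal<>();
    private volatile long longestHoldMs;  // a simple metric kept by the lock itself

    public InstrumentedLockSketch(long warnThresholdMs) {
        this.warnThresholdMs = warnThresholdMs;
    }

    public void lock() {
        lock.lock();
        acquiredAt.set(System.currentTimeMillis());
    }

    public void unlock() {
        long heldMs = System.currentTimeMillis() - acquiredAt.get();
        if (heldMs > longestHoldMs) {
            longestHoldMs = heldMs;
        }
        lock.unlock();
        // Report long holds after releasing, so logging never extends the hold.
        if (heldMs > warnThresholdMs) {
            System.out.println("lock held for " + heldMs + " ms (threshold "
                    + warnThresholdMs + " ms)");
        }
    }

    public long getLongestHoldMs() {
        return longestHoldMs;
    }

    // Busy-wait helper used to simulate a slow critical section.
    public static void spinMs(long ms) {
        long end = System.currentTimeMillis() + ms;
        while (System.currentTimeMillis() < end) { }
    }

    public static void main(String[] args) {
        InstrumentedLockSketch l = new InstrumentedLockSketch(10);
        l.lock();
        spinMs(25);  // simulate a slow operation under the lock
        l.unlock();
        System.out.println("longest hold: " + l.getLongestHoldMs() + " ms");
    }
}
```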






Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86

2016-09-23 Thread Apache Jenkins Server
For more details, see 
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/

[Sep 22, 2016 4:37:06 PM] (ozawa) HADOOP-13602. Fix some warnings by findbugs 
in hadoop-maven-plugin.
[Sep 22, 2016 6:43:11 PM] (wang) HDFS-10877. Make 
RemoteEditLogManifest.committedTxnId optional in
[Sep 22, 2016 11:12:56 PM] (rkanter) MAPREDUCE-6632. Master.getMasterAddress() 
should be updated to use
[Sep 22, 2016 11:45:34 PM] (rkanter) YARN-4973. YarnWebParams 
next.fresh.interval should be
[Sep 23, 2016 1:00:49 AM] (naganarasimha_gr) YARN-3692. Allow REST API to set a 
user generated message when killing
[Sep 23, 2016 2:36:16 AM] (aengineer) HDFS-10871. DiskBalancerWorkItem should 
not import jackson relocated by
[Sep 23, 2016 7:53:54 AM] (varunsaxena) TimelineClient failed to retry on 
java.net.SocketTimeoutException: Read
[Sep 23, 2016 7:55:46 AM] (varunsaxena) Revert "TimelineClient failed to retry 
on
[Sep 23, 2016 7:57:31 AM] (varunsaxena) YARN-5539. TimelineClient failed to 
retry on




-1 overall


The following subsystems voted -1:
asflicense unit


The following subsystems voted -1 but
were configured to be filtered/ignored:
cc checkstyle javac javadoc pylint shellcheck shelldocs whitespace


The following subsystems are considered long running:
(runtime bigger than 1h  0m  0s)
unit


Specific tests:

Failed junit tests :

   hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery 
   hadoop.yarn.server.applicationhistoryservice.webapp.TestAHSWebServices 
   hadoop.yarn.server.TestMiniYarnClusterNodeUtilization 
   hadoop.yarn.server.TestContainerManagerSecurity 
  

   cc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/diff-compile-cc-root.txt
  [4.0K]

   javac:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/diff-compile-javac-root.txt
  [168K]

   checkstyle:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/diff-checkstyle-root.txt
  [16M]

   pylint:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/diff-patch-pylint.txt
  [16K]

   shellcheck:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/diff-patch-shellcheck.txt
  [20K]

   shelldocs:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/diff-patch-shelldocs.txt
  [16K]

   whitespace:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/whitespace-eol.txt
  [11M]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/whitespace-tabs.txt
  [1.3M]

   javadoc:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/diff-javadoc-javadoc-root.txt
  [2.2M]

   unit:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
  [188K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice.txt
  [12K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-tests.txt
  [268K]
   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/patch-unit-hadoop-mapreduce-project_hadoop-mapreduce-client_hadoop-mapreduce-client-nativetask.txt
  [120K]

   asflicense:

   
https://builds.apache.org/job/hadoop-qbt-trunk-java8-linux-x86/173/artifact/out/patch-asflicense-problems.txt
  [4.0K]

Powered by Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org




[jira] [Resolved] (HDFS-10888) dfshealth.html#tab-datanode

2016-09-23 Thread Alexey Ivanchin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-10888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Ivanchin resolved HDFS-10888.

   Resolution: Duplicate
Fix Version/s: 2.8.0

> dfshealth.html#tab-datanode
> ---
>
> Key: HDFS-10888
> URL: https://issues.apache.org/jira/browse/HDFS-10888
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: ui
>Affects Versions: 2.7.3
>Reporter: Alexey Ivanchin
>Assignee: Weiwei Yang
> Fix For: 2.8.0
>
>
> When I click on the tab NN:50070/dfshealth.html#tab-overview I see live 
> datanodes and other info. 
> When I click on the tab NN:50070/dfshealth.html#tab-datanode I see a blank 
> page. How can this be fixed?






Fwd: HADOOP-13636 and io.bytes.per.checksum

2016-09-23 Thread Steve Loughran
I got silence from HDFS dev here, so I'm raising it on common dev.

Why is HDFS tagging " io.bytes.per.checksum " as deprecated, given it's an 
option being set in core-default, and used by other filesystems?


>INFO  Configuration.deprecation 
>(Configuration.java:warnOnceIfDeprecated(1182)) - io.bytes.per.checksum is 
>deprecated. Instead, use dfs.bytes-per-checksum

I don't see why it should be deprecated. If it isn't what HDFS likes, then the 
code there could be smarter: look for the dfs value, and if not found, fall 
back to the io.bytes one, warning the user.

I will volunteer to write this code if I get a promise that someone agrees with 
the premise and is willing to help nurture it in.
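The proposed fallback could look roughly like this (a plain Map stands in for Hadoop's Configuration, and the lookup method is invented for illustration; only the two key names come from this thread):

```java
import java.util.Map;

// Sketch of the proposed lookup: prefer the new key, fall back to the
// deprecated one with a warning, and use a default if neither is set.
public class ChecksumKeyFallback {
    public static final String NEW_KEY = "dfs.bytes-per-checksum";
    public static final String OLD_KEY = "io.bytes.per.checksum";

    public static int bytesPerChecksum(Map<String, String> conf, int defaultValue) {
        if (conf.containsKey(NEW_KEY)) {
            return Integer.parseInt(conf.get(NEW_KEY));  // new key wins, no warning
        }
        if (conf.containsKey(OLD_KEY)) {
            System.out.println("warning: " + OLD_KEY
                    + " is deprecated; use " + NEW_KEY);
            return Integer.parseInt(conf.get(OLD_KEY));
        }
        return defaultValue;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new java.util.HashMap<>();
        conf.put(OLD_KEY, "512");
        System.out.println(bytesPerChecksum(conf, 512));  // falls back with a warning

        conf.put(NEW_KEY, "1024");
        System.out.println(bytesPerChecksum(conf, 512));  // new key takes precedence
    }
}
```

This keeps old configurations working while nudging users toward the new name, which is the behavior being asked for above.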

Begin forwarded message:

From: Steve Loughran <ste...@hortonworks.com>
Subject: HADOOP-13636 and io.bytes.per.checksum
Date: 21 September 2016 at 17:12:00 BST
To: "hdfs-dev@hadoop.apache.org" <hdfs-dev@hadoop.apache.org>

I'm getting told off for using the deprecated option: io.bytes.per.checksum

https://issues.apache.org/jira/browse/HADOOP-13636

Except: I'm not. FileSystem.getServerDefaults() is, which is used by Trash to 
work out where to delete things.

It strikes me that the system is inconsistent: HdfsConfiguration is deprecating 
a property that everything else is happy to use; I see it in four places in 
production, and various tests, plus core-default.xml

Is it really deprecated? If so, are there any volunteers to remove it from the 
codebase, while pulling up the default value into core-default?

Otherwise: how about turning the complaint off?