[jira] [Commented] (HDFS-13374) TestCommonConfigurationFields is broken by HADOOP-15312
[ https://issues.apache.org/jira/browse/HDFS-13374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16420112#comment-16420112 ]

Hari Matta commented on HDFS-13374:
-----------------------------------

[~shv] I would like to work on this.

Thanks,
Hari Gopal

> TestCommonConfigurationFields is broken by HADOOP-15312
> -------------------------------------------------------
>
>             Key: HDFS-13374
>             URL: https://issues.apache.org/jira/browse/HDFS-13374
>         Project: Hadoop HDFS
>      Issue Type: Bug
>      Components: test
> Affects Versions: 2.10.0
>        Reporter: Konstantin Shvachko
>        Priority: Major
>
> TestCommonConfigurationFields is failing after HADOOP-15312.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13359) DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream
[ https://issues.apache.org/jira/browse/HDFS-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16420098#comment-16420098 ]

Yiqun Lin commented on HDFS-13359:
----------------------------------

[~jojochuang], thanks for your reference to HDFS-10682.
{quote}
Do you have a reference to a performance measurement between ReentrantLock and object lock? Just curious and would like to learn more about it.
{quote}
See this link: https://www.ibm.com/developerworks/java/library/j-jtp10264/index.html
{quote}
The ReentrantLock class, which implements Lock, has the same concurrency and memory semantics as synchronized, but also adds features like lock polling, timed lock waits, and interruptible lock waits. Additionally, it offers far better performance under heavy contention.
{quote}

> DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream
> ---------------------------------------------------------------------
>
>             Key: HDFS-13359
>             URL: https://issues.apache.org/jira/browse/HDFS-13359
>         Project: Hadoop HDFS
>      Issue Type: Bug
>      Components: datanode
> Affects Versions: 2.7.1
>        Reporter: Yiqun Lin
>        Assignee: Yiqun Lin
>        Priority: Major
>     Attachments: HDFS-13359.001.patch, stack.jpg
>
> DataXceiver hung due to the lock taken by {{FsDatasetImpl#getBlockInputStream}} (stack trace attached).
> {code:java}
> @Override // FsDatasetSpi
> public InputStream getBlockInputStream(ExtendedBlock b,
>     long seekOffset) throws IOException {
>   ReplicaInfo info;
>   synchronized(this) {
>     info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock());
>   }
>   ...
> }
> {code}
> The {{synchronized(this)}} lock used here is expensive; there is already an {{AutoCloseableLock}} defined for {{ReplicaMap}}. We can use it instead.
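The AutoCloseableLock suggestion above can be sketched as follows. This is an illustrative stand-in, assuming a ReentrantLock wrapped so it can be released by try-with-resources; it is not the actual org.apache.hadoop.util.AutoCloseableLock source.

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal analog of the AutoCloseableLock idea: a ReentrantLock that can be
// acquired and then released automatically by try-with-resources, replacing
// a coarse synchronized(this) block. Hypothetical sketch, not Hadoop code.
class AutoCloseableLockSketch implements AutoCloseable {
    private final ReentrantLock lock = new ReentrantLock();

    // Acquire the lock and return this, so the caller's try block holds it.
    AutoCloseableLockSketch acquire() {
        lock.lock();
        return this;
    }

    @Override
    public void close() {
        lock.unlock(); // runs when the try-with-resources block exits
    }

    boolean isHeldByCurrentThread() {
        return lock.isHeldByCurrentThread();
    }
}
```

A getBlockInputStream-style caller would then replace `synchronized(this) { ... }` with `try (AutoCloseableLockSketch l = datasetLock.acquire()) { ... }`, keeping the same critical section while gaining ReentrantLock's better behavior under heavy contention.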
[jira] [Commented] (HDFS-13365) RBF: Adding trace support
[ https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16420075#comment-16420075 ]

genericqa commented on HDFS-13365:
----------------------------------

| (/) *{color:green}+1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 55s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 54s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 45s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13365 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916926/HDFS-13365.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 3616a881e21c 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2216bde |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23731/testReport/ |
| Max. process+thread count | 930 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23731/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> RBF: Adding trace support
> -------------------------
>
> Key: HDFS-13365
> URL: https://issues.apache.org/jira/browse/HDFS-13365
>
[jira] [Commented] (HDFS-10419) Building HDFS on top of new storage layer (HDSL)
[ https://issues.apache.org/jira/browse/HDFS-10419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16420058#comment-16420058 ]

Konstantin Shvachko commented on HDFS-10419:
--------------------------------------------

Looks like "Hadoop Distributed Data Store" is winning by popularity. I see collisions for most acronyms I can come up with for HD**, so I guess we should just go with the winning name.

> Building HDFS on top of new storage layer (HDSL)
> ------------------------------------------------
>
>             Key: HDFS-10419
>             URL: https://issues.apache.org/jira/browse/HDFS-10419
>         Project: Hadoop HDFS
>      Issue Type: New Feature
>        Reporter: Jing Zhao
>        Assignee: Jing Zhao
>        Priority: Major
>     Attachments: Evolving NN using new block-container layer.pdf
>
> In HDFS-7240, Ozone defines storage containers to store both the data and the metadata. The storage container layer provides an object storage interface and aims to manage data/metadata in a distributed manner. More details about storage containers can be found in the design doc in HDFS-7240.
> HDFS can adopt the storage containers to store and manage blocks. The general idea is:
> # Each block can be treated as an object and the block ID is the object's key.
> # Blocks will still be stored in DataNodes but as objects in storage containers.
> # The block management work can be separated out of the NameNode and will be handled by the storage container layer in a more distributed way. The NameNode will only manage the namespace (i.e., files and directories).
> # For each file, the NameNode only needs to record a list of block IDs which are used as keys to obtain real data from storage containers.
> # A new DFSClient implementation talks to both the NameNode and the storage container layer to read/write.
> HDFS, especially the NameNode, can get much better scalability from this design. Currently the NameNode's heaviest workload comes from block management, which includes maintaining the block-DataNode mapping, receiving full/incremental block reports, tracking block states (under/over/mis-replicated), and joining every write pipeline protocol to guarantee data consistency. This work brings a high memory footprint and makes the NameNode suffer from GC. HDFS-5477 already proposes to convert the BlockManager into a service. If we build HDFS on top of the storage container layer, we not only separate the BlockManager out of the NameNode, but also replace it with a new distributed management scheme.
> The storage container work is currently in progress in HDFS-7240, and the work proposed here is still in an experimental/exploring stage. We can do this experiment in a feature branch so that people with interests can be involved.
> A design doc will be uploaded later explaining more details.
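The file-to-block split described in the design above can be illustrated with a toy sketch. All class and method names here are hypothetical, and two in-memory maps stand in for the NameNode namespace and the remote container layer; this is not the Ozone/HDSL API.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the proposed split: the "NameNode" keeps only file -> block-ID
// lists, while block bytes live in a separate store keyed by block ID (the
// storage container layer). Names and types are illustrative only.
class NamespaceSketch {
    private final Map<String, List<Long>> fileToBlockIds = new HashMap<>();
    private final Map<Long, byte[]> containerStore = new HashMap<>();

    // Record a block ID under the file and store its bytes under that key.
    void addBlock(String path, long blockId, byte[] data) {
        fileToBlockIds.computeIfAbsent(path, p -> new ArrayList<>()).add(blockId);
        containerStore.put(blockId, data); // in the real design, a remote call
    }

    // Reading resolves the file's block IDs against the container store.
    byte[] read(String path) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (long id : fileToBlockIds.getOrDefault(path, List.of())) {
            out.writeBytes(containerStore.get(id));
        }
        return out.toByteArray();
    }
}
```

The point of the sketch is the interface boundary: the namespace side never sees block bytes or DataNode locations, only opaque IDs, which is what lets block management move out of the NameNode.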
[jira] [Comment Edited] (HDFS-13374) TestCommonConfigurationFields is broken by HADOOP-15312
[ https://issues.apache.org/jira/browse/HDFS-13374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16420047#comment-16420047 ]

Konstantin Shvachko edited comment on HDFS-13374 at 3/30/18 1:29 AM:
---------------------------------------------------------------------

Here is the log
{noformat}
2018-03-29 14:54:59,747 INFO conf.TestConfigurationFieldsBase (TestConfigurationFieldsBase.java:testCompareXmlAgainstConfigurationClass(524)) - File core-default.xml (170 properties)
2018-03-29 14:54:59,757 INFO conf.TestConfigurationFieldsBase (TestConfigurationFieldsBase.java:testCompareXmlAgainstConfigurationClass(532)) - core-default.xml has 2 properties missing in
  class org.apache.hadoop.fs.CommonConfigurationKeys
  class org.apache.hadoop.fs.CommonConfigurationKeysPublic
  class org.apache.hadoop.fs.local.LocalConfigKeys
  class org.apache.hadoop.fs.ftp.FtpConfigKeys
  class org.apache.hadoop.ha.SshFenceByTcpPort
  class org.apache.hadoop.security.LdapGroupsMapping
  class org.apache.hadoop.ha.ZKFailoverController
  class org.apache.hadoop.security.ssl.SSLFactory
  class org.apache.hadoop.security.CompositeGroupsMapping
  class org.apache.hadoop.io.erasurecode.CodecUtil
  class org.apache.hadoop.security.RuleBasedLdapGroupsMapping
2018-03-29 14:54:59,759 INFO conf.TestConfigurationFieldsBase (TestConfigurationFieldsBase.java:lambda$appendMissingEntries$30(507)) - hadoop.security.key.default.bitlength
2018-03-29 14:54:59,760 INFO conf.TestConfigurationFieldsBase (TestConfigurationFieldsBase.java:lambda$appendMissingEntries$30(507)) - hadoop.security.key.default.cipher
{noformat}

was (Author: shv):
Here is the log
{code}
2018-03-29 14:54:59,747 INFO conf.TestConfigurationFieldsBase (TestConfigurationFieldsBase.java:testCompareXmlAgainstConfigurationClass(524)) - File core-default.xml (170 properties)
2018-03-29 14:54:59,757 INFO conf.TestConfigurationFieldsBase (TestConfigurationFieldsBase.java:testCompareXmlAgainstConfigurationClass(532)) - core-default.xml has 2 properties missing in
  class org.apache.hadoop.fs.CommonConfigurationKeys
  class org.apache.hadoop.fs.CommonConfigurationKeysPublic
  class org.apache.hadoop.fs.local.LocalConfigKeys
  class org.apache.hadoop.fs.ftp.FtpConfigKeys
  class org.apache.hadoop.ha.SshFenceByTcpPort
  class org.apache.hadoop.security.LdapGroupsMapping
  class org.apache.hadoop.ha.ZKFailoverController
  class org.apache.hadoop.security.ssl.SSLFactory
  class org.apache.hadoop.security.CompositeGroupsMapping
  class org.apache.hadoop.io.erasurecode.CodecUtil
  class org.apache.hadoop.security.RuleBasedLdapGroupsMapping
2018-03-29 14:54:59,759 INFO conf.TestConfigurationFieldsBase (TestConfigurationFieldsBase.java:lambda$appendMissingEntries$30(507)) - hadoop.security.key.default.bitlength
2018-03-29 14:54:59,760 INFO conf.TestConfigurationFieldsBase (TestConfigurationFieldsBase.java:lambda$appendMissingEntries$30(507)) - hadoop.security.key.default.cipher
{code}

> TestCommonConfigurationFields is broken by HADOOP-15312
> -------------------------------------------------------
>
>             Key: HDFS-13374
>             URL: https://issues.apache.org/jira/browse/HDFS-13374
>         Project: Hadoop HDFS
>      Issue Type: Bug
>      Components: test
> Affects Versions: 2.10.0
>        Reporter: Konstantin Shvachko
>        Priority: Major
>
> TestCommonConfigurationFields is failing after HADOOP-15312.
[jira] [Commented] (HDFS-13374) TestCommonConfigurationFields is broken by HADOOP-15312
[ https://issues.apache.org/jira/browse/HDFS-13374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16420047#comment-16420047 ]

Konstantin Shvachko commented on HDFS-13374:
--------------------------------------------

Here is the log
{code}
2018-03-29 14:54:59,747 INFO conf.TestConfigurationFieldsBase (TestConfigurationFieldsBase.java:testCompareXmlAgainstConfigurationClass(524)) - File core-default.xml (170 properties)
2018-03-29 14:54:59,757 INFO conf.TestConfigurationFieldsBase (TestConfigurationFieldsBase.java:testCompareXmlAgainstConfigurationClass(532)) - core-default.xml has 2 properties missing in
  class org.apache.hadoop.fs.CommonConfigurationKeys
  class org.apache.hadoop.fs.CommonConfigurationKeysPublic
  class org.apache.hadoop.fs.local.LocalConfigKeys
  class org.apache.hadoop.fs.ftp.FtpConfigKeys
  class org.apache.hadoop.ha.SshFenceByTcpPort
  class org.apache.hadoop.security.LdapGroupsMapping
  class org.apache.hadoop.ha.ZKFailoverController
  class org.apache.hadoop.security.ssl.SSLFactory
  class org.apache.hadoop.security.CompositeGroupsMapping
  class org.apache.hadoop.io.erasurecode.CodecUtil
  class org.apache.hadoop.security.RuleBasedLdapGroupsMapping
2018-03-29 14:54:59,759 INFO conf.TestConfigurationFieldsBase (TestConfigurationFieldsBase.java:lambda$appendMissingEntries$30(507)) - hadoop.security.key.default.bitlength
2018-03-29 14:54:59,760 INFO conf.TestConfigurationFieldsBase (TestConfigurationFieldsBase.java:lambda$appendMissingEntries$30(507)) - hadoop.security.key.default.cipher
{code}

> TestCommonConfigurationFields is broken by HADOOP-15312
> -------------------------------------------------------
>
>             Key: HDFS-13374
>             URL: https://issues.apache.org/jira/browse/HDFS-13374
>         Project: Hadoop HDFS
>      Issue Type: Bug
>      Components: test
> Affects Versions: 2.10.0
>        Reporter: Konstantin Shvachko
>        Priority: Major
>
> TestCommonConfigurationFields is failing after HADOOP-15312.
[jira] [Created] (HDFS-13374) TestCommonConfigurationFields is broken by HADOOP-15312
Konstantin Shvachko created HDFS-13374:
------------------------------------------

             Summary: TestCommonConfigurationFields is broken by HADOOP-15312
                 Key: HDFS-13374
                 URL: https://issues.apache.org/jira/browse/HDFS-13374
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: test
    Affects Versions: 2.10.0
            Reporter: Konstantin Shvachko

TestCommonConfigurationFields is failing after HADOOP-15312.
[jira] [Resolved] (HDFS-898) Sequential generation of block ids
[ https://issues.apache.org/jira/browse/HDFS-898?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo Nicholas Sze resolved HDFS-898.
--------------------------------------
    Resolution: Duplicate

This was done by HDFS-4645. Resolving ...

> Sequential generation of block ids
> ----------------------------------
>
>             Key: HDFS-898
>             URL: https://issues.apache.org/jira/browse/HDFS-898
>         Project: Hadoop HDFS
>      Issue Type: Improvement
>      Components: namenode
> Affects Versions: 0.20.1
>        Reporter: Konstantin Shvachko
>        Priority: Major
>     Attachments: DuplicateBlockIds.patch, FreeBlockIds.pdf, HighBitProjection.pdf, blockid.tex, blockid20100122.pdf
>
> This is a proposal to replace random generation of block ids with a sequential generator in order to avoid block id reuse in the future.
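For context, the core of the sequential scheme that superseded random IDs (HDFS-4645) can be sketched as below. The class and field names are hypothetical, and persistence of the counter (which the real NameNode does through its edit log) is omitted.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of sequential block ID generation: IDs come from a monotonically
// increasing counter, so a freshly issued ID can never collide with one
// issued earlier -- unlike random 64-bit IDs, which can repeat.
class SequentialIdSketch {
    private final AtomicLong lastId;

    // Initialize from the highest ID ever allocated (persisted state).
    SequentialIdSketch(long lastAllocatedId) {
        this.lastId = new AtomicLong(lastAllocatedId);
    }

    // Thread-safe: each call returns a new, never-before-used ID.
    long nextId() {
        return lastId.incrementAndGet();
    }
}
```

The design trade-off is that the counter must survive restarts: as long as the last allocated ID is durably recorded, uniqueness holds for the lifetime of the cluster.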
[jira] [Comment Edited] (HDFS-13248) RBF: Namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16420027#comment-16420027 ]

Íñigo Goiri edited comment on HDFS-13248 at 3/30/18 1:06 AM:
-------------------------------------------------------------

[~ajayydv] that sounds promising, do you mind posting a PoC? As long as we don't modify the ClientProtocol, I'm fine with that.

was (Author: elgoiri):
[~ajayydv] that sounds promising, do you mind posting a PoC?

> RBF: Namenode need to choose block location for the client
> ----------------------------------------------------------
>
>             Key: HDFS-13248
>             URL: https://issues.apache.org/jira/browse/HDFS-13248
>         Project: Hadoop HDFS
>      Issue Type: Sub-task
>        Reporter: Weiwei Wu
>        Assignee: Íñigo Goiri
>        Priority: Major
>     Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
> When executing a put operation via the Router, the NameNode will choose block locations for the Router, not for the real client. This affects the file's locality.
> I think on both the NameNode and the Router, we should add a new addBlock method, or add a parameter to the current addBlock method, to pass the real client information.
[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16420027#comment-16420027 ]

Íñigo Goiri commented on HDFS-13248:
------------------------------------

[~ajayydv] that sounds promising, do you mind posting a PoC?

> RBF: Namenode need to choose block location for the client
> ----------------------------------------------------------
>
>             Key: HDFS-13248
>             URL: https://issues.apache.org/jira/browse/HDFS-13248
>         Project: Hadoop HDFS
>      Issue Type: Sub-task
>        Reporter: Weiwei Wu
>        Assignee: Íñigo Goiri
>        Priority: Major
>     Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
> When executing a put operation via the Router, the NameNode will choose block locations for the Router, not for the real client. This affects the file's locality.
> I think on both the NameNode and the Router, we should add a new addBlock method, or add a parameter to the current addBlock method, to pass the real client information.
[jira] [Commented] (HDFS-13365) RBF: Adding trace support
[ https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16420026#comment-16420026 ]

Íñigo Goiri commented on HDFS-13365:
------------------------------------

I added the super user check. I had also forgotten to create the new spans in the RouterRpcClient, so I added those as well.

> RBF: Adding trace support
> -------------------------
>
>             Key: HDFS-13365
>             URL: https://issues.apache.org/jira/browse/HDFS-13365
>         Project: Hadoop HDFS
>      Issue Type: Sub-task
>        Reporter: Íñigo Goiri
>        Assignee: Íñigo Goiri
>        Priority: Major
>     Attachments: HDFS-13365.000.patch, HDFS-13365.001.patch, HDFS-13365.003.patch
>
> We should support HTrace and add spans.
[jira] [Updated] (HDFS-13365) RBF: Adding trace support
[ https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri updated HDFS-13365:
-------------------------------
    Attachment: HDFS-13365.003.patch

> RBF: Adding trace support
> -------------------------
>
>             Key: HDFS-13365
>             URL: https://issues.apache.org/jira/browse/HDFS-13365
>         Project: Hadoop HDFS
>      Issue Type: Sub-task
>        Reporter: Íñigo Goiri
>        Assignee: Íñigo Goiri
>        Priority: Major
>     Attachments: HDFS-13365.000.patch, HDFS-13365.001.patch, HDFS-13365.003.patch
>
> We should support HTrace and add spans.
[jira] [Commented] (HDFS-13248) RBF: Namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16420014#comment-16420014 ]

Ajay Kumar commented on HDFS-13248:
-----------------------------------

[~elgoiri], may I propose an alternate approach: add an optional clientIp to the related method signatures of {{NameNodeRpcServer}}, and pass this additional param via the {{RouterRpcClient}} reflection calls. Since we can determine the clientIp at {{RouterRpcClient}}, we don't have to modify end clients.

> RBF: Namenode need to choose block location for the client
> ----------------------------------------------------------
>
>             Key: HDFS-13248
>             URL: https://issues.apache.org/jira/browse/HDFS-13248
>         Project: Hadoop HDFS
>      Issue Type: Sub-task
>        Reporter: Weiwei Wu
>        Assignee: Íñigo Goiri
>        Priority: Major
>     Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg
>
> When executing a put operation via the Router, the NameNode will choose block locations for the Router, not for the real client. This affects the file's locality.
> I think on both the NameNode and the Router, we should add a new addBlock method, or add a parameter to the current addBlock method, to pass the real client information.
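The alternate approach above, an optional clientIp on the server-side methods, might look roughly like this. The class and method names are invented for illustration and do not match the real NameNodeRpcServer API.

```java
// Hypothetical sketch of an optional clientIp parameter: keep the existing
// entry point for direct clients, and add an overload the Router can call
// with the originating client's address, so block placement favors the real
// client's locality instead of the Router's.
class BlockPlacementSketch {
    // Existing path: use the RPC caller's own address.
    String placementHintFor(String rpcCallerIp) {
        return placementHintFor(rpcCallerIp, null);
    }

    // New overload: a non-null clientIp overrides the RPC caller's address.
    String placementHintFor(String rpcCallerIp, String clientIp) {
        // The placement policy would use this address for locality decisions.
        return (clientIp != null) ? clientIp : rpcCallerIp;
    }
}
```

Because only the Router would invoke the new overload (via its reflection-based calls), existing clients keep working against the unchanged signature, which is what makes this approach attractive without touching ClientProtocol.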
[jira] [Commented] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16420002#comment-16420002 ]

genericqa commented on HDFS-13371:
----------------------------------

| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 26s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 24s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m 51s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13371 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916910/HDFS-13371.000.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 2c3e03cc19d0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 2c6cfad |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23730/testReport/ |
| Max. process+thread count | 331 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client U: hadoop-hdfs-project/hadoop-hdfs-client |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23730/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> NPE for
[jira] [Comment Edited] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16419948#comment-16419948 ]

Erik Krogen edited comment on HDFS-13331 at 3/29/18 11:48 PM:
--------------------------------------------------------------

One way to achieve it would be to make {{alignmentContext}} a ThreadLocal in {{Client}}. This matches well with how other state is maintained within {{Client}}. Then, before calling into the {{ClientProtocol}}, {{DFSClient}} can set the proper {{alignmentContext}}. It can reset it after the call completes.

We could facilitate this by bundling it into a {{Closeable}} so that it could be done in a try-with-resources statement like:
{code:java}
try (Closeable ctx = Client.setAlignmentContext(alignmentContext)) {
  namenode.doWhateverCall(...);
}
{code}
where {{setAlignmentContext}} returns a {{Closeable}} whose {{close}} method sets the alignment context back to null (or whatever it was previously). This requires the fewest changes outside of DFSClient, but feels kind of hacky.

Given that {{Client}} already maintains a {{ConnectionId}} -> {{Connection}} mapping, it seems we could leverage this to achieve a cleaner solution. The place this reference should really be stored is within {{Connection}} (or maybe {{ConnectionId}}, but that seems a little less clean). The {{Connection}} objects are created via {{Client#getConnection}}, which is called by {{Client#call}} -> {{ProtobufRpcEngine#invoke}}, so we could have the {{DFSClient}} pass in its {{alignmentContext}} when it creates its proxy.

Any thoughts on this approach? Sorry for not chiming in on the last JIRA, btw.

was (Author: xkrogen):
One way to achieve it would be to make {{alignmentContext}} a ThreadLocal in {{Client}}. This matches well with how other state is maintained within {{Client}}. Then, before calling into the {{ClientProtocol}}, {{DFSClient}} can set the proper {{alignmentContext}}. It can reset it after the call completes.

We could facilitate this by bundling it into a {{Closeable}} so that it could be done in a try-with-resources statement like:
{code:java}
try (Closeable ctx = Client.setAlignmentContext(alignmentContext)) {
  namenode.doWhateverCall(...);
}
{code}
where {{setAlignmentContext}} returns a {{Closeable}} whose {{close}} method sets the alignment context back to null (or whatever it was previously). This requires the fewest changes outside of DFSClient, but feels kind of hacky.

Given that {{Client}} already maintains a {{ConnectionId}} -> {{Connection}} mapping, it seems we could leverage this to achieve a cleaner solution. The place this reference should really be stored is within {{Connection}} (or maybe {{ConnectionId}}, but that seems a little less clean). The {{Connection}}s are created via {{Client#getConnection}}, which is called by {{Client#call}} -> {{ProtobufRpcEngine#invoke}}, so we could have the {{DFSClient}} pass in its {{alignmentContext}} when it creates its proxy.

Any thoughts on this approach? Sorry for not chiming in on the last JIRA, btw.

> Add lastSeenStateId to RpcRequestHeader.
> ----------------------------------------
>
>             Key: HDFS-13331
>             URL: https://issues.apache.org/jira/browse/HDFS-13331
>         Project: Hadoop HDFS
>      Issue Type: Sub-task
> Affects Versions: HDFS-12943
>        Reporter: Plamen Jeliazkov
>        Assignee: Plamen Jeliazkov
>        Priority: Major
>     Attachments: HDFS-13331-HDFS-12943.002.patch, HDFS-13331-HDFS-12943.003..patch, HDFS-13331.trunk.001.patch, HDFS_13331.trunk.000.patch
>
> HDFS-12977 added a stateId into the RpcResponseHeader which is returned by the NameNode and stored by the DFSClient.
> This JIRA is a follow-up to that work: have the DFSClient send its stored "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can compare it with their own and act accordingly.
> This JIRA focuses on just the part of making the DFSClient send its state through the RpcRequestHeader.
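The ThreadLocal-plus-Closeable idea above can be sketched as follows; Client and the alignment-context type here are stand-ins for illustration, not the real Hadoop classes.

```java
// Sketch of the try-with-resources pattern described above: install an
// alignment context for the current thread, and restore the previous value
// when the try block exits. All names are illustrative.
class ClientSketch {
    static final ThreadLocal<Object> ALIGNMENT_CONTEXT = new ThreadLocal<>();

    // Install 'ctx' and return a Closeable-like handle that restores the
    // prior value (often null) when closed.
    static AutoCloseable setAlignmentContext(Object ctx) {
        Object previous = ALIGNMENT_CONTEXT.get();
        ALIGNMENT_CONTEXT.set(ctx);
        return () -> ALIGNMENT_CONTEXT.set(previous);
    }
}
```

A call site would look like `try (AutoCloseable ignored = ClientSketch.setAlignmentContext(ctx)) { /* RPC call */ }`; restoring the previous value rather than always nulling makes the pattern safe if contexts ever nest.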
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419948#comment-16419948 ] Erik Krogen edited comment on HDFS-13331 at 3/29/18 11:47 PM: -- One way to achieve it would be to make {{alignmentContext}} a ThreadLocal in {{Client}}. This matches well with how other state is maintained within {{Client}}. Then, before calling into the {{ClientProtocol}}, {{DFSClient}} can set the proper {{alignmentContext}} and reset it after the call completes. We could facilitate this by bundling it into a {{Closeable}} so that it could be done in a try-with-resources statement like: {code:java} try (AutoCloseable scope = Client.setAlignmentContext(alignmentContext)) { namenode.doWhateverCall(...); } {code} Here {{setAlignmentContext}} returns a {{Closeable}} whose {{close}} method sets the alignment context back to null (or whatever it was previously). This requires the fewest changes outside of DFSClient, but feels kind of hacky. Given that {{Client}} already maintains a {{ConnectionId}} -> {{Connection}} map, it seems we could leverage this to achieve a cleaner solution. The natural place to store this reference is within {{Connection}}, or maybe {{ConnectionId}}, but that seems a little less clean. The {{Connection}} objects are created via {{Client#getConnection}}, which is called by {{Client#call}} -> {{ProtobufRpcEngine#invoke}}, so we could have {{DFSClient}} pass in its {{alignmentContext}} when it creates its proxy. Any thoughts on this approach? Sorry for not chiming in on the last JIRA, btw. was (Author: xkrogen): One way to achieve it would be to make {{alignmentContext}} a ThreadLocal in {{Client}}. This matches well with how other state is maintained within {{Client}}. Then, before calling into the {{ClientProtocol}}, {{DFSClient}} can set the proper {{alignmentContext}}. It can (optionally?) reset it after the call completes. 
We could facilitate this by bundling it into a {{Closable}} so that it could be done in a try-with-resource statement like: {code:java} try (Client.setAlignmentContext(alignmentContext)) { namenode.doWhateverCall(...) } {code} This requires the least changes outside of DFSClient, but feels kind of hacky. Given the {{Client}} already maintains a map of {{ConnectionId}} -> {{Connection}} mapping, it seems we could leverage this to achieve a cleaner solution. It seems that really the place this reference should be stored is within {{Connection}}, or maybe {{ConnectionId}} but that seems a little less clean. The {{Connection}}s are created via {{Client#getConnection}}, which is called by {{Client#call}} -> {{ProtobufRpcEngine#invoke}}, so we could have the {{DFSClient}} pass in its {{alignmentContext}} when it creates its proxy. Any thoughts on this approach? Sorry for not chiming in on the last JIRA, btw. > Add lastSeenStateId to RpcRequestHeader. > > > Key: HDFS-13331 > URL: https://issues.apache.org/jira/browse/HDFS-13331 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13331-HDFS-12943.002.patch, > HDFS-13331-HDFS-12943.003..patch, HDFS-13331.trunk.001.patch, > HDFS_13331.trunk.000.patch > > > HDFS-12977 added a stateId into the RpcResponseHeader which is returned by > NameNode and stored by DFSClient. > This JIRA is to followup on that work and have the DFSClient send their > stored "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can > then compare with their own and act accordingly. > This JIRA work focuses on just the part of making DFSClient send their state > through RpcRequestHeader. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419948#comment-16419948 ] Erik Krogen commented on HDFS-13331: One way to achieve it would be to make {{alignmentContext}} a ThreadLocal in {{Client}}. This matches well with how other state is maintained within {{Client}}. Then, before calling into the {{ClientProtocol}}, {{DFSClient}} can set the proper {{alignmentContext}}. It can (optionally?) reset it after the call completes. We could facilitate this by bundling it into a {{Closeable}} so that it could be done in a try-with-resources statement like: {code:java} try (AutoCloseable scope = Client.setAlignmentContext(alignmentContext)) { namenode.doWhateverCall(...); } {code} This requires the fewest changes outside of DFSClient, but feels kind of hacky. Given that {{Client}} already maintains a {{ConnectionId}} -> {{Connection}} map, it seems we could leverage this to achieve a cleaner solution. The natural place to store this reference is within {{Connection}}, or maybe {{ConnectionId}}, but that seems a little less clean. The {{Connection}} objects are created via {{Client#getConnection}}, which is called by {{Client#call}} -> {{ProtobufRpcEngine#invoke}}, so we could have {{DFSClient}} pass in its {{alignmentContext}} when it creates its proxy. Any thoughts on this approach? Sorry for not chiming in on the last JIRA, btw. > Add lastSeenStateId to RpcRequestHeader. > > > Key: HDFS-13331 > URL: https://issues.apache.org/jira/browse/HDFS-13331 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13331-HDFS-12943.002.patch, > HDFS-13331-HDFS-12943.003..patch, HDFS-13331.trunk.001.patch, > HDFS_13331.trunk.000.patch > > > HDFS-12977 added a stateId into the RpcResponseHeader which is returned by > NameNode and stored by DFSClient. 
> This JIRA is to followup on that work and have the DFSClient send their > stored "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can > then compare with their own and act accordingly. > This JIRA work focuses on just the part of making DFSClient send their state > through RpcRequestHeader. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
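To make the ThreadLocal-plus-{{Closeable}} idea from the comment above concrete, here is a minimal, self-contained sketch. All names ({{AlignmentContext}}, {{setAlignmentContext}}, {{current}}) are hypothetical stand-ins following the comment's naming, not Hadoop's actual {{Client}} API, and the context type is reduced to a single stateId field:

```java
import java.util.concurrent.atomic.AtomicLong;

public class AlignmentScope {
    // Stand-in for the real AlignmentContext: just carries a lastSeenStateId.
    static class AlignmentContext {
        final AtomicLong lastSeenStateId = new AtomicLong();
    }

    // Per-thread context, matching how other per-call state is kept in Client.
    private static final ThreadLocal<AlignmentContext> CURRENT = new ThreadLocal<>();

    // Returns an AutoCloseable whose close() restores the previous context,
    // enabling the try-with-resources usage shown in the comment.
    static AutoCloseable setAlignmentContext(AlignmentContext ctx) {
        AlignmentContext previous = CURRENT.get();
        CURRENT.set(ctx);
        return () -> {
            if (previous == null) {
                CURRENT.remove();
            } else {
                CURRENT.set(previous);
            }
        };
    }

    static AlignmentContext current() {
        return CURRENT.get();
    }

    public static void main(String[] args) throws Exception {
        AlignmentContext ctx = new AlignmentContext();
        try (AutoCloseable scope = setAlignmentContext(ctx)) {
            // Inside the scope, RPC machinery on this thread would read current().
            System.out.println(current() == ctx);   // true
        }
        System.out.println(current() == null);      // true: restored on close()
    }
}
```

Saving and restoring the previous context (rather than unconditionally nulling it) keeps nested scopes safe, which is the "or whatever it was previously" behavior mentioned above.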
[jira] [Commented] (HDFS-10867) [PROVIDED Phase 2] Block Bit Field Allocation of Provided Storage
[ https://issues.apache.org/jira/browse/HDFS-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419934#comment-16419934 ] Chris Douglas commented on HDFS-10867: -- bq. Don't forget about legacy negative block ids that already aren't compatible with EC assumptions. You can't rob the upper bits of a block id. *nod* we saw HDFS-13350. Each partition increases the chances of a collision with a legacy ID (HDFS-898, particularly [~szetszwo]'s analysis). I'd stop short of considering partitioning as "robbing" the block ID namespace, though. We need a fix for EC blocks, and if our solution doesn't also cover this case, then it's probably incomplete. Since these features are only in 3.x, I would be comfortable adding a step to the upgrade guide that instructs users to run fsck and re-copy files with legacy block IDs (or never use these features). Of course, a tool or comprehensive solution would be better. When crossing a major version, we can stop supporting some cases. Requiring work from users to eliminate legacy block IDs as an ongoing concern... seems like an OK tradeoff, to me. > [PROVIDED Phase 2] Block Bit Field Allocation of Provided Storage > - > > Key: HDFS-10867 > URL: https://issues.apache.org/jira/browse/HDFS-10867 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Ewan Higgs >Priority: Major > Attachments: Block Bit Field Allocation of Provided Storage.pdf > > > We wish to design and implement the following related features for provided > storage: > # Dynamic mounting of provided storage within a Namenode (mount, unmount) > # Mount multiple provided storage systems on a single Namenode. > # Support updates to the provided storage system without having to regenerate > an fsimg. > A mount in the namespace addresses a corresponding set of block data. When > unmounted, any block data associated with the mount becomes invalid and > (eventually) unaddressable in HDFS. 
As with erasure-coded blocks, efficient > unmounting requires that all blocks with that attribute be identifiable by > the block management layer > In this subtask, we focus on changes and conventions to the block management > layer. Namespace operations are covered in a separate subtask. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
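The legacy-block-ID constraint discussed above can be illustrated with a small sketch: randomly generated (pre-sequential) block IDs may be negative, so a bit-field partition for provided storage cannot reuse the sign bit. The tag width and layout here are illustrative only, not taken from the attached design doc:

```java
public class BlockIdBits {
    // Hypothetical layout: reserve 4 bits just below the sign bit to tag
    // provided-storage mounts. Widths are assumptions for illustration.
    static final int TAG_BITS = 4;
    static final int TAG_SHIFT = 63 - TAG_BITS; // leaves the sign bit untouched

    // Randomly generated legacy block IDs may have the sign bit set, so a
    // negative ID cannot be assumed to carry any partition information.
    static boolean isLegacyRandomId(long blockId) {
        return blockId < 0;
    }

    // Extract the mount tag from a (non-legacy) block ID.
    static long tagOf(long blockId) {
        return (blockId >>> TAG_SHIFT) & ((1L << TAG_BITS) - 1);
    }

    public static void main(String[] args) {
        long sequentialId = 1073741825L;        // sequentially allocated ID
        long legacyId = -7893256985228244731L;  // randomly generated, negative
        System.out.println(isLegacyRandomId(sequentialId)); // false
        System.out.println(isLegacyRandomId(legacyId));     // true
        System.out.println(tagOf(sequentialId));            // 0: untagged
    }
}
```

Any real scheme would also have to handle collisions between tagged IDs and surviving legacy IDs, which is exactly the fsck-and-recopy concern raised in the comment.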
[jira] [Updated] (HDFS-13087) Snapshotted encryption zone information should be immutable
[ https://issues.apache.org/jira/browse/HDFS-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-13087: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.1 3.0.2 Status: Resolved (was: Patch Available) Pushed to trunk and branch-3.[0-1]. Thanks for the contribution, [~GeLiXin]! > Snapshotted encryption zone information should be immutable > --- > > Key: HDFS-13087 > URL: https://issues.apache.org/jira/browse/HDFS-13087 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 3.1.0 >Reporter: LiXin Ge >Assignee: LiXin Ge >Priority: Major > Fix For: 3.0.2, 3.2.0, 3.1.1 > > Attachments: HDFS-13087.001.patch, HDFS-13087.002.patch, > HDFS-13087.003.patch, HDFS-13087.004.patch, HDFS-13087.005.patch > > > Snapshots are supposed to be immutable and read only, so the EZ settings > which in a snapshot path shouldn't change when the origin encryption zone > changes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13087) Snapshotted encryption zone information should be immutable
[ https://issues.apache.org/jira/browse/HDFS-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-13087: - Fix Version/s: 3.2.0 > Snapshotted encryption zone information should be immutable > --- > > Key: HDFS-13087 > URL: https://issues.apache.org/jira/browse/HDFS-13087 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 3.1.0 >Reporter: LiXin Ge >Assignee: LiXin Ge >Priority: Major > Fix For: 3.2.0 > > Attachments: HDFS-13087.001.patch, HDFS-13087.002.patch, > HDFS-13087.003.patch, HDFS-13087.004.patch, HDFS-13087.005.patch > > > Snapshots are supposed to be immutable and read only, so the EZ settings > which in a snapshot path shouldn't change when the origin encryption zone > changes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sherwood Zheng updated HDFS-13371: -- Attachment: HDFS-13371.000.patch > NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication > between 2.7 and 3.2 > - > > Key: HDFS-13371 > URL: https://issues.apache.org/jira/browse/HDFS-13371 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0 >Reporter: Sherwood Zheng >Assignee: Sherwood Zheng >Priority: Minor > Attachments: HDFS-13371.000.patch > > > KeyProviderUri is not available in 2.7 so when 2.7 clients contact with 3.2 > services, it cannot find the key provider URI and triggers a > NullPointerException. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sherwood Zheng updated HDFS-13371: -- Attachment: (was: HADOOP-15336.000.patch) > NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication > between 2.7 and 3.2 > - > > Key: HDFS-13371 > URL: https://issues.apache.org/jira/browse/HDFS-13371 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0 >Reporter: Sherwood Zheng >Assignee: Sherwood Zheng >Priority: Minor > Attachments: HDFS-13371.000.patch > > > KeyProviderUri is not available in 2.7 so when 2.7 clients contact with 3.2 > services, it cannot find the key provider URI and triggers a > NullPointerException. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sherwood Zheng updated HDFS-13371: -- Attachment: (was: HADOOP-15336.001.patch) > NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication > between 2.7 and 3.2 > - > > Key: HDFS-13371 > URL: https://issues.apache.org/jira/browse/HDFS-13371 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0 >Reporter: Sherwood Zheng >Assignee: Sherwood Zheng >Priority: Minor > Attachments: HDFS-13371.000.patch > > > KeyProviderUri is not available in 2.7 so when 2.7 clients contact with 3.2 > services, it cannot find the key provider URI and triggers a > NullPointerException. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13087) Snapshotted encryption zone information should be immutable
[ https://issues.apache.org/jira/browse/HDFS-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419899#comment-16419899 ] Xiao Chen commented on HDFS-13087: -- +1, committing this > Snapshotted encryption zone information should be immutable > --- > > Key: HDFS-13087 > URL: https://issues.apache.org/jira/browse/HDFS-13087 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 3.1.0 >Reporter: LiXin Ge >Assignee: LiXin Ge >Priority: Major > Attachments: HDFS-13087.001.patch, HDFS-13087.002.patch, > HDFS-13087.003.patch, HDFS-13087.004.patch, HDFS-13087.005.patch > > > Snapshots are supposed to be immutable and read only, so the EZ settings > which in a snapshot path shouldn't change when the origin encryption zone > changes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13087) Snapshotted encryption zone information should be immutable
[ https://issues.apache.org/jira/browse/HDFS-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-13087: - Summary: Snapshotted encryption zone information should be immutable (was: Snapshots On encryption zones get incorrect EZ settings when encryption zone changes) > Snapshotted encryption zone information should be immutable > --- > > Key: HDFS-13087 > URL: https://issues.apache.org/jira/browse/HDFS-13087 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 3.1.0 >Reporter: LiXin Ge >Assignee: LiXin Ge >Priority: Major > Attachments: HDFS-13087.001.patch, HDFS-13087.002.patch, > HDFS-13087.003.patch, HDFS-13087.004.patch, HDFS-13087.005.patch > > > Snapshots are supposed to be immutable and read only, so the EZ settings > which in a snapshot path shouldn't change when the origin encryption zone > changes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13373) Handle expunge command on NN and DN
[ https://issues.apache.org/jira/browse/HDFS-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419895#comment-16419895 ] genericqa commented on HDFS-13373: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 5s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HDFS-12996 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 45s{color} | {color:green} HDFS-12996 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} HDFS-12996 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} HDFS-12996 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} HDFS-12996 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s{color} | {color:green} HDFS-12996 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} HDFS-12996 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 44s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 19 new + 280 unchanged - 1 fixed = 299 total (was 281) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 30s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 3m 25s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 21s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 21s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 69m 0s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:22f9129 | | JIRA Issue | HDFS-13373 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916897/HDFS-13373-HDFS-12996.00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux b81e2ccbafd2 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-12996 / 6ec3c09 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | mvninstall | https://builds.apache.org/job/PreCommit-HDFS-Build/23729/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/23729/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs.txt | | cc |
[jira] [Updated] (HDFS-13087) Snapshots On encryption zones get incorrect EZ settings when encryption zone changes
[ https://issues.apache.org/jira/browse/HDFS-13087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-13087: - Summary: Snapshots On encryption zones get incorrect EZ settings when encryption zone changes (was: Fix: Snapshots On encryption zones get incorrect EZ settings when encryption zone changes) > Snapshots On encryption zones get incorrect EZ settings when encryption zone > changes > > > Key: HDFS-13087 > URL: https://issues.apache.org/jira/browse/HDFS-13087 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 3.1.0 >Reporter: LiXin Ge >Assignee: LiXin Ge >Priority: Major > Attachments: HDFS-13087.001.patch, HDFS-13087.002.patch, > HDFS-13087.003.patch, HDFS-13087.004.patch, HDFS-13087.005.patch > > > Snapshots are supposed to be immutable and read only, so the EZ settings > which in a snapshot path shouldn't change when the origin encryption zone > changes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419882#comment-16419882 ] Daryn Sharp commented on HDFS-13310: Is there any way to generalize this feature? Scanning the patch, it looks like a leaky abstraction. I don't understand why the DN needs all kinds of new commands (here and other jiras) that are equivalent to "copy or move this block". If you want to do multi-part upload to s3 magic, that should be hidden behind the "provided" plugin when a block is copied/moved to it. Not leaked all throughout hdfs. > [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup > blocks > -- > > Key: HDFS-13310 > URL: https://issues.apache.org/jira/browse/HDFS-13310 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13310-HDFS-12090.001.patch, > HDFS-13310-HDFS-12090.002.patch > > > As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands > in the heartbeat response that instructs it to backup a block. > This should take the form of two sub commands: PUT_FILE (when the file is <=1 > block in size) and MULTIPART_PUT_PART when part of a Multipart Upload (see > HDFS-13186). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10867) [PROVIDED Phase 2] Block Bit Field Allocation of Provided Storage
[ https://issues.apache.org/jira/browse/HDFS-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419853#comment-16419853 ] Daryn Sharp commented on HDFS-10867: Don't forget about legacy negative block ids that already aren't compatible with EC assumptions. You can't rob the upper bits of a block id. > [PROVIDED Phase 2] Block Bit Field Allocation of Provided Storage > - > > Key: HDFS-10867 > URL: https://issues.apache.org/jira/browse/HDFS-10867 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Ewan Higgs >Priority: Major > Attachments: Block Bit Field Allocation of Provided Storage.pdf > > > We wish to design and implement the following related features for provided > storage: > # Dynamic mounting of provided storage within a Namenode (mount, unmount) > # Mount multiple provided storage systems on a single Namenode. > # Support updates to the provided storage system without having to regenerate > an fsimg. > A mount in the namespace addresses a corresponding set of block data. When > unmounted, any block data associated with the mount becomes invalid and > (eventually) unaddressable in HDFS. As with erasure-coded blocks, efficient > unmounting requires that all blocks with that attribute be identifiable by > the block management layer > In this subtask, we focus on changes and conventions to the block management > layer. Namespace operations are covered in a separate subtask. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13248) RBF: Namenode need to choose block location for the client
[ https://issues.apache.org/jira/browse/HDFS-13248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16416571#comment-16416571 ] Ajay Kumar edited comment on HDFS-13248 at 3/29/18 9:50 PM: [~elgoiri] thanks for working on this. Using CallerContext to append the client IP with some delimiter is a bit fragile. Personally I think using UGI tokens will be cleaner. Even if we decide to go with CallerContext, we should make the delimiter configurable. was (Author: ajayydv): [~elgoiri] thanks for working on this. Using CallerContext to append client ip with some delimiter is bit hacky. Personally i think using UGI tokens will be cleaner. Even if we decide to go with CallerContext we should make delimiter configurable. > RBF: Namenode need to choose block location for the client > -- > > Key: HDFS-13248 > URL: https://issues.apache.org/jira/browse/HDFS-13248 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Weiwei Wu >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13248.000.patch, HDFS-13248.001.patch, > clientMachine-call-path.jpeg, debug-info-1.jpeg, debug-info-2.jpeg > > > When execute a put operation via router, the NameNode will choose block > location for the router, not for the real client. This will affect the file's > locality. > I think on both NameNode and Router, we should add a new addBlock method, or > add a parameter for the current addBlock method, to pass the real client > information. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
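The configurable-delimiter variant of the CallerContext approach discussed above might be sketched as follows. The config key name and the {{clientIp:}} tag format are hypothetical, not the names any actual RBF patch uses:

```java
public class ClientIpContext {
    // Hypothetical config key and default delimiter; real names would come
    // from the router configuration rather than being hard-coded here.
    static final String DELIMITER_KEY =
        "dfs.federation.router.client-ip.delimiter";
    static final String DEFAULT_DELIMITER = ",";

    // Append the real client's IP to an existing caller context string using
    // a configurable delimiter, so the NameNode can recover the originating
    // client behind the router.
    static String appendClientIp(String callerContext, String clientIp,
                                 String delimiter) {
        String tag = "clientIp:" + clientIp;
        if (callerContext == null || callerContext.isEmpty()) {
            return tag;
        }
        return callerContext + delimiter + tag;
    }

    public static void main(String[] args) {
        System.out.println(
            appendClientIp("routerOp", "10.0.0.7", DEFAULT_DELIMITER));
        // prints: routerOp,clientIp:10.0.0.7
    }
}
```

The fragility Ajay points out is visible even in this sketch: if a client can put the delimiter or a fake {{clientIp:}} tag into its own caller context, the NameNode cannot distinguish it from the router-appended one, which is why a signed UGI/token-based channel would be more robust.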
[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.
[ https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419827#comment-16419827 ] Xiao Chen commented on HDFS-13281: -- Fair enough for webhdfs. Please add the necessary links. I'd think of this as a subtask of HDFS-12355, required by HDFS-12597. Could you write a design doc elaborating all of this and add it to the umbrella jira? Not a blocker for this one, but I'd feel more comfortable having a doc to look at first, instead of asking questions on each jira. I'm also not sure a first-time reviewer would be able to catch up from any individual jira. > Namenode#createFile should be /.reserved/raw/ aware. > > > Key: HDFS-13281 > URL: https://issues.apache.org/jira/browse/HDFS-13281 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 2.8.3 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Critical > Attachments: HDFS-13281.001.patch, HDFS-13281.002.patch > > > If I want to write to /.reserved/raw/ and if that directory happens to > be in EZ, then namenode *should not* create edek and just copy the raw bytes > from the source. > Namenode#startFileInt should be /.reserved/raw/ aware. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.
[ https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419803#comment-16419803 ] Rushabh S Shah commented on HDFS-13281: --- {quote}I think at the minimum we should setxattr immediately after the file is created. {quote} Exactly. I will add that functionality in HDFS-12597. The steps would be something like this: the WebHDFS client will issue {{setXAttr}} after it issues the {{create}} call to the _datanode_ and before it starts streaming encrypted data to the datanode. As you said, this is just an NN-side change, so I didn't incorporate that here. bq. One atomic way is perhaps pass in xattr at file creation time. We did consider that option as well: adding a new {{FeInfo}} parameter to the {{create}} call. If {{FeInfo}} is present, the namenode will use it; otherwise it will generate a new FeInfo. But there was a compatibility issue: if the namenode and datanode are old but the client is new, it would double encrypt. So we threw away that idea. > Namenode#createFile should be /.reserved/raw/ aware. > > > Key: HDFS-13281 > URL: https://issues.apache.org/jira/browse/HDFS-13281 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 2.8.3 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Critical > Attachments: HDFS-13281.001.patch, HDFS-13281.002.patch > > > If I want to write to /.reserved/raw/ and if that directory happens to > be in EZ, then namenode *should not* create edek and just copy the raw bytes > from the source. > Namenode#startFileInt should be /.reserved/raw/ aware. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419788#comment-16419788 ] Erik Krogen commented on HDFS-13331: I see, sorry I skimmed the comments on the last patch but didn't look closely enough to notice your concerns. I'll take a look as well and see if any idea strikes me.. > Add lastSeenStateId to RpcRequestHeader. > > > Key: HDFS-13331 > URL: https://issues.apache.org/jira/browse/HDFS-13331 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13331-HDFS-12943.002.patch, > HDFS-13331-HDFS-12943.003..patch, HDFS-13331.trunk.001.patch, > HDFS_13331.trunk.000.patch > > > HDFS-12977 added a stateId into the RpcResponseHeader which is returned by > NameNode and stored by DFSClient. > This JIRA is to followup on that work and have the DFSClient send their > stored "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can > then compare with their own and act accordingly. > This JIRA work focuses on just the part of making DFSClient send their state > through RpcRequestHeader. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13373) Handle expunge command on NN and DN
[ https://issues.apache.org/jira/browse/HDFS-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13373: -- Attachment: (was: HDFS-13373-HDFS-12996.00.patch) > Handle expunge command on NN and DN > --- > > Key: HDFS-13373 > URL: https://issues.apache.org/jira/browse/HDFS-13373 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13373-HDFS-12996.00.patch > > > When DataNodes receive the DN_EXPUNGE command from Namenode, they will > purge all the block replicas in replica-trash -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13373) Handle expunge command on NN and DN
[ https://issues.apache.org/jira/browse/HDFS-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13373: -- Attachment: HDFS-13373-HDFS-12996.00.patch > Handle expunge command on NN and DN > --- > > Key: HDFS-13373 > URL: https://issues.apache.org/jira/browse/HDFS-13373 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13373-HDFS-12996.00.patch > > > When DataNodes receive the DN_EXPUNGE command from Namenode, they will > purge all the block replicas in replica-trash -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13373) Handle expunge command on NN and DN
[ https://issues.apache.org/jira/browse/HDFS-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419772#comment-16419772 ] Bharat Viswanadham commented on HDFS-13373: --- This patch is dependent on HDFS-13372. > Handle expunge command on NN and DN > --- > > Key: HDFS-13373 > URL: https://issues.apache.org/jira/browse/HDFS-13373 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13373-HDFS-12996.00.patch > > > When DataNodes receive the DN_EXPUNGE command from Namenode, they will > purge all the block replicas in replica-trash -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13373) Handle expunge command on NN and DN
[ https://issues.apache.org/jira/browse/HDFS-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13373: -- Status: Patch Available (was: In Progress) > Handle expunge command on NN and DN > --- > > Key: HDFS-13373 > URL: https://issues.apache.org/jira/browse/HDFS-13373 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13373-HDFS-12996.00.patch > > > When DataNodes receive the DN_EXPUNGE command from Namenode, they will > purge all the block replicas in replica-trash -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13373) Handle expunge command on NN and DN
[ https://issues.apache.org/jira/browse/HDFS-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13373: -- Description: When DataNodes receive the DN_EXPUNGE command from Namenode, they will purge all the block replicas in replica-trash was: When DataNodes receive the DN_EXPUNGE command from Namenode, they will purge all the block replicas in replica-trash > Handle expunge command on NN and DN > --- > > Key: HDFS-13373 > URL: https://issues.apache.org/jira/browse/HDFS-13373 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > When DataNodes receive the DN_EXPUNGE command from Namenode, they will > purge all the block replicas in replica-trash -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work started] (HDFS-13373) Handle expunge command on NN and DN
[ https://issues.apache.org/jira/browse/HDFS-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-13373 started by Bharat Viswanadham. - > Handle expunge command on NN and DN > --- > > Key: HDFS-13373 > URL: https://issues.apache.org/jira/browse/HDFS-13373 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > When DataNodes receive the DN_EXPUNGE command from Namenode, they will > purge all the block replicas in replica-trash -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
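[Editorial note] For illustration, the purge step this issue describes — removing every block replica under replica-trash when a DN_EXPUNGE command arrives — could look roughly like the standalone sketch below. The class name, the use of `java.nio.file`, and the standalone form are assumptions for clarity; the actual patch wires this behavior into the DataNode's heartbeat command handling.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical sketch: delete everything under the replica-trash root while
// keeping the root directory itself, returning the number of entries removed.
public class ReplicaTrashPurger {
  public static int purge(Path replicaTrash) throws IOException {
    if (!Files.isDirectory(replicaTrash)) {
      return 0;  // nothing to purge
    }
    List<Path> contents;
    try (Stream<Path> walk = Files.walk(replicaTrash)) {
      // Reverse lexicographic order visits children before their parents,
      // so directories are already empty by the time they are deleted.
      contents = walk.sorted(Comparator.reverseOrder()).collect(Collectors.toList());
    }
    int deleted = 0;
    for (Path p : contents) {
      if (!p.equals(replicaTrash)) {  // keep the replica-trash root itself
        Files.delete(p);
        deleted++;
      }
    }
    return deleted;
  }
}
```

A real implementation would also need to handle concurrent writes into replica-trash and report progress back to the Namenode, which this sketch ignores.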
[jira] [Commented] (HDFS-11639) [PROVIDED Phase 2] Encode the BlockAlias in the client protocol
[ https://issues.apache.org/jira/browse/HDFS-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419747#comment-16419747 ] genericqa commented on HDFS-11639: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 6s{color} | {color:red} HDFS-11639 does not apply to HDFS-9806. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-11639 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12869427/HDFS-11639-HDFS-9806.005.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23727/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > [PROVIDED Phase 2] Encode the BlockAlias in the client protocol > --- > > Key: HDFS-11639 > URL: https://issues.apache.org/jira/browse/HDFS-11639 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-11639-HDFS-9806.001.patch, > HDFS-11639-HDFS-9806.002.patch, HDFS-11639-HDFS-9806.003.patch, > HDFS-11639-HDFS-9806.004.patch, HDFS-11639-HDFS-9806.005.patch > > > As part of the {{PROVIDED}} storage type, we have a {{BlockAlias}} type which > encodes information about where the data comes from. i.e. URI, offset, > length, and nonce value. This data should be encoded in the protocol > ({{LocatedBlockProto}} and the {{BlockTokenIdentifier}}) when a block is > available using the PROVIDED storage type. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
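[Editorial note] The issue description above lists the information a {{BlockAlias}} carries: URI, offset, length, and nonce. As a rough illustration of that shape — not the patch's actual code, which encodes these fields into {{LocatedBlockProto}} and the {{BlockTokenIdentifier}} — a plain value type might look like this; the class name and validation are assumptions:

```java
import java.net.URI;
import java.util.Objects;

// Illustrative value type for the fields named in the issue description.
public final class BlockAliasSketch {
  private final URI uri;      // where the block's data lives in the external store
  private final long offset;  // byte offset of the block within that object
  private final long length;  // block length in bytes
  private final long nonce;   // value used to detect changes to the external object

  public BlockAliasSketch(URI uri, long offset, long length, long nonce) {
    this.uri = Objects.requireNonNull(uri, "uri");
    if (offset < 0 || length < 0) {
      throw new IllegalArgumentException("offset and length must be non-negative");
    }
    this.offset = offset;
    this.length = length;
    this.nonce = nonce;
  }

  public URI getUri() { return uri; }
  public long getOffset() { return offset; }
  public long getLength() { return length; }
  public long getNonce() { return nonce; }
}
```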
[jira] [Commented] (HDFS-12478) [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup mounts
[ https://issues.apache.org/jira/browse/HDFS-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419748#comment-16419748 ] genericqa commented on HDFS-12478: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HDFS-12478 does not apply to HDFS-9806. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-12478 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12899410/HDFS-12478-HDFS-9806.003.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23728/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup > mounts > - > > Key: HDFS-12478 > URL: https://issues.apache.org/jira/browse/HDFS-12478 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Minor > Attachments: HDFS-12478-HDFS-9806.001.patch, > HDFS-12478-HDFS-9806.002.patch, HDFS-12478-HDFS-9806.003.patch > > > This is a task for implementing the command line interface for attaching a > PROVIDED storage backup system (see HDFS-9806, HDFS-12090). > # The administrator should be able to mount a PROVIDED storage volume from > the command line. > {code}hdfs attach -create [-name ] path (external)>{code} > # Whitelist of users who are able to manage mounts (create, attach, detach). > # Be able to interrogate the status of the attached storage (last time a > snapshot was taken, files being backed up). 
> # The administrator should be able to remove an attached PROVIDED storage > volume from the command line. This simply means that the synchronization > process no longer runs. If the administrator has configured their setup to no > longer have local copies of the data, the blocks in the subtree are simply no > longer accessible as the external file store system is currently inaccessible. > {code}hdfs attach -remove [-force | -flush]{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12666) [PROVIDED Phase 2] Provided Storage Mount Manager (PSMM) mount
[ https://issues.apache.org/jira/browse/HDFS-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12666: -- Summary: [PROVIDED Phase 2] Provided Storage Mount Manager (PSMM) mount (was: Provided Storage Mount Manager (PSMM) mount) > [PROVIDED Phase 2] Provided Storage Mount Manager (PSMM) mount > -- > > Key: HDFS-12666 > URL: https://issues.apache.org/jira/browse/HDFS-12666 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Priority: Major > > Implement the Provided Storage Mount Manager. This is a service (thread) in > the Namenode that manages backup mounts, unmounts, snapshotting, and > monitoring the progress of backups. > On mount, the mount manager writes XATTR information at the top level of the > mount to do the appropriate bookkeeping. This is done to maintain state in > case the Namenode falls over. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12848) [PROVIDED Phase 2] Add a pluggable policy for selecting locations for Provided files.
[ https://issues.apache.org/jira/browse/HDFS-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12848: -- Summary: [PROVIDED Phase 2] Add a pluggable policy for selecting locations for Provided files. (was: [READ] Add a pluggable policy for selecting locations for Provided files.) > [PROVIDED Phase 2] Add a pluggable policy for selecting locations for > Provided files. > - > > Key: HDFS-12848 > URL: https://issues.apache.org/jira/browse/HDFS-12848 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Priority: Major > > Add a pluggable policy for selecting locations for Provided files. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13186) [PROVIDED Phase 2] Multipart Multinode uploader API + Implementations
[ https://issues.apache.org/jira/browse/HDFS-13186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-13186: -- Summary: [PROVIDED Phase 2] Multipart Multinode uploader API + Implementations (was: [WRITE] Multipart Multinode uploader API + Implementations) > [PROVIDED Phase 2] Multipart Multinode uploader API + Implementations > - > > Key: HDFS-13186 > URL: https://issues.apache.org/jira/browse/HDFS-13186 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13186.001.patch, HDFS-13186.002.patch, > HDFS-13186.003.patch > > > To write files in parallel to an external storage system as in HDFS-12090, > there are two approaches: > # Naive approach: use a single datanode per file that copies blocks locally > as it streams data to the external service. This requires a copy for each > block inside the HDFS system and then a copy for the block to be sent to the > external system. > # Better approach: Single point (e.g. Namenode or SPS style external client) > and Datanodes coordinate in a multipart - multinode upload. > This system needs to work with multiple back ends and needs to coordinate > across the network. So we propose an API that resembles the following: > {code:java} > public UploadHandle multipartInit(Path filePath) throws IOException; > public PartHandle multipartPutPart(InputStream inputStream, > int partNumber, UploadHandle uploadId) throws IOException; > public void multipartComplete(Path filePath, > List> handles, > UploadHandle multipartUploadId) throws IOException;{code} > Here, UploadHandle and PartHandle are opaque handles in the vein of > PathHandle so they can be serialized and deserialized in hadoop-hdfs project > without knowledge of how to deserialize e.g. S3A's version of an UploadHandle > and PartHandle. > In an object store such as S3A, the implementation is straightforward. 
In > the case of writing multipart/multinode to HDFS, we can write each block as a > file part. The complete call will perform a concat on the blocks. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
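[Editorial note] To make the proposed three-call contract concrete, here is a hypothetical in-memory implementation and driver. Handle types are simplified to `String`/`Integer` stand-ins for the opaque `UploadHandle` and `PartHandle`, and `multipartComplete` drops the handle list from the quoted signature; none of this is the patch's actual code — it only shows how a coordinator and several part writers would compose the calls.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;
import java.util.UUID;

// Toy in-memory back end for the multipart contract described above.
public class MultipartSketch {
  private final Map<String, SortedMap<Integer, byte[]>> uploads = new HashMap<>();
  private final Map<String, byte[]> completed = new HashMap<>();

  public String multipartInit(String path) {
    String uploadId = path + "#" + UUID.randomUUID();  // opaque upload handle
    uploads.put(uploadId, new TreeMap<>());
    return uploadId;
  }

  // Parts may arrive in any order, from any writer; the TreeMap keeps them
  // keyed by part number.
  public Integer multipartPutPart(InputStream in, int partNumber, String uploadId)
      throws IOException {
    uploads.get(uploadId).put(partNumber, in.readAllBytes());
    return partNumber;  // the "part handle" is just the part number here
  }

  // Concatenate parts in part-number order, mirroring the concat step the
  // description proposes for an HDFS back end.
  public void multipartComplete(String path, String uploadId) {
    SortedMap<Integer, byte[]> parts = uploads.remove(uploadId);
    int total = parts.values().stream().mapToInt(b -> b.length).sum();
    byte[] out = new byte[total];
    int pos = 0;
    for (byte[] part : parts.values()) {
      System.arraycopy(part, 0, out, pos, part.length);
      pos += part.length;
    }
    completed.put(path, out);
  }

  public byte[] read(String path) { return completed.get(path); }
}
```

The essential property the real API needs is visible even in this toy: parts can be uploaded concurrently and out of order by different nodes, and only the completion step imposes ordering.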
[jira] [Updated] (HDFS-10867) [PROVIDED Phase 2] Block Bit Field Allocation of Provided Storage
[ https://issues.apache.org/jira/browse/HDFS-10867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-10867: -- Summary: [PROVIDED Phase 2] Block Bit Field Allocation of Provided Storage (was: Block Bit Field Allocation of Provided Storage) > [PROVIDED Phase 2] Block Bit Field Allocation of Provided Storage > - > > Key: HDFS-10867 > URL: https://issues.apache.org/jira/browse/HDFS-10867 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Ewan Higgs >Priority: Major > Attachments: Block Bit Field Allocation of Provided Storage.pdf > > > We wish to design and implement the following related features for provided > storage: > # Dynamic mounting of provided storage within a Namenode (mount, unmount) > # Mount multiple provided storage systems on a single Namenode. > # Support updates to the provided storage system without having to regenerate > an fsimg. > A mount in the namespace addresses a corresponding set of block data. When > unmounted, any block data associated with the mount becomes invalid and > (eventually) unaddressable in HDFS. As with erasure-coded blocks, efficient > unmounting requires that all blocks with that attribute be identifiable by > the block management layer > In this subtask, we focus on changes and conventions to the block management > layer. Namespace operations are covered in a separate subtask. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13310) [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks
[ https://issues.apache.org/jira/browse/HDFS-13310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-13310: -- Summary: [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup blocks (was: [WRITE] The DatanodeProtocol should be have DNA_BACKUP to backup blocks) > [PROVIDED Phase 2] The DatanodeProtocol should be have DNA_BACKUP to backup > blocks > -- > > Key: HDFS-13310 > URL: https://issues.apache.org/jira/browse/HDFS-13310 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-13310-HDFS-12090.001.patch, > HDFS-13310-HDFS-12090.002.patch > > > As part of HDFS-12090, Datanodes should be able to receive DatanodeCommands > in the heartbeat response that instructs it to backup a block. > This should take the form of two sub commands: PUT_FILE (when the file is <=1 > block in size) and MULTIPART_PUT_PART when part of a Multipart Upload (see > HDFS-13186). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12478) [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup mounts
[ https://issues.apache.org/jira/browse/HDFS-12478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-12478: -- Summary: [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup mounts (was: [WRITE] Command line tools for managing Provided Storage Backup mounts) > [PROVIDED Phase 2] Command line tools for managing Provided Storage Backup > mounts > - > > Key: HDFS-12478 > URL: https://issues.apache.org/jira/browse/HDFS-12478 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Minor > Attachments: HDFS-12478-HDFS-9806.001.patch, > HDFS-12478-HDFS-9806.002.patch, HDFS-12478-HDFS-9806.003.patch > > > This is a task for implementing the command line interface for attaching a > PROVIDED storage backup system (see HDFS-9806, HDFS-12090). > # The administrator should be able to mount a PROVIDED storage volume from > the command line. > {code}hdfs attach -create [-name ] path (external)>{code} > # Whitelist of users who are able to manage mounts (create, attach, detach). > # Be able to interrogate the status of the attached storage (last time a > snapshot was taken, files being backed up). > # The administrator should be able to remove an attached PROVIDED storage > volume from the command line. This simply means that the synchronization > process no longer runs. If the administrator has configured their setup to no > longer have local copies of the data, the blocks in the subtree are simply no > longer accessible as the external file store system is currently inaccessible. > {code}hdfs attach -remove [-force | -flush]{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11828) [PROVIDED Phase 2] Refactor FsDatasetImpl to use the BlockAlias from ClientProtocol for PROVIDED blocks.
[ https://issues.apache.org/jira/browse/HDFS-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11828: -- Summary: [PROVIDED Phase 2] Refactor FsDatasetImpl to use the BlockAlias from ClientProtocol for PROVIDED blocks. (was: [WRITE] Refactor FsDatasetImpl to use the BlockAlias from ClientProtocol for PROVIDED blocks.) > [PROVIDED Phase 2] Refactor FsDatasetImpl to use the BlockAlias from > ClientProtocol for PROVIDED blocks. > > > Key: HDFS-11828 > URL: https://issues.apache.org/jira/browse/HDFS-11828 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > > From HDFS-11639: > {quote}[~virajith] > Looking over this patch, one thing that occurred to me is if it makes sense > to unify FileRegionProvider with BlockProvider? They both have very close > functionality. > I like the use of BlockProvider#resolve(). If we unify FileRegionProvider > with BlockProvider, then resolve can return null if the block map is > accessible from the Datanodes also. If it is accessible only from the > Namenode, then a non-null value can be propagated to the Datanode. > One of the motivations for adding the BlockAlias to the client protocol was > to have the blocks map only on the Namenode. In this scenario, the ReplicaMap > in FsDatasetImpl of will not have any replicas apriori. Thus, one way to > ensure that the FsDatasetImpl interface continues to function as today is to > create a FinalizedProvidedReplica in FsDatasetImpl#getBlockInputStream when > BlockAlias is not null. > {quote} > {quote}[~ehiggs] > With the pending refactoring of the FsDatasetImpl which won't have replicas a > priori, I wonder if it makes sense for the Datanode to have a > FileRegionProvider or BlockProvider at all. They are given the appropriate > block ID and block alias in the readBlock or writeBlock message. 
Maybe I'm > overlooking what's still being provided.{quote} > {quote}[~virajith] > I was trying to reconcile the existing design (FsDatasetImpl knows about > provided blocks apriori) with the new design where FsDatasetImpl will not > know about these before but just constructs them on-the-fly using the > BlockAlias from readBlock or writeBlock. Using BlockProvider#resolve() allows > us to have both designs exist in parallel. I was wondering if we should still > retain the earlier given the latter design. > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11639) [PROVIDED Phase 2] Encode the BlockAlias in the client protocol
[ https://issues.apache.org/jira/browse/HDFS-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Virajith Jalaparti updated HDFS-11639: -- Summary: [PROVIDED Phase 2] Encode the BlockAlias in the client protocol (was: [WRITE] Encode the BlockAlias in the client protocol) > [PROVIDED Phase 2] Encode the BlockAlias in the client protocol > --- > > Key: HDFS-11639 > URL: https://issues.apache.org/jira/browse/HDFS-11639 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: Ewan Higgs >Assignee: Ewan Higgs >Priority: Major > Attachments: HDFS-11639-HDFS-9806.001.patch, > HDFS-11639-HDFS-9806.002.patch, HDFS-11639-HDFS-9806.003.patch, > HDFS-11639-HDFS-9806.004.patch, HDFS-11639-HDFS-9806.005.patch > > > As part of the {{PROVIDED}} storage type, we have a {{BlockAlias}} type which > encodes information about where the data comes from. i.e. URI, offset, > length, and nonce value. This data should be encoded in the protocol > ({{LocatedBlockProto}} and the {{BlockTokenIdentifier}}) when a block is > available using the PROVIDED storage type. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12090) Handling writes from HDFS to Provided storages
[ https://issues.apache.org/jira/browse/HDFS-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419742#comment-16419742 ] Virajith Jalaparti commented on HDFS-12090: --- Cut branch HDFS-12090 from trunk to track this feature. Let's try to prefix all the relevant sub-task JIRAs with [PROVIDED Phase 2]. > Handling writes from HDFS to Provided storages > -- > > Key: HDFS-12090 > URL: https://issues.apache.org/jira/browse/HDFS-12090 > Project: Hadoop HDFS > Issue Type: New Feature >Reporter: Virajith Jalaparti >Priority: Major > Attachments: HDFS-12090-Functional-Specification.001.pdf, > HDFS-12090-Functional-Specification.002.pdf, > HDFS-12090-Functional-Specification.003.pdf, HDFS-12090-design.001.pdf, > HDFS-12090..patch, HDFS-12090.0001.patch > > > HDFS-9806 introduces the concept of {{PROVIDED}} storage, which makes data in > external storage systems accessible through HDFS. However, HDFS-9806 is > limited to data being read through HDFS. This JIRA will deal with how data > can be written to such {{PROVIDED}} storages from HDFS. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Resolved] (HDFS-12165) getSnapshotDiffReport throws NegativeArraySizeException for very large snapshot diff summary
[ https://issues.apache.org/jira/browse/HDFS-12165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang resolved HDFS-12165. Resolution: Duplicate > getSnapshotDiffReport throws NegativeArraySizeException for very large > snapshot diff summary > > > Key: HDFS-12165 > URL: https://issues.apache.org/jira/browse/HDFS-12165 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Wei-Chiu Chuang >Priority: Major > > For a really large snapshot diff, getSnapshotDiffReport throws > NegativeArraySizeException > {noformat} > 2017-07-19 11:14:16,415 WARN org.apache.hadoop.ipc.Server: Error serializing > call response for call > org.apache.hadoop.hdfs.protocol.ClientProtocol.getSnapshotDiffReport > from 10.17.211.10:58223 Call#0 Retry#0 > java.lang.NegativeArraySizeException > at > com.google.protobuf.CodedOutputStream.newInstance(CodedOutputStream.java:105) > at > com.google.protobuf.AbstractMessageLite.writeDelimitedTo(AbstractMessageLite.java:87) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$RpcResponseWrapper.write(ProtobufRpcEngine.java:468) > at org.apache.hadoop.ipc.Server.setupResponse(Server.java:2410) > at org.apache.hadoop.ipc.Server.access$500(Server.java:134) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2182) > {noformat} > This particular snapshot diff contains more than 25 million different file > system objects, and which means the serialized response can be more than 2GB, > overflowing protobuf length calculation. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
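[Editorial note] The overflow mechanism behind the stack trace above is easy to demonstrate in isolation: a serialized-size computation done in 32-bit arithmetic wraps negative once the response passes 2 GiB, and the negative result later blows up when used as an array size. The per-entry byte count below is an assumed illustrative figure, not a measured one.

```java
// Sketch of the failure mode only; this is not the protobuf library's code.
public class SizeOverflowSketch {
  static final int ASSUMED_BYTES_PER_DIFF_ENTRY = 100;

  // 32-bit size accumulation: silently wraps past Integer.MAX_VALUE (2^31 - 1).
  public static int serializedSizeInt(int entries) {
    return entries * ASSUMED_BYTES_PER_DIFF_ENTRY;
  }

  // The overflow-safe equivalent in 64-bit arithmetic.
  public static long serializedSizeLong(long entries) {
    return entries * ASSUMED_BYTES_PER_DIFF_ENTRY;
  }
}
```

With 25 million diff entries at ~100 bytes each, the true size is about 2.5 GB, so the 32-bit result is negative; allocating `new byte[negativeSize]` is exactly what throws `NegativeArraySizeException`.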
[jira] [Commented] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419735#comment-16419735 ] genericqa commented on HDFS-13350: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 25s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 15s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}161m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.hdfs.TestFileCorruption | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13350 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916865/HDFS-13350.00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 70601268f9fc 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9d7a903 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23724/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23724/testReport/ | | Max. process+thread count | 2976 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
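[Editorial note] For context on the issue under test (HDFS-13350): striped (erasure-coded) block IDs are allocated from the negative half of the 64-bit block ID space, so a sign-only check treats every negative ID as striped. Legacy, randomly generated block IDs can also be negative, which is the confusion the issue title describes. The sketch below is illustrative; the constant and method are not copied from the patch.

```java
// A sign-bit-only classifier: every negative ID "looks" striped, including
// legacy randomly generated IDs that happen to be negative.
public class BlockIdSketch {
  public static boolean looksStripedBySign(long blockId) {
    return blockId < 0;
  }
}
```

The fix direction discussed for such bugs is to consult cluster state (e.g. whether the ID falls in the range actually handed out to striped groups) rather than the sign alone.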
[jira] [Updated] (HDFS-13372) New Expunge Replica Trash Client-Namenode-Protocol
[ https://issues.apache.org/jira/browse/HDFS-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13372: -- Attachment: HDFS-13372-HDFS-12996.00.patch > New Expunge Replica Trash Client-Namenode-Protocol > -- > > Key: HDFS-13372 > URL: https://issues.apache.org/jira/browse/HDFS-13372 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13372-HDFS-12996.00.patch > > > When client issues an expunge replica-trash RPC call to Namenode, the > Namenode will queue > a new heartbeat command response - DN_EXPUNGE directing the DataNodes to > expunge the > replica-trash. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13372) New Expunge Replica Trash Client-Namenode-Protocol
[ https://issues.apache.org/jira/browse/HDFS-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13372: -- Status: Patch Available (was: Open) > New Expunge Replica Trash Client-Namenode-Protocol > -- > > Key: HDFS-13372 > URL: https://issues.apache.org/jira/browse/HDFS-13372 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13372-HDFS-12996.00.patch > > > When client issues an expunge replica-trash RPC call to Namenode, the > Namenode will queue > a new heartbeat command response - DN_EXPUNGE directing the DataNodes to > expunge the > replica-trash. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13373) Handle expunge command on NN and DN
[ https://issues.apache.org/jira/browse/HDFS-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13373: -- Attachment: (was: HDFS-13372-HDFS-12996.00.patch) > Handle expunge command on NN and DN > --- > > Key: HDFS-13373 > URL: https://issues.apache.org/jira/browse/HDFS-13373 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > When DataNodes receive the DN_EXPUNGE command from Namenode, they will > purge all the block replicas in replica-trash -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13373) Handle expunge command on NN and DN
[ https://issues.apache.org/jira/browse/HDFS-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13373: -- Attachment: HDFS-13372-HDFS-12996.00.patch > Handle expunge command on NN and DN > --- > > Key: HDFS-13373 > URL: https://issues.apache.org/jira/browse/HDFS-13373 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > When DataNodes receive the DN_EXPUNGE command from Namenode, they will > purge all the block replicas in replica-trash -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419704#comment-16419704 ] Plamen Jeliazkov commented on HDFS-13331: - Yes you are correct. I brought up that concern in the last patch as well stating that I was not happy with the use of a static method in Client as a way to handle alignmentContext, but it seems that point was glossed over. I mentioned this: https://issues.apache.org/jira/browse/HDFS-12977?focusedCommentId=16372125=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16372125 And here: https://issues.apache.org/jira/browse/HDFS-12977?focusedCommentId=16386960=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16386960 My issue was that I had difficulty finding a way to pass in alignmentContext into Client constructor. I am happy to try again though. > Add lastSeenStateId to RpcRequestHeader. > > > Key: HDFS-13331 > URL: https://issues.apache.org/jira/browse/HDFS-13331 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13331-HDFS-12943.002.patch, > HDFS-13331-HDFS-12943.003..patch, HDFS-13331.trunk.001.patch, > HDFS_13331.trunk.000.patch > > > HDFS-12977 added a stateId into the RpcResponseHeader which is returned by > NameNode and stored by DFSClient. > This JIRA is to followup on that work and have the DFSClient send their > stored "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can > then compare with their own and act accordingly. > This JIRA work focuses on just the part of making DFSClient send their state > through RpcRequestHeader. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419704#comment-16419704 ] Plamen Jeliazkov edited comment on HDFS-13331 at 3/29/18 8:26 PM: -- Hi [~xkrogen], yes, you are correct. I brought up that concern in the last patch as well stating that I was not happy with the use of a static method in Client as a way to handle alignmentContext, but it seems that point was glossed over. I mentioned this: https://issues.apache.org/jira/browse/HDFS-12977?focusedCommentId=16372125=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16372125 And here: https://issues.apache.org/jira/browse/HDFS-12977?focusedCommentId=16386960=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16386960 My issue was that I had difficulty finding a way to pass in alignmentContext into Client constructor. I am happy to try again though. was (Author: zero45): Yes you are correct. I brought up that concern in the last patch as well stating that I was not happy with the use of a static method in Client as a way to handle alignmentContext, but it seems that point was glossed over. I mentioned this: https://issues.apache.org/jira/browse/HDFS-12977?focusedCommentId=16372125=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16372125 And here: https://issues.apache.org/jira/browse/HDFS-12977?focusedCommentId=16386960=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16386960 My issue was that I had difficulty finding a way to pass in alignmentContext into Client constructor. I am happy to try again though. > Add lastSeenStateId to RpcRequestHeader. 
> > > Key: HDFS-13331 > URL: https://issues.apache.org/jira/browse/HDFS-13331 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13331-HDFS-12943.002.patch, > HDFS-13331-HDFS-12943.003..patch, HDFS-13331.trunk.001.patch, > HDFS_13331.trunk.000.patch > > > HDFS-12977 added a stateId into the RpcResponseHeader which is returned by > NameNode and stored by DFSClient. > This JIRA is to followup on that work and have the DFSClient send their > stored "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can > then compare with their own and act accordingly. > This JIRA work focuses on just the part of making DFSClient send their state > through RpcRequestHeader. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13364) RBF: Support NamenodeProtocol in the Router
[ https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419675#comment-16419675 ] genericqa commented on HDFS-13364: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 59s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 30s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 6s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 76m 0s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13364 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916874/HDFS-13364.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux db1e461fca80 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9d7a903 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23725/testReport/ | | Max. process+thread count | 1013 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23725/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > RBF: Support NamenodeProtocol in the Router > --- > > Key: HDFS-13364 > URL:
[jira] [Commented] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419655#comment-16419655 ] Erik Krogen commented on HDFS-13331: Hey [~zero45], just looked through the most recent patch. I have a question about using a static {{alignmentContext}} in {{Client}}. We can potentially have two {{DFSClient}} objects in the same JVM which are communicating with completely different namespaces, each of which should have their own {{alignmentContext}}. The current implementation would not support this, AFAICT. Am I correct? I don't think that is acceptable IIUC. > Add lastSeenStateId to RpcRequestHeader. > > > Key: HDFS-13331 > URL: https://issues.apache.org/jira/browse/HDFS-13331 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13331-HDFS-12943.002.patch, > HDFS-13331-HDFS-12943.003..patch, HDFS-13331.trunk.001.patch, > HDFS_13331.trunk.000.patch > > > HDFS-12977 added a stateId into the RpcResponseHeader which is returned by > NameNode and stored by DFSClient. > This JIRA is to followup on that work and have the DFSClient send their > stored "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can > then compare with their own and act accordingly. > This JIRA work focuses on just the part of making DFSClient send their state > through RpcRequestHeader. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
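The concern above can be illustrated with a self-contained toy sketch (hypothetical class names, not the actual Hadoop {{Client}}/{{AlignmentContext}} API): a per-instance context passed through the constructor keeps each client's state separate, which a single static field shared across the JVM cannot.

```java
// Toy model of the design question: per-instance alignment context
// instead of a static one shared by every Client in the JVM.
class AlignmentContext {
    private long lastSeenStateId;

    // Remember the highest state id observed in RPC responses.
    void receive(long stateId) {
        lastSeenStateId = Math.max(lastSeenStateId, stateId);
    }

    long lastSeen() {
        return lastSeenStateId;
    }
}

class Client {
    // Passed in via the constructor, so two Clients talking to
    // different namespaces never clobber each other's state.
    private final AlignmentContext alignmentContext;

    Client(AlignmentContext ctx) {
        this.alignmentContext = ctx;
    }

    void onRpcResponse(long stateId) {
        alignmentContext.receive(stateId);
    }

    long lastSeenStateId() {
        return alignmentContext.lastSeen();
    }
}

public class ContextDemo {
    public static void main(String[] args) {
        // Two clients, two namespaces: each tracks its own state id.
        Client a = new Client(new AlignmentContext());
        Client b = new Client(new AlignmentContext());
        a.onRpcResponse(100);
        b.onRpcResponse(7);
        System.out.println(a.lastSeenStateId()); // 100
        System.out.println(b.lastSeenStateId()); // 7
    }
}
```

With a static field, the second client's response would overwrite the first client's view; with constructor injection each instance stays independent, which is the property the comment is asking for.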
[jira] [Commented] (HDFS-13279) Datanodes usage is imbalanced if number of nodes per rack is not equal
[ https://issues.apache.org/jira/browse/HDFS-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419647#comment-16419647 ] genericqa commented on HDFS-13279: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 22s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 29s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 7s{color} | {color:orange} root: The patch generated 12 new + 39 unchanged - 1 fixed = 51 total (was 40) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 26s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 14s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 42s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 5s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 44s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}224m 36s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.shell.TestCopyFromLocal | | | hadoop.conf.TestCommonConfigurationFields | | | hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13279 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916849/HDFS-13279.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux b5f802ae4c2e 3.13.0-141-generic
[jira] [Updated] (HDFS-13372) New Expunge Replica Trash Client-Namenode-Protocol
[ https://issues.apache.org/jira/browse/HDFS-13372?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13372: -- Description: When client issues an expunge replica-trash RPC call to Namenode, the Namenode will queue a new heartbeat command response - DN_EXPUNGE directing the DataNodes to expunge the replica-trash. was: When client issues a restore replica trash RPC call to Namenode, the Namenode will queue a new heartbeat command response - DN_RESTORE to communicate to the DataNodes to restore the replica trash > New Expunge Replica Trash Client-Namenode-Protocol > -- > > Key: HDFS-13372 > URL: https://issues.apache.org/jira/browse/HDFS-13372 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > When client issues an expunge replica-trash RPC call to Namenode, the > Namenode will queue > a new heartbeat command response - DN_EXPUNGE directing the DataNodes to > expunge the > replica-trash. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13373) Handle expunge command on NN and DN
Bharat Viswanadham created HDFS-13373: - Summary: Handle expunge command on NN and DN Key: HDFS-13373 URL: https://issues.apache.org/jira/browse/HDFS-13373 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham When DataNodes receive the DN_EXPUNGE command from Namenode, they will purge all the block replicas in replica-trash -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13372) New Expunge Replica Trash Client-Namenode-Protocol
Bharat Viswanadham created HDFS-13372: - Summary: New Expunge Replica Trash Client-Namenode-Protocol Key: HDFS-13372 URL: https://issues.apache.org/jira/browse/HDFS-13372 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham When client issues a restore replica trash RPC call to Namenode, the Namenode will queue a new heartbeat command response - DN_RESTORE to communicate to the DataNodes to restore the replica trash -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13365) RBF: Adding trace support
[ https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419577#comment-16419577 ] Íñigo Goiri commented on HDFS-13365: A note on the TODO: the Namenode does {{namesystem.checkSuperuserPrivilege()}}, which uses {{FSPermissionChecker}}, but we don't have it available. We could just check if the user running the command is in the super user group. Thoughts? > RBF: Adding trace support > - > > Key: HDFS-13365 > URL: https://issues.apache.org/jira/browse/HDFS-13365 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13365.000.patch, HDFS-13365.001.patch > > > We should support HTrace and add spans. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
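The simpler group-membership check suggested above could look roughly like the sketch below. This is a self-contained toy (the {{SuperUserCheck}} class and its fields are hypothetical); in Hadoop the user's groups would come from {{UserGroupInformation}} rather than being passed in directly.

```java
import java.util.Set;

public class SuperUserCheck {
    // Hypothetical stand-in for the group lookup the Router would do;
    // in Hadoop this information comes from UserGroupInformation.
    private final Set<String> userGroups;
    private final String superUserGroup;

    SuperUserCheck(Set<String> userGroups, String superUserGroup) {
        this.userGroups = userGroups;
        this.superUserGroup = superUserGroup;
    }

    // The simpler check proposed in the comment: instead of a full
    // FSPermissionChecker, just test membership in the superuser group.
    boolean isSuperUser() {
        return userGroups.contains(superUserGroup);
    }

    public static void main(String[] args) {
        SuperUserCheck admin = new SuperUserCheck(Set.of("hdfs", "wheel"), "hdfs");
        SuperUserCheck user = new SuperUserCheck(Set.of("staff"), "hdfs");
        System.out.println(admin.isSuperUser()); // true
        System.out.println(user.isSuperUser());  // false
    }
}
```

The trade-off versus {{FSPermissionChecker}} is that this only covers the group path, not ACLs or the superuser account name itself, which is presumably why the comment asks for thoughts.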
[jira] [Commented] (HDFS-13365) RBF: Adding trace support
[ https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419567#comment-16419567 ] Íñigo Goiri commented on HDFS-13365: [^HDFS-13365.001.patch] is ready for review. [~giovanni.fumarola], [~linyiqun], [~ywskycn], any of you available for review? > RBF: Adding trace support > - > > Key: HDFS-13365 > URL: https://issues.apache.org/jira/browse/HDFS-13365 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13365.000.patch, HDFS-13365.001.patch > > > We should support HTrace and add spans. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13359) DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream
[ https://issues.apache.org/jira/browse/HDFS-13359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419568#comment-16419568 ] Wei-Chiu Chuang commented on HDFS-13359: Thanks. I'm not so sure about performance of ReentrantLock. When HDFS-10682 introduced AutoClosable ReentrantLock, it was for {quote}{{Doing so will make it easier to measure lock statistics like lock held time and warn about potential lock contention due to slow disk operations.}} {quote} Do you have a reference to a performance measurement between ReentrantLock and object lock? Just curious and would like to learn more about it. Thank you! > DataXceiver hung due to the lock in FsDatasetImpl#getBlockInputStream > - > > Key: HDFS-13359 > URL: https://issues.apache.org/jira/browse/HDFS-13359 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.1 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Attachments: HDFS-13359.001.patch, stack.jpg > > > DataXceiver hung due to the lock that locked by > {{FsDatasetImpl#getBlockInputStream}} (have attached stack). > {code:java} > @Override // FsDatasetSpi > public InputStream getBlockInputStream(ExtendedBlock b, > long seekOffset) throws IOException { > ReplicaInfo info; > synchronized(this) { > info = volumeMap.get(b.getBlockPoolId(), b.getLocalBlock()); > } > ... > } > {code} > The lock {{synchronized(this)}} used here is expensive, there is already one > {{AutoCloseableLock}} type lock defined for {{ReplicaMap}}. We can use it > instead. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
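The proposed change amounts to swapping the intrinsic {{synchronized(this)}} monitor for a try-with-resources lock. Below is a minimal, self-contained sketch of that pattern, using a stand-in for Hadoop's {{org.apache.hadoop.util.AutoCloseableLock}} (the real class wraps a {{ReentrantLock}} similarly, but this is not the actual Hadoop source):

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal stand-in for Hadoop's AutoCloseableLock: a ReentrantLock
// usable in try-with-resources, so releasing can't be forgotten.
class AutoCloseableLock implements AutoCloseable {
    private final ReentrantLock lock = new ReentrantLock();

    // acquire() takes the lock and returns this, so close() (and
    // therefore unlock()) runs automatically when the try block exits.
    AutoCloseableLock acquire() {
        lock.lock();
        return this;
    }

    @Override
    public void close() {
        lock.unlock();
    }

    boolean isLocked() {
        return lock.isLocked();
    }
}

public class LockDemo {
    private static final AutoCloseableLock datasetLock = new AutoCloseableLock();

    // Sketch of the proposed shape of getBlockInputStream: guard the
    // short map lookup with the shared lock instead of synchronized(this).
    static String getBlockInfo(String blockId) {
        try (AutoCloseableLock l = datasetLock.acquire()) {
            // ... volumeMap.get(...) would happen here ...
            return "info-for-" + blockId;
        } // lock released here even if an exception is thrown
    }

    public static void main(String[] args) {
        System.out.println(getBlockInfo("blk_1"));
        System.out.println("locked after call: " + datasetLock.isLocked());
    }
}
```

Beyond any raw performance difference, the {{ReentrantLock}}-based design is what lets the DataNode measure lock-held time and add timed or interruptible acquisition later, which {{synchronized}} cannot offer.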
[jira] [Commented] (HDFS-13364) RBF: Support NamenodeProtocol in the Router
[ https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419561#comment-16419561 ] Íñigo Goiri commented on HDFS-13364: [^HDFS-13364.003.patch] is ready for review. Right now, TestRouterRpc is getting massive. Not sure it's worth splitting, though, as the new test would require a new MiniDFSCluster, which is pretty expensive. > RBF: Support NamenodeProtocol in the Router > --- > > Key: HDFS-13364 > URL: https://issues.apache.org/jira/browse/HDFS-13364 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13364.000.patch, HDFS-13364.001.patch, > HDFS-13364.002.patch, HDFS-13364.003.patch > > > The Router should support the NamenodeProtocol to get blocks, versions, etc. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-10682) Replace FsDatasetImpl object lock with a separate lock object
[ https://issues.apache.org/jira/browse/HDFS-10682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419559#comment-16419559 ] Wei-Chiu Chuang commented on HDFS-10682: For the record, the commit message was incorrect about the Jira ID. {quote}HADOOP-10682. Replace FsDatasetImpl object lock with a separate lock object. (Chen Liang) {quote} > Replace FsDatasetImpl object lock with a separate lock object > - > > Key: HDFS-10682 > URL: https://issues.apache.org/jira/browse/HDFS-10682 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Major > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HDFS-10682-branch-2.001.patch, > HDFS-10682-branch-2.002.patch, HDFS-10682-branch-2.003.patch, > HDFS-10682-branch-2.004.patch, HDFS-10682-branch-2.005.patch, > HDFS-10682-branch-2.006.patch, HDFS-10682.001.patch, HDFS-10682.002.patch, > HDFS-10682.003.patch, HDFS-10682.004.patch, HDFS-10682.005.patch, > HDFS-10682.006.patch, HDFS-10682.007.patch, HDFS-10682.008.patch, > HDFS-10682.009.patch, HDFS-10682.010.patch > > > This Jira proposes to replace the FsDatasetImpl object lock with a separate > lock object. Doing so will make it easier to measure lock statistics like > lock held time and warn about potential lock contention due to slow disk > operations. > Right now we can use org.apache.hadoop.util.AutoCloseableLock. In the future > we can also consider replacing the lock with a read-write lock. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
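The HDFS-10682 description above also floats replacing the lock with a read-write lock in the future. A self-contained sketch of what that would buy, using the plain JDK {{ReentrantReadWriteLock}} (not Hadoop's API): concurrent readers share the dataset while writers still get exclusive access.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    private static final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private static String replicaInfo = "initial";

    static String read() {
        rwLock.readLock().lock();   // many threads may hold the read lock at once
        try {
            return replicaInfo;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    static void write(String value) {
        rwLock.writeLock().lock();  // exclusive: blocks both readers and writers
        try {
            replicaInfo = value;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        write("blk_1 -> /data/dn1");
        System.out.println(read());
    }
}
```

For a read-mostly structure like the replica map, this would let lookups such as {{getBlockInputStream}} proceed in parallel instead of serializing on a single mutual-exclusion lock.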
[jira] [Updated] (HDFS-13364) RBF: Support NamenodeProtocol in the Router
[ https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13364: --- Attachment: HDFS-13364.003.patch > RBF: Support NamenodeProtocol in the Router > --- > > Key: HDFS-13364 > URL: https://issues.apache.org/jira/browse/HDFS-13364 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13364.000.patch, HDFS-13364.001.patch, > HDFS-13364.002.patch, HDFS-13364.003.patch > > > The Router should support the NamenodeProtocol to get blocks, versions, etc. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13365) RBF: Adding trace support
[ https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419549#comment-16419549 ] genericqa commented on HDFS-13365: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 6s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 51s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 2s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 69m 57s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13365 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916862/HDFS-13365.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2cb9b059dd73 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9d7a903 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23723/testReport/ | | Max. process+thread count | 957 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23723/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > RBF: Adding trace support > - > > Key: HDFS-13365 > URL: https://issues.apache.org/jira/browse/HDFS-13365 >
[jira] [Commented] (HDFS-13331) Add lastSeenStateId to RpcRequestHeader.
[ https://issues.apache.org/jira/browse/HDFS-13331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419521#comment-16419521 ] Plamen Jeliazkov commented on HDFS-13331: - The failed unit tests appear to be unrelated to the patch. I verified locally that they all pass. I saw some of these fail during the Jenkins run of the previous HDFS-12977 patch as well. > Add lastSeenStateId to RpcRequestHeader. > > > Key: HDFS-13331 > URL: https://issues.apache.org/jira/browse/HDFS-13331 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13331-HDFS-12943.002.patch, > HDFS-13331-HDFS-12943.003..patch, HDFS-13331.trunk.001.patch, > HDFS_13331.trunk.000.patch > > > HDFS-12977 added a stateId into the RpcResponseHeader, which is returned by > the NameNode and stored by the DFSClient. > This JIRA is a follow-up to that work: have the DFSClient send its > stored "lastSeenStateId" in the RpcRequestHeader so that ObserverNodes can > compare it with their own and act accordingly. > This JIRA focuses on just the part of making the DFSClient send its state > through the RpcRequestHeader. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
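The mechanics discussed in this thread can be sketched as follows. This is an illustrative model only — ClientState and ObserverNode are hypothetical stand-ins, not the actual DFSClient/NameNode classes, and the real patch carries the value inside the protobuf RpcRequestHeader rather than a plain field:

```java
// Sketch: the client remembers the highest state id seen in any RPC response
// (HDFS-12977) and echoes it in the next request header (this JIRA), so an
// Observer can decide whether its namespace state is fresh enough to serve.
import java.util.concurrent.atomic.AtomicLong;

class ClientState {
    private final AtomicLong lastSeenStateId = new AtomicLong(-1);

    // Called when an RpcResponseHeader arrives; a stale response
    // must never move the remembered id backwards.
    void updateFromResponse(long stateId) {
        lastSeenStateId.accumulateAndGet(stateId, Math::max);
    }

    // Value to embed in the next RpcRequestHeader.
    long stateIdForNextRequest() {
        return lastSeenStateId.get();
    }
}

class ObserverNode {
    // Can the observer's state satisfy this client's request?
    static boolean isCaughtUp(long observerStateId, long clientLastSeen) {
        return observerStateId >= clientLastSeen;
    }
}

public class StateIdSketch {
    public static void main(String[] args) {
        ClientState c = new ClientState();
        c.updateFromResponse(41);
        c.updateFromResponse(40); // out-of-order response, ignored
        System.out.println(c.stateIdForNextRequest());                              // prints 41
        System.out.println(ObserverNode.isCaughtUp(40, c.stateIdForNextRequest())); // prints false
        System.out.println(ObserverNode.isCaughtUp(42, c.stateIdForNextRequest())); // prints true
    }
}
```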
[jira] [Commented] (HDFS-13364) RBF: Support NamenodeProtocol in the Router
[ https://issues.apache.org/jira/browse/HDFS-13364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419498#comment-16419498 ] genericqa commented on HDFS-13364: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 9s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 14s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 33s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 13m 9s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13364 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12916851/HDFS-13364.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9f103b829e4d 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9d7a903 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23722/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23722/testReport/ | | Max. process+thread count | 956 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23722/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT
[jira] [Commented] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419497#comment-16419497 ] Lei (Eddy) Xu commented on HDFS-13350: -- Legacy negative block IDs and EC block IDs are safe in the following cases: * When the NN bootstraps from fsimage / edit logs, because the NN first checks the INode type and then allocates the block type for the block IDs in the fsimage. So for a legacy block ID, the INode type is a normal replicated file, and the NN does not check the block ID value for such a file. * When the NN assigns new block IDs for an EC file, {{SequentialBlockGroupIdGenerator}} checks whether the block IDs already exist in the block map and skips any existing negative IDs, so it is also safe here. * HDFS-7994 addressed several cases in {{BlockManager}}. Other than these, {{isStripedBlockID()}} is mostly used in {{CorruptReplicasMap}} and {{InvalidateBlocks}}. Uploaded a patch to address the remaining usages. > Negative legacy block ID will confuse Erasure Coding to be considered as > striped block > -- > > Key: HDFS-13350 > URL: https://issues.apache.org/jira/browse/HDFS-13350 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.1 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Major > Attachments: HDFS-13350.00.patch > > > HDFS-4645 changed HDFS block IDs from randomly generated to sequential > positive IDs. Later on, HDFS EC was built on the assumption that normal > 3x replica block IDs are positive, so EC re-uses negative IDs for striped > blocks.
> However, legacy block IDs in the system can be negative, so we should > not use a hardcoded check of whether a block is striped or not: > {code} > public static boolean isStripedBlockID(long id) { > return BlockType.fromBlockId(id) == STRIPED; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
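To make the failure mode concrete, here is a minimal, self-contained sketch of the sign-based check quoted in the description. BlockIdSketch and its enum are illustrative stand-ins, not the real Hadoop classes, though the actual BlockType.fromBlockId is likewise driven by the sign bit of the ID:

```java
// Sketch: HDFS-4645 switched block IDs from random (possibly negative) to
// sequential positive; EC later reserved the negative range for striped block
// groups. A purely sign-based test therefore misclassifies a legacy negative
// replicated-block ID as a striped block.
public class BlockIdSketch {
    enum BlockType { CONTIGUOUS, STRIPED }

    // Mirrors the hardcoded check quoted in the description: negative => striped.
    static BlockType fromBlockId(long id) {
        return id < 0 ? BlockType.STRIPED : BlockType.CONTIGUOUS;
    }

    public static boolean isStripedBlockID(long id) {
        return fromBlockId(id) == BlockType.STRIPED;
    }

    public static void main(String[] args) {
        long legacyRandomId = -7234981234567L; // hypothetical pre-HDFS-4645 random ID
        // The check wrongly reports a striped block for a legacy replicated block:
        System.out.println(isStripedBlockID(legacyRandomId)); // prints true
        // The patch's direction: consult per-file metadata (INode type / block
        // map) instead of trusting the sign of the ID alone.
    }
}
```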
[jira] [Updated] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-13350: - Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-13350: - Target Version/s: 3.0.2, 3.2.0, 3.1.1 (was: 3.1.0, 3.0.2)
[jira] [Updated] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-13350: - Attachment: HDFS-13350.00.patch
[jira] [Updated] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-13350: - Attachment: (was: HDFS-13350.00.patch)
[jira] [Commented] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419489#comment-16419489 ] Jason Lowe commented on HDFS-13371: --- {quote}Is there a good way to handle this other than having test coverage for this (which I'm not even sure how) and catching when new optional fields are added? {quote} Reflection could be leveraged to generate protobuf records for testing. See TestPBImplRecords for an example. Theoretically any field marked optional in the protobuf record could be missing in the test record, although I suspect in practice many of the fields aren't really optional. So the hard part would be knowing which ones are truly optional despite what the protobuf field metadata says. It might therefore be a manual process to add a test for a new optional field. However, if there were already a test framework in place and it were very simple to add the new field to the test (i.e., by adding an annotation to a method/field or a single line to a test), then it would be more likely to be done. > NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication > between 2.7 and 3.2 > - > > Key: HDFS-13371 > URL: https://issues.apache.org/jira/browse/HDFS-13371 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0 >Reporter: Sherwood Zheng >Assignee: Sherwood Zheng >Priority: Minor > Attachments: HADOOP-15336.000.patch, HADOOP-15336.001.patch > > > KeyProviderUri is not available in 2.7, so when 2.7 clients contact 3.2 > services, the client cannot find the key provider URI and triggers a > NullPointerException. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
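The reflection idea from the comment above can be sketched in miniature. Everything here is hypothetical scaffolding — Record stands in for a generated protobuf/PBImpl record (see Hadoop's TestPBImplRecords for the real approach); the point is that the harness populates every setter except the fields declared optional, then checks that consumers tolerate the gaps:

```java
// Sketch: build a test record via reflection, deliberately leaving the
// named optional fields unset, so a missing-field NPE surfaces in tests.
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class OptionalFieldSketch {
    public static class Record {                 // stand-in, not a real PB type
        private String keyProviderUri;           // optional in newer releases
        private Long blockSize;                  // effectively required
        public void setKeyProviderUri(String v) { keyProviderUri = v; }
        public void setBlockSize(Long v) { blockSize = v; }
        public String getKeyProviderUri() { return keyProviderUri; }
        public Long getBlockSize() { return blockSize; }
    }

    // Invoke every setter with a dummy value, except the optional ones.
    public static Record buildWithMissing(String... optionalFields) throws Exception {
        Record r = new Record();
        Set<String> skip = new HashSet<>(Arrays.asList(optionalFields));
        for (Method m : Record.class.getMethods()) {
            if (!m.getName().startsWith("set")) continue;
            String field = m.getName().substring(3);
            if (skip.contains(field)) continue;  // leave optional field unset
            Class<?> t = m.getParameterTypes()[0];
            m.invoke(r, t == String.class ? "dummy" : (Object) 1L);
        }
        return r;
    }

    public static void main(String[] args) throws Exception {
        Record r = buildWithMissing("KeyProviderUri");
        // A null-safe consumer must not NPE on the unset optional field:
        String uri = r.getKeyProviderUri() == null ? "<none>" : r.getKeyProviderUri();
        System.out.println(uri);              // prints <none>
        System.out.println(r.getBlockSize()); // prints 1
    }
}
```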
[jira] [Commented] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419480#comment-16419480 ] Íñigo Goiri commented on HDFS-13371: I guess now [^HADOOP-15336.001.patch] should be renamed to HDFS-13371.00.patch. Hopefully Yetus will run the proper unit tests. > NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication > between 2.7 and 3.2 > - > > Key: HDFS-13371 > URL: https://issues.apache.org/jira/browse/HDFS-13371 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0 >Reporter: Sherwood Zheng >Assignee: Sherwood Zheng >Priority: Minor > Attachments: HADOOP-15336.000.patch, HADOOP-15336.001.patch > > > KeyProviderUri is not available in 2.7, so when 2.7 clients contact 3.2 > services, the client cannot find the key provider URI and triggers a > NullPointerException. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13371: --- Priority: Minor (was: Major)
[jira] [Updated] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13371: --- Labels: (was: common)
[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419472#comment-16419472 ] Íñigo Goiri commented on HDFS-12284: Thanks [~daryn] for chiming in. Would you be available to participate also in the DT forwarding part (HDFS-13358)? I think this JIRA has a couple of sharp edges, but you covered most of them in your review. On the other hand, the DT part will be far more problematic (we need a design doc and so on, as you pointed out). Feel free to add others to these two reviews. > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Fix For: HDFS-10467 > > Attachments: HDFS-12284.000.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13350) Negative legacy block ID will confuse Erasure Coding to be considered as striped block
[ https://issues.apache.org/jira/browse/HDFS-13350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HDFS-13350: - Attachment: HDFS-13350.00.patch
[jira] [Commented] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419461#comment-16419461 ] Íñigo Goiri commented on HDFS-13371: {quote} Handling optional PB fields when re-encoding the (decoded) response back into a PB. {quote} Correct. In our current setup, all the components have the same version. Is there a good way to handle this other than having test coverage for this (which I'm not even sure how to do) and catching when new optional fields are added? > NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication > between 2.7 and 3.2 > - > > Key: HDFS-13371 > URL: https://issues.apache.org/jira/browse/HDFS-13371 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.1.0, 3.2.0 >Reporter: Sherwood Zheng >Assignee: Sherwood Zheng >Priority: Major > Labels: common > Attachments: HADOOP-15336.000.patch, HADOOP-15336.001.patch > > > KeyProviderUri is not available in 2.7, so when 2.7 clients contact 3.2 > services, the client cannot find the key provider URI and triggers a > NullPointerException. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
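The compatibility hazard behind this issue can be sketched with a hedged example: a 2.7-era message simply has no keyProviderUri field on the wire, so a 3.x decoder must treat it as optional. FakeProto below is a hand-rolled stand-in, not the real generated class, though protobuf-generated Java code does expose a hasXxx()/getXxx() pair of this shape:

```java
// Sketch: guard an optional field instead of dereferencing it unconditionally.
public class OptionalProtoSketch {
    public static class FakeProto {
        private final String keyProviderUri; // null == field absent on the wire
        public FakeProto(String uri) { this.keyProviderUri = uri; }
        public boolean hasKeyProviderUri() { return keyProviderUri != null; }
        public String getKeyProviderUri() { return keyProviderUri; }
    }

    // NPE-prone: assumes the field is always present.
    public static int unsafeLength(FakeProto p) {
        return p.getKeyProviderUri().length();
    }

    // Guarded: check presence before dereferencing.
    public static int safeLength(FakeProto p) {
        return p.hasKeyProviderUri() ? p.getKeyProviderUri().length() : 0;
    }

    public static void main(String[] args) {
        FakeProto from27Client = new FakeProto(null); // field missing, as from a 2.7 peer
        System.out.println(safeLength(from27Client)); // prints 0
        // unsafeLength(from27Client) would throw NullPointerException
    }
}
```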
[jira] [Updated] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sherwood Zheng updated HDFS-13371: -- Attachment: (was: HADOOP-15336.001.patch)
[jira] [Moved] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sherwood Zheng moved HADOOP-15336 to HDFS-13371: Affects Version/s: (was: 3.2.0) (was: 3.1.0) 3.2.0 3.1.0 Target Version/s: 3.2.0 (was: 3.2.0) Key: HDFS-13371 (was: HADOOP-15336) Project: Hadoop HDFS (was: Hadoop Common)
[jira] [Updated] (HDFS-13371) NPE for FsServerDefaults.getKeyProviderUri() for clientProtocol communication between 2.7 and 3.2
[ https://issues.apache.org/jira/browse/HDFS-13371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sherwood Zheng updated HDFS-13371: -- Attachment: HADOOP-15336.001.patch
[jira] [Commented] (HDFS-13365) RBF: Adding trace support
[ https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419451#comment-16419451 ] Íñigo Goiri commented on HDFS-13365: I added a unit test exclusively for this, as it doesn't need a full MiniRouterDFSCluster. I couldn't find other unit tests for tracing, but I think this covers the extent of this JIRA. > RBF: Adding trace support > - > > Key: HDFS-13365 > URL: https://issues.apache.org/jira/browse/HDFS-13365 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HDFS-13365.000.patch, HDFS-13365.001.patch > > > We should support HTrace and add spans. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13365) RBF: Adding trace support
[ https://issues.apache.org/jira/browse/HDFS-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13365: --- Attachment: HDFS-13365.001.patch
[jira] [Commented] (HDFS-13281) Namenode#createFile should be /.reserved/raw/ aware.
[ https://issues.apache.org/jira/browse/HDFS-13281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419441#comment-16419441 ] Xiao Chen commented on HDFS-13281: -- {quote}{quote}in which case there will be no step #3{quote} Didn't get you. Can you please elaborate.{quote} I was saying: this patch works for an HDFS client writing to /.reserved/raw. In this case, the client writes raw data, so there is no encrypt in step #3, just raw bytes written. Let's step back from webhdfs, because here we're changing the NN code. Say you're using the HDFS client, and you write to /.reserved/raw. Before this patch you'll always get a feinfo and encrypt (which probably isn't correct, but from the NN's view there's no 'write-to-raw'; it just resolves to a regular path). After this patch, you get no feinfo and just write raw (presumably encrypted) bytes to the DN. You'll have to setxattr on that file on the NN, otherwise we'll end up with a file in HDFS that's raw and doesn't have a feinfo, which is essentially corrupt data, right? Along this line, for webhdfs / the hdfs client, after the file is created and has been written (1 block closed, or file closed), and until the setxattr, the file will be readable but undecryptable. I think at the minimum we should setxattr immediately after the file is created. One atomic way is perhaps to pass in the xattr at file creation time. What do you think [~shahrs87] [~daryn]? > Namenode#createFile should be /.reserved/raw/ aware. > > > Key: HDFS-13281 > URL: https://issues.apache.org/jira/browse/HDFS-13281 > Project: Hadoop HDFS > Issue Type: Bug > Components: encryption >Affects Versions: 2.8.3 >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Critical > Attachments: HDFS-13281.001.patch, HDFS-13281.002.patch > > > If I want to write to /.reserved/raw/ and if that directory happens to > be in an EZ, then the namenode *should not* create an edek and should just copy the raw bytes > from the source. > Namenode#startFileInt should be /.reserved/raw/ aware. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13314) NameNode should optionally exit if it detects FsImage corruption
[ https://issues.apache.org/jira/browse/HDFS-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419417#comment-16419417 ] Tsz Wo Nicholas Sze commented on HDFS-13314: -- {quote} The test case would protect this feature if someone in future removes/modifies this if statement. {quote} [~shahrs87], yes and no, since the "someone" may also modify the tests. The protection is very weak. {quote} Almost all of the code contains if, while or for statements. That doesn't mean it needs no test cases. {quote} Do you mean that you are testing all if, while, or for statements in all your code? Wow, unbelievable! > NameNode should optionally exit if it detects FsImage corruption > > > Key: HDFS-13314 > URL: https://issues.apache.org/jira/browse/HDFS-13314 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2 > > Attachments: HDFS-13314.01.patch, HDFS-13314.02.patch, > HDFS-13314.03.patch, HDFS-13314.04.patch, HDFS-13314.05.patch > > > The NameNode should optionally exit after writing an FsImage if it detects > the following kinds of corruptions: > # INodeReference pointing to non-existent INode > # Duplicate entries in snapshot deleted diff list. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13358) RBF: Support for Delegation Token
[ https://issues.apache.org/jira/browse/HDFS-13358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419408#comment-16419408 ] Daryn Sharp commented on HDFS-13358: Is there a design doc for how security is being handled? It can't be added through disjoint subtasks w/o having a clear definition of the approach prior to implementation. > RBF: Support for Delegation Token > - > > Key: HDFS-13358 > URL: https://issues.apache.org/jira/browse/HDFS-13358 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Sherwood Zheng >Assignee: Sherwood Zheng >Priority: Major > > HDFS Router should support issuing / managing HDFS delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12284) RBF: Support for Kerberos authentication
[ https://issues.apache.org/jira/browse/HDFS-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419403#comment-16419403 ] Daryn Sharp commented on HDFS-12284: -- With all due respect, there need to be domain experts reviewing these security and ipc changes. Other than adding service ACLs (which is really orthogonal to "support for kerberos"), I don't think the changes are correct. The jmx change: A doAs as the current user is a no-op; it's already the current user. More importantly, if security is enabled and the ugi is actually the remote user (as it should be), it won't have credentials to authenticate to the remote service. I.e., it is never going to work. The remote user ugi will never have kerberos credentials, so checkTGTAndReloginFromKeytab is a meaningless no-op. Aside, the ipc layer will already automatically relogin if necessary. The check-tgt is an old hack for http calls. The invokeMethod changes are very broken. * The ugi is passed in via the one obtained from the RPC server. That's the context the call is currently in. It's another doAs as the current user, like the jmx case, which is a no-op. Even if it wasn't the current user, the doAs is still a no-op because the client proxy "locked in" the ugi when it was created. The current ugi is meaningless – unless the router client somehow circumvented it. * If the "secure" invoke fails, it catches Exception, re-throws if IOE, but logs all other exceptions and continues... * Regardless of whether security is enabled or not, _it always calls invoke again_. > RBF: Support for Kerberos authentication > > > Key: HDFS-12284 > URL: https://issues.apache.org/jira/browse/HDFS-12284 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: security >Reporter: Zhe Zhang >Assignee: Sherwood Zheng >Priority: Major > Fix For: HDFS-10467 > > Attachments: HDFS-12284.000.patch > > > HDFS Router should support Kerberos authentication and issuing / managing > HDFS delegation tokens. 
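The "doAs as the current user is a no-op" point can be illustrated with a minimal, self-contained stand-in. The `Context` class below is hypothetical (not Hadoop's `UserGroupInformation`), but it captures the same semantics: wrapping a call in a doAs of the caller's own current user re-enters the identical security context and changes nothing.

```java
import java.util.concurrent.Callable;

// Hypothetical stand-in for a UGI-style security context.
class Context {
    private static final ThreadLocal<String> CURRENT =
        ThreadLocal.withInitial(() -> "routerUser");

    static String current() { return CURRENT.get(); }

    // Run 'action' with 'user' as the effective user, restoring afterwards.
    static <T> T doAs(String user, Callable<T> action) throws Exception {
        String previous = CURRENT.get();
        CURRENT.set(user);
        try {
            return action.call();
        } finally {
            CURRENT.set(previous);
        }
    }
}

public class DoAsNoOp {
    public static void main(String[] args) throws Exception {
        // Wrapping a call in doAs(current(), ...) is a no-op: the effective
        // user inside the action is the same user the caller already had.
        String inside = Context.doAs(Context.current(), Context::current);
        System.out.println(inside.equals(Context.current())); // true
    }
}
```

The same reasoning applies a second time in the review: even a doAs with a genuinely different user would not help here if the underlying client proxy captured its ugi at creation time, since the proxy, not the ambient context, decides which credentials are used.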
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13352) RBF: Add xsl stylesheet for hdfs-rbf-default.xml
[ https://issues.apache.org/jira/browse/HDFS-13352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated HDFS-13352: -- Fix Version/s: (was: 3.1.1) Doing 3.1.0 RC1 now, moved all 3.1.1 (branch-3.1) fixes to 3.1.0 (branch-3.1.0) > RBF: Add xsl stylesheet for hdfs-rbf-default.xml > > > Key: HDFS-13352 > URL: https://issues.apache.org/jira/browse/HDFS-13352 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: documentation >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13352.1.patch > > > {{configuration.xsl}} is required for browsing {{hdfs-rbf-default.xml}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12884) BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo
[ https://issues.apache.org/jira/browse/HDFS-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated HDFS-12884: -- Fix Version/s: (was: 3.1.1) Doing 3.1.0 RC1 now, moved all 3.1.1 (branch-3.1) fixes to 3.1.0 (branch-3.1.0) > BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo > --- > > Key: HDFS-12884 > URL: https://issues.apache.org/jira/browse/HDFS-12884 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.7.4 >Reporter: Konstantin Shvachko >Assignee: chencan >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.2, 3.2.0 > > Attachments: HDFS-12884.001.patch, HDFS-12884.002.patch, > HDFS-12884.003.patch > > > {{BlockUnderConstructionFeature.truncateBlock}} type should be changed to > {{BlockInfo}} from {{Block}}. {{truncateBlock}} is always assigned as > {{BlockInfo}}, so this will avoid unnecessary casts. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13204) RBF: Optimize name service safe mode icon
[ https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated HDFS-13204: -- Fix Version/s: (was: 3.1.1) Doing 3.1.0 RC1 now, moved all 3.1.1 (branch-3.1) fixes to 3.1.0 (branch-3.1.0) > RBF: Optimize name service safe mode icon > - > > Key: HDFS-13204 > URL: https://issues.apache.org/jira/browse/HDFS-13204 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: liuhongtong >Assignee: liuhongtong >Priority: Minor > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13204.001.patch, HDFS-13204.002.patch, > HDFS-13204.003.patch, HDFS-13204.004.patch, HDFS-13204.005.patch, > HDFS-13204.006.patch, HDFS-13204.007.patch, HDFS-13204.008.patch, > Routers.png, Subclusters.png, image-2018-02-28-18-33-09-972.png, > image-2018-02-28-18-33-47-661.png, image-2018-02-28-18-35-35-708.png, > image-2018-03-23-18-06-54-354.png, image-2018-03-26-10-10-10-930.png, > image-2018-03-26-10-21-24-171.png > > > In federation health webpage, the safe mode icons of Subclusters and Routers > are inconsistent. > The safe mode icon of Subclusters may induce users the name service is > maintaining. > !image-2018-02-28-18-33-09-972.png! > The safe mode icon of Routers: > !image-2018-02-28-18-33-47-661.png! > In fact, if the name service is in safe mode, users can't do writing related > operations. So I think the safe mode icon in Subclusters should be modified, > which may be more reasonable. > !image-2018-02-28-18-35-35-708.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13195) DataNode conf page cannot display the current value after reconfig
[ https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated HDFS-13195: -- Fix Version/s: (was: 3.1.1) Doing 3.1.0 RC1 now, moved all 3.1.1 (branch-3.1) fixes to 3.1.0 (branch-3.1.0) > DataNode conf page cannot display the current value after reconfig > --- > > Key: HDFS-13195 > URL: https://issues.apache.org/jira/browse/HDFS-13195 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.1 >Reporter: maobaolong >Assignee: maobaolong >Priority: Minor > Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.2, 3.2.0 > > Attachments: HDFS-13195-branch-2.7.001.patch, > HDFS-13195-branch-2.7.002.patch, HDFS-13195.001.patch, HDFS-13195.002.patch > > > Branch-2.7 now supports reconfiguring dfs.datanode.data.dir, but after I > reconfigure this key, the conf page still shows the old config value. > The reason is that: > {code:java} > public DatanodeHttpServer(final Configuration conf, > final DataNode datanode, > final ServerSocketChannel externalHttpChannel) > throws IOException { > this.conf = conf; > Configuration confForInfoServer = new Configuration(conf); > confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10); > HttpServer2.Builder builder = new HttpServer2.Builder() > .setName("datanode") > .setConf(confForInfoServer) > .setACL(new AccessControlList(conf.get(DFS_ADMIN, " "))) > .hostName(getHostnameForSpnegoPrincipal(confForInfoServer)) > .addEndpoint(URI.create("http://localhost:0")) > .setFindPort(true); > this.infoServer = builder.build(); > {code} > confForInfoServer is a new Configuration instance, so when dfsadmin > reconfigures the datanode's config, the change is not reflected in > confForInfoServer; we should use the datanode's conf instead. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
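The copy-at-construction behavior described in the issue can be sketched with a stripped-down stand-in. The `Conf` class below is hypothetical (not Hadoop's `Configuration`), but it shows the same effect: a copy made at construction time is a snapshot, so later updates to the source object are invisible to it.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for Hadoop's Configuration: the copy constructor
// takes a snapshot of the properties, not a live view.
class Conf {
    private final Map<String, String> props = new HashMap<>();

    Conf() {}
    Conf(Conf other) { props.putAll(other.props); } // snapshot copy

    void set(String key, String value) { props.put(key, value); }
    String get(String key) { return props.get(key); }
}

public class ReconfigSnapshot {
    public static void main(String[] args) {
        Conf datanodeConf = new Conf();
        datanodeConf.set("dfs.datanode.data.dir", "/data1");

        // DatanodeHttpServer copies the conf at construction time:
        Conf confForInfoServer = new Conf(datanodeConf);

        // A later dfsadmin -reconfig updates only the datanode's conf:
        datanodeConf.set("dfs.datanode.data.dir", "/data1,/data2");

        // The conf page, backed by the stale copy, still shows the old value:
        System.out.println(confForInfoServer.get("dfs.datanode.data.dir")); // /data1
    }
}
```

The fix suggested in the issue follows directly: have the conf page read from the datanode's live conf object rather than the snapshot taken at server construction.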
[jira] [Updated] (HDFS-12512) RBF: Add WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated HDFS-12512: -- Fix Version/s: (was: 3.1.1) Doing 3.1.0 RC1 now, moved all 3.1.1 (branch-3.1) fixes to 3.1.0 (branch-3.1.0) > RBF: Add WebHDFS > > > Key: HDFS-12512 > URL: https://issues.apache.org/jira/browse/HDFS-12512 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Wei Yan >Priority: Major > Labels: RBF > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-12512.000.patch, HDFS-12512.001.patch, > HDFS-12512.002.patch, HDFS-12512.003.patch, HDFS-12512.004.patch, > HDFS-12512.005.patch, HDFS-12512.006.patch, HDFS-12512.007.patch, > HDFS-12512.008.patch, HDFS-12512.009.patch, HDFS-12512.010.patch, > HDFS-12512.011.patch, HDFS-12512.012.patch, HDFS-12512.013.patch > > > The Router currently does not support WebHDFS. It needs to implement > something similar to {{NamenodeWebHdfsMethods}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org