[jira] [Updated] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree
[ https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhenzhao wang updated HADOOP-15891:
-----------------------------------
    Attachment: HDFS-13948.005.patch

> Provide Regex Based Mount Point In Inode Tree
> ---------------------------------------------
>
>                 Key: HADOOP-15891
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15891
>             Project: Hadoop Common
>          Issue Type: New Feature
>          Components: fs
>            Reporter: zhenzhao wang
>            Assignee: zhenzhao wang
>            Priority: Major
>         Attachments: HDFS-13948.001.patch, HDFS-13948.002.patch, HDFS-13948.003.patch, HDFS-13948.004.patch, HDFS-13948.005.patch, HDFS-13948_ Regex Link Type In Mont Table-V0.pdf, HDFS-13948_ Regex Link Type In Mount Table-v1.pdf
>
> This jira is created to support regex-based mount points in the Inode Tree. We noticed that mount points only support fixed target paths. However, we may have use cases where the target needs to refer to fields from the source. E.g., we might want a mapping of /cluster1/user1 => /cluster1-dc1/user-nn-user1, where the `cluster` and `user` fields in the source are used to construct the target. This is impossible to achieve with the current link type. Though we could set up one-to-one mappings, the mount table would become bloated if we had thousands of users. Besides, a regex mapping would give us more flexibility. So we are going to build a regex-based mount point whose target can refer to groups from the source regex mapping.

--
This message was sent by Atlassian JIRA (v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
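The mapping described in the issue can be sketched with plain `java.util.regex` named capture groups. This is an illustration of the idea only, not the patch's actual configuration syntax: the `resolve` helper, the `${group}` template style, and the group names `cluster`/`user` are all assumptions made for the example.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexMount {
    // Resolve a path against one regex mount point: if the source pattern
    // matches, substitute the named capture groups into the target template;
    // otherwise return null (the path falls through to other mount points).
    static String resolve(String srcPattern, String targetTemplate, String path) {
        Matcher m = Pattern.compile(srcPattern).matcher(path);
        if (!m.matches()) {
            return null;
        }
        // Hypothetical ${group} substitution; group names are fixed here
        // purely for illustration.
        return targetTemplate
            .replace("${cluster}", m.group("cluster"))
            .replace("${user}", m.group("user"));
    }

    public static void main(String[] args) {
        String src = "/(?<cluster>cluster\\d+)/(?<user>\\w+)";
        String dst = "/${cluster}-dc1/user-nn-${user}";
        // Reproduces the mapping from the issue description:
        // /cluster1/user1 => /cluster1-dc1/user-nn-user1
        System.out.println(resolve(src, dst, "/cluster1/user1"));
    }
}
```

One such pattern replaces thousands of one-to-one mount entries, which is the bloat the description is concerned about.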
[jira] [Commented] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree
[ https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669480#comment-16669480 ]

zhenzhao wang commented on HADOOP-15891:
----------------------------------------
Updated design doc on interceptors. [^HDFS-13948_ Regex Link Type In Mount Table-v1.pdf]
[jira] [Updated] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree
[ https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

zhenzhao wang updated HADOOP-15891:
-----------------------------------
    Attachment: HDFS-13948_ Regex Link Type In Mount Table-v1.pdf
[jira] [Commented] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree
[ https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669471#comment-16669471 ]

Chris Trezzo commented on HADOOP-15891:
---------------------------------------
Moved Jira to HADOOP project.
[jira] [Updated] (HADOOP-15886) Fix findbugs warnings in RegistryDNS.java
[ https://issues.apache.org/jira/browse/HADOOP-15886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-15886:
-----------------------------------
       Resolution: Fixed
    Fix Version/s: 3.3.0
           Status: Resolved  (was: Patch Available)

Committed this to trunk. Thanks [~elgoiri] for the review!

> Fix findbugs warnings in RegistryDNS.java
> -----------------------------------------
>
>                 Key: HADOOP-15886
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15886
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: YARN-8956.01.patch
>
> {noformat}
> FindBugs :
>    module:hadoop-common-project/hadoop-registry
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable) ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOTCP(InetAddress, int) At RegistryDNS.java:[line 900]
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable) ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.addNIOUDP(InetAddress, int) At RegistryDNS.java:[line 926]
>    Exceptional return value of java.util.concurrent.ExecutorService.submit(Callable) ignored in org.apache.hadoop.registry.server.dns.RegistryDNS.serveNIOTCP(ServerSocketChannel, InetAddress, int) At RegistryDNS.java:[line 850]
> {noformat}
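For context, the findbugs warning quoted above fires when the `Future` returned by `ExecutorService.submit(Callable)` is discarded, because any exception thrown inside the callable then vanishes silently. A minimal sketch of the pattern the fix follows; the `runChecked` helper is illustrative and not code from the patch:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class SubmitFix {
    // Keep the Future returned by submit() so failures inside the Callable
    // are surfaced instead of being silently dropped.
    static String runChecked(Callable<String> task) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<String> f = pool.submit(task);  // do not discard this Future
            return f.get(5, TimeUnit.SECONDS);     // rethrows callable failures
        } catch (Exception e) {
            throw new RuntimeException("task failed", e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runChecked(() -> "served"));
    }
}
```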
[jira] [Moved] (HADOOP-15891) Provide Regex Based Mount Point In Inode Tree
[ https://issues.apache.org/jira/browse/HADOOP-15891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Trezzo moved HDFS-13948 to HADOOP-15891:
----------------------------------------------
    Component/s: (was: fs)
                 fs
            Key: HADOOP-15891  (was: HDFS-13948)
        Project: Hadoop Common  (was: Hadoop HDFS)
[jira] [Commented] (HADOOP-15687) Credentials class should allow access to aliases
[ https://issues.apache.org/jira/browse/HADOOP-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669457#comment-16669457 ]

Lars Francke commented on HADOOP-15687:
---------------------------------------
Uploaded a new patch that should fix all the new checkstyle warnings that the previous one introduced.

> Credentials class should allow access to aliases
> ------------------------------------------------
>
>                 Key: HADOOP-15687
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15687
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 3.1.0
>            Reporter: Lars Francke
>            Assignee: Lars Francke
>            Priority: Trivial
>         Attachments: HADOOP-15687.2.patch, HADOOP-15687.patch, HADOOP-15687.patch
>
> The Credentials class can read token files from disk which are keyed by an alias. It also allows retrieving tokens by alias and listing all tokens.
> It does not, however, allow getting the full map of all tokens including the aliases (or at least a list of all aliases).
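A toy model of the API gap described above: the class exposes lookup-by-alias and a values-only listing, but not the alias-to-token map itself. The class and method names below are invented stand-ins, not Hadoop's actual `Credentials` API; the proposed change amounts to adding an accessor in the spirit of `getTokenMap()`:

```java
import java.util.Collection;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for org.apache.hadoop.security.Credentials, to show the gap.
public class ToyCredentials {
    private final Map<String, byte[]> tokenMap = new HashMap<>();

    public void addToken(String alias, byte[] token) {
        tokenMap.put(alias, token);
    }

    // Existing-style access: lookup by alias.
    public byte[] getToken(String alias) {
        return tokenMap.get(alias);
    }

    // Existing-style access: all tokens, but the aliases are lost.
    public Collection<byte[]> getAllTokens() {
        return tokenMap.values();
    }

    // Proposed-style accessor (hypothetical name): the full alias -> token view.
    public Map<String, byte[]> getTokenMap() {
        return Collections.unmodifiableMap(tokenMap);
    }
}
```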
[jira] [Updated] (HADOOP-15687) Credentials class should allow access to aliases
[ https://issues.apache.org/jira/browse/HADOOP-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lars Francke updated HADOOP-15687:
----------------------------------
    Attachment: HADOOP-15687.2.patch
[jira] [Commented] (HADOOP-15886) Fix findbugs warnings in RegistryDNS.java
[ https://issues.apache.org/jira/browse/HADOOP-15886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669423#comment-16669423 ]

Íñigo Goiri commented on HADOOP-15886:
--------------------------------------
Yetus is not very happy, but I think [^YARN-8956.01.patch] is the right way to go. +1
[jira] [Updated] (HADOOP-11391) Enabling HVE/node awareness does not rebalance replicas on data that existed prior to topology changes.
[ https://issues.apache.org/jira/browse/HADOOP-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hrishikesh Gadre updated HADOOP-11391:
--------------------------------------
    Status: Patch Available  (was: Open)

> Enabling HVE/node awareness does not rebalance replicas on data that existed prior to topology changes.
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-11391
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11391
>             Project: Hadoop Common
>          Issue Type: Bug
>         Environment: VMWare w/ local storage
>            Reporter: ellen johansen
>            Assignee: Hrishikesh Gadre
>            Priority: Major
>         Attachments: HADOOP-11391-001.patch
>
> Enabling HVE/node awareness does not rebalance replicas on data that existed prior to topology changes.
>
> [root@vmw-d10-001 jenkins]# more /opt/cloudera/topology.data
> 10.20.xxx.161 /rack1/nodegroup1
> 10.20.xxx.162 /rack1/nodegroup1
> 10.20.xxx.163 /rack3/nodegroup1
> 10.20.xxx.164 /rack3/nodegroup1
> 172.17.xxx.71 /rack2/nodegroup1
> 172.17.xxx.72 /rack2/nodegroup1
>
> Before HVE:
> /user/impalauser/tpcds/store_sales
> /user/impalauser/tpcds/store_sales/store_sales.dat 1180463121 bytes, 9 block(s): OK
> 0. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742xxx_1382 len=134217728 repl=3 [10.20.xxx.164:20002, 10.20.xxx.161:20002, 10.20.xxx.163:20002]
> 1. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742213_1389 len=134217728 repl=3 [10.20.xxx.164:20002, 172.17.xxx.72:20002, 10.20.xxx.161:20002]
> 2. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742214_1390 len=134217728 repl=3 [10.20.xxx.164:20002, 172.17.xxx.72:20002, 10.20.xxx.163:20002]
> 3. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742215_1391 len=134217728 repl=3 [10.20.xxx.164:20002, 172.17.xxx.72:20002, 10.20.xxx.163:20002]
> 4. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742216_1392 len=134217728 repl=3 [10.20.xxx.164:20002, 10.20.xxx.161:20002, 172.17.xxx.72:20002]
> 5. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742217_1393 len=134217728 repl=3 [10.20.xxx.164:20002, 172.17.xxx.72:20002, 10.20.xxx.163:20002]
> 6. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742220_1396 len=134217728 repl=3 [10.20.xxx.164:20002, 10.20.xxx.162:20002, 10.20.xxx.163:20002]
> 7. BP-1184748135-172.17.xxx.71-1418235396548:blk_107374_1398 len=134217728 repl=3 [10.20.xxx.164:20002, 10.20.xxx.163:20002, 10.20.xxx.161:20002]
> 8. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742224_1400 len=106721297 repl=3 [10.20.xxx.164:20002, 10.20.xxx.162:20002, 172.17.xxx.72:20002]
> -
> Before enabling HVE:
> Status: HEALTHY
> Total size:                    1648156454 B (Total open files size: 498 B)
> Total dirs:                    138
> Total files:                   384
> Total symlinks:                0 (Files currently being written: 6)
> Total blocks (validated):      390 (avg. block size 4226042 B) (Total open file blocks (not validated): 6)
> Minimally replicated blocks:   390 (100.0 %)
> Over-replicated blocks:        0 (0.0 %)
> Under-replicated blocks:       1 (0.25641027 %)
> Mis-replicated blocks:         0 (0.0 %)
> Default replication factor:    3
> Average block replication:     2.8564103
> Corrupt blocks:                0
> Missing replicas:              5 (0.44682753 %)
> Number of data-nodes:          5
> Number of racks:               1
> FSCK ended at Wed Dec 10 14:04:35 EST 2014 in 50 milliseconds
> The filesystem under path '/' is HEALTHY
> --
> After HVE (and NN restart):
> /user/impalauser/tpcds/store_sales
> /user/impalauser/tpcds/store_sales/store_sales.dat 1180463121 bytes, 9 block(s): OK
> 0. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742xxx_1382 len=134217728 repl=3 [10.20.xxx.164:20002, 10.20.xxx.163:20002, 10.20.xxx.161:20002]
> 1. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742213_1389 len=134217728 repl=3 [172.17.xxx.72:20002, 10.20.xxx.164:20002, 10.20.xxx.161:20002]
> 2. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742214_1390 len=134217728 repl=3 [172.17.xxx.72:20002, 10.20.xxx.164:20002, 10.20.xxx.163:20002]
> 3. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742215_1391 len=134217728 repl=3 [172.17.xxx.72:20002, 10.20.xxx.164:20002, 10.20.xxx.163:20002]
> 4. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742216_1392 len=134217728 repl=3 [172.17.xxx.72:20002, 10.20.xxx.164:20002, 10.20.xxx.161:20002]
> 5. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742217_1393 len=134217728 repl=3 [172.17.xxx.72:20002, 10.20.xxx.164:20002, 10.20.xxx.163:20002]
> 6. BP-1184748135-172.17.xxx.71-1418235396548:blk_1073742220_1396 len=134217728 repl=3 [10.20.xxx.164:20002, 10.20.xxx.163:20002, 10.20.xxx.162:20002]
> 7.
[jira] [Commented] (HADOOP-11391) Enabling HVE/node awareness does not rebalance replicas on data that existed prior to topology changes.
[ https://issues.apache.org/jira/browse/HADOOP-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669333#comment-16669333 ]

Hrishikesh Gadre commented on HADOOP-11391:
-------------------------------------------
Please find the attached patch. As part of this patch, I have introduced an additional option in the fsck command which triggers the replication work in the NN for a specific path in the filesystem. The advantages of this approach are:
(a) Since it is a user-driven solution, the system admin can control when to schedule the replication (as opposed to the NN automatically scheduling it in the background).
(b) Since the fsck command already scans the file-system metadata, it can re-use this computation to schedule the replication work as well.

Thanks to [~andrew.wang] for the discussions on this topic. [~aw] could you please review the patch and let me know your thoughts?
[jira] [Updated] (HADOOP-11391) Enabling HVE/node awareness does not rebalance replicas on data that existed prior to topology changes.
[ https://issues.apache.org/jira/browse/HADOOP-11391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hrishikesh Gadre updated HADOOP-11391:
--------------------------------------
    Attachment: HADOOP-11391-001.patch
[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669241#comment-16669241 ]

Íñigo Goiri commented on HADOOP-15885:
--------------------------------------
Right now my use case is a little different. We are enabling security in RBF, and now to access HDFS we need Kerberos tickets. The problem is that we also need to access the data from Azure, which does not have direct access to the HDFS cluster. So we are using HttpFS in front of RBF to access the data. Then we get delegation tokens from HttpFS and access the data through WebHDFS.

> Add base64 (urlString) support to DTUtil
> ----------------------------------------
>
>                 Key: HADOOP-15885
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15885
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Minor
>         Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch, HADOOP-15885.002.patch, HADOOP-15885.003.patch, HADOOP-15885.004.patch
>
> HADOOP-12563 added a utility to manage delegation token files. Currently, it supports Java and Protobuf formats. However, when interacting with WebHDFS, we use base64. In addition, when printing a token, we also print the base64 value. We should be able to import base64 tokens in the utility.
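The urlString form of a delegation token is URL-safe base64, which the JDK can encode and decode directly. A sketch of the round trip; the helper names are invented for the example, and the Hadoop-specific step of deserializing the decoded bytes into a `Token` object is not shown:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TokenBase64 {
    // Encode serialized token bytes as a URL-safe base64 string, the form
    // exchanged over WebHDFS (no padding, so it is safe inside URLs).
    static String encodeUrlString(byte[] raw) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
    }

    // Decode a urlString token back into serialized token bytes.
    static byte[] decodeUrlString(String token) {
        return Base64.getUrlDecoder().decode(token);
    }

    public static void main(String[] args) {
        String enc = encodeUrlString("serialized-token-bytes".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(decodeUrlString(enc), StandardCharsets.UTF_8));
    }
}
```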
[jira] [Commented] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens
[ https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669238#comment-16669238 ]

Íñigo Goiri commented on HADOOP-15889:
--------------------------------------
I added [^HADOOP-15889.000.patch] as a starting point. It does some cleanup of the code, as there were a couple of functions doing the same thing. It also changes the behavior a little, as it no longer throws an exception when the token file is not there anymore. Let me know if that's acceptable.

Regarding the patch itself, we would now be able to pass base64 tokens and use them right away. [~lmccay], [~ste...@apache.org] mentioned in HADOOP-15885 that you might be able to provide feedback here.

> Add hadoop.token configuration parameter to load tokens
> -------------------------------------------------------
>
>                 Key: HADOOP-15889
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15889
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Major
>         Attachments: HADOOP-15889.000.patch
>
> Currently, Hadoop allows passing files containing tokens.
> WebHDFS provides base64 delegation tokens that can be used directly.
> This JIRA adds the option to pass base64 tokens directly without using files.
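Assuming the property works the way the description suggests, a client configuration might look like the fragment below. Treat this as a hypothetical illustration: the exact value format (single token vs. a list) is whatever the patch defines.

```xml
<!-- Hypothetical core-site.xml fragment: pass a base64 delegation token
     directly, instead of pointing HADOOP_TOKEN_FILE_LOCATION at a file. -->
<property>
  <name>hadoop.token</name>
  <value>BASE64-ENCODED-DELEGATION-TOKEN</value>
</property>
```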
[jira] [Updated] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens
[ https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Íñigo Goiri updated HADOOP-15889:
---------------------------------
    Attachment: HADOOP-15889.000.patch
[jira] [Updated] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens
[ https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HADOOP-15889: - Status: Patch Available (was: Open) > Add hadoop.token configuration parameter to load tokens > --- > > Key: HADOOP-15889 > URL: https://issues.apache.org/jira/browse/HADOOP-15889 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Attachments: HADOOP-15889.000.patch > > > Currently, Hadoop allows passing files containing tokens. > WebHDFS provides base64 delegation tokens that can be used directly. > This JIRA adds the option to pass base64 tokens directly without using files. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15890) Some S3A committer tests don't match ITest* pattern; don't run in maven
Steve Loughran created HADOOP-15890: --- Summary: Some S3A committer tests don't match ITest* pattern; don't run in maven Key: HADOOP-15890 URL: https://issues.apache.org/jira/browse/HADOOP-15890 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3, test Affects Versions: 3.1.1, 3.2.0 Reporter: Steve Loughran Assignee: Steve Loughran Some of the S3A committer tests don't have the right prefix for the maven IT test runs to pick up: {code} ITMagicCommitMRJob ITStagingCommitMRJobBad ITDirectoryCommitMRJob ITStagingCommitMRJob {code} They all work when run by name or in the IDE (which is where I developed them), but they don't run in maven builds. Fix: rename them. There are some new tests in branch-3.2 from HADOOP-15107 which aren't in 3.1; patches are needed for both branches.
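For reference, the integration tests only run automatically because the build wires the failsafe plugin to an ITest* include pattern; a configuration along these lines (illustrative sketch, not the exact hadoop-aws pom) shows why an IT* prefix is silently skipped:

```xml
<!-- Sketch of a failsafe setup following the ITest* naming convention;
     the real hadoop-aws pom differs in detail. Classes named IT*
     rather than ITest* never match the include, so maven skips them
     without any warning. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <configuration>
    <includes>
      <include>**/ITest*.java</include>
    </includes>
  </configuration>
</plugin>
```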
[jira] [Updated] (HADOOP-15888) ITestDynamoDBMetadataStore can leak (large) DDB tables in test failures/timeout
[ https://issues.apache.org/jira/browse/HADOOP-15888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15888: Issue Type: Sub-task (was: Bug) Parent: HADOOP-15619 > ITestDynamoDBMetadataStore can leak (large) DDB tables in test > failures/timeout > --- > > Key: HADOOP-15888 > URL: https://issues.apache.org/jira/browse/HADOOP-15888 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.1.2 >Reporter: Steve Loughran >Priority: Major > Attachments: Screen Shot 2018-10-30 at 17.32.43.png > > > This is me doing some backporting of patches from branch-3.2, so it may be an > intermediate condition but > # I'd noticed I wasn't actually running ITestDynamoDBMetadataStore > # so I set it up to work with teh right config opts (table and region) > # but the tests were timing out > # looking at DDB tables in the AWS console showed a number of DDB tables > "testProvisionTable", "testProvisionTable", created, each with "500 read, 100 > write capacity (i.e. ~$50/month) > I haven't replicated this in trunk/branch-3.2 itself, but its clearly > dangerous. At the very least, we should have a size of 1 R/W in all > creations, so the cost of a test failure is neglible, and then we should > document the risk and best practise. > Also: use "s3guard" as the table prefix to make clear its origin -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669212#comment-16669212 ] Steve Loughran commented on HADOOP-15885: - BTW, if you are looking at DTs and object stores, HADOOP-14556 is probably the state of the art; reviews there are encouraged. It can pick up your local creds, request a role session with restricted access, and send that over as the token. It is also designed for alternate object-store auth mechanisms, so people who have S3-compatible stores need to get involved in the review. docs: [https://github.com/steveloughran/hadoop/blob/s3/HADOOP-14556-delegation-token/hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/delegation_tokens.md] > Add base64 (urlString) support to DTUtil > > > Key: HADOOP-15885 > URL: https://issues.apache.org/jira/browse/HADOOP-15885 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch, > HADOOP-15885.002.patch, HADOOP-15885.003.patch, HADOOP-15885.004.patch > > > HADOOP-12563 added a utility to manage Delegation Token files. Currently, it > supports Java and Protobuf formats. However, when interacting with WebHDFS, > we use base64. In addition, when printing a token, we also print the base64 > value. We should be able to import base64 tokens in the utility.
[jira] [Comment Edited] (HADOOP-15888) ITestDynamoDBMetadataStore can leak (large) DDB tables in test failures/timeout
[ https://issues.apache.org/jira/browse/HADOOP-15888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669205#comment-16669205 ] Steve Loughran edited comment on HADOOP-15888 at 10/30/18 7:01 PM: --- need to look at ITestS3GuardConcurrentOps too, as that's doing create & delete of tables. It already has a longer timeout of 5 * 60 * 1000 was (Author: ste...@apache.org): need to look at ITestS3GuardConcurrentOps too, as that's doing create & delete of tables. > ITestDynamoDBMetadataStore can leak (large) DDB tables in test > failures/timeout > --- > > Key: HADOOP-15888 > URL: https://issues.apache.org/jira/browse/HADOOP-15888 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Affects Versions: 3.1.2 >Reporter: Steve Loughran >Priority: Major > Attachments: Screen Shot 2018-10-30 at 17.32.43.png > > > This is me doing some backporting of patches from branch-3.2, so it may be an > intermediate condition but > # I'd noticed I wasn't actually running ITestDynamoDBMetadataStore > # so I set it up to work with teh right config opts (table and region) > # but the tests were timing out > # looking at DDB tables in the AWS console showed a number of DDB tables > "testProvisionTable", "testProvisionTable", created, each with "500 read, 100 > write capacity (i.e. ~$50/month) > I haven't replicated this in trunk/branch-3.2 itself, but its clearly > dangerous. At the very least, we should have a size of 1 R/W in all > creations, so the cost of a test failure is neglible, and then we should > document the risk and best practise. > Also: use "s3guard" as the table prefix to make clear its origin -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15888) ITestDynamoDBMetadataStore can leak (large) DDB tables in test failures/timeout
[ https://issues.apache.org/jira/browse/HADOOP-15888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669205#comment-16669205 ] Steve Loughran commented on HADOOP-15888: - need to look at ITestS3GuardConcurrentOps too, as that's doing create & delete of tables. > ITestDynamoDBMetadataStore can leak (large) DDB tables in test > failures/timeout > --- > > Key: HADOOP-15888 > URL: https://issues.apache.org/jira/browse/HADOOP-15888 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Affects Versions: 3.1.2 >Reporter: Steve Loughran >Priority: Major > Attachments: Screen Shot 2018-10-30 at 17.32.43.png > > > This is me doing some backporting of patches from branch-3.2, so it may be an > intermediate condition but > # I'd noticed I wasn't actually running ITestDynamoDBMetadataStore > # so I set it up to work with teh right config opts (table and region) > # but the tests were timing out > # looking at DDB tables in the AWS console showed a number of DDB tables > "testProvisionTable", "testProvisionTable", created, each with "500 read, 100 > write capacity (i.e. ~$50/month) > I haven't replicated this in trunk/branch-3.2 itself, but its clearly > dangerous. At the very least, we should have a size of 1 R/W in all > creations, so the cost of a test failure is neglible, and then we should > document the risk and best practise. > Also: use "s3guard" as the table prefix to make clear its origin -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15687) Credentials class should allow access to aliases
[ https://issues.apache.org/jira/browse/HADOOP-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669198#comment-16669198 ] Hadoop QA commented on HADOOP-15687: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 58s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 11s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 51s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 9 new + 18 unchanged - 3 fixed = 27 total (was 21) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 0s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 10s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 94m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue | HADOOP-15687 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12946206/HADOOP-15687.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux abffc9f20fd1 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 62d98ca | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/15429/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15429/testReport/ | | Max. process+thread count | 1631 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15429/console | | Powered by | Apache
[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669162#comment-16669162 ] Steve Loughran commented on HADOOP-15885: - talk to others about that, e.g [~lmccay] > Add base64 (urlString) support to DTUtil > > > Key: HADOOP-15885 > URL: https://issues.apache.org/jira/browse/HADOOP-15885 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch, > HADOOP-15885.002.patch, HADOOP-15885.003.patch, HADOOP-15885.004.patch > > > HADOOP-12563 added a utility to manage Delegation Token files. Currently, it > supports Java and Protobuf formats. However, When interacting with WebHDFS, > we use base64. In addition, when printing a token, we also print the base64 > value. We should be able to import base64 tokens in the utility. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669165#comment-16669165 ] Steve Loughran commented on HADOOP-15885: - latest patch LGTM +1, pending Jenkins being happy > Add base64 (urlString) support to DTUtil > > > Key: HADOOP-15885 > URL: https://issues.apache.org/jira/browse/HADOOP-15885 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch, > HADOOP-15885.002.patch, HADOOP-15885.003.patch, HADOOP-15885.004.patch > > > HADOOP-12563 added a utility to manage Delegation Token files. Currently, it > supports Java and Protobuf formats. However, When interacting with WebHDFS, > we use base64. In addition, when printing a token, we also print the base64 > value. We should be able to import base64 tokens in the utility. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Moved] (HADOOP-15889) Add hadoop.token configuration parameter to load tokens
[ https://issues.apache.org/jira/browse/HADOOP-15889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri moved HDFS-14040 to HADOOP-15889: - Key: HADOOP-15889 (was: HDFS-14040) Project: Hadoop Common (was: Hadoop HDFS) > Add hadoop.token configuration parameter to load tokens > --- > > Key: HADOOP-15889 > URL: https://issues.apache.org/jira/browse/HADOOP-15889 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > > Currently, Hadoop allows passing files containing tokens. > WebHDFS provides base64 delegation tokens that can be used directly. > This JIRA adds the option to pass base64 tokens directly without using files. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14128) ChecksumFs should override rename with overwrite flag
[ https://issues.apache.org/jira/browse/HADOOP-14128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hrishikesh Gadre reassigned HADOOP-14128: - Assignee: Hrishikesh Gadre > ChecksumFs should override rename with overwrite flag > - > > Key: HADOOP-14128 > URL: https://issues.apache.org/jira/browse/HADOOP-14128 > Project: Hadoop Common > Issue Type: Bug > Components: common, fs >Affects Versions: 2.8.1 >Reporter: Mathieu Chataigner >Assignee: Hrishikesh Gadre >Priority: Major > Attachments: HADOOP-14128.001.patch, HADOOP-14128.002.patch > > > When I call FileContext.rename(src, dst, Options.Rename.OVERWRITE) on a > LocalFs (which extends ChecksumFs), it does not update crc files. > Every subsequent read of the moved files will fail due to a crc mismatch. > One solution is to override rename(src, dst, overwrite) the same way it's > done with rename(src, dst), moving the crc files accordingly.
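The proposed fix boils down to moving the hidden checksum companion file whenever the data file moves. A minimal, standalone sketch of that idea using plain java.nio (this is not the actual ChecksumFs code; the class name and the ".name.crc" layout convention are only illustrative):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

/**
 * Illustrates the missing behaviour: an overwrite-enabled rename must
 * move the hidden ".name.crc" companion file together with the data
 * file, or later reads fail the checksum lookup. Plain java.nio is
 * used here for a runnable sketch; ChecksumFs itself works against
 * Hadoop's FileSystem/FileContext APIs.
 */
public class CrcAwareRename {

  /** Path of the hidden checksum companion for a data file. */
  static Path crcFile(Path file) {
    return file.resolveSibling("." + file.getFileName() + ".crc");
  }

  /** Rename src to dst, overwriting dst, keeping the crc in sync. */
  public static void renameWithCrc(Path src, Path dst) throws IOException {
    Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
    Path srcCrc = crcFile(src);
    if (Files.exists(srcCrc)) {
      // Without this step, reads of dst look up a stale or missing crc.
      Files.move(srcCrc, crcFile(dst), StandardCopyOption.REPLACE_EXISTING);
    }
  }
}
```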
[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669118#comment-16669118 ] Íñigo Goiri commented on HADOOP-15885: -- BTW, currently we have the configuration key {{hadoop.token.files}} and the environment variable {{HADOOP_TOKEN_FILE_LOCATION}}. Should we also add an option to pass a base64 token as a configuration key ({{hadoop.token}}) or an environment variable ({{HADOOP_TOKEN}})? This is similar to what Azure allows with the token: http://hadoop.apache.org/docs/current/hadoop-azure/index.html#Delegation_token_support_in_WASB Actually, the approach described there would also work for regular WebHDFS. > Add base64 (urlString) support to DTUtil > > > Key: HADOOP-15885 > URL: https://issues.apache.org/jira/browse/HADOOP-15885 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch, > HADOOP-15885.002.patch, HADOOP-15885.003.patch, HADOOP-15885.004.patch > > > HADOOP-12563 added a utility to manage Delegation Token files. Currently, it > supports Java and Protobuf formats. However, when interacting with WebHDFS, > we use base64. In addition, when printing a token, we also print the base64 > value. We should be able to import base64 tokens in the utility.
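As background for the {{hadoop.token}} idea: a value passed through configuration would simply be the URL-safe base64 form that WebHDFS already hands out. A standalone sketch of that round trip using java.util.Base64 (Hadoop's Token class uses its own codec for encodeToUrlString/decodeFromUrlString; this only demonstrates the encoding idea, not the real API):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

/**
 * Sketch of the URL-safe base64 round trip a hadoop.token property
 * would rely on. Not Hadoop's Token class: it only demonstrates the
 * encoding that makes a serialized token safe to put in a config
 * value or environment variable.
 */
public class TokenBase64Sketch {

  /** Encode raw token bytes as an unpadded, URL-safe base64 string. */
  public static String encode(byte[] tokenBytes) {
    return Base64.getUrlEncoder().withoutPadding().encodeToString(tokenBytes);
  }

  /** Decode the URL-safe base64 string back into token bytes. */
  public static byte[] decode(String urlString) {
    return Base64.getUrlDecoder().decode(urlString);
  }

  /** Demonstration helper: encode then decode a UTF-8 string. */
  public static String roundTrip(String s) {
    byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
    return new String(decode(encode(bytes)), StandardCharsets.UTF_8);
  }
}
```

The URL-safe alphabet ('-' and '_' instead of '+' and '/') matters here: it keeps the value usable inside URLs and shell environments without extra quoting.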
[jira] [Commented] (HADOOP-15888) ITestDynamoDBMetadataStore can leak (large) DDB tables in test failures/timeout
[ https://issues.apache.org/jira/browse/HADOOP-15888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669115#comment-16669115 ] Steve Loughran commented on HADOOP-15888: - Looks like table create time can be very slow, at least in the AWS Ireland region today. This could be triggering the test timeout before the delete operations kick off. Further thoughts * this must be a scale test, so that it picks up the longer timeout * we need to make sure that whatever happens, the tables go away. As it is, I'm now scared of this test. Certainly I'm not going to have it running by default, though as a scale test it'll be less prone to timeouts. [~gabor.bota] FYI > ITestDynamoDBMetadataStore can leak (large) DDB tables in test > failures/timeout > --- > > Key: HADOOP-15888 > URL: https://issues.apache.org/jira/browse/HADOOP-15888 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Affects Versions: 3.1.2 >Reporter: Steve Loughran >Priority: Major > Attachments: Screen Shot 2018-10-30 at 17.32.43.png > > > This is me doing some backporting of patches from branch-3.2, so it may be an > intermediate condition, but: > # I'd noticed I wasn't actually running ITestDynamoDBMetadataStore > # so I set it up to work with the right config opts (table and region) > # but the tests were timing out > # looking at DDB tables in the AWS console showed a number of DDB tables > named "testProvisionTable", each with 500 read / 100 write capacity (i.e. ~$50/month) > I haven't replicated this in trunk/branch-3.2 itself, but it's clearly > dangerous. At the very least, we should have a size of 1 R/W in all > creations, so the cost of a test failure is negligible, and then we should > document the risk and best practice. > Also: use "s3guard" as the table prefix to make its origin clear
[jira] [Updated] (HADOOP-15888) ITestDynamoDBMetadataStore can leak (large) DDB tables in test failures/timeout
[ https://issues.apache.org/jira/browse/HADOOP-15888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15888: Attachment: Screen Shot 2018-10-30 at 17.32.43.png > ITestDynamoDBMetadataStore can leak (large) DDB tables in test > failures/timeout > --- > > Key: HADOOP-15888 > URL: https://issues.apache.org/jira/browse/HADOOP-15888 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Affects Versions: 3.1.2 >Reporter: Steve Loughran >Priority: Major > Attachments: Screen Shot 2018-10-30 at 17.32.43.png > > > This is me doing some backporting of patches from branch-3.2, so it may be an > intermediate condition but > # I'd noticed I wasn't actually running ITestDynamoDBMetadataStore > # so I set it up to work with teh right config opts (table and region) > # but the tests were timing out > # looking at DDB tables in the AWS console showed a number of DDB tables > "testProvisionTable", "testProvisionTable", created, each with "500 read, 100 > write capacity (i.e. ~$50/month) > I haven't replicated this in trunk/branch-3.2 itself, but its clearly > dangerous. At the very least, we should have a size of 1 R/W in all > creations, so the cost of a test failure is neglible, and then we should > document the risk and best practise. > Also: use "s3guard" as the table prefix to make clear its origin -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15888) ITestDynamoDBMetadataStore can leak (large) DDB tables in test failures/timeout
Steve Loughran created HADOOP-15888: --- Summary: ITestDynamoDBMetadataStore can leak (large) DDB tables in test failures/timeout Key: HADOOP-15888 URL: https://issues.apache.org/jira/browse/HADOOP-15888 Project: Hadoop Common Issue Type: Bug Components: fs/s3, test Affects Versions: 3.1.2 Reporter: Steve Loughran This is me doing some backporting of patches from branch-3.2, so it may be an intermediate condition, but: # I'd noticed I wasn't actually running ITestDynamoDBMetadataStore # so I set it up to work with the right config opts (table and region) # but the tests were timing out # looking at DDB tables in the AWS console showed a number of DDB tables named "testProvisionTable", each with 500 read / 100 write capacity (i.e. ~$50/month) I haven't replicated this in trunk/branch-3.2 itself, but it's clearly dangerous. At the very least, we should have a size of 1 R/W in all creations, so the cost of a test failure is negligible, and then we should document the risk and best practice. Also: use "s3guard" as the table prefix to make its origin clear
[jira] [Commented] (HADOOP-15847) S3Guard testConcurrentTableCreations to set r & w capacity == 1
[ https://issues.apache.org/jira/browse/HADOOP-15847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669096#comment-16669096 ] Steve Loughran commented on HADOOP-15847: - All tests should go for a capacity == 1 > S3Guard testConcurrentTableCreations to set r & w capacity == 1 > --- > > Key: HADOOP-15847 > URL: https://issues.apache.org/jira/browse/HADOOP-15847 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Priority: Major > > I just found a {{testConcurrentTableCreations}} DDB table lurking in a > region, presumably from an interrupted test. Luckily > test/resources/core-site.xml forces the r/w capacity to be 10, but it could > still run up bills. > Recommend > * explicitly set capacity = 1 for the test > * and add comments in the testing docs about keeping cost down. > I think we may also want to make this a scale-only test, so it's run less > often -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
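For the capacity == 1 suggestion: the s3a test setup can pin the provisioned throughput of any table the tests create via the s3guard capacity options in the test core-site.xml. The property names below are the standard fs.s3a.s3guard ones; treat the exact placement in the test config as illustrative:

```xml
<!-- Keep any DDB table created by the tests at the minimum provisioned
     capacity, so a leaked table costs almost nothing per month. -->
<property>
  <name>fs.s3a.s3guard.ddb.table.capacity.read</name>
  <value>1</value>
</property>
<property>
  <name>fs.s3a.s3guard.ddb.table.capacity.write</name>
  <value>1</value>
</property>
```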
[jira] [Commented] (HADOOP-15808) Harden Token service loader use
[ https://issues.apache.org/jira/browse/HADOOP-15808?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669031#comment-16669031 ] Íñigo Goiri commented on HADOOP-15808: -- I have to say that the pattern of iterating and then catching exceptions is much uglier than just using the {{for(:)}} approach. I've been trying to see if there's any way to catch those in the iterator, but there's nothing really clean. I guess we'll have to live with this. I don't fully understand why {{Iterator#next()}} triggers the exception, but service loading has lazy behaviors, so I guess that must be it. Regarding the exception output: given that right now it just crashes the whole thing, I think that going to debug (which pretty much swallows it) might be a little too much. I would log errors for those cases. In any case, I think we can add some coverage for these cases; I'm not sure what the cleanest way to trigger {{ServiceConfigurationError}} is. A few options I can think of: * Create a fake type that triggers this by not having its dependencies. * Spy on some of the types to always trigger the exception. > Harden Token service loader use > --- > > Key: HADOOP-15808 > URL: https://issues.apache.org/jira/browse/HADOOP-15808 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.9.1, 3.1.2 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-15808-001.patch, HADOOP-15808-002.patch, > HADOOP-15808-003.patch > > > The Hadoop token service loading (identifiers, renewers...) works provided > there are no problems loading any registered implementation. If there's a > classloading or classcasting problem, the exception raised will stop all > token support working, possibly with the application not starting. > This matters for S3A/HADOOP-14556 as things may not load if aws-sdk isn't on > the classpath. It probably lurks in the wasb/abfs support too, but things > have worked there because the installations with DT support there have always > had correctly set up classpaths. > Fix: do what we did for the FS service loader. Catch failures to instantiate > a service provider impl and skip it
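The hardening pattern under discussion — iterate the ServiceLoader by hand and swallow per-provider failures — can be sketched standalone with java.util.ServiceLoader; the class and method names here are illustrative, not the Hadoop patch:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.ServiceConfigurationError;
import java.util.ServiceLoader;

/**
 * Illustrative hardening of service loading: one provider that fails
 * to load (missing dependency, bad class file) is skipped instead of
 * aborting the whole scan. This is also why the manual iterator
 * replaces the tidier for(:) loop: the try/catch must wrap
 * hasNext()/next(), which is where lazy loading actually instantiates
 * each provider.
 */
public class SafeServiceLoading {

  public static <T> List<T> loadSafely(Class<T> service) {
    List<T> providers = new ArrayList<>();
    Iterator<T> it = ServiceLoader.load(service).iterator();
    while (true) {
      try {
        if (!it.hasNext()) {
          break;
        }
        providers.add(it.next());
      } catch (ServiceConfigurationError e) {
        // A broken provider: log it and move on to the next entry.
      }
    }
    return providers;
  }
}
```

With no providers registered for the requested interface, the method simply returns an empty list; in a real deployment, the returned list contains every provider that loaded cleanly.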
[jira] [Updated] (HADOOP-15855) Review hadoop credential doc, including object store details
[ https://issues.apache.org/jira/browse/HADOOP-15855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15855: Resolution: Fixed Fix Version/s: 3.2.1 3.1.2 Status: Resolved (was: Patch Available) thanks, committed to 3.1.x+ > Review hadoop credential doc, including object store details > > > Key: HADOOP-15855 > URL: https://issues.apache.org/jira/browse/HADOOP-15855 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation, security >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 3.1.2, 3.2.1 > > Attachments: HADOOP-15855-001.patch, HADOOP-15855-002.patch > > > I've got some changes to make to the hadoop credentials API doc; some minor > editing and examples of credential paths in object stores with some extra > details (i.e how you can't refer to a store from the same store URI) > these examples need to come with unit tests to verify that the examples are > correct, obviously -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668979#comment-16668979 ] Íñigo Goiri commented on HADOOP-15885: -- Thanks [~ste...@apache.org] for taking a look. I added [^HADOOP-15885.004.patch] using the logger style (the rest of the class could use some similar cleaning). {quote} if you are playing with tokens, I've a patch which needs review HADOOP-15808 {quote} In the last few days, I've been going through the secure setup and the DT code so I'm somewhat familiar now; let me take a look. > Add base64 (urlString) support to DTUtil > > > Key: HADOOP-15885 > URL: https://issues.apache.org/jira/browse/HADOOP-15885 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HADOOP-15885.000.patch, HADOOP-15885.001.patch, > HADOOP-15885.002.patch, HADOOP-15885.003.patch, HADOOP-15885.004.patch > > > HADOOP-12563 added a utility to manage Delegation Token files. Currently, it > supports Java and Protobuf formats. However, When interacting with WebHDFS, > we use base64. In addition, when printing a token, we also print the base64 > value. We should be able to import base64 tokens in the utility. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HADOOP-15885: - Attachment: HADOOP-15885.004.patch
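For context on what the patch adds: WebHDFS carries delegation tokens as URL-safe base64 strings. The sketch below shows that round trip with the JDK's {{java.util.Base64}} codec; it is only an illustration — the actual utility would delegate to Hadoop's {{Token#encodeToUrlString()}} / {{Token#decodeFromUrlString()}}, and the byte payload here is a stand-in, not a real token identifier.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Illustration of the base64 (urlString) round trip dtutil needs.
// The real code path uses Token#encodeToUrlString()/decodeFromUrlString();
// the payload below is a placeholder, not an actual token.
public class Base64TokenRoundTrip {

    // Encode token bytes the URL-safe way: '-'/'_' alphabet, no padding,
    // so the string can be pasted into a URL without escaping.
    public static String encode(byte[] identifier) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(identifier);
    }

    public static byte[] decode(String urlString) {
        return Base64.getUrlDecoder().decode(urlString);
    }

    public static void main(String[] args) {
        byte[] raw = "HDFS_DELEGATION_TOKEN".getBytes(StandardCharsets.UTF_8);
        String encoded = encode(raw);
        // URL-safe output must avoid '+', '/' and '='.
        if (encoded.contains("+") || encoded.contains("/") || encoded.contains("=")) {
            throw new AssertionError("not URL-safe: " + encoded);
        }
        if (!new String(decode(encoded), StandardCharsets.UTF_8)
                .equals("HDFS_DELEGATION_TOKEN")) {
            throw new AssertionError("round trip failed");
        }
        System.out.println("round trip ok: " + encoded);
    }
}
```

Importing a token would then be the reverse of printing one: read the urlString form, decode it, and write it back out in the Java or Protobuf file format the utility already supports.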
[jira] [Commented] (HADOOP-15887) Add an option to avoid writing data locally in Distcp
[ https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668865#comment-16668865 ] Steve Loughran commented on HADOOP-15887: - Makes sense.
* needs documentation
* needs a test to verify that distcp will actually write data with locality disabled. Ideally some test in the existing distcp tests can be enhanced for this, a new test case added, etc. I don't see a mini HDFS cluster letting you verify that locality is being disabled, but we should check that the switch does not break things.
* and a test in AbstractContractDistCpTest so that we can verify that it doesn't break the other filesystems.
> Add an option to avoid writing data locally in Distcp > - > > Key: HADOOP-15887 > URL: https://issues.apache.org/jira/browse/HADOOP-15887 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.8.2, 3.0.0 >Reporter: Tao Jie >Assignee: Tao Jie >Priority: Major > Attachments: HADOOP-15887.001.patch > > > When copying a large amount of data from one cluster to another via Distcp, and > the Distcp jobs run in the target cluster, the datanode local usage becomes > imbalanced, because the default placement policy chooses the local node to > store the first replica. > In https://issues.apache.org/jira/browse/HDFS-3702 we added a flag in DFSClient > to avoid replicating to the local datanode. We can make use of this flag in > Distcp.
[jira] [Updated] (HADOOP-15887) Add an option to avoid writing data locally in Distcp
[ https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15887: Target Version/s: 3.3.0 Component/s: tools/distcp
[jira] [Updated] (HADOOP-15887) Add an option to avoid writing data locally in Distcp
[ https://issues.apache.org/jira/browse/HADOOP-15887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15887: Status: Patch Available (was: Open)
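The imbalance HADOOP-15887 describes comes from one placement decision: the default policy puts the first replica on the writer's own node. The toy model below shows that decision and the effect of the proposed switch. It is deliberately not the real {{BlockPlacementPolicy}} — the actual patch would surface HDFS-3702's client-side flag (the {{CreateFlag.NO_LOCAL_WRITE}} create flag) through a distcp option; the method and node names here are illustrative only.

```java
import java.util.List;

// Toy model of why distcp wants a "no local write" switch: the default
// placement puts the first replica on the writer's own node, so a distcp
// job running inside the target cluster fills up its own datanodes.
// The real change would pass HDFS-3702's flag (CreateFlag.NO_LOCAL_WRITE)
// when creating target files; this only models the placement decision.
public class NoLocalWritePlacement {

    public static String chooseFirstReplica(List<String> liveNodes,
                                            String writerNode,
                                            boolean avoidLocal) {
        if (!avoidLocal && liveNodes.contains(writerNode)) {
            return writerNode;              // default policy: write locally
        }
        for (String node : liveNodes) {     // otherwise: first remote node
            if (!node.equals(writerNode)) {
                return node;
            }
        }
        throw new IllegalStateException("no eligible datanode");
    }

    public static void main(String[] args) {
        List<String> nodes = List.of("dn1", "dn2", "dn3");
        // Default behaviour: the writer's node gets the first replica.
        System.out.println(chooseFirstReplica(nodes, "dn2", false)); // dn2
        // With the switch on, a remote node is chosen instead.
        System.out.println(chooseFirstReplica(nodes, "dn2", true));  // dn1
    }
}
```

As the review comment notes, a mini HDFS cluster cannot easily verify the placement itself, so tests would mostly check that the switch is plumbed through and that writes still succeed.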
[jira] [Updated] (HADOOP-15781) S3A assumed role tests failing due to changed error text in AWS exceptions
[ https://issues.apache.org/jira/browse/HADOOP-15781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15781: Affects Version/s: 3.1.0 Target Version/s: 3.2.0, 3.1.2 (was: 3.2.0) Status: Patch Available (was: Reopened) Patch 002: for branch-3.1 only; changes the asserts to not look for a specific string on failure. Tested: S3 Ireland; before the patch: failure, after the patch: happiness. If Jenkins doesn't veto it, I'll commit this ASAP. [~gabor.bota]: may be of interest to you > S3A assumed role tests failing due to changed error text in AWS exceptions > -- > > Key: HADOOP-15781 > URL: https://issues.apache.org/jira/browse/HADOOP-15781 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3, test >Affects Versions: 3.1.0, 3.2.0 > Environment: some of the fault-catching tests in {{ITestAssumeRole}} > are failing as the SDK update of HADOOP-15642 changed the text. Fix the > tests, perhaps by removing the text check entirely > —it's clearly too brittle >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15781-001.patch, HADOOP-15781-branch-3.1-002.patch > > > This is caused by HADOOP-15642, but I'd missed it because I'd been playing > with assumed roles locally (restricting their rights) and mistook the > failures for "steve's misconfigured the test role", not "the SDK
[jira] [Updated] (HADOOP-15781) S3A assumed role tests failing due to changed error text in AWS exceptions
[ https://issues.apache.org/jira/browse/HADOOP-15781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15781: Attachment: HADOOP-15781-branch-3.1-002.patch
[jira] [Reopened] (HADOOP-15781) S3A assumed role tests failing due to changed error text in AWS exceptions
[ https://issues.apache.org/jira/browse/HADOOP-15781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reopened HADOOP-15781: -
[jira] [Commented] (HADOOP-15781) S3A assumed role tests failing due to changed error text in AWS exceptions
[ https://issues.apache.org/jira/browse/HADOOP-15781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668689#comment-16668689 ] Steve Loughran commented on HADOOP-15781: - The fix for this is straightforward: a subset of this patch is needed, the bits which are more forgiving about error text, leaving out those caused by the SDK upgrade.
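The "more forgiving" style means asserting on the exception class rather than on provider-controlled message text that an AWS SDK release can change. Hadoop's own {{LambdaTestUtils.intercept}} supports exactly this class-only form; the sketch below is a minimal stand-alone version of the idea, not the Hadoop implementation itself.

```java
import java.util.concurrent.Callable;

// Minimal stand-alone version of the class-only intercept pattern:
// assert on the exception *type*, never on message text the remote
// service controls, so an SDK upgrade can't break the test.
public class ForgivingIntercept {

    public static <E extends Throwable> E intercept(Class<E> clazz,
                                                    Callable<?> action)
            throws Exception {
        Object result;
        try {
            result = action.call();
        } catch (Throwable t) {
            if (clazz.isInstance(t)) {
                return clazz.cast(t);       // pass: right class, any message
            }
            throw new AssertionError("wrong exception type: " + t, t);
        }
        throw new AssertionError("expected " + clazz.getName()
                + " but action returned: " + result);
    }

    public static void main(String[] args) throws Exception {
        // The message here could be anything the service returns;
        // the test no longer cares what it says.
        IllegalStateException ex = intercept(IllegalStateException.class,
                () -> { throw new IllegalStateException("Access denied"); });
        System.out.println("caught: " + ex.getMessage());
    }
}
```

The returned exception is still available, so a test can optionally inspect stable fields (status codes, error codes) without pinning the human-readable text.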
[jira] [Commented] (HADOOP-15781) S3A assumed role tests failing due to changed error text in AWS exceptions
[ https://issues.apache.org/jira/browse/HADOOP-15781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668687#comment-16668687 ] Steve Loughran commented on HADOOP-15781: - Stack trace from Branch 3.1. This *was* Working, so AWS has changed its exceptions {code} [ERROR] testAssumeRoleFSBadARN(org.apache.hadoop.fs.s3a.auth.ITestAssumeRole) Time elapsed: 0.761 s <<< FAILURE! java.lang.AssertionError: Expected to find 'Not authorized to perform sts:AssumeRole' but got unexpected exception: java.nio.file.AccessDeniedException: : Instantiate org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider on : com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException: Access denied (Service: AWSSecurityTokenService; Status Code: 403; Error Code: AccessDenied; Request ID: 6bf1a4c5-dc46-11e8-8744-edfffcc8f66f):AccessDenied at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:218) at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProvider(S3AUtils.java:674) at org.apache.hadoop.fs.s3a.S3AUtils.createAWSCredentialProviderSet(S3AUtils.java:566) at org.apache.hadoop.fs.s3a.DefaultS3ClientFactory.createS3Client(DefaultS3ClientFactory.java:52) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:256) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3303) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476) at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361) at org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.lambda$expectFileSystemCreateFailure$0(ITestAssumeRole.java:123) at org.apache.hadoop.fs.s3a.S3ATestUtils.lambda$interceptClosing$0(S3ATestUtils.java:486) at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:491) at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:377) at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:446) at org.apache.hadoop.fs.s3a.S3ATestUtils.interceptClosing(S3ATestUtils.java:484) at 
org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.expectFileSystemCreateFailure(ITestAssumeRole.java:121) at org.apache.hadoop.fs.s3a.auth.ITestAssumeRole.testAssumeRoleFSBadARN(ITestAssumeRole.java:160) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: com.amazonaws.services.securitytoken.model.AWSSecurityTokenServiceException: Access denied (Service: AWSSecurityTokenService; Status Code: 403; Error Code: AccessDenied; Request ID: 6bf1a4c5-dc46-11e8-8744-edfffcc8f66f) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1639) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.doInvoke(AWSSecurityTokenServiceClient.java:1271) at com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClient.invoke(AWSSecurityTokenServiceClient.java:1247) at
[jira] [Commented] (HADOOP-15885) Add base64 (urlString) support to DTUtil
[ https://issues.apache.org/jira/browse/HADOOP-15885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668660#comment-16668660 ] Steve Loughran commented on HADOOP-15885: - Code looks good, esp. the tests.
* logging is slf4j, so should use the LOG.info("Add token with service {}", token.getService()) style
If you are playing with tokens, I've a patch which needs review: HADOOP-15808
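The point of the slf4j parameterized style is that the message is only assembled if the log level is enabled, instead of always paying for string concatenation. The real class would simply call {{org.slf4j.Logger#info(String, Object...)}}; the sketch below mimics slf4j's {} substitution with a tiny stand-in formatter purely so the example runs without the slf4j jar on the classpath.

```java
// The review asks for slf4j's parameterized style:
//   LOG.info("Add token with service {}", token.getService());
// instead of "Add token with service " + token.getService(), so the
// message is only built when the level is enabled. This stand-in mimics
// slf4j's {} substitution to keep the example self-contained; real code
// would just use org.slf4j.Logger.
public class ParameterizedLogging {

    // Substitute each "{}" with the next argument, left to right.
    public static String format(String pattern, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length()) {
            if (i + 1 < pattern.length() && pattern.charAt(i) == '{'
                    && pattern.charAt(i + 1) == '}' && argIndex < args.length) {
                out.append(args[argIndex++]);
                i += 2;
            } else {
                out.append(pattern.charAt(i++));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(format("Add token with service {}", "ha-hdfs:ns1"));
    }
}
```

With a real slf4j logger the substitution (and any toString() calls on the arguments) is deferred until the logger decides the event will actually be emitted.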
[jira] [Updated] (HADOOP-15687) Credentials class should allow access to aliases
[ https://issues.apache.org/jira/browse/HADOOP-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15687: Target Version/s: 3.3.0, 3.2.1 Status: Patch Available (was: Open) Patch LGTM; resubmitting to see what Jenkins says. > Credentials class should allow access to aliases > > > Key: HADOOP-15687 > URL: https://issues.apache.org/jira/browse/HADOOP-15687 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Lars Francke >Assignee: Lars Francke >Priority: Trivial > Attachments: HADOOP-15687.patch, HADOOP-15687.patch > > > The Credentials class can read token files from disk which are keyed by an > alias. It also allows retrieving tokens by alias and listing all tokens. > It does not, however, allow getting the full map of all tokens including the > aliases (or at least a list of all aliases).
[jira] [Updated] (HADOOP-15687) Credentials class should allow access to aliases
[ https://issues.apache.org/jira/browse/HADOOP-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15687: Attachment: HADOOP-15687.patch
[jira] [Updated] (HADOOP-15687) Credentials class should allow access to aliases
[ https://issues.apache.org/jira/browse/HADOOP-15687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15687: Status: Open (was: Patch Available)
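What HADOOP-15687 asks for can be shown in a few lines: lookup-by-alias and list-all-tokens exist, but there is no way to see which alias maps to which token. The sketch below models the missing accessor with plain JDK types; the class, field, and method names are illustrative, not those of the actual Hadoop {{Credentials}} class or the patch.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Model of the gap HADOOP-15687 fills: expose the alias -> token mapping
// (or at least the alias set), not just lookup-by-alias and list-all.
// All names here are illustrative stand-ins, not the real Credentials API.
public class CredentialsSketch {

    private final Map<String, byte[]> tokenMap = new HashMap<>();

    public void addToken(String alias, byte[] token) {
        tokenMap.put(alias, token);
    }

    public byte[] getToken(String alias) {  // already possible today
        return tokenMap.get(alias);
    }

    // The missing accessor: a read-only view keyed by alias, so callers
    // can see which alias each token came in under.
    public Map<String, byte[]> getTokenMap() {
        return Collections.unmodifiableMap(tokenMap);
    }

    public static void main(String[] args) {
        CredentialsSketch creds = new CredentialsSketch();
        creds.addToken("nn1", new byte[] {1});
        creds.addToken("nn2", new byte[] {2});
        System.out.println(creds.getTokenMap().keySet()); // aliases now visible
    }
}
```

Returning an unmodifiable view keeps the internal map encapsulated while still letting callers iterate aliases and tokens together.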
[jira] [Commented] (HADOOP-14999) AliyunOSS: provide one asynchronous multi-part based uploading mechanism
[ https://issues.apache.org/jira/browse/HADOOP-14999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668649#comment-16668649 ] Steve Loughran commented on HADOOP-14999: - Have you seen the multipart upload API being developed in trunk: this cloud connector should support it too, so that we'll have a single API for bulk uploads of data to any filestore > AliyunOSS: provide one asynchronous multi-part based uploading mechanism > > > Key: HADOOP-14999 > URL: https://issues.apache.org/jira/browse/HADOOP-14999 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Affects Versions: 3.0.0-beta1 >Reporter: Genmao Yu >Assignee: Genmao Yu >Priority: Major > Fix For: 2.10.0, 2.9.1, 3.2.0, 3.1.1, 3.0.3 > > Attachments: HADOOP-14999-branch-2.001.patch, > HADOOP-14999-branch-2.002.patch, HADOOP-14999.001.patch, > HADOOP-14999.002.patch, HADOOP-14999.003.patch, HADOOP-14999.004.patch, > HADOOP-14999.005.patch, HADOOP-14999.006.patch, HADOOP-14999.007.patch, > HADOOP-14999.008.patch, HADOOP-14999.009.patch, HADOOP-14999.010.patch, > HADOOP-14999.011.patch, asynchronous_file_uploading.pdf, > diff-between-patch7-and-patch8.txt > > > This mechanism is designed for uploading file in parallel and asynchronously: > - improve the performance of uploading file to OSS server. Firstly, this > mechanism splits result to multiple small blocks and upload them in parallel. > Then, getting result and uploading blocks are asynchronous. > - avoid buffering too large result into local disk. To cite an extreme > example, there is a task which will output 100GB or even larger, we may need > to output this 100GB to local disk and then upload it. Sometimes, it is > inefficient and limited to disk space. > This patch reuse {{SemaphoredDelegatingExecutor}} as executor service and > depends on HADOOP-15039. > Attached {{asynchronous_file_uploading.pdf}} illustrated the difference > between previous {{AliyunOSSOutputStream}} and > {{AliyunOSSBlockOutputStream}}, i.e. 
this asynchronous multi-part based > uploading mechanism. > 1. {{AliyunOSSOutputStream}}: we need to write the whole result to local > disk before we can upload it to OSS. This poses two problems: > - if the output file is too large, it will run out of local disk space. > - if the output file is too large, the task will wait a long time to upload the result > to OSS before finishing, wasting much compute resource. > 2. {{AliyunOSSBlockOutputStream}}: we cut the task output into small blocks, > i.e. some small local files, and each block is packaged into an uploading > task. These tasks are submitted to {{SemaphoredDelegatingExecutor}}, > which uploads the blocks in parallel; this > improves performance greatly. > 3. Each task will retry 3 times to upload its block to Aliyun OSS. If one of > those tasks fails, the whole file upload fails, and we abort the > current upload.
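The block-based scheme above can be sketched with plain JDK concurrency: split the output into blocks, submit each as a task to a bounded executor (the role {{SemaphoredDelegatingExecutor}} plays in Hadoop), and use a semaphore for back-pressure so only a bounded number of blocks is in flight or on disk at once. The "upload" below is a stub counter, not a real OSS part upload, and the retry logic of the actual patch is omitted.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal model of the AliyunOSSBlockOutputStream idea: upload fixed-size
// blocks through a semaphore-bounded executor so a huge result never has
// to sit on local disk in one piece. The upload itself is a stub.
public class BlockUploadSketch {

    public static int uploadInBlocks(int totalBytes, int blockSize,
                                     int maxInFlight) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(maxInFlight);
        Semaphore permits = new Semaphore(maxInFlight);
        AtomicInteger uploadedBlocks = new AtomicInteger();
        List<Future<?>> pending = new ArrayList<>();
        try {
            for (int off = 0; off < totalBytes; off += blockSize) {
                permits.acquire();          // back-pressure: cap in-flight blocks
                pending.add(pool.submit(() -> {
                    try {
                        uploadedBlocks.incrementAndGet(); // stand-in for a part upload
                    } finally {
                        permits.release();
                    }
                }));
            }
            for (Future<?> f : pending) {
                f.get();                    // surface any per-block failure
            }
        } finally {
            pool.shutdown();
        }
        return uploadedBlocks.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(uploadInBlocks(1000, 100, 4)); // 10 blocks
    }
}
```

Failing the whole upload when any {{Future}} reports an error mirrors point 3 of the description: one failed block aborts the current upload.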
[jira] [Commented] (HADOOP-15886) Fix findbugs warnings in RegistryDNS.java
[ https://issues.apache.org/jira/browse/HADOOP-15886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668303#comment-16668303 ] Hadoop QA commented on HADOOP-15886: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 64m 45s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 39s{color} | {color:red} hadoop-common-project/hadoop-registry in trunk has 3 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 32s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 59s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-yarn-project/hadoop-yarn {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} hadoop-common-project/hadoop-registry generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 19s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 49s{color} | {color:green} hadoop-registry in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}145m 27s{color} | {color:red} hadoop-yarn in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 52s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}257m 54s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.timelineservice.reader.TestTimelineReaderWebServicesHBaseStorage | | | hadoop.yarn.server.resourcemanager.scheduler.capacity.TestQueueManagementDynamicEditPolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f | | JIRA Issue |
[jira] [Commented] (HADOOP-15865) ConcurrentModificationException in Configuration.overlay() method
[ https://issues.apache.org/jira/browse/HADOOP-15865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16668201#comment-16668201 ] Akira Ajisaka commented on HADOOP-15865: The patch looks good to me. Hi [~andrew.wang], could you double-check this? > ConcurrentModificationException in Configuration.overlay() method > - > > Key: HADOOP-15865 > URL: https://issues.apache.org/jira/browse/HADOOP-15865 > Project: Hadoop Common > Issue Type: Bug >Reporter: Oleksandr Shevchenko >Assignee: Oleksandr Shevchenko >Priority: Major > Attachments: HADOOP-15865.001.patch > > > Configuration.overlay() is not thread-safe and can be the cause of > ConcurrentModificationException since we use iteration over Properties > object. > {code} > private void overlay(Properties to, Properties from) { > for (Entry entry: from.entrySet()) { > to.put(entry.getKey(), entry.getValue()); > } > } > {code} > Properties class is thread-safe but iterator is not. We should manually > synchronize on the returned set of entries which we use for iteration. 
> We faced with ResourceManger fails during recovery caused by > ConcurrentModificationException: > {noformat} > 2018-10-12 08:00:56,968 INFO org.apache.hadoop.service.AbstractService: > Service ResourceManager failed in state STARTED; cause: > java.util.ConcurrentModificationException > java.util.ConcurrentModificationException > at java.util.Hashtable$Enumerator.next(Hashtable.java:1383) > at org.apache.hadoop.conf.Configuration.overlay(Configuration.java:2801) > at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2696) > at > org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2632) > at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2528) > at org.apache.hadoop.conf.Configuration.get(Configuration.java:1062) > at > org.apache.hadoop.conf.Configuration.getStringCollection(Configuration.java:1914) > at > org.apache.hadoop.security.alias.CredentialProviderFactory.getProviders(CredentialProviderFactory.java:53) > at > org.apache.hadoop.conf.Configuration.getPasswordFromCredentialProviders(Configuration.java:2043) > at org.apache.hadoop.conf.Configuration.getPassword(Configuration.java:2023) > at > org.apache.hadoop.yarn.webapp.util.WebAppUtils.getPassword(WebAppUtils.java:452) > at > org.apache.hadoop.yarn.webapp.util.WebAppUtils.loadSslConfiguration(WebAppUtils.java:428) > at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:293) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1017) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1117) > at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193) > at > org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1251) > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.security.RMDelegationTokenSecretManager: > removing RMDelegation token with sequence number: 
3489914 > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore: Removing > RMDelegationToken and SequenceNumber > 2018-10-12 08:00:56,968 INFO > org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore: > Removing RMDelegationToken_3489914 > 2018-10-12 08:00:56,969 INFO org.apache.hadoop.ipc.Server: Stopping server on > 8032 > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org