[jira] [Updated] (HADOOP-14807) should prevent the possibility of NPE about ReconfigurableBase.java
[ https://issues.apache.org/jira/browse/HADOOP-14807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

hu xiaodong updated HADOOP-14807:
---------------------------------
    Attachment: HADOOP-14807.001.patch

> should prevent the possibility of NPE about ReconfigurableBase.java
> -------------------------------------------------------------------
>
>                 Key: HADOOP-14807
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14807
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: hu xiaodong
>            Assignee: hu xiaodong
>            Priority: Minor
>         Attachments: HADOOP-14807.001.patch
>
> 1. reconfigurePropertyImpl() may throw a ReconfigurationException whose getCause() is null:
>
> {code:title=ReconfigurationThread.java|borderStyle=solid}
> protected String reconfigurePropertyImpl(String property, String newVal)
>     throws ReconfigurationException {
>   final DatanodeManager datanodeManager = namesystem.getBlockManager()
>       .getDatanodeManager();
>   if (property.equals(DFS_HEARTBEAT_INTERVAL_KEY)) {
>     return reconfHeartbeatInterval(datanodeManager, property, newVal);
>   } else if (property.equals(DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY)) {
>     return reconfHeartbeatRecheckInterval(datanodeManager, property, newVal);
>   } else if (property.equals(FS_PROTECTED_DIRECTORIES)) {
>     return reconfProtectedDirectories(newVal);
>   } else if (property.equals(HADOOP_CALLER_CONTEXT_ENABLED_KEY)) {
>     return reconfCallerContextEnabled(newVal);
>   } else if (property.equals(ipcClientRPCBackoffEnable)) {
>     return reconfigureIPCBackoffEnabled(newVal);
>   } else {
>     //===
>     // a ReconfigurationException whose getCause() is null may be thrown here
>     //===
>     throw new ReconfigurationException(property, newVal, getConf().get(
>         property));
>   }
> }
> {code}
>
> 2. ReconfigurationThread will then call ReconfigurationException.getCause().getMessage(), which causes an NPE:
>
> {code:title=ReconfigurationThread.java|borderStyle=solid}
> private static class ReconfigurationThread extends Thread {
>   private ReconfigurableBase parent;
>
>   ReconfigurationThread(ReconfigurableBase base) {
>     this.parent = base;
>   }
>
>   // See {@link ReconfigurationServlet#applyChanges}
>   public void run() {
>     LOG.info("Starting reconfiguration task.");
>     final Configuration oldConf = parent.getConf();
>     final Configuration newConf = parent.getNewConf();
>     final Collection<PropertyChange> changes =
>         parent.getChangedProperties(newConf, oldConf);
>     Map<PropertyChange, Optional<String>> results = Maps.newHashMap();
>     ConfigRedactor oldRedactor = new ConfigRedactor(oldConf);
>     ConfigRedactor newRedactor = new ConfigRedactor(newConf);
>     for (PropertyChange change : changes) {
>       String errorMessage = null;
>       String oldValRedacted = oldRedactor.redact(change.prop, change.oldVal);
>       String newValRedacted = newRedactor.redact(change.prop, change.newVal);
>       if (!parent.isPropertyReconfigurable(change.prop)) {
>         LOG.info(String.format(
>             "Property %s is not configurable: old value: %s, new value: %s",
>             change.prop, oldValRedacted, newValRedacted));
>         continue;
>       }
>       LOG.info("Change property: " + change.prop + " from \""
>           + ((change.oldVal == null) ? "<default>" : oldValRedacted)
>           + "\" to \""
>           + ((change.newVal == null) ? "<default>" : newValRedacted)
>           + "\".");
>       try {
>         String effectiveValue =
>             parent.reconfigurePropertyImpl(change.prop, change.newVal);
>         if (change.newVal != null) {
>           oldConf.set(change.prop, effectiveValue);
>         } else {
>           oldConf.unset(change.prop);
>         }
>       } catch (ReconfigurationException e) {
>         //===
>         // an NPE may occur here, because e.getCause() may be null
>         //===
>         errorMessage = e.getCause().getMessage();
>       }
>       results.put(change, Optional.fromNullable(errorMessage));
>     }
>     synchronized (parent.reconfigLock) {
>       parent.endTime = Time.now();
>       parent.status = Collections.unmodifiableMap(results);
>       parent.reconfigThread = null;
>     }
>   }
> }
> {code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail:
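The NPE can be avoided by falling back to the exception's own message when no cause was set. The sketch below shows one possible null-safe pattern; the class and helper names are hypothetical, and this is not necessarily what the attached HADOOP-14807.001.patch does:

```java
// Hypothetical helper: extract an error message from an exception whose
// getCause() may be null, instead of calling getCause().getMessage() blindly.
public class NullSafeCause {
    static String errorMessage(Throwable e) {
        Throwable cause = e.getCause();
        // Prefer the cause's message, but fall back to the exception's own
        // message rather than dereferencing a null cause.
        return (cause != null) ? cause.getMessage() : e.getMessage();
    }

    public static void main(String[] args) {
        System.out.println(errorMessage(
            new Exception("no cause set")));                  // → no cause set
        System.out.println(errorMessage(
            new Exception("wrapper",
                new IllegalArgumentException("bad value")))); // → bad value
    }
}
```

Applied to the catch block above, this would read `errorMessage = (e.getCause() != null) ? e.getCause().getMessage() : e.getMessage();`.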
[jira] [Created] (HADOOP-14807) should prevent the possibility of NPE about ReconfigurableBase.java
hu xiaodong created HADOOP-14807:
------------------------------------

             Summary: should prevent the possibility of NPE about ReconfigurableBase.java
                 Key: HADOOP-14807
                 URL: https://issues.apache.org/jira/browse/HADOOP-14807
             Project: Hadoop Common
          Issue Type: Improvement
            Reporter: hu xiaodong
            Assignee: hu xiaodong
            Priority: Minor

1.
{code:title=ReconfigurationThread.java|borderStyle=solid}
protected String reconfigurePropertyImpl(String property, String newVal)
    throws ReconfigurationException {
  final DatanodeManager datanodeManager = namesystem.getBlockManager()
      .getDatanodeManager();
  if (property.equals(DFS_HEARTBEAT_INTERVAL_KEY)) {
    return reconfHeartbeatInterval(datanodeManager, property, newVal);
  } else if (property.equals(DFS_NAMENODE_HEARTBEAT_RECHECK_INTERVAL_KEY)) {
    return reconfHeartbeatRecheckInterval(datanodeManager, property, newVal);
  } else if (property.equals(FS_PROTECTED_DIRECTORIES)) {
    return reconfProtectedDirectories(newVal);
  } else if (property.equals(HADOOP_CALLER_CONTEXT_ENABLED_KEY)) {
    return reconfCallerContextEnabled(newVal);
  } else if (property.equals(ipcClientRPCBackoffEnable)) {
    return reconfigureIPCBackoffEnabled(newVal);
  } else {
    //===
    // a ReconfigurationException whose getCause() is null may be thrown here
    //===
    throw new ReconfigurationException(property, newVal, getConf().get(
        property));
  }
}
{code}

2.
{code:title=ReconfigurationThread.java|borderStyle=solid}
private static class ReconfigurationThread extends Thread {
  private ReconfigurableBase parent;

  ReconfigurationThread(ReconfigurableBase base) {
    this.parent = base;
  }

  // See {@link ReconfigurationServlet#applyChanges}
  public void run() {
    LOG.info("Starting reconfiguration task.");
    final Configuration oldConf = parent.getConf();
    final Configuration newConf = parent.getNewConf();
    final Collection<PropertyChange> changes =
        parent.getChangedProperties(newConf, oldConf);
    Map<PropertyChange, Optional<String>> results = Maps.newHashMap();
    ConfigRedactor oldRedactor = new ConfigRedactor(oldConf);
    ConfigRedactor newRedactor = new ConfigRedactor(newConf);
    for (PropertyChange change : changes) {
      String errorMessage = null;
      String oldValRedacted = oldRedactor.redact(change.prop, change.oldVal);
      String newValRedacted = newRedactor.redact(change.prop, change.newVal);
      if (!parent.isPropertyReconfigurable(change.prop)) {
        LOG.info(String.format(
            "Property %s is not configurable: old value: %s, new value: %s",
            change.prop, oldValRedacted, newValRedacted));
        continue;
      }
      LOG.info("Change property: " + change.prop + " from \""
          + ((change.oldVal == null) ? "<default>" : oldValRedacted)
          + "\" to \""
          + ((change.newVal == null) ? "<default>" : newValRedacted)
          + "\".");
      try {
        String effectiveValue =
            parent.reconfigurePropertyImpl(change.prop, change.newVal);
        if (change.newVal != null) {
          oldConf.set(change.prop, effectiveValue);
        } else {
          oldConf.unset(change.prop);
        }
      } catch (ReconfigurationException e) {
        //===
        // an NPE may occur here, because e.getCause() may be null
        //===
        errorMessage = e.getCause().getMessage();
      }
      results.put(change, Optional.fromNullable(errorMessage));
    }
    synchronized (parent.reconfigLock) {
      parent.endTime = Time.now();
      parent.status = Collections.unmodifiableMap(results);
      parent.reconfigThread = null;
    }
  }
}
{code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140828#comment-16140828 ] Ajay Kumar commented on HADOOP-14729: - [~arpitagarwal],[~ste...@apache.org] thanks for review. > Upgrade JUnit 3 TestCase to JUnit 4 > --- > > Key: HADOOP-14729 > URL: https://issues.apache.org/jira/browse/HADOOP-14729 > Project: Hadoop Common > Issue Type: Test >Reporter: Akira Ajisaka >Assignee: Ajay Kumar > Labels: newbie > Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, > HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, > HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, > HADOOP-14729.009.patch, HADOOP-14729.010.patch, HADOOP-14729.011.patch, > HADOOP-14729.012.patch > > > There are still test classes that extend from junit.framework.TestCase in > hadoop-common. Upgrade them to JUnit4. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
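The upgrade the issue asks for follows a mechanical pattern; the sketch below shows it on an illustrative class (TestFoo is a made-up name, not a class from the Hadoop patch):

```java
// JUnit 3 style extends junit.framework.TestCase and relies on method
// names starting with "test":
//
//   public class TestFoo extends TestCase {
//     public void testAddition() { assertEquals(4, 2 + 2); }
//   }
//
// The JUnit 4 equivalent drops the superclass, marks tests with @Test,
// and statically imports the assertions:
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class TestFoo {
    @Test
    public void testAddition() {
        assertEquals(4, 2 + 2);
    }
}
```

Because JUnit 4 finds tests by annotation rather than inheritance, the converted classes also gain access to @Before/@After, timeouts, and expected exceptions without the TestCase base class.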
[jira] [Commented] (HADOOP-14220) Add S3GuardTool bucket-info command
[ https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140762#comment-16140762 ] Hadoop QA commented on HADOOP-14220: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} HADOOP-13345 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 58s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s{color} | {color:green} HADOOP-13345 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | 
{color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 11s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 12 new + 21 unchanged - 0 fixed = 33 total (was 21) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 6 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 11s{color} | {color:red} hadoop-tools_hadoop-aws generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 38s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 55s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14220 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883604/HADOOP-14220-HADOOP-13345-003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux aa4373e87642 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-13345 / f07c7aa | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13108/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/13108/artifact/patchprocess/whitespace-eol.txt | | javadoc | https://builds.apache.org/job/PreCommit-HADOOP-Build/13108/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13108/testReport/ | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13108/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. >
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140720#comment-16140720 ] Haibo Chen commented on HADOOP-14284: - Done. > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha4 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, > HADOOP-14284.012.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
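"Shading" here means relocating Guava's packages at build time so downstream classpaths never see com.google.common directly. A minimal maven-shade-plugin sketch of the idea (the shaded package name and plugin placement are illustrative, not Hadoop's actual pom configuration):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- Rewrite com.google.common.* references in the bundled
                 classes to a private package, so the artifact's Guava
                 cannot clash with a downstream project's Guava. -->
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hadoop.thirdparty.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```

The relocation rewrites both the copied class files and the bytecode references to them, which is why it protects consumers of private artifacts like hadoop-hdfs as well as users of the shaded client.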
[jira] [Updated] (HADOOP-14220) Add S3GuardTool bucket-info command
[ https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14220: Status: Patch Available (was: Open) > Add S3GuardTool bucket-info command > --- > > Key: HADOOP-14220 > URL: https://issues.apache.org/jira/browse/HADOOP-14220 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14220-HADOOP-13345-001.patch, > HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch > > > Add a diagnostics command to s3guard which does whatever we need to diagnose > problems for a specific (named) s3a url. This is something which can be > attached to bug reports as well as used by developers. > * Properties to log (with provenance attribute, which can track bucket > overrides: s3guard metastore setup, autocreate, capacity, > * table present/absent > * # of keys in DDB table for that bucket? > * any other stats? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14220) Add S3GuardTool bucket-info command
[ https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14220: Attachment: HADOOP-14220-HADOOP-13345-003.patch HADOOP-14220 003 working on the CLI, esp trying to track down an init bug which appears due to having the per-bucket option set in s3guard init, as it still picks up the DDB binding & fails as it isn't there > Add S3GuardTool bucket-info command > --- > > Key: HADOOP-14220 > URL: https://issues.apache.org/jira/browse/HADOOP-14220 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14220-HADOOP-13345-001.patch, > HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch > > > Add a diagnostics command to s3guard which does whatever we need to diagnose > problems for a specific (named) s3a url. This is something which can be > attached to bug reports as well as used by developers. > * Properties to log (with provenance attribute, which can track bucket > overrides: s3guard metastore setup, autocreate, capacity, > * table present/absent > * # of keys in DDB table for that bucket? > * any other stats? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14220) Add S3GuardTool bucket-info command
[ https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14220: Status: Open (was: Patch Available) > Add S3GuardTool bucket-info command > --- > > Key: HADOOP-14220 > URL: https://issues.apache.org/jira/browse/HADOOP-14220 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14220-HADOOP-13345-001.patch, > HADOOP-14220-HADOOP-13345-002.patch, HADOOP-14220-HADOOP-13345-003.patch > > > Add a diagnostics command to s3guard which does whatever we need to diagnose > problems for a specific (named) s3a url. This is something which can be > attached to bug reports as well as used by developers. > * Properties to log (with provenance attribute, which can track bucket > overrides: s3guard metastore setup, autocreate, capacity, > * table present/absent > * # of keys in DDB table for that bucket? > * any other stats? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13435) Add thread local mechanism for aggregating file system storage stats
[ https://issues.apache.org/jira/browse/HADOOP-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140710#comment-16140710 ] Mingliang Liu commented on HADOOP-13435: [~jnp] do you have time for a kind review? I plan to get in this along with [HADOOP-13032] if you agrees. Thanks, > Add thread local mechanism for aggregating file system storage stats > > > Key: HADOOP-13435 > URL: https://issues.apache.org/jira/browse/HADOOP-13435 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13435.000.patch, HADOOP-13435.001.patch, > HADOOP-13435.002.patch, HADOOP-13435.003.patch, HADOOP-13435.004.patch > > > As discussed in [HADOOP-13032], this is to add thread local mechanism for > aggregating file system storage stats. This class will also be used in > [HADOOP-13031], which is to separate the distance-oriented rack-aware read > bytes logic from {{FileSystemStorageStatistics}} to new > DFSRackAwareStorageStatistics as it's DFS-specific. After this patch, the > {{FileSystemStorageStatistics}} can live without the to-be-removed > {{FileSystem$Statistics}} implementation. > A unit test should also be added. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
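The thread-local aggregation idea can be sketched as follows (a hypothetical class, not the actual FileSystemStorageStatistics implementation): each thread increments its own counter without contention, and a reader sums the per-thread counters on demand.

```java
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;
import java.util.concurrent.atomic.LongAdder;

public class ThreadLocalStats {
    // One counter per thread, registered so readers can aggregate them.
    // WeakHashMap lets entries for dead threads be collected.
    private final Map<Thread, LongAdder> perThread =
        Collections.synchronizedMap(new WeakHashMap<>());

    // Each thread lazily creates and registers its own counter.
    private final ThreadLocal<LongAdder> local =
        ThreadLocal.withInitial(() -> {
            LongAdder a = new LongAdder();
            perThread.put(Thread.currentThread(), a);
            return a;
        });

    // Hot path: touches only this thread's counter, no shared lock.
    public void incrementBytesRead(long n) {
        local.get().add(n);
    }

    // Read path: sum across all registered threads.
    public long totalBytesRead() {
        synchronized (perThread) { // required when iterating the wrapper
            return perThread.values().stream()
                .mapToLong(LongAdder::sum).sum();
        }
    }
}
```

The design choice is the usual write-local/read-aggregate trade-off: updates on the I/O hot path stay cheap, while the comparatively rare statistics reads pay the cost of iterating the registry.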
[jira] [Comment Edited] (HADOOP-13435) Add thread local mechanism for aggregating file system storage stats
[ https://issues.apache.org/jira/browse/HADOOP-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140710#comment-16140710 ] Mingliang Liu edited comment on HADOOP-13435 at 8/24/17 9:13 PM: - [~jnp] do you have time for a kind review? I plan to get in this along with [HADOOP-13032] if you agree. Thanks, was (Author: liuml07): [~jnp] do you have time for a kind review? I plan to get in this along with [HADOOP-13032] if you agrees. Thanks, > Add thread local mechanism for aggregating file system storage stats > > > Key: HADOOP-13435 > URL: https://issues.apache.org/jira/browse/HADOOP-13435 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13435.000.patch, HADOOP-13435.001.patch, > HADOOP-13435.002.patch, HADOOP-13435.003.patch, HADOOP-13435.004.patch > > > As discussed in [HADOOP-13032], this is to add thread local mechanism for > aggregating file system storage stats. This class will also be used in > [HADOOP-13031], which is to separate the distance-oriented rack-aware read > bytes logic from {{FileSystemStorageStatistics}} to new > DFSRackAwareStorageStatistics as it's DFS-specific. After this patch, the > {{FileSystemStorageStatistics}} can live without the to-be-removed > {{FileSystem$Statistics}} implementation. > A unit test should also be added. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140708#comment-16140708 ] Sean Busbey commented on HADOOP-14284: -- Sure, I'll take a look. Can you link HADOOP-14771 to the jira where the relevant yarn module was introduced? > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha4 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, > HADOOP-14284.012.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140677#comment-16140677 ] Steve Loughran commented on HADOOP-14729: - LGTM > Upgrade JUnit 3 TestCase to JUnit 4 > --- > > Key: HADOOP-14729 > URL: https://issues.apache.org/jira/browse/HADOOP-14729 > Project: Hadoop Common > Issue Type: Test >Reporter: Akira Ajisaka >Assignee: Ajay Kumar > Labels: newbie > Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, > HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, > HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, > HADOOP-14729.009.patch, HADOOP-14729.010.patch, HADOOP-14729.011.patch, > HADOOP-14729.012.patch > > > There are still test classes that extend from junit.framework.TestCase in > hadoop-common. Upgrade them to JUnit4. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140672#comment-16140672 ] Haibo Chen commented on HADOOP-14284: - [~busbey], the shaded hadoop-client work seems incomplete. Can you take a look at HADOOP-14771 and see if it is valid? > Shade Guava everywhere > -- > > Key: HADOOP-14284 > URL: https://issues.apache.org/jira/browse/HADOOP-14284 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha4 >Reporter: Andrew Wang >Assignee: Tsuyoshi Ozawa >Priority: Blocker > Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, > HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, > HADOOP-14284.012.patch > > > HADOOP-10101 upgraded the guava version for 3.x to 21. > Guava is broadly used by Java projects that consume our artifacts. > Unfortunately, these projects also consume our private artifacts like > {{hadoop-hdfs}}. They also are unlikely on the new shaded client introduced > by HADOOP-11804, currently only available in 3.0.0-alpha2. > We should shade Guava everywhere to proactively avoid breaking downstreams. > This isn't a requirement for all dependency upgrades, but it's necessary for > known-bad dependencies like Guava. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140635#comment-16140635 ] Arpit Agarwal commented on HADOOP-14729: +1 for the v12 patch. I will hold off committing until tomorrow in case Akira or Steve have further comments. > Upgrade JUnit 3 TestCase to JUnit 4 > --- > > Key: HADOOP-14729 > URL: https://issues.apache.org/jira/browse/HADOOP-14729 > Project: Hadoop Common > Issue Type: Test >Reporter: Akira Ajisaka >Assignee: Ajay Kumar > Labels: newbie > Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, > HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, > HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, > HADOOP-14729.009.patch, HADOOP-14729.010.patch, HADOOP-14729.011.patch, > HADOOP-14729.012.patch > > > There are still test classes that extend from junit.framework.TestCase in > hadoop-common. Upgrade them to JUnit4. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-14806) "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX
[ https://issues.apache.org/jira/browse/HADOOP-14806?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru reassigned HADOOP-14806: --- Assignee: Hanisha Koneru > "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX > - > > Key: HADOOP-14806 > URL: https://issues.apache.org/jira/browse/HADOOP-14806 > Project: Hadoop Common > Issue Type: Bug > Components: metrics >Affects Versions: 2.7.1 >Reporter: Sai Nukavarapu >Assignee: Hanisha Koneru > > "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX > Looking at the code, I see the description below. > {noformat} > `BlockVerificationFailures` | Total number of verifications failures | > `BlocksVerified` | Total number of blocks verified | > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14220) Add S3GuardTool bucket-info command
[ https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140507#comment-16140507 ] Steve Loughran commented on HADOOP-14220: - Just tracked down an obscure "this is impossible what have I broken here" bug with {{s3guard init}}, one which *only* arises on the CLI & not the unit tests {code} ot exist in region eu-west-1; auto-creation is turned off ~/P/h/h/t/h/bin (s3guard/HADOOP-14220-info ⚡↩) hadoop s3guard init -write 5 -read 10 $bucket 2017-08-24 19:44:56,481 INFO Configuration.deprecation: fs.s3a.server-side-encryption-key is deprecated. Instead, use fs.s3a.server-side-encryption.key java.io.FileNotFoundException: DynamoDB table 'hwdev-steve-ireland-new' does not exist in region eu-west-1; auto-creation is turned off at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:830) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:245) at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:97) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:292) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3258) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3307) at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3281) at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:529) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.initS3AFileSystem(S3GuardTool.java:252) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.parseDynamoDBRegion(S3GuardTool.java:169) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$Init.run(S3GuardTool.java:364) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:296) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1109) at 
org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.main(S3GuardTool.java:1118) Caused by: com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested resource not found: Table: hwdev-steve-ireland-new not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: 75RTSBU3AEJ9R7ICPS0U3AQIURVV4KQNSO5AEMVJF66Q9ASUAAJG) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1588) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1258) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1030) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:742) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:716) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:2089) at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:2065) at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDescribeTable(AmazonDynamoDBClient.java:1048) at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.describeTable(AmazonDynamoDBClient.java:1024) at com.amazonaws.services.dynamodbv2.document.Table.describe(Table.java:137) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:792) ... 
15 more {code} Cause: if you have a bucket-name-specific setting for your metastore, then the line {{conf.setClass(S3_METADATA_STORE_IMPL, NullMetadataStore.class, MetadataStore.class);}} doesn't disable any attempt to create the DDB binding, because fs.s3a.bucket.metastore.impl just overwrites it. > Add S3GuardTool bucket-info command > --- > > Key: HADOOP-14220 > URL: https://issues.apache.org/jira/browse/HADOOP-14220 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-14220-HADOOP-13345-001.patch, > HADOOP-14220-HADOOP-13345-002.patch > > > Add a diagnostics command to s3guard which does
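Steve's diagnosis (a bucket-specific key silently overriding the base key) can be sketched with a minimal stand-in for S3A's per-bucket option propagation. The class below and its key-resolution logic are illustrative only, not the real S3AUtils/Configuration code:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Toy model of S3A per-bucket configuration precedence:
 * fs.s3a.bucket.BUCKET.SUFFIX wins over fs.s3a.SUFFIX.
 * Illustrative sketch only, not the Hadoop implementation.
 */
public class BucketConfSketch {
    private final Map<String, String> props = new HashMap<>();

    public void set(String key, String value) {
        props.put(key, value);
    }

    /** Resolve a setting for a bucket, preferring the per-bucket override. */
    public String resolve(String bucket, String suffix) {
        String perBucket = props.get("fs.s3a.bucket." + bucket + "." + suffix);
        return perBucket != null ? perBucket : props.get("fs.s3a." + suffix);
    }

    public static void main(String[] args) {
        BucketConfSketch conf = new BucketConfSketch();
        // The CLI sets the base key to the null store...
        conf.set("fs.s3a.metadatastore.impl", "NullMetadataStore");
        // ...but a bucket-specific override re-enables DynamoDB regardless.
        conf.set("fs.s3a.bucket.mybucket.metadatastore.impl", "DynamoDBMetadataStore");
        System.out.println(conf.resolve("mybucket", "metadatastore.impl"));
        // prints DynamoDBMetadataStore
    }
}
```

This reproduces the symptom: setting the base key to the null store has no effect once a per-bucket override exists, so the DDB binding is still attempted.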
[jira] [Created] (HADOOP-14806) "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX
Sai Nukavarapu created HADOOP-14806: --- Summary: "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX Key: HADOOP-14806 URL: https://issues.apache.org/jira/browse/HADOOP-14806 Project: Hadoop Common Issue Type: Bug Components: metrics Affects Versions: 2.7.1 Reporter: Sai Nukavarapu "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX Looking at the code, I see the description below. {noformat} `BlockVerificationFailures` | Total number of verifications failures | `BlocksVerified` | Total number of blocks verified | {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14802) Add support for using container saskeys for all accesses
[ https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140443#comment-16140443 ] Hadoop QA commented on HADOOP-14802: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | 
{color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 2s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 48s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14802 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883564/HADOOP-14802.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c0de6e9d3034 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8196a07 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13107/testReport/ | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13107/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add support for using container saskeys for all accesses > > > Key: HADOOP-14802 > URL: https://issues.apache.org/jira/browse/HADOOP-14802 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Sivaguru Sankaridurg >Assignee: Sivaguru Sankaridurg > Labels: azure, fs, secure, wasb > Attachments: HADOOP-14802.001.patch, HADOOP-14802.002.patch > > > This JIRA tracks adding support for using container saskey for all storage > access. > Instead of using saskeys that are specific to each blob, it is
[jira] [Commented] (HADOOP-14802) Add support for using container saskeys for all accesses
[ https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140421#comment-16140421 ] Steve Loughran commented on HADOOP-14802: - Let's see what Yetus does. It's a testable feature on its own though, which can be used to show that (a) it actually has an effect and that (b) nobody who edits the code in future breaks it. Create two blobs, see if they share the same sas key. Is it possible to do that? > Add support for using container saskeys for all accesses > > > Key: HADOOP-14802 > URL: https://issues.apache.org/jira/browse/HADOOP-14802 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Sivaguru Sankaridurg >Assignee: Sivaguru Sankaridurg > Labels: azure, fs, secure, wasb > Attachments: HADOOP-14802.001.patch, HADOOP-14802.002.patch > > > This JIRA tracks adding support for using container saskey for all storage > access. > Instead of using saskeys that are specific to each blob, it is possible to > re-use the container saskey for all blob accesses. > This provides a performance improvement over using blob-specific saskeys -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
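Steve's proposed check, that two blobs in the same container end up under one container-level SAS key when the feature is on, can be sketched with a toy model of the scope decision. The class and method names below are hypothetical, not the WASB implementation:

```java
/**
 * Toy model of the container-vs-blob SAS key scope choice.
 * Illustrative sketch only; names are hypothetical, not hadoop-azure code.
 */
public class SasKeyScopeSketch {
    private final boolean useContainerSasKey;

    public SasKeyScopeSketch(boolean useContainerSasKey) {
        this.useContainerSasKey = useContainerSasKey;
    }

    /** The scope a SAS key would be requested for: the whole container, or one blob. */
    public String sasScopeFor(String container, String blob) {
        return useContainerSasKey ? container : container + "/" + blob;
    }

    public static void main(String[] args) {
        SasKeyScopeSketch containerMode = new SasKeyScopeSketch(true);
        // Two blobs in one container share a single SAS scope in container mode.
        System.out.println(containerMode.sasScopeFor("c", "blob1")
            .equals(containerMode.sasScopeFor("c", "blob2")));
        // prints true
    }
}
```

A regression test in this spirit would assert the shared scope with the flag on and distinct scopes with it off, so a future edit that quietly reverts to blob-specific keys would fail the test.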
[jira] [Commented] (HADOOP-14805) Upgrade to zstd 1.3.1
[ https://issues.apache.org/jira/browse/HADOOP-14805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140409#comment-16140409 ] Andrew Wang commented on HADOOP-14805: -- If there's no direct version dependency, then I guess this JIRA is for testing the new version then bundling in the binary release. > Upgrade to zstd 1.3.1 > - > > Key: HADOOP-14805 > URL: https://issues.apache.org/jira/browse/HADOOP-14805 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.9.0, 3.0.0-alpha2 >Reporter: Andrew Wang > > zstandard 1.3.1 has been dual licensed under GPL and BSD. This clears up the > concerns regarding the Facebook-specific PATENTS file. If we upgrade to > 1.3.1, we can bundle zstd with binary distributions of Hadoop. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14805) Upgrade to zstd 1.3.1
[ https://issues.apache.org/jira/browse/HADOOP-14805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140385#comment-16140385 ] Jason Lowe commented on HADOOP-14805: - I do not believe Hadoop has a dependency on a specific version of zstd. It's up to the admin/user to install whatever zstd version they think is appropriate when building the native bits for Hadoop. Is there an incompatibility introduced in 1.3.x that requires a change on our part or is this also proposing to bundle zstd with the Hadoop release? > Upgrade to zstd 1.3.1 > - > > Key: HADOOP-14805 > URL: https://issues.apache.org/jira/browse/HADOOP-14805 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.9.0, 3.0.0-alpha2 >Reporter: Andrew Wang > > zstandard 1.3.1 has been dual licensed under GPL and BSD. This clears up the > concerns regarding the Facebook-specific PATENTS file. If we upgrade to > 1.3.1, we can bundle zstd with binary distributions of Hadoop. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14805) Upgrade to zstd 1.3.1
[ https://issues.apache.org/jira/browse/HADOOP-14805?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-14805: - Description: zstandard 1.3.1 has been dual licensed under GPL and BSD. This clears up the concerns regarding the Facebook-specific PATENTS file. If we upgrade to 1.3.1, we can bundle zstd with binary distributions of Hadoop. > Upgrade to zstd 1.3.1 > - > > Key: HADOOP-14805 > URL: https://issues.apache.org/jira/browse/HADOOP-14805 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.9.0, 3.0.0-alpha2 >Reporter: Andrew Wang > > zstandard 1.3.1 has been dual licensed under GPL and BSD. This clears up the > concerns regarding the Facebook-specific PATENTS file. If we upgrade to > 1.3.1, we can bundle zstd with binary distributions of Hadoop. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-14805) Upgrade to zstd 1.3.1
Andrew Wang created HADOOP-14805: Summary: Upgrade to zstd 1.3.1 Key: HADOOP-14805 URL: https://issues.apache.org/jira/browse/HADOOP-14805 Project: Hadoop Common Issue Type: Improvement Affects Versions: 3.0.0-alpha2, 2.9.0 Reporter: Andrew Wang -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14797) Update re2j version to 1.1
[ https://issues.apache.org/jira/browse/HADOOP-14797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140368#comment-16140368 ] Hadoop QA commented on HADOOP-14797: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 
9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-14797 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12883561/HADOOP-14797.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 65a18da26c6a 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8196a07 | | Default Java | 1.8.0_144 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13106/testReport/ | | modules | C: hadoop-project U: hadoop-project | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13106/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> Update re2j version to 1.1 > -- > > Key: HADOOP-14797 > URL: https://issues.apache.org/jira/browse/HADOOP-14797 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14797.001.patch > > > Update the dependency > com.google.re2j:re2j:1.0 > to the latest (1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14802) Add support for using container saskeys for all accesses
[ https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140361#comment-16140361 ] Sivaguru Sankaridurg commented on HADOOP-14802: --- Hi Steve, I ran the all azure tests against azure west-us. There were no new tests because no new functionality was introduced. The code only changed how we got a reference to a CloudBlob -- the functionality after getting the reference, remained the same. In any case, I added some tests that flip the flags for using blob-specific sas-keys in the second patch. I also submitted patch.002 to yetus. Thanks Siva > Add support for using container saskeys for all accesses > > > Key: HADOOP-14802 > URL: https://issues.apache.org/jira/browse/HADOOP-14802 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Sivaguru Sankaridurg >Assignee: Sivaguru Sankaridurg > Labels: azure, fs, secure, wasb > Attachments: HADOOP-14802.001.patch, HADOOP-14802.002.patch > > > This JIRA tracks adding support for using container saskey for all storage > access. > Instead of using saskeys that are specific to each blob, it is possible to > re-use the container saskey for all blob accesses. > This provides a performance improvement over using blob-specific saskeys -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14802) Add support for using container saskeys for all accesses
[ https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sivaguru Sankaridurg updated HADOOP-14802: -- Status: Patch Available (was: Open) > Add support for using container saskeys for all accesses > > > Key: HADOOP-14802 > URL: https://issues.apache.org/jira/browse/HADOOP-14802 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Sivaguru Sankaridurg >Assignee: Sivaguru Sankaridurg > Labels: azure, fs, secure, wasb > Attachments: HADOOP-14802.001.patch, HADOOP-14802.002.patch > > > This JIRA tracks adding support for using container saskey for all storage > access. > Instead of using saskeys that are specific to each blob, it is possible to > re-use the container saskey for all blob accesses. > This provides a performance improvement over using blob-specific saskeys -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14802) Add support for using container saskeys for all accesses
[ https://issues.apache.org/jira/browse/HADOOP-14802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sivaguru Sankaridurg updated HADOOP-14802: -- Attachment: HADOOP-14802.002.patch > Add support for using container saskeys for all accesses > > > Key: HADOOP-14802 > URL: https://issues.apache.org/jira/browse/HADOOP-14802 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Sivaguru Sankaridurg >Assignee: Sivaguru Sankaridurg > Labels: azure, fs, secure, wasb > Attachments: HADOOP-14802.001.patch, HADOOP-14802.002.patch > > > This JIRA tracks adding support for using container saskey for all storage > access. > Instead of using saskeys that are specific to each blob, it is possible to > re-use the container saskey for all blob accesses. > This provides a performance improvement over using blob-specific saskeys -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14797) Update re2j version to 1.1
[ https://issues.apache.org/jira/browse/HADOOP-14797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14797: Attachment: HADOOP-14797.001.patch > Update re2j version to 1.1 > -- > > Key: HADOOP-14797 > URL: https://issues.apache.org/jira/browse/HADOOP-14797 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang > Attachments: HADOOP-14797.001.patch > > > Update the dependency > com.google.re2j:re2j:1.0 > to the latest (1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14797) Update re2j version to 1.1
[ https://issues.apache.org/jira/browse/HADOOP-14797?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ray Chiang updated HADOOP-14797: Assignee: Ray Chiang Status: Patch Available (was: Open) > Update re2j version to 1.1 > -- > > Key: HADOOP-14797 > URL: https://issues.apache.org/jira/browse/HADOOP-14797 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Ray Chiang >Assignee: Ray Chiang > Attachments: HADOOP-14797.001.patch > > > Update the dependency > com.google.re2j:re2j:1.0 > to the latest (1.1). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14786) HTTP default servlets do not require authentication when kerberos is enabled
[ https://issues.apache.org/jira/browse/HADOOP-14786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140275#comment-16140275 ] Brahma Reddy Battula commented on HADOOP-14786: --- bq.{noformat} kdestroy curl --negotiate -u: -k -sS 'https://:9871/jmx' {noformat} bq. Expect curl to fail, but it returns JMX anyway. When {{hadoop.security.instrumentation.requires.admin}} is configured as {{true}}, is the behavior the same? I think we can ensure that only an admin can access it when this is enabled. Did I miss anything? {code} // If user is a static user and auth Type is null, that means // there is a non-security environment and no need authorization, // otherwise, do the authorization. final ServletContext servletContext = getServletContext(); if (!HttpServer2.isStaticUserAndNoneAuthType(servletContext, request) && !isInstrumentationAccessAllowed(request, response)) { return; } {code} > HTTP default servlets do not require authentication when kerberos is enabled > > > Key: HADOOP-14786 > URL: https://issues.apache.org/jira/browse/HADOOP-14786 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.8.0 >Reporter: John Zhuge >Assignee: John Zhuge > > The default HttpServer2 servlet /jmx, /conf, /logLevel, and /stack do not > require authentication when Kerberos is enabled. 
> {code:java|title=HttpServer2#addDefaultServlets} > // set up default servlets > addServlet("stacks", "/stacks", StackServlet.class); > addServlet("logLevel", "/logLevel", LogLevel.Servlet.class); > addServlet("jmx", "/jmx", JMXJsonServlet.class); > addServlet("conf", "/conf", ConfServlet.class); > {code} > {code:java|title=HttpServer2#addServlet} > public void addServlet(String name, String pathSpec, >Class clazz) { > addInternalServlet(name, pathSpec, clazz, false); > addFilterPathMapping(pathSpec, webAppContext); > {code} > {code:java|title=Httpserver2#addInternalServlet} > addInternalServlet(…, bool requireAuth) > … > if(requireAuth && UserGroupInformation.isSecurityEnabled()) { > LOG.info("Adding Kerberos (SPNEGO) filter to " + name); > {code} > {{requireAuth}} is {{false}} for the default servlets inside > {{addInternalServlet}}. > The issue can be verified by running the following curl command against > NameNode web address when Kerberos is enabled: > {noformat} > kdestroy > curl --negotiate -u: -k -sS 'https://:9871/jmx' > {noformat} > Expect curl to fail, but it returns JMX anyway. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
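The report above hinges on the {{requireAuth}} flag: a servlet registered with {{requireAuth}} set to {{false}} never gets the SPNEGO filter, even with Kerberos on. A minimal model of that decision (illustrative sketch, not the HttpServer2 source):

```java
/**
 * Minimal model of the requireAuth decision described for
 * HttpServer2#addInternalServlet. Illustrative sketch only.
 */
public class ServletAuthSketch {
    /** The SPNEGO filter is added only when BOTH flags are true. */
    public static boolean spnegoFilterAdded(boolean requireAuth, boolean securityEnabled) {
        return requireAuth && securityEnabled;
    }

    public static void main(String[] args) {
        // Default servlets (/jmx, /conf, /logLevel, /stacks) are registered
        // with requireAuth=false, so enabling Kerberos alone does not protect them.
        System.out.println(spnegoFilterAdded(false, true));
        // prints false
    }
}
```

This is why the unauthenticated curl against /jmx succeeds: the security flag is true, but the per-servlet flag short-circuits the filter.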
[jira] [Commented] (HADOOP-14284) Shade Guava everywhere
[ https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140260#comment-16140260 ] Daniel Templeton commented on HADOOP-14284:
---
I don't like the idea of having some modules use relocated package names and some use the original package names. I guarantee the build will be broken more than once by careless commits. However, the goal for Hadoop 3.0 is to decouple clients from our internal dependencies, and shading only HDFS and the YARN clients accomplishes that goal. In the name of moving this forward, I'm willing to go with shading only the client-consumed artifacts. We can always come back and shade the rest in the future if we get tired of breaking the build. A campaign to get HDFS clients to stop using the server JARs is a grand idea, but it doesn't absolve us from shading the server JARs, because there will be hold-outs.
> Shade Guava everywhere
> ---
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
> Issue Type: Bug
> Components: build
> Affects Versions: 3.0.0-alpha4
> Reporter: Andrew Wang
> Assignee: Tsuyoshi Ozawa
> Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, HADOOP-14284.012.patch
>
> HADOOP-10101 upgraded the Guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts.
> Unfortunately, these projects also consume our private artifacts like
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client introduced
> by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams.
> This isn't a requirement for all dependency upgrades, but it's necessary for
> known-bad dependencies like Guava.
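"Relocated package names" here means rewriting Guava's packages inside the published jar so downstream applications can use whatever Guava version they like. A minimal sketch of what that looks like with the maven-shade-plugin (the shaded package prefix is an assumption for illustration, not taken from Hadoop's actual poms):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <!-- rewrite com.google.common.* into a private namespace so the
           shaded artifact no longer clashes with the consumer's Guava -->
      <relocation>
        <pattern>com.google.common</pattern>
        <shadedPattern>org.apache.hadoop.shaded.com.google.common</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
```

The maintenance hazard Daniel describes follows directly: code in a shaded module must import the relocated name, code in an unshaded module the original, and a careless copy-paste across that boundary fails only at build (or worse, run) time.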
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140128#comment-16140128 ] Steve Loughran commented on HADOOP-14444:
---
# called people's attention to the issue on hadoop-common, hopefully you'll get build reviews
# w.r.t. the test failure, you can look at other recent JIRAs and jenkins to see if it's a new failure (or an intermittent one like TestKDiag), and just say "I believe this failure is unrelated"
> New implementation of ftp and sftp filesystems
> ---
>
> Key: HADOOP-14444
> URL: https://issues.apache.org/jira/browse/HADOOP-14444
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs
> Affects Versions: 2.8.0
> Reporter: Lukas Waldmann
> Assignee: Lukas Waldmann
> Attachments: HADOOP-14444.10.patch, HADOOP-14444.2.patch, HADOOP-14444.3.patch, HADOOP-14444.4.patch, HADOOP-14444.5.patch, HADOOP-14444.6.patch, HADOOP-14444.7.patch, HADOOP-14444.8.patch, HADOOP-14444.9.patch, HADOOP-14444.patch
>
> The current implementation of the FTP and SFTP filesystems has severe limitations
> and performance issues when dealing with a high number of files. My patch
> solves those issues and integrates both filesystems in such a way that most of the
> core functionality is common to both, simplifying maintainability.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for explicit FTPS (SSL/TLS)
> * Support for connection pooling - a new connection is not created for every
> single command but reused from the pool.
> For a huge number of files this shows an order of magnitude performance improvement
> over non-pooled connections.
> * Caching of directory trees. For FTP you always need to list the whole directory
> whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order of magnitude performance
> improvement over non-cached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing
> particular files across the whole directory tree
> * Support for reestablishing broken FTP data transfers - which can happen
> surprisingly often
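The connection-pooling claim above is easy to see in miniature. A toy sketch (assumed names, nothing from the actual patch): connections are parked in a deque on release and handed back out on the next borrow, so N sequential commands need one real connect instead of N.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy connection pool illustrating the reuse described above; Conn stands in
// for a real FTP channel, and `opened` counts actual connection setups.
public class FtpPoolSketch {
    static int opened = 0;
    static class Conn { Conn() { opened++; } }

    private final Deque<Conn> idle = new ArrayDeque<>();

    Conn borrow() {                        // reuse an idle connection if any
        return idle.isEmpty() ? new Conn() : idle.pop();
    }

    void release(Conn c) {                 // park instead of disconnecting
        idle.push(c);
    }

    public static void main(String[] args) {
        FtpPoolSketch pool = new FtpPoolSketch();
        for (int i = 0; i < 100; i++) {    // 100 sequential commands...
            Conn c = pool.borrow();
            pool.release(c);
        }
        System.out.println(opened);        // ...only one real connect
    }
}
```

A production pool would also need eviction of stale connections and the keep-alive (NOOP) pings listed above, since FTP servers drop idle control connections.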
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16140036#comment-16140036 ] Lukas Waldmann commented on HADOOP-14444:
---
What should actually be done when the build fails outside the scope of the change?
> New implementation of ftp and sftp filesystems
> ---
>
> Key: HADOOP-14444
> URL: https://issues.apache.org/jira/browse/HADOOP-14444
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs
> Affects Versions: 2.8.0
> Reporter: Lukas Waldmann
> Assignee: Lukas Waldmann
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-1?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139922#comment-16139922 ] Hadoop QA commented on HADOOP-1: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 31 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 19s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-tools {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | 
{color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 25s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 6s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-tools {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 41s{color} | {color:green} hadoop-ftp in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 49m 12s{color} | {color:red} hadoop-tools in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 49s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}134m 55s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.sls.appmaster.TestAMSimulator | | | hadoop.yarn.sls.nodemanager.TestNMSimulator | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HADOOP-1 | | JIRA Patch URL |
[jira] [Updated] (HADOOP-14803) Upgrade JUnit 3 TestCase to JUnit 4 in TestS3NInMemoryFileSystem
[ https://issues.apache.org/jira/browse/HADOOP-14803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14803:
Component/s: fs/s3
> Upgrade JUnit 3 TestCase to JUnit 4 in TestS3NInMemoryFileSystem
> ---
>
> Key: HADOOP-14803
> URL: https://issues.apache.org/jira/browse/HADOOP-14803
> Project: Hadoop Common
> Issue Type: Test
> Components: fs/s3
> Reporter: Ajay Kumar
>
> Upgrade JUnit 3 TestCase to JUnit 4 in TestS3NInMemoryFileSystem
[jira] [Commented] (HADOOP-14801) s3guard diff demand creates a new table
[ https://issues.apache.org/jira/browse/HADOOP-14801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139885#comment-16139885 ] Steve Loughran commented on HADOOP-14801:
---
I'll be dealing with this in my bigger HADOOP-14220 patch
> s3guard diff demand creates a new table
> ---
>
> Key: HADOOP-14801
> URL: https://issues.apache.org/jira/browse/HADOOP-14801
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: HADOOP-13345
> Reporter: Steve Loughran
> Priority: Minor
>
> If you call {{s3guard diff}} to diff a bucket and a table, it creates the
> table if it is not already there. I don't see that as being the right thing to do.
> {code}
> hadoop s3guard diff $bucket
> 2017-08-22 15:14:47,025 INFO s3guard.DynamoDBMetadataStore: Creating
> non-existent DynamoDB table hwdev-steve-ireland-new in region eu-west-1
> 2017-08-22 15:14:52,384 INFO s3guard.S3GuardTool: Metadata store
> DynamoDBMetadataStore{region=eu-west-1, tableName=hwdev-steve-ireland-new} is
> initialized.
> {code}
[jira] [Commented] (HADOOP-14729) Upgrade JUnit 3 TestCase to JUnit 4
[ https://issues.apache.org/jira/browse/HADOOP-14729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139880#comment-16139880 ] Steve Loughran commented on HADOOP-14729:
---
Thanks. FWIW I wasn't worried about S3n, as I don't worry about S3n...we're thinking there about "how to remove it completely"
> Upgrade JUnit 3 TestCase to JUnit 4
> ---
>
> Key: HADOOP-14729
> URL: https://issues.apache.org/jira/browse/HADOOP-14729
> Project: Hadoop Common
> Issue Type: Test
> Reporter: Akira Ajisaka
> Assignee: Ajay Kumar
> Labels: newbie
> Attachments: HADOOP-14729.001.patch, HADOOP-14729.002.patch, HADOOP-14729.003.patch, HADOOP-14729.004.patch, HADOOP-14729.005.patch, HADOOP-14729.006.patch, HADOOP-14729.007.patch, HADOOP-14729.008.patch, HADOOP-14729.009.patch, HADOOP-14729.010.patch, HADOOP-14729.011.patch, HADOOP-14729.012.patch
>
> There are still test classes that extend from junit.framework.TestCase in
> hadoop-common. Upgrade them to JUnit4.
[jira] [Updated] (HADOOP-14220) Add S3GuardTool bucket-info command
[ https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14220:
Status: Patch Available (was: Open)
> Add S3GuardTool bucket-info command
> ---
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: HADOOP-13345
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Attachments: HADOOP-14220-HADOOP-13345-001.patch, HADOOP-14220-HADOOP-13345-002.patch
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose
> problems for a specific (named) s3a url. This is something which can be
> attached to bug reports as well as used by developers.
> * Properties to log (with provenance attribute, which can track bucket
> overrides): s3guard metastore setup, autocreate, capacity
> * table present/absent
> * # of keys in DDB table for that bucket?
> * any other stats?
[jira] [Resolved] (HADOOP-14007) cli to list info about a bucket (S3guard or not)
[ https://issues.apache.org/jira/browse/HADOOP-14007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-14007.
Resolution: Duplicate
HADOOP-14220 covers the same issue; although it's newer, it has a patch in progress.
> cli to list info about a bucket (S3guard or not)
> ---
>
> Key: HADOOP-14007
> URL: https://issues.apache.org/jira/browse/HADOOP-14007
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: HADOOP-13345
> Reporter: Steve Loughran
>
> It turns out to be useful during dev to find things out about a bucket:
> * whether it uses s3guard
> * location of table
> * table size
> * version & create timestamp
> * maybe: which auth mechanism worked (though we may need to add some more
> tracking in our providers there, and tread carefully w.r.t. security)
> this could be added to some s3 cli, e.g. "hadoop s3 info s3a://bucket"
[jira] [Updated] (HADOOP-14220) Add S3GuardTool bucket-info command
[ https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-14220:
Status: Open (was: Patch Available)
> Add S3GuardTool bucket-info command
> ---
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: HADOOP-13345
> Reporter: Steve Loughran
> Assignee: Steve Loughran
> Attachments: HADOOP-14220-HADOOP-13345-001.patch, HADOOP-14220-HADOOP-13345-002.patch
[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16139786#comment-16139786 ] Lukas Waldmann commented on HADOOP-14444:
---
Steve, I understand the need for testing from the community. I just wonder if the community is aware of this code :)
> New implementation of ftp and sftp filesystems
> ---
>
> Key: HADOOP-14444
> URL: https://issues.apache.org/jira/browse/HADOOP-14444
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs
> Affects Versions: 2.8.0
> Reporter: Lukas Waldmann
> Assignee: Lukas Waldmann
[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Waldmann updated HADOOP-14444:
Attachment: HADOOP-14444.10.patch
Whitespace and javadoc changes
> New implementation of ftp and sftp filesystems
> ---
>
> Key: HADOOP-14444
> URL: https://issues.apache.org/jira/browse/HADOOP-14444
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs
> Affects Versions: 2.8.0
> Reporter: Lukas Waldmann
> Assignee: Lukas Waldmann
[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Waldmann updated HADOOP-14444:
Status: Patch Available (was: In Progress)
> New implementation of ftp and sftp filesystems
> ---
>
> Key: HADOOP-14444
> URL: https://issues.apache.org/jira/browse/HADOOP-14444
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs
> Affects Versions: 2.8.0
> Reporter: Lukas Waldmann
> Assignee: Lukas Waldmann
[jira] [Updated] (HADOOP-14444) New implementation of ftp and sftp filesystems
[ https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lukas Waldmann updated HADOOP-14444:
Status: In Progress (was: Patch Available)
> New implementation of ftp and sftp filesystems
> ---
>
> Key: HADOOP-14444
> URL: https://issues.apache.org/jira/browse/HADOOP-14444
> Project: Hadoop Common
> Issue Type: New Feature
> Components: fs
> Affects Versions: 2.8.0
> Reporter: Lukas Waldmann
> Assignee: Lukas Waldmann
[jira] [Updated] (HADOOP-14804) correct wrong parameters format order in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-14804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen Hongfei updated HADOOP-14804:
--
Attachment: HADOOP-14804.001.patch
> correct wrong parameters format order in core-default.xml
> ---
>
> Key: HADOOP-14804
> URL: https://issues.apache.org/jira/browse/HADOOP-14804
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.0.0-alpha3
> Reporter: Chen Hongfei
> Priority: Trivial
> Fix For: 3.0.0-alpha3
> Attachments: HADOOP-14804.001.patch
>
> The descriptions of the "HTTP CORS" parameters appear before the names:
>
> <description>Comma separated list of headers that are allowed for web
> services needing cross-origin (CORS) support.</description>
> <name>hadoop.http.cross-origin.allowed-headers</name>
> <value>X-Requested-With,Content-Type,Accept,Origin</value>
>
> ..
> but they should follow the value, as in the other properties.
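For clarity, here is the reordering being proposed, reconstructed from the description above (the property shown is the one cited; the surrounding {{<property>}} wrapper is the standard core-default.xml form):

```xml
<!-- before: description precedes name and value (as reported) -->
<property>
  <description>Comma separated list of headers that are allowed for web
  services needing cross-origin (CORS) support.</description>
  <name>hadoop.http.cross-origin.allowed-headers</name>
  <value>X-Requested-With,Content-Type,Accept,Origin</value>
</property>

<!-- after: name, value, then description, matching the rest of the file -->
<property>
  <name>hadoop.http.cross-origin.allowed-headers</name>
  <value>X-Requested-With,Content-Type,Accept,Origin</value>
  <description>Comma separated list of headers that are allowed for web
  services needing cross-origin (CORS) support.</description>
</property>
```

The change is cosmetic (Hadoop's Configuration parser does not depend on child-element order), which is consistent with the Trivial priority.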
[jira] [Created] (HADOOP-14804) correct wrong parameters format order in core-default.xml
Chen Hongfei created HADOOP-14804:
---
Summary: correct wrong parameters format order in core-default.xml
Key: HADOOP-14804
URL: https://issues.apache.org/jira/browse/HADOOP-14804
Project: Hadoop Common
Issue Type: Improvement
Affects Versions: 3.0.0-alpha3
Reporter: Chen Hongfei
Priority: Trivial
Fix For: 3.0.0-alpha3

The descriptions of the "HTTP CORS" parameters appear before the names:

<description>Comma separated list of headers that are allowed for web services needing cross-origin (CORS) support.</description>
<name>hadoop.http.cross-origin.allowed-headers</name>
<value>X-Requested-With,Content-Type,Accept,Origin</value>
..
but they should follow the value, as in the other properties.