[jira] [Created] (HADOOP-11276) Allow setting url connection timeout in Configuration
Maysam Yabandeh created HADOOP-11276:
Summary: Allow setting url connection timeout in Configuration
Key: HADOOP-11276
URL: https://issues.apache.org/jira/browse/HADOOP-11276
Project: Hadoop Common
Issue Type: Improvement
Reporter: Maysam Yabandeh
Priority: Minor

Currently, for URL resources there is no way to control the HTTP connection opened by Configuration:

{code}
private Document parse(DocumentBuilder builder, URL url)
    throws IOException, SAXException {
  if (!quietmode) {
    LOG.debug("parsing URL " + url);
  }
  if (url == null) {
    return null;
  }
  return parse(builder, url.openStream(), url.toString());
}
{code}

If we let this method call a protected method that returns a stream object from a URL, then the application can override that method and apply application-specific connection settings to the URL, like:

{code}
URLConnection con = url.openConnection();
con.setConnectTimeout(connectTimeout);
InputStream in = con.getInputStream();
return in;
{code}

Our monitoring tool currently needs to retrieve the conf from many app masters; the default timeout of 60s is too long, and there is no way to configure it in the current implementation of Configuration. If there is any +1 for this change I can submit an initial patch.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
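The proposal above can be sketched end to end as follows. This is an illustrative sketch, not Hadoop's actual API: the helper name openWithTimeout and the local-file demonstration are assumptions made for the example; only the java.net.URLConnection calls (setConnectTimeout, setReadTimeout) are real JDK methods.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class TimeoutOpen {

    // Hypothetical hook mirroring the proposal: open the URL through a
    // URLConnection so callers can bound connect/read time, instead of the
    // bare url.openStream() used today. An overriding subclass would supply
    // its own timeout here.
    static InputStream openWithTimeout(URL url, int timeoutMs) throws IOException {
        URLConnection con = url.openConnection();
        con.setConnectTimeout(timeoutMs); // fail fast on unreachable hosts
        con.setReadTimeout(timeoutMs);    // also bound time spent reading
        return con.getInputStream();
    }

    public static void main(String[] args) throws IOException {
        // Use a local file: URL so the demo needs no network.
        Path tmp = Files.createTempFile("conf-demo", ".xml");
        Files.write(tmp, "<configuration/>".getBytes(StandardCharsets.UTF_8));
        try (BufferedReader in = new BufferedReader(new InputStreamReader(
                openWithTimeout(tmp.toUri().toURL(), 2000), StandardCharsets.UTF_8))) {
            System.out.println(in.readLine());
        } finally {
            Files.delete(tmp);
        }
    }
}
```

With a hook like this, a monitoring tool could subclass Configuration and override only the stream-opening step, leaving the XML parsing untouched.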
Re: Some information required related to Hadoop documentation development
Hi Naga,

> 1. what and how are these *.vm files (format) used in Hadoop [Hdfs and Yarn] documentation ?
> 2. Does vm files stand for velocity macro?

Velocity macro is used for variable substitution as far as I know. For example:

  ---
  YARN Timeline Server
  ---
  ---
  ${maven.build.timestamp}

Though many of the docs use APT format ( http://maven.apache.org/doxia/references/apt-format.html ), you can also use markdown. See hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/markdown for example.

> 5. Are these documentation pages built if -Pdocs parameters are provided during mvn install ? If so which Pom is responsible for building the html files ? Basically how to modify and test the generated html files?

Run "mvn site" to build the site docs. Running "mvn site site:stage -DstagingDirectory=/var/www/html/hadoop-site" puts the built docs in the specified path.

> 6. From earlier patches (ex : yarn 1696) i figured out that to add a new page (in yarn documentation) we can add a *.vm file in hadoop-yarn-project\hadoop-yarn\hadoop-yarn-site\src\site\apt and make an entry in hadoop-project\src\site\site.xml for the new page? Is that sufficient or anything else needs to be taken care?

I think it is sufficient.

Masatake Iwasaki

(11/6/14, 9:51), Naganarasimha G R (Naga) wrote:
> Hi All,
> I wanted to know the following related to Hadoop Documentation
> 1. what and how are these *.vm files (format) used in Hadoop [Hdfs and Yarn] documentation ?
> 2. Does vm files stand for velocity macro?
> 3. Do we use some kind of editor to create/update these files ?
> 4. Are there any guidelines for writing Hadoop/YARN documentation ?
> 5. Are these documentation pages built if -Pdocs parameters are provided during mvn install ? If so which Pom is responsible for building the html files ? Basically how to modify and test the generated html files?
> 6.
> From earlier patches (ex : yarn 1696) i figured out that to add a new page (in yarn documentation) we can add a *.vm file in hadoop-yarn-project\hadoop-yarn\hadoop-yarn-site\src\site\apt and make an entry in hadoop-project\src\site\site.xml for the new page? Is that sufficient or anything else needs to be taken care?
>
> Regards,
> Naga
>
> Huawei Technologies Co., Ltd.
> Mobile: +91 9980040283
> Email: naganarasimh...@huawei.com
> Bantian, Longgang District, Shenzhen 518129, P.R.China
> http://www.huawei.com
[jira] [Created] (HADOOP-11277) hdfs dfs test command returning 0 if true. instead of what should be returning 1 if true.
DeepakVohra created HADOOP-11277:
Summary: hdfs dfs test command returning 0 if true. instead of what should be returning 1 if true.
Key: HADOOP-11277
URL: https://issues.apache.org/jira/browse/HADOOP-11277
Project: Hadoop Common
Issue Type: Bug
Components: fs
Affects Versions: 2.5.0
Reporter: DeepakVohra

The CDH5 File System (FS) shell commands documentation has an error. The test command lists "returning 0 if true." for all options. It should be "returning 1 if true."
http://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-common/FileSystemShell.html#ls

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Some information required related to Hadoop documentation development
These are APT doc files, see this page (*.apt.vm): http://maven.apache.org/doxia/references/apt-format.html

On Thu, Nov 6, 2014 at 10:42 AM, Masatake Iwasaki <iwasak...@oss.nttdata.co.jp> wrote:
> [quoted text of the previous message in this thread snipped]
[jira] [Resolved] (HADOOP-11277) hdfs dfs test command returning 0 if true. instead of what should be returning 1 if true.
[ https://issues.apache.org/jira/browse/HADOOP-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stephen Chu resolved HADOOP-11277.
Resolution: Invalid

Returning 0 (success) if true is the correct behavior. This is the same behavior as the Linux test command. Resolving.

> hdfs dfs test command returning 0 if true. instead of what should be returning 1 if true.
> Key: HADOOP-11277
> URL: https://issues.apache.org/jira/browse/HADOOP-11277
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs
> Affects Versions: 2.5.0
> Reporter: DeepakVohra
>
> The CDH5 File System (FS) shell commands documentation has an error. The test command lists "returning 0 if true." for all options. It should be "returning 1 if true."
> http://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-project-dist/hadoop-common/FileSystemShell.html#ls

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11278) hadoop-daemon.sh command doesn't honor --config option
Brandon Li created HADOOP-11278:
Summary: hadoop-daemon.sh command doesn't honor --config option
Key: HADOOP-11278
URL: https://issues.apache.org/jira/browse/HADOOP-11278
Project: Hadoop Common
Issue Type: Bug
Components: bin
Reporter: Brandon Li

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HADOOP-10283) Make Scheduler and Multiplexer swappable
[ https://issues.apache.org/jira/browse/HADOOP-10283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Li resolved HADOOP-10283.
Resolution: Not a Problem

Resolved in HADOOP-10282

> Make Scheduler and Multiplexer swappable
> Key: HADOOP-10283
> URL: https://issues.apache.org/jira/browse/HADOOP-10283
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Chris Li
> Assignee: Chris Li
> Priority: Minor
>
> Currently the FairCallQueue uses the DecayRpcScheduler and RoundRobinMultiplexer; this task is to allow the user to configure the scheduler and mux in config settings.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HADOOP-10284) Add metrics to the HistoryRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-10284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Li resolved HADOOP-10284.
Resolution: Not a Problem
Assignee: Chris Li

Resolved in HADOOP-10281

> Add metrics to the HistoryRpcScheduler
> Key: HADOOP-10284
> URL: https://issues.apache.org/jira/browse/HADOOP-10284
> Project: Hadoop Common
> Issue Type: Sub-task
> Reporter: Chris Li
> Assignee: Chris Li

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HADOOP-11279) Update install defaults for Audit configuration
Madhan Neethiraj created HADOOP-11279:
Summary: Update install defaults for Audit configuration
Key: HADOOP-11279
URL: https://issues.apache.org/jira/browse/HADOOP-11279
Project: Hadoop Common
Issue Type: Bug
Reporter: Madhan Neethiraj

Update install.properties contents for the following:
- default values of properties to be updated for the recent renaming to Ranger (from Argus/XASecure) and changes in the conf directory location
- rearrange properties to keep groups of related properties together
- add notes to guide users on the properties to be updated

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Reopened] (HADOOP-11274) ConcurrentModificationException in Configuration Copy Constructor
[ https://issues.apache.org/jira/browse/HADOOP-11274?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod Kumar Vavilapalli reopened HADOOP-11274:

I don't know the behavior of locking "this" in a constructor, but assuming it works, I think there is a deadlock already.
Thread 1: Copy constructor takes the lock on object 2, and then calls setQuietly, which waits to lock itself.
Thread 2: Same, but with the locking in the reverse order.

> ConcurrentModificationException in Configuration Copy Constructor
> Key: HADOOP-11274
> URL: https://issues.apache.org/jira/browse/HADOOP-11274
> Project: Hadoop Common
> Issue Type: Bug
> Components: conf
> Reporter: Junping Du
> Assignee: Junping Du
> Priority: Blocker
> Fix For: 2.6.0
> Attachments: HADOOP-11274-v2.patch, HADOOP-11274.003.patch, HADOOP-11274.patch
>
> The exception below happens when doing configuration updates in parallel:
> {noformat}
> java.util.ConcurrentModificationException
>     at java.util.HashMap$HashIterator.nextEntry(HashMap.java:922)
>     at java.util.HashMap$EntryIterator.next(HashMap.java:962)
>     at java.util.HashMap$EntryIterator.next(HashMap.java:960)
>     at java.util.HashMap.putAllForCreate(HashMap.java:554)
>     at java.util.HashMap.init(HashMap.java:298)
>     at org.apache.hadoop.conf.Configuration.init(Configuration.java:703)
> {noformat}
> In the copy constructor of Configuration - public Configuration(Configuration other) - the copy of the updatingResource data structure is not synchronized properly. Configuration.get() eventually calls loadProperty(), where updatingResource gets updated. So what's happening here is that one thread is copying a Configuration, as demonstrated in the stack trace, while another thread is doing Configuration.get(key), and then a ConcurrentModificationException occurs because the copy of updatingResource is not synchronized in the constructor. We should make the update to updatingResource synchronized, and also fix the other small synchronization issues there.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)
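The race in the stack trace above can be reproduced deterministically in a single thread, since HashMap's fail-fast iterator throws the same exception whenever a structural modification lands mid-iteration. This is an illustrative sketch, not Hadoop's code; the map and keys here stand in for updatingResource.

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class CmeDemo {
    public static void main(String[] args) {
        // Stand-in for Configuration's updatingResource map.
        Map<String, String> props = new HashMap<>();
        for (int i = 0; i < 16; i++) {
            props.put("key" + i, "value");
        }
        try {
            // Iterating over the entries (as the copy constructor does) while
            // a put() of a new key lands mid-iteration trips the fail-fast
            // modCount check on the iterator's next call to next().
            for (Map.Entry<String, String> e : props.entrySet()) {
                props.put("extra", "value"); // structural modification during iteration
            }
        } catch (ConcurrentModificationException cme) {
            System.out.println("caught ConcurrentModificationException");
        }
    }
}
```

This is why synchronizing the copy against concurrent loadProperty() updates (or copying under the source Configuration's lock) removes the exception, while the lock ordering between the two Configuration objects still has to be consistent to avoid the deadlock Vinod describes.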
[jira] [Created] (HADOOP-11280) TestWinUtils#testChmod fails after removal of NO_PROPAGATE_INHERIT_ACE.
Chris Nauroth created HADOOP-11280:
Summary: TestWinUtils#testChmod fails after removal of NO_PROPAGATE_INHERIT_ACE.
Key: HADOOP-11280
URL: https://issues.apache.org/jira/browse/HADOOP-11280
Project: Hadoop Common
Issue Type: Bug
Components: native
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Trivial

As part of the Windows YARN secure container executor changes in YARN-2198, {{chmod}} calls no longer use the {{NO_PROPAGATE_INHERIT_ACE}} flag. This change in behavior violates one of the assertions in {{TestWinUtils#testChmod}}, so we need to update the test.

-- This message was sent by Atlassian JIRA (v6.3.4#6332)