[ https://issues.apache.org/jira/browse/HADOOP-3173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12597695#action_12597695 ]

Chris Douglas commented on HADOOP-3173:
---------------------------------------

One more drawback: on Windows, if a path has backslashes in it, then Paths 
created with Path(String) cannot be globbed. This may not be a serious 
limitation: paths with backslashes are likely to come from environment 
variables, so real uses will go through Path(Path, String) and other variants.
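
For illustration only (none of this is in the attached patch): a rough sketch 
of the Windows case, assuming the glob matcher consumes '\' as an escape 
character. The local paths and class name are made up.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BackslashGlob {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());

    // A raw Windows path string handed to Path(String): the backslashes end
    // up inside the glob pattern, where '\' is read as an escape, so this is
    // unlikely to match the files actually on disk.
    FileStatus[] broken = fs.globStatus(new Path("C:\\hadoop\\tmp\\part-*"));

    // The Path(Path, String) case mentioned above: the environment-derived
    // prefix is supplied as the parent, and the glob component itself stays
    // backslash-free.
    Path parent = new Path("C:\\hadoop\\tmp");
    FileStatus[] ok = fs.globStatus(new Path(parent, "part-*"));

    System.out.println((broken == null ? 0 : broken.length) + " vs. "
        + (ok == null ? 0 : ok.length) + " matches");
  }
}
{code}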

bq. Why don't we declare not to allow those special characters?

We could. The namenode would have to check for special characters when creating 
new nodes, and we'd also need to provide an upgrade path for users. In theory 
it's a pretty dramatic change to cut down the charset, but in practice I doubt 
many will notice.
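
If we went that route, the check itself would be small. A purely hypothetical 
sketch (the method name and the rejected character set are guesses, and it 
says nothing about the upgrade path):

{code}
import java.io.IOException;

public class PathComponentCheck {
  // Hypothetical set of glob metacharacters to reserve at create time.
  private static final String RESERVED = "*?[]{}\\";

  static void checkComponent(String component) throws IOException {
    for (int i = 0; i < component.length(); i++) {
      char c = component.charAt(i);
      if (RESERVED.indexOf(c) >= 0) {
        throw new IOException("Reserved character '" + c
            + "' in path component: " + component);
      }
    }
  }

  public static void main(String[] args) throws IOException {
    checkComponent("bar");  // passes
    checkComponent("*");    // throws
  }
}
{code}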

> inconsistent globbing support for dfs commands
> ----------------------------------------------
>
>                 Key: HADOOP-3173
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3173
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>         Environment: Hadoop 0.16.1
>            Reporter: Rajiv Chittajallu
>             Fix For: 0.18.0
>
>         Attachments: 3173-0.patch
>
>
> hadoop dfs -mkdir /user/*/bar creates a directory "/user/*/bar", and you 
> can't delete /user/* because -rmr expands the glob.
> $ hadoop dfs -mkdir /user/rajive/a/*/foo
> $ hadoop dfs -ls /user/rajive/a
> Found 4 items
> /user/rajive/a/*      <dir>           2008-04-04 16:09        rwx------       rajive  users
> /user/rajive/a/b      <dir>           2008-04-04 16:08        rwx------       rajive  users
> /user/rajive/a/c      <dir>           2008-04-04 16:08        rwx------       rajive  users
> /user/rajive/a/d      <dir>           2008-04-04 16:08        rwx------       rajive  users
> $ hadoop dfs -ls /user/rajive/a/*
> /user/rajive/a/*/foo  <dir>           2008-04-04 16:09        rwx------       rajive  users
> $ hadoop dfs -rmr /user/rajive/a/*
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/*
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/b
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/c
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/d
> I am not able to escape the '*' to keep it from being expanded.
> $ hadoop dfs -rmr '/user/rajive/a/*'
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/*
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/b
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/c
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/d
> $ hadoop dfs -rmr  '/user/rajive/a/\*'
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/*
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/b
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/c
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/d
> $ hadoop dfs -rmr  /user/rajive/a/\* 
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/*
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/b
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/c
> Moved to trash: hdfs://namenode-1:8020/user/rajive/a/d
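
The quoting attempts above fail because the expansion does not happen in the 
local shell at all: the dfs shell passes the argument to the filesystem's own 
glob expansion, where a stored name of '*' is indistinguishable from a 
wildcard. A minimal sketch of that behavior, with the paths taken from the 
report and everything else assumed:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WhyQuotingFails {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    // Whatever quoting was used on the command line, the string that reaches
    // the client is "/user/rajive/a/*"; glob expansion treats '*' as a
    // wildcard, matching b, c, d as well as the directory literally named '*'.
    for (FileStatus st : fs.globStatus(new Path("/user/rajive/a/*"))) {
      System.out.println(st.getPath());
    }
  }
}
{code}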

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
