I dug through the code a bit. The exception below is indeed due to a difference
in DistributedFileSystem.listStatus() between 0.20.205 and 0.22:

11/11/05 19:08:48 ERROR handler.CreateTableHandler: Error trying to create the table b
java.io.FileNotFoundException: File hdfs://ip-10-110-254-200.ec2.internal:17020/hbase/b does not exist.
        at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:387)

In 0.20.205:
  public FileStatus[] listStatus(Path p) throws IOException {
    String src = getPathName(p);

    // fetch the first batch of entries in the directory
    DirectoryListing thisListing = dfs.listPaths(
        src, HdfsFileStatus.EMPTY_NAME);

    if (thisListing == null) { // the directory does not exist
      return null;
    }

In 0.22:
  @Override
  public FileStatus[] listStatus(Path p) throws IOException {
    String src = getPathName(p);

    // fetch the first batch of entries in the directory
    DirectoryListing thisListing = dfs.listPaths(
        src, HdfsFileStatus.EMPTY_NAME);

    if (thisListing == null) { // the directory does not exist
      throw new FileNotFoundException("File " + p + " does not exist.");
    }
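
To make the contract change concrete, here is a minimal sketch (class name and
path are made up for illustration; it assumes fs.default.name points at an HDFS
cluster, so that fs is a DistributedFileSystem):

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class ListStatusContractDemo {
    public static void main(String[] args) throws IOException {
      FileSystem fs = FileSystem.get(new Configuration());
      // Against 0.20.205 HDFS this prints "missing"; against 0.22 the same
      // call throws java.io.FileNotFoundException before we get here.
      FileStatus[] statuses = fs.listStatus(new Path("/no/such/dir"));
      System.out.println(statuses == null ? "missing" : "entries: " + statuses.length);
    }
  }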

So in FSTableDescriptors.getTableInfoPath(), we should catch
FileNotFoundException and treat it the same way as a null status.
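
One way to do that, as a rough sketch (the actual code and signatures in
FSTableDescriptors may differ; this assumes the usual org.apache.hadoop.fs
imports plus java.io.FileNotFoundException):

  // Hypothetical helper: normalize both behaviors to the 0.20.205 one,
  // i.e. return null when the directory does not exist.
  private static FileStatus[] listStatusOrNull(FileSystem fs, Path dir)
      throws IOException {
    try {
      return fs.listStatus(dir);
    } catch (FileNotFoundException fnfe) {
      // Hadoop 0.22 throws here instead of returning null
      return null;
    }
  }

getTableInfoPath() could then call this helper and keep its existing
"status == null" handling unchanged.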

Cheers

On Sun, Nov 6, 2011 at 6:38 PM, Roman Shaposhnik <[email protected]> wrote:

> On Sun, Nov 6, 2011 at 4:12 PM, Roman Shaposhnik <[email protected]> wrote:
> > Odd indeed. Whatever it was is now gone when I build from this SHA:
> >    61b5659bf7971cfac32f3cf4fca0d3823b4c8f8c
> > However, I can still reproduce it when I build from the previous SHA:
> >    454a75d2eb122b198140a778d00d6e1bc086517e
> >
> > I think since it got fixed, it is probably not really worth pursuing.
>
> Here's the final deal -- this is Hadoop 0.22 related. I can reliably
> reproduce it if I enable the .22 profile. Here's how:
>   $ git pull ; git checkout remotes/origin/0.92
>   $ mvn clean assembly:assembly -DskipTests -Dhadoop.profile=22
>   $ tar xzvf target/hbase-0.92.0-SNAPSHOT.tar.gz -C /tmp/22
>   $ rm -rf /tmp/hbase*
>   $ /tmp/22/hbase-0.92.0-SNAPSHOT/hbase-daemon.sh start master
>   $ /tmp/22/hbase-0.92.0-SNAPSHOT/hbase shell
>   11/11/06 18:13:50 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
>   11/11/06 18:13:50 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
>   11/11/06 18:13:50 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
>   11/11/06 18:13:50 WARN conf.Configuration: hadoop.native.lib is deprecated. Instead, use io.native.lib.available
>   HBase Shell; enter 'help<RETURN>' for list of supported commands.
>   Type "exit<RETURN>" to leave the HBase Shell
>   Version 0.92.0-SNAPSHOT, r61b5659bf7971cfac32f3cf4fca0d3823b4c8f8c, Sun Nov  6 18:02:26 PST 2011
>
>   hbase(main):001:0> create 't', 'f'
>
> And it hangs.
>
> I'm about to attend a social function in the next couple of hours and will
> probably dig further tomorrow at ApacheCON.
>
> Thanks,
> Roman.
>
