Rick:

I'd suggest you also hang this question out on the hadoop-user mailing list; the fellas who know permissions are more likely to see it there. (Thanks for digging in on this one.)

St.Ack


Rick Hangartner wrote:
Hi, we think we've narrowed the issue down a bit from the debug logs.

The method "FSNameSystem.checkPermission()" method is throwing the exception because the "PermissionChecker()" constructor is returning that the hbase user is not a superuser or in the same supergroup as hadoop.

  private void checkSuperuserPrivilege() throws AccessControlException {
    if (isPermissionEnabled) {
      PermissionChecker pc = new PermissionChecker(
          fsOwner.getUserName(), supergroup);
      if (!pc.isSuper) {
        throw new AccessControlException("Superuser privilege is required");
      }
    }
  }

If we look at the "PermissionChecker()" constructor, we see that it compares the hdfs owner name (which should be "hadoop") and the hdfs filesystem owner's group ("supergroup") against the current user and groups; the log seems to indicate that the user is "hbase" and that the groups for user "hbase" include only "hbase":

  PermissionChecker(String fsOwner, String supergroup
      ) throws AccessControlException{
    UserGroupInformation ugi = UserGroupInformation.getCurrentUGI();
    if (LOG.isDebugEnabled()) {
      LOG.debug("ugi=" + ugi);
    }

    if (ugi != null) {
      user = ugi.getUserName();
      groups.addAll(Arrays.asList(ugi.getGroupNames()));
      isSuper = user.equals(fsOwner) || groups.contains(supergroup);
    }
    else {
      throw new AccessControlException("ugi = null");
    }
  }
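
To make that last comparison concrete, here's a tiny standalone sketch (our own, not Hadoop code) with the values we believe are in play; the "hadoop" and "supergroup" values are our assumption about how this cluster is configured, and the "hbase" values are what the debug log shows:

  import java.util.Arrays;
  import java.util.HashSet;
  import java.util.Set;

  public class IsSuperSketch {
    public static void main(String[] args) {
      // What the namenode compares against (assumed for this cluster).
      String fsOwner = "hadoop";
      String supergroup = "supergroup";

      // What the debug log reports for the calling client.
      String user = "hbase";
      Set<String> groups = new HashSet<String>(Arrays.asList("hbase"));

      // Same test as in the PermissionChecker constructor above.
      boolean isSuper = user.equals(fsOwner) || groups.contains(supergroup);
      System.out.println("isSuper=" + isSuper);  // prints isSuper=false, hence the exception
    }
  }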

The current user and groups are derived from thread-local information:

  private static final ThreadLocal<UserGroupInformation> currentUGI
    = new ThreadLocal<UserGroupInformation>();

  /** @return the {@link UserGroupInformation} for the current thread */
  public static UserGroupInformation getCurrentUGI() {
    return currentUGI.get();
  }

which we're hoping might be enough to illuminate the problem.

One question this raises is whether the "hbase:hbase" user and group are derived from the Linux file system user and group, or whether they are the hdfs user and group.
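
If it's the former, our assumption is that the client just picks up whatever the Unix account reports, i.e. roughly the output of "whoami" and "groups". A quick throwaway probe (our own sketch, not Hadoop code) to see what the hbase account reports:

  import java.io.BufferedReader;
  import java.io.IOException;
  import java.io.InputStreamReader;

  public class UnixIdentityProbe {
    // Run a command and return the first line of its output.
    private static String firstLine(String... cmd) throws IOException {
      Process p = new ProcessBuilder(cmd).start();
      BufferedReader r =
          new BufferedReader(new InputStreamReader(p.getInputStream()));
      try {
        return r.readLine();
      } finally {
        r.close();
      }
    }

    public static void main(String[] args) throws IOException {
      // Run as the hbase user, we expect "hbase" for both lines,
      // with no "supergroup" in the group list.
      System.out.println("user   = " + firstLine("whoami"));
      System.out.println("groups = " + firstLine("groups"));
    }
  }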

Otherwise, how can we indicate that the "hbase" user is in the hdfs group "supergroup"? Is there a parameter in a hadoop configuration file? Setting the groups of the web server user to include "supergroup" apparently had no effect, although perhaps that was for some other reason.
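
If we're reading hadoop-default.xml correctly, the superuser group name itself appears to come from "dfs.permissions.supergroup" (default "supergroup"); please correct us if that's the wrong knob. Here is a quick sketch of how we'd check what value the client-side configuration resolves to:

  import org.apache.hadoop.conf.Configuration;

  public class SupergroupCheck {
    public static void main(String[] args) {
      // new Configuration() picks up hadoop-default.xml and hadoop-site.xml
      // from the classpath.
      Configuration conf = new Configuration();
      // The group the namenode treats as the superuser group; fall back to
      // "supergroup" if the key isn't set anywhere.
      System.out.println(conf.get("dfs.permissions.supergroup", "supergroup"));
    }
  }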

Thanks very much for any insights. Incidentally, we are now running hbase-0.1.2.
Rick


On May 7, 2008, at 1:20 PM, stack wrote:

Rick Hangartner wrote:
1. By "hbase rootdir", you mean "/hbase" and not a "/user/hbase" directory in the hdfs, correct?

Yes.  hbase.rootdir.

2. When you suggest we move to the head of the 0.1 branch, do you mean an 0.1.2 pre-release since right now all the servers we check show hbase-0.1.1 as the latest release?

Yes. We put up a 0.1.2 candidate a few weeks ago but a bunch of bugs came in so we put it aside. I'm about to put up a new 0.1.2 candidate now. Watch this list for an update in the next hour or so.

Thanks,
St.Ack


