[ https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15310189#comment-15310189 ]

john lilley commented on HADOOP-13223:
--------------------------------------

One more winutils.exe issue, seen only on MapR.  I don't think this affects 
mainline Hadoop development, but it is indicative of the kinds of errors you 
can get out of winutils.  This is what drove us to put MapR's winutils.exe 
folder in the PATH, which in turn broke the other distros:

I'm still trying to narrow this down more precisely, but I've found that on 
Windows, attempting to access Hive on MapR 5.1 in secure mode fails with the 
stack trace below unless we put the MapR bin folder (e.g. 
C:\opt\mapr\hadoop\hadoop-2.7.0\bin) in the PATH.  Otherwise, we have a 
winutils.exe in the normal place for Hadoop relative to the jar files, and it 
is found, but a strange error ensues when it is called.

The smoking gun in the stack trace is the invalid mode '00777'.  Indeed, no 
version of winutils, including the one in MapR's bin, will accept "chmod 
00777".  Five-digit mode strings were a bug in Paleolithic versions of 
RawLocalFileSystem; I have to go back to before 2.2.0 to find a 
RawLocalFileSystem.setPermission that formats five octal digits instead of 
four:
  public void setPermission(Path p, FsPermission permission)
    throws IOException {
    if (NativeIO.isAvailable()) {
      NativeIO.chmod(pathToFile(p).getCanonicalPath(),
                     permission.toShort());
    } else {
      // "%05o" renders the mode with five octal digits (e.g. "00777"),
      // which is exactly the string winutils chmod rejects.
      execCommand(pathToFile(p), Shell.SET_PERMISSION_COMMAND,
          String.format("%05o", permission.toShort()));
    }
  }
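
The difference is just the format width: a mode of 0777 printed with "%05o" 
comes out as "00777", which winutils chmod rejects, while the four-digit 
"%04o" used from 2.2.0 onward yields "0777".  A minimal sketch of the 
formatting itself (plain java.lang.String, nothing Hadoop-specific):

    public class ModeFormatDemo {
      public static void main(String[] args) {
        // FsPermission.toShort() for rwxrwxrwx is octal 0777 (decimal 511)
        short mode = 0777;
        // pre-2.2.0 five-digit width: prints "00777" -> "Invalid mode: '00777'"
        System.out.println(String.format("%05o", mode));
        // four-digit width: prints "0777", the form winutils will accept
        System.out.println(String.format("%04o", mode));
      }
    }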

Either way, it is mysterious why putting MapR's bin folder in the PATH helps.
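
Part of the trouble is that nothing logs which winutils.exe the client 
actually resolved (problem #4 below).  A rough diagnostic sketch, assuming a 
Hadoop 2.6/2.7 client where org.apache.hadoop.util.Shell exposes the resolved 
path as the static WINUTILS field (later releases replaced it with 
Shell.getWinUtilsPath()):

    import org.apache.hadoop.util.Shell;

    public class WhichWinutils {
      public static void main(String[] args) {
        // Shell derives the winutils.exe location from hadoop.home.dir / HADOOP_HOME
        System.out.println("hadoop.home.dir = " + System.getProperty("hadoop.home.dir"));
        System.out.println("HADOOP_HOME     = " + System.getenv("HADOOP_HOME"));
        // Shell.WINUTILS holds the resolved winutils.exe path, or null if it was not found
        System.out.println("winutils        = " + Shell.WINUTILS);
      }
    }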

This is the error stack:
    org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
    org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:466)
    net.redpoint.hiveclient.internal.DMHiveClientMetastoreImpl.<init>(DMHiveClientMetastoreImpl.java:257)
    net.redpoint.hiveclient.internal.DMHiveClientMetastoreImpl.newInstance(DMHiveClientMetastoreImpl.java:60)
    net.redpoint.hiveclient.DMHiveClientCreator.createHiveClient(DMHiveClientCreator.java:16)
Caused by: ExitCodeException exitCode=1: Invalid mode: '00777'
Incorrect command line arguments.
    org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    org.apache.hadoop.util.Shell.run(Shell.java:456)
    org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    org.apache.hadoop.util.Shell.execCommand(Shell.java:815)
    org.apache.hadoop.util.Shell.execCommand(Shell.java:798)
    org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:772)
    org.apache.hadoop.fs.RawLocalFileSystem.mkOneDirWithMode(RawLocalFileSystem.java:487)
    org.apache.hadoop.fs.RawLocalFileSystem.mkdirsWithOptionalPermission(RawLocalFileSystem.java:527)
    org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:505)
    org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:305)
    org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:642)
    org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:570)
    org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:508)
    org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:466)
    net.redpoint.hiveclient.internal.DMHiveClientMetastoreImpl.<init>(DMHiveClientMetastoreImpl.java:257)
    net.redpoint.hiveclient.internal.DMHiveClientMetastoreImpl.newInstance(DMHiveClientMetastoreImpl.java:60)
    net.redpoint.hiveclient.DMHiveClientCreator.createHiveClient(DMHiveClientCreator.java:16)


> winutils.exe is an abomination and should be killed with an axe.
> ----------------------------------------------------------------
>
>                 Key: HADOOP-13223
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13223
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: bin
>    Affects Versions: 2.6.0
>         Environment: Microsoft Windows, all versions
>            Reporter: john lilley
>
> winutils.exe was apparently created as a stopgap measure to allow Hadoop to 
> "work" on Windows platforms, because the NativeIO libraries aren't 
> implemented there.  Rather than building a DLL that makes native OS calls, 
> the creators of winutils.exe must have decided that it would be more 
> expedient to create an EXE to carry out file system operations in a 
> Linux-like fashion.  Unfortunately, like many stopgap measures in software, 
> this one has persisted well beyond its expected lifetime and usefulness.  My 
> team creates software that runs on Windows and Linux, and winutils.exe is 
> probably responsible for 20% of all issues we encounter, both during 
> development and in the field.
> Problem #1 with winutils.exe is that it is simply missing from many popular 
> distros, and/or the client-side software installation for those distros, 
> when one is supplied, fails to install winutils.exe.  Thus, as software 
> developers, we are forced to pick one version and distribute and install it 
> with our software.
> Which leads to problem #2: the various winutils.exe binaries are not always 
> compatible.  In particular, MapR MUST have its winutils.exe in the system 
> PATH, but doing so breaks the Hadoop distro for every other Hadoop vendor.  
> This makes creating and maintaining test environments that work with all of 
> the Hadoop distros we want to test unnecessarily tedious and error-prone.
> Problem #3 is that the mechanism by which you inform the Hadoop client 
> software where to find winutils.exe is poorly documented and fragile.  First, 
> it can be in the PATH.  If it is in the PATH, that is where it is found.  
> However, the documentation, such as it is, makes no mention of this, and 
> instead says that you should set the HADOOP_HOME environment variable, which 
> does NOT override the winutils.exe found in your system PATH.
> Which leads to problem #4: there is no logging that says where winutils.exe 
> was actually found and loaded.  Because of this, fixing problems caused by 
> picking up the wrong winutils.exe is extremely difficult.
> Problem #5 is that most of the time, such as when accessing straight up HDFS 
> and YARN, one does not *need* winutils.exe.  But if it is missing, the log 
> messages complain about its absence.  When we are trying to diagnose an 
> obscure issue in Hadoop (of which there are many), the presence of this red 
> herring leads to all sorts of time wasted until someone on the team points 
> out that winutils.exe is not the problem, at least not this time.
> Problem #6 is that errors and stack traces from issues involving winutils.exe 
> are not helpful.  The Java stack trace ends at the ProcessBuilder call.  Only 
> through bitter experience is one able to connect the dots from 
> "ProcessBuilder is the last thing on the stack" to "something is wrong with 
> winutils.exe".
> Note that none of these involve running Hadoop on Windows.  They are only 
> encountered when using Hadoop client libraries to access a cluster from 
> Windows.


