[
https://issues.apache.org/jira/browse/HDFS-5541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13832179#comment-13832179
]
Stephen Bovy commented on HDFS-5541:
------------------------------------
Here are the traces from the OPS test (almost 100%).
I am getting a strange error on writing a file in append mode.
>> And is getStatus a new method ??? <<
Some new methods are missing (and thus cannot be used for backwards
compatibility).
Usage: hadoop [--config confdir] COMMAND
where COMMAND is one of:
namenode -format format the DFS filesystem
secondarynamenode run the DFS secondary namenode
namenode run the DFS namenode
datanode run a DFS datanode
dfsadmin run a DFS admin client
mradmin run a Map-Reduce admin client
fsck run a DFS filesystem checking utility
fs run a generic filesystem user client
balancer run a cluster balancing utility
snapshotDiff diff two snapshots of a directory or diff the
current directory contents with a snapshot
lsSnapshottableDir list all snapshottable dirs owned by the current user
oiv apply the offline fsimage viewer to an fsimage
fetchdt fetch a delegation token from the NameNode
jobtracker run the MapReduce job Tracker node
pipes run a Pipes job
tasktracker run a MapReduce task Tracker node
historyserver run job history servers as a standalone daemon
job manipulate MapReduce jobs
queue get information regarding JobQueues
version print the version
jar <jar> run a jar file
distcp <srcurl> <desturl> copy file or directories recursively
distcp2 <srcurl> <desturl> DistCp version 2
archive -archiveName NAME <src>* <dest> create a hadoop archive
daemonlog get/set the log level for each daemon
or
CLASSNAME run the class named CLASSNAME
Most commands print help when invoked w/o parameters.
C:\hdp\hadoop\hadoop-1.2.0.1.3.0.0-0380>z:
Z:\>cd d\dclibhdfs
Z:\D\dclibhdfs>dir
Volume in drive Z is Shared Folders
Volume Serial Number is 0000-0064
Directory of Z:\D\dclibhdfs
11/25/2013 05:33 PM <DIR> .
11/25/2013 04:29 PM 56,832 dclibhdfs.dll
11/25/2013 05:33 PM 17,408 TstOpsHdfs.exe
11/22/2013 09:28 PM 7,680 TstReadHdfs.exe
11/22/2013 09:29 PM 7,680 TstWriteHdfs.exe
09/09/2013 03:14 PM 4,961,800 vc2008_SP1_redist_x64.exe
09/09/2013 03:16 PM 1,821,192 vc2008_SP1_redist_x86.exe
09/09/2013 03:02 PM 7,185,000 vc2012_Update3_redist_x64.exe
09/09/2013 03:02 PM 6,552,288 vc2012_Update3_redist_x86.exe
8 File(s) 20,613,976 bytes
1 Dir(s) 272,641,044,480 bytes free
Z:\D\dclibhdfs>TstOpsHdfs.exe
dll attached
dll: tls1=1
Get Global JNI
load jvm
dll: get proc addresses
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: jvm created
dll: thread attach
dll: save environment
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
dll: thread attach
Opened /tmp/testfile.txt for writing successfully...
dll: thread attach
Wrote 14 bytes
Current position: 14
Flushed /tmp/testfile.txt successfully!
dll: thread attach
dll: detach thread
dll: detach thread
could not find method read from class org/apache/hadoop/fs/FSDataInputStream with signature (Ljava/nio/ByteBuffer;)I
readDirect: FSDataInputStream#read error:
java.lang.NoSuchMethodError: read
hdfsOpenFile(/tmp/testfile.txt): WARN: Unexpected error 255 when testing for direct read compatibility
hdfsAvailable: 14
Current position: 1
Direct read support not detected for HDFS filesystem
Read following 13 bytes:
ello, World!
Read following 14 bytes:
Hello, World!
Test Local File System C:\tmp\testfile.txt
13/11/25 17:40:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
dll: thread attach
dll: detach thread
could not find method read from class org/apache/hadoop/fs/FSDataInputStream with signature (Ljava/nio/ByteBuffer;)I
readDirect: FSDataInputStream#read error:
java.lang.NoSuchMethodError: read
hdfsOpenFile(C:\tmp\testfile.txt): WARN: Unexpected error 255 when testing for direct read compatibility
dll: thread attach
dll: detach thread
hdfsCopy(remote-local): Success!
dll: thread attach
dll: thread attach
dll: detach thread
dll: detach thread
hdfsCopy(remote-remote): Success!
dll: thread attach
dll: detach thread
hdfsMove(local-local): Success!
dll: thread attach
dll: detach thread
hdfsMove(remote-local): Success!
hdfsRename: Success!
dll: thread attach
dll: thread attach
dll: detach thread
dll: detach thread
hdfsCopy(remote-remote): Success!
hdfsCreateDirectory: Success!
hdfsSetReplication: Success!
hdfsGetWorkingDirectory: hdfs://bovy2008.td.teradata.com:8020/user/sb186007
hdfsSetWorkingDirectory: Success!
hdfsGetWorkingDirectory: /tmp
hdfsGetDefaultBlockSize: 67108864
could not find method getStatus from class org/apache/hadoop/fs/FileSystem with signature ()Lorg/apache/hadoop/fs/FsStatus;
hdfsGetCapacity: FileSystem#getStatus error:
java.lang.NoSuchMethodError: getStatus
hdfsGetCapacity: -1
could not find method getStatus from class org/apache/hadoop/fs/FileSystem with signature ()Lorg/apache/hadoop/fs/FsStatus;
hdfsGetUsed: FileSystem#getStatus error:
java.lang.NoSuchMethodError: getStatus
hdfsGetUsed: -1
hdfsGetPathInfo - SUCCESS!
Name: hdfs://bovy2008.td.teradata.com:8020/tmp, Type: D, Replication: 0, BlockSize: 0, Size: 0, LastMod: Mon Nov 25 17:40:41 2013
Owner: sb186007, Group: supergroup, Permissions: 511 (rwxrwxrwx)
Name: hdfs://bovy2008.td.teradata.com:8020/tmp/appends, Type: F, Replication: 1, BlockSize: 67108864, Size: 6, LastMod: Mon Nov 25 17:35:37 2013
Owner: sb186007, Group: supergroup, Permissions: 420 (rw-r--r--)
Name: hdfs://bovy2008.td.teradata.com:8020/tmp/newdir, Type: D, Replication: 0, BlockSize: 0, Size: 0, LastMod: Mon Nov 25 17:40:41 2013
Owner: sb186007, Group: supergroup, Permissions: 493 (rwxr-xr-x)
Name: hdfs://bovy2008.td.teradata.com:8020/tmp/testfile.txt, Type: F, Replication: 2, BlockSize: 67108864, Size: 14, LastMod: Mon Nov 25 17:40:41 2013
Owner: sb186007, Group: supergroup, Permissions: 420 (rw-r--r--)
Name: hdfs://bovy2008.td.teradata.com:8020/tmp/testfile2.txt, Type: F, Replication: 1, BlockSize: 67108864, Size: 14, LastMod: Mon Nov 25 17:40:41 2013
Owner: sb186007, Group: supergroup, Permissions: 420 (rw-r--r--)
Name: hdfs://bovy2008.td.teradata.com:8020/tmp/usertestfile.txt, Type: F, Replication: 1, BlockSize: 67108864, Size: 14, LastMod: Mon Nov 25 17:35:37 2013
Owner: nobody, Group: supergroup, Permissions: 420 (rw-r--r--)
hdfsGetHosts - SUCCESS! ...
hosts[0][0] - bovy2008.td.teradata.com
hdfsChown:Group: Success!
hdfsChown:User: Success!
hdfsChmod: Success!
hdfsUtime: Success!
hdfsChown read: Success!
hdfsChmod read: Success!
hdfsChmod: Success!
newMTime=1385430041
curMTime=1385430041
hdfsUtime read (mtime): Success!
hdfsDelete: Success!
hdfsDelete: Success!
hdfsDelete: Success!
hdfsDelete: Success!
hdfsExists: Success!
Opened /tmp/appends for writing successfully...
dll: thread attach
Wrote 6 bytes
Flushed /tmp/appends successfully!
dll: thread attach
dll: detach thread
dll: detach thread
Open file in append mode:/tmp/appends
hdfsOpenFile(/tmp/appends): FileSystem#append((Lorg/apache/hadoop/fs/Path;)Lorg/apache/hadoop/fs/FSDataOutputStream;) error:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: Append is not supported. Please see the dfs.support.append configuration parameter
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1840)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:768)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1456)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1452)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1233)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1450)
        at org.apache.hadoop.ipc.Client.call(Client.java:1118)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at $Proxy1.append(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
        at java.lang.reflect.Method.invoke(Unknown Source)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
        at $Proxy1.append(Unknown Source)
        at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:993)
        at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:982)
        at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:199)
        at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:650)
Failed to open /tmp/appends for Append/writing!
dll: detach thread
Connect As User:nobody
dll: thread attach
dll: thread attach
Opened /tmp/usertestfile.txt for writing successfully...
dll: thread attach
Wrote 14 bytes
Flushed /tmp/usertestfile.txt successfully!
dll: thread attach
dll: detach thread
dll: detach thread
hdfs new file user is correct: Success!
dll: detach thread
dll: detach process
dll: invoke thread destructor
dll detached
Z:\D\dclibhdfs>
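A note on the append failure in the trace above: the RemoteException names the setting itself, so the NameNode is rejecting append because dfs.support.append is off (the branch-1 default). If appends are actually wanted for testing, a fragment like the following in hdfs-site.xml (followed by a cluster restart) should clear it; append on 1.x was considered unstable, so this is for test setups only:

```
<!-- hdfs-site.xml: enable append support on a branch-1 cluster.
     Requires a NameNode/DataNode restart to take effect. -->
<property>
  <name>dfs.support.append</name>
  <value>true</value>
</property>
```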
And how do I get the NativeCodeLoader to work??
13/11/25 17:40:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> LIBHDFS questions and performance suggestions
> ---------------------------------------------
>
> Key: HDFS-5541
> URL: https://issues.apache.org/jira/browse/HDFS-5541
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs-client
> Reporter: Stephen Bovy
> Priority: Minor
> Attachments: pdclibhdfs.zip
>
>
> Since libhdfs is a "client" interface, and especially because it is a "C"
> interface, it should be assumed that the code will be used across many
> different platforms and many different compilers.
> 1) The code should be cross-platform (no Linux extras).
> 2) The code should compile on standard c89 compilers; the
> >>> {least common denominator rule applies here} !! <<
> C code with the "c" extension should follow the rules of the C standard:
> all variables must be declared at the beginning of scope, and no (//)
> comments allowed.
> >> I just spent a week white-washing the code back to normal C standards so
> >> that it could compile and build across a wide range of platforms <<
> Now on to the performance questions.
> 1) If threads are not used, why do a thread attach? (When threads are not
> used, all the thread-attach nonsense is a waste of time and a performance
> killer.)
> 2) The JVM init code should not be embedded within the context of every
> function call. The JVM init code should be in a stand-alone LIBINIT
> function that is only invoked once. The JVM * and the JNI * should be
> global variables for use when no threads are utilized.
> 3) When threads are utilized, the attach function can use the GLOBAL jvm *
> created by the LIBINIT { WHICH IS INVOKED ONLY ONCE } and thus safely
> outside the scope of any LOOP that is using the functions.
> 4) Hash table and locking: why?????
> When threads are used, the hash-table locking is going to hurt performance.
> Why not use thread-local storage for the hash table? That way no locking
> is required, either with or without threads.
>
> 5) FINALLY, Windows compatibility:
> Do not use POSIX features if they cannot easily be replaced on other
> platforms !!
--
This message was sent by Atlassian JIRA
(v6.1#6144)