[ http://issues.apache.org/jira/browse/HADOOP-210?page=comments#action_12413017 ]
Konstantin Shvachko commented on HADOOP-210:
--------------------------------------------
I thought we might want to use Java reflection to make the versioning support more generic.
This requires some programming discipline, and the reflection framework will do the rest.
So I propose that all classes that require versioning implement a Versioned interface
(let me know if the name does not sound right), which declares a getVersion() method.
Additionally, all versions of the same class should implement a common interface,
which declares the methods that are version dependent.
For example, let's consider the INode class.

public class INode implements INodeReader, Versioned {
  String name;
  int nrBlocks;
  ....
}
INodeReader is the interface that declares, e.g., only one method readFields( in ),
and each version of INode should implement it.
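To make this concrete, the two interfaces could look roughly like this (using DataInput as the argument type is just my assumption):

public interface Versioned {
  int getVersion();
}

public interface INodeReader {
  void readFields( DataInput in ) throws IOException;
}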
For each field declared in the INode class we must have corresponding methods
  get<fieldName>
  set<fieldName>
  setDefault<fieldName>
For now I assume that all fields are of primitive types, and that a version
transition means only adding or removing fields in the class.
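For the nrBlocks field of the example above, the convention would give something like the following (the default value is of course up to the class author):

public int getNrBlocks()         { return nrBlocks; }
public void setNrBlocks( int n ) { this.nrBlocks = n; }
public void setDefaultNrBlocks() { this.nrBlocks = 0; }  // used when the stored version has no such field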
Then we should have a procedure for retiring old versions, which I see as renaming
the package of the Versioned class so that it includes the version number. E.g.
  org.apache.hadoop.dfs.INode
is renamed to
  org.apache.hadoop.dfs.v0.INode
if the old version is 0 and the new one is 1.
The retired classes are placed in a separate jar file.
I haven't thought about whether the retiring can be automated with an ant script or not.
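With that naming convention the factory can locate a retired class purely by its name. A small sketch of the lookup (the variable names are hypothetical):

// current class: org.apache.hadoop.dfs.INode, stored version: 0
String pkg = currentClass.getPackage().getName();                   // "org.apache.hadoop.dfs"
String retired = pkg + ".v" + storedVersion + "." + currentClass.getSimpleName();
Class sourceClass = Class.forName( retired );                       // fails if that version was never defined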
Finally, we have a VersionFactory class, which can be either a member or a superclass of INode.
The implementation of readFields is simple:

INode.readFields( in ) {
  int storedVersion = in.readVersion();
  VersionFactory.read( getCurrentVersion(), storedVersion, this );
}
And then VersionFactory.read() does the actual job.

VersionFactory.read( targetVersion, sourceVersion, targetClass ) {
  get the class name of the required version based on the targetClass name and sourceVersion;
  construct the sourceClass;
  if the class is not found, report that the version is not supported;
  targetFields = targetClass.getFields(), sorted lexicographically by field name;
  sourceFields = sourceClass.getFields(), sorted lexicographically by field name;
  // then we scan the two lists
  if both lists contain field A (of type T) {
    // this field is common to the two versions
    T value = in.readT();
    invoke targetClass.setA( value );
  }
  if field A is contained only in targetClass {
    // this is a new field
    invoke targetClass.setDefaultA();
  }
  if field A belongs only to sourceClass {
    // this field was removed in the new version
    T value = in.readT();
    // and do not assign it to anything
  }
}
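To show the actual reflection calls involved, here is a rough Java sketch of what the factory could look like. It is only a sketch of the idea under the assumptions above; passing the DataInput in explicitly, the helper names, and reading String fields via readUTF() (the INode example has one) are my own choices, not part of the proposal:

import java.io.DataInput;
import java.io.IOException;
import java.lang.reflect.Field;
import java.util.Arrays;
import java.util.Comparator;

public class VersionFactory {

  // Reads an object stored under sourceVersion into a target object of the current version.
  public static void read( int targetVersion, int sourceVersion,
                           Object target, DataInput in ) throws IOException {
    try {
      Class targetClass = target.getClass();
      // locate the retired class by name, as sketched above
      Class sourceClass = (sourceVersion == targetVersion) ? targetClass
          : Class.forName( targetClass.getPackage().getName() + ".v" + sourceVersion
                           + "." + targetClass.getSimpleName() );

      Field[] tf = sortedFields( targetClass );
      Field[] sf = sortedFields( sourceClass );

      int t = 0, s = 0;
      while( t < tf.length || s < sf.length ) {
        int cmp = (t == tf.length) ? 1
                : (s == sf.length) ? -1
                : tf[t].getName().compareTo( sf[s].getName() );
        if( cmp == 0 ) {                    // field common to both versions: read and set it
          Object value = readValue( in, sf[s].getType() );
          targetClass.getMethod( name("set", tf[t]), tf[t].getType() ).invoke( target, value );
          t++; s++;
        } else if( cmp < 0 ) {              // field added in the new version: use its default
          targetClass.getMethod( name("setDefault", tf[t]) ).invoke( target );
          t++;
        } else {                            // field removed in the new version: read and discard
          readValue( in, sf[s].getType() );
          s++;
        }
      }
    } catch( IOException e ) {
      throw e;
    } catch( Exception e ) {
      throw new IOException( "version " + sourceVersion + " is not supported: " + e );
    }
  }

  private static Field[] sortedFields( Class c ) {
    Field[] fields = c.getDeclaredFields();
    Arrays.sort( fields, new Comparator<Field>() {
      public int compare( Field a, Field b ) { return a.getName().compareTo( b.getName() ); }
    } );
    return fields;
  }

  private static String name( String prefix, Field f ) {
    String n = f.getName();
    return prefix + Character.toUpperCase( n.charAt(0) ) + n.substring(1);
  }

  private static Object readValue( DataInput in, Class type ) throws IOException {
    if( type == int.class )     return Integer.valueOf( in.readInt() );
    if( type == long.class )    return Long.valueOf( in.readLong() );
    if( type == boolean.class ) return Boolean.valueOf( in.readBoolean() );
    if( type == String.class )  return in.readUTF();
    throw new IOException( "unsupported field type " + type.getName() );
  }
}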
Advantages:
That way we can read data from any previous version, not only the preceding one.
And when defining a new version of the class we do not need any knowledge of the previous version(s).
Also, we can (and should) have many Versioned classes (for INode, for the add, delete,
and rename operation logs, ...), but we can use the same VersionFactory for all of them.
> Namenode not able to accept connections
> ---------------------------------------
>
> Key: HADOOP-210
> URL: http://issues.apache.org/jira/browse/HADOOP-210
> Project: Hadoop
> Type: Bug
> Components: dfs
> Environment: linux
> Reporter: Mahadev konar
> Assignee: Mahadev konar
>
> I am running Owen's random writer on a 627 node cluster (writing 10GB/node).
> After running for a while (map 12% reduce 1%) I get the following error on
> the Namenode:
> Exception in thread "Server listener on port 60000" java.lang.OutOfMemoryError: unable to create new native thread
> at java.lang.Thread.start0(Native Method)
> at java.lang.Thread.start(Thread.java:574)
> at org.apache.hadoop.ipc.Server$Listener.run(Server.java:105)
> After this, the namenode does not seem to be accepting connections from any
> of the clients. All of the DFSClient calls time out. Here is a trace for one
> of them:
> java.net.SocketTimeoutException: timed out waiting for rpc response
> at org.apache.hadoop.ipc.Client.call(Client.java:305)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:149)
> at org.apache.hadoop.dfs.$Proxy1.open(Unknown Source)
> at org.apache.hadoop.dfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:419)
> at org.apache.hadoop.dfs.DFSClient$DFSInputStream.<init>(DFSClient.java:406)
> at org.apache.hadoop.dfs.DFSClient.open(DFSClient.java:171)
> at org.apache.hadoop.dfs.DistributedFileSystem.openRaw(DistributedFileSystem.java:78)
> at org.apache.hadoop.fs.FSDataInputStream$Checker.<init>(FSDataInputStream.java:46)
> at org.apache.hadoop.fs.FSDataInputStream.<init>(FSDataInputStream.java:228)
> at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:157)
> at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:43)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:105)
> at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:785)
> The namenode then has around 1% CPU utilization at this time (after the
> OutOfMemoryError has been thrown). I have profiled the NameNode and it seems
> to be using a maximum heap size of around 57MB (which is not much), so heap
> size does not seem to be the problem. Might it be happening due to a lack of
> stack space? Any pointers?