I'm sorry for my mistake. The object was initialized.
On Thu, Mar 13, 2008 at 3:13 PM, ma qiang [EMAIL PROTECTED] wrote:
Hi all,
My code is as below:
public class MapTest extends MapReduceBase implements Mapper {
private int[][] myTestArray;
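Judging from the follow-up ("the object was initialized"), the fix is to allocate the array before map() touches it, for example in configure(). A minimal self-contained sketch of the pattern — JobConf and MapReduceBase below are bare stand-ins so the snippet compiles without Hadoop on the classpath; the real classes live in org.apache.hadoop.mapred:

```java
// Stand-ins for the Hadoop 0.16 classes, so this sketch is self-contained.
class JobConf { }
class MapReduceBase { public void configure(JobConf conf) { } }

class MapTest extends MapReduceBase {
    private int[][] myTestArray;   // null until configure() runs

    @Override
    public void configure(JobConf conf) {
        // Allocate once per task, before any map() call uses the array.
        myTestArray = new int[10][10];
    }

    // Accessor so callers can see that the array was set.
    public int[][] getArray() {
        return myTestArray;
    }
}
```

The framework calls configure() once per task before the first map() call, so member state initialized there is safe to use from map().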
Thanks, Ted!
I also thought it was not a good idea to separate them out. I was just
wondering whether it is possible at all. Thanks!
Ted Dunning wrote:
It is quite possible to do this.
It is also a bad idea.
One of the great things about map-reduce architectures is that data is near
the computation so
It is very possible (even easy).
The data nodes run the datanode process. The task nodes run the task
tracker. If the data nodes don't have a task tracker running, then they
won't do any computation.
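In practice each daemon is started independently, so a storage-only machine simply never starts a task tracker. A hedged ops sketch, assuming the 0.16-era helper scripts and release layout:

```sh
# On every storage node: run only the HDFS datanode.
bin/hadoop-daemon.sh start datanode

# On compute nodes: additionally run the task tracker.
bin/hadoop-daemon.sh start tasktracker
```

Jobs then schedule only on the nodes running a task tracker, at the cost of the data locality Ted describes above.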
On 3/13/08 8:22 AM, Andrey Pankov [EMAIL PROTECTED] wrote:
I tried HadoopDfsReadWriteExample. I am getting the following error. I
appreciate any help. I provide more info at the end.
Error while copying file
Exception in thread main java.io.IOException: Cannot run program
df: CreateProcess error=2, The system cannot find the file specified
at
Hi Johannes,
I'm using the 0.16.0 distribution.
I assume you mean the 0.16.0 release
(http://hadoop.apache.org/core/releases.html) without any additional patch.
I have just tried it but cannot reproduce the problem you described. I did the
following:
1) start a cluster with tsz
2) run a job
Here is a reset, followed by three attempts to write the block.
2008-03-13 13:40:06,892 INFO org.apache.hadoop.dfs.DataNode: Receiving
block blk_7813471133156061911 src: /10.251.26.3:35762 dest: /
10.251.26.3:50010
2008-03-13 13:40:06,957 INFO org.apache.hadoop.dfs.DataNode: Exception
in
The namenode ran out of disk space and on restart was throwing the error
at the end of this message.
We copied edit.tmp from the secondary to edit, copied srcimage to fsimage,
and removed edit.new, and our file system started up
and /appears/ to be intact.
What is the proper
Currently I can retrieve entries if I use MapFileOutputFormat via
conf.setOutputFormat with no compression specified. But I was trying to do
this:
public void configure(JobConf jobConf) {
    ...
    this.writer = new MapFile.Writer(jobConf, fileSys, dirName,
        Text.class, Text.class,
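For reference, a hedged completion of a call like the one above, assuming the 0.16-era API (the dirName, keys, and values are illustrative, and what the original author actually passed after the second Text.class is not shown). MapFile.Writer has an overload taking a SequenceFile.CompressionType, so compression can be disabled explicitly; note that MapFile keys must be appended in ascending order:

```java
// Sketch only; requires the Hadoop 0.16 classes on the classpath.
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.MapFile;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;

public class MapFileSketch {
    public static void write(JobConf jobConf, String dirName) throws Exception {
        FileSystem fileSys = FileSystem.get(jobConf);
        MapFile.Writer writer = new MapFile.Writer(jobConf, fileSys, dirName,
                Text.class, Text.class, SequenceFile.CompressionType.NONE);
        // Keys must arrive in sorted order, or append() throws an IOException.
        writer.append(new Text("a"), new Text("first"));
        writer.append(new Text("b"), new Text("second"));
        writer.close();
    }
}
```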
I have a question. As we know, the name node forms a single point of failure.
In a production environment, I imagine a name node would run in a data
center. If that data center
fails, how would you put a new name node in place in another data
center that can take over with minimum
If your data center fails, then you probably have to worry more about how to
get your data.
I assume there are multiple data centers. I know that, thanks to HDFS
replication, the data in the other data center will be enough.
However, as far as I can see, HDFS has no support for replication
of the namenode.
Your procedure is right:
1. Copy edit.tmp from secondary to edit on primary
2. Copy srcimage from secondary to fsimage on primary
3. remove edits.new on primary
4. restart cluster, put in Safemode, fsck /
However, the above steps are not foolproof, because the transactions that
occurred between
No problem!
Clearly there's demand. As of a few minutes ago, we're at capacity once
again. So I hope everyone who wanted in was able to get on the list.
See some of you in just over a week...
Jeremy
On 3/12/08, Marc Boucher [EMAIL PROTECTED] wrote:
Great news Jeremy, thank you for this.
I don't really have these logs, as I've bounced my cluster, but I am
willing to ferret out anything in particular on my next failed run.
On Mar 13, 2008, at 4:32 PM, Raghu Angadi wrote:
Yeah, it's kind of hard to deal with these failures once they start
occurring.
Are all these logs from
Hi all,
If I cannot close the connection to HBase used by an HTable, and
the object is set to null, will the resources of this connection
be released?
The code is as below:
public class MyMap extends MapReduceBase implements Mapper {
private HTable
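As a general Java point (independent of which HBase version is in use): setting the reference to null only makes the object eligible for garbage collection; it does not deterministically release whatever the object holds. A self-contained sketch of the distinction — FakeTable is a stand-in for illustration, not the HBase API:

```java
import java.io.Closeable;

// Stand-in for a resource-holding object such as an HTable (illustrative only).
class FakeTable implements Closeable {
    private boolean closed = false;

    @Override
    public void close() {
        closed = true;   // a real client would release the connection here
    }

    public boolean isClosed() {
        return closed;
    }
}

class NullVsClose {
    public static void main(String[] args) {
        FakeTable a = new FakeTable();
        a = null;   // GC may reclaim the object eventually, but close() never ran

        FakeTable b = new FakeTable();
        try {
            // ... use b ...
        } finally {
            b.close();   // deterministic release
        }
        System.out.println(b.isClosed());   // prints "true"
    }
}
```

So relying on nulling the reference means the underlying resources are freed only whenever the collector and any finalization logic get around to it, not at a point you control.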