that the journalnode service is running, which is
simple; I'd like to check latency or sync status. Is there any API
or command to check it?
Regards
Juan Carlos Fernandez
--
Have a Nice Day!
Lohit
if anyone can share some experiences here.
Many thanks.
Bill
--
Have a Nice Day!
Lohit
On May 15, 2013, at 7:17 AM, Michael Segel michael_se...@hotmail.com wrote:
Quick question...
So when we have a cluster which has multiple namespaces (multiple namenodes), why would you have a file in two different namespaces?
Are you asking why one would create the same file in two namespaces rather than a single namespace?
The reason I am asking is that I'm trying to see how people view and use
namespaces.
Does that make sense?
Thx
On May 15, 2013, at 9:24 AM, Lohit lohit.vijayar...@yahoo.com wrote:
On May 15, 2013, at 7:17 AM, Michael Segel michael_se...@hotmail.com
wrote
in HDFS.
Are these implementations enough to secure HDFS?
best regards,
seonpark
* Sorry for my bad English
--
Have a Nice Day!
Lohit
space.
Is it possible to limit the number of tasks (mappers) per computer to 1 or
2 for
these kinds of jobs?
Regards,
Marco
--
Have a Nice Day!
Lohit
Thanks,
Abhishek
--
Have a Nice Day!
Lohit
single Linux box; I have 3 IPs with me.
--
Have a Nice Day!
Lohit
at MultiFileInputFormat if you want to club multiple
files per map task.
It is best to move completed job directories to some other path so as to avoid
filtering altogether
Lohit
On Oct 16, 2012, at 5:25 PM, Koert Kuipers ko...@tresata.com wrote:
currently I run a map-reduce job that reads from
to have hbase-default.xml in the conf directory. The hbase jar
already has hbase-default.xml.
So do I need to set all those configurations on my own for this new version?
Thanks,
Praveenesh
--
Have a Nice Day!
Lohit
There is no FileSystem API to copy.
You could try
hadoop dfs -cp src dest
which basically reads the file and writes to a new file.
The code for this is in FsShell.java
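If you want the same copy from Java, here is a minimal sketch using the FileUtil helper (untested; the class name DfsCopy is just for illustration):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FileUtil;
    import org.apache.hadoop.fs.Path;

    public class DfsCopy {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Reads the source and writes a new file at the destination,
        // which is what FsShell's -cp does under the hood.
        FileUtil.copy(fs, new Path(args[0]), fs, new Path(args[1]),
                      false /* do not delete source */, conf);
      }
    }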
- Original Message
From: Jim Twensky jim.twen...@gmail.com
To: core-u...@hadoop.apache.org
Sent: Fri, May 13, 2011
Thanks Koji, Raghu.
This seemed to solve our problem; we haven't seen it happen in the past 2 days.
What is the typical value of ipc.client.idlethreshold on big clusters?
Does the default value of 4000 suffice?
Lohit
- Original Message
From: Koji Noguchi knogu...@yahoo-inc.com
To: core
they do RPC for create/open/getFileInfo.
I will give this a try. Thanks again,
Lohit
- Original Message
From: Koji Noguchi knogu...@yahoo-inc.com
To: core-user@hadoop.apache.org
Sent: Sunday, March 29, 2009 11:44:29 PM
Subject: RE: Socket closed Exception
Hi Lohit,
My initial guess
Thanks Raghu, is the log level at DEBUG? I do not see any socket close
exception at the NameNode at WARN/INFO level.
Lohit
- Original Message
From: Raghu Angadi rang...@yahoo-inc.com
To: core-user@hadoop.apache.org
Sent: Monday, March 30, 2009 12:08:19 PM
Subject: Re: Socket closed
/TaskTracker/Task logs.
(This is on HDFS 0.15.) Are there cases where the NameNode closes a socket due to heavy
load or during contention for resources of any kind?
Thanks,
Lohit
time. You do not need to
reformat HDFS.
Lohit
- Original Message
From: bzheng bing.zh...@gmail.com
To: core-user@hadoop.apache.org
Sent: Wednesday, March 11, 2009 7:48:41 PM
Subject: What happens when you do a ctrl-c on a big dfs -rmr
I did a ctrl-c immediately after issuing a hadoop dfs
Which version of Hadoop are you using?
I think from 0.18 or 0.19, copyFromLocal accepts multiple files as input, but
the destination should be a directory.
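From Java you can get the same effect with the FileSystem API; a minimal sketch (paths are illustrative, and I have not tested this):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class MultiCopyFromLocal {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path dstDir = new Path("/user/sd/input");  // destination directory on DFS
        // Copy each local file into the destination directory.
        for (String local : new String[] {"a.txt", "b.txt"}) {
          fs.copyFromLocalFile(new Path(local), dstDir);
        }
      }
    }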
Lohit
- Original Message
From: S D sd.codewarr...@gmail.com
To: Hadoop Mailing List core-user@hadoop.apache.org
Sent: Monday, February
I am planning to add the individual files initially, and after a while (let's
say 2 days after insertion) will make a SequenceFile out of each directory
(I am currently looking into SequenceFile) and delete the previous files of
that directory from HDFS. That way, in the future, I can access any
/*namenode*.log
Lohit
- Original Message
From: Amandeep Khurana ama...@gmail.com
To: core-user@hadoop.apache.org
Sent: Wednesday, February 4, 2009 5:26:43 PM
Subject: Re: Bad connection to FS.
Here's what I had done..
1. Stop the whole system
2. Delete all the data in the directories where
-site.xml and your FileSystem API talk to KFS.
5. Alternatively, you could also create an object of KosmosFileSystem, which
extends FileSystem. Look at org.apache.hadoop.fs.kfs.KosmosFileSystem for an
example.
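A minimal sketch of option 5, assuming KFS is configured in hadoop-site.xml (the kfs:// host and port here are made up):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class KfsTouch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // For a kfs:// URI, FileSystem.get hands back a KosmosFileSystem,
        // so the normal FileSystem calls below go against KFS.
        FileSystem fs = FileSystem.get(URI.create("kfs://metaserver:20000"), conf);
        System.out.println(fs.exists(new Path("/")));
      }
    }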
Lohit
- Original Message
From: Wasim Bari wasimb...@msn.com
To: core-user
Try
./bin/hadoop job -h
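hadoop job -kill <jobid> is the stop; as far as I know there is no suspend. From code, a minimal sketch with JobClient (the job id is a placeholder):

    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class KillJob {
      public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        // Same effect as 'hadoop job -kill job_200901121810_0001'.
        RunningJob job = client.getJob(args[0]);
        if (job != null) {
          job.killJob();
        }
      }
    }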
Lohit
On Jan 12, 2009, at 6:10 PM, Samuel Guo guosi...@gmail.com wrote:
Hi all,
Is there any method that I can use to stop or suspend a running job in
Hadoop?
Regards,
Samuel
It looks like you do not have datanodes running.
Can you check the datanode logs and see if they started without errors?
Thanks,
Lohit
- Original Message
From: sagar arlekar sagar.arle...@gmail.com
To: core-user@hadoop.apache.org
Sent: Tuesday, December 30, 2008 1:00:04 PM
Subject
try
hadoop distcp
more info here
http://hadoop.apache.org/core/docs/current/distcp.html
The documentation is for the current release, but running hadoop distcp without arguments should print
a help message.
Thanks,
Lohit
- Original Message
From: C G parallel...@yahoo.com
To: core-user@hadoop.apache.org
Hi Nik,
Can you explain the steps you took? Was the NameNode/JobTracker running on the node
where the datanode ran?
In a cluster with more than one node, stopping one datanode does not stop the whole
cluster.
Thanks,
Lohit
- Original Message
From: Nikolay Grebnev [EMAIL PROTECTED]
To: core-user
, by decommissioning you would be
asking the NameNode to copy over the blocks it has to some other datanode.
Thanks,
Lohit
- Original Message
From: Amareshwari Sriramadasu [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Tuesday, November 25, 2008 11:51:21 PM
Subject: Re: how can I decommission
I take that back. I forgot about the changes in the new version of HDFS.
If you are testing this, take a look at TestReplication.java
Lohit
- Original Message
From: Ramya R [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Cc: [EMAIL PROTECTED]
Sent: Tuesday, November 25, 2008 11:15:28 PM
Thanks Sharad and Paco.
Lohit
On Nov 25, 2008, at 5:34 AM, Paco NATHAN [EMAIL PROTECTED] wrote:
Hi Lohit,
Our team collects those kinds of measurements using this patch:
https://issues.apache.org/jira/browse/HADOOP-4559
Some example Java code in the comments shows how to access the data.
Thanks,
lohit
- Original Message
From: Sagar Naik [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Tuesday, November 25, 2008 3:58:53 PM
Subject: 64 bit namenode and secondary namenode 32 bit datanode
I am trying to migrate from a 32-bit JVM to a 64-bit JVM for the namenode only.
*setup*
NN
.
Thanks,
Lohit
- Original Message
From: Sagar Naik [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Tuesday, November 25, 2008 4:32:26 PM
Subject: Re: 64 bit namenode and secondary namenode 32 bit datanode
lohit wrote:
I might be wrong, but my assumption is running SN either
) or reuse the one already used by Hadoop.
http://hadoop.apache.org/core/docs/r0.18.2/api/org/apache/hadoop/mapred/Partitioner.html
has details.
I think this
http://hadoop.apache.org/core/docs/r0.18.2/api/org/apache/hadoop/examples/SleepJob.html
has a usage example (look for SleepJob.java).
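A minimal sketch of a custom Partitioner against the old mapred API (the class and the first-character scheme are made up for illustration):

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.Partitioner;

    // Routes keys to reduces by their first character instead of the default hash.
    public class FirstCharPartitioner implements Partitioner<Text, Text> {
      public int getPartition(Text key, Text value, int numPartitions) {
        if (key.getLength() == 0) {
          return 0;
        }
        return (key.charAt(0) & Integer.MAX_VALUE) % numPartitions;
      }
      public void configure(JobConf job) {
        // Nothing to configure for this example.
      }
    }

You would then register it with conf.setPartitionerClass(FirstCharPartitioner.class).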
-Lohit
. You would lose all changes
that have happened since the last checkpoint.
Hope that helps,
Lohit
- Original Message
From: Sagar Naik [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, November 14, 2008 10:38:45 AM
Subject: Recovery of files in hadoop 18
Hi,
I
started this namenode with the old image and empty edits. You do not
want your latest edits to be replayed, since they contain your delete transactions.
Thanks,
Lohit
- Original Message
From: Sagar Naik [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, November 14, 2008 12:11:46 PM
would have been to open edits in a hex editor or similar to check), but this
should work.
Once done, you could start.
Thanks,
Lohit
- Original Message
From: Sagar Naik [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, November 14, 2008 1:59:04 PM
Subject: Re: Recovery
/hdfs_design.html
Thanks,
Lohit
- Original Message
From: Erik Holstad [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, November 14, 2008 5:08:03 PM
Subject: Cleaning up files in HDFS?
Hi!
We would like to run a delete script that deletes all files older than
x days
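A minimal sketch of such a cleanup, assuming the plain FileSystem API (the directory and the 7-day cutoff are illustrative, and this is untested):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DeleteOldFiles {
      public static void main(String[] args) throws Exception {
        long cutoff = System.currentTimeMillis() - 7L * 24 * 60 * 60 * 1000;
        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus[] stats = fs.listStatus(new Path("/tmp/old-data"));
        if (stats != null) {
          for (FileStatus stat : stats) {
            // Delete anything whose modification time is older than the cutoff.
            if (stat.getModificationTime() < cutoff) {
              fs.delete(stat.getPath(), true);  // recursive
            }
          }
        }
      }
    }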
Hi Ankur,
We have had this kind of failure reported by others earlier on this list.
This might help you
http://markmail.org/message/u6l6lwus33oeivcd
Thanks,
Lohit
- Original Message
From: ANKUR GOEL [EMAIL PROTECTED]
To: [EMAIL PROTECTED]; core-user@hadoop.apache.org
Sent: Thursday
two; maybe having
the files compressed might help.
Lohit
- Original Message
From: Zhou, Yunqing [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Thursday, November 13, 2008 1:06:00 AM
Subject: How to exclude machines from a cluster
Here is a cluster with 13 machines. And due
to access DFS
multiple times. If you know that each 'D' is read by one 'R', then you are
not buying much with DistributedCache. Also keep in mind that if your reads
take a long time, your reducers might time out from failing to report
status.
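For reference, a minimal sketch of wiring up DistributedCache in the old mapred API (the file path is made up):

    import java.net.URI;
    import org.apache.hadoop.filecache.DistributedCache;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;

    public class CacheSetup {
      public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(CacheSetup.class);
        // Register the shared DFS file 'D' so each tasktracker pulls it once.
        DistributedCache.addCacheFile(new URI("/user/x/lookup.dat"), conf);
        // ... set mapper/reducer and submit as usual ...
      }
      // Then inside the task's configure(JobConf job):
      //   Path[] local = DistributedCache.getLocalCacheFiles(job);
      //   // read local[0] with java.io instead of going to DFS per record
    }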
Thanks,
Lohit
- Original Message
of
reducers.
Thanks,
Lohit
- Original Message
From: Elia Mazzawi [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, November 7, 2008 12:35:44 PM
Subject: reduce more than one way
Hello,
I'm writing Hadoop programs in Java.
I have 2 Hadoop map/reduce programs that have the same map
It's good to take a backup of the existing data storage (namenode and secondary
namenode).
Konstantine has explained the steps in this JIRA
https://issues.apache.org/jira/browse/HADOOP-2585?focusedCommentId=12558173#action_12558173
HTH,
Lohit
- Original Message
From: Stu Hood [EMAIL PROTECTED
Yes, take a look at
src/mapred/org/apache/hadoop/mapred/Task_Counter.properties
Those are all the counters available for a task.
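To read one of them from a client, a minimal sketch with the old mapred API (the job id is a placeholder; the group name is the Task$Counter group those properties belong to):

    import org.apache.hadoop.mapred.Counters;
    import org.apache.hadoop.mapred.JobClient;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.RunningJob;

    public class PrintCounter {
      public static void main(String[] args) throws Exception {
        JobClient client = new JobClient(new JobConf());
        RunningJob job = client.getJob(args[0]);  // e.g. job_200809241700_0001
        Counters counters = job.getCounters();
        // Per-task counters live in the Task$Counter group.
        long mapInputRecords = counters
            .getGroup("org.apache.hadoop.mapred.Task$Counter")
            .getCounter("MAP_INPUT_RECORDS");
        System.out.println("map input records: " + mapInputRecords);
      }
    }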
-Lohit
- Original Message
From: Sandy [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Wednesday, September 24, 2008 5:09:39 PM
Subject: counter
this is done.
http://svn.apache.org/repos/asf/hadoop/core/trunk/src/examples/org/apache/hadoop/examples/terasort/TeraSort.java
Thanks,
Lohit
- Original Message
From: Edward J. Yoon [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Saturday, September 20, 2008 10:53:40 AM
Subject
If your webpage is XML-tagged and you are looking into using streaming,
this might help:
http://hadoop.apache.org/core/docs/r0.18.0/streaming.html#How+do+I+parse+XML+documents+using+streaming%3F
-Lohit
- Original Message
From: Jim Twensky [EMAIL PROTECTED]
To: core-user
this task trivial.
Lohit
On Sep 4, 2008, at 6:51 AM, Andrey Pankov [EMAIL PROTECTED] wrote:
Hello,
Does anyone know if it is possible to compare data on HDFS while avoiding
copying data to the local box? I mean, if I'd like to find the difference
between local text files I can use the diff command. If files
dirs.
-Lohit
- Original Message
From: 叶双明 [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Thursday, September 4, 2008 12:01:48 AM
Subject: Re: can i run multiple datanode in one pc?
Thanks lohit.
I start the datanode with the command: bin/hadoop datanode -conf
conf/hadoop-site.xml
Yes, each datanode should point to a different config.
So, if you have conf/hadoop-site.xml, make another conf2/hadoop-site.xml with
different ports for the datanode-specific settings, and you should be able to
start multiple datanodes on the same node.
-Lohit
- Original Message
From: 叶双明 [EMAIL PROTECTED
Hi Deepak,
Can you explain which processes and which files they are trying to read? If you are
talking about map/reduce tasks reading files on DFS, then yes, parallel reads
are allowed. Multiple writers are not.
-Lohit
- Original Message
From: Deepak Diwakar [EMAIL PROTECTED]
To: core
to your cluster or
would like to rebalance your cluster, you could use the rebalancer utility
http://hadoop.apache.org/core/docs/current/hdfs_user_guide.html#Rebalancer
-Lohit
- Original Message
From: Mork0075 [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Wednesday, August 27, 2008
).
Thanks,
Lohit
- Original Message
From: Piotr Kozikowski [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Monday, August 11, 2008 12:20:05 PM
Subject: Re: java.io.IOException: Could not get block locations. Aborting...
Hi again,
The 'Could not get block locations' exception was gone
not want
your JobTracker or NameNode to be on that system.
PS: Could you point to the wiki you are referring to? We might need to make
some corrections.
Thanks,
Lohit
- Original Message
From: Manish Shah [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Tuesday, August 12, 2008 11
, but other than that
you should be able to do most of the stuff. A lot of applications use streaming.
-Lohit
- Original Message
From: John DeTreville [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Tuesday, August 12, 2008 3:33:57 PM
Subject: RE: Difference between Hadoop
Hi John,
This file should be a good starting point for you.
src/hdfs/org/apache/hadoop/hdfs/server/namenode/ReplicationTargetChooser.java
There has been discussions about a pluggable block place policy
https://issues.apache.org/jira/browse/HADOOP-3799
Thanks,
Lohit
- Original Message
directory.
Specify the file name and the period at which to monitor the metrics.
Thanks,
Lohit
- Original Message
From: Ivan Georgiev [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, August 8, 2008 4:39:36 AM
Subject: what is the correct usage of hdfs metrics
Hi,
I have been unable
down nodes for malicious programs; in such cases you do not
want your JobTracker or NameNode to be on those nodes.
Also, running multiple JVMs might slow down the node and your process. I would
recommend you run at least the NameNode on a dedicated node.
Thanks,
Lohit
- Original Message
not
splittable, meaning each map will consume the whole .gz file.
Thanks,
Lohit
- Original Message
From: Michael K. Tung [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, August 8, 2008 1:09:01 PM
Subject: How to enable compression of blockfiles?
Hello, I have a simple question
(https://issues.apache.org/jira/secure/CreateIssue!default.jspa) as an improvement
request and continue the discussion there?
-Lohit
- Original Message
From: Kevin [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Wednesday, August 6, 2008 10:37:44 AM
Subject: Re: DFS. How to read from
I haven't tried it, but see if you can create a DFSClient object and use its
open() and read() calls to get the job done. Basically, you would have to force
currentNode to be your node of interest in there.
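For a plain read through the public API, a minimal sketch (forcing a particular datanode would mean patching the DFSClient internals, which this does not do):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DfsCat {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataInputStream in = fs.open(new Path(args[0]));
        byte[] buf = new byte[4096];
        int n;
        // Which datanode serves each block is picked inside the client;
        // the public API does not let you override that choice.
        while ((n = in.read(buf)) > 0) {
          System.out.write(buf, 0, n);
        }
        in.close();
      }
    }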
Just curious, what is the use case for your request?
Thanks,
Lohit
- Original
edits, start the
namenode and run 'hadoop fsck /' to see if you have any corrupt files and
fix/get rid of them.
PS: Take a backup of dfs.name.dir before updating and playing around with it.
Thanks,
Lohit
- Original Message
From: steph [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
The wiki and the documentation should help. Otherwise, please open a JIRA asking for better documentation; that will help everyone :)
- Original Message
From: Daniel Yu [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Tuesday, July 29, 2008 9:22:00 AM
Subject: Re: How to control the map and reduce step sequentially
I am currently studying abroad, and my graduation project happens to use Hadoop and HBase. Having a Chinese community is a pretty nice thing.
It would be really helpful for many if you could create a wiki page about this. Those
ideas could be used while implementing HA.
Thanks,
Lohit
- Original Message
From: paul [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Tuesday, July 29, 2008 11:56:44 AM
Subject: Re: Multiple master
://wiki.apache.org/hadoop/FAQ#10
-Lohit
the datanode, and once
decommissioned just kill the DataNode process. This is described here:
http://wiki.apache.org/hadoop/FAQ#17
Thanks,
Lohit
- Original Message
From: Kevin [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, July 11, 2008 3:43:41 PM
Subject: How to add/remove
tasktracker as well.
Thanks,
Lohit
- Original Message
From: Keliang Zhao [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, July 11, 2008 4:31:05 PM
Subject: Re: How to add/remove slave nodes on run time
May I ask what the right command is to start a datanode on a slave?
I
this test src/test/org/apache/hadoop/mapred/lib/TestLineInputFormat.java
HTH,
Lohit
- Original Message
From: Sandy [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Thursday, July 10, 2008 2:47:21 PM
Subject: Is Hadoop Really the right framework for me?
Hello,
I have been posting
://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.18/src/mapred/org/apache/hadoop/mapred/lib/NLineInputFormat.java
copy it to your .mapred/lib directory, rebuild everything, and try it out. I
assume it should work, but I haven't tried it out yet.
Thanks,
Lohit
- Original Message
From
I remember dhruba telling me about this once.
Yes, take a backup of the whole current directory.
As you have seen, remove the last line from edits and try to start the
NameNode.
If it starts, then run fsck to find out which file had the problem.
Thanks,
Lohit
- Original Message
From
basically merges all small files into one file. In Hadoop 0.18 we
have archives, and once HADOOP-1700 is done, one could open the file and append
to it.
Thanks,
Lohit
- Original Message
From: Goel, Ankur [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, June 27, 2008 2:27:57 AM
If the NameNode is down, the secondary namenode does not serve requests; it is used to
update the fsimage.
(http://hadoop.apache.org/core/docs/r0.17.0/hdfs_user_guide.html#Secondary+Namenode)
Thanks,
Lohit
- Original Message
From: Miles Osborne [EMAIL PROTECTED]
To: core-user
/jambajuice);
/code
Thanks,
Lohit
- Original Message
From: Steve Loughran [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Wednesday, June 25, 2008 9:15:55 AM
Subject: Re: Global Variables via DFS
javaxtreme wrote:
Hello all,
I am having a bit of a problem with a seemingly simple
ant -Dcompile.c++=yes compile-c++-examples
I picked it up from build.xml
Thanks,
Lohit
- Original Message
From: Sandy [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Wednesday, June 25, 2008 10:44:20 AM
Subject: Compiling Word Count in C++ : Hadoop Pipes
Hi,
I am currently
++.
Check info about streaming here
http://hadoop.apache.org/core/docs/r0.17.0/streaming.html
And information about parsing XML files in streaming here:
http://hadoop.apache.org/core/docs/r0.17.0/streaming.html#How+do+I+parse+XML+documents+using+streaming%3F
Thanks,
Lohit
- Original
Yes, there is a timeout defined by mapred.task.timeout;
the default was 600 seconds. Here 'silent' means the task (either map or reduce)
has not reported any status using the Reporter you get with the map/reduce function.
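A minimal sketch of keeping a long-running map from going silent (old mapred API; the mapper is illustrative):

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;

    public class SlowMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {
      public void map(LongWritable key, Text value,
                      OutputCollector<Text, Text> out, Reporter reporter)
          throws IOException {
        // ... long per-record computation ...
        // Tell the framework we are alive so the task is not killed
        // after mapred.task.timeout of silence.
        reporter.progress();
      }
    }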
Thanks,
Lohit
- Original Message
From: Edward J. Yoon [EMAIL PROTECTED
seeing very long delays?
Thanks,
Lohit
In MetricsIntValue, incrMetrics() was being called in pushMetrics() instead of
setMetrics(). This caused the values to be incremented periodically.
Thanks,
Lohit
- Original Message
From: Ion Badita [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Saturday, May 31, 2008 4
Hi Ion,
Which version of Hadoop are you using? The problem you reported, safeModeTime
and fsImageLoadTime continually growing, was fixed in 0.18 (or trunk).
Thanks,
Lohit
- Original Message
From: Ion Badita [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, May 30, 2008 8:10
You could also find some info about companies/projects using Hadoop at the
PoweredBy page:
http://wiki.apache.org/hadoop/PoweredBy
Thanks,
Lohit
- Original Message
From: Ted Dunning [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, May 16, 2008 10:02:25 AM
Subject: Re: Making
suggests that the namenode state
has been updated, meaning blocks which were missing earlier might be reported
now. Check with full options to see which blocks from which files are missing.
Thanks,
Lohit
- Original Message
From: C G [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Sunday
would have been added to the new datanode.
You can set the replication factor of a file using the 'hadoop dfs -setrep' command.
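The same thing through the FileSystem API, as a minimal sketch:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetRep {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Same effect as 'hadoop dfs -setrep 3 /path/to/file'.
        fs.setReplication(new Path("/path/to/file"), (short) 3);
      }
    }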
Thanks,
Lohit
- Original Message
From: Otis Gospodnetic [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Friday, May 9, 2008 7:16:42 AM
Subject: Re: Corrupt HDFS
it would ask for confirmation.
Thanks,
Lohit
- Original Message
From: Otis Gospodnetic [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Thursday, May 8, 2008 9:00:34 PM
Subject: Re: Corrupt HDFS and salvaging data
Hi,
Update:
It seems fsck reports HDFS is corrupt when
If a method is deprecated in version 0.14, it could be removed in version 0.15
at the earliest; it might be removed anytime starting with 0.15.
- Original Message
From: Karl Wettin [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Thursday, April 24, 2008 4:07:48 AM
Subject: hadoop and
Yes, FsShell.java implements most of the shell commands. You could also use the
FileSystem API:
http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/fs/FileSystem.html
Simple example http://wiki.apache.org/hadoop/HadoopDfsReadWriteExample
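In the spirit of that wiki example, a minimal write-then-read sketch (the path is illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReadWrite {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path p = new Path("/tmp/hello.txt");
        FSDataOutputStream out = fs.create(p);  // overwrites if it exists
        out.writeUTF("hello hdfs");
        out.close();
        FSDataInputStream in = fs.open(p);
        System.out.println(in.readUTF());
        in.close();
      }
    }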
Thanks,
Lohit
- Original Message
From
Can you check the datanode and namenode logs and see if all are up and running?
I am assuming you are running this on a single host, hence a replication factor of 1.
Thanks,
Lohit
- Original Message
From: John Menzer [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent: Saturday, April 12, 2008
You could use hadoop daemonlog to get and set log levels.
To set FSNameSystem to DEBUG, you would do something like this:
hadoop daemonlog -setlevel namenode:50070 org.apache.hadoop.dfs.FSNameSystem
DEBUG
Thanks,
Lohit
- Original Message
From: Cagdas Gerede [EMAIL PROTECTED]
To: core
pointed out, use a
node which is not a datanode in your cluster. In this case, the first copy
would be placed on a random node in the cluster because your client is no
longer a datanode.
Thanks,
Lohit
- Original Message
From: Ted Dunning [EMAIL PROTECTED]
To: core-user
in your
case you have only 2 copies, which increases the probability of losing
replicas. There has been discussion about having different policies for different
files, but this hasn't been implemented yet.
On 24/03/2008, lohit [EMAIL PROTECTED] wrote:
If the client you use to copy is one of the datanodes
you could use the -cacheFile or -file option for this. Check the streaming doc
for examples.
On Mar 6, 2008, at 2:32 PM, Theodore Van Rooy [EMAIL PROTECTED]
wrote:
I would like to convert a Perl script that currently uses argument
variables
to run with Hadoop Streaming.
Normally I would
paths before submitting
your job. More info here:
http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/mapred/JobConf.html#addInputPath(org.apache.hadoop.fs.Path)
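A minimal sketch following that JobConf#addInputPath doc (the per-day log paths are made up):

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapred.JobConf;

    public class MultiInput {
      public static void main(String[] args) {
        JobConf conf = new JobConf(MultiInput.class);
        // Add every input path before submitting the job.
        conf.addInputPath(new Path("/logs/2008/03/01"));
        conf.addInputPath(new Path("/logs/2008/03/02"));
        // ... set mapper/reducer, then JobClient.runJob(conf) ...
      }
    }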
Thanks,
Lohit
- Original Message
From: Tarandeep Singh [EMAIL PROTECTED]
To: core-user@hadoop.apache.org
Sent
You should be able to see all your earlier files once the namenode is up again.
Check this detailed document which describes how this is achieved.
http://hadoop.apache.org/core/docs/r0.15.3/hdfs_design.html
Thanks,
Lohit
- Original Message
From: Ben Kucinich [EMAIL PROTECTED]
To: core