Yes, it should print something along the lines of:
The reported blocks 11 has reached the threshold 0.9990 of total
blocks 11. Safe mode will be turned off automatically in 8 seconds.
-Joey
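[Editorial note: the log line Joey quotes is the NameNode's safe-mode exit check. A minimal sketch of the arithmetic, assuming the default dfs.safemode.threshold.pct of 0.999 (the function name is illustrative, not Hadoop's API):]

```python
# Safe-mode exit check as the NameNode performs it (sketch).
# Assumes the default dfs.safemode.threshold.pct of 0.999.
THRESHOLD_PCT = 0.999

def safe_mode_can_exit(reported_blocks, total_blocks, threshold=THRESHOLD_PCT):
    """True once enough blocks have been reported by the datanodes."""
    if total_blocks == 0:
        return True
    return reported_blocks >= threshold * total_blocks

# With 11 of 11 blocks reported the threshold is met, so safe mode
# turns off automatically after the configured extension period.
print(safe_mode_can_exit(11, 11))  # True
```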
On Fri, Jul 29, 2011 at 12:26 AM, Rahul Das wrote:
No, there was no error, only the following things happened.
2011-07-21 14:14:30,039 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit:
ugi=hadoop,hadoop ip=/xx.xx.xx.xx cmd=create
src=/user/hdfs/files/d954x328-85x8-4dfe-b73c-34a7a2c1xb0f
dst=null perm=hadoop:supergroup:rw-r--r--
Nothing from around 16:30?
-Joey
On Jul 28, 2011, at 5:06, Rahul Das wrote:
Hi Joey,
The log is too big to attach to the mail. What I found is that there is no error
during this time.
Only a few warnings show up, like:
2011-07-21 14:13:47,814 WARN
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
PendingReplicationMonitor timed out block blk_-6058282241824946206_13375223
..
The long startup time after the restart looks like it was caused by the
SecondaryNameNode not having been able to roll the edits log for some time. Can
you post your Namenode log from around the same time as this
SecondaryNameNode log (2011-07-21 16:00-16:30)?
-Joey
On Fri, Jul 22, 2011 at 8:29
Yes, I have a SecondaryNamenode running. Here is the log for the
SecondaryNamenode:
2011-07-21 16:02:47,908 INFO org.apache.hadoop.hdfs.server.common.Storage:
Edits file /home/hadoop/tmp/dfs/namesecondary/current/edits of size 12751835
edits # 138217 loaded in 1581 seconds.
2011-07-21 16:03:21,925 INF
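[Editorial note: the figures in the log line above give a rough sense of why an unrolled edits log slows things down. A back-of-the-envelope sketch, using only the numbers quoted:]

```python
# Figures taken from the SecondaryNamenode log quoted above.
edits_bytes = 12_751_835   # size of the edits file
edits_count = 138_217      # number of edits loaded
load_seconds = 1581        # time to load them

avg_bytes_per_edit = edits_bytes / edits_count       # ~92 bytes per edit
avg_ms_per_edit = load_seconds * 1000 / edits_count  # ~11 ms per edit

# Loading ~138k edits at ~11 ms each is over 26 minutes of replay,
# which matches the slow checkpoint/startup being discussed.
print(f"{avg_bytes_per_edit:.0f} bytes/edit, {avg_ms_per_edit:.1f} ms/edit")
```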
Do you have an instance of the SecondaryNamenode in your cluster?
-Joey
On Fri, Jul 22, 2011 at 3:15 AM, Rahul Das wrote:
Hi,
I am running a Hadoop cluster with 20 Data nodes. Yesterday I found that the
Namenode was not responding (no reads or writes to HDFS were happening). It got
stuck for a few hours, then I shut down the Namenode and found the following
error in the Namenode log.
2011-07-21 16:15:31,500 WARN org.apach