Re: HDFS upgrade skip versions?

2020-12-15 Thread Wei-Chiu Chuang
Probably one of the protobuf incompatibilities. Unfortunately we don't have an open-source tool to detect protobuf incompatibility. A few related issues: 1. HDFS-15700 2. HDFS-14726

HDFS upgrade skip versions?

2020-12-14 Thread Chad William Seys
Hi all, Is it required or highly recommended that one not skip between HDFS (Hadoop) versions? I tried skipping from 2.6 to 2.10 and it didn't work so well. :/ Actually, I tested this with a tiny cluster and it worked, but on the production cluster the datanodes did not report blocks to
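When datanodes stop reporting blocks after a version jump like this, the usual first checks are the admin report and the NameNode's view of missing blocks. A minimal sketch, assuming a Hadoop 2.x client on the PATH and a running cluster (this is a generic diagnostic, not the poster's actual commands):

```shell
# Sketch: verify whether datanodes have reported in after an upgrade.
# Requires a live HDFS cluster and a configured 2.x client.

# Live/dead datanode counts and per-node block totals:
hdfs dfsadmin -report | head -n 20

# Overall filesystem health, including missing/under-replicated blocks:
hdfs fsck / | tail -n 20
```

If `-report` shows the datanodes as live but with zero blocks, the block reports themselves are failing, which points at an RPC/protobuf mismatch rather than a dead process.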

Re: HDFS upgrade problem of fsImage

2013-11-22 Thread Joshi, Rekha
user@hadoop.apache.org Cc: hdfs-...@hadoop.apache.org Subject: Re: HDFS upgrade problem of fsImage I insist hot upgrade on the test

Re: HDFS upgrade problem of fsImage

2013-11-22 Thread Azuryy Yu
Rekha From: Azuryy Yu azury...@gmail.com Reply-To: user@hadoop.apache.org Date: Thursday 21 November 2013 5:19 PM To: user@hadoop.apache.org Cc: hdfs-...@hadoop.apache.org Subject: Re: HDFS upgrade problem of fsImage

Re: HDFS upgrade problem of fsImage

2013-11-21 Thread Joshi, Rekha
hdfs-...@hadoop.apache.org, user@hadoop.apache.org Subject: HDFS upgrade problem of fsImage Hi Dear, I have a small test

Re: HDFS upgrade problem of fsImage

2013-11-21 Thread Azuryy Yu
#Upgrading_from_older_release_to_0.23_and_configuring_federation From: Azuryy Yu azury...@gmail.com Reply-To: user@hadoop.apache.org Date: Thursday 21 November 2013 9:48 AM To: hdfs-...@hadoop.apache.org, user@hadoop.apache.org Subject: HDFS upgrade problem

Re: HDFS upgrade problem of fsImage

2013-11-21 Thread Azuryy Yu
user@hadoop.apache.org Date: Thursday 21 November 2013 9:48 AM To: hdfs-...@hadoop.apache.org, user@hadoop.apache.org Subject: HDFS upgrade problem of fsImage Hi Dear, I have a small test cluster with hadoop-2.0x, and HA

HDFS upgrade problem of fsImage

2013-11-20 Thread Azuryy Yu
Hi Dear, I have a small test cluster with hadoop-2.0x, with HA configured, but I want to upgrade to hadoop-2.2. I don't want to stop the cluster during the upgrade, so my steps are: 1) on the standby NN: hadoop-daemon.sh stop namenode 2) remove the HA configuration from the conf 3) hadoop-daemon.sh start
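The steps described above can be sketched as shell commands. This is only a reconstruction of the poster's procedure, not an endorsed rolling-upgrade recipe; `$HADOOP_HOME` and the restart arguments after step 3's truncation are assumptions:

```shell
# Sketch of the poster's no-downtime upgrade attempt (hadoop-2.0.x -> 2.2).
# Assumes $HADOOP_HOME points at the install and commands run on the
# standby NameNode host. Step 3's arguments are an assumption: the
# original message is cut off after "hadoop-daemon.sh start".

# 1) Stop the NameNode on the standby host.
"$HADOOP_HOME/sbin/hadoop-daemon.sh" stop namenode

# 2) Remove the HA settings (dfs.nameservices, dfs.ha.namenodes.*, ...)
#    from hdfs-site.xml before restarting with the new binaries.

# 3) Restart the NameNode; a layout-version change would normally
#    require the -upgrade flag here.
"$HADOOP_HOME/sbin/hadoop-daemon.sh" start namenode
```

Note that mixing an upgraded standby with a non-upgraded active is exactly where fsImage-compatibility problems like the one in this thread tend to surface.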

New data on unfinalized hdfs upgrade

2013-11-15 Thread krispyjala
-on-unfinalized-hdfs-upgrade-tp4029496.html Sent from the Users mailing list archive at Nabble.com.

Re: New data on unfinalized hdfs upgrade

2013-11-15 Thread Harsh J
back to 1.0.4 safely? -- Harsh J

HDFS upgrade

2012-10-17 Thread Amit Sela
Hi all, I want to upgrade a 1TB cluster from Hadoop 0.20.3 to Hadoop 1.0.3. I am interested to know how long the HDFS upgrade takes and, in general, how long it takes from deploying new versions until the cluster is back to running heavy MapReduce. I'd also appreciate it if someone could

Re: files are inaccessible after HDFS upgrade from 0.18.1 to 0.19.0

2009-01-27 Thread Bill Au
Did you start your namenode with the -upgrade option after upgrading from 0.18.1 to 0.19.0? Bill On Mon, Jan 26, 2009 at 8:18 PM, Yuanyuan Tian yt...@us.ibm.com wrote: Hi, I just upgraded Hadoop from 0.18.1 to 0.19.0 following the instructions on http://wiki.apache.org/hadoop/Hadoop_Upgrade.
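A minimal sketch of what starting HDFS in upgrade mode looked like in that release line, per the Hadoop_Upgrade wiki procedure the poster followed (the install path is hypothetical; the flags are the classic 0.18/0.19-era ones):

```shell
# Sketch: restarting HDFS with -upgrade after swapping in the new release.
# /path/to/hadoop-0.19.0 is a hypothetical install location.
cd /path/to/hadoop-0.19.0

# Start DFS in upgrade mode so the NameNode converts the old fsImage
# layout instead of refusing to start:
bin/start-dfs.sh -upgrade

# Watch progress, and only finalize once the data is verified readable
# (finalizing discards the pre-upgrade rollback snapshot):
bin/hadoop dfsadmin -upgradeProgress status
bin/hadoop dfsadmin -finalizeUpgrade
```

Skipping `-upgrade` on first start after a layout-version change is the most common cause of the "metadata looks fine but reads fail" symptom described in this thread.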

Re: files are inaccessible after HDFS upgrade from 0.18.1 to 0.19.0

2009-01-27 Thread Brian Bockelman
Hey YY, At a more basic level -- have you run fsck on that file? What were the results? Brian On Jan 27, 2009, at 10:54 AM, Bill Au wrote: Did you start your namenode with the -upgrade after upgrading from 0.18.1 to 0.19.0? Bill On Mon, Jan 26, 2009 at 8:18 PM, Yuanyuan Tian
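For reference, a block-level fsck on a single file in that release line looks like the following (the file path is hypothetical; the flags existed in 0.18/0.19):

```shell
# Sketch: check one file's block health after an upgrade.
# /user/ytian/data.txt is a hypothetical path standing in for the
# file the MapReduce job failed to read.
bin/hadoop fsck /user/ytian/data.txt -files -blocks -locations
```

A whole-filesystem `bin/hadoop fsck /` can report HEALTHY while an individual file still has blocks whose on-disk replicas the datanodes no longer serve, which is why Brian asks for the per-file result.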

Re: files are inaccessible after HDFS upgrade from 0.18.1 to 0.19.0

2009-01-27 Thread Yuanyuan Tian
Subject: Re: files are inaccessible after HDFS upgrade from 0.18.1 to 0.19.0 Please respond to core-u

Re: files are inaccessible after HDFS upgrade from 0.18.1 to 0.19.0

2009-01-27 Thread Yuanyuan Tian
Subject: Re: files are inaccessible after HDFS upgrade from 0.18.1 to 0.19.0 Please respond to core-u...@hadoop

files are inaccessible after HDFS upgrade from 0.18.1 to 0.19.0

2009-01-26 Thread Yuanyuan Tian
Hi, I just upgraded Hadoop from 0.18.1 to 0.19.0 following the instructions on http://wiki.apache.org/hadoop/Hadoop_Upgrade. After the upgrade, I ran fsck and everything seemed fine. All the files can be listed in HDFS and the sizes are also correct. But when a MapReduce job tries to read the files as