Distcp can succeed with a snapshot, but the length of open files can be zero (see HDFS-11402).
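A minimal sketch of the snapshot approach (the source directory /src, snapshot name s1, and cluster names ip1/nameservice1 are hypothetical; assumes snapshots have been enabled on /src with `hdfs dfsadmin -allowSnapshot /src`):

```shell
# Hypothetical source dir and snapshot name.
SRC=/src
SNAP=s1

# Freeze the source tree; open files no longer change under the snapshot.
hdfs dfs -createSnapshot "$SRC" "$SNAP"

# Copy the immutable snapshot path instead of the live directory.
SNAP_PATH="$SRC/.snapshot/$SNAP"
hadoop distcp -pb "hdfs://ip1$SNAP_PATH" hdfs://nameservice1/dst
```

Copying from the read-only `.snapshot` path avoids the length mismatch, but per HDFS-11402 a file that was open at snapshot time may have been captured with a zero (or stale) length.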
AFAIK, if you know the open files you can call recoverLease, or wait for the hard limit (let the Namenode trigger lease recovery):

i) Get the list of open files, e.g.

   hdfs fsck / -openforwrite -files -blocks -locations | grep -i "OPENFORWRITE:"

ii) Call recoverLease on each open file, e.g.

   hdfs debug recoverLease -path <path>

   Note: for a service like HBase, where the RegionServer keeps WAL files open, it is better to stop the HBase service, which closes the files automatically.

iii) Then go for distcp.

By the way, HDFS-10480 adds a way to list open files.

--Brahma Reddy Battula

-----Original Message-----
From: Ulul [mailto:[email protected]]
Sent: 02 January 2017 23:05
To: [email protected]
Subject: Re: Mismatch in length of source:

Hi

I can't remember the exact error message, but distcp consistently fails when trying to copy open files. Is that your case? The workaround is to snapshot prior to copying.

Ulul

On 31/12/2016 19:25, Aditya exalter wrote:
> Hi All,
>           A very happy new year to ALL.
>
>           I am facing an issue while executing distcp between two different
> clusters:
>
> Caused by: java.io.IOException: Mismatch in length of
> source:hdfs://ip1/xxxxxxxxxx/xxxxx and
> target:hdfs://nameservice1/xxxxxx/.distcp.tmp.attempt_1483200922993_0056_m_000011_2
>
> I tried using -pb and -skipcrccheck:
>
> hadoop distcp -pb -skipcrccheck -update hdfs://ip1/xxxxxxxxxx/xxxxx hdfs:///xxxxxxxxxxxx/
>
> hadoop distcp -pb hdfs://ip1/xxxxxxxxxx/xxxxx hdfs:///xxxxxxxxxxxx/
>
> hadoop distcp -skipcrccheck -update hdfs://ip1/xxxxxxxxxx/xxxxx hdfs:///xxxxxxxxxxxx/
>
> but nothing seems to be working. Any solutions, please?
>
> Regards,
> Aditya.

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
---------------------------------------------------------------------
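Steps i)-iii) above can be sketched as one short script (the source path /src, retry count, and cluster names ip1/nameservice1 are hypothetical; assumes the hdfs and hadoop CLIs are on PATH and authentication is already configured):

```shell
#!/bin/sh
# Sketch of steps i)-iii); /src, ip1, and nameservice1 are placeholders.

# i) fsck tags still-open files with OPENFORWRITE; keep the leading path field.
hdfs fsck /src -files -blocks -locations -openforwrite \
  | grep -i "OPENFORWRITE" | awk '{print $1}' > open_files.txt

# ii) Ask the Namenode to recover the lease on each open file.
while read -r path; do
  hdfs debug recoverLease -path "$path" -retries 3
done < open_files.txt

# iii) Once nothing is left open, run the copy.
hadoop distcp -pb -update hdfs://ip1/src hdfs://nameservice1/dst
```

Note the grep/awk filter assumes the open file's path is the first whitespace-separated field of the fsck line, which is the usual fsck output shape; adjust if your fsck output differs.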
