Re: unable to pwd in ZFS snapshot

2010-12-26 Thread Daniel Braniss
 On Sun, Dec 26, 2010 at 09:26:13AM +0200, Daniel Braniss wrote:
  this is still broken in 8.2-PRERELEASE, there seems to be a patch, but
  it's almost a year old.
  http://people.freebsd.org/~jh/patches/zfs-ctldir-vptocnp.diff
 
 Setting snapdir to visible should fix this right away:
 # zfs set snapdir=visible tank/foo
 
it did indeed!
any reason why this should not be the default behaviour?

thanks,
danny


___
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to freebsd-stable-unsubscr...@freebsd.org


Re: unable to pwd in ZFS snapshot

2010-12-26 Thread Ruben van Staveren

On 26 Dec 2010, at 10:05, Daniel Braniss wrote:

 On Sun, Dec 26, 2010 at 09:26:13AM +0200, Daniel Braniss wrote:
 this is still broken in 8.2-PRERELEASE, there seems to be a patch, but
 it's almost a year old.
 http://people.freebsd.org/~jh/patches/zfs-ctldir-vptocnp.diff
 
 Setting snapdir to visible should fix this right away:
 # zfs set snapdir=visible tank/foo
 
 it did indeed!
 any reason why this should not be the default behaviour?

Personally, I want to have the snapshots, but not see the directory otherwise, so 
that they don't get scooped up by rsync et al. inadvertently.

 
 thanks,
   danny

Cheers,
Ruben


Re: unable to pwd in ZFS snapshot

2010-12-26 Thread Daniel Braniss
  On 26 Dec 2010, at 10:05, Daniel Braniss wrote:
   On Sun, Dec 26, 2010 at 09:26:13AM +0200, Daniel Braniss wrote:
  this is still broken in 8.2-PRERELEASE, there seems to be a patch, but
  it's almost a year old.
http://people.freebsd.org/~jh/patches/zfs-ctldir-vptocnp.diff
  
  Setting snapdir to visible should fix this right away:
  # zfs set snapdir=visible tank/foo
  
  it did indeed!
  any reason why this should not be the default behaviour?
 
 Personally, I want to have the snapshot, but not see the directory otherwise 
 so that
 it doesn't get scooped up by rsync et al inadvertently

I agree, so the point is that, as usual, the solution fixes one problem by
creating another one :-)

so basically, the bug is still there, or is it a feature?
i.e.:
ls /h/.zfs/snapshot/20101225/   works
cd /h/.zfs/snapshot/20101225/   works
pwd
pwd: .: No such file or directory

btw, why use rsync if 'zfs send | zfs recv' works really nicely?

cheers,
danny




Re: unable to pwd in ZFS snapshot

2010-12-26 Thread Denny Lin
On Sun, Dec 26, 2010 at 01:32:03PM +0200, Daniel Braniss wrote:
   On 26 Dec 2010, at 10:05, Daniel Braniss wrote:
On Sun, Dec 26, 2010 at 09:26:13AM +0200, Daniel Braniss wrote:
   Setting snapdir to visible should fix this right away:
   # zfs set snapdir=visible tank/foo
   
   it did indeed!
   any reason why this should not be the default behaviour?
  
  Personally, I want to have the snapshot, but not see the directory 
  otherwise so that
  it doesn't get scooped up by rsync et al inadvertently
 
 btw, why use rsync if 'zfs send| zfs recv' work realy nice?

If I wanted to rsync the contents of /path/to/foo/ to another computer,
rsync would unintentionally pick up the contents of /path/to/foo/.zfs/,
so it's best to have .zfs hidden most of the time.
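
A way to keep a visible .zfs directory out of ad-hoc copies is an explicit exclude. Here is a minimal sketch of the idea using tar on a hypothetical tree (rsync takes an analogous --exclude=.zfs option); the paths are made up for illustration:

```shell
# Hypothetical stand-in for a dataset with a visible .zfs control directory
mkdir -p /tmp/ztar_demo/.zfs/snapshot/20101225 /tmp/ztar_demo/docs
touch /tmp/ztar_demo/docs/a.txt /tmp/ztar_demo/.zfs/snapshot/20101225/a.txt

# --exclude keeps the control directory out of the archive;
# rsync users would pass --exclude=.zfs the same way.
tar -cf /tmp/ztar_demo.tar --exclude='.zfs' -C /tmp ztar_demo

# List the archive: the .zfs tree is absent, docs/a.txt survives.
tar -tf /tmp/ztar_demo.tar
```

The same exclude pattern protects against the hundreds-of-snapshots case, since every snapshot lives under the one .zfs directory per dataset.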

Other commands such as cp (used recursively) would also have this problem.

-- 
Denny Lin


Re: unable to pwd in ZFS snapshot

2010-12-26 Thread David Magda

On Dec 26, 2010, at 06:32, Daniel Braniss wrote:


btw, why use rsync if 'zfs send| zfs recv' work realy nice?


You're assuming that both sides of the transmission are on ZFS, which  
is not always true.


It may be that the FreeBSD/ZFS system is the backup server for a  
network of Linux machines, and so is the destination of the rsync job.  
It may be that the destination is Linux or AIX or HP-UX.



Re: New ZFSv28 patchset for 8-STABLE

2010-12-26 Thread Jean-Yves Avenard
Hi there.

I used stable-8-zfsv28-20101223-nopython.patch.xz from
http://people.freebsd.org/~mm/patches/zfs/v28/

simply because it was the most recent at this location.

Is this the one to use?

Just asking because the file server I installed it on has stopped
responding this morning, and doing a remote power cycle didn't work.

So I've got to get to the office and see what went on :(
I suspect a kernel panic of some kind.

Jean-Yves


Re: unable to pwd in ZFS snapshot

2010-12-26 Thread Charles Sprickman

On Sun, 26 Dec 2010, Daniel Braniss wrote:


On Sun, Dec 26, 2010 at 09:26:13AM +0200, Daniel Braniss wrote:

this is still broken in 8.2-PRERELEASE, there seems to be a patch, but
it's almost a year old.
http://people.freebsd.org/~jh/patches/zfs-ctldir-vptocnp.diff


Setting snapdir to visible should fix this right away:
# zfs set snapdir=visible tank/foo


it did indeed!
any reason why this should not be the default behaviour?


Others mentioned rsync or cp (used recursively) might pick up these 
directories.  These are good reasons, especially if you've got a few 
hundred snapshots, which would not be uncommon when using ZFS on a host 
that's doing disk-based backups.


Other gotchas would be some of the periodic scripts - you don't want 
locate.updatedb traversing all that, or the setuid checks.  Also I know 
I'm prone to sometimes doing a brute-force find which can also dip into 
those hundreds of snapshot dirs.  In general, I think having the 
directories hidden is a good default.


Wouldn't be opposed to having the pwd issue fixed though...

Thanks,

Charles


thanks,
danny




Re: New ZFSv28 patchset for 8-STABLE: ARRRGG HELP !!

2010-12-26 Thread Jean-Yves Avenard
On 27 December 2010 09:55, Jean-Yves Avenard jyaven...@gmail.com wrote:
 Hi there.

 I used stable-8-zfsv28-20101223-nopython.patch.xz from
 http://people.freebsd.org/~mm/patches/zfs/v28/

I did the following:

# zpool status
  pool: pool
 state: ONLINE
 scan: none requested
config:

NAMESTATE READ WRITE CKSUM
poolONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
ada2ONLINE   0 0 0
ada3ONLINE   0 0 0
ada4ONLINE   0 0 0
ada5ONLINE   0 0 0
ada6ONLINE   0 0 0
ada7ONLINE   0 0 0
cache
  label/zcache  ONLINE   0 0 0

errors: No known data errors

so far so good

[r...@server4 /pool/home/jeanyves_avenard]# zpool add pool log /dev/label/zil
[r...@server4 /pool/home/jeanyves_avenard]# zpool status
  pool: pool
 state: ONLINE
 scan: none requested
config:

NAMESTATE READ WRITE CKSUM
poolONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
ada2ONLINE   0 0 0
ada3ONLINE   0 0 0
ada4ONLINE   0 0 0
ada5ONLINE   0 0 0
ada6ONLINE   0 0 0
ada7ONLINE   0 0 0
logs
  label/zil ONLINE   0 0 0
cache
  label/zcache  ONLINE   0 0 0

errors: No known data errors

so far so good:

# zpool remove pool logs label/zil
cannot remove logs: no such device in pool

^C

Great... now nothing responds...

Rebooting the box, I can boot in single user mode,
but doing zpool status gives me:

ZFS filesystem version 5
ZFS storage pool version 28

and it hangs there forever...

What should I do :( ?


Re: New ZFSv28 patchset for 8-STABLE: ARRRGG HELP !!

2010-12-26 Thread Jean-Yves Avenard
Rebooting in single-user mode.

zpool status pool
or zpool scrub pool

hangs just the same ... and there's no disk activity either ...

I'll download a live CD of OpenIndiana; hopefully it will show me what's wrong :(

Jean-Yves


Re: unable to pwd in ZFS snapshot

2010-12-26 Thread Garrett Wollman
In article alpine.osx.2.00.1012261912460.43...@hotlap.local,
sp...@bway.net writes:

Other gotchas would be some of the periodic scripts - you don't want 
locate.updatedb traversing all that, or the setuid checks.

locate.updatedb in 9-current doesn't do that, by default.  Arguably
you want the setuid checks to do it, so that you're aware of setuid
executables that are buried in old snapshots -- particularly if you
keep old snapshots of /usr around after a security update.

Also I know I'm prone to sometimes doing a brute-force find which
can also dip into those hundreds of snapshot dirs.  In general, I
think having the directories hidden is a good default.

I could see the logic in having find not descend into .zfs directories
by default (if done in a sufficiently general way), although then
you'd have to introduce a new "yes, really, look at everything!" flag
for cases when that's not desirable.
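
[Editor's note: until find grows such a default, -prune already gives the same effect per-invocation. A sketch on a hypothetical tree standing in for a dataset with a visible snapshot directory:]

```shell
# Hypothetical stand-in for a dataset with a visible .zfs control directory
mkdir -p /tmp/zfind_demo/.zfs/snapshot/20101225 /tmp/zfind_demo/data
touch /tmp/zfind_demo/data/file.txt /tmp/zfind_demo/.zfs/snapshot/20101225/file.txt

# -prune stops descent into any directory named .zfs;
# the -o branch prints every other regular file.
find /tmp/zfind_demo -name .zfs -prune -o -type f -print
# prints only /tmp/zfind_demo/data/file.txt
```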

-GAWollman


Re: New ZFSv28 patchset for 8-STABLE: Kernel Panic

2010-12-26 Thread Jean-Yves Avenard
tried to force a zpool import

got a kernel panic:
panic: solaris assert: weight >= space && weight <= 2 * space, file:
/usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c,
line: 793

cpuid = 5
KDB: stack backtrace:
#0: 0xff805f64be at kdb_backtrace
#1: panic+0x187
#2: metaslab_weight+0xe1
#3: metaslab_sync_done+0x21e
#4: vdev_sync_done
#5: spa_sync+0x6a2
#6: txg_sync_thread+0x147
#7: fork_exit+0x118
#8: fork_trampoline+0xe

uptime: 2m25s

sorry for not writing down all the RAM addresses in the backtrace ...

Starting to smell very poorly :(


Re: New ZFSv28 patchset for 8-STABLE: Kernel Panic

2010-12-26 Thread Jean-Yves Avenard
Responding to myself again :P

On 27 December 2010 13:28, Jean-Yves Avenard jyaven...@gmail.com wrote:
 tried to force a zpool import

 got a kernel panic:
 panic: solaris assert: weight >= space && weight <= 2 * space, file:
 /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c,
 line: 793

 cpuid = 5
 KDB: stack backtrace
 #0: 0xff805f64be at kdb_backtrace
 #1 ..  panic+0x187
 #2 .. metaslab_weight+0xe1
 #3: metaslab_sync_done+0x21e
 #4: vdev_sync_done
 #5: spa_sync+0x6a2
 #6 txg_sync_thread+0x147
 #7: fork_exit+0x118
 #8: fork_trampoline+0xe

 uptime 2m25s..


The command used to import in FreeBSD was:
zpool import -fF -R / pool
which told me that the zil was missing, and to use -m

I booted OpenIndiana (which is the only distribution I could find with
a live CD supporting zpool v28).

Doing a zpool import actually showed that the pool had been
successfully repaired by the command above.
It did think that the pool was in use (and it was, as I didn't do a
zpool export).

So I ran zpool import -f pool in OpenIndiana, and luckily, all my
files were there. Not sure if anything was lost...

In OpenIndiana, I then ran zpool export and rebooted into FreeBSD.

I ran zpool import there, and got the same original behaviour of the
zpool import hanging; I couldn't interrupt it at all, and was left
only with the option of rebooting.

Back in OpenIndiana, I tried to remove the log drive, but no luck.
I always end up with the message:
cannot remove log: no such device in pool

Googling, that error seems to be a common issue when trying to remove
a ZIL, but usually the log drive is actually removed even though that
message is displayed.
Not in my case...

So I tried something brave.
In OpenIndiana:
zpool export pool

I rebooted the PC, disconnected the SSD drive I had used, and rebooted
into OpenIndiana;
ran zpool import -fF -R / pool (which complained that the log device was
missing), and again zpool import -fF -m -R / pool

zpool status showed the log device as unavailable this time.

I ran zpool remove pool log hex_number_showing_in_place

It showed the error cannot remove log: no such device in pool,
but zpool status showed that everything was all right.

zpool export pool, then reboot into FreeBSD.

zpool import this time didn't hang and successfully imported my pool.
All data seems to be there.


Summary: v28 is still buggy when it comes to removing the log
device... And once something is screwed, the zpool utility becomes
hopeless, as it hangs.

So better to have an OpenIndiana live CD around to repair things :(

But I won't be trying to remove the log device again for a long time!
At least the data can be recovered when it happens...

Could it be that this is related to the v28 patch I used
(http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101223-nopython.patch.xz),
and that I should have stuck to the standard one?

Jean-Yves
Breathing again!


Re: New ZFSv28 patchset for 8-STABLE: Kernel Panic

2010-12-26 Thread jhell
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 12/26/2010 23:17, Jean-Yves Avenard wrote:
 Responding to myself again :P
 
 On 27 December 2010 13:28, Jean-Yves Avenard jyaven...@gmail.com wrote:
 tried to force a zpool import

 got a kernel panic:
 panic: solaris assert: weight >= space && weight <= 2 * space, file:
 /usr/src/sys/modules/zfs/../../cddl/contrib/opensolaris/uts/common/fs/zfs/metaslab.c,
 line: 793

 cpuid = 5
 KDB: stack backtrace
 #0: 0xff805f64be at kdb_backtrace
 #1 ..  panic+0x187
 #2 .. metaslab_weight+0xe1
 #3: metaslab_sync_done+0x21e
 #4: vdev_sync_done
 #5: spa_sync+0x6a2
 #6 txg_sync_thread+0x147
 #7: fork_exit+0x118
 #8: fork_trampoline+0xe

 uptime 2m25s..

 
 Command used to import in FreeBSD was:
 zpool import -fF -R / pool
 which told me that zil was missing, and to use -m
 
 I booted openindiana (which is the only distribution I could ifnd with
 a live CD supporting zpool v28)
 
 Doing a zpool import actually made it show that the pool had
 successfully been repaired by the command above.
 It did think that the pool was in use (and it was, as I didn't do a
 zpool export).
 
 So I run zpool import -f pool in openindiana, and luckily, all my
 files were there. Not sure if anything was lost...
 
 in openindiana, I then ran zpool export and rebooted into FreeBSD.
 
 I ran zpool import there, and got the same original behaviour of a
 zpool import hanging, I can't sigbreak it nothing. Only left with the
 option of rebooting.
 
 Back into openindiana, tried to remove the log drive, but no luck.
 Always end up with the message:
 cannot remove log: no such device in pool
 
 Googling that error seems to be a common issue when trying to remove a
 ZIL but while that message is displayed, the log drive is actually
 removed.
 Not in my case..
 
 So I tried something brave:
 In Open Indiana
 zpool export pool
 
 rebooted the PC, disconnected the SSD drive I had use and rebooted
 into openindiana
 ran zpool import -fF -R / pool (complained that log device was
 missing) and again zpool import -fF -m -R / pool
 
 zfs status showed that logs device being unavailable this time.
 
 ran zpool remove pool log hex_number_showing_in_place
 
 It showed the error cannot remove log: no such device in pool
 but zpool status showed that everything was allright
 
 zpool export pool , then reboot into FreeBSD
 
 zpool import this time didn't hang and successfully imported my pool.
 All data seems to be there.
 
 
 Summary: v28 is still buggy when it comes to removing the log
 device... And once something is screwed, zpool utility becomes
 hopeless as it hangs.
 
 So better have a OpenIndiana live CD to repair things :(
 
 But I won't be trying to remove the log device for a long time ! at
 least the data can be recovered when it happens..
 
 Could it be that this is related to the v28 patch I used
 (http://people.freebsd.org/~mm/patches/zfs/v28/stable-8-zfsv28-20101223-nopython.patch.xz
 and should have stuck to the standard one).
 

Before anything else, can you (in FreeBSD):

1) Set vfs.zfs.recover=1 at the loader prompt (OK set vfs.zfs.recover=1)
2) Boot into single user mode without opensolaris.ko and zfs.ko loaded
3) ( mount -w / ) to make sure you can remove and also write a new
zpool.cache as needed
4) Remove /boot/zfs/zpool.cache
5) kldload both zfs and opensolaris, i.e. ( kldload zfs ) should do the trick
6) Verify that vfs.zfs.recover=1 is set, then ( zpool import pool )
7) Give it a little while; monitor activity using Ctrl+T

You should have your pool back in working condition after this. The
reason why oi_127 can't work with your pool is that it cannot see
FreeBSD's generic labels. The only way to work around this for oi_127
would be either to point it directly at the replacing device or to use
actual slices or partitions for your slogs and other such devices.

Use adaNsN or gpt or gptid labels for working with your pool if you
plan on using other OSes for recovery efforts.
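
For step 1, the same tunable can also be made persistent instead of typed at the loader prompt on every boot (a config sketch for loader.conf):

```
# /boot/loader.conf -- enable the ZFS recovery tunable at every boot
vfs.zfs.recover="1"
```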


Regards,

- -- 

 jhell,v
-BEGIN PGP SIGNATURE-

iQEcBAEBAgAGBQJNGB5QAAoJEJBXh4mJ2FR+rUAH/1HhzfnDI1jTICrA2Oiwyk12
BLXac0HoTY+NVUrdieMUWPh781oiB0eOuzjnOprev1D2uTqrmKvivnWdzuT/5Kfi
vWSSnIqWiNbtvA5ocgWs7IPtcaD5pZS06oToihvLlsEiRyYXTSh2XD7JOsLbQMNb
uKTfAvGI/XnNX0OY3RNI+OOa031GfpdHEWon8oi5aFBYdsDsv3Wn8Z45qCp8yfI+
WZlI+P+uunrmfgZdSzDbpAxeByhTB+8ntnB6QC4d0GRXKwqTVrFmIw5yuuqRAIf8
oCJYDhH6AUi+cxAGDExhLz2e75mEZNHAqB2nkxTaWbwL/rGjBnVidNm1aj7WnWw=
=FlmB
-END PGP SIGNATURE-


Re: New ZFSv28 patchset for 8-STABLE: Kernel Panic

2010-12-26 Thread Jean-Yves Avenard
Hi

On 27 December 2010 16:04, jhell jh...@dataix.net wrote:


 Before anything else can you: (in FreeBSD)

 1) Set vfs.zfs.recover=1 at the loader prompt (OK set vfs.zfs.recover=1)
 2) Boot into single user mode without opensolaris.ko and zfs.ko loaded
 3) ( mount -w / ) to make sure you can remove and also write new
 zpool.cache as needed.
 3) Remove /boot/zfs/zpool.cache
 4) kldload both zfs and opensolaris i.e. ( kldload zfs ) should do the trick
 5) verify that vfs.zfs.recover=1 is set then ( zpool import pool )
 6) Give it a little bit monitor activity using Ctrl+T to see activity.

 You should have your pool back to a working condition after this. The
 reason why oi_127 can't work with your pool is because it cannot see
 FreeBSD generic labels. The only way to work around this for oi_127
 would be to either point it directly at the replacing device or to use
 actual slices or partitions for your slogs and other such devices.

 Use adaNsN or gpt or gptid for working with your pool if you plan on
 using other OS's for recovery effects.


Hi..

Thank you for your response; I will keep it handy should this ever occur again.

Let me explain why I used labels..

It all started when I was trying to solve some serious performance
issues when running ZFS:
http://forums.freebsd.org/showthread.php?t=20476

One of the steps in trying to troubleshoot the latency problem was to
use AHCI; I had always thought that activating AHCI in the BIOS was
sufficient to get it going on FreeBSD, but it turned out that wasn't
the case and that I needed to load ahci.ko as well.

After doing so, my system wouldn't boot anymore, as it was trying to
mount /dev/ad0, which didn't exist anymore and was now named /dev/ada0.
So I put a label on the boot disk to ensure that I would never
encounter that problem ever again.

In the same mindset, I used labels for the cache and log devices I
later added to the pool...

I have to say, however, that ZFS had no issue using the labels until I
tried to remove one. I had rebooted several times without having any
problems, and zpool status never hung.

It all started to play up when I ran the command:
zpool remove pool log label/zil

zpool never came back from running that command (I let it run for
a good 30 minutes, during which I was fearing the worst, and once I
rebooted and nothing worked, suicide looked like an appealing
alternative).

It is very disappointing, however, that because the pool is in a
non-working state, none of the commands available to troubleshoot the
problem actually work (which I'm guessing is related to zpool looking
for a device name, a label, that it can never find).

I also can't explain why FreeBSD would kernel panic when it was
finally in a state of being able to do an import.

I have to say, unfortunately, that if I hadn't had OpenIndiana, I would
probably still be crying underneath my desk right now...

Thanks again for your email; I have no doubt that this would have
worked, but in my situation, I got your answer in just 2 hours, which
is better than any paid support could provide!

Jean-Yves
PS: saving my 5MB files over the network went from 40-55s with v15
to a constant 16s with v28... I can't test with the ZIL completely
disabled; it seems that vfs.zfs.zil_disable has been removed, and so
has vfs.zfs.write_limit_override.


FreeBSD 7.4/8.2-RC1 Available...

2010-12-26 Thread Ken Smith

The first Release Candidate for the FreeBSD 7.4/8.2 release cycle
is now available.  For 7.4-RC1 the amd64, i386, pc98, and sparc64
architectures are available, for 8.2-RC1 those architectures plus
ia64 and powerpc are available.  Files suitable for creating
installation media or doing FTP based installs through the network
are available on the FreeBSD mirror sites.  Checksums for the images
are at the bottom of this message.

For this Release Candidate no packages (except for the doc package
set for 8.2-RC1) have been provided in any of the images.  Packages
will be provided with the RC2 builds.

The target schedule for the releases is available here:

  http://www.freebsd.org/releases/8.2R/schedule.html
  http://www.freebsd.org/releases/7.4R/schedule.html

and the wiki pages tracking the current status of the releases
are:

  http://wiki.freebsd.org/Releng/8.2TODO
  http://wiki.freebsd.org/Releng/7.4TODO

If you find problems you can report them through the normal
Gnats based PR system or here on the mailing list.

If you are updating an already running machine the CVS branch
tag for 8.2-RC1 is RELENG_8_2, for 7.4-RC1 it is RELENG_7_4.
If you prefer SVN use releng/8.2 or releng/7.4.

The freebsd-update(8) utility supports binary upgrades of i386 and amd64
systems running earlier FreeBSD releases.  Systems running 8.0-RELEASE,
8.1-RELEASE, or 8.2-BETA1 can upgrade as follows:

# freebsd-update upgrade -r 8.2-RC1

During this process, FreeBSD Update may ask the user to help by merging
some configuration files or by confirming that the automatically
performed merging was done correctly.

# freebsd-update install

The system must be rebooted with the newly installed kernel before
continuing.

# shutdown -r now

After rebooting, freebsd-update needs to be run again to install the new
userland components, and the system needs to be rebooted again:

# freebsd-update install
# shutdown -r now

Users of earlier FreeBSD releases (FreeBSD 7.x) can also use
freebsd-update to upgrade to FreeBSD 8.2-RC1, but will be prompted to
rebuild all third-party applications (e.g., anything installed from the
ports tree) after the second invocation of freebsd-update install, in
order to handle differences in the system libraries between FreeBSD 7.x
and FreeBSD 8.x.  Substitute 7.4-RC1 for 8.2-RC1 in the above
instructions if you are targeting 7.4-RC1 instead.

Checksums:

MD5 (FreeBSD-7.4-RC1-amd64-bootonly.iso) = 66ba849bc5cabf70c35884997648d2ea
MD5 (FreeBSD-7.4-RC1-amd64-disc1.iso) = 837e6111be636e88ca6f0e65ad233f61
MD5 (FreeBSD-7.4-RC1-amd64-docs.iso) = a7213fee78282beb2be68bb99ce56281
MD5 (FreeBSD-7.4-RC1-amd64-dvd1.iso) = 6c1259c130348851d3b50f082244345d
MD5 (FreeBSD-7.4-RC1-amd64-livefs.iso) = fb8f64c4a8c5a099cd59f8a771b30a78

MD5 (FreeBSD-7.4-RC1-i386-bootonly.iso) = d5bda0643626284ef70346bf9323dc62
MD5 (FreeBSD-7.4-RC1-i386-disc1.iso) = 39e97306df9edc93d745e31ef46de2a8
MD5 (FreeBSD-7.4-RC1-i386-docs.iso) = 14c61e0928d97a8779df16eac1f4f3f1
MD5 (FreeBSD-7.4-RC1-i386-dvd1.iso) = dae529bcf6706bfa0921b60b8486e3f3
MD5 (FreeBSD-7.4-RC1-i386-livefs.iso) = 619fd06612590d7112019d81b0888fe9

MD5 (FreeBSD-7.4-RC1-pc98-bootonly.iso) = eebcc73f5ac917db887c3ff275eacd08
MD5 (FreeBSD-7.4-RC1-pc98-disc1.iso) = 33e2c33a2113579cb841e566f19eb449
MD5 (FreeBSD-7.4-RC1-pc98-livefs.iso) = 4983bc250a03f176bd95daee400f99c9

MD5 (FreeBSD-7.4-RC1-sparc64-bootonly.iso) = 06f9bd78fbe4340a5fc9c0024fccec18
MD5 (FreeBSD-7.4-RC1-sparc64-disc1.iso) = 8cf5cf22e6e824565b0830177d02570f
MD5 (FreeBSD-7.4-RC1-sparc64-docs.iso) = a3f7a75f4387736313fd7037f7c43332

MD5 (FreeBSD-8.2-RC1-amd64-bootonly.iso) = d93f48d9944b7dd2d01eb9a50412fcf2
MD5 (FreeBSD-8.2-RC1-amd64-disc1.iso) = f87e00d05be66d0e8ac996ee11214520
MD5 (FreeBSD-8.2-RC1-amd64-dvd1.iso) = e5427cf598341a06dce22f5cf48ecb43
MD5 (FreeBSD-8.2-RC1-amd64-livefs.iso) = cb687f9f3a69b450449be2ad849fce08
MD5 (FreeBSD-8.2-RC1-amd64-memstick.img) = e2f61ebbfd2f6c83a1c19af6be0af55a

MD5 (FreeBSD-8.2-RC1-i386-bootonly.iso) = c4a3e8001fe3786b9eff2b42df6d4347
MD5 (FreeBSD-8.2-RC1-i386-disc1.iso) = c8d9608dc611132bb6b785de1929fddc
MD5 (FreeBSD-8.2-RC1-i386-dvd1.iso) = 870c038a1620338b95c2405dcc7de04e
MD5 (FreeBSD-8.2-RC1-i386-livefs.iso) = c209b01ae838627fd52c365bd0461144
MD5 (FreeBSD-8.2-RC1-i386-memstick.img) = 57c02f0295f32b73d5bb52a567381d3f

MD5 (FreeBSD-8.2-RC1-ia64-bootonly.iso) = 1e92e3dccecfab6dce27d6f982456d53
MD5 (FreeBSD-8.2-RC1-ia64-disc1.iso) = 245af0a6c1dbb40a865f71195f5387ee
MD5 (FreeBSD-8.2-RC1-ia64-disc2.iso) = c1c12a47a24693a50e0fd3b138acd11b
MD5 (FreeBSD-8.2-RC1-ia64-disc3.iso) = 3b2e109bdd320f10faad1886a632cf85
MD5 (FreeBSD-8.2-RC1-ia64-dvd1.iso) = fa321deb35d5a7ddb349a5d20ecdcebd
MD5 (FreeBSD-8.2-RC1-ia64-livefs.iso) = 30cc67deb6f021e52bb2150bf786d73a

MD5 (FreeBSD-8.2-RC1-pc98-bootonly.iso) = f23ce4becc76c7fdda96b2394652f75d
MD5 (FreeBSD-8.2-RC1-pc98-disc1.iso) = 4e9b77ebd30efe23b9478ab85f1d5dfa
MD5 (FreeBSD-8.2-RC1-pc98-livefs.iso) = 09de03e3a5ae967f1b1cc47c04504656

MD5