Re: [zfs-discuss] Server upgrade

2012-02-15 Thread Brandon High
On Wed, Feb 15, 2012 at 9:16 AM, David Dyer-Bennet wrote:
> Is there an upgrade path from (I think I'm running Solaris Express) to
> something modern?  (That could be an Oracle distribution, or the free

There *was* an upgrade path from snv_134 to snv_151a (Solaris 11
Express), but I don't know if Oracle still supports it. There was an
intermediate step or two along the way (snv_134b, I think) to move
from OpenSolaris to Oracle Solaris.

As others mentioned, you could jump to OpenIndiana from your current
version. You may not be able to move between OI and S11 in the future,
so it's a somewhat important decision.
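
Whichever way you go, it's worth creating a backup boot environment
first so you can roll back if an image-update goes wrong. A rough
sketch (the BE name is just an example):

beadm create pre-upgrade-134   # clone the active BE as a fallback
beadm list                     # confirm it appears; the active BE shows NR

If an upgrade leaves the box unhappy you can boot the old BE from the
GRUB menu and sort things out from there.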

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Server upgrade

2012-02-15 Thread andy thomas

On Wed, 15 Feb 2012, David Dyer-Bennet wrote:


While I'm not in need of upgrading my server at an emergency level, I'm
starting to think about it -- to be prepared (and an upgrade could be
triggered by a failure at this point; my server dates to 2006).


One of my most vital servers is a Netra 150 dating from 1997. It's still
going strong, crammed with 12 x 300 GB disks and running Solaris 9. I think
one ought to have more faith in Sun hardware.


Andy


Re: [zfs-discuss] Server upgrade

2012-02-15 Thread Bob Friesenhahn

On Wed, 15 Feb 2012, David Dyer-Bennet wrote:

version fits my needs for example.)  Upgrading might perhaps save me from
changing all the user passwords (half a dozen, not a huge problem) and
software packages I've added.

(uname -a says "SunOS fsfs 5.11 snv_134 i86pc i386 i86pc").

Or should I just export my pool and do a from-scratch install of
something?  (Then recreate the users and install any missing software.
I've got some cron jobs, too.)


I have read (on the OpenIndiana site) that there is an upgrade path
from what you have to OpenIndiana; they describe the procedure to use.
Note that OpenIndiana does not yet include encryption support in ZFS,
since encryption support was never released into OpenSolaris.
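
From memory, their procedure boils down to repointing IPS at the
OpenIndiana repository and running an image-update, roughly like the
sketch below; treat the repository URL and publisher names as
approximate and follow the exact steps on their site:

pkg set-publisher --non-sticky opensolaris.org
pkg set-publisher -P -g http://pkg.openindiana.org/dev openindiana.org
pkg image-update
init 6

The image-update should land in a new boot environment, so your
existing snv_134 BE remains available as a fallback.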


If I were you, I would try the upgrade to OpenIndiana first.

The alternative is paid and supported Oracle Solaris 11, which would 
require a from-scratch install, and may or may not even be an option 
for you.
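
For what it is worth, if you did end up on Oracle Solaris 11, the
encrypted backup datasets you asked about would look roughly like
this; the dataset name is only an example:

zfs create -o encryption=on -o keysource=passphrase,prompt tank/backup
zfs get encryption,keysource tank/backup

OpenIndiana cannot offer that today, for the reason above.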


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/


Re: [zfs-discuss] Server upgrade

2012-02-15 Thread Enda O'Connor

On 15/02/2012 17:16, David Dyer-Bennet wrote:

While I'm not in need of upgrading my server at an emergency level, I'm
starting to think about it -- to be prepared (and an upgrade could be
triggered by a failure at this point; my server dates to 2006).

I'm actually more concerned with software than hardware.  My load is
small, the current hardware is handling it no problem.  I don't see myself
as a candidate for dedup, so I don't need to add huge quantities of RAM.
I'm handling compression on backups just fine (the USB external disks are
the choke-point, so compression actually speeds up the backups).

I'd like to be on a current software stream that I can easily update with
bug-fixes and new features.  The way I used to do that got broken in the
Oracle takeover.

I'm interested in encryption for my backups, if that's functional (and
safe) in current software versions.  I take copies off-site, so that's a
useful precaution.

Whatever I do, I'll of course make sure my backups are ALL up-to-date and
at least one is back off-site before I do anything drastic.

Is there an upgrade path from (I think I'm running Solaris Express) to
something modern?  (That could be an Oracle distribution, or the free
software fork, or some Nexenta distribution; my current data pool is 1.8T,
and I don't expect it to grow terribly fast, so the fully-featured free
version fits my needs for example.)  Upgrading might perhaps save me from
changing all the user passwords (half a dozen, not a huge problem) and
software packages I've added.

(uname -a says "SunOS fsfs 5.11 snv_134 i86pc i386 i86pc").


So this is the last OpenSolaris release (i.e. not Solaris Express);
S11 Express was build 151, so what you have is older than that.
I'm not sure there is an upgrade path to Express from OpenSolaris; I
don't think there is.
And S11 itself is now the latest, based off build 175b. There is an
upgrade path from Express to S11, but not from OpenSolaris to Express,
if I remember correctly.
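
To double-check exactly what the box is running before deciding,
something like this should tell you (the comments are what I'd expect
to see on an snv_134 image, not output from a real box):

cat /etc/release    # e.g. "OpenSolaris Development snv_134 X86"
pkg publisher       # which IPS repositories the image points at
pkg info entire     # the consolidation build the image was installed from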


Or should I just export my pool and do a from-scratch install of
something?  (Then recreate the users and install any missing software.
I've got some cron jobs, too.)

AND, what "something" should I upgrade to or install?  I've tried a couple
of times to figure out the alternatives and it's never really clear to me
what my good options are.





[zfs-discuss] Server upgrade

2012-02-15 Thread David Dyer-Bennet
While I'm not in need of upgrading my server at an emergency level, I'm
starting to think about it -- to be prepared (and an upgrade could be
triggered by a failure at this point; my server dates to 2006).

I'm actually more concerned with software than hardware.  My load is
small, the current hardware is handling it no problem.  I don't see myself
as a candidate for dedup, so I don't need to add huge quantities of RAM. 
I'm handling compression on backups just fine (the USB external disks are
the choke-point, so compression actually speeds up the backups).

I'd like to be on a current software stream that I can easily update with
bug-fixes and new features.  The way I used to do that got broken in the
Oracle takeover.

I'm interested in encryption for my backups, if that's functional (and
safe) in current software versions.  I take copies off-site, so that's a
useful precaution.

Whatever I do, I'll of course make sure my backups are ALL up-to-date and
at least one is back off-site before I do anything drastic.

Is there an upgrade path from (I think I'm running Solaris Express) to
something modern?  (That could be an Oracle distribution, or the free
software fork, or some Nexenta distribution; my current data pool is 1.8T,
and I don't expect it to grow terribly fast, so the fully-featured free
version fits my needs for example.)  Upgrading might perhaps save me from
changing all the user passwords (half a dozen, not a huge problem) and
software packages I've added.

(uname -a says "SunOS fsfs 5.11 snv_134 i86pc i386 i86pc").

Or should I just export my pool and do a from-scratch install of
something?  (Then recreate the users and install any missing software. 
I've got some cron jobs, too.)

AND, what "something" should I upgrade to or install?  I've tried a couple
of times to figure out the alternatives and it's never really clear to me
what my good options are.

-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info



Re: [zfs-discuss] [o.seib...@cs.ru.nl: A broken ZFS pool...]

2012-02-15 Thread Tiemen Ruiten

On 02/15/2012 04:02 PM, Paul Kraus wrote:

 Are you saying that you cannot replace a failed drive without
shutting down the system? If that is the case with FreeBSD then I
suggest that FreeBSD is not ready for production use. I know that
under Solaris you _can_ replace failed drives with no downtime to the
end users, we do it on a regular basis.

 I suspect there is a method to replace a failed drive under
FreeBSD with no outage (assuming the drive is in a hot swap capable
enclosure), but as I am not familiar with FreeBSD I do not know what
it is.


Hm, no, that's not what I meant; I guess I shouldn't have included that
step. Simply offlining the device (to make sure no attempts are made to
access it) should be sufficient if you indeed assume a hot-swap bay.
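
For completeness, the no-downtime version on FreeBSD would go roughly
like this; the device names are placeholders and the rescan step
depends on your controller:

zpool offline tank da4    # stop ZFS from touching the failing disk
camcontrol devlist        # note which bus/target the old da4 sits on
(pull the failed disk and insert the replacement in the same bay)
camcontrol rescan all     # let CAM probe the new disk
zpool replace tank da4    # resilver onto the replacement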


Tiemen


Re: [zfs-discuss] [o.seib...@cs.ru.nl: A broken ZFS pool...]

2012-02-15 Thread Paul Kraus
On Wed, Feb 15, 2012 at 9:24 AM, Tiemen Ruiten wrote:

> The correct sequence to replace a failed drive in a ZFS pool is:
>
> zpool offline tank da4
> shutdown and replace the drive
> zpool replace tank da4

Are you saying that you cannot replace a failed drive without
shutting down the system? If that is the case with FreeBSD, then I
suggest that FreeBSD is not ready for production use. I know that
under Solaris you _can_ replace failed drives with no downtime to the
end users; we do it on a regular basis.

I suspect there is a method to replace a failed drive under
FreeBSD with no outage (assuming the drive is in a hot-swap-capable
enclosure), but as I am not familiar with FreeBSD I do not know what
it is.
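
For what it's worth, our drill on Solaris is along these lines; the
controller and target names below are made up, and whether you need
the cfgadm steps at all depends on the enclosure:

zpool offline tank c1t4d0               # take the failed disk out of service
cfgadm -c unconfigure c1::dsk/c1t4d0    # detach it from the controller, if needed
(swap the disk in its hot-swap bay)
cfgadm -c configure c1::dsk/c1t4d0      # bring the new disk online
zpool replace tank c1t4d0               # resilver onto the replacement

No reboot, no downtime for the users.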

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, Troy Civic Theatre Company
-> Technical Advisor, RPI Players


Re: [zfs-discuss] [o.seib...@cs.ru.nl: A broken ZFS pool...]

2012-02-15 Thread Tiemen Ruiten

On 02/15/2012 02:49 PM, Olaf Seibert wrote:

This is the current status:

$ zpool status
  pool: tank
 state: FAULTED
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
  scan: scrub repaired 0 in 49h3m with 2 errors on Fri Jan 20 15:10:35 2012
config:

        NAME                     STATE     READ WRITE CKSUM
        tank                     FAULTED       0     0     2
          raidz2-0               DEGRADED      0     0     8
            da0                  ONLINE        0     0     0
            da1                  ONLINE        0     0     0
            da2                  ONLINE        0     0     0
            da3                  ONLINE        0     0     0
            3758301462980058947  UNAVAIL       0     0     0  was /dev/da4
            da5                  ONLINE        0     0     0

The strange thing is that the pool is FAULTED while its only vdev
(raidz2-0) is merely DEGRADED.

da4 failed recently and was replaced with a new disk, but no resilvering is
taking place.


The correct sequence to replace a failed drive in a ZFS pool is:

zpool offline tank da4
shutdown and replace the drive
zpool replace tank da4

You can see a history of modifications you've made to your pool with:

zpool history

Probably you haven't gone through this sequence correctly, and ZFS is 
still referring to the old device's GUID (the long number you see instead 
of da4) and therefore thinks the disk is unavailable.
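
If that is what happened, you can usually point zpool replace at the
stale entry by its GUID, taking the number straight from your status
output; a sketch:

zpool history tank                           # check what was actually run
zpool replace tank 3758301462980058947 da4   # old device by GUID, new disk by name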


Hope that helps,

Tiemen


[zfs-discuss] [o.seib...@cs.ru.nl: A broken ZFS pool...]

2012-02-15 Thread Olaf Seibert
At the moment I am feverishly seeking advice on how to fix a broken ZFS
raidz2 pool I have (using FreeBSD 8.2-STABLE).

This is the current status:

$ zpool status
  pool: tank
 state: FAULTED
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
  scan: scrub repaired 0 in 49h3m with 2 errors on Fri Jan 20 15:10:35 2012
config:

        NAME                     STATE     READ WRITE CKSUM
        tank                     FAULTED       0     0     2
          raidz2-0               DEGRADED      0     0     8
            da0                  ONLINE        0     0     0
            da1                  ONLINE        0     0     0
            da2                  ONLINE        0     0     0
            da3                  ONLINE        0     0     0
            3758301462980058947  UNAVAIL       0     0     0  was /dev/da4
            da5                  ONLINE        0     0     0

The strange thing is that the pool is FAULTED while its only vdev
(raidz2-0) is merely DEGRADED.

da4 failed recently and was replaced with a new disk, but no resilvering is
taking place.

I've already tried lots of things with this, including exporting and
then "zpool import -nFX tank". (I only got it imported back with "zpool
import -V tank".) The -nFX ("extreme rewind") option gives no output, but
there is a lot of I/O activity going on, as if it is rewinding forever,
or in a loop, or something like that.

One thing that may, or may not, complicate things is the following.
Already quite a while ago there suddenly was a directory that was so
corrupted that ZFS reported I/O errors for various files in it. I could
not even remove them; in the end I moved the other files to a new
directory, put the original directory to the side, and made it mode
000. (If rewinding wants to go back to before this happened, I can
understand that it takes a while, but I left it running overnight and
it didn't make visible progress.)

zdb and various other commands complain about the pool not being
available, or I/O errors. For instance:

fourquid.1:~$ sudo zpool clear -nF tank
fourquid.1:~$ sudo zpool clear -F tank
cannot clear errors for tank: I/O error
fourquid.1:~$ sudo zpool clear -nFX tank
(no output, uses some cpu, some I/O)

zdb -v                          ok
zdb -v -c tank                  zdb: can't open 'tank': input/output error
zdb -v -l /dev/da[01235]        ok
zdb -v -u tank                  zdb: can't open 'tank': Input/output error
zdb -v -l -u /dev/da[01235]     ok
zdb -v -m tank                  zdb: can't open 'tank': Input/output error
zdb -v -m -X tank               no output, uses cpu and I/O
zdb -v -i tank                  zdb: can't open 'tank': Input/output error
zdb -v -i -F tank               zdb: can't open 'tank': Input/output error
zdb -v -i -X tank               no output, uses cpu and I/O

Are there any hints you can give me? I have full FreeBSD source online
so I can modify some tools, if needed.

Thanks in advance,
-Olaf.
-- 
Pipe rene = new PipePicture(); assert(Not rene.GetType().Equals(Pipe));
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss