What do you mean by mirrored vdevs? Hardware RAID1? Because I only have
an ICH9R, and OpenSolaris doesn't know about it.
No, he means a mirror created by zfs.
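For example, such a mirror is created directly with zpool, no hardware RAID
required (pool and device names below are only placeholders):
# zpool create tank mirror c1t0d0 c1t1d0
# zpool status tank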
I've been testing OpenSolaris 2008.05 as a replacement for our main fileserver.
After following the CIFS walkthrough on the genunix wiki, it's working great with
our render farm.
Running into a problem, though, with our render manager ('Deadline' by Frantic
Films); the application creates folders
I checked this link:
http://www.opensolaris.org/os/community/zfs/version/4/
It seems to imply that v4 isn't in Sun Solaris at all, only Nevada. And you
are correct, the system I am trying to import to is U3; apologies for the
confusion. So I was wondering: is there a way to determine the
On Sat, 14 Jun 2008 06:46:28 PDT
Peter Hawkins [EMAIL PROTECTED] wrote:
So I was wondering: is there a way to determine the version of the pool that I
am trying to import?
Look at the output of 'zpool get version poolname'.
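For example (pool name is a placeholder; on a U3 system the reported value
should be 3 or lower, since v4 only exists in Nevada):
# zpool get version tank
NAME  PROPERTY  VALUE  SOURCE
tank  version   4      default
'zpool upgrade -v' lists all the versions the running system supports.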
--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
++ http://nagual.nl/ + SunOS sxce
Mentioned on
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide is the
following:
ZFS works well with storage based protected LUNs (RAID-5 or mirrored LUNs from
intelligent storage arrays). However, ZFS cannot heal corrupted blocks that are
detected by ZFS checksums.
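In other words, if you want ZFS to be able to repair what its checksums
catch, you have to give it some redundancy of its own on top of the array,
e.g. (pool, device, and dataset names are placeholders):
# zpool create tank mirror c2t0d0 c2t1d0
or, on a single protected LUN, keep two copies of every block at the cost of
twice the space:
# zfs set copies=2 tank/data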
- Original Message -
From: Brian Wilson [EMAIL PROTECTED]
Date: Saturday, June 14, 2008 12:12 pm
Subject: Re: [zfs-discuss] zpool with RAID-5 from intelligent storage arrays
To: Bob Friesenhahn [EMAIL PROTECTED]
Cc: zfs-discuss@opensolaris.org
On Sat, 14 Jun 2008, zfsmonk wrote:
On Sat, 14 Jun 2008, Brian Wilson wrote:
What are the odds, in that configuration of zpool (no mirroring,
just using the intelligent disk as concatenated LUNs in the zpool),
that if we have this silent corruption, the whole zpool dies? If
anyone knows, what are the comparative odds of the
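Either way, you can at least watch what ZFS is catching; without redundancy
it can only report, not repair, and -v lists the files hit by permanent
errors (pool name is a placeholder):
# zpool status -v tank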
On Tue, Jun 10, 2008 at 12:02 PM, [EMAIL PROTECTED] wrote:
Here I made the opposite observation: I just installed nv90 on a dated
Dell D400 notebook; unmodified except for an 80GB 2.5" hard disk and -
of course! - an extra 1 GB stick of RAM, making it 1.2 GB altogether.
Now, first I installed
On Sat, 14 Jun 2008, dick hoogendijk wrote:
With zfs you can scrub the pool at the system level. This allows you
to discover many issues early before they become nightmares.
#zpool status
scrub: none requested
My question is really: do I wait till a scrub is requested, or am I
supposed to
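As far as I know, nothing ever requests a scrub on its own; you kick one off
yourself and check on it afterwards (pool name is a placeholder):
# zpool scrub tank
# zpool status tank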
He means that you can have two types of pool as your root pool:
1. A single physical disk.
2. A ZFS mirror. Usually this means 2 disks.
RAIDZ arrays are not supported as root pools (at the moment).
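A rough sketch of the second case, attaching a mirror to an existing
single-disk root pool (device names are placeholders; SPARC shown, on x86
you'd use installgrub instead of installboot):
# zpool attach rpool c0t0d0s0 c0t1d0s0
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0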
Cheers
Andrew.
I've got a couple of identical old sparc boxes running nv90 - one
on ufs, the other zfs. Everything else is the same. (SunBlade
150 with 1G of RAM, if you want specifics.)
The zfs root box is significantly slower all around. Not only is
initial I/O slower, but it seems much less able to
Samba is (at least in sxce) installed by default.
On 6/14/08, matt estela [EMAIL PROTECTED] wrote:
I've been testing OpenSolaris 2008.05 as a replacement for our main
fileserver. After following the CIFS walkthrough on the genunix wiki, it's
working great with our render farm.
Running into a
On Sat, Jun 14, 2008 at 2:24 PM, Volker A. Brandt [EMAIL PROTECTED] wrote:
I've got a couple of identical old sparc boxes running nv90 - one
on ufs, the other zfs. Everything else is the same. (SunBlade
150 with 1G of RAM, if you want specifics.)
Exactly the same here, though with different
Peter Tribble wrote:
On Tue, Jun 10, 2008 at 12:02 PM, [EMAIL PROTECTED] wrote:
Here I made the opposite observation: I just installed nv90 on a dated
Dell D400 notebook; unmodified except for an 80GB 2.5" hard disk and -
of course! - an extra 1 GB stick of RAM, making it 1.2 GB
On Sat, Jun 14, 2008 at 02:51:31PM -0500, Bob Friesenhahn wrote:
I think that 'none requested' likely means that the administrator has
never issued a request to scrub the pool.
Or the system. That status line will show the last scrub/resilver to
have taken place. 'None requested' means that no
Hey all -
Just spent quite some time trying to work out why my 2-disk mirrored ZFS
pool was running so slow, and found an interesting answer...
System: new Gigabyte M750SLI-DS4, AMD 9550, 4GB memory, and 2 x Seagate
500GB SATA-II 32MB-cache disks.
The SATA ports on the nForce 750a SLI chipset
On Sun, Jun 15, 2008 at 04:30, Nathan Kroenert [EMAIL PROTECTED] wrote:
So - using plain dd to the zfs filesystem on said disk
dd if=/dev/zero of=delete.me bs=65536
NB: dd from /dev/zero is a poor benchmark. Since the writes to disk
are all zero, you may create a sparse file, or ZFS
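Something like this gives a fairer number, though /dev/urandom itself can
become the bottleneck (file name and sizes are arbitrary):
# dd if=/dev/urandom of=random.dat bs=65536 count=16384
That writes roughly 1 GB of incompressible data, so neither sparse-file
holes nor compression can flatter the result.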
On Sun, 15 Jun 2008, Brian Hechinger wrote:
how long the scrub takes. My pool is set to be scrubbed every night
via a cron job:
And like all other things of this nature, the more often you do it, the
less invasive it will be as there is less to do. That being said, I still
wouldn't
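For what it's worth, a nightly scrub entry would look something like this
(illustrative only, not the poster's actual crontab; pool name is a
placeholder):
0 2 * * * /usr/sbin/zpool scrub tank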
It turns out that when you are in IDE compatibility mode, having two
disks on the same 'controller' (c# in Solaris) behaves just like real
IDE... Crap!
That is the second time I've seen Solaris guess wrong and force what it
thinks is right. Solaris will also limit the size of an ATA drive if
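You can at least confirm which driver Solaris bound the disks to (device
and driver names vary by chipset; output omitted):
# echo | format
# prtconf -D | grep -i sata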