Hi,
I am new to Solaris, but intrigued by ZFS. I am planning to set up a home NAS
(SAMBA/CIFS on ZFS) with my rough plan being to boot SXDE from an IDE drive,
then set up a single storage pool with 4 SATA drives (2 x 250GB 2 x 500GB) on
a single controller.
My main concerns are redundancy
On Jan 24, 2008 8:07 AM, Kava [EMAIL PROTECTED] wrote:
1. Considering the drives are different sizes, would I be better off setting
up 2 x 2-way mirrors separately and then adding them to the pool?
Yes. With raidz you will only get the capacity of the smallest disk
times (number of disks - 1).
More info from the same guide, page 59: The command also warns you about
creating a mirrored or RAID-Z pool using devices of different sizes. While this
configuration is allowed, mismatched levels of redundancy result in unused
space on the larger device, and requires the -f option to override
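To put numbers on that for the drives in the original question - a quick back-of-the-envelope sketch in plain sh arithmetic, assuming raw sizes and ignoring metadata overhead:

```shell
# Capacity math for 2 x 250GB + 2 x 500GB (sizes in GB).
# A 4-disk raidz is limited to the smallest member on every disk:
smallest=250
raidz_usable=$(( smallest * (4 - 1) ))      # 750GB usable
raidz_wasted=$(( (500 - smallest) * 2 ))    # 500GB of the big disks sits idle
# Two separate 2-way mirrors use each matched pair fully:
mirrors_usable=$(( 250 + 500 ))             # 750GB usable, nothing idle
echo "raidz:   ${raidz_usable}GB usable, ${raidz_wasted}GB wasted"
echo "mirrors: ${mirrors_usable}GB usable, 0GB wasted"
```

Usable space comes out the same either way here; the difference is the 500GB of the larger drives that a mismatched raidz leaves unused, which is exactly the warning the guide describes.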
Kava [EMAIL PROTECTED] wrote:
Can anyone recommend a cheap (but reliable) SATA PCI or PCIX card?
Why would you get a PCI-X card for a home NAS? I don't think I've ever
seen a non-server motherboard with PCI-X. Are you sure you don't want a
PCI-E card instead?
Anyway, if someone is aware of some
Hi,
Assume that you have a 2-way mirror of small drives that you want to replace
with another 2-way mirror of larger drives. What is the best way to do this?
If you use the zpool replace command, one at a time on each of the existing old
drives, then you will end up wasting the additional
Kava wrote:
I didn't think this was possible, but apparently it is.
How does this work? How do you mirror data on a 3 disk set?
The available space is constrained by that which can be mirrored.
For 3 disks, split them in half, giving you an even number of
devices to mirror. Avoid
Hi Kava,
Your questions are hard for me to answer without seeing your syntax.
Also, you don't need to futz with slices if you are using whole disks.
I added some add'l information to the zpool replace section
on page 74, here:
http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf
Note
Hi,
I'm wondering if it's possible to import a zpool on an iscsi-device LOCALLY.
Following scenario:
HostA (Sol10u4):
- Pool-1 (a striped-raidz-pool)
- iscsi-zvol on Pool-1
HostB (Sol10u3):
- Pool-2 is a Mirror of one local device and the iscsi-vol of HostA
Is it possible to mount the
Ok, I am not an expert - just done some playing about.
Option 1) I have done this - I had 4 x 300GB disks and one more in the post. I
could not wait to build my raidz2, so I used a 73GB disk which was spare - what
it gave me was a raidz2 pool of 5 x 73GB.
The warning is there to ask you if you really want
You do not waste the new space.
ZFS always works off the smallest device - so when you replace the disks with
larger ones, it increases the mirror size to match, as that's the new smallest
drive.
This message posted from opensolaris.org
Yep, it's possible.
But you will only have a mirror the size of the smallest disk.
A 3-way mirror just copies the data onto an extra disk.
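In other words, the usable space of an n-way mirror is just the smallest member, since every disk holds a complete copy. A tiny sketch with made-up sizes (250GB, 300GB, 500GB):

```shell
# Usable space of a 3-way mirror = smallest member.
min=250
for size in 300 500; do
  if [ "$size" -lt "$min" ]; then min=$size; fi
done
echo "3-way mirror usable: ${min}GB"
```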
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
I use the Supermicro 8-port PCI-X card; it's about 70 pounds in the UK.
It works fine on my home NAS box, which uses the Asus M2N32 WS Pro. I can also
use the 6 SATA ports of the nVidia chipset, giving me 14 usable SATA ports with
the Solaris native SATA support.
Platform T2000
SunOS ccluatdwunix1 5.10 Generic_125100-10 sun4v sparc SUNW,Sun-Fire-T200
I have a user who states that ZFS allocates more file system space than is
actually available - the ls command disagrees with what df -k shows.
He stated he used mkfile to verify that the ZFS quota was working.
He
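One common reason ls and du/df disagree, for what it's worth: ls reports logical file size, while du/df count allocated blocks. A sparse file shows the gap on any filesystem (ZFS compression of mkfile's zero-filled output has a similar effect). A self-contained demo under /tmp:

```shell
# Write 1 byte at offset ~10MB: logical size is 10MB, allocation is tiny.
f=/tmp/sparse-demo.$$
dd if=/dev/zero of="$f" bs=1 count=1 seek=10485759 2>/dev/null
logical=$(wc -c < "$f")
allocated_kb=$(du -k "$f" | awk '{print $1}')
echo "logical: ${logical} bytes, allocated: ${allocated_kb}KB"
rm -f "$f"
```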
Let's say I have two 300 GB drives and one 500 GB drive. Can I put a
RAID-Z on the three drives and a separate partition on the last 200 GB
of the 500 GB drive?
- Marcus
Ahh .. so you end up with 2 copies of disk A, one on disk B and the other on
disk C?
I don't think that is correct. I did it 5 minutes ago and it didn't change the
pool size at all.
Here is what I did:
- create mirrored pool of 2 x 8GB disks
- detach one disk
- attach/replace with 12GB disk
- detach second disk
- attach/replace with second 12GB disk
After this, the pool was
I think you can, if you create a slice on the larger drive that is equal to the
size of the smaller drives (so 300GB).
If you just add the whole large drive to the pool, you will lose the extra
space.
Apparently if you later replace both of the smaller drives with 2 x 500GB
drives, the pool
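The capacity math for that slicing suggestion, using the sizes from the question (2 x 300GB whole disks plus a 300GB slice of the 500GB drive in the raidz, the rest kept separate):

```shell
# raidz of three 300GB members, plus the leftover of the 500GB drive.
slice=300
raidz_usable=$(( slice * (3 - 1) ))   # 600GB usable
leftover=$(( 500 - slice ))           # 200GB for the separate partition
echo "raidz: ${raidz_usable}GB usable, separate slice: ${leftover}GB"
```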
Thanks.
I am going to try this (replacement with larger drive) again ... it sounds damn
handy and I am pretty sure I must have done something wrong ...
Kava,
Because of a recent bug, you need to export and import the pool to see
the expanded space after you use zpool replace.
Also, you don't need to detach first. The process would look like this:
# zpool create test mirror 8gb-1 8gb-2
# zpool replace test 8gb-1 12gb-1
# zpool replace test
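The whole sequence, including the export/import workaround, sketched as a dry run - the `run` helper only prints each command, and the device names are hypothetical stand-ins for real c#t#d# names:

```shell
run() { echo "# $*"; }
run zpool create test mirror c1t0d0 c1t1d0   # two 8GB disks
run zpool replace test c1t0d0 c2t0d0         # first 12GB disk (resilvers)
run zpool replace test c1t1d0 c2t1d0         # second 12GB disk (resilvers)
run zpool export test                        # work around the bug:
run zpool import test                        # expanded space shows after import
```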
That is a lot of drives ;)
My 2 cents ... read somewhere that you should not be running LVM on top of ZFS
... something about additional overhead.
Kava wrote:
My 2 cents ... read somewhere that you should not be running LVM on top of
ZFS ... something about additional overhead.
I finally got this to work, but it did not happen automatically. I needed to
export then re-import the pool to get it to work. Only then did the additional
space appear.
Here is what I did:
- create 4 x 8GB disks and 1 x 4 GB disks
- create RAIDZ pool with 3 x 8GB disks 1 x 4GB
- ignore
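The smallest-disk rule quoted earlier in the thread predicts the sizes in this experiment:

```shell
# 3 x 8GB + 1 x 4GB raidz is limited by the 4GB disk...
before=$(( 4 * (4 - 1) ))   # 12GB usable
# ...until the 4GB disk is replaced with an 8GB one (plus export/import):
after=$(( 8 * (4 - 1) ))    # 24GB usable
echo "before replace: ${before}GB, after export/import: ${after}GB"
```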
On 24 January, 2008 - Kava sent me these 0,3K bytes:
Ahh .. so you end up with 2 copies of disk A, one on disk B and the other on
disk C?
Depends on how you see it.. You end up with 3 copies of your data.. on
disk A, B and C..
/Tomas
--
Tomas Ögren, [EMAIL PROTECTED],
Yep, that's what mirroring does.
I realize that this topic has been fairly well beaten to death on this forum,
but I've also read numerous comments from ZFS developers that they'd like to
hear about significantly different performance numbers of ZFS vs UFS for
NFS-exported filesystems, so here's one more.
The server is an
Kava wrote:
I finally got this to work, but it did not happen automatically.
I needed to export then re-import the pool to get it to work.
Only then did the additional space appear.
Here is what I did:
- create 4 x 8GB disks and 1 x 4 GB disks
- create RAIDZ pool with 3 x 8GB disks 1 x
Steve Hillman wrote:
I realize that this topic has been fairly well beaten to death on this forum,
but I've also read numerous comments from ZFS developers that they'd like to
hear about significantly different performance numbers of ZFS vs UFS for
NFS-exported filesystems, so here's one
Your questions are hard for me to answer without seeing your syntax.
Also, you don't need to futz with slices if you are using whole disks.
I think what he's asking is if it's possible to replace a whole mirror or
RAID-Z vdev in one go. For instance, replacing a small mirror with a bigger
Richard Elling [EMAIL PROTECTED] wrote:
Marcus Sundman wrote:
Kava [EMAIL PROTECTED] wrote:
Can anyone recommend a cheap (but reliable) SATA PCI or PCIX card?
Why would you get a PCI-X card for a home NAS? I don't think I've
ever seen a non-server motherboard with PCI-X. Are
Marcus Sundman wrote:
Richard Elling [EMAIL PROTECTED] wrote:
Marcus Sundman wrote:
Kava [EMAIL PROTECTED] wrote:
Can anyone recommend a cheap (but reliable) SATA PCI or PCIX card?
Why would you get a PCI-X card for a home NAS? I don't think I've
ever seen a non-server motherboard
On 24 January, 2008 - Steve Hillman sent me these 1,9K bytes:
I realize that this topic has been fairly well beaten to death on this forum,
but I've also read numerous comments from ZFS developers that they'd like to
hear about significantly different performance numbers of ZFS vs UFS for
Marcus Sundman [EMAIL PROTECTED] wrote:
Richard Elling [EMAIL PROTECTED] wrote:
It may be less expensive to purchase a new motherboard with 6 SATA
ports on it.
Sure, but which one? I've been trying to find one for many, many
months already, but it has turned out to be impossible to find
Jill Manfield wrote:
Platform T2000
SunOS ccluatdwunix1 5.10 Generic_125100-10 sun4v sparc SUNW,Sun-Fire-T200
I have a user who states that ZFS allocates more file system space than is
actually available - the ls command disagrees with what df -k shows.
Here's the same file on UFS and on ZFS with
Marcus:
I'm currently running the Asus K8N-LR, and it works wonderfully. Not only do
the onboard ports work, but it also has multiple PCI-X slots. I'm running an
Opteron 165 (dual-core) CPU with it. It's cheap, and fast.
Oh, one thing. The only downside is that the onboard gigE interfaces are
Broadcom PCI-E based NICs. They unfortunately do not support jumbo frames. I
doubt this will be an issue for you if it's just a home NAS. In my setup I've
pushed 50MB/sec over NFS and the server was barely breathing.
Jan,
I'm wondering if it's possible to import a zpool on an iscsi-device
LOCALLY.
Following scenario:
HostA (Sol10u4):
- Pool-1 (a striped-raidz-pool)
- iscsi-zvol on Pool-1
HostB (Sol10u3):
- Pool-2 is a Mirror of one local device and the iscsi-vol of HostA
Is it possible to
Tim Cook [EMAIL PROTECTED] wrote:
I'm currently running the asus K8N-LR, and it works wonderfully.
Thanks, but socket 939 is cold dead and buried. S939 CPUs are very
expensive. DDR is over twice as expensive as DDR2. I can't tell if the
motherboard is expensive or not because I just can't find
After posting my reply to the initial note on this thread, and then
reading it again, I have some followup comments:
The following statement should have said ... this ZVOL is not a
LUN ...
So even though the ZVOL contains all the right data, from the point of
view of Solaris, this disk
Jim Dunham wrote:
This raises a key point that you should be aware of. ZFS does not
support shared access to the same ZFS filesystem.
unless you put NFS or something on top of it.
(I always forget that part myself.)
Hi, I'm running snv_78 on a dual-core 64-bit x86 system with 2 x 500GB USB
drives mirrored into one pool.
I did this (intending to set the rdonly flag after I copy my data):
zfs create pond/read-only
mkdir /pond/read-only/copytest
cp -rp /pond/photos/* /pond/read-only/copytest/
After the copy is
Christopher Gorski wrote:
Hi, I'm running snv_78 on a dual-core 64-bit x86 system with 2 500GB usb
drives mirrored into one pool.
I did this (intending to set the rdonly flag after I copy my data):
zfs create pond/read-only
mkdir /pond/read-only/copytest
cp -rp /pond/photos/*
I'm missing actual files.
I did this a second time, with the exact same result. It appears that
the missing files in each copy are the same files.
I originally copied these files over via Samba before trying to copy
them locally with cp to the other file system.
I'll have 200 sequentially
FWIW, I just finished performing a copy again, to the same filesystem:
mkdir /pond/copytestsame
cd /pond/photos
cp -rp * /pond/copytestsame
Same files are missing throughout the new tree...on the order of a
thousand files. There are about 27k files in /pond/photos and 25k files
in
Christopher Gorski wrote:
FWIW, I just finished performing a copy again, to the same filesystem:
mkdir /pond/copytestsame
cd /pond/photos
cp -rp * /pond/copytestsame
Same files are missing throughout the new tree...on the order of a
thousand files. There are about 27k files in
Nicolas Williams wrote:
On Thu, Jan 24, 2008 at 11:06:13PM -0500, Christopher Gorski wrote:
I'm missing actual files.
Christopher Gorski wrote:
zfs create pond/read-only
mkdir /pond/read-only/copytest
cp -rp /pond/photos/* /pond/read-only/copytest/
Might the missing files' names start
No. Here's a cut and paste of names of actual files missing:
(the original)
ls -al
/pond/photos/unsorted/drive-452a/\[E\]/drive/archives/seconddisk_20nov2002/eujpg/103-0*
-rwxr--r-- 1 root root 593558 Nov 20 2002
Are there so many files that the glob expansion results in too large an
argument list for cp?
Nicolas Williams wrote:
Are there so many files that the glob expansion results in too large an
argument list for cp?
There are only four subdirs in /pond/photos:
# ls /pond/photos
2006-02-15 2006-06-09 2007-12-20 unsorted
Kyle McDonald wrote:
Albert Chin wrote:
On Tue, Jan 22, 2008 at 09:20:30PM -0500, Kyle McDonald wrote:
Anyone know the answer to this? I'll be ordering 2 of the 7K's for
my x346's this week. If neither A nor B will work I'm not sure
there's any advantage to using the 7k card
New, yes. Aware - probably not.
That users would create many filesystems, given how cheap filesystems are, was
an easy guess, but I somehow don't think anybody envisioned that users would be
creating tens of thousands of filesystems.
ZFS - too good for its own good :-p
michael schuster wrote:
I assume you've ensured that there's enough space in /pond ...
can you try
(cd /pond/photos; tar cf - *) | (cd /pond/copytestsame; tar xf -)
I tried it, and it worked. The new tree is an exact copy of the old one.
-Chris
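For anyone who wants to try the tar pipe safely first, a throwaway demo under scratch directories (paths here are made up, not the /pond layout):

```shell
# Copy a small tree through a tar pipe, mirroring the command above.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/album"
echo hello > "$src/album/photo.txt"
(cd "$src"; tar cf - .) | (cd "$dst"; tar xf -)
copied=$(cat "$dst/album/photo.txt")
echo "$copied"
rm -rf "$src" "$dst"
```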
Erik Trimble wrote:
Kyle McDonald wrote:
Albert Chin wrote:
On Tue, Jan 22, 2008 at 09:20:30PM -0500, Kyle McDonald wrote:
Anyone know the answer to this? I'll be ordering 2 of the 7K's for
my x346's this week. If neither A nor B will work I'm not sure
there's any advantage to