I have an older server that I am using (2 x 2Gb Xeon) .. it used to run a web
company ;)
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hmm .. the only things that I am doing differently (as far as I can tell) are
that I am using the web GUI to issue the commands and that I am running it in a
VM (using the pre-built ones available).
I will try using the command line and see if that makes a difference. Maybe I
wasn't waiting for it to
Hi,
I am new to Solaris, but intrigued by ZFS. I am planning to set up a home NAS
(SAMBA/CIFS on ZFS) with my rough plan being to boot SXDE from an IDE drive,
then set up a single storage pool with 4 SATA drives (2 x 250GB 2 x 500GB) on
a single controller.
My main concerns are redundancy
More info from the same guide, page 59: The command also warns you about
creating a mirrored or RAID-Z pool using devices of different sizes. While this
configuration is allowed, mismatched levels of redundancy result in unused
space on the larger device and require the -f option to override.
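For anyone who has not hit it yet, the warning the guide describes looks
roughly like the following. This is a sketch using file-backed vdevs with
made-up paths, and the exact error wording may vary between releases:

```shell
# Create two file-backed vdevs of different sizes (paths are examples).
mkfile 256m /var/tmp/small
mkfile 512m /var/tmp/large

# Attempting a mirror of mismatched devices triggers the warning,
# something along the lines of:
#   invalid vdev specification
#   use '-f' to override the following errors:
#   mirror contains devices of different sizes
zpool create tank mirror /var/tmp/small /var/tmp/large

# Override with -f; the pool's capacity is then limited by the
# smaller device, leaving the extra space on the larger one unused.
zpool create -f tank mirror /var/tmp/small /var/tmp/large
```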
Kava [EMAIL PROTECTED] wrote:
Can anyone recommend a cheap (but reliable) SATA PCI or PCIX card?
Why would you get a PCI-X card for a home NAS? I don't think I've ever
seen a non-server motherboard with PCI-X. Are you sure you don't want a
PCI-E card instead?
Anyway, if someone is aware of some
OK, I am not an expert - just done some playing about.
Option 1) I have done this - I had 4 x 300GB disks and one more in the post; I
could not wait to build my raidz2, so I used a 73GB disk which was spare. What
it gave me was a raidz2 pool of 5 x 73GB.
The warning is there to ask you if you really want
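The sizing behaviour described above (four 300GB disks plus one 73GB spare
yielding a 5 x 73GB raidz2) follows from ZFS treating every member of a raidz
vdev as if it were the size of the smallest one. A small sketch of that
arithmetic (the function name and numbers are illustrative, not from the
thread):

```python
def raidz_usable_gb(disk_sizes_gb, parity):
    """Approximate usable capacity of a raidz vdev: every member is
    truncated to the smallest disk, and `parity` disks' worth of
    space goes to redundancy (1 for raidz, 2 for raidz2)."""
    smallest = min(disk_sizes_gb)
    return smallest * (len(disk_sizes_gb) - parity)

# Four 300GB disks plus one 73GB disk in raidz2 (double parity):
print(raidz_usable_gb([300, 300, 300, 300, 73], parity=2))  # -> 219
```

So until the small disk is replaced with a 300GB one, roughly 219GB is usable
rather than the ~900GB the 300GB disks could provide.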
I use the Supermicro 8-port PCI-X card; it's about 70 pounds in the UK.
It works fine on my home NAS box, which uses the Asus M2N32 WS Pro. I can also
use the 6 SATA ports off the NVIDIA chipset, giving me 14 usable SATA ports
with the Solaris native SATA support.
Thanks.
I am going to try this (replacement with a larger drive) again ... it sounds
damn handy, and I am pretty sure I must have done something wrong ...
That is a lot of drives ;)
I finally got this to work, but it did not happen automatically. I needed to
export then re-import the pool to get it to work. Only then did the additional
space appear.
Here is what I did:
- create 4 x 8GB disks and 1 x 4GB disk
- create RAIDZ pool with 3 x 8GB disks + 1 x 4GB
- ignore
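The replace-then-export/import sequence described above can be sketched with
file-backed vdevs like this (the paths and pool name are illustrative, not
from the original post):

```shell
# Build a raidz vdev where one member is smaller (file-backed, for testing).
mkfile 8g /var/tmp/d1 /var/tmp/d2 /var/tmp/d3
mkfile 4g /var/tmp/d4
zpool create tank raidz /var/tmp/d1 /var/tmp/d2 /var/tmp/d3 /var/tmp/d4

# Replace the small member with a full-size one, then wait for the
# resilver to finish (watch 'zpool status tank').
mkfile 8g /var/tmp/d5
zpool replace tank /var/tmp/d4 /var/tmp/d5

# On the builds discussed here, the extra capacity only appears after
# an export/import cycle:
zpool export tank
zpool import -d /var/tmp tank
zpool list tank
```

Later ZFS releases added an autoexpand pool property (and `zpool online -e`)
that make the export/import step unnecessary, but on the builds discussed in
this thread it was required.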
Kava wrote:
I finally got this to work, but it did not happen automatically.
I needed to export then re-import the pool to get it to work.
Only then did the additional space appear.
Here is what I did:
- create 4 x 8GB disks and 1 x 4 GB disks
- create RAIDZ pool with 3 x 8GB disks 1 x
Richard Elling [EMAIL PROTECTED] wrote:
Marcus Sundman wrote:
Kava [EMAIL PROTECTED] wrote:
Can anyone recommend a cheap (but reliable) SATA PCI or PCIX card?
Why would you get a PCI-X card for a home NAS? I don't think I've
ever seen a non-server motherboard with PCI-X. Are
Marcus Sundman wrote:
Richard Elling [EMAIL PROTECTED] wrote:
Marcus Sundman wrote:
Kava [EMAIL PROTECTED] wrote:
Can anyone recommend a cheap (but reliable) SATA PCI or PCIX card?
Why would you get a PCI-X card for a home NAS? I don't think I've
ever seen a non-server motherboard
Marcus Sundman [EMAIL PROTECTED] wrote:
Richard Elling [EMAIL PROTECTED] wrote:
It may be less expensive to purchase a new motherboard with 6 SATA
ports on it.
Sure, but which one? I've been trying to find one for many, many
months already, but it has turned out to be impossible to find
Marcus:
I'm currently running the Asus K8N-LR, and it works wonderfully. Not only do
the onboard ports work, but it also has multiple PCI-X slots. I'm running an
Opteron 165 (dual core) CPU with it. It's cheap, and fast.
Oh, one thing. The only downside is that the onboard gigE interfaces are
Broadcom PCI-E based NICs. They unfortunately do not support jumbo frames. I
doubt this will be an issue for you if it's just a home NAS. In my setup I've
pushed 50MB/sec over NFS and the server was barely breathing.
Tim Cook [EMAIL PROTECTED] wrote:
I'm currently running the asus K8N-LR, and it works wonderfully.
Thanks, but socket 939 is cold dead and buried. S939 CPUs are very
expensive. DDR is over twice as expensive as DDR2. I can't tell if the
motherboard is expensive or not because I just can't find