Is it possible to create a custom Jumpstart profile to install Nevada
on a RAID-10 rpool? From the ZFS Boot FAQ [1], you can create a
profile to install Nevada with a RAID-1 rpool using the following
line:
pool newpool auto auto auto mirror c0t0d0s0 c0t1d0s0
Is there an equivalent line for RAID-10?
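For context, a complete profile using that mirror line would look roughly like
this sketch (the cluster and bename values here are placeholders, not from the
FAQ):

  install_type initial_install
  cluster SUNWCuser
  pool newpool auto auto auto mirror c0t0d0s0 c0t1d0s0
  bootenv installbe bename zfsBE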
Stephen Le wrote:
> Is it possible to create a custom Jumpstart profile to install Nevada
> on a RAID-10 rpool?
No, simple mirrors only.
--
Ian.
Hi Graham,
(this message was posted on opensolaris-bugs initially; I am CC'ing and
reply-to'ing zfs-discuss as it seems to be a more appropriate place to discuss
this.)
> I'm surprised to see that the status of bug 6592835 hasn't moved beyond "yes
> that's a problem".
My understanding is that the resilver speed is tied to the fact that the
current resilver implementation follows the ZFS on-disk structures, which needs
random-like I/O operations, while a traditional RAID rebuild issues sequential
I/O only.
On Tue, Oct 28, 2008 at 05:30:55PM -0700, Nigel Smith wrote:
> Hi Matt.
> Ok, got the capture and successfully 'unzipped' it.
> (Sorry, I guess I'm using old software to do this!)
>
> I see 12840 packets. The capture is a TCP conversation
> between two hosts using the SMB aka CIFS protocol.
Hi Tano
Great to hear that you've now got this working!!
I understand you are using a Broadcom network card,
from your previous posts I can see you are using the 'bnx' driver.
I will raise this as a bug, but first please would you run
'/usr/X11/bin/scanpci'
to identify the exact 'vendor id' and 'device id' for the Broadcom network
chipset, and report that back here.
On Tue, Oct 28, 2008 at 05:45:48PM -0700, Richard Elling wrote:
> I replied to Matt directly, but didn't hear back. It may be a driver issue
> with checksum offloading. Certainly the symptoms are consistent.
> To test with a workaround see
> http://bugs.opensolaris.org/view_bug.do?bug_id=6686
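If the workaround there is the usual one for suspected checksum-offload bugs,
it amounts to disabling hardware checksumming (an assumption on my part;
verify against the bug report itself):

  # /etc/system -- disable hardware checksum offloading; reboot to apply
  set ip:dohwcksum = 0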
Hello Neil,
Tuesday, October 28, 2008, 10:34:28 PM, you wrote:
NP> However, we wanted to make sure it fit into the framework for
NP> the removal of any device. This is a much harder problem on which we
NP> have made progress, but it's not there yet...
I think a lot of people here would be interested in
Hi Nils,
thanks for the detailed info. I've tried searching the zfs-discuss archive for
both the bug id and 'resilver', but in both cases the only result I can find
from the whole history is this thread:
http://www.opensolaris.org/jive/thread.jspa?messageID=276358
Maybe the discussions you recall aren't fully indexed for searching on these
keywords, or they were in another forum, but thanks for giving me the gist of
it. It is potentially quite an Achilles heel for ZFS though.
> No takers? :)
>
> benr.
I'm quite curious about finding out about this too, to be honest :)
And it's not just ZFS on Solaris, because I've filled up and imported pools into
ZFS Fuse 0.5.0 (which is based on the latest ZFS code) in Linux, and on FreeBSD
too.
Bob Friesenhahn wrote:
> AMD Athlon/Opteron dual core likely matches or exceeds
> Intel quad core for ZFS use due to a less bottlenecked memory channel.
How big is the difference? Does anyone have benchmarking results (maybe even
when using ZFS on Solaris 10)?
Martti
> Example is:
>
> [EMAIL PROTECTED] ls -la
> /data/zones/testfs/root/etc/services
> lrwxrwxrwx 1 root root 15 Oct 13 14:35
> /data/zones/testfs/root/etc/services ->
> ./inet/services
>
> [EMAIL PROTECTED] ls -la /data/zones/testfs/root/etc/services
> lrwxrwxrwx 1 root root
Out of interest, and reasonably on-topic, can anyone predict how these two
setups would compare for CIFS performance?
1) Dedicated Windows 2003 Server, Intel hardware SATA RAID controller
(single raid 5 array, 8 disks)
2) OpenSolaris+ZFS+CIFS, 8 drives with a SuperMicro controller
On Wed, 29 Oct 2008, Nils Goroll wrote:
> My understanding is that the resilver speed is tied to the fact that the
> current resilver implementation follows the ZFS on-disk structures, which
> needs random-like I/O operations, while a traditional RAID rebuild issues
> sequential I/O only. Simply put, t
On Wed, 29 Oct 2008, Graham McArdle wrote:
> Maybe the discussions you recall aren't fully indexed for searching
> on these keywords or they were in another forum, but thanks for
> giving me the gist of it. It is potentially quite an Achilles heel
> for ZFS though. I've argued locally to migrat
Ian Collins wrote:
> Stephen Le wrote:
>
>> Is it possible to create a custom Jumpstart profile to install Nevada
>> on a RAID-10 rpool?
>>
>
> No, simple mirrors only.
>
Though a finish script could add additional simple mirrors to create
the config his example would have created.
Pr
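A sketch of what such a finish script would attempt (device names are
placeholders, and note that a later post in this thread reports zpool
refusing additional top-level vdevs in the root pool, so treat this as
untested):

  #!/bin/sh
  # hypothetical finish-script fragment: add a second mirror to the root pool
  zpool add rpool mirror c0t2d0s0 c0t3d0s0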
On Wed, 29 Oct 2008, Martti Kuparinen wrote:
> Bob Friesenhahn wrote:
>> AMD Athlon/Opteron dual core likely matches or exceeds
>> Intel quad core for ZFS use due to a less bottlenecked memory channel.
>
> How big is the difference? Does anyone have benchmarking results (maybe even
> when using Z
Hi
>> [EMAIL PROTECTED] ls -la
>> /data/zones/testfs/root/etc/services
>> lrwxrwxrwx 1 root root 15 Oct 13 14:35
>> /data/zones/testfs/root/etc/services ->
>> ./inet/services
>>
>> [EMAIL PROTECTED] ls -la /data/zones/testfs/root/etc/services
>> lrwxrwxrwx 1 root root
I don't remember anyone saying they couldn't be stored, just that if they are
stored, it's not ZFS's fault if they go corrupt, as that's outside its control.
I'm actually planning to store the zfs send dump on external USB devices
myself, but since the USB device will be running on ZFS I expect to
Matt Harrison wrote:
> On Tue, Oct 28, 2008 at 05:45:48PM -0700, Richard Elling wrote:
>
>> I replied to Matt directly, but didn't hear back. It may be a driver issue
>> with checksum offloading. Certainly the symptoms are consistent.
>> To test with a workaround see
>> http://bugs.opensol
>
> However, I don't think that's what they're talking
> about here. I think they're talking about a ZFS pool
> that consists of an external USB device, and doing a
> send / receive directly to that pool. That way the
> USB device is a true backup copy of your ZFS pool,
> and I think the idea is
On Tue, Oct 28, 2008 at 9:27 AM, Morten-Christian Bernson <[EMAIL PROTECTED]>
wrote:
> I have been reading this forum for a little while, and am interested in more
> information about the performance of ZFS when creating large amounts of
> filesystems. We are considering using ZFS for the user
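A quick way to get a feel for this yourself is just to time a batch of zfs
create calls; the pool name and count below are arbitrary placeholders:

  # time creating 1000 filesystems under a hypothetical pool "home"
  time sh -c 'i=0; while [ $i -lt 1000 ]; do zfs create home/user$i; i=$((i+1)); done'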
On Wed, Oct 29, 2008 at 8:43 AM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Wed, 29 Oct 2008, Martti Kuparinen wrote:
>
>> Bob Friesenhahn wrote:
>>> AMD Athlon/Opteron dual core likely matches or exceeds
>>> Intel quad core for ZFS use due to a less bottlenecked memory channel.
>>
>> How big
This seems like a n00b question but I'm stuck.
Nevada build 101. Doing a fresh install (in VMware Fusion). I don't see any way
to select zfs as the root file system. Looks to me like UFS is the default, but
I don't see any option box to allow that to be changed to zfs. What am I
missing?! Thanks.
On Wed, 29 Oct 2008, Peter Baer Galvin wrote:
> Nevada build 101. Doing fresh install (in vmware fusion). I don't see
> any way to select zfs as the root file system. Looks to me like UFS is
> the default, but I don't see any option box to allow that to be changed
> to zfs. What am I missing?! Tha
Hi Peter,
It's there, you just can't use the GUI installer. You have to choose
the text interactive installer. It'll give you the choice there.
Regards.
Original Message
Subject: [zfs-discuss] zfs boot / root in Nevada build 101
From: Peter Baer Galvin <[EMAIL PROTECTED]>
Hi Peter,
You need to select the text-mode install option to select a ZFS root
file system.
Other ZFS root installation tips are described here:
http://docs.sun.com/app/docs/doc/817-2271/zfsboot-1?a=view
I'll be attending Richard Elling's ZFS workshop at LISA08.
Hope to see you. :-)
Cindy
Pe
* Peter Baer Galvin ([EMAIL PROTECTED]) wrote:
> This seems like a n00b question but I'm stuck.
>
> Nevada build 101. Doing fresh install (in vmware fusion). I don't see
> any way to select zfs as the root file system. Looks to me like UFS is
> the default, but I don't see any option box to allow
We have a 24-disk server, so the current design is 2-disk root mirror and 2x
11-disk RAIDZ2 vdevs. I suppose another solution could have been to have 3x
7-disk vdevs plus a hot spare, but the capacity starts to get compromised.
Using 1TB disks in our current config will give us growth capacity t
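As a sketch, the data half of that layout would be created along these lines
(device names are placeholders; the 2-disk root mirror comes from the
installer):

  zpool create tank \
      raidz2 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 \
             c1t9d0 c1t10d0 c1t11d0 c1t12d0 \
      raidz2 c1t13d0 c1t14d0 c1t15d0 c1t16d0 c1t17d0 c1t18d0 \
             c1t19d0 c1t20d0 c1t21d0 c1t22d0 c1t23d0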
Ah, thanks much all!
Cindy I'll try to hit the Elling talk too but my time in San Diego will be
short, unfortunately. I'll keep an eye out for you...
Just occurred to me that S10U6 won't support ZFS root install via the GUI
either? That will be confusing...
Hi Matt
Can you just confirm if that Ethernet capture file, that you made available,
was done on the client, or on the server. I'm beginning to suspect you
did it on the client.
You can get a capture file on the server (OpenSolaris) using the 'snoop'
command, as per one of my previous emails. You
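For reference, a server-side capture would look something like this (the
interface name and client address are placeholders for your setup):

  # capture the SMB/CIFS conversation on the OpenSolaris server
  snoop -d bge0 -o /tmp/smb-server.cap host 192.168.1.50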
Rats, text install hangs after printing out that very early set of "..".
More debugging...or I could wait for S10U6...
Good point and we've tried to document this issue all over the place
and will continue to publicize this fact.
With the new ZFS boot and install features, it is a good idea to read
the docs first. Tell your friends.
I will send out a set of s10 10/08 doc pointers as soon as they are
available. Thanks.
Al Hopper wrote:
> On Wed, Oct 29, 2008 at 8:43 AM, Bob Friesenhahn
> <[EMAIL PROTECTED]> wrote:
>
>> On Wed, 29 Oct 2008, Martti Kuparinen wrote:
>>
>>
>>> Bob Friesenhahn wrote:
>>>
>>>> AMD Athlon/Opteron dual core likely matches or exceeds
>>>> Intel quad core for ZFS use due to
Dunno about the text installer mentioned in other replies, as I never use it.
JumpStart installs are working fine though with ZFS root.
I am also pre-creating some additional filesystems etc. in a finish script.
> ECC?
$60 unbuffered 4GB 800MHz DDR2 ECC CL5 DIMM (Kit Of 2)
http://www.provantage.com/kingston-technology-kvr800d2e5k2-4g~7KIN90H4.htm
for Intel 32x0 north bridge like
http://www.provantage.com/supermicro-x7sbe~7SUPM11K.htm
> "r" == Ross <[EMAIL PROTECTED]> writes:
r> I don't remember anyone saying they couldn't be stored, just
r> that if they are stored it's not ZFS' fault if they go corrupt
r> as it's outside of its control.
they can't be stored because they ``go corrupt'' in a transactional
I don't think this is possible.
I already tried to add extra vdevs after install, but I got an error message
telling me that multiple vdevs for rpool are not allowed.
K
kristof wrote:
> I don't think this is possible.
>
> I already tried to add extra vdevs after install, but I got an error message
> telling me that multiple vdevs for rpool are not allowed.
>
> K
>
Oh. Ok. Good to know.
I always put all my 'data' diskspace in a separate pool anyway to make
mi
> "ns" == Nigel Smith <[EMAIL PROTECTED]> writes:
ns> the bnx driver is closed source :-(
The GPL'd Linux driver is contributed by Broadcom:
http://git.kernel.org/?p=linux/kernel/git/stable/linux-2.6.27.y.git;a=blob;f=drivers/net/bnx2.c;h=2486a656f12d9f47ff27ead587e084a3c337a1a3;hb=HE
On Wed, Oct 29, 2008 at 10:01:09AM -0700, Nigel Smith wrote:
> Hi Matt
> Can you just confirm if that Ethernet capture file, that you made available,
> was done on the client, or on the server. I'm beginning to suspect you
> did it on the client.
That capture was done from the client
> You can ge
Hi Cindy, I googled quite a lot before posting my question. This issue isn't
mentioned in the ZFS boot FAQ, for example, or anywhere (that I saw) on the
OpenSolaris ZFS pages. Of course I could have read the ZFS Admin book at
docs.sun.com...
Hi all,
I have been asked to build a new server and would like to get some opinions on
how to set up a ZFS pool for the application running on the server. The server
will be exclusively for running the NetBackup application.
Now which would be better? Setting up a raidz pool with 6x146gig drives or
On 29 October, 2008 - Mike sent me these 0,7K bytes:
> Hi all,
>
> I have been asked to build a new server and would like to get some
> opinions on how to setup a zfs pool for the application running on the
> server. The server will be exclusively for running netbackup
> application.
>
> Now wh
Hi Peter,
It's mentioned here under Announcements:
http://opensolaris.org/os/community/zfs/boot/
It's just not very obvious.
Original Message
Subject: Re: [zfs-discuss] zfs boot / root in Nevada build 101
From: Peter Baer Galvin <[EMAIL PROTECTED]>
To: zfs-discuss@opensolaris.or
Kyle McDonald wrote:
Ian Collins wrote:
Stephen Le wrote:
Is it possible to create a custom Jumpstart profile to install Nevada
on a RAID-10 rpool?
No, simple mirrors only.
Though a finish script could add additional simple mirrors to create
the config his exa
All;
I have a large zfs tank with four raidz2 groups in it. Each of these groups
is 11 disks, and I have four hot spare disks in the system. The system is
running Open Solaris build snv_90. One of these groups has had a disk
failure, which the OS correctly detected, and replaced with one of the
By 'better' I meant the best practice for a server running the NetBackup
application.
I am not seeing how using raidz would be a performance hit. Usually stripes
perform faster than mirrors.
Hi Miles,
I probably should have explained that storing the zfs send on a USB device is
just one part of the strategy, and in fact that's just our way of getting the
backup off-site.
Once off-site, we do zfs receive that into another pool, and in fact we plan to
have two offsite zfs pools, plu
On Wed, 29 Oct 2008, Tomas Ögren wrote:
> The raidz option will give you more storage at less performance.. The
> mirror thing has the possibility of achieving higher reliability.. 1 to
> 3 disks can fail without interruptions, depending on how Murphy picks
> them.. The raidz1 one can handle 1 disk only
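To make the two options concrete, here is roughly what each layout looks like
for six drives (pool and device names are placeholders):

  # option 1: one raidz vdev -- more usable space, fewer random IOPS
  zpool create bkpool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

  # option 2: three mirror pairs -- less space, better IOPS for NetBackup
  zpool create bkpool mirror c1t0d0 c1t1d0 \
                      mirror c1t2d0 c1t3d0 \
                      mirror c1t4d0 c1t5d0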
On Wed, 29 Oct 2008, Mike wrote:
> I am not seeing how using raidz would be a performance hit. Usually
> stripes perform faster than mirrors.
The mirrors load-share and offer a lot more disk seeking capacity
(more IOPS).
Bob
==
Bob Friesenhahn
[EMAIL PROTECT
On Wed, Oct 29, 2008 at 3:42 PM, Mike <[EMAIL PROTECTED]> wrote:
> By Better I meant the best practice for a server running the Netbackup
> application.
>
> I am not seeing how using raidz would be a performance hit. Usually stripes
> perform faster than mirrors.
raidz performs reads from all devices in the stripe, so a raidz vdev delivers
roughly the random-read IOPS of a single disk.
$ zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool                        48.4G  10.6G    31K  /rpool
rpool/ROOT                   36.4G  10.6G    18K  /rpool/ROOT
rpool/ROOT/snv_90_zfs        29.6G  10.6G  29.3G  /.alt.tmp.b-Ugf.mnt/
rpool/ROOT/[EMAIL PROTECTED]  319
On 29 October, 2008 - Michael Stalnaker sent me these 32K bytes:
> All;
>
> I have a large zfs tank with four raidz2 groups in it. Each of these groups
> is 11 disks, and I have four hot spare disks in the system. The system is
> running Open Solaris build snv_90. One of these groups has had a
ns> I will raise this as a bug, but first please would you run
ns> '/usr/X11/bin/scanpci'
ns> to identify the exact 'vendor id' and 'device id' for the Broadcom network
ns> chipset, and report that back here
Primary network interface Embedded NIC:
pci bus 0x0005 cardnum 0x00 function 0x00: vendor 0x14e4
Rob Logan wrote:
> > ECC?
>
> $60 unbuffered 4GB 800MHz DDR2 ECC CL5 DIMM (Kit Of 2)
> http://www.provantage.com/kingston-technology-kvr800d2e5k2-4g~7KIN90H4.htm
Geez, I have to move to the US for cheap hardware. I've paid 120€ for
exactly that 4GB ECC kit (well, I bought two of these, so 240€)
Karl Rossing wrote:
> $ zfs list
> NAME                          USED  AVAIL  REFER  MOUNTPOINT
> rpool                        48.4G  10.6G    31K  /rpool
> rpool/ROOT                   36.4G  10.6G    18K  /rpool/ROOT
> rpool/ROOT/snv_90_zfs        29.6G  10.6G  29.3G  /.alt.tmp.b-Ugf.mnt/
> rp
Hi Matt
In your previous capture (which you have now confirmed was done on the
Windows client), all those 'Bad TCP checksum' packets sent by the client
are explained, because you must be doing hardware TCP checksum offloading
on the client network adaptor. WireShark will capture the packets before
the adaptor has calculated the checksum.
On Wed, Oct 29, 2008 at 05:32:39PM -0700, Nigel Smith wrote:
> Hi Matt
> In your previous capture, (which you have now confirmed was done
> on the Windows client), all those 'Bad TCP checksum' packets sent by the
> client,
> are explained, because you must be doing hardware TCP checksum offloadin
oops, meant to reply-all...
-- Forwarded message --
From: Terry Heatlie <[EMAIL PROTECTED]>
Date: Wed, Oct 29, 2008 at 8:14 PM
Subject: Re: [zfs-discuss] zpool import problem
To: Eric Schrock <[EMAIL PROTECTED]>
well, this does seem to be the case:
bash-3.2# dtrace -s raidz_open
Charles Menser <...@gmail.com> writes:
>
> Nearly every time I scrub a pool I get small numbers of checksum
> errors on random drives on either controller.
These are the typical symptoms of bad RAM/CPU/Mobo. Run memtest for 24h+.
-marc
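If you want a ZFS-side check while memtest runs, repeated scrubs and the
per-device error counters are the usual approach (the pool name is a
placeholder):

  # scrub, inspect per-device checksum counters, clear, repeat
  zpool scrub tank
  zpool status -v tank
  # errors that move between drives/controllers across runs point at
  # shared hardware (RAM, CPU, motherboard) rather than the disks
  zpool clear tank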