That would surprise me. Could it be that you are saturating the PCI slot
your 2342 card sits in? IIRC, not every slot on a V240 can handle a dual-port
2342 card going at full rate.
I didn't understand what you mean.
There are only 3 slots on a V240; which of them cannot handle a dual-port HBA?
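(Rough back-of-the-envelope numbers, assuming the usual V240 layout of one
64-bit/66 MHz PCI slot and two 64-bit/33 MHz slots: a 33 MHz, 64-bit slot tops
out around 8 bytes × 33 MHz ≈ 266 MB/s, while a dual-port 2 Gb/s FC HBA such as
the 2342 can move roughly 2 × 200 MB/s ≈ 400 MB/s, so only the 66 MHz slot
(≈ 533 MB/s) has the headroom for both ports at full rate. Check the V240 docs
for the actual slot speeds.)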
Hello Robert,
it would be really interesting if you could add an HD RAID-10 LUN with UFS to your
comparison.
gino
On 04/24/07 01:37, Richard Elling wrote:
Leon Koll wrote:
My guess is that Yaniv assumes that 8 pools with 62.5 million files each
have a significantly lower chance of being corrupted or causing data loss
than 1 pool with 500 million files in it.
Do you agree with this?
I do not agree with this
Hello Brian,
Thursday, April 26, 2007, 3:55:16 AM, you wrote:
BG If I recall, the dump partition needed to be at least as large as RAM.
BG In Solaris 8(?) this changed, in that crash dump streams were
BG compressed as they were written out to disk. Although I've never read
BG this anywhere, I
Hello Ron,
Tuesday, April 24, 2007, 4:54:52 PM, you wrote:
RH Thanks Robert. This will be put to use.
Please let us know about the results.
--
Best regards,
Robert  mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
On Wed, Apr 25, 2007 at 09:55:16PM -0400, Brian Gupta wrote:
In Solaris 8(?) this changed, in that crash dump streams were
compressed as they were written out to disk. Although I've never read
this anywhere, I assumed the reasons this was done are as follows:
What happens if the dump slice
On Wed, Apr 25, 2007 at 09:30:12PM -0700, Richard Elling wrote:
IMHO, only a few people in the world care about dumps at all (and you
know who you are :-). If you care, set up dump to an NFS server somewhere,
no need to have it local.
a) what does this entail
b) with zvols not supporting
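(A rough sketch of what (a) might entail, assuming the suggestion just means
pointing savecore's output at an NFS-mounted directory; the host and paths
below are made up, and the dump device itself still has to be local:)

mount -F nfs dumphost:/export/crash /var/crash/remote   # hypothetical server and paths
dumpadm -s /var/crash/remote                            # savecore writes here after a panic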
The Xraid is a very well-thought-of storage device with a heck of a price
point. Attached is an image of the Settings/Performance screen where
you can see Allow Host Cache Flushing.
I think when you use ZFS, it would be best to uncheck that box.
This is what happens when you use a GUI in your
I was able to duplicate this problem on a test Ultra 10. I put in a workaround
by adding a service that depends on /milestone/multi-user-server which does a
'zfs share -a'. It's strange this hasn't happened on other systems, but maybe
it's related to slower systems...
Ben
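(For anyone wanting to reproduce the workaround, a minimal sketch of such a
transient SMF service follows; the manifest path, service name and timeouts
are illustrative, not necessarily what Ben actually used:)

# write an illustrative manifest and import it
cat > /var/svc/manifest/site/zfs-share.xml <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE service_bundle SYSTEM "/usr/share/lib/xml/dtd/service_bundle.dtd.1">
<service_bundle type='manifest' name='site-zfs-share'>
  <service name='site/zfs-share' type='service' version='1'>
    <create_default_instance enabled='true'/>
    <single_instance/>
    <!-- run only after the multi-user-server milestone is reached -->
    <dependency name='multi-user-server' grouping='require_all'
        restart_on='none' type='service'>
      <service_fmri value='svc:/milestone/multi-user-server'/>
    </dependency>
    <exec_method type='method' name='start'
        exec='/usr/sbin/zfs share -a' timeout_seconds='60'/>
    <exec_method type='method' name='stop' exec=':true' timeout_seconds='60'/>
    <!-- transient: no long-running process to monitor -->
    <property_group name='startd' type='framework'>
      <propval name='duration' type='astring' value='transient'/>
    </property_group>
  </service>
</service_bundle>
EOF
svccfg import /var/svc/manifest/site/zfs-share.xml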
On Thu, 26 Apr 2007, Ben Miller wrote:
I just rebooted this host this morning and the same thing happened again. I
have the core file from zfs.
[ Apr 26 07:47:01 Executing start method (/lib/svc/method/nfs-server start) ]
Assertion failed: pclose(fp) == 0, file ../common/libzfs_mount.c,
Hello,
I wonder if the subject of this email is not self-explanatory?
Okay, let's say that it is not. :)
Imagine that I set up a box:
- with Solaris
- with many HDs (directly attached).
- use ZFS as the FS
- export the Data with NFS
- on an UPS.
Then after reading the :
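(An illustrative sketch of that kind of setup; the pool, disk and filesystem
names below are made up:)

zpool create tank c1t1d0 c1t2d0 c1t3d0 c1t4d0   # pool across the directly attached disks
zfs create tank/export
zfs set sharenfs=rw tank/export                 # clients then NFS-mount server:/tank/export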
On 4/26/07, cedric briner [EMAIL PROTECTED] wrote:
Okay, let's say that it is not. :)
Imagine that I set up a box:
- with Solaris
- with many HDs (directly attached).
- use ZFS as the FS
- export the Data with NFS
- on an UPS.
Then after reading the :
You might set zil_disable to 1 (_then_ mount the fs to be
shared). But you're still exposed to OS crashes; those would
still corrupt your NFS clients.
-r
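(A sketch of how zil_disable was typically flipped at the time; it disables the
ZIL for the whole system, so this is an illustration of Roch's remark, not a
recommendation:)

echo 'set zfs:zil_disable = 1' >> /etc/system   # persistent, takes effect after a reboot
echo 'zil_disable/W 1' | mdb -kw                # live, on a running kernel

Either way, as Roch notes, the filesystem has to be (re)mounted afterwards for
the change to take hold on the share.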
cedric briner writes:
Hello,
I wonder if the subject of this email is not self-explanatory?
Okay, let's say that it is not. :)
Imagine that I set up a box:
- with Solaris
- with many HDs (directly attached).
- use ZFS as the FS
- export the Data with NFS
- on an UPS.
Then after reading the :
So first of all, we're not proposing dumping to a filesystem.
We're proposing dumping to a zvol, which is a raw volume
implemented within a pool (see the -V option to the zfs
create command). As Malachi points out, the advantage
of this is that it simplifies the ongoing administration. You
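(A sketch of what configuring a zvol as the dump device might look like once
that is supported; the pool/volume name and the 2g size are purely illustrative:)

zfs create -V 2g tank/dump              # create a 2 GB raw volume inside the pool
dumpadm -d /dev/zvol/dsk/tank/dump      # point the system dump device at it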
cedric briner wrote:
You might set zil_disable to 1 (_then_ mount the fs to be
shared). But you're still exposed to OS crashes; those would still
corrupt your NFS clients.
-r
Hello Roch,
I have a few questions.
1)
from:
Shenanigans with ZFS flushing and intelligent arrays...
On Wed, 2007-04-25 at 21:30 -0700, Richard Elling wrote:
Brian Gupta wrote:
Maybe a dumb question, but why would anyone ever want to dump to an
actual filesystem? (Or is my head thinking too Solaris?)
IMHO, only a few people in the world care about dumps at all (and you
know who you are
Don't mean to be a pest, but is there an ETA on when the b62_zfsboot.iso will
be posted?
I'm really looking forward to ZFS root, but I'd rather download a working DVD
image than attempt to patch the image myself :-)
cheers and thanks,
-bp
Hello Wee,
Thursday, April 26, 2007, 4:21:00 PM, you wrote:
WYT On 4/26/07, cedric briner [EMAIL PROTECTED] wrote:
Okay, let's say that it is not. :)
Imagine that I set up a box:
- with Solaris
- with many HDs (directly attached).
- use ZFS as the FS
- export the Data with NFS
- on
Peter Tribble wrote:
On 4/24/07, Darren J Moffat [EMAIL PROTECTED]
wrote:
With reference to Lori's blog posting[1] I'd like to throw out a few of
my thoughts on splitting up the namespace.
Just a plea with my sysadmin hat on - please don't go overboard
and make new filesystems just
Lori Alt wrote:
Benjamin Perrault wrote:
Don't mean to be a pest, but is there an ETA on when the
b62_zfsboot.iso will be posted?
I'm really looking forward to ZFS root, but I'd rather download a
working DVD image than attempt to patch the image myself :-)
Actually, we hadn't planned to
Ming,
Let's take a pro example with a minimal performance tradeoff.
All FSs that modify a disk block, IMO, do a full disk block read
before anything.
If doing an extended write and moving to a larger block size with COW,
you give yourself
the
Just an interesting side note: network-based logging isn't always a bad
thing. I'll give you an example. My Netgear router will crash within 1/2
hour if I turn local logging on. However, it has no problems sending the
logs via syslog to another machine.
Just a thought.
Mal
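(For the curious, the receiving end of that kind of setup on a Solaris host is
just a syslog.conf entry; the facility and log file below are made up, and
syslogd must also be allowed to accept remote messages, which is configured
differently across releases:)

# fragment of /etc/syslog.conf on the log host
# (separate the selector and the file with a tab)
local7.info	/var/log/netgear.log

The router is then pointed at the log host's address as its remote syslog target.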
On 4/26/07,