Bill Sommerfeld wrote:
On Thu, 2007-05-10 at 10:10 -0700, Jürgen Keil wrote:
Btw: In one experiment I tried to boot the kernel under kmdb
control (-kd), patched minclsyspri := 61 and used a
breakpoint inside spa_active() to patch the spa_zio_* taskq
to use prio 60 when importing the gzip
Robert Thurlow wrote:
I've written some about a 4-drive Firewire-attached box based on the
Oxford 911 chipset, and I've had I/O grind to a halt in the face of
media errors - see bugid 6539587. I haven't played with USB drives
enough to trust them more, but this was a hole I fell in with
I use a Supermicro AOC-SAT2-MV8.
It is 8-port SATA2, JBOD only, literally plug-and-play (sol10u3), and only
about 100 EUR.
It is PCI-X, but mine is plugged into a plain PCI slot/mobo and works fine.
(I don't know how much better it would perform in a PCI-X slot/mobo.)
I bought mine here:
On Fri, 11 May 2007, Sophia Li wrote:
On 5/10/07, Al Hopper [EMAIL PROTECTED] wrote:
My personal opinion is that USB is not robust enough under (Open)Solaris
to provide the reliability that someone considering ZFS is looking for.
I base this on
I've noticed a similar behavior in my writes. ZFS seems to write in bursts of
around 5 seconds. I assume it's just something to do with caching? I was
watching the drive lights on the T2000s with 3 disk raidz and the disks all
blink a couple seconds then are solid for a few seconds.
Is this
On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:
Hi,
I have a test server that I use for testing my different jumpstart
installations. This system is continuously installed and
reinstalled with different system builds.
For some builds I have a finish script that creates a zpool using
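For what it's worth, a minimal finish-script fragment for that kind of repeatedly rebuilt test box might look like the sketch below. The pool name and disk names are invented placeholders; -f is used because the disks usually still carry labels from the previous build.

```shell
#!/bin/sh
# JumpStart finish-script sketch: recreate a test pool on every install.
# 'testpool' and the cNtNd0 device names are illustrative placeholders.

# -f forces creation even when the disks still hold a label from the
# previous install of this repeatedly rebuilt box.
zpool create -f testpool raidz c1t1d0 c1t2d0 c1t3d0
zfs set mountpoint=/export/testpool testpool
```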
On Fri, 2007-05-11 at 09:00 -0700, lonny wrote:
I've noticed a similar behavior in my writes. ZFS seems to write in bursts of
around 5 seconds. I assume it's just something to do with caching?
Yep - the ZFS equivalent of fsflush. Runs more often so the pipes don't
get as clogged. We've had
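One way to see that pattern from the host side (the pool name 'tank' is a placeholder) is per-second pool statistics; writes show up as a few quiet samples followed by a burst as each transaction group syncs:

```shell
# Sample pool I/O once per second and watch the write columns pulse
# every few seconds as a transaction group is pushed to disk.
zpool iostat tank 1
```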
Just my problem too ;) And ZFS disappointed me big time here!
I know ZFS is new and every desired feature isn't implemented yet. I hope and
believe more features are coming soon, so I think I'll stay with ZFS and
wait..
My idea was to start out with just as many state-of-the-art size disks I
On May 11, 2007, at 9:09 AM, Bob Netherton wrote:
On Fri, 2007-05-11 at 09:00 -0700, lonny wrote:
I've noticed a similar behavior in my writes. ZFS seems to write in bursts of
around 5 seconds. I assume it's just something to do with caching?
Yep - the ZFS equivalent of fsflush. Runs
lonny wrote:
On May 11, 2007, at 9:09 AM, Bob Netherton wrote:
On Fri, 2007-05-11 at 09:00 -0700, lonny wrote:
I've noticed a similar behavior in my writes. ZFS seems to write in bursts of
around 5 seconds. I assume it's just something to do with caching?
Yep - the ZFS equivalent of
Hey Steve,
Not that I can help you out but I'm in the same boat. I'm using nv63
with the zfsbootkit. I've built a DVD after patching the netinstall.
The instructions work fine, but I get the boot loop you describe. I
haven't been able to catch the error yet, even with a boot
zfs boot should work on b62, but does not work on b63 (see bug
6553537). This bug is supposed to be fixed in b65 (I'm testing the
most recent nevada bits today to verify the fix).
I'm not sure what's up with the build 62 problem that Steffen
is having. Steffen, if you'll send me more
Hey All,
Is it possible (or even technically feasible) for zfs to have a
'destroy to' feature? Basically, destroy any snapshot older than a
certain date?
Best Regards,
Jason
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Fri, 11 May 2007, Jason J. W. Williams wrote:
Is it possible (or even technically feasible) for zfs to have a 'destroy
to' feature? Basically, destroy any snapshot older than a certain date?
Sorta-kinda. You can use 'zfs get' to get the creation time of a
snapshot. If you give it -p, it'll
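Concretely, that approach might look like the following sketch. The snapshot name tank/home@2007-05-01 is made up; -H suppresses the header and -p prints the creation time as raw seconds since the epoch, which is easy to compare in a script.

```shell
# Hedged sketch: read a snapshot's creation time in epoch seconds.
# Guarded with 'if' so it degrades quietly on a box without the pool.
if created=$(zfs get -H -p -o value creation tank/home@2007-05-01 2>/dev/null)
then
    echo "created at epoch ${created}"
fi

# A cutoff to compare against, e.g. "30 days ago" in epoch seconds:
cutoff=$(( $(date +%s) - 30 * 86400 ))
echo "cutoff: ${cutoff}"
```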
Hi Mark,
Thank you very much. That's what I was kind of afraid of. It's fine to
script it; it would just be nice to have a built-in function. :-) Thank
you again.
Best Regards,
Jason
On 5/11/07, Mark J Musante [EMAIL PROTECTED] wrote:
On Fri, 11 May 2007, Jason J. W. Williams wrote:
Is it
Bruce Shaw wrote:
Mark J Musante [EMAIL PROTECTED] wrote:
Maybe I'm misunderstanding what you're saying, but 'zfs clone' is exactly
the way to mount a snapshot. Creating a clone uses up a negligible amount
of disk space, provided you never write to it. And you can always set
readonly=on if
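As a sketch of that workflow ('zfs clone' and the readonly property are what Mark describes; the names tank/home@monday and tank/home_monday are invented):

```shell
# Snapshots aren't directly mountable read-write; a clone gives you a
# browsable filesystem that shares all its blocks with the snapshot.
zfs clone tank/home@monday tank/home_monday

# Keep it read-only so the clone never diverges from the snapshot
# (and so its space cost stays negligible):
zfs set readonly=on tank/home_monday
```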
Jason J. W. Williams wrote:
Hi Mark,
Thank you very much. That's what I was kind of afraid of. It's fine to
script it; it would just be nice to have a built-in function. :-) Thank
you again.
Note, when writing such a script, you will get the best performance by
destroying the snapshots in order
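Putting the -p suggestion and this ordering note together, such a script might look like the sketch below. The dataset name, age threshold, and overall shape are assumptions for illustration, not an existing tool; 'zfs list -s creation' sorts oldest-first so the destroys happen in creation order.

```shell
#!/bin/sh
# Hedged sketch: destroy snapshots of $DATASET older than $AGE_DAYS days.
# The defaults below are placeholders for illustration.
DATASET=${DATASET:-tank/home}
AGE_DAYS=${AGE_DAYS:-30}

# Anything created before this epoch-seconds cutoff is fair game.
cutoff=$(( $(date +%s) - AGE_DAYS * 86400 ))

# -s creation lists the oldest snapshots first, so destroys happen in
# creation order (the fast path described above).
zfs list -H -t snapshot -o name -s creation -r "$DATASET" 2>/dev/null |
while IFS= read -r snap; do
    created=$(zfs get -H -p -o value creation "$snap")
    if [ "$created" -lt "$cutoff" ]; then
        printf 'destroying %s\n' "$snap"
        zfs destroy "$snap"
    fi
done
```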
That would be a great RFE. Currently the iSCSI Alias is the dataset name
which should help with identification.
Adam
On Fri, May 04, 2007 at 02:02:34PM +0200, cedric briner wrote:
cedric briner wrote:
hello dear community,
Is there a way to have a ``local_name'' as defined in iscsitadm(1M)
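For context on the alias Adam mentions, the flow looks roughly like the sketch below. The dataset name and size are invented, and this assumes the shareiscsi property from that era (pre-COMSTAR) together with the iscsitadm target listing.

```shell
# Export a zvol as an iSCSI target; the target's Alias defaults to the
# dataset name, which is what helps identify it from the initiator.
zfs create -V 10g tank/vol1
zfs set shareiscsi=on tank/vol1

# The dataset name should appear as the Alias in the target listing:
iscsitadm list target -v
```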