On Fri, Jun 26, 2009 at 10:14 AM, Brent Jones <br...@servuhome.net> wrote:
On Thu, Jun 25, 2009 at 12:00 AM, James Lever <j...@jamver.id.au> wrote:
On 25/06/2009, at 4:38 PM, John Ryan wrote:
Can I ask the same question - does anyone know when the 113 build will
show up on pkg.opensolaris.org/dev?
snip
Snapshots are significantly faster as well. My average transfer speed
went from about 15MB/sec to over 40MB/sec. I imagine that 40MB/sec is
now a limitation of the CPU, as I can see SSH maxing out a single core
on the quad cores.
Maybe SSH can be made multi-threaded next? :)
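A common workaround for the SSH bottleneck (just a sketch, not something from this thread; hostnames and dataset names below are made up) is to use a cheaper cipher, or to take SSH out of the data path entirely with something like mbuffer:

  # cheaper cipher (arcfour may not be enabled in every SSH build):
  zfs send tank/data@snap | ssh -c arcfour otherhost zfs receive -F tank/data

  # or bypass SSH with mbuffer -- on the receiving host:
  mbuffer -s 128k -m 512M -I 9090 | zfs receive -F tank/data
  # and on the sending host:
  zfs send tank/data@snap | mbuffer -s 128k -m 512M -O otherhost:9090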
I need some help please.
I am currently using a CompactFlash card as the (ZFS) boot drive on a thumper.
I want to create a clone of it to a file and restore it to a new CF card on
another thumper. Unfortunately, a CF card of the exact same make, model and size
has a few fewer cylinders, so a straight dd of
Can't you just boot from an OpenSolaris CD, create a ZFS pool on the new
device, and do a ZFS send/receive directly to it? So long as there's
enough space for the data, a send/receive won't care at all that the devices
are different sizes.
I don't know what you need to do to a pool to
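A rough sketch of that approach (the pool, device, dataset and snapshot names below are made up, and the boot-related steps at the end are from memory, so double-check them for your build):

  # on the target thumper, booted from the OpenSolaris CD,
  # create a pool on the new CF card:
  zpool create -f newrpool c0t0d0s0

  # on the source thumper, snapshot the whole boot pool and send it across:
  zfs snapshot -r rpool@migrate
  zfs send -R rpool@migrate | ssh target-thumper zfs receive -Fd newrpool

  # make the new pool bootable (x86): point bootfs at the received root
  # dataset and install grub on the new CF card
  zpool set bootfs=newrpool/ROOT/snv_111 newrpool
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0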
On 28 Jun 2009, at 11:22, Daniel J. Priem wrote:
snip
Snapshots are significantly faster as well. My average transfer speed
went from about 15MB/sec to over 40MB/sec. I imagine that 40MB/sec is
now a limitation of the CPU, as I can see SSH maxing out a single core
on the quad cores.
Maybe SSH can be made multi-threaded next? :)
The Emulex fiber channel device driver supports a max-xfer-size
tunable (via /kernel/drv/emlxs.conf) which only applies to i386.
This tunable specifies the scatter gather list buffer size. By
default, this value is set to 339968 for Solaris 10.
Today I experimented with doubling this value
On Sun, 28 Jun 2009, Bob Friesenhahn wrote:
Today I experimented with doubling this value to 688128 and was happy to see
a large increase in sequential read performance from my ZFS pool, which is
based on six mirror vdevs. Sequential read performance jumped from 552787
KB/s to 799626 KB/s (roughly 540 MB/s to 780 MB/s).
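For anyone who wants to try the same tweak, the tunable goes in the driver's .conf file; a minimal sketch (value taken from the test above; the driver has to be reloaded or the box rebooted before it takes effect):

  # /kernel/drv/emlxs.conf
  # scatter/gather list buffer size (i386 only); the Solaris 10 default is 339968
  max-xfer-size=688128;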
This is S10U7 fully patched, not OpenSolaris, but I would appreciate any
advice on the following transient "permanent error" message generated
while running a zpool scrub.
# zpool status -v rpool
pool: rpool
state: ONLINE
status: One or more devices has experienced an error resulting in
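For transient errors like this, a reasonable sequence (just a sketch, not advice specific to this pool) is to let the scrub finish, note anything listed under the errors: section, then clear the counters and scrub again to see whether the errors come back:

  # zpool status -v rpool     (check the error counts and any files listed)
  # zpool clear rpool         (reset the error counters)
  # zpool scrub rpool         (errors that return on a clean re-scrub point at real hardware trouble)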