to 100MB/s (in the brief time I looked at it).
Is there a setting somewhere between slow and ludicrous speed?
-Don
Do you mind sharing more data on this? I would like to see the
spa_scrub_* values I sent you earlier while you're running your test
(in a loop so we can see the changes). What I'm looking for is to see
how many inflight scrubs you have at the time of your run.
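For example, something along these lines should work (untested sketch; ADDR is a placeholder for the spa_t address that the first command prints for your pool):
# echo "::spa" | mdb -k
# while :; do echo "ADDR::print spa_t spa_scrub_inflight" | mdb -k; sleep 5; done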
Thanks,
George
that are at the edge of the disk and some out a third of the way
in. Then you want all the metaslabs which are a third of the way in and
lower to get the bonus. This keeps the allocations towards the outer
edges.
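If you want to see how your metaslabs are currently laid out and used, 'zdb -m' should show the per-vdev metaslab map (the pool name is a placeholder):
# zdb -m tank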
- George
Thanks,
//Jim
or things I should test and I will gladly look into them.
-Don
never seen before.
Thanks,
-Don
abnormally high, especially when compared to the much smaller, better-performing pool I have.
and dive into this mess with me. :)
This value is hard-coded in.
- George
On Fri, Oct 29, 2010 at 9:58 AM, David Magda dma...@ee.ryerson.ca wrote:
On Fri, October 29, 2010 10:00, Eric Schrock wrote:
On Oct 29, 2010, at 9:21 AM, Jesus Cea wrote:
When a file is deleted, its blocks are freed, and that situation is
If your pool is on version 19 then you should be able to import a pool
with a missing log device by using the '-m' option to 'zpool import'.
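For example (the pool name is a placeholder):
# zpool import -m tank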
- George
On Sat, Oct 23, 2010 at 10:03 PM, David Ehrmann ehrm...@gmail.com wrote:
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-
The guid is stored on the mirrored pair of the log and in the pool config.
If your log device was not mirrored then you can only find it in the pool
config.
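One way to dig the guid out of the pool config (pool name is a placeholder; look for the vdev with is_log set and note its guid):
# zdb -C tank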
- George
On Sun, Oct 24, 2010 at 9:34 AM, David Ehrmann ehrm...@gmail.com wrote:
How does ZFS detect that there's a log device attached
Answers below...
Tuomas Leikola wrote:
The endless resilver problem still persists on OI b147. Restarts when it
should complete.
I see no other solution than to copy the data to safety and recreate the
array. Any hints would be appreciated as that takes days unless I can
stop or pause the
Can you post the output of 'zpool status'?
Thanks,
George
LIC mesh wrote:
Most likely an iSCSI timeout, but that was before my time here.
Since then, there have been various individual drives lost along the way
on the shelves, but never a whole LUN, so, theoretically, /except/ for
iSCSI
Tom Bird wrote:
On 18/09/10 09:02, Ian Collins wrote:
In my case, other than an hourly snapshot, the data is not significantly
changing.
It'd be nice to see a response other than "you're doing it wrong";
rebuilding 5x the data on a drive relative to its capacity is clearly
erratic
Chris Murray wrote:
Another hang on zpool import thread, I'm afraid, because I don't seem to have
observed any great successes in the others and I hope there's a way of saving
my data ...
In March, using OpenSolaris build 134, I created a zpool, some zfs filesystems,
enabled dedup on them,
Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Neil Perrin
This is a consequence of the design for performance of the ZIL code.
Intent log blocks are dynamically allocated and chained together.
When reading the
David Magda wrote:
On Wed, August 25, 2010 23:00, Neil Perrin wrote:
Does a scrub go through the slog and/or L2ARC devices, or only the
primary storage components?
A scrub will go through slogs and primary storage devices. The L2ARC
device is considered volatile and data loss is not possible
Edward Ned Harvey wrote:
Add to that:
During scrubs, perform some reads on log devices (even if there's nothing to
read).
We do read from log devices if there is data stored on them.
In fact, during scrubs, perform some reads on every device (even if it's
actually empty.)
Reading from the
The root filesystem on the root pool is set to 'canmount=noauto' so
you need to manually mount it first using 'zfs mount <dataset name>'.
Then run 'zfs mount -a'.
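For example (the dataset name here is just a placeholder for your root filesystem):
# zfs mount rpool/ROOT/snv_134
# zfs mount -a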
- George
On 08/16/10 07:30 PM, Robert Hartzell wrote:
I have a disk which is 1/2 of a boot disk mirror from a failed system
that
Robert Hartzell wrote:
On 08/16/10 07:47 PM, George Wilson wrote:
The root filesystem on the root pool is set to 'canmount=noauto' so you
need to manually mount it first using 'zfs mount <dataset name>'. Then
run 'zfs mount -a'.
- George
mounting the dataset failed because the /mnt dir
Darren,
It looks like you've lost your log device. The newly integrated missing
log support will help once it's available. In the meantime, you should
run 'zdb -l' on your log device to make sure the label is still intact.
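For example (the device path is a placeholder):
# zdb -l /dev/dsk/c1t0d0s0
If all four labels print intact, you should be in good shape once the missing log support is available.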
Thanks,
George
Darren Taylor wrote:
I'm at a loss, I've managed to
To: psarc-...@sun.com
CC: zfs-t...@sun.com
I am sponsoring the following case for George Wilson. Requested binding
is micro/patch. Since this is a straight-forward addition of a command
line option, I think it qualifies for self review
I don't recall seeing this issue before. Best thing to do is file a bug
and include a pointer to the crash dump.
- George
zhihui Chen wrote:
Looks like the txg_sync_thread for this pool has been blocked and
never returns, which leads to many other threads being
blocked. I have tried to
Richard Elling wrote:
On Jun 14, 2010, at 2:12 PM, Roy Sigurd Karlsbakk wrote:
Hi all
It seems zfs scrub is taking a big bit out of I/O when running. During a scrub,
sync I/O, such as NFS and iSCSI is mostly useless. Attaching an SLOG and some
L2ARC helps this, but still, the problem remains
Roy Sigurd Karlsbakk wrote:
Hi all
I've been doing a lot of testing with dedup and concluded it's not really ready
for production. If something fails, it can render the pool unusable for hours
or maybe days, perhaps due to single-threaded stuff in zfs. There is also very
little data
Some new features have recently been integrated into ZFS which have changed
the output of the zpool(1m) command. Here's a quick recap:
1) 6574286 removing a slog doesn't work
This change added the concept of named top-level devices for the purpose
of device removal. The named top-levels are constructed
Moshe Vainer wrote:
I am sorry, I think I confused matters a bit. I meant the bug that prevents
importing with a slog device missing, 6733267.
I am aware that one can remove a slog device, but if you lose your rpool and
the device goes missing while you rebuild, you will lose your pool in
Dennis Clarke wrote:
On Sat, 2009-11-07 at 17:41 -0500, Dennis Clarke wrote:
Does the dedupe functionality happen at the file level or a lower block
level?
it occurs at the block allocation level.
I am writing a large number of files that have the following structure:
-- file begins
1024
Eric Schrock wrote:
On Nov 3, 2009, at 12:24 PM, Cyril Plisko wrote:
I think I'm observing the same (with changeset 10936) ...
# mkfile 2g /var/tmp/tank.img
# zpool create tank /var/tmp/tank.img
# zfs set dedup=on tank
# zfs create tank/foobar
This has to do with the fact that
fyleow wrote:
I have a raidz1 tank of 5x 640 GB hard drives on my newly installed OpenSolaris
2009.06 system. I did a zpool export tank and the process has been running for
3 hours now taking up 100% CPU usage.
When I do a zfs list tank it's still shown as mounted. What's going on here?
But how do we deal with that on older systems, which don't have the
patch applied, once it is out?
Thanks, Alexander
On Tuesday, July 21, 2009, George Wilson george.wil...@sun.com wrote:
Russel wrote:
OK.
So do we have a 'zpool import --txg 56574 mypoolname'
or help to do it (script?)
Russel
We
Russel wrote:
OK.
So do we have a 'zpool import --txg 56574 mypoolname'
or help to do it (script?)
Russel
We are working on the pool rollback mechanism and hope to have that
soon. The ZFS team recognizes that not all hardware is created equal and
thus the need for this mechanism. We are
Ben wrote:
Hi all,
I have a ZFS mirror of two 500GB disks, I'd like to up these to 1TB disks, how
can I do this? I must break the mirror as I don't have enough controller ports on my
system board. My current mirror looks like this:
[b]r...@beleg-ia:/share/media# zpool status share
pool: share
Leonid Zamdborg wrote:
George,
Is there a reasonably straightforward way of doing this partition table edit
with existing tools that won't clobber my data? I'm very new to ZFS, and
didn't want to start experimenting with a live machine.
Leonid,
What you could do is to write a program
David Bryan wrote:
Sorry if the question has been discussed before...did a pretty extensive
search, but no luck...
Preparing to build my first raidz pool. Plan to use 4 identical drives in a 3+1
configuration.
My question is -- what happens if one drive dies, and when I replace it, design
Leonid,
I will be integrating this functionality within the next week:
PSARC 2008/353 zpool autoexpand property
6475340 when lun expands, zfs should expand too
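Once that goes in, usage should presumably be as simple as (pool name is a placeholder):
# zpool set autoexpand=on tank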
Unfortunately, they won't help you until they get pushed to OpenSolaris.
The problem you're facing is that the partition table needs
shyamali.chakrava...@sun.com wrote:
Hi All,
I have a corefile where we see a NULL pointer dereference PANIC, as we
have (deliberately) sent a NULL pointer for the return value.
vdev_disk_io_start()
...
...
error = ldi_ioctl(dvd->vd_lh, zio->io_cmd,
Arne Schwabe wrote:
Am 03.04.2009 2:42 Uhr, schrieb George Wilson:
Arne Schwabe wrote:
Hi,
I have a zpool in a degraded state:
[19:15]{1}a...@charon:~% pfexec zpool import
pool: npool
id: 5258305162216370088
state: DEGRADED
status: The pool is formatted using an older on-disk version
Cyril Plisko wrote:
On Tue, Mar 31, 2009 at 11:01 PM, George Wilson george.wil...@sun.com wrote:
Cyril Plisko wrote:
On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling
richard.ell...@gmail.com wrote:
assertion failures are bugs.
Yup, I know that.
Please file one at http
Arne Schwabe wrote:
Hi,
I have a zpool in a degraded state:
[19:15]{1}a...@charon:~% pfexec zpool import
pool: npool
id: 5258305162216370088
state: DEGRADED
status: The pool is formatted using an older on-disk version.
action: The pool can be imported despite missing or damaged
Cyril Plisko wrote:
On Thu, Mar 26, 2009 at 8:45 PM, Richard Elling
richard.ell...@gmail.com wrote:
assertion failures are bugs.
Yup, I know that.
Please file one at http://bugs.opensolaris.org
Just did.
Do you have a crash dump from this issue?
- George
You may
Matthew Ahrens wrote:
Blake wrote:
zfs send is great for moving a filesystem with lots of tiny files,
since it just handles the blocks :)
I'd like to see:
pool-shrinking (and an option to shrink disk A when I want disk B to
become a mirror, but A is a few blocks bigger)
I'm working on it.
Richard Elling wrote:
David Magda wrote:
On Feb 27, 2009, at 18:23, C. Bergström wrote:
Blake wrote:
Care to share any of those in advance? It might be cool to see input
from listees and generally get some wheels turning...
raidz boot support in grub 2 is pretty high on my list to be
Krister Joas wrote:
Hello.
I have a machine at home on which I have SXCE B96 installed on a root
zpool mirror. It's been working great until yesterday. The root pool
is a mirror with two identical 160GB disks. The other day I added a
third disk to the mirror, a 250 GB disk. Soon
Kyle McDonald wrote:
David Magda wrote:
Quite often swap and dump are the same device, at least in the
installs that I've worked with, and I think the default for Solaris
is that if dump is not explicitly specified it defaults to swap, yes?
Is there any reason why they should be
Unfortunately not.
Thanks,
George
Par Kansala wrote:
Hi,
Will the upcoming zfs boot capabilities also enable write cache on a
boot disk
like it does on regular data disks (when whole disks are used) ?
//Par
Just to clarify a bit, ZFS will not enable the write cache for the root
pool. That said, there are disk drives which have the write cache
enabled by default. That behavior remains unchanged.
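If you want to check a drive's current setting, the expert mode of format should show it (the disk is selected interactively):
# format -e
format> cache
cache> write_cache
write_cache> display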
- George
George Wilson wrote:
Unfortunately not.
Thanks,
George
Par Kansala wrote:
Hi
Mike Gerdts wrote:
On Feb 15, 2008 2:31 PM, Dave [EMAIL PROTECTED] wrote:
This is exactly what I want - Thanks!
This isn't in the man pages for zfs or zpool in b81. Any idea when this
feature was integrated?
Interesting... it is in b76. I checked several other releases both
Vincent Fox wrote:
Let's say you are paranoid and have built a pool with 40+ disks in a Thumper.
Is there a way to set metadata copies=3 manually?
After having built RAIDZ2 sets with 7-9 disks and then pooled these together,
it just seems like a little bit of extra insurance to increase
Krzys wrote:
hello folks, I am running Solaris 10 U3 and I have a small problem that I don't
know how to fix...
I had a pool of two drives:
bash-3.00# zpool status
pool: mypool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
The latest ZFS patches for Solaris 10 are now available:
120011-14 - SunOS 5.10: kernel patch
120012-14 - SunOS 5.10_x86: kernel patch
ZFS Pool Version available with patches = 4
These patches will provide access to all of the latest features and bug
fixes:
Features:
PSARC 2006/288 zpool
Bernhard,
Here are the solaris 10 patches:
120011-14 - SunOS 5.10: kernel patch
120012-14 - SunOS 5.10_x86: kernel patch
See http://www.opensolaris.org/jive/thread.jspa?threadID=39951&tstart=0
for more info.
Thanks,
George
Bernhard Holzer wrote:
Hi,
this parameter (zfs_nocacheflush) is
ZFS Fans,
Here's a list of features that we are proposing for Solaris 10u5. Keep
in mind that this is subject to change.
Features:
PSARC 2007/142 zfs rename -r
PSARC 2007/171 ZFS Separate Intent Log
PSARC 2007/197 ZFS hotplug
PSARC 2007/199 zfs {create,clone,rename} -p
PSARC 2007/283 FMA for
You need to install patch 120011-14. After you reboot you will be able
to run 'zpool upgrade -a' to upgrade to the latest version.
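Something like this (the patch location is a placeholder):
# patchadd /var/tmp/120011-14
# init 6
# zpool upgrade -a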
Thanks,
George
sunnie wrote:
Hey, guys
Since the current zfs software only supports ZFS pool version 3, what should I
do to upgrade the zfs software or package?
Ben,
Much of this code has been revamped as a result of:
6514331 in-memory delete queue is not needed
Although this may not fix your issue it would be good to try this test
with more recent bits.
Thanks,
George
Ben Miller wrote:
Hate to re-open something from a year ago, but we just had
The on-disk format for s10u4 will be version 4. This is equivalent to
Opensolaris build 62.
Thanks,
George
David Evans wrote:
As the release date Solaris 10 Update 4 approaches (hope, hope), I was
wondering if someone could comment on which versions of opensolaris ZFS will
seamlessly work
I'm planning on putting back the changes to ZFS into Opensolaris in
upcoming weeks. This will still require a manual step as the changes
required in the sd driver are still under development.
The ultimate plan is to have the entire process totally automated.
If you have more questions, feel
This fix plus the fix for '6495013 Loops and recursion in
metaslab_ff_alloc can kill performance, even on a pool with lots of free
data' will greatly help your situation.
Both of these fixes will be in Solaris 10 update 4.
Thanks,
George
Łukasz wrote:
I have a huge problem with ZFS pool
David Smith wrote:
I was wondering if anyone had a script to parse the zpool status -v output
into a more machine readable format?
Thanks,
David
Peter,
Can you send the 'zpool status -x' output after your reboot. I suspect
that the pool error is occurring early in the boot and later the devices
are all available and the pool is brought into an online state.
Take a look at:
*6401126 ZFS DE should verify that diagnosis is still valid
Peter Goodman wrote:
# zpool status -x
pool: mtf
state: UNAVAIL
status: One or more devices could not be opened. There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
see:
Gino,
Can you send me the corefile from the zpool command? This looks like a
case where we can't open the device for some reason. Are you using a
multi-pathing solution other than MPXIO?
Thanks,
George
Gino wrote:
Today we lost an other zpool!
Fortunately it was only a backup repository.
Gino,
Were you able to recover by setting zfs_recover?
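For reference, this is typically set in /etc/system followed by a reboot (unsupported recovery tunables, use with care):
set aok=1
set zfs:zfs_recover=1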
Thanks,
George
Gino wrote:
Hi All,
here is an other kind of kernel panic caused by ZFS that we found.
I have dumps if needed.
#zpool import
pool: zpool8
id: 7382567111495567914
state: ONLINE
status: The pool is formatted using an
William D. Hathaway wrote:
I'm running Nevada build 60 inside VMWare, it is a test rig with no data of value.
SunOS b60 5.11 snv_60 i86pc i386 i86pc
I wanted to check out the FMA handling of a serious zpool error, so I did the
following:
2007-04-07.08:46:31 zpool create tank mirror c0d1 c1d1
Ihsan,
If you are running Solaris 10 then you are probably hitting:
6456939 sd_send_scsi_SYNCHRONIZE_CACHE_biodone() can issue TUR which
calls biowait() and deadlock/hangs host
This was fixed in opensolaris (build 48) but a patch is not yet
available for Solaris 10.
Thanks,
George
Ihsan
Now that Solaris 10 11/06 is available, I wanted to post the complete list of
ZFS features and bug fixes that were included in that release. I'm also
including the necessary patches for anyone wanting to get all the ZFS features
and fixes via patches (NOTE: later patch revision may already be
storage-disk wrote:
Hi there
I have 3 questions regarding zfs.
1. what are zfs packages?
SUNWzfsr, SUNWzfskr, and SUNWzfsu. Note that ZFS has dependencies on
other components of Solaris, so installing just the packages is not
supported.
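You can verify that they are installed with:
# pkginfo SUNWzfsr SUNWzfskr SUNWzfsu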
2. what services need to be started in order for
Derek,
I don't think 'zpool attach/detach' is what you want as it will always
result in a complete resilver.
Your best bet is to export and re-import the pool after moving
devices. You might also try to 'zpool offline' the device, move it and
then 'zpool online' it. This should force a
Siegfried,
Can you provide the panic string that you are seeing? We should be able
to pull out the persistent error log information from the corefile. You
can take a look at spa_get_errlog() function as a starting point.
Additionally, you can look at the corefile using mdb and take a look at
Derek,
Have you tried doing a 'zpool replace poolname c1t53d0 c2t53d0'? I'm not
sure if this will work but worth a shot. You may still end up with a
complete resilver.
Thanks,
George
Derek E. Lewis wrote:
On Thu, 28 Dec 2006, George Wilson wrote:
Your best bet is to export and re-import
Bill,
If you want to find the file associated with the corruption you could do
a 'find /u01 -inum 4741362' or use the output of 'zdb -d u01' to
find the object associated with that id.
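For example, raising the zdb verbosity should print the object's details, including its path for a plain file:
# zdb -dddd u01 4741362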
Thanks,
George
Bill Casale wrote:
Please reply directly to me. Seeing the message below.
Is it possible
Stuart,
Can you send the output of 'zpool status -v' from both nodes?
Thanks,
George
Stuart Low wrote:
Nada.
[EMAIL PROTECTED] ~]$ zpool export -f ax150s
cannot open 'ax150s': no such pool
[EMAIL PROTECTED] ~]$
I wonder if it's possible to force the pool to be marked as inactive? Ideally
Stuart,
Given that the pool was imported on both nodes simultaneously, it may be
corrupted beyond repair. I'm assuming that the problem on the other node is a
system panic? If so, can you send the panic string from that node?
Thanks,
George
Stuart Low wrote:
I thought that might work too but having
Stuart,
Issuing a 'zpool import' will show all the pools which are accessible
for import and that's why you are seeing them. The fact that a forced
import results in a panic is indicative of pool corruption that
resulted from being imported on more than one host.
Thanks,
George
A fix for this should be integrated shortly.
Thanks,
George
Michael Schuster - Sun Microsystems wrote:
Robert Milkowski wrote:
Hello Michael,
Wednesday, August 23, 2006, 12:49:28 PM, you wrote:
MSSM> Roch wrote:
MSSM> I sent this output offline to Roch, here's the essential ones
and
Neal,
This is not fixed yet. Your best bet is to run a replicated pool.
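For example, a simple mirrored pool (disk names are placeholders):
# zpool create tank mirror c0t0d0 c1t0d0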
Thanks,
George
Neal Miskin wrote:
Hi Dana
It is ZFS bug 6322646; a flaw.
Is this fixed in a patch yet?
nelly_bo
Robert,
One of your disks is not responding. I've been trying to track down why
the scsi command is not being timed out but for now check out each of
the devices to make sure they are healthy.
BTW, if you capture a corefile let me know.
Thanks,
George
Robert Milkowski wrote:
Hi.
S10U2 +
Robert Milkowski wrote:
Hello George,
Thursday, August 24, 2006, 5:48:08 PM, you wrote:
GW> Robert,
GW> One of your disks is not responding. I've been trying to track down why
GW> the scsi command is not being timed out but for now check out each of
GW> the devices to make sure they are
Roch wrote:
Dick Davies writes:
On 22/08/06, Bill Moore [EMAIL PROTECTED] wrote:
On Mon, Aug 21, 2006 at 02:40:40PM -0700, Anton B. Rang wrote:
Yes, ZFS uses this command very frequently. However, it only does this
if the whole disk is under the control of ZFS, I believe; so a
Frank,
The SC 3.2 beta may be closed, but I'm forwarding your request to Eric
Redmond.
Thanks,
George
Frank Cusack wrote:
On August 10, 2006 6:04:38 PM -0700 eric kustarz [EMAIL PROTECTED]
wrote:
If you're doing HA-ZFS (which is SunCluster 3.2 - only available in
beta right now),
Is the
I believe this is what you're hitting:
6456888 zpool attach leads to memory exhaustion and system hang
We are currently looking at fixing this so stay tuned.
Thanks,
George
Daniel Rock wrote:
Joseph Mocker schrieb:
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS
SVM
Luke,
You can run 'zpool upgrade' to see what on-disk version you are capable
of running. If you have the latest features then you should be running
version 3:
hadji-2# zpool upgrade
This system is currently running ZFS version 3.
Unfortunately this won't tell you if you are running the
Leon,
Looking at the corefile doesn't really show much from the zfs side. It
looks like you were having problems with your san though:
/scsi_vhci/[EMAIL PROTECTED] (ssd5) offline
/scsi_vhci/[EMAIL PROTECTED] (ssd5) multipath status: failed, path
/[EMAIL PROTECTED],70/SUNW,[EMAIL
5.8_x86 5.9_x86 5.10_x86: Live Upgrade Patch
Thanks,
George
Dave,
I'm copying the zfs-discuss alias on this as well...
It's possible that not all necessary patches have been installed or they
may be hitting CR# 6428258. If you reboot the zone does it continue to
end up in maintenance mode? Also do you know if the necessary ZFS/Zones
patches have been
We have putback a significant number of fixes and features from
OpenSolaris into what will become Solaris 10 11/06. For reference here's
the list:
Features:
PSARC 2006/223 ZFS Hot Spares
6405966 Hot Spare support in ZFS
PSARC 2006/303 ZFS Clone Promotion
6276916 support for clone swap
PSARC
Rainer,
This will hopefully go into build 06 of s10u3. It's on my list... :-)
Thanks,
George
Rainer Orth wrote:
George Wilson [EMAIL PROTECTED] writes:
We have putback a significant number of fixes and features from
OpenSolaris into what will become Solaris 10 11/06. For reference here's
I forgot to highlight that RAIDZ2 (a.k.a RAID-6) is also in this wad:
6417978 double parity RAID-Z a.k.a. RAID6
Thanks,
George
George Wilson wrote:
We have putback a significant number of fixes and features from
OpenSolaris into what will become Solaris 10 11/06. For reference here's
Grant,
Expect patches late September or so. Once available I'll post the patch
information.
Thanks,
George
grant beattie wrote:
On Mon, Jul 31, 2006 at 11:51:09AM -0400, George Wilson wrote:
We have putback a significant number of fixes and features from
OpenSolaris into what will become
that we have 800 servers, 30,000
users, 140 million lines of ASCII per day all fitting in a 2u T2000 box!
thanks
sean
George Wilson wrote:
Sean,
Sorry for the delay getting back to you.
You can do a 'zpool upgrade' to see what version of the on-disk format
your pool is currently running
Robert,
The patches will be available sometime late September. This may be a
week or so before s10u3 actually releases.
Thanks,
George
Robert Milkowski wrote:
Hello eric,
Thursday, July 27, 2006, 4:34:16 AM, you wrote:
ek> Robert Milkowski wrote:
Hello George,
Wednesday, July 26, 2006,
upgrade does so that it happens as part of the promotion.
A best practice would be to keep the application data and config/logging
data separate. This would avoid the need for this feature.
Thanks,
George
Darren J Moffat wrote:
George Wilson wrote:
Matt,
This is really cool! One thing that I