GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company
Under Solaris 10 I found 'cp -pr' to be both the most reliable and the fastest
way to move data into, out of, and between ZFS datasets.
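As a sketch of that pattern (the paths here are temporary stand-ins, not anything from the thread): a recursive, attribute-preserving copy from one dataset's mountpoint into another's.

```shell
# Hypothetical illustration of the 'cp -pr' pattern: -p preserves mode,
# ownership and timestamps; -r recurses. The temp dirs stand in for two
# ZFS dataset mountpoints.
src=$(mktemp -d)    # stands in for e.g. /tank/olddata
dst=$(mktemp -d)    # stands in for e.g. /tank/newdata
echo hello > "$src/file.txt"
chmod 640 "$src/file.txt"
cp -pr "$src/." "$dst/"    # copy the dataset's contents, preserving attributes
cat "$dst/file.txt"
```

The trailing "/." on the source copies the directory's contents rather than the directory itself.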
--
That was under Solaris 10, but I would be
surprised if the ufsrestore code has changed since then.
--
is very well done for most (but certainly not
all) tasks, I usually don't remember to go to the command line until after I
have been beating my head against the GUI for too long :-(
--
-based write cache, then perhaps you are running into an NFS 3 vs.
NFS 4 issue. I am not sure if Mac OS X is using NFS 3 or NFS 4.
--
no writes since I did the destroy
then I *might* have a chance at this ... HELP!
--
{1-2-3-4-5-6-7-}
Paul Kraus
- Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
- Assistant Technical Director, LoneStarCon 3 (http://lonestarcon3.org/)
be important for large objects randomly updated inside,
like VM disk images and iSCSI backing stores, precreated database
table files, maybe swapfiles, etc.
--
we really can't disable them; if we do, we run into a different zpool
version 22 issue where the amount of RAM we will need to destroy a large
snapshot will be more than we have. This is also fixed with zpool version 26.
--
chain of two and one chain of three, the
problem occurred on the chain of three)
Not specifically applicable here, but probably related and might be of
use to someone here.
--
Thanks in advance for all of your informed opinions.
P.S. I am sending this to TWO lists, please do NOT respond to the list
you are NOT subscribed to :-)
--
On Thu, May 3, 2012 at 10:39 AM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Paul Kraus
If you have compression turned on (and I highly recommend turning
in the market. Now, as to sharing futures and NDA
material, that _should_ only be available via direct Oracle channels
(as it was under Sun as well).
--
is a 240x2TB (7200RPM) system in 20 Dell MD1200 JBODs. 16 vdevs of 15
disks each -- RAIDZ3. NexentaStor 3.1.2.
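For illustration only (device names invented; the real system's are unknown), that layout works out to a zpool create command with 16 raidz3 groups of 15 disks each. The sketch below builds the command string rather than running it:

```shell
# Build (but do not execute) a hypothetical zpool create command for
# 16 raidz3 vdevs of 15 disks each -- 240 disks total, as described above.
cmd="zpool create tank"
disk=0
vdev=0
while [ "$vdev" -lt 16 ]; do
    cmd="$cmd raidz3"
    d=0
    while [ "$d" -lt 15 ]; do
        cmd="$cmd c0t${disk}d0"   # invented device names
        disk=$((disk + 1))
        d=$((d + 1))
    done
    vdev=$((vdev + 1))
done
echo "$disk disks in $vdev vdevs"
```

Each raidz3 vdev survives three simultaneous disk failures, which matters at this drive count.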
--
ARC the best case. The real world,
as usual, fell somewhere in between.
Finding a benchmark tool that matches _my_ work load is why I have
started kludging together my own.
--
the application exits will require a restart of the application
with an automounter / NFS approach.
--
for an increase in capacity caused
replacement of hard drives. This time I'm not sure if I'll run out of
capacity before the drives reach end of practical service life and
start failing.
--
wondering if there is something about slot 20
that may be causing drives to fail.
--
issues with this system, but they
were not related to the J4400).
No data has been lost due to any of the failures or outages. Thank you ZFS.
--
, so there are differences.
--
make a solid determination of that.
We think what caused the zfs send | zfs recv to be interrupted was
hitting an e1000g Ethernet device driver bug.
--
is dreadful, but we _have_ the data in case of
a real disaster.
--
that you are looking for _relative_ measures (unless you have a
performance goal you need to hit).
--
amounts of data and snapshots. The 22 vdev zpool is on a production
server with normal I/O activity, the 2 vdev case is only receiving zfs
snapshots and doing no other I/O.
--
the files. I run a first rsync to copy all of them, then I
declare a very short outage window and do a final rsync to catch
anything that got changed. I do NOT use the --remove-source-files
option.
--
configuration, starting with one unused disk? In the end I
want the three-disk raidz to have the same name (and mount point) as the
original zpool. There must be an easy way to do this.
--
makes many products you
could try to use for this). I have been using NBU to backup NG Zones
from the Global for years. Is the version of NBU so old that it does
not support ZFS ?
--
with no outage (assuming the drive is in a hot swap capable
enclosure), but as I am not familiar with FreeBSD I do not know what
it is.
--
per 10^14 bits (bytes ?) transferred to /
from the drive. Note the sign change on the exponent :-)
--
]
then
    fault_details="There is at least one zpool error."
    let fault_count=fault_count+1
    new_faults[${fault_count}]="${fault_details}"
fi
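A self-contained sketch of the kind of check that fragment belongs to. The function name and the logic around `zpool status -x` output are assumptions, not the original script; it takes the status text as an argument so it can be exercised without a real pool:

```shell
# Print 1 (and a detail line on stderr) if the supplied `zpool status -x`
# output indicates a problem pool, else print 0.
check_zpool_faults() {
    status="$1"
    fault_count=0
    if [ -n "$status" ] && [ "$status" != "all pools are healthy" ]
    then
        fault_details="There is at least one zpool error."
        fault_count=$((fault_count + 1))
        echo "$fault_details" >&2
    fi
    echo "$fault_count"
}
```

In a real monitor you would feed it `"$(zpool status -x)"`; `zpool status -x` prints only pools with problems, which is what makes this cheap to poll.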
--
and only for moving
data physically around, so the lack of ZFS redundancy is not an
issue).
There are over 2300 snapshots on the source side and we were
replicating close to 2000 of them.
--
On Fri, Nov 11, 2011 at 1:39 PM, Linder, Doug
doug.lin...@merchantlink.com wrote:
Paul Kraus wrote:
My main reasons for using zfs are pretty basic compared to some here
What are they ? (the reasons for using ZFS)
All technical reasons aside, I can tell you one huge reason I love ZFS
or is growing.
Keep in mind that any type of hardware RAID should report back 0
for both to the OS.
--
the parameter.
--
if I need to (in fact, in the early days of Google Mail
I did just that as a backup).
--
problem (we tripped over this on
the replica, now we are working on it on the production copy).
--
On Mon, Oct 31, 2011 at 9:07 AM, Jim Klimov jimkli...@cos.ru wrote:
2011-10-31 16:28, Paul Kraus wrote:
Oracle has provided a loaner system with 128 GB RAM and it took 75 GB of
RAM
to destroy the problem snapshot). I had not yet posted a summary as we
are still working through the overall
--
to the documentation),
so I created RAID0 sets of 2 drives each and ZFS sees 6 x 1TB LUNs.
ZFS then provides my redundancy and data integrity.
--
the data).
This was originally reported to me as a problem with ZFS, SAMBA,
or the ACLs I had set up. It is amazing how much _changing_ of data
goes on with no knowledge by the end users.
--
On Sat, Oct 22, 2011 at 12:36 AM, Paul Kraus p...@kraus-haus.org wrote:
Recently someone posted to this list of that _exact_ situation, they loaded
an OS to a pair of drives while a pair of different drives containing an OS
were still attached. The zpool on the first pair ended up not being
elaborate #3? In what situation will it happen?
Thanks.
Fred
--
after adding
devices. I know this is not a substitute for a real online rebalance,
but it gets the job done (if you can take the data offline, I do it a
small chunk at a time).
--
stories on this list that I just avoid it).
--
is one of the
technologies that has not let me down. Of course, in some cases it has
taken weeks if not months to resolve or work around a bug in the
code, but in all cases the data was recovered.
--
the data that had been corrupted on the failing
component. No corrupt data was ever presented to the application.
--
            c3t5000C5001A5347FEd0  ONLINE       0     0     0
        spares
            c3t5000C5001A485C88d0  AVAIL
            c3t5000C50026A0EC78d0  AVAIL

errors: No known data errors
--
On Wed, Oct 5, 2011 at 5:56 PM, Paul B. Henson hen...@acm.org wrote:
On Thu, Sep 29, 2011 at 07:13:40PM -0700, Paul Kraus wrote:
Another potential difference ... I have been told by Oracle Support
(but have not yet confirmed) that just running the latest zfs code
(Solaris 10U10) will disable
--
-SB1dFB1cmw0QWNNd0RkR1ZnN0JEb2RsLXcoutput=html
for the results of some of my testing.
--
be viable for the
backup server, if we had a spare 20+ TB of storage just sitting
around. Copying off is NOT an option for production due to outage
window _and_ lack of a spare 20+ TB of storage :-(
--
there, and upgrading the zpool won't
help with legacy snapshots).
--
objects, we can't reload that,
the outage would kill us.
--
back on status of this new bug
(which looks like an old bug, but the old bug has been fixed in the
patches I'm now running).
On Wed, Aug 3, 2011 at 9:19 AM, Paul Kraus p...@kraus-haus.org wrote:
I am having a very odd problem, and so far the folks at Oracle
Support have not provided a working
--
Add the following to /etc/system and reboot:

set zfs:zfs_arc_max = <bytes>

where <bytes> can be decimal or hex, but don't use a scale suffix like 4g.
Best to keep it a power of 2.
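For example, to cap the ARC at 1 GiB (an assumed value, chosen as a power of 2), the setting must be written out in plain bytes:

```shell
# 1 GiB expressed in plain bytes (no "1g"-style scale suffix is accepted
# in /etc/system); the echoed line is what would go into the file.
arc_max=$((1024 * 1024 * 1024))
echo "set zfs:zfs_arc_max = ${arc_max}"
```

The change takes effect only after a reboot, since /etc/system is read at boot.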
--
are added (earlier in the search path).
--
the fullness
of this zpool.
On Thu, Aug 4, 2011 at 1:25 PM, Paul Kraus p...@kraus-haus.org wrote:
Updates to my problem:
1. The destroy operation appears to be restarting from the same point
after the system hangs and has to be rebooted. Oracle gave me the
following to track progress:
echo '::pgrep
this zpool I hang the system due to
lack of memory (the box has 32 GB of RAM).
Any suggestions how to delete / destroy this incomplete snapshot
without running the system out of RAM ?
On Wed, Aug 3, 2011 at 9:56 AM, Paul Kraus p...@kraus-haus.org wrote:
An additional data point, when I try to do
in sy cs us sy id
0 0 112 8117096 211888 55 46 0 0 425 0 912684 0 0 0 0 976 166 836 0 2 98
0 0 112 8117096 211936 53 51 6 0 394 0 926702 0 0 0 0 976 167 833 0 2 98
ARC size (B): 4065882656
--
impact
importing this zpool ?
On Wed, Aug 3, 2011 at 9:19 AM, Paul Kraus p...@kraus-haus.org wrote:
I am having a very odd problem, and so far the folks at Oracle
Support have not provided a working solution, so I am asking the crowd
here while still pursuing it via Oracle Support
be OK ?
[Not applicable to the root zpool, will the OS installation utility do
the right thing ?]
--
data. You may have to force an operation since you did not
detach the zone.
--
to around 20 MB/sec...
--
Spare ? If you have another zpool and
the Hot Spare will be shared, that makes sense. If the drive is
powered on and spinning, I don't see any downside to making it a 4-way
mirror instead of 3-way + HS.
--
, but it
cannot meet our performance needs.
--
This information
came out of Sun (pre-Oracle) and _may_ have been traceable back to
Brendan Gregg.
--
On Tue, Jun 14, 2011 at 11:48 AM, Eric D. Mudama
edmud...@bounceswoosh.org wrote:
On Tue, Jun 14 at 8:04, Paul Kraus wrote:
I saw some stats a year or more ago that indicated the MTDL for raidZ2
was better than for a 2-way mirror. In order of best to worst I
remember the rankings
--
lost my
data.
Was the cause of the checksum mismatch just that the stream data
was stored as a file ? That does not seem right to me.
--
On Thu, Jun 9, 2011 at 1:17 PM, Jim Klimov jimkli...@cos.ru wrote:
2011-06-09 18:52, Paul Kraus wrote:
On Thu, Jun 9, 2011 at 8:59 AM, Jonathan Walkerkall...@gmail.com wrote:
New to ZFS, I made a critical error when migrating data and
configuring zpools according to needs - I stored
, 3 MB/sec. maybe) or just wait it out.
I'll be honest, I am nervous with a raidz2 vdev not at full
strength, and I am looking for some comfort :-)
--
On Thu, Jun 2, 2011 at 11:49 PM, Erik Trimble erik.trim...@oracle.com wrote:
On 6/2/2011 5:12 PM, Jens Elkner wrote:
On Wed, Jun 01, 2011 at 06:17:08PM -0700, Erik Trimble wrote:
On Wed, 2011-06-01 at 12:54 -0400, Paul Kraus wrote:
Here's how you calculate (average) how long a random IOPs
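The usual back-of-envelope form of that calculation, with illustrative numbers (an assumed 8.5 ms average seek and 7200 RPM, not figures from the thread): average random service time is the average seek plus half a rotation, and IOPS is its reciprocal.

```shell
# Average random service time = average seek + half a rotation;
# random IOPS is the reciprocal. awk supplies the floating-point math.
awk 'BEGIN {
    avg_seek_ms = 8.5                 # assumed average seek time
    rpm = 7200
    half_rot_ms = 0.5 * 60000 / rpm   # time for half a revolution, in ms
    svc_ms = avg_seek_ms + half_rot_ms
    printf "%.0f IOPS\n", 1000 / svc_ms
}'
```

With these numbers the half rotation is about 4.17 ms, giving roughly 12.7 ms per random I/O, i.e. on the order of 79 IOPS per spindle.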
--
/O activity on this box, as this is a remote
replication target for production data. I have a the replication
disabled until the resilver completes.
Solaris 10U9
zpool version 22
Server is a T2000
--
mountpoint=foo zfs).
--
-SB1dFB1cmw0QWNNd0RkR1ZnN0JEb2RsLXcoutput=html
for results using raidz2 vdevs. I did not test sequential read
performance here as our workload does not include any.
--
NOT saying that was a bad change, but
that it was a change driven by ONE vendor.
--
driver was
not handling a couple very specific scsi commands fast enough for ZFS.
The problem shows itself when importing under Solaris 10U9 or when
making other underlying storage changes. It is apparently 10U9
specific.
On Fri, May 20, 2011 at 1:12 PM, Paul Kraus p...@kraus-haus.org wrote:
I have
On Fri, May 20, 2011 at 12:53 AM, Richard Elling
richard.ell...@gmail.com wrote:
On May 19, 2011, at 2:09 PM, Paul Kraus p...@kraus-haus.org wrote:
Is there a way (other than zpool online) to kick ZFS into
rescanning the LUNs ?
zpool clear poolname
I am unclear on when clear
, and
the rest are fine. The pool has a capacity of 1.5 TB and is about 1.37
TB used, the remaining pool to cleanup is 8 TB used out of 9 TB and we
really can't afford to have these kinds of problems with that one.
--
that did not have devices on the faulted 3511 are OK. Because of these
other zpools we can't really reboot the box or pull the FC
connections.
--