I've just managed to lock up a pool on a Solaris 10 update 7 system
(even creating files in the pool hangs and can't be killed) by
attempting to delete a clone.
Has anyone seen anything like this?
--
Ian.
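(For context, deleting a clone is normally just a plain destroy of the clone
dataset, something along these lines, with made-up names:
  # zfs snapshot tank/fs@snap
  # zfs clone tank/fs@snap tank/clone
  # zfs destroy tank/clone
On a healthy pool the destroy normally returns quickly.)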
hello,
One of my colleagues has a problem with an application. The sysadmins
responsible for that server told him that it was the application's fault, but I
think they are wrong, and so does he.
From time to time, the app becomes unkillable, and when trying to list the contents
of some dir which
roland wrote:
hello,
One of my colleagues has a problem with an application. The sysadmins
responsible for that server told him that it was the application's fault, but I
think they are wrong, and so does he.
From time to time, the app becomes unkillable, and when trying to list the contents of
Other drivers in the stack? Which drivers? And have any of them been changed
between b125 and b126?
Carson Gaspar wrote:
On 10/26/09 5:33 PM, p...@paularcher.org wrote:
I can't find much on gam_server on Solaris (couldn't find too much on it
at all, really), and port_create is apparently a system call. (I'm not a
developer--if I can't write it in BASH, Perl, or Ruby, I can't write it.)
I
Hello,
When I ran 'zfs send' into a file, the system (Sun Ultra 45) had this load:
# zfs send -R backup/zo...@moving_09112009 > /tank/archive_snapshots/exa_all_zones_09112009.snap
Total: 107 processes, 951 lwps, load averages: 54.95, 59.46, 50.25
Is it normal?
Regards,
Jan Hlodan
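To see what is actually contributing to a load like that while the send is
running, something along these lines is usually enough (the 5-second interval
is arbitrary):
  # prstat -mL -s cpu 5
  # iostat -xnz 5
prstat -mL shows per-thread microstate accounting, and iostat -xnz shows
per-device I/O activity, which together make it clear whether the load is CPU
work (e.g. checksumming) or disk wait.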
On Nov 11, 2009, at 12:01 AM, Tim Cook wrote:
On Tue, Nov 10, 2009 at 5:15 PM, Tim Cook t...@cook.ms wrote:
One thing I'm noticing is a lot of checksum errors being generated during
the resilver. Is this normal?
Anyone? It's up to 7.35M checksum errors and it's rebuilding extremely
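A quick way to keep an eye on resilver progress and the per-device error
counters is to poll zpool status (the pool name is just an example):
  # zpool status -v tank
The CKSUM column shows the running checksum-error count per device.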
thanks.
We will try that if the error happens again - we needed to reboot as a quick fix,
as the machine is in production.
regards
roland
On Wed, Nov 11, 2009 at 3:38 AM, Orvar Korvar
knatte_fnatte_tja...@yahoo.com wrote:
Other drivers in the stack? Which drivers? And have any of them been
changed between b125 and b126?
Looks like the sd driver, for one.
http://dlc.sun.com/osol/on/downloads/b126/on-changelog-b126.html
Thanks all,
It was a government customer that I was talking to, and it sounded like a good
idea; however, with the certification paper trails required today, I don't think
it would be of such a benefit after all. It may be useful for the disk
evacuation, but they're still going to need their
Brian Kolaci wrote:
Hi,
I was discussing the common practice of disk eradication used by many
firms for security. I was thinking it may be a useful feature for ZFS
to have an option to eradicate data as it's removed, meaning after the
last reference/snapshot is gone and a block is freed,
Hi, you could try the LSI itmpt driver as well; it seems to handle this better,
although I think it only supports 8 devices at once or so.
You could also try a more recent version of OpenSolaris (123 or even 126), as
there seem to be a lot of fixes regarding the mpt driver (which still seems to have
This feature is described in this RFE:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=4930014
Secure delete option: erase blocks after they're freed
cs
On 11/11/09 09:17, Darren J Moffat wrote:
Brian Kolaci wrote:
Hi,
I was discussing the common practice of disk eradication used
The checksum errors are fixed in build 128 with:
6807339 spurious checksum errors when replacing a vdev
No; you're not losing any data due to this.
- Eric
Hi Tim,
I always have to detach the spare.
I haven't tested it yet, but I see an improvement in this behavior,
with the integration of this CR:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6893090
clearing a vdev should automatically detach spare
Cindy
On 11/10/09 16:03, Tim
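For reference, detaching the spare by hand looks like this (pool and device
names here are hypothetical):
  # zpool detach tank c4t0d0
Once CR 6893090 is integrated, a plain 'zpool clear tank' after the original
device is healthy again should drop the spare automatically.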
On 10 Nov, 2009, at 21.02, Ron Mexico wrote:
This didn't occur on a production server, but I thought I'd post
this anyway because it might be interesting.
This is CR 6895446 and a fix for it should be going into build 129.
Regards,
markm
I encountered a strange libzfs behavior while testing a zone fix and
want to make sure that I found a genuine bug. I'm creating zones whose
zonepaths reside in ZFS datasets (i.e., the parent directories of the
zones' zonepaths are ZFS datasets). In this scenario, zoneadm(1M)
attempts to
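To make the scenario concrete, the layout looks roughly like this (pool,
dataset, and zone names are made up):
  # zfs create -o mountpoint=/zones rpool/zones
  # zonecfg -z testzone
  zonecfg:testzone> create
  zonecfg:testzone> set zonepath=/zones/testzone
  zonecfg:testzone> exit
  # zoneadm -z testzone install
Here /zones is the mountpoint of a ZFS dataset, so the parent directory of the
zone's zonepath is itself a dataset.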
On Tue, 10 Nov 2009, Tim Cook wrote:
My personal thought would be that it doesn't really make sense to
even have it, at least for readzilla. In theory, you always want
the SSD to be full, or nearly full, as it's a cache. The whole
point of TRIM, from my understanding, is to speed up the
On Wed, Nov 11, 2009 at 11:51 AM, Bob Friesenhahn
bfrie...@simple.dallas.tx.us wrote:
On Tue, 10 Nov 2009, Tim Cook wrote:
My personal thought would be that it doesn't really make sense to even
have it, at least for readzilla. In theory, you always want the SSD to be
full, or nearly full,
So, I've done a bit of research and RTFM, and haven't found an answer. If
I've missed something obvious, please point me in the right direction.
Is there a way to manually fail a drive via ZFS? (this is a raid-z2
raidset) In my case, I'm pre-emptively replacing old drives with newer,
faster,
Hi,
Well ... I think Darren should implement this as part of zfs-crypto. Secure
delete on SSD looks like quite a challenge when wear leveling and bad block
relocation kick in ;)
Regards
Joerg
On 11.11.2009 at 17:53, Cindy Swearingen wrote:
This feature is described in this RFE:
Tim,
I think you're looking for zpool offline:
  zpool offline [-t] pool device ...
      Takes the specified physical device offline. While the
      device is offline, no attempt is made to read or write
      to the device.
      This command is not applicable to spares
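So for a pre-emptive swap in a raidz2 the sequence is roughly as follows
(device names are placeholders):
  # zpool offline tank c1t3d0
  # zpool replace tank c1t3d0 c2t3d0
  # zpool status tank
zpool replace with the old and new device names resilvers onto the new disk;
the offline step just stops I/O to the old one first.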
I've experienced behavior similar to this several times; each time it
was a single bad drive, in this case looking like target 0. For
whatever reason (buggy Solaris mpt driver?), some of the other drives
get wind of it, then hide from their respective buses in fear. :-)
Operating System: SunOS
Joerg Moellenkamp wrote:
Hi,
Well ... I think Darren should implement this as part of zfs-crypto. Secure
delete on SSD looks like quite a challenge when wear leveling and bad block
relocation kick in ;)
No, I won't be doing that as part of the zfs-crypto project. As I said,
some
On Mon, Sep 07, 2009 at 09:58:19AM -0700, Richard Elling wrote:
I only know of hole punching in the context of networking. ZFS doesn't
do networking, so the pedantic answer is no.
But a VDEV may be an iSCSI device, thus there can be networking below
ZFS.
For some iSCSI targets (including
On Wed, 11 Nov 2009, Tim Cook wrote:
I'm well aware of the fact that SSD mfgs put extra blocks into the
device to increase both performance and MTBF. I'm not sure how that
invalidates what I've said, though, or even plays a role, and you
haven't done a very good job of explaining why you
I already changed some of the drives; no difference. The affected target seems
to be random - the problem is most likely not coming from the drives.
Have you tried another SAS cable?
Yours
Markus Kovero
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of M P
Sent: 11 November 2009 21:05
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS on JBOD
m...@cybershade.us said:
So at this point this looks like an issue with the MPT driver or these SAS
cards (I tested two) when under heavy load. I put the latest firmware for the
SAS card from LSI's web site - v1.29.00 - on it without any change; the server
still locks.
Any ideas or suggestions on how to
On Wed, Nov 11, 2009 at 12:29 PM, Darren J Moffat
darr...@opensolaris.orgwrote:
Joerg Moellenkamp wrote:
Hi,
Well ... I think Darren should implement this as part of zfs-crypto.
Secure delete on SSD looks like quite a challenge when wear leveling and bad
block relocation kick in ;)
No
On Wed, November 11, 2009 13:29, Darren J Moffat wrote:
No, I won't be doing that as part of the zfs-crypto project. As I said,
some jurisdictions are happy that if the data is encrypted then
overwriting the blocks isn't required. For those that aren't, use of
dd(1M) or format(1M) may be
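Outside of ZFS, that kind of overwrite is typically done against the whole
device, along these lines (the device name is a placeholder; format's
analyze/purge does a multi-pass overwrite):
  # dd if=/dev/zero of=/dev/rdsk/c2t1d0s2 bs=1024k
  # format -d c2t1d0
  format> analyze
  analyze> purge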
So he did actually hit a bug? But the bug is not dangerous, as it doesn't destroy
data?
But I did not replace any devices and still it showed checksum errors. I think
I did a zfs send | zfs receive? I don't remember. But I just copied things back
and forth, and the checksum errors showed up. So
On Wed, 2009-11-11 at 10:29 -0800, Darren J Moffat wrote:
Joerg Moellenkamp wrote:
Hi,
Well ... I think Darren should implement this as part of
zfs-crypto. Secure delete on SSD looks like quite a challenge when wear
leveling and bad block relocation kick in ;)
No, I won't be doing
Bill Sommerfeld wrote:
On Wed, 2009-11-11 at 10:29 -0800, Darren J Moffat wrote:
Joerg Moellenkamp wrote:
Hi,
Well ... I think Darren should implement this as part of
zfs-crypto. Secure delete on SSD looks like quite a challenge when wear
leveling and bad block relocation kick in ;)
No, I
On Wed, 11 Nov 2009, Darren J Moffat wrote:
note that eradication via overwrite makes no sense if the underlying
storage uses copy-on-write, because there's no guarantee that the newly
written block actually will overlay the freed block.
Which is why this has to be a ZFS feature rather than
On Nov 11, 2009, at 17:40, Bob Friesenhahn wrote:
Zfs is absolutely useless for this if the underlying storage uses
copy-on-write. Therefore, it is absolutely useless to put it in
zfs. No one should even consider it.
The use of encrypted blocks is much better, even though encrypted
Hi everybody,
I am considering moving my data pool from a two disk (10krpm) mirror
layout to a three disk raidz-1. This is just a single user workstation
environment, where I mostly perform compile jobs. From past experiences
with raid5 I am a little bit reluctant to do so, as software raid5 has
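For reference, the two layouts being compared would be created roughly like
this (disk names are placeholders):
  # zpool create tank mirror c1t0d0 c1t1d0
  # zpool create tank raidz c1t0d0 c1t1d0 c1t2d0
With three disks, raidz gives two disks' worth of usable space versus one
disk's worth for the two-way mirror.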
Bob Friesenhahn wrote:
On Wed, 11 Nov 2009, Darren J Moffat wrote:
note that eradication via overwrite makes no sense if the underlying
storage uses copy-on-write, because there's no guarantee that the newly
written block actually will overlay the freed block.
Which is why this has to be a
On Wed, 11 Nov 2009, Darren J Moffat wrote:
Zfs is absolutely useless for this if the underlying storage uses
copy-on-write. Therefore, it is absolutely useless to put it in
zfs. No one should even consider it.
I disagree. Sure, there are cases where ZFS, which is copy-on-write,
is sitting
from a two disk (10krpm) mirror layout to a three disk raidz-1.
Writes will be unnoticeably slower for raidz1 because of the parity calculation
and the latency of a third spindle, but reads will be 1/2 the speed
of the mirror, because the mirror can split the reads between two disks.
Another way to say the
On Wed, 11 Nov 2009, Rob Logan wrote:
from a two disk (10krpm) mirror layout to a three disk raidz-1.
Writes will be unnoticeably slower for raidz1 because of the parity calculation
and the latency of a third spindle, but reads will be 1/2 the speed
of the mirror, because the mirror can split the reads
On Nov 11, 2009, at 4:30 PM, Rob Logan wrote:
from a two disk (10krpm) mirror layout to a three disk raidz-1.
Writes will be unnoticeably slower for raidz1 because of the parity
calculation
and the latency of a third spindle, but reads will be 1/2 the speed
of the mirror, because the mirror can split
Hi, you could try the LSI itmpt driver as well; it seems
to handle this better, although I think it only
supports 8 devices at once or so.
You could also try a more recent version of OpenSolaris
(123 or even 126), as there seem to be a lot of fixes
regarding the mpt driver (which still seems to have
Have you tried another SAS cable?
I have. Two identical SAS cards, different cables, different disks (brand, size,
etc.). I get the errors on random disks in the pool. I don't think it's
hardware-related, as there have been a few reports of this issue already.
Travis Tabbal wrote:
Hi, you could try the LSI itmpt driver as well; it seems to handle this
better, although I think it only supports 8 devices at once or so.
You could also try a more recent version of OpenSolaris (123 or even
126), as there seem to be a lot of fixes regarding the mpt driver (which
still
I am at a loss of where else to look to work out why my vSphere 4 server cannot
access my iSCSI LUNs via the COMSTAR iSCSI target.
# uname -a
SunOS prmel1iscsi01 5.11 snv_111b i86pc i386 i86pc Solaris
# itadm list-target -v
TARGET NAME STATE
Duncan Bradey wrote:
I am at a loss of where else to look to work out why my vSphere 4 server cannot
access my iSCSI LUNs via the COMSTAR iSCSI target.
I am at a complete loss. I've tried everything that I can think of: using CHAP,
disabling CHAP, recreating the target. I even reinstalled the
Travis Tabbal wrote:
On Wed, Nov 11, 2009 at 10:25 PM, James C. McPherson
j...@opensolaris.org wrote:
The first step towards acknowledging that there is a problem
is you logging a bug in bugs.opensolaris.org. If you don't,
Rasmus,
I had 4 volumes:
Found 4 LU(s)
              GUID                    DATA SIZE        SOURCE
--------------------------------  -----------------  ----------------
600144f0b00309004afb990f0004      1099511562240      /dev/zvol/rdsk/infobrick/iscsi/prmel1vspcor-jhgtier2-03
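To check whether those LUs are actually exposed to the initiator, the view and
target side are worth walking as well, roughly (the GUID is the one from the
listing above):
  # stmfadm list-view -l 600144f0b00309004afb990f0004
  # stmfadm list-tg -v
  # itadm list-target -v
If list-view returns no entries for a LU, the initiator will never see it,
regardless of the state of the iSCSI target itself.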