I'm thinking that the issue is simply with zfs destroy, not with dedup or
compression.
Yesterday I decided to do some iSCSI testing, so I created a new 1TB dataset in
my pool. I did not use compression or dedup.
After copying about 700GB of data from my Windows box (NTFS on top of the iSCSI
By having a snapshot you are not releasing the space, forcing zfs to allocate
new space from other parts of a disk drive. This may lead (depending on
workload) to more fragmentation, less localized data (more and longer seeks).
ZFS uses COW (copy on write) during writes. This means
--- On Thu, 1/7/10, Tiernan OToole lsmart...@gmail.com wrote:
Sorry to hijack the thread, but can you
explain your setup? Sounds interesting, but need more
info...
This is just a home setup to amuse me and placate my three boys, each of whom
has several Windows instances running under
On Wed, 23 Dec 2009 03:02:47 +0100, Mike Gerdts mger...@gmail.com wrote:
I've been playing around with zones on NFS a bit and have run into
what looks to be a pretty bad snag - ZFS keeps seeing read and/or
checksum errors. This exists with S10u8 and OpenSolaris dev build
snv_129. This is
Frank Batschulat (Home) wrote:
This just can't be an accident, there must be some coincidence and thus there's
a good chance
that these CHKSUM errors must have a common source, either in ZFS or in NFS ?
What are you using for on the wire protection with NFS ? Is it shared
using krb5i or do
Hi List,
We create a zfs filesystem for each user's homedir. I would like to
monitor their usage and when the user approaches his quota I would like to
receive a warning by mail. Does anybody have a script available which does
this job and can be run using a cron job. Or even better, is this a
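I'm not aware of an existing script offhand, but a minimal sketch of the check itself could look like the following. The 90% threshold, dataset paths, and mail command are all assumptions, not from the thread:

```shell
#!/bin/sh
# Warn when a dataset crosses 90% of its quota (threshold is arbitrary).
# Expects lines of "name used quota" with byte values, as produced by
# something like: zfs list -H -p -o name,used,quota -r tank/home
# (-p for raw byte values, if your build supports it).
check_quotas() {
  awk '$3 > 0 && $2 / $3 >= 0.90 {
    printf "%s at %.0f%% of quota\n", $1, 100 * $2 / $3
  }'
}
```

From a cron job one would pipe the real listing through it and mail any output, e.g. `zfs list -H -p -o name,used,quota -r tank/home | check_quotas | mailx -s "quota warning" root`.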
On 08/01/2010 12:40, Peter van Gemert wrote:
By having a snapshot you are not releasing the space, forcing zfs to allocate
new space from other parts of a disk drive. This may lead (depending on
workload) to more fragmentation, less localized data (more and longer seeks).
ZFS uses COW
On Fri, Jan 8, 2010 at 6:55 AM, Darren J Moffat darr...@opensolaris.org wrote:
Frank Batschulat (Home) wrote:
This just can't be an accident, there must be some coincidence and thus
there's a good chance
that these CHKSUM errors must have a common source, either in ZFS or in
NFS ?
What are
On Fri, Jan 8, 2010 at 6:51 AM, James Carlson carls...@workingcode.com wrote:
Frank Batschulat (Home) wrote:
This just can't be an accident, there must be some coincidence and thus
there's a good chance
that these CHKSUM errors must have a common source, either in ZFS or in NFS ?
One
Yet another way to thin out the backing devices for a zpool on a
thin-provisioned storage host, today: resilver.
If your zpool has some redundancy across the SAN backing LUNs, simply
drop and replace one at a time and allow zfs to resilver only the
blocks currently in use onto the replacement
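That loop could be sketched as follows. Pool and device names are placeholders, and the zpool commands are only echoed so nothing runs by accident:

```shell
#!/bin/sh
# Replace each backing LUN in turn so resilver rewrites only live blocks,
# letting the thin-provisioned array reclaim the rest. Echo-only dry run:
# remove the "echo" to execute for real.
thin_resilver() {
  pool=$1; shift
  for old in "$@"; do
    echo "zpool replace $pool $old ${old}-new"
    # Before moving on, wait for the resilver to complete, e.g. by polling
    # 'zpool status' until "resilver in progress" disappears.
  done
}

thin_resilver tank c1t0d0 c1t1d0
```

Replacing one LUN at a time preserves the pool's redundancy throughout, which is the point of the suggestion above.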
Hello,
today I wanted to test that a failure of the L2ARC device is not crucial to
the pool. I added an Intel X25-M Postville (160GB) as a cache device to a
54-disk mirror pool. Then I started a SYNC iozone run on the pool:
iozone -ec -r 32k -s 2048m -l 2 -i 0 -i 2 -o
Pool:
pool
mirror-0
Ok,
I now waited 30 minutes - still hung. After that I also pulled the SATA cable
to the L2ARC device - still no success (I waited another 10 minutes).
After those 10 minutes I put the L2ARC device back (SATA + power), and
20 seconds after that the system continued to run.
dmesg shows:
Jan 8 15:41:57
On Fri, January 8, 2010 07:51, Robert Milkowski wrote:
On 08/01/2010 12:40, Peter van Gemert wrote:
By having a snapshot you are not releasing the space, forcing zfs to allocate
new space from other parts of a disk drive. This may lead (depending on
workload) to more fragmentation, less
Hi,
I have just observed the following issue and I would like to ask if it is
already known:
I'm using zones on ZFS filesystems which were cloned from a common template
(which is itself an original filesystem). A couple of weeks ago, I did a pkg
image-update, so all zone roots got cloned
BTW, this was on snv_111b - sorry I forgot to mention.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Fri, Jan 8, 2010 at 5:28 AM, Frank Batschulat (Home)
frank.batschu...@sun.com wrote:
[snip]
Hey Mike, you're not the only victim of these strange CHKSUM errors, I hit
the same during my slightly different testing, where I'm NFS mounting an
entire, pre-existing remote file living in the
On 08/01/2010 14:50, David Dyer-Bennet wrote:
On Fri, January 8, 2010 07:51, Robert Milkowski wrote:
On 08/01/2010 12:40, Peter van Gemert wrote:
By having a snapshot you are not releasing the space, forcing zfs to allocate
new space from other parts of a disk drive. This may lead
Hi Ian,
I see the problem. In your included URL below, you didn't
include the /N suffix as included in the zpool upgrade
output.
CR 6898657 is still filed to identify the change.
If you copy and paste the URL from the zpool upgrade -v output:
Ok, after browsing I found that the SATA disks are not shown via cfgadm.
I found http://opensolaris.org/jive/message.jspa?messageID=287791&tstart=0
which states that you have to set the mode to AHCI to enable hot-plug etc.
However, I still think the plain IDE driver also needs a timeout to handle
On Fri, 8 Jan 2010, Peter van Gemert wrote:
I don't think the use of snapshots will alter the way data is
fragmented or localized on disk.
What happens after a snapshot is deleted?
Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
Hello,
Sorry for the (very) long subject but I've pinpointed the problem to this exact
situation.
I know about the other threads related to hangs, but in my case there was no
zfs destroy involved, nor any compression or deduplication.
To make a long story short, when
- a disk contains
On Fri, Jan 8, 2010 at 9:11 AM, Mike Gerdts mger...@gmail.com wrote:
I've seen similar errors on Solaris 10 in the primary domain and on a
M4000. Unfortunately Solaris 10 doesn't show the checksums in the
ereport. There I noticed a mixture between read errors and checksum
errors - and lots
I haven't seen much discussion on how deduplication affects performance.
I've enabled dedup on my 4-disk raidz array and have seen a significant
drop in write throughput, from about 100 MB/s to 3 MB/s. I can't
imagine such a decrease is normal.
# zpool iostat nest 1 (with dedup enabled):
...
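A slowdown of this magnitude usually means the dedup table no longer fits in RAM. As a rough sanity check, a commonly cited estimate from this era is on the order of 320 bytes of core per DDT entry (that figure is an assumption here, not from this thread, and varies by release):

```shell
#!/bin/sh
# Rough in-core DDT size: (unique data / average block size) * bytes per entry.
# 320 bytes/entry is an approximation; actual entry size varies.
ddt_bytes() {
  data_bytes=$1
  avg_block=$2
  echo $(( data_bytes / avg_block * 320 ))
}

# Example: 1 TiB of unique 128K blocks -> prints 2684354560 (~2.5 GiB)
ddt_bytes $(( 1024 * 1024 * 1024 * 1024 )) $(( 128 * 1024 ))
```

If the result is well beyond the ARC's size, every write can turn into random reads of DDT blocks from the raidz vdev, which matches the throughput collapse described above.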
On Fri, Jan 08, 2010 at 10:00:14AM -0800, James Lee wrote:
I haven't seen much discussion on how deduplication affects performance.
I've enabled dedup on my 4-disk raidz array and have seen a significant
drop in write throughput, from about 100 MB/s to 3 MB/s. I can't
imagine such a decrease
Frank Batschulat (Home) wrote:
This just can't be an accident, there must be some coincidence and thus
there's a good chance
that these CHKSUM errors must have a common source, either in ZFS or in NFS ?
One possible cause would be a lack of substantial exercise. The man
page says:
On Fri, 08 Jan 2010 13:55:13 +0100, Darren J Moffat darr...@opensolaris.org
wrote:
Frank Batschulat (Home) wrote:
This just can't be an accident, there must be some coincidence and thus
there's a good chance
that these CHKSUM errors must have a common source, either in ZFS or in NFS ?
Mike Gerdts wrote:
This unsupported feature is supported with the use of Sun Ops Center
2.5 when a zone is put on a NAS Storage Library.
Ah, ok. I didn't know that.
--
James Carlson 42.703N 71.076W carls...@workingcode.com
On 1/8/2010 10:04 AM, James Carlson wrote:
Mike Gerdts wrote:
This unsupported feature is supported with the use of Sun Ops Center
2.5 when a zone is put on a NAS Storage Library.
Ah, ok. I didn't know that.
Does anyone know how that works? I can't find it in the docs, no one
On Jan 8, 2010, at 6:20 AM, Frank Batschulat (Home) wrote:
On Fri, 08 Jan 2010 13:55:13 +0100, Darren J Moffat darr...@opensolaris.org
wrote:
Frank Batschulat (Home) wrote:
This just can't be an accident, there must be some coincidence and
thus there's a good chance
that these CHKSUM
Cindy Swearingen wrote:
Hi Ian,
I see the problem. In your included URL below, you didn't
include the /N suffix as included in the zpool upgrade
output.
That's correct, N is the version number. I see it is fixed now, thanks.
--
Ian.
On Fri, Jan 8, 2010 at 12:28 PM, Torrey McMahon tmcmah...@yahoo.com wrote:
On 1/8/2010 10:04 AM, James Carlson wrote:
Mike Gerdts wrote:
This unsupported feature is supported with the use of Sun Ops Center
2.5 when a zone is put on a NAS Storage Library.
Ah, ok. I didn't know that.
See the reads on the pool with the low I/O? I suspect reading the DDT causes
the writes to slow down.
See this bug:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6913566. It seems to
give some background.
Can you test setting primarycache=metadata on the volume you test?
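For reference, the suggested property change would be the following (the volume name is a placeholder):

```shell
# Cache only metadata, not file data, in the ARC for this dataset/volume.
zfs set primarycache=metadata tank/testvol
# Verify the change ('all' is the default).
zfs get primarycache tank/testvol
```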
James Lee wrote:
I haven't seen much discussion on how deduplication affects performance.
I've enabled dedup on my 4-disk raidz array and have seen a significant
drop in write throughput, from about 100 MB/s to 3 MB/s. I can't
imagine such a decrease is normal.
What is your data?
I've
On Fri, Jan 8, 2010 at 1:44 PM, Ian Collins i...@ianshome.com wrote:
James Lee wrote:
I haven't seen much discussion on how deduplication affects performance.
I've enabled dedup on my 4-disk raidz array and have seen a significant
drop in write throughput, from about 100 MB/s to 3 MB/s. I
On 01/08/2010 02:42 PM, Lutz Schumann wrote:
See the reads on the pool with the low I/O? I suspect reading the
DDT causes the writes to slow down.
See this bug
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6913566.
It seems to give some background.
Can you test setting the
This one has me a little confused. Ideas?
j...@opensolaris:~# zpool import z
cannot mount 'z/nukeme': mountpoint or dataset is busy
cannot share 'z/cle2003-1': smb add share failed
j...@opensolaris:~# zfs destroy z/nukeme
internal error: Bad exchange descriptor
Abort (core dumped)
dd if=/dev/urandom of=largefile.txt bs=1G count=8
cp largefile.txt ./test/1.txt
cp largefile.txt ./test/2.txt
That's it. Now the system is totally unusable after launching the two 8G copies.
Until these copies finish, no other application is able to launch completely.
Checking prstat shows them
Hi list,
Experimental question ...
Imagine a pool made of SSD disks. Is there any benefit in adding an SSD cache
device to it? What would the real impact be?
Thx.
--
Francois