On Wed, Dec 16 at 22:41, Tim wrote:
hmm, not seeing the same slowdown when I boot from the Samsung EStool CD and
run a diag which performs a surface scan...
could this still be a hardware issue, or possibly something with the Solaris
data format on the disk?
Rotating drives often have
On Wed, Dec 16 at 7:35, Bill Sprouse wrote:
The question behind the question is, given the really bad things that
can happen performance-wise with writes that are not 4k aligned when
using flash devices, is there any way to ensure that any and all
writes from ZFS are 4k aligned?
Some flash
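For what it's worth, the alignment ZFS uses when it writes to a vdev is governed by that vdev's ashift, which on the builds I've looked at can be read out of the pool configuration with zdb (pool name below is just an example):

  zdb -C tank | grep ashift

ashift=9 means 512-byte allocation alignment, ashift=12 means 4k.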
The downside you have described happens only when the same checksum is
used for data protection and duplicate detection. This implies sha256,
BTW, since fletcher-based dedupe has been dropped in recent builds.
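To make that concrete (dataset name below is made up), the dedup checksum is chosen through the dedup property, so something like

  zfs set dedup=sha256 tank/data
  zfs set dedup=sha256,verify tank/data   # also byte-compare blocks whose hashes collide
  zfs get checksum,dedup tank/data

shows which hash is being used for duplicate detection versus ordinary data protection.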
On 12/17/09, Kjetil Torgrim Homme kjeti...@linpro.no wrote:
Andrey Kuzmin
On Thu, Dec 17, 2009 at 09:14, Eric D. Mudama edmud...@bounceswoosh.org wrote:
On Wed, Dec 16 at 7:35, Bill Sprouse wrote:
The question behind the question is, given the really bad things that can
happen performance-wise with writes that are not 4k aligned when using flash
devices, is there
Andrey Kuzmin andrey.v.kuz...@gmail.com writes:
The downside you have described happens only when the same checksum is
used for data protection and duplicate detection. This implies sha256,
BTW, since fletcher-based dedupe has been dropped in recent builds.
if the hash used for dedup is
Kjetil Torgrim Homme wrote:
Andrey Kuzmin andrey.v.kuz...@gmail.com writes:
The downside you have described happens only when the same checksum is
used for data protection and duplicate detection. This implies sha256,
BTW, since fletcher-based dedupe has been dropped in recent builds.
if the
I'm willing to accept slower writes with compression enabled; that's par for
the course. Local writes, even with compression enabled, can still
exceed 500MB/sec, with moderate to high CPU usage.
These problems seem to have manifested after snv_128, and seemingly
only affect ZFS receive speeds. Local
Hi all,
I need to move a filesystem off of one host and onto another
smaller
one. The fs in question, with no compression enabled, is using 1.2 TB
(refer). I'm hoping that zfs compression will dramatically reduce this
requirement and allow me to keep the dataset on an 800 GB store.
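One approach, assuming a plain (non -p) send so the destination's inherited compression applies to the received data; pool, dataset and host names here are purely illustrative, and the create/get run on the receiving host:

  zfs create -o compression=gzip smallpool/recv
  zfs snapshot bigpool/data@migrate
  zfs send bigpool/data@migrate | ssh newhost zfs receive smallpool/recv/data
  zfs get compressratio smallpool/recv/data

Whether gzip gets 1.2 TB under 800 GB obviously depends entirely on the data.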
Darren J Moffat darr...@opensolaris.org writes:
Kjetil Torgrim Homme wrote:
Andrey Kuzmin andrey.v.kuz...@gmail.com writes:
The downside you have described happens only when the same checksum is
used for data protection and duplicate detection. This implies sha256,
BTW, since fletcher-based
Hi, I have a zfs volume that's exported via iscsi for my wife's Mac to
use for Time Machine.
I've just built a new machine to house my big pool, and installed
build 129 on it. I'd like to start using COMSTAR for exporting the
iscsi targets, rather than the older iscsi infrastructure.
I've
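From memory, the COMSTAR route on those builds is roughly the following (volume name and size are made up, and the LU GUID comes from the sbdadm output):

  svcadm enable stmf
  svcadm enable -r svc:/network/iscsi/target:default
  zfs create -V 200g tank/tm_wife
  sbdadm create-lu /dev/zvol/rdsk/tank/tm_wife
  stmfadm add-view <GUID-from-sbdadm>
  itadm create-target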
Kjetil Torgrim Homme wrote:
I don't know how tightly interwoven the dedup hash tree and the block
pointer hash tree are, or if it is at all possible to disentangle them.
At the moment I'd say very interwoven by design.
conceptually it doesn't seem impossible, but that's easy for me to
say,
On Thu, 17 Dec 2009, Kjetil Torgrim Homme wrote:
compression requires CPU, actually quite a lot of it. even with the
lean and mean lzjb, you won't get much more than 150 MB/s per core, or
something like that. so, if you're copying a 10 GB image file, it will
take a minute or two, just to
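(For the record, the arithmetic behind that estimate: 10 GB is roughly 10,240 MB, and 10,240 MB / 150 MB/s is about 68 seconds, so a single core spends on the order of a minute just compressing the file.)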
-Original Message-
From: Bone, Nick
Sent: 16 December 2009 16:33
To: oab
Subject: RE: [zfs-discuss] Import a SAN cloned disk
Hi
I know that EMC doesn't recommend adding a SnapView snapshot to the original
host's Storage Group, although it is not prevented.
I tried this just now
On Thu, Dec 17, 2009 at 03:32:21PM +0100, Kjetil Torgrim Homme wrote:
if the hash used for dedup is completely separate from the hash used for
data protection, I don't see any downsides to computing the dedup hash
from uncompressed data. why isn't it?
Hash and checksum functions are slow
I'm trying to see if zfs dedupe is effective on our datasets, but I'm having a
hard time figuring out how to measure the space saved.
When I sent one backup set to the filesystem, the usage reported by zfs list
and zfs get used on my zfs filesystem are the expected values based on the data size.
When I
On Thu, Dec 17, 2009 at 8:57 PM, Stacy Maydew stacy.may...@sun.com wrote:
I'm trying to see if zfs dedupe is effective on our datasets, but I'm having
a hard time figuring out how to measure the space saved.
When I sent one backup set to the filesystem, the usage reported by zfs
list and
Hi Giridhar,
The size reported by ls can include things like holes in the file. What space
usage does the zfs(1M) command report for the filesystem?
Adam
On Dec 16, 2009, at 10:33 PM, Giridhar K R wrote:
Hi,
Reposting as I have not gotten any response.
Here is the issue. I created a
On Thu, Dec 17, 2009 at 10:57 AM, Stacy Maydew stacy.may...@sun.com wrote:
When I sent one backup set to the filesystem, the usage reported by zfs
list and zfs get used on my zfs filesystem are the expected values based
on the data size.
When I store a second copy, which should dedupe entirely, the zfs
On Thu, Dec 17, 2009 at 6:14 PM, Kjetil Torgrim Homme
kjeti...@linpro.no wrote:
Darren J Moffat darr...@opensolaris.org writes:
Kjetil Torgrim Homme wrote:
Andrey Kuzmin andrey.v.kuz...@gmail.com writes:
The downside you have described happens only when the same checksum is
used for data
The commands zpool list and zpool get dedup pool both show a ratio of
1.10.
So thanks for that answer. I'm a bit confused, though: if dedup is applied
per zfs filesystem, not per zpool, why can I only see the dedup ratio on a
per-pool basis rather than for each zfs filesystem?
Seems to me there
On Thu, Dec 17, 2009 at 12:30:29PM -0800, Stacy Maydew wrote:
So thanks for that answer. I'm a bit confused, though: if dedup is
applied per zfs filesystem, not per zpool, why can I only see the dedup
ratio on a per-pool basis rather than for each zfs filesystem?
Seems to me there should be a way to
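For anyone trying to put numbers on the savings, the pool-wide figures are the easiest to get at (pool name is just an example):

  zpool list tank            # DEDUP column shows the overall ratio
  zpool get dedupratio tank
  zdb -DD tank               # DDT histogram: how many blocks are referenced once, twice, etc.

Per-filesystem figures are awkward because the dedup table lives at the pool level, so a block shared by two filesystems doesn't really belong to either one.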
I'm running Solaris 10 update 8 (10/09). I started out using an older
version of Solaris and have upgraded a few times. I have used zpool
upgrade on the pools I have as new versions become available after
kernel updates.
I see now when I run zfs upgrade that pools I created long ago are
at
fmdump shows errors on a different drive, and none on the one that has this
slow read problem:
Nov 27 2009 20:58:28.670057389 ereport.io.scsi.cmd.disk.recovered
nvlist version: 0
class = ereport.io.scsi.cmd.disk.recovered
ena = 0xbeb7f4dd531
detector = (embedded
On Thu, Dec 17, 2009 at 7:11 AM, Edward Ned Harvey
sola...@nedharvey.com wrote:
And I've heard a trend of horror stories, that zfs has a tendency to implode
when it's very full. So try to keep your disks below 90%.
I've taken to creating an unmounted empty filesystem with a
reservation to
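Presumably something along these lines (name and size made up):

  zfs create -o mountpoint=none -o reservation=100g tank/spacer

and if the pool ever fills up, the headroom can be handed back with

  zfs set reservation=none tank/spacer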
I have observed the opposite, and I believe that all writes are slow to my
dedup'd pool.
I used local rsync (no ssh) for one of my migrations (so it was restartable,
as it took *4 days*), and the writes were slow just like zfs recv.
I have not seen fast writes of real data to the deduped volume,
Hi Doug,
The pool and file system version upgrades allow you to access new
features that are available for a particular Solaris release. For
example, if you upgrade your system to Solaris 10 10/09, then you
would need to upgrade your pool version to access the pool features
available in the
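For reference, the relevant commands are:

  zpool upgrade        # list pools still at older versions
  zpool upgrade -v     # show what each pool version adds
  zfs upgrade          # list filesystems still at older versions
  zfs upgrade -v       # show what each filesystem version adds
  zpool upgrade -a     # upgrade all pools (one-way; older releases can no longer import them)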
On Wed, Dec 16, 2009 at 6:17 AM, Steven Sim unixan...@gmail.com wrote:
r...@sunlight:/root# zfs send myplace/myd...@prededup | zfs receive -v
myplace/mydata
cannot receive new filesystem stream: destination 'myplace/fujitsu' exists
must specify -F to overwrite it
Try something like this:
zfs
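Presumably the suggestion continues along the lines of adding -F to the receive, or receiving into a name that doesn't exist yet; dataset names here are hypothetical:

  zfs send tank/somedata@prededup | zfs receive -vF tank/somedata
  # or, into a fresh dataset so nothing needs to be overwritten:
  zfs send tank/somedata@prededup | zfs receive -v tank/somedata_dedup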
If you have another partition with enough space, you could technically just do:
mv src /some/other/place
mv /some/other/place src
Anyone see a problem with that? Might be the best way to get it de-duped.
Hi Giridhar,
The size reported by ls can include things like holes
in the file. What space usage does the zfs(1M)
command report for the filesystem?
Adam
On Dec 16, 2009, at 10:33 PM, Giridhar K R wrote:
Hi,
Reposting as I have not gotten any response.
Here is the issue.
Your parenthetical comments here raise some concerns, or at least eyebrows,
with me. Hopefully you can lower them again.
compress, encrypt, checksum, dedup.
(and you need to use zdb to get enough info to see the
leak - and that means you have access to the raw devices)
An attacker with
On Thu, Dec 17, 2009 at 3:10 PM, Anil an...@entic.net wrote:
If you have another partition with enough space, you could technically just
do:
mv src /some/other/place
mv /some/other/place src
Anyone see a problem with that? Might be the best way to get it de-duped.
You'd lose any existing
Thanks for the response Adam.
Are you talking about ZFS list?
It displays 19.6 as allocated space.
What does ZFS treat as a hole, and how does it identify one?
ZFS will compress blocks of zeros down to nothing and treat them like
sparse files. 19.6 is pretty close to your computed value. Does your
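A quick way to see whether holes (or zero blocks compressed away to nothing) account for the difference is to compare the file's apparent size with what is actually allocated, plus the dataset-level numbers (names are illustrative):

  ls -l  /tank/fs/bigfile        # apparent (logical) size
  du -h  /tank/fs/bigfile        # blocks actually allocated
  zfs get used,referenced,compressratio tank/fs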
It looks like the kernel is using a lot of memory, which may be part
of the performance problem. The ARC has shrunk to 1G, and the kernel
is using up over 5G.
I'm doing a send|receive of 683G of data. I started it last night
around 1am, and as of right now it's only sent 450GB. That's about
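For reference, the usual way to see where kernel memory is going versus the ARC is something like (run as root):

  echo ::memstat | mdb -k         # page usage breakdown (kernel vs. user vs. free)
  kstat -p zfs:0:arcstats:size    # current ARC size, bytes
  kstat -p zfs:0:arcstats:c       # current ARC target size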
I used the default while creating the zpool with one disk drive. I guess it is a
RAID 0 configuration.
Thanks,
Giri
My ARC is ~3GB.
I'm doing a test that copies 10GB of data to a volume where the blocks
should dedupe 100% with existing data.
First time, the test runs at about 5 MB/sec and seems to average a 10-30% ARC *miss*
rate, with around 400 ARC reads/sec.
When things are working at disk bandwidth, I'm getting 3-5% ARC
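For reference, the counters behind those percentages live in the same arcstats kstat, e.g.

  kstat -p zfs:0:arcstats | egrep 'hits|misses'

and the miss rate is just misses / (hits + misses) over whatever interval you sample.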
Ok, my console is 100% completely hung, not gonna be able to enter any commands
when it freezes.
I can't even get the numlock light to change its status.
This time I even plugged in a PS/2 keyboard instead of USB thinking maybe it
was USB dying during the hang, but not so.
I have hard
Ok, this is the script I am running (as a background process). This script
doesn't matter much; it's just here for reference, as I'm running into problems
just running the savecore command while the zpool import is running.
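If the goal is just a dump of the live system while the import is wedged, and assuming a dump device is already configured, that is normally

  dumpadm        # shows the dump device and savecore directory
  savecore -L    # take a crash dump of the running system without panicking it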
On Wed, Dec 16 at 7:35, Bill Sprouse wrote:
The question behind the question is, given the really bad things that
can happen performance-wise with writes that are not 4k aligned when
using flash devices, is there any way to ensure that any and all
writes from ZFS are 4k aligned?
On Dec 17, 2009, at 9:04 PM, stuart anderson wrote:
As a specific example of 2 devices with dramatically different
performance for sub-4k transfers has anyone done any ZFS benchmarks
between the X25E and the F20 they can share?
I am particularly interested in zvol performance with a
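For a zvol-level comparison, the block size a volume presents is fixed at creation with volblocksize, so a 4k test volume might be created with something like (pool, name and size made up):

  zfs create -V 10g -o volblocksize=4k tank/bench4k

against a second volume left at the default 8k volblocksize.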
On 18.12.09 07:13, Jack Kielsmeier wrote:
Ok, my console is 100% completely hung, not gonna be able to enter any
commands when it freezes.
I can't even get the numlock light to change its status.
This time I even plugged in a PS/2 keyboard instead of USB thinking maybe it
was USB dying during