Sorry for abusing the mailing list, but I don't know how to report
bugs anymore and have no visibility into whether this is a
known/resolved issue. So, just in case it is not...
With Solaris 11 Express, scrubbing a pool with encrypted datasets for
which no key is currently loaded, unrecoverable
On Mon, May 9, 2011 at 8:33 AM, Darren Honeyball ml...@spod.net wrote:
I'm just mulling over the best configuration for this system - our workload
is mostly writing millions of small files (around 50k each) with occasional
reads, and we need to keep as much space as possible.
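For what it's worth, a minimal sketch of where I would start for that kind of workload (the pool/dataset names are made up, and the values are only a starting point to measure against, not a recommendation):

# zfs create tank/smallfiles
# zfs set compression=on tank/smallfiles
# zfs set recordsize=64k tank/smallfiles
# zfs set atime=off tank/smallfiles

compression=on (lzjb on current releases) is cheap CPU-wise and usually wins space back on ~50k files, and atime=off avoids extra metadata writes on the occasional reads.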
If space is a priority,
Good day,
I think ZFS could take advantage of the GPU for SHA-256 calculation,
encryption and maybe compression. A modern video card, like the ATI HD
5xxx or 6xxx series, can calculate SHA-256 50-100 times faster than a
modern 4-core CPU.
The kgpu project for Linux shows nice results.
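Whether or not a GPU ever gets wired in, it's easy to get a baseline for what the CPU alone can do, and to remember that ZFS only uses SHA-256 where you ask for it (dataset name below is made up):

# openssl speed sha256
# zfs set checksum=sha256 tank/data

The default checksum is fletcher, which is far cheaper than SHA-256, so GPU offload would mostly matter for dedup or for datasets explicitly set to checksum=sha256.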
'zfs
On Tue, May 10, 2011 at 11:29 AM, Anatoly legko...@fastmail.fm wrote:
Good day,
I think ZFS could take advantage of the GPU for SHA-256 calculation,
encryption and maybe compression. A modern video card, like the ATI HD
5xxx or 6xxx series, can calculate SHA-256 50-100 times faster than a modern
IMHO, ZFS needs to run on all kinds of hardware. The T-series CMT servers
have had on-chip hardware that could help with SHA calculation since the T1
days, but I have not seen any work in ZFS to take advantage of it.
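I don't know the current state of ZFS using it either, but you can at least confirm the on-chip provider is registered with the kernel crypto framework (output varies by platform; on a T-series the Niagara crypto provider should show up in the hardware provider list):

# cryptoadm list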
On 5/10/2011 11:29 AM, Anatoly wrote:
Good day,
I think ZFS could take advantage of the GPU for SHA-256 calculation,
encryption and maybe
On Tue, May 10, 2011 at 10:29 PM, Anatoly legko...@fastmail.fm wrote:
Good day,
I think ZFS could take advantage of the GPU for SHA-256 calculation,
encryption and maybe compression. A modern video card, like the ATI HD
5xxx or 6xxx series, can calculate SHA-256 50-100 times faster than a modern
We recently had a disk fail on one of our whitebox (SuperMicro) ZFS
arrays (Solaris 10 U9).
The disk began throwing errors like this:
May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING:
/pci@0,0/pci8086,3410@9/pci15d9,400@0 (mpt_sas0):
May 5 04:33:44 dev-zfs4
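For anyone hitting the same thing, a rough sketch of the sequence I use to tie these mpt_sas warnings back to a specific drive (nothing here is specific to this box):

# zpool status -x
# iostat -En
# fmdump -eV | more

iostat -En shows the per-device soft/hard/transport error counters, and fmdump -eV has the underlying FMA error reports with the device path.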
On Mon, May 9, 2011 at 2:54 PM, Tomas Ögren st...@acc.umu.se wrote:
Slightly off topic, but we had an IBM RS/6000 43P with a PowerPC 604e
CPU, which had about 60 MB/s of memory bandwidth (which is kind of bad for a
332 MHz CPU), and its disks could do 70-80 MB/s or so... in some other
machine...
It
przemol...@poczta.fm wrote:
On Fri, Jul 07, 2006 at 11:59:29AM +0800, Raymond Xiong wrote:
It doesn't. Page 11 of the following slides illustrates how COW
works in ZFS:
http://www.opensolaris.org/os/community/zfs/docs/zfs_last.pdf
Blocks containing active data are never overwritten in
On Thu, Jun 29, 2006 at 10:01:15AM +0200, Robert Milkowski wrote:
Hello przemolicc,
Thursday, June 29, 2006, 8:01:26 AM, you wrote:
ppf On Wed, Jun 28, 2006 at 03:30:28PM +0200, Robert Milkowski wrote:
ppf What I wanted to point out is Al's example: he wrote about
damaged data.
On Tue, Aug 08, 2006 at 11:33:28AM -0500, Tao Chen wrote:
On 8/8/06, przemol...@poczta.fm przemol...@poczta.fm wrote:
Hello,
Solaris 10 GA + latest recommended patches:
while running dtrace:
bash-3.00# dtrace -n 'io:::start {@[execname, args[2]->fi_pathname] = count();}'
...
So there is no current way to specify the creation of a 3-disk raid-z
array with a known missing disk?
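Not directly, but the workaround people have used is a sparse file as a stand-in third device, offlined right away (sizes and device names below are invented; the file only needs to claim the size of the real disk, and zpool may want -f for the mixed file/disk raid-z):

# mkfile -n 400g /var/tmp/fakedisk
# zpool create tank raidz c0t1d0 c0t2d0 /var/tmp/fakedisk
# zpool offline tank /var/tmp/fakedisk

Once the data has been copied in from the old disk, that disk takes the file's place with zpool replace. The pool runs degraded until then, so there is no redundancy in the meantime.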
On 12/5/06, David Bustos david.bus...@sun.com wrote:
Quoth Thomas Garner on Thu, Nov 30, 2006 at 06:41:15PM -0500:
I currently have a 400GB disk that is full of data on a linux system.
If I
Ah, did not see your follow up. Thanks.
Chris
On Thu, 30 Nov 2006, Cindy Swearingen wrote:
Sorry, Bart is correct:
If new_device is not specified, it defaults to
old_device. This form of replacement is useful after an
existing disk has failed and
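i.e., after swapping the failed drive in the same slot (device name below is only an example):

# zpool replace tank c1t3d0

If the replacement went into a different slot, the two-argument form `zpool replace tank c1t3d0 c1t5d0` does the same thing against the new device.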
I use this construct to get something better than "<none>":
args[2]->fi_pathname != "<none>" ? args[2]->fi_pathname :
args[1]->dev_pathname
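Folded into the earlier one-liner, that becomes:

# dtrace -n 'io:::start { @[execname, args[2]->fi_pathname != "<none>" ? args[2]->fi_pathname : args[1]->dev_pathname] = count(); }'

so I/Os with no file-level pathname at least get attributed to the device node instead.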
In the latest versions of Solaris 10, you'll see IOs not directly issued by the
app
show up as being owned by 'zpool-POOLNAME' where POOLNAME is the real name of
On 23 November, 2005 - Benjamin Lewis sent me these 3,0K bytes:
Hello,
I'm running Solaris Express build 27a on an amd64 machine and
fuser(1M) isn't behaving
as I would expect for zfs filesystems. Various google and
...
#fuser -c /
/:[lots of other PIDs] 20617tm [others] 20412cm
On 10 May, 2011 - Tomas Ögren sent me these 0,9K bytes:
On 23 November, 2005 - Benjamin Lewis sent me these 3,0K bytes:
Hello,
I'm running Solaris Express build 27a on an amd64 machine and
fuser(1M) isn't behaving
as I would expect for zfs filesystems. Various google and
...
Sorry for the old posts that some of you are seeing to zfs-discuss. The
link between Jive and mailman was broken so I fixed that. However, once
this was fixed Jive started sending every single post from the
zfs-discuss board on Jive to the mail list. Quite a few posts were sent
before I
I've been going through my iostat, zilstat, and other outputs all to no avail.
None of my disks ever seem to show outrageous service times, the load on the
box is never high, and if the darned thing is CPU-bound, I'm not even sure
where to look.
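For reference, the two views I keep going back to when hunting this kind of thing (pool name is a placeholder):

# iostat -xnz 1
# zpool iostat -v tank 1

High asvc_t on a single device in the first, or one vdev doing far more work than its peers in the second, is usually where the latency hides.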
(traversing DDT blocks even if in memory, etc -
Well, as I wrote in other threads - I have a pool named "pool" on physical
disks, and a compressed volume in this pool which I loopback-mount over iSCSI
to make another pool named "dcpool".
When files in dcpool are deleted, the blocks are not zeroed out by current ZFS
and they are still allocated for
In a recent post r-mexico wrote that they had to parse system messages and
manually fail the drives on a similar, though different, occasion:
http://opensolaris.org/jive/message.jspa?messageID=515815#515815
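In command form, "manually failing" a drive boils down to watching the log and taking the device out of service yourself (pool and device names below are made up):

# tail -f /var/adm/messages
# zpool offline tank c2t5d0

with a hot spare or a zpool replace to follow once the bad disk has been identified.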
Don,
Is it possible to modify the GUID associated with a ZFS volume imported into
STMF?
To clarify: I have a ZFS volume I have imported into STMF and export via
iSCSI. I have a number of snapshots of this volume. I need to temporarily go
back to an older snapshot without removing all
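I can't say whether the GUID can be rewritten in place, but a sketch of how I would get at the old data without destroying anything (all names below are hypothetical): clone the snapshot and register the clone as its own LU, which gets its own GUID:

# zfs clone tank/vol@old tank/vol_old
# stmfadm create-lu /dev/zvol/rdsk/tank/vol_old
# stmfadm list-lu -v

If the initiator genuinely has to see the same GUID, that part I would double-check in the stmfadm(1M) man page before relying on it.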
On Tue, May 10, 2011 at 02:42:40PM -0700, Jim Klimov wrote:
In a recent post r-mexico wrote that they had to parse system
messages and manually fail the drives on a similar, though
different, occasion:
http://opensolaris.org/jive/message.jspa?messageID=515815#515815
Thanks Jim, good
It is my understanding that for (fast) writes you should consider a faster disk
(SSD) for the ZIL, and for reads a faster disk (SSD) for the L2ARC.
There have been many discussions that for a V12N (virtualization) environment
RAID-1 is better than RAID-Z.
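In command form, with made-up device names: a separate log device takes the synchronous-write path (the ZIL), and a cache device feeds the L2ARC for reads:

# zpool add tank log c4t0d0
# zpool add tank cache c4t1d0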
On 5/10/2011 3:31 PM, Don wrote:
I've been going through my iostat, zilstat, and other outputs all to no
On Tue, May 10, 2011 at 03:57:28PM -0700, Brandon High wrote:
On Tue, May 10, 2011 at 9:18 AM, Ray Van Dolson rvandol...@esri.com wrote:
My question is -- is there a way to tune the MPT driver or even ZFS
itself to be more/less aggressive on what it sees as a failure
scenario?
You didn't
# dd if=/dev/zero of=/dcpool/nodedup/bigzerofile
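Assuming this is the usual zero-fill trick against the same path as above: the zero writes shrink the previously-used blocks of the zvol to next to nothing in the outer pool, because all-zero blocks compress away, and the second half is simply to delete the file so dcpool gets its free space back:

# rm /dcpool/nodedup/bigzerofile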
Ah, I misunderstood your pool layout earlier. Now I see what you were doing.
People on this forum have seen and reported that adding a 100 MB file tanked
their multi-terabyte pool's performance, and removing the file boosted it back
up. Sadly I