On Jun 11, 2011, at 10:37, Edward Ned Harvey wrote:
>> From: David Magda [mailto:dma...@ee.ryerson.ca]
>> Sent: Saturday, June 11, 2011 9:38 AM
>>
>> These parity files use a forward error correction-style system that can be
>> used to perform data verification, and allow recovery when data is lost or
>> corrupted.
I have a home server running Solaris 11 Express that I want to move to
OpenIndiana. There's nothing I need to retain on rpool, and my "datastore"
pool has all of the data. My question is: can I install OpenIndiana oi_151
and then import the datastore zpool? When I tried from the live CD
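For reference, the usual sequence is sketched below. This assumes the pool's
on-disk version is one oi_151 understands (illumos builds of that era topped
out at zpool version 28, while Solaris 11 Express could take pools to 31, so
a pool upgraded past 28 will not import):

  # on the old system, before reinstalling (optional but cleanest):
  zpool export datastore

  # from the oi_151 live CD or the fresh install:
  zpool upgrade -v            # list pool versions this build supports
  zpool import                # scan devices for importable pools
  zpool import datastore      # import by name
  zpool import -f datastore   # force, if the pool was never cleanly exported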
This may be interesting also (still fumbling...):
dtrace -n 'fbt:zfs:zio_write:entry,
fbt:zfs:zio_rewrite:entry,fbt:zfs:zio_write_override:entry {
@[probefunc,stack()] = count(); }'
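The same probes are easier to leave running as a standalone script with a
periodic dump; a minimal sketch (the 10-second interval is an arbitrary
choice, not from the thread):

  #!/usr/sbin/dtrace -s
  /* Count entries into the main ZIO write paths, keyed by kernel stack. */
  fbt:zfs:zio_write:entry,
  fbt:zfs:zio_rewrite:entry,
  fbt:zfs:zio_write_override:entry
  {
          @writes[probefunc, stack()] = count();
  }

  /* Print and reset the aggregation every 10 seconds. */
  tick-10sec
  {
          printa(@writes);
          trunc(@writes);
  }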
On Jun 11, 2011, at 1:00 PM, Jim Klimov wrote:
> 2011-06-11 20:42, Jim Mauro wrote:
>> Well we may have missed
Hmmm... so coming back around to the problem we're trying to solve -
You have iostat data and "zpool iostat" data that shows a steady stream
of writes to one or more of your zpools, correct?
You wish to identify the source of those writes, correct?
Try saving this as a file and running it, and p
2011-06-11 20:42, Jim Mauro wrote:
Well we may have missed something, because that dtrace will
only capture write(2) and pwrite(2) - whatever is generating the writes
may be using another interface (writev(2) for example).
What about taking it down a layer:
dtrace -n 'fsinfo:::write /args[0]->fi_fs == "zfs"/ {
@[execname,args[0]->fi_pathname]=count(); }'
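Before dropping down a layer, the syscall one-liner can also be widened so
the glob matches writev(2) and friends as well as write(2) and pwrite(2);
a sketch:

  dtrace -n 'syscall::*write*:entry /fds[arg0].fi_fs == "zfs"/ {
  @[execname,fds[arg0].fi_pathname]=count(); }'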
2011-06-11 20:34, Jim Klimov wrote:
time dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ {
@[execname,fds[arg0].fi_pathname]=count(); }'
This time I gave it more time, and used the system a bit -
this dtrace does indeed work, but it still shows too few file
accesses:
# time dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ {
@[execname,fds[arg0].fi_pathname]=count(); }'
Well we may have missed something, because that dtrace will
only capture write(2) and pwrite(2) - whatever is generating the writes
may be using another interface (writev(2) for example).
What about taking it down a layer:
dtrace -n 'fsinfo:::write /args[0]->fi_fs == "zfs"/ {
@[execname,args[0]->fi_pathname]=count(); }'
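Since the symptom is a steady ~2Mb/sec stream, summing bytes at the same
layer can be more telling than counting calls; a sketch (assuming args[1]
carries the byte count for fsinfo write probes):

  dtrace -n 'fsinfo:::write /args[0]->fi_fs == "zfs"/ {
  @bytes[execname,args[0]->fi_pathname] = sum(args[1]); }'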
2011-06-11 19:16, Jim Mauro wrote:
Does this reveal anything:
dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ {
@[execname,fds[arg0].fi_pathname]=count(); }'
Alas, not much.
# time dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ {
@[execname,fds[arg0].fi_pathname]=count(); }'
2011-06-11 19:15, Pasi Kärkkäinen wrote:
On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote:
I've had two incidents where performance tanked suddenly, leaving the VM
guests and Nexenta SSH/Web consoles inaccessible and requiring a full
reboot of the array to restore functionality.
So, can this be fixed in firmware? How can I determine if the drive is
actually bad?
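On a Solaris-derived system like NexentaStor, a few standard places to look
before condemning the drive (the device name below is hypothetical, and
smartctl is only present if smartmontools is installed):

  zpool status -v                # per-vdev read/write/checksum error counts
  iostat -En                     # per-device soft/hard/transport error totals
  fmdump -eV                     # FMA error telemetry (ereports)
  smartctl -a /dev/rdsk/c5t2d0   # SMART attributes for the suspect drive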
--
Edmund White
ewwh...@mac.com
On 6/11/11 10:15 AM, "Pasi Kärkkäinen" wrote:
>On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote:
>>Posted in greater detail at Server Fault
>>- [1]http://serverfault.com/q/277966/13325
Does this reveal anything:
dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ {
@[execname,fds[arg0].fi_pathname]=count(); }'
On Jun 11, 2011, at 9:32 AM, Jim Klimov wrote:
> While looking over iostats from various programs, I see that
> my OS HDD is busy writing, about 2Mb/sec stream all the time
On Sat, Jun 11, 2011 at 08:35:19AM -0500, Edmund White wrote:
>Posted in greater detail at Server Fault
>- [1]http://serverfault.com/q/277966/13325
>
>I have an HP ProLiant DL380 G7 system running NexentaStor. The server has
>36GB RAM, 2 LSI 9211-8i SAS controllers (no SAS expanders)
> From: David Magda [mailto:dma...@ee.ryerson.ca]
> Sent: Saturday, June 11, 2011 9:38 AM
>
> These parity files use a forward error correction-style system that can be
> used to perform data verification, and allow recovery when data is lost or
> corrupted.
>
> http://en.wikipedia.org/wiki/Parchive
2011-06-11 17:20, Edward Ned Harvey wrote:
From: David Magda [mailto:dma...@ee.ryerson.ca]
Sent: Saturday, June 11, 2011 9:04 AM
If one is saving streams to a disk, it may be worth creating parity files
for them (especially if the destination file system is not ZFS):
Parity is just a really simple form of error detection.
On Jun 11, 2011, at 09:20, Edward Ned Harvey wrote:
> Parity is just a really simple form of error detection. It's not very
> useful for error correction. If you look into error correction codes,
> you'll see there are many other codes which would be more useful for the
> purposes of zfs send data streams.
Posted in greater detail at Server Fault - http://serverfault.com/q/277966/13325
I have an HP ProLiant DL380 G7 system running NexentaStor. The server has 36GB
RAM, 2 LSI 9211-8i SAS controllers (no SAS expanders), 2 SAS system drives, 12
SAS data drives, a hot-spare disk, an Intel X25-M L2ARC cache SSD
While looking over iostats from various programs, I see that
my OS HDD is busy writing, about 2Mb/sec stream all the time
(at least while the "dcpool" import/recovery attempts are
underway, but also now during a mere zdb walk).
According to "iostat", this load stands out greatly.
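The per-device view that makes such a stream stand out is typically obtained
with (standard Solaris-family iostat flags):

  iostat -xnz 5    # extended stats, logical names, skip idle devices, 5s interval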
> From: David Magda [mailto:dma...@ee.ryerson.ca]
> Sent: Saturday, June 11, 2011 9:04 AM
>
> If one is saving streams to a disk, it may be worth creating parity files
> for them (especially if the destination file system is not ZFS):
Parity is just a really simple form of error detection. It's not very
useful for error correction.
On Jun 11, 2011, at 08:46, Edward Ned Harvey wrote:
> If you simply want to layer on some more FEC, there must be some standard
> generic FEC utilities out there, right?
> zfs send | fec > /dev/...
> Of course this will inflate the size of the data stream somewhat, but
> improves the reliability.
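There is no standard 'fec' pipe filter in base systems, but the Parchive
tools mentioned elsewhere in the thread fill this role; a sketch, assuming
par2 is installed and keeping in mind it works on files rather than pipes
(names are illustrative):

  # save the stream, then create ~10% redundancy parity blocks for it
  zfs send tank/fs@snap > /backup/fs.zfs
  par2 create -r10 /backup/fs.zfs.par2 /backup/fs.zfs

  # later, to check the saved stream and repair it if damaged:
  par2 verify /backup/fs.zfs.par2
  par2 repair /backup/fs.zfs.par2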
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
>
> See FEC suggestion from another poster ;)
Well, of course, all storage media have built-in hardware FEC; disk
and tape for sure. But naturally you can't always trust
Hi,
We ran into this yesterday. The degraded pool was exported and I had to
re-import it manually.
Is this normal behavior? I assume it should not be, but has anyone seen
a similar case?
Thanks.
Fred
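For comparison, the manual re-import in a case like this is typically just
(pool name hypothetical):

  zpool import          # list pools visible to the system
  zpool import tank     # re-import by name
  zpool import -f tank  # force, if the pool still appears active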