Hi again,
Out of interest, could this problem have been avoided if the ZFS configuration
didn't rely on a single disk, e.g. RAIDZ etc.?
Thanks
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
I have a machine running OpenSolaris snv_111b. I want to use it as an FC-SAN
initiator (NAS head) in my overall system.
I have now configured the FC HBA port from qlt to qlc (initiator) mode with
the update_drv command.
I can use stmfadm list-target -v to see that the FC-SAN target is connected:
Target:
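For reference, the qlt-to-qlc switch usually looks like the sketch below (the
PCI alias is illustrative and depends on the HBA model; check prtconf for
yours):

```shell
# Unbind the target-mode driver (qlt) from the HBA's PCI alias
# and bind the initiator-mode driver (qlc) instead.
update_drv -d -i '"pciex1077,2432"' qlt
update_drv -a -i '"pciex1077,2432"' qlc
# After a reconfigure reboot, verify the port state and fabric login:
fcinfo hba-port
```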
Hi Richard,
- scrubbing the same pool, configured as raidz1, didn't max out the CPU,
which is no surprise (haha, slow storage...); the notable part is that it
didn't slow down the payload that much either.
raidz creates more, smaller writes than a mirror or a simple stripe. If the
disks are slow,
I've been informed that newer versions of ZFS support hot spares: drives that
are not in use but are available for resynchronization/resilvering should one
of the original drives in the assigned storage pool fail.
I'm a little sceptical about this because even the
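For what it's worth, attaching a hot spare is a one-liner (pool and device
names below are made up):

```shell
# Add c2t3d0 as a hot spare; ZFS resilvers onto the spare
# automatically when a disk in the pool fails.
zpool add tank spare c2t3d0
zpool status tank
```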
On OpenSolaris build 134, upgraded from older versions, I have an rpool on
which I had dedup switched on for a few weeks.
After that I switched it back off.
Now the dedup ratio seems stuck at a value of 1.68.
Even when I copy more than 90 GB of data it still remains at 1.68.
Any ideas?
In following this discussion, I get the feeling that you and Richard are
somewhat talking past each other. He asked you about the hardware you are
currently running on, whereas you seem to be interested in a model for the
impact of scrubbing on I/O throughput that you can apply to some
We currently have an OpenSolaris box running as a backup server, and to
increase redundancy I have started copying this to another server with 4 x 2TB
in raidz with dedup. This was going fine, taking about 10 hours to zfs send
and receive each 70GB snapshot (no ZIL or cache setup), but on the
Well... I can only say: well said.
BTW I have a raidz2 pool with 9 vdevs of 4 disks each (enterprise SATA disks),
and a scrub of the pool takes between 12 and 39 hours, depending on the
workload of the server.
So far it's acceptable, but every case is different, I think...
Bruno
On 16-3-2010 14:04, Khyron
I probably lied about snv_134 above. I am probably running snv_133 as I don't
think 134 had come out when I started this.
On Tue, 16 Mar 2010, Tonmaus wrote:
This wasn't mirror vs. raidz but raidz1 vs. raidz2, where the
latter maxes out CPU and the former maxes out physical disk I/O.
Concurrent payload degradation isn't that extreme on raidz1 pools,
it seems. Hence, the CPU theory that you still seem to be
Even if it might not be the best technical solution, I think what a lot of
people are looking for when this comes up is a knob they can use to say "I
only want X IOPS per vdev (in addition to low prioritization) to be used
while scrubbing". Doing so probably helps them feel more at ease that they
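As far as I know there is no such per-vdev IOPS knob; on OpenSolaris builds of
this era the closest approximation is the zfs_scrub_delay kernel tunable (an
assumption about the running kernel; the value is in clock ticks inserted
between scrub I/Os):

```shell
# Throttle scrubbing by inserting a 10-tick delay between scrub I/Os
# (0 means no throttling). Set live via the kernel debugger:
echo zfs_scrub_delay/W0t10 | mdb -kw
```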
Frank Middleton wrote:
But pkg fix flags an error in its own inscrutable way. CCing
pkg-discuss in case a pkg guru can shed any light on what the output of
pkg fix (below) means. Presumably libc is OK, or it wouldn't boot :-).
The problem with libc here is that while /lib/libc.so.1 is
On 15 Mar 2010, at 23:03, Tonmaus wrote:
Hi Cindy,
trying to reproduce this
For a RAIDZ pool, the zpool list command identifies the inflated space for
the storage pool, which is the physically available space without accounting
for redundancy overhead. The zfs list command identifies how
On Tue, March 16, 2010 11:53, thomas wrote:
Even if it might not be the best technical solution, I think what a lot of
people are looking for when this comes up is a knob they can use to say I
only want X IOPS per vdev (in addition to low prioritization) to be used
while scrubbing. Doing so
The issue as presented by Tonmaus was that a scrub was negatively impacting
his RAIDZ2 CIFS performance, but he didn't see the same impact with RAIDZ.
I'm not going to say whether that is a problem one way or the other; it may
be expected behavior under the circumstances. That's for ZFS
Things used to be simple.
zfs create -V xxg -o shareiscsi=on pool/iSCSI/mynewvolume
It worked.
Now we've got a new, feature-rich baby in town, called COMSTAR, and so far
all attempts at grokking the excuse for a manpage have simply left me with a
nasty headache.
_WHERE_ is the replacement
Hi Svein,
Here's a couple of pointers:
http://wikis.sun.com/display/OpenSolarisInfo/comstar+Administration
http://blogs.sun.com/observatory/entry/iscsi_san
Thanks,
Cindy
On 03/16/10 12:15, Svein Skogen wrote:
Things used to be simple.
zfs create -V xxg -o shareiscsi=on
This is what I used:
http://wikis.sun.com/display/OpenSolarisInfo200906/How+to+Configure+iSCSI+Target+Ports
I distilled that to:
disable the old, enable the new (comstar)
* sudo svcadm disable iscsitgt
* sudo svcadm enable stmf
Then four steps (using my zfs/zpool info - substitute for yours):
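For the archives, the four steps are roughly the sketch below (dataset name
and size are illustrative; the LU GUID comes from the sbdadm output):

```shell
# 1. Create a zvol to back the LUN
zfs create -V 20g tank/iscsi/vol1
# 2. Register the zvol with STMF as a logical unit (prints its GUID)
sbdadm create-lu /dev/zvol/rdsk/tank/iscsi/vol1
# 3. Create an iSCSI target (needed only once, not per volume)
itadm create-target
# 4. Expose the LU to all initiators, using the GUID from step 2
stmfadm add-view <lu-guid>
```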
On Tue, Mar 16, 2010 at 6:40 PM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
Here's a couple of pointers:
http://wikis.sun.com/display/OpenSolarisInfo/comstar+Administration
http://blogs.sun.com/observatory/entry/iscsi_san
I found this useful too:
On 16.03.2010 19:42, Scott Meilicke wrote:
This is what I used:
http://wikis.sun.com/display/OpenSolarisInfo200906/How+to+Configure+iSCSI+Target+Ports
I distilled that to:
disable the old, enable the new (comstar)
* sudo svcadm disable
Someone correct me if I'm wrong, but it could just be a coincidence. That is,
perhaps the data that you copied happens to dedup at roughly the same ratio
as the data that's already on there. You could test this out by copying a few
gigabytes of data you know is unique (like maybe a DVD video
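A quick sketch of that test, using random data since it is guaranteed unique
(path and size are illustrative):

```shell
# Random blocks cannot dedup against anything already in the pool;
# write a few GB of them and see whether the ratio moves.
dd if=/dev/urandom of=/rpool/export/unique.bin bs=1048576 count=4096
sync
zpool list -o name,dedupratio rpool
```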
On Tue, Mar 16, 2010 at 2:46 PM, Svein Skogen sv...@stillbilde.net wrote:
Not quite a one-liner. After you create the target once (step 3), you do
not have to do that again for the next volume. So three lines.
So ... no way around messing with GUID numbers?
I'll write you a Perl script
On 16.03.2010 19:57, Marc Nicholas wrote:
On Tue, Mar 16, 2010 at 2:46 PM, Svein Skogen sv...@stillbilde.net
mailto:sv...@stillbilde.net wrote:
Not quite a one liner. After you create the target once (step 3),
you do not have to
Carson Gaspar wrote:
Not quite.
11 x 10^12 =~ 10.004 x (1024^4).
So, the 'zpool list' is right on, at 10T available.
Duh, I was doing GiB math (y = x * 10^9 / 2^30), not TiB math (y = x *
10^12 / 2^40).
Thanks for the correction.
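Spelled out, the conversion from the vendor's decimal terabytes to binary
TiB is:

```shell
# 11 TB as sold (powers of ten) expressed in TiB (powers of two):
awk 'BEGIN { printf "%.3f TiB\n", 11 * 10^12 / 2^40 }'
# prints: 10.004 TiB
```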
You're welcome. :-)
On a not-completely-on-topic note:
On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen sv...@stillbilde.net wrote:
I'll write you a Perl script :)
I think there are ... several people that'd like a script that gave us
back some of the ease of the old shareiscsi one-off, instead of having
to spend time on copy-and-pasting GUIDs
On Tue, March 16, 2010 14:59, Erik Trimble wrote:
Has there been a consideration by anyone to do a class-action lawsuit
for false advertising on this? I know they now have to include the 1GB
= 1,000,000,000 bytes thing in their specs and somewhere on the box,
but just because I say 1 L =
Has there been a consideration by anyone to do a class-action lawsuit for
false advertising on this? I know they now have to include the 1GB =
1,000,000,000 bytes thing in their specs and somewhere on the box, but just
because I say 1 L = 0.9 metric liters somewhere on the box, it
Hello,
In following this discussion, I get the feeling that you and Richard are
somewhat talking past each other.
Talking past each other is a problem I have noted and remarked on earlier. I
have to admit I got frustrated about the discussion narrowing down to a
certain perspective that
On 16 mars 2010, at 21:00, Marc Nicholas wrote:
On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen sv...@stillbilde.net wrote:
I'll write you a Perl script :)
I think there are ... several people that'd like a script that gave us
back some of the ease of the old shareiscsi one-off, instead of
If CPU is maxed out then that usually indicates some severe problem with
choice of hardware or a misbehaving device driver. Modern systems have an
abundance of CPU.
AFAICS the CPU loads are only high while scrubbing a double parity pool. I have
no indication of a technical misbehaviour
Hi,
When creating a zfs pool with mkfile components in S10 10/09, I was testing
the export/import feature (I had done it in S10 11/06 and it worked well).
But it didn't work. I tried the -d /dir option without success.
It didn't work any better with zpool destroy and zpool import -D.
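For comparison, here is the sequence that should work, as a sketch (paths are
illustrative); the key detail is that zpool import only scans /dev/dsk by
default, so file-backed pools always need -d:

```shell
# Create two file vdevs and a pool on top of them
mkfile 100m /var/tmp/vdev1 /var/tmp/vdev2
zpool create testpool /var/tmp/vdev1 /var/tmp/vdev2
# Export, then import with -d pointing at the directory holding the files
zpool export testpool
zpool import -d /var/tmp testpool
```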
David Dyer-Bennet wrote:
On Tue, March 16, 2010 14:59, Erik Trimble wrote:
Has there been a consideration by anyone to do a class-action lawsuit
for false advertising on this? I know they now have to include the 1GB
= 1,000,000,000 bytes thing in their specs and somewhere on the box,
but
Tonmaus wrote:
Has there been a consideration by anyone to do a class-action lawsuit for
false advertising on this? I know they now have to include the 1GB =
1,000,000,000 bytes thing in their specs and somewhere on the box, but just
because I say 1 L = 0.9 metric liters somewhere on the
On Tue, 16 Mar 2010, Tonmaus wrote:
AFAICS the CPU loads are only high while scrubbing a double parity
pool. I have no indication of a technical misbehaviour with the
exception of dismal concurrent performance.
This seems pretty weird to me. I have not heard anyone else complain
about this
Erik Trimble wrote:
Tonmaus wrote:
Has there been a consideration by anyone to do a class-action lawsuit for
false advertising on this? I know they now have to include the 1GB =
1,000,000,000 bytes thing in their specs and somewhere on the box, but just
because I say 1 L = 0.9 metric liters
On Wed, Mar 17, 2010 at 5:45 AM, Erik Trimble erik.trim...@sun.com wrote:
Up until 5 years ago (or so), GigaByte meant a power of 2 to EVERYONE, not
just us techies. I would hardly call 40+ years of using the various
giga/mega/kilo prefixes as a power of 2 in computer science as
The reason why there is not more uproar is that cost per data unit is dwindling
while the gap resulting from this marketing trick is increasing. I remember a
case a German broadcaster filed against a system integrator in the age of the 4
GB SCSI drive. This was in the mid-90s.
Regards,
Eric,
careful:
Am 16.03.2010 23:45, schrieb Erik Trimble:
Up until 5 years ago (or so), GigaByte meant a power of 2 to EVERYONE,
not just us techies. I would hardly call 40+ years of using the various
giga/mega/kilo prefixes as a power of 2 in computer science as
non-authoritative.
How long
On 16.03.2010 22:31, erik.ableson wrote:
On 16 mars 2010, at 21:00, Marc Nicholas wrote:
On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen sv...@stillbilde.net
mailto:sv...@stillbilde.net wrote:
I'll write you a Perl script :)
I think
Are you sure that you didn't also enable something which does consume lots of
CPU, such as some sort of compression, sha256 checksums, or deduplication?
None of them is active on that pool or in any existing file system. Maybe the
issue is particular to RAIDZ2, which is comparably
On 3/16/2010 17:45, Erik Trimble wrote:
David Dyer-Bennet wrote:
On Tue, March 16, 2010 14:59, Erik Trimble wrote:
Has there been a consideration by anyone to do a class-action lawsuit
for false advertising on this? I know they now have to include the
1GB
= 1,000,000,000 bytes thing in
On 3/16/2010 4:23 PM, Roland Rambau wrote:
Eric,
careful:
Am 16.03.2010 23:45, schrieb Erik Trimble:
Up until 5 years ago (or so), GigaByte meant a power of 2 to EVERYONE,
not just us techies. I would hardly call 40+ years of using the various
giga/mega/kilo prefixes as a power of 2 in
On 3/16/2010 8:29 PM, David Dyer-Bennet wrote:
On 3/16/2010 17:45, Erik Trimble wrote:
David Dyer-Bennet wrote:
On Tue, March 16, 2010 14:59, Erik Trimble wrote:
Has there been a consideration by anyone to do a class-action lawsuit
for false advertising on this? I know they now have to