Hello again,
I am still not sure whether my points are being well taken.
If you are concerned that a single 200TB pool would take a long time to scrub, then use more pools and scrub in parallel.
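The quoted suggestion is cheap to act on: zpool scrub returns as soon as the scrub is queued, so issuing one per pool already runs them in parallel. A minimal sketch, with hypothetical pool names:

```shell
# zpool scrub is asynchronous: each command returns immediately,
# so the scrubs below proceed concurrently. Pool names are placeholders.
for p in tank1 tank2 tank3; do
    zpool scrub "$p"
done

# Check progress later:
zpool status | grep scrub
```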
The main concern is not scrub time. Scrub time could be weeks, if only scrub would behave. You may
Sorry if this is too basic -
So I have a single zpool in addition to the rpool, called xpool.
NAME    SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
rpool   136G   109G   27.5G  79%  ONLINE  -
xpool   408G   171G    237G  42%  ONLINE  -
I have 408 GB in the pool, am using 171 GB, leaving me 237 GB.
Hello,
I'd like to check for any guidance about using zfs on iscsi storage appliances.
Recently I had an unlucky situation with a storage machine freezing.
Once the storage was up again (rebooted) all other iscsi clients were happy,
while one of the iscsi clients (a sun solaris sparc,
Hi Michael,
For a RAIDZ pool, the zpool list command identifies the inflated space for the storage pool, which is the physically available space without accounting for redundancy overhead.
The zfs list command identifies how much actual pool space is available to the file systems.
See the
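To see the difference concretely, consider a hypothetical raidz1 pool built from five 2 TB disks (names and sizes invented for illustration): zpool list reports the raw ~10 TB including parity, while zfs list shows roughly (n-1)/n of that, about 8 TB. The arithmetic:

```shell
# Raw vs. usable capacity of a hypothetical 5-disk raidz1 of 2 TB disks.
# zpool list counts all disks (parity included); zfs list reflects
# roughly one disk's worth less, since raidz1 spends one disk on parity.
awk 'BEGIN {
    n = 5; disk_tb = 2
    raw = n * disk_tb           # what zpool list reports (inflated)
    usable = (n - 1) * disk_tb  # roughly what zfs list can offer
    printf "raw=%dTB usable=%dTB\n", raw, usable
}'
```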
That solved it.
Thank you Cindy.
Zpool list NOT reporting raidz overhead is what threw me...
Thanks again.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
On Mar 15, 2010, at 10:55 AM, Gabriele Bulfon wrote:
- In this case, the storage appliance is a legacy system based on linux, so RAID/mirrors are managed on the storage side in its own way. Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and
On Mar 15, 2010, at 10:55 AM, Gabriele Bulfon gbul...@sonicle.com
wrote:
Hello,
I'd like to check for any guidance about using zfs on iscsi storage
appliances.
Recently I had an unlucky situation with a storage machine freezing.
Once the storage was up again (rebooted) all other
Well, I actually don't know what implementation is inside this legacy machine.
This machine is an AMI StoreTrends ITX, but maybe it has been built around IET; I don't know.
Well, maybe I should disable write-back on every zfs host connecting on iscsi?
How do I check this?
Thx
Gabriele.
--
This
On Mar 15, 2010, at 12:13 PM, Gabriele Bulfon wrote:
Well, I actually don't know what implementation is inside this legacy machine.
This machine is an AMI StoreTrends ITX, but maybe it has been built around IET; I don't know.
Well, maybe I should disable write-back on every zfs host
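On the question of how to check write-back caching from the Solaris side: one way to inspect and toggle a disk's write cache is format's expert mode. Whether an iscsi LUN honors these commands depends on the target implementation, so treat this as a sketch:

```shell
# Expert mode of format exposes a per-disk cache menu on Solaris:
#   format -e
#   (select the iscsi disk)
#   format> cache
#   cache> write_cache
#   write_cache> display    # show the current setting
#   write_cache> disable    # turn write-back caching off
# Whether the target truthfully reports and applies this is another matter.
```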
On Mar 15, 2010, at 12:19 PM, Ware Adams rwali...@washdcmail.com
wrote:
On Mar 15, 2010, at 12:13 PM, Gabriele Bulfon wrote:
Well, I actually don't know what implementation is inside this
legacy machine.
This machine is an AMI StoreTrends ITX, but maybe it has been built
around IET,
On Sun, March 14, 2010 13:54, Frank Middleton wrote:
How can it even be remotely possible to get a checksum failure on mirrored drives with copies=2? That means all four copies were corrupted? Admittedly this is on a grotty PC with no ECC and flaky bus parity, but how come the same file
On Mon, March 15, 2010 00:54, no...@euphoriq.com wrote:
I'm running a raidz1 with 3 Samsung 1.5TB drives. Every time I scrub the
pool I get multiple read errors, no write errors and no checksum errors on
one drive (always the same drive, and no data loss).
I've changed cables, changed the
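When one drive keeps throwing read errors through cable swaps, the Solaris-side error counters can help tell disk from path. A sketch, with pool and device names as placeholders:

```shell
# Which vdev is accumulating the read errors (pool name hypothetical):
zpool status -v tank

# Per-device soft/hard/transport error and media-defect counters
# (device name hypothetical):
iostat -En c7t1d0
```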
Greg, I am using NetBackup 6.5.3.1 (7.x is out) with fine results. Nice and
fast.
-Scott
Hey Scott,
Thanks for the information. I doubt I can drop that kind of cash, but back to
getting bacula working!
Thanks again,
Greg
Wow. I never thought about it. I changed the power supply to a cheap one a
while back (a now seemingly foolish effort to save money) - it could be the
issue. I'll change it back and let you know.
Thanks
On 15.03.2010 21:13, no...@euphoriq.com wrote:
Wow. I never thought about it. I changed the power supply to a cheap one a
while back (a now seemingly foolish effort to save money) - it could be the
issue. I'll change it back and let you know.
Greetings all,
I understand that L2ARC is still under enhancement. Does anyone know if ZFS can be upgraded to include a persistent L2ARC, i.e. an L2ARC that will not lose its contents after a system reboot?
--
Abdullah Al-Dahlawi
George Washington University
Department of Electrical Computer
Hi,
I've been using OpenSolaris for about 2 years with a mirrored rpool and a data pool with 3 x 2 (mirrored) drives.
The data pool drives are connected to SIL PCI-Express cards.
Yesterday I updated from build 130 to 134; everything seemed to be fine, and I also replaced 1 pair of mirrored drives with larger
On Mon, March 15, 2010 15:35, Svein Skogen wrote:
On 15.03.2010 21:13, no...@euphoriq.com wrote:
Wow. I never thought about it. I changed the power supply to a cheap
one a while back (a now seemingly foolish effort to save money) - it
could
some screenshots that may help:
  pool: tank
    id: 5649976080828524375
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        data         ONLINE
          mirror-0   ONLINE
            c27t2d0  ONLINE
            c27t0d0  ONLINE
Hi Cindy,
trying to reproduce this
For a RAIDZ pool, the zpool list command identifies the inflated space for the storage pool, which is the physically available space without accounting for redundancy overhead.
The zfs list command identifies how much actual pool space is available
Tonmaus wrote:
I am lacking 1 TB on my pool:
u...@filemeister:~$ zpool list daten
NAME   SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
daten  10T   3,71T  6,29T  37%  1.00x  ONLINE  -
u...@filemeister:~$ zpool status daten
  pool: daten
 state: ONLINE
 scrub: none requested
config:
NAME
On Mon, 2010-03-15 at 15:03 -0700, Tonmaus wrote:
Hi Cindy,
trying to reproduce this
For a RAIDZ pool, the zpool list command identifies the inflated space for the storage pool, which is the physically available space without accounting for redundancy overhead.
The zfs list
On Mon, 2010-03-15 at 15:40 -0700, Carson Gaspar wrote:
Tonmaus wrote:
I am lacking 1 TB on my pool:
u...@filemeister:~$ zpool list daten
NAME   SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
daten  10T   3,71T  6,29T  37%  1.00x  ONLINE  -
u...@filemeister:~$ zpool status daten
Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and prepared as a zfs pool consisting of this single iscsi target. ZFS best practices tell me that to be safe in case of corruption, pools should always be mirrors or raidz on 2 or more disks. In
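Following that advice on iscsi-backed storage means giving ZFS two independent LUNs to mirror, ideally from different appliances, so it can self-heal from checksum errors. A sketch with made-up device names:

```shell
# Create a mirrored pool from two iscsi LUNs.
# c5t0d0 / c6t0d0 are hypothetical device names; use the ones that
# appear after iscsiadm discovery and "devfsadm -i iscsi".
zpool create tank mirror c5t0d0 c6t0d0
```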
My guess is unit conversion and rounding. Your pool has 11 base-10 TB, which is 10.2445 base-2 TiB.
Likewise your fs has 9 base-10 TB, which is 8.3819 base-2 TiB.
Not quite.
11 x 10^12 =~ 10.004 x (1024^4).
So, the 'zpool list' is right on, at 10T available.
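The corrected conversion is easy to verify with a one-liner (the figures come from the thread; only the command itself is new here):

```shell
# Convert 11 base-10 TB (11 * 10^12 bytes) into base-2 TiB (1024^4 bytes).
awk 'BEGIN { printf "%.3f TiB\n", 11 * 10^12 / 1024^4 }'
# prints 10.004 TiB, matching the corrected figure in the thread
```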
Duh! I completely
Someone wrote (I haven't seen the mail, only the unattributed quote):
My guess is unit conversion and rounding. Your pool has 11 base-10 TB, which is 10.2445 base-2 TiB.
Likewise your fs has 9 base-10 TB, which is 8.3819 base-2 TiB.
Not quite.
11 x 10^12 =~ 10.004 x (1024^4).
So,
On Mon, Mar 15, 2010 at 9:55 AM, Gabriele Bulfon gbul...@sonicle.com wrote:
Hello,
I'd like to check for any guidance about using zfs on iscsi storage
appliances.
Recently I had an unlucky situation with a storage machine freezing.
Once the storage was up again (rebooted) all other
On Mon, Mar 15, 2010 at 5:39 PM, Abdullah Al-Dahlawi dahl...@ieee.org wrote:
Greetings all,
I understand that L2ARC is still under enhancement. Does anyone know if ZFS can be upgraded to include a persistent L2ARC, i.e. an L2ARC that will not lose its contents after a system reboot?
There is a bug
On Mar 15, 2010, at 7:11 PM, Tonmaus sequoiamo...@gmx.net wrote:
Being an iscsi target, this volume was mounted as a single iscsi disk from the solaris host, and prepared as a zfs pool consisting of this single iscsi target. ZFS best practices tell me that to be safe in case of corruption,
On Mon, Mar 15, 2010 at 9:10 PM, Ross Walker rswwal...@gmail.com wrote:
On Mar 15, 2010, at 7:11 PM, Tonmaus sequoiamo...@gmx.net wrote:
Being an iscsi
target, this volume was mounted as a single iscsi
disk from the solaris host, and prepared as a zfs
pool consisting of this single iscsi
On Mar 15, 2010, at 11:10 PM, Tim Cook t...@cook.ms wrote:
On Mon, Mar 15, 2010 at 9:10 PM, Ross Walker rswwal...@gmail.com
wrote:
On Mar 15, 2010, at 7:11 PM, Tonmaus sequoiamo...@gmx.net wrote:
Being an iscsi
target, this volume was mounted as a single iscsi
disk from the solaris host,
On Mar 14, 2010, at 11:25 PM, Tonmaus wrote:
Hello again,
I am still not sure whether my points are being well taken.
If you are concerned that a single 200TB pool would take a long time to scrub, then use more pools and scrub in parallel.
The main concern is not scrub time. Scrub time