zfs_prefetch_disable
Soft Track Buffer / Prefetch:
http://blogs.sun.com/roch/entry/the_dynamics_of_zfs
As far as I've been able to tell using mdb, this is already lowered in b48?
http://blogs.sun.com/roch/entry/tuning_the_knobs
Suggestions, ideas etc?
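For reference, a sketch of how that knob is usually inspected and set
(the /etc/system line takes effect at boot; the value is just an example):

  echo zfs_prefetch_disable/D | mdb -k    # check the live value
  set zfs:zfs_prefetch_disable=1          # /etc/system: disable file-level prefetch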
/Tomas
On 09 November, 2006 - Neil Perrin sent me these 1,6K bytes:
Tomas Ögren wrote On 11/09/06 09:59,:
1. DNLC-through-ZFS doesn't seem to listen to ncsize.
The filesystem currently has ~550k inodes and large portions of it are
frequently looked over with rsync (over nfs). mdb said ncsize
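For reference, ncsize is normally set from /etc/system and checked with
mdb (sketch; the value is just an example):

  set ncsize=500000         # /etc/system, takes effect at boot
  echo ncsize/D | mdb -k    # verify the live value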
be validated?
If the block checksum is ok, then the parity is ok too.. I think?
(assuming checksum=on)
/Tomas
--
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
On 09 November, 2006 - Tomas Ögren sent me these 4,4K bytes:
On 09 November, 2006 - Neil Perrin sent me these 1,6K bytes:
nfs does have a maximum number of rnodes which is calculated from the
memory available. It doesn't look like nrnode_max can be overridden.
rnode seems to take 472
no_grow = 0 -- This would be set to 1 if we have a memory crunch
And as Neil pointed out we would probably need some way of limiting the
ARC consumption.
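A sketch of the documented knob, assuming the nfs:nrnode tunable (the
value is just an example):

  set nfs:nrnode=100000            # /etc/system: override the computed rnode cache size
  echo 'nfs`nrnode/D' | mdb -k     # check the live value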
/Tomas
zpool import yourpool bettername
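I.e. a rename is just an export plus an import under the new name
(sketch, with made-up pool names):

  zpool export yourpool
  zpool import yourpool bettername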
/Tomas
disabling the VDEV prefetch.
If not, it is worth a try.
That was part of my original question, how? :)
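A sketch of what I've seen suggested; zfs_vdev_cache_size sizes the
per-vdev prefetch cache, and 0 should turn it off:

  set zfs:zfs_vdev_cache_size=0             # /etc/system
  echo zfs_vdev_cache_size/W0 | mdb -kw     # or live on a running kernel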
/Tomas
On 13 November, 2006 - Eric Kustarz sent me these 2,4K bytes:
Tomas Ögren wrote:
On 13 November, 2006 - Sanjeev Bagewadi sent me these 7,1K bytes:
Regarding the huge number of reads, I am sure you have already tried
disabling the VDEV prefetch.
If not, it is worth a try.
That was part
c4t40d1 \
mirror c2t40d2 c4t40d2 \
mirror c2t40d3 c4t40d3 \
mirror c3t40d0 c5t40d0 \
mirror c3t40d1 c5t40d1 \
mirror c3t40d2 c5t40d2
/Tomas
..
/Tomas
can limit the size from block 1 to
some nth block. Like this, is there any subcommand to limit the
size of a ZFS file system from block 1 to some nth block?
Just the amount, not specific positions on/portions of the FS/devices.
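I.e. you can cap the total amount with a quota (sketch, made-up dataset
name):

  zfs set quota=10g data/somefs
  zfs get quota data/somefs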
/Tomas
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
data 2,89T 184K 2,89T 0% EN LIGNE -
# zfs list
NAME USED AVAIL REFER MOUNTPOINT
data 75,5K 2,85T 24,5K /data
/Tomas
On 20 December, 2006 - storage-disk sent me these 0,4K bytes:
Hi Eric,
How do you decode file /var/fm/fmd/errlog and /var/fm/fmd/fltlog?
fmdump -e (for errlog), fmdump (for fltlog)
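Both also accept a log file as argument (standard fmdump usage):

  fmdump -eV /var/fm/fmd/errlog    # error events, verbose
  fmdump -V /var/fm/fmd/fltlog     # fault events, verbose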
/Tomas
-compress..
Try the difference between zfs and zfs+gzip versus gzip and gzip+gzip..
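A sketch of such a test (made-up pool/fs names; compressratio reports
what was achieved):

  zfs create -o compression=gzip tank/test
  cp big.tar /tank/test/        # compressible input shrinks
  cp big.tar.gz /tank/test/     # already-gzipped input won't
  zfs get compressratio tank/test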
/Tomas
hash_elements = 0xcda1
hash_elements_max = 0x1b589
hash_collisions = 0x18e58a
hash_chains = 0x3d16
hash_chain_max = 0xf
no_grow = 0x1
}
Should I post ::kmem_cache and/or ::kmastat somewhere? It's about
2*(20+30)kB..
/Tomas
I tried changing arc_reduce_dnlc_percent=0 to keep my dnlc cache
and let someone else free their memory instead, but that led to death
much faster..
Adding more memory seems to just give you more time, not solve the
problem..
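A sketch of how such a live change is made (standard mdb write syntax):

  echo arc_reduce_dnlc_percent/W0 | mdb -kw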
/Tomas
On 03 January, 2007 - Robert Milkowski sent me these 0,2K bytes:
Hello Tomas,
Give us output of ::kmastat on crashdump.
Ok, attached.
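For reference, running it against a saved dump looks like this (dump
number 0 assumed):

  cd /var/crash/`hostname`
  echo ::kmastat | mdb unix.0 vmcore.0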
/Tomas
mtx = {
_opaque = [ 0 ]
}
}
/Tomas
http://blogs.sun.com/roch/entry/when_to_and_not_to has some info for
you..
/Tomas
On 03 January, 2007 - Richard Elling sent me these 0,5K bytes:
Tomas Ögren wrote:
df (GNU df) says there are ~850k inodes used, I'd like to keep those in
memory.. There is currently 1.8TB used on the filesystem.. The
probability of a cache hit in the user data cache is about 0
lookups even after
just 3h.. it's usually been around 20% or so when it's automatically
lowered to around 15k entries due to memory pressure..
/Tomas
.. It was fine with ncsize=500k (and all of it
used) for a while.. then all of a sudden it just went haywire.. and when
it freed up dnlc, I got back 250MB.. where's the rest ~1750MB tied up?
/Tomas
the near death experience..
I've got vmcore as well if needed..
/Tomas
is going? I sure hope that 500k dnlc
entries (+dnode_t's etc belonging to that) isn't using up about 2GB
ram..?
/Tomas
cache
On 05 January, 2007 - Tomas Ögren sent me these 33K bytes:
On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes:
So it looks like this data does not include ::kmastat info from *after*
you reset arc_reduce_dnlc_percent. Can I get that?
Yeah, attached. (although about 18 hours
On 05 January, 2007 - Mark Maybee sent me these 2,9K bytes:
Tomas Ögren wrote:
On 05 January, 2007 - Mark Maybee sent me these 1,5K bytes:
So it looks like this data does not include ::kmastat info from *after*
you reset arc_reduce_dnlc_percent. Can I get that?
Yeah, attached. (although
On 05 January, 2007 - Tomas Ögren sent me these 3,3K bytes:
These numbers come from the last ::kmastat you ran before reducing the
DNLC size. Note below that much of this space is still consumed by
these caches, even after the DNLC has dropped it references. This is
largely due
On 07 January, 2007 - Tomas Ögren sent me these 1,0K bytes:
On 05 January, 2007 - Tomas Ögren sent me these 3,3K bytes:
These numbers come from the last ::kmastat you ran before reducing the
DNLC size. Note below that much of this space is still consumed by
these caches, even after
ONLINE 0 0 0
errors: No known data errors
/Tomas
the ARC's
ability to evict vnodes from the DNLC).
I've tried that.. didn't work out too great due to fragmentation.. Left
non-kernel with like 4MB to play with..
/Tomas
=~ 5.45TiB
Where is the rest of my poolspace?
4*500GB went to security/safety.
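(Back-of-envelope, assuming two 8-disk raidz2 vdevs: 16 - 4 parity = 12
data disks; 12 * 500 GB = 6000 GB, and 6e12 bytes / 2^40 =~ 5.45 TiB.)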
I'm using 16 x 500 GB disks with a raidz2 layout.
I expected that available pool size = available zfs size.
But now I see 7.24T != 5.33T
Why?
I'm running default options on tray30.
Christian
/Tomas
c0t1d0s5 ONLINE 0 0 0
c1t1d0s5 ONLINE 0 0 0
/Tomas
of the disk) and it won't change to EFI and won't mess around with
the write cache for additional performance.
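I.e. the difference is whole disk vs slice (sketch, made-up names):

  zpool create tank c1t0d0      # whole disk: EFI label, write cache enabled
  zpool create tank c1t0d0s5    # slice: keeps the SMI label, cache untouched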
/Tomas
?
/Tomas
appears to not be a valid number for find . -inum blah
Looks very hexadecimal to me.. Try 2220930 instead.
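The conversion itself, assuming the listed inode was 21e382 hex:

  printf '%d\n' 0x21e382    # prints 2220930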
/Tomas
of security vs space
vs performance.
http://blogs.sun.com/roch/entry/when_to_and_not_to
http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_performance
/Tomas
(checking for new fs'es at each scheduled operation)..
Exclude the ones you don't want with exclude.fs in InclExcl..
/Tomas
a separate defrag thing that you can run whenever you feel
like it instead..
/Tomas
performance will be twice that of a single
raidz2/single disk.
/Tomas
On 18 May, 2007 - Dale Sears sent me these 1,5K bytes:
Tomas Ögren wrote:
On 14 May, 2007 - Dale Sears sent me these 0,9K bytes:
I was wondering if this was a good setup for a 3320 single-bus,
single-host attached JBOD. There are 12 146G disks in this array:
I used:
zpool create
I rebooted with the new drive, the ZFS
pool reappeared. I just wanted someone else's opinion. I did not see
anything in the documentation about this.
zpool import will do the same.
/Tomas
request a relayout; for example can
I convert a raidz1 pool to a raidz2 pool?
Currently no.
/Tomas
+st_size+24btnG=Search
/Tomas
- is it different regarding this
issue ?
I believe the compression thingie is single threaded up until nevada
build 55-60 something.. Doesn't matter which algorithm is used..
/Tomas
On 16 June, 2007 - George sent me these 1,1K bytes:
Where can you find the timeframe on that Tomas?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6460622
/Tomas
of your data a lot without plain out losing half
your disk space due to simple mirroring.
http://blogs.sun.com/bill/entry/ditto_blocks_the_amazing_tape
http://blogs.sun.com/relling/entry/zfs_copies_and_data_protection
/Tomas
on filesystems which doesn't have a bunch of snapshots?
...etc
/Tomas
in a transaction group.
/Tomas
...
/Tomas
, the next day I found 340 errors..
/Tomas
to replace a hwraid5 (single device) with a
raidz (multiple devices) or replace 3 t3b's with a single se3511.. For
that, you need the evacuate/shrink thingie which I've heard ETAs around
year's end.
/Tomas
for this but only saw references to this and similar
threads. Is there a database where I can search?
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=5003563
/Tomas
for this
test).
So, not straight out of the box, but maybe.
/Tomas
different pools doing the exact same thing
or the same pool with and without NCQ..
/Tomas
or
lowering the flush timeout might help..
/Tomas
/zfs/version/2/
... This version includes support for Ditto Blocks, or replicated
metadata.
Can anybody shed any light on it ?
The 'copies' property in zfs set is ditto blocks for data.. the one in ver2
is for metadata only..
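Sketch, with a made-up dataset name:

  zfs set copies=2 tank/fs    # ditto blocks for the data as well
  zfs get copies tank/fs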
/Tomas
On 24 January, 2008 - Kava sent me these 0,3K bytes:
Ahh .. so you end up with 2 copies of disk A, one on disk B and the other on
disk C?
Depends on how you see it.. You end up with 3 copies of your data.. on
disk A, B and C..
/Tomas
) consistency guarantees, try disabling ZIL..
google://zil_disable .. This should up the speed, but might cause data
loss/corruption as seen by the client if the server crashes while a
client is writing data.. (just like with UFS)
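Sketch of the old global tunable (applies to all pools; takes effect as
filesystems are mounted):

  set zfs:zil_disable=1    # /etc/system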
/Tomas
disks without data loss..
/Tomas
.
Are you taking periodic snapshots? Currently that will restart
scrubs..
/Tomas
property
7 Separate intent log devices
8 Delegated administration
9 refquota and refreservation properties
10 Cache devices
For more information on a particular version, including supported
releases, see:
http://www.opensolaris.org/os/community/zfs/version/N
/Tomas
when a disk pukes? I don't want to re-invent the wheel (but am so
far pretty surprised I've not turned up any such so far).
zpool status -x | grep -v 'all pools are healthy'
in cron, is one method ;)
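E.g. as a crontab entry; cron mails you any output, so silence means
healthy:

  0 * * * * /usr/sbin/zpool status -x | grep -v 'all pools are healthy'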
/Tomas
will be restored, and NFS exports done through ZFS.. The rest of
the OS restoring will be up to you..
/Tomas
On 15 May, 2008 - Mckay, Al sent me these 1,7K bytes:
Remove me from this group!!!
Taken from the headers of your own mail:
List-Unsubscribe: http://mail.opensolaris.org/mailman/listinfo/zfs-discuss,
mailto:[EMAIL PROTECTED]
/Tomas
: how do I move the pool to the new
controller? Hand-editing /etc/zfs/zpool.cache seems, uh, daunting :)
zpool export blah
move stuff
zpool import blah
/Tomas
don't. Today, only simple or mirrored vdevs are
usable for ZFS boot devices.
A two disk raidz has no advantages over a two disk mirror, but it does
have disadvantages (slower and you can't boot from it ;)
/Tomas
mypool
repeat for t1..t3
/Tomas
a normal directory
and -when- it is better to create a zpool/filesystem
I know this is related to personal taste, but -some- good advice might
exist ;-)
When you need different accounting (df) or FS options (compression, ...)
for a specific tree..
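Sketch, with made-up names:

  zfs create -o compression=on tank/logs    # its own options...
  df -h /tank/logs                          # ...and its own accounting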
/Tomas
and reduce the number of devices in a pool from 3 to 2. Even if
there is enough space in those 2 remaining drives to hold all the
data.
Currently, yes. It's being worked on as far as I know.
/Tomas
recommend for maximal data
protection.
ZFS can mirror disk devices.. If your iSCSI targets show up as disk
devices, they can be mirrored. Try it for yourself ;)
If you have very different latency to them, performance will suffer
(like you only used the slower one)..
/Tomas
,
sees that it's still correct and fixes A.
A might be intelligent storage and can cope with a disk dying, but if A
delivers bit errors up to ZFS - then ZFS can't fix it. If A is actually
dumb storage and you leave the raid part to ZFS, then it can fix them.
/Tomas
-33.92GB
/[EMAIL PROTECTED],0/pci1022,[EMAIL PROTECTED]/pci10f1,[EMAIL
PROTECTED],1/[EMAIL PROTECTED],0
zpool export zfs; zpool import zfs
/Tomas
the pool.
Currently your only option is to copy data somewhere else, destroy pool
and create a new one. Disk removal is being worked on I believe, but it
gets kinda complex when you have a bunch of snapshots, clones etc..
/Tomas
already supports it.
/Tomas
S.M.A.R.T test after 27 hours.
/Tomas
performance is
gained with striped mirrors, but then you lose half of your disks in
space..
/Tomas
?
This is when reading.. and since both disks contain the same data, you
can pick either of them.. For reading blocks a and b, you can read a from
disk 1 and b from disk 2 at the same time..
/Tomas
On 14 August, 2008 - Paul Raines sent me these 2,9K bytes:
This problem is becoming a real pain to us again and I was wondering
if there has been in the past few month any known fix or workaround.
Sun is sending me an IDR this/next week regarding this bug..
/Tomas
cache is a lost
cause. (Waiting for snv96 with primarycache=metadata)
Other than that.. I like it.
/Tomas
not reduce the size of the pool. Once you add a disk
to a pool, you can only get rid of it by replacing it with something
equal or larger in size.
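The replace itself (sketch, made-up device names):

  zpool replace tank c1t2d0 c1t3d0    # new disk must be >= the old one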
/Tomas
On 15 August, 2008 - Tomas Ögren sent me these 0,4K bytes:
On 14 August, 2008 - Paul Raines sent me these 2,9K bytes:
This problem is becoming a real pain to us again and I was wondering
if there has been in the past few month any known fix or workaround.
Sun is sending me an IDR
' as root restarting resilvering..
Doing it as a regular user will not..
/Tomas
.
I just tried (mkfile in /tmp) on both Sol10u5 and snv97, both seems to
work.
/Tomas
the directory,
nothing else, or it will probably mess up the stream.
Make sure it only echoes stuff when it's an interactive login.
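E.g. in the remote account's ~/.profile (sketch):

  if [ -t 0 ]; then    # only on interactive logins,
    cat /etc/motd      # not during 'ssh host zfs recv ...'
  fi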
/Tomas
for instance
make a new directory somewhere and put symlinks there to the real
devices, then 'zpool import -d /that/directory' to only search there
for devices to consider.
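Sketch, with made-up paths:

  mkdir /tmp/zdev
  ln -s /dev/dsk/c5t0d0s0 /tmp/zdev/
  zpool import -d /tmp/zdev mypool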
/Tomas
future..
/Tomas
...
ZFS does not support RAID0 (simple striping).
zpool create mypool disk1 disk2 disk3
Sure it does.
/Tomas
(due to how ZFS migrates data from prim to sec) to have
primarycache=metadata and secondarycache=all with the L2 ramdisk?
How does ZFS currently behave if the L2 is blank/missing at boot?
Maybe this trickery will starve the DNLC too though..
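I.e. something like (made-up dataset name):

  zfs set primarycache=metadata tank/fs
  zfs set secondarycache=all tank/fs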
/Tomas
On 15 October, 2008 - Richard Elling sent me these 4,3K bytes:
Tomas Ögren wrote:
Hello.
Executive summary: I want arc_data_limit (like arc_meta_limit, but for
data) and set it to 0.5G or so. Is there any way to simulate it?
We describe how to limit the size of the ARC cache
On 16 October, 2008 - Darren J Moffat sent me these 1,7K bytes:
Tomas Ögren wrote:
On 15 October, 2008 - Richard Elling sent me these 4,3K bytes:
Tomas Ögren wrote:
Hello.
Executive summary: I want arc_data_limit (like arc_meta_limit, but for
data) and set it to 0.5G or so
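For reference, the closest stock knob caps the whole ARC rather than
just its data portion (sketch; the value is only an example):

  set zfs:zfs_arc_max=0x20000000    # /etc/system: cap the ARC at 512 MB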
mean 'zpool create mypool disk1 disk2' which creates mypool
consisting of the two disks disk1 and disk2 without any ZFS redundancy?
Or what's your definition of 2 ZFS 460gb disks and a 900gb ZFS disk?
/Tomas
performance.. The
mirror thing has the possibility of achieving higher reliability.. 1 to
3 disks can fail without interruptions, depending on how Murphy picks
them.. The raidz1 one can handle 1 disk only..
/Tomas
reason).
Creating two pools is one way.. Or if keeping (some) files is the
important bit, do mirroring..
/Tomas
environment zfsBE successful.
Creation of boot environment zfsBE successful.
/Tomas
version, and you can RDP to it from another machine..
/Tomas
might want to try with checksum off as well..
/Tomas
to use.
Do both? :)
/Tomas