solution I
see is to write some startup script which applies it to filesystems
other than rpool. Which feels kludgy. Is there a better way?
echo "set zfs:zil_disable = 1" >> /etc/system
Or use if you don't want to zap /etc/system..
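For reference, a hedged sketch of the runtime route (the historical
zil_disable tunable; it only affects filesystems mounted after the
change, so check your release before relying on it):

  # set the kernel variable live instead of editing /etc/system
  echo zil_disable/W0t1 | mdb -kw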
/Tomas
--
Tomas Ögren, st...@acc.umu.se, http://www.acc.umu.se/~stric
, performance will
increase.
/Tomas
Matt Harrison iwasinnamuk...@genestate.com wrote:
Hi list,
I want to monitor the read and write ops/bandwidth for a couple of
pools
and I'm not quite sure how to proceed. I'm using rrdtool so I either
want an accumulated counter or a gauge.
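One rough way to feed rrdtool a gauge, assuming a pool called tank and
the usual zpool iostat column layout (a sketch, not a polished collector;
the values carry K/M suffixes that would still need normalizing):

  # second sample = rate over the interval; first sample = average since boot
  zpool iostat tank 5 2 | tail -1 | \
      awk '{ printf "rops=%s wops=%s rbw=%s wbw=%s\n", $4, $5, $6, $7 }'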
According to the ZFS admin guide, running zpool
correct data to the malfunctioning device. Now
it does not. A scrub only reads the data and verifies that data matches
checksums.
/Tomas
mirror would be way more useful.
But you have to admit, it would probably be somewhat reliable!
/Tomas
dd if=/dev/rdsk/c0t5E83A97F1471E0A4d0s0 of=/dev/null bs=1024k count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 3.93114 s, 273 MB/s
This is in an x4170m2 with Solaris 10.
/Tomas
writes/sec. on the hot spare as it resilvers.
There is no other I/O activity on this box, as this is a remote
replication target for production data. I have the replication
disabled until the resilver completes.
700-800 seq ones perhaps.. for random, you can divide by 10.
/Tomas
a new pool. But you can use zfs send/recv to move the datasets, so
You can mix as much as you want to, but you can't remove a vdev (yet).
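A rough sketch of the send/recv move mentioned above (pool and snapshot
names are made up; -R sends the whole dataset tree with its properties):

  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | zfs recv -d newtank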
/Tomas
On 31 May, 2011 - Gertjan Oude Lohuis sent me these 0,9K bytes:
On 05/31/2011 03:52 PM, Tomas Ögren wrote:
I've done a not too scientific test on reboot times for Solaris 10 vs 11
with regard to many filesystems...
http://www8.cs.umu.se/~stric/tmp/zfs-many.png
As the picture shows, don't
ops using iostat, but that doesn't tell me how contiguous the
data is, i.e. when iostat reports 500 read ops, does that translate to
500 seeks + 1 read per seek, or 50 seeks + 10 reads, etc? Thanks!
Get DTraceToolkit and check out the various things under Disk and FS,
might help.
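A couple of examples in that spirit (DTrace io provider; run as root).
The one-liner shows the distribution of physical I/O sizes per device;
seeksize.d and iopattern from the DTraceToolkit give seek-distance and
random/sequential breakdowns:

  dtrace -n 'io:::start { @[args[1]->dev_statname] = quantize(args[0]->b_bcount); }'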
/Tomas
[others]
#fuser -c /opt
/opt:
#
Nothing at all for /opt. So it's safe to unmount? Nope:
...
Has anyone else seen something like this?
Try something less ancient, Solaris 10u9 reports it just fine for
example. ZFS was pretty new-born when snv27 got out..
/Tomas
On 10 May, 2011 - Tomas Ögren sent me these 0,9K bytes:
On 23 November, 2005 - Benjamin Lewis sent me these 3,0K bytes:
Hello,
I'm running Solaris Express build 27a on an amd64 machine and
fuser(1M) isn't behaving
as I would expect for zfs filesystems. Various google
with a PowerPC 604e
cpu, which had about 60MB/s memory bandwidth (which is kind of bad for a
332MHz cpu) and its disks could do 70-80MB/s or so.. in some other
machine..
/Tomas
as l2arc, waiting for a
Vertex2EX and a Vertex3 to arrive for ZIL/L2ARC testing. I/O to the
filesystems is quite low (50 writes, 500k data per sec on average), but
snapshot times go way up during backups.
/Tomas
95 69 5.69M 8.08M
Thanks
-Matt
/Tomas
filesystem?) *snip*.
you need to use zdb to see what the current block usage is for a
filesystem.
I'd have to look up the particular CLI usage for that, as I don't know
what it is
off the top of my head.
Anybody know the answer to that one?
zdb -bb pool
/Tomas
.
I can't think of any, so what are your uses?
/Tomas
On 07 April, 2011 - Russ Price sent me these 0,7K bytes:
On 04/05/2011 03:01 PM, Tomas Ögren wrote:
On 05 April, 2011 - Joe Auty sent me these 5,9K bytes:
Has this changed, or are there any other techniques I can use to check
the health of an individual SATA drive in my pool short of what ZFS
Status                 segment  LifeTime  LBA_first_err  [SK ASC ASQ]
    Description        number   (hours)
# 1 Default  Completed       -      293               -  [-   -    -]
...
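That output looks like a smartmontools self-test log; roughly along
these lines (the device path and whether you need a -d option depend on
your controller, so treat this as a sketch):

  smartctl -l selftest /dev/rdsk/c0t0d0s0
  smartctl -a /dev/rdsk/c0t0d0s0      # full attribute/health dump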
/Tomas
).
Does this explain the hang?
No..
/Tomas
.
Dave
/Tomas
for Solaris.
The problem itself is SPARC vs x86 and firmware for the card. AFAIK,
there is no SATA card with drivers for Solaris SPARC. Use a SAS card.
/Tomas
. I guess ZFS could start defaulting to 4k, but
ideally it should do the right thing depending on content (although
that's hard for disks that are lying).
/Tomas
, and regular Solaris could too but chooses not to.
statvfs() should be able to report as well. In ZFS, you will run out of
inodes at the same time as you run out of space.
/Tomas
On 05 December, 2010 - Chris Gerhard sent me these 0,3K bytes:
Alas you are hosed. There is at the moment no way to shrink a pool which is
what you now need to be able to do.
Back up and restore, I am afraid.
.. or add a mirror to that drive, to keep some redundancy.
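A hedged example of that attach (device names invented; the pool stays
online and resilvers the new side of the mirror):

  zpool attach tank c0t2d0 c0t3d0
  zpool status tank        # wait for the resilver to finish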
/Tomas
the same as a snapshot).
/Tomas
?), will see if the log messages disappear. Did the filesystem
kill off some snapshots or something in an effort to free up space?
Probably.
zfs list -t all to see all the snapshots as well.
/Tomas
?
It's mkfile.
/Tomas
:
`startx /usr/bin/dbus-launch --exit-with-session gnome-session' from
console. Which is how I've been starting X for some time.
This thread started out way off-topic from ZFS discuss (the filesystem)
and has continued off course.
/Tomas
question..?
/Tomas
will not
update accordingly, and it will show resilvering at 100% for the time
needed to catch up.
I believe this was fixed recently, by displaying how many blocks it has
checked vs how many to check...
/Tomas
more vdevs.
You cannot transform a raidz from one form to another.
You cannot remove a vdev.
/Tomas
for 'kaka': 'dedup' is readonly
/Tomas
c1t50060E800042AA70d1
Just FYI, this is an inefficient variant of a mirror. More CPU required
and lower performance.
/Tomas
, be fairly easy to test; and, if I removed the
snapshots afterward, wouldn't take space permanently (have to make sure
that the scheduler doesn't do one of my permanent snapshots during the
test). But I'm interested in the theoretical answer in any case.
/Tomas
compression (if you want). This is just a
temporary thing, as the filesystem will be used on the inside (with Copy
on Write), the outer one will grow back again.
/Tomas
it builds the L2ARC once?
L2ARC is currently cleared at boot. There is an RFE to make it
persistent.
/Tomas
last block, which since then has gotten lots
of new friends afterwards.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6899970
/Tomas
you can always learn more about a
disk, but that's a good starting point.
Since X, X+1, X+2 seem to be the worst possible case, try just
skipping over a few blocks.. Doubling (or so) the performance with a
single software tweak would surely be welcome.
/Tomas
On 20 May, 2010 - John Andrunas sent me these 0,3K bytes:
Can I make a pool not mount on boot? I seem to recall reading
somewhere how to do it, but can't seem to find it now.
zpool export thatpool
zpool import thatpool when you want it back.
/Tomas
times should be closer to what sd1 and 2 are doing.
sd2,3,4 seem to be getting about the same amount of read+write, but
their service time is 15-20 times higher. This will lead to crap
performance (and probably a broken array before long).
/Tomas
activity during a resilver, though, it turns into
random i/o. Which is slow on these drives.
Resilver does a whole lot of random io itself, not bulk reads.. It reads
the filesystem tree, not block 0, block 1, block 2... You won't get
60MB/s sustained, not even close.
/Tomas
to prevent it from falling asleep.
/Tomas
. It will
be slow to create and slow when (re)booting, but other than that it
might be ok..
Look into the zfs userquota/groupquota instead.. That's what I did, and
it's partly because of these issues that the userquota/groupquota got
implemented I guess.
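A minimal sketch of the userquota route on one shared filesystem
(dataset and user names are made up; needs a recent enough zfs version,
roughly snv_114 / Solaris 10 10/09):

  zfs set userquota@alice=10G tank/home
  zfs get userused@alice tank/home
  zfs userspace tank/home          # per-user usage summary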
/Tomas
... If this happens often enough to become a
performance problem, then you should throw away that L2ARC device
because it's broken beyond usability.
/Tomas
/removed at any time as well.
/Tomas
On 29 April, 2010 - Tomas Ögren sent me these 5,8K bytes:
On 29 April, 2010 - Roy Sigurd Karlsbakk sent me these 10K bytes:
I got this hint from Richard Elling, but haven't had time to test it much.
Perhaps someone else could help?
roy
Interesting. If you'd like to experiment, try adding a ZIL when u9
comes, so we can remove it again if performance goes crap.
A separate log will not help. Try faster disks.
/Tomas
them apart.
/Tomas
so will basically disable
the NFS service for a day or three. If the scrub were less aggressive
and took a week to complete, it would probably not kill the performance
as badly..
/Tomas
bug..
Copying via terminal (and cp) works.
At the moment I have a workaround: I use sftp to copy the files from the
laptop to the server. But this is a pain in the ass and I'm sure there's
a way to make this just work properly!
/Tomas
On 21 April, 2010 - Justin Lee Ewing sent me these 0,3K bytes:
So I can obviously see what zpools I have imported... but how do I see
pools that have been exported? Kind of like being able to see deported
volumes using vxdisk -o alldgs list.
'zpool import'
/Tomas
make stuff go faster.
/Tomas
On 12 April, 2010 - David Magda sent me these 0,7K bytes:
On Mon, April 12, 2010 10:48, Tomas Ögren wrote:
On 12 April, 2010 - Bob Friesenhahn sent me these 0,9K bytes:
ZFS is designed for high throughput, and TRIM does not seem to improve
throughput. Perhaps it is most useful for low
larger
because of COW, and l2_size from kstat is the actual size of L2ARC data.
So can anyone tell me why I am losing my working set from the l2_size
actual data?!
Maybe the data in the l2arc was invalidated, because the original data
was rewritten?
/Tomas
right
Sounds plausible.
/Tomas
=6700597
Solaris 10 'man zfs', under 'receive':
-u      File system that is associated with the received stream is
        not mounted.
/Tomas
and will be used if L2ARC needs it. It's not wasted, it's
just a number that doesn't match what you think it should be.
/Tomas
775528448
/Tomas
of the file.
3. Copy data over from lun1 (the old single-LUN pool) to the raidz
   (lun2, lun3, missingfile).
4. Destroy the old pool.
5. Replace missingfile with lun1.
With this method, the pool is lacking redundancy between steps 4 and 5,
but requires no extra space.
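A hedged sketch of the whole sequence (LUN names and sizes invented;
the placeholder must be a sparse file at least as big as lun1):

  mkfile -n 500g /var/tmp/placeholder
  zpool create newpool raidz lun2 lun3 /var/tmp/placeholder
  zpool offline newpool /var/tmp/placeholder   # keep writes off the fake device
  # ... copy the data over, e.g. zfs send | zfs recv ...
  zpool destroy oldpool
  zpool replace newpool /var/tmp/placeholder lun1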
/Tomas
On 08 March, 2010 - Chris Banal sent me these 0,8K bytes:
Assuming no snapshots, do full backups (i.e. tar or cpio) eliminate the
need for a scrub?
No, it won't read redundant copies of the data, which a scrub will.
/Tomas
us this, not the other way
around. :) Seriously though, isn't that easy to test? And I'm curious
myself too.
/Tomas
On 08 March, 2010 - Bill Sommerfeld sent me these 0,4K bytes:
On 03/08/10 12:43, Tomas Ögren wrote:
So we tried adding 2x 4GB USB sticks (Kingston Data
Traveller Mini Slim) as metadata L2ARC and that seems to have pushed the
snapshot times down to about 30 seconds.
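For reference, a setup along those lines would look roughly like this
(device names invented):

  zpool add tank cache c5t0d0 c6t0d0        # the two USB sticks
  zfs set secondarycache=metadata tank      # keep only metadata in L2ARC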
Out of curiosity, how
/Tomas
, modifies it and sends it back..
/Tomas
= 835000
set zfs:zfs_arc_meta_limit = 70
* some tuning
set ncsize = 50
set nfs:nrnode = 5
And I've done runtime modifications to swapfs_minfree to force usage of another
chunk of memory.
/Tomas
know why this is not incorporated into ZFS ?
What you can do until this is to enable compression (like lzjb) on the
zvol, then do your dd dance in the client, then you can disable the
compression again.
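A hedged sketch of that dance (names invented; the zeroed blocks
compress to almost nothing, so the zvol's allocated space shrinks):

  zfs set compression=lzjb tank/vol1        # on the ZFS host
  # inside the client that uses the zvol/LUN:
  dd if=/dev/zero of=/mnt/zerofile bs=1024k ; rm /mnt/zerofile
  # back on the host, once done:
  zfs set compression=off tank/vol1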
/Tomas
..
Thoughts?
/Tomas
On 21 February, 2010 - Felix Buenemann sent me these 0,7K bytes:
Am 20.02.10 03:22, schrieb Tomas Ögren:
On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes:
How do you tell how much of your l2arc is populated? I've been looking for
a while now, can't seem to find it.
Must
On 21 February, 2010 - Richard Elling sent me these 1,3K bytes:
On Feb 21, 2010, at 9:18 AM, Tomas Ögren wrote:
On 21 February, 2010 - Felix Buenemann sent me these 0,7K bytes:
Am 20.02.10 03:22, schrieb Tomas Ögren:
On 19 February, 2010 - Christo Kutrovsky sent me these 0,5K bytes
/l2arc_screenshots
And follow up, can you tell how much of each data set is in the arc or l2arc?
kstat -m zfs
(p, c, l2arc_size)
arc_stat.pl is good, but doesn't show l2arc..
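One way to pull the L2ARC numbers straight out of the arcstats kstat
(values are in bytes; stat names as in current arcstats, so double-check
on your build):

  kstat -p zfs:0:arcstats:l2_size
  kstat -p zfs:0:arcstats:l2_hits zfs:0:arcstats:l2_misses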
/Tomas
than your current disks. Or if you can stick an Intel
X25-M/E in there through SATA/SAS.
You can add/remove L2ARCs at will and they don't need to be 100%
reliable either, so if you add several of them they will be raid0'd for
performance.
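For example (device name invented); cache devices can be added and
dropped online:

  zpool add tank cache c4t0d0
  zpool remove tank c4t0d0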
/Tomas
timestamp (which could happen for various reasons, like
being copied with kept timestamps from somewhere else).
/Tomas
disk usage;
/Tomas
/uts/common/fs/zfs/arc.c#arc_reclaim_needed
/Tomas
On 20 January, 2010 - Mr. T Doodle sent me these 1,0K bytes:
I currently have one filesystem, / (root); is it possible to put a quota
on, let's say, /var? Or would I have to move /var to its own filesystem
in the same pool?
Only filesystems can have different settings.
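A minimal sketch of the dataset route (names invented; moving a live
/var on a root pool needs more care than this, e.g. doing the copy from
single-user mode or a new boot environment):

  zfs create -o mountpoint=/var rpool/var
  zfs set quota=10G rpool/var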
/Tomas
.
Smells like a serious bug.
/Tomas
-- richard
We had a setup stage when pool1 was configured on nodea with nodea_l2arc
and pool2 was configured
extended attributes + cron, you could provide the same service
yourself and other similar (or not) things people would like to do
without developers providing it for you in the fs..
Start at 'man fsattr'
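A tiny, hypothetical illustration of the idea: stash a note in a file's
extended-attribute namespace with runat(1), which a cron job could later
scan for:

  echo 2011-12-31 > /tmp/expires
  runat /export/data/somefile cp /tmp/expires expires
  runat /export/data/somefile cat expires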
/Tomas
of the last 1GB out of 8G..
I think it was swapfs_minfree that I poked with a sharp stick. No idea
if anything else that relies on it could break, but the machine has been
fine for a few weeks here now and using more memory for ARC.. ;)
/Tomas
, allow, ..) started using
python in u8.
/Tomas
in my testing) and let metadata take as much as
it'd like.. Is there a chance of getting something like this?
/Tomas
quota. Does this feature only work with OpenSolaris or is it
intended to work on Solaris 10?
ZFS userspace quota doesn't support rquotad reporting. (.. yet?)
/Tomas
and zdb -vvv is at:
http://www.acc.umu.se/~stric/tmp/zfs-userquota.txt
/Tomas
On 20 October, 2009 - Matthew Ahrens sent me these 0,7K bytes:
Tomas Ögren wrote:
On a related note, there is a way to still have quota used even after
all files are removed, S10u8/SPARC:
In this case there are two directories that have not actually been
removed. They have been removed
.. Maybe comparing timestamps and see that label 2/3 aren't
so hot anymore and ignore them, or something..
zdb -l and zpool import dumps at:
http://www.acc.umu.se/~stric/tmp/zdb-dump/
/Tomas
of info.
If I was using some SAN and my lun got increased, and the new storage
space had some old scrap data on it, I could get hit by the same issue.
Maybe I missed the point. Let me know.
Cindy
On 10/19/09 12:41, Tomas Ögren wrote:
Hi.
We've got some test machines which amongst others has
it on
sparc (physical) too. I didn't install LU from the u8 iso, but it was
patched with latest LU patches through PCA.
[b]luactivate sol10alt[/b]
If you lumount, comment out those rpool/ROOT/ thingies, then luumount
here, and it'll work too.
[b]/usr/sbin/shutdown -g0 -i6 -y[/b]
/Tomas
to your PERC..
/Tomas
which also refers to
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=2178540
So it seems like it's fixed in snv114 and s10u8, which won't help your
s10u4 unless you update..
/Tomas
, or OpenSolaris releases with ZFS
User quotas? (Will 2010.02 contain ZFS User quotas?)
http://sparcv9.blogspot.com/2009/08/solaris-10-update-8-1009-is-comming.html
which is in no way official, says it'll be in 10u8 which should be
coming within a month.
/Tomas