unlink(1M)?
cheers,
--justin
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris) <opensolarisisdeadlongliveopensola...@nedharvey.com>
To: Sami Tuominen <sami.tuomi...@tut.fi>; zfs-discuss@opensolaris.org
Sent: Monday, 26
has only one drive. If ZFS detects something bad it might kernel panic and
lose the whole system right?
What do you mean by "lose the whole system"? A panic is not in itself a bad thing, and it does not imply that the machine will fail to reboot successfully. It certainly
doesn't guarantee your OS will
would be very annoying if ZFS barfed on a technicality and I had to reinstall
the whole OS because of a kernel panic and an unbootable system.
Is this a known scenario with ZFS then? I can't recall hearing of this
happening.
I've seen plenty of UFS filesystems dying with panic: freeing
Can you check whether this happens from /dev/urandom as well?
It does:
finsdb137@root dd if=/dev/urandom of=oub bs=128k count=1 ; while true
do
    ls -s oub
    sleep 1
done
0+1 records in
0+1 records out
1 oub
1 oub
1 oub
1 oub
1 oub
4 oub
4 oub
4 oub
4 oub
4 oub
I think that, for the cleanness of the experiment, you should also run
sync after the dd, to actually commit your file to the pool.
OK that 'fixes' it:
finsdb137@root dd if=/dev/random of=ob bs=128k count=1 ; sync ; while true
do
    ls -s ob
    sleep 1
done
0+1 records in
0+1 records out
4 ob
While this isn't causing me any problems, I'm curious as to why this is
happening...:
$ dd if=/dev/random of=ob bs=128k count=1 ; while true
do
    ls -s ob
    sleep 1
done
0+1 records in
0+1 records out
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1 ob
1
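What the transcripts above are showing, as far as I can tell, is that ls -s reports *allocated* blocks, which ZFS only updates when the transaction group syncs, while the logical file length is visible immediately. A minimal sketch (file name "demo" is made up; run it in any scratch directory):

```shell
# Logical size (ls -l / wc -c) updates immediately; allocated blocks
# (ls -s) only catch up once the transaction group syncs to disk.
dd if=/dev/urandom of=demo bs=128k count=1 2>/dev/null
ls -l demo   # logical length: 131072 bytes, visible at once
ls -s demo   # allocated blocks: may still read 0 or 1 before the txg syncs
sync         # force the txg to commit
ls -s demo   # now reflects the real on-disk allocation
rm -f demo
```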
You do realize that the age of the universe is only on the order of
10^18 seconds, don't you? Even if you had a trillion CPUs each
chugging along at 3.0 GHz for all that time, the number of processor
cycles you would have executed cumulatively is only on the order of 10^40,
still 37 orders of magnitude short of the 2^256 (~10^77) possible SHA-256 outputs.
The point is that hash functions are many-to-one, and I think the point
was that verify isn't really needed if the hash function is good
enough.
This is a circular argument really, isn't it? Hash algorithms are never
perfect, but we're trying to build a perfect one?
It seems to me
This assumes you have low volumes of deduplicated data. As your dedup
ratio grows, so does the performance hit from dedup=verify. At, say,
dedupratio=10.0x, on average, every write results in 10 reads.
Well you can't make an omelette without breaking eggs! Not a very nice one,
anyway.
Yes
Since there is a finite number of bit patterns per block, have you tried to
just calculate the SHA-256 or SHA-512 for every possible bit pattern to see
if there is ever a collision? If you found an algorithm that produced no
collisions for any possible block bit pattern, wouldn't that be
Richard Elling wrote:
Miles Nordin wrote:
"ave" == Andre van Eyssen <an...@purplecow.org> writes:
"et" == Erik Trimble <erik.trim...@sun.com> writes:
"ea" == Erik Ableson <eable...@mac.com> writes:
"edm" == Eric D. Mudama <edmud...@bounceswoosh.org> writes:
ave> The LSI SAS controllers with
with other Word files. You will thus end up seeking all over the disk
to read _most_ Word files. Which really sucks.
snip
very limited, constrained usage. Disk is just so cheap that you
_really_ have to have an enormous amount of dup before the performance
penalties of dedup are
Raw storage space is cheap. Managing the data is what is expensive.
Not for my customer. Internal accounting means that the storage team gets paid
for each allocated GB on a monthly basis. They have
stacks of IO bandwidth and CPU cycles to spare outside of their daily busy
period. I can't
Does anyone know of a tool that can look over a dataset and give
duplication statistics? I'm not looking for something incredibly
efficient, but I'd like to know how much it would actually benefit our
Check out the following blog:
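Short of a purpose-built tool, a crude file-level estimate can be had with standard utilities; a hypothetical sketch (the DIR variable is made up, and since real ZFS dedup works on blocks rather than whole files, this understates the potential ratio):

```shell
# Crude, file-level duplication estimate: hash every file under $DIR
# and count how many are exact duplicates of an earlier one.
DIR=${DIR:-.}
find "$DIR" -type f -exec sha256sum {} + |
  awk '{n[$1]++}
       END {for (h in n) if (n[h] > 1) d += n[h] - 1;
            print d + 0, "duplicate files"}'
```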
UFS == Ultimate File System
ZFS == Zettabyte File System
it's a nit, but..
UFS != Ultimate File System
ZFS != Zettabyte File System
cheers,
--justin
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
zpool list doesn't reflect pool usage stats instantly. Why?
This is no different from how UFS behaves.
If you rm a file, rm uses the unlink(2) system call to do the work, which is
asynchronous.
In other words, unlink(2) almost immediately returns a successful return code to
rm (which can
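A small sketch of that asynchronous flavour of unlink(2): the name disappears the moment rm returns, while the blocks are reclaimed afterwards. Here the delayed reclamation is illustrated with an open descriptor holding the data alive (a related but simpler case than ZFS's txg-deferred frees):

```shell
# rm returns as soon as the directory entry is gone; the blocks are
# only reclaimed later -- here, once the last open descriptor closes.
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=8 2>/dev/null
exec 3< "$tmp"                     # hold the file open on fd 3
rm "$tmp"                          # unlink(2) succeeds immediately
ls "$tmp" 2>/dev/null || echo "name gone, blocks not yet freed"
exec 3<&-                          # close fd 3; space is reclaimed now
```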
Is there a more elegant approach that tells rmvolmgr to leave certain
devices alone on a per disk basis?
I was expecting there to be something in rmmount.conf to allow a specific device
or pattern to be excluded but there appears to be nothing. Maybe this is an RFE?
Why aren't you using amanda or something else that uses
tar as the means by which you do a backup?
Using something like tar to take a backup forgoes the ability to do things like
the clever incremental backups that ZFS can achieve, though; e.g. only backing
up the few blocks that have changed in
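For illustration, those ZFS-level incrementals look something like the sketch below. It needs a live pool to run, and the pool "tank", dataset "home", and host "backuphost" are made-up names, so treat it as an environment-specific fragment rather than a runnable recipe:

```shell
# Full send of a snapshot, then an incremental (-i) send that ships
# only the blocks changed between the two snapshots -- the ability a
# tar-based backup forgoes.
zfs snapshot tank/home@monday
zfs send tank/home@monday | ssh backuphost zfs receive backup/home
# ...later, ship only the delta since @monday:
zfs snapshot tank/home@tuesday
zfs send -i tank/home@monday tank/home@tuesday |
  ssh backuphost zfs receive backup/home
```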