Michael,
Your explanation is 100% correct: I am concerned about the effort of managing
quite large files, e.g. 500 MB.
In my specific case we have DVD/Blu-ray chapter files of 500 MB - 2 GB (parts of
a movie) that are concatenated into the complete movie (3-20 GB).
From my point of view (large files) this is the main concern.
I was thinking in the same direction about the efficiency of the offset
calculations. I have been trying to get into the ZFS source code to understand
this part, but have not had time to get there yet.
This issue may be a showstopper for the proposal, as it would restrict the
functionality to quite rare cases.
Thank you for the feedback Michael.
zcat was my acronym for a special ZFS-aware version of cat, and I did
not know that it was an existing command. I simply forgot to check. It should
be renamed to zfscat or something similar.
Kind regards
Per
Michael Schuster wrote:
Per Baatrup wrote:
I would like to concatenate N files into one big file, taking advantage of
ZFS copy-on-write semantics, so that the file concatenation is done without
actually copying any (large amount of) file content.
cat f1 f2 f3 f4 f5 > f15
Is this already possible when source and target are on the same ZFS filesystem?
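To make the proposal concrete: plain cat below physically copies every input byte into the target, whereas the proposed ZFS-aware tool (hypothetical, not an existing command) would produce the same logical file while sharing the source blocks copy-on-write. A minimal sketch:

```shell
#!/bin/sh
# Plain cat: the target receives a full physical copy of every byte.
# A ZFS-aware zfscat (hypothetical) would instead reference the source
# blocks copy-on-write, writing (almost) no data.
set -e
dir=$(mktemp -d)
printf 'chapter1-' > "$dir/f1"
printf 'chapter2' > "$dir/f2"
cat "$dir/f1" "$dir/f2" > "$dir/movie"
cat "$dir/movie"        # byte-wise concatenation of the inputs
echo
rm -rf "$dir"
```

The logical result is identical in both cases; the difference the proposal aims at is purely in how much data is read and written on disk.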
dedup operates on the block level, leveraging the existing ZFS checksums. Read
"What to dedup: Files, blocks, or bytes?" here:
http://blogs.sun.com/bonwick/entry/zfs_dedup
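Because dedup matches whole blocks, the copy of f2 inside the concatenated file only dedups against the original f2 if its blocks land on the same boundaries. A small sketch of that alignment arithmetic (the 128K recordsize is the ZFS default; the file size is a made-up example):

```shell
#!/bin/sh
# Inside the concatenated file, f2's data starts at offset size(f1).
# Its blocks can only be byte-identical to f2's own blocks (and thus
# dedup) if that offset is a multiple of the recordsize.
recordsize=131072      # 128K, the ZFS default
size_f1=524288         # example chapter size: exactly 4 blocks
if [ $((size_f1 % recordsize)) -eq 0 ]; then
    echo aligned       # f2's blocks line up and can dedup
else
    echo misaligned    # every f2 block is shifted; no dedup hits
fi
```

This is the same alignment problem behind the blocksize restriction discussed later in the thread: one misaligned predecessor file shifts every following block.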
The trick should be that the zcat userland app already knows that it will
generate duplicate data, so the data reads and writes could be avoided.
zcat was my acronym for a special ZFS-aware version of cat, and the name was
obviously a big mistake, as I did not know it was an existing command and simply
forgot to check.
Should it be renamed to zfscat or something similar?
--
This message posted from opensolaris.org
Actually 'ln -s source target' would not be the same as 'zcp source target',
as writing to the source file after the operation would change the target file
as well, whereas for zcp this would only change the source file, due to the
copy-on-write semantics of ZFS.
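The difference can be demonstrated with an ordinary cp standing in for the hypothetical zcp (zcp would give the same independent-copy semantics, just without physically copying the blocks):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d); cd "$dir"
printf 'v1' > source
ln -s source target_ln   # symlink: no data of its own
cp source target_cp      # copy: independent data (zcp would share
                         # blocks copy-on-write instead of copying)
printf 'v2' > source     # overwrite source after the fact
cat target_ln            # follows the symlink: shows v2
echo
cat target_cp            # independent copy: still shows v1
echo
cd / && rm -rf "$dir"
```

On filesystems that support reflinks, `cp --reflink` (GNU coreutils) gives a similar cheap copy-on-write copy, which is roughly the single-file case of what zcp proposes.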
Roland,
Clearly an extension of cp would be very nice when managing large files.
Today we are relying heavily on snapshots for this, but this requires
discipline in storing files in separate ZFS filesystems, avoiding snapshotting
too many files that change frequently.
The reason I was speaking about cat rather than cp was the need to concatenate
several files into one.
"if any of f2..f5 have different block sizes from f1"
This restriction does not sound so bad to me if it only refers to changes to
the blocksize of a particular ZFS filesystem, or to copying between different
ZFS filesystems in the same pool. This could probably be managed with a -f
switch on the userland app.
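One possible shape for that -f behaviour, sketched below. This is entirely hypothetical: the tool, flag semantics, and decision logic are illustrations of the proposal, not an existing interface. The idea is to splice blocks when the recordsizes match, and otherwise refuse unless the user explicitly forces a plain copy:

```shell
#!/bin/sh
# Hypothetical decision logic for a userland zfscat/zcp with -f:
# matching recordsizes -> cheap copy-on-write splice; a mismatch ->
# plain copy only if the user passed -f, otherwise refuse.
src_recordsize=131072
dst_recordsize=131072
force=no
if [ "$src_recordsize" -eq "$dst_recordsize" ]; then
    echo "splice blocks (copy-on-write)"
elif [ "$force" = yes ]; then
    echo "fall back to full copy"
else
    echo "refusing: recordsize mismatch (use -f to copy)"
fi
```

With matching recordsizes the cheap path is taken; the -f fallback keeps the tool usable in the mismatched case at the cost of a real copy.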
"Btw. I would be surprised to hear that this can be implemented
with current APIs;"
I agree. However, it looks like an opportunity to dive into the ZFS source code.
___
zfs-discuss mailing list