Dear Mr. Lougher,
(1) Here's something else for you to think about; it may be relevant to
solving the mystery. As a workaround for the problem, I wrote a Python
tool (syscall.py) that Transparent Archivist can invoke to run
mksquashfs with a timeout and an automatic restart after the timeout
expires. It worked, but at first there was a problem:
when syscall.py killed mksquashfs (because time ran out) and then
restarted mksquashfs, mksquashfs complained that it couldn't find the
superblock, and it advised that I should use -noappend to simply
overwrite the new squashfs filesystem instead of attempting to append to
the one that was incompletely created in the previous (interrupted)
attempt. So of course I added -noappend to Transparent Archivist's
invocation of mksquashfs:
    case squashfs:
    {
        char *args[] = {
            "/usr/local/ch-tools/syscall.py",
            "-timeout", "3600",
            "-exec", "mksquashfs", before.aschar(),
            tempimgfilename.aschar(), "-noappend",
            NULL
        };
        sys (args);
...and so far the hanging problem has not recurred. So perhaps the
hang occurs only when updating an already-existing squashfs
filesystem. Or maybe not; I've tried only once so far, so I don't yet
have a statistically significant sample of trials using -noappend.
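For what it's worth, the core of such a timeout-and-restart wrapper
fits in a few lines of Python. This is only a hypothetical sketch of
what syscall.py does; the flag names, retry count, and exit codes
below are my assumptions, not the real tool:

```python
#!/usr/bin/env python3
# Hypothetical sketch of a timeout-and-restart wrapper like syscall.py.
# The -timeout/-exec flags and the retry policy are assumptions.
import subprocess
import sys

def run_with_timeout(argv, timeout_secs, max_attempts=3):
    """Run argv; if it exceeds timeout_secs, kill it and start over."""
    for attempt in range(1, max_attempts + 1):
        try:
            return subprocess.run(argv, timeout=timeout_secs).returncode
        except subprocess.TimeoutExpired:
            # subprocess.run() has already killed the child on timeout;
            # just loop and launch a fresh mksquashfs.
            print("attempt %d timed out, restarting" % attempt,
                  file=sys.stderr)
    return 1  # gave up after max_attempts

if __name__ == "__main__" and "-timeout" in sys.argv:
    # e.g.: syscall.py -timeout 3600 -exec mksquashfs SRC IMG -noappend
    timeout = int(sys.argv[sys.argv.index("-timeout") + 1])
    cmd = sys.argv[sys.argv.index("-exec") + 1:]
    sys.exit(run_with_timeout(cmd, timeout))
```

Note that each restart begins from scratch, which is exactly why the
half-written image from the killed run trips the superblock check
unless -noappend is passed.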
I should have mentioned before that Transparent Archivist's approach to
packing an archive onto the minimum number of CDs and DVDs involves
successive approximation, in which mksquashfs (or mkisofs, etc.) is
called repeatedly as the size of the portion of the archive is adjusted
up or down. In view of that, and assuming that mksquashfs runs reliably
either way (ignoring the fact that my Ubuntu 10.10 amd64 version
evidently doesn't run reliably when appending), shouldn't Transparent
Archivist just use -noappend anyway? Or is mksquashfs likely to run
faster, and to produce an equally-compact image, when appending to the
fs produced in the previous iteration, in such a
successive-approximation application? Your advice on this point would
be valuable, and I'll gladly pass it on to Dave Flater, the author of
Transparent Archivist.
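To make the cost question concrete, the successive-approximation loop
can be sketched as a bisection over how many files go into the image.
Note that with -noappend every probe rebuilds the image from scratch,
which is where the append-speed question bites. Everything here
(pack_by_bisection, build_image, the toy compression model) is my
illustration, not Transparent Archivist's actual code:

```python
# Hypothetical sketch of successive-approximation packing: adjust the
# slice of files handed to mksquashfs until the image just fits the
# medium.  build_image(n) stands in for running mksquashfs -noappend
# on the first n files and returning the resulting image size in bytes.
def pack_by_bisection(file_sizes, capacity, build_image):
    """Return the largest n such that an image of files[:n] fits."""
    lo, hi = 0, len(file_sizes)          # invariant: files[:lo] fits
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if build_image(mid) <= capacity:
            lo = mid                     # files[:mid] fits; try more
        else:
            hi = mid - 1                 # too big; back off
    return lo

# Toy stand-in: pretend compression halves the total size.
sizes = [100, 200, 300, 400, 500]
fits = pack_by_bisection(sizes, 450, lambda n: sum(sizes[:n]) // 2)
```

With k files, the loop calls mksquashfs about log2(k) times, so even a
modest per-run speedup from appending would add up; but only if
appending is reliable, which is exactly what's in question here.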
Thanks for your suggestion to try 4.1. I'll try that when I get to it.
One experimental variable at a time!
(2) I'm very impressed by your attention to detail, and by your overall
dedication to praxis. On behalf of people everywhere who don't like to
lose their data, please let me say, "THANK YOU FOR YOUR OUTSTANDING
CONTRIBUTION of squashfs !!!!" It is a very beautiful thing, really,
and much needed for all kinds of purposes.
(3) If it would be of any help to you in figuring out what's going on,
I'm prepared to contribute to you the use of the desktop of my archiving
machine. I see no reason not to trust you, personally, with such access
to my system and data. Your contributions to civilization are your
credentials, as far as I'm concerned. My archives are pretty demanding,
with a single DVD filesystem often containing hundreds of thousands of
small files, along with large files that are often around 500 MB. If
you want to take advantage of this offer, please send me your id_rsa.pub
and your preferred login name.
Steve Newcomb
+1 910 363 4032
On 02/20/2011 11:01 PM, Phillip Lougher wrote:
> -no-sparse isn't going to have any effect here, this was a workaround
> for some sparse file handling bugs that were fixed for Mksquashfs 3.4
> (i.e. sometime before Mksquashfs 4.0 which you're using). Likewise,
> the -no-lzma mess was due to a mismatch between Ubuntu's patched
> Mksquashfs (inherited from Debian) and Ubuntu's kernel Squashfs code
> which, not being derived from Debian, lacked those lzma patches. Again
> these are not relevant here because it is not a squashfs-tools/kernel
> code interoperability problem, and in any case that problem went away a
> couple of Ubuntu releases ago.
>
> So what is the problem? From your description it sounds like a multi-
> threading synchronisation problem. One extremely rare synchronisation
> bug, plus some other bugs, have come to light since Mksquashfs 4.0
> which could possibly cause Mksquashfs to get sufficiently confused to
> hang. On the other hand, this is the first Mksquashfs hang reported
> against Mksquashfs 4.0 in nearly two years since its release... and
> on that basis the threading code
> seems to be very stable and almost bug free. You may of course be
> extremely unlucky and have a hardware/source filesystem combination
> that's triggered the bugs fixed in Mksquashfs 4.1, or an unknown bug.
>
> I would suggest your first step is to download squashfs-tools 4.1 from
> squashfs.sourceforge.net, and see if the problem still occurs.
>
> If the problem still occurs then your second step should be to raise a
> bug on the Squashfs bug tracker (squashfs.sourceforge.net).
>
--
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/722168
Title:
mksquashfs hangs
--
ubuntu-bugs mailing list
[email protected]
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs