9. Oct 2015 11:26 by [email protected]:

> On 08.10.2015 17:48, Hazel Russman wrote:
>> On Thu, 8 Oct 2015 14:11:16 +0000 (UTC)
>> <[email protected]> wrote:
>>
>>> 8. Oct 2015 15:54 by [email protected]:
>>>
>>>
>>> > On 08.10.2015 15:25, Pierre Labastie wrote:
>>> >> On 08/10/2015 15:12, Jan Bilodjerič wrote:
>>> >>> On 08.10.2015 11:56, [email protected] wrote:
>>> >>>> 8. Oct 2015 08:37 by [email protected]:
>>> >>>>
>>> >>>>> Hello.
>>> >>>>> I am building LFS 7.7 on debian 64-bit system.
>>> >>>>> I am getting an unexpected failure when testing procps-ng-3.3.10
>>> >>>>> (details below).
>>> >>>>>
>>> >>>>> === lib tests ===
>>> >>>>>
>>> >>>>> Schedule of variations:
>>> >>>>> unix
>>> >>>>>
>>> >>>>> Running target unix
>>> >>>>> Using /tools/share/dejagnu/baseboards/unix.exp as board description
>>> >>>>> file for target.
>>> >>>>> Using /tools/share/dejagnu/config/unix.exp as generic interface
>>> >>>>> file for target.
>>> >>>>> Using ./config/unix.exp as tool-and-target-specific interface file.
>>> >>>>> Running ./lib.test/fileutils.exp ...
>>> >>>>> FAIL: test no space left on device
>>> >>>>> Running ./lib.test/strutils.exp ...
>>> >>>>
>>> >>>> Your partition is probably filled up at this point. Run "df -h" to
>>> >>>> check if you have any space left on the LFS partition. Another
>>> >>>> possibility is the test suite trying to create a large file
>>> >>>> temporarily.
>>> >>>>
>>> >>>> -- Willie
>>> >>>
>>> >>> Output of "df -h" is as follows:
>>> >>>
>>> >>> Filesystem      Size  Used Avail Use% Mounted on
>>> >>> /dev/sda4        16G  2.6G   13G  18% /
>>> >>> none            2.7G     0  2.7G   0% /dev
>>> >>> none            2.7G     0  2.7G   0% /dev/shm
>>> >>>
>>> >> I do not see your LFS partition above...
>>> >>
>>> >> Pierre
>>> >
>>> > Below is the output of "df -h" on my host:
>>> >
>>> > Filesystem      Size  Used Avail Use% Mounted on
>>> > udev             10M     0   10M   0% /dev
>>> > tmpfs           1.1G  9.1M  1.1G   1% /run
>>> > /dev/sda2        28G  716M   28G   3% /
>>> > /dev/sda9        19G  7.5G  9.8G  44% /usr
>>> > tmpfs           2.7G  156K  2.7G   1% /dev/shm
>>> > tmpfs           5.0M  4.0K  5.0M   1% /run/lock
>>> > tmpfs           2.7G     0  2.7G   0% /sys/fs/cgroup
>>> > /dev/sda4        16G  2.6G   13G  18% /mnt/lfs
>>> > /dev/sda5        55G   14G   39G  26% /home
>>> > /dev/sda6       922M  1.3M  858M   1% /tmp
>>> > /dev/sda1       276M   37M  226M  14% /boot
>>> > /dev/sda7       3.7G  815M  2.7G  24% /var
>>> > tmpfs           536M  8.0K  536M   1% /run/user/132
>>> > tmpfs           536M   12K  536M   1% /run/user/1000
>>> >
>>> > I am kind of a wet-behind-the-ears Linux user, so I don't know what I'm
>>> > missing.
>>> >
>>> > Jan
>>> >
>>> >
>>>
>>> Well, that looks good. Looking at the source of the test script, I'm not
>>> really sure what's supposed to happen, so I'm afraid I can't help you.
>>>
>>> --Willie
>>
>> Sometimes you get disk-full messages because you have run out of
>> inodes rather than space. It might be worth running "df -hi" to check
>> on that.
>>
>> --
>> H Russman
>
> Output of df -hi is:
>
> Filesystem     Inodes IUsed IFree IUse% Mounted on
> udev             667K   411  667K    1% /dev
> tmpfs            670K   649  669K    1% /run
> /dev/sda2         28M  9.5K   28M    1% /
> /dev/sda9        1.2M  320K  873K   27% /usr
> tmpfs            670K     8  670K    1% /dev/shm
> tmpfs            670K     5  670K    1% /run/lock
> tmpfs            670K    13  670K    1% /sys/fs/cgroup
> /dev/sda1         72K   328   72K    1% /boot
> /dev/sda4        1.0M   40K  985K    4% /mnt/lfs
> /dev/sda7        239K   15K  225K    6% /var
> /dev/sda5        3.5M   26K  3.5M    1% /home
> /dev/sda6         60K    28   60K    1% /tmp
> tmpfs            670K    11  670K    1% /run/user/132
> tmpfs            670K    18  670K    1% /run/user/1000
>
> Should I continue without finding out what is causing the test failure?
> I am not very comfortable with that. Any ideas?
>
> Jan
> --
>

Just continue; this package isn't glibc or GCC, so testing isn't all that
necessary. If this package breaks, it does not affect other packages on your
system.
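
For what it's worth, if you do want to dig into it: my guess (an assumption
on my part, not something I have verified against lib.test/fileutils.exp) is
that the "no space left on device" test provokes an ENOSPC error by writing
to /dev/full rather than by actually filling the disk. You can check whether
your build environment supports that with something like:

  ls -l /dev/full        # should be a character device with numbers 1, 7
  echo test > /dev/full  # expected: "write error: No space left on device"

If /dev/full is missing or wrongly created in the build environment, a test
like that would fail even though the compiled library itself is fine.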

To be honest, I never run the test suites for any package, not even glibc,
gcc or binutils. I think the test suites are highly overrated: the times I
have run a test suite, there were so many failures. If you look up the
failures, you'll notice some tests are destined to fail and don't indicate a
problem with your compiled version. The only tests I run are the four or
five sanity checks in the book that verify which linker is used; those tests
are really good and are the only ones I deem necessary.
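
For reference, the core of those checks looks roughly like this (paraphrased
from memory of the LFS book; consult your book version for the exact
commands and expected output):

  echo 'int main(){}' > dummy.c
  cc dummy.c -v -Wl,--verbose &> dummy.log
  readelf -l a.out | grep ': /lib'
  rm -v dummy.c a.out dummy.log

The readelf line should report the dynamic linker under /lib (for example
/lib/ld-linux.so.2 or /lib64/ld-linux-x86-64.so.2, depending on the
architecture); a path into /tools at that stage means the toolchain
adjustment went wrong.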

The problem is: what can you conclude after testing when, out of 500 tests,
48 fail? What does that mean?

The correct way to handle this is to build a complex system that
automatically checks test results for failures, kind of like what Debian
does: after a package is compiled, a good number of tests are run, and the
tests that fail are (almost) always marked with a TODO referencing a
yet-to-be-fixed bug report.
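
A minimal sketch of that idea in shell, assuming a log where failing tests
appear on lines starting with "FAIL:" and a hand-maintained, sorted
known-failures.txt listing the expected failures (both file names are
hypothetical):

  # Collect the names of the tests that failed in this run.
  grep '^FAIL:' test.log | sed 's/^FAIL: //' | sort > got-failures.txt

  # comm -13 prints only the lines unique to the second file, i.e. the
  # failures that are not already on the known-failures list.
  comm -13 known-failures.txt got-failures.txt > new-failures.txt

  if [ -s new-failures.txt ]; then
      echo "Unexpected test failures:" >&2
      cat new-failures.txt >&2
      exit 1
  fi

Only the failures that aren't already tracked then need a human to look at
them, which is a much easier question to answer than "48 out of 500 failed,
now what?".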

Regards,

Willie
