thomas wrote:
> On Saturday, 05.04.2014, at 19:17 -0500, Bruce Dubbs wrote:
>> Armin K. wrote:
>>> On 04/06/2014 01:19 AM, thomas wrote:
>>>> On Saturday, 05.04.2014, at 16:41 -0500, Bruce Dubbs wrote:
>>>>> Working with systemd, there seem to be lots of "learning" issues.
>>>>>
>>>>> I was trying to watch the boot sequence, but the screen clears and
>>>>> I get a login prompt. How to disable clearing the screen? Well,
>>>>> that's simple enough:
>>>>>
>>>>> mkdir -p /etc/systemd/system/getty@tty1.service.d
>>>>>
>>>>> cat > /etc/systemd/system/getty@tty1.service.d/noclear.conf << EOF
>>>>> [Service]
>>>>> TTYVTDisallocate=no
>>>>> EOF
>>>>
>>>> So you finally managed to start discovering all the real and fancy
>>>> benefits of systemd. It was really about time that a system came
>>>> along which forces the user to figure out how to deal with the
>>>> login prompt again. Think about what else you could do in the time
>>>> this new stuff keeps you from. Thank the lord; otherwise you could
>>>> even go meet friends and have a nice weekend. Ah, by default or not
>>>> (I don't know), systemd ignores the /tmp mountpoint in fstab and
>>>> handles it by itself somehow.
>>
>> That's not true for me. I have:
>>
>> /dev/sdb5 /tmp ext4 6%
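
For anyone poking at the same thing, this is roughly how I have been
checking what systemd actually does with the fstab entry. The generator
path and the tmpfiles override below are my assumptions about the usual
systemd layout, not something taken from the book:

# See whether a mount unit was generated from the fstab entry for /tmp;
# the fstab generator normally drops its units under /run/systemd/generator.
systemctl status tmp.mount
ls /run/systemd/generator/

# Periodic cleaning of /tmp is handled by systemd-tmpfiles. A file in
# /etc/tmpfiles.d overrides one of the same name in /usr/lib/tmpfiles.d,
# and an Age field of '-' should mean "never clean":
echo 'd /tmp 1777 root root -' > /etc/tmpfiles.d/tmp.conf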
> The support guys now need to ask: which init did you start? Maybe
> fstab is not ignored on your system; now you have to figure out why.
> The simplest case would be that there is no .mount file. Maybe in
> ArchLinux they install such a .mount file by default...

systemd seems to create .mount files on the fly from fstab. I did find
out that it does not wipe /tmp by default. Of course, /tmp does start
out empty if it is a tmpfs.

> That is what I call consistency, and it immediately shows me how
> urgent it was to invent .mount files. From now on, changing fstab and
> rebooting will not necessarily have any effect.

I don't think that will be needed unless the user wants /tmp on a
regular directory, and then I think it will be a one-liner:

ln -s /dev/null <dir>tmp.mount

>> I don't know if systemd wipes it or not. It may, and I'll need to
>> figure out how to disable that. So there are two issues here.

> See my comment about efforts below...
>
>>
>> (lots of blah blah of mine)
>>>>
>>>> I believe giving the user the option between systemd and sysvinit
>>>> is brilliant, and in times when systemd seems to become more
>>>> popular (for whatever reason), simply logical and valid. But I
>>>> really do not understand the point of having both in parallel on
>>>> the machine. The option should apply at build time.
>>
>> How better to compare and contrast than to use exactly the same
>> system with the exception of the init program and /etc/init.d?

> How often will this be done? And by whom? As a rule, I use computers -
> even with LFS installed - for productive work (more or less).
> Sometimes there is a bit more work on the system itself, let's say
> when a new version of KDE comes out (which is true this year), but in
> the end the system is there to be used, not to be booted in X vs. Y
> seconds. Maybe everyone except me is interested in rebooting into
> different init systems 24 hours a day; I'm not. I'm interested in a
> rock-solid foundation on which the next step (building BLFS packages)
> can be done.

If the system is used in production, nothing forces a user to switch
init systems. How many other files are installed that are never used?

> Btw, how better to compare than by watching two VMs start, each doing
> its own crystal-clean stuff? I assume that "who does the comparing" is
> answered by "mainly the LFS developers", right? They should be able to
> script the build process entirely, maybe with a parameter like
> --init-system={sysvinit,systemd}. Then you can compare two identical
> systems (except for IP address, init system, and probably hostname),
> even at the same time.

The VM route is a little misleading when looking at init systems, IMO.
Booting is a lot faster for all systems because the HW is emulated and
responds faster than real HW would.

> I wish all this effort had been put into the theory, and maybe even
> the practice, of package management - also an interesting area to
> learn something in. There was a perfectly working SysV version and
> also a well-maintained systemd version of the LFS book. Both are gone;
> a mixture came up instead. Too bad about the two.
>
>>> Yeah, I'd prefer the same approach, but that would make it way
>>> harder for BLFS. I.e., some packages in BLFS will be installed if
>>> building systemd, others will not, and having two udevs will cause
>>> twice the trouble for the post-LFS part (two udev sections, the
>>> gudev lib, etc.).
>>
>> Actually, most of the BLFS differences seem to be in the boot
>> scripts. Having 'make install-sshd' install to both systems is
>> almost trivial.
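
To make that concrete: a combined install target would mostly just copy
two files and let the running init decide which one matters. A rough
sketch - the source paths and file names here are made up for
illustration, not what BLFS actually ships:

# install the SysV bootscript and the systemd unit side by side
install -v -m754 blfs/bootscripts/sshd           /etc/rc.d/init.d/sshd
install -v -m644 blfs/systemd-units/sshd.service /lib/systemd/system/sshd.service

Only the init script is consulted on a sysvinit boot, and only the unit
file on a systemd boot, so the unused half just sits there.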
> Again, where is the benefit of polluting a system with stuff from the
> boot system I do not use?

If you really want to remove the init system that you are not using, it
wouldn't be that hard to write a small script to do it. It would take a
little research, but it would only need a few rm -rf commands.

  -- Bruce

--
http://linuxfromscratch.org/mailman/listinfo/lfs-dev
FAQ: http://www.linuxfromscratch.org/faq/
Unsubscribe: See the above information page