[gentoo-user] Re: emerge --oneshot portage

2018-03-30 Thread Kai Krakow
Am Tue, 13 Mar 2018 14:52:34 -0600 schrieb thelma:

> Thelma
> On 03/13/2018 12:11 PM, Neil Bothwick wrote:
>> On Tue, 13 Mar 2018 11:36:12 -0600, the...@sys-concept.com wrote:
>> 
>>> sys-apps/portage:0
>>>
>>>   (sys-apps/portage-2.3.16:0/0::gentoo, ebuild scheduled for merge)
>>> pulled in by sys-apps/portage (Argument)
>>>
>>>   (sys-apps/portage-2.3.6:0/0::gentoo, installed) pulled in by
>>> 
>>> sys-apps/portage[python_targets_pypy(-)?,python_targets_python2_7(-)?,python_targets_python3_4(-)?,python_targets_python3_5(-)?,-python_single_target_pypy(-),-python_single_target_python2_7(-),-python_single_target_python3_4(-),-python_single_target_python3_5(-)]
>>> required by (app-portage/gentoolkit-0.3.3:0/0::gentoo, installed)
>> 
>> Your old version of gentoolkit (and other packages mentioned in the full
>> output) is causing this. Trying to upgrade an out of date system
>> piecemeal can cause this. Just do an emerge -u @system and let portage
>> resolve these issues rather than trying to do it yourself.
> 
> I spoke too soon.  Now, when I try: emerge -u @system
> I'm getting an error as well.
> 
> emerge -ua @system
> 
> These are the packages that would be merged, in order:
> 
> Calculating dependencies... done!
> 
> WARNING: One or more updates/rebuilds have been skipped due to a dependency 
> conflict:
> 
> sys-libs/zlib:0
> 
>   (sys-libs/zlib-1.2.11-r1:0/1::gentoo, ebuild scheduled for merge) conflicts 
> with
> >=sys-libs/zlib-1.2.8-r1:0/0=[abi_x86_32(-),abi_x86_64(-)] required by 
> (media-libs/lcms-2.8-r1:2/2::gentoo, installed)
(Note the "0/0=" subslot required here.)

> 
> sys-libs/readline:0
> 
>   (sys-libs/readline-7.0_p3:0/7::gentoo, ebuild scheduled for merge) 
> conflicts with
> sys-libs/readline:0/0= required by (dev-lang/ruby-2.1.9:2.1/2.1::gentoo, 
> installed)
(And note the "0/0=" subslot required here, too.)

> !!! The following update(s) have been skipped due to unsatisfied dependencies
> !!! triggered by backtracking:
> 
> app-shells/bash:0

Emerge seems to be unable to resolve subslot changes properly and
doesn't automatically issue a rebuild for them. It's a headache I
regularly have to deal with when upgrading Qt.

As you can see from the output, the subslot changed from "0/0" (which
lcms was built against) to "0/1" (which your emerge request wants to
install). The same goes for readline and ruby.

As written in the other post, you can usually inject those reinstalls
manually with:

# emerge ... --reinstall-atoms={lcms,ruby}

If you are using color output, these are usually easily spotted as
they are the blue package names.
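
For the conflict quoted above, that would look something like this (just a
sketch: the two atoms are taken from the output above, the rest is whatever
you normally pass):

# emerge -ua @system --reinstall-atoms="media-libs/lcms dev-lang/ruby"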


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: upgrade to gcc-6.4.0

2018-03-30 Thread Kai Krakow
Am Thu, 15 Mar 2018 20:37:45 -0600 schrieb thelma:

> On 03/15/2018 08:26 PM, the...@sys-concept.com wrote:
>> I'm upgrading one of my older boxes to newer gcc-6.4.0
>> After switching to gcc-6.4.0 
>> source /etc/profile
>> 
>> running: emerge --ask --oneshot sys-devel/libtool
>> 
>> !!! Your current profile is deprecated and not supported anymore.
>> !!! Use eselect profile to update your profile.
>> !!! Please upgrade to the following profile if possible:
>> 
>> default/linux/amd64/17.0/desktop
>> 
>> You may use the following command to upgrade:
>> 
>> eselect profile set default/linux/amd64/17.0/desktop
>> 
>> 
>> These are the packages that would be merged, in order:
>> 
>> Calculating dependencies... done!
>> [ebuild U  ] sys-devel/automake-1.15.1-r2 [1.15-r2] USE="{-test%}" 
>> [ebuild   R] sys-devel/libtool-2.4.6-r3 
>> [blocks B  ] > blocking sys-devel/libtool-2.4.6-r3)
>> 
>>  * Error: The above package list contains packages which cannot be
>>  * installed at the same time on the same system.
>> --
>> I did not switch to new profile "17" yet.  I was trying to rebuild 
>> "sys-devel/libtool" first, but got a blocker.
> 
> I upgraded the "sys-apps/sandbox" and now it allows me to run:
> emerge --ask --oneshot sys-devel/libtool
> 
> So why didn't emerge do it automatically, upgrade the "sandbox" ?

Running "emerge --oneshot" doesn't consider reverse dependencies.

You can manually inject those with

# emerge --ask --oneshot sys-devel/libtool --reinstall-atoms=sys-apps/sandbox
 

If you want to inject more than one dependency, use

# ... --reinstall-atoms={a,b,c}

or

# ... --reinstall-atoms="a b c"


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: repair FAT-fs

2018-03-08 Thread Kai Krakow
Am Fri, 02 Mar 2018 22:17:02 -0700 schrieb thelma:

>>> I 've "dosfstools" installed but I can not run: dosfsck - it doesn't exist.
>> 
>> 
>> Try 'fsck.vfat' instead. There is also 'fsck.fat' or 'fsck.exfat', at least 
>> on my installation.
> 
> I've tried: 
> fsck.vfat -v -a -w /dev/sdb1
> fsck.fat 4.0 (2016-05-06)
> open: No such file or directory
> 
> This doesn't work either:
> fdisk /dev/sdb
> 
> Welcome to fdisk (util-linux 2.28.2).
> Changes will remain in memory only, until you decide to write them.
> Be careful before using the write command.
> 
> fdisk: cannot open /dev/sdb: No such file or directory
> 
> 
> Here is a dmesg:
> 
> [10930879.950647] usb-storage 8-1:1.0: USB Mass Storage device detected
> [10930879.950742] scsi host8: usb-storage 8-1:1.0
> [10930881.068652] scsi 8:0:0:0: Direct-Access Kingston DataTraveler G3  
> PMAP PQ: 0 ANSI: 4
> [10930881.068839] sd 8:0:0:0: Attached scsi generic sg2 type 0
> [10930882.544966] sd 8:0:0:0: [sdb] 30489408 512-byte logical blocks: (15.6 
> GB/14.5 GiB)
> [10930882.545153] sd 8:0:0:0: [sdb] Write Protect is off
> [10930882.545155] sd 8:0:0:0: [sdb] Mode Sense: 23 00 00 00
> [10930882.545283] sd 8:0:0:0: [sdb] No Caching mode page found
> [10930882.545284] sd 8:0:0:0: [sdb] Assuming drive cache: write through
> [10930882.567263]  sdb: sdb1
> [10930882.568351] sd 8:0:0:0: [sdb] Attached SCSI removable disk
> [10930887.640395] FAT-fs (sdb1): Volume was not properly unmounted. Some data 
> may be corrupt. Please run fsck.

This message is probably an artifact of what follows.

> [10930894.488038] sd 8:0:0:0: [sdb] tag#0 FAILED Result: hostbyte=DID_ERROR 
> driverbyte=DRIVER_SENSE
> [10930894.488041] sd 8:0:0:0: [sdb] tag#0 Sense Key : Hardware Error 
> [current] 
> [10930894.488043] sd 8:0:0:0: [sdb] tag#0 Add. Sense: No additional sense 
> information
> [10930894.488045] sd 8:0:0:0: [sdb] tag#0 CDB: Synchronize Cache(10) 35 00 00 
> 00 00 00 00 00 00 00

This USB thumb drive is quite obviously broken, or incompatible with your
USB controller. Try a different port or a different system. Otherwise,
throw it away and learn not to store important stuff on thumb drives.

Most USB thumb drives use terrifyingly cheap and weak storage chips,
sometimes supporting only hundreds of write cycles. Such a drive is going
to break sooner rather than later, especially if you write a lot or leave
it in a drawer without connection to a power source for weeks or months.

Some sticks are even crafted to support heavy write cycles only in the
area where the FAT table is expected to be. Reformatting them, or putting
anything other than FAT on them, can have catastrophic consequences after
a short time.

I was able to completely destroy some cheap USB sticks within a few weeks
by putting f2fs on them.


> [10930894.497472] usb 8-1: USB disconnect, device number 106
> [10932073.936844] usb 3-1: USB disconnect, device number 19

This message means the device disconnected: its device node is gone.

> [10932092.353300] usb 3-1: new high-speed USB device number 20 using ehci-pci
> [10932092.473483] usb 3-1: New USB device found, idVendor=1043, idProduct=8012
> [10932092.473486] usb 3-1: New USB device strings: Mfr=1, Product=2, 
> SerialNumber=0
> [10932092.473487] usb 3-1: Product: Flash Disk
> [10932092.473488] usb 3-1: Manufacturer: Generic


In the future, please make sure to post complete logs right from the
beginning, without hiding the important stuff. ;-)


BTW: dosfsck is, AFAIR, part of the mtools package. On a modern system,
use the fsck.{vfat,fat} equivalents. The message you got tells you
that the device was not found, not that the tool was missing:

> I've tried: 
> fsck.vfat -v -a -w /dev/sdb1
> fsck.fat 4.0 (2016-05-06)
  
This comes from the tool starting, so it's there.

> open: No such file or directory
  ^^
This is an error message from the tool, it could not open the device.
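
In other words, first make sure the device node actually exists again, then
run the repair. A minimal sketch (the device name is just the one from your
dmesg, double-check it before writing anything):

# lsblk /dev/sdb            # does the kernel currently see the stick?
# fsck.vfat -v -a /dev/sdb1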


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: /var/tmp on tmpfs

2018-02-10 Thread Kai Krakow
Am Sat, 10 Feb 2018 21:20:16 + schrieb Wol's lists:

> On 10/02/18 20:06, Rich Freeman wrote:
>> On Sat, Feb 10, 2018 at 2:52 PM, Kai Krakow <hurikha...@gmail.com>
>> wrote:
>>> Am Sat, 10 Feb 2018 19:38:56 + schrieb Wols Lists:
>>>
>>>> On 10/02/18 18:56, Kai Krakow wrote:
>>>>> role and /usr takes the role of /, and /home already took the role
>>>>> of /usr (that's why it's called /usr, it was user data in early
>>>>> unix). The
>>>>
>>>> Actually no, not at all. /usr is not short for USeR, it's an acronym
>>>> for User System Resources, which is why it contains OS stuff, not
>>>> user stuff. Very confusing, I know.
>>>
>>>  From
>>>  https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/usr.html:
>>>
>>>> In the original Unix implementations, /usr was where the home
>>>> directories of the users were placed (that is to say, /usr/someone
>>>> was then the directory now known as /home/someone). In current
>>>> Unices, /usr is where user-land programs and data (as opposed to
>>>> 'system land' programs and data) are. The name hasn't changed, but
>>>> it's meaning has narrowed and lengthened from "everything user
>>>> related" to "user usable programs and data". As such, some people may
>>>> now refer to this directory as meaning 'User System Resources' and
>>>> not 'user' as was originally intended.
>>>
>>> So, actually the acronym was only invented later to represent the new
>>> role of the directory. ;-)
>>>
>>>
>> A bit more of history here:
>> 
>> http://www.osnews.com/story/25556/Understanding_the_bin_sbin_usr_bin_usr_sbin_Split
>> 
> Fascinating. And I made a typo, which is interesting too - I always knew
> it as Unix System Resources - typing "user" was a mistake ... I wonder
> how much weird info is down to mistakes like that :-)

You should trust your hidden secret skills more... :-D


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: /var/tmp on tmpfs

2018-02-10 Thread Kai Krakow
Am Sat, 10 Feb 2018 15:06:06 -0500 schrieb Rich Freeman:

> On Sat, Feb 10, 2018 at 2:52 PM, Kai Krakow <hurikha...@gmail.com>
> wrote:
>> Am Sat, 10 Feb 2018 19:38:56 + schrieb Wols Lists:
>>
>>> On 10/02/18 18:56, Kai Krakow wrote:
>>>> role and /usr takes the role of /, and /home already took the role of
>>>> /usr (that's why it's called /usr, it was user data in early unix).
>>>> The
>>>
>>> Actually no, not at all. /usr is not short for USeR, it's an acronym
>>> for User System Resources, which is why it contains OS stuff, not user
>>> stuff. Very confusing, I know.
>>
>> From https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/usr.html:
>>
>>> In the original Unix implementations, /usr was where the home
>>> directories of the users were placed (that is to say, /usr/someone was
>>> then the directory now known as /home/someone). In current Unices,
>>> /usr is where user-land programs and data (as opposed to 'system land'
>>> programs and data) are. The name hasn't changed, but it's meaning has
>>> narrowed and lengthened from "everything user related" to "user usable
>>> programs and data". As such, some people may now refer to this
>>> directory as meaning 'User System Resources' and not 'user' as was
>>> originally intended.
>>
>> So, actually the acronym was only invented later to represent the new
>> role of the directory. ;-)
>>
>>
> A bit more of history here:
> 
> http://www.osnews.com/story/25556/Understanding_the_bin_sbin_usr_bin_usr_sbin_Split

Thanks, nice reading.

I'm looking forward to Gentoo usrmerge. While it's supported with the
17.1 profile, I just don't want to try it yet. There are probably lots
of bugs around in packages.

Although it's tempting to just symlink /bin, /sbin and /lib* to their
/usr counterparts.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: /var/tmp on tmpfs

2018-02-10 Thread Kai Krakow
Am Sat, 10 Feb 2018 19:38:56 + schrieb Wols Lists:

> On 10/02/18 18:56, Kai Krakow wrote:
>> role and /usr takes the role of /, and /home already took the role of
>> /usr (that's why it's called /usr, it was user data in early unix). The
> 
> Actually no, not at all. /usr is not short for USeR, it's an acronym for
> User System Resources, which is why it contains OS stuff, not user
> stuff. Very confusing, I know.

From https://www.tldp.org/LDP/Linux-Filesystem-Hierarchy/html/usr.html:

> In the original Unix implementations, /usr was where the home
> directories of the users were placed (that is to say, /usr/someone was 
> then the directory now known as /home/someone). In current Unices, /usr 
> is where user-land programs and data (as opposed to 'system land'
> programs and data) are. The name hasn't changed, but it's meaning has 
> narrowed and lengthened from "everything user related" to "user usable 
> programs and data". As such, some people may now refer to this 
> directory as meaning 'User System Resources' and not 'user' as was 
> originally intended.

So, actually the acronym was only invented later to represent the new 
role of the directory. ;-)


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: /var/tmp on tmpfs

2018-02-10 Thread Kai Krakow
Am Thu, 08 Feb 2018 19:02:10 -0500 schrieb Rich Freeman:

> On Thu, Feb 8, 2018 at 6:18 PM, Wol's lists 
> wrote:
>>
>> /var/tmp is defined as the place where programs store stuff like crash
>> recovery files. Mounting it tmpfs is going to screw up any programs
>> that reply on that *defined* behaviour to recover after a crash.
>>
>>
> Care to cite an example of such a program in the Gentoo repo?  I
> certainly can't think of any, and I've been running with /var/tmp on
> tmpfs for over a decade.
> 
> /var/cache strikes me as a much better place for some kind of recovery
> file.  While /var/tmp is typically less volatile than /tmp, it isn't
> really something that software should just rely on.

I don't think that /var/cache is a better choice here. Cache directories 
should be treated as data that could be rebuilt at ANY time. That's 
certainly not true for crash dump files. They simply don't belong there.

Thus, crash dumps should go to non-volatile directories like /var/tmp.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: /var/tmp on tmpfs

2018-02-10 Thread Kai Krakow
Am Fri, 09 Feb 2018 12:30:21 +0200 schrieb gevisz:

> 2018-02-09 10:11 GMT+02:00 Neil Bothwick :
>> On Thu, 8 Feb 2018 23:18:19 +, Wol's lists wrote:
>>
>>> > More specifically, /var/tmp is traditionally supposed to be
>>> > non-volatile (across reboots).
>>> >
>>> > Comparatively the contents of /tmp can be volatile (across reboots).
>>> >
>>> > I would advise against mounting /var/tmp on tmpfs.
>>> >
>>> EMPHATICALLY YES.
>>>
>>> /tmp is defined as being volatile - stuff can disappear at any time.
>>>
>>> /var/tmp is defined as the place where programs store stuff like crash
>>> recovery files. Mounting it tmpfs is going to screw up any programs
>>> that reply on that *defined* behaviour to recover after a crash.
>>>
>>> Mounting /var/tmp/portage as tmpfs is perfectly fine as far as I know
>>> -
>>> I do it myself.
>>
>> Why mess around with another tmpfs? Just set PORTAGE_TMPDIR="/tmp" in
>> make.conf. Job done!
> 
> It is an interesting idea. But why it is not done by default then?
> 
> Can somebody think of a situation when it should not be done?
> 
> My /tmp is not on tmpfs currently. Only /run
> 
> May be, it is not a good idea to put /mnt on tmpfs at the time of
> Spector and Meltdown?

Portage doesn't run off /tmp by default because the general recommendation
is to mount /tmp with noexec, and build scripts won't be able to run that
way.
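
You can quickly check whether that applies to your box before pointing
PORTAGE_TMPDIR at /tmp (a sketch; look for "noexec" in the output):

$ findmnt -no OPTIONS /tmp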


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: /var/tmp on tmpfs

2018-02-10 Thread Kai Krakow
Am Fri, 09 Feb 2018 10:58:35 + schrieb Neil Bothwick:

> On Fri, 09 Feb 2018 10:12:01 +, Peter Humphrey wrote:
> 
>> > Why mess around with another tmpfs? Just set PORTAGE_TMPDIR="/tmp" in
>> > make.conf. Job done!
>> 
>> Acting on the advice of various Gentoo guides, I have this:
>> 
>> # grep tmp /etc/fstab
>> tmpfs  /var/tmp/portage  tmpfs  noatime,uid=portage,gid=portage,mode=0775  0 0
>> tmpfs  /tmp              tmpfs  noatime,nosuid,nodev,noexec,mode=1777      0 0
>> 
>> Are you saying I don't gain anything from it?
> 
> I can't see any benefit from the added complexity. If you want portage
> to use a tmpfs for its temporary directory, why not use one that is
> already there?

The point here is having /tmp as noexec. That's not exactly what I'd call 
added complexity.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: /var/tmp on tmpfs

2018-02-10 Thread Kai Krakow
Am Thu, 08 Feb 2018 14:50:31 -0500 schrieb Rich Freeman:

> On Thu, Feb 8, 2018 at 2:17 PM, Dale  wrote:
>> As someone else pointed out, if you start using swap, that generally
>> defeats the purpose of tmpfs.
>>
>>
> I'll just add one thing to this, which I've probably already said ages
> ago:
> 
> In an ideal world swap would STILL be better than building on disk,
> because it gives the kernel fewer constraints around what gets written
> to disk.
> 
> Anything written to disk MUST end up on the disk within the dirty
> writeback time limit.  Anything written to tmpfs doesn't ever have to
> end up on disk, and if it is swapped the kernel need not do it in any
> particular timeframe.  Also, the swapfile doesn't need the same kinds of
> integrity features as a filesystem, which probably lowers the cost of
> writes somewhat (if nothing else after a reboot there is no need to run
> tmpreaper on it).
> 
> So, swapping SHOULD still be better than building on disk, because any
> object file that doesn't end up being swapped is a saved disk IO, and
> the stuff that does get swapped will hopefully get written at a more
> opportune time vs forcing the kernel to stop what is doing after 30s (by
> default) to make sure that something gets written no matter what (if it
> wasn't deleted before then).

I can only second this.

> That's all in an ideal world.  In practice I've never found the kernel
> swapping algorithms to be the best in the world, and I've seen a lot of
> situations where it hurts.  I run without a swapfile for this reason. 
> It pains me to do it because I can think of a bunch of reasons why this
> shouldn't help, and yet for whatever reason it does.

I really prefer having inactive things swapped out over discarding cache
from memory. But since kernel 4.9 this no longer works so well, and I'm
still seeking the reason. For that reason, building in tmpfs is no longer
as appealing an option as it was before.

Otherwise, I was quite happy with swap behavior, exactly for the reasons 
you initially outlined.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: /var/tmp on tmpfs

2018-02-10 Thread Kai Krakow
Am Thu, 08 Feb 2018 16:42:23 -0700 schrieb Grant Taylor:

> On 02/08/2018 03:32 PM, gevisz wrote:
>> In this case it would be nice to hear a reason.
> 
> I think the reason probably goes back a number of years.  When /tmp was
> made volatile (ram / swap backed) there was a need for non-volatile temp
> space.  Thus, /var/tmp was created as non-volatile specifically for the
> purpose to of surviving across reboots.  (At least that's my
> understanding.)

I don't think this is the reason. Both directories have been there for
ages, long before anyone even considered putting them on RAM disks.
Historically, there was even /usr/tmp.

The point here is that /var is "variable" data in contrast to "read-only" 
data on the other partitions. This makes /var a candidate for persistent 
OS-state. You could simply keep / and /usr on volatile storage (or even 
read-only storage) and all your variable, non-volatile data would be in
/var.

Having /tmp on tmpfs is then a logical consequence of this because / 
could be read-only. Also, /etc should be symlinked to /var/etc to enable 
and keep configuration changes over reboots, although this could also be 
populated by a boot-strapping process (e.g., IP configuration).

This is especially interesting for container-based, dynamic cloud servers
which spawn and disappear on demand; you just need to keep the non-volatile
state directory /var. Usually, such systems start with an empty /etc
directory which is populated by a boot-strapping process.

Following that idea, /var/tmp should also be non-volatile.

Taking this idea further, everything related to the OS image should move
to /usr (keyword: "usrmerge"); then /, which contains /var and /etc, could
be writable and non-volatile, /usr would contain boot-strapping
configuration and be read-only, and /etc would be populated on first boot.
The idea of /tmp on tmpfs is simply kept here.

The idea of having everything boot-related in / hasn't applied for years
(and wasn't the original idea anyway). These days, the initramfs takes this
role and /usr takes the role of /, and /home already took the role of /usr
(that's why it's called /usr: it was user data in early Unix). The split we
have today is a result of size constraints of early systems, when the OS no
longer fit on one disk and / became the early boot environment (the
initramfs of today). Today the OS uses dynamic linking, whereas most of it
was statically linked in the early days, and thus there are dependencies
between / and /usr that cannot be untangled easily, which makes the split
difficult to maintain for early boot environments. So everything might as
well move to /usr, and / can become a non-volatile state partition
containing /var and /etc, with early boot living in the initramfs (to set
up the /usr mount).


>> That's why I have asked if it does not harm.
> 
> I don't think it will actually harm the Operating System.  Some daemons
> may get cross if files they know that they created no longer exist after
> a reboot.
> 
> Though things should gracefully handle the absence of such files and
> re-create them.
> 
> The biggest Ah Ha moment I ever saw someone have was when they spent
> more than an hour getting a Solaris patch cluster to the machine,
> extracted it to /tmp, rebooted into single user mode, and went where the
> 

[gentoo-user] Re: /var/tmp on tmpfs

2018-02-10 Thread Kai Krakow
Am Thu, 08 Feb 2018 19:11:48 +0200 schrieb gevisz:

> I never used tmpfs for portage TMPDIR before and now decided to give it
> a try.
> 
> I have 8GB of RAM and 12GB of swap on a separate partition.
> 
> Do I correctly understood
> https://wiki.gentoo.org/wiki/Portage_TMPDIR_on_tmpfs that I can safely
> set in the fstab the size of my tmpfs to 12GB so that the chromium could
> be emerged in tmpfs (using the swap) without the need to set
> notmpfs.conf for chromium and the likes.
> 
> And I am going to set the whole /var/tmp/ on tpmfs instead of just
> /var/tmp/portage Is it ok?

I'm using systemd automounts to discard /var/tmp/portage when there is no
longer a user of this directory. It has one caveat: If you want to
inspect build problems, you should keep a shell running inside.

Here's the configuration:

$ fgrep portage /etc/fstab
none /var/tmp/portage tmpfs noauto,size=150%,uid=250,gid=250,mode=0775,x-systemd.automount 0 0

$ cat /etc/tmpfiles.d/portage.conf
D /var/tmp/portage 0775 portage portage
x /var/tmp/portage

I used ccache before but building in tmpfs is much faster.

I'm currently experimenting with tuning vm.watermark_scale_factor, as the
kernel tends to run into swap storms with very high desktop latencies during
package builds that consume a lot of tmpfs. This is behavior I've been
seeing since kernel 4.9; it worked better before.

As such, I think it makes the most sense to put only /var/tmp/portage on
tmpfs. Programs may expect /var/tmp to be non-volatile across reboots.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Forced rebuild of a package...how?

2018-02-09 Thread Kai Krakow
Am Fri, 09 Feb 2018 11:39:25 +0100 schrieb Kai Krakow:

> Am Sun, 04 Feb 2018 05:20:03 +0100 schrieb tuxic:
> 
>> after installing linux-4.15.1 (downloaded from kernel.org) I want to
>> reinstall (beside others) nvidia drivers.
>> 
>> Emerge told me:
>> |>emerge nvidia-drivers
>> |Calculating dependencies... done!
>> |>>> Jobs: 0 of 0 complete   Load avg: 1.05, 
>> 0.65, 0.34
>> |>>> Auto-cleaning packages...
>> |
>> |>>> No outdated packages were found on your system.
>> 
>> 
>> That is valid for the previous installed kernel...but not for the one 
>> 
>> This was updated just before
>> Sun Feb  4 04:21:46 2018 <<< sys-apps/portage-2.3.23
>> Sun Feb  4 04:21:51 2018 >>> sys-apps/portage-2.3.24
>> 
>> My make.conf has this options:
>> EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=4 --changed-deps-report=n 
>> --changed-deps"
>> 
>> 
>> Thanks for any help in advance!
> 
> I'm using this script to automate the process:
> 
> $ cat /etc/kernel/postinst.d/70-emerge-module-rebuild
> #!/bin/bash
> exec env -i PATH=$PATH /usr/bin/emerge -1v --usepkg=n @module-rebuild
> 
> (I don't know why "env -i PATH=$PATH" is needed but otherwise it won't
> work correctly sometimes)

After reading the rest of the thread, it is now:

$ cat /etc/kernel/postinst.d/70-emerge-module-rebuild
#!/bin/bash
exec env -i PATH=$PATH /usr/bin/emerge -1v --usepkg=n --selective=n @module-rebuild


> You could add "--changed-deps=n" there because otherwise it won't play
> well with using "--changed-deps" as a default.
> 
> Now, when you run "make modules_install install", it will automatically
> rebuild kernel modules.
> 
> BTW, I have another script to also install the kernel in systemd-boot
> which also rebuilds initramfs (dracut here):
> 
> $ cat /etc/kernel/postinst.d/zz-systemd-boot
> #!/bin/bash
> /usr/bin/kernel-install remove $1 $2
> /usr/bin/kernel-install add $1 $2





-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Forced rebuild of a package...how?

2018-02-09 Thread Kai Krakow
Am Sun, 04 Feb 2018 05:20:03 +0100 schrieb tuxic:

> after installing linux-4.15.1 (downloaded from kernel.org) I want to
> reinstall (beside others) nvidia drivers.
> 
> Emerge told me:
> |>emerge nvidia-drivers
> |Calculating dependencies... done!
> |>>> Jobs: 0 of 0 complete   Load avg: 1.05, 
> 0.65, 0.34
> |>>> Auto-cleaning packages...
> |
> |>>> No outdated packages were found on your system.
> 
> 
> That is valid for the previous installed kernel...but not for the one 
> 
> This was updated just before
> Sun Feb  4 04:21:46 2018 <<< sys-apps/portage-2.3.23
> Sun Feb  4 04:21:51 2018 >>> sys-apps/portage-2.3.24
> 
> My make.conf has this options:
> EMERGE_DEFAULT_OPTS="--jobs=4 --load-average=4 --changed-deps-report=n 
> --changed-deps"
> 
> 
> Thanks for any help in advance!

I'm using this script to automate the process:

$ cat /etc/kernel/postinst.d/70-emerge-module-rebuild
#!/bin/bash
exec env -i PATH=$PATH /usr/bin/emerge -1v --usepkg=n @module-rebuild

(I don't know why "env -i PATH=$PATH" is needed, but without it, things
sometimes won't work correctly)


You could add "--changed-deps=n" there because otherwise it won't play
well with using "--changed-deps" as a default.

Now, when you run "make modules_install install", it will automatically
rebuild kernel modules.

BTW, I have another script to also install the kernel in systemd-boot
which also rebuilds initramfs (dracut here):

$ cat /etc/kernel/postinst.d/zz-systemd-boot
#!/bin/bash
/usr/bin/kernel-install remove $1 $2
/usr/bin/kernel-install add $1 $2


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: old kernels are installed during the upgrade

2018-01-02 Thread Kai Krakow
Am Tue, 02 Jan 2018 19:26:44 + schrieb Stroller:

>> On 2 Jan 2018, at 11:54, Kruglov Sergey  wrote:
>> 
>> Now I have  gentoo-sources-4.14.8-r1 installed.
>> After  "emerge --ask --update --deep --with-bdeps=y --newuse @world"
>> command emerge installs old kernel in NS (after first update 4.12.12,
>> after second update 4.9.49-r1).
>> How can I fix it?
>> There is sys-kernel/gentoo-sources in my world set.
> 
> Remove sys-kernel/gentoo-sources from your world file - I believe you
> can do this using the emerge command, but am unsure of the right syntax;
> you can just edit /var/lib/portage/world and delete the appropriate
> line.

It is "emerge --deselect ...".


> Now `emerge -n =sys-kernel/gentoo-sources-4.14.8-r1` - "This option can
> be used to update the world file without  rebuilding the packages."

I don't think this is how it works. While technically correct, the
outcome is different from what you're trying to achieve.


> This pins your kernel version at 4.14.8-r1 and you can update when, in
> future, you decide it's time to update your kernel, without being nagged
> about it every time a new version is release or you emerge world.

The equal sign doesn't pin versions, at least not as far as I remember.
Packages are pinned by slot in the world file. It may just be a
coincidence that the version you selected happens to be the only one in
its slot, too.

If you intend to pin a package, either emerge it by slot, or use
package.mask and package.unmask, as sketched below.
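
For illustration, a rough sketch (the slot/version is just the one from
this thread). To record the kernel in world by slot instead of by exact
version:

# emerge -n sys-kernel/gentoo-sources:4.14.8-r1

Or to mask anything newer:

# echo ">sys-kernel/gentoo-sources-4.14.8-r1" >> /etc/portage/package.mask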


> For this reason it's always best to emerge kernels with an equals sign,
> pinning them at some specific version, IMO.

Makes no sense if my above answer is correct.


> This suggestion may provoke responses that the kernel is important and
> you should update it to ensure you get security updates - look at the
> attack vectors, you're probably sitting behind a NAT router, with very
> few ports exposed to the internet.

The attack vector is probably not the network-facing surface of the
kernel... which makes your argument misleading at best.

It is more likely that your kernel is attacked by something you did in
the browser, or by a vulnerable server running on one of the "few ports
exposed", and that is the attack vector: A local privilege escalation or
buffer overflow allows the attacker to gain control of a process, and
only then is the kernel attacked.

This is why you should first keep your software updated and secured, and
for the rest just stick to stable gentoo-sources.

Keep in mind that gentoo-sources back-ports some security fixes early.
Also, stable mostly uses LTS kernels, which have long-term security
maintenance.


> It's adequate to update your kernel every 3 months.

It's adequate to update your password every 3 months.

It's adequate to update your software every 3 months.

Really? No...

It's adequate to update your software when a security hole has been fixed -
right away, not two or three months later...

It gives a false impression of safety if you recommend such things.


Just my two cents... ;-)


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: error while loading shared libraries: libstdc++.so.6:

2018-01-02 Thread Kai Krakow
Am Tue, 02 Jan 2018 10:11:24 -0700 schrieb thelma:

> I was installing some brother driver simply run:

Don't do this... You should really not "install" software with tar.

You're just unpacking an archive, overwriting everything that
might be in its way.


> # tar zxvf ./hl5370dwlpr-2.0.3-1.i386.tar.gz -C /
> ./
> ./usr/
> ./usr/local/
> ./usr/local/Brother/
> ./usr/local/Brother/lpd/
> ./usr/local/Brother/lpd/psconvert2
> ./usr/local/Brother/lpd/filterHL5370DW
> ./usr/local/Brother/lpd/rawtobr2
> ./usr/local/Brother/inf/
> ./usr/local/Brother/inf/setupPrintcap
> ./usr/local/Brother/inf/paperinf
> ./usr/local/Brother/inf/brHL5370DWfunc
> ./usr/local/Brother/inf/braddprinter
> ./usr/local/Brother/inf/brHL5370DWrc
> ./usr/lib/
  ^^
This one replaces the /usr/lib symlink with an empty directory.

> ./usr/lib/libbrcomplpr2.so

Move this file to lib64 instead.

Now:

# rmdir /usr/lib && ln -s lib64 /usr/lib
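
Put together, a minimal recovery sketch (assuming the stray Brother library
is the only thing that ended up in the new /usr/lib directory):

# mv /usr/lib/libbrcomplpr2.so /usr/lib64/
# rmdir /usr/lib && ln -s lib64 /usr/lib
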

> ./usr/bin/
> ./usr/bin/brprintconflsr2
> ./var/
> ./var/spool/
> ./var/spool/lpd/
> ./var/spool/lpd/HL5370DW/
> 
> # tar zxvf ./cupswrapperHL5370DW-2.0.4-1.i386.tar.gz -C /
> ./
> ./usr/
> ./usr/local/
> ./usr/local/Brother/
> ./usr/local/Brother/cupswrapper/
> ./usr/local/Brother/cupswrapper/brcupsconfig3
> ./usr/local/Brother/cupswrapper/cupswrapperHL5370DW-2.0.4
> 
> Now, I can not run any emerge, eix etc command, I'm getting: 
> eix: error while loading shared libraries: libstdc++.so.6: cannot open shared 
> object file: No such file or directory
> 
> bash: emerge: command not found
> 
> However libstdc++ exists: 
> 
> locate libstdc++.so.6

Locate doesn't necessarily tell you that... It just tells you
that the file existed when the locate db was built (usually
by a cronjob at night).

See "man locate".


> /usr/lib64/gcc/x86_64-pc-linux-gnu/5.4.0/libstdc++.so.6
> /usr/lib64/gcc/x86_64-pc-linux-gnu/5.4.0/libstdc++.so.6.0.21
> /usr/lib64/gcc/x86_64-pc-linux-gnu/5.4.0/32/libstdc++.so.6
> /usr/lib64/gcc/x86_64-pc-linux-gnu/5.4.0/32/libstdc++.so.6.0.21
> /usr/lib64/gcc/x86_64-pc-linux-gnu/6.4.0/libstdc++.so.6
> /usr/lib64/gcc/x86_64-pc-linux-gnu/6.4.0/libstdc++.so.6.0.22
> /usr/lib64/gcc/x86_64-pc-linux-gnu/6.4.0/32/libstdc++.so.6
> /usr/lib64/gcc/x86_64-pc-linux-gnu/6.4.0/32/libstdc++.so.6.0.22
> /usr/share/gdb/auto-load/usr/lib64/gcc/x86_64-pc-linux-gnu/5.4.0/libstdc++.so.6.0.21-gdb.py
> /usr/share/gdb/auto-load/usr/lib64/gcc/x86_64-pc-linux-gnu/5.4.0/32/libstdc++.so.6.0.21-gdb.py
> /usr/share/gdb/auto-load/usr/lib64/gcc/x86_64-pc-linux-gnu/6.4.0/libstdc++.so.6.0.22-gdb.py
> /usr/share/gdb/auto-load/usr/lib64/gcc/x86_64-pc-linux-gnu/6.4.0/32/libstdc++.so.6.0.22-gdb.py
> 
> lib -> lib64 (exist in "/")

But it no longer exists in /usr, due to your brute-force "installation".


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Running Gentoo in VirtualBox

2018-01-01 Thread Kai Krakow
Am Sun, 31 Dec 2017 12:40:43 -0700 schrieb thelma:

> I'm using Gentoo as a server (so it runs 24/7) Apache, Asterisk, Hylafax
> etc.
> 
> What are my chances to run Gentoo as a VirtualBox?
> 
> Installing Gentoo takes me 2-3 days (basic setup min., I don't do it
> every month so I have to go through Gentoo handbook); to configure it
> the way I want it takes another week or two.

I recommend having a template Gentoo system with the basic setup and
basic world set you need. Keep it updated once in a while. If you need
to deploy a new system, simply clone from it.

I'm doing something similar with our servers:

Upon deploying a new server, I just clone the most similar existing one,
excluding the data partitions. Then I clean up world, make.conf and
passwd/groups, reset the machine ID in /etc, adjust the IP configuration,
and run "emerge --depclean" followed by a world upgrade.

This can be made easier if you plan your mount volumes correctly, that
is, having mysql, home, hylafax and the like on separate partitions. It
allows you to run rsync without crossing mount borders.

This has worked for me for 10+ years of running Gentoo-based servers now.


> So I was thinking,  if I run Windows 10 and configure Gentoo as a
> virtual box it might be easier to transfer it from one system to
> another, in case there is a HD failure (like it just happened to me
> yesterday).
> 
> Any input will be appreciated.

I usually virtualize systems by first preparing the kernel to have 
support for VirtualBox or VMware, then installing the proper tools.

After this, prepare the VM and empty disk images, boot sysrescuecd in it 
(which is actually Gentoo based), format and mount your empty disk images 
in it (within the final structure, so you pre-create mount points), then 
use rsync over SSH (as root) to transfer all files. You can do that while 
the source system is still online.

Keep in mind not to rsync special directories like /sys or /proc from the 
source. Create empty mount points instead.
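
A rough sketch of such a transfer (the host name and the target mount point
are made up; run it from the VM booted into sysrescuecd, with the new disk
images mounted under /mnt/target):

# rsync -aHAXv --numeric-ids \
    --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' \
    --exclude='/run/*' --exclude='/tmp/*' \
    root@source-host:/ /mnt/target/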

Then try to boot the system offline (do not yet connect it to virtual 
networking), see if it boots. If not, try to fix it and record the steps. 
You can also chroot into the cloned system from sysrescuecd.

Now it's time for a resync. Stop services on the source system so it is
quasi-offline; just keep sshd running. Now rsync again. It will copy the
differences in just a few seconds to minutes. Maybe reapply the previously
recorded fixing steps, then boot the cloned system and shut the other
system down.

If source and destination are the same hardware, you may want to use an
external drive as intermediate rsync storage. But then you have an
offline system you cannot boot in case of failure, and it's a bit harder
when you do it for the first time. But it also works great once you are
familiar with the process.


> I know I might have problem with Serial port and receiving faxes via
> HylaFax as they are time sensitive.

The serial port usually does work in pass-through mode; I've done exactly
that previously with HylaFax.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: depclean confusion

2017-12-29 Thread Kai Krakow
Am Tue, 19 Dec 2017 23:43:35 +0200 schrieb Nikos Chantziaras:

> On 19/12/17 23:18, Walter Dnes wrote:
>>Finishing off an install, and running "emerge --depclean"
>> 
>> =
> Assigning files to packages...
>>   * In order to avoid breakage of link level dependencies, one or more
>>   * packages will not be removed. This can be solved by rebuilding the
>>   * packages that pulled them in.
>>   *
>>   *   sys-libs/db-5.3.28-r2 pulled in by:
>>   * sys-apps/iproute2-4.14.1-r1 needs libdb-5.3.so *
> Adding lib providers to graph...
>> =
>> 
>> 1) I've rebuilt iproute2
>> 
>> [ebuild   R] sys-apps/iproute2-4.14.1-r1::gentoo  USE="-atm -berkdb
>> -iptables -ipv6 -minimal (-selinux)" 0 KiB
> 
> Unmerge it anyway and then rebuild iproute2. It seems like an automagic
> dep. It should not be using db when the berkdb USE flag is not set.
> Since it does, it's a bug.

This is most of the time caused by configure scripts auto-detecting the
presence of certain libs. Such behavior should be disabled; it indicates
a missing explicit disable/enable in the ebuild. You should report it.


> However, rebuilding it after unmerging db should fix it.
> 
> With that being said, do a "quickpkg sys-libs/db" first to get a tarball
> backup, just to be safe if you need to restore it.
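
Putting that advice together, the sequence would roughly be (a sketch,
using the package names from this thread):

# quickpkg sys-libs/db          # binary backup, in case you need it back
# emerge -C sys-libs/db         # unmerge the orphaned library
# emerge -1 sys-apps/iproute2   # rebuild without the automagic dependency
# emerge --depclean             # verify nothing pulls db back in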


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Kernel 4.14.7 no longer switches to VT7

2017-12-29 Thread Kai Krakow
Am Thu, 21 Dec 2017 15:41:26 + schrieb Jörg Schaible:

> Hi,
> 
> after the update and installation of gentoo-sources-4.14.7 my two
> machines no longer switch to SDDM on VT7, it stays on VT1. However, I
> can switch manually using CTRL-ALT-7 to SDDM and login as usual. If I
> boot with the last stable kernel 4.12.12 anything is back to normal and
> the login screen of SDDM appears directly while the rest of the modules
> is loaded in background.
> 
> Both machines have older Radeon chips (REDWOOD and CEDAR) and I managed
> to load also their firmware with the new kernel 4.14.7, but there's
> still no automatic switch to VT7 anymore.
> 
> I found nothing obvious in /var/log/messages, dmesg or Xorg.0.log. What
> may cause this weird behavior?

If I remember right (many months ago), I fixed it by changing one line 
in /etc/sddm.conf:

[X11]
ServerArguments=-nolisten tcp -keeptty
                              ^^^^^^^^
                              This is where the magic happens

You may want to add or remove that parameter...


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Using an old kernel .config as the basis for a new .config

2017-12-28 Thread Kai Krakow
Am Thu, 28 Dec 2017 15:05:04 -0500 schrieb Jack:

> On 2017.12.28 14:52, Alan Mackenzie wrote:
>> Hello, Gentoo.
>> 
>> Having just built linux-4.14.7-gentoo, suddenly a new version of the
>> kernel, linux-4.14.8-gentoo-r1 has become stable.  Configuring a kernel
>> from scratch is a repetitive drudge.
>> 
>> There is some way of initialising a new kernel .config from an existing
>> one, I am sure, but I can't find it.  I've looked at the Gentoo wiki,
>> I've looked at (some of) the kernel's own documentation.  The nearest I
>> can find is make oldconfig, which supposedly does what I want, but it
>> just seems to start off with a default .config and go through the
>> hundreds of questions one at a time.
>> 
>> So, would some kind soul please tell me how to get my old .config into
>> a new one properly.  Thanks!
> 
> You need to copy your old .config into the new kernel source directory. 
> "make oldconfig" then uses those values, and only asks you about new
> items.  It sounds like it was asking about everything because it didn't
> have the old file as a starting point - so was starting from scratch.

You actually don't even have to copy the old config file, as long as the
currently running system provides the config you want to migrate.

You can just run

# make oldconfig

and it will figure out the config, looking at the current directory 
first. It will then interactively ask for each new config option. You can 
type "?" at each step to get a description. This is the way I do it.

I only copy a .config file if I want a specific known base configuration.

You can then run

# make menuconfig

to further fine-tune your decisions, or

# make localmodconfig

to disable modules not currently loaded. You should double-check it 
didn't disable important stuff. Take a backup of .config first, then run 
a diff. If in doubt, leave an option enabled as module.


# make olddefconfig

Doesn't ask questions but instead uses defaults. I wouldn't recommend 
this if you are already running optimized manual configs.


There are many more (and interesting ones), have a look at

# make help


You can also "emerge -a kergen" and let it build a .config based on and 
optimized for your hardware, tho it didn't work too well for me. You may 
want to double check what it does, and then manually change the config. 
You can also use it to migrate configs between kernel upgrades.
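
Putting the steps above together, a typical run looks roughly like this
(the old source path is just an example, adjust it to your versions):

# cd /usr/src/linux
# cp /usr/src/linux-4.14.7-gentoo/.config .   # optional, see above
# make oldconfig                      # only asks about new options
# cp .config .config.bak
# make localmodconfig                 # optional: drop unloaded modules
# diff -u .config.bak .config | less  # double-check what changed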


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Is gnome becoming obligatory?

2017-12-15 Thread Kai Krakow
Am Fri, 15 Dec 2017 07:38:01 +0100 schrieb J. Roeleveld:

> On Friday, December 15, 2017 4:05:41 AM CET Kai Krakow wrote:
>> Am Thu, 14 Dec 2017 08:54:59 +0100 schrieb J. Roeleveld:
>> >> Some historical correctnesses about Canek:
>> >> 
>> >> - He has been here for years - He has contributed here for years -
>> >> He supports systemd and has offered more help and explanation about
>> >> systemd to it's users on this list than any other single person, bar
>> >> none - He has never, not once, slagged off SysV Init, OpenRC or any
>> >> other init system, ot the creators or the users - He has never
>> >> posted rude or inflamatory comments about anyone arguing against him
>> >> - He has never resorted to ad-hominem and never posted any knee jerk
>> >> opinions about any other poster wrt their stance on init systems
>> > 
>> > +1 I may not agree with Canek on all things:
>> > - I do dislike systemd, especially on Centos where disabling services
>> > doesn't always work past a reboot
>> 
>> Well, I think you're falling the pitfall expecting "disable" makes a
>> unit unstartable. That is not the case. Disabling a unit only removes
>> it from the list of units starting on your own intent. It can still be
>> pulled it as a (required) dependency.
> 
> Makes sense
> 
>> If you really want it never being started, you need to mask the unit.
>> It's then no longer visible to the dependency resolver as if it were
>> not installed at all.
> 
> This is not listed anywhere easy to find in google.
> 
>> The verbs disable and enable are arguably a bit misleading, while the
>> verbs mask and unmask are not really obvious. But if you think of it,
>> it actually makes sense.
> 
> Actually, it doesn't. But lets not discuss naming conventions. A lot of
> tools have ones where I fail to see the logic.
> It's a shame that option is not easily findable. And not knowing it
> exists, means checking man-pages and googling for them doesn't happen
> either.
> 
>> If you "rc-update del" a service, you wouldn't prevent it from being
>> started neither, just because OpenRC is still able to pull it in as a
>> dependency.
> 
> True, except with OpenRC, all the config is located together. Not mostly
> in / usr/ somewhere with overrides in /etc/...
> I dislike all tools that split their config in this way.
> 
>> So it's actually not an argument for why you'd dislike systemd. ;-)
> 
> The lack of easily findable documentation on how to stop a service from
> starting, even as a dependency, is a reason. (not singularly against
> systemd).
> Systemd, however, has an alternative.

Maybe it's a matter of how you view and understand the underlying 
workings. For me, it was quite obvious that "disable" wouldn't stop a 
unit from starting at all. There's also socket activation, and if a 
socket can still pull in the unit, systemd actually tells you that it can 
be pulled in and you need to disable the socket unit, too.

After all, systemd is meant to automate most of the stuff, thus units are 
pulled in by udev or statically enabled units as needed. If you want to 
disable (and possibly break) some part of functionality, you have to 
pretend it's not there, thus "mask" it from visibility of the dependency 
system. That's also well documented in the man pages and blog articles by 
Lennart - which btw I've read _before_ deploying systemd.

I guess the bigger problem here is transitioning from the old, static, 
non plug-and-play init systems to some new style as systemd provides it. 
Old thinking no longer applies, you have to relearn from scratch. It's 
like driving a car from the 70s and then a modern one: the modern one may
have extras like a braking assistant, traction control, etc. And when
these first kick in, it may come as a surprise. But hey, it's not that
bad, and maybe there are even buttons to disable such functionality - at
your own risk.

But I agree with you that at first glance it is missing some overview: 
You cannot just look at /etc/systemd to see the full picture. There may 
be vendor-enabled units which you don't see there. But "systemctl status
<unit>" will tell you. Actually, I like the fact that installing a piece
of software also enables the service I expect to be installed and working.
The problem here is more on the distribution side, where
dependencies of packages may pull in packages with services you'd never 
need - just for a small runtime dependency. And I can agree with you that 
it breaks the principle of least surprise then. But it really should be 
fixed by the packagers, not by systemd. Systemd is just the messenger 
here which provides the function of vendor presets.

But, yes, if systemd was installed as part of a distribution upgrade,
without giving you the chance to read the docs, many things will come as
a surprise, and there's an overwhelming amount of change and different,
unexpected behavior. But is that really systemd's fault?


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Is gnome becoming obligatory?

2017-12-14 Thread Kai Krakow
Am Thu, 14 Dec 2017 08:54:59 +0100 schrieb J. Roeleveld:

>> Some historical correctnesses about Canek:
>> 
>> - He has been here for years - He has contributed here for years - He
>> supports systemd and has offered more help and explanation about
>> systemd to it's users on this list than any other single person, bar
>> none - He has never, not once, slagged off SysV Init, OpenRC or any
>> other init system, ot the creators or the users - He has never posted
>> rude or inflamatory comments about anyone arguing against him - He has
>> never resorted to ad-hominem and never posted any knee jerk opinions
>> about any other poster wrt their stance on init systems
> 
> +1 I may not agree with Canek on all things:
> - I do dislike systemd, especially on Centos where disabling services
> doesn't always work past a reboot

Well, I think you're falling into the pitfall of expecting "disable" to
make a unit unstartable. That is not the case. Disabling a unit only
removes it from the list of units started on your own intent. It can still
be pulled in as a (required) dependency.

If you really want it never to be started, you need to mask the unit.
It's then no longer visible to the dependency resolver, as if it were not
installed at all.
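
For illustration ("foo.service" is just a placeholder):

# systemctl disable foo.service   # removes the "wants" links; it can still
                                  # be started as a dependency of something
# systemctl mask foo.service      # points the unit at /dev/null; it cannot
                                  # be started at all until unmasked
# systemctl unmask foo.service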

The verbs disable and enable are arguably a bit misleading, while the
verbs mask and unmask are not really obvious. But if you think about it,
it actually makes sense. If you "rc-update del" a service, you wouldn't
prevent it from being started either, because OpenRC is still able to
pull it in as a dependency.

So it's actually not an argument for why you'd dislike systemd. ;-)


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: How to repair a 'secondary Gentoo system'

2017-12-11 Thread Kai Krakow
Am Mon, 11 Dec 2017 20:12:49 +0100 schrieb Helmut Jarausch:

> On 12/11/2017 05:58:42 PM, David Haller wrote:
>> Hello,
>> 
>> On Mon, 11 Dec 2017, Helmut Jarausch wrote:
>> >But now, don't ask me why,
>> >chroot  /OtherGentoo   /bin/bash
>> >dies of a segment fault.
>> >
>> >Is there any means to repair such a Gentoo system short of  
>> rebuilding it
>> >(nearly) from scratch?

You could try to start emerge with chroot directly instead of dropping 
into a shell, then rebuild whatever causes the segfault:

$ chroot /OtherGentoo emerge -1a bash

But keep in mind that chroot doesn't provide very good isolation from the
host system. You may want to use a wrapper script to set up the needed
mounts, and maybe some more stuff.

You could also try running busybox instead of bash.


>> How about a bit of debugging first?
>> 
>> # catchsegv chroot  /OtherGentoo   /bin/bash
>> # cd /OtherGentoo/ && chroot  /OtherGentoo/ /bin/bash
>> 
>> (ISTR, there was/is a reason for first cd-ing into the chroot and then
>> chrooting with the full-path...)
>> 
>> Have you (bind) mounted /sys, /dev, /proc into the chroot?
>> 
>> I use this as the top and bottom of a little bit longer
>> chroot-wrapper-script:
>> 
>>  /root/bin/chrooter 
>> #!/bin/bash
>> root="$1"
>> shift
>> 
>> test -e "${root}/proc/kcore" || mount --bind /proc/ "${root}/proc"
>> test -e "${root}/sys/block"  || mount --bind /sys/ "${root}/sys"
>> test -e "${root}/dev/root"   || mount --bind /dev/ "${root}/dev"
>> test -e "${root}/dev/pts/0"  || mount --bind /dev/pts/  
>> "${root}/dev/pts"
>> [..]
>> cd "$root"
>> chroot "$root" /bin/bash -l
>> 
> 
> My procedure is quite similar, I only use
> 
> mount --rbind /dev/ "${root}/dev"
> 
> and
> 
> mount --rbind /run  /${NROOT}/run
> 
> ---
> 
> I've tried
> catchsegv chroot  /OtherGentoo   /bin/bash
> 
> as well as
> 
> chroot  /OtherGentoo   catchsegv /bin/bash
> 
> In both cases, I don't get any error messages BUT I don't get chrooted.
> 
> Strangely enough, dmesg shows
> 
> systemd-coredump[25375]: Failed to connect to coredump service: No 
such  
> file or directory

It seems that at least systemd is installed and dropped some sysctl files 
in the directory structure. This would then set kernel.core_pattern to 
systemd-coredump.

OTOH, you may want to try to enter the other Gentoo system with
systemd-nspawn instead of chroot. It also sets up most of your bind mounts
correctly:

$ cd /OtherGentoo && sudo systemd-nspawn

Similar to your use case, I'm using such an OS tree to manage a rescue
system in case my main system won't boot. A simple script is used as a
wrapper to enter the system:

$ cat /mnt/rescue/enter.sh
#!/bin/bash
cd $(dirname $0) && \
exec sudo systemd-nspawn --bind=/usr/portage --bind=/boot $@


> although I'm not using system but openrc on both system

systemd-nspawn should be able to enter non-systemd (e.g. OpenRC) systems,
but the host OS needs a running systemd instance to build the namespace
scope.

I can even simulate a full rescue system boot that way:

$ /mnt/rescue/enter.sh -nb

and the container boots, dropping me at a console login.


Both provide much better isolation and a better simulation of the root
environment than chroot.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: #gentoo experiences

2017-11-20 Thread Kai Krakow
Am Sun, 19 Nov 2017 11:25:15 -0500
schrieb "taii...@gmx.com" :

> > I'm collecting information about people's experiences in #gentoo.  
> Thanks!
> > I'm interested in both good and bad experiences, with users,
> > developers, and operators. Basically, anything that anyone would
> > care to share would be much appreciated.  
> The lack of an ncurses setup gui/an express setup option is a major
> PITA which is why I haven't yet used gentoo as dom0 in a production 
> environment, If something goes wrong and I am forced to re-install it 
> will take long enough for the boss to think I am bad at my job and it 
> isn't the type of thing one should do late at night.

Keep a binpkg of all your installed packages in a backup, keep a backup
of /var/lib/portage/world and /etc.

Now for reinstalling, just feed the binpkg backup and the make.conf backup
(from etc) back to portage, put the world file back in place, then
reinstall. That should only take a few seconds now.
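
A minimal sketch of what that restore could look like, assuming the backup
was unpacked to /backup (all paths here are just examples, adjust to your
layout):

# cp /backup/make.conf /etc/portage/make.conf
# cp /backup/world /var/lib/portage/world
# cp -a /backup/packages /usr/portage/packages
# emerge --usepkgonly --emptytree @world

The last command reinstalls the complete world set from the binary packages
only, without compiling anything.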

Then restore the rest of your configuration using the etc-backup.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: gst-plugins-bad-1.11.90 is blocking gst-plugins-base-1.12.3

2017-10-17 Thread Kai Krakow
Am Tue, 17 Oct 2017 16:35:27 +0200
schrieb Hubert Hauser :

> I've got error:
> 
> tux ~ # emerge @preserved-rebuild
> 
>  * IMPORTANT: 8 news items need reading for repository 'gentoo'.
>  * Use eselect news read to view new items.
> 
> Calculating dependencies... done!
> [ebuild   R    ] dev-libs/botan-1.10.17
> [ebuild U  ] media-gfx/imagemagick-7.0.7.6 [6.9.9.0]
> [ebuild   R    ] dev-python/pillow-4.2.1-r1
> [ebuild U  ] media-libs/gst-plugins-base-1.12.3 [1.10.5]
> [ebuild U  ] dev-qt/qtgui-5.9.2 [5.7.1-r1] USE="libinput* -vnc%"
> [ebuild   R    ] media-video/ffmpeg-3.3.4
> [ebuild U  ] dev-qt/qtwidgets-5.9.2 [5.7.1] USE="gtk%*"
> [ebuild U  ] media-libs/gst-plugins-ugly-1.12.3 [1.10.5]
> [ebuild U  ] dev-qt/qtdeclarative-5.9.2 [5.7.1]
> [ebuild U  ] dev-qt/qtprintsupport-5.9.2 [5.7.1]
> [ebuild U  ] media-video/vlc-2.2.6-r2 [2.2.6] USE="qt5*"
> [ebuild U  ] media-plugins/gst-plugins-x264-1.12.3 [1.10.5]
> [ebuild U  ] dev-qt/qtwebchannel-5.9.2 [5.7.1]
> [ebuild   R    ] net-analyzer/wireshark-2.4.2
> [ebuild U  ] xfce-base/xfwm4-4.13.0-r1 [4.12.3-r1] USE="opengl%*
> -xpresent%"
> [ebuild U  ] app-editors/vim-8.0.1188 [8.0.0386] USE="terminal%*"
> PYTHON_SINGLE_TARGET="python3_4%* -python2_7% -python3_5% -python3_6%"
> [ebuild U  ] dev-qt/qtwebengine-5.9.2 [5.7.1-r2]
> [ebuild   R    ] dev-db/postgresql-9.6.5-r1
> [ebuild U  ] media-video/obs-studio-20.0.1-r1 [20.0.1]
> [ebuild U  ] net-print/cups-filters-1.17.9 [1.16.4] USE="-pclm%"
> [ebuild U  ] media-gfx/graphviz-2.40.1 [2.38.0-r1]
> [ebuild   R    ] www-servers/nginx-1.12.1
> [ebuild   R    ] dev-lang/php-7.0.24
> [ebuild   R    ] dev-db/mariadb-10.2.9
> [blocks B  ]  (" media-libs/gst-plugins-base-1.12.3)
> 
> !!! Multiple package instances within a single package slot have been
> pulled !!! into the dependency graph, resulting in a slot conflict:
> 
> dev-qt/qtgui:5
> 
>   (dev-qt/qtgui-5.7.1-r1:5/5.7::gentoo, installed) pulled in by
>     ~dev-qt/qtgui-5.7.1 required by
> (dev-qt/qtquickcontrols-5.7.1:5/5.7::gentoo, installed)
>     ^
> ^ 
>  
> 
>     (and 9 more with the same problem)
> 
>   (dev-qt/qtgui-5.9.2:5/5.9::gentoo, ebuild scheduled for merge)
> pulled in by
>     ~dev-qt/qtgui-5.9.2 required by
> (dev-qt/qtwebengine-5.9.2:5/5.9::gentoo, ebuild scheduled for merge)
>     ^
> ^ 
>   
> 
>     (and 3 more with the same problem)
> 
> dev-qt/qtwidgets:5
> 
>   (dev-qt/qtwidgets-5.7.1:5/5.7::gentoo, installed) pulled in by
>     ~dev-qt/qtwidgets-5.7.1 required by
> (dev-qt/qtwebkit-5.7.1:5/5.7::gentoo, installed)
>     ^
> ^ 
>   
> 
>     (and 6 more with the same problem)
> 
>   (dev-qt/qtwidgets-5.9.2:5/5.9::gentoo, ebuild scheduled for merge)
> pulled in by
>     ~dev-qt/qtwidgets-5.9.2 required by
> (dev-qt/qtwebengine-5.9.2:5/5.9::gentoo, ebuild scheduled for merge)
>     ^
> ^ 
>   
> 
>     (and 2 more with the same problem)
> 
> dev-qt/qtprintsupport:5
> 
>   (dev-qt/qtprintsupport-5.7.1:5/5.7::gentoo, installed) pulled in by
>     ~dev-qt/qtprintsupport-5.7.1 required by
> (dev-qt/qtwebkit-5.7.1:5/5.7::gentoo, installed)
>     ^ 
> ^ 
>   
> 
> 
>   (dev-qt/qtprintsupport-5.9.2:5/5.9::gentoo, ebuild scheduled for
> merge) pulled in by
>     ~dev-qt/qtprintsupport-5.9.2 required by
> (dev-qt/qtwebengine-5.9.2:5/5.9::gentoo, ebuild scheduled for merge)
>     ^ 
> ^ 
>   
> 
> 
> dev-qt/qtdeclarative:5
> 
>   (dev-qt/qtdeclarative-5.7.1:5/5.7::gentoo, installed) pulled in by
>     ~dev-qt/qtdeclarative-5.7.1 required by
> (dev-qt/qtquickcontrols-5.7.1:5/5.7::gentoo, installed)
>     ^
> ^ 
>  
> 
> 
>   (dev-qt/qtdeclarative-5.9.2:5/5.9::gentoo, ebuild scheduled for
> merge) pulled in by
>     ~dev-qt/qtdeclarative-5.9.2 required by
> (dev-qt/qtwebchannel-5.9.2:5/5.9::gentoo, ebuild scheduled for merge)
>     ^
> ^ 
>  

[gentoo-user] Re: Changing dependencies without upping version ??

2017-09-26 Thread Kai Krakow
Am Tue, 26 Sep 2017 18:30:33 -0700
schrieb Ian Zimmerman <i...@very.loosely.org>:

> On 2017-09-27 02:38, Kai Krakow wrote:
> 
> > If you don't want (or cannot) upgrade, you have two options:
> > 
> >   1. Prepare to maintain your own overlay and deal with it
> > 
> >   2. Don't use a rolling release distribution
> > 
> > Personally, and since you seem to know enough to manage your own
> > overlay, I'd stick to #1.  
> 
> I do so already, and in fact my initial workaround was to fork the
> ebuild in my repo, pretty much like you recommend.
> 
> But I didn't know that this was the official way of stopping upgrades.
> I thought package.mask was that, and I think that's what the Handbook
> (or maybe some other part of the wiki) recommends.

Yes, masking of course. But at some point the ebuild will be dropped from
the tree, and you may want to keep it around for rebuilds.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Thunderbird build failure

2017-09-26 Thread Kai Krakow
Am Sat, 23 Sep 2017 17:53:25 +0200 (CEST)
schrieb Christoph Böhmwalder :

> Hey all,
> 
> I've been trying to get Thunderbird to build for a few days now.
> Since I'm pretty much out of ideas at this point, I figured I'd ask
> on here if someone has an idea on what my problem is.
> 
> I've attached the output of `emerge --info
> '=mail-client/thunderbird-52.3.0::gentoo'`, `emerge -pqv
> '=mail-client/thunderbird-52.3.0::gentoo'`, and the complete build
> log.
> 
> From the error message I can see that it likely has something to do
> with either libpng or zlib. I have both of those installed:
> 
> $ emerge --info libpng
> --- >8 ---  
> media-libs/libpng-1.6.29::gentoo was built with the following:
> USE="apng (-neon) -static-libs" ABI_X86="32 (64) (-x32)"
> CPU_FLAGS_X86="sse"
> 
> $ emerge --info zlib
> --- >8 ---  
> sys-libs/zlib-1.2.11::gentoo was built with the following:
> USE="minizip -static-libs" ABI_X86="32 (64) (-x32)"
> 
> 
> I noticed that I have zlib-1.2.11 installed, though Thunderbird (or
> libpng?) is apparently trying to reference a symbol from zlib 1.2.9.
> I tried downgrading to zlib 1.2.9, but...
> 
> # emerge --ask \=sys-libs/zlib-1.2.9
> 
> These are the packages that would be merged, in order:
> 
> Calculating dependencies... done!
> 
> emerge: there are no ebuilds to satisfy "=sys-libs/zlib-1.2.9".
> 
> 
> I'd really appreciate any hints on what I'm doing wrong here. Thanks!

Without having looked at the logs, did you try:

# emerge -DNua @world --changed-deps

and/or

# emerge -1a @preserved-rebuild

(not necessarily in that order)


-- 
Regards,
Kai

Replies to list-only preferred.





[gentoo-user] Re: Changing dependencies without upping version ??

2017-09-26 Thread Kai Krakow
Am Tue, 26 Sep 2017 11:45:45 -0700
schrieb Ian Zimmerman :

> On 2017-09-26 22:01, Michael Palimaka wrote:
> 
> > If the only argument is you don't want to upgrade, I'm afraid
> > there's not much we can do to help you.  
> 
> You're right that I don't want to upgrade, and I have already
> explained my workaround for that.  But that is _not_ what I'm
> complaining about in this thread.  Rather, my complaint is that such
> a major change is hidden in an ebuild edit with no version/revision
> bump, which means I cannot use the normal means (ie. package.mask) to
> prevent it.  Before I decided to drop Qt completely, I had to make a
> local package of qtcustomplot in my own repo.

If you don't want (or cannot) upgrade, you have two options:

  1. Prepare to maintain your own overlay and deal with it

  2. Don't use a rolling release distribution


Personally, and since you seem to know enough to manage your own
overlay, I'd stick to #1.


> Surely there are other reasons against this kind of thing?  What if
> someone reports a bug in the package?  Now you don't know from the
> version/rev number if it's linked with Qt4 or Qt5.  Is that not
> important?

The problem seems to be that while the package can be built against
both qt4 and qt5, no qt4/qt5 USE flags were present in the ebuild at all.

A proper way, which I'm sure you could have lived with, would have been to
introduce both USE flags, then mask the qt4 flag and mark it for
removal during the next version bump. That would have given you an easy
opportunity to react properly to the change, by either unmasking the
flag and pinning the version, or copying the ebuild from db/pkg to your
own overlay.

I don't know what Gentoo policy suggests, but I'm pretty sure this is one
of the official ways to prevent exactly your problem.

In the long run, though, due to qt4 breakage, the qt5 USE flag had to be
introduced and qt4 support had to be dropped. But maybe not in just
one single step without a revbump.

The change causes rebuilds for most Qt users anyway: either you had
one of the flags enabled, which resulted in a USE flag change and thus a
rebuild, or none of the flags was set and you were not affected (which
was probably the "short-sighted" reasoning for doing it without a revbump
in the first place).


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: distributed emerge

2017-09-26 Thread Kai Krakow
Am Wed, 27 Sep 2017 02:04:12 +0200
schrieb Kai Krakow <hurikha...@gmail.com>:

> Am Mon, 25 Sep 2017 21:35:02 +1000
> schrieb Damo Brisbane <dhatche...@gmail.com>:
> 
> > Can someone point where I might go for parallel @world build, it is
> > really for my own curiositynat this time. Currently I stage binaries
> > for multiple machines on a single nfs share, but the assumption is
> > to use instead some distributed filesystem. So I think I just need a
> > recipie, pointers or ideas on how to distribute emerge on an @world
> > set? I am thinking granular first, ie per package rather than eg
> > distributed gcc within a single package.  
> 
> As others already pointed out, distcc introduces more headache then it
> solves.
> 
> If you are searching for a solution due to performance of package
> building, you get most profit from building on tmpfs.
> 
> Then, I also suggest going breadth first, thus building more packages
> at the same time.
> 
> Your question implies depth first which means having more compiler
> processes running at a time for a single package. But most build
> processes do not scale out very well for the following reasons:
> 
>   1. Configure phases are serial processes
> 
>   2. Dependencies in Makefile are often buggy or incomplete
> 
>   3. Dependencies between source files often allow parallel
>  building only for short burst throughout the complete
>  build and are serial otherwise
> 
> Building packages in parallel instead solves all these problems: Each
> build phase can one in parallel to every other build phase. So while a
> serialized configure phase is running or package is bundled/merged,
> another package can have multiple gccs running while a third package
> maybe builds serialized due to source file deps.
> 
> Also, emerge is very IO bound. Resorting to distcc won't solve this,
> as a lot of compiler internals need to be copied back and forth
> between the peers. It may even create more IO than building locally
> only. Using tmpfs instead solves this much better.
> 
> I'm using the following settings and have 100% on all eight cores
> almost all the time during emerge, while IO is idle most of the time:
> 
> MAKEOPTS="-s -j9 -l8"
> FEATURES="sfperms parallel-fetch parallel-install protect-owned \
> userfetch splitdebug fail-clean cgroup compressdebug buildpkg \
> binpkg-multi-instance clean-logs userpriv usersandbox"
> EMERGE_DEFAULT_OPTS="--binpkg-respect-use=y --binpkg-changed-deps=y \
> --jobs=10 --load-average 8 --keep-going --usepkg"
> 
> $ fgrep portage /etc/fstab
> none /var/tmp/portage tmpfs
> noauto,x-systemd.automount,x-systemd.idle-timeout=60,size=32G,mode=770,uid=portage,gid=portage
> 
> Have either enough swap or lower the tmpfs allocation.
> 
> Using FEATURES buildpkg pinpkg-multi-instance allows to reuse packages
> on different but similar machines. EMERGE_DEFAULT_OPTS makes use of
> this. /usr/portage/{distfiles,packages} is on shared media.
> 
> Also, I'm usually building world upgrades with --changed-deps to
> rebuild dependers and update the bin packages that way.
> 
> I'm not sure, tho, if running emerge in parallel on two machines would
> pickup newly appearing binpkgs during the process... I guess, not. I
> usually don't do that except the dep tree looks independent between
> both machines.
> 
> If your machine cannot saturate the CPU throughout the whole emerge
> process (as long as there are parallel ebuild running), then distcc
> will clearly not help you, make the complete process slower due to
> waiting on remote resources, and even increase the load. Only very
> few, huge projects, with Makefile deps very clearly optimized or
> specially crafted for distributed builds can benefit from distcc.
> Most projects aren't of this type, even Chromium and LibreOffice
> don't. Exactly, those projects have way to much meta data to
> transport between the distcc peers.
> 
> But YMMV. I'd say, try a different path first.

I can imagine one case where distcc could help you: if the building machine
(the one running emerge) is very constrained on system resources. But
in that case, the better performing option is still staging the
builds on another machine and installing binary packages on that
low-resource machine.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: distributed emerge

2017-09-26 Thread Kai Krakow
Am Mon, 25 Sep 2017 21:35:02 +1000
schrieb Damo Brisbane :

> Can someone point where I might go for parallel @world build, it is
> really for my own curiositynat this time. Currently I stage binaries
> for multiple machines on a single nfs share, but the assumption is to
> use instead some distributed filesystem. So I think I just need a
> recipie, pointers or ideas on how to distribute emerge on an @world
> set? I am thinking granular first, ie per package rather than eg
> distributed gcc within a single package.

As others already pointed out, distcc introduces more headache than it
solves.

If you are looking for a solution to improve package build performance,
you get the most benefit from building on tmpfs.

Then, I also suggest going breadth first, thus building more packages
at the same time.

Your question implies depth first which means having more compiler
processes running at a time for a single package. But most build
processes do not scale out very well for the following reasons:

  1. Configure phases are serial processes

  2. Dependencies in Makefile are often buggy or incomplete

  3. Dependencies between source files often allow parallel
 building only for short burst throughout the complete
 build and are serial otherwise

Building packages in parallel instead solves all these problems: each
build phase can run in parallel to every other build phase. So while a
serialized configure phase is running or a package is bundled/merged,
another package can have multiple gcc processes running while a third
package builds serially due to source file deps.

Also, emerge is very IO bound. Resorting to distcc won't solve this, as
a lot of intermediate compiler data needs to be copied back and forth between
the peers. It may even create more IO than building locally. Using
tmpfs instead solves this much better.

I'm using the following settings and have 100% on all eight cores
almost all the time during emerge, while IO is idle most of the time:

MAKEOPTS="-s -j9 -l8"
FEATURES="sfperms parallel-fetch parallel-install protect-owned \
userfetch splitdebug fail-clean cgroup compressdebug buildpkg \
binpkg-multi-instance clean-logs userpriv usersandbox"
EMERGE_DEFAULT_OPTS="--binpkg-respect-use=y --binpkg-changed-deps=y \
--jobs=10 --load-average 8 --keep-going --usepkg"

$ fgrep portage /etc/fstab
none /var/tmp/portage tmpfs 
noauto,x-systemd.automount,x-systemd.idle-timeout=60,size=32G,mode=770,uid=portage,gid=portage

Have either enough swap or lower the tmpfs allocation.

Using the FEATURES buildpkg and binpkg-multi-instance allows reusing packages
on different but similar machines. EMERGE_DEFAULT_OPTS makes use of
this. /usr/portage/{distfiles,packages} is on shared media.
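
As a sketch, the shared media can simply be an NFS export mounted on each
box (the server name and export path below are made-up examples):

# /etc/fstab on the client machines
fileserver:/export/portage  /usr/portage  nfs  rw,noatime  0 0

With the default PKGDIR, the binary packages then end up in
/usr/portage/packages and are visible to every machine using --usepkg.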

Also, I'm usually building world upgrades with --changed-deps to
rebuild dependers and update the bin packages that way.

I'm not sure, though, whether running emerge in parallel on two machines
would pick up newly appearing binpkgs during the process... I guess not. I
usually don't do that unless the dep tree looks independent between
both machines.

If your machine cannot saturate the CPU throughout the whole emerge
process (as long as there are parallel ebuilds running), then distcc
will clearly not help you; it will make the complete process slower due to
waiting on remote resources, and even increase the load. Only very few,
huge projects, with Makefile deps clearly optimized or specially
crafted for distributed builds, can benefit from distcc. Most projects
aren't of this type; even Chromium and LibreOffice don't qualify. Exactly
those projects have way too much metadata to transport between the
distcc peers.

But YMMV. I'd say, try a different path first.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [offtopic] Copy-On-Write ?

2017-09-17 Thread Kai Krakow
Am Sun, 17 Sep 2017 08:20:50 -0500
schrieb Dan Douglas <orm...@gmail.com>:

> On 09/17/2017 04:17 AM, Kai Krakow wrote:
> > Am Sun, 17 Sep 2017 01:20:45 -0500
> > schrieb Dan Douglas <orm...@gmail.com>:
> >   
> >> On 09/16/2017 07:06 AM, Kai Krakow wrote:  
>  [...]  
>  [...]  
> >>  [...]
>  [...]  
>  [...]  
> >>
> >> According to btrfs-filesystem(8), defragmentation breaks reflinks,
> >> in all but a few old kernel versions where I guess they tried to
> >> fix the problem and apparently failed.  
> > 
> > It was splitting and splicing all the reflinks which is actually a
> > tree walk with more and more extents coming into the equation, and
> > ended up doing a lot of small IO and needing a lot of memory. I
> > think you really cannot fix this when working with extents.  
> 
> I figured by "break up" they meant it eliminates the reflink by making
> a full copy... so the increased space they're talking about isn't
> really double that of the original data in other words.
> 
> >   
> >> This really makes much of what btrfs
> >> does altogether pointless if you ever defragment manually or have
> >> autodefrag enabled. Deduplication is broken for the same reason.  
> > 
> > It's much easier to fix this for deduplication: Just write your
> > common denominator of an extent to a tmp file, then walk all the
> > reflinks and share them with parts of this extent.
> > 
> > If you carefully select what to defragment, there should be no
> > problem. A defrag tool could simply skip all the shared extents. A
> > few fragments do not hurt performance at all, but what's important
> > is spatial locality. A lot small fragments may hurt performance a
> > lot, so one could give the defragger a hint when to ignore the rule
> > and still defragment the extent. Also, when your deduplication
> > window is 1M you could probably safely defrag all extents smaller
> > than 1M.  
> 
> Yeah this sort of hurts with the way I deal wtih KVM image snapshots.
> I have raw base images as backing files with lots of shared and null
> data, so I run `fallocate --dig-holes' followed by `duperemove
> --dedupe-options=same' on the cow-enabled base images and hope that
> btrfs defrag can clean up the resulting fragmented mess, but it's a
> slow process and doesn't seem to do a good job.

I would be interested in your results if you try bees[1] to
deduplicate your KVM images. It should be able to dig holes and merge
blocks by reflinking. I'm not sure if it would merge contiguous extents
back into one single extent; I think that's on a todo list. It could
then act as a reflink-aware defragger.

It currently does not work well for mixed datasum/nodatasum workloads,
so I made a PR[2] to ignore nocow files. A more elaborate patch would
not try to reflink datasum and nodatasum extents (nocow implies
nodatasum).

[1]: https://github.com/Zygo/bees
[2]: https://github.com/Zygo/bees/pull/21


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [offtopic] Copy-On-Write ?

2017-09-17 Thread Kai Krakow
Am Sun, 17 Sep 2017 01:20:45 -0500
schrieb Dan Douglas <orm...@gmail.com>:

> On 09/16/2017 07:06 AM, Kai Krakow wrote:
> > Am Fri, 15 Sep 2017 14:28:49 -0400
> > schrieb Rich Freeman <ri...@gentoo.org>:
> >   
> >> On Fri, Sep 8, 2017 at 3:16 PM, Kai Krakow <hurikha...@gmail.com>
> >> wrote:  
>  [...]  
> >>
> >> True, but keep in mind that this applies in general in btrfs to any
> >> kind of modification to a file.  If you modify 1MB in the middle
> >> of a 10GB file on ext4 you end up it taking up 10GB of space.  If
> >> you do the same thing in btrfs you'll probably end up with the
> >> file taking up 10.001GB.  Since btrfs doesn't overwrite files
> >> in-place it will typically allocate a new extent for the
> >> additional 1MB, and the original content at that position within
> >> the file is still on disk in the original extent.  It works a bit
> >> like a log-based filesystem in this regard (which is also
> >> effectively copy on write).  
> > 
> > Good point, this makes sense. I never thought about that.
> > 
> > But I guess that btrfs doesn't use 10G sized extents? And I also
> > guess, this is where autodefrag jumps in.  
> 
> According to btrfs-filesystem(8), defragmentation breaks reflinks, in
> all but a few old kernel versions where I guess they tried to fix the
> problem and apparently failed.

It was splitting and splicing all the reflinks, which is actually a tree
walk with more and more extents coming into the equation, and it ended up
doing a lot of small IO and needing a lot of memory. I think you really
cannot fix this when working with extents.


> This really makes much of what btrfs
> does altogether pointless if you ever defragment manually or have
> autodefrag enabled. Deduplication is broken for the same reason.

It's much easier to fix this for deduplication: Just write your common
denominator of an extent to a tmp file, then walk all the reflinks and
share them with parts of this extent.

If you carefully select what to defragment, there should be no problem.
A defrag tool could simply skip all the shared extents. A few fragments
do not hurt performance at all, but what's important is spatial
locality. A lot of small fragments may hurt performance a lot, so one
could give the defragger a hint when to ignore the rule and still
defragment the extent. Also, when your deduplication window is 1M you
could probably safely defrag all extents smaller than 1M.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [offtopic] Copy-On-Write ?

2017-09-16 Thread Kai Krakow
Am Sat, 16 Sep 2017 10:05:21 -0700
schrieb Rich Freeman <ri...@gentoo.org>:

> On Sat, Sep 16, 2017 at 9:43 AM, Kai Krakow <hurikha...@gmail.com>
> wrote:
> >
> > Actually, I'm running across 3x 1TB here on my desktop, with mraid1
> > and draid 0. Combined with bcache it gives confident performance.
> >  
> 
> Not entirely sure I'd use the word "confident" to describe a
> filesystem where the loss of one disk guarantees that:
> 1.  You will lose data (no data redundancy).
> 2.  But the filesystem will be able to tell you exactly what data you
> lost (as metadata will be fine).

I take daily backups with borg backup. It takes only 15 minutes to run,
and it has been tested successfully twice. The only breakdowns I had
were due to btrfs bugs, not hardware faults.

That gives me enough confidence for my desktop system.


> > I was very happy a long time with XFS but switched to btrfs when it
> > became usable due to compression and stuff. But performance of
> > compression seems to get worse lately, IO performance drops due to
> > hogged CPUs even if my system really isn't that incapable.
> >  
> 
> Btrfs performance is pretty bad in general right now.  The problem is
> that they just simply haven't gotten around to optimizing it fully,
> mainly because they're more focused on getting rid of the data
> corruption bugs (which is of course the right priority).  For example,
> with raid1 mode btrfs picks the disk to use for raid based on whether
> the PID is even or odd, without any regard to disk utilization.
> 
> When I moved to zfs I noticed a huge performance boost.

Interesting... While I never tried it, I always feared that it would
perform worse unless you throw RAM and ZIL/L2ARC at it.


> Fundamentally I don't see why btrfs can't perform just as well as the
> others.  It just isn't there yet.

And it will take a long time still, because devs are still throwing new
features at it which need to stabilize.


> > What's still cool is that I don't need to manage volumes since the
> > volume manager is built into btrfs. XFS on LVM was not that
> > flexible. If btrfs wouldn't have this feature, I probably would
> > have switched back to XFS already.  
> 
> My main concern with xfs/ext4 is that neither provides on-disk
> checksums or protection against the raid write hole.

Btrfs has suffered from the same RAID5 write hole problem for years. I always
planned on moving to RAID5 later (which is why I have 3 disks), but I fear
this won't be fixed any time soon due to design decisions made too
early.


> I just switched motherboards a few weeks ago and either a connection
> or a SATA port was bad because one of my drives was getting a TON of
> checksum errors on zfs.  I moved it to an LSI card and scrubbed, and
> while it took forever and the system degraded the array more than once
> due to the high error rate, eventually it patched up all the errors
> and now the array is working without issue.  I didn't suffer more than
> a bit of inconvenience but with even mdadm raid1 I'd have had a HUGE
> headache trying to recover from that (doing who knows how much
> troubleshooting before realizing I had to do a slow full restore from
> backup with the system down).

I found md raid not very reliable in the past, but I haven't tried it again in
years, so this may have changed. I just remember that it destroyed a file
system after an unclean shutdown more than once, which is not what I
expect from RAID1. Other servers with file systems on bare metal
survived this just fine.


> I just don't see how a modern filesystem can get away without having
> full checksum support.  It is a bit odd that it has taken so long for
> Ceph to introduce it, and I'm still not sure if it is truly
> end-to-end, or if at any point in its life the data isn't protected by
> checksums.  If I were designing something like Ceph I'd checksum the
> data at the client the moment it enters storage, then independently
> store the checksum and data, and then retrieve both and check it at
> the client when the data leaves storage.  Then you're protected
> against corruption at any layer below that.  You could of course have
> additional protections to catch errors sooner before the client even
> sees them.  I think that the issue is that Ceph was really designed
> for object storage originally and they just figured the application
> would be responsible for data integrity.

I'd at least pass the checksum through all the layers, verifying it at each
one, so you could detect which transport or layer is broken.


> The other benefit of checksums is that if they're done right scrubs
> can go a lot faster, because you don't have to scrub all the
> redundancy data synchronously.  You can just start an idle-priority
> read thread on every drive and then pau

[gentoo-user] Re: [offtopic] Copy-On-Write ?

2017-09-16 Thread Kai Krakow
Am Sat, 16 Sep 2017 09:39:33 -0400
schrieb Rich Freeman <ri...@gentoo.org>:

> On Sat, Sep 16, 2017 at 8:06 AM, Kai Krakow <hurikha...@gmail.com>
> wrote:
> >
> > But I guess that btrfs doesn't use 10G sized extents? And I also
> > guess, this is where autodefrag jumps in.
> >  
> 
> It definitely doesn't use 10G extents considering the chunks are only
> 1GB.  (For those who aren't aware, btrfs divides devices into chunks
> which basically act like individual sub-devices to which operations
> like mirroring/raid/etc are applied.  This is why you can change raid
> modes on the fly - the operation takes effect on new chunks.  This
> also allows clever things like a "RAID1" on 3x1TB disks to have 1.5TB
> of useful space, because the chunks essentially balance themselves
> across all three disks in pairs.  It also is what causes the infamous
> issues when btrfs runs low on space - once the last chunk is allocated
> it can become difficult to rebalance/consolidate the remaining space.)

Actually, I'm running across 3x 1TB here on my desktop, with mraid1 and
draid0. Combined with bcache it gives solid performance.


> I couldn't actually find any info on default extent size.  I did find
> a 128MB example in the docs, so presumably that isn't an unusual size.
> So, the 1MB example would probably still work.  Obviously if an entire
> extent becomes obsolete it will lose its reference count and become
> free.

According to the bees[1] source code it's actually 128M, if I remember right.


> Defrag was definitely intended to deal with this.  I haven't looked at
> the state of it in ages, when I stopped using it due to a bug and some
> limitations.  The main limitation being that defrag at least used to
> be over-zealous.  Not only would it free up the 1MB of wasted space,
> as in this example, but if that 1GB file had a reflink clone it would
> go ahead and split it into two duplicate 1GB extents.  I believe that
> dedup would do the reverse of this.  Getting both to work together
> "the right way" didn't seem possible the last time I looked into it,
> but if that has changed I'm interested.
> 
> Granted, I've been moving away from btrfs lately, due to the fact that
> it just hasn't matured as I originally thought it would.  I really
> love features like reflinks, but it has been years since it was
> "almost ready" and it still tends to eat data.

XFS has gained reflinks lately, and I think they are working on
snapshots currently. Kernel 4.14 or 4.15 promises new features for XFS
(if they get ready by then), so maybe that will be snapshots? I'm not
sure.

I was very happy with XFS for a long time but switched to btrfs when it
became usable, because of compression and other features. But compression
performance seems to have gotten worse lately; IO performance drops due to
hogged CPUs even though my system really isn't that incapable.

What's still cool is that I don't need to manage volumes since the
volume manager is built into btrfs. XFS on LVM was not that flexible.
If btrfs didn't have this feature, I probably would have switched
back to XFS already.


>  For the moment I'm
> relying more on zfs.

How does it perform memory-wise? Especially since I'm currently using bees[1]
for deduplication: it uses a 1G memory-mapped file (you can choose
other sizes if you want), and it picks up new files really fast, within
a minute. I don't think zfs can do anything like that within the same
resources.


>  I'd love to switch back if they ever pull things
> together.  The other filesystem I'm eyeing with interest is cephfs,
> but that still is slightly immature (on-disk checksums were only just
> added), and it has a bit of overhead until you get into fairly large
> arrays.  Cheap arm-based OSD options seem to be fairly RAM-starved at
> the moment as well given the ceph recommendation of 1GB/TB.  arm64
> still seems to be slow to catch on, let alone cheap boards with 4-16GB
> of RAM.

Well, for servers, XFS is still my fs of choice. But I will be
evaluating btrfs for that soon, maybe comparing it to zfs. Once we have
evaluated the resource usage, we will buy matching hardware and set up a
new server, mainly for thin-provisioned container systems for web
hosting. I guess ZFS would be somewhat misused here as DAS.

If XFS gets into shape anytime soon with snapshotting features, I will
of course consider it. I've been using it for years and it was extremely
reliable, surviving power losses and not degrading in performance;
something I cannot say about ext3, apparently. Also, XFS
gives good performance with JBOD because allocations are distributed
diagonally across the whole device. This is good for cheap hardware as
well as hardware RAID controllers.

[1]: https://github.com/Zygo/bees


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [offtopic] Copy-On-Write ?

2017-09-16 Thread Kai Krakow
Am Fri, 15 Sep 2017 14:28:49 -0400
schrieb Rich Freeman <ri...@gentoo.org>:

> On Fri, Sep 8, 2017 at 3:16 PM, Kai Krakow <hurikha...@gmail.com>
> wrote:
> >
> > At least in btrfs there's also a caveat that the original extents
> > may not actually be split and the split extents share parts of the
> > original extent. That means, if you delete the original later, the
> > copy will occupy more space than expected until you defragment the
> > file: 
> 
> True, but keep in mind that this applies in general in btrfs to any
> kind of modification to a file.  If you modify 1MB in the middle of a
> 10GB file on ext4 you end up it taking up 10GB of space.  If you do
> the same thing in btrfs you'll probably end up with the file taking up
> 10.001GB.  Since btrfs doesn't overwrite files in-place it will
> typically allocate a new extent for the additional 1MB, and the
> original content at that position within the file is still on disk in
> the original extent.  It works a bit like a log-based filesystem in
> this regard (which is also effectively copy on write).

Good point, this makes sense. I never thought about that.

But I guess that btrfs doesn't use 10G sized extents? And I also guess,
this is where autodefrag jumps in.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [offtopic] Copy-On-Write ?

2017-09-08 Thread Kai Krakow
Am Thu, 07 Sep 2017 17:46:27 +0200
schrieb Helmut Jarausch :

> Hi,
> 
> sorry, this question is not Gentoo specific - but I know there are
> many very knowledgeable people on this list.
> 
> I'd like to "hard-link" a file X to Y - i.e. there is no additional  
> space on disk for Y.
> 
> But, contrary to the "standard" hard-link (ln), file Y should be
> stored in a different place (inode) IF it gets modified.
> With the standard hard-link, file X is the same as Y, so any changes
> to Y are seen in X by definition.
> 
> Is this possible
> - with an ext4 FS
> - or only with a different (which) FS

You can do this with "cp --reflink=always" if the filesystem supports
it.
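
For your example files X and Y, that would simply be:

$ cp --reflink=always X Y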

To my current knowledge, only btrfs (for a long time now) and xfs (in
newer kernel versions) support it. I'm not sure if ext4 supports it or
plans to.

It is different from hard linking in that the new file is linked by a new
inode, so it has its own time stamp and permissions, unlike hard
links. Only the contents are initially shared, until you modify them. Also
keep in mind that this increases fragmentation, especially when there
are a lot of small modifications.

At least in btrfs there's also a caveat that the original extents may
not actually be split and the split extents share parts of the
original extent. That means, if you delete the original later, the copy
will occupy more space than expected until you defragment the file:

File a extent map: [1111][2222][3333]
File b extent map: [1111][2222][3333]
Modify b:          [1111][22][4][2][3333] <- one block modified

Delete file a:     [1111][2222][3333] <- extent 2 still fully mapped
File b extent map: [1111][22][4][2][3333]

So extent 2 is still on disk in its original state [2222].

Defragment file b: [1111][2242][3333]
Original extent 2: freed <- completely gone now



-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Updating an old version of Gentoo

2017-07-28 Thread Kai Krakow
Am Thu, 27 Jul 2017 17:14:17 +0200
schrieb Arve Barsnes :

> >
> > On Thursday 27 Jul 2017 09:48:43 symack wrote:  
> > > There must be an easy way to do this. Something like download the
> > > latest portage and source package. Untar on live system and
> > > rebuild! That would be so amazing if possible.​  
> >  
> 
> It does not seem like the installation is super old, maybe worth a
> try to update the portage tree in steps, by pulling from git at some
> set intervals, and just do updates after each sync.

Is there a list of portage timestamps which would break upgrading old
systems?

My idea would be to download portage trees from specific points in
time, run an upgrade, then download the next, until I'm back in a sane
state to do normal sync and update.
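
If the tree is synced from git, a rough sketch of one such step could look
like this (the date is just an example):

$ cd /usr/portage
$ git checkout $(git rev-list -n 1 --before="2016-06-01" master)
$ emerge -uDN @world

Then repeat with a later date until the tree is current and a normal sync
works again.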


-- 
Regards,
Kai

Replies to list-only preferred.





[gentoo-user] Re: local file containing a web site

2017-06-11 Thread Kai Krakow
Am Sat, 10 Jun 2017 12:12:09 -0400
schrieb allan gottlieb :

> I was interviewed and the material was put on a website 
> (news.mit.edu/2017/reflections-puzzle-keeper-allan-gottlieb-0608).
> 
> For someone to view this they need that
> 1.  They are on the net.
> 2.  MIT has not removed it.
> 
> I would like to produce a file containing what is seen when viewing
> that web page (it brings in other pages).  A pdf would be good, but
> others would be OK.

If only single pages are of interest, I'd probably suggest PDF indeed
as it is universally viewable.

> The goal is to be able to put this on a flash drive and be able to
> view in without net access.

Try httrack which is a web site mirror and offline browsing software.
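
A minimal invocation could look like this (the output directory is just an
example):

$ httrack "http://news.mit.edu/2017/reflections-puzzle-keeper-allan-gottlieb-0608" -O ./mit-mirror

The mirrored pages in ./mit-mirror can then be copied to the flash drive and
opened in any browser.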

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: port forwarding

2017-06-05 Thread Kai Krakow
Am Sun, 4 Jun 2017 22:28:17 -0600
schrieb the...@sys-concept.com:

> My firewall (dd-wrt) does not opening specific port.
> 
> In NAT(QoS) tab I have:
> forward from port 4569 to internal IP port 4569 (this is an asterisk
> IAX port);
> 
> netstat -a |grep 4569
> udp0  0 0.0.0.0:45690.0.0.0:*
> 
> But when I check this port via ShieldsUP! it keeps showing me it is:
> "closed"

Even when forwarding is correctly set up, your internal machine also
needs to accept connections forwarded to it, and it needs to accept
them from external IPs.

So please also check whether direct connections are accepted from internal
machines; if not, fix your setup first. You can use nmap for that.

Then run tcpdump on the internal machine and check if forwarded packets
reach the internal machine.
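
A rough sketch of both checks; the internal IP and interface name are
assumptions, adjust them to your network:

# nmap -sU -p 4569 192.168.1.10   <- run from another internal machine
# tcpdump -ni eth0 udp port 4569  <- run on the Asterisk box while retesting from outside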

If not, ask in the dd-wrt forums.

If yes, ask in the asterisk forums.

I don't see how this is Gentoo-related, except that you installed Asterisk
on Gentoo by means of portage.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Kernel did not finding root partition

2017-05-30 Thread Kai Krakow
Am Tue, 30 May 2017 09:26:03 +0100
schrieb Peter Humphrey <pe...@prh.myzen.co.uk>:

> On Monday 29 May 2017 21:42:28 Kai Krakow wrote:
> > Am Mon, 29 May 2017 19:16:11 +0100
> > 
> > schrieb Neil Bothwick <n...@digimed.co.uk>:  
> > > On Mon, 29 May 2017 15:07:48 -0300, Raphael MD wrote:  
> [...]
>  [...]  
> > > 
> > > You said you were using rEFInd, why have you got GRUB as well.
> > > rEFInd can work without a config, GRUB cannot.  
> > 
> > This puzzles me, too... Maybe rEFInd was installed to sda and grub
> > installed to sda1, so rEFInd would chain-boot through grub.
> > 
> > Grub, however, won't work without a config file. I'd also suggest to
> > skip grub completely and use just one loader.  
> 
> Not only that, but for some reason I couldn't get grub to work at all
> on my Asus UEFI system. I use systemd-boot only, with a separate
> config file for each kernel I might want to boot. (I do not have the
> rest of systemd in this openrc system; just its boot program.)
> 
> It might not help the OP but this is my script for compiling a kernel:
> 
> # cat /usr/local/bin/kmake 
> #!/bin/bash 
> mount /boot 
> cd /usr/src/linux 
> time (make -j12 && make modules_install && make install &&\ 
>   /bin/ls -lh --color=auto /boot &&\ 
>   echo &&\ 
>   cp -v ./arch/x86/boot/bzImage /boot/EFI/Boot/bootX64.efi
> ) &&\ 
> echo; echo "Rebuilding modules..."; echo &&\ 
> emerge --jobs --load-average=48 @module-rebuild @x11-module-rebuild
> 
> He may be missing the copying step; that would explain his inability
> either to boot or to supply the info you asked him for.

I hooked into the install hook infrastructure of the kernel instead:

$ cat /etc/kernel/postinst.d/70_rebuild-modules
#!/bin/bash
exec env -i PATH=$PATH /usr/bin/emerge -1v --usepkg=n @module-rebuild

$ cat /etc/kernel/postinst.d/90_systemd
#!/bin/bash
/usr/bin/kernel-install remove $1 $2
/usr/bin/kernel-install add $1 $2

This takes care of everything and the kernel-install script from
systemd also rebuilds the dracut initrd (because it installed hooks
to /usr/lib/kernel/install.d).

eclean-kernel can then be used to properly clean up obsolete kernel
versions. I'm running it through cron to keep only the most recent 5
kernels at weekly intervals.
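
The crontab entry is roughly this (keeping the 5 newest kernels; the option
name is from memory, check eclean-kernel --help):

@weekly /usr/bin/eclean-kernel -n 5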

For the hooks to properly execute at the right time, it is important to
give the "make install" target last:

$ cd /usr/src/linux
$ make oldconfig
# make -j9 
# make modules_install firmware_install install

The "install" target triggers the hooks, so modules have to be already
installed at that time.

Additionally I have a script to rebuild dracut easily on demand (e.g.,
when early boot components were updated or changed):

$ cat /usr/local/sbin/rebuild-dracut.sh
#!/bin/bash
set -e
if [ "$1" == "-a" ]; then
versions=$(cd /boot && ls vmlinuz-* | fgrep -v .old | sed 
's/vmlinuz-//')
else
versions="$@"
fi
versions=${versions:=$(uname -r)}
for hook in $(ls /etc/kernel/postinst.d/*_{dracut,grub,systemd} 2>/dev/null); do
for version in $versions; do
${hook} ${version%.old} /boot/vmlinuz-${version}
done
done


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Kernel did not finding root partition

2017-05-29 Thread Kai Krakow
Am Mon, 29 May 2017 19:16:11 +0100
schrieb Neil Bothwick :

> On Mon, 29 May 2017 15:07:48 -0300, Raphael MD wrote:
> 
>  [...]  
> 
> > > 1. partition layout
> > > 2. kernel cmdline
> > > 3. boot-loader config
> > > 4. exact error message on screen  
> 
> > 1. partition layout
> > /dev/sda1 vfat boot
> > /dev/sda3 xfs   root
> > /dev/sda2 swap  
> 
> That looks OK.

Yes, but I am missing some info:

Is sda1 marked as ESP?

Also, you should mark sda3 as root partition through gptfdisk.

That way, any modern EFI boot loader should be able to auto-configure
everything.
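
A sketch of how to do that with sgdisk from sys-apps/gptfdisk (the type
codes are the standard GPT ones for the ESP and a Linux x86-64 root):

# sgdisk -t 1:ef00 /dev/sda   <- mark sda1 as EFI System Partition
# sgdisk -t 3:8304 /dev/sda   <- mark sda3 as Linux x86-64 root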

> > > 2. kernel cmdline
> > None  
> 
> Are you letting rEFInd auto-detect it? Maybe you need to configure it
> manually with a root= setting.

I think you need a working initrd for auto-detection to work. At least,
systemd is able to assemble the partitions from GPT partition type
settings and can autodetect boot, swap and rootfs.

Otherwise, you should give at least a root= cmdline.
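
For your layout that would be something like:

root=/dev/sda3 rootfstype=xfs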

> > > 3. boot-loader config
> > Grub, without any different config.  
> 
> You said you were using rEFInd, why have you got GRUB as well. rEFInd
> can work without a config, GRUB cannot.

This puzzles me, too... Maybe rEFInd was installed to sda and grub
installed to sda1, so rEFInd would chain-boot through grub.

Grub, however, won't work without a config file. I'd also suggest
skipping grub completely and using just one loader.

> > > 4. exact error message on screen
> > Kernel boot up, start to load drivers and stop asking for root
> > partition.  
> 
> That's a summary, not an exact message. As such it gives no useful
> information.

Yes, this is not helpful. How can one expect us to be helpful if
he/she refuses to give details? Nobody requires you to copy the screen
contents by hand. For me, a useful screen shot taken with a mobile
phone camera would be a first step.

I think there are even services which can OCR such a screen shot...


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Kernel did not finding root partition

2017-05-29 Thread Kai Krakow
Am Mon, 29 May 2017 08:09:02 -0300
schrieb Raphael MD :

> I'm trying to install Gentoo in my notebook, but kernel, during the
> boot, do not find the root partition.
> 
> I'm using UEFI boot, I've tried Genkernel, I've checked XFS's support
> in kernel's menuconfig and re-cheked GRUB config files, but is a
> pain, do not work.
> 
> I've installed Funtoo with Debian Kernel first, but Funtoo KDE's
> ebuild was pointing to a invalid URL and I've switched to Gentoo and
> now I'm suffering this problem to boot.
> 
> Have anyone some information, about this Kernel's boot didn't finding
> root partition? Is better configure kernel without Genkernel? I need
> to pass some commands to Kernel via GRUB?
> 
> PS.: Appear to be very simple configure UEFI, because I'm using
> Refind and it was working with Funtoo, and I realized this problem is
> with gentoo kernel's config, but I do not know where I need to config.
> 
> Any suggestions?

For UEFI boot the best way is to install the kernel to the ESP,
especially if it is directly loaded by EFI. Which exact message do you
see? It is not clear whether the kernel already booted and just cannot find
the rootfs, or whether the kernel cannot even load.

I don't know rEFInd, but some EFI loaders like gummiboot / systemd-boot
expect the kernel to have an EFI stub because the kernel is
chain-loaded through EFI...

So we need to know a few things:

1. partition layout
2. kernel cmdline
3. boot-loader config
4. exact error message on screen


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: OT: Avi-player diplaying the current frame counter while playing ?

2017-05-26 Thread Kai Krakow
Am Fri, 26 May 2017 00:11:41 +0200
schrieb wabe :

> tu...@posteo.de wrote:
> 
> > Hi,
> > 
> > currentlu I am playing around with Blender animations.
> > 
> > To sync certain movements of objects to other objects
> > I need the exact frame number, at which "something
> > happens" ;)
> > 
> > The clips are of the avi-format but they are of non
> > standard resultions. MPlayer does not play them
> > ("no audio found" and that's it) but mpv has no
> > problem with them.
> > But mpv only prints the perceantage of how much
> > is played already (I cant convice mpv otherwise...)
> > 
> > Is there a way to instruct mpv to print the current
> > frame count while playing?
> > Are there other players known to do that?  
> 
> Check out cinelerra. It is a non-linear video editor and so it also 
> plays videos and shows the exact timestamp or framenumber (you can 
> set this in preferences). But cinelerra is a very complex software 
> and thus maybe not what you are looking for. 
> 
> Avidemux can also be used as a video player but IIRC it can't show
> framenumbers but only the exact timestamp.

Kdenlive may be an option, too.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Puzzled by zswap [Was: tmp on tmpfs]

2017-05-26 Thread Kai Krakow
Am Thu, 25 May 2017 11:46:45 -0700
schrieb Ian Zimmerman <i...@primate.net>:

> On 2017-05-24 19:05, Kai Krakow wrote:
> 
> > To get in line with Rich Freeman: I didn't want to imply that zswap
> > only works with swap, neither that tmpfs only works with swap. Both
> > work without. But if you want to put some serious amount of data
> > into tmpfs, you need swap as a backing device sooner or later.  
> 
> Looking at zswap, I have several questions
> (even after reading linux/Documentation/vm/zswap.txt).
> 
> 1.  How does it know which swap device to use as backing store, if
> any? Clearly at boot time no swap configuration exists, even if
> initrd/initramfs is used, which here it is not.  So when the kernel
> sees zswap.enable=1 in the command line, what happens?

You simply don't assign a swap device to zswap. It's transparently
inserted into the swapping chain of the kernel. Thus pages are first
compressed, and later swapped out by normal kernel processing.

> 2.  The doc says it can be turned on at runtime by means of
> /sys/module/zswap/parameters/enabled.  But kconfig doesn't make it
> possible to build the support as a module, only built-in, and so it is
> not surprising that this path doesn't exist.

I wonder why this doesn't exist. All my builtin modules have their
parameters in /sys/module:

# lsmod | fgrep zswap | wc -l
0
# ls -ald /sys/module/zswap
drwxr-xr-x 3 root root 0 26. Mai 07:54 /sys/module/zswap

> 3.  It seems to require zbud to also be turned on, but this is not
> enforced by kconfig.  Is this a bug or what?

No idea, I enabled it...

> 4.  Quoting:
> 
>  Zswap seeks to be simple in its policies.  Sysfs attributes allow
> for one user controlled policy:
>  * max_pool_percent - The maximum percentage of memory that the
> compressed pool can occupy.
> 
> Does this mean this is another (hypothetical) node in
> /sys/module/zswap/parameters/ ?

grep ^ /sys/module/zswap/parameters/*
/sys/module/zswap/parameters/compressor:lzo
/sys/module/zswap/parameters/enabled:Y
/sys/module/zswap/parameters/max_pool_percent:20
/sys/module/zswap/parameters/zpool:zbud 

This also implies that zbud is required for zswap to even operate. If
you didn't include it, that may be the reason why zswap is missing
in /sys/module.
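
For reference, the same parameters can also be set on the kernel command
line; the values below simply mirror my settings shown above:

zswap.enabled=1 zswap.compressor=lzo zswap.zpool=zbud zswap.max_pool_percent=20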


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: tmp on tmpfs

2017-05-25 Thread Kai Krakow
Am Thu, 25 May 2017 08:34:10 +0200
schrieb "J. Roeleveld" :

> It is possible. I have it set up like that on my laptop.
> Apart from a small /boot partition. The whole drive is encrypted.
> Decryption keys are stored encrypted in the initramfs, which is
> embedded in the kernel.

And the kernel is on /boot, which is unencrypted, so your encryption
keys are too. This is not much better, I guess...
> On May 25, 2017 12:40:12 AM GMT+02:00, Rich Freeman
>  wrote:
> >On Wed, May 24, 2017 at 2:16 PM, Andrew Savchenko
> > wrote:  
> >>
> >> Apparently it is pointless to encrypt swap if unencrypted
> >> hibernation image is used, because all memory is accessible through
> >> that image (and even if it is deleted later, it can be restored
> >> from hdd and in some cases from ssd).
> >>  
> >
> >Yeah, that was my main concern with an approach like that.  I imagine
> >you could use a non-random key and enter it on each boot and restore
> >from the encrypted swap, though I haven't actually used hibernation
> >on linux so I'd have to look into how to make that work.  I imagine
> >with an initramfs it should be possible.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: tmp on tmpfs

2017-05-24 Thread Kai Krakow
Am Wed, 24 May 2017 12:30:36 -0700
schrieb Rich Freeman <ri...@gentoo.org>:

> On Wed, May 24, 2017 at 11:34 AM, Ian Zimmerman <i...@primate.net>
> wrote:
> > On 2017-05-24 08:00, Kai Krakow wrote:
> >  
> >> Unix semantics suggest that /tmp is not expected to survive reboots
> >> anyways (in contrast, /var/tmp is expected to survive reboots), so
> >> tmpfs is a logical consequence to use for /tmp.  
> >
> > /tmp is wiped by the bootmisc init job anyway.
> >  
> 
> In general I haven't found anything that is bothered by /var/tmp being
> lost on reboot, but obviously that is something you need to be
> prepared for if you put it on tmpfs.
> 
> One thing that wasn't mentioned is that having /tmp in tmpfs might
> also have security benefits depending on what is stored there, since
> it won't be written to disk.  If you have a filesystem on tmpfs and
> your swap is encrypted (which you should consider setting up since it
> is essentially "free") then /tmp also becomes a useful dumping ground
> for stuff that is decrypted for temporary processing.  For example, if
> you keep your passwords in a gpg-encrypted file you could copy it to
> /tmp, decrypt it there, do what you need to, and then delete it.  That
> wouldn't leave any recoverable traces of the file.

Interesting point... How much performance impact does encrypted swap
have? I don't mean any benchmark numbers but real life experience from
your perspective when the system experiences memory pressure?

> There are lots of guides about encrypted swap.  It is the sort of
> thing that is convenient to set up since there is no value in
> preserving a swap file across reboots, so you can just generate a
> random key on each boot.  I suspect that would break down if you're
> using hibernation / suspend to disk.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: tmp on tmpfs

2017-05-24 Thread Kai Krakow
Am Wed, 24 May 2017 11:34:20 -0700
schrieb Ian Zimmerman <i...@primate.net>:

> On 2017-05-24 08:00, Kai Krakow wrote:
> 
> > While I have no benchmarks and use the systemd default of tmpfs for
> > /tmp, I also put /var/tmp/portage on tmpfs, automounted through
> > systemd so it is cleaned up when no longer used (by unmounting).
> > 
> > What can I say? It works so much faster: Building packages is a lot
> > faster most of the time, even if you'd expect gcc uses a lot of
> > memory.
> > 
> > Well, why might that be? First, tmpfs is backed by swap space, that
> > means, you need a swap partition of course. Swap is a lot simpler
> > than file systems, so swapping out unused temporary files is fast
> > and is a good thing. Also, unused memory sitting around may be
> > swapped out early. Why would you want inactive memory resident? So
> > this is also a good thing. Portage can use memory much more
> > efficient by this.
> > 
> > Applying this reasoning over to /tmp should no explain why it works
> > so well and why you may want it.
> > 
> > BTW: I also use zswap, so tmpfs sits in front of a compressed
> > write-back cache before being written out to swap compressed. This
> > should generally be much more efficient (performance-wise) than
> > putting /tmp on zram.
> > 
> > I configured tmpfs for portage to use up to 30GB of space, which is
> > almost twice the RAM I have. And it works because tmpfs is not
> > required to be resident all the time: Inactive parts will be swapped
> > out. The kernel handles this much similar to the page cache, with
> > the difference that your files aren't backed by your normal file
> > system but by swap.  And swap has a lot lower IO overhead.
> > 
> > Overall, having less IO overhead (and less head movement for portage
> > builds) is a very very efficient thing to do. GCC constantly needs
> > all sorts of files from your installation (libs for linking, header
> > files, etc), and writes a lot of transient files which are needed
> > once later and then discarded. There's no point in putting it on a
> > non-transient file system.
> > 
> > I use the following measures to get more performance out of this
> > setup:
> > 
> >   * I have three swap partitions spread across three HDDs
> >   * I have a lot of swap space (60 GB) to have space for tmpfs
> >   * I have bcache in front of my HDD filesystem
> >   * I have a relatively big SSD dedicated to bcache
> > 
> > My best recommendation is to separate swap and filesystem devices.
> > While I didn't do it that way, I still separate them through bcache
> > and thus decouple fs access and swap access although they are on the
> > same physical devices. My bcache is big enough that most accesses
> > would go to the SSD only. I enabled write-back to have that effect
> > also for write access.
> > 
> > If you cannot physically split swap from fs, a tmpfs setup for
> > portage may not be recommended (except you have a lot of memory,
> > like 16GB or above). But YMMV.
> > 
> > Still, I recommend it for /tmp, especially if your system is on
> > SSD.  
> 
> All interesting points, and you convinced me to at least give tmpfs a
> try on the desktop.
> 
> My laptop is different, though.  It doesn't have that much RAM by
> comparison (4G) and it _only_ has a SSD.  Builds have been slow :(  I
> am afraid to mess with it lest I increase the wear on the SSD.

You still may want to test /var/tmp/portage as tmpfs for small
packages... Or manually call:

# sudo PORTAGE_TMPDIR=/path/to/tmpfs emerge -1a small-package

For big packages, I suggest NFS-mounting some storage from your desktop.
It will probably still be slow (maybe a little bit slower) but should
be much better for your SSD lifetime.
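
A sketch of such a mount on the laptop (hostname and export path are made
up, adjust to your setup):

desktop:/export/portage-tmp  /var/tmp/portage  nfs  rw,noatime  0 0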


> > Unix semantics suggest that /tmp is not expected to survive reboots
> > anyways (in contrast, /var/tmp is expected to survive reboots), so
> > tmpfs is a logical consequence to use for /tmp.  
> 
> /tmp is wiped by the bootmisc init job anyway.

That's why such jobs exist, and why usually /tmp is wiped completely
while /var/tmp is wiped based on atime/mtime...


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: tmp on tmpfs

2017-05-24 Thread Kai Krakow
Am Wed, 24 May 2017 08:00:33 +0200
schrieb Kai Krakow <hurikha...@gmail.com>:

> Am Wed, 24 May 2017 07:34:34 +0200
> schrieb gentoo-u...@c-14.de:
> 
> > On 17-05-23 at 22:16, Ian Zimmerman wrote:  
> > > So what are gentoo users' opinions on this matter of faith?
> > I use an ext4 partition backed by zram. Gives me ~3x compression on
> > the things I normally have lying around there (plain text files) and
> > ensures that anything I throw there (or programs throw there) gets
> > cleaned up on reboot.
> >   
> > > I have long been in the camp that thinks tmpfs for /tmp has no
> > > advantages (and may have disadvantages) over a normal filesystem
> > > like ext3, because the files there are normally so small that they
> > > will stay in the page cache 100% of the time.
> > I've never actually benchmarked this. Most of the things I notice
> > that tend to end up there are temporary build files generated during
> > configure stages or temporary log files used by various programs
> > (clang static analyzer). Even if the entire file stays in the page
> > cache, it'll still generate IO overhead and extra seeks that might
> > slow down the rest of your system (unless your /tmp is on a
> > different hard drive) which on spinning rust will cause slowdowns
> > while on an ssd it'll eat away at your writes (which you may or may
> > not have to worry about).
> >   
> > > But I see that tmpfs is the default with systemd.  Surely they
> > > have a good reason for this? :)
> > Or someone decided they liked the idea and made it the default and
> > nobody ever complained (or if they did were told to just change it
> > on their system). 
> > 
> > Either way, it'd be nice if someone actually benchmarked this.  
> 
> While I have no benchmarks and use the systemd default of tmpfs
> for /tmp, I also put /var/tmp/portage on tmpfs, automounted through
> systemd so it is cleaned up when no longer used (by unmounting).
> 
> What can I say? It works so much faster: Building packages is a lot
> faster most of the time, even if you'd expect gcc uses a lot of
> memory.
> 
> Well, why might that be? First, tmpfs is backed by swap space, that
> means, you need a swap partition of course.

To get in line with Rich Freeman: I didn't want to imply that zswap
only works with swap, nor that tmpfs only works with swap. Both work
without it. But if you want to put a serious amount of data into tmpfs,
you need swap as a backing device sooner or later.

> Swap is a lot simpler than
> file systems, so swapping out unused temporary files is fast and is a
> good thing. Also, unused memory sitting around may be swapped out
> early. Why would you want inactive memory resident? So this is also a
> good thing. Portage can use memory much more efficiently this way.
> 
> Applying this reasoning to /tmp should now explain why it works so
> well and why you may want it.
> 
> BTW: I also use zswap, so tmpfs sits in front of a compressed
> write-back cache before being written out to swap compressed. This
> should generally be much more efficient (performance-wise) than
> putting /tmp on zram.
> 
> I configured tmpfs for portage to use up to 30GB of space, which is
> almost twice the RAM I have. And it works because tmpfs is not
> required to be resident all the time: Inactive parts will be swapped
> out. The kernel handles this much similar to the page cache, with the
> difference that your files aren't backed by your normal file system
> but by swap. And swap has a lot lower IO overhead.
> 
> Overall, having less IO overhead (and less head movement for portage
> builds) is a very very efficient thing to do. GCC constantly needs all
> sorts of files from your installation (libs for linking, header files,
> etc), and writes a lot of transient files which are needed once later
> and then discarded. There's no point in putting it on a non-transient
> file system.
> 
> I use the following measures to get more performance out of this
> setup:
> 
>   * I have three swap partitions spread across three HDDs
>   * I have a lot of swap space (60 GB) to have space for tmpfs
>   * I have bcache in front of my HDD filesystem
>   * I have a relatively big SSD dedicated to bcache
> 
> My best recommendation is to separate swap and filesystem devices.
> While I didn't do it that way, I still separate them through bcache
> and thus decouple fs access and swap access although they are on the
> same physical devices. My bcache is big enough that most accesses
> would go to the SSD only. I enabled write-back to have that effect
> also for write access.
> 
> If you cannot physically split swap from fs, a tmpfs setup for portage
> may not be recommended (unless you have a lot of memory, like 16GB or
> above). But YMMV.

[gentoo-user] Re: tmp on tmpfs

2017-05-24 Thread Kai Krakow
Am Wed, 24 May 2017 07:34:34 +0200
schrieb gentoo-u...@c-14.de:

> On 17-05-23 at 22:16, Ian Zimmerman wrote:
> > So what are gentoo users' opinions on this matter of faith?  
> I use an ext4 partition backed by zram. Gives me ~3x compression on
> the things I normally have lying around there (plain text files) and
> ensures that anything I throw there (or programs throw there) gets
> cleaned up on reboot.
> 
> > I have long been in the camp that thinks tmpfs for /tmp has no
> > advantages (and may have disadvantages) over a normal filesystem
> > like ext3, because the files there are normally so small that they
> > will stay in the page cache 100% of the time.  
> I've never actually benchmarked this. Most of the things I notice that
> tend to end up there are temporary build files generated during
> configure stages or temporary log files used by various programs
> (clang static analyzer). Even if the entire file stays in the page
> cache, it'll still generate IO overhead and extra seeks that might
> slow down the rest of your system (unless your /tmp is on a different
> hard drive) which on spinning rust will cause slowdowns while on an
> ssd it'll eat away at your writes (which you may or may not have to
> worry about).
> 
> > But I see that tmpfs is the default with systemd.  Surely they have
> > a good reason for this? :)  
> Or someone decided they liked the idea and made it the default and
> nobody ever complained (or if they did were told to just change it on
> their system). 
> 
> Either way, it'd be nice if someone actually benchmarked this.

While I have no benchmarks and use the systemd default of tmpfs
for /tmp, I also put /var/tmp/portage on tmpfs, automounted through
systemd so it is cleaned up when no longer used (by unmounting).
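
My automount setup is roughly the following two units (path, size and
idle timeout are only examples, and I'm writing this down from memory):

/etc/systemd/system/var-tmp-portage.mount:

  [Unit]
  Description=Portage build directory on tmpfs

  [Mount]
  What=tmpfs
  Where=/var/tmp/portage
  Type=tmpfs
  Options=size=30g,mode=775

/etc/systemd/system/var-tmp-portage.automount:

  [Unit]
  Description=Automount for the portage build directory

  [Automount]
  Where=/var/tmp/portage
  TimeoutIdleSec=600

  [Install]
  WantedBy=local-fs.target

Then enable only the automount unit:

# systemctl enable --now var-tmp-portage.automount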

What can I say? It works so much faster: Building packages is a lot
faster most of the time, even though you'd expect gcc to use a lot of
memory.

Well, why might that be? First, tmpfs is backed by swap space, which
means you need a swap partition, of course. Swap is a lot simpler than
a file system, so swapping out unused temporary files is fast, which is
a good thing. Also, unused memory sitting around may be swapped out
early. Why would you want inactive memory to stay resident? So this is
also a good thing. Portage can use memory much more efficiently this
way.

Applying this reasoning to /tmp should now explain why it works so
well and why you may want it.

BTW: I also use zswap, so tmpfs pages pass through a compressed
write-back cache before being written out to swap in compressed form.
This should generally be much more efficient (performance-wise) than
putting /tmp on zram.
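
In case someone wants to try zswap: it is just a matter of kernel
parameters, roughly like this (compressor and pool size are only
examples, and the chosen compressor has to be built into your kernel):

  zswap.enabled=1 zswap.compressor=lz4 zswap.max_pool_percent=20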

I configured tmpfs for portage to use up to 30GB of space, which is
almost twice the RAM I have. And it works because tmpfs is not required
to be resident all the time: Inactive parts will be swapped out. The
kernel handles this much like the page cache, with the difference that
your files aren't backed by your normal file system but by swap. And
swap has much lower IO overhead.

Overall, having less IO overhead (and less head movement for portage
builds) is very effective. GCC constantly reads all sorts of files from
your installation (libs for linking, header files, etc.) and writes a
lot of transient files which are needed once and then discarded.
There's no point in putting those on a persistent file system.

I use the following measures to get more performance out of this setup:

  * I have three swap partitions spread across three HDDs
  * I have a lot of swap space (60 GB) to have space for tmpfs
  * I have bcache in front of my HDD filesystem
  * I have a relatively big SSD dedicated to bcache

My best recommendation is to separate swap and filesystem devices.
While I didn't do it that way, I still separate them through bcache
and thus decouple fs access and swap access even though they are on the
same physical devices. My bcache is big enough that most accesses go to
the SSD only. I enabled write-back to get that effect for writes as
well.
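
For reference, the write-back part is just a runtime switch in sysfs
(bcache0 stands for whatever your backing device shows up as):

# echo writeback > /sys/block/bcache0/bcache/cache_mode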

If you cannot physically split swap from fs, a tmpfs setup for portage
may not be recommended (unless you have a lot of memory, like 16GB or
above). But YMMV.

Still, I recommend it for /tmp, especially if your system is on SSD.
Unix semantics suggest that /tmp is not expected to survive reboots
anyways (in contrast, /var/tmp is expected to survive reboots), so
tmpfs is a logical consequence to use for /tmp.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Qt-4.8.7 bug

2017-05-22 Thread Kai Krakow
Am Mon, 22 May 2017 19:33:55 +0200
schrieb Jörg Schaible :

> Peter Humphrey wrote:
> 
> > On Monday 22 May 2017 09:49:01 Jörg Schaible wrote:  
> >> Hi Peter,
> >> 
> >> Peter Humphrey wrote:
> >> 
> >> [snip]
> >>   
>  [...]  
> >> 
> >> well, this does not seem to be the complete truth. When I switched
> >> to gcc 5.x I did a revdep-rebuild for anything that was compiled
> >> against libstdc++.so.6 just like the according news entry was
> >> recommending. And I am quite sure that those Qt plugins were part
> >> of my 515 recompiled packages.
> >> 
> >> Nevertheless, my KDE 4 apps were broken after the update to Qt
> >> 4.8.7. Rebuilding anything that was using libQtCore.so.4 solved
> >> it, but I fail to see how this is related to the gcc update two
> >> weeks ago.  
> > 
> > I can only suggest you read bug report 618922 if you haven't
> > already, including following its reference to bug 595618. It makes
> > sense to me.  
> 
> It does not for me. My packages were already compiled with gcc-5.4.0.
> Those Buzilla issues only talk about (plasma/qt) packages compiled
> with previous gcc-4.x which are supposed to be incompatible. All of
> the plasma/qt related packages that have been recompiled, because
> they were built upon libQtCore.so.4 were already recompiled with
> gcc5. I've checked my logs.

Is the problem maybe the order in which the packages were built?

From your description I see one edge case: Plasma could have been
compiled with gcc-5 but linked against Qt that was still built with
gcc-4. Then Qt was rebuilt afterwards and is now compiled with gcc-5.

Without reading the bug report I would guess that is what the bug is
about: Linking plasma against gcc-4 built qt binaries... Even when you
rebuild qt with gcc-5 after this, you'd need to relink plasma.

From your logs, you should see the order in which those packages were
rebuilt and linked.

revdep-rebuild is not always able to order packages correctly for
emerge, and emerge in turn has no problem with the wrong ordering
because it is only rebuilding packages whose dependency constraints are
already fulfilled (read: runtime and build deps are already there).
Portage does not consider rebuilds a hard dependency/precondition for
rebuilding other packages...

As far as I understood, "--changed-deps" should fix this and also
rebuild packages depending on the rebuilt packages _after_ they've been
rebuilt.

The "--empty" option would have a similar effect, tho rebuild many more
packages.

You could've tried if "revdep-rebuild ... -- --changed-deps" would've
done anything better. I would be interested...
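
Spelled out, something along these lines (library name taken from this
thread; everything after the double dash is passed straight to emerge):

# revdep-rebuild --library 'libQtCore.so.4' -- --ask --changed-deps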


-- 
Regards,
Kai

Replies to list-only preferred.





[gentoo-user] Re: Qt-4.8.7 bug

2017-05-21 Thread Kai Krakow
Am Sun, 21 May 2017 09:38:31 +0100
schrieb Peter Humphrey <pe...@prh.myzen.co.uk>:

> On Saturday 20 May 2017 18:39:07 Kai Krakow wrote:
> > Am Sat, 20 May 2017 16:36:08 +0100
> > 
> > schrieb Mick <michaelkintz...@gmail.com>:  
> > > On Saturday 20 May 2017 10:48:52 Mick wrote:  
>  [...]  
>  [...]  
> > >  [...]
> > >  [...]
> > >  [...]
> > >  [...]
> > >  [...]
> > >  [...]
> > >
>  [...]  
> > >  
> > >  [...]
> > >
>  [...]  
>  [...]  
> > > 
> > > It seems revdep-rebuild'ing against library='libQtCore.so.4' also
> > > rebuilds the newly installed Qt packages.  This is why there so
> > > many packages to rebuild.  
> > 
> > That's why I suggested using "--changed-deps": It doesn't rebuild
> > packages that provide the library itself and have already been built
> > after the library provider...
> > 
> > OTOH, it doesn't check binary dependence, just what is written into
> > the ebuilds itself. But it should work most of the time.
> > 
> > A combination of two emerge invocations may work, too:
> > 
> > # emerge -DNua world --changed-deps
> > # emerge -1a @preserved-rebuild --changed-deps
> > 
> > This also worked well for me when I did the gcc upgrade.
> > 
> > But I think the need to use changed-deps to rebuild dependers
> > should be considered a bug and be reported. Portage has support for
> > sub-slot dependencies to describe such binary breakage during
> > upgrades and automatically rebuild the dependers.  
> 
> Have you seen https://bugs.gentoo.org/show_bug.cgi?id=595618 ? It
> says that "Qt plugins compiled with gcc-4 are incompatible with
>  can be expected to anticipate that. On the other hand, some kind of
> notice could be issued, and bug 618922 is pursuing that. (That's the
> one I started this thread with.)

I was thinking about how portage could support the user here:

Portage could record which compiler and which version was used to
build the package. It already records tons of information in
/var/db/pkg. There's CFLAGS, but that doesn't really help here.

What I did after "emerge -DNua world --changed-deps" while upgrading
GCC at the same time was to find all packages in /var/db/pkg whose
mtime was older than the CFLAGS file of the GCC build. While
changed-deps seemed to capture most of what needed to be rebuilt, I
just wanted to be sure the rest of the system was rebuilt, too.
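
From memory, the quick-and-dirty check looked roughly like this (the
gcc directory below is only an example, point it at whatever CFLAGS
file your gcc build installed under /var/db/pkg):

# find /var/db/pkg -mindepth 2 -maxdepth 2 -type d \
    ! -newer /var/db/pkg/sys-devel/gcc-5.4.0-r3/CFLAGS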

Of course there are ebuilds that just install data files or scripts
and could've been excluded from rebuilding, and packages that are built
with compilers other than gcc... That's why I thought it would be nice
if portage recorded the exact compiler version used.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Sudden auto-unmount of an encfs-partition ... why?

2017-05-21 Thread Kai Krakow
Am Sun, 21 May 2017 08:15:57 +0200
schrieb tu...@posteo.de:

> I have a directory mounted via fuse.encfs (encrypted).
> 
> Since kernel 4.11 (seldom, more often with 4.11.1 and 4.11.2) it
> happens that once in a sudden the system decides to make the
> contents unaccessible:
> 'mount' stills shows the mount of that directory but neither
> 'ls' or any other application can find the directory anymore.
> This happens while an application still accesses files
> of that directory (and the failure to do so shows that
> the "auto umount" has hit again).
> 
> I fetched the kernel right off ftp.kernel.org (more
> exactlu: off a mirror of that).

Why don't you use the gentoo-sources kernel ebuild? It has some special
patches for Gentoo userland... Though I don't see any that would be
directly related to your problem...

But maybe you want to check if it happens there, too. They have at
least 4.11.1 available by now. Myself, I'm using ck-sources 4.11.1.


> What is happening here? Has Linus implemented a timer
> for that ? :)

Do you use systemd, and did you perhaps mount it with a mount-timeout
parameter accidentally?
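
If I remember correctly, the fstab options that produce exactly this
behaviour look like the following (paths and filesystem are made up,
the point is the idle-timeout option, which auto-unmounts after the
given number of idle seconds):

  /dev/whatever  /mnt/secure  auto  noauto,x-systemd.automount,x-systemd.idle-timeout=60  0 0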

> Any help is very appreciated since this featire is VERY
> annoying!

Since this is a fuse filesystem, check dmesg for any signals regarding
the fuse daemon: Maybe it just crashed. It doesn't really unmount since
you still see the mount point listed. Compare the running fuse-related
processes before and after the issue.
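
Something as simple as this, run before and after the problem, should
already tell you whether the daemon died (the process name is an
assumption, adjust to what you actually run):

# dmesg | grep -iE 'fuse|encfs|segfault'
# pgrep -af encfs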


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Qt-4.8.7 bug

2017-05-20 Thread Kai Krakow
Am Sat, 20 May 2017 16:36:08 +0100
schrieb Mick <michaelkintz...@gmail.com>:

> On Saturday 20 May 2017 10:48:52 Mick wrote:
> > On Saturday 20 May 2017 03:19:20 Peter Humphrey wrote:  
> > > On Saturday 20 May 2017 00:26:58 Kai Krakow wrote:  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
> > > 
> > > After all that, KMail now works as it did before.
> > >   
>  [...]  
> > > 
> > > Mick might like to try that, perhaps. I assume the effect will be
> > > the same.  
> > 
> > Thanks Peter.  First PC is going through it.  91 packages!  
> 
> It seems revdep-rebuild'ing against library='libQtCore.so.4' also
> rebuilds the newly installed Qt packages.  This is why there so many
> packages to rebuild.

That's why I suggested using "--changed-deps": It doesn't rebuild
packages that provide the library itself and have already been built
after the library provider...

OTOH, it doesn't check binary dependencies, just what is written in the
ebuilds themselves. But it should work most of the time.

A combination of two emerge invocations may work, too:

# emerge -DNua world --changed-deps
# emerge -1a @preserved-rebuild --changed-deps

This also worked well for me when I did the gcc upgrade.

But I think the need to use changed-deps to rebuild dependers should be
considered a bug and be reported. Portage has support for sub-slot
dependencies to describe such binary breakage during upgrades and
automatically rebuild the dependers.
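
For the record, that's the ":=" slot operator in an ebuild's
dependencies. A dependency declared roughly like this makes portage
schedule a rebuild when the provider's sub-slot changes (the package
name is only an example):

  RDEPEND="dev-qt/qtcore:4="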


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: ReactOS on virtualbox: VERY small viewport?

2017-05-20 Thread Kai Krakow
Am Sat, 20 May 2017 12:40:38 +0200
schrieb tu...@posteo.de:

> On 05/20 12:29, Kai Krakow wrote:
> > Am Sat, 20 May 2017 11:01:14 +0200
> > schrieb tu...@posteo.de:
> >   
> > > On 05/20 10:22, Kai Krakow wrote:  
>  [...]  
>  [...]  
>  [...]  
> > > 
> > > Hi Kai,
> > > 
> > > installing the guest additions results in a blue screen after
> > > reboot...  
> > 
> > I used the latest nightly of React OS and it worked there. I
> > actually just tried that just before answering you.
> > 
> > 
> > -- 
> > Regards,
> > Kai
> > 
> > Replies to list-only preferred.
> > 
> >   
> 
> I tried the latest release 0.4.5 ...
> OK, will try the snapshot.
> 
> How "stable" is ReactOS in general?
> "Experimental"? "Useable, but..."?, "It totally replaces
> the efforts of the other software company..."? ;) :)

You got a blue screen... Guess... ;-)

The web page says it is alpha quality software. So it is not stable.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: ReactOS on virtualbox: VERY small viewport?

2017-05-20 Thread Kai Krakow
Am Sat, 20 May 2017 11:01:14 +0200
schrieb tu...@posteo.de:

> On 05/20 10:22, Kai Krakow wrote:
> > Am Sat, 20 May 2017 09:22:12 +0200
> > schrieb tu...@posteo.de:
> >   
> > > Hi,
> > > 
> > > I need a "windows" only for the purpose of flashing th.e firmware
> > > of my NiMH-charger because the vendor forgot, that there are
> > > other OSes alive on this planet earth.
> > > 
> > > I tried ReactOS via virtualbox...and yes it "runs"...  But the
> > > given screen is /that/ tiny, that even the desktop of ReactOS
> > > does not fit.
> > > 
> > > H
> > > 
> > > I earned about some problems of the correct implementation and
> > > installation/usage of layer eight of some software/OSes and the
> > > handling of them so better to ask, what may the reason for this
> > > tiny screen...?
> > > 
> > > Virtualnbox?
> > > ReactOS?
> > > Me?  
> > 
> > Install the guest extensions, set the virtualbox window size to auto
> > adjust the guest resolution, then reboot and resize your window.
> > 
> > You may find that the taskbar is missing, try rebooting again. It
> > should appear after some seconds.
> > 
> > 
> > -- 
> > Regards,
> > Kai
> > 
> > Replies to list-only preferred.
> > 
> >   
> 
> Hi Kai,
> 
> installing the guest additions results in a blue screen after
> reboot...

I used the latest nightly of ReactOS and it worked there. I actually
tried that just before answering you.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: ReactOS on virtualbox: VERY small viewport?

2017-05-20 Thread Kai Krakow
Am Sat, 20 May 2017 09:22:12 +0200
schrieb tu...@posteo.de:

> Hi,
> 
> I need a "windows" only for the purpose of flashing th.e firmware of
> my NiMH-charger because the vendor forgot, that there are other OSes
> alive on this planet earth.
> 
> I tried ReactOS via virtualbox...and yes it "runs"...  But the given
> screen is /that/ tiny, that even the desktop of ReactOS does not fit.
> 
> H
> 
> I earned about some problems of the correct implementation and
> installation/usage of layer eight of some software/OSes and the
> handling of them so better to ask, what may the reason for this tiny
> screen...?
> 
> Virtualnbox?
> ReactOS?
> Me?

Install the Guest Additions, set the VirtualBox window size to
auto-adjust the guest resolution, then reboot and resize your window.

You may find that the taskbar is missing; try rebooting again. It
should appear after a few seconds.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Qt-4.8.7 bug

2017-05-19 Thread Kai Krakow
Am Fri, 19 May 2017 22:50:06 +0100
schrieb Peter Humphrey :

> On Friday 19 May 2017 15:15:24 Mick wrote:
> > On Friday 19 May 2017 13:43:24 Peter Humphrey wrote:  
> > > Hello list,
> > > 
> > > Today's update broke KMail. 15 dev-qt packages were upgraded from
> > > 4.8.6 to 4.8.7, and when I logged out, restarted xdm and logged
> > > in again, KMail's folder list showed all the folders in red, and
> > > the other two panes were blank. Akonadiconsole showed the LAN
> > > Mail agent offline, broken.
> > > 
> > > In case anyone else falls over this one, I've raised this bug;
> > > it's been confirmed by one other user so far:
> > > 
> > > https://bugs.gentoo.org/show_bug.cgi?id=618922
> > > 
> > > Has anyone found a fix for this?  
> > 
> > I haven't run an update yet, but thank you for bringing this to our
> > attention. I'll stay put on 4.8.6-r2, until the bug is resolved.  
> 
> Apparently, a revdep-rebuild fixes it: revdep-rebuild --
> library='libQtCore.so.4'
> 
> 71 packages! See you in the morning - I don't intend to sit and wait
> while the likes of qtwebkit and libreoffice compiling.

You could try "emerge -DNua world --changed-deps" to fix such problems.
It should work even when already upgraded.

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [OT] Tux AWOL

2017-05-19 Thread Kai Krakow
Am Fri, 19 May 2017 09:32:47 +0300
schrieb Nikos Chantziaras <rea...@gmail.com>:

> On 05/18/2017 05:06 PM, Daniel Frey wrote:
> > On 05/17/2017 03:35 PM, Kai Krakow wrote:  
> >>
> >> It also enables me to finally use UEFI and suspend to RAM again
> >> with NVIDIA proprietary without a dead framebuffer after
> >> resume. ;-)  
> > 
> > I have had this problem for years and thought it was a bad card.
> > Replaced it recently, still have the problem.  
> 
> This is usually due to CSM being enabled in the mainboard's settings.
> If you use UEFI with the EFI console kernel driver, but you still get
> this in dmesg:
> 
>   NVRM: Your system is not currently configured to drive a VGA console
>   NVRM: on the primary VGA device. The NVIDIA Linux graphics driver
>   NVRM: requires the use of a text-mode VGA console. Use of other
> console NVRM: drivers including, but not limited to, vesafb, may
> result in NVRM: corruption and stability problems, and is not
> supported.
> 
> then CSM is the reason. One of the issues is restoring the
> framebuffer after resuming.
> 
> CSM is the "Compatibility Support Module" of UEFI. When enabled, the 
> graphics card is being initialized by CSM, not by UEFI, and the
> nvidia driver doesn't fully support this.
> 
> Some mainboards allow you to disable CSM. Unfortunately, not all do.

Okay, I also switched to "ultra-fast boot mode" so this implicitly
disabled CSM for me... But I need the KMS driver module to finally get
rid of this message, if I remember correctly. It's called nvidia-modeset:

# lsmod | fgrep nvidia
nvidia_drm             34730  1
nvidia_modeset        775151  17 nvidia_drm
nvidia              11456287  555 nvidia_modeset

# dmesg | fgrep -i nvidia
[2.914981] nvidia: loading out-of-tree module taints kernel.
[2.914986] nvidia: module license 'NVIDIA' taints kernel.
[2.921352] nvidia-nvlink: Nvlink Core is being initialized, major device 
number 242
[2.921595] nvidia :01:00.0: vgaarb: changed VGA decodes: 
olddecodes=io+mem,decodes=none:owns=io+mem
[2.921663] NVRM: loading NVIDIA UNIX x86_64 Kernel Module  381.22  Thu May  
4 00:55:03 PDT 2017 (using threaded interrupts)
[2.922349] nvidia-modeset: Loading NVIDIA Kernel Mode Setting Driver for 
UNIX platforms  381.22  Thu May  4 00:21:48 PDT 2017
[2.978918] [drm] [nvidia-drm] [GPU ID 0x0100] Loading driver
[3.684589] nvidia-modeset: Allocated GPU:0 
(GPU-de8a8443-463b-6a86-b74c-cad4397a98e9) @ PCI::01:00.0


BTW: You also need a video BIOS with UEFI GOP support (or something
like that). Current cards have it; older ones may need a firmware
upgrade which can sometimes be obtained from the manufacturer. MSI has
a forum where you can request one. It may take some time, but they send
it.

For my previous nvidia card I had to get and flash such a firmware.

Some BIOS versions only allow disabling CSM when UEFI GOP support is
present.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [OT] Tux AWOL

2017-05-17 Thread Kai Krakow
Am Wed, 17 May 2017 12:14:18 -0700
schrieb Jorge Almeida :

> On Wed, May 17, 2017 at 11:01 AM, Nikos Chantziaras
>  wrote:
> > On 05/14/2017 01:47 PM, Jorge Almeida wrote:  
> >>
> >> It's the first time I hear about plymouth. Visiting
> >> https://cgit.freedesktop.org/plymouth/ I found zilch
> >> documentation.  
> >  
> 
> Actually, I replied too soon: there is a README in the tree.
> >
> > It's... complicated:
> >
> >   https://wiki.gentoo.org/wiki/Plymouth
> >
> >  
> Well, regardless of how well/badly it works, it does seem to have
> everything I don't want: hidden boot messages? logs sent to somewhere?
> No-thank-you.
> 
> (Not to mention that newer versions seem to be systemd-only, according
> to the Wiki)

Actually it's pretty much plug and play: Choose theme, enable,
reboot... The kernel configuration part should be on par with other
boot screen themers. Of course, yes: Newer versions need seat support
which OpenRC doesn't have. Mask newer versions.
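
From memory, the whole setup boils down to roughly this (the theme name
is only an example, and -R assumes dracut is used to rebuild the
initrd):

# plymouth-set-default-theme -R spinfinity

plus "splash quiet" on the kernel command line.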

I don't remember if there are themes available that can show boot
messages; themes can also be handcrafted, and a static image should be
really easy. I prefer uncluttered boot screens without boot messages,
so I never tried. I remember that plymouth was hiding itself back when
I was still working with OpenRC and a fatal error occurred (read:
booting stopped and failed), but since switching to systemd I never had
such problems, so it's no issue for me. With systemd, plymouth is
hidden when systemd falls back to emergency mode. This only happens to
me when I mess up dracut. I need an initramfs due to multi-dev btrfs.

BTW: Newer versions also seem to be KMS-only, so if your graphics
driver doesn't support KMS, plymouth won't work there anyway. For the
proprietary nvidia driver, there's a KMS module which you need to trick
into being loaded very early at boot. This is easy when integrated into
the initrd. It also enables me to finally use UEFI and suspend to RAM
again with the NVIDIA proprietary driver without a dead framebuffer
after resume. ;-)
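
The "trick" is, as far as I remember, just forcing the modules into the
initrd and enabling modesetting; with dracut that is something like the
following (adjust if you use a different initramfs generator):

In /etc/dracut.conf.d/nvidia.conf:

  force_drivers+=" nvidia nvidia_modeset nvidia_drm "

and on the kernel command line:

  nvidia-drm.modeset=1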

But I think this is also everything you don't want. I just wanted to
note the pitfalls for completeness.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [OT] Tux AWOL

2017-05-17 Thread Kai Krakow
Am Wed, 17 May 2017 19:38:41 +0300
schrieb Arthur Țițeică :

> În ziua de duminică, 14 mai 2017, la 12:50:43 EEST, Alan Mackenzie a
> scris:
> > Something strange happened when I installed the 4.11.0 sources -
> > all the options were initialised to what they were in my 4.9.16
> > running kernel. This saved me a lot of time.  
> 
> I've seen it gets the defaults from /boot/config* files if it can't
> find a local .config.
> 
> I had the opposite problem the other day... I wanted to start with a
> fresh .config and had to find out how it gets the current config
> automatically.

Run "make help", find "make defconfig" ;-)

-- 
Regards,
Kai

Replies to list-only preferred.





[gentoo-user] Re: replacement for ftp?

2017-05-15 Thread Kai Krakow
Am Mon, 15 May 2017 21:31:32 +0100
schrieb lee :

> > I'm sorry, but that's only marginally more believable than claiming
> > keyboards are too complicated for your users.  
> 
> Does it matter what you or I believe?  Some users have difficulties
> using a keyboard and/or a mouse.  I've seen that, so no, what you or I
> believe does not matter.

If this is the underlying (and perfectly legitimate) problem, you need
to deploy a solution that's easiest for your users, not for you. That
may involve a custom transfer solution they can simply drop files into.
The underlying technology is then up to you: Use whatever is
appropriate.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-05-15 Thread Kai Krakow
Am Mon, 15 May 2017 21:47:17 +0100
schrieb lee :

> > Depending on what data is transferred, you should also take into
> > account if your solution is certificated to transfer such data. E.g.
> > medical data may only be transferred through properly certificated
> > VPN appliances. Otherwise, you should fall back to sneakernet. I'm
> > not sure how that is any more secure but that's how things are.  
> 
> Interesting, who certifies such appliances?

I really never asked... ;-) Maybe I should...


> What if I, as a patient,
> do not want my data transferred that way,

See your words below: "nobody in Germany actually cares"... So you
won't be asked because it's secure by definition (as in
"certification"). ;-)

The old transport was ISDN. But that is being shut down.

Or did you direct your concern at sneakernet transmission? I doubt that
such data would even be encrypted... although it clearly should be.


> and how do I know if they
> didn't make a mistake when certifying the equipment?

That's German bureaucracy: It has the certificate stamp, so it's okay.
The technical internals do not matter: Nobody asks for that after it's
been certified.


> It's not medical data, and nobody in Germany actually cares about
> protecting peoples data anyway.  The little that is being done towards
> that is nothing but pretense.

We are servicing a medical laboratory: They take this certification
very seriously, so at least they care to fulfill the requirements.
However, we do not control that: After the initial setup they do most
configuration by themselves and we only deliver equipment now. As far
as I know, they cannot even freely choose the provider on their side of
the connection. And they are managing their internal network by
themselves, we wouldn't be easily allowed to do that.

Usually, as an IT service company, you would also sign a non-disclosure
contract when working for a company handling sensitive data. But only
a few companies seem to know that...


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-05-15 Thread Kai Krakow
Am Mon, 15 May 2017 22:14:48 +0100
schrieb lee <l...@yagibdah.de>:

> Kai Krakow <hurikha...@gmail.com> writes:
> 
> > Am Sun, 14 May 2017 01:28:55 +0100
> > schrieb lee <l...@yagibdah.de>:
> >  
> >> Kai Krakow <hurikha...@gmail.com> writes:
> >>   
>  [...]  
>  [...]  
> >>  [...]
>  [...]  
> >> 
> >> Wow, you must be living in some sort of paradise.  Here, internet
> >> is more like being cut off from the rest of the world.
> >> 
> >> But then, there's a manufacturer that makes incredibly slow USB
> >> sticks which I won't buy anymore ...  
> >
> > Okay, it really depends. I shouldn't say "most"... ;-)  
> 
> Intenso --- pretty cheap, but awfully slow; however, it does
> work. Better don't buy anything they make unless your time is entirely
> worthless to you.
> 
> > I compared my really crappy (but most reliable yet) old USB stick
> > to my internet connection. My USB stick doesn't do 48 MByte/s, more
> > like 5-10. And don't even ask when writing data.  
> 
> 5--10MB/s?  How do you get that much?

For reading? It can work, though it will eventually drop to 2 MB/s
after a short time. For writing: It drops well below 1 MB/s after a
short burst.

> > Even my rusty hard disk (read: not SSD) has a hard time writing
> > away a big download with constantly high download rate.  
> 
> It must be really old then, about 20 years.

No, it's just that other IO is also ongoing, and filesystem internals
have some write overhead and involve head movement, which easily keeps
the drive from its theoretical ideal rate of 120-150 MB/s. Short
bursts: No problem. Long-running writes are more like 50-70 MB/s, which
is pretty close to the download rate.

It's also what I see in gigabit networks: Copy speed could be somewhere
between 100 and 120 MB/s, but the local drive seems to easily limit
this to 70-80 MB/s.

My current setup allows constant writing of around 270-280 MB/s
according to:

# dd bs=1M if=/dev/urandom of=test.dat
13128171520 bytes (13 GB, 12 GiB) copied, 48,0887 s, 273 MB/s

So it's not that bad... ;-)

But dd also runs at 100% CPU during that time, so I guess the write
rate could be even higher. I sometimes see a combined rate of up to
500 MB/s, though I'm not sure if this is the actual transfer rate or
just the queued IO rate. Also, it is pretty close to SATA bus
saturation. I'm not sure if my chipset would deliver this rate per SATA
connection or as a combined rate.


> > But I guess that a good internet connection should be at least 50
> > MBit these days.  
> 
> I'd say 100, but see above.  The advantage is that you have sufficient
> bandwidth to do several things at the same time.  I've never seen fast
> internet.

My provider easily delivers such rates, given the remote side is fast
enough. Most downloads saturate at around 15-20 MB/s. Only a few
servers can deliver more. Probably not only a limit of the servers, but
also of the peering network connections.


> > And most USB sticks are really crappy at writing. That also counts
> > when you do not transfer the file via network. Of course, most DSL
> > connections have crappy upload speed, too. Only lately, Telekom
> > offers 40 MBit upload connections in Germany.  
> 
> They offer 384kbit/s downstream and deliver 365.  It's almost
> symmetrical, yet almost unusable.

Sounds crappy... No alternative providers there? The problem is almost
always a combination of multiple factors: a long cable run limiting DSL
to a lower physical bandwidth, and usually an undersized traffic
concentrator in that area: You should see very different transfer rates
at different times of the day.


> They also offer 50Mbit and deliver between 2 and 12, and upstream is
> awfully low.  Tell them you could pay for 16 instead of 50 because you
> don't get even that much, and they will tell you that you would get
> even less than you do now.  That is unacceptable.

Yes... They would then downgrade you to a lower-performing DSL
technology. It's all fine for them because you only pay for "up to"
that bandwidth.


> And try to get a static IP so you could really use your connection ...

No problems so far, at least for business plans.


> > I'm currently on a 400/25 MBit link and can saturate the link only
> > with proper servers like the Steam network which can deliver 48
> > MByte/s.  
> 
> You must be sitting in a data center and be very lucky to have that.

Cable network... In a smallish city.

The next upgrade is announced to be 1 GBit in around 2018-2019...
Something that's already almost standard in other European countries.
Well... "standard" in terms of availability... Not actual usage. I
think the prices will be pretty high. But that's okay: If you need it,
you should be willing to pay for it. It won't help to have such
bandwidth without the provider being able to afford the needed
infrastructure. It's already over-provisioned too much, as you already
found out.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-05-15 Thread Kai Krakow
Am Mon, 15 May 2017 08:53:15 +0100
schrieb Mick <michaelkintz...@gmail.com>:

> On Sunday 14 May 2017 11:35:29 Kai Krakow wrote:
> > Am Sun, 14 May 2017 09:52:41 +0100
> > 
> > schrieb Mick <michaelkintz...@gmail.com>:  
> > > On Saturday 13 May 2017 23:58:17 R0b0t1 wrote:  
>  [...]  
> > > 
> > > OpenVPN is not the most efficient VPN implementation for
> > > connections to a server because it is not multithreaded  
> > 
> > Probably true but it works well here for connections of up to 100
> > MBit.  
> 
> It can work well for well above that throughput, but the limitation
> is the tun/tap mechanism and the CPU of the device/PC it is running
> on.

I think the most important thing is to use UDP transport and not TCP,
because the tunnel protocol doesn't need to ensure packet delivery.
This is done by the protocols running inside the tunnel. Also, we
usually enable compression, at least for low-bandwidth uplinks (which
are becoming rare these days, fortunately).
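
In the server config that boils down to just a few lines (a minimal
excerpt, not a complete configuration; the compression directive
depends on your OpenVPN version):

  proto udp
  dev tun
  comp-lzo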

To compensate for the UDP protocol, we usually also give the tunneling
packets higher priority at the edge router to reduce drop rate under
uplink pressure.

This works well for most dial-up links we encounter (currently up to
100 MBit). I probably won't consider it for higher-throughput links
because I fear the appliance CPU may become a bottleneck. But so far,
no problems, not even with CPU usage.


> > > and also because unlike
> > > IKE/IPSec it operates in userspace, not in kernelspace.  
> > 
> > IPsec also doesn't work without help from userspace processes.   
> 
> Sure, but this is only for managing the (re)keying process, which BTW
> takes longer with IKE than with OpenVPN (we're talking about
> milliseconds here). Once the keys have been agreed and set up between
> peers the rest happens exceedingly fast in kernelspace, managed as a
> network layer interface (L3).  I recall seeing IPSec tunnels running
> 10 times faster than OpenVPN, being processed even faster than VLAN
> trunking, but this is very much dependent on the resources of the
> device running the tunnel.

I use IPsec only between two endpoints directly connected to the
internet (without NAT) and with static IPs. And only then was it really
reliable, and it performed well. No question about it...

And I like the fact that I don't need an intermediate transfer net as
opposed to OpenVPN.

OTOH, only OpenVPN has been reliable enough (and very reliable so far)
when one or both sides were NATed with dynamic IP.

And we had one customer running two networks across four sites, and
their IPsec solution never ran reliably. And this was with
professional, expensive firewall appliances. We replaced it with
site-to-site OpenVPN and it now runs faster and without any
disconnects. All sites use static IPs, so that was never the problem. I
don't know what caused this. The old appliances were mostly black
boxes, and at least one had faulty hardware (which explained the
problems at one site).


> > But I
> > see what you mean: With OpenVPN, traffic bounces between kernel and
> > userspace multiple times before leaving the machine. But I don't
> > really see that as a problem for the scenario OpenVPN is used in:
> > It best fits with dial-up connections which are really not gigabit
> > yet. For this, performance overhead is almost zero.  
> 
> Yes, at dial-up throughput even a smart phone has enough resources to
> manage OpenVPN without it becoming a constraint.
> 
> 
> > IPsec can be a big pita if NAT is involved. For Windows client, L2TP
> > may be a good alternative.  
> 
> IKE/IPSec uses NAT-Traversal (NAT-T) by encapsulating ESP packets
> within UDP over port 4500.  This will allow clients to initiate a
> connection with the server over port 500 and then switch to 4500 as
> part of NAT-T detection. Trivia:  many routers/VPN concentrators use
> Vendor ID strings to determine if the remote peer can implement NAT-T
> among other attributes to shorten this NAT-T detection process.
> 
> Of course the server will have to be accessible over port 500 for the
> clients to be able to get to it, but this is a port forwarding/DMZ
> network configuration exercise at the server end.

Oh wait... So I need to forward ports 500 and 4500 so NAT-T works
properly? Even when both sides are NATed? I never got that to work
reliably with one side NATed, and it never worked with both sides
NATed. And my research in support forums always said: That does not
work...


>  [...]  
> > > 
> > > If the users are helpless then you may be better configuring a VPN
> > > tunnel between their Internet gateway and the server, so they can
> > > access the server as if it were a local share, or using the built
> > > in ftp client that MSWindo

[gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 01:25:24 +0100
schrieb lee :

> "Poison BL."  writes:
> 
> > On Sat, Apr 29, 2017 at 9:11 PM, lee  wrote:  
> >>
> >> "Poison BL."  writes:  
>  [...]  
> > trust  
>  [...]  
> >>
> >> Why not?  (12GB are nowhere close to half a petabyte ...)  
> >
> > Ah... I completely misread that "or over 50k files in 12GB" as 50k
> > files *at* 12GB each... which works out to 0.6 PB, incidentally.
> >  
> >> The data would come in from suppliers.  There isn't really anything
> >> going on atm but fetching data once a month which can be like
> >> 100MB or 12GB or more.  That's because ppl don't use ftp ...  
> >
> > Really, if you're pulling it in from third party suppliers, you
> > tend to be tied to what they offer as a method of pulling it from
> > them (or them pushing it out to you), unless you're in the unique
> > position to dictate the decision for them.  
> 
> They need to use ftp to deliver the data, we need to use ftp to get
> the data.  I don't want that any other way.
> 
> The problem is that the ones supposed to deliver data are incompetent
> and don't want to use ftp because it's too complicated.  So what's the
> better solution?

Use an edge router appliance with proper VPN support. You are from
Germany? I can recommend Securepoint appliances. You pay for the
hardware and support, and they help you set everything up. You can
also find a distributor who can install this for you. Securepoint
works with competent partners all around Germany.

There are also other alternatives like Watchguard (but their OpenVPN
support is not that good), and a lot of free router/firewall software
you can deploy to semi-professional equipment by replacing the
firmware. But at least with the latter option, you're mostly on your
own and need to invest a lot of effort to make it work properly and
securely.

Depending on what data is transferred, you should also take into
account whether your solution is certified to transfer such data. E.g.
medical data may only be transferred through properly certified VPN
appliances. Otherwise, you should fall back to sneakernet. I'm not sure
how that is any more secure, but that's how things are.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 01:28:55 +0100
schrieb lee <l...@yagibdah.de>:

> Kai Krakow <hurikha...@gmail.com> writes:
> 
> > Am Sat, 29 Apr 2017 22:02:51 -0400
> > schrieb "Walter Dnes" <waltd...@waltdnes.org>:
> >  
> >>   Then there's always "sneakernet".  To quote Andrew Tanenbaum from
> >> 1981
> >>   
>  [...]  
> >
> > Hehe, with the improvements in internet connections nowadays, we
> > almost stopped transferring backups via sneakernet. Calculating the
> > transfer speed of the internet connection vs. the speed calculating
> > miles per hour, internet almost always won lately. :-)
> >
> > Most internet connections are faster than even USB sticks these
> > days.  
> 
> Wow, you must be living in some sort of paradise.  Here, internet is
> more like being cut off from the rest of the world.
> 
> But then, there's a manufacturer that makes incredibly slow USB sticks
> which I won't buy anymore ...

Okay, it really depends. I shouldn't say "most"... ;-)

I compared my really crappy (but most reliable yet) old USB stick to my
internet connection. My USB stick doesn't do 48 MByte/s, more like 5-10.
And don't even ask when writing data.

Even my rusty hard disk (read: not SSD) has a hard time writing away a
big download at a constantly high download rate.

But I guess that a good internet connection should be at least 50 MBit
these days.

And most USB sticks are really crappy at writing. That also counts when
you do not transfer the file via network. Of course, most DSL
connections have crappy upload speed, too. Only lately, Telekom offers
40 MBit upload connections in Germany.

I'm currently on a 400/25 MBit link and can saturate the link only with
proper servers like the Steam network which can deliver 48 MByte/s.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 02:18:56 +0100
schrieb lee <l...@yagibdah.de>:

> Kai Krakow <hurikha...@gmail.com> writes:
> 
> > Am Sat, 29 Apr 2017 20:02:57 +0100
> > schrieb lee <l...@yagibdah.de>:
> >  
> >> Alan McKinnon <alan.mckin...@gmail.com> writes:
> >>   
>  [...]  
>  [...]  
>  [...]  
> >> 
> >> The intended users are incompetent, hence it is too difficult to
> >> use ...  
> >
> > If you incompetent users are using Windows: Have you ever tried
> > entering ftp://u...@yoursite.tld in the explorer directory input
> > bar?  
> 
> I tried at work and it said something like that the service cannot be
> accessed.
> 
> 
> > [...]
> > Debian is not the king to rule the internet. You shouldn't care when
> > they shut down their FTP services. It doesn't matter to the rest of
> > the world using the internet.  
> 
> Who can say what their influence actually is?  Imagine Debian going
> away, and all the distributions depending on them as well because they
> loose their packet sources, then what remains?  It is already rather
> difficult to find a usable distribution, and what might the effect on
> upstream sources be.

The difference is: They only shut down a service. They are not
vanishing from the internet. You cannot conclude from that that:

(a) they are shutting down all their services
(b) ftp is deprecated and nobody should use it any longer

And I didn't write that you shouldn't care if Debian vanishes. I only
said it shouldn't mean anything to you if they shut down their FTP
services for probably good reasons. It's not the end of life, the
universe, and everything. And you can keep your towel.

What I wanted to say: Debian is not so important that everyone will
now shut down FTP and remove FTP support from client software. That
simply won't happen. That is not what it means when Debian shuts down a
service.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [OT] Tux AWOL

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 10:11:31 +0100
schrieb Jorge Almeida <jjalme...@gmail.com>:

> On Sun, May 14, 2017 at 9:31 AM, Kai Krakow <hurikha...@gmail.com>
> wrote:
> > Am Sun, 14 May 2017 08:32:46 +0100
> > schrieb Jorge Almeida <jjalme...@gmail.com>:  
> 
>  [...]  
> 
> 
> >> $ zgrep -i logo /proc/config.gz
> >> CONFIG_LOGO=y
> >> # CONFIG_LOGO_LINUX_MONO is not set
> >> # CONFIG_LOGO_LINUX_VGA16 is not set
> >> CONFIG_LOGO_LINUX_CLUT224=y
> >> $  
> >
> > Use
> >
> > # vimdiff oldlinux/.config newlinux/.config
> >  
> 
> Done that. There are only a few differences and none seems relevant.
> >
> > I think there were changes to the framebuffer devices. You may need
> > to switch to a different one.
> >  
> I use the Intel integrated graphics, I didn't do anything special
> about framebuffer. The current one works smoothly regarding KMS and
> I'm happy with it (I do use VTs).
> 
> $ cat /proc/fb
> 0 inteldrmfb
> 
> 
> I suppose it's goodbye to Tux, for now. I was hoping someone else
> would be using the same kernel...

You could set up plymouth, and I'm pretty sure there should be a
fullscreen Tux theme somewhere... ;-)


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 09:52:41 +0100
schrieb Mick :

> On Saturday 13 May 2017 23:58:17 R0b0t1 wrote:
> > I had some problems setting up OpenVPN that were solved by using
> > per-client public keys. That seems to be the best supported
> > configuration (as well as the most secure). Windows-side using
> > OpenVPN-GUI is very easy.
> > 
> > OpenVPN tends to have poor bandwidth due to overhead, but that may
> > be in large part due to my connection.  
> 
> OpenVPN is not the most efficient VPN implementation for connections
> to a server because it is not multithreaded

Probably true but it works well here for connections of up to 100 MBit.

> and also because unlike
> IKE/IPSec it operates in userspace, not in kernelspace.

IPsec also doesn't work without help from userspace processes. But I
see what you mean: With OpenVPN, traffic bounces between kernel and
userspace multiple times before leaving the machine. But I don't really
see that as a problem for the scenario OpenVPN is used in: It best fits
with dial-up connections which are really not gigabit yet. For this,
performance overhead is almost zero.


>  If you have
> more than one client connecting to the server at the same time you
> will need to set up multiple instances with different ports or
> different protocols.

That is not true: We connect many clients to the same server port
without problems, each with their own certificate.

>  With IKE/IPSec you don't.  MSWindows PCs come
> with IKEv2 natively so they can be configured to use it without
> installing additional client applications.

IPsec can be a big pita if NAT is involved. For Windows clients, L2TP
may be a good alternative.

>  [...]  
> > > 
> > > The ftp server already doesn't allow unencrypted connections.
> > > 
> > > Now try to explain to ppl for whom Filezilla is too complicated
> > > how to set up a VPN connection and how to secure their LAN once
> > > they create the connection (if we could ever get that to work).
> > > I haven't been able to figure that out myself, and that is one of
> > > the main reasons why I do not have a VPN connection but use ssh
> > > instead.  The only disadvantage is that I can't do RDP sessions
> > > with that ---  I probably could and just don't know how to ---
> > > but things might be a lot easier if wireguard works.  
> 
> If the users are helpless then you may be better configuring a VPN
> tunnel between their Internet gateway and the server, so they can
> access the server as if it were a local share, or using the built in
> ftp client that MSWindows comes with.  SMB will work securely in this
> case too.

This is what I would recommend, too. Put the VPN endpoints on the
network edges and no client needs to worry: They just use the
connection.

>  [...]  
> > > 
> > > I'm finding it a horrible nightmare, see above.  It is the most
> > > difficult thing you could come up with.  I haven't found any good
> > > documentation that explains it, the different types of it, how it
> > > works, what to use (apparently there are many different ways or
> > > something, some of which require a static IP on both ends, and
> > > they even give you different disadvantages in performance ...),
> > > how to protect the participants and all the complicated stuff
> > > involved.  So far, I've managed to stay away from it, and I
> > > wouldn't know where to start.  Of course, there is some
> > > documentation, but it is all confusing and no good.  
> > 
> > Feel free to start a thread on it. As above, I recommend
> > one-key-per-client and running your own CA.  

I wouldn't recommend running your own CA because you will have to
deploy a trust relationship with every client.

> For secure connections you will have to set up CA and TLS keys with
> any option.  Even ftps - unless the ftp server is already configured
> with its TLS certificates.

Or you use certificates from LetsEncrypt. Their CA is already trusted
on most machines by default.
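
Getting one is basically a one-liner nowadays, e.g. with certbot (the
hostname is made up, and port 80 must be free for the standalone
challenge):

# certbot certonly --standalone -d ftp.example.com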


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 02:48:46 +0100
schrieb lee <l...@yagibdah.de>:

> Kai Krakow <hurikha...@gmail.com> writes:
> 
> > Am Sat, 29 Apr 2017 20:30:03 +0100
> > schrieb lee <l...@yagibdah.de>:
> >  
> >> Danny YUE <sheepd...@gmail.com> writes:
> >>   
>  [...]  
>  [...]  
>  [...]  
> >> 
> >> Doesn't that require ssh access?  And how do you explain that to
> >> ppl finding it too difficult to use Filezilla?  Is it available for
> >> Windoze?  
> >
> > Both, sshfs and scp, require a full shell (that may be restricted
> > but that involves configuration overhead on the server side).  
> 
> I wouldn't want them to have that.

And I can understand this...

> > You can use sftp (FTP wrapped into SSH), which is built into SSH. It
> > has native support in many Windows clients (most implementations use
> > PuTTY in the background). It also has the advantage that you can
> > easily restrict users on your system to SFTP-only with an easy
> > server-side configuration.  
> 
> From what I've been reading, sftp is deprecated and has been replaced
> by ftp with TLS.

From what I can tell, you're mixing up sftp and ftps. sftp is ssh+ftp,
and ftps is ftp with ssl. The latter is probably deprecated in favor of
ftp with tls. TLS supports server name indication (to present the
correct server certificate) and it supports handshaking, so the same
port can be used for secure and insecure connections.

Apparently, many sites on the internet also mix up ftps and sftp,
treating both as FTP with SSL. But that's not true. I think that comes
from the fact that "secure ftp" is often a synonym for "ssl encryption"
as it is with "secure http". But that doesn't mean the acronym is
"sftp", just as it is not "shttp".
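
And for completeness: the SFTP-only restriction mentioned above is only
a handful of sshd_config lines (group name and chroot path are just
examples; the chroot target must be root-owned):

  Match Group sftponly
      ChrootDirectory /srv/sftp/%u
      ForceCommand internal-sftp
      AllowTcpForwarding no
      X11Forwarding no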

>  [...]  
> >> 
> >> Does that work well, reliably and securely over internet
> >> connections?  
> >
> > It supports encryption as transport security, and it supports
> > kerberos for secure authentication, the latter is not easy to setup
> > in Linux, but it should work with Windows clients out-of-the-box.
> >
> > But samba is a pretty complex daemon and thus offers a big attack
> > surface for hackers and bots. I'm not sure you want to expose this
> > to the internet without some sort of firewall in place to restrict
> > access to specific clients - and that probably wouldn't work for
> > your scenario.  
> 
> At least it's a possibility.  I don't even know if they have static
> IPs, though.

Modern CIFS implementations can be forced to encrypt the transport
layer and only accept Kerberos-authenticated clients. It should then be
safe to use if properly firewalled. At least "CIFS" (which is what
samba implements) afaik stands for "Common Internet File System" - that
should at least carry a minimal meaning of "intended to be used over
internet connections". Of course this really doesn't say anything about
transport security. Be sure to apply some, and you should be good to
go.
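
With current samba versions, forcing that is a small smb.conf change (a
sketch only; the exact parameter names may differ between samba
versions):

  [global]
      smb encrypt = required
      server signing = mandatory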

> > But you could offer access via OpenVPN and tunnel samba through
> > that.  
> 
> I haven't been able yet to figure out what implications creating a VPN
> has.  I understand it's supposed to connect networks through a secured
> tunnel, but what kind of access to the LAN does someone get who
> connects via VPN?  Besides, VPN is extremely complicated and
> difficult to set up.  I consider it an awful nightmare.

You first need to understand how tunnel devices work. Then it becomes
very easy to set up. Access to the LAN can be restricted by firewall
rules. As long as you don't set up routes from the transfer network
(where the tunnel is located) to your LAN, there won't be any access.
And then there are still firewall rules once you do set up routing.

> Wireguard seems a lot easier.

I didn't know that, I will look into it.

> > By that time, you can as easily offer FTP, too, through the tunnel
> > only, as there should be no more security concerns now: It's
> > encrypted now.  
> 
> The ftp server already doesn't allow unencrypted connections.
> 
> Now try to explain to ppl for whom Filezilla is too complicated how to
> set up a VPN connection and how to secure their LAN once they create
> the connection (if we could ever get that to work).  I haven't been
> able to figure that out myself, and that is one of the main reasons
> why I do not have a VPN connection but use ssh instead.  The only
> disadvantage is that I can't do RDP sessions with that ---  I
> probably could and just don't know how to --- but things might be a
> lot easier if wireguard works.

You can always deploy VPN at the edge of the network, so your clients
won't need to bother with the details.

[gentoo-user] Re: replacement for ftp?

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 02:59:41 +0100
schrieb lee <l...@yagibdah.de>:

> Kai Krakow <hurikha...@gmail.com> writes:
> 
> > Am Sat, 29 Apr 2017 20:38:24 +0100
> > schrieb lee <l...@yagibdah.de>:
> >  
> >> Kai Krakow <hurikha...@gmail.com> writes:
> >>   
>  [...]  
>  [...]  
>  [...]  
> >> 
> >> Yes, I'm using it mostly for backups/copies.
> >> 
> >> The problem is that ftp is ideal for the purpose, yet users find it
> >> too difficult to use, and nobody uses it.  So there must be
> >> something else as good or better which is easier to use and which
> >> ppl do use.  
> >
> > Well, I don't see how FTP is declining, except that it is
> > unencrypted. You can still use FTP with TLS handshaking, most sites
> > should support it these days but almost none forces correct
> > certificates because it is usually implemented wrong on the server
> > side (by giving you ftp.yourdomain.tld as the hostname instead of
> > ftp.hostingprovider.tld which the TLS cert has been issued for).
> > That makes it rather pointless to use. In linux, lftp is one of the
> > few FTP clients supporting TLS out-of-the-box by default, plus it
> > forces correct certificates.  
> 
> These certificates are a very stupid thing.  They are utterly
> complicated, you have to self-sign them which produces warnings, and
> they require to have the host name within them as if the host wasn't
> known by several different names.

Use LetsEncrypt then; you can add any number of host names you want, as
far as I know. But you need a temporary web server to prove ownership
of the server/hostname so the certificates can be signed.
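
With certbot that is roughly (hostnames are placeholders; --standalone
spins up exactly that temporary web server for the ownership check):

# certbot certonly --standalone -d ftp.example.org -d www.example.org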

> > But I found FTP being extra slow on small files, that's why I
> > suggested to use rsync instead. That means, where you could use
> > sftp (ssh+ftp), you can usually also use ssh+rsync which is
> > faster.  
> 
> That requires shell access.
> 
> What do you consider "small files"?  I haven't observed a slowdown
> like that, but I haven't been looking for it, either.

Transfer a batch of smallish files (like web assets, php files) to a
server with FTP, then try rsync. You should see a very big difference
in the time needed. That's due to the per-file connection overhead of
FTP.
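
A quick way to see it for yourself (host and paths are placeholders;
the first command pushes a tree file by file over FTP, the second
pushes the same tree via rsync over ssh):

$ lftp -u user -e "mirror -R ./site /htdocs; quit" ftp.example.org
$ rsync -az ./site/ user@example.org:/htdocs/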

> > There's also the mirror command in lftp, which can be pretty fast,
> > too, on incremental updates but still much slower than rsync.
> >  
> >> I don't see how they would transfer files without ftp when ftp is
> >> the ideal solution.  
> >
> > You simply don't. FTP is still there and used. If you see something
> > like "sftp" (ssh+ftp, not ftp+ssl which I would refer to as ftps),
> > this is usually only ftp wrapped into ssh for security reasons. It
> > just using ftp through a tunnel, but to the core it's the ftp
> > protocol. In the end, it's not much different to scp, as ftp is
> > really just only a special shell with some special commands to
> > setup a file transfer channel that's not prone to interact with
> > terminal escape sequences in whatever way those may be implemented,
> > something that e.g. rzsz needs to work around.
> >
> > In the early BBS days, where you couldn't establish a second
> > transfer channel like FTP does it using TCP, you had to send
> > special escape sequences to put the terminal into file transfer
> > mode, and then send the file. By that time, you used rzsz from the
> > remote shell to initiate a file transfer. This is more the idea of
> > how scp implements a file transfer behind the scenes.  
> 
> IIRC, I used xmodem or something like that back then, and rzsz never
> worked.

Yes, or xmodem... ;-)

> > FTP also added some nice features like site-to-site transfers where
> > the data endpoints both are on remote sites, and your local site
> > only is the control channel. This directly transfers data from one
> > remote site to another without going through your local connection
> > (which may be slow due to the dial-up nature of most customer
> > internet connections).  
> 
> Interesting, I didn't know that.  How do you do that?

You need a client that supports this. I remember LeechFTP for Windows
supported it back then. The client needs to log in to both FTP servers
and then shuffle correct PORT commands between them, so that the data
connection is directly established between both.
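
Roughly, the control-channel exchange looks like this (addresses made
up, details glossed over):

  to server A:  PASV                    -> A replies 227 (a1,a2,a3,a4,p1,p2)
  to server B:  PORT a1,a2,a3,a4,p1,p2  -> B will connect to A's data port
  to server A:  STOR file.bin           -> A waits on its passive port
  to server B:  RETR file.bin           -> B connects to A and sends the file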

That feature is also the reason why this looks so overly complicated
and incompatible with firewalls. When FTP was designed, there was a
real need to directly transfer files between servers, as your own
connection was usually a slow modem link below 2400 baud, or some other
slow connection. Or ev

[gentoo-user] Re: [OT] Tux AWOL

2017-05-14 Thread Kai Krakow
Am Sun, 14 May 2017 08:32:46 +0100
schrieb Jorge Almeida :

> On Sun, May 14, 2017 at 4:30 AM, Stroller
>  wrote:
> >  
> >> On 13 May 2017, at 09:46, Jorge Almeida 
> >> wrote:
> >>
> >> In case someone is using kernel 4.11: I tried it and everything
> >> seems fine, except that the linux logo on the boot screen (i.e.
> >> tty1) is gone. It was there before (with 4.10.9), and I used make
> >> oldconfig.  
> >
> > Using `make oldconfig` isn't enough to diagnose - you need to
> > establish whether the option is enabled.  
> 
> I use make menuconfig after oldconfig. I did check the usual suspects,
> but maybe something needs to
> be explicitly enabled that was formerly implicit.
> 
> >
> > On my system:
> >
> > $ uname -r
> > 4.9.4-gentoo
> > $ zgrep -i logo /proc/config.gz
> > CONFIG_LOGO=y
> > # CONFIG_LOGO_LINUX_MONO is not set
> > # CONFIG_LOGO_LINUX_VGA16 is not set
> > CONFIG_LOGO_LINUX_CLUT224=y
> > $
> >  
> 
> >  
> $ zgrep -i logo /proc/config.gz
> CONFIG_LOGO=y
> # CONFIG_LOGO_LINUX_MONO is not set
> # CONFIG_LOGO_LINUX_VGA16 is not set
> CONFIG_LOGO_LINUX_CLUT224=y
> $

Use

# vimdiff oldlinux/.config newlinux/.config

to edit both files side by side. It will show you the differences pretty
easily.
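
The kernel tree also ships a small helper for exactly this; roughly
(paths are just examples):

# cd /usr/src/linux
# ./scripts/diffconfig /path/to/old/.config .config | grep -i -e logo -e fb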

I think there were changes to the framebuffer devices. You may need to
switch to a different one.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: GCC 5.4.0

2017-05-13 Thread Kai Krakow
Am Sat, 22 Apr 2017 23:13:45 -0700
schrieb Daniel Frey :

> On 04/22/2017 10:45 PM, Philip Webb wrote:
> > I've been following the thread re GCC 5.4.0 & after 'eix-sync'
> > installed it. There's a news item warning that there's a new ABI
> > & it mb necessary to run 'revdep-rebuild' if it fails with a
> > linking error.
> > 
> > The first pkg I tried to compile with 5.4.0 indeed failed at that
> > point, so I followed the advice & ran
> > 'revdep-rebuild --library 'llibstdc++.so.6' -- --exclude gcc'.
> > It wanted to rebuild  223  pkgs & stalled with an unfound ebuild.
> > 
> > I went back to GCC 4.9.3 & the pkg merged without any problem.
> > 
> > What are other users' experiences using GCC 5.4.0 ?
> >   
> 
> I'm currently rebuilding 304 packages. However, last time I updated
> major versions of gcc I had weird issues, an `emerge -e world` fixed
> that. Some packages have already been built with the new gcc version,
> so I plan to exclude those from --emptytree.

You can try "emerge -DNua world --changed-deps"

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: FreeCAD permission problems

2017-05-06 Thread Kai Krakow
Am Sat, 6 May 2017 16:23:19 +0200
schrieb tu...@posteo.de:

> It's there
> -rw-r--r-- 1 root root 141 May  6 10:37 /etc/env.d/000opengl
> 
> and its contents is:
> # Configuration file for eselect
> # This file has been automatically generated.
> LDPATH="/usr/lib64/opengl/nvidia/lib"
> OPENGL_PROFILE="nvidia"
> 
> Contents of ld.so.conf:
> 
> # ld.so.conf autogenerated by env-update; make all changes to
> # contents of /etc/env.d directory
> /usr/lib64/opengl/nvidia/lib
> /lib64
> /usr/lib64
> /usr/local/lib64
> /lib
> /usr/lib
> /usr/local/lib
> include ld.so.conf.d/*.conf
> /usr/lib64/OpenCL/vendors/nvidia
> /usr/lib/llvm/4/lib64
> /usr/lib64/itcl4.0.3/
> /usr/lib64/itk4.0.1/
> /usr/lib64/qt4
> /opt/nvidia-cg-toolkit/lib64
> /usr/games/lib64
> /usr/games/lib
> /opt/cuda/lib64
> /opt/cuda/lib
> /opt/cuda/nvvm/lib64
> /usr/lib64/fltk
> /usr/lib64/libgig/
> 
> 
> 
> No, no ACLs here:
> 
> ls -l /dev/input/*   (excerpt)
> 
> crw-rw 1 root input 13, 64 May  6 12:11 /dev/input/event0
> crw-rw 1 root input 13, 65 May  6 12:11 /dev/input/event1
> 
> 
> crw-rw 1 root video 195,   0 May  6 12:11 /dev/nvidia0
> crw-rw 1 root video 195,   1 May  6 12:11 /dev/nvidia1
> crw-rw 1 root video 195, 255 May  6 12:11 /dev/nvidiactl
> crw-rw-rw- 1 root root  195, 254 May  6 12:11 /dev/nvidia-modeset
> crw-rw-rw- 1 root root  246,   0 May  6 12:20 /dev/nvidia-uvm
> crw-rw-rw- 1 root root  246,   1 May  6 12:20 /dev/nvidia-uvm-tools
> 
> 
> I have two nvidia-cards in my PC. One (the slower,older) is for
> everytyhing except rendering, the newer and faster one is for
> rendering except anything else.
> 
> The above shows both permissions:
> root:root and root:portage...
> 
> 
> Video-group settings are ok it seems:
> NVreg_DeviceFileGID=27
> 
> 27(video)

Okay, this looks all good.

> (as user)
> glxgears -info:
> Running synchronized to the vertical refresh.  The framerate should be
> approximately the same as the monitor refresh rate.
> GL_RENDERER   = GeForce GT 430/PCIe/SSE2  <<<= this is
> the older, slower graphics card!

Is this what you expected?

I'm not sure how to handle multiple nvidia cards properly and assign
them to different tasks. My best guess is using nvidia-settings.

I guess one GPU is used for X11 and the other is not. So if you want an
application to use the idle GPU, that GPU may not even be initialized
yet.

I think this is where persistenced comes in:
https://docs.nvidia.com/deploy/driver-persistence/
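
If that's what happens here, enabling persistence mode might keep the
second GPU initialized; from memory, something like:

# nvidia-smi -pm 1

or, for the second GPU only:

# nvidia-smi -i 1 -pm 1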

> GL_VERSION= 4.5.0 NVIDIA 381.09
> GL_VENDOR = NVIDIA Corporation
> GL_EXTENSIONS = GL_AMD_multi_draw_indirect GL_ARB_arrays_of_arrays
[...snip...]

I guess this is all not Gentoo-related. Your graphics stack looks
correct. I guess that FreeCAD chokes on your special setup with two
GPUs. You may want to contact their support forum. Everything related
to the basic configuration looks fine.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: FreeCAD permission problems

2017-05-06 Thread Kai Krakow
Am Sat, 6 May 2017 14:42:59 +0200
schrieb tu...@posteo.de:

> On 05/06 02:16, Kai Krakow wrote:
> > Am Sat, 6 May 2017 12:55:24 +0200
> > schrieb tu...@posteo.de:
> >   
> > > On 05/06 12:28, Kai Krakow wrote:  
>  [...]  
>  [...]  
> > >  [...]  
> > >  [...]
>  [...]  
> > >  [...]
>  [...]  
> > >  [...]
>  [...]  
> > >  [...]
>  [...]  
> > >  [...]  
> > >  [...]
>  [...]  
>  [...]  
>  [...]  
>  [...]  
> > > 
> > > Hi,
> > > 
> > > ...it runs now at least for root (called as user it crashes
> > > still).
> > > 
> > > I did the following:
> > > 
> > > 
> > > mv /usr/lib64/libGL.so  /usr/lib64/off.libGL.so 
> > > 
> > > for all libGL.so* in /usr/lib64/libGL.so*  
> > 
> > You shouldn't shuffle those files around. They are controlled by the
> > package manager.
> > 
> > I think it's a bug of the software that it overwrites ld paths.
> > With a Gentoo standard configuration and eselect opengl switched to
> > nvidia, every software should find and load the nvidia opengl stuff
> > first.
> > 
> > Could you show the output of
> > 
> > # lddtree $(which FreeCAD)
> > 
> > E.g., lddtree $(which kwin_x11) shows a line for me:
> > 
> > libGL.so.1 => /usr/lib64/opengl/nvidia/lib/libGL.so.1
> > 
> > which clearly says it's linking libGL.so.1 from nvidia first.
> > 
> > If a libGL line is missing for FreeCAD, it is dynamically loaded by
> > the application itself. Then it's a FreeCAD bug that should be
> > fixed.
> > 
> > If it's loading from /usr/lib64/libGL* for you, then some paths and
> > configs are borked in your system.
> > 
> >   
> > > Addtionally I added 06nvidia to /etc/ld.so.config.d/. with this
> > > contents:
> > > /usr/lib64/opengl/nvidia/lib
> > > and did a ldconfig afterwards and reboot to release any
> > > filehandle.  
> > 
> > I wonder why these paths are missing for you... My ld.so.conf has
> > nvidia paths right in the beginning (first two lines). It's
> > actually made from /etc/env.d/000opengl. There's nothing nvidia
> > specific in the .d directory.
> > 
> >   
> > > One question remains:
> > > It works for root but not for any other user.
> > > I (as user) am in the video group.
> > > 
> > > I checked the directory/file permissions of opencascade and they
> > > seem to be ok.  
> > 
> > I don't think that modern kernels and desktop managers still use the
> > video group. It should be handled by ACLs. Please have a look at the
> > ACLs of the device nodes.
> > 
> > It all depends on your login manager and pam configuration. You
> > should check that if things don't work right. If you're using
> > systemd, you are using systemd-logind, otherwise you're probably
> > using consolekit.
> > 
> > If you're not using either of those, the system would fall back to
> > standard unix group permissions. But I'm not sure if this works
> > correctly if you didn't configure the whole chain to work that way.
> > 
> >   
> > > I straced FreeCAD...but...I fear not to see anything suspicious
> > > because the output contains a lot of noise (much more as normally
> > > seen in such traces)...  
> > 
> > You can use call filters to limit that to what you want to see.
> > Also, there's ltrace which could be interesting.
> > 
> >   
> > > The eselects show:  
>  [...]  
> > > Available OpenGL implementations:
> > >   [1]   nvidia *
> > >   [2]   xorg-x11  
>  [...]  
> > > i915 (Intel 915, 945)
> > > i965 (Intel GMA 965, G/Q3x, G/Q4x, HD)
> > > r300 (Radeon R300-R500)
> > > r600 (Radeon R600-R700, Evergreen, Northern Islands)
> > > sw (Software renderer)
> > >   [1]   classic
> > >   [2]   gallium *
> > > 
> > > Why is nvidia not listed with the second command?  
> > 
> > Afaik, it does not provide mesa drivers. That's probably why it
> > cannot find an "swrast" driver/visual then. Directly using nvidia
> > OpenGL fixes that, which is what you did now.
> > 
> > I think the bug with FreeCAD is, that it cannot properly handle
> > multiple opengl implementations which it tries to do itself. It
> > should be left to the system to correctly load the correct opengl
> > implementation.
> > 
> > I guess FreeCAD looks up v

[gentoo-user] Re: FreeCAD permission problems

2017-05-06 Thread Kai Krakow
Am Sat, 6 May 2017 12:55:24 +0200
schrieb tu...@posteo.de:

> On 05/06 12:28, Kai Krakow wrote:
> > Am Sat, 6 May 2017 04:18:57 +0200
> > schrieb tu...@posteo.de:
> >   
> > > On 05/05 09:17, Kai Krakow wrote:  
>  [...]  
>  [...]  
> > >  [...]  
> > >  [...]
>  [...]  
> > >  [...]
>  [...]  
> > >  [...]
>  [...]  
> > >  [...]  
> > >  [...]  
> > >  [...]  
> > >  [...]
>  [...]  
>  [...]  
> > > 
> > > Hi Kai,
> > > 
> > > 
> > > here the results:
> > > LD_PRELOAD=/usr/lib64/opengl/nvidia/lib/. FreeCAD   
> > > ERROR: ld.so: object '/usr/lib64/opengl/nvidia/lib/.' from
> > > LD_PRELOAD cannot be preloaded (cannot read file data): ignored.
> > > FreeCAD 0.16, Libs: 0.16RUnknown © Juergen Riegel, Werner Mayer,
> > > Yorik van Havre 2001-2015 #   ###     
> > >   ##  # #   #   # 
> > >   # ##     # #   #  #   # 
> > >     # # #  # #  #  # #  #   # 
> > >   # #      ## # #   # 
> > >   # #   ## ## # #   #  ##  ##  ##
> > >   # #       ### # #    ##  ##  ##
> > > 
> > > libGL error: No matching fbConfigs or visuals found
> > > libGL error: failed to load driver: swrast
> > > using visual class 4, id 2b
> > > [1]17990 segmentation fault
> > > LD_PRELOAD=/usr/lib64/opengl/nvidia/lib/. FreeCAD  
> > 
> > This makes no sense... You have to give an .so file.
> >   
> >  >LD_PRELOAD=/usr/lib64/opengl/nvidia/lib/libGL.so FreeCAD  
> > > FreeCAD 0.16, Libs: 0.16RUnknown
> > > © Juergen Riegel, Werner Mayer, Yorik van Havre 2001-2015
> > >   #   ###     
> > >   ##  # #   #   # 
> > >   # ##     # #   #  #   # 
> > >     # # #  # #  #  # #  #   # 
> > >   # #      ## # #   # 
> > >   # #   ## ## # #   #  ##  ##  ##
> > >   # #       ### # #    ##  ##  ##
> > > 
> > > using visual class 4, id 2b
> > > [1]17552 segmentation fault
> > > LD_PRELOAD=/usr/lib64/opengl/nvidia/lib/libGL.so FreeCAD  
> > 
> > Okay, so this fixes the problem with the visual as I expected. But
> > now it's segfaulting.
> > 
> > Are you using an NVIDIA card with proprietary driver?
> > 
> > 
> > -- 
> > Regards,
> > Kai
> > 
> > Replies to list-only preferred.
> > 
> > 
> >   
> 
> Hi,
> 
> ...it runs now at least for root (called as user it crashes still).
> 
> I did the following:
> 
> 
> mv /usr/lib64/libGL.so  /usr/lib64/off.libGL.so 
> 
> for all libGL.so* in /usr/lib64/libGL.so*

You shouldn't shuffle those files around. They are controlled by the
package manager.

I think it's a bug of the software that it overwrites ld paths. With a
Gentoo standard configuration and eselect opengl switched to nvidia,
every software should find and load the nvidia opengl stuff first.

Could you show the output of

# lddtree $(which FreeCAD)

E.g., lddtree $(which kwin_x11) shows a line for me:

libGL.so.1 => /usr/lib64/opengl/nvidia/lib/libGL.so.1

which clearly says it's linking libGL.so.1 from nvidia first.

If a libGL line is missing for FreeCAD, it is dynamically loaded by the
application itself. Then it's a FreeCAD bug that should be fixed.

If it's loading from /usr/lib64/libGL* for you, then some paths and
configs are borked in your system.


> Addtionally I added 06nvidia to /etc/ld.so.config.d/. with this
> contents:
> /usr/lib64/opengl/nvidia/lib
> and did a ldconfig afterwards and reboot to release any filehandle.

I wonder why these paths are missing for you... My ld.so.conf has nvidia
paths right in the beginning (first two lines). It's actually made
from /etc/env.d/000opengl. There's nothing nvidia specific in the .d
directory.


> One question remains:
> It works for root but not for any other user.
> I (as user) am in the video group.
> 
> I checked the directory/file permissions of opencascade and they
> seem to be ok.

I don't think that modern kernels and desktop managers still use the
video group. It should be handled by ACLs. Please have a look at the
ACLs of the device nodes.
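
For comparison, on a logind/consolekit-managed seat you would expect
the active user to show up as a named ACL entry, roughly like this
("youruser" is a placeholder):

# getfacl /dev/nvidia0
# file: dev/nvidia0
# owner: root
# group: video
user::rw-
user:youruser:rw-
group::rw-
mask::rw-
other::---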

It all depends on your login manager and pam configuration. You should
check that if things don't work right. If you're using systemd, you are
using systemd-logind, otherwise you're probably using consolekit.

If you're not using either of those, the system would fall back to
standard unix group permissions. But I'm not sure if this works
correctly if you didn't configure the whole chain to work that way.

[gentoo-user] Re: FreeCAD permission problems

2017-05-06 Thread Kai Krakow
Am Fri, 5 May 2017 20:55:42 -0400
schrieb Zhu Sha Zang <zhushaz...@yahoo.com.br>:

> [rodolfo@asgard ~]$ eselect qtgraphicssystem list 20:55
> Available Qt Graphics Systems:
>[1]   native
>[2]   opengl (experimental)
>[3]   raster (default) *
> 
> Best Regards

This does not help, the software is trying to load the swrast GL
driver which is not there.

It would be helpful to see the output of

# eselect opengl list

and

# eselect mesa list


> On 05/05/2017 03:17 PM, Kai Krakow wrote:
> > Am Fri, 5 May 2017 21:12:53 +0200
> > schrieb tu...@posteo.de:
> >  
> >> On 05/05 09:03, Kai Krakow wrote:  
>  [...]  
>  [...]  
> >>   [...]
> >>   [...]  
>  [...]  
> >>   [...]  
>  [...]  
> >>   [...]
> >>   [...]  
>  [...]  
>  [...]  
>  [...]  
>  [...]  
> >> Hi Kai,
> >>
> >> sorry for the confusion I initiated...
> >>
> >> This one I used
> >>
> >> QT_GRAPHICSSYSTEM=raster freecad  
> > Please also try my other suggestion:
> >
> > Find your GL drivers with "locate libGL.so" or "qfile -b libGL.so"
> > and try those paths in the preloader:
> >
> > # LD_PRELOAD=/path/to/libGL.so freecad
> >
> > Try the libGL most specific to your graphics card first.
> >
> >  
> 
> 
> 



-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: FreeCAD permission problems

2017-05-06 Thread Kai Krakow
Am Sat, 6 May 2017 04:18:57 +0200
schrieb tu...@posteo.de:

> On 05/05 09:17, Kai Krakow wrote:
> > Am Fri, 5 May 2017 21:12:53 +0200
> > schrieb tu...@posteo.de:
> >   
> > > On 05/05 09:03, Kai Krakow wrote:  
>  [...]  
>  [...]  
> > >  [...]  
> > >  [...]
>  [...]  
> > >  [...]
>  [...]  
> > >  [...]  
> > >  [...]
>  [...]  
>  [...]  
>  [...]  
>  [...]  
> > > 
> > > Hi Kai,
> > > 
> > > sorry for the confusion I initiated...
> > > 
> > > This one I used
> > > 
> > > QT_GRAPHICSSYSTEM=raster freecad  
> > 
> > Please also try my other suggestion:
> > 
> > Find your GL drivers with "locate libGL.so" or "qfile -b libGL.so"
> > and try those paths in the preloader:
> > 
> > # LD_PRELOAD=/path/to/libGL.so freecad
> > 
> > Try the libGL most specific to your graphics card first.
> > 
> > 
> > -- 
> > Regards,
> > Kai
> > 
> > Replies to list-only preferred.
> > 
> > 
> >   
> 
> Hi Kai,
> 
> 
> here the results:
> LD_PRELOAD=/usr/lib64/opengl/nvidia/lib/. FreeCAD   
> ERROR: ld.so: object '/usr/lib64/opengl/nvidia/lib/.' from LD_PRELOAD
> cannot be preloaded (cannot read file data): ignored. FreeCAD 0.16,
> Libs: 0.16RUnknown © Juergen Riegel, Werner Mayer, Yorik van Havre
> 2001-2015 #   ###     
>   ##  # #   #   # 
>   # ##     # #   #  #   # 
>     # # #  # #  #  # #  #   # 
>   # #      ## # #   # 
>   # #   ## ## # #   #  ##  ##  ##
>   # #       ### # #    ##  ##  ##
> 
> libGL error: No matching fbConfigs or visuals found
> libGL error: failed to load driver: swrast
> using visual class 4, id 2b
> [1]17990 segmentation fault
> LD_PRELOAD=/usr/lib64/opengl/nvidia/lib/. FreeCAD

This makes no sense... You have to give an .so file.

 >LD_PRELOAD=/usr/lib64/opengl/nvidia/lib/libGL.so FreeCAD  
> FreeCAD 0.16, Libs: 0.16RUnknown
> © Juergen Riegel, Werner Mayer, Yorik van Havre 2001-2015
>   #   ###     
>   ##  # #   #   # 
>   # ##     # #   #  #   # 
>     # # #  # #  #  # #  #   # 
>   # #      ## # #   # 
>   # #   ## ## # #   #  ##  ##  ##
>   # #       ### # #    ##  ##  ##
> 
> using visual class 4, id 2b
> [1]17552 segmentation fault
> LD_PRELOAD=/usr/lib64/opengl/nvidia/lib/libGL.so FreeCAD

Okay, so this fixes the problem with the visual as I expected. But now
it's segfaulting.

Are you using an NVIDIA card with proprietary driver?


-- 
Regards,
Kai

Replies to list-only preferred.





[gentoo-user] Re: FreeCAD permission problems

2017-05-05 Thread Kai Krakow
Am Fri, 5 May 2017 21:12:53 +0200
schrieb tu...@posteo.de:

> On 05/05 09:03, Kai Krakow wrote:
> > Am Fri, 5 May 2017 20:40:50 +0200
> > schrieb tu...@posteo.de:
> >   
> > > On 05/05 08:28, Kai Krakow wrote:  
>  [...]  
>  [...]  
> > >  [...]  
> > >  [...]
>  [...]  
> > >  [...]  
> > >  [...]
>  [...]  
>  [...]  
> > > 
> > > Hi kai,
> > > 
> > > THANKS FOR THAT COMMANDLINE!  
> > 
> > Which of those two?
> >   
> > > Now FreeCAD is willing to cooperate...up to an certain level: It
> > > starts
> > > 
> > > Loading an STEP-data file results in :
> > > FreeCAD 0.16, Libs: 0.16RUnknown
> > > © Juergen Riegel, Werner Mayer, Yorik van Havre 2001-2015
> > >   #   ###     
> > >   ##  # #   #   # 
> > >   # ##     # #   #  #   # 
> > >     # # #  # #  #  # #  #   # 
> > >   # #      ## # #   # 
> > >   # #   ## ## # #   #  ##  ##  ##
> > >   # #       ### # #    ##  ##  ##
> > > 
> > > libGL error: No matching fbConfigs or visuals found
> > > libGL error: failed to load driver: swrast
> > > Unhandled std::exception caught in GUIApplication::notify.
> > > The error message is: Permission denied
> > > *** Abort *** an exception was raised, but no catch was found.
> > >   ... The exception is:SIGSEGV 'segmentation violation'
> > > detected. Address 0
> > > 
> > > Any ideas?  
> > 
> > I'd still try the preload stuff. I don't think QT_GRAPHICSSYSTEM can
> > solve this.
> > 
> > But I'm only guessing which command line you used.
> > 
> > 
> > -- 
> > Regards,
> > Kai
> > 
> > Replies to list-only preferred.
> > 
> > 
> >   
> 
> Hi Kai,
> 
> sorry for the confusion I initiated...
> 
> This one I used
> 
> QT_GRAPHICSSYSTEM=raster freecad

Please also try my other suggestion:

Find your GL drivers with "locate libGL.so" or "qfile -b libGL.so" and
try those paths in the preloader:

# LD_PRELOAD=/path/to/libGL.so freecad

Try the libGL most specific to your graphics card first.


-- 
Regards,
Kai

Replies to list-only preferred.





[gentoo-user] Re: FreeCAD permission problems

2017-05-05 Thread Kai Krakow
Am Fri, 5 May 2017 20:40:50 +0200
schrieb tu...@posteo.de:

> On 05/05 08:28, Kai Krakow wrote:
> > Am Fri, 5 May 2017 19:43:14 +0200
> > schrieb tu...@posteo.de:
> >   
> > > On 05/05 10:31, Daniel Frey wrote:  
>  [...]  
>  [...]  
> > >  [...]  
> > >  [...]  
> > >  [...]
>  [...]  
>  [...]  
> > > 
> > > It says that passing 
> > > 
> > > --graphicssystem=raster
> > > 
> > > as option to FreeCAD would fix that problem.
> > > 
> > > 
> > > When doing so, FreeCAD says it does not that 
> > > option.
> > > 
> > > Hm  
> > 
> > Then it's maybe
> > 
> > # QT_GRAPHICSSYSTEM=raster freecad
> > 
> > I had a similar problem with mixxx. I think I solved it with an LD
> > preloader:
> > 
> > $ cat bin/mixxx
> > #!/bin/sh
> > LD_PRELOAD=/usr/lib64/opengl/nvidia/lib/libGL.so exec /usr/bin/mixxx
> > 
> > You may want to try something similar with freecad. Be sure to
> > adjust that to your graphics card. It will obviously not work that
> > way if you don't use NVIDIA proprietary... ;-)
> > 
> > -- 
> > Regards,
> > Kai
> > 
> > Replies to list-only preferred.
> > 
> > 
> >   
> 
> Hi kai,
> 
> THANKS FOR THAT COMMANDLINE!

Which of those two?

> Now FreeCAD is willing to cooperate...up to an certain level: It
> starts
> 
> Loading an STEP-data file results in :
> FreeCAD 0.16, Libs: 0.16RUnknown
> © Juergen Riegel, Werner Mayer, Yorik van Havre 2001-2015
>   #   ###     
>   ##  # #   #   # 
>   # ##     # #   #  #   # 
>     # # #  # #  #  # #  #   # 
>   # #      ## # #   # 
>   # #   ## ## # #   #  ##  ##  ##
>   # #       ### # #    ##  ##  ##
> 
> libGL error: No matching fbConfigs or visuals found
> libGL error: failed to load driver: swrast
> Unhandled std::exception caught in GUIApplication::notify.
> The error message is: Permission denied
> *** Abort *** an exception was raised, but no catch was found.
>   ... The exception is:SIGSEGV 'segmentation violation'
> detected. Address 0
> 
> Any ideas?

I'd still try the preload stuff. I don't think QT_GRAPHICSSYSTEM can
solve this.

But I'm only guessing which command line you used.


-- 
Regards,
Kai

Replies to list-only preferred.





[gentoo-user] Re: FreeCAD permission problems

2017-05-05 Thread Kai Krakow
Am Fri, 5 May 2017 19:43:14 +0200
schrieb tu...@posteo.de:

> On 05/05 10:31, Daniel Frey wrote:
> > On 05/05/2017 10:23 AM, tu...@posteo.de wrote:  
> > > On 05/05 10:17, Daniel Frey wrote:  
>  [...]  
>  [...]  
>  [...]  
> > > 
> > > Hi Dan,
> > > 
> > > I am already in the video group...
> > > 
> > > And: When run as user, is starts but loading
> > > an *.STP file crashes FreeCAD with:
> > > 
> > > FreeCAD 0.16, Libs: 0.16RUnknown
> > > © Juergen Riegel, Werner Mayer, Yorik van Havre 2001-2015
> > >   #   ###     
> > >   ##  # #   #   # 
> > >   # ##     # #   #  #   # 
> > >     # # #  # #  #  # #  #   # 
> > >   # #      ## # #   # 
> > >   # #   ## ## # #   #  ##  ##  ##
> > >   # #       ### # #    ##  ##  ##
> > > 
> > > libGL error: No matching fbConfigs or visuals found
> > > libGL error: failed to load driver: swrast
> > > *** Abort *** an exception was raised, but no catch was found.
> > >   ... The exception is:SIGSEGV 'segmentation violation'
> > > detected. Address 0 [1]5658 exit 1 FreeCAD
> > > 
> > > 
> > > It ssems more odd than previously thought 
> > > 
> > > What is that 'swrast' thingy?
> > > 
> > > 
> > > Cheers
> > > Meino
> > > 
> > > 
> > >   
> > 
> > From what I've just read, it's a software raster driver. I figured
> > it couldn't talk to the hardware, hence the adding to video group
> > suggestion.
> > 
> > Found this though:
> > 
> > http://forum.freecadweb.org/viewtopic.php?t=20187
> > 
> > Dan
> >   
> 
> It says that passing 
> 
> --graphicssystem=raster
> 
> as option to FreeCAD would fix that problem.
> 
> 
> When doing so, FreeCAD says it does not that 
> option.
> 
> Hm

Then it's maybe

# QT_GRAPHICSSYSTEM=raster freecad

I had a similar problem with mixxx. I think I solved it with an LD
preloader:

$ cat bin/mixxx
#!/bin/sh
LD_PRELOAD=/usr/lib64/opengl/nvidia/lib/libGL.so exec /usr/bin/mixxx

You may want to try something similar with freecad. Be sure to adjust
that to your graphics card. It will obviously not work that way if you
don't use NVIDIA proprietary... ;-)

-- 
Regards,
Kai

Replies to list-only preferred.





[gentoo-user] Re: htop wants cgroups

2017-05-01 Thread Kai Krakow
Am Mon, 1 May 2017 16:01:13 +0100
schrieb Jorge Almeida <jjalme...@gmail.com>:

> On Mon, May 1, 2017 at 2:46 PM, Rich Freeman <ri...@gentoo.org> wrote:
> > On Sun, Apr 30, 2017 at 4:17 PM, Kai Krakow <hurikha...@gmail.com>
> > wrote:  
> >> Am Sun, 30 Apr 2017 10:33:05 -0700
> >> schrieb Jorge Almeida <jjalme...@gmail.com>:
> >>  
>  [...]  
> >>  
> 
> >
> > Honestly, I can't think of why you wouldn't want to use it.
> >
> > The use cases of killing orphan processes and managing resources at
> > a service level have already been mentioned.  
> 
> I don't usually have orphan processes (that process 1 doesn't reap).
> My services don't require fine tuning re resources.

This doesn't really qualify, because in the end everything would be
reaped by PID 1... I spoke of the cases where an emerge build phase
left orphan processes floating around. I don't want them, even when
(not if) PID 1 eventually kills them. This also affects the shutdown
phase, because the filesystem cannot be cleanly unmounted then.
You really don't want that. It results in funny effects with some
combinations of boot loaders and file systems (e.g. XFS and grub, grub
may see zero-size files then, and everything is back to normal after a
clean mount).

Also, stopping daemons in OpenRC without cgroups often leaves services
in a limbo state, partially shut down but with orphans still running.
This can result in all sorts of (not so) funny effects (besides the one
mentioned above). Anyone who runs apache servers with PHP shopping
software knows what I mean: such PHP software with so-called integrated
"background services" happily spawns processes that ignore SIGTERM,
probably even double-forking, resulting in unstoppable services: OpenRC
stops the master, the master exits, the orphan process is re-parented
to PID 1, you try to start the service again: boom, doesn't work.

Tho, in systemd you can easily escape the cgroup by doing "su -" or
something similar which moves the process into a different session
context. This also affects OpenRC because it can't catch such funny
maneuvers. "su" is really not a thing to use in init scripts or bash
scripts initializing daemons. :-(
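
For reference, the knobs in /etc/rc.conf look roughly like this (names
from memory, check the comments in your rc.conf; the cleanup option
only exists in newer OpenRC versions):

rc_controller_cgroups="YES"
rc_cgroup_cleanup="yes"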

> > Another use case is that the kernel automatically takes cgroups into
> > account when scheduling.  So, if one of your services launches a
> > bunch of children they'll be weighted together when allocating
> > CPU.  That means that a service with ten threads won't get 10x the
> > CPU of a service with one thread if CPU becomes limiting, assuming
> > equal niceness/etc.  On a multi-user system the same would apply to
> > the user running 100 processes vs 1.

Good catch, I forgot to mention that.

> > I also use cgroups to monitor memory use/etc at a service level.  
> 
>  I don't have complex services (some might argue that very complex
> services are badly designed services, but I leave that discussion to
> pros). I only run single-user workstations.

A multi-threaded/multi-process daemon isn't necessarily a complex
daemon...

> > Sure, they're somewhat optional, but they're a pretty useful kernel
> > feature.  
> 
> No arguing there. Still, it shouldn't be pushed. It's a bad sign.

Well, the wording can be debated. But I think it's not too bad: the
Gentoo newbie will simply follow the warning, enable the option, and
end up with the suggested configuration with all features available. It
saves developers from having to debug unexpected problems later. If you
know better, go for it, with all the consequences that has... ;-)


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: htop wants cgroups

2017-04-30 Thread Kai Krakow
Am Sun, 30 Apr 2017 10:33:05 -0700
schrieb Jorge Almeida :

> > It allows portage to properly shut down remaining processes from
> > ebuild build phases by knowing exactly which processes have been
> > spawn in the compile phase, and it allows openrc to better manage
> > the processes and proper shut down any processes belonging to a
> > service.  
> 
> I understand that, in principle. In practice, sshd works fine without
> it, for example. And portage doesn't have a cgroups related USE
> variable. Doesn't mean I won't find a need for it, someday.

It does have such a FEATURE in make.conf and it's used to better manage
run-away processes from build phases.
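
For reference, in /etc/portage/make.conf (see man make.conf for the
details):

FEATURES="${FEATURES} cgroup"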

> > Also you may benefit from setting resource limits and fair resource
> > sharing for a group of processes where ulimit applies only to single
> > processes and doesn't know about resource shares at all.
> >
> > Overall, it makes sense to have it.  
> 
> It makes sense that the kernel has it. Should it be enabled? For a
> server, probably. For a single-user workstation? Maybe.

Maybe I don't have the ordinary workstation, but I use it to limit the
memory of services that sometimes run away, and to control the resource
usage of container machines I'm using during development. Probably not
the ordinary use case...
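
A rough cgroup-v1 sketch of what I mean (paths assume the memory
controller is mounted in the usual place, $SERVICE_PID is a
placeholder):

# mkdir /sys/fs/cgroup/memory/limited
# echo 2G > /sys/fs/cgroup/memory/limited/memory.limit_in_bytes
# echo $SERVICE_PID > /sys/fs/cgroup/memory/limited/cgroup.procs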


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: htop wants cgroups

2017-04-30 Thread Kai Krakow
Am Sun, 30 Apr 2017 12:36:03 -0700
schrieb Jorge Almeida <jjalme...@gmail.com>:

> On Sun, Apr 30, 2017 at 12:14 PM, Nikos Chantziaras
> <rea...@gmail.com> wrote:
> > On 04/30/2017 08:33 PM, Jorge Almeida wrote:  
> >>
> >> On Sun, Apr 30, 2017 at 9:40 AM, Kai Krakow <hurikha...@gmail.com>
> >> wrote:  
>  [...]  
> 
> >
> > You can enable cgroups in the kernel and then simply not use them.
> > This will shut it up. It's what I do :-P
> >  
> The warnings don't bother me that much, I just feel they are Bad
> Policy. Enabling cgroups would add unnecessary complexity to the
> kernel configuration, if only a bit.

On the other hand, if such warnings weren't there, it would make
installing full-featured systems more difficult. And any experienced
Gentoo user can decide on her/his own how to react to such warnings.

I like to have such warnings in. They give me hints about what I'd like
to have and what not, and give me the opportunity to improve my
knowledge by researching that warning.

An improvement could maybe be made: Such warnings could tell links into
the Gentoo wiki for further information on that topic.

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: htop wants cgroups

2017-04-30 Thread Kai Krakow
Am Sun, 30 Apr 2017 09:26:16 -0700
schrieb Jorge Almeida :

> Why?
> 
> emerging htop yields this message:
>  *   CONFIG_CGROUPS: is not set when it should be.
>  * Please check to make sure these options are set correctly.
>  * Failure to do so may cause unexpected problems.
> 
> 
> Gee, I can use top without cgroups support. I thought I might use htop
> as well. Anyone knows why I _should_ use a kernel with cgroups
> support? Just curious, not a big deal. I can do without htop if I
> must.

Well, it says "should be" enabled. It's not a requirement. You may not
use some of htop's features like proper process grouping.

> (I'm not suggesting that cgroups doesn't have valid use cases. But a
> graphic version of top? Really? Please help me to understand. I want
> to do the _correct_ thing, and I wouldn't want my dog to die for lack
> of cgroups support.)

I would be interested in why you wouldn't want to use cgroups. Besides
being a requirement for systemd, it also has very valid use cases for
other software you probably use:

It allows portage to properly shut down remaining processes from ebuild
build phases by knowing exactly which processes have been spawned in
the compile phase, and it allows openrc to better manage the processes
and properly shut down any processes belonging to a service.

Also, you may benefit from setting resource limits and fair resource
sharing for a group of processes, whereas ulimit applies only to single
processes and doesn't know about resource shares at all.

Overall, it makes sense to have it.
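
For reference, a typical set of kernel options for this looks roughly
like the following (a sketch - enable only what you actually want):

CONFIG_CGROUPS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_PIDS=y
CONFIG_CPUSETS=y
CONFIG_MEMCG=y
CONFIG_CGROUP_SCHED=y
CONFIG_FAIR_GROUP_SCHED=y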


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: How to get rid of sys-fs/static-dev, or do I really need it?

2017-04-30 Thread Kai Krakow
Am Tue, 25 Apr 2017 08:29:51 +0100
schrieb Neil Bothwick :

> On Tue, 25 Apr 2017 01:30:07 +0200, wabe wrote:
> 
> > > Do you have virtual/udev installed? That should be enough to keep
> > > virtual/dev-manager happy.
> > 
> > Thanks for your answer. virtual/udev is already installed. Tomorrow
> > I'll make a backup of my system and after that I'll remove
> > sys-fs/static-dev.  
> 
> I can't see it doing any harm, because any static nodes in /dev/ are
> hidden once udev starts.

They're hidden as soon as devtmpfs is mounted by the kernel. Modern
kernels do this automatically; udev is not involved here. That's
probably the reason why it was removed from the profile. The default
options of gentoo-sources suggest enabling devtmpfs and letting the
kernel automount it at boot.
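
For reference, the options in question are (double-check in your tree):

CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y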

-- 
Regards,
Kai

Replies to list-only preferred.


pgpI6PUULNrqL.pgp
Description: OpenPGP digital signature


[gentoo-user] Re: ebuild: package specific CFLAGS

2017-04-30 Thread Kai Krakow
Am Sat, 29 Apr 2017 00:14:10 -0400
schrieb John Covici :

> On Fri, 28 Apr 2017 22:10:42 -0400,
> Ian Zimmerman wrote:
> > 
> > I'm trying to create an ebuild of a crufty old program that needs
> > -fgnu89-inline in compiler flags to have any chance of building.
> > 
> > What's the way to do that in an ebuild?  I could have something like
> > 
> > src_configure() {
> > econf $(use_enable nls) CFLAGS=-fgnu89-inline
> > }
> > 
> > but then, will this not _override_ (rather than add to, as desired)
> > the CFLAGS from make.conf?  
> 
> Maybe you'd be better off setting an environment variable outside the
> ebuild in a shell script in /etc/portage/env where you can put the
> whole CCFLAGS .

You should also mention that you need to reference that file
in /etc/portage/package.env, similar to how package.use works. Except
that instead of USE flags, you give filenames from /etc/portage/env.
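
A rough sketch of the whole chain (the package atom is made up, adjust
it to the real one):

/etc/portage/env/gnu89-inline.conf:
CFLAGS="${CFLAGS} -fgnu89-inline"

/etc/portage/package.env:
app-misc/crufty-old-prog gnu89-inline.conf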

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Pseudo first impressions

2017-04-30 Thread Kai Krakow
Am Sun, 30 Apr 2017 00:56:40 -0500
schrieb R0b0t1 <r03...@gmail.com>:

> On Sun, Apr 30, 2017 at 12:14 AM, Kai Krakow <hurikha...@gmail.com>
> wrote:
> > Am Sat, 29 Apr 2017 14:39:13 +
> > schrieb Alan Mackenzie <a...@muc.de>:
> >  
> >> For a start, I could barely read parts of it, which were displayed
> >> in dark blue text on a black background.  Setting
> >> up /etc/portage/color.map is not the first thing a new user should
> >> have to do to be able to read messages from emerge.  This is,
> >> however, something I knew had to be done, and I did it.  
> >
> > This is a problem with most terminal emulators having a much too
> > dark "dark blue". On an old DOS CRT, this dark blue was still
> > bright enough to be read easily on black background. Especially, I
> > found PuTTY in Windows having a dark blue barely readable.
> >
> > E.g., in KDE Konsole I usually switch to a different terminal color
> > scheme which usually gets around this. But then, contrast on bright
> > colors is usually very bad, as can be seen in MC at some points. But
> > the new "breeze" color scheme from current Plasma versions is quite
> > nice and an overall good fit.
> >  
> 
> I have occasionally had this problem (and the reverse - green and
> yellow are unreadable on light backgrounds), but the default colors in
> URxvt are fairly reasonable.

Much depends on a reasonable color palette in the terminal emulator.
There are only few that get it right.

> Not to derail this thread but what is the process for getting changes
> into the handbook? I have some suggestions as well, but still only
> have a vague idea of how it is maintained. There's a lot that could be
> added in relation to maintaining modern systems, and many of the
> changes to portage could be added. (E.g. there's people who will come
> into the IRC and have a conglomeration of settings that, based on the
> quirks and naming conventions, you can tell were taken from 3-4 places
> each being published years apart. There probably needs to be some
> basic information all in one place.)

There's the Gentoo wiki. I don't know if you need special privileges or
if it's open to everyone to put in improvements. And then, there's
always the BGO (bugs.gentoo.org) where you can suggest even handbook
improvements by selecting the proper bug component.

> And in reply to the Perl problem, though my response probably isn't
> needed: I can verify that using a high backtrack number solved this,
> and that the dependency chain was the longest I have seen save one
> other time.

Usually, using "reinstall-atoms" works much better for me (and it's
faster in most cases because when emerge dumps all those stuff and I
see it's easily resolvable by reinstall-atoms, I do that, instead of
using a high backtrack value, waiting for ages again, only to see that
it may not solve my problem).


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-04-30 Thread Kai Krakow
Am Sat, 29 Apr 2017 22:02:51 -0400
schrieb "Walter Dnes" :

>   Then there's always "sneakernet".  To quote Andrew Tanenbaum from
> 1981
> 
> > Never underestimate the bandwidth of a station wagon full of tapes
> > hurtling down the highway.  

Hehe, with the improvements in internet connections nowadays, we have
almost stopped transferring backups via sneakernet. Comparing the
transfer speed of the internet connection against the effective speed
of hauling disks at highway miles per hour, the internet has almost
always won lately. :-)

Most internet connections are faster than even USB sticks these days.
LAN connections tailored towards storage (i.e. SAN) are even faster
than ordinary hard disks.

Nice fun fact, tho. If you go with "a station wagon full", it probably
still holds true today.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-04-30 Thread Kai Krakow
Am Sat, 29 Apr 2017 20:02:57 +0100
schrieb lee :

> Alan McKinnon  writes:
> 
> > On 25/04/2017 16:29, lee wrote:  
> >> 
> >> Hi,
> >> 
> >> since the usage of FTP seems to be declining, what is a replacement
> >> which is at least as good as FTP?
> >> 
> >> I'm aware that there's webdav, but that's very awkward to use and
> >> missing features.
> >> 
> >>   
> >
> > Why not stick with ftp?  
> 
> The intended users are incompetent, hence it is too difficult to
> use ...

If your incompetent users are using Windows: have you ever tried
entering ftp://u...@yoursite.tld in the Explorer address bar?

> > Or, put another way, why do you feel you need to use something
> > else?  
> 
> I don't want to use anything else.
> 
> Yet even Debian has announced that they will shut down their ftp
> services in November, one of the reasons being that almost no one uses
> them.  Of course, their application is different from what I'm looking
> for because they only have downloads and no uploads.

And that's the exact reason why: Offering FTP just for downloads (not
even for browsing) is inefficient. Getting a file via HTTP is much more
efficient as the connection overhead is much lower. Removing FTP is
thus just a question of reducing attack surface and server load.

Your scenario differs a lot and doesn't follow the reasoning debian put
behind it.

> However, another reason given was that ftp isn't exactly friendly to
> firewalls and requires "awkward kludges" when load balancing is used.
> That is a pretty good reason.

This is due to FTP incorporating transfer of ports and IP addresses in
the protocol which was a good design decision when the protocol was
specified but isn't nowadays. Embedding FTP into a tunnel solves that,
e.g. by using sftp (ssh+ftp). HTTP also solves that by not embedding
such information at the protocol level. But tunneling FTP is not how
you would deploy such a scenario, so the option is HTTP, hence FTP can
be shut down by debian. KISS principle.
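
For illustration, this is what an active-mode client sends on the
control channel (made-up address):

PORT 192,168,1,23,197,143

i.e. "please connect back to 192.168.1.23, port 197*256+143 = 50575". A
NAT gateway or firewall in between has to parse and rewrite exactly
this, which is why connection-tracking helpers like nf_conntrack_ftp
exist.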

> Anyway, when pretty much nobody uses a particular software anymore, it
> won't be very feasible to use that software.

Nobody said that when debian announced the shutdown of their FTP
servers. Debian is not the king ruling the internet. You shouldn't care
when they shut down their FTP services; it doesn't matter to the rest
of the world using the internet.

> > There's always dropbox  
> 
> Well, dropbox sucks.  I got a dropbox link and it didn't work at all,
> and handing out the data to some 3rd party is a very bad idea.  It's
> also difficult to automate things with that.

There's also owncloud (or whatever it is called now). You can automate
things by deploying a sync application on your clients side.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-04-30 Thread Kai Krakow
Am Sat, 29 Apr 2017 20:30:03 +0100
schrieb lee :

> Danny YUE  writes:
> 
> > On 2017-04-25 14:29, lee  wrote:  
> >> Hi,
> >>
> >> since the usage of FTP seems to be declining, what is a replacement
> >> which is at least as good as FTP?
> >>
> >> I'm aware that there's webdav, but that's very awkward to use and
> >> missing features.  
> >
> > What about sshfs? It allows you to mount a location that can be
> > accessed via ssh to your local file system, as if you are using
> > ssh.  
> 
> Doesn't that require ssh access?  And how do you explain that to ppl
> finding it too difficult to use Filezilla?  Is it available for
> Windoze?

Both, sshfs and scp, require a full shell (that may be restricted but
that involves configuration overhead on the server side). You can use
sftp (FTP wrapped into SSH), which is built into SSH. It has native
support in many Windows clients (most implementations use PuTTY in the
background). It also has the advantage that you can easily restrict
users on your system to SFTP-only with an easy server-side
configuration.
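
Roughly like this in sshd_config, assuming a group called "sftponly"
(made up here) whose members should only ever get SFTP:

Subsystem sftp internal-sftp

Match Group sftponly
    ChrootDirectory /home/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no

(Note that the chroot directory has to be owned by root.)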

> > Also samba can be a replacement. I have a samba server on my OpenWRT
> > router and use mount.cifs to mount it...  
> 
> Does that work well, reliably and securely over internet connections?

It supports encryption as transport security, and it supports kerberos
for secure authentication; the latter is not easy to set up on Linux,
but it should work with Windows clients out-of-the-box.

But samba is a pretty complex daemon and thus offers a big attack
surface for hackers and bots. I'm not sure you want to expose this to
the internet without some sort of firewall in place to restrict access
to specific clients - and that probably wouldn't work for your scenario.

But you could offer access via OpenVPN and tunnel samba through that.
At that point, you can just as easily offer FTP, too, through the
tunnel only, as there should be no more security concerns: it's
encrypted now. OpenVPN also offers transparent compression, which can
be a big plus for your scenario.

OpenVPN is not too difficult to set up, and the client is available for
all major OSes. And it's not too complicated to use: open the VPN
connection, then use your file transfer client as you're used to. Just
one simple extra step.
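
A minimal server-side sketch (certificate/key file names are
placeholders, see the OpenVPN howto for how to generate them; comp-lzo
is the transparent compression mentioned above):

dev tun
proto udp
port 1194
server 10.8.0.0 255.255.255.0
ca ca.crt
cert server.crt
key server.key
dh dh.pem
keepalive 10 120
comp-lzo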


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-04-29 Thread Kai Krakow
Am Sat, 29 Apr 2017 20:38:24 +0100
schrieb lee <l...@yagibdah.de>:

> Kai Krakow <hurikha...@gmail.com> writes:
> 
> > Am Tue, 25 Apr 2017 15:29:18 +0100
> > schrieb lee <l...@yagibdah.de>:
> >  
> >> since the usage of FTP seems to be declining, what is a replacement
> >> which is at least as good as FTP?
> >> 
> >> I'm aware that there's webdav, but that's very awkward to use and
> >> missing features.  
> >
> > If you want to sync files between two sites, try rsync. It is
> > supported through ssh also. Plus, it's very fast also.  
> 
> Yes, I'm using it mostly for backups/copies.
> 
> The problem is that ftp is ideal for the purpose, yet users find it
> too difficult to use, and nobody uses it.  So there must be something
> else as good or better which is easier to use and which ppl do use.

Well, I don't see how FTP is declining, except that it is unencrypted.
You can still use FTP with TLS handshaking, most sites should support
it these days but almost none forces correct certificates because it is
usually implemented wrong on the server side (by giving you
ftp.yourdomain.tld as the hostname instead of ftp.hostingprovider.tld
which the TLS cert has been issued for). That makes it rather pointless
to use. In linux, lftp is one of the few FTP clients supporting TLS
out-of-the-box by default, plus it forces correct certificates.
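
For reference, forcing TLS and strict certificate checks in lftp looks
roughly like this (goes into ~/.lftprc):

set ftp:ssl-force true
set ftp:ssl-protect-data true
set ssl:verify-certificate yes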

But I found FTP being extra slow on small files, that's why I suggested
to use rsync instead. That means, where you could use sftp (ssh+ftp),
you can usually also use ssh+rsync which is faster.

There's also the mirror command in lftp, which can be pretty fast, too,
on incremental updates but still much slower than rsync.

> I don't see how they would transfer files without ftp when ftp is the
> ideal solution.

You simply don't. FTP is still there and in use. If you see something
like "sftp" (ssh+ftp, not ftp+ssl, which I would refer to as ftps),
this is usually just ftp wrapped into ssh for security reasons. It's
just using ftp through a tunnel, but at the core it's the ftp protocol.
In the end, it's not much different from scp, as ftp is really just a
special shell with some special commands to set up a file transfer
channel that isn't prone to interacting with terminal escape sequences
in whatever way those may be implemented, something that e.g. rzsz
needs to work around.

In the early BBS days, when you couldn't establish a second transfer
channel like FTP does over TCP, you had to send special escape
sequences to put the terminal into file transfer mode and then send the
file. Back then, you used rzsz from the remote shell to initiate a file
transfer. This is closer to the idea of how scp implements a file
transfer behind the scenes.

FTP also added some nice features like site-to-site transfers where the
data endpoints both are on remote sites, and your local site only is
the control channel. This directly transfers data from one remote site
to another without going through your local connection (which may be
slow due to the dial-up nature of most customer internet connections).

Also, FTP is able to stream multiple files in a single connection for
transferring many small files, by using tar as the transport protocol,
thus reducing the overhead of establishing a new connection per file.
However, I know of only a few clients that support that, and even fewer
servers it would work with.

FTP can be pretty powerful, as you see. It's just a victim of poor
implementations in most FTP clients, which makes it feel like it has
mostly declined. If wrapped into a secure tunnel (TLS, ssh), FTP is
still a very good choice for transferring files, tho not the most
efficient. Depending on your use case, you may be much better off with
more efficient protocols like rsync.


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Pseudo first impressions

2017-04-29 Thread Kai Krakow
Am Sat, 29 Apr 2017 20:53:50 -0700
schrieb Ian Zimmerman :

> On 2017-04-30 02:23, lee wrote:
> 
> > > Do a --depclean and that will resolve itself.  
> > 
> > Last time I tried that, it wanted to remove the source of the kernel
> > I'm using, along with other things.  It would have made sense if I
> > had upgraded the kernel, too, but I didn't have the time to do that
> > yet.  
> 
> emerge --select =sys-kernel/gentoo-sources-${VERSION}
> 
> or add a line for the exact version to the world file manually

If you give a slot instead of a version, it will record that slot in
the world file, which is usually more appropriate:

# emerge --select sys-kernel/gentoo-sources:${VERSION}

No need to add that manually then.

Kernel versions are slotted per minor version, so it is essentially the
same for your example.

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Pseudo first impressions

2017-04-29 Thread Kai Krakow
Am Sat, 29 Apr 2017 14:39:13 +
schrieb Alan Mackenzie :

> For a start, I could barely read parts of it, which were displayed in
> dark blue text on a black background.  Setting
> up /etc/portage/color.map is not the first thing a new user should
> have to do to be able to read messages from emerge.  This is,
> however, something I knew had to be done, and I did it.

This is a problem with most terminal emulators having a much too dark
"dark blue". On an old DOS CRT, this dark blue was still bright enough
to be read easily on a black background. In particular, I found PuTTY
on Windows to have a barely readable dark blue.

E.g., in KDE Konsole I usually switch to a different terminal color
scheme which usually gets around this. But then, contrast on bright
colors is usually very bad, as can be seen in MC at some points. But
the new "breeze" color scheme from current Plasma versions is quite
nice and an overall good fit.

> The error message was "Multiple package instances within a single
> package slot have been pulled into the dependency graph, resulting in
> a slot conflict:".  Uhh???

This wouldn't happen if this would actually be a new system with vendor
stage tarball. I guess you're upgrading an existing system from new
hardware.

> Is this gobbledegook really what a new user should be seeing, having
> not yet installed any packages, bar a very few, beyond what is
> requisite to bringing a new machine up?
> 
> The actual conflict packages are:
> dev-lang/perl-5.24.1-r1:0/5.24::gentoo
>   and
> dev-lang/perl-5.22.3-rc4:0/5.22::gentoo
> , "pulled in" by internal system packages I've got no direct interest
> in, plus, shockingly, "and 2 more with the same problem" and "and 5
> more with the same problem".

This, and similar conflicts, can be easily resolved by forcing rebuilds
of the packages marked in blue in the conflict tree. I particular, here
it works by running:

# emerge -DNua world --reinstall-atoms "$(qlist -ICS dev-perl/
virtual/perl-)"

I found "reinstall-atoms" to become very handy lately. Emerge seems to
be pretty bad at determining rebuilds of dependents in world upgrades,
even when using big backtrack values. Also, bigger backtrack values
increase deptree calculation by a huge factor, especially when emerge
isn't able to figure it out anyways.

You may want to remove all perl virtuals first, which is essentially
what perl-cleaner also does in a first step:

# emerge -Ca $(qlist -IC virtual/perl-)


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: replacement for ftp?

2017-04-25 Thread Kai Krakow
Am Tue, 25 Apr 2017 15:29:18 +0100
schrieb lee :

> since the usage of FTP seems to be declining, what is a replacement
> which is at least as good as FTP?
> 
> I'm aware that there's webdav, but that's very awkward to use and
> missing features.

If you want to sync files between two sites, try rsync. It is also
supported through ssh. Plus, it's very fast.

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: News: invalid item?

2017-04-21 Thread Kai Krakow
Am Fri, 21 Apr 2017 08:13:29 -0700
schrieb Daniel Frey <djqf...@gmail.com>:

> On 04/20/2017 10:18 PM, Kai Krakow wrote:
> > Am Wed, 19 Apr 2017 12:45:49 -0700
> > schrieb Daniel Frey <djqf...@gmail.com>:
> >   
> >> Anyone sync recently and get this:
> >>
> >> !!! Invalid news item:
> >> /usr/portage/metadata/news/2017-04-10-split-and-slotted-wine/2017-04-10-split-and-slotted-wine.en.txt
> >>
> >> Not sure if I should file a bugreport or what happened here...
> >>
> >> Synced a few minutes ago @ 12:37-ish PST.  
> > 
> > Please try updating portage first and then try again...
> >   
> 
> I synced again yesterday (Apr 20) and it did not go away. Currently
> syncing again.

Ah no, not by syncing again...

# emerge -1a portage


-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: Pay attention to what 'emerge' tells you.

2017-04-20 Thread Kai Krakow
Am Thu, 20 Apr 2017 05:05:25 +0800
schrieb Bill Kenworthy :

> On 20/04/17 04:54, Grant Edwards wrote:
> > I did my normal (approximately) weekly emerge sync/update today, and
> > the update failed: emerge complained about a conflict between perl
> > 5.22 and 5.24. There were a bunch of perl modules that required
> > 5.22, but others required 5.24.
> >
> > After a bit of messing around, I just uninstalled all the ones that
> > required 5.22 (and then uninstalled whatever apps required those
> > modules).  This took numerous iterations of 'emerge --pretend
> > --depclean' and 'emerge -C ' and 'emerge -auvND'.  After
> > 10-15 minutes of this, the update ran without conflict, and then I
> > reinstalled whatever apps I had uninstalled.
> >
> > Now update the next machine... same conflicts.
> >
> > This time I paid closer attention to the emerge output and added
> > '--backtrack=30' as it suggested.  Then the update worked ran no
> > problem.
> >  
> 
> simpler is to:
> emerge perl --nodeps
> perl-cleaner --all
> emerge -NuDv world etc ...
> 
> less work, no glitches ... I have just started machine #4

Well, you are lucky that no glitches appeared... --nodeps ignores all
dependencies, so perl may not work properly. At the very least, I would
rebuild perl again after everything is resolved.

Or you use:

# emerge -1a perl --reinstall-atoms "$(qlist -IC dev-perl/ virtual/perl-)"

It may require you to first uninstall all virtuals:

# emerge -Ca $(qlist -IC virtual/perl-)

Also, "qlist -ICS ..." may be required instead.

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: News: invalid item?

2017-04-20 Thread Kai Krakow
Am Wed, 19 Apr 2017 12:45:49 -0700
schrieb Daniel Frey :

> Anyone sync recently and get this:
> 
> !!! Invalid news item:
> /usr/portage/metadata/news/2017-04-10-split-and-slotted-wine/2017-04-10-split-and-slotted-wine.en.txt
> 
> Not sure if I should file a bugreport or what happened here...
> 
> Synced a few minutes ago @ 12:37-ish PST.

Please try updating portage first and then try again...

-- 
Regards,
Kai

Replies to list-only preferred.




[gentoo-user] Re: [OT] Tools for putting HDD back to new state

2017-04-14 Thread Kai Krakow
Am Fri, 14 Apr 2017 09:37:09 +0200
schrieb Marc Joliet <mar...@gmx.de>:

> (Sorry for the late reply, I hope it's still useful to you.)

NP. The links below were interesting.

> On Dienstag, 4. April 2017 00:46:54 CEST Kai Krakow wrote:
> > Am Mon, 3 Apr 2017 16:15:24 -0400
> > 
> > schrieb Rich Freeman <ri...@gentoo.org>:  
> > > On Mon, Apr 3, 2017 at 2:34 PM, Kai Krakow <hurikha...@gmail.com>
> > > 
> > > wrote:  
>  [...]  
> > > 
> > > If it contains data you'd prefer not be recoverable you might
> > > want to use shred or ATA secure erase.  
> > 
> > I wonder if shredding adds any value with the high density of modern
> > drives... Each bit is down to a "few" (*) atoms. It should be pretty
> > difficult, if not impossible, to infer the previous data from it. I
> > think most of the ability to infer the previous data comes from
> > magnetic leakage from the written bit to the neighbor bits. And
> > this is why clever mathematicians created series of alternating bit
> > patterns to distribute this leakage evenly, which is the different
> > algorithms the shredder programs use.
> > 
> > Do you have any insights on that matter? Just curious.  
> 
> For the record, there was some discussion on this on this not too
> long ago [edit: oops, looks like it was almost two years ago now]:
> see the thread "Securely Securely deletion of an HDD" (yes, I
> including my spelling mistake), which you can find online at https://
> archives.gentoo.org/gentoo-user/message/a01e0ad7b07855647a528f1e0324631a
> and
> https://archives.gentoo.org/gentoo-user/message/582fe3c66c7e13de979b656e9db33325.

So you suggest shooting a bullet at the disks? ;-)

You could also use the hammer method:
https://youtu.be/oNcaIQMjbM8?t=2m55s

> > > Shred overwrites the drive with random data using a few passes to
> > > make recovery more difficult.  Some debate whether it actually
> > > adds value.  
> > 
> > For a mere mortal it is already impossible to recover data after
> > writing zeros to it. Shredding is very time consuming and probably
> > not worth the effort if you just want a blank drive and have no
> > critical or security relevant data on it, i.e. you used it for
> > testing.
> > 
> > But while you are at it: Shredding tools should usually do a read
> > check to compare that the data that ought to have been written
> > actually was written, otherwise the whole procedure is pretty
> > pointless. As a side effect, this exposes sector defects.
> > 
> > If you want to do this to pretend data has never been written to the
> > drive, you're probably out of luck anyways: If you'd be able to
> > recover data after a single write of zeros, it should be easily
> > possible to see that the data was shredded with different bit
> > patterns. The S.M.A.R.T counters will add the rest and tell you the
> > power-on hours, maybe even amount of data written, head moves etc.
> > 
> > (*): On an atomic scale, that's still 1 million atoms...  
> 
> I don't think using zeros is enough, certainly not on SSDs that do
> their own compression, I would think.

Well, I don't think that compression, with the overhead needed to make
it effective, is worth the effort to implement, and I don't think
drives do this, especially since the bus speed is becoming the
bottleneck: to be effective, data would have to be compressed before
transferring over the bus and decompressed after. Deduplication is also
very unlikely to be done in firmware. So I wouldn't take that as an
argument for using random data.

But I think the point here is sector remapping (as pointed out in the
referenced threads): SSDs do that through the FTL constantly, HDDs do
it upon encountering physical problems on the platter. It makes
absolutely no difference whether you write random data or zeros to the
disk: you won't reach the previously mapped sector locations. Secure
erase is probably the only thing you can do here, hoping that it covers
all sectors (including the spare and unmapped ones).
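
For reference, ATA secure erase is usually triggered through hdparm;
this is a rough sketch only (/dev/sdX and the password "p" are
placeholders, the drive must not report "frozen" in the security
section of the -I output, and a typo here can irrecoverably wipe the
wrong disk):

# hdparm -I /dev/sdX
# hdparm --user-master u --security-set-pass p /dev/sdX
# hdparm --user-master u --security-erase p /dev/sdX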

> And AFAIK using random data
> can still fill the drive at native write speed, so I don't see what
> you gain by avoiding that.  But really, if you haven't already, check
> the primary sources in the thread I mentioned above.

It depends on what your random source is: /dev/random won't generate
entropy fast enough for this. /dev/urandom could, but it isn't truly
random because it's generated mathematically, which somewhat defeats
the purpose of using it as an overwrite source. A mixture of both could
be good enough; that's probably where special wiping software comes in.
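
As a concrete sketch of a simple overwrite (/dev/sdX is a placeholder
for the target drive, and this accepts urandom's pseudo-randomness):

# dd if=/dev/urandom of=/dev/sdX bs=1M status=progress

or, letting shred do one random pass followed by a final zero pass:

# shred -v -n 1 -z /dev/sdX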

Conclusion: If you don't store state secrets, overwriting with zeros
should be good enough.
