[gentoo-user] Re: Lowest common denominator compile

2017-09-04 Thread Ian Zimmerman
On 2017-09-04 13:55, Grant wrote:

> ansible does sound pretty cool.  I'll check it out if I outgrow my
> script but as long as I can keep using Dell XPS 13 laptops I don't
> think it will have any trouble scaling.

For the dug-in minimalists among us, there is also app-admin/cdist.

-- 
Please don't Cc: me privately on mailing lists and Usenet,
if you also post the followup to the list or newsgroup.
Do obvious transformation on domain to reply privately _only_ on Usenet.



Re: [gentoo-user] Re: Lowest common denominator compile

2017-09-04 Thread R0b0t1
On Mon, Sep 4, 2017 at 3:16 PM, Grant  wrote:
>> I have a network of very nearly identical Dell XPS 13 laptops that I
>> manage with a script.  The master pushes the contents of its
>> filesystem to the others so I only have to manage one system.  It's
>> worked really well over several years.  I just got a new Dell XPS 13
>> to serve as the master and there have been some changes that were
>> difficult to integrate with the network (high-res screen, /dev/sda
>> replaced with /dev/nvme0n1) but those problems are fixed thanks to you
>> guys.
>>
>> Now I'm running into "trap invalid opcode" errors on the older
>> systems.  Can I disable some of the newer CPU instruction sets on the
>> master laptop when compiling to hopefully generate binaries that will
>> work on the older systems?  If so, could anyone point me in the right
>> direction?  I'd prefer not to use distcc.
>>
>> CHOST="x86_64-pc-linux-gnu"
>> CFLAGS="-march=native -O2 -pipe -fomit-frame-pointer"
>
>
> Switching to -mtune=native seems to work.  Time for an emerge -e world.
>

This still might fail after a sufficient amount of time; you may want
to run cpuid2cpuflags on all of the laptops. I'm not sure if there is
a way around this. You may be able to refer to the Intel ARK to look
up processor and family capabilities directly.
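One way to act on that advice: gather the flag list from every laptop and keep only the intersection. A sketch with sample data standing in for real output — in practice each flags.* file would come from something like `ssh host1 cpuid2cpuflags`, with the `CPU_FLAGS_X86:` prefix stripped and the words split one per line (the hostnames and flag lists here are invented):

```shell
# Stand-in data: one flag per line, sorted, as comm(1) requires.
printf '%s\n' aes avx avx2 sse4_2 | sort > flags.new   # newer XPS 13
printf '%s\n' aes sse4_2 ssse3    | sort > flags.old   # older XPS 13

# comm -12 prints only the lines common to both sorted files: the flag
# set that is safe to enable on every machine.
comm -12 flags.new flags.old > flags.common
echo "CPU_FLAGS_X86: $(tr '\n' ' ' < flags.common)"
```

The resulting list is what you would put in CPU_FLAGS_X86 in make.conf on the master.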

R0b0t1.



Re: [gentoo-user] Re: Lowest common denominator compile

2017-09-04 Thread Grant
>>> This is exactly the use-case ansible was designed for: declarative,
>>> idempotent, predictable management of a fleet of machines that may or
>>> may not be around when you feel like updating something (so it catches
>>> up later), and needs only sshd and python to do its magic :-)
>>
>> ansible does sound pretty cool.  I'll check it out if I outgrow my
>> script but as long as I can keep using Dell XPS 13 laptops I don't
>> think it will have any trouble scaling.
>
>
> Allow me a few moments to convince you more :-)
>
> Maybe you don't need it right this minute, but your current method will
> become less and less workable with time, and when you feel that maybe
> you ought to do it differently, you might remember this conversation.
>
> ansible works like you do - ssh's into a host, and does what needs
> doing. It figures out what to do the same way you do, by having a clear
> picture of what should be and making the host that way. It's very much
> like a proxy for you.
>
> The difference comes in when you observe that with ansible you don't
> have to deal with the details of HOW to do something (the shipped
> modules already deal with all of that), you only concern yourself with
> WHAT you want to accomplish.
>
> If you only use ansible to basically run ssh in a for loop, then it's a
> waste of the setup effort. But let's say Dell retire or change those
> XPS13s dramatically. 4 of yours wear out, and you buy 8 Precisions.
> Bugger, they have different hardware and the chipset running the SSDs
> has a different driver. The GPU, the wifi NIC, these things all drift
> with time. Then you figure you want 2 management Precisions, plus
> yours, the wife's and the workshop manager's laptops need extra stuff;
> and the plebs can stay the same on the XPS.
>
> And your script gets out of hand real quick (we've all been there). How
> do you detect that something is a Precision and take appropriate actions
> in a nasty script? It's not easy.
> Ansible does all that for you upfront, out of the box. It always knows
> what it's working on (thanks to a module called setup), and catering for
> inevitable differences is trivial to handle, with none of the downsides
> of copying entire tree structures around (like copying way too many
> files you didn't intend to, such as /var/run...)


It truly sounds great but the devil is in the details in my particular
environment.  If I feel like I'm outgrowing my script (and maybe even
if I don't) I'll dig into ansible.  How big of a job is the
implementation?

Thanks,
Grant



Re: [gentoo-user] Ruby - 3 versions - seriously????

2017-09-04 Thread R0b0t1
Sorry, I missed your reply.

On Mon, Sep 4, 2017 at 3:32 PM, Marvin Gülker  wrote:
> On 4 September 2017 at 12:07 -0500, R0b0t1 wrote:
>> Even if they can not present an argument like I have,
>> they will probably only notice it if it misbehaves in some way. If it
>> misbehaves more than other software on their system, who is to say it
>> isn't a poorly designed language and/or ecosystem?
>
> I think that on a technical mailinglist you should convey your point
> using technical arguments, not rhetorical ones.

The technical reasoning in the argument I presented is "it doesn't
work when I try to use it." It is not sophistry.

> The reasoning is
> erroneous. If your goal is not ultimate API stability, then Ruby's
> design approach that focuses more on progress than on ultimate API
> stability is not poor, but different. You can agree or disagree with the
> goal, but you can't question the measures taken to implement it by first
> stipulating a goal different from the one the measure was intended to
> implement. Take a look at Ruby's versioning policy[1]; ultimate API
> backward compatibility is not a design goal in minor versions of the
> language. Ruby is simply not the right tool for the job if you want to
> create, for example, archive software that must run for 20 years without
> touching it.
>

The problem is there's a zeroth goal of every project: to be useful.
If the software produced is hard to use or not usable at all, then all
of the work spent on it is for naught.

Suggesting that it is impossible to progress a language while
maintaining language stability is fallacious reasoning. You can choose
to do both, e.g. by structuring releases so that breaking changes are
lumped together, as in Python. It is also the case that not all
changes are good changes, though experimentation is key to success.

> Even so, the problem is not as dramatic as you seem to imply. I
> stand by my point that using private C interfaces is the programmer's
> fault and there is nothing to be standardised here. Real breaking
> changes of documented behaviour like the Bignum/Fixnum one are rare, and
> the effects are moderate. Most of the software written in Ruby will not
> have a problem with running on newer versions.
>

The problem is dramatic enough if there are people complaining about it.
Granted, most distributions seem to take care of such issues for their
users, so the only people complaining seem to be Gentooers.

Other people who do not like the situation simply avoid Ruby.

Cheers,
 R0b0t1.



Re: [gentoo-user] Re: High resolution on a 13 inch screen

2017-09-04 Thread Floyd Anderson

Hello Grant,

On Mo, 04 Sep 12:24:00 -0700
Grant  wrote:


Is there a way to digitally discover the true height and width of your
screen in mm?


If you know the shadow mask/dot pitch [1] or the real pixels per inch of
your screen, then you can calculate it. This way you can see whether
software reports wrong values.
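For example, the arithmetic for the 13" 3200x1800 panel discussed in this thread can be done with awk (the diagonal of exactly 13.0 inches is an assumption):

```shell
# Compute pixels per inch from the native resolution and the physical
# diagonal, then derive the physical width/height in millimetres.
awk 'BEGIN {
    w = 3200; h = 1800; diag_in = 13.0          # pixels; diagonal in inches
    ppi = sqrt(w*w + h*h) / diag_in             # pixels per inch (DPI)
    mm_w = w / ppi * 25.4                       # physical width in mm
    mm_h = h / ppi * 25.4                       # physical height in mm
    printf "DPI: %.0f, screen: %.0fmm x %.0fmm\n", ppi, mm_w, mm_h
}'
```

This yields roughly 282 DPI, which matches the figure quoted elsewhere in the thread.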


BTW: X supports a -dpi parameter/option. I use it in my xserverrc (xinit
server script) configuration, and Xorg.0.log then reports the correct,
previously calculated values.

[1] 


--
Regards,
floyd




Re: [gentoo-user] Re: High resolution on a 13 inch screen

2017-09-04 Thread wabe
Grant  wrote:

> > and indeed XFCE doesn't seem to have very good support for this.
> > Maybe you can find some of the settings listed there useful though.
> >
> > Other than that, if you want working DPI scaling, you'll have much
> > better luck with KDE 5 / Plasma.  
> 
> 
> Won't I freak out if I'm an xfce4 guy and I try to switch to KDE?  Is
> there a better choice for HiDPI migration for people who like xfce4?

I'm using XFCE with a 140 DPI display and it's working fine.

--
Regards
wabe



Re: [gentoo-user] Re: Lowest common denominator compile

2017-09-04 Thread Alan McKinnon
On 04/09/2017 22:55, Grant wrote:
>> This is exactly the use-case ansible was designed for: declarative,
>> idempotent, predictable management of a fleet of machines that may or
>> may not be around when you feel like updating something (so it catches
>> up later), and needs only sshd and python to do its magic :-)
> 
> ansible does sound pretty cool.  I'll check it out if I outgrow my
> script but as long as I can keep using Dell XPS 13 laptops I don't
> think it will have any trouble scaling.


Allow me a few moments to convince you more :-)

Maybe you don't need it right this minute, but your current method will
become less and less workable with time, and when you feel that maybe
you ought to do it differently, you might remember this conversation.

ansible works like you do - ssh's into a host, and does what needs
doing. It figures out what to do the same way you do, by having a clear
picture of what should be and making the host that way. It's very much
like a proxy for you.

The difference comes in when you observe that with ansible you don't
have to deal with the details of HOW to do something (the shipped
modules already deal with all of that), you only concern yourself with
WHAT you want to accomplish.

If you only use ansible to basically run ssh in a for loop, then it's a
waste of the setup effort. But let's say Dell retire or change those
XPS13s dramatically. 4 of yours wear out, and you buy 8 Precisions.
Bugger, they have different hardware and the chipset running the SSDs
has a different driver. The GPU, the wifi NIC, these things all drift
with time. Then you figure you want 2 management Precisions, plus
yours, the wife's and the workshop manager's laptops need extra stuff;
and the plebs can stay the same on the XPS.

And your script gets out of hand real quick (we've all been there). How
do you detect that something is a Precision and take appropriate actions
in a nasty script? It's not easy.
Ansible does all that for you upfront, out of the box. It always knows
what it's working on (thanks to a module called setup), and catering for
inevitable differences is trivial to handle, with none of the downsides
of copying entire tree structures around (like copying way too many
files you didn't intend to, such as /var/run...)
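To make that concrete, a minimal hypothetical playbook sketch — the group names, file paths and package are invented for illustration, not taken from this thread:

```yaml
# site.yml - a sketch of "declare WHAT, not HOW"
- hosts: xps13_fleet
  become: true
  tasks:
    - name: Keep sshd_config in sync with the master copy
      copy:
        src: files/sshd_config
        dest: /etc/ssh/sshd_config

    - name: Extra tools only on the management laptops
      package:
        name: htop
        state: present
      when: "'management' in group_names"
```

The `when:` clause is where per-machine differences live, instead of ad-hoc branching in a shell script.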

-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Re: High resolution on a 13 inch screen

2017-09-04 Thread Grant
 I'm getting strange results from xdpyinfo.  I always get 96x96 DPI and
 the screen size changes along with the resolution.  When I run 'xrandr
 --dpi 200x200' and check xdpyinfo, it reports correctly.  But if I log
 out and back in to xfce4 without doing anything else, it gives me
 96x96 again.
>>>
>>>
>>>
>>> XFCE is probably forcing 96DPI by default. This is usually done by
>>> desktop
>>> environments that don't support DPI scaling very well. I just found this
>>> (sort of flame-war-ish) thread:
>>>
>>>https://forum.xfce.org/viewtopic.php?id=7734
>>>
>>> and indeed XFCE doesn't seem to have very good support for this. Maybe
>>> you
>>> can find some of the settings listed there useful though.
>>>
>>> Other than that, if you want working DPI scaling, you'll have much better
>>> luck with KDE 5 / Plasma.
>>
>>
>> Won't I freak out if I'm an xfce4 guy and I try to switch to KDE?  Is
>> there a better choice for HiDPI migration for people who like xfce4?
>
>
> You could try LXQt, which is the upcoming replacement for LXDE. It's
> Qt-based, so DPI scaling *should* work well (no guarantees, didn't try it
> myself yet.) And its desktop philosophy is more similar to XFCE, meaning
> minimalist, non-bloated UIs.
>
> Anyway, if I were you, I'd just try all of them using live-CDs/USBs from
> various distros, and see what works best. LXDE, LXQt, Gnome, KDE, Budgie,
> those seem to be the main ones right now.


Great tips, thank you Nikos.

- Grant



Re: [gentoo-user] Ruby - 3 versions - seriously????

2017-09-04 Thread R0b0t1
On Mon, Sep 4, 2017 at 12:49 PM, Michael Orlitzky  wrote:
> On 09/04/2017 01:07 PM, R0b0t1 wrote:
>>
>> For almost all languages but Ruby (and Perl) you can take code written
>> against one minor version and compile it in the next minor version.
>
>
> This isn't a language issue with Ruby, it's a culture/package-management
> one. For a long time, it's been easy to bundle dependencies in Ruby. The
> result is a culture of saying "I need the version of ruby-foo that was
> released on my birthday that one time mercury was in retrograde, and
> also I'd like the version number to have a seven in it somewhere because
> that's my daughter's age." When two package authors come up with two
> different requirements like that, you end up needing *two* versions of
> ruby-foo installed.
>

That's very insightful. However, I think developers originally started
requesting versions with such specificity because breaking changes,
even minor ones, were made with such consistency. At a certain point
this does simply spill over into maintainer preference but package
maintainers may be following the lead of the language developers.

E.g. most Python packages provide extremely stable interfaces.

Cheers,
 R0b0t1.



[gentoo-user] Re: High resolution on a 13 inch screen

2017-09-04 Thread Nikos Chantziaras

On 04/09/17 23:58, Grant wrote:

I'm getting strange results from xdpyinfo.  I always get 96x96 DPI and
the screen size changes along with the resolution.  When I run 'xrandr
--dpi 200x200' and check xdpyinfo, it reports correctly.  But if I log
out and back in to xfce4 without doing anything else, it gives me
96x96 again.



XFCE is probably forcing 96DPI by default. This is usually done by desktop
environments that don't support DPI scaling very well. I just found this
(sort of flame-war-ish) thread:

   https://forum.xfce.org/viewtopic.php?id=7734

and indeed XFCE doesn't seem to have very good support for this. Maybe you
can find some of the settings listed there useful though.

Other than that, if you want working DPI scaling, you'll have much better
luck with KDE 5 / Plasma.


Won't I freak out if I'm an xfce4 guy and I try to switch to KDE?  Is
there a better choice for HiDPI migration for people who like xfce4?


You could try LXQt, which is the upcoming replacement for LXDE. It's 
Qt-based, so DPI scaling *should* work well (no guarantees, didn't try 
it myself yet.) And its desktop philosophy is more similar to XFCE, 
meaning minimalist, non-bloated UIs.


Anyway, if I were you, I'd just try all of them using live-CDs/USBs from 
various distros, and see what works best. LXDE, LXQt, Gnome, KDE, 
Budgie, those seem to be the main ones right now.





Re: [gentoo-user] Re: High resolution on a 13 inch screen

2017-09-04 Thread Grant
 Is there a way to digitally discover the true height and width of your
 screen in mm?
>>>
>>>
>>> Yes. xdpyinfo shows the information:
>>>
>>>xdpyinfo | grep -B2 resolution
>>>
>>> If the information is wrong, that usually means one of two things
>>> (sometimes
>>> even both): a) the video driver is reporting the wrong size to Xorg,
>>> and/or
>>> b) the screen is reporting the wrong size to the driver.
>>
>>
>> I'm getting strange results from xdpyinfo.  I always get 96x96 DPI and
>> the screen size changes along with the resolution.  When I run 'xrandr
>> --dpi 200x200' and check xdpyinfo, it reports correctly.  But if I log
>> out and back in to xfce4 without doing anything else, it gives me
>> 96x96 again.
>
>
> XFCE is probably forcing 96DPI by default. This is usually done by desktop
> environments that don't support DPI scaling very well. I just found this
> (sort of flame-war-ish) thread:
>
>   https://forum.xfce.org/viewtopic.php?id=7734
>
> and indeed XFCE doesn't seem to have very good support for this. Maybe you
> can find some of the settings listed there useful though.
>
> Other than that, if you want working DPI scaling, you'll have much better
> luck with KDE 5 / Plasma.


Won't I freak out if I'm an xfce4 guy and I try to switch to KDE?  Is
there a better choice for HiDPI migration for people who like xfce4?

- Grant



Re: [gentoo-user] Re: Lowest common denominator compile

2017-09-04 Thread Grant
>>> I have a network of very nearly identical Dell XPS 13 laptops that I
>>> manage with a script.  The master pushes the contents of its
>>> filesystem to the others so I only have to manage one system.  It's
>>> worked really well over several years.  I just got a new Dell XPS 13
>>> to serve as the master and there have been some changes that were
>>> difficult to integrate with the network (high-res screen, /dev/sda
>>> replaced with /dev/nvme0n1) but those problems are fixed thanks to you
>>> guys.
>>>
>>> Now I'm running into "trap invalid opcode" errors on the older
>>> systems.  Can I disable some of the newer CPU instruction sets on the
>>> master laptop when compiling to hopefully generate binaries that will
>>> work on the older systems?  If so, could anyone point me in the right
>>> direction?  I'd prefer not to use distcc.
>>>
>>> CHOST="x86_64-pc-linux-gnu"
>>> CFLAGS="-march=native -O2 -pipe -fomit-frame-pointer"
>>
>>
>> Switching to -mtune=native seems to work.  Time for an emerge -e world.
>
> Also time for ansible. Why are you managing a fleet of machines with a
> script that won't actually differentiate properly between machines? It
> will sort of mostly do it right, except when you forget something.


Well, I designed it around the principle that I would have the luxury
of using sufficiently identical hardware across each system so that it
wouldn't need to differentiate.  It's simple, it's one file.  I just
execute the file with certain parameters based on whether I'm cloning
the running system to a USB stick, cloning the running USB stick to
its host system, pushing the master system to another system, or
updating the running system based on the last push to it.  Worked
great until the latest iteration of XPS 13.  Even now the only
hardware differentiation it needs to make is /dev/sda or /dev/nvme0n1.
DisplaySize in xorg.conf and -mtune=native in make.conf are sufficient
to handle different screen resolutions and CPUs.


> This is exactly the use-case ansible was designed for: declarative,
> idempotent, predictable management of a fleet of machines that may or
> may not be around when you feel like updating something (so it catches
> up later), and needs only sshd and python to do its magic :-)


ansible does sound pretty cool.  I'll check it out if I outgrow my
script but as long as I can keep using Dell XPS 13 laptops I don't
think it will have any trouble scaling.

- Grant



[gentoo-user] Re: High resolution on a 13 inch screen

2017-09-04 Thread Nikos Chantziaras

On 04/09/17 23:26, Grant wrote:

Is there a way to digitally discover the true height and width of your
screen in mm?


Yes. xdpyinfo shows the information:

   xdpyinfo | grep -B2 resolution

If the information is wrong, that usually means one of two things (sometimes
even both): a) the video driver is reporting the wrong size to Xorg, and/or
b) the screen is reporting the wrong size to the driver.


I'm getting strange results from xdpyinfo.  I always get 96x96 DPI and
the screen size changes along with the resolution.  When I run 'xrandr
--dpi 200x200' and check xdpyinfo, it reports correctly.  But if I log
out and back in to xfce4 without doing anything else, it gives me
96x96 again.


XFCE is probably forcing 96DPI by default. This is usually done by 
desktop environments that don't support DPI scaling very well. I just 
found this (sort of flame-war-ish) thread:


  https://forum.xfce.org/viewtopic.php?id=7734

and indeed XFCE doesn't seem to have very good support for this. Maybe 
you can find some of the settings listed there useful though.


Other than that, if you want working DPI scaling, you'll have much 
better luck with KDE 5 / Plasma.





Re: [gentoo-user] Re: Lowest common denominator compile

2017-09-04 Thread Alan McKinnon
On 04/09/2017 22:16, Grant wrote:
>> I have a network of very nearly identical Dell XPS 13 laptops that I
>> manage with a script.  The master pushes the contents of its
>> filesystem to the others so I only have to manage one system.  It's
>> worked really well over several years.  I just got a new Dell XPS 13
>> to serve as the master and there have been some changes that were
>> difficult to integrate with the network (high-res screen, /dev/sda
>> replaced with /dev/nvme0n1) but those problems are fixed thanks to you
>> guys.
>>
>> Now I'm running into "trap invalid opcode" errors on the older
>> systems.  Can I disable some of the newer CPU instruction sets on the
>> master laptop when compiling to hopefully generate binaries that will
>> work on the older systems?  If so, could anyone point me in the right
>> direction?  I'd prefer not to use distcc.
>>
>> CHOST="x86_64-pc-linux-gnu"
>> CFLAGS="-march=native -O2 -pipe -fomit-frame-pointer"
> 
> 
> Switching to -mtune=native seems to work.  Time for an emerge -e world.

Also time for ansible. Why are you managing a fleet of machines with a
script that won't actually differentiate properly between machines? It
will sort of mostly do it right, except when you forget something.

This is exactly the use-case ansible was designed for: declarative,
idempotent, predictable management of a fleet of machines that may or
may not be around when you feel like updating something (so it catches
up later), and needs only sshd and python to do its magic :-)

Never mind that ansible was written with servers in mind; in terms of
management where you do $STUFF_THAT_NEEDS_MANAGING, there is no
difference between servers and laptops. A computer is still just a computer.



-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Ruby - 3 versions - seriously????

2017-09-04 Thread Marvin Gülker
On 4 September 2017 at 12:07 -0500, R0b0t1 wrote:
> Even if they can not present an argument like I have,
> they will probably only notice it if it misbehaves in some way. If it
> misbehaves more than other software on their system, who is to say it
> isn't a poorly designed language and/or ecosystem?

I think that on a technical mailinglist you should convey your point
using technical arguments, not rhetorical ones. The reasoning is
erroneous. If your goal is not ultimate API stability, then Ruby's
design approach that focuses more on progress than on ultimate API
stability is not poor, but different. You can agree or disagree with the
goal, but you can't question the measures taken to implement it by first
stipulating a goal different from the one the measure was intended to
implement. Take a look at Ruby's versioning policy[1]; ultimate API
backward compatibility is not a design goal in minor versions of the
language. Ruby is simply not the right tool for the job if you want to
create, for example, archive software that must run for 20 years without
touching it.

Even so, the problem is not as dramatic as you seem to imply. I
stand by my point that using private C interfaces is the programmer's
fault and there is nothing to be standardised here. Real breaking
changes of documented behaviour like the Bignum/Fixnum one are rare, and
the effects are moderate. Most of the software written in Ruby will not
have a problem with running on newer versions.

Marvin

[1]: 
https://www.ruby-lang.org/en/news/2013/12/21/ruby-version-policy-changes-with-2-1-0/



Re: [gentoo-user] Re: High resolution on a 13 inch screen

2017-09-04 Thread Grant
> My laptop's 13" screen has a native resolution of 3200x1800 which
> makes everything crazy small on-screen.  Is there a good method for
> telling Xorg or xfce4 to compensate, or should I one-at-a-time my
> applications?


 Depends on your desktop. I'm not sure if XFCE supports this, but in KDE
 everything scales to my monitor's DPI automatically.

 What is the output of:

xdpyinfo | grep -i resolution

 (The utility is in the x11-apps/xdpyinfo package.)

 On such a small screen, the result should be a very high DPI (around
 282.)
 If that's not the number you get, then your graphics driver is reporting
 it
 wrong to Xorg, and you need to set it manually.
>>>
>>>
>>> This led me to the DisplaySize parameter for xorg.conf which helps a lot.
>>>
>>
>> Is there a way to digitally discover the true height and width of your
>> screen in mm?
>
>
> Yes. xdpyinfo shows the information:
>
>   xdpyinfo | grep -B2 resolution
>
> If the information is wrong, that usually means one of two things (sometimes
> even both): a) the video driver is reporting the wrong size to Xorg, and/or
> b) the screen is reporting the wrong size to the driver.


I'm getting strange results from xdpyinfo.  I always get 96x96 DPI and
the screen size changes along with the resolution.  When I run 'xrandr
--dpi 200x200' and check xdpyinfo, it reports correctly.  But if I log
out and back in to xfce4 without doing anything else, it gives me
96x96 again.

- Grant
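For reference, the DisplaySize parameter mentioned above goes in a Monitor section of xorg.conf; a hypothetical fragment, where the identifier and the 288x162 mm size (a 13" 3200x1800 panel) are assumptions:

```
Section "Monitor"
    Identifier  "eDP-1"
    # Physical size in millimetres; Xorg derives the DPI from this
    # together with the native resolution (3200x1800 here -> ~282 DPI).
    DisplaySize 288 162
EndSection
```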



[gentoo-user] Re: Lowest common denominator compile

2017-09-04 Thread Grant
> I have a network of very nearly identical Dell XPS 13 laptops that I
> manage with a script.  The master pushes the contents of its
> filesystem to the others so I only have to manage one system.  It's
> worked really well over several years.  I just got a new Dell XPS 13
> to serve as the master and there have been some changes that were
> difficult to integrate with the network (high-res screen, /dev/sda
> replaced with /dev/nvme0n1) but those problems are fixed thanks to you
> guys.
>
> Now I'm running into "trap invalid opcode" errors on the older
> systems.  Can I disable some of the newer CPU instruction sets on the
> master laptop when compiling to hopefully generate binaries that will
> work on the older systems?  If so, could anyone point me in the right
> direction?  I'd prefer not to use distcc.
>
> CHOST="x86_64-pc-linux-gnu"
> CFLAGS="-march=native -O2 -pipe -fomit-frame-pointer"


Switching to -mtune=native seems to work.  Time for an emerge -e world.

- Grant
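Why this works: without an explicit -march, GCC targets the baseline AMD64 ISA, and -mtune=native only influences instruction scheduling, not instruction selection. Making the baseline explicit in make.conf might look like this (a sketch; pinning -march=x86-64 is an assumption about the desired lowest common denominator):

```
CHOST="x86_64-pc-linux-gnu"
# -march=x86-64 restricts codegen to instructions every AMD64 CPU has;
# -mtune=native still optimises scheduling for the master's CPU.
CFLAGS="-march=x86-64 -mtune=native -O2 -pipe -fomit-frame-pointer"
CXXFLAGS="${CFLAGS}"
```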



[gentoo-user] Lowest common denominator compile

2017-09-04 Thread Grant
I have a network of very nearly identical Dell XPS 13 laptops that I
manage with a script.  The master pushes the contents of its
filesystem to the others so I only have to manage one system.  It's
worked really well over several years.  I just got a new Dell XPS 13
to serve as the master and there have been some changes that were
difficult to integrate with the network (high-res screen, /dev/sda
replaced with /dev/nvme0n1) but those problems are fixed thanks to you
guys.

Now I'm running into "trap invalid opcode" errors on the older
systems.  Can I disable some of the newer CPU instruction sets on the
master laptop when compiling to hopefully generate binaries that will
work on the older systems?  If so, could anyone point me in the right
direction?  I'd prefer not to use distcc.

CHOST="x86_64-pc-linux-gnu"
CFLAGS="-march=native -O2 -pipe -fomit-frame-pointer"

- Grant



[gentoo-user] Re: High resolution on a 13 inch screen

2017-09-04 Thread Nikos Chantziaras

On 04/09/17 22:24, Grant wrote:

My laptop's 13" screen has a native resolution of 3200x1800 which
makes everything crazy small on-screen.  Is there a good method for
telling Xorg or xfce4 to compensate, or should I one-at-a-time my
applications?


Depends on your desktop. I'm not sure if XFCE supports this, but in KDE
everything scales to my monitor's DPI automatically.

What is the output of:

   xdpyinfo | grep -i resolution

(The utility is in the x11-apps/xdpyinfo package.)

On such a small screen, the result should be a very high DPI (around 282.)
If that's not the number you get, then your graphics driver is reporting it
wrong to Xorg, and you need to set it manually.


This led me to the DisplaySize parameter for xorg.conf which helps a lot.



Is there a way to digitally discover the true height and width of your
screen in mm?


Yes. xdpyinfo shows the information:

  xdpyinfo | grep -B2 resolution

If the information is wrong, that usually means one of two things 
(sometimes even both): a) the video driver is reporting the wrong size 
to Xorg, and/or b) the screen is reporting the wrong size to the driver.





Re: [gentoo-user] Re: High resolution on a 13 inch screen

2017-09-04 Thread Grant
>>> My laptop's 13" screen has a native resolution of 3200x1800 which
>>> makes everything crazy small on-screen.  Is there a good method for
>>> telling Xorg or xfce4 to compensate, or should I one-at-a-time my
>>> applications?
>>
>> Depends on your desktop. I'm not sure if XFCE supports this, but in KDE
>> everything scales to my monitor's DPI automatically.
>>
>> What is the output of:
>>
>>   xdpyinfo | grep -i resolution
>>
>> (The utility is in the x11-apps/xdpyinfo package.)
>>
>> On such a small screen, the result should be a very high DPI (around 282.)
>> If that's not the number you get, then your graphics driver is reporting it
>> wrong to Xorg, and you need to set it manually.
>
>
> This led me to the DisplaySize parameter for xorg.conf which helps a lot.
>
> Thanks,
> Grant


Is there a way to digitally discover the true height and width of your
screen in mm?

- Grant



Re: [gentoo-user] Rename /dev/nvme0n1 to /dev/sda

2017-09-04 Thread Grant
 My new laptop uses /dev/nvme0n1 instead of /dev/sda which conflicts
 with the script I use to manage about 12 similar laptops running
 Gentoo.  Is there a udev method for renaming the disk that will work
 well with any USB disks that happen to also be attached?
>>>
>>> I'm not certain what you mean by that, but I would guess that you want
>>> the nvme disk to show up as /dev/sda, and the USB disk(s) to show up
>>> as /dev/sd[b-z].
>>>
>>> It is not possible to accomplish this using udev; the kernel owns the
>>> /dev/sdX device namespace, and will sequentially create devices nodes
>>> for SCSI-like block devices using that namespace. There is no way to
>>> change that using a udev rule.
>>
>>
>> Can I rename /dev/sda to /dev/sd[b-z] if it's attached via USB, and
>> then rename /dev/nvme0n1 to /dev/sda if /dev/nvme0n1 exists?
>>
>> Alternatively, can I rename /dev/sda to /dev/sd[b-z] if /dev/sda and
>> /dev/nvme0n1 exist, and then rename /dev/nvme0n1 to /dev/sda if
>> /dev/nvme0n1 exists?
>
> You might technically be able to do it, but I would guess it would
> cause some nasty race conditions between the kernel and udev. It's a
> bad idea.


 Is it the conditionals that cause this to be a bad idea?  Because I
 believe udev has functionality designed to rename devices exactly like
 this.
>>>
>>> udev doesn't provide any functionality to rename device nodes. You can
>>> adjust their permissions, and create symlinks, but there is no direct
>>> way to rename them.
>>
>>
>> I use stuff like this to rename my USB devices and it works perfectly:
>>
>> SUBSYSTEM=="net", ACTION=="add", ENV{ID_NET_NAME_PATH}=="enp0s20u2u1",
>> NAME="net0"
>>
>> Isn't this a true rename of the device node?
>
> Network devices don't have device nodes. They have interface names,
> which are a different concept entirely.


OK, I'll take your advice and change the script to detect /dev/nvme0n1.

- Grant
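As noted above, udev cannot rename block device nodes, but it can add a stable symlink the script could target instead; a hypothetical rule sketch (the file name and alias are invented):

```
# /etc/udev/rules.d/99-maindisk.rules (hypothetical)
# Create /dev/maindisk -> nvme0n1 without touching the kernel's name.
KERNEL=="nvme0n1", SUBSYSTEM=="block", SYMLINK+="maindisk"
```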



Re: [gentoo-user] Ruby - 3 versions - seriously????

2017-09-04 Thread Michael Orlitzky
On 09/04/2017 01:07 PM, R0b0t1 wrote:
> 
> For almost all languages but Ruby (and Perl) you can take code written
> against one minor version and compile it in the next minor version.


This isn't a language issue with Ruby, it's a culture/package-management
one. For a long time, it's been easy to bundle dependencies in Ruby. The
result is a culture of saying "I need the version of ruby-foo that was
released on my birthday that one time mercury was in retrograde, and
also I'd like the version number to have a seven in it somewhere because
that's my daughter's age." When two package authors come up with two
different requirements like that, you end up needing *two* versions of
ruby-foo installed.

Even if both packages could happily use the same, latest version of
ruby-foo -- you get what upstream says in most cases. And what upstream
says is usually crap, because they bundle everything and will never
notice annoying incompatibilities like end-users do.



Re: [gentoo-user] Ruby - 3 versions - seriously????

2017-09-04 Thread R0b0t1
On Mon, Sep 4, 2017 at 1:49 AM, Marvin Gülker  wrote:
> On 3 September 2017 at 15:35 -0500, R0b0t1 wrote:
>> I think the takeaway from Alan's comment is that Python is unnaturally
>> stable compared to other interpreted languages. One might be inclined
>> to think Python developers consider their work to be a widely used
>> tool as opposed to a toy to play with.
>
> If you regard Ruby as a toy language, I'm inclined to say that quite a
> number of (most often Rails) applications use it for serious
> projects.
>

I don't doubt they're attempting to be serious.

>> These are all fairly major changes for a minor release. I'm not really
>> sure any of this evidence supports the opinion that Ruby doesn't
>> experience breaking changes more regularly than other languages.
>
> I have not made the claim that Ruby is more stable than other languages;
> in particular, I never said that Ruby is as stable as Python. My
> intention was to counter the statement that every Ruby minor release is
> a "complete new language". The changes I listed are breaking, but not to
> a degree that justifies the "complete new language" statement.
>

For almost all languages but Ruby (and Perl) you can take code written
against one minor version and compile it in the next minor version. I
don't disagree that the language at large is more or less the exact
same thing, but there is enough different to prevent a program from
compiling - thus, it's a new language.

>> Situations like the above, and reliance on private C interfaces, are
>> what makes it seem plausible to me that there are packages that
>> require a version that has no listed breaking changes.
>
> Using unsupported private C interfaces is going to make any package
> break in any language over time. This is not Ruby's fault.
>

Perhaps not in the technical sense, but there is software written
against those interfaces and people use it. Perhaps those interfaces
should be standardized?

>> This statement makes me think you haven't tried to understand the
>> issue, as that ISO document - to the best of my knowledge, I can't
>> actually view it without paying money - describes Ruby 1.8.1 and
>> potentially some features from 1.9. Hearsay indicates it was started
>> at the behest of the Japanese government so that they could use Ruby
>> for internal projects, as their rules seem to require standards
>> documents for software. This is important, because it shows that there
>> is no real effort by Ruby's lead developer or the Ruby community to
>> produce a legitimate standards document.
>
> I've not worked with the ISO document. You requested a formal standard,
> and I replied that there's an ISO document, which I regard as a
> standard. I didn't know that it describes such an ages-old version of
> Ruby (though I should have known better given the date). Since the 2.4.0
> release post on ruby-lang.org justifies removal of Fixnum/Bignum with
> the interpreter not being compliant with the ISO standard, I was under
> the impression that it was still usable. If it isn't, I apologise.
>

You can still use it, but it only defines Ruby ~1.8. To most that
means it is not usable. I'm not certain why they bothered to quote it
for justification. Grasping at straws?

Standards can exist and still be useless. Back when the standard was
still relevant, some users feared it would be so vague as to guarantee
no interoperability between vendors.

>> In practice one finds more references to something called RubySpec
>> which is an executable implementation of what people like to call a
>> specification. RubySpec appears to be discontinued[1], but even when
>> it was in use there are three things that should be pointed out:
>
> The RubySpec was started as a community effort indeed, but if you only
> read the Rubinius view of it, you're going to see a lot of bias. The
> Rubinius main maintainer withdrew from the effort by his own
> decision. Consequently, RubySpec is now maintained by the core team
> of the canonical Ruby implementation[1]. Thus, it is not true that the
> core developers do not make use of RubySpec.
>

This is sufficiently new that I hadn't learned of it. Still though,
there is more than one Ruby implementation, and past behavior w.r.t.
creating a language specification has me dubious that the
specification will be maintained to a useful degree.

The issue back when RubySpec was a separate project was that everyone
seems to have ended up ignoring it, because writing test cases is no
fun. Unless there are strictly enforced project guidelines, I don't
expect the existence of ruby/spec to mean anything more than the
existence of RubySpec did.

And, again, there are still crucial areas of the language left
undefined, namely its grammar.

>> If you look at the RubySpec code you will see that the "specification"
>> consists of testcases that attempt to define the behavior of Ruby. As
>> mentioned, these tests 

Re: [gentoo-user] Rename /dev/nvme0n1 to /dev/sda

2017-09-04 Thread Mike Gilbert
On Mon, Sep 4, 2017 at 11:00 AM, Grant  wrote:
>>> My new laptop uses /dev/nvme0n1 instead of /dev/sda which conflicts
>>> with the script I use to manage about 12 similar laptops running
>>> Gentoo.  Is there a udev method for renaming the disk that will work
>>> well with any USB disks that happen to also be attached?
>>
>> I'm not certain what you mean by that, but I would guess that you want
>> the nvme disk to show up as /dev/sda, and the USB disk(s) to show up
>> as /dev/sd[b-z].
>>
>> It is not possible to accomplish this using udev; the kernel owns the
>> /dev/sdX device namespace, and will sequentially create device nodes
>> for SCSI-like block devices using that namespace. There is no way to
>> change that using a udev rule.
>
>
> Can I rename /dev/sda to /dev/sd[b-z] if it's attached via USB, and
> then rename /dev/nvme0n1 to /dev/sda if /dev/nvme0n1 exists?
>
> Alternatively, can I rename /dev/sda to /dev/sd[b-z] if /dev/sda and
> /dev/nvme0n1 exist, and then rename /dev/nvme0n1 to /dev/sda if
> /dev/nvme0n1 exists?

 You might technically be able to do it, but I would guess it would
 cause some nasty race conditions between the kernel and udev. It's a
 bad idea.
>>>
>>>
>>> Is it the conditionals that cause this to be a bad idea?  Because I
>>> believe udev has functionality designed to rename devices exactly like
>>> this.
>>
>> udev doesn't provide any functionality to rename device nodes. You can
>> adjust their permissions, and create symlinks, but there is no direct
>> way to rename them.
>
>
> I use stuff like this to rename my USB devices and it works perfectly:
>
> SUBSYSTEM=="net", ACTION=="add", ENV{ID_NET_NAME_PATH}=="enp0s20u2u1",
> NAME="net0"
>
> Isn't this a true rename of the device node?

Network devices don't have device nodes. They have interface names,
which are a different concept entirely.
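
While udev can't rename block device nodes, the thread notes it can create
symlinks to them. A hedged sketch of such a rule (the rule filename and the
alias name "bootdisk" are made-up examples, not anything from this thread):

```
# /etc/udev/rules.d/60-bootdisk.rules (hypothetical filename)
# Create /dev/bootdisk as a stable symlink to the internal NVMe disk.
KERNEL=="nvme0n1", SUBSYSTEM=="block", SYMLINK+="bootdisk"
```

A management script could then refer to the symlink rather than to the
kernel-assigned node name; the kernel's own /dev/disk/by-id, by-label, and
by-uuid symlinks serve the same purpose without any custom rule.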



Re: [gentoo-user] Re: Read-only access to a git repo

2017-09-04 Thread Alan McKinnon
On 04/09/2017 17:27, Alberto Luaces wrote:
> I would add gitolite to the recommendation list, as it is a small but
> powerful system: the administration interface is a git repository as
> well, so each repository/user can be individually configured without
> accessing system files:
> 
> http://gitolite.com/gitolite/
> 

Thanks, I'll give that a try too. I think gitolite might even be too
heavy - push/pull/nothing permissions as per Simon's links sound good

-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Rename /dev/nvme0n1 to /dev/sda

2017-09-04 Thread J. Roeleveld
On 4 September 2017 17:00:30 GMT+02:00, Grant  wrote:
>>> My new laptop uses /dev/nvme0n1 instead of /dev/sda which conflicts
>>> with the script I use to manage about 12 similar laptops running
>>> Gentoo.  Is there a udev method for renaming the disk that will work
>>> well with any USB disks that happen to also be attached?
>>
>> I'm not certain what you mean by that, but I would guess that you want
>> the nvme disk to show up as /dev/sda, and the USB disk(s) to show up
>> as /dev/sd[b-z].
>>
>> It is not possible to accomplish this using udev; the kernel owns the
>> /dev/sdX device namespace, and will sequentially create device nodes
>> for SCSI-like block devices using that namespace. There is no way to
>> change that using a udev rule.
>
>
> Can I rename /dev/sda to /dev/sd[b-z] if it's attached via USB, and
> then rename /dev/nvme0n1 to /dev/sda if /dev/nvme0n1 exists?
>
> Alternatively, can I rename /dev/sda to /dev/sd[b-z] if /dev/sda and
> /dev/nvme0n1 exist, and then rename /dev/nvme0n1 to /dev/sda if
> /dev/nvme0n1 exists?

 You might technically be able to do it, but I would guess it would
 cause some nasty race conditions between the kernel and udev. It's a
 bad idea.
>>>
>>>
>>> Is it the conditionals that cause this to be a bad idea?  Because I
>>> believe udev has functionality designed to rename devices exactly like
>>> this.
>>
>> udev doesn't provide any functionality to rename device nodes. You can
>> adjust their permissions, and create symlinks, but there is no direct
>> way to rename them.
>
>
>I use stuff like this to rename my USB devices and it works perfectly:
>
>SUBSYSTEM=="net", ACTION=="add", ENV{ID_NET_NAME_PATH}=="enp0s20u2u1",
>NAME="net0"
>
>Isn't this a true rename of the device node?
>
>- Grant

For network devices I tend to use the MAC addresses.

USB devices get a different name if you plug them into a different port.
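
A rule keyed on the MAC address (rather than the bus path shown earlier in
the thread) might look like the line below; the address and interface name
are made-up placeholders:

```
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:11:22:33:44:55", NAME="net0"
```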

--
Joost
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: [gentoo-user] Read-only access to a git repo

2017-09-04 Thread Alan McKinnon
On 04/09/2017 17:20, Simon Thelen wrote:
> On 17-09-04 at 17:05, Alan McKinnon wrote:
>> Hi,
>>
>> I fear I have a severe case of too many trees in the way to see the forest.
>>
>> I have a git server, it only runs git.
>> All the sysadmins have full access using ssh://, their keys are in git's
>> authorized_keys, the repos are owned git:git, MODE 770, etc etc, and it
>> works like it should.
>>
>> I want an app to have read-only access to some repos, i.e. clone and
>> pull only. But they are not publicly accessible i.e. the app user must
>> be pre-authorized by me and have a public key. And for the life of me I
>> can't think how to do it!
> You want to use the command= option in the .ssh/authorized_keys file for
> the key to ensure that only `git-upload-pack ' is run
> 
>> Google is only giving me results about what fancy buttons to click in
>> GitHub and Gitlab... I know, my Google-fu sucks today.
> You can look at [1] if you want a really minimal example of a git
> permissions management system.
> 
> [1]: https://github.com/cbdevnet/fugit
> 


available options: push/pull/nothing

well, that's my needs-spec exactly :-)

thanks for the link, I'll check it out later

-- 
Alan McKinnon
alan.mckin...@gmail.com




[gentoo-user] Re: Read-only access to a git repo

2017-09-04 Thread Alberto Luaces
I would add gitolite to the recommendation list, as it is a small but
powerful system: the administration interface is a git repository as
well, so each repository/user can be individually configured without
accessing system files:

http://gitolite.com/gitolite/
-- 
Alberto




Re: [gentoo-user] Read-only access to a git repo

2017-09-04 Thread Simon Thelen
On 17-09-04 at 17:05, Alan McKinnon wrote:
> Hi,
> 
> I fear I have a severe case of too many trees in the way to see the forest.
> 
> I have a git server, it only runs git.
> All the sysadmins have full access using ssh://, their keys are in git's
> authorized_keys, the repos are owned git:git, MODE 770, etc etc, and it
> works like it should.
> 
> I want an app to have read-only access to some repos, i.e. clone and
> pull only. But they are not publicly accessible i.e. the app user must
> be pre-authorized by me and have a public key. And for the life of me I
> can't think how to do it!
You want to use the command= option in the .ssh/authorized_keys file for
the key to ensure that only `git-upload-pack ' is run
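
As a sketch, the entry in git's ~/.ssh/authorized_keys could look like the
line below; the repository path, the extra lock-down options, and the key
itself are illustrative placeholders:

```
command="git-upload-pack /srv/git/myrepo.git",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA...key... app@example
```

Because the forced command pins both the operation and the repository, the
app's key can only clone/fetch that one repo; pushes fail because
git-receive-pack is never executed for it.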

> Google is only giving me results about what fancy buttons to click in
> GitHub and Gitlab... I know, my Google-fu sucks today.
You can look at [1] if you want a really minimal example of a git
permissions management system.

[1]: https://github.com/cbdevnet/fugit

-- 
Simon Thelen



[gentoo-user] Read-only access to a git repo

2017-09-04 Thread Alan McKinnon
Hi,

I fear I have a severe case of too many trees in the way to see the forest.

I have a git server, it only runs git.
All the sysadmins have full access using ssh://, their keys are in git's
authorized_keys, the repos are owned git:git, MODE 770, etc etc, and it
works like it should.

I want an app to have read-only access to some repos, i.e. clone and
pull only. But they are not publicly accessible i.e. the app user must
be pre-authorized by me and have a public key. And for the life of me I
can't think how to do it!

Google is only giving me results about what fancy buttons to click in
GitHub and Gitlab... I know, my Google-fu sucks today.

-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Rename /dev/nvme0n1 to /dev/sda

2017-09-04 Thread Grant
>> My new laptop uses /dev/nvme0n1 instead of /dev/sda which conflicts
>> with the script I use to manage about 12 similar laptops running
>> Gentoo.  Is there a udev method for renaming the disk that will work
>> well with any USB disks that happen to also be attached?
>
> I'm not certain what you mean by that, but I would guess that you want
> the nvme disk to show up as /dev/sda, and the USB disk(s) to show up
> as /dev/sd[b-z].
>
> It is not possible to accomplish this using udev; the kernel owns the
> /dev/sdX device namespace, and will sequentially create device nodes
> for SCSI-like block devices using that namespace. There is no way to
> change that using a udev rule.


 Can I rename /dev/sda to /dev/sd[b-z] if it's attached via USB, and
 then rename /dev/nvme0n1 to /dev/sda if /dev/nvme0n1 exists?

 Alternatively, can I rename /dev/sda to /dev/sd[b-z] if /dev/sda and
 /dev/nvme0n1 exist, and then rename /dev/nvme0n1 to /dev/sda if
 /dev/nvme0n1 exists?
>>>
>>> You might technically be able to do it, but I would guess it would
>>> cause some nasty race conditions between the kernel and udev. It's a
>>> bad idea.
>>
>>
>> Is it the conditionals that cause this to be a bad idea?  Because I
>> believe udev has functionality designed to rename devices exactly like
>> this.
>
> udev doesn't provide any functionality to rename device nodes. You can
> adjust their permissions, and create symlinks, but there is no direct
> way to rename them.


I use stuff like this to rename my USB devices and it works perfectly:

SUBSYSTEM=="net", ACTION=="add", ENV{ID_NET_NAME_PATH}=="enp0s20u2u1",
NAME="net0"

Isn't this a true rename of the device node?

- Grant



Re: [gentoo-user] Rename /dev/nvme0n1 to /dev/sda

2017-09-04 Thread Grant
>> I would suggest you utilize the existing symlinks in one of the
>> /dev/disk/ sub-directories, or create some udev rules to create your
>> own symlinks based on whatever metadata you wish. I would also suggest
>> you read the udev(7) manual page.
>>
>
> ++
>
> Labels are the most obvious solution to this sort of problem
> (especially if you want a generic system image that you can install in
> multiple places and not have to tweak).  UUIDs are the other obvious
> solution, but that does need to be tailored to each install.


In order to use labels for this, I would need to label each of my
potentially-connected devices, correct?

- Grant
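
Correct: each filesystem you want to address as /dev/disk/by-label/NAME
needs its label set once. A minimal sketch, assuming ext-family root
partitions; the label name and device paths are illustrative, not from
this thread:

```shell
#!/bin/sh
# Hypothetical helper: give a root filesystem a fixed label so every
# laptop can be addressed as /dev/disk/by-label/$LABEL, regardless of
# whether the kernel calls the disk sda or nvme0n1.
LABEL=gentoo-root

label_disk() {
    # e2label covers ext2/3/4; other filesystems have their own tools
    # (e.g. xfs_admin -L, fatlabel).
    e2label "$1" "$LABEL" &&
        echo "labelled: now usable as /dev/disk/by-label/$LABEL"
}

if [ -n "${1:-}" ]; then
    label_disk "$1"
else
    echo "usage: $0 /dev/nvme0n1p2 (or /dev/sda2 on the older laptops)"
fi
```

UUIDs (ls -l /dev/disk/by-uuid/) avoid the labelling step but differ per
install, so a shared script would need per-machine configuration.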



Re: [gentoo-user] Ruby - 3 versions - seriously????

2017-09-04 Thread Marvin Gülker
On 3 September 2017 at 15:35 -0500, R0b0t1 wrote:
> I think the takeaway from Alan's comment is that Python is unnaturally
> stable compared to other interpreted languages. One might be inclined
> to think Python developers consider their work to be a widely used
> tool as opposed to a toy to play with.

If you regard Ruby as a toy language, I'm inclined to say that quite a
number of (most often Rails) applications use it for serious
projects.

> Then why does he have all three installed?

I cannot answer that. I lack the insight into the Gentoo Ruby deployment
process.

> These are all fairly major changes for a minor release. I'm not really
> sure any of this evidence supports the opinion that Ruby doesn't
> experience breaking changes more regularly than other languages.

I have not made the claim that Ruby is more stable than other languages;
in particular, I never said that Ruby is as stable as Python. My
intention was to counter the statement that every Ruby minor release is
a "complete new language". The changes I listed are breaking, but not to
a degree that justifies the "complete new language" statement.

> Leading into my next point, I remember some conversations about people
> discussing the Ruby parser and how there was no BNF description of the
> language. Consequently (from memory) there was at least one
> implementation of Ruby that was encountering regressions in the parser
> between versions, regressions that were undocumented and not detected
> until the releases had already been made. The result was that code was
> semantically different between some versions. Regrettably I'm having
> trouble citing this one.

It was JRuby as far as I know, but don't hold me to that. I don't have
a citation handy either, nor do I remember the exact circumstances.

> Situations like the above, and reliance on private C interfaces, are
> what makes it seem plausible to me that there are packages that
> require a version that has no listed breaking changes.

Using unsupported private C interfaces is going to make any package
break in any language over time. This is not Ruby's fault.

> This statement makes me think you haven't tried to understand the
> issue, as that ISO document - to the best of my knowledge, I can't
> actually view it without paying money - describes Ruby 1.8.1 and
> potentially some features from 1.9. Hearsay indicates it was started
> at the behest of the Japanese government so that they could use Ruby
> for internal projects, as their rules seem to require standards
> documents for software. This is important, because it shows that there
> is no real effort by Ruby's lead developer or the Ruby community to
> produce a legitimate standards document.

I've not worked with the ISO document. You requested a formal standard,
and I replied that there's an ISO document, which I regard as a
standard. I didn't know that it describes such an ages-old version of
Ruby (though I should have known better given the date). Since the 2.4.0
release post on ruby-lang.org justifies removal of Fixnum/Bignum with
the interpreter not being compliant with the ISO standard, I was under
the impression that it was still usable. If it isn't, I apologise.

> In practice one finds more references to something called RubySpec
> which is an executable implementation of what people like to call a
> specification. RubySpec appears to be discontinued[1], but even when
> it was in use there are three things that should be pointed out:

The RubySpec was started as a community effort indeed, but if you only
read the Rubinius view of it, you're going to see a lot of bias. The
Rubinius main maintainer withdrew from the effort by his own
decision. Consequently, RubySpec is now maintained by the core team
of the canonical Ruby implementation[1]. Thus, it is not true that the
core developers do not make use of RubySpec.

> If you look at the RubySpec code you will see that the "specification"
> consists of testcases that attempt to define the behavior of Ruby. As
> mentioned, these tests are written in Ruby, and are subject to bugs in
> Ruby that are made undetectable, or very hard to detect, by the
> self-referential relationship of the behavioral specification and the
> language.

This sounds logical to me, and I agree. I'm not the right person to
address this to, though. From a formal point of view, I surely cannot
compete with you. The spirit of Ruby development appears to follow a
practical rather than a formal approach, which is always going to be
inferior.

> What I have read in this regard leads me to conclude that Ruby is not
> a language that I should use for my development, and it pains me to
> say this.

Use the tool that fits the job for you. I wonder if I was perceived as
using Ruby everywhere; this isn't the case. Actually, I don't write much
Ruby code currently; it's much more C/C++.

Marvin

[1]: https://github.com/ruby/spec



Re: [gentoo-user] electron and sslv3

2017-09-04 Thread Damo Brisbane
Emerge -pv openssl:

[ebuild   R] dev-libs/openssl-1.0.2l::gentoo  USE="asm sslv3
tls-heartbeat zlib -bindist -gmp -kerberos -rfc3779 -sctp -sslv2
-static-libs {-test} -vanilla"...

I figured SSL is better off without it. I think the issue with this package
is that it builds its own version of Chromium as part of the emerge, and I
think that is where the SSL dependency comes in. You're right though; the
package maintainer is where I need to head next.

Thanks

On Sat, Sep 2, 2017 at 11:40 AM, Adam Carter  wrote:

> On Sat, Sep 2, 2017 at 6:26 AM, Damo Brisbane 
> wrote:
>
>> Hello,
>>
>> I am having troubles installing dev-util/electron, related to linking in
>> "ssl3" in the final step of the ebuild, from build log:
>>
>> /usr/lib/gcc/x86_64-pc-linux-gnu/6.3.0/../../../../x86_64-pc-linux-gnu/bin/ld:
>> cannot find -lssl3
>>
>>
>> FYI on ssl, I only want a "working/current" ssl and/or tls installation
>> and I don't care for the details around the installation other than I would
>> like - as much as possible - "ssl" to be future proof and compatible with
>> current and new installs; in this case I just want electron, and I can't
>> install the package because of this linking error. I can successfully build
>> by hacking the final link step and simply remove the reference to "-lssl",
>> below:
>>
>>
>> > cd $PORTAGE_TMPDIR/dev-util/electron-1.3.13-r1/work/chromium-
>> 52.0.2743.82/out/R
>> > x86_64-pc-linux-gnu-g++ -Wl,-O1 -Wl,--a 
>> > obj/atom/app/electron.atom_main.o  obj/libelectron_lib.a
>> o... lib/libnode.so lib/libv8.so -lz -lhttp_parser -lssl -lcrypto -
>>
>>
>> and compiles fine.
>>
>> There are no "ssl" use flags on electron?:
>>
>
> My first guess would be that your openssl is not compiled with sslv3. The
> ebuild for electron only asks for >=dev-libs/openssl-1.0.2g:0=[-bindist],
> not openssl[sslv3]. If that's the problem then there's a bug in electron's
> ebuild.
>
> What does emerge -pv openssl show for use flags?
>
> However, ssl is pretty much deprecated these days due to security issues,
> so unless you have a need to support something that can't do TLS, you're
> better off leaving it out. Another issue may be that -lssl may be a loose
> term for SSL+TLS...
>