[gentoo-user] Re: Anyone running a hardened profile?

2015-09-06 Thread James
walt <...@gmail.com> writes:


> https://wiki.gentoo.org/wiki/Hardened_Gentoo

> That wiki page is very seductive.  It makes me want to drop everything
> and select a hardened profile and re-emerge everything from scratch.

Blueness has some interesting builds that might be worth taking for
a test drive before you commit to hardened.  Read up on them
to ensure they meet your needs if you do not have a second, older
computer to install on and test. [1]


> But I have a feeling I'd soon be in big trouble if I did.  Is this
> something that only gentoo devs should be messing with, or is this
> a project that a typical gentoo end-user might hope to accomplish
> without frequent suicidal thoughts?

Scary thoughts. I keep old systems around to test things out
first. ymmv.

Also Pentoo (by dev zerochaos) has an installer and is hardened. [2] 

James


[1] http://releases.freeharbor.net/

[2] http://www.pentoo.ch/download/




[gentoo-user] Anyone running a hardened profile?

2015-09-06 Thread walt
https://wiki.gentoo.org/wiki/Hardened_Gentoo

That wiki page is very seductive.  It makes me want to drop everything
and select a hardened profile and re-emerge everything from scratch.

But I have a feeling I'd soon be in big trouble if I did.  Is this
something that only gentoo devs should be messing with, or is this
a project that a typical gentoo end-user might hope to accomplish
without frequent suicidal thoughts?





Re: [gentoo-user] Anyone running a hardened profile?

2015-09-06 Thread Fernando Rodriguez
On Sunday, September 06, 2015 1:15:17 PM walt wrote:
> https://wiki.gentoo.org/wiki/Hardened_Gentoo
> 
> That wiki page is very seductive.  It makes me want to drop everything
> and select a hardened profile and re-emerge everything from scratch.
> 
> But I have a feeling I'd soon be in big trouble if I did.  Is this
> something that only gentoo devs should be messing with, or is this
> a project that a typical gentoo end-user might hope to accomplish
> without frequent suicidal thoughts?

There are different opinions on it, but mine is that while it adds some security, 
it's so little that it's not worth it in most cases. It provides more security 
on a binary distro, because everyone has the same binaries and an attacker 
doesn't need to guess where a specific piece of code may get loaded, but by 
running a source distro your address space is already pretty unique. The only 
case where it provides some security is when an attacker is trying to guess an 
address for an exploit: making the wrong guess will likely crash the process, 
and it will be reloaded at a new address. Do you have valuable enough data for 
an attacker to go through that hassle in order to get it? If you do, then you 
should use a hardened profile, but physical security and disk encryption are 
more important, because if it's worth that much it'll be easier to just rob 
you.

Be aware that there's no hardened desktop profile, so that alone will make it 
somewhat harder if you plan to use it on a desktop.

Another reason is if you want to use something like SELinux (which doesn't 
require a hardened profile), which gives you very fine-grained access control, 
but it's also very restrictive. I think it's only worth it for large networks 
with many users and different levels of access to sensitive data.

I needed some of SELinux's features but settled for using AppArmor in an unusual 
way to accomplish them, because SELinux is too much trouble. All AppArmor 
really does is provide process isolation or sandboxing. If an attacker gains 
access through an exploit, he will only be able to access the files that the 
exploited service has access to. I use it with a catch-all profile that 
prevents execution from all world-writable and home directories, and access 
to ssh/pgp keys, keyrings, etc. This works nicely for servers and desktops and 
is not too restrictive. And if I need to execute code from my home dir for 
development, I can launch an unrestricted shell via sudo. I can leave my laptop 
unlocked with the wallet open (I use the kwallet pam module) and it will be 
really hard for you to get anything like ssh keys or passwords (I also have 
patches for kwallet so it requires a password to show saved passwords), but 
the programs that need them have access to them.
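
For illustration only, a minimal sketch of the kind of catch-all AppArmor
profile being described (the profile name, paths and rules here are
hypothetical, not the author's actual setup):

# /etc/apparmor.d/usr.bin.example-service -- hypothetical catch-all style profile
#include <tunables/global>

/usr/bin/example-service {
  #include <abstractions/base>

  # deny executing anything from world-writable locations and home directories
  deny /tmp/** mrwklx,
  deny /var/tmp/** mrwklx,
  deny /home/** x,

  # keep ssh/pgp keys and keyrings out of reach of the confined process
  deny @{HOME}/.ssh/** mrwkl,
  deny @{HOME}/.gnupg/** mrwkl,
  deny @{HOME}/.local/share/keyrings/** mrwkl,
}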


-- 
Fernando Rodriguez



Re: [gentoo-user] Anyone running a hardened profile?

2015-09-06 Thread Michael Orlitzky
On 09/06/2015 04:15 PM, walt wrote:
> https://wiki.gentoo.org/wiki/Hardened_Gentoo
> 
> That wiki page is very seductive.  It makes me want to drop everything
> and select a hardened profile and re-emerge everything from scratch.
> 
> But I have a feeling I'd soon be in big trouble if I did.  Is this
> something that only gentoo devs should be messing with, or is this
> a project that a typical gentoo end-user might hope to accomplish
> without frequent suicidal thoughts?

It depends on how many hardening features you want to enable. It's a lot
easier than it used to be because there's a kernel config thingy that
lets you pick safe options without understanding all the details. You
can get a lot of protection for very little risk by enabling pax/grsec
and checking a few boxes in the hardened kernel config.

Just beware that there are kernel options that will clobber things like
cpupower and others that will slow down specific programs like clamav
with JIT. Anyway, we're all here because we like to tinker with things
until they're broken, right? Give it a try and be sure to read the
kernel help pages carefully and have fun. You can always switch back to
a non-hardened kernel and everything will go back to normal.
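
A hedged sketch of the sort of kernel .config fragment this refers to (option
names are from the grsecurity/PaX patch set; exact names and availability vary
between patch versions):

# Security options --> Grsecurity (menuconfig path under the grsec patch)
CONFIG_PAX=y
CONFIG_PAX_ASLR=y
CONFIG_PAX_NOEXEC=y
CONFIG_GRKERNSEC=y
# the automatic configuration picks safe defaults for the machine's role;
# CONFIG_GRKERNSEC_CONFIG_SERVER=y is the server-oriented alternative
CONFIG_GRKERNSEC_CONFIG_AUTO=y
CONFIG_GRKERNSEC_CONFIG_DESKTOP=y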




Re: [SOLVED] [gentoo-user] poppler-0.32.0 - error

2015-09-06 Thread Fernando Rodriguez
On Sunday, September 06, 2015 3:40:52 PM Fernando Rodriguez wrote:
> On Sunday, September 06, 2015 12:21:13 PM the...@sys-concept.com wrote:
> > 
> > On 09/05/2015 03:19 PM, Fernando Rodriguez wrote:
> > [snip]
> > >>>
> > >>> I was right that rebuilding libtiff (not perl-cleaner) will get you past that
> > >>> ebuild, but my advice was the same you just received, go back to the *VERY
> > >>> FIRST* post on the subject and fix the root of the problem. You will see that
> > >>> error again (or worse runtime errors) until you fix it.
> > >>>
> > >>> Hint: It was Alan that gave you the solution. You chose to fix it by installing
> > >>> an incompatible jpeg implementation. That needs to be undone.
> > >>
> > >> It seems to me Alan solution:
> > >> emerge -avC libjpeg-turbo
> > >> emerge -av1 media-libs/jpeg:0 media-libs/jpeg:62
> > >>
> > >> It will come to hunt me.  It allow me to keep going with upgrades as I
> > >> couldn't find any better solution.
> > >>
> > >> Currently I have media-libs/jpeg installed in two slots:
> > >> media-libs/jpeg-6b-r12:62
> > >> media-libs/jpeg-8d-r1:0
> > >>
> > >> I can not get rid of "media-libs/jpeg-6b-r12" due to dependency
> > >> net-misc/nxclient-3.5.0.7 (=media-libs/jpeg-6*)
> > >>
> > >> I need "nxclient" for now until I find better solution.
> > >>
> > >> According to: equery d media-libs/jpeg-8d-r1
> > >>  * These packages depend on media-libs/jpeg-8d-r1:
> > >> virtual/jpeg-0-r2 (>=media-libs/jpeg-8d-r1:0
> > >>
> > >> Maybe I could uninstall jpeg-8d-r1 as only "virtual/jpeg-0-r2" depends
> > >> on it.
> > >>
> > >> Thelma
> > >>
> > > 
> > > That was your own solution. You have a few ways to solve it.
> > > 
> > > 1. Grab the last version of nxclient from the gentoo attic (or from
> > > /var/db/pkg/net-misc/nxclient if it doesn't need any files to build) change the
> > > dependency to virtual/jpeg, put it on a local overlay and re-emerge it (after
> > > replacing libjpeg with libjpeg-turbo again)
> > 
> > > 2. Rebuild everything that depends on libjpeg on the right order. Something
> > > like (I have not tested this command, but it may build stuff you don't have
> > > installed, hence the --depclean, maybe somebody can give you a better one):
> > > 
> > > # emerge --oneshot `equery depends virtual/jpeg`
> > > # emerge --depclean --ask
> > > 
> > > 3. This is a hack and I've never done it. In /var/db/pkg/net-misc/nxclient
> > > update all media-libs/jpeg to the virtual/jpeg. Then unmerge all media-
> > > libs/jpeg slots and emerge libjpeg turbo. This should work because anything
> > > linked against jpeg *should* work with jpeg-turbo but not the other way
> > > around.
> > 
> > Good ideas, thank you.
> > I'm trying to implement one of this ideas but I have a problem with making
> > manifest.  The nxclient-3.5.0.7.ebuild is not in "attic" but I have a local
> > copy.
> > Though, I can not "manifest" the ebuild it can not be found:
> > nxclient-3.5.0-7.x86_64.tar.gz
> > ebuild /usr/local/portage/net-misc/nxclient/nxclient-3.5.0.7.ebuild manifest
> 
> Change the SRC_URI variable on the ebuild to point to one of those mirrors. If
> you already have it on distfiles it probably doesn't find it because the SRC_URI
> has a different filename.
> 
> > Layman -L
> > doesn't list it either
> 
> That's normal because it's not a layman managed overlay. Just make sure to
> list it on your repos.conf. When you emerge it with the -v option it should
> show your overlay name instead of gentoo next to the package name.
> 
> If it's a binary package you could also just add the dependencies to your
> world file or to a set and just unpack it manually to /opt.
> 
> BTW. It is on the attic, and it does requires extra files. Just copy the whole
> directory from the attic, including files/, then update the SRC_URI and the
> jpeg dependency.
> 
> https://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/net-misc/nxclient/?hideattic=0
> 
> > Various places have it, like:
> > https://baobabmaster.unige.ch/download/linux/
> > 
> > I downloaded and copy the file to:  /usr/portage/distfiles
> > but system still try to download the file when I try to make ebuild ...
> > manifest"
> > 
> > Thelma
> > 
> 
> 

Before you uninstall libjpeg, back up all the libjpeg libraries from /usr/lib. 
If for some reason it doesn't work with libjpeg-turbo you can just copy them 
to /usr/NX/lib as it seems to load libraries from there.

-- 
Fernando Rodriguez



Re: [SOLVED] [gentoo-user] poppler-0.32.0 - error

2015-09-06 Thread thelma
On 09/06/2015 01:40 PM, Fernando Rodriguez wrote:
[snip]
> 
>> Layman -L
>> doesn't list it either
> 
> That's normal because it's not a layman managed overlay. Just make sure to 
> list it on your repos.conf. When you emerge it with the -v option it should 
> show your overlay name instead of gentoo next to the package name.

I think you are talking about:
/etc/portage/repos.conf/local.conf

In that file I have:

cat /etc/portage/repos.conf/local.conf
[local_portage_nxclient]
location = /usr/local/portage
masters = gentoo
auto-sync = no
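
(As a quick sanity check, emerge -pv should then show the section name above as
the repository label; the output here is abbreviated and purely illustrative:)

# emerge -pv1 net-misc/nxclient
[ebuild  N    ] net-misc/nxclient-3.5.0.7::local_portage_nxclient ...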


Thelma.



Re: [gentoo-user] Anyone running a hardened profile?

2015-09-06 Thread Andreas K. Huettel

> But I have a feeling I'd soon be in big trouble if I did.  Is this
> something that only gentoo devs should be messing with, or is this
> a project that a typical gentoo end-user might hope to accomplish
> without frequent suicidal thoughts?

In most cases hardened just works. 

You may/will run into trouble when binary-only software (e.g. skype, nvidia-
drivers, ...) is used. If you can get along with an opensource-only system you 
will most likely be fine.

It's useful to be familiar with bugzilla, bug reporting procedures, ... and 
hang out on #gentoo-hardened in case of trouble.


-- 

Andreas K. Huettel
Gentoo Linux developer 
dilfri...@gentoo.org
http://www.akhuettel.de/




Re: [SOLVED] [gentoo-user] poppler-0.32.0 - error

2015-09-06 Thread thelma


Thelma

On 09/06/2015 01:40 PM, Fernando Rodriguez wrote:
[snip]
>> Though, I can not "manifest" the ebuild it can not be found: 
> nxclient-3.5.0-7.x86_64.tar.gz
>> ebuild /usr/local/portage/net-misc/nxclient/nxclient-3.5.0.7.ebuild manifest
> 
> Change the SRC_URI variable on the ebuild to point to one of those mirrors. 
> If 
> you already have it on distfiles it probably doesn't find it because the 
> SRC_URI 
> has a different filename.
> 
>> Layman -L
>> doesn't list it either
> 
> That's normal because it's not a layman managed overlay. Just make sure to 
> list it on your repos.conf. When you emerge it with the -v option it should 
> show your overlay name instead of gentoo next to the package name.
> 
> If it's a binary package you could also just add the dependencies to your 
> world file or to a set and just unpack it manually to /opt.
> 
> BTW. It is on the attic, and it does requires extra files. Just copy the 
> whole 
> directory from the attic, including files/, then update the SRC_URI and the 
> jpeg dependency.

Hmm I don't know which directory you're talking about, is it:
https://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/net-misc/nxclient/files/?hideattic=0

I just downloaded: nxclient-3.5.0.7.ebuild
and have only two files in:
/usr/local/portage/net-misc/nxclient
-rw-r--r-- 1 root root 1162 Sep  6 17:21 Manifest
-rw-r--r-- 1 root root 1991 Sep  6 17:21 nxclient-3.5.0.7.ebuild

> https://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/net-misc/nxclient/?hideattic=0
> 
>> Various places have it, like:
>> https://baobabmaster.unige.ch/download/linux/

I removed: media-libs/jpeg
and emerged libjpeg-turbo

I think it worked. I changed in nxclient-3.5.0.7.ebuild:
...
SRC_URI="http://download.vrdc.cornell.edu/extras/other/nxclient-3.5.0-7.x86_64.tar.gz"

RDEPEND="dev-libs/expat
	>=dev-libs/openssl-0.9.8
	media-libs/audiofile
	media-libs/libjpeg-turbo
...

Generated new manifest and it compiled just fine.

This also solved the problem I previously had with audacity-2.0.5; now it 
compiles just fine.

Thanks,
Thelma



[gentoo-user] Haskell stack

2015-09-06 Thread David Haller
Hello all,

could it be that the dev-haskell stack deps are broken/out-of-sync?

I tried to emerge pandoc today, which pulled in haskell, and I got
lots of breakages. The tree is freshly synced. Anyway, I finally got
it solved by masking:

# grep haskell /etc/portage/* 2>/dev/null
/etc/portage/package.mask:>=dev-haskell/text-1.0
/etc/portage/package.mask:>=dev-haskell/yaml-0.8.10.1
/etc/portage/package.mask:>=dev-haskell/blaze-builder-conduit-1.1.0
/etc/portage/package.mask:>=dev-haskell/tls-1.2
/etc/portage/package.mask:dev-haskell/text-stream-decode
/etc/portage/package.mask:>=dev-haskell/conduit-1.0.10

Without these masks, conflicting versions of e.g. dev-haskell/text
were required. To my naive view, that smells of missing slots. Judging
from dev-haskell/text:

# eix dev-haskell/text
[I] dev-haskell/text
 Available versions:  0.11.3.1(0/0.11.3.1){tbz2} [m]1.1.1.3(0/1.1.1.3)^t{tbz2} [m]~1.2.1.3(0/1.2.1.3) {developer doc hscolour profile test}
 Installed versions:  0.11.3.1{tbz2}(00:21:34 07/09/15)(doc -developer -hscolour -profile -test)

(note the [m]asked by me versions, those should go in a SLOT=1).

The directory structure seems to support parallel installs:

/usr/lib64/text-0.11.3.1/ghc-7.6.3/Data/Text.hi
/usr/lib64/text-0.11.3.1/ghc-7.6.3/libHStext-0.11.3.1.a

so adding proper slots should be easy, but I'm not sure.
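
Purely as an illustration of what that would look like (hypothetical ebuild
fragments, not an actual change in the tree), the two versions would get
different SLOTs and reverse dependencies would pin the one they build against:

# hypothetical: in text-1.1.1.3.ebuild
SLOT="1"
# hypothetical: in text-0.11.3.1.ebuild
SLOT="0"
# a consumer needing the old API would then depend on e.g.
RDEPEND="dev-haskell/text:0="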

Can anyone reproduce this problem? I did not have _any_ haskell
package installed so far.

Should I open a bug? Or just wait a couple of days, sync, and try
again without the masks?

TIA,
-dnh

PS: does haskell have no dynamic libraries?

-- 
"God, root, what is difference?" - Pitr
"God is more forgiving." - Dave Aronson



Re: [gentoo-user] Haskell stack

2015-09-06 Thread Michael Orlitzky
On 09/06/2015 10:39 PM, David Haller wrote:
> 
> Can anyone reproduce this problem? I did not have _any_ haskell
> package installed so far.
> 
> Should I open a bug? Or just wait a couple of days, sync, and try
> again without the masks?

Yes please open a bug.


> 
> PS: does haskell have no dynamic libraries?
> 

Historically no, but it's getting better.




Re: [gentoo-user] Anyone running a hardened profile?

2015-09-06 Thread Hinnerk van Bruinehsen
Hello walt,

I've been running hardened (hardened profile + kernel (including pax and grsec)) for
at least 5 years on all my boxes except one (my gaming-only box).
It's been quite a while since I last had an issue caused by the hardened profile that
wasn't due to my fiddling around with stuff like SELinux and the like. Even
those problems were all relatively easy to solve.
Truth be told: I guess that you won't even notice the hardened profile unless
you also run a hardened kernel (while still having increased security).

On Sun, Sep 06, 2015 at 01:15:17PM -0700, walt wrote:
> https://wiki.gentoo.org/wiki/Hardened_Gentoo
> 
> That wiki page is very seductive.  It makes me want to drop everything
> and select a hardened profile and re-emerge everything from scratch.
> 
> But I have a feeling I'd soon be in big trouble if I did.  Is this
> something that only gentoo devs should be messing with, or is this
> a project that a typical gentoo end-user might hope to accomplish
> without frequent suicidal thoughts?



Re: [SOLVED] [gentoo-user] poppler-0.32.0 - error

2015-09-06 Thread Fernando Rodriguez
On Sunday, September 06, 2015 6:14:52 PM the...@sys-concept.com wrote:
> 
> Thelma
> 
> On 09/06/2015 01:40 PM, Fernando Rodriguez wrote:
> [snip]
> >> Though, I can not "manifest" the ebuild it can not be found:
> >> nxclient-3.5.0-7.x86_64.tar.gz
> >> ebuild /usr/local/portage/net-misc/nxclient/nxclient-3.5.0.7.ebuild manifest
> > 
> > Change the SRC_URI variable on the ebuild to point to one of those mirrors. If
> > you already have it on distfiles it probably doesn't find it because the SRC_URI
> > has a different filename.
> > 
> >> Layman -L
> >> doesn't list it either
> > 
> > That's normal because it's not a layman managed overlay. Just make sure to
> > list it on your repos.conf. When you emerge it with the -v option it should
> > show your overlay name instead of gentoo next to the package name.
> > 
> > If it's a binary package you could also just add the dependencies to your
> > world file or to a set and just unpack it manually to /opt.
> > 
> > BTW. It is on the attic, and it does requires extra files. Just copy the whole
> > directory from the attic, including files/, then update the SRC_URI and the
> > jpeg dependency.
> 
> Hmm I don't know which directory you talking about, is it:
> https://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/net-misc/nxclient/files/?hideattic=0

That's it. If you don't need them, they were likely for old versions, so 
never mind.

> I just downloaded: nxclient-3.5.0.7.ebuild
> and have only two files in:
> /usr/local/portage/net-misc/nxclient
> -rw-r--r-- 1 root root 1162 Sep  6 17:21 Manifest
> -rw-r--r-- 1 root root 1991 Sep  6 17:21 nxclient-3.5.0.7.ebuild
> 
> > https://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/net-misc/nxclient/?hideattic=0
> > 
> >> Various places have it, like:
> >> https://baobabmaster.unige.ch/download/linux/
> 
> I removed: media-libs/jpeg
> and emerged libjpeg-turbo
> 
> I think it worked, I changed in: nxclient-3.5.0.7.ebuild
> ...
> SRC_URI="http://download.vrdc.cornell.edu/extras/other/nxclient-3.5.0-7.x86_64.tar.gz"
> 
> RDEPEND="dev-libs/expat
> >=dev-libs/openssl-0.9.8
> media-libs/audiofile
> media-libs/libjpeg-turbo
> ...
> 
> Generated new manifest and it compiled just fine.

Good. You probably want to make that dependency virtual/jpeg and re-emerge so 
if it changes in the future you don't run into the same problem. 

And yes, that was the right file in repos.conf, I just have it all in a single 
repos.conf file which works too.

> This solved the problem with audacity-2.0.5 I had a problem previously; now 
> it compiles just fine.
> 
> Thanks,
> Thelma
> 

-- 
Fernando Rodriguez



Re: [gentoo-user] broken seamonkey :(

2015-09-06 Thread lee
Fernando Rodriguez  writes:

> On Saturday, September 05, 2015 6:09:36 PM Mick wrote:
>> On Saturday 05 Sep 2015 14:06:27 lee wrote:
>> > Fernando Rodriguez  writes:
>> > > On Saturday, September 05, 2015 1:05:06 AM lee wrote:
>> > >> In this case, I happen to have full physical access to the server and
>> > >> thus to the certificate stored on it.  This is not the case for, let's
>> > >> say, an employee checking his work-email from home whom I might give 
> the
>> > >> login-data on the phone and instruct to add an exception when the 
> dialog
>> > >> to do so pops up when they are trying to connect.
>> > > 
>> > > As a workaround you can create your own CA cert. I tested with a windows
>> > > self- signed cert (I guess the correct term is self-issued) and the
>> > > openssl command will show two certs. The second is the CA.
>> > > 
>> > > http://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certifica
>> > > te-authority/
>> > 
>> > They're saying:
>> > 
>> > 
>> > "Whatever you see in the address field in your browser when you go to
>> > your device must be what you put under common name, even if it’s an IP
>> > address.  [...]  If it doesn’t match, even a properly signed certificate
>> > will not validate correctly and you’ll get the “cannot verify
>> > authenticity” error."
>> > 
>> > 
>> > What's the solution for a server which can be reached by different fqdns
>> > and IPs?  What if the fqdns and IPs it can be reached by change over the
>> > lifetime of the certificates?
>> 
> [...]
>
> Wildcards  should do it. The browser will give you a warning but you don't 
> care since all you want is encryption and your users already trust you.

True --- and the problem will be back again when seamonkey etc. decide
not to accept certificates with wildcards anymore.

> The only thing that matters about that article is that you'll be signing your 
> certificate with the CA ones so you get two certificates when you run the 
> openssl command, the last one is the CA certificate. If you, or your users 
> add 
> trust to that one, anything you sign with it will be trusted.
>
> I only tried it with a windows server issued certificate which does all that 
> by 
> default.

Changing the key would be a last resort.

If I do that, should I use a SHA-3 key?  Would that work, or is SHA-3
too new?

> Since it lets you open the exception dialog but just hangs when downloading 
> the certificate I wonder if it has something to do with your OCSP settings. 
> Check that they match mine:
>
> security.OCSP.GET.enabled false
> security.OCSP.enabled 1
> security.OCSP.require false
>
> everything else is true.

I checked, and we have the same settings.  It doesn't really hang, it
does nothing when I try to get the certificate.  Does it do something
when you try?


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] broken seamonkey :(

2015-09-06 Thread lee
Mick  writes:

> On Saturday 05 Sep 2015 22:40:09 Fernando Rodriguez wrote:
>> 
>> Since it lets you open the exception dialog but just hangs when downloading
>> the certificate I wonder if it has something to do with your OCSP settings.
>> Check that they match mine:
>> 
>> security.OCSP.GET.enabled false
>> security.OCSP.enabled 1
>> security.OCSP.require false
>> 
>> everything else is true.
>
> Some reports mention a couple of workarounds which may solve this problem:
>
> 1. Remove your certificate from *any* tabs that may have been saved in.  
> Check 
> that it is no longer stored in any tab.
>
> Then try to reload it in the Authorities tab and see if it will allow you to 
> set up an exception.
>
> 2. I think you mentioned that you tried a fresh profile, but just in case:  
> Make a back up of your Mozilla Profile.  Go to Help/Troubleshooting and 
> select 
> Refresh Firefox (or the equivalent for the SeaMonkey GUI).

I moved ~/.mozilla out of the way, imported the certificate under
authorities, gave it all trust, and I'm getting exactly the same as
before when trying to connect:  The dialog to add an exception comes up,
with the buttons to add one disabled, and clicking on "Get Certificate"
does nothing.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] broken seamonkey :(

2015-09-06 Thread lee
Mick  writes:

> On Saturday 05 Sep 2015 17:22:24 lee wrote:
>> Mick  writes:
>> > On Saturday 05 Sep 2015 02:08:47 Fernando Rodriguez wrote:
>> >> On Saturday, September 05, 2015 1:05:06 AM lee wrote:
>> >> > In this case, I happen to have full physical access to the server and
>> >> > thus to the certificate stored on it.  This is not the case for, let's
>> >> > say, an employee checking his work-email from home whom I might give
>> >> > the login-data on the phone and instruct to add an exception when the
>> >> > dialog to do so pops up when they are trying to connect.
>> >> 
>> >> As a workaround you can create your own CA cert. I tested with a windows
>> >> self- signed cert (I guess the correct term is self-issued) and the
>> >> openssl command will show two certs. The second is the CA.
>> >> 
>> >> http://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certific
>> >> ate -authority/
>> > 
>> > lee, on my FF I can import a self-signed certificate when I go to:
>> >  about:preferences#advanced
>> 
>> You mean to enter this as an URL, just like about:config?  When I do
>> that, I'm getting "The URL is not valid and cannot be loaded. The
>> provided address is not in a recognized format. Please check the
>> location bar for mistakes and try again.".
>> 
>> Maybe that only works with firefox?
>
> Yes, it seems to be the case that SeaMonkey has some GUI differences to 
> Firefox.  I am on Firefox-38.2.1 at present.

Does Firefox even have a MUA built in?  IIRC it's only the web browser
part of seamonkey.

>> > and then select the 'Servers' tab.  After I import it I can select it and
>> > click on the 'Add Exception' button at the bottom of the tab.  Enter the
>> > http address of the server and FF should go and fetch it afresh when you
>> > click on 'Get Certificate', then tick 'Permanently store this exception'
>> > and 'Confirm Security Exception'.  These buttons will be greyed out if
>> > do not download the certificate or if I am running FF in Private
>> > Browsing mode.
>> 
>> I'm guessing you might be in the window that shows up when you edit
>> preferences and go to 'Privacy & Security --> Certificates --> Manage
>> Certificates ...' and then to the "Servers" tab.
>
> Yes, this is the location I am referring to.  However, if it is hanging and 
> not connecting to the server to fetch the certificate something is not right. 
>  
> This is the reason with the exception button it greyed out.
>
> I can't recall if you tried this:
>
> Can you please remove it from Servers and try adding it to the Authorities 
> tab?  Your version may have additional verification checks for self-signed 
> certificates, because they essentially acting as their own Root CAs.

Yes, I tried that.

>> From there, I can import the certificate I downloaded with openssl.
>> Once imported, I can click on "Add Exceptions".  That gives me the same
>> dialog which comes up when I'm trying to connect which doesn't allow me
>> to add an exception because the buttons to do so are disabled.  The
>> dialog remains stuck at "Checking Information" indefinitely.
>> 
>> I'm attaching a screenshot:
>
> The fact that it is hanging and not obtaining the certificate makes me wonder 
> if you need to specify a domain name in the CN field of the certificate, 
> identical to the full URI that the client is trying to connect to.

That brings us back to the impractical idea of trying to bind a
certificate to a specific fqdn or IP, or to a number of those.

Is it possible to create a certificate that doesn't use either but a
wildcard only?  I don't understand why or how an fqdn/IP in a
certificate could or should be relevant at all.

When creating the certificate, I have used the fqdn the host does
actually have and knows itself by (because I needed to fill in the
fields, and it seemed most reasonable to use the actual host name).

That this host can be reached at all, via different fqdns and IPs, is a
matter of network traffic (re-)direction and of how the DNS-entries
currently happen to be.  They are all transparent and irrelevant to the
user/client and subject to change.  Why should they matter for a
certificate which is supposed to let me figure out whether I'm
connecting to the host I'm expecting to connect to, or to something
else?

When a friend calls you on the phone, you do not insist that they are
not your friend and reject their call just because they're calling you
from a different phone number.  You do not reject their call and insist
that they are not your friend because the call has been (re-)directed
over a satellite or goes through an asterisk server.  You do not insist
that your friend is someone else when they show up at your door wearing
different clothes than they usually do.  Instead, you figure out that the
caller, or the person at your door, is your friend by the human
equivalent of a certificate.


-- 
Again we must be afraid of speaking of daemons for fear that 

Re: [gentoo-user] broken seamonkey :(

2015-09-06 Thread lee
Mick  writes:

> On Saturday 05 Sep 2015 14:06:27 lee wrote:
>> Fernando Rodriguez  writes:
>> > On Saturday, September 05, 2015 1:05:06 AM lee wrote:
>> >> In this case, I happen to have full physical access to the server and
>> >> thus to the certificate stored on it.  This is not the case for, let's
>> >> say, an employee checking his work-email from home whom I might give the
>> >> login-data on the phone and instruct to add an exception when the dialog
>> >> to do so pops up when they are trying to connect.
>> > 
>> > As a workaround you can create your own CA cert. I tested with a windows
>> > self- signed cert (I guess the correct term is self-issued) and the
>> > openssl command will show two certs. The second is the CA.
>> > 
>> > http://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certifica
>> > te-authority/
>> 
>> They're saying:
>> 
>> 
>> "Whatever you see in the address field in your browser when you go to
>> your device must be what you put under common name, even if it’s an IP
>> address.  [...]  If it doesn’t match, even a properly signed certificate
>> will not validate correctly and you’ll get the “cannot verify
>> authenticity” error."
>> 
>> 
>> What's the solution for a server which can be reached by different fqdns
>> and IPs?  What if the fqdns and IPs it can be reached by change over the
>> lifetime of the certificates?
>
> If we are talking about changing subdomains, e.g. mailserver1.mydomain.com 
> and 
> mailserver2.mydomain.com then you could use a wildcard CN field descriptor in 
> your certificate:  *.mydomain.com

just different fqdns and IPs with which the server can be reached during
the lifetime of the certificate (which is only 10 years)

Can you foresee all possible fqdns and IPs it might have in the next 10
years?

I think it would be retarded to bind a certificate to a fqdn or IP. Both
can change at any time.  It's the certificate that counts and which can
be verified by a fingerprint before accepting it.  The host name and IP
are entirely irrelevant for this.  In some cases, you can even say that
the certificate is for an organization, not for a particular host or
device (operated by or being operated on behalf of the organization).


Or think of gpg: Binding a certificate to fqdn/IP would be like binding
my gpg public key to the place (like my postal address) I am at so that
I can decrypt something only when I happen to be at the right
place. Then give me an option to add multiple places to my pubkey when
creating it, and as soon as I'm at another place about which I haven't
foreseen that I might be there some time, I will have a problem.  For
all I know, I could be travelling in a car or a train or an air plane.

It's impractical.  Change is a constant.

> If we are talking about a multidomain certificate, then you would have the 
> main domain name in CN and add all the remaining domain names in the 
> subjectAltName field.
>
> For example:
>
> [req]
> req_extensions = v3_req
>
> [ v3_req ]
>
> # Extensions to add to a certificate request
> [snip...]
>
> subjectAltName = @alt_names
>
> [alt_names]
> DNS.1 = mydomain.com
> DNS.2 = mydomain.net
> DNS.3 = www.mydomain.com
> DNS.4 = mx.sub.mydomain.com
> DNS.5 = mx.someotherdomain.com
> IP.1 = 123.456.78.9
> IP.2 = 987.654.32.1
>
> You could specify the same on the CLI when you are generating the self signed 
> certificate.

At least that's possible.  How would I add that without changing the
existing certificate?  I don't want to irritate the users, don't want to
have the phone calls about it and don't want trouble with the odd client
which happens to have been updated and refuses to accept the certificate
...

See what I mean?  It's impractical.

>> How do I deploy some sort of central infrastructure all clients on the
>> LAN and anywhere on the world will automatically use to do the simple
>> thing of adding an exception (or whatever is required for that) so that
>> seamonkey and relatives can be used to access email?
>> 
>> That's letting aside that it's ridiculous to deploy such an
>> infrastructure when the same thing could be achieved by the user
>> clicking a button once to add an exception, as it used to be.
>
> This I think is primarily a problem of the latest version of SeaMonkey.  I 
> suspect they have inadvertently added a regression bug.
>
>
>> Seriously?  The result is currently a version freeze; the alternative is
>> using unencrypted connections.  After some time, the version freeze
>> cannot be kept up.  Since there are no alternative MUAs, we can only go
>> back to unencrypted connections when that happens.  And that's something
>> I don't even want to do on the LAN.
>> 
>> 
>> Well, I've made a bug report about this:
>> https://bugzilla.mozilla.org/show_bug.cgi?id=1202128
>
> Also have a look at this bug, in case it is related:
>
> https://bugzilla.mozilla.org/show_bug.cgi?id=1036338

I really can't tell.  It seems 

Re: [gentoo-user] broken seamonkey :(

2015-09-06 Thread Mick
On Sunday 06 Sep 2015 03:45:26 lee wrote:
> Mick  writes:
> > On Saturday 05 Sep 2015 14:06:27 lee wrote:

> >> What's the solution for a server which can be reached by different fqdns
> >> and IPs?  What if the fqdns and IPs it can be reached by change over the
> >> lifetime of the certificates?
> > 
> > If we are talking about changing subdomains, e.g.
> > mailserver1.mydomain.com and mailserver2.mydomain.com then you could use
> > a wildcard CN field descriptor in your certificate:  *.mydomain.com
> 
> just different fqdns and IPs with which the server can be reached during
> the lifetime of the certificate (which is only 10 years)
> 
> Can you foresee all possible fqdns and IPs it might have in the next 10
> years?

OK, 10 years is rather long for a common certificate.  In this case you should 
consider setting up your own CA key and CA certificate, with which you sign 
the server certificates.  Server certificates can expire and/or be revoked, 
you can update and publish a CRL, but the CA certificate will remain valid for 
10 years.

As long as the new server certificate is signed by the same CA certificate, 
the clients should not mind provided:

1. The client has accepted the CA certificate in its Authorities store and set 
up an exception to mark it as trusted.
2. The CN of the new certificate contains the URI that the client is trying to 
connect to.
3. The expiry date has not arrived yet.
4. The CRL does not contain the certificate (yet).
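
As a rough command-line check of points 2 to 4 above (a sketch; the file names
are placeholders for your CA certificate, server certificate and published CRL):

# does the server certificate chain to the CA, and is it within its validity dates?
openssl verify -CAfile ca.crt server.crt

# which CN/subjectAltName entries and dates does it actually carry?
openssl x509 -in server.crt -noout -subject -dates
openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'

# if a CRL is published, has the certificate been revoked?
openssl verify -CAfile ca.crt -crl_check -CRLfile crl.pem server.crt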


> I think it would be retarded to bind a certificate to a fqdn or IP. Both
> can change at any time.  It's the certificate that counts and which can
> be verified by a fingerprint before accepting it.  The host name and IP
> are entirely irrelevant for this.  In some cases, you can even say that
> the certificate is for an organization, not for a particular host or
> device (operated by or being operated on behalf of the organization).

When you change the domain or IP address on a server you typically change its 
certificate to reflect this too.  You do not need to change the CA 
certificate.

If you change the contents of the certificate then the fingerprint will change 
too.  Certificate verification tests include more than just the fingerprint.


> Or think of gpg: Binding a certificate to fqdn/IP would be like binding
> my gpg public key to the place (like my postal address) I am at so that
> I can decrypt something only when I happen to be at the right
> place. Then give me an option to add multiple places to my pubkey when
> creating it, and as soon as I'm at another place about which I haven't
> foreseen that I might be there some time, I will have a problem.  For
> all I know, I could be travelling in a car or a train or an air plane.
> 
> It's impractical.  Change is a constant.

Sure, but (although you could) gpg is not usually used to secure the 
connection to a server.  A server's identity is interlinked with its domain 
name, or IP address.  The identity for gpg is linked to the email address of 
its owner.  You wouldn't sign/encrypt your work's email with the gpg key for 
your private email address.


> > If we are talking about a multidomain certificate, then you would have
> > the main domain name in CN and add all the remaining domain names in the
> > subjectAltName field.
> > 
> > For example:
> > 
> > [req]
> > req_extensions = v3_req
> > 
> > [ v3_req ]
> > 
> > # Extensions to add to a certificate request
> > [snip...]
> > 
> > subjectAltName = @alt_names
> > 
> > [alt_names]
> > DNS.1 = mydomain.com
> > DNS.2 = mydomain.net
> > DNS.3 = www.mydomain.com
> > DNS.4 = mx.sub.mydomain.com
> > DNS.5 = mx.someotherdomain.com
> > IP.1 = 123.456.78.9
> > IP.2 = 987.654.32.1
> > 
> > You could specify the same on the CLI when you are generating the self
> > signed certificate.
> 
> At least that's possible.  How would I add that without changing the
> existing certificate?  I don't want to irritate the users, don't want to
> have the phone calls about it and don't want trouble with the odd client
> which happens to have been updated and refuses to accept the certificate
> ...
> 
> See what I mean?  It's impractical.

Yes. It could be impractical at this stage, because the architecture of your 
PKI has to change.

However, you can treat your existing self-signed certificate as a CA and use 
it to sign other certificates which contain different IP addresses, fqdns, 
etc. depending on the server(s) in question.

If your clients have accepted your self-signed certificate as a CA, then it 
should not cause a problem.
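
For reference, a hedged sketch of that flow with openssl (names, lifetimes and
file paths are placeholders, loosely following the article linked earlier in
the thread):

# one-time: CA key plus a long-lived, self-signed CA certificate
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 3650 \
    -subj "/CN=My Private CA" -out ca.crt

# per server: key and CSR, then sign with the CA, listing every name/IP it is reachable by
openssl genrsa -out mail.key 2048
openssl req -new -key mail.key -subj "/CN=mail.mydomain.com" -out mail.csr
printf 'subjectAltName = DNS:mail.mydomain.com, DNS:mx.someotherdomain.com, IP:203.0.113.7\n' > san.cnf
openssl x509 -req -in mail.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -sha256 -days 825 -extfile san.cnf -out mail.crt

Clients then import ca.crt once into the Authorities store; reissuing the
server certificate later with new names does not require touching the clients
again.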

-- 
Regards,
Mick


signature.asc
Description: This is a digitally signed message part.


Re: [SOLVED] [gentoo-user] poppler-0.32.0 - error

2015-09-06 Thread thelma

On 09/05/2015 03:19 PM, Fernando Rodriguez wrote:
[snip]
>>>
>>> I was right that rebuilding libtiff (not perl-cleaner) will get you past that
>>> ebuild, but my advice was the same you just received, go back to the *VERY
>>> FIRST* post on the subject and fix the root of the problem. You will see that
>>> error again (or worse runtime errors) until you fix it.
>>>
>>> Hint: It was Alan that gave you the solution. You chose to fix it by installing
>>> an incompatible jpeg implementation. That needs to be undone.
>>
>> It seems to me Alan solution:
>> emerge -avC libjpeg-turbo
>> emerge -av1 media-libs/jpeg:0 media-libs/jpeg:62
>>
>> It will come to hunt me.  It allow me to keep going with upgrades as I
>> couldn't find any better solution.
>>
>> Currently I have media-libs/jpeg installed in two slots:
>> media-libs/jpeg-6b-r12:62
>> media-libs/jpeg-8d-r1:0
>>
>> I can not get rid of "media-libs/jpeg-6b-r12" due to dependency
>> net-misc/nxclient-3.5.0.7 (=media-libs/jpeg-6*)
>>
>> I need "nxclient" for now until I find better solution.
>>
>> According to: equery d media-libs/jpeg-8d-r1
>>  * These packages depend on media-libs/jpeg-8d-r1:
>> virtual/jpeg-0-r2 (>=media-libs/jpeg-8d-r1:0
>>
>> Maybe I could uninstall jpeg-8d-r1 as only "virtual/jpeg-0-r2" depends
>> on it.
>>
>> Thelma
>>
> 
> That was your own solution. You have a few ways to solve it.
> 
> 1. Grab the last version of nxclient from the gentoo attic (or from
> /var/db/pkg/net-misc/nxclient if it doesn't need any files to build) change the
> dependency to virtual/jpeg, put it on a local overlay and re-emerge it (after
> replacing libjpeg with libjpeg-turbo again)

> 2. Rebuild everything that depends on libjpeg on the right order. Something 
> like (I have not tested this command, but it may build stuff you don't have 
> installed, hence the --depclean, maybe somebody can give you a better one):
> 
> # emerge --oneshot `equery depends virtual/jpeg`
> # emerge --depclean --ask
> 
> 3. This is a hack and I've never done it. In /var/db/pkg/net-misc/nxclient 
> update all media-libs/jpeg to the virtual/jpeg. Then unmerge all media-
> libs/jpeg slots and emerge libjpeg turbo. This should work because anything 
> linked against jpeg *should* work with jpeg-turbo but not the other way 
> around.

Good ideas, thank you.
I'm trying to implement one of these ideas but I have a problem with making the
manifest.  The nxclient-3.5.0.7.ebuild is not in the "attic" but I have a local
copy.
Though, I can not "manifest" the ebuild; it can not be found:
nxclient-3.5.0-7.x86_64.tar.gz
ebuild /usr/local/portage/net-misc/nxclient/nxclient-3.5.0.7.ebuild manifest

Layman -L
doesn't list it either

Various places have it, like:
https://baobabmaster.unige.ch/download/linux/

I downloaded and copied the file to:  /usr/portage/distfiles
but the system still tries to download the file when I try to make "ebuild ...
manifest"

Thelma



Re: [gentoo-user] broken seamonkey :(

2015-09-06 Thread Mick
On Sunday 06 Sep 2015 15:29:25 lee wrote:
> Mick  writes:
> > On Saturday 05 Sep 2015 17:22:24 lee wrote:

> >> Maybe that only works with firefox?
> > 
> > Yes, it seems to be the case that SeaMonkey has some GUI differences to
> > Firefox.  I am on Firefox-38.2.1 at present.
> 
> Does Firefox even have a MUA built in?  IIRC it's only the web browser
> part of seamonkey.

No, but T'bird uses the same SSL certificate storage as the mozilla browser 
does.


> > I can't recall if you tried this:
> > 
> > Can you please remove it from Servers and try adding it to the
> > Authorities tab?  Your version may have additional verification checks
> > for self-signed certificates, because they essentially acting as their
> > own Root CAs.
> 
> Yes, I tried that.

Right, I saw your email.  From what I can gather this is a bug impairing 
critical functionality of the Mozilla suite.


> > The fact that it is hanging and not obtaining the certificate makes me
> > wonder if you need to specify a domain name in the CN field of the
> > certificate, identical to the full URI that the client is trying to
> > connect to.
> 
> That brings us back to the impractical idea of trying to bind a
> certificate to a specific fqdn or IP, or to a number of those.
> 
> Is it possible to create a certificate that doesn't use either but a
> wildcard only?  I don't understand why or how an fqdn/IP in a
> certificate could or should be relevant at all.

It is relevant because Mozilla will read the CN and or subjectAltName fields 
for DNS/IP to match against the URL the client is trying to connect to.  It 
will also read any additional fields for OCSP and CRL URIs and try to connect 
to those too to retrieve relevant files (e.g. of revocation lists).  These 
would be contained in the certificate's X509v3 extensions.  The browser does 
not extrapolate from what is contained in those fields, but treats their 
contents literally.  If the CN field contains 'example.com' but the client is 
trying to connect to 'www.example.com' (or the server redirects to the latter) 
the browser's verification engine could throw its arms up and say STOP!  This 
is not the address specified on the certificate, therefore you could be 
inadvertently trying to connect to a malicious server impersonating your 
server.  The browser is warning about a Man In The Middle attack.  This is 
fine and as it should be.  What is not at all fine is that it stops you 
connecting AND it does not allow you to acknowledge as acceptable whatever it 
is that it doesn't like about your certificate.


> When creating the certificate, I have used the fqdn the host does
> actually have and knows itself by (because I needed to fill in the
> fields, and it seemed most reasonable to use the actual host name).
> 
> That this host can be reached at all, via different fqdns and IPs, is a
> matter of network traffic (re-)direction and of how the DNS-entries
> currently happen to be.  They are all transparent and irrelevant to the
> user/client and subject to change.  Why should they matter for a
> certificate which is supposed to let me figure out whether I'm
> connecting to the host I'm expecting to connect to, or to something
> else?
> 
> When a friend calls you on the phone, you do not insist that they are
> not your friend and reject their call just because they're calling you
> from a different phone number.  You do not reject their call and insist
> that they are not your friend because the call has been (re-)directed
> over a satellite or goes through an asterisk server.  You do not insist
> that your friend is someone else when they show up at your door wearing
> different cloths than they usually do.  Instead, you figure out that the
> caller, or the person at your door, is your friend by the human
> equivalent of a certificate.


Well, in the UK we have a feature called 'Caller ID'.  You will be surprised 
at the number of voice mails I have to leave when I call from a 'caller ID 
witheld' phone.  People will NOT answer unless they recognise the number of 
the caller.  :-)

With a server the FQDN is much more important, as a rogue server could 
impersonate e.g. your bank and steal your login details.

-- 
Regards,
Mick


signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] broken seamonkey :(

2015-09-06 Thread Fernando Rodriguez
On Sunday, September 06, 2015 3:03:12 PM lee wrote:
> Fernando Rodriguez  writes:
> 
> > On Saturday, September 05, 2015 6:09:36 PM Mick wrote:
> >> On Saturday 05 Sep 2015 14:06:27 lee wrote:
> >> > Fernando Rodriguez  writes:
> >> > > On Saturday, September 05, 2015 1:05:06 AM lee wrote:
> >> > >> In this case, I happen to have full physical access to the server 
and
> >> > >> thus to the certificate stored on it.  This is not the case for, 
let's
> >> > >> say, an employee checking his work-email from home whom I might give 
> > the
> >> > >> login-data on the phone and instruct to add an exception when the 
> > dialog
> >> > >> to do so pops up when they are trying to connect.
> >> > > 
> >> > > As a workaround you can create your own CA cert. I tested with a 
windows
> >> > > self- signed cert (I guess the correct term is self-issued) and the
> >> > > openssl command will show two certs. The second is the CA.
> >> > > 
> >> > > http://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certifica
> >> > > te-authority/
> >> > 
> >> > They're saying:
> >> > 
> >> > 
> >> > "Whatever you see in the address field in your browser when you go to
> >> > your device must be what you put under common name, even if it’s an IP
> >> > address.  [...]  If it doesn’t match, even a properly signed certificate
> >> > will not validate correctly and you’ll get the “cannot verify
> >> > authenticity” error."
> >> > 
> >> > 
> >> > What's the solution for a server which can be reached by different fqdns
> >> > and IPs?  What if the fqdns and IPs it can be reached by change over 
the
> >> > lifetime of the certificates?
> >> 
> > [...]
> >
> > Wildcards  should do it. The browser will give you a warning but you don't 
> > care since all you want is encryption and your users already trust you.
> 
> True --- and the problem will be back again when seamonkey etc. decide
> not to accept certificates with wildcards anymore.
> 
> > The only thing that matters about that article is that you'll be signing 
your 
> > certificate with the CA ones so you get two certificates when you run the 
> > openssl command, the last one is the CA certificate. If you, or your users 
add 
> > trust to that one, anything you sign with it will be trusted.
> >
> > I only tried it with a windows server issued certificate which does all 
that by 
> > default.
> 
> Changing the key would be a last resort.
> 
> If I do that, should I use a SHA-3 key?  Would that work, or is SHA-3
> too new?

You don't need to change the private key, you'll just have another CA cert and 
private key that you can use to sign your existing certificate (and generate a 
signed one). I suppose you may even be able to use the same key for both.

> > Since it lets you open the exception dialog but just hangs when 
downloading 
> > the certificate I wonder if it has something to do with your OCSP settings. 
> > Check that they match mine:
> >
> > security.OCSP.GET.enabled false
> > security.OCSP.enabled 1
> > security.OCSP.require false
> >
> > everything else is true.
> 
> I checked, and we have the same settings.  It doesn't really hang, it
> does nothing when I try to get the certificate.  Does it do something
> when you try?

It downloads the cert as expected. When I do it from the error page I don't 
even have to do that because it's already downloaded. What does it say under 
"Technical Details" on the error page?

-- 
Fernando Rodriguez



Re: [gentoo-user] broken seamonkey :(

2015-09-06 Thread Fernando Rodriguez
On Sunday, September 06, 2015 4:29:25 PM lee wrote:
> Mick  writes:
> 
> > On Saturday 05 Sep 2015 17:22:24 lee wrote:
> >> Mick  writes:
> >> > On Saturday 05 Sep 2015 02:08:47 Fernando Rodriguez wrote:
> >> >> On Saturday, September 05, 2015 1:05:06 AM lee wrote:
> >> >> > In this case, I happen to have full physical access to the server 
and
> >> >> > thus to the certificate stored on it.  This is not the case for, 
let's
> >> >> > say, an employee checking his work-email from home whom I might give
> >> >> > the login-data on the phone and instruct to add an exception when 
the
> >> >> > dialog to do so pops up when they are trying to connect.
> >> >> 
> >> >> As a workaround you can create your own CA cert. I tested with a 
windows
> >> >> self- signed cert (I guess the correct term is self-issued) and the
> >> >> openssl command will show two certs. The second is the CA.
> >> >> 
> >> >> http://datacenteroverlords.com/2012/03/01/creating-your-own-ssl-certific
> >> >> ate -authority/
> >> > 
> >> > lee, on my FF I can import a self-signed certificate when I go to:
> >> >  about:preferences#advanced
> >> 
> >> You mean to enter this as an URL, just like about:config?  When I do
> >> that, I'm getting "The URL is not valid and cannot be loaded. The
> >> provided address is not in a recognized format. Please check the
> >> location bar for mistakes and try again.".
> >> 
> >> Maybe that only works with firefox?
> >
> > Yes, it seems to be the case that SeaMonkey has some GUI differences to 
> > Firefox.  I am on Firefox-38.2.1 at present.
> 
> Does Firefox even have a MUA built in?  IIRC it's only the web browser
> part of seamonkey.
> 
> >> > and then select the 'Servers' tab.  After I import it I can select it 
and
> >> > click on the 'Add Exception' button at the bottom of the tab.  Enter 
the
> >> > http address of the server and FF should go and fetch it afresh when 
you
> >> > click on 'Get Certificate', then tick 'Permanently store this exception'
> >> > and 'Confirm Security Exception'.  These buttons will be greyed out if
> >> > do not download the certificate or if I am running FF in Private
> >> > Browsing mode.
> >> 
> >> I'm guessing you might be in the window that shows up when you edit
> >> preferences and go to 'Privacy & Security --> Certificates --> Manage
> >> Certificates ...' and then to the "Servers" tab.
> >
> > Yes, this is the location I am referring to.  However, if it is hanging 
and 
> > not connecting to the server to fetch the certificate something is not 
right.  
> > This is the reason with the exception button it greyed out.
> >
> > I can't recall if you tried this:
> >
> > Can you please remove it from Servers and try adding it to the Authorities 
> > tab?  Your version may have additional verification checks for self-signed 
> > certificates, because they essentially acting as their own Root CAs.
> 
> Yes, I tried that.
> 
> >> From there, I can import the certificate I downloaded with openssl.
> >> Once imported, I can click on "Add Exceptions".  That gives me the same
> >> dialog which comes up when I'm trying to connect which doesn't allow me
> >> to add an exception because the buttons to do so are disabled.  The
> >> dialog remains stuck at "Checking Information" indefinitely.
> >> 
> >> I'm attaching a screenshot:
> >
> > The fact that it is hanging and not obtaining the certificate makes me 
wonder 
> > if you need to specify a domain name in the CN field of the certificate, 
> > identical to the full URI that the client is trying to connect to.
> 
> That brings us back to the impractical idea of trying to bind a
> certificate to a specific fqdn or IP, or to a number of those.
> 
> Is it possible to create a certificate that doesn't use either but a
> wildcard only?  I don't understand why or how an fqdn/IP in a
> certificate could or should be relevant at all.
> 
> When creating the certificate, I have used the fqdn the host does
> actually have and knows itself by (because I needed to fill in the
> fields, and it seemed most reasonable to use the actual host name).
> 
> That this host can be reached at all, via different fqdns and IPs, is a
> matter of network traffic (re-)direction and of how the DNS-entries
> currently happen to be.  They are all transparent and irrelevant to the
> user/client and subject to change.  Why should they matter for a
> certificate which is supposed to let me figure out whether I'm
> connecting to the host I'm expecting to connect to, or to something
> else?
> 
> When a friend calls you on the phone, you do not insist that they are
> not your friend and reject their call just because they're calling you
> from a different phone number.  You do not reject their call and insist
> that they are not your friend because the call has been (re-)directed
> over a satellite or goes through an asterisk server.  You do not insist
> that your friend is someone else when they show up at 

Re: [SOLVED] [gentoo-user] poppler-0.32.0 - error

2015-09-06 Thread Fernando Rodriguez
On Sunday, September 06, 2015 12:21:13 PM the...@sys-concept.com wrote:
> 
> On 09/05/2015 03:19 PM, Fernando Rodriguez wrote:
> [snip]
> >>>
> >>> I was right that rebuilding libtiff (not perl-cleaner) will get you past that
> >>> ebuild, but my advice was the same you just received, go back to the *VERY
> >>> FIRST* post on the subject and fix the root of the problem. You will see that
> >>> error again (or worse runtime errors) until you fix it.
> >>>
> >>> Hint: It was Alan that gave you the solution. You chose to fix it by installing
> >>> an incompatible jpeg implementation. That needs to be undone.
> >>
> >> It seems to me Alan solution:
> >> emerge -avC libjpeg-turbo
> >> emerge -av1 media-libs/jpeg:0 media-libs/jpeg:62
> >>
> >> It will come to hunt me.  It allow me to keep going with upgrades as I
> >> couldn't find any better solution.
> >>
> >> Currently I have media-libs/jpeg installed in two slots:
> >> media-libs/jpeg-6b-r12:62
> >> media-libs/jpeg-8d-r1:0
> >>
> >> I can not get rid of "media-libs/jpeg-6b-r12" due to dependency
> >> net-misc/nxclient-3.5.0.7 (=media-libs/jpeg-6*)
> >>
> >> I need "nxclient" for now until I find better solution.
> >>
> >> According to: equery d media-libs/jpeg-8d-r1
> >>  * These packages depend on media-libs/jpeg-8d-r1:
> >> virtual/jpeg-0-r2 (>=media-libs/jpeg-8d-r1:0
> >>
> >> Maybe I could uninstall jpeg-8d-r1 as only "virtual/jpeg-0-r2" depends
> >> on it.
> >>
> >> Thelma
> >>
> > 
> > That was your own solution. You have a few ways to solve it.
> > 
> > 1. Grab the last version of nxclient from the gentoo attic (or from
> > /var/db/pkg/net-misc/nxclient if it doesn't need any files to build) change the
> > dependency to virtual/jpeg, put it on a local overlay and re-emerge it (after
> > replacing libjpeg with libjpeg-turbo again)
> 
> > 2. Rebuild everything that depends on libjpeg on the right order. Something
> > like (I have not tested this command, but it may build stuff you don't have
> > installed, hence the --depclean, maybe somebody can give you a better one):
> > 
> > # emerge --oneshot `equery depends virtual/jpeg`
> > # emerge --depclean --ask
> > 
> > 3. This is a hack and I've never done it. In /var/db/pkg/net-misc/nxclient
> > update all media-libs/jpeg to the virtual/jpeg. Then unmerge all media-
> > libs/jpeg slots and emerge libjpeg turbo. This should work because anything
> > linked against jpeg *should* work with jpeg-turbo but not the other way
> > around.
> 
> Good ideas, thank you.
> I'm trying to implement one of this ideas but I have a problem with making
> manifest.  The nxclient-3.5.0.7.ebuild is not in "attic" but I have a local
> copy.
> Though, I can not "manifest" the ebuild it can not be found:
> nxclient-3.5.0-7.x86_64.tar.gz
> ebuild /usr/local/portage/net-misc/nxclient/nxclient-3.5.0.7.ebuild manifest

Change the SRC_URI variable on the ebuild to point to one of those mirrors. If 
you already have it on distfiles it probably doesn't find it because the SRC_URI 
has a different filename.

> Layman -L
> doesn't list it either

That's normal because it's not a layman managed overlay. Just make sure to 
list it on your repos.conf. When you emerge it with the -v option it should 
show your overlay name instead of gentoo next to the package name.

If it's a binary package you could also just add the dependencies to your 
world file or to a set and just unpack it manually to /opt.

BTW. It is on the attic, and it does require extra files. Just copy the whole 
directory from the attic, including files/, then update the SRC_URI and the 
jpeg dependency.

https://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo-x86/net-misc/nxclient/?hideattic=0

> Various places have it, like:
> https://baobabmaster.unige.ch/download/linux/
> 
> I downloaded and copy the file to:  /usr/portage/distfiles
> but system still try to download the file when I try to make ebuild ...
> manifest"
> 
> Thelma
> 

-- 
Fernando Rodriguez