Re: [gentoo-user] alternative to dvbcut

2015-01-11 Thread Neil Bothwick
On Sat, 10 Jan 2015 19:38:36 +0100, lee wrote:

 since dvbcut isn't available in Gentoo and doesn't compile either,
 what's the alternative?

avidemux


-- 
Neil Bothwick

There are only two tragedies in life: one is not getting what one wants;
and the other is getting it. - Oscar Wilde (1854-1900)




Re: [gentoo-user] Firefox bookmarks

2015-01-11 Thread Walter Dnes
On Sat, Jan 10, 2015 at 12:10:33AM -0500, Philip Webb wrote
 Can anyone tell me where Firefox stores its bookmarks ?
 I want to copy the bookmarks from my Gentoo system to another system ;
 I've tried copying  .cache/mozilla.mozilla ,
 but it has no effect on the bookmarks shown by Firefox in the other machine.

  Remember to shut down the target Firefox first.  The file to copy is
places.sqlite as per http://kb.mozillazine.org/Places.sqlite

  There are various other *.sqlite files.  You can plug their names into
a Google search for...

Firefox whatever.sqlite

...and see if you want to copy them over.
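A minimal sketch of the copy, using temp directories as stand-ins for the two
profile directories (the real ones live under ~/.mozilla/firefox/ and their
randomly-named subdirectories vary per install):

```shell
# Stand-ins for the source and target profile directories; substitute
# your real ~/.mozilla/firefox/<something>.default paths.
src=$(mktemp -d)
dst=$(mktemp -d)
printf 'dummy' > "$src/places.sqlite"   # stands in for the real bookmarks DB

# Shut down Firefox on the target machine first, then copy:
cp "$src/places.sqlite" "$dst/places.sqlite"
ls "$dst"   # prints: places.sqlite
```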

-- 
Walter Dnes waltd...@waltdnes.org
I don't run desktop environments; I run useful applications



Re: [gentoo-user] alternative to dvbcut

2015-01-11 Thread Matti Nykyri
 On Jan 10, 2015, at 20:38, lee l...@yagibdah.de wrote:
 
 Hi,
 
 since dvbcut isn't available in Gentoo and doesn't compile either,
 what's the alternative?

Well, I would use ffmpeg. Dvbcut is just a frontend for ffmpeg. Ffmpeg is a true
Swiss Army knife for any video manipulation... You can do almost anything with
it.

Stream selection cutting is really easy with ffmpeg:

ffmpeg -i stream.ts -acodec copy -scodec copy -vcodec copy -ss 60 -t 120 output.mkv

You can use -map to select the desired streams.
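For illustration, a -map invocation might look like the following, shown via a
dry-run helper since the stream indices (and the input file) are assumptions
about your particular recording:

```shell
run() { echo "+ $*"; }   # dry-run helper: print the command instead of executing it

# 0:v:0 = first video stream of input 0, 0:a:1 = its second audio stream
# (indices are hypothetical; inspect yours with "ffprobe stream.ts" first).
run ffmpeg -i stream.ts -map 0:v:0 -map 0:a:1 -c copy -ss 60 -t 120 output.mkv
```

Here -c copy copies every selected stream without re-encoding, equivalent to
the per-type -acodec/-scodec/-vcodec copy flags above.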

This kind of multiplexing is really fast!

-- 
-Matti


Re: [gentoo-user] Usign ansible

2015-01-11 Thread Alan McKinnon
On 11/01/2015 09:46, Tomas Mozes wrote:
 On 2015-01-10 23:11, Alan McKinnon wrote:
 On 10/01/2015 21:40, Tomas Mozes wrote:


 Ansible is not a backup solution. You don't need to download your /etc
 from the machines because you deploy your /etc to machines via ansible.

 I was also thinking about putting /etc in git and then deploying it but:
 - on updates, will you update all configurations in all /etc repos?
 - do you really want to keep all the information in git, is it
 necessary?

 The set of files in /etc managed by ansible is always a strict subset
 of everything in /etc

 For that reason alone, it's a good idea to back up /etc anyway,
 regardless of having a CM system in place. The smallest benefit is
 knowing when things changed, by the CM system or otherwise
 
 For what reason?

For the simple reason that ansible is not the only system that can make
changes in /etc

 And how does a workflow look like then? You commit changes to your git
 repo of ansible. Then you deploy via ansible and check the /etc of each
 machine and commit a message that you changed something via ansible?


When you commit to the ansible repo, you are committing and tracking
changes to the *ansible* config. You are not tracking changes to /etc on
the actual destination host, that is a separate problem altogether and
not directly related to the fact that ansible logs in and does various
stuff.

You can make your workflow whatever makes sense to you.

The reason I'm recommending to keep all of /etc in its own repo is that
it's the simplest way to do it. /etc is a large mixture of
ansible-controlled files, sysadmin-controlled files, and other arbitrary
files installed by the package manager. It's also not very big, around
10M or so typically. So you *could* manually add to a repo every file
you change manually, but that is error-prone and easy to forget. Simpler
to just commit everything in /etc, which gives you an independent record
of all changes over time. Have you ever dealt with a compliance auditor?
An independent change record that is separate from the CM itself is a
feature that those fellows really like a lot.
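The workflow is nothing more exotic than a plain git repo rooted at /etc. A
self-contained sketch, using a temp directory as a stand-in for /etc:

```shell
etcdir=$(mktemp -d)                                  # stands in for /etc
printf 'hostname="demo"\n' > "$etcdir/hostname.conf" # stands in for a real config file

cd "$etcdir"
git init -q
git add -A
# identity flags only so the sketch runs on a box with no git config
git -c user.name=root -c user.email=root@localhost commit -qm 'snapshot of /etc'
git log --oneline
```

From there, a cron job running the add/commit pair gives you the independent
change record over time.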




-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Usign ansible

2015-01-11 Thread Rich Freeman
On Sun, Jan 11, 2015 at 3:22 AM, Alan McKinnon alan.mckin...@gmail.com wrote:
 The reason I'm recommending to keep all of /etc in its own repo is that
 it's the simplest way to do it. /etc is a large mixture of
 ansible-controlled files, sysadmin-controlled files, and other arbitrary
 files installed by the package manager. It's also not very big, around
 10M or so typically. So you *could* manually add to a repo every file
 you change manually, but that is error-prone and easy to forget. Simpler
 to just commit everything in /etc, which gives you an independent record
 of all changes over time. Have you ever dealt with a compliance auditor?
 An independent change record that is separate from the CM itself is a
 feature that those fellows really like a lot.

If you're taking care of individual long-lived hosts this probably
isn't a bad idea.

If you just build a new host anytime you do updates and destroy the
old one then obviously a git repo in /etc won't get you far.

-- 
Rich



Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-11 Thread Rich Freeman
On Sun, Jan 11, 2015 at 8:14 AM, lee l...@yagibdah.de wrote:
 Rich Freeman ri...@gentoo.org writes:

 Doing backups with dd isn't terribly practical, but it is completely
 safe if done correctly.  The LV would need to be the same size or
 larger, or else your filesystem will be truncated.

 Yes, my impression is that it isn't very practical or a good method, and
 I find it strange that LVM is still lacking some major features.

Generally you do backup at the filesystem layer, not at the volume
management layer.  LVM just manages a big array of disk blocks.  It
has no concept of files.


 Just create a small boot partition and give the rest to zfs.  A
 partition is a block device, just like a disk.  ZFS doesn't care if it
 is managing the entire disk or just a partition.

 ZFS does care: You cannot export ZFS pools residing on partitions, and
 apparently ZFS cannot use the disk cache as efficiently when it uses
 partitions.

Cite?  This seems unlikely.

 Caching in memory is also less efficient because another
 file system has its own cache.

There is no other filesystem.  ZFS is running on bare metal.  It is
just pointing to a partition on a drive (an array of blocks) instead
of the whole drive (an array of blocks).  The kernel does not cache
partitions differently from drives.

 On top of that, you have the overhead of
 software raid for that small partition unless you can dedicate
 hardware-raided disks for /boot.

Just how often are you reading/writing from your boot partition?  You
only read from it at boot time, and you only write to it when you
update your kernel/etc.  There is no requirement for it to be raided
in any case, though if you have multiple disks that wouldn't hurt.


 This sort of thing was very common before grub2 started supporting
 more filesystems.

 That doesn't mean it's a good setup.  I'm finding it totally
 undesirable.  Having a separate /boot partition has always been a
 crutch.

Better not buy an EFI motherboard.  :)


 With ZFS at hand, btrfs seems pretty obsolete.

 You do realize that btrfs was created when ZFS was already at hand,
 right?  I don't think that ZFS will be likely to make btrfs obsolete
 unless it adopts more dynamic desktop-oriented features (like being
 able to modify a vdev), and is relicensed to something GPL-compatible.
 Unless those happen, it is unlikely that btrfs is going to go away,
 unless it is replaced by something different.

 Let's say it seems /currently/ obsolete.

You seem to have an interesting definition of "obsolete" - something
which holds potential promise for the future is better described as
"experimental".


 Solutions are needed /now/, not in about 10 years when btrfs might be
 ready.


Well, feel free to create one.  Nobody is stopping anybody from using
zfs, but unless it is either relicensed by Oracle or the
kernel/grub/etc is relicensed by everybody else you're unlikely to see
it become a mainstream solution.  That seems to be the biggest barrier
to adoption, though it would be nice for small installations if vdevs
were more dynamic.

By all means use it if that is your preference.  A license may seem
like a small thing, but entire desktop environments have been built as
a result of them.  When a mainstream linux distro can't put ZFS
support on their installation CD due to licensing compatibility it
makes it pretty impractical to use it for your default filesystem.

I'd love to see the bugs worked out of btrfs faster, but for what I've
paid for it so far I'd say I'm getting good value for my $0.  It is
FOSS - it gets done when those contributing to it (whether paid or
not) are done.  The ones who are paying for it get to decide for
themselves if it meets their needs, which could be quite different
from yours.

I'd actually be interested in a comparison of the underlying btrfs vs
zfs designs.  I'm not talking about implementation (bugs/etc), but the
fundamental designs.  What features are possible to add to one which
are impossible to add to the other, what performance limitations will
the one always suffer in comparison to the other, etc?  All the
comparisons I've seen just compare the implementations, which is
useful if you're trying to decide what to install /right now/ but less
so if you're trying to understand the likely future of either.

-- 
Rich



Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-11 Thread lee
Rich Freeman ri...@gentoo.org writes:

 On Sat, Jan 10, 2015 at 1:22 PM, lee l...@yagibdah.de wrote:
 Rich Freeman ri...@gentoo.org writes:

 You can dd from a logical volume into a file, and from a file into a
 logical volume.  You won't destroy the volume group unless you do
 something dumb like trying to copy it directly onto a physical volume.
 Logical volumes are just block devices as far as the kernel is
 concerned.

 You mean I need to create a LV (of the same size) and then use dd to
 write the backup into it?  That doesn't seem like a safe method.

 Doing backups with dd isn't terribly practical, but it is completely
 safe if done correctly.  The LV would need to be the same size or
 larger, or else your filesystem will be truncated.

Yes, my impression is that it isn't very practical or a good method, and
I find it strange that LVM is still lacking some major features.

 How about ZFS as root file system?  I'd rather create a pool over all
 the disks and create file systems within the pool than use something
 like ext4 to get the system to boot.

 I doubt zfs is supported by grub and such, so you'd have to do the
  usual in-betweens as you're alluding to.  However, I suspect it would
 generally work.  I haven't really used zfs personally other than
 tinkering around a bit in a VM.

 That would be a very big disadvantage.  When you use zfs, it doesn't
 really make sense to have extra partitions or drives; you just want to
 create a pool from all drives and use that.  Even if you accept a boot
 partition, that partition must be on a raid volume, so you either have
 to dedicate at least two disks to it, or you're employing software raid
 for a very small partition and cannot use the whole device for ZFS as
 recommended.  That just sucks.

 Just create a small boot partition and give the rest to zfs.  A
 partition is a block device, just like a disk.  ZFS doesn't care if it
 is managing the entire disk or just a partition.

ZFS does care: You cannot export ZFS pools residing on partitions, and
apparently ZFS cannot use the disk cache as efficiently when it uses
partitions.  Caching in memory is also less efficient because another
file system has its own cache.  On top of that, you have the overhead of
software raid for that small partition unless you can dedicate
hardware-raided disks for /boot.

 This sort of thing was very common before grub2 started supporting
 more filesystems.

That doesn't mean it's a good setup.  I'm finding it totally
undesirable.  Having a separate /boot partition has always been a
crutch.

 Well, I don't want to use btrfs (yet).  The raid capabilities of brtfs
 are probably one of its most unstable features.  They are derived from
 mdraid:  Can they compete with ZFS both in performance and, more
 important, reliability?



 Btrfs raid1 is about as stable as btrfs without raid.  I can't say
 whether any code from mdraid was borrowed but btrfs raid works
 completely differently and has about as much in common with mdraid as
 zfs does.

Hm, I might have misunderstood an article I've read.

 I can't speak for zfs performance, but btrfs performance isn't all
 that great right now - I don't think there is any theoretical reason
 why it couldn't be as good as zfs one day, but it isn't today.

Give it another 10 years, and btrfs might be the default choice.

 Btrfs is certainly far less reliable than zfs on solaris - zfs on
 linux has less long-term history of any kind but most seem to think it
 works reasonably well.

It seems that ZFS does work (I can't say anything about its reliability
yet), and it provides a solution unlike any other FS.  Btrfs doesn't
fully work yet, see [1].


[1]: https://btrfs.wiki.kernel.org/index.php/RAID56

 With ZFS at hand, btrfs seems pretty obsolete.

 You do realize that btrfs was created when ZFS was already at hand,
 right?  I don't think that ZFS will be likely to make btrfs obsolete
 unless it adopts more dynamic desktop-oriented features (like being
 able to modify a vdev), and is relicensed to something GPL-compatible.
 Unless those happen, it is unlikely that btrfs is going to go away,
 unless it is replaced by something different.

Let's say it seems /currently/ obsolete.  It's not fully working yet,
reliability is very questionable, and it's not as easy to handle as ZFS.
By the time btrfs has matured to the point where it isn't obsolete
anymore, chances are that there will be something else which replaces
it.

Solutions are needed /now/, not in about 10 years when btrfs might be
ready.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] alternative to dvbcut

2015-01-11 Thread lee
Neil Bothwick n...@digimed.co.uk writes:

 On Sat, 10 Jan 2015 19:38:36 +0100, lee wrote:

 since dvbcut isn't available in Gentoo and doesn't compile either,
 what's the alternative?

 avidemux

I tried that some time ago and found it unable to keep the sound in
sync.

dvbcut works great to remove commercials, keeps the sound synced without
any ado and is easy to use.  Cinelerra can't even deal with the
recordings, and I don't want to convert them before removing the ads.
Openshot crashes all the time.  I don't need any of the extra features
cinelerra and openshot might have.

Perhaps I could extract dvbcut from a Debian or rpm package?


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] pdf viewer

2015-01-11 Thread lee
Andrew Savchenko birc...@gentoo.org writes:

 On Sat, 10 Jan 2015 19:25:54 +0100 lee wrote:
 Andrew Savchenko birc...@gentoo.org writes:
 
  On Fri, 09 Jan 2015 20:49:56 +0100 lee wrote:
  Andrew Savchenko birc...@gentoo.org writes:
  
   When I need something simple (e.g. to read pdf books) I use mupdf.
  
  How did you get mupdf to display a pdf?
 
  Just run it:
  $ mupdf file.pdf
 
  In my case mupdf is configured as follows:
  Installed versions:  1.5-r1(02:19:48 AM 12/28/2014)(X curl openssl -static 
  -static-libs -vanilla)
 
 There's only 'mutool' and no 'mupdf'.

 You should enable USE=X as I wrote above.

Thanks, that creates 'mupdf'.

  2) Configure your default mime handler using xdg-mime.
 
 Hm, xdg-mime is not installed; I've never heard of it.

 x11-misc/xdg-utils
 Most WM/DE will pull this package.

Apparently fvwm doesn't.
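For reference, the xdg-mime invocation Andrew means would look roughly like
this, shown via a dry-run helper; "mupdf.desktop" is an assumption about the
name of the desktop file the package installs:

```shell
run() { echo "+ $*"; }   # dry-run helper so the sketch doesn't touch real settings

run xdg-mime default mupdf.desktop application/pdf   # set the default PDF handler
run xdg-mime query default application/pdf           # verify what got set
```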


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] pdf viewer

2015-01-11 Thread lee
Walter Dnes waltd...@waltdnes.org writes:

 On Sat, Jan 10, 2015 at 07:25:54PM +0100, lee wrote
 Andrew Savchenko birc...@gentoo.org writes:

  1) Configure your handlers in seamonkey.
 
 How?

   I've got Seamonkey 2.31.  Go to
 Edit -> Preferences -> Category: Browser -> Helper Applications

   Assuming you've already got a Content Type of "PDF file" in the list,
 click on the icon beside "emacsclient" in the Action column.  This
 opens a dropdown menu.  Click on "Use other..." and navigate to
 /usr/bin/mupdf in the file menu.

That's what I thought and tried.  I don't want to use it as default
action, though, because I sometimes save PDFs.

   If you're really brave, you can try editing the mimeTypes.rdf file in
 the browser profile directly.  Remember to shut down your browser, and
 back up the file first.

Thanks, if it all doesn't help, I'll do that.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] Usign ansible

2015-01-11 Thread Stefan G. Weichinger
Am 11.01.2015 um 13:25 schrieb Rich Freeman:
 On Sun, Jan 11, 2015 at 3:22 AM, Alan McKinnon alan.mckin...@gmail.com 
 wrote:
 The reason I'm recommending to keep all of /etc in its own repo is that
 it's the simplest way to do it. /etc is a large mixture of
 ansible-controlled files, sysadmin-controlled files, and other arbitrary
 files installed by the package manager. It's also not very big, around
 10M or so typically. So you *could* manually add to a repo every file
 you change manually, but that is error-prone and easy to forget. Simpler
 to just commit everything in /etc, which gives you an independent record
 of all changes over time. Have you ever dealt with a compliance auditor?
 An independent change record that is separate from the CM itself is a
 feature that those fellows really like a lot.
 
 If you're taking care of individual long-lived hosts this probably
 isn't a bad idea.
 
 If you just build a new host anytime you do updates and destroy the
 old one then obviously a git repo in /etc won't get you far.

I have long-lived hosts out there and with rather individual setups and
a wide range of age (= deployed over many years).

So my first goal is kind of getting an overview:

* what boxes am I responsible for?

* getting some kind of meta-info into my local systems - /etc, @world,
and maybe something like the facts provided by facter module (a nice
kind of profile ... with stuff like MAC addresses and other essential
info)  ... [1]

and then, as I learn my steps, I can roll out some homogenization:

* my ssh-keys really *everywhere*
* standardize things for each customer's site (network setup, proxies)

etc etc

I am just cautious: rolling out standardized configs over dozens of
maybe different servers is a bit of a risk. But I think this will come
step by step ... new servers get the roles applied from the start, and
existing ones are maybe adapted to this when I do other update work.

And on keeping /etc in git:

So far I've made it a habit to do that on customer servers. Keeping track
of changes is a good thing and helpful. I still wonder how to centralize
this, as I would like to have these, let's call them "profiles", in my own
LAN as well. People tend to forget their backups etc ... I feel better
with a copy locally.

This leads to finding a structure of managing this.

The /etc-git-repos so far are local to the customer servers.
Sure, I can add remote repos and use ansible to push the content up there.

One remote-repo per server-machine? I want to run these remote-repos on
one of my inhouse-servers ...

For now I wrote a small playbook that allows me to rsync /etc and
world-file from all the Gentoo-boxes out there (and only /etc from
firewalls and other non-gentoo-machines).

As mentioned I don't have FQDNs for all hosts, and this leads to the
problem that there are several entries like "ipfire" in several groups.

Rsyncing stuff into a path containing the hostname leads to conflicts:

- name: sync /etc from remote host to inventory host
  synchronize: |
  mode=pull
  src=/etc
  dest={{ local_storage_path }}/{{ inventory_hostname }}/etc
  delete=yes
  recursive=yes


So I assume I should just set up some kind of descriptive names like:

[smith]
ipfire_smith

[brown]
ipfire_brown

... and use these just as labels?

Another idea is to generate some kind of UUID for each host and use that?



I really like the ansible-approach so far.

Even when I might not yet run the full standardized approach as I have
to slowly get the existing hosts into this growing setup.

Stefan


[1]  I haven't yet managed to store the output of the setup-module to
the inventory host. I could run "ansible -i hosts.yml -m setup all" but
I want a named txt-file per host in a separate subdir ...



Re: [gentoo-user] fail2ban: You have to create an init script for each container ...

2015-01-11 Thread Rich Freeman
On Sun, Jan 11, 2015 at 10:48 AM, lee l...@yagibdah.de wrote:

 I don't want to run fail2ban in the container because the container must
 not mess with the firewall settings of the host.  If a container can do
 that, then what's the point of having containers in the first place?


I've never used the LXC scripts to set up a container, but I actually
run a firewall inside a container.  You just need to run it in a
separate network namespace so that it is messing with its own
interface.

In general, though, I wouldn't want my containers messing with my host
interfaces.


 BTW, why does Gentoo put containers under /etc?  Containers aren't
 configuration files ...


I'd never put a container there.  I can't speak to how the lxc scripts
are intended to be used - I don't use those tools to manage
containers.  I typically stick my containers in their own place in
btrfs subvolumes for easy management.

-- 
Rich



Re: [gentoo-user] fail2ban: You have to create an init script for each container ...

2015-01-11 Thread lee

see https://bugs.gentoo.org/show_bug.cgi?id=536320


lee l...@yagibdah.de writes:

 Hi,

 I'm trying to get fail2ban to work on the host and keep getting error
 messages like:


 ,
 | Jan 08 21:13:04 [/etc/init.d/fail2ban] You have to create an init script 
 for each container:
 | Jan 08 21:13:04 [/etc/init.d/fail2ban] ln -s lxc /etc/init.d/lxc.container
 | Jan 08 21:13:05 [/etc/init.d/fail2ban] ERROR: fail2ban failed to start
 `


 After 'ln -s lxc /etc/init.d/lxc.container', it says:


 ,
 | Jan 08 21:17:08 [/etc/init.d/fail2ban] Unable to find a suitable 
 configuration file.
 | Jan 08 21:17:08 [/etc/init.d/fail2ban] If you set up the container in a 
 non-standard
 | Jan 08 21:17:08 [/etc/init.d/fail2ban] location, please set the CONFIGFILE 
 variable.
 | Jan 08 21:17:09 [/etc/init.d/fail2ban] ERROR: fail2ban failed to start
 `


 Naming the link 'lxc.acheron', with 'acheron' being the name of the
 container, gives the first error message again.  The containers'
 configuration is at the default location:


 ,
 | heimdali init.d # ls -la /etc/lxc/acheron/config
 | -rw-r--r-- 1 root root 967  5. Jan 01:14 /etc/lxc/acheron/config
 | heimdali init.d # 
 `


 What am I missing?

 Shorewall is used on the host, exim is running in the container, and I
 want fail2ban (on the host) to look into the logfile of the exim which
 runs in the container:


 ,
 | heimdali fail2ban # cat paths-overrides.local 
 | exim_main_log = /etc/lxc/acheron/rootfs/var/log/exim/exim_main.log
 | heimdali fail2ban # 
 `


 I don't want to run fail2ban in the container because the container must
 not mess with the firewall settings of the host.  If a container can do
 that, then what's the point of having containers in the first place?


 BTW, why does Gentoo put containers under /etc?  Containers aren't
 configuration files ...

-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] Usign ansible

2015-01-11 Thread Alan McKinnon
On 11/01/2015 14:25, Rich Freeman wrote:
 On Sun, Jan 11, 2015 at 3:22 AM, Alan McKinnon alan.mckin...@gmail.com 
 wrote:
 The reason I'm recommending to keep all of /etc in its own repo is that
 it's the simplest way to do it. /etc is a large mixture of
 ansible-controlled files, sysadmin-controlled files, and other arbitrary
 files installed by the package manager. It's also not very big, around
 10M or so typically. So you *could* manually add to a repo every file
 you change manually, but that is error-prone and easy to forget. Simpler
 to just commit everything in /etc, which gives you an independent record
 of all changes over time. Have you ever dealt with a compliance auditor?
 An independent change record that is separate from the CM itself is a
 feature that those fellows really like a lot.
 
 If you're taking care of individual long-lived hosts this probably
 isn't a bad idea.

Yes, this is what I do.

I do have cattle, not pets. But my cattle are long-production dairy
cows, not beef steers for slaughter. And I have a stud bull or two :-)

 If you just build a new host anytime you do updates and destroy the
 old one then obviously a git repo in /etc won't get you far.


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Usign ansible

2015-01-11 Thread Alan McKinnon
On 11/01/2015 17:12, Stefan G. Weichinger wrote:

 And at keeping /etc in git:
 
 So far I made it a habit to do that on customer servers. Keeping track
 of changes is a good thing and helpful. I still wonder how to centralize
 this as I would like to have these, let's call them profiles in my own
 LAN as well. People tend to forget their backups etc ... I feel better
 with a copy locally.
 
 This leads to finding a structure of managing this.
 
 The /etc-git-repos so far are local to the customer servers.
 Sure, I can add remote repos and use ansible to push the content up there.
 
 One remote-repo per server-machine? I want to run these remote-repos on
 one of my inhouse-servers ...
 
 For now I wrote a small playbook that allows me to rsync /etc and
 world-file from all the Gentoo-boxes out there (and only /etc from
 firewalls and other non-gentoo-machines).
 
 As mentioned I don't have FQDNs for all hosts and this leads to the
 problem that there are several lines like ipfire in several groups.
 
 Rsyncing stuff into a path containing the hostname leads to conflicts:
 
 - name: sync /etc from remote host to inventory host
   synchronize: |
   mode=pull
   src=/etc
  dest={{ local_storage_path }}/{{ inventory_hostname }}/etc
   delete=yes
   recursive=yes
 
 
 So I assume I should just setup some kind of talking names like:
 
 [smith]
 ipfire_smith 
 
 [brown]
 ipfire_brown 
 
 ... and use these just as labels ?
 
 Another idea is to generate some kind of UUID for each host and use that?


The trick is to use a system that guarantees you a unique label or
identifier for each host.

Perhaps {{ customer_name }}/{{ hostname }} works?

This would fail if you have two customers with the same company name
(rare, but not impossible) or customers have machines with the same name
(silly, but possible). In that case, you'd probably have to go with
UUIDs or similar.
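A UUID-like label can be derived deterministically rather than stored, e.g. by
hashing customer plus hostname; a sketch (the customer/hostname inputs are
illustrative):

```shell
# Stable, unique inventory label derived from customer + hostname.
# sha1sum is deterministic: the same inputs always yield the same label,
# so no lookup table is needed between runs.
host_label() {
  customer=$1
  hostname=$2
  hash=$(printf '%s.%s' "$customer" "$hostname" | sha1sum | cut -c1-12)
  echo "${hostname}_${hash}"
}

host_label smith ipfire   # e.g. ipfire_<12 hex chars>
host_label brown ipfire   # same hostname, different customer -> different label
```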


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] fail2ban: You have to create an init script for each container ...

2015-01-11 Thread Rich Freeman
On Sun, Jan 11, 2015 at 1:47 PM, lee l...@yagibdah.de wrote:

 Same here, so why does fail2ban get involved with containers?


Seems like there are three options here.
1. Run fail2ban on the host and have it look into the containers,
monitor their logs, and add host iptables rules to block connections.
2. Run fail2ban in each container and have it monitor its own logs,
and then add host iptables rules to block connections.
3. Run fail2ban in each container and have each container in its own
network namespace.  Fail2ban can then add container iptables rules to
block connections.
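For option 1, the jail configuration would look roughly like this; a sketch
only - the logpath matches lee's earlier paths-overrides.local, while the
filter and action names are assumptions about the fail2ban install:

```ini
# Hypothetical /etc/fail2ban/jail.local fragment: host-side fail2ban
# watching exim's log inside the container's rootfs.
[exim]
enabled   = true
filter    = exim
logpath   = /etc/lxc/acheron/rootfs/var/log/exim/exim_main.log
banaction = iptables-multiport
maxretry  = 5
```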

I actually gave up on fail2ban after a bunch of issues.  The only
place I get brute force attacks right now is ssh, and I'm using the
Google authenticator plugin.  I just ignore the thousands of failed
ssh authentication attempts...

-- 
Rich



Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-11 Thread Rich Freeman
On Sun, Jan 11, 2015 at 1:42 PM, lee l...@yagibdah.de wrote:
 Rich Freeman ri...@gentoo.org writes:

 Generally you do backup at the filesystem layer, not at the volume
 management layer.  LVM just manages a big array of disk blocks.  It
 has no concept of files.

 That may require downtime while the idea of taking snapshots and then
 backing up the volume is to avoid the downtime.

Sure, which is why btrfs and zfs support snapshots at the filesystem
layer.  You can do an lvm snapshot but it requires downtime unless you
want to mount an unclean snapshot for backups.
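That LVM flow, sketched with a dry-run helper (device names and sizes are
illustrative; per the caveat above, a snapshot of a mounted filesystem is
unclean unless you quiesce writes first):

```shell
run() { echo "+ $*"; }   # dry-run helper: print each command instead of executing it

run lvcreate --snapshot --size 5G --name root_snap /dev/vg0/root
run mount -o ro /dev/vg0/root_snap /mnt/snap
run tar -C /mnt/snap -czf /backup/root.tar.gz .   # back up at the filesystem layer
run umount /mnt/snap
run lvremove -f /dev/vg0/root_snap                # snapshots fill up if left around
```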


 Just create a small boot partition and give the rest to zfs.  A
 partition is a block device, just like a disk.  ZFS doesn't care if it
 is managing the entire disk or just a partition.

 ZFS does care: You cannot export ZFS pools residing on partitions, and
 apparently ZFS cannot use the disk cache as efficiently when it uses
 partitions.

 Cite?  This seems unlikely.

 , [ man zpool ]
 |For pools to be portable, you  must  give  the  zpool  command
 |whole  disks,  not  just partitions, so that ZFS can label the
 |disks with portable EFI labels.  Otherwise,  disk  drivers  on
 |platforms  of  different  endianness  will  not  recognize the
 |disks.
 `

 You may be able to export them, and then you don't really know what
 happens when you try to import them.  I didn't keep a bookmark for the
 article that mentioned the disk cache.

 When you read about ZFS, you'll find that using the whole disk is
 recommended while using partitions is not.

Ok, I get the EFI label issue if zfs works with multiple endianness
and only stores the setting in the EFI label (which seems like an odd
way to do things).  You didn't mention anything about disk cache and
it seems unlikely that using partitions vs whole drives is going to
matter here.

Honestly, I feel like there is a lot of cargo-cult mentality with many
in the ZFS community.  Another one of those must-do things is using
ECC RAM.  Sure, you're more likely to end up with data corruption
without it than with it, but the same is true with ANY filesystem.
I've yet to hear any reasonable argument as to why ZFS is more
susceptible to memory corruption than ext4.


 Caching in memory is also less efficient because another
 file system has its own cache.

 There is no other filesystem.  ZFS is running on bare metal.  It is
 just pointing to a partition on a drive (an array of blocks) instead
 of the whole drive (an array of blocks).  The kernel does not cache
 partitions differently from drives.

 How do you use a /boot partition that doesn't have a file system?

Oh, I thought you meant that the memory cache of zfs itself is less
efficient.  I'd be interested in a clear explanation as to why ten
100GB filesystems use cache differently than one 1TB filesystem if
file access is otherwise the same.  However, even if having a 1GB boot
partition mounted caused wasted cache space, that problem is easily
solved by just not mounting it except when doing kernel updates.


 On top of that, you have the overhead of
 software raid for that small partition unless you can dedicate
 hardware-raided disks for /boot.

 Just how often are you reading/writing from your boot partition?  You
 only read from it at boot time, and you only write to it when you
 update your kernel/etc.  There is no requirement for it to be raided
 in any case, though if you have multiple disks that wouldn't hurt.

 If you want to accept that the system goes down or has to be brought
 down or is unable to boot because the disk you have your /boot partition
 on has failed, you may be able to get away with a non-raided /boot
 partition.

 When you do that, what's the advantage other than saving the software
 raid?  You still need to either dedicate a disk to it, or you have to
 leave a part of all the other disks unused and cannot use them as a
 whole for ZFS because otherwise they will be of different sizes.

Sure, when I have multiple disks available and need a boot partition I
RAID it with software RAID.  So what?  Updating your kernel /might/ be
a bit slower when you do that twice a month or whatever.

 Better not buy an EFI motherboard.  :)

 Yes, they are a security hazard and a PITA.  Maybe I can sit it out
 until they come up with something better.

Security hazard?  How is being able to tell your motherboard to only
boot software of your own choosing a security hazard?  Or are you
referring to something other than UEFI?

I think the pain is really only there because most of the
utilities/etc haven't been updated to the new reality.

In any case, I don't see it going away anytime soon.


 With ZFS at hand, btrfs seems pretty obsolete.

 You do realize that btrfs was created when ZFS was already at hand,
 right?  I don't think that ZFS will be likely to make btrfs obsolete
 unless it adopts more dynamic desktop-oriented features (like being
 able to modify a vdev), and is 

Re: [gentoo-user] Usign ansible

2015-01-11 Thread Tomas Mozes

On 2015-01-11 09:22, Alan McKinnon wrote:

On 11/01/2015 09:46, Tomas Mozes wrote:

On 2015-01-10 23:11, Alan McKinnon wrote:

On 10/01/2015 21:40, Tomas Mozes wrote:


Ansible is not a backup solution. You don't need to download your /etc
from the machines because you deploy your /etc to machines via ansible.


I was also thinking about putting /etc in git and then deploying it but:

- on updates, will you update all configurations in all /etc repos?
- do you really want to keep all the information in git, is it
necessary?


The set of files in /etc managed by ansible is always a strict subset
of everything in /etc

For that reason alone, it's a good idea to back up /etc anyway,
regardless of having a CM system in place. The smallest benefit is
knowing when things changed, by the CM system or otherwise


For what reason?


For the simple reason that ansible is not the only system that can make
changes in /etc


And what does a workflow look like then? You commit changes to your git
repo of ansible. Then you deploy via ansible and check the /etc of each
machine and commit a message that you changed something via ansible?



When you commit to the ansible repo, you are committing and tracking
changes to the *ansible* config. You are not tracking changes to /etc on
the actual destination host, that is a separate problem altogether and
not directly related to the fact that ansible logs in and does various
s

You can make your workflow whatever makes sense to you.

The reason I'm recommending to keep all of /etc in its own repo is that
it's the simplest way to do it. /etc is a large mixture of
ansible-controlled files, sysadmin-controlled files, and other arbitrary
files installed by the package manager. It's also not very big, around
10M or so typically. So you *could* manually add to a repo every file
you change manually, but that is error-prone and easy to forget. Simpler
to just commit everything in /etc, which gives you an independent record
of all changes over time. Have you ever dealt with a compliance auditor?
An independent change record that is separate from the CM itself is a
feature that those fellows really like a lot.


Out of curiosity, "ansible-controlled files, sysadmin-controlled files"
means that something is managed via ansible and something is done
manually?


And then, /etc is not the only directory with changing files; what about
other directories?


Regarding the workflow with /etc in git vs ansible in git, I was asking
about your concrete workflow so we can learn from it and maybe apply
some good practices on our servers as well.




Re: [gentoo-user] installing Gentoo in a xen VM

2015-01-11 Thread lee
Rich Freeman ri...@gentoo.org writes:

 On Sun, Jan 11, 2015 at 8:14 AM, lee l...@yagibdah.de wrote:
 Rich Freeman ri...@gentoo.org writes:

 Doing backups with dd isn't terribly practical, but it is completely
 safe if done correctly.  The LV would need to be the same size or
 larger, or else your filesystem will be truncated.

 Yes, my impression is that it isn't very practical or a good method, and
 I find it strange that LVM is still lacking some major features.

 Generally you do backup at the filesystem layer, not at the volume
 management layer.  LVM just manages a big array of disk blocks.  It
 has no concept of files.

That may require downtime, whereas the idea of taking snapshots and then
backing up the volume is to avoid that downtime.

 Just create a small boot partition and give the rest to zfs.  A
 partition is a block device, just like a disk.  ZFS doesn't care if it
 is managing the entire disk or just a partition.

 ZFS does care: You cannot export ZFS pools residing on partitions, and
 apparently ZFS cannot use the disk cache as efficiently when it uses
 partitions.

 Cite?  This seems unlikely.

,----[ man zpool ]
|For pools to be portable, you  must  give  the  zpool  command
|whole  disks,  not  just partitions, so that ZFS can label the
|disks with portable EFI labels.  Otherwise,  disk  drivers  on
|platforms  of  different  endianness  will  not  recognize the
|disks.
`----

You may be able to export them, and then you don't really know what
happens when you try to import them.  I didn't keep a bookmark for the
article that mentioned the disk cache.

When you read about ZFS, you'll find that using the whole disk is
recommended while using partitions is not.

 Caching in memory is also less efficient because another
 file system has its own cache.

 There is no other filesystem.  ZFS is running on bare metal.  It is
 just pointing to a partition on a drive (an array of blocks) instead
 of the whole drive (an array of blocks).  The kernel does not cache
 partitions differently from drives.

How do you use a /boot partition that doesn't have a file system?

 On top of that, you have the overhead of
 software raid for that small partition unless you can dedicate
 hardware-raided disks for /boot.

 Just how often are you reading/writing from your boot partition?  You
 only read from it at boot time, and you only write to it when you
 update your kernel/etc.  There is no requirement for it to be raided
 in any case, though if you have multiple disks that wouldn't hurt.

If you want to accept that the system goes down or has to be brought
down or is unable to boot because the disk you have your /boot partition
on has failed, you may be able to get away with a non-raided /boot
partition.

When you do that, what's the advantage other than saving the software
raid?  You still need to either dedicate a disk to it, or you have to
leave a part of all the other disks unused and cannot use them as a
whole for ZFS because otherwise they will be of different sizes.

 This sort of thing was very common before grub2 started supporting
 more filesystems.

 That doesn't mean it's a good setup.  I'm finding it totally
 undesirable.  Having a separate /boot partition has always been a
 crutch.

 Better not buy an EFI motherboard.  :)

Yes, they are a security hazard and a PITA.  Maybe I can sit it out
until they come up with something better.

 With ZFS at hand, btrfs seems pretty obsolete.

 You do realize that btrfs was created when ZFS was already at hand,
 right?  I don't think that ZFS will be likely to make btrfs obsolete
 unless it adopts more dynamic desktop-oriented features (like being
 able to modify a vdev), and is relicensed to something GPL-compatible.
 Unless those happen, it is unlikely that btrfs is going to go away,
 unless it is replaced by something different.

 Let's say it seems /currently/ obsolete.

 You seem to have an interesting definition of obsolete - something
 which holds potential promise for the future is better described as
 experimental.

Can you build systems on potential promises for the future?

If the resources it takes to develop btrfs would be put towards
improving ZFS, or the other way round, wouldn't that be more efficient?
We might even have a better solution available now.  Of course, it's not
a good idea to remove variety, so it's a dilemma.  But are the features
provided or intended to be provided, and the problems both btrfs and ZFS
are trying to solve, so different that each of them needs to
re-invent the wheel?

 Solutions are needed /now/, not in about 10 years when btrfs might be
 ready.


 Well, feel free to create one.  Nobody is stopping anybody from using
 zfs, but unless it is either relicensed by Oracle or the
 kernel/grub/etc is relicensed by everybody else you're unlikely to see
 it become a mainstream solution.  That seems to be the biggest barrier
 to adoption, though it would 

Re: [gentoo-user] fail2ban: You have to create an init script for each container ...

2015-01-11 Thread lee
Rich Freeman ri...@gentoo.org writes:

 On Sun, Jan 11, 2015 at 10:48 AM, lee l...@yagibdah.de wrote:

 I don't want to run fail2ban in the container because the container must
 not mess with the firewall settings of the host.  If a container can do
 that, then what's the point of having containers in the first place?


 I've never used the LXC scripts to set up a container, but I actually
 run a firewall inside a container.  You just need to run it in a
 separate network namespace so that it is messing with its own
 interface.

 In general, though, I wouldn't want my containers messing with my host
 interfaces.

Same here, so why does fail2ban get involved with containers?


 BTW, why does Gentoo put containers under /etc?  Containers aren't
 configuration files ...


 I'd never put a container there.  I can't speak to how the lxc scripts
 are intended to be used - I don't use those tools to manage
 containers.  I typically stick my containers in their own place in
 btrfs subvolumes for easy management.

I wouldn't put them there, either.  Yet Gentoo does, very unexpectedly.
I'll probably move the container into its own ZFS FS.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] Usign ansible

2015-01-11 Thread Alan McKinnon
On 11/01/2015 19:41, Tomas Mozes wrote:
 On 2015-01-11 09:22, Alan McKinnon wrote:
 On 11/01/2015 09:46, Tomas Mozes wrote:
 On 2015-01-10 23:11, Alan McKinnon wrote:
 On 10/01/2015 21:40, Tomas Mozes wrote:


 Ansible is not a backup solution. You don't need to download your
 /etc
 from the machines because you deploy your /etc to machines via
 ansible.

 I was also thinking about putting /etc in git and then deploying it
 but:
 - on updates, will you update all configurations in all /etc repos?
 - do you really want to keep all the information in git, is it
 necessary?

 The set of files in /etc/ managed by ansible is always a strict subset
 of everything in /etc

 For that reason alone, it's a good idea to back up /etc anyway,
 regardless of having a CM system in place. The smallest benefit is
 knowing when things changed, by the CM system or otherwise

 For what reason?

 For the simple reason that ansible is not the only system that can make
 changes in /etc

 And what does a workflow look like then? You commit changes to your git
 repo of ansible. Then you deploy via ansible and check the /etc of each
 machine and commit a message that you changed something via ansible?


 When you commit to the ansible repo, you are committing and tracking
 changes to the *ansible* config. You are not tracking changes to /etc on
 the actual destination host, that is a separate problem altogether and
 not directly related to the fact that ansible logs in and does various
 s

 You can make your workflow whatever makes sense to you.

 The reason I'm recommending to keep all of /etc in its own repo is that
 it's the simplest way to do it. /etc/ is a large mixture of
 ansible-controlled files, sysadmin-controlled files, and other arbitrary
 files installed by the package manager. It's also not very big, around
 10M or so typically. So you *could* manually add to a repo every file
 you change manually, but that is error-prone and easy to forget. Simpler
 to just commit everything in /etc, which gives you an independent record
 of all changes over time. Have you ever dealt with a compliance auditor?
 An independent change record that is separate from the CM itself is a
 feature that those fellows really like a lot.
 
 Out of curiosity, ansible-controlled files, sysadmin-controlled files
 means that something is managed via ansible and something is done manually?


Yes


 And then, /etc is not the only directory with changing files, what about
 other directories?

Do with them whatever you want, just like /etc

/etc is the canonical example of something you might want to track in
git, as a) it changes and b) it's hard to recreate.

Maybe you have other directories and locations you feel the same about,
so if you think they need tracking in git by all means go ahead and
track them. It's your choice after all, you can do with your servers
whatever you wish
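A minimal sketch of that kind of tracking, using a throwaway directory in
place of the real /etc (paths, file contents, and commit identity here are
all made up for illustration):

```shell
# Demo stand-in for /etc; on a real host you would run these in /etc itself.
ETC=$(mktemp -d)
echo "nameserver 192.168.0.1" > "$ETC/resolv.conf"
cd "$ETC"
git init -q
git add -A
git -c user.name=admin -c user.email=admin@localhost commit -qm "baseline snapshot"
# Later, after ansible (or a human) changes something:
echo "nameserver 192.168.0.2" > resolv.conf
git add -A
git -c user.name=admin -c user.email=admin@localhost commit -qm "record drift"
git log --oneline   # two commits: an independent record of when /etc changed
```

The point is only that the repo records every change, whoever made it; how
the commits get triggered is up to you.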



 Regarding the workflow with /etc in git vs ansible in git I was asking
 about your concrete workflow so we can learn from it and maybe apply
 some good practices on our servers as well.


There isn't any workflow.

Ansible does its thing and sometimes changes stuff.
Changes get committed to a repo however and whenever works best for you.
Maybe it's a regular cron job, maybe it's something you remember to do
every time you quit vi, maybe it's an ansible handler that runs at the
end of every play.
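For instance, the cron variant could look like this (a hypothetical root
crontab line; the schedule and commit identity are arbitrary):

```
# m h dom mon dow  command -- nightly snapshot of /etc
0 2 * * * cd /etc && git add -A && git -c user.name=cron -c user.email=root@localhost commit -qm "nightly /etc snapshot" || true
```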

It will be almost impossible to give advice to someone else on this.


-- 
Alan McKinnon
alan.mckin...@gmail.com




[gentoo-user] OT: Amplifier to connect to network

2015-01-11 Thread Joseph

I'm looking for a solution to stream music in a home from my Gentoo box.

I have speaker wire inside wall (CL3 14AWG) going from each room to a central location in a basement.  
I was thinking of utilizing these wires and putting an amplifier in the basement where I could connect all the speakers.
But I would like to connect the amplifier to my network and be able to control it from my computer.  

Are there better solutions? 


--
Joseph



Re: [gentoo-user] Is it wrong to install a specific version

2015-01-11 Thread Stroller

On Sun, 11 January 2015, at 2:26 am, behrouz khosravi bz.khosr...@gmail.com 
wrote:
 
 After that I … installed all of the required qt5 packages from qt-5.4.0 one 
 by one and using the specific version.
 After that I did the same with qt-framework packages and so on.

This is fine, if you use `emerge -1`. That installs the package (thus 
fulfilling the dependency) without adding to the world file. 

If you specify the package version and emerge without the `--oneshot` flag, 
then the package version will be added to the world file, and thus the package 
version will be pinned, and be a problem during updates.

 I think that this is generally a bad idea, because it makes the world set 
 much bigger.
 However I am wondering what will happen when the tree updates?

I would edit your world file with a text editor and remove these packages. They 
will still remain on your system, and they will still fulfil the Plasma's 
dependencies.
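As a sketch, the same edit can be scripted with sed, shown here on a
throwaway copy of the world file (the qtbase atom is just an example of a
pinned version; on a real system the file is /var/lib/portage/world):

```shell
# Stand-in for /var/lib/portage/world
WORLD=$(mktemp)
printf '%s\n' 'app-editors/vim' '=dev-qt/qtbase-5.4.0' > "$WORLD"
# Drop the pinned, versioned atom; unversioned entries stay untouched
sed -i '/^=dev-qt\/qtbase-5.4.0$/d' "$WORLD"
cat "$WORLD"   # only app-editors/vim remains
```

Portage can also do this itself with `emerge --deselect =dev-qt/qtbase-5.4.0`,
which removes the world entry without touching the installed package.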

Stroller.




Re: [gentoo-user] Is it wrong to install a specific version

2015-01-11 Thread behrouz khosravi



 I would edit your world file with a text editor and remove these packages.
 They will still remain on your system, and they will still fulfil the
 Plasma's dependencies.


I didn't know that was possible.
Thanks for this tip.


[gentoo-user] restoring master boot record

2015-01-11 Thread Philip Webb
I'm trying to be prepared in case my SSD fails.
If it did, I would buy a replacement  use my sync'ed copy of system files
to restore the Gentoo system without needing to re-install it.

One question, if anyone can help : in order to have a working system,
I think I'd need to rewrite the master boot record onto the new SSD.
I use Lilo, which has a line 'boot=device-name':
would it be enough to simply run Lilo, perhaps from System-Rescue ?
Should I have a back-up copy of the present MBR to use and, if so,
what is the correct command to copy it and later put it in the proper place ?

If I do find myself facing this problem one day, I may not have e-mail,
so I'd like to have the necessary info saved for when it's needed.
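For the copy itself, dd is the usual tool; a hedged sketch follows, with a
temp file standing in for the real device (substitute e.g. /dev/sda). The
first 512-byte sector holds both the 446-byte boot code and the partition
table:

```shell
DISK=$(mktemp)     # stand-in for the real device node, e.g. /dev/sda
BACKUP=$(mktemp)
dd if=/dev/urandom of="$DISK" bs=512 count=4 2>/dev/null   # fake disk contents
# Save the MBR: one 512-byte sector
dd if="$DISK" of="$BACKUP" bs=512 count=1 2>/dev/null
# Restore it later (conv=notrunc so the rest of the disk is left alone)
dd if="$BACKUP" of="$DISK" bs=512 count=1 conv=notrunc 2>/dev/null
```

With lilo the saved copy is mostly a safety net: rerunning lilo against the
restored /etc/lilo.conf should rewrite the boot sector anyway. Use bs=446 if
you want the boot code without the partition table.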

-- 
,,
SUPPORT ___//___,   Philip Webb
ELECTRIC   /] [] [] [] [] []|   Cities Centre, University of Toronto
TRANSIT`-O--O---'   purslowatchassdotutorontodotca




Re: [gentoo-user] Is it wrong to install a specific version

2015-01-11 Thread Sid S
A package set is just a list of packages in a file under
/etc/portage/sets. You can operate on every package in the set at
once.

https://dev.gentoo.org/~zmedico/portage/doc/ch02.html

If you merged the packages by version they won't be automatically
updated. This includes the case where they are part of a set (unless
you leave part of the version unspecified). If you merged the slot,
the packages would update to changes made in that slot.
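A hypothetical set file, to illustrate (the file name and atoms are examples
only):

```
# /etc/portage/sets/qt5 -- one atom per line
dev-qt/qtcore:5
dev-qt/qtgui:5
dev-qt/qtwidgets:5
```

`emerge -av @qt5` then operates on all of them at once.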



Re: [gentoo-user] pdf viewer

2015-01-11 Thread Walter Dnes
On Sun, Jan 11, 2015 at 01:21:19PM +0100, lee wrote
 Walter Dnes waltd...@waltdnes.org writes:
 
Assuming you've already got Content Type PDF file in the list,
  click on the icon beside emacsclient in the Action column.  This
  opens a dropdown menu.  Click on Use other... and navigate to
  /usr/bin/mupdf in the file menu.
 
 That's what I thought and tried.  I don't want to use it as default
 action, though, because I sometimes save PDFs.

  Two options...

1) In the Action column you can select Always ask, and it'll always
ask what you want to do.  I find that to be a pain.

2) mupdf does not render straight from memory.  First it saves the pdf
file to /tmp/ and renders it from there.  I believe the linux default is
to always clean up /tmp/ at every reboot (but not during restore from
hibernation).  While mupdf doesn't have a Save as option, you can
copy/move the file from /tmp/ manually, giving you the same effect as a
Save as.

-- 
Walter Dnes waltd...@waltdnes.org
I don't run desktop environments; I run useful applications



[gentoo-user] Re: Firefox bookmarks

2015-01-11 Thread »Q«
On Sun, 11 Jan 2015 03:21:08 -0500
Walter Dnes waltd...@waltdnes.org wrote:

 On Sat, Jan 10, 2015 at 12:10:33AM -0500, Philip Webb wrote
  Can anyone tell me where Firefox stores its bookmarks ?
  I want to copy the bookmarks from my Gentoo system to another
  system ; I've tried copying  .cache/mozilla.mozilla ,
  but it has no effect on the bookmarks shown by Firefox in the other
  machine.
 
   Remember to shut down the target Firefox first.  The file to copy is
 places.sqlite as per http://kb.mozillazine.org/Places.sqlite

Make sure to remove all places.sqlite-* files from the target profile
first.  (Mozillazine's article was written before there were any such
files.)
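A sketch of the whole copy (real profile paths live under
~/.mozilla/firefox/<random>.default; the demo below uses temp directories so
it is runnable, and the target Firefox must be shut down first):

```shell
SRC=$(mktemp -d)   # stands in for the source profile directory
DST=$(mktemp -d)   # stands in for the target profile directory
touch "$SRC/places.sqlite"
touch "$DST/places.sqlite-wal" "$DST/places.sqlite-shm"   # stale sidecar files
rm -f "$DST"/places.sqlite-*     # remove the sidecars first, as noted above
cp "$SRC/places.sqlite" "$DST/"  # then copy the bookmarks/history database
```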




[gentoo-user] cannot emerge dev-qt/qtcore, undefined reference to `__stack_chk_fail'

2015-01-11 Thread Justin Findlay
I am having a problem emerging dev-qt/qtcore that I have been unable to
solve myself yet.  The system is amd64 and I have ABI_X86='32 64' so
that pipelight will work.  I think the error is coming from somewhere
within glibc's multilib compatibility.

# ebuild $(equery which qtcore) merge



 Existing ${T}/environment for 'qtcore-4.8.6-r1' will be sourced. Run
 'clean' to start with a fresh environment.
 Checking qt-everywhere-opensource-src-4.8.6.tar.gz's mtime...
 WORKDIR is up-to-date, keeping...
 * checking ebuild checksums ;-) ...                              [ ok ]
 * checking auxfile checksums ;-) ...                             [ ok ]
 * checking miscfile checksums ;-) ...                            [ ok ]
 It appears that 'pretend' has already executed for 'qtcore-4.8.6-r1'; skipping.
 Remove '/var/tmp/portage/dev-qt/qtcore-4.8.6-r1/.pretended' to force pretend.
 It appears that 'setup' has already executed for 'qtcore-4.8.6-r1'; skipping.
 Remove '/var/tmp/portage/dev-qt/qtcore-4.8.6-r1/.setuped' to force setup.
 It appears that 'unpack' has already executed for 'qtcore-4.8.6-r1'; skipping.
 Remove '/var/tmp/portage/dev-qt/qtcore-4.8.6-r1/.unpacked' to force unpack.
 It appears that 'prepare' has already executed for 'qtcore-4.8.6-r1'; skipping.
 Remove '/var/tmp/portage/dev-qt/qtcore-4.8.6-r1/.prepared' to force prepare.
 It appears that 'configure' has already executed for 'qtcore-4.8.6-r1'; skipping.
 Remove '/var/tmp/portage/dev-qt/qtcore-4.8.6-r1/.configured' to force configure.
 Compiling source in /var/tmp/portage/dev-qt/qtcore-4.8.6-r1/work/qt-everywhere-opensource-src-4.8.6 ...
 * abi_x86_32.x86: running multilib-minimal_abi_src_compile
 * Running emake in src/tools/bootstrap
make -j3 -l4
make: Nothing to be done for 'first'.
 * Running emake in src/tools/moc
make -j3 -l4
make: Nothing to be done for 'first'.
 * Running emake in src/tools/rcc
make -j3 -l4
make: Nothing to be done for 'first'.
 * Running emake in src/tools/uic
make -j3 -l4
make: Nothing to be done for 'first'.
 * Running emake in src/corelib
make -j3 -l4
make: Nothing to be done for 'first'.
 * Running emake in src/network
make -j3 -l4
make: Nothing to be done for 'first'.
 * Running emake in src/xml
make -j3 -l4
rm -f libQtXml.so.4.8.6 libQtXml.so libQtXml.so.4 libQtXml.so.4.8
x86_64-pc-linux-gnu-g++ -m32 -Wl,-O1 -Wl,--as-needed
-Wl,-rpath-link,/var/tmp/portage/dev-qt/qtcore-4.8.6-r1/work/qt-everywhere-opensource-src-4.8.6-abi_x86_32.x86/lib
-Wl,--no-undefined -shared -Wl,-Bsymbolic-functions
-Wl,-soname,libQtXml.so.4 -o libQtXml.so.4.8.6
.obj/release-shared/qdom.o .obj/release-shared/qxml.o
-L/var/tmp/portage/dev-qt/qtcore-4.8.6-r1/work/qt-everywhere-opensource-src-4.8.6-abi_x86_32.x86/lib
-L/usr/lib32/qt4 -lQtCore
-L/var/tmp/portage/dev-qt/qtcore-4.8.6-r1/work/qt-everywhere-opensource-src-4.8.6-abi_x86_32.x86/lib
-lpthread
/usr/lib32/libc_nonshared.a(stack_chk_fail_local.oS): In function
`__stack_chk_fail_local':
stack_chk_fail_local.c:(.text+0x20): undefined reference to
`__stack_chk_fail'
collect2: error: ld returned 1 exit status
Makefile:122: recipe for target '../../lib/libQtXml.so.4.8.6' failed
make: *** [../../lib/libQtXml.so.4.8.6] Error 1
 * ERROR: dev-qt/qtcore-4.8.6-r1::gentoo failed (compile phase):
 *   emake failed
 *
 * If you need support, post the output of `emerge --info
'=dev-qt/qtcore-4.8.6-r1::gentoo'`,
 * the complete build log and the output of `emerge -pqv
'=dev-qt/qtcore-4.8.6-r1::gentoo'`.
 * The complete build log is located at
'/var/tmp/portage/dev-qt/qtcore-4.8.6-r1/temp/build.log'.
 * The ebuild environment file is located at
'/var/tmp/portage/dev-qt/qtcore-4.8.6-r1/temp/environment'.
 * Working directory:
'/var/tmp/portage/dev-qt/qtcore-4.8.6-r1/work/qt-everywhere-opensource-src-4.8.6-abi_x86_32.x86/src/xml'
 * S:
'/var/tmp/portage/dev-qt/qtcore-4.8.6-r1/work/qt-everywhere-opensource-src-4.8.6'


Here's more information on the problem:
https://gist.github.com/jfindlay/3bb0a4c8a0a6d1eafcd5, thanks.


Justin



Re: [gentoo-user] Usign ansible

2015-01-11 Thread Tomas Mozes

On 2015-01-11 22:06, Alan McKinnon wrote:
 Out of curiosity, "ansible-controlled files, sysadmin-controlled files"
 means that something is managed via ansible and something is done
 manually?

 Yes

Then it's clear why /etc is in git.  Ideally one would not make manual
changes to systems managed via ansible.