Re: [gentoo-user] Ansible, puppet and chef

2014-09-17 Thread Eray Aslan
On Tue, Sep 16, 2014 at 10:43:18PM +0200, Alan McKinnon wrote:
 Puppet seems to me a good product for a large site with 1000 hosts.
 Not so much for ~20 or so.

I find that for a few machines, puppet is overkill.  For a lot of
machines, puppet can become unmanageable - with puppet master and
security being the culprit.

We have used puppet a lot but recently settled on salt (strictly
speaking not my decision so cannot really compare it with ansible) and
we are happy with the outcome.  You might want to consider
app-admin/salt as well.

-- 
Eray



[gentoo-user] [ANN] Ebuild for Puppet

2006-09-07 Thread José González Gómez
Hi there,

I recently discovered Puppet [1]. From their web site: "Puppet is an
open-source next-generation server automation tool. It is composed of a
declarative language for expressing system configuration, a client and
server for distributing it, and a library for realizing the
configuration." Basically, Puppet intends to be a better cfengine [2]
and IMHO it looks very promising. That's why I have contributed init
scripts for Gentoo and integration of Puppet with portage, available
from version 0.19.0, and have also contributed an ebuild [3] to make
Puppet available to all those Gentoo sys admins out there. I also have
to say that I have found its main developer (Luke Kanies) to be very
supportive while programming all the integration with Gentoo. I hope
you find this useful, and if so, contribute to make it even better (of
course, if you find any problem in the integration with Gentoo, feel
free to contact me).
Best regards,
Jose

[1] http://www.reductivelabs.com/projects/puppet/index.html
[2] http://www.reductivelabs.com/projects/puppet/documentation/notcfengine.html
[3] http://bugs.gentoo.org/show_bug.cgi?id=146712







Re: [gentoo-user] Re: Ansible, puppet and chef

2014-09-17 Thread Alan McKinnon
On 17/09/2014 07:46, Hans de Graaff wrote:
 On Tue, 16 Sep 2014 22:43:18 +0200, Alan McKinnon wrote:
 
 Puppet seems to me a good product for a large site with 1000 hosts.
 Not so much for ~20 or so. Plus puppet's language and configs get large
 and hard to keep track of - lots and lots of directory trees with many
 things mentioning other things. (Nagios has the same problem if you
 start keeping host, services, groups and commands in many different
 files)
 
 I'm using puppet for small installs (< 10 hosts) and am quite happy with 
 it. It's wonderful to push some changes and have all these hosts 
 configure themselves accordingly. Not to mention the joy of adding new 
 hosts.

I want the benefits of puppet and the end result it brings about -
that's already established.

 
 The configuration can get large, but then again, these are all things 
 that you are already managing on the host. Better to do it all in one 
 place, rather than on each individual host with all its associated 
 inconsistencies.
 
 Us being a ruby shop, I never looked at ansible and I'm not even sure it 
 existed when we chose puppet.

Ansible is somewhat new, and reading between the lines it might have
been written in response to large complex puppet installs.


 One thing you can do to make the deployment easier for smaller scale 
 setups would be to use a masterless puppet. One less component to worry 
 about. Just distribute the puppet repository and run puppet apply.


Well, I've already decided not to use puppet; I find it over-complex for
my needs (not to mention that the language has some confusing parts to it).


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Ansible, puppet and chef

2014-09-16 Thread Alec Ten Harmsel
We use bcfg2, and all I can say is to stay away. XML abuse runs rampant
in bcfg2. From what I've heard from other professional sysadmins, Puppet
is the favorite, but that's mostly conjecture.

Alec

On 09/16/2014 04:43 PM, Alan McKinnon wrote:
 Anyone here used ansible and at least one of puppet/chef?

 What are your thoughts?

 I've made several attempts over the years to get puppet going but never
 really got it off the ground. Chef I stay away from (likely due to the
 first demo of it I saw and how badly that went)

 Puppet seems to me a good product for a large site with 1000 hosts.
 Not so much for ~20 or so. Plus puppet's language and configs get large
 and hard to keep track of - lots and lots of directory trees with many
 things mentioning other things. (Nagios has the same problem if you
 start keeping host, services, groups and commands in many different files)

 I've stumbled upon ansible, it seems much better than puppet for
 smallish sites with good odds I might even keep the whole thing in my
 head at any one time :-)

 Anyone care to share experiences?







Re: [gentoo-user] Cross system dependencies

2014-06-29 Thread Neil Bothwick
On Sun, 29 Jun 2014 08:55:41 +0200, J. Roeleveld wrote:

  or...
  puppet and its kin  
 
 Last time I looked at puppet, it seemed too complex for what I need.
 I will recheck it again.

What about something like monit?


-- 
Neil Bothwick

Bug: (n.) any program feature not yet described to the marketing
department.


signature.asc
Description: PGP signature


[gentoo-user] Ansible, puppet and chef

2014-09-16 Thread Alan McKinnon
Anyone here used ansible and at least one of puppet/chef?

What are your thoughts?

I've made several attempts over the years to get puppet going but never
really got it off the ground. Chef I stay away from (likely due to the
first demo of it I saw and how badly that went)

Puppet seems to me a good product for a large site with 1000 hosts.
Not so much for ~20 or so. Plus puppet's language and configs get large
and hard to keep track of - lots and lots of directory trees with many
things mentioning other things. (Nagios has the same problem if you
start keeping host, services, groups and commands in many different files)

I've stumbled upon ansible, it seems much better than puppet for
smallish sites with good odds I might even keep the whole thing in my
head at any one time :-)

Anyone care to share experiences?



-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Ansible, puppet and chef

2014-09-17 Thread Alan McKinnon
On 17/09/2014 11:34, J. Roeleveld wrote:
 
 On Wednesday, September 17, 2014 12:19:37 PM Eray Aslan wrote:
 On Tue, Sep 16, 2014 at 10:43:18PM +0200, Alan McKinnon wrote:
 Puppet seems to me a good product for a large site with 1000 hosts.
 Not so much for ~20 or so.

 I find that for a few machines, puppet is overkill.  For a lot of
 machines, puppet can become unmanageable - with puppet master and
 security being the culprit.

 We have used puppet a lot but recently settled on salt (strictly
 speaking not my decision so cannot really compare it with ansible) and
 we are happy with the outcome.  You might want to consider
 app-admin/salt as well.
 
 Looks good (had a really quick look).
From what I read (and please correct me if I'm wrong), a difference between 
 salt and ansible is:
 
 Salt requires a daemon to be installed and running on all machines,
 and the versions need to be (mostly) in sync.
 
 For Alan, this might work, but for my situation it wouldn't, as I'd need to 
 keep various VMs in sync with the rest, where I'd prefer to simply clone them 
 and then enforce changes. Relying on SSH and PowerShell makes that simpler.
 
 But, it does mean that all nodes need to have incoming ports open. With Salt, 
 all nodes connect back to the master. This allows tighter security.


I'm not too stressed either way. All my hosts run sshd anyway and the
security is not in whether tcp22 is open or not, it's in what I put in
sshd_config. With the puppet design, the puppet daemon must be running
(or a cronjob) and puppet can self-host that along with nrpe, munin and
all the other crap that gets installed so I can do my job :-)


My issue with puppet is not its network architecture but its
convoluted config language that I can't wrap my brain around, plus the
re-use of similar keywords to mean quite different things, which means I have
to read 5 topics in the manual to get stuff working. Nagios btw has the
same problem, hence why I'm switching to Icinga 2, which fixes Nagios's
config language once and for all.


-- 
Alan McKinnon
alan.mckin...@gmail.com




[gentoo-user] Re: Ansible, puppet and chef

2014-09-17 Thread Hans de Graaff
On Tue, 16 Sep 2014 22:43:18 +0200, Alan McKinnon wrote:

 Puppet seems to me a good product for a large site with 1000 hosts.
 Not so much for ~20 or so. Plus puppet's language and configs get large
 and hard to keep track of - lots and lots of directory trees with many
 things mentioning other things. (Nagios has the same problem if you
 start keeping host, services, groups and commands in many different
 files)

I'm using puppet for small installs (< 10 hosts) and am quite happy with 
it. It's wonderful to push some changes and have all these hosts 
configure themselves accordingly. Not to mention the joy of adding new 
hosts.

The configuration can get large, but then again, these are all things 
that you are already managing on the host. Better to do it all in one 
place, rather than on each individual host with all its associated 
inconsistencies.

Us being a ruby shop, I never looked at ansible and I'm not even sure it 
existed when we chose puppet.

One thing you can do to make the deployment easier for smaller scale 
setups would be to use a masterless puppet. One less component to worry 
about. Just distribute the puppet repository and run puppet apply.
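The masterless workflow described above can be sketched as a pair of commands. The checkout location and module layout below are illustrative assumptions, not a prescribed setup:

```shell
# Hypothetical masterless-puppet run: refresh the distributed repo,
# then apply the manifests locally (no puppet master involved).
cd /etc/puppet-repo            # assumed checkout location
git pull --ff-only             # fetch the latest manifests
puppet apply --modulepath=./modules manifests/site.pp
```

Run from cron or a wrapper script on each host, this keeps nodes converging without a central daemon.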

Hans




Re: [gentoo-user] XOrg / XRanR segfault starting X

2016-08-18 Thread David Haller
Hello,

On Thu, 18 Aug 2016, james wrote:
>other tools:: 'lshw'

or sys-apps/hwinfo

HTH,
-dnh

-- 
"I'm nobody's puppet!"-- Rygel XIV



Re: [gentoo-user] Managing multiple Gentoo systems

2011-07-07 Thread kashani

On 7/7/2011 1:37 PM, Alan McKinnon wrote:

On Thursday 07 July 2011 11:23:15 kashani did opine thusly:

On 7/2/2011 3:14 PM, Grant wrote:

After a frustrating experience with a Linksys WRT54GL, I've
decided to stick with Gentoo routers.  This increases the
number of Gentoo systems I'm responsible for and they're
nearing double-digits.  What can be done to make the management
of multiple Gentoo systems easier? I think identical hardware
in each system would help a lot but I'm not sure that's
practical.  I need to put together a bunch of new workstations
and I'm thinking some sort of server/client arrangement with
the only Gentoo install being on the server could be
appropriate.

- Grant


You may want to look at something like a config management system.

I'm using Puppet these days, but Gentoo support isn't spectacular.
It would be a bit complex to have Puppet install the packages with
the correct USE flags. However you could use Puppet to manage all
the text files and then manage the packages somewhat manually.


Give chef a try.

It overcomes a lot of the issues puppet ran into, and of course makes
new ones all of its own, but by and large chef is more flexible.


Too late. I've already put a year in with Puppet and have too much 
working code to switch. Also I'm not much of a programmer so I get a bit 
more out of the DSL though my templates are getting fairly fancy these 
days. For anyone else interested in what we're talking about, here's a 
fairly balanced and up to date link talking about some of the differences.


http://redbluemagenta.com/2011/05/21/puppet-vs-chef/

kashani



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-26 Thread Alan McKinnon
On 27/09/2013 06:33, Johann Schmitz wrote:
 Hi Alan,
 
 On 26.09.2013 22:42, Alan McKinnon wrote:
 You will break things horribly and will curse the day you tried.
 Basically, puppet and portage will get in each other's way and clobber
 each other. Puppet has no concept of USE flags worth a damn, cannot
 determine in advance what an ebuild will provide and the whole thing
 breaks puppet's 100% deterministic model.

 Puppet is designed to work awesomely well with binary distros, that is
 where it excels. Keep within those constraints. Same goes for chef,
  cfengine and various other things that accomplish the same end.
 
 Did you try to combine one of these solutions with portage's binary
 package feature? With --usepkgonly gentoo is more or less a binary
 distro. I'm thinking of using a single use flag set for 20+ Gentoo
 servers to get rid of compiling large packages in the live environment.


binpkgs don't turn gentoo into a binary distro, they turn it into
something resembling a Unix from the 90s with pkgadd - using dumb
tarballs with no metadata and no room to make choices. Puppet fails at
that as the intelligence cannot happen in puppet, it has to happen in
portage. If the binpkg doesn't match what package.* says, puppet is
stuck and portage falls back to building locally. The result is worse
than the worst binary distro.

By all means use a central use set, it's what I do for my dev VMs and it
works out well for me. Just remember to run emerge on each machine
individually.
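A central USE set plus local emerges, as described above, might look roughly like this; the flag values and paths are illustrative only, not a recommended configuration:

```
# /etc/portage/make.conf, shared across the fleet (illustrative values)
USE="-X bindist syslog"          # the single central USE set
FEATURES="buildpkg"              # on the build host: save binpkgs as it emerges
PKGDIR="/var/cache/binpkgs"      # export/share this directory to the clients

# Then, on each machine individually, prefer the shared packages but
# let portage fall back to building locally when the binpkg doesn't fit:
#   emerge --usepkg --update --deep @world
```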




-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Ansible, puppet and chef

2014-09-17 Thread Alan McKinnon
On 17/09/2014 03:30, Alec Ten Harmsel wrote:
 We use bcfg2, and all I can say is to stay away. XML abuse runs rampant
 in bcfg2. From what I've heard from other professional sysadmins, Puppet
 is the favorite, but that's mostly conjecture.

XML. Ugh. OSSEC works like that too. The software itself works well but
the config is painful.


 
 Alec
 
 On 09/16/2014 04:43 PM, Alan McKinnon wrote:
 Anyone here used ansible and at least one of puppet/chef?

 What are your thoughts?

 I've made several attempts over the years to get puppet going but never
 really got it off the ground. Chef I stay away from (likely due to the
 first demo of it I saw and how badly that went)

 Puppet seems to me a good product for a large site with 1000 hosts.
 Not so much for ~20 or so. Plus puppet's language and configs get large
 and hard to keep track of - lots and lots of directory trees with many
 things mentioning other things. (Nagios has the same problem if you
 start keeping host, services, groups and commands in many different files)

 I've stumbled upon ansible, it seems much better than puppet for
 smallish sites with good odds I might even keep the whole thing in my
 head at any one time :-)

 Anyone care to share experiences?



 
 
 


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Ansible, puppet and chef

2014-09-17 Thread J. Roeleveld

On Tuesday, September 16, 2014 10:43:18 PM Alan McKinnon wrote:
 Anyone here used ansible and at least one of puppet/chef?
 
 What are your thoughts?
 
 I've made several attempts over the years to get puppet going but never
 really got it off the ground. Chef I stay away from (likely due to the
 first demo of it I saw and how badly that went)
 
 Puppet seems to me a good product for a large site with 1000 hosts.
 Not so much for ~20 or so. Plus puppet's language and configs get large
 and hard to keep track of - lots and lots of directory trees with many
 things mentioning other things. (Nagios has the same problem if you
 start keeping host, services, groups and commands in many different files)
 
 I've stumbled upon ansible, it seems much better than puppet for
 smallish sites with good odds I might even keep the whole thing in my
 head at any one time :-)
 
 Anyone care to share experiences?

No experiences yet, but I have been looking for options to quickly and easily 
create (and remove) VM lab environments.

I agree with your comments on Chef and Puppet.
Ansible looks nice and seems easy to manage. I miss an option to store the 
configuration inside a database, but I don't see an issue adding the 
generation of the config-files from database tables to the rest of the 
environment I am working on.
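The idea of generating config files from database tables can be sketched briefly; the table and column names below are made up for illustration, and this emits an Ansible-style INI inventory as the simplest example of such a generated file:

```python
# Sketch: keep host/group data in a database and render an
# Ansible-style INI inventory from it. Schema is hypothetical.
import sqlite3


def generate_inventory(db_path):
    """Render an INI inventory from a hosts(groupname, hostname) table."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT groupname, hostname FROM hosts ORDER BY groupname, hostname"
    ).fetchall()
    conn.close()

    lines, current = [], None
    for group, host in rows:
        if group != current:          # start a new [group] section
            lines.append(f"[{group}]")
            current = group
        lines.append(host)
    return "\n".join(lines)
```

The same approach extends to templating any other config file from the database before handing it to the automation tool.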

I like that Ansible also seems to support MS Windows nodes; it's just too bad 
that it requires enabling after install. But with this, cloning VMs and changing 
the network configs afterwards seems easier to manage.

--
Joost




Re: [gentoo-user] Ansible, puppet and chef

2014-09-17 Thread J. Roeleveld

On Wednesday, September 17, 2014 12:19:37 PM Eray Aslan wrote:
 On Tue, Sep 16, 2014 at 10:43:18PM +0200, Alan McKinnon wrote:
  Puppet seems to me a good product for a large site with 1000 hosts.
  Not so much for ~20 or so.
 
 I find that for a few machines, puppet is overkill.  For a lot of
 machines, puppet can become unmanageable - with puppet master and
 security being the culprit.
 
 We have used puppet a lot but recently settled on salt (strictly
 speaking not my decision so cannot really compare it with ansible) and
 we are happy with the outcome.  You might want to consider
 app-admin/salt as well.

Looks good (had a really quick look).
From what I read (and please correct me if I'm wrong), a difference between 
salt and ansible is:

Salt requires a daemon to be installed and running on all machines,
and the versions need to be (mostly) in sync.

For Alan, this might work, but for my situation it wouldn't, as I'd need to 
keep various VMs in sync with the rest, where I'd prefer to simply clone them 
and then enforce changes. Relying on SSH and PowerShell makes that simpler.

But, it does mean that all nodes need to have incoming ports open. With Salt, 
all nodes connect back to the master. This allows tighter security.

--
Joost



Re: [gentoo-user] Managing multiple Gentoo systems

2011-07-07 Thread Alan McKinnon
On Thursday 07 July 2011 14:01:55 kashani did opine thusly:
 On 7/7/2011 1:37 PM, Alan McKinnon wrote:
  On Thursday 07 July 2011 11:23:15 kashani did opine thusly:
  On 7/2/2011 3:14 PM, Grant wrote:
  After a frustrating experience with a Linksys WRT54GL, I've
  decided to stick with Gentoo routers.  This increases the
  number of Gentoo systems I'm responsible for and they're
  nearing double-digits.  What can be done to make the
  management
  of multiple Gentoo systems easier? I think identical
  hardware
  in each system would help a lot but I'm not sure that's
  practical.  I need to put together a bunch of new
  workstations
  and I'm thinking some sort of server/client arrangement with
  the only Gentoo install being on the server could be
  appropriate.
  
  - Grant
  
 You may want to look at something like a config management system.
  
  I'm using Puppet these days, but Gentoo support isn't
  spectacular. It would be a bit complex to have Puppet install
  the packages with the correct USE flags. However you could
  use Puppet to manage all the text files and then manage the
  packages somewhat manually.
  
  Give chef a try.
  
  It overcomes a lot of the issues puppet ran into, and of course
  makes new ones all of its own, but by and large chef is more
  flexible.
 
 Too late. I've already put a year in with Puppet and have too much
 working code to switch. Also I'm not much of a programmer so I get a
 bit more out of the DSL though my templates are getting fairly
 fancy these days. For anyone else interested in what we're talking
 about, here's a fairly balanced and up to date link talking about
 some of the differences.
 
 http://redbluemagenta.com/2011/05/21/puppet-vs-chef/

At least with puppet you can still work around shortcomings as you 
find them (no black box tricks in puppet)

But regardless of its quality, it's still 1,000,000's of times better 
than doing it all manually!

-- 
alan dot mckinnon at gmail dot com



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-26 Thread Johann Schmitz
Hi Alan,

On 26.09.2013 22:42, Alan McKinnon wrote:
 You will break things horribly and will curse the day you tried.
 Basically, puppet and portage will get in each other's way and clobber
 each other. Puppet has no concept of USE flags worth a damn, cannot
 determine in advance what an ebuild will provide and the whole thing
 breaks puppet's 100% deterministic model.
 
 Puppet is designed to work awesomely well with binary distros, that is
 where it excels. Keep within those constraints. Same goes for chef,
 cfengine and various other things that accomplish the same end.

Did you try to combine one of these solutions with portage's binary
package feature? With --usepkgonly gentoo is more or less a binary
distro. I'm thinking of using a single use flag set for 20+ Gentoo
servers to get rid of compiling large packages in the live environment.

Regards,
Johann



Re: [gentoo-user] Ansible, puppet and chef

2014-09-17 Thread Alan McKinnon
On 17/09/2014 09:34, J. Roeleveld wrote:
 
 On Tuesday, September 16, 2014 10:43:18 PM Alan McKinnon wrote:
 Anyone here used ansible and at least one of puppet/chef?

 What are your thoughts?

 I've made several attempts over the years to get puppet going but never
 really got it off the ground. Chef I stay away from (likely due to the
 first demo of it I saw and how badly that went)

 Puppet seems to me a good product for a large site with 1000 hosts.
 Not so much for ~20 or so. Plus puppet's language and configs get large
 and hard to keep track of - lots and lots of directory trees with many
 things mentioning other things. (Nagios has the same problem if you
 start keeping host, services, groups and commands in many different files)

 I've stumbled upon ansible, it seems much better than puppet for
 smallish sites with good odds I might even keep the whole thing in my
 head at any one time :-)

 Anyone care to share experiences?
 
 No experiences yet, but I have been looking for options to quickly and easily 
 create (and remove) VM lab environments.

Have you tried Vagrant?

I haven't tried it myself, I'm just reacting to the VM keyword ;-)

 
 I agree with your comments on Chef and Puppet.
 Ansible looks nice and seems easy to manage. I miss an option to store the 
 configuration inside a database, but I don't see an issue adding the 
 generation of the config-files from database tables to the rest of the 
 environment I am working on.

Ansible has an add-on called Tower that seems to do this. The marketing
blurb implies you can use almost any storage backend you like, from MySQL
and Postgres to LDAP.

 
 I like that Ansible also seems to support MS Windows nodes; it's just too bad 
 that it requires enabling after install. But with this, cloning VMs and changing 
 the network configs afterwards seems easier to manage.

I'm lucky, this is a Unix-only shop so I don't have to deal with Windows
servers. The three managers who have Windows laptops for varying reasons
have all been clearly told upfront they will support themselves and I
ain't touching it :-)


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Cross system dependencies

2014-06-29 Thread J. Roeleveld
On Sunday, June 29, 2014 09:35:33 AM Neil Bothwick wrote:
 On Sun, 29 Jun 2014 08:55:41 +0200, J. Roeleveld wrote:
   or...
   puppet and its kin
  
  Last time I looked at puppet, it seemed too complex for what I need.
  I will recheck it again.
 
 What about something like monit?

Hmm... I looked into that before; I don't recall why I never looked into it 
properly.

Just had a look on the website, it looks usable, will need to check this.
It would also replace nagios at the same time, which I find ok but don't 
really like.

I might open a new thread at a later stage when I get round to trying it.

Thanks,

Joost



Re: [gentoo-user] Managing multiple Gentoo systems

2011-07-07 Thread Alan McKinnon
On Thursday 07 July 2011 11:23:15 kashani did opine thusly:
 On 7/2/2011 3:14 PM, Grant wrote:
  After a frustrating experience with a Linksys WRT54GL, I've
  decided to stick with Gentoo routers.  This increases the
  number of Gentoo systems I'm responsible for and they're
  nearing double-digits.  What can be done to make the management
  of multiple Gentoo systems easier? I think identical hardware
  in each system would help a lot but I'm not sure that's
  practical.  I need to put together a bunch of new workstations
  and I'm thinking some sort of server/client arrangement with
  the only Gentoo install being on the server could be
  appropriate.
  
  - Grant
 
   You may want to look at something like a config management 
system.
 I'm using Puppet these days, but Gentoo support isn't spectacular.
 It would be a bit complex to have Puppet install the packages with
 the correct USE flags. However you could use Puppet to manage all
 the text files and then manage the packages somewhat manually.

Give chef a try.

It overcomes a lot of the issue puppet ran into, and of course makes 
new ones all of it's won, but by and large chef is more flexible.


 
 Here's a snippet of a template for nrpe.cfg
 
 <% if processorcount.to_i >= 12 then -%>
 command[check_load]=<%= scope.lookupvar('nrpe::params::pluginsdir') %>/check_load -w 35,25,25 -c 35,25,25
 <% elsif fqdn =~ /(.*)stage|demo(.*)/ then -%>
 command[check_load]=<%= scope.lookupvar('nrpe::params::pluginsdir') %>/check_load -w 10,10,10 -c 10,10,10
 <% else -%>
 command[check_load]=<%= scope.lookupvar('nrpe::params::pluginsdir') %>/check_load -w 10,7,5 -c 10,7,5
 <% end -%>
 
 If you were managing a make.conf you could set -j<%=
 processorcount*2 %> or whatever as well as pass in your own
 settings etc. Once you have things working it's pretty good at
 keeping your servers in sync and doing minor customization per
 server based on OS, hardware, IP, hostname, etc.
 
 kashani
-- 
alan dot mckinnon at gmail dot com



Re: [gentoo-user] Ansible, puppet and chef

2014-09-17 Thread Tomas Mozes

On 2014-09-17 14:07, Alan McKinnon wrote:

Nagios btw has the same problem hence why I'm switching to Icinga 2
which fixes Nagios's config language once and for all.


Or you can use hostgroups/templates and have all your configuration in
files and in git. Depends what you like more.



Re: [gentoo-user] Ansible, puppet and chef

2014-09-17 Thread Tomas Mozes

On 2014-09-16 22:43, Alan McKinnon wrote:

Anyone here used ansible and at least one of puppet/chef?

What are your thoughts?

I've made several attempts over the years to get puppet going but never
really got it off the ground. Chef I stay away from (likely due to the
first demo of it I saw and how badly that went)

Puppet seems to me a good product for a large site with 1000 hosts.
Not so much for ~20 or so. Plus puppet's language and configs get large
and hard to keep track of - lots and lots of directory trees with many
things mentioning other things. (Nagios has the same problem if you
start keeping host, services, groups and commands in many different 
files)


I've stumbled upon ansible, it seems much better than puppet for
smallish sites with good odds I might even keep the whole thing in my
head at any one time :-)

Anyone care to share experiences?


We use ansible.

I like it because you don't need any agents to install, just the ssh 
keys and python, which is mandatory on gentoo anyway. We use a 
minimalistic script that bootstraps machines (xen-domU) and then 
everything else is configured via ansible. Since version 1.6 there is 
the portage module to install software and you can do pretty stuff with 
replace/lineinfile/template/copy modules.


The roles are a good way of keeping your systems equal. We have a common 
role for all gentoo machines, then roles specific for dom0 and domU 
machines and then the actual roles of a project (project-app for 
application server of a project). You can abstract it even more to have 
a common application server or a common database, but since you can 
include other playbooks, we don't use it that way (also to not get lost 
in too many levels of abstraction).


For upgrades you either write precise playbooks (for example, before you 
used a specific testing package and now you want a newer testing 
one) where you delete the previous package.accept_keywords line and 
insert the new one. Or, with a small number of servers, it's often 
faster via clusterssh.
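The kind of upgrade task described above might be sketched like this; the package atom, file layout, and host group are made-up examples, written in current Ansible module syntax rather than anything specific to the version discussed in the thread:

```yaml
# Illustrative playbook: swap a package.accept_keywords entry, then
# install via Ansible's portage module (available since 1.6).
- hosts: gentoo_servers
  tasks:
    - name: drop the old keyword entry
      lineinfile:
        path: /etc/portage/package.accept_keywords/myapp
        regexp: '^=www-apps/myapp-1\.0'
        state: absent

    - name: accept the newer testing version
      lineinfile:
        path: /etc/portage/package.accept_keywords/myapp
        line: '=www-apps/myapp-1.1 ~amd64'
        create: yes

    - name: emerge the new version
      portage:
        package: '=www-apps/myapp-1.1'
        state: present
```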





Re: [gentoo-user] Ansible, puppet and chef

2014-09-17 Thread Alan McKinnon
On 17/09/2014 09:07, Tomas Mozes wrote:
 On 2014-09-16 22:43, Alan McKinnon wrote:
 Anyone here used ansible and at least one of puppet/chef?

 What are your thoughts?

 I've made several attempts over the years to get puppet going but never
 really got it off the ground. Chef I stay away from (likely due to the
 first demo of it I saw and how badly that went)

 Puppet seems to me a good product for a large site with 1000 hosts.
 Not so much for ~20 or so. Plus puppet's language and configs get large
 and hard to keep track of - lots and lots of directory trees with many
 things mentioning other things. (Nagios has the same problem if you
 start keeping host, services, groups and commands in many different
 files)

 I've stumbled upon ansible, it seems much better than puppet for
 smallish sites with good odds I might even keep the whole thing in my
 head at any one time :-)

 Anyone care to share experiences?
 
 We use ansible.
 
 I like it because you don't need any agents to install, just the ssh
 keys and python, which is mandatory on gentoo anyway. We use a
 minimalistic script that bootstraps machines (xen-domU) and then
 everything else is configured via ansible. Since version 1.6 there is
 the portage module to install software and you can do pretty stuff with
 replace/lineinfile/template/copy modules.
 
 The roles are a good way of keeping your systems equal. We have a common
 role for all gentoo machines, then roles specific for dom0 and domU
 machines and then the actual roles of a project (project-app for
 application server of a project). You can abstract it even more to have
 a common application server or a common database, but since you can
 include other playbooks, we don't use it that way (also to not get lost
 in too many levels of abstraction).
 
 For upgrades you either write precise playbooks (for example, before you
 used a specific testing package and now you want a newer testing
 one) where you delete the previous package.accept_keywords line and
 insert the new one. Or, with a small number of servers, it's often
 faster via clusterssh.


That's almost exactly the same setup I have in mind.

How complex do the playbooks get in real-life?


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-26 Thread Alan McKinnon
On 26/09/2013 11:08, Grant wrote:
 I'm thinking of a different approach and I'm getting pretty excited.
 
 I realized I only need two types of systems in my life.  One hosted
 server and bunch of identical laptops.  My laptop, my wife's laptop,
 our HTPC, routers, and office workstations could all be on identical
 hardware, and what better choice than a laptop?  Extremely
 space-efficient, portable, built-in UPS (battery), and no need to buy
 a separate monitor, keyboard, mouse, speakers, camera, etc.  Some
 systems will use all of that stuff and some will use none, but it's
 OK, laptops are getting cheap, and keyboard/mouse/video comes in handy
 once in a while on any system.

Laptops are a good choice; desktops are almost dead out there, and thin
clients and nettops are just dead in the water for anything other than
appliances and media servers.


 What if my laptop is the master system and I install any application
 that any of the other laptops need on my laptop and push its entire
 install to all of the other laptops via rsync whenever it changes?
 The only things that would vary by laptop would be users and
 configuration.

Could work, but don't push *your* laptop's config to all the other
laptops; they end up with your stuff, which might not be what you want
them to have. Rather have a completely separate area where you store portage
configs, tree, packages and distfiles for laptops/clients and push from
there.

I'd recommend, if you have a decent-ish desktop lying around, you press
that into service as your master build host. Yeah, it takes 10% longer
to build stuff, but so what? Do it overnight.

  Maybe puppet could help with that?  It would almost be
 like my own distro.  Some laptops would have stuff installed that they
 don't need but at least they aren't running Fedora! :)

Errr no. Do not do that. Do not use puppet for Gentoo systems. Let me
make that clear :-)

DO NOT PROVISION GENTOO SYSTEMS FROM PUPPET.

You will break things horribly and will curse the day you tried.
Basically, puppet and portage will get in each other's way and clobber
each other. Puppet has no concept of USE flags worth a damn, cannot
determine in advance what an ebuild will provide and the whole thing
breaks puppet's 100% deterministic model.

Puppet is designed to work awesomely well with binary distros, that is
where it excels. Keep within those constraints. Same goes for chef,
cfengine and various others things that accomplish the same end.


 If I can make this work I will basically only admin my laptop and
 hosted server no matter how large the office grows.  Huge time savings
 and huge scalability.  No multiseat required.  Please shoot this down!

Rather keep your laptop as your laptop with its own setup, and give
everything else its own setup. You only need one small difference
between what you want your laptop to have and what everything else
should have to crash that entire model.



-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Ansible, puppet and chef

2014-09-17 Thread J. Roeleveld

On Wednesday, September 17, 2014 10:12:52 AM Alan McKinnon wrote:
 On 17/09/2014 09:34, J. Roeleveld wrote:
  On Tuesday, September 16, 2014 10:43:18 PM Alan McKinnon wrote:
  Anyone here used ansible and at least one of puppet/chef?
  
  What are your thoughts?
  
  I've made several attempts over the years to get puppet going but never
  really got it off the ground. Chef I stay away from (likely due to the
  first demo of it I saw and how badly that went)
  
  Puppet seems to me a good product for a large site with 1000 hosts.
  Not so much for ~20 or so. Plus puppet's language and configs get large
  and hard to keep track of - lots and lots of directory trees with many
  things mentioning other things. (Nagios has the same problem if you
  start keeping host, services, groups and commands in many different
  files)
  
  I've stumbled upon ansible, it seems much better than puppet for
  smallish sites with good odds I might even keep the whole thing in my
  head at any one time :-)
  
  Anyone care to share experiences?
  
  No experiences yet, but I have been looking for options to quickly and
  easily create (and remove) VM lab environments.
 
 Have you tried Vagrant?

Nope.

 I haven't tried it myself, I'm just reacting to the VM keyword ;-)

Yes, but it doesn't have support for Xen or KVM, and I'd need to write a custom 
provider to make that work.
That basically does what I am looking into, but with the products we work 
with, I need more custom activities in some of the VMs than are easily 
organised.

  I agree with your comments on Chef and Puppet.
  Ansible looks nice and seems easy to manage. I miss an option to store the
  configuration inside a database, but I don't see an issue adding the
  generation of the config-files from database tables to the rest of the
  environment I am working on.
 
 Ansible has an add-on called Tower that seems to do this. The marketing
 blurb implies you can use almost any storage backend you like from MySQL
 and PostGres to LDAP

Ok, from a quick scan of that page, it looked like a web frontend for some 
stuff. I'll definitely look into that part. The rest is more custom, so I 
might just generate the config files on the fly.

  I like that Ansible also seems to support MS Windows nodes, just too bad
  that requires enabling it after install. But with this, cloning VMs and
  changing the network configs afterwards seems easier to manage.
 
 I'm lucky, this is a Unix-only shop so I don't have to deal with Windows
 servers. The three managers who have Windows laptops for varying reasons
 have all been clearly told upfront they will support themselves and I
 ain't touching it :-)

Not all products we deal with run on non-MS Windows systems, so we are sort-of 
stuck with it. They only run inside VMs that are only accessible via the LAB 
network. Which means, no access to the internet unless specifically allowed. 
(The host and port on the internet needs to be known prior to allowing access)

--
Joost



Re: [gentoo-user] SCIRE Project

2007-02-13 Thread Duane Griffin

On 13/02/07, Daniel van Ham Colchete [EMAIL PROTECTED] wrote:

Hello everyone

Here at my company we are going to start deploying Gentoo Linux for our
customers. Every server will have the very same installed packages,
the very same USE flags, the very same CFLAGS; only a few configurations
will differ.

I would like the deployment and maintenance to be done as easily as
possible because this project needs to be scalable to more than 100
servers. Although we are going to install only 10 servers in the
beginning, my boss says that I should be prepared for this number to
grow.


I don't know anything about SCIRE but you may want to take a look at puppet:
http://reductivelabs.com/projects/puppet/index.html

There are ebuilds available from bugs.gentoo.org.

Cheers,
Duane.

--
I never could learn to drink that blood and call it wine - Bob Dylan
--
gentoo-user@gentoo.org mailing list



Re: [gentoo-user] Which network monitoring?

2011-04-03 Thread kashani

On 4/3/2011 7:10 PM, Pandu Poluan wrote:

Hello users!

I am transitioning my infrastructure back-ends from Windows to Gentoo
Linux. The next server to be transitioned is our infrastructure
monitoring server.

Currently, we're using WebWatchBot. Its abilities that we use are:
- Monitoring Internet connection up/down (we have 4 Internet connections)
- Monitoring website (which we host on a 3rd party webhosting) by
searching for a keyword using HTTP
- Monitoring free space on other servers (mostly Windows-based, thus
we use WMI)
- Monitoring services on Windows-based servers (again, WMI)
- Sending alerts to selected groups (PICs) when failure exceeds a
threshold (e.g., Systems group will receive alerts for their database
servers, Infrastructure group will receive all alerts)

Can you recommend a suitable monitoring system for Gentoo?


	Nagios still works well for me. And it'll do some WMI stuff, IIRC. I've 
been using a combination of MySQL-backed Puppet with stored resources 
for system management. Then I push Nagios configs to the Nagios server via 
tags in Puppet. Still working to get it right, but it's about there. 
Next step is to get collectd working with Nagios as well.


kashani



Re: [gentoo-user] portage for chef-0.10.0

2011-06-28 Thread kashani

On 6/28/2011 5:00 AM, Alexey Melezhik wrote:

The current chef-client ebuild in portage is only for version 0.9.12
(according to http://packages.gentoo.org/package/app-admin/chef), while
version 0.10.0 of chef was released on May 02. When will an ebuild for
chef-client version 0.10.0 be ready?

Thank you.



	As the others have pointed out it's coming, but in the short term you 
can always gem install directly and continue to use the init scripts 
that shipped with the portage package. I do the same on Ubuntu w/ Puppet.


kashani



Re: [gentoo-user] NFS tutorial for the brain dead sysadmin?

2014-07-27 Thread Stefan G. Weichinger
On 27.07.2014 18:25, Stefan G. Weichinger wrote:

 Only last week I re-attacked this topic as I start using puppet here to
 manage my systems ... and one part of this might be sharing /usr/portage
 via NFSv4. One client host mounts it without a problem, the thinkpads
 don't do so ... just another example ;-)

As so often ... my fault: thinkpads did have NFSv4 in the kernel, but no
nfs-utils installed ... ;-)

sorry, S





Re: [gentoo-user] [Extremely OT] Ansible/Puppet replacement

2015-01-27 Thread Alan McKinnon
On 27/01/2015 10:49, Tomas Mozes wrote:
 I haven't tested it yet, however I like the minimalistic syntax.
 
 As an ansible user - do you plan to allow using default values for
 modules and/or variables?


+1 for that.

I'm also a happy ansible user with zero plans to change, but I can't
imagine a deployment tool without sane rational explicit defaults. A
whole host of problems simply stop being problems if that feature is
available.
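For a sense of what "default values" buy you in ansible: each role can ship a defaults/main.yml whose values apply unless inventory or play variables override them. A minimal, hypothetical layout follows; all names are invented for illustration:

```shell
# Build a toy role skeleton with a defaults file (illustrative names).
cd "$(mktemp -d)"                      # scratch area for the sketch
mkdir -p roles/web/defaults roles/web/tasks
cat > roles/web/defaults/main.yml <<'EOF'
# lowest-precedence values; inventory/group/host vars override them
http_port: 80
worker_count: 4
EOF
cat > roles/web/tasks/main.yml <<'EOF'
- name: show the effective port
  debug:
    msg: "listening on {{ http_port }}"
EOF
```

A play applying this role would then get http_port=80 everywhere except on hosts that explicitly override it.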

-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] preventing keyboard layout files from being overritten during system upgrades

2015-12-04 Thread Alec Ten Harmsel
On Fri, Dec 04, 2015 at 01:55:30PM +0200, gevisz wrote:
> 
> So, my main question is: how can I ensure that the already
> edited /usr/share/X11/xkb/symbols/ru file will not be overwritten
> during the next system update? Thank you.
> 

Use a configuration management tool like puppet or ansible. It will take
a small amount of initial investment to set up, but then any overwritten
configuration files can be easily added back by running the
configuration management tool.

Alec



Re: [gentoo-user] Managing multiple Gentoo systems

2011-07-07 Thread kashani

On 7/2/2011 3:14 PM, Grant wrote:

After a frustrating experience with a Linksys WRT54GL, I've decided to
stick with Gentoo routers.  This increases the number of Gentoo
systems I'm responsible for and they're nearing double-digits.  What
can be done to make the management of multiple Gentoo systems easier?
I think identical hardware in each system would help a lot but I'm not
sure that's practical.  I need to put together a bunch of new
workstations and I'm thinking some sort of server/client arrangement
with the only Gentoo install being on the server could be appropriate.

- Grant



	You may want to look at something like a config management system. I'm 
using Puppet these days, but Gentoo support isn't spectacular. It would 
be a bit complex to have Puppet install the packages with the correct 
USE flags. However you could use Puppet to manage all the text files and 
then manage the packages somewhat manually.


Here's a snippet of a template for nrpe.cfg

<% if processorcount.to_i >= 12 then -%>
command[check_load]=<%= scope.lookupvar('nrpe::params::pluginsdir') %>/check_load -w 35,25,25 -c 35,25,25
<% elsif fqdn =~ /(.*)stage|demo(.*)/ then -%>
command[check_load]=<%= scope.lookupvar('nrpe::params::pluginsdir') %>/check_load -w 10,10,10 -c 10,10,10
<% else -%>
command[check_load]=<%= scope.lookupvar('nrpe::params::pluginsdir') %>/check_load -w 10,7,5 -c 10,7,5
<% end -%>

If you were managing a make.conf you could set -j<%= processorcount*2 %>
or whatever as well as pass in your own settings etc. Once you have 
things working it's pretty good at keeping your servers in sync and 
doing minor customization per server based on OS, hardware, IP, 
hostname, etc.
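The processorcount trick translates directly to shell if you are not using templates. A sketch, where the *2 factor is just the example's convention rather than a tuning recommendation:

```shell
# Sketch: derive a make.conf MAKEOPTS line from the online CPU count,
# much like a processorcount-based template would render it.
cpus=$(getconf _NPROCESSORS_ONLN)
makeopts="MAKEOPTS=\"-j$((cpus * 2))\""
echo "$makeopts"
```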


kashani




Re: [gentoo-user] Re: Managing multiple systems with identical hardware

2013-09-29 Thread Alan McKinnon
On 29/09/2013 20:36, Grant wrote:
 I'm slowly coming to the conclusion that you are trying to solve a problem
 with Gentoo that binary distros already solved a very long time ago. You
 are forcing yourself to become the sole maintainer of GrantOS and do all
 the heavy lifting of packaging. But Mint and friends already did all
 that work, and frankly, they are much better at it than you or I.

 I think it will work if I can find a way to manage the few differences
 above.  Am I overlooking any potential issues?

 I think Grant should look at CFEngine, if he is not familiar
 with it. It is the traditional 800-pound gorilla when it comes
 to managing many systems. Surely there are folks in those
 forums who can help Grant filter his ideas until they are
 ready for action. CFEngine is in portage.

 Alan may be right, as CFEngine (or whatever) may work better
 with a binary distribution and is probably more tightly integrated
 with something like Debian or such OSes.
 
 Can you give me a general idea of how my workflow might be with a
 solution like that?


It's not really possible to give a cut and dried answer to that, as all
three solutions (CFEngine, Puppet, Chef) try hard to integrate
themselves into your needs rather than get you to integrate into a rigid
code-imposed system.

I could say that you load a config into Puppet, define how it works and
where it must go, then tell puppet to do it and let you know the
results, but that doesn't tell you much.

I reckon you should pop over to puppet's website and start reading. As
you grasp the general ideas you'll find ways to make it work for you.




-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] another old box to update

2015-01-07 Thread Stefan G. Weichinger
On 08.01.2015 at 00:02, Alan McKinnon wrote:

 In my opinion, ansible almost always beats puppet.
 
 Puppet is a) complex b) built to be able to deal with vast enterprise
 setups and c) has a definition language I never could wrap my brains
 around. It always felt to me like puppet was never a good fit for my needs.
 
 Then ansible hit the scene. Now ansible is designed to do what sysadmins
 do anyway, to do it in mostly the same way, and to do it in an automated
 fashion. It fits nicely into my brain and I can read a playbook at a
 glance to know what it does.
 
 Ansible deploys stuff and makes it like how you want it to be. It is
 equally good at managing 1000 identical machines as 100 mostly the same
 ones, or 20 totally different ones. How it manages this miracle is not
 easy to explain, so I urge you to give it a test drive. Fool around with
 a bunch of VMs to get a feel of how it works. A tip: keep it simple, and
 use roles everywhere.
 
 Ansible works nicely with other tools like vagrant and docker which
 build and deploy your base images. Once you have a base machine with
 sshd running and an administrative login account, ansible takes over an
 manages the rest. It works really well.

Thanks for the pointer, Alan.

I will look into ansible asap ... installed it on my basement server
already, but it's rather late in my $TIMEZONE ...

 On the business side of things, yes indeed you need to rationalize
 things and what you offer to customers. There comes a point where you
 the business grows and you just can't manage all these different things.
 Mistakes get made, SLAs slip, and everyone gets annoyed.

exactly. $NEXTSTEP.

 On how to track all that real-world data you mentioned, I have a few
 rules of my own. Monitor and track everything I can, get and store as
 much info out of logs as I can. All the info you need is in there
 somewhere. But how to get it is a problem highly specific to your
 business. Maybe start some new threads each one with a specific
 question, and watch for common solutions in the answers?

will do as soon as I am there, yes.

S




Re: [gentoo-user] maintaining clones

2009-08-02 Thread Ajai Khattri

On Fri, 31 Jul 2009, Dan Farrell wrote:


hmm... network booting?  network mounting?  install packages once on
one system, share them with everyone.  Share passwd/shadow files and
the like manually, or symlink them to skeletal versions symlinked to
somewhere that can be obscured and replaced by a network boot.  you
could even boot them from thumb drives or cds.

of course, it would be a good bit of work to configure initially,
and might not go without a hitch.


For configuration, you may want to look at something like puppet to manage 
that. Your build machine would be the puppetmaster and keep the other 
machines' configs up-to-date.




--
A



Re: [gentoo-user] core i5

2010-06-24 Thread Stefan G. Weichinger
On 24.06.2010 05:04, kashani wrote:

 That works. :-) I was doing a fair amount of rpm building, svn to
 git with large trees, kickstart, Mysql, and Puppet work at a job a few
 months ago which was hitting the host fairly hard. Between the above and
 Outlook getting an extra drive to isolate the host OS from the VMs was a
 requirement. Much smoother after that.

I always change my mind between having the VM-files on the local RAID1
or store them in the RAID1 in the basement and mount it via NFSv4 ...
much RAM in the host helps in any way.



Re: [gentoo-user] portage for chef-0.10.0

2011-06-29 Thread Alexey Melezhik

yeah, gem is okay, but portage is a more proper way for our admins (((:

kashani kashani-l...@badapple.net wrote in his message of Tue, 28 Jun
2011 20:33:07 +0400:



On 6/28/2011 5:00 AM, Alexey Melezhik wrote:

The current chef-client ebuild in portage is only for version 0.9.12
(according to http://packages.gentoo.org/package/app-admin/chef), while
version 0.10.0 of chef was released on May 02. When will an ebuild for
chef-client version 0.10.0 be ready?

Thank you.



	As the others have pointed out it's coming, but in the short term you  
can always gem install directly and continue to use the init scripts  
that shipped with the portage package. I do the same on Ubuntu w/ Puppet.


kashani







Re: [gentoo-user] Ansible, puppet and chef

2014-09-17 Thread Tomas Mozes

On 2014-09-17 10:08, Alan McKinnon wrote:


That's almost exactly the same setup I have in mind.

How complex do the playbooks get in real-life?


The common role has about 70 tasks. It does almost everything covered in
the handbook plus installs and configures additional stuff like postfix,
nrpe, etc. The dom0 role has 15 tasks including monitoring, xen, grub.
The domU role basically just configures rc.conf.

An actual web server with apache/php has just about 20 tasks. A load-balancer
with varnish/nginx/keepalived has just about the same. A database has about
30 tasks because it also configures database replication.



Re: [gentoo-user] [Extremely OT] Ansible/Puppet replacement

2015-01-27 Thread Alec Ten Harmsel

On 01/27/2015 11:33 AM, Alan McKinnon wrote:
 On 27/01/2015 10:49, Tomas Mozes wrote:
 I haven't tested it yet, however I like the minimalistic syntax.

 As an ansible user - do you plan to allow using default values for
 modules and/or variables?

 +1 for that.

 I'm also a happy ansible user with zero plans to change, but I can't
 imagine a deployment tool without sane rational explicit defaults. A
 whole host of problems simply stop being problems if that feature is
 available.


I'm curious, what exactly do you mean about default values? Is there a
small example you can give me? The tutorial on Ansible's website is a
little confusing.

Alec



Re: [gentoo-user] Madly flickering display

2016-09-23 Thread David Haller
Hello,

On Fri, 23 Sep 2016, Peter Humphrey wrote:
>I've broken this out from the thread it appeared in, "Problems with Xinerama".
>I found from Xorg.0.log that X11 wasn't finding an evdev module, even though
>I had INPUT_DEVICES="evdev" in make.conf. So I added USE=evdev to
>dev-qt/qtgui* and that created the module, even though kensington@gentoo
>said it had nothing to do with it.

Do you have x11-drivers/xf86-input-evdev installed?

# equery files  x11-drivers/xf86-input-evdev |grep '\.so'
/usr/lib64/xorg/modules/input/evdev_drv.so

That's the 'evdev' module that your X doesn't find.

-dnh

-- 
"I'm nobody's puppet!"-- Rygel XIV



Re: [gentoo-user] SCIRE Project

2007-02-16 Thread José González Gómez

2007/2/13, Daniel van Ham Colchete [EMAIL PROTECTED]:


Hello everyone

Here at my company we are going to start deploying Gentoo Linux for our
customers. Every server will have the very same installed packages,
the very same USE flags, the very same CFLAGS; only a few configurations
will differ.

I would like the deployment and maintenance to be done as easily as
possible because this project needs to be scalable to more than 100
servers. Although we are going to install only 10 servers in the
beginning, my boss says that I should be prepared for this number to
grow.

Yesterday I found about the SCIRE project that seems to solve my
problems easily. But it seems that the project's development is
stopped. Unfortunately, I don't know a thing about Python, so I can't
help. Do anyone know how is the project going? Are we going to have a
production usable release? If so, when? It's not like I'm pushing
anything, I just want to know if I can count on it or not.

Setting the project aside, I'm thinking about developing my own
installer to install a catalyst stage4 and reboot into a working Gentoo.
After that I'm thinking about using emerge with binary packages to
install updates automatically. What do you think? Will it work? Is it
possible to rollback an update if something goes wrong?



We're working on something similar using catalyst [1] to create a custom
livecd, quickstart [2] to automate installation of a basic working system
from that livecd, and puppet (already mentioned in the thread) to automate
administration from that point.


To solve the problem with incompatible configuration files, every time
I upgrade anything, a perl script will reconfigure the customer's
server.



I recommend using an existing solution (puppet, cfengine; there are others
out there) instead of developing a custom tool to keep configuration up to
date.

Best regards
Jose

[1] http://www.gentoo.org/proj/en/releng/catalyst/
[2] http://agaffney.org/quickstart/


Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-29 Thread Alan McKinnon
On 29/09/2013 20:31, Grant wrote:

[snip]

 There's one thing that we haven't touched on, and that's the hardware.
 Are they all identical hardware items, or at least compatible? Kernel
 builds and hardware-sensitive apps like mplayer are the top reasons
 you'd want to centralize things, but those are the very apps that will
 make your life miserable trying to find commonality that works in all
 cases. So do keep hardware needs in mind when making purchases.
 
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

OK


 
 Personally, I wouldn't do the building and pushing on my own laptop,
 that turns me inot the central server and updates only happen when I'm
 in the office. I'd use a central build host and my laptop is just
 another client. Not all that important really, the build host is just an
 address from the client's point of view
 
 I don't think I'm making the connection here.  The central server
 can't do any unattended building and pushing, correct?  So I would
 need to be around either way I think.
 
 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.

I see what you desire now - essentially you want to clone your laptop
(or big chunks of it) over to your other workstations.

No problem, just share your laptop's stuff with the workstations. Either
share it directly, or upload your laptop's configs and buildpkgs to a
central fileserver where the workstations can access them (it comes down
to the same thing really)
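The build-once, install-everywhere idea above can be modelled with nothing but tar. This is a toy sketch, not portage: in a real setup quickpkg or FEATURES=buildpkg produce the packages and emerge -K consumes them from a shared PKGDIR, and the function names below are invented:

```shell
# Toy model of build-once/install-everywhere: tar stands in for
# quickpkg on the build host and for emerge -K on the clients.
build_pkg() {    # pack a staged image directory into a tarball
    tar -C "$1" -czf "$2" .
}
install_pkg() {  # unpack that tarball onto a client's root
    mkdir -p "$2"
    tar -C "$2" -xzf "$1"
}
```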

 
 OK, I'm thinking over how much variation there would be from laptop to
 laptop:

 1. /etc/runlevels/default/* would vary of course.
 2. /etc/conf.d/net would vary for the routers and my laptop which I
 sometimes use as a router.
 3. /etc/hostapd/hostapd.conf under the same conditions as #2.
 4. Users and /home would vary but the office workstations could all be
 identical in this regard.

 Am I missing anything?  I can imagine everything else being totally
 identical.

 What could I use to manage these differences?

 I'm sure there are numerous files in /etc/ with small niggling
 differences, you will find these as you go along.

 In a Linux world, these files actually do not subject themselves to
 centralization very well, they really do need a human with clue to make
 a decision whilst having access to the laptop in question. Every time
 we've brain-stormed this at work, we end up with only two realistic
 options: go to every machine and configure it there directly, or put
 individual per-host configs into puppet and push. It comes down to the
 same thing, the only difference is the location where stuff is stored.
 
 I'm sure I will need to carefully define those config differences.
 Can I set up puppet (or similar) on my laptop and use it to push
 config updates to all of the other laptops?  That way the package I'm
 using to push will be aware of config differences per system and push
 everything correctly.  You said not to use puppet, but does that apply
 in this scenario?

My warning about using Puppet on Gentoo should have come with a
disclaimer: don't use puppet to make a Gentoo machine emerge packages
from source.

You intend to push binary packages always, where the workstation doesn't
have a choice in what it gets (you already decided that earlier). That
will work well and from your workstation's POV is almost identical to
how binary distros work.

 
 I'm slowly coming to the conclusion that you are trying to solve a problem
 with Gentoo that binary distros already solved a very long time ago. You
 are forcing yourself to become the sole maintainer of GrantOS and do all
 the heavy lifting of packaging. But Mint and friends already did all
 that work, and frankly, they are much better at it than you or I.
 
 Interesting.  When I switched from Windows about 10 years ago I had
 only a very brief run with Mandrake before I settled on Gentoo so I
 don't *really* know what a binary distro is about.  How would this
 workflow be different on a binary distro?

A binary distro would be the same as I described above. How those
distros work is quite simple - their packages are archives like
quickpkgs with pre- and post-install/uninstall scripts. These scripts do
exactly the same thing as the various phase functions in portage - they
define where to move files to, ownerships and permissions of them, and
maybe a migration script if needed.

The distro's package manager deals with all the details - you just tell
it what you want installed and it goes ahead and does it.

What the Puppet server does is tell the workstation it needs to install
package XYZ. Code on the workstation then runs the package manager to do
just that.

For config

Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-30 Thread Grant
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.

 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.

That sounds about right.

 To get a feel for how it works, visit puppet's web site and download
 some of the test appliances they have there and run them in vm software.
 Set up a server and a few clients, and start experimenting in that
 sandbox. You'll quickly get a feel for how it all hangs together (it's
 hard to describe in text how puppet gets the job done, so much easier to
 do it for real and watch the results)

Puppet seems like overkill for what I need.  I think all I really need
is something to manage config file differences and user accounts.  At
this point I'm thinking I shouldn't push packages themselves, but
portage config files and then let each laptop emerge unattended based
on those portage configs.  I'm going to bring this to the 'salt'
mailing list to see if it might be a good fit.  It seems like a much
lighter weight application.

I'm soaking up a lot of your time (again).  I'll return with any real
Gentoo questions I run into and to run down the final plan before I
execute it.  Thanks so much for your help.  Not sure what I'd do
without you. :)

- Grant



Re: [gentoo-user] using git to track (gentoo) server configs ?

2014-03-25 Thread yac
On Thu, 13 Feb 2014 17:01:47 +0100
Stefan G. Weichinger li...@xunil.at wrote:

 
 I happily use git for local repositories to track configs in /etc or
 for example, /root/bin or /usr/local/bin (scripts ..)
 
 There is also etckeeper, yes, useful as well.
 
 But I would like to have some kind of meta-repo for all the
 gentoo-servers I am responsible for ... some remote repo to pull from.
 
 Most files in /etc might be rather identical so it would make sense to
 only track the individual changes (saves space and bandwidth)
 
 Maybe it would be possible to use git-branches for each server?
 Does anyone of you already use something like that?
 What would be a proper and clever way to do that?
 
 Yes, I know, there is puppet and stuff ... but as far as I see this is
 overkill for my needs.
 
 I'd like to maintain some good and basic /etc, maybe plus
 /var/lib/portage/world and /root/.alias (etc etc ..) to be able to
 deploy a good and nice standardized gentoo server. Then adjust config
 at the customer (network, fstab, ...) and commit this to a central
 repo (on my main server at my office or so).
 
 Yes, rsyncing that stuff also works in a way ... but ... versioning is
 better.
 
 How do you guys manage this?
 
 Looking forward to your good ideas ;-)
 
 Regards, Stefan
 

You are probably looking for cfengine or puppet
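The branch-per-server idea from the question is cheap to prototype with plain git before committing to cfengine or puppet. A sketch, with the repo path, file name and branch names invented for illustration:

```shell
# Sketch: one repo; the default branch holds the shared baseline /etc
# files, and one branch per server holds only that host's deviations.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com   # commits need an identity
git config user.name  you
echo 'hostname="baseline"' > conf.d-hostname
git add conf.d-hostname
git commit -qm 'baseline config'
base=$(git rev-parse --abbrev-ref HEAD) # whatever the default branch is
git checkout -qb server-alpha           # per-host branch
echo 'hostname="alpha"' > conf.d-hostname
git commit -qam 'alpha: host-specific hostname'
```

Pulling the baseline branch on every host and merging it into each host's branch gives the "track only the individual changes" behaviour asked for above.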

---
Jan Matějka| Gentoo Developer
https://gentoo.org | Gentoo Linux
GPG: A33E F5BC A9F6 DAFD 2021  6FB6 3EBF D45B EEB6 CA8B




[gentoo-user] Re: Ansible, puppet and chef

2014-09-16 Thread James
Alec Ten Harmsel alec at alectenharmsel.com writes:

 We use bcfg2, and all I can say is to stay away. XML abuse runs rampant
 in bcfg2. From what I've heard from other professional sysadmins, Puppet
 is the favorite, but that's mostly conjecture.

Hi Alec!

  Anyone here used ansible 
  What are your thoughts?

I have no thoughts. I do see many, many new git repositories
that contain mesos and ansible. [1] So ansible must be cool?
Ansible is everywhere now. Already given up on the local cron_extended
effort? What, no Chronos?

  Anyone care to share experiences?

Hey, I was drunk OK? I thought this clustering for science was a
good thing, like getting a puppy and a new girlfriend all in the
same week. BOY was I tricked. Anyway, I know deep down inside you
are wanting a cluster where you work(?). Alec has one, I'm building
one, so come in, the water is, well, very wet and wild!


[1] https://github.com/AnsibleShipyard/ansible-mesos

https://github.com/mhamrah/ansible-mesos-playbook

http://blog.michaelhamrah.com/2014/06/setting-up-a-multi-node-mesos-cluster-running-docker-haproxy-and-marathon-with-ansible/

http://ops-school.readthedocs.org/en/latest/config_management.html

snip many more.


James







[gentoo-user] [Extremely OT] Ansible/Puppet replacement

2015-01-26 Thread Alec Ten Harmsel
Hi,

I've been working on my own replacement for Bcfg2 - bossman[1] - over
the past few months, and it's finally ready to be released in the wild.
I would be honored if anyone on this list who's thinking of trying
puppet, chef, ansible, bcfg2, etc. would try out bossman instead.
bossman has an incredibly simple syntax; no ruby DSLs or XML. My main
motivation for writing it was dealing with bcfg2's XML config on a daily
basis. Additionally, bossman has a (hopefully) great 'pretend' mode and
checks for a lot of errors.

If you are already using another solution and have some time to check
out bossman, I would love feedback. The only config manager I've used in
practice is bcfg2, so getting perspectives from those using other
solutions would be fantastic.

bossman is written in C99 and is built with CMake.

I don't recommend it for production deployments quite yet, but I plan on
actively working on it. It currently only supports pulling configuration
from a mounted filesystem (i.e. local disk, NFS, etc.), but HTTP support
(and a deployment tutorial/guide) will be added in v0.2.

I'm sorry to spam gentoo-user, but I'm not sure who else would be
interested in something like this. Also, feel free to email me with bugs
in the code or documentation, or open something in GitHub's issue tracker.

Alec

[1] https://github.com/trozamon/bossman/archive/v0.1.0.tar.gz

[2] https://github.com/trozamon/bossman-roles



Re: [gentoo-user] [Extremely OT] Ansible/Puppet replacement

2015-01-27 Thread Tomas Mozes

On 2015-01-26 16:30, Alec Ten Harmsel wrote:

Hi,

I've been working on my own replacement for Bcfg2 - bossman[1] - over
the past few months, and it's finally ready to be released in the wild.
I would be honored if anyone on this list who's thinking of trying
puppet, chef, ansible, bcfg2, etc. would try out bossman instead.
bossman has an incredibly simple syntax; no ruby DSLs or XML. My main
motivation for writing it was dealing with bcfg2's XML config on a daily
basis. Additionally, bossman has a (hopefully) great 'pretend' mode and
checks for a lot of errors.

If you are already using another solution and have some time to check
out bossman, I would love feedback. The only config manager I've used in
practice is bcfg2, so getting perspectives from those using other
solutions would be fantastic.

bossman is written in C99 and is built with CMake.

I don't recommend it for production deployments quite yet, but I plan on
actively working on it. It currently only supports pulling configuration
from a mounted filesystem (i.e. local disk, NFS, etc.), but HTTP support
(and a deployment tutorial/guide) will be added in v0.2.

I'm sorry to spam gentoo-user, but I'm not sure who else would be
interested in something like this. Also, feel free to email me with bugs
in the code or documentation, or open something in GitHub's issue tracker.


Alec

[1] https://github.com/trozamon/bossman/archive/v0.1.0.tar.gz

[2] https://github.com/trozamon/bossman-roles


I haven't tested it yet, however I like the minimalistic syntax.

As an ansible user - do you plan to allow using default values for 
modules and/or variables?




Re: [gentoo-user] ansible daemon

2017-11-18 Thread Alan McKinnon
On 18/11/2017 23:36, Damo Brisbane wrote:
> Hi,
> 
> I am wanting to have continuously running ansible daemon to push out
> desired state to some servers. I do not see such functionally covered
> within readme (https://wiki.gentoo.org/wiki/Ansible). Am I correct to
> assume that if I want to run ansible as a daemon, I will have to set up
> [if I want] *ansible user*, init.d/ansible rc script? 
> 
> Also note I haven't used Ansible in production - I am assuming that
> running as a daemon is best for this scenario.


You assume wrong. Ansible is not a daemon, it does not listen and cannot
be a daemon. When you need ansible to do something, you give it a play
to run and it does it. Then the play ends and the command quits. There
isn't really much scope for having ansible "continuously run", it does
not know when you have changed things that need updating - only you know
that.

I think you want Tower or AWX or even rundeck, those are
scheduling/controlling/orchestration wrappers that can fire off ansible
jobs.
As a last resort you can always add a cron to run an overall site.yml
play every X hours or so
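A sketch of that last-resort cron approach (the schedule, inventory path,
playbook name and log location are assumptions for illustration, not
anything from this thread):

```
# Re-apply the whole site playbook every 4 hours; all paths are hypothetical.
0 */4 * * *  ansible-playbook -i /etc/ansible/hosts /srv/ansible/site.yml >> /var/log/ansible-site.log 2>&1
```

As Alan notes, this only re-asserts the declared state on a timer; it does
not react to changes the way a real scheduler like Tower/AWX would.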


Are you coming from a puppet/salt/chef world? If so, the one thing to
always keep in mind is this:

Ansible is almost, but not quite, entirely unlike Puppet.

-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] another old box to update

2015-01-08 Thread Stefan G. Weichinger
On 08.01.2015 00:02, Alan McKinnon wrote:
 On 07/01/2015 22:30, Stefan G. Weichinger wrote:
 Am 07.01.2015 um 20:06 schrieb Tomas Mozes:

 Strange, I only have successful stories with upgrading old gentoo
 machines. If you have a machine which you update regularly then you know
 all the issues during the time and so upgrading per partes leads to no
 surprises but the same challenges you've handled before. But yes, it
 takes time.

 Moreover, if you use configuration management like Ansible, you can even
 automatically merge changes when applications ship new configuration.

 Thanks for that posting, it reminds me of some bigger issue I wanted to
 discuss here for quite a while now.

 Over the years I am now responsible for dozens of servers and VMs
 running gentoo linux ... and I wonder how to efficiently keep track of them.

 I learned my first steps with puppet and use it in a basic setup for my
 own machines in my LAN. It seems to work better for many identical
 servers, let's say in a hosting environment.

 The servers at my customers are somehow similar but not identical:

 different setups for services ... different update-cycles (which have to
 be synchronized and shortened as we have seen in this thread!) ...

 I look for a way of tracking all these systems:

 a) central database/repo with all the systems and how to access them:

  * unique system id
  * what IP, port, ssh-key, etc etc

 I use git for local tracking of /etc on most of my systems in the last
 years, but I never really came up with a clever way to
 centralize dozens of separate git-repos ... one repo per server pushed
 to one central git-home on one of my inhouse servers?

 b) in addition tracking of let's say rules or services:

  * which server runs e.g. apache? So if there is a new security warning
 out there for apache ... ask system which servers/customers would need
 that update?

 etc etc

 c) when was my last access to that server? Have I looked into it lately?
  
 (or more business-oriented:)
 Do I even have to / does the customer pay for that?)
 This should lead to some SLA-kind-of-thing, yes ... a bit off-topic for now.

 -

 Puppet is more oriented to push configs out to systems.

 Maybe a combination would apply ... puppet for building the basement,
 having stuff generalized (this path, that password/ssh-key )

 and some other components to track what has been done over time.

 I run OTRS  ( http://en.wikipedia.org/wiki/OTRS ) for my daily work and
 looked into their module ITSM (
 https://www.otrs.com/homepage/software/otrsitsm-features/ ) lately ...
 it allows to create configuration items (think: ITIL) etc, so far I
 think this is a bit of overkill and not really fitting the size of my
 business.

 I'd love to keep it simple and CLI-oriented:

 Gentoo allows to define (nearly?) everything via text-files, combined
 with the cleverness of git (and maybe puppet) this should give me a way of

 a) easily deploy new systems with configs according to some standards:
  I want these packages/users/paths/files ...

 b) track these systems: what boxes am I responsible for, what is out
 there and failing? ;-) (not talking monitoring here ... just what are my
 active systems out there)

 from there I should slowly get into defining new contracts with my
 clients including regular checks each 3 or 6 months ... what has to be
 done, are there any bigger updates to do (think udev, baselayout ...)
 and tell them if it is possible to update the box within a few hours in
 parallel to normal work or if we need a bigger maintenance window.

 ---

 I am sure there are many other gentoo-users out there with similar
 challenges to face. And I am looking forward to your thoughts,
 experiences and recommendations!
 
 
 In my opinion, ansible almost always beats puppet.
 
 Puppet is a) complex b) built to be able to deal with vast enterprise
 setups and c) has a definition language I never could wrap my brains
 around. It always felt to me like puppet was never a good fit for my needs.
 
 Then ansible hit the scene. Now ansible is designed to do what sysadmins
 do anyway, to do it in mostly the same way, and to do it in an automated
 fashion. It fits nicely into my brain and I can read a playbook at a
 glance to know what it does.
 
 Ansible deploys stuff and makes it like how you want it to be. It is
 equally good at managing 1000 identical machines as 100 mostly the same
 ones, or 20 totally different ones. How it manages this miracle is not
 easy to explain, so I urge you to give it a test drive. Fool around with
 a bunch of VMs to get a feel of how it works. A tip: keep it simple, and
 use roles everywhere.
 
 Ansible works nicely with other tools like vagrant and docker which
 build and deploy your base images. Once you have a base machine with
 sshd running and an administrative login account, ansible takes over and
 manages the rest. It works really well.

played around with ansible today and managed to get

Re: [gentoo-user] another old box to update

2015-01-07 Thread Alan McKinnon
On 07/01/2015 22:30, Stefan G. Weichinger wrote:
 Am 07.01.2015 um 20:06 schrieb Tomas Mozes:
 
 Strange, I only have successful stories with upgrading old gentoo
 machines. If you have a machine which you update regularly then you know
 all the issues during the time and so upgrading per partes leads to no
 surprises but the same challenges you've handled before. But yes, it
 takes time.

 Moreover, if you use configuration management like Ansible, you can even
 automatically merge changes when applications ship new configuration.
 
 Thanks for that posting, it reminds me of some bigger issue I wanted to
 discuss here for quite a while now.
 
 Over the years I am now responsible for dozens of servers and VMs
 running gentoo linux ... and I wonder how to efficiently keep track of them.
 
 I learned my first steps with puppet and use it in a basic setup for my
 own machines in my LAN. It seems to work better for many identical
 servers, let's say in a hosting environment.
 
 The servers at my customers are somehow similar but not identical:
 
 different setups for services ... different update-cycles (which have to
 be synchronized and shortened as we have seen in this thread!) ...
 
 I look for a way of tracking all these systems:
 
 a) central database/repo with all the systems and how to access them:
 
   * unique system id
   * what IP, port, ssh-key, etc etc
 
 I use git for local tracking of /etc on most of my systems in the last
 years, but I never really came up with a clever way to
 centralize dozens of separate git-repos ... one repo per server pushed
 to one central git-home on one of my inhouse servers?
 
 b) in addition tracking of let's say rules or services:
 
   * which server runs e.g. apache? So if there is a new security warning
 out there for apache ... ask system which servers/customers would need
 that update?
 
 etc etc
 
 c) when was my last access to that server? Have I looked into it lately?
   
 (or more business-oriented:)
 Do I even have to / does the customer pay for that?)
 This should lead to some SLA-kind-of-thing, yes ... a bit off-topic for now.
 
 -
 
 Puppet is more oriented to push configs out to systems.
 
 Maybe a combination would apply ... puppet for building the basement,
 having stuff generalized (this path, that password/ssh-key )
 
 and some other components to track what has been done over time.
 
 I run OTRS  ( http://en.wikipedia.org/wiki/OTRS ) for my daily work and
 looked into their module ITSM (
 https://www.otrs.com/homepage/software/otrsitsm-features/ ) lately ...
 it allows to create configuration items (think: ITIL) etc, so far I
 think this is a bit of overkill and not really fitting the size of my
 business.
 
 I'd love to keep it simple and CLI-oriented:
 
 Gentoo allows to define (nearly?) everything via text-files, combined
 with the cleverness of git (and maybe puppet) this should give me a way of
 
 a) easily deploy new systems with configs according to some standards:
   I want these packages/users/paths/files ...
 
 b) track these systems: what boxes am I responsible for, what is out
 there and failing? ;-) (not talking monitoring here ... just what are my
 active systems out there)
 
 from there I should slowly get into defining new contracts with my
 clients including regular checks each 3 or 6 months ... what has to be
 done, are there any bigger updates to do (think udev, baselayout ...)
 and tell them if it is possible to update the box within a few hours in
 parallel to normal work or if we need a bigger maintenance window.
 
 ---
 
 I am sure there are many other gentoo-users out there with similar
 challenges to face. And I am looking forward to your thoughts,
 experiences and recommendations!


In my opinion, ansible almost always beats puppet.

Puppet is a) complex b) built to be able to deal with vast enterprise
setups and c) has a definition language I never could wrap my brains
around. It always felt to me like puppet was never a good fit for my needs.

Then ansible hit the scene. Now ansible is designed to do what sysadmins
do anyway, to do it in mostly the same way, and to do it in an automated
fashion. It fits nicely into my brain and I can read a playbook at a
glance to know what it does.

Ansible deploys stuff and makes it like how you want it to be. It is
equally good at managing 1000 identical machines as 100 mostly the same
ones, or 20 totally different ones. How it manages this miracle is not
easy to explain, so I urge you to give it a test drive. Fool around with
a bunch of VMs to get a feel of how it works. A tip: keep it simple, and
use roles everywhere.

Ansible works nicely with other tools like vagrant and docker which
build and deploy your base images. Once you have a base machine with
sshd running and an administrative login account, ansible takes over and
manages the rest. It works really well.

On the business side of things, yes indeed you need to rationalize
things and what you offer

[gentoo-user] Re: Gentoo for many servers

2009-11-15 Thread Andreas Niederl
Alex Schuster wrote:
 Alan McKinnon wrote:
 
 clusterssh will let you log into many machines at once and run emerge
  -avuND world everywhere
 
 This is way cool. I just started using it on eight Fedora servers I am 
 administrating. Nice, now this is an improvement over my 'for $h in 
 $HOSTS; do ssh $h yum install foo; done' approach.

You could have a look at app-admin/puppet [1][2] which supposedly takes
care of these things.


[...]
 Now I am thinking about a Gentoo installation instead.
 
 Pros:
  - Continuous updates, no downtime for upgrading, only when I decide to 
 install a new kernel. This is really really cool. I fear the upgrade from 
 Fedora 10 to 12 which has to be done soon.

  - Some improvement in speed. Those machines do A LOT of numbercrunching, 
 with jobs often lasting for days, so even small improvements would be 
 nice.
  - Easier debugging. When things do not work, I think it's easier to dig 
 into the problem. No fancy, but sometimes buggy GUIs hiding basic 
 functionality.

These two things would probably be your best selling points for your idea.


  - Heck, Gentoo is _cooler_ than typical distributions. And emerging with 
 distcc on about 8*4 cores would be fun :)

Being 'cool' doesn't count, at least last time I looked.


  - I am probably the only one who can administrate them.

That is a huge disadvantage.


 Cons:
  - If something will not work with this not so common (meta)distribution, 
 people will say "always trouble with your Gentoo Schmentoo, it works fine 
 in Fedora". Fedora is more mainstream; if something does not work there, 
 then it's okay for people to accept it.
  - I fear that big packages like Matlab are made for and tested on the 
 typical distributions, and may have problems with the not-so-common 
 Gentoo. I think someone here just had such a problem with Mathematica 
 (which we do currently not use).
[...]

If you're using commercial software which is only supported by Redhat,
Novell, etc. then you should think twice about replacing it.

But I'm guessing that those packages don't have to be installed on every
machine.
So, I'd suggest that you use Gentoo on those boxes where you'd have the
biggest advantage using it and no or minimal disadvantages.


  - I am probably the only one who can administrate them. I think Gentoo is 
 easier to maintain in the long run, but only when you take the time to 
 learn it. With Fedora, you do not need much more than the 'yum install' 
 command. There is no need to read complicated X.org upgrade guides and 
 such.
[...]

Please do your colleagues and successors a favor and document your whole
setup really well.


Regards,
Andi

[1] http://reductivelabs.com/products/puppet/
[2] http://log.onthebrink.de/2008/05/using-puppet-on-gentoo.html



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-01 Thread Grant
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.

 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.

 That sounds about right.

 To get a feel for how it works, visit puppet's web site and download
 some of the test appliances they have there and run them in vm software.
 Set up a server and a few clients, and start experimenting in that
 sandbox. You'll quickly get a feel for how it all hangs together (it's
 hard to describe in text how puppet gets the job done, so much easier to
 do it for real and watch the results)

 Puppet seems like overkill for what I need.  I think all I really need
 is something to manage config file differences and user accounts.  At
 this point I'm thinking I shouldn't push packages themselves, but
 portage config files and then let each laptop emerge unattended based
 on those portage configs.  I'm going to bring this to the 'salt'
 mailing list to see if it might be a good fit.  It seems like a much
 lighter weight application.

 Two general points I can add:

 1. Sharing config files turns out to be really hard. By far the easiest
 way is to just share /etc but that is an all or nothing approach, and
 you just need one file to be different to break it. Like /etc/hostname

 You *could* create a share directory inside /etc and symlink common
 files in there, but that gets very tedious quickly.

 Rather go for a centralized repo solution that pushes configs out, you
 must just find the one that's right for you.

Does using puppet or salt to push configs from my laptop qualify as a
centralized repo solution?

 2. Binary packages are almost perfect for your needs IMHO, running
 emerge gets very tedious quickly, and your spec is that all workstations
 have the same USE. You'd be amazed how much time you save by doing this:

 emerge -b on your laptop and share your /var/packages
 emerge -K on the workstations when your laptop is on the network

 step 2 goes amazingly quickly - eyeball the list to be emerged, they
 should all be purple, press enter. About a minute or two per
 workstation, as opposed to however many hours the build took.
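Sketched as commands, the quoted two-step binpkg workflow looks roughly
like this (the shared PKGDIR path is an assumption; the thread's
/var/packages works the same way):

```
# On the build host: build normally, but also keep binary packages (-b).
emerge --verbose --update --deep --newuse --buildpkg @world    # i.e. emerge -vuDN -b @world

# Export PKGDIR (e.g. /var/packages) to the workstations
# over NFS, or over HTTP via PORTAGE_BINHOST.

# On each workstation: install strictly from those prebuilt binaries (-K).
emerge --verbose --update --deep --newuse --usepkgonly @world  # i.e. emerge -vuDN -K @world
```

With --usepkgonly, a workstation refuses to compile anything itself, which
is what makes the eyeball-the-purple-list-and-press-enter step so quick.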

The thing is my laptop goes with me all over the place and is very
rarely on the same network as the bulk of the laptop clients.  Most of
the time I'm on a tethered and metered cell phone connection
somewhere.  Build time itself really isn't a big deal.  I can have the
clients update overnight.  Whether the clients emerge or emerge -K is
the same amount of administrative work, I would think.

 3. (OK, three points). Share your portage tree over the network. No
 point in syncing multiple times when you actually just need to do it once.

Yep, I figure each physical location should designate one system to
host the portage tree and distfiles.

- Grant



Re: [gentoo-user] How to create my own linux distribution

2006-07-24 Thread Jose Gonzalez Gomez
2006/7/18, David Corbin [EMAIL PROTECTED]:
Not wanting to hijack the thread from the OP, but this subject interests me
for the following reason. I work on a software system where one customer has
about 1 systems at about 500 locations. Remote systems are categorized as
one of 3 types/configurations. Automated management of them is essential,
including upgrades, but seldom upgrades to the latest and greatest, as
stability is very important. Upgrades need to just work, and not require
manual intervention. Currently, they're all on Windows in one form or
another (ugh!). The systems in question have very limited capabilities, and
the people on site have very limited permissions. Other potential customers
have similar size systems, and I certainly expect someone to realize the
value of Linux for this. I'd like to have a solution in my mind when the
time comes. I've considered the idea of a custom distribution to do this.
There is no doubt in my mind that any such distribution would be based on an
existing one, with tweaks that deal with where updates come from, what
packages are available, etc. Gentoo or Debian are the two likely candidates.
I'm not *sure* a customized distribution is appropriate.

I think you should take a look at:
http://www.reductivelabs.com/projects/puppet/index.html
http://www.cfengine.org/

Best regards, Jose


Re: [gentoo-user] core i5

2010-06-23 Thread kashani

On 6/23/2010 1:56 PM, Stefan G. Weichinger wrote:

Am 23.06.2010 04:27, schrieb kashani:


 I updated from a Q6600 to an i7 860 recently. Not amazing speed
wise, but I can run 8 threads and use more than 8GB of RAM. The RAM was
the big thing for me. If you're planning to do a lot with VMs I'd
suggest at least an extra drive if not more if you can swing it.


You mean for storing the VMs?
I have two drives locally now, RAID1 mostly. And I also test storing VMs
on an nfsV4-storage via gigabit ethernet. Quite OK. And NFS-storage is
more quiet ;-)


	That works. :-) I was doing a fair amount of rpm building, svn-to-git 
conversion with large trees, kickstart, MySQL, and Puppet work at a job a few 
months ago, which was hitting the host fairly hard. Between the above and 
Outlook, getting an extra drive to isolate the host OS from the VMs was a 
requirement. Much smoother after that.


kashani



Re: [gentoo-user] Re: Lynx, Links, or Elinks?

2011-12-09 Thread David Haller
Hello,

On Fri, 09 Dec 2011, Hartmut Figge wrote:
Philip Webb:
Lynx : I've been using it daily since 1996 .

I have used it during my new installation on x86_64 to read the handbook
of Gentoo. Problem was that the lines there were too long to fit into
the 80x24 window on the console.

The lines were truncated and I had to guess what would be displayed
without truncation. ;) Maybe there is a way of horizontal scrolling
which I had not found.

links uses the [ and ] keys for horizontal scrolling. I also like how it
handles tables. But I recommend installing all four (or at least lynx,
links and w3m), as all have shortcomings/quirks and advantages (e.g. when
dumping html as text, results vary, and the options let you tune the
result).
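For the dump use case just mentioned, all four browsers accept a -dump
flag, so a quick way to compare their plain-text rendering of the same page
(the URL is just an example) is:

```
lynx   -dump http://example.org/ > page.lynx.txt
links  -dump http://example.org/ > page.links.txt
elinks -dump http://example.org/ > page.elinks.txt
w3m    -dump http://example.org/ > page.w3m.txt
```

Diffing the four output files makes the rendering differences easy to see.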

HTH,
-dnh

-- 
I'm nobody's puppet!-- Rygel XIV



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-01 Thread Grant
  Puppet seems like overkill for what I need.  I think all I really need
  is something to manage config file differences and user accounts.  At
  this point I'm thinking I shouldn't push packages themselves, but
  portage config files and then let each laptop emerge unattended based
  on those portage configs.  I'm going to bring this to the 'salt'
  mailing list to see if it might be a good fit.  It seems like a much
  lighter weight application.

 Two general points I can add:

 1. Sharing config files turns out to be really hard. By far the easiest
 way is to just share /etc but that is an all or nothing approach, and
 you just need one file to be different to break it. Like /etc/hostname

 You *could* create a share directory inside /etc and symlink common
 files in there, but that gets very tedious quickly.

 How about using something like unison? I've been using it for a while
 now to sync a specific subset of ~ between three computers.
 It allows for exclude rules for host-specific stuff.

I think what I'd be missing with unison is something to manage the
differences in those host-specific files.

- Grant



[gentoo-user] using git to track (gentoo) server configs ?

2014-02-13 Thread Stefan G. Weichinger

I happily use git for local repositories to track configs in /etc or for
example, /root/bin or /usr/local/bin (scripts ..)

There is also etckeeper, yes, useful as well.

But I would like to have some kind of meta-repo for all the
gentoo-servers I am responsible for ... some remote repo to pull from.

Most files in /etc might be rather identical so it would make sense to
only track the individual changes (saves space and bandwidth)

Maybe it would be possible to use git-branches for each server?
Does anyone of you already use something like that?
What would be a proper and clever way to do that?

Yes, I know, there is puppet and stuff ... but as far as I see this is
overkill for my needs.

I'd like to maintain some good and basic /etc, maybe plus
/var/lib/portage/world and /root/.alias (etc etc ..) to be able to
deploy a good and nice standardized gentoo server. Then adjust config at
the customer (network, fstab, ...) and commit this to a central repo (on
my main server at my office or so).

Yes, rsyncing that stuff also works in a way ... but ... versioning is
better.

How do you guys manage this?

Looking forward to your good ideas ;-)

Regards, Stefan



Re: [gentoo-user] NFS tutorial for the brain dead sysadmin?

2014-07-27 Thread Stefan G. Weichinger
Am 26.07.2014 04:47, schrieb walt:

 So, why did the broken machine work normally for more than a year
 without rpcbind until two days ago?  (I suppose because nfs-utils was
 updated to 1.3.0 ?)
 
 The real problem here is that I have no idea how NFS works, and each
 new version is more complicated because the devs are solving problems
 that I don't understand or even know about.

I second your search for understanding ... my various efforts to set up
NFSv4 for sharing stuff in my LAN also led to unstable behavior and
frustration.

Only last week I re-attacked this topic as I start using puppet here to
manage my systems ... and one part of this might be sharing /usr/portage
via NFSv4. One client host mounts it without a problem, the thinkpads
don't do so ... just another example ;-)

Additionally, in my context: using systemd ... so there are other
(different?) dependencies at work and services started.

I'd be happy to get that working in a reliable way. I don't remember
unstable behavior with NFS (v2 back then?) when we used it at a company
I worked for in the 90s.

Stefan






Re: [gentoo-user] Ansible, puppet and chef

2014-09-17 Thread Alan McKinnon
On 17/09/2014 14:46, Tomas Mozes wrote:
 On 2014-09-17 10:08, Alan McKinnon wrote:
 
 That's almost exactly the same setup I have in mind.

 How complex do the playbooks get in real-life?
 
 The common role has about 70 tasks. It does almost everything covered in
 the handbook plus installs and configures additional stuff like postfix,
 nrpe, etc. The dom0 role has 15 tasks including monitoring, xen, grub.
 The domU role basically just configures rc.conf.
 
 An actual web server with apache/php has just about 20 tasks. A
 load-balancer
 with varnish/nginx/keepalived has just about the same. A database has about
 30 tasks because it also configures database replication.
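For a sense of what such tasks look like, here is a minimal, invented
fragment of a role's task file (the package, template and handler names are
illustrative, not Tomas's actual configuration; portage is Ansible's Gentoo
package module):

```
# roles/common/tasks/main.yml (illustrative only)
- name: Install postfix
  portage:
    package: mail-mta/postfix
    state: present

- name: Deploy postfix main.cf from a template
  template:
    src: main.cf.j2
    dest: /etc/postfix/main.cf
  notify: restart postfix

- name: Ensure postfix is enabled and running
  service:
    name: postfix
    state: started
    enabled: yes
```

A "role with 70 tasks" is just a longer list of entries like these, which is
why the playbooks stay readable at a glance.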



That doesn't seem too bad - almost manageable :-)

-- 
Alan McKinnon
alan.mckin...@gmail.com




[gentoo-user] Re: [Extremely OT] Ansible/Puppet replacement

2015-01-28 Thread James
Alec Ten Harmsel alec at alectenharmsel.com writes:


 Assuming that disks are formatted, a stage3 has been freshly extracted,
 bossman is installed, and the role/config files are on a mounted
 filesystem, it should be similar to the role below:

I think the list needs to be expanded, generically first:
/var/lib/portage/world 
/etc/*
/usr/local/*

just to name a few to clone a system. I'll work on a comprehensive list
of required and optional items.

 Automating the bootstrapping of a node is reasonably complicated, even
 harder on Gentoo than on RHEL. This is the type of thinking I want to
 do, and I'm working on doing this with my CentOS box that runs ssh,
 Jenkins, postgres, and Redmine.


I'm going to work through the ansible gentoo install method previously
outlined. Once I try that for new installs, it should be easy to add
what a robust workstation has added since it was installed, as the second step
in the process. That way, part A of the install is a raw/virgin install
on new (wiped) hardware. Part B would be to upgrade that virgin install
to basically be a clone of an existing workstation. This all requires
similar if not identical hardware, which is a common occurrence in
large installation environments.






[gentoo-user] Re: [Extremely OT] Ansible/Puppet replacement

2015-01-27 Thread James
Alec Ten Harmsel alec at alectenharmsel.com writes:



 I'm sorry to spam gentoo-user, but I'm not sure who else would be
 interested in something like this. Also, feel free to email me with bugs
 in the code or documentation, or open something in GitHub's issue tracker.

One man's spam generates  maps for another.


So my map of todo on ansible is all about common gentoo installs. [1]
Let's take the first and easiest example: the clone. I have a Gentoo
workstation install that I want to replicate onto identical hardware (sort
of like a disk-to-disk dd install). 

So how would I impress the bossman by actually saving admin time
on how to use the bossman to create (install from scratch + pxe?)
a clone.


Gotta recipe for that using bossman?
Or is that an invalid direction for bossman?

curiously,
James


[1]
http://blog.jameskyle.org/2014/08/automated-stage3-gentoo-install-using-ansible/






Re: [gentoo-user] another old box to update

2015-01-07 Thread Stefan G. Weichinger
Am 07.01.2015 um 20:06 schrieb Tomas Mozes:

 Strange, I only have successful stories with upgrading old gentoo
 machines. If you have a machine which you update regularly then you know
 all the issues during the time and so upgrading per partes leads to no
 surprises but the same challenges you've handled before. But yes, it
 takes time.
 
 Moreover, if you use configuration management like Ansible, you can even
 automatically merge changes when applications ship new configuration.

Thanks for that posting, it reminds me of some bigger issue I wanted to
discuss here for quite a while now.

Over the years I am now responsible for dozens of servers and VMs
running gentoo linux ... and I wonder how to efficiently keep track of them.

I learned my first steps with puppet and use it in a basic setup for my
own machines in my LAN. It seems to work better for many identical
servers, let's say in a hosting environment.

The servers at my customers are somehow similar but not identical:

different setups for services ... different update-cycles (which have to
be synchronized and shortened as we have seen in this thread!) ...

I look for a way of tracking all these systems:

a) central database/repo with all the systems and how to access them:

* unique system id
* what IP, port, ssh-key, etc etc

I use git for local tracking of /etc on most of my systems in the last
years, but I never really came up with a clever way to
centralize dozens of separate git-repos ... one repo per server pushed
to one central git-home on one of my inhouse servers?

b) in addition tracking of let's say rules or services:

* which server runs e.g. apache? So if there is a new security warning
out there for apache ... ask system which servers/customers would need
that update?

etc etc

c) when was my last access to that server? Have I looked into it lately?

(or more business-oriented:)
Do I even have to / does the customer pay for that?)
This should lead to some SLA-kind-of-thing, yes ... a bit off-topic for now.

-

Puppet is more oriented to push configs out to systems.

Maybe a combination would apply ... puppet for laying the foundation,
keeping things generalized (this path, that password/ssh-key ...)

and some other components to track what has been done over time.

I run OTRS ( http://en.wikipedia.org/wiki/OTRS ) for my daily work and
lately looked into their ITSM module (
https://www.otrs.com/homepage/software/otrsitsm-features/ ) ... it allows
creating configuration items (think: ITIL) etc., but so far I think this
is a bit of overkill and does not really fit the size of my business.

I'd love to keep it simple and CLI-oriented:

Gentoo allows defining (nearly?) everything via text files; combined
with the cleverness of git (and maybe puppet) this should give me a way to:

a) easily deploy new systems with configs according to some standards:
I want these packages/users/paths/files ...

b) track these systems: what boxes am I responsible for, what is out
there and failing? ;-) (not talking monitoring here ... just what are my
active systems out there)
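For a), even a tiny idempotent script captures a "these packages/users/paths" standard and can be re-run safely; configuration-management tools are essentially a more robust version of this. A sketch (the package list and paths are invented for illustration; TARGET mimics portage's ROOT convention so the script can be dry-run into a scratch directory):

```shell
#!/bin/sh
set -e
TARGET="${TARGET:-./stage}"      # "/" on a real deployment
PACKAGES="app-admin/sudo dev-vcs/git net-misc/ntp"
PATHS="etc/local.d var/backups home/admin"

for p in $PATHS; do
    mkdir -p "$TARGET/$p"        # mkdir -p makes re-runs harmless
done
printf '%s\n' $PACKAGES > "$TARGET/etc/wanted-packages"
# A real run would then do something like:
#   xargs emerge --noreplace < "$TARGET/etc/wanted-packages"
```

`emerge --noreplace` keeps the step idempotent too: already-installed packages are simply recorded, not rebuilt.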

from there I should slowly get into defining new contracts with my
clients, including regular checks every 3 or 6 months ... what has to be
done, are there any bigger updates to do (think udev, baselayout ...),
and tell them whether it is possible to update the box within a few hours
in parallel to normal work or whether we need a bigger maintenance window.

---

I am sure there are many other gentoo-users out there with similar
challenges to face. And I am looking forward to your thoughts,
experiences and recommendations!

Best regards, Stefan




Re: [gentoo-user] Again a small embedded arch problem with Gentoo

2015-11-29 Thread Alec Ten Harmsel



On 2015-11-29 06:57, meino.cra...@gmx.de wrote:

Hi,

two "identical" (better read: expected to be identical... ;)


Just a quick suggestion - you should use ansible or puppet to keep them 
in sync.



Arietta
G25 tiny embedded systems have a Gentoo installed each. Both are
always updated at the same time.
On both I copied the same source of VIM (git repo) and tried to
update the local repo then.

One said:
Arietta G25 B:CVS-Archive/VIM>./update.sh
Already up-to-date.

(ok, nice and fine)

but the other said:
fatal: unable to access 'https://github.com/vim/vim.git/': SSL certificate 
problem: certificate has expired
[1]1983 exit 1 ./update.sh



What is the date on the board that failed? The only thing I can think of
is that the date is incorrect, and far enough off to cause an SSL error.
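The mechanism behind this: TLS validation compares the certificate's notBefore/notAfter window against the local clock, so a clock that is badly off fails with "certificate has expired" (or "not yet valid") even though the certificate itself is fine. A toy illustration of the comparison — the dates are invented, not github.com's real certificate window, and GNU `date -d` is assumed:

```shell
#!/bin/sh
# Simulate certificate-validity checking against a wrong system clock.
not_before=$(date -u -d '2015-01-01' +%s)
not_after=$(date -u -d '2016-01-01' +%s)
clock=$(date -u -d '2016-06-01' +%s)     # a clock drifted past notAfter

if [ "$clock" -lt "$not_before" ]; then
    status="certificate is not yet valid"    # clock stuck in the past
elif [ "$clock" -gt "$not_after" ]; then
    status="certificate has expired"
else
    status="ok"
fi
echo "$status"
```

Which of the two errors you see depends on where the clock landed relative to the window; either way the fix is the clock, not the certificate.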

Alec



Re: [gentoo-user] Again a small embedded arch problem with Gentoo

2015-11-29 Thread Meino . Cramer
Alec Ten Harmsel <a...@alectenharmsel.com> [15-11-29 13:16]:
> 
> 
> On 2015-11-29 06:57, meino.cra...@gmx.de wrote:
> >Hi,
> >
> >two "identical" (better read: expected to be identical... ;)
> 
> Just a quick suggestion - you should use ansible or puppet to keep them 
> in sync.
> 
> >Arietta
> >G25 tiny embedded systems have a Gentoo installed each. Both are
> >updated always at the same time.
> >On both I copied the same source of VIM (git repo) and tried to
> >update the local repo then.
> >
> >One said:
> >Arietta G25 B:CVS-Archive/VIM>./update.sh
> >Already up-to-date.
> >
> >(ok, nice and fine)
> >
> >but the other said:
> >fatal: unable to access 'https://github.com/vim/vim.git/': SSL 
> >certificate problem: certificate has expired
> >[1]1983 exit 1 ./update.sh
> >
> 
> What is the date on the board that failed? The only thing I can think 
> of is that
> the date is incorrect, and far enough off to cause an SSL error.
> 
> Alec
> 

Hi Alec,

oh YEAH! That's it! The ntp-sync command seems to have failed...
The system time said 19.Oct.2015 (which is not the usual 1.1.1970).
I resynced the time and the problem is gone!
Thanks! :)

Best regards,
Meino





Re: [gentoo-user] Re: Java 8 and remote access

2016-02-02 Thread Thomas Sigurdsen
On 02/02/2016 02:50 PM, Grant wrote:
>>>>>>>> I need to run a Java 8 app remotely.  Can this be done on Gentoo?
>>>>>
>> Bummer.  FYI guys, Amazon Workspaces, Amazon Appstream, and Microsoft
>> Azure RemoteApp kinda work the way I've described but they all have
>> limitations which exclude them from working for me in this case.  It
>> looks like I'll be admin'ing another remote Gentoo system for this.
>>
> 
> 
> Do any cloud VM providers have Gentoo as an OS option?  If not, is there a
> good one that lets you install your own OS?
> 
> BTW since Java is a VM I'm surprised there is no service that lets you just
> upload a Java app and run it remotely on the service without any OS
> management.  Am I missing anything there?
> 
> - Grant
> 

linode.com and kimsufi.com let you install Gentoo. There might be many
more.

You might want to look into something like cfengine, puppet or chef to
lessen your workload of running multiple machines.

I've used linode for a couple of years and it's never been any problem. I do
see that I'm probably paying for more than I need, but that's another story.



signature.asc
Description: OpenPGP digital signature


[gentoo-user] (SALT) Saltstack

2019-11-28 Thread james

Curiously,

Does anyone have any experience, tips, or comments on the use of Saltstack?

Gentoo-specific location:

https://docs.saltstack.com/en/latest/topics/installation/gentoo.html#post-installation-tasks

My specific (eventual) goal is to communicate with and manage a wide
variety of Gentoo systems, from servers & workstations to a myriad of
embedded and 5G minimal Gentoo systems, particularly those on embedded
processors with modest resources.
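For boards with modest resources it is worth knowing that salt can also drive hosts agentlessly over ssh, so nothing beyond sshd has to run on the target. A few illustrative invocations — the target globs, grain values, and the `baseline` state name are assumptions:

```shell
# Agentless mode: no minion daemon on the constrained boards.
salt-ssh 'arietta*' state.apply baseline

# With regular minions, grains allow querying/targeting by hardware:
salt -G 'osarch:armv5tel' test.ping
salt '*' grains.item num_cpus mem_total
```

The grain queries give you the "which of my devices are which" inventory view for free, which maps onto the graphical-overview goal above.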


An eventual framework where the devices can be graphically located and
data overlaid on different types of graphical maps, too.



It appears that some are using OpenStack and Ceph with Git, Ansible,
Puppet, Chef, or StackStorm for similar goals: a total management system
for all the microprocessors and sensors in their theater of
responsibility.


Some are rooting their cell phones to have a handheld device to
complement laptops and multi-monitor systems.



TIA for any feedback, suggestions, gotchas, or other information.

James




Re: [gentoo-user] Hows this for rsnapshot cron jobs?

2013-04-22 Thread Alan McKinnon
On 21/04/2013 22:49, Tanstaafl wrote:
 On 2013-04-21 4:32 PM, Alan McKinnon alan.mckin...@gmail.com wrote:
 On 21/04/2013 20:47, Tanstaafl wrote:
 30 20 1 * * root  rsnapshot -c /etc/rsnapshot/myhost1.conf monthly
 20 20 1 * * root  rsnapshot -c /etc/rsnapshot/myhost1.conf yearly
 
 Only the last line is wrong - your monthly and yearly are equivalent. To
 be properly yearly, you need a month value in field 4.
 
 Oh, right (I added that interval myself, rsnapshot only comes with the
 hourly, daily, weekly and monthly by default).
 
 So, if I wanted it to run at 10:20pm on Dec 31, it would be:
 
 20 22 31 12 *  root  rsnapshot -c /etc/rsnapshot/myhost1.conf yearly


Correct
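For reference, the field order in a system crontab like /etc/crontab — the monthly/yearly confusion above comes from leaving the month field as `*`:

```shell
# minute  hour  day-of-month  month  day-of-week  user  command
# (0-59) (0-23)    (1-31)    (1-12)  (0-7, 0 and 7 = Sunday)
#
# Any field left as '*' matches every value, so a job meant to run once
# a year must pin down both day-of-month AND month.
```

(Per-user crontabs edited with `crontab -e` use the same fields minus the user column.)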



 I'm not familiar with rsnapshot; I assume that package can deal with how
 many of each type of snapshot to retain in its conf file? I see no
 crons to delete out-of-date snapshots.
 
 Correct, rsnapshot handles this.
 
 And, more as a nitpick than anything else, I always recommend that when
 a sysadmin adds a root cronjob, use crontab -e so it goes in
 /var/spool/cron, not /etc/crontab. Two benefits:

 - syntax checking when you save and quit
 - if you let portage, package managers, chef, puppet or whatever manage
 your global cronjobs in /etc/crontab, then there's no danger that the system
 will trash the stuff that you added there manually.
 
 I prefer doing things manually... so, nothing else manages my cron jobs.
 
 That said, I prefer to do this 'the gentoo way'... so is crontab -e the
 gentoo way?


There's no gentoo way for this :-)

Admittedly, things have changed over the years, most distros now have
the equivalent of cron.daily etc that cron jobs get installed into,
leaving the main /etc/crontab as a place to put the lastrun logic. It
wasn't always like that though.

If you ever move to puppet or similar to do your configs you'll want to
revisit this. Meanwhile, as you do everything manually anyway, your
current method seems to work just fine for you.


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-30 Thread thegeezer
On 09/30/2013 06:31 PM, Grant wrote:
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.
 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.
 That sounds about right.

 To get a feel for how it works, visit puppet's web site and download
 some of the test appliances they have there and run them in vm software.
 Set up a server and a few clients, and start experimenting in that
 sandbox. You'll quickly get a feel for how it all hangs together (it's
 hard to describe in text how puppet gets the job done, so much easier to
 do it for real and watch the results)
 Puppet seems like overkill for what I need.  I think all I really need
 is something to manage config file differences and user accounts.  At
 this point I'm thinking I shouldn't push packages themselves, but
 portage config files and then let each laptop emerge unattended based
 on those portage configs.  I'm going to bring this to the 'salt'
 mailing list to see if it might be a good fit.  It seems like a much
 lighter weight application.

 I'm soaking up a lot of your time (again).  I'll return with any real
 Gentoo questions I run into and to run down the final plan before I
 execute it.  Thanks so much for your help.  Not sure what I'd do
 without you. :)

 - Grant

maybe someone could chip in re: experience with distributed compilation
and cached compiles?
https://wiki.gentoo.org/wiki/Distcc
http://ccache.samba.org/

this may be closer to what you are looking for ?
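Wiring distcc and ccache up on Gentoo is mostly a make.conf affair; a hedged sketch (the helper hostnames and job count are assumptions to adapt):

```shell
# /etc/portage/make.conf on each client:
FEATURES="distcc ccache"
MAKEOPTS="-j8"        # rough rule: about 2x the total cores across helpers

# Register the compile helpers once:
#   distcc-config --set-hosts "buildhost1 buildhost2 localhost"
```

Note this only spreads the compile load; it does not solve config distribution, so it complements rather than replaces the salt/puppet discussion above.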



Re: [gentoo-user] Recommendations for scheduler

2014-08-03 Thread Bruce Schultz
On 3 August 2014 10:08:39 PM AEST, Alan McKinnon alan.mckin...@gmail.com 
wrote:
On 03/08/2014 11:27, Bruce Schultz wrote:
 
 
 On 2 August 2014 5:10:43 AM AEST, Alan McKinnon
alan.mckin...@gmail.com wrote:
 On 01/08/2014 19:50, Сергей wrote:
 Also you can have a look at anacron.





 Unfortunately, anacron doesn't suit my needs at all. Here's how anacron
 works:

 this bunch of jobs will all happen today regardless of what time it
is.
 That's not what I need, I need something that has very little to do
 with
 time. Example:

 1. Start backup job on db server A
 2. When complete, copy backup to server B and do a test import
 3. If import succeeds, move backup to permanent storage and log the fact
 4. If import fails, raise an alert and trigger the whole cycle to start
 again at 1

 Meanwhile,

 1. All servers are regularly doing apt-get update and downloading .debs,
 and applying security packages. Delay this on the db server if a backup
 is in progress.

 Meanwhile there is the regular Friday 5am code-publish cycle and
 month-end finance runs - this is a DevOps environment.
 
 I'm not sure if it's quite what you have in mind, and it comes with a
bit of a steep learning curve, but cfengine might fit the bill.
 
 http://cfengine.com

Hi Bruce,

Thanks for the reply.

I only worked with cfengine once, briefly, years ago, and we quickly
decided to roll our own deployment solution to solve that very specific
vertical problem.


Isn't cfengine a deployment framework, similar in ideals to puppet and
chef?

I don't want to deploy code or manage state, I want to run code
(backups, database maintenance, repair of dodgy data in databases and
code publish in a devops environment)

Cfengine can run arbitrary commands at scheduled times, so it is capable as a
replacement for cron. It also has package management built in for your package
updates.

It is in the same vein as chef & puppet, but "deployment framework" is not the
way I would describe it. Deployment is only a subset of what you can do with
it.

Cfengine3 was a major rewrite over version 2. The community edition is open 
source and should be available in Debian. The gentoo ebuild is a bit out of 
date currently. It also comes as a supported enterprise version which adds some 
sort of framework around the core - I've never personally looked into the 
enterprise features though.

Bruce

-- 
:B



[gentoo-user] Biggest Fake Conference in Computer Science

2013-04-12 Thread nelsonsteves
We are researchers from different parts of the world and conducted a study on  
the world’s biggest bogus computer science conference WORLDCOMP 
( http://sites.google.com/site/worlddump1 ) organized by Prof. Hamid Arabnia 
from University of Georgia, USA.


We submitted a fake paper to WORLDCOMP 2011 and again (the 
same paper with a modified title) to WORLDCOMP 2012. This paper 
had numerous fundamental mistakes. Sample statements from 
that paper include: 

(1). Binary logic is fuzzy logic and vice versa
(2). Pascal developed fuzzy logic
(3). Object oriented languages do not exhibit any polymorphism or inheritance
(4). TCP and IP are synonyms and are part of OSI model 
(5). Distributed systems deal with only one computer
(6). Laptop is an example for a super computer
(7). Operating system is an example for computer hardware


Also, our paper did not express any conceptual meaning.  However, it 
was accepted both the times without any modifications (and without 
any reviews) and we were invited to submit the final paper and a 
payment of $500+ fee to present the paper. We decided to use the 
fee for better purposes than making Prof. Hamid Arabnia (Chairman 
of WORLDCOMP) rich. After that, we received few reminders from 
WORLDCOMP to pay the fee but we never responded. 


We MUST say that you should look at the above website if you have any thoughts 
to submit a paper to WORLDCOMP.  DBLP and other indexing agencies 
have stopped indexing WORLDCOMP’s proceedings since 2011 due to its fakeness. 
See http://www.informatik.uni-trier.de/~ley/db/conf/icai/index.html for one
of the conferences of WORLDCOMP and notice that there is no listing after
2010. See
http://sites.google.com/site/dumpconf for comments from well-known researchers 
about WORLDCOMP. If WORLDCOMP is not fake, then why did DBLP suddenly stop
listing the proceedings after 2010?


The status of your WORLDCOMP papers can be changed from “scientific” 
to “other” (i.e., junk or non-technical) at any time. See the comments 
http://www.mail-archive.com/tccc@lists.cs.columbia.edu/msg05168.html   
of a respected researcher on this. Better not to have a paper than 
having it in WORLDCOMP and spoil the resume and peace of mind forever!


Our study revealed that WORLDCOMP is a money making business, 
using University of Georgia mask, for Prof. Hamid Arabnia. He is throwing 
out a small chunk of that money (around 20 dollars per paper published 
in WORLDCOMP’s proceedings) to his puppet (Mr. Ashu Solo or A.M.G. Solo) 
who publicizes WORLDCOMP and also defends it at various forums, using 
fake/anonymous names. The puppet uses fake names and defames other conferences
to divert traffic to WORLDCOMP. He also makes anonymous phone calls and 
threatens the critiques of WORLDCOMP (see Item 7 in Section 5 of 
http://sites.google.com/site/dumpconf ).That is, the puppet does all 
his best to get a maximum number of papers published at WORLDCOMP to 
get more money into his (and Prof. Hamid Arabnia’s) pockets. 


Monte Carlo Resort (the venue of WORLDCOMP until 2012) has refused to 
provide the venue for WORLDCOMP’13 because of the fears of their image 
being tarnished due to WORLDCOMP’s fraudulent activities. WORLDCOMP’13 
will be held at a different resort.


WORLDCOMP will not be held after 2013. 


The paper submission deadline for WORLDCOMP’13 was March 18; it was
extended to April 6 and now to April 20 (it may be extended again), but
still there are no committee members, no reviewers, and there is no
conference Chairman. The only contact details available on WORLDCOMP’s
website is just an email address! Prof. Hamid Arabnia extends the deadline
to get more papers (meaning more registration fees into his pocket!).


Let us make a direct request to Prof. Hamid Arabnia: publish all reviews for 
all the papers (after blocking identifiable details) since 2000 conference. 
Reveal the names and affiliations of all the reviewers (for each year) 
and how many papers each reviewer had reviewed on average. We also request 
him to look at the Open Challenge at https://sites.google.com/site/moneycomp1 


Sorry for posting to multiple lists. Spreading the word is the only way to stop 
this bogus conference. Please forward this message to other mailing lists and 
people. 


We are shocked with Prof. Hamid Arabnia and his puppet’s activities 
http://worldcomp-fake-bogus.blogspot.com   Search Google using the 
keyword worldcomp fake for additional links.




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-01 Thread Alan McKinnon
On 01/10/2013 08:07, Grant wrote:
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.

 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.

 That sounds about right.

 To get a feel for how it works, visit puppet's web site and download
 some of the test appliances they have there and run them in vm software.
 Set up a server and a few clients, and start experimenting in that
 sandbox. You'll quickly get a feel for how it all hangs together (it's
 hard to describe in text how puppet gets the job done, so much easier to
 do it for real and watch the results)

 Puppet seems like overkill for what I need.  I think all I really need
 is something to manage config file differences and user accounts.  At
 this point I'm thinking I shouldn't push packages themselves, but
 portage config files and then let each laptop emerge unattended based
 on those portage configs.  I'm going to bring this to the 'salt'
 mailing list to see if it might be a good fit.  It seems like a much
 lighter weight application.

 Two general points I can add:

 1. Sharing config files turns out to be really hard. By far the easiest
 way is to just share /etc but that is an all or nothing approach, and
 you just need one file to be different to break it. Like /etc/hostname

 You *could* create a share directory inside /etc and symlink common
 files in there, but that gets very tedious quickly.

 Rather go for a centralized repo solution that pushes configs out, you
 must just find the one that's right for you.
 
 Does using puppet or salt to push configs from my laptop qualify as a
 centralized repo solution?


yes



 
 2. Binary packages are almost perfect for your needs IMHO, running
 emerge gets very tedious quickly, and your spec is that all workstations
 have the same USE. You'd be amazed how much time you save by doing this:

 emerge -b on your laptop and share your /var/packages
 emerge -K on the workstations when your laptop is on the network

 step 2 goes amazingly quickly - eyeball the list to be emerged, they
 should all be purple, press enter. About a minute or two per
 workstation, as opposed to however many hours the build took.
 
 The thing is my laptop goes with me all over the place and is very
 rarely on the same network as the bulk of the laptop clients.  Most of
 the time I'm on a tethered and metered cell phone connection
 somewhere.  Build time itself really isn't a big deal.  I can have the
 clients update overnight.  Whether the clients emerge or emerge -K is
 the same amount of admnistrative work I would think.


I see. So you give up the efficiency of binpkgs to get a system that at
least works reliably.

Within those constraints that probably is the best option.

 
 3. (OK, three points). Share your portage tree over the network. No
 point in syncing multiple times when you actually just need to do it once.
 
 Yep, I figure each physical location should designate one system to
 host the portage tree and distfiles.


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-27 Thread Grant
 I realized I only need two types of systems in my life.  One hosted
 server and bunch of identical laptops.  My laptop, my wife's laptop,
 our HTPC, routers, and office workstations could all be on identical
 hardware, and what better choice than a laptop?  Extremely
 space-efficient, portable, built-in UPS (battery), and no need to buy
 a separate monitor, keyboard, mouse, speakers, camera, etc.  Some
 systems will use all of that stuff and some will use none, but it's
 OK, laptops are getting cheap, and keyboard/mouse/video comes in handy
 once in a while on any system.

 Laptops are a good choice, desktops are almost dead out there, and thin
 clients nettops are just dead in the water for anything other than
 appliances and media servers

 What if my laptop is the master system and I install any application
 that any of the other laptops need on my laptop and push its entire
 install to all of the other laptops via rsync whenever it changes?
 The only things that would vary by laptop would be users and
 configuration.

 Could work, but don't push *your* laptop's config to all the other
 laptops. They end up with your stuff, which might not be what you want
 them to have. Rather have a completely separate area where you store
 portage configs, tree, packages and distfiles for laptops/clients and
 push from there.

I actually do want them all to have my stuff and I want to have all
their stuff.  That way everything is in sync and I can manage all of
them by just managing mine and pushing.  How about pushing only
portage configs and then letting each of them emerge unattended?  I
know unattended emerges are the kiss of death but if all of the
identical laptops have the same portage config and I emerge everything
successfully on my own laptop first, the unattended emerges should be
fine.

 I'd recommend, if you have a decent-ish desktop lying around, that you
 press it into service as your master build host. Yeah, it takes 10% longer
 to build stuff, but so what? Do it overnight.

Well, my goal is to minimize the number of different systems I
maintain.  Hopefully just one type of laptop and a server.

  Maybe puppet could help with that?  It would almost be
 like my own distro.  Some laptops would have stuff installed that they
 don't need but at least they aren't running Fedora! :)

 DO NOT PROVISION GENTOO SYSTEMS FROM PUPPET.

OK, I'm thinking over how much variation there would be from laptop to laptop:

1. /etc/runlevels/default/* would vary of course.
2. /etc/conf.d/net would vary for the routers and my laptop which I
sometimes use as a router.
3. /etc/hostapd/hostapd.conf under the same conditions as #2.
4. Users and /home would vary but the office workstations could all be
identical in this regard.

Am I missing anything?  I can imagine everything else being totally identical.

What could I use to manage these differences?
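Assuming the rsync-the-whole-image approach, the short difference list above maps naturally onto an exclude file, so the push never touches per-host state. A sketch (hostnames invented; always check with --dry-run first):

```shell
#!/bin/sh
# Protect exactly the per-host files/dirs listed above from the master push.
cat > ./rsync-excludes <<'EOF'
/etc/runlevels/default/
/etc/conf.d/net
/etc/conf.d/hostname
/etc/hostapd/hostapd.conf
/home/
EOF

# Illustrative push loop (do NOT run without --dry-run testing):
# for host in laptop1 laptop2; do
#     rsync -aHAX --delete --exclude-from=./rsync-excludes / "root@$host:/"
# done
```

Everything excluded here is then managed separately (by hand or by a small tool like salt), which keeps the "few differences" explicit and auditable.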

 Rather keep your laptop as your laptop with its own setup, and
 everything else with its own setup. You only need one small difference
 between what you want your laptop to have and everything else to have
 to crash that entire model.

I think it will work if I can find a way to manage the few differences
above.  Am I overlooking any potential issues?

- Grant



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-30 Thread Alan McKinnon
On 30/09/2013 19:31, Grant wrote:
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.

 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.
 
 That sounds about right.
 
 To get a feel for how it works, visit puppet's web site and download
 some of the test appliances they have there and run them in vm software.
 Set up a server and a few clients, and start experimenting in that
 sandbox. You'll quickly get a feel for how it all hangs together (it's
 hard to describe in text how puppet gets the job done, so much easier to
 do it for real and watch the results)
 
 Puppet seems like overkill for what I need.  I think all I really need
 is something to manage config file differences and user accounts.  At
 this point I'm thinking I shouldn't push packages themselves, but
 portage config files and then let each laptop emerge unattended based
 on those portage configs.  I'm going to bring this to the 'salt'
 mailing list to see if it might be a good fit.  It seems like a much
 lighter weight application.

Two general points I can add:

1. Sharing config files turns out to be really hard. By far the easiest
way is to just share /etc but that is an all or nothing approach, and
you just need one file to be different to break it. Like /etc/hostname

You *could* create a share directory inside /etc and symlink common
files in there, but that gets very tedious quickly.

Rather go for a centralized repo solution that pushes configs out, you
must just find the one that's right for you.

2. Binary packages are almost perfect for your needs IMHO, running
emerge gets very tedious quickly, and your spec is that all workstations
have the same USE. You'd be amazed how much time you save by doing this:

emerge -b on your laptop and share your /var/packages
emerge -K on the workstations when your laptop is on the network

step 2 goes amazingly quickly - eyeball the list to be emerged, they
should all be purple, press enter. About a minute or two per
workstation, as opposed to however many hours the build took.

3. (OK, three points). Share your portage tree over the network. No
point in syncing multiple times when you actually just need to do it once.
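Points 2 and 3 combine naturally into a binhost: the build host publishes its package directory over HTTP and every workstation pulls binaries and the tree from it. An illustrative make.conf sketch — the hostname, port, and paths are assumptions, and defaults differ between portage versions:

```shell
# Build host -- make.conf additions:
FEATURES="buildpkg"              # every emerge also writes a binary package
PKGDIR="/var/cache/binpkgs"

# Serve PKGDIR to the LAN, e.g.:
#   cd /var/cache/binpkgs && python -m http.server 8080

# Workstations -- make.conf:
PORTAGE_BINHOST="http://buildhost.lan:8080"
# Binary-only world update on a workstation:
#   emerge -uDNK @world
```

With PORTAGE_BINHOST set, `emerge -K` fetches matching binaries over the network, so the workstations do not even need the laptop's PKGDIR mounted.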


 
 I'm soaking up a lot of your time (again).  I'll return with any real
 Gentoo questions I run into and to run down the final plan before I
 execute it.  Thanks so much for your help.  Not sure what I'd do
 without you. :)

I'm sure Neil would step in if I'm hit by a bus
He'd say the same things, and use about 1/4 of the words it takes me ;-)


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] How to create my own linux distribution

2006-07-24 Thread Matthew R. King

Jose Gonzalez Gomez wrote:
2006/7/18, David Corbin [EMAIL PROTECTED]:


Not wanting to hijack the thread from the OP, but this subject interests
me for the following reason. I work on a software system where one
customer has about 1 systems at about 500 locations.  Remote systems are
categorized as one of 3 types/configurations.  Automated management of
them is essential, including upgrades, but seldom upgrades to the latest
and greatest, as stability is very important.  Upgrades need to just
work, and not require manual intervention.  Currently, they're all on
Windows in one form or another (ugh!).  The systems in question have
very limited capabilities, and the people on site have very limited
permissions.

Other potential customers have similar size systems, and I certainly
expect someone to realize the value of Linux for this.  I'd like to have
a solution in my mind when the time comes.

I've considered the idea of a custom distribution to do this.  There is
no doubt in my mind that any such distribution would be based on an
existing one, with tweaks that deal with where updates come from, what
packages are available, etc.  Gentoo or Debian are the two likely
candidates.  I'm not *sure* a customized distribution is appropriate.


I think you should take a look at:

http://www.reductivelabs.com/projects/puppet/index.html
http://www.cfengine.org/


Best regards
Jose

http://www.linuxfromscratch.org/
--
gentoo-user@gentoo.org mailing list



Re: [gentoo-user] [OT] (A nice illustration of Gentoo)

2012-02-16 Thread m...@trausch.us
On 02/15/2012 11:07 PM, Pandu Poluan wrote:
 Eh? You don't need to duck from me... I am one of those guys who are
 against initrd/initramfs :-P

I hate extra cogs in the system.  Simpler is better!

 That said, anyone read his description of other distros? Somehow I got
 the vibes that Gentoo's is the one where he didn't explicitly point out
 something bad; only a lamentation that he really doesn't want to
 spend too much time compiling everything.

That seems to be the general gist, yes.

 The blog writer's closing paragraph is so poetic it hurts; for I'm
 about to leave my beloved Gentoo servers behind...

I am actually considering using Gentoo on servers, myself.  I've been
using it on my desktop for a while.  The only thing that I am really
concerned about is that if I do use it on servers, I'm going to have to
find some way to gain better control over the local Portage tree.

What I really ought to be doing is looking at Portage way closer than I
do as an everyday user.  I just want to be absolutely sure that I don't
break anything on a server when I update.  But since I have recently
learned about and started learning Puppet, I think I might have a much
easier time mitigating the risks of change by having an environment that
is defined not in terms of the underlying distribution, but in terms of
the requirements of the servers.

--- Mike

-- 
A man who reasons deliberately, manages it better after studying Logic
than he could before, if he is sincere about it and has common sense.
   --- Carveth Read, “Logic”



signature.asc
Description: OpenPGP digital signature


Re: [gentoo-user] emerge --update : how to keep it going?

2012-12-02 Thread Alan McKinnon
On Sat, 01 Dec 2012 19:58:45 +
Graham Murray gra...@gmurray.org.uk wrote:

 Volker Armin Hemmann volkerar...@googlemail.com writes:
 
  --keep-going does not help you, if the emerge does not start
  because of missing dep/slot conflict/blocking/masking whatever... 
 
 Though it would be nice if there was some flag, probably mainly of use
 with either ' -u @world' or --resume, to tell portage to get on and
 merge what it can and leave any masked packages or those which would
 generate blockers or conflicts. 
 

That is a terribly bad idea, and you need to have a fairly deep
understanding of IT theory to see it (which is why so few people see
it). I don't know which camp you are in.

The command is to emerge world, and it's supposed to be determinate,
i.e. when it's ready to start you can tell what it's going to do, and
that should be what you told it to do, no more and no less[1]

the command is 
emerge world
not 
emerge the-bits-of-world-you-think-you-can-deal-with

If portage cannot emerge world and fully obey what root told it to do,
then portage correctly refuses to continue. It could not possibly be
any other way, as e.g. all automated build tools (puppet, chef and
friends, even flameeyes's sandbox) break horribly if you do it any
other way. Life is hard enough dealing with build failures without
adding "portage does something different from what it was told" into the mix.

[1] determinate excludes build failures, as those are not predictable.
Dep graph failures happen before the meaty work begins.



-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Hows this for rsnapshot cron jobs?

2013-04-21 Thread Tanstaafl

On 2013-04-21 4:32 PM, Alan McKinnon alan.mckin...@gmail.com wrote:

On 21/04/2013 20:47, Tanstaafl wrote:

30 20 1 * * root    rsnapshot -c /etc/rsnapshot/myhost1.conf monthly
20 20 1 * * root    rsnapshot -c /etc/rsnapshot/myhost1.conf yearly



Only the last line is wrong - your monthly and yearly are equivalent. To
be properly yearly, you need a month value in field 4.


Oh, right (I added that interval myself, rsnapshot only comes with the 
hourly, daily weekly and monthly by default).


So, if I wanted it to run at 8:20pm on Dec 31, it would be:

20 20 31 12 *  root    rsnapshot -c /etc/rsnapshot/myhost1.conf yearly
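For reference, vixie-cron reads the five time fields as minute, hour, day-of-month, month and day-of-week, so an entry that fires once a year at 8:20pm on 31 December looks like this sketch (system-crontab format, with the user field):

```
# min  hour  dom  month  dow   user   command
  20   20    31   12     *     root   rsnapshot -c /etc/rsnapshot/myhost1.conf yearly
```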


I'm not familiar with rsnapshot, I assume that package can deal with how
many of each type of snapshot to retain in its conf file? I see no
crons to delete out of date snapshots.


Correct, rsnapshot handles this.


And, more as a nitpick than anything else, I always recommend that when
a sysadmin adds a root cronjob, use crontab -e so it goes in
/var/spool/cron, not /etc/crontab. Two benefits:

- syntax checking when you save and quit
- if you let portage, package managers, chef, puppet or whatever manage
your global cronjobs in /etc/crontab, then there's no danger that system
will trash the stuff that you added there manually.


I prefer doing things manually... so, nothing else manages my cron jobs.

That said, I prefer to do this 'the gentoo way'... so is crontab -e the 
gentoo way?


;)



Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Neil Bothwick
On Mon, 1 Jul 2013 05:29:58 -0700, Grant wrote:

  It's a lot more work and doesn't cover everything. One of the
  advantages of a pull system like BackupPC is that the only work
  needed on the client is adding the backuppc user's key to authorized
  keys. Everything else is done by the server. If the server cannot
  contact the client, or the connection is broken mid-backup, it tries
  again. It also gives a single point of configuration. If you want to
  change the backup plan for all machines, you make one change on one
  computer.  
 
 If you have a crazy number of machines to back up, I could see
 sacrificing some security for convenience.  Still I would think you
 could use something like puppet to have the best of both worlds.  I
 have 5 machines and I think I can get it down to 3.

There is no sacrifice, you are running rsync as root on the client
either way. Alternatively, you could run rsyncd on the client, which
avoids the need for the server to be able to run an SSH session.
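The "only work needed on the client" part can also be locked down with a forced command in authorized_keys. This is a sketch with invented key material; the exact `rsync --server` argument string varies by rsync version and options, so capture it from a real run before trusting it:

```
# /root/.ssh/authorized_keys on the client, for the backup server's key.
# The forced command limits this key to a single rsync invocation on a
# single directory; the no-* options disable everything else SSH could do.
command="rsync --server --sender -logDtprze.iLsfx . /etc/",no-pty,no-agent-forwarding,no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAAC3...invented... backuppc@backupserver
```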

  It works well, save work and minimises disk space usage, especially
  with multiple similar clients. Preventing infiltration is simple as
  you don't need to open it to the Internet at all, the backup server
  can be completely stealthed and still do its job.  
 
 Obviously the backup server has to be able to make outbound
 connections in order to pull so I think you're saying it could drop
 inbound connections, but then how could you talk to it?  Do you mean a
 local backup server?

Yes, you talk to the server over the LAN, or a VPN. There need be no way
of connecting to it from outside of your LAN.


-- 
Neil Bothwick

There's a fine line between fishing and standing on the shore looking
like an idiot.




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-30 Thread Frank Steinmetzger
On Mon, Sep 30, 2013 at 09:31:18PM +0200, Alan McKinnon wrote:

  (or big chunks of it) over to your other workstations.
  
  Puppet seems like overkill for what I need.  I think all I really need
  is something to manage config file differences and user accounts.  At
  this point I'm thinking I shouldn't push packages themselves, but
  portage config files and then let each laptop emerge unattended based
  on those portage configs.  I'm going to bring this to the 'salt'
  mailing list to see if it might be a good fit.  It seems like a much
  lighter weight application.
 
 Two general points I can add:
 
 1. Sharing config files turns out to be really hard. By far the easiest
 way is to just share /etc but that is an all or nothing approach, and
 you just need one file to be different to break it. Like /etc/hostname
 
 You *could* create a share directory inside /etc and symlink common
 files in there, but that gets very tedious quickly.

How about using something like unison? I've been using it for a while
now to sync a specific subset of ~ between three computers.
It allows for exclude rules for host-specific stuff.
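A unison profile along these lines handles the subset-plus-excludes case; this is a sketch with invented hostnames and paths:

```
# ~/.unison/homesync.prf  (hypothetical profile, run with: unison homesync)
root = /home/me
root = ssh://otherbox//home/me
path = .config
path = Documents
# exclude rules for host-specific stuff:
ignore = Name *.cache
ignore = Path .config/monitors.xml
```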
-- 
Gruß | Greetings | Qapla’
Please do not share anything from, with or about me with any Facebook service.

No, you *can’t* call 999 now.  I’m downloading my mail.




Re: [gentoo-user] What's with foomatic-filters and cups-filters?

2014-06-12 Thread Frank Steinmetzger
On Sun, Jun 08, 2014 at 05:08:11PM -0500, Dale wrote:

 Every time I upgrade CUPS or hplip, I go to a Konsole and type in
 hp-setup as root.  A window pops up and I just set the printer up again,
 it's GUI based.  So far, that has worked.  Don't jinx it tho.  lol

 If needed, I go to my web browser to CUPS and delete the printer first.

I, too, have an HP printer (Laserjet 1000 from 2004, still with the original
toner). Back in the days printing worked simply with cups and foo2zjs. Then
along came hplip which drives me nuts nowadays:

It’s another icon in the tray for a function that I use once in a blue moon.

It needs some kind of binary plugin, but I don’t think it’s the printer
firmware, because hplip already installs that into /usr/share/. Recently
I had to download the plugin manually b/c a) it must be the same version as
hplip and there was an hplip upgrade, and b) my PC cannot get online right
now. The plugin’s URL was very generic, with no indication about a specific
printer model.

Then, nowadays, the system python is python 3. If that is the case, then the
plugin installer will fail because it is a shellscript with an embedded tar
which contains a python 2 script. And the GUI gives no clue to that cause of
failure.
--
Gruß | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

“I want to be free!” said the string puppet and cut its strings.




Re: [gentoo-user] Recommendations for scheduler

2014-08-01 Thread Alan McKinnon
On 01/08/2014 19:50, Сергей wrote:
 Also you can have a look at anacron.
 
 
 


Unfortunately, anacron doesn't suit my needs at all. Here's how anacron
works:

this bunch of jobs will all happen today regardless of what time it is.
That's not what I need, I need something that has very little to do with
time. Example:

1. Start backup job on db server A
2. When complete, copy backup to server B and do a test import
3. If import succeeds, move backup to permanent storage and log the fact
4. If import fails, raise an alert and trigger the whole cycle to start
again at 1
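The retry-on-failure part of steps 1-4 can be sketched in plain sh; `backup_db`, `copy_and_test_import` and `archive_backup` below are hypothetical placeholders for the real jobs:

```shell
#!/bin/sh
# retry MAX CMD...: rerun CMD until it succeeds or MAX attempts are used.
retry() {
    max=$1; shift
    n=1
    while ! "$@"; do
        [ "$n" -ge "$max" ] && return 1
        n=$((n + 1))
        sleep 1   # pause between attempts
    done
}

# The full cycle (steps 1-3); step 4 is then just: retry 5 cycle
cycle() {
    backup_db && copy_and_test_import && archive_backup
}

# Demo with a stub that fails twice, then succeeds on the third try:
echo 0 > /tmp/attempts.$$
flaky() {
    n=$(($(cat /tmp/attempts.$$) + 1))
    echo "$n" > /tmp/attempts.$$
    [ "$n" -ge 3 ]
}
retry 5 flaky && echo "succeeded on attempt $(cat /tmp/attempts.$$)"
rm -f /tmp/attempts.$$
```

The demo prints `succeeded on attempt 3`; what it cannot do, of course, is give the support crew the "picture" mentioned below.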

Meanwhile,

1. All servers are regularly doing apt-get update and downloading .debs,
and applying security packages. Delay this on the db server if a backup
is in progress.

Meanwhile there is the regular Friday 5am code-publish cycle and
month-end finance runs - this is a DevOps environment.

Yes, I know I can hack something together with bash scripts and cron
with a truly insane number of flag files. But this doesn't work for sane
definitions of work involving other people. I can't expect my support
crew to read bash scripts they found from crontabs and figure out what
they mean. They need a picture that shows what will happen when and what
the environment looks like.

So basically I need something to replace bash and cron the same way
puppet replaces scp and for loops.






-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Recommendations for scheduler

2014-08-03 Thread Alan McKinnon
On 03/08/2014 11:27, Bruce Schultz wrote:
 
 
 On 2 August 2014 5:10:43 AM AEST, Alan McKinnon alan.mckin...@gmail.com 
 wrote:
 On 01/08/2014 19:50, Сергей wrote:
 Also you can have a look at anacron.





 Unfortunately, anacron doesn't suit my needs at all. Here's how anacron
 works:

 this bunch of jobs will all happen today regardless of what time it is.
 That's not what I need, I need something that has very little to do
 with
 time. Example:

 1. Start backup job on db server A
 2. When complete, copy backup to server B and do a test import
 3. If import succeeds, move backup to permanent storage and log the
 fact
 4. If import fails, raise an alert and trigger the whole cycle to start
 again at 1

 Meanwhile,

 1. All servers are regularly doing apt-get update and downloading
 .debs,
 and applying security packages. Delay this on the db server if a backup
 is in progress.

 Meanwhile there is the regular Friday 5am code-publish cycle and
 month-end finance runs - this is a DevOps environment.
 
 I'm not sure if its quite what you have in mind, and it comes with a bit of a 
 steep learning curve, but cfengine might fit the bill.
 
 http://cfengine.com

Hi Bruce,

Thanks for the reply.

I only worked with cfengine once, briefly, years ago, and we quickly
decided to roll our own deployment solution to solve that very specific
vertical problem.


Isn't cfengine a deployment framework, similar in ideals to puppet and
chef?

I don't want to deploy code or manage state, I want to run code
(backups, database maintenance, repair of dodgy data in databases and
code publish in a devops environment)


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] [Extremely OT] Ansible/Puppet replacement

2015-01-27 Thread Alan McKinnon
On 27/01/2015 19:49, Alec Ten Harmsel wrote:
 
 On 01/27/2015 11:33 AM, Alan McKinnon wrote:
 On 27/01/2015 10:49, Tomas Mozes wrote:
 I haven't tested it yet, however I like the minimalistic syntax.

 As an ansible user - do you plan to allow using default values for
 modules and/or variables?

 +1 for that.

 I'm also a happy ansible user with zero plans to change, but I can't
 imagine a deployment tool without sane rational explicit defaults. A
 whole host of problems simply stop being problems if that feature is
 available.

 
 I'm curious, what exactly do you mean about default values? Is there a
 small example you can give me? The tutorial on Ansible's website is a
 little confusing.

When I saw your other reply to Tomas, I thought you might ask that
question :-)

What I was thinking of is defaults like you find in roles on galaxy. In
a sub-dir called defaults you find a file main.yml containing variables
defaults as key-value pairs. A good example is Stouts.openvpn, it has eg

openvpn_port: 1194
openvpn_proto: udp


If you don't define those vars yourself in the playbook, the role uses
those defaults, which is exactly what you want - they match upstream
default.

The location is very explicit, the defaults are in a named file in an
obvious location and there's no mysterious automagic.
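In role-layout terms the pattern looks like this sketch (role name and variables invented, modeled on the Stouts.openvpn example above):

```yaml
# roles/myvpn/defaults/main.yml -- lowest-precedence values,
# used only when nothing else defines the variable:
openvpn_port: 1194
openvpn_proto: udp
---
# playbook.yml -- overrides just the port; openvpn_proto still
# falls back to the role default:
- hosts: vpnservers
  roles:
    - { role: myvpn, openvpn_port: 443 }
```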

I definitely wasn't talking about crazy magic permission modes like the
suggestion you mentioned in the other mail.

A default mode is fine, but make it a variable called mode or umask
that we can see in a file and make it what we want. I think we do agree
on this :-)


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] [Extremely OT] Ansible/Puppet replacement

2015-01-27 Thread Alec Ten Harmsel

On 01/27/2015 03:49 AM, Tomas Mozes wrote:

 I haven't tested it yet, however I like the minimalistic syntax.

Thanks. Testing is kind of a high barrier; it took me hours to write a
role for jenkins.


 As an ansible user - do you plan to allow using default values for
 modules and/or variables?


I am not really a fan of default values; I prefer explicitness over
implicitness, which forces someone to have a complete knowledge of bossman. For example,
one of my coworkers pointed out to me that

file /etc/ssh/sshd_config

should install `/etc/ssh/sshd_config` with whatever owner/permissions it
has on disk in the configuration files directory. In other words, he
wanted the implicit defaults to make it easier. Personally, I think this
is harder because now I have to care about permissions of a file that is
being copied and writing a role; instead I like having explicit roles in
which it is mandatory to type

file /etc/ssh/sshd_config root:root 600

This makes it easier to see everything on the same terminal, and I don't
have to care about local permissions, etc. I'm trying to write a config
manager that has the fewest gotchas possible, and I think
that means that a role should explicitly describe the state a machine
should end up in.

I do not have variables, but I might add them if someone gives me a
compelling use case that cannot be done without them.

If this didn't answer your question, my bad. Thanks for the feedback. I
know a lot of the people on this list work full-time and have families;
there's always time to procrastinate on homework though ;)

Alec



Re: [gentoo-user] Re: Java 8 and remote access

2016-02-03 Thread J. Roeleveld
On Tuesday, February 02, 2016 03:59:35 PM Thomas Sigurdsen wrote:
> On 02/02/2016 02:50 PM, Grant wrote:
> >>>>>>>> I need to run a Java 8 app remotely.  Can this be done on Gentoo?
> >> 
> >> Bummer.  FYI guys, Amazon Workspaces, Amazon Appstream, and Microsoft
> >> Azure RemoteApp kinda work the way I've described but they all have
> >> limitations which exclude them from working for me in this case.  It
> >> looks like I'll be admin'ing another remote Gentoo system for this.
> > 
> > Do any cloud VM providers have Gentoo as an OS option?  If not, is there a
> > good one that lets you install your own OS?
> > 
> > BTW since Java is a VM I'm surprised there is no service that lets you
> > just
> > upload a Java app and run it remotely on the service without any OS
> > management.  Am I missing anything there?
> > 
> > - Grant
> 
> linode.com and kimsufi.com let you install gentoo. There might be many
> more.

Yes, plenty.
It depends where you need them to be located.

> You might want to look into something like cfengine, puppet or chef to
> lessen your workload of running multiple machines.

Or Ansible. There are some on this list that use that to maintain multiple 
Gentoo boxes.

> I've used linode for a couple years and its never been any problem, I do
> see that I'm probably paying for more than I need, but that's another story.

--
Joost



Re: [gentoo-user] Running Gentoo in VirtualBox

2017-12-31 Thread R0b0t1
On Sun, Dec 31, 2017 at 2:13 PM, Alan McKinnon <alan.mckin...@gmail.com> wrote:
> On 31/12/2017 21:40, the...@sys-concept.com wrote:
>> I'm using Gentoo as a server (so it runs 24/7) Apache, Asterisk, Hylafax
>> etc.
>>
>> What are my chances to run Gentoo as a VirtualBox?
>>
>> Installing Gentoo takes me 2-3 days (basic setup min., I don't do it
>> every month so I have to go through Gentoo handbook); to configure it
>> the way I want it takes another week or two.
>>
>> So I was thinking,  if I run Windows 10 and configure Gentoo as a
>> virtual box it might be easier to transfer it from one system to
>> another, in case there is a HD failure (like it just happened to me
>> yesterday).
>>
>> Any input will be appreciated.
>> I know I might have problem with Serial port and receiving faxes via
>> HylaFax as they are time sensitive.
>>
>
> Virtualization here will not solve your risk or effort exposure. It will
> increase it.
>

I agree: this way lies pain.

Firstly, if you must use Windows, I highly recommend using Hyper-V
over VirtualBox or VMWare/ESXi. It is free and has better hardware
configuration support.

Secondly, the backup scheme you are wanting to implement is easier and
more robustly done with LVM. Even if you don't want to deal with that,
rsync over SSH is secure and easy to set up. More complicated
solutions like BorgBackup may ultimately be the most appropriate.

Thirdly, save your configuration files so you do not need to reread
the documentation each time you install a system. Look at Gentoo stage
4 tarballs and programs like Ansible or Puppet. You might also
consider running Debian. Gentoo is nice, but not necessary or even
suitable for every use case.

Cheers,
 R0b0t1



Re: [gentoo-user] Can some config files be automatically protected from etc-update?

2023-04-17 Thread Frank Steinmetzger
Am Mon, Apr 17, 2023 at 12:28:01PM -0700 schrieb Mark Knecht:
> On Mon, Apr 17, 2023 at 11:26 AM Walter Dnes  wrote:
> >
> >   Now that the (no)multilib problem in my latest update has been solved,
> > I have a somewhat minor complaint.  Can I get etc-update to skip certain
> > files?  My latest emerge world wanted to "update"...
> >
> > 1) /etc/hosts (1)
> > 2) /etc/inittab (1)
> > 3) /etc/mtab (1)
> > 4) /etc/conf.d/consolefont (1)
> > 5) /etc/conf.d/hwclock (1)
> > 6) /etc/default/grub (1)
> > 7) /etc/ssh/sshd_config (1)
> >
> > ...hosts is critical for networking.  consolefont allows me to use the
> > true text console with a readable font, etc, etc.  I have my reasons
> > for making certain settings, and keeping them that way.
> >
> In my experience with all distros I go outside the distro for this
> sort of issue. Put a copy somewhere, write a little script that
> does a diff on the files you feel are important enough and run
> a cron job hourly that looks for any differences.

Isn’t that exactly what etc-update does? IIRC (my last Gentoo update was a 
few months ago), I select one of the files, and it lets me view a diff in 
vim (configurable) of my old version and the new one from the update. Then I 
can either merge the two files right in vim, or elect to keep the new or old 
file entirely.

-- 
Grüße | Greetings | Qapla’
Please do not share anything from, with or about me on any social network.

“I want to be free!” said the string puppet and cut its strings.




Re: [gentoo-user] Managing multiple systems with identical hardware

2013-10-01 Thread joost
Alan McKinnon alan.mckin...@gmail.com wrote:
On 30/09/2013 19:31, Grant wrote:
 Keeping all of the laptops 100% identical as far as hardware is
 central to this plan.  I know I'm setting myself up for big problems
 otherwise.

 I'm hoping I can emerge every package on my laptop that every other
 laptop needs.  That way I can fix any build problems and update any
 config files right on my own system.  Then I would push config file
 differences to all of the other laptops.  Then each laptop could
 emerge its own stuff unattended.

 I see what you desire now - essentially you want to clone your laptop
 (or big chunks of it) over to your other workstations.
 
 That sounds about right.
 
 To get a feel for how it works, visit puppet's web site and download
 some of the test appliances they have there and run them in vm software.
 Set up a server and a few clients, and start experimenting in that
 sandbox. You'll quickly get a feel for how it all hangs together (it's
 hard to describe in text how puppet gets the job done, so much easier to
 do it for real and watch the results)
 
 Puppet seems like overkill for what I need.  I think all I really need
 is something to manage config file differences and user accounts.  At
 this point I'm thinking I shouldn't push packages themselves, but
 portage config files and then let each laptop emerge unattended based
 on those portage configs.  I'm going to bring this to the 'salt'
 mailing list to see if it might be a good fit.  It seems like a much
 lighter weight application.

Two general points I can add:

1. Sharing config files turns out to be really hard. By far the easiest
way is to just share /etc but that is an all or nothing approach, and
you just need one file to be different to break it. Like /etc/hostname

You *could* create a share directory inside /etc and symlink common
files in there, but that gets very tedious quickly.

Rather go for a centralized repo solution that pushes configs out, you
must just find the one that's right for you.

2. Binary packages are almost perfect for your needs IMHO, running
emerge gets very tedious quickly, and your spec is that all workstations
have the same USE. You'd be amazed how much time you save by doing
this:

emerge -b on your laptop and share your /var/packages
emerge -K on the workstations when your laptop is on the network

step 2 goes amazingly quickly - eyeball the list to be emerged, they
should all be purple, press enter. About a minute or two per
workstation, as opposed to however many hours the build took.

3. (OK, three points). Share your portage tree over the network. No
point in syncing multiple times when you actually just need to do it
once.
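The make.conf side of points 2 and 3, as a sketch (paths are assumptions; -b and -K are --buildpkg and --usepkgonly):

```
# /etc/portage/make.conf on the build laptop:
FEATURES="buildpkg"      # every emerge also writes a binary package
PKGDIR="/var/packages"   # export this, plus the portage tree, over NFS

# On each workstation, with the laptop's PKGDIR mounted at the same path:
#   emerge -uDNvK @world   # install from binaries only, no compiling
```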


 
 I'm soaking up a lot of your time (again).  I'll return with any real
 Gentoo questions I run into and to run down the final plan before I
 execute it.  Thanks so much for your help.  Not sure what I'd do
 without you. :)

I'm sure Neil would step in if I'm hit by a bus
He'd say the same things, and use about 1/4 of the words it takes me
;-)


-- 
Alan McKinnon
alan.mckin...@gmail.com

Grant,

Additionally, you might want to consider sharing /etc/portage and 
/var/lib/portage/world (the file)
I do that between my build host and the other machines. (Along with the portage 
tree, packages and distfiles)

That way all workstations end up with the same packages each time you run 
emerge -vauDk world on them.

And like Alan said, it goes really quick.

--
Joost

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [gentoo-user] Cross system dependencies

2014-06-29 Thread J. Roeleveld
On Saturday, June 28, 2014 09:23:17 PM thegeezer wrote:
 On 06/28/2014 07:06 PM, J. Roeleveld wrote:
  On Saturday, June 28, 2014 01:39:41 PM Neil Bothwick wrote:
  On Sat, 28 Jun 2014 11:36:11 +0200, J. Roeleveld wrote:
  I need a way to add dependencies to services which are provided by
  different servers. For instance, my mail server uses DNS to locate my
  LDAP server which contains the mail aliases. All these are running on
  different machines. Currently, I manually ensure these are all started
  in the correct sequence, I would like to automate this to the point
  where I can start all 3 servers at the same time and have the different
  services wait for the dependency services to be available even though
  they are on different systems.
  
  All the dependency systems in the init-systems I could find are all
  based on dependencies on the same server. Does anyone know of something
  that can already provide this type of dependencies? Or do I need to
  write something myself?
  
  With systemd you can add ExecStartPre=/some/script to the service's unit
  file where /some/script waits for the remote services to become
  available,
  and possibly return an error if the service does not become available
  within a set time.
  
  That method works for any init-system and writing a script to check and if
  necessary fail is my temporary fall-back plan. I was actually hoping for a
  method that can be used to monitor availability and, if necessary, stop
  services when the dependencies disappear.
  
  --
  Joost
 
 the difficulty is in identifying failed services.
 local network issue / load issue could mean your services start bouncing.
 the best way is to have redundancy so it doesn't matter as much

I know that. A proper system for this would have a configurable number of 
retries with a wait-time in between.

 having said all of that::
 
 systemd will start servers and buffer network activity - how this works
 for non local services would be interesting to see.

It would, but I am not going to migrate my servers to something like systemd 
without a clear and proven advantage. For me, that currently does not exist.
It also would not work as not all the software I run will happily wait while 
the rest of the stack starts.
I would end up in a bigger mess thanks to timeout issues during startup.

 with openrc :
 you could on the DNS server have a service which is just a batch script
 that watches for a pid / program path in ps and outputs ACK or
 NAK to a file in an NFS share, say /nfs/monitoring/dns

Yes, but in order to access the NFS share, I need DNS to be running. Chicken-
egg problem.

 then on the mail server you could have a service that polls
 /nfs/monitoring/dns for NAK or ACK
 you can then choose to have this service directly start your dependent
 services, or if you adjust /etc/init.d/postfix to have depends =
 mymonitorDNS which is an empty shell of a service. your watchdog
 service could stop / start the empty shell of a script mymonitorDNS, and
 then postfix depends on mymonitorDNS
 this would save you from "I've just stopped the mail server for
 maintenance and my watchdog service has just restarted it due to a
 NAK-to-ACK event"

That is the problem I have with these watchdog services. During boot, I want 
it to wait. But it needs to understand not to start a service when I stopped 
it during runtime.
Otherwise it could prevent a clean shutdown as well...

 or...
 you could have a central master machine which has it's own services,
 watchdog and monitor... i.e. /etc/init.d/thepostfixserver start  /
 depends on thednsserver which just runs
 # ssh postfixserver '/etc/init.d/postfix start'
 
 or...
 puppet and it's kin

Last time I looked at puppet, it seemed too complex for what I need.
I will recheck it again.

Thanks,

Joost



Re: [gentoo-user] Odd portage / eix behavior

2008-08-09 Thread Eric Martin

Alan McKinnon wrote:

On Friday 08 August 2008 19:58:46 Eric Martin wrote:

On one of my boxes, eix shows every package as (*) which is testing for
my current arch but stable on some other.  emerge --info reports x86 as
my arch, so I don't know what the problem is.  I don't think it's a huge
problem as it's just an annoyance but I might be missing something.  I
don't know where to start on google / forums so I figured I'd start here.


I recall something similar happening to me a while ago, on a disconnected box 
that hadn't been updated for 6 months. I was getting weird status symbols just 
like you after a sync. In my case, an upgrade to the latest ~arch portage 
fixed it.


You seem to be running purely x86 right? I assume as a first step you have 
done all the sensible things - remerge latest stable portage, emerge --sync, 
checked /etc/portage/* for silly masks that you forgot about?





Yes to everything except the 'purely x86'.  I'm running ~x86 on a few things

dev-perl/Video-Frequencies  ~x86
dev-perl/Video-ivtv ~x86
media-tv/ivtv   ~x86
app-misc/lirc   ~x86
x11-drivers/xf86-video-ivtv ~x86
sys-power/powertop  ~x86
app-admin/puppet~x86
dev-ruby/facter ~x86
media-tv/mythtv ~x86

Portage is 2.1.4.4 which my other computers are running.  This was my 
local rsync mirror that sync'd every two days and my other machines 
sync'd off of it.


This is actually happening on two boxes, although I created the problem 
on the second.  It's my mythbox, and I cloned the box via rsync to make 
a myth-dev box so I can fix it without downtime.  My wife is ready to 
kill me every time I break the mythbox.  Portage didn't have the problem 
on myth-dev, and then I rsync'd over and the problem was created so I 
know it's some setting somewhere...


--
Eric Martin
Key fingerprint = D1C4 086E DBB5 C18E 6FDA  B215 6A25 7174 A941 3B9F





Re: [gentoo-user] emerge --update : how to keep it going?

2012-12-02 Thread kwkhui
On Sun, 2 Dec 2012 16:12:02 +0200
Alan McKinnon alan.mckin...@gmail.com wrote:

 On Sat, 01 Dec 2012 19:58:45 +
 Graham Murray gra...@gmurray.org.uk wrote:
 
  Volker Armin Hemmann volkerar...@googlemail.com writes:
  
   --keep-going does not help you, if the emerge does not start
   because of missing dep/slot conflict/blocking/masking whatever... 
  
  Though it would be nice if there was some flag, probably mainly of
  use with either ' -u @world' or --resume, to tell portage to get on
  and merge what it can and leave any masked packages or those which
  would generate blockers or conflicts. 
  
 
 That is a terribly bad idea, and you need to have a fairly deep
 understanding of IT theory to see it (which is why so few people see
 it). I don't know which camp you are in.
 
 The command is to emerge world, and it's supposed to be determinate,
 i.e. when it's ready to start you can tell what it's going to do, and
 that should be what you told it to do, no more and no less[1]
 
 the command is 
 emerge world
 not 
 emerge the-bits-of-world-you-think-you-can-deal-with
 
 If portage cannot emerge world and fully obey what root told it to do,
 then portage correctly refuses to continue. It could not possibly be
 any other way, as eg all automated build tools (puppet, chef and
 friends, even flameeyes's sandbox) break horribly if you do it any
 other way. Life is hard enough dealing with build failures without
 adding "portage does something different to what it was told" into the
 mix.
 
 [1] determinate excludes build failures, as those are not
 predictable. Dep graph failures happen before the meaty work begins.
 
 
 

While there are good reasons not to implement it in portage itself, you
can implement it with a bit of help from shell scripts telling portage
what to do.

Do an emerge -uDpv world, use sed or awk or whatever to replace the
beginning [ebuild ...] and whatever comes after the package
name-version, and finally loop emerge -1 =${package} for each package
in that list.  Now provided you discard the return value of emerge, if
such a ${package} gives you something that portage doesn't think is a
good idea (e.g. unsatisfiable dependencies), the loop will go on to the
next package instead of completely halting.

The shell script is thus left as an exercise.
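One version of that exercise, as a sketch: the sed pattern is an assumption about the `[ebuild ...]` line format, the heredoc-style sample stands in for real `emerge -uDpv world` output, and the actual emerge call stays commented out:

```shell
#!/bin/sh
# Extract category/name-version atoms from emerge's pretend output.
parse_atoms() {
    # Drop the "[ebuild ...]" status block, keep the first word after it,
    # and discard the USE flags / sizes that follow the name-version.
    sed -n 's/^\[ebuild[^]]*\] \([^ ]*\).*/\1/p'
}

sample='[ebuild  N    ] app-misc/foo-1.2.3  USE="doc -test" 123 KiB
[ebuild   R   ] dev-lang/python-3.11.8  USE="ssl"
Total: 2 packages'

echo "$sample" | parse_atoms | while read -r pkg; do
    echo "would run: emerge -1 =${pkg}"
    # emerge --oneshot "=${pkg}" || true   # discard the return value
done
```

With the sample above it prints one "would run:" line per [ebuild ...] entry and ignores everything else.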

The usual warning applies: "it can be done" doesn't necessarily mean it
is a good idea to do it.

Kerwin.




Re: [gentoo-user] Hows this for rsnapshot cron jobs?

2013-04-21 Thread Alan McKinnon
On 21/04/2013 20:47, Tanstaafl wrote:
 Ok, my goal is to keep 3 'snapshots' per day (11:30am, 2:30pm and
 5:30pm), 7 daily's (8:50pm), 4 weekly's (8:40pm), 12 monthly's (8:30pm),
 and 5 yearly's (8:20pm).
 
 My myhost1.conf has:
 
 intervalhourly  3
 intervaldaily   7
 intervalweekly  4
 intervalmonthly 12
 intervalyearly  5
 
 And my /etc/crontab now looks like:
 
 # for vixie cron
 # $Header:
 /var/cvsroot/gentoo-x86/sys-process/vixie-cron/files/crontab-3.0.1-r4,v 1.3
 2011/09/20 15:13:51 idl0r Exp $

 # Global variables
 SHELL=/bin/bash
 PATH=/sbin:/bin:/usr/sbin:/usr/bin
 MAILTO=root
 HOME=/

 # check scripts in cron.hourly, cron.daily, cron.weekly and cron.monthly
 59 *  * * *     root    rm -f /var/spool/cron/lastrun/cron.hourly
 9  3  * * *     root    rm -f /var/spool/cron/lastrun/cron.daily
 19 4  * * 6     root    rm -f /var/spool/cron/lastrun/cron.weekly
 29 5  1 * *     root    rm -f /var/spool/cron/lastrun/cron.monthly
 */10 * * * *    root    test -x /usr/sbin/run-crons && /usr/sbin/run-crons
 #
 # rsnapshot cronjobs
 #
 30 11,14,17 * * *   root    rsnapshot -c /etc/rsnapshot/myhost1.conf sync; rsnapshot -c /etc/rsnapshot/myhost1.conf hourly
 50 20 * * *     root    rsnapshot -c /etc/rsnapshot/myhost1.conf daily
 40 20 * * 6     root    rsnapshot -c /etc/rsnapshot/myhost1.conf weekly
 30 20 1 * *     root    rsnapshot -c /etc/rsnapshot/myhost1.conf monthly
 20 20 1 * *     root    rsnapshot -c /etc/rsnapshot/myhost1.conf yearly

Only the last line is wrong - your monthly and yearly are equivalent. To
be properly yearly, you need a month value in field 4.

I'm not familiar with rsnapshot, I assume that package can deal with how
many of each type of snapshot to retain in its conf file? I see no
crons to delete out of date snapshots.


And, more as a nitpick than anything else, I always recommend that when
a sysadmin adds a root cronjob, use crontab -e so it goes in
/var/spool/cron, not /etc/crontab. Two benefits:

- syntax checking when you save and quit
- if you let portage, package managers, chef, puppet or whatever manage
your global cronjobs in /etc/crontab, there's no danger that that system
will trash the stuff that you added there manually.

-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Grant
 I'm planning to rsync --fake-super the important files from each
 client to a particular folder on the backup server as an unprivileged
 user and then have the backup server run rdiff-backup locally to
 maintain a history of those files.

 How does that work with files that aren't world-readable?

The client can run rsync as root, the unprivileged user would be
writing on the backup server.  --fake-super writes all original
ownership/permissions to xattrs in the files.
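As a sketch of the push described above (host and paths are hypothetical), the client could run rsync as root while requesting --fake-super on the receiving side via rsync's --remote-option/-M flag; since the hosts are made up, the command is only assembled and printed here:

```shell
# Sketch only: host and destination path are hypothetical.
# The client runs rsync as root; -M--fake-super applies --fake-super on
# the RECEIVING side, so the unprivileged 'backup' user records
# ownership/permissions in user.* xattrs instead of needing root.
SRC="/etc/"
DEST="backup@backupserver:/srv/backups/client1/etc/"
CMD="rsync -a -M--fake-super $SRC $DEST"
# Print rather than execute, since the hosts are hypothetical:
echo "$CMD"
```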

 authorized_keys on the server
 would restrict the clients to a particular rsync command in a
 particular directory.  That way if the backup server is infiltrated,
 the clients aren't exposed in any way, and if a client is infiltrated,
 the only extra exposure is the rsync'ed copy of the files on the
 server which isn't a real vulnerability because of the rdiff-backup
 history.  I'd also like to have a secondary backup server pull those
 same rsync'ed files from the primary backup server and run its own
 rdiff-backup repository on them.  That way all copies of any system's
 backups are never made vulnerable by the break-in of a single system.

 Doesn't that compare favorably to a layout like backuppc's?

 It's a lot more work and doesn't cover everything. One of the advantages
 of a pull system like BackupPC is that the only work needed on the client
 is adding the backuppc user's key to authorized keys. Everything else is
 done by the server. If the server cannot contact the client, or the
 connection is broken mid-backup, it tries again. It also gives a single
 point of configuration. If you want to change the backup plan for all
 machines, you make one change on one computer.

If you have a crazy number of machines to back up, I could see
sacrificing some security for convenience.  Still I would think you
could use something like puppet to have the best of both worlds.  I
have 5 machines and I think I can get it down to 3.

 It works well, saves work and minimises disk space usage, especially with
 multiple similar clients. Preventing infiltration is simple, as you don't
 need to open it to the Internet at all; the backup server can be
 completely stealthed and still do its job.

Obviously the backup server has to be able to make outbound
connections in order to pull so I think you're saying it could drop
inbound connections, but then how could you talk to it?  Do you mean a
local backup server?

- Grant



Re: [gentoo-user] {OT} backups... still backups....

2013-07-01 Thread Grant
  It's a lot more work and doesn't cover everything. One of the
  advantages of a pull system like BackupPC is that the only work
  needed on the client is adding the backuppc user's key to authorized
  keys. Everything else is done by the server. If the server cannot
  contact the client, or the connection is broken mid-backup, it tries
  again. It also gives a single point of configuration. If you want to
  change the backup plan for all machines, you make one change on one
  computer.

 If you have a crazy number of machines to back up, I could see
 sacrificing some security for convenience.  Still I would think you
 could use something like puppet to have the best of both worlds.  I
 have 5 machines and I think I can get it down to 3.

 There is no sacrifice, you are running rsync as root on the client
 either way. Alternatively, you could run rsyncd on the client, which
 avoids the need for the server to be able to run an SSH session.

I think the sacrifice is that with the backuppc method, if someone
breaks into the backup server they will have read(/write) access to
the clients.  The method I'm describing requires more management if
you have a lot of machines, but it doesn't have the aforementioned
vulnerability.

The rsyncd option is interesting.  If you don't want to restore
directly onto the client, there are no SSH keys involved at all?
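For reference, the rsyncd variant needs no SSH keys when the server pulls over the rsync protocol; a minimal, hypothetical /etc/rsyncd.conf on the client might look like this (module name, path and address are illustrative):

```
# /etc/rsyncd.conf -- hypothetical read-only module the backup server pulls
uid = root              # needed to read non-world-readable files
gid = root

[etcbackup]
    path = /etc
    read only = yes
    hosts allow = 192.168.1.10   # the backup server only
```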

  It works well, saves work and minimises disk space usage, especially
  with multiple similar clients. Preventing infiltration is simple, as
  you don't need to open it to the Internet at all; the backup server
  can be completely stealthed and still do its job.

 Obviously the backup server has to be able to make outbound
 connections in order to pull so I think you're saying it could drop
 inbound connections, but then how could you talk to it?  Do you mean a
 local backup server?

 Yes, you talk to the server over the LAN, or a VPN. There need be no way
 of connecting to it from outside of your LAN.

To me it seems presumptuous to be sure a particular machine will never
be infiltrated to the degree that you're OK with such an infiltration
giving read(/write) access on every client to the infiltrator.

- Grant



Re: [gentoo-user] NFS tutorial for the brain dead sysadmin?

2014-07-27 Thread J. Roeleveld
On 27 July 2014 18:25:24 CEST, Stefan G. Weichinger li...@xunil.at wrote:
Am 26.07.2014 04:47, schrieb walt:

 So, why did the broken machine work normally for more than a year
 without rpcbind until two days ago?  (I suppose because nfs-utils was
 updated to 1.3.0 ?)
 
 The real problem here is that I have no idea how NFS works, and each
 new version is more complicated because the devs are solving problems
 that I don't understand or even know about.

I second your search for understanding ... my various efforts to set up
NFSv4 for sharing stuff in my LAN also led to unstable behavior and
frustration.

Only last week I re-attacked this topic as I start using puppet here to
manage my systems ... and one part of this might be sharing /usr/portage
via NFSv4. One client host mounts it without a problem, the thinkpads
don't do so ... just another example ;-)

Additionally, in my context: using systemd ... so there are other
(different?) dependencies at work and services started.

I'd be happy to get that working in a reliable way. I don't remember
unstable behavior with NFS (v2 back then?) when we used it at a company
I worked for in the 90s.

Stefan

I use NFS for filesharing between all wired systems at home.
Samba is only used for MS Windows and laptops.

A few things I always make sure of:
- One partition per NFS share
- No NFS share is mounted below another one
- I set the version to 3 on the clients
- I use LDAP for the user accounts to ensure the UIDs and GIDs are consistent.

NFSv4 requires all the exports to be under a single folder tree.
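That single-folder-tree (pseudo-root) constraint is usually met with bind mounts plus an fsid=0 export; a sketch with illustrative paths and network:

```
# /etc/fstab on the server: bind the real location under one export root
/usr/portage   /export/portage   none   bind   0 0

# /etc/exports: fsid=0 marks the NFSv4 pseudo-root
/export          192.168.1.0/24(ro,fsid=0,crossmnt,no_subtree_check)
/export/portage  192.168.1.0/24(rw,no_subtree_check)
```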

I haven't had any issues in the past 7+ years with this, and for the past 5+
years I have had portage, distfiles and packages shared.
/etc/portage is symlinked to an NFS share as well, allowing me to create binary
packages on a single host (inside a chroot) which are then used to update the
different machines.
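The build-host/client split described here can be sketched with two make.conf fragments (paths are illustrative; the shared directory would live on the NFS share):

```
# Build host make.conf: keep a binary package of everything it emerges
FEATURES="buildpkg"
PKGDIR="/mnt/nfs/packages"

# Client make.conf: point at the same package directory,
# then update with:  emerge --usepkg --update --deep @world
PKGDIR="/mnt/nfs/packages"
```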

If anyone wants a more detailed description of my setup, let me know and I will
try to write something up.

Kind regards

Joost

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: [gentoo-user] Re: [Extremely OT] Ansible/Puppet replacement

2015-01-27 Thread Alec Ten Harmsel

On 01/27/2015 10:34 AM, James wrote:
 Alec Ten Harmsel alec at alectenharmsel.com writes:



 I'm sorry to spam gentoo-user, but I'm not sure who else would be
 interested in something like this. Also, feel free to email me with bugs
 in the code or documentation, or open something in GitHub's issue tracker.
 One man's spam generates maps for another.


 So my map of todo on ansible is all about common gentoo installs. [1]
 Let's take the first and easiest example: the clone. I have a gentoo
 workstation install that I want to replicate onto identical hardware (sort
 of like a disk-to-disk dd install). 

 So how would I impress the bossman by actually saving admin time
 on how to use the bossman to create (install from scratch + pxe?)
 a clone.

Assuming that disks are formatted, a stage3 has been freshly extracted,
bossman is installed, and the role/config files are on a mounted
filesystem, it should be similar to the role below:

file /etc/portage/make.conf root:root 644
! emerge-webrsync
! emerge --sync

file /etc/locale.gen root:root 600
! locale-gen

pkg sys-kernel/gentoo-sources
file /usr/src/linux/.config root:root 644
! make -C /usr/src/linux all modules_install install

pkg sys-boot/grub
! grub-install /dev/sda # I can't remember all the options needed here
file /etc/default/grub
! grub-mkconfig -o /boot/grub/grub.cfg

# Generating /etc/fstab using something similar to Arch's `genfstab`
# would be much better
file /etc/fstab root:root 644

# Root password
file /etc/shadow root:root 640

# Logger
pkg app-admin/syslog-ng

# Network
pkg net-misc/dhcpcd
enable dhcpcd

# For remote access
pkg net-misc/openssh
file /etc/ssh/sshd_config root:root 600
file /etc/ssh/known_hosts root:root 600
# Other sshd files...
enable sshd

There are a ton of assumptions that make this work; if installing
manually, the installer is responsible, and if installing from PXE, this
stuff would have to be baked into the ISO.



 Gotta recipe for that using bossman?
 Or is that an invalid direction for bossman?

 curiously,
 James


 [1]
 http://blog.jameskyle.org/2014/08/automated-stage3-gentoo-install-using-ansible/





Automating the bootstrapping of a node is reasonably complicated, even
harder on Gentoo than on RHEL. This is the type of thinking I want to
do, and I'm working on doing this with my CentOS box that runs ssh,
Jenkins, postgres, and Redmine.

Alec



Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-27 Thread Alan McKinnon
On 27/09/2013 12:37, Grant wrote:
 I realized I only need two types of systems in my life.  One hosted
 server and bunch of identical laptops.  My laptop, my wife's laptop,
 our HTPC, routers, and office workstations could all be on identical
 hardware, and what better choice than a laptop?  Extremely
 space-efficient, portable, built-in UPS (battery), and no need to buy
 a separate monitor, keyboard, mouse, speakers, camera, etc.  Some
 systems will use all of that stuff and some will use none, but it's
 OK, laptops are getting cheap, and keyboard/mouse/video comes in handy
 once in a while on any system.

 Laptops are a good choice, desktops are almost dead out there, and thin
 clients and nettops are just dead in the water for anything other than
 appliances and media servers

 What if my laptop is the master system and I install any application
 that any of the other laptops need on my laptop and push its entire
 install to all of the other laptops via rsync whenever it changes?
 The only things that would vary by laptop would be users and
 configuration.

 Could work, but don't push *your* laptop's config to all the other
 laptops. They end up with your stuff, which might not be what you want
 them to have. Rather have a completely separate area where you store
 portage configs, tree, packages and distfiles for laptops/clients and
 push from there.
 
 I actually do want them all to have my stuff and I want to have all
 their stuff.  That way everything is in sync and I can manage all of
 them by just managing mine and pushing.  How about pushing only
 portage configs and then letting each of them emerge unattended?  I
 know unattended emerges are the kiss of death but if all of the
 identical laptops have the same portage config and I emerge everything
 successfully on my own laptop first, the unattended emerges should be
 fine.

Within those constraints it could work fine. The critical stuff to share
is make.conf and /etc/portage/*, everything else can be shared to
greater or lesser degree and you can undo things on a whim if you wish.

There's one thing that we haven't touched on, and that's the hardware.
Are they all identical hardware items, or at least compatible? Kernel
builds and hardware-sensitive apps like mplayer are the top reasons
you'd want to centralize things, but those are the very apps that will
make your life miserable trying to find commonality that works in all
cases. So do keep hardware needs in mind when making purchases.

Personally, I wouldn't do the building and pushing on my own laptop;
that turns me into the central server and updates only happen when I'm
in the office. I'd use a central build host and my laptop is just
another client. Not all that important really, the build host is just an
address from the client's point of view.



 
 I'd recommend if you have a decent-ish desktop lying around, you press
 that into service as your master build host. yeah, it takes 10% longer
 to build stuff, but so what? Do it overnight.
 
 Well, my goal is to minimize the number of different systems I
 maintain.  Hopefully just one type of laptop and a server.
 
  Maybe puppet could help with that?  It would almost be
 like my own distro.  Some laptops would have stuff installed that they
 don't need but at least they aren't running Fedora! :)

 DO NOT PROVISION GENTOO SYSTEMS FROM PUPPET.
 
 OK, I'm thinking over how much variation there would be from laptop to laptop:
 
 1. /etc/runlevels/default/* would vary of course.
 2. /etc/conf.d/net would vary for the routers and my laptop which I
 sometimes use as a router.
 3. /etc/hostapd/hostapd.conf under the same conditions as #2.
 4. Users and /home would vary but the office workstations could all be
 identical in this regard.
 
 Am I missing anything?  I can imagine everything else being totally identical.
 
 What could I use to manage these differences?

I'm sure there are numerous files in /etc/ with small niggling
differences; you will find these as you go along.

In a Linux world, these files actually do not lend themselves to
centralization very well; they really do need a human with a clue to make
a decision whilst having access to the laptop in question. Every time
we've brain-stormed this at work, we end up with only two realistic
options: go to every machine and configure it there directly, or put
individual per-host configs into puppet and push. It comes down to the
same thing; the only difference is the location where stuff is stored.

I'm slowly coming to the conclusion that you are trying to solve a problem
with Gentoo that binary distros already solved a very long time ago. You
are forcing yourself to become the sole maintainer of GrantOS and do all
the heavy lifting of packaging. But Mint and friends already did all
that work, and frankly, they are much better at it than you or I.

I would urge you to take a good long hard look at exactly why a binary
distro is not suitable, as I feel that would solve all your issues. Run
Gentoo

Re: [gentoo-user] Managing multiple systems with identical hardware

2013-09-29 Thread Grant
 I realized I only need two types of systems in my life.  One hosted
 server and bunch of identical laptops.  My laptop, my wife's laptop,
 our HTPC, routers, and office workstations could all be on identical
 hardware, and what better choice than a laptop?  Extremely
 space-efficient, portable, built-in UPS (battery), and no need to buy
 a separate monitor, keyboard, mouse, speakers, camera, etc.  Some
 systems will use all of that stuff and some will use none, but it's
 OK, laptops are getting cheap, and keyboard/mouse/video comes in handy
 once in a while on any system.

 Laptops are a good choice, desktops are almost dead out there, and thin
 clients and nettops are just dead in the water for anything other than
 appliances and media servers

 What if my laptop is the master system and I install any application
 that any of the other laptops need on my laptop and push its entire
 install to all of the other laptops via rsync whenever it changes?
 The only things that would vary by laptop would be users and
 configuration.

 Could work, but don't push *your* laptop's config to all the other
 laptops. They end up with your stuff, which might not be what you want
 them to have. Rather have a completely separate area where you store
 portage configs, tree, packages and distfiles for laptops/clients and
 push from there.

 I actually do want them all to have my stuff and I want to have all
 their stuff.  That way everything is in sync and I can manage all of
 them by just managing mine and pushing.  How about pushing only
 portage configs and then letting each of them emerge unattended?  I
 know unattended emerges are the kiss of death but if all of the
 identical laptops have the same portage config and I emerge everything
 successfully on my own laptop first, the unattended emerges should be
 fine.

 Within those constraints it could work fine. The critical stuff to share
 is make.conf and /etc/portage/*, everything else can be shared to
 greater or lesser degree and you can undo things on a whim if you wish.

 There's one thing that we haven't touched on, and that's the hardware.
 Are they all identical hardware items, or at least compatible? Kernel
 builds and hardware-sensitive apps like mplayer are the top reasons
 you'd want to centralize things, but those are the very apps that will
 make your life miserable trying to find commonality that works in all
 cases. So do keep hardware needs in mind when making purchases.

Keeping all of the laptops 100% identical as far as hardware is
central to this plan.  I know I'm setting myself up for big problems
otherwise.

 Personally, I wouldn't do the building and pushing on my own laptop;
 that turns me into the central server and updates only happen when I'm
 in the office. I'd use a central build host and my laptop is just
 another client. Not all that important really, the build host is just an
 address from the client's point of view.

I don't think I'm making the connection here.  The central server
can't do any unattended building and pushing, correct?  So I would
need to be around either way I think.

I'm hoping I can emerge every package on my laptop that every other
laptop needs.  That way I can fix any build problems and update any
config files right on my own system.  Then I would push config file
differences to all of the other laptops.  Then each laptop could
emerge its own stuff unattended.
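A minimal sketch of that workflow; the hostnames are hypothetical and the commands are only echoed (a dry run), since nothing here should fire unattended by accident:

```shell
# Dry-run sketch: push portage configs from the master laptop, then
# trigger an update on each client. Hostnames are hypothetical, so the
# commands are printed rather than executed.
HOSTS="laptop2 laptop3 htpc"
for h in $HOSTS; do
    echo "rsync -a --delete /etc/portage/ root@$h:/etc/portage/"
    echo "ssh root@$h 'emerge --update --deep --newuse @world'"
done
```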

 OK, I'm thinking over how much variation there would be from laptop to
 laptop:

 1. /etc/runlevels/default/* would vary of course.
 2. /etc/conf.d/net would vary for the routers and my laptop which I
 sometimes use as a router.
 3. /etc/hostapd/hostapd.conf under the same conditions as #2.
 4. Users and /home would vary but the office workstations could all be
 identical in this regard.

 Am I missing anything?  I can imagine everything else being totally
 identical.

 What could I use to manage these differences?

 I'm sure there are numerous files in /etc/ with small niggling
 differences; you will find these as you go along.

 In a Linux world, these files actually do not lend themselves to
 centralization very well; they really do need a human with a clue to make
 a decision whilst having access to the laptop in question. Every time
 we've brain-stormed this at work, we end up with only two realistic
 options: go to every machine and configure it there directly, or put
 individual per-host configs into puppet and push. It comes down to the
 same thing; the only difference is the location where stuff is stored.

I'm sure I will need to carefully define those config differences.
Can I set up puppet (or similar) on my laptop and use it to push
config updates to all of the other laptops?  That way the package I'm
using to push will be aware of config differences per system and push
everything correctly.  You said not to use puppet, but does that apply
in this scenario?

 I'm slowly coming to the conclusion that you are trying to solve a problem
 with Gentoo that binary

Re: [gentoo-user] Multiseat -- LTSP?

2012-01-30 Thread Grant
[snip]
 If I throw out installing a separate OS on a separate machine for each
 workstation and all of the proprietary thin-client protocols, I think
 I have 3 options:

 1. Connect monitors, USB keyboards, and USB mice directly to a server
 with multiple video cards.  I found a motherboard with 6 PCI-E slots:

 http://www.newegg.com/Product/Product.aspx?Item=N82E16813128508

 6 video cards could be installed for 6 workstations if the server goes
 headless, and even more if multi-headed video cards are used.  Xorg
 requires some special configuration for this but this discussion from
 2010 sounds like it's something that is actually done:

 http://forums.gentoo.org/viewtopic-t-836950-start-0.html

 These guys got it working in 2006:

 http://www.linuxgazette.net/124/smith.html

 2. Set up a separate thin client for each workstation and run LTSP on
 the server.  This seems inferior to #1 because it requires setting up
 and maintaining the LTSP server and client configuration, NFS, xinetd,
 tftp, dnsmasq, and PXE-boot.  Bandwidth would also be limited compared
 to #1 and hardware and power requirements would be much greater.

 3. Run a Plugable thin client for each workstation:

 http://www.amazon.com/dp/B004PXPPNA

 This likely requires running Userful Multiseat Linux on my server
 which is only packaged up for Ubuntu.  The Plugable thin client
 connects to the server via USB 2.0 which makes me wonder if it could
 be made to work without Userful Multiseat Linux as a USB video card
 and input devices, but I imagine drivers for the video card and
 bandwidth over USB could be a problem.

 I think #1 is the way to go but I'd love to hear anyone else's opinion
 on that.  Has anyone here ever set up multiseat in Xorg?

 Can you rely on Xorg devs to ensure that they are not going to break your
 multiseat system in the future?

Maybe I'm wrong, but I don't know why there would be (much) more
likelihood of regression with Xorg multiseat than with anything else,
including LTSP and all of its dependencies.  In the context of both
hardware and software, I think there are far fewer points of
potential failure with multiseat than with an LTSP thin-client
arrangement.
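For reference, the special Xorg configuration mentioned earlier in the thread amounts to one ServerLayout per seat; a heavily trimmed, hypothetical sketch for two seats (device sections, BusIDs and the exact X invocation are assumptions):

```
# xorg.conf sketch: two independent seats, each with its own screen,
# keyboard and mouse. The referenced Screen/InputDevice sections are
# omitted here; identifiers are hypothetical.
Section "ServerLayout"
    Identifier  "seat0"
    Screen      "Screen0"
    InputDevice "Keyboard0" "CoreKeyboard"
    InputDevice "Mouse0"    "CorePointer"
EndSection

Section "ServerLayout"
    Identifier  "seat1"
    Screen      "Screen1"
    InputDevice "Keyboard1" "CoreKeyboard"
    InputDevice "Mouse1"    "CorePointer"
EndSection

# Each seat then gets its own X server, e.g.:
#   X :0 -layout seat0 -sharevts &
#   X :1 -layout seat1 -sharevts &
```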

 Are you sure that you will come across bandwidth issues if you follow option
 #2?  On a gigabit network at work we're running thousands of thin clients
 distributed across hundreds of VM servers, and there is no noticeable latency
 (unless a particular VM MSWindows server plays up).

I'm sure I wouldn't.  I only mentioned the increased bandwidth of
multiseat vs. thin-clients as a technicality.

 I understand that managing multiple boxen is always a greater burden, but
 something like GNAP may lighten the work needed?

  http://www.gentoo.org/proj/en/base/embedded/gnap-userguide.xml

That looks cool, but from my perspective it's another layer to learn,
install, configure, and manage.  chef and puppet take a different
approach to lessening the burden of administrating multiple systems,
but in the end neither approach comes anywhere near the hardware and
software simplicity (and corresponding ease of setup and maintenance)
of multiseat.

- Grant



Re: [gentoo-user] Cross system dependencies

2014-06-28 Thread thegeezer
On 06/28/2014 07:06 PM, J. Roeleveld wrote:
 On Saturday, June 28, 2014 01:39:41 PM Neil Bothwick wrote:
 On Sat, 28 Jun 2014 11:36:11 +0200, J. Roeleveld wrote:
 I need a way to add dependencies to services which are provided by
 different servers. For instance, my mail server uses DNS to locate my
 LDAP server which contains the mail aliases. All these are running on
 different machines. Currently, I manually ensure these are all started
 in the correct sequence, I would like to automate this to the point
 where I can start all 3 servers at the same time and have the different
 services wait for the dependency services to be available even though
 they are on different systems.

 All the dependency systems in the init-systems I could find are all
 based on dependencies on the same server. Does anyone know of something
 that can already provide this type of dependencies? Or do I need to
 write something myself?
 With systemd you can add ExecStartPre=/some/script to the service's unit
 file where /some/script waits for the remote services to become available,
 and possibly return an error if the service does not become available
 within a set time.
 That method works for any init-system and writing a script to check and if 
 necessary fail is my temporary fall-back plan. I was actually hoping for a 
 method that can be used to monitor availability and, if necessary, stop 
 services when the dependencies disappear.

 --
 Joost


The difficulty is in identifying failed services.
A local network issue or load issue could mean your services start bouncing.
The best way is to have redundancy so it doesn't matter as much.

Having said all of that:

systemd will start servers and buffer network activity; how this works
for non-local services would be interesting to see.

with openrc:
you could, on the DNS server, have a service which is just a script that
watches for the pid / program path in ps and outputs ACK or NAK to a
file in an NFS share, say /nfs/monitoring/dns

then on the mail server you could have a service that polls
/nfs/monitoring/dns for NAK or ACK.
You can then choose to have this service directly start your dependent
services, or, if you adjust /etc/init.d/postfix to depend on
mymonitorDNS (an empty shell of a service), your watchdog service could
stop / start that empty shell, and then postfix depends on mymonitorDNS.
This would save you from "I've just stopped the mail server for
maintenance and my watchdog service has just restarted it due to a
NAK-to-ACK event".
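The polling side of that watchdog could be sketched like this (the flag file path and timeout are made-up values); it waits until the monitoring file contains ACK, or gives up after a timeout:

```shell
# Wait until the monitoring flag file contains ACK, or time out.
# The flag file path and timeout below are illustrative values.
wait_for_ack() {
    flag=$1
    tries=${2:-30}           # default ~30s at one check per second
    while [ "$tries" -gt 0 ]; do
        [ "$(cat "$flag" 2>/dev/null)" = "ACK" ] && return 0
        tries=$((tries - 1))
        sleep 1
    done
    return 1                 # dependency never came up
}

# Example: /nfs/monitoring/dns is written by the DNS server's watchdog
# wait_for_ack /nfs/monitoring/dns 60 || exit 1
```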

or...
you could have a central master machine which has it's own services,
watchdog and monitor... i.e. /etc/init.d/thepostfixserver start  /
depends on thednsserver which just runs
# ssh postfixserver '/etc/init.d/postfix start' 

or...
puppet and it's kin






Re: [gentoo-user] Re: Custom ebuilds for CoreOS

2014-12-02 Thread Rich Freeman
On Tue, Dec 2, 2014 at 12:37 PM, James wirel...@tampabay.rr.com wrote:
 Rich Freeman rich0 at gentoo.org writes:

 You seem to be wanting a minimalist profile of Gentoo, not CoreOS.

 YES!, I want Gentoo to CRUSH CoreOS because we can and our goal is not
 to deceptively move users to a rent the binary jail. OK?


Gentoo and CoreOS really target different uses.  I certainly could see
one being installed more than the other just as there are no doubt
more tubes of toothpaste sold in a year than there are iPhones sold in
a year (or, at least I hope there are).  That doesn't mean that
toothpaste is crushing the iPhone.

This isn't unlike Gentoo vs ChromeOS.  You're comparing a
general-purpose distro (and one that is even more
general-purpose/customizable than a typical one) to a tool made to do
exactly one job well.

CoreOS is just about hosting containers.  Sure, some of those
containers might be rent the binary jails - but you could run Gentoo
in one of those containers just as easily.  CoreOS really competes
with the likes of VMWare/KVM, or even OpenStack.  If you don't want to
run a bazillion containers, then sure it isn't something you're going
to be interested in.


 It isn't intended as a starting point for embedded projects or such.
 Sure, maybe you could make it work, but sooner or later CoreOS will
 make some change that will make you very unhappy because they aren't
 making it for you.

 CoreOS will never be in my critical path. Large corporations will turn
 computer scientist and hackers into WalMart type-employees. Conglomerates
 are the enemy, imho. I fear Conglomerates much more than any group
 of government idiots. ymmv.

Well, then don't run it!  Large corporations are actually the
least-progressive when it comes to adopting these kinds of
technologies.  I actually see these things being embraced by mid-sized
companies first.  The new way of doing these things lets you quickly
scale up from development to production without a lot of manual
configuration of individual hosts.  I work for a big company and
they're still doing lots of manual installation scripts that get
signed and dated like it is still the 80s.  It isn't Walmart-type work
primarily because it is so error-prone we always need people to fix
all the stuff that breaks.  My LUG meets at a mid-sized VoIP company
that uses the likes of Puppet/Chef for everything and I'm sure Docker
is on their radar as something to think about next - they're hardly
robots but they realize that they'd rather have their bright employees
doing something other than dealing with botched updates on hosts that
bring down 47 VMs at a time.  Their customers like that they can just
pay them for a VoIP account and get full service for a low cost,
versus paying the kid next door to figure out how to custom-rig a PBX
for them.  And, yes, they use Asterisk.

--
Rich


