Re: [openstack-dev] [tripleo] [puppet] move puppet-pacemaker

2016-03-22 Thread Dmitry Ilyin
I've started my merging effort here
https://github.com/dmitryilyin/openstack-puppet-pacemaker

Can I change the interface of pcmk_resource?

You have pcmk_constraint, but I have pcmk_location/colocation/order as
separate types. I can merge them into a single resource like you did, keep
them separate, or provide both. Actually, they are different enough to stay
separate.

Will I have to develop a 'pcs'-style provider for every resource? Do we
really need them?
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tripleo] [puppet] move puppet-pacemaker

2016-03-19 Thread Dmitry Ilyin
Hello.

I'm the author of fuel-infra/puppet-pacemaker, and I guess I would be able
to merge the code from "fuel-infra/puppet-pacemaker" into
"openstack/puppet-pacemaker".
We would have a single set of pcmk_* types and two providers for each
type, "pcs" and "xml"; there is also a "noop" provider.

It would be possible to choose the implementation by specifying:

pcmk_resource { 'my-resource' :
  provider => 'pcs',
}

or

pcmk_resource { 'my-resource' :
  provider => 'xml',
}


2016-03-18 2:50 GMT+03:00 Andrew Woodward :

> I'd be happy to see more collaboration here as well. I'd like the
> maintainers on both sides to identify some of what isn't implemented in
> each, so we can better decide which one to continue from, develop feature
> parity with, and then switch to.
>
> On Thu, Mar 17, 2016 at 12:03 PM Emilien Macchi wrote:
>
>> On Thu, Mar 17, 2016 at 2:22 PM, Sergii Golovatiuk wrote:
>> > Guys,
>> >
>> > Fuel has its own implementation of pacemaker [1]. Its functionality may
>> > be useful in other projects.
>> >
>> > [1] https://github.com/fuel-infra/puppet-pacemaker
>>
>> I'm afraid to see 3 duplicated efforts to deploy Pacemaker:
>>
>> * puppetlabs/corosync, not much maintained and not suitable for Red
>> Hat, for reasons related to the way it uses pcs.
>> * openstack/puppet-pacemaker, only working on Red Hat systems,
>> suitable for TripleO and previous Red Hat installers.
>> * fuel-infra/puppet-pacemaker, which looks like a more robust
>> implementation of puppetlabs/corosync.
>>
>> It's pretty clear that Mirantis and Red Hat, both major OpenStack
>> contributors who deploy Pacemaker, do not use puppetlabs/corosync but
>> have their own implementations.
>> Maybe it would be time to converge at some point. I see a lot of
>> potential in fuel-infra/puppet-pacemaker, to be honest. After reading
>> the code, I think it's still missing some features we might need to
>> make it work on TripleO, but we could work together on establishing the
>> list of missing pieces and discuss implementing them, so our
>> modules would converge.
>>
>> I don't mind using X or Y tool, I want the best one, and it seems both
>> of our groups have some expertise that could help to maybe one day
>> replace the puppetlabs/corosync code with Fuel & Red Hat's module.
>> What do you think?
>>
>> >
>> > --
>> > Best regards,
>> > Sergii Golovatiuk,
>> > Skype #golserge
>> > IRC #holser
>> >
>> > On Sat, Feb 13, 2016 at 6:20 AM, Emilien Macchi
>> > <emilien.mac...@gmail.com> wrote:
>> >>
>> >>
>> >> On Feb 12, 2016 11:06 PM, "Spencer Krum"  wrote:
>> >> >
>> >> > The module would also be welcome under the voxpupuli [0] namespace on
>> >> > GitHub. We currently have a puppet-corosync [1] module, and there is
>> >> > some overlap there, but a pure pacemaker module would be a welcome
>> >> > addition.
>> >> >
>> >> > I'm not sure which I would prefer, just that VP is an option. For
>> >> > greater OpenStack integration, Gerrit is the way to go. For greater
>> >> > participation from the wider Puppet community, GitHub is the way to go.
>> >> > Vox Pupuli provides testing and releasing infrastructure.
>> >>
>> >> The thing is, we might want to gate it on TripleO since it's the first
>> >> consumer right now. Though I agree VP would be a good place too, to
>> >> attract more Puppet users.
>> >>
>> >> Dilemma!
>> >> Maybe we could start using VP, with good testing and see how it works.
>> >>
>> >> Iterate later if needed. Thoughts?
>> >>
>> >> >
>> >> > [0] https://voxpupuli.org/
>> >> > [1] https://github.com/voxpupuli/puppet-corosync
>> >> >
>> >> > --
>> >> >   Spencer Krum
>> >> >   n...@spencerkrum.com
>> >> >
>> >> > On Fri, Feb 12, 2016, at 09:44 AM, Emilien Macchi wrote:
>> >> > > Please look and vote:
>> >> > > https://review.openstack.org/279698
>> >> > >
>> >> > >
>> >> > > Thanks for your feedback!
>> >> > >
>> >> > > On 02/10/2016 04:04 AM, Juan Antonio Osorio wrote:
>> >> > > > I like the idea of moving it to use the OpenStack infrastructure.
>> >> > > >
>> >> > > > On Wed, Feb 10, 2016 at 12:13 AM, Ben Nemec
>> >> > > > <openst...@nemebean.com> wrote:
>> >> > > >
>> >> > > > On 02/09/2016 08:05 AM, Emilien Macchi wrote:
>> >> > > > > Hi,
>> >> > > > >
>> >> > > > > TripleO is currently using puppet-pacemaker [1], which is a
>> >> > > > > module hosted & managed on GitHub.
>> >> > > > > The module was created and is mainly maintained by Red Hat. It
>> >> > > > > tends to break TripleO quite often since we don't have any gate.
>> >> > > > >
>> >> > > > > I propose to move the module to OpenStack so we'll use the
>> >> > > > > OpenStack Infra benefits (Gerrit, releases, gating, etc). Another
>> >> > > > > idea would be to gate the module with TripleO HA jobs.

Re: [openstack-dev] [Fuel] [Puppet] Potential critical issue, due Puppet mix stderr and stdout while execute commands

2015-10-23 Thread Dmitry Ilyin
Here is the implementation of the Puppet "command" that outputs only stdout
and drops stderr unless an error has happened:
https://github.com/dmitryilyin/puppet-neutron/commit/b55f36a8da62fc207a91b358c396c03c8c58981b
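For illustration, a minimal standalone sketch of the same idea (not the code
from the linked commit; the helper name `run_command` is made up) could use
Ruby's Open3 to keep the two streams apart:

```ruby
require 'open3'

# Illustrative sketch, not the linked commit: run a command keeping
# stdout and stderr separate. Return only stdout on success; raise
# with stderr in the message when the command fails.
def run_command(*cmd)
  stdout, stderr, status = Open3.capture3(*cmd)
  unless status.success?
    raise "#{cmd.join(' ')} failed (#{status.exitstatus}): #{stderr}"
  end
  stdout # warnings on stderr are silently dropped here
end

# A warning on stderr no longer pollutes the output a provider parses:
puts run_command('sh', '-c', 'echo real-output; echo deprecation-warning 1>&2')
```

Providers that parse command output would then see only stdout, while stderr
still surfaces in the exception message on failure, matching the older Puppet
behavior discussed in this thread.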

2015-10-22 17:59 GMT+03:00 Matt Fischer :

> On Thu, Oct 22, 2015 at 12:52 AM, Sergey Vasilenko
> <svasile...@mirantis.com> wrote:
>
>>
>> On Thu, Oct 22, 2015 at 6:16 AM, Matt Fischer wrote:
>>
>>> I thought we had code in other places that split out stderr and only
>>> logged it if there was an actual error, but I cannot find the reference
>>> now. I think that matches the original proposal. Not sure I like idea #3.
>>
>>
>> Matthew, this topic is not about SSL. ANY warnings, ANY output to stderr
>> from a CLI may corrupt the work of providers from the puppet-* modules for
>> OpenStack components.
>>
>> IMHO it's a very serious bug that potentially affects the OpenStack puppet
>> modules.
>>
>> I see 3 ways to fix it:
>>
>>    1. Long way: a big patch to Puppet core to add the ability to collect
>>    stderr and stdout separately. But most existing Puppet providers expect
>>    stderr and stdout to be mixed when handling execution errors (non-zero
>>    return code). Such a patch would break backward compatibility if it were
>>    enabled by default.
>>    2. Middle way: we should write code to redefine the 'commands' method.
>>    The new commands would collect stderr and stdout separately, but return
>>    stderr when an error happens (with access to stdout too).
>>    3. Short way: modify the existing providers to use JSON output instead
>>    of plain text or CSV. JSON output can easily be separated from any
>>    garbage (warnings). I made this patch as an example of this way:
>>    https://review.openstack.org/#/c/238156/ . Anyway, JSON is a more
>>    formalized format for data exchange than plain text.
>>
>> IMHO way #1 is the best solution, but not easy.
>>
>>
> I must confess that I'm a bit confused about this. It wasn't a secret that
> we're calling out to commands and parsing the output. It's been discussed
> over and over on this list as recently as last week, so this has been a
> known possible issue for quite a long time. In my original email I was
> agreeing with you, so I'm not sure why we're arguing now. Anyway...
>
> I think we need to split stderr and stdout and log stderr on errors: your
> idea #2. Using JSON, as openstack-client can, does not solve this problem
> for us; you can still end up with a bunch of junk on stderr.
>
> This would be a good quick discussion in Tokyo if you guys will be there.


Re: [openstack-dev] [Fuel] [Puppet] Potential critical issue, due Puppet mix stderr and stdout while execute commands

2015-10-22 Thread Dmitry Ilyin
I have investigated this issue too.

Previously, Puppet "commands" had only stdout in their output value; stderr
was discarded unless the command returned an error code, in which case an
exception was raised with stderr as its message.
In current Puppet versions, BOTH stdout and stderr go into the command
output. That is bad in our case because it breaks neutron command output
parsing and can impact any other command.
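One mitigation short of patching Puppet itself, in the spirit of the "JSON
output" idea from this thread, is to make a provider locate the
machine-readable part of the output instead of parsing the whole string. A
rough Ruby sketch, assuming the warnings are printed before the JSON document
and contain no braces or brackets (the function name is illustrative):

```ruby
require 'json'

# Illustrative sketch: extract the first JSON document from command
# output that may have warning lines printed before it. Assumes the
# warnings precede the JSON and contain no '[' or '{' themselves.
def parse_json_output(raw)
  start = raw.index(/[\[{]/)
  raise 'no JSON found in command output' unless start
  JSON.parse(raw[start..-1])
end

mixed = "DeprecationWarning: something old\n" \
        "[{\"id\": \"net1\", \"shared\": false}]\n"
puts parse_json_output(mixed).first['id']
```

Warnings printed after the JSON would still break this, which is why actually
separating the two streams remains the more robust fix.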

2015-10-22 14:50 GMT+03:00 Martin Mágr :

>
>
> On 10/22/2015 05:16 AM, Matt Fischer wrote:
>
>> I thought we had code in other places that split out stderr and only
>> logged it if there was an actual error but I cannot find the reference now.
>>
>
> https://github.com/openstack/puppet-glance/blob/stable/kilo/lib/puppet/provider/glance.rb#L184
>
> But it was removed in the master branch for some reason.


Re: [openstack-dev] [TripleO] pacemaker management tools

2014-06-20 Thread Dmitry Ilyin
Hello, guys! I've just seen your thread and I have something to say about
the topic.

 What management tools are there?

Old-time pacemaker users will of course name the crm shell first (the crmsh
package). It allows interactive configuration by entering commands like
this:

 crm configure primitive test1 ocf:pacemaker:Dummy
 crm configure primitive test2 ocf:pacemaker:Dummy
 crm configure primitive test3 ocf:pacemaker:Dummy
 crm configure colocation test2_with_test1 100: test2 test1
 crm configure order test3_after_test2 200: test2 test3

 crm status

Or by using editor:

 crm configure edit

There is also crm status and other useful stuff.

These days crm is no longer developed and is considered unsupported and
deprecated. Most distributions will not package it at all.

The newer configuration tool is pcs (the pcs package). It allows interactive
configuration and status monitoring too.

 pcs resource create test1 ocf:pacemaker:Dummy
 pcs resource create test2 ocf:pacemaker:Dummy
 pcs resource create test3 ocf:pacemaker:Dummy
 pcs constraint colocation add test2 test1 100

 pcs status

But it lacks crm's interactive shell and very convenient editor features.
pcs should be included in all modern distributions.

There are also basic tools written in C that come together with pacemaker
itself, and any sane distribution should include them:
https://github.com/ClusterLabs/pacemaker/tree/master/tools
Some of them can be very convenient even for interactive use.

Both pcs and crm are just Python wrappers that call the basic C tools as a
backend or parse the XML CIB.

 What options do we have to generate and/or manage pacemaker configuration?

In most cases it boils down either to adding and removing configuration
elements during installation or at runtime, or to importing a pre-created
configuration. Your scripts can use crm/pcs to create primitives and
constraints one by one, as a human would, or you can describe the entire
configuration and then just import it.

Of course, you can always write the pcs/crm calls to a shell script and just
run it. crm can even run batch changes in one transaction, like this:

config.crm:

configure
property stonith-enabled=false
property no-quorum-policy=ignore
primitive test1 ocf:pacemaker:Dummy
primitive test2 ocf:pacemaker:Dummy
primitive test3 ocf:pacemaker:Dummy
colocation test2_with_test1 100: test2 test1
order test3_after_test2 200: test2 test3
commit

then run: crm -f config.crm (use --force if needed)

pcs has no single-transaction update capability, but you can use a shell
script and shadow/commit if you really want a transaction.

The other solution would be to import a pre-created XML file as a patch:

<diff>
  <diff-added>
    <cib>
      <configuration>
        <resources>
          <primitive class="ocf" id="test1" provider="pacemaker" type="Dummy"/>
          <primitive class="ocf" id="test2" provider="pacemaker" type="Dummy"/>
          <primitive class="ocf" id="test3" provider="pacemaker" type="Dummy"/>
        </resources>
        <constraints>
          <rsc_colocation id="test2_with_test1" rsc="test2" score="100" with-rsc="test1"/>
          <rsc_order first="test2" id="test3_after_test2" score="200" then="test3"/>
        </constraints>
      </configuration>
    </cib>
  </diff-added>
</diff>

If we can somehow generate such a file it can be easily applied like this:
cibadmin --patch --xml-file=patch.xml

You can also use crm_diff to apply xml patches manually.

I think that for TripleO, if you want to import a pre-created configuration,
XML is the way to go. You will not depend on any Python wrappers like pcs and
crm, and you will be able to create any possible configuration.
XML allows the use of XSLT transformations if you are creative enough, or it
can simply be generated from a template or written manually.
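For instance, a rough sketch of the template route using Ruby's ERB (the
resource list and output handling are illustrative, mirroring the patch
above):

```ruby
require 'erb'

# Illustrative input: the primitives we want the generated patch to add.
primitives = %w[test1 test2 test3]

# Generate the same kind of CIB patch shown above from a template.
template = ERB.new(<<~'XML', trim_mode: '-')
  <diff>
    <diff-added>
      <cib>
        <configuration>
          <resources>
  <% primitives.each do |name| -%>
            <primitive class="ocf" id="<%= name %>" provider="pacemaker" type="Dummy"/>
  <% end -%>
          </resources>
        </configuration>
      </cib>
    </diff-added>
  </diff>
XML

patch = template.result(binding)
puts patch
# save it and apply with: cibadmin --patch --xml-file=patch.xml
```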



2014-06-20 0:03 GMT+04:00 Mike Scherbakov mscherba...@mirantis.com:

 Anything we can take out of here for our HA fixes? Maybe we want to
 participate in the thread?

 -- Forwarded message --
 From: Howley, Tom tom.how...@hp.com
 Date: Wed, Jun 18, 2014 at 1:31 PM
 Subject: Re: [openstack-dev] [TripleO] pacemaker management tools
 To: OpenStack Development Mailing List (not for usage questions) 
 openstack-dev@lists.openstack.org


 Jan/Adam,

 Is cibadmin available in the different distros? It can be used to update
 the CIB based on an XML description of the full pacemaker config. I have
 used it on Ubuntu in the past and found it more reliable than using crm
 commands for automated deployment/configuration of pacemaker clusters. It
 also has a patch facility, which I haven't used.

 I wouldn't have assumed that the pacemaker config needed to be a static
 file baked into an image. If cibadmin is an option, the different elements
 requiring pacemaker control could supply their relevant XML snippets (based
 on config values supplied via Heat), and a pacemaker/pacemaker-config
 element could apply those XML configs to the running cluster (with checks
 for resource naming clashes, etc.). Does that sound like a possible approach?

 Tom

 -Original 

Re: [openstack-dev] Fwd: Have you guys considered...

2014-06-20 Thread Dmitry Ilyin
Speaking about the idea of running everything inside containers...

First, this idea is not a new one and has been around for a very long
time. People have been running their services inside a simple chroot, then
OpenVZ, and now LXC containers, with variable success. Usually the ideas
behind this are the following:


   - Isolation. A misbehaving or hacked service will not impact other
   services.
   - Granular maintenance. Services can be updated without accidentally
   messing up other ones.
   - Personal environments. Each service can have its own libraries,
   dependencies and environment if required.
   - Modular architecture. Each service can be thought of as an application;
   this application can be installed, upgraded, stopped, started and shared
   separately.


Sometimes such an approach is used even without any form of containers:


   - Mac OS applications, where programs are packed in special folders with
   everything they need to run except basic system libraries; the latest
   versions try to add a sandbox as well.
   - Windows' Program Files has very similar functionality if used correctly.
   - Such Linuxes as http://nixos.org/ and http://crux.nu/ implement various
   ways to use the filesystem as a package manager and make updates easy,
   reliable and able to be rolled back, something we desire too.
   - PC-BSD's pbi packages are also an example of containers without actual
   containerization.
   - Python's and Ruby's popular virtualenv and rvm can be added to the end
   of this list too.


There are some OSes that make running every application in its own
container the core of their design:


   - CoreOS (https://coreos.com/) is well known to be a very good host for
   Docker containers. It has a minimal footprint and a very convenient REST
   interface. We should really consider using it as a host system for the
   Fuel master node.
   - OSv (http://osv.io/) goes much farther down the road of minimizing
   containers by developing its own specialized kernel. Each instance can be
   thought of as a wrapper around an application; the application should work
   without the ability to fork and without an extensive filesystem. These
   containers can be deployed in a cloud en masse very fast.
   - There are also projects that use this approach on the desktop, with
   full-scale virtualization instead of containers: http://qubes-os.org/


On the contrary, the traditional BSD and Linux approach is based upon having
a single "tree" of software. All programs should use the same libraries as
the others, have no conflicts, work in a single filesystem namespace, and
may depend on each other. This approach has been around for many years and
has some advantages, like saving disk space and RAM by sharing libraries, or
keeping a distribution consistent. The Linux community has been mercilessly
smiting everyone who ever said that this approach has its downsides, like
the problems of adding and maintaining custom software and of maintaining
different library versions, as well as the dependency on upstream
repositories.

Using containers can actually solve a lot of problems we have been having
for a long time, if done right, especially for a complex system like
OpenStack. We can have every service packaged as a pre-built container that
can just be downloaded, extracted and started on a target node. An update
can simply replace such a container with a newer version while keeping the
old one stored.

We have already moved the Fuel master node to this architecture, but we have
not done it the right way. The master system should be very thin and contain
as few services and things to configure as possible, to the point of only
entering the IP address. The containers should also be very thin, focused
only on their job, with a small amount of configuration, up to being
stateless if possible. Such containers would require very little
configuration, making configuration management tools like Puppet redundant.
Everything should already be configured inside the container.

By all means, we should go for a container-based architecture for every
OpenStack node, but we should not make it look like having many operating
systems to administer on a single node instead of only one. Containers are
not like cheap VPS hosting, where you can order your instance of CentOS
inside an OpenVZ container; they should be thought of as applications, not
as operating systems like the instances that are started in an OpenStack
cloud. Yes, it's a large paradigm shift, but it's definitely worth it.