Re: [openstack-dev] OpenStack upstream specs - Invitation to edit

2017-09-10 Thread Tomasz Paszkowski
Hey James,


I'm unable to open this document :(

TP (Tomasz from Intel)

On Tue, Oct 18, 2016 at 2:35 PM, James Penick (via Google Docs) <
jpen...@gmail.com> wrote:

> James Penick  has invited you to *edit* the following
> document:
> OpenStack upstream specs
> 
> Open in Docs
> 
> This email grants access to this item without logging in. Only forward it
> to people you trust.
> Google Docs: Create and edit documents online.
> Google Inc. 1600 Amphitheatre Parkway, Mountain View, CA 94043, USA
> You have received this email because someone shared a document with you
> from Google Docs.
>


-- 
Tomasz Paszkowski
OpenStack | Kubernetes | SDN
+48500166299
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [ironic][nova] Suggestion required on pci_device inventory addition to ironic and its subsequent changes in nova

2017-04-11 Thread Tomasz Paszkowski
On Apr 10, 2017 1:02 PM, "John Garbutt"  wrote:

On 10 April 2017 at 11:31,  .

With ironic I thought everything is "passed through" by default,
because there is no virtualization in the way. (I am possibly
incorrectly assuming no BIOS tricks to turn off or re-assign PCI
devices dynamically.)


That's not entirely true. On the Intel Rack Scale Design platform you
can attach/detach PCI devices on the fly.



TP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][ironic] Kubernetes-based long running processes

2017-03-18 Thread Tomasz Paszkowski
On 16 Mar 2017 6:21 pm, "Dean Troyer"  wrote:



Before implementing something new it would be a good exercise to have
a look at the other existing ways to run VMs and containers already in
the OpenStack ecosystem.  Service VMs are a thing, and projects like
Octavia are built around running inside the existing infrastructure.
There are a bunch of deployment projects that are also designed
specifically to run services with minimal base requirements.



VMs have a much bigger overhead than containers. Imagine an Ironic
cluster with 3000 bare-metal nodes, each with its console enabled.
The overhead of running 3000 VMs vs 3000 containers is huge. Not to
mention that Kubernetes container high availability is far ahead of
what OpenStack offers for VMs.

Also, Kubernetes is not something new, and a framework to launch
containers on top of it would be really lightweight.
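
As a rough illustration of how light that could be, here is a sketch
using the standard kubernetes Python client (the image name, namespace
and node id are placeholders, not an existing Ironic integration):

from kubernetes import client, config

def launch_console(node_uuid, namespace='ironic-consoles'):
    # One small Deployment per bare-metal node console; Kubernetes
    # restarts the pod for us if the console process dies.
    config.load_kube_config()
    apps = client.AppsV1Api()
    labels = {'app': 'ironic-console', 'node': node_uuid}
    container = client.V1Container(
        name='console',
        image='example/ironic-console:latest',  # placeholder image
        args=['--node-uuid', node_uuid],
    )
    deployment = client.V1Deployment(
        api_version='apps/v1',
        kind='Deployment',
        metadata=client.V1ObjectMeta(name='console-%s' % node_uuid,
                                     labels=labels),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels=labels),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels=labels),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    apps.create_namespaced_deployment(namespace=namespace, body=deployment)

Scaling that to 3000 nodes is just 3000 Deployments, with no hypervisor
or guest OS in between.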

TP
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [oslo][devstack][all] ZooKeeper vs etcd for Tooz/DLM

2017-03-14 Thread Tomasz Paszkowski
Etcd seems to be a better choice for performance reasons as well:

https://coreos.com/blog/performance-of-etcd.html
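
From the service side it should not matter much which backend wins,
since everything goes through the Tooz API anyway. A minimal sketch of
taking a distributed lock, assuming the etcd3 Tooz driver discussed in
this thread and an etcd listening on the default 127.0.0.1:2379:

from tooz import coordination

coordinator = coordination.get_coordinator('etcd3://127.0.0.1:2379',
                                            b'member-1')
coordinator.start()

# Only one member across the cluster can hold this lock at a time.
lock = coordinator.get_lock(b'upgrade-nova-compute-7')
with lock:
    print('lock acquired, doing the critical section')

coordinator.stop()

Swapping the URL for a zookeeper:// one is enough to move back to
ZooKeeper, which is the whole point of Tooz.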

TP

On 14 Mar 2017 12:45 am, "Davanum Srinivas"  wrote:

> On Tue, Mar 14 2017, Davanum Srinivas wrote:
>
> > Let's do it!! (etcd v2-v3 in tooz)
>
> Hehe. I'll move that higher in my priority list, I swear. But anyone is
> free to beat me to it in the meantime. ;)
>
> --
> Julien Danjou
> -- Free Software hacker
> -- https://julien.danjou.info
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] Mirantis participation in kolla-mesos project and shift towards Kubernetes

2016-04-26 Thread Tomasz Paszkowski
Hey Steven,

answers inline.

On Mon, Apr 25, 2016 at 9:27 AM, Steven Dake (stdake)  wrote:
>
> I disagree with your assertion.  You are gaming your data to provide the
> worst possible container (nova) because the RPMs pull in libvirt.  Kolla
> has no control over how Red Hat chooses to package RDO, and at this time
> they choose to package libvirt as a dependency thereof.  Obviously it
> would be more optimal in a proper container system not to include libvirt
> in the dependencies installed with Nova.  If you really want that, use
> from source installs.  Then you could shave 1 minute off your upgrade time
> of a 64 node cluster.

Look here: http://paste.openstack.org/show/495459/ . As you can see,
there are no libvirt dependencies there, only python-nova deps.
>
> A DSL does not solve this problem unless the DSL contains every dependency
> to install (from binary).  I don't see this as maintainable.

Agreed, being too detailed within the DSL can make maintenance a
nightmare. I was thinking about some build automation which could
extract dependencies (i.e. repoquery --requires python-nova) and put
each one into a separate layer. We would just need a basic DSL, with
the same complexity as we have now in the Dockerfiles, which would
build the Dockerfiles dynamically.
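
Just to illustrate the idea, a rough sketch (the base image and package
names are placeholders, not what Kolla actually uses) of generating
such a layered Dockerfile from repoquery output:

import subprocess

def dependency_layers(package, base_image='centos:7'):
    # Resolve the package's dependencies and emit one RUN per
    # dependency, so each one ends up in its own image layer.
    out = subprocess.check_output(
        ['repoquery', '--requires', '--resolve', package])
    deps = sorted(set(out.decode().split()))
    lines = ['FROM %s' % base_image]
    lines += ['RUN yum install -y %s' % dep for dep in deps]
    # The package itself goes last, so a new python-nova build only
    # invalidates the final layer and the dependency layers stay cached.
    lines.append('RUN yum install -y %s' % package)
    return '\n'.join(lines)

print(dependency_layers('python-nova'))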

Another approach could be building a dedicated image for each
dependency and binding them together into a single image during the
build.

I also have Alpine Linux in mind; together with Bazel it can make
images really small.
>
> Just as a conclusion, deploying each dependency in a separate layer is an
> absolutely terrible idea.  Docker performance at least is negatively
> affected by large layer counts, and aufs has a limit of 42 layers, so your
> idea is not viable as presented.

That limit was raised to 127 back in 2013.

-- 
Tomasz Paszkowski
SS7, Asterisk, SAN, Datacenter, Cloud Computing
+48500166299

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] Mirantis participation in kolla-mesos project and shift towards Kubernetes

2016-04-24 Thread Tomasz Paszkowski
On Mon, Apr 25, 2016 at 5:51 AM, Jeffrey Zhang  wrote:
>
> I do not think this is an issue with Kolla. It is an issue with Docker
> images. All Docker images have this issue. This should be solved
> on the Docker side.
>

It's a Kolla issue, and the way the project builds images. Calling yum
install in a Dockerfile is not the best idea (you end up with multiple
dependent packages put into a single image layer). Having a good DSL
would mean that you could place each package into a separate image
layer (i.e. RUN rpm -i ./python-nova-.rpm) and take advantage of that
during the build (reusing cached layers).


-- 
Tomasz Paszkowski
SS7, Asterisk, SAN, Datacenter, Cloud Computing
+48500166299

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] Mirantis participation in kolla-mesos project and shift towards Kubernetes

2016-04-24 Thread Tomasz Paszkowski
On Mon, Apr 25, 2016 at 7:11 AM, MichaƂ Rostecki
 wrote:
>
> Ummm... no?
>
> If you want to upgrade the things in containers, you should just
> rebuild them. You don't do any "yum update" action by hand in some
> middle layer. The newest updated repo metadata come from the base
> image and your layers on top of it are just installing packages as
> usual.
>
> So, the upgrade of python-nova package should come from new metadata
> in base image, and then usual "yum install" in nova layer.

Yes, you still need to build a new image, but with separate layers
(i.e. each package added with a separate RUN command) you can take
advantage of the cached layers, so a rebuild is much faster than it is
now.
>>
>
> Each package in its own image layer? There will be no difference in
> image size, but there will be an impact on readability and facility.

Image size would not change, that's correct. But image builds would be
much faster, and with the images properly layered you would only need
to download the layers that have changed.



-- 
Tomasz Paszkowski
SS7, Asterisk, SAN, Datacenter, Cloud Computing
+48500166299

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][kubernetes] Mirantis participation in kolla-mesos project and shift towards Kubernetes

2016-04-24 Thread Tomasz Paszkowski
On Sat, Apr 23, 2016 at 2:27 AM, Steven Dake (stdake)  wrote:
>
> Trying to understand the issue here, do you believe the current jinja2
> Dockerfiles are not readable?  I'm pretty sure my 11 and 13 year old
> children which are just learning programming already understand conditionals
> and variables.  A full-blown DSL for describing container contents seems out
> of the domain of Kolla's mission, although I'd never say no if someone
> wanted to give a crack at implementing one.

Kolla images are simply too big and not properly layered. What this
means is that if you want to upgrade one of the packages today, e.g.
python-nova (assuming the images are built from RPMs), you're upgrading
the whole layer, which contains all of its dependencies added by yum.
So in the end a simple 12MB upgrade turns into 120MB (right now that is
the size of the Kolla image layer which contains python-nova).

Having a proper DSL for describing container content would mean that
we could have each package deployed within its own image layer. This
would definitely speed up upgrades and ensure better layer sharing
between multiple images.

Cheers

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev