No worries, good to know I did not miss anything about release procedures
:-)
Thanks,
Dmitry
On Mon, Jan 30, 2017 at 6:51 PM, Matthew Thode <prometheanf...@gentoo.org>
wrote:
> On 01/30/2017 03:24 AM, Dmitry Mescheryakov wrote:
> > Hello Matthew,
> >
> > I see tha
Hello Matthew,
I see that you have frozen my CR https://review.openstack.org/#/c/425132/ ,
but it is for stable/newton. Shouldn't the freeze apply to master only?
Thanks,
Dmitry
On Wed, Jan 25, 2017 at 12:22 AM, Matthew Thode
wrote:
> We are going to be freezing
Sergii,
I am curious - does it mean that the plugins will stop working with older
versions of Fuel?
Thanks,
Dmitry
2016-01-20 19:58 GMT+03:00 Sergii Golovatiuk :
> Hi,
>
> Recently I merged the change to master and 8.0 that moves one task from
> Nailgun to Library
ying this.
Dmitry
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
> IRC #holser
>
> On Wed, Jan 20, 2016 at 6:41 PM, Dmitry Mescheryakov <
> dmescherya...@mirantis.com> wrote:
>
>> Sergii,
>>
>> I am curious - does it mean that the p
ent these days
> >
> > [1] https://wiki.openstack.org/wiki/FeatureFreeze
> > [2]
> >
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/081131.html
> > [3]
> >
> http://lists.openstack.org/pipermail/openstack-dev/2015-December/08
Hello folks,
I'd like to request an extension of the current FFE for the feature [1]. During
the three FFE days we merged the spec [2] after a big discussion and made a
couple of iterations over the implementation [3]. We had a chat with Bogdan on
how to progress, and here are the action items that still need to
urrent results are already enough to consider the change
useful. What is left is to confirm that it does not make our failover worse.
> >
> > 2015-12-01 20:53 GMT+01:00 Sergii Golovatiuk <sgolovat...@mirantis.com>:
> >> Hi,
> >>
> >> -1 for FFE for
2015-12-02 16:52 GMT+03:00 Jordan Pittier <jordan.pitt...@scality.com>:
>
> On Wed, Dec 2, 2015 at 1:05 PM, Dmitry Mescheryakov <
> dmescherya...@mirantis.com> wrote:
>
>>
>>
>> My point is simple - let's increase our architecture scalability by 2
tly.
>>
>> That said I'm uncertain about the stability impact of this change, yet
>> I see a reasoning worth discussing behind it.
>>
>> 2015-12-01 20:53 GMT+01:00 Sergii Golovatiuk <sgolovat...@mirantis.com>:
>> > Hi,
>> >
>> > -1 for FFE for
Hello guys,
I would like to propose disabling HA for OpenStack RPC queues. The
rationale is to reduce load on RabbitMQ by removing the necessity for it to
replicate messages across the cluster. You can find more details about the
proposal in the spec [1].
To what is in the spec I can add that I've run a
Folks,
I would like to request a feature freeze exception for disabling HA for RPC
queues in RabbitMQ [1].
As I already wrote in another thread [2], I've conducted tests which
clearly show the benefit we will get from that change. The change itself is a
very small patch [3]. The only thing which I
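Mechanically, disabling mirroring for RPC queues comes down to narrowing the RabbitMQ HA policy so it no longer matches them. A hedged sketch only - the policy name and queue-name pattern below are illustrative, and the actual patch may apply this differently (e.g. via puppet):

```shell
# Illustrative only: mirror all queues EXCEPT oslo.messaging RPC queues
# (reply_* and *_fanout_*); the real change may use a different pattern.
rabbitmqctl set_policy --apply-to queues ha-all \
    '^(?!amq\.)(?!reply_)(?!.*_fanout_).*' '{"ha-mode": "all"}'
```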
Hello folks,
I second Patrick's idea. In our case we would like to install a standalone
RabbitMQ cluster with the Fuel reference architecture to perform destructive
tests on it. The requirement to install a controller is an excessive burden in
that case.
Thanks,
Dmitry
2015-10-19 13:44 GMT+03:00 Patrick
Bogdan,
Answering your questions: in MOS 7.0 source code heartbeats are enabled by
default (with heartbeat_timeout_threshold set to 60). We patched our
version of upstream stable/kilo to do so. So, in an installed env heartbeats
are enabled for every component except Neutron, for which puppet
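For reference, a hedged sketch of where those settings live. The option names are the kilo-era oslo.messaging ones; the 60 comes from the email above, while heartbeat_rate is the library default, not something stated in the email:

```ini
# In each component's config (nova.conf, etc.), as patched in MOS 7.0:
[oslo_messaging_rabbit]
heartbeat_timeout_threshold = 60  # seconds; 0 disables AMQP heartbeats
heartbeat_rate = 2                # checks per timeout window (library default)
```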
Oops, the last line should be read as
On the other side, it is a nice UX feature we really want to have in 6.0
Dmitry
2014-11-15 3:50 GMT+03:00 Dmitry Mescheryakov dmescherya...@mirantis.com:
Dmitry,
Let's review the CR from the standpoint of danger to the current deployment process:
in essence
really want to have 5.1.1.
Thanks,
Dmitry
2014-11-15 3:04 GMT+03:00 Dmitry Borodaenko dborodae...@mirantis.com:
+286 lines a week after Feature Freeze, IMHO it's too late to make an
exception for this one.
On Wed, Nov 12, 2014 at 7:37 AM, Dmitry Mescheryakov
dmescherya...@mirantis.com
Hello fuelers,
I would like to ask you to merge CR [1], which implements blueprint [2].
It is a nice UX feature we would really like to have in 6.0. On the other
side, the implementation is really small: it is a small piece of puppet
which runs a shell script. The script always exits with 0, so
Hey Jay,
Did you consider Swift's eventual consistency? The general use case for
many OpenStack applications is:
1. obtain the token from Keystone
2. perform some operation in OpenStack providing token as credentials.
As a result of operation #1 the token will be saved into Swift by the
Hello,
I used google docs to create the initial image. If you want to edit that
one, copy the doc[1] to your drive and edit it. It is not the latest
version of the image, but the only difference is that this one has the very
first project name EHO in place of Sahara.
Thanks,
Dmitry
[1]
Hello Fuelers,
At the previous meeting a topic was raised on how the Fuel doc team should
work with bugs, see [1] for details. We agreed to move the discussion
into the mailing list.
The thing is, there are only two members in the team at the moment (Meg and
Irina) and they need to distribute work among
Hello Fuelers,
Right now we have the following policy in place: the branches for a
release are opened only after its 'parent' release has reached hard
code freeze (HCF). Say, the 5.1 release is the parent release for 5.1.1 and
6.0.
And that is the problem: if the parent release is delayed, we can't
Hello people,
I think backward compatibility is a good idea. We can make the
user/pass inputs for data objects optional (they are required
currently), maybe even gray them out in the UI with a checkbox to turn
them on, or something like that.
This is similar to what I was thinking. We
Again, thanks to everyone who joined the Sahara meeting. Below are the
logs from the meeting.
Minutes:
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-07-03-18.06.html
Logs:
http://eavesdrop.openstack.org/meetings/sahara/2014/sahara.2014-07-03-18.06.log.html
Thanks,
Dmitry
I agree with Andrew and actually think that we do need to have
https://review.openstack.org/#/c/87573 (Fix running EDP job on
transient cluster) fixed in stable branch.
We also might want to add https://review.openstack.org/#/c/93322/
(Create trusts for admin user with correct tenant name). This
Hello people,
The following patch set splits the monolithic sahara-api process into two
- sahara-api and sahara-engine:
https://review.openstack.org/#/c/90350/
After the change is merged, there will be three binaries to run Sahara:
* sahara-all - runs Sahara all-in-one (like sahara-api does right
Hello Isaku,
Thanks for sharing this! Right now in the Sahara project we are thinking of
using Marconi as a means to communicate with VMs. It seems you are familiar
with the discussions that have happened so far. If not, please see the links
at the bottom of the UnifiedGuestAgent [1] wiki page. In short, we see Marconi's
:33 GMT+04:00 Chris Friesen chris.frie...@windriver.com:
On 03/24/2014 01:27 PM, Dmitry Mescheryakov wrote:
I see two possible explanations for these 5 remaining queues:
* They were indeed recreated by 'compute' services. I.e. controller
service send some command over rpc
Chris,
In oslo.messaging a single reply queue is used to gather results from
all the calls. It is created lazily on the first call and is used
until the process is killed. I did a quick look at oslo.rpc from
oslo-incubator and it seems like it uses the same pattern, which is
not surprising since
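The lazy single-reply-queue pattern described above can be sketched as follows. This is a simplified illustration, not the actual oslo.messaging code: the "queue" here is only a name, no broker is involved, and all names are made up:

```python
import itertools
import uuid


class RpcClient:
    """Illustration of the single-reply-queue lifecycle described above."""

    def __init__(self):
        self._reply_queue = None             # created lazily, on first call
        self._msg_ids = itertools.count(1)

    def _get_reply_queue(self):
        # Created on the first call, then reused until the process dies.
        if self._reply_queue is None:
            self._reply_queue = 'reply_' + uuid.uuid4().hex
        return self._reply_queue

    def call(self, method):
        # Every outgoing call carries the same reply-to queue plus a
        # unique msg_id, so replies can be matched to their callers.
        return {'method': method,
                'reply_to': self._get_reply_queue(),
                'msg_id': next(self._msg_ids)}


client = RpcClient()
first, second = client.call('list_queues'), client.call('list_queues')
assert first['reply_to'] == second['reply_to']   # one queue for all calls
assert first['msg_id'] != second['msg_id']       # but unique per-call ids
```

This also explains the observation earlier in the thread: reply queues live as long as the process that created them, not as long as any single call.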
Hello Nader,
You should use python-keystoneclient [1] to obtain the token. You can
find example usage in the helper script [2].
Dmitry
[1] https://github.com/openstack/python-keystoneclient
[2] https://github.com/openstack/savanna/blob/master/tools/get_auth_token.py#L74
2014-03-10 21:25
For what it's worth, in Sahara (formerly Savanna) we inject the second
key via userdata. I.e. we add
echo "${public_key}" >> ${user_home}/.ssh/authorized_keys
to the other stuff we do in userdata.
Dmitry
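Spelled out a bit more, such a userdata fragment might look like the sketch below. The values are placeholders (the real Sahara template substitutes ${public_key} and ${user_home}), and the mkdir/chmod lines are an assumption about surrounding steps, not quoted from Sahara:

```shell
#!/bin/sh
# Sketch of the userdata fragment: append (>>) rather than overwrite,
# so the key Nova already provisioned survives. Values are placeholders.
user_home="${user_home:-$(mktemp -d)}"   # e.g. /home/hadoop on a real VM
public_key="${public_key:-ssh-rsa AAAAB3... user@host}"

mkdir -p "${user_home}/.ssh"
echo "${public_key}" >> "${user_home}/.ssh/authorized_keys"
chmod 700 "${user_home}/.ssh"
chmod 600 "${user_home}/.ssh/authorized_keys"
```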
2014-03-10 17:10 GMT+04:00 Jiří Stránský ji...@redhat.com:
On 7.3.2014 14:50, Imre Farkas wrote:
Colleagues,
Today I am again taking a sick day. I have started to recover but still
feel unwell. I hope to be fully recovered by Tuesday.
Dmitry
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
Ooops, sorry, wrong recipient :-)
On 7 March 2014 at 14:13, Dmitry Mescheryakov
dmescherya...@mirantis.com wrote:
Colleagues,
Today I am again taking a sick day. I have started to recover but still
feel unwell. I hope to be fully recovered by Tuesday.
Dmitry
Hello folks,
A number of OpenStack and related projects have a need to perform
operations inside VMs running on OpenStack. A natural solution would
be an agent running inside the VM and performing tasks.
One of the key questions here is how to communicate with the agent. An
idea which was
Hello folks,
Not long ago we had a discussion on the unified guest agent [1] - a way of
performing actions 'inside' VMs. Such a thing is needed by PaaS projects for
tasks such as application reconfiguration and user request pass-through.
As a proof of concept I've made os_collect_config as a guest
I agree with Andrew. I see no value in letting users select how their
cluster is provisioned; it will only make the interface a little bit more
complex.
Dmitry
2014/1/30 Andrew Lazarev alaza...@mirantis.com
Alexander,
What is the purpose of exposing this to user side? Both engines must do
Hello folks,
At the end of the previous discussion on the topic [1] I've decided to make
a PoC based on oslo.messaging. Clint suggested and I agreed to make it for
os-collect-config. Actually I've made a PoC for Savanna first :-) but
anyway here is the one for os-collect-config [2].
I've made a
I agree that enabling communication between a guest and a cloud service is a
common problem for most agent designs. The only exception is an agent based on
hypervisor-provided transport. But as far as I understand many people are
interested in a network-based agent, so indeed we can start a thread (or
being too concerned.
- Tim
From: Dmitry Mescheryakov [dmescherya...@mirantis.com]
Sent: Wednesday, December 18, 2013 11:51 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [trove] My thoughts on the Unified Guest
the standard way of sending messages to guests so that
package maintainers, the Infra team, and newbies to OpenStack wouldn't have
to deal with dozens of different ways of doing things, but the important
thing is that other methods of communication would still be possible.
Thanks,
Tim
From: Dmitry
Clint, do you mean
* use os-collect-config and its HTTP transport as a base for the PoC
or
* migrate os-collect-config to the PoC after it is implemented on
oslo.messaging
I presume the latter, but could you clarify?
2013/12/18 Clint Byrum cl...@fewbar.com
Excerpts from Dmitry Mescheryakov's
Tim,
The unified agent we are proposing is based on the following ideas:
* the core agent has _no_ functionality at all. It is a pure RPC
mechanism with the ability to add whichever API is needed on top of it.
* the API is organized into modules which could be reused across
different projects.
*
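A toy sketch of those two ideas - every name below is made up for illustration and is not code from the proposal. The core is a bare dispatcher, and all functionality comes from registered API modules:

```python
class AgentCore:
    """Pure RPC dispatch: the core itself implements no functionality."""

    def __init__(self):
        self._api = {}

    def register_module(self, namespace, module):
        # An API module is a plain object; its public methods become
        # callable as '<namespace>.<method>'. Modules could be reused
        # across projects (Savanna, Trove, Heat, ...).
        for name in dir(module):
            if not name.startswith('_'):
                self._api['%s.%s' % (namespace, name)] = getattr(module, name)

    def dispatch(self, method, **kwargs):
        # The core only routes the request to the right module method.
        return self._api[method](**kwargs)


class ShellModule:
    """Hypothetical reusable API module."""

    def echo(self, text):
        return text


agent = AgentCore()
agent.register_module('shell', ShellModule())
assert agent.dispatch('shell.echo', text='hi') == 'hi'
```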
2013/12/18 Steven Dake sd...@redhat.com
On 12/18/2013 08:34 AM, Tim Simpson wrote:
I've been following the Unified Agent mailing list thread for awhile now
and, as someone who has written a fair amount of code for both of the two
existing Trove agents, thought I should give my opinion about
out a bit before being too concerned.
- Tim
Folks,
The discussion didn't result in a consensus, but it did reveal a great
number of things to be accounted for. I've tried to summarize the top-level
points in the etherpad [1]. It lists only items everyone (as it seems to me)
agrees on, or suggested options where there was no consensus. Let me
Hello Thomas,
I do understand your feelings. The problem is there were already many
points raised both for and against adopting Salt as an agent, and so far no
consensus has been reached on that matter. Maybe someone else is willing to
step up and write a PoC for a Salt-based agent? Then we can agree on
Clint, Kevin,
Thanks for reassuring me :-) I just wanted to make sure that having direct
access from VMs to a single facility is not a dead end in terms of security
and extensibility. And since it is not, I agree it is much simpler (and
hence better) than a hypervisor-dependent design.
Then
Vladik,
Thanks for the suggestion, but a hypervisor-dependent solution is exactly
what scares people off in this thread :-)
Thanks,
Dmitry
2013/12/11 Vladik Romanovsky vladik.romanov...@enovance.com
Maybe it will be useful to use Ovirt guest agent as a base.
Region/Cell/Availability Zone will be left out of
service.
Do you think that is solvable, or maybe I overestimate the threat?
Thanks,
Dmitry
2013/12/9 Dmitry Mescheryakov dmescherya...@mirantis.com
2013/12/9 Kurt Griffiths kurt.griffi...@rackspace.com
This list of features makes me
] https://github.com/rackerlabs/openstack-guest-agents-windows-xenserver
[3] https://wiki.openstack.org/wiki/GuestAgent
2013/12/10 Dmitry Mescheryakov dmescherya...@mirantis.com
Guys,
I see two major trends in the thread:
* use Salt
* write our own solution with an architecture similar to Salt's
2013/12/10 Clint Byrum cl...@fewbar.com
Excerpts from Dmitry Mescheryakov's message of 2013-12-10 08:25:26 -0800:
And one more thing,
Sandy Walsh pointed to the client Rackspace developed and uses - [1], [2].
Its design is somewhat different and can be expressed by the following
What is the exact scenario you're trying to avoid?
It is a DDoS attack on either the transport (AMQP / ZeroMQ provider) or the
server (Salt / our own self-written server). Looking at the design, it doesn't
look like the attack could be somehow contained within the tenant it is
coming from.
In the current
2013/12/9 Clint Byrum cl...@fewbar.com
Excerpts from Steven Dake's message of 2013-12-09 09:41:06 -0800:
On 12/09/2013 09:41 AM, David Boucha wrote:
On Sat, Dec 7, 2013 at 11:09 PM, Monty Taylor mord...@inaugust.com
mailto:mord...@inaugust.com wrote:
On 12/08/2013 07:36
2013/12/9 Kurt Griffiths kurt.griffi...@rackspace.com
This list of features makes me *very* nervous from a security
standpoint. Are we talking about giving an agent an arbitrary shell command
or file to install, and it goes and does that, or are we simply triggering
a preconfigured action
Hello all,
We would like to push the discussion on the unified guest agent further. You
may find the details of our proposal at [1].
Also let me clarify why we started this conversation. Savanna currently
uses SSH to install and configure Hadoop on VMs. We were happy with that
approach until
Arindam,
That is not achievable with the current Savanna. The anti-affinity feature
allows running only one VM per compute node. It cannot evenly distribute
VMs when the number of compute nodes is lower than the desired size of the
Hadoop cluster.
Dmitry
2013/12/5 Arindam Choudhury
Hmm, not sure, I am not an expert in Nova. By the way, the link I gave you
is for Grizzly. If you are running a different release, take a look at that
release's docs, as the configuration might look different there.
Dmitry
2013/12/5 Arindam Choudhury arin...@live.com
Hi,
I introduced this in my nova
No, anti-affinity does not work that way. It allows distributing nodes
running the same process, but you can't separate nodes running different
processes (i.e. master and workers).
Dmitry
2013/12/5 Arindam Choudhury arin...@live.com
Hi,
Is it possible using anti-affinity to reserve a
Hello Jay,
Just in case you've missed it, there is a project Savanna dedicated to
deploying Hadoop clusters on OpenStack:
https://github.com/openstack/savanna
http://savanna.readthedocs.org/en/0.3/
Dmitry
2013/11/29 Jay Lau jay.lau@gmail.com
Hi,
I'm now trying to deploy a hadoop
Hey Jon,
Can you post your code as a work-in-progress review? Maybe we can tell
from the code what is wrong.
Thanks,
Dmitry
2013/11/10 Jon Maron jma...@hortonworks.com
Hi,
I am debugging an issue with the swift integration - I see os_auth_url
with a value of 127.0.0.1, indicating
I've noticed you list 'Remote install and configure a Hadoop cluster
(synergy with Savanna?)' among the possible use cases. Recently there was a
discussion about Savanna on bare metal provisioning through Nova (see
thread [1]). Nobody tested that yet, but it was concluded that it should
work without any
Hello Travis,
We didn't research Savanna on bare metal, though we considered it some
time ago. I know little of bare metal provisioning, so I am rather unsure
what problems you might experience.
My main concern is images: does bare metal provisioning work with qcow2
images? The Vanilla plugin
Mike, if you looked up 'compute' in a dictionary, you would never guess
what OpenStack Compute does :-).
I think 'Data Processing' is a good name which describes in short what
Savanna is going to be. The name 'MapReduce' for the program does not cover
the whole functionality provided by
Linus,
Sorry for taking so long to respond. The cluster machines were removed by
a rollback, which was caused by this exception:
2013-08-04 11:08:33.907 3542 INFO savanna.service.instances [-] Cluster
'cluster-test-01' creation rollback (reason: unexpected type <type
'NoneType'> for addr arg)
Looks like
Arindam,
You may find examples of REST requests here:
https://github.com/stackforge/savanna/tree/master/etc/rest-api-samples
To get the full list of supported configs, send a GET request to the
following URL:
/v1.0/$TENANT/plugins/vanilla/1.1.2
Note that each config has a 'scope' parameter. Configs
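To illustrate, here is a hedged sketch of building that request and grouping configs by scope. The host, tenant id, and the sample response body are made up for the example; only the URL path and the 'scope' field come from the email:

```python
import json

SAVANNA_URL = 'http://savanna.example.com:8386'   # hypothetical endpoint
TENANT_ID = 'tenant-id'                           # hypothetical tenant

# Path from the email: the plugin name and version are part of the URL.
url = '%s/v1.0/%s/plugins/vanilla/1.1.2' % (SAVANNA_URL, TENANT_ID)

# A trimmed, hypothetical response body; a real one lists many configs.
body = json.loads("""
{"plugin": {"name": "vanilla",
            "configs": [{"name": "dfs.replication", "scope": "node"},
                        {"name": "Enable Swift",    "scope": "cluster"}]}}
""")

# The 'scope' parameter says whether a config applies per node or
# cluster-wide, so clients typically group configs by it:
by_scope = {}
for cfg in body['plugin']['configs']:
    by_scope.setdefault(cfg['scope'], []).append(cfg['name'])
```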