+1
On Tue, Jan 22, 2013 at 7:25 PM, Vishvananda Ishaya
wrote:
> +1
>
> We mentioned previously that we would fast-track former core members back
> in.
> I guess we can wait a couple of days to see if anyone objects and then add
> him back.
>
> Vish
> On Jan 22, 2013, at 3:38 PM, Matt Dietz wrote:
Just a thought - this sounds like what systems such as Google protocol
buffers are for, where multiple versions of structured data are
serialized/deserialized. Thanks,
Yun
On Wed, Oct 10, 2012 at 3:27 AM, Day, Phil wrote:
> Hi All,
>
> I guess I may have mis-stated the problem a tad in talking about ve
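The version-tolerance property the protocol-buffers suggestion above is about can be sketched with stdlib json (real protocol buffers get this via numbered, optional fields; the field names and defaults here are illustrative, not Nova's):

```python
import json

# Toy sketch of version-tolerant (de)serialization: a plain JSON dict plus
# defaults stands in for a real schema with optional fields.

V2_DEFAULTS = {"vm_state": "active", "task_state": None}  # fields added in v2

def serialize(record, version=2):
    record = dict(record)
    record["_version"] = version
    return json.dumps(record)

def deserialize(blob):
    record = json.loads(blob)
    # Older writers may lack newer fields; fill them with defaults so
    # readers can treat every record as the latest version.
    for key, default in V2_DEFAULTS.items():
        record.setdefault(key, default)
    return record

old = json.dumps({"_version": 1, "name": "vm-1"})  # written by a v1 node
rec = deserialize(old)
print(rec["vm_state"])  # falls back to the v2 default
```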
+1
Yun
On Monday, July 23, 2012, Johannes Erdfelt wrote:
> On Fri, Jul 20, 2012, Vishvananda Ishaya wrote:
> > When I was going through the list of reviewers to see who would be good
> > for nova-core a few days ago, I left one out. Sean has been doing a lot
> > of reviews lately[1] and did the
Hi,
What you describe seems like a bug. If the VM is running when you take
a snapshot, the VM will be temporarily suspended, snapshotted, then
resumed. But if the VM is off when you take a snapshot, it should
remain off after the snapshot, not be set back to the
ACTIVE state. Would you mind
Jay,
there is a tools/clean_file_locks.py that you might be able to take
advantage of.
Yun
On Wed, Jun 20, 2012 at 3:23 PM, Jay Pipes wrote:
> Turns out my issue was a borked run of Tempest that left a
> nova-ensure_bridge.lock file around. After manually destroying this lock
> file, Tempest is
John,
A strategy we are adopting in Nova (WIP) is to allow instance
termination no matter what. Perhaps a similar strategy could be
adopted for volumes too? Thanks,
Yun
On Wed, Jun 20, 2012 at 12:02 AM, John Griffith
wrote:
> On Tue, Jun 19, 2012 at 7:40 PM, Lars Kellogg-Stedman
> wrote:
>> I at
ACTIVE, VERIFY_RESIZE, STOPPED, SHUTOFF, PAUSED, SUSPENDED, RESCUE, ERROR,
> DELETED
>
> Does that seem right to you, and is there a plan to change that set for
> Folsom?
>
> -David
>
>
>
>
>
> On 6/18/2012 12:51 PM, Yun Mao wrote:
>>
>> Hi Jay et al,
Hi Jay et al,
there is a patch in review here to overhaul the state machine:
https://review.openstack.org/#/c/8254/
All transient states in vm_state will be moved to task_state. Stable
states in task_state (RESIZE_VERIFY) will be moved to vm_state. There
is also a state transition diagram in dot f
ay Pipes wrote:
> On 05/24/2012 10:46 AM, Yun Mao wrote:
>>
>> Sandy,
>>
>> I like the suggestion of graphvis, although I haven't used it for a
>> while. Is there a dir in nova appropriate to put .dot files? I was
>> hoping to get the proposal discussed a f
Python is a scripting language. To get setuid to work, you usually have
to give the setuid permission to /usr/bin/python, which is a big no-no.
One workaround is to have a custom compiled program (e.g. from
C), which takes a python file as input, does all kinds of sanity checks,
and switches to root u
down_terminate handling. Honestly I feel like
> that is compatibility we don't need. It should be up to the provider whether
> a stop_instances counts as a terminate. In my mind they are two different
> things.
>
> Comments welcome on the review.
>
> Vish
>
> On May 31,
shutdown, stop, and power_off are synonyms in this discussion. They all
mean stopping the VM from running while keeping the disk image and network,
so that the VM can be started up again.
There are three ways to do it: 1) using EC2 stop-instance API. 2) use
OS API stop-server. 3) inside the VM, e
Hi,
the first (simple) step to simplify power_state is in gerrit for review.
https://review.openstack.org/#/c/7796/
The document is also migrated to wiki: http://wiki.openstack.org/VMState
Thanks,
Yun
On Fri, May 25, 2012 at 8:42 AM, Vaze, Mandar wrote:
> Sorry for responding to old thread, I
achine (via attributes on nodes/edges)
>
> I'd like to see more discussion on how reconciliation will be handled in the
> event of a conflict.
>
> Cheers!
> -S
>
>
> From: Yun Mao [yun...@gmail.com]
> Sent: Thursday, Ma
According to
http://docs.openstack.org/api/openstack-compute/2/content/Resize_Server-d1e3707.html
"The resize operation converts an existing server to a different
flavor, in essence, scaling the server up or down. The original server
is saved for a period of time to allow rollback if there is a p
It
>>should be possible to issue a RevertResize command for any task_state
>>(assuming that a resize is happening or has recently happened and is not
>>yet confirmed). The code to support this capability doesn't exist yet,
>>but I want to ask you: is it compatibl
Hi,
There are vm_states, task_states, and power_states for each VM. The
use of them is complicated. Some states are confusing, and sometimes
ambiguous. There is also no guideline for extending or adding new states. This
proposal aims to simplify things, explain and define precisely what
they mean, and why
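A minimal sketch of the separation the proposal describes, assuming the invariant that vm_state holds only stable states and task_state holds at most one in-flight transient operation; the state names are illustrative, not Nova's exact set:

```python
# Toy model: stable states live in vm_state, transient ones in task_state.

STABLE_VM_STATES = {"ACTIVE", "STOPPED", "PAUSED", "SUSPENDED", "ERROR", "DELETED"}

class Instance:
    def __init__(self):
        self.vm_state = "ACTIVE"
        self.task_state = None  # None means no operation in flight

    def start_task(self, task):
        # Only one transient operation may be in flight at a time.
        if self.task_state is not None:
            raise RuntimeError("another task is already in progress")
        self.task_state = task

    def finish_task(self, new_vm_state):
        # A task always lands the instance back on a stable state.
        assert new_vm_state in STABLE_VM_STATES
        self.vm_state = new_vm_state
        self.task_state = None

inst = Instance()
inst.start_task("SNAPSHOTTING")   # transient work lives in task_state
inst.finish_task("ACTIVE")        # vm_state only ever holds stable states
print(inst.vm_state, inst.task_state)  # ACTIVE None
```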
If you are using the essex release, have you tried to enable the
libvirt_nonblocking option?
Yun
On Tue, May 15, 2012 at 2:18 AM, Sam Su wrote:
> Hi,
>
> I have a multi-nodes openstack environment, including a control node running
> Glance, nova-api, nova-scheduler, nova-network, rabbitmq, mysql
//blueprints.launchpad.net/nova/+spec/nova-orchestration (mine, for
> Folsom summit)
>
>
>
> Both can be obsoleted/ deleted. We have the applicable specs in wiki.
>
>
>
> There is one more that Yun Mao submitted, which I am not able to locate. Yun
> – could you please update
Hi,
I've uploaded some code as work in progress towards what we discussed
at the Folsom summit, nova orchestration session. Where I'm going is
more or less described in this blueprint.
https://blueprints.launchpad.net/nova/+spec/task-management
The first step is to build a proof of concept based
Hi guys,
I can't get my master branch, freshly off github, to pass the
run_tests.sh script. The errors are as follows. Tried on mac and ubuntu
12.04. Any ideas? Thanks,
Yun
==
ERROR: test_json (nova.tests.test_log.JSONFormatterTe
technique is unlikely to be effective
> since cProfile wouldn't track the forked child worker processes' stacks,
> AFAIK. Still interested to see if the time to execute the 300 API calls is
> dramatically reduced, though.
>
> Looking forward to any results you might have.
&g
Hi Stackers, I spent some time looking at nova-api today.
Setup: everything-on-one-node devstack, essex trunk. I setup 1 user
with 10 tiny VMs.
Client: 3 python threads each doing a loop of "nova list" equivalent
for 100 times. So 300 API calls with concurrency=3.
how to profile: python -m cProfil
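The client side of the measurement above can be sketched with stdlib threading; the request here is a stand-in function (the real test issued the equivalent of "nova list" over HTTP), and the server would be profiled separately, e.g. by launching it under python -m cProfile:

```python
import threading, time

# Sketch of the load described above: 3 threads, each issuing 100 requests,
# so 300 calls total with concurrency = 3.

CALLS_PER_THREAD = 100
results = []
lock = threading.Lock()

def list_servers():
    time.sleep(0.001)  # stand-in for the HTTP round trip to nova-api
    return ["vm-%d" % i for i in range(10)]  # the 10 tiny VMs in the setup

def worker():
    for _ in range(CALLS_PER_THREAD):
        servers = list_servers()
        with lock:
            results.append(len(servers))

threads = [threading.Thread(target=worker) for _ in range(3)]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
print("calls=%d elapsed=%.2fs" % (len(results), time.time() - start))
```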
Hi Ziad,
thanks for the great work. Do we know how the states are persisted in
Spiff? Thanks,
Yun
On Fri, Apr 6, 2012 at 3:53 PM, Ziad Sawalha wrote:
> Here's a link to my analysis so far:
> http://wiki.openstack.org/NovaOrchestration/WorkflowEngines/SpiffWorkflow
>
> It looks good, but I won't
Right now, if you use KVM via libvirt (the default case), on the
compute node, nova-compute runs on the host. If you use Xen via
xenapi, nova-compute runs on Dom-U. (I'll ignore Xen via libvirt since
no one really uses it.)
What's the fundamental design decision to make the distinction?
Presumably
Hi Ziad,
Thanks for taking the effort. Do you know which ones out of the 43
workflow patterns are relevant to us? I'm slightly concerned that
SpiffWorkflow might be overkill and bring unnecessary complexity
into the game. There was a discussion a while ago suggesting that
relatively simple seq
pute nodes
> running, but as the workflow only had rpc casts, I'm not sure that
> really mattered very much.
>
> The profile I gave was for vm creation. But I also ran tests for
> deletion, listing, and showing vms in the OS API.
>
> Networks were static throughout the process
Hi Mark,
what workload and what setup do you have while you are profiling? e.g.
how many compute nodes do you have, how many VMs do you have, are you
creating/destroying/migrating VMs, volumes, networks?
Thanks,
Yun
On Fri, Mar 23, 2012 at 4:26 PM, Mark Washenberger
wrote:
>
>
> "Johannes Erdf
Hi,
As far as I know, OpenStack doesn't use ZooKeeper yet. Is this
something you work on as an extra component? Nova/glance/keystone use
eventlet, which doesn't work well with the default ZooKeeper python
lib. We have some success with this library I wrote:
https://github.com/maoy/python-evzookee
Hi,
I have signed the agreement but I'm not sure how to make my git review
command realize that. Right now I got:
$ git review
fatal: A Contributor Agreement must be completed before uploading:
http://wiki.openstack.org/HowToContribute
fatal: The remote end hung up unexpectedly
Thanks,
12 at 2:04 PM, Johannes Erdfelt wrote:
> On Tue, Mar 13, 2012, Yun Mao wrote:
>> There are two places in the current master branch that use tpool:
>> NWFilterFirewall and XenAPISession. Are they safe?
>
> I've looked at XenAPISession and it appears to be safe. It do
at 4:18 PM, Johannes Erdfelt wrote:
> On Mon, Mar 12, 2012, Yun Mao wrote:
>> My understanding is that if the answer to question3 is yes, then the
>> blocking call should be executed in tpool, although it's more likely
>> to have bugs in that case.
>
> Please be very care
Hi stackers,
A couple of days ago there was a long discussion of eventlet. I am
trying to summarize all external python dependencies for nova, glance
and keystone. I extracted the dependencies from devstack, but I realize
that they are slightly different from tools/pip-requires. So I'm a little
confuse
, so that there is concurrency across
> un-related VMs, but serialisation for each VM.
>
> Phil
>
> -Original Message-
> From: Yun Mao [mailto:yun...@gmail.com]
> Sent: 02 March 2012 20:32
> To: Day, Phil
> Cc: Chris Behrens; Joshua Harlow; openstack
> Sub
First, I agree that having blocking DB calls is no big deal given the
way Nova uses MySQL and reasonably powerful DB server hardware.
However, I'd like to point out that the math below is misleading (the
average time for the nonblocking case is also miscalculated, but that's
not my point). The number t
Hi Phil, I'm a little confused. To what extent does sleep(0) help?
It only gives the greenlet scheduler a chance to switch to another
green thread. If we are having a CPU-bound issue, sleep(0) won't give
us access to any more CPU cores, so the total time to finish should be
the same no matter what
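The point about sleep(0) can be illustrated with a toy round-robin scheduler, where yield plays the role of eventlet's sleep(0): control moves between green threads, but everything still runs on one OS thread, so CPU-bound work takes the same total time. The scheduler here is illustrative, not eventlet's hub:

```python
from collections import deque

def green_thread(name, steps, log):
    for i in range(steps):
        log.append((name, i))  # do one unit of CPU-bound work
        yield                  # sleep(0): give the hub a chance to switch

def run_hub(threads):
    # Round-robin over runnable green threads, one at a time.
    queue = deque(threads)
    while queue:
        t = queue.popleft()
        try:
            next(t)
            queue.append(t)  # still has work; reschedule it
        except StopIteration:
            pass

log = []
run_hub([green_thread("A", 3, log), green_thread("B", 3, log)])
print(log)  # interleaved A/B steps, but executed strictly one at a time
```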
ntlet +
sqlalchemy + mysql pool is buggy so instead we make every DB call a
blocking call? Thanks,
Yun
On Thu, Mar 1, 2012 at 2:45 PM, Yun Mao wrote:
> There have been plenty of eventlet discussions recently but I'll stick my
> question to this thread, although it's pretty much a separate
&
ery
db access call a blocking call? Thanks,
Yun
On Wed, Feb 29, 2012 at 9:18 PM, Johannes Erdfelt wrote:
> On Wed, Feb 29, 2012, Yun Mao wrote:
>> Thanks for the explanation. Let me see if I understand this.
>>
>> 1. Eventlet will never have this problem if there is only 1
) is never used, and we do not run a eventlet
hub at all, we should never see this problem?
Thanks,
Yun
On Wed, Feb 29, 2012 at 5:24 PM, Johannes Erdfelt wrote:
> On Wed, Feb 29, 2012, Yun Mao wrote:
>> we sometimes notice this error message which prevent us from starting
>> nova
recreating the database each time
>
> Vish
>
> On Feb 29, 2012, at 12:42 PM, Yun Mao wrote:
>
>> Greetings,
>>
>> What's the most convenient way to run a subset of the existing tests?
>> By default run_tests.sh tests everything. For example, I'd like to
Hi,
we sometimes notice this error message, which occasionally prevents us
from starting nova services. We are using a somewhat modified diablo
stable release on Ubuntu 11.10. It may very well be a problem from
our patches, but I'm wondering if you guys have any insight. In what
condition does this
Greetings,
What's the most convenient way to run a subset of the existing tests?
By default run_tests.sh tests everything. For example, I'd like to run
everything in test_scheduler plus test_notify.py, what's the best way
to do that? Thanks,
Yun
Tue, Feb 21, 2012 at 11:23 AM, Yun Mao wrote:
>>
>> What's the recommended way to play with stable/diablo with devstack?
>
>
> Ideally:
>
>> git checkout stable/diablo
>> ./stack.sh
>
> Which you are probably doing.
>
>>
>> We've bee
What's the recommended way to play with stable/diablo with devstack?
We've been using the stable/diablo branch of devstack, but stack.sh in
that branch is old and has some annoying small issues. If I use the
master branch of devstack but replace stackrc with the stable/diablo
branch content, would
agreed..
-1 on shard, +1 on cluster
Yun
On Mon, Feb 13, 2012 at 7:59 PM, Martin Paulo wrote:
> Please not 'shards'
> Sharding as a concept is so intertwined with databases IMHO that it
> will serve to confuse even more. Why not 'cluster'?
>
> Martin
>
> On 13 February 2012 09:50, Chris Behrens
bbit or Qpid is a good fit.
>>
>> It would be interesting exercise to allow the ZeroMQ driver to defer back to
>> the Kombu or Qpid driver for those messages which must remain centralized.
>>
>> --
>> Eric Windisch
>>
>> On Wednesday, January 25, 2012 at
There is a hack on top of devstack for you to restart those services
easily across reboot.
https://blueprints.launchpad.net/devstack/+spec/upstart
Yun
On Fri, Jan 27, 2012 at 1:18 AM, nandakumar raghavan
wrote:
> Hi,
>
> I have similar query. I had installed open stack using devstack on a fresh
if you need to restart your service frequently without destroying your
existing data, you might want to take a look at the upstart patch for
devstack.
https://blueprints.launchpad.net/devstack/+spec/upstart
Yun
On Thu, Jan 26, 2012 at 2:30 PM, Joe Smithian wrote:
> localadmin@k:~$ sudo screen -
Hi I'm curious and unfamiliar with the subject. What's the benefit of
0MQ vs Kombu? Thanks,
Yun
On Tue, Jan 24, 2012 at 7:08 PM, Eric Windisch wrote:
> Per today's meeting, I am proposing the ZeroMQ RPC driver for a
> feature-freeze exception.
>
> I am making good progress on this blueprint, it
I've always thought that whatever is committed to the master branch has
already passed the unit tests by default. But I saw some failed tests
when I checked out the master branch. Is it because I have a bad setting
on my Ubuntu 11.10, or is it not strictly enforced that everything must
pass run_tests.sh b
Greetings,
I have registered a blueprint for HA task management
https://blueprints.launchpad.net/nova/+spec/task-management
Tasks in Nova such as launching instances are complicated and
error-prone. Currently there is no systematic, reusable way to keep
track of the distributed task executions.
Hi Sandy,
I'm wondering if it is possible to change the scheduler's rpc cast to
rpc call. This way the exceptions should be magically propagated back
to the scheduler, right? Naturally the scheduler can find another node
to retry or decide to give up and report failure. If we need to
provision man
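The cast-vs-call distinction in the question above can be sketched with a toy rpc stand-in (not Nova's actual rpc module): a cast is fire-and-forget, so remote errors are invisible to the scheduler, while a call returns a result and lets the remote exception propagate back:

```python
# Toy rpc stand-in; topics, methods, and handlers are illustrative.

class ToyRpc:
    def __init__(self, handlers):
        self.handlers = handlers

    def cast(self, topic, method, **kwargs):
        # Fire-and-forget: the caller never sees remote errors.
        try:
            self.handlers[topic](method, **kwargs)
        except Exception:
            pass

    def call(self, topic, method, **kwargs):
        # Waits for a result; remote exceptions propagate to the caller.
        return self.handlers[topic](method, **kwargs)

def compute_handler(method, **kwargs):
    if method == "run_instance" and kwargs.get("host") == "bad-host":
        raise RuntimeError("no capacity on bad-host")
    return "started on %s" % kwargs.get("host")

rpc = ToyRpc({"compute": compute_handler})
rpc.cast("compute", "run_instance", host="bad-host")      # failure is silent
try:
    rpc.call("compute", "run_instance", host="bad-host")  # failure is visible
except RuntimeError as e:
    print("scheduler saw: %s" % e)  # can now retry on another node
```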
devstack makes setting up a dev environment such a breeze that I'd
rather not go back to packages and manual installation if
possible, for a not-so-serious deployment environment.
So I wrote the script upstart.sh and a few templates. The basic idea
is that once you like what stack.sh has don
John,
there is the OpenStack Object Store, a.k.a. Swift, and there is also an
object store inside nova called nova-objectstore. The latter is deprecated.
See here:
https://answers.launchpad.net/nova/+question/156113
Yun
On Mon, Nov 21, 2011 at 8:19 AM, John Dickinson wrote:
> I suspect there is a comm
min/content/users-and-projects.html
After switching to admin user, it works fine.
Anyway, this keystone vs old authentication is really confusing..
On Thu, Oct 27, 2011 at 10:43 PM, Yun Mao wrote:
> I think I'm close to figuring this out. You can take a look at the
> devstac
I think I'm close to figuring this out. You can take a look at the
devstack scripts. In particular,
https://github.com/cloudbuilders/devstack/blob/master/files/keystone_data.sh
Then you can source openrc to get the EC2_* environment variables.
However, it only works for euca-describe-instances,
e
Is there a reason that libvirt_use_virtio_for_bridges is not set to
True by default? Without virtio the network performance in kvm is
ridiculously slow.. Thanks,
Yun
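A sketch of how that flag could be enabled in nova.conf; the exact form depends on the release (diablo-era nova read gflags-style lines, newer releases use the ini style), so verify against your version:

```ini
# diablo-era nova.conf (gflags style):
--libvirt_use_virtio_for_bridges=true

# newer releases (ini style):
[DEFAULT]
libvirt_use_virtio_for_bridges=true
```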
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launch
Hi stackers,
is there a document somewhere that talks about the deployment strategy
for high availability? There seem to be a few single points of
failure in the nova architecture -- the controller, which has the API
and the scheduler, the rabbitmq server, and the mysql server.
Google helped me