been documented here in the official ceilometer measurements page:
http://docs.openstack.org/developer/ceilometer/measurements.html
Thanks for your enlightenment!
On Thu, Jul 18, 2013 at 8:43 PM, Eoghan Glynn egl...@redhat.com wrote:
Hi Jobin,
The memory utilization metering
Hey Jobin,
Thanks for your perceptive question.
The reason is that the conduits for gathering CPU metering and memory
metering are quite different in ceilometer currently:
* cpu/cpu_util are derived by polling the libvirt daemon
* memory is derived from the compute.instance.exists
enabled instance
usage auditing in my nova.conf http://pastebin.ubuntu.com/5887592/. Is
there any way I could get these meters (memory and disk utilization) for VMs
provisioned using OpenStack?
Thanks for your efforts.
On Thu, Jul 18, 2013 at 4:57 PM, Eoghan Glynn egl...@redhat.com wrote
+1
Thanks,
Eoghan
- Original Message -
G'day,
Would anyone be interested in morning runs (5K) during the Summit in PDX next
week?
If you are, let's meet in the lobby of the Portland Hilton on Sixth Avenue at
0600 on Monday and 0700 from Tuesday to Friday.
Some of the
Here's a first pass at a proposal for unifying StackTach/Ceilometer
and other instrumentation/metering/monitoring efforts.
It's v1, so bend, spindle, mutilate as needed ... but send feedback!
http://wiki.openstack.org/UnifiedInstrumentationMetering
Thanks for putting this together Sandy,
if you have:
Time | Value
0 | 10
1 | 30
2 | 50
3 | 80
4 | 100
If your delta-pollster is down at 1 and 2, you restart at 3,
therefore at 4 you'll send 20 as usage (100 minus 80).
So you miss the delta between 10 (time 0) and 80 (time 3)
(therefore 70 for free!). If
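To make the arithmetic concrete, here is a minimal simulation of the delta-pollster miss described above (a hypothetical sketch, not ceilometer code; the sample values are taken from the table):

```python
# A delta pollster reports (current - last_seen), but loses its
# last_seen state when it is down, so growth that happened during
# the outage is never reported.

samples = [10, 30, 50, 80, 100]   # cumulative counter at t = 0..4
down = {1, 2}                     # pollster is down at these ticks

def delta_poll(samples, down):
    reported = []
    last = None
    for t, value in enumerate(samples):
        if t in down:
            last = None           # restart loses the stored state
            continue
        if last is not None:
            reported.append(value - last)
        last = value
    return reported

# With the table above, only 20 (100 - 80) is reported at t=4;
# the 70 units accumulated between t=0 (10) and t=3 (80) are lost.
```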
Hi Yawei Wu,
The root of the confusion is the fact that the cpu meter is reporting
the cumulative cpu_time stat from libvirt. This libvirt counter is
reset when the associated qemu process is restarted (an artifact
of how cpuacct works).
So when you stop/start or suspend/resume, a fresh qemu process
Not at all. It means the CPU time consumed is reset to 0, but
that's not an issue in itself; the API should be capable of
dealing with that if you ask for the total usage.
Would that total usage be much more apparent if we started
metering the delta between CPU times on subsequent polling
I don't think (max - min) would suffice to give an accurate
measure of the actual CPU time used, as the counter may have
reset multiple times in the course of the requested duration.
It is, because /max in the API should be aware of the fact a
reset can occur and computes accordingly.
If your pollster is not running to compute delta and you have
no state stored, you'll miss a part of what has been used.
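As a sketch of one reset-aware way to total cumulative samples (my reading of the Sigma-local-maxima idea being discussed, so the details are an assumption): whenever a sample drops below its predecessor, assume a counter reset and bank the previous local maximum.

```python
def total_usage(samples):
    """Total usage from cumulative samples that may reset to zero.

    Each time a sample drops below its predecessor we assume a
    counter reset and bank the previous local maximum.  A reset that
    happens while no samples are taken remains invisible, which is
    exactly the 'miss' discussed above.
    """
    total = 0
    prev = None
    for value in samples:
        if prev is not None and value < prev:
            total += prev          # counter reset: bank the local maximum
        prev = value
    return total + (prev or 0)

# e.g. [10, 30, 5, 20]: bank 30 at the reset, plus the current 20, gives 50
```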
Would we also have some 'misses' with the cumulative approach
when the ceilometer agent was down?
No, unless the counter resets several times while your agent is down.
But delta has the same issue.
If I understood the (\Sigma local maxima)-first idea correctly,
the usage up to
Hi All,
I would like to open a discussion on a topic: users should have an
option to reset a tenant's quotas (to the defaults).
Hi Vijaya,
I don't think a new nova command is needed for this use-case,
just add a simple custom script:
nova quota-update `nova quota-defaults $1 | tail
Isn't that just changing one custom limit with another ?
A true reset to the defaults would see the user stay in step with any
changes to the default values.
Do you mean configured changes to the defaults?
AFAIK 'nova quota-defaults' returns the current set of defaults,
which seems to be
My point was that if a user is currently configured to have a quota
of 50 VMs, and the default is currently configured to be 20 VMs then
there is a difference between configuring the user to have a quota
of 20 and configuring a user to have the default quota. The
first is just a
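To illustrate the distinction (hypothetical structures, not the actual nova schema): an absent override can mean "follow whatever the default currently is", while a stored number pins the quota.

```python
# Hypothetical structures, purely for illustration.
defaults = {'instances': 20}

# Per-tenant overrides; an absent entry (or None) means
# "follow whatever the default currently is".
tenant_quotas = {
    'tenant-a': {'instances': 50},   # pinned override
    'tenant-b': {},                  # tracks the default
}

def effective_quota(tenant, resource):
    override = tenant_quotas.get(tenant, {}).get(resource)
    return override if override is not None else defaults[resource]
```

Copying the current default of 20 into tenant-b's row would look identical today, but if the default later changed to 40 that tenant would be left behind, which is the difference being argued above.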
Thanks Yunhong for pointing this issue out and submitting a patch
in quick order.
Your reasoning for switching from 'if offset' to 'if offset is None',
in order to avoid mis-handling the offset == 0 case, makes perfect sense.
You'll just have to propose the change first to openstack-common,
from where
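For anyone who hasn't hit this pitfall: in Python, 'if offset:' treats a legitimate offset of 0 the same as no offset at all, which the 'is None' form avoids. A tiny standalone illustration (a hypothetical default, not the openstack-common code itself):

```python
# Hypothetical default, purely for illustration.
DEFAULT_OFFSET = 10

def effective_offset_buggy(offset=None):
    # 'if offset' is False for both None AND 0, so a caller who
    # explicitly passes offset=0 silently gets the default instead.
    return offset if offset else DEFAULT_OFFSET

def effective_offset_fixed(offset=None):
    # 'is not None' only treats a truly absent offset as unset.
    return offset if offset is not None else DEFAULT_OFFSET
```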
I think its great that we're having this discussion.
+1, excellent discussion in terms of both tone and content.
In the hope that it's informative, I'd like to give some info on issues
we're looking at when moving our Glance deployment to Folsom. A lot of
this is in common with Ryan, but
-d89527bc87b4
cf8971efd8934844b559d26e238506cc 141d12e2431f47a5bf77f90da4800960]
http://ip:8774/v2/141d12e2431f47a5bf77f90da4800960/servers/2314f007-d446-477a-bcf0-6a5f77d4d25b/action
returned with HTTP 500
2012/9/3 Eoghan Glynn egl...@redhat.com
While trying to create a VM instance
While trying to create a VM instance on openstack, the boot command
(nova boot) returns the following error:
---
ERROR: The server has either erred or is incapable of performing the
requested operation. (HTTP 500)
---
everything seems to be working (nova services are starting).
I am
I have installed Nova Volume in the OpenStack Essex controller. But
when I restart the nova-volume service, I get the following error
[...]
2012-09-03 12:26:22 TRACE nova OperationalError: (OperationalError)
(1054, Unknown column 'volumes.instance_id' in 'field list')
Hi Trinath,
One
Hi Jorge,
What version are you testing against?
I recently got a series of patches onto master that addressed a bunch
of issues in the EC2 CreateImage support, so that it now works smoothly
with volume-backed nova instances:
https://review.openstack.org/9732
Can you provide relevant glance-api and -registry log excerpts?
Also probably best to track this as a glance question[1] or bug[2].
Cheers,
Eoghan
[1] https://answers.launchpad.net/glance
[2] https://bugs.launchpad.net/glance
- Original Message -
Hello
I'm getting this error
We're running a system with a really wide variety of node types. This
variety (nodes with 24GB, 48GB, GPU nodes, and 1TB mem nodes) causes
some real trouble with quotas. Basically, for any tenant that is going
to use the large memory nodes (even in smaller slices), we need to set
quotas
, 2012 3:48 PM, Eoghan Glynn egl...@redhat.com wrote:
The harder part is that we need to be able to specify
independent/orthogonal quota constraints on different flavors. It
would be really useful to be able to say basically, you can have
2TB
of memory from this flavor, and 4TB
Would that address your requirement?
I think so. If these acted as a hard limit in conjunction with
existing quota constraints, I think it would do the trick.
I've raised this as a nova blueprint, so let's see if it gets any traction:
Note that I do distinguish between a 'real' async op (where you
really return little more than a 202) and one that returns a
skeleton of the resource being created - like instance.create() does
now.
So the latter approach at least provides a way to poll on the resource
status, so as to
Folks,
A question for the CI side-of-the-house ...
What else is running on the Jenkins slaves, concurrently with the gating CI
tests?
The background is the intermittent glance service launch failure - the recently
added strace-on-failure logic reveals the issue to be an EADDRINUSE when the
Thanks for the quick response ...
Very basic things, not much other than the Jenkins Slave service and
SSH. Nothing that should cause conflicts that you are seeing. We
also intentionally only run one test run per slave at a time.
Interesting, seems the alternate explanation of a
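For reference, the EADDRINUSE failure mode in question is easy to reproduce in isolation (a generic sketch, nothing to do with the actual Jenkins slaves):

```python
import errno
import socket

def bind_twice():
    """Bind two TCP sockets to the same address; without SO_REUSEADDR
    the second bind fails with EADDRINUSE."""
    first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    first.bind(('127.0.0.1', 0))          # kernel picks a free port
    port = first.getsockname()[1]
    second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        second.bind(('127.0.0.1', port))  # same port, still held by 'first'
    except OSError as e:
        return e.errno == errno.EADDRINUSE
    finally:
        first.close()
        second.close()
    return False
```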
Hi Folks,
I've been looking into the (currently broken) EC2 CreateImage API support
and just wanted to get a sanity check on the following line of reasoning:
- EC2 CreateImage should *only* apply to booted-from-volume nova servers,
for fidelity with the EC2 limitation to EBS-based instances
Hi Folks,
I wanted to use strace(1) to get to the bottom of the glance service
launch failures that have been plaguing Smokestack and Jenkins in the
past few weeks:
https://review.openstack.org/8722
However I just realized that Ubuntu from Maverick onward no longer allows
ptrace to attach
no collisions mapping from uuid ->
ec2_id deterministically, and I don't see a clear path forward when
we do get a collision.
Vish
On May 8, 2012, at 12:24 AM, Michael Still wrote:
On 04/05/12 20:31, Eoghan Glynn wrote:
Sorry for the slow reply, I've been trapped in meetings
[snip]
So the way things currently stand, the EC2 image ID isn't really
capable of
migration.
I
Current warts:
...
- maintaining amazon ec2 ids across regions requires twiddling the
nova database where this mapping is stored
Hi Mikal,
We discussed that nova s3_images table earlier in the week on IRC.
Now at the time, I wasn't fully clear on the mechanics of the glance
UUID -
Hi Andrei,
The underlying issue is starvation of the storage space used to store
image content (as opposed to the image metadata, which takes up very
little space).
The reason the killed image isn't showing up in the output of glance index
is that non-viable images are sanitized from the list.
- Original Message -
Kevin, should we start copying openstack-common tests to client
projects? Or just make sure to not count openstack-common code in
the
code coverage numbers for client projects?
That's a tough one. If we copy in the tests, they end up being somewhat
https://review.openstack.org/#/c/6847/
Nice!
* Migrations added during Folsom release cycle could be compacted
during E release cycle. TBD if/when we do the next compaction.
An alternative idea would be to do the compaction *prior* to the
Folsom release instead of after, so that the
There's something like 7 pages of open reviews on gerrit. The project
has a good kind of problem with so many people trying to contribute.
The question now is how to scale the development processes to handle
that growth.
It was nice to see a number of discussions at the summit in this
We've just upgraded Gerrit to version 2.3. There are a lot of changes
behind the scenes that we've been looking forward to (like being able
to store data in innodb rather than myisam tables for extra data
longevity). And there are a few visible changes that may be of
interest to
I try to assign quotas to individual users, to control how many
instances each user can run concurrently. But I don't see a doc
describing how to do that. I use the diablo release.
Any help or doc pointer will be greatly appreciated.
Quotas apply at the nova project/tenant granularity, as
Folks,
From previous posts on the ML, it seems there are a couple of
efforts in train to add distributed content deduping to Swift.
My question is whether either or both these approaches involve
active client participation in enabling duplicate chunk
detection?
One could see a spectrum
Thanks for the response Caitlin,
The versioning/dedup ring we are working on at Nexenta will support
both 1 and 3. I'll be presenting at the Summit on this.
Great, I'll look forward to your presentation.
The ultimate goal of distributed dedup is scenario #1. Only the
client software can
APPENDIX B: Outstanding issues
...
2) How do we fit the existing 'copy_from' functionality in?
Is the v2 API retaining some equivalent of the existing
x-image-meta-location header, to allow an externally-stored
image be registered with glance?
e.g. via an image field specified on create or
Eoghan Glynn wrote:
- how is the mapping between project and quota-class established?
I was expecting a project_quota_class_association table or
some-such in the nova DB. Is this association maintained by
keystone instead?
- is the quota_class attribute currently
COMMUNITY STATISTICS
• Activity on the main branch of OpenStack repositories, lines of
code added and removed per developer during week 7 of 2012 (from
Mon Mar 19 00:00:00 UTC 2012 to Mon March 26 00:00:00 UTC 2012)
Hi Stefano,
Assuming you're using git-log to generate
I wanted to let everyone know about a quota classes blueprint I've
submitted; you can find the details here:
* https://blueprints.launchpad.net/nova/+spec/quota-classes
* http://wiki.openstack.org/QuotaClass
I've already implemented this blueprint and pushed to Gerrit, but have it
Eoghan Glynn wrote:
A couple of quick questions on how this quota class mechanism is
intended to work ...
- how is the mapping between project and quota-class established?
I was expecting a project_quota_class_association table or
some-such in the nova DB. Is this association
Presumably we'd also need some additional logic in the quota-classes
API extension to allow tenant-to-quota-class mappings to be established
and torn down?
Well, yeah :)
Cool, captured in https://bugs.launchpad.net/nova/+bug/969537
I'll propose a patch early next week.
Cheers,
Eoghan
Hi Juerg,
That's because 'owner' is not supported as an explicit parameter to
'glance add'. So as a result the CLI treats it as a generic image
property, and passes this to the API service via the header:
x-image-meta-property-owner: 2
The 'x-image-meta-property-' prefix is used to
Done.
https://bugs.launchpad.net/glance/+bug/962998
Thanks, fixed here: https://review.openstack.org/5727
Cheers,
Eoghan
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe :
Folks,
One thing that's been on my wishlist since hitting a bunch of
quota exceeded issues when first running Tempest and also on
the Fedora17 openstack test day.
It's the ability to easily see the remaining headroom for each
per-project quota, e.g.
$ nova-manage quota --headroom --project=admin
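As a sketch of the computation being wished for (hypothetical dicts, not actual nova-manage plumbing): headroom is just the configured limit minus current usage, per resource.

```python
def quota_headroom(limits, usage):
    """Remaining headroom per resource.

    Follows the convention that a negative limit means unlimited;
    headroom never goes below zero even if a project is over quota.
    """
    headroom = {}
    for resource, limit in limits.items():
        if limit < 0:
            headroom[resource] = float('inf')   # unlimited
        else:
            headroom[resource] = max(limit - usage.get(resource, 0), 0)
    return headroom

# e.g. limits {'instances': 10, 'cores': 20} with usage {'instances': 7}
# leaves headroom of 3 instances and all 20 cores
```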
Thanks Jay for the feedback and background info, comments inline ...
Eoghan Glynn wrote:
So the question is whether there's already a means to achieve this
in one fell swoop?
Jay Pipes wrote:
Well, Horizon, in the launch instance modal dialog does show similar
information. See
Kevin Mitchell wrote:
I recently got the quota classes stuff merged into master (after the RC
branch for Essex was cut, of course). After I had completed that work,
I started thinking about quotas in general, and I think there's a better
way to organize how we do quotas in the first place.
Florian,
The key point in the split between glance-api.conf, glance-registry.conf,
glance-cache.conf etc. is the glance application intended to consume that
config.
This follows directly from the naming:
bin/glance-api by default consumes glance-api.conf
bin/glance-registry by default
Yes, it does make perfect sense. Kind thanks for the explanation.
However, what is still unclear is which config items that pertain to
other apps must still be present (i.e. duplicated) in glance-api.conf
(e.g. image_cache_driver, etc.)
This is probably something we should document more
Does /etc/glance/policy.json exist?
Is it readable?
- Original Message -
From: .。o 0 O泡泡 501640...@qq.com
To: openstack openstack@lists.launchpad.net
Sent: Wednesday, 7 March, 2012 2:06:50 PM
Subject: [Openstack] can not start glance-api in glance E4
hi all:
In glance E4
1. Add catalog_name=compute to tempest.conf
2. Change name to type in rest_client.py
Yep, easiest to just apply this patch:
git fetch https://review.openstack.org/p/openstack/tempest refs/changes/59/4259/1
git format-patch -1 --stdout FETCH_HEAD
Cheers,
Eoghan
This is great news Dean, thank you!
I'll try using your patch to get tempest running on F16,
and I'll get back to you with any issues I encounter.
Cheers,
Eoghan
- Original Message -
From: Dean Troyer dtro...@gmail.com
To: openstack@lists.launchpad.net
Sent: Wednesday, 22 February,
Deltacloud already has support for OpenStack:
http://deltacloud.apache.org/drivers.html
Yep, though the existing support is a thin extension over the
original deltacloud Rackspace driver, so is limited to the 1.0
version of the openstack compute API.
However work is under way on a new
I'm not good with WSGI. I have a foolish question to ask.
Which part of the source code handles the receiving of the uploaded
data? As far as I know, the uploaded data is in body_file from webob. I
traced the webob code but it made my head spin.
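To answer at the plain-WSGI level (webob's body_file is essentially a wrapper over this stream): the server hands the upload to the application as environ['wsgi.input'], which is read chunk by chunk. A minimal sketch, not the actual glance code:

```python
CHUNK_SIZE = 8192

def read_upload(environ):
    """Consume a request body from the WSGI input stream in chunks,
    the way an image upload is streamed rather than read in one go."""
    length = int(environ.get('CONTENT_LENGTH') or 0)
    stream = environ['wsgi.input']
    received = []
    remaining = length
    while remaining > 0:
        chunk = stream.read(min(CHUNK_SIZE, remaining))
        if not chunk:
            break            # client hung up early
        received.append(chunk)
        remaining -= len(chunk)
    return b''.join(received)
```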
Folks,
I'd like to request an Essex feature freeze exception for this blueprint:
https://blueprints.launchpad.net/glance/+spec/retrieve-image-from
as implemented by the following patch:
https://review.openstack.org/#change,4096
The blueprint was raised in response to a late-breaking
Yep, that's pretty much exactly the implementation we were hoping
might exist. If it can be built that would be phenomenal. Any
thoughts on whether that might be possible before E4 closes, or
will
it have to wait until Folsom?
I'll propose a blueprint and see if I can get it
Folks,
Just a quick heads-up that this review[1] if accepted will result in
glance taking a soft dependency on pysendfile.
The import is conditional, so where pysendfile is unavailable on a
particular distro, the 'glance add' command will simply fallback to
the pre-existing chunk-at-a-time
BTW, does anybody know who is taking care of it for Debian?
Apparently Janoš Guljaš ja...@resenje.org was looking at packaging
it for Debian.
But apparently the original maintainer of the python-sendfile package
is uncontactable, so a team upload (Debian Python Modules Team) would
be needed.
My feeling is that it should be do-able in that timeframe.
Cheers,
Eoghan
From: Eoghan Glynn [mailto:egl...@redhat.com]
A-ha, I see what you mean.
AFAIK that mode of upload, separate from the image POST, is not
currently supported, but it would be quite straightforward to add say
Hey Jay,
I'll take this one (assuming no-one else was thinking of grabbing it?).
Cheers,
Eoghan
- Original Message -
From: Jay Pipes jaypi...@gmail.com
To: openstack@lists.launchpad.net
Sent: Tuesday, 7 February, 2012 2:37:17 AM
Subject: [Openstack] [GLANCE] Easy blueprint for a
Hi Reynolds,
I've been looking into your interesting idea around sendfile()[1]
usage, here are a few initial thoughts:
- There's potentially even more speed-up to be harnessed in serving
out images from the filesystem store via sendfile(), than from using
it client-side on the initial
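For illustration, the userspace side of such a transfer looks like this in Python (os.sendfile wraps sendfile(2); a Linux-only sketch, not the proposed glance patch):

```python
import os

def sendfile_all(out_fd, path, chunk=65536):
    """Push a file to a socket descriptor via sendfile(2), so the
    data is copied by the kernel without passing through userspace
    buffers."""
    with open(path, 'rb') as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            sent = os.sendfile(out_fd, f.fileno(), offset,
                               min(chunk, size - offset))
            if sent == 0:
                break
            offset += sent
    return offset
```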
Hi Folks,
The describedby links in nova/api/openstack/compute/versions.py
contain broken hrefs to a v1.1 WADL document[1] and PDF[2].
Looks like a copy'n'paste from the corresponding 1.0 versions of the
WADL[3] and PDF[4], both of which are present and correct.
So I was wondering whether
So I was wondering whether there was an intention to publish a v1.1 WADL ...
Follow up question: would it be nasty to serve out that WADL directly from
github?
e.g.
https://github.com/openstack/compute-api/blob/essex-final-tag/openstack-compute-api-1.1/src/os-compute-1.1.wadl
:
https://review.openstack.org/3421
but if you publish the WADL at a well-known path under docs.openstack.org,
that would be much better.
Cheers,
Eoghan
- Original Message -
From: Anne Gentle a...@openstack.org
To: Eoghan Glynn egl...@redhat.com
Cc: openstack@lists.launchpad.net
Sent