Hi all,
After playing around with the OpenStack Management Console, Horizon, I realized
that the image upload functionality is not provided there.
Is there any special reason for that? Is it because there are no REST
services available at the moment? Or is it felt that providing image
upload via
Hi everyone,
When I test live migration using NFS, this is my setting, according to
http://docs.openstack.org/essex/openstack-compute/admin/content/configuring-live-migrations.html
1. Add this line to /etc/exports: /var/lib/nova/instances *(rw,sync,fsid=0,no_root_squash)
2. mount -t nfs
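For reference, the two steps written out in full. The server address 172.18.32.7 is taken from later messages in this thread, and with fsid=0 the export root is mounted as server:/ — treat this as a sketch, not a verified configuration:

```
# /etc/exports on the NFS server
/var/lib/nova/instances *(rw,sync,fsid=0,no_root_squash)

# on each compute node (fsid=0 makes the export the NFSv4 root,
# so it is mounted as server:/)
mount -t nfs 172.18.32.7:/ /var/lib/nova/instances
```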
Hello guys,
I've just installed kernel 3.4 from the Ubuntu kernel PPA archive, and after this
upgrade VMs aren't able to get a DHCP address, but with tcpdump I see the
request and offer on the network.
Has anyone else experienced this? I've tried 3.3 as well, same story. Rolling
back to 3.2 and
Hi Daniel,
Thanks for following this up!
On 8 August 2012 19:53, Daniel P. Berrange berra...@redhat.com wrote:
not tune this downtime setting, I don't see how you'd see 4 mins
downtime unless it was not truly live migration, or there was
Yes, quite right. It turns out Nova is not
Daniel,
Thanks for providing this insight, most useful. I'm interpreting this
as: block migration can be used in non-critical applications, mileage
will vary, thorough testing in the particular environment is
recommended. An alternative implementation will come, but the higher
level feature
That sounds like a kernel, KVM, or dnsmasq issue, rather than OpenStack
itself. I think Quantal is on the 3.5 kernel, and I assume OpenStack is
working there...
Maybe give its dnsmasq package a go first, as it's probably the easiest
thing to check...
Ubuntu also has some 3.5 packages for Precise,
j...@redhat.com wrote:
From: Dan Wendlandt d...@nicira.com
If someone (Bob?) has the immediate cycles to make rootwrap work in Folsom
with low to medium
risk of disruption, I'd be open to exploring that, even if it meant
inconsistent usage in quantum
vs. nova/cinder.
Hi Dan. I've
Il giorno 09/ago/2012, alle ore 10:44, Alessandro Tagliapietra
tagliapietra.alessan...@gmail.com ha scritto:
Il giorno 09/ago/2012, alle ore 10:19, Kiall Mac Innes ki...@managedit.ie
ha scritto:
That sounds like a kernel, kvm or dnsmasq issue, rather than OpenStack
itself. I think
Hi all,
I'm trying to invoke the OpenStack Glance REST APIs using a Java client, to
get image details, etc. (Ultimately I need to upload an image.)
When I invoke an http://Glance_URL:PORT/images/detail GET request in Java
code, I'm getting HTTP 300 as the response code:
< 300
< Date: Thu, 09 Aug
Hello,
I'm no expert on the subject, but I think you should just use mount -t nfs
172.18.32.7:/ /var/lib/nova/instances instead of mount -t nfs 172.18.32.7:
/var/lib/nova/instances /var/lib/nova/instances. Also, from the stack trace
it seems your libvirtd is not running.
On Thu, Aug 9, 2012 at
On 08/09/2012 01:11 PM, tacy lee wrote:
try adding metadata_host to nova.conf
The thing is, the iptables rules have 169.254.169.254 NATed correctly. So
the address is correct; it's just that the VMs cannot access it.
--
simonsmicrophone.com
On 08/09/2012 12:59 PM, Scott Moser wrote:
On Aug 8, 2012, at 8:20 PM, Simon Walter si...@gikaku.com wrote:
On 08/09/2012 06:45 AM, Jay Pipes
I guess I'll have to build a VM from scratch, as I was relying on the ssh key
to be able to ssh into the VM, which apparently is supplied by the
On 08/09/2012 07:55 AM, 王鹏 wrote:
Hi everyone,
When I test live migration using NFS, this is my setting, according to
http://docs.openstack.org/essex/openstack-compute/admin/content/configuring-live-migrations.html
1. Add this line /var/lib/nova/instances *(rw,sync,fsid=0,no_root_squash) in
Hi
Probably; we had the same problem before.
Could you check the libvirt log, make sure your host domain resolves, and
check the vncserver_listen part of nova.conf?
(vncserver_listen=0.0.0.0)
Thanks!
Suzuki
On Thu, Aug 9, 2012 at 6:27 PM, Leander Bessa Beernaert
leande...@gmail.com wrote:
Hello,
I'm no expert on the
From: Thierry Carrez thie...@openstack.org
Date: Thu, 09 Aug 2012 10:34:17 +0200
j...@redhat.com wrote:
From: Dan Wendlandt d...@nicira.com
If someone (Bob?) has the immediate cycles to make rootwrap work in
Folsom with low to medium
risk of disruption, I'd be
Hi Adam,
The blueprint as revised to address Joe's comments looks good to me - nice
work. I especially like how the middleware is intended to cache the revocation
list for a configurable amount of time - it mirrors how token caching already
works.
Cheers,
Maru
On 2012-08-07, at 10:09 AM,
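The caching behaviour described above can be sketched as a simple time-based cache. The names here (fetch_revocation_list, cache_timeout) are illustrative, not the actual keystone middleware API:

```python
import time

class RevocationCache:
    """Cache a revocation list for a configurable amount of time,
    refetching only when the cached copy is older than the timeout."""

    def __init__(self, fetch_revocation_list, cache_timeout=1.0):
        self._fetch = fetch_revocation_list   # callable returning the list
        self._timeout = cache_timeout         # seconds before refetch
        self._cached = None
        self._fetched_at = None

    def get(self):
        now = time.time()
        if self._cached is None or now - self._fetched_at >= self._timeout:
            self._cached = self._fetch()
            self._fetched_at = now
        return self._cached
```

Within the timeout window, repeated calls to get() return the cached list without hitting the server again — the same trade-off the blueprint makes for token caching.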
Hi, list,
I'm setting up OpenStack on Ubuntu 12.04 LTS with a FlatDHCP mode network
configuration. Everything is OK on the control node, but on the compute node I
always run into the following issue,
so I can't ssh to the VM instance.
2012-08-09 13:31:28,290 - util.py[WARNING]:
On Aug 9, 2012, at 1:03 AM, Blair Bethwaite blair.bethwa...@gmail.com wrote:
Hi Daniel,
Thanks for following this up!
On 8 August 2012 19:53, Daniel P. Berrange berra...@redhat.com wrote:
not tune this downtime setting, I don't see how you'd see 4 mins
downtime unless it was not truly
On Thu, Aug 09, 2012 at 07:10:17AM -0700, Vishvananda Ishaya wrote:
On Aug 9, 2012, at 1:03 AM, Blair Bethwaite blair.bethwa...@gmail.com wrote:
Hi Daniel,
Thanks for following this up!
On 8 August 2012 19:53, Daniel P. Berrange berra...@redhat.com wrote:
not tune this downtime
On Aug 9, 2012, at 7:13 AM, Daniel P. Berrange berra...@redhat.com wrote:
With non-live migration, the migration operation is guaranteed to
complete. With live migration, you can get into a non-convergence
scenario where the guest is dirtying data faster than it can be
migrated. With the
j...@redhat.com wrote:
* Switch to rootwrap_config and deprecate root_helper
This would fully align quantum-rootwrap with nova-rootwrap. However I'm
not sure it's reasonable to deprecate root_helper=sudo in Folsom, given
how little tested quantum-rootwrap seems to be on Folsom.
From: Thierry Carrez thie...@openstack.org
Date: Thu, 09 Aug 2012 16:32:23 +0200
j...@redhat.com wrote:
* Switch to rootwrap_config and deprecate root_helper
This would fully align quantum-rootwrap with nova-rootwrap. However
I'm
not sure it's reasonable
With multihost=True, every nova-compute node also needs nova-api-metadata
installed.
That should sort it out...
Thanks,
Kiall
On Aug 9, 2012 2:58 PM, 谢丹铭 xiedanm...@qiyi.com wrote:
Hi, list,
I'm setting up OpenStack on Ubuntu 12.04 LTS with a FlatDHCP mode network
configuration. Everything
Also, the metadata host should be set to 127.0.0.1 for multihost=True.
Thanks,
Kiall
On Aug 9, 2012 2:58 PM, 谢丹铭 xiedanm...@qiyi.com wrote:
Hi, list,
I'm setting up OpenStack on Ubuntu 12.04 LTS with a FlatDHCP mode network
configuration. Everything is OK on the control node, but on the compute
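Putting the two suggestions above together, the relevant nova.conf fragment on each compute node would look something like this (flag names as in Essex/Folsom; a sketch, not a verified configuration):

```
# nova.conf on each nova-compute node, which also runs nova-api-metadata
multi_host=True
metadata_host=127.0.0.1
```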
All, sorry for top posting, but this is a fine example of why we
really need bloggers to help with the documentation. These fragmented
instructions are difficult to rely on - we need maintainable,
process-oriented treatment of content.
Mirantis peeps, you have added in your blog entries to the
Hi guys,
I currently have a working cloud with a working GPU passthrough setup
(CentOS/libvirt/Xen 4.1.2); now I need to work on adding this new resource to
OpenStack.
Here is the plan:
1. Create a new instance type (g1.small) with an extra spec like
xpu_arch = radeon
2. Modify
On 08/09/2012 10:32 AM, Thierry Carrez wrote:
j...@redhat.com wrote:
* Switch to rootwrap_config and deprecate root_helper
This would fully align quantum-rootwrap with nova-rootwrap. However I'm
not sure it's reasonable to deprecate root_helper=sudo in Folsom, given
how little
Boris-Michel,
One thing that I noticed was a typo, schedulre, that can cause a malfunction. I am
not sure what version you are using, but recently the extra_spec checking was
moved to compute_capabilities_filter.py (ComputeCapabilitiesFilter). As far as
I understand, the current ComputeFilter does
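The extra_specs matching that ComputeCapabilitiesFilter performs can be sketched as follows: a host passes only if every extra_spec key on the instance type matches the host's advertised capabilities. This is a simplified illustration, not the actual nova filter code:

```python
def host_passes(host_capabilities, instance_type_extra_specs):
    """Return True if the host satisfies every extra_spec requirement,
    e.g. an instance type with xpu_arch=radeon only lands on hosts
    advertising xpu_arch=radeon."""
    for key, required in instance_type_extra_specs.items():
        if host_capabilities.get(key) != required:
            return False
    return True
```

Note that with the Simple Scheduler these filters never run at all; the filter_scheduler has to be enabled for extra_specs to have any effect.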
You're getting a '300 Multiple Choices' response as you haven't indicated a
version in your request. You can parse the body as json (indicated in the
headers) to see what API versions are available to you at any given time. If
you don't care about taking that extra step, just use a URI with
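The JSON body of the 300 response lists the available API versions, and a client can parse it to build a versioned URI. A sketch of that extra step — the sample body below illustrates the general shape of a versions document, not an exact Glance response:

```python
import json

def pick_version_href(body, status="CURRENT"):
    """Parse a 300 Multiple Choices versions document and return the
    href of the first version with the given status, or None."""
    versions = json.loads(body)["versions"]
    for v in versions:
        if v.get("status") == status:
            return v["links"][0]["href"]
    return None

# Illustrative versions document of the kind returned with a 300 response.
sample = json.dumps({"versions": [
    {"id": "v1.1", "status": "CURRENT",
     "links": [{"rel": "self", "href": "http://glance:9292/v1/"}]},
    {"id": "v1.0", "status": "SUPPORTED",
     "links": [{"rel": "self", "href": "http://glance:9292/v1.0/"}]},
]})
```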
REMINDER: Another meeting will take place today, in ~2 hours from now
(19:00 UTC), on #openstack-meeting (use http://webchat.freenode.net/
to join).
On Mon, Aug 6, 2012 at 3:09 PM, Eugene Kirpichov ekirpic...@gmail.com wrote:
Hi,
Below are the meeting notes from the IRC meeting about LBaaS
Joseph,
Yes, sorry about the typo; I was retyping these lines in the email.
Anyway, the problem seems to be that the Simple Scheduler I'm using is not
running the filters at all, so I now use the filter_scheduler (I'm on Essex, by
the way) and the filter does its job and filters out the host
CC'ing openstack-dev since that is a more appropriate list for this
discussion.
On 08/08/2012 04:35 PM, Eric Windisch wrote:
I believe that the RPC backend should no longer have any default.
I disagree and my reason is fairly straight-forward. Changing the
default will break existing
On Aug 9, 2012, at 8:13 AM, Robert Kukura rkuk...@redhat.com wrote:
We should immediately change devstack to stop running the quantum agents
as root, so at least the root_helper=sudo functionality is really being
used.
It looks like devstack does configure nova with the new
Indeed, uploading large files with the Horizon webserver as an intermediate
relay is a nasty business which we want to discourage. We are looking at ways
to send files directly from the Horizon client-side UI to swift/glance for
large file upload in the future.
All the best,
- Gabriel
Hi!
I'm working on some code for scheduler_hints to be used during migration
and was running devstack/exercise.sh on the latest and greatest git. Without
any of my changes installed, I see the following failures on a 12.04 install:
This time I was the sole participant and here's what I had to say :)
Our current progress is as follows:
The team has almost finished the core code and is about to start
working on the F5 driver.
Most of the external API is implemented, and it's planned to polish
the driver/core interaction logic
On 08/09/2012 02:39 PM, Eric Windisch wrote:
I also don't understand why having a default that doesn't work for
anyone makes any sense.
I would hope that a localhost-only installation with a username and password
of 'guest' includes a very small number of anyones. Who is really using
I'm not talking about all configuration options. I'm talking about this
single configuration option. Existing installations that did not
specify rpc_backend because they did not need to will break if the
default is changed.
They would only break in Grizzly, following a one-release
Hi all,
I am having a terrible time getting my instances to work after a hard
reboot. I am using the most up-to-date versions of all OpenStack
packages provided by Ubuntu. I have included a list of packages, with
versions, at the end of this email.
After a hard reboot nova list reports that the
On Thu, Aug 9, 2012 at 1:57 PM, Joe Gordon j...@cloudscaling.com wrote:
Did you turn off rate limiting in devstack? I have hit that in the past
On Aug 9, 2012 12:36 PM, Thomas Gall thomasag...@gmail.com wrote:
Hi!
I'm working on some code for scheduler_hints to be used during migration
On Thu, Aug 9, 2012 at 2:33 PM, Thomas Gall thomasag...@gmail.com wrote:
FAILED boot_from_volume
FAILED euca
FAILED floating_ips
FAILED volumes
These
If your eth0 (public interface) can access the Internet, then with ip_forward
enabled your instances should be able to as well...
On Wed, Aug 8, 2012 at 12:05 PM, Leander Bessa Beernaert
leande...@gmail.com wrote:
So I have set up a small proof of concept: one controller node and two
compute nodes. Since the
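The ip_forward setting mentioned above is a standard Linux sysctl, not anything OpenStack-specific; enabling it persistently is one line:

```
# /etc/sysctl.conf -- enable IPv4 forwarding so the node can route
# instance traffic out through its public interface
net.ipv4.ip_forward = 1
```

Apply it with sysctl -p, or set it on the running kernel with sysctl -w net.ipv4.ip_forward=1.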
Hello Everyone,
The nova meeting today was quite eventful. Minutes are included below. A couple
of important updates:
* we are putting nova-core review days on hold.
* nova-core is going to pay extra attention to reviewing so that we can get
everything merged by Tuesday
* after F-3 nova-core
Hello Everyone,
We are in the unfortunate position of not knowing how good our OpenStack API
XML support is. All of our integration tests use JSON. Many of the compute
extensions don't even have XML deserializers. We also assume that there are
bugs we don't even know about due to underuse. We need
On Aug 9, 2012, at 1:56 PM, Sébastien Han han.sebast...@gmail.com wrote:
Did I miss something?
Unfortunately this is confusing because the term metadata is used for two
different things.
The metadata visible to the instance is a replication of the AWS metadata
server; it is constructed
On 08/09/2012 12:13 AM, Alessandro Tagliapietra wrote:
Hello guys,
I've just installed kernel 3.4 from the Ubuntu kernel PPA archive, and after this
upgrade VMs aren't able to get a DHCP address, but with tcpdump I see the
request and offer on the network.
Someone else experienced this? I've tried
On Aug 9, 2012, at 3:32 PM, George Reese george.re...@imaginary.com wrote:
Why aren't the integration tests both XML and JSON?
The simple answer is that no one has taken the time to write them. Our devstack
exercises use the Python client bindings. Tempest has JSON clients but no XML
On 08/09/2012 12:13 AM, Alessandro Tagliapietra wrote:
Hello guys,
I've just installed kernel 3.4 from the Ubuntu kernel PPA archive, and after this
upgrade VMs aren't able to get a DHCP address, but with tcpdump I see the
request and offer on the network.
Someone else experienced this? I've tried
Right. I hope the document can help you; it is in Chinese.
The network is FlatDHCP + multihost.
http://www.chenshake.com/ubuntu-12-04-openstack-essex-multinode-installation/
On Thu, Aug 9, 2012 at 10:51 PM, Kiall Mac Innes ki...@managedit.ie wrote:
Also the metadata host should be set to
Has anyone surveyed those subscribed to the openstack-operators@lists.openstack.org
list for usage? While imperfect, at least it would be asking the
constituency that might be most affected. You might also consider asking
whether they would prefer JSON or XML, regardless of what they use today. I
Situations like this are always interesting to watch. :-)
On the one hand it's open source, so if you care about something then put
up the resources to make it happen.
On the other hand, that doesn't mean that as a developer you get to ignore
the bigger picture and only do 1/2 of the work
As part of my work on Tempest, I've created an alternate backend configuration
to use XML requests/responses. This right now mostly covers Nova, but could
easily be extended to test other projects as well. I hadn't pushed it yet
because it seemed to be low priority, but I'd be more than glad to
Hello,
In my Essex install on RHEL6, there is a problem with the metadata service.
The metadata service works for instances running on the controller node,
where
the nova-api (metadata service) is running. But for the other worker nodes,
the metadata service is intermittent, i.e. the instances
I would start by checking the iptables rules and routes that are being set up
(in the VMs and outside).
If you are running a flat (no DHCP) network, that usually makes it a lot
harder as well.
On 8/9/12 7:31 PM, Xin Zhao xz...@bnl.gov wrote:
Hello,
In my essex install on RHEL6, there is a problem with the
On Aug 9, 2012, at 8:14 PM, Doug Davis d...@us.ibm.com wrote:
Situations like this are always interesting to watch. :-)
On the one hand it's open source, so if you care about something then put up
the resources to make it happen.
This attitude always bothers me. This isn't some Open
On Aug 9, 2012, at 6:28 PM, Daryl Walleck daryl.wall...@rackspace.com wrote:
As part of my work on Tempest, I've created an alternate backend
configuration to use XML requests/responses. This right now mostly covers
Nova, but could easily be extended to test other projects as well. I hadn't
On Aug 9, 2012, at 7:31 PM, Xin Zhao xz...@bnl.gov wrote:
Hello,
In my Essex install on RHEL6, there is a problem with the metadata service.
The metadata service works for instances running on the controller node, where
the nova-api (metadata service) is running. But for the other worker
On 08/10/2012 12:17 PM, Vishvananda Ishaya wrote:
$ curl -v http://169.254.169.254:8775/
* About to connect() to 169.254.169.254 port 8775 (#0)
* Trying 169.254.169.254... Connection timed out
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
Any idea
Title: quantal_folsom_nova_trunk
BUILD FAILURE: https://jenkins.qa.ubuntu.com/job/quantal_folsom_nova_trunk/351/ (Thu, 09 Aug 2012 02:01:54 -0400; duration 3 min 34 sec; started by an SCM change)
Title: precise_folsom_nova_trunk
BUILD FAILURE: https://jenkins.qa.ubuntu.com/job/precise_folsom_nova_trunk/361/ (Thu, 09 Aug 2012 12:02:08 -0400; duration 49 sec; started by an SCM change)
Title: quantal_folsom_nova_trunk
BUILD FAILURE: https://jenkins.qa.ubuntu.com/job/quantal_folsom_nova_trunk/353/ (Thu, 09 Aug 2012 12:01:57 -0400; duration 22 sec; started by an SCM change)
Title: quantal_folsom_nova_trunk
BUILD FAILURE: https://jenkins.qa.ubuntu.com/job/quantal_folsom_nova_trunk/355/ (Thu, 09 Aug 2012 13:01:55 -0400; duration 3 min 41 sec; started by an SCM change)
Title: precise_folsom_deploy
BUILD FAILURE: https://jenkins.qa.ubuntu.com/job/precise_folsom_deploy/226/ (Thu, 09 Aug 2012 13:23:28 -0400; duration 2 min 20 sec; started by command line)
Title: precise_folsom_deploy
BUILD FAILURE: https://jenkins.qa.ubuntu.com/job/precise_folsom_deploy/227/ (Thu, 09 Aug 2012 13:44:06 -0400; duration 36 sec; started by command line)
Title: precise_folsom_deploy
BUILD SUCCESS: https://jenkins.qa.ubuntu.com/job/precise_folsom_deploy/228/ (Thu, 09 Aug 2012 14:43:52 -0400; duration 14 min; started by user adam)
Title: quantal_folsom_nova_trunk
BUILD FAILURE: https://jenkins.qa.ubuntu.com/job/quantal_folsom_nova_trunk/357/ (Thu, 09 Aug 2012 16:01:56 -0400; duration 6 min 20 sec; started by an SCM change)
Title: quantal_folsom_nova_trunk
BUILD FAILURE: https://jenkins.qa.ubuntu.com/job/quantal_folsom_nova_trunk/358/ (Thu, 09 Aug 2012 18:31:55 -0400; duration 4 min 39 sec; started by an SCM change)
Title: quantal_folsom_deploy
BUILD FAILURE: https://jenkins.qa.ubuntu.com/job/quantal_folsom_deploy/34/ (Thu, 09 Aug 2012 18:55:31 -0400; duration 1 min 3 sec; started by user adam)
Title: quantal_folsom_nova_trunk
BUILD FAILURE: https://jenkins.qa.ubuntu.com/job/quantal_folsom_nova_trunk/359/ (Thu, 09 Aug 2012 19:01:55 -0400; duration 4 min 15 sec; started by an SCM change)
Title: quantal_folsom_nova_trunk
BUILD FAILURE: https://jenkins.qa.ubuntu.com/job/quantal_folsom_nova_trunk/360/ (Thu, 09 Aug 2012 19:31:55 -0400; duration 3 min 58 sec; started by an SCM change)
Title: quantal_folsom_nova_trunk
BUILD FAILURE: https://jenkins.qa.ubuntu.com/job/quantal_folsom_nova_trunk/361/ (Thu, 09 Aug 2012 21:02:00 -0400; duration 6 min 21 sec; started by an SCM change)
Title: quantal_folsom_quantum_trunk
BUILD SUCCESS: https://jenkins.qa.ubuntu.com/job/quantal_folsom_quantum_trunk/76/ (Thu, 09 Aug 2012 22:01:58 -0400; duration 7 min 47 sec; started by an SCM change)