Hi Greg,
Sorry to hear your woes. I agree with you that setting things up is
challenging and sometimes problematic. I would suggest a number of things:
1. Give devstack a bash. This is very helpful and useful for understanding
how everything fits and works together. www.devstack.org
2. A
Hi,
I'm not sure the "dhcp-host" configuration option actually exists. As for having another host in VLAN mode, that is something I'd be interested to know as well.
Regards,
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15
On 19 Feb 2013, at 08:05, Ritesh Nanda
Hi all,
When creating lots of instances simultaneously, there will be lots of
instances in the ERROR state, most of them caused by network RPC
request timeouts. This result is not very graceful.
I think it would be better if the scheduler kept a queue of creation requests.
When it finds all the hosts are
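A minimal sketch of the queueing/throttling idea above (illustrative only, not nova code): cap the number of in-flight network RPC calls so a burst of instance builds cannot all time out at once.

```python
import threading

# Sketch: a bounded semaphore acts as the "queue" of creation requests,
# letting at most max_in_flight RPC calls run concurrently.
class ThrottledDispatcher:
    def __init__(self, max_in_flight):
        self._slots = threading.BoundedSemaphore(max_in_flight)

    def dispatch(self, rpc_call, *args):
        with self._slots:          # blocks until a slot is free
            return rpc_call(*args)

dispatcher = ThrottledDispatcher(max_in_flight=4)
# Stand-in for a network RPC; each call simply doubles its argument.
results = [dispatcher.dispatch(lambda n: n * 2, i) for i in range(10)]
```

Excess requests wait instead of timing out inside the RPC layer, which is the graceful behaviour the post is asking for.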
Hi,
If you stop the resource, it is perfectly normal to get a resource that is
diskless and unconfigured, since you asked Pacemaker to stop it. You just
need to properly reconnect resource 0 (I guess it's the MySQL one). For
this, it's more or less the same operation as this one:
Hi gtt,
What does 'lots of instances simultaneously' mean for you? 100? 1000?
More?
We have launched 100 (but fewer than 1000) simultaneously without any
issue, with Rabbit running on a multicore machine with several gigs of RAM
and out-of-the-box configuration.
Cheers
Diego
--
Diego Parrilla
+1
From the HA guide:
4 Steps to solve the Split-Brain
Manually choose a node whose data modifications will be discarded. We call it the split-brain victim. Choose wisely; all modifications will be lost! When in doubt, run a backup of the victim's data before you continue. When running a Pacemaker
Hi Diego,
Thanks for your reply.
How many hosts do you have? I have 4 hosts. And in this bug,
https://bugs.launchpad.net/nova/+bug/1094226, N is 20. In my
environment N is about 16.
I found that nova-network is too busy to deal with so many RPC requests
at the same time. The RabbitMQ is strong
Increasing the RPC timeout should help. I have seen this problem in
nova-network in the past. Vish's suggestion sounds good.
Recently we launched 128 VMs by mistake in a production environment of a
customer: 0 errors. They are using 12 cores and several gigs for the
nova-network servers with dual
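For reference, the RPC timeout mentioned above is a nova.conf setting. A hedged sketch (the option name is from the Folsom/Grizzly-era common RPC code; verify against your release):

```ini
[DEFAULT]
# Give slow network RPCs more headroom (seconds); the default is 60.
rpc_response_timeout = 120
```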
Hi Greg,
I did have trouble with DHCP assignment (see my previous post on this
list), which I fixed by deleting the OVS bridges on the network node,
recreating them, and restarting the OVS plugin and L3/DHCP agents (which
were all on the same physical node).
Maybe it helps.
Anyway, when
Hi Pat
Do you expect the one central user store to be replicated, say into
Keystone, or not replicated?
The approach we have taken is to assume that the user stores (we support
multiple distributed ones) are external to Keystone and will be managed
by external administrators. When a user
Hi,
I progressed in investigating the bug. I forgot to mention I was
following the Provider Router/single tenancy setup.
So, at reboot, my tap/qg/qr network interfaces were down:
7: qg-c39e5df4-7f: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether fe:10:8c:d8:d8:ca brd
On 02/19/2013 01:57 PM, Sylvain Bauza wrote:
Hi,
I progressed in investigating the bug. I forgot to mention I was
following the Provider Router/single tenancy setup.
So, at reboot, my tap/qg/qr network interfaces were down:
7: qg-c39e5df4-7f: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
Are you using multi_host setup? If not, as Vish suggested, that will
alleviate much of the problem.
Best,
-jay
On 02/19/2013 04:09 AM, gtt116 wrote:
Hi Diego
Thanks for your reply.
How many hosts do you have? I have 4 hosts. And in this bug,
https://bugs.launchpad.net/nova/+bug/1094226, The
I hope Vish can answer this: what could be a way around to do this?
On Tue, Feb 19, 2013 at 2:01 PM, Razique Mahroua
razique.mahr...@gmail.com wrote:
Hi,
I'm not sure the "dhcp-host" configuration option actually exists. As for
having another host in VLAN mode, that is something I'd be interested
Hello,
Comments inline.
On Mon, 18 Feb 2013 19:56:00 -0600, Dolph Mathews wrote
On Mon, Feb 18, 2013 at 9:59 AM, pat p...@xvalheru.org wrote:
Hello,
Sorry to disturb, but I have some questions regarding the Keystone middleware.
Some introduction to the problem: I need to integrate OpenStack into our
Hi,
I'm expecting a single external user store which is read-only for Keystone.
Commonly, the user store is LDAP. As I wrote, the key thing here is the generated token.
Pat
On Tue, 19 Feb 2013 10:44:59 +, David Chadwick wrote
Hi Pat
do you expect the one central user store to be replicated, say
On 19/02/2013 13:31, Gary Kotton wrote:
On 02/19/2013 01:57 PM, Sylvain Bauza wrote:
Hi,
I progressed in investigating the bug. I forgot to mention I was
following the Provider Router/single tenancy setup.
So, at reboot, my tap/qg/qr network interfaces were down :
7: qg-c39e5df4-7f:
On 02/19/2013 03:47 PM, Sylvain Bauza wrote:
On 19/02/2013 13:31, Gary Kotton wrote:
On 02/19/2013 01:57 PM, Sylvain Bauza wrote:
Hi,
I progressed in investigating the bug. I forgot to mention I was
following the Provider Router/single tenancy setup.
So, at reboot, my tap/qg/qr network
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
OpenStack Security Advisory: 2013-004
CVE: CVE-2013-1664, CVE-2013-1665
Date: February 19, 2013
Title: Information leak and Denial of Service using XML entities
Reporter: Jonathan Murray (NCC Group), Joshua Harlow (Yahoo!), Stuart
Stent
Products:
Same problem here. Running Grizzly.
The dashboard keeps prompting me for my credentials. I'm pretty sure
the dashboard sends the wrong tenant name to keystone. Here is the
keystone.log entry:
2013-02-19 16:55:06 WARNING [keystone.common.wsgi] Authorization
failed. The
I checked /etc/nova/api-paste.ini.
Here's the relevant section in it:
[filter:authtoken]
paste.filter_factory =
keystone.middleware.auth_token:filter_factory
auth_host = 192.168.203.103
auth_port = 35357
auth_protocol = http
Moreover (sorry for spamming), this
command works fine:
root@leonard:/etc/init.d# keystone --os-username nova
--os-password openstack --os-tenant-name service --os-auth-url
http://192.168.203.103:5000/v2.0/ token-get
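When the CLI works but the dashboard does not, comparing request bodies can help. A hypothetical helper showing the Keystone v2.0 token request that corresponds to the CLI flags above; the point is that the tenant name in the body must match the working one ("service"):

```python
import json

# Build the JSON body of a Keystone v2.0 POST /v2.0/tokens request.
# Credentials here mirror the --os-* flags from the working CLI call.
def token_request_body(username, password, tenant_name):
    return json.dumps({
        "auth": {
            "passwordCredentials": {"username": username, "password": password},
            "tenantName": tenant_name,
        }
    })

body = token_request_body("nova", "openstack", "service")
```

If the dashboard's request carries a different tenantName, that would explain the "Authorization failed" entries in keystone.log.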
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256
OpenStack Security Advisory: 2013-005
CVE: CVE-2013-0282
Date: February 19, 2013
Keystone EC2-style authentication accepts disabled user/tenants
Reporter: Nathanael Burton (National Security Agency)
Products: Keystone
Affects: All versions
Hi Lloyd
Many thanks for the tips. Been a great help. Been trying a few things out.
I'm running Ubuntu 12.04 with OpenStack.
Created a FreeBSD 9.0 image that works for KVM. i.e.
kvm-img create -f raw freebsd3.img 10G
kvm -m 512 -hda freebsd3.img -cdrom FreeBSD-9.0-RELEASE-amd64-disc1.iso
Hi all,
After I installed OpenStack with DevStack, I want to run the tests
in the nova/tests/test_libvirt.py file. I ran the command
'./run_tests.sh test_libvirt' and I got the following error.
$ ./run_tests.sh test_libvirt
Running `tools/with_venv.sh python setup.py testr --slowest
On Tue, Feb 19, 2013 at 1:25 PM, Harvey West harvey.w...@btinternet.com wrote:
This boots ok. kvm -m 2048 -hda freeBSD.img -boot c
(note: did not use virtio mods. Assumed these were just optimized NIC/SCSI
drivers. Which I can live without for the time being)
I ran into the same issue with
Harvey,
To get console.log in Ubuntu we need to make some changes in the grub config.
The steps below work in Ubuntu; only the file location may be different,
the rest should be the same.
Edit /etc/default/grub:
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"
then save and run:
update-grub2
You definitely need the virtio modules. Nova has no way to detect whether the
modules are installed, so it will try to attach via virtio.
Note that with grizzly you can use custom glance properties to override the
default vif type and disk bus. See https://review.openstack.org/#/c/21527/ and
You cannot have an external DHCP server with OpenStack. OpenStack needs a way
to know the IP address assigned to a VM to do its listing properly. If you
don't care about the API returning valid IPs, there is the possibility of using
FlatNetworking (not FlatDHCP) to make nova stick the network into
Hi,
I have a classic Provider Network, private networks as follows :
- internal network 10.0.0.0/24
- external network 192.168.10.0/24 gw 192.168.1.252 (I know, I have to
add a manual route on both gw and network node)
- br-ex is having 192.168.10.254
I have a floating IP 192.168.10.2
Thanks vish,
Can you tell me the location of the external host file we provide to
dnsmasq, so that I can try putting the directive there?
On Wed, Feb 20, 2013 at 1:07 AM, Vishvananda Ishaya
vishvana...@gmail.comwrote:
You cannot have an external dhcp server with openstack. Openstack
Thanks Vish,
This is something I always forget to ask: I'm curious about the historical
reasons for dnsmasq instead of ISC-DHCP managed with OMAPI, for example.
Cheers
Diego
--
Diego Parrilla
http://www.stackops.com/*CEO*
*www.stackops.com | * diego.parri...@stackops.com** | +34 649 94 43 29
No particular reason except that is what libvirt uses by default and it is easy
to modify.
Vish
On Feb 19, 2013, at 11:51 AM, Diego Parrilla Santamaría
diego.parrilla.santama...@gmail.com wrote:
Thanks Vish,
This is something I always forget to ask: I'm curious about the historical
It is unfortunately regenerated every time an instance is launched, but if you
want to test editing it, it is referenced in the command line for dnsmasq. For
example, on devstack:
--dhcp-hostsfile=/opt/stack/data/nova/br100.conf
Vish
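A small sketch of generating lines for that hostsfile. The assumed format (not confirmed by this thread, but per dnsmasq's documentation) is that each line of a --dhcp-hostsfile takes the same arguments as a --dhcp-host option, i.e. mac,hostname,ip:

```python
# Sketch: render (mac, hostname, ip) tuples in the assumed
# dnsmasq --dhcp-hostsfile line format "mac,hostname,ip".
def hostsfile_lines(leases):
    return "\n".join(f"{mac},{name},{ip}" for mac, name, ip in leases)

text = hostsfile_lines([("fa:16:3e:00:00:01", "vm-1", "10.0.0.3")])
```

Since nova regenerates the file on each launch, edits like this are only useful for experiments between launches.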
On Feb 19, 2013, at 11:49 AM, Ritesh Nanda
Thanks again Vish,
Do we have any roadmap to include functionality for using ISC dhcpd and
BIND DNS as defaults in OpenStack? With the release of BIND 10, a lot of
functionality is going to change from how BIND 9 worked. Now ISC dhcpd
will be saving lease files in a database, and a lot more.
Even
Damn. Found it.
I stupidly forgot to add the manual route to 192.168.1.252 for the qg (gateway)
interface!
I had all the keys: I knew that for metadata traffic you need the external
mapping to the router IP, I saw that iptables was saying 'outbound traffic
thru 192.168.10.1' (i.e. qg, the router), but I didn't
The nova DHCP and DNS architecture won't change. But I'm sure an alternate DHCP
implementation could be done as a Quantum L3 plugin. That, combined with
Moniker, would likely be the path forward.
Vish
On Feb 19, 2013, at 12:18 PM, Ritesh Nanda riteshnand...@gmail.com wrote:
Thanks again vish
Hey Marco,
Have you been able to run some performance tests on your Gluster cluster?
Thanks :)
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15
On 18 Feb 2013, at 14:20, Marco CONSONNI mcocm...@gmail.com wrote:
Hello Sam,
I've tried two of them: NFS and Gluster. Some problems
On 19/02/2013 13:31, Gary Kotton wrote:
The problem is as follows:
When you reboot the host, openvswitch will create the interfaces on
restart. This causes problems with the DHCP and L3 agents.
The solution to this is to run the quantum-ovs-cleanup utility on
reboot prior to the
We have used Gluster for small deployments, but lately we have changed our
mind. Basically we have bet on Gluster for 2013 because of:
- 10GbE everywhere, and Gluster MUST run on 10GbE (or Infiniband)
- 3.3 release fixes some issues when locking big files: Granular locking
- libgfapi reborn, no
Or is there any way I can implement dynamic DNS in OpenStack?
If you want your VMs to automatically get a DNS entry, I have a MyDNS
add-on that can do it. MyDNS is a DNS server that uses a database
instead of zone files.
You just have to pick a domain or subdomain, and then
Hi All,
When I was doing a Swift/Keystone only install with DevStack I used the
following in my localrc
disable_all_services
enable_service key swift mysql
Then stack.sh paused with the error message
ERROR: at least one rpc backend must be enabled,
set one of 'rabbit', 'qpid',
nothing in swift requires rabbit, qpid, or zeromq
--john
On Feb 19, 2013, at 4:53 PM, Everett Toews everett.to...@rackspace.com wrote:
Hi All,
When I was doing a Swift/Keystone only install with DevStack I used the
following in my localrc
disable_all_services
enable_service key
It doesn't. AMQP is needed for the Nova/Glance/Cinder/Ceilometer
integration and the internal RPC mechanism that they use.
-joe
On Feb 19, 2013, at 4:53 PM, Everett Toews everett.to...@rackspace.com wrote:
Hi All,
When I was doing a Swift/Keystone only install with DevStack I used the
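If stack.sh really does require an RPC backend even for a Swift/Keystone-only install, a minimal workaround sketch for the localrc above would be to enable one of the listed backends, e.g. (assuming rabbit):

```
disable_all_services
enable_service key swift mysql rabbit
```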
So it sounds like what we're talking about here is running a Ceph or
GlusterFS node alongside the compute node (i.e., on the same physical
system). I assume then that VMs access their volumes via NFS or iSCSI from
Cinder on the controller node, and in turn, Cinder reads and writes to the
cluster
Hello All!
How do high availability and public key infrastructure work together in
OpenStack?
For example we have Swift proxies for horizontal scaling. They authenticate
themselves with Keystone.
Do the set of machines in the swift proxy cluster use the same public-private
key pair?
Much
Hi experts,
I've installed OpenStack with DevStack. The question is: can I get some
existing implementation and try to patch it into my OpenStack?
I notice that RackSpace has already implemented it. Link as follows:
Sorry for the misunderstanding. I need to patch API key support into my own
OpenStack. That's the point.
Thank you!
Best Regards!
Henry
At 2013-02-20 12:48:33,Zhiqiang Zhao dreamerhe...@126.com wrote:
Hi experts,
I've installed OpenStack with DevStack. The question is: can I get some existing
On 20 February 2013 03:40, Michaël Van de Borne michael.vandebo...@cetic.be
wrote:
Same problem here. Running Grizzly. Dashboard keeps prompting me for my
credentials. Pretty sure dashboard sends wrong tenant name to keystone.
Here's the relevant section in
Is there any plan to have security releases for the supported versions
of the various OpenStack components? Like having a 2012.2.3.3 for
keystone (the last number being the security release).
--
-- Matthew Thode
Hi everyone,
I am trying to install OpenStack for the first time. After successfully
installing the Keystone service, when I run the service I can't get back to
the server's command prompt. To get to the command prompt I have to stop the
Keystone service.
During the process of adding images in Glance i
Hi Stackers-
Has anyone tried enabling LBaaS in a Folsom installation?
Kindly share the installation/integration of LBaaS into Folsom.
I think there is already some service agent work in quantum.
--
Regards,
--
Trinath Somanchi,
+91 9866 235
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help :
Title: raring_grizzly_deploy
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/25/. Date of build: Tue, 19 Feb 2013 06:20:33 -0500. Build duration: 56 min. Build cause: started by command line by jenkins. Built on: master.
See http://10.189.74.7:8080/job/folsom_coverage/477/
Started by command line by jenkins, building on master in workspace http://10.189.74.7:8080/job/folsom_coverage/ws/. No emails were triggered.
[workspace] $ /bin/bash -x
Title: precise_grizzly_keystone_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/145/. Date of build: Tue, 19 Feb 2013 08:31:09 -0500. Build duration: 2 min 11 sec. Build cause: started by an SCM change. Built at 20130219-0846; build needed
Title: raring_grizzly_keystone_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_keystone_trunk/155/. Date of build: Tue, 19 Feb 2013 08:46:42 -0500. Build duration: 3 min 15 sec. Build cause: started by an SCM change. Built at 20130219
Title: raring_grizzly_quantum_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/338/. Date of build: Tue, 19 Feb 2013 10:40:20 -0500. Build duration: 7 min 55 sec. Build cause: started by user James Page.
Title: raring_grizzly_quantum_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/339/. Date of build: Tue, 19 Feb 2013 10:48:29 -0500. Build duration: 13 min. Build cause: started by user James Page.
Title: raring_grizzly_cinder_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_cinder_trunk/163/. Date of build: Tue, 19 Feb 2013 11:25:07 -0500. Build duration: 4 min 48 sec. Build cause: started by user Chuck Short.
Title: precise_grizzly_keystone_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/146/. Date of build: Tue, 19 Feb 2013 13:31:30 -0500. Build duration: 6 min 39 sec. Build cause: started by user James
Title: raring_grizzly_keystone_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_keystone_trunk/156/. Date of build: Tue, 19 Feb 2013 13:32:21 -0500. Build duration: 7 min 43 sec. Build cause: started by user James. Built at 20130219
Title: raring_grizzly_deploy
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/26/. Date of build: Tue, 19 Feb 2013 14:22:14 -0500. Build duration: 25 min. Build cause: started by command line by jenkins. Built on: master.
See http://10.189.74.7:8080/job/folsom_coverage/478/
Title: raring_grizzly_nova_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/725/. Date of build: Tue, 19 Feb 2013 15:39:28 -0500. Build duration: 3 min 1 sec. Build cause: started by an SCM change.
Title: precise_grizzly_cinder_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_cinder_trunk/161/. Date of build: Tue, 19 Feb 2013 16:31:08 -0500. Build duration: 1 min 38 sec. Build cause: started by an SCM change.
Title: raring_grizzly_cinder_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_cinder_trunk/164/. Date of build: Tue, 19 Feb 2013 16:31:09 -0500. Build duration: 3 min 30 sec. Build cause: started by an SCM change. Built at 20130219
Changed files: /fake_nvpapiclient.py (edit), quantum/plugins/nicira/nicira_nvp_plugin/QuantumPlugin.py (edit), quantum/plugins/nicira/nicira_nvp_plugin/common/exceptions.py (edit). Console output [...truncated 12231 lines...]: finished at 20130219-1645; build needed 00:11:07, 32836k disc space. INFO:root:Uploading package to ppa:openstack
Title: precise_grizzly_cinder_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_cinder_trunk/162/. Date of build: Tue, 19 Feb 2013 17:01:09 -0500. Build duration: 1 min 44 sec. Build cause: started by an SCM change.
Title: raring_grizzly_cinder_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_cinder_trunk/165/. Date of build: Tue, 19 Feb 2013 17:01:10 -0500. Build duration: 3 min 13 sec. Build cause: started by an SCM change.
Title: precise_grizzly_quantum_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_quantum_trunk/330/. Date of build: Tue, 19 Feb 2013 17:01:11 -0500. Build duration: 14 min. Build cause: started by an SCM change.
Title: raring_grizzly_quantum_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_quantum_trunk/342/. Date of build: Tue, 19 Feb 2013 17:04:24 -0500. Build duration: 16 min. Build cause: started by an SCM change.
Title: precise_grizzly_cinder_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_cinder_trunk/163/. Date of build: Tue, 19 Feb 2013 18:31:09 -0500. Build duration: 1 min 36 sec. Build cause: started by an SCM change.
Title: raring_grizzly_cinder_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_cinder_trunk/166/. Date of build: Tue, 19 Feb 2013 18:31:09 -0500. Build duration: 3 min 37 sec. Build cause: started by an SCM change.
Title: raring_grizzly_nova_trunk
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/726/. Date of build: Tue, 19 Feb 2013 18:32:46 -0500. Build duration: 17 min. Build cause: started by an SCM change.
Title: raring_grizzly_deploy
BUILD SUCCESS. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/28/. Date of build: Tue, 19 Feb 2013 20:26:14 -0500. Build duration: 25 min. Build cause: started by command line by jenkins. Built on: master.
Title: precise_folsom_deploy
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/precise_folsom_deploy/395/. Date of build: Tue, 19 Feb 2013 21:14:50 -0500. Build duration: 1 min 30 sec. Build cause: started by command line by jenkins. Built at 20130219-2212; build needed 00:09:28. Built at 20130219-2217; build needed 00:13:03.
Title: raring_grizzly_cinder_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_cinder_trunk/168/. Date of build: Wed, 20 Feb 2013 01:31:09 -0500. Build duration: 2 min 36 sec. Build cause: started by an SCM change.
Title: precise_grizzly_cinder_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_cinder_trunk/166/. Date of build: Wed, 20 Feb 2013 02:31:08 -0500. Build duration: 1 min 29 sec. Build cause: started by an SCM change.
Title: raring_grizzly_cinder_trunk
BUILD FAILURE. Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_cinder_trunk/169/. Date of build: Wed, 20 Feb 2013 02:31:08 -0500. Build duration: 2 min 38 sec. Build cause: started by an SCM change.