It appeared that I needed to increase the default value of
limit_param_default in /etc/glance/glance-registry.conf (the default value
is 25). That fixed the problem.
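For reference, the change amounts to something like the following fragment (1000 is just an illustrative value, and api_limit_max caps what a client may request):

```ini
# /etc/glance/glance-registry.conf
[DEFAULT]
# Maximum number of items a request returns when no limit is given
# (default: 25).
limit_param_default = 1000
api_limit_max = 1000
```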
On Mon, Feb 18, 2013 at 6:51 PM, Andrii Loshkovskyi
loshkovs...@gmail.com wrote:
Hello,
I have a few tenants and several
Hi all,
is there some documentation about the openstack nova dns? (floating_ip_dns)
How is it configured in nova.conf ?
Best,
Davide.
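From memory of the Folsom-era option names (worth verifying against the compute configuration reference), the floating-IP DNS hooks are selected by driver options in nova.conf roughly like this; the default driver is a no-op, so nothing happens until you configure a real driver:

```ini
[DEFAULT]
# Hypothetical sketch; replace the no-op driver with a real DNS driver class.
floating_ip_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver
instance_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver
```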
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe :
I haven't had the chance to do a full review, but here are a few thoughts:
- the account quota goes along with this review:
https://review.openstack.org/#/c/21563/
so we can easily get account metadata from middlewares.
- I am not sure I much fancy the JSON blob in the ini config (why not a
Hi,
I set up OpenStack (Folsom + Quantum) with one controller node (also
running the networking services) and 8 compute nodes. Both the controller
node and the compute nodes have 2 NICs each: one on the public network
(internet) and one on the private network.
My requirement is that for each tenant, every VM
The FreeBSD install uses the default boot loader.
I am assuming this is the right documentation to refer to:
http://www.freebsd.org/doc/handbook/serialconsole-setup.html
I have not switched to the GRUB boot loader within FreeBSD before.
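If that handbook page is the right one, the loader-side change is typically a single line (a sketch; confirm against the handbook for your release):

```
# /boot/loader.conf
console="comconsole"
```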
From: Ritesh Nanda riteshnand...@gmail.com
Hi,
you might also have a look at https://github.com/cschwede/swquota
Account quota is stored in account metadata and set by a reseller
account.
I changed the code slightly to make it more consistent with the already
merged container quota.
@chmouel: I might create a pull request to swift
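As a toy illustration of the quota-as-middleware idea (a hypothetical sketch, not swquota itself): a WSGI filter that rejects PUTs whose Content-Length exceeds a fixed limit. A real implementation would look the limit up in account or container metadata, as described above.

```python
# Hypothetical quota-style WSGI middleware: reject object PUTs whose
# declared Content-Length would exceed a fixed byte limit.
class QuotaMiddleware:
    def __init__(self, app, max_bytes):
        self.app = app
        self.max_bytes = max_bytes

    def __call__(self, environ, start_response):
        if environ.get("REQUEST_METHOD") == "PUT":
            length = int(environ.get("CONTENT_LENGTH") or 0)
            if length > self.max_bytes:
                # Over quota: short-circuit without calling the wrapped app.
                start_response("413 Request Entity Too Large",
                               [("Content-Type", "text/plain")])
                return [b"quota exceeded\n"]
        # Within quota (or not a PUT): pass through to the wrapped app.
        return self.app(environ, start_response)
```

The real difficulty, as noted later in this thread, is that account usage read back from metadata is only eventually consistent, so any such check is approximate.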
Matthew Thode wrote:
Is there any plan to have security releases for the supported
versions of the various OpenStack components? For example, having a
2012.2.3.3 for keystone (the last number being the security
release).
We provide hotfixes in the advisories, and the fixes are included in
our next
192.168.202.103 = public controller iface
192.168.203.103 = private controller iface
anyway, I still get the login problem using any of those values
On 20/02/2013 06:59, Kieran Spear wrote:
On 20 February 2013 03:40, Michaël Van de Borne
michael.vandebo...@cetic.be
Also, one quick thing to look at is the Apache error log while you access
Horizon.
Watch the log with: tail -f /var/log/apache/error.log
On Wed, Feb 20, 2013 at 5:07 PM, Michaël Van de Borne
michael.vandebo...@cetic.be wrote:
192.168.202.103 = public controller iface
192.168.203.103 =
We are more or less through with the Folsom basic installation. We are
following the basic installation guide on the OpenStack website,
but we are encountering many networking and installation errors.
1) Installation errors occur on the controller node.
2) I think we are doing the networking wrong. We
Unless tester3 is given explicit permissions, he can't do anything.
To be of any use, the 'test' user (who is an admin) would need to grant
'test3' read/write access to a container. Permissions are granted
using the X-Container-Read and X-Container-Write headers on
containers,
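As an illustration of setting those headers (the token, endpoint, account, and container names below are placeholders):

```
$ curl -X POST \
    -H "X-Auth-Token: $ADMIN_TOKEN" \
    -H "X-Container-Read: test:tester3" \
    -H "X-Container-Write: test:tester3" \
    https://swift.example.com/v1/AUTH_test/mycontainer
```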
You need to analyze your Keystone log to find out why Keystone fails;
they are in /opt/stack/logs/screen/
thanks,
lyon
On 2013-2-20 at 1:39 PM, harryxiyou harryxi...@gmail.com wrote:
Hi all,
When I reboot my OS, I have to install DevStack again, and I
hit the following errors during the restart:
+
On Wed, Feb 20, 2013 at 8:00 PM, Liang Liang lyon.lian...@gmail.com wrote:
You need to analyze your Keystone log to find out why Keystone fails;
they are in /opt/stack/logs/screen/
I cannot find the /opt/stack/logs dir.
--
Thanks
Harry Wei
___
The OpenStack Technical Committee (TC) met in #openstack-meeting at
20:00 UTC yesterday.
Here is a quick summary of the outcome of this meeting:
* The TC approved the graduation of the Heat project (to be integrated
in common Havana release)
* The TC considered the suggestion of the Board of
On 20/02/2013 14:04, Chathura M. Sarathchandra Magurawalage wrote:
There are apparently two instances running on the compute node, but
nova sees only one. Probably when I deleted an instance
earlier it was not deleted properly.
root@controller:~# nova list
Hi Folks,
we're trying to set up Keystone with LDAP. So far, so good: we can
authenticate, but we cannot give the admin user 'god' rights. Through
tweaks of the policy.json file it's possible for the admin user to see
all tenants, but he doesn't get the 'admin' tab in Horizon.
Any help is really
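In case it helps: the Horizon admin tab normally follows from the user holding the admin role on a tenant, and with LDAP-backed identity that role grant still lives in Keystone. With the Folsom-era keystone CLI it looks roughly like this (the IDs are placeholders):

```
$ keystone user-role-add --user-id <admin-user-id> \
    --role-id <admin-role-id> --tenant-id <tenant-id>
```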
Hi,
Previously, using nova-network, all my VMs had:
# route -n
Kernel IP routing table
Destination   Gateway   Genmask         Flags Metric Ref  Use Iface
10.0.0.0      0.0.0.0   255.255.255.0   U     0      0    0   eth0
169.254.0.0   0.0.0.0   255.255.0.0
On 02/20/13 05:40, Thierry Carrez wrote:
Matthew Thode wrote:
Is there any plan to have security releases for the supported
versions of the various OpenStack components? For example, having a
2012.2.3.3 for keystone (the last number being the security
release).
We provide hotfixes in the
Hi Schwede,
I have already read your code. Implementing the authentication for updating
metadata in the middleware is good for maintainability.
But the account usage is not accurate because of eventual consistency,
so I base the usage quota on containers.
I will refactor my implementation
2013/2/20 Chmouel Boudjnah chmo...@chmouel.com
I haven't had the chance to do a full review, but here are a few thoughts:
- the account quota goes along with this review:
https://review.openstack.org/#/c/21563/
so we can easily get account metadata from middlewares.
Yes, I also want
Hey Anil, thanks for responding. Here's the output:
root@kvm-cs-sn-10i:/var/lib/nova/instances# ovs-vsctl show
9d9f7949-2b80-40c8-a9e0-6a116200ed96
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port int-br-eth1
            Interface
Here's the last command output you asked for:
root@kvm-cs-sn-10i:/var/lib/nova/instances# ovs-ofctl dump-flows br-eth1
NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=78793.694s, table=0, n_packets=6, n_bytes=468,
priority=2,in_port=6 actions=drop
cookie=0x0, duration=78794.033s, table=0,
2013/2/20 harryxiyou harryxi...@gmail.com
On Wed, Feb 20, 2013 at 8:00 PM, Liang Liang lyon.lian...@gmail.com
wrote:
You need to analyze your Keystone log to find out why Keystone fails;
they are in /opt/stack/logs/screen/
I cannot find /opt/stack/logs dir.
Use screen to see the
On Wed, 2013-02-20 at 18:11 +0800, Alex Yang wrote:
Storage Quotas Design
This is the design draft of Storage Quota.
Implementation of this design is
https://github.com/AlexYangYu/StackLab-swift/tree/dev-quota
I'll also point out Boson: https://wiki.openstack.org/wiki/Boson and
Thanks Mark,
I tried it and it works; I appreciate the idea behind it. As I
am using the VLAN manager in nova-network, I would like to create a new
domain for each tenant created in Keystone. That tenant would have a
separate network.
I can understand the process; I would even try
I solved the problem by downgrading Horizon to the packages below:
apt-get install \
openstack-dashboard=2012.1.3+stable~20120815-691dd2-0ubuntu1.1 \
openstack-dashboard-ubuntu-theme=2012.1.3+stable~20120815-691dd2-0ubuntu1.1
\
python-django-horizon=2012.1.3+stable~20120815-691dd2-0ubuntu1.1
Regards,
I updated all the services (nova, cinder, glance, horizon, keystone)
and left all the options in the *.conf files at their defaults, changing
only the sql connection and the [filter:authtoken] section.
And now it works again.
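For anyone hitting the same thing, here is a Folsom-era sketch of the two parts that usually need editing (hosts and credentials are placeholders, and section contents vary slightly per service):

```ini
[DEFAULT]
sql_connection = mysql://glance:<password>@127.0.0.1/glance

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = glance
admin_password = <password>
```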
0.2-1ubuntu1~cloud0 - python-jsonschema
0.6-1ubuntu1~cloud0 - python-prettytable
This is just a reminder that we appreciate everyone's assistance so
far, polishing the end result of the wiki migration which took place
last weekend. It's looking great, but there's always room for
further improvement of course!
To aid in any remaining cleanup, the old wiki is available at...
On Thu, Feb 21, 2013 at 12:44 AM, Yujie Du duyujie@gmail.com wrote:
2013/2/20 harryxiyou harryxi...@gmail.com
[...]
Use screen to see the logging output:
$ screen -d -m -S screen-name -t shell -s /bin/bash
$ screen -x stack
After executing
$ screen -d -m -S stack -t shell -s
Hi Kevin,
On Wed, Feb 20, 2013 at 5:26 PM, Kevin L. Mitchell
kevin.mitch...@rackspace.com wrote:
I'll also point out Boson: https://wiki.openstack.org/wiki/Boson and
https://github.com/klmitch/boson with some initial work. Unfortunately,
I'm not able to work on Boson at the moment due to
On Wed, 2013-02-20 at 21:09 +0100, Chmouel Boudjnah wrote:
On Wed, Feb 20, 2013 at 5:26 PM, Kevin L. Mitchell
kevin.mitch...@rackspace.com wrote:
I'll also point out Boson: https://wiki.openstack.org/wiki/Boson and
https://github.com/klmitch/boson with some initial work. Unfortunately,
I was guessing this is part of what Synaps does, but from looking at
https://wiki.openstack.org/wiki/Synaps it may provide only
notifications and not enforcement.
Chmouel.
On Wed, Feb 20, 2013 at 9:17 PM, Kevin L. Mitchell
kevin.mitch...@rackspace.com wrote:
On Wed, 2013-02-20 at 21:09
Hi Greg,
I would like to understand your setup a little better.
I scrolled through the thread, but I'm not sure if you've already
provided the information I'm asking for.
On the compute node, is the pair phy-br-eth1/int-br-eth1 a veth pair
or an OVS patch port?
I'm assuming you're using the OVS
On 2/20/13 1:28 PM, Tim Bell tim.b...@cern.ch wrote:
We also feel an integrated approach such as Boson is the way forward for
quotas, rather than each project having its own (and potentially differing)
implementation in areas such as delegation.
+1 to an integrated approach.
It's easy to envision
hi all
I have a question.
I am trying to set up mirroring or live replication between 2 PCs for
OpenStack, so that if one server goes down, the other takes over, and I
hope the users don't notice.
I have also used vMotion in VMware;
can we do the same in OpenStack?
thx for the help
F
Hi Frans,
so basically, what you are looking for is a mirroring solution for your
OpenStack deployment that has been made on two servers?
Are both All-in-One (e.g. they both provide all the OpenStack services and
configured ISO?)
thanks
Razique Mahroua - Nuage Co
razique.mahr...@gmail.com
Tel: +33 9 72 37
It's great to see that things are starting to run properly!
I'm sorry, I did not read that you were running a provider network. That
would have been the typical symptom of a missing mapping.
Some more comments inline on the 'new' issues.
Salvatore
On 20 February 2013 21:57, Greg Chavez
On Thu, Feb 21, 2013 at 4:17 AM, Razique Mahroua
razique.mahr...@gmail.com wrote:
Hi Frans,
so basically, what you are looking for is a mirroring solution for your
OpenStack deployment that has been made on two servers?
Are both All-in-One (eg they both provide all the OpenStack services and
Thanks.
I would be more concerned about the SIOCDELRT error above. Are you trying
to manually remove a network route at bootup? It seems the 'route del' is
failing because the route does not exist.
I am not doing anything that I am aware of.
As already said, you absolutely
Hi all,
I am trying to get a better grasp of which nova.conf options are needed for
a given setup. I have found this list:
http://docs.openstack.org/folsom/openstack-compute/admin/content/list-of-compute-config-options.html
but it doesn't really answer what is needed and when. For example, if I
Looks like you found the reference listings but really want config info,
which I can point you to.
As you have noted, you have choices for networking and volumes: quantum or
nova-network, cinder or nova-volume. Here are pointers to specific docs.
Quantum config:
On Wed, Feb 20, 2013 at 3:14 PM, Hirendra Rathor
hirendra.rat...@gmail.com wrote:
Hi Hirendra Rathor,
I was getting the same error when I picked up DevStack for the first time a
few days ago. I could have tried troubleshooting it, but I wasn't
particularly happy with the fact that I had to launch
[Removed the dev list -- no need to cross-post.]
It looks like you have broken permissions on
'/usr/local/lib/python2.7/dist-packages/httplib2-0.7.7-py2.7.egg' and/or
subdirectories. Make sure everything is world readable.
- Chris
On Feb 20, 2013, at 7:48 PM, harryxiyou harryxi...@gmail.com
On Thu, Feb 21, 2013 at 12:14 PM, Chris Behrens cbehr...@codestud.com wrote:
[Removed the dev list -- no need to cross-post.]
It looks like you have broken permissions on
'/usr/local/lib/python2.7/dist-packages/httplib2-0.7.7-py2.7.egg'
and/or subdirectories. Make sure everything is world
Well, you probably don't want world writeable, but :) 755 on dirs and 644
on files is probably more appropriate! But at least you know the issue.
- Chris
On Feb 20, 2013, at 9:21 PM, harryxiyou harryxi...@gmail.com wrote:
On Thu, Feb 21, 2013 at 12:14 PM, Chris Behrens
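Concretely, the 755/644 suggestion can be applied with something like the following (using the path as reported in the traceback):

```
$ sudo find /usr/local/lib/python2.7/dist-packages/httplib2-0.7.7-py2.7.egg \
    -type d -exec chmod 755 {} +
$ sudo find /usr/local/lib/python2.7/dist-packages/httplib2-0.7.7-py2.7.egg \
    -type f -exec chmod 644 {} +
```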
On Thu, Feb 21, 2013 at 1:35 PM, Chris Behrens cbehr...@codestud.com wrote:
Well, you probably don't want world writeable, but :)
755 on dirs and 644 on files is probably more appropriate!
Ah..., this may be better ;-)
But at least you know the issue.
Yup, thanks.
--
Thanks
Harry Wei
Hi,
$ screen -r
then navigate to c-vol with Ctrl+A then N
To detach from screen : Ctrl+A then D
Regards,
Jean-Baptiste RANSY
Sent from my ASUS Pad
harryxiyou harryxi...@gmail.com wrote:
Hi all,
I have tested OpenStack with Sheepdog as follows:
1. Install Ubuntu 12.04
Title: precise_grizzly_cinder_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_cinder_trunk/167/
Project: precise_grizzly_cinder_trunk
Date of build: Wed, 20 Feb 2013 04:01:08 -0500
Build duration: 1 min 49 sec
Build cause: Started by an SCM change
Built
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help :
See http://10.189.74.7:8080/job/folsom_coverage/480/
Title: raring_grizzly_deploy
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/30/
Project: raring_grizzly_deploy
Date of build: Wed, 20 Feb 2013 04:54:26 -0500
Build duration: 56 min
Build cause: Started by command line by jenkins
Started by command line by
Title: precise_grizzly_cinder_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_cinder_trunk/168/
Project: precise_grizzly_cinder_trunk
Date of build: Wed, 20 Feb 2013 06:01:08 -0500
Build duration: 1 min 35 sec
Build cause: Started by an SCM change
Built
Title: raring_grizzly_cinder_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_cinder_trunk/171/
Project: raring_grizzly_cinder_trunk
Date of build: Wed, 20 Feb 2013 06:01:09 -0500
Build duration: 2 min 50 sec
Build cause: Started by an SCM change
Built
Title: raring_grizzly_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/734/
Project: raring_grizzly_nova_trunk
Date of build: Wed, 20 Feb 2013 08:31:13 -0500
Build duration: 3 min 53 sec
Build cause: Started by an SCM change
Built
Title: raring_grizzly_nova_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/737/
Project: raring_grizzly_nova_trunk
Date of build: Wed, 20 Feb 2013 11:31:12 -0500
Build duration: 14 min
Build cause: Started by an SCM change
Built
Title: raring_grizzly_deploy
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/31/
Project: raring_grizzly_deploy
Date of build: Wed, 20 Feb 2013 11:45:29 -0500
Build duration: 25 min
Build cause: Started by command line by jenkins
Built on: master
Health
Title: raring_grizzly_nova_trunk
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/739/
Project: raring_grizzly_nova_trunk
Date of build: Wed, 20 Feb 2013 13:05:45 -0500
Build duration: 2 min 47 sec
Build cause: Started by an SCM change
Built
Title: precise_grizzly_glance_trunk
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_glance_trunk/129/
Project: precise_grizzly_glance_trunk
Date of build: Wed, 20 Feb 2013 15:31:09 -0500
Build duration: 12 min
Build cause: Started by an SCM change
Built
Title: raring_grizzly_deploy
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/34/
Project: raring_grizzly_deploy
Date of build: Wed, 20 Feb 2013 14:46:48 -0500
Build duration: 56 min
Build cause: Started by command line by jenkins
Built on: master
Health
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/8610/
--
Started by timer
Building remotely on pkg-builder in workspace
http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/ws/
[cloud-archive_folsom_version-drift] $ /bin/bash
Title: raring_grizzly_deploy
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_deploy/41/
Project: raring_grizzly_deploy
Date of build: Wed, 20 Feb 2013 21:21:34 -0500
Build duration: 56 min
Build cause: Started by command line by jenkins
Built on: master
Health