Hi Vish,
The code was merged to master
(https://github.com/openstack/nova/commit/d5b91dd39bd89eed98742cd02ea604a842a45447)
yesterday.
But the bug with rule removal wasn't fixed. I'll open a bug. I tried
to investigate it but couldn't find the problem.
Could you help me?
Regards,
Édouard.
Hello,
I have been playing with creating and destroying instances in the GUI.
Sometimes, if I create more than 10 or so, some will get stuck in an error
state. Is this some kind of timeout or something waiting for the image file
perhaps?
Thanks,
Andrew
Hi Andrew:
Could you include extracts of logs from nova-scheduler, nova-compute or
nova-network where those errors appear?
Thanks.
JuanFra.
2012/12/13 Andrew Holway a.hol...@syseleven.de
Hello,
I have been playing with creating and destroying instances in the GUI.
Sometimes, if I create
I can ping and SSH into the instance with both the private IP and the floating IP.
The instance can ping the control node IP, but cannot ping the compute node or
any external network.
I have installed quantum on the control node host, and it only has one NIC
(same as the compute node), using eth0:0 and eth0:1 to virtualize.
Michael Still wrote:
Stand down. Padraig has suggested a better way.
Also note that new dependency discussions are a better fit for
openstack-dev.
--
Thierry Carrez (ttx)
Committee for the Usage of the Right Mailing-lists
Hi guys,
You think you have OpenStack working, then you cough and it all breaks.
I'm getting the following error when trying to start nova-compute after a
reboot of the compute node (to install unrelated patches):
libvirtError: Domain not found: no domain with matching name
'instance-002a'
Hi Guys
I was wondering if there is any possibility of getting metadata output in the
listing when you issue a GET on a container.
At the moment it returns, e.g.:
<object>
  <name>10620_1b8b2553c6eb9987ff647d69e3181f9eeb3a43ef.jpg</name>
  <hash>e453fcd7ff03e9e0e460555e875b1da1</hash>
  <bytes>9272</bytes>
</object>
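The markup in the listing above was stripped in transit; it is the XML form of a Swift container listing (a GET on the container with format=xml). Reconstructed, it can be read like this (a sketch; the container name here is an assumption):

```python
import xml.etree.ElementTree as ET

# Reconstruction of the tag-stripped listing quoted above.
listing = """<container name="photos">
  <object>
    <name>10620_1b8b2553c6eb9987ff647d69e3181f9eeb3a43ef.jpg</name>
    <hash>e453fcd7ff03e9e0e460555e875b1da1</hash>
    <bytes>9272</bytes>
  </object>
</container>"""

root = ET.fromstring(listing)
for obj in root.findall("object"):
    # Print each object's name and size in bytes.
    print(obj.findtext("name"), obj.findtext("bytes"))
```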
Ken Thomas wrote:
Greetings all!
I'm looking into using keyring as a way to (optionally) remove clear-text
passwords from the various config files. (See
https://blueprints.launchpad.net/oslo/+spec/pw-keyrings for details.)
[...]
This is a development topic, a better fit for the openstack-dev list.
Hey Joe,
Yes, there are several solutions. First, check if the domain exists by running
$ virsh list --all
(supposing you use libvirt). Check /var/lib/nova/instances/instance-002a.
If the dir exists, cd into it and run "virsh define libvirt.xml", then restart
nova-compute. If the dir doesn't exist, you may
I think you'd better build a test environment and try it out. We have
experience upgrading from 1.4.9 to 1.7.4 and encountered some problems.
On 2012-12-12, at 6:25 AM, Alejandro Comisario alejandro.comisa...@mercadolibre.com
wrote:
Hi guys, we are planning to upgrade our production cluster from
It turned out to be that last one. What I don't understand is where
OpenStack found the instance ID. That doesn't exist in the database,
or anywhere on the file system that I could find.
Kind regards
-- joe.
On 13 December 2012 10:27, Razique Mahroua razique.mahr...@gmail.comwrote:
Hey Joe,
I think the instance ID is the database ID encoded in base 16 (hex): 0x2A = ID 42 in the database. Did you update ID 42? I may be wrong ^^
Razique Mahroua - Nuage & Co
razique.mahr...@gmail.com
Tel: +33 9 72 37 94 15
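The mapping can be checked in a couple of lines (the instance name is the one from the thread; nova names libvirt domains "instance-" followed by the database ID in hex):

```python
# Parse the hex suffix of the libvirt domain name to get the nova DB ID.
name = "instance-002a"                 # domain name from the error above
db_id = int(name.split("-", 1)[1], 16) # "002a" in base 16
print(db_id)  # 42
```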
On 13 Dec 2012, at 11:51, Joe Warren-Meeks joe.warren.me...@gmail.com wrote:
Hey
I grepped out the last hour where I have been doing lots of creating and
terminating of instances. OMG, there are so many logs. It's like treacle!
http://gauntlet.sys11.net/logs/compute.log
2012-12-13 11:14:23 TRACE nova.openstack.common.rpc.amqp Timeout: Timeout while
waiting on RPC
Hi all:
I'm installing OpenStack Dashboard 2012.2 on CentOS 6.3 and I got the following
error related to CSS/JS compression:
File "/usr/lib/python2.6/site-packages/django/template/base.py", line 837,
in render_node
[Thu Dec 13 11:58:37 2012] [error] [client 192.10.1.36] return
node.render(context)
[Thu
On 12/13/2012 12:24 PM, JuanFra Rodriguez Cardoso wrote:
Hi all:
I'm installing OpenStack Dashboard 2012.2 on CentOS 6.3 and I got the following
error related to CSS/JS compression:
Yes, I bet it's not related to the Dashboard, although the error message
tells you so.
Which version are you
Hi Matthias:
Thanks for replying. The rest of the OpenStack services are working OK.
These are the installed versions of Horizon and Django (from EPEL 6.7):
- openstack-dashboard-2012.2-4.el6.noarch
- Django14-1.4.2-2.el6.noarch
Do you recommend I install Horizon from github repository?
Thanks!
On 12/13/2012 12:07 PM, ZhiQiang Fan wrote:
I can ping and SSH into the instance with the private IP and floating IP.
The instance can ping the control node IP, but cannot ping the compute
node or any external network.
In order to help, would it be possible for you to provide IP
addresses and
Hi all,
I installed OpenStack (Folsom) on Ubuntu 12.04. Everything seems to be OK.
Instances can be started, but after some time an instance will automatically
stop.
In the log:
DEBUG:nova.openstack.common.rpc.amqp:received {u'_context_roles':
[u'admin'], u'_context_request_id':
I set up multi_host and this seems to have fixed the problem.
I suppose it is resource contention on nova-network.
On Dec 13, 2012, at 12:01 PM, Andrew Holway wrote:
Hey
I grepped out the last hour where I have been doing lots of creating and
terminating of instances. OMG there is
Hey guys,
What sort of Web Server is behind OpenStack dashboard (horizon)? Is it some
sort of Apache???
Cheers,
Desta
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe :
Any time this happened to me, I did the following:
create a lost_instance.xml with the content below (I think it's possible to
create a simpler file, with less content) just to register a VM with
libvirt. Make sure to change the <name></name> tag. In your case:
<domain type='kvm'>
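A hypothetical minimal lost_instance.xml could be generated like this (all element values below are illustrative assumptions, not taken from the thread; adjust the name, memory, and disk path to match your instance before running "virsh define lost_instance.xml"):

```python
# Write a minimal libvirt domain XML so the instance can be re-registered.
# Values here are placeholders for a default nova layout.
DOMAIN_XML = """<domain type='kvm'>
  <name>instance-002a</name>
  <memory>2097152</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/nova/instances/instance-002a/disk'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
"""

with open("lost_instance.xml", "w") as f:
    f.write(DOMAIN_XML)
# Then: virsh define lost_instance.xml, and restart nova-compute.
```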
The Dashboard runs on an Apache installation using Python's WSGI stack and
the Django framework.
--Syed
On Thu, Dec 13, 2012 at 8:01 PM, Desta Haileselassie Hagos
desta161...@gmail.com wrote:
Hey guys,
What sort of Web Server is behind OpenStack dashboard (horizon)? Is it
some sort of Apache???
It's vanilla Apache httpd, AFAIK.
On Dec 13, 2012, at 3:31 PM, Desta Haileselassie Hagos wrote:
Hey guys,
What sort of Web Server is behind OpenStack dashboard (horizon)? Is it some
sort of Apache???
Cheers,
Desta
___
Mailing
Hey Sam,
Keyring is already in the distros? So I can go ahead and add it as a
hard dependency to the build when I get this in?
About your question: the basic idea is that you can define config keys
as 'secure', and *if* you provide a 'secure_source', then cfg.py will use
*your* code to get
Why not, hehe. Though I must admit it's easier, yes, you can have a simpler
template :) Just copy a libvirt.xml and update the instance name. You could
even come up with a small bash script, ./recover.sh $instance-name, that would
perform the following steps:
• Retrieve the instance name and put it into
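Razique describes a small bash script; the same steps can be sketched like this (shown in Python for illustration; the paths and service name are assumptions for a default Ubuntu/nova install):

```python
import os
import subprocess

def recover(instance, base="/var/lib/nova/instances", dry_run=True):
    """Sketch: re-register a lost libvirt domain from its saved
    libvirt.xml, then restart nova-compute."""
    xml = os.path.join(base, instance, "libvirt.xml")
    if not os.path.exists(xml):
        raise FileNotFoundError("no saved domain XML at %s" % xml)
    commands = [
        ["virsh", "define", xml],
        ["service", "nova-compute", "restart"],
    ]
    for cmd in commands:
        if dry_run:
            print(" ".join(cmd))      # dry run: show what would be executed
        else:
            subprocess.check_call(cmd)
    return commands
```

Called as `recover("instance-002a")` it only prints the commands; pass `dry_run=False` to actually run them.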
I think 5000 is a kind of public port, for external use, and 35357 a private
port for internal use. But I'm probably wrong! :)
Are your OS_AUTH_URL and SERVICE_ENDPOINT both defined?
Unset one of them and try again.
On Thu, Dec 13, 2012 at 12:54 AM, Hao Wang hao.1.w...@gmail.com wrote:
Hi Stackers,
The metadata for objects is stored at the object level, not in the
container dbs. Reporting metadata information for container listings
would require the server to HEAD every object in the container, which
would cause too much work on the backend.
--
Chuck
On Wed, Dec 12, 2012 at 7:01 AM,
Thanks Chuck.
What I am playing at here is that I want to create an rsync-like script where I can
save permissions, owner uid/gid and mode (maybe even xattrs), so that a restore
will work with these.
Swift makes this very easy with the object metadata - however, running a sync
would require I HEAD
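Swift stores custom object metadata under X-Object-Meta-* headers; one way to capture ownership and mode at upload time is a sketch like this (the header names follow Swift's convention; the helper itself is hypothetical):

```python
import os
import stat

def meta_headers(path):
    """Capture owner uid/gid and mode as Swift X-Object-Meta-* headers,
    so a restore can read them back and reapply them with chown/chmod."""
    st = os.stat(path)
    return {
        "X-Object-Meta-Uid": str(st.st_uid),
        "X-Object-Meta-Gid": str(st.st_gid),
        "X-Object-Meta-Mode": oct(stat.S_IMODE(st.st_mode)),
    }
```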
Hi Lloyd,
On Tue, Dec 11, 2012 at 9:03 PM, Lloyd Dewolf lloydost...@gmail.com wrote:
On Fri, Dec 7, 2012 at 12:15 PM, Anne Gentle a...@openstack.org wrote:
tl;dr: Migration of wiki.openstack.org from MoinMoin to Mediawiki
commences 12/17.
Yay for the standard of wikis and wiki markup!
At some point a clear-text password will show up, but that doesn't require
said password to always be in clear-text.
Think of a remote system that provides said passwords and authenticates
the system asking for said password using some private/public key
authentication that can be easily revoked
+ Openstack-dev
On 12/13/12 10:05 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:
At some point a clear-text password will show up, but that doesn't require
said password to always be in clear-text.
Think of a remote system that provides said passwords and authenticates
the system asking for
+ The right openstack-dev, haha
On 12/13/12 10:06 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:
+ Openstack-dev
On 12/13/12 10:05 AM, Joshua Harlow harlo...@yahoo-inc.com wrote:
At some point a clear-text password will show up, but that doesn't
require
said password to always be in
The DevStack and Ubuntu configurations run with the Ubuntu distro's default
version of Apache and mod_wsgi. Personally, I'm also a big fan of nginx. Horizon,
being a Django/WSGI-compliant application, can run behind any web server that
supports the Python WSGI standard.
- Gabriel
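For reference, the WSGI contract Gabriel mentions is small; a minimal PEP 3333 application looks like this (a sketch):

```python
# A minimal WSGI application: a callable taking the request environ and a
# start_response function, returning an iterable of byte strings.
def application(environ, start_response):
    body = b"hello from WSGI\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Any server that speaks WSGI (mod_wsgi, gunicorn, wsgiref's simple_server) can host such a callable, which is why Horizon is web-server-agnostic.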
Have you tried doing what it said and running manage.py compress? (make sure
you're in the proper Python environment/venv when running that command)
That error indicates one of two things:
1. You have your settings set with COMPRESS_ENABLED = True and
COMPRESS_OFFLINE = True but you
There aren't any code examples in the wiki that I know of. If you have
examples, we can certainly find a way to indicate Apache 2.0 for code; I
don't find this problematic.
Yeah, we can wrap a <source lang="python"></source> block in a template
that also adds in license text for any code. Should be
On Thu, Dec 13, 2012 at 11:31 AM, Ryan Lane rl...@wikimedia.org wrote:
There aren't any code examples in the wiki that I know of. If you have
examples, we can certainly find a way to indicate Apache 2.0 for code; I
don't find this problematic.
Yeah, we can wrap a <source lang="python"></source>
Hi Gui,
Thanks for the reply. I think I have to configure another fresh
environment. It's a little difficult for both of us to work with a
system full of problems.
Both of the ports are opened by the /usr/bin/keystone-all process. No
idea why it is like this. Going through the source code
I followed the OpenStack Network (Quantum) Administration Guide and built an
internal network, and I want VMs in the private network to access the Internet.
So I followed the instructions and created an external network, and the
internal VM has a floating IP, but it cannot connect to the physical
control node (which also acts as the network node).
control node: eth0 192.168.32.18, eth0:0 10.0.0.3, eth0:1 (br-ex bridge) 192.168.32.129
compute node: eth0 192.168.32.19, eth0:0 10.0.0.4
fixed IPs for instances: 10.0.18.0/24
floating IPs for instances: 192.168.32.130-192.168.32.135, range
192.168.32.128/24, gateway 192.168.32.1
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/2029/
--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe
/tmp/hudson2495201369323677794.sh
+ OS_RELEASE=folsom
+
Title: precise_essex_deploy
General Information
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/precise_essex_deploy/18354/
Project: precise_essex_deploy
Date of build: Thu, 13 Dec 2012 11:52:26 -0500
Build duration: 13 min
Build cause: Started by user james-page
Built on: master
Title: precise_essex_glance_stable
General Information
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_essex_glance_stable/35/
Project: precise_essex_glance_stable
Date of build: Thu, 13 Dec 2012 12:14:02 -0500
Build duration: 15 min
Build cause: Started by user james-page