Dear All,
Is there any support in OpenStack for tablets (for example
Android tablets, with native Android interfaces)?
With best regards,
Desta
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Hey,
I had NFS over RDMA working but hit this bug with o_direct. I cannot remember
the exact circumstances of this cropping up.
https://bugzilla.linux-nfs.org/show_bug.cgi?id=228
You can work around this with KVM by explicitly specifying the block size in
the XML. I do not know how you would
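For reference, in libvirt's domain XML the block size can be pinned with a <blockio> element on the disk definition; a sketch (the file path and sizes here are illustrative, not from this thread):

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/mnt/nfs/guest.img'/>
  <target dev='vda' bus='virtio'/>
  <!-- force the logical/physical block size presented to the guest -->
  <blockio logical_block_size='4096' physical_block_size='4096'/>
</disk>
```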
Kun Huang wrote:
Thanks, Thierry Carrez. Your explanation is easy to understand. I now
understand why we need such a mechanism.
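(For anyone unfamiliar with the mechanism: root-wrap decides which commands a service may run as root via plain INI filter files; a minimal, entirely hypothetical example:)

```
# /etc/cinder/rootwrap.d/example.filters (hypothetical filter file)
[Filters]
# name: FilterClass, executable path, user to run as
kpartx: CommandFilter, /sbin/kpartx, root
```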
BTW, is root-wrap a general or popular way to maintain security? I have no
experience with security, but I have heard that running as root should be
banned for security reasons. Ideally,
Also,
You might be better off asking on the KVM list for this low level stuff.
http://www.linux-kvm.org/page/Lists,_IRC#Mailing_Lists
On Jan 14, 2013, at 11:15 AM, Andrew Holway wrote:
Hey,
I had NFS over RDMA working but hit this bug with o_direct. I cannot remember
the exact
Hello all,
I'm trying to upload 200GB of 200KB files to Swift. I'm using 4 clients
(each hosted on a different machine) with 10 threads each uploading files
using the official python-swiftclient. Each thread is uploading to a
separate container.
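A rough sketch of that upload layout in python-swiftclient, with one container per thread (the auth URL, credentials and file names are placeholders, not details from this thread):

```python
import os

NUM_THREADS = 10  # uploader threads per client machine, as described above

def container_for(thread_id):
    # One container per uploader thread.
    return "upload-%02d" % thread_id

def split_round_robin(paths, n=NUM_THREADS):
    # Distribute the file list evenly across the uploader threads.
    return [paths[i::n] for i in range(n)]

def upload_batch(thread_id, paths):
    # Credentials below are placeholders -- substitute real auth details.
    from swiftclient import client
    conn = client.Connection(authurl="http://proxy:8080/auth/v1.0",
                             user="account:user", key="secret")
    container = container_for(thread_id)
    conn.put_container(container)
    for path in paths:
        with open(path, "rb") as f:
            conn.put_object(container, os.path.basename(path), contents=f)
```

Each client machine would then call upload_batch once per thread, e.g. via concurrent.futures.ThreadPoolExecutor.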
I have 5 storage nodes and 1 proxy node. The
My OpenStack environment is running under the traditional VLAN model.
Currently, I have two VMs running Ubuntu 12.04 server, and their
fixed IPs are:
Host 1: eth0: 172.16.0.3
Host 2: eth0: 172.16.0.5
Then I add a subinterface to each VM like this:
Host 1: eth0:1 192.168.2.2
Host 2: eth0:1
Apparently not. This is the output of glance image-show:
+----------------------+-------+
| Property             | Value |
+----------------------+-------+
| Property 'kernel_id' |
I forgot to mention that I'm also using the suggestions mentioned here:
http://docs.openstack.org/developer/swift/deployment_guide.html#general-system-tuning
On Mon, Jan 14, 2013 at 11:02 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
Hello all,
I'm trying to upload 200GB of 200KB
Chuck / John.
We are seeing 50,000 requests per minute (where 10,000+ are PUTs of small
objects, from 10KB to 150KB).
We are using swift 1.7.4 with keystone token caching, so no latency over
there.
We have 12 proxies and 24 datanodes divided into 4 zones (each
datanode has 48GB of RAM, 2
Hi,
Which network component do you use? nova-network? quantum?
If my understanding is correct, you need to use quantum to have
multiple NICs on a VM. When you create multiple subnets, VMs will boot
with multiple NICs and can communicate with each other over each NIC.
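For reference, with quantum the NICs are requested explicitly at boot time, one --nic per network (the image, flavor and network UUIDs below are placeholders):

```
nova boot --image <image-id> --flavor m1.small \
  --nic net-id=<first-network-uuid> \
  --nic net-id=<second-network-uuid> \
  multi-nic-vm
```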
Best Regards,
On Mon, Jan 14, 2013 at 11:02 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
Hello all,
I'm trying to upload 200GB of 200KB files to Swift. I'm using 4 clients (each
hosted on a different machine) with 10 threads each uploading files using the
official
By stopping, do you mean halt the service (kill the process) or is it a
change in the configuration file?
On Mon, Jan 14, 2013 at 1:20 PM, Robert van Leeuwen
robert.vanleeu...@spilgames.com wrote:
On Mon, Jan 14, 2013 at 11:02 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
Hello
By stopping, do you mean halt the service (kill the process) or is it a
change in the configuration file?
Just halt the service.
According to the info below, I think the current size is 256, right?
If I format the storage partition, will that automatically clear all the
contents from the storage or do I need to clean something else as well?
Output from xfs_info:
meta-data=/dev/sda3    isize=256    agcount=4,
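For what it's worth, reformatting with a different inode size is quick but destroys everything on the partition (the device is taken from the output above; the mount point and size value are just examples):

```
umount /dev/sda3
mkfs.xfs -f -i size=1024 /dev/sda3   # -i size sets the inode size; wipes all data
mount /dev/sda3 /srv/node/sda3       # mount point is an example
```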
Hirai,
Thanks for your quick response. Currently I am using nova-network in VLAN
mode. Actually the real requirement is that the user wants to bind many
subinterfaces to one NIC, and they want to connect to the other server using
every subinterface, so we don't want to create so many VLANs for one
Hello all,
I've come to realize that my swift storage partitions are set up with the
wrong node size. The only way for me to fix this is to format the
partitions. I was wondering how I could reset swift (remove all data from
stored files) without having to install it again.
Regards,
Leander
I see. With replication switched off during upload, does inserting into
various containers speed up the process or is it irrelevant?
On Mon, Jan 14, 2013 at 1:49 PM, Robert van Leeuwen
robert.vanleeu...@spilgames.com wrote:
According to the info below, i think the current size is 256 right?
Hello,
I've installed OpenStack Folsom on CentOS with the EPEL repo.
Basically everything works in a 4-node configuration (controller,
network and 2 compute). Quantum is configured with GRE and L3
services.
At this point, I'd like to go further on the storage part.
I'm trying to use the cinder
I see. With replication switched off during upload, does inserting into
various containers speed up the process
or is it irrelevant?
I'm not sure what your question is, but maybe this helps:
In short:
The replication daemon is walking across your files to check if any files
need to be
Allow me to rephrase. I've read somewhere (can't remember where) that it
would be faster to upload files if they were uploaded to separate
containers. This was suggested for a standard swift installation with a
certain replication factor. Since I'll be uploading the files with the
replicators
Hi!
Is Grizzly-2 available on Ubuntu Raring Ringtail (13.04) daily builds?
Tks!
Thiago
On 10 January 2013 19:44, Steven Dake sd...@redhat.com wrote:
Hi folks,
The OpenStack release team has released the second milestone of the
Grizzly development cycle (grizzly-2). This is a significant
Ok, thanks for all the tips/help.
Regards,
Leander
On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen
robert.vanleeu...@spilgames.com wrote:
Allow me to rephrase.
I've read somewhere (can't remember where) that it would be faster to
upload files if they were uploaded to separate
Allow me to rephrase.
I've read somewhere (can't remember where) that it would be faster to upload
files if they were uploaded to separate containers.
This was suggested for a standard swift installation with a certain
replication factor.
Since I'll be uploading the files with the
On 01/14/2013 06:06 AM, Antonio Messina wrote:
Apparently not. This is the output of glance image-show:
+----------------------+-------+
| Property             | Value |
Here's my configuration for cinder using NFS. I also submitted a bug for
creating a volume snapshot and a volume from a snapshot here; I've already
fixed it locally but haven't submitted the patch yet:
https://bugs.launchpad.net/cinder/+bug/1097266
/etc/cinder/cinder.conf:
[DEFAULT]
rootwrap_config=/etc/cinder/rootwrap.conf
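The file pointed to by the driver's nfs_shares_config option is just a list of NFS exports, one host:/path per line (the server names and export paths below are placeholders):

```
# e.g. /var/lib/cinder/nfsshares
192.168.1.100:/exports/cinder
nfsserver2:/srv/nfs/cinder
```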
John and swifters,
I see this as a big problem, and I think that the scenario
described by Alejandro is a very common one. I am wondering if it is
possible to have two rings (one with the new, extended partition power, one
with the existing partition power), when significant changes
Hi All,
is there a nova command to add a NIC to a running instance (i.e. without the
need to do nova boot ... --nic 1 --nic new-nic)?
The documentation doesn't show anything...
Regards,
W. Dec
Hello,
Thank you for your configuration, it will help me a lot.
One last question: what data goes in the nfs_shares_config
file (/var/lib/cinder/nfsshare), please?
Thanks in advance
2013/1/14 Ray Sun qsun01...@cienet.com.cn:
Here's my configuration for cinder using NFS, also I submit bug for
Yes, I think it would be a great topic for the summit.
--John
On Jan 14, 2013, at 7:54 AM, Tong Li liton...@us.ibm.com wrote:
John and swifters,
I see this problem as a big problem and I think that the scenario described
by Alejandro is a very common scenario. I am thinking if it is
Well, I've fixed the node size and disabled all the replicator and
auditor processes. However, it is even slower now than it was before :/.
Any suggestions?
On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert
leande...@gmail.com wrote:
Ok, thanks for all the tips/help.
Regards,
Hi Alejandro,
I really doubt that partition size is causing these issues. It can be
difficult to debug these types of issues without access to the
cluster, but I can think of a couple of things to look at.
1. Check your disk io usage and io wait on the storage nodes. If
that seems abnormally
Hi Leander,
The following assumes that the cluster isn't in production yet:
1. Stop all services on all machines
2. Format and remount all storage devices
3. Re-create rings with the correct partition size
4. Push new rings out to all servers
5. Start services back up and test.
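For step 3, the ring re-creation looks roughly like this (the partition power of 18, 3 replicas, 1-hour min part hours, and the zone/IP/device values are all examples, not recommendations for this cluster):

```
swift-ring-builder object.builder create 18 3 1
swift-ring-builder object.builder add z1-10.0.0.1:6000/sda 100
swift-ring-builder object.builder rebalance
# repeat for container.builder and account.builder, then push the
# resulting .ring.gz files to every node (step 4)
```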
--
Chuck
Hey Leander,
Can you post what performance you are getting? If they are all
sharing the same GigE network, you might also check that the links
aren't being saturated, as it is pretty easy to saturate pushing 200k
files around.
--
Chuck
On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
I'm getting around 5-6.5 GB a day of bytes written to Swift. I calculated
this by running swift stat && sleep 60 && swift stat, then doing some
calculations based on those values to get to the end result.
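The arithmetic behind that estimate is just the byte-count delta scaled up to a day; a small helper to make it explicit (the sample numbers are illustrative):

```python
def daily_rate_gb(bytes_before, bytes_after, interval_s=60):
    # Extrapolate bytes written during a short sampling window
    # (e.g. two `swift stat` readings 60 seconds apart) to GB per day.
    delta = bytes_after - bytes_before
    per_second = delta / float(interval_s)
    return per_second * 86400 / (1024 ** 3)
```

For example, a delta of about 4.4 MB over a 60-second window extrapolates to roughly 5.9 GB/day, in line with the figures above.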
Currently I'm resetting swift with a node size of 64, since 90% of the
files are less than 70KB in
Using swift stat probably isn't the best way to determine cluster
performance, as those stats are updated async, and could be delayed
quite a bit as you are heavily loading the cluster. It also might be
worthwhile to use a tool like swift-bench to test your cluster to make
sure it is properly
I'm currently using the swift client to upload files, would you recommend
another approach?
On Mon, Jan 14, 2013 at 4:43 PM, Chuck Thier cth...@gmail.com wrote:
Using swift stat probably isn't the best way to determine cluster
performance, as those stats are updated async, and could be
Thanks!
Regards,
Leander
On Mon, Jan 14, 2013 at 4:29 PM, Chuck Thier cth...@gmail.com wrote:
Hi Leander,
The following assumes that the cluster isn't in production yet:
1. Stop all services on all machines
2. Format and remount all storage devices
3. Re-create rings with the
Hi all,
I've recently started playing with (and working with) OpenStack with a view to
migrating our production infrastructure from esx 4 to Essex.
My issue, or at least utter idiocy, is in the network configuration. Basically
I can't work out whether in the configuration of OpenStack I have
Brilliant; sorry- I didn't attach the diagram.
On 14 Jan 2013, at 16:52, James Condron
james.cond...@simplybusiness.co.uk
wrote:
Hi all,
I've recently started playing with (and working with) OpenStack with a view to
migrating our production
I currently have 4 machines running 10 clients each, uploading 1/40th of the
data. More than 40 simultaneous clients starts to severely affect
Keystone's ability to handle these operations.
On Mon, Jan 14, 2013 at 4:58 PM, Chuck Thier cth...@gmail.com wrote:
That should be fine, but it doesn't
Also, I'm unable to run the swift-bench with keystone.
I always get this error:
Traceback (most recent call last):
  File "/usr/bin/swift-bench", line 149, in <module>
    controller.run()
  File "/usr/lib/python2.7/dist-packages/swift/common/bench.py", line 159, in run
    puts =
That should be fine, but it doesn't have any way of reporting stats
currently. You could use tools like ifstat to look at how much
bandwidth you are using. You can also look at how much cpu the swift
tool is using. Depending on how your data is setup, you could run
several swift-client
On Mon, Jan 14, 2013 at 11:01 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
I currently have 4 machines running 10 clients each uploading 1/40th of the
data. More than 40 simultaneous clients starts to severely affect
Keystone's ability to handle these operations.
You might also
On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
Also, I'm unable to run the swift-bench with keystone.
Hrm... That was supposed to be fixed with this bug:
https://bugs.launchpad.net/swift/+bug/1011727
My keystone dev instance isn't working at the moment,
On Jan 14, 2013, at 7:49 AM, Jay Pipes jaypi...@gmail.com wrote:
There is an integer key in the s3_images table that stores the map
between the UUID and the AMI image id:
https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L964
Not sure this is available via
This doesn't exist yet, but I thought at one point it was being worked on.
Hot-adding NICs would be a great feature, especially for the quantum integration.
Blueprint here:
https://blueprints.launchpad.net/nova/+spec/network-adapter-hotplug
There was work done here:
On Mon, Jan 14, 2013 at 6:18 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
On Jan 14, 2013, at 7:49 AM, Jay Pipes jaypi...@gmail.com wrote:
There is an integer key in the s3_images table that stores the map
between the UUID and the AMI image id:
I'm using the ubuntu 12.04 packages of the folsom repository by the way.
On Mon, Jan 14, 2013 at 5:18 PM, Chuck Thier cth...@gmail.com wrote:
On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
Also, I'm unable to run the swift-bench with keystone.
Are you by any chance referring to this topic
https://lists.launchpad.net/openstack/msg08639.html regarding the keystone
token cache? If so, I've already added the configuration line and have not
noticed any speedup :/
On Mon, Jan 14, 2013 at 5:19 PM, Leander Bessa Beernaert
On Jan 14, 2013, at 9:28 AM, Antonio Messina antonio.s.mess...@gmail.com
wrote:
On Mon, Jan 14, 2013 at 6:18 PM, Vishvananda Ishaya vishvana...@gmail.com
wrote:
On Jan 14, 2013, at 7:49 AM, Jay Pipes jaypi...@gmail.com wrote:
There is an integer key in the s3_images table that
I'd recommend Folsom over Essex :) And I'd highly recommend these
articles from Mirantis which really step through the networking setup in
VLANManager. Read through them in the following order and I promise at
the end you will have a much better understanding of networking in Nova.
On Mon, Jan 14, 2013 at 7:07 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
On Jan 14, 2013, at 9:28 AM, Antonio Messina antonio.s.mess...@gmail.com
wrote:
On Mon, Jan 14, 2013 at 6:18 PM, Vishvananda Ishaya vishvana...@gmail.com
wrote:
On Jan 14, 2013, at 7:49 AM, Jay Pipes
On Jan 14, 2013, at 10:15 AM, Antonio Messina antonio.s.mess...@gmail.com
wrote:
On Mon, Jan 14, 2013 at 7:07 PM, Vishvananda Ishaya vishvana...@gmail.com
wrote:
On Jan 14, 2013, at 9:28 AM, Antonio Messina antonio.s.mess...@gmail.com
wrote:
On Mon, Jan 14, 2013 at 6:18 PM,
If memcache is being utilized by your keystone middleware, you should see
keystone attaching to it on the first incoming request, e.g.:
keystoneclient.middleware.auth_token [INFO]: Using Keystone memcache for
caching token
You may also want to use auth_token from keystoneclient >= v0.2.0 if
You would have to look at the proxy log to see if a request is being
made. The results from the swift command line are just the calls that
the client makes. The server still has to validate the token on
every request.
--
Chuck
On Mon, Jan 14, 2013 at 12:37 PM, Leander Bessa Beernaert
Neither keystone nor swift proxy are producing any logs. I'm not sure what
to do :S
On Mon, Jan 14, 2013 at 6:50 PM, Chuck Thier cth...@gmail.com wrote:
You would have to look at the proxy log to see if a request is being
made. The results from the swift command line are just the calls that
Rather than ping-ponging emails back and forth on this list, it would
be easier if you could hop on to the #openstack-swift IRC channel on
freenode to discuss further.
--
Chuck
On Mon, Jan 14, 2013 at 1:00 PM, Leander Bessa Beernaert
leande...@gmail.com wrote:
Neither keystone nor swift proxy
Chuck et al.
Let me go through the points one by one.
#1 Even though the object-auditor always runs and never stops, we
stopped the swift-*-auditor and didn't see any improvements. Across all the
datanodes we have an average of 8% I/O wait (using iostat); the only thing
that we see is the pid
I have just completed writing a piece of middleware for logging requests
in WSGI stacks. I have dubbed this useful piece of code Bark, and it
is available on PyPI. Here are the links:
* http://pypi.python.org/pypi/bark
* https://github.com/klmitch/bark
I've written an extensive
Hey Alejandro,
Those were the most common issues that people run into when they are having
performance issues with swift. The other thing to check is to look at the
logs to make sure there are no major issues (like bad drives, misconfigured
nodes, etc.), which could add latency to the requests.
Hi,
Could someone please confirm whether or not Quantum client
(top-of-trunk) supports SSL ?
http://wiki.openstack.org/SecureClientConnections appears to suggest
that SSL is not supported.
Snippet from the wiki page:
...
quantumclient (not started)
replace httplib2 with requests
add
Hi,
The meeting time shows me Jan 25 as the scheduled date, is that right?
Meeting status: Not started
Starting date: Friday, January 25, 2013
Starting time: 12:00 am, Europe Time (Berlin, GMT+01:00)
Duration: 2 hours
Host's name: Sean Roberts
Sean, do you have an agenda? I like
Yes! https://launchpad.net/ubuntu/raring/+source/nova
:-P
On 14 January 2013 13:20, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:
Hi!
Is Grizzly-2 available on Ubuntu Raring Ringtail (13.04) daily builds?
Tks!
Thiago
On 10 January 2013 19:44, Steven Dake sd...@redhat.com wrote:
Reminder for the global user group planning meetup, tomorrow, Tuesday, 15 Jan
2013, 11:00am to 1:00pm PST http://www.meetup.com/openstack/events/93593062/
Join us!
Sean Roberts
Infrastructure Strategy
sean...@yahoo-inc.com
Direct (408) 349-5234 Mobile (925) 980-4729
The scheduled webex still says it is for the 24th. You may want to adjust it so
it will launch on time.
(I will be in LA tomorrow so I have to dial in vs attending in person)
Regards,
Colin
Here are the action items that I captured:
* Public apt/yum repos built off trunk - OpenStack
* Puppet manifest on a VM (easy openstack)
* Rolling upgrade / continuous deployment
* Mentoring user group leaders
* Checklists
* Standard slides / preso
* Users group / restricted to college
* Redirect funding
Agenda for tomorrow's user group / meetup planning meeting:
* Review of user group and meetup template
* How do meetups fit into the larger plan of the user committee and
summit-to-summit development?
* Where would be the location for social materials like videos, meetup logs,
and other
Hi,
I am sorry that I may not be able to join the meeting, but the Hong Kong
OpenStack User Group and Hong Kong Cyberport will update you with any
information if necessary.
May I add a brief comment on the following agenda:
Can the Foundation start working with some Universities for meeting space,
2013/1/15 Bruce Lok bruce...@cyberport.hk
Hi,
I am sorry that I may not be able to join the meeting, but the Hong Kong
OpenStack User Group and Hong Kong Cyberport will update you with any
information if necessary.
May I add a brief comment on the following agenda:
Testing whether I have joined this mailing list; please don't read.
--
Thanks
Harry Wei
I tried to boot a new instance with two NICs, but it seems I can't set the
network_id to a single one, which means I have to create at least two VLANs
for the NICs. Is there any way I can boot my instance with two NICs and
assign the IPs in a single VLAN? And why is this not allowed currently?
Thanks.
Hi All,
snip
Hi,
I am sorry that I may not be able to join the meeting, but the Hong Kong
OpenStack User Group and Hong Kong Cyberport will update you with any
information if necessary.
May I add a brief comment on the following agenda:
Can the Foundation start working with some Universities for
--
Mailing list: https://launchpad.net/~openstack-ubuntu-testing-notifications
Post to : openstack-ubuntu-testing-notifications@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack-ubuntu-testing-notifications
More help :
raring_grizzly_nova_trunk
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/469/
Project: raring_grizzly_nova_trunk
Date of build: Mon, 14 Jan 2013 07:01:03 -0500
Build duration: 3 min 53 sec
Build cause: Started by an SCM change
Built at 20130114
See http://10.189.74.7:8080/job/cloud-archive_folsom_version-drift/5120/
--
Started by timer
Building remotely on pkg-builder
[cloud-archive_folsom_version-drift] $ /bin/bash -xe
/tmp/hudson1895817599462925974.sh
+ OS_RELEASE=folsom
+
precise_grizzly_nova_trunk
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_nova_trunk/466/
Project: precise_grizzly_nova_trunk
Date of build: Mon, 14 Jan 2013 08:31:03 -0500
Build duration: 11 min
Build cause: Started by an SCM change
raring_grizzly_nova_trunk
BUILD FAILURE
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_nova_trunk/471/
Project: raring_grizzly_nova_trunk
Date of build: Mon, 14 Jan 2013 09:01:04 -0500
Build duration: 4 min 23 sec
Build cause: Started by an SCM change
Built at 20130114-0910
Build needed 00:06:53, 116284k disc space
ERROR:root:Error occurred during package creation/build: Command '['sbuild', '-d', 'precise-grizzly', '-n
precise_grizzly_keystone_trunk
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/precise_grizzly_keystone_trunk/80/
Project: precise_grizzly_keystone_trunk
Date of build: Mon, 14 Jan 2013 12:01:04 -0500
Build duration: 4 min 44 sec
Build cause: Started by an SCM
raring_grizzly_glance_trunk
BUILD SUCCESS
Build URL: https://jenkins.qa.ubuntu.com/job/raring_grizzly_glance_trunk/73/
Project: raring_grizzly_glance_trunk
Date of build: Mon, 14 Jan 2013 12:10:56 -0500
Build duration: 12 min
Build cause: Started by user james-page