Re: [Openstack] [Glance] Storage images on NFS server

2013-07-18 Thread Blair Bethwaite
On 18 July 2013 16:21, Jake G. dj_dark_jungl...@yahoo.com wrote:

 Wondering how to configure Openstack so that all images are stored on NFS
 storage instead of the default /var/lib/glance/images.
 Is this as simple as mounting the NFS store to the /var/lib/glance/images
 directory?


Yep.
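
For example, something like the following should be all that's needed (a
sketch only; the NFS server name and export path here are hypothetical):

  # e.g. in /etc/fstab on the glance host:
  nfsserver:/export/glance  /var/lib/glance/images  nfs  defaults  0  0

  mount /var/lib/glance/images

Just make sure the mounted directory ends up owned/writable by the glance
user and is in place before glance-api starts; Glance won't know the
difference.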

-- 
Cheers,
~Blairo


[Openstack] Use IOMMU even when not doing device pass-through?

2013-05-16 Thread Blair Bethwaite
Hi all,

We're running a KVM-based OpenStack cloud. I recently realised we don't
have the IOMMU turned on in our hypervisors. All the indications I'm aware
of or can find suggest it's only really useful if you want guests accessing
host devices directly, e.g., PCI pass-through. But I wonder if there are
any other performance advantages to be gained...? Virtio, for one, doesn't
seem to use or need it.
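
(For context: enabling it is cheap to test; on Intel hardware it's just a
hypervisor kernel boot parameter, roughly:

  # /etc/default/grub -- illustrative only
  GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
  # then update-grub and reboot

...so the question is really whether it buys anything for plain virtio
guests.)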

-- 
Cheers,
~Blairo


[Openstack] nova-network IPv6

2013-04-29 Thread Blair Bethwaite
Hi all,

We've got a Folsom cloud running nova-network in FlatDHCP multi-host,
currently with just a single public fixed range.

We followed
http://docs.openstack.org/folsom/openstack-compute/admin/content/configuring-compute-to-use-ipv6-addresses.html
to get IPv6 going (plus adding a bunch of rules allowing ICMPv6 on the
compute bridges for NDP). We've now got instances with the desired global
IPv6 addresses but are having connectivity issues, e.g., no luck ping6-ing
instances from in or outside the cloud LAN.
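
For the record, the NDP rules we added look roughly like this (a sketch;
br100 is the compute bridge in our setup, yours may differ):

  ip6tables -I FORWARD -i br100 -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT
  ip6tables -I FORWARD -i br100 -p icmpv6 --icmpv6-type neighbour-solicitation -j ACCEPT
  ip6tables -I FORWARD -i br100 -p icmpv6 --icmpv6-type neighbour-advertisement -j ACCEPT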

We started looking into possible security group issues, which is when we
found this related tidbit:
-
Currently, ipv6 and other protocols cannot be managed with the security
rules, making them permitted by default
-
(
http://docs.openstack.org/folsom/openstack-compute/admin/content/instance-networking.html
)

Is that correct? If it is, it seems like the kind of thing that really
ought to be mentioned in the configuration info as it's a show-stopper for
public IPv6 deployment!

Also, any tips on what might be getting in the way would be much
appreciated.

--
Cheers,
~Blairo


Re: [Openstack] List of Cinder compatible devices

2013-04-16 Thread Blair Bethwaite
Hi John,

Sounds sensible. I think this post kind of diverged from what Tim was
asking, which seems to have been more about backend devices...?

Should this page be removed then, and a summary of what you wrote added to
the Cinder wiki page (including any exceptions for current core drivers)?

(On a related note, I don't think List Snapshots should be there; IIUC
that is not a function of the driver, but rather a matter of looking in the
DB at data produced by the create/delete snapshot driver calls.)

Cheers,


On 1 February 2013 04:12, John Griffith john.griff...@solidfire.com wrote:



 On Thu, Jan 31, 2013 at 8:56 AM, Koert van der Veer ko...@cloudvps.com wrote:

  In that case, it is probably best to transpose the table; with series
 included, the number of products will yield too many columns to be workable.

 Also: Do blank spaces indicate not supported or unknown?

 koert


 On 01/31/2013 04:47 PM, Shake Chen wrote:

 I think we need to add the vendor storage series,

 since not all EMC storage would support Cinder.



  On Thu, Jan 31, 2013 at 11:19 PM, Sébastien Han han.sebast...@gmail.com
  wrote:

 Just added some stuff about RBD where E refers to Essex.

 --
 Regards,
 Sébastien Han.


 On Thu, Jan 31, 2013 at 11:20 AM, Avishay Traeger avis...@il.ibm.com
 wrote:
  openstack-bounces+avishay=il.ibm@lists.launchpad.net wrote on
  01/31/2013 12:37:07 AM:
  From: Tom Fifield fifie...@unimelb.edu.au
  To: openstack@lists.launchpad.net,
  Date: 01/31/2013 12:38 AM
  Subject: Re: [Openstack] List of Cinder compatible devices
  Sent by: openstack-bounces+avishay=il.ibm@lists.launchpad.net
 
  Here's a starting point:
 
  http://wiki.openstack.org/CinderSupportMatrix
 
  Regards,
 
  Tom
 
  Tom,
  Thanks for doing this. I recommend that instead of Y, we should put the
  letter of the version in which the feature first appeared. So, for
  example: E, F, G, ...
 
  Thanks,
  Avishay
 
 





 --
 Shake Chen







 So thanks for putting this together, but this brings something up that
  I've been meaning to raise on the dev-list anyway.  In my opinion it should
  be a requirement that for a driver to be accepted in Cinder it implements
  all of the functionality of the base LVM driver (i.e., all of the rows
  listed in the matrix here).

 Having to go through and determine what feature is or is not supported per
 driver is EXACTLY what I want to avoid. If we go down the path of building
 a matrix and allowing partial integration it's going to create a huge mess
 and IMO the user experience is going to suffer greatly.  Of course a driver
 can do more than what's on the list, but I think this is the minimum
 requirement and I've been pushing back on submissions based on this.

 The only exceptions have been some of the newer Grizzly features, but
 that's only because we're moving those up to generalized cases that folks
 can inherit from if they use iSCSI.  For those that want to do FC or AOE
 drivers however they're going to need to have a solution of their own.

 My thought is there should be a simple list of back-end device and version
  and whether it's supported in Grizzly or Folsom or ...  All API features
 should be assumed available.






-- 
Cheers,
~Blairo


Re: [Openstack] Non responding instance

2013-04-08 Thread Blair Bethwaite
Dave,

Have you tried rebooting it (via OpenStack dashboard/CLI/API)? Obviously
you'll lose memory state, but the VM will boot from the same (current)
virtual disks, so if those disks are ok the instance will have all
previously saved/altered data.
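
E.g., try a soft reboot first and fall back to a hard reboot if the guest
OS isn't responding:

  nova reboot <instance-id>
  nova reboot --hard <instance-id>

(A hard reboot is the equivalent of pulling the virtual power cord, so only
use it if the soft reboot hangs.)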


On 8 April 2013 19:09, Dave Pigott dave.pig...@linaro.org wrote:

 Hi all,

 I've got an instance that's been running for several months now, and
 suddenly we can not ping or telnet to it. It shows up in the list as Active
 and Running, but just isn't responding. We earlier had a disk full problem
 on that node, but this happened before that occurred, so I don't think
 that's responsible (but I could be wrong). The instance is used extensively
 so I'd prefer to fix it rather than have to kill it and spin up another
 instance. FYI all other instances in the cloud are fine and we can ssh and
 ping them.

 Any ideas what I can do to get this instance active again?

 Thanks

 Dave
 Lava Lab Lead
 Linaro Ltd




-- 
Cheers,
~Blairo


Re: [Openstack] ssh from VM to VM

2013-03-17 Thread Blair Bethwaite
You probably also copied the private key when you did this, which, from your
description, is the bit you were missing. I.e., you were going from
hostA (with private key X) -> hostB (pub key X in authorized_keys, no copy
of private key X) -> hostC (pub key X in authorized_keys); hostC was
denying you access because you did not have private key X that it could
authenticate against.

Sounds like you probably want to be using ssh auth forwarding (see ssh -A,
used together with ssh-agent); this way you're not proliferating copies of
your private key!
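
I.e., something like this (hostA/hostB/hostC as per the scenario above):

  ubuntu@hostA:~$ eval $(ssh-agent)      # start an agent if one isn't running
  ubuntu@hostA:~$ ssh-add ~/.ssh/id_rsa  # load private key X into the agent
  ubuntu@hostA:~$ ssh -A ubuntu@hostB
  ubuntu@hostB:~$ ssh cirros@hostC       # auth is forwarded back to hostA's agent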

Cheers,


On 17 March 2013 06:47, Chathura M. Sarathchandra Magurawalage 
77.chath...@gmail.com wrote:

 I solved the issue by copying the RSA public key of the first VM to the
 second VM. I thought I did not have to do this.

 Thanks.

 On 16 March 2013 12:34, Pranav pps.pra...@gmail.com wrote:

 I think you need not exchange key pairs for Cirros image.
 Regards,
 Pranav


 On Sat, Mar 16, 2013 at 4:32 PM, Chathura M. Sarathchandra Magurawalage 
 77.chath...@gmail.com wrote:

 Thanks for your reply.

 I have added PasswordAuthentication yes to the ssh config file. All
 VMs have the same metadata, including the ssh public key of the controller,
 so I can't see why only the Cirros VMs can do this.

 Still does not work.



 On 16 March 2013 06:24, Aaron Rosen aro...@nicira.com wrote:

 I suspect that host 10.5.5.6 has ssh configured with
 PasswordAuthentication set to no, and that the public key of the host you
 are on is not in the authorized_keys file of 10.5.5.6.

 Aaron

  On Fri, Mar 15, 2013 at 7:26 PM, Chathura M. Sarathchandra
 Magurawalage 77.chath...@gmail.com wrote:

 Hello,

 I can't ssh from an Ubuntu cloud VM to another VM. I get the following:

 ubuntu@master:~$ ssh cirros@10.5.5.6 -v
 OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
 debug1: Reading configuration data /etc/ssh/ssh_config
 debug1: /etc/ssh/ssh_config line 19: Applying options for *
 debug1: Connecting to 10.5.5.6 [10.5.5.6] port 22.
 debug1: Connection established.
 debug1: identity file /home/ubuntu/.ssh/id_rsa type -1
 debug1: identity file /home/ubuntu/.ssh/id_rsa-cert type -1
 debug1: identity file /home/ubuntu/.ssh/id_dsa type -1
 debug1: identity file /home/ubuntu/.ssh/id_dsa-cert type -1
 debug1: identity file /home/ubuntu/.ssh/id_ecdsa type -1
 debug1: identity file /home/ubuntu/.ssh/id_ecdsa-cert type -1
 debug1: Remote protocol version 2.0, remote software version
 OpenSSH_5.9p1 Debian-5ubuntu1
 debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
 debug1: Enabling compatibility mode for protocol 2.0
 debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
 debug1: SSH2_MSG_KEXINIT sent
 debug1: SSH2_MSG_KEXINIT received
 debug1: kex: server->client aes128-ctr hmac-md5 none
 debug1: kex: client->server aes128-ctr hmac-md5 none
 debug1: sending SSH2_MSG_KEX_ECDH_INIT
 debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
 debug1: Server host key: ECDSA
 7b:8f:6a:ee:ba:e5:0a:c5:04:01:ca:bd:e5:38:69:55
 debug1: Host '10.5.5.6' is known and matches the ECDSA host key.
 debug1: Found key in /home/ubuntu/.ssh/known_hosts:4
 debug1: ssh_ecdsa_verify: signature correct
 debug1: SSH2_MSG_NEWKEYS sent
 debug1: expecting SSH2_MSG_NEWKEYS
 debug1: SSH2_MSG_NEWKEYS received
 debug1: Roaming not allowed by server
 debug1: SSH2_MSG_SERVICE_REQUEST sent
 debug1: SSH2_MSG_SERVICE_ACCEPT received
 debug1: Authentications that can continue: publickey
 debug1: Next authentication method: publickey
 debug1: Trying private key: /home/ubuntu/.ssh/id_rsa
 debug1: Trying private key: /home/ubuntu/.ssh/id_dsa
 debug1: Trying private key: /home/ubuntu/.ssh/id_ecdsa
 debug1: No more authentication methods to try.
 Permission denied (publickey).

 But I can ssh to my Cirros VMs. Also, I can ssh from an Ubuntu VM to a
 Cirros VM.

 Any Idea?

 Thanks.













-- 
Cheers,
~Blairo


Re: [Openstack] [openstack-dev][block migration] How to test kvm block migration feature?

2013-03-01 Thread Blair Bethwaite
On 1 March 2013 18:59, Qinglong.Meng mengql112...@gmail.com wrote:

  I want to test OpenStack (Folsom release) block migration based on
 KVM, but I can't find any useful configuration docs; they only cover
 Xen.
  here is the original doc:

 http://docs.openstack.org/folsom/openstack-compute/admin/content/configuring-migrations.html


I think the section headed Block Migration is incorrectly nested under
XenServer, or there should also be a Block Migration section under
KVM-Libvirt. Also missing is the nova flag you should have:

  block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_NON_SHARED_INC,VIR_MIGRATE_LIVE

That, along with the other changes mentioned in the section (obviously
excluding shared storage), should give you live block-migration.
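
For completeness, a block migration is then requested from the CLI with
something like:

  nova live-migration --block-migrate <instance-uuid> <target-host>

(I'm going from memory on the exact option spelling; check nova help
live-migration on your install.)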

-- 
Cheers,
~Blairo


Re: [Openstack] AggregateInstanceExtraSpecs very slow?

2013-02-26 Thread Blair Bethwaite
On 27 February 2013 10:28, Sam Morrison sorri...@gmail.com wrote:

 Thanks Chris, this helps a lot. I've updated the bug report for anyone
 else following along.


That's https://bugs.launchpad.net/nova/+bug/1133495, for any other lurkers
too lazy to search.

-- 
Cheers,
~Blairo


Re: [Openstack] Question about Disk Setup of Nova Compute Node

2013-01-16 Thread Blair Bethwaite
On 17 January 2013 09:48, Sean Bigdatafun sean.bigdata...@gmail.com wrote:

 IMO, it's always a tradeoff. I am just very curious how Amazon configures
 its EC2 hardware.


Good luck getting that information! You might be able to make some guesses
from benchmark info though; there is plenty around. This post from Scalyr
takes a great all-round look at EC2 IO performance, and it's pretty recent
data: http://blog.scalyr.com/2012/10/16/a-systematic-look-at-ec2-io/

Personally, I would guess that for ephemeral drives they use either no RAID
at all or RAID10 (probably none, though), especially as they don't advertise
any reliability guarantees for these drives.
parity-based RAID configs and the effect it would have on guest performance
would be a significant consideration. Having said that, it's difficult to
find reports of ephemeral drive failures, and Amazon don't quote any
failure rates themselves.

-- 
Cheers,
~Blairo


Re: [Openstack] Live migration problem in OpenStack ESSEX

2012-12-17 Thread Blair Bethwaite
On 18 December 2012 14:45, Hanyu Xiao hanyu.x...@eayun.com wrote:

 If you want to use live migration, you need shared storage


Not so: if you are using KVM then block migration is also an option, and it
seems to be documented in the trunk docs now too. See:
http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-migrations.html

The note about compatible XenServer hypervisors might be a little
confusing: it does not mean block migration only works with specific
versions of Xen; it means that for it to work with Xen, your Xen must
support Storage XenMotion.

Cheers,
~Blairo


Re: [Openstack] S3 Token

2012-12-08 Thread Blair Bethwaite
Hi Chmouel,

On 9 December 2012 00:22, Chmouel Boudjnah chmo...@chmouel.com wrote:

 I personally would vote for number three; I don't think many people are
 using this (i.e., swift+keystone+s3), or at least I haven't heard much
 feedback about the middleware.


Option 3 is very unpalatable IMHO. People have existing clients using the
EC2 creds with libraries such as Boto and Libcloud, so any kind of complete
removal would need to be staged over releases to give fair warning. What's
more, surely part of the point in providing the AWS APIs in the first place
is to ease porting; requiring different creds for the EC2 and S3 APIs
unnecessarily complicates that.

I vote for option 1 or 2.

-- 
Cheers,
~Blairo


Re: [Openstack] Fwd: [swift3] api - boto and libcloud = AccessDenied

2012-12-07 Thread Blair Bethwaite
Ah yes, you said that already - sorry. It looks like you missed the S3token
middleware in your proxy conf, see:
http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-swift-with-s3-emulation-to-use-keystone.html
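
I.e., the proxy pipeline needs an s3token filter between swift3 and
authtoken, roughly like this (going from the Folsom docs; adjust the auth
host/port for your deployment):

  [pipeline:main]
  pipeline = catch_errors healthcheck cache ratelimit swift3 s3token authtoken keystoneauth proxy-logging proxy-server

  [filter:s3token]
  paste.filter_factory = keystone.middleware.s3_token:filter_factory
  auth_host = 127.0.0.1
  auth_port = 35357
  auth_protocol = http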


On 7 December 2012 21:16, Antonio Messina arcimbo...@gmail.com wrote:

 I tried with the EC2 credentials too, but without luck.

 Do I need to create specific endpoints for s3 in keystone? Using what
 URLs? Is my configuration and code correct?

 .a.



 On Thu, Dec 6, 2012 at 11:46 PM, Blair Bethwaite 
 blair.bethwa...@gmail.com wrote:

 Hi Antonio,

 It sounds like you might be using the wrong credentials. The S3 layer
 works with the EC2 credentials.


 On 6 December 2012 06:33, Antonio Messina arcimbo...@gmail.com wrote:


 Hi all,

 I'm trying to access SWIFT using the S3 API compatibility layer, but I
 always get an AccessDenied.

 I'm running folsom on ubuntu precise 12.04 LTS, packages are from
 ubuntu-cloud.archive.canonical.com repository. Swift is correctly
 configured, login and password have been tested with the web interface and
 from command line. Glance uses it to store the images.

 I've installed swift-plugin-s3 and I've configured proxy-server.conf as
  follows:

 pipeline = catch_errors healthcheck cache ratelimit authtoken
 keystoneauth swift3  proxy-logging proxy-server
 [filter:swift3]
 use = egg:swift3#swift3

 I've then tried to connect using my keystone login and password (and
 I've also tried with the EC2 tokens, with the same result).

 The code I'm using is:

 from libcloud.storage.types import Provider as StorageProvider
 from libcloud.storage.providers import get_driver as get_storage_driver

 s3driver = get_storage_driver(StorageProvider.S3)
 s3 = s3driver(ec2access, ec2secret, secure=False, host=s3host, port=8080)
 s3.list_containers()

 What I get is:

 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File
 "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/storage/drivers/s3.py",
 line 176, in list_containers
 response = self.connection.request('/')
   File
 "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/common/base.py",
 line 605, in request
 connection=self)
   File
 "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/common/base.py",
 line 93, in __init__
 raise Exception(self.parse_error())
   File
 "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/storage/drivers/s3.py",
 line 68, in parse_error
 raise InvalidCredsError(self.body)
 libcloud.common.types.InvalidCredsError: '<?xml version="1.0"
 encoding="UTF-8"?>\r\n<Error>\r\n  <Code>AccessDenied</Code>\r\n
 <Message>Access denied</Message>\r\n</Error>'


 Using boto instead:

  >>> import boto
  >>> s3conn = boto.s3.connection.S3Connection(
 aws_access_key_id=ec2access, aws_secret_access_key=ec2secret, port=s3port,
 host=s3host, is_secure=False, debug=3)
  >>> s3conn.get_all_buckets()
 send: 'GET / HTTP/1.1\r\nHost: cloud-storage1:8080\r\nAccept-Encoding:
 identity\r\nDate: Wed, 05 Dec 2012 19:25:00 GMT\r\nContent-Length:
 0\r\nAuthorization: AWS
 7c67d5b35b5a4127887c5da319c70a18:WXVx9AONXvIkDiIdg8rUnfncFnM=\r\nUser-Agent:
 Boto/2.6.0 (linux2)\r\n\r\n'
 reply: 'HTTP/1.1 403 Forbidden\r\n'
 header: Content-Type: text/xml; charset=UTF-8
 header: Content-Length: 124
 header: X-Trans-Id: tx7a823c742f624f2682bfddb19f31bcc2
 header: Date: Wed, 05 Dec 2012 19:24:42 GMT
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File
 "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/boto/s3/connection.py",
 line 364, in get_all_buckets
 response.status, response.reason, body)
 boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
 <?xml version="1.0" encoding="UTF-8"?>
 <Error>
   <Code>AccessDenied</Code>
   <Message>Access denied</Message>
 </Error>

 Login and password work when using the command line tool `swift`.

  I think I may be missing something very basic here, but I couldn't
  find much documentation...

 Thanks in advance

 .a.

 --
 antonio.s.mess...@gmail.com
 arcimbo...@gmail.com
 GC3: Grid Computing Competence Center
 http://www.gc3.uzh.ch/
 University of Zurich
 Winterthurerstrasse 190
 CH-8057 Zurich Switzerland





 --
 Cheers,
 ~Blairo




 --
 antonio.s.mess...@gmail.com
 arcimbo...@gmail.com
 GC3: Grid Computing Competence Center
 http://www.gc3.uzh.ch/
 University of Zurich
 Winterthurerstrasse 190
 CH-8057 Zurich Switzerland




-- 
Cheers,
~Blairo


Re: [Openstack] Fwd: [swift3] api - boto and libcloud = AccessDenied

2012-12-07 Thread Blair Bethwaite
On 7 December 2012 21:51, Antonio Messina arcimbo...@gmail.com wrote:

 'swift-proxy.conf' is a typo for 'proxy-server.conf', right?


Looks like it, yes.


Re: [Openstack] Fwd: [swift3] api - boto and libcloud = AccessDenied

2012-12-06 Thread Blair Bethwaite
Hi Antonio,

It sounds like you might be using the wrong credentials. The S3 layer works
with the EC2 credentials.


On 6 December 2012 06:33, Antonio Messina arcimbo...@gmail.com wrote:


 Hi all,

 I'm trying to access SWIFT using the S3 API compatibility layer, but I
 always get an AccessDenied.

 I'm running folsom on ubuntu precise 12.04 LTS, packages are from
 ubuntu-cloud.archive.canonical.com repository. Swift is correctly
 configured, login and password have been tested with the web interface and
 from command line. Glance uses it to store the images.

 I've installed swift-plugin-s3 and I've configured proxy-server.conf as
  follows:

 pipeline = catch_errors healthcheck cache ratelimit authtoken keystoneauth
 swift3  proxy-logging proxy-server
 [filter:swift3]
 use = egg:swift3#swift3

 I've then tried to connect using my keystone login and password (and I've
 also tried with the EC2 tokens, with the same result).

 The code I'm using is:

 from libcloud.storage.types import Provider as StorageProvider
 from libcloud.storage.providers import get_driver as get_storage_driver

 s3driver = get_storage_driver(StorageProvider.S3)
 s3 = s3driver(ec2access, ec2secret, secure=False, host=s3host, port=8080)
 s3.list_containers()

 What I get is:

 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File
 "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/storage/drivers/s3.py",
 line 176, in list_containers
 response = self.connection.request('/')
   File
 "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/common/base.py",
 line 605, in request
 connection=self)
   File
 "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/common/base.py",
 line 93, in __init__
 raise Exception(self.parse_error())
   File
 "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/libcloud/storage/drivers/s3.py",
 line 68, in parse_error
 raise InvalidCredsError(self.body)
 libcloud.common.types.InvalidCredsError: '<?xml version="1.0"
 encoding="UTF-8"?>\r\n<Error>\r\n  <Code>AccessDenied</Code>\r\n
 <Message>Access denied</Message>\r\n</Error>'


 Using boto instead:

  >>> import boto
  >>> s3conn = boto.s3.connection.S3Connection(aws_access_key_id=ec2access,
 aws_secret_access_key=ec2secret, port=s3port, host=s3host,
 is_secure=False, debug=3)
  >>> s3conn.get_all_buckets()
 send: 'GET / HTTP/1.1\r\nHost: cloud-storage1:8080\r\nAccept-Encoding:
 identity\r\nDate: Wed, 05 Dec 2012 19:25:00 GMT\r\nContent-Length:
 0\r\nAuthorization: AWS
 7c67d5b35b5a4127887c5da319c70a18:WXVx9AONXvIkDiIdg8rUnfncFnM=\r\nUser-Agent:
 Boto/2.6.0 (linux2)\r\n\r\n'
 reply: 'HTTP/1.1 403 Forbidden\r\n'
 header: Content-Type: text/xml; charset=UTF-8
 header: Content-Length: 124
 header: X-Trans-Id: tx7a823c742f624f2682bfddb19f31bcc2
 header: Date: Wed, 05 Dec 2012 19:24:42 GMT
 Traceback (most recent call last):
   File "<stdin>", line 1, in <module>
   File
 "/home/antonio/.virtualenvs/cloud/local/lib/python2.7/site-packages/boto/s3/connection.py",
 line 364, in get_all_buckets
 response.status, response.reason, body)
 boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
 <?xml version="1.0" encoding="UTF-8"?>
 <Error>
   <Code>AccessDenied</Code>
   <Message>Access denied</Message>
 </Error>

 Login and password work when using the command line tool `swift`.

  I think I may be missing something very basic here, but I couldn't find
  much documentation...

 Thanks in advance

 .a.

 --
 antonio.s.mess...@gmail.com
 arcimbo...@gmail.com
 GC3: Grid Computing Competence Center
 http://www.gc3.uzh.ch/
 University of Zurich
 Winterthurerstrasse 190
 CH-8057 Zurich Switzerland





-- 
Cheers,
~Blairo


Re: [Openstack] [Swift] Can i create a public container with write access?

2012-11-20 Thread Blair Bethwaite
Hi Sujay,

On 19 November 2012 14:52, Sujay M sujay@gmail.com wrote:

 I was wondering if I can create a public container such that I need not
 authenticate to upload files on to it. I know how to create one with
 read-for-all (post -r '.r:*') permissions, but how do I create a
 write-for-all container? Thanks in advance.


I think you might want TempURL -
http://docs.openstack.org/developer/swift/misc.html?highlight=tempurl#module-swift.common.middleware.tempurl
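
A minimal sketch of generating one (Python 2; assumes the tempurl
middleware is in your proxy pipeline and that you've set
X-Account-Meta-Temp-URL-Key to 'mykey' on the account; the path and key
here are hypothetical):

  import hmac, time
  from hashlib import sha1

  method = 'PUT'
  expires = int(time.time()) + 3600  # link valid for one hour
  path = '/v1/AUTH_account/container/object'
  key = 'mykey'
  # HMAC-SHA1 signature over "METHOD\nEXPIRES\nPATH" using the account key
  sig = hmac.new(key, '%s\n%s\n%s' % (method, expires, path), sha1).hexdigest()
  url = 'https://swift.example.com%s?temp_url_sig=%s&temp_url_expires=%s' % (
      path, sig, expires)

Anyone holding that URL can then PUT that one object until it expires,
without needing any credentials.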

-- 
Cheers,
~Blairo


Re: [Openstack] File Injection through Horizon

2012-10-03 Thread Blair Bethwaite
On 3 October 2012 22:27, Kiall Mac Innes ki...@managedit.ie wrote:
 Really? The same cloud-init as ubuntu uses?

It's a .NET service; it can execute cmd.exe batch and PowerShell scripts, I believe.

-- 
Cheers,
~Blairo



Re: [Openstack] File Injection through Horizon

2012-10-03 Thread Blair Bethwaite
On 3 October 2012 22:35, Kiall Mac Innes ki...@managedit.ie wrote:
 Ah - I had meant the RHEL version :)

That'd be why you explicitly mentioned Debian-isms! Nothing to see here...

-- 
Cheers,
~Blairo



Re: [Openstack] KVM live block migration: stability, future, docs

2012-08-12 Thread Blair Bethwaite
Hi Vish,

On 10 August 2012 00:27, Vishvananda Ishaya vishvana...@gmail.com wrote:

 On Aug 9, 2012, at 7:13 AM, Daniel P. Berrange berra...@redhat.com wrote:


 With non-live migration, the migration operation is guaranteed to
 complete. With live migration, you can get into a non-convergence
 scenario where the guest is dirtying data faster than it can be
 migrated. With the way Nova currently works the live migration
 will just run forever with no way to stop it. So if you want to
 enable live migration by default, we'll need todo more than
 simply set the flag. Nova will need to be able to monitor the
 migration, and either cancel it after some time, or tune the
 max allowed downtime to let it complete

 Ah good to know. So it sounds like we should keep the default as-is
 for now and revisit it later.

I'm not so sure. It seems to me that nova migrate should be the
offline/paused migration and nova live-migration should be _live_
migration, like it says. Semantic mismatches like this exposed to
operators/users are bad news. As it is, I don't even know what nova
migrate is supposed to do...? There's at least a need to improve the
docs on this.

Daniel's point about the non-convergence cases with
[live|block]-migration is certainly good to know. It sounds like in
practice the key settings, such as the allowable live-migration
downtime, should be tuned to the deployment. Nova should probably
default to a conservatively high allowable downtime.
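
(For anyone wanting to experiment in the meantime, libvirt lets you poke
that knob directly on a running migration, e.g.:

  virsh migrate-setmaxdowntime <domain> 500   # max pause, in milliseconds

...though obviously Nova monitoring and tuning this itself is the right
long-term answer.)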

Daniel: any advice about choosing a sensible value for the allowable downtime?

-- 
Cheers,
~Blairo



Re: [Openstack] KVM live block migration: stability, future, docs

2012-08-09 Thread Blair Bethwaite
Hi Daniel,

Thanks for following this up!

On 8 August 2012 19:53, Daniel P. Berrange berra...@redhat.com wrote:
 not tune this downtime setting, I don't see how you'd see 4 mins
 downtime unless it was not truly live migration, or there was

Yes, quite right. It turns out Nova is not passing/setting libvirt's
VIR_MIGRATE_LIVE when it is asked to live-migrate a guest, so it is
not proper live-migration. That is the default behaviour unless the
flag is added to the migrate flags in nova.conf; unfortunately, that
flag isn't currently mentioned in the OpenStack docs either.
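
For anyone else hitting this, the fix is a one-liner in nova.conf (this is
the value we're now using; adjust to taste):

  live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE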

-- 
Cheers,
~Blairo



Re: [Openstack] KVM live block migration: stability, future, docs

2012-08-09 Thread Blair Bethwaite
Daniel,

Thanks for providing this insight, most useful. I'm interpreting this
as: block migration can be used in non-critical applications, mileage
will vary, thorough testing in the particular environment is
recommended. An alternative implementation will come, but the higher
level feature (live-migration without shared storage) is unlikely to
disappear.

Is that a reasonable appraisal?

On 8 August 2012 19:59, Daniel P. Berrange berra...@redhat.com wrote:
 Block migration is a part of the KVM that none of the upstream developers
 really like, is not entirely reliable, and most distros typically do not
 want to support it due to its poor design (eg not supported in RHEL).

Would you mind/be-able-to elaborate on those reliability issues? E.g.,
is there anything we can do to mitigate them?

-- 
Cheers,
~Blairo



Re: [Openstack] KVM live block migration: stability, future, docs

2012-08-07 Thread Blair Bethwaite
Hi Sébastien,

Thanks for responding! By the way, I have come across your blog post
regarding this and should reference it for the list:
http://www.sebastien-han.fr/blog/2012/07/12/openstack-block-migration/

On 7 August 2012 17:45, Sébastien Han han.sebast...@gmail.com wrote:
 I think it's a pretty useful feature, a good compromise. As you said, using a
 shared fs implies a lot of things and can dramatically decrease your
 performance relative to using the local fs.

Agreed, scale-out distributed file-systems are hard. Consistent
hashing based systems (like Gluster and Ceph) seem like the answer to
many of the existing issues with systems trying to mix scalability,
performance and POSIX compliance. But the key issue is how one
measures performance for these systems... throughput for large
synchronous reads and writes may scale linearly (up to network
saturation), but random IOPS are another thing entirely. As far as I
can tell, random IOPS are the primary metric of concern in the design
of the nova-compute storage, whereas both capacity and throughput
requirements are relatively easy to specify and simply represent hard
limits that must be met to support the various instance flavours you
plan to offer.

It's interesting to note that RedHat do not recommend using RHS
(RedHat Storage), their RHEL-based Gluster (which they own now)
appliance, for live VM storage.

Additionally, operations issues are much harder to handle with a DFS
(even NFS), e.g., how can I put an upper limit on disk I/O for any
particular instance when its ephemeral disk files are across the
network and potentially striped into opaque objects across multiple
storage bricks...?
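
(With local disk files it's at least tractable; e.g., a recent enough
libvirt/QEMU can throttle a specific guest disk:

  virsh blkdeviotune <domain> vda --total-iops-sec 500

but I don't see how to express that sensibly when the device is really a
stripe of objects across remote bricks.)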

 I tested it and I will use it
 for my deployment. I'll be happy to discuss more deeply with you about this
 feature :)

Great. We have tested too. Compared to regular (non-block) live
migrate, we don't see much difference in the guest - both scenarios
involve a minute or two of interruption as the guest is moved (e.g.
VNC and SSH sessions hang temporarily), which I find slightly
surprising - is that your experience too?

 I also feel a little concerned about this statement:

  It don't work so well, it complicates migration code, and we are building
 a replacement that works.


 I have to go further with my tests, maybe we could share some ideas, use
 case etc...

I think it may be worth asking about this on the KVM lists, unless
anyone here has further insights...?

I grabbed the KVM 1.0 source from Ubuntu Precise and vanilla KVM 1.1.1
from Sourceforge; block migration appears to remain in place despite
those (sparse) comments from the KVM meeting minutes (though I am
naive to the source layout and project structure, so could have easily
missed something). In any case, it seems unlikely Precise would see a
forced update to the 1.1.x series.

-- 
Cheers,
~Blairo



Re: [Openstack] KVM live block migration: stability, future, docs

2012-08-07 Thread Blair Bethwaite
Hi Jay,

On 8 August 2012 06:13, Jay Pipes jaypi...@gmail.com wrote:
 Why would you find this surprising? I'm just curious...

The live migration algorithm detailed here:
http://www.linux-kvm.org/page/Migration, seems to me to indicate that
only a brief pause should be expected. Indeed, the summary says,
"Almost unnoticeable guest down time".

But to the contrary. I tested live-migrate (without block migrate)
last night using a guest with 8GB RAM (almost fully committed) and
lost any access/contact with the guest for over 4 minutes - it was
paused for the duration. Not something I'd want to do to a user's
web-server on a regular basis...

 cc'd Daniel Berrange, who seems to be keyed in on upstream KVM/Qemu
 activity. Perhaps Daniel could shed some light.

That would be wonderful. Thanks!

-- 
Cheers,
~Blairo



Re: [Openstack] KVM live block migration: stability, future, docs

2012-08-07 Thread Blair Bethwaite
On 8 August 2012 11:33, Jay Pipes jaypi...@gmail.com wrote:
 Sorry, from your original post, I didn't think you were referring to
 live migration, but rather just server migration. You had written
 "Compared to regular (non-block) live migrate", but I read that as
 "Compared to regular migrate" and thought you were referring to the
 server migration behaviour that Nova supports... sorry about that!

Jay, is your use of the wording "behaviour that Nova supports" there
significant? I mean, you're not trying to indicate that Nova does not
support _live_ migration, are you?

Anyway, I found this relevant and stale bug:
https://bugs.launchpad.net/nova/+bug/883845. VIR_MIGRATE_LIVE remains
undefined in 
https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py.
We only just discovered the lack of this as a default option, so we'll
test further, this time with VIR_MIGRATE_LIVE explicitly added to the
migration flags in nova.conf...

-- 
Cheers,
~Blairo



[Openstack] KVM live block migration: stability, future, docs

2012-08-06 Thread Blair Bethwaite
Hi all,

KVM block migration support in OpenStack
(https://blueprints.launchpad.net/nova/+spec/kvm-block-migration)
seems to be somewhat of a secret - there's almost nothing in the
docs/guides (which to the contrary state that live migration is only
possible with shared storage) and only a couple of mentions on list,
yet it's been around since Diablo. Should this be taken to mean it's
considered unstable, or just that no-one interested in documenting it
understands the significance of such a feature to deployment
architects? After all, decent shared storage is an expensive prospect
with a pile of associated design and management overhead!

I'd be happy to contribute some documentation patches (starting with
the admin guide) that cover this. But first I'd like to get some
confirmation that it's here to stay, which will be significant for our
own large deployment. We've tested with Essex on Ubuntu Precise and
seen a bit of weird file-system behaviour, which we currently suspect
might be a consequence of using ext3 in the guest. But also, there
seems to be some associated lag with interactive services (e.g. active
VNC session) in the guest; we're not yet sure how this compares to the
non-block live migration case.

We'd really appreciate anybody actively using this feature to speak up
and comment on their mileage, especially with respect to ops.

I'm slightly concerned that KVM may drop this going forward
(http://www.spinics.net/lists/kvm/msg72228.html), though that would be
unlikely to affect anybody deploying on Precise.

-- 
Cheers,
~Blairo
