Re: [Openstack] (no subject)
Hi,

I restarted the proxy server, and I tried to HEAD the account by executing this command:

curl -k -v -H 'X-Auth-Token: My x-auth-token' My x-storage-url

but I got the same error. This is what I found in the syslog file:

Jan 20 09:43:34 node1 proxy-server - x.x.x.x 20/Jan/2012/08/43/34 GET /auth/v1.0 HTTP/1.0 200 - curl/7.21.6%20%28x86_64-pc-linux-gnu%29%20libcurl/7.21.6%20OpenSSL/1.0.0e%20zlib/1.2.3.4%20libidn/1.22%20librtmp/2.3 - - - - - - 0.0012
Jan 20 09:45:19 node1 proxy-server Account GET returning 503 for [] (txn: txf1ab6d3bbe994668a816d3ec585ab8eb) (client_ip: x.x.x.x)
Jan 20 09:45:19 node1 proxy-server x.x.x.x x.x.x.x 20/Jan/2012/08/45/19 GET /v1/AUTH_system HTTP/1.0 503 - curl/7.21.6%20%28x86_64-pc-linux-gnu%29%20libcurl/7.21.6%20OpenSSL/1.0.0e%20zlib/1.2.3.4%20libidn/1.22%20librtmp/2.3 system%2CAUTH_tkec61648aa80744f18ebb28ece90073b1 - - - txf1ab6d3bbe994668a816d3ec585ab8eb - 0.0082

Can you please help me to solve this problem?

Thanks in advance for any help.

Best regards,
Khaled

Subject: Re: [Openstack] (no subject)
From: m...@not.mn
Date: Thu, 19 Jan 2012 18:35:20 -0600
CC: openstack@lists.launchpad.net
To: khaled-...@hotmail.com

Look in syslog on your proxy server to see what caused the error.
--John

On Jan 19, 2012, at 6:28 PM, Khaled Ben Bahri wrote:

Hi all,

I tried to install OpenStack Swift. After creating and configuring all nodes, I wanted to check that Swift works, so I executed this command:

swift -A https://$PROXY_LOCAL_NET_IP:8080/auth/v1.0 -U system:root -K testpass stat

but I got an error:

Account HEAD failed: https://x.x.x.x:8080/v1/AUTH_system 503 Internal Server Error

Can anyone please help me?

Thanks in advance for any help.

Best regards,
Khaled

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
[Openstack] How to resolve error Domain not found: no domain with matching ...
When I rebooted my VM today (reboot worked fine before), it showed:

2012-01-20 16:55:19,112 WARNING nova.virt.libvirt_conn [-] Error from libvirt during undefine of instance-0013. Code=42 Error=Domain not found: no domain with matching uuid '60f24c80-aee7-8e21-a14f-0682b82b13e2'
2012-01-20 16:55:19,113 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE:   File /data/stack/nova/nova/exception.py, line 98, in wrapped
(nova.exception): TRACE:     return f(*args, **kw)
(nova.exception): TRACE:   File /data/stack/nova/nova/virt/libvirt/connection.py, line 499, in reboot
(nova.exception): TRACE:     self.destroy(instance, network_info, cleanup=False)
(nova.exception): TRACE:   File /data/stack/nova/nova/virt/libvirt/connection.py, line 299, in destroy
(nova.exception): TRACE:     virt_dom.undefine()
(nova.exception): TRACE:   File /usr/lib/python2.7/dist-packages/libvirt.py, line 975, in undefine
(nova.exception): TRACE:     if ret == -1: raise libvirtError ('virDomainUndefine() failed', dom=self)
(nova.exception): TRACE: libvirtError: Domain not found: no domain with matching uuid '60f24c80-aee7-8e21-a14f-0682b82b13e2'
(nova.exception): TRACE:
2012-01-20 16:55:19,153 ERROR nova.rpc [-] Exception during message handling
(nova.rpc): TRACE: Traceback (most recent call last):
(nova.rpc): TRACE:   File /data/stack/nova/nova/rpc/impl_kombu.py, line 620, in _process_data
(nova.rpc): TRACE:     rval = node_func(context=ctxt, **node_args)
(nova.rpc): TRACE:   File /data/stack/nova/nova/exception.py, line 98, in wrapped
(nova.rpc): TRACE:     return f(*args, **kw)
(nova.rpc): TRACE:   File /data/stack/nova/nova/compute/manager.py, line 117, in decorated_function
(nova.rpc): TRACE:     function(self, context, instance_id, *args, **kwargs)
(nova.rpc): TRACE:   File /data/stack/nova/nova/compute/manager.py, line 631, in reboot_instance
(nova.rpc): TRACE:     self.driver.reboot(instance_ref, network_info)
(nova.rpc): TRACE:   File /data/stack/nova/nova/exception.py, line 129, in wrapped
(nova.rpc): TRACE:     raise Error(str(e))
(nova.rpc): TRACE: Error: Domain not found: no domain with matching uuid '60f24c80-aee7-8e21-a14f-0682b82b13e2'
(nova.rpc): TRACE:

Why did it happen?

Then I went to the instances dir, added <uuid>60f24c80-aee7-8e21-a14f-0682b82b13e2</uuid> to ***.xml, and used virsh create ***.xml to create a VM; when I rebooted again, it failed. On the other hand, when I used virsh define ***.xml and then rebooted again, it succeeded. I want to know: is this correct?

--
"Only with simplicity can one make one's purpose clear; only with serenity can one reach far."

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
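[Editor's note: the behavior described above matches libvirt's transient-vs-persistent domain model. virsh create starts a transient domain directly from the XML, leaving no stored definition behind once it is destroyed, while virsh define registers a persistent definition that Nova's reboot path (destroy, then re-create) can find again. A rough illustration, with the domain name assumed from the log above:]

```sh
# Transient domain: starts immediately from the XML, but no definition
# survives a destroy, so a later lookup by UUID fails.
virsh create instance-0013.xml

# Persistent domain: register the XML with libvirtd, then start it.
# Nova expects to find the domain definition, so this path works.
virsh define instance-0013.xml
virsh start instance-0013
```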
[Openstack] [ANNOUNCE] OpenStack Nova and Glance 2011.3.1 released
Hey,

In the months since the Diablo release, we have been busy selectively back-porting bugfixes to the stable/diablo branches of Nova and Glance. Well, those fixes are now available as 2011.3.1 releases!

These releases are bugfix updates to Diablo and are intended to be relatively risk-free, with no intentional regressions or API changes. The lists of bugs fixed can be seen here:

https://launchpad.net/nova/+milestone/2011.3.1
https://launchpad.net/glance/+milestone/2011.3.1

Please read (and add to!) the release notes at:

http://wiki.openstack.org/ReleaseNotes/2011.3.1

Enjoy!

Mark.

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] Memory quota in nova-compute nodes.
On Jan 19, 2012, at 6:03 PM, Joe Gordon wrote:

Hi Jorge,

I have two questions:

1) Has anyone optimized Nova to work in an HPC environment like you describe? Such as an intelligent scheduler that will generate VMs that consume x percent of a physical machine's resources (so you don't end up with one machine hosting two separate Hadoop instances competing for resources)?

I don't know about that. It would be a great feature!

2) Why not use something like http://en.wikipedia.org/wiki/TORQUE_Resource_Manager or http://hadoop.apache.org/common/docs/r0.16.4/hod.html?

I'm new to infrastructure for processing applications, and I didn't know these frameworks. Very interesting. But the Hadoop example is just one of our uses. There are others that don't necessarily need an HPC environment, so a private cloud will be an easy way to provide infrastructure. And there are others that use clusters but aren't map-and-reduce tasks, so they don't use Hadoop.

best, Joe Gordon

On Thu, Jan 19, 2012 at 10:49 AM, Jorge Luiz Correa corre...@gmail.com wrote:

Hmm, I'm just studying cgroups to understand how to try this with libvirt and KVM (all nodes are Linux here). Our use case is a test that can be very useful for us. We have about 150 computers spread over the LAN. These computers are desktops and notebooks, and they are underutilized. So our test scenario is not an isolated datacenter, which I think is the ideal scenario for private clouds.

We need to run some simulations that require, most of the time, a lot of processor nodes with not so much memory (for example, to run Hadoop). Since the computers have 8 GB or 16 GB and are used to run office applications, they are mostly idle. We are thinking of attaching them to the cloud (a controller of a private cloud) and using these idle resources. But we want to ensure nova-compute does not interfere with the computers' usability (where we can define what is considered usability, like 2 cores and 4 GB of memory).
These idle resources over the LAN can be VERY useful, and they are cheap (they have already been purchased)! We also have a laboratory with 20 good hosts that is used only during certain periods; at the lab we can use all the hosts' resources when it isn't otherwise in use. This is our test scenario.

Regards. :)

On Thu, Jan 19, 2012 at 4:00 PM, Christian Berendt bere...@b1-systems.de wrote:

Hi Jorge.

I would like to know if it's possible to configure a quota on each nova-compute node. For example, I set up new hardware with 8 GB of memory and install nova-compute, but I want only 4 GB of memory to be used (dedicated to nova-compute). Is it possible? If so, how can I configure that?

I can't remember such a function at the moment, but it would be relatively simple to implement such a feature (at least for Linux systems) using cgroups. Can you please describe your use case? At the moment I can't see where I would use the feature. Why should I install nova-compute on a bare-metal system with 32 GByte of memory and only use 16 GByte?

Bye, Christian.

-- Christian Berendt Linux / Unix Consultant Developer Mail: bere...@b1-systems.de B1 Systems GmbH Osterfeldstraße 7 / 85088 Vohburg / http://www.b1-systems.de GF: Ralph Dehner / Unternehmenssitz: Vohburg / AG: Ingolstadt, HRB 3537

-- MSc. Correa, J.L.

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
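[Editor's note: the cgroups approach Christian mentions could look roughly like the sketch below. All paths are assumptions (cgroup v1 with the memory controller mounted); note this caps the nova-compute process itself, and the qemu/KVM guest processes would also need to be placed in the group (e.g. via libvirt's cgroup integration) for the cap to cover the VMs.]

```sh
# Create a memory cgroup and cap it at 4 GB (cgroup v1 paths assumed)
mkdir /sys/fs/cgroup/memory/nova
echo $((4 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/nova/memory.limit_in_bytes

# Move the running nova-compute process into the group
echo $(pidof nova-compute) > /sys/fs/cgroup/memory/nova/tasks
```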
Re: [Openstack] Listing non-public images in Glance
b) If authentication is not in effect, should we change to listing everything, public and not? I can file a bug and see it implemented.

In case it's useful, I think that currently (without authentication) a command such as:

curl http://localhost:9292/v1/images?is_public=None

will list both public and private images.

-Stuart

On Thu, 19 Jan 2012, Jay Pipes wrote:

Hi Pete! Answers inline :)

On Thu, Jan 19, 2012 at 2:26 PM, Pete Zaitcev zait...@redhat.com wrote:

Hello: This clearly seems like I am missing something obvious, but is it possible to list non-public images in Glance?

No. But if you know the ID, you can issue a call to HEAD|GET /images/ID and it will show you the image information. This was done this way for legacy reasons, IIRC. Nowadays, with authentication enabled, you have much better, finer-grained, and more logical access permissions to images (see below).

It came up because I have a Glance setup without Keystone or other authentication for now, like this:

[pipeline:glance-api]
pipeline = versionnegotiation context apiv1app

Images that have X-Image-Meta-Is_public: False do not get listed with glance index. I am not saying that it is wrong per se; all the documentation implies that a GET to /v1/images only produces a listing of public images, and it looks like all functional and unit tests in ./glance/tests set the public flag as necessary.

Correct.

But I'm wondering: a) If authentication is in effect, can users list their own images?

Yes. If authentication is enabled and a user calls GET /images, they see a list of non-deleted, non-killed-status *public* images (is_public=True) AND any images where the owner_id is the user's Tenant or User ID AND any images that have manually been shared with the Tenant or User ID via the image-memberships functionality. Note that I say Tenant or User above.
There is a configuration value (owner_is_tenant, default is True) that controls whether the authentication layer considers the X-Auth-Tenant or the X-Auth-User value to be the owner...

It is easy to forget what you have. The Image Warehouse service in Aeolus permits listing images regardless, as long as the bucket is accessible.

b) If authentication is not in effect, should we change to listing everything, public and not? I can file a bug and see it implemented.

Interesting proposal, and one we debated when Kevin Mitchell originally added support for authentication (and thus image ownership). We decided to keep it the way it is because we did not want to change the existing behaviour of servers that did not have authentication enabled...

Cheers! -jay

-- Pete

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] (no subject)
Hi,

What is the content of your proxy-server.conf? I would be interested to know what your auth server is, and whether you can reproduce this problem with the tempauth auth server. This could mean a lot of different problems; make sure of the basic stuff like file permissions on the storage nodes.

Chmouel.

On Fri, Jan 20, 2012 at 2:57 AM, Khaled Ben Bahri khaled-...@hotmail.com wrote:

[quoted message trimmed; see the full message earlier in this thread]

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] (no subject)
Hi,

This is the content of the proxy-server.conf:

[DEFAULT]
cert_file = /etc/swift/cert.crt
key_file = /etc/swift/cert.key
bind_port = 8080
workers = 8
user = swift

[pipeline:main]
pipeline = healthcheck cache swift3 tempauth proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:swift3]
use = egg:swift#swift3

[filter:tempauth]
use = egg:swift#tempauth
user_system_root = testpass .admin https://x.x.x.x:8080/v1/AUTH_system

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
memcache_servers = x.x.x.x:11211

Best regards,
Khaled

From: chmo...@openstack.org
Date: Fri, 20 Jan 2012 09:14:53 -0600
Subject: Re: [Openstack] (no subject)
To: khaled-...@hotmail.com
CC: m...@not.mn; openstack@lists.launchpad.net

Hi, What is the content of your proxy-server.conf? I would be interested to know what your auth server is, and whether you can reproduce this problem with the tempauth auth server. This could mean a lot of different problems; make sure of the basic stuff like file permissions on the storage nodes. Chmouel.

On Fri, Jan 20, 2012 at 2:57 AM, Khaled Ben Bahri khaled-...@hotmail.com wrote:

[quoted message trimmed; see the full message earlier in this thread]

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] (no subject)
On Fri, Jan 20, 2012 at 9:25 AM, Khaled Ben Bahri khaled-...@hotmail.com wrote:

user_system_root = testpass .admin https://x.x.x.x:8080/v1/AUTH_system

This doesn't seem right (the https URL at the end should not be there)

Chmouel.

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] (no subject)
I followed the example at this link: http://swift.openstack.org/howto_installmultinode.html#config-proxy

Best regards,
Khaled

From: chmo...@openstack.org
Date: Fri, 20 Jan 2012 09:32:27 -0600
Subject: Re: [Openstack] (no subject)
To: khaled-...@hotmail.com
CC: openstack@lists.launchpad.net

On Fri, Jan 20, 2012 at 9:25 AM, Khaled Ben Bahri khaled-...@hotmail.com wrote:

user_system_root = testpass .admin https://x.x.x.x:8080/v1/AUTH_system

This doesn't seem right (the https URL at the end should not be there)

Chmouel.

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
Re: [Openstack] (no subject)
One can always learn; I had never used that option (but I don't use much of the tempauth server). You probably want to look over the logs of the storage nodes to see if there are any ERRORs there.

Chmouel.

On Fri, Jan 20, 2012 at 9:36 AM, Khaled Ben Bahri khaled-...@hotmail.com wrote:

I followed the example at this link: http://swift.openstack.org/howto_installmultinode.html#config-proxy

Best regards, Khaled

[quoted text trimmed]

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
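[Editor's note: the trailing URL on a tempauth user line is a valid optional field, which is why the multinode howto includes it. A sketch of the line format, based on the config quoted in this thread:]

```ini
[filter:tempauth]
use = egg:swift#tempauth
# Format: user_<account>_<user> = <key> [.admin|.reseller_admin] [storage_url]
# The trailing URL is optional: it overrides the storage URL returned to the
# client with the auth token. If present, it must match how clients actually
# reach the proxy.
user_system_root = testpass .admin https://x.x.x.x:8080/v1/AUTH_system
```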
[Openstack] Fwd: Associating VM to a quantum network
I have os-create-server, but the nova client does not accept the --nic option, as you said, Dan. When I instantiate a VM, it is created but no IP is assigned to it; thus, it does not change status to ACTIVE. Though I can't access the VM, I can attach the VM's vif to a network port (using bin/cli plug_iface). Should Quantum give an IP to the VM while it is being created, or must the IP be assigned to the vif only afterward? If afterward, how do I do this?

Best regards,

2012/1/17 Dan Wendlandt d...@nicira.com

On Tue, Jan 17, 2012 at 3:03 AM, Alisson Soares Limeira Pontes apon...@cpqd.com.br wrote:

Yes Dan, you are right. I am using the StackOps distro, and network 1 was created before the Quantum installation; I don't know how or why. Now I have these two networks:

root@nova-controller:~/quantum-2011.3# nova-manage network list
id   IPv4               IPv6   start address   DNS1      DNS2   VlanID   project   uuid
3    192.168.1.160/27   None   192.168.1.162   8.8.4.4   None   None     None      85b629fc-7fec-4fdd-b842-76a3711e83d9
4    192.168.1.128/27   None   192.168.1.130   8.8.4.4   None   None     None      a52b0d7b-791a-4e81-8772-8df04b9ccd70

Does anyone know how I can instantiate a VM and associate it with one of these networks?

You can use the os-create-server extension to do this. We recently added support for this extension to the nova client utility using the --nic option, though I believe it only went in during Essex-2, so I'm not sure if it would be in a distro like StackOps yet. The following command would create a VM connected to quantum network 0c02f3d3-204a-4e37-b820-5d15e6d74a9f:

nova boot --flavor 1 --image 07f3c46c-5062-4837-b43d-ec1a93b894dc --nic net-id=0c02f3d3-204a-4e37-b820-5d15e6d74a9f test1

Dan

2012/1/12 Dan Wendlandt d...@nicira.com

Hi Alisson, I assume you are following the directions as described at: http://docs.openstack.org/incubation/openstack-network/admin/content/index.html?
If so, then you have QuantumManager enabled in Nova, and when you ran nova-manage network create, this should have reached out to Quantum to create a network, then stashed that Quantum network UUID in the Nova DB for future use. Can you try accessing Quantum directly to see if those networks exist? Since you did not specify a --project when creating the network with nova-manage, QuantumManager will create the network with a Quantum tenant-id set to the --quantum_default_tenant_id flag (defaults to "default"). So, assuming the default, try running:

bin/cli list_nets default

What networks does this show? My best guess is that it will show only a single network, cbbbf92d-26d3-4a8d-8394-bb173fc35cbb, meaning one of your two Nova networks shown above was not created in Quantum. Is it possible that you created that network before enabling QuantumManager via the --network_manager flag? If so, you would need to delete that old network and recreate it while Nova is using QuantumManager. If both were created with QuantumManager enabled, then it is possible one of them failed. Can you find the network manager logs from the period when you ran the 'nova-manage network create' commands?

Thanks, Dan

On Thu, Jan 12, 2012 at 7:18 AM, Alisson Soares Limeira Pontes apon...@cpqd.com.br wrote:

Hello everyone, I need some help to instantiate an image and associate it with a Quantum network. I installed a dual-node OpenStack Diablo setup (controller and compute), which worked fine for instantiating a VM. Then I installed OVS and Quantum.
It seems that Quantum is working, because I can create a network and attach an interface to it using quantum/bin/cli:

$ python bin/cli plug_iface $TENANT $NETWORK $PORT $VIF_UUID
Plugged interface foo to port:5a1e121b-ccc8-471d-9445-24f15f9f854c on network:e754e7c0-a8eb-40e5-861a-b182d30c3441

I can also create a network using nova-manage:

root@nova-controller:~/quantum-2011.3# /var/lib/nova/bin/nova-manage network create --label=public --fixed_range_v4=192.168.1.144/28
root@nova-controller:~/quantum-2011.3# /var/lib/nova/bin/nova-manage network list
id   IPv4               IPv6   start address   DNS1      DNS2      VlanID   project   uuid
1    192.168.1.128/28   None   192.168.1.130   8.8.8.8   8.8.4.4   None     None      None
3    192.168.1.144/28   None   192.168.1.146   8.8.4.4   None      None     None      cbbbf92d-26d3-4a8d-8394-bb173fc35cbb

But when I try to boot a VM, it remains in "build" status forever. The nova-network.log and nova-compute.log are below. It seems that the image cannot run because I did not assign a network to it, doesn't it? How can I do this? I looked at the Quantum API (PUT, GET, POST...)
Re: [Openstack] (no subject)
When I checked the log files on the storage nodes, I found that data4 is not mounted; data4 is a shared folder that I mounted on the storage nodes to have more storage space.

Do storage devices have to be mounted under /srv/node, or can I mount them in another directory? Can I use a list of shared folders mounted on the nodes as storage devices? I mounted a shared folder in order to have more space for storage, because the node's hard disk is not so big. I will restart the configuration from the beginning and tell you the result.

Khaled

From: chmo...@openstack.org
Date: Fri, 20 Jan 2012 09:42:24 -0600
Subject: Re: [Openstack] (no subject)
To: khaled-...@hotmail.com
CC: openstack@lists.launchpad.net

One can always learn; I had never used that option (but I don't use much of the tempauth server). You probably want to look over the logs of the storage nodes to see if there are any ERRORs there. Chmouel.

[quoted text trimmed]

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
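[Editor's note: the mount location is configuration, not hard-coded. The Swift storage servers resolve each ring device name under the directory named by the `devices` option, and with `mount_check` enabled they return 507 for a device that is not a mount point, which the proxy then surfaces as a 503. A sketch of the relevant options (the values shown are the documented defaults):]

```ini
# /etc/swift/object-server.conf -- the same two options exist in the
# account-server and container-server configs as well.
[DEFAULT]
devices = /srv/node    # ring device "data4" is looked up at /srv/node/data4
mount_check = true     # if data4 is not a mounted filesystem, requests get 507
```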
Re: [Openstack] Memory quota in nova-compute nodes.
On Thursday, January 19, 2012 at 12:52 PM, Jorge Luiz Correa wrote:

I would like to know if it's possible to configure a quota on each nova-compute node. For example, I set up new hardware with 8 GB of memory and install nova-compute, but I want only 4 GB of memory to be used (dedicated to nova-compute). Is it possible? If so, how can I configure that? I've seen quotas for projects, configured using the nova-manage command-line tool, but that isn't what I'm looking for.

In Essex, you can use 'reserved_host_memory_mb' with the ZoneManager to reserve a certain amount of memory per host. If you're on Diablo, Joe Gordon made a pluggable scheduler based on the SimpleScheduler to do the same: https://github.com/cloudscaling/cs-nova-simplescheduler The relevant key here would be 'cs_host_reserved_memory_mb'.

Note that both of these define how much memory goes to your OS and applications, rather than how much memory is set aside for Nova / VMs. If you had 8 GB and wanted to give Nova 6 GB, you would reserve 2 GB for your host OS. This is a soft limit; your OS will happily take more memory absent cgroup support, as mentioned earlier.

-- Eric Windisch

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
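[Editor's note: to make the arithmetic concrete, here is a small hypothetical sketch (not Nova's actual scheduler code) of how a reserved-host-memory check plays out when placing instances:]

```python
def usable_memory_mb(total_mb, reserved_host_memory_mb, used_by_vms_mb):
    """Memory the scheduler may still hand out on a host.

    reserved_host_memory_mb is set aside for the host OS and services
    and is never offered to instances.
    """
    return max(total_mb - reserved_host_memory_mb - used_by_vms_mb, 0)


# A host with 8192 MB, reserving 2048 MB for the OS, already running a
# 4096 MB instance: a 2048 MB instance still fits, a 4096 MB one does not.
remaining = usable_memory_mb(8192, 2048, 4096)
print(remaining)  # 2048
```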
Re: [Openstack] Swift Consistency Guarantees?
Hi,

Is no one able to clarify this further? Does Swift offer the read-after-create consistency of non-US-Standard S3? And what are the precise syntax and semantics of the X-Newest header?

Best, Nikolaus

On 01/18/2012 10:15 AM, Nikolaus Rath wrote:

Michael Barton mike-launch...@weirdlooking.com writes:

On Tue, Jan 17, 2012 at 4:55 PM, Nikolaus Rath nikol...@rath.org wrote:

Amazon S3 and Google Storage make very explicit (non-)consistency guarantees for stored objects. I'm looking for similar documentation about OpenStack's Swift, but haven't had much success.

I don't think there's any documentation on this, but it would probably be good to write up. Consistency in Swift is very similar to S3. That is, there aren't many non-eventual consistency guarantees. Listing updates can happen asynchronously (especially under load), and older versions of files can show up in requests (deletes are just a new, deleted version of the file).

Ah, ok. Thanks a lot for stating this so explicitly. There seems to be a lot of confusion about this; now I can at least point people to something.

Swift can generally be relied on for read-after-write consistency, like S3's regions other than the US Standard region. The reason S3 in US Standard doesn't have this guarantee is that it's more geographically widespread - something Swift isn't good at yet. I can imagine we'll have the same limitation when we get there.

Do you mean read-after-create consistency? Because below you say about read-after-write:

- If I receive a (non-error) response to a PUT request, am I guaranteed that the object will be immediately included in all object listings in every possible situation?

Nope.

...so is there such a guarantee for PUTs of *new* objects (like S3 outside US Classic), or does "can generally be relied on" just mean that the chances for new PUTs are better?

Also like S3, Swift can't make any strong guarantees about read-after-update or read-after-delete consistency.
We do have an X-Newest header that can be added to GETs and HEADs to make the proxy do a quorum of backend servers and return the newest available version, which greatly improves these, at the cost of latency. That sounds very interesting. Could you give some more details on what exactly is guaranteed when using this header? What happens if the server having the newest copy is down? - If the swift server looses an object, will the object name still be returned in object listings? Will attempts to retrieve it result in 404 errors (as if it never existed) or a different error? It will show up in listings, but give a 404 when you attempt to retrieve it. I'm not sure how we can improve that with Swift's general model, but feel free to make suggestions. From an application programmers point of view, it would be very helpful if lost objects could be distinguished from non-existing object by a different HTTP error. Trying to access a non-existing object may indicate a bug in the application, so it would be nice to know when it happens. Also, it would be very helpful if there was a way to list all lost objects without having to issue HEAD requests for every stored object. Could this information be added to the XML and JSON output of container listings? Then an application would have the chance to periodically check for lost data, rather than having to handle all lost objects at the instant they're required. I am working on a swift backend for S3QL (http://code.google.com/p/s3ql/), a program that exposes online cloud storage as a local UNIX file system. To prevent data corruption, there are two requirements that I'm currently struggling to provide with the swift backend: - There needs to be a way to reliably check if one object (holding the file system metadata) is the newest version. 
The S3 backend does this by requiring storage in the non-us-classic regions and using list-after-create consistency with a marker object that has the generation number of the metadata embedded in its name. I'm not yet sure if this would work with Swift as well (the Google Storage backend just relies on the strong read-after-write consistency).

- The file system checker needs a way to identify lost objects. Here the S3 backend just relies on the durability guarantee that effectively no object will ever be lost. Again, I'm not sure how to implement this for Swift. Any suggestions?

Best, -Nikolaus

-- »Time flies like an arrow, fruit flies like a Banana.« PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C

___ Mailing list: https://launchpad.net/~openstack Post to : openstack@lists.launchpad.net Unsubscribe : https://launchpad.net/~openstack More help : https://help.launchpad.net/ListHelp
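[Editor's sketch] The marker-object scheme described in the message above can be illustrated as follows. This is a hypothetical sketch against an in-memory stand-in for the object store (no real Swift or S3 client is used); the object names and the `generation` parameter are invented for the example, and it assumes only that listings with a name prefix eventually reflect created objects:

```python
# Sketch of the marker-object scheme: each time the metadata object is
# rewritten, a zero-byte marker whose name embeds the generation number
# is also created. A reader can then check whether the metadata it
# fetched is current by looking for a newer marker in a listing.

class FakeStore:
    """In-memory stand-in for an object store (illustration only)."""
    def __init__(self):
        self.objects = {}

    def put(self, name, data=b""):
        self.objects[name] = data

    def list(self, prefix):
        return sorted(n for n in self.objects if n.startswith(prefix))

def write_metadata(store, generation, data):
    store.put("metadata", data)
    # The marker name embeds the generation, zero-padded so names sort
    # in generation order.
    store.put("metadata-marker-%010d" % generation)

def newest_generation(store):
    markers = store.list("metadata-marker-")
    return int(markers[-1].rsplit("-", 1)[1]) if markers else None

store = FakeStore()
write_metadata(store, 1, b"v1")
write_metadata(store, 2, b"v2")
print(newest_generation(store))  # -> 2
```

Whether this is safe on Swift depends on exactly the listing-consistency question raised in the thread: the scheme only works if a created marker is guaranteed to appear in prefix listings.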
Re: [Openstack] Swift Consistency Guarantees?
Hi, What happens if one of the nodes is down? Especially if that node holds the newest copy? Thanks, Nikolaus

On 01/20/2012 12:33 PM, Stephen Broeker wrote: The X-Newest header can be used by a GET operation to ensure that all of the storage nodes (3 by default) are queried for the latest copy of the object. The COPY object operation already has this functionality.
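[Editor's sketch] Using X-Newest is just a matter of adding the header to a GET or HEAD against the storage URL. A minimal sketch of building such a request follows; the storage URL, container, object name, and token are placeholders, and the request is only constructed here, not sent:

```python
# Build an object GET request carrying X-Newest, so the proxy consults
# a quorum of replicas and returns the newest available copy.
from urllib.parse import urlsplit

def newest_get_request(storage_url, container, obj, token):
    """Return (host, path, headers) for an X-Newest GET (not sent here)."""
    parts = urlsplit(storage_url)
    path = "%s/%s/%s" % (parts.path, container, obj)
    headers = {
        "X-Auth-Token": token,      # token from /auth/v1.0
        "X-Newest": "true",         # ask the proxy for the newest replica
    }
    return parts.netloc, path, headers

host, path, headers = newest_get_request(
    "https://proxy.example.com:8080/v1/AUTH_system", "cont", "obj", "tok")
print(path)  # -> /v1/AUTH_system/cont/obj
# To actually send it, pass host/path/headers to e.g.
# http.client.HTTPSConnection(host).request("GET", path, headers=headers)
```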
[Openstack] Swift supported file system
Hi, Can anyone please tell me whether a mounted shared folder can be used as a storage device for OpenStack Swift? Is it necessary that storage devices be mounted on /srv/node? Thanks in advance. Best regards, Khaled
Re: [Openstack] Swift Consistency Guarantees?
If a node is down, then it is ignored. That is the whole point of having 3 replicas.

On Fri, Jan 20, 2012 at 10:43 AM, Nikolaus Rath nikol...@rath.org wrote: Hi, What happens if one of the nodes is down? Especially if that node holds the newest copy? Thanks, Nikolaus
Re: [Openstack] Swift Consistency Guarantees?
Hi, So if an object update has not yet been replicated to all nodes, and all nodes that have been updated are offline, what will happen? Will Swift recognize this and give me an error, or will it silently return the older version? Thanks, Nikolaus

On 01/20/2012 02:14 PM, Stephen Broeker wrote: If a node is down, then it is ignored. That is the whole point about 3 replicas.
Re: [Openstack] Swift Consistency Guarantees?
By default there are 3 replicas. A PUT object will return after 2 replicas are done, so if all nodes are up then there are at least 2 replicas. If all replica nodes are down, then the GET object will fail.

On Fri, Jan 20, 2012 at 11:21 AM, Nikolaus Rath nikol...@rath.org wrote: Hi, So if an object update has not yet been replicated to all nodes, and all nodes that have been updated are offline, what will happen? Will Swift recognize this and give me an error, or will it silently return the older version? Thanks, Nikolaus
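[Editor's sketch] The write behavior Stephen describes, a PUT succeeding once a majority of the replicas are written, is a standard majority quorum. A generic sketch of the arithmetic (not code from Swift itself):

```python
# Majority write quorum: a PUT is acknowledged once more than half of
# the replicas have been written. With the default of 3 replicas, a PUT
# returns after 2 copies are on disk.
def quorum(replica_count):
    return replica_count // 2 + 1

print(quorum(3))  # -> 2
print(quorum(4))  # -> 3
```

This is why, with 3 replicas and all nodes up, at least 2 copies exist by the time the client sees a success, but a third copy may still be written asynchronously.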
Re: [Openstack] Swift Consistency Guarantees?
Hi, Sorry for being so persistent, but I'm still not sure what happens if the 2 servers that carry the new replica are down, but the 1 server that has the old replica is up. Will GET fail, or return the old replica? Best, Niko

On 01/20/2012 02:52 PM, Stephen Broeker wrote: By default there are 3 replicas. A PUT object will return after 2 replicas are done. So if all nodes are up then there are at least 2 replicas. If all replica nodes are down, then the GET object will fail.
Re: [Openstack] Swift Consistency Guarantees?
In this case, I believe that the GET will succeed.

On Fri, Jan 20, 2012 at 11:58 AM, Nikolaus Rath nikol...@rath.org wrote: Hi, Sorry for being so persistent, but I'm still not sure what happens if the 2 servers that carry the new replica are down, but the 1 server that has the old replica is up. Will GET fail or return the old replica? Best, Niko
Re: [Openstack] Swift Consistency Guarantees?
Hmm, but if there are e.g. 4 replicas, two of which are up-to-date but offline, and two stale but online, Swift would serve the old version? -Niko

On 01/20/2012 03:06 PM, Chmouel Boudjnah wrote: As Stephen mentioned, if there is only one replica left Swift would not serve it. Chmouel.

On Fri, Jan 20, 2012 at 1:58 PM, Nikolaus Rath nikol...@rath.org wrote: Hi, Sorry for being so persistent, but I'm still not sure what happens if the 2 servers that carry the new replica are down, but the 1 server that has the old replica is up. Will GET fail or return the old replica? Best, Niko
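[Editor's sketch] The failure scenario being debated can be modeled simply: a proxy (with or without X-Newest) can only choose among the replicas it can actually reach, so if every node holding the newer copy is down, the surviving stale copy is the best any read can return. This is an illustrative model of that reasoning, not Swift's actual code:

```python
# Illustrative model: each replica is (timestamp, data, online). A read
# with X-Newest semantics returns the newest copy among the replicas
# that are reachable; offline replicas simply cannot be consulted.
def get_newest(replicas):
    online = [(ts, data) for ts, data, up in replicas if up]
    if not online:
        return None          # no replica reachable: the GET fails
    return max(online)[1]    # newest *reachable* copy, even if stale

replicas = [
    (2, b"new", False),  # up-to-date but offline
    (2, b"new", False),  # up-to-date but offline
    (1, b"old", True),   # stale but reachable
    (1, b"old", True),   # stale but reachable
]
print(get_newest(replicas))  # -> b'old'
```

Under this model, Nikolaus's 4-replica scenario returns the old version silently; the reader has no way to know a newer, unreachable copy exists.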
Re: [Openstack] Swift supported file system
On Fri, Jan 20, 2012 at 1:59 PM, Florian Hines florian.hi...@gmail.com wrote: Is it necessary that storage devices have to be mounted on /srv/node?? You can change where you mount devices with the "devices" config option in the default section of your config.

Note that there are a few places in the code that reference /srv/node directly, as reported in bug #885006, but that should not affect the operation of the cluster (they are only part of prints). Chmouel.
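[Editor's sketch] For illustration, the relevant fragment of a storage server config might look like this (the path is an example; `devices` defaults to /srv/node):

```
# /etc/swift/object-server.conf (fragment, illustrative)
[DEFAULT]
# Parent directory under which each storage device is mounted.
# Defaults to /srv/node; point it elsewhere if your devices live
# somewhere else.
devices = /srv/node
# Refuse to use a device directory that is not an actual mount point,
# so a failed mount doesn't silently fill the root filesystem.
mount_check = true
```

Note that if you use a shared folder rather than a dedicated mounted device, `mount_check = true` would reject it unless the directory is itself a mount point.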
Re: [Openstack] Swift Consistency Guarantees?
I'm finding this thread a bit confusing. You're comparing offered SERVICES to software. While some of the details of the software will dictate what's possible, some are heavily dependent on how you deploy the Swift software and what kind of deployment decisions you (or your service provider) make. As an extreme example: if you deploy 1 container server in a highly available fashion (hardware style), then you probably could get consistent container listings in the various update-followed-by-read scenarios. Hosting huge Swift installations with such a setup is not realistic, but that doesn't mean you can't do it. Similarly, Swift offers quite a lot of flexibility in setting the eventual-consistency window sizes (replication frequency, rates and such). So, while there are theoretical answers to missing replicas, the likelihood of those occurring depends on your deployment and operational practices (e.g. how many replicas are made, how quickly failed nodes/drives are fixed and their content replicated to replacements, etc.). In the Amazon case, much of this is captured in the 17 9's, or the 3 9's guarantee for the reduced redundancy class. If your approach is from an API perspective, then issues around the number of replicas (which is a deployment parameter) are probably not relevant, if you trust your provider. If your approach is from a Swift developer/deployer perspective, then never mind: keep asking, because it's much easier to read email than Python ;)

On Fri, Jan 20, 2012 at 3:06 PM, Chmouel Boudjnah chmo...@openstack.org wrote: As Stephen mentioned, if there is only one replica left Swift would not serve it. Chmouel.

On Fri, Jan 20, 2012 at 1:58 PM, Nikolaus Rath nikol...@rath.org wrote: Hi, Sorry for being so persistent, but I'm still not sure what happens if the 2 servers that carry the new replica are down, but the 1 server that has the old replica is up. Will GET fail or return the old replica?
Best, Niko On 01/20/2012 02:52 PM, Stephen Broeker wrote: By default there are 3 replicas. A PUT Object will return after 2 replicas are done. So if all nodes are up then there are at least 2 replicas. If all replica nodes are down, then the GET Object will fail. On Fri, Jan 20, 2012 at 11:21 AM, Nikolaus Rath nikol...@rath.org wrote: Hi, So if an object update has not yet been replicated on all nodes, and all nodes that have been updated are offline, what will happen? Will swift recognize this and give me an error, or will it silently return the older version? Thanks, Nikolaus On 01/20/2012 02:14 PM, Stephen Broeker wrote: If a node is down, then it is ignored. That is the whole point of having 3 replicas. On Fri, Jan 20, 2012 at 10:43 AM, Nikolaus Rath nikol...@rath.org wrote: Hi, What happens if one of the nodes is down? Especially if that node holds the newest copy? Thanks, Nikolaus On 01/20/2012 12:33 PM, Stephen Broeker wrote: The X-Newest header can be used by a GET operation to ensure that all of the storage nodes (3 by default) are queried for the latest copy of the object. The COPY Object operation already has this functionality. On Fri, Jan 20, 2012 at 9:12 AM, Nikolaus Rath nikol...@rath.org wrote: Hi, No one able to further clarify this? Does swift offer read-after-create consistency like non-US-standard S3? What are the precise syntax and semantics of the X-Newest header? 
Best, Nikolaus On 01/18/2012 10:15 AM, Nikolaus Rath wrote: Michael Barton mike-launch...@weirdlooking.com writes: On Tue, Jan 17, 2012 at 4:55 PM, Nikolaus Rath nikol...@rath.org wrote: Amazon
Re: [Openstack] Swift Consistency Guarantees?
Some general notes for consistency and swift (all of the below assumes 3 replicas): Objects: When swift PUTs an object, it attempts to write to all 3 replicas and only returns success if 2 or more replicas were written successfully. When a new object is created, it has fairly strong consistency for read after create. The only case where this would not be true is if all of the devices that hold the object are not available. When an object is PUT on top of another object, then there is more eventual consistency that can come into play in failure scenarios. This is very similar to S3's consistency model. It is also important to note that in the case of failure, when a device is not available for a new replica to be written to, it will attempt to write the replica to a handoff node. When swift GETs an object, by default it will return the first object it finds from any available replicas. Using the X-Newest header will require swift to compare the timestamps and only serve the replica that has the most recent timestamp. If only one replica is available, with an older version of the object, it will be returned, but in practice this would be quite an edge case. Container Listings: When an object is PUT into swift, each object server that a replica is written to is also assigned one of the container servers to update. On the object server, after the replica is successfully written, an attempt will be made to update the listing on its assigned container server. If that update fails, it is queued locally (which is called an async pending), to be updated out of band by another process. The container updater process continually looks for these async pendings, attempts to make the update, and removes it from the queue when successful. There are many reasons that a container update can fail (failed device, timeout, heavily used container, etc.). Thus container listings are eventually consistent in all cases (which is also very similar to S3). 
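As a rough illustration of the quorum and X-Newest behavior described above, here is a toy model in plain Python. All names and structures here are invented for illustration; this is not Swift's actual code.

```python
# Toy model of the replica behavior described in the thread: a PUT succeeds
# only if a majority of replicas were written, and a GET with X-Newest picks
# the replica with the highest timestamp among the nodes that are up.

REPLICA_COUNT = 3
QUORUM = REPLICA_COUNT // 2 + 1  # 2 out of 3

def put_object(nodes, data, timestamp):
    """Write to every reachable node; succeed only if a quorum accepted it."""
    written = 0
    for node in nodes:
        if node.get("up"):
            node["data"], node["ts"] = data, timestamp
            written += 1
    return written >= QUORUM

def get_object(nodes, x_newest=False):
    """Read from available replicas; X-Newest compares timestamps."""
    available = [n for n in nodes if n.get("up") and "ts" in n]
    if not available:
        return None  # all replicas unreachable -> GET fails
    if x_newest:
        return max(available, key=lambda n: n["ts"])["data"]
    return available[0]["data"]  # default: first replica that answers

nodes = [{"up": True}, {"up": True}, {"up": True}]
assert put_object(nodes, "v1", 1)        # 3/3 written
nodes[2]["up"] = False
assert put_object(nodes, "v2", 2)        # 2/3 written -> still succeeds
# The edge case from the thread: both updated nodes go down, the stale one
# comes back up. Even X-Newest can only compare the replicas it can reach.
nodes[0]["up"] = nodes[1]["up"] = False
nodes[2]["up"] = True
print(get_object(nodes, x_newest=True))  # serves the old "v1"
```

This reproduces the scenario Nikolaus keeps asking about: once the only reachable replica is the stale one, X-Newest has nothing newer to compare against.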
Consistency Window: For objects, the biggest factor that determines the consistency window is object replication time. In general this is pretty quick for even large clusters, and we are always working on making this better. If you want to limit consistency windows for objects, then you want to make sure you isolate the chances of failure as much as possible. By setting up your zones to be as isolated as possible (separate power, network, physical locality, etc.) you minimize the chance that there will be a consistency window. For containers, the biggest factor that determines the consistency window is disk IO for the sqlite databases. In recent testing, basic SATA hardware can handle somewhere in the range of 100 PUTs per second (for smaller containers) to around 10 PUTs per second for very large containers (millions of objects) before async pendings start stacking up and you begin to see consistency issues. With better hardware (for example RAID 10 of SSD drives), it is easy to get 400-500 PUTs per second with containers that have a billion objects in them. It is also a good idea to run your container/account servers on separate hardware from the object servers. After that, the same things for object servers also apply to the container servers. All that said, please don't just take my word for it - test it for yourself :) -- Chuck On Fri, Jan 20, 2012 at 2:18 PM, Nikolaus Rath nikol...@rath.org wrote: Hmm, but if there are e.g. 4 replicas, two of which are up-to-date but offline, and two out-of-date but online, swift would serve the old version? -Niko On 01/20/2012 03:06 PM, Chmouel Boudjnah wrote: As Stephen mentioned, if there is only one replica left Swift would not serve it. Chmouel. On Fri, Jan 20, 2012 at 1:58 PM, Nikolaus Rath nikol...@rath.org wrote: Hi, Sorry for being so persistent, but I'm still not sure what happens if the 2 servers that carry the new replica are down, but the 1 server that has the old replica is up. 
Will GET fail or return the old replica? Best, Niko On 01/20/2012 02:52 PM, Stephen Broeker wrote: [snip]
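The "async pending" mechanism Chuck describes can be sketched roughly like this. All names are invented for illustration; this is a toy model, not Swift's container updater.

```python
# Toy sketch of async pendings: an object server tries to update its
# assigned container listing after a write, queues the update locally on
# failure, and a background updater process retries the queue later.
import collections

pending = collections.deque()  # local "async pending" queue

def update_container(listing, name, container_up):
    """Try the container update; queue it as an async pending on failure."""
    if container_up:
        listing.add(name)
        return True
    pending.append(name)
    return False

def run_container_updater(listing, container_up):
    """Background pass: retry queued updates, dropping ones that succeed."""
    for _ in range(len(pending)):
        name = pending.popleft()
        if container_up:
            listing.add(name)
        else:
            pending.append(name)  # still failing: keep it queued

listing = set()
update_container(listing, "obj1", container_up=True)   # listed immediately
update_container(listing, "obj2", container_up=False)  # queued instead
# The listing is eventually consistent: obj2 appears once the updater
# gets through to the container server.
run_container_updater(listing, container_up=True)
print(sorted(listing))  # ['obj1', 'obj2']
```

The point of the sketch is that the object write itself succeeded either way; only the listing lags, which is why container listings are eventually consistent in all cases.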
Re: [Openstack] Swift Consistency Guarantees?
A new PUT can only succeed if it has successfully updated a full majority of the replicas (typically 2 out of 3). Therefore two different updates cannot concurrently succeed; one of them has to know that it is the later transaction. If you aren't forcing a GET to reference all servers, using the option Stephen mentioned, then you MAY get an old version before the replication process is complete. That is what is meant by eventual consistency. For most users, it is not worth slowing down a GET for the slight risk of not fetching *the* latest update, but each user can decide that for themselves. Requiring *full* consistency, where the next GET is guaranteed to return the most recent PUT, would result in far longer worst-case transaction times. Something like a switch reset would delay a transaction rather than queuing up a synchronization of the 3rd server after it reconnects. If you're trying to build a distributed database, a Cloud Storage API might not be the best solution for you. But most applications will deal with a slight amount of uncertainty very well. Your application had to work even if you fetched an object a millisecond *before* someone else updated it, right? How important can it be that you not get the old version a millisecond *after* it was updated?
Re: [Openstack] Swift Consistency Guarantees?
Hi Andi, My perspective is that I'm working on an application that should work with arbitrary service providers using swift. Therefore, I'm interested in the minimal set of guarantees that I can always rely on, no matter how the service provider has configured his particular swift instance. Best, Nikolaus On 01/20/2012 04:10 PM, andi abes wrote: [snip] 
-Nikolaus -- »Time flies like an arrow, fruit flies like a Banana.« PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C
Re: [Openstack] Swift Consistency Guarantees?
On Fri, 20 Jan 2012 15:17:32 -0500 Nikolaus Rath nikol...@rath.org wrote: Thanks! So there is no way to reliably get the most-recent version of an object under all conditions. If you bend the conditions hard enough to hit the CAP theorem, you do. -- Pete
Re: [Openstack] Swift Consistency Guarantees?
On 01/20/2012 06:35 PM, Pete Zaitcev wrote: On Fri, 20 Jan 2012 15:17:32 -0500 Nikolaus Rath nikol...@rath.org wrote: Thanks! So there is no way to reliably get the most-recent version of an object under all conditions. If you bend the conditions hard enough to hit the CAP theorem, you do. From what I have heard so far, it seems to be sufficient if all servers holding the newest replica are down for me to get old data. I don't think that this condition is already hitting the CAP theorem, or is it? Best, -Nikolaus -- »Time flies like an arrow, fruit flies like a Banana.« PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C
Re: [Openstack] Swift Consistency Guarantees?
nice reply ;) On Fri, Jan 20, 2012 at 6:35 PM, Pete Zaitcev zait...@redhat.com wrote: On Fri, 20 Jan 2012 15:17:32 -0500 Nikolaus Rath nikol...@rath.org wrote: Thanks! So there is no way to reliably get the most-recent version of an object under all conditions. If you bend the conditions hard enough to hit the CAP theorem, you do. -- Pete
Re: [Openstack] Swift Consistency Guarantees?
You would need to have the following occur to make your scenario plausible: * you write the object, which places it on a majority of the replica nodes (i.e. 2 out of 3) * replication is slowly churning away, but doesn't quite catch up * both the nodes that have the updated data fail simultaneously, before replication catches up the remaining node. Swift chooses the A and P from CAP. If the swift proxy were to wait till all replicas got updated before it returned a reply, it would be choosing the C but probably dropping the P and maybe the A (depending on how it handled a failure). So yes... you are hitting CAP on the head... On Fri, Jan 20, 2012 at 6:56 PM, Nikolaus Rath nikol...@rath.org wrote: [snip]
Re: [Openstack] Swift Consistency Guarantees?
On 01/20/2012 05:08 PM, Caitlin Bestler wrote: If you aren't forcing a GET to reference all servers, using the option Stephen mentioned, then you MAY get an old version before the replication process is complete. That is what is meant by "eventual consistency". Well, but apparently this may also happen *with* the X-Newest option. If you're trying to do a distributed database a Cloud Storage API might not be the best solution for you. But most applications will deal with a slight amount of uncertainty very well. Your application had to work even if you fetched an object a millisecond *before* someone else updated it, right? How important can it be that you not get the old version a millisecond *after* it was updated? No, a millisecond delay would not be a problem. But since I don't know what sort of swift setup my application will have to deal with, I'd rather assume only what's truly 100% certain. Best, -Nikolaus -- »Time flies like an arrow, fruit flies like a Banana.« PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C
Re: [Openstack] Swift Consistency Guarantees?
Hi Chuck, Thanks for the detailed explanation! That pretty much answers all of my questions. I think this can (and should) be placed as-is somewhere in the Swift documentation and/or the wiki. Best, Nikolaus On 01/20/2012 04:58 PM, Chuck Thier wrote: [snip] -Nikolaus -- »Time flies like an arrow, fruit flies like a Banana.« PGP fingerprint: 5B93 61F8 4EA2 E279 ABF6 02CF A9AD B7F8 AE4E 425C
Re: [Openstack] Proposal for new devstack (v2?)
For those that want to try it, glance should be working! Since glance has dependencies on keystone and the database, both of these will be installed and started automatically (uninstall and starting and such should work!) How to accomplish this can be seen at the github page readme. https://github.com/yahoo/Openstack-Devstack2#readme I will write up some documents on the github wiki sometime soon with more details! Nova and the rest should be coming along soon. Hopefully this will make everyone's lives easier in the end :-) Please try it out, feedback welcome :-) -Josh On 1/18/12 10:17 PM, Gary Kotton ga...@radware.com wrote: Brilliant! From: openstack-bounces+garyk=radware@lists.launchpad.net [mailto:openstack-bounces+garyk=radware@lists.launchpad.net] On Behalf Of Joshua Harlow Sent: Wednesday, January 18, 2012 9:21 PM To: Mark McLoughlin Cc: Andy Smith; openstack Subject: Re: [Openstack] Proposal for new devstack (v2?) Sweet, we are working on getting functionality for rhel and ubuntu up and going, and then hopefully some docs (and code comments) can be added in so other people can know exactly what is going on (without the typical "go read the code" response). But the idea is the following: Have a set of json files (+ I added the ability to have simple comments) that specify the needed dependencies + versions (+ other metadata) for each distribution. https://github.com/yahoo/Openstack-Devstack2/blob/master/conf/pkgs/general.json Have those different sections be handled by a class specific to a distribution (or possibly shared, ie fedora and rhel). https://github.com/yahoo/Openstack-Devstack2/tree/master/devstack/packaging (WIP as we work with the rhel people to get the dependencies fleshed out) Similar with pip installs (if any): https://github.com/yahoo/Openstack-Devstack2/tree/master/conf/pips Then this information can be updated as needed for each release of openstack (with exact dependencies - a win for everyone!) 
so that this whole pkg process becomes better for everyone. Of course we are also allowing other ways of running things besides screen (I like just having it in the background via a fork with output going to files...) That's what's going on so far :-) Thx, -Josh On 1/18/12 3:45 AM, Mark McLoughlin mar...@redhat.com wrote: On Tue, 2012-01-17 at 11:20 -0800, Joshua Harlow wrote: My goals were/are/(may continue to be, haha) the following: ... 3. Have the ability to have pkg/pip installation (and definition separate from the main code, already starting to be done), in more than 1 distro. * This allows others to easily know what versions of packages work for a given openstack release for more than one distro (yes that's right, more than ubuntu) Serious kudos to you guys on this part. IMHO, having a devstack that supports multiple distros is a massive win for OpenStack generally. Hopefully we can dig in and help with Fedora support soonish. Cheers, Mark.
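Josh's "json files + simple comments" idea can be sketched with a few lines of Python. The file layout, key names, and comment syntax below are invented for illustration; they are not copied from Openstack-Devstack2.

```python
# Hedged sketch of parsing a dependency file in a "JSON with simple
# comments" style: strip full-line comments, then parse ordinary JSON.
import json

def load_commented_json(text):
    """Drop lines whose first non-blank character is '#', then json.loads."""
    lines = [ln for ln in text.splitlines()
             if not ln.lstrip().startswith("#")]
    return json.loads("\n".join(lines))

sample = """
# packages needed by glance (hypothetical example)
{
    "ubuntu": {"python-eventlet": {"version": "0.9*"}},
    "rhel":   {"python-eventlet": {"version": "0.9*"}}
}
"""
deps = load_commented_json(sample)
print(sorted(deps))  # ['rhel', 'ubuntu']
```

Keeping one such file per distro section means the exact package versions for a given openstack release are recorded as data rather than buried in shell scripts, which is the win Josh describes.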
Re: [Openstack] Proposal for new devstack (v2?)
Sweet. Will do next week. ~sean On Jan 20, 2012, at 7:25 PM, Joshua Harlow harlo...@yahoo-inc.com wrote: [snip]
Re: [Openstack] Proposal for new devstack (v2?)
Note rhel6 isn't fully there yet. But in progress ;) On 1/20/12 7:33 PM, Sean Roberts sean...@yahoo-inc.com wrote: Sweet. Will do next week. ~sean On Jan 20, 2012, at 7:25 PM, Joshua Harlow harlo...@yahoo-inc.com wrote: [snip]
[Openstack] Openstack packages tags in Debian
Hi, I have spent quite some time making sure that we have nice tags for Openstack in Debian, making it easier to find each of our daemons and programs. I have also pushed for a new tag called Suite::openstack, so that it's now easy to find absolutely all packages related to openstack with a very simple query. Now, since there are a lot of packages (more than 50 binary packages!!!), and since I did it only with my limited knowledge, I must have made a few mistakes. So if you are interested, I would welcome anyone to review the work, submit more tags, and remove the ones which are wrong. Everything is there: http://debtags.debian.net/ And if you want more specifically to show all Openstack packages in Debian, you can go there: http://debtags.debian.net/reports/maint/openstack-de...@lists.alioth.debian.org Cheers, Thomas Goirand (zigo)