Re: [Openstack] Swift proxy logs

2013-07-16 Thread Chuck Thier
Hello,

If you followed the rsyslog instructions in the SAIO, then the proxy logs
will be in /var/log/swift/proxy.log and proxy.error.  If not, they will be
in either /var/log/syslog or /var/log/messages, depending on your server
distro.
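
For reference, here is a rough sketch of the kind of rsyslog rule the SAIO
describes for splitting the proxy logs out (the local1 facility, paths, and
ownership below are assumptions; match whatever log_facility your
proxy-server.conf actually uses):

# Route the proxy's syslog facility to its own files, then restart rsyslog.
cat > /etc/rsyslog.d/10-swift-proxy.conf <<'EOF'
local1.*;local1.!notice /var/log/swift/proxy.log
local1.notice           /var/log/swift/proxy.error
local1.*                ~
EOF
mkdir -p /var/log/swift && chown syslog:adm /var/log/swift
service rsyslog restart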

--
Chuck


On Tue, Jul 16, 2013 at 4:57 AM, CHABANI Mohamed El Hadi 
chabani.mohamed.h...@gmail.com wrote:

 Hi guys,

 I have a problem with the Swift proxy log: I can't find it in
 /var/log/syslog (which only has entries for replicators, updaters, ...) nor in
 /var/log/upstart, and there is also no parameter in proxy-server.conf
 to specify the log path (I took the default one from
 http://docs.openstack.org/developer/swift/development_saio.html).

 Any help please ?


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] [Swift] Updating best practices for XFS inode size

2013-06-28 Thread Chuck Thier
Swift stores object metadata in the xattrs of the file on disk, and XFS
stores xattrs in the inodes.  When swift was first developed, there were
performance issues with using the default inode size in XFS, which led us
to recommend changing the inode size when creating XFS filesystems.

In the past couple of years, the XFS team has made some big improvements to
inode allocation and use.  With some prompting from the XFS team at Red Hat,
I revisited testing the default inode size with swift.  If you are using
recent Linux kernels, using the default inode size no longer has any impact
on write (PUT) performance through swift.  With this change we also get
some added benefits such as improved caching of inodes and better overall
file system performance.
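
For anyone wondering what this looks like in practice, a quick sketch of the
two mkfs invocations (the device name and label are just examples):

# Old recommendation: bump the inode size so xattrs fit inside the inode.
mkfs.xfs -f -i size=1024 -L d1 /dev/sdb1

# Current recommendation on recent kernels: just take the XFS defaults
# (256 byte inodes).
mkfs.xfs -f -L d1 /dev/sdb1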

I think this is most greatly demonstrated in the following graph displaying
total replication time over a period of two weeks:

[image: Inline image 1]

The green line represents the average of a handful of storage nodes that
have the inode size set to 1024, and the blue line represents storage nodes
that have the default inode size (256).

I currently have a merge proposal[1] in process that updates the swift
documentation, but thought I would send an email to the larger group to
spread the word.

I would like to thank the XFS folks at Red Hat for letting us know about the
improvements in XFS, and the XFS team in general for the great work they
have done.

--
Chuck

[1] https://review.openstack.org/#/c/34890/
(attachment: replication.png)
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift questions

2013-05-22 Thread Chuck Thier
Hey Mark,

On Wed, May 22, 2013 at 8:59 AM, Mark Brown ntdeveloper2...@yahoo.com wrote:

 Thank you for the responses Chuck.

 As part of a rebalance, the replicator, I would assume, copies the object
 from the old partition to the new partition, and then deletes it from the
 old partition. Is that a fair assumption?


That is correct: the replicators will replicate the data in the partition
at the new location.  The server that has the old data will delete it once
it is certain that the data is available in all replicas.


 Also, is there anything that explains in more detail how the handoff node
 is picked? What if the ring changes and the data still lives on the handoff
 node?


I don't think there is currently any documentation that describes how
handoff nodes are picked.  Handoff nodes can be used for a variety of
reasons, including when servers are down, or not responding.  The server
that has handoff partitions is always responsible for getting the data to
the correct location.

There is some more detailed information on replication at:

http://docs.openstack.org/developer/swift/overview_replication.html

(if you haven't read it yet)

--
Chuck
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift questions

2013-05-22 Thread Chuck Thier
On Wed, May 22, 2013 at 7:54 PM, Mark Brown ntdeveloper2...@yahoo.com wrote:

 Thanks Chuck.

 Just one more question about rebalancing. Have there been measurements on
 how much it affects performance when a rebalance is in progress? I would
 assume it's an operation that puts some load on the system, while also
 keeping track of whether the original object changed while moving, among
 other things. It's probably not that frequent, but it would be great to
 understand how it works in the real world with large workloads.


There are no exact measurements on how it affects performance.  A lot of it
depends on how aggressively you are re-balancing, the size of the cluster,
usage, etc.  Heavy replication will increase the amount of network traffic
and disk I/O.


 I did have a general question about container sync. Is it something that
 is used a lot, and works well?


I have heard of some using it successfully, but it does have some issues,
so I personally wouldn't recommend it beyond experimentation and small
clusters.

--
Chuck
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How can I interpret Swift-bench Output data

2013-04-16 Thread Chuck Thier
The important lines are the **FINAL** lines (the others just print
status): you did 1000 PUTs at an average of 9.1 PUTs per second, the GETs
averaged 52.9 per second, and the DELs 7.6 per second.
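
For anyone else reading along, a run like that is driven by a small config
file; a sketch of roughly what it might have looked like (the auth endpoint,
credentials, and sizes are made up; option names are from the sample
swift-bench.conf):

cat > /etc/swift/swift-bench.conf <<'EOF'
[bench]
auth = http://127.0.0.1:8080/auth/v1.0
user = test:tester
key = testing
concurrency = 10
object_size = 4096
num_objects = 1000
num_gets = 10000
delete = yes
EOF
swift-bench /etc/swift/swift-bench.conf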

--
Chuck


On Tue, Apr 16, 2013 at 3:28 PM, Sujay M sujay@gmail.com wrote:

 Hi all,

 Can you please let me know how one can interpret the swift-bench output
 data. What does the result signify?
 Here is a sample output data.

 swift-bench 2013-04-17 01:48:22,708 INFO 18 PUTS [0 failures], 8.8/s
 swift-bench 2013-04-17 01:48:37,795 INFO 187 PUTS [0 failures], 10.9/s
 swift-bench 2013-04-17 01:48:52,903 INFO 289 PUTS [0 failures], 9.0/s
 swift-bench 2013-04-17 01:49:07,916 INFO 511 PUTS [0 failures], 10.8/s
 swift-bench 2013-04-17 01:49:23,186 INFO 659 PUTS [0 failures], 10.5/s
 swift-bench 2013-04-17 01:49:38,823 INFO 760 PUTS [0 failures], 9.7/s
 swift-bench 2013-04-17 01:49:54,486 INFO 825 PUTS [0 failures], 8.8/s
 swift-bench 2013-04-17 01:50:09,516 INFO 985 PUTS [0 failures], 9.0/s
 swift-bench 2013-04-17 01:50:10,283 INFO 1000 PUTS **FINAL** [0 failures],
 9.1/s
 swift-bench 2013-04-17 01:50:13,743 INFO 93 GETS [0 failures], 42.1/s
 swift-bench 2013-04-17 01:50:28,782 INFO 549 GETS [0 failures], 31.8/s
 swift-bench 2013-04-17 01:50:43,877 INFO 1329 GETS [0 failures], 41.1/s
 swift-bench 2013-04-17 01:50:58,891 INFO 1956 GETS [0 failures], 41.3/s
 swift-bench 2013-04-17 01:51:13,899 INFO 3123 GETS [0 failures], 50.1/s
 swift-bench 2013-04-17 01:51:28,900 INFO 4025 GETS [0 failures], 52.0/s
 swift-bench 2013-04-17 01:51:43,916 INFO 5142 GETS [0 failures], 55.7/s
 swift-bench 2013-04-17 01:51:58,932 INFO 5717 GETS [0 failures], 53.2/s
 swift-bench 2013-04-17 01:52:14,003 INFO 6030 GETS [0 failures], 49.2/s
 swift-bench 2013-04-17 01:52:29,011 INFO 6343 GETS [0 failures], 46.1/s
 swift-bench 2013-04-17 01:52:44,019 INFO 7185 GETS [0 failures], 47.1/s
 swift-bench 2013-04-17 01:52:59,022 INFO 8294 GETS [0 failures], 49.5/s
 swift-bench 2013-04-17 01:53:14,034 INFO 9575 GETS [0 failures], 52.5/s
 swift-bench 2013-04-17 01:53:20,595 INFO 10000 GETS **FINAL** [0 failures], 52.9/s
 swift-bench 2013-04-17 01:53:22,728 INFO 37 DEL [0 failures], 17.6/s
 swift-bench 2013-04-17 01:53:37,754 INFO 256 DEL [0 failures], 14.9/s
 swift-bench 2013-04-17 01:53:52,787 INFO 381 DEL [0 failures], 11.8/s
 swift-bench 2013-04-17 01:54:07,838 INFO 572 DEL [0 failures], 12.1/s
 swift-bench 2013-04-17 01:54:23,568 INFO 656 DEL [0 failures], 10.4/s
 swift-bench 2013-04-17 01:54:39,543 INFO 696 DEL [0 failures], 8.8/s
 swift-bench 2013-04-17 01:54:54,598 INFO 737 DEL [0 failures], 7.8/s
 swift-bench 2013-04-17 01:55:09,723 INFO 813 DEL [0 failures], 7.5/s
 swift-bench 2013-04-17 01:55:24,871 INFO 961 DEL [0 failures], 7.7/s
 swift-bench 2013-04-17 01:55:32,347 INFO 1000 DEL **FINAL** [0 failures],
 7.6/s


 Regards,
 Sujay M



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] TC Candidacy

2013-03-16 Thread Chuck Thier
Hello all,

I would like to run for a seat on the TC.  I am one of the original
developers of Rackspace Cloud Files, which became OpenStack Swift, and was
deeply involved with the creation of OpenStack.  I also led the team at
Rackspace that created Cloud Block Storage, which built off the foundation of
OpenStack Cinder, and during that time contributed directly and indirectly
to Nova and Cinder.  I have the unique experience of not only developing
across several OpenStack projects, but also being responsible for deploying
the projects at a very large scale.  I have a track record of fighting for
reasonable APIs, upgradeability, and maintainability across projects.  I
have also fought to ensure that we have equal and fair representation from
all projects in the OpenStack community.

The purpose of the TC is not to legislate from the bench, but when
questions and issues are brought to the TC I will continue to support these
ideals.  I deeply care about OpenStack and its future success, so please
consider me for this position.

Thanks,

--
Chuck Thier
@creiht
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] How to debug a Swift's not found error while downloading a file?

2013-02-13 Thread Chuck Thier
Hi Giuseppe,

The first thing you can do is use the swift-get-nodes utility to find
out where those objects would normally be located.  In your case it
will look something like:

swift-get-nodes /etc/swift/object.ring.gz AUTH_ACCOUNTHASH images
8ab06434-5152-4563-b122-f293fd9af465

Of course substituting the correct info for the account.
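
Once you have the output, you can check each of the nodes it lists; something
along these lines (the IP, port, device, and partition below are placeholders
that come from the swift-get-nodes output):

# HEAD the object directly on one of the listed object servers:
curl -I "http://10.0.0.1:6000/sdb1/12345/AUTH_ACCOUNTHASH/images/8ab06434-5152-4563-b122-f293fd9af465"

# Or ssh to the storage node and look for the .data file in that partition:
ls /srv/node/sdb1/objects/12345/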


--
Chuck

On Wed, Feb 13, 2013 at 6:32 AM, Giuseppe Civitella
giuseppe.civite...@gmail.com wrote:
 Hi all,

 I'm receiving a not found error while trying to download a file
 from a Swift container. This happens even though I can see that file when
 listing the container's contents.
 I was wondering what the best way is to inspect the problem and
 manually recover the file when this kind of thing happens.

 This is the command output:

 gcivitella@arale:~/Lab/openstack/nova$ swift -V 2 -A
 http://folsom.xxx.it:5000/v2.0 -U service:glance -K xxx list images
 325f4900-649a-459f-b509-a08ad5bc95b3
 83bd8ac7-48a1-4f1f-9695-0ad255d72132
 8ab06434-5152-4563-b122-f293fd9af465
 be44d07b-9a64-427b-9b27-3cad99ae8cda
 c13018d6-46a2-4a6a-9df6-7976f8b44aaa
 f3f64460-3486-49e1-8320-78abecd0c1b9

 gcivitella@arale:~/Lab/openstack/nova$ swift -V 2 -A
 http://folsom.xxx.it:5000/v2.0 -U service:glance -K xxx download
 images 8ab06434-5152-4563-b122-f293fd9af465
 Object 'images/8ab06434-5152-4563-b122-f293fd9af465' not found

 Thanks a lot
 Giuseppe


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [swift]

2013-02-11 Thread Chuck Thier
Howdy,

The scripts are generated when setup.py is run (either as `setup.py
install` or `setup.py develop`).
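
For example, in the SAIO that usually happens in the step where you install
the checkouts; a sketch (paths follow the SAIO layout and may differ on your
box):

cd ~/python-swiftclient && sudo python setup.py develop
cd ~/swift && sudo python setup.py develop
# easy_install drops the wrapper scripts into /usr/local/bin, e.g.:
ls -l /usr/local/bin/swift /usr/local/bin/swift-init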

--
Chuck

On Mon, Feb 11, 2013 at 11:02 AM, Kun Huang academicgar...@gmail.com wrote:
 Hi, swift developers

 I found the script /usr/local/bin/swift is:

 #!/usr/bin/python  -u
 # EASY-INSTALL-DEV-SCRIPT: 'python-swiftclient==1.3.0.4.gb5f222b','swift'
 __requires__ = 'python-swiftclient==1.3.0.4.gb5f222b'
 from pkg_resources import require;
 require('python-swiftclient==1.3.0.4.gb5f222b')
 del require
 __file__ = '/home/swift/bin/python-swiftclient/bin/swift'
 execfile(__file__)

 It seems to be generated by easy_install, but I couldn't find which step did this.
 (I used SAIO to build the swift environment.)

 Could someone do me a favor? Which command in SAIO generated the swift scripts
 in /usr/local/ ?



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Disk Recommendation - OpenStack Swift

2013-01-29 Thread Chuck Thier
Hi John,

It would be difficult to recommend a specific drive, because things
change so often.   New drives are being introduced all the time.
Manufacturers buy their competition and cancel their awesome products.
 So the short answer is that you really need to test the drives out in
your environment and in your use case.  I can pass on some wisdom from
our experience.

1.  Enterprise drives are not worth it.  We have not seen a
significant difference between the failure rates of enterprise-class
drives and commodity drives.  I have heard this as well from other
large swift deployers, as well as other large storage providers.  Even
if enterprise drives had a significantly lower failure rate, the added
cost would not be worth it.

2.  Be wary of Green drives.  The green features on these drives can
work against you in a swift cluster (like auto parking heads and
spinning down).  If you are going with a green drive, make sure they
are well tested, and/or at least have the capability to turn these
features off.

3.  Go big.  If you can, use 3T or larger drives.  You get a more even
distribution and better overall utilization with larger drives.

4.  Don't believe everything you read on the internet (including me
:))  Test! Test! Test!
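
If it helps, a couple of quick (and destructive, so lab only) ways to hammer a
candidate drive before trusting it; the device name and numbers are just
examples:

# Raw sequential write throughput:
dd if=/dev/zero of=/dev/sdX bs=1M count=4096 oflag=direct

# Something closer to swift's small-random-write pattern:
fio --name=randwrite --filename=/dev/sdX --rw=randwrite --bs=64k \
    --direct=1 --runtime=60 --time_based --numjobs=4 --group_reporting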

--
Chuck

On Mon, Jan 28, 2013 at 7:11 PM, John van Ommen john.vanom...@gmail.com wrote:
 Does anyone on the list have a disk they'd recommend for OpenStack swift?

 I am looking at hardware from Dell and HP, and I've found that the
 disks they offer are very expensive.  For instance, HP's 2TB disk has
 a MSRP of over $500, while you can get a Western Digital 2TB 'Red'
 disk for $127.

 Is there any reason to opt for the drives offered by Dell or HP?  (I
 assume they're re-branded disks from Seagate and WD anyways.)

 Are there any disk SKUs that you'd recommend?


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Change the partition power to recreate the RING

2013-01-14 Thread Chuck Thier
Hi Alejandro,

I really doubt that partition size is causing these issues.  It can be
difficult to debug these types of issues without access to the
cluster, but I can think of a couple of things to look at.

1.  Check your disk io usage and io wait on the storage nodes.  If
that seems abnormally high, then that could be one of the sources of
problems.  If this is the case, then the first things that I would
look at are the auditors, as they can use up a lot of disk io if not
properly configured.  I would try turning them off for a bit
(swift-*-auditor) and see if that makes any difference.

2.  Check your network io usage.  You haven't described what type of
network you have going to the proxies, but if they share a single GigE
interface, if my quick calculations are correct, you could be
saturating the network.

3.  Check your CPU usage.  I listed this one last as you have said
that you have already worked at tuning the number of workers (though I
would be interested to hear how many workers you have running for each
service).  The main thing to look for is whether all of your workers
are maxed out on CPU; if so, you may need to bump the number of
workers.

4.  SSL termination.  Where are you terminating the SSL connection?
If you are terminating SSL directly with the swift proxy, then that
could also be a source of issues.  This was only meant for dev and
testing, and you should use an SSL-terminating load balancer in front
of the swift proxies.

That's what I could think of right off the top of my head.
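
A few commands that can help with checks 1-3 (the interface name is just an
example):

# 1. io wait and per-device utilization on a storage node:
iostat -x 5

# 2. network throughput on the proxy interface:
ifstat -i eth0 5        # or: iftop -i eth0

# 3. whether the swift workers are pegged on CPU:
top                     # or: ps -eo pcpu,pid,args | grep swift | sort -rn | head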

--
Chuck

On Mon, Jan 14, 2013 at 5:45 AM, Alejandro Comisario
alejandro.comisa...@mercadolibre.com wrote:
 Chuck / John.
 We are handling 50,000 requests per minute (of which 10,000+ are PUTs of small
 objects, from 10KB to 150KB).

 We are using swift 1.7.4 with keystone token caching, so no latency over
 there.
 We have 12 proxies and 24 datanodes divided into 4 zones (each datanode
 has 48GB of RAM, 2 hexa-core CPUs, and 4 devices of 3TB each).

 The workers that are putting objects into swift are seeing awful
 performance, and so are we, with peaks of 2 to 15 seconds per PUT operation
 coming from the datanodes.
 We tuned db_preallocation, disable_fallocate, workers and concurrency, but we
 can't reach the request rate that we need (24,000 PUTs per minute of small
 objects), and we don't seem to find where the problem is, other than at the
 datanodes.

 Maybe worth pasting our config over here?
 Thanks in advance.

 alejandro

 On 12 Jan 2013 02:01, Chuck Thier cth...@gmail.com wrote:

 Looking at this from a different perspective.  Having 2500 partitions
 per drive shouldn't be an absolutely horrible thing either.  Do you
 know how many objects you have per partition?  What types of problems
 are you seeing?

 --
 Chuck

 On Fri, Jan 11, 2013 at 3:28 PM, John Dickinson m...@not.mn wrote:
  In effect, this would be a complete replacement of your rings, and that
  is essentially a whole new cluster. All of the existing data would need to
  be rehashed into the new ring before it is available.
 
  There is no process that rehashes the data to ensure that it is still in
  the correct partition. Replication only ensures that the partitions are on
  the right drives.
 
  To change the number of partitions, you will need to GET all of the data
  from the old ring and PUT it to the new ring. A more complicated (but
  perhaps more efficient) solution may include something like walking each
  drive and rehashing+moving the data to the right partition and then letting
  replication settle it down.
 
  Either way, 100% of your existing data will need to at least be rehashed
  (and probably moved). Your CPU (hashing), disks (read+write), RAM 
  (directory
  walking), and network (replication) may all be limiting factors in how long
  it will take to do this. Your per-disk free space may also determine what
  method you choose.
 
  I would not expect any data loss while doing this, but you will probably
  have availability issues, depending on the data access patterns.
 
  I'd like to eventually see something in swift that allows for changing
  the partition power in existing rings, but that will be
  hard/tricky/non-trivial.
 
  Good luck.
 
  --John
 
 
  On Jan 11, 2013, at 1:17 PM, Alejandro Comisario
  alejandro.comisa...@mercadolibre.com wrote:
 
  Hi guys.
  We created a swift cluster several months ago; the thing is that
  right now we can't add hardware, and we configured lots of partitions
  thinking about the final picture of the cluster.

  Today each datanode has 2500+ partitions per device, and even after
  tuning the background processes (replicator, auditor & updater) we really
  want to try to lower the partition power.

  Since it's not possible to do that without recreating the ring, we do
  have the luxury of recreating it with a much lower partition power, and
  rebalancing / deploying the new ring.
 
  The question is, having a working cluster with *existing data* is it
  possible to do

Re: [Openstack] [OpenStack][Swift] Reset Swift | Clear Swift and Account Database

2013-01-14 Thread Chuck Thier
Hi Leander,

The following assumes that the cluster isn't in production yet:

1.  Stop all services on all machines
2.  Format and remount all storage devices
3.  Re-create rings with the correct partition size
4.  Push new rings out to all servers
5.  Start services back up and test.
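
Roughly, something like the following (device names, IPs, and the part power
are placeholders; adjust to your layout):

swift-init all stop
umount /srv/node/sdb1
mkfs.xfs -f /dev/sdb1
mount /srv/node/sdb1

# Rebuild the rings once (on one box), then copy the *.ring.gz to every node:
cd /etc/swift
swift-ring-builder object.builder create 18 3 1
swift-ring-builder object.builder add z1-10.0.0.1:6010/sdb1 100
swift-ring-builder object.builder rebalance
# ...repeat for container.builder (port 6011) and account.builder (port 6012)
swift-init all start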

--
Chuck

On Mon, Jan 14, 2013 at 8:02 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 Hello all,

 I've come to realize that my swift storage partitions are setup with the
 wrong node size. The only way for me to fix this is to format the
 partitions. I was wondering how I could reset swift (remove all data from
 stored files) without having to install it again.

 Regards,

 Leander



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
Hey Leander,

Can you post what performance you are getting?  If they are all
sharing the same GigE network, you might also check that the links
aren't being saturated, as it is pretty easy to saturate pushing 200k
files around.

--
Chuck

On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 Well, I've fixed the node size and disabled all the replicator and
 auditor processes. However, it is even slower now than it was before :/. Any
 suggestions?


 On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert
 leande...@gmail.com wrote:

 Ok, thanks for all the tips/help.

 Regards,

 Leander


 On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen
 robert.vanleeu...@spilgames.com wrote:

  Allow me to rephrase.
  I've read somewhere (can't remember where) that it would be faster to
  upload files if they were uploaded to separate containers.
  This was suggested for a standard swift installation with a certain
  replication factor.
  Since I'll be uploading the files with the replicators turned off, does
  it really matter if I insert a group of them in separate containers?

 My guess is this concerns the SQLite database load distribution.
 So yes, it still matters.

 Just to be clear: turning replicators off does not matter at ALL when
 putting files in a healthy cluster.
 Files will be replicated / put on all required nodes at the moment the
 put request is done.
 The put request will only give an OK when there is quorum writing the
 file (the file is stored on more than half of the required object nodes)
 The replicator daemons do not have anything to do with this.

 Cheers,
 Robert







___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
Using swift stat probably isn't the best way to determine cluster
performance, as those stats are updated asynchronously and could be delayed
quite a bit while you are heavily loading the cluster.  It also might be
worthwhile to use a tool like swift-bench to test your cluster to make
sure it is properly set up before loading data into the system.

--
Chuck

On Mon, Jan 14, 2013 at 10:38 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 I'm getting around 5-6.5 GB a day of bytes written on Swift. I calculated
 this by calling swift stat && sleep 60s && swift stat, and did some
 calculation based on those values to get to the end result.

 Currently I'm resetting swift with a node size of 64, since 90% of the files
 are less than 70KB in size. I think that might help.


 On Mon, Jan 14, 2013 at 4:34 PM, Chuck Thier cth...@gmail.com wrote:

 Hey Leander,

 Can you post what performance you are getting?  If they are all
 sharing the same GigE network, you might also check that the links
 aren't being saturated, as it is pretty easy to saturate pushing 200k
 files around.

 --
 Chuck

 On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Well, I've fixed the node size and disabled the all the replicator and
  auditor processes. However, it is even slower now than it was before :/.
  Any
  suggestions?
 
 
  On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
 
  Ok, thanks for all the tips/help.
 
  Regards,
 
  Leander
 
 
  On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen
  robert.vanleeu...@spilgames.com wrote:
 
   Allow me to rephrase.
   I've read somewhere (can't remember where) that it would be faster
   to
   upload files if they would be uploaded to separate containeres.
   This was suggested for a standard swift installation with a certain
   replication factor.
   Since I'll be uploading the files with the replicators turned off,
   does
   it really matter if I insert a group of them in separate
   containeres?
 
  My guess is this concerns the SQLite database load distribution.
  So yes, it still matters.
 
  Just to be clear: turning replicators off does not matter at ALL when
  putting files in a healthy cluster.
  Files will be replicated / put on all required nodes at the moment
  the
  put request is done.
  The put request will only give an OK when there is quorum writing the
  file (the file is stored on more than half of the required object
  nodes)
  The replicator daemons do not have anything to do with this.
 
  Cheers,
  Robert
 
 
 
 
 
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
That should be fine, but it doesn't have any way of reporting stats
currently.  You could use tools like ifstat to look at how much
bandwidth you are using.  You can also look at how much CPU the swift
tool is using.  Depending on how your data is laid out, you could run
several swift client processes in parallel until you max out either your
network or CPU.  I would start with one client first, until you max it
out, then move on to the next.
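
As a sketch of what that might look like (the paths, container names, and
auth values here only mirror the examples earlier in this thread):

# Split the data into N directories up front, then run one upload per directory:
for d in /data/batch*; do
  swift -V 2 -A http://folsom.example.com:5000/v2.0 -U service:glance -K secret \
      upload "$(basename "$d")" "$d" &
done
wait

# Watch the link while it runs to see whether the network is the bottleneck:
ifstat 5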

--
Chuck

On Mon, Jan 14, 2013 at 10:45 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 I'm currently using the swift client to upload files, would you recommend
 another approach?


 On Mon, Jan 14, 2013 at 4:43 PM, Chuck Thier cth...@gmail.com wrote:

 Using swift stat probably isn't the best way to determine cluster
 performance, as those stats are updated async, and could be delayed
 quite a bit as you are heavily loading the cluster.  It also might be
 worthwhile to use a tool like swift-bench to test your cluster to make
 sure it is properly setup before loading data into the system.

 --
 Chuck

 On Mon, Jan 14, 2013 at 10:38 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  I'm getting around 5-6.5 GB a day of bytes written on Swift. I
  calculated
  this by calling swift stat  sleep 60s  swift stat. I did some
  calculation based on those values to get to the end result.
 
  Currently I'm resetting swift with a node size of 64, since 90% of the
  files
  are less than 70KB in size. I think that might help.
 
 
  On Mon, Jan 14, 2013 at 4:34 PM, Chuck Thier cth...@gmail.com wrote:
 
  Hey Leander,
 
  Can you post what performance you are getting?  If they are all
  sharing the same GigE network, you might also check that the links
  aren't being saturated, as it is pretty easy to saturate pushing 200k
  files around.
 
  --
  Chuck
 
  On Mon, Jan 14, 2013 at 10:15 AM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
   Well, I've fixed the node size and disabled the all the replicator
   and
   auditor processes. However, it is even slower now than it was before
   :/.
   Any
   suggestions?
  
  
   On Mon, Jan 14, 2013 at 3:23 PM, Leander Bessa Beernaert
   leande...@gmail.com wrote:
  
   Ok, thanks for all the tips/help.
  
   Regards,
  
   Leander
  
  
   On Mon, Jan 14, 2013 at 3:21 PM, Robert van Leeuwen
   robert.vanleeu...@spilgames.com wrote:
  
Allow me to rephrase.
I've read somewhere (can't remember where) that it would be
faster
to
upload files if they would be uploaded to separate containeres.
This was suggested for a standard swift installation with a
certain
replication factor.
Since I'll be uploading the files with the replicators turned
off,
does
it really matter if I insert a group of them in separate
containeres?
  
   My guess is this concerns the SQLite database load distribution.
   So yes, it still matters.
  
   Just to be clear: turning replicators off does not matter at ALL
   when
   putting files in a healthy cluster.
   Files will be replicated / put on all required nodes at the
   moment
   the
   put request is done.
   The put request will only give an OK when there is quorum writing
   the
   file (the file is stored on more than half of the required object
   nodes)
   The replicator daemons do not have anything to do with this.
  
   Cheers,
   Robert
  
  
  
  
  
  
 
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
On Mon, Jan 14, 2013 at 11:01 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 I currently have 4 machines running 10 clients each uploading 1/40th of the
 data. More than 40 simultaneous clientes starts to severely affect
 Keystone's ability to handle these operations.

You might also double-check that you are running a very recent version
of keystone that includes the update to use the swift memcache servers,
and that it is configured correctly.  This will cache tokens and avoid
making a call to keystone for every single request.  If that isn't
happening, it is likely adding a lot of latency to each request.
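
For reference, a sketch of the relevant bits of proxy-server.conf for a
folsom-era setup (option names can differ slightly depending on your
keystoneclient version, so treat this as a starting point rather than a
drop-in config; hosts and credentials are placeholders):

[pipeline:main]
pipeline = catch_errors healthcheck cache authtoken keystoneauth proxy-server

[filter:cache]
use = egg:swift#memcache
memcache_servers = 127.0.0.1:11211

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.111.200
auth_port = 35357
admin_tenant_name = service
admin_user = swift
admin_password = secret
# reuse swift's memcache pool for token caching:
cache = swift.cache

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin, swiftoperator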

--
Chuck

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 Also, I'm unable to run the swift-bench with keystone.


Hrm... That was supposed to be fixed with this bug:
https://bugs.launchpad.net/swift/+bug/1011727

My keystone dev instance isn't working at the moment, but I'll see if
I can get one of the team to take a look at it.

--
Chuck

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
You would have to look at the proxy log to see if a request is being
made.  The results from the swift command line are just the calls that
the client makes.  The server still has to validate the token on
every request.
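
If the SAIO-style rsyslog setup from earlier in this thread is in place,
something like this will show whether each client call is actually hitting
the proxy (the log paths are assumptions and will differ with packaged
installs):

tail -f /var/log/swift/proxy.log | grep ' HEAD '
# and on the keystone host, watch for token validations:
tail -f /var/log/keystone/keystone.log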

--
Chuck

On Mon, Jan 14, 2013 at 12:37 PM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 Below is an output from Swift stat, since I don't see any requests to
 keystone, I'm assuming that memcache is being used right?

 REQ: curl -i http://192.168.111.215:8080/v1/AUTH_[ID] -X HEAD -H
 X-Auth-Token: [TOKEN]

 DEBUG:swiftclient:REQ: curl -i http://192.168.111.215:8080/v1/AUTH_[ID] -X
 HEAD -H X-Auth-Token: [TOKEN]

 RESP STATUS: 204

 DEBUG:swiftclient:RESP STATUS: 204

Account: AUTH_[ID]
 Containers: 44
Objects: 4818
  Bytes: 112284450
 Accept-Ranges: bytes
 X-Timestamp: 1358184925.20885
 X-Trans-Id: tx8cffb469c9c542be830db10a2b90d901




 On Mon, Jan 14, 2013 at 6:31 PM, Dolph Mathews dolph.math...@gmail.com
 wrote:

 If memcache is being utilized by your keystone middleware, you should see
 keystone attaching to it on the first incoming request, e.g.:

   keystoneclient.middleware.auth_token [INFO]: Using Keystone memcache for
 caching token

  You may also want to use auth_token from keystoneclient >= v0.2.0 if
  you're not already (instead of from keystone itself).


 -Dolph


 On Mon, Jan 14, 2013 at 11:43 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:

 Are you by any chance referring to this topic
 https://lists.launchpad.net/openstack/msg08639.html regarding the keystone
 token cache? If so I've already added the configuration line and have not
 noticed any speedup :/




 On Mon, Jan 14, 2013 at 5:19 PM, Leander Bessa Beernaert
 leande...@gmail.com wrote:

 I'm using the ubuntu 12.04 packages of the folsom repository by the way.


 On Mon, Jan 14, 2013 at 5:18 PM, Chuck Thier cth...@gmail.com wrote:

 On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Also, I'm unable to run the swift-bench with keystone.
 

 Hrm... That was supposed to be fixed with this bug:
 https://bugs.launchpad.net/swift/+bug/1011727

 My keystone dev instance isn't working at the moment, but I'll see if
 I can get one of the team to take a look at it.

 --
 Chuck








___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [OpenStack][Swift] Fast way of uploading 200GB of 200KB files to Swift

2013-01-14 Thread Chuck Thier
Rather than ping-ponging emails back and forth on this list, it would
be easier if you could hop on to the #openstack-swift IRC channel on
freenode to discuss further.

--
Chuck

On Mon, Jan 14, 2013 at 1:00 PM, Leander Bessa Beernaert
leande...@gmail.com wrote:
 Neither keystone nor swift proxy are producing any logs. I'm not sure what
 to do :S


 On Mon, Jan 14, 2013 at 6:50 PM, Chuck Thier cth...@gmail.com wrote:

 You would have to look at the proxy log to see if a request is being
 made.  The results from the swift command line are just the calls that
 the client makes.  The server still haves to validate the token on
 every request.

 --
 Chuck

 On Mon, Jan 14, 2013 at 12:37 PM, Leander Bessa Beernaert
 leande...@gmail.com wrote:
  Below is an output from Swift stat, since I don't see any requests to
  keystone, I'm assuming that memcache is being used right?
 
  REQ: curl -i http://192.168.111.215:8080/v1/AUTH_[ID] -X HEAD -H
  X-Auth-Token: [TOKEN]
 
  DEBUG:swiftclient:REQ: curl -i http://192.168.111.215:8080/v1/AUTH_[ID]
  -X
  HEAD -H X-Auth-Token: [TOKEN]
 
  RESP STATUS: 204
 
  DEBUG:swiftclient:RESP STATUS: 204
 
 Account: AUTH_[ID]
  Containers: 44
 Objects: 4818
   Bytes: 112284450
  Accept-Ranges: bytes
  X-Timestamp: 1358184925.20885
  X-Trans-Id: tx8cffb469c9c542be830db10a2b90d901
 
 
 
 
  On Mon, Jan 14, 2013 at 6:31 PM, Dolph Mathews dolph.math...@gmail.com
  wrote:
 
  If memcache is being utilized by your keystone middleware, you should
  see
  keystone attaching to it on the first incoming request, e.g.:
 
keystoneclient.middleware.auth_token [INFO]: Using Keystone memcache
  for
  caching token
 
  You may also want to use auth_token from keystoneclient = v0.2.0 if
  you're not already (instead of from keystone itself).
 
 
  -Dolph
 
 
  On Mon, Jan 14, 2013 at 11:43 AM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
 
  Are you by any chance referring to this topic
  https://lists.launchpad.net/openstack/msg08639.html regarding the
  keystone
  token cache? If so I've already added the configuration line and have
  not
  noticed any speedup :/
 
 
 
 
  On Mon, Jan 14, 2013 at 5:19 PM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
 
  I'm using the ubuntu 12.04 packages of the folsom repository by the
  way.
 
 
  On Mon, Jan 14, 2013 at 5:18 PM, Chuck Thier cth...@gmail.com
  wrote:
 
  On Mon, Jan 14, 2013 at 11:03 AM, Leander Bessa Beernaert
  leande...@gmail.com wrote:
   Also, I'm unable to run the swift-bench with keystone.
  
 
  Hrm... That was supposed to be fixed with this bug:
  https://bugs.launchpad.net/swift/+bug/1011727
 
  My keystone dev instance isn't working at the moment, but I'll see
  if
  I can get one of the team to take a look at it.
 
  --
  Chuck
 
 
 
 
 
 
 



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [SWIFT] Change the partition power to recreate the RING

2013-01-14 Thread Chuck Thier
Hey Alejandro,

Those were the most common issues that people run into when they are having
performance issues with swift.  The other thing to check is the logs, to make
sure there are no major issues (like bad drives, misconfigured nodes, etc.)
that could add latency to the requests.  After that, I'm starting to run out
of the common issues that people run into, and it might be worth contracting
with one of the many swift consulting companies to help you out.  If you have
time and can hop on #openstack-swift on freenode IRC, we might be able to have
a little more interactive discussion, or others may come up with some ideas.

--
Chuck


On Mon, Jan 14, 2013 at 2:01 PM, Alejandro Comisario 
alejandro.comisa...@mercadolibre.com wrote:

 Chuck et All.

 Let me go through the point one by one.

  #1 Even though the object-auditor always runs and never stops, we
  stopped the swift-*-auditor and didn't see any improvement. Across all the
  datanodes we have an average of 8% IO-WAIT (using iostat); the only thing
  we see is that the xfsbuf process runs once in a while, causing 99% iowait
  for a second. We delayed the runtime for that process and didn't see any
  changes either.

 Our object-auditor config for all devices is as follow :

 [object-auditor]
 files_per_second = 5
 zero_byte_files_per_second = 5
 bytes_per_second = 300

  #2 Our 12 proxies are 6 physical machines and 6 KVM instances running on
  nova; checking iftop we are at an average of 15Mb/s of bandwidth usage, so I
  don't think we are saturating the network.
  #3 The overall idle CPU on all datanodes is 80%. I'm not sure how to check
  the CPU usage per worker; let me paste the config for a device for object,
  account and container.

 *object-server.conf*
 *--*
 [DEFAULT]
 devices = /srv/node/sda3
 mount_check = false
 bind_port = 6010
 user = swift
 log_facility = LOG_LOCAL2
 log_level = DEBUG
 workers = 48
 disable_fallocate = true

 [pipeline:main]
 pipeline = object-server

 [app:object-server]
 use = egg:swift#object

 [object-replicator]
 vm_test_mode = yes
 concurrency = 8
 run_pause = 600

 [object-updater]
 concurrency = 8

 [object-auditor]
 files_per_second = 5
 zero_byte_files_per_second = 5
 bytes_per_second = 300

 *account-server.conf*
 *---*
 [DEFAULT]
 devices = /srv/node/sda3
 mount_check = false
 bind_port = 6012
 user = swift
 log_facility = LOG_LOCAL2
 log_level = DEBUG
 workers = 48
 db_preallocation = on
 disable_fallocate = true

 [pipeline:main]
 pipeline = account-server

 [app:account-server]
 use = egg:swift#account

 [account-replicator]
 vm_test_mode = yes
 concurrency = 8
 run_pause = 600

 [account-auditor]

 [account-reaper]

 *container-server.conf*
 *-*
 [DEFAULT]
 devices = /srv/node/sda3
 mount_check = false
 bind_port = 6011
 user = swift
 workers = 48
 log_facility = LOG_LOCAL2
 allow_versions = True
 disable_fallocate = true

 [pipeline:main]
 pipeline = container-server

 [app:container-server]
 use = egg:swift#container
 allow_versions = True

 [container-replicator]
 vm_test_mode = yes
 concurrency = 8
 run_pause = 500

 [container-updater]
 concurrency = 8

 [container-auditor]

  #4 We don't use SSL for swift, so no latency there.

 Hope you guys can shed some light.


 *Alejandro Comisario
 #melicloud CloudBuilders*
 Arias 3751, Piso 7 (C1430CRG)
 Ciudad de Buenos Aires - Argentina
 Cel: +549(11) 15-3770-1857
 Tel : +54(11) 4640-8443


 On Mon, Jan 14, 2013 at 1:23 PM, Chuck Thier cth...@gmail.com wrote:

 Hi Alejandro,

 I really doubt that partition size is causing these issues.  It can be
 difficult to debug these types of issues without access to the
 cluster, but I can think of a couple of things to look at.

 1.  Check your disk io usage and io wait on the storage nodes.  If
 that seems abnormally high, then that could be one of the sources of
 problems.  If this is the case, then the first things that I would
 look at are the auditors, as they can use up a lot of disk io if not
 properly configured.  I would try turning them off for a bit
 (swift-*-auditor) and see if that makes any difference.

 2.  Check your network io usage.  You haven't described what type of
 network you have going to the proxies, but if they share a single GigE
 interface, if my quick calculations are correct, you could be
 saturating the network.

 3.  Check your CPU usage.  I listed this one last as you have said
 that you have already worked at tuning the number of workers (though I
 would be interested to hear how many workers you have running for each
 service).  The main thing to look for, is to see if all of your
 workers are maxed out on CPU, if so, then you may need to bump
 workers.

 4.  SSL Termination?  Where are you terminating the SSL connection?
 If you are terminating SSL in Swift directly with the swift proxy,
 then that could also be a source of issue.  This was only meant for
 dev and testing, and you should use an SSL terminating load

Re: [Openstack] [swift] RAID Performance Issue

2012-12-20 Thread Chuck Thier
Yes, that's why I was careful to clarify that I was talking about parity
RAID.  Performance should be fine otherwise.

--
Chuck

On Wed, Dec 19, 2012 at 8:26 PM, Hua ZZ Zhang zhu...@cn.ibm.com wrote:

 Chuck, David,

 Thanks for your explanation and sharing.
 Since RAID 0 doesn't have parity or mirroring to provide low-level
 redundancy, which means there is no write penalty, it can improve overall
 performance for concurrent IO across multiple disks.
 I'm wondering if it makes sense to use that kind of RAID without
 parity/mirroring to increase R/W performance and leave replication and
 distribution to Swift at a higher level.



 From: Chuck Thier cth...@gmail.com
 Sent by: openstack-bounces+zhuadl=cn.ibm@lists.launchpad.net
 Date: 2012-12-20 上午 12:33
 To: David Busby d.bu...@saiweb.co.uk
 Cc: openstack@lists.launchpad.net
 Subject: Re: [Openstack] [swift] RAID Performance Issue


 There are a couple of things to think about when using RAID (or more
 specifically parity RAID) with swift.

 The first has already been identified in that the workload for swift
 is very write heavy with small random IO, which is very bad for most
 parity RAID.  In our testing, under heavy workloads, the overall RAID
 performance would degrade to be as slow as a single drive.

 It is very common for servers to have many hard drives (our first
 servers that we did testing with had 24 2T drives).  During testing,
 RAID rebuilds were looking like they would take 2 weeks or so, which
 was not acceptable.  While the array was in a degraded state, the
 overall performance of that box would suffer dramatically, which would
 have ripple effects across the rest of the cluster.

 We tried to make things work well with RAID 5 for quite a while as it
 would have made operations easier, and the code simpler since we
 wouldn't have had to handle many of the failure scenarios.

 Looking back, having to not rely on RAID has made swift a much more
 robust and fault tolerant platform.

 --
 Chuck

 On Wed, Dec 19, 2012 at 4:32 AM, David Busby d.bu...@saiweb.co.uk wrote:
  Hi Zang,
 
  As JuanFra points out there's not much sense in using Swift on top of
 raid
  as swift handel; extending on this RAID introduces a write penalty
  (http://theithollow.com/2012/03/21/understanding-raid-penalty/) this in
 turn
  leads to performance issues, refer the link for write penalty's per
  configuration.
 
  As I recall (though this was from way back in October 2010) the suggested
  method of deploying swift is onto standalone XFS drives, leaving swift to
  handel the replication and distribution.
 
 
  Cheers
 
  David
 
 
 
 
 
 
  On Wed, Dec 19, 2012 at 9:12 AM, JuanFra Rodriguez Cardoso
  juanfra.rodriguez.card...@gmail.com wrote:
 
  Hi Zang:
 
  Basically, it makes no sense to use Swift on top of RAID because Swift
  just delivers replication schema.
 
  Regards,
  JuanFra.
 
  2012/12/19 Hua ZZ Zhang zhu...@cn.ibm.com
 
  Hi,
 
  I have read the admin document of Swift and find there's recommendation
  of not using RAID 5 or 6 because swift performance degrades quickly
 with it.
  Can anyone explain why this could happen? If the RAID is done by
 hardware
  RAID controller, will the performance issue still exist?
  Anyone can share such kind of experience of using RAID with Swift?
  Appreciated for any suggestion from you.
 
  -Zhang Hua
 
 
 




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [swift] RAID Performance Issue

2012-12-19 Thread Chuck Thier
There are a couple of things to think about when using RAID (or more
specifically parity RAID) with swift.

The first has already been identified in that the workload for swift
is very write heavy with small random IO, which is very bad for most
parity RAID.  In our testing, under heavy workloads, the overall RAID
performance would degrade to be as slow as a single drive.

It is very common for servers to have many hard drives (our first
servers that we did testing with had 24 2T drives).  During testing,
RAID rebuilds were looking like they would take 2 weeks or so, which
was not acceptable.  While the array was in a degraded state, the
overall performance of that box would suffer dramatically, which would
have ripple effects across the rest of the cluster.

We tried to make things work well with RAID 5 for quite a while as it
would have made operations easier, and the code simpler since we
wouldn't have had to handle many of the failure scenarios.

Looking back, having to not rely on RAID has made swift a much more
robust and fault tolerant platform.

--
Chuck

On Wed, Dec 19, 2012 at 4:32 AM, David Busby d.bu...@saiweb.co.uk wrote:
 Hi Zang,

 As JuanFra points out, there's not much sense in using Swift on top of RAID,
 as swift handles that itself; expanding on this, RAID introduces a write penalty
 (http://theithollow.com/2012/03/21/understanding-raid-penalty/), which in turn
 leads to performance issues; refer to the link for the write penalty per
 configuration.

 As I recall (though this was from way back in October 2010), the suggested
 method of deploying swift is onto standalone XFS drives, leaving swift to
 handle the replication and distribution.


 Cheers

 David






 On Wed, Dec 19, 2012 at 9:12 AM, JuanFra Rodriguez Cardoso
 juanfra.rodriguez.card...@gmail.com wrote:

 Hi Zang:

 Basically, it makes no sense to use Swift on top of RAID because Swift
 already delivers its own replication scheme.

 Regards,
 JuanFra.

 2012/12/19 Hua ZZ Zhang zhu...@cn.ibm.com

 Hi,

 I have read the Swift admin document and found a recommendation
 not to use RAID 5 or 6 because swift performance degrades quickly with them.
 Can anyone explain why this happens? If the RAID is done by a hardware
 RAID controller, will the performance issue still exist?
 Can anyone share this kind of experience of using RAID with Swift?
 Any suggestions are appreciated.

 -Zhang Hua




___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Metadata in listing

2012-12-13 Thread Chuck Thier
The metadata for objects is stored at the object level, not in the
container dbs.  Reporting metadata information for container listings
would require the server to HEAD every object in the container, which
would cause too much work on the backend.
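
In other words, per-object metadata has to be fetched one object at a time;
something like this (the host, account, and token are placeholders):

curl -I -H "X-Auth-Token: $TOKEN" \
  "http://proxy.example.com:8080/v1/AUTH_account/container/10620_1b8b2553c6eb9987ff647d69e3181f9eeb3a43ef.jpg"
# any X-Object-Meta-* headers come back with the HEAD response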

--
Chuck

On Wed, Dec 12, 2012 at 7:01 AM, Morten Møller Riis m...@gigahost.dk wrote:
 Hi Guys

 I was wondering if there is any possibility of getting metadata output in
 the listing when you issue a GET on a container.

 At the moment it returns, e.g.:

 <object>
   <name>10620_1b8b2553c6eb9987ff647d69e3181f9eeb3a43ef.jpg</name>
   <hash>e453fcd7ff03e9e0e460555e875b1da1</hash>
   <bytes>9272</bytes>
   <content_type>image/jpeg</content_type>
   <last_modified>2012-09-20T23:27:34.473230</last_modified>
 </object>

 If I have X-Object-Meta-Something on an object it would be nice to see it
 here as well. I know I can get it by doing a HEAD request. But this gets
 heavy for many objects.

 Any suggestions?

 Best regards
 Morten Møller Riis






 Med venlig hilsen / Best regards
 Morten Møller Riis
 m...@gigahost.dk

 Gigahost
 Gammeltorv 8, 2.
 1457 København K







___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [Cinder] Cinder snapshots

2012-11-12 Thread Chuck Thier
Top posting to give some general history and some thoughts.

Snapshots, as implemented currently in cinder, are derived from the
EBS definition of a snapshot.  This is more of a consistent block
level backup of a volume.  New volumes can be created from any given
snapshot.  This is *not* usually what a snapshot is in a traditional
storage system, though snapshots are usually used in the process.

My concern with trying to expose more general snapshot, cloning, etc.
functionality in the API is that it is very backend dependent and will
have different capabilities and properties.  Trying to define
a generic API for these cases that works across all backends is going
to be problematic, which is why I've always been supportive of having
these implemented as extensions specific to each backend.

--
Chuck

On Sun, Nov 11, 2012 at 11:09 PM, Avishay Traeger avis...@il.ibm.com wrote:
 John Griffith john.griff...@solidfire.com wrote on 11/11/2012 17:01:37:
 Hey Avishay,

 I guess I'm still confused about what a snapshot is
 in OpenStack.  Currently you can't really do anything with them via
 OpenStack.
 Sure you can, you can use them to create a new volume.  They make a
 good backup mechanism IMO.

 Right - you can either make a new volume (=clone) or figure out somehow
 what name Cinder gave to the snapshot and make a backup on your own.

 [ Off topic question: How can a user determine the name of a
 volume/snapshot on the back-end for a given volume/snapshot? ]

 I guess my question to you in your definition then is 'what's the
 difference between a snapshot and a clone'?

 IMHO, a clone is a full, independent copy of the volume.  It would copy all
 data (either in foreground or background) and leave the user with a volume
 - as if he created a new volume and used dd to copy all the data.  A
 snapshot is something that depends on the original volume and in most
 implementations would contain only changed blocks.

 Clone definitely has other meanings - what's your definition?

 Also, it Seems to me if we go with the idea of adding snapshot-
 restore, and a true clone feature you get everything that you're
 asking for here and more... so I'm not sure of the problem?  Maybe
 you could help me understand why defining snapshot and clone in the
 manner described doesn't seem appropriate?

 So as I stated above, clone would give me a full copy, which can be
 wasteful.  I should be able to read and write to snapshots within the
 context of Cinder.

 FWIW, I think a R/O attach is something that would be good to have
 as an option regardless.

 It's a necessity IMO.  R/W is less common, but also very important.
 It's obvious that Cinder can't support features found in some storage
 controllers, but even LVM2 supports R/W snapshots (released almost 10 years
 ago).  Don't get me wrong - LVM is awesome - but if ubiquitous,
 freely-available software supports a feature that controllers also
 generally support, I think Cinder should support it too.

 Thanks,
 John

 Thank you,
 Avishay


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] OpenNebula and Swift integration

2012-11-06 Thread Chuck Thier
Hi Javier,

On Tue, Nov 6, 2012 at 5:07 AM, Javier Fontan jfon...@opennebula.org wrote:
 Hello,

 We recently had interest from some of our enterprise users to use
 Swift Object Store as the backend for the VM images. I have been
 researching on a possible integration with OpenNebula but I have some
 questions.

 AFAIK Swift is only Object Store and exposes the object through a REST
 interface. Is there any plan to add block storage support like Ceph so
 VMs can use the objects directly?


There isn't currently any plans for this.  At one time we considered
it, but decided that it would not be a good idea to build block
storage on top of Swift.

 We would love to have the same users and permissions in both
 OpenNebula and Swift so the management is only done in one place. It
 seems that the TempAuth system is the way to go to perform this
 authentication. Is it going to be supported in the future or is it
 going to be dumped in favor of just Keystone?


You should be able to write your own auth middleware that integrates
swift into the OpenNebula auth system.  Docs are here:

http://docs.openstack.org/developer/swift/development_auth.html

You can also use TempAuth as an example to work from.
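Just to give a rough idea of the shape of such a middleware, here is a
skeletal sketch (the class name and the OpenNebula lookup are made up, and
the exact details should be checked against the docs above and tempauth):

    # Skeleton only; OpenNebulaAuth and validate_with_opennebula() are
    # hypothetical stand-ins, not existing code.
    from swift.common.swob import HTTPForbidden   # webob on older swifts

    def validate_with_opennebula(token):
        # Placeholder for a real call into the OpenNebula auth system.
        return token is not None

    class OpenNebulaAuth(object):
        def __init__(self, app, conf):
            self.app = app
            self.conf = conf

        def __call__(self, env, start_response):
            # Install the authorize callback; the proxy calls it whenever
            # a request needs an authorization decision.
            env['swift.authorize'] = self.authorize
            return self.app(env, start_response)

        def authorize(self, req):
            # Return None to allow the request, or a response to deny it.
            if validate_with_opennebula(req.headers.get('X-Auth-Token')):
                return None
            return HTTPForbidden(request=req)

    def filter_factory(global_conf, **local_conf):
        conf = global_conf.copy()
        conf.update(local_conf)
        def auth_filter(app):
            return OpenNebulaAuth(app, conf)
        return auth_filter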

 Are the object ACLs stored within Swift? Can I provide the object ACLs
 from the Auth subsystem (OpenNebula in this case)? I plan to map Swift
 objects to OpenNebula Images and they already have ACLs in place.


Currently ACLs are at the container level in swift and not at the
object level.  That said, for your specific use case, I think you
could implement the image ACLs in your auth middleware, but it has
been a while since I have looked at that code.

--
Chuck

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [openstack-dev] [nova] Disk attachment consistency

2012-08-14 Thread Chuck Thier
Hey Vish,

First, thanks for bringing this up for discussion.  Coincidentally a
similar discussion had come up with our teams, but I had pushed it
aside at the time due to time constraints.  It is a tricky problem to
solve generally for all hypervisors.  See my comments inline:

On Mon, Aug 13, 2012 at 10:35 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:


 Long term solutions
 --

 We probably shouldn't expose a device path, it should be a device number. 
 This is probably the right change long term, but short term we need to make 
 the device name make sense somehow. I want to delay the long term until after 
 the summit, and come up with something that works short-term with our 
 existing parameters and usage.

I totally agree with delaying the long term discussion, and look
forward to discussing these types of issues more at the summit.


 The first proposal I have is to make the device parameter optional. The 
 system will automatically generate a valid device name that will be accurate 
 for xen and kvm with guest kernel 3.2, but will likely be wrong for old kvm 
 guests in some situations. I think this is definitely an improvement and only 
 a very minor change to an extension api (making a parameter optional, and 
 returning the generated value of the parameter).

I could get behind this, and it was brought up by others in our group
as a more feasible short-term solution.  I have a couple of concerns
with this.  It may cause just as much confusion if the api can't
reliably determine which device a volume is attached to.  I'm also
curious as to how well this will work with Xen, and hope some of the
citrix folks will chime in.  From an api standpoint, I think it would
be fine to make it optional, as any client that is using the old api
contract will still work as intended.


 (review at https://review.openstack.org/#/c/10908/)

 The second proposal I have is to use a feature of kvm attach and set the 
 device serial number. We can set it to the same value as the device 
 parameter. This means that a device attached to /dev/vdb may not always be at 
 /dev/vdb (with old kvm guests), but it will at least show up at 
 /dev/disk/by-id/virtio-vdb consistently.

 (review coming soon)

 First question: should we return this magic path somewhere via the api? It 
 would be pretty easy to have horizon generate it but it might be nice to have 
 it show up. If we do return it, do we mangle the device to always show the 
 consistent one, or do we return it as another parameter? guest_device perhaps?

 Second question: what should happen if someone specifies /dev/xvda against a 
 kvm cloud or /dev/vda against a xen cloud?
 I see two options:
 a) automatically convert it to the right value and return it

I thought that it already did this, but I would have to go back and
double check.  But it seemed like for xen at least, if you specify
/dev/vda, Nova would change it to /dev/xvda.

 b) fail with an error message


I don't have a strong opinion either way, as long as it is documented
correctly.  I would suggest though that if it has been converting it
in the past, that we continue to do so.

 Third question: what do we do if someone specifies a device value to a kvm 
 cloud that we know will not work. For example the vm has /dev/vda and 
 /dev/vdb and they request an attach at /dev/vdf. In this case we know that it 
 will likely show up at /dev/vdc. I see a few options here and none of them 
 are amazing:

 a) let the attach go through as is.
   advantages: it will allow scripts to work without having to manually find 
 the next device.
   disadvantages: the device name will never be correct in the guest
 b) automatically modify the request to attach at /dev/vdc and return it
   advantages: the device name will be correct some of the time (kvm guests 
 with newer kernels)
   disadvantages: sometimes the name is wrong anyway. The user may not expect 
 the device number to change
 c) fail and say, the next disk must be attached at /dev/vdc:
   advantages: explicit
   disadvantages: painful, incompatible, and the place we say to attach may be 
 incorrect anyway (kvm guests with old kernels)

I would choose b, as it tries to get things in the correct state.  c
is a bad idea as it would change the overall api behavior, and current
clients wouldn't expect it.

There are also a couple of other interesting tidbits, that may be
related, or at least be worthwhile to know while discussing this.

Xen Server 6.0 has a limit of 16 virtual devices per guest instance.
Experimentally it also expects those to be /dev/xvda - /dev/xvdp.  You
can't for example attach a device to /dev/xvdq, even if there are no
other devices attached to the instance.  If you attempt to do this,
the volume will go in to the attaching state, fail to attach, and then
fall back to the available state (This can be a bit confusing to new
users who try to do so).  Does anyone know if there are similar
limitations for KVM?

Also if you attempt 

Re: [Openstack] [nova] [cinder] Nova-volume vs. Cinder in Folsom

2012-07-12 Thread Chuck Thier
We currently have a large deployment that is based on nova-volume as it is
in trunk today, and just ripping it out will be quite painful.  For us,
option #2 is the only suitable option.

We need a smooth migration path, and time to successfuly migrate to Cinder.
Since there is no clear migration path between Openstack Nova releases, we
have to track very close to trunk.  The rapid change of nova and nova-volume
trunk has already been a very difficult task.  Ripping nova-volume out of nova
would bring us to a standstill until we could migrate to Cinder.

Cinder has made great strides to get where it is today, but
I really hope we, the Openstack community, will take the time to consider the
ramifications, and make sure that we take the time needed to ensure both
a successful release of Cinder and a successful transition from nova-volume
to Cinder.

--
Chuck

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Deleting a volume stuck in attaching state?

2012-06-20 Thread Chuck Thier
On Wed, Jun 20, 2012 at 12:16 PM, Jay Pipes jaypi...@gmail.com wrote:
 On 06/20/2012 11:52 AM, Lars Kellogg-Stedman wrote:

 A strategy we are making in Nova (WIP) is to allow instance
 termination no matter what. Perhaps a similar strategy could be
 adopted for volumes too? Thanks,


 The 'nova-manage volume delete ...' solution worked just fine in this
 case...but in general, as a consumer of the software, we would really
 prefer to be able to delete things regardless of their state using
 established tools, rather than manipulating the database directly.
 I'm always worried that I'll screw something up due to my incomplete
 understanding of the database schema.


 Agreed. I think that the DELETE API calls should support a force parameter
 that would stop any pending operation, cleanup any mess, and delete the
 resource.

 The fact that nova-manage volume delete works is really just because the
 nova-manage tool talks directly to the database, and I know that Vish and
 others are keen to have all tools speak only the REST APIs and not any
 backdoor interfaces or database queries directly...

 Best,
 -jay


Just as an FYI, there is a bug related to this

https://bugs.launchpad.net/nova/+bug/944383

and similar to

https://bugs.launchpad.net/nova/+bug/953594

We would like to see this functionality as well, but if it is exposed
as an api call, we need to make sure other artifacts are cleaned up as
well.  The fix for the latter bug at least allows us to delete a
volume that is an error state.

In a similar vein, it would be really nice to have a force detach as
described in

https://bugs.launchpad.net/nova/+bug/944383

--
Chuck

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Default swift replicas in devstack

2012-05-14 Thread Chuck Thier
Hey Chmouel,

The first easy step would be to by default not start the aux services
(like replication).  And if someone wants to test those, they can run
them manually (similarly to how we do dev with the saio).

--
Chuck

On Mon, May 14, 2012 at 10:17 AM, Chmouel Boudjnah chmo...@chmouel.com wrote:
 Hello,

 devstack installs swift if you are adding the service to your localrc
 (as specified in devstack README.rst). By default, if swift is
 installed, it will configure three different replicas, and due to the
 nature of replication this makes a lot of IO, which leads to people saying
 "devstack with swift kills my VM".

 Since most people are not going to install swift on devstack for
 debugging/deving object replication and obviously don't need
 availability, I was wondering if anyone has any objection if I set
 swift's default replicas to 1 in devstack. This is obviously a setting
 that people can adjust if they want.

 Chmouel.

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Do you have the code of the previous versions of The Ring?

2012-04-19 Thread Chuck Thier
Hi Sally,

I don't know if we have the code for the original rings, but gholt has
a good series of blog posts that hits on several of the different
stages we went through when designing the ring in swift:

http://www.tlohg.com/p/building-consistent-hashing-ring.html
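The production ring is of course more involved, but the core idea fits in a
few lines of Python.  This is just a toy illustration (the node names and
partition power are made up; the real ring also balances by weight and zone,
keeps multiple replicas per partition, and mixes in a per-cluster hash
suffix):

    from hashlib import md5
    from struct import unpack_from

    PART_POWER = 16                  # 2**16 partitions
    PART_SHIFT = 32 - PART_POWER
    NODES = ['node1', 'node2', 'node3', 'node4']

    # Map every partition to a node; the real ring does this with a
    # rebalance step that moves as few partitions as possible.
    part2node = [NODES[p % len(NODES)] for p in range(2 ** PART_POWER)]

    def get_node(account, container, obj):
        key = md5('/%s/%s/%s' % (account, container, obj)).digest()
        part = unpack_from('>I', key)[0] >> PART_SHIFT
        return part2node[part]

    print get_node('AUTH_test', 'photos', 'cat.jpg')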

--
Chuck

2012/4/19 Sally Cong congnysa...@gmail.com:
 Hi everyone,

 I would like to know if anyone has the code or pseudo code of the different
 versions of The Ring (in swift)?
 Such as the living ring, the completely non-partitioned ring, etc.

 I'm a senior student and am very interested in swift, especially The Ring.
 I'm collecting as much information about The Ring as possible. I need your
 help.


 Thanks

 Best Regards,
 Sally

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift Consistency Guarantees?

2012-01-20 Thread Chuck Thier
Some general notes for consistency and swift (all of the below assumes
3 replicas):

Objects:

  When swift PUTs an object, it attempts to write to all 3 replicas
and only returns success if 2 or more replicas were written
successfully.  When a new object is created, it has a fairly strong
consistency for read after create.  The only case this would not be
true, is if all of the devices that hold the object are not available.
  When an object is PUT on top of another object, then there is more
eventual consistency that can come into play for failure scenarios.
This is very similar to S3's consistency model.  It is also important
to note that in the case of failure, and a device is not available for
a new replica to be written to, it will attempt to write the replica
to a handoff node.
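(If it helps to see the success check spelled out, it is just a majority
quorum -- a two-line sketch, not the actual proxy code:)

    def have_quorum(successful_writes, replica_count=3):
        # 2 out of 3 in the default case
        return successful_writes >= replica_count // 2 + 1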

  When swift GETs an object, by default it will return the first
object it finds from any available replicas.  Using the X-Newest
header will require swift to compare the timestamps and only serve a
replica that has the most recent timestamp.  If only one replica is
available with an older version of the object, it will be returned,
but in practice this would be quite an edge case.
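To make the X-Newest part concrete, here is a minimal sketch using nothing
but the stdlib -- the host, token and object path are made up:

    import httplib

    conn = httplib.HTTPConnection('proxy.example.com', 8080)
    conn.request('GET', '/v1/AUTH_test/photos/cat.jpg',
                 headers={'X-Auth-Token': 'AUTH_tk_made_up',
                          'X-Newest': 'true'})
    resp = conn.getresponse()
    print resp.status, resp.getheader('X-Timestamp')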

Container Listings:

  When an object is PUT into swift, each object server that a replica
is written to is also assigned one of the container servers to
update.  On the object server, after the replica is successfully
written, an attempt will be made to update the listing of its assigned
container server.  If that update fails, it is queued locally (which
is called an async pending), to be updated out of band by another
process.  The container updater process continually looks for these
async pendings and will attempt to make the update, and will remove it
from the queue when successful.  There are many reasons that a
container update can fail (failed device, timeout, heavily used
container, etc.).  Thus container listings are eventually consistent
in all cases (which is also very similar to S3).

Consistency Window:

For objects, the biggest factor that determines the consistency window
is object replication time.  In general this is pretty quick for even
large clusters, and we are always working on making this better.  If
you want to limit consistency windows for objects, then you want to
make sure you isolate the chances of failure as much as possible.  By
setting up your zones to be as isolated as possible (separate power,
network, physical locality, etc.) you minimize the chance that there
will be a consistency window.

For containers, the biggest factor that determines the consistency
window is disk IO for the sqlite databases.  In recent testing, basic
SATA hardware can handle somewhere in the range of 100 PUTs per second
(for smaller containers) to around 10 PUTs per second for very large
containers (millions of objects) before async pendings start stacking
up and you begin to see consistency issues.  With better hardware (for
example RAID 10 of SSD drives), it is easy to get 400-500 PUTs per
second with containers that have a billion objects in it.  It is also
a good idea to run your container/account servers on separate hardware
than the object servers. After that, the same things for object
servers also apply to the container servers.

All that said, please don't just take my word for it, and test it for
yourself :)

--
Chuck




On Fri, Jan 20, 2012 at 2:18 PM, Nikolaus Rath nikol...@rath.org wrote:
 Hmm, but if there are e.g. 4 replicas, two of which are up-to-date but
 offline, and two out-of-date but online, swift would serve the old version?

 -Niko


 On 01/20/2012 03:06 PM, Chmouel Boudjnah wrote:
 As Stephen mentionned if there is only one replica left Swift would not
 serve it.

 Chmouel.

 On Fri, Jan 20, 2012 at 1:58 PM, Nikolaus Rath nikol...@rath.org
 mailto:nikol...@rath.org wrote:

     Hi,

     Sorry for being so persistent, but I'm still not sure what happens if
     the 2 servers that carry the new replica are down, but the 1 server that
     has the old replica is up. Will GET fail or return the old replica?

     Best,
     Niko

     On 01/20/2012 02:52 PM, Stephen Broeker wrote:
      By default there are 3 replicas.
      A PUT Object will return after 2 replicas are done.
      So if all nodes are up then there are at least 2 replicas.
      If all replica nodes are down, then the GET Object will fail.
     
      On Fri, Jan 20, 2012 at 11:21 AM, Nikolaus Rath nikol...@rath.org
     mailto:nikol...@rath.org
      mailto:nikol...@rath.org mailto:nikol...@rath.org wrote:
     
          Hi,
     
          So if an object update has not yet been replicated on all
     nodes, and all
          nodes that have been updated are offline, what will happen?
     Will swift
          recognize this and give me an error, or will it silently
     return the
          older version?
     
          Thanks,
          Nikolaus
     
     
          On 01/20/2012 02:14 PM, Stephen Broeker wrote:
           If a node is 

Re: [Openstack] swift and rsync

2011-10-12 Thread Chuck Thier
Hi Fabrice,

The design of Swift has always assumed that the backend services are
running on a secured, private network.  If this is not going to be the
case, or you would like to provide more security on that network, a
lot more work needs to be done than just rsync.  That said,  I don't
think it  would be too difficult to add rsync options in the
replication configuration.  It isn't something that is on our current
timeline, but we would gladly accept such a patch.
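Just to sketch what I mean (the 'rsync_extra_args' option below is
hypothetical -- it does not exist in the current config files):

    import math

    def replication_rsync_args(conf, node_timeout, conn_timeout):
        # An operator could drop extra flags (for example a --password-file
        # for an authenticated rsync module) into the conf section.
        extra = conf.get('rsync_extra_args', '').split()
        args = ['rsync', '--quiet', '--no-motd',
                '--timeout=%s' % int(math.ceil(node_timeout)),
                '--contimeout=%s' % int(math.ceil(conn_timeout))]
        return args + extra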

--
Chuck

On Wed, Oct 12, 2011 at 6:07 AM, Fabrice Bacchella
fbacche...@spamcop.net wrote:
 swift uses rsync for some synchronization tasks. But from what I can see, it
 makes very raw use of it:
 In db_replicator.py :
    def _rsync_file(self, db_file, remote_file, whole_file=True):
        ...
        popen_args = ['rsync', '--quiet', '--no-motd',
                      '--timeout=%s' % int(math.ceil(self.node_timeout)),
                      '--contimeout=%s' % int(math.ceil(self.conn_timeout))]
        ...

 In replicator.py:
    def rsync(self, node, job, suffixes):
        ...
        args = [
            'rsync',
            '--recursive',
            '--whole-file',
            '--human-readable',
            '--xattrs',
            '--itemize-changes',
            '--ignore-existing',
            '--timeout=%s' % self.rsync_io_timeout,
            '--contimeout=%s' % self.rsync_io_timeout,
        ]


 Nothing can be changed, like the rsync binary, the port used, ...

 Worse, there is no security at all, so one has to rely on network isolation
 to protect the data.

 Is there any plan to improve that, by providing optional arguments in the
 conf for example? Or at least some not-too-difficult way to use some other
 methods?

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Load Balancers for Swift Proxy-servers ----why Pound ?

2011-09-19 Thread Chuck Thier
Howdy,

In general Nginx is really good, and we like it a lot, but it has one
design flaw that causes it to not work well with swift.  Nginx spools
all requests, so if you are getting a lot large (say 5GB) uploads, it
can be problematic.  In our testing a while ago, Pound proved to have
the best SSL performance over a 10G link.  There are a couple of
things that would be interesting to test that have come out since the
last time that we did testing:

1.  Encryption offloading to the newer Intel chips with it built in.
2.  Yahoo's Traffic Server has since become better documented.

--
Chuck

On Mon, Sep 19, 2011 at 6:47 AM, Kuo Hugo tonyt...@gmail.com wrote:
 Hello Stackers,
 I'm interested in the reason for using Pound as the SLB (software load
 balancer) in the Swift docs.
 Most articles talk about the performance of SLBs, and Nginx seems to be the
 winner of the SLB battle.
 Lower CPU usage / lots of connections etc.
 Does Pound have better performance for Swift?
 And a clear comparison table would be great if there is one; I'm really
 confused about that.
 In my study, Pound used 7 times the CPU of Nginx as an SLB under the same
 conditions.
 BTW, most of the articles just use HTTP instead of HTTPS.  Does Pound do
 better than Nginx under HTTPS?
 Cheers
 Hugo Kuo
 --
 +Hugo Kuo+
 tonyt...@gmail.com
 hugo@cloudena.com
 +886-935-004-793
 www.cloudena.com

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Guidelines for OpenStack APIs

2011-09-19 Thread Chuck Thier
Hi Mark,

I just wanted to clarify to the reasoning why we use POST for metadata
modification in Swift.  In general I totally agree that PUT/POST
should be used for creation (PUT when you know the identification of
the representation, POST when you do not).  And PUT should be used
when modifying the representation.  The problem is that in swift, the
representation of an object is both the object data, and the metadata
for that object.  In fact, PUT works fine for update, you just have to
re-upload the entire file and the metadata, even if all you want to
change is the metadata.  That is why, to change just the metadata, we
implemented it in POST.
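To make that concrete, a quick sketch (host, token and object names are
made up):

    import httplib

    conn = httplib.HTTPConnection('proxy.example.com', 8080)

    # POST touches only the metadata; the stored bytes are left alone.
    conn.request('POST', '/v1/AUTH_test/photos/cat.jpg',
                 headers={'X-Auth-Token': 'AUTH_tk_made_up',
                          'X-Object-Meta-Color': 'orange'})
    resp = conn.getresponse()
    resp.read()
    print resp.status        # swift answers 202 Accepted

    # PUT replaces the whole representation, so the data comes along too.
    conn.request('PUT', '/v1/AUTH_test/photos/cat.jpg',
                 open('cat.jpg', 'rb'),
                 headers={'X-Auth-Token': 'AUTH_tk_made_up',
                          'X-Object-Meta-Color': 'orange'})
    print conn.getresponse().status   # 201 Created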

--
Chuck

On Mon, Sep 19, 2011 at 1:03 AM, Mark McLoughlin mar...@redhat.com wrote:
 Hi,

 On Sun, 2011-09-18 at 22:38 -0500, Jonathan Bryce wrote:
 After the mailing list discussion around APIs a few weeks back,
 several community members asked the Project Policy Board to come up
 with a position on APIs. The conclusion of the PPB was that each
 project's PTL will own the definition and implementation of the
 project's official API, and APIs across all OpenStack projects should
 follow a set of guidelines that the PPB will approve. This will allow
 the APIs to be tied to the functionality in the project while ensuring
 a level of consistency and familiarity across all projects for API
 consumers.

 We've started an Etherpad to collect input and comments on suggested
 guidelines. It's a little messy but proposed guidelines are set off
 with an asterisk (*):

 http://etherpad.openstack.org/RFC-API-Guidelines

 On PUT/POST:

    * PUTs create things

    * POSTs modify existing things

 Quite a debate that triggered :)

 Looking at the swift API, the semantics of PUT conform just fine to the
 HTTP spec. You do PUT on the URI of the resource and the resource gets
 created if it didn't already exist.

 OTOH, POST to update the object's metadata doesn't make much sense. We
 don't accept the entity enclosed in the request as a new subordinate.
 PATCH[1] would probably have made more sense.

 The spec is actually quite clear on the difference between PUT and POST:

  The fundamental difference between the POST and PUT requests is
   reflected in the different meaning of the Request-URI. The URI in a
   POST request identifies the resource that will handle the enclosed
   entity. That resource might be a data-accepting process, a gateway to
   some other protocol, or a separate entity that accepts annotations.
   In contrast, the URI in a PUT request identifies the entity enclosed
   with the request

 So, perhaps the guidelines should be:

  * Either POST or PUT creates things, depending on the meaning of the
    request URI

  * PUT or PATCH modifies existing things

 IMHO, if any of the existing APIs don't conform exactly to the
 guidelines, it's not a big deal. The guidelines should aim to correct
 past mistakes to make sure new APIs don't inherit them.

 Finally, FWIW here's a couple of attempts we made to describe some
 RESTful API design guidelines:

  http://readthedocs.org/docs/restful-api-design/en/latest/
  http://fedoraproject.org/wiki/Cloud_APIs_REST_Style_Guide

 Cheers,
 Mark.

 [1] - http://tools.ietf.org/html/rfc5789


 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] [HELP][SWIFT] - Are SQLite files pushed to Object Server ?

2011-09-14 Thread Chuck Thier
Hi,

Each container server sqlite db is replicated to 3 of your container
nodes.  Container replication (which operates a bit differently than
object replication) ensures that they stay in sync.  The container
nodes can be run either on the same nodes as your storage nodes, or on
separate nodes.  That depends on you and how you set up your container
ring.

--
Chuck

On Wed, Sep 14, 2011 at 6:12 AM, Mohammad Nour El-Din
nour.moham...@gmail.com wrote:
 Hi...

   It has been mentioned in [1] that, quoting:

 The listings are stored as sqlite database files, and replicated
 across the cluster similar to how objects are. Does this mean that
 they are replicated, but in a separate way from object files, or are they
 pushed into the Object Server and hence replicated like any other
 saved objects?

 Looking forward to your reply.

 --
 Thanks
 - Mohammad Nour
 
 Life is like riding a bicycle. To keep your balance you must keep moving
 - Albert Einstein

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] A possible alternative to Gerrit ...

2011-09-04 Thread Chuck Thier
Taking a bit of a step back, it seems to me that the biggest thing
that prevents us from using a pure github workflow is the absolute
requirement of a gated trunk.  Perhaps a better question to ask is
whether or not this should be an absolute requirement.  For me, it is
a nice-to-have, but shouldn't be an absolute requirement.  Perhaps a
good issue for the PPB to take up?

Thoughts?

--
Chuck

On Sat, Sep 3, 2011 at 4:42 PM, Josh Kearney j...@jk0.org wrote:
 I don't intend to fan the flames here, but I think the point we are trying to
 make is that the decision to use Gerrit was made before most of the community 
 was even aware of it -- much less having a chance to come up with a solution 
 like Sandy did (which, IMHO, is far more practical than the Gerrit workflow).

 How much more resistance will it take before we can consider an alternative? 
 Would a poll be out of the question?

 On Sep 3, 2011, at 3:50 PM, Thierry Carrez thie...@openstack.org wrote:

 Jay Payne wrote:
 Can we dial down the drama a bit?    It's things like this that will
 discourage people from submitting new ideas.   Calling just the
 introduction of a new idea a revolt is a disservice to the community.

 Well, maybe revolt is not the best term, but this is about resisting
 the transition to Gerrit -- one can only regret that this new idea
 wasn't introduced in the previous months, while the Gerrit solution was
 still under development and while we hadn't start transitioning core
 projects to that new system.

 --
 Thierry Carrez (ttx)
 Release Manager, OpenStack

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] S3 Compatability (Was RE: EC2 compatibility?)

2011-09-02 Thread Chuck Thier
Hi Caitlin,

Right now the best source of what S3 features are available through
the S3 compatibility layer is here:

http://swift.openstack.org/misc.html#module-swift.common.middleware.swift3

--
Chuck

On Fri, Sep 2, 2011 at 2:59 PM, Caitlin Bestler
caitlin.best...@nexenta.com wrote:
 Joshua Harlow asked:

  Is there any doc on how complete the openstack EC2 api is with a fixed
 version of amazon's actual release API.

 Maybe some table that says which functions are implemented and which
 ones aren't for a given version of the
 EC2 api and a given version of openstack?

 The same thing is also needed for Swift compatibility with the Amazon S3
 protocol.
 Reading the code suggests that there are whole features not supported,
 but having a list of which ones were deliberately
 deferred would be useful. Aside from helping users who might need a
 given feature, it will also let them know
 whether to request a new feature or to file a bug report.





 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Thinking about Backups/Snapshots in Nova Volume

2011-07-21 Thread Chuck Thier
Hey Andi,

Perhaps it would be better to re-frame the question.

What should the base functionality of the Openstack API for
backup/snapshot functionality be?

I'm looking at it from the perspective of initially providing the
capabilities that EC2/EBS currently provides (which they call
snapshots).  To me, this is the absolute base of what is needed, and
is what I am basically proposing as the idea of backups.

I also see that allowing for true volume snapshot capabilities are
desirable down the road.  The difficulty with snapshots, is that their
properties can vary greatly between different storage systems, and
thus needs some care in defining what a Nova Volume snapshot should
support.  I would expect that the different storage providers would
initially provide this support through extensions to the API.  At that
point it may be easier to find what commonalities there are, and to
find what types of features are most demanded in the cloud.

--
Chuck

On Thu, Jul 21, 2011 at 5:57 AM, Andiabes andi.a...@gmail.com wrote:
 I think vish pointed out the main differences between the 2 entities, and 
 maybe that can lead to name disambiguation...

 Backup is a full copy, and usable without the original object being available 
 in any state ( original or modified). It's expensive, since it's a full copy. 
 Main use cases are dr and recovery.

 Snapshot represents a point in time state of the object. It's relatively 
 cheap ( with the expectation that some copy-on-write or differencing 
 technique is used). Only usable if the reference point of the snapshot is 
 available (could be thought of as an incremental backup); what that reference 
 point is depends on the underlying implementation technology. Main use case 
 is rewinding to a historic state some time in the future.

 That said, with the prereqs met, both can probably be used to mount a new 
 volume.
 Reasonable?

 On Jul 20, 2011, at 5:27 PM, Chuck Thier cth...@gmail.com wrote:

 Yeah, I think you are illustrating how this generates much confusion :)

 To try to be more specific, the base functionality should be:

 1. Create a point in time backup of a volume
 2. Create a new volume from a backup (I guess it seems reasonable to
 call this a clone)

 This emulates the behavior of what EC2/EBS provide with volume
 snapshots.  In this scenario, a restore is create a new volume from
 the backup, and delete the old volume.

 In the Storage world, much more can generally be done with snapshots.
 For example in most storage system snapshots are treated just like a
 normal volume and can be mounted directly.  A snapshot is often used
 when creating a backup to ensure that you have a consistent point in
 time backup, which I think most of the confusion comes from.

 What we finally call it doesn't matter as much to me, as long as we
 paint a consistent story that isn't confusing, and that we get it in
 the Openstack API.

 --
 Chuck

 On Wed, Jul 20, 2011 at 3:33 PM, Vishvananda Ishaya
 vishvana...@gmail.com wrote:
 In rereading this i'm noticing that you are actually suggesting alternative 
 usage:

 backup/clone

 snapshot/restore

 Correct?

 It seems like backup and snapshot are kind of interchangable.  This is 
 quite confusing, perhaps we should refer to them as:

 partial-snapshot

 whole-snapshot

 or something along those lines that conveys that one is a differencing 
 image and one is a copy of the entire object?

 On Jul 20, 2011, at 12:01 PM, Chuck Thier wrote:

 At the last developers summit, it was noted by many, that the idea of
 a volume snapshot in the cloud is highly overloaded.  EBS uses the
 notion of snapshots for making point in time backups of a volume that
 can be used to create a new volume from.  These are not true snapshots
 though from a storage world view.  Because of this I would like to
 make the following proposal:

 Add a backup API to the Openstack API for Nova Volume.  This is to
 provide EBS style snapshot functionality in the Openstack API.  I'm
 proposing to name it backup instead of snapshot as that seems to
 better describe what is happening.  It also allows room for other
 storage backends to expose real snapshot capabilities down the road.

 In the case of Lunr, we would be making backups of volumes to swift
 (possibly abstracted through glance in the future).

 I have started a blueprint and spec at:

 https://blueprints.launchpad.net/nova/+spec/backups-api
 http://etherpad.openstack.org/volume-backup

 Please feel free to comment and contribute.

 --
 Chuck

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp




Re: [Openstack] Thinking about Backups/Snapshots in Nova Volume

2011-07-21 Thread Chuck Thier
Hey Andi,

The backup implementation is up to the storage providers.  As for
lunr, we are working on a very similar feature set to back up volumes
to swift (and only backing up the changes since the last backup).

--
Chuck

On Thu, Jul 21, 2011 at 1:21 PM, andi abes andi.a...@gmail.com wrote:
 hmm - they definitely muddy the waters, but provide a really cool feature
 set:

 Amazon EBS Snapshots

 Amazon EBS provides the ability to back up point-in-time snapshots of your
 data to Amazon S3 for durable recovery. Amazon EBS snapshots are incremental
 backups, meaning that only the blocks on the device that have changed since
 your last snapshot will be saved. If you have a device with 100 GBs of data,
 but only 5 GBs of data has changed since your last snapshot, only the 5
 additional GBs of snapshot data will be stored back to Amazon S3. Even
 though the snapshots are saved incrementally, when you delete a snapshot,
 only the data not needed for any other snapshot is removed. So regardless of
 which prior snapshots have been deleted, all active snapshots will contain
 all the information needed to restore the volume. In addition, the time to
 restore the volume is the same for all snapshots, offering the restore time
 of full backups with the space savings of incremental


 That quoted - it's not exactly a low bar to meet in terms of capability.
 Chuck - are you proposing that as the target for Diablo?

 p.s - typing on a real keyboard is so much easier than an iPad, and leads to
 much better grammar...
 On Thu, Jul 21, 2011 at 12:19 PM, Chuck Thier cth...@gmail.com wrote:

 Hey Andi,

 Perhaps it would be better to re-frame the question.

 What should the base functionality of the Openstack API for
 backup/snapshot functionality be?

 I'm looking at it from the perspective of initially providing the
 capabilities that EC2/EBS currently provides (which they call
 snapshots).  To me, this is the absolute base of what is needed, and
 is what I am basically proposing as the idea of backups.

 I also see that allowing for true volume snapshot capabilities are
 desirable down the road.  The difficulty with snapshots, is that their
 properties can vary greatly between different storage systems, and
 thus needs some care in defining what a Nova Volume snapshot should
 support.  I would expect that the different storage providers would
 initially provide this support through extensions to the API.  At that
 point it may be easier to find what commonalities there are, and to
 find what types of features are most demanded in the cloud.

 --
 Chuck

 On Thu, Jul 21, 2011 at 5:57 AM, Andiabes andi.a...@gmail.com wrote:
  I think vish pointed out the main differences between the 2 entities,
  and maybe that can lead to name disambiguation...
 
  Backup is a full copy, and usable without the original object being
  available in any state ( original or modified). It's expensive, since it's 
  a
  full copy. Main use cases are dr and recovery.
 
  Snapshot represents a point in time state of the object. It's relatively
  cheap ( with the expectation that some copy-on-write or differencing
  technique is used). Only usable if the reference point of the snapshot is
  available (could be thought of as an incremental backup); what that
  reference point is depends on the underlying implementation technology. 
  Main
  use case is rewinding to so a historic state some time in the future.
 
  That said, with the prereqs met, both can probably be used to mount a
  new volume.
  Reasonable?
 
  On Jul 20, 2011, at 5:27 PM, Chuck Thier cth...@gmail.com wrote:
 
  Yeah, I think you are illustrating how this generates much confusion :)
 
  To try to be more specific, the base functionality should be:
 
  1. Create a point in time backup of a volume
  2. Create a new volume from a backup (I guess it seems reasonable to
  call this a clone)
 
  This emulates the behavior of what EC2/EBS provide with volume
  snapshots.  In this scenario, a restore is create a new volume from
  the backup, and delete the old volume.
 
  In the Storage world, much more can generally be done with snapshots.
  For example in most storage system snapshots are treated just like a
  normal volume and can be mounted directly.  A snapshot is often used
  when creating a backup to ensure that you have a consistent point in
  time backup, which I think most of the confusion comes from.
 
  What we finally call it doesn't matter as much to me, as long as we
  paint a consistent story that isn't confusing, and that we get it in
  the Openstack API.
 
  --
  Chuck
 
  On Wed, Jul 20, 2011 at 3:33 PM, Vishvananda Ishaya
  vishvana...@gmail.com wrote:
  In rereading this i'm noticing that you are actually suggesting
  alternative usage:
 
  backup/clone
 
  snapshot/restore
 
  Correct?
 
  It seems like backup and snapshot are kind of interchangable.  This is
  quite confusing, perhaps we should refer to them as:
 
  partial-snapshot
 
  whole-snapshot

[Openstack] Adding CHAP support for iSCSI volumes in Nova Volume

2011-07-21 Thread Chuck Thier
I would like to see one-way CHAP support added to Nova Volume.  Not a
whole lot more to add, but would be interested in any feedback.

Blueprint: https://blueprints.launchpad.net/nova/+spec/isci-chap
Spec: http://etherpad.openstack.org/iscsi-chap
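For anyone wondering what one-way CHAP amounts to on the target side, this
is roughly the shape of it with tgt (a sketch only -- the tid, user and
password are made up, and the real change would go through the volume
driver):

    import subprocess

    def set_one_way_chap(tid, user, password):
        # Create the CHAP account on the target host...
        subprocess.check_call(['tgtadm', '--lld', 'iscsi', '--op', 'new',
                               '--mode', 'account',
                               '--user', user, '--password', password])
        # ...and bind it to the target so initiators must authenticate.
        subprocess.check_call(['tgtadm', '--lld', 'iscsi', '--op', 'bind',
                               '--mode', 'account', '--tid', str(tid),
                               '--user', user])

    set_one_way_chap(1, 'chapuser', 'chapsecret')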

--
Chuck

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Thinking about Backups/Snapshots in Nova Volume

2011-07-20 Thread Chuck Thier
Yeah, I think you are illustrating how this generates much confusion :)

To try to be more specific, the base functionality should be:

1. Create a point in time backup of a volume
2. Create a new volume from a backup (I guess it seems reasonable to
call this a clone)

This emulates the behavior of what EC2/EBS provide with volume
snapshots.  In this scenario, a restore is creating a new volume from
the backup and deleting the old volume.
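Purely to make the proposal concrete (every path and field name below is
made up -- this is not an existing or agreed-on API):

    import httplib, json

    headers = {'X-Auth-Token': 'AUTH_tk_made_up',
               'Content-Type': 'application/json'}
    conn = httplib.HTTPConnection('nova-api.example.com', 8774)

    # 1. create a point-in-time backup of an existing volume
    conn.request('POST', '/v1.1/project/os-volumes/vol-0001/backups',
                 json.dumps({'backup': {'name': 'nightly'}}), headers)
    conn.getresponse().read()

    # 2. create a new volume from that backup
    conn.request('POST', '/v1.1/project/os-volumes',
                 json.dumps({'volume': {'size': 10,
                                        'backup_id': 'bak-0001'}}), headers)
    conn.getresponse().read()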

In the Storage world, much more can generally be done with snapshots.
For example, in most storage systems snapshots are treated just like a
normal volume and can be mounted directly.  A snapshot is often used
when creating a backup to ensure that you have a consistent point-in-time
backup, which I think is where most of the confusion comes from.

What we finally call it doesn't matter as much to me, as long as we
paint a consistent story that isn't confusing, and that we get it in
the Openstack API.

--
Chuck

On Wed, Jul 20, 2011 at 3:33 PM, Vishvananda Ishaya
vishvana...@gmail.com wrote:
 In rereading this i'm noticing that you are actually suggesting alternative 
 usage:

 backup/clone

 snapshot/restore

 Correct?

 It seems like backup and snapshot are kind of interchangable.  This is quite 
 confusing, perhaps we should refer to them as:

 partial-snapshot

 whole-snapshot

 or something along those lines that conveys that one is a differencing image 
 and one is a copy of the entire object?

 On Jul 20, 2011, at 12:01 PM, Chuck Thier wrote:

 At the last developers summit, it was noted by many, that the idea of
 a volume snapshot in the cloud is highly overloaded.  EBS uses the
 notion of snapshots for making point in time backups of a volume that
 can be used to create a new volume from.  These are not true snapshots
 though from a storage world view.  Because of this I would like to
 make the following proposal:

 Add a backup API to the Openstack API for Nova Volume.  This is to
 provide EBS style snapshot functionality in the Openstack API.  I'm
 proposing to name it backup instead of snapshot as that seems to
 better describe what is happening.  It also allows room for other
 storage backends to expose real snapshot capabilities down the road.

 In the case of Lunr, we would be making backups of volumes to swift
 (possibly abstracted through glance in the future).

 I have started a blueprint and spec at:

 https://blueprints.launchpad.net/nova/+spec/backups-api
 http://etherpad.openstack.org/volume-backup

 Please feel free to comment and contribute.

 --
 Chuck

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to     : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp



___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Refocusing the Lunr Project

2011-07-08 Thread Chuck Thier
Openstack Community,

Through the last few months the Lunr team has learned many things.  This
week, it has become clear to us that it would be better to integrate
with the existing Nova Volume code. It is upon these reflections that we
have decided to narrow the focus of the Lunr Project.

Lunr will continue to focus on delivering an open commodity storage
platform that will integrate with the Nova Volume service.  This will
be accomplished by implementing a Nova Volume driver. We will work
with the Nova team, and other storage vendors, to drive the features
needed to provide a flexible volume service.

I believe that this new direction will ensure a bright future for storage
in Nova, and look forward to continuing to work with everyone in making this
possible.

Sincerely,

Chuck Thier (@creiht)
Lunr Team Lead
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Refocusing the Lunr Project

2011-07-08 Thread Chuck Thier
Hey Jorge,

That is up to the Nova team, though I imagine that it will continue down the
road that it had been progressing (as an extension to the current openstack
api).

--
Chuck

On Fri, Jul 8, 2011 at 12:37 PM, Jorge Williams 
jorge.willi...@rackspace.com wrote:

  Chuck,

  What does this mean in terms of APIs?  Will there be a separate Volume
 API?  Will volumes be embedded in the compute API?

  -jOrGe W.


  On Jul 8, 2011, at 10:40 AM, Chuck Thier wrote:

 Openstack Community,

 Through the last few months the Lunr team has learned many things.  This
 week, it has become clear to us that it would be better to integrate
 with the existing Nova Volume code. It is upon these reflections that we
 have decided to narrow the focus of the Lunr Project.

 Lunr will continue to focus on delivering an open commodity storage
 platform that will integrate with the Nova Volume service.  This will
 be accomplished by implementing a Nova Volume driver. We will work
 with the Nova team, and other storage vendors, to drive the features
 needed to provide a flexible volume service.

 I believe that this new direction will ensure a bright future for storage
 in Nova, and look forward to continuing to work with everyone in making
 this
 possible.

 Sincerely,

 Chuck Thier (@creiht)
 Lunr Team Lead
 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


  This email may include confidential information. If you received it in
 error, please delete it.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Refocusing the Lunr Project

2011-07-08 Thread Chuck Thier
Replying to the list for completeness sake:

The only thing that changes from that perspective is that the block storage
API remains part of Nova.  We will have a separate service that will be
implemented as a driver that plugs into nova.  We will continue to focus on
the commodity storage part of the puzzle, where Nova will continue the API
and orchestration.

--
Chuck

On Fri, Jul 8, 2011 at 2:57 PM, Erik Carlin erik.car...@rackspace.comwrote:

  From a Rackspace perspective, we do not want to expose block operations
 in the compute API.  The plan has been to expose the attach/detach as nova
 API extensions, and that still makes sense, but will there be a separate,
 independent block service and API?

  Erik

   From: Chuck Thier cth...@gmail.com
 Date: Fri, 8 Jul 2011 13:15:56 -0500
 To: Jorge Williams jorge.willi...@rackspace.com
 Cc: openstack@lists.launchpad.net openstack@lists.launchpad.net
 Subject: Re: [Openstack] Refocusing the Lunr Project

  Hey Jorge,

  That is up to the Nova team, though I imagine that it will continue down
 the road that it had been progressing (as an extension to the current
 openstack api).

  --
 Chuck

 On Fri, Jul 8, 2011 at 12:37 PM, Jorge Williams 
 jorge.willi...@rackspace.com wrote:

 Chuck,

  What does this mean in terms of APIs?  Will there be a separate Volume
 API?  Will volumes be embedded in the compute API?

  -jOrGe W.


   On Jul 8, 2011, at 10:40 AM, Chuck Thier wrote:

Openstack Community,

 Through the last few months the Lunr team has learned many things.  This
 week, it has become clear to us that it would be better to integrate
 with the existing Nova Volume code. It is upon these reflections that we
 have decided to narrow the focus of the Lunr Project.

 Lunr will continue to focus on delivering an open commodity storage
 platform that will integrate with the Nova Volume service.  This will
 be accomplished by implementing a Nova Volume driver. We will work
 with the Nova team, and other storage vendors, to drive the features
 needed to provide a flexible volume service.

 I believe that this new direction will ensure a bright future for storage
 in Nova, and look forward to continuing to work with everyone in making
 this
 possible.

 Sincerely,

 Chuck Thier (@creiht)
  Lunr Team Lead
  ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp


  This email may include confidential information. If you received it in
 error, please delete it.


  ___
  Mailing list: https://launchpad.net/~openstack
  Post to     : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
  This email may include confidential information. If you received it in
  error, please delete it.

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Lunr Update

2011-06-14 Thread Chuck Thier
That is a good question, but if you can wait just a little bit longer, I
think things will become much clearer :)

--
Chuck

On Tue, Jun 14, 2011 at 2:39 PM, andi abes andi.a...@gmail.com wrote:

 Thanks !

 silly question... could futurestack (https://launchpad.net/futurestack) be
 the home for these pointers (and maybe other results from the design summit,
 which alas I missed) ?

 a.


 On Tue, Jun 14, 2011 at 2:05 PM, Chuck Thier cth...@gmail.com wrote:

 Hi Andi,

 There was the initial blue print  at:
 https://blueprints.launchpad.net/nova/+spec/integrate-block-storage

 And the notes from the discussion at the design summit at:
 http://etherpad.openstack.org/lunr-blueprint

 There are also discussion post summit archived on the mailing lists.

 There should be clearer docs and information available when we make the
 initial code relase.

 --
 Chuck

 On Tue, Jun 14, 2011 at 12:31 PM, andi abes andi.a...@gmail.com wrote:

 Chuck,
   I was trying to find docs/blueprints and such on launchpad and on
 openstack.org, but was not very successful. Can you share pointers to
 whatever material is out there?

 thx



 On Tue, May 31, 2011 at 7:16 PM, Chuck Thier cth...@gmail.com wrote:

 Howdy Stackers,

 It has been a while, so I thought I would send out an update.

 We are still in the process of doing initial R&D work, and hope to have
 some code available for people to poke at and comment on in the next few
 weeks.  This will include a first rough cut of our proposed volume API,
 driver model, and general architecture.  I feel that we are making good
 strides, and look forward to sharing the code with everyone soon.  The code
 will likely still be very rough, but should be a good starting point for 
 us,
 and the community to begin building off of.

 I would also like to clarify the initial purpose of Lunr.  The main goal
 of Lunr is to provide EBS like functionality on top of various storage
 backends.  We will be adopting a driver model that should be similar to 
 that
 of the current Nova volume, and at the worst, be very trivial to port from
 Nova to Lunr.  We welcome contributions that support various 3rd party
 storage systems, and plan to support extra functionality through the
 openstack extension API specification.  Lunr will also include a reference
 storage driver that will support exporting iscsi volumes from commodity
 hardware.

 We also have started a IRC channel (#lunr on freenode), so feel free to
 pop in and say hi.

 --
 Chuck




 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp





___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Swift PPA's

2011-05-05 Thread Chuck Thier
Hey Soren,

We've asked similar questions before :)

Ever since the packaging was pulled out of the source tree, we have been
mostly out of the packaging loop.  Since then most of the packaging details
have been handled by monty and you.

We build our own packages for production, so we have mostly ignored it,
figuring you guys would keep the packaging in line with what Nova was doing.
 I'm all for cleaning that stuff up though, so let us know if there is
anything you need from us.

We do though have a couple of docs that reference the ppas, so if we make
changes, then we should make changes there as well:

 http://swift.openstack.org/howto_installmultinode.html
http://swift.openstack.org/debian_package_guide.html

--
Chuck

On Thu, May 5, 2011 at 9:02 AM, Soren Hansen so...@linux2go.dk wrote:

 As a bit of a follow-up to my previous e-mail about PPA's, I tried to
 figure out what the story is with the Swift PPA's, but I failed.

 Can someone on the Swift team fill me in on what your various PPA's
 are for? Possibly by adding some info on the wiki page:

   http://wiki.openstack.org/Packaging/Ubuntu

 ..once I'm done fiddling with it, that is :)

 --
 Soren Hansen| http://linux2go.dk/
 Ubuntu Developer| http://www.ubuntu.com/
 OpenStack Developer | http://www.openstack.org/

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Location of packaging branches

2011-05-05 Thread Chuck Thier
Hi Thomas,

The swift-init thing is just a useful tool that we use to manage the
services in dev, and while at one time we had init scripts, our ops guys
just started using the swift-init tool out of convenience.

That said, it should be easy to create other init scripts.  The format for
starting a service outside  of swift-init is:

swift-SERVER_NAME /path/to/conf

and almost all swift scripts should respond to a --help command line option
as well.

If any of the services need better return codes, please submit a bug.

Thanks,

--
Chuck

On Wed, May 4, 2011 at 11:18 AM, Thomas Goirand tho...@goirand.fr wrote:

 On 05/04/2011 06:20 PM, Soren Hansen wrote:
  2011/5/4 Thomas Goirand tho...@goirand.fr:
  I've tried to start swift proxy-server without using swift-init. One
  of the reasons is that I can't use the embedded LSB messages of it (who
  knows what the LSB messages will be changed for, one day...), and also
  because it really doesn't fit the Debian environment. I think it's ok
  to have a swift-init thing (but maybe it would have been worth
  calling it swiftctl rather than init), but I don't think it's a good
  idea to use it for init scripts, which is a configuration file, and
  can be edited by users. So, I tried to run swift-proxy-server using
  start-stop-daemon, but then I have the following message:
 
  root@GPLHost:node3320_ ~# /etc/init.d/swift-proxy start
  Starting Swift proxy server: swift-proxy-serverUnable to locate config
  for swift-proxy-server
  .
 
  Any idea how to fix?
 
  I suggest you take this up on the openstack mailing list.

 What do others think about the above? Does swift-init even honor standard
 return values, so that I can pass its result to log_end_msg?  I don't think
 using the Python print function replaces messages that the distribution
 can customize. I can see many Unix distributions where it's already an issue
 (like RedHat with the [ Ok ] style...). So I have two solutions here:

 1- Silence out any swift-init messages (using a redirection to /dev/null)

 2- Not using swift-init at all (why should I, when there's
 start-stop-daemon that does the job perfectly?), but then I must find
 out why
 swift-proxy-server can't find its config file. That would really be my
 preferred way, since that would shorten my init.d script (that wouldn't
 need to check for the presence of swift-init, do redirection of outputs,
 and all sorts of useless tricks).

 If I choose the latter, is:

 swift-proxy-server /etc/swift/proxy-server.conf

 the way to do it, or is there some magic parameters that I missed?

 Thomas

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] lunr reference iSCSI target driver

2011-05-02 Thread Chuck Thier
On Mon, May 2, 2011 at 2:45 PM, Eric Windisch e...@cloudscaling.com wrote:


 On May 2, 2011, at 12:50 PM, FUJITA Tomonori wrote:

  Hello,
 
  Chuck told me at the conference that the lunr team is still working on
  the reference iSCSI target driver design and a possible design might
  exploit the device mapper snapshot feature.

 To clarify on the subject of snapshots: The naming of snapshots in Nova and
 their presence on disk is more confusing than it should be. There was some
 discussion of attempting to clarify the naming conventions.  Storage
 snapshots as provided by the device mapper are copy-on-write block devices,
 while Nova will also refer to file-backing stores as snapshots.  This latter
 definition is also used by EC2, but otherwise unknown and unused in the
 industry.


One of the things that was made very evident at the conference was the
confusion around snapshots in Lunr.  We were just talking about this in the
office, and we are considering renaming snapshots in the Lunr API to
backups to better indicate their intent.  Backups will be made from a
volume, and a user will be able to create new volumes based on a backup.

This leads to another interesting question.  While our reference
implementation may not directly expose snapshot functionality, I imagine
other storage implementations may want to. I'm interested to hear what
snapshot use cases others care about.  The obvious ones are
things like creating a volume based on a snapshot, or rolling a volume back
to a previous snapshot.  I would like others' input here to shape what the
snapshot API might look like.
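
To make the discussion concrete, here is a purely hypothetical sketch of what
such calls might look like (none of the paths, fields, or values below are
settled; they are only meant to frame the conversation):

    # create a snapshot of an existing volume
    curl -X POST .../volumes/vol-1234/snapshots -d '{"name": "before-upgrade"}'

    # create a new volume from an existing snapshot
    curl -X POST .../volumes -d '{"size": 50, "snapshot_id": "snap-5678"}'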

--
Chuck
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] lunr reference iSCSI target driver

2011-05-02 Thread Chuck Thier
We have no current plans to make an iSCSI target for swift.  Not only would
there be performance issues, but also consistency issues among other things.
 For Lunr, swift will only be a target for backups from block devices.

I think some of this confusion stems from the confusion around snapshots,
and Fujita's proposal would make an interesting case if we were going to use
swift for more traditional snapshots.  But since we are looking to use swift
as a backup target for volumes, we will not need that type of functionality
initially.

Eric:  Our current snapshot prototype uses FUSE since that is very simple to
do in Python, but we are also considering using an NBD (among other options).
 Once we have this nailed down a bit more, we will send out more details.

--
Chuck

On Mon, May 2, 2011 at 8:29 PM, Nelson Nahum nel...@zadarastorage.comwrote:

 Is Swift as a Block device a real option? It looks to me that
 performance will be a big problem. Also, how will the three copies in Swift
 be presented as iSCSI?  Only one? Each one with its own iSCSI
 target? Who serializes the writes in this scenario?

 Nelson

 On Mon, May 2, 2011 at 6:11 PM, Eric Windisch e...@cloudscaling.com
 wrote:
 
  Surely, FUSE is another possible option, I think. I heard that the lunr
  team was thinking about that approach too.
 
  I'm concerned about the performance/stability of FUSE, but I'm not sure
 if using iSCSI is a significantly better option when the access is likely to
 be local. If I had to choose something in-between, I'd evaluate if NBD was
 any better of a solution.
 
  I expect there will be great demand for an implementation of a
 Swift-as-a-block-device client.  Care should be taken in deciding what will be the
 best-supported method/implementation. That said, you have an implementation,
 and that goes a long way versus the alternatives which don't currently
 exist.
 
 
  As I wrote in the previous mail, the tricky part of the dm-snapshot
  approach is getting the delta of snapshots (I assume that we want to
  store only deltas on Swift). dm-snapshot doesn't provide the
  user-space API to get the deltas. So Lunr needs to access the
  dm-snapshot volume directly. It's sort of a backdoor approach (getting
  information that the Linux kernel doesn't provide to user space). As a
  Linux kernel developer, I would like to shout at people who do such things :)
 
 
  With dm-snapshot, the solution is to look at the device mapper table (via
 the device mapper API) and access the backend volume. I don't see why this
 is a bad solution. In fact, considering that the device mapper table could
 be arbitrarily complex and some backend volumes might be entirely virtual,
 i.e. dm-zero, this seems fairly reasonable to me.
 
  I really don't see how Swift-as-block-device relates at all to
 (storage) snapshots, other than the fact that this makes it possible to use
 Swift with dm-snapshot.
 
  Regards,
  Eric Windisch
  e...@cloudscaling.com
 
 
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Thinking about Openstack Volume API

2011-04-22 Thread Chuck Thier
One of the first steps needed to help decouple volumes from Nova is to
define what the Openstack Volume API should look like.  I would like to start
by discussing the main API endpoints and the interaction of compute
attaching to and detaching from volumes.

All of the following endpoints will support basic CRUD operations similar to
others described in the Openstack 1.1 API.

/volumes
Justin already has a pretty good start to this.  We will need to discuss
what data we will need to store/display about volumes, but I will save
that for a later discussion.

/snapshots
This will allow us to expose snapshot functionality from the underlying
storage systems.

/exports
This will be used to expose a volume to be consumed by an external system.
The Nova attach API call will make a call to /exports to set up a volume
to be attached to a VM.  This will store information that is specific
about a particular attachment (for example, CHAP authentication
information for an iSCSI export).  This helps with decoupling volumes
from Nova, and makes the attachment process more generic so that other
systems can easily consume the volumes service.  It is also undecided if
this should be a publicly available API, or one used only by backend
services.

The exports endpoint is the biggest change that we are proposing, so we would
like to solicit feedback on this idea.
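
To make this a bit more concrete, a rough, purely hypothetical sketch of the
create and export calls might look something like the following (none of the
paths, fields, or values are settled; they are only illustrative):

    # create a volume
    curl -X POST .../volumes -d '{"size": 100, "type": "standard"}'

    # set up an export so that Nova (or another consumer) can attach the volume
    curl -X POST .../exports -d '{"volume_id": "vol-1234"}'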

--
Chuck Thier (@creiht)
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Thinking about Openstack Volume API

2011-04-22 Thread Chuck Thier
Hey Vish,

Yes, we have been thinking about that a bit.  The current idea is to have
volume types, and depending on the type, the API would expect a certain set of
data for that type.  The scheduler would then map that type and its
corresponding data to provision the right kind of storage.
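
As a purely illustrative sketch (the field names are hypothetical, not a
settled API), that would put the type and its data in the request body rather
than in the URL:

    # the scheduler would map 'type' plus its parameters to a suitable backend
    curl -X POST .../account/volumes -d '{"size": 100, "type": "high-perf", "iops": 1000}'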

--
Chuck

On Fri, Apr 22, 2011 at 6:17 PM, Vishvananda Ishaya
vishvana...@gmail.comwrote:

 This all seems reasonable to me.  Do you have a concept of how you will
 expose different SLAs within the deployment?  Is it metadata on the volume
 that is handled by the scheduler?  Or will different SLAs be at separate
 endpoints?

 In other words, am I creating a volume with a PUT to
 /provider.com/high-perf-volumes/account/volumes/
 or just to /provider.com/account/volumes/ with an X-High-Perf header?

 Vish

 On Apr 22, 2011, at 2:40 PM, Chuck Thier wrote:

  One of the first steps needed to help decouple volumes from Nova, is to
  define what the Openstack Volume API should look like.  I would like to
 start
  by discussing the main api endpoints, and discussing the interaction of
  compute attaching/detaching from volumes.
 
  All of the following endpoints will support basic CRUD opperations
 similar to
  others described in the Openstack 1.1 API.
 
  /volumes
  Justin already has a pretty good start to this.  We will need to
 discuss
  what data we will need to store/display about volumes, but I will
 save
  that for a later discussion.
 
  /snapshots
  This will allow us to expose snapshot functionality from the
 underlying
  storage systems.
 
  /exports
  This will be used to expose a volume to be consumed by an external
 system.
  The Nova attach api call will make a call to /exports to set up a
 volume
  to be attached to a VM.  This will store information that is specific
  about a particular attachement (for example maybe CHAP authentication
  information for an iscsi export).  This helps with decoupling volumes
  from nova, and makes the attachement process more generic so that
 other
  systems can easily consume the volumes service.  It is also undecided
 if
  this should be a publicly available api, or just used by backend
 services.
 
  The exports endpoint is the biggest change that we are proposing, so we
 would
  like to solicit feedback on this idea.
 
  --
  Chuck Thier (@creiht)
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp


___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


[Openstack] Lunr FAQ

2011-04-21 Thread Chuck Thier
I have already received quite a few questions about the Lunr project, so I
would like to attempt to answer most of them in one post.

* What is it?

Lunr is a project to extend and evolve the Volume Services currently in
the Nova compute project.

* What is happening to introduce the project to the Openstack community?
 Are we able to discuss this at next week's design summit?

We are working on getting the following done before the design summit
next week:

* Choose a project name - DONE (Lunr)
* Identify a project lead - DONE (Chuck Thier @creiht)
* Set up a project repository
* Introduce the project to the Openstack mailing list - DONE
* Begin Openstack Volume API discussion on the mailing list - Coming
soon!
* Create a blueprint signaling the intention of building this service -
DONE
* Schedule blueprint for discussion at the next design summit - DONE

 * Is this an open project?  How does it work with other Openstack projects?

The Lunr project will be developed as an open-source project and involve
the Openstack community.  Initially it is proposed as an incubated Openstack
project.  The Lunr project plans on adopting the common services required
(such as authn/authz) when they are available.  The Lunr API will be able to
be consumed by Nova and any other Openstack project that needs to provision
and manage block storage volumes.

  * What is the release schedule for this project?
We are working on having a deployable service in the Diablo release
timeframe that will work with Nova.

  * How does this relate to the existing volume manager?
The current volume manager in Nova will continue to exist.  Lunr will be
an optional volume service, similar in a way to how the Network as a
Service project is being developed.

  * I'm vendor X and would like to ensure that my storage system works with
Openstack and/or contribute.  How do I do this?

There is great interest from many vendors in working with Lunr.  At the
base level, we will be adopting a very similar (if not the same) driver
approach that is currently in the Nova volume service.  This will provide a
simple interface for storage providers to integrate their storage products
into Lunr.  If you are interested, please participate in the community
discussion around the API and development of the system, and if you can,
come to the next developer summit and talk with us.

  * Are we throwing away perfectly good Nova volume manager code?  Why do a
separate project rather than extending the current implementation?

The existing Nova volume manager will continue to exist, with Lunr
extending and evolving the future of block storage in Openstack.  We
will re-use code and ideas from the Nova volume manager where it makes
sense.  Having a volume manager that is independent of Nova, while allowing
Nova and other services to consume the volume services through a defined
API, is a design pattern that is consistent with other Openstack services
(for example, Glance and Networks).

Moving forward, the Openstack block storage system needs to evolve to
allow for true enterprise and service provider functionality, such as mixed
storage targets, tiered storage targets, QoS scheduling, and many other
aspects of block storage management for private and public clouds.  The most
efficient manner for the Openstack community to accomplish this is to
separate the concerns of volume management from compute orchestration.

--
Chuck Thier (@creiht)
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Some Debian fixes I pushed in the launchpad bzr

2011-04-05 Thread Chuck Thier
I think you may have just hit an edge case in the ring-builder code.  I
don't think it likes it if you remove all the devices from the ring.  There
is also another edge case where some operations (like rebalancing) will fail
if you have fewer than 3 zones.

BTW, the easiest way to test a working installation would be to follow the
all in one instructions (http://swift.openstack.org/development_saio.html).
 This will allow you to also run the suite of functional tests.
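
For reference, a minimal ring build along the lines of the SAIO doc looks
roughly like this (the IPs, ports, and device names are just the SAIO example
values and will differ on a real cluster):

    swift-ring-builder account.builder create 18 3 1
    swift-ring-builder account.builder add z1-127.0.0.1:6012/sdb1 1
    swift-ring-builder account.builder add z2-127.0.0.1:6022/sdb2 1
    swift-ring-builder account.builder add z3-127.0.0.1:6032/sdb3 1
    swift-ring-builder account.builder add z4-127.0.0.1:6042/sdb4 1
    swift-ring-builder account.builder rebalance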

--
Chuck

On Tue, Apr 5, 2011 at 4:18 AM, Thomas Goirand z...@debian.org wrote:

 On 04/05/2011 05:27 AM, Chuck Thier wrote:
   Swift *should* work on any device that has a supported file system
  (such as XFS).  We use loopback devices often on our development
  machines.  What specific problem are you running into?

 I got this:

 root@GPLHost:node3320_ /etc/swift# swift-ring-builder account.builder
 account.builder, build version 1
 262144 partitions, 3 replicas, 1 zones, 1 devices, 100.00 balance
 The minimum number of hours before a partition can be reassigned is 1
 Devices:id  zone  ip address  port  name weight partitions
 balance meta
 0 1   87.98.215.249  6002 node3320vg0/swift1 100.00
  0 -100.00
 root@GPLHost:node3320_ /etc/swift# swift-ring-builder account.builder
 remove d0
 Traceback (most recent call last):
  File /usr/bin/swift-ring-builder, line 571, in module
Commands.__dict__.get(command, Commands.unknown)()
  File /usr/bin/swift-ring-builder, line 428, in remove
builder.remove_dev(dev['id'])
  File /usr/lib/pymodules/python2.6/swift/common/ring/builder.py, line
 178, in remove_dev
self._set_parts_wanted()
  File /usr/lib/pymodules/python2.6/swift/common/ring/builder.py, line
 322, in _set_parts_wanted
sum(d['weight'] for d in self.devs if d is not None)
 ZeroDivisionError: integer division or modulo by zero

 According to the doc in the code of that Python script (which I also
 translated into a man page), the syntax I'm using should be correct.
 What's wrong here?

 Thomas

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] dotnet cloud files access?

2011-04-05 Thread Chuck Thier
Hi Jon,

I'm not familiar with the C# bindings, but the fact that you can do
some operations sounds promising.  A 503 return code from the server means
that something went wrong server-side, so you might check the server
logs to see if they provide any useful information.  Another useful test
would be to verify that you can correctly upload a file with the st tool.
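
Something along these lines should work for a quick upload test (the auth URL
and credentials are just SAIO-style examples; substitute whatever your auth
middleware expects):

    st -A http://127.0.0.1:8080/auth/v1.0 -U test:tester -K testing upload test_container some_file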

Out of curiosity,  are you also running swift on OS X?

--
Chuck

On Mon, Apr 4, 2011 at 8:13 PM, Jon Slenk jsl...@internap.com wrote:

 hi,

 I've been trying to get https://github.com/rackspace/csharp-cloudfiles
 to work with a local Swift install, but no dice yet. Has anybody else
 succeeded with that?

 (I happen to be running it all on Mono on Mac OS X oh brother. I
 hacked up a demo C# app myself that does successful REST AuthN and can
 Download (but for some reason fails to upload) so I know I /can/ get
 things to work, so I don't grok why the Rackspace code won't work. I'm
 getting 503's.)

 thanks for any ideas.

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Some Debian fixes I pushed in the launchpad bzr

2011-04-04 Thread Chuck Thier
  I also worked on swift. Can you have a look? I'm not so sure what I did
  is fully correct yet, because I didn't succeed in running everything
  fully. It seems that swift doesn't like using device-mapper as
  partitions, is that correct? Which leads me to reinstall my test server
  from scratch, leaving some empty partitions for swift then. Or do you
  think this could be fixed, somehow?

 I'll have to defer to the honourable Swift devs on this list on this.


 Swift *should* work on any device that has a supported file system (such as
XFS).  We use loopback devices often on our development machines.  What
specific problem are you running into?
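
For development, a loopback device along these lines works fine (the size,
paths, and mkfs options are just an illustration, roughly following the SAIO
doc):

    # create a sparse ~1GB file, put an XFS filesystem on it, and mount it
    dd if=/dev/zero of=/srv/swift-disk bs=1024 count=0 seek=1000000
    mkfs.xfs -f -i size=1024 /srv/swift-disk
    mount -o loop,noatime /srv/swift-disk /mnt/sdb1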

--
Chuck
___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack API - Volumes?

2011-03-23 Thread Chuck Thier
Now that I have had a chance to look into this a little more, I realize
that I had missed that the discussion was about the volume functionality
missing in the Openstack REST API (versus the EC2 API that was already
there).

Justin: Sorry I'm a bit late to the game on this, but is there a
blueprint/documentation for the API that I could take a look at?

And I agree with the others that it would be nice to get the Openstack API
side of things moving since it is a pretty core element and should be in
Cactus, but like I said, I'm not a core dev for nova, so what needs to be
done to get this going?

Also, since this is a core piece that doesn't exist yet, I don't think it
should be an extension; we should work on getting it into the core API.

--
Chuck

On Mon, Mar 21, 2011 at 10:14 PM, Adam Johnson adj...@gmail.com wrote:

 Hey everyone,

 I wanted to bring up the topic of volumes in the OpenStack API.  I
 know there was some discussion about this before, but it seems to have
 faded on the ML.   I know Justinsb has done some work on this already,
 and has a branch here:

 https://code.launchpad.net/~justin-fathomdb/nova/justinsb-openstack-api-volumes

 I'm wondering what the consensus is on what the API should look like,
 and when we could get this merged into Nova?

 Thanks,
 Adam Johnson
 Midokura

 ___
 Mailing list: https://launchpad.net/~openstack
 Post to : openstack@lists.launchpad.net
 Unsubscribe : https://launchpad.net/~openstack
 More help   : https://help.launchpad.net/ListHelp

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] Openstack API - Volumes?

2011-03-22 Thread Chuck Thier
Hey Adam,

As far as I understand it, there is already basic volume support within
nova, and there are currently some suggested changes for the current
API.  We are planning to fit within that API as much as possible, and work
with the community to make any changes that may need to be made (an easy
example is support for snapshots stored in swift).

Most of our team will be at the developers summit, and look forward to
discussing it with others then.

--
Chuck

On Tue, Mar 22, 2011 at 8:00 PM, Adam Johnson adj...@gmail.com wrote:

 Hey Chuck,

 Thanks for the response.  We are very interested in this project as
 well, and may want to contribute.  We're currently evaluating options
 such as Sheepdog and others to see if they fit the bill.  In order to
 fully evaluate these options, it would be great to have *some* API
 support in the project.

 I think adding volumes as an extension short term seems like a good
 idea.  Whatever it takes to get something in there, so we can get our
 hands dirty and experiment some more.

 Adam

 On Wed, Mar 23, 2011 at 3:53 AM, Chuck Thier cth...@gmail.com wrote:
  Hi Adam,
  We have just begun an R&D effort for building a scalable block storage
  service with commodity hardware.  We are still working more on the back
 end
  specifics for this, so we haven't spent much time looking at the end user
  apis yet.  We are open to hearing feedback on that front though, and plan
 on
  discussing it more at the next developer summit.
 
  This project will be developed in the open, and when complete will
  be submitted to the Policy Board as a candidate for inclusion in
 Openstack
  proper.  I don't have a lot of details to share yet, but plan on
 discussing
  much at the next summit.
  --
  Chuck
  On Tue, Mar 22, 2011 at 11:20 AM, John Purrier j...@openstack.org
 wrote:
 
  I know that creiht is looking at this for Rackspace. Chuck, anything to
  add
  to this discussion?
 
  John
 
  -Original Message-
  From: openstack-bounces+john=openstack@lists.launchpad.net
  [mailto:openstack-bounces+john=openstack@lists.launchpad.net] On
  Behalf
  Of Adam Johnson
  Sent: Monday, March 21, 2011 10:15 PM
  To: openstack@lists.launchpad.net
  Subject: [Openstack] Openstack API - Volumes?
 
  Hey everyone,
 
  I wanted to bring up the topic of volumes in the OpenStack API.  I
  know there was some discussion about this before, but it seems to have
  faded on the ML.   I know Justinsb has done some work on this already,
  and has a branch here:
 
 
 https://code.launchpad.net/~justin-fathomdb/nova/justinsb-openstack-api-volumes
 
  Im wondering what the consensus is on what the API should look like,
  and when we could get this merged into Nova?
 
  Thanks,
  Adam Johnson
  Midokura
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 
  ___
  Mailing list: https://launchpad.net/~openstack
  Post to : openstack@lists.launchpad.net
  Unsubscribe : https://launchpad.net/~openstack
  More help   : https://help.launchpad.net/ListHelp
 
 

___
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp


Re: [Openstack] State of OpenStack Auth

2011-03-03 Thread Chuck Thier
The problem with this logic is that you are optimizing the wrong thing.  In a
token-based auth system, the tokens are generally valid for a period of time
(24 hours normally with Rackspace auth), and it is a best practice to cache
the token.  Eliminating one HTTP request that only has to happen every 24
hours isn't saving you that much.
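
To illustrate the flow (the URL and credentials are just SAIO-style examples,
not anything specific to a particular auth service): you authenticate once,
cache the token and storage URL, and reuse them until the token expires:

    # one request per token lifetime; returns X-Auth-Token and X-Storage-Url headers
    curl -i -H 'X-Auth-User: test:tester' -H 'X-Auth-Key: testing' http://127.0.0.1:8080/auth/v1.0

    # every subsequent request just reuses the cached token
    curl -i -H 'X-Auth-Token: <cached token>' http://127.0.0.1:8080/v1/AUTH_test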

But back to the auth questions in general, I would like to comment on a
couple of things that have come up:

1.  Basic Auth - I'm not fond of this mainly because auth credentials
(identification and secret) are sent, and have to be verified on every
single request.  This also means that every endpoint is going to be handling
the users' secrets for every request.  I think there is good precedent here,
in that no major service providers use basic auth (even Twitter has moved
away from basic auth to OAuth).

2. Signed vs. token-based auth - Why not support both?  It isn't that
complex.  It is also interesting that OAuth v1 was signature-based, while
OAuth v2 has moved to a token-based auth system, so there is broad support
in the general community for both methods.

--
Chuck

On Thu, Mar 3, 2011 at 2:59 PM, Michael Mayo m...@openstack.org wrote:

 I was thinking more of a sniff someone's traffic and perform those same
 requests again sort of attack.  But then again, I'm an iPhone guy and not a
 security expert :)

 In the end, I'm simply advocating that we reduce the number of HTTP
 requests needed to get information or get things done.  Getting rid of the
 auth server call is a first step.  Future steps could be things like
 including child entities in responses (for instance, getting a server list
 also returning complete image and flavor entities).  Then perhaps we could
 allow creates and actions to happen on multiple entities (create 10
 servers instead of calling create server 10 times, reboot a set of
 servers, etc).

 How we reduce API calls isn't that important to me.

 Thanks!
 Mike


 On Mar 3, 2011, at 12:52 PM, Jorge Williams wrote:


  I think you're overestimating the security risk in issue 3.  You're
 bank's website uses HTTPS.  In order to launch a successful man-in-the
 middle attack against it you would have to compromise the certificate
 authority.  Basic Auth with HTTPS against a single endpoint is pretty darn
 secure and no more secure than option 1.  The big advantage of 2 is that you
 can initiate conversations *without* HTTPS, there may be cases when you'd
 want to do that, but seeing that we have the potential to move passwords
 around when we create servers,  I don't see why you would want to go that
 route.  Using a token becomes important when we start talking about
 delegation, but let's not go there right now :-)

  -jOrGe W.



  On Mar 3, 2011, at 2:33 PM, Michael Mayo wrote:

  Here are my thoughts, as a client developer:

  *1. Hit auth server first for token, then hit compute and storage
 endpoints*

  This is fairly simple, but there are a couple of problems with it:

  a. It's not very curl or browser friendly (you have to curl the auth
 server first and copy the token, which is annoying)
 b. It's a waste of an HTTP request.  That may not matter for most people,
 but in the case of something like a mobile client, it's a serious problem.
  Network traffic is a very precious resource on cell phones, so if you can
 do anything to reduce the number of HTTP requests you need to do something,
 you should.  This is not only true for the OpenStack mobile apps I write,
 but also for developers making apps that need to use swift to upload content
 somewhere.

  *2. Signed requests*

  This is a little more painful from a development standpoint, but it's not
 really that big of a deal.  The only downside to this approach is that it's
 not curl or browser friendly.  However, the upside of preventing replay
 attacks is pretty valuable.

  *3. HTTP Basic*

  HTTP Basic is great because it's super easy to use and it's curl and
 browser friendly.  However, replay attacks are possible so you open yourself
 up to a security issue there.

  *My Vote (Assuming I Actually Have One)*

  I think signed requests are the best option since it's more secure than
 HTTP Basic.  We could make an oscurl command line tool that would sign a
 request and behave exactly like curl.  That shouldn't be too hard.  But if
 that can't happen, HTTP Basic is the next best choice.  Requiring API users
 to get a new auth token every n hours via an auth endpoint kind of sucks,
 especially from a mobile client perspective.



  On Mar 3, 2011, at 9:04 AM, Jorge Williams wrote:



 I agree with Greg here.  Signatures complicate life for our clients, they
 are not browser friendly, and I'm not really convinced that we need them. If
 we are going to have a default (and I think that we should) it should be
 dead simple to integrate with.   I would vote for basic auth with https.

 -jOrGe W.

 On Mar 3, 2011, at 9:40 AM, Greg wrote:

 On Mar 2, 2011, at 8:30 PM, Jesse Andrews