Re: [openstack-dev] [swift][swift3][s3] Keep containers unique among a cluster
On Thu, 10 May 2018 20:07:03 +0800 Yuxin Wang wrote:

> I'm working on a swift project. Our customer cares about S3 compatibility
> very much. I tested our swift cluster with ceph/s3-tests and analyzed the
> failed cases. It turns out that lots of the failed cases are related to
> unique container/bucket. But as we know, containers are just unique in a
> tenant/project.
> [...]
> Do you have any ideas on how to do or maybe why not to do? I'd highly
> appreciate any suggestions.

I don't have a recipe, but here's a thought: try making all the accounts
that need the interoperability with S3 belong to the same Keystone tenant.
As long as you do not give those accounts the owner role (one of those
listed in operator_roles=), they will not be able to access each other's
buckets (Swift containers). Unfortunately, I think they will not be able
to create any buckets either, but perhaps it's something that can be
tweaked - for sure if you're willing to go far enough to make new
middleware.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
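P.S. A minimal proxy-server.conf sketch of the setup described above, for the archives. The filter and role names are the stock keystoneauth ones; whether the shared-tenant trick can be tweaked to allow bucket creation is exactly the open question:

```ini
[pipeline:main]
pipeline = catch_errors authtoken keystoneauth proxy-server

[filter:keystoneauth]
use = egg:swift#keystoneauth
# Accounts WITHOUT one of these roles cannot create containers (buckets),
# but also cannot touch each other's containers in the shared tenant.
operator_roles = admin, swiftoperator

[app:proxy-server]
use = egg:swift#proxy
```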
Re: [openstack-dev] swift3 Plugin Development
On Fri, 9 Jun 2017 10:37:15 +0530 Niels de Vos wrote:

> > > we are looking for S3 plugin with ACLS so that we can integrate gluster
> > > with that.
> >
> > Did you look into porting Ceph RGW on top of Gluster?
>
> This is one of the longer term options that we have under consideration.
> I am very interested in your reasons to suggest it, care to elaborate a
> little?

RGW seems like the least worst starting point in terms of the end result
you're likely to get.

The swift3 does a good job for us in OpenStack Swift, providing a degree
of compatibility with S3. When Kota et al. took over from Tomo, they
revived the development successfully. However, it remains fundamentally
limited in what it does, and its main function is to massage S3 to fit it
on top of Swift. If you place it in front of Gluster, you're saddled with
this fundamental incompatibility, unless you fork swift3 and rework it
beyond recognition. In addition, surely you realize that swift3 is only a
shim and you need to have an object store to back it. Do you even have
one in Gluster?

Fedora used to ship a self-contained S3 store "tabled", so unlike swift3
it's complete. It's written in C, so may be better compatible with
Gluster's development environment. However, it was out of development for
years and it only supports canned ACLs. You aren't getting the full ACLs
with it that you're after.

The RGW gives you all that. It's well-compatible with S3, because it is
its native API (with Swift API being grafted on). Yehuda and crew
maintain a good compatibility. Yes, it's in C++, but the dialect is
reasonable. The worst downside is, yes, it's wedded to Ceph's RADOS and
you need major surgery to place it on top of Gluster. Nonetheless, it
seems like a better defined task to me than trying to maintain your own
webserver, which you must do if you select swift3.

There are still some parts of RGW which will give you trouble. In
particular, it uses loadable classes, which run in the context of Ceph
OSD. There's no place in Gluster to run them. You may have to drag parts
of OSD into the project. But I didn't look closely enough to determine
the feasibility.

In your shoes, I'd talk to Yehuda about this. He knows the problem domain
exceptionally well and will give you good advice, even though you're a
competitor in Open Source in general. Kinda like I do now :-)

Cheers,
-- Pete
Re: [openstack-dev] Swift3 Plugin Development
On Thu, 8 Jun 2017 17:06:02 +0530 Venkata R Edara wrote:

> we are looking for S3 plugin with ACLS so that we can integrate gluster
> with that.

Did you look into porting Ceph RGW on top of Gluster?

-- P
Re: [openstack-dev] [all] Embracing new languages in OpenStack
On Wed, 9 Nov 2016 11:14:32 +0000 (GMT) Chris Dent wrote:

> The conversations about additional languages in this community have
> been one of our most alarmingly regressive and patronizing. They seem
> to be bred out of fear rather than hope and out of lack of faith in
> each other than in trust. We've got people who want to build stuff.
> Isn't that the important part?

I dunno, it seems fine to discuss. I'm disappointed that TC voted Golang
down on August 2, but I can see where they come from.

The problem we're grappling with on the Swift side is (in my view) mainly
that the Go reimplementation provides essential performance advantages
which manifest at a certain scale (around 100 PB with current technology).
For this reason, ignoring Hummingbird and prohibiting Go is not going to
suppress them. As the operators deploy Hummingbird in preference to the
Python implementation, the focus of the development is going to migrate,
and the end result is going to be an effective exile of a founding
project from OpenStack. (Even if it happens, it's probably not a big
deal. Just look how well Ceph is doing, community-wise. Operators aren't
crying bloody tears either, are they?)

The conflict is that since re-writing e.g. Neutron in Go does not confer
the same performance advantage (AFAIK -- your VLANs aren't going to set
up and tear down 80 times faster), the disruption isn't worth the trouble
for the majority of OpenStack projects. This is why TC voted us down. And
the talk about the community is mostly there to heal psychologically.

So, it wasn't "regressive" or "patronizing", just business. See how
Flavio outlined specific steps in a constructive manner. I'm quite glad
that Ash wants to do something about CI. And I'm going to look into fully
supporting existing configurations. Maybe share it with Designate and
thus create something like a proto-"oslo.go.config". Of course we need to
have some code to share first.

-- Pete
Re: [openstack-dev] [all] Embracing new languages in OpenStack
On Mon, 07 Nov 2016 15:53:51 -0800 Joshua Harlow wrote:

> Standards though exist for a reason (and ini files are pretty common
> across languages and such); though of course oslo.config has some very
> tight integration with openstack, the underlying ini concept it
> eventually reads/writes really isn't that special (so hopefully such a
> thing existing isn't that hard).

Swift in Go demonstrated that it's not the ini format that's the problem
for reimplementation, the paste-deploy is. In particular, the names from
pipeline= define section names, and egg names define what code is
executed. So one can have "pipeline=keystone app", but the [keystone]
section is actually use=egg:swift#tempauth, not Keystone. It's perfectly
legal and will work today, even if such a configuration is a hostile move
against future coworkers.

-- Pete
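P.S. To make it concrete, here is a sketch of such a legal-but-misleading paste-deploy config (app and filter bodies abbreviated). The pipeline name "keystone" is just a section label; the use= egg reference is what actually selects the code:

```ini
[pipeline:main]
# "keystone" here only names a section below, it does not state what runs.
pipeline = keystone app

[filter:keystone]
# The egg is what executes -- tempauth, not Keystone.
use = egg:swift#tempauth

[app:app]
use = egg:swift#proxy
```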
Re: [openstack-dev] [swift][keystone] Using JSON as future ACL format
On Mon, 6 Jun 2016 13:05:46 -0700 "Thai Q Tran" wrote:

> My intention is to spark discussion around this topic with the goal of
> moving the Swift community toward accepting the JSON format.

It would be productive if you came up with a specific proposal for how to
retrofit JSON onto container ACLs. Note that JSON is already used
natively for account ACLs in Swift.

Personally I don't see an actual need for usernames with colons expressed
by operators. The issue that you have identified was known for a while
and apparently did not cause any difficulties in practice. Just don't put
colons into usernames. And if you switch to IDs, those are just UUIDs.

-- Pete
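P.S. A toy sketch of the ambiguity in question, assuming the usual comma-separated "project:user" container ACL syntax (all the names below are made up). Account ACLs, being JSON, carry the same grant as an intact string:

```python
import json

# Container ACLs: comma-separated "project:user" grants.
container_acl = "proj1:alice,proj2:bob"
grants = [g.split(":", 1) for g in container_acl.split(",")]

# A colon inside a name makes the grant ambiguous to parse:
odd = "proj:1:alice".split(":", 1)  # project "proj"? or project "proj:1"?

# An account-style JSON ACL keeps the grant intact as a single string:
account_acl = json.dumps({"read-only": ["proj:1:alice"]})
```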
Re: [openstack-dev] [tc] supporting Go
On Fri, 13 May 2016 10:14:02 +0200 Dmitry Tantsur wrote:

> [...] If familiarity for Python
> developers is an argument here, mastering Cython or making OpenStack run
> on PyPy must be much easier for a random Python developer out there to
> seriously bump the performance.

Unfortunately, practice showed that PyPy is not an answer. It changes
nothing about the poor and coarse thread scheduling. It focuses on an
entirely different aspect of performance, one which, while unpleasant,
was surmountable in Python. Checksumming sure is faster, but then
remember that Swift offloads Erasure Coding to C already.

We dragged this Python cart 2 years too far already. Don't for a second
imagine that Hummingbird is some kind of project prompted by Go being new
and shiny.

-- Pete
Re: [openstack-dev] [tc] supporting Go
On Mon, 9 May 2016 14:17:40 -0500 Edward Leafe wrote:

> Whenever I hear claims that Python is “too slow for X”, I wonder
> what’s so special about X that makes it so much more demanding than,
> say, serving up YouTube.

In case of Swift, the biggest issue was the scheduler. As RAX people
found, although we (OpenStack Swift community of implementors) were able
to obtain an acceptable baseline performance, a bad drive dragged down
the whole node.

Before Hummingbird, dfg (and redbo) screwed around with fully separate
processes that provided isolation, but that was not scaling well. So,
there was an endless parade of solutions on the basis of threads. Some
patches went in, some did not. At some point things were so bad that dfg
posted a patch which maintained a scoring board in an SQLite file. They
were willing to add a bunch of I/O to every request just to avoid the
worst case that Python forced upon them. The community (that is,
basically John, Sam, and I) put the brakes on that. But only at that
point redbo created Hummingbird, which solved the issue for them.

Once Hummingbird went into production, they found that it was easy to
polish and it could be much faster. Some of the benchmarks were beating
Python by 80 times. CPU consumption went way down, too. But all that was
secondary in the adoption of Go. If not for a significant scalability
crisis in the field, Swift in Go would not have happened.

Scott Simpson gave a preso at Vancouver Summit that had some details and
benchmarks. Google is no help finding it online, unfortunately. Only
finds the panel discussion. Maybe someone had it saved.

-- Pete
Re: [openstack-dev] [tc] supporting Go
On Mon, 9 May 2016 09:06:02 -0400 Rayson Ho wrote:

> Since the Go toolchain is pretty self-contained, most people just follow
> the official instructions to get it installed... by a one-step:
>
> # tar -C /usr/local -xzf go$VERSION.$OS-$ARCH.tar.gz

I'm pretty certain humanity has moved on from this sort of thing.
Nowadays "most people" use packaged language runtimes that come with the
Linux they're running.

-- Pete
Re: [openstack-dev] [tc] Swift api compat. Was: supporting Go
On Wed, 4 May 2016 21:52:49 +0000 "Fox, Kevin M" wrote:

> Swift is in a strange place where the api is implemented in a way to
> favor one particular vendor backend implementation.

Sorry, but I disagree with the above assessment. There is no one
particular vendor like that, because the only vendor of Swift source is
OpenStack, and vendors of pre-packaged Swift are legion, all equal:
Red Hat, HPE (Helion), SwiftStack, Mirantis, and more.

> I'd love to be able to plugin Swift into our sites, but because we
> can only have one, the various tradeoffs have led us to deploy RadosGW
> most of the time.

The fact that you succeeded in running OpenStack with RadosGW proves that
there is no issue here that impedes the development or use of OpenStack.
We at Red Hat will be happy to support an installation of OpenStack using
Ceph underpinning it as an integrated storage solution. Or, an
installation that uses the OpenStack-released, reference implementation
of Swift, which we integrate too. We're flexible like that, according to
the needs of each customer.

-- Pete
Re: [openstack-dev] [tc] supporting Go
On Tue, 3 May 2016 22:37:30 +0000 "Fox, Kevin M" wrote:

> RadosGW has been excluded from joining the OpenStack community in part
> due to its use of c++.

Sounds like sheer lunacy. Nothing like that ever happened, AFAIK.

-- Pete
Re: [openstack-dev] [tc] supporting Go
On Tue, 3 May 2016 22:11:06 +0000 "Fox, Kevin M" wrote:

> If we let go in, and there are no pluggable middleware, where does
> RadosGW and other Swift api compatible implementations then stand?

They remain where they are now.

> Should we bless c++ too? As I understand it, there are a lot of clouds
> deployed with the RadosGW but Refstack rejects them.

RadosGW is not trying to become a part of OpenStack, while Hummingbird
is. This is why we're discussing Go and not C++ in this thread.

-- Pete
Re: [openstack-dev] [tc] supporting Go
On Tue, 3 May 2016 12:16:24 -0400 Rayson Ho wrote:

> I like Go! However, Go does not offer binary compatibility between point
> releases. For those who install from source it may not be a big issue, but
> for commercial distributions that pre-package & pre-compile everything,
> then the compiled Go libs won't be compatible with old/new releases of the
> Go compiler that the user may want to install on their systems.

IMHO, it's not yet a problem worth worrying about. C++ has demonstrated
poor binary compatibility over releases, even 25 years after its
creation. And it's not a big concern. Annoying, yes, but not a
deal-breaker. In case of Fedora, we'll ship with a nailed Golang version
in each release.

-- Pete
Re: [openstack-dev] swift missing X-Timestamp header commit review
On Wed, 20 Jan 2016 13:46:13 +0200 (EET) Mustafa ÇELİK (BİLGEM-BTE) wrote:

> commit-1: This one is my patch for the bug.
> https://review.openstack.org/#/c/268163/
> I need someone to review my commit-1.
> Can somebody help me with code-review?

Sure... I am somewhat unenthusiastic about this idea. Suppose we started
returning the back-end timestamp values. What is the value of doing it?
Robert Francis proposed using the X-Timestamp for tiering middleware. How?

-- Pete
Re: [openstack-dev] [TripleO] Is Swift a good choice of database for the TripleO API?
On Tue, 22 Dec 2015 08:56:08 -0800 Clint Byrum wrote:

> You could create a unique swift container, upload things to that, and
> then update a pointer in a well-known location to point at that container
> for the new plan only after you've verified it is available. This is a
> primitive form of Read-copy-update.

It's worse than you think. Container updates often lag in Swift. I
suggest a pseudo-container or a manifest object instead. However, renames
in Swift are copies. Ergo, an external database has to point to the
current tip or the latest-generation manifest. Which brings us to...

> So if you are only using the DB for consistency, you might want to just
> use tooz+swift.

Yep. Still has to store the templates themselves somewhere though.

-- Pete
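P.S. For the archives, a rough sketch of Clint's read-copy-update scheme. The helpers take any client object with put_container/put_object/get_object methods in the style of python-swiftclient's Connection (names assumed, error handling and the verification step omitted); the well-known pointer object is flipped only after the whole plan is uploaded:

```python
import json

def publish_plan(client, plan_name, files, generation):
    # Upload the new plan into a fresh, uniquely named container...
    container = "%s-gen-%d" % (plan_name, generation)
    client.put_container(container)
    for name, body in files.items():
        client.put_object(container, name, body)
    # ...and only then flip the pointer, so readers see either the
    # complete old plan or the complete new one, never a mix.
    client.put_object("plans", plan_name,
                      json.dumps({"container": container}))

def current_plan_container(client, plan_name):
    _headers, body = client.get_object("plans", plan_name)
    return json.loads(body)["container"]
```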
Re: [openstack-dev] Getting rid of suds, which is unmaintained, and which we want out of Debian
On Thu, 11 Jun 2015 11:08:55 +0300 Duncan Thomas duncan.tho...@gmail.com wrote:

> There's only one cinder driver using it (nimble storage), and it seems
> to be using only very basic features. There are half a dozen suds forks
> on PyPI, or there's pysimplesoap that the debian maintainer recommends.
> None of the above are currently packaged for Ubuntu that I can see, so
> can anybody in-the-know make a reasoned recommendation as to what to
> move to?

In instances I had to deal with (talking to VMware), it was easier and
better to roll-your-own with python-xml and libhttp.

-- P
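P.S. In case it helps anyone weighing the roll-your-own route: the whole exercise is roughly this much code with nothing but the standard library. This is a generic illustration (shown with Python 3's urllib and ElementTree); the endpoint and SOAPAction are of course placeholders:

```python
import urllib.request
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def soap_envelope(body_xml):
    # Wrap an already-serialized payload in a minimal SOAP 1.1 envelope.
    env = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(env, "{%s}Body" % SOAP_NS)
    body.append(ET.fromstring(body_xml))
    return ET.tostring(env)

def soap_call(url, action, body_xml):
    # POST the envelope and parse whatever XML comes back.
    req = urllib.request.Request(
        url, data=soap_envelope(body_xml),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": action})
    with urllib.request.urlopen(req) as resp:
        return ET.fromstring(resp.read())
```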
Re: [openstack-dev] [swift] Allow hostname for nodes in Ring
On Fri, 10 Oct 2014 04:56:55 +0000 Osanai, Hisashi osanai.hisa...@jp.fujitsu.com wrote:

> Today the following patch was abandoned and I contacted the author, so
> I would like to take it over if nobody else is chafing to take it. Is
> it OK?
> https://review.openstack.org/#/c/80421/
> If it is OK, I will proceed with the following procedure.
> (1) Open a new bug report (there is no bug report for this)
>     I'm not sure whether I should write a BP instead of a bug report.
> (2) Make a patch based on the current patch on gerrit

If the author agrees or is ambivalent about it, you are free to re-use
the old Change ID. And you're always free to post your patch anew.

I don't know if the bug report is all that necessary or useful. The scope
of the problem is well defined without it, IMHO.

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
Re: [openstack-dev] [swift] Use FQDN in Ring files instead of ip
On Wed, 23 Jul 2014 08:54:30 -0700 John Dickinson m...@not.mn wrote:

> So basically, it's a question of do we add the feature, knowing that
> most people who use it will in fact be making their lives more
> difficult, or do we keep it out, knowing that we won't be serving those
> who actually require the feature.

Speaking of the latter, do you know of at least one operator who is
stuck with a Facebook-style v6-only datacenter?

-- Pete
Re: [openstack-dev] About Swift as an object storage gateway, like Cinder in block storage
On Mon, 7 Jul 2014 11:05:40 +0800 童燕群 tyan...@qq.com wrote:

> The workflow of this middle-ware working with swift may be like this pic:

Since you're plugging this into a/c/o nodes, there's no difference
between this and Pluggable Back-ends. Note that PBE is already
implemented in the case of the object server, see class DiskFile. The
account/container remainder is here:
https://review.openstack.org/47713

Do you have a request from your operations to implement this, or is it a
nice-to-have exercise for you? If the former, what specific vendor store
are you targeting?

-- Pete

P.S. Note that Cinder includes a large management component, which Swift
lacks by itself. In Cinder you can add new back-ends through Cinder's API
and CLI. In Swift, you have to run swift-ring-builder and edit configs.
Your blueprint does not address this gap.
Re: [openstack-dev] Swift: reason for using xfs on devices
On Wed, 2 Jul 2014 00:16:42 +0000 Osanai, Hisashi osanai.hisa...@jp.fujitsu.com wrote:

> So I think if performance of swift is more important rather than
> scalability of it, it is a good idea to use ext4.

The real problem is what happens when your drives corrupt the data. Both
ext4 and XFS demonstrated good resilience, but XFS leaves empty files in
directories where corrupt files were, while ext4's fsck moves them to
lost+found without a trace. When that happens, Swift's auditors cannot
know that something was amiss and the replication is not triggered
(because hash lists are only updated by auditors). Mr. You Yamagata
worked on a patch to address this problem, but did not complete it. See
here:
https://review.openstack.org/11452

-- Pete
Re: [openstack-dev] Moving swift3 to stackforge (was: Re: [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka)
On Fri, 14 Mar 2014 09:03:22 +0100 Chmouel Boudjnah chmo...@enovance.com wrote:

> fujita (the maint of swift3 in CC of this email) has commented that
> he's been working on it.

I think we should not have kicked it out. Maybe just re-fold it back
into Swift?

-- Pete
[openstack-dev] Guru Meditation output seems useless
Dear Solly:

I cobbled together a working prototype of Guru Meditation for Swift just
to see how it worked. I did not use Oslo classes, but used the code from
Dan's prototype and from your Nova review. Here's the Gerrit link:
https://review.openstack.org/70513

Looking at the collected tracebacks, most of all they seem singularly
useless. No matter how loaded the process is, they always show something
like:

  File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 226, in run
    self.wait(sleep_time)
  File "/usr/lib/python2.6/site-packages/eventlet/hubs/poll.py", line 84, in wait
    presult = self.do_poll(seconds)
  File "/usr/lib/python2.6/site-packages/eventlet/hubs/poll.py", line 73, in do_poll
    return self.poll.poll(int(seconds * 1000.0))
  File "/usr/lib/python2.6/site-packages/swift/common/daemon.py", line 103, in <lambda>
    lambda *args))
  File "/usr/lib/python2.6/site-packages/swift/common/guru_meditation.py", line 79, in signal_handler
    dump_threads(gthr_model, report_fp)
  File "/usr/lib/python2.6/site-packages/swift/common/guru_meditation.py", line 53, in dump_threads
    thread.dump(report_fp)
  File "/usr/lib/python2.6/site-packages/swift/common/guru_meditation.py", line 29, in dump
    traceback.print_stack(self.stack, file=report_fp)

The same is true for native and green threads: they all seem to be
anchored in the lambda that passes parameters to the signal handler, so
they show nothing of value.

So, my question is: did you look at traces in Nova and if yes, did you
catch anything? If yes, where is the final code that works?

-- Pete
Re: [openstack-dev] Glance v1 and v2
On Tue, 18 Feb 2014 10:57:03 +0100 Joe Hakim Rahme joe.hakim.ra...@enovance.com wrote:

> Again, I have just spent a couple of days playing with it on a
> devstack. I'm by no means a reference on the subject of the API v2. I
> hope this will help you get a better idea of where it stands today.

Thanks a lot, it clears up some misconceptions on my part. I noticed
that the glance CLI client was using v1; I should've looked at the
source for --os-image-api, but I was sure we just postponed it.

> [1]: http://docs.openstack.org/api/openstack-image-service/2.0/content/image-sharing.html

Thanks for the links too.

-- Pete
[openstack-dev] Glance v1 and v2
Hello:

Does anyone happen to know, or have a detailed write-up on, the
differences between so-called Glance v1 and Glance v2? In particular, do
we still need Glance Registry in Havana, or do we not?

The best answer so far was to run the registry anyway, just in case,
which does not feel entirely satisfactory. Surely someone should know
exactly what is going on in the API and have a good idea what the
implications are for the users of Glance (API, CLI, and Nova (I include
Horizon into API)).

Thanks,
-- Pete
[openstack-dev] [Glance] delayed delete and user credentials
Hi, guys:

I looked briefly at a bug/fix which looks exceedingly strange to me:
https://review.openstack.org/59689

As much as I can tell, the problem (lp:1238604) is that pending delete
fails because by the time the delete actually occurs, Glance API does
not have proper permissions to talk to Glance Registry. So far so good,
but the solution that we accepted is to forward the user credentials to
Registry... but only if configured to do so.

Does it make any sense to anyone? Why configure something that must
always work? How can a sysadmin select the correct value?

-- Pete
[openstack-dev] Swift account auditor duplicated code
Hi, Guys:

Here's a weird piece of a duplicated call to account_audit() in
swift/account/auditor.py:

    for path, device, partition in all_locs:
        self.account_audit(path)
        if time.time() - reported >= 3600:  # once an hour
            self.logger.info(_('Since %(time)s: Account audits: ' ...)
            self.account_audit(path)
            dump_recon_cache({'account_audits_since': reported, ...)
            reported = time.time()

This was apparently caused by Florian's ccb6334c going on top of
Darrell's 3d3ed34f. Is this intentional, and if not, should we be
fixing it?

-- Pete
Re: [openstack-dev] [Swift] question on Application class concurrency; paste.app_factory mechanism
On Wed, 24 Jul 2013 02:31:45 +0000 Luse, Paul E paul.e.l...@intel.com wrote:

> I was thinking that each connection would get its own instance, thus it
> would be safe to store connection-transient information there, but I
> was surprised by my quick test.

Yeah, you have it tracked in __call__, typically hanging off the
controller.

-- Pete
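P.S. A bare-bones illustration of the point (not Swift's actual proxy code): paste's app_factory builds one Application instance for the whole process, so instance attributes are shared across concurrent requests, and per-request state has to live in locals inside __call__, typically on a controller object created there:

```python
class Application(object):
    def __init__(self):
        # Instance attributes are shared by every request served by
        # this process -- fine for counters, unsafe for request state.
        self.requests_served = 0

    def __call__(self, env, start_response):
        self.requests_served += 1
        # Per-request state goes in locals created here, typically a
        # controller object, never in attributes on self.
        controller = {"path": env.get("PATH_INFO", "/")}
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [controller["path"].encode("utf-8")]

def app_factory(global_conf, **local_conf):
    # paste.app_factory entry point: called once, the result is reused
    # for every connection.
    return Application()
```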
Re: [openstack-dev] [Swift] erasure codes, digging deeper
On Thu, 18 Jul 2013 12:31:02 -0500 Chuck Thier cth...@gmail.com wrote:

> I'm with Chmouel though. It seems to me that EC policy should be chosen
> by the provider and not the client. For public storage clouds, I don't
> think you can make the assumption that all users/clients will
> understand the storage/latency tradeoffs and benefits.

Would not tiered pricing make them figure it out quickly? Make EC
cheaper by the factor of the cost of storage used and voila.

At first I also had a violent reaction to this kind of exposure of
internals. After all, S3 went this far while being entirely opaque. But
we're not S3, that's the key.

-- Pete