Hi Matthieu,
On 22.01.14 20:02, Matthieu Huin wrote:
The idea is to have a middleware checking a domain's current usage
against a limit set in the configuration before allowing an upload.
The domain ID can be extracted from the token, then used to query
Keystone for a list of projects.
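A rough sketch of such a middleware (all names here are hypothetical; the real thing would derive the domain ID from the Keystone token and sum usage across the domain's projects via account HEAD requests):

```python
# Hypothetical sketch of a domain-quota middleware for the proxy pipeline.
# domain_limits comes from configuration; usage_lookup stands in for the
# Keystone/account queries described above.

class DomainQuotaMiddleware(object):
    def __init__(self, app, domain_limits, usage_lookup):
        self.app = app
        self.domain_limits = domain_limits   # domain_id -> max bytes
        self.usage_lookup = usage_lookup     # domain_id -> current bytes

    def __call__(self, env, start_response):
        if env.get('REQUEST_METHOD') == 'PUT':
            domain_id = env.get('HTTP_X_DOMAIN_ID')  # from the auth token info
            limit = self.domain_limits.get(domain_id)
            if limit is not None:
                length = int(env.get('CONTENT_LENGTH') or 0)
                if self.usage_lookup(domain_id) + length > limit:
                    start_response('413 Request Entity Too Large', [])
                    return [b'Domain quota exceeded']
        return self.app(env, start_response)
```

Anything below the limit just passes through to the wrapped app, so the check is transparent for reads and for domains without a configured limit.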
Thanks John for the summary - and all contributors for their work!
Others are looking into how to grow clusters (changing the partition
power).
I'm interested in who else is working on this - I successfully
increased the partition power of several (smaller) clusters and would like
to discuss
Hello everyone,
I'd like to discuss a way to increase the partition power of an existing
Swift cluster.
This is most likely interesting for smaller clusters that are growing
beyond their originally planned size.
As discussed earlier [1] a rehashing is required after changing the
partition power to
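To illustrate why the rehashing is needed, here is a sketch of the partition mapping (simplified: it leaves out the per-cluster hash prefix/suffix the real implementation mixes into the md5):

```python
import struct
from hashlib import md5

def partition(path, part_power):
    # Swift derives the partition from the top part_power bits of the
    # md5 of the object's hash path.
    digest = md5(path.encode()).digest()
    return struct.unpack('>I', digest[:4])[0] >> (32 - part_power)

# Raising the partition power by one splits old partition p into
# partitions 2p and 2p+1, so every object lands in a new partition
# directory and must be rehashed/moved accordingly.
p_old = partition('/AUTH_test/c/o', 10)
p_new = partition('/AUTH_test/c/o', 11)
assert p_new in (2 * p_old, 2 * p_old + 1)
```

The same relation holds for any power increase: the old partition number is a bit-prefix of the new one, which is what makes an in-place split feasible at all.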
On 02.12.13 15:47, Gregory Holt wrote:
Achieving this transparently is part of the ongoing plans, starting
with things like the DiskFile refactoring and SSync. The idea is to
isolate the direct disk access from other servers/tools, something
that (for instance) RSync has today. Once the
On 02.12.13 17:10, Gregory Holt wrote:
On Dec 2, 2013, at 9:48 AM, Christian Schwede
christian.schw...@enovance.com wrote:
That sounds great! Is someone already working on this (I know about
the ongoing DiskFile refactoring) or even a blueprint available?
There is https
On 14.11.14 20:43, Tim Bell wrote:
It would need to be tiered (i.e. migrate whole collections rather than
files) and a local catalog would be needed to map containers to tapes.
Timeouts would be an issue since we are often waiting hours for recall
(to ensure that multiple recalls for the same
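A minimal sketch of the local catalog Tim describes (all names hypothetical): containers map to tapes, and recalls are grouped per tape so one hours-long mount serves a whole collection instead of per-object recalls.

```python
# Hypothetical container-to-tape catalog for a tiered (tape) backend.
class TapeCatalog(object):
    def __init__(self):
        self._by_container = {}

    def record(self, container, tape_id):
        self._by_container[container] = tape_id

    def tapes_for(self, containers):
        """Group requested containers by tape, so multiple recalls
        hitting the same tape collapse into a single mount."""
        grouped = {}
        for c in containers:
            tape = self._by_container.get(c)
            if tape is not None:
                grouped.setdefault(tape, []).append(c)
        return grouped
```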
A solution to this might be to set the default policy as a configuration
setting in the proxy. If you want a replicated Swift cluster, just allow
this policy in the proxy and set it as the default. The same applies to an
EC cluster: just set the allowed policy to EC. If you want both (and let
your users decide
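For reference, the policies themselves are declared in swift.conf; a sketch of a two-policy setup (names and EC parameters illustrative, the proxy-side restriction discussed above is the new part):

```ini
# swift.conf - "default = yes" picks the policy used for new containers
# when the client does not specify one.
[storage-policy:0]
name = replicated
policy_type = replication
default = yes

[storage-policy:1]
name = ec42
policy_type = erasure_coding
ec_type = liberasurecode_rs_vand
ec_num_data_fragments = 4
ec_num_parity_fragments = 2
ec_object_segment_size = 1048576
```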
Hello Jonathan,
On 17.02.15 22:17, Halterman, Jonathan wrote:
Various services desire the ability to control the location of data
placed in Swift in order to minimize network saturation when moving data
to compute, or in the case of services like Hadoop, to ensure that
compute can be moved to
Hello Jonathan,
On 18.02.15 18:13, Halterman, Jonathan wrote:
1. Swift should allow authorized services to place a given number
of object replicas onto a particular rack, and onto separate
racks.
This is already possible if you use zones and regions in your ring
files. For example, if you
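As an illustrative sketch (addresses, ports and weights made up), the ring builder encodes region and zone per device, and the placement algorithm spreads replicas across regions first, then zones, then nodes:

```shell
swift-ring-builder object.builder create 10 3 1
# device spec: r<region>z<zone>-<ip>:<port>/<device> <weight>
swift-ring-builder object.builder add r1z1-192.168.1.1:6000/sda 100
swift-ring-builder object.builder add r1z2-192.168.1.2:6000/sda 100
swift-ring-builder object.builder add r2z1-192.168.2.1:6000/sda 100
swift-ring-builder object.builder rebalance
```

With this layout, a 3-replica object gets at most one replica per zone, which already gives the rack-separation property without a new placement API.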
Thanks Steven for your feedback! Please see my answers inline.
On 02.08.16 23:46, Steven Hardy wrote:
> On Tue, Aug 02, 2016 at 09:36:45PM +0200, Christian Schwede wrote:
>> Hello everyone,
>>
>> I'd like to improve the Swift deployments done by TripleO. There are a
>&
On 04.08.16 10:27, Giulio Fidente wrote:
> On 08/02/2016 09:36 PM, Christian Schwede wrote:
>> Hello everyone,
>
> thanks Christian,
>
>> I'd like to improve the Swift deployments done by TripleO. There are a
>> few problems today when deployed with the current
Thanks,
Christian
--
Christian Schwede
Red Hat GmbH
Technopark II, Haus C, Werner-von-Siemens-Ring 11-15, 85630 Grasbrunn,
Commercial register: Amtsgericht Muenchen HRB 153243
Managing directors: Mark Hegarty, Charlie Peters, Michael
On 04.08.16 15:39, Giulio Fidente wrote:
> On 08/04/2016 01:26 PM, Christian Schwede wrote:
>> On 04.08.16 10:27, Giulio Fidente wrote:
>>> On 08/02/2016 09:36 PM, Christian Schwede wrote:
>>>> Hello everyone,
>>>
>>> thanks Christian,
>>
Hello,
kindly asking for an FFE for a required setting to improve Swift-based
TripleO deployments:
https://review.openstack.org/#/c/358643/
This is required to land the last patch in a series of TripleO-doc patches:
https://review.openstack.org/#/c/293311/
> we're trying to address in TripleO a couple of use cases for which we'd
> like to trigger a Mistral workflow from a Heat template.
>
> One example where this would be useful is the creation of the Swift
> rings, which need some data related to the Heat stack (like the list of
> Swift nodes and
rcloud operational by
> definition, so I think this is probably OK?
>
> Steve
>>
>> -----Original Message-----
>> From: Christian Schwede [mailto:cschw...@redhat.com]
>> Sent: Thursday, January 05, 2017 6:14 AM
>> To: OpenStack Development Mailing List <openstac
Hello everyone,
there was an earlier discussion on $subject last year [1] regarding a
bug when upscaling or replacing nodes in TripleO [2].
Briefly summarized: Swift rings are built on each node separately, and
adding or replacing nodes (or disks) will break the rings because
they are no
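One way to spot the symptom (a sketch, not the TripleO fix itself): rings that were built independently on each node end up with different checksums, while consistently deployed ring files are byte-identical everywhere.

```python
from hashlib import md5

def ring_checksum(path):
    """md5 of a ring file; compare across nodes - any mismatch means
    the nodes are not working from the same ring."""
    with open(path, 'rb') as f:
        return md5(f.read()).hexdigest()
```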