Re: [openstack-dev] [swift][swift3][s3] Keep containers unique among a cluster

2018-05-14 Thread Pete Zaitcev
On Thu, 10 May 2018 20:07:03 +0800
Yuxin Wang  wrote:

> I'm working on a swift project. Our customer cares about S3 compatibility 
> very much. I tested our swift cluster with ceph/s3-tests and analyzed the 
> failed cases. It turns out that lots of the failed cases are related to 
> unique container/bucket. But as we know, containers are just unique in a 
> tenant/project.
>[...]
> Do you have any ideas on how to do or maybe why not to do? I'd highly 
> appreciate any suggestions.

I don't have a recipe, but here's a thought: try making all the accounts
that need the interoperability with S3 belong to the same Keystone tenant.
As long as you do not give those accounts the owner role (one of those
listed in operator_roles=), they will not be able to access each other's
buckets (Swift containers). Unfortunately, I think they will not be able
to create any buckets either, but perhaps that is something that can be
tweaked -- for sure if you're willing to go far enough to make new middleware.
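
For reference, the role check lives in the keystoneauth middleware's
configuration. A minimal proxy-server.conf fragment (the role names here
are the common defaults, used for illustration) would be:

  [filter:keystoneauth]
  use = egg:swift#keystoneauth
  # accounts holding these roles own the tenant: they can create and
  # delete containers; everyone else only gets what the ACLs grant
  operator_roles = admin, swiftoperator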

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] swift3 Plugin Development

2017-06-09 Thread Pete Zaitcev
On Fri, 9 Jun 2017 10:37:15 +0530
Niels de Vos  wrote:

> > > we are looking for S3 plugin with ACLS so that we can integrate gluster 
> > > with that.
> > 
> > Did you look into porting Ceph RGW on top of Gluster?
> 
> This is one of the longer term options that we have under consideration.
> I am very interested in your reasons to suggest it, care to elaborate a
> little?

RGW seems like the least worst starting point in terms of the end
result you're likely to get.

swift3 does a good job for us in OpenStack Swift, providing a degree
of compatibility with S3. When Kota et al. took over from Tomo, they revived
the development successfully. However, it remains fundamentally limited in
what it does, and its main function is to massage S3 to fit it on top
of Swift. If you place it in front of Gluster, you're saddled with
this fundamental incompatibility, unless you fork swift3 and rework it
beyond recognition.

In addition, surely you realize that swift3 is only a shim and you need
to have an object store to back it. Do you even have one in Gluster?

Fedora used to ship a self-contained S3 store, "tabled", so unlike swift3
it's complete. It's written in C, so it may be more compatible with Gluster's
development environment. However, it has been out of development for years and
it only supports canned ACLs. You aren't getting the full ACLs that
you're after with it.

RGW gives you all that. It's well-compatible with S3, because S3 is
its native API (with the Swift API being grafted on). Yehuda and crew maintain
good compatibility. Yes, it's in C++, but the dialect is reasonable.
The worst downside is that, yes, it's wedded to Ceph's RADOS and you need
major surgery to place it on top of Gluster. Nonetheless, it seems like
a better-defined task to me than trying to maintain your own webserver,
which you must do if you select swift3.

There are still some parts of RGW which will give you trouble. In particular,
it uses loadable classes, which run in the context of Ceph OSD. There's no
place in Gluster to run them. You may have to drag parts of OSD into the
project. But I didn't look closely enough to determine the feasibility.

In your shoes, I'd talk to Yehuda about this. He knows the problem domain
exceptionally well and will give you good advice, even though you're a
competitor in Open Source in general. Kinda like I do now :-)

Cheers,
-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Swift3 Plugin Development

2017-06-08 Thread Pete Zaitcev
On Thu, 8 Jun 2017 17:06:02 +0530
Venkata R Edara  wrote:

> we are looking for S3 plugin with ACLS so that we can integrate gluster 
> with that.

Did you look into porting Ceph RGW on top of Gluster?

-- P

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Embracing new languages in OpenStack

2016-11-10 Thread Pete Zaitcev
On Wed, 9 Nov 2016 11:14:32 + (GMT)
Chris Dent  wrote:

> The conversations about additional languages in this community have
> been one of our most alarmingly regressive and patronizing. They seem
> to be bred out of fear rather than hope and out of lack of faith in
> each other rather than in trust. We've got people who want to build stuff.
> Isn't that the important part?

I dunno, it seems fine to discuss. I'm disappointed that the TC voted Golang
down on August 2, but I can see where they're coming from.

The problem we're grappling with on the Swift side is (in my view) mainly
that the Go reimplementation provides essential performance advantages
which manifest at a certain scale (around 100 PB with current technology).
For this reason, ignoring Hummingbird and prohibiting Go is not going to
suppress those advantages. As operators deploy Hummingbird in preference
to the Python implementation, the focus of the development is going to
migrate, and the end result is going to be an effective exile of a founding
project from OpenStack.

(Even if that happens, it's probably not a big deal. Just look how well Ceph
is doing, community-wise. Operators aren't crying bloody tears either,
are they?)

The conflict is that since rewriting e.g. Neutron in Go does not confer
the same performance advantage (AFAIK -- your VLANs aren't going to set
up and tear down 80 times faster), the disruption isn't worth the trouble
for the majority of OpenStack projects. This is why the TC voted us down.
And the talk about the community is mostly there to heal psychologically.

So, it wasn't "regressive" or "patronizing", just business. See how Flavio
outlined specific steps in a constructive manner.

I'm quite glad that Ash wants to do something about CI. And I'm going
to look into fully supporting existing configurations. Maybe share it with
Designate and thus create something like a proto-"oslo.go.config".
Of course we need to have some code to share first.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Embracing new languages in OpenStack

2016-11-10 Thread Pete Zaitcev
On Mon, 07 Nov 2016 15:53:51 -0800
Joshua Harlow  wrote:

> Standards though exist for a reason (and ini files are pretty common 
> across languages and such); though of course oslo.config has some very 
> tight integration with openstack, the underlying ini concept it 
> eventually reads/writes really isn't that special (so hopefully such a 
> thing existing isn't that hard).

Swift in Go demonstrated that it's not the ini format that's the problem
for reimplementation; paste-deploy is. In particular, the names in
pipeline= define section names, and egg names define what code is executed.
So one can have "pipeline=keystone app", but the [keystone] section actually
says use=egg:swift#tempauth, not Keystone. It's perfectly legal and will work
today, even if such a configuration is a hostile move against future
coworkers.
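
A contrived but valid fragment of that kind (the section and app names
are invented here for illustration) looks like:

  [pipeline:main]
  pipeline = keystone proxy-server

  [filter:keystone]
  # despite the section name, this loads TempAuth, not Keystone
  use = egg:swift#tempauth

  [app:proxy-server]
  use = egg:swift#proxy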

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [Swift] Where can I get the libshss library ?

2016-10-13 Thread Pete Zaitcev
On Thu, 13 Oct 2016 11:58:58 +0900
Yu Watanabe  wrote:

> Oct 12 19:44:45 opstack-objstorage1 swift-account-server[27793]: Error:
> [swift-hash]: both swift_hash_path_suffix and swift_hash_path_prefix are
> m...ft.conf

Leaving libshss aside, Swift being unable to read /etc/swift/swift.conf
is typically caused by SELinux (assuming you have actually set the
hash prefix previously).
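
A quick way to check (stock SELinux tooling; the path is the standard one):

  # look for recent SELinux denials mentioning the file
  ausearch -m AVC -ts recent | grep swift.conf
  # inspect and, if needed, restore the file's SELinux context
  ls -Z /etc/swift/swift.conf
  restorecon -v /etc/swift/swift.conf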

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Swift] 404 re-reading just created container

2016-06-07 Thread Pete Zaitcev
On Fri, 3 Jun 2016 16:09:06 +1200
Mark Kirkwood  wrote:

> This is a Swift 2.7.0.1 system with 2 proxies and 3 storage nodes (each 
> the latter with 6 devices).

What is your replica count?

> The proxies are load balanced behind Haproxy (which I'm guessing is 
> causing the 404 - see below)

HAProxy is often troublesome, but I don't expect it is at fault in
this instance. The logs that you quoted show that the proxy-server
returned the 404. Do the proxies happen to talk to separate memcached
instances by any chance?
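
If they do, a freshly created container can still be cached as absent on
the other proxy. Normally both proxies should share the same server list
in /etc/swift/memcache.conf (IPs below are placeholders):

  [memcache]
  memcache_servers = 10.0.0.1:11211,10.0.0.2:11211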

What you really need in this instance is to capture logs that have
the same tx-id on _all_ nodes. Then we can construct the scenario.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [swift][keystone] Using JSON as future ACL format

2016-06-07 Thread Pete Zaitcev
On Mon, 6 Jun 2016 13:05:46 -0700
"Thai Q Tran"  wrote:

> My intention is to spark discussion around this topic with the goal of
> moving the Swift community toward accepting the JSON format.

It would be productive if you came up with a specific proposal for how to
retrofit JSON onto container ACLs. Note that JSON is already used natively
for account ACLs in Swift.

Personally, I don't see operators expressing an actual need for usernames
with colons. The issue that you have identified has been known for a while
and apparently did not cause any difficulties in practice. Just don't
put colons into usernames. And if you switch to IDs, those are just UUIDs.
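
For comparison, the native account-ACL format already takes arbitrary
names in JSON lists; a sketch (the names are illustrative):

  X-Account-Access-Control: {"read-only": ["tenant1:alice"], "read-write": ["tenant1:bob"]}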

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Using swift-client in C with cURL to access a swift store

2016-05-20 Thread Pete Zaitcev
On Fri, 20 May 2016 10:24:38 -0700
Clay Gerrard  wrote:

> > Look at cf_xxx functions here:
> >  https://git.fedorahosted.org/cgit/iwhd.git/tree/backend.c
> > Clone
> >  git://git.fedorahosted.org/iwhd.git
> >  
> ^ should *also* go on the associated projects list!

Naah, iwhd is completely dead. Only good to steal some code.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Using swift-client in C with cURL to access a swift store

2016-05-20 Thread Pete Zaitcev
On Wed, 18 May 2016 11:21:29 -0700
Clay Gerrard  wrote:

> I haven't heard much about folks using Swift bindings for C - there's no C
> bindings listed on the associated projects page [1].  I'm sure just using

The so-called "Image Warehouse" of the Aeolus project has a minimal
handler for CF in C. No DLO/SLO support though.

Look at cf_xxx functions here:
 https://git.fedorahosted.org/cgit/iwhd.git/tree/backend.c
Clone
 git://git.fedorahosted.org/iwhd.git

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [swift] Object replication failure counts confusing in 2.7.0

2016-05-20 Thread Pete Zaitcev
On Wed, 18 May 2016 16:46:05 +1200
Mark Kirkwood  wrote:

> May 18 04:31:17 markir-dev-ostor002 object-server: object replication 
> failure 4, detail Traceback (most recent call last):#012  File 
> "/opt/cat/openstack/swift/local/lib/python2.7/site-packages/swift/obj/replicator.py",
>  
> line 622, in build_replication_jobs#012 int(partition))#012ValueError: 
> invalid literal for int() with base 10: 'auditor_status_ALL.json'#012

Mark, I saw the patch you attached to bug 1583305, but it only deals
with the counting of failures. It does nothing to ignore the auditor's
files, it seems. Would you be willing to cook up something like Tim's fix
in commit ad16e2c77bb61bdf51a7d3b2c258daf69bfc74da?

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [tc] supporting Go

2016-05-13 Thread Pete Zaitcev
On Fri, 13 May 2016 10:14:02 +0200
Dmitry Tantsur  wrote:

> [...] If familiarity for Python 
> developers is an argument here, mastering Cython or making OpenStack run 
> on PyPy must be much easier for a random Python developer out there to 
> seriously bump the performance.

Unfortunately, practice showed that PyPy is not the answer. It changes
nothing about the poor and coarse thread scheduling. It focuses on an
entirely different aspect of performance, one which, while unpleasant,
was not insurmountable in Python. Checksumming sure is faster, but then
remember that Swift offloads Erasure Coding to C already.

We dragged this Python cart two years too far already. Don't for a second
imagine that Hummingbird is some kind of project prompted by Go being new
and shiny.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread Pete Zaitcev
On Mon, 9 May 2016 14:17:40 -0500
Edward Leafe  wrote:

> Whenever I hear claims that Python is “too slow for X”, I wonder
> what’s so special about X that makes it so much more demanding than,
> say, serving up YouTube.

In the case of Swift, the biggest issue was the scheduler. As RAX people
found, although we (the OpenStack Swift community of implementors) were able
to obtain an acceptable baseline performance, a bad drive dragged down
the whole node. Before Hummingbird, dfg (and redbo) screwed around with
fully separate processes that provided isolation, but that did not
scale well. So, there was an endless parade of solutions on the basis
of threads. Some patches went in, some did not. At some point things
were so bad that dfg posted a patch which maintained a scoring board
in an SQLite file. They were willing to add a bunch of I/O to every
request just to avoid the worst case that Python forced upon them.
The community (that is, basically John, Sam, and me) put the brakes on that.
Only at that point did redbo create Hummingbird, which solved the
issue for them.

Once Hummingbird went into production, they found that it was easy to
polish and that it could be much faster. Some of the benchmarks were
beating Python by 80 times. CPU consumption went way down, too.
But all that was secondary in the adoption of Go. If not for a significant
scalability crisis in the field, Swift in Go would not have happened.

Scott Simpson gave a preso at the Vancouver Summit that had some details and
benchmarks. Google is no help finding it online, unfortunately; it only
finds the panel discussion. Maybe someone has it saved.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-09 Thread Pete Zaitcev
On Mon, 9 May 2016 09:06:02 -0400
Rayson Ho  wrote:

> Since the Go toolchain is pretty self-contained, most people just follow
> the official instructions to get it installed... by a one-step:
> 
> # tar -C /usr/local -xzf go$VERSION.$OS-$ARCH.tar.gz

I'm pretty certain humanity has moved on from this sort of thing.
Nowadays "most people" use the packaged language runtimes that come with
the Linux they're running.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] Swift api compat. Was: supporting Go

2016-05-05 Thread Pete Zaitcev
On Wed, 4 May 2016 21:52:49 +
"Fox, Kevin M"  wrote:

> Swift is in a strange place where the api is implemented in a way to
> favor one particular vendor backend implementation.

Sorry, but I disagree with the above assessment. There is no one
particular vendor like that, because the only vendor of Swift source
is OpenStack, and vendors of pre-packaged Swift are legion, all equal:
Red Hat, HPE (Helion), SwiftStack, Mirantis, and more.

> I'd love to be able to plugin Swift into our sites, but because we
> can only have one, the various tradoffs have lead us to deploy RadosGW
> most of the time.

The fact that you succeeded in running OpenStack with RadosGW proves that
there is no issue here that impedes the development or use of OpenStack.
We at Red Hat will be happy to support an installation of OpenStack that
uses Ceph underpinning it as an integrated storage solution, or an
installation that uses the OpenStack-released reference implementation
of Swift, which we integrate too. We're flexible like that, according
to the needs of each customer.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-04 Thread Pete Zaitcev
On Tue, 3 May 2016 22:37:30 +
"Fox, Kevin M"  wrote:

> RadosGW has been excluded from joining the OpenStack community in part
> due to its use of c++.

Sounds like sheer lunacy. Nothing like that ever happened, AFAIK.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-04 Thread Pete Zaitcev
On Tue, 3 May 2016 22:11:06 +
"Fox, Kevin M"  wrote:

> If we let go in, and there are no pluggable middleware, where does
> RadosGW and other Swift api compatible implementations then stand?

They remain where they are now.

> Should we bless c++ too? As I understand it, there are a lot of clouds
> deployed with the RadosGW but Refstack rejects them.

RadosGW is not trying to become a part of OpenStack, while Hummingbird is.
This is why we're discussing Go and not C++ in this thread.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread Pete Zaitcev
On Tue, 3 May 2016 12:16:24 -0400
Rayson Ho  wrote:

> I like Go! However, Go does not offer binary compatibility between point
> releases. For those who install from source it may not be a big issue, but
> for commercial distributions that pre-package & pre-compile everything,
> then the compiled Go libs won't be compatible with old/new releases of the
> Go compiler that the user may want to install on their systems.

IMHO, it's not yet a problem worth worrying about. C++ has demonstrated
poor binary compatibility across releases, even 25 years after its creation,
and it's not a big concern there. Annoying, yes, but not a deal-breaker.
In the case of Fedora, we'll ship with a pinned Golang version in each release.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Swift accounts and container replication

2016-04-07 Thread Pete Zaitcev
On Mon, 04 Apr 2016 18:13:17 +0100
Carlos Rodrigues  wrote:

> I have two regions nodes of swift, geographically dispersed, and i have
> storage policies for both regions. 
> 
> How can i do to replicate the accounts and containers between two
> regions?

Policies do not apply to accounts and containers (although a container
signifies what policy applies to the objects in it, the container itself
is not subject to the policy). So all of your accounts and containers
are going to be replicated within the distributed cluster.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] swift ringbuilder and disk size/capacity relationship

2016-04-07 Thread Pete Zaitcev
On Wed, 16 Mar 2016 13:23:31 +1300
Mark Kirkwood  wrote:

> So integrating swift-recon into regular monitoring/alerting 
> (collectd/nagios or whatever) is one approach (mind you most folk 
> already monitor disk usage data... and there is nothing overly special 
> about ensuring you don't run out of space)!

So the overall conclusion is that the operator must monitor the cluster's
state and not let it run out of space. If you do in fact run out, second-order
trouble starts happening; in particular, pending processing will
not run right.
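
A periodic check with the stock recon tool (this assumes recon is enabled
on the storage nodes) can feed the usual alerting:

  # report per-device disk usage, including lowest and highest utilization
  swift-recon -d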

In case one or a few nodes run out of space due to a bug or some
unrelated problem, Swift may maintain the desired durability by using
so-called "handoff" devices. If you restore the primaries, replication
will relocate the affected partitions from the handoffs. That keeps the
cluster functional while the recovery is being implemented.

But overall there's no magic. The general idea is, you make your customers
pay, and if the business is profitable, they pay you enough to buy new
storage just fast enough to keep ahead of them filling it. For operators
of private clouds, we have quotas.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] swift missing X-Timestamp header commit review

2016-01-20 Thread Pete Zaitcev
On Wed, 20 Jan 2016 13:46:13 +0200 (EET)
Mustafa ÇELİK (BİLGEM-BTE)  wrote:

> commit-1: This one is my patch for the bug. 
> https://review.openstack.org/#/c/268163/ 
> I need someone to review my commit-1. 
> Can somebody help me with code-review? 

Sure... though I am somewhat unenthusiastic about this idea. Suppose we
started returning the back-end timestamp values. What is the value of doing that?

Robert Francis proposed using the X-Timestamp for tiering middleware.
How?

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] Is Swift a good choice of database for the TripleO API?

2016-01-19 Thread Pete Zaitcev
On Tue, 22 Dec 2015 08:56:08 -0800
Clint Byrum  wrote:

> You could create a unique swift container, upload things to that, and
> then update a pointer in a well-known location to point at that container
> for the new plan only after you've verified it is available. This is a
> primitive form of Read-copy-update.

It's worse than you think. Container updates often lag in Swift.
I suggest a pseudo-container or a manifest object instead. However,
renames in Swift are copies. Ergo, an external database has to point
to the current tip or the latest-generation manifest. Which brings us
to...

> So if you are only using the DB for consistency, you might want to just
> use tooz+swift.

Yep. It still has to store the templates themselves somewhere, though.

-- Pete

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Getting rid of suds, which is unmaintained, and which we want out of Debian

2015-06-16 Thread Pete Zaitcev
On Thu, 11 Jun 2015 11:08:55 +0300
Duncan Thomas duncan.tho...@gmail.com wrote:

> There's only one cinder driver using it (nimble storage), and it seems to
> be using only very basic features. There are half a dozen suds forks on
> PyPI, or there's pysimplesoap that the debian maintainer recommends. None
> of the above are currently packaged for Ubuntu that I can see, so can
> anybody in-the-know make a reasoned recommendation as to what to move to?

In the instances I had to deal with (talking to VMware), it was easier and
better to roll your own with python-xml and libhttp.

-- P

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Using swift with a single replica on software-defined storage

2015-06-03 Thread Pete Zaitcev
On Wed, 27 May 2015 10:28:53 +0200
Vincenzo Pii vinc@gmail.com wrote:

> My question is the following: when performance doesn't matter and
> reliability is taken care of below swift (so swift will always manage to
> read/write an object as devices will always be consistent and available),
> are there other aspects that should be considered if swift runs with a
> replica count of 1?

You might need to keep the updaters running. Also, keep the proxy helpers
like the account reaper and the object expirer, depending on your configuration.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Swift] Using unevenly sized disks in a cluster

2015-02-27 Thread Pete Zaitcev
On Thu, 26 Feb 2015 12:40:32 -0800
Shrinand Javadekar shrin...@maginatics.com wrote:

> Let's say I start with two disks of 10GB each [...]
> At a later point, I increase the capacity of the cluster by adding one
> 500GB disk. [...]
>
> How does Swift place data such that the "as unique as possible"
> policy is maintained? Beyond a point, won't it have to place both
> replicas on the same device (the 500GB disk)?

Yes, it will.

The thing to understand here is that the dispersion (and thus your
data safety) and the utilization are in direct conflict in the scenario
that you outlined. This problem is not unique to Swift, BTW. Any
replicated storage faces it.

The way out is to realize that you created the problem in the first place.
You could've bought two 200 GB disks instead.

We have knobs such as the weight, and now the overload, that help hapless
administrators who are cornered by the circumstances and the decisions
of their predecessors. But fundamentally, IMHO, you need to realize that
the solution can only come from capacity expansion that takes into
account the characteristics of a replicated system.
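
Both knobs are applied with the ring builder; a sketch (the builder file,
device search value, and numbers are placeholders):

  # give the small disks a proportionally small share of partitions
  swift-ring-builder object.builder set_weight z1-192.168.0.1:6000/sdb 10
  # allow up to 10% extra partitions per device to preserve dispersion
  swift-ring-builder object.builder set_overload 0.1
  swift-ring-builder object.builder rebalance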

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [swift] Allow hostname for nodes in Ring

2014-10-15 Thread Pete Zaitcev
On Fri, 10 Oct 2014 04:56:55 +
Osanai, Hisashi osanai.hisa...@jp.fujitsu.com wrote:

> Today the following patch was abandoned and I contacted with the author,
> so I would like to take it over if nobody else is chafing to take it.
> Is it OK?
>
> https://review.openstack.org/#/c/80421/
>
> If it is OK, I will proceed it with following procedure.
> (1) Open new bug report (there is no bug report for this)
>     I'm not sure that I should write a BP instead of a bug report.
> (2) Make a patch based on the current patch on gerrit

If the author agrees or is ambivalent about it, you are free to re-use
the old Change-Id.

And you're always free to post your patch anew.

I don't know if the bug report is all that necessary or useful.
The scope of the problem is well defined without one, IMHO.

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [Openstack-operators] [swift] How to encrypt account/container/object data that travels through storage nodes?

2014-10-15 Thread Pete Zaitcev
On Wed, 17 Sep 2014 15:16:22 -0300
Gui Maluf guimal...@gmail.com wrote:

> Replicas are copied between storage nodes and swift presume all storage
> nodes are running in a secure network. Taking any scenario of a Globally
> Distributed OpenStack Swift Cluster
> https://swiftstack.com/blog/2012/09/16/globally-distributed-openstack-swift-cluster/,
> how could nodes replicates through Regions, or even between zones, using
> VPN, SSL or any secure/encrypted way?

I'm afraid there's no practical way other than to create VPNs between
the datacenters and tunnel your back-end Swift traffic. Although it
could be possible to use SSL (with minimal changes), there's no
authentication or authorization in Swift's back-end services.
If you let attackers onto your replication network, it's game over.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Implement swift service-list in python-swiftclient

2014-09-16 Thread Pete Zaitcev
On Tue, 9 Sep 2014 15:36:03 +0530
Ashish Chandra mail.ashishchan...@gmail.com wrote:

> 1) Do we have plans to include swift service-list in swiftclient ?
> If yes then I would be filing a blueprint in python-swiftclient to
> implement the same coz I require it to populate under the Admin - System
> Info - Object Storage Services.

File a patch in Gerrit and let's have a look. Sounds like a reasonable
idea to me on the face of it. You should be able to get away with
formatting the contents of the JSON fetched from /info, hopefully
without changes to the server-side code.
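
The /info endpoint serves unauthenticated cluster metadata by default, so
the client side only has to format it; roughly (host, port, and the values
shown are placeholders, output truncated):

  $ curl -s http://proxy.example.com:8080/info
  {"swift": {"version": "2.2.0", "max_object_name_length": 1024, ...}, ...}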

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Swift] IP address of Swift nodes : need help

2014-08-30 Thread Pete Zaitcev
On Tue, 19 Aug 2014 00:12:43 +0530
Jyoti Ranjan jran...@gmail.com wrote:

> In other words, is it necessary to use static IP for Swift nodes?

It is, but you can assign them with DHCP. In ISC DHCP the syntax is

  host r21s05 {
fixed-address r21s05;
hardware ethernet 68:9c:70:95:8e:51;
  }

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] [swift] Use FQDN in Ring files instead of ip

2014-08-05 Thread Pete Zaitcev
On Wed, 23 Jul 2014 08:54:30 -0700
John Dickinson m...@not.mn wrote:

> So basically, it's a question of do we add the feature, knowing that
> most people who use it will in fact be making their lives more difficult,
> or do we keep it out, knowing that we won't be serving those who actually
> require the feature.

Speaking of the latter, do you know of at least one operator who is
stuck with a Facebook-style v6-only datacenter?

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] About Swift as an object storage gateway, like Cinder in block storage

2014-07-11 Thread Pete Zaitcev
On Mon, 7 Jul 2014 11:05:40 +0800
童燕群 tyan...@qq.com wrote:

> The workflow of this middle-ware working with swift may be like this pic:

Since you're plugging this into the a/c/o nodes, there's no difference
between this and Pluggable Back-ends. Note that PBE is already implemented
in the case of the object server; see class DiskFile. The account/container
remainder is here:
 https://review.openstack.org/47713

Do you have a request from your operations people to implement this, or is
it a nice-to-have exercise for you? If the former, what specific vendor
store are you targeting?

-- Pete

P.S. Note that Cinder includes a large management component, which Swift
lacks by itself. In Cinder you can add new back-ends through Cinder's API
and CLI. In Swift, you have to run swift-ring-builder and edit configs.
Your blueprint does not address this gap.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [Swift] Running out of ports or fds?

2014-07-11 Thread Pete Zaitcev
On Tue, 8 Jul 2014 16:26:10 -0700
Shrinand Javadekar shrin...@maginatics.com wrote:

> I see that these servers do not use a persistent http connection
> between them. So every blob get/put/delete request will create a new
> connection, use it and tear it down. In a highly concurrent
> environment with thousands of such operations happening per second,
> there could be two problems:

It's a well-known problem in Swift. Operators with proxies driving
sufficient traffic for it to manifest set sysctl net.ipv4.tcp_tw_reuse.

There were attempts to reuse connections, but they floundered upon
the complexities of actually implementing a connection cache.
Keep in mind that you still have to allow simultaneous connections
to the same node for concurrency. It snowballs quickly.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] Swift: reason for using xfs on devices

2014-07-01 Thread Pete Zaitcev
On Wed, 2 Jul 2014 00:16:42 +
Osanai, Hisashi osanai.hisa...@jp.fujitsu.com wrote:

> So I think if performance of swift is more important rather than
> scalability of it, it is a good idea to use ext4.

The real problem is what happens when your drives corrupt the data.
Both ext4 and XFS demonstrated good resilience, but XFS leaves empty
files in directories where corrupt files were, while ext4's fsck moves
them to lost+found without a trace. When that happens, Swift's auditors
cannot know that something was amiss and the replication is not
triggered (because hash lists are only updated by auditors).

Mr. You Yamagata worked on a patch to address this problem, but did
not complete it. See here:
 https://review.openstack.org/11452

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Swift generating thousands of rsyncs per second on a small cluster

2014-07-01 Thread Pete Zaitcev
On Tue, 1 Jul 2014 10:50:15 +0100
Diogo Vieira d...@eurotux.com wrote:

> Can you tell me if this is normal behaviour? If so, how will
> this scale when I add more objects? Will it keep getting more
> and more CPU usage?

Dunno if it's normal or not, but clusters installed with default
parameters do that. My nodes in a similar cluster make about 200
REPLICATE ops every second. The noise is annoying, but it only
adds up to 80 KB/s.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Swift and Keystone behind NAT Firewall

2014-06-12 Thread Pete Zaitcev
On Thu, 12 Jun 2014 09:52:59 +0100
Diogo Vieira d...@eurotux.com wrote:

> Ok, I guess that might work, but I have one problem with that approach.
> For a service I'm developing I have to know the public URL for an object
> in the store. For that I use Keystone to find the endpoint of Swift and
> I get the internal ip. Is there a way for me to set a public endpoint
> or get the correct ip (the one accessible publicly) of the service?

I don't think any Swift client supports the public/private split,
even if you have it in the Keystone catalog. You have to set the external
IP (or, preferably, a hostname) in the endpoint descriptor
in Keystone.

The practical way to make the split is to use a hostname, and then
have internal DNS point it to the internal IP and external DNS to the public IP.

The meaning of the final question is somewhat foggy, because you
have complete control over what endpoints Keystone lists.
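
For illustration, the era's keystone CLI lets you set all three URLs on
one endpoint record (the service ID and URLs are placeholders):

  keystone endpoint-create --region RegionOne \
    --service-id <swift-service-id> \
    --publicurl 'https://swift.example.com/v1/AUTH_%(tenant_id)s' \
    --internalurl 'http://10.0.0.10:8080/v1/AUTH_%(tenant_id)s' \
    --adminurl 'http://10.0.0.10:8080/v1'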

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Swift and Keystone behind NAT Firewall

2014-06-11 Thread Pete Zaitcev
On Wed, 11 Jun 2014 18:52:43 +0100
Diogo Vieira d...@eurotux.com wrote:

> I have one Proxy Node in the same machine as the Keystone service is as
> well as a Storage Node. On the other machine I have only a Storage Node.
>
> What should be the approach used to make this publicly available? What
> ports should be opened in the firewall and what changes do I need to make
> in any of the services?

You'll be fine with port 5000 for Keystone and whatever your Swift
front end uses (80, 443, 8080, or 8443). Whatever port appears in the
swift client's -A (auth URL) option is your Keystone port; use that.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] swift don't start with ceilometer

2014-06-05 Thread Pete Zaitcev
On Wed, 04 Jun 2014 17:28:54 +0800
Yugang LIU 88.a...@gmail.com wrote:

> pkg_resources.VersionConflict: (happybase 0.7
> (/usr/lib/python2.7/dist-packages),
> Requirement.parse('happybase>=0.5,!=0.7'))
>
> it need happybase 0.5, but my system version is 0.7?

You either need to downgrade happybase or find a ceilometer that
works with 0.7. I don't see a way around it, sorry.

-- P

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [OpenStack-Infra] [openstack-dev] Moving swift3 to stackforge (was: Re: Intermittent failures cloning noVNC from github.com/kanaka)

2014-03-15 Thread Pete Zaitcev
On Fri, 14 Mar 2014 09:03:22 +0100
Chmouel Boudjnah chmo...@enovance.com wrote:

> fujita (the maintainer of swift3, in CC of this email) has commented that
> he's been working on it.

I think we shouldn't have kicked it out. Maybe just re-fold it
back into Swift?

-- Pete

___
OpenStack-Infra mailing list
OpenStack-Infra@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-infra


Re: [openstack-dev] Moving swift3 to stackforge (was: Re: [OpenStack-Infra] Intermittent failures cloning noVNC from github.com/kanaka)

2014-03-14 Thread Pete Zaitcev
On Fri, 14 Mar 2014 09:03:22 +0100
Chmouel Boudjnah chmo...@enovance.com wrote:

> fujita (the maintainer of swift3, in CC of this email) has commented that
> he's been working on it.

I think we shouldn't have kicked it out. Maybe just re-fold it
back into Swift?

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Guru Meditation output seems useless

2014-03-03 Thread Pete Zaitcev
Dear Solly:

I cobbled together a working prototype of Guru Meditation for Swift
just to see how it worked. I did not use Oslo classes, but used the
code from Dan's prototype and from your Nova review. Here's the
Gerrit link:
 https://review.openstack.org/70513

Looking at the collected tracebacks, they seem singularly useless.
No matter how loaded the process is, they always show
something like:

  File "/usr/lib/python2.6/site-packages/eventlet/hubs/hub.py", line 226, in run
    self.wait(sleep_time)
  File "/usr/lib/python2.6/site-packages/eventlet/hubs/poll.py", line 84, in wait
    presult = self.do_poll(seconds)
  File "/usr/lib/python2.6/site-packages/eventlet/hubs/poll.py", line 73, in do_poll
    return self.poll.poll(int(seconds * 1000.0))
  File "/usr/lib/python2.6/site-packages/swift/common/daemon.py", line 103, in <lambda>
    *args))
  File "/usr/lib/python2.6/site-packages/swift/common/guru_meditation.py", line 79, in signal_handler
    dump_threads(gthr_model, report_fp)
  File "/usr/lib/python2.6/site-packages/swift/common/guru_meditation.py", line 53, in dump_threads
    thread.dump(report_fp)
  File "/usr/lib/python2.6/site-packages/swift/common/guru_meditation.py", line 29, in dump
    traceback.print_stack(self.stack, file=report_fp)

The same is true for both native and green threads: they all seem to be
anchored in the lambda that passes parameters to the signal handler,
so they show nothing of value.

So, my question is: did you look at the traces in Nova, and if so,
did you catch anything? If yes, where is the final code that works?

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] [SWIFT] What happens when the shard count on available disk is less than the total shard count?

2014-02-28 Thread Pete Zaitcev
On Fri, 28 Feb 2014 09:10:06 -0800
Stephen Wood smwo...@gmail.com wrote:

> However I realize that the shard count is completely different now.

What is a shard count? Do you have a document that uses such
terminology?

> I originally used a partition value of 15 but this now seems much too
> high for 4 servers with only one disk each.

So what? As long as there are no ill effects, it's all good.
Meaning, if you have enough RAM to keep your ring once it's loaded,
then there is no problem, is there? It's not like your A+C servers magically
shrunk when you swapped the winchesters for SSDs, right?

> Can I dynamically
> adjust the partition values after the swift ring has been created?

No, you can't.

> Or should I just take the disks on my 4 SSD hosts and put their weight
> as 2^15 / 4 so the overall shard count stays the same?

I am failing to make sense of the above sentence. Weight only matters
to the builder for scattering partitions across devices relative to each
other. So, if one replaces the rotating media with SSDs, but keeps the
cluster running, the number of partitions stays the same, right? At that
point the weights can be redefined at, say, 100, or any other number,
without any effect on the total or per-device number of partitions.

I think we need to circle back to the definition of the mysterious
shard count before we can get to the bottom of this.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] Capturing performance penalities Physical versus VM [Swift]

2014-02-24 Thread Pete Zaitcev
On Fri, 21 Feb 2014 13:47:31 -0800
Adam Lawson alaw...@aqorn.com wrote:

> So, are there benchmarks already published somewhere that demonstrate
> how/why a virtual infrastructure is such a bad idea with Swift? We all know
> Swift is designed to be a storage provider - not a storage consumer and
> that running Swift with shared storage with RAID is utterly foolish. But my
> recommendation is being overridden by folks who feel the bad performance
> isn't really all that bad and because the VM's are going to be temporary.

I don't consider this question as needing this sort of exuberant treatment.
Deployments on VMs do happen, but they aren't very economical or safe.
RAID does not actually hurt all that much, depending on the level.
Its downside is that 1) it buys you nothing and you're just wasting
equipment, and 2) it offers admins a lot of rope to hang themselves with.

Since Swift audits its storage constantly, VMs hosting Swift
will inevitably defeat the benefits of virtualization by chewing CPU
and I/O, and you can't oversubscribe. So the only benefit remaining
is flexibility, such as migrations, which Swift does not need.

Another problem with all this is that Swift clusters aren't very
easy to migrate wholesale, unless the user application can copy all its
data from cluster to cluster. Therefore, the cluster of VMs must
be established with realistic parameters for the number of partitions
and for replication (that means 3), even if it seems excessive.

But if the VMs stick around long term, I'd be afraid of the loss of
availability. Sooner or later you'll have two VMs share a controller
or SAN box (even if the volumes are different). It's a huge PITA to
keep track of, and I guarantee that admins will not. Then you're one
power failure away from losing quorum.

But you need to present all this without passing judgement about
VMs being "utterly foolish" or such. Let's say practice suggests
that they are suboptimal and offer dangerous traps in the data area.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [openstack-dev] Glance v1 and v2

2014-02-18 Thread Pete Zaitcev
On Tue, 18 Feb 2014 10:57:03 +0100
Joe Hakim Rahme joe.hakim.ra...@enovance.com wrote:

> Again, I have just spent a couple of days playing with it on a devstack.
> I'm by no means a reference on the subject of the API v2. I hope this
> will help you get a better idea of where it stands today.

Thanks a lot, it clears up some misconceptions on my part.
I noticed that the glance CLI client was using v1; I should've looked
at the source for --os-image-api, but I was sure we had just postponed it.

> [1]:
> http://docs.openstack.org/api/openstack-image-service/2.0/content/image-sharing.html

Thanks for the links too.

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] Glance v1 and v2

2014-02-14 Thread Pete Zaitcev
Hello:

does anyone happen to know, or have a detailed write-up of, the
differences between so-called Glance v1 and Glance v2?

In particular, do we still need the Glance Registry in Havana, or
do we not? The best answer so far was to run the registry anyway,
just in case, which does not feel entirely satisfactory.
Surely someone should know exactly what is going on in the API
and have a good idea what the implications are for the users
of Glance (API, CLI, and Nova (I include Horizon in API)).

Thanks,
-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Glance] delayed delete and user credentials

2014-02-06 Thread Pete Zaitcev
Hi, guys:

I looked briefly at a bug/fix, which looks exceedingly strange to me:
 https://review.openstack.org/59689

As far as I can tell, the problem (lp:1238604) is that a pending delete
fails because, by the time the delete actually occurs, the Glance API does
not have the proper permissions to talk to the Glance Registry.

So far so good, but the solution that we accepted is to forward
the user credentials to the Registry... but only if configured to do so.
Does it make any sense to anyone? Why make configurable something that must
always work? How can a sysadmin select the correct value?

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [Openstack] Building Swift cluster wiuth Ubuntu VM's

2014-02-04 Thread Pete Zaitcev
On Sun, 2 Feb 2014 20:08:02 -0800
Adam Lawson alaw...@aqorn.com wrote:

> (http://docs.openstack.org/developer/swift/howto_installmultinode.html)
>
> I'm running into issues where the instructions start to get really
> hazy/unclear after step #4 in Configure Proxy Server.

Woops, there was a bug in numbering of steps :-)
https://review.openstack.org/71051

> It says in step #5 to
> run a series of commands for each device listed in /srv/node but I
> haven't seen a step prior to step #5 that populates that directory with
> devices.

Well, it's not important whether the directory is already populated.
The docs actually mean the real devices you plan to use -- on each node,
not just on the proxy.

> Thoughts?

You should already know which nodes have which devices. Note that the device
namespace is flat. So, write something like this on a piece of paper:

NODE  device  zone  Remark

vm1                 vm1 is proxy, has no devices
vm2   xdb     1
vm2   xdc     2
vm3   xdb     3
vm3   xdc           unused, just because
vm4   xdb     4
vm4   xdc     5

If you want to stuff them under /var/Monsoon, that's up to you.
Note, however, that utilities that have to guess where you hide
your devices may not work completely in that case (see swift-get-nodes).
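
Once the table is settled, each row becomes a ring-builder entry; a rough
sketch (the IPs, port, part power, and weights below are placeholders):

  swift-ring-builder object.builder create 18 3 1
  swift-ring-builder object.builder add z1-192.168.0.2:6000/xdb 100
  swift-ring-builder object.builder add z2-192.168.0.2:6000/xdc 100
  swift-ring-builder object.builder add z3-192.168.0.3:6000/xdb 100
  swift-ring-builder object.builder add z4-192.168.0.4:6000/xdb 100
  swift-ring-builder object.builder add z5-192.168.0.4:6000/xdc 100
  swift-ring-builder object.builder rebalance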

You could get better advice by asking on openstack-operators list.

-- P

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [SWIFT] Memcached timeouts?

2014-01-28 Thread Pete Zaitcev
On Fri, 24 Jan 2014 12:51:45 -0800
Stephen Wood smwo...@gmail.com wrote:

> My memcached settings are basically stock. Any idea what could be causing
> these errors?

I don't have a good idea, but I would start by backing out these
commits:

https://github.com/openstack/swift/commit/0b370269111957fec7521d284fcbd742ff8b8c13
https://github.com/openstack/swift/commit/6607beab0dc8043251b490471761fa2dd85f2816
https://github.com/openstack/swift/commit/ae8470131ead095e3bf1c290bac866a5e6e29e79

That would revert the memcache client to the state it was in at the
last release (with known bugs). The three may depend on each
other, so it should be easier to just chuck the whole block for the
first try.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Swift] Proxy server bottleneck

2014-01-13 Thread Pete Zaitcev
On Fri, 10 Jan 2014 15:25:02 -0800
Shrinand Javadekar shrin...@maginatics.com wrote:

> I see that the proxy-server already has a workers config option. However,
> looks like that is the # of threads in one proxy-server process.

Not so. Workers are separate Linux processes; look at os.fork() in
run_server(). Perhaps you're looking for the 'max_clients' option,
which restricts the green pool passed to eventlet in each of the workers.
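
A proxy-server.conf fragment showing both knobs (the values here are
illustrative, not recommendations):

  [DEFAULT]
  # forked worker processes, one accept loop each
  workers = 8
  # greenlets per worker; caps the requests serviced concurrently
  max_clients = 1024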

> It does not send the request and go back to accept more requests.

Each request gets a green thread inside eventlet, by way
of monkey-patching of some syscalls.

Anyhow, please be more specific about the numbers you see on your
proxies, e.g. how many IOPS, GHz, workers, and so on. Otherwise
it's pointless to speculate about your bottleneck case and its causes.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [SWIFT] Unable to delete containers?

2013-12-05 Thread Pete Zaitcev
On Thu, 5 Dec 2013 10:06:14 -0800
Stephen Wood smwo...@gmail.com wrote:

> $ swift list
> ssbench_46
>
> $ swift delete ssbench_46
> Container 'ssbench_46' not found
>
> $ swift list ssbench_46
> Container 'ssbench_46' not found
>
> How does one go about actually removing these phantom containers?

Just re-post them with "swift post ssbench_46", then delete,
but make sure the cluster is in good health first.
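
In other words (a sketch, reusing the container name above):

  $ swift post ssbench_46      # re-create the container record
  $ swift delete ssbench_46    # now the delete has something to find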

I have had this bug assigned for a while now, but all my attempts to
reproduce it ended in the inconsistency healing itself once the updater
had a chance to run properly. That was a little surprising, because,
as I understand it, our auditors and replicators do not repair
this inconsistency, and it should be easy to bust the updater.
Perhaps you found a good way to do it.
(ref. https://bugzilla.redhat.com/show_bug.cgi?id=1013594)

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Swift] Swift on RHEL

2013-12-05 Thread Pete Zaitcev
On Thu, 5 Dec 2013 21:19:49 +
Kotwani, Mukul mukul.g.kotw...@hp.com wrote:

> Has anyone used and/or deployed RHEL (5.8) with Swift?
> I was also looking for a Supported platforms for Swift, and
> I could not find it.

I don't think an RDO for RHEL 5.x ever existed. The first packages
were built a year after RHEL 6 GA shipped. The oldest build in Koji
is openstack-swift-1.0.2-5.fc15 (a community build by Silas), and the
oldest RHOS build is openstack-swift-1.4.8-2.el6, from 2012.

Frankly, I'm surprised you managed to get it running at all. (Which
Swift release is that, BTW? We even require PBR nowadays.)

I dimly remember bothering with XFS for RHEL 5, but it was so long
ago that I cannot even remember if I got that cluster to do anything
useful.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Swift] Changing local logging destinations?

2013-12-03 Thread Pete Zaitcev
On Wed, 27 Nov 2013 15:33:52 -0800
John Dickinson m...@not.mn wrote:

> > Can someone give me advice on how to redirect logs from /var/log/syslog to
> > /var/log/swift/{object|account|container}.logs?
>
> While a Swift-All-In-One certainly isn't something you should run in
> production, the SAIO document does have some guidance on how to configure
> rsyslogd to split out log messages.
>
> http://docs.openstack.org/developer/swift/development_saio.html#optional-setting-up-rsyslog-for-individual-logging

One also needs to drop Swift from the main log. E.g. on Fedora,
where /var/log/messages is used like /var/log/syslog is on Debian,
it looks like this:

#*.info;mail.none;authpriv.none;cron.none                /var/log/messages
*.info;mail.none;authpriv.none;cron.none;local0.none     /var/log/messages
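
The companion rule then routes Swift's own traffic (Swift logs to the
local0 facility by default; the target path here is illustrative):

local0.*                                                 /var/log/swift/all.log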

-- P

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] [Swift] Container DB update after object PUT

2013-12-03 Thread Pete Zaitcev
On Thu, 28 Nov 2013 05:24:35 +
Shao, Minglong minglong.s...@netapp.com wrote:

> 1. Proxy server sends three requests to three object servers.
> 2. One object server writes the object successfully, sends an update to
>    the container DB and an “OK” reply to the proxy server. But the other
>    two fail, so they send “failed” to the proxy server.
> 3. The proxy server sends back “failed” to the client because it doesn’t
>    meet the quorum. But the container DB still gets the update to insert
>    an entry of this object.

In addition to Sam's answer: the eventual consistency will be restored
even if we're out of quorum; it only needs one object replica to survive.
Note that the client receives a 5xx in such a case, but the object survives
and its container record does too.

-- Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


Re: [Openstack] WORM support

2013-11-13 Thread Pete Zaitcev
On Wed, 13 Nov 2013 16:47:29 +
Perez, Daniel - ES daniel.pe...@exelisinc.com wrote:

> I am wondering if I can circumvent the use of tombstone files?

Not without writing some code, but you can write your own Pluggable
Back-end, like Gluster does. In that case you're free to implement
any delete mechanism you like. Note, however, that the current PBEs
do not support our normal replication. This is not a concern for
Gluster, since it is itself distributed, but it is something you might
need for WORM.

--Pete

___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack


[openstack-dev] Swift account auditor duplicated code

2013-09-05 Thread Pete Zaitcev
Hi, Guys:

Here's a weird duplicated call to account_audit() in
swift/account/auditor.py:

for path, device, partition in all_locs:
    self.account_audit(path)
    if time.time() - reported >= 3600:  # once an hour
        self.logger.info(_('Since %(time)s: Account audits: ' ...)
        self.account_audit(path)
        dump_recon_cache({'account_audits_since': reported, ...)
        reported = time.time()
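
If the duplication is unintentional, the loop presumably ought to look
like this (a sketch; the elided arguments are kept as in the original):

for path, device, partition in all_locs:
    self.account_audit(path)
    if time.time() - reported >= 3600:  # once an hour
        self.logger.info(_('Since %(time)s: Account audits: ' ...)
        dump_recon_cache({'account_audits_since': reported, ...)
        reported = time.time()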

This was apparently caused by Florian's ccb6334c going in on top of
Darrell's 3d3ed34f. Is this intentional, and if not, should we be
fixing it?

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] question on Application class concurrency; paste.app_factory mechanism

2013-07-25 Thread Pete Zaitcev
On Wed, 24 Jul 2013 02:31:45 +
Luse, Paul E paul.e.l...@intel.com wrote:

> I was thinking that each connection would get its own instance, thus it
> would be safe to store connection-transient information there, but I was
> surprised by my quick test.

Yeah, you have it tracked in __call__, typically hanging off the controller.

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] erasure codes, digging deeper

2013-07-24 Thread Pete Zaitcev
On Thu, 18 Jul 2013 12:31:02 -0500
Chuck Thier cth...@gmail.com wrote:

> I'm with Chmouel though. It seems to me that EC policy should be chosen by
> the provider and not the client. For public storage clouds, I don't think
> you can make the assumption that all users/clients will understand the
> storage/latency tradeoffs and benefits.

Wouldn't tiered pricing make them figure it out quickly?
Make EC cheaper in proportion to the storage used, and voila.

At first I also had a violent reaction to this kind of exposure
of the internals. After all, S3 went this far while remaining entirely
opaque. But we're not S3; that's the key.

-- Pete

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev