[openstack-dev] [Swift] Redhat Research

2018-09-25 Thread Clay Gerrard
In one of the Swift sessions at the Denver PTG Doug Hellman suggested there
are some programs in RedHat that work with university graduate students on
doing computer science research [1] which might be appropriate for certain
kinds of work on some parts of OpenStack.

Swift has solved a few interesting mathy sort problems over the years,
we've got a few things still left to tackle.  Coming up we've got container
sharding consensus, LOSF slab file compaction, unified consistency engine
RPC, and that troubling little golang fork/rewrite abandonware [2].

Probably others too.

I field questions about how Swift works 8-10 times a year from university
students apparently doing some sort of analysis on Swift related to their
course work... it never occurred to me that I might be able to suggest
something for them to think on which would be useful.

I don't really have the capacity or the know-how to pursue it further than
this; anyone have any ideas or experience along these lines?

-clayg

1. https://research.redhat.com/
2. https://github.com/troubling/hummingbird
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] GUI for Swift object storage

2018-09-18 Thread Clay Gerrard
I don't know about a good open source cross-platform GUI client, but the
SwiftStack one is slick and doesn't seem to be behind a paywall (yet?)

https://www.swiftstack.com/downloads

There's probably some proprietary integration that won't make sense - but
it should work with any Swift end-point.  Let me know how it goes!

-Clay

N.B. IANAL, so you should probably double check the license/terms if you're
planning on doing anything more sophisticated than personal use.

On Mon, Sep 17, 2018 at 9:31 PM M Ranga Swami Reddy 
wrote:

> All GUI tools are non open source... need to pay, like Cyberduck etc.
> Looking for open source GUI for Swift API access.
>
> On Tue, 18 Sep 2018, 06:41 John Dickinson,  wrote:
>
>> That's a great question.
>>
>> A quick google search shows a few like Swift Explorer, Cyberduck, and
>> Gladinet. But since Swift supports the S3 API (check with your cluster
>> operator to see if this is enabled, or examine the results of a GET /info
>> request), you can use any available S3 GUI client as well (as long as you
>> can configure the endpoints you connect to).
>>
>> --John
>>
>> On 17 Sep 2018, at 16:48, M Ranga Swami Reddy wrote:
>>
>> Hi - is there any GUI (open source) available for Swift objects storage?
>>
>> Thanks
>> Swa


Re: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above

2018-05-09 Thread Clay Gerrard
On Wed, May 9, 2018 at 12:35 PM, Jim Rollenhagen 
wrote:

>
> It works with both, see the link from earlier in the thread:
> https://github.com/openstack/ironic/blob/214b694f05d200ac1e2ce6db631546f2831c01f7/ironic/common/glance_service/v2/image_service.py#L152-L185
>
>
Ah!  Perfect!  Thanks for pointing that out (again)

-Clay


Re: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above

2018-05-09 Thread Clay Gerrard
On Wed, May 9, 2018 at 12:08 PM, Matthew Thode 
wrote:

>
> * Proper fix would be to make ceph support the account field
>

Is the 'rgw_swift_account_in_url' option not correct/sufficient?


> * Workaround would be to specify an old swiftclient to install (3.1.0,
> pre-ocata)
>

Doesn't seem great if a sysadmin wants to co-install the newer swiftclient
CLI


> * Workaround would be to for swiftclient to be forked and 'fixed'
>
>
Not clear to me what the "fix" would be here - just don't do validation?
I'll assume the "fork threat" here is for completeness/emphasis :D

Do you know if ironic works with "normal" swift tempurls or only the
radosgw implementation of the swift api?

-Clay


Re: [openstack-dev] Swift-backed volume backups are still breaking the gate

2018-01-25 Thread Clay Gerrard
On Thu, Jan 25, 2018 at 7:01 PM, Matt Riedemann  wrote:

> Is ThreadSafeSysLogHandler something that could live in oslo.log so we
> don't have to whack this mole everywhere at random times?


That might make sense, unless we can get eventlet's monkey patching of the
logging module to do something similar...

FWIW, Swift doesn't use oslo.log and has its own crufty logging issues:

https://bugs.launchpad.net/swift/+bug/1380815

-Clay


Re: [openstack-dev] Swift-backed volume backups are still breaking the gate

2018-01-25 Thread Clay Gerrard
Does it help that swift also had to fix this?

https://github.com/openstack/swift/blob/6d2503652b5f666275113cf9f3e185a2d9b3a121/swift/common/utils.py#L4415

The interesting/useful bit is where we replace our primary log handler's
createLock method to use one of these [Green|OS]-thread-safe PipeMutex lock
things...
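In rough outline, the trick looks like this - a heavily simplified sketch, not Swift's actual implementation (the real PipeMutex also tracks the owning thread and supports recursion):

```python
import logging
import logging.handlers
import os


class PipeMutex(object):
    """Simplified sketch of a pipe-based mutex: one byte sitting in the
    pipe means "unlocked"; reading it acquires the lock, writing it back
    releases.  Blocking on a pipe read is safe across both OS threads
    and (with eventlet's hub) green threads."""

    def __init__(self):
        self.rfd, self.wfd = os.pipe()
        os.write(self.wfd, b'-')  # start unlocked

    def acquire(self):
        os.read(self.rfd, 1)  # blocks until the unlock byte is available

    def release(self):
        os.write(self.wfd, b'-')


def thread_safe_syslog_handler(*args, **kwargs):
    # Swap out the lock that logging.Handler.createLock() installed
    # for the pipe-based mutex.
    handler = logging.handlers.SysLogHandler(*args, **kwargs)
    handler.lock = PipeMutex()
    return handler
```

The important part is that the handler's lock is no longer a (monkey-patchable) threading lock, so a green-thread context switch mid-emit can't deadlock or interleave log records.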

-Clay

On Thu, Jan 25, 2018 at 1:12 PM, Matt Riedemann  wrote:

> We thought things were fixed with [1] but it turns out that swiftclient
> logs requests and responses at DEBUG level, so we're still switching thread
> context during a backup write and failing the backup operation, causing
> copious amounts of pain in the gate and piling up the rechecks.
>
> I've got a workaround here [2] which will hopefully be good enough to
> stabilize things for awhile, but there is probably not much point in
> rechecking a lot of patches, at least ones that run through the integrated
> gate, until that is merged.
>
> [1] https://review.openstack.org/#/c/537437/
> [2] https://review.openstack.org/#/c/538027/
>
> --
>
> Thanks,
>
> Matt
>


Re: [openstack-dev] [tc] [all] TC Report 43

2017-10-24 Thread Clay Gerrard
On Tue, Oct 24, 2017 at 11:26 AM, Chris Dent  wrote:

>
>
> Since this is the second [week in a
> row](https://anticdent.org/tc-report-42.html) that Josh showed up with
> an idea, I wonder what next week will bring?
>
>
^ That's pretty cool.  Thanks for sending this as always Chris.

-Clay


Re: [openstack-dev] [Swift] SPDK uses Swift as a target system to support k-v store

2017-10-16 Thread Clay Gerrard
Sounds interesting!  I'd be *very* interested to see *any* work that's been
done to make use of SPDK functionality for faster or more efficient HTTP ->
rotational media storage.  Couple of thoughts...

1) I think the memcache or redis API would be a better target for SPDK
research than Swift's HTTP blobstore API - if you're just going for a pure
generic k/v interface to an abstract SSD/NVMe/XPoint storage backend [1]

2) if you wanna see how you might store Swift objects in a generic k/v
store you might consider earlier work to support k/v drives [2]

Good luck, I'm excited to see how you might try to apply SPDK to OpenStack
Swift!

Say Hi to Paul!

-Clay

1. The account/container layer of the Swift API in particular leverages a
lot of query-like functionality, and I'm not aware of any successful attempt
at API parity on a pure k/v store abstraction.
2. https://github.com/swiftstack/kinetic-swift

On Fri, Oct 13, 2017 at 9:18 PM, We We  wrote:

> Hi, all
>
> I am a newcomer to Swift. I have proposed a proposal for a k-v store in the
> SPDK community. The proposal has been submitted at
> https://trello.com/b/P5xBO7UR/things-to-do, please spare some time to
> visit it. In this proposal, we would like to use Swift as a target system
> to support k-v store. Could you please share with me if you have any ideas
> about it. I'd love to hear from your professional thoughts.
>
> Thx,
>
> Helloway
>
>


Re: [openstack-dev] [tc][election] Question for the candidates

2017-10-12 Thread Clay Gerrard
On Thu, Oct 12, 2017 at 3:20 PM, Clay Gerrard <clay.gerr...@gmail.com>
wrote:

> I meant to include a reference back to (what I believe) was the original work:
>

https://review.openstack.org/#/c/453262/


Re: [openstack-dev] [tc][election] Question for the candidates

2017-10-12 Thread Clay Gerrard
I meant to include a reference back to (what I believe) was the original work:


On Thu, Oct 12, 2017 at 3:17 PM, Clay Gerrard <clay.gerr...@gmail.com>
wrote:

>
>
> On Thu, Oct 12, 2017 at 2:48 PM, Emilien Macchi <emil...@redhat.com>
> wrote:
>
>>
>> The vision exercise was, in my opinion, one of the more exciting
>> things we have done in 2017.
>>
>
> Yeah for sure, that was a big goings-on.
>
>> It's not an easy thing to do because of our diverse opinions, but
>> together we managed to write something down, propose it to the
>> community in the open and make it better afterward (of course this
>> will never finish).
>>
>> Outcome related, I loved the fact we're thinking outside of the
>> OpenStack community and see how we can make OpenStack projects usable
>> in environments without all the ecosystem. I also like to see our
>> strong efforts to increase diversity in all sorts and our work to
>> improve community health.
>>
>> Beside the outcome, I loved to see all TC members able to work
>> together on this Vision in the open, I hope we can do more of that in
>> the future, even outside of the TC (in teams). (ex: doc team had a PTG
>> session about visioning as well).
>> I hope I answered the question,
>
>
> Yup.  Unifying and disseminating an acceptable future state of an
> organization is definitely _one_ important job of "leadership".
>
>
>> please let me know if that's not the
>> case if you want more details.
>> --
>> Emilien Macchi
>>
>>
> Thanks!
>
>


Re: [openstack-dev] [tc][election] Question for the candidates

2017-10-12 Thread Clay Gerrard
On Thu, Oct 12, 2017 at 2:48 PM, Emilien Macchi  wrote:

>
> The vision exercise was, in my opinion, one of the more exciting
> things we have done in 2017.
>

Yeah for sure, that was a big goings-on.

> It's not an easy thing to do because of our diverse opinions, but
> together we managed to write something down, propose it to the
> community in the open and make it better afterward (of course this
> will never finish).
>
> Outcome related, I loved the fact we're thinking outside of the
> OpenStack community and see how we can make OpenStack projects usable
> in environments without all the ecosystem. I also like to see our
> strong efforts to increase diversity in all sorts and our work to
> improve community health.
>
> Beside the outcome, I loved to see all TC members able to work
> together on this Vision in the open, I hope we can do more of that in
> the future, even outside of the TC (in teams). (ex: doc team had a PTG
> session about visioning as well).
> I hope I answered the question,


Yup.  Unifying and disseminating an acceptable future state of an
organization is definitely _one_ important job of "leadership".


> please let me know if that's not the
> case if you want more details.
> --
> Emilien Macchi
>
>
Thanks!


[openstack-dev] [tc][election] Question for the candidates

2017-10-12 Thread Clay Gerrard
I like a representative democracy.  It mostly means I get a say in which
other people I have to trust to think deeply about issues which affect me
and make decisions which I agree (more or less) are of benefit to the
social groups in which I participate.  When I vote IRL I like to consider
voting records.  Actions speak louder, blah blah.

To candidates:

Would you please self-select a change (or changes) from
https://github.com/openstack/governance/ in the past ~12 mo or so where
you thought the outcome or the discussion/process was particularly good, and
explain why you think so?

It'd be super helpful to me, thanks!

-Clay


Re: [openstack-dev] Can swift leverage os page buffer?

2017-10-11 Thread Clay Gerrard
On Wed, Oct 11, 2017 at 3:46 PM, Jialin Liu  wrote:

> Hi,
> I'm new to openstack swift, I'm a HPC user, by several days of exploration
> of swift, I have some naive questions:
> 1. Can swift, e.g., PUT, leverage OS' page buffer?;
>

Sure, but perhaps to a limited degree depending on what you're expecting?
We try pretty hard to fsync everything before we return success:

https://github.com/openstack/swift/blob/master/swift/obj/diskfile.py#L1517

^ and just below that is _finalize_put
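Conceptually, the durability pattern that code implements looks something like this stripped-down sketch - the helper name and paths are illustrative, not Swift's actual API, and the real diskfile code does much more (timestamps, metadata, an fsync thread pool):

```python
import os


def durable_put(tmp_path, data, target_path):
    """Sketch of the write path: write into a temp file, fsync it so the
    data is on stable storage, then rename into the final location -
    only after all that does the server return success to the client."""
    fd = os.open(tmp_path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        os.write(fd, data)
        os.fsync(fd)  # don't ack the client until the data is durable
    finally:
        os.close(fd)
    os.rename(tmp_path, target_path)
    # the rename itself isn't durable until the directory entry is synced
    dirfd = os.open(os.path.dirname(target_path) or '.', os.O_RDONLY)
    try:
        os.fsync(dirfd)
    finally:
        os.close(dirfd)
```

So the page cache is used on the way through, but Swift deliberately doesn't rely on it for durability.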


> Is there a Linux kernel module for swift?
>

No.


> What are the existing optimizations for caching the write in order to gain
> better bandwidth on spin disks?
>

... um ... that's a pretty broad question; Swift has been around for quite
a while, so it would require some research.  You can read through the diskfile
modules; most of the bits that touch disk at the object layer are in
there.  There's a number of tunables that affect how Swift will treat the
filesystem - and the low-level specifics of what they do are in the code.
Any results of rigorous classifications you perform and want to publish are
always interesting and welcome in the community.  There've been a few
interesting things published over the years - but there's been no
community effort to collect them into a single home that I'm aware of;
you'd have to track them down - I recall Seagate did some interesting
analysis a while back.  RedHat's performance group is always doing stuff,
Intel did some stuff.  Current community efforts are focused on further
refining erasure code storage - which has a positive impact on medium and
large object uploads, both by reducing the total number of backend bytes to
store (compared to replicated objects) and by fanning that object data
+ parity out to larger numbers of spindles on the backend.  On the other
end of the spectrum, OVH is leading an effort to further improve performance
for lots of small files.

2. Does swift support asynchronous PUT/Get?
>
>
I don't know what that means, so I'll say no.  I might hazard a guess it
has something to do with a PUT only storing some reduced redundancy unsafe
staged data and then doing something to improve durability after it already
promised the client their write was "safe" - which is not something Swift
does.


>
> Also please let me know if the dev list is good for me to ask this kind of
> questions.
>

The ML is nice in that people can give more detailed responses and the
archives tend to be a bit more searchable - the trade-off is a longer
latency on responses.  You can also jump on IRC - Swift folks hang out in
#openstack-swift on Freenode.

Best Regards,

Clay


Re: [openstack-dev] leverage page cache in openstack swift

2017-10-06 Thread Clay Gerrard
There's a couple of options in the object server that are related to how
object data is cached (or generally *not*)

https://github.com/openstack/swift/blob/master/swift/obj/server.py#L921

At scale on dense nodes it becomes challenging to keep all the filesystem
metadata in the page cache, so we've tried a few different tricks and
tunings over the years, optimizing towards using as much RAM as possible to
minimize the number of seeks "wasted" picking up filesystem directory and
extent information instead of object data.  BUT on nodes with more RAM and
fewer objects (or fewer *active* objects) it is definitely possible to tune
towards keeping more object data in the page cache.
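One of those tricks, in rough stdlib form - a sketch only, with illustrative names (the real logic lives in the diskfile module and is governed by options like the keep-cache settings linked above): after streaming a large object, advise the kernel that those pages won't be needed again, so the cache stays free for filesystem metadata.

```python
import os


def drop_buffer_cache(fd, offset, length):
    """Tell the kernel we won't re-read these pages, so the page cache
    can hold filesystem metadata instead of cold object data."""
    if hasattr(os, 'posix_fadvise'):  # Linux, Python >= 3.3
        os.posix_fadvise(fd, offset, length, os.POSIX_FADV_DONTNEED)


def copy_out(fd, chunk_size=65536, keep_cache=False):
    """Sketch of an object read loop that drops cache in windows behind
    the read position unless the object is deemed cache-worthy."""
    dropped_to = 0
    while True:
        chunk = os.read(fd, chunk_size)
        if not chunk:
            break
        if not keep_cache:
            drop_buffer_cache(fd, dropped_to, len(chunk))
            dropped_to += len(chunk)
        yield chunk
```

Flipping `keep_cache` for small/hot objects is the kind of tuning the linked options control.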

Good luck!

-Clay


On Fri, Oct 6, 2017 at 2:34 PM, Jialin Liu  wrote:

> Hi,
> Is there any existing work that leveraging operating system's page cache
> for swift?
> like many other parallel file systems, lustre, the IO is cached in buffer
> and call is returned immediately to user space.
>
> Best,
> Jialin
>


Re: [openstack-dev] [glance] multi threads with swift backend

2017-09-06 Thread Clay Gerrard
I'm pretty sure that would only be possible with a code change in glance to
move the consumption of the swiftclient abstraction up a layer from the
client/connection objects to swiftclient's service objects [1].  I'm not
sure if that'd be something that would make a lot of sense to the Image
Service team.

-Clay

1. https://docs.openstack.org/python-swiftclient/latest/service-api.html
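For a rough sense of what the service layer's `--object-threads`-style parallelism buys, here's a conceptual stdlib-only sketch - it does not use swiftclient, and `upload_one` is a stand-in for a per-object PUT:

```python
from concurrent.futures import ThreadPoolExecutor


def parallel_upload(upload_one, objects, num_threads=10):
    """Fan independent per-object uploads out across a worker pool,
    the way swiftclient's service layer does for `swift upload`.
    Results come back in input order."""
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        return list(pool.map(upload_one, objects))
```

Since each object PUT is independent, the speedup is close to linear until you saturate the proxy or the network - which is exactly why surfacing this in glance_store would need the service-layer abstraction rather than a single client connection.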

On Wed, Sep 6, 2017 at 9:02 AM, Arnaud MORIN  wrote:

> Hi all,
>
> Is there any chance that glance can use the multiprocessing from
> swiftclient library (equivalent of xxx-threads options from cli)?
> If yes, how to enable it?
> I did not find anything useful in the glance configuration options.
> And looking at glance_store code make me think that it's not possible...
> Am I wrong?
>
> Regards,
> Arnaud
>


Re: [openstack-dev] 503 Errors for PUT and POST calls....

2017-08-08 Thread Clay Gerrard
Probably the "devices" option in the object server is misconfigured?

On my lab and production servers I configure the object-server.conf with

[DEFAULT]
devices = /srv/node

And then I make sure my mounted devices appear at:

/srv/node/d1
/srv/node/d2
/srv/node/d3

etc

The path in the error message:

/srv/xvdb1/node/xvdb1/

Looks like the object-server.conf is configured with:

devices = /srv/xvdb1/node

And the ring has devices like "xvdb1"

But as the error states: "No such file or directory at"

devices + device => /srv/xvdb1/node/xvdb1/...

And I trust the error that the path doesn't exist (or if it does maybe the
swift processes don't have permissions?)

Hope you can get it squared.  You might jump in IRC and join
#openstack-swift on Freenode for some more iterative feedback (I'd
recommend irccloud.com if you're new to IRC).

GL,

-Clay


On Tue, Aug 8, 2017 at 2:54 AM, Shyam Prasad N 
wrote:
>
> Hi,
>
> In my openstack swift cluster, I'm seeing a lot of 503 errors as a
> result of tracebacks in swift logs with "No such file or directory"
> exceptions...
> # grep -Rnw txdaba05e70c6b4dfaa5884-0059895aca /var/log/swift/*
> /var/log/swift/proxy.error:31030:Aug  7 23:31:39
> BulkStore-c2f99bd4-75ce-11e7-b536-02e7b943c03c proxy-server: ERROR 500
> Traceback (most recent call last):#012  File
> "/usr/lib/python2.7/dist-packages/swift/obj/server.py", line 1032, in
> __call__#012res = method(req)#012  File
> "/usr/lib/python2.7/dist-packages/swift/common/utils.py", line 1412,
> in _timing_stats#012resp = func(ctrl, *args, **kwargs)#012  File
> "/usr/lib/python2.7/dist-packages/swift/obj/server.py", line 751, in
> PUT#012writer.put(metadata)#012  File
> "/usr/lib/python2.7/dist-packages/swift/obj/diskfile.py", line 2451,
> in put#012super(DiskFileWriter, self)._put(metadata, True)#012
> File "/usr/lib/python2.7/dist-packages/swift/obj/diskfile.py", line
> 1476, in _put#012self._finalize_put, metadata, target_path,
> cleanup)#012  File
> "/usr/lib/python2.7/dist-packages/swift/common/utils.py", line 3342,
> in force_run_in_thread#012return self._run_in_eventlet_tpool(func,
> *args, **kwargs)#012  File
> "/usr/lib/python2.7/dist-packages/swift/common/utils.py", line 3322,
> in _run_in_eventlet_tpool#012raise result#012OSError: [Errno 2] No
> such file or directory#012 From Object Server re:
>
/v1/AUTH_test/8kpc/data/37363A32353A33393A63353A36633A3566CA558859.73.0.1
> 10.3.60.8:6010/xvdb1 (txn: txdaba05e70c6b4dfaa5884-0059895aca)
> (client_ip: 10.3.60.11)
> /var/log/swift/proxy.error:31031:Aug  7 23:31:39
> BulkStore-c2f99bd4-75ce-11e7-b536-02e7b943c03c proxy-server: Object
> PUT returning 503 for [500, 201] (txn:
> txdaba05e70c6b4dfaa5884-0059895aca) (client_ip: 10.3.60.11)
> /var/log/swift/proxy.error:31032:Aug  7 23:31:39
> BulkStore-c2f99bd4-75ce-11e7-b536-02e7b943c03c proxy-server: STDERR:
> 10.3.60.11 - - [08/Aug/2017 06:31:39] "PUT
>
/v1/AUTH_test/8kpc/data/37363A32353A33393A63353A36633A3566CA558859.73.0.1
> HTTP/1.1" 503 346 1.553481 (txn: txdaba05e70c6b4dfaa5884-0059895aca)
> /var/log/swift/proxy.log:27701:Aug  7 23:31:39
> BulkStore-c2f99bd4-75ce-11e7-b536-02e7b943c03c proxy-server:
> 10.3.60.11 10.3.60.11 08/Aug/2017/06/31/39 PUT
>
/v1/AUTH_test/8kpc/data/37363A32353A33393A63353A36633A3566CA558859.73.0.1
> HTTP/1.0 503 - - AUTH_tke6014ecd5... 16777216 118 -
> txdaba05e70c6b4dfaa5884-0059895aca - 1.5526 - - 1502173898.383203983
> 1502173899.935844898 0
> /var/log/swift/storage1.log:41634:Aug  7 23:31:39
> BulkStore-c2f99bd4-75ce-11e7-b536-02e7b943c03c object-server:
> 10.3.60.8 - - [08/Aug/2017:06:31:39 +] "PUT
>
/xvdb1/118/AUTH_test/8kpc/data/37363A32353A33393A63353A36633A3566CA558859.73.0.1"
> 500 981 "PUT
http://10.3.60.8:8080/v1/AUTH_test/8kpc/data/37363A32353A33393A63353A36633A3566CA558859.73.0.1
"
> "txdaba05e70c6b4dfaa5884-0059895aca" "proxy-server 2117" 1.0534 "-"
> 2127 0
> /var/log/swift/storage2.log:128852:Aug  7 23:31:39
> BulkStore-c2f99bd4-75ce-11e7-b536-02e7b943c03c container-server:
> 10.3.60.9 - - [08/Aug/2017:06:31:39 +] "PUT
>
/xvdb2/972/AUTH_test/8kpc/data/37363A32353A33393A63353A36633A3566CA558859.73.0.1"
> 201 - "PUT
http://10.3.60.8:8080/xvdb2/118/AUTH_test/8kpc/data/37363A32353A33393A63353A36633A3566CA558859.73.0.1
"
> "txdaba05e70c6b4dfaa5884-0059895aca" "object-server 1728" 0.0006 "-"
> 2099 0
>
> I'm also seeing some "Error removing tempfile" errors in the storage logs
> too...
> Aug  8 02:28:15 BulkStore-c2f99bd4-75ce-11e7-b536-02e7b943c03c
> object-server: Error removing tempfile:
> /srv/xvdb1/node/xvdb1/tmp/tmpFouKzU: #012Traceback (most recent call
> last):#012  File
> "/usr/lib/python2.7/dist-packages/swift/obj/diskfile.py", line 2396,
> in create#012os.unlink(tmppath)#012OSError: [Errno 2] No such file
> or directory: '/srv/xvdb1/node/xvdb1/tmp/tmpFouKzU' (txn:
> tx860a415e4c454baeab4fc-005989842e)
>

Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-06-28 Thread Clay Gerrard
On Wed, Jun 28, 2017 at 7:50 AM, Thierry Carrez 
wrote:

> It's hard for newcomers to the OpenStack world to see what
> is a part of OpenStack and what's not.


Just an aside, this perception problem works in our favor sometimes too.
I know in the past some BigCorp contributors have been told to "go work on
OpenStack" and the ambiguity leaves room for creative allocation of
resources.  Sometimes the rule-of-thumb is as simple as "if it's hosted on
OpenStack infra you can contribute".  I think internally we understand
software in service of the mission isn't strictly limited to projects under
TC governance - but on the outside ... there's a "perception problem" ;)

-Clay


Re: [openstack-dev] [all][tc] How to deal with confusion around "hosted projects"

2017-06-28 Thread Clay Gerrard
On Wed, Jun 28, 2017 at 7:50 AM, Thierry Carrez 
wrote:

>
> 2- it lets us host things that are not openstack but which we work on
> (like abandoned Python libraries or GPL-licensed things) in a familiar
> environment
>
>
Do we no longer think openstack hosted infra holds a competitive advantage
for teams that are trying to efficiently collaborate building software in
service of the mission?

If we do, why would we agree on a plan that basically tells teams that are
trying to "get stuff done" to go work it out on github and travis ci or
whatever?  Maybe worse, what happens when a team/project/community grows up
around *one* workflow (not because it's better; but just because the
OpenStack workflow is exclusionary) but then sees its
operators'/deployers'/users' adoption swelling around OpenStack and wants to
"join"?  Is adopting the hosted infrastructure later... optional?

If we do NOT think hosting on openstack-infra offers competitive advantage
for teams that are trying to efficiently collaborate building software in
service of the mission ... why heck not?!

-Clay


Re: [openstack-dev] [nova] bug triage experimentation

2017-06-23 Thread Clay Gerrard
Sean,

This sounds amazing and Swift could definitely use some [automated]
assistance here.  It would help if you could throw out a WIP somewhere.

First thought that comes to mind though... storyboard.o.o :\

-Clay

On Fri, Jun 23, 2017 at 9:52 AM, Sean Dague  wrote:

> The Nova bug backlog is just over 800 open bugs, which while
> historically not terrible, remains too large to be collectively usable
> to figure out where things stand. We've had a few recent issues where we
> just happened to discover upgrade bugs filed 4 months ago that needed
> fixes and backports.
>
> Historically we've tried to just solve the bug backlog with volunteers.
> We've had many a brave person dive into here, and burn out after 4 - 6
> months. And we're currently without a bug lead. Having done a big giant
> purge in the past
> (http://lists.openstack.org/pipermail/openstack-dev/2014-September/046517.html)
> I know how daunting this all can be.
>
> I don't think that people can currently solve the bug triage problem at
> the current workload that it creates. We've got to reduce the smart
> human part of that workload.
>
> But, I think that we can also learn some lessons from what active github
> projects do.
>
> #1 Bot away bad states
>
> There are known bad states of bugs - In Progress with no open patch,
> Assigned but not In Progress. We can just bot these away with scripts.
> Even better would be to react immediately on bugs like those, that helps
> to train folks how to use our workflow. I've got some starter scripts
> for this up at - https://github.com/sdague/nova-bug-tools
>
> #2 Use tag based workflow
>
> One lesson from github projects, is the github tracker has no workflow.
> Issues are opened or closed. Workflow has to be invented by every team
> based on a set of tags. Sometimes that's annoying, but often times it's
> super handy, because it allows the tracker to change workflows and not
> try to change the meaning of things like "Confirmed vs. Triaged" in your
> mind.
>
> We can probably tag for information we know we need a lot easier. I'm
> considering something like
>
> * needs.system-version
> * needs.openstack-version
> * needs.logs
> * needs.subteam-feedback
> * has.system-version
> * has.openstack-version
> * has.reproduce
>
> Some of these a bot can process the text on and tell if that info was
> provided, and comment how to provide the updated info. Some of this
> would be human, but with official tags, it would probably help.
>
> #3 machine assisted functional tagging
>
> I'm playing around with some things that might be useful in mapping new
> bugs into existing functional buckets like: libvirt, volumes, etc. We'll
> see how useful it ends up being.
>
> #4 reporting on smaller slices
>
> Build some tooling to report on the status and change over time of bugs
> under various tags. This will help visualize how we are doing
> (hopefully) and where the biggest piles of issues are.
>
> The intent is the normal unit of interaction would be one of these
> smaller piles. Be they the 76 libvirt bugs, 61 volumes bugs, or 36
> vmware bugs. It would also highlight the rates of change in these piles,
> and what's getting attention and what is not.
>
>
> This is going to be kind of an ongoing experiment, but as we currently
> have no one spear heading bug triage, it seemed like a good time to try
> this out.
>
> Comments and other suggestions are welcomed. The tooling will have the
> nova flow in mind, but I'm trying to make it so it takes a project name
> as params on all the scripts, so anyone can use it. It's a little hack
> and slash right now to discover what the right patterns are.
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>


Re: [openstack-dev] [tc] Status update, Jun 16

2017-06-16 Thread Clay Gerrard
On Fri, Jun 16, 2017 at 7:19 AM, Doug Hellmann 
wrote:

> Excerpts from Thierry Carrez's message of 2017-06-16 11:17:30 +0200:
>
> > == Need for a TC meeting next Tuesday ==
> >
> > In order to make progress on the Pike goal selection, I think a
> > dedicated IRC meeting will be necessary. We have a set of valid goals
> > proposed already: we need to decide how many we should have, and which
> > ones. Gerrit is not great to have that ranking discussion, so I think we
> > should meet to come up with a set, and propose it on the mailing-list
> > for discussion. We could use the regular meeting slot on Tuesday,
> > 20:00utc. How does that sound ?
> >
>
> +1
>
>
I'm loving this new ML thing the TC is doing!  Like... I'm not going to
come to the meeting.  I'm not a helpful person in general and probably
wouldn't have anything productive to say.

But I love the *idea* that I know *when and where* this is being decided so
that if I *did* care enough about community goals to come make a stink
about it, I know exactly what I should do - _show up and say my piece_!
Just this *idea* is going to help a *ton* later when John tells me "shut up
clay; just review the patch" [1] - because if I had something to say about
it i should have been there when it was time to say something about it!

Obvs, if anyone *else* has a passion about community goals and how
OpenStack uses them to push for positive change in the broader ecosystem
(and thinks they can elucidate that on IRC to positive results).  *YOU*
should *totally* be there!

Y'all have fun,

-Clay

1. N.B. John is *not* a high-conflict guy; but he's dealt with me for
~20 years so he gets a pass
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Moving away from "big tent" terminology

2017-06-15 Thread Clay Gerrard
On Thu, Jun 15, 2017 at 8:07 AM, Sean Dague  wrote:

>
> I do kind of wonder if we returned the stackforge or
> friends-of-openstack or whatever to the github namespace when we
> mirrored if it would clear a bunch of things up for people. It would
> just need to be an extra piece of info in our project list about where
> those projects should mirror to (which may not be the same namespace as
> in gerrit).
>
>
Whoa.  

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-06-02 Thread Clay Gerrard
On Fri, Jun 2, 2017 at 1:21 PM, Matt Riedemann  wrote:

>
> I don't think the maintenance issue is the prime motivator, it's the fact
> paste is in /etc which makes it a config file and therefore an impediment
> to smooth upgrades. The more we can move into code, like default policy and
> privsep, the better.


Ah, that makes sense. Swift has had to do all kinds of nonsense to
manipulate pipelines to facilitate smooth upgrades.  But I always assumed
our heavy use of middleware and support for custom extension via third
party middleware just meant it was complexity inherent to our problem we
had to eat until we wrote something better.

https://github.com/openstack/swift/blob/d51ecb4ecc559bf4628159edc2119e96c05fe6c5/swift/proxy/server.py#L50

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Target classes in Cinder

2017-06-02 Thread Clay Gerrard
On Fri, Jun 2, 2017 at 12:47 PM, John Griffith 
wrote:

>
>
> What I'm wondering is, even though I certainly think this is a FAR
> SUPERIOR design to what we had, I don't like having both code-paths and
> designs in the code base.
>

Might be useful to enumerate those?  Perhaps drawing attention to the
benefits would spur some driver maintainers that haven't made the switch to
think they could leverage the work into something impactful?


> Should we consider reverting the drivers that are using the new model back
> and remove cinder/volume/targets?
>

Probably not anytime soon if it means dropping 76 of 80 drivers?  Or at
least that's a different discussion ;)


> Or should we start flagging those new drivers that don't use the new model
> during review?
>

Seems like a reasonable social construct to promote going forward - at
least it puts a tourniquet on it.  Perhaps there's some in-tree development
documentation that could be updated to point people in the right direction,
or some warnings that can be placed around the legacy patterns to keep
people from stumbling on bad examples?


> Also, what about the legacy/burden of all the other drivers that are
> already in place?
>
>
What indeed... but that's down the road, right? For the moment it's just
figuring out how to give things a bit of a kick in the pants?  Or maybe
admitting w/o a kick in the pants - living with the cruft is the plan of
record?

I'm curious to see how this goes, Swift has some plugin interfaces that
have been exposed through the ages and the one thing constant with
interface patterns is that the cruft builds up...

Good Luck!

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][ptls][all] Potential Queens Goal: Migrate Off Paste

2017-06-02 Thread Clay Gerrard
Can we make this (at least) two (community?) goals?

#1 Make a thing that is not paste that is better than paste (i.e. > works,
i.e. >= works & is maintained)
#2 Have some people/projects "migrate" to it

If the goal is just "take over paste maintenance" that's maybe ok - but is
that an "OpenStack community" goal or just something that someone who has
the bandwidth to do could do?  It also sounds cheaper and probably about as
good.

Alternatively we can just keep using paste until we're tired of working
around its bugs/limitations - and then replace it with something in tree
that implements only 100% of what the project using it needs to get done -
then if a few projects do this and they see they're maintaining similar
code they could extract it to a common library - but iff sharing their
complexity isolated behind an abstraction sounds better than having
multiple simpler and more agile ways to do similar-ish stuff - and only
*then* make a thing that is not paste but serves a similar use-case as
paste and is also maintained and easy to migrate to from paste.  At which
point it might be reasonable to say "ok, community, new goal, if you're not
already using the thing that's not paste but does about the same as paste -
then we want to organize some people in the community experienced with the
effort of such a migration to come assist *all openstack projects* (who use
paste) in completing the goal of getting off paste - because srly, it's
*that* important"

-Clay

On Wed, May 31, 2017 at 1:38 PM, Mike  wrote:

> Hello everyone,
>
> As part of our community wide goals process [1], we will discuss the
> potential goals that came out of the forum session in Boston [2].
> These discussions will aid the TC in making a final decision of what
> goals the community will work towards in the Queens release.
>
> For this thread we will be discussing migrating off paste. This was
> suggested by Sean Dague. I’m not sure if he’s leading this effort, but
> here’s an excerpt from him to get us started:
>
> A migration path off of paste would be a huge win. Paste deploy is
> unmaintained (as noted in the etherpad) and being in etc means it's
> another piece of gratuitous state that makes upgrading harder than it
> really should be. This is one of those that is going to require
> someone to commit to working out that migration path up front. But it
> would be a pretty good chunk of debt and upgrade ease.
>
>
> [1] - https://governance.openstack.org/tc/goals/index.html
> [2] - https://etherpad.openstack.org/p/BOS-forum-Queens-Goals
>
> —
> Mike Perez
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] new additional team meeting

2017-05-26 Thread Clay Gerrard
On Fri, May 26, 2017 at 4:06 PM, John Dickinson  wrote:

>
> The new meeting is at a reasonable time for just about everyone, other
> than those who live in New York to San Francisco time zones.


define *un*-reasonable ;)  Regardless, we'll have the logs.


>
> I'd like to thank Mahati for leading the group for organizing and
> facilitating this new idea.
>

+1


> I'm looking forward to seeing how this will help our team communicate and
> grow.
>
>
It'll be great!

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] 404 on geo-replicated clusters with write affinity

2017-05-17 Thread Clay Gerrard
Hi Bruno,

On Wed, May 17, 2017 at 3:47 PM, Bruno L  wrote:

>
> I see multiple bugs in launchpad that are related to this issue.
>

AFAIK, only one bug for this issue is still open, and has the most recent
thoughts added from the Forum

https://bugs.launchpad.net/swift/+bug/1503161

> write a brief summary of the changes proposed,
>

I think the skinny of what's in the bug report is "make more backend DELETE
requests to handoffs".

Personally I was coming around to the idea that an explicit configurable
(i.e. similar to "request_node_count" for GET) would be easy to reason
about and give us a lot of flexibility (it would pair well with the per-policy
configuration WIP https://review.openstack.org/#/c/448240/).  It's possible
this could be implicit using some heuristic over the sort order of
primaries in the ring - but I think it'd be a whole 'nother thing, and could
be added later as an "auto" sort of value for the option (i.e. workers =
[N|auto], or "replicas + 2" sort of syntax).
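To make that concrete, here's a rough sketch of what an explicit "extra
handoff DELETEs" configurable could look like - every name here
(delete_node_count, the quorum logic) is purely illustrative, not an actual
Swift option or the agreed-on design:

```python
# Illustrative sketch only: send backend DELETEs to all primaries plus a
# configurable number of handoffs, then derive the client response from
# the collected status codes.  delete_node_count is a made-up option name
# mirroring the spirit of request_node_count for GET.

def pick_delete_nodes(primaries, handoffs, replicas, delete_node_count):
    """Return the nodes to send backend DELETE requests to.

    e.g. delete_node_count = replicas + 2 means every primary plus two
    handoff nodes.
    """
    extra = max(delete_node_count - replicas, 0)
    return list(primaries) + list(handoffs)[:extra]

def client_status(statuses, replicas):
    """Translate backend statuses into a client response code (sketch)."""
    quorum = replicas // 2 + 1
    if sum(1 for s in statuses if s in (204, 404)) >= quorum and \
            any(s == 204 for s in statuses):
        return 204  # at least one node actually had the object
    if all(s == 404 for s in statuses):
        return 404  # no node has ever heard of it
    return 503

primaries = ['p1', 'p2', 'p3']
handoffs = ['h1', 'h2', 'h3', 'h4']
nodes = pick_delete_nodes(primaries, handoffs, replicas=3, delete_node_count=5)
print(nodes)  # ['p1', 'p2', 'p3', 'h1', 'h2']
print(client_status([204, 204, 404], replicas=3))  # 204
```

The point of the sketch is just that "blind 404 -> 204" never happens: a
204 is only returned when some node positively confirmed the object existed.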

Additionally, it's been pointed out various times that collecting
X-Backend-Timestamp from the responses would allow for further reasoning
over the collected responses in addition to just the status codes (similar
to WIP for GET https://review.openstack.org/#/c/371150/ ) - but I'm
starting to think that'd be an enhancement to the extra handoff DELETE
requests rather than a complete alternative solution.

I don't think anyone really likes the idea of blindly translating a
majority 404 response on DELETE to 204 and calling it a win - so
unfortunately the fix is non-trivial.  Glad to see you're interested in
getting your teeth into this one - let me know if there's anything I can do
to help!

Good Luck,

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] experimenting with systemd for services

2017-04-05 Thread Clay Gerrard
On Wed, Apr 5, 2017 at 1:30 PM, Andrea Frittoli 
wrote:
>
>
> I just want to say thank you! to you clarkb clayg and everyone involved :)
> This is so much better!
>
> andreaf
>
>
Sean is throwing credit at me where none is due.  IIRC I was both in the
room and in a very-normal-for-me state of confusion while he and clark
talked about this - but I did not know they were working on it.
Nevertheless, I am dusting off my devstack vm with USE_SYSTEMD=True and
found his blog post interesting ;)

Kudos!

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][TripleO][release][deployment] Packaging problems due to branch/release ordering

2017-04-05 Thread Clay Gerrard
I hate this stuff.

Not just pbr (tho I do have a long history of being kicked in the nuts by
pbr for no good reason I can ascertain).  But when suddenly some process
OpenStack invented I've never *heard of in two years* breaks - and
overnight me and 100's of other folks have to stop what their doing to read
up on some esoteric thing they never bought into.

https://blueprints.launchpad.net/pbr/+spec/pbr-semver
https://review.openstack.org/#/c/108270/

"My use-case is to pretend every commit is a release.  And since I can't
expect you're going to manage something as complicated as your project's
*version* in between releases (obvs.).  The only possible solution is a new
esoteric procedure no one has ever heard of, baked into your commit messages."

What could go wrong?

This is in all my package build infrastructure *so hard*:

# Shut up, pbr, we know what we're doing
export PBR_VERSION="$DOWNSTREAM_VERSION"

As long as that doesn't break - I should probably just +2 the thing and go
back to keeping my mouth shut.  But ... why after 2 years of blissful
ignorance do I have to suddenly care about this nonsense?  I'm grepping git
logs from Nova, Cinder, Keystone, Swift - what am I missing - who's using
this!?

Please forgive my obviously frustrated tone - I do understand from the spec
and reviews that folks have over time put a lot of thought into this and
I'm not going to fully understand it in an hour of cursory glance.  Which
is... kinda why I'm frustrated.  This stuff is madness and it's in my
way.

-Clay

On Wed, Apr 5, 2017 at 9:08 AM, Akihiro Motoki  wrote:

> I see Emilien proposed a number of patches to individual projects with
> "Sem-Ver: api-break" in the commit message.
> As far as I understand the pbr documentation [1] correctly (see the
> forth paragraph in the section) which is pointed by Emilien,
> the change looks reasonable.
>
> Honestly it would be great if we have a green signal for the similar
> change as a community
> as not all developers are familiar with this kind of changes.
>
> Can all developers get the green signal for the similar change?
>
> Akihiro
>
> [1] https://docs.openstack.org/developer/pbr/#version
>
>
> 2017-04-05 10:36 GMT+09:00 Emilien Macchi :
> > adding [all] for more visibility... See comments inline:
> >
> > On Tue, Mar 21, 2017 at 2:02 PM, Emilien Macchi 
> wrote:
> >> On Mon, Mar 13, 2017 at 12:29 PM, Alan Pevec  wrote:
> >>> 2017-03-09 14:58 GMT+01:00 Jeremy Stanley :
>  In the past we addressed this by automatically merging the release
>  tag back into master, but we stopped doing that a cycle ago because
>  it complicated release note generation.
> >>>
> >>> Also this was including RC >= 2 and final tags so as soon as the first
> >>> stable maintenance version was released, master was again lower
> >>> version.
> >>
> >> topic sounds staled.
> >> Alan,  do we have an ETA on the RDO workaround?
> >
> > Without progress on RDO tooling and the difficulty of implementing it,
> > I went ahead and proposed a semver bump for some projects:
> >
> > https://review.openstack.org/#/q/topic:sem-ver/pike
> >
> > Except for Swift where I don't know if they'll bump X, I proposed to
> bump Y.
> > For all other projects, I bumped X as they did from Newton to Ocata.
> > (where version is X.Y.Z).
> >
> > Please give any feedback on the reviews if you prefer another kind of
> bump.
> > Thanks for reviewing that asap, so TripleO CI can test upgrades from
> > Ocata to Pike soon.
> >
> > Thanks,
> >
> >> Thanks,
> >>
> >>> Cheers,
> >>> Alan
> >>>
> >>> 
> __
> >>> OpenStack Development Mailing List (not for usage questions)
> >>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >> --
> >> Emilien Macchi
> >
> >
> >
> > --
> > Emilien Macchi
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [kolla][infra] does someone cares about Jenkins? I stopped.

2017-03-02 Thread Clay Gerrard
On Thu, Mar 2, 2017 at 10:41 AM, Paul Belanger 
wrote:

>  In fact, the openstack-infra team does mirror a
> lot of things today


I bumped into this the other day:

https://specs.openstack.org/openstack-infra/infra-specs/specs/unified_mirrors.html

... but so far haven't found any specific details on ci.openstack.org?
Obviously the . stuff got implemented [1] - I'm not sure to what
extent you have to "opt-in" to that in gate jobs?

Did someone say AFS?

https://docs.openstack.org/infra/system-config/afs.html?highlight=mirror

-Clay

1. http://mirror.iad.rax.openstack.org/
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [infra] [gate] [all] openstack services footprint lead to oom-kill in the gate

2017-02-02 Thread Clay Gerrard
On Thu, Feb 2, 2017 at 12:50 PM, Sean Dague  wrote:

>
> This is one of the reasons to get the wsgi stack off of eventlet and
> into a real webserver, as they handle HTTP request backups much much
> better.
>
>
To some extent I think this is generally true for *many* common workloads,
but the specifics depend *a lot* on the application under the webserver
that's servicing those requests.

I'm not entirely sure what you have in mind, and may be mistaken to assume
this is a reference to Apache/mod_wsgi?  If that's the case, depending on
how you configure it - aren't you still going to end up with an instance of
the wsgi application per worker-process and have the same front of line
queueing issue unless you increase workers?  Maybe if the application is
thread-safe you can use os thread workers - and preemptive interruption for
the GIL is more attractive for the application than eventlet's cooperative
interruption.  Either way, it's not obvious that has a big impact on the
memory footprint issue (assuming the issue is memory growth in the
application and not specifically eventlet.wsgi.server).  But you may have
more relevant experience than I do - happy to be enlightened!
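For what it's worth, the front-of-line queueing concern sketches out to
something like this toy model - entirely illustrative, and making no claim
about mod_wsgi's actual scheduling:

```python
# Toy model of front-of-line queueing: each worker runs one request at a
# time, so a single slow request delays everything queued behind it unless
# there are more workers - and each extra worker process is another full
# copy of the wsgi application in memory.

def finish_times(durations, workers):
    """Assign request durations (in arrival order) to the next free
    worker and return each request's completion time."""
    busy_until = [0] * workers
    completions = []
    for duration in durations:
        w = busy_until.index(min(busy_until))  # next worker to free up
        busy_until[w] = busy_until[w] + duration
        completions.append(busy_until[w])
    return completions

# one slow request (10s) followed by four quick ones (1s each)
print(finish_times([10, 1, 1, 1, 1], workers=1))  # [10, 11, 12, 13, 14]
print(finish_times([10, 1, 1, 1, 1], workers=2))  # [10, 1, 2, 3, 4]
```

With one worker the quick requests all wait behind the slow one; doubling
workers halves the queueing but (roughly) doubles the application footprint -
which is the memory-vs-latency tradeoff being discussed.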

Thanks,

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [osc][openstackclient][zun] Collision on the keyword 'container'

2016-12-21 Thread Clay Gerrard
On Tue, Dec 20, 2016 at 2:11 PM, Dean Troyer  wrote:
>
> This is exactly how it should work.  I do want to make an additional
> important but subtle point:  while it looks like those are namespaced
> commands, we used 'server' not 'compute' because it is not a
> compute-namespaced, but a server-specific resource.

I *think* I understood this - the server command is representative of a
server resource, not a service.  It's somewhat circumstantial that often
times when you think about the top level base primitive resources OpenStack
provides cloud users - that they occasionally align with a single service
API endpoint.  But a big design goal for a unified client seems like it
might hopefully help abstract the services away so the user can focus on
their "stuff" ;)

>'object store container' would be consistent, 'object store object' is
awful.

Fully agree, would suggest:

"object "
"object container "
"object account "

I think this follows closely where the other resource commands are going?

>
> Notice that in the command list lined above the 'backup' resource has
> been deprecated and renamed as 'volume backup'.  We could possibly
> also do this with 'object' and 'container' from Swift, we will be
> doing this with other resources (flavor -> server flavor comes to
> mind).

I had not noticed the backup command, or flavor, thank you for pointing
those out.  This is excellent news!

>
> Backward compatibility is very important to us though, so renaming
> these resources takes a long time to complete.  Freeing up the
> top-level bare container resource would be three cycles out at best.
>

Seems reasonable to me!  AIUI the top level "object" resource would stay,
it would grow "container" & "account" sub resources, and the "object store
account" and "container" top-level commands would be deprecated.  Then
during development of the release after the one that includes those
changes, you could start to remove the deprecated interfaces.

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [osc][openstackclient][zun] Collision on the keyword 'container'

2016-12-20 Thread Clay Gerrard
On Tue, Dec 20, 2016 at 1:00 PM, Hongbin Lu  wrote:

>
>
> $ openstack objectstore container  
>
> $ openstack container container  
>
> $ openstack secret container  
>
>
>
> Thoughts?
>

This is the closest thing I can see that's somewhat reasonable - with the
obvious exception of "container container " - which is pretty gross.

Here's the best list I could find of what's going on now:

http://docs.openstack.org/developer/python-openstackclient/command-list.html

The collision of top-level resource names is not new.  You see stuff like
"volume create" & "server create" - but also "volume backup create" &
"server backup create"- which is an obvious pattern to replicate for
disambiguating top level name conflicts with similarly named
(sub?)-resources between services - except apparently in an effort to keep
things flat no one saw it coming with a name like "container".

But IMHO an object-store "container" is not a top level OpenStack resource,
is it?  I would think users would be happy to dump stuff into the object
store using "object create" - and reasonably expect to use "object
container create" to create a container *for their objects*?  This isn't a
generic OpenStack "container" - you can't use this generic "container" for
anything except objects?  Oddly, this pattern is already in use with the
pre-existing "object store account" command?!

Is it really already too late to apply some sane organization to the object
store commands in the openstack-cli and make room in the command namespace
for a top level OpenStack resource to manage a linux-containers service?
Because of backwards compatibility issues?

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] Embracing new languages in OpenStack

2016-11-09 Thread Clay Gerrard
On Wed, Nov 9, 2016 at 3:14 AM, Chris Dent  wrote:

>
> As a community we don't want to be bound together by rules, we want
> to be enabled by processes that support making and doing things
> effectively. The things that we make and do is what binds us
> together.
>

Hear, hear!
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] requests-mock

2016-10-11 Thread Clay Gerrard
On Tue, Oct 11, 2016 at 4:24 PM, Jamie Lennox  wrote:

>
> So I'm not going to comment too much on the quality of the library as i
> obviously think it's good
>

acahcpahch, no worries.

Your insights are invaluable - thanks for taking the time to connect some
dots for me - i'm starting to get up to speed.

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] requests-mock

2016-10-11 Thread Clay Gerrard
On Tue, Oct 11, 2016 at 10:44 AM, Clay Gerrard <clay.gerr...@gmail.com>
wrote:

>  I'm not really sure what a fixture is in this context?
>

Answered my own question in this case!

http://requests-mock.readthedocs.io/en/latest/fixture.html

It's one of *these* of course:

https://pypi.python.org/pypi/fixtures
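For anyone else connecting the same dots: the core idea behind that
category of library is a canned-response test double sitting where the HTTP
call would be. A minimal stdlib-only sketch of the same pattern (using
unittest.mock rather than requests-mock itself; get_version is a made-up
client function for illustration):

```python
from unittest import mock

# Stdlib sketch of the kind of test double requests-mock provides: a
# canned response wired into the session layer, so client code can be
# exercised without a real endpoint.

def get_version(session, url):
    """Made-up client helper: fetch a JSON doc and pull out a field."""
    resp = session.get(url)
    resp.raise_for_status()
    return resp.json()['version']

# build the canned response and a fake session that returns it
fake_resp = mock.Mock()
fake_resp.json.return_value = {'version': '2.1'}
session = mock.Mock()
session.get.return_value = fake_resp

assert get_version(session, 'http://example.com/v2') == '2.1'
session.get.assert_called_once_with('http://example.com/v2')
```

requests-mock does roughly this one layer lower - patching the transport
adapter so real `requests` code paths run against registered stub URLs -
which is what makes it reusable as a `fixtures`-style fixture.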
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elections][tc]Thoughts on the TC election process

2016-10-11 Thread Clay Gerrard
On Tue, Oct 11, 2016 at 10:52 AM, Anita Kuno  wrote:

> On 2016-10-11 01:40 PM, Ed Leafe wrote:
>
>> On Oct 11, 2016, at 12:17 PM, Anita Kuno  wrote:
>>
>> There really needs to be a period when a) we know who all the candidates
 are, and b) voting has not yet begun.

>>> Why?
>>>
>>> The voting period is open for a period of several days, voters have the
>>> ability to vote at any time during that voting period.
>>>
>>   Because many people vote as soon as they receive their ballot.
>>
>
> That is their choice.
>
>
Anita,

I agree that voters may choose to refrain from voting.  I don't hear
anyone saying "people *can not* make time for thoughtful reflection on the
candidates" - but a suggestion that perhaps they *did not*?  Is there any
way we could get numbers about how many voters waited until the end of the
week like I did?

If most voters did wait until later in the week, I think we can reject the
premise as false and accept that the week while voting is open *is* the
time in the process that most of the electorate uses for reflection on the
candidates.

If many *did* vote early in the week before some policy/platform points
were discussed one might even assume these voters have some remorse -
perhaps they will behave differently next time?  Not known.

OTOH, if we actively broadcast a period of time with the express purpose
of facilitating this discussion I think it sends a message that we as a
community expect this discussion to happen and have an impact on the
results of the election.  Is there a *downside* to a 3-week election period
as proposed by Ed, Chris and others?

-Clay




>   I know that I typically do so that it doesn’t get lost under the flood
>> of email.
>>
>
> I have found putting a star on the email when it comes it helps to ensure
> I don't lose it, but everyone has a different email organizing workflow.
>
>   This wouldn’t be so bad if you could later change your vote, but once it
>> is cast, it can’t be changed. What that means is that if a candidate I knew
>> little about says something that either interests me or turns me off, I can
>> *use* that information.
>>
>
> You still can now, you just have to choose to listen to candidates prior
> to voting.
>
> Monty suggested somewhere that we reissue the email ballots everyday
> (since we had email issues this time, I have no idea if that would result
> in us being kicked off the service we currently use or not). If the issue
> is, I want to ensure I can find my ballot when I need it, I think we can
> explore options that don't include requiring election officials to expand
> their commitment for an additional week.
>
>
>> A voter can ask the panel of candidates any question they wish such that
>>> they are satisfied prior to voting.
>>>
>> Of course; no one has said otherwise. But if someone else asks a question
>> that may not have occurred to me to ask, the answers given can still be
>> influential on my choices. Look at Gordon Chung’s question in this recent
>> cycle: I’m sure that there were lots of people who benefited from that
>> question and the many answers, not just Gordon.
>>
>
> I know I benefited from Gord's question, both as a candidate and as a
> voter. Thank you, Gord.
>
> Again, I feel the choice exists.
>
>
>> Additionally should the decision be made to go forward with some form of
>>> the candidate answers as I offered to the electorate in October 2014, those
>>> answers could be available as platforms are posted such that all responses
>>> are available as soon as the poll begins.
>>>
>> I think that this is a great idea, and would be willing to help in the
>> effort to make that happen again.
>>
>
> Thanks Ed, it felt satisfying to offer it when I did it. I hope others
> feel the same as you.
>
> Thanks,
> Anita.
>
>
>
>>
>> -- Ed Leafe
>>
>>
>>
>>
>>
>>
>> 
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscrib
>> e
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [elections][tc]Thoughts on the TC election process

2016-10-11 Thread Clay Gerrard
On Tue, Oct 11, 2016 at 9:57 AM, Thiago da Silva  wrote:

>
> it would also be nice to have a better place for the questions/answers to
> be stored. During last week there was a ton of great discussion, but when
> it came to voting time (towards end of the week) it was difficult/time
> consuming to find what each person had said.
>
>
I *also* found the process of ranking difficult and time-consuming for
candidates I was less familiar with - toward the end I was keeping a set
of notes with my interpretation of the short version of people's attitudes
on my self-selected set of platform/policy points that I cared about.

I think the only reasonable way to do this is ad-hoc; during the discussion
period (which I think we *really* should "leave some breathing room" in the
process to encourage that reflection and discussion) individuals in the
community that feel compelled to do so should try to share some
summarizations of the candidates.  I think ultimately it will be borderline
campaign propaganda (it's hard to reduce a lengthy nuanced response to a
complex subject into a pithy one-line summary) - but the hope or assumption
would be that our electorate is trying to make informed decisions and
someone putting in effort to sharing the results of their research is
trying to help with that goal.

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] requests-mock

2016-10-11 Thread Clay Gerrard
On Tue, Oct 11, 2016 at 5:40 AM, Ian Cordasco 
wrote:

> So, as a core developer
> of Requests, I would endorse requests-mock for this category of
> dependency.
>

Excellent; exactly the kind of feedback I was hoping to solicit.  If you're
looking to add this category of dependency - requests-mock is best-of-breed.

Although, honestly, I was also hoping to get some input from the
perspective of maintaining a client library - specifically to address the
question of whether this category of dependency makes sense at all?

e.g.

Ian, you've worked on glance - so I went looking how glanceclient uses
requests_mock and found another change from Jamie:

https://review.openstack.org/#/c/141992/

In this change, it *seems* like instead of glanceclient maintaining some
stubs that are used to set explicit expectations about the external
behavior of the other system - glanceclient has *outsourced* its
expectations to implicitly couple with whatever keystoneclient provides.
On one hand this seems like it might reduce some maintenance burden.  OTOH,
the burden of maintaining some of the expectations about the behavior of
the external system you depend on seems *related* to maintaining the glance
client?  I'm not sure if this is a great tradeoff?  Maybe?  I'm not sure if
this gets into what you were talking about wrt integration tests?  The
change I'm currently evaluating doesn't import an external fixture, I don't
think... I'm not really sure what a fixture is in this context?

http://requests-mock.readthedocs.io/en/latest/api/requests_mock.html?highlight=fixture#requests-mock-fixture-module
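(For anyone following along who hasn't used this category of dependency:
the general idea is to fake the HTTP transport in client-library tests so
the test sets explicit expectations and no real network traffic happens.
requests-mock does this at requests' adapter layer; the following is only a
rough, stdlib-only sketch of the same pattern - the client function and URL
are made up for illustration, not anyone's actual code.)

```python
import io
import json
import urllib.request
from unittest import mock

def fetch_container_count(url):
    # Hypothetical client code under test: GET a Swift-like endpoint
    # and parse a JSON body.
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)['count']

# In a test, replace the transport with a canned response, so the
# expectation about the external system's behavior is stated explicitly
# and no real HTTP request is ever made.
canned = io.BytesIO(json.dumps({'count': 42}).encode('utf-8'))
with mock.patch('urllib.request.urlopen', return_value=canned):
    assert fetch_container_count('http://swift.example/v1/AUTH_t/c') == 42
```

The "stubs vs. outsourced fixture" question above is really about who owns
that canned response: the client project itself, or the library it couples
to.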

-Clay


Re: [openstack-dev] dependency hygiene [was requests-mock]

2016-10-11 Thread Clay Gerrard
On Tue, Oct 11, 2016 at 4:02 AM, Davanum Srinivas  wrote:

> Clay,
>
> Apologies for the top post.


Oh goodness, none needed my friend!


> https://github.com/openstack/requirements#global-
> requirements-for-openstack-projects
>
>
Eternal wells of gratitude for that read!  There were so many good gems in
the review guidelines section!

Loved this bit:

Everyone likes everyone else to use the latest version of their code.
However, deployers really don't like to be constantly updating things.
Unless it's actually impossible to use the minimum version specified in
global-requirements.txt, it should not be changed.

Leave that decision to deployers and distros.


https://github.com/openstack/requirements#for-upgrading-requirements-versions

There is a requirements team, you can reach them on the
> #openstack-requirements channel
>

Cool, I might stop by... seems like there's some good knowledge to glean
from the requirements team's experience and focus.

Even simple stuff like the links to different distro packaging
search/status:

https://github.com/openstack/requirements#finding-distro-status

... is very helpful!


> https://wiki.openstack.org/wiki/Requirements


Hrmm...

minor upstream version updates should be considered routine/cursory review


https://wiki.openstack.org/wiki/Requirements#Review_Criteria

Maybe my lens is off - but it seems like there are some conflicting
attitudes between the wiki and the published guidelines, at least on
version bumps?

Also, those guidelines focus mostly on the requirements team - not the
program teams (which is more what I'm looking for right now).

There's the bit on the bot updates to requirements.txt:

This is intended as a time saving device for projects, as they can fast
approve requirements syncs and not have to manually worry about whether or
not they are up to date with the global definition.

https://github.com/openstack/requirements#automatic-sync-of-accepted-requirements

But if it *is* just a convenience function, what's the big deal if some
projects (only swift?) are particularly sensitive to the minimum dependency
version issue?

There's the bit on the *process* of projects electing to participate in the
very welcome and helpful requirements team review:

This job ensures that a project can not change any dependencies to versions
not compatible with global-requirements.txt


https://github.com/openstack/requirements#enforcement-in-projects

... but beyond "dependencies must meet the global requirements team's
minimum bar and be added to global-requirements *first*" (which, based on
that team's review guidelines, is a *great* service to the community btw) -
it's obviously meant to be just the starting point?  A proposed change to
global requirements is supposed to reference the already-in-review change
on the program code, and the discussion about the appropriateness of
outsourcing this impossible-to-live-without functionality - and coupling
your project's fate to the dependency - is obviously meant to happen within
the program team?

Is there any other information out there that's more focused on consistent
guidelines for the *program* teams wrt dependency hygiene - or is it cool
that everyone just sorta does their own thing within the bounds of reason
(which are kept in check by the global requirements process)?

-Clay


[openstack-dev] requests-mock

2016-10-10 Thread Clay Gerrard
Greetings!

Anyone have any experience to share positive or negative using
requests-mock?  I see it's been used to replace another dependency that had
some problems in many of the OpenStack python client libraries:

Added to global requirements -> https://review.openstack.org/#/c/104067
Added to novaclient -> https://review.openstack.org/#/c/112179/
Added to cinderclient -> https://review.openstack.org/#/c/106665/
Added to keystoneclient -> https://review.openstack.org/#/c/106659/

But I'm not sure how folks other than Jamie are getting on with it?  When
writing new tests do you tend to instinctively grab for requests-mock - or
do you mostly not notice it's there?  Any unexpected behaviors ever have
you checking it out with bzr and reading over the source?  Do you recall
ever having any bumps or bruises in the gate or in your development
workflow because of a new release of requests-mock?  No presumed fault on
Jamie!  It seems like he's doing a Herculean one-man job there; but it can
be difficult to go it alone:

https://bugs.launchpad.net/requests-mock/+bug/1616690

It looks like the gate on this project is configured to run the nova &
keystone client tests; so that may be sufficient to catch any sort of issue
that might come up in something that depends on it?  Presumably once he
lands a change - he does the update to global-requirements and then all of
OpenStack gets it from there?

I ask of course because I really don't understand how this works [1] :D

But anyway - Jamie was kind enough to offer to refactor some tests for us -
but in the process seems to require to bring in these new dependencies - so
I'm trying to evaluate if I can recommend requiring this code in order to
develop on swiftclient [2].

Any feedback is greatly appreciated!

-Clay

1. As you may know (thanks to some recent publicity) swift & swiftclient
joined OpenStack in the time of dinosaurs with a general policy of trying
to keep dependencies to a *minimum* - but then one day the policy changed
to... *always* add dependencies whenever possible?  j/k I'm not actually
sure what the OpenStack policy is on dependency hygiene :D  Anyway, I can't
say *exactly* where that "general policy" came from originally?  Presumably
crieht or gholt just had some first-hand experience that the dependencies
you choose to add have a HUGE impact on your project over its lifetime -
or read something from Joel on Software -
http://www.joelonsoftware.com/articles/fog07.html - or traveled
into the future and read the "go proverbs" and google'd "npm breaks
internet, again".  Of course they've since moved on from OpenStack but the
general idea is still something that new contributors to swift &
swiftclient get acclimated to, and the circle of confusion continues
https://github.com/openstack/swift/blob/master/CONTRIBUTING.rst#swift-design-principles
- but hey!  maybe I can educate myself about the OpenStack policy/process;
add this dependency and maybe the next one too; then someday break the
cycle!?!?

2. https://review.openstack.org/#/c/360298


Re: [openstack-dev] TC Candidacy

2016-10-03 Thread Clay Gerrard
This was a riveting soapbox missive - and I agree with what you said -
particularly about the focus on breaking down the barriers to building out
and supporting the OpenStack contributor base.

But I don't have a good sense for how you want to apply that focus in
action on the TC?  I went back and looked at a number of mailing list
threads you participated in - and happily recalled your very matter-of-fact
presentation.  Frankly it was quite refreshing to see how often you seemed
to offer historical context more than your personal opinion (!!!)

Are there any *specific* goals you have for a one-year term on the TC - or
do you think more focus on contributors is the most important thing to fix
first?  Would you perhaps consider sharing your thoughts in response to
Gordon's question [1]?

-Clay

1.
http://lists.openstack.org/pipermail/openstack-dev/2016-October/104953.html


On Thu, Sep 29, 2016 at 4:47 PM, Jeremy Stanley  wrote:

> I guess I'll send a copy of mine to the ML too, since all the cool
> kids seem to be doing it...
>
> Most of you probably know me as "that short dude in the Hawaiian
> shirt and long hair." I'll answer to "Jeremy," "fungi" or even just
> "hey you." I'm starting my third cycle as PTL of the Infrastructure
> team, and have been a core reviewer and root sysadmin for
> OpenStack's community-maintained project infrastructure for the past
> four years. I've also been doing vulnerability management in
> OpenStack for almost as long, chaired conference tracks, and given
> talks to other communities on a variety of OpenStack-related topics.
> I help with elections, attend and participate in TC meetings and
> review proposed changes to governance. I have consistent, strong
> views in favor of free software and open/transparent community
> process.
>
> https://wiki.openstack.org/user:fungi
>
> I see OpenStack not as software, but as a community of people who
> come together to build something for the common good. We've been
> fortunate enough to experience a bubble of corporate interest which
> has provided amazing initial momentum in the form of able software
> developers and generous funding, but that can't last forever. As
> time goes on, we will need to rely increasingly on effort from
> people who contribute to OpenStack because it interests them, rather
> than because some company is paying them to do so. The way I see it,
> we should be preparing now for the future of our project:
> independent, volunteer contributors drawn from the global free
> software community. However, we're not succeeding in attracting them
> the way some other projects do, which brings me to a major
> concern...
>
> OpenStack has a public relations problem we need to solve, and soon.
> I know I'm not the only one who struggles to convince contributors
> in other communities that we're really like them, writing free
> software under transparent processes open to any who wish to help.
> This skepticism comes from many sources, some overt (like our
> massive trade conferences and marketing budget) while others
> seemingly inconsequential (such as our constant influx of new
> community members who are unfamiliar with free software concepts and
> lack traditional netiquette). Overcoming this not-really-free
> perception is something we absolutely must do to be able to attract
> the unaffiliated volunteers who will continue to maintain OpenStack
> through the eventual loss of our current benefactors and well into
> stabilization.
>
> Prior to OpenStack, I worked for longer than I care to remember as
> an "operator" at Internet service, hosting and telecommunications
> providers doing Unix systems administration, network engineering,
> virtualization and information security. When I first started my
> career, you couldn't be a capable systems administrator without a
> firm grasp of programming fundamentals and couldn't be a good
> programmer without understanding the basics of systems
> administration. I'm relieved that, after many years of companies
> trying to tell us otherwise, our industry as a whole is finally
> coming back around to the same realization. Similarly, I don't
> believe we as a community benefit by socializing a separation of
> "operators" from "developers" and feel the role distinction many
> attempt to strike between the two is at best vague, while at its
> worst completely alienating a potential source of current and future
> contributions.
>
> What causes software to succeed in the long run is not hype,
> limitless funding or even technical superiority, it's the size and
> connectedness of its community of volunteers and users who invest
> themselves and their personal time. The work we're doing now is
> great, don't get me wrong, but for it to survive into the next
> decade and beyond we need to focus more on building a close-knit
> community of interested contributors even if it's not in the best
> interests of industry pundits or vendor product roadmaps.
>
> 

Re: [openstack-dev] TC candidacy

2016-10-03 Thread Clay Gerrard
On Tue, Sep 27, 2016 at 2:52 PM, Emilien Macchi  wrote:

>
> - Make sure it works outside Devstack.
>
> There is a huge gap between what is tested by Devstack gate and what
> operators
> deploy on the field.  This gap tends to stretch the feedback loop between
> developers and operators.  As a community, we might want to reduce this gap
> and make OpenStack testing more effective and more realistic.
> That's an area of focus I would like to work and spread over OpenStack
> projects if I'm elected.
>
>
This is a really interesting platform point.  It's been a concern in the
community since *at least* Vancouver [1].  We've had lots of different
viewpoints towards project install-ability raised this election:


   - John Dickenson says installation of projects should go horizontal [2]
   - Monty Taylor says services oriented deployment teams are the wasteful
   exception [3]
   - John Griffith says how the TC approaches services-oriented OpenStack
   will be an important factor in the future definition of OpenStack and its
   relevance [4]


Do you think this is an important topic for OpenStack right now?  I'd be
really interested to hear any *new* insights from the previous PTL of *one*
of OpenStack's installation automation projects?  What could or should be
done to reduce the bias/reliance towards a devstack or an
"openstack-all-in-one" deployment model?  Can or should the TC be the
champion of the discussion around "how to install" OpenStack?  How much of
an impact do choices made in *testing* have on the install-ability and
ease-of-use of OpenStack in general?

Somewhat unrelated.  Do you have any personal thoughts/insights on how you
believe OpenStack should approach potentially disruptive or "competing"
design in general - like ansible/puppet or even Kolla?

-Clay

1. https://www.youtube.com/watch?v=ZY8hnMnUDjU&feature=youtu.be&t=379
2.
http://lists.openstack.org/pipermail/openstack-dev/2016-September/104815.html
3.
http://lists.openstack.org/pipermail/openstack-dev/2016-September/104844.html
4.
http://lists.openstack.org/pipermail/openstack-dev/2016-September/104833.html


Re: [openstack-dev] TC candidacy

2016-10-03 Thread Clay Gerrard
I just re-read your announcement - and I couldn't be happier you're running
:D

I was so surprised at the fallout from your suggestion that the TC should
actively engage in more broadcasting of important topics [1] to bring in
more voices from the community early!?  Links to IRC logs and Gerrit commit
history seemed to totally miss the point?

I'm curious if any other discussions or questions on the mailing list have
excited or frustrated you in the past week?  Are there any specific goals
you have for a one-year term on the TC - or do you think more clarity on
past "perceived agreements" and a better focus on visibility and
communication is the more important thing to fix first?

Thanks again,

-Clay

1. This is a great example of how much this attitude of "full contact
community engagement" is *needed* and how much it's *lacking* with the
current set of representatives ->
http://lists.openstack.org/pipermail/openstack-dev/2016-September/103223.html -
not surprisingly it took someone from "the outside" to make it happen!

On Wed, Sep 28, 2016 at 9:41 AM, Chris Dent  wrote:

>
> Despite its name, the Technical Committee has become the part of the
> OpenStack contributor community that enshrines, defines, and -- in some
> rare cases -- enforces what it means to be "OpenStack". Meanwhile,
> the community has seen a great deal of growth and change.
>
> Some of these changes have led to progress and clarity, others have left
> people confused about how they can best make a contribution and what
> constraints their contributions must meet (for example, do we all know
> what it means to be an "official" project?).
>
> Much of the confusion, I think, can be traced to two things:
>
> * Information is not always clear nor clearly available, despite
>   valiant efforts to maintain a transparent environment for the
>   discussion of policy and process. There is more that can be done
>   to improve engagement and communication. Maybe the TC needs
>   release notes?
>
> * Agreements are made without the full meaning and implications of those
>   agreements being collectively shared. Most involved think they agree,
>   but there is limited shared understanding, so there is limited
>   effective collaboration. We see this, for example, in the ongoing
>   discussions on "What is OpenStack?". Agreement is claimed without
>   actually existing.
>
> We can fix this, but we need a TC that has a diversity of ideas and
> experiences. Other candidates will have dramatically different opinions
> from me. This is good because we must rigorously and vigorously question
> the status quo and our assumptions. Not to tear things down, but to
> ensure our ideas are based on present day truths and clear visions of
> the future. And we must do this, always, where it can be seen and
> joined and later discovered; gerrit and IRC are not enough.
>
> To have legitimate representation on the Technical Committee we must
> have voices that bring new ideas, are well informed about history, that
> protect the needs of existing users and developers, encourage new users
> and developers, that want to know how, that want to know why. No single
> person can speak with all these voices.
>
> Several people have encouraged me to run for the TC, wanting my
> willingness to ask questions, to challenge the status quo and to drive
> discourse. What I want is to use my voice to bring about frequent and
> positive reevaluation.
>
> We have a lot of challenges ahead. We want to remain a pleasant,
> progressive and relevant place to participate. That will require
> discovering ways to build bridges with other communities and within our
> own. We need to make greater use of technologies which were not invented
> here and be more willing to think about the future users, developers and
> use cases we don't yet have (as there will always be more of those). We
> need to keep looking and pushing forward.
>
> To that end I'm nominating myself to be a member of the Technical
> Committee.
>
> If you have specific questions about my goals, my background or anything
> else, please feel free to ask. I'm on IRC as cdent or send some email.
> Thank you for your consideration.
>
> --
> Chris Dent   ┬─┬ノ( º _ ºノ)https://anticdent.org/
> freenode: cdent tw: @anticdent
>
>


Re: [openstack-dev] TC Candidacy

2016-10-03 Thread Clay Gerrard
On Thu, Sep 29, 2016 at 8:34 PM, John Griffith 
wrote:

>
>
> I think what's more important and critical is the future and where
> OpenStack is going over the course of the next few years.
>

I think this is a really important topic right now!  Do you see any
dangerous road blocks coming up!?


>
> Do we want our most popular topic at the key-notes to continue being
> customers telling their story of "how hard" it was to do OpenStack?
>

No ;)


>
> It's my belief that the TC has a great opportunity (with the right people)
> to take input from the "outside world" and drive a meaningful and
> innovative future for OpenStack.  Maybe try and dampen the echo-chamber a
> bit, see if we can identify some real problems that we can help real
> customers solve.
>

I think this is where we *all* should be focused - do you think the TC can
offer specific support to the projects here - or is it more about just
removing other roadblocks and keeping the community on target?


> I'd like to see us embracing new technologies and ways of doing things.
> I'd love to have a process where we don't care so much about the check
> boxes of what oslo libs you do or don't use in a project, or how well you
> follow the hacking rules; but instead does your project actually work?  Can
> it actually be deployed by somebody outside of the OpenStack community or
> with minimal OpenStack experience?
>
> It's my belief that Projects should offer real value as stand-alone
> services just as well as they do working with other OpenStack services.
>

Very controversial (surprisingly?)!  Why do you think this is important?
Do you think this is in conflict with the goal of OpenStack as one
community?



>
> Feel free to ask me about my thoughts on anything specific, I'm happy to
> answer any questions that I can as honestly as I can.
>

Don't mind if I do ;)

-Clay


Re: [openstack-dev] [elections][tc]Thoughts on the TC election process

2016-10-03 Thread Clay Gerrard
On Mon, Oct 3, 2016 at 9:46 AM, Edward Leafe  wrote:

> After the nominations close, the election officials will assign each
> candidate a non-identifying label, such as a random number, and those
> officials will be the only ones who know which candidate is associated with
> which number.


I'm really uneasy about this suggestion.  Especially when it comes to
re-election, for the purposes of accountability I think it's really
important that voters be able to identify the candidates.  For some people
there's a difference in what they say and what they end up doing when left
calling shots from the bubble for too long.

As far as the other stuff... idk if familiarity == bias.  I'm sure on lots
of occasions people vote for people they know because they *trust* them;
but I don't think that's bias?  I think a more common problem is when
people vote for a *name* they recognize without really knowing that person
or what they're about.  Or perhaps just as bad - *not* voting because they
realize they have no context to consider these candidates beyond name
familiarity and an (optional) email.

I think a campaign period - and especially some effort [1] to have
candidates verbalize their viewpoints on topics that matter to the
constituency - could go a long way towards giving people some more context
beyond "I think this name looks familiar; I don't really recognize this
name"

-Clay

1.
http://lists.openstack.org/pipermail/openstack-dev/2016-October/104953.html
<- "role of the TC and your priorities"; seems like a reasonable thing for
someone to be able to answer about folks they're putting in the top six
slots in the voting card!


Re: [openstack-dev] Pecan Version 1.2

2016-09-26 Thread Clay Gerrard
I'm interested to hear how this works out.

I thought upper-constraints was somehow supposed to work to prevent this?
Like, maybe don't install a brand new shiny upstream version on the gate
infrastructure test jobs until it passes all our tests?  Prevent a fire
drill?  That bug was active back in July - but I guess 1.2 was released
pretty recently?  Maybe I don't understand the timeline.
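(One way a project can defend a stable API against this class of framework
regression is an exact-status-code contract test, so the gate - not
production - catches a dependency quietly turning 200 into 204.  A minimal,
framework-free sketch below; the WSGI app and route are hypothetical, not
Barbican's or Pecan's actual code.)

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    # Hypothetical stand-in for a framework-routed DELETE handler.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'deleted']

def invoke(wsgi_app, path='/v1/secrets/abc'):
    # Drive the WSGI app directly and capture the status line it emits.
    environ = {}
    setup_testing_defaults(environ)
    environ['PATH_INFO'] = path
    captured = {}
    def start_response(status, headers):
        captured['status'] = status
    body = b''.join(wsgi_app(environ, start_response))
    return captured['status'], body

# Pin the *exact* status code: a framework upgrade that changes the
# response to 204 makes this assertion fail loudly in the gate.
status, body = invoke(app)
assert status == '200 OK'
assert body == b'deleted'
```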

-Clay

On Mon, Sep 26, 2016 at 2:21 PM, Dave McCowan (dmccowan)  wrote:

>
> The Barbican project uses Pecan as our web framework.
>
> At some point recently, OpenStack started picking up their new version
> 1.2.  This version [1] changed one of their APIs such that certain calls
> that used to return 200 now return 204.  This has caused immediate problems
> for Barbican (our gates for /master, stable/newton, and stable/mitaka all
> fail) and a potential larger impact (changing the return code of REST calls
> is not acceptable for a stable API).
>
> Before I start hacking three releases of Barbican to work around Pecan's
> change, I'd like to ask:  are any other projects having trouble with
> Pecan Version 1.2?  Would it be possible/appropriate to block this version
> as not working for OpenStack?
>
> Thanks,
> Dave McCowan
>
>
> [1]
> http://pecan.readthedocs.io/en/latest/changes.html
> https://github.com/pecan/pecan/issues/72
>
>
>
>


Re: [openstack-dev] [ptl] code churn and questionable changes

2016-09-22 Thread Clay Gerrard
FWIW, No, this is *not* just an problem for OpenStack

https://youtu.be/wf-BqAjZb8M?t=531

^ Raymond Hettinger

Ultimately the problem is misaligned goals between the individual and the
project maintainers.  They want to "do stuff" and get a change landed; we
want to maximize the positive results for the project with our limited
available time.

But I caution you not to allow yourself to conflate their effort (while
borderline unhelpful to the project/maintainers) with "bad" - they don't
owe us anything.  As long as blasting out a bunch of bugs and non-material
changes gets them a few "commits in OpenStack", there are probably actual
honest-to-goodness reasons for that behavior to continue.

Go ahead and try to evaluate each change as best you can on its own
merits.  Be friendly and helpful.  If you -2, provide *context*.  Point
them at canned materials you have for new contributors!

But, don't get your hopes up that any of these folks are going to "come
around".

-Clay


On Thu, Sep 22, 2016 at 1:05 PM, Sean M. Collins  wrote:

> Sean Dague wrote:
> > If this is the bug that triggered this discussion, yes, please never do
> > anything like that -
> > https://bugs.launchpad.net/python-openstacksdk/+bug/1475722
> >
>
> Here was another fun one.
>
> https://bugs.launchpad.net/python-cinderclient/+bug/1586268
>
> I commented as such that we don't like these kind of patches, but I
> couldn't find the mailing list thread where we last had this discussion.
>
> https://review.openstack.org/#/c/343133/
>
> Anyway, yeah this kind of thing is really annoying and burns a ton of
> resources for no good reason
>
> --
> Sean M. Collins
>
>


Re: [openstack-dev] [kolla] warning about PBR issue for kolla operators

2016-09-13 Thread Clay Gerrard
There's a note in the "for development" section [1] pointing out that the
development instructions don't include anything that puts kolla on your
sys.path or copies any bin scripts out anywhere onto the PATH - i.e. it's
not installed.

That seems less than ideal for a developer - did I miss a `pip install -e
.` somewhere?

-Clay

1.
http://docs.openstack.org/developer/kolla/quickstart.html#installing-kolla-and-dependencies-for-development

On Tue, Sep 13, 2016 at 4:33 PM, Steven Dake (stdake) 
wrote:

> Hey folks,
>
>
>
> The quickstart guide was modified as a result of a lot of painful
> debugging over the last cycle approximately a month ago.  The only solution
> available to us was to split the workflow into an operator workflow
> (working on stable branches) and a developer workflow (working on master).
> We recognize operators are developers and the docs indicate as much.  Many
> times operators want to work with master as they are evaluating Newton and
> planning to place it into production.
>
>
>
> I’d invite folks using master with the pip install ./ method to have a
> re-read of the quickstart documentation. The documentation was changed in
> subtle ways (with warning and info boxes) but folks that have been using
> Kolla prior to the quckstart change may be using kolla in the same way the
> quickstart previously recommended.  Folks tend to get jammed up on this
> issue – we have helped 70-100 people work past this problem before we
> finally sorted out a workable solution (via documentation).
>
>
>
> The real issue lies in how PBR operates and pip interacts with Kolla and
> is explained in the quickstart.  From consulting with Doug Hellman and
> others in the release team, it appears the issue that impacts Kolla is not
> really solveable within PBR itself.  (I don’t mean to put words in Doug’s
> mouth, but that is how I parsed our four+ hour discussion) on the topic.
>
>
>
> The documentation is located here:
>
> http://docs.openstack.org/developer/kolla
>
>
>
>
>
>
>


Re: [openstack-dev] [all] governance proposal worth a visit: Write down OpenStack principles

2016-09-12 Thread Clay Gerrard
On Sun, Sep 11, 2016 at 11:53 PM, Thierry Carrez 
wrote:
>
> FWIW I agree with Jay that the wording "a product" is definitely
> outdated and does not represent the current reality. "Product"
> presupposes a level of integration that we never achieved, and which is,
> in my opinion, not desirable at this stage. I think that saying "a
> framework" would be more accurate today. Something like "OpenStack is
> one community with one common mission, producing one framework of
> collaborating components" would capture my thinking.
>


This is why I always have and presumably always will support Thierry on the
TC.  His initial thinking *frequently* seems out of alignment with mine, but
after observing others' healthy debate and discussion [1] - I always find we
both come around a little and seem to be pointing in basically the
same direction in the end.  Thierry is *reasonable*.  Throwing out old
assumptions when new information is raised is an absolute imperative - and
here we see Thierry plainly and openly offering concession to a reasonable
counterpoint.

It's unfortunate just how often we have to see Thierry do this on behalf of
some other members on the TC!  In fact, sometimes I'm scared to imagine
just how much worse off OpenStack governance might be if it wasn't for the
small handful of reasonable individuals willing to subject themselves to
the pain of the political grind.

PTL announcements are coming out and TC announcements are coming up soon!

http://governance.openstack.org/

Let's make sure we get some more *reasonable* people in there to help out
Thierry!  Thank you Thierry for your service!

-Clay

1. Like many other OpenStack contributors - I try not to personally get
involved in these non-technical debates.  It's a huge distraction from the
mission; and for me personally I recognize I'm too passionate and
ineloquent to make any meaningful direct contribution anyway.  But I care
about the OpenStack mission; and I feel compelled as of late to do my part
to ensure reasonable people are focusing on the right things in OpenStack
governance.  OpenStack for Operators!


Re: [openstack-dev] [all] governance proposal worth a visit: Write down OpenStack principles

2016-09-08 Thread Clay Gerrard
On Thu, Sep 8, 2016 at 6:13 AM, Chris Dent  wrote:
>
> That is, it thinks of itself as an existing truth to be ratified.
>

Gah!  YES!!  Exactly this!  Well said!

And this attitude keeps getting echoed again and again from the current
oligarchy TC!  "We know what OpenStack is; we know the principles we
operate under."

[from https://review.openstack.org/#/c/357260/]
> If the OpenStack Principles are in conflict with someone's thoughts
> or dreams, it is probably best to find something else to do that is more
> aligned with those

How can you get behind a document with a closing paragraph like this!?
OBEY OR LEAVE?!

> Because of the ordering of the process and the
> presumption of the document they will simply choose to ignore it and
> carry on with whatever other important things they've got going on.

Gah!  YES!  Please pay attention!

I know how it feels - I too want to focus *only* on THE MISSION:

[from
http://governance.openstack.org/reference/charter.html#openstack-project-teams
]
> OpenStack “Project Teams” are groups of people dedicated to the
> completion of the OpenStack project mission, which is ‘’to produce the
> ubiquitous Open Source Cloud Computing platform that enables building
> interoperable public and private clouds regardless of size, by being
> simple to implement and massively scalable while serving the cloud
> users’ needs.’’ Project Teams may create any code repository and produce
> any deliverable they deem necessary to achieve their goals.

^ what an amazing mission!!!

But remember, our mission must be "performed under the *oversight* of
the TC" - the trick is - *we* elect that *representative* governance!
Elections are next month people!

http://governance.openstack.org/#current-members

OpenStack for the Operators!

Not OpenStack for OpenStack sake!

-Clay


Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread Clay Gerrard
On Fri, Aug 12, 2016 at 11:52 AM, Andreas Jaeger <a...@suse.com> wrote:
>
> On 08/12/2016 08:37 PM, Clay Gerrard wrote:
> >
> > ... but ... it doesn't have a --install option?  Do you know if that is
> > strictly out-of-scope or roadmap or ... ?
>
>
> Right now we don't need it - we take the output and pipe that to yum/apt
> etc...
>
> See
>
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/scripts/install-distro-packages.sh
>

the -b option is great - thanks for the pointer!

  -b, --brief   List only missing packages one per line.

It should have been more obvious to me that it meant "you should totally
use this as input into your package manager"!

But, to be clear, when you say "we don't need it" - you *mean* "yeah, we
totally need it and added it as bash in a different project"?  ;)

but also *not* strictly out-of-scope?  Or not sure?  Or patches welcome and
we'll see!?  Or .. we can *both* continue to use our existing tools to
solve this problem in the same way we always have?  :P

Thanks again,

-Clay
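
For anyone wiring this up themselves, the brief output makes the glue
trivial - a hedged sketch assuming apt-get (any package manager front-end
works the same way; the helper name is mine):

```python
def install_command(bindep_output, pkg_manager=("sudo", "apt-get", "install", "-y")):
    # `bindep -b` prints only the missing packages, one per line, so the
    # output can be fed straight to a package manager.
    pkgs = [line.strip() for line in bindep_output.splitlines() if line.strip()]
    if not pkgs:
        return None  # nothing is missing
    return list(pkg_manager) + pkgs

# Stand-in output for illustration; a live run would use something like
# subprocess.run(["bindep", "-b"], capture_output=True, text=True).stdout
print(install_command("libssl-dev\nlibffi-dev\n"))
```

Which is, of course, more or less what install-distro-packages.sh already
does in bash.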


Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread Clay Gerrard
I'd noticed other-requirements.txt around, but figured it needed a bunch of
custom tooling to actually make it useful.

And... it's a subprocess wrapper to a handful of package management tools
(surprised to see emerge and pacman - Kudos!) and a custom format for
describing package requirements...

... but ... it doesn't have a --install option?  Do you know if that is
strictly out-of-scope or roadmap or ... ?

-Clay

On Fri, Aug 12, 2016 at 10:31 AM, Andreas Jaeger  wrote:

> TL;DR: Projects can use bindep.txt to document in a programmatic way
> their binary dependencies
>
> Python developers record their dependencies on other Python packages in
> requirements.txt and test-requirements.txt. But some packages
> have dependencies outside of python and we should document
> these dependencies as well so that operators, developers, and CI systems
> know what needs to be available for their programs.
>
> Bindep is a solution to this, it allows a repo to document
> binary dependencies in a single file. It even enables specification of
> which distribution the package belongs to - Debian, Fedora, Gentoo,
> openSUSE, RHEL, SLES and Ubuntu have different package names - and
> allows profiles, like a test profile.
>
> Bindep is one of the tools the OpenStack Infrastructure team has written
> and maintains. It is in use by already over 130 repositories.
>
> For better bindep adoption, in the just released bindep 2.1.0 we have
> changed the name of the default file used by bindep from
> other-requirements.txt to bindep.txt and have pushed changes [3] to
> master branches of repositories for this.
>
> Projects are encouraged to create their own bindep files. Besides
> documenting what is required, it also gives a speedup in running tests
> since you install only what you need and not all packages that some
> other project might need and are installed  by default. Each test system
> comes with a basic installation and then we either add the repo defined
> package list or the large default list.
>
> In the OpenStack CI infrastructure, we use the "test" profile for
> installation of packages. This allows projects to document their run
> time dependencies - the default packages - and the additional packages
> needed for testing.
>
> Be aware that bindep is not used by devstack based tests, those have
> their own way to document dependencies.
>
> A side effect is that your tests run faster, they have less packages to
> install. A Ubuntu Xenial test node installs 140 packages and that can
> take between 2 and 5 minutes. With a smaller bindep file, this can change.
>
> Let's look at the log file for a normal installation with using the
> default dependencies:
> 2 upgraded, 139 newly installed, 0 to remove and 41 not upgraded
> Need to get 148 MB of archives.
> After this operation, 665 MB of additional disk space will be used.
>
> Compare this with the openstack-manuals repository that uses bindep -
> this example was 20 seconds and not minutes:
> 0 upgraded, 17 newly installed, 0 to remove and 43 not upgraded.
> Need to get 35.8 MB of archives.
> After this operation, 128 MB of additional disk space will be used.
>
> If you want to learn more about bindep, read the Infra Manual on package
> requirements [1] or the bindep manual [2].
>
> If you have further questions about bindep, feel free to ask the Infra
> team on #openstack-infra.
>
> Thanks to Anita for reviewing and improving this blog post and to the
> OpenStack Infra team that maintains bindep, especially to Jeremy Stanley
> and Robert Collins.
>
> Note I'm sending this out while not all our test clouds have images that
> know about bindep.txt (they only handle other-requirements.txt). The
> infra team is in the process of ensuring updated images in all our test
> clouds for later today. Thanks, Paul!
>
> Andreas
>
>
> References:
> [1] http://docs.openstack.org/infra/manual/drivers.html#package-requirements
> [2] http://docs.openstack.org/infra/bindep/
> [3] https://review.openstack.org/#/q/branch:master+topic:bindep-mv
> --
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>


[openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Clay Gerrard
The 
use_untested_probably_broken_deprecated_manager_so_maybe_i_can_migrate_cross_fingers
option sounds good!  The experiment would be then if it's still enough of a
stick to keep 3rd party drivers pony'd up on their commitment to the Cinder
team to consistently ship quality releases?

What about maybe the operator just not upgrading till post migration?  It's
the migration that sucks right?  You either get to punt a release and hope
it gets "back in good faith" or do it now and that 3rd party driver has
lost your business/trust.

-Clay

On Friday, August 12, 2016, Walter A. Boring IV wrote:

>
> I was leaning towards a separate repo until I started thinking about all
> the overhead and complications this would cause. It's another repo for
> cores to watch. It would cause everyone extra complication in setting up
> their CI, which is already one of the biggest roadblocks. It would make
> it a little harder to do things like https://review.openstack.org/297140
> and https://review.openstack.org/346470 to be able to generate this:
> http://docs.openstack.org/developer/cinder/drivers.html. Plus more infra
> setup, more moving parts to break, and just generally more
> complications.
>
> All things that can be solved for sure. I just question whether it would
> be worth having that overhead. Frankly, there are better things I'd like
> to spend my time on.
>
> I think at this point my first preference would actually be to define a
> new tag. This addresses both the driver removal issue as well as the
> backporting of driver bug fixes. I would like to see third party drivers
> recognized and treated as being different, because in reality they are
> very different than the rest of the code. Having something like
> follows_deprecation_but_has_third_party_drivers_that_dont would make a
> clear statement that their is a vendor component to this project that
> really has to be treated differently and has different concerns
> deployers need to be aware of.
>
> Barring that, I think my next choice would be to remove the tag. That
> would really be unfortunate as we do want to make it clear to users that
> Cinder will not arbitrarily break APIs or do anything between releases
> without warning when it comes to non-third party drivers. But if that is
> what we need to do to effectively communicate what to expect from
> Cinder, then I'm OK with that.
>
> My last choice (of the ones I'm favorable towards) would be marking a
> driver as untested/unstable/abandoned/etc rather than removing it. We
> could flag these a certain way and have then spam the logs like crazy
> after upgrade to make it very and painfully clear that they are not
> being maintained. But as Duncan pointed out, this doesn't have as much
> impact for getting vendor attention. It's amazing the level of executive
> involvement that can happen after a patch is put up for driver removal
> due to non-compliance.
>
> Sean
>
> __
>
> I believe there is a compromise that we could implement in Cinder that
> enables us to have a deprecation
> of unsupported drivers that aren't meeting the Cinder driver requirements
> and allow upgrades to work
> without outright immediately removing a driver.
>
>
>1. Add a 'supported = True' attribute to every driver.
>2. When a driver no longer meets Cinder community requirements, put a
>patch up against the driver
>3. When c-vol service starts, check the supported flag.  If the flag
>is False, then log an exception, and disable the driver.
>4. Allow the admin to put an entry in cinder.conf for the driver in
>question "enable_unsupported_driver = True".  This will allow the c-vol
>service to start the driver and allow it to work.  Log a warning on every
>driver call.
>5. This is a positive acknowledgement by the operator that they are
>enabling a potentially broken driver. Use at your own risk.
>6. If the vendor doesn't get the CI working in the next release, then
>remove the driver.
>7. If the vendor gets the CI working again, then set the supported
>flag back to True and all is good.
>
>
> This allows a deprecation period for a driver, and keeps operators who
> upgrade their deployment from losing access to their volumes they have on
> those back-ends.  It will give them time to contact the community and/or do
> some research, and find out what happened to the driver.   This also
> potentially gives the operator time to find a new supported backend and
> start migrating volumes.  I say potentially, because the driver may be
> broken, or it may work enough to migrate volumes off of it to a new backend.
>
> Having unsupported drivers in tree is terrible for the Cinder community,
> and in the long run terrible for operators.
> Instantly removing drivers because CI is unstable is terrible for
> operators in the short term, 
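
To make the proposal concrete, the startup check in steps 3-5 might look
something like this sketch (the `supported` and `enable_unsupported_driver`
names come from the proposal above; everything else is illustrative, not
actual Cinder code):

```python
import logging

LOG = logging.getLogger("cinder.volume")


class DriverDisabled(Exception):
    """The driver was marked unsupported and the operator did not opt in."""


def start_driver(supported, enable_unsupported_driver):
    # Step 3: on c-vol start, check the driver's supported flag.
    if supported:
        return "running"
    # Steps 4-5: an unsupported driver only runs on explicit operator
    # opt-in via cinder.conf, and nags loudly on every start.
    if not enable_unsupported_driver:
        LOG.error("Driver is unsupported; refusing to start it.")
        raise DriverDisabled()
    LOG.warning("Running an UNSUPPORTED driver - use at your own risk.")
    return "running-unsupported"
```

The positive acknowledgement in step 5 is the key design point: the deploy
keeps working across the upgrade, but only after the operator has explicitly
accepted the risk in configuration.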

Re: [openstack-dev] [tc][cinder] tag:follows-standard-deprecation should be removed

2016-08-12 Thread Clay Gerrard
On Thu, Aug 11, 2016 at 7:14 AM, Erno Kuvaja  wrote:

>
> Lets say I was ops evaluating different options as storage vendor for
> my cloud and I get told that "Here is the list of supported drivers
> for different OpenStack Cinder back ends delivered by Cinder team", I
> start looking what the support level of those drivers are and see that
> Cinder follows standard deprecation which is fairly user/ops friendly
> with decent warning etc. I'm happy with that, not knowing OpenStack I
> would not even look if different subcomponents of Cinder happens to
> follow different policy. Now I buy storage vendor X HW and at Oct I
> realize that the vendor's driver is not shipped, nor any remains of it
> is visible anymore, I'd be reasonably pissed off. If I knew that the
> risk is there I would select my HW based on the negotiations that my
> HW is contractually tied to maintain that driver and it's CI, and that
> would be fine as well or if not possible I'd select some other
> solution I could get reasonably guarantee that it will be
> supported/valid at it's expected life time. As said I don't think
> there is anything wrong with the 3rd party driver policy, but
> maintaining that and the tag about standard-deprecation project wide
> is sending wrong message to those who do not know better to safeguard
> their rear ends.
>

Can we clarify if anyone is aware of this *actually* happening?  Because
this description of events sounds *terrible*?  If we have a case-in-point I
think it'd be down right negligent to not give the situation a proper RCA,
but I'd be *real* curious to hear the previous "4 whys" that lead to
"ultimately; the problems was the tags..."

I'm much more inclined to think that we should trust the Cinder team to do
what they think is best based on their experience.  If their experience is
that it's better for their operators that they *not* ship "deprecated (but
probably broken)" drivers - GOOD FOR THEM!  I think it'd be great if the
"standard deprecation policy" can be informed and updated based on the
experience of a successful project like Cinder - if not, yeah I really hope
they continue to do the *right* thing over the *standard* thing.

OTOH, if what they think is right is causing *real* problems, let's surface
those - if they got to this policy based on experience, new information
will spur new ideas.  But that's different than some pontification based on
hypotheticals.  Speaking of which, why is this even coming up in the
*development* ML?

-Clay


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-11 Thread Clay Gerrard
On Thu, Aug 11, 2016 at 2:25 PM, Ed Leafe  wrote:

>
> Overall this looks good, although it seems a bit odd to have
> ALL_CAPS_STRINGS to represent all:caps:strings throughout. The example you
> gave:
>
> >>> print os_caps.HW_CPU_X86_SSE42
> hw:cpu:x86:sse42
>
>
Just to be clear, this project doesn't *do* anything right?  Like it won't
parse `/proc/cpuinfo` and actually figure out a machine's cpu flags that can
then be broadcast as "capabilities"?

Like, TBH I think it took me longer than I would prefer to honestly admit
to find out about /sys/block//queue/rotational [1]

So if there was a library about standardizing how hardware capabilities are
discovered and reported - that maybe seems like a sane sort of thing for a
collection of related projects to agree on.  But I'm not sure if this does
that?

-Clay

1. https://www.kernel.org/doc/Documentation/block/queue-sysfs.txt
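
For what it's worth, the discovery side is mostly a couple of file reads; a
rough sketch (the paths are parameterized only so it can be exercised against
fake files, and the helper names are mine, not os-capabilities'):

```python
import os


def disk_is_rotational(device, sysfs_root="/sys/block"):
    # The kernel reports "1" for spinning disks and "0" for SSDs/NVMe in
    # the per-device queue/rotational attribute.
    path = os.path.join(sysfs_root, device, "queue", "rotational")
    with open(path) as f:
        return f.read().strip() == "1"


def cpu_flags(cpuinfo_path="/proc/cpuinfo"):
    # On x86 the feature flags (sse4_2, avx, ...) live on the "flags" line.
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()
```

Agreeing on *where* and *how* facts like these get discovered and reported is
the part a shared library could standardize.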


Re: [openstack-dev] [requirements] History lesson please

2016-08-10 Thread Clay Gerrard
On Tue, Aug 9, 2016 at 11:54 AM, Hayes, Graham  wrote:

>
> It might not make a difference to deployers / packagers who only deploy
> one project from OpenStack, but they are in the minority - having a
> known good minimum for requirements helps deployers who have multiple
> services to deploy.
>

I'm not sure how true that is.  I think in the largest cloud organizations
the different cloud services are managed by different teams that are
working hard to deliver a stable, well tested, continuously deployed,
*service* that is always rapidly approaching tracking master.

In these organizations it may be that the team ultimately fully responsible
for the successful quality and availability of just a *single* service -
doesn't need to *test* and deploy with the next new minor version of routes
(if their requirements don't mandate it) just to get out the next bug fix -
because they're not *trying* to co-install *all* the services onto a
homogenous fleet?

Even if these kinds of teams were let's say... a non-trivial minority... I
don't think their needs should be *ignored*.  I agree with John & Thomas
and am excited to hear about Tony's effort.

 -Clay


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-10 Thread Clay Gerrard
On Wed, Aug 10, 2016 at 10:57 AM, Matthew Treinish 
wrote:

>
> http://specs.openstack.org/openstack/qa-specs/specs/tempest/implemented/branchless-tempest.html
>
>
>
This was actually a *great* read, thanks for that link!

-Clay


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-10 Thread Clay Gerrard
On Wed, Aug 10, 2016 at 10:57 AM, Matthew Treinish 
wrote:

> We also test every incoming
> tempest change on all the stable branches, and nothing can land unless it
> works
> on all supported branches.


Did not know that, pretty awesome!


-Clay


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-10 Thread Clay Gerrard
On Wed, Aug 10, 2016 at 10:21 AM, Matthew Treinish 
wrote:

> But, to keep the gate running
> involves a lot of coordination between multiple projects that are tightly
> coupled. Things like an entire extra set of job definitions in zuul, a
> branch on
> global requirements, a devstack branch, extra devstack-gate logic, a bunch
> of
> extra config options for skips in tempest, extra node types, etc. Keeping
> all
> those things working together is a big part of what stable maint actually
> entails.


that actually makes more sense (sorry I missed any earlier explanation) -
I'm reading this as there is only ever one CI *system* running at a time,
and that system needs to know a bunch about how to *set up* a test job on
old branches - not that any of the old versions of code or tests or even
the history of the CI system that existed and was able to test them at the
time is GONE - it's just that the currently deployed system needs to move on...


> That's why at the EOL we tag
> the branch tip and then delete it. Leaving the branch around advertises
> that
> we're in a position to accept new patches to it, which we aren't after the
> EOL.
>
>
Oh wow... so it *is* GONE ;)

And really "we can't test it so no-one can" might be a big part of the
issue that was brought up in the earlier thread.  Maybe trying to support
stable branches longer than 18 months is *not* something can to be broadly
supported inside of OpenStack (there seemed to be some interest in the
etherpad going out to 24 months some day, even though older branches would
have less and less support for new testing capabilities).  But I think the
heart of this thread is "we appreciate the complexity and effort that it
takes to deliver what we have for older branches.  [full stop]  We need a
way to extend some minimal life support into older releases in a way that
is compatible with the current policy.  [full stop] "

Would it be *too* confusing to have "End of Full OpenStack Supported
Official Testing/Life" != "End of a projects commitment to people running
clouds using our software to try and help them be successful"?  Without
having to define unilaterally for every installation that the only option
for success is upgrade to the next about to be abandoned in 6-18 mo major
release?

I think ideally we'd be looking for a way to let them have their cake
without extra work.

OTOH, forking to support old branches seems just as reasonable to me as
well (that's what we do)...

However, I fully admit, I'm probably thinking about it wrong.  :D

-Clay


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-10 Thread Clay Gerrard
On Mon, Aug 8, 2016 at 8:31 AM, Matthew Treinish 
wrote:

> When we EOL a branch all of the infrastructure for running any ci against
> it goes away.


But... like... version control?  I mean I'm sure it's more complicated than
that or you wouldn't have said this - but I don't understand, sorry.

Can you elaborate on this?

-Clay


Re: [openstack-dev] [Cinder] [stable] [all] Changing stable policy for drivers

2016-08-10 Thread Clay Gerrard
On Wed, Aug 10, 2016 at 7:42 AM, Ben Swartzlander 
wrote:

>
> A big source of problems IMO is that tempest doesn't have stable branches.
> We use the master branch of tempest to test stable branches of other
> projects, and tempest regularly adds new features.
>

Why not exactly this?  +1000, just fix this.


Re: [openstack-dev] [tripleo] glance backend: replace swift by file in CI

2016-06-27 Thread Clay Gerrard
There's probably some minimal gain in cross compatibility testing to
sticking with the status quo.  The Swift API is old and stable, but I
believe there was some bug in recent history where some return value in
swiftclient changed from a iterable to a generator or something and some
aggressive non-duck type checking broke something somewhere

I find that bug report sorta interesting; the reported memory pressure
there doesn't make sense.  Maybe there's some non-essential middleware
configured on that proxy that's causing the workers to bloat up like that?

-clayg

On Mon, Jun 27, 2016 at 12:30 PM, Emilien Macchi  wrote:

> Hi,
>
> Today we're re-investigating a CI failure that we had multiple times [1]:
> Swift memory usage grows until it is OOM-killed.
>
> The perimeter of this thread is about our CI and not production
> environments.
> Indeed, our CI is running limited resources while production
> environments should not hit this problem.
>
> After some investigation on #tripleo, we found out this scenario was
> happening almost every time since recently:
>
> * undercloud is deployed, glance and swift are running. Glance is
> configured with Swift backend to store images.
> * tripleo CI upload overcloud image into Glance, image is successfully
> uploaded.
> * when overcloud starts deploying, some nodes randomly fail to deploy
> because the undercloud OOM-kills swift-proxy-server that is still
> sending the overcloud image requested by Glance API. Swift fails,
> Glance fails, overcloud deployment fails with a "No valid hosts
> found".
>
> It's likely due to performances issues in our CI, and there is nothing
> we can do but adding more resources or reducing the number of
> environments, something we won't do at this time, because our recent
> improvements in our CI (more ram, SSD, etc).
>
> As a first iteration, I propose [2] that we stop using Swift as a
> backend for Glance. Indeed, our undercloud is currently single-node, I
> see zero value of using Swift to store the overcloud image.
> If there is a value, then we can add the option to whether or not
> using it (and set it to False in our CI to use file backend, which
> won't lead to OOM).
>
> Note: on the overcloud: we currently support file, swift and rbd
> backends, that you can easily select during your deployment.
>
> [1] https://bugs.launchpad.net/tripleo/+bug/1595916
> [2] https://review.openstack.org/#/c/334555/
> --
> Emilien Macchi
>


[openstack-dev] RFC 2616 was *so* 2010

2016-02-05 Thread Clay Gerrard
... really more like 1999, but when OpenStack started back in '10 - RFC
2616 was the boss.

Since then (circa '14) we've got 7230 et. al. - a helpful attempt to
disambiguate things!  Hooray progress!

But when someone recently opened this bug I got confused:

https://bugs.launchpad.net/swift/+bug/1537811

The wording in 7230 *is* in fact pretty clear - MUST NOT [send a
content-length header, zero or otherwise, with a 204 response] - but I
really can't find nearly as strong a prescription in 2616.

Swift is burdened with a long lived, stable API - which has led to wide
adoption from a large ecosystem of clients that have for better or worse
often adopted the practice of expecting the API to behave the way it does
even when we might otherwise agree it has a wart here or there.

But I'm less worried about the client part - we've handled that plenty of
times in the past - ultimately it's a value/risk trade off.  Can we fix it
without breaking anything - if we do break someone what's the risk of that
fallout vs. the value of cleaning it up now (in this particular example RFC
7230 is equally strongly prescriptive of clients, in that we should be able
to say "content-length: booberries" in a 204 response and a well behaved
client is expected to ignore the header and know that the 204 response is
terminated with the first blank line following the headers).  Again, we've
handled this before and I'm sure we'll make the right choice for our
project and broad client base.

But I *am* worried about RFC 7230!?  Is it reasonable that a HTTP 1.1
compliant server according to 2616 could possibly NOT be a HTTP 1.1
compliant server after 7230?  Should the wording of this particular
prescription be SHOULD NOT (is that even possible?!  I think I read
somewhere that RFC's can have revisions; but I always just pretend they're
like some sort of divine law which must be followed or face eternal scorn
from your fellow engineers)  Maybe sending a "content-length: 0" header
with a 204 response was *never* tolerable (despite being descriptive and
innocuous), but you just couldn't tell you weren't conforming because of
all the reasons 7230 got drafted in the first place!?  Does anyone know how
to get ahold of Mark Nottingham so he can explain to me how all this works?
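
For what it's worth, the client-side framing rule 7230 prescribes boils down
to something like this (a simplified sketch of section 3.3.3 that ignores
Transfer-Encoding and the HEAD/CONNECT special cases):

```python
def response_body_length(status, headers):
    # Per RFC 7230 sec. 3.3.3: 1xx, 204, and 304 responses never carry a
    # message body, so any Content-Length on them - zero, "booberries",
    # or otherwise - must be ignored by a well-behaved client.
    if 100 <= status < 200 or status in (204, 304):
        return 0
    value = headers.get("content-length")
    if value is None:
        return None  # body is terminated by connection close
    return int(value)
```

Which is exactly why a descriptive "content-length: 0" on a 204 *ought* to be
innocuous to any conforming client, whatever the server-side MUST says.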

-Clay


Re: [openstack-dev] [openstack-swift] JOSS package to access Swift - any experience?

2016-01-11 Thread Clay Gerrard
I've used it before in a limited capacity; and still recommend it to java
developers who want to use a language binding to connect to swift.

As best I can tell the primary maintainers for the project still work at
the same company, but there are some pending issues that haven't received
attention:

https://github.com/javaswift/joss/pulse/monthly

And some pull requests with associated issues and passing tests that
haven't been merged. e.g.

https://github.com/javaswift/joss/pull/86

... hard to say what to do - they're obviously too busy to keep up with it
- and may not be using it or interested in it anymore.  If you're interested
probably email the maintainer(s) and ask if there's any help that could be
offered?

-Clay


On Sun, Jan 10, 2016 at 12:21 AM, Gil Vernik  wrote:

> Hello,
>
> Java OpenStack Storage aka JOSS is a
> dedicated Java binding for accessing the Swift REST API.
> Reference to this package appears on
> https://wiki.openstack.org/wiki/SDKs#Java
>
> I personally find this package very nice, lightweight and easy to use.
> However the last time someone contributed to this project was in November
> 2014.
> I know that Swift API didn't changed much from 2014, but still recent
> Swift and Keystone additions are missed in JOSS.
>
> I wonder if someone from Swift community uses this package.
>
>
> Thanks,
> Gil Vernik.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Feedback about Swift API - Especially about Large Objects

2015-10-09 Thread Clay Gerrard
A lot of these deficiencies are drastically improved with static large
objects - and non-trivial to address (impossible?) with DLO's because of
their dynamic nature.  It's unfortunate, but DLO's don't really serve your
use-case very well - and you should find a way to transition to SLO's [1].

We talked about improving the checksumming behavior in SLO's for the
general naive sync case back at the hack-a-thon before the Vancouver summit
- but it's tricky (MD5 => CRC) - and would probably require an API version
bump.

All we've been able to get done so far is improve the native client
handling [2] - but if using SLO's you may find a similar solution quite
manageable.
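To make the mismatch concrete - a hedged sketch (segment contents are made up)
of the two checksums a naive sync client ends up comparing: the md5 of the
bytes versus the large-object style Etag described in this thread, the md5 of
the concatenated per-segment md5s:

```python
import hashlib

def whole_file_md5(chunks):
    # what a naive sync client computes over its local copy
    h = hashlib.md5()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

def large_object_etag(chunks):
    # the large-object style Etag discussed in this thread: the md5 of
    # the concatenated per-segment md5s, NOT the md5 of the bytes
    h = hashlib.md5()
    for chunk in chunks:
        h.update(hashlib.md5(chunk).hexdigest().encode('ascii'))
    return h.hexdigest()

# two made-up segments of a split upload
segments = [b'a' * 1024, b'b' * 1024]
```

The two values essentially never agree, which is why a client comparing a
local md5 against either the listing hash (the empty-file md5) or the Etag
keeps re-downloading the object on every sync.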

Thanks for the feedback.

-Clay

1.
http://docs-draft.openstack.org/91/219991/7/check/gate-swift-docs/75fb84c//doc/build/html/overview_large_objects.html#module-swift.common.middleware.slo
2.
https://github.com/openstack/python-swiftclient/commit/ff0b3b02f07de341fa9eb81156ac2a0565d85cd4

On Friday, October 9, 2015, Pierre SOUCHAY 
wrote:

> Hi Swift Developpers,
>
> We have been using Swift as an IaaS provider for more than two years now,
> but this mail is about feedback on the API side. I think it would be great
> to include some of the ideas in future revisions of API.
>
> I’ve been developping a few Swift clients in HTML (in Cloudwatt Dashboard)
> with CORS, Java with Swing GUI (
> https://github.com/pierresouchay/swiftbrowser) and Go for Swift to
> filesystem (https://github.com/pierresouchay/swiftsync/), so I have now a
> few ideas about how improving a bit the API.
>
> The API is quite straightforward and intuitive to use, and writing a
> client is not that difficult, but unfortunately, the Large Object support
> is not easy at all to deal with.
>
> The biggest issue is that there is no way to know whether a file is a
> large object when performing listings using JSON format, since, AFAIK a
> large object is an object with 0 bytes (so its size in bytes is 0), but it
> also has a hash of a zero file bytes.
>
> For instance, a signature of such object is :
>  {"hash": "d41d8cd98f00b204e9800998ecf8427e", "last_modified":
> "2015-06-04T10:23:57.618760", "bytes": 0, "name": "5G", "content_type": "
> octet/stream"}
>
> which is, exactly the hash of a 0 bytes file :
> $ echo -n | md5
> d41d8cd98f00b204e9800998ecf8427e
>
> Ok, now lets try HEAD :
> $ curl -vv -XHEAD -H X-Auth-Token:$TOKEN '
> https://storage.fr1.cloudwatt.com/v1/AUTH_61b8fe6dfd0a4ce69f6622ea7e0f/large_files/5G
> …
> < HTTP/1.1 200 OK
> < Date: Fri, 09 Oct 2015 19:43:09 GMT
> < Content-Length: 50
> < Accept-Ranges: bytes
> < X-Object-Manifest: large_files/5G/.part-50-
> < Last-Modified: Thu, 04 Jun 2015 10:16:33 GMT
> < Etag: "479517ec4767ca08ed0547dca003d116"
> < X-Timestamp: 1433413437.61876
> < Content-Type: octet/stream
> < X-Trans-Id: txba36522b0b7743d683a5d-00561818cd
>
> WTF ? While all files have the same value for ETag and hash, this is not
> the case for Large files…
>
> Furthermore, the ETag is not the md5 of the whole file, but the hash of
> the hashes of all manifest files (as described somewhere hidden deeply in the
> documentation)
>
> Why this is a problem ?
> ---
>
> Imagine a « naive »  client using the API which performs some kind of Sync.
>
> The client download each file and when it syncs, compares the local md5 to
> the md5 of the listing… of course, the hash is the hash of a zero bytes
> files… so it downloads the file again… and again… and again. Unfortunately
> for our naive client, this is exactly the kind of files we don’t want to
> download twice… since the file is probably huge (after all, it has been
> split for a reason no ?)
>
> I think this is really a design flaw since you need to know everything
> about the Swift API and its extensions to behave properly. The minimum would
> be to at least return the same value as the ETag header.
>
> OK, let’s continue…
>
> We are not so naive… our Swift Sync client knows that 0-byte files need more
> work.
>
> * First issue: we have to know whether the file is a « real » 0-byte
> file or not. You may think most people do not create 0-byte files after
> all… but that assumption is wrong. Actually, I have seen two Object Storage
> middlewares using many 0-byte files (for instance to store metadata or to
> set up some kind of directory-like structure). So, in this case, we need to
> perform a HEAD request for each 0-byte file. If you have 1000 files like
> this, you have to perform 1000 HEAD requests to finally learn that there is
> no large file at all. Not very efficient. Your Swift Sync client took 1
> second to sync 20G of data with the naive approach; now, you need 5 minutes…
> the hash of 0 bytes is not a good idea at all.
>
> * Second issue: since the hash is the hash of all parts (I have an idea
> about why this decision was made, probably for performance reasons), your
> client cannot work on files since the hash of local file is not the hash of
> the 

Re: [openstack-dev] [hacking] [style] multi-line imports PEP 0328

2015-08-25 Thread Clay Gerrard
On Tue, Aug 25, 2015 at 8:45 AM, Kevin L. Mitchell 
kevin.mitch...@rackspace.com wrote:

 On Mon, 2015-08-24 at 22:53 -0700, Clay Gerrard wrote:
  So, I know that hacking has H301 (one import per line) - but say maybe
  you wanted to import *more* than one thing on a line (there are some
  exceptions, right?  sqlalchemy migrations or something?)

 There's never a need to import more than one thing per line given the
 rule to only import modules, not objects.  While that is not currently
 enforced by hacking, it is a strong style guideline.  (Exceptions for
 things like sqlalchemy do exist, of course.)


Thank you for echoing my premise - H301 exists, but there are exceptions,
so...

On Mon, 2015-08-24 at 22:53 -0700, Clay Gerrard wrote:
Anyway - I'm sure there could be a pep8 plugin rule that enforces use of
parentheses for multi-line imports instead of backslash line breaks [1] - but
would that be something that hacking would want to carry (since *most* of
the time H301 would kick in first?) - or if not; is there a way to plug it
into pep8 outside of hacking without having to install some random one-off
extension for this one rule separately?

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [hacking] [style] multi-line imports PEP 0328

2015-08-24 Thread Clay Gerrard
So, I know that hacking has H301 (one import per line) - but say maybe you
wanted to import *more* than one thing on a line (there are some exceptions,
right?  sqlalchemy migrations or something?)

Anyway - I'm sure there could be a pep8 plugin rule that enforces use of
parentheses for multi-line imports instead of backslash line breaks [1] - but
would that be something that hacking would want to carry (since *most* of
the time H301 would kick in first?) - or if not; is there a way to plug it
into pep8 outside of hacking without having to install some random one-off
extension for this one rule separately?

-Clay

1. https://www.python.org/dev/peps/pep-0328/#rationale-for-parentheses
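The core logic of such a check might look like this - purely a sketch with a
made-up error code, leaving out the real flake8/hacking registration plumbing:

```python
import re

# Purely a sketch with a made-up error code (H9XX); a real hacking/pep8
# check would be wired up through the flake8 plugin machinery.  It flags
# an import continued with a backslash instead of PEP 328 parentheses.
BACKSLASH_IMPORT = re.compile(r'^\s*(from|import)\b.*\\\s*$')

def check_import_continuation(physical_line):
    # return an (offset, message) tuple like pep8-style checks do,
    # or None when the line is fine
    if BACKSLASH_IMPORT.match(physical_line):
        return 0, 'H9XX: use parentheses for multi-line imports (PEP 328)'
```

Run against `from foo import bar, \` it complains; the parenthesized
`from foo import (bar, baz)` form stays quiet, as do non-import continuations.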
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Removing python-swiftclient from requirements.txt

2015-07-29 Thread Clay Gerrard
So helpful!  Thank you.

On Wed, Jul 29, 2015 at 7:48 AM, Doug Hellmann d...@doughellmann.com
wrote:



 There is some documentation in the pbr manual
 (http://docs.openstack.org/developer/pbr/#extra-requirements). The
 feature is implemented throughout the packaging tool chain now.


Ah, excellent!  PEP 0426 seemed keen on standardizing, but I'm not seeing
any recent movement and setuptools support [2] seems to indicate the
ecosystem can move forward without it?

1. https://www.python.org/dev/peps/pep-0426/#extras-optional-dependencies
2.
http://pythonhosted.org/setuptools/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies




 Here one would say pip install foo[testing] and on Python 2.7
 would also get the quux library.


I knew pip constantly telling me to upgrade it would pay off eventually.
So cool.

Thanks!

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [packaging][all] setting minimum version of setuptools in setup.py

2015-07-29 Thread Clay Gerrard
I agree an error message is better than breaking for insane reasons.

But... maybe as an aside... what about not breaking?

How come the OpenStack ecosystem doesn't have to wait for PEP 426 to be
approved and for setuptools 17.1 to be widely deployed before it can
require/depend on it?  Is there no failure/degraded case where we can take
advantage of moving forward in the infrastructure where we apparently need
it - but not necessarily force everyone else to upgrade if it's not
directly solving something for them (e.g. people doing packaging of
openstack projects, but don't personally necessarily currently maintain or
distribute a setuptools package)?
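As a sketch of the fail-fast half of that trade-off (illustrative only - not
from any OpenStack project), a guard like this would at least turn the "one
weird error" into a readable message:

```python
# Hypothetical fail-fast guard for the top of a setup.py - not from any
# OpenStack project.  Old setuptools can't parse PEP 426 environment
# markers, so at least replace its cryptic failure with a clear message.
MINIMUM = (17, 1)

def version_tuple(version):
    # "17.1" -> (17, 1); deliberately crude, just enough for a guard
    parts = []
    for piece in version.split('.')[:2]:
        digits = ''.join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def check_setuptools():
    import setuptools
    if version_tuple(setuptools.__version__) < MINIMUM:
        raise SystemExit(
            'setuptools >= %d.%d is required (found %s); please upgrade'
            % (MINIMUM + (setuptools.__version__,)))
```

Since setuptools can't self-upgrade, this doesn't fix anything - it just
tells the person doing the packaging exactly what to go fix.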

On the web this happens all the time: "Sure, maybe you can't get the newest
HTML5 wiz-bang, but I can still render something on IE8; OTOH, IE7, wow -
pls upgrade" vs. "How are you not even running setuptools 17.1 right now in
your build environment - it was *literally* released *almost* two months
ago!?  I... I can't even... it hurts to look at you."

Just Curious.

I only recently found out that PEP 426 was a thing, so I think it's pretty
great to see people driving the python packaging ecosystem forward.  For
those involved.  Kudos.

-Clay

On Wed, Jul 29, 2015 at 10:27 AM, Robert Collins robe...@robertcollins.net
wrote:

 Similar to pbr, we have a minimum version of setuptools required to
 consistently install things in OpenStack. Right now thats 17.1.

 However, we don't declare a setup_requires version for it.

 I think we should.

 setuptools can't self-upgrade, and we don't have declarative deps yet,
 so one reaction I expect here is 'how will this help'.

 The problem lies in the failure modes. With no dependency declared,
 setuptools will try and *silently fail*, or try and fail with this one
 weird error - that doesn't say anything about 'setuptools 3.3. cannot
 handle PEP 426 version markers'.

 If we set a minimum (but not a maximum) setuptools version as a
 setup_requires, I think we'll signal our actual dependencies to
 redistributors, and folk consuiming python packages, in a much more
 direct fashion. They'll still have to recover manually, but thats ok
 IMO. As long as we don't set upper bounds, we won't deadlock ourselves
 like we did in the past.

 -Rob

 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [glance] Removing python-swiftclient from requirements.txt

2015-07-28 Thread Clay Gerrard
Doug,

I believe our glance friends are not the only project with some open
questions on dealing with the required dependency for optional plugin
use-case.  You've made a recommendation to leverage some python tooling
functionality that I'm not familiar with.  I was hoping I could probe you
to elaborate so I can try and educate myself more?

... inline

On Tue, Jul 28, 2015 at 4:55 PM, Doug Hellmann d...@doughellmann.com
wrote:



 Please set up an extras entry for each backend instead of just
 removing the dependencies.  That will signal to users that you know
 what dependencies there are for a backend,


You referenced nova [1], and oslo.versionedobjects [2] for examples - but
I'd be more curious about the documentation - do you have any idea where I
might look for it?  Is this a feature of pkg_resources, distutils,
setuptools, pbr?  What exactly does describing dependencies via this
extras key afford?


 but that they are optional,
 and still allow someone to do the equivalent of pip install
 glance[vmware] or pip install glance[swift] to get those
 dependencies.


I'm not familiar with that syntax for pip or its equivalent!  That sounds
awesome!  Can you do like [extras:pluginname] in your setup.cfg and pip
install project[pluginname] just works!?  OMGBBQ!
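For reference, the pbr-flavored declaration being described looks roughly like
this - group and requirement names here are illustrative, not Glance's actual
ones, with the environment-marker line mirroring the quux example quoted in
this thread:

```ini
# setup.cfg - illustrative only; group and requirement names are made up
[extras]
swift =
    python-swiftclient
vmware =
    oslo.vmware
# an environment marker, as in the quux example from this thread:
testing =
    quux:python_version=='2.7'
```

With a section like that in place, `pip install glance[swift]` (or `.[swift]`
in a tox environment's deps) pulls in the extra group's requirements on top
of the base ones.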


 Nova and oslo.versionedobjects have examples in their
 setup.cfg if you need a template.


Hrm... I'm missing how either one of those setup.cfg's [1, 2] includes an
example relevant to this use-case (i.e. required dependency for optional
backend plugin)?



 I didn't mention in the reviews, but this will also make integration
 tests in our gate easier, since you can put .[vmware] or .[swift] in
 the tox.ini to pull in those dependencies.


Hrm... yes, testing.  So that's in part just a new -e for the tox.ini - but
I'm not quite sure I follow how each environment would specify different
dependencies for the virtualenv?

I hope you can point me to some more information on the subject.

Thank you very much for pushing this out to a wider audience,

clayg

1. https://github.com/openstack/nova/blob/master/setup.cfg
2.
https://github.com/openstack/oslo.versionedobjects/blob/master/setup.cfg#L25
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-22 Thread Clay Gerrard
On Wed, Jul 22, 2015 at 12:37 PM, Luse, Paul E paul.e.l...@intel.com
wrote:



 Wrt why the replication code seems to work if you delete just a .data


no it doesn't - https://gist.github.com/clayg/88950d77d25a441635e6


  forces a listing every 10 passes for some reason.  Clay?


IIRC the every 10 passes trick is for suffix dirs; if that's not in the
reconstructor we should probably add it - an easy test would be to rm a suffix
tree and let the reconstructor run for a while.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-22 Thread Clay Gerrard
On Wed, Jul 22, 2015 at 12:24 PM, Changbin Liu changbin@gmail.com
wrote:


 But now I wonder: is it by design that EC does not handle an accidental
 deletion of just the data file?


Well, the design goal was not "do not handle the accidental deletion of
just the data file" - it was "make replication fast enough that it works" -
and that required not listing all the dirs all the time.


 Deleting both data file and hashes.pkl file is more like a
 deliberately-created failure case instead of a normal one.


To me deleting some file that swift wrote to disk without updating (or
removing) the index it normally updates during write/delete/replicate to
accelerate replication seems like a deliberately created failure case?  You
could try to flip a bit or truncate a data file and let the auditor pick it
up.  Or rm a suffix and wait for the every-so-often suffixdir listdir to
catch it, or remove an entire partition, or wipe a new filesystem onto the
disk.  Or shutdown a node and do a PUT, then shutdown the handoff node, and
run the reconstructor.  Any of the normal failure conditions like that
(and plenty more!) are all detected and handled efficiently.

 To me Swift EC repairing seems different from the triple-replication mode,
 where you delete any data file copy, it will be restored.



Well, replication and reconstruction are different in lots of ways - but
not this part.  If you rm a .data file without updating the index you'll
need some activity (post/copy/put/quarantine) in the suffix before the
replication engine can notice.

Luckily (?) people don't often go under the covers into the middle of the
storage system and rm data like that?

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack] [Swift] Erasure coding reconstructor doesn't work

2015-07-21 Thread Clay Gerrard
How did you delete one data fragment?

Like replication the EC consistency engine uses some sub directory hashing
to accelerate replication requests in a consistent system - so if you just
rm a file down in a hashdir somewhere you also need to delete the
hashes.pkl up in the part dir (or call the invalidate_hash method like PUT,
DELETE, POST, and quarantine do)
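If you do find yourself rm'ing data by hand, the cleanup described above can
be sketched like this - illustrative only; the partition-directory layout is
the usual /srv/node one, and inside Swift proper you'd call the
invalidate_hash helper instead of poking the file directly:

```python
import os

# Illustrative only: after rm'ing a .data file by hand, also drop the
# cached suffix-hash index so the consistency engine re-examines the
# partition on its next pass.  part_dir is the partition directory,
# e.g. /srv/node/<device>/objects/<partition>/
def invalidate_partition_hashes(part_dir):
    hashes_pkl = os.path.join(part_dir, 'hashes.pkl')
    try:
        os.remove(hashes_pkl)
        return True
    except OSError:
        # already gone (or never written) - nothing to invalidate
        return False
```

Without that step, nothing will notice the missing .data file until some
other activity (PUT, POST, DELETE, quarantine) touches the same suffix.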

Every so often someone discusses the idea of having the auditor invalidate
a hash after long enough or take some action on empty hashdirs (mind the
races!) - but it's really only an issue when someone deletes something by
hand, so we normally manage to get distracted with other things.

-Clay

On Tue, Jul 21, 2015 at 1:38 PM, Changbin Liu changbin@gmail.com
wrote:

 Folks,

 To test the latest feature of Swift erasure coding, I followed this
 document (
 http://docs.openstack.org/developer/swift/overview_erasure_code.html) to
 deploy a simple cluster. I used Swift 2.3.0.

 I am glad that operations like object PUT/GET/DELETE worked fine. I can
 see that objects were correctly encoded/uploaded and downloaded at proxy
 and object servers.

 However, I noticed that swift-object-reconstructor didn't seem to work as
 expected. Here is my setup: my cluster has three object servers, and I use
 this policy:

 [storage-policy:1]
 policy_type = erasure_coding
 name = jerasure-rs-vand-2-1
 ec_type = jerasure_rs_vand
 ec_num_data_fragments = 2
 ec_num_parity_fragments = 1
 ec_object_segment_size = 1048576

 After I uploaded one object, I verified that: there was one data fragment
 on each of two object servers, and one parity fragment on the third object
 server. However, when I deleted one data fragment, no matter how long I
 waited, it never got repaired, i.e., the deleted data fragment was never
 regenerated by the swift-object-reconstructor process.

 My question: is swift-object-reconstructor supposed to be NOT WORKING
 given the current implementation status? Or, is there any configuration I
 missed in setting up swift-object-reconstructor?

 Thanks

 Changbin

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [swift] [ceilometer] installing ceilometermiddleware

2015-06-29 Thread Clay Gerrard
Is Swift the only project that uses the ceilometermiddleware - or just the
only project that uses ceilometermiddleware that doesn't already have a
oslo.config instance handy?

FWIW, there's a WIP patch that's trying to bring a *bit* of oslo.config love
to the keystone middleware for policy.json [1].  Not sure if a similar
approach could solve the broker/url/parsing issue described in that other
thread.

If swift is the only project that uses ceilometermiddleware currently it
seems to make sense to move the installation to lib/swift in devstack?

-Clay

1. https://review.openstack.org/#/c/149930/

On Sun, Jun 28, 2015 at 5:06 AM, Chris Dent chd...@redhat.com wrote:

 On Sat, 27 Jun 2015, Chris Dent wrote:

  * What code should be calling and hosting install_ceilometermiddleware?

  Since it is lib/swift that is using it, that makes some sense?
  Especially since it already has a relatively long block of
  configuration instruction.


 I've put up a devstack review for this change:

 https://review.openstack.org/#/c/196378/


 --
 Chris Dent tw:@anticdent freenode:cdent
 https://tank.peermore.com/tanks/cdent

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Why doesn't Swift cache object data?

2015-06-29 Thread Clay Gerrard
On Fri, Jun 26, 2015 at 8:16 PM, Michael Barton m...@weirdlooking.com
wrote:


 What's the logical difference between having object data in memory on a
 memcache server and having it in page cache on an object server?


+1 - about a syscall - i.e. not much - I think memcache does its own heap
management - so it's probably all userspace - but the locality is all wrong
- just do it on the object nodes [1]!

... if you want object data served from memory - just turn on
keep_cache_private and crank up keep_cache_size [2]

-Clay

1. concurrent GETs would help serve the warmed copies first -
https://review.openstack.org/#/c/117710/
2. mind your /proc/fs/xfs/stat graphs tho - maybe not an issue if your
object data filesystem is on an SSD storage policy
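A minimal object-server.conf fragment for those two knobs might look like
this (values are illustrative, not tuning advice):

```ini
# object-server.conf - illustrative values, not tuning advice
[app:object-server]
# also keep the page cache warm when serving authenticated
# (non-public) requests
keep_cache_private = true
# largest object size (in bytes) eligible for page-cache retention
keep_cache_size = 5242880
```

With those set, GETs for recently written small objects tend to be served
straight out of the kernel's page cache on the object node, which is the
locality argument made above.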
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] M Naming Poll ended - results still need to clear legal

2015-06-22 Thread Clay Gerrard
http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4983776e190c8dbc

how is the top pick not the author of The Book of Five Rings [1]?

-Clay

1. https://en.wikipedia.org/wiki/The_Book_of_Five_Rings



On Mon, Jun 22, 2015 at 7:07 AM, Monty Taylor mord...@inaugust.com wrote:

 Hey all!

 The M naming poll has concluded. I'd say and the winner is ... except
 we still need to get the winning choice(s) vetted for legal entanglements.

 Feel free to go and look at the results, they are publicly available -
 but please DON'T start making t-shirts or having parties (ok, have as
 many parties as you want) until we've gotten the alls-clear from the
 trademark folks. I've sent the list to them already, so they're working
 on it furiously as we speak.

 As soon as I get the word back, I will send out another announcement
 with the official result.

 Thanks!
 Monty

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [cinder] Some Changes to Cinder Core

2015-05-22 Thread Clay Gerrard
FWIW, as a nod to the great people we've had the privilege of working with
as Swift Core Maintainers - we've taken to promoting people who have moved
on to Core Emeritus:

https://review.openstack.org/#/c/155890/

On Fri, May 22, 2015 at 4:34 PM, Mike Perez thin...@gmail.com wrote:

 This is long overdue, but it gives me great pleasure to nominate Sean
 McGinnis for
 Cinder core.

 Reviews:
 https://review.openstack.org/#/q/reviewer:+%22Sean+McGinnis%22,n,z

 Contributions:
 https://review.openstack.org/#/q/owner:+%22Sean+McGinnis%22,n,z

 30/90 day review stats:
 http://stackalytics.com/report/contribution/cinder-group/30
 http://stackalytics.com/report/contribution/cinder-group/90

 As new contributors step up to help in the project, some move onto
 other things. I would like to recognize Avishay Traeger for his
 contributions, and now
 unfortunately departure from the Cinder core team.

 Cinder core, please reply with a +1 for approval. This will be left
 open until May 29th. Assuming there are no objections, this will go
 forward after voting is closed.

 --
 Mike Perez

 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-07 Thread Clay Gerrard
On Thu, May 7, 2015 at 3:48 PM, Clint Byrum cl...@fewbar.com wrote:

 I'm still very curious to hear if anybody has been willing to try to
 make Swift work on pypy.


yeah, Alex Gaynor was helping out with it for a while.  It worked.  And it
helped.  A little bit.

Probably still worth looking at if you're curious, but I'm not aware of
anyone who's currently working aggressively to productionize swift running
on pypy.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Go! Swift!

2015-05-07 Thread Clay Gerrard
On Thu, May 7, 2015 at 5:05 PM, Adam Lawson alaw...@aqorn.com wrote:

 what sort of limitations have you discovered that had to do specifically
 with the fact we're using Python?


Python is great.  A conscious decision to optimize for developer wall time
over CPU cycles has made it a great language for 20 years - and probably
will for another 20 at *least* (IMHO).

I don't think you would pick out anything to point at as a limitation of
python that you couldn't point at any dynamic interpreted language, but my
list is something like this:

   - Dynamic Interpreted Runtime overhead
   - Eventlet non-blocking hub is NOT OK for blocking operations (cpu, disk)
   - OTOH, dispatch to threads has overhead AND GIL
   - non-byte-aligned buffers restricts access to O_DIRECT and asyncio

*So often* this kinda stuff just doesn't matter.  Or even lots of times
even when it *does* matter - it doesn't matter that much in the grand
scheme of things.  Or maybe it matters a non-trivial amount, *but* there are
still other things that just matter more *right now*.  I think Swift has
been in that last case for a long time, maybe we still are - great thing
about open-source is redbo can publish an experiment on a feature branch in
gerrit and in-between the hard work of testing it - we can pontificate
about it on the mailing list!  ;)

FWIW, I don't think anyone should find it particularly surprising that a
mature data-path project would naturally gravitate closer to the metal in
the critical paths - it shouldn't be a big deal - unless it all works out -
and it's $%^! tons faster - then BOOYAH! ;)

But I'd suggest you be very careful not to draw any assumptions in general
about a great language like python even if this one time this one project
thought maybe they should find out if some part of the distributed system
might be better by some measure in something not-python.  ;)

Cheers,

-Clay
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift] HEAD Object API status code

2015-05-06 Thread Clay Gerrard
Can you give an example of an Object HEAD request returning 204?  I tried a
HEAD of an object with a body and also a HEAD of an object of length 0 and
I seem to get 200's...

Containers and accounts are a little more of an interesting story... [2]

-Clay

2. https://review.openstack.org/#/c/32647/

On Wed, May 6, 2015 at 5:40 PM, Ouchi, Atsuo ouchi.at...@jp.fujitsu.com
wrote:

 Hello Swift developers,

 I would like to ask you on a Swift API specification.

 Swift returns 204 status code to a valid HEAD Object request with a
 Content-Length header,
 whereas the latest HTTP/1.1 specification (RFC7230) states that you must
 not send
 the header with a 204 status code.

  3.3.2.  Content-Length
 (snip)
 A server MUST NOT send a Content-Length header field in any response
 with a status code of 1xx (Informational) or 204 (No Content).  A
 server MUST NOT send a Content-Length header field in any 2xx
 (Successful) response to a CONNECT request (Section 4.3.6 of
 [RFC7231]).

 What I would like to know is, when you designed Swift APIs what was the
 reasoning
 behind choosing the 204 status code for HEAD Object, over other status codes
 such as 200?

 Thanks,
 Atsuo
 --
 Ouchi Atsuo / ouchi.at...@jp.fujitsu.com
 tel. 03-6424-6612 / ext. 72-60728968
 Service Development Department, Foundation Service Division
 Fujitsu Limited


 __
 OpenStack Development Mailing List (not for usage questions)
 Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev] [Cinder] Ubuntu LVM hangs in gate testing

2015-03-17 Thread Clay Gerrard
Can the bits that make those devices invalid and leave udev out of date call
udevadm settle to just block till things are up to date, so that hopefully the
subsequent vg and pv scans are quicker?
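Concretely, the ordering being suggested could be sketched as follows - a
hypothetical helper, not anything in Cinder; the runner parameter exists only
so the sketch is testable without root or udev:

```python
import subprocess

# Hypothetical helper: block until udev's event queue drains so that
# vgs/pvs don't try to open device nodes that have already gone away,
# then run the scans that were hanging on stale devices.  `runner` is
# injectable purely to make the sketch testable; by default it shells
# out for real.
def settle_then_scan(runner=subprocess.check_call):
    runner(['udevadm', 'settle', '--timeout=10'])
    runner(['vgs'])
    runner(['pvs'])
```

The bet is that waiting a bounded amount for udev to catch up is much cheaper
than letting an LVM open() block on a vanished device until ENXIO.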

On Monday, March 16, 2015, John Griffith john.griffi...@gmail.com wrote:

 Hey Everyone,

 Thought I'd reach out to the ML to see if somebody might have some insight
 or suggestions to a problem I've been trying to solve.

 The short summary is:

 During a dsvm-full run in the gate there are times when /dev/sdX devices
 on the system may be created/deleted.  The trick here though is that on the
 Cinder side with LVM we're doing a lot of things that rely on VGS and LVS
 calls (including periodic tasks that are run).  Part of the scan routine
 unfortunately is for LVM to go through and open any block devices that is
 sees on the system and read them to see if it's an LVM device.

 The problem here is that the timing in the gate tests when everything is
 on a single node can result in udev not being quite up to date and the LVM
 scan process attempts to open a device that is no longer valid.  In this
 case (which we hit a few times on every single gate test), LVM blocks on
 the Open until the device times out and gives:
 -1 ENXIO (No such device or address)

 The problem is this can take up to almost a full minute for the timeout,
 so we have a few tests that take upwards of 150 seconds that actually
 should complete in about 30 seconds.  In addition this causes things like
 concurrent lvcreate cmds to block as well.  Note this is kind of
 inefficient anyway (even if all of the devices are valid), so there is a
 case to be made for not doing it if possible.

 Nothing fails typically in this scenario, things are just slow.

 I thought this would be easy to fix a while back by adding a local
 lvm.conf with a device filter.  It turns out however that the device filter
 only filters out items AFTER the vgs or lvs, it doesn't filter out the
 opens.  For that you need either:
 1. global_filter
 2. lvmetad service enabled

 The problem with '#1' is that the global_filter parameter is only honored
 on a global level NOT just in a local lvm.conf like we have currently.  To
 use that though we would have to set things such that Cinder was the only
 thing using LVM (not sure if that's doable or not).

 The problem with '#2' is that Trusty doesn't seem to have lvmetad in it's
 lvm2 packages until 2.02.111 (which isn't introduced until Vivid).

 I'm wondering if anybody knows of a backport or another method to get
 lvmetad capability in Ubuntu Trusty?

 OR

 What are some thoughts regarding the addition of a global_filter in
 /etc/lvm.conf?  We'd have to make a number of modifications to any services
 in devstack that setup LVM to make sure their PV's are added to the
 filter.  This might not be a big deal because most everyone uses loopback
 files so we could just to a loop regex and hit everything in one shot (I
 think).  But this means that anybody using LVM in devstack for something
 else is going to need to understand what's going on and add their devices
 to the filter.

 Ok... so not really the short version after all, but I'm wondering if
 anybody has any ideas that maybe I'm missing here.  I'll likely proceed
 with the idea of a global filter later this week if I don't hear any strong
 objections, or even better maybe somebody knows how to get lvmetad on
 Trusty which I *think* would be ideal for a number of other reasons.

 Thanks,
 John
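
 For what it's worth, a global_filter along the lines described above might
 look something like this in /etc/lvm/lvm.conf (the patterns are purely
 illustrative, not a tested devstack configuration; unlike the plain filter
 setting, devices rejected by global_filter are never opened by scans):

```ini
devices {
    # Accept the loopback devices devstack typically uses to back its
    # volume groups; reject everything else so vgs/lvs scans never open
    # transient /dev/sdX block devices.  Patterns here are illustrative.
    global_filter = [ "a|^/dev/loop[0-9]+$|", "r|.*|" ]
}
```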



Re: [openstack-dev] [swift] auth migration and user data migration

2015-03-11 Thread Clay Gerrard
On Wed, Mar 11, 2015 at 1:16 PM, Weidong Shao weidongs...@gmail.com wrote:


 the url is encoded in the object hash! This somehow entangles the data
 storage/validity with its account and makes it difficult to migrate the
 data. I guess it is too late to debate on the design of this. Do you know
 the technical reasons for doing this?




Well, yeah - can't see much good coming of trying to debate the design :)

The history may well be an aside from the issue at hand, but...

Not having a lookup/indirection layer was a design principle for achieving
the desired scaling properties of Swift.  Before Swift some of the
developers that worked on it had built another system that had a lookup
layer and it was a huge pain in the ass after a half billion objects or so
- but as with anything it's not the only way to do it; we just tried
something and it seemed to work out.

I'd guess at least some of the justification came from: uri's don't change
- people change them [1].

Without a lookup layer that you can update (i.e. name -> resource becomes
new_name -> resource) - you can either create a new resource that happens
to have the same content as the other and delete the old OR add some custom
namespace redirection to make the resource accessible from another name (a
vanity url middleware comes up from time to time - reseller prefix rewrite
may be as good a use-case as any).

I made sure I was watching the swauth repo [2] - if you open any issues
there I'll try to keep an eye on them.  Thanks!

-Clay

1. http://www.w3.org/Provider/Style/URI.html
2. https://github.com/gholt/swauth/issues


Re: [openstack-dev] [swift] auth migration and user data migration

2015-03-11 Thread Clay Gerrard
On Mon, Mar 9, 2015 at 12:27 PM, Weidong Shao weidongs...@gmail.com wrote:


 I noticed swauth project is not actively maintained. In my local testing,
 swauth did not work after I upgraded swift to latest.


Hrm... I think gholt would be open to patches/support, I know of a number
of deployers of Swauth - so maybe if there's issues we should try to
enumerate them.


 I want to migrate off swauth. What are the auth alternative beside
 tempauth?



Keystone.  The only other systems I know about are proprietary - what are
your needs?


 On account-to-account server-side copy, is there an operation that is
 similar to mv? i.e., I want the data associated with an account to assume
 ownership of  a new account, but I do not want to copy the actual data on
 the disks.



The account url is encoded in the object hash - the only realistic way to
change the location (account/container/object) of an entity in swift is to
read from its current location, write it to the new location, then delete
the old object.

-Clay
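
As a rough illustration of that coupling, here is a sketch in the spirit of
Swift's hash_path (the prefix/suffix constants below are placeholders, not a
real cluster's secrets from /etc/swift/swift.conf):

```python
import hashlib

# Swift picks an object's partition and on-disk directory by hashing the
# full /account/container/object path plus per-cluster secrets, so the
# account name is baked into the object's location.
HASH_PATH_PREFIX = b''          # placeholder
HASH_PATH_SUFFIX = b'changeme'  # placeholder

def hash_path(account, container, obj):
    path = '/%s/%s/%s' % (account, container, obj)
    return hashlib.md5(
        HASH_PATH_PREFIX + path.encode('utf-8') + HASH_PATH_SUFFIX
    ).hexdigest()

old = hash_path('AUTH_old', 'photos', 'cat.jpg')
new = hash_path('AUTH_new', 'photos', 'cat.jpg')
print(old != new)  # True: a new account name means a new hash/location
```

Because the hashes differ, "moving" data between accounts necessarily means a
copy to the new location plus a delete of the old object.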


Re: [openstack-dev] auto-abandon changesets considered harmful (was Re: [stable][all] Revisiting the 6 month release cycle [metrics])

2015-03-02 Thread Clay Gerrard
On Mon, Mar 2, 2015 at 8:07 AM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 Why do you say auto-abandon is the wrong tool? I've no problem with the 1
 week warning if somebody wants to implement it - I can see the value. A
 change-set that has been ignored for X weeks is pretty much the dictionary
 definition of abandoned


+1 this

I think Tom's suggested "help us help you" is a great pre-abandon warning.
In swift as often as not the last message ended with something like "you
can catch me on freenode in #openstack-swift if you have any questions".

But I really can't fathom what's the harm in closing abandoned patches as
abandoned?

If the author doesn't care about the change enough to address the review
comments (or failing tests!) and the core reviewers don't care about it
enough to *fix it for them* - where do we think the change is going to
go?!  It sounds like the argument is just that instead of using "abandoned"
as an explicit description of an implicit state we can just filter these
out of every view we use to look for something useful as "no changes for X
weeks after negative feedback" rather than calling a spade a spade.

I *mostly* look at patches that don't have feedback.  notmyname maintains
the swift review dashboard AFAIK:

http://goo.gl/r2mxbe

It's possible that a pile of abandoned-changes-not-marked-as-abandoned
wouldn't actually interrupt my work-flow.  But I would imagine maintaining
the review dashboard might occasionally require looking at ALL the changes
in the queue in an effort to look for a class of changes that aren't
getting adequate feedback - that workflow might find the extra noise less
than helpful.

-Clay


Re: [openstack-dev] H302 considered harmful

2015-02-27 Thread Clay Gerrard
So, Swift doesn't enforce H302 - and our imports are sorta messy frankly -
but it doesn't really bother me, and I do rather enjoy the terseness of not
having to spell out the module name.  It's not really a chore to maintain,
if you don't know where a name came from split the window (or drop a
marker) and pop up to the top of the file and there it is - mystery solved.
  But I've been living in the code base too long to say if it hurts
new-comers trying to grok where things are coming from.  I'd be willing to
entertain
feedback on this.

But one thing that I'd really love to hear feedback on is if people using
H302 ever find it's inconvenient to enforce the rule *all the time*?
Particularly in stdlib where it'd be *such* bad form to collide with a
common name like `defaultdict` or `datetime` anyway, if you see one of those
names without the module - you *know* where it came from (hopefully?):

 * `collections.defaultdict(collections.defaultdict(list))` - no thank you
 * `datetime.datetime` - meh

Anyway, every time I start some greenfield project I try to make myself do
H302 (I *do* get so sick of "is it time.time() or time() in this file?") -
but I normally break down as soon as I get to a name I'd rather just have
right there in my globals... @contextlib.contextmanager,
functools.partial, itertools.ifilter - maybe it's just stdlib names?

Not sure if there's any compromise, probably better to either *just import
modules*, or live with the inconsistency (you eventually get nose-blind to
it ;P)

-Clay
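
For readers who haven't hit the rule, a toy contrast of the two import
styles under discussion (H302 is the "import only modules" check):

```python
# H302 prefers importing only modules, so the origin of every name is
# visible at the call site; the name-import style is terser but hides
# the origin until you scroll up to the imports.
import collections
from collections import defaultdict  # H302 would flag this style

by_module = collections.defaultdict(list)  # origin obvious, but verbose
by_name = defaultdict(list)                # terse; grep imports to find it

by_module['x'].append(1)
by_name['x'].append(1)
print(dict(by_module) == dict(by_name))  # True - same behavior either way
```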

On Wed, Feb 25, 2015 at 10:51 AM, Duncan Thomas duncan.tho...@gmail.com
wrote:

 Hi

 So a review [1] was recently submitted to cinder to fix up all of the H302
 violations, and turn on the automated check for them. This is certainly a
 reasonable suggestion given the number of manual reviews that -1 for this
 issue, however I'm far from convinced it actually makes the code more
 readable,

 Is there anybody who'd like to step forward in defence of this rule and
 explain why it is an improvement? I don't discount for a moment the
 possibility I'm missing something, and welcome the education in that case

 Thanks


 [1] https://review.openstack.org/#/c/145780/
 --
 Duncan Thomas



Re: [openstack-dev] [qa][swift] Signature of return values in tempest swift client

2015-02-13 Thread Clay Gerrard
On Fri, Feb 13, 2015 at 2:15 PM, David Kranz dkr...@redhat.com wrote:

 Swift is different in that most interesting data is in the headers except
 for GET methods, and applying the same methodology as the others does not
 make sense to me. There are various ways the swift client could be changed
 to return one value, or it could be left as is.


Can you point to the relevant implementation?

FWIW, we've found in swiftclient that it's been extremely restrictive to
return tuples and in retrospect would have preferred either a (status,
headers, body) signature (which unfortunately leaves a lot of interesting
parsing up to the client) or something more like a dictionary or
SwiftResponse that as described in the spec has properties for getting at
interesting values - and most importantly allow for future additive changes.

It sounds like you're on the right track trying to make clients return a
single value (or a dict or something) - I'm tertiarily curious about what
you come up with.

-Clay
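
As a sketch of the single-return-value idea, something like the following
(SwiftResponse and its property names are hypothetical, not an actual
tempest or swiftclient API):

```python
# A small response object with properties instead of a (status, headers,
# body) tuple: new properties can be added later without breaking every
# caller that unpacks a tuple.
class SwiftResponse:
    def __init__(self, status, headers, body=b''):
        self.status = status
        self.headers = {k.lower(): v for k, v in headers.items()}
        self.body = body

    @property
    def etag(self):
        return self.headers.get('etag')

    @property
    def content_length(self):
        value = self.headers.get('content-length')
        return int(value) if value is not None else None

resp = SwiftResponse(200, {'ETag': 'abc123', 'Content-Length': '11'},
                     b'hello world')
print(resp.etag, resp.content_length)  # abc123 11
```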


Re: [openstack-dev] [Swift] Swift GUI (free or open source)?

2015-01-27 Thread Clay Gerrard
https://github.com/cschwede/django-swiftbrowser is done by a swift core dev

You should browse:

http://docs.openstack.org/developer/swift/associated_projects.html#associated-projects

On Mon, Jan 26, 2015 at 11:50 AM, Adam Lawson alaw...@aqorn.com wrote:

 I'm researching a web-based visualization that simply displays
 OpenStack Swift and/or node status, cluster health etc in some manner.
 Being able to run a command would be cool but a little more than I need.
 Does such a thing currently exist? I know about SwiftStack but I'm
 wondering if there are other efforts that have produced a way to visualize
 Swift telemetry.

 Has anyone run across such a thing?


 *Adam Lawson*

 AQORN, Inc.
 427 North Tatnall Street
 Ste. 58461
 Wilmington, Delaware 19801-2230
 Toll-free: (844) 4-AQORN-NOW ext. 101
 International: +1 302-387-4660
 Direct: +1 916-246-2072




Re: [openstack-dev] [swift] a way of checking replicate completion on swift cluster

2014-12-04 Thread Clay Gerrard
More fidelity in the recons seems fine; statsd emissions are also a
popular target for telemetry radiation.
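
As an example of why more fidelity helps, a monitoring consumer of the recon
output proposed below could flag unhealthy peers in a few lines (the JSON
shape mirrors the example in this thread; the threshold is an illustrative
assumption):

```python
import json

# Parse a sample of the proposed object-replicator recon output and
# report node IPs whose failure count crosses a threshold.
sample = json.loads("""
{
    "replication_last": 1416334368.60865,
    "replication_stats": {
        "attempted": 13346,
        "failure": 870,
        "failure_nodes": {"192.168.0.1": 3,
                          "192.168.0.2": 860,
                          "192.168.0.3": 7},
        "success": 1908
    },
    "replication_time": 2316.5563162644703
}
""")

def failing_nodes(recon, threshold=100):
    # failure_nodes maps peer IP -> replication failure count
    nodes = recon.get('replication_stats', {}).get('failure_nodes', {})
    return sorted(ip for ip, count in nodes.items() if count >= threshold)

print(failing_nodes(sample))  # ['192.168.0.2']
```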

On Thu, Nov 27, 2014 at 5:01 AM, Osanai, Hisashi 
osanai.hisa...@jp.fujitsu.com wrote:


 Hi,

 I think it is a good idea to have the object-replicator's failure info
 in recon like the other replicators.

 I think the following info can be added in object-replicator in addition to
 object_replication_last and object_replication_time.

 Unless there is some technical reason not to add them, I can make the
 change.  What do you think?

 {
     "replication_last": 1416334368.60865,
     "replication_stats": {
         "attempted": 13346,
         "empty": 0,
         "failure": 870,
         "failure_nodes": {"192.168.0.1": 3,
                           "192.168.0.2": 860,
                           "192.168.0.3": 7},
         "hashmatch": 0,
         "remove": 0,
         "start": 1416354240.9761429,
         "success": 1908,
         "ts_repl": 0
     },
     "replication_time": 2316.5563162644703,
     "object_replication_last": 1416334368.60865,
     "object_replication_time": 2316.5563162644703
 }

 Cheers,
 Hisashi Osanai

 On Tuesday, November 25, 2014 4:37 PM, Matsuda, Kenichiro [mailto:
 matsuda_keni...@jp.fujitsu.com] wrote:
  I understood that the logs are necessary to judge whether there was any
  failure on the object-replicator.
  And also, I thought that recon info for the object-replicator including
  failure (just like the recon info of account-replicator and
  container-replicator) would be useful.
  Is there any reason failure is not included in recon?

 On Tuesday, November 25, 2014 5:53 AM, Clay Gerrard [mailto:
 clay.gerr...@gmail.com] wrote:
   replication logs

 On Friday, November 21, 2014 4:22 AM, Clay Gerrard [mailto:
 clay.gerr...@gmail.com] wrote:
  You might check if the swift-recon tool has the data you're looking
 for.  It can report
  the last completed replication pass time across nodes in the ring.


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



Re: [openstack-dev] [swift] a way of checking replicate completion on swift cluster

2014-11-24 Thread Clay Gerrard
replication logs

On Thu, Nov 20, 2014 at 9:32 PM, Matsuda, Kenichiro 
matsuda_keni...@jp.fujitsu.com wrote:

 Hi,

 Thank you for the info.

 I was able to get replication info easily by swift-recon API.
 But, I wasn't able to judge from the recon info whether there were any
 failures on the object-replicator.

 Could you please advise me on a way to get the object-replicator's failure
 info?

 [replication info from recon]
 * account

 --
 # curl http://192.168.1.11:6002/recon/replication/account | python
 -mjson.tool
 {
     "replication_last": 1416354262.7157061,
     "replication_stats": {
         "attempted": 20,
         "diff": 0,
         "diff_capped": 0,
         "empty": 0,
         "failure": 20,
         "hashmatch": 0,
         "no_change": 40,
         "remote_merge": 0,
         "remove": 0,
         "rsync": 0,
         "start": 1416354240.9761429,
         "success": 40,
         "ts_repl": 0
     },
     "replication_time": 21.739563226699829
 }

 --

 * container

 --
 # curl http://192.168.1.11:6002/recon/replication/container | python
 -mjson.tool
 {
     "replication_last": 1416353436.9448521,
     "replication_stats": {
         "attempted": 13346,
         "diff": 0,
         "diff_capped": 0,
         "empty": 0,
         "failure": 870,
         "hashmatch": 0,
         "no_change": 1908,
         "remote_merge": 0,
         "remove": 0,
         "rsync": 0,
         "start": 1416349377.3627851,
         "success": 1908,
         "ts_repl": 0
     },
     "replication_time": 4059.5820670127869
 }

 --

 * object

 --
 # curl http://192.168.1.11:6002/recon/replication | python -mjson.tool
 {
     "object_replication_last": 1416334368.60865,
     "object_replication_time": 2316.5563162644703
 }
 # curl http://192.168.1.11:6002/recon/replication/object | python
 -mjson.tool
 {
     "object_replication_last": 1416334368.60865,
     "object_replication_time": 2316.5563162644703
 }

 --

 Best Regards,
 Kenichiro Matsuda.


 From: Clay Gerrard [mailto:clay.gerr...@gmail.com]
 Sent: Friday, November 21, 2014 4:22 AM
 To: OpenStack Development Mailing List (not for usage questions)
 Subject: Re: [openstack-dev] [swift] a way of checking replicate
 completion on swift cluster

 You might check if the swift-recon tool has the data you're looking for.
 It can report the last completed replication pass time across nodes in the
 ring.

 On Thu, Nov 20, 2014 at 1:28 AM, Matsuda, Kenichiro 
 matsuda_keni...@jp.fujitsu.com wrote:
 Hi,

 I would like to know about a way of checking replicate completion on swift
 cluster.
 (e.g. after rebalanced Ring)

 I found the way of using swift-dispersion-report from Administrator's
 Guide.
 But, this way is not enough, because swift-dispersion-report can't check
 replication completion for other data that was not made by
 swift-dispersion-populate.

 And also, I found the way of using the replicators' logs from QA.
 But, I would like an easier way, because checking the logs below is very
 heavy.

   (account/container/object)-replicator * All storage node on swift cluster

 Could you please advise me for it?

 Findings:
   Administrator's Guide  Cluster Health

 http://docs.openstack.org/developer/swift/admin_guide.html#cluster-health
   how to check replicator work complete

 https://ask.openstack.org/en/question/18654/how-to-check-replicator-work-complete/

 Best Regards,
 Kenichiro Matsuda.




Re: [openstack-dev] [swift] a way of checking replicate completion on swift cluster

2014-11-20 Thread Clay Gerrard
You might check if the swift-recon tool has the data you're looking for.
It can report the last completed replication pass time across nodes in the
ring.

On Thu, Nov 20, 2014 at 1:28 AM, Matsuda, Kenichiro 
matsuda_keni...@jp.fujitsu.com wrote:

 Hi,

 I would like to know about a way of checking replicate completion on swift
 cluster.
 (e.g. after rebalanced Ring)

 I found the way of using swift-dispersion-report from Administrator's
 Guide.
 But, this way is not enough, because swift-dispersion-report can't check
 replication completion for other data that was not made by
 swift-dispersion-populate.

 And also, I found the way of using the replicators' logs from QA.
 But, I would like an easier way, because checking the logs below is very
 heavy.

   (account/container/object)-replicator * All storage node on swift cluster

 Could you please advise me for it?

 Findings:
   Administrator's Guide  Cluster Health

 http://docs.openstack.org/developer/swift/admin_guide.html#cluster-health
   how to check replicator work complete

 https://ask.openstack.org/en/question/18654/how-to-check-replicator-work-complete/

 Best Regards,
 Kenichiro Matsuda.




Re: [openstack-dev] Information Dispersal Algorithm implementation in OpenStack

2014-11-10 Thread Clay Gerrard
I've never read that paper before and cannot find a free copy online.
Based on the abstract it seems to be a parity-based algorithm (erasure
codes) and would not be directly applicable to replication-based
dispersion/data placement.

There is current work to enable an erasure code scheme for data encoding
[1] - but the current effort is focused on using the existing swift ring
implementation for partition dispersion and placement.

If your research overlaps with the general idea of taking an object and
splitting it into n chunks such that any k of the n chunks can reconstruct
the object, you may want to align with the current development on-going in
this area [2].

Best regards,

1.
https://github.com/openstack/swift-specs/blob/master/specs/swift/erasure_coding.rst
2.
https://review.openstack.org/#/q/status:open+project:openstack/swift+branch:feature/ec,n,z
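
For intuition, here is a toy illustration of the any-k-of-n property using
simple XOR parity with k=2, n=3 (far simpler than the Reed-Solomon style
codes the Swift EC work targets, but the same dispersal idea):

```python
# Store 1.5x the data as three chunks (two halves plus XOR parity) and
# survive the loss of any one chunk -- the k=2, n=3 case of "any k of n
# chunks can reconstruct the object".
def split(data: bytes):
    half = (len(data) + 1) // 2
    a, b = data[:half], data[half:].ljust(half, b'\0')  # pad odd lengths
    parity = bytes(x ^ y for x, y in zip(a, b))
    return {'a': a, 'b': b, 'p': parity, 'len': len(data)}

def reconstruct(chunks):
    a, b, p = chunks.get('a'), chunks.get('b'), chunks.get('p')
    if a is None:
        a = bytes(x ^ y for x, y in zip(b, p))  # recover a from b ^ parity
    if b is None:
        b = bytes(x ^ y for x, y in zip(a, p))  # recover b from a ^ parity
    return (a + b)[:chunks['len']]

parts = split(b'hello swift')
del parts['b']                 # lose any one of the three chunks
print(reconstruct(parts))      # b'hello swift'
```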

On Mon, Nov 10, 2014 at 2:23 PM, Aniket K anike...@tcs.com wrote:

 Dear Team,

 In OpenStack Swift, built-in replication technique is used to provide 3x+
 data redundancy that is enabling OpenStack to provide high availability. We
 are studying use of *Information Dispersal Algorithm(IDA)* as a technique
 instead of replication to provide high availability along with security.

 We would like to know,
 1.Was it considered in Swift before? If so, can you please share
 your opinion?
 2.Will this fit into existing Swift architecture that supports
 eventual consistency?
 3.Do you see advantages like storage efficiency, security etc. of
 using this technique over replication?

 Any pointer/suggestions/guidance in this regard will be appreciated.

 More information about IDA can be found at:
 *http://en.wikipedia.org/wiki/Secret_sharing*
 http://en.wikipedia.org/wiki/Secret_sharing

 Thanks in advance.

 Regards,
 Aniket Kulkarni

 =-=-=
 Notice: The information contained in this e-mail
 message and/or attachments to it may contain
 confidential or privileged information. If you are
 not the intended recipient, any dissemination, use,
 review, distribution, printing or copying of the
 information contained in this e-mail message
 and/or attachments to it are strictly prohibited. If
 you have received this communication in error,
 please notify us by reply e-mail or telephone and
 immediately and permanently delete the message
 and any attachments. Thank you




Re: [openstack-dev] [swift]Questions on concurrent operations on the same object

2014-11-10 Thread Clay Gerrard
Did you find out anything more on this?

There's lots of places in Swift organized around concurrent access to
objects - so I think it's probably good that you have that 423 response;
your clients will probably see it...

When you have multiple replicas the proxy's PUT will return shortly after
it has a quorum of successful responses - so in that sense it's possible
for a client to receive success while at least one node still has its
reference to that object in hand.  But I'm guessing that's not really your
setup?

Are you *sure* ssbench will never perform concurrent operations to the same
file?  Depending on the configuration - processes, total concurrency,
number of objects, etc. - it can be a non-trivial problem to coordinate all
of that.  ssbench may be doing best effort with no guarantee.

-Clay
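
The quorum behavior described above can be sketched in a few lines (the
status codes and simple majority rule are illustrative of replicated PUTs,
not the actual proxy implementation):

```python
# With replicated storage the proxy can return success once a majority of
# object servers respond, so one slow or failed backend can still be in
# flight when the client moves on to its next operation (e.g. a DELETE of
# the same object).
def quorum_size(n_replicas):
    return n_replicas // 2 + 1  # simple majority

def put_status(backend_statuses, n_replicas=3):
    ok = sum(1 for s in backend_statuses if 200 <= s < 300)
    return 201 if ok >= quorum_size(n_replicas) else 503

print(put_status([201, 201, 503]))  # 201: success with one backend failing
print(put_status([201, 503, 503]))  # 503: no quorum
```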


On Fri, Oct 31, 2014 at 6:32 PM, jordan pittier jordan.pitt...@scality.com
wrote:

 Hi guys,

 We are currently benchmarking our Scality object server backend for Swift.
 We basically created a new DiskFile class that is used in a new
 ObjectController that inherits from the native server.ObjectController.
 It's pretty similar to how Ceph can be used as a backend for Swift objects.
 Our DiskFile is used to make HTTP request to the Scality Ring which
 supports GET/PUT/Delete on objects.

 Scality implementation is here :
 https://github.com/scality/ScalitySproxydSwift/blob/master/swift/obj/scality_sproxyd_diskfile.py

 We are using SSBench to benchmark and when the concurrency is high, we see
 somehow interleaved operations on the same object. For example, our
 DiskFile will be asked to DELETE an object while the object is currently
 being PUT by another client. The Scality ring doesn't support multiple
 writers on the same object. So a lot of ssbench operations fail with an
 HTTP response '423 - Object is locked'.

 We dove into the ssbench code and saw that it should not do interleaved
 operations. By adding some logging in our DiskFile class, we kind of guessed
 that the Object server doesn't wait for the put() method of the
 DiskFileWriter to finish before returning HTTP 200 to the Swift Proxy. Is
 this explanation correct ? Our put() method in the DiskFileWriter could
 take some time to complete, thus this would explain that the PUT on the
 object is being finalized while a DELETE arrives.

 Some questions :
 1) Is it possible that the put() method of the DiskFileWriter is somehow
 non blocking ? (or that the result of put() is not awaited?). If not, how
 could ssbench think that an object is completely PUT and that ssbench is
 allowed to delete it ?
 2) If someone could explain me in a few words (or more :)) how Swift deals
 with multiple writers on the same object, that will be very much
 appreciated.

 Thanks a lot,
 Jordan




Re: [openstack-dev] [Ceph] Why performance of benchmarks with small blocks is extremely small?

2014-09-29 Thread Clay Gerrard
I also have limited experience with Ceph and rados bench - but it looks
like you're setting the number of worker threads to only 1?  (-t 1)

I think the default is 16, and most distributed storage systems
designed for concurrency are going to do a bit better if you exercise more
concurrent workers... so you might try turning that up until you see some
diminishing returns.  Be sure to watch for resource contention on the load
generating server.

-Clay

On Mon, Sep 29, 2014 at 4:49 AM, Pasquale Porreca 
pasquale.porr...@dektech.com.au wrote:

  Hello

 I have no experience with Ceph and this specific benchmark tool, anyway I
 have experience with several other performance benchmark tools and file
 systems and I can say it always happens that you get very very low
 performance results when the file size is too small (i.e. < 1 MB).

 My suspicion is that benchmark tools are not reliable for file sizes this
 small, since the time to write is so short that the overhead introduced by
 the test itself is not at all negligible.

 I saw that the default object size for rados is 4 MB, did you try your
 test without the option -b 512? I think the results should be different
 by several orders of magnitude.

 BR


 On 09/27/14 17:14, Timur Nurlygayanov wrote:

   Hello all,

  I installed OpenStack with Glance + Ceph OSD with replication factor 2
 and now I can see the write operations are extremely slow.
 For example, I can see only 0.04 MB/s write speed when I run rados bench
 with 512b blocks:

  rados bench -p test 60 write --no-cleanup -t 1 -b 512

  Maintaining 1 concurrent writes of 512 bytes for up to 60 seconds or 0
 objects
  Object prefix: benchmark_data_node-17.domain.tld_15862
    sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
      0       0         0         0         0         0         -         0
      1       1        83        82 0.0400341 0.0400391  0.008465 0.0120985
      2       1       169       168 0.0410111 0.0419922  0.080433 0.0118995
      3       1       240       239 0.0388959  0.034668  0.008052 0.0125385
      4       1       356       355 0.0433309 0.0566406   0.00837 0.0112662
      5       1       472       471 0.0459919 0.0566406  0.008343 0.0106034
      6       1       550       549 0.0446735 0.0380859  0.036639 0.0108791
      7       1       581       580 0.0404538 0.0151367  0.008614 0.0120654


 My test environment configuration:
  Hardware servers with 1Gb network interfaces, 64Gb RAM and 16 CPU cores
 per node, HDDs WDC WD5003ABYX-01WERA0.
  OpenStack with 1 controller, 1 compute and 2 ceph nodes (ceph on separate
 nodes).
 CentOS 6.5, kernel 2.6.32-431.el6.x86_64.

  I tested several config options for optimizations, like in
 /etc/ceph/ceph.conf:

  [default]
 ...
 osd_pool_default_pg_num = 1024
 osd_pool_default_pgp_num = 1024
 osd_pool_default_flag_hashpspool = true
 ...
 [osd]
 osd recovery max active = 1
 osd max backfills = 1
 filestore max sync interval = 30
 filestore min sync interval = 29
 filestore flusher = false
 filestore queue max ops = 1
 filestore op threads = 16
 osd op threads = 16
 ...
 [client]
 rbd_cache = true
 rbd_cache_writethrough_until_flush = true

  and in /etc/cinder/cinder.conf:

  [DEFAULT]
  volume_tmp_dir=/tmp

 but as a result performance was increased only by ~30% and it does not
 look like a huge success.

  Non-default mount options and TCP optimization increase the speed by
 about 1%:

 [root@node-17 ~]# mount | grep ceph
 /dev/sda4 on /var/lib/ceph/osd/ceph-0 type xfs
 (rw,noexec,nodev,noatime,nodiratime,user_xattr,data=writeback,barrier=0)

 [root@node-17 ~]# cat /etc/sysctl.conf
 net.core.rmem_max = 16777216
 net.core.wmem_max = 16777216
 net.ipv4.tcp_rmem = 4096 87380 16777216
 net.ipv4.tcp_wmem = 4096 65536 16777216
 net.ipv4.tcp_window_scaling = 1
 net.ipv4.tcp_timestamps = 1
 net.ipv4.tcp_sack = 1


 Do we have other ways to significantly improve CEPH storage performance?
  Any feedback and comments are welcome!

  Thank you!


  --

  Timur,
 QA Engineer
 OpenStack Projects
 Mirantis Inc




 --
 Pasquale Porreca

 DEK Technologies
 Via dei Castelli Romani, 22
 00040 Pomezia (Roma)

 Mobile +39 3394823805
 Skype paskporr




Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Clay Gerrard
On Mon, Sep 29, 2014 at 2:53 PM, Chmouel Boudjnah chmo...@enovance.com
wrote:



 eventual consistency will only affect container listing  and I don't think
 there is a need for container listing in that driver.


well now hold on...

if you're doing an overwrite in the face of server failures you could still
get a stale read if a server with an old copy comes back into the fray and
you read before replication sorts it out, or read an old version of a key
you deleted

-Clay


Re: [openstack-dev] [keystone][swift] Has anybody considered storing tokens in Swift?

2014-09-29 Thread Clay Gerrard
On Mon, Sep 29, 2014 at 4:15 PM, Clint Byrum cl...@fewbar.com wrote:


 It would, however, be bad to get a 404 for something that is otherwise
 present.. as that will result in an erroneous failure for the client.


That almost never happens, but it is possible if all the primaries are down*;
a system that leans harder on the C (consistency) would be expected to
treat a similarly impossible question as a failure/error.

* It's actually when all the same nodes that answered the previous write are
down; there are some tricks with error-limiting and stable handoffs that
help with subsequent read-your-writes behavior, and they actually make it
fairly difficult to write data that you can't then read back out - unless
you basically track where all of the writes go, then shut down *exactly*
those nodes and make a read before replication beats you to it.  Just
shutting down all three primary locations will write and then read
from the same handoff locations, even if the primaries subsequently come
back online (unless the primaries have an old copy - but it sounds like
that's not going on in your application).

Again, all of this has to do with under failure edge cases.  A healthy
swift system; or even one that's only moderately degraded won't really see
much of this.

Depending on the deployment latencies may be a concern if you're using this
as a cache - have you looked at Swauth [1] already?

-Clay

1. https://github.com/gholt/swauth


Re: [openstack-dev] Swift global cluster replication latency...

2014-08-18 Thread Clay Gerrard
Correct, best-effort.  There is no guarantee or time boxing on cross-region
replication.  The best way to manage cross-site replication is by tuning
your replica count to ensure you have primary copies in each region -
eventually.  Possibly evaluate whether you need write_affinity at all (you
can always just stream across the WAN on PUT directly into the primary
locations).  Global replication is a great feature, but still ripe for
optimization and tuning:

https://review.openstack.org/#/c/99824/

With storage policies you now also have the ability to have a local policy
and global policy giving operators and users even more control about where
they need their objects.  For example you might upload to local policy and
then manage geo-distribution with a COPY request to the global policy.
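
For reference, a sketch of the proxy-server.conf knobs involved (values are
illustrative - double-check the deployment guide for your release):

```
[app:proxy-server]
use = egg:swift#proxy
# Prefer primaries in the local region on reads and writes; the remaining
# replicas flow across the WAN asynchronously.
sorting_method = affinity
read_affinity = r1=100
write_affinity = r1
write_affinity_node_count = 2 * replicas
```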

Do you have a specific use case for geo-distributed objects that you could
share or are you just trying to understand the implementation?

-Clay


On Mon, Aug 18, 2014 at 3:32 AM, Shyam Prasad N nspmangal...@gmail.com
wrote:

 Hi,

 Went through the following link:

 https://swiftstack.com/blog/2012/09/16/globally-distributed-openstack-swift-cluster/

 I'm trying to simulate the 2-region 3-replica scenario. The document says
 that the 3rd replica will be asynchronously moved to the remote location
 with a 2-region setup.

 What I want to understand is if whether the latency of this asynchronous
 copy can be tweaked/monitored? I couldn't find any configuration parameters
 to tweak this. Do we have such an option? Or is it done on a best-effort
 basis?

 Thanks in advance...

 --
 -Shyam





Re: [openstack-dev] (no subject)

2014-05-09 Thread Clay Gerrard
I thought those tracebacks only showed up with old versions of eventlet or
with eventlet_debug = true?

In my experience that normally indicates a client disconnect on a chunked
transfer-encoding request (a request w/o a content-length).  Do you know if
your clients are using Transfer-Encoding: chunked?

Are you seeing the 408 make its way out to the client?  It wasn't clear to
me whether you only see these tracebacks on the object-servers or in the
proxy logs as well.  Perhaps only one of the three disks involved in the PUT
is timing out and the client still gets a successful response?

As the disks fill up, replication and auditing are going to consume more disk
resources - you may have to tune the concurrency and rate settings on those
daemons.  If the errors happen consistently you could try temporarily
disabling the background consistency processes to rule out whether
they're causing disk contention on your setup with your config.
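
The daemon knobs I mean live in the object-server config - a sketch
(illustrative values, not recommendations):

```
[object-replicator]
concurrency = 1
run_pause = 30

[object-auditor]
files_per_second = 5
bytes_per_second = 1000000
```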

-Clay


On Fri, May 9, 2014 at 8:54 AM, Ben Nemec openst...@nemebean.com wrote:

 This is a development list, and your question sounds more usage-related.
  Please ask your question on the users list: http://lists.openstack.org/
 cgi-bin/mailman/listinfo/openstack

 Thanks.

 -Ben


 On 05/09/2014 06:57 AM, Shyam Prasad N wrote:

 Hi,

 I have a two node swift cluster receiving continuous traffic (mostly
 overwrites for existing objects) of 1GB files each.

 Soon after the traffic started, I'm seeing the following traceback from
 some transactions...
  Traceback (most recent call last):
    File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 692, in PUT
      chunk = next(data_source)
    File "/home/eightkpc/swift/swift/proxy/controllers/obj.py", line 559, in <lambda>
      data_source = iter(lambda: reader(self.app.client_chunk_size), '')
    File "/home/eightkpc/swift/swift/common/utils.py", line 2362, in read
      chunk = self.wsgi_input.read(*args, **kwargs)
    File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 147, in read
      return self._chunked_read(self.rfile, length)
    File "/usr/lib/python2.7/dist-packages/eventlet/wsgi.py", line 137, in _chunked_read
      self.chunk_length = int(rfile.readline().split(";", 1)[0], 16)
  ValueError: invalid literal for int() with base 16: '' (txn:
  tx14e2df7680fd472fb92f0-00536ca4f0) (client_ip: 10.3.0.101)

 Seeing the following errors on storage logs...
 object-server: 10.3.0.102 - - [09/May/2014:01:36:49 +] PUT
 /xvdg/492/AUTH_test/8kpc/30303A30323A30333A30343A30353A
 30396AEF6B537B00.2.data
 408 - PUT
 http://10.3.0.102:8080/v1/AUTH_test/8kpc/30303A30323A30333A30343A30353A
 30396AEF6B537B00.2.data
 txf3b4e5f677004474bbd2f-00536c30d1 proxy-server 12241 95.6405 -

  It succeeds sometimes, but mostly 408 errors. I don't see any other
  logs for the transaction ID or around these 408 errors in the log
  files. Is this a disk timeout issue? These are only 1GB files, and normal
  writes to files on these disks are quite fast.

 The timeouts from the swift proxy files are...
 root@bulkstore-112:~# grep -R timeout /etc/swift/*
 /etc/swift/proxy-server.conf:client_timeout = 600
 /etc/swift/proxy-server.conf:node_timeout = 600
 /etc/swift/proxy-server.conf:recoverable_node_timeout = 600

 Can someone help me troubleshoot this issue?

 --
 -Shyam








Re: [openstack-dev] [SWIFT] Delete operation problem

2014-04-22 Thread Clay Gerrard
409 on DELETE (object?) is a pretty specific error.  That should mean that
the timestamp assigned to the delete is earlier than the timestamp of the
data file.

Most likely it means that you're getting some time-drift on your proxies
(but that assumes multi-node), or maybe that you're reusing names between
threads and your object servers see PUT(ts1) PUT(ts3) DELETE(ts2) - but
that'd be a pretty tight race...

Should all be logged - try and find a DELETE that went 409 and trace the
transaction id.
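
The rule the object server applies can be sketched like this (a toy model,
not Swift's actual code):

```python
# Toy model of why DELETE returns 409: the object server only applies an
# operation whose timestamp is newer than the on-disk .data file's; a
# DELETE stamped earlier than the data it targets is rejected as a conflict.

def handle_delete(on_disk_ts, request_ts):
    if request_ts <= on_disk_ts:
        return 409   # Conflict: the delete is older than the stored data
    return 204       # No Content: the delete is applied

# The PUT(ts1) PUT(ts3) DELETE(ts2) race described above:
assert handle_delete(on_disk_ts=3, request_ts=2) == 409
# Healthy ordering, with proxy clocks in sync:
assert handle_delete(on_disk_ts=1, request_ts=2) == 204
```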


On Mon, Apr 21, 2014 at 5:54 AM, taurus huang huanggeng.8...@gmail.com wrote:

 Please provide the log file: /var/log/swift/swift.log   AND
 /var/log/keystone/keystone.log


 On Mon, Apr 21, 2014 at 11:55 AM, Sumit Gaur sumitkg...@gmail.com wrote:

  Hi,
  I'm using the jclouds lib integrated with the OpenStack Swift + Keystone
  combination. Things are working fine except for the stability test. After
  20-30 hours of testing, jclouds/Swift starts degrading in TPS and keeps
  going down over time.

  1) I am running the (PUT-GET-DEL) cycle in 10 parallel threads.
  2) I am getting a lot of 409s and DEL failures in the responses from
  Swift.


  Can somebody help me figure out what is going wrong here?

  Thanks
  sumit








Re: [openstack-dev] [SWIFT] SWIFT object caching (HOT content)

2014-03-11 Thread Clay Gerrard
At the HK summit, the topic of hot content came up and seemed to break
into two parts.

1) developing a caching storage tier for hot content that would allow
proxies to more quickly serve small data requests with even higher rates of
concurrent access.
2) developing a mechanism to programmatically/automatically (or even
explicitly) identify hot content that should be cached or expired from
the caching storage tier.

Much progress has been made during this development/release cycle on
storage policies [1], which would seem to offer a semantic building block
for the caching storage tier - but to my knowledge no one is actively
working on the details of a caching storage policy (besides maybe a
high-replica ring backed with SSDs), or the second (harder?) part of
identifying which data should be cached or for how long.

I glanced at those blueprints and I'm not sure they line up entirely with
the current thinking on hot content - it would probably be a good idea to
revisit the topic at the upcoming summit in ATL.  I believe proposals are
open. [2]

-Clay

1. https://blueprints.launchpad.net/swift/+spec/storage-policies
2. http://summit.openstack.org/


On Mon, Mar 10, 2014 at 10:09 PM, Anbu a...@enovance.com wrote:

 Hi,
 I came across this blueprint
 https://blueprints.launchpad.net/swift/+spec/swift-proxy-caching and a
 related etherpad https://etherpad.openstack.org/p/swift-kt about SWIFT
 object caching.
 I would like to contribute in this and I would also like to know if
 anybody has made any progress in this area.
 If anyone is aware of a discussion that has happened/happening in this,
 kindly point me to it.

 Thank you,
 Babu




Re: [openstack-dev] openstack-swift put performance

2014-03-07 Thread Clay Gerrard
Well... What results did you get?  What did you expect?  What do you hope
to achieve?

How are you balancing your client requests across the five nodes?  I'm not
sure you're going to get anywhere near 2000 requests a second from a
single thread (!?) - Swift performs best against many concurrent requests.

But I'm still game for micro optimization too!

It's sort of understood that HTTP overhead will have a more and more
non-trivial impact on requests as the size of the objects get smaller.

I would say the *biggest* benefit of Swift's Expect: 100-continue support is
to avoid transferring data needlessly to servers that can't accept it (I'm
speaking particularly of the 507 case here) - but if someone had the
numbers to prove out a dramatic improvement on small requests I could see it
possibly becoming optional (error limiting on 507 is pretty aggressive).

There's other benefits to expect 100 continue, but I could imagine a
deployment where the benefit is negligible and I'd listen if anyone had
numbers to better understand the cost.
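
Back-of-envelope, using the 10-15 ms waits quoted below (the transfer time
here is an assumption, not a measurement):

```python
# Rough single-thread throughput model: each PUT serially pays the proxy's
# quorum check plus the 100-continue wait (the object servers are contacted
# in parallel, so take the worst of the three).

continue_wait_ms = 15     # worst object-server wait for "HTTP 100" (quoted)
proxy_quorum_ms = 10      # proxy-side quorum check (quoted)
transfer_ms = 5           # assumed: pushing a 16 KB body + response

per_request_ms = continue_wait_ms + proxy_quorum_ms + transfer_ms
single_thread_rps = 1000.0 / per_request_ms     # ceiling for one thread
threads_for_target = 2000 / single_thread_rps   # concurrency to hit 2000/s

print(round(single_thread_rps, 1), round(threads_for_target))  # prints: 33.3 60
```

In other words, a single serial client tops out in the tens of requests per
second no matter how the cluster is tuned; the target load needs dozens of
concurrent connections.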

-Clay


On Thu, Mar 6, 2014 at 11:23 PM, Ivan Pustovalov ip.disab...@gmail.com wrote:

 HI!
 I have a cluster of 5 nodes with 3 replicas. All of the servers (e.g.
 proxy, account, object, container) are installed on a single server, and I
 have 5 of these servers.
 I send put-object requests from one testing thread and check the client
 response time from the cluster.
 And the obtained results did not satisfy me.
 When I was researching the tcp traffic, I found time lost waiting for HTTP
 100 from the object servers, 10-15 ms on each, and 10 ms on the proxy while
 checking quorum.

 In my case, users can put small objects (e.g. 16 kbytes) into the cloud,
 and I expect a load of 2000 requests per second. This time loss
 significantly reduces cloud performance.
 How can I reduce this time loss, and what are the best practices for tuning?

 --
 Regards, Ivan Pustovalov.




