Re: [openstack-dev] GUI for Swift object storage

2018-09-17 Thread John Dickinson

That's a great question.

A quick Google search shows a few, like Swift Explorer, Cyberduck, and 
Gladinet. But since Swift supports the S3 API (check with your cluster 
operator to see if it's enabled, or examine the results of a `GET 
/info` request), you can also use any available S3 GUI client, as 
long as you can configure the endpoints it connects to.
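
A minimal sketch of that `GET /info` check (the base URL is a 
placeholder for your cluster's endpoint, and the exact keys to look for 
depend on which S3 middleware the operator deployed):

    import json
    import urllib.request

    # /info is served unauthenticated at the cluster root by default
    with urllib.request.urlopen("https://swift.example.com/info") as resp:
        info = json.load(resp)

    # the s3api (integrated) and swift3 (standalone) middlewares
    # advertise themselves in the /info document when enabled
    print("S3 API available:", "s3api" in info or "swift3" in info)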



--John





On 17 Sep 2018, at 16:48, M Ranga Swami Reddy wrote:

Hi - is there any GUI (open source) available for Swift object storage?


Thanks
Swa
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [election][tc] announcing candidacy

2018-09-04 Thread John Dickinson



On 4 Sep 2018, at 12:16, Julia Kreger wrote:


Greetings Stackers!

I hereby announce my candidacy for a position on the OpenStack Technical
Committee.

In many respects I consider myself a maverick, except reality is sometimes
entirely different than my own self perception, upon reflection. I find
self reflection and introspection to be powerful tools, along with passion
and desire for the common good. That desire for the common good is the
driving force behind my involvement in OpenStack, which I hope to see as a
vibrant and thriving community for years to come.

Have things changed? Yes, I think they are ever evolving. I think we can
only take the logical paths that we see before us at the time. Does this
mean we will make mistakes? Absolutely, but mistakes are also opportunities
to learn and evolve as time goes on; which perhaps is an unspoken backbone
of our community. The key is that we must not fear change but embrace it.

Changing our community for the better is a process we can only take one
step at a time, and we must recognize our strength is in our diversity. As
we move forward, as we evolve, we need to keep in mind our goals and
overall vision. In a sense, these things vary across all projects, but our
central community vision and goal helps provide direction.

As we continue our journey, I believe we need to lift up new contributors
and incorporate new thoughts and new ideas, embracing change while keeping
our basic course so new contributors can better find and integrate with
our community as we continue forward. We need to listen and take that as
feedback to better understand other perspectives, for it is not only our
singular personal perspective which helps give us direction, but the
community as a whole.

For those who do not know me well, my name is Julia Ashley Kreger. Often I
can be found on IRC as TheJulia, in numerous OpenStack related channels.

I have had the pleasure of serving the community this past year on the
Technical Committee. I have also served the ironic community quite a bit
during my time in the OpenStack community, which began during the Juno
cycle.

I am the current Project Team Lead for the Ironic team. I began serving in
that capacity starting with the Rocky cycle. Prior, I served as the team's
release liaison. You might have seen me as one of those crazy people
advocating for standalone usage. Prior lives included deploying and
operating complex systems, but that is enough about me.

Ultimately I believe I bring a different perspective to the TC, and it is
for this, and my many strong beliefs and experiences, that I feel I am well
suited to serve the community for another year on the Technical Committee.

Thank you for your consideration,

Julia

freenode: TheJulia
Twitter: @ashinclouds
https://www.openstack.org/community/members/profile/19088/julia-kreger
http://stackalytics.com/?release=all&user_id=juliaashleykreger

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Julia,

Do you have any specific examples of new ideas you are wanting to 
propose or advocate for, should you be re-elected?


--John



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][openstack-helm] On the problem of OSF copyright headers

2018-08-28 Thread John Dickinson



On 28 Aug 2018, at 9:59, Jeremy Stanley wrote:


[Obligatory disclaimer: I am not a lawyer, this is not legal advice,
and I am not representing the OpenStack Foundation in any legal
capacity here.]

TL;DR: You should not be putting "Copyright OpenStack Foundation" on
content in Git repositories, or anywhere else for that matter
(unless you know that you are actually an employee of the OSF or
otherwise performing work-for-hire activities at the behest of the
OSF). The OpenStack Individual Contributor License Agreement (ICLA)
does not require copyright assignment. The foundation does not want,
nor does it even generally accept, copyright assignment from
developers. Your copyrightable contributions are your own, or by
proxy are the copyright of your employer if you have created them as
a part of any work-for-hire arrangement (unless you've negotiated
with your employer to retain copyright of your work).

This topic has been raised multiple times in the past. In the wake
of a somewhat protracted thread on the
legal-disc...@lists.openstack.org mailing list (it actually started
out on the openstack-dev mailing list but was then redirected to a
more appropriate venue) back in April, 2013, we attempted to record
a summary in the wiki article we'd been maintaining regarding
various frequently-asked legal questions:
https://wiki.openstack.org/wiki/LegalIssuesFAQ#OpenStack_Foundation_Copyright_Headers

In the intervening years, we've worked to make sure other important
documentation moves out of the wiki and into more durable
maintenance (mostly Git repositories under code review, rendered and
published to a Web site). I propose that as this particular topic is
germane to contributing to the development of OpenStack software,
the OpenStack Technical Committee should publish a statement on the
governance site similar in nature to or perhaps as an expansion of
the https://governance.openstack.org/tc/reference/licensing.html
page where we detail copyright licensing expectations for official
OpenStack project team deliverables. As I look back through that
wiki article, I see a few other sections which may also be
appropriate to cover on the governance site.

The reason I'm re-raising this age-old discussion is because change
https://review.openstack.org/596619 came to my attention a few
minutes ago, in which some 400+ files within the
openstack/openstack-helm repository were updated to assign copyright
to "OpenStack Foundation" based on this discussion from an
openstack-helm IRC meeting back in March (which seems to have
involved no legal representative of the OSF):
http://eavesdrop.openstack.org/meetings/openstack_helm/2018/openstack_helm.2018-03-20-15.00.log.html#l-101

There are also a couple of similar changes under the same review
topic for the openstack/openstack-helm-infra and
openstack/openstack-helm-addons repositories, one of which I managed
to -1 before it could be approved and merged. I don't recall any
follow-up discussion on the legal-disc...@lists.openstack.org or
even openstack-dev@lists.openstack.org mailing lists, which I would
have expected for any change of this legal significance.

The point of this message is of course not to berate anyone, but to
raise the example which seems to indicate that as a community we've
apparently not done a great job of communicating the legal aspects
of contributing to OpenStack. If there are no objections, I'll push
up a proposed addition to the openstack/governance repository
addressing this semi-frequent misconception and follow up with a
link to the review. I'm also going to post to the legal-discuss ML
so as to make the subscribers there aware of this thread.
--
Jeremy Stanley


It would be *really* helpful to have a simple rule or pattern for each 
file's header. Something like "Copyright (c) <year created>-present by 
contributors to this project".


As you mentioned, this issue comes up about every two years, and having 
contributors police (via code review) the appropriate headers for every 
commit is not a sustainable pattern. The only thing I'm sure about is 
that the existing copyright headers are not correct, but I have no idea 
what the correct headers are.


--John





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] change in review policy: normally one +2 is sufficient

2018-05-30 Thread John Dickinson

During today's Swift team meeting[1], we discussed the idea of
relaxing review guidelines. We agreed the normal case is "one core
reviewer's approval is sufficient to land code".

We've long had a "one +2" policy for trivial and obviously correct
patches. Put simply, the old policy was one of "normally, two +2s are
needed, but if a reviewer feels it's not necessary to get another
review, go ahead and land it." Our new policy inverts that. Normally,
one +2 is needed, but a core may want to ask for additional reviews
for significant or complex patches.

When the Swift team gathers in Denver for the next PTG, we'll spend
some time revisiting this decision and reflecting on the impact it has
had on the community and codebase.

[1] 
http://eavesdrop.openstack.org/meetings/swift/2018/swift.2018-05-30-21.00.log.html



--John




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptls] final stages of python 3 transition

2018-05-21 Thread John Dickinson



On 20 May 2018, at 17:19, Thomas Goirand wrote:


On 05/20/2018 06:24 PM, Matthew Treinish wrote:

On Sun, May 20, 2018 at 03:05:34PM +0200, Thomas Goirand wrote:

Thanks for these details. What exactly is the trouble with the Swift
backend? Do you know? Is anyone working on fixing it? At my company,
we'd be happy to work on that (if of course, it's not too time 
demanding).




Sorry, I didn't mean the swift backend, but swift itself under python3:

https://wiki.openstack.org/wiki/Python3#OpenStack_applications_.28tc:approved-release.29

If you're trying to deploy everything under python3, I don't think you'll be
able to deploy swift. But if you already have a swift running, then the glance
backend should work fine under python 3.


Of course I know Swift isn't Python 3 ready. And that's sad... :/


yep. we're still working on it. slowly.



However, we did also experience issues with the swift backend last week.
Hopefully, with the switch to uwsgi it's going to work. I'll let you
know if that's not the case.


Is the "switch to uwsgi" something about how you're running swift or 
something about how you're running glance?


FWIW, my experience with putting TLS in front of Swift is to run Swift 
as "normal" (ie run `swift-proxy-server /etc/swift/proxy-server.conf` 
itself instead of under apache or nginx or something else). Then use 
HAProxy or hitch to terminate TLS and forward that internally to the 
proxy server.
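
A minimal HAProxy sketch of that setup (the bind address, certificate 
path, and proxy-server port are all placeholders; assumes an HAProxy 
build with TLS support):

    frontend swift_tls
        bind *:443 ssl crt /etc/haproxy/swift.pem
        mode http
        default_backend swift_proxy

    backend swift_proxy
        mode http
        # forward the decrypted traffic to the swift-proxy-server process
        server proxy1 127.0.0.1:8080 check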




Cheers,

Thomas Goirand (zigo)

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift][swift3][s3] Keep containers unique among a cluster

2018-05-14 Thread John Dickinson



On 14 May 2018, at 13:43, Pete Zaitcev wrote:


On Thu, 10 May 2018 20:07:03 +0800
Yuxin Wang  wrote:

I'm working on a swift project. Our customer cares about S3 
compatibility very much. I tested our swift cluster with 
ceph/s3-tests and analyzed the failed cases. It turns out that lots 
of the failed cases are related to unique container/bucket. But as we 
know, containers are only unique within a tenant/project.

[...]
Do you have any ideas on how to do this, or maybe why not to? I'd highly 
appreciate any suggestions.


I don't have a recipe, but here's a thought: try making all the accounts
that need the interoperability with S3 belong to the same Keystone tenant.

As long as you do not give those accounts the owner role (one of those
listed in operator_roles=), they will not be able to access each other's
buckets (Swift containers). Unfortunately, I think they will not be able
to create any buckets either, but perhaps it's something that can be
tweaked - for sure if you're willing to go far enough to make new
middleware.


-- Pete




Pete's idea is interesting. The upstream Swift community has talked 
about what it will take to support this sort of S3 compatibility, and 
we've got some pretty good ideas. We'd love your help to implement 
something. You can find us in #openstack-swift in freenode IRC.


As a general overview, swift3 (which has now been integrated into 
Swift's repo as the "s3api" middleware) maps S3 buckets to a unique 
(account, container) pair in Swift. This mapping is critical because the 
Swift account plays a part in Swift's data placement algorithm. This 
allows both you and me to have an "images" container in the same 
Swift cluster in our respective accounts. However, AWS doesn't have an 
exposed "thing" that's analogous to the account. In order to fill in 
this missing info, we have to map the S3 bucket name to the appropriate 
(account, container) pair in Swift. Currently, the s3api middleware does 
this by encoding the account name into the auth token. This way, when 
you and I are each accessing our own "images" container as a bucket via 
the S3 API, our requests go to the right place and do the right thing.


This mapping technique has a couple of significant limits. First, we 
can't do the mapping without the token, so unauthenticated (ie public) 
S3 API calls can never work. Second, bucket names are not unique. This 
second issue may or may not be a bug. In your case, it's an issue, but 
it may be of benefit to others. Either way, it's a difference from the 
way S3 works.


In order to fix this, we need a new way to do the bucket->(account, 
container) mapping. One idea is to have a key-value registry. There may 
be other ways to solve this too, but it's not a trivial change. We'd 
welcome your help in figuring out the right solution!
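
As a purely hypothetical sketch of the registry idea (none of these 
names exist in Swift today):

    # Hypothetical registry mapping a cluster-unique bucket name to the
    # Swift (account, container) pair that backs it.
    BUCKET_REGISTRY = {}

    def register_bucket(bucket, account, container):
        # enforce S3-style cluster-wide uniqueness
        if bucket in BUCKET_REGISTRY:
            raise ValueError("bucket name already taken: %s" % bucket)
        BUCKET_REGISTRY[bucket] = (account, container)

    def resolve_bucket(bucket):
        # with a registry, the account no longer has to come from the auth
        # token, so unauthenticated (public) S3 calls could work as well
        return BUCKET_REGISTRY.get(bucket)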


--John



__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][requirements] uncapping eventlet

2018-05-07 Thread John Dickinson
I've discovered that eventlet 0.23.0 (released April 6) does break 
things for Swift. I'm not sure about other projects yet.


https://bugs.launchpad.net/swift/+bug/1769749

--John




On 7 May 2018, at 13:50, Doug Hellmann wrote:


Excerpts from Michel Peterson's message of 2018-05-07 17:54:02 +0300:
On Fri, Apr 20, 2018 at 11:06 PM, Doug Hellmann wrote:



We have made great progress on this but we do still have quite a
few of these patches that have not been approved. Many are failing
test jobs and will need a little attention (the failing requirements
jobs are real problems and rechecking will not fix them). If you
have time to help, please jump in and take over a patch and get it
working.

https://review.openstack.org/#/q/status:open+topic:uncap-eventlet



I did a script to fix those and I've submitted patches.


Thanks!

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] container name in swift

2018-04-02 Thread John Dickinson
no
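
The first slash-separated path segment after the account is the 
container name, and everything after it is parsed as the object name 
(object names *may* contain slashes). A short illustration:

    # how a Swift request path splits into its components
    path = "/v1/AUTH_test/abc/111/mycontainer"
    version, account, container, obj = path.lstrip("/").split("/", 3)
    assert container == "abc"          # only "abc" is the container
    assert obj == "111/mycontainer"    # the rest is the object name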

On 2 Apr 2018, at 11:46, Jialin Liu wrote:

> Hi,
> Can a container name in openstack swift contains / ?
> e.g.,
> abc/111/mycontainer
>
>
> Best,
> Jialin
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Swift][Keystone] Swift vs Keystone permission models

2017-11-22 Thread John Dickinson


On 22 Nov 2017, at 22:08, Adrian Turjak wrote:

> Hello fellow openstackers,
>
> I'm trying to figure something out that confuses me around how Swift
> handles roles from Keystone, and what the ACLs allow.
>
> In the Swift config you can specify roles which can manage the
> containers for their project ('operator_roles'), and anyone with such a
> role appears to bypass ACLs on their containers.
>
> But beyond that Swift doesn't seem to handle roles in the same way other
> projects do. It has no policy.json file, so you can't limit access to
> the Swift API to specific roles beyond 'operator_roles'. To do any real
> limiting in Swift you have to use ACLs. Sure you can limit specific
> containers via ACLs to a user on a project, and even with a given role,
> but ACLs are user defined, and it feels odd that they would bypass scope
> and roles.
>
> If you assign an ACL to a container for a given user but don't specify
> project, Swift only cares that the user is authenticated (it does at least
> need to be a scoped token, right?) and that the ACL is valid, but does
> not really respect role/token scope.
>
> That means that even if you wanted to do a read_only role for everything
> (nova, cinder, etc), you could always bypass that with ACLs in Swift.
> This means Swift's authorisation model can entirely bypass the Keystone
> one in the context of Swift. This seems kind of broken. I can understand
> some cases where that would be useful, but it seems to go against the
> rest of the authorisation model in OpenStack, where roles define
> explicitly where and what you have access to (or at least at meant to).
>
> Am I understanding this wrong? Or missing something obvious? Or is this
> just how it is and it won't change? Because it feels wrong, and I'm not
> sure if that's just me not understanding it, me being paranoid in ways I
> shouldn't, or this really isn't right. I don't like the idea that we
> have two authorisation mechanisms (the core one being Keystone) that can
> be bypassed by Swift ACLs for the purposes of itself. Which makes Swift
> in truth have a higher precedence than Keystone for the purposes of
> scope when it comes to its own resources. It means there are multiple
> sources of truth, one which is the authority for all other services, and
> another that is the authority for itself. That might make for all kinds
> of mistakes, as people will assume that Keystone scope is honored
> everywhere, since mostly that is the case.
>
> I'm asking because I'd like to setup fine grained roles for each
> service, and when I make a role that can only talk to Nova, I don't
> really like the idea of an ACL being able to bypass that. Not to mention
> there really isn't anything role based I can do via roles/Keystone for
> Swift that can't be bypassed in some way by ACLs, nor can I make a role
> that is read_only for Swift for that given project. I can't have
> swift_readonly, swift_write, swift_manage (manage being able to do
> ACLs). Even with account level ACLs (which don't yet work with Keystone
> anyway), they wouldn't be implied by roles and would have to be set
> manually on project creation, so... it doesn't really work either.
>
> Part of me would at least feel far more comfortable if there was a
> setting in Swift that enforced roles and scope so that you could only
> ever talk to Swift in your project regardless of ACLs, but that feels
> like only one of many things that would need to happen. What I imagine
> as being my ideal scenario for Swift in OpenStack is Swift respects
> roles and scope always, but then ACLs are a way of making fine grained
> per user/group permissions to containers/objects within that scope.
> Sharing between projects may be useful, but rescoping to the other
> project isn't too hard if everything is mostly role based, and sharing
> to a project/user that cannot outright accept that sharing permission is
> innately scary (which is why Glance's cross-project sharing model works
> well). Even more so if the user can't audit their permissions (can a
> user see what ACLs apply to them?).
>
> I'm hoping saner minds can help me either understand or figure out if
> I'm being silly, or if the permission model between Swift and Keystone
> really is weird/broken.
>
> This is also coming from a public cloud perspective rather than a
> private one, so who knows if what I'm trying to solve fits with what
> others may be thinking. I'm also curious how other clouds look at this,
> and what their views are around permissions management between keystone
> and swift.
>
> Cheers,
> Adrian
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


The short answer (and you should probably get a longer one at some point) is 
that you're right. I wouldn't say it's 

Re: [openstack-dev] [glance] Does glance_store swift driver support range requests ?

2017-11-15 Thread John Dickinson


On 15 Nov 2017, at 7:40, Jay Pipes wrote:

> On 11/15/2017 06:28 AM, Duncan Thomas wrote:
>> On 15 November 2017 at 11:15, Matt Keenan  wrote:
>>> On 13/11/17 22:51, Nikhil Komawar wrote:
>>>
>>> I think it will be a rather hard problem to solve. As swift store can be
>>> configured to store objects in different configurations. I guess the next
>>> question would be, what is your underlying problem -- multiple build
>>> requests or is this for retry for a single download?
>>>
>>> If the image is in image cache and you are hitting the glance node with
>>> cached image (which is quite possible for tiny deployments), this feature
>>> will be relatively easier.
>>>
>>>
>>> So the specific image stored in glance is a Unified Archive
>>> (https://docs.oracle.com/cd/E36784_01/html/E38524/gmrlo.html).
>>>
>>> During a UAR deployment the archive UUID is required and it is contained in
>>> the first 33 characters of the UAR image, thus a range request for this
>>> portion is required when initiating the deployment. Then the rest of the
>>> archive is extracted and deployed.
>>
>> Given the range you want is always at the beginning, is a range
>> request any different to doing a full get request and dripping the
>> connection when you've got the bytes you want?
>
> Or just store the UAR UUID in the image metadata...
>
> -jay

Swift supports range requests (including multiple ranges in a single request),

e.g.: `Range: bytes=1-34, 100-1024`
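
For example, a sketch of the request the client would need to issue 
(the URL and token are placeholders):

    import requests

    # fetch only the first 33 bytes of the object (e.g. the UAR UUID header)
    resp = requests.get(
        "https://swift.example.com/v1/AUTH_test/images/archive.uar",
        headers={"X-Auth-Token": "<token>", "Range": "bytes=0-32"},
    )
    assert resp.status_code == 206  # Partial Content when the range is honored
    uuid_bytes = resp.content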


>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

2017-11-14 Thread John Dickinson


On 14 Nov 2017, at 16:08, Davanum Srinivas wrote:

> On Wed, Nov 15, 2017 at 10:44 AM, John Dickinson <m...@not.mn> wrote:
>>
>>
>> On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:
>>
>>> On Tue, Nov 14, 2017 at 6:00 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
>>>> The pressure for #2 comes from the inability to skip upgrades and the fact 
>>>> that upgrades are hugely time consuming still.
>>>>
>>>> If you want to reduce the push for number #2 and help developers get their 
>>>> wish of getting features into users hands sooner, the path to upgrade 
>>>> really needs to be much less painful.
>>>>
>>>
>>> +1000
>>>
>>> We are upgrading from Kilo to Mitaka. It took 1 year to plan and
>>> execute the upgrade. (and we skipped a version)
>>> Scheduling all the relevant internal teams is a monumental task
>>> because we don't have dedicated teams for those projects and they have
>>> other priorities.
>>> Upgrading affects a LOT of our systems, some we don't fully have
>>> control over. And it can take months to get a new deployment on those
>>> systems. (and after, we have to test compatibility, of course)
>>>
>>> So I guess you can understand my frustration when I'm told to upgrade
>>> more often and that skipping versions is discouraged/unsupported.
>>> At the current pace, I'm just falling behind. I *need* to skip
>>> versions to keep up.
>>>
>>> So for our next upgrades, we plan on skipping even more versions if
>>> the database migration allows it. (except for Nova which is a huge
>>> PITA to be honest due to CellsV1)
>>> I just don't see any other ways to keep up otherwise.
>>
>> ?!?!
>>
>> What does it take for this to never happen again? No operator should need to 
>> plan and execute an upgrade for a whole year to upgrade one year's worth of 
>> code development.
>>
>> We don't need new policies, new teams, more releases, fewer releases, or 
>> anything like that. The goal is NOT "let's have an LTS release". The goal 
>> should be "How do we make sure Mathieu and everyone else in the world can 
>> actually deploy and use the software we are writing?"
>>
>> Can we drop the entire LTS discussion for now and focus on "make upgrades 
>> take less than a year" instead? After we solve that, let's come back around 
>> to LTS versions, if needed. I know there's already some work around that. 
>> Let's focus there and not be distracted about the best bureaucracy for not 
>> deleting two-year-old branches.
>>
>>
>> --John
>
> John,
>
> So... Any concrete ideas on how to achieve that?
>
> Thanks,
> Dims
>

Depends on what the upgrade problems are. I'd think the project teams that 
can't currently do seamless or skip-level upgrades would know best about what's 
needed. I suspect there will be both small and large changes needed in some 
projects.

Mathieu's list of realities in a different reply seems very normal. Operators 
are responsible for more than just OpenStack projects, and they've got to 
coordinate changes in deployed OpenStack projects with other systems they are 
running. Working through that list of realities could help identify some areas 
of improvement.

Spitballing process ideas...
* use a singular tag in launchpad to track upgrade stories. better yet, report 
on the status of these across all openstack projects so anyone can see what's 
needed to get to a smooth upgrade
* redouble efforts on multi-node and rolling upgrade testing. make sure every 
project is using it
* make smooth (and skip-level) upgrades a cross-project goal and don't set 
others until that one is achieved
* add upgrade stories and tests to the interop tests
* allocate time for ops to specifically talk about upgrade stories at the PTG. 
make sure as many devs are in the room as possible.
* add your cell phone number to the project README so that any operator can 
call you as soon as they try to upgrade (perhaps not 100% serious)
* add testing infrastructure that is locked to distro-provided versions of 
dependencies (eg install on xenial with only apt or install on rhel 7 with only 
yum)
* only do one openstack release a year. keep N-2 releases around. give ops a 
chance to upgrade before we delete branches
* do an openstack release every month. severely compress the release cycle and 
force everything to work with disparate versions. this will drive good testing, 
strong, stable interfaces, and smooth upgrades


Ah, just saw Kevin's reply in a different message. I really like his idea of 
"use ops tooling 

Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

2017-11-14 Thread John Dickinson


On 14 Nov 2017, at 15:18, Mathieu Gagné wrote:

> On Tue, Nov 14, 2017 at 6:00 PM, Fox, Kevin M  wrote:
>> The pressure for #2 comes from the inability to skip upgrades and the fact 
>> that upgrades are hugely time consuming still.
>>
>> If you want to reduce the push for number #2 and help developers get their 
>> wish of getting features into users hands sooner, the path to upgrade really 
>> needs to be much less painful.
>>
>
> +1000
>
> We are upgrading from Kilo to Mitaka. It took 1 year to plan and
> execute the upgrade. (and we skipped a version)
> Scheduling all the relevant internal teams is a monumental task
> because we don't have dedicated teams for those projects and they have
> other priorities.
> Upgrading affects a LOT of our systems, some we don't fully have
> control over. And it can take months to get a new deployment on those
> systems. (and after, we have to test compatibility, of course)
>
> So I guess you can understand my frustration when I'm told to upgrade
> more often and that skipping versions is discouraged/unsupported.
> At the current pace, I'm just falling behind. I *need* to skip
> versions to keep up.
>
> So for our next upgrades, we plan on skipping even more versions if
> the database migration allows it. (except for Nova which is a huge
> PITA to be honest due to CellsV1)
> I just don't see any other ways to keep up otherwise.

?!?!

What does it take for this to never happen again? No operator should need to 
plan and execute an upgrade for a whole year to upgrade one year's worth of 
code development.

We don't need new policies, new teams, more releases, fewer releases, or 
anything like that. The goal is NOT "let's have an LTS release". The goal 
should be "How do we make sure Mattieu and everyone else in the world can 
actually deploy and use the software we are writing?"

Can we drop the entire LTS discussion for now and focus on "make upgrades take 
less than a year" instead? After we solve that, let's come back around to LTS 
versions, if needed. I know there's already some work around that. Let's focus 
there and not be distracted about the best bureaucracy for not deleting 
two-year-old branches.


--John



/me puts on asbestos pants

>
> --
> Mathieu
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Need help with Openstack Swift Installation and configuration with S3

2017-11-14 Thread John Dickinson
To do this you want to use the `swift3` middleware, available at 
https://github.com/openstack/swift3. Its `README.md` file has installation 
instructions.
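
For reference, a minimal sketch of the proxy-server.conf wiring 
(middleware names per the swift3 project; adapt the pipeline to your 
existing middleware and auth setup):

    [pipeline:main]
    pipeline = catch_errors cache swift3 tempauth proxy-server

    [filter:swift3]
    use = egg:swift3#swift3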

Also note that the Swift community is currently integrating the `swift3` 
middleware into Swift's code repo. In the future, you will not need to install 
any external components (ie `swift3`); you'll just have it available out of the 
box.

--John




On 14 Nov 2017, at 4:01, Shalabh Aggarwal wrote:

> Hi,
>
> We just started with Openstack Swift which we intend to use as a
> replacement for an API which was written to work with AWS S3.
> We know that Swift is S3 compatible and our API should work out of the box
> with it.
>
> We have not been able to get Swift running with S3 plugins.
> I was wondering if there is a go to documentation which covers the
> installation of Openstack Swift along with configuration with S3.
>
> Any help would be highly appreciated.
>
> Thanks!
>
> -- 
> Regards,
>
> Shalabh Aggarwal
> t: +91 9990960159
> w: www.shalabhaggarwal.com


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] Upstream LTS Releases

2017-11-10 Thread John Dickinson
On 7 Nov 2017, at 15:28, Erik McCormick wrote:

> Hello Ops folks,
>
> This morning at the Sydney Summit we had a very well attended and very
> productive session about how to go about keeping a selection of past
> releases available and maintained for a longer period of time (LTS).
>
> There was agreement in the room that this could be accomplished by
> moving the responsibility for those releases from the Stable Branch
> team down to those who are already creating and testing patches for
> old releases: The distros, deployers, and operators.
>
> The concept, in general, is to create a new set of cores from these
> groups, and use 3rd party CI to validate patches. There are lots of
> details to be worked out yet, but our amazing UC (User Committee) will
> be begin working out the details.
>
> Please take a look at the Etherpad from the session if you'd like to
> see the details. More importantly, if you would like to contribute to
> this effort, please add your name to the list starting on line 133.
>
> https://etherpad.openstack.org/p/SYD-forum-upstream-lts-releases
>
> Thanks to everyone who participated!
>
> Cheers,
> Erik
>
> ___
> OpenStack-operators mailing list
> openstack-operat...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

I'm not a fan of the current proposal. I feel like the discussion jumped into a 
policy/procedure solution without getting much more feedback from operators. 
The room heard "ops want LTS" and we now have a new governance model to work 
out.

What I heard from ops in the room is that they want (to start) one release a 
year whose branch isn't deleted after a year. What if that's exactly what we 
did? I propose that OpenStack only do one release a year instead of two. We 
still keep N-2 stable releases around. We still do backports to all open stable 
branches. We still do all the things we're doing now, we just do it once a year 
instead of twice.

Looking at current deliverables in the openstack releases repo, most (by nearly 
a factor of 2x) are using "cycle-with-intermediary".

john@europa:~/Documents/openstack_releases/deliverables/pike(master)$ grep 
release-model * | cut -d ':' -f 2- | sort | uniq -c
  44 release-model: cycle-trailing
 147 release-model: cycle-with-intermediary
  37 release-model: cycle-with-milestones
   2 release-model: untagged

Any deliverable using this model is already successfully dealing with 
skip-level upgrades. Skip-level upgrades are already identified as needed and 
prioritized functionality in projects that don't yet support them. Let's keep 
working on getting that functionality supported across all OpenStack 
deliverables. Let's move to one LTS release a year. And let's get all project 
deliverables to start using cycle-with-intermediary releases.

--John




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][election] Question for candidates: How do you think we can make our community more inclusive?

2017-10-31 Thread John Dickinson


On 24 Oct 2017, at 4:47, Thierry Carrez wrote:

> Colleen Murphy wrote:
>> On Sat, Oct 21, 2017 at 9:00 AM, Diana Clarke
>> <diana.joan.cla...@gmail.com <mailto:diana.joan.cla...@gmail.com>> wrote:
>>
>> Congrats on being elected to the TC, Colleen!
>>
>> You mentioned earlier in this thread that, "a major problem in the
>> tech world is not just attracting underrepresented contributors, but
>> retaining them".
>>
>> I'm curious if the TC has considered polling the people that have left
>> OpenStack for their experiences on this front.
>>
>> Something along the lines of:
>>
>>     "I see you contributed 20 patches in the last cycle, but haven't
>> contributed recently, why did you stop contributing?".
>>
>> Given the recent layoffs, I suspect many of the responses will be
>> predicable, but you might find some worthwhile nuggets there
>> nonetheless.
>>
>> I'm not aware of such an initiative so far but I do think it would be
>> useful, and perhaps something we can partner with the foundation on.
>
> Kind of parallel to the polling idea:
>
> John Dickinson has some interesting scripts that he runs to detect
> deviation from a past contribution pattern (like someone who used to
> contribute a few patches per cycle but did not contribute anything over
> the past cycle, or someone who used to contribute a handful of patches
> per month who did not send a single patch over the past month). Once
> oddities in the contribution pattern are detected, he would contact the
> person to ask if anything happened or changed that made them stop
> contributing.
>
> John would probably describe it better than I did. I like that it's not
> just quantitative but more around deviation from an established
> contribution pattern, which lets him spot issues earlier.

That's a pretty good summary.

>
> Note that this sort of analysis works well when combined with personal
> outreach, which works better at project team level... If done at
> OpenStack level you would likely have more difficulty making it feel
> personal (if I end up reaching out to a Tacker dev that stopped
> contributing, it won't be as effective as if the Tacker PTL did the
> outreach). One thing we could do would be to productize those tools and
> make them available to a wider number of people.

TBH I haven't used these tools that much for a while. Between increased 
personal reach-out ("Hey, what's going on?") and obvious stuff like 
companies pulling away from OpenStack contributions, there haven't been any 
surprises. Most of the active contributors have been pretty up-front about 
their ability (or lack thereof) to contribute.

>
> -- 
> Thierry Carrez (ttx)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to write python object in openstack swift

2017-10-05 Thread John Dickinson
I'm not familiar with that format, but in general, if you're dealing with large 
objects, you can get better performance by splitting the data in the client, 
uploading each separate segment concurrently, and then creating a large object 
manifest to tie the segments together (allowing future access to either the 
individual segments or the logical whole).

https://docs.openstack.org/swift/latest/overview_large_objects.html#module-swift.common.middleware.slo
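
A rough sketch of that flow with python-swiftclient (the endpoint, 
token, and names are placeholders, and the segments go up sequentially 
here for brevity -- in practice you'd upload them concurrently):

    import json
    from swiftclient import client as swift

    url, token = "https://swift.example.com/v1/AUTH_test", "<token>"

    # upload each segment as its own object
    segments = []
    for i, chunk in enumerate([b"first part...", b"second part..."]):
        name = "segments/bigfile/%08d" % i
        etag = swift.put_object(url, token, "mycontainer", name, contents=chunk)
        segments.append({"path": "/mycontainer/" + name,
                         "etag": etag, "size_bytes": len(chunk)})

    # the manifest ties the segments together into one logical object
    swift.put_object(url, token, "mycontainer", "bigfile",
                     contents=json.dumps(segments),
                     query_string="multipart-manifest=put")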

--John




On 5 Oct 2017, at 11:24, Jialin Liu wrote:

> Thank you John,
>
> I think I figured out the way. My case is a little bit rare as I'm dealing
> with the HDF5 file format, I can not think of any way to retrieve the
> entire in memory file as a single python object and then serialize it.
> What I did is to use the hdf5.get_file_image function to get a memory image
> of the file, and then use, io.ByteIO(image) to make it a file-like object.
>
> So far, it works well, but I believe it is not the best way in terms of
> performance.
>
> Best,
> Jialin
>
> On Thu, Oct 5, 2017 at 8:25 AM, John Dickinson <m...@not.mn> wrote:
>
>> If you've got an arbitrary object in Python, you'll need to serialize it
>> to a file-like object. You could keep it in memory and use a StringIO
>> type, or you could serialize it to disk and open() it like any other file.
>>
>> Ultimately, Swift is storing arbitrary bytes and doesn't care what they
>> are. You, as the Swift client (i.e. API user), need to dump those bytes on
>> the network to send them to Swift. As long as you're transforming your
>> Python object[s] in some regular way that makes sense in your application,
>> it doesn't matter what bytes you send to Swift.
>>
>> --John
>>
>> On 5 Oct 2017, at 8:07, Jialin Liu wrote:
>>
>> Hi,
>> It seems to me that openstack swift only supports file upload/download, is
>> it possible to put a python object to swift store?
>> The doc says we could use file-like object, e.g., StringIO, but this is
>> very limited. I'd like to write a numpy array or other python object into
>> the swift store, can anybody tell me the solution?
>>
>> Best,
>> Jialin
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] how to write python object in openstack swift

2017-10-05 Thread John Dickinson
If you've got an arbitrary object in Python, you'll need to serialize it to a 
file-like object. You could keep it in memory and use a `StringIO` type, or you 
could serialize it to disk and `open()` it like any other file.

Ultimately, Swift is storing arbitrary bytes and doesn't care what they are. 
You, as the Swift client (i.e. API user), need to dump those bytes on the 
network to send them to Swift. As long as you're transforming your Python 
object[s] in some regular way that makes sense in your application, it doesn't 
matter what bytes you send to Swift.
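
For example, something along these lines (a sketch using pickle and 
python-swiftclient; the endpoint and credentials are placeholders):

    import io
    import pickle
    from swiftclient import client as swift

    data = {"anything": ["picklable", 42]}  # a numpy array works too

    # serialize the object to bytes and wrap it in a file-like object
    buf = io.BytesIO(pickle.dumps(data))

    swift.put_object("https://swift.example.com/v1/AUTH_test", "<token>",
                     "mycontainer", "myobject.pkl", contents=buf)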

--John




On 5 Oct 2017, at 8:07, Jialin Liu wrote:

> Hi,
> It seems to me that openstack swift only supports file upload/download, is
> it possible to put a python object to swift store?
> The doc says we could use file-like object, e.g., StringIO, but this is
> very limited. I'd like to write a numpy array or other python object into
> the swift store, can anybody tell me the solution?
>
> Best,
> Jialin
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][nova][mogan] How to show respect to the original authors?

2017-09-20 Thread John Dickinson


On 20 Sep 2017, at 9:25, Michael Still wrote:

> Dims, I'm not sure that's actually possible though. Many of these files
> have been through rewrites and developed over a large number of years.
> Listing all authors isn't practical.
>
> Given the horse has bolted on forking these files, I feel like a comment
> acknowledging the original source file is probably sufficient.

In Swift's repo, we acknowledge the original authors in a section of the 
AUTHORS file

https://github.com/openstack/swift/blob/master/AUTHORS

--John



>
> What is concerning to me is that some of these files are part of the "ABI"
> of nova, and if mogan diverges from that then I think we're going to see
> user complaints in the future. Specifically configdrive, and metadata seem
> like examples of this. I don't want to see us end up in another "managed
> cut and paste" like early oslo where nova continues to develop these and
> mogan doesn't notice the changes.
>
> I'm not sure how we resolve that. One option would be to refactor these
> files into a shared library.
>
> Michael
>
>
>
>
> On Wed, Sep 20, 2017 at 5:51 AM, Davanum Srinivas  wrote:
>
>> Zhenguo,
>>
>> Thanks for bringing this up.
>>
>> For #1, yes please indicate which file from Nova, so if anyone wanted
>> to cross check for fixes etc can go look in Nova
>> For #2, When you pick up a commit from Nova, please make sure the
>> commit message in Mogan has the following
>>* The gerrit change id(s) of the original commit, so folks can
>> easily go find the original commit in Gerrit
>>* Add "Co-Authored-By:" tags for each author in the original commit
>> so they get credit
>>
>> Also, Please make sure you do not alter any copyright or license
>> related information in the header when you first copy a file from
>> another project.
>>
>> Thanks,
>> Dims
>>
>>> On Wed, Sep 20, 2017 at 4:20 AM, Zhenguo Niu wrote:
>>> Hi all,
>>>
>>> I'm from Mogan team, we copied some codes/frameworks from Nova since we
>> want
>>> to be a Nova with a bare metal specific API.
>>> About why reinventing the wheel, you can find more informations here [1].
>>>
>>> I would like to know what's the decent way to show our respect to the
>>> original authors we copied from.
>>>
>>> After discussing with the team, we plan to do some improvements as below:
>>>
>>> 1. Adds some comments to the beginning of such files to indicate that
>>> they leveraged the implementation of Nova.
>>>
>>> https://github.com/openstack/mogan/blob/master/mogan/baremetal/ironic/driver.py#L19
>>> https://github.com/openstack/mogan/blob/master/mogan/console/websocketproxy.py#L17-L18
>>> https://github.com/openstack/mogan/blob/master/mogan/consoleauth/manager.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/engine/configdrive.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/engine/metadata.py#L18
>>> https://github.com/openstack/mogan/blob/master/mogan/network/api.py#L18
>>> https://github.com/openstack/mogan/blob/master/mogan/objects/aggregate.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/objects/keypair.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/objects/server_fault.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/objects/server_group.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/scheduler/client/report.py#L17
>>> https://github.com/openstack/mogan/blob/master/mogan/scheduler/filter_scheduler.py#L17
>>>
>>> 2. For the changes where we follow what nova changed, we should reference
>>> the original authors in the commit messages.
>>>
>>>
>>> Please let me know if there are something else we need to do or there are
>>> already some existing principles we can follow, thanks!
>>>
>>>
>>>
>>> [1] https://wiki.openstack.org/wiki/Mogan
>>>
>>>
>>> --
>>> Best Regards,
>>> Zhenguo Niu
>>>
>>> 
>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
>> unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>
>>
>>
>> --
>> Davanum Srinivas :: https://twitter.com/dims
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>


> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature

Re: [openstack-dev] Anyone know CNCF and how to get their landscape 'tweaked'

2017-09-15 Thread John Dickinson
I submitted an issue on the landscape[1], but there's been no activity or 
comments on it. It looks like David Flanders had some interaction on it a long 
time ago [2].

Big poster-pictures like this inevitably get used to promote marketing messages 
(the linked ML thread explicitly says that was the point of the diagram), so it 
would be nice to have OpenStack projects accurately represented.[3]

[1] https://github.com/cncf/landscape/issues/150
[2] https://github.com/cncf/landscape/issues/41
[3] Please no "But what *is* OpenStack?" debates for the thousandth time. ;-)


--John



On 15 Sep 2017, at 12:15, Joshua Harlow wrote:

> Hi folks,
>
> Something that has been bugging me (a tiny bit, not a lot) is the following 
> CNCF landscape picture.
>
> https://raw.githubusercontent.com/cncf/landscape/master/landscape/CloudNativeLandscape_v0.9.7.jpg
>
> If you look for openstack (for some reason it's under bare metal) in that you 
> may also get the weird feeling (as I did) that there is some kind of 
> misunderstanding among the CNCF leadership/technical community/... as to what 
> openstack is.
>
> I am wondering if we (or the openstack foundation?) can have a larger 
> sit-down with those folks and explain to them what openstack is and why its 
> components are not just baremetal...
>
> Full cncf-toc thread at:
>
> https://lists.cncf.io/pipermail/cncf-toc/2017-September/thread.html#1170
>
> -Josh
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] Denver PTG topic planning

2017-07-06 Thread John Dickinson
We've created an etherpad for organizing the topics we'll discuss in Denver at 
the PTG.

https://etherpad.openstack.org/p/swift-ptg-queens

I'd like to encourage operators who run Swift clusters to attend the Denver 
PTG. This will be the most productive time for the whole community to get 
together and discuss current issues and future plans.

--John




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [swift] Optimizing storage for small objects in Swift

2017-06-16 Thread John Dickinson


On 16 Jun 2017, at 10:51, Clint Byrum wrote:

> This is great work.
>
> I'm sure you've already thought of this, but could you explain why
> you've chosen not to put the small objects in the k/v store as part of
> the value rather than in secondary large files?

I don't want to co-opt an answer from Alex, but I do want to point to some of 
the other background on this LOSF work.

https://wiki.openstack.org/wiki/Swift/ideas/small_files
https://wiki.openstack.org/wiki/Swift/ideas/small_files/experimentations
https://wiki.openstack.org/wiki/Swift/ideas/small_files/implementation

Look at the second link for some context to your answer, but the summary is 
"that means writing a file system, and writing a file system is really hard".

--John



>
> Excerpts from Alexandre Lécuyer's message of 2017-06-16 15:54:08 +0200:
>> Swift stores objects on a regular filesystem (XFS is recommended), one file 
>> per object. While it works fine for medium or big objects, when you have 
>> lots of small objects you can run into issues: because of the high count of 
>> inodes on the object servers, they can’t stay in cache, implying lot of 
>> memory usage and IO operations to fetch inodes from disk.
>>
>> In the past few months, we’ve been working on implementing a new storage 
>> backend in Swift. It is highly inspired by haystack[1]. In a few words, 
>> objects are stored in big files, and a Key/Value store provides information 
>> to locate an object (object hash -> big_file_id:offset). As the mapping in 
>> the K/V consumes less memory than an inode, it is possible to keep all 
>> entries in memory, saving many IO to locate the object. It also allows some 
>> performance improvements by limiting the XFS meta updates (e.g.: almost no 
>> inode updates as we write objects by using fdatasync() instead of fsync())
>>
>> One of the questions that was raised during discussions about this design 
>> is: do we want one K/V store per device, or one K/V store per Swift 
>> partition (= multiple K/V per device). The concern was about failure domain. 
>> If the only K/V gets corrupted, the whole device must be reconstructed. 
>> Memory usage is a major point in making a decision, so we did some benchmark.
>>
>> The key-value store is implemented over LevelDB.
>> Given a single disk with 20 million files (could be either one object 
>> replica or one fragment, if using EC)
>>
>> I have tested three cases :
>>- single KV for the whole disk
>>- one KV per partition, with 100 partitions per disk
>>- one KV per partition, with 1000 partitions per disk
>>
>> Single KV for the disk :
>>- DB size: 750 MB
>>- bytes per object: 38
>>
>> One KV per partition :
>> Assuming :
>>- 100 partitions on the disk (=> 100 KV)
>>- 16 bits part power (=> all keys in a given KV will have the same 16 bit 
>> prefix)
>>
>>- 7916 KB per KV, total DB size: 773 MB
>>- bytes per object: 41
>>
>> One KV per partition :
>> Assuming :
>>- 1000 partitions on the disk (=> 1000 KV)
>>- 16 bits part power (=> all keys in a given KV will have the same 16 bit 
>> prefix)
>>
>>- 1388 KB per KV, total DB size: 1355 MB total
>>- bytes per object: 71
>>
>>
>> A typical server we use for swift clusters has 36 drives, which gives us :
>> - Single KV : 26 GB
>> - Split KV, 100 partitions : 28 GB (+7%)
>> - Split KV, 1000 partitions : 48 GB (+85%)
>>
>> So, splitting seems reasonable if you don't have too many partitions.
>>
>> Same test, with 10 million files instead of 20
>>
>> - Single KV : 13 GB
>> - Split KV, 100 partitions : 18 GB (+38%)
>> - Split KV, 1000 partitions : 24 GB (+85%)
>>
>>
>> Finally, if we run a full compaction on the DB after the test, you get the
>> same memory usage in all cases, about 32 bytes per object.
>>
>> We have not made enough tests to know what would happen in production. 
>> LevelDB
>> does trigger compaction automatically on parts of the DB, but continuous 
>> change
>> means we probably would not reach the smallest possible size.
>>
>>
>> Beyond the size issue, there are other things to consider :
>> File descriptors limits : LevelDB seems to keep at least 4 file descriptors 
>> open during operation.
>>
>> Having one KV per partition also means you have to move entries between KVs 
>> when you change the part power. (if we want to support that)
>>
>> A compromise may be to split KVs on a small prefix of the object's hash, 
>> independent of swift's configuration.
>>
>> As you can see we're still thinking about this. Any ideas are welcome !
>> We will keep you updated about more "real world" testing. Among the tests we 
>> plan to check how resilient the DB is in case of a power loss.
>>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc

Re: [openstack-dev] [swift] Optimizing storage for small objects in Swift

2017-06-16 Thread John Dickinson
Alex, this is fantastic work and great info. Thanks for sharing it.

Additional comments inline.

On 16 Jun 2017, at 6:54, Alexandre Lécuyer wrote:

> Swift stores objects on a regular filesystem (XFS is recommended), one file 
> per object. While it works fine for medium or big objects, when you have lots 
> of small objects you can run into issues: because of the high count of inodes 
> on the object servers, they can’t stay in cache, implying a lot of memory usage 
> and IO operations to fetch inodes from disk.
>
> In the past few months, we’ve been working on implementing a new storage 
> backend in Swift. It is highly inspired by haystack[1]. In a few words, 
> objects are stored in big files, and a Key/Value store provides information 
> to locate an object (object hash -> big_file_id:offset). As the mapping in 
> the K/V consumes less memory than an inode, it is possible to keep all 
> entries in memory, saving many IOs to locate the object. It also allows some 
> performance improvements by limiting the XFS metadata updates (e.g.: almost no 
> inode updates, as we write objects using fdatasync() instead of fsync()).
>
> One of the questions that was raised during discussions about this design is: 
> do we want one K/V store per device, or one K/V store per Swift partition (= 
> multiple K/V per device)? The concern was about failure domain: if the only 
> K/V gets corrupted, the whole device must be reconstructed. Memory usage is a 
> major point in making a decision, so we did some benchmarks.
>
> The key-value store is implemented over LevelDB.
> Given a single disk with 20 million files (each could be either one object 
> replica or one fragment, if using EC), I have tested three cases:
>   - single KV for the whole disk
>   - one KV per partition, with 100 partitions per disk
>   - one KV per partition, with 1000 partitions per disk
>
> Single KV for the disk:
>   - DB size: 750 MB
>   - bytes per object: 38
>
> One KV per partition, assuming:
>   - 100 partitions on the disk (=> 100 KV)
>   - 16 bits part power (=> all keys in a given KV will have the same 16 bit 
> prefix)
>
>   - 7916 KB per KV, total DB size: 773 MB
>   - bytes per object: 41
>
> One KV per partition, assuming:
>   - 1000 partitions on the disk (=> 1000 KV)
>   - 16 bits part power (=> all keys in a given KV will have the same 16 bit 
> prefix)
>
>   - 1388 KB per KV, total DB size: 1355 MB
>   - bytes per object: 71
>
> A typical server we use for swift clusters has 36 drives, which gives us:
> - Single KV: 26 GB
> - Split KV, 100 partitions: 28 GB (+7%)
> - Split KV, 1000 partitions: 48 GB (+85%)
>
> So, splitting seems reasonable if you don't have too many partitions.
>
> Same test, with 10 million files instead of 20:
>
> - Single KV: 13 GB
> - Split KV, 100 partitions: 18 GB (+38%)
> - Split KV, 1000 partitions: 24 GB (+85%)
>
>
> Finally, if we run a full compaction on the DB after the test, you get the
> same memory usage in all cases, about 32 bytes per object.
>
> We have not run enough tests to know what would happen in production. LevelDB
> does trigger compaction automatically on parts of the DB, but continuous change
> means we probably would not reach the smallest possible size.

This is likely a very good assumption (that the KV will continuously change and 
never get to minimum size).
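
As an aside, reproducing that fully-compacted floor is easy to script. A minimal 
sketch, assuming the third-party plyvel LevelDB bindings and an invented DB path 
(the post doesn't say which bindings they used):

```python
# Force a full LevelDB compaction; afterwards the on-disk size should
# approach the ~32 bytes/object floor from the test above.
import plyvel

db = plyvel.DB('/srv/node/sda/objects-kv', create_if_missing=False)
db.compact_range()  # omitting start/stop compacts the entire key range
db.close()
```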

My initial instinct is to go for one KV per drive.

One per partition does sound nice, but it is more sensitive to proper cluster 
configuration and deployment. For example, if an operator were to deploy a 
relatively small cluster but have a part power that's too big for the capacity, 
the KV strategy would end up with many thousands of mostly-empty partitions 
(imagine a 5-node cluster, 60 drives with a part power of 18 -- you're looking 
at more than 13k parts per drive per storage policy). Going for one KV per 
whole drive means that poor ring settings won't impact this area of storage as 
much.
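
A quick back-of-the-envelope check of that figure (assuming 3 replicas, which 
isn't stated in the example):

```python
# Parts per drive for a 5-node, 60-drive cluster at part power 18.
part_power = 18
replicas = 3   # assumed; adjust for your storage policy
drives = 60

print(2 ** part_power * replicas / drives)  # 13107.2 -> "more than 13k"
```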

>
>
> Beyond the size issue, there are other things to consider :
> File descriptors limits : LevelDB seems to keep at least 4 file descriptors 
> open during operation.
>
> Having one KV per partition also means you have to move entries between KVs 
> when you change the part power. (if we want to support that)

Yes, let's support that (in general)! But doing one KV per drive means it 
already works for this LOSF work.

>
> A compromise may be to split KVs on a small prefix of the object's hash, 
> independent of swift's configuration.

This is an interesting idea to explore. It will allow for smaller individual KV 
stores without being as sensitive to the ring parameters.
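
To illustrate, the routing for that compromise could be as small as this sketch 
(the helper name and the one-hex-character prefix are mine, not from the LOSF 
patches):

```python
# Pick a KV shard from a short, fixed prefix of the object hash, so the
# number of KVs per drive stays independent of the ring's part power.
import hashlib

def kv_shard_for(object_hash, prefix_hex_chars=1):
    # 1 hex char -> 16 shards per drive, 2 -> 256, and so on
    return int(object_hash[:prefix_hex_chars], 16)

name_hash = hashlib.md5(b'/AUTH_test/container/object').hexdigest()
print(kv_shard_for(name_hash))  # stable shard index in [0, 16)
```

A later part power change would then reshuffle only the ring-side mapping and 
leave the on-disk KV layout alone.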

>
> As you can see we're still thinking about this. Any ideas are welcome!
> We will keep you updated about more "real world" testing. Among the tests we 
> plan to check how resilient the DB is in case of a power loss.

I'd also be very interested in other tests around concurrent access to the KV 
store. If we've only got one per whole drive, contention between concurrent 
readers and writers would be worth measuring.

Re: [openstack-dev] [api][horizon][all] Poorly Pagination UX

2017-06-14 Thread John Dickinson


On 14 Jun 2017, at 9:03, Ivan Kolodyazhny wrote:

> Hi stackers,
>
>
>
> There are several bugs in Horizon [1], [2] and [3] related to pagination.
> TBH, these issues are reproducible via CLI too. Unfortunately, all of them
> are related to our API’s implementation [4].
>
> Bugs [1] and [2] can’t be fixed in the current API implementations because we
> use a ‘marker’ object in them [4]. We can try to implement some hacks on the
> Horizon side to play with the ‘sort order’ param, but even then, in some cases
> we can’t fix all the bugs because we don’t have the necessary params for a
> good paging implementation.
>
> What does it mean? E.g.:
>
> You have 2 volumes and 1 item per page, as described at [5]. In this case,
> when we remove volume B we can’t open a page with volume A, because the
> current ‘marker’ is volume B and, regardless of sort order, the API will
> return zero volumes with this marker.
>
> As a workaround, we can redirect to the first page. But this makes
> Horizon UX worse when a user has a lot of pages of instances,
> volumes, etc. and wants to delete several of them without using the
> filtering feature.
>
> As another option, we can do some hacks on the Horizon side, but that
> requires making more API calls, which is not a good option in big production
> deployments.
>
> As a long-term solution it could be good to change our APIs to have better
> paging, e.g. use a ‘page number’ param instead of ‘marker’. The API could also
> return a total page number so Horizon will be able to use these options to
> render paged tables well.

The problem with "page number" is that it completely breaks down for large 
sets. If I've got a few million items in a result set, finding the current page 
number and the total number of pages becomes very expensive server-side. Using 
the marker pattern allows for walking over the set and returning results 
without needing to know the total set at the time of creating the response.

Unfortunately, for the marker to work effectively, the result set needs to have 
a defined order, and this can end up pushing a lot of the work to the client[1] 
to provide nice sorting, searching, and pagination.
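
For contrast, here's roughly what the marker pattern looks like from the client 
side, modeled on paging through a Swift container listing (the query parameters 
match Swift's listing API; the URL and token are placeholders):

```python
# Walk an entire listing one page at a time using the marker pattern.
import requests

def list_all(container_url, token, page_size=1000):
    marker = None
    while True:
        params = {'format': 'json', 'limit': page_size}
        if marker is not None:
            params['marker'] = marker
        resp = requests.get(container_url, params=params,
                            headers={'X-Auth-Token': token})
        items = resp.json()
        if not items:
            return  # an empty page means the whole set has been walked
        for item in items:
            yield item
        marker = items[-1]['name']  # resume strictly after the last name seen
```

Note that the marker is just a name to resume after; in Swift, at least, it 
doesn't have to still exist, which avoids the deleted-marker problem described 
above.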

In an ideal world, I'd love to see a service that keeps track of the different 
"things" that are managed. Want to know all the server images created before 
Tuesday? Ask the tracker service. Want to know all the cat pictures smaller 
than 1MB and marked as having a blue background? Ask the tracker service. etc.

What is this tracker service? elastic? glance? glare? searchlight? something 
else? I don't know.

[1] In general, pushing work to the client is a fantastic idea, but it does 
come with a "cost", namely developer mental overhead.


--John



>
> In the world of microversions, we can implement such changes without
> breaking any existing API users and change API Guidelines with note about
> these changes.
>
> I’m glad to get any feedback from Horizon users, API WG and component teams
> if community is interested in this big cross-project effort.
>
> [1] https://bugs.launchpad.net/horizon/+bug/1564498
> [2] https://bugs.launchpad.net/horizon/+bug/1212174
> [3] https://bugs.launchpad.net/horizon/+bug/1274427
> [4]
> https://github.com/openstack/api-wg/blob/master/guidelines/pagination_filter_sort.rst
> [5] https://bugs.launchpad.net/horizon/+bug/1564498/comments/5
>
>
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/




[openstack-dev] [swift] upcoming impact to configuration in Swift 2.15.0

2017-06-12 Thread John Dickinson
```
policy_type = erasure_coding
ec_type = isa_l_rs_vand
ec_num_data_fragments = 7
ec_num_parity_fragments = 6
ec_object_segment_size = 1048576
deprecated = true

# this policy is the good replacement for the one above
[storage-policy:3]
name = deepfreeze7-6
aliases = df76
policy_type = erasure_coding
ec_type = isa_l_rs_cauchy
ec_num_data_fragments = 7
ec_num_parity_fragments = 6
ec_object_segment_size = 1048576
```

## Need more help?

Please feel free to ask any questions either here on the mailing list
or in #openstack-swift on freenode IRC.

Thank you for placing your trust in us to store your data in Swift. We
are all deeply saddened by this bug and the extra work it may cause
operators.

John Dickinson, Swift PTL
The OpenStack Swift community




Re: [openstack-dev] [ptg] Strawman Queens PTG week slicing

2017-06-01 Thread John Dickinson


On 1 Jun 2017, at 7:38, Thierry Carrez wrote:

> Thierry Carrez wrote:
>> In a previous thread[1] I introduced the idea of moving the PTG from a
>> purely horizontal/vertical week split to a more
>> inter-project/intra-project activities split, and the initial comments
>> were positive.
>>
>> We need to solidify how the week will look like before we open up
>> registration (first week of June), so that people can plan their
>> attendance accordingly. Based on the currently-signed-up teams and
>> projected room availability, I built a strawman proposal of how that
>> could look:
>>
>> https://docs.google.com/spreadsheets/d/1xmOdT6uZ5XqViActr5sBOaz_mEgjKSCY7NEWcAEcT-A/pubhtml?gid=397241312=true
>
> OK, it looks like the feedback on this strawman proposal was generally
> positive, so we'll move on with this.
>
> For teams that are placed on the Wednesday-Friday segment, please let us
> know whether you'd like to make use of the room on Friday (pick between
> 2 days or 3 days). Note that it's not a problem if you do (we have space
> booked all through Friday) and this can avoid people leaving too early
> on Thursday afternoon. We just need to know how many rooms we might be
> able to free up early.
>
> In the same vein, if your team (or workgroup, or inter-project goal) is
> not yet listed and you'd like to have a room in Denver, let us know ASAP.
>
> -- 
> Thierry Carrez (ttx)
>



Swift would like to go through Friday.

--John







Re: [openstack-dev] [requirements] Do we care about pypy for clients (broken by cryptography)

2017-05-31 Thread John Dickinson


On 31 May 2017, at 5:34, Monty Taylor wrote:

> On 05/31/2017 06:39 AM, Sean McGinnis wrote:
>> On Wed, May 31, 2017 at 06:37:02AM -0500, Sean McGinnis wrote:
>>> We had a discussion a few months back around what to do for cryptography
>>> since pycrypto is basically dead [1]. After some discussion, at least on
>>> the Cinder project, we decided the best way forward was to use the
>>> cryptography package instead, and work has been done to completely remove
>>> pycrypto usage.
>>>
>>> It all seemed like a good plan at the time.
>>>
>>> I now notice that for the python-cinderclient jobs, there is a pypy job
>>> (non-voting!) that is failing because the cryptography package is not
>>> supported with pypy.
>>>
>>> So this leaves us with two options I guess. Change the crypto library again,
>>> or drop support for pypy.
>>>
>>> I am not aware of anyone using pypy, and there are other valid working
>>> alternatives. I would much rather just drop support for it than redo our
>>> crypto functions again.
>>>
>>> Thoughts? I'm sure the Grand Champion of the Clients (Monty) probably has
>>> some input?
>
> There was work a few years ago to get pypy support going - but it never 
> really seemed to catch on. The chance that we're going to start a new push 
> and be successful at this point seems low at best.
>
> I'd argue that pypy is already not supported, so dropping the non-voting job 
> doesn't seem like losing very much to me. Reworking cryptography libs again, 
> otoh, seems like a lot of work.
>
> Monty

On the other hand, I've been working with Intel on getting PyPy support in 
Swift (it works, just need to reenable that gate job), and I know they were 
working on support in other projects. Summary is that for Swift, we got around 
2x improvement (lower latency + higher throughput) by simply replacing CPython 
with PyPy. I believe similar gains in other projects were expected.

I've cc'd Peter from Intel who's been working on PyPy quite a bit. I know he'll 
be very interested in this discussion and will have valuable input.

--john






[openstack-dev] [swift] new additional team meeting

2017-05-26 Thread John Dickinson
In Boston, one of the topics was how to better facilitate communication in our 
global community. Like some other OpenStack projects, we've decided to add an 
additional meeting that is better scheduled for different time zones.

Our current meeting remains at 2100 UTC on Wednesdays in #openstack-meeting.

We are adding another biweekly meeting starting next week (May 31): 
0700 UTC on Wednesdays in #openstack-meeting.

The new meeting is at a reasonable time for just about everyone, other than 
those who live in New York to San Francisco time zones. In this new meeting, 
we'll be addressing specific patches and concerns that those in attendance 
have. We'll also be using the time to help raise issues and discuss topics that 
pertain to the entire community.
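
For anyone checking what 0700 UTC works out to locally, a quick sketch (the 
zone list is mine; zoneinfo needs Python 3.9+):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

meeting = datetime(2017, 5, 31, 7, 0, tzinfo=timezone.utc)
for tz in ('America/Los_Angeles', 'America/New_York', 'Europe/Paris',
           'Asia/Kolkata', 'Asia/Tokyo'):
    print(tz, meeting.astimezone(ZoneInfo(tz)).strftime('%a %H:%M'))
# Midnight in Los Angeles and 03:00 in New York, but 09:00-16:00 everywhere
# from Europe through Asia
```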

I'd like to thank Mahati for taking the lead in organizing and facilitating 
this new meeting.

Although a new meeting will bring new challenges, I'm looking forward to seeing 
how this will help our team communicate and grow.

--John







Re: [openstack-dev] [tc][all] Missing answers on Pike release goals

2017-05-23 Thread John Dickinson


On 23 May 2017, at 8:05, Doug Hellmann wrote:

> Excerpts from Sean McGinnis's message of 2017-05-23 08:58:08 -0500:

>>>> - Is it that the reporting process is too heavy? (requiring answers
>>>> from projects that are obviously unaffected)
>>>
>>> I've thought about this, OSC was unaffected by one of the goals but
>>> not the other, so I can't really hide in this bucket.  It really is
>>> not that hard to put up a review saying "not me".
>>>
>>>> - Is it that people ignore the deadlines and missed the reminders?
>>>> (some unaffected project teams also do not do releases, and therefore
>>>> ignore the release countdown emails)
>>>
>>> In my case, not so much "ignore" but "put off until tomorrow" where
>>> tomorrow turned into 6 weeks.  I really don't have a hard reason
>>> other than simply not prioritizing it because I knew one of the goals
>>> was going to take some coordination work
>>>
>>
>> +1 - this has been my case, unfortunately.
>>
>> A patch submission has the feeling of a major thing that goes through
>> a lot of process (at least still in my head). I wonder if we would be
>> better off tracking some of this through a wiki page or even an
>> etherpad, with just the completion of the goal being something
>> submitted to the repo. Then it would be really easy to update at any
>> point with notes like "WIP patch put up but still working on it" along
>> the way.
>
> The review process for this type of governance patch is pretty light
> (they fall under the one-week-no-objections house rule), but I
> decided to use a patch instead of the wiki specifically because it
> allows for feedback. We've had several cases where teams didn't
> provide enough detail or didn't think a goal applied to them when
> it did (deploying with WSGI came up at least once).  Wiki changes
> can be tracked, but if someone has a question they have to go track
> down the author in some other venue to get it answered.
>
> I also didn't want teams to have to keep anything up to date during
> the cycle, because I didn't want this to be yet another "status
> report". Each goal needs at most 2 patches: one at the start of the
> cycle to acknowledge and point to whatever other artifacts are being
> used for tracking the work already, and then one at the end of the
> cycle to indicate how much of the work was completed and what the
> next steps are. We tied the process deadlines to existing deadlines
> when we thought teams would already be thinking of these sorts of
> topics (most teams have spec deadlines around milestone 1 and then
> everyone has the same release date at the end of the cycle).
>

I can sympathize with how "do it tomorrow" turns into 6 weeks later...

Part of the issue for me, personally, is that a governance patch does *not* 
feel simple or lightweight. I assume (in part based on experience) that any 
governance patch I propose will be closely examined and I will be forced to 
justify every corner case and comment made. Frankly, writing a patch that 
will stand up to a critical eye will take a long time. I'll do it tomorrow...

Let's take the py3 goal as an example. Note: I am *not* wanting to get into a 
discussion about particular py3 issues or whatever. This is a discussion on the 
goals process, and I'm only using one of the current goals as an example of why 
I haven't proposed a governance patch for it.

Swift does not support Py3. So clearly, there's work to be done to meet the 
goal. I've talked with others in the community about some of the blockers and 
concerns about porting to Py3. Several of the concerns are not trivial and will 
take substantial work to overcome[1]. A governance patch will need to list 
these issues, but I don't know if this is a complete list. If I propose a list 
that's incomplete, I feel like I'll be judged on the list I first proposed 
("you finished the list, why doesn't it work?") instead of being a more dynamic 
process. I need to spend more time understanding what the issues are to make 
sure I have a complete list. I'll propose that patch tomorrow...

The outstanding work to get Py3 support in Swift is very large. Yet there are 
more goals being discussed now, and there's no way I can get Py3 support in 
Swift in Pike. Or Queens. Or probably Rocky either. That's not to say it isn't 
an important goal, but the scope combined with the TC deadline means that my 
governance patch for this goal (the tl;dr version is "not gonna happen") has to 
address this in sufficient detail to stand up to review by TC members who are 
on the PSF! I guess I'll start writing that tomorrow...

While I know that Py3 support is important, I also have to prioritize it 
against other important things. My employer has prioritized certain features 
because that directly impacts our ability to add customers (which directly 
affects my ability to get paid). Other employers in the community are doing the 
same for their employees. In the broader community, as clusters have grown over 
the 

Re: [openstack-dev] [doc][ptls][all] Documentation publishing future

2017-05-22 Thread John Dickinson


On 22 May 2017, at 15:50, Anne Gentle wrote:

> On Mon, May 22, 2017 at 5:41 PM, Sean McGinnis 
> wrote:
>
>> On Mon, May 22, 2017 at 09:39:09AM +, Alexandra Settle wrote:
>>
>> [snip]
>>
>>> 1. We could combine all of the documentation builds, so that each
>> project has a single doc/source directory that includes developer,
>> contributor, and user documentation. This option would reduce the number of
>> build jobs we have to run, and cut down on the number of separate sphinx
>> configurations in each repository. It would completely change the way we
>> publish the results, though, and we would need to set up redirects from all
>> of the existing locations to the new locations and move all of the existing
>> documentation under the new structure.
>>>
>>> 2. We could retain the existing trees for developer and API docs, and
>> add a new one for "user" documentation. The installation guide,
>> configuration guide, and admin guide would move here for all projects.
>> Neutron's user documentation would include the current networking guide as
>> well. This option would add 1 new build to each repository, but would allow
>> us to easily roll out the change with less disruption in the way the site
>> is organized and published, so there would be less work in the short term.
>>>
>>> 3. We could do option 2, but use a separate repository for the new
>> user-oriented documentation. This would allow project teams to delegate
>> management of the documentation to a separate review project-sub-team, but
>> would complicate the process of landing code and documentation updates
>> together so that the docs are always up to date.
>>>
>>
>> I actually like the first two a little better, but I think this might
>> actually be the best option. My hope
>> would be that there could continue to be a docs team that can help out
>> with some of this, and by having a
>> separate repo it would allow us to set up separate teams with rights to
>> merge.
>>
>
> Hey Sean, is the "right to merge" the top difficulty you envision with 1 or
> 2? Or is it finding people to do the writing and reviews? Curious about
> your thoughts and if you have some experience with specific day-to-day
> behavior here, I would love your insights.
>
> Anne
>

I prefer option 1, which should be obvious from Anne's reference to my existing 
work to enable that. Option 2 seems yucky (to me) because it adds yet another 
docs tree and sphinx config to projects, and thus is counter to my hope that 
we'll have one single docs tree per repo.

I disagree with option 3. It seems to be a way to organize the content simply 
to wall-off access to parts of it; e.g. docs people can't land stuff in the 
code part and potentially some code people can't land stuff in the docs part. 
However, docs should always land with the code that changed them. Separating 
the docs into a separate repo removes the ability to land docs with code.

I really like the plan Alex has described about docs team representatives 
participating more directly with the projects. If those representatives should 
be able to add a +2 or -2 to project patches, then make those representatives 
core reviewers for the respective project. Like every other core reviewer, they 
should be trusted to use good judgement for choosing what to review and what 
score to give it.

Let's work towards option 1. Although I think option 2 is largely orthogonal to 
option 1 (i.e. the "user" docs should be merged into the project trees 
regardless of unification of the various in-project docs trees), it can happen 
before or after option 1 is done.


--John



>
>>
>>> Personally, I think option 2 or 3 are more realistic, for now. It does
>> mean that an extra build would have to be maintained, but it retains that
>> key differentiator between what is user and developer documentation and
>> involves fewer changes to existing published contents and build jobs. I
>> definitely think option 1 is feasible, and would be happy to make it work
>> if the community prefers this. We could also view option 1 as the
>> longer-term goal, and option 2 as an incremental step toward it (option 3
>> would make option 1 more complicated to achieve).
>>>
>>> What does everyone think of the proposed options? Questions? Other
>> thoughts?
>>>
>>> Cheers,
>>>
>>> Alex
>>>
>>>
>>
>>> 

Re: [openstack-dev] [ptg] How to slice the week to minimize conflicts

2017-05-18 Thread John Dickinson


On 18 May 2017, at 2:27, Thierry Carrez wrote:

> Hi everyone,
>
> For the PTG events we have a number of rooms available for 5 days, of
> which we need to make the best usage. We also want to keep it simple and
> productive, so we want to minimize room changes (allocating the same
> room to the same group for one or more days).
>
> For the first PTG in Atlanta, we split the week into two groups.
> Monday-Tuesday for "horizontal" project team meetups (Infra, QA...) and
> workgroups (API WG, Goals helprooms...), and Wednesday-Friday for
> "vertical" project team meetups (Nova, Swift...). This kinda worked, but
> the feedback we received called for more optimizations and reduced
> conflicts.
>
> In particular, some projects which have a lot of contributors overlap
> (Storlets/Swift, or Manila/Cinder) were all considered "vertical" and
> happened at the same time. Also, horizontal team members ended up having
> trouble attending workgroups, and had nowhere to go for the rest of the
> week. Finally, on Monday-Tuesday the rooms that had the most success
> were inter-project ones we didn't really anticipate (like the API WG),
> while rooms with horizontal project team meetups were a bit
> under-attended. While we have a lot of constraints, I think we can
> optimize a bit better.
>
> After giving it some thought, my current thinking is that we should
> still split the week in two, but should move away from an arbitrary
> horizontal/vertical split. My strawman proposal would be to split the
> week between inter-project work (+ teams that rely mostly on liaisons in
> other teams) on Monday-Tuesday, and team-specific work on Wednesday-Friday:
>
> Example of Monday-Tuesday rooms:
> Interop WG, Docs, QA, API WG, Packaging WG, Oslo, Goals helproom,
> Infra/RelMgt/support teams helpdesk, TC/SWG room, VM Working group...
>
> Example of Wednesday-Thursday or Wednesday-Friday rooms:
> Nova, Cinder, Neutron, Swift, TripleO, Kolla, Infra...
>
> (NB: in this example infra team members end up being available in a
> general support team helpdesk room in the first part of the week, and
> having a regular team meetup on the second part of the week)
>
> In summary, Monday-Tuesday would be mostly around themes, while
> Wednesday-Friday would be mostly around teams. In addition to that,
> teams that /prefer/ to run on Monday-Tuesday to avoid conflicting with
> another project meetup (like Manila wanting to avoid conflicting with
> Cinder, or Storlets wanting to avoid conflicting with Swift) could
> *choose* to go for Monday-Tuesday instead of Wednesday-Friday.
>
> It's a bit of a long shot (we'd still want to equilibrate both sides in
> terms of room usage, so it's likely that the teams that are late to
> decide to participate would be pushed on one side or the other), but I
> think it's a good incremental change that could solve some of the issues
> reported in the Atlanta week slicing, as well as generally make
> inter-project coordination simpler.
>
> If we adopt that format, we need to be pretty flexible in terms of what
> is a "workgroup": to me, any inter-project work that would like to have
> a one-day or two-day room should be able to get some.
> Nova-{Cinder,Neutron,Ironic} discussions would for example happen in the
> VM & BM working group room, but we can imagine others just like it.
>
> Let me know what you think. Also feel free to propose alternate creative
> ways to slice the space and time we'll have. We need to open
> registration very soon (June 1st is the current target), and we'd like
> to have a rough idea of the program before we do that (so that people
> can report which days they will attend more accurately).
>
> -- 
> Thierry Carrez (ttx)
>


Sounds like a good idea to me.

--John







Re: [openstack-dev] [ptg] ptgbot: how to make "what's currently happening" emerge

2017-05-18 Thread John Dickinson


On 18 May 2017, at 2:57, Thierry Carrez wrote:

> Hi again,
>
> For the PTG events we have, by design, a pretty loose schedule. Each
> room is free to organize their agenda in whatever way they see fit, and
> take breaks whenever they need. This flexibility is key to keeping our
> productivity at those events at a maximum. In Atlanta, most teams ended
> up dynamically building a loose agenda on a room etherpad.
>
> This approach is optimized for team meetups and people who strongly
> identify with one team in particular. In Atlanta during the first two
> days, where a lot of vertical team contributors did not really know
> which room to go to, it was very difficult to get a feel of what is
> currently being discussed and where they could go. Looking into 20
> etherpads and trying to figure out what is currently being discussed is
> just not practical. In the feedback we received, the need to expose the
> schedule more visibly was the #1 request.
>
> It is a thin line to walk on. We clearly don't want to publish a
> schedule in advance or be tied to pre-established timeboxes for every
> topic. We want it to be pretty fluid and natural, but we still need to
> somehow make "what's currently happening" (and "what will be discussed
> next") emerge globally.
>
> One lightweight solution I've been working on is an IRC bot ("ptgbot")
> that would produce a static webpage. Room leaders would update it on
> #openstack-ptg using commands like:
>
> #swift now discussing ring placement optimizations
> #swift next at 14:00 we plan to discuss better #keystone integration
>
> and the bot would collect all those "now" and "next" items and publish a
> single (mobile-friendly) webpage, (which would also include
> ethercalc-scheduled things, if we keep any).
>
> The IRC commands double as natural language announcements for those that
> are following activity on the IRC channel. Hashtags can be used to
> attract other teams attention. You can announce later discussions, but
> the commitment on exact timing is limited. Every "now" command would
> clear "next" entries, so that there wouldn't be any stale entries and
> the command interface would be kept dead simple (at the cost of a bit of
> repetition).
>
> I have POC code for this bot already. Before I publish it (and start
> work to make infra support it), I just wanted to see if this is the
> right direction and if I should continue to work on it :) I feel like
> it's an incremental improvement that preserves the flexibility and
> self-scheduling while addressing the main visibility concern. If you
> have better ideas, please let me know !
>
> -- 
> Thierry Carrez (ttx)
>



Seems like a reasonable idea and helpful tool. For the Swift team, we generally 
end up with more than one thing being discussed at a time at different 
tables/corners in the same room. A "#swift now discussing foo, bar, and baz" 
(instead of one-thing-at-a-time) would be how we'd likely use it. I'd guess 
other teams work in a similar way, too.
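
As an aside, the command grammar is simple enough that the bot's core could be 
tiny; here is an illustrative parser (invented names, not the actual POC code), 
extended to accept several comma-separated topics per "now" line:

```python
import re

# Matches: "#swift now discussing ring placement optimizations"
#     and: "#swift next at 14:00 we plan to discuss better #keystone integration"
COMMAND = re.compile(r'^#(?P<room>\w+)\s+(?P<verb>now|next)\s+(?P<text>.+)$')

state = {}  # room -> {'now': [topics], 'next': [announcements]}

def handle(line):
    m = COMMAND.match(line)
    if not m:
        return
    room = state.setdefault(m.group('room'), {'now': [], 'next': []})
    if m.group('verb') == 'now':
        # comma-separated topics let one room report parallel discussions
        room['now'] = [t.strip() for t in m.group('text').split(',')]
        room['next'] = []  # every "now" clears stale "next" entries
    else:
        room['next'].append(m.group('text'))

handle('#swift now discussing ring placement, golang, EC fragment repair')
handle('#swift next at 14:00 we plan to discuss better #keystone integration')
print(state['swift'])
```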


--John







Re: [openstack-dev] [nova] [glance] [cinder] [neutron] [keystone] - RFC cross project request id tracking

2017-05-16 Thread John Dickinson


On 14 May 2017, at 4:04, Sean Dague wrote:

> One of the things that came up in a logging Forum session is how much effort 
> operators are having to put into reconstructing flows for things like server 
> boot when they go wrong, as every time we jump a service barrier the 
> request-id is reset to something new. The back and forth between Nova / 
> Neutron and Nova / Glance would be definitely well served by this. Especially 
> if this is something that's easy to query in elastic search.
>
> The last time this came up, some people were concerned that trusting 
> request-id on the wire was concerning to them because it's coming from random 
> users. We're going to assume that's still a concern by some. However, since 
> the last time that came up, we've introduced the concept of "service users", 
> which are a set of higher-privilege services that we are using to wrap user 
> requests between services so that long-running request chains (like image 
> snapshot) can keep going. We trust these service users enough to keep on 
> trucking even after the user token has expired for these long-running 
> operations. We could use this same trust path for request-id chaining.
>
> So, the basic idea is, services will optionally take an inbound 
> X-OpenStack-Request-ID which will be strongly validated to the format 
> (req-$uuid). They will continue to always generate one as well. When the 
> context is built (which is typically about 3 more steps down the paste 
> pipeline), we'll check that the service user was involved, and if not, reset 
> the request_id to the local generated one. We'll log both the global and 
> local request ids. All of these changes happen in oslo.middleware, 
> oslo.context, oslo.log, and most projects won't need anything to get this 
> infrastructure.
>
> The python clients, and callers, will then need to be augmented to pass the 
> request-id in on requests. Servers will effectively decide when they want to 
> opt into calling other services this way.
>
> This only ends up logging the top line global request id as well as the last 
> leaf for each call. This does mean that full tree construction will take more 
> work if you are bouncing through 3 or more servers, but it's a step which I 
> think can be completed this cycle.
>
> I've got some more detailed notes, but before going through the process of 
> putting this into an oslo spec I wanted more general feedback on it so that 
> any objections we didn't think about yet can be raised before going through 
> the detailed design.
>
>   -Sean
>
> -- 
> Sean Dague
> http://dague.net
>


I'm not sure the best place to respond (mailing list or gerrit), so
I'll write this up and post it to both places.

I think the idea behind this proposal is great. It has the potential
to bring a lot of benefit to users who are tracing a request across
many different services, in part by making it easy to search in an
indexing system like ELK.

The current proposal has some elements that won't work with the way
Swift currently solves this problem. This is mostly due to the
proposed uuid-ish check for validation. However, the Swift solution
has a few aspects that I believe would be very helpful for the entire
community.

NB: Swift returns both an `X-OpenStack-Request-ID` and an `X-Trans-ID`
header in every response. The `X-Trans-ID` was implemented before the
OpenStack request ID was proposed, and so we've kept the `X-Trans-ID` so
as not to break existing clients. The value of `X-OpenStack-Request-ID`
in any response from Swift is simply a mirror of the `X-Trans-ID` value.

The request id in Swift is made up of a few parts:

X-Openstack-Request-Id: txbea0071df2b0465082501-00591b3077saio-extraextra


In the code, this is generated from:

'tx%s-%010x%s' % (uuid.uuid4().hex[:21], time.time(), quote(trans_id_suffix))

...meaning that there are three parts to the request id. Let's take
each in turn.

The first part always starts with 'tx' (originally from the
"transaction id") and then is the first 21 hex characters of a uuid4.
The truncation is to limit the overall length of the value.

The second part is the hex value of the current time, padded to 10
characters.

Finally, the third part is the quoted suffix, and it defaults to the
empty string. The suffix itself can be made of two parts. The first is
configured in the Swift proxy server itself (i.e. the service that does
the logging) via the `trans_id_suffix` config. This allows an operator
to set a different suffix for each API endpoint or each region or each
cluster in order to help distinguish them in logs. For example, if a
deployment with multiple clusters uses centralized log aggregation, a
different trans_id_suffix value for each cluster makes it easy to tell which
cluster handled a given request.
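
Putting those pieces back together, decoding a transaction id is
straightforward; this is an illustrative decoder based on the format described
above, not code from Swift itself:

```python
# Split a Swift transaction id: 'tx' + 21 hex chars + '-' + 10 hex time + suffix.
import datetime

def parse_trans_id(trans_id):
    if not trans_id.startswith('tx') or trans_id[23] != '-':
        raise ValueError('not a Swift transaction id')
    random_part = trans_id[2:23]          # first 21 hex chars of a uuid4
    timestamp = int(trans_id[24:34], 16)  # hex-encoded time.time()
    suffix = trans_id[34:]                # operator suffix + per-request extra
    return random_part, datetime.datetime.utcfromtimestamp(timestamp), suffix

print(parse_trans_id('txbea0071df2b0465082501-00591b3077saio-extraextra'))
# ('bea0071df2b0465082501', datetime.datetime(2017, 5, 16, 17, 1, 43), 'saio-extraextra')
```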

Re: [openstack-dev] [swift] Adding our developments to Assosciated projects list

2017-05-03 Thread John Dickinson
Nejc,

Great to see stuff being built on and around Swift. There's no special process 
for adding stuff to that page. It's a .rst file in the Swift code repo 
(https://github.com/openstack/swift/blob/master/doc/source/associated_projects.rst)
 and you can submit a patch for it via the normal patch submission process. As 
soon as the patch lands, the docs are rebuilt and you'll see the update 
reflected in the live page 
(https://docs.openstack.org/developer/swift/associated_projects.html)

Please feel free to stop by the #openstack-swift IRC channel and tell us about 
iostack and crystal.

--John




On 3 May 2017, at 6:48, Nejc Bat wrote:

> Greetings,
>
> Since I haven't found any appropriate guideline on the webpages and
> wiki I have decided to contact the lists. I apologize in
> advance if I'm spamming a bit.
>
> As part of the IOStack project (http://iostack.eu/) we developed a
> really promising feature for Swift that we called Crystal
> (http://crystal-sds.org/)
>
> I'm interested what is the official process of getting our results
> featured in the "Associated Projects"
> (https://docs.openstack.org/developer/swift/associated_projects.html)
> part of the documentation.
>
> Any help will be greatly appreciated. Thank you in advance.
>
> -- 
>
> *Best regards, / Lep pozdrav,
>
> Nejc Bat
> /Project Manager / Vodja projektov/*
> --
> E-mail: nejc@arctur.si
>
> Arctur d.o.o.
> Industrijska cesta 1a, Kromberk
> SI-5000 Nova Gorica
> Slovenija / EU
>
> Phone: (+386) 5 3029 070
> GSM: (+386) 41 400 987
> Telefax: (+386) 5 3022 042
> Internet: http://www.arctur.net/






[openstack-dev] [monasca][swift][storyboard] migration experience

2017-04-17 Thread John Dickinson
Monasca-teers,

As the migration away from Launchpad and to Storyboard moves forward,
the Swift team has been considering making the move. Monasca has
already made the move, so I'd love to hear your thoughts on that
process.

1) How did you do the change? All at once or gradually? If you did it
over again, would you do it the same way?

2) Are you missing anything from the migration? How do you manage
security bugs that are embargoed?

3) How are you managing being "different" in the OpenStack community
for where to go file bugs or track work?

4) Have you had any negative impact from old links to Launchpad bugs?
What about links in commit messages? How did you manage links in
patches that were open at the time of migration?

5) What other thoughts do you have on the move?


Thank you for your time and thoughts,


John









Re: [openstack-dev] [kolla][all][tc] Starting a core reviewer mentorship program for Kolla deliverables

2017-04-13 Thread John Dickinson


On 12 Apr 2017, at 18:54, Michał Jastrzębski wrote:

> I was also thinking of having "cores just for single thing" in Kolla,
> but I think that won't really work as most of our code is variation of
> other parts with very little unique things to this, it's tuning up
> details that takes most of our work (aside of already split
> kolla-ansible and kolla-kubernetes).
>
> That being said I fully support any effort to train new core team
> members. In fact, I'd like to explore few ideas for that and maybe ask
> for opinions of broader community:
>
> 1. Mentorship program
> Every person who wants to be a core but doesn't feel confident about
> his/her ability can ask the core team for a mentor. This mentor-mentee
> relationship will work in a way that the mentee can ask the mentor to
> re-review his/her review (+2 of my +1, or -1 with an explanation of what I
> missed), ask technical questions, and things like that. I would be happy to
> act as the first point of contact to pair mentees with mentors (once we
> establish some mechanism for that).
>
> 2. +1 matters
> One thing that I see happening is the misconception that +1 doesn't
> matter. It does, but maybe we need to do something to break that
> impression. Frankly I don't have good ideas for that, any experiences
> anyone?
>
> I think we, as broader OpenStack community could make this full
> fledged cross project discussion and trade experiences (that's why I
> broaden subject tags a little:)).


Thanks for broadening the subject tags. I hadn't seen this thread yet.

While I don't have "answers" for you questions (of course I have ideas and 
opinions though), the mentorship idea and starting to break the "+1 doesn't 
matter" meme are both great! Thank you for bring up the topic.


--John







Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-21 Thread John Dickinson


On 21 Mar 2017, at 15:34, Alex Schultz wrote:

> On Tue, Mar 21, 2017 at 3:45 PM, John Dickinson <m...@not.mn> wrote:
>> I've been following this thread, but I must admit I seem to have missed 
>> something.
>>
>> What problem is being solved by storing per-server service configuration 
>> options in an external distributed CP system that is currently not possible 
>> with the existing pattern of using local text files?
>>
>
> This effort is partially to help the path to containerization where we
> are delivering the service code via container but don't want to
> necessarily deliver the configuration in the same fashion.  It's about
> ease of configuration: moving from service -> config files (on many
> hosts/containers) to service -> config via etcd (a single source
> cluster).  It's also about an alternative to configuration management
> where today we have many tools handling the files in various ways
> (templates, from repo, via code providers) and trying to come to a
> more unified way of representing the configuration such that the end
> result is the same for every deployment tool.  All tools load configs
> into $place and services can be configured to talk to $place.  It
> should be noted that configuration files won't go away because many of
> the companion services still rely on them (rabbit/mysql/apache/etc) so
> we're really talking about services that currently use oslo.

Thanks for the explanation!

So in the future, you expect a node in a clustered OpenStack service to be 
deployed and run as a container, and then that node queries a centralized etcd 
(or other) k/v store to load config options. And other services running in the 
(container? cluster?) will load config from local text files managed in some 
other way.

No wait. It's not the *services* that will load the config from a kv 
store--it's the config management system? So in the process of deploying a new 
container instance of a particular service, the deployment tool will pull the 
right values out of the kv system and inject those into the container, I'm 
guessing as a local text file that the service loads as normal?
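
For concreteness, that injection step might look roughly like this -- a sketch 
assuming the third-party python-etcd3 client and a key layout I invented for 
the example:

```python
# Render an oslo-style ini file for a container from keys stored in etcd.
import etcd3

def render_config(service, dest_path, etcd_host='etcd.example.com'):
    client = etcd3.client(host=etcd_host, port=2379)
    prefix = '/config/%s/DEFAULT/' % service
    with open(dest_path, 'w') as f:
        f.write('[DEFAULT]\n')
        for value, meta in client.get_prefix(prefix):
            opt = meta.key.decode()[len(prefix):]
            f.write('%s = %s\n' % (opt, value.decode()))

# e.g. run by the deployment tool just before starting the service container
render_config('cinder', '/etc/cinder/cinder.conf')
```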

This means you could have some (OpenStack?) service for inventory management 
(like Karbor) that is seeding the kv store, the cloud infrastructure software 
itself is "cloud aware" and queries the central distributed kv system for the 
correct-right-now config options, and the cloud service itself gets all the 
benefits of dynamic scaling of available hardware resources. That's pretty 
cool. Add hardware to the inventory, the cloud infra itself expands to make it 
available. Hardware fails, and the cloud infra resizes to adjust. Apps running 
on the infra keep doing their thing consuming the resources. It's clouds all 
the way down :-)

Despite sounding pretty interesting, it also sounds like a lot of extra 
complexity. Maybe it's worth it. I don't know.

Thanks again for the explanation.


--John




>
> Thanks,
> -Alex
>
>>
>> --John
>>
>>
>>
>>
>> On 21 Mar 2017, at 14:26, Davanum Srinivas wrote:
>>
>>> Jay,
>>>
>>> the /v3alpha HTTP API  (grpc-gateway) supports watch
>>> https://coreos.com/etcd/docs/latest/dev-guide/apispec/swagger/rpc.swagger.json
>>>
>>> -- Dims
>>>
>>> On Tue, Mar 21, 2017 at 5:22 PM, Jay Pipes <jaypi...@gmail.com> wrote:
>>>> On 03/21/2017 04:29 PM, Clint Byrum wrote:
>>>>>
>>>>> Excerpts from Doug Hellmann's message of 2017-03-15 15:35:13 -0400:
>>>>>>
>>>>>> Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:
>>>>>>>
>>>>>>> On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow <harlo...@fastmail.com>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> * How does reloading work (does it)?
>>>>>>>
>>>>>>>
>>>>>>> No. There is nothing that we can do in oslo that will make services
>>>>>>> magically reload configuration. It's also unclear to me if that's
>>>>>>> something to do. In a containerized environment, wouldn't it be
>>>>>>> simpler to deploy new services? Otherwise, supporting signal based
>>>>>>> reload as we do today should be trivial.
>>>>>>
>>>>>>
>>>>>> Reloading works today with files, that's why the question is important
>>>>>> to think through. There is a special flag to set on options that are
>>>>>> "mutable" and then there are functions within oslo.config to reload.
>>>>>> Those are usually triggered when a service gets a SIGHUP or something
>>>>>> similar.

Re: [openstack-dev] [oslo][kolla][openstack-helm][tripleo][all] Storing configuration options in etcd(?)

2017-03-21 Thread John Dickinson
I've been following this thread, but I must admit I seem to have missed 
something.

What problem is being solved by storing per-server service configuration 
options in an external distributed CP system that is currently not possible 
with the existing pattern of using local text files?


--John




On 21 Mar 2017, at 14:26, Davanum Srinivas wrote:

> Jay,
>
> the /v3alpha HTTP API  (grpc-gateway) supports watch
> https://coreos.com/etcd/docs/latest/dev-guide/apispec/swagger/rpc.swagger.json
>
> -- Dims
>
> On Tue, Mar 21, 2017 at 5:22 PM, Jay Pipes  wrote:
>> On 03/21/2017 04:29 PM, Clint Byrum wrote:
>>>
>>> Excerpts from Doug Hellmann's message of 2017-03-15 15:35:13 -0400:

>>>> Excerpts from Thomas Herve's message of 2017-03-15 09:41:16 +0100:
>>>>>
>>>>> On Wed, Mar 15, 2017 at 12:05 AM, Joshua Harlow
>>>>> wrote:
>>>>>
>>>>>> * How does reloading work (does it)?
>>>>>
>>>>> No. There is nothing that we can do in oslo that will make services
>>>>> magically reload configuration. It's also unclear to me if that's
>>>>> something to do. In a containerized environment, wouldn't it be
>>>>> simpler to deploy new services? Otherwise, supporting signal based
>>>>> reload as we do today should be trivial.


 Reloading works today with files, that's why the question is important
 to think through. There is a special flag to set on options that are
 "mutable" and then there are functions within oslo.config to reload.
 Those are usually triggered when a service gets a SIGHUP or something
 similar.

 We need to decide what happens to a service's config when that API
 is used and the backend is etcd. Maybe nothing, because every time
 any config option is accessed the read goes all the way through to
 etcd? Maybe a warning is logged because we don't support reloads?
 Maybe an error is logged? Or maybe we flush the local cache and start
 reading from etcd on future accesses?

>>>
>>> etcd provides the ability to "watch" keys. So one would start a thread
>>> that just watches the keys you want to reload on, and when they change
>>> that thread will see a response and can reload appropriately.
>>>
>>> https://coreos.com/etcd/docs/latest/dev-guide/api_reference_v3.html
>>
>>
>> Yep. Unfortunately, you won't be able to start an eventlet greenthread to
>> watch an etcd3/gRPC key. The python grpc library is incompatible with
>> eventlet/gevent's monkeypatching technique and causes a complete program
>> hang if you try to communicate with the etcd3 server from a greenlet. Fun!
>>
>> So, either use etcd2 (the no-longer-being-worked-on HTTP API) or don't use
>> eventlet in your client service.
>>
>> Best,
>> -jay
>>
>>
>
>
>
> -- 
> Davanum Srinivas :: https://twitter.com/dims
>




Re: [openstack-dev] [keystone][all] Reseller - do we need it?

2017-03-16 Thread John Dickinson


On 16 Mar 2017, at 14:10, Lance Bragstad wrote:

> Hey folks,
>
> The reseller use case [0] has been popping up frequently in various
> discussions [1], including unified limits.
>
> For those who are unfamiliar with the reseller concept, it came out of
> early discussions regarding hierarchical multi-tenancy (HMT). It
> essentially allows a certain level of opaqueness within project trees. This
> opaqueness would make it easier for providers to "resell" infrastructure,
> without having customers/providers see all the way up and down the project
> tree, hence it was termed reseller. Keystone originally had some ideas of
> how to implement this after the HMT implementation laid the ground work,
> but it was never finished.
>
> With it popping back up in conversations, I'm looking for folks who are
> willing to represent the idea. Participating in this thread doesn't mean
> you're on the hook for implementing it or anything like that.
>
> Are you interested in reseller and willing to provide use-cases?
>
>
>
> [0]
> http://specs.openstack.org/openstack/keystone-specs/specs/keystone/mitaka/reseller.html#problem-description



This is interesting to me. It sounds very similar to the reseller concept that 
Swift has. In Swift, the reseller is used to group accounts. Remember that an 
account in Swift is like a bank account. It's where you put stuff, and is 
mapped to one or more users via an auth system. So a Swift account is scoped to 
a particular reseller, and an auth system is responsible for one or more 
resellers.

You can see this in practice with the "reseller prefix" that's used in Swift's 
URLs. The default is "AUTH_", so my account might be "AUTH_john". But it's 
totally possible that there could be another auth system assigned to a 
different reseller prefix. If that other reseller prefix is "BAUTH_", then 
there could exist a totally independent "BAUTH_john" account. The only thing 
that ties some user creds (or token) to a particular account in Swift is the 
auth system.
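
To make that concrete, here's a tiny sketch of how a reseller prefix scopes 
account names in Swift-style paths (the helper is invented for illustration):

```python
# Split a Swift object path into (reseller_prefix, account, rest-of-path).
def split_account(path):
    version, account, rest = path.lstrip('/').split('/', 2)
    prefix, _, _ = account.partition('_')
    return prefix + '_', account, rest

print(split_account('/v1/AUTH_john/photos/cat.jpg'))
print(split_account('/v1/BAUTH_john/photos/cat.jpg'))  # a different account
```

Each auth system then only validates requests for accounts carrying its own 
prefix.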

So this reseller concept in Swift allows deployers to have more than one auth 
system installed in the same cluster at the same time. And, in fact, this is 
exactly why it was first used. If you get an account with Rackspace Cloud 
Files, you'll see the reseller prefix is "MossoCloudFS_", but it turns out that 
when Swift was created JungleDisk was an internal Rackspace product and also 
stored a bunch of data in the same system. JungleDisk managed its own users 
and auth system, so they had a different reseller prefix that was tied to a 
different auth system.

From the Keystone spec, it seems that the reseller idea is a way to group 
domains, very much like the reseller concept in Swift. I'd suggest that instead 
of building ever-increasing hierarchies of groups of users, supporting more 
than one auth system at a time is a proven way to scale out this solution. So 
instead of adding the complexity to Keystone of ever-deepening groupings, 
support having more than one Keystone instance (or even Keystone + another auth 
system) in a project's pipeline. This allows grouping users into distinct 
partitions, and it scales by adding more keystones instead of bigger keystones.


--John








Re: [openstack-dev] [ptls] Project On-Boarding Rooms

2017-03-15 Thread John Dickinson
I'm interested in having a room for Swift.

--John




On 15 Mar 2017, at 11:20, Kendall Nelson wrote:

> Hello All!
>
> As you may have seen in a previous thread [1] the Forum will offer project
> on-boarding rooms! This idea is that these rooms will provide a place for
> new contributors to a given project to find out more about the project,
> people, and code base. The slots will be spread out throughout the whole
> Summit and will be 90 min long.
>
> We have very limited slots available for interested projects so it will
> be a first come first served process. Let me know if you are interested and
> I will reserve a slot for you if there are spots left.
>
> - Kendall Nelson (diablo_rojo)
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2017-March/113459.html






Re: [openstack-dev] [cinder][glance][horizon][keystone][nova][qa][swift] Feedback needed: Removal of legacy per-project vanity domain redirects

2017-03-08 Thread John Dickinson


On 8 Mar 2017, at 11:04, Andreas Jaeger wrote:

> On 2017-03-08 17:23, Brian Rosmaita wrote:
>> On 3/8/17 10:12 AM, Monty Taylor wrote:
>>> Hey all,
>>>
>>> We have a set of old vanity redirect URLs from back when we made a URL
>>> for each project:
>>>
>>> cinder.openstack.org
>>> glance.openstack.org
>>> horizon.openstack.org
>>> keystone.openstack.org
>>> nova.openstack.org
>>> qa.openstack.org
>>> swift.openstack.org
>>>
>>> They are being served from an old server we'd like to retire. Obviously,
>>> moving a set of http redirects is trivial, but these domains have been
>>> deprecated for about 4 years now, so we figured we'd clean house if we can.
>>>
>>> We know that the swift team has previously expressed that there are
>>> links out in the wild pointing to swift.o.o/content that still work and
>>> that they don't want to break anyone, which is fine. (although if the
>>> swift team has changed their minds, that's also welcome)
>>>
>>> for the rest of you, can we kill these rather than transfer them?
>>
>> My concern is that glance.openstack.org is easy to remember and type, so
>> I imagine there are links out there that we have no control over using
>> that URL.  So what are the consequences of it 404'ing or "site cannot be
>> reached" in a browser?
>>
>> glance.o.o currently redirects to docs.o.o/developer/glance
>>
>> If glance.o.o failed for me, I'd next try openstack.org (or
>> www.openstack.org).  Those would give me a page with a top bar of links,
>> one of which is DOCS.  If I took that link, I'd get the docs home page.
>>
>> There's a search bar there; typing in 'glance' gets me
>> docs.o.o/developer/glance as the first hit.
>>
>> If instead I scroll past the search bar, I have to scroll down to
>> "Project-Specific Guides" and follow "Services & Libraries" ->
>> "Openstack Services" -> "image service (glance) -> docs.o.o/developer/glance
>>
>> Which sounds kind of bad ... until I type "glance docs" into google.
>> Right now the first hit is docs.o.o/developer/glance.  And all the kids
>> these days use the google.  So by trying to be clever and hack the URL,
>> I could get lost, but if I just google 'glance docs', I find what I'm
>> looking for right away.
>>
>> So I'm willing to go with the majority on this, with the caveat that if
>> one or two teams keep the redirect, it's going to be confusing to end
>> users if the redirect doesn't work for other projects.
>
> Very few people know about these URLs at all and there are only a few
> places that use them in openstack (I just sent a few patches for those).
> If you google for "openstack glance", you won't get this URL at all,

On the other hand, "swift.openstack.org" is the first result when searching for 
"openstack swift", and although it's very hard to find backlinks, my quick 
look at a few search engines does suggest that there are quite a few pages 
that reference the swift vanity subdomain.

I've already approved Monty's patch to swift, but I think that's a separate 
issue from keeping the redirects active. I *am* concerned with the things Brian 
brings up. It's hard to find these docs when starting from the docs landing 
page. These dev docs have great info, and removing an easy-to-type/remember URL 
seems counter to helping our users. Personally, I type 
"swift.openstack.org" all the time, just because it's short and easy to 
remember and I don't remember the one it redirects to.


--John





>
> Andreas
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>


Re: [openstack-dev] [docs][release][ptl] Adding docs to the release schedule

2017-03-01 Thread John Dickinson


On 1 Mar 2017, at 10:07, Alexandra Settle wrote:

> On 3/1/17, 5:58 PM, "John Dickinson" <m...@not.mn> wrote:
>
>
>
> On 1 Mar 2017, at 9:52, Alexandra Settle wrote:
>
> > Hi everyone,
> >
> > I would like to propose that we introduce a “Review documentation” 
> period on the release schedule.
> >
> > We would formulate it as a deadline, so that it fits in the schedule 
> and making it coincide with the RC1 deadline.
> >
> > For projects that are not following the milestones, we would translate 
> this new inclusion literally, so if you would like your project to be 
> documented at docs.o.o, then docs must be introduced and reviewed one month 
> before the branch is cut.
>
> Which docs are these? There are several different sets of docs that are 
> hosted on docs.o.o that are managed within a project repo. Are you saying 
> those won't get pushed to
> docs.o.o if they are patched within a month of the cycle release?
>
> The only sets of docs that are published on the docs.o.o site that are 
> managed in project-specific repos are the project-specific installation 
> guides. That management is entirely up to the team themselves, but I would 
> like to push for the integration of a “documentation review” period to ensure 
> that those teams are reviewing their docs in their own tree.
>
> This is a preferential suggestion, not a demand. I cannot make you review 
> your documentation at any given period.
>
> The ‘month before’ that I refer to would be for introduction of documentation 
> and a review period. I will not stop any documentation being pushed to the 
> repo unless, of course, it is untested and breaks the installation process.

There's the dev docs, the install guide, and the api reference. Each of these 
is published at docs.o.o, and each has elements that need to be up-to-date 
with a release.

>
>
> >
> > In the last week since we released Ocata, it has become increasingly 
> apparent that the documentation was not updated from the development side. We 
> were not aware of a lot of new enhancements, features, or major bug fixes for 
> certain projects. This means we have released with incorrect/out-of-date 
> documentation. This is not only an unfortunately bad reflection on our team, 
> but on the project teams themselves.
> >
> > The new inclusion to the schedule may seem unnecessary, but a lot of 
> people rely on this and the PTL drives milestones from this schedule.
> >
> > From our side, I endeavor to ensure our release managers are working 
> harder to ping and remind doc liaisons and PTLs to ensure the documentation 
> is appropriately updated and working to ensure this does not happen in the 
> future.

Overall, I really like the general concept here. It's very important to have 
good docs. Good docs start with the patch, and we should be encouraging the 
idea of "patch must have both tests and docs before landing".

On a personal note, though, I think I'll find this pretty tough. First, it's 
really hard for me to define when docs are "done", so it's hard to know that 
the docs are "right" at the time of release. Second, docs are built and 
published at each commit, so updating the docs "later, in a follow-on patch" is 
a simple thing to hope for and gives fast feedback, even after a release. (Of 
course the challenge is actually *doing* the patch later--see my previous 
paragraph.)

> >
> > Thanks,
> >
> > Alex
>
>


Re: [openstack-dev] [docs][release][ptl] Adding docs to the release schedule

2017-03-01 Thread John Dickinson


On 1 Mar 2017, at 9:52, Alexandra Settle wrote:

> Hi everyone,
>
> I would like to propose that we introduce a “Review documentation” period on 
> the release schedule.
>
> We would formulate it as a deadline, so that it fits in the schedule and 
> making it coincide with the RC1 deadline.
>
> For projects that are not following the milestones, we would translate this 
> new inclusion literally, so if you would like your project to be documented 
> at docs.o.o, then docs must be introduced and reviewed one month before the 
> branch is cut.

Which docs are these? There are several different sets of docs that are hosted 
on docs.o.o that are managed within a project repo. Are you saying those won't 
get pushed to docs.o.o if they are patched within a month of the cycle release?


>
> In the last week since we released Ocata, it has become increasingly apparent 
> that the documentation was not updated from the development side. We were not 
> aware of a lot of new enhancements, features, or major bug fixes for certain 
> projects. This means we have released with incorrect/out-of-date 
> documentation. This is not only an unfortunately bad reflection on our team, 
> but on the project teams themselves.
>
> The new inclusion to the schedule may seem unnecessary, but a lot of people 
> rely on this and the PTL drives milestones from this schedule.
>
> From our side, I endeavor to ensure our release managers are working harder 
> to ping and remind doc liaisons and PTLs to ensure the documentation is 
> appropriately updated and working to ensure this does not happen in the 
> future.
>
> Thanks,
>
> Alex




[openstack-dev] [swift] thoughts on the PTG

2017-02-28 Thread John Dickinson
The PTG was great. Not only were there three full days of Swift-specific dev 
work, we had lots of opportunity to work with other projects. Here's a quick 
rundown of the highs and lows of the week.

The first obvious benefit to the PTG is the relatively easy access to other 
projects. Yes, there are some issues we all need to work through related to 
scheduling and awareness, but the fact that we're all in the same building is 
useful. The Swift team was able to work with many different groups last week. 
I'm excited about the ongoing work around requirements, packaging/testing 
infrastructure, barbican, improving docs, and improving the QA process.

The PTG also had the general "flavor" of previous Swift midcycle hackathons, in 
that there weren't a lot of external constraints on our time. It was a week 
devoted to working closely together, face-to-face. I'm happy that the PTG was 
able to preserve this aspect of midcycles.

However, one downside I saw was the smaller attendance. The PTG was slightly 
smaller than a midcycle and much smaller than previous summits. I missed the 
extra diversity of opinion, and we especially had a much smaller set of ops 
feedback than usual. As devs, it's very easy to get carried away designing 
something that isn't actually super important to our users. Our users are the 
ops and admins who run the clusters, so their feedback is crucial for 
prioritization and knowing what particular issues are most broken.

I'm not sure yet what the upcoming summit/forum event will look like. I think 
there's still a lot of confusion around its format and who should 
be there. But after that event, we'll all be able to look back and see what 
needs to be changed (if anything).

Within the Swift community, we made good progress on quite a few topics, but I 
want to highlight just a few. We had a lot of discussion around general storage 
server performance and efficiency. This topic has a *lot* of different parts, 
including some golang rewrites. All of these were addressed, and we'll be 
working through them with the whole OpenStack community.

We also talked about enhancing a few existing features. One improvement we've 
made recent progress on is support for global erasure coding. This takes 
advantage of the efficiencies erasure coding brings, but it works around the 
challenges of deploying it in globally distributed clusters. We've landed the 
first part of this work (replicating EC fragments), but there's still a lot to 
do related to how and when the fragments in different regions are accessed.

I was excited about the cross-project conversations we had with the Barbican 
team. We're working on enhancing Swift's encryption-at-rest capabilities, and 
users are asking for integration with key management systems. Barbican provides 
this, so we spent quite a while figuring out what's needed and what the 
eventual solution will look like. In addition to the Swift code that needs to 
be written, one of the next steps is to work with the Barbican team on gate 
testing together.

We talked about many other things during the week. Overall, it was a great 
week, and I'm excited to be working with my fellow Swift contributors.


--John







Re: [openstack-dev] [tripleo] Fwd: TripleO mascot - how can I help your team?

2017-02-16 Thread John Dickinson


On 16 Feb 2017, at 15:01, Chris Dent wrote:

> On Thu, 16 Feb 2017, Dan Prince wrote:
>
>> And yes. We are all OpenStack developers in a sense. We want to align
>> things in the technical arena. But I think you'll also find that most
>> people more closely associate themselves to a team within OpenStack
>> than they perhaps do with the larger project. Many of us in TripleO
>> feel that way I think. This is a healthy thing, being part of a team.
>> Don't make us feel bad because of it by suggesting that uber OpenStack
>> graphics styling takes precedent.
>
> I'd very much like to have a more clear picture of the number of
> people who think of themselves primarily as "OpenStack developers"
> or primarily as "$PROJECT developers".
>
> I've always assumed that most people in the community™ thought of
> themselves as the former but I'm realizing (in part because of what
> Dan's said here) that's bias or solipsism on my part and I really
> have no clue what the situation is.
>
> Anyone have a clue?

Without resorting to anecdote, two things from the foundation are interesting 
to me on this topic.

First, the foundation is making individual project logos for each project, thus 
promoting the individual project, and has said the logos will factor into future 
OpenStack marketing.

Second, the foundation messaging around the PTG emphasizes per-project 
developers. From https://www.openstack.org/ptg/

The event is not optimized for non-contributors or people who can’t relate to a 
specific project team. Each team is given a single room and attendees are 
expected to spend all their time with their project team. Attendees who can’t 
pick a specific team to work with are encouraged to skip the event in favor of 
attending the OpenStack Summit, where a broader range of topics is discussed.




--John




>
> -- 
> Chris Dent ¯\_(ツ)_/¯   https://anticdent.org/
> freenode: cdent tw: @anticdent




Re: [openstack-dev] Call for mentors and funding - Outreachy

2017-02-14 Thread John Dickinson


On 13 Feb 2017, at 20:17, Mahati C wrote:

> Hello everyone,
>
> An update on the Outreachy program, including a request for volunteer
> mentors and funding. For those of you who are not aware, Outreachy helps
> people from underrepresented groups get involved in free and open source
> software by matching interns with established mentors in the upstream
> community. For more info, please visit:
> https://wiki.openstack.org/wiki/Outreachy
>
> We so far have a confirmation for three spots for OpenStack in this round
> of Outreachy. But we are receiving more applicants who are interested in
> contributing to different OpenStack projects. Interested mentors - please
> publish your project ideas to this page
> https://wiki.openstack.org/wiki/Internship_ideas. Here is a link that helps
> you get acquainted with mentorship process:
> https://wiki.openstack.org/wiki/Outreachy/Mentors
>
> We are looking for additional sponsors to help support the increase in
> OpenStack applicants. The sponsorship cost is 6,500 USD per intern, which
> is used to provide them a stipend for the three-month program. You can
> learn more about sponsorship here:
> https://wiki.gnome.org/Outreachy/Admin/InfoForOrgs#Action
>
> Outreachy has been one of the most important and effective diversity
> efforts we’ve invested in. It has evidently been a way to retain new
> contributors; we’ve had some amazing participants become long-term
> contributors to our community.
>
> Please help spread the word. If you are interested in becoming a mentor or
> sponsoring an intern, please contact me (mahati.chamarthy AT intel.com) or
> Victoria (victoria AT redhat.com).
>
> Thanks,
> Mahati


As someone who's participated as a mentor more than once, I'd definitely 
encourage anyone involved in upstream OpenStack work to consider doing this. 
You don't have to have any "special" qualifications! You're fine just the way 
you are! All that's needed is a willingness to want to see others succeed. If 
you already help other people in the community out by giving good reviews and 
answering questions in IRC, you're already doing 90% of what's needed to be an 
Outreachy mentor. In my experience, it's also good to spend 1-3 hours a week as 
"office hours" or on a short video chat with the mentee, but that's it. The 
extra commitment is not burdensome. Overall, it's a rewarding experience for 
everyone. Please consider being a mentor.

--John






Re: [openstack-dev] [Swift][swiftclient][horizon] Improve UX by enabling HTTP headers configuration in UI and CLI

2017-02-10 Thread John Dickinson


On 10 Feb 2017, at 7:07, Denis Makogon wrote:

> Greetings.
>
> I've been developing Swift middleware that depends on specific HTTP headers
> and figured out that there's only one way to specify them on the client side -
> programmatically. i can add HTTP headers to each Swift HTTP API
> method only in code (the CLI and dashboard do not support HTTP header
> configuration, except for cases enabled by default, like the "copy"
> middleware, because swiftclient defines it as a separate API method).
>
> My point here is, as a developer, i don't have an OpenStack-aligned way to
> exercise HTTP header-dependent middleware without hacking into both
> swiftclient and the dashboard, which makes me fall back to cURL and brings a
> lot of overhead while working with Swift.
>
> So, is there any interest in having such thing in swiftclient and,
> subsequently, in dashboard?
> If yes, let me know (it shouldn't be that complicated because at
> the swiftclient python API level we are already capable of sending HTTP headers).

Good news! python-swiftclient already supports sending arbitrary headers via 
the CLI with the -H/--header option (in addition to the SDK, as you mentioned). IIRC 
this is *not* yet supported in the combined openstack client, but I think it 
would be a great addition.
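
For example, something along these lines (the container, object, and header 
names are made up for illustration):

    swift post -H "X-Container-Meta-Color: blue" mycontainer
    swift upload -H "X-Object-Meta-Source: camera" mycontainer photo.jpg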

--John


>
> Kind regards,
> Denis Makogon


Re: [openstack-dev] [all][py3][swift][devstack] USE_PYTHON3 works! (well somewhat)

2017-01-03 Thread John Dickinson


On 3 Jan 2017, at 10:38, Doug Hellmann wrote:

> Excerpts from John Dickinson's message of 2017-01-03 09:02:19 -0800:
>>
>> On 2 Jan 2017, at 13:06, Davanum Srinivas wrote:
>>
>>> Folks,
>>>
>>> Short Story :
>>> [1] has merged in devstack, it adds support for a python 3.5 based
>>> up/down devstack test that just starts services and brings them down.
>>> see [2] for a test run.
>>>
>>> Need help from swift folks:
>>> Swift still needs work i have gotten as far as [3] UnpicklingError on
>>> ring data using [4][5][6][7]. Can someone from the swift team pick
>>> this up?
>>> Once you get this working, please add "swift" to the white list in [8]
>>> and remove the disable_service for swift services in [9]
>>
>> IIRC the issue is the differences between pickle, json, and arrays in py2 vs 
>> py3 (short summary: you can't deserialize in py3 stuff that was serialized 
>> in py2 without first changing the py2 code).
>
> Is that right? It seems like it would be the other way around. There
> are newer pickle protocols in 3 that aren't available at all for 2.
>
> There are also some new options to the load() function in 3 to do
> things like fix imports for standard library modules that were
> renamed and set the right default string encoding to make it possible
> to change the *3* code to be able to more easily load a pickle
> created by 2 [1].  Is that what you meant?

Nah, it's not the pickle protocol. It's the different way (some versions of) 
py27 [de]serializes arrays vs how py3 does it. The following breaks for 
py2.7.10 and works for py2.7.12 (.11 is untested).

    python3 -c 'import array, pickle, os, sys; pickle.dump(array.array("I", [0, 0, 0]), os.fdopen(1, "wb"), protocol=2)' \
        | python -c 'import pickle, os, sys; print(pickle.load(os.fdopen(0, "rb")))'

So maybe there are ways to always ensure doing the right thing through some 
combination of try/excepts, "if six.PY3" blocks, plus lots of docs for ops to 
make sure upgrades go smoothly, but the real solution is to not use pickle. 
This is doubly true when considering that the data structure will be used by 
non-python code, too.
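
For example, a json round-trip of the array data sidesteps the whole problem. 
This is just a minimal sketch, not the actual ring serialization code:

    import array, json

    a = array.array("I", [0, 0, 0])
    s = json.dumps(list(a))              # plain text, identical on py2 and py3
    b = array.array("I", json.loads(s))  # loads cleanly under either python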

This is one of the things to put on the list for py3, and it certainly is not 
the last.

--John




>
> Doug
>
> [1] https://docs.python.org/3.5/library/pickle.html#pickle.load
>
>>
>> We're tracking this at https://bugs.launchpad.net/swift/+bug/1644387 and 
>> have a patch at https://review.openstack.org/#/c/401397/, but I can't 
>> guarantee that services will start as soon as that patch lands. (i.e. it's 
>> necessary, but might not be sufficient)
>>
>> --John
>>
>>>
>>> Other teams:
>>> Please consider adding DSVM jobs with USE_PYTHON3=True for your
>>> projects. This will hopefully help us get to our Pike goal for Python
>>> 3.x [10]
>>>
>>> Please stop by #openstack-python3 channel to chat.
>>>
>>> Thanks,
>>> Dims
>>>
>>> [1] https://review.openstack.org/#/c/414176/
>>> [2] 
>>> http://logs.openstack.org/76/414176/11/check/gate-devstack-dsvm-py35-updown-ubuntu-xenial-nv/
>>> [3] 
>>> http://logs.openstack.org/00/412500/7/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/f5a7fe7/logs/screen-s-proxy.txt.gz
>>> [4] https://review.openstack.org/#/c/412500/
>>> [5] https://review.openstack.org/#/c/414727/
>>> [6] https://review.openstack.org/#/c/416064/
>>> [7] https://review.openstack.org/#/c/416084/
>>> [8] https://review.openstack.org/#/c/414176/11/inc/python
>>> [9] 
>>> https://review.openstack.org/#/c/413775/5/jenkins/jobs/devstack-gate.yaml
>>> [10] https://review.openstack.org/#/c/349069/
>>>
>>> -- 
>>> Davanum Srinivas :: https://twitter.com/dims
>>>


Re: [openstack-dev] [all][py3][swift][devstack] USE_PYTHON3 works! (well somewhat)

2017-01-03 Thread John Dickinson


On 2 Jan 2017, at 13:06, Davanum Srinivas wrote:

> Folks,
>
> Short Story :
> [1] has merged in devstack, it adds support for a python 3.5 based
> up/down devstack test that just starts services and brings them down.
> see [2] for a test run.
>
> Need help from swift folks:
> Swift still needs work i have gotten as far as [3] UnpicklingError on
> ring data using [4][5][6][7]. Can someone from the swift team pick
> this up?
> Once you get this working, please add "swift" to the white list in [8]
> and remove the disable_service for swift services in [9]

IIRC the issue is the differences between pickle, json, and arrays in py2 vs 
py3 (short summary: you can't deserialize in py3 stuff that was serialized in 
py2 without first changing the py2 code).

We're tracking this at https://bugs.launchpad.net/swift/+bug/1644387 and have a 
patch at https://review.openstack.org/#/c/401397/, but I can't guarantee that 
services will start as soon as that patch lands. (i.e. it's necessary, but 
might not be sufficient)

--John




>
> Other teams:
> Please consider adding DSVM jobs with USE_PYTHON3=True for your
> projects. This will hopefully help us get to our Pike goal for Python
> 3.x [10]
>
> Please stop by #openstack-python3 channel to chat.
>
> Thanks,
> Dims
>
> [1] https://review.openstack.org/#/c/414176/
> [2] 
> http://logs.openstack.org/76/414176/11/check/gate-devstack-dsvm-py35-updown-ubuntu-xenial-nv/
> [3] 
> http://logs.openstack.org/00/412500/7/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/f5a7fe7/logs/screen-s-proxy.txt.gz
> [4] https://review.openstack.org/#/c/412500/
> [5] https://review.openstack.org/#/c/414727/
> [6] https://review.openstack.org/#/c/416064/
> [7] https://review.openstack.org/#/c/416084/
> [8] https://review.openstack.org/#/c/414176/11/inc/python
> [9] https://review.openstack.org/#/c/413775/5/jenkins/jobs/devstack-gate.yaml
> [10] https://review.openstack.org/#/c/349069/
>
> -- 
> Davanum Srinivas :: https://twitter.com/dims
>


Re: [openstack-dev] [all][infra] Proposed changes to unit-test setup

2016-11-22 Thread John Dickinson


On 22 Nov 2016, at 10:47, Andreas Jaeger wrote:

> When we (infra) changed the unit test jobs to not set up databases by
> default, we created special python-db and tox-db jobs that set up both
> MySQL and PostgreSQL databases. And that complicated the setup of those
> projects and led to problems like setting projects up via bindep for
> both databases even if only one was used.
>
> We had last week an IRC discussion [1] and came up with the following
> approach:
>
>   Projects can use a tools/test-setup.sh script that is called from
>   our unit test (tox, python27, python34, python35) targets. The
>   script is executed as root and should set up the needed databases -
>   or whatever is needed. The script needs to reside in the repository -
>   and thus might need to get backported to older branches.
>
>   This setup should be used for any kind of repo specific unit test
>   setup.
>
>   Projects are suggested to add to their developer documents, e.g. the
>   README or CONTRIBUTING or TESTING file, the usage of
>   tools/testsetup.sh. Developers should be able to use the script to
>   set up prerequisites for unit tests locally.
>
>   Long term goal is for projects to not use the -db jobs anymore, new
>   changes for them should not be accepted.
>
> This is implemented in project-config [2], an example usage in
> nodepool [3,4], which leads to a cleanup [5].
>
> Further investigation shows that the special searchlight setup can be
> solved with the same approach (searchlight change [6], project-config
> [7]). Here it's interesting to note that moving the setup in the
> repository, found a problem: The repo needs elasticsearch 1 for
> liberty and 2 for newer branches, this can now be done inside the
> repository.
>
> The xfs TMPDIR setup of swift [2] could have been done in general this way as
> well but that change needs to set TMPDIR for the unittests, passing
> information from the set up builder to the tox builder. This is
> currently not possible using only the proposed solution, and so would
> still require a custom tox job. Alternatively, this could be changed with
> some other way of passing the value of TMPDIR between these different
> invocations.

(actually link [10])

This sounds like a great idea that I would have loved to have in place a few 
weeks ago!

One question: is there a limit to which tox environments will call this in-repo 
script? Above you list a few, but Swift and other projects have repo-specific 
environments that would need the setup as well. How will that work?


>
> Today, a change was proposed [8,9] that would setup docker for kolla
> and kolla-ansible. I suggest to not merge it and instead use the same
> approach here.
>
> Credits for the proposal go to Jeremy - and this got triggered by
> comments by Jim. Thanks!
>
> Andreas
>
> [1]
> http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-11-17.log.html#t2016-11-17T15:07:38
>
> [2] https://review.openstack.org/399105
> [3] https://review.openstack.org/399079
> [4] https://review.openstack.org/399177
> [5] https://review.openstack.org/399180
> [6] https://review.openstack.org/399159
> [7] https://review.openstack.org/399169
> [8] https://review.openstack.org/400128
> [9] https://review.openstack.org/400474
> [10] https://review.openstack.org/394600
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>


Re: [openstack-dev] [security] FIPS Compliance (Was: [requirements][kolla][security] pycrypto vs cryptography)

2016-11-18 Thread John Dickinson


On 18 Nov 2016, at 8:14, Dean Troyer wrote:

>> -----Original Message-----
>> From: Luke Hinds 
> [...]
>>> for non security related functions, but when it comes to government
>>> compliance and running OpenStack on public clouds (and even private for the
>>> Telcos / NFV), not meeting FIPS will in some cases block production getting
>>> a green light, or at least make it a big challenge to push through.
>
> Are there any know cases of this happening?  If so, can those be
> publicly documented to quantify how much this issue is hurting
> deployments?
>
> On Fri, Nov 18, 2016 at 9:57 AM, Ian Cordasco  wrote:
>> Also, instead of creating bugs, I would suggest instead that we try to make 
>> this into a community goal. We would work with the TC and for P or Q, make 
>> it a goal to start migrating off of MD5 and have a goal for a cycle or two 
>> later to completely remove reliance on MD5.
>>
>> Doing this piecemeal via bugs will not be efficient and we'll need community 
>> buy-in.
>
> We would also need to get a reasonable scoping of the issue (which
> projects, how many instances, etc) to help decide if this is an
> achievable goal (in the sense of the 'community goals').
>
> As you noted, this will not be easy for Swift or Glance (others?), but
> if the impact to deployers can be quantified it makes it easier to
> spend energy here.

Swift does use md5 in two places: placement and integrity checking.

Placement:
MD5 is used in Swift's ring to balance the placement of data across the 
cluster. In pseudo code, we...

>>> h = hash(secret_prefix + name_of_thing + secret_suffix)
>>> lookup_index = h >> (32 - configurable_count)  # get the prefix bits
>>> list_of_drives = drive_table[lookup_index]  # get the drives this is on

So what we're doing is using some bits at the beginning of the md5 output to 
splay the data across the system. Since md5 has even dispersion across the key 
space, this allows all the drives in the cluster to fill up evenly. This is key 
to Swift's availability, scaling, durability, and performance.
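
Here's a runnable version of that pseudo code (a minimal sketch with made-up 
names; the real ring code differs in its details):

    import hashlib, struct

    def get_part(secret_prefix, name_of_thing, secret_suffix, part_power):
        h = hashlib.md5(secret_prefix + name_of_thing + secret_suffix).digest()
        top32 = struct.unpack(">I", h[:4])[0]  # first 32 bits of the digest
        return top32 >> (32 - part_power)      # keep only the top part_power bits

    # drive_table would have 2**part_power rows, each listing the drives that
    # hold copies of everything that hashes into that partition.
    part = get_part(b"secret/", b"AUTH_john/photos/cat.jpg", b"/secret", 18)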

We've previously explored the idea of using some different algorithm for the 
ring hashing. We haven't for a few reasons, but primarily it's because md5 is 
"good enough" for our placement needs (fast enough, disperse enough) and is 
built into the standard library. Also, because of the reasons below, we'll 
have to keep md5 around anyway, so there's been no big push to change this 
implementation and add a new dependency.


Integrity checking:
Swift uses md5 to detect bit flips and to do end-to-end integrity checking. We 
calculate and store the md5 of every object stored in swift and use that to 
detect if there are bit flips in the data. We have a background process that 
reads every bit of the object, computes the md5, and checks if that matches the 
stored md5. If not, the bad data is quarantined and durability is repaired from 
other data in the system. We also allow the end-user to send the expected md5 
on an object write via the etag header. If the data written to disk doesn't 
match the supplied etag, the request fails. We also return the md5 of the data 
in the etag on object read responses and use the deterministic nature of the 
hash for conditional header requests (if-match, etc).
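
A sketch of that end-to-end check from the client's point of view (the object 
name and body are made up; the header flow is shown in comments):

    import hashlib

    body = b"...the object bytes..."
    etag = hashlib.md5(body).hexdigest()
    # client: PUT /v1/AUTH_john/photos/cat.jpg with "ETag: <etag>"
    # swift:  computes the md5 of what it actually wrote; on a mismatch the
    #         PUT fails with 422 and the bad data never becomes the object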

It's highly unlikely that we will ever be able to remove md5 from this part of 
the system, even if only for legacy purposes. Even if we had a new API version 
(which we've never done before) that used a different hash function, we'd still 
have to support the v1 API. We'd also have to deal with the EB of data already 
stored in Swift clusters today. They are all hashed with md5, and we'd still 
need to use it for auditing all of the existing data.

Any "changes" in a hash library in Swift would likely be additions, not a 
replacement.

That being said, from my reading the BLAKE2* family of hash algorithms looks 
very interesting.



--John






>
> dt
>
> -- 
>
> Dean Troyer
> dtro...@gmail.com
>


Re: [openstack-dev] [gnocchi] new measures backlog scheduling

2016-11-14 Thread John Dickinson


On 14 Nov 2016, at 13:57, gordon chung wrote:

> hi,
>
> one issue i've noticed with our 'backlog' scheduling is that we register
> all our new measures in a single folder/filestore object. this folder or
> object in most production cases can grow quite large (tens/hundreds of
> thousands). so we don't load it all into memory, the drivers will only
> grab the first x items and process them. unfortunately, we don't control
> the ordering of the returned items so it is dependent on the ordering
> the backend returns. for Ceph, it returns in what i guess is some
> alphanumeric order. the file driver i believe returns based on how the
> filesystem indexes files. i have no idea how swift ordering behaves. the

Listings in Swift are lexicographically sorted by the object name.
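
One consequence: names with numeric suffixes sort as strings, not as numbers. 
For example:

    >>> sorted(["m_9", "m_10", "m_100"])
    ['m_10', 'm_100', 'm_9']

Zero-padding the names is the usual trick if you want numeric order.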

> result of this is that we may starve some new measures from being
> processed because they keep getting pushed back by more recent measures
> if fewer agents are deployed.
>
> with that said, this isn't a huge issue because measures can be
> processed on demand using refresh parameter but it's not ideal.
>
> i was thinking, to better handle processing while minimising the effects
> of a driver's natural indexing, we can hash our new measures into
> buckets based on metric_id. Gnocchi would hash all incoming metrics into
> 100? buckets and metricd agents would divide up these buckets and loop
> through them. this would ensure we have smaller buckets to deal with and
> therefore less chance, metrics get pushed back and starved. that said it
> will add additional requirements of 100? folders/filestore objects
> rather than 1. it will also mean we may be making significantly more
> smaller fetches vs single (possibly) giant fetch.
>
> to extend this, we could also hash into project_id groups and thus allow
> some projects to have more workers and thus more performant queries?
> this might be too product tailored. :)

I'm stepping in to an area here (gnocchi) that I know very little about, so 
please forgive me where I mess up.

First, as a practical note, stuff in Swift will be *much* better when you 
spread it across the entire namespace. It's a lot better to store data in many 
containers instead of putting all data into just one container. Spreading the 
data out takes advantage of Swift's scaling characteristics and makes users 
and ops happier.

Second, at the risk of overgeneralizing[1], you may want to consider using the 
consistent hashing code from Ironic and Nova (which is being discussed as a new 
oslo library). Consistent hashing gives you the nice property of being able to 
change the number of buckets you're hashing data into without having to rehash 
most of the existing data. Think of it this way: if you hash into two buckets 
and use even/odd (i.e. last bit) to determine which bucket data goes into, then 
when you need a third bucket you have to switch to MOD 3 and two-thirds of your 
existing data will move into a different bucket. That's bad, and it gets even 
worse as you add more and more buckets. With consistent hashing, you can get 
the property that if you add 1% more buckets, you'll only move about 1% of the 
existing data to a different bucket.

So going even further into my ignorance of the gnocchi problem space, I could 
imagine that there may be some benefit of being able to change the number of 
hash buckets over time based on how many items, how many workers are processing 
them, rate of new metrics, etc. If there's benefit to changing the number of 
hash buckets over time, then looking at consistent hashing is probably worth 
the time. If there is no benefit to changing the number of hash buckets over 
time, then a simple `hash(data) >> (hash_len - log2(num_hash_buckets))` or 
`hash(data) MOD num_hash_buckets` is probably sufficient.
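
To illustrate that property, here's a minimal consistent-hash ring sketch. The 
names and numbers are made up for illustration; this is not the Ironic/Nova (or 
any future oslo) implementation:

    import bisect, hashlib

    def _hash(key):
        # md5 only for its even dispersion across the key space
        return int(hashlib.md5(key.encode("utf8")).hexdigest(), 16)

    class Ring(object):
        def __init__(self, buckets, points_per_bucket=100):
            self.points_per_bucket = points_per_bucket
            self.ring = {}        # position on the ring -> bucket name
            self.positions = []   # sorted list of ring positions
            for bucket in buckets:
                self.add_bucket(bucket)

        def add_bucket(self, bucket):
            # each bucket claims many points so the load spreads evenly
            for i in range(self.points_per_bucket):
                pos = _hash("%s-%d" % (bucket, i))
                self.ring[pos] = bucket
                bisect.insort(self.positions, pos)

        def get_bucket(self, key):
            # a key belongs to the first bucket point at or after its hash
            i = bisect.bisect(self.positions, _hash(key)) % len(self.positions)
            return self.ring[self.positions[i]]

    ring = Ring(["bucket-%d" % i for i in range(100)])
    keys = ["metric-%d" % i for i in range(10000)]
    before = dict((k, ring.get_bucket(k)) for k in keys)
    ring.add_bucket("bucket-100")  # add ~1% more buckets
    moved = sum(1 for k in keys if ring.get_bucket(k) != before[k])
    print("moved %.1f%% of keys" % (100.0 * moved / len(keys)))  # roughly 1%

Do the same experiment with `hash(data) MOD num_hash_buckets`, and going from 
100 to 101 buckets moves roughly 99% of the keys.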

--John



[1] I've got a nice hammer. Your problem sure looks like a nail to me.

>
> thoughts?
>
> cheers,
> -- 
> gord


[openstack-dev] [all][ptg] how are cross-project sessions scheduled?

2016-11-10 Thread John Dickinson
I've been fielding questions about the upcoming PTG, but one has come
up that I don't know how to answer.

How will cross-project sessions at the PTG be scheduled?

From looking at the "Event Layout" section on
https://www.openstack.org/ptg, it seems to imply that each team in the
left column will likely set their own schedule. So if there's some
cross-project topic someone wants to bring up, then that person should
figure out which team it best fits under and petition that team to
include the topic. Is that correct?


--John









Re: [openstack-dev] [all] Embracing new languages in OpenStack

2016-11-07 Thread John Dickinson


On 7 Nov 2016, at 10:31, Ash wrote:

> On Mon, Nov 7, 2016 at 9:58 AM, Hayes, Graham  wrote:
>
>> On 07/11/2016 17:14, Flavio Percoco wrote:
>>> Greetings,
>>>
>>> I literally just posted a thing on my blog with some thoughts of what
>> I'd expect
>>> any new language being proposed for OpenStack to cover before it can be
>>> accepted.
>>>
>>> The goal is to set the expectations right for what's needed for new
>> languages to
>>> be accepted in the community. During the last evaluation of the Go
>> proposal I
>>> raised some concerns but I also said I was not closed to discussing this
>> again
>>> in the future. It's clear we've not documented expectations yet and this
>> is a
>>> first step to get that going before a new proposal comes up and we start
>>> discussing this topic again.
>>>
>>> I don't think a blog post is the "way we should do this" but it was my
>> way to
>>> dump what's in my brain before working on something official (which I'll
>> do
>>> this/next week).
>>>
>>> I also don't think this list is perfect. It could either be too
>> restrictive or
>>> too open but it's a first step. I won't paste the content of my post in
>> this
>>> email but I'll provide a tl;dr and eventually come back with the actual
>>> reference document patch. I thought I'd drop this here in case people
>> read my
>>> post and were confused about what's going on there.
>>>
>>> Ok, here's the TL;DR of what I believe we should know/do before
>> accepting a new
>>> language into the community:
>>
>> Its a great starting point, but there is one issue:
>>
>> This is a *huge* amount of work to get into place, for the TC to still
>> potentially refuse the language. (I know you covered this in your blog
>> post, but I think there is a level of underestimation there.)
>>
>>
>>> - Define a way to share code/libraries for projects using the language
>>
>> ++ - Definitely needed
>>
>>> - Work on a basic set of libraries for OpenStack base services
>>
>> What do we include here? You called out these:
>>
>> keystoneauth / keystone-client
>> oslo.config
>> oslo.db
>> oslo.messaging
>>
>> We need to also include
>>
>> oslo.log
>>
>> Do they *all* need to be implemented? Just some? Do they need feature
>> parity?
>>
>> For example the previous discussion about Go had 2 components that would
>> have required at most 2 of these base libraries (and I think that was
>> mainly on the Designate side - the swift implementation did not need
>> any (I think - I am open to correction)
>>
>>> - Define how the deliverables are distributed
>>
>> ++ - but this needs to be done with the release management teams input.
>> What I think is reasonable may not be something that they are interested
>> in supporting.
>>
>>> - Define how stable maintenance will work
>>
>> ++
>>
>>> - Setup the CI pipelines for the new language
>>
>> This requires the -infra team dedicate (what are already stretched)
>> resources to a theoretical situation, doing work that may be thrown
>> away if the TC rejects a language.
>>
> Here's another take on the situation. If there are people who genuinely
> wish to see a CI pipeline that can support something like Go, perhaps you
> can establish a prerequisite of working with the Infra team on establishing
> the new pipeline. In my opinion, this seems to be the major gate. So, if
> there's a clear path identified, resources provided, and the Infra team is
> ultimately benefitted, then I'm not sure why there should be another
> rejection. Just a thought. I know this proposal continues to come up and
> I'm a big fan of seeing other languages supported, especially Go. But I
> also understand that it can break things. Personally, I would even
> volunteer to work on such an Infra effort.
>
> BTW, it is quite possible that another group might feel the same
> constraints. It's perfectly reasonable. But if we can overcome such
> obstacles, would the TC still have a concern?


Here are the notes from last May, when we started the Golang discussion and 
began working through these questions.

https://etherpad.openstack.org/p/golang-infra-issues-to-solve


>
>>
>> I foresee these requirements as a whole being overly onerous, and
>> having the same result as saying "no new languages".
>>
>> I think that there should be base research into all of these, but the
>> requirements for some of these to be fully completed could be basically
>> impossible.
>>
>>>
>>> The longer and more detailed version is here:
>>>
>>> https://blog.flaper87.com/embracing-other-languages-openstack.html
>>>
>>> Stay tuned,
>>> Flavio
>>>
>>
>>

Re: [openstack-dev] [oslo] New and next-gen libraries (a BCN followup)

2016-11-04 Thread John Dickinson


On 4 Nov 2016, at 7:50, Jim Rollenhagen wrote:

> On Thu, Nov 3, 2016 at 3:04 PM, Joshua Harlow <harlo...@fastmail.com> wrote:
>> Jay Faulkner wrote:
>>>>
>>>> On Nov 3, 2016, at 11:27 AM, Joshua Harlow<harlo...@fastmail.com>  wrote:
>>>>
>>>> Just as a followup from the summit,
>>>>
>>>> One of the sessions (the new lib one) had a few proposals:
>>>>
>>>> https://etherpad.openstack.org/p/ocata-oslo-bring-ideas
>>>>
>>>> And I wanted to try to get clear owners for each part (there was some
>>>> followup work for each); so just wanted to start this email to get the
>>>> thoughts going on what to do for next steps.
>>>>
>>>> *A hash ring library*
>>>>
>>>> So for this one, it feels like we need at least a tiny oslo-spec, and for
>>>> someone to write down the various implementations, what they share, what
>>>> they do not share (talking to swift, nova, ironic and others? to figure
>>>> this out). I think alexis was thinking he might want to work through some
>>>> of that but I'll leave it for him to chime in on that (or others feel free
>>>> to also).
>>>>
>>>> This one doesn't seem very controversial and the majority of the work is
>>>> probably on doing some analysis of what exists and then picking a library
>>>> name and coding that up, testing it, and then integrating (pretty 
>>>> standard).
>>>>
>>>
>>> Ironic and Nova both share a hash ring implementation currently
>>> (ironic-conductor and nova-compute driver for ironic). It would be sensible
>>> to reuse this implementation, oslo-ify it, and have that code shared.
>>>
>>> I question the value of re-implementing something like this from scratch
>>> though.
>>>
>>> Thanks,
>>> Jay Faulkner
>>> OSIC
>>>
>>
>> Right I don't think the intention would be to implement it from scratch, but
>> to do some basic analysis of what exists (and think about and document the
>> patterns), and try to find the common parts (which likely involves renaming
>> some specific nova/ironic methods from what I see); especially if we can get
>> swift to perhaps (TBD) also use and contribute to this library.
>
> As the person who copied that code into Nova, the Nova code is a strict subset
> of the Ironic code.
>
> Some of us talked to John Dickinson off-list, and it seems the Swift hash ring
> has very different use cases and a very different implementation. I think we
> should focus on pulling the Nova/Ironic code out first, and then talking to
> Swift if we can also make it work for them (sounds like it's not helpful today).
>
> // jim


We had some great conversations last week face to face about this. The summary 
is that the "ring" in Ironic/Nova and the placement "ring" in Swift are vastly 
different in scope, requirements, and capabilities. I don't think it makes 
sense to try to unify them at this time.

As always, I'm available to talk further about this, if you want.


--John









Re: [openstack-dev] [tripleo] Ocata specs

2016-11-01 Thread John Dickinson


On 1 Nov 2016, at 14:46, Zane Bitter wrote:

> On 01/11/16 15:13, James Slagle wrote:
>> On Tue, Nov 1, 2016 at 7:21 PM, Emilien Macchi  wrote:
>>> Hi,
>>>
>>> TripleO (like some other projects in OpenStack) has not always done a
>>> good job of merging specs on time during a cycle.
>>> I would like to make progress on this topic and for that, I propose we
>>> set a deadline to get a spec approved for Ocata release.
>>> This deadline would be Ocata-1 which is week of November 14th.
>>>
>>> So if you have a specs under review, please make sure it's well
>>> communicated to our team (IRC, mailing-list, etc); comments are
>>> addressed.
>>>
>>> Also, I would ask our team to spend some time to review them when they
>>> have time. Here is the link:
>>> https://review.openstack.org/#/q/project:openstack/tripleo-specs+status:open
>>
>> Given that we don't always require specs, should we make the same
>> deadline for blueprints to get approved for Ocata as well?
>>
>> In fact, we haven't even always required blueprints for all features.
>> In order to avoid any surprise FFE's towards the end of the cycle, I
>> think it might be wise to start doing so. The overhead of creating a
>> blueprint is very small, and it actually works to the implementer's
>> advantage as it helps to focus review attention at the various
>> milestones.
>>
>> So, we could say:
>> - All features require a blueprint
>> - They may require a spec if we need to reach consensus about the feature 
>> first
>> - All Blueprints and Specs for Ocata not approved by November 14th
>> will be deferred to Pike.
>>
>> Given we reviewed all the blueprints at the summit, and discussed all
>> the features we plan to implement for Ocata, I think it would be
>> reasonable to go with the above. However, 'm interested in any
>> feedback or if anyone feels that requiring a blueprint for features is
>> undesirable.
>
> The blueprint interface in Launchpad is kind of horrible for our purposes 
> (too many irrelevant fields to fill out). For features that aren't 
> big/controversial enough to require a spec, some projects have adopted a 
> 'spec-lite' process. Basically you raise a *bug* in Launchpad, give it 
> 'Wishlist' priority and tag it with 'spec-lite'.
>
> Sometimes a blueprint is the right answer (e.g. if it's high-priority and you 
> want to track it), but it's good to have the option.
>

Back in the Newton summit (in Austin), the Swift team spent quite a while 
discussing specs and blueprints and other ideas we'd used to organize what's 
being worked on. We concluded that specs weren't working (for some of the same 
reasons mentioned in this thread), so we decided to try something new. Our 
current process is described in the email I sent out last May: 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/094026.html. So 
far, it's working pretty well.

--John






[openstack-dev] [all] planning the PTG -- lessons from Swift's midcycles

2016-10-12 Thread John Dickinson
The Swift team has been doing midcycles for a while now, and as the new PTG 
gets closer, I want to write down our experience with what has worked for us. I 
hope it is beneficial to other teams, too.

## Logistics of the event

- 2 rooms is ideal, but can make do with one larger room
- move tables and chairs
- whiteboards/flip charts
- projector/tv
- host provides lunch and snacks
- host provides one evening meal/event

When someone offers to host a midcycle, this is what I ask them to provide. The 
PTG will be slightly different, but the general idea is the same. There's a few 
important things to note here. First, be flexible about who's talking about 
what and when people are working together. The point of getting together in 
person is to facilitate face-to-face communication, so be sure that the room 
logistics don't get in the way by forcing people into a certain configuration. 
Providing lunch and snacks allows the participants to not break any tech or 
social flow in the middle of the day. It keeps people together and helps 
facilitate communication. And the one evening event is super helpful to let 
people relax, have fun, and do something interesting away from a computer. In 
the past we've done everything from locked-room challenges and bowling to 
brewery tours and a boat ride under the Golden Gate bridge.

## Only agenda item is "set the agenda"

- dotmocracy
- too much to do for the people we have to work on it
- what's the big stuff we need the right people to be together for?
- schedule one big talk each am and pm

When it comes to the actual flow of the limited time together, there are two 
important things to keep in mind. First, make sure there's time to cover all 
the topics that are of interest to the people in the room. Second, make sure 
the big important stuff gets discussed without requiring someone to be in two 
places at once.

Unfortunately, these two goals are often in conflict. We've solved this in the 
past by starting the midcycle with one and only one agenda item: set the 
agenda. The most successful way we've done this is to ask the room to shout out 
topics to discuss. Every topic gets written down on a piece of paper or on a 
flipboard. When you've got all the topics written down, then give everyone a 
limited number of dot stickers and ask them to vote for what they want to talk 
about by placing one or more dots next to it. The trick is that there are more 
topics to talk about than people in the room, and each person has fewer dots 
than the schedule has blocks of time. So, for example, if there are 3 days 
together, that's a total of 6 morning and afternoon blocks of time. Give 
everyone 4 dots, and force them to prioritize. This also has the very real 
visual side effect of emphasizing that we are a team and that no one person can be 
a part of everything going on. After everyone has put their dots on topics, 
sort the topics by number of dots. In our example, we've got 6 blocks of time, 
so choose the top six and schedule them. This ensures that the big stuff can 
get scheduled, the little stuff can fill in the gaps, and people can know when 
to be available for conversations that are important to them.

Imagine that you've got a glass mason jar, and you need to fill it up with 
stuff. You've got big rocks, small rocks, sand, and water. If you fill it up 
with water first, the water will spill out when you add anything else. But if 
you add the big things first, then you can fit more in. The big rocks go first, 
then small rocks fill up the spaces, then sand fills up the cracks, then the 
water can seep into the tiny air gaps. It's the same way with prioritizing the 
in-person meetings. Schedule the big stuff, then fill in any gaps with the 
small stuff.

## Social dynamics during the week

- you won't be able to participate in everything. that's ok
- there will be several conversations going on at one time. be considerate and 
flexible
- don't wait for someone to start a conversation. start it yourself. this is 
very important!

There's a lot going on at in-person meetings. It's ok to not participate in 
everything--you won't be able to, so don't even try. In the best case, there 
will be a lot of conversations going on at once, so be considerate and 
flexible. It's important to not sit back and wait to start a conversation--if 
you need to talk about something, grab the right people, a whiteboard, and a 
corner of the room and start talking.

But what do you talk about? Sometimes it's just talking with a whiteboard. 
Sometimes it's reviewing code together. And occasionally, there's even an 
opportunity for some pair programming.

After a topic has been talked about, check it off on the big list of topics 
that you made the first day. This keeps everyone aware of what has been talked 
about and what needs to be talked about. And by the end of your time together, 
it's a great visual reminder of the success of the week.

## Have fun

Overall, have fun. In-person 

Re: [openstack-dev] [tc] open question to the candidates

2016-10-03 Thread John Dickinson


On 3 Oct 2016, at 12:31, Doug Hellmann wrote:
>
> I think that's the balance we want to
> have: listen to input, collect information, then clearly set the
> direction without over-prescribing the implementation.

In light of this statement, would you reevaluate previous decisions you've made 
regarding implementation details? You've criticized OpenStack projects for not 
using certain code, you've advocated for openstack-wide goals which are all 
about implementation[1], and you voted against Swift and other projects using 
Golang for an internal process[2]. Each of these are quite prescriptive with 
regards to the implementation. Would you change your vote on any of these 
decisions if you are reelected?

[1] https://review.openstack.org/#/c/349068/
[2] http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-08-02-20.01.log.html




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][elections][TC] TC candidacy

2016-10-03 Thread John Dickinson


On 27 Sep 2016, at 9:21, Sean McGinnis wrote:

> I would like to announce my candidacy for a position on the Technical
> Committee.
>
> I work for Dell EMC with over a decade (quite a bit over, but I don't want to
> think about that) in storage and software development. I have been involved in
> OpenStack since the Icehouse cycle and have served as the Cinder PTL since the
> Mitaka release.
>
> I think it's important to have active PTLs on the TC. TC decisions need to be
> grounded in the reality of day to day project development. I think it will 
> also
> be good for me as a PTL to be forced to take a wider view of things across the
> whole ecosystem.
>
> I think outreach and education is important to spread interest in OpenStack 
> and
> provide awareness to reach new people. I've spoken at several Summits, as well
> as OpenStack Days events and (more pertinent to Cinder) at Storage Network
> Industry Association (SNIA) events.
>
> I think it's important to get feedback from actual operators and end users. I
> have tried to reach out to these users as well as attend the Ops Midcycle in
> order to close that feedback loop.
>
> I would continue to work towards these things and bring that feedback to the
> TC - making sure the decisions we make have the end user in mind.
>
> Another goal for me is simplicity. With the Big Tent, more interacting,
> projects, and lot's of competing interests, things have gotten much more
> complicated over the last several releases.
>
> I say this while acknowledging within Cinder - while I have been PTL - a lot 
> of
> complexity has been added. In most cases there are very valid reasons for 
> these
> changes. So even with a desire to make things as simple as possible, I 
> consider
> myself a pragmatist and recognize that complexity is sometimes unavoidable in
> order to move forward. But one thing I would try to focus on as a TC member
> would be to reduce complexity anywhere it's possible and where it makes sense.

Sean,

Are there some specific areas of complexity that you would like to change in 
OpenStack now? How would you change them? Are there things you see happening in 
OpenStack now that need to be stopped because they will produce too much 
complexity?


--John





>
> It would be an honor to serve as a member of the TC and help do whatever I can
> to help the community continue to succeed and grow.
>
> Thank you for your consideration.
>
> Sean McGinnis (smcginnis)
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] TC candidacy

2016-09-29 Thread John Dickinson


On 29 Sep 2016, at 16:00, gordon chung wrote:

>
> On 29/09/16 04:35 PM, John Dickinson wrote:
>>
>> I am concerned that there is a current focus on preserving the status
>> quo. There's focus on policies and rules instead of use cases; there's
>> focus on conformity instead of innovation; there's focus on forced
>> prioritization instead of inclusivity.
>
> i fully agree with this pov. there's a sentiment among certain members
> that if you choose a path different from the norm, you are not
> "following the OpenStack way".
>
> my question is (i guess this in theory could be asked to everyone): the
> TC has historically been a reactive council that lets others ask for
> change and acts as the final approver (arguably, just my opinion). do
> you believe the TC should be a proactive committee that drives change?
> and if yes, to what scope? i realise the follow-up is very open-ended and
> vague and i apologise for this.

The TC is not big enough, and OpenStack is too big, for the TC to proactively 
drive and manage change across all OpenStack projects. In fact, I think 
OpenStack is too big for any one group to try to manage a task for all 
projects. I like how the docs team has been pushing docs into each project 
repository. That distributes the load and solves a scaling problem.

In my experience as a PTL for a single OpenStack project, I've learned that I 
can influence by suggestion, but I cannot mandate anything. In large part, my 
role has been to respond to what the community is doing by removing any 
barriers that exist, monitoring the status of the team, connecting people 
working on similar tasks, and providing a general vision of where we're going 
as a project.

I see the role of the TC in much the same way. The TC should not be the 
high-priesthood of OpenStack where we go to present our supplications before 
we're allowed to do something. Individual projects should default to "doing" 
and the TC's role there is to make sure barriers for "doing" are removed. The 
individual projects are what initiate and drive change in OpenStack, based on 
the needs of their specific users, and the TC aggregates those changes across 
projects to facilitate communication with the broader ecosystem about OpenStack 
in general.

I'm sure I could go on for quite a while about specifics I'd like to see. 
Here's a short list of bullets of things I'd love to see the TC take on:

 * Every project installable in 10 minutes or less.
 * Most projects independently installable to solve a specific use case for a 
deployer.
 * Track contributor activity to identify when and why people contribute. Is 
there some pattern that can be used to identify when someone might leave the 
community? Is there a pattern of how long-term contributors start? What are the 
major barriers for new contributors?
 * How do we reduce the average time a patch spends in review?
 * Why are people adopting OpenStack? If people move away from OpenStack (in 
whole or in part), what was missing in OpenStack?
 * What improvements can we make to facilitate team communication across time 
zones and across cultural lines?
 * How do we provide our corporate sponsors with a reasonably-accurate view of 
future development work?
 * How do we support new languages and deployment tools in OpenStack projects?
 * What improvements can we make to integrate proprietary software and hardware 
into our projects?
 * How does OpenStack work in a world where "cloud" is dominated by AWS, 
Google, and Microsoft?

etc.



--John



>
>>
>> If I am elected to the TC, I will look at every decision in the light
>> of these two needs. I will not focus on codifying rules, and I will
>> not focus on keeping OpenStack small and homogeneous. I will do what I
>> can to help the OpenStack community increase adoption today and remain
>> relevant as the industry changes.
>>
>
> yay to less codifying rules!
>
> cheers,
> -- 
> gord
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] TC candidacy

2016-09-29 Thread John Dickinson
I am throwing my hat into the ring for the TC election.

I've been a part of OpenStack since it started. I've seen it grow from
a few dozen people into the very large community we have today. During
the past 6 years, I've seen controversial topics come and go and the
community grow and adapt. I've also seen OpenStack respond in knee-
jerk fashion to new ideas that challenge the status quo.

I am concerned that there is a current focus on preserving the status
quo. There's focus on policies and rules instead of use cases; there's
focus on conformity instead of innovation; there's focus on forced
prioritization instead of inclusivity.

The primary things OpenStack should be focused on are increased
adoption today and continued relevance in the next five years.

We have a lovely community today, and we attract thousands to our
semi-annual conference. I love seeing companies, big and small, come
share their stories of how they're using OpenStack. It's great to hear
from them over time and see how their use of OpenStack changes and
grows. However, I'm concerned that we keep seeing mostly the same
people, the same companies, and the same sponsors show up to each
event. I'm concerned that, outside of our community bubble, OpenStack
is still largely unknown and little-used. In order to increase
adoption, the TC must look at the use cases we are serving. We must
realize where we are falling short in solving for current use cases,
and we must encourage growth in new use cases which we don't yet
support. To better-solve current use cases, I would like to see more
emphasis on SDK development (across many languages) and the creation
of governance tags identifying projects that are independently
deployable for specific use cases. To encourage solving more use cases
within OpenStack, I would like to see less requirements placed on new
projects and more clear communication about the various ways new
projects can join OpenStack.

In order to stay relevant in the technical world, the OpenStack TC
must figure out how the community itself can foster new ideas and grow
them within OpenStack. We should not ask new projects to split into
OpenStack and non-OpenStack pieces before joining. We should not ask
existing OpenStack contributors to go elsewhere to develop their
ideas. We must inclusively welcome new contributors, new projects, and
new ideas. Sometimes these new ideas require significant effort to
adopt, but I will encourage the TC to take on the challenge.

If I am elected to the TC, I will look at every decision in the light
of these two needs. I will not focus on codifying rules, and I will
not focus on keeping OpenStack small and homogeneous. I will do what I
can to help the OpenStack community increase adoption today and remain
relevant as the industry changes.

I appreciate your vote for me to the TC in this election.

--John

notmyname on IRC

elections patch: https://review.openstack.org/379814




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Swift] PTL candidacy

2016-09-15 Thread John Dickinson
I'd be honored to continue to serve as your PTL for OpenStack Swift.

## Looking back at the Newton cycle

During the Newton cycle, the Swift community has delivered at-rest
encryption. This feature is the culmination of a more than year of
work, and it enables Swift to be deployed in places where it was
previously not possible to be used.

The other big thing that's been going on during the Newton cycle is
our work on a Golang implementation of the object server and
replication engine. This change not only solved critical issues for
existing deployments, but it also enables a lot of the future work
that needs to happen in the years to come (more on that later!).

Golang in Swift is a great OpenStack community success story. An
existing Swift deployer (Rackspace) was seeing issues in their
environment. One developer there started playing with Golang and
reimplemented a small part of Swift. The results were good--very good
--and the rest of the development team there started focusing on this
Golang implementation. The developers shared these results with the
rest of the community, and the Golang code started being developed in
a feature branch of Swift, where others started playing with this code
as well. Rackspace has been running this in production with great
success, and as a community we're bringing into "mainline" Swift so
that everyone can benefit. It's great to see community members play
with new ideas, bring those to the rest of the community, and see them
adopted to solve problems for everyone.

As most people in the Swift community know, and many in the wider
OpenStack community have seen, there has been a very large debate on
the use of Golang in OpenStack projects. Much of my time in this cycle
has been spent working with the TC on this conversation, and I expect
it to continue into the Ocata cycle as well. I'm not satisfied with
the current decision by the TC, and I'll keep pushing for including
Golang inside of OpenStack.

## Looking forward to Ocata and Beyond

Beyond grand debates about programming languages, there's a ton of
stuff going on in Swift right now. Here's a partial list of things
people are working on:

- Golang object server and replication
- Transparent container sharding
- Increase ring partition power
- Global erasure codes
- Policy migrations
- Symlinks
- Improvements to encryption functionality
- Automatic tiering
- Composite rings

Many of these have been in progress for a while, and I fully expect
several to be finished within the Ocata cycle. Many of these features
are related to bigger and bigger clusters. It's tremendously exciting
to see where Swift is being used today, but we're in no way "done".
There is a lot more for us to do to continue to solve storage problems
for users.

Looking much further into the future, there's several things I'd like
us to work towards.

- Small file optimization
- Dealing with larger drives and more dense storage servers
- Using NV memory

The reason I know we'll get this stuff done and continue to make Swift
the best open source object storage system is because of our
community. As the Swift PTL, I feel it's my duty to enable the
community to solve these problems by making every community member
more productive. This is my focus.


--John






signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] mod_wsgi: what services are people using with it?

2016-08-19 Thread John Dickinson


On 17 Aug 2016, at 15:27, Nick Papadonis wrote:

> comments
>
> On Wed, Aug 17, 2016 at 4:53 PM, Matthew Thode 
> wrote:
>
>> On 08/17/2016 03:52 PM, Nick Papadonis wrote:
>>
>>> Thanks for the quick response!
>>>
>>> Glance worked for me in Mitaka.  I had to specify 'chunked transfers'
>>> and increase the size limit to 5GB.  I had to pull some of the WSGI
>>> source from glance and alter it slightly to call from Apache.
>>>
>>> I saw that Nova claims mod_wsgi is 'experimental'. Interested in whether
>>> it's really experimental or if folks use it in production.
>>>
>>> Nick
>>
>> ya, cinder is experimental too (at least in my usage) as I'm using
>> python3 as well :D  For me it's a case of having to test the packages I
>> build.
>>
>>
> I converted Cinder to mod_wsgi because from what I recall, I found that SSL
> support was removed from the Eventlet server.  Swift endpoint outputs a log
> warning that Eventlet SSL is only for testing purposes, which is another
> reason why I turned to mod_wsgi for that.

FWIW, most prod Swift deployments I know of use HAProxy or stud to terminate 
TLS before forwarding the http stream to a proxy endpoint (local or remote). 
Especially when combined with a server that has AES-NI, this gives good 
performance.
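
If it helps, a minimal HAProxy frontend for that setup looks something like
this (just a sketch; the addresses, ports, and cert path here are made-up
placeholders, not a recommendation):

    # sketch: TLS termination in front of a Swift proxy
    # all addresses, names, and paths below are illustrative
    frontend swift_tls
        bind *:443 ssl crt /etc/haproxy/swift.pem
        mode http
        default_backend swift_proxies

    backend swift_proxies
        mode http
        server proxy1 10.0.0.10:8080 check
        server proxy2 10.0.0.11:8080 check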

--John



>
> Nick
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] mod_wsgi: what services are people using with it?

2016-08-17 Thread John Dickinson
I don't know of people running Swift in production with mod_wsgi. The original 
doc you referenced and the related work done upstream was done several years 
ago, IIRC by IBM. Personally, I've never deployed Swift that way.

However, I too am really interested in the general answers to your question, 
especially from the ops mailing list. If there's something broken in docs or 
code that is preventing people from solving their problems with Swift, I want 
to hear about it and fix it.
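
For reference, the WSGI glue that the apache deployment guide describes is
small; from memory it's something like this (a sketch; the file location and
config paths are illustrative):

    # e.g. /var/www/swift/proxy-server.wsgi (sketch; paths are illustrative)
    from swift.common.wsgi import init_request_processor
    application, conf, logger, log_name = \
        init_request_processor('/etc/swift/proxy-server.conf', 'proxy-server')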

--John




On 17 Aug 2016, at 13:22, Nick Papadonis wrote:

> Hi Folks,
>
> I was hacking in this area on Mitaka and enabled Glance, Cinder, Heat,
> Swift, Ironic, Horizon and Keystone under Apache mod_wsgi instead of the
> Eventlet server. Cinder, Keystone, Heat and Ironic provide Python source
> in Github to easily enable this.  It appears that Glance and Swift (despite
> the existence of
> https://github.com/openstack/swift/blob/2bf5eb775fe3ad6d3a2afddfc7572318e85d10be/doc/source/apache_deployment_guide.rst)
> provide no such Python source to call from the Apache conf file.
>
> That said, is anyone using Glance, Swift, Neutron or Nova (marked
> experimental) in production environments with mod_wsgi?  I had to put
> together code to launch a subset of these which does not appear integrated
> in Github.  Appreciate your insight.
>
> Thanks,
> Nick
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-15 Thread John Dickinson


On 15 Aug 2016, at 1:37, Thierry Carrez wrote:

> Doug Hellmann wrote:
>> [...]
>> Choosing to be a part of a community comes with obligations as well
>> as benefits.  If, after a lengthy discussion of a community-wide
>> goal, involving everyone in the community, a project team is
>> resolutely opposed to the goal, does that not indicate that the
>> needs of the project team and the needs of the broader community
>> are at odds in some way? And if the project team's needs and the
>> community needs are consistently at odds, over the course of a
>> series of such goals, why would the project team want to constrain
>> itself to stay in the community?  Aren't they clearly going in
>> different directions?
>>
>> Understand, it is not my desire to emphasize any differences of
>> this nature. Rather, I want to reduce them. To do that, I am proposing
>> a process through which common goals can be identified, described,
>> and put into action. I do hope, though, that through the course of
>> the discussion of each individual proposal everyone involved will
>> come to understand the idea and by the time a proposal becomes a
>> "goal" to be implemented I "expect" everyone to, at the very least,
>> understand why a goal is important to others, even if they do not
>> agree with it. That understanding should then lead, on the basis
>> of agreeing to be part of a collaborative community, to supporting
>> the goal.
>>
>> I also expect us to discuss a lot of proposals that we do not agree
>> on, and that either need more time to develop or that end up finding
>> another path to resolution. No one seems all that concerned with
>> the concept that they might propose a goal that everyone else doesn't
>> agree with.  :-)
>>
>> So, yes, by the time we pick a goal I expect teams to do the work,
>> because at that point in the process they will see it as the
>> reasonable course of action.  There is still an "escape valve" in
>> place for teams that, after all of the discussion and shaping of
>> the goals is over, still take issue with a goal. By explaining their
>> position in their response, we will have reference documentation
>> to point to when the inevitable "why doesn't X do Y" questions
>> arise. I will be interested to see how often we actually have to
>> use that.
>
> +1

I agree, too. This is a great process that covers nearly everything.

The reason the prioritization language is so important isn't so that
project teams can "get around" the TC or intentionally be different or
otherwise not be good community participants. I want to make sure we
are not setting up a process that tells projects to "toe the line" or
get out.

In a community as large and diverse in scope as OpenStack, it's
impossible for one person or one small group to be familiar enough
with all of the OpenStack projects to understand the design decisions,
trade-offs, and priorities for each one. The TC certainly doesn't want
to micromanage every project.

Supporting a common goal and making progress on it is much different
than "prioritize these goals above all other work". Like you, I expect
that all projects in OpenStack will work together for the common good.
I don't think any open source project can mandate prioritization on
its contributors and expect to maintain long-term growth.

>
>> Excerpts from John Dickinson's message of 2016-08-12 16:04:42 -0700:
>>> [...]
>>> The proposed plan has a lot of good in it, and I'm really happy to see the
>>> TC working to bring common goals and vision to the entirety of the OpenStack
>>> community. Drop the "project teams are expected to prioritize these goals
>>> above all other work", and my concerns evaporate. I'd be happy to agree to
>>> that proposal.
>>
>> Saying that the community has goals but that no one is expected to
>> act to bring them about would be a meaningless waste of time and
>> energy.
>
> I think we can find wording that doesn't use the word "priority" (which
> is, I think, what John objects to the most) while still conveying that
> project teams are expected to act to bring them about (once they said
> they agreed with the goal).
>
> How about "project teams are expected to do everything they can to
> complete those goals within the boundaries of the target development
> cycle" ? Would that sound better ?

Any chance you'd go for something like "project teams are expected to
make progress on these goals and report that progress to the TC every
cycle"?

Yes, I like Thierry's proposed wording better than the originally-
proposed language.

--John




signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-12 Thread John Dickinson


On 12 Aug 2016, at 13:31, Doug Hellmann wrote:

> Excerpts from John Dickinson's message of 2016-08-12 13:02:59 -0700:
>>
>> On 12 Aug 2016, at 7:28, Doug Hellmann wrote:
>>
>>> Excerpts from John Dickinson's message of 2016-08-11 15:00:56 -0700:

 On 10 Aug 2016, at 8:29, Doug Hellmann wrote:

> Excerpts from Doug Hellmann's message of 2016-07-29 16:55:22 -0400:
>> One of the outcomes of the discussion at the leadership training
>> session earlier this year was the idea that the TC should set some
>> community-wide goals for accomplishing specific technical tasks to
>> get the projects synced up and moving in the same direction.
>>
>> After several drafts via etherpad and input from other TC and SWG
>> members, I've prepared the change for the governance repo [1] and
>> am ready to open this discussion up to the broader community. Please
>> read through the patch carefully, especially the "goals/index.rst"
>> document which tries to lay out the expectations for what makes a
>> good goal for this purpose and for how teams are meant to approach
>> working on these goals.
>>
>> I've also prepared two patches proposing specific goals for Ocata
>> [2][3].  I've tried to keep these suggested goals for the first
>> iteration limited to "finish what we've started" type items, so
>> they are small and straightforward enough to be able to be completed.
>> That will let us experiment with the process of managing goals this
>> time around, and set us up for discussions that may need to happen
>> at the Ocata summit about implementation.
>>
>> For future cycles, we can iterate on making the goals "harder", and
>> collecting suggestions for goals from the community during the forum
>> discussions that will happen at summits starting in Boston.
>>
>> Doug
>>
>> [1] https://review.openstack.org/349068 describe a process for managing 
>> community-wide goals
>> [2] https://review.openstack.org/349069 add ocata goal "support python 
>> 3.5"
>> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
>> libraries"
>>
>
> The proposal was discussed at the TC meeting yesterday [4], and
> left open to give more time to comment. I've added all of the PTLs
> for big tent projects as reviewers on the process patch [1] to
> encourage comments from them.
>
> Please also look at the associated patches with the specific goals
> for this cycle (python 3.5 support and cleaning up Oslo incubated
> code).  So far most of the discussion has focused on the process,
> but we need folks to think about the specific things they're going
> to be asked to do during Ocata as well.
>
> Doug
>
> [4] 
> http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-08-09-20.01.log.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 Commonality in goals and vision is what unites any community. I
 definitely support the TC's effort to define these goals for OpenStack
 and to champion them. However, I have a few concerns about the process
 that has been proposed.

 I'm concerned with the mandate that all projects must prioritize these
 goals above all other work. Thinking about this from the perspective of
 the employers of OpenStack contributors, and I'm finding it difficult
 to imagine them (particularly smaller ones) getting behind this
 prioritization mandate. For example, if I've got a user or deployer
 issue that requires an upstream change, am I to prioritize Py35
 compatibility over "broken in production"? Am I now to schedule my own
 work on known bugs or missing features only after these goals have
 been met? Is that what I should ask other community members to do too?
>>>
>>> There is a difference between priority and urgency. Clearly "broken
>>> in production" is more urgent than other planned work. It's less
>>> clear that, over the span of an entire 6 month release cycle, one
>>> production outage is the most important thing the team would have
>>> worked on.
>>>
>>> The point of the current wording is to make it clear that because these
>>> are goals coming from the entire community, teams are expected to place
>>> a high priority on completing them. In some cases that may mean
>>> working on community goals instead of working on internal team goals. We
>>> all face this tension all the time, so that's nothing new.
>>
>> It's not an issue of choosing to work on community goals. It's an
>> issue of prioritizing these over things that affect current production
deployments and things that are needed to increase adoption.

Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-12 Thread John Dickinson


On 12 Aug 2016, at 7:28, Doug Hellmann wrote:

> Excerpts from John Dickinson's message of 2016-08-11 15:00:56 -0700:
>>
>> On 10 Aug 2016, at 8:29, Doug Hellmann wrote:
>>
>>> Excerpts from Doug Hellmann's message of 2016-07-29 16:55:22 -0400:
 One of the outcomes of the discussion at the leadership training
 session earlier this year was the idea that the TC should set some
 community-wide goals for accomplishing specific technical tasks to
 get the projects synced up and moving in the same direction.

 After several drafts via etherpad and input from other TC and SWG
 members, I've prepared the change for the governance repo [1] and
 am ready to open this discussion up to the broader community. Please
 read through the patch carefully, especially the "goals/index.rst"
 document which tries to lay out the expectations for what makes a
 good goal for this purpose and for how teams are meant to approach
 working on these goals.

 I've also prepared two patches proposing specific goals for Ocata
 [2][3].  I've tried to keep these suggested goals for the first
 iteration limited to "finish what we've started" type items, so
 they are small and straightforward enough to be able to be completed.
 That will let us experiment with the process of managing goals this
 time around, and set us up for discussions that may need to happen
 at the Ocata summit about implementation.

 For future cycles, we can iterate on making the goals "harder", and
 collecting suggestions for goals from the community during the forum
 discussions that will happen at summits starting in Boston.

 Doug

 [1] https://review.openstack.org/349068 describe a process for managing 
 community-wide goals
 [2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
 [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
 libraries"

>>>
>>> The proposal was discussed at the TC meeting yesterday [4], and
>>> left open to give more time to comment. I've added all of the PTLs
>>> for big tent projects as reviewers on the process patch [1] to
>>> encourage comments from them.
>>>
>>> Please also look at the associated patches with the specific goals
>>> for this cycle (python 3.5 support and cleaning up Oslo incubated
>>> code).  So far most of the discussion has focused on the process,
>>> but we need folks to think about the specific things they're going
>>> to be asked to do during Ocata as well.
>>>
>>> Doug
>>>
>>> [4] 
>>> http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-08-09-20.01.log.html
>>>
>>> __
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> Commonality in goals and vision is what unites any community. I
>> definitely support the TC's effort to define these goals for OpenStack
>> and to champion them. However, I have a few concerns about the process
>> that has been proposed.
>>
>> I'm concerned with the mandate that all projects must prioritize these
>> goals above all other work. Thinking about this from the perspective of
>> the employers of OpenStack contributors, and I'm finding it difficult
>> to imagine them (particularly smaller ones) getting behind this
>> prioritization mandate. For example, if I've got a user or deployer
>> issue that requires an upstream change, am I to prioritize Py35
>> compatibility over "broken in production"? Am I now to schedule my own
>> work on known bugs or missing features only after these goals have
>> been met? Is that what I should ask other community members to do too?
>
> There is a difference between priority and urgency. Clearly "broken
> in production" is more urgent than other planned work. It's less
> clear that, over the span of an entire 6 month release cycle, one
> production outage is the most important thing the team would have
> worked on.
>
> The point of the current wording is to make it clear that because these
> are goals coming from the entire community, teams are expected to place
> a high priority on completing them. In some cases that may mean
> working on community goals instead of working on internal team goals. We
> all face this tension all the time, so that's nothing new.

It's not an issue of choosing to work on community goals. It's an
issue of prioritizing these over things that affect current production
deployments and things that are needed to increase adoption.

>
>> I agree with Hongbin Lu's comments that the resulting goals might fit
>> into the interests of the majority but fundamentally violate the
>> interests of a minority of project teams. As an example, should the TC
>> decide that a future goal is for projects to implement a particular
>> 

Re: [openstack-dev] [all][infra] Binary Package Dependencies - not only for Python

2016-08-12 Thread John Dickinson
bindep is great, and we've been using it in Swift for a while now. I'd 
definitely recommend it to other projects.
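
For anyone who hasn't looked at the file format yet, it's just one package per
line with optional profile and platform selectors. A sketch (these package
names are examples, not Swift's actual list):

    # bindep.txt (sketch; package names are illustrative)
    gcc
    liberasurecode-dev [platform:dpkg]
    liberasurecode-devel [platform:rpm]
    memcached [test]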

Andreas, I didn't see a patch proposed to Swift to move the file. I don't want 
to get in the way of your tool, though. Is there a patch that will be proposed, 
or should I do that myself?

--John




On 12 Aug 2016, at 10:31, Andreas Jaeger wrote:

> TL;DR: Projects can use bindep.txt to document in a programmatic way
> their binary dependencies
>
> Python developers record their dependencies on other Python packages in
> requirements.txt and test-requirements.txt. But some packages
> have dependencies outside of Python, and we should document
> these dependencies as well so that operators, developers, and CI systems
> know what needs to be available for their programs.
>
> Bindep is a solution to this: it allows a repo to document binary
> dependencies in a single file. It even enables specification of
> which distribution the package belongs to - Debian, Fedora, Gentoo,
> openSUSE, RHEL, SLES and Ubuntu have different package names - and
> allows profiles, like a test profile.
>
> Bindep is one of the tools the OpenStack Infrastructure team has written
> and maintains. It is in use by already over 130 repositories.
>
> For better bindep adoption, in the just released bindep 2.1.0 we have
> changed the name of the default file used by bindep from
> other-requirements.txt to bindep.txt and have pushed changes [3] to
> master branches of repositories for this.
>
> Projects are encouraged to create their own bindep files. Besides
> documenting what is required, it also gives a speedup in running tests
> since you install only what you need and not all packages that some
> other project might need and are installed  by default. Each test system
> comes with a basic installation and then we either add the repo defined
> package list or the large default list.
>
> In the OpenStack CI infrastructure, we use the "test" profile for
> installation of packages. This allows projects to document their run
> time dependencies - the default packages - and the additional packages
> needed for testing.
>
> Be aware that bindep is not used by devstack-based tests; those have
> their own way to document dependencies.
>
> A side effect is that your tests run faster: they have fewer packages to
> install. An Ubuntu Xenial test node installs 140 packages, and that can
> take between 2 and 5 minutes. With a smaller bindep file, this can change.
>
> Let's look at the log file for a normal installation using the
> default dependencies:
> 2 upgraded, 139 newly installed, 0 to remove and 41 not upgraded
> Need to get 148 MB of archives.
> After this operation, 665 MB of additional disk space will be used.
>
> Compare this with the openstack-manuals repository that uses bindep -
> this example took 20 seconds, not minutes:
> 0 upgraded, 17 newly installed, 0 to remove and 43 not upgraded.
> Need to get 35.8 MB of archives.
> After this operation, 128 MB of additional disk space will be used.
>
> If you want to learn more about bindep, read the Infra Manual on package
> requirements [1] or the bindep manual [2].
>
> If you have further questions about bindep, feel free to ask the Infra
> team on #openstack-infra.
>
> Thanks to Anita for reviewing and improving this blog post and to the
> OpenStack Infra team that maintains bindep, especially to Jeremy Stanley
> and Robert Collins.
>
> Note I'm sending this out while not all our test clouds have images that
> know about bindep.txt (they only handle other-requirements.txt). The
> infra team is in the process of ensuring updated images in all our test
> clouds for later today. Thanks, Paul!
>
> Andreas
>
>
> References:
> [1] http://docs.openstack.org/infra/manual/drivers.html#package-requirements
> [2] http://docs.openstack.org/infra/bindep/
> [3] https://review.openstack.org/#/q/branch:master+topic:bindep-mv
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-11 Thread John Dickinson


On 11 Aug 2016, at 15:14, Ed Leafe wrote:

> On Aug 11, 2016, at 4:50 PM, John Dickinson <m...@not.mn> wrote:
>
>> Is this intended to be a cross-project thing? The message is tagged 
>> "[nova]", so I'm kinda surprised I saw it, but the library seems to be 
>> called openstack capabilities. So if this is going to be a big thing for 
>> everyone, please update the ML subject tag (and help me understand how it 
>> applies to more than just nova). And if it's just for nova (err... 
>> "compute"), then naming it something that doesn't imply every project will 
>> need to use it could help prevent future misunderstanding.
>
> I will let Jay speak for himself (as if I could somehow prevent that!), but 
> the intent here is that this won’t be for Nova specifically; it is targeted 
> primarily for the forthcoming placement service (you might know it as the 
> scheduler). The goal is to have a standard way of representing *qualitative* 
> aspects of resources. So while we are not actively trying to make this a 
> placement engine for all OpenStack services yet, the goal is to not be 
> Nova-specific, so that once we have this up and running, we can offer it as a 
> general placement service for any other project that has such needs.


Sounds great! How can I get involved with the new general purpose placement 
scheduler engine and share Swift's requirements?

--john



signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all] versioning the api-ref?

2016-08-11 Thread John Dickinson


On 11 Aug 2016, at 15:02, Brian Rosmaita wrote:

> I have a question about the api-ref. Right now, for example, the new
> images v1/v2 api-refs are accurate for Mitaka.  But DocImpact bugs are
> being generated as we speak for changes in master that won't be
> available to consumers until Newton is released (unless they build from
> source). If those bug fixes get merged, then the api-ref will no longer
> be accurate for Mitaka API consumers (since it's published upon update).
>
> My question is, how should we handle this? We want the api-ref to be
> accurate for users, but we also want to move quickly on updates (so that
> the updates actually get made in a timely fashion).
>
> My suggestion is that we should always have an api-ref available that
> reflects the stable releases, that is, one for each stable branch.  So
> right now, for instance, the default api-ref page would display the
> api-ref for Mitaka, with links to "older" (Liberty) and "development"
> (master).  But excellent as that suggestion is, it doesn't help right
> now, because the most accurate Mitaka api-ref for Glance, for instance,
> is in Glance master as part of the WADL to RST migration project.  What
> I'd like to do is publish a frozen version of that somewhere as we make
> the Newton updates along with the Newton code changes.
>
> Thus I guess I have two questions:
>
> (1) How should we version (and publish multiple versions of) the api-ref
> in general?
>
> (2) How do we do it right now?
>
> thanks,
> brian
>

I was working with the oslosphinx project to try and solve this issue in a 
cross-project way for the dev docs. I think the ideas there could be useful 
here.

Basically, if you have docs built every commit (instead of every release, like 
normally happens with library projects), you can set show_other_versions to 
True and get a sidebar link of versions based on tags. (Yeah, I know it wasn't 
working earlier, but that should be fixed now).
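
The conf.py side of it is tiny. Something like this (a sketch; I'm assuming
here that oslosphinx picks show_other_versions up from html_context, so
double-check against the oslosphinx docs):

    # doc/source/conf.py (sketch; the html_context mechanism is an
    # assumption -- verify against the oslosphinx docs)
    extensions = ['oslosphinx']
    html_context = {'show_other_versions': True}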

So with this process, keep building docs per commit so you have the latest 
available. But turn on the sidebar links for other versions, and you can have a 
place for docs from the last few releases in your project. I'm not sure that it 
would work well for stable branches that are updated (but really, if you're 
updating stable, how "stable" is it?)


--John




>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc][ptl] establishing project-wide goals

2016-08-11 Thread John Dickinson


On 10 Aug 2016, at 8:29, Doug Hellmann wrote:

> Excerpts from Doug Hellmann's message of 2016-07-29 16:55:22 -0400:
>> One of the outcomes of the discussion at the leadership training
>> session earlier this year was the idea that the TC should set some
>> community-wide goals for accomplishing specific technical tasks to
>> get the projects synced up and moving in the same direction.
>>
>> After several drafts via etherpad and input from other TC and SWG
>> members, I've prepared the change for the governance repo [1] and
>> am ready to open this discussion up to the broader community. Please
>> read through the patch carefully, especially the "goals/index.rst"
>> document which tries to lay out the expectations for what makes a
>> good goal for this purpose and for how teams are meant to approach
>> working on these goals.
>>
>> I've also prepared two patches proposing specific goals for Ocata
>> [2][3].  I've tried to keep these suggested goals for the first
>> iteration limited to "finish what we've started" type items, so
>> they are small and straightforward enough to be able to be completed.
>> That will let us experiment with the process of managing goals this
>> time around, and set us up for discussions that may need to happen
>> at the Ocata summit about implementation.
>>
>> For future cycles, we can iterate on making the goals "harder", and
>> collecting suggestions for goals from the community during the forum
>> discussions that will happen at summits starting in Boston.
>>
>> Doug
>>
>> [1] https://review.openstack.org/349068 describe a process for managing 
>> community-wide goals
>> [2] https://review.openstack.org/349069 add ocata goal "support python 3.5"
>> [3] https://review.openstack.org/349070 add ocata goal "switch to oslo 
>> libraries"
>>
>
> The proposal was discussed at the TC meeting yesterday [4], and
> left open to give more time to comment. I've added all of the PTLs
> for big tent projects as reviewers on the process patch [1] to
> encourage comments from them.
>
> Please also look at the associated patches with the specific goals
> for this cycle (python 3.5 support and cleaning up Oslo incubated
> code).  So far most of the discussion has focused on the process,
> but we need folks to think about the specific things they're going
> to be asked to do during Ocata as well.
>
> Doug
>
> [4] 
> http://eavesdrop.openstack.org/meetings/tc/2016/tc.2016-08-09-20.01.log.html
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Commonality in goals and vision is what unites any community. I
definitely support the TC's effort to define these goals for OpenStack
and to champion them. However, I have a few concerns about the process
that has been proposed.

I'm concerned with the mandate that all projects must prioritize these
goals above all other work. Thinking about this from the perspective of
the employers of OpenStack contributors, and I'm finding it difficult
to imagine them (particularly smaller ones) getting behind this
prioritization mandate. For example, if I've got a user or deployer
issue that requires an upstream change, am I to prioritize Py35
compatibility over "broken in production"? Am I now to schedule my own
work on known bugs or missing features only after these goals have
been met? Is that what I should ask other community members to do too?

I agree with Hongbin Lu's comments that the resulting goals might fit
into the interests of the majority but fundamentally violate the
interests of a minority of project teams. As an example, should the TC
decide that a future goal is for projects to implement a particular
API-WG document, that may be good for several projects, but it might
not be possible or advisable for others.

I know the TC has no malicious intent here, and I do support the idea
of having cross-project goals. The first goals proposed seem like
great goals.  And I understand the significant challenges of
coordinating goals between a multitude of different projects. However,
I haven't yet added my own +1 to the proposed goals because the
current process means that I am committing that every Swift project
team contributor is now to prioritize that work above all else, no
matter what is happening to their customers, their products, or their
communities.


--John






signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova] os-capabilities library created

2016-08-11 Thread John Dickinson


On 3 Aug 2016, at 16:47, Jay Pipes wrote:

> Hi Novas and anyone interested in how to represent capabilities in a 
> consistent fashion.
>
> I spent an hour creating a new os-capabilities Python library this evening:
>
> http://github.com/jaypipes/os-capabilities
>
> Please see the README for examples of how the library works and how I'm 
> thinking of structuring these capability strings and symbols. I intend 
> os-capabilities to be the place where the OpenStack community catalogs and 
> collates standardized features for hardware, devices, networks, storage, 
> hypervisors, etc.
>
> Let me know what you think about the structure of the library and whether you 
> would be interested in owning additions to the library of constants in your 
> area of expertise.
>
> Next steps for the library include:
>
> * Bringing in other top-level namespaces like disk: or net: and working with 
> contributors to fill in the capability strings and symbols.
> * Adding constraints functionality to the library. For instance, building in 
> information to the os-capabilities interface that would allow a set of 
> capabilities to be cross-checked for set violations. As an example, a 
> resource provider having DISK_GB inventory cannot have *both* the disk:ssd 
> *and* the disk:hdd capability strings associated with it -- clearly the disk 
> storage is either SSD or spinning disk.
>
> Anyway, lemme know your initial thoughts please.

Is this intended to be a cross-project thing? The message is tagged "[nova]", 
so I'm kinda surprised I saw it, but the library seems to be called openstack 
capabilities. So if this is going to be a big thing for everyone, please update 
the ML subject tag (and help me understand how it applies to more than just 
nova). And if it's just for nova (err... "compute"), then naming it something 
that doesn't imply every project will need to use it could help prevent future 
misunderstanding.

--john


>
> Best,
> -jay
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


signature.asc
Description: OpenPGP digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][infra] Constraints are ready to be used for tox.ini

2016-08-11 Thread John Dickinson


On 11 Aug 2016, at 10:03, Andreas Jaeger wrote:

> TL;DR: upper-constraints can be used now for all tox based jobs
>
> With any software package, you will need additional packages to run it.
> Often, there's a tight coupling: The software package will only run with
> specific other package versions. This dependency information is
> sometimes found in README files, in code, or in package metadata. If you
> install the package, you need to figure out the dependency and
> handle it properly.
>
> The Python package installer pip uses a list of requirements to install
> dependent Python packages. This list not only contains the name of
> packages but also limits which versions to use, or not to use.
> In OpenStack we handle these dependencies in a global requirements list
> and use it for most of the repositories. During initial testing a
> specific package version is tested, but at a later point another one
> might be used - or, during deployment, yet another one.
>
> To document what was tested, give guidenance for deployment, and help to
> figure out breakage by upstream projects, the OpenStack requirements
> projects maintains a set of constraints with packages pinned to specific
> package versions that are known to be working.
> These are in the upper-constraints.txt file.
>
> Devstack already handles upper-constraints.txt when installing packages
> and I'm happy to say that tox, the Python testing framework used in
> OpenStack, can now handle upper-constraints everywhere as well.
>
>
> Constraints for tox based jobs
> ==
> To use constraints, change in tox.ini the install command to:
>
> install_command = pip install
> -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt}
> {opts} {packages}

I've proposed a patch to Swift to do this:

https://review.openstack.org/#/c/354291/

I'd appreciate any advice on it.
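
For anyone following along, the change amounts to something like this in
tox.ini (a sketch; the install_command line comes straight from Andreas's
template above, and the rest is a typical testenv):

    [testenv]
    usedevelop = True
    install_command = pip install -c{env:UPPER_CONSTRAINTS_FILE:https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt} {opts} {packages}
    deps = -r{toxinidir}/test-requirements.txt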


>
> Caveat
> ==
>
> Note that constraints are used for the installation of each package, so
> if you want to install a package from source and have constraints for a
> specific version in the constraints file, it will not work. This happens
> with some of the OpenStack Python client packages: when they install
> their dependencies, those might have a dependency on the client package
> itself, and this will then cause an error since the client package
> should get installed from source.
>
> So, projects need to remove themselves from the constraints file if they
> run into this. Packages like python-novaclient and python-glanceclient
> therefore use a wrapper (tools/tox_install.sh) as
> install command to edit the constraints file first and remove their own
> project from it.
>
> Also, be aware that this only works for those jobs that have been enabled for
> it in the project-config repository. It's done for all the generic tox
> enabled targets and should be done for all custom tox targets as well.
> Some repositories are not using constraints like project-config
> itself, so those jobs are not set up.
>
> Constraints for DevStack jobs
> =
> Devstack-gate takes care of using constraints; there is nothing for a
> repository to do to honor constraints.
>
> Check the devstacklog.txt file; if constraints are in use it will contain
> lines like:
>
> Collecting oslo.context===2.7.0 (from -c
> /opt/stack/new/requirements/upper-constraints.txt (line 204))
>
> References
> ==
>
> http://docs.openstack.org/developer/requirements/
> https://specs.openstack.org/openstack/openstack-specs/specs/requirements-management.html
>
>
> Thanks
> ==
> As usual in OpenStack, such work is a team work of many people. I'd like
> to thank especially:
>
> * Robert Collins 'lifeless': For writing the initial spec, implementation
> work, and giving guidance on many of these details.
> * Sean Dague: He was bold enough to try using constraints everywhere and
> showing us where it failed.
> * Sachi King for making zuul-cloner usable in the post queue. This was a
> missing part in the last months.
> * The OpenStack infra team for many reviews and design discussions -
> especially to Jeremy Stanley and Jim Blair.
>
> -- 
>  Andreas Jaeger aj@{suse.com,opensuse.org} Twitter/Identica: jaegerandi
>   SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
>GF: Felix Imendörffer, Jane Smithard, Graham Norton,
>HRB 21284 (AG Nürnberg)
> GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: 

Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread John Dickinson


On 9 Aug 2016, at 11:33, Ian Cordasco wrote:

>  
>
> -Original Message-
> From: John Dickinson <m...@not.mn>
> Reply: OpenStack Development Mailing List (not for usage questions) 
> <openstack-dev@lists.openstack.org>
> Date: August 9, 2016 at 13:17:08
> To: OpenStack Development Mailing List <openstack-dev@lists.openstack.org>
> Subject:  Re: [openstack-dev] [requirements] History lesson please
>
>> I'd like to advocate for *not* raising minimum versions very often. Every 
>> time some OpenStack
>> project raises minimum versions, this change is propagated to all projects, 
>> and that
>> puts extra burden on anyone who is maintaining packages and dependencies in 
>> their own
>> deployment. If one project needs a new feature introduced in version 32, but 
>> another
>> project claims compatibility with >=28, that's ok. There's no need for the 
>> second project
>> to raise the minimum version when there isn't a conflict. (This is the 
>> position I advocated
>> for at the Austin summit.)
>>
>> Yes, I know that currently we don't test every possible version permutation. 
>> Yes, I know
>> that doing that is hard. I'm not ignoring that.
>
> Right. So with the current set-up, where these requirements are propogated to 
> every project, how do projects express their own minimum version requirement?
>
> Let's assume someone is maintaining their own packages and dependencies. If 
> (for example) Glance requires a minimum version of Routes and Nova has a 
> minimum requirement newer than Glance's, they're not coinstallable (which was 
> the original goal of the requirements team). What you're asking for ends up 
> being "Don't rely on new features in a dependency". If OpenStack drops the 
> illusion of coinstallability that ends up being fine. I don't think anyone 
> wants to drop that though.

In that case, they are still co-installable, because the nova minimum satisfies 
both.

>
> --
> Ian Cordasco


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [requirements] History lesson please

2016-08-09 Thread John Dickinson
I'd like to advocate for *not* raising minimum versions very often. Every time 
some OpenStack project raises minimum versions, this change is propagated to 
all projects, and that puts extra burden on anyone who is maintaining packages 
and dependencies in their own deployment. If one project needs a new feature 
introduced in version 32, but another project claims compatibility with >=28, 
that's ok. There's no need for the second project to raise the minimum version 
when there isn't a conflict. (This is the position I advocated for at the 
Austin summit.)
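
To make that concrete (hypothetical package names and version numbers):

    # project A's requirements.txt -- needs a feature added in 32
    somepackage>=32

    # project B's requirements.txt -- has been fine since 28
    somepackage>=28

pip can satisfy both with somepackage 32 or later, so the two projects
remain co-installable without project B ever raising its minimum.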

Yes, I know that currently we don't test every possible version permutation. 
Yes, I know that doing that is hard. I'm not ignoring that.

--John




On 9 Aug 2016, at 9:24, Ian Cordasco wrote:

>  
>
> -Original Message-
> From: Sean Dague 
> Reply: OpenStack Development Mailing List (not for usage questions) 
> 
> Date: August 9, 2016 at 11:21:47
> To: openstack-dev@lists.openstack.org 
> Subject:  Re: [openstack-dev] [requirements] History lesson please
>
>> On 08/09/2016 11:25 AM, Matthew Thode wrote:
>>> On 08/09/2016 10:22 AM, Ian Cordasco wrote:
 -Original Message-
 From: Matthew Thode
 Reply: prometheanf...@gentoo.org , OpenStack Development
>> Mailing List (not for usage questions)
 Date: August 9, 2016 at 09:53:53
 To: openstack-dev@lists.openstack.org
 Subject: Re: [openstack-dev] [requirements] History lesson please

> One of the things on our todo list is to test the 'lower-constraints' to
> make sure they still work with the head of branch.

 That's not sufficient. You need to find versions in between the lowest 
 tested version
>> and the current version to also make sure you don't end up with breakage. 
>> You might have
>> somepackage that has a lower version of 2.0.1 and a current constraint of 
>> 2.12.3. You
>> might even have a blacklist of versions in between those two versions, but 
>> you still need
>> other versions to ensure that things in between those continue to work.

 THe tiniest of accidental incompatibilities can cause some of the most 
 bizarre bugs.

 --
 Ian Cordasco

>>>
>>> I'm aware of this, but this would be a good start.
>>
>> And, more importantly, assuming that testing is only valid if it covers
>> every scenario, sets the bar at entirely the wrong place.
>>
>> A lower bound test would eliminate some of the worst fiction we've got.
>> Testing is never 100%. With a complex system like OpenStack, it's
>> probably not even 1% (of configs matrix for sure). But picking some
>> interesting representative scenarios and seeing that it's not completely
>> busted is worth while.
>
> Right. I'm not advocating for testing every released version of a dependency. 
> In general, it's good to test versions that have *triggered* changes though. 
> If upgrading from 2.3.0 to 2.4.1 caused you to need to fix something, test 
> something earlier than 2.4.1, and 2.4.1, and then something later. That's 
> what I'm advocating for.
>
> --
> Ian Cordasco
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tempest][swift][radosgw] Can we please merge the fix for the RFC 7230 violation issue?

2016-08-08 Thread John Dickinson


On 8 Aug 2016, at 9:50, Jay Pipes wrote:

> Tempest devs,
>
> Let me please draw your attention to a LP bug that may not seem particularly 
> high priority, but I believe could be resolved easily with a patch already 
> proposed.
>
> LP bug 1536251 [1] accurately states that Tempest is actively verifying that 
> an OpenStack API call violates RFC 7230.
>
> When a 204 No Content is received, the Content-Length header MUST NOT be 
> present.
>
> However, Swift returns a Content-Length header and also an HTTP response code 
> of 204 for a request to list containers of a new user (that has no 
> containers).
>
> Tempest has been validating this behaviour even though it is a violation of 
> RFC 7230:
>
> https://github.com/openstack/tempest/blob/master/tempest/api/object_storage/test_account_services.py#L81
>
> RadosGW provides a proxy API that attempts to match the OpenStack Object 
> Storage API, backed by Ceph object storage. In order for RadosGW to pass 
> RefStack's burden of compatibility, it must pass the Tempest OpenStack Object 
> Storage API tests. It currently cannot do so because RadosGW does not violate 
> RFC 7230.
>
> The RadosGW developer community does not wish to argue about whether or not 
> to make Swift's API comply with RFC 7230. At the same time, they do not want 
> to add a configuration option to RadosGW to force the proxy service to 
> violate RFC 7230 just to satisfy the RefStack/Tempest API tests.

I think tempest should merge the proposed patch (or one like it) so that 
content-length isn't checked on a 204 response. That solves the discrepancy 
when deployers are running Swift behind some other web server or caching system 
that is stripping the header. On the Swift side, we've got to consider the 
question of risk of breaking clients vs violating updated RFCs. My gut says 
we'll drop the header, but that's something that we'll discuss. But either way, 
it seems silly to me to be blocked here by tempest.
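
To sketch what the relaxed check could look like (hypothetical helper
names, not the actual tempest patch):

    def check_204_headers(status, headers):
        # RFC 7230: a 204 response MUST NOT include Content-Length, so
        # don't require the header; only validate it if the server sent
        # one anyway (as Swift currently does)
        assert status == 204
        if 'content-length' in headers:
            assert headers['content-length'] == '0'

    # a compliant RadosGW response and a current Swift response both pass
    check_204_headers(204, {})
    check_204_headers(204, {'content-length': '0'})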


>
> Therefore, Radoslaw (cc'd) has proposed a patch to Tempest that would allow 
> RadosGW's proxy API to meet the RefStack compatibility tests while also not 
> violating RFC 7230 and not requiring any change of Swift:
>
> https://review.openstack.org/#/c/272062
>
> I ask Tempest devs to re-review the above patch and consider merging it for 
> the sake of collaboration between the OpenStack and Ceph developer 
> communities.
>
> Thanks very much!
> -jay
>
> [1] https://bugs.launchpad.net/tempest/+bug/1536251
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] using feature branches, Swift's experiences

2016-07-19 Thread John Dickinson


On 19 Jul 2016, at 11:56, Dean Troyer wrote:

> On Tue, Jul 19, 2016 at 12:27 PM, John Dickinson <m...@not.mn> wrote:
>
>> Overall, using long-lived upstream feature branches has been very helpful
>> for us and overall a positive experience.
>>
>> I've seen some other teams debate and discuss using a feature branch for
>> their work but wonder about how it works. I've written down our experiences
>> with using feature branches as part of OpenStack development, including
>> some recommendations that help things go smoothly.
>>
>> https://wiki.openstack.org/wiki/Swift/feature_branches
>>
>
> Seriously nice writeup John & Swift team, thanks! I wish I had the benefit
> of that in May when I chose not to do a feature branch for OSC's impending
> major release, it would have removed the fear of the unknown from that
> choice.
>
> I am wondering if there are any corresponding bits of negative advice
> around feature branches, of the sort 'don't do X, even if it seems like a
> good idea, here is why it is not'.  This is how I see our stance on Git
> submodules for example.

I tried to add some of those in the writeup. Feature branches will start 
relatively slowly. If you put docs at the start of a review branch, you'll 
get a *ton* of nit comments that really slow merging down. The fact that 
our OpenStack workflow requires (or strongly suggests) the -review branch is 
somewhat burdensome.

But overall, I really love the experience we've had with feature branches. 
Swift has been able to land seriously huge features (storage polices, erasure 
coding, and crypto) because of them, and still keep all the development in the 
open community.

Yeah, there are some hard parts, but most of that is related to other issues or 
gaps in the OpenStack community (eg tracking/planning work) and is unrelated to 
using a feature branch or not.


I'm glad you find the writeup helpful. I'd encourage all teams to consider 
feature branches for long-lived major feature development.

--John



>
> dt
>
> -- 
> Dean Troyer
> dtro...@gmail.com
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc][all] Plugins for all

2016-07-19 Thread John Dickinson
I've been trying to follow this thread, but I'll admit I'm confused about what 
is being asked about or proposed. I'm not sure what "plugins for all" means.

Is "plugins for all" a way to make every plugin in an OpenStack project work 
the same way? How would that work? There's a huge set of diversity in the kind 
of things that plugins are used for, so I can't really imagine how there is a 
common subset that can be used everywhere.

Is "plugins for all" a way to say that every OpenStack project should have a 
stable API and that's the *only* way to talk to it (i.e. zero knowledge of 
internal state or design)? I think that's a really good idea that is vital to 
the future of OpenStack.

Is "plugins for all" a way to say that there is a set of functionality that is 
guaranteed to be there, eg with Neutron for networking or Trove for DBs, and if 
a project uses that functionality it must use the right OpenStack project?

I've picked up on elements of all three of these goals throughout the thread, 
but I'm not sure what's right or if I'm completely missing it.

Please help!

--John




On 19 Jul 2016, at 9:59, Hayes, Graham wrote:

> On 19/07/2016 16:39, Doug Hellmann wrote:
>> Excerpts from Hayes, Graham's message of 2016-07-18 17:13:09 +:
>>> On 18/07/2016 17:57, Thierry Carrez wrote:
 Hayes, Graham wrote:
> [...]
> The point is that we were supposed to be a level field as a community
> but if we have examples like this, there is not a level playing field.

 While I generally agree on your goals here (avoid special-casing some
 projects in generic support projects like Tempest), I want to clarify
 what we meant by "level playing field" in a recent resolution.
>>>
>>>
>>> Yes - it has been pointed out the title is probably overloading a term
>>> just used for a different purpose - I am more than willing to change it.
>>>
>>> I wasn't sure where I got the name, and I realised that was probably in
>>> my head from that resolution.
>>>
 This was meant as a level playing field for contributors within a
 project, not a level playing field between projects. The idea is that
 any contributor joining any OpenStack project should not be technically
 limited compared to other contributors on the same project. So, no
 "secret sauce" that only a subset of developers on a project have access 
 to.
>>>
>>> There is a correlation here - "special sauce" (not secret obviously)
>>> that only a subset of projects have access to.
>>>
 I think I understand where you're gong when you say that all projects
 should have equal chances, but keep in mind that (1) projects should not
 really "compete" against each other (but rather all projects should
 contribute to the success of OpenStack as a whole) and (2) some
 OpenStack projects will always be more equal than others (for example we
 require that every project integrates with Keystone, and I don't see
 that changing).
>>>
>>> Yes, I agree we should not be competing. But was should not be asking
>>> the smaller projects to re-implement functionality, just because they
>>> did not get integrated in time.
>>>
>>> We require all projects to integrate with keystone for auth, as we
>>> require all projects to integrate with neutron for network operations
>>> and designate for DNS, I just see it as a requirement for using the
>>> other components of OpenStack for their defined purpose.
>>>
>>
>> It would be useful to have some specific information from the QA/Tempest
>> team (and any others with a similar situation) about whether the current
>> situation about how differences between in-tree tests and plugin tests
>> are allowed to use various APIs. For example, are there APIs only
>> available to in-tree tests that are going to stay that way? Or is this
>> just a matter of not having had time to "harden" or "certify" or
>> otherwise prepare those APIs for plugins to use them?
>
> "Staying that way" is certainly the impression given to users from
> other projects.
>
> In any case tempest is just an example. From my viewpoint, we need to
> make this a community default, to avoid even the short (which really
> ends up a long) term discrepancy between projects.
>
> If the standard in the community is equal access, this means when the
> next testing tool, CLI, SDK, $cross_project_tool comes along, it is
> available to all projects equally.
>
> If everyone uses the interfaces, they get better for all users of them,
> "big tent projects" and "tc-approved-release" alike. Having two
> way of doing the same thing means that there will always be
> discrepancies between people who are in the club, and those who are not.
>
> - Graham
>
>> Doug
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> 

[openstack-dev] using feature branches, Swift's experiences

2016-07-19 Thread John Dickinson
Swift has now used 4 feature branches and landed 3 of them:

 * feature/sp -- for storage policy functionality (landed)
 * feature/ec -- for erasure codes (landed)
 * feature/hummingbird -- for golang WIP
 * feature/crypto -- for at-rest encryption (landed)

Overall, using long-lived upstream feature branches has been very helpful for 
us and overall a positive experience.

I've seen some other teams debate and discuss using a feature branch for their 
work but wonder about how it works. I've written down our experiences with 
using feature branches as part of OpenStack development, including some 
recommendations that help things go smoothly.

https://wiki.openstack.org/wiki/Swift/feature_branches

If you've got questions about using feature branches, please feel free to drop 
by #openstack-swift and ask.


--John





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] crypto-review review, soft-freeze on master

2016-06-10 Thread John Dickinson
As mentioned in IRC and on 
https://wiki.openstack.org/wiki/Swift/PriorityReviews, Swift is now under a 
soft freeze for master.

The patches for the crypto functionality have been proposed to 
feature/crypto-review. The initial plan is to spend two weeks in this soft 
freeze (until June 24) to get the final crypto patches reviewed and landed. 
We'll reevaluate at that time.

* The patch chain allows you to see the steps needed to get to the full crypto 
functionality.
* Please do not push patches over the patch chain, but please leave (links to) 
diffs as review comments.
* acoles is managing this patch chain.
* If there is something that must land on master before feature/crypto-review 
has landed, please let both notmyname and acoles know.
* Once the patches in the chain have been approved, acoles will remove the -2 
on the first patch and allow the chain to land on the crypto-review branch. 
Then we will land a single merge commit to master, bringing in the crypto 
functionality as one commit.

I'm excited to bring this functionality into Swift.


--John




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [swift] OpenStack Swift 2.8.0 has been released

2016-06-09 Thread John Dickinson
I'm happy to announce that OpenStack Swift 2.8.0 has been released.

This release includes several feature improvements and important
bug fixes, and I recommend that everyone upgrade as soon as possible.
As always, you can upgrade to this version with no end-user downtime.

The full release notes can be found at
https://github.com/openstack/swift/blob/master/CHANGELOG.

The release is available at https://tarballs.openstack.org/swift/.

Feature highlights:

  * Bulk deletes now use concurrency to speed up the process. This will
    result in faster API responses to end users. The amount of
    concurrency used is configurable by the operator, and it defaults to 2.

  * Server-side copy has been refactored to be entirely encapsulated
in middleware. Not only does this make the code cleaner and make it
easier to support external middleware, this change is necessary for the
upcoming server-side encryption functionality.

  * The `fallocate_reserve` setting can now be a percent of drive
    capacity instead of just a fixed number of bytes (see the example
    after this list).

  * The deprecated `threads_per_disk` setting has been removed.
Deployers are encouraged to use `servers_per_port` instead.
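
For example, a hedged object-server.conf fragment (check the sample
config for the exact placement; the percent form is new in this
release):

    [DEFAULT]
    # reserve 1% of each drive's capacity rather than a fixed byte count
    fallocate_reserve = 1%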

Bug-fix highlights:

  * Fixed an infinite recursion issue when syslog is down.

  * Fixed a rare case where a backend failure during a read could
result in a missing byte in the response body.

  * `disable_fallocate` now also correctly disables `fallocate_reserve`.

  * Fixed an issue where a single-replica configuration for account or
container DBs could result in the DB being inadvertently deleted if
it was placed on a handoff node.

  * Reclaim isolated .meta files if they are older than the `reclaim_age`.


This release is the work of 42 different developers, including 16
first-time contributors. Thank you to the whole community for your work
during this release.

--John




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [release][reno][infra] merging tags between branches is confusing our release notes

2016-06-08 Thread John Dickinson


On 8 Jun 2016, at 11:13, Doug Hellmann wrote:

> tl;dr: The switch from pre-versioning to post-versioning means that
> sometimes master appears to be older than stable/$previous, so we
> merge "final" tags from stable/$previous into master to make up for
> it. This introduces versions into the history of master that aren't
> *really* there, but git sees them and so does reno. That, in turn,
> means that the release notes generated from master may place some
> notes in the wrong version, suggesting that something happened
> sooner than it did. I propose that we stop merging tags, and instead
> introduce a new tag on master after we create a branch to ensure
> that the version number there is always higher than stable/$previous.
>
>
> Background
> --
>
> Over the last year or so we've switched from pre-versioning (declaring
> versions in setup.cfg) to post-versioning (relying solely on git
> tags for versions). This made the release process simpler, because
> we didn't need to worry about synchronizing the change of version
> strings within setup.cfg as we created our branches. A side-effect,
> though, is that the version from which we tag appears on both
> branches. That means that stable/$previous and master both have the
> same version for some period of time, and then stable/$previous
> receives a final tag and has a version newer than master. To
> compensate, we merge that final tag from stable/$previous into
> master (taking only the tag, without any of the code changes), so
> that master again has the same version.
>
>
> The Problem
> ---
>
> The tag may be merged into master after other changes have landed
> in master but not stable/$previous, and if those changes include
> release notes then reno will associate them with the newly merged
> tag, rather than the correct version number.
>
> Here's an example I have been using to test reno. In it, 3 separate
> reno notes are created on two branches. Note 1 is on master when
> it is tagged 1.0.0. Then master is branched and note 2 is added to
> the branch and tagged 1.1.0. Then the tag is merged into master and
> note 3 is added.
>
>   * af93946 (HEAD -> master, tag: 2.0.0) add slug-0003.yaml
>   * f78d1a2 add ignore-2.txt
>   *   4502dbd merge 1.1.0 tag into master
>   |\
>   | * bf50a97 (tag: 1.1.0, test_merge_tags) add slug-0002.yaml
>   * | 1e4d846 add ignore-1.txt
>   |/
>   * 9f481a9 (tag: 1.0.0) add slug-0001.yaml
>
> Before the tag is applied to note 3, it appears to be part of 1.1.0,
> even though it is not from the branch where that version was created
> and the version 1.1.0 is included in the release notes for master,
> even though that version should not really be a part of that series.
>
> Technically reno is doing the right thing, because even "git describe"
> honors the merged tag and treats commit f78d1a2 as 1.1.0-4-gaf93946.
> So because we've merged the version number into a different series
> branch, that version becomes part of that series.
>
>
> The Proposal
> 
>
> We should stop merging tags between branches, at all. Then our git
> branches will be nice and linear, without merges, and reno will
> associate the correct version number with each note.
>
> To compensate for the fact that master will have a lower version
> number after the branch, we can introduce a new alpha tag on master
> to raise its version. So, after creating stable/$series from version
> X.0.0.0rc1, we would tag the next commit on master with X+1.0.0.0a1.
> All subsequent commits on master would then be considered to be
> part of the X+1.0.0 series.

This seems to go back to the essence of pre-versioning. Instead of updating a 
string in a file, you've updated it as a tag. You've still got the coordination 
issues at release to deal with (when and what to tag) and the issue of knowing 
what the next release is before you've landed any patches that will be in that 
release.

Isn't the reason that the branch is merged back in because otherwise pbr can't 
generate a valid version number? You've "solved" that by hiding the release 
version number behind the new *a1 tag. Therefore, for any commit on the master 
branch, the only ancestor commits that are tagged will have the *a1 tags, and 
the actual release tags will never be reachable by walking up parent commits 
(assuming there is at least one commit on a release branch, which seems normal).
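
As a quick illustration (hedged; assuming a pbr-based project and the
example graph above), on master after the merge you'd see something
like:

    $ git describe
    1.1.0-4-gaf93946
    $ python setup.py --version
    1.1.1.dev4

Both walk the history the same way, so a tag that arrived only via a
merge commit still changes the computed version.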



>
> Libraries and other projects that follow the cycle-with-intermediary
> release model won't need this treatment, because we're not using
> alpha or beta versions for them and they are tagged more often than
> the projects following the cycle-with-milestones model.
>
>
> Possible Issues
> ---
>
> We will be moving back to a situation where we have to orchestrate
> multiple operations when we create branches. Adding an extra tag
> isn't a lot of work, though, and it doesn't really matter what the
> commit is that gets the tag, so if there's 

[openstack-dev] [swift] Exploring the feasibility of a dependency approach

2016-06-07 Thread John Dickinson
Below is the entirety of an email thread between myself, Thierry, and Flavio. 
It goes into detail about Swift's design and the feasibility and potential 
impact of a "split repos" scenario.

I'm posting this with permission as an FYI, not to reraise discussion.


--John





Forwarded message:

> From: John Dickinson <m...@not.mn>
> To: Thierry Carrez <thie...@openstack.org>
> Cc: Flavio Percoco <flape...@gmail.com>
> Subject: Re: Exploring the feasibility of a dependency approach
> Date: Wed, 01 Jun 2016 13:58:21 -0700
>
>
>
> On 30 May 2016, at 2:48, Thierry Carrez wrote:
>
>> John Dickinson wrote:
>>> Responses inline.
>>
>> Thank you for taking up the time to write this, it's really helpful (to me 
>> at least). I have a few additional comments/questions to make sure I fully 
>> understand.
>>
>>>> [...]
>>>> 1. How much sense would a Swift API / Swift engine split make today ?
>>>> [...]
>>>
>>> It doesn't make much sense to try to split Swift into an API part and
>>> an engine part because the things the API handles are inexorably
>>> linked to the storage engine itself. In other words, the API handlers
>>> are the implementation of the engine.
>>>
>>> Since the API is handling the actual resources that are exposed (ie
>>> the data itself), it also has to handle the "engine" pieces like the
>>> consistency model (when is something "durable"), placement (where
>>> should something go), failure handling (what if hardware in the
>>> cluster isn't available), and durability schemes (replicas, erasure
>>> coding, etc).
>>
>> Right, so knowledge of the data placement algorithm (or the durability 
>> constraints) in pervasive across the Swift nodes. The proxy server is, in a 
>> way, as low-level as the storage server.
>>
>>> The "engine" in Swift has two logical parts. One part is responsible
>>> for taking a request, making a canonical persistent "key" for it,
>>> handing the data to the storage media, and ensuring that the media has
>>> durably stored the data. The other part is responsible for handling a
>>> client request, finding data in the cluster, and coordinating all
>>> responses from the stuff in the first part.
>>>
>>> We call the first part "storage servers" and the second part "proxy
>>> servers". There are three different kinds of storage servers in Swift:
>>> account, container, and object, and each also have several background
>>> daemon processes associated with them. For the rest of this email, I'll
>>> refer to a proxy server and storage servers (or specific account,
>>> container, or object servers).
>>>
>>> The proxy server and the storage servers are pluggable. The proxy
>>> server and the storage servers support 3rd party WSGI middleware. The
>>> proxy server has been extended many times in the ecosystem with a lot
>>> of really cool functionality:
>>>
>>>   * Swift as an origin server for CDNs
>>>   * Storlets, which allow executable code stored as objects to
>>> mutate requests and responses
>>>   * Image thumbnails (eg for wikimedia)
>>>   * Genome sequence format conversions, so data coming out of a
>>> gene sequencer can go directly to swift and be usable by other
>>> apps in the workflow
>>>   * Media server timestamp to byte offset translator (eg for CrunchyRoll)
>>>   * Caching systems
>>>   * Metadata indexing
>>>
>>> The object server also supports different implementations for how it
>>> talks to durable media. The in-repo version has a memory-only
>>> implementation and a generic filesystem implementation. Third-party
>>> implementations support different storage media like Kinetic drives.
>>> If there were to be special optimizations for flash media, this is
>>> where it would go. Inside of the object server, this is abstracted as
>>> a "DiskFile", and extending it is a supported use case for Swift.
>>>
>>> The DiskFile is how other full-featured storage systems have plugged
>>> in to Swift. For example, the SwiftOnFile project implements a
>>> DiskFile that handles talking to a distributed filesystem instead of a
>>> local filesystem. This is used for putting Swift on GlusterFS or on
>>> NetApp. It's the same pattern that's used for swift-on-ceph and all of
>>> the other swift-on-* implementations out there. My previous

Re: [openstack-dev] [stable][all] Tagging kilo-eol for "the world"

2016-06-02 Thread John Dickinson
Open swift/swiftclient patches to stable/kilo have been abandoned.

--John



On 2 Jun 2016, at 4:45, Jesse Pretorius wrote:

> Hi Tony,
>
> OpenStack-Ansible is just waiting for the requirements repository and the
> swift repository kilo-eol tags. Once they're done we'd like to bump the
> SHA's for our 'kilo' to the EOL tags of those two repositories, tag a
> release, then do our own kilo-eol tag.
>
> Thanks,
>
> Jesse
> IRC: odyssey4me
>
> On 2 June 2016 at 11:31, Tony Breeds  wrote:
>
>> Hi all,
>> In early May we tagged/EOL'd several (13) projects.  We'd like to do a
>> final round for a more complete set.  We looked for projects meet one or
>> more
>> of the following criteria:
>> - The project is openstack-dev/devstack, openstack-dev/grenade or
>>   openstack/requirements
>> - The project has the 'check-requirements' job listed as a template in
>>   project-config:zuul/layout.yaml
>> - The project is listed in governance:reference/projects.yaml and is tagged
>>   with 'release:managed' or 'stable:follows-policy' (or both).
>>
>> The list of 171 projects that match above is at [1].  There are another 68
>> projects at [2] that have kilo branches but do NOT match the criteria
>> above.
>>
>> Please look over both lists by 2016-06-09 00:00 UTC and let me know if:
>> - A project is in list 1 and *really* *really* wants to opt *OUT* of
>> EOLing and
>>   why.
>> - A project is in list 2 that would like to opt *IN* to tagging/EOLing
>>
>> Any projects that will be EOL'd will need all open reviews abandoned
>> before it
>> can be processed.  I'm very happy to do this.
>>
>> I'd like to hand over the list of ready to EOL repos to the infra team on
>> 2016-09-10 (UTC)
>>
>> Yours Tony.
>> [1] http://paste.openstack.org/show/507233/
>> [2] http://paste.openstack.org/show/507232/
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
> -- 
> Jesse Pretorius
> mobile: +44 7586 906045
> email: jesse.pretor...@gmail.com
> skype: jesse.pretorius
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-25 Thread John Dickinson
My responses are inline and to question 5, which, like you, I think is the key.

On 25 May 2016, at 3:48, Sean Dague wrote:

> I've been watching the threads, trying to digest, and find the way's
> this is getting sliced doesn't quite slice the way I've been thinking
> about it. (which might just means I've been thinking about it wrong).
> However, here is my current set of thoughts on things.
>
> 1. Should OpenStack be open to more languages?
>
> I've long thought the answer should be yes. Especially if it means we
> end up with keystonemiddleware, keystoneauth, oslo.config in other
> languages that let us share elements of infrastructure pretty
> seamlessly. The OpenStack model of building services that register in a
> service catalog and use common tokens for permissions through a bunch of
> services is quite valuable. There are definitely people that have Java
> applications that fit into the OpenStack model, but have no place to
> collaborate on them.
>
> (Note: nothing about the current proposal goes anywhere near this)
>
> 2. Is Go a "good" language to add to the community?
>
> Here I am far more mixed. In programming language time, Go is super new.
> It is roughly the same age as the OpenStack project. The idea that Go and
> Python programmers overlap seems to be because some shops that used
> to do a lot in Python, now do some things in Go.
>
> But when compared to other languages in our bag, Javascript, Bash. These
> are things that go back 2 decades. Unless you have avoided Linux or the
> Web successfully for 2 decades, you've done these in some form. Maybe
> not being an expert, but there is vestigial bits of knowledge there. So
> they *are* different. In the same way that C or Java are different, for
> having age. The likelihood of finding community members than know Python
> + one of these is actually *way* higher than Python + Go, just based on
> duration of existence. In a decade that probably won't be true.
>
> 3. Are there performance problems where python really can't get there?
>
> This seems like a pretty clear "yes". It shouldn't be surprising. Python
> has no jit (yes there is pypy, but it's compat story isn't here). There
> is a reason a bunch of python libs have native components for speed -
> numpy, lxml, cryptography, even yaml throws a warning that you should
> really compile the native version for performance when there is full
> python fallback.
>
> The Swift team did a very good job demonstrating where these issues are
> with trying to get raw disk IO. It was a great analysis, and kudos to
> that team for looking at so many angles here.
>
> 4. Do we want to be in the business of building data plane services that
> will all run into python limitations, and will all need to be rewritten
> in another language?
>
> This is a slightly different spin on the question Thierry is asking.
>
> Control Plane services are very unlikely to ever hit a scaling concern
> where rewriting the service in another language is needed for
> performance issues. These are orchestrators, and the time spent in them
> is vastly less than the operations they trigger (start a vm, configure a
> switch, boot a database server). There was a whole lot of talk in the
> threads of "well that's not innovative, no one will want to do just
> that", which seems weird, because that's most of OpenStack. And it's
> pretty much where all the effort in the containers space is right now,
> with a new container fleet manager every couple of weeks. So thinking
> that this is a boring problem no one wants to solve, doesn't hold water
> with me.
>
> Data Plane services seem like they will all end up in the boat of
> "python is not fast enough". Be it serving data from disk, mass DNS
> transfers, time series database, message queues. They will all
> eventually hit the python wall. Swift hit it first because of the
> maturity of the project and they are now focused on this kind of
> optimization, as that's what their user base demands. However I think
> all other data plane services will hit this as well.
>
> Glance (which is partially a data plane service) did hit this limit, and
> the way it is largely mitigated by folks is by using Ceph and exposing that
> directly to Nova so now Glance is only in the location game and metadata
> game, and Ceph is in the data plane game.
>
> When it comes to doing data plan services in OpenStack, I'm quite mixed.
> The technology concerns for data plane
> services are quite different. All the control plane services kind of
> look and feel the same. An API + worker model, a DB for state, message
> passing / rpc to put work to the workers. This is a common pattern and
> is something which even for all the project differences, does end up
> kind of common between parts. Projects that follow this model are
> debuggable as a group not too badly.
>
> 5. Where does Swift fit?
>
> This I think has always been a tension point in the community (at least
> since I joined in 2012). Swift is an original 

Re: [openstack-dev] [all][tc] Languages vs. Scope of "OpenStack"

2016-05-19 Thread John Dickinson
summary:
 * Defining the scope of OpenStack projects DOES NOT define the
   languages needed to implement them. The considerations are
   orthogonal.
 * We've already defined OpenStack--it's whatever it takes to
   fulfill its mission statement.

On 19 May 2016, at 6:19, Thierry Carrez wrote:

> Hi everyone,
>
> The discussion on the addition of golang focuses on estimating
> community costs vs. technical benefits, so that the TC can make the
> right call for "OpenStack". From that discussion, we established
> that the costs for cross-project teams (Infra...) seem to be
> workable. There is still significant community fragmentation
> effects as we add another language, creating language-expert silos,
> duplicating efforts, and losing some productivity as some
> needlessly rewrite things in golang. But we seem generally ready to
> accept that cost because we are missing a tool in our toolbelt: a
> language that lets us do such native optimization.
>
> We have a number of projects in OpenStack that are more low-level
> than others and which require such optimization, and for those
> projects (or subprojects) our current mix of language is not
> satisfactory. The Swift team in particular has spent a lot of time
> trying to get the I/O behavior they require with hard disk access
> using Python, and they certainly did not jump on the golang
> bandwagon lightly.
>
> I believe the programming languages you need in OpenStack official
> projects are a function of the scope you define for OpenStack
> official projects. We wouldn't need to have JavaScript in the mix if
> we considered that web interfaces that purely consume OpenStack
> APIs are projects that consume OpenStack, rather than part of
> OpenStack itself.
>
> In the same vein, if we consider lower-level projects (which often
> require such native optimization) as part of "OpenStack", rather
> than as external open source projects that should be integrated by
> "OpenStack", then we need a language like golang in our toolbelt.
> There is basically no point in saying no to golang in OpenStack if
> we need lower-level native optimization in OpenStack: we'll have to
> accept the community cost that comes with such a community scope.

Defining "lower-level" is very hard. Since the Nova API[1] is
listening to a public network interface and coordinating with various
services in a cluster, is it low-level enough to need to consider
optimizations? Does the Nova API require optimization to handle a very
large number of connections using all of the hardware available on a
single server? If Nova is going to eke out every drop of performance
possible from a server, it probably does need to consider all kinds of
"low-level" optimizations.[2]

Because deployers of any OpenStack project want efficient, performant
software, all parts of OpenStack need to consider what it
takes to meet the performance demanded. Most of the time this will not
require rewriting code in a different language (that's almost never
the right answer), but my point is that I believe you're wrong that
defining the scope of OpenStack projects also defines the languages
needed to implement them. The considerations are orthogonal.

[1] substitute your favorite OpenStack project

[2] 
http://highscalability.com/blog/2013/5/13/the-secret-to-10-million-concurrent-connections-the-kernel-i.html


> So the real question we need to answer is... where does OpenStack
> stop, and where does the wider open source community start ? If
> OpenStack is purely an "integration engine", glue code for other
> lower-level technologies like hypervisors, databases, or distributed
> block storage, then the scope is limited, Python should be plenty
> enough, and we don't need to fragment our community. If OpenStack is
> "whatever it takes to reach our mission", then yes we need to add one
> language to cover lower-level/native optimization, because we'll
> need that... and we need to accept the community cost as a
> consequence of that scope choice. Those are the only two options on
> the table.
>
> I'm actually not sure what is the best answer. But I'm convinced we,
> as a community, need to have a clear answer to that. We've been
> avoiding that clear answer until now, creating tension between the
> advocates of an ASF-like collection of tools and the advocates of a
> tighter-integrated "openstack" product. We have created silos and
> specialized areas as we got into the business of developing time-
> series databases or SDNs. As a result, it's not "one community"
> anymore. Should we further encourage that, or should we focus on
> what the core of our mission is, what we have in common, this
> integration engine that binds all those other open source projects
> into one programmable infrastructure solution ?

You said the answer in your question. OpenStack isn't defined as an
integration engine[3]. The definition of OpenStack is whatever it
takes to fulfill our mission[4][5]. I don't mean that as a tautology.
I 

Re: [openstack-dev] [tc] supporting Go

2016-05-16 Thread John Dickinson


On 16 May 2016, at 8:14, Dmitry Tantsur wrote:

> On 05/16/2016 05:09 PM, Ian Cordasco wrote:
>>
>>
>> -Original Message-
>> From: Dmitry Tantsur 
>> Reply: OpenStack Development Mailing List (not for usage questions) 
>> 
>> Date: May 16, 2016 at 09:55:27
>> To: openstack-dev@lists.openstack.org 
>> Subject:  Re: [openstack-dev] [tc] supporting Go
>>
>>> On 05/16/2016 04:35 PM, Adam Young wrote:
 On 05/16/2016 05:23 AM, Dmitry Tantsur wrote:
> On 05/14/2016 03:00 AM, Adam Young wrote:
>> On 05/13/2016 08:21 PM, Dieterly, Deklan wrote:
>>> If we allow Go, then we should also consider allowing JVM based
>>> languages.
>> Nope. Don't get me wrong, I've written more than my fair share of Java
>> in my career, and I like it, and I miss automated refactoring and real
>> threads. I have nothing against Java (I know a lot of you do).
>>
>> Java fills the same niche as Python. We already have one of those, and
>> its very nice (according to John Cleese).
>
> A couple of folks in this thread already stated that the primary
> reason to switch from Python-based languages is the concurrency story.
> JVM solves it and does it in the same manner as Go (at least that's my
> assumption).
>
> (not advocating for JVM, just trying to understand the objection)
>
>>
>> So, what I think we are really saying here is "what is our Native
>> extension story going to be? Is it the traditional native languages, or
>> is it something new that has learned from them?"
>>
>> Go is a complement to Python to fill in the native stuff. The
>> alternative is C or C++. Ok Flapper, or Rust.
>
> C, C++, Rust, yes, I'd call them "native".
>
> A language with a GC and green threads does not fall into "native"
> category for me, rather the same as JVM.

 MOre complex than just that. But Go does not have a VM, just put a lot
 of effort into co-routines without taking context switches. Different
 than green threads.
>>>
>>> Ok, I think we have a different notion of "native" here. For me it's
>>> being with as little magic happening behind the scenes as possible.
>>
>> Have you written a lot of Rust?
>
> Not a lot, but definitely some.
>
>> Rust handles the memory management for you as well. Certainly, you can 
>> determine the lifetime of something and tell the compiler how the underlying 
>> memory is shared, but Rust is far better than C in so much as you should 
>> never be able to write code that doubly frees the same memory unless you're 
>> explicitly using the unsafe features of the language that are infrequently 
>> needed.
>
> I think we're in agreement here, not sure which bit you're arguing against :)
>
>>
>> I'm with Flavio about preferring Rust personally, but I'm not a member of 
>> either of these teams and I understand the fact that most of the code is 
>> already written and has been shown to drastically improve performance in 
>> exactly the places where it's needed. With all of that in mind, I'm in favor 
>> of just agreeing already that Go is okay. I understand the concern that this 
>> will increase cognitive load on some developers and *might* have effects on 
>> our larger community but our community can only grow so long as our software 
>> is usable (performant) and useful (satisfies needs/requirements).
>
> This Rust discussion is a bit offtopic, I was just stating that my notion of 
> "nativeness" of Go is closer to one of JVM, not one of C/C++/Rust.
>
> I guess my main question is whether folks seriously considered 
> PyPy/Cython/etc.

Yes.

http://lists.openstack.org/pipermail/openstack-dev/2016-May/094960.html (which 
also references 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/094549.html and 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/094720.html

--John




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-13 Thread John Dickinson
You're absolutely right. If we can get as good (or even close) to the same 
performance eg with PyPy, then we should absolutely do that! I've had many 
public and private conversations over the last year or so that have that same 
basic message as I've been looking at the ongoing Golang work in the Swift 
community.

In fact, for the last several months I have been working pretty closely with 
some Intel engineers who are dedicated to Python (and specifically PyPy) 
performance improvements. We've been running Swift under PyPy and there's been 
other more preliminary tests with other OpenStack projects. I gave a joint talk 
in Austin about this work: https://www.youtube.com/watch?v=L5sCD7zEENg

Based on that testing, I have a few conclusions:

1) (assuming production-ready stability of PyPy and OpenStack running under it) 
Everyone should be running OpenStack projects under PyPy instead of CPython. 
Put simply, it's just plain faster. There are some issues to work out, though, 
specifically relating to PyPy's GC. Right now, we're looking at a few patches 
that do a better job of socket handling in Swift so that it runs better under 
PyPy. Once these patches land, I really do hope that more people will be using 
PyPy in production.

2) PyPy only helps when you've got a CPU-constrained environment.

3) The Golang targets in Swift are related to effective thread management, 
syscalls, and IO (as Sam described in a few earlier posts in this thread), and 
these are the issues we're facing on the persistent storage layer in Swift.

The conclusion is that there will still be Python code in Swift for a long time 
to come, and there is serious effort underway to make sure we can run that in 
production on PyPy and get some great performance improvements there. However, 
PyPy is not helping us when it comes to the persistent storage layer in Swift. 
For that, we need scalable, lightweight, and fast syscall management and thread 
coordination (again, see Sam's earlier messages). Perhaps we could write some 
Python C extension to get something like that, but (1) the best we'd hope for 
would only be approaching what we'd get with Golang's runtime out of the box 
and (2) we'd still be having this exact same conversation except about C.

So I expect that a year from now (or sooner) Swift deployers will be running 
both Python code with PyPy and compiled Golang code. And they'll have a storage 
system that's a lot faster than is possible today.


--John





On 13 May 2016, at 2:04, Alexey Stupnikov wrote:

> + Agree. It is strange to use another language to address performance issues 
> if you haven't tried to solve those issues using original language's options.
>
> On 05/13/2016 11:53 AM, Fausto Marzi wrote:
>> ++ Brilliant.
>>
>> On Fri, May 13, 2016 at 10:14 AM, Dmitry Tantsur > > wrote:
>>
>>
>> This is pretty subjective, I would say. I personally don't feel Go
>> (especially its approach to error handling) any natural (at least
>> no more than Rust or Scala, for example). If familiarity for
>> Python developers is an argument here, mastering Cython or making
>> OpenStack run on PyPy must be much easier for a random Python
>> developer out there to seriously bump the performance. And it
>> would not require introducing a completely new language to the
>> picture.
>>
>>
>>
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> -- 
> BR, Alexey Stupnikov.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [keystone][oslo][designate][zaqar][nova][swift] using pylibmc instead of python-memcached

2016-05-13 Thread John Dickinson


On 13 May 2016, at 1:14, Steve Martinelli wrote:

> /me gets the ball rolling
>
> Just when python3 support for keystone was looking like a reality, we've
> hit another snag. Apparently there are several issues with python-memcached
> in py3, putting it simply: it loads, but doesn't actually work. I've
> included projects in the subject line that use python-memcached (based on
> codesearch)
>
> Enter pylibmc; apparently it is (almost?) a drop-in replacement, performs
> better, and is more actively maintained.
>
> - Has anyone had success using python-memcached in py3?
> - Is anyone interested in using pylibmc in their project instead of
> python-memcached?
> - Will buy-in from all projects be necessary to proceed for any single
> project?
>
> Open issues like this:
> https://github.com/linsomniac/python-memcached/issues/94 make me sad.
>
> stevemar
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


In Swift, we've got our own memcache client[1] to get around some of the issues 
that were present in python-memcached at the time we needed it. Those issues 
may (or may not) be resolved now, but basically we've been very happy for years 
now with our own memcache client. It's drop-in compatible with python-memcached.

[1] https://github.com/openstack/swift/blob/master/swift/common/memcached.py 
(note the docstring at the top)
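
A hedged usage sketch (argument details may vary between releases;
check the module itself):

    from swift.common.memcached import MemcacheRing

    # takes a list of 'host:port' servers, like python-memcached does
    mc = MemcacheRing(['127.0.0.1:11211'])
    mc.set('some_key', {'cached': 'value'}, time=300)  # values are serialized
    print(mc.get('some_key'))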


--John





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [tc] supporting Go

2016-05-11 Thread John Dickinson


On 11 May 2016, at 13:11, Robert Collins wrote:
>
> So, given that that is the model - why is language part of it? Yes,
> there are minimum overheads to having a given language in CI - we need
> to be able to do robust reliable builds [or accept periodic downtime
> when the internet is not cooperating], and that sets a lower fixed
> cost, varying per language. Right now we support Python, Java,
> Javascript, Ruby in CI (as I understand it - infra focused folk please
> jump in here :)).

+1000

this is what the whole thread should be about

> Here is a straw man list of requirements:
>  - Reliable builds: the ability to build and test without talking to
> the internet at all.
>  - Packagable: the ability to take a source tree for a project, do
> some arbitrary transform and end up with a folder structure that can
> be placed on another machine, with any needed dependencies, and work.
> [Note, this isn't the same as 'packagable in a way that makes Red Hat
> and Canonical and Suse **happy**', but that's something we can be sure
> that those orgs are working on with language providers already ]
>  - FL/OSS
>  - Compatible with ASL v2 source code. [e.g. any compiler doesn't
> taint its output]
>  - Can talk oslo.messaging's message format


The great news is that we don't have to have the straw man--we actually are 
building the real list (and you've hit several of them).

At the infra team meeting this week we talked through a few of these (mostly 
focused on the dependency management issues first), and we've started 
collecting notes on 
https://etherpad.openstack.org/p/golang-infra-issues-to-solve about the basic 
infra things that need to be figured out.

--John






Re: [openstack-dev] Team blogs

2016-05-09 Thread John Dickinson


On 9 May 2016, at 15:53, Monty Taylor wrote:

> On 05/09/2016 05:45 PM, Robert Collins wrote:
>> IIRC mediawiki provides RSS of changes... maybe just using the wiki
>> more would be a good start, and have zero infra costs?
>
> We'd actually like to start using the wiki less, per the most recent
> summit. Also, the wiki currently has new accounts turned off (thanks
> spammers) so if you don't have a wiki account now, you're not getting
> one soon.

Ah! Yikes! First I've heard of this. This will make 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/094026.html more 
difficult if wiki access is restricted. I suppose we could find another 
location, but the wiki is a great place for the community to put stuff.

--John







Re: [openstack-dev] [tc] supporting Go

2016-05-09 Thread John Dickinson
On 9 May 2016, at 13:16, Gregory Haynes wrote:
>
> This is a bit of an aside but I am sure others are wondering the same
> thing - Is there some info (specs/etherpad/ML thread/etc) that has more
> details on the bottleneck you're running in to? Given that the only
> clients of your service are the public facing DNS servers I am now even
> more surprised that you're hitting a python-inherent bottleneck.

In Swift's case, the summary is that it's hard[0] to write a network
service in Python that shuffles data between the network and a block
device (hard drive) and effectively utilizes all of the hardware
available. So far, we've done very well by fork()'ing child processes,
using cooperative concurrency via eventlet, and basic "write more
efficient code" optimizations. However, when it comes down to it,
managing all of the async operations across many cores and many drives
is really hard, and there just isn't a good, efficient interface for
that in Python.
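
To make that concrete, here's a minimal sketch of the pattern described above
(purely illustrative--the port and "protocol" are made up):

    import eventlet
    eventlet.monkey_patch()  # make socket I/O cooperative

    pool = eventlet.GreenPool(1000)

    def handle(conn):
        path = conn.recv(1024).decode().strip()
        # The socket calls above and below yield to other greenthreads, but
        # this open()/read() is plain blocking disk I/O: while it waits on
        # the drive, every other greenthread in the process stalls too.
        with open(path, 'rb') as f:
            conn.sendall(f.read())
        conn.close()

    server = eventlet.listen(('0.0.0.0', 6200))
    while True:
        conn, addr = server.accept()
        pool.spawn_n(handle, conn)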

Initial results from a golang reimplementation of the object server are very
positive[1]. We're not proposing to rewrite Swift
entirely in Golang. Specifically, we're looking at improving object
replication time in Swift. This service must discover what data is on
a drive, talk to other servers in the cluster about what they have,
and coordinate any data sync process that's needed.

[0] Hard, not impossible. Of course, given enough time, we can do
 anything in a Turing-complete language, right? But we're not talking
 about possible, we're talking about efficient tools for the job at
 hand.

[1] http://d.not.mn/python_vs_golang_gets.png and
 http://d.not.mn/python_vs_golang_puts.png


--John








Re: [openstack-dev] [tc] supporting Go

2016-05-09 Thread John Dickinson


On 9 May 2016, at 11:14, Hayes, Graham wrote:

> On 09/05/2016 19:09, Fox, Kevin M wrote:
>> I think you'll find that being able to embed a higher performance language 
>> inside python will be much easier to do for optimizing a function or two 
>> rather than dealing with having to create a separate server, add
>> authentication between the two, and marshal/unmarshal the data to and from
>> it, all to optimize one little piece. Last I heard, you couldn't just embed
>> go in python. C/C++ is pretty easy to do. Maybe I'm wrong and it's possible
>> to embed go now. Someone, please chime in if you know of a good way.
>>
>> Thanks,
>> Kevin
>
> We won't be replacing any particular function, we will be replacing a
> whole service.
>
> There is no auth (or inter-service communications) from this component,
> all it does it query the DB and spit out DNS packets.
>
> I can't talk for what swift are doing, but we have a very targeted scope
> for our Go work.
>
> - Graham

This is exactly the direction Swift is exploring--replacing a part of the whole 
that is already its own daemon and/or network service.

--John




>
>> 
>> From: Hayes, Graham [graham.ha...@hpe.com]
>> Sent: Monday, May 09, 2016 4:33 AM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [tc] supporting Go
>>
>> On 08/05/2016 10:21, Thomas Goirand wrote:
>>> On 05/04/2016 01:29 AM, Hayes, Graham wrote:
>>>> On 03/05/2016 17:03, John Dickinson wrote:
>>>>> TC,
>>>>>
>>>>> In reference to 
>>>>> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html 
>>>>> and Thierry's reply, I'm currently drafting a TC resolution to update 
>>>>> http://governance.openstack.org/resolutions/20150901-programming-languages.html
>>>>>  to include Go as a supported language in OpenStack projects.
>>>>>
>>>>> As a starting point, what would you like to see addressed in the document 
>>>>> I'm drafting?
>>>>>
>>>>> --John
>>>>>
>>>>>
>>>>>
>>>>
>>>> Great - I was about to write a thread like this :)
>>>>
>>>> Designate is looking to move a single component of ours to Go - and we
>>>> were wondering what was the best way to do it.
>>
>>> We discussed this during the summit. You told me that the issue
>>> was a piece of code that needed optimization, to which I replied that
>>> a C++ .so extension in a Python module is probably what you
>>> are looking for (with the advice of not using CFFI, which sometimes
>>> breaks things in distros).
>>>
>>> Did you think about this other possibility, and did you discuss it with
>>> your team?
>>
>> We had a brief discussion about it, and we were going to try a new POC in
>> C/C++ to validate it, but then this thread (and related TC policy) were
>> proposed.
>>
>> If Golang is going to be a supported language, we would much rather
>> stick with one of the official OpenStack languages that suits our
>> use case instead of getting an exemption for another similar language.
>>
>> When we spoke at the summit, I was under the impression that the feature
>> branch in swift was not going to be merged to master, and we would have
>> to get an exemption from the TC anyway - which we could have used to get
>> C / C++.
>>
>> The team also much preferred the idea of Golang - we do not have much
>> C++ expertise in the Designate dev team, which would slow down the
>> development cycle for us.
>>
>> -- Graham
>>
>>> At the Linux distribution level, the main issue with Go is that it
>>> (still) doesn't support the concept of shared libraries. We see this as a
>>> bug, rather than a feature. As a consequence, when a library upgrades, the
>>> release team has to trigger rebuilds for each and every reverse
>>> dependency. As the amount of Go software increases over time, it
>>> becomes less and less manageable this way (and it may potentially be a
>>> security patching disaster in Stable). I've heard that upstream for
>>> Golang was working on implementing shared libs, but I have no idea what
>>> the status is. Does anyone know?
>>>
>>> Cheers,
>>>
>>> Thomas Goirand (zigo)
>>>
>>>

[openstack-dev] [swift] specs are dead. long live ideas

2016-05-04 Thread John Dickinson
If you're working on an idea for Swift, you've probably already spent a lot of 
time thinking deeply about it. Working with other people in the community to 
further develop your idea is wonderful, and by the time we're all reviewing the 
code, it's really important that we all think deeply about the problem and 
proposed implementation. The challenge is figuring out how to share this 
information so we can all benefit from the time you've already spent thinking 
about it. When we're all in the same room, it's easy--we just grab a whiteboard 
and take some time. But when we're globally distributed, that's hard.

Previously, we've tried to use the swift-specs repo as a place to share this 
"brain dump" and collaborate on an idea. However, specs-as-code-review has some 
problems:
* spec review distracts from code review
* spec review devolves into bikeshedding at the earliest opportunity
* typos are not what's important in a spec
* specs take a long time to land (in part because of the above)
* once specs have landed, it becomes much harder to ask questions and have 
a conversation about the idea
* where there's some disagreement, no progress is made

The swift-specs repo has become a wasteland of partial ideas where hope goes to 
die.

Moving forward, we want to facilitate communication without encouraging 
despair. We will be stopping any further work in the swift-specs repo. It is, 
as of now, officially dead.

To replace the "shared brain dump" area, we have 
https://wiki.openstack.org/wiki/Swift/ideas where you can add

 Topic -- link to your document

That's it.

Currently-open patches to the swift-specs repo have been landed so they can be 
accessed through the http://specs.openstack.org site. If you write down your 
thoughts and want to share them so others in the community can help you write 
and merge it, add a line to the wiki page. It doesn't matter where you write 
down your thoughts. Use whatever you want. Etherpads, wikis, google docs, slide 
decks, anything. Use what works for you, and please include some contact info 
in your doc (IRC nick, email, etc). I'm looking forward to reading what you're 
thinking about and working with you to implement it.

--John






Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread John Dickinson


On 3 May 2016, at 14:50, Doug Hellmann wrote:

> Excerpts from John Dickinson's message of 2016-05-03 13:01:28 -0700:
>>
>> On 3 May 2016, at 12:19, Monty Taylor wrote:
>>
>>> On 05/03/2016 01:45 PM, Michael Krotscheck wrote:
>>>> On Tue, May 3, 2016 at 9:03 AM John Dickinson <m...@not.mn
>>>> <mailto:m...@not.mn>> wrote:
>>>>
>>>>
>>>> As a starting point, what would you like to see addressed in the
>>>> document I'm drafting?
>>>>
>>>>
>>>> I'm going through this project with JavaScript right now. Here's some of
>>>> the things I've had to address:
>>>>
>>>> - Common language formatting rules (ensure that a pep8-like thing exists).
>>>> - Mirroring dependencies?
>>>> - Building Documentation
>>>
>>> Mirroring and building are the ones that we'll definitely want to work 
>>> together on in terms of figuring out how to support. go get being able to 
>>> point at any git repo for depends is neat - but it increases the amount of 
>>> internet surface-area in the gate. Last time I looked (last year) there 
>>> were options for doing just the fetch part of go get separate from the 
>>> build part.
>>>
>>> In any case, as much info as you can get about the mechanics of downloading 
>>> dependencies, especially as it relates to pre-caching or pointing build 
>>> systems at local mirrors of things holistically rather than by modifying 
>>> the source code would be useful. We've gone through a couple of design 
>>> iterations on javascript support as we've dived in further.
>>
>> Are these the sort of things that need to be in a resolution saying that 
>> it's ok to write code in Golang? I'll definitely agree that these questions 
>> are important, and I don't have the answers yet (although I expect we will 
>> by the time any Golang code lands in Swift). We've already got the 
>> Consistent Testing Interface doc[1] which talks about having tests, a coding 
>> style, and docs (amongst other things). Does a resolution about Golang being 
>> acceptable need to describe dependency management, build tooling, and CI?
>
> There are separate interfaces described there for Python and JavaScript.
> I think it makes sense to start documenting the expected interface for
> projects written in Go, for the same reason that we have the others, and
> I don't think we would want to say "Go is fine" until we at least have a
> start on that documentation -- otherwise we have a gap where projects
> may do whatever they want, and we have to work to get them back into
> sync.
>
> Doug
>

Yeah, I see that. Can you help me come up with that list? I honestly don't know 
the "right" way to do everything in Go. These are some of the things that need 
to be sussed out over the next several months (see the original email).

I've proposed my initial draft to https://review.openstack.org/#/c/312267/. I'd 
be happy if you pushed over that or had a follow-on patch to help describe the 
interfaces like with JS and Python.


--John







Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread John Dickinson


On 3 May 2016, at 12:19, Monty Taylor wrote:

> On 05/03/2016 01:45 PM, Michael Krotscheck wrote:
>> On Tue, May 3, 2016 at 9:03 AM John Dickinson <m...@not.mn
>> <mailto:m...@not.mn>> wrote:
>>
>>
>> As a starting point, what would you like to see addressed in the
>> document I'm drafting?
>>
>>
>> I'm going through this project with JavaScript right now. Here's some of
>> the things I've had to address:
>>
>> - Common language formatting rules (ensure that a pep8-like thing exists).
>> - Mirroring dependencies?
>> - Building Documentation
>
> Mirroring and building are the ones that we'll definitely want to work 
> together on in terms of figuring out how to support. go get being able to 
> point at any git repo for depends is neat - but it increases the amount of 
> internet surface-area in the gate. Last time I looked (last year) there were 
> options for doing just the fetch part of go get separate from the build part.
>
> In any case, as much info as you can get about the mechanics of downloading 
> dependencies, especially as it relates to pre-caching or pointing build 
> systems at local mirrors of things holistically rather than by modifying the 
> source code would be useful. We've gone through a couple of design iterations 
> on javascript support as we've dived in further.

Are these the sort of things that need to be in a resolution saying that it's 
ok to write code in Golang? I'll definitely agree that these questions are 
important, and I don't have the answers yet (although I expect we will by the 
time any Golang code lands in Swift). We've already got the Consistent Testing 
Interface doc[1] which talks about having tests, a coding style, and docs 
(amongst other things). Does a resolution about Golang being acceptable need to 
describe dependency management, build tooling, and CI?

--John




[1] http://governance.openstack.org/reference/project-testing-interface.html



Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread John Dickinson
That's an interesting point. I'm not very familiar with Golang itself yet, and 
I haven't yet had to manage any Golang projects in prod. These sorts of 
questions are great!

If a distro is distributing pre-compiled binaries, isn't the compatibility 
issue up to the distros? OpenStack is not distributing binaries (or even distro 
packages!), so while it's an important question, how does it affect the 
question of golang being an ok language in which to write openstack source code?

--John




On 3 May 2016, at 9:16, Rayson Ho wrote:

> I like Go! However, Go does not offer binary compatibility between point
> releases. For those who install from source it may not be a big issue, but
> for commercial distributions that pre-package & pre-compile everything,
> the compiled Go libs won't be compatible with old/new releases of the
> Go compiler that the user may want to install on their systems.
>
> Rayson
>
> ==
> Open Grid Scheduler - The Official Open Source Grid Engine
> http://gridscheduler.sourceforge.net/
> http://gridscheduler.sourceforge.net/GridEngine/GridEngineCloud.html
>
>
>
>
> On Tue, May 3, 2016 at 11:58 AM, John Dickinson <m...@not.mn> wrote:
>
>> TC,
>>
>> In reference to
>> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html
>> and Thierry's reply, I'm currently drafting a TC resolution to update
>> http://governance.openstack.org/resolutions/20150901-programming-languages.html
>> to include Go as a supported language in OpenStack projects.
>>
>> As a starting point, what would you like to see addressed in the document
>> I'm drafting?
>>
>> --John
>>
>>
>>
>>




Re: [openstack-dev] [tc] supporting Go

2016-05-03 Thread John Dickinson
That's a good question, and I'll be sure to address it. Thanks.

In the context of "golang code in swift", any discussion around a "goslo" 
library would be up to the oslo team, I think. The proposed functionality that 
would be in golang in swift does not currently depend on any oslo library. In 
general, if the TC supports Go, I'd think it wouldn't be any different than the 
question of "where's the oslo libraries for javascript [which is already an 
approved language]?"

--John




On 3 May 2016, at 9:14, Tim Bell wrote:

> John,
>
> How would Oslo-like functionality be included? Would the aim be to produce 
> equivalent libraries?
>
> Tim
>
>
>
>
> On 03/05/16 17:58, "John Dickinson" <m...@not.mn> wrote:
>
>> TC,
>>
>> In reference to 
>> http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html and 
>> Thierry's reply, I'm currently drafting a TC resolution to update 
>> http://governance.openstack.org/resolutions/20150901-programming-languages.html
>>  to include Go as a supported language in OpenStack projects.
>>
>> As a starting point, what would you like to see addressed in the document 
>> I'm drafting?
>>
>> --John
>>
>>
>>




[openstack-dev] [tc] supporting Go

2016-05-03 Thread John Dickinson
TC,

In reference to 
http://lists.openstack.org/pipermail/openstack-dev/2016-May/093680.html and 
Thierry's reply, I'm currently drafting a TC resolution to update 
http://governance.openstack.org/resolutions/20150901-programming-languages.html 
to include Go as a supported language in OpenStack projects.

As a starting point, what would you like to see addressed in the document I'm 
drafting?

--John







[openstack-dev] [Swift] going forward

2016-05-02 Thread John Dickinson
At the summit last week, the Swift community spent a lot of time discussing the 
feature/hummingbird branch. (For those who don't know, the feature/hummingbird 
branch contains some parts of Swift which have been reimplemented in Go.)

As a result of that summit discussion, we have a plan and a goal: we will 
integrate a subset of the current hummingbird work into Swift's master branch, 
and the future will contain both Python and Go code. We are starting with the 
object server and replication layer.

The high-level plan is below. I've included some general time estimates, but, 
as with all things open-source, these estimates are just that. This work will 
be done when it's done.

Our current short-term focus for Swift is to integrate the feature/crypto work 
to provide at-rest encryption. This crypto work is nearly ready to merge, and 
it is the community focus until it's done. While that crypto work is finishing 
up, we will be defining the minimum deployable functionality from hummingbird 
that is necessary before it can land on master. I expect the crypto work to be 
finished in the next six to eight weeks.

After feature/crypto is merged, as a community we will be implementing any 
missing things identified as necessary to merge. There will be some base 
functionality that needs to be implemented, and there will be a lot of things 
like docs, tests, and deployability work.

Our goal is to have a reasonably ready-to-merge feature branch by the 
Barcelona summit. Shortly after Barcelona, we will begin the actual merge of 
the Go code to master.

This work, this plan, and these goals do NOT mean that we are completely 
rewriting Swift in Go. Python will exist in Swift's codebase for a long time to 
come. Our goal is to keep doing the same thing we've been doing for years: 
focus on performance and user needs to give the best possible object storage 
system in the world.


--John






Re: [openstack-dev] [Swift] Erasure coding and geo replication

2016-04-20 Thread John Dickinson
There's no significant change with the global EC clusters story in the 2.7 
release. That's something we're discussing next week at the summit.
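
For anyone who wants to poke at the parity-only decode case discussed in the
quoted thread below, it's easy to exercise directly with PyECLib. A minimal
sketch (the k/m values and ec_type here are illustrative):

    from pyeclib.ec_iface import ECDriver

    # 4 data + 4 parity fragments, i.e. num_parity_frags == num_data_frags.
    ec = ECDriver(k=4, m=4, ec_type='liberasurecode_rs_vand')

    data = b'some object data' * 1024
    frags = ec.encode(data)  # k data fragments followed by m parity fragments

    # Hand the decoder only the parity fragments--the case that failed
    # before the liberasurecode fix referenced in the thread below.
    assert ec.decode(frags[4:]) == data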

--John



On 19 Apr 2016, at 22:47, Mark Kirkwood wrote:

> Hi,
>
> Has the release of 2.7 significantly changed the assessment here?
>
> Thanks
>
> Mark
>
> On 15/02/16 23:29, Kota TSUYUZAKI wrote:
>> Hello Mark,
>>
>> AFAIK, there are a few reasons why erasure code + geo replication is still
>> a work in progress.
>>
>>> and expect to survive a region outage...
>>>
>>> With that in mind I did some experiments (Liberty swift) and it looks to me
>>> like if you have:
>>>
>>> - num_data_frags < num_nodes in (smallest) region
>>>
>>> and:
>>>
>>> - num_parity_frags = num_data_frags
>>>
>>> then having a region fail does not result in service outage.
>>
>> Good point, but note that PyECLib v1.0.7 (pinned to Kilo/Liberty stable)
>> still has a problem where it cannot decode the original data when all of
>> the fragments fed to the decoder are parity frags[1]. (i.e. if you set
>> num_parity_frags = num_data_frags and only parity fragments reach the
>> proxy for a GET request, it will fail at decoding.) The problem was
>> already resolved in PyECLib/liberasurecode on the master branch, and
>> current Swift master has a PyECLib>=1.0.7 dependency, so if you plan to
>> use the newest Swift, it may not be an issue.
>>
>> In the Swift perspective, I think we need more tests/discussion for geo
>> replication around write/read affinity[2], which is the geo replication
>> mechanism in Swift itself, and around performance.
>>
>> For write/read affinity: to keep the implementation simple, we didn't
>> consider affinity control until EC landed in Swift master[3], so I think
>> it's time to make sure how we can use affinity control with EC, but that
>> work is not done yet.
>>
>> From the performance perspective, in my experiments more parity fragments
>> cause significant performance degradation[4]. To prevent that degradation,
>> I am working on a spec which makes duplicated copies of data/parity
>> fragments and spreads them out across geo regions.
>>
>> To summarize, the work is not done yet, but discussion of and contributions
>> to EC + geo replication are welcome anytime, IMO.
>>
>> Thanks,
>> Kota
>>
>> 1: https://bitbucket.org/tsg-/liberasurecode/commits/a01b1818c874a65d1d1fb8f11ea441e9d3e18771
>> 2: http://docs.openstack.org/developer/swift/admin_guide.html#geographically-distributed-clusters
>> 3: http://docs.openstack.org/developer/swift/overview_erasure_code.html#region-support
>> 4: https://specs.openstack.org/openstack/swift-specs/specs/in_progress/global_ec_cluster.html
>>
>>
>>
>> (2016/02/15 18:00), Mark Kirkwood wrote:
>>> After looking at:
>>>
>>> https://www.youtube.com/watch?v=9YHvYkcse-k
>>>
>>> I have a question (that follows on from Bruno's) about using erasure coding 
>>> with geo replication.
>>>
>>> Now the example given to show why you could/should not use erasure coding 
>>> with geo replication is somewhat flawed as it is immediately clear that you 
>>> cannot set:
>>>
>>> - num_data_frags > num_devices (or nodes) in a region
>>>
>>> and expect to survive a region outage...
>>>
>>> With that in mind I did some experiments (Liberty swift) and it looks to me 
>>> like if you have:
>>>
>>> - num_data_frags < num_nodes in (smallest) region
>>>
>>> and:
>>>
>>> - num_parity_frags = num_data_frags
>>>
>>>
>>> then having a region fail does not result in service outage.
>>>
>>> So my real question is - it looks like it *is* possible to use erasure 
>>> coding in geo replicated situations - however I may well be missing 
>>> something significant, so I'd love some clarification here [1]!
>>>
>>> Cheers
>>>
>>> Mark
>>>
>>> [1] Reduction in disk usage and net traffic looks attractive
>>>



