RE: Follow up on OKD 4

2019-07-25 Thread Fox, Kevin M
While "just works" is a great goal, and its relatively easy to accomplish in 
the nice, virtualized world of vm's, I've found it is often not the case in the 
dirty realm of real physical hardware. Sometimes you must rebuild/replace a 
kernel or add a kernel module to get things to actually work. If you don't 
support that, Its going to be a problem for many a site.

Thanks,
Kevin

From: dev-boun...@lists.openshift.redhat.com 
[dev-boun...@lists.openshift.redhat.com] on behalf of Josh Berkus 
[jber...@redhat.com]
Sent: Thursday, July 25, 2019 11:23 AM
To: Clayton Coleman; Aleksandar Lazic
Cc: users; dev
Subject: Re: Follow up on OKD 4

On 7/25/19 6:51 AM, Clayton Coleman wrote:
> 1. Openshift 4 isn’t flexible in the ways people want (i.e. you want to
> add an rpm to the OS to get a kernel module, or you want to ship a
> complex set of config and managing things with mcd looks too hard)
> 2. You want to build and maintain these things yourself, so the “just
> works” mindset doesn’t appeal.

FWIW, 2.5 years ago when we were exploring having a specific
Atomic+OpenShift distro for Kubernetes, we did a straw poll of Fedora
Cloud users.  We found that 2/3 of respondents wanted a complete package
(that is, OKD+Atomic) that installed and "just worked" out of the box,
and far fewer folks wanted to hack their own.  We never had such a
release due to insufficient engineering resources (and getting stuck
behind the complete rewrite of the Fedora build pipelines), but that was
the original goal.

Things may have changed in the interim, but I think that a broad user
survey would still find a strong audience for a "just works" distro in
Fedora.

--
Josh Berkus
Kubernetes Community
Red Hat OSAS

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev



RE: Follow up on OKD 4

2019-07-25 Thread Fox, Kevin M
Yeah, there is the question of what it is now, and the question of what it potentially should be. I'm asking more from a where-should-it-go standpoint.

Right now, k8s distros are very much in the early Linux distro days. Here's how to get a base OS going; OK, now you're on your own to deploy anything on it. Download tarball, build it, install it, write init script, etc. If you look at the total package list in a modern Linux distro, the OS-level stuff is usually a very small percentage of the software in the distro.

These days we've moved on so far from "the distro is a kernel" that folks even talk about running a Red Hat, a Fedora, or a CentOS container. That's really #4-level stuff only.

OLM is like yum: a tool to install stuff. So it's kind of a #3 tool. It's the software packaging itself (mysql, apache, etc.), which is also part of the distro, that is mostly missing, I think. A container is like an RPM; one way to define a Linux distro is a collection of prebuilt/tested/supported RPMs for common software.

In the Linux OS today, you can start from "I want to deploy a mysql server", and since I trust Red Hat to provide good software, you go and yum install mysql. I could imagine OKD similarly as a collection of software to deploy on top of a k8s, where there is an optional, self-hosting OS part (1-3), the same way Fedora/CentOS can be used purely at #4 with containers, or as a full-blown OS+workloads.
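
To make the analogy concrete, a rough sketch of the OLM equivalent of "yum install mysql" (the catalog and package names below are hypothetical) is creating a Subscription against a trusted catalog:

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: mysql-operator
    namespace: operators
  spec:
    channel: stable                  # like tracking a yum repo's release stream
    name: mysql-operator             # the package to install from the catalog
    source: community-catalog        # hypothetical CatalogSource, like a yum repo definition
    sourceNamespace: olm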

Sure, you can let the community build all their own stuff. That's possible in Linux distros today too and shouldn't be blocked. But it misses the point of why folks deploy software from Linux distros over getting it from the source. I prefer to run mysql from Red Hat as opposed to upstream because of all the extras the distro packagers provide.

Not trying to shortchange all the hard work in getting a k8s going. OKD's doing an amazing job at that. That's really important too. But so is all the distro work around software packaging, and that's still much more in its infancy, I think. We're still mostly at the point where we're debating whether that's the end user's problem.

The package management tools are coming around nicely, but not so much yet the 
distro packages. How do we get a k8s distro of this form going? Is that in the 
general scope of OKD, or should there be a whole new project just for that?

The Red Hat container catalog is a good start too, but we need to be thinking all the way up to the k8s level.

Should it be "okd k8s distro" or "fedora k8s distro" or something else?

Thanks,
Kevin


From: Clayton Coleman [ccole...@redhat.com]
Sent: Wednesday, July 24, 2019 10:31 AM
To: Fox, Kevin M
Cc: Michael Gugino; users; dev
Subject: Re: Follow up on OKD 4



On Wed, Jul 24, 2019 at 12:45 PM Fox, Kevin M <kevin@pnnl.gov> wrote:
Ah, this raises an interesting discussion I've been wanting to have for a while.

There are potentially lots of things you could call a distro.

Most Linux distros are made up of several layers:
1. boot loader - components to get the kernel running
2. kernel - provides a place to run higher level software
3. os-level services - singletons needed to really call the OS an OS (dhcp, systemd, dbus, etc.)
4. prebuilt/tested, generic software/services - workload (mysql, apache, 
firefox, gnome, etc)

For the sake of discussion, let's map these layers a bit, and assume that the OpenShift-specific components can be added to a vanilla Kubernetes. We then have:

1. linux distro (could be k8s specific and micro)
2. kubernetes control plane & kubelets
3. openshift components (auth, ingress, cicd/etc)
4. ?  (operators + containers, helm + containers, etc)

OpenShift used to be defined as being 1-3.

As things like AKS/EKS/GKE make it easy to deploy 1-2, maybe OpenShift should really become modular so it focuses more on 3 and 4.

That's interesting that you'd say that.  I think kube today is like "install a 
kernel with bash and serial port magic", whereas OpenShift 4 is "here's a 
compose, an installer, a disk formatter, yum, yum repos, lifecycle, glibc, 
optional packages, and sys utils".  I don't know if you can extend the analogy 
there (if you want to use EKS, you're effectively running on someone's VPS, but 
you can only use their distro and you can't change anything), but definitely a 
good debate.


As for having something that provides a #1 that is super tiny/easy to maintain so that you can do #2 on top easily, I'm for that as well, but it should be decoupled from 3-4, I think. Should you be able to switch out your #1 for someone else's #1 while keeping the rest? That's the question from earlier in the thread.

I think the analogy I've been using is that OpenShift is a proper distro in the sense that you don't take someone's random kernel and use it with someone else's random glibc and a third party's random gcc, but you might not care about the stuff on top.  The things i

RE: Follow up on OKD 4

2019-07-24 Thread Fox, Kevin M
Ah, this raises an interesting discussion I've been wanting to have for a while.

There are potentially lots of things you could call a distro.

Most Linux distros are made up of several layers:
1. boot loader - components to get the kernel running
2. kernel - provides a place to run higher level software
3. os-level services - singletons needed to really call the OS an OS (dhcp, systemd, dbus, etc.)
4. prebuilt/tested, generic software/services - workload (mysql, apache, 
firefox, gnome, etc)

For the sake of discussion, let's map these layers a bit, and assume that the OpenShift-specific components can be added to a vanilla Kubernetes. We then have:

1. linux distro (could be k8s specific and micro)
2. kubernetes control plane & kubelets
3. openshift components (auth, ingress, cicd/etc)
4. ?  (operators + containers, helm + containers, etc)

OpenShift used to be defined as being 1-3.

As things like AKS/EKS/GKE make it easy to deploy 1-2, maybe OpenShift should really become modular so it focuses more on 3 and 4.

As for having something that provides a #1 that is super tiny/easy to maintain so that you can do #2 on top easily, I'm for that as well, but it should be decoupled from 3-4, I think. Should you be able to switch out your #1 for someone else's #1 while keeping the rest? That's the question from earlier in the thread.

#4 I think is very important, and while the operator framework is starting to make some inroads on it, there is still a lot of work to do to make an equivalent of the 'Red Hat' distro of software that runs on k8s.

A lot of focus has been on making a distro out of k8s, but it's really mostly been at the level of "how do I get a kernel booted/upgraded". I think the more important distro thing, #4, is how you make a distribution of prebuilt, easy-to-install software to run on top of k8s. Red Hat's distro is really 99% userspace and a bit of getting the thing booted. Its value is in having a suite of prebuilt, tested, stable, and easily installable/upgradable software with a team of humans that can provide support for it. The kernel/bootloader part is really just a means to enable #4. No one installs a kernel/OS just to get a kernel. This part is currently lacking. Where is the equivalent of Red Hat/CentOS/Fedora for #4?

In the context of OKD, which of these layers is OKD focused on?

Thanks,
Kevin


From: dev-boun...@lists.openshift.redhat.com 
[dev-boun...@lists.openshift.redhat.com] on behalf of Clayton Coleman 
[ccole...@redhat.com]
Sent: Wednesday, July 24, 2019 9:04 AM
To: Michael Gugino
Cc: users; dev
Subject: Re: Follow up on OKD 4




On Wed, Jul 24, 2019 at 10:40 AM Michael Gugino <mgug...@redhat.com> wrote:
I tried FCoS prior to the release by using the assembler on github.
Too much secret sauce in how to actually construct an image.  I
thought atomic was much more polished; not really sure what the
value-add of ignition is in this use case.  Just give me a way to build
simple image pipelines and I don't need ignition.  To that end, there
should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
supporting the mcd-once-from to parse ignition on RHEL, we don't need
ignition to actually install okd.  To me, it seems FCoS was created
just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
actually solves anyone's needs relative to atomic.  It feels like we
jumped the shark on this one.

That’s feedback that’s probably something you should share in the fcos forums as well.  I will say that I find the OCP + RHEL experience unsatisfying, and it doesn't truly live up to what RHCOS+OCP can do (since it lacks the key features like ignition and immutable hosts).  Are you saying you'd prefer to have more of a "DIY kube distro" than the "highly opinionated, totally integrated OKD" proposal?  I think that's a good question the community should get a chance to weigh in on (in my original email that was the implicit question - do you want something that looks like OCP4, or something that is completely different).


I'd like to see OKD be distro-independent.  Obviously Fedora should be
our primary target (I'd argue Fedora over FCoS), but I think it should
be true upstream software in the sense that apache2 http server is
upstream and not distro specific.  To this end, perhaps it makes sense
to consume k/k instead of openshift/origin for okd.  OKD should be
free to do wild and crazy things independently of the enterprise
product.  Perhaps there's a use case for treating k/k vs
openshift/origin as a swappable base layer.

That’s an even more dramatic change from OKD than it was in 3.x.  I’d be happy to see people excited about reusing cvo / mcd and being able to mix and match, but most of the things here would be a huge investment to build.  In my original email I might call this the “I want to build my own distro" - if that's what people want

RE: Follow up on OKD 4

2019-07-24 Thread Fox, Kevin M
So, last I heard, OpenShift was starting to modularize so it could load the OpenShift parts as extensions to the kube-apiserver? Has this been completed? Maybe the idea below of being able to deploy vanilla k8s is workable, as the OpenShift parts could easily be added on top?

Thanks,
Kevin

From: dev-boun...@lists.openshift.redhat.com 
[dev-boun...@lists.openshift.redhat.com] on behalf of Michael Gugino 
[mgug...@redhat.com]
Sent: Wednesday, July 24, 2019 7:40 AM
To: Clayton Coleman
Cc: users; dev
Subject: Re: Follow up on OKD 4

I tried FCoS prior to the release by using the assembler on github.
Too much secret sauce in how to actually construct an image.  I
thought atomic was much more polished; not really sure what the
value-add of ignition is in this use case.  Just give me a way to build
simple image pipelines and I don't need ignition.  To that end, there
should be an okd 'spin' of FCoS IMO.  Atomic was dead simple to build
ostree repos for.  I'd prefer to have ignition be opt-in.  Since we're
supporting the mcd-once-from to parse ignition on RHEL, we don't need
ignition to actually install okd.  To me, it seems FCoS was created
just to have a more open version of RHEL CoreOS, and I'm not sure FCoS
actually solves anyone's needs relative to atomic.  It feels like we
jumped the shark on this one.

I'd like to see OKD be distro-independent.  Obviously Fedora should be
our primary target (I'd argue Fedora over FCoS), but I think it should
be true upstream software in the sense that apache2 http server is
upstream and not distro specific.  To this end, perhaps it makes sense
to consume k/k instead of openshift/origin for okd.  OKD should be
free to do wild and crazy things independently of the enterprise
product.  Perhaps there's a use case for treating k/k vs
openshift/origin as a swappable base layer.

It would be nice to have a more native kubernetes place to develop our
components against so we can upstream them, or otherwise just build a
solid community around how we think kubernetes should be deployed and
consumed.  Similar to how Fedora has a package repository, we should
have a Kubernetes component repository (I realize operatorhub fulfills
some of this, but I'm talking about a place for OLM and things like
MCD to live).

I think we could integrate with existing package managers via a
'repo-in-a-container' type strategy for those not using ostree.

As far as slack vs IRC, I vote IRC or any free software solution (but
my preference is IRC because it's simple and I like it).

On Sun, Jul 21, 2019 at 12:28 PM Clayton Coleman <ccole...@redhat.com> wrote:
>
>
>
> On Sat, Jul 20, 2019 at 12:40 PM Justin Cook  wrote:
>>
>> Once upon a time Freenode #openshift-dev was vibrant with loads of activity 
>> and publicly available logs. I jumped in asked questions and Red Hatters 
>> came from the woodwork and some amazing work was done.
>>
>> Perfect.
>>
>> Slack not so much. Since Monday there have been three comments with two 
>> reply threads. All this with 524 people. Crickets.
>>
>> Please explain how this is better. I’d really love to know why IRC ceased. 
>> It worked and worked brilliantly.
>
>
> Is your concern about volume or location (irc vs slack)?
>
> Re volume: It should be relatively easy to move some common discussion types 
> into the #openshift-dev slack channel (especially triage / general QA) that 
> might be distributed to other various slack channels today (both private and 
> public), and I can take the follow up to look into that.  Some of the volume 
> that was previously in IRC moved to these slack channels, but they're not 
> anything private (just convenient).
>
> Re location:  I don't know how many people want to go back to IRC from slack, 
> but that's a fairly easy survey to do here if someone can volunteer to drive 
> that, and I can run the same one internally.  Some of it is inertia - people 
> have to be in slack sig-* channels - and some of it is preference (in that 
> IRC is an inferior experience for long running communication).
>
>>
>>
>> There are mentions of sigs and bits and pieces, but absolutely no progress. 
>> I fail to see why anyone would want to regress. OCP4 may be brilliant, but as
>> I said in a private email, without upstream there is no culture or insurance 
>> we’ve come to love from decades of heart and soul.
>>
>> Ladies and gentlemen, this is essentially getting to the point the community 
>> is being abandoned. Man years of work acknowledged with the roadmap pulled 
>> out from under us.
>
>
> I don't think that's a fair characterization, but I understand why you feel 
> that way and we are working to get the 4.x work moving.  The FCoS team as 
> mentioned just released their first preview last week, I've been working with 
> Diane and others to identify who on the team is going to take point on the 
> design work, and there's a draft in flight that I saw yesterday.  Every 
> component of OKD4 *besides* the FCoS integration is 

RE: Docker level for building 3.11

2019-02-26 Thread Fox, Kevin M
That would be easiest, I think. docker and buildah don't share an image directory, so one can't see the other's images. I think you can use skopeo to copy images from one to the other if you wanted to try to go without a local registry.
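
A rough sketch (image names are placeholders): skopeo can copy straight out of the Docker daemon's store into the containers-storage location that buildah reads, or into a local registry:

  # docker daemon store -> buildah/containers-storage
  skopeo copy docker-daemon:myimage:latest containers-storage:myimage:latest

  # docker daemon store -> a local registry
  skopeo copy docker-daemon:myimage:latest docker://localhost:5000/myimage:latest --dest-tls-verify=false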

From: Neale Ferguson [ne...@sinenomine.net]
Sent: Tuesday, February 26, 2019 1:07 PM
To: Fox, Kevin M; Adam Kaplan
Cc: Openshift
Subject: Re: Docker level for building 3.11

Thanks. I have local images I wish to use for the build, so I assume I will 
need a local registry up and running.


you don't need to erase docker. buildah and docker can coexist.

Thanks. buildah-1.5.2 appears to be the latest. So I need to:


  1.  yum erase docker
  2.  yum install buildah

What provides the docker daemon?

Neale


This is a part of the multistage build syntax introduced in Docker 17.05 [1]. 
This is available through the centos-extras repo, and requires you to uninstall 
any other installations of docker.

I recommend using buildah instead [3].

[1] https://docs.docker.com/develop/develop-images/multistage-build
[2] https://docs.docker.com/install/linux/docker-ce/centos/
[3] https://github.com/containers/buildah
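
For reference, a minimal sketch of the multistage syntax that needs Docker 17.05+ (or buildah); the image and file names are hypothetical:

  # build stage: compile in a full toolchain image
  FROM golang:1.11 AS builder
  WORKDIR /src
  COPY main.go .
  RUN go build -o /app main.go

  # final stage: copy only the artifact into a small runtime image
  FROM centos:7
  COPY --from=builder /app /usr/local/bin/app
  CMD ["app"]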
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


RE: Docker level for building 3.11

2019-02-26 Thread Fox, Kevin M
you don't need to erase docker. buildah and docker can coexist.

From: dev-boun...@lists.openshift.redhat.com 
[dev-boun...@lists.openshift.redhat.com] on behalf of Neale Ferguson 
[ne...@sinenomine.net]
Sent: Tuesday, February 26, 2019 12:46 PM
To: Adam Kaplan
Cc: Openshift
Subject: Re: Docker level for building 3.11

Thanks. buildah-1.5.2 appears to be the latest. So I need to:


  1.  yum erase docker
  2.  yum install buildah

What provides the docker daemon?

Neale


This is a part of the multistage build syntax introduced in Docker 17.05 [1]. 
This is available through the centos-extras repo, and requires you to uninstall 
any other installations of docker.

I recommend using buildah instead [3].

[1] https://docs.docker.com/develop/develop-images/multistage-build
[2] https://docs.docker.com/install/linux/docker-ce/centos/
[3] https://github.com/containers/buildah
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


RE: Is Docker enterprise version subscription required for Openshift 3.7

2018-10-17 Thread Fox, Kevin M
Buildah supports multistage builds. Could that be used to do the builds?
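
A sketch, assuming a Dockerfile using the multistage syntax sits in the current directory (the tag is hypothetical); buildah builds it without a Docker daemon:

  buildah bud -t fabric-peer:latest .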

Thanks,
Kevin

From: dev-boun...@lists.openshift.redhat.com 
[dev-boun...@lists.openshift.redhat.com] on behalf of Santosh Kumar30 
[sk00546...@techmahindra.com]
Sent: Wednesday, October 17, 2018 10:07 AM
To: Mark Wagner; Jeremy Eder
Cc: dev@lists.openshift.redhat.com
Subject: RE: Is Docker enterprise version subscription required for Openshift 
3.7


Are you saying that we require Docker 17 or later for Hyperledger Fabric image deployment?
If yes, we would definitely require an additional Docker Enterprise Edition subscription to deploy it on OpenShift 3.7; is this assumption correct?

Regards,
Santosh Kumar

From: Mark Wagner 
Sent: Wednesday, October 17, 2018 8:18 PM
To: Jeremy Eder 
Cc: dev@lists.openshift.redhat.com; Santosh Kumar30 

Subject: Re: Is Docker enterprise version subscription required for Openshift 
3.7

From the upstream Fabric list.

Technically, at runtime right now Docker 1.13 or later will work for pure 
Docker and/or Kubernetes.
The samples and example network rely on later versions of docker-compose which 
I believe require some features of Docker 17.06 and later (I think in the area 
of networks and volumes but don't recall explicitly and we definitely use 
docker exec commands in some of the samples which require 17.06).

With 1.3 and earlier, you should still be able to build with Docker 1.13, but 
with the current master we've moved to multistage builds which require 17.06 
and later to build.

Hope this helps.

FWIW, I was able to use the Docker 1.13 version which ships with Red Hat 7.x to build and run Red Hat-based Fabric images.

-- G

On Wed, Oct 17, 2018 at 9:09 AM, Jeremy Eder <je...@redhat.com> wrote:
Mark, do you know where the version requirement in the hyperledger docs comes 
from?

On Wed, Oct 17, 2018 at 7:50 AM Santosh Kumar30 <sk00546...@techmahindra.com> wrote:
Hi,

I recently started exploring OpenShift. I am a Hyperledger blockchain developer.
I am trying to create a blockchain network which will contain Hyperledger peer, orderer, cli, etc., and these images have been provided by Hyperledger.

As per the Hyperledger docs, these images are only compatible with Docker version 17.06.2-ce or greater:
https://hyperledger-fabric.readthedocs.io/en/release-1.3/prereqs.html#docker-and-docker-compose

But as the OpenShift 3.7 release notes state:
https://docs.openshift.com/container-platform/3.7/release_notes/ocp_3_7_release_notes.html#ocp-37-about-this-release

OpenShift Container Platform 3.7 is supported on RHEL 7.3, 7.4.2, 7.5, and 
Atomic Host 7.4.2 and newer with the latest packages from Extras, including 
Docker 1.12.

So my query here is: if I need Docker 17 or a later version for OpenShift 3.7, do I require a Docker Enterprise subscription, given that the Docker CE version will not work on a RHEL system?


Thanks in advance.

Regards,
Santosh Kumar

Disclaimer:  This message and the information contained herein is proprietary 
and confidential and subject to the Tech Mahindra policy statement, you may 
review the policy at http://www.techmahindra.com/Disclaimer.html externally 
http://tim.techmahindra.com/tim/disclaimer.html internally within TechMahindra.

___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


--
Jeremy Eder



--
Mark Wagner
Senior Principal Software Engineer
Performance and Scalability
Red Hat, Inc
___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


RE: keystonepasswd auth

2016-04-14 Thread Fox, Kevin M
It would be very nice to actually use scoped tokens, though. Then you could use the project's roles to map to tenants in OpenShift and not have to manage memberships in multiple systems.

Thanks,
Kevin

From: Jordan Liggitt [jligg...@redhat.com]
Sent: Thursday, April 14, 2016 10:37 AM
To: Fox, Kevin M
Cc: Scott Seago; Chmouel Boudjnah; OpenShift List Dev
Subject: Re: keystonepasswd auth

We don't use the token to make any other API calls, just to verify the user's 
auth credentials.

On Thu, Apr 14, 2016 at 1:36 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
Ah. There are scoped and unscoped tokens in Keystone. Unscoped ones are project-less but can do almost nothing. Project-scoped ones are usually used.

Most resources in OpenStack are bound to the project and not the user, hence the need for scoped tokens.
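
For reference, a sketch of what a project-scoped v3 token request body looks like (placeholder names); the "scope" block is what turns an unscoped request into a project-scoped one:

  POST /v3/auth/tokens
  {
    "auth": {
      "identity": {
        "methods": ["password"],
        "password": {
          "user": {
            "name": "demo",
            "domain": { "name": "Default" },
            "password": "secret"
          }
        }
      },
      "scope": {
        "project": {
          "name": "demo-project",
          "domain": { "name": "Default" }
        }
      }
    }
  }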

Thanks,
Kevin

From: Jordan Liggitt [jligg...@redhat.com]
Sent: Thursday, April 14, 2016 9:53 AM
To: Fox, Kevin M; Scott Seago
Cc: Chmouel Boudjnah; OpenShift List Dev
Subject: Re: keystonepasswd auth

I'm not seeing where tenant name is defaulted to the user name. The keystone 
auth request is a password authentication with the user name and domain name, 
which uniquely identifies the user (users belong to domains, not 
tenants/projects)

On Thu, Apr 14, 2016 at 12:20 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
Keystone v3 renamed tenant to project. Otherwise, it should be the same.

Thanks,
Kevin



From: dev-boun...@lists.openshift.redhat.com [dev-boun...@lists.openshift.redhat.com] on behalf of Jordan Liggitt [jligg...@redhat.com]
Sent: Thursday, April 14, 2016 9:16 AM
To: Chmouel Boudjnah
Cc: OpenShift List Dev
Subject: Re: keystonepasswd auth

The OpenShift Keystone IDP integration only supports the v3 Keystone API. I 
don't see any discussion of tenants in the doc for that API 
(http://developer.openstack.org/api-ref-identity-v3.html)



On Thu, Apr 14, 2016 at 12:06 PM, Chmouel Boudjnah <chmo...@redhat.com> wrote:
Hello,

I was looking at trying the Keystone password authentication. While there are some missing directives in the documentation:

https://github.com/openshift/openshift-docs/pull/1902

things are working, and I could properly auth my OpenShift user with my Keystone username/password.

The only caveat is that in OpenStack we usually need to specify a tenant_name/id for the user to auth with; by default, if I understand correctly, gophercloud would try to match the provider from the arguments provided:

https://github.com/rackspace/gophercloud/blob/e83aa011e019917c7bd951444d61c42431b4d21d/auth_options.go#L10-L11

which in this case, if no tenant_name is specified, would do tenant_name==user_name, as is done by default on Rackspace Cloud (gophercloud is written by Rackspace).

So now the question is: how can we improve this and be able to specify a tenant_name in there? Most deployed OpenStack clouds would have multiple users scoped to different tenants.

We could do some hackery, like having a delimiter such as a colon to be able to split those into tenant_name and user_name, which is something we did in swiftclient some time ago, but that's not very OpenStack-ish and was more of a hack that needs to be supported forever (I implemented that :(( ).

We could add a switch like --keystone-tenant-name or something, but I guess that would pollute the login if we want to add more stuff.

Maybe using the OpenStack environment variables, which are a standard way for OpenStack clients, would be an option:

https://github.com/rackspace/gophercloud/blob/e83aa011e019917c7bd951444d61c42431b4d21d/openstack/auth_env.go#L24

which would be transparent for the user, since they would only have to download their openrc from the OpenStack dashboard (Horizon) and just issue an oc login to connect (this could be just a fallback to the current method).
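
For illustration, a typical openrc sets something like the following (placeholder values; OS_TENANT_NAME is among the variables the auth_env.go linked above reads):

  export OS_AUTH_URL=https://keystone.example.com:5000/v3
  export OS_USERNAME=demo
  export OS_TENANT_NAME=demo-project
  export OS_PASSWORD=secret

After sourcing that, oc login could fall back to these variables when no explicit credentials are given.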

What do you think?

Cheers,
Chmouel







___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


RE: keystonepasswd auth

2016-04-14 Thread Fox, Kevin M
Ah. There are scoped and unscoped tokens in Keystone. Unscoped ones are project-less but can do almost nothing. Project-scoped ones are usually used.

Most resources in OpenStack are bound to the project and not the user, hence the need for scoped tokens.

Thanks,
Kevin

From: Jordan Liggitt [jligg...@redhat.com]
Sent: Thursday, April 14, 2016 9:53 AM
To: Fox, Kevin M; Scott Seago
Cc: Chmouel Boudjnah; OpenShift List Dev
Subject: Re: keystonepasswd auth

I'm not seeing where tenant name is defaulted to the user name. The keystone 
auth request is a password authentication with the user name and domain name, 
which uniquely identifies the user (users belong to domains, not 
tenants/projects)

On Thu, Apr 14, 2016 at 12:20 PM, Fox, Kevin M <kevin@pnnl.gov> wrote:
Keystone v3 renamed tenant to project. Otherwise, it should be the same.

Thanks,
Kevin



From: dev-boun...@lists.openshift.redhat.com [dev-boun...@lists.openshift.redhat.com] on behalf of Jordan Liggitt [jligg...@redhat.com]
Sent: Thursday, April 14, 2016 9:16 AM
To: Chmouel Boudjnah
Cc: OpenShift List Dev
Subject: Re: keystonepasswd auth

The OpenShift Keystone IDP integration only supports the v3 Keystone API. I 
don't see any discussion of tenants in the doc for that API 
(http://developer.openstack.org/api-ref-identity-v3.html)



On Thu, Apr 14, 2016 at 12:06 PM, Chmouel Boudjnah <chmo...@redhat.com> wrote:
Hello,

I was looking at trying the Keystone password authentication. While there are some missing directives in the documentation:

https://github.com/openshift/openshift-docs/pull/1902

things are working, and I could properly auth my OpenShift user with my Keystone username/password.

The only caveat is that in OpenStack we usually need to specify a tenant_name/id for the user to auth with; by default, if I understand correctly, gophercloud would try to match the provider from the arguments provided:

https://github.com/rackspace/gophercloud/blob/e83aa011e019917c7bd951444d61c42431b4d21d/auth_options.go#L10-L11

which in this case, if no tenant_name is specified, would do tenant_name==user_name, as is done by default on Rackspace Cloud (gophercloud is written by Rackspace).

So now the question is: how can we improve this and be able to specify a tenant_name in there? Most deployed OpenStack clouds would have multiple users scoped to different tenants.

We could do some hackery, like having a delimiter such as a colon to be able to split those into tenant_name and user_name, which is something we did in swiftclient some time ago, but that's not very OpenStack-ish and was more of a hack that needs to be supported forever (I implemented that :(( ).

We could add a switch like --keystone-tenant-name or something, but I guess that would pollute the login if we want to add more stuff.

Maybe using the OpenStack environment variables, which are a standard way for OpenStack clients, would be an option:

https://github.com/rackspace/gophercloud/blob/e83aa011e019917c7bd951444d61c42431b4d21d/openstack/auth_env.go#L24

which would be transparent for the user, since they would only have to download their openrc from the OpenStack dashboard (Horizon) and just issue an oc login to connect (this could be just a fallback to the current method).

What do you think?

Cheers,
Chmouel






___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev


RE: keystonepasswd auth

2016-04-14 Thread Fox, Kevin M
Keystone v3 renamed tenant to project. Otherwise, it should be the same.

Thanks,
Kevin



From: dev-boun...@lists.openshift.redhat.com 
[dev-boun...@lists.openshift.redhat.com] on behalf of Jordan Liggitt 
[jligg...@redhat.com]
Sent: Thursday, April 14, 2016 9:16 AM
To: Chmouel Boudjnah
Cc: OpenShift List Dev
Subject: Re: keystonepasswd auth

The OpenShift Keystone IDP integration only supports the v3 Keystone API. I 
don't see any discussion of tenants in the doc for that API 
(http://developer.openstack.org/api-ref-identity-v3.html)



On Thu, Apr 14, 2016 at 12:06 PM, Chmouel Boudjnah <chmo...@redhat.com> wrote:
Hello,

I was looking at trying the Keystone password authentication. While there are some missing directives in the documentation:

https://github.com/openshift/openshift-docs/pull/1902

things are working, and I could properly auth my OpenShift user with my Keystone username/password.

The only caveat is that in OpenStack we usually need to specify a tenant_name/id for the user to auth with; by default, if I understand correctly, gophercloud would try to match the provider from the arguments provided:

https://github.com/rackspace/gophercloud/blob/e83aa011e019917c7bd951444d61c42431b4d21d/auth_options.go#L10-L11

which in this case, if no tenant_name is specified, would do tenant_name==user_name, as is done by default on Rackspace Cloud (gophercloud is written by Rackspace).

So now the question is: how can we improve this and be able to specify a tenant_name in there? Most deployed OpenStack clouds would have multiple users scoped to different tenants.

We could do some hackery, like having a delimiter such as a colon to be able to split those into tenant_name and user_name, which is something we did in swiftclient some time ago, but that's not very OpenStack-ish and was more of a hack that needs to be supported forever (I implemented that :(( ).

We could add a switch like --keystone-tenant-name or something, but I guess that would pollute the login if we want to add more stuff.

Maybe using the OpenStack environment variables, which are a standard way for OpenStack clients, would be an option:

https://github.com/rackspace/gophercloud/blob/e83aa011e019917c7bd951444d61c42431b4d21d/openstack/auth_env.go#L24

which would be transparent for the user, since they would only have to download their openrc from the OpenStack dashboard (Horizon) and just issue an oc login to connect (this could be just a fallback to the current method).

What do you think?

Cheers,
Chmouel





___
dev mailing list
dev@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/dev