Re: Parse only some parts of a template

2018-07-09 Thread Chmouel Boudjnah
On Mon, Jul 9, 2018 at 3:56 PM Ben Parees  wrote:

>> I was wondering if there was any way to have only some parts of a template
>> processed; for example I want to get the dc/route/is out of the template
>> but not the bc or others.
>>
> No; you'd have to pass the template to oc process, redirect the result
> (a yaml list of resources) to a file, and then do your additional
> filtering/parsing.
>

OK thanks. Here is a Python script that does the filtering, in case someone
else finds it useful:

https://gist.github.com/chmouel/1e558420f8c2fd1d1eaaa16a97e46771
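For the archive, the approach boils down to something like the following sketch (not the gist itself; it assumes `oc process` was run with `-o json` so the result is a v1 List, and the set of wanted kinds is illustrative):

```python
import json

# Resource kinds to keep (illustrative; dc/route/is as in the question).
WANTED = {"DeploymentConfig", "Route", "ImageStream"}

def filter_items(doc, wanted=WANTED):
    """Return a copy of a processed-template List keeping only wanted kinds."""
    items = [i for i in doc.get("items", []) if i.get("kind") in wanted]
    return {**doc, "items": items}

# Tiny example of what `oc process -f template.yaml -o json` might emit:
doc = json.loads("""{"kind": "List", "apiVersion": "v1", "items": [
  {"kind": "DeploymentConfig", "metadata": {"name": "app"}},
  {"kind": "BuildConfig", "metadata": {"name": "app"}},
  {"kind": "Route", "metadata": {"name": "app"}}]}""")
print([i["kind"] for i in filter_items(doc)["items"]])
# -> ['DeploymentConfig', 'Route']
```

In practice you would pipe the processed template into a script like this and feed the filtered JSON back to `oc create -f -`.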

Chmouel


___
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users


Re: Parse only some parts of a template

2018-07-09 Thread Ben Parees
On Mon, Jul 9, 2018 at 9:05 AM, Chmouel Boudjnah  wrote:

> Hello,
>
> I was wondering if there was any way to have only some parts of a template
> processed; for example I want to get the dc/route/is out of the template
> but not the bc or others.
>

No; you'd have to pass the template to oc process, redirect the result (a
yaml list of resources) to a file, and then do your additional
filtering/parsing.



>
> Thanks,
> Chmouel
>


-- 
Ben Parees | OpenShift


Creating choice-based parameters for Jenkins Pipeline strategy

2018-07-09 Thread Andrew Feller
Can anyone confirm if non-string typed parameters (for example: choice) can
be declared for Jenkins Pipeline strategy?

jenkins-sync-plugin issue 131 appears to add support for them, except there
appears to be a bug due to a typo (ChoiceParameterDefintion probably should
be ChoiceParameterDefinition), and I can't really find anyone talking about
this.

Thanks!
-- 


Andy Feller  •  Sr DevOps Engineer

900 Main Campus Drive, Suite 500, Raleigh, NC 27606


e: afel...@bandwidth.com


Parse only some parts of a template

2018-07-09 Thread Chmouel Boudjnah
Hello,

I was wondering if there was any way to have only some parts of a template
processed; for example I want to get the dc/route/is out of the template
but not the bc or others.

Thanks,
Chmouel


Re: scheduler policy to spread pods

2018-07-09 Thread Tim Dudgeon
Hi, thanks for that suggestion. I took a look, but it seems it isn't
quite what's needed.
It looks like pod (anti)affinity is a binary thing. It works for the
first pod on the node with/without the specified label, but it doesn't
ensure an even spread when you schedule multiple pods.


In my case I scheduled pods using an anti-affinity
preferredDuringSchedulingIgnoredDuringExecution rule applying across 3
nodes, and that made sure that the first 3 pods went to separate nodes as
expected, but after that the rule seemed to not be applied (there were
no nodes that satisfied the rule, but as the rule was 'preferred' not
'required' the pods were scheduled without any further preference). So
by the time I had 6 pods running, 3 of them were on one node, 2 were
on another, and only 1 was on the third.


So I suppose the anti-affinity rule is working as designed, but it's
not designed to ensure an even spread when you have multiple pods on the
nodes.
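For reference, a rule like the one described might look something like the following sketch (pod name, labels, and image are hypothetical; only the podAntiAffinity stanza is the point):

```yaml
# Sketch of a preferred pod anti-affinity rule. Pods labelled app=worker
# prefer not to land on a node that already hosts another app=worker pod.
apiVersion: v1
kind: Pod
metadata:
  name: worker-1
  labels:
    app: worker
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - worker
          topologyKey: kubernetes.io/hostname
  containers:
  - name: worker
    image: busybox
```

As observed above, because the rule is 'preferred', once every eligible node already hosts a matching pod the scheduler is free to place further pods anywhere, so it does not guarantee an even spread beyond the first pod per node.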



On 04/07/18 12:16, Joel Pearson wrote:

Here’s an OpenShift reference for the same thing.

https://docs.openshift.com/container-platform/3.6/admin_guide/scheduling/pod_affinity.html
On Wed, 4 Jul 2018 at 9:14 pm, Joel Pearson
<japear...@agiledigital.com.au> wrote:


You’re probably after pod anti-affinity?

https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity

That lets you tell the scheduler that the pods aren’t allowed to
be on the same node for example.
On Wed, 4 Jul 2018 at 8:51 pm, Tim Dudgeon mailto:tdudgeon...@gmail.com>> wrote:

I've got a process that fires up a number of pods (bare pods, not backed
by a replication controller) to execute a computationally demanding job
in parallel.
What I find is that the pods do not spread effectively across the
available nodes. In my case I have a node selector that restricts
execution to 3 nodes, and the pods run mostly on the first node, a few
run on the second node, and none run on the third node.

I know that I could specify cpu resource requests and limits to help
with this, but for other reasons I'm currently unable to do this.

It looks like this is controllable through the scheduler, but the
options for controlling this look pretty complex.
Could someone advise on how best to allow pods to spread evenly across
nodes rather than execute preferentially on one node?



