Thank you for the explanation; that now makes sense.  I redeployed with
3.7 and the correct tags on the EC2 instances.  My new issue is that
I'm continuously getting the error "Unable to mount volumes for pod
"jenkins-2-lrgjb_test(ca61f578-f352-11e7-9237-0abad0f909f2)": timeout
expired waiting for volumes to attach/mount for pod
"test"/"jenkins-2-lrgjb". list of unattached/unmounted
volumes=[jenkins-data]" when trying to deploy Jenkins.  The EBS volume is
created and attached to the node; when I run lsblk I see the
device, but the mount just times out.
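
In case it helps, this is roughly how I'm checking things on my end (the
pod name and namespace are from my deployment; the node service is
origin-node on my Origin install, so adjust if yours differs):

    # On the node the pod was scheduled to: confirm the EBS device shows up
    lsblk

    # On the master: the pod events show the mount failure detail
    oc describe pod jenkins-2-lrgjb -n test

    # The node service journal logs each attach/mount attempt
    journalctl -u origin-node --since "10 minutes ago" | grep -i mount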

Thanks
Marc

On Sat, Jan 6, 2018 at 6:43 AM Hemant Kumar <[email protected]> wrote:

> Correction to the last sentence:
>
> It should read: "hence it will NOT pick a zone in which the OpenShift cluster does not exist."
>
> On Sat, Jan 6, 2018 at 6:36 AM, Hemant Kumar <[email protected]> wrote:
>
>> Let me clarify - I did not say that you have to "label" the nodes and
>> masters.
>>
>> I was suggesting that you tag the nodes and masters, the way you tag a cloud
>> resource via the AWS console or AWS CLI. I meant AWS tags, not OpenShift labels.
>>
>> The reason you have volumes created in another zone is that your AWS
>> account has nodes in more than one zone, possibly not part of the OpenShift
>> cluster. When you request a dynamically provisioned volume,
>> OpenShift considers all the nodes it can find and "randomly"
>> selects a zone among the zones it discovered.
>>
>> But if you were to use the AWS console or CLI to tag all nodes (including
>> the master) in your cluster with "KubernetesCluster" : "cluster_id", then it
>> will only consider the tagged nodes and hence it will pick a zone in which
>> the OpenShift cluster did not exist.
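>>
>> A minimal sketch of the tagging with the AWS CLI (the instance IDs and the
>> cluster_id value are placeholders; any consistent string works as the value):
>>
>>     # Tag every master and node instance with the same KubernetesCluster value
>>     aws ec2 create-tags \
>>         --resources i-0123456789abcdef0 i-0fedcba9876543210 \
>>         --tags Key=KubernetesCluster,Value=cluster_id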
>>
>>
>>
>> On Fri, Jan 5, 2018 at 11:48 PM, Marc Boorshtein <[email protected]>
>> wrote:
>>
>>> How do I label a master?  When I create PVCs, it switches between 1c and
>>> 1a.  Looking on the master, I see:
>>>
>>> Creating volume for PVC "wtf3"; chose zone="us-east-1c" from
>>> zones=["us-east-1a" "us-east-1c"]
>>>
>>> Where did us-east-1c come from???
>>>
>>> On Fri, Jan 5, 2018 at 11:07 PM Hemant Kumar <[email protected]> wrote:
>>>
>>>> Both nodes and masters. The tag information is picked up from the master
>>>> itself (where the controller-manager is running), and then OpenShift uses
>>>> the same value to find all the nodes in the cluster.
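>>>>
>>>> If you want to double-check the tags, something like this should work (the
>>>> tag key is the real one; the query expression is just an example):
>>>>
>>>>     # List every instance carrying the KubernetesCluster tag, with its zone
>>>>     aws ec2 describe-instances \
>>>>         --filters "Name=tag-key,Values=KubernetesCluster" \
>>>>         --query "Reservations[].Instances[].[InstanceId,Placement.AvailabilityZone]"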
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, Jan 5, 2018 at 10:26 PM, Marc Boorshtein <[email protected]
>>>> > wrote:
>>>>
>>>>> Nodes and masters?  Or just the nodes?  (It sounded like just nodes from
>>>>> the docs.)
>>>>>
>>>>> On Fri, Jan 5, 2018 at 9:16 PM Hemant Kumar <[email protected]>
>>>>> wrote:
>>>>>
>>>>>> Make sure that you configure ALL instances in the cluster with the tag
>>>>>> "KubernetesCluster": "value". The value of the tag for the key
>>>>>> "KubernetesCluster" should be the same for all instances in the cluster.
>>>>>> You can choose any string you want for the value.
>>>>>>
>>>>>> You will probably have to restart the OpenShift controller-manager after
>>>>>> the change, at the very minimum.
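>>>>>>
>>>>>> On a typical Origin install the controllers run inside the master
>>>>>> service, so the restart looks roughly like this (service names differ
>>>>>> between single-master and HA setups, so check which one you run):
>>>>>>
>>>>>>     # Single master:
>>>>>>     systemctl restart origin-master
>>>>>>     # HA masters split the API and controllers into separate services:
>>>>>>     systemctl restart origin-master-controllers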
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Jan 5, 2018 at 8:21 PM, Marc Boorshtein <
>>>>>> [email protected]> wrote:
>>>>>>
>>>>>>> Hello,
>>>>>>>
>>>>>>> I have a brand new Origin 3.6 cluster running on AWS. The master and
>>>>>>> all nodes are in us-east-1a, but whenever AWS creates a new volume, it
>>>>>>> puts it in us-east-1c, so nothing can access it and my pods go into a
>>>>>>> permanent Pending state because of NoVolumeZoneConflict.  Looking at
>>>>>>> aws.conf, it states us-east-1a.  What am I missing?
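>>>>>>>
>>>>>>> For what it's worth, this is how I'm checking which zone each volume
>>>>>>> landed in (the zone label is the standard one the AWS provisioner stamps
>>>>>>> on PVs; the pod name is a placeholder):
>>>>>>>
>>>>>>>     # Each dynamically provisioned PV is labeled with its availability zone
>>>>>>>     oc get pv --show-labels | grep failure-domain.beta.kubernetes.io/zone
>>>>>>>
>>>>>>>     # The pod events show the NoVolumeZoneConflict scheduling failure
>>>>>>>     oc describe pod <pending-pod>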
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>
>>>>
>>
>
_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
