I'm upgrading from OpenShift Enterprise (OSE) 3.2 to OpenShift Container Platform (OCP) 3.3.
During the metrics redeployment (refresh mode), which starts like this:
```
[root@ose-test-master-01 playbook-openshift]# oc new-app \
>     -f /usr/share/openshift/examples/infrastructure-templates/enterprise/metrics-deployer.yaml \
>     -p HAWKULAR_METRICS_HOSTNAME=metrics.test.os.example.com,MODE=refresh
--> Deploying template metrics-deployer-template for "/usr/share/openshift/examples/infrastructure-templates/enterprise/metrics-deployer.yaml"
metrics-deployer-template
---------
Template for deploying the required Metrics integration. Requires
cluster-admin 'metrics-deployer' service account and 'metrics-deployer' secret.
* With parameters:
* IMAGE_PREFIX=registry.access.redhat.com/openshift3/
* IMAGE_VERSION=3.3.0
* MASTER_URL=https://kubernetes.default.svc:443
* HAWKULAR_METRICS_HOSTNAME=metrics.test.os.example.com
* MODE=refresh
* REDEPLOY=false
* IGNORE_PREFLIGHT=false
* USE_PERSISTENT_STORAGE=true
* DYNAMICALLY_PROVISION_STORAGE=false
* CASSANDRA_NODES=1
* CASSANDRA_PV_SIZE=10Gi
* METRIC_DURATION=7
* USER_WRITE_ACCESS=false
* HEAPSTER_NODE_ID=nodename
* METRIC_RESOLUTION=15s
--> Creating resources with label app=metrics-deployer-template ...
pod "metrics-deployer-r8w4b" created
--> Success
Run 'oc status' to view your app.
```
The deployer pod then fails; its logs follow:
```
[root@ose-test-master-01 playbook-openshift]# oc logs metrics-deployer-r8w4b
+ deployer_mode=refresh
+ image_prefix=registry.access.redhat.com/openshift3/
+ image_version=3.3.0
+ master_url=https://kubernetes.default.svc:443
+ [[ 3 == \/ ]]
++ parse_bool false REDEPLOY
++ local v=false
++ '[' false '!=' true -a false '!=' false ']'
++ echo false
+ redeploy=false
+ '[' false == true ']'
+ mode=refresh
+ '[' refresh = redeploy ']'
++ parse_bool false IGNORE_PREFLIGHT
++ local v=false
++ '[' false '!=' true -a false '!=' false ']'
++ echo false
+ ignore_preflight=false
+ cassandra_nodes=1
++ parse_bool true USE_PERSISTENT_STORAGE
++ local v=true
++ '[' true '!=' true -a true '!=' false ']'
++ echo true
+ use_persistent_storage=true
++ parse_bool false DYNAMICALLY_PROVISION_STORAGE
++ local v=false
++ '[' false '!=' true -a false '!=' false ']'
++ echo false
+ dynamically_provision_storage=false
+ cassandra_pv_size=10Gi
+ metric_duration=7
+ user_write_access=false
+ heapster_node_id=nodename
+ metric_resolution=15s
+ project=openshift-infra
+ master_ca=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
+ token_file=/var/run/secrets/kubernetes.io/serviceaccount/token
+ dir=/etc/deploy/_output
+ secret_dir=/secret
+ rm -rf /etc/deploy/_output
+ mkdir -p /etc/deploy/_output
+ chmod 700 /etc/deploy/_output
+ mkdir -p /secret
+ chmod 700 /secret
chmod: changing permissions of '/secret': Read-only file system
+ :
+ hawkular_metrics_hostname=metrics.test.os.example.com
+ hawkular_metrics_alias=hawkular-metrics
+ hawkular_cassandra_alias=hawkular-cassandra
++ date +%s
+ openshift admin ca create-signer-cert --key=/etc/deploy/_output/ca.key --cert=/etc/deploy/_output/ca.crt --serial=/etc/deploy/_output/ca.serial.txt --name=metrics-signer@1475451952
+ '[' -n 1 ']'
+ oc config set-cluster master --api-version=v1 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --server=https://kubernetes.default.svc:443
cluster "master" set.
++ cat /var/run/secrets/kubernetes.io/serviceaccount/token
+ oc config set-credentials account --token=eyJhb---snipped---
user "account" set.
+ oc config set-context current --cluster=master --user=account --namespace=openshift-infra
context "current" set.
+ oc config use-context current
switched to context "current".
+ old_kc=/etc/deploy/.kubeconfig
+ KUBECONFIG=/etc/deploy/_output/kube.conf
+ '[' -z 1 ']'
+ oc config set-cluster deployer-master --api-version=v1 --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt --server=https://kubernetes.default.svc:443
cluster "deployer-master" set.
++ cat /var/run/secrets/kubernetes.io/serviceaccount/token
+ oc config set-credentials deployer-account --token=eyJhb---snipped---
user "deployer-account" set.
+ oc config set-context deployer-context --cluster=deployer-master --user=deployer-account --namespace=openshift-infra
context "deployer-context" set.
+ '[' -n 1 ']'
+ oc config use-context deployer-context
switched to context "deployer-context".
+ case $deployer_mode in
+ '[' false '!=' true ']'
+ validate_preflight
+ set -e
+ set +x
PREFLIGHT CHECK SUCCEEDED
validate_master_accessible: ok
validate_hostname: The HAWKULAR_METRICS_HOSTNAME value is deemed acceptable.
validate_deployer_secret: ok
Deleting any previous deployment (leaving route and PVCs)
POD_NAME metrics-deployer-r8w4b
The Pod "metrics-deployer-r8w4b" is invalid.
spec: Forbidden: pod updates may not change fields other than `containers[*].image` or `spec.activeDeadlineSeconds`
```
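For readers following the `++ parse_bool ...` lines in the trace above, the validation they perform amounts to something like this (my re-sketch of the logic visible in the xtrace output, not the deployer's exact source):

```shell
#!/bin/bash
# Approximation of the parse_bool behavior visible in the trace:
# echo the value back if it is exactly "true" or "false"; otherwise fail.
parse_bool() {
  local v="$1" name="$2"
  if [ "$v" != true -a "$v" != false ]; then
    echo "$name must be 'true' or 'false', got: $v" >&2
    return 1
  fi
  echo "$v"
}

redeploy=$(parse_bool false REDEPLOY)               # -> false
use_pv=$(parse_bool true USE_PERSISTENT_STORAGE)    # -> true
echo "redeploy=$redeploy use_pv=$use_pv"
```

So every template parameter the trace echoes back has simply passed this true/false check before the script proceeds.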
Poking around inside the deployer pod with `oc debug metrics-deployer-r8w4b`, I see
the following in `/opt/deploy/scripts/functions.sh`:
```
function handle_previous_deployment() {
  if [ "$mode" = "refresh" ]; then
    echo "Deleting any previous deployment (leaving route and PVCs)"
    # We don't want to delete ourselves, but we do want to remove old deployers.
    # Remove our label so that we are not deleted.
    echo "POD_NAME ${POD_NAME:-}"
    [ -n "${POD_NAME:-}" ] && oc label pod ${POD_NAME} metrics-infra-
    oc delete rc,svc,pod,sa,templates,secrets --selector="metrics-infra" --ignore-not-found=true
    # Add back our label so that the next time the deployer is run this will be deleted.
    [ -n "${POD_NAME:-}" ] && oc label pod ${POD_NAME} metrics-infra=deployer
```
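To make that refresh-mode sequence easier to follow outside a cluster, here is the same flow with `oc` shadowed by an echo stub (purely illustrative; note the trailing dash in `metrics-infra-` is `oc label` syntax for *removing* a label):

```shell
#!/bin/bash
# Shell function shadowing the real `oc` binary so the sequence can be shown
# without a cluster; each call just prints what would have been executed.
oc() { echo "would run: oc $*"; }

POD_NAME=metrics-deployer-r8w4b
# Trailing dash removes the metrics-infra label so the delete below skips this pod.
[ -n "${POD_NAME:-}" ] && oc label pod "${POD_NAME}" metrics-infra-
oc delete rc,svc,pod,sa,templates,secrets --selector="metrics-infra" --ignore-not-found=true
# Re-add the label so the next deployer run cleans this pod up too.
[ -n "${POD_NAME:-}" ] && oc label pod "${POD_NAME}" metrics-infra=deployer
```

In the failing run, it is the `oc label pod` step against the deployer's own pod that produces the "Pod ... is invalid" error shown in the logs.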
Any idea what is causing these errors from the deployer pod?
- The Pod "metrics-deployer-r8w4b" is invalid.
- spec: Forbidden: pod updates may not change fields other than
`containers[*].image` or `spec.activeDeadlineSeconds`
_______________________________________________
users mailing list
[email protected]
http://lists.openshift.redhat.com/openshiftmm/listinfo/users