gauti098 opened a new issue, #24950:
URL: https://github.com/apache/superset/issues/24950
kubectl logs -f superset-worker-7ddd669dfb-wvpwc
Collecting psycopg2-binary==2.9.1
Downloading
psycopg2_binary-2.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
(3.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.4/3.4 MB 6.7 MB/s eta 0:00:00
Requirement already satisfied: redis==3.5.3 in
/usr/local/lib/python3.8/site-packages (3.5.3)
Installing collected packages: psycopg2-binary
Attempting uninstall: psycopg2-binary
Found existing installation: psycopg2-binary 2.9.5
Uninstalling psycopg2-binary-2.9.5:
Successfully uninstalled psycopg2-binary-2.9.5
Successfully installed psycopg2-binary-2.9.1
WARNING: Running pip as the 'root' user can result in broken permissions and
conflicting behaviour with the system package manager. It is recommended to use
a virtual environment instead: https://pip.pypa.io/warnings/venv
WARNING: You are using pip version 22.0.4; however, version 23.2.1 is
available.
You should consider upgrading via the '/usr/local/bin/python -m pip install
--upgrade pip' command.
logging was configured successfully
2023-08-10 15:21:31,062:INFO:superset.utils.logging_configurator:logging was
configured successfully
2023-08-10 15:21:31,066:INFO:root:Configured event logger of type <class
'superset.utils.log.DBEventLogger'>
We haven't found any Content Security Policy (CSP) defined in the
configurations. Please make sure to configure CSP using the TALISMAN_ENABLED
and TALISMAN_CONFIG keys or any other external software. Failing to configure
CSP have serious security implications. Check
https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP for more information. You
can disable this warning using the CONTENT_SECURITY_POLICY_WARNING key.
2023-08-10 15:21:31,067:WARNING:superset.initialization:We haven't found any
Content Security Policy (CSP) defined in the configurations. Please make sure
to configure CSP using the TALISMAN_ENABLED and TALISMAN_CONFIG keys or any
other external software. Failing to configure CSP have serious security
implications. Check https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP for
more information. You can disable this warning using the
CONTENT_SECURITY_POLICY_WARNING key.
Loaded your LOCAL configuration at [/app/pythonpath/superset_config.py]
-------------- celery@superset-worker-7ddd669dfb-wvpwc v5.2.2 (dawn-chorus)
--- ***** -----
-- ******* ---- Linux-5.15.0-78-generic-x86_64-with-glibc2.2.5 2023-08-10 15:21:36
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app: __main__:0x7f8185e4c280
- ** ---------- .> transport: redis://superset-redis-headless:6379/0
- ** ---------- .> results: redis://superset-redis-headless:6379/0
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ---- .> task events: ON
--- ***** -----
-------------- [queues]
.> celery exchange=celery(direct) key=celery
[tasks]
. cache-warmup
. cache_chart_thumbnail
. cache_dashboard_thumbnail
. fetch_url
. load_chart_data_into_cache
. load_explore_json_into_cache
. reports.execute
. reports.prune_log
. reports.scheduler
. sql_lab.get_sql_results
/usr/local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py:67:
SAWarning: relationship 'SqlaTable.slices' will copy column tables.id to
column slices.datasource_id, which conflicts with relationship(s):
'Slice.table' (copies tables.id to slices.datasource_id). If this is not the
intention, consider if these relationships should be linked with
back_populates, or if viewonly=True should be applied to one or more if they
are read-only. For the less common case that foreign key constraints are
partially overlapping, the orm.foreign() annotation can be used to isolate the
columns that should be written towards. To silence this warning, add the
parameter 'overlaps="table"' to the 'SqlaTable.slices' relationship.
(Background on this error at: https://sqlalche.me/e/14/qzyx)
for prop in class_mapper(obj).iterate_properties:
/usr/local/lib/python3.8/site-packages/celery/platforms.py:840:
SecurityWarning: You're running the worker with superuser privileges: this is
absolutely not recommended!
Please specify a different user using the --uid option.
User information: uid=0 euid=0 gid=0 egid=0
warnings.warn(SecurityWarning(ROOT_DISCOURAGED.format(
[2023-08-10 15:21:36,082: WARNING/MainProcess]
/usr/local/lib/python3.8/site-packages/celery/app/utils.py:204:
CDeprecationWarning:
The 'CELERY_ANNOTATIONS' setting is deprecated and scheduled for removal
in
version 6.0.0. Use the task_annotations instead
deprecated.warn(description=f'The {setting!r} setting',
[2023-08-10 15:21:36,083: WARNING/MainProcess]
/usr/local/lib/python3.8/site-packages/celery/app/utils.py:204:
CDeprecationWarning:
The 'BROKER_URL' setting is deprecated and scheduled for removal in
version 6.0.0. Use the broker_url instead
deprecated.warn(description=f'The {setting!r} setting',
[2023-08-10 15:21:36,083: WARNING/MainProcess]
/usr/local/lib/python3.8/site-packages/celery/app/utils.py:204:
CDeprecationWarning:
The 'CELERY_IMPORTS' setting is deprecated and scheduled for removal in
version 6.0.0. Use the imports instead
deprecated.warn(description=f'The {setting!r} setting',
[2023-08-10 15:21:36,083: WARNING/MainProcess]
/usr/local/lib/python3.8/site-packages/celery/app/utils.py:204:
CDeprecationWarning:
The 'CELERY_RESULT_BACKEND' setting is deprecated and scheduled for
removal in
version 6.0.0. Use the result_backend instead
deprecated.warn(description=f'The {setting!r} setting',
[2023-08-10 15:21:36,083: WARNING/MainProcess] Please run `celery upgrade
settings path/to/settings.py` to avoid these warnings and to allow a smoother
upgrade to Celery 6.0.
[2023-08-10 15:21:39,077: INFO/MainProcess] Connected to
redis://superset-redis-headless:6379/0
[2023-08-10 15:21:39,082: INFO/MainProcess] mingle: searching for neighbors
[2023-08-10 15:21:40,142: INFO/MainProcess] mingle: sync with 1 nodes
[2023-08-10 15:21:40,143: INFO/MainProcess] mingle: sync complete
[2023-08-10 15:21:40,185: INFO/MainProcess]
celery@superset-worker-7ddd669dfb-wvpwc ready.
These are the worker logs.
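The CDeprecationWarnings in the worker log come from the old uppercase Celery setting names (BROKER_URL, CELERY_IMPORTS, CELERY_RESULT_BACKEND, CELERY_ANNOTATIONS), presumably defined in the generated superset_config.py. They are harmless on Celery 5.2 but will break on Celery 6. A minimal sketch of how they could be silenced through the chart's configOverrides mechanism follows - the override name celery_new_style is arbitrary, the REDIS_HOST/REDIS_PORT environment variables are assumed to be populated by the chart's env secret (they match the transport shown in the banner above), and the task_annotations value is only a placeholder:

configOverrides:
  # Hypothetical override name; the chart appends each entry to the end of superset_config.py.
  celery_new_style: |
    import os

    # New-style lowercase Celery settings, as the deprecation warnings suggest
    # (broker_url, imports, result_backend, task_annotations).
    REDIS_HOST = os.environ.get("REDIS_HOST", "superset-redis-headless")
    REDIS_PORT = os.environ.get("REDIS_PORT", "6379")

    class CeleryConfig:
        broker_url = f"redis://{REDIS_HOST}:{REDIS_PORT}/0"
        result_backend = f"redis://{REDIS_HOST}:{REDIS_PORT}/0"
        imports = ("superset.sql_lab", "superset.tasks")
        # Mirror whatever your current CELERY_ANNOTATIONS contain; this entry is illustrative.
        task_annotations = {"sql_lab.get_sql_results": {"rate_limit": "100/s"}}

    CELERY_CONFIG = CeleryConfig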
kubectl logs -f superset-celerybeat-5f7cf48678-bksdf
Collecting psycopg2-binary==2.9.1
Downloading
psycopg2_binary-2.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
(3.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.4/3.4 MB 7.9 MB/s eta 0:00:00
Requirement already satisfied: redis==3.5.3 in
/usr/local/lib/python3.8/site-packages (3.5.3)
Installing collected packages: psycopg2-binary
Attempting uninstall: psycopg2-binary
Found existing installation: psycopg2-binary 2.9.5
Uninstalling psycopg2-binary-2.9.5:
Successfully uninstalled psycopg2-binary-2.9.5
Successfully installed psycopg2-binary-2.9.1
WARNING: Running pip as the 'root' user can result in broken permissions and
conflicting behaviour with the system package manager. It is recommended to use
a virtual environment instead: https://pip.pypa.io/warnings/venv
WARNING: You are using pip version 22.0.4; however, version 23.2.1 is
available.
You should consider upgrading via the '/usr/local/bin/python -m pip install
--upgrade pip' command.
logging was configured successfully
2023-08-10 15:21:30,295:INFO:superset.utils.logging_configurator:logging was
configured successfully
2023-08-10 15:21:30,342:INFO:root:Configured event logger of type <class
'superset.utils.log.DBEventLogger'>
We haven't found any Content Security Policy (CSP) defined in the
configurations. Please make sure to configure CSP using the TALISMAN_ENABLED
and TALISMAN_CONFIG keys or any other external software. Failing to configure
CSP have serious security implications. Check
https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP for more information. You
can disable this warning using the CONTENT_SECURITY_POLICY_WARNING key.
2023-08-10 15:21:30,344:WARNING:superset.initialization:We haven't found any
Content Security Policy (CSP) defined in the configurations. Please make sure
to configure CSP using the TALISMAN_ENABLED and TALISMAN_CONFIG keys or any
other external software. Failing to configure CSP have serious security
implications. Check https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP for
more information. You can disable this warning using the
CONTENT_SECURITY_POLICY_WARNING key.
/usr/local/lib/python3.8/site-packages/flask_appbuilder/models/sqla/interface.py:67:
SAWarning: relationship 'SqlaTable.slices' will copy column tables.id to
column slices.datasource_id, which conflicts with relationship(s):
'Slice.table' (copies tables.id to slices.datasource_id). If this is not the
intention, consider if these relationships should be linked with
back_populates, or if viewonly=True should be applied to one or more if they
are read-only. For the less common case that foreign key constraints are
partially overlapping, the orm.foreign() annotation can be used to isolate the
columns that should be written towards. To silence this warning, add the
parameter 'overlaps="table"' to the 'SqlaTable.slices' relationship.
(Background on this error at: https://sqlalche.me/e/14/qzyx)
for prop in class_mapper(obj).iterate_properties:
[2023-08-10 15:21:35,539: INFO/MainProcess] beat: Starting...
These are the Celery beat logs.
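Both the worker and beat logs also print the CSP warning at startup. If the goal is to actually set a Content Security Policy rather than just hide the warning, a minimal sketch using the same configOverrides mechanism is below (the override name enable_csp is arbitrary and the CSP directives are purely illustrative - tighten them for your deployment):

configOverrides:
  # Hypothetical override name; appended to superset_config.py by the chart.
  enable_csp: |
    # Enable flask-talisman so Superset sends a Content Security Policy,
    # as the startup warning recommends (TALISMAN_ENABLED / TALISMAN_CONFIG).
    TALISMAN_ENABLED = True
    TALISMAN_CONFIG = {
        "content_security_policy": {
            "default-src": ["'self'"],
            "img-src": ["'self'", "data:"],
            "style-src": ["'self'", "'unsafe-inline'"],
            "script-src": ["'self'", "'unsafe-inline'", "'unsafe-eval'"],
        },
        # Leave HTTPS redirection to the ingress unless you need it in-app.
        "force_https": False,
    }
    # Or, to only suppress the warning without setting a CSP:
    # CONTENT_SECURITY_POLICY_WARNING = False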
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# Default values for superset.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
# A README is automatically generated from this file to document it, using helm-docs (see https://github.com/norwoodj/helm-docs)
# To update it, install helm-docs and run helm-docs from the root of this chart
# -- User ID directive. This user must have enough permissions to run the bootstrap script
# Running containers as root is not recommended in production. Change this to another UID - e.g. 1000 to be more secure
runAsUser: 0
# serviceAccountName: superset
serviceAccount:
# -- Create custom service account for Superset. If create: true and name is not provided, `superset.fullname` will be used.
create: false
annotations: {}
# -- Install additional packages and do any other bootstrap configuration in this script
# For production clusters it's recommended to build your own image with this step done in CI
# @default -- see `values.yaml`
bootstrapScript: |
#!/bin/bash
rm -rf /var/lib/apt/lists/* && \
pip install \
psycopg2-binary==2.9.1 \
redis==3.5.3 && \
if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi
# -- The name of the secret which we will use to generate a superset_config.py file
# Note: this secret must have the key superset_config.py in it and can include other files as well
configFromSecret: '{{ template "superset.fullname" . }}-config'
# -- The name of the secret which we will use to populate env vars in deployed pods
# This can be useful for secret keys, etc.
envFromSecret: '{{ template "superset.fullname" . }}-env'
# -- This can be a list of templated strings
envFromSecrets: []
# -- Extra environment variables that will be passed into pods
extraEnv:
{}
# Different gunicorn settings, refer to the gunicorn documentation
# https://docs.gunicorn.org/en/stable/settings.html#
# These variables are used as Flags at the gunicorn startup
# https://github.com/apache/superset/blob/master/docker/run-server.sh#L22
# Extend timeout to allow long running queries.
# GUNICORN_TIMEOUT: 300
# Increase the gunicorn worker amount, can improve performance drastically
# See: https://docs.gunicorn.org/en/stable/design.html#how-many-workers
# SERVER_WORKER_AMOUNT: 4
# WORKER_MAX_REQUESTS: 0
# WORKER_MAX_REQUESTS_JITTER: 0
# SERVER_THREADS_AMOUNT: 20
# GUNICORN_KEEPALIVE: 2
# SERVER_LIMIT_REQUEST_LINE: 0
# SERVER_LIMIT_REQUEST_FIELD_SIZE: 0
# OAUTH_HOME_DOMAIN: ..
# # If a whitelist is not set, any address that can use your OAuth2 endpoint will be able to log in.
# # This includes any random Gmail address if your OAuth2 Web App is set to External.
# OAUTH_WHITELIST_REGEX: ...
# -- Extra environment variables in RAW format that will be passed into pods
extraEnvRaw:
[]
# Load DB password from other secret (e.g. for zalando operator)
# - name: DB_PASS
# valueFrom:
# secretKeyRef:
# name: superset.superset-postgres.credentials.postgresql.acid.zalan.do
# key: password
# -- Extra environment variables to pass as secrets
extraSecretEnv:
{}
# MAPBOX_API_KEY: ...
# # Google API Keys: https://console.cloud.google.com/apis/credentials
# GOOGLE_KEY: ...
# GOOGLE_SECRET: ...
# -- Extra files to mount on `/app/pythonpath`
extraConfigs:
{}
# import_datasources.yaml: |
# databases:
# - allow_file_upload: true
# allow_ctas: true
# allow_cvas: true
# database_name: example-db
# extra: "{\r\n \"metadata_params\": {},\r\n \"engine_params\": {},\r\n \"\
# metadata_cache_timeout\": {},\r\n \"schemas_allowed_for_file_upload\": []\r\n\
# }"
# sqlalchemy_uri: example://example-db.local
# tables: []
# -- Extra files to mount on `/app/pythonpath` as secrets
extraSecrets: {}
extraVolumes:
[]
# - name: customConfig
# configMap:
# name: '{{ template "superset.fullname" . }}-custom-config'
# - name: additionalSecret
# secret:
# secretName: my-secret
# defaultMode: 0600
extraVolumeMounts:
[]
# - name: customConfig
# mountPath: /mnt/config
# readOnly: true
# - name: additionalSecret:
# mountPath: /mnt/secret
# -- A dictionary of overrides to append at the end of superset_config.py - the name does not matter
# WARNING: the order is not guaranteed
# Files can be passed as helm --set-file configOverrides.my-override=my-file.py
configOverrides:
{}
# extend_timeout: |
# # Extend timeout to allow long running queries.
# SUPERSET_WEBSERVER_TIMEOUT = ...
# enable_oauth: |
# from flask_appbuilder.security.manager import (AUTH_DB, AUTH_OAUTH)
# AUTH_TYPE = AUTH_OAUTH
# OAUTH_PROVIDERS = [
# {
# "name": "google",
# "whitelist": [ os.getenv("OAUTH_WHITELIST_REGEX", "") ],
# "icon": "fa-google",
# "token_key": "access_token",
# "remote_app": {
# "client_id": os.environ.get("GOOGLE_KEY"),
# "client_secret": os.environ.get("GOOGLE_SECRET"),
# "api_base_url": "https://www.googleapis.com/oauth2/v2/",
# "client_kwargs": {"scope": "email profile"},
# "request_token_url": None,
# "access_token_url":
"https://accounts.google.com/o/oauth2/token",
# "authorize_url":
"https://accounts.google.com/o/oauth2/auth",
# "authorize_params": {"hd": os.getenv("OAUTH_HOME_DOMAIN",
"")}
# }
# }
# ]
# # Map Authlib roles to superset roles
# AUTH_ROLE_ADMIN = 'Admin'
# AUTH_ROLE_PUBLIC = 'Public'
# # Will allow user self-registration, allowing Flask users to be created from an Authorized User
# AUTH_USER_REGISTRATION = True
# # The default user self registration role
# AUTH_USER_REGISTRATION_ROLE = "Admin"
# secret: |
# # Generate your own secret key for encryption. Use openssl rand -base64 42 to generate a good key
# SECRET_KEY = 'YOUR_OWN_RANDOM_GENERATED_SECRET_KEY'
# -- Same as above but the values are files
configOverridesFiles:
{}
# extend_timeout: extend_timeout.py
# enable_oauth: enable_oauth.py
configMountPath: "/app/pythonpath"
extraConfigMountPath: "/app/configs"
image:
repository: docker.io/gautam098/v1suprset
tag: v45
pullPolicy: IfNotPresent
imagePullSecrets: []
initImage:
repository: jwilder/dockerize
tag: latest
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 8088
annotations:
{}
# cloud.google.com/load-balancer-type: "Internal"
loadBalancerIP: null
nodePort:
# -- (int)
http: nil
ingress:
enabled: false
# ingressClassName: nginx
annotations:
{}
# kubernetes.io/tls-acme: "true"
## Extend timeout to allow long running queries.
# nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
# nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
# nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
path: /
pathType: ImplementationSpecific
hosts:
- chart-example.local
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources:
{}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# The limits below will apply to all Superset components. To set individual resource limitations refer to the pod specific values below.
# The pod specific values will overwrite anything that is set here.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
# -- Custom hostAliases for all superset pods
## https://kubernetes.io/docs/tasks/network/customize-hosts-file-for-pods/
hostAliases: []
# - hostnames:
# - nodns.my.lan
# ip: 18.27.36.45
# Superset node configuration
supersetNode:
replicaCount: 3
# -- Startup command
# @default -- See `values.yaml`
command:
- "/bin/sh"
- "-c"
- ". {{ .Values.configMountPath }}/superset_bootstrap.sh;
/usr/bin/run-server.sh"
connections:
# -- Change in case of bringing your own redis and then also set redis.enabled: false
redis_host: '{{ template "superset.fullname" . }}-redis-headless'
# redis_password: superset
redis_port: "6379"
# You need to change the below configuration in case you bring your own PostgreSQL instance, and also set postgresql.enabled: false
db_host: '{{ template "superset.fullname" . }}-postgresql'
db_port: "5432"
db_user: superset
db_pass: superset
db_name: superset
env: {}
# -- If true, forces deployment to reload on each upgrade
forceReload: false
# -- Init containers
# @default -- a container waiting for postgres
initContainers:
- name: wait-for-postgres
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: "{{ .Values.initImage.pullPolicy }}"
envFrom:
- secretRef:
name: "{{ tpl .Values.envFromSecret . }}"
command:
- /bin/sh
- -c
- dockerize -wait "tcp://$DB_HOST:$DB_PORT" -timeout 120s
# -- Annotations to be added to supersetNode deployment
deploymentAnnotations: {}
# -- Labels to be added to supersetNode deployment
deploymentLabels: {}
# -- Affinity to be added to supersetNode deployment
affinity: {}
# -- TopologySpreadConstraints to be added to supersetNode deployments
topologySpreadConstraints: []
# -- Annotations to be added to supersetNode pods
podAnnotations: {}
# -- Labels to be added to supersetNode pods
podLabels: {}
startupProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 15
timeoutSeconds: 1
failureThreshold: 60
periodSeconds: 5
successThreshold: 1
livenessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 15
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 15
successThreshold: 1
readinessProbe:
httpGet:
path: /health
port: http
initialDelaySeconds: 15
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 15
successThreshold: 1
# -- Resource settings for the supersetNode pods - these settings overwrite existing values from the global resources object defined above.
resources:
{}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
podSecurityContext: {}
containerSecurityContext: {}
strategy:
{}
# type: RollingUpdate
# rollingUpdate:
# maxSurge: 25%
# maxUnavailable: 25%
# Superset Celery worker configuration
supersetWorker:
replicaCount: 1
# -- Worker startup command
# @default -- a `celery worker` command
command:
- "/bin/sh"
- "-c"
- ". {{ .Values.configMountPath }}/superset_bootstrap.sh; celery
--app=superset.tasks.celery_app:app worker -O fair -l INFO -E"
# -- If true, forces deployment to reload on each upgrade
# command: ["/app/docker/docker-bootstrap.sh", "worker"]
forceReload: true
# -- Init container
# @default -- a container waiting for postgres and redis
initContainers:
- name: wait-for-postgres-redis
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: "{{ .Values.initImage.pullPolicy }}"
envFrom:
- secretRef:
name: "{{ tpl .Values.envFromSecret . }}"
command:
- /bin/sh
- -c
- dockerize -wait "tcp://$DB_HOST:$DB_PORT" -wait "tcp://$REDIS_HOST:$REDIS_PORT" -timeout 120s
# -- Annotations to be added to supersetWorker deployment
deploymentAnnotations: {}
# -- Labels to be added to supersetWorker deployment
deploymentLabels: {}
# -- Annotations to be added to supersetWorker pods
podAnnotations: {}
# -- Labels to be added to supersetWorker pods
podLabels: {}
# -- Resource settings for the supersetWorker pods - these settings overwrite existing values from the global resources object defined above.
resources:
{}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
podSecurityContext: {}
containerSecurityContext: {}
strategy:
{}
# type: RollingUpdate
# rollingUpdate:
# maxSurge: 25%
# maxUnavailable: 25%
livenessProbe:
exec:
# -- Liveness probe command
# @default -- a `celery inspect ping` command
command:
- sh
- -c
- celery -A superset.tasks.celery_app:app inspect ping -d
celery@$HOSTNAME
initialDelaySeconds: 120
timeoutSeconds: 60
failureThreshold: 3
periodSeconds: 60
successThreshold: 1
# -- No startup/readiness probes by default since we don't really care about its startup time (it doesn't serve traffic)
startupProbe: {}
# -- No startup/readiness probes by default since we don't really care about its startup time (it doesn't serve traffic)
readinessProbe: {}
# Superset beat configuration (to trigger scheduled jobs like reports)
supersetCeleryBeat:
# -- This is only required if you intend to use alerts and reports
enabled: True
# -- Command
# @default -- a `celery beat` command
command:
- "/bin/sh"
- "-c"
- ". {{ .Values.configMountPath }}/superset_bootstrap.sh; celery
--app=superset.tasks.celery_app:app beat --pidfile /tmp/celerybeat.pid -l INFO
-s '/tmp/celerybeat-schedule'"
# command: ["/app/docker/docker-bootstrap.sh", "beat"]
# celery --app=superset.tasks.celery_app:app beat --pidfile /tmp/celerybeat.pid -l INFO -s "${SUPERSET_HOME}"/celerybeat-schedule
# -- If true, forces deployment to reload on each upgrade
forceReload: true
# -- List of init containers
# @default -- a container waiting for postgres
initContainers:
- name: wait-for-postgres-redis
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: "{{ .Values.initImage.pullPolicy }}"
envFrom:
- secretRef:
name: "{{ tpl .Values.envFromSecret . }}"
command:
- /bin/sh
- -c
- dockerize -wait "tcp://$DB_HOST:$DB_PORT" -wait "tcp://$REDIS_HOST:$REDIS_PORT" -timeout 120s
# -- Annotations to be added to supersetCeleryBeat deployment
deploymentAnnotations: {}
# -- Affinity to be added to supersetCeleryBeat deployment
affinity: {}
# -- TopologySpreadConstraints to be added to supersetCeleryBeat deployments
topologySpreadConstraints: []
# -- Annotations to be added to supersetCeleryBeat pods
podAnnotations: {}
# -- Labels to be added to supersetCeleryBeat pods
podLabels: {}
# -- Resource settings for the CeleryBeat pods - these settings overwrite existing values from the global resources object defined above.
resources:
{}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
podSecurityContext: {}
containerSecurityContext: {}
supersetCeleryFlower:
# -- Enables a Celery flower deployment (management UI to monitor celery jobs)
# WARNING: on superset 1.x, this requires a Superset image that has `flower<1.0.0` installed (which is NOT the case of the default images)
# flower>=1.0.0 requires Celery 5+ which Superset 1.5 does not support
enabled: false
replicaCount: 1
# -- Command
# @default -- a `celery flower` command
command:
- "/bin/sh"
- "-c"
- "celery --app=superset.tasks.celery_app:app flower"
service:
type: ClusterIP
annotations: {}
port: 5555
nodePort:
# -- (int)
http: nil
startupProbe:
httpGet:
path: /api/workers
port: flower
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 60
periodSeconds: 5
successThreshold: 1
livenessProbe:
httpGet:
path: /api/workers
port: flower
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 5
successThreshold: 1
readinessProbe:
httpGet:
path: /api/workers
port: flower
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 5
successThreshold: 1
# -- List of init containers
# @default -- a container waiting for postgres and redis
initContainers:
- name: wait-for-postgres-redis
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: "{{ .Values.initImage.pullPolicy }}"
envFrom:
- secretRef:
name: "{{ tpl .Values.envFromSecret . }}"
command:
- /bin/sh
- -c
- dockerize -wait "tcp://$DB_HOST:$DB_PORT" -wait "tcp://$REDIS_HOST:$REDIS_PORT" -timeout 120s
# -- Annotations to be added to supersetCeleryFlower deployment
deploymentAnnotations: {}
# -- Affinity to be added to supersetCeleryFlower deployment
affinity: {}
# -- TopologySpreadConstraints to be added to supersetCeleryFlower deployments
topologySpreadConstraints: []
# -- Annotations to be added to supersetCeleryFlower pods
podAnnotations: {}
# -- Labels to be added to supersetCeleryFlower pods
podLabels: {}
# -- Resource settings for the supersetCeleryFlower pods - these settings overwrite existing values from the global resources object defined above.
resources:
{}
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
podSecurityContext: {}
containerSecurityContext: {}
supersetWebsockets:
# -- This is only required if you intend to use `GLOBAL_ASYNC_QUERIES` in `ws` mode
# see https://github.com/apache/superset/blob/master/CONTRIBUTING.md#async-chart-queries
enabled: false
replicaCount: 1
ingress:
path: /ws
pathType: Prefix
image:
# -- There is no official image (yet), this one is community-supported
repository: oneacrefund/superset-websocket
tag: latest
pullPolicy: IfNotPresent
# -- The config.json to pass to the server, see https://github.com/apache/superset/tree/master/superset-websocket
# Note that the configuration can also be read from environment variables (which will have priority), see https://github.com/apache/superset/blob/master/superset-websocket/src/config.ts for a list of supported variables
# @default -- see `values.yaml`
config:
{
"port": 8080,
"logLevel": "debug",
"logToFile": false,
"logFilename": "app.log",
"statsd": { "host": "127.0.0.1", "port": 8125, "globalTags": [] },
"redis":
{
"port": 6379,
"host": "127.0.0.1",
"password": "",
"db": 0,
"ssl": false,
},
"redisStreamPrefix": "async-events-",
"jwtSecret": "CHANGE-ME",
"jwtCookieName": "async-token",
}
service:
type: ClusterIP
annotations: {}
port: 8080
nodePort:
# -- (int)
http: nil
command: []
resources: {}
deploymentAnnotations: {}
# -- Affinity to be added to supersetWebsockets deployment
affinity: {}
# -- TopologySpreadConstraints to be added to supersetWebsockets deployments
topologySpreadConstraints: []
podAnnotations: {}
podLabels: {}
strategy: {}
podSecurityContext: {}
containerSecurityContext: {}
startupProbe:
httpGet:
path: /health
port: ws
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 60
periodSeconds: 5
successThreshold: 1
livenessProbe:
httpGet:
path: /health
port: ws
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 5
successThreshold: 1
readinessProbe:
httpGet:
path: /health
port: ws
initialDelaySeconds: 5
timeoutSeconds: 1
failureThreshold: 3
periodSeconds: 5
successThreshold: 1
init:
# Configure resources
# Warning: the fab command consumes a lot of RAM and can
# cause the process to be killed due to OOM if it exceeds the limit
# Make sure you set a strong password for the admin user at creation (or change it after setup)
# Also change the admin email to your own custom email.
resources:
{}
# limits:
# cpu:
# memory:
# requests:
# cpu:
# memory:
# -- Command
# @default -- a `superset_init.sh` command
command:
- "/bin/sh"
- "-c"
- ". {{ .Values.configMountPath }}/superset_bootstrap.sh; . {{
.Values.configMountPath }}/superset_init.sh"
# command:
# - /bin/sh
# - -c
# - |
# . {{ .Values.configMountPath }}/superset_bootstrap.sh
# superset re-encrypt-secrets
# . {{ .Values.configMountPath }}/superset_init.sh
enabled: true
loadExamples: false
createAdmin: true
adminUser:
username: admin
firstname: Superset
lastname: Admin
email: [email protected]
password: admin
# -- List of initContainers
# @default -- a container waiting for postgres
initContainers:
- name: wait-for-postgres
image: "{{ .Values.initImage.repository }}:{{ .Values.initImage.tag }}"
imagePullPolicy: "{{ .Values.initImage.pullPolicy }}"
envFrom:
- secretRef:
name: "{{ tpl .Values.envFromSecret . }}"
command:
- /bin/sh
- -c
- dockerize -wait "tcp://$DB_HOST:$DB_PORT" -timeout 120s
# -- A Superset init script
# @default -- a script to create admin user and initialize roles
initscript: |-
#!/bin/sh
set -eu
echo "Upgrading DB schema..."
superset db upgrade
echo "Initializing roles..."
superset init
{{ if .Values.init.createAdmin }}
echo "Creating admin user..."
superset fab create-admin \
--username {{ .Values.init.adminUser.username }} \
--firstname {{ .Values.init.adminUser.firstname }} \
--lastname {{ .Values.init.adminUser.lastname }} \
--email {{ .Values.init.adminUser.email }} \
--password {{ .Values.init.adminUser.password }} \
|| true
{{- end }}
{{ if .Values.init.loadExamples }}
echo "Loading examples..."
superset load_examples
{{- end }}
if [ -f "{{ .Values.extraConfigMountPath }}/import_datasources.yaml" ];
then
echo "Importing database connections.... "
superset import_datasources -p {{ .Values.extraConfigMountPath
}}/import_datasources.yaml
fi
## Annotations to be added to init job pods
podAnnotations: {}
podSecurityContext: {}
containerSecurityContext: {}
# -- Configuration values for the postgresql dependency.
# ref: https://github.com/kubernetes/charts/blob/master/stable/postgresql/README.md
# @default -- see `values.yaml`
postgresql:
##
## Use the PostgreSQL chart dependency.
## Set to false if bringing your own PostgreSQL.
enabled: true
## Authentication parameters
auth:
## The name of an existing secret that contains the postgres password.
existingSecret:
## PostgreSQL name for a custom user to create
username: superset
## PostgreSQL password for the custom user to create. Ignored if `auth.existingSecret` with key `password` is provided
password: superset
## PostgreSQL name for a custom database to create
database: superset
image:
tag: "14.6.0-debian-11-r13"
## PostgreSQL Primary parameters
primary:
##
## Persistent Volume Storage configuration.
## ref: https://kubernetes.io/docs/user-guide/persistent-volumes
persistence:
##
## Enable PostgreSQL persistence using Persistent Volume Claims.
enabled: true
##
## Persistent class
# storageClass: classname
##
## Access modes:
accessModes:
- ReadWriteOnce
## PostgreSQL port
service:
ports:
postgresql: "5432"
# -- Configuration values for the Redis dependency.
# ref: https://github.com/bitnami/charts/blob/master/bitnami/redis
# More documentation can be found here: https://artifacthub.io/packages/helm/bitnami/redis
# @default -- see `values.yaml`
redis:
##
## Use the redis chart dependency.
##
## If you are bringing your own redis, you can set the host in supersetNode.connections.redis_host
##
## Set to false if bringing your own redis.
enabled: true
##
## Set architecture to standalone/replication
architecture: standalone
##
## Auth configuration:
##
auth:
## Enable password authentication
enabled: false
## The name of an existing secret that contains the redis password.
existingSecret: ""
## Name of the key containing the secret.
existingSecretKey: ""
## Redis password
password: superset
##
## Master configuration
##
master:
##
## Image configuration
# image:
##
## docker registry secret names (list)
# pullSecrets: nil
##
## Configure persistence
persistence:
##
## Use a PVC to persist data.
enabled: false
##
## Persistent class
# storageClass: classname
##
## Access mode:
accessModes:
- ReadWriteOnce
nodeSelector: {}
tolerations: []
affinity: {}
# -- TopologySpreadConstraints to be added to all deployments
topologySpreadConstraints: []
This is the values.yaml file.
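One note on the resources comments in that file: the global resources block applies to every Superset component, and the per-component blocks (supersetNode.resources, supersetWorker.resources, etc.) take precedence over it. A minimal sketch of that layering, with purely illustrative numbers, would look like this:

# Illustrative sizing only - adjust for your cluster.
resources:
  requests:
    cpu: 250m
    memory: 512Mi
supersetWorker:
  resources:
    # This block wins over the global resources object for the worker pods.
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      memory: 2Gi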