esc opened a new issue #14831:
URL: https://github.com/apache/airflow/issues/14831


   
   
   **Apache Airflow version**: apache-airflow==1.10.14
   
   
   **Kubernetes version (if you are using kubernetes)** (use `kubectl version`): None
   
   **Environment**:
   
   - **Cloud provider or hardware configuration**: x86
   - **OS** (e.g. from /etc/os-release): Ubuntu 16.04.4 LTS
   - **Kernel** (e.g. `uname -a`): Linux numba-linux64-gpu 4.4.0-87-generic #110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
   - **Install tools**: `pip`
   - **Others**: `docker-compose`
   
   **What happened**:
   
   The command `airflow scheduler -p` fails immediately with an `IndexError`:
   
   ```
   (base) root@d10c31496430:/# airflow scheduler -p
   /ci_repo/airflow/airflow-ci/airflow_ci/ui.py:12: DeprecationWarning: get: Accessing configuration method 'get' directly from the configuration module is deprecated. Please access the configuration from the 'configuration.conf' object via 'conf.get'
     DASK_DASHBOARD = configuration.get('dask', 'dashboard')
   /ci_repo/airflow/airflow-ci/airflow_ci/airflow_plugins.py:7: FutureWarning: Registering operators or sensors in plugins is deprecated -- these should be treated like 'plain' python modules, and imported normally in DAGs.
   Airflow 2.0 has removed the ability to register these types in plugins. See <http://airflow.apache.org/docs/stable/howto/custom-operator.html>.
     class CIPlugin(AirflowPlugin):
   Traceback (most recent call last):
     File "/opt/conda/bin/airflow", line 37, in <module>
       args.func(args)
     File "/opt/conda/lib/python3.7/site-packages/airflow/utils/cli.py", line 78, in wrapper
       metrics = _build_metrics(f.__name__, args[0])
     File "/opt/conda/lib/python3.7/site-packages/airflow/utils/cli.py", line 108, in _build_metrics
       full_command[idx + 1] = "*" * 8
   IndexError: list assignment index out of range
   ```
   
   **What you expected to happen**:
   
   I expected the `-p` flag to be interpreted as `--do_pickle`, as per the command synopsis:
   
   ```
   usage: airflow scheduler [-h] [-d DAG_ID] [-sd SUBDIR] [-r RUN_DURATION]
                            [-n NUM_RUNS] [-p] [--pid [PID]] [-D]
                            [--stdout STDOUT] [--stderr STDERR] [-l LOG_FILE]
   optional arguments:
     -h, --help            show this help message and exit
     -d DAG_ID, --dag_id DAG_ID
                           The id of the dag to run
     -sd SUBDIR, --subdir SUBDIR
                           File location or directory from which to look for the
                           dag. Defaults to '[AIRFLOW_HOME]/dags' where
                            [AIRFLOW_HOME] is the value you set for 'AIRFLOW_HOME'
                           config you set in 'airflow.cfg'
     -r RUN_DURATION, --run-duration RUN_DURATION
                           Set number of seconds to execute before exiting
     -n NUM_RUNS, --num_runs NUM_RUNS
                           Set the number of runs to execute before exiting
     -p, --do_pickle       Attempt to pickle the DAG object to send over to the
                           workers, instead of letting workers run their version
                           of the code.
     --pid [PID]           PID file location
     -D, --daemon          Daemonize instead of running in the foreground
     --stdout STDOUT       Redirect stdout to this file
     --stderr STDERR       Redirect stderr to this file
     -l LOG_FILE, --log-file LOG_FILE
                           Location of the log file
   ```
   
   **What you think went wrong**:
   
   The regression was introduced by this PR: https://github.com/apache/airflow/pull/11468

   Since that PR, the option `-p` is misinterpreted as `--password`: the CLI's sensitive-argument masking tries to overwrite the argument following `-p` with `********`, but for `airflow scheduler` the `-p` flag is boolean with nothing after it, hence the `IndexError`.
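
   Judging from the traceback, the masking logic in `_build_metrics` (in `airflow/utils/cli.py`) presumably looks roughly like the sketch below. This is a reconstruction, not the actual source; the function name `mask_sensitive_args` and the exact contents of `SENSITIVE_FIELDS` are assumptions.

   ```
   # Options whose *following* argument should be masked in metrics/logs.
   # NOTE: matching a bare "-p" here is what collides with `scheduler -p`.
   SENSITIVE_FIELDS = {'-p', '--password'}

   def mask_sensitive_args(argv):
       """Return a copy of argv with the values of sensitive options masked."""
       full_command = list(argv)
       for idx, arg in enumerate(full_command):
           if arg in SENSITIVE_FIELDS:
               # For `airflow scheduler -p`, the boolean flag is the last
               # token, so idx + 1 is past the end of the list -> IndexError.
               full_command[idx + 1] = "*" * 8
       return full_command

   mask_sensitive_args(["airflow", "some_cmd", "-p", "secret"])  # masks the value
   mask_sensitive_args(["airflow", "scheduler", "-p"])           # raises IndexError
   ```

   A bounds check such as `if idx + 1 < len(full_command)`, or masking only for commands that actually take a `--password` value, would avoid the crash.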
   
   The workaround is to use the long form `--do_pickle`.
   
   **How to reproduce it**:
   
   Run `airflow scheduler -p` with apache-airflow 1.10.14. The crash occurs in the CLI wrapper before the scheduler starts, so no DAGs or other setup are needed; see the example below.
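
   For example (a minimal sketch; assumes a plain `pip` environment):

   ```
   pip install 'apache-airflow==1.10.14'
   airflow scheduler -p            # crashes with IndexError before the scheduler starts
   airflow scheduler --do_pickle   # the long form works
   ```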
   
   
   
   **Anything else we need to know**:
   
   I reported this via Slack, and Marcos Marx asked me to open an issue about it.
   
   

