Vishal2696 opened a new issue #22366:
URL: https://github.com/apache/airflow/issues/22366


   ### Apache Airflow version
   
   2.2.3
   
   ### What happened
   
   I'm building the Airflow image from source using a plain `docker build` command (commit used for the build: `06c82e17e9d7ff1bf261357e84c6013ccdb3c241`). The build succeeds and, after deploying to my Kubernetes cluster, the webserver pod comes up and runs, but when I try to log in I am shown the error page below.
   
   ```
   Something bad has happened.
    
   Airflow is used by many users, and it is very likely that others had similar 
problems and you can easily find
   a solution to your problem.
    
   Consider following these steps:
    
     * gather the relevant information (detailed logs with errors, reproduction 
steps, details of your deployment)
    
     * find similar issues using:
        * [GitHub Discussions](https://github.com/apache/airflow/discussions)
        * [GitHub Issues](https://github.com/apache/airflow/issues)
        * [Stack Overflow](https://stackoverflow.com/questions/tagged/airflow)
        * the usual search engine you use on a daily basis
    
     * if you run Airflow on a Managed Service, consider opening an issue using 
the service support channels
    
     * if you tried and have difficulty with diagnosing and fixing the problem 
yourself, consider creating a [bug 
report](https://github.com/apache/airflow/issues/new/choose).
       Make sure however, to include all relevant details and results of your 
investigation so far.
    
   Python version: 3.8.10
   Airflow version: 2.2.3
   Node: airflow-webserver
   
-------------------------------------------------------------------------------
   Traceback (most recent call last):
     File 
"/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", 
line 1808, in _execute_context
       self.dialect.do_execute(
     File 
"/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py",
 line 732, in do_execute
       cursor.execute(statement, parameters)
   psycopg2.errors.UndefinedColumn: column dag.has_import_errors does not exist
   LINE 1: ...rrency_limits AS dag_has_task_concurrency_limits, dag.has_im...
   ``` 
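   A `psycopg2.errors.UndefinedColumn` on `dag.has_import_errors` usually means the metadata database schema is behind the running code: the column is created by a migration newer than whatever the database was last upgraded to, so an image built from a recent commit against an un-migrated database fails exactly like this. Running `airflow db upgrade` (or `airflow db check-migrations` first to verify) against the metadata database normally resolves it. As a minimal sketch of the diagnosis — the in-memory SQLite URL and the stand-in `dag` table below are illustrative assumptions, not the real Airflow schema — SQLAlchemy's inspector can confirm whether the column exists:

   ```python
   # Hedged sketch: check whether dag.has_import_errors exists.
   # An in-memory SQLite DB stands in here; point the URL at your real
   # metadata database (the sql_alchemy_conn from airflow.cfg) instead.
   from sqlalchemy import create_engine, inspect, text

   engine = create_engine("sqlite://")  # stand-in for the Airflow metadata DB
   with engine.begin() as conn:
       # Stand-in "dag" table that, like an un-migrated schema, lacks the column
       conn.execute(text("CREATE TABLE dag (dag_id TEXT PRIMARY KEY)"))

   columns = {c["name"] for c in inspect(engine).get_columns("dag")}
   if "has_import_errors" not in columns:
       print("schema is outdated: run `airflow db upgrade`")
   ```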
   
   I can also see an error in the scheduler, though I'm not sure whether the two are related. Below is the scheduler error.
   ```
   [SQL: SELECT dag.dag_id AS dag_dag_id, dag.root_dag_id AS dag_root_dag_id, 
dag.is_paused AS dag_is_paused, dag.is_subdag AS dag_is_subdag, dag.is_active 
AS dag_is_active, dag.last_parsed_time AS dag_last_parsed_time, 
dag.last_pickled AS dag_last_pickled, dag.last_expired AS dag_last_expired, 
dag.scheduler_lock AS dag_scheduler_lock, dag.pickle_id AS dag_pickle_id, 
dag.fileloc AS dag_fileloc, dag.owners AS dag_owners, dag.description AS 
dag_description, dag.default_view AS dag_default_view, dag.schedule_interval AS 
dag_schedule_interval, dag.max_active_tasks AS dag_max_active_tasks, 
dag.max_active_runs AS dag_max_active_runs, dag.has_task_concurrency_limits AS 
dag_has_task_concurrency_limits, dag.has_import_errors AS 
dag_has_import_errors, dag.next_dagrun AS dag_next_dagrun, 
dag.next_dagrun_data_interval_start AS dag_next_dagrun_data_interval_start, 
dag.next_dagrun_data_interval_end AS dag_next_dagrun_data_interval_end, 
dag.next_dagrun_create_after AS dag_next_dagrun_create_after
   FROM dag
   WHERE dag.is_paused = false AND dag.is_active = true AND 
dag.has_import_errors = false AND dag.next_dagrun_create_after <= now() ORDER 
BY dag.next_dagrun_create_after
   LIMIT %(param_1)s FOR UPDATE OF dag SKIP LOCKED]
   [parameters: {'param_1': 10}]
   (Background on this error at: https://sqlalche.me/e/14/2j85)
   [2022-03-18 12:58:41,415] {process_utils.py:120} INFO - Sending Signals.SIGTERM to group 23. PIDs of all processes in the group: []
   [2022-03-18 12:58:41,415] {process_utils.py:75} INFO - Sending the signal Signals.SIGTERM to group 23
   [2022-03-18 12:58:41,415] {process_utils.py:89} INFO - Sending the signal Signals.SIGTERM to process 23 as process group is missing.
   [2022-03-18 12:58:41,415] {scheduler_job.py:655} INFO - Exited execute loop
   Traceback (most recent call last):
     File 
"/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/base.py", 
line 1808, in _execute_context
       self.dialect.do_execute(
     File 
"/home/airflow/.local/lib/python3.8/site-packages/sqlalchemy/engine/default.py",
 line 732, in do_execute
       cursor.execute(statement, parameters)
   
   psycopg2.errors.InFailedSqlTransaction: current transaction is aborted, 
commands ignored until end of transaction block
   ```
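   The `InFailedSqlTransaction` error is most likely a follow-on symptom rather than a second bug: once any statement fails inside a PostgreSQL transaction, the server rejects every subsequent command until a rollback, so this traceback points back at the same missing-column failure. An illustrative stand-in (SQLite here, not the real Postgres metadata DB, so it shows only the primary failure mode, not the aborted-transaction state) reproduces the root error, naming the missing column:

   ```python
   # Illustrative stand-in: querying a column that a migration never
   # created fails the same way the scheduler query does.
   import sqlite3

   conn = sqlite3.connect(":memory:")
   conn.execute("CREATE TABLE dag (dag_id TEXT, is_paused INTEGER)")

   err = None
   try:
       conn.execute("SELECT dag_id, has_import_errors FROM dag")
   except sqlite3.OperationalError as exc:
       err = exc

   print(err)  # the error message names the missing column
   ```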
   
   
   ### What you think should happen instead
   
   The webserver should have displayed the Airflow DAGs homepage after login.
   
   ### How to reproduce
   
   _No response_
   
   ### Operating System
   
   original airflow build from source code
   
   ### Versions of Apache Airflow Providers
   
   _No response_
   
   ### Deployment
   
   Other Docker-based deployment
   
   ### Deployment details
   
   _No response_
   
   ### Anything else
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [ ] Yes I am willing to submit a PR!
   
   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
   

