karenbraganz opened a new pull request, #39724:
URL: https://github.com/apache/airflow/pull/39724

   <!--
    Licensed to the Apache Software Foundation (ASF) under one
    or more contributor license agreements.  See the NOTICE file
    distributed with this work for additional information
    regarding copyright ownership.  The ASF licenses this file
    to you under the Apache License, Version 2.0 (the
    "License"); you may not use this file except in compliance
    with the License.  You may obtain a copy of the License at
   
      http://www.apache.org/licenses/LICENSE-2.0
   
    Unless required by applicable law or agreed to in writing,
    software distributed under the License is distributed on an
    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
    KIND, either express or implied.  See the License for the
    specific language governing permissions and limitations
    under the License.
    -->
   
   <!--
   Thank you for contributing! Please make sure that your code changes
   are covered with tests. And in case of new features or big changes
   remember to adjust the documentation.
   
   Feel free to ping committers for the review!
   
   In case of an existing issue, reference it using one of the following:
   
   closes: #ISSUE
   related: #ISSUE
   
   How to write a good git commit message:
   http://chris.beams.io/posts/git-commit/
   -->
   Check pool_slots value in mapped tasks using partial()
   
   
   <!-- Please keep an empty line above the dashes. -->
   ---
   Issue #39639 was opened because a DAG import error shows up for regular tasks with an invalid `pool_slots` value (less than 1), but not for mapped tasks created with `partial()` and `expand()`.
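
   For reference, here is a minimal sketch of the two cases (in practice each would sit in its own DAG file, since the first one raises at parse time; the DAG ids and the use of BashOperator are just illustrative choices, not taken from the issue):
   ```
   import pendulum

   from airflow.decorators import dag
   from airflow.operators.bash import BashOperator


   @dag(start_date=pendulum.datetime(2024, 1, 1), schedule=None)
   def regular_invalid_pool_slots():
       # Raises ValueError in BaseOperator.__init__, so a DAG import error shows up.
       BashOperator(task_id="regular", bash_command="echo hi", pool_slots=0)


   @dag(start_date=pendulum.datetime(2024, 1, 1), schedule=None)
   def mapped_invalid_pool_slots():
       # The same invalid value currently slips through partial()/expand()
       # without any import error.
       BashOperator.partial(task_id="mapped", pool_slots=0).expand(
           bash_command=["echo a", "echo b"]
       )


   regular_invalid_pool_slots()
   mapped_invalid_pool_slots()
   ```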
   
   It looks like this error is raised for regular tasks due to a conditional statement that is executed when the task is instantiated (in the `BaseOperator` `__init__()` method):
   ```
   if self.pool_slots < 1:
       dag_str = f" in dag {dag.dag_id}" if dag else ""
       raise ValueError(f"pool slots for {self.task_id}{dag_str} cannot be less than 1")
   ```
   This code isn't executed for mapped tasks using `partial()`; I am not sure if that's because they are instantiated in a different way. However, I was able to get the error raised by adding the following check to `partial()` in `baseoperator.py`:
   ```
   if partial_kwargs["pool_slots"] < 1:
       dag_str = f" in dag {partial_kwargs['dag'].dag_id}" if partial_kwargs["dag"] else ""
       raise ValueError(f"pool slots for {partial_kwargs['task_id']}{dag_str} cannot be less than 1")
   ```
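
   If this approach holds up, a small regression test along these lines could cover it (the test name and the use of BashOperator are my own assumptions, not something already in the test suite):
   ```
   import pytest

   from airflow.operators.bash import BashOperator


   def test_partial_rejects_invalid_pool_slots():
       # With the proposed check in partial(), an invalid pool_slots value should
       # fail as soon as partial() is called, mirroring BaseOperator.__init__.
       with pytest.raises(ValueError, match="cannot be less than 1"):
           BashOperator.partial(task_id="mapped", pool_slots=0)
   ```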
   
   I am still investigating whether this is the correct approach, or whether this check should instead run when the task is instantiated, as it does for a regular task. I'm not sure whether mapped tasks using `partial()` are purposely designed to be instantiated differently, or whether they should go through some of the same checks as a regular task upon instantiation.
   
   Additionally, if modifying `partial()` is the way to go, I might have to modify the `expand()` method in the `_TaskDecorator` class as well.
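
   If I understand the decorator path correctly, the case I have in mind looks roughly like this (names are illustrative), and with the current code it also parses without an error:
   ```
   import pendulum

   from airflow.decorators import dag, task


   @dag(start_date=pendulum.datetime(2024, 1, 1), schedule=None)
   def taskflow_invalid_pool_slots():
       @task(pool_slots=0)
       def add_one(x):
           return x + 1

       # Mapping here goes through _TaskDecorator.expand() rather than
       # BaseOperator.__init__, so it would need the same validation.
       add_one.expand(x=[1, 2, 3])


   taskflow_invalid_pool_slots()
   ```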
   
   **^ Add meaningful description above**
   Read the **[Pull Request Guidelines](https://github.com/apache/airflow/blob/main/contributing-docs/05_pull_requests.rst#pull-request-guidelines)** for more information.
   In case of fundamental code changes, an Airflow Improvement Proposal ([AIP](https://cwiki.apache.org/confluence/display/AIRFLOW/Airflow+Improvement+Proposals)) is needed.
   In case of a new dependency, check compliance with the [ASF 3rd Party License Policy](https://www.apache.org/legal/resolved.html#category-x).
   In case of backwards incompatible changes please leave a note in a newsfragment file, named `{pr_number}.significant.rst` or `{issue_number}.significant.rst`, in [newsfragments](https://github.com/apache/airflow/tree/main/newsfragments).
   

