harrisjoseph commented on issue #25905:
URL: https://github.com/apache/airflow/issues/25905#issuecomment-1225956990

   Thank you for your response, I appreciate the input 🙂. I apologise for using the wrong terminology: I've been reading the code where `logical_date` is passed to `infer_manual_data_interval`, and I haven't been following the conceptual shift from `execution_date` -> `logical_date` -> `data_interval` very closely.
   
   Ultimately I'm trying to resolve the issue discussed [here](https://github.com/apache/airflow/discussions/25687). A custom data interval isn't what I'm looking for; a _correct_ data interval is what I'd like to achieve:
   ```python
   from datetime import datetime

   import pytz

   from airflow.timetables.interval import CronDataIntervalTimetable

   # Pass 2022-02-14 (a Monday). I expect start = 2022-02-07 and end = 2022-02-14.
   dt = datetime(2022, 2, 14, tzinfo=pytz.UTC)
   tt = CronDataIntervalTimetable('0 0 * * 1', pytz.UTC)
   tt.infer_manual_data_interval(run_after=dt)

   # Instead I get start 2022-01-31 and end 2022-02-07:
   # DataInterval(start=DateTime(2022, 1, 31, 0, 0, 0, tzinfo=Timezone('UTC')),
   #              end=DateTime(2022, 2, 7, 0, 0, 0, tzinfo=Timezone('UTC')))
   ```
   
   Perhaps the `run_after` date filter needs to be inclusive (greater than or equal) rather than exclusive (strictly greater than), so that the behaviour matches the previous behaviour of `airflow dags trigger -e`. Either way, I think we need to be able to pass something to `airflow dags trigger` that doesn't belong to the old, forbidden world of execution_dates: either a data_interval, or a logical_date, or something else. But it would make sense for the resulting dag_run to have a data_interval that matches the input of the command, as it did before the deprecation of `execution_date`, if possible?
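   To make the inclusive-vs-exclusive distinction concrete, here is a stdlib-only sketch (not Airflow's actual implementation; `infer_weekly_interval` is a hypothetical helper) of inferring the latest complete Monday-to-Monday interval for a `0 0 * * 1` schedule. The two modes differ only when `run_after` falls exactly on a cron boundary:

   ```python
   from datetime import datetime, timedelta, timezone


   def infer_weekly_interval(run_after, inclusive):
       """Infer the most recent complete Monday 00:00 -> Monday 00:00 interval.

       When run_after lands exactly on a Monday midnight boundary:
       - inclusive=True treats that boundary as the interval end
         (the behaviour I'd expect), while
       - inclusive=False steps back one more week
         (matching the behaviour I'm currently seeing).
       """
       # Snap run_after back to the most recent Monday at 00:00.
       midnight = run_after.replace(hour=0, minute=0, second=0, microsecond=0)
       last_monday = midnight - timedelta(days=midnight.weekday())
       on_boundary = run_after == last_monday
       if on_boundary and not inclusive:
           end = last_monday - timedelta(weeks=1)
       else:
           end = last_monday
       return (end - timedelta(weeks=1), end)


   dt = datetime(2022, 2, 14, tzinfo=timezone.utc)  # a Monday, on the boundary
   infer_weekly_interval(dt, inclusive=False)  # 2022-01-31 .. 2022-02-07
   infer_weekly_interval(dt, inclusive=True)   # 2022-02-07 .. 2022-02-14
   ```

   For any `run_after` that is not exactly on a boundary, the two modes agree, which is why the discrepancy only shows up when triggering with a date that coincides with the schedule.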
   
   Let me know what you think

