[ https://issues.apache.org/jira/browse/SPARK-54630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinod KC updated SPARK-54630:
-----------------------------
    Description: 
 

Add a {{date_bucket}} function that buckets DATE values into arbitrary fixed-duration intervals aligned to the Unix epoch (1970-01-01).
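
The intended semantics are plain floor arithmetic on days since the epoch, and can be sketched with existing Spark SQL functions (illustrative only, not the proposed implementation):

{code:java}
-- Sketch: date_bucket(INTERVAL '7' DAY, d) snaps d to the start of its
-- 7-day bucket counted from 1970-01-01
SELECT DATE_ADD(DATE'1970-01-01',
                CAST((DATEDIFF(DATE'2025-01-15', DATE'1970-01-01') DIV 7) * 7 AS INT));
-- Result: 2025-01-09
{code}

Note that {{DIV}} truncates toward zero, so a real implementation needs floor division to bucket pre-epoch dates correctly.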

 

Currently, Spark provides:
 * {*}{{date_trunc()}}{*}: Truncates to calendar-based units (year, quarter, month, week, day) but only supports those predefined units (see the comparison below)
 * {*}{{window()}}{*}: For TIMESTAMP bucketing with tumbling/sliding windows, 
returns a struct with start/end times
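
For example, truncating to {{WEEK}} snaps to the calendar Monday, which differs from an epoch-aligned 7-day bucket:

{code:java}
-- Calendar-week truncation snaps to Monday
SELECT TRUNC(DATE'2025-01-15', 'WEEK');
-- Result: 2025-01-13 (a Monday)
-- An epoch-aligned 7-day bucket for the same date starts on 2025-01-09
{code}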

However, there is no function to bucket dates by arbitrary fixed durations like:
 * 7-day buckets (weekly reporting aligned to epoch, not calendar weeks)
 * 14-day buckets (bi-weekly cycles)
 * Custom N-day intervals for business cycles (the closest existing workaround is sketched below)
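
The closest workaround today is casting to TIMESTAMP and using {{window()}}. A sketch against the {{orders}} table used below; note the extra casts, and that window alignment happens on the underlying UTC instant, so in non-UTC sessions the boundaries may not fall on date boundaries:

{code:java}
-- Possible workaround today: verbose, and window() yields a TIMESTAMP
-- struct field, not a DATE
SELECT CAST(window(CAST(order_date AS TIMESTAMP), '7 days').start AS DATE) AS week_bucket
FROM orders;
{code}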

This is a common requirement for:
 * {*}Time-series analytics{*}: Grouping events by custom time periods
 * {*}Reporting{*}: Creating consistent bucketing across different date ranges
 * {*}Data partitioning{*}: Organizing data by fixed-duration intervals

 
{code:java}
-- 7-day buckets
SELECT date_bucket(INTERVAL '7' DAY, DATE'2025-01-15');
-- Result: 2025-01-09

-- 14-day buckets (INTERVAL '2' WEEK = 14 days)
SELECT date_bucket(INTERVAL '2' WEEK, DATE'2025-01-15');
-- Result: 2025-01-02

-- Group orders by 7-day buckets
SELECT date_bucket(INTERVAL '7' DAY, order_date) AS week_bucket,
       COUNT(*) AS order_count,
       SUM(amount) AS total_sales
FROM orders
WHERE order_date >= DATE'2025-01-01'
GROUP BY week_bucket
ORDER BY week_bucket; 

-- Demonstrate date_bucket with multiple input dates
SELECT stack(8,
    DATE'2025-01-01', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-01'),
    DATE'2025-01-05', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-05'),
    DATE'2025-01-08', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-08'),
    DATE'2025-01-15', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-15'),
    DATE'2025-01-14', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-14'),
    DATE'2025-01-13', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-13'),
    DATE'2025-01-22', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-22'),
    DATE'2025-01-29', date_bucket(INTERVAL '1' WEEK, DATE'2025-01-29')
  ) AS (input_date, bucket_start);

-- Result:
-- input_date    bucket_start
-- 2025-01-01    2024-12-26
-- 2025-01-05    2025-01-02
-- 2025-01-08    2025-01-02
-- 2025-01-15    2025-01-09
-- 2025-01-14    2025-01-09
-- 2025-01-13    2025-01-09
-- 2025-01-22    2025-01-16
-- 2025-01-29    2025-01-23{code}
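
For reference, each bucket start above is an exact multiple of 7 days from the epoch, e.g. {{DATEDIFF(DATE'2025-01-09', DATE'1970-01-01') = 20097 = 2871 * 7}}.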



> Add date_bucket function for arbitrary interval bucketing of dates
> ------------------------------------------------------------------
>
>                 Key: SPARK-54630
>                 URL: https://issues.apache.org/jira/browse/SPARK-54630
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 3.5.0, 4.0.0
>            Reporter: Vinod KC
>            Priority: Major
>


