My basic test is here - https://github.com/rohitkapoor1/sparkPushDownAggregate
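For anyone following along, one way to confirm that the aggregate was actually pushed down is to inspect the physical plan: with DS v2 the JDBC scan node reports the pushed expressions. This is a minimal sketch, not taken from the repo above; it assumes a SparkSession where PostgreSQL is already registered as a v2 catalog named "pg", and the `emp` columns (`deptno`, `sal`) are illustrative.

```scala
// Assumes a SparkSession ("spark") with a JDBC v2 catalog registered as "pg".
// The table and column names are placeholders.
val df = spark.sql(
  "SELECT deptno, MAX(sal) AS max_sal FROM pg.public.emp GROUP BY deptno")

// When the push down succeeds, the v2 scan node in the plan lists the pushed
// aggregates (look for a "PushedAggregates" entry); when it does not, Spark
// instead plans its own HashAggregate over a plain row scan.
df.explain(true)
```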
From: German Schiavon
Date: Thursday, 4 November 2021 at 2:17 AM
To: huaxin gao
Cc: Kapoor, Rohit , user@spark.apache.org
Subject: Re: [Spark SQL]: Aggregate Push Down / Spark 3.2
EXTERNAL MAIL: USE CAUTION BEFORE CLICKING LINKS OR OPENING ATTACHMENTS. ALWAYS
VERIFY THE SOURCE OF MESSAGES.
Unsubscribe.
On Mon, Nov 1, 2021 at 6:57 PM Kapoor, Rohit wrote:
> Hi,
>
>
>
> I am testing the aggregate push down for JDBC after going through the JIRA
> - https://issues.apache.org/jira/browse/SPARK-34952
>
> I have the latest Spark 3.2 setup in local mode (laptop).
>
>
>
> I have PostgreSQL v14 locally on my laptop.
Thanks for your guidance Huaxin. I have been able to test the push down
operators successfully against PostgreSQL using DS v2.
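For reference, a DS v2 setup that exercises the push down looks roughly like the following. This is a sketch under stated assumptions: the catalog name "pg", the connection URL, credentials, and the query columns are placeholders for a local PostgreSQL instance, and in Spark 3.2 the aggregate push down only applies when the table is read through a v2 catalog such as JDBCTableCatalog.

```scala
// Sketch of a DS v2 JDBC catalog setup for aggregate push down (Spark 3.2).
// Catalog name, connection details, and credentials are placeholders.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local[*]")
  .appName("agg-pushdown-test")
  // Register PostgreSQL as a v2 catalog; push down requires the DS v2 read path.
  .config("spark.sql.catalog.pg",
    "org.apache.spark.sql.execution.datasources.v2.jdbc.JDBCTableCatalog")
  .config("spark.sql.catalog.pg.url", "jdbc:postgresql://localhost:5432/postgres")
  .config("spark.sql.catalog.pg.driver", "org.postgresql.Driver")
  .config("spark.sql.catalog.pg.user", "postgres")
  .config("spark.sql.catalog.pg.password", "postgres")
  // Aggregate push down is off by default in 3.2.
  .config("spark.sql.catalog.pg.pushDownAggregate", "true")
  .getOrCreate()

// Simple aggregates (MAX/MIN/COUNT/SUM/AVG) over the v2 table can now be
// evaluated by PostgreSQL instead of Spark.
val df = spark.sql("SELECT deptno, MAX(sal), MIN(sal) FROM pg.public.emp GROUP BY deptno")
df.show()
```

Running this needs the PostgreSQL JDBC driver on the classpath (e.g. via `--packages org.postgresql:postgresql:42.2.24` on spark-shell).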
From: huaxin gao
Date: Tuesday, 2 November 2021 at 12:35 AM
To: Kapoor, Rohit
Subject: Re: [Spark SQL]: Aggregate Push Down / Spark 3.2
Cc: user@spark.apache.org
---------- Forwarded message ---------
> From: Kapoor, Rohit
> Date: Mon, Nov 1, 2021 at 6:27 AM
> Subject: [Spark SQL]: Aggregate Push Down / Spark 3.2
> To: user@spark.apache.org
>
> Hi,
>
> I am testing the aggregate push down for JDBC after going through the JIRA -
> https://issues.apache.org/jira/browse/SPARK-34952
>
> I have the latest Spark 3.2 setup in local mode (laptop).
>
> I have PostgreSQL v14 locally on my laptop. I am trying a basic aggregate query
> on “emp” table that has 10