Robin,

My query statement is as below:

select id, name, trans_date, gender, hobby, job, country from Employees LIMIT 100
In PostgreSQL it works very well: with 10M records in the database it took less
than 20ms. In Spark SQL, however, it took a long time.
Michael,

Got it. That is not good news for me, but thanks anyway.

Regards,
Yi
On Tuesday, May 5, 2015 5:59 AM, Michael Armbrust <[email protected]>
wrote:
The JDBC interface for Spark SQL does not support pushing down limits today.
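Since the limit is not pushed down, one common workaround (a sketch only, using the Spark 1.3-era API; the JDBC URL and credentials are placeholders) is to embed the LIMIT in a parenthesised subquery passed as the "dbtable" option, so PostgreSQL applies it before Spark fetches any rows:

```scala
// Sketch, not a definitive fix: connection details are hypothetical.
// Because "dbtable" is interpolated into the query PostgreSQL runs,
// the LIMIT executes on the database side and Spark only receives 100 rows.
val df = sqlContext.load("jdbc", Map(
  "url"     -> "jdbc:postgresql://localhost/mydb?user=me&password=secret",
  "dbtable" -> "(select id, name, trans_date, gender, hobby, job, country from Employees limit 100) as t"
))
df.show()
```

Note this requires a running Spark context and a reachable PostgreSQL instance; the subquery must be aliased (the "as t") for the JDBC source to accept it.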
On Mon, May 4, 2015 at 8:06 AM, Robin East <[email protected]> wrote:
And a further question: have you tried running this query in psql? What's the
performance like there?
On 4 May 2015, at 16:04, Robin East <[email protected]> wrote:
What query are you running? It may be the case that your query requires
PostgreSQL to do a large amount of work before identifying the first n rows.
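As a hypothetical illustration of that point (the ORDER BY column here is invented for the example): if the query sorts on an unindexed column, PostgreSQL must scan and sort the whole table before it can return even the first 100 rows, so the LIMIT saves little:

```sql
-- Hypothetical example: the sort must complete before any row is emitted,
-- so "LIMIT 100" barely reduces the work. EXPLAIN ANALYZE shows where
-- the time goes.
EXPLAIN ANALYZE
SELECT * FROM Employees ORDER BY hobby LIMIT 100;
```

A plain `SELECT ... LIMIT 100` with no ORDER BY, by contrast, can stop after the first 100 rows it finds.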
On 4 May 2015, at 15:52, Yi Zhang <[email protected]> wrote:
I am trying to query PostgreSQL using LIMIT n to reduce memory usage and
improve query performance, but I found it takes as long as the same query
without LIMIT. That confuses me. Does anybody know why?
Thanks.
Regards,
Yi