Rahul,
*double rows* is an estimated row count, which can be used in choosing the
right join operator, and possibly elsewhere.
But to have a proper *PushLimitIntoScan* it is necessary to change the
*String sql*.
Possibly it is necessary to keep *JdbcImplementor* or *JdbcImplementor.Result*
from
Vitalii,
I tried debugging this, and the query received at
JdbcGroupScan#getSpecificScan is already without the LIMIT clause - "SELECT
* FROM public.actor" - so the decision to push down must be made earlier.
The debugger did not hit the DrillPushLimitToScanRule rules either.
Any pointers?
Regards,
Rahul
On
Vitalii,
I made both of the changes, but it did not work: a full scan was still
issued, as shown in the plan below.
00-00 Screen : rowType = RecordType(INTEGER actor_id, VARCHAR(45)
first_name, VARCHAR(45) last_name, TIMESTAMP(3) last_update): rowcount
= 5.0, cumulative cost = {120.5 rows, 165.5 cpu,
I will make the changes and update you.
Regards,
Rahul
On Fri, Oct 19, 2018 at 1:05 AM Vitalii Diravka wrote:
> Rahul,
>
> Possibly *JdbcGroupScan* can be improved, for instance by overriding
> the *supportsLimitPushdown()* and *applyLimit()* methods;
> the *double rows* field can be updated by the
Rahul,
Possibly *JdbcGroupScan* can be improved, for instance by overriding
the *supportsLimitPushdown()* and *applyLimit()* methods;
the *double rows* field can be updated with the limit value.
I've performed the following query: select * from mysql.`testdb`.`table`
limit 2;
but the following one is
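The overrides suggested above can be sketched as follows. This is a minimal stand-alone illustration, not Drill's actual JdbcGroupScan: the `LimitPushdown` interface and the `JdbcGroupScanSketch` class are hypothetical stand-ins for Drill's `AbstractGroupScan` hooks, shown only to make the intent of `supportsLimitPushdown()`/`applyLimit()` concrete.

```java
// Hypothetical sketch of the limit-pushdown hooks discussed in the thread.
// Stand-in for Drill's GroupScan contract; names are illustrative only.
interface LimitPushdown {
    boolean supportsLimitPushdown();
    LimitPushdown applyLimit(int maxRecords);
}

class JdbcGroupScanSketch implements LimitPushdown {
    private final String sql;   // the generated "String sql" to rewrite
    private final double rows;  // the "double rows" planner estimate

    JdbcGroupScanSketch(String sql, double rows) {
        this.sql = sql;
        this.rows = rows;
    }

    @Override
    public boolean supportsLimitPushdown() {
        return true;  // advertise support so the pushdown rule can fire
    }

    @Override
    public LimitPushdown applyLimit(int maxRecords) {
        // Rewrite the generated SQL and shrink the row estimate, per the
        // suggestion to update the `double rows` field with the limit value.
        return new JdbcGroupScanSketch(sql + " LIMIT " + maxRecords,
                Math.min(rows, maxRecords));
    }

    String sql()  { return sql; }
    double rows() { return rows; }
}
```

In real Drill code the new scan would be returned to the planner in place of the original, so the external database receives the LIMIT instead of a full-table SELECT.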
Vitalii,
Created documentation ticket DRILL-6794
How do we proceed on extending the scan operators to support JDBC plugins?
Regards,
Rahul
On Sat, Oct 13, 2018 at 6:47 PM Vitalii Diravka wrote:
> To update the documentation, since those issues were solved by using these
> properties in
To update the documentation, since those issues were solved by using these
properties in the connection URL:
defaultRowFetchSize=1 [1]
defaultAutoCommit=false [2]
The full URL was "url": "jdbc:postgresql://
myhost.mydomain.com/mydb?useCursorFetch=true&defaultAutoCommit=false&loggerLevel=TRACE&loggerFile=/tmp/jdbc.log&defaultRowFetchSize=1
"
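For reference, these properties belong in the Drill JDBC storage plugin's connection URL. A minimal sketch of such a plugin config, assuming the standard Drill JDBC plugin fields (host, database, and credentials here are placeholders, not values from this thread):

```json
{
  "type": "jdbc",
  "driver": "org.postgresql.Driver",
  "url": "jdbc:postgresql://myhost.mydomain.com/mydb?defaultRowFetchSize=1&defaultAutoCommit=false",
  "username": "drill_user",
  "password": "drill_pass",
  "enabled": true
}
```

With PostgreSQL's JDBC driver, cursor-based fetching (and hence bounded memory use) requires autocommit to be off, which is why both properties are needed together.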
If
Should I create tickets to track these issues or should I create a ticket
to update the documentation?
Rahul
On Sat, Oct 13, 2018 at 6:16 PM Vitalii Diravka wrote:
> 1. You are right; that means it is reasonable to extend this rule to
> apply to other Scan operators (or possibly to create
1. You are right; that means it is reasonable to extend this rule to apply
to other Scan operators (or possibly to create a separate one).
2. There was a question about OOM issues in Drill + PostgreSQL; please take
a look [1].
Since you are trying to set up these configs, it will be good if
Hi Rahul,
Drill has the *DrillPushLimitToScanRule* [1] rule, which should do this
optimization when the GroupScan supports limit pushdown.
You can also verify in debug mode whether this rule is fired.
Possibly for some external DBs (like MapR-DB) Drill should have a separate
class for this
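Whether the limit actually reaches the scan can also be checked from the query plan without a debugger. A sketch, assuming a JDBC storage plugin named `pg` (the plugin name is a placeholder):

```sql
EXPLAIN PLAN FOR SELECT * FROM pg.public.actor LIMIT 5;
-- If pushdown works, the Jdbc scan's generated SQL itself contains the
-- LIMIT; otherwise a separate Limit operator sits above a full
-- "SELECT * FROM public.actor" scan, as in the plan shown earlier.
```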
Hi,
Drill does not push LIMIT queries down to external databases, and I assume
this could be related to Calcite. This leads to out-of-memory situations
when querying a large table to view a few records. Is there something that
could be improved here? One solution would be to push filters down to