You can often get the origin of a column via RelMetadataQuery.getColumnOrigin(),
but keep in mind that columns can have multiple origins or no origin at all.
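As a self-contained illustration of why origins come back as a set rather than a single value: a column straight out of a scan has one origin, a column merged from several inputs (e.g. by a UNION) has several, and a computed expression has none. The class below is a toy model written for this thread, not Calcite's RelMetadataQuery/RelColumnOrigin API.

```java
import java.util.*;

public class ColumnOrigins {
    /** Toy relational node: maps an output column ordinal to the set of
     *  base-table columns it came from (hypothetical, not Calcite's RelNode). */
    interface Rel { Set<String> origins(int column); }

    /** A table scan: column i originates from exactly one base column. */
    static Rel scan(String table) {
        return col -> Set.of(table + ".c" + col);
    }

    /** A project whose columns are computed expressions: no origin at all. */
    static Rel derivedProject(Rel input) {
        return col -> Set.of();
    }

    /** A UNION: each output column originates from the corresponding column
     *  of every input, so it can have multiple origins. */
    static Rel union(Rel... inputs) {
        return col -> {
            Set<String> all = new HashSet<>();
            for (Rel r : inputs) all.addAll(r.origins(col));
            return all;
        };
    }

    public static void main(String[] args) {
        Rel emp = scan("EMP");
        Rel dept = scan("DEPT");
        System.out.println(emp.origins(0));              // single origin
        System.out.println(union(emp, dept).origins(0)); // two origins
        System.out.println(derivedProject(emp).origins(0)); // no origin
    }
}
```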
> On Feb 18, 2017, at 5:47 PM, barry squire wrote:
>
> Hi everyone,
>
> Calcite's SQL parsing, planning and execution …
Calcite differs from Catalyst in many ways. First of all, Catalyst is
essentially a heuristic optimizer, while Calcite optimizers often combine
heuristics and cost-based optimization. Catalyst pushes down predicates and
projections to most data sources, while Calcite can often push down full …
> On … 20, 2017 at 4:24 PM, Jacques Nadeau <jacq...@apache.org> wrote:
>> Jordan, super interesting work you've shared. It would be very cool to get
>> this incorporated back into Spark mainline. That would continue to broaden
>> Calcite's reach :)
So, AFAIK the Spark adapter that's inside Calcite is in an unusable state right
now. It's still using Spark 1.x, and the last time I tried it I couldn't get it
to run. It probably needs to be either removed or completely rewritten. But I
can certainly offer some guidance on working with Spark and …
> … example on calcite+spark?
>
> Riccardo Tommasini
> Master Degree Computer Science
> PhD Student at Politecnico di Milano (Italy)
> streamreasoning.org
We added the double-colon syntax to our own fork of the Calcite grammar to
placate our analysts and their addiction to Redshift. TBH it was not easy, and
our implementation still doesn't support things like casting from a scalar
subquery. Essentially, you can cast identifiers and function …
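To show what the Redshift-style syntax buys you, here is a crude, purely illustrative textual rewrite of `expr::type` into standard CAST. Our real change was to the parser grammar, not a regex pass; DoubleColonRewrite below is a hypothetical sketch, and like our implementation it only handles the simple-identifier case, not scalar subqueries.

```java
import java.util.regex.Pattern;

public class DoubleColonRewrite {
    // Deliberately only matches a bare identifier followed by ::type,
    // mirroring the limitation described above (no subqueries, no nesting).
    private static final Pattern CAST_SYNTAX =
        Pattern.compile("([A-Za-z_][A-Za-z0-9_]*)::([A-Za-z_][A-Za-z0-9_]*)");

    /** Rewrites every ident::type occurrence into CAST(ident AS type). */
    static String rewrite(String sql) {
        return CAST_SYNTAX.matcher(sql).replaceAll("CAST($1 AS $2)");
    }

    public static void main(String[] args) {
        System.out.println(rewrite("SELECT price::varchar FROM orders"));
        // SELECT CAST(price AS varchar) FROM orders
    }
}
```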
I think you should just be able to override getStatistic() in your table
implementations and return a Statistic object that has an accurate row count.
The table scan should compute its cost from that; it uses 100d as a default,
IIRC.
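A sketch of the fallback behavior described above, using stand-in classes: ScanCost and its nested Statistic interface are hypothetical, not Calcite's org.apache.calcite.schema.Statistic, but the shape (a nullable row count with a 100d default) matches what's described.

```java
public class ScanCost {
    static final double DEFAULT_ROW_COUNT = 100d;

    /** Stand-in for a table statistic whose row count may be unknown. */
    interface Statistic { Double getRowCount(); }

    /** Row count the scan would base its cost on: the table's estimate if
     *  one is provided, otherwise the 100d default. */
    static double rowCount(Statistic statistic) {
        Double rows = (statistic == null) ? null : statistic.getRowCount();
        return rows == null ? DEFAULT_ROW_COUNT : rows;
    }

    public static void main(String[] args) {
        System.out.println(rowCount(null));             // falls back to 100.0
        System.out.println(rowCount(() -> 1_000_000d)); // accurate estimate
    }
}
```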
> On Sep 25, 2016, at 1:56 PM, Γιώργος Θεοδωράκης wrote: …
You got most of the way there, but to optimize the plan you need to add
programs to your framework configuration. See the programs() method of the
framework config.
A Program is essentially a RelOptPlanner and a set of rules to apply. You can
add several Programs to your Planner by using the …
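The idea of composing several programs into one pipeline can be sketched like this. ProgramPipeline is a toy stand-in written for this thread: in Calcite itself a Program transforms a RelNode and Programs.sequence() plays this role, whereas here a "plan" is just a String.

```java
import java.util.List;
import java.util.function.UnaryOperator;

public class ProgramPipeline {
    /** Chains a list of plan transformations into one, applying them in
     *  order — the same idea as sequencing optimization programs. */
    static UnaryOperator<String> sequence(List<UnaryOperator<String>> programs) {
        return plan -> {
            String current = plan;
            for (UnaryOperator<String> p : programs) {
                current = p.apply(current);
            }
            return current;
        };
    }

    public static void main(String[] args) {
        UnaryOperator<String> pipeline = sequence(List.of(
            s -> s + " -> decorrelated",
            s -> s + " -> optimized"));
        System.out.println(pipeline.apply("logical plan"));
        // logical plan -> decorrelated -> optimized
    }
}
```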
>> On 9/15/16, 1:07 AM, "jordan.halter...@gmail.com" wrote: …
You have to run planner.validate after parse; otherwise the state in
PlannerImpl will be incorrect. You can also go into PlannerImpl and steal some
code if you need to circumvent those states, but I agree this is probably the
easiest way to go about it. The alternative is just creating a …
SqlNode is the abstract syntax tree that represents the actual structure of the
query the user entered. When a query is first parsed, it's parsed into a
SqlNode. For example, a SELECT query will be parsed into a SqlSelect with a
list of fields, a table, a join, etc. Calcite is also capable of …
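A toy model of that tree shape — MiniAst, Identifier, and Select here are hypothetical stand-ins for this thread, not Calcite's SqlNode/SqlSelect classes, but they show how the parsed tree mirrors the query text.

```java
import java.util.List;

public class MiniAst {
    /** Stand-in for the abstract syntax tree node type. */
    interface SqlNode {}

    /** A bare name, e.g. a column or table reference. */
    record Identifier(String name) implements SqlNode {}

    /** A SELECT with its list of projected fields and its FROM source. */
    record Select(List<SqlNode> fields, SqlNode from) implements SqlNode {}

    public static void main(String[] args) {
        // Roughly the tree a parser would build for "SELECT a, b FROM t".
        SqlNode tree = new Select(
            List.of(new Identifier("a"), new Identifier("b")),
            new Identifier("t"));
        System.out.println(tree);
    }
}
```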
> … some range predicates if the partition key is already restricted by an
> equality predicate and the range predicate is part of the clustering key.
>
> Cheers,
> Michael Mior
It's just a matter of splitting out the equality predicates. What I would do is
create a Filter rule that splits the filter based on whether each predicate is
an equality. If the Filter is split, the rule returns a Filter with the
equality predicates as input to a Filter with the non-equality predicates. That …
I'm no Calcite expert (yet) but I have a few suggestions based on my own
experience with using Planner and digging through its code. Keep in mind that
there are surely better people to explain this around here, but I'll do my best
based on what I've learned...
When using Planner, you shouldn't …