I am not currently very involved with Storm SQL, so take my comments with a
grain of salt.  I am +1 on waiting to define GROUP BY and JOIN until we have a
solid base that we can build on.

But I do want to push back a bit on the claim that there is no standard for
streaming SQL.  Technically that is true, but in general the streaming SQL
solutions I have seen restrict the supported queries to either have no
aggregations at all (pure element-by-element operations) or rely on the OVER
clause, and more specifically on time-based windows.
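
As a concrete illustration, here is a hedged sketch of the kind of query those
systems typically do allow, an OVER clause with a time-bounded window (the
stream and column names are made up, and the exact frame syntax varies by
dialect):

    -- Running one-hour aggregate per product; this emits one output row per
    -- input row, so the result stays element-by-element rather than becoming
    -- a mutating table.
    SELECT
      rowtime,
      productId,
      units,
      SUM(units) OVER (
        PARTITION BY productId
        ORDER BY rowtime
        RANGE INTERVAL '1' HOUR PRECEDING
      ) AS unitsLastHour
    FROM Orders;
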
The issue is that the output of a streaming operation is not a table; it is a
protocol that includes updates.  This is where Beam really shines, because it
exposes all of that ugly underbelly and lets users have complete control over
it.  The API is rather complex, and I think a bit ugly, because of that.  I
would suggest that we define what we want it to look like and let that drive
the underlying implementation.  If we are feeling ambitious, we can get
together with Flink, Spark, and anyone else who might be interested and see if
we can come to an understanding on what extensions to SQL we should put in to
really make streaming work properly.
- Bobby

On Friday, October 21, 2016 2:18 AM, Jungtaek Lim <[email protected]> wrote:

Hi devs,

Sorry to send multiple mails at once. I had been resolving issues
sequentially, and now I've stopped for a bit to reflect on the direction of
Storm SQL.

I'd like to propose a destructive action: dropping the GROUP BY and JOIN
features from Storm SQL, which fortunately have not been released yet.

The reason for dropping these features is simple: they borrow Trident
semantics (within a micro-batch, or stateful) and don't make sense under true
"streaming" semantics.
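
To make the concern concrete, consider a plain aggregation like the sketch
below (the stream and column names are made up). On a bounded table it has a
single final answer; on an unbounded stream it never does, so evaluating it
per micro-batch quietly changes the meaning of the query:

    -- A GROUP BY with no window over an unbounded stream: there is no point
    -- at which this count is "final", so per-micro-batch partial results
    -- answer a different question than the query appears to ask.
    SELECT productId, COUNT(*) AS cnt
    FROM Orders
    GROUP BY productId;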

Spark and Flink interpret "streaming" aggregation and join as windowed
operators. Since there's no SQL standard for streaming (not even a de facto
one), they are adding these features to their own APIs (Structured Streaming
for Spark, and the Table API for Flink) and haven't addressed them on the SQL
side yet.

I was eager to add more features to Storm SQL to make progress (as Bobby also
pointed out), but after working on these things I have changed my mind: not
confusing users is more important than adding features.

Btw, Storm SQL "temporarily" relies on Trident since we don't have a
higher-level API on core and we don't want to build topologies from the ground
up. AFAIK, choosing Trident was not about committing to micro-batch, and IMHO
Storm SQL should run in a per-tuple streaming manner instead of micro-batch.
Integrating the streams API into Storm SQL could be a great internal project
for a POC of the streams API, though exactly-once needs to be addressed first.

"GROUP BY" is also something SQE supports now (SQE aggregates in a stateful,
exactly-once way), so I would like to hear your opinions on this.

Flink and Storm are waiting for Calcite to make progress on streaming SQL:
https://calcite.apache.org/docs/stream.html (for now, most of the definitions
are not implemented yet).
This means that we may not be able to support streaming SQL semantics in SQL
statements until Calcite finishes that work. I think this is OK since there is
plenty of other work left on Storm SQL, and Storm SQL is experimental anyway
(Spark Structured Streaming and Flink's streaming SQL are also in alpha or
experimental states).
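
For reference, the kind of statement the Calcite proposal above describes
looks roughly like the sketch below (a tumbling-window aggregation; the Orders
stream and its columns are assumed for illustration, and most of this is not
implemented in Calcite yet):

    -- Tumbling one-hour window in the style of the Calcite streaming SQL
    -- proposal: each window closes once per hour and emits one row per
    -- product, which gives the aggregation a well-defined streaming meaning.
    SELECT STREAM
      TUMBLE_END(rowtime, INTERVAL '1' HOUR) AS rowtime,
      productId,
      COUNT(*) AS c
    FROM Orders
    GROUP BY TUMBLE(rowtime, INTERVAL '1' HOUR), productId;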

While waiting, we might want to have a LINQ-style API like the Table API and
address aggregation and join from there, but that requires a huge amount of
work and largely duplicates the streams API (STORM-1961
<https://issues.apache.org/jira/browse/STORM-1961>) in terms of adding a
high-level API. IMHO, if the streams API is well defined, it should be fairly
easy to build on and a LINQ-style API shouldn't be strictly necessary (though
some may find 'select', 'where', and so on more convenient).

Please share your opinions about this. I'd especially like to see JW Player
participating in the discussion, since aggregation is already supported by
SQE.

Thanks for reading quite a long thread.

Thanks,
Jungtaek Lim (HeartSaVioR)


   
