Thanks, all.

Actually, I want to know more about Flink SQL and Flink performance.
Here is the Spark benchmark; maybe you have already seen it:
https://amplab.cs.berkeley.edu/benchmark/

Thanks.



Best regards
Hawin



On Fri, Jun 5, 2015 at 1:35 AM, Fabian Hueske <fhue...@gmail.com> wrote:

> If you want to append data to a data set that is stored as files (e.g., on
> HDFS), you can use a directory structure like the following:
>
> dataSetRootFolder
>   - part1
>     - 1
>     - 2
>     - ...
>   - part2
>     - 1
>     - ...
>   - partX
>
> Flink's file input formats support recursive directory scans, so you can
> add new subfolders to dataSetRootFolder and still read the full data set.
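>
> For example, here is a minimal (untested) sketch with the Java DataSet API
> that enables the recursive scan; the HDFS path is just a placeholder:
>
> import org.apache.flink.api.java.DataSet;
> import org.apache.flink.api.java.ExecutionEnvironment;
> import org.apache.flink.configuration.Configuration;
>
> public class RecursiveRead {
>   public static void main(String[] args) throws Exception {
>     ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
>
>     // enable recursive enumeration of nested input files
>     Configuration parameters = new Configuration();
>     parameters.setBoolean("recursive.file.enumeration", true);
>
>     // pass the configuration to the data source; all partX subfolders
>     // under dataSetRootFolder are then read as one data set
>     DataSet<String> fullDataSet = env
>         .readTextFile("hdfs:///path/to/dataSetRootFolder") // placeholder
>         .withParameters(parameters);
>
>     fullDataSet.first(10).print();
>   }
> }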
>
> 2015-06-05 9:58 GMT+02:00 Aljoscha Krettek <aljos...@apache.org>:
>
>> Hi,
>> I think the example could be made more concise by using the Table API.
>> http://ci.apache.org/projects/flink/flink-docs-master/libs/table.html
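>>
>> For instance, here is a rough (untested) sketch of what a small aggregation
>> looks like with the Java Table API; the WC POJO and its field names are
>> made up for illustration, and class/package names may differ a bit between
>> Flink versions:
>>
>> import org.apache.flink.api.java.DataSet;
>> import org.apache.flink.api.java.ExecutionEnvironment;
>> import org.apache.flink.api.java.table.TableEnvironment;
>> import org.apache.flink.api.table.Table;
>>
>> public class TableApiSketch {
>>   // simple POJO: public fields and a default constructor
>>   public static class WC {
>>     public String word;
>>     public long frequency;
>>     public WC() {}
>>     public WC(String word, long frequency) {
>>       this.word = word;
>>       this.frequency = frequency;
>>     }
>>   }
>>
>>   public static void main(String[] args) throws Exception {
>>     ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
>>     TableEnvironment tableEnv = new TableEnvironment();
>>
>>     DataSet<WC> input = env.fromElements(
>>         new WC("hello", 1), new WC("ciao", 1), new WC("hello", 1));
>>
>>     // group and aggregate with relational expressions instead of
>>     // hand-written map/reduce functions
>>     Table result = tableEnv.fromDataSet(input)
>>         .groupBy("word")
>>         .select("word, frequency.sum as frequency");
>>
>>     tableEnv.toDataSet(result, WC.class).print();
>>   }
>> }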
>>
>> Please let us know if you have questions about it; the Table API is still
>> quite new.
>>
>> On Fri, Jun 5, 2015 at 9:03 AM, hawin <hawin.ji...@gmail.com> wrote:
>> > Hi Aljoscha
>> >
>> > Thanks for your reply.
>> > Do you have any tips for Flink SQL?
>> > I know that Spark supports the ORC format. How about Flink SQL?
>> > BTW, you implemented the TPCHQuery10 example in 231 lines of code.
>> > How can I make that as simple as possible with Flink?
>> > I am going to use Flink in my future project. Sorry for so many
>> > questions.
>> > I believe that you guys will make a world of difference.
>> >
>> >
>> > @Chiwan
>> > You made a very good example for me.
>> > Thanks a lot
>> >
>>
>
>
