Re: [DISCUSS] Release Flink 1.5.1

2018-07-02 Thread Zhijiang(wangzhijiang999)
Hi Chesnay,

I agree with your proposal. I submitted a JIRA ticket, FLINK-9676, related to a 
deadlock issue.
I think we need to confirm whether it should be covered in this release or a later one.

Zhijiang
--
From: Chesnay Schepler
Sent: Monday, July 2, 2018 18:19
To: dev@flink.apache.org
Subject: [DISCUSS] Release Flink 1.5.1

Hello,

it has been a little over a month since we released 1.5.0. Since then 
we've addressed 56 JIRAs [1] for the 1.5 branch, including stability 
enhancements to the new execution mode (FLIP-6), fixes for critical 
issues in the metric system, but also features that didn't quite make it 
into 1.5.0, like FLIP-6 support for the scala-shell.

I think now is a good time to start thinking about a 1.5.1 release, for 
which I would volunteer as the release manager.

There are a few issues that I'm aware of that we should include in the 
release [3], but I believe these should be resolved within the next days.
So that we don't overlap with the proposed 1.6 release [2], we should 
ideally start the release process this week.

What do you think?

[1] https://issues.apache.org/jira/projects/FLINK/versions/12343053

[2] 
https://lists.apache.org/thread.html/1b8b0e627739d1f01b760fb722a1aeb2e786eec09ddd47b8303faadb@%3Cdev.flink.apache.org%3E

[3]

- https://issues.apache.org/jira/browse/FLINK-9280
- https://issues.apache.org/jira/browse/FLINK-8785
- https://issues.apache.org/jira/browse/FLINK-9567



Re: [DISCUSS] Release Flink 1.5.1

2018-07-02 Thread Till Rohrmann
+1.

Thanks for being the release manager, Chesnay!

I just merged FLINK-9567.

Cheers,
Till

On Mon, Jul 2, 2018 at 8:48 PM Stephan Ewen  wrote:

> +1, there are many minor fixes that are important for 1.5.1.
>
> I would suggest releasing 1.5.1 asap and also considering a 1.5.2 quite
> soon for the known issues that are not yet fixed.
>
>
> In the future, with increasing test automation, we can hopefully make a
> lot of minor releases quickly, so fixes get to users asap.
>
>
> On Mon, Jul 2, 2018 at 1:25 PM, shimin yang  wrote:
>
> > Hi Chesnay,
> >
> > It's a good idea, and I just created a pull request for FLINK-9567. Please
> > have a look if you have some free time.
> >
> > Cheers,
> > Shimin
> >
> > On Mon, Jul 2, 2018 at 7:17 PM vino yang  wrote:
> >
> > > +1
> > >
> > > 2018-07-02 18:19 GMT+08:00 Chesnay Schepler :
> > >
> > > > Hello,
> > > >
> > > > it has been a little over a month since we released 1.5.0. Since
> then
> > > > we've addressed 56 JIRAs [1] for the 1.5 branch, including stability
> > > > enhancements to the new execution mode (FLIP-6), fixes for critical
> > issues
> > > > in the metric system, but also features that didn't quite make it
> into
> > > > 1.5.0 like FLIP-6 support for the scala-shell.
> > > >
> > > > I think now is a good time to start thinking about a 1.5.1 release,
> for
> > > > which I would volunteer as the release manager.
> > > >
> > > > There are a few issues that I'm aware of that we should include in
> the
> > > > release [3], but I believe these should be resolved within the next
> > days.
> > > > So that we don't overlap with the proposed 1.6 release [2] we
> ideally
> > > > start the release process this week.
> > > >
> > > > What do you think?
> > > >
> > > > [1] https://issues.apache.org/jira/projects/FLINK/versions/12343053
> > > >
> > > > [2] https://lists.apache.org/thread.html/1b8b0e627739d1f01b760fb
> > > > 722a1aeb2e786eec09ddd47b8303faadb@%3Cdev.flink.apache.org%3E
> > > >
> > > > [3]
> > > >
> > > > - https://issues.apache.org/jira/browse/FLINK-9280
> > > > - https://issues.apache.org/jira/browse/FLINK-8785
> > > > - https://issues.apache.org/jira/browse/FLINK-9567
> > > >
> > >
> >
>


Re: [DISCUSS] Release Flink 1.5.1

2018-07-02 Thread Stephan Ewen
+1, there are many minor fixes that are important for 1.5.1.

I would suggest releasing 1.5.1 asap and also considering a 1.5.2 quite
soon for the known issues that are not yet fixed.


In the future, with increasing test automation, we can hopefully make a
lot of minor releases quickly, so fixes get to users asap.


On Mon, Jul 2, 2018 at 1:25 PM, shimin yang  wrote:

> Hi Chesnay,
>
> It's a good idea, and I just created a pull request for FLINK-9567. Please
> have a look if you have some free time.
>
> Cheers,
> Shimin
>
> On Mon, Jul 2, 2018 at 7:17 PM vino yang  wrote:
>
> > +1
> >
> > 2018-07-02 18:19 GMT+08:00 Chesnay Schepler :
> >
> > > Hello,
> > >
> > > it has been a little over a month since we released 1.5.0. Since then
> > > we've addressed 56 JIRAs [1] for the 1.5 branch, including stability
> > > enhancements to the new execution mode (FLIP-6), fixes for critical
> issues
> > > in the metric system, but also features that didn't quite make it into
> > > 1.5.0 like FLIP-6 support for the scala-shell.
> > >
> > > I think now is a good time to start thinking about a 1.5.1 release, for
> > > which I would volunteer as the release manager.
> > >
> > > There are a few issues that I'm aware of that we should include in the
> > > release [3], but I believe these should be resolved within the next
> days.
> > > So that we don't overlap with the proposed 1.6 release [2] we ideally
> > > start the release process this week.
> > >
> > > What do you think?
> > >
> > > [1] https://issues.apache.org/jira/projects/FLINK/versions/12343053
> > >
> > > [2] https://lists.apache.org/thread.html/1b8b0e627739d1f01b760fb
> > > 722a1aeb2e786eec09ddd47b8303faadb@%3Cdev.flink.apache.org%3E
> > >
> > > [3]
> > >
> > > - https://issues.apache.org/jira/browse/FLINK-9280
> > > - https://issues.apache.org/jira/browse/FLINK-8785
> > > - https://issues.apache.org/jira/browse/FLINK-9567
> > >
> >
>


[jira] [Created] (FLINK-9705) Failed to close kafka producer - Interrupted while joining ioThread

2018-07-02 Thread Sayat Satybaldiyev (JIRA)
Sayat Satybaldiyev created FLINK-9705:
-

 Summary: Failed to close kafka producer - Interrupted while 
joining ioThread
 Key: FLINK-9705
 URL: https://issues.apache.org/jira/browse/FLINK-9705
 Project: Flink
  Issue Type: Bug
  Components: Kafka Connector
Affects Versions: 1.5.0
Reporter: Sayat Satybaldiyev


While running Flink 1.5.0 with a Kafka sink, I got the following errors from the 
Flink streaming connector.

 
{code:java}
18:05:09,270 ERROR org.apache.kafka.clients.producer.KafkaProducer - 
Interrupted while joining ioThread
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1260)
at 
org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1031)
at 
org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1010)
at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:989)
at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.close(FlinkKafkaProducerBase.java:319)
at 
org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:43)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:117)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:473)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:374)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
at java.lang.Thread.run(Thread.java:748)
18:05:09,271 ERROR org.apache.flink.streaming.runtime.tasks.StreamTask - Error 
during disposal of stream operator.
org.apache.kafka.common.KafkaException: Failed to close kafka producer
at 
org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1062)
at 
org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1010)
at org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:989)
at 
org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerBase.close(FlinkKafkaProducerBase.java:319)
at 
org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:43)
at 
org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:117)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:473)
at 
org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:374)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:703)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1260)
at 
org.apache.kafka.clients.producer.KafkaProducer.close(KafkaProducer.java:1031)
... 9 more
{code}
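For context, KafkaProducer.close() without a timeout joins the ioThread indefinitely, 
and an interrupt during task disposal then surfaces as the KafkaException above. A 
minimal, hedged sketch of a more defensive shutdown (illustrative only, not Flink's 
or Kafka's actual fix; the helper name is made up):

{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.KafkaException;

public final class ProducerShutdown {
    // Hypothetical helper: close with a bounded timeout instead of the
    // indefinite close(), and restore the interrupt flag if the calling
    // thread was interrupted, so cancellation of the task can proceed.
    public static void closeQuietly(KafkaProducer<?, ?> producer) {
        try {
            producer.close(10, TimeUnit.SECONDS);
        } catch (KafkaException e) {
            if (e.getCause() instanceof InterruptedException) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
{code}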





[jira] [Created] (FLINK-9704) QueryableState E2E test failed on travis

2018-07-02 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-9704:
---

 Summary: QueryableState E2E test failed on travis
 Key: FLINK-9704
 URL: https://issues.apache.org/jira/browse/FLINK-9704
 Project: Flink
  Issue Type: Improvement
  Components: Queryable State, Tests
Affects Versions: 1.6.0
Reporter: Chesnay Schepler


https://travis-ci.org/zentol/flink-ci/jobs/399179386





Re: [TABLE][SQL] Enrichment/Time versioned joins support in Flink

2018-07-02 Thread vino yang
+1, great work!

2018-07-02 22:57 GMT+08:00 Rong Rong :

> Huge +1!!!
>
> On Mon, Jul 2, 2018 at 6:07 AM Piotr Nowojski 
> wrote:
>
> > Hi,
> >
> > Together with Fabian, Timo and Stephan, we have been working on a proposal
> > to add support for time-versioned joins to the Flink SQL/Table API. The
> > idea here is to support use cases where a user would like to join a stream
> > of data with a table that changes over time, and where future updates to
> > the table do not affect past/previously emitted results.
> >
> > I have prepared a document describing this feature, with proposed
> > nomenclature and syntax, as well as an explanation of the semantics of
> > such a join:
> > https://docs.google.com/document/d/1KaAkPZjWFeu-
> ffrC9FhYuxE6CIKsatHTTxyrxSBR8Sk/edit?ts=5b34c416#heading=h.bpzgdkgvel0s
> > <
> > https://docs.google.com/document/d/1KaAkPZjWFeu-
> ffrC9FhYuxE6CIKsatHTTxyrxSBR8Sk/edit?ts=5b34c416#heading=h.bpzgdkgvel0s
> > >
> >
> > The document also briefly covers the case of an enrichment join with a
> > static/bounded table.
> >
> > If anyone has any comments/questions, please feel free to mark them in
> the
> > document or in this thread.
> >
> > Thanks, Piotrek
>


Re: [TABLE][SQL] Enrichment/Time versioned joins support in Flink

2018-07-02 Thread Rong Rong
Huge +1!!!

On Mon, Jul 2, 2018 at 6:07 AM Piotr Nowojski 
wrote:

> Hi,
>
> Together with Fabian, Timo and Stephan, we have been working on a proposal to
> add support for time-versioned joins to the Flink SQL/Table API. The idea
> here is to support use cases where a user would like to join a stream of data
> with a table that changes over time, and where future updates to the table
> do not affect past/previously emitted results.
>
> I have prepared a document describing this feature, with proposed
> nomenclature and syntax, as well as an explanation of the semantics of such a join:
> https://docs.google.com/document/d/1KaAkPZjWFeu-ffrC9FhYuxE6CIKsatHTTxyrxSBR8Sk/edit?ts=5b34c416#heading=h.bpzgdkgvel0s
> <
> https://docs.google.com/document/d/1KaAkPZjWFeu-ffrC9FhYuxE6CIKsatHTTxyrxSBR8Sk/edit?ts=5b34c416#heading=h.bpzgdkgvel0s
> >
>
> The document also briefly covers the case of an enrichment join with a
> static/bounded table.
>
> If anyone has any comments/questions, please feel free to mark them in the
> document or in this thread.
>
> Thanks, Piotrek


Re: [DISCUSS] Long-term goal of making flink-table Scala-free

2018-07-02 Thread Xingcan Cui
Hi all,

I have also been thinking about this problem these days, and here are my thoughts.

1) We must admit that it's really a tough task to make Java and Scala interoperate. 
E.g., they have different collection types (Scala collections vs. java.util.*), and 
in Java it's hard to implement a method that takes Scala functions as parameters 
(see the sketch below). Considering that the major part of the code base is 
implemented in Java, +1 for this goal from a long-term view.

2) The ideal solution would be to just expose a Scala API and make all the 
other parts Scala-free. But I am not sure whether that could be achieved even in 
the long term. Thus, as Timo suggested, keeping the Scala code in "flink-table-core" 
would be a compromise solution.

3) If the community makes this the final decision, maybe any new features should be 
added in Java (regardless of the module), in order to prevent the Scala code base 
from growing.

Best,
Xingcan
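
To illustrate the interop pain from point 1): with Scala 2.11, which Flink builds 
against, a Java caller cannot pass a lambda where a Scala function is expected and 
has to extend scala.runtime.AbstractFunction1 instead. A minimal sketch, assuming a 
hypothetical Scala API method `def select(fun: Row => String): Table`:

// Java side, Scala 2.11: no lambda allowed, an anonymous subclass is needed.
Table result = table.select(new scala.runtime.AbstractFunction1<Row, String>() {
    @Override
    public String apply(Row row) {
        return row.toString();
    }
});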


> On Jul 2, 2018, at 9:30 PM, Piotr Nowojski  wrote:
> 
> Bumping the topic.
> 
> If we want to do this, the sooner we decide, the less code we will have to 
> rewrite. I have some objections/counter-proposals to Fabian's proposal of 
> doing it module-wise, one module at a time. 
> 
> First, I do not see a problem with having Java/Scala code even within one 
> module, especially not if there are clean boundaries. For example, we could 
> have the API in Scala and optimizer rules/logical nodes written in Java in 
> the same module. However, I haven't maintained mixed Scala/Java code bases 
> before, so I might be missing something here.
> 
> Secondly, this whole migration might, and most likely will, take longer than 
> expected, and that creates a problem for the new code that we will be 
> creating. After making the decision to migrate to Java, almost any new line 
> of Scala code will immediately be technical debt that we will have to 
> rewrite in Java later. 
> 
> Thus I would propose first to state our end goal - the module structure and 
> which parts of the modules we eventually want to be Scala-free. Secondly, we 
> should take all the steps necessary to allow us to write new code compliant 
> with our end goal. Only after that should/could we focus on incrementally 
> rewriting the old code. Otherwise we could be stuck/blocked for years 
> writing new code in Scala (and increasing the technical debt), because 
> nobody has found the time to rewrite some unimportant and not actively 
> developed part of some module.
> 
> Piotrek
> 
>> On 14 Jun 2018, at 15:34, Fabian Hueske  wrote:
>> 
>> Hi,
>> 
>> In general, I think this is a good effort. However, it won't be easy and I
>> think we have to plan this well.
>> I don't like the idea of having the whole code base fragmented into Java
>> and Scala code for too long.
>> 
>> I think we should do this one step at a time and focus on migrating one
>> module at a time.
>> IMO, the easiest start would be to port the runtime to Java.
>> Extracting the API classes into their own module, porting them to Java, and
>> removing the Scala dependency won't be possible without breaking the API
>> since a few classes depend on the Scala Table API.
>> 
>> Best, Fabian
>> 
>> 
>> 2018-06-14 10:33 GMT+02:00 Till Rohrmann :
>> 
>>> I think that is a noble and honorable goal and we should strive for it.
>>> This, however, must be an iterative process given the sheer size of the
>>> code base. I like the approach to define common Java modules which are used
>>> by more specific Scala modules and slowly moving classes from Scala to
>>> Java. Thus +1 for the proposal.
>>> 
>>> Cheers,
>>> Till
>>> 
>>> On Wed, Jun 13, 2018 at 12:01 PM Piotr Nowojski 
>>> wrote:
>>> 
 Hi,
 
 I do not have experience with how Scala and Java interact with each
 other, so I cannot fully validate your proposal, but generally speaking
>>> +1
 from me.
 
 Does it also mean, that we should slowly migrate `flink-table-core` to
 Java? How would you envision it? It would be nice to be able to add new
 classes/features written in Java so that they can coexist with old
 Scala code until we gradually switch from Scala to Java.
 
 Piotrek
 
> On 13 Jun 2018, at 11:32, Timo Walther  wrote:
> 
> Hi everyone,
> 
> as you all know, currently the Table & SQL API is implemented in Scala.
 This decision was made a long time ago when the initial code base was
 created as part of a master's thesis. The community kept Scala because of
 the nice language features that enable a fluent Table API like
 table.select('field.trim()) and because Scala allows for quick
>>> prototyping
 (e.g. multi-line comments for code generation). The committers enforced
>>> not
 splitting the code-base into two programming languages.
> 
> However, nowadays the flink-table module is becoming a more and more
 important part of the Flink ecosystem. Connectors, formats, and SQL
>>> client
 are actually implemented in Java but need to interoperate with
>>> 

Re: [DISCUSS] Long-term goal of making flink-table Scala-free

2018-07-02 Thread Piotr Nowojski
Bumping the topic.

If we want to do this, the sooner we decide, the less code we will have to 
rewrite. I have some objections/counter-proposals to Fabian's proposal of doing 
it module-wise, one module at a time.

First, I do not see a problem with having Java/Scala code even within one module, 
especially not if there are clean boundaries. For example, we could have the API 
in Scala and optimizer rules/logical nodes written in Java in the same module. 
However, I haven't maintained mixed Scala/Java code bases before, so I might be 
missing something here.

Secondly, this whole migration might, and most likely will, take longer than 
expected, and that creates a problem for the new code that we will be creating. 
After making the decision to migrate to Java, almost any new line of Scala code 
will immediately be technical debt that we will have to rewrite in Java later.

Thus I would propose first to state our end goal - the module structure and which 
parts of the modules we eventually want to be Scala-free. Secondly, we should take 
all the steps necessary to allow us to write new code compliant with our end goal. 
Only after that should/could we focus on incrementally rewriting the old code. 
Otherwise we could be stuck/blocked for years writing new code in Scala (and 
increasing the technical debt), because nobody has found the time to rewrite some 
unimportant and not actively developed part of some module.

Piotrek

> On 14 Jun 2018, at 15:34, Fabian Hueske  wrote:
> 
> Hi,
> 
> In general, I think this is a good effort. However, it won't be easy and I
> think we have to plan this well.
> I don't like the idea of having the whole code base fragmented into Java
> and Scala code for too long.
> 
> I think we should do this one step at a time and focus on migrating one
> module at a time.
> IMO, the easiest start would be to port the runtime to Java.
> Extracting the API classes into their own module, porting them to Java, and
> removing the Scala dependency won't be possible without breaking the API
> since a few classes depend on the Scala Table API.
> 
> Best, Fabian
> 
> 
> 2018-06-14 10:33 GMT+02:00 Till Rohrmann :
> 
>> I think that is a noble and honorable goal and we should strive for it.
>> This, however, must be an iterative process given the sheer size of the
>> code base. I like the approach to define common Java modules which are used
>> by more specific Scala modules and slowly moving classes from Scala to
>> Java. Thus +1 for the proposal.
>> 
>> Cheers,
>> Till
>> 
>> On Wed, Jun 13, 2018 at 12:01 PM Piotr Nowojski 
>> wrote:
>> 
>>> Hi,
>>> 
>>> I do not have experience with how Scala and Java interact with each
>>> other, so I cannot fully validate your proposal, but generally speaking
>> +1
>>> from me.
>>> 
>>> Does it also mean, that we should slowly migrate `flink-table-core` to
>>> Java? How would you envision it? It would be nice to be able to add new
>>> classes/features written in Java so that they can coexist with old
>>> Scala code until we gradually switch from Scala to Java.
>>> 
>>> Piotrek
>>> 
 On 13 Jun 2018, at 11:32, Timo Walther  wrote:
 
 Hi everyone,
 
 as you all know, currently the Table & SQL API is implemented in Scala.
>>> This decision was made a long time ago when the initial code base was
>>> created as part of a master's thesis. The community kept Scala because of
>>> the nice language features that enable a fluent Table API like
>>> table.select('field.trim()) and because Scala allows for quick
>> prototyping
>>> (e.g. multi-line comments for code generation). The committers enforced
>> not
>>> splitting the code-base into two programming languages.
 
 However, nowadays the flink-table module is becoming a more and more
>>> important part of the Flink ecosystem. Connectors, formats, and SQL
>> client
>>> are actually implemented in Java but need to interoperate with
>> flink-table
>>> which makes these modules dependent on Scala. As mentioned in an earlier
>>> mail thread, using Scala for API classes also exposes member variables
>> and
>>> methods in Java that should not be exposed to users [1]. Java is still
>> the
>>> most important API language and right now we treat it as a second-class
>>> citizen. I just noticed that you even need to add Scala if you just want
>> to
>>> implement a ScalarFunction because of method clashes between `public
>> String
>>> toString()` and `public scala.Predef.String toString()`.
 
 Given the size of the current code base, reimplementing the entire
>>> flink-table code in Java is a goal that we might never reach. However, we
>>> should at least treat the symptoms and have this as a long-term goal in
>>> mind. My suggestion would be to convert user-facing and runtime classes
>> and
>>> split the code base into multiple modules:
 
> flink-table-java {depends on flink-table-core}
 Implemented in Java. Java users can use this. This would require to
>>> convert classes like 

[TABLE][SQL] Enrichment/Time versioned joins support in Flink

2018-07-02 Thread Piotr Nowojski
Hi,

Together with Fabian, Timo and Stephan, we have been working on a proposal to add 
support for time-versioned joins to the Flink SQL/Table API. The idea here is to 
support use cases where a user would like to join a stream of data with a table 
that changes over time, and where future updates to the table do not affect 
past/previously emitted results.

I have prepared a document describing this feature, with proposed nomenclature 
and syntax, as well as an explanation of the semantics of such a join: 
https://docs.google.com/document/d/1KaAkPZjWFeu-ffrC9FhYuxE6CIKsatHTTxyrxSBR8Sk/edit?ts=5b34c416#heading=h.bpzgdkgvel0s
 


The document also briefly covers the case of an enrichment join with a 
static/bounded table.

If anyone has any comments/questions, please feel free to mark them in the 
document or in this thread.

Thanks, Piotrek
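
To make the proposed semantics concrete, here is a purely illustrative sketch. The 
table names, fields, and the SQL:2011-style FOR SYSTEM_TIME clause are assumptions 
for illustration only; the actual proposed syntax is defined in the linked document, 
and none of this runs on Flink 1.5:

// Join each order with the exchange rate that was valid at the order's
// event time; later updates to Rates do not change already emitted rows.
Table result = tableEnv.sqlQuery(
    "SELECT o.orderId, o.amount * r.rate AS convertedAmount " +
    "FROM Orders AS o " +
    "JOIN Rates FOR SYSTEM_TIME AS OF o.rowtime AS r " +
    "ON o.currency = r.currency");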

CoreOptions.TMP_DIRS bug

2018-07-02 Thread Oleksandr Nitavskyi
Hello guys,

We have discovered a minor issue with Flink 1.5, particularly on YARN, related to 
the way Flink manages temp paths (io.tmp.dirs) in the configuration: 
https://ci.apache.org/projects/flink/flink-docs-master/ops/config.html#io-tmp-dirs


1. From what we can see in the code, the default option doesn't correspond to 
reality on YARN or Mesos deployments. It looks like it equals the env variable 
'_FLINK_TMP_DIR' on Mesos and `LOCAL_DIRS` on YARN.

2. The issue on YARN is that it is impossible to have different LOCAL_DIRS for 
the JobManager and the TaskManager, even though the LOCAL_DIRS value depends on 
the container.

The issue is that CoreOptions.TMP_DIRS is set to the default value during 
JobManager initialization and added to the configuration object. When the 
TaskManager is launched, the corresponding configuration object is cloned with a 
LOCAL_DIRS value that only makes sense for the JobManager container. So from the 
TaskManager container's point of view, CoreOptions.TMP_DIRS is always equal either 
to the path in flink.yml or to the LOCAL_DIRS of the JobManager (the default 
behaviour). If the TaskManager's container does not have access to the folders 
allocated by YARN for the JobManager, the TaskManager cannot be started.

Could you please confirm that this is a bug? I will then create a JIRA ticket to 
track it.

Thanks
Kind Regards
Oleksandr Nitavskyi
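
A rough sketch of the per-container behaviour described above, i.e. resolving 
io.tmp.dirs from the TaskManager container's own environment instead of inheriting 
the JobManager's cloned value (illustrative only, not the actual Flink code; the 
env variable name follows this mail):

// Hypothetical per-container resolution of io.tmp.dirs:
String localDirs = System.getenv("LOCAL_DIRS"); // set by YARN per container
if (localDirs != null) {
    // Override the value cloned from the JobManager's configuration.
    configuration.setString(CoreOptions.TMP_DIRS, localDirs);
}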




Re: Flink table api

2018-07-02 Thread Fabian Hueske
CrossJoins are not supported.
You should add an equality join predicate.
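
For illustration, the difference in the Table API (the field names are made up, 
not taken from the thread's schema):

// Rejected: no equality predicate at all, so the planner sees a
// cross join, which is not supported.
Table unsupported = master.join(child1).select("masterKey, childKey");

// Supported: an equality predicate makes it an equi-join.
Table supported = master.join(child1)
    .where("masterKey = childKey")
    .select("masterKey, childKey");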

2018-07-02 13:26 GMT+02:00 Amol S - iProgrammer :

> Hello Fabian,
>
> I have tried to convert the table into a stream as below:
>
> tableEnv.toDataStream(result, Oplog.class);
>
> and it is giving me the below error:
>
> Cannot generate a valid execution plan for the given query:
>
> LogicalFilter(condition=[<>($1, $3)])
>   LogicalJoin(condition=[true], joinType=[inner])
> LogicalProject(master=[$1], timeStamp=[$5])
>   LogicalFilter(condition=[=($0, _UTF-16LE'local.customerMISMaster')])
> LogicalTableScan(table=[[_DataStreamTable_0]])
> LogicalProject(child1=[$1], timeStamp2=[$5])
>   LogicalFilter(condition=[=($0, _UTF-16LE'local.customerMISChild1')])
> LogicalTableScan(table=[[_DataStreamTable_0]])
>
> This exception indicates that the query uses an unsupported SQL feature.
> Please check the documentation for the set of currently supported SQL
> features.
>
> ---
> *Amol Suryawanshi*
> Java Developer
> am...@iprogrammer.com
>
>
> *iProgrammer Solutions Pvt. Ltd.*
>
>
>
> *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
> MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> www.iprogrammer.com 
> 
>
> On Mon, Jul 2, 2018 at 4:43 PM, Amol S - iProgrammer <
> am...@iprogrammer.com>
> wrote:
>
> > Hello Fabian,
> >
> > Can you please tell me how to convert a Table back into a DataStream? I just
> > want to print the table result.
> >
> > ---
> > *Amol Suryawanshi*
> > Java Developer
> > am...@iprogrammer.com
> >
> >
> > *iProgrammer Solutions Pvt. Ltd.*
> >
> >
> >
> > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> 411016,
> > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > www.iprogrammer.com 
> > 
> >
> > On Mon, Jul 2, 2018 at 4:20 PM, Fabian Hueske  wrote:
> >
> >> You can also use Row, but then you cannot rely on automatic type
> >> extraction
> >> and provide TypeInformation.
> >>
> >> Amol S - iProgrammer  schrieb am Mo., 2. Juli
> >> 2018,
> >> 12:37:
> >>
> >> > Hello Fabian,
> >> >
> >> > According to my requirement I can not create static pojo's for all
> >> classes
> >> > because I want to create dynamic jobs for all tables based on rule
> >> engine
> >> > config. Please suggest me if there any other way to achieve this.
> >> >
> >> > ---
> >> > *Amol Suryawanshi*
> >> > Java Developer
> >> > am...@iprogrammer.com
> >> >
> >> >
> >> > *iProgrammer Solutions Pvt. Ltd.*
> >> >
> >> >
> >> >
> >> > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> >> > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> >> 411016,
> >> > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> >> > www.iprogrammer.com 
> >> > 
> >> >
> >> > On Mon, Jul 2, 2018 at 4:02 PM, Fabian Hueske 
> >> wrote:
> >> >
> >> > > Hi Amol,
> >> > >
> >> > > These are the requirements for POJOs [1] that are fully supported by
> >> > Flink.
> >> > >
> >> > > Best, Fabian
> >> > >
> >> > > [1]
> >> > > https://ci.apache.org/projects/flink/flink-docs-
> >> > > release-1.5/dev/api_concepts.html#pojos
> >> > >
> >> > > 2018-07-02 12:19 GMT+02:00 Amol S - iProgrammer <
> >> am...@iprogrammer.com>:
> >> > >
> >> > > > Hello Xingcan
> >> > > >
> >> > > > As mentioned in above mail thread I am streaming mongodb oplog to
> >> join
> >> > > > multiple mongo tables based on some unique key (Primary key). To
> >> > achieve
> >> > > > this I have created one java pojo as below. where o represent
> >> generic
> >> > > pojo
> >> > > > type of mongodb which has my table fields i.e. dynamic. now I want
> >> to
> >> > use
> >> > > > table api join over this basic BasicDBObject but it seem flink
> does
> >> not
> >> > > > allow generic pojo's. please suggest on this.
> >> > > >
> >> > > > public class Oplog {
> >> > > > private OplogTimestamp ts;
> >> > > > private BasicDBObject o;
> >> > > > }
> >> > > >
> >> > > >
> >> > > >
> >> > > > ---
> >> > > > *Amol Suryawanshi*
> >> > > > Java Developer
> >> > > > am...@iprogrammer.com
> >> > > >
> >> > > >
> >> > > > *iProgrammer Solutions Pvt. Ltd.*
> >> > > >
> >> > > >
> >> > > >
> >> > > > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> >> > > > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune
> -
> >> > > 411016,
> >> > > > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> >> > > > www.iprogrammer.com 
> >> > > > 
> >> > > >
> >> > 

[jira] [Created] (FLINK-9703) Mesos does not expose TM Prometheus port

2018-07-02 Thread Rune Skou Larsen (JIRA)
Rune Skou Larsen created FLINK-9703:
---

 Summary: Mesos does not expose TM Prometheus port
 Key: FLINK-9703
 URL: https://issues.apache.org/jira/browse/FLINK-9703
 Project: Flink
  Issue Type: Bug
  Components: Mesos
Reporter: Rune Skou Larsen


LaunchableMesosWorker makes Mesos expose these ports for a Task Manager:

{code:java}
private static final String[] TM_PORT_KEYS = {
    "taskmanager.rpc.port",
    "taskmanager.data.port"};
{code}

But when running the Prometheus Exporter on a TM, another port needs to be exposed 
to make Flink's Prometheus endpoint externally scrapable by the Prometheus 
server. By default this is port 9249, but it is configurable according to:

[https://ci.apache.org/projects/flink/flink-docs-release-1.6/monitoring/metrics.html#prometheus-orgapacheflinkmetricsprometheusprometheusreporter]

 

My plan is to make a PR that just adds another config option for Mesos, to 
enable custom ports to be exposed on the provisioned TMs.



I considered carrying parts of the metrics config into the Mesos code to 
automatically map metrics ports in Mesos. But making such a "shortcut" between 
Flink's metrics and Mesos modules would probably need some sort of integration 
testing, so I prefer the simple solution of just adding another Mesos config 
option. But comments are welcome.
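
For illustration, the kind of option such a PR might add (the key name and shape 
are assumptions, not the final API):

{code:java}
// Hypothetical sketch only - a comma-separated list of configuration keys
// whose port values should additionally be exposed by Mesos:
public static final ConfigOption<String> TASK_MANAGER_EXPOSED_PORTS =
    ConfigOptions.key("mesos.resourcemanager.tasks.exposed-ports")
        .defaultValue("");

// LaunchableMesosWorker would then expose TM_PORT_KEYS plus the keys parsed
// from this option, e.g. the Prometheus reporter's port key.
{code}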





Re: Flink table api

2018-07-02 Thread Amol S - iProgrammer
Hello Fabian,

I have tried to convert the table into a stream as below:

tableEnv.toDataStream(result, Oplog.class);

and it is giving me the below error:

Cannot generate a valid execution plan for the given query:

LogicalFilter(condition=[<>($1, $3)])
  LogicalJoin(condition=[true], joinType=[inner])
LogicalProject(master=[$1], timeStamp=[$5])
  LogicalFilter(condition=[=($0, _UTF-16LE'local.customerMISMaster')])
LogicalTableScan(table=[[_DataStreamTable_0]])
LogicalProject(child1=[$1], timeStamp2=[$5])
  LogicalFilter(condition=[=($0, _UTF-16LE'local.customerMISChild1')])
LogicalTableScan(table=[[_DataStreamTable_0]])

This exception indicates that the query uses an unsupported SQL feature.
Please check the documentation for the set of currently supported SQL
features.

---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com 


On Mon, Jul 2, 2018 at 4:43 PM, Amol S - iProgrammer 
wrote:

> Hello Fabian,
>
> Can you please tell me how to convert a Table back into a DataStream? I just
> want to print the table result.
>
> ---
> *Amol Suryawanshi*
> Java Developer
> am...@iprogrammer.com
>
>
> *iProgrammer Solutions Pvt. Ltd.*
>
>
>
> *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
> MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> www.iprogrammer.com 
> 
>
> On Mon, Jul 2, 2018 at 4:20 PM, Fabian Hueske  wrote:
>
>> You can also use Row, but then you cannot rely on automatic type
>> extraction
>> and provide TypeInformation.
>>
>> Amol S - iProgrammer  schrieb am Mo., 2. Juli
>> 2018,
>> 12:37:
>>
>> > Hello Fabian,
>> >
>> > According to my requirement I can not create static pojo's for all
>> classes
>> > because I want to create dynamic jobs for all tables based on rule
>> engine
>> > config. Please suggest me if there any other way to achieve this.
>> >
>> > ---
>> > *Amol Suryawanshi*
>> > Java Developer
>> > am...@iprogrammer.com
>> >
>> >
>> > *iProgrammer Solutions Pvt. Ltd.*
>> >
>> >
>> >
>> > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
>> > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
>> 411016,
>> > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
>> > www.iprogrammer.com 
>> > 
>> >
>> > On Mon, Jul 2, 2018 at 4:02 PM, Fabian Hueske 
>> wrote:
>> >
>> > > Hi Amol,
>> > >
>> > > These are the requirements for POJOs [1] that are fully supported by
>> > Flink.
>> > >
>> > > Best, Fabian
>> > >
>> > > [1]
>> > > https://ci.apache.org/projects/flink/flink-docs-
>> > > release-1.5/dev/api_concepts.html#pojos
>> > >
>> > > 2018-07-02 12:19 GMT+02:00 Amol S - iProgrammer <
>> am...@iprogrammer.com>:
>> > >
>> > > > Hello Xingcan
>> > > >
>> > > > As mentioned in above mail thread I am streaming mongodb oplog to
>> join
>> > > > multiple mongo tables based on some unique key (Primary key). To
>> > achieve
>> > > > this I have created one java pojo as below. where o represent
>> generic
>> > > pojo
>> > > > type of mongodb which has my table fields i.e. dynamic. now I want
>> to
>> > use
>> > > > table api join over this basic BasicDBObject but it seem flink does
>> not
>> > > > allow generic pojo's. please suggest on this.
>> > > >
>> > > > public class Oplog {
>> > > > private OplogTimestamp ts;
>> > > > private BasicDBObject o;
>> > > > }
>> > > >
>> > > >
>> > > >
>> > > > ---
>> > > > *Amol Suryawanshi*
>> > > > Java Developer
>> > > > am...@iprogrammer.com
>> > > >
>> > > >
>> > > > *iProgrammer Solutions Pvt. Ltd.*
>> > > >
>> > > >
>> > > >
>> > > > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
>> > > > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
>> > > 411016,
>> > > > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
>> > > > www.iprogrammer.com 
>> > > > 
>> > > >
>> > > > On Mon, Jul 2, 2018 at 3:03 PM, Amol S - iProgrammer <
>> > > > am...@iprogrammer.com>
>> > > > wrote:
>> > > >
>> > > > > Hello Xingcan
>> > > > >
>> > > > > DataStream streamSource = env
>> > > > > .addSource(kafkaConsumer)
>> > > > > .setParallelism(4);
>> > > > >
>> > > > > StreamTableEnvironment tableEnv = TableEnvironment.
>> > > > getTableEnvironment(env);
>> > > > > // Convert the DataStream into a Table with default fields 

Re: [DISCUSS] Release Flink 1.5.1

2018-07-02 Thread shimin yang
Hi Chesnay,

It's a good idea, and I just created a pull request for FLINK-9567. Please
have a look if you have some free time.

Cheers,
Shimin

On Mon, Jul 2, 2018 at 7:17 PM vino yang  wrote:

> +1
>
> 2018-07-02 18:19 GMT+08:00 Chesnay Schepler :
>
> > Hello,
> >
> > it has been a little over a month since we released 1.5.0. Since then
> > we've addressed 56 JIRAs [1] for the 1.5 branch, including stability
> > enhancements to the new execution mode (FLIP-6), fixes for critical issues
> > in the metric system, but also features that didn't quite make it into
> > 1.5.0 like FLIP-6 support for the scala-shell.
> >
> > I think now is a good time to start thinking about a 1.5.1 release, for
> > which I would volunteer as the release manager.
> >
> > There are a few issues that I'm aware of that we should include in the
> > release [3], but I believe these should be resolved within the next days.
> > So that we don't overlap with the proposed 1.6 release [2] we ideally
> > start the release process this week.
> >
> > What do you think?
> >
> > [1] https://issues.apache.org/jira/projects/FLINK/versions/12343053
> >
> > [2] https://lists.apache.org/thread.html/1b8b0e627739d1f01b760fb
> > 722a1aeb2e786eec09ddd47b8303faadb@%3Cdev.flink.apache.org%3E
> >
> > [3]
> >
> > - https://issues.apache.org/jira/browse/FLINK-9280
> > - https://issues.apache.org/jira/browse/FLINK-8785
> > - https://issues.apache.org/jira/browse/FLINK-9567
> >
>


[jira] [Created] (FLINK-9702) Improvement in (de)serialization of keys and values for RocksDB state

2018-07-02 Thread Stefan Richter (JIRA)
Stefan Richter created FLINK-9702:
-

 Summary: Improvement in (de)serialization of keys and values for 
RocksDB state
 Key: FLINK-9702
 URL: https://issues.apache.org/jira/browse/FLINK-9702
 Project: Flink
  Issue Type: Improvement
  Components: State Backends, Checkpointing
Affects Versions: 1.6.0
Reporter: Stefan Richter


When Flink interacts with state in RocksDB, object (de)serialization often 
contributes significantly to the performance overhead. I think there are some 
aspects that we can improve here to reduce the costs in this area. In 
particular, currently every state has to serialize the backend's current key 
before each state access. We could reduce this effort by sharing the serialized 
key bytes across all state interactions. Furthermore, we can reduce the number of 
`byte[]` and stream/view objects that are involved.
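
A hedged sketch of the key-bytes sharing idea (purely illustrative, not the actual 
implementation; the class name is made up):

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.core.memory.DataOutputViewStreamWrapper;

// Serialize the backend's current key once and let all states reuse the
// bytes until the key changes, instead of re-serializing per state access.
class SharedKeyBytes<K> {
    private final TypeSerializer<K> keySerializer;
    private K currentKey;
    private byte[] currentKeyBytes;

    SharedKeyBytes(TypeSerializer<K> keySerializer) {
        this.keySerializer = keySerializer;
    }

    byte[] get(K key) throws IOException {
        if (!key.equals(currentKey)) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            keySerializer.serialize(key, new DataOutputViewStreamWrapper(out));
            currentKey = key;
            currentKeyBytes = out.toByteArray();
        }
        return currentKeyBytes; // shared across all state accesses for this key
    }
}
{code}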





Re: [DISCUSS] Release Flink 1.5.1

2018-07-02 Thread vino yang
+1

2018-07-02 18:19 GMT+08:00 Chesnay Schepler :

> Hello,
>
> it has been a little over a month since we released 1.5.0. Since then
> we've addressed 56 JIRAs [1] for the 1.5 branch, including stability
> enhancements to the new execution mode (FLIP-6), fixes for critical issues
> in the metric system, but also features that didn't quite make it into
> 1.5.0 like FLIP-6 support for the scala-shell.
>
> I think now is a good time to start thinking about a 1.5.1 release, for
> which I would volunteer as the release manager.
>
> There are a few issues that I'm aware of that we should include in the
> release [3], but I believe these should be resolved within the next days.
> So that we don't overlap with the proposed 1.6 release [2] we ideally
> start the release process this week.
>
> What do you think?
>
> [1] https://issues.apache.org/jira/projects/FLINK/versions/12343053
>
> [2] https://lists.apache.org/thread.html/1b8b0e627739d1f01b760fb
> 722a1aeb2e786eec09ddd47b8303faadb@%3Cdev.flink.apache.org%3E
>
> [3]
>
> - https://issues.apache.org/jira/browse/FLINK-9280
> - https://issues.apache.org/jira/browse/FLINK-8785
> - https://issues.apache.org/jira/browse/FLINK-9567
>


Re: Flink table api

2018-07-02 Thread Amol S - iProgrammer
Hello Fabian,

Can you please tell me how to convert a Table back into a DataStream? I just
want to print the table result.

---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com 


On Mon, Jul 2, 2018 at 4:20 PM, Fabian Hueske  wrote:

> You can also use Row, but then you cannot rely on automatic type extraction
> and must provide the TypeInformation yourself.
>
> Amol S - iProgrammer  schrieb am Mo., 2. Juli 2018,
> 12:37:
>
> > Hello Fabian,
> >
> > According to my requirement I can not create static pojo's for all
> classes
> > because I want to create dynamic jobs for all tables based on rule engine
> > config. Please suggest me if there any other way to achieve this.
> >
> > ---
> > *Amol Suryawanshi*
> > Java Developer
> > am...@iprogrammer.com
> >
> >
> > *iProgrammer Solutions Pvt. Ltd.*
> >
> >
> >
> > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> 411016,
> > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > www.iprogrammer.com 
> > 
> >
> > On Mon, Jul 2, 2018 at 4:02 PM, Fabian Hueske  wrote:
> >
> > > Hi Amol,
> > >
> > > These are the requirements for POJOs [1] that are fully supported by
> > Flink.
> > >
> > > Best, Fabian
> > >
> > > [1]
> > > https://ci.apache.org/projects/flink/flink-docs-
> > > release-1.5/dev/api_concepts.html#pojos
> > >
> > > 2018-07-02 12:19 GMT+02:00 Amol S - iProgrammer  >:
> > >
> > > > Hello Xingcan
> > > >
> > > > As mentioned in above mail thread I am streaming mongodb oplog to
> join
> > > > multiple mongo tables based on some unique key (Primary key). To
> > achieve
> > > > this I have created one java pojo as below. where o represent generic
> > > pojo
> > > > type of mongodb which has my table fields i.e. dynamic. now I want to
> > use
> > > > table api join over this basic BasicDBObject but it seem flink does
> not
> > > > allow generic pojo's. please suggest on this.
> > > >
> > > > public class Oplog {
> > > > private OplogTimestamp ts;
> > > > private BasicDBObject o;
> > > > }
> > > >
> > > >
> > > >
> > > > ---
> > > > *Amol Suryawanshi*
> > > > Java Developer
> > > > am...@iprogrammer.com
> > > >
> > > >
> > > > *iProgrammer Solutions Pvt. Ltd.*
> > > >
> > > >
> > > >
> > > > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > > > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> > > 411016,
> > > > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > > > www.iprogrammer.com 
> > > > 
> > > >
> > > > On Mon, Jul 2, 2018 at 3:03 PM, Amol S - iProgrammer <
> > > > am...@iprogrammer.com>
> > > > wrote:
> > > >
> > > > > Hello Xingcan
> > > > >
> > > > > DataStream streamSource = env
> > > > > .addSource(kafkaConsumer)
> > > > > .setParallelism(4);
> > > > >
> > > > > StreamTableEnvironment tableEnv = TableEnvironment.
> > > > getTableEnvironment(env);
> > > > > // Convert the DataStream into a Table with default fields "f0",
> "f1"
> > > > > Table table1 = tableEnv.fromDataStream(streamSource);
> > > > >
> > > > >
> > > > > Table customerMISMaster = table1.filter("ns ===
> > > > 'local.customerMISMaster'"
> > > > > ).select("o as master");
> > > > > Table customerMISChild1 = table1.filter("ns ===
> > > > 'local.customerMISChild1'"
> > > > > ).select("o as child1");
> > > > > Table customerMISChild2 = table1.filter("ns ===
> > > > 'local.customerMISChild2'"
> > > > > ).select("o as child2");
> > > > > Table result = customerMISMaster.join(customerMISChild1).where("
> > > master.
> > > > > loanApplicationId=child1.loanApplicationId");
> > > > >
> > > > >
> > > > > it is throwing error "Method threw 'org.apache.flink.table.api.
> > > ValidationException'
> > > > exception. Undefined function: LOANAPPLICATIONID"
> > > > >
> > > > >
> > > > >
> > > > > ---
> > > > > *Amol Suryawanshi*
> > > > > Java Developer
> > > > > am...@iprogrammer.com
> > > > >
> > > > >
> > > > > *iProgrammer Solutions Pvt. Ltd.*
> > > > >
> > > > >
> > > > >
> > > > > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > > > > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> > > > 411016,
> > > > > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > > > > www.iprogrammer.com 
> > > > > 
> > > > >
> > > > > On Mon, Jul 2, 2018 at 2:56 PM, Xingcan 

[jira] [Created] (FLINK-9701) Activate TTL in state descriptors

2018-07-02 Thread Andrey Zagrebin (JIRA)
Andrey Zagrebin created FLINK-9701:
--

 Summary: Activate TTL in state descriptors
 Key: FLINK-9701
 URL: https://issues.apache.org/jira/browse/FLINK-9701
 Project: Flink
  Issue Type: Sub-task
  Components: State Backends, Checkpointing
Reporter: Andrey Zagrebin
Assignee: Andrey Zagrebin








Re: Flink table api

2018-07-02 Thread Fabian Hueske
You can also use Row, but then you cannot rely on automatic type extraction
and must provide the TypeInformation yourself.
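
A minimal sketch of the Row alternative (the field names and mapping function are 
illustrative, not from this thread; `streamSource` is assumed to carry Oplog records):

// Row fields cannot be extracted automatically, so the TypeInformation
// is supplied explicitly via returns():
TypeInformation<Row> rowType = Types.ROW_NAMED(
    new String[] {"ns", "o"},
    Types.STRING,
    Types.GENERIC(BasicDBObject.class));

DataStream<Row> rows = streamSource
    .map(oplog -> Row.of(oplog.getNs(), oplog.getO())) // hypothetical getters
    .returns(rowType);

Table table1 = tableEnv.fromDataStream(rows);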

Amol S - iProgrammer  schrieb am Mo., 2. Juli 2018,
12:37:

> Hello Fabian,
>
> According to my requirement I can not create static pojo's for all classes
> because I want to create dynamic jobs for all tables based on rule engine
> config. Please suggest me if there any other way to achieve this.
>
> ---
> *Amol Suryawanshi*
> Java Developer
> am...@iprogrammer.com
>
>
> *iProgrammer Solutions Pvt. Ltd.*
>
>
>
> *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
> MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> www.iprogrammer.com 
> 
>
> On Mon, Jul 2, 2018 at 4:02 PM, Fabian Hueske  wrote:
>
> > Hi Amol,
> >
> > These are the requirements for POJOs [1] that are fully supported by
> Flink.
> >
> > Best, Fabian
> >
> > [1]
> > https://ci.apache.org/projects/flink/flink-docs-
> > release-1.5/dev/api_concepts.html#pojos
> >
> > 2018-07-02 12:19 GMT+02:00 Amol S - iProgrammer :
> >
> > > Hello Xingcan
> > >
> > > As mentioned in above mail thread I am streaming mongodb oplog to join
> > > multiple mongo tables based on some unique key (Primary key). To
> achieve
> > > this I have created one java pojo as below. where o represent generic
> > pojo
> > > type of mongodb which has my table fields i.e. dynamic. now I want to
> use
> > > table api join over this basic BasicDBObject but it seem flink does not
> > > allow generic pojo's. please suggest on this.
> > >
> > > public class Oplog {
> > > private OplogTimestamp ts;
> > > private BasicDBObject o;
> > > }
> > >
> > >
> > >
> > > ---
> > > *Amol Suryawanshi*
> > > Java Developer
> > > am...@iprogrammer.com
> > >
> > >
> > > *iProgrammer Solutions Pvt. Ltd.*
> > >
> > >
> > >
> > > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> > 411016,
> > > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > > www.iprogrammer.com 
> > > 
> > >
> > > On Mon, Jul 2, 2018 at 3:03 PM, Amol S - iProgrammer <
> > > am...@iprogrammer.com>
> > > wrote:
> > >
> > > > Hello Xingcan
> > > >
> > > > DataStream streamSource = env
> > > > .addSource(kafkaConsumer)
> > > > .setParallelism(4);
> > > >
> > > > StreamTableEnvironment tableEnv = TableEnvironment.
> > > getTableEnvironment(env);
> > > > // Convert the DataStream into a Table with default fields "f0", "f1"
> > > > Table table1 = tableEnv.fromDataStream(streamSource);
> > > >
> > > >
> > > > Table customerMISMaster = table1.filter("ns ===
> > > 'local.customerMISMaster'"
> > > > ).select("o as master");
> > > > Table customerMISChild1 = table1.filter("ns ===
> > > 'local.customerMISChild1'"
> > > > ).select("o as child1");
> > > > Table customerMISChild2 = table1.filter("ns ===
> > > 'local.customerMISChild2'"
> > > > ).select("o as child2");
> > > > Table result = customerMISMaster.join(customerMISChild1).where("
> > master.
> > > > loanApplicationId=child1.loanApplicationId");
> > > >
> > > >
> > > > it is throwing error "Method threw 'org.apache.flink.table.api.
> > ValidationException'
> > > exception. Undefined function: LOANAPPLICATIONID"
> > > >
> > > >
> > > >
> > > > ---
> > > > *Amol Suryawanshi*
> > > > Java Developer
> > > > am...@iprogrammer.com
> > > >
> > > >
> > > > *iProgrammer Solutions Pvt. Ltd.*
> > > >
> > > >
> > > >
> > > > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > > > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> > > 411016,
> > > > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > > > www.iprogrammer.com 
> > > > 
> > > >
> > > > On Mon, Jul 2, 2018 at 2:56 PM, Xingcan Cui 
> > wrote:
> > > >
> > > >> Hi Amol,
> > > >>
> > > >> The “dynamic table” is just a logical concept, following which the
> > Flink
> > > >> table API is designed.
> > > >> That means you don’t need to implement dynamic tables yourself.
> > > >>
> > > >> Flink table API provides different kinds of stream to stream joins
> in
> > > >> recent versions (from 1.4).
> > > >> The related docs can be found here https://ci.apache.org/projects
> > > >> /flink/flink-docs-release-1.5/dev/table/tableApi.html#joins <
> > > >> https://ci.apache.org/projects/flink/flink-docs-release-1.5
> > > >> /dev/table/tableApi.html#joins>.
> > > >>
> > > >> Best,
> > > >> Xingcan
> > > >>
> > > >>
> > > >> > On Jul 2, 2018, at 4:49 PM, Amol S - iProgrammer <
> > > am...@iprogrammer.com>
> > > >> wrote:
> > > >> >
> > > >> > Hello,
> > > >> >
> > > >> > I am streaming mongodb oplog 

Re: Flink table api

2018-07-02 Thread Amol S - iProgrammer
Hello Fabian,

According to my requirement I can not create static pojo's for all classes
because I want to create dynamic jobs for all tables based on rule engine
config. Please suggest me if there any other way to achieve this.

---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com 


On Mon, Jul 2, 2018 at 4:02 PM, Fabian Hueske  wrote:

> Hi Amol,
>
> These are the requirements for POJOs [1] that are fully supported by Flink.
>
> Best, Fabian
>
> [1]
> https://ci.apache.org/projects/flink/flink-docs-
> release-1.5/dev/api_concepts.html#pojos
>
> 2018-07-02 12:19 GMT+02:00 Amol S - iProgrammer :
>
> > Hello Xingcan
> >
> > As mentioned in above mail thread I am streaming mongodb oplog to join
> > multiple mongo tables based on some unique key (Primary key). To achieve
> > this I have created one java pojo as below. where o represent generic
> pojo
> > type of mongodb which has my table fields i.e. dynamic. now I want to use
> > table api join over this basic BasicDBObject but it seem flink does not
> > allow generic pojo's. please suggest on this.
> >
> > public class Oplog {
> > private OplogTimestamp ts;
> > private BasicDBObject o;
> > }
> >
> >
> >
> > ---
> > *Amol Suryawanshi*
> > Java Developer
> > am...@iprogrammer.com
> >
> >
> > *iProgrammer Solutions Pvt. Ltd.*
> >
> >
> >
> > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> 411016,
> > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > www.iprogrammer.com 
> > 
> >
> > On Mon, Jul 2, 2018 at 3:03 PM, Amol S - iProgrammer <
> > am...@iprogrammer.com>
> > wrote:
> >
> > > Hello Xingcan
> > >
> > > DataStream streamSource = env
> > > .addSource(kafkaConsumer)
> > > .setParallelism(4);
> > >
> > > StreamTableEnvironment tableEnv = TableEnvironment.
> > getTableEnvironment(env);
> > > // Convert the DataStream into a Table with default fields "f0", "f1"
> > > Table table1 = tableEnv.fromDataStream(streamSource);
> > >
> > >
> > > Table customerMISMaster = table1.filter("ns ===
> > 'local.customerMISMaster'"
> > > ).select("o as master");
> > > Table customerMISChild1 = table1.filter("ns ===
> > 'local.customerMISChild1'"
> > > ).select("o as child1");
> > > Table customerMISChild2 = table1.filter("ns ===
> > 'local.customerMISChild2'"
> > > ).select("o as child2");
> > > Table result = customerMISMaster.join(customerMISChild1).where("
> master.
> > > loanApplicationId=child1.loanApplicationId");
> > >
> > >
> > > it is throwing error "Method threw 'org.apache.flink.table.api.
> ValidationException'
> > exception. Undefined function: LOANAPPLICATIONID"
> > >
> > >
> > >
> > > ---
> > > *Amol Suryawanshi*
> > > Java Developer
> > > am...@iprogrammer.com
> > >
> > >
> > > *iProgrammer Solutions Pvt. Ltd.*
> > >
> > >
> > >
> > > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> > 411016,
> > > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > > www.iprogrammer.com 
> > > 
> > >
> > > On Mon, Jul 2, 2018 at 2:56 PM, Xingcan Cui 
> wrote:
> > >
> > >> Hi Amol,
> > >>
> > >> The “dynamic table” is just a logical concept, following which the
> Flink
> > >> table API is designed.
> > >> That means you don’t need to implement dynamic tables yourself.
> > >>
> > >> Flink table API provides different kinds of stream to stream joins in
> > >> recent versions (from 1.4).
> > >> The related docs can be found here https://ci.apache.org/projects
> > >> /flink/flink-docs-release-1.5/dev/table/tableApi.html#joins <
> > >> https://ci.apache.org/projects/flink/flink-docs-release-1.5
> > >> /dev/table/tableApi.html#joins>.
> > >>
> > >> Best,
> > >> Xingcan
> > >>
> > >>
> > >> > On Jul 2, 2018, at 4:49 PM, Amol S - iProgrammer <
> > am...@iprogrammer.com>
> > >> wrote:
> > >> >
> > >> > Hello,
> > >> >
> > >> > I am streaming mongodb oplog using kafka and flink and want to join
> > >> > multiple tables using flink table api but i have some concerns like
> is
> > >> it
> > >> > possible to join streamed tables in flink and if yes then please
> > >> provide me
> > >> > some example of stream join using table API.
> > >> >
> > >> > I gone through your dynamic table api doc. it is quit interesting
> but
> > >> > haven't found any example tutorial how to implement dynamic table.
> > >> >
> > >> > I 

Re: Flink table api

2018-07-02 Thread Fabian Hueske
Hi Amol,

These are the requirements for POJOs [1] that are fully supported by Flink.

Best, Fabian

[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/api_concepts.html#pojos
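
For reference, a minimal sketch of a POJO that meets those requirements, modeled 
on the Oplog class from this thread: a public class with a public no-argument 
constructor, whose fields are public or reachable via public getters/setters.

public class Oplog {
    public OplogTimestamp ts;           // public field is fine

    private BasicDBObject o;            // private field needs getter/setter

    public Oplog() {}                   // public no-arg constructor

    public BasicDBObject getO() { return o; }
    public void setO(BasicDBObject o) { this.o = o; }
}

Note that even then a field like BasicDBObject is analyzed as a generic type rather 
than a POJO, which is likely why the join key in this thread could not be resolved.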

2018-07-02 12:19 GMT+02:00 Amol S - iProgrammer :

> Hello Xingcan
>
> As mentioned in the above mail thread, I am streaming the mongodb oplog to join
> multiple mongo tables based on some unique key (primary key). To achieve
> this I have created one Java POJO, shown below, where o represents the generic
> document type of mongodb which holds my table fields, i.e. it is dynamic. Now I
> want to use the table api join over this BasicDBObject, but it seems flink does
> not allow generic POJOs. Please suggest on this.
>
> public class Oplog {
> private OplogTimestamp ts;
> private BasicDBObject o;
> }
>
>
>
> ---
> *Amol Suryawanshi*
> Java Developer
> am...@iprogrammer.com
>
>
> *iProgrammer Solutions Pvt. Ltd.*
>
>
>
> *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
> MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> www.iprogrammer.com 
> 
>
> On Mon, Jul 2, 2018 at 3:03 PM, Amol S - iProgrammer <
> am...@iprogrammer.com>
> wrote:
>
> > Hello Xingcan
> >
> > DataStream streamSource = env
> > .addSource(kafkaConsumer)
> > .setParallelism(4);
> >
> > StreamTableEnvironment tableEnv = TableEnvironment.
> getTableEnvironment(env);
> > // Convert the DataStream into a Table with default fields "f0", "f1"
> > Table table1 = tableEnv.fromDataStream(streamSource);
> >
> >
> > Table customerMISMaster = table1.filter("ns ===
> 'local.customerMISMaster'"
> > ).select("o as master");
> > Table customerMISChild1 = table1.filter("ns ===
> 'local.customerMISChild1'"
> > ).select("o as child1");
> > Table customerMISChild2 = table1.filter("ns ===
> 'local.customerMISChild2'"
> > ).select("o as child2");
> > Table result = customerMISMaster.join(customerMISChild1).where("master.
> > loanApplicationId=child1.loanApplicationId");
> >
> >
> > it is throwing error "Method threw 
> > 'org.apache.flink.table.api.ValidationException'
> exception. Undefined function: LOANAPPLICATIONID"
> >
> >
> >
> > ---
> > *Amol Suryawanshi*
> > Java Developer
> > am...@iprogrammer.com
> >
> >
> > *iProgrammer Solutions Pvt. Ltd.*
> >
> >
> >
> > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> 411016,
> > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > www.iprogrammer.com 
> > 
> >
> > On Mon, Jul 2, 2018 at 2:56 PM, Xingcan Cui  wrote:
> >
> >> Hi Amol,
> >>
> >> The “dynamic table” is just a logical concept, following which the Flink
> >> table API is designed.
> >> That means you don’t need to implement dynamic tables yourself.
> >>
> >> Flink table API provides different kinds of stream to stream joins in
> >> recent versions (from 1.4).
> >> The related docs can be found here https://ci.apache.org/projects
> >> /flink/flink-docs-release-1.5/dev/table/tableApi.html#joins <
> >> https://ci.apache.org/projects/flink/flink-docs-release-1.5
> >> /dev/table/tableApi.html#joins>.
> >>
> >> Best,
> >> Xingcan
> >>
> >>
> >> > On Jul 2, 2018, at 4:49 PM, Amol S - iProgrammer <
> am...@iprogrammer.com>
> >> wrote:
> >> >
> >> > Hello,
> >> >
> >> > I am streaming the MongoDB oplog using Kafka and Flink and want to
> >> > join multiple tables using the Flink Table API. Is it possible to
> >> > join streamed tables in Flink? If yes, please provide an example of
> >> > a stream join using the Table API.
> >> >
> >> > I went through your dynamic table API doc. It is quite interesting,
> >> > but I haven't found any example or tutorial showing how to implement
> >> > a dynamic table.
> >> >
> >> > I have tried to implement a Table API join using a POJO class, but
> >> > it gives org.apache.flink.table.api.TableException: Cannot generate
> >> > a valid execution plan for the given query.
> >> >
> >> > ---
> >> > *Amol Suryawanshi*
> >> > Java Developer
> >> > am...@iprogrammer.com
> >> >
> >> >
> >> > *iProgrammer Solutions Pvt. Ltd.*
> >> >
> >> >
> >> >
> >> > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> >> > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> >> 411016,
> >> > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> >> > www.iprogrammer.com
> >>
> >>
> >
>


Re: Flink table api

2018-07-02 Thread Amol S - iProgrammer
Hello Fabian,

The output of customerMISMaster.printSchema() is undefined

---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com

On Mon, Jul 2, 2018 at 3:49 PM, Fabian Hueske  wrote:

> Hi,
>
> It looks like the type of master is not known to Flink.
> What's the output of
>
> customerMISMaster.printSchema(); ?
>
> Best, Fabian
>
>
>
> 2018-07-02 11:33 GMT+02:00 Amol S - iProgrammer :
>
> > Hello Xingcan
> >
> > DataStream<Oplog> streamSource = env
> >         .addSource(kafkaConsumer)
> >         .setParallelism(4);
> >
> > StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
> > // Convert the DataStream into a Table with default fields "f0", "f1"
> > Table table1 = tableEnv.fromDataStream(streamSource);
> >
> > Table customerMISMaster =
> >         table1.filter("ns === 'local.customerMISMaster'").select("o as master");
> > Table customerMISChild1 =
> >         table1.filter("ns === 'local.customerMISChild1'").select("o as child1");
> > Table customerMISChild2 =
> >         table1.filter("ns === 'local.customerMISChild2'").select("o as child2");
> > Table result = customerMISMaster.join(customerMISChild1)
> >         .where("master.loanApplicationId = child1.loanApplicationId");
> >
> > It is throwing the error "Method threw
> > 'org.apache.flink.table.api.ValidationException' exception. Undefined
> > function: LOANAPPLICATIONID".
> >
> >
> >
> > ---
> > *Amol Suryawanshi*
> > Java Developer
> > am...@iprogrammer.com
> >
> >
> > *iProgrammer Solutions Pvt. Ltd.*
> >
> >
> >
> > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> 411016,
> > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > www.iprogrammer.com
> >
> > On Mon, Jul 2, 2018 at 2:56 PM, Xingcan Cui  wrote:
> >
> > > Hi Amol,
> > >
> > > The “dynamic table” is just a logical concept that the Flink Table API
> > > is designed around.
> > > That means you don’t need to implement dynamic tables yourself.
> > >
> > > The Flink Table API provides several kinds of stream-to-stream joins in
> > > recent versions (since 1.4).
> > > The related docs can be found here:
> > > https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/tableApi.html#joins
> > >
> > > Best,
> > > Xingcan
> > >
> > >
> > > > On Jul 2, 2018, at 4:49 PM, Amol S - iProgrammer <
> > am...@iprogrammer.com>
> > > wrote:
> > > >
> > > > Hello,
> > > >
> > > > I am streaming the MongoDB oplog using Kafka and Flink and want to
> > > > join multiple tables using the Flink Table API. Is it possible to
> > > > join streamed tables in Flink? If yes, please provide an example of
> > > > a stream join using the Table API.
> > > >
> > > > I went through your dynamic table API doc. It is quite interesting,
> > > > but I haven't found any example or tutorial showing how to implement
> > > > a dynamic table.
> > > >
> > > > I have tried to implement a Table API join using a POJO class, but
> > > > it gives org.apache.flink.table.api.TableException: Cannot generate
> > > > a valid execution plan for the given query.
> > > >
> > > > ---
> > > > *Amol Suryawanshi*
> > > > Java Developer
> > > > am...@iprogrammer.com
> > > >
> > > >
> > > > *iProgrammer Solutions Pvt. Ltd.*
> > > >
> > > >
> > > >
> > > > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > > > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> > > 411016,
> > > > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > > > www.iprogrammer.com
> > >
> > >
> >
>


Re: Flink table api

2018-07-02 Thread Amol S - iProgrammer
Hello Xingcan

As mentioned in the mail thread above, I am streaming the MongoDB oplog to
join multiple Mongo tables on a unique key (the primary key). To achieve
this I have created the Java POJO below, where o represents the generic
MongoDB document type that holds my table fields, i.e. they are dynamic.
Now I want to use a Table API join over this BasicDBObject, but it seems
Flink does not allow generic POJOs. Please advise.

public class Oplog {
    private OplogTimestamp ts;   // oplog timestamp
    private BasicDBObject o;     // generic document payload with dynamic fields
}
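
One possible workaround, sketched here under the assumption that Oplog
exposes getters and that the documents carry ns and loanApplicationId
fields: flatten the generic document into a strongly typed POJO before
creating the table, so that the Table API knows the field types.

import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.table.api.Table;

// Hypothetical typed record with public fields and a public no-arg
// constructor, so Flink can analyze it as a POJO.
public class FlatOplog {
    public String ns;               // namespace, e.g. "local.customerMISMaster"
    public long loanApplicationId;  // assumed join key from the document
    public FlatOplog() {}
}

DataStream<FlatOplog> flat = streamSource.map(new MapFunction<Oplog, FlatOplog>() {
    @Override
    public FlatOplog map(Oplog value) {
        FlatOplog out = new FlatOplog();
        out.ns = value.getNs();  // assumes a getter on the Oplog POJO
        out.loanApplicationId = value.getO().getLong("loanApplicationId");
        return out;
    }
});
Table typed = tableEnv.fromDataStream(flat, "ns, loanApplicationId");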



---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com

On Mon, Jul 2, 2018 at 3:03 PM, Amol S - iProgrammer 
wrote:

> Hello Xingcan
>
> DataStream<Oplog> streamSource = env
>         .addSource(kafkaConsumer)
>         .setParallelism(4);
>
> StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
> // Convert the DataStream into a Table with default fields "f0", "f1"
> Table table1 = tableEnv.fromDataStream(streamSource);
>
> Table customerMISMaster =
>         table1.filter("ns === 'local.customerMISMaster'").select("o as master");
> Table customerMISChild1 =
>         table1.filter("ns === 'local.customerMISChild1'").select("o as child1");
> Table customerMISChild2 =
>         table1.filter("ns === 'local.customerMISChild2'").select("o as child2");
> Table result = customerMISMaster.join(customerMISChild1)
>         .where("master.loanApplicationId = child1.loanApplicationId");
>
> It is throwing the error "Method threw
> 'org.apache.flink.table.api.ValidationException' exception. Undefined
> function: LOANAPPLICATIONID".
>
>
>
> ---
> *Amol Suryawanshi*
> Java Developer
> am...@iprogrammer.com
>
>
> *iProgrammer Solutions Pvt. Ltd.*
>
>
>
> *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
> MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> www.iprogrammer.com
>
> On Mon, Jul 2, 2018 at 2:56 PM, Xingcan Cui  wrote:
>
>> Hi Amol,
>>
>> The “dynamic table” is just a logical concept that the Flink Table API
>> is designed around.
>> That means you don’t need to implement dynamic tables yourself.
>>
>> The Flink Table API provides several kinds of stream-to-stream joins in
>> recent versions (since 1.4).
>> The related docs can be found here:
>> https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/tableApi.html#joins
>>
>> Best,
>> Xingcan
>>
>>
>> > On Jul 2, 2018, at 4:49 PM, Amol S - iProgrammer 
>> wrote:
>> >
>> > Hello,
>> >
>> > I am streaming the MongoDB oplog using Kafka and Flink and want to
>> > join multiple tables using the Flink Table API. Is it possible to
>> > join streamed tables in Flink? If yes, please provide an example of
>> > a stream join using the Table API.
>> >
>> > I went through your dynamic table API doc. It is quite interesting,
>> > but I haven't found any example or tutorial showing how to implement
>> > a dynamic table.
>> >
>> > I have tried to implement a Table API join using a POJO class, but
>> > it gives org.apache.flink.table.api.TableException: Cannot generate
>> > a valid execution plan for the given query.
>> >
>> > ---
>> > *Amol Suryawanshi*
>> > Java Developer
>> > am...@iprogrammer.com
>> >
>> >
>> > *iProgrammer Solutions Pvt. Ltd.*
>> >
>> >
>> >
>> > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
>> > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
>> 411016,
>> > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
>> > www.iprogrammer.com
>>
>>
>


Re: Flink table api

2018-07-02 Thread Fabian Hueske
Hi,

It looks like the type of master is not known to Flink.
What's the output of

customerMISMaster.printSchema(); ?
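
For reference, a sketch of what to look for in the output; if the o payload
could not be analyzed as a POJO, the column shows up as a generic black-box
type (the rendering below is illustrative, the exact output may differ):

// Prints the field tree Flink derived for the table.
customerMISMaster.printSchema();
// Illustrative output for a non-analyzable payload:
// root
//  |-- master: GenericType<com.mongodb.BasicDBObject>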

Best, Fabian



2018-07-02 11:33 GMT+02:00 Amol S - iProgrammer :

> Hello Xingcan
>
> DataStream<Oplog> streamSource = env
>         .addSource(kafkaConsumer)
>         .setParallelism(4);
>
> StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
> // Convert the DataStream into a Table with default fields "f0", "f1"
> Table table1 = tableEnv.fromDataStream(streamSource);
>
> Table customerMISMaster =
>         table1.filter("ns === 'local.customerMISMaster'").select("o as master");
> Table customerMISChild1 =
>         table1.filter("ns === 'local.customerMISChild1'").select("o as child1");
> Table customerMISChild2 =
>         table1.filter("ns === 'local.customerMISChild2'").select("o as child2");
> Table result = customerMISMaster.join(customerMISChild1)
>         .where("master.loanApplicationId = child1.loanApplicationId");
>
> It is throwing the error "Method threw
> 'org.apache.flink.table.api.ValidationException' exception. Undefined
> function: LOANAPPLICATIONID".
>
>
>
> ---
> *Amol Suryawanshi*
> Java Developer
> am...@iprogrammer.com
>
>
> *iProgrammer Solutions Pvt. Ltd.*
>
>
>
> *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
> MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> www.iprogrammer.com
>
> On Mon, Jul 2, 2018 at 2:56 PM, Xingcan Cui  wrote:
>
> > Hi Amol,
> >
> > The “dynamic table” is just a logical concept that the Flink Table API
> > is designed around.
> > That means you don’t need to implement dynamic tables yourself.
> >
> > The Flink Table API provides several kinds of stream-to-stream joins in
> > recent versions (since 1.4).
> > The related docs can be found here:
> > https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/tableApi.html#joins
> >
> > Best,
> > Xingcan
> >
> >
> > > On Jul 2, 2018, at 4:49 PM, Amol S - iProgrammer <
> am...@iprogrammer.com>
> > wrote:
> > >
> > > Hello,
> > >
> > > I am streaming the MongoDB oplog using Kafka and Flink and want to
> > > join multiple tables using the Flink Table API. Is it possible to
> > > join streamed tables in Flink? If yes, please provide an example of
> > > a stream join using the Table API.
> > >
> > > I went through your dynamic table API doc. It is quite interesting,
> > > but I haven't found any example or tutorial showing how to implement
> > > a dynamic table.
> > >
> > > I have tried to implement a Table API join using a POJO class, but
> > > it gives org.apache.flink.table.api.TableException: Cannot generate
> > > a valid execution plan for the given query.
> > >
> > > ---
> > > *Amol Suryawanshi*
> > > Java Developer
> > > am...@iprogrammer.com
> > >
> > >
> > > *iProgrammer Solutions Pvt. Ltd.*
> > >
> > >
> > >
> > > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> > 411016,
> > > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > > www.iprogrammer.com
> >
> >
>


[DISCUSS] Release Flink 1.5.1

2018-07-02 Thread Chesnay Schepler

Hello,

it has been a little over a month since we released 1.5.0. Since then
we've addressed 56 JIRAs [1] for the 1.5 branch, including stability
enhancements to the new execution mode (FLIP-6), fixes for critical
issues in the metric system, but also features that didn't quite make it
into 1.5.0, like FLIP-6 support for the scala-shell.


I think now is a good time to start thinking about a 1.5.1 release, for 
which I would volunteer as the release manager.


There are a few issues that I'm aware of that we should include in the
release [3], but I believe these should be resolved within the next few days.
So that we don't overlap with the proposed 1.6 release [2], we should
ideally start the release process this week.


What do you think?

[1] https://issues.apache.org/jira/projects/FLINK/versions/12343053

[2] 
https://lists.apache.org/thread.html/1b8b0e627739d1f01b760fb722a1aeb2e786eec09ddd47b8303faadb@%3Cdev.flink.apache.org%3E


[3]

- https://issues.apache.org/jira/browse/FLINK-9280
- https://issues.apache.org/jira/browse/FLINK-8785
- https://issues.apache.org/jira/browse/FLINK-9567


Re: Flink table api

2018-07-02 Thread Amol S - iProgrammer
Hello Xingcan

DataStream<Oplog> streamSource = env
        .addSource(kafkaConsumer)
        .setParallelism(4);

StreamTableEnvironment tableEnv = TableEnvironment.getTableEnvironment(env);
// Convert the DataStream into a Table with default fields "f0", "f1"
Table table1 = tableEnv.fromDataStream(streamSource);

Table customerMISMaster =
        table1.filter("ns === 'local.customerMISMaster'").select("o as master");
Table customerMISChild1 =
        table1.filter("ns === 'local.customerMISChild1'").select("o as child1");
Table customerMISChild2 =
        table1.filter("ns === 'local.customerMISChild2'").select("o as child2");
Table result = customerMISMaster.join(customerMISChild1)
        .where("master.loanApplicationId = child1.loanApplicationId");

It is throwing the error "Method threw
'org.apache.flink.table.api.ValidationException' exception. Undefined
function: LOANAPPLICATIONID".



---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com

On Mon, Jul 2, 2018 at 2:56 PM, Xingcan Cui  wrote:

> Hi Amol,
>
> The “dynamic table” is just a logical concept that the Flink Table API
> is designed around.
> That means you don’t need to implement dynamic tables yourself.
>
> The Flink Table API provides several kinds of stream-to-stream joins in
> recent versions (since 1.4).
> The related docs can be found here:
> https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/tableApi.html#joins
>
> Best,
> Xingcan
>
>
> > On Jul 2, 2018, at 4:49 PM, Amol S - iProgrammer 
> wrote:
> >
> > Hello,
> >
> > I am streaming the MongoDB oplog using Kafka and Flink and want to
> > join multiple tables using the Flink Table API. Is it possible to
> > join streamed tables in Flink? If yes, please provide an example of
> > a stream join using the Table API.
> >
> > I went through your dynamic table API doc. It is quite interesting,
> > but I haven't found any example or tutorial showing how to implement
> > a dynamic table.
> >
> > I have tried to implement a Table API join using a POJO class, but
> > it gives org.apache.flink.table.api.TableException: Cannot generate
> > a valid execution plan for the given query.
> >
> > ---
> > *Amol Suryawanshi*
> > Java Developer
> > am...@iprogrammer.com
> >
> >
> > *iProgrammer Solutions Pvt. Ltd.*
> >
> >
> >
> > *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> > Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune -
> 411016,
> > MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> > www.iprogrammer.com
>
>


Re: Flink table api

2018-07-02 Thread Xingcan Cui
Hi Amol,

The “dynamic table” is just a logical concept that the Flink Table API is
designed around.
That means you don’t need to implement dynamic tables yourself.

The Flink Table API provides several kinds of stream-to-stream joins in
recent versions (since 1.4).
The related docs can be found here:
https://ci.apache.org/projects/flink/flink-docs-release-1.5/dev/table/tableApi.html#joins
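
As a quick illustration of the string-expression syntax those docs describe,
a minimal inner join between two streamed tables might look like the
following (table and field names are invented):

// Minimal Table API stream-stream inner join sketch.
Table orders = tableEnv.fromDataStream(orderStream, "orderId, amount");
Table payments = tableEnv.fromDataStream(paymentStream, "payId, payAmount");

Table joined = orders
        .join(payments)
        .where("orderId === payId")  // `===` is equality in expression strings
        .select("orderId, amount, payAmount");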

Best,
Xingcan


> On Jul 2, 2018, at 4:49 PM, Amol S - iProgrammer  
> wrote:
> 
> Hello,
> 
> I am streaming the MongoDB oplog using Kafka and Flink and want to join
> multiple tables using the Flink Table API. Is it possible to join streamed
> tables in Flink? If yes, please provide an example of a stream join using
> the Table API.
>
> I went through your dynamic table API doc. It is quite interesting, but I
> haven't found any example or tutorial showing how to implement a dynamic
> table.
>
> I have tried to implement a Table API join using a POJO class, but it
> gives org.apache.flink.table.api.TableException: Cannot generate a valid
> execution plan for the given query.
> 
> ---
> *Amol Suryawanshi*
> Java Developer
> am...@iprogrammer.com
> 
> 
> *iProgrammer Solutions Pvt. Ltd.*
> 
> 
> 
> *Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
> Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
> MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
> www.iprogrammer.com



Flink table api

2018-07-02 Thread Amol S - iProgrammer
Hello,

I am streaming the MongoDB oplog using Kafka and Flink and want to join
multiple tables using the Flink Table API. Is it possible to join streamed
tables in Flink? If yes, please provide an example of a stream join using
the Table API.

I went through your dynamic table API doc. It is quite interesting, but I
haven't found any example or tutorial showing how to implement a dynamic
table.

I have tried to implement a Table API join using a POJO class, but it gives
org.apache.flink.table.api.TableException: Cannot generate a valid
execution plan for the given query.

---
*Amol Suryawanshi*
Java Developer
am...@iprogrammer.com


*iProgrammer Solutions Pvt. Ltd.*



*Office 103, 104, 1st Floor Pride Portal,Shivaji Housing Society,
Bahiratwadi,Near Hotel JW Marriott, Off Senapati Bapat Road, Pune - 411016,
MH, INDIA.**Phone: +91 9689077510 | Skype: amols_iprogrammer*
www.iprogrammer.com


[ANNOUNCE] Weekly community update #27

2018-07-02 Thread Till Rohrmann
Dear community,

this is the weekly community update thread #27. Please post any news and
updates you want to share with the community to this thread.

# Feature freeze and release date for Flink 1.6

The community is currently discussing the feature freeze, and thus also the
release date, for Flink 1.6 [1]; the freeze will happen soon. Join the
discussion to learn more and voice your opinion.

# DynamoDB connector

The community is currently discussing the possibility of extending the
existing FlinkKinesisConsumer to also consume data from DynamoDB [2]. If you
have ideas or experience with DynamoDB, please join the thread.

[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Feature-freeze-for-Flink-1-6-td23010.html
[2]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Consuming-data-from-dynamoDB-streams-to-flink-td22963.html

Cheers,
Till


[jira] [Created] (FLINK-9699) Add api to replace table

2018-07-02 Thread Jeff Zhang (JIRA)
Jeff Zhang created FLINK-9699:
-

 Summary: Add api to replace table
 Key: FLINK-9699
 URL: https://issues.apache.org/jira/browse/FLINK-9699
 Project: Flink
  Issue Type: Improvement
Reporter: Jeff Zhang






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)