[jira] [Created] (FLINK-3754) Add a validation phase before construct RelNode using TableAPI

2016-04-13 Thread Yijie Shen (JIRA)
Yijie Shen created FLINK-3754:
-

 Summary: Add a validation phase before construct RelNode using 
TableAPI
 Key: FLINK-3754
 URL: https://issues.apache.org/jira/browse/FLINK-3754
 Project: Flink
  Issue Type: Improvement
  Components: Table API
Affects Versions: 1.0.0
Reporter: Yijie Shen
Assignee: Yijie Shen


Unlike SQL string execution, which has a separate validation phase before 
RelNode construction, the Table API lacks such a counterpart and its validation 
is scattered across many places.

I suggest adding a single validation phase so that problems are detected as 
early as possible.
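
As an illustration of "as early as possible" (a hypothetical sketch; the exact Java Table API calls may differ), this is the kind of error a single up-front validation phase would catch:

{code}
Table t = tableEnv.fromDataSet(input, "name, age");

// today the misspelled field may only fail somewhere during RelNode construction;
// with a dedicated validation phase it would be rejected immediately with a clear
// "unknown field 'nam'" error, before any RelNode is built
Table result = t.select("nam, age");
{code}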



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-3752) Add Per-Kafka-Partition Watermark Generation to the docs

2016-04-13 Thread Stephan Ewen (JIRA)
Stephan Ewen created FLINK-3752:
---

 Summary: Add Per-Kafka-Partition Watermark Generation to the docs
 Key: FLINK-3752
 URL: https://issues.apache.org/jira/browse/FLINK-3752
 Project: Flink
  Issue Type: Improvement
  Components: Documentation
Affects Versions: 1.1.0
Reporter: Stephan Ewen
 Fix For: 1.1.0


The new methods that create watermarks per Kafka topic-partition, rather than 
per Flink DataStream partition, should be documented under 

https://ci.apache.org/projects/flink/flink-docs-master/apis/streaming/event_timestamps_watermarks.html
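
A minimal sketch of a periodic assigner for the docs ({{MyEvent}} and its timestamp getter are hypothetical; how the assigner is attached to the Kafka consumer, and the method name used for it, is exactly what needs documenting — see also FLINK-3747):

{code}
import org.apache.flink.streaming.api.functions.AssignerWithPeriodicWatermarks;
import org.apache.flink.streaming.api.watermark.Watermark;

public class BoundedLatenessAssigner implements AssignerWithPeriodicWatermarks<MyEvent> {

    private static final long MAX_OUT_OF_ORDERNESS_MS = 3500; // assumed bound on lateness

    // start low enough that the first watermark does not underflow
    private long maxSeenTimestamp = Long.MIN_VALUE + MAX_OUT_OF_ORDERNESS_MS;

    @Override
    public long extractTimestamp(MyEvent element, long previousElementTimestamp) {
        long ts = element.getTimestampMillis();
        maxSeenTimestamp = Math.max(maxSeenTimestamp, ts);
        return ts;
    }

    @Override
    public Watermark getCurrentWatermark() {
        // evaluated per Kafka topic-partition; the consumer merges the partial watermarks
        return new Watermark(maxSeenTimestamp - MAX_OUT_OF_ORDERNESS_MS);
    }
}
{code}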



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread Matthias J. Sax
Hi jstar,

I need to have a close look. But I am wondering why you use reflection
in the first place? Is there any specific reason for that?

Furthermore, the example provided in the maven-example project also covers
submitting a topology to Flink via Java. Have a look at
org.apache.flink.storm.wordcount.WordCountRemoteBySubmitter.

It contains a main() method and you can just run it as a regular Java
program in your IDE.

The SO question example should also work; it also contains a main()
method, so you should be able to run it.

Btw: if you use the Storm compatibility API, there is no reason to get an
ExecutionEnvironment in your code. This happens automatically with
FlinkClient/FlinkSubmitter.

Furthermore, I would recommend using FlinkSubmitter instead of
FlinkClient, as it is somewhat simpler to use.
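
As a rough sketch (modeled on WordCountRemoteBySubmitter; package names and the
exact Config keys depend on the Storm version bundled with flink-storm, so treat
those details as assumptions to verify):

import backtype.storm.Config;
import backtype.storm.topology.TopologyBuilder;
import org.apache.flink.storm.api.FlinkSubmitter;
import org.apache.flink.storm.api.FlinkTopology;

public class SubmitTopologyToFlink {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // builder.setSpout(...); builder.setBolt(...);  // assemble the topology here

        Config conf = new Config();
        conf.put(Config.NIMBUS_HOST, "jobmanager-host"); // JobManager address (assumed key)
        conf.put(Config.NIMBUS_THRIFT_PORT, 6123);       // JobManager port (assumed key)

        // No ExecutionEnvironment needed: FlinkSubmitter handles jar upload and submission.
        FlinkSubmitter.submitTopology("word-count", conf, FlinkTopology.createTopology(builder));
    }
}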

About the SO question: I guess the problem is the jar assembly. The user says

"Since I'm using Maven to handle my dependencies, I do a mvn clean install
to obtain the jar."

I guess this is not sufficient to bundle a correct jar. Have a look at the
pom.xml from storm-examples. It uses Maven plug-ins to assemble the jar
correctly. (Regular Maven artifacts do not work for job submission...)

I will have a closer look and follow up... Hope this helps already.

-Matthias

On 04/13/2016 06:23 PM, star jlong wrote:
> Thanks for the reply.
> @Stephen, I try using RemoteEnvironment to submit my topology to flink. 
> Here is the try that I did RemoteEnvironment remote = new 
> RemoteEnvironment(ipJobManager, 6123, jarPath); remote.execute();
> While running the program, this is the exception that I got.
> java.lang.RuntimeException: No data sinks have been created yet. A program 
> needs at least one sink that consumes data. Examples are writing the data set 
> or printing it.
>  
> 
> Le Mercredi 13 avril 2016 16h54, Till Rohrmann  a 
> écrit :
>  
> 
>  I think this is not the problem here since the problem is still happening
> on the client side when the FlinkTopology tries to copy the registered
> spouts. This happens before the job is submitted to the cluster. Maybe
> Mathias could chime in here.
> 
> Cheers,
> Till
> 
> On Wed, Apr 13, 2016 at 5:39 PM, Stephan Ewen  wrote:
> 
>> Hi!
>>
>> For flink standalone programs, you would use a "RemoteEnvironment"
>>
>> For Storm, I would use the "FlinkClient" in "org.apache.flink.storm.api".
>> That one should deal with jars, classloaders, etc for you.
>>
>> Stephan
>>
>>
>> On Wed, Apr 13, 2016 at 3:43 PM, star jlong 
>> wrote:
>>
>>> Thanks for the suggestion. Sure those examples are interesting and I have
>>> deploy them successfully on flink. The deployment is done the command
>> line
>>> that is doing something like
>>> bin/flink run example.jarBut what I want is to submit the topology to
>>> flink using a java program.
>>>
>>> Thanks.
>>>
>>> Le Mercredi 13 avril 2016 14h12, Chesnay Schepler <
>> ches...@apache.org>
>>> a écrit :
>>>
>>>
>>>   you can find examples here:
>>>
>>>
>> https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples
>>>
>>> we haven't established yet that it is an API issue; it could very well
>>> be caused by the reflection magic you're using...
>>>
>>> On 13.04.2016 14:57, star jlong wrote:
 Ok, it seems like there an issue with the api. So please does anybody
>>> has a working example for deploying a topology using the flink dependency
>>> flink-storm_2.11 or any other will be welcoming.

 Thanks,
 jstar

   Le Mercredi 13 avril 2016 13h44, star jlong
>>>  a écrit :


   Hi Schepler,

 Thanks for the concerned. Yes I'm actaully having the same issue as
>>> indicated on that post because I'm the one that posted that issue.

   Le Mercredi 13 avril 2016 13h35, Chesnay Schepler <
>>> ches...@apache.org> a écrit :



>>>
>> http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api

 On 13.04.2016 14:28, Till Rohrmann wrote:
> Hi jstar,
>
> what's exactly the problem you're observing?
>
> Cheers,
> Till
>
> On Wed, Apr 13, 2016 at 2:23 PM, star jlong
>>  wrote:
>
>> Hi there,
>>
>> I'm jstar. I have been playing around with flink. I'm very much
>>> interested
>> in submitting a topoloy  to flink using its api. As indicated
>> on stackoverflow, that is the try that I have given. But I was stuck
>>> with
>> some exception. Please any help will be welcoming.
>>
>>
>> Thanks.
>> jstar





>>>
>>>
>>>
>>>
>>>
>>
> 
>   
> 





Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread star jlong
Thanks for the reply.
@Stephan, I tried using RemoteEnvironment to submit my topology to Flink.
Here is what I tried: RemoteEnvironment remote = new
RemoteEnvironment(ipJobManager, 6123, jarPath); remote.execute();
While running the program, this is the exception that I got:
java.lang.RuntimeException: No data sinks have been created yet. A program
needs at least one sink that consumes data. Examples are writing the data set
or printing it.
 

On Wednesday, April 13, 2016 at 4:54 PM, Till Rohrmann wrote:
 

 I think this is not the problem here since the problem is still happening
on the client side when the FlinkTopology tries to copy the registered
spouts. This happens before the job is submitted to the cluster. Maybe
Mathias could chime in here.

Cheers,
Till

On Wed, Apr 13, 2016 at 5:39 PM, Stephan Ewen  wrote:

> Hi!
>
> For flink standalone programs, you would use a "RemoteEnvironment"
>
> For Storm, I would use the "FlinkClient" in "org.apache.flink.storm.api".
> That one should deal with jars, classloaders, etc for you.
>
> Stephan
>
>
> On Wed, Apr 13, 2016 at 3:43 PM, star jlong 
> wrote:
>
> > Thanks for the suggestion. Sure those examples are interesting and I have
> > deploy them successfully on flink. The deployment is done the command
> line
> > that is doing something like
> > bin/flink run example.jarBut what I want is to submit the topology to
> > flink using a java program.
> >
> > Thanks.
> >
> >    Le Mercredi 13 avril 2016 14h12, Chesnay Schepler <
> ches...@apache.org>
> > a écrit :
> >
> >
> >  you can find examples here:
> >
> >
> https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples
> >
> > we haven't established yet that it is an API issue; it could very well
> > be caused by the reflection magic you're using...
> >
> > On 13.04.2016 14:57, star jlong wrote:
> > > Ok, it seems like there an issue with the api. So please does anybody
> > has a working example for deploying a topology using the flink dependency
> > flink-storm_2.11 or any other will be welcoming.
> > >
> > > Thanks,
> > > jstar
> > >
> > >      Le Mercredi 13 avril 2016 13h44, star jlong
> >  a écrit :
> > >
> > >
> > >  Hi Schepler,
> > >
> > > Thanks for the concerned. Yes I'm actaully having the same issue as
> > indicated on that post because I'm the one that posted that issue.
> > >
> > >      Le Mercredi 13 avril 2016 13h35, Chesnay Schepler <
> > ches...@apache.org> a écrit :
> > >
> > >
> > >
> >
> http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api
> > >
> > > On 13.04.2016 14:28, Till Rohrmann wrote:
> > >> Hi jstar,
> > >>
> > >> what's exactly the problem you're observing?
> > >>
> > >> Cheers,
> > >> Till
> > >>
> > >> On Wed, Apr 13, 2016 at 2:23 PM, star jlong
>  > >
> > >> wrote:
> > >>
> > >>> Hi there,
> > >>>
> > >>> I'm jstar. I have been playing around with flink. I'm very much
> > interested
> > >>> in submitting a topoloy  to flink using its api. As indicated
> > >>> on stackoverflow, that is the try that I have given. But I was stuck
> > with
> > >>> some exception. Please any help will be welcoming.
> > >>>
> > >>>
> > >>> Thanks.
> > >>> jstar
> > >
> > >
> > >
> > >
> > >
> >
> >
> >
> >
> >
>

  

Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread Till Rohrmann
I think this is not the problem here, since the problem is still happening
on the client side when the FlinkTopology tries to copy the registered
spouts. This happens before the job is submitted to the cluster. Maybe
Matthias could chime in here.

Cheers,
Till

On Wed, Apr 13, 2016 at 5:39 PM, Stephan Ewen  wrote:

> Hi!
>
> For flink standalone programs, you would use a "RemoteEnvironment"
>
> For Storm, I would use the "FlinkClient" in "org.apache.flink.storm.api".
> That one should deal with jars, classloaders, etc for you.
>
> Stephan
>
>
> On Wed, Apr 13, 2016 at 3:43 PM, star jlong 
> wrote:
>
> > Thanks for the suggestion. Sure those examples are interesting and I have
> > deploy them successfully on flink. The deployment is done the command
> line
> > that is doing something like
> > bin/flink run example.jarBut what I want is to submit the topology to
> > flink using a java program.
> >
> > Thanks.
> >
> > Le Mercredi 13 avril 2016 14h12, Chesnay Schepler <
> ches...@apache.org>
> > a écrit :
> >
> >
> >  you can find examples here:
> >
> >
> https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples
> >
> > we haven't established yet that it is an API issue; it could very well
> > be caused by the reflection magic you're using...
> >
> > On 13.04.2016 14:57, star jlong wrote:
> > > Ok, it seems like there an issue with the api. So please does anybody
> > has a working example for deploying a topology using the flink dependency
> > flink-storm_2.11 or any other will be welcoming.
> > >
> > > Thanks,
> > > jstar
> > >
> > >  Le Mercredi 13 avril 2016 13h44, star jlong
> >  a écrit :
> > >
> > >
> > >  Hi Schepler,
> > >
> > > Thanks for the concerned. Yes I'm actaully having the same issue as
> > indicated on that post because I'm the one that posted that issue.
> > >
> > >  Le Mercredi 13 avril 2016 13h35, Chesnay Schepler <
> > ches...@apache.org> a écrit :
> > >
> > >
> > >
> >
> http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api
> > >
> > > On 13.04.2016 14:28, Till Rohrmann wrote:
> > >> Hi jstar,
> > >>
> > >> what's exactly the problem you're observing?
> > >>
> > >> Cheers,
> > >> Till
> > >>
> > >> On Wed, Apr 13, 2016 at 2:23 PM, star jlong
>  > >
> > >> wrote:
> > >>
> > >>> Hi there,
> > >>>
> > >>> I'm jstar. I have been playing around with flink. I'm very much
> > interested
> > >>> in submitting a topoloy  to flink using its api. As indicated
> > >>> on stackoverflow, that is the try that I have given. But I was stuck
> > with
> > >>> some exception. Please any help will be welcoming.
> > >>>
> > >>>
> > >>> Thanks.
> > >>> jstar
> > >
> > >
> > >
> > >
> > >
> >
> >
> >
> >
> >
>


Re: RichMapPartitionFunction - problems with collect

2016-04-13 Thread Sergio Ramírez

Hello again:

Any news about this problem with the rich MapPartition function?

Thank you

On 06/04/16 17:01, Sergio Ramírez wrote:

Hello,

Ok, please find enclosed the test code and the input data.

Cheers

On 31/03/16 10:07, Till Rohrmann wrote:

Hi Sergio,

could you please provide a complete example (including input data) to
reproduce your problem. It is hard to tell what's going wrong when one only
sees a fraction of the program.

Cheers,
Till

On Tue, Mar 29, 2016 at 5:58 PM, Sergio Ramírez 
wrote:


Hi again,

I've not been able to solve the problem with the instruction you gave me.
I've tried with static variables (matrices), also unsuccessfully. I've also
tried this simpler code:


def mapPartition(it: java.lang.Iterable[LabeledVector], out: Collector[((Int, Int), Int)]): Unit = {
  val index = getRuntimeContext().getIndexOfThisSubtask() // partition index
  var ninst = 0
  for (reg <- it.asScala) {
    requireByteValues(reg.vector)
    ninst += 1
  }
  for (i <- 0 until nFeatures) out.collect((i, index) -> ninst)
}

The result is as follows:

Attribute 10, first seven partitions:
((10,0),201),((10,1),200),((10,2),201),((10,3),200),((10,4),200),((10,5),201),((10,6),201),((10,7),201) 


Attribute 12, first seven partitions:
((12,0),201),((12,1),201),((12,2),201),((12,3),200),((12,4),201),((12,5),200),((12,6),200),((12,7),201) 



As you can see, for block 6, for example, different numbers of instances are
shown for the two attributes (201 vs. 200), which should be impossible.


On 24/03/16 22:39, Chesnay Schepler wrote:

Haven't looked too deeply into this, but this sounds like object reuse is
enabled, at which point buffering values effectively causes you to store
the same value multiple times.

can you try disabling objectReuse using
env.getConfig().disableObjectReuse() ?

On 22.03.2016 16:53, Sergio Ramírez wrote:


Hi all,

I've been having some problems with RichMapPartitionFunction. First, I
tried to convert the iterable into an array, unsuccessfully. Then I used
some buffers to store the values per column; I am trying to transpose the
local matrix of LabeledVectors that I have in each partition.

None of these solutions have worked. For example, for partition 7 and
feature 10 the vector is empty, whereas for the same partition and feature
11 the vector contains 200 elements. And this changes on each execution,
for different partitions and features.

I think there is a problem with using the collect method outside of the
iterable loop.

new RichMapPartitionFunction[LabeledVector, ((Int, Int), Array[Byte])]() {
  def mapPartition(it: java.lang.Iterable[LabeledVector], out: Collector[((Int, Int), Array[Byte])]): Unit = {
    val index = getRuntimeContext().getIndexOfThisSubtask()
    val mat = for (i <- 0 until nFeatures) yield new scala.collection.mutable.ListBuffer[Byte]
    for (reg <- it.asScala) {
      for (i <- 0 until (nFeatures - 1)) mat(i) += reg.vector(i).toByte
      mat(nFeatures - 1) += classMap(reg.label)
    }
    for (i <- 0 until nFeatures) out.collect((i, index) -> mat(i).toArray) // numPartitions
  }
}

Regards








Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread Stephan Ewen
Hi!

For flink standalone programs, you would use a "RemoteEnvironment"

For Storm, I would use the "FlinkClient" in "org.apache.flink.storm.api".
That one should deal with jars, classloaders, etc for you.

Stephan


On Wed, Apr 13, 2016 at 3:43 PM, star jlong 
wrote:

> Thanks for the suggestion. Sure those examples are interesting and I have
> deploy them successfully on flink. The deployment is done the command line
> that is doing something like
> bin/flink run example.jarBut what I want is to submit the topology to
> flink using a java program.
>
> Thanks.
>
> Le Mercredi 13 avril 2016 14h12, Chesnay Schepler 
> a écrit :
>
>
>  you can find examples here:
>
> https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples
>
> we haven't established yet that it is an API issue; it could very well
> be caused by the reflection magic you're using...
>
> On 13.04.2016 14:57, star jlong wrote:
> > Ok, it seems like there an issue with the api. So please does anybody
> has a working example for deploying a topology using the flink dependency
> flink-storm_2.11 or any other will be welcoming.
> >
> > Thanks,
> > jstar
> >
> >  Le Mercredi 13 avril 2016 13h44, star jlong
>  a écrit :
> >
> >
> >  Hi Schepler,
> >
> > Thanks for the concerned. Yes I'm actaully having the same issue as
> indicated on that post because I'm the one that posted that issue.
> >
> >  Le Mercredi 13 avril 2016 13h35, Chesnay Schepler <
> ches...@apache.org> a écrit :
> >
> >
> >
> http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api
> >
> > On 13.04.2016 14:28, Till Rohrmann wrote:
> >> Hi jstar,
> >>
> >> what's exactly the problem you're observing?
> >>
> >> Cheers,
> >> Till
> >>
> >> On Wed, Apr 13, 2016 at 2:23 PM, star jlong  >
> >> wrote:
> >>
> >>> Hi there,
> >>>
> >>> I'm jstar. I have been playing around with flink. I'm very much
> interested
> >>> in submitting a topoloy  to flink using its api. As indicated
> >>> on stackoverflow, that is the try that I have given. But I was stuck
> with
> >>> some exception. Please any help will be welcoming.
> >>>
> >>>
> >>> Thanks.
> >>> jstar
> >
> >
> >
> >
> >
>
>
>
>
>


[jira] [Created] (FLINK-3751) default Operator names are inconsistent

2016-04-13 Thread Chesnay Schepler (JIRA)
Chesnay Schepler created FLINK-3751:
---

 Summary: default Operator names are inconsistent
 Key: FLINK-3751
 URL: https://issues.apache.org/jira/browse/FLINK-3751
 Project: Flink
  Issue Type: Bug
  Components: DataSet API, DataStream API
Affects Versions: 1.0.1
Reporter: Chesnay Schepler
Priority: Minor


h3. The Problem
If a user doesn't name an operator explicitly (generally using the name() 
method), then Flink auto-generates a name. These generated names are really 
(like, _really_) inconsistent within and across APIs.

In the batch API non-source/-sink operator names are _generally_ formed like 
this:
{code}FlatMap (FlatMap at main(WordCount.java:81)){code}

We have
* FlatMap, describing the runtime operator type
* another FlatMap, describing which user-call created this operator
* main(WordCount.java:81), describing the call location

This already falls apart when you have a DataSource, which looks like this:
{code}DataSource (at getDefaultTextLineDataSet(WordCountData.java:70) 
(org.apache.flink.CollectionInputFormat){code}
It is missing the call that created the source (fromElements()) and suddenly 
includes the InputFormat name.

Sinks are a different story yet again, since collect() is displayed as
{code} DataSink (collect()) {code}
which is missing the call location.

Then we have the Streaming API, where things are named completely differently as 
well:

The fromElements source is displayed as 
{code} Source: Collection Source {code}

non-source/-sink operators are displayed simply as their runtime operator type
{code} FlatMap {code}

and sinks, at times, do not have a name at all.
{code} Sink: Unnamed {code}

To put the cherry on top, chains are displayed in the Batch API as
{code}CHAIN <operator 1> -> <operator 2>{code}
while in the Streaming API we lost the CHAIN keyword:
{code}<operator 1> -> <operator 2>{code}

Considering that these names are right in the user's face via the Dashboard, we 
should try to homogenize them a bit.
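
Until the defaults are homogenized, explicit names are a workaround users can apply today; a small batch sketch (not part of this issue) that names every operator:

{code}
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;

public class ExplicitOperatorNames {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        env.fromElements("to be or not to be").name("text-source")
            .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                @Override
                public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                    for (String word : line.split(" ")) {
                        out.collect(new Tuple2<>(word, 1));
                    }
                }
            }).name("tokenizer")
            .groupBy(0)
            .sum(1).name("word-counter")
            .print(); // print() triggers execution and acts as the sink in the batch API
    }
}
{code}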



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-3750) Make JDBCInputFormat a parallel source

2016-04-13 Thread Flavio Pompermaier (JIRA)
Flavio Pompermaier created FLINK-3750:
-

 Summary: Make JDBCInputFormat a parallel source
 Key: FLINK-3750
 URL: https://issues.apache.org/jira/browse/FLINK-3750
 Project: Flink
  Issue Type: Improvement
  Components: Batch
Affects Versions: 1.0.1
Reporter: Flavio Pompermaier
Assignee: Flavio Pompermaier
Priority: Minor


At the moment the batch JDBCInputFormat does not support parallelism 
(it implements NonParallelInput). I'd like to remove this limitation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread star jlong
Thanks for the suggestion. Sure, those examples are interesting and I have 
deployed them successfully on Flink. The deployment is done via the command line, 
doing something like
bin/flink run example.jar
But what I want is to submit the topology to Flink using a Java program.

Thanks. 

On Wednesday, April 13, 2016 at 2:12 PM, Chesnay Schepler wrote:
 

 you can find examples here: 
https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples

we haven't established yet that it is an API issue; it could very well 
be caused by the reflection magic you're using...

On 13.04.2016 14:57, star jlong wrote:
> Ok, it seems like there an issue with the api. So please does anybody has a 
> working example for deploying a topology using the flink dependency 
> flink-storm_2.11 or any other will be welcoming.
>
> Thanks,
> jstar
>
>      Le Mercredi 13 avril 2016 13h44, star jlong  
>a écrit :
>  
>
>  Hi Schepler,
>
> Thanks for the concerned. Yes I'm actaully having the same issue as indicated 
> on that post because I'm the one that posted that issue.
>
>      Le Mercredi 13 avril 2016 13h35, Chesnay Schepler  a 
>écrit :
>  
>
>  
>http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api
>
> On 13.04.2016 14:28, Till Rohrmann wrote:
>> Hi jstar,
>>
>> what's exactly the problem you're observing?
>>
>> Cheers,
>> Till
>>
>> On Wed, Apr 13, 2016 at 2:23 PM, star jlong 
>> wrote:
>>
>>> Hi there,
>>>
>>> I'm jstar. I have been playing around with flink. I'm very much interested
>>> in submitting a topoloy  to flink using its api. As indicated
>>> on stackoverflow, that is the try that I have given. But I was stuck with
>>> some exception. Please any help will be welcoming.
>>>
>>>
>>> Thanks.
>>> jstar
>
>
>
>
>    



  

Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread Chesnay Schepler
you can find examples here: 
https://github.com/apache/flink/tree/master/flink-contrib/flink-storm-examples


we haven't established yet that it is an API issue; it could very well 
be caused by the reflection magic you're using...


On 13.04.2016 14:57, star jlong wrote:

Ok, it seems like there an issue with the api. So please does anybody has a 
working example for deploying a topology using the flink dependency 
flink-storm_2.11 or any other will be welcoming.

Thanks,
jstar

 Le Mercredi 13 avril 2016 13h44, star jlong  a 
écrit :
  


  Hi Schepler,

Thanks for the concerned. Yes I'm actaully having the same issue as indicated 
on that post because I'm the one that posted that issue.

 Le Mercredi 13 avril 2016 13h35, Chesnay Schepler  a 
écrit :
  


  
http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api

On 13.04.2016 14:28, Till Rohrmann wrote:

Hi jstar,

what's exactly the problem you're observing?

Cheers,
Till

On Wed, Apr 13, 2016 at 2:23 PM, star jlong 
wrote:


Hi there,

I'm jstar. I have been playing around with flink. I'm very much interested
in submitting a topoloy  to flink using its api. As indicated
on stackoverflow, that is the try that I have given. But I was stuck with
some exception. Please any help will be welcoming.


Thanks.
jstar





   




Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread star jlong
OK, it seems like there is an issue with the API. So please, does anybody have a 
working example for deploying a topology using the Flink dependency 
flink-storm_2.11? Any other example will be welcome.

Thanks,
jstar 

On Wednesday, April 13, 2016 at 1:44 PM, star jlong wrote:
 

 Hi Schepler,

Thanks for the concerned. Yes I'm actaully having the same issue as indicated 
on that post because I'm the one that posted that issue. 

    On Wednesday, April 13, 2016 at 1:35 PM, Chesnay Schepler wrote:
 

 
http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api

On 13.04.2016 14:28, Till Rohrmann wrote:
> Hi jstar,
>
> what's exactly the problem you're observing?
>
> Cheers,
> Till
>
> On Wed, Apr 13, 2016 at 2:23 PM, star jlong 
> wrote:
>
>> Hi there,
>>
>> I'm jstar. I have been playing around with flink. I'm very much interested
>> in submitting a topoloy  to flink using its api. As indicated
>> on stackoverflow, that is the try that I have given. But I was stuck with
>> some exception. Please any help will be welcoming.
>>
>>
>> Thanks.
>> jstar





  

Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread Till Rohrmann
I've updated the master. Could you check it out and run your program with
the latest master? I would expect to see a ClassNotFoundException.

On Wed, Apr 13, 2016 at 2:54 PM, Till Rohrmann  wrote:

> Yes that is true. I'll commit a hotfix for that.
>
> My suspicion is that we use the wrong class loader in the
> FlinkTopology.copyObject method to load the RandomSentenceSpout class. We
> can see that once I removed the exception swallowing in the current master.
>
> On Wed, Apr 13, 2016 at 2:40 PM, star jlong 
> wrote:
>
>> Hi Schepler,
>>
>> Thanks for the concerned. Yes I'm actaully having the same issue as
>> indicated on that post because I'm the one that posted that issue.
>>
>> Le Mercredi 13 avril 2016 13h35, Chesnay Schepler 
>> a écrit :
>>
>>
>>
>> http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api
>>
>> On 13.04.2016 14:28, Till Rohrmann wrote:
>> > Hi jstar,
>> >
>> > what's exactly the problem you're observing?
>> >
>> > Cheers,
>> > Till
>> >
>> > On Wed, Apr 13, 2016 at 2:23 PM, star jlong > >
>> > wrote:
>> >
>> >> Hi there,
>> >>
>> >> I'm jstar. I have been playing around with flink. I'm very much
>> interested
>> >> in submitting a topoloy  to flink using its api. As indicated
>> >> on stackoverflow, that is the try that I have given. But I was stuck
>> with
>> >> some exception. Please any help will be welcoming.
>> >>
>> >>
>> >> Thanks.
>> >> jstar
>>
>>
>>
>>
>>
>
>


Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread Till Rohrmann
Yes that is true. I'll commit a hotfix for that.

My suspicion is that we use the wrong class loader in the
FlinkTopology.copyObject method to load the RandomSentenceSpout class. We
can see that once I removed the exception swallowing in the current master.

On Wed, Apr 13, 2016 at 2:40 PM, star jlong 
wrote:

> Hi Schepler,
>
> Thanks for the concerned. Yes I'm actaully having the same issue as
> indicated on that post because I'm the one that posted that issue.
>
> Le Mercredi 13 avril 2016 13h35, Chesnay Schepler 
> a écrit :
>
>
>
> http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api
>
> On 13.04.2016 14:28, Till Rohrmann wrote:
> > Hi jstar,
> >
> > what's exactly the problem you're observing?
> >
> > Cheers,
> > Till
> >
> > On Wed, Apr 13, 2016 at 2:23 PM, star jlong 
> > wrote:
> >
> >> Hi there,
> >>
> >> I'm jstar. I have been playing around with flink. I'm very much
> interested
> >> in submitting a topoloy  to flink using its api. As indicated
> >> on stackoverflow, that is the try that I have given. But I was stuck
> with
> >> some exception. Please any help will be welcoming.
> >>
> >>
> >> Thanks.
> >> jstar
>
>
>
>
>


Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread star jlong
Hi Schepler,

Thanks for the concern. Yes, I'm actually having the same issue as indicated 
in that post, because I'm the one who posted it.

On Wednesday, April 13, 2016 at 1:35 PM, Chesnay Schepler wrote:
 

 
http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api

On 13.04.2016 14:28, Till Rohrmann wrote:
> Hi jstar,
>
> what's exactly the problem you're observing?
>
> Cheers,
> Till
>
> On Wed, Apr 13, 2016 at 2:23 PM, star jlong 
> wrote:
>
>> Hi there,
>>
>> I'm jstar. I have been playing around with flink. I'm very much interested
>> in submitting a topoloy  to flink using its api. As indicated
>> on stackoverflow, that is the try that I have given. But I was stuck with
>> some exception. Please any help will be welcoming.
>>
>>
>> Thanks.
>> jstar



  

Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread Chesnay Schepler

I think the following is the interesting part of the stack-trace:

Caused by: java.lang.RuntimeException: Failed to copy object.
    at org.apache.flink.storm.api.FlinkTopology.copyObject(FlinkTopology.java:145)
    at org.apache.flink.storm.api.FlinkTopology.getPrivateField(FlinkTopology.java:132)
    at org.apache.flink.storm.api.FlinkTopology.<init>(FlinkTopology.java:89)
    at org.apache.flink.storm.api.FlinkTopology.createTopology(FlinkTopology.java:105)
    at stormWorldCount.WordCountTopology.buildTopology(WordCountTopology.java:96)


relevant method:

private <T> T copyObject(T object) {
    try {
        return InstantiationUtil.deserializeObject(
                InstantiationUtil.serializeObject(object),
                getClass().getClassLoader()
        );
    } catch (IOException | ClassNotFoundException e) {
        throw new RuntimeException("Failed to copy object.");
    }
}

sadly another case where we just swallow the exception cause.
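
A one-line change along these lines (a sketch against the method quoted above, not a committed fix) would at least preserve the original cause:

private <T> T copyObject(T object) {
    try {
        return InstantiationUtil.deserializeObject(
                InstantiationUtil.serializeObject(object),
                getClass().getClassLoader()
        );
    } catch (IOException | ClassNotFoundException e) {
        // keep the original exception as the cause instead of swallowing it
        throw new RuntimeException("Failed to copy object.", e);
    }
}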

On 13.04.2016 14:35, Chesnay Schepler wrote:
http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api 



On 13.04.2016 14:28, Till Rohrmann wrote:

Hi jstar,

what's exactly the problem you're observing?

Cheers,
Till

On Wed, Apr 13, 2016 at 2:23 PM, star jlong 
wrote:


Hi there,

I'm jstar. I have been playing around with flink. I'm very much 
interested

in submitting a topoloy  to flink using its api. As indicated
on stackoverflow, that is the try that I have given. But I was stuck 
with

some exception. Please any help will be welcoming.


Thanks.
jstar







Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread Chesnay Schepler

http://stackoverflow.com/questions/36584784/issues-while-submitting-a-topology-to-apache-flink-using-the-flink-api

On 13.04.2016 14:28, Till Rohrmann wrote:

Hi jstar,

what's exactly the problem you're observing?

Cheers,
Till

On Wed, Apr 13, 2016 at 2:23 PM, star jlong 
wrote:


Hi there,

I'm jstar. I have been playing around with flink. I'm very much interested
in submitting a topoloy  to flink using its api. As indicated
on stackoverflow, that is the try that I have given. But I was stuck with
some exception. Please any help will be welcoming.


Thanks.
jstar




[jira] [Created] (FLINK-3749) Improve decimal handling

2016-04-13 Thread Timo Walther (JIRA)
Timo Walther created FLINK-3749:
---

 Summary: Improve decimal handling
 Key: FLINK-3749
 URL: https://issues.apache.org/jira/browse/FLINK-3749
 Project: Flink
  Issue Type: Bug
  Components: Table API
Reporter: Timo Walther
Assignee: Timo Walther


The current decimal handling is too restrictive and does not allow literals 
such as "11.2".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread star jlong
Hi Till,
Thanks for the quick reply. I'm unable to invoke the main method of my topology 
using the instruction
(FlinkTopology) method.invoke(null, new Object[] {});

where method is a variable of type java.lang.reflect.Method.

On Wednesday, April 13, 2016 at 1:28 PM, Till Rohrmann wrote:
 

 Hi jstar,

what's exactly the problem you're observing?

Cheers,
Till

On Wed, Apr 13, 2016 at 2:23 PM, star jlong 
wrote:

> Hi there,
>
> I'm jstar. I have been playing around with flink. I'm very much interested
> in submitting a topoloy  to flink using its api. As indicated
> on stackoverflow, that is the try that I have given. But I was stuck with
> some exception. Please any help will be welcoming.
>
>
> Thanks.
> jstar


  

Re: Issue deploying a topology to flink with a java api

2016-04-13 Thread Till Rohrmann
Hi jstar,

what's exactly the problem you're observing?

Cheers,
Till

On Wed, Apr 13, 2016 at 2:23 PM, star jlong 
wrote:

> Hi there,
>
> I'm jstar. I have been playing around with flink. I'm very much interested
> in submitting a topoloy  to flink using its api. As indicated
> on stackoverflow, that is the try that I have given. But I was stuck with
> some exception. Please any help will be welcoming.
>
>
> Thanks.
> jstar


Issue deploying a topology to flink with a java api

2016-04-13 Thread star jlong
Hi there,

I'm jstar. I have been playing around with Flink. I'm very much interested in 
submitting a topology to Flink using its API. As indicated on Stack Overflow, 
that is what I have tried, but I got stuck with an exception. Any help will be 
welcome.


Thanks.
jstar

[jira] [Created] (FLINK-3748) Add CASE function to Table API

2016-04-13 Thread Timo Walther (JIRA)
Timo Walther created FLINK-3748:
---

 Summary: Add CASE function to Table API
 Key: FLINK-3748
 URL: https://issues.apache.org/jira/browse/FLINK-3748
 Project: Flink
  Issue Type: Sub-task
  Components: Table API
Reporter: Timo Walther
Assignee: Timo Walther


Add CASE/WHEN functionality to the Java/Scala Table API and add support for it 
in the SQL API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Sqoop-like module in Flink

2016-04-13 Thread Flavio Pompermaier
Hi to all,
we've recently migrated our Sqoop [1] import process to a Flink job, using
an improved version of the Flink JDBC InputFormat [2] that is able to
exploit the parallelism of the cluster (the current Flink version
implements NonParallelInput).

We still need to improve the mapping of SQL types to Java ones (in the
addValue method, IMHO), but this could be the basis for a flink-sqoop module
that incrementally covers Sqoop's functionality as requested.
Do you think such a module could be of interest for Flink?

[1] https://sqoop.apache.org/
[2] https://gist.github.com/fpompermaier/bcd704abc93b25b6744ac76ac17ed351

Best,
Flavio
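
To illustrate the range-splitting idea behind such a parallel JDBC source (a minimal
plain-JDBC sketch with a hypothetical table and key column, not the code from the
gist in [2]):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class RangePartitionedJdbcRead {

    // Each parallel subtask reads a disjoint slice of [minId, maxId].
    public static void readSlice(String jdbcUrl, int subtask, int parallelism,
                                 long minId, long maxId) throws Exception {
        long span = (maxId - minId + parallelism) / parallelism; // ceil of range / parallelism
        long from = minId + subtask * span;
        long to = Math.min(maxId, from + span - 1);

        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT id, payload FROM source_table WHERE id BETWEEN ? AND ?")) {
            stmt.setLong(1, from);
            stmt.setLong(2, to);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    // forward rs.getLong("id") and rs.getString("payload") to the collector
                }
            }
        }
    }
}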


Re: Kryo StackOverflowError

2016-04-13 Thread Stephan Ewen
+1 to add this to 1.0.2


On Wed, Apr 13, 2016 at 1:57 AM, Andrew Palumbo  wrote:

>
> Hi,
>
> Great! Do you think that this is something that you'll be enabling in your
> upcoming 1.0.2 release?  We plan on putting out a maintenance Mahout
> Release relatively soon and this would allow us to speed up Matrix
> Multiplication greatly.
>
> Thanks,
>
> Andy
> 
> From: Till Rohrmann 
> Sent: Tuesday, April 12, 2016 11:18 AM
> To: dev@flink.apache.org
> Subject: Re: Kryo StackOverflowError
>
> +1
>
> On Tue, Apr 12, 2016 at 1:13 PM, Robert Metzger 
> wrote:
>
> > Good catch Till!
> >
> > I just checked it with the Mahout source code and the issues is gone with
> > reference tracking enabled.
> >
> > I would just re-enable it again in Flink.
> >
> > On Tue, Apr 12, 2016 at 10:20 AM, Till Rohrmann 
> > wrote:
> >
> > > Hey guys,
> > >
> > > I have a suspicion which could be the culprit: Could change the line
> > > KryoSerializer.java:328 to kryo.setReferences(true) and try if the
> error
> > > still remains? We deactivated the reference tracking and now Kryo
> > shouldn’t
> > > be able to resolve cyclic references properly.
> > >
> > > Cheers,
> > > Till
> > > ​
> > >
> > > On Mon, Apr 11, 2016 at 11:42 PM, Lisonbee, Todd <
> > todd.lison...@intel.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > I also got this error message when I had private inner classes:
> > > >
> > > > public class A {
> > > > private class B {
> > > > }
> > > > }
> > > >
> > > > I was able to fix by making the inner classes public static:
> > > >
> > > > public class A {
> > > > public static class B {
> > > > }
> > > > }
> > > >
> > > > When I was trying to debug it seemed this error message can be caused
> > by
> > > > several different things.
> > > >
> > > > Thanks,
> > > >
> > > > Todd
> > > >
> > > >
> > > > -Original Message-
> > > > From: Hilmi Yildirim [mailto:hilmi.yildi...@dfki.de]
> > > > Sent: Sunday, April 10, 2016 11:36 AM
> > > > To: dev@flink.apache.org
> > > > Subject: Re: Kryo StackOverflowError
> > > >
> > > > Hi,
> > > > I also had this problem and solved it.
> > > >
> > > > In my case I had multiple objects which are created via anonymous
> > > classes.
> > > > When I broadcasted these objects, the serializer tried to serialize
> the
> > > > objects and for that it tried to serialize the anonymous classes.
> This
> > > > caused the problem.
> > > >
> > > > For example,
> > > >
> > > > class A {
> > > >
> > > >   def createObjects(): Array[Object] = {
> > > >     objects
> > > >     for {
> > > >       object = new Class {
> > > >         ...
> > > >       }
> > > >       objects.add(object)
> > > >     }
> > > >     return objects
> > > >   }
> > > > }
> > > >
> > > > It tried to serialize "new Class". For that it tried to serialize the
> > > > method createObjects(). And then it tried to serialize class A. To
> > > > serialize class A it tried to serialize the method createObjects. Or
> > > > something like that, I do not remember the details. This caused the
> > > > recursion.
> > > >
> > > > BR,
> > > > Hilmi
> > > >
> > > > Am 10.04.2016 um 19:18 schrieb Stephan Ewen:
> > > > > Hi!
> > > > >
> > > > > Is it possible that some datatype has a recursive structure
> > > nonetheless?
> > > > > Something like a linked list or so, which would create a large
> object
> > > > graph?
> > > > >
> > > > > There seems to be a large object graph that the Kryo serializer
> > > > traverses,
> > > > > which causes the StackOverflowError.
> > > > >
> > > > > Greetings,
> > > > > Stephan
> > > > >
> > > > >
> > > > > On Sun, Apr 10, 2016 at 6:24 PM, Andrew Palumbo <
> ap@outlook.com>
> > > > wrote:
> > > > >
> > > > >> Hi Stephan,
> > > > >>
> > > > >> thanks for answering.
> > > > >>
> > > > >> This not from a recursive object. (it is used in a recursive
> method
> > in
> > > > the
> > > > >> test that is throwing this error, but the the depth is only 2 and
> > > there
> > > > are
> > > > >> no other Flink DataSet operations before execution is triggered so
> > it
> > > is
> > > > >> trivial.)
> > > > >>
> > > > >> Gere is a Gist of the code, and the full output and stack trace:
> > > > >>
> > > > >>
> > > https://gist.github.com/andrewpalumbo/40c7422a5187a24cd03d7d81feb2a419
> > > > >>
> > > > >> The Error begins at line 178 of the "Output" file.
> > > > >>
> > > > >> Thanks
> > > > >>
> > > > >> 
> > > > >> From: ewenstep...@gmail.com  on behalf of
> > > > Stephan
> > > > >> Ewen 
> > > > >> Sent: Sunday, April 10, 2016 9:39 AM
> > > > >> To: dev@flink.apache.org
> > > > >> Subject: Re: Kryo StackOverflowError
> > > > >>
> > > > >> Hi!
> > > > >>
> > > > >> Sorry, I don't fully understand he diagnosis.
> > > > >> You say that this stack overflow is 

[jira] [Created] (FLINK-3747) Consolidate TimestampAssigner Methods in Kafka Consumer

2016-04-13 Thread Aljoscha Krettek (JIRA)
Aljoscha Krettek created FLINK-3747:
---

 Summary: Consolidate TimestampAssigner Methods in Kafka Consumer
 Key: FLINK-3747
 URL: https://issues.apache.org/jira/browse/FLINK-3747
 Project: Flink
  Issue Type: Improvement
  Components: Streaming
Reporter: Aljoscha Krettek
Priority: Blocker


On {{DataStream}} the methods for setting a TimestampAssigner/WatermarkEmitter 
are called {{assignTimestampsAndWatermarks()}} while on {{FlinkKafkaConsumer*}} 
they are called {{setPunctuatedWatermarkEmitter()}} and 
{{setPeriodicWatermarkEmitter()}}.

I think these names should be aligned. Also, the name {{setWatermarkEmitter}} 
does not hint at the fact that the assigner primarily assigns timestamps.
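
For reference, a sketch of the {{DataStream}} side that the Kafka consumer methods should match ({{MyEvent}}, the schema, and the assigner are hypothetical):

{code}
DataStream<MyEvent> stream = env.addSource(
    new FlinkKafkaConsumer09<>("topic", new MyEventSchema(), kafkaProps));

// one method name covers both periodic and punctuated assigners
stream.assignTimestampsAndWatermarks(new MyPeriodicWatermarkAssigner());
{code}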



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)