How does Flink guarantee message processing, given that it has no Storm-style ack mechanism?

2020-08-04 Thread Bruce
1. Flink consumes messages from RabbitMQ.

2. The RabbitMQ consumer QoS (prefetch count) is set to 1 for the Flink source.

3. Checkpointing is enabled; messages are acknowledged when a checkpoint completes.

4. How does Flink handle a failed task?
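Flink's RabbitMQ source acknowledges messages on checkpoint completion rather than with per-tuple Storm-style acks: delivery tags are buffered per checkpoint and acked only once that checkpoint is durable, so unacked messages are redelivered after a failure. A minimal, self-contained sketch of that mechanism (this is not Flink's actual code; the class and method names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch of checkpoint-based acknowledgment as used by a
 * checkpointed source: deliveries are buffered per checkpoint and
 * acknowledged to the broker only after the checkpoint completes.
 */
class CheckpointedAcker {
    // delivery tags received since the last checkpoint barrier
    private final List<Long> pendingTags = new ArrayList<>();
    // one batch of tags per checkpoint that is snapshotted but not yet complete
    private final ArrayDeque<List<Long>> inFlight = new ArrayDeque<>();
    // stands in for channel.basicAck(...) calls to RabbitMQ
    final List<Long> acked = new ArrayList<>();

    void onMessage(long deliveryTag) {
        pendingTags.add(deliveryTag);
    }

    // called when a checkpoint barrier arrives at the source
    void snapshotState() {
        inFlight.addLast(new ArrayList<>(pendingTags));
        pendingTags.clear();
    }

    // called once the checkpoint is durable; only now is it safe to ack
    void notifyCheckpointComplete() {
        List<Long> tags = inFlight.pollFirst();
        if (tags != null) {
            acked.addAll(tags); // real code would call channel.basicAck here
        }
    }
}
```

If the job fails before `notifyCheckpointComplete`, the broker never saw an ack for those deliveries and redelivers them after restart, which is what makes the source at-least-once (exactly-once with correlation-id deduplication).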






best wishes
-





Re: [DISCUSS] Dropping flink-storm?

2019-01-10 Thread Fabian Hueske
+1 from my side as well.

I would assume that most Bolts that are supported by our current wrappers
can be easily converted into respective Flink functions.

Fabian
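Converting a Bolt into a Flink function is usually mechanical: the body of the bolt's `execute(Tuple)` becomes the body of a `FlatMapFunction.flatMap(value, out)`. A self-contained sketch of a word-splitting bolt rewritten this way (the Flink `Collector` is mimicked by `java.util.function.Consumer` so the example compiles without Flink on the classpath; in a real job the logic would implement `org.apache.flink.api.common.functions.FlatMapFunction<String, String>`):

```java
import java.util.function.Consumer;

/**
 * Illustrative conversion of a Storm word-splitter bolt into the
 * equivalent Flink flatMap logic.
 */
class SplitterBoltAsFlatMap {

    // Storm version of the same logic, for comparison:
    //   public void execute(Tuple input) {
    //       for (String word : input.getString(0).toLowerCase().split("\\W+")) {
    //           collector.emit(new Values(word));
    //       }
    //   }

    // Flink-style version: one input value in, zero or more values out.
    static void flatMap(String value, Consumer<String> out) {
        for (String word : value.toLowerCase().split("\\W+")) {
            if (!word.isEmpty()) {
                out.accept(word); // corresponds to collector.emit(new Values(word))
            }
        }
    }
}
```

The Storm-specific pieces that do not carry over (declareOutputFields, the OutputCollector, topology wiring) are handled by Flink's type system and DataStream API instead.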



On Thu, Jan 10, 2019 at 10:35 AM Kostas Kloudas <
k.klou...@da-platform.com> wrote:

> +1 to drop as well.
>
> On Thu, Jan 10, 2019 at 10:15 AM Ufuk Celebi  wrote:
>
>> +1 to drop.
>>
>> I totally agree with your reasoning. I like that we tried to keep it,
>> but I don't think the maintenance overhead would be justified.
>>
>> – Ufuk
>>
>> On Wed, Jan 9, 2019 at 4:09 PM Till Rohrmann 
>> wrote:
>> >
>> > With https://issues.apache.org/jira/browse/FLINK-10571, we will remove
>> the
>> > Storm topologies from Flink and keep the wrappers for the moment.
>> >
>> > However, looking at the FlinkTopologyContext [1], it becomes quite
>> obvious
>> > that Flink's compatibility with Storm is really limited. Almost all of
>> the
>> > context methods are not supported which makes me wonder how useful these
>> > wrappers really are. Given the additional maintenance overhead of having
>> > them in the code base and no indication that someone is actively using
>> > them, I would still be in favour of removing them. This will reduce our
>> > maintenance burden in the future. What do you think?
>> >
>> > [1]
>> >
>> https://github.com/apache/flink/blob/master/flink-contrib/flink-storm/src/main/java/org/apache/flink/storm/wrappers/FlinkTopologyContext.java
>> >
>> > Cheers,
>> > Till
>> >
>> > On Tue, Oct 9, 2018 at 10:08 AM Fabian Hueske 
>> wrote:
>> >
>> > > Yes, let's do it this way.
>> > > The wrapper classes are probably not too complex and can be easily
>> tested.
>> > > We have the same for the Hadoop interfaces, although I think only the
>> > > Input- and OutputFormatWrappers are actually used.
>> > >
>> > >
>> > > On Tue, Oct 9, 2018 at 9:46 AM Chesnay Schepler <
>> > > ches...@apache.org> wrote:
>> > >
>> > >> That sounds very good to me.
>> > >>
>> > >> On 08.10.2018 11:36, Till Rohrmann wrote:
>> > >> > Good point. The initial idea of this thread was to remove the storm
>> > >> > compatibility layer completely.
>> > >> >
>> > >> > During the discussion I realized that it might be useful for our
>> users
>> > >> > to not completely remove it in one go. Instead for those who still
>> > >> > want to use some Bolt and Spout code in Flink, it could be nice to
>> > >> > keep the wrappers. At least, we could remove flink-storm in a more
>> > >> > graceful way by first removing the Topology and client parts and
>> then
>> > >> > the wrappers. What do you think?
>> > >> >
>> > >> > Cheers,
>> > >> > Till
>> > >> >
>> > >> > On Mon, Oct 8, 2018 at 11:13 AM Chesnay Schepler <
>> ches...@apache.org
>> > >> > <mailto:ches...@apache.org>> wrote:
>> > >> >
>> > >> > I don't believe that to be the consensus. For starters it is
>> > >> > contradictory; we can't /drop /flink-storm yet still /keep
>> //some
>> > >> > parts/.
>> > >> >
>> > >> > From my understanding we drop flink-storm completely, and put a
>> > >> > note in the docs that the bolt/spout wrappers of previous
>> versions
>> > >> > will continue to work.
>> > >> >
>> > >> > On 08.10.2018 11:04, Till Rohrmann wrote:
>> > >> >> Thanks for opening the issue Chesnay. I think the overall
>> > >> >> consensus is to drop flink-storm and only keep the Bolt and
>> Spout
>> > >> >> wrappers. Thanks for your feedback!
>> > >> >>
>> > >> >> Cheers,
>> > >> >> Till
>> > >> >>
>> > >> >> On Mon, Oct 8, 2018 at 9:37 AM Chesnay Schepler
>> > >> >> mailto:ches...@apache.org>> wrote:
>> > >> >>
>> > >> >> I've created
>> > >> >> https://issues.apache.org/jira/browse/FLINK-10509 for
>> > >> >> removing flink

Re: [DISCUSS] Dropping flink-storm?

2019-01-10 Thread Kostas Kloudas
+1 to drop as well.

On Thu, Jan 10, 2019 at 10:15 AM Ufuk Celebi  wrote:

> +1 to drop.
>
> I totally agree with your reasoning. I like that we tried to keep it,
> but I don't think the maintenance overhead would be justified.
>
> – Ufuk
>
> On Wed, Jan 9, 2019 at 4:09 PM Till Rohrmann  wrote:
> >
> > With https://issues.apache.org/jira/browse/FLINK-10571, we will remove
> the
> > Storm topologies from Flink and keep the wrappers for the moment.
> >
> > However, looking at the FlinkTopologyContext [1], it becomes quite
> obvious
> > that Flink's compatibility with Storm is really limited. Almost all of
> the
> > context methods are not supported which makes me wonder how useful these
> > wrappers really are. Given the additional maintenance overhead of having
> > them in the code base and no indication that someone is actively using
> > them, I would still be in favour of removing them. This will reduce our
> > maintenance burden in the future. What do you think?
> >
> > [1]
> >
> https://github.com/apache/flink/blob/master/flink-contrib/flink-storm/src/main/java/org/apache/flink/storm/wrappers/FlinkTopologyContext.java
> >
> > Cheers,
> > Till
> >
> > On Tue, Oct 9, 2018 at 10:08 AM Fabian Hueske  wrote:
> >
> > > Yes, let's do it this way.
> > > The wrapper classes are probably not too complex and can be easily
> tested.
> > > We have the same for the Hadoop interfaces, although I think only the
> > > Input- and OutputFormatWrappers are actually used.
> > >
> > >
> > > On Tue, Oct 9, 2018 at 9:46 AM Chesnay Schepler <
> > > ches...@apache.org> wrote:
> > >
> > >> That sounds very good to me.
> > >>
> > >> On 08.10.2018 11:36, Till Rohrmann wrote:
> > >> > Good point. The initial idea of this thread was to remove the storm
> > >> > compatibility layer completely.
> > >> >
> > >> > During the discussion I realized that it might be useful for our
> users
> > >> > to not completely remove it in one go. Instead for those who still
> > >> > want to use some Bolt and Spout code in Flink, it could be nice to
> > >> > keep the wrappers. At least, we could remove flink-storm in a more
> > >> > graceful way by first removing the Topology and client parts and
> then
> > >> > the wrappers. What do you think?
> > >> >
> > >> > Cheers,
> > >> > Till
> > >> >
> > >> > On Mon, Oct 8, 2018 at 11:13 AM Chesnay Schepler <
> ches...@apache.org
> > >> > <mailto:ches...@apache.org>> wrote:
> > >> >
> > >> > I don't believe that to be the consensus. For starters it is
> > >> > contradictory; we can't /drop /flink-storm yet still /keep
> //some
> > >> > parts/.
> > >> >
> > >> > From my understanding we drop flink-storm completely, and put a
> > >> > note in the docs that the bolt/spout wrappers of previous
> versions
> > >> > will continue to work.
> > >> >
> > >> > On 08.10.2018 11:04, Till Rohrmann wrote:
> > >> >> Thanks for opening the issue Chesnay. I think the overall
> > >> >> consensus is to drop flink-storm and only keep the Bolt and
> Spout
> > >> >> wrappers. Thanks for your feedback!
> > >> >>
> > >> >> Cheers,
> > >> >> Till
> > >> >>
> > >> >> On Mon, Oct 8, 2018 at 9:37 AM Chesnay Schepler
> > >> >> mailto:ches...@apache.org>> wrote:
> > >> >>
> > >> >> I've created
> > >> >> https://issues.apache.org/jira/browse/FLINK-10509 for
> > >> >> removing flink-storm.
> > >> >>
> > >> >>     On 28.09.2018 15:22, Till Rohrmann wrote:
> > >> >> > Hi everyone,
> > >> >> >
> > >> >> > I would like to discuss how to proceed with Flink's storm
> > >> >> compatibility
> > >> >> > layer flink-storm.
> > >> >> >
> > >> >> > While working on removing Flink's legacy mode, I noticed
> > >> >> that some parts of
> > >> >>     > flink-sto

Re: [DISCUSS] Dropping flink-storm?

2019-01-10 Thread Ufuk Celebi
+1 to drop.

I totally agree with your reasoning. I like that we tried to keep it,
but I don't think the maintenance overhead would be justified.

– Ufuk

On Wed, Jan 9, 2019 at 4:09 PM Till Rohrmann  wrote:
>
> With https://issues.apache.org/jira/browse/FLINK-10571, we will remove the
> Storm topologies from Flink and keep the wrappers for the moment.
>
> However, looking at the FlinkTopologyContext [1], it becomes quite obvious
> that Flink's compatibility with Storm is really limited. Almost all of the
> context methods are not supported which makes me wonder how useful these
> wrappers really are. Given the additional maintenance overhead of having
> them in the code base and no indication that someone is actively using
> them, I would still be in favour of removing them. This will reduce our
> maintenance burden in the future. What do you think?
>
> [1]
> https://github.com/apache/flink/blob/master/flink-contrib/flink-storm/src/main/java/org/apache/flink/storm/wrappers/FlinkTopologyContext.java
>
> Cheers,
> Till
>
> On Tue, Oct 9, 2018 at 10:08 AM Fabian Hueske  wrote:
>
> > Yes, let's do it this way.
> > The wrapper classes are probably not too complex and can be easily tested.
> > We have the same for the Hadoop interfaces, although I think only the
> > Input- and OutputFormatWrappers are actually used.
> >
> >
> > On Tue, Oct 9, 2018 at 9:46 AM Chesnay Schepler <
> > ches...@apache.org> wrote:
> >
> >> That sounds very good to me.
> >>
> >> On 08.10.2018 11:36, Till Rohrmann wrote:
> >> > Good point. The initial idea of this thread was to remove the storm
> >> > compatibility layer completely.
> >> >
> >> > During the discussion I realized that it might be useful for our users
> >> > to not completely remove it in one go. Instead for those who still
> >> > want to use some Bolt and Spout code in Flink, it could be nice to
> >> > keep the wrappers. At least, we could remove flink-storm in a more
> >> > graceful way by first removing the Topology and client parts and then
> >> > the wrappers. What do you think?
> >> >
> >> > Cheers,
> >> > Till
> >> >
> >> > On Mon, Oct 8, 2018 at 11:13 AM Chesnay Schepler <ches...@apache.org> wrote:
> >> >
> >> > I don't believe that to be the consensus. For starters it is
> >> > contradictory; we can't /drop /flink-storm yet still /keep //some
> >> > parts/.
> >> >
> >> > From my understanding we drop flink-storm completely, and put a
> >> > note in the docs that the bolt/spout wrappers of previous versions
> >> > will continue to work.
> >> >
> >> > On 08.10.2018 11:04, Till Rohrmann wrote:
> >> >> Thanks for opening the issue Chesnay. I think the overall
> >> >> consensus is to drop flink-storm and only keep the Bolt and Spout
> >> >> wrappers. Thanks for your feedback!
> >> >>
> >> >> Cheers,
> >> >> Till
> >> >>
> >> >> On Mon, Oct 8, 2018 at 9:37 AM Chesnay Schepler
> >> >> mailto:ches...@apache.org>> wrote:
> >> >>
> >> >> I've created
> >> >> https://issues.apache.org/jira/browse/FLINK-10509 for
> >> >> removing flink-storm.
> >> >>
> >> >> On 28.09.2018 15:22, Till Rohrmann wrote:
> >> >> > Hi everyone,
> >> >> >
> >> >> > I would like to discuss how to proceed with Flink's storm
> >> >> compatibility
> >> >> > layer flink-storm.
> >> >> >
> >> >> > While working on removing Flink's legacy mode, I noticed
> >> >> that some parts of
> >> >> > flink-storm rely on the legacy Flink client. In fact, at
> >> >> the moment
> >> >> > flink-storm does not work together with Flink's new
> >> distributed
> >> >>     > architecture.
> >> >> >
> >> >> > I'm also wondering how many people are actually using
> >> >> Flink's Storm
> >> >> > compatibility layer and whether it would be worth porting it.
> >> >> >
> >> >> > I see two options how to proceed:
> >> >> >
> >> >> > 1) Commit to maintain flink-storm and port it to Flink's
> >> >> new architecture
> >> >> > 2) Drop flink-storm
> >> >> >
> >> >> > I doubt that we can contribute it to Apache Bahir [1],
> >> >> because once we
> >> >> > remove the legacy mode, this module will no longer work
> >> >> with all newer
> >> >> > Flink versions.
> >> >> >
> >> >> > Therefore, I would like to hear your opinion on this and in
> >> >> particular if
> >> >> > you are using or planning to use flink-storm in the future.
> >> >> >
> >> >> > [1] https://github.com/apache/bahir-flink
> >> >> >
> >> >> > Cheers,
> >> >> > Till
> >> >> >
> >> >>
> >> >
> >>
> >>


Re: [DISCUSS] Dropping flink-storm?

2019-01-09 Thread Till Rohrmann
With https://issues.apache.org/jira/browse/FLINK-10571, we will remove the
Storm topologies from Flink and keep the wrappers for the moment.

However, looking at the FlinkTopologyContext [1], it becomes quite obvious
that Flink's compatibility with Storm is really limited. Almost all of the
context methods are not supported which makes me wonder how useful these
wrappers really are. Given the additional maintenance overhead of having
them in the code base and no indication that someone is actively using
them, I would still be in favour of removing them. This will reduce our
maintenance burden in the future. What do you think?

[1]
https://github.com/apache/flink/blob/master/flink-contrib/flink-storm/src/main/java/org/apache/flink/storm/wrappers/FlinkTopologyContext.java

Cheers,
Till

On Tue, Oct 9, 2018 at 10:08 AM Fabian Hueske  wrote:

> Yes, let's do it this way.
> The wrapper classes are probably not too complex and can be easily tested.
> We have the same for the Hadoop interfaces, although I think only the
> Input- and OutputFormatWrappers are actually used.
>
>
> On Tue, Oct 9, 2018 at 9:46 AM Chesnay Schepler <
> ches...@apache.org> wrote:
>
>> That sounds very good to me.
>>
>> On 08.10.2018 11:36, Till Rohrmann wrote:
>> > Good point. The initial idea of this thread was to remove the storm
>> > compatibility layer completely.
>> >
>> > During the discussion I realized that it might be useful for our users
>> > to not completely remove it in one go. Instead for those who still
>> > want to use some Bolt and Spout code in Flink, it could be nice to
>> > keep the wrappers. At least, we could remove flink-storm in a more
>> > graceful way by first removing the Topology and client parts and then
>> > the wrappers. What do you think?
>> >
>> > Cheers,
>> > Till
>> >
>> > On Mon, Oct 8, 2018 at 11:13 AM Chesnay Schepler <ches...@apache.org> wrote:
>> >
>> >     I don't believe that to be the consensus. For starters it is
>> > contradictory; we can't /drop /flink-storm yet still /keep //some
>> > parts/.
>> >
>> > From my understanding we drop flink-storm completely, and put a
>> > note in the docs that the bolt/spout wrappers of previous versions
>> > will continue to work.
>> >
>> > On 08.10.2018 11:04, Till Rohrmann wrote:
>> >> Thanks for opening the issue Chesnay. I think the overall
>> >> consensus is to drop flink-storm and only keep the Bolt and Spout
>> >> wrappers. Thanks for your feedback!
>> >>
>> >> Cheers,
>> >> Till
>> >>
>> >> On Mon, Oct 8, 2018 at 9:37 AM Chesnay Schepler
>> >> mailto:ches...@apache.org>> wrote:
>> >>
>> >> I've created
>> >> https://issues.apache.org/jira/browse/FLINK-10509 for
>> >> removing flink-storm.
>> >>
>> >>     On 28.09.2018 15:22, Till Rohrmann wrote:
>> >> > Hi everyone,
>> >> >
>> >> > I would like to discuss how to proceed with Flink's storm
>> >> compatibility
>> >> > layer flink-storm.
>> >> >
>> >> > While working on removing Flink's legacy mode, I noticed
>> >> that some parts of
>> >> > flink-storm rely on the legacy Flink client. In fact, at
>> >> the moment
>> >> > flink-storm does not work together with Flink's new
>> distributed
>> >> > architecture.
>> >> >
>> >> > I'm also wondering how many people are actually using
>> >> Flink's Storm
>> >> > compatibility layer and whether it would be worth porting it.
>> >> >
>> >> > I see two options how to proceed:
>> >> >
>> >> > 1) Commit to maintain flink-storm and port it to Flink's
>> >> new architecture
>> >> > 2) Drop flink-storm
>> >> >
>> >> > I doubt that we can contribute it to Apache Bahir [1],
>> >> because once we
>> >> > remove the legacy mode, this module will no longer work
>> >> with all newer
>> >> > Flink versions.
>> >> >
>> >> > Therefore, I would like to hear your opinion on this and in
>> >> particular if
>> >> > you are using or planning to use flink-storm in the future.
>> >> >
>> >> > [1] https://github.com/apache/bahir-flink
>> >> >
>> >> > Cheers,
>> >> > Till
>> >> >
>> >>
>> >
>>
>>


Re: [DISCUSS] Dropping flink-storm?

2018-10-09 Thread Fabian Hueske
Yes, let's do it this way.
The wrapper classes are probably not too complex and can be easily tested.
We have the same for the Hadoop interfaces, although I think only the
Input- and OutputFormatWrappers are actually used.


On Tue, Oct 9, 2018 at 9:46 AM Chesnay Schepler <
ches...@apache.org> wrote:

> That sounds very good to me.
>
> On 08.10.2018 11:36, Till Rohrmann wrote:
> > Good point. The initial idea of this thread was to remove the storm
> > compatibility layer completely.
> >
> > During the discussion I realized that it might be useful for our users
> > to not completely remove it in one go. Instead for those who still
> > want to use some Bolt and Spout code in Flink, it could be nice to
> > keep the wrappers. At least, we could remove flink-storm in a more
> > graceful way by first removing the Topology and client parts and then
> > the wrappers. What do you think?
> >
> > Cheers,
> > Till
> >
> > On Mon, Oct 8, 2018 at 11:13 AM Chesnay Schepler <ches...@apache.org> wrote:
> >
> > I don't believe that to be the consensus. For starters it is
> > contradictory; we can't /drop /flink-storm yet still /keep //some
> > parts/.
> >
> > From my understanding we drop flink-storm completely, and put a
> > note in the docs that the bolt/spout wrappers of previous versions
> > will continue to work.
> >
> > On 08.10.2018 11:04, Till Rohrmann wrote:
> >> Thanks for opening the issue Chesnay. I think the overall
> >> consensus is to drop flink-storm and only keep the Bolt and Spout
> >> wrappers. Thanks for your feedback!
> >>
> >> Cheers,
> >> Till
> >>
> >> On Mon, Oct 8, 2018 at 9:37 AM Chesnay Schepler
> >> mailto:ches...@apache.org>> wrote:
> >>
> >> I've created
> >> https://issues.apache.org/jira/browse/FLINK-10509 for
> >> removing flink-storm.
> >>
> >> On 28.09.2018 15:22, Till Rohrmann wrote:
> >> > Hi everyone,
> >> >
> >> > I would like to discuss how to proceed with Flink's storm
> >> compatibility
> >> > layer flink-storm.
> >> >
> >> > While working on removing Flink's legacy mode, I noticed
> >> that some parts of
> >> > flink-storm rely on the legacy Flink client. In fact, at
> >> the moment
> >> > flink-storm does not work together with Flink's new
> distributed
> >> > architecture.
> >> >
> >> > I'm also wondering how many people are actually using
> >> Flink's Storm
> >> > compatibility layer and whether it would be worth porting it.
> >> >
> >> > I see two options how to proceed:
> >> >
> >>     > 1) Commit to maintain flink-storm and port it to Flink's
> >> new architecture
> >> > 2) Drop flink-storm
> >> >
> >> > I doubt that we can contribute it to Apache Bahir [1],
> >> because once we
> >> > remove the legacy mode, this module will no longer work
> >> with all newer
> >> > Flink versions.
> >> >
> >> > Therefore, I would like to hear your opinion on this and in
> >> particular if
> >> > you are using or planning to use flink-storm in the future.
> >> >
> >> > [1] https://github.com/apache/bahir-flink
> >> >
> >> > Cheers,
> >> > Till
> >> >
> >>
> >
>
>


Re: [DISCUSS] Dropping flink-storm?

2018-10-09 Thread Chesnay Schepler

That sounds very good to me.

On 08.10.2018 11:36, Till Rohrmann wrote:
Good point. The initial idea of this thread was to remove the storm 
compatibility layer completely.


During the discussion I realized that it might be useful for our users 
to not completely remove it in one go. Instead for those who still 
want to use some Bolt and Spout code in Flink, it could be nice to 
keep the wrappers. At least, we could remove flink-storm in a more 
graceful way by first removing the Topology and client parts and then 
the wrappers. What do you think?


Cheers,
Till

On Mon, Oct 8, 2018 at 11:13 AM Chesnay Schepler <ches...@apache.org> wrote:


I don't believe that to be the consensus. For starters it is
contradictory; we can't /drop /flink-storm yet still /keep //some
parts/.

From my understanding we drop flink-storm completely, and put a
note in the docs that the bolt/spout wrappers of previous versions
will continue to work.

On 08.10.2018 11:04, Till Rohrmann wrote:

Thanks for opening the issue Chesnay. I think the overall
consensus is to drop flink-storm and only keep the Bolt and Spout
wrappers. Thanks for your feedback!

Cheers,
Till

On Mon, Oct 8, 2018 at 9:37 AM Chesnay Schepler
mailto:ches...@apache.org>> wrote:

I've created
https://issues.apache.org/jira/browse/FLINK-10509 for
    removing flink-storm.

On 28.09.2018 15:22, Till Rohrmann wrote:
> Hi everyone,
>
> I would like to discuss how to proceed with Flink's storm
compatibility
> layer flink-storm.
>
> While working on removing Flink's legacy mode, I noticed
that some parts of
> flink-storm rely on the legacy Flink client. In fact, at
the moment
> flink-storm does not work together with Flink's new distributed
> architecture.
>
> I'm also wondering how many people are actually using
Flink's Storm
> compatibility layer and whether it would be worth porting it.
>
> I see two options how to proceed:
>
> 1) Commit to maintain flink-storm and port it to Flink's
new architecture
> 2) Drop flink-storm
>
> I doubt that we can contribute it to Apache Bahir [1],
because once we
> remove the legacy mode, this module will no longer work
with all newer
> Flink versions.
>
> Therefore, I would like to hear your opinion on this and in
particular if
> you are using or planning to use flink-storm in the future.
>
> [1] https://github.com/apache/bahir-flink
>
> Cheers,
> Till
>







Re: [DISCUSS] Dropping flink-storm?

2018-10-08 Thread Till Rohrmann
Good point. The initial idea of this thread was to remove the storm
compatibility layer completely.

During the discussion I realized that it might be useful for our users to
not completely remove it in one go. Instead for those who still want to use
some Bolt and Spout code in Flink, it could be nice to keep the wrappers.
At least, we could remove flink-storm in a more graceful way by first
removing the Topology and client parts and then the wrappers. What do you
think?

Cheers,
Till

On Mon, Oct 8, 2018 at 11:13 AM Chesnay Schepler  wrote:

> I don't believe that to be the consensus. For starters it is
> contradictory; we can't *drop *flink-storm yet still *keep **some parts*.
>
> From my understanding we drop flink-storm completely, and put a note in
> the docs that the bolt/spout wrappers of previous versions will continue to
> work.
>
> On 08.10.2018 11:04, Till Rohrmann wrote:
>
> Thanks for opening the issue Chesnay. I think the overall consensus is to
> drop flink-storm and only keep the Bolt and Spout wrappers. Thanks for your
> feedback!
>
> Cheers,
> Till
>
> On Mon, Oct 8, 2018 at 9:37 AM Chesnay Schepler 
> wrote:
>
>> I've created https://issues.apache.org/jira/browse/FLINK-10509 for
>> removing flink-storm.
>>
>> On 28.09.2018 15:22, Till Rohrmann wrote:
>> > Hi everyone,
>> >
>> > I would like to discuss how to proceed with Flink's storm compatibility
>> > layer flink-storm.
>> >
>> > While working on removing Flink's legacy mode, I noticed that some
>> parts of
>> > flink-storm rely on the legacy Flink client. In fact, at the moment
>> > flink-storm does not work together with Flink's new distributed
>> > architecture.
>> >
>> > I'm also wondering how many people are actually using Flink's Storm
>> > compatibility layer and whether it would be worth porting it.
>> >
>> > I see two options how to proceed:
>> >
>> > 1) Commit to maintain flink-storm and port it to Flink's new
>> architecture
>> > 2) Drop flink-storm
>> >
>> > I doubt that we can contribute it to Apache Bahir [1], because once we
>> > remove the legacy mode, this module will no longer work with all newer
>> > Flink versions.
>> >
>> > Therefore, I would like to hear your opinion on this and in particular
>> if
>> > you are using or planning to use flink-storm in the future.
>> >
>> > [1] https://github.com/apache/bahir-flink
>> >
>> > Cheers,
>> > Till
>> >
>>
>>
>


Re: [DISCUSS] Dropping flink-storm?

2018-10-08 Thread Chesnay Schepler
I don't believe that to be the consensus. For starters it is 
contradictory; we can't /drop /flink-storm yet still /keep //some parts/.


From my understanding we drop flink-storm completely, and put a note in 
the docs that the bolt/spout wrappers of previous versions will continue 
to work.


On 08.10.2018 11:04, Till Rohrmann wrote:
Thanks for opening the issue Chesnay. I think the overall consensus is 
to drop flink-storm and only keep the Bolt and Spout wrappers. Thanks 
for your feedback!


Cheers,
Till

On Mon, Oct 8, 2018 at 9:37 AM Chesnay Schepler <ches...@apache.org> wrote:


I've created https://issues.apache.org/jira/browse/FLINK-10509 for
removing flink-storm.

On 28.09.2018 15:22, Till Rohrmann wrote:
> Hi everyone,
>
> I would like to discuss how to proceed with Flink's storm
compatibility
> layer flink-storm.
>
> While working on removing Flink's legacy mode, I noticed that
    some parts of
> flink-storm rely on the legacy Flink client. In fact, at the moment
> flink-storm does not work together with Flink's new distributed
> architecture.
>
> I'm also wondering how many people are actually using Flink's Storm
> compatibility layer and whether it would be worth porting it.
>
> I see two options how to proceed:
>
> 1) Commit to maintain flink-storm and port it to Flink's new
architecture
> 2) Drop flink-storm
>
> I doubt that we can contribute it to Apache Bahir [1], because
once we
> remove the legacy mode, this module will no longer work with all
newer
> Flink versions.
>
> Therefore, I would like to hear your opinion on this and in
particular if
> you are using or planning to use flink-storm in the future.
>
> [1] https://github.com/apache/bahir-flink
>
> Cheers,
> Till
>





Re: [DISCUSS] Dropping flink-storm?

2018-10-08 Thread Till Rohrmann
Thanks for opening the issue Chesnay. I think the overall consensus is to
drop flink-storm and only keep the Bolt and Spout wrappers. Thanks for your
feedback!

Cheers,
Till

On Mon, Oct 8, 2018 at 9:37 AM Chesnay Schepler  wrote:

> I've created https://issues.apache.org/jira/browse/FLINK-10509 for
> removing flink-storm.
>
> On 28.09.2018 15:22, Till Rohrmann wrote:
> > Hi everyone,
> >
> > I would like to discuss how to proceed with Flink's storm compatibility
> > layer flink-storm.
> >
> > While working on removing Flink's legacy mode, I noticed that some parts
> of
> > flink-storm rely on the legacy Flink client. In fact, at the moment
> > flink-storm does not work together with Flink's new distributed
> > architecture.
> >
> > I'm also wondering how many people are actually using Flink's Storm
> > compatibility layer and whether it would be worth porting it.
> >
> > I see two options how to proceed:
> >
> > 1) Commit to maintain flink-storm and port it to Flink's new architecture
> > 2) Drop flink-storm
> >
> > I doubt that we can contribute it to Apache Bahir [1], because once we
> > remove the legacy mode, this module will no longer work with all newer
> > Flink versions.
> >
> > Therefore, I would like to hear your opinion on this and in particular if
> > you are using or planning to use flink-storm in the future.
> >
> > [1] https://github.com/apache/bahir-flink
> >
> > Cheers,
> > Till
> >
>
>


Re: [DISCUSS] Dropping flink-storm?

2018-10-08 Thread Chesnay Schepler
I've created https://issues.apache.org/jira/browse/FLINK-10509 for 
removing flink-storm.


On 28.09.2018 15:22, Till Rohrmann wrote:

Hi everyone,

I would like to discuss how to proceed with Flink's storm compatibility
layer flink-storm.

While working on removing Flink's legacy mode, I noticed that some parts of
flink-storm rely on the legacy Flink client. In fact, at the moment
flink-storm does not work together with Flink's new distributed
architecture.

I'm also wondering how many people are actually using Flink's Storm
compatibility layer and whether it would be worth porting it.

I see two options how to proceed:

1) Commit to maintain flink-storm and port it to Flink's new architecture
2) Drop flink-storm

I doubt that we can contribute it to Apache Bahir [1], because once we
remove the legacy mode, this module will no longer work with all newer
Flink versions.

Therefore, I would like to hear your opinion on this and in particular if
you are using or planning to use flink-storm in the future.

[1] https://github.com/apache/bahir-flink

Cheers,
Till





Re: [DISCUSS] Dropping flink-storm?

2018-10-02 Thread Aljoscha Krettek
+1 for dropping it

> On 1. Oct 2018, at 10:55, Fabian Hueske  wrote:
> 
> +1 to drop it.
> 
> Thanks, Fabian
> 
> On Sat, Sep 29, 2018 at 12:05 PM Niels Basjes wrote:
> 
>> I would drop it.
>> 
>> Niels Basjes
>> 
>> On Sat, 29 Sep 2018, 10:38 Kostas Kloudas, 
>> wrote:
>> 
>>> +1 to drop it as nobody seems to be willing to maintain it and it also
>>> stands in the way for future developments in Flink.
>>> 
>>> Cheers,
>>> Kostas
>>> 
>>>> On Sep 29, 2018, at 8:19 AM, Tzu-Li Chen  wrote:
>>>> 
>>>> +1 to drop it.
>>>> 
>>>> It seems few people use it. A sparse commit history for an
>>>> experimental module often means that there is low interest.
>>>> 
>>>> Best,
>>>> tison.
>>>> 
>>>> 
远远  wrote on Sat, Sep 29, 2018 at 2:16 PM:
>>>> 
>>>>> +1, it's time to drop it
>>>>> 
>>>>>> Zhijiang(wangzhijiang999)  wrote on Sat, Sep 29, 2018 at 1:53 PM:
>>>>> 
>>>>>> Very much agree with dropping it. +1
>>>>>> 
>>>>>> --
>>>>>> From: Jeff Carter 
>>>>>> Sent: Saturday, Sep 29, 2018, 10:18
>>>>>> To: dev 
>>>>>> Cc: chesnay ; Till Rohrmann <
>> trohrm...@apache.org
>>>> ;
>>>>>> user 
>>>>>> Subject: Re: [DISCUSS] Dropping flink-storm?
>>>>>> 
>>>>>> +1 to drop it.
>>>>>> 
>>>>>> On Fri, Sep 28, 2018, 7:25 PM Hequn Cheng 
>>> wrote:
>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>> +1 to drop it. It seems that few people use it.
>>>>>>> 
>>>>>>> Best, Hequn
>>>>>>> 
>>>>>>> On Fri, Sep 28, 2018 at 10:22 PM Chesnay Schepler <
>> ches...@apache.org
>>>> 
>>>>>>> wrote:
>>>>>>> 
>>>>>>>> I'm very much in favor of dropping it.
>>>>>>>> 
>>>>>>>> Flink has been continually growing in terms of features, and IMO
>>> we've
>>>>>>>> reached the point where we should cull some of the more obscure
>> ones.
>>>>>> 
>>>>>>>> flink-storm, while interesting from a theoretical standpoint,
>> offers
>>> too
>>>>>>>> little value.
>>>>>>>> 
>>>>>> 
>>>>>>>> Note that the bolt/spout wrapper parts of the part are still
>>> compatible,
>>>>>>>> it's only topologies that aren't working.
>>>>>>>> 
>>>>>>>> IMO compatibility layers only add value if they ease the migration
>> to
>>>>>>>> Flink APIs.
>>>>>> 
>>>>>>>> * bolt/spout wrappers do this, but they will continue to work even
>>> if we
>>>>>>>> drop it
>>>>>>>> * topologies don't do this, so I'm not interested in them.
>>>>>>>> 
>>>>>>>> On 28.09.2018 15:22, Till Rohrmann wrote:
>>>>>>>>> Hi everyone,
>>>>>>>>> 
>>>>>>>>> I would like to discuss how to proceed with Flink's storm
>>>>>>>>> compatibility layer flink-storm.
>>>>>>>>> 
>>>>>>>>> While working on removing Flink's legacy mode, I noticed that some
>>>>>> 
>>>>>>>>> parts of flink-storm rely on the legacy Flink client. In fact, at
>>> the
>>>>>> 
>>>>>>>>> moment flink-storm does not work together with Flink's new
>>> distributed
>>>>>>>>> architecture.
>>>>>>>>> 
>>>>>>>>> I'm also wondering how many people are actually using Flink's
>> Storm
>>>>>>>>> compatibility layer and whether it would be worth porting it.
>>>>>>>>> 
>>>>>>>>> I see two options how to proceed:
>>>>>>>>> 
>>>>>>>>> 1) Commit to maintain flink-storm and port it to Flink's new
>>>>>>> architecture
>>>>>>>>> 2) Drop flink-storm
>>>>>>>>> 
>>>>>> 
>>>>>>>>> I doubt that we can contribute it to Apache Bahir [1], because
>> once
>>> we
>>>>>> 
>>>>>>>>> remove the legacy mode, this module will no longer work with all
>>> newer
>>>>>>>>> Flink versions.
>>>>>>>>> 
>>>>>> 
>>>>>>>>> Therefore, I would like to hear your opinion on this and in
>>> particular
>>>>>>>>> if you are using or planning to use flink-storm in the future.
>>>>>>>>> 
>>>>>>>>> [1] https://github.com/apache/bahir-flink
>>>>>>>>> 
>>>>>>>>> Cheers,
>>>>>>>>> Till
>>>>>>>> 
>>>>>>>> 
>>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> 
>>>>>> 
>>> 
>>> 
>> 



Re: [DISCUSS] Dropping flink-storm?

2018-10-01 Thread Fabian Hueske
+1 to drop it.

Thanks, Fabian



Re: [DISCUSS] Dropping flink-storm?

2018-09-29 Thread Niels Basjes
 I would drop it.

Niels Basjes



Re: [DISCUSS] Dropping flink-storm?

2018-09-29 Thread Kostas Kloudas
+1 to drop it as nobody seems to be willing to maintain it and it also 
stands in the way for future developments in Flink.

Cheers,
Kostas




Re: [DISCUSS] Dropping flink-storm?

2018-09-29 Thread Tzu-Li Chen
+1 to drop it.

It seems few people use it. A sparse commit history for an experimental
module often means there is little interest.

Best,
tison.




Re: [DISCUSS] Dropping flink-storm?

2018-09-29 Thread 远远
+1, it's time to drop it



Re: [DISCUSS] Dropping flink-storm?

2018-09-28 Thread Zhijiang(wangzhijiang999)
Fully agree with dropping it. +1
--
From: Jeff Carter 
Sent: Saturday, Sep 29, 2018, 10:18
To: dev 
Cc: chesnay ; Till Rohrmann ; user 
Subject: Re: [DISCUSS] Dropping flink-storm?

+1 to drop it.




Re: [DISCUSS] Dropping flink-storm?

2018-09-28 Thread Hequn Cheng
Hi,

+1 to drop it. It seems that few people use it.

Best, Hequn

>


Re: [DISCUSS] Dropping flink-storm?

2018-09-28 Thread vino yang
Hi,

+1, I agree.

In addition, some users have asked on the mailing list about integrating
the Storm compatibility mode with newer Flink versions.
They seem not to be aware that some of Flink's new features are no
longer available in Storm compatibility mode.
This can be confusing for those users.

Thanks, vino.

>


Re: [DISCUSS] Dropping flink-storm?

2018-09-28 Thread Chesnay Schepler

I'm very much in favor of dropping it.

Flink has been continually growing in terms of features, and IMO we've 
reached the point where we should cull some of the more obscure ones. 
flink-storm, while interesting from a theoretical standpoint, offers too 
little value.

Note that the bolt/spout wrapper parts of the module are still compatible; 
it's only topologies that aren't working.

IMO compatibility layers only add value if they ease the migration to 
Flink APIs.
* bolt/spout wrappers do this, but they will continue to work even if we 
drop it
* topologies don't do this, so I'm not interested in them.

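Chesnay's point that the bolt/spout wrappers are thin adapters is easy to see in miniature. The sketch below uses simplified, hypothetical stand-ins for Storm's bolt and Flink's flat-map interfaces (they are NOT the real `IRichBolt`/`FlatMapFunction` APIs) to show that wrapping a bolt as a Flink-style function is essentially a method reference:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Simplified stand-ins for Storm's IRichBolt and Flink's FlatMapFunction.
// Illustrative only -- the real APIs carry more context (output declarers,
// configs, rich-function lifecycle) than shown here.
public class BoltAdapterSketch {

    /** Storm-like bolt: consumes one tuple, emits zero or more outputs. */
    interface SimpleBolt<IN, OUT> {
        void execute(IN tuple, Consumer<OUT> collector);
    }

    /** Flink-like flat map: the same shape under a different name. */
    interface SimpleFlatMap<IN, OUT> {
        void flatMap(IN value, Consumer<OUT> out);
    }

    /** The "wrapper": a bolt's execute maps 1:1 onto flatMap. */
    static <IN, OUT> SimpleFlatMap<IN, OUT> wrap(SimpleBolt<IN, OUT> bolt) {
        return bolt::execute;
    }

    public static void main(String[] args) {
        // Classic word-splitting bolt.
        SimpleBolt<String, String> splitter = (line, collector) -> {
            for (String word : line.split(" ")) {
                collector.accept(word);
            }
        };

        // Drive the bolt through the Flink-style interface.
        List<String> out = new ArrayList<>();
        wrap(splitter).flatMap("drop flink storm", out::add);
        System.out.println(out); // prints [drop, flink, storm]
    }
}
```

The real wrappers additionally have to bridge tuple types, lifecycle calls, and output declarers — which is exactly the part of flink-storm that still works; only whole-topology execution is broken.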





[DISCUSS] Dropping flink-storm?

2018-09-28 Thread Till Rohrmann
Hi everyone,

I would like to discuss how to proceed with Flink's storm compatibility
layer flink-strom.

While working on removing Flink's legacy mode, I noticed that some parts of
flink-storm rely on the legacy Flink client. In fact, at the moment
flink-storm does not work together with Flink's new distributed
architecture.

I'm also wondering how many people are actually using Flink's Storm
compatibility layer and whether it would be worth porting it.

I see two options for how to proceed:

1) Commit to maintain flink-storm and port it to Flink's new architecture
2) Drop flink-storm

I doubt that we can contribute it to Apache Bahir [1], because once we
remove the legacy mode, this module will no longer work with all newer
Flink versions.

Therefore, I would like to hear your opinion on this and in particular if
you are using or planning to use flink-storm in the future.

[1] https://github.com/apache/bahir-flink

Cheers,
Till


Re: Exception when run flink-storm-example

2018-09-11 Thread vino yang
Hi hanjing,

*There may be both flink job and flink-storm in the my cluster, I don't
know the influence about legacy mode.*

> For Storm-compatible jobs, because of technical limitations, you need to
use a cluster that supports legacy mode.
   But for jobs implemented using the Flink APIs, I strongly recommend
using the new mode,
   because it makes major changes to the old model and you will get a
more timely response if you encounter problems.

Thanks, vino.

jing  wrote on Tue, Sep 11, 2018 at 6:02 PM:

> Hi Till,
> legacy mode worked!
> Thanks a lot. And what's difference between legacy and new? Is there
> any document and release note?
> There may be both flink job and flink-storm in the my cluster, I don't
> know the influence about legacy mode.
>
> Hanjing
>
> On 9/11/2018 14:43,Till Rohrmann
>  wrote:
>
> Hi Hanjing,
>
> I think the problem is that the Storm compatibility layer only works with
> legacy mode at the moment. Please set `mode: legacy` in your
> flink-conf.yaml. I hope this will resolve the problems.
>
> Cheers,
> Till
>
> On Tue, Sep 11, 2018 at 7:10 AM jing  wrote:
>
>> Hi vino,
>> Thank you very much.
>> I'll try more tests.
>>
>> Hanjing
>>
>> On 9/11/2018 11:51,vino yang
>>  wrote:
>>
>> Hi Hanjing,
>>
>> Flink does not currently support TaskManager HA and only supports
>> JobManager HA.
>> In the Standalone environment, once the JobManager triggers a failover,
>> it will also cause cancel and restart for all jobs.
>>
>> Thanks, vino.
>>
>> jing  wrote on Tue, Sep 11, 2018 at 11:12 AM:
>>
>>> Hi vino,
>>> Thanks a lot.
>>> Besides, I'm also confused about the taskmanagers' HA.
>>> There are 2 taskmanagers in my cluster, and only one job, A, ran on
>>> taskmanager A. If taskmanager A crashed, what happened to my job?
>>> I tried it: my job failed, and taskmanager B did not take over job A.
>>> Is this right?
>>>
>>> Hanjing
>>>
>>> On 9/11/2018 10:59,vino yang
>>>  wrote:
>>>
>>> Oh, I thought the Flink job could not be submitted. I don't know why the
>>> Storm example could not be submitted, because I have never used it.
>>>
>>> Maybe Till, Chesnay or Gary can help you. Ping them for you.
>>>
>>> Thanks, vino.
>>>
>>> jing  wrote on Tue, Sep 11, 2018 at 10:26 AM:
>>>
>>>> Hi vino,
>>>> My jobmanager log is below. I can submit a regular Flink job to this
>>>> jobmanager and it works, but the flink-storm example doesn't.
>>>> Thanks.
>>>> Hanjing
>>>>
>>>> 2018-09-11 18:22:48,937 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
>>>> 
>>>> 2018-09-11 18:22:48,938 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Starting 
>>>> StandaloneSessionClusterEntrypoint (Version: 1.6.0, Rev:ff472b4, 
>>>> Date:07.08.2018 @ 13:31:13 UTC)
>>>> 2018-09-11 18:22:48,938 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  OS 
>>>> current user: hadoop3
>>>> 2018-09-11 18:22:49,143 WARN  org.apache.hadoop.util.NativeCodeLoader  
>>>>  - Unable to load native-hadoop library for your 
>>>> platform... using builtin-java classes where applicable
>>>> 2018-09-11 18:22:49,186 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Current 
>>>> Hadoop/Kerberos user: hadoop3
>>>> 2018-09-11 18:22:49,186 INFO

Re: Exception when run flink-storm-example

2018-09-11 Thread Till Rohrmann
You can check these release notes
https://flink.apache.org/news/2018/05/25/release-1.5.0.html for more
information.

Cheers,
Till
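For reference, the workaround discussed in this thread amounts to a single line in the cluster's flink-conf.yaml. The `mode` option exists in Flink 1.5/1.6 (the releases discussed here), where it selects between the new distributed architecture and the pre-1.5 code path; it was removed in later releases:

```
# flink-conf.yaml -- run the cluster in pre-FLIP-6 ("legacy") mode,
# which the Storm compatibility layer still requires at this point.
mode: legacy
```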

On Tue, Sep 11, 2018 at 12:02 PM jing  wrote:

> Hi Till,
> legacy mode worked!
> Thanks a lot. And what's difference between legacy and new? Is there
> any document and release note?
> There may be both flink job and flink-storm in the my cluster, I don't
> know the influence about legacy mode.
>
> Hanjing
>
> On 9/11/2018 14:43,Till Rohrmann
>  wrote:
>
> Hi Hanjing,
>
> I think the problem is that the Storm compatibility layer only works with
> legacy mode at the moment. Please set `mode: legacy` in your
> flink-conf.yaml. I hope this will resolve the problems.
>
> Cheers,
> Till
>
> On Tue, Sep 11, 2018 at 7:10 AM jing  wrote:
>
>> Hi vino,
>> Thank you very much.
>> I'll try more tests.
>>
>> Hanjing
>>
>> On 9/11/2018 11:51,vino yang
>>  wrote:
>>
>> Hi Hanjing,
>>
>> Flink does not currently support TaskManager HA and only supports
>> JobManager HA.
>> In the Standalone environment, once the JobManager triggers a failover,
>> it will also cause cancel and restart for all jobs.
>>
>> Thanks, vino.
>>
>> jing  wrote on Tue, Sep 11, 2018 at 11:12 AM:
>>
>>> Hi vino,
>>> Thanks a lot.
>>> Besides,  I'm also confused about taskmanager's HA.
>>> There're 2 taskmangaer in my cluster, only one job A worked on
>>> taskmanager A. If taskmangaer A crashed, what happend about my job.
>>> I tried, my job failed, taskmanger B does not take over job A.
>>> Is this right?
>>>
>>> Hanjing
>>>
>>> On 9/11/2018 10:59,vino yang
>>>  wrote:
>>>
>>> Oh, I thought the flink job could not be submitted. I don't know why the
>>> storm's example could not be submitted. Because I have never used it.
>>>
>>> Maybe Till, Chesnay or Gary can help you. Ping them for you.
>>>
>>> Thanks, vino.
>>>
>>> jing  wrote on Tue, Sep 11, 2018 at 10:26 AM:
>>>
>>>> Hi vino,
>>>> My job mangaer log is as below. I can submit regular flink job to this
>>>> jobmanger, it worked. But the flink-storm example doesn's work.
>>>> Thanks.
>>>> Hanjing
>>>>
>>>> 2018-09-11 18:22:48,937 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
>>>> 
>>>> 2018-09-11 18:22:48,938 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Starting 
>>>> StandaloneSessionClusterEntrypoint (Version: 1.6.0, Rev:ff472b4, 
>>>> Date:07.08.2018 @ 13:31:13 UTC)
>>>> 2018-09-11 18:22:48,938 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  OS 
>>>> current user: hadoop3
>>>> 2018-09-11 18:22:49,143 WARN  org.apache.hadoop.util.NativeCodeLoader  
>>>>  - Unable to load native-hadoop library for your 
>>>> platform... using builtin-java classes where applicable
>>>> 2018-09-11 18:22:49,186 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Current 
>>>> Hadoop/Kerberos user: hadoop3
>>>> 2018-09-11 18:22:49,186 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  JVM: Java 
>>>> HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.172-b11
>>>> 2018-09-11 18:22:49,186 INFO  
>>>> org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Maximum 

Re: Exception when run flink-storm-example

2018-09-11 Thread jing
Hi Till,
Legacy mode worked!
Thanks a lot. What's the difference between legacy and new mode? Is there any
document or release note about it?
There may be both regular Flink jobs and flink-storm jobs in my cluster, and I
don't know how legacy mode affects them.


Hanjing

Re: Exception when run flink-storm-example

2018-09-11 Thread Till Rohrmann
Hi Hanjing,

I think the problem is that the Storm compatibility layer only works with
legacy mode at the moment. Please set `mode: legacy` in your
flink-conf.yaml. I hope this will resolve the problems.

Cheers,
Till
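
Till's suggestion above is a one-line configuration change. A minimal sketch of
applying it (the `mode` option with values `new`/`legacy` exists in Flink
1.5/1.6 and defaults to `new`; the `FLINK_HOME` path and the seeded demo config
here are assumptions for illustration, not part of the original thread):

```shell
# Flip a flink-conf.yaml to legacy execution mode (Flink 1.5/1.6 only).
# FLINK_HOME is an assumption; point it at your actual install.
FLINK_HOME=${FLINK_HOME:-./flink-1.6.0-demo}
mkdir -p "$FLINK_HOME/conf"
CONF="$FLINK_HOME/conf/flink-conf.yaml"

# Seed a minimal config if none exists (for illustration only).
[ -f "$CONF" ] || printf 'jobmanager.rpc.port: 6123\nmode: new\n' > "$CONF"

# Drop any existing 'mode:' entry, then set legacy mode explicitly.
grep -v '^mode:' "$CONF" > "$CONF.tmp" && mv "$CONF.tmp" "$CONF"
echo 'mode: legacy' >> "$CONF"

# The JobManager and TaskManagers must be restarted to pick this up:
#   $FLINK_HOME/bin/stop-cluster.sh && $FLINK_HOME/bin/start-cluster.sh
```

The setting is cluster-wide: every job submitted to this cluster (regular
Flink jobs included) then runs on the pre-FLIP-6 code path, so it is worth
testing the regular jobs under legacy mode as well.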


Re: Exception when run flink-storm-example

2018-09-10 Thread jing
Hi vino,
Thank you very much.
I'll try more tests.


Hanjing

Re: Exception when run flink-storm-example

2018-09-10 Thread vino yang
Hi Hanjing,

Flink does not currently support TaskManager HA; it only supports JobManager
HA.
In a standalone environment, a JobManager failover will also cancel and
restart all jobs.

Thanks, vino.
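
A concrete note on the failover behaviour described above: whether a failed
job is re-run on a surviving TaskManager depends on the configured restart
strategy (in Flink 1.6, a cluster without checkpointing defaults to no
restarts), plus enough free slots on the remaining TaskManagers. A minimal
sketch of enabling a fixed-delay strategy; the config keys are the documented
Flink 1.6 ones, while the demo path is an assumption:

```shell
# Enable automatic job restarts so a surviving TaskManager can take over a
# failed job (assuming it has enough free slots). Demo path is illustrative.
CONF=${CONF:-./conf-demo/flink-conf.yaml}
mkdir -p "$(dirname "$CONF")"

# Append a fixed-delay restart strategy: retry up to 3 times, 10 s apart.
cat >> "$CONF" <<'EOF'
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
EOF

# Show the resulting restart configuration:
grep '^restart-strategy' "$CONF"
```

With this in place, a job whose TaskManager dies fails and is then rescheduled
by the JobManager; without it, the job simply stays in the FAILED state, which
matches what Hanjing observed.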


Re: Exception when run flink-storm-example

2018-09-10 Thread jing
Hi vino,
Thanks a lot.
Besides, I'm also confused about TaskManager HA.
There are two TaskManagers in my cluster, and only one job (job A) runs on
TaskManager A. If TaskManager A crashes, what happens to my job?
I tried it: my job failed, and TaskManager B did not take over job A.
Is this right?


Hanjing

Re: Exception when run flink-storm-example

2018-09-10 Thread vino yang
Oh, I thought the Flink job could not be submitted. I don't know why the Storm
example cannot be submitted, because I have never used it.

Maybe Till, Chesnay, or Gary can help you; I'll ping them for you.

Thanks, vino.


Re: Exception when run flink-storm-example

2018-09-10 Thread jing
Hi vino,
My JobManager log is below. I can submit a regular Flink job to this
JobManager and it works, but the flink-storm example doesn't.
Thanks.
Hanjing
2018-09-11 18:22:48,937 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 

2018-09-11 18:22:48,938 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Starting 
StandaloneSessionClusterEntrypoint (Version: 1.6.0, Rev:ff472b4, 
Date:07.08.2018 @ 13:31:13 UTC)
2018-09-11 18:22:48,938 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  OS current 
user: hadoop3
2018-09-11 18:22:49,143 WARN  org.apache.hadoop.util.NativeCodeLoader   
- Unable to load native-hadoop library for your platform... using 
builtin-java classes where applicable
2018-09-11 18:22:49,186 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Current 
Hadoop/Kerberos user: hadoop3
2018-09-11 18:22:49,186 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  JVM: Java 
HotSpot(TM) 64-Bit Server VM - Oracle Corporation - 1.8/25.172-b11
2018-09-11 18:22:49,186 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Maximum heap 
size: 981 MiBytes
2018-09-11 18:22:49,186 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  JAVA_HOME: 
/usr/java/jdk1.8.0_172-amd64
2018-09-11 18:22:49,188 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Hadoop 
version: 2.7.5
2018-09-11 18:22:49,188 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  JVM Options:
2018-09-11 18:22:49,188 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Xms1024m
2018-09-11 18:22:49,188 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - -Xmx1024m
2018-09-11 18:22:49,188 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
-Dlog.file=/home/hadoop3/zh/flink-1.6.0/log/flink-hadoop3-standalonesession-0-p-a36-72.log
2018-09-11 18:22:49,188 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
-Dlog4j.configuration=file:/home/hadoop3/zh/flink-1.6.0/conf/log4j.properties
2018-09-11 18:22:49,188 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
-Dlogback.configurationFile=file:/home/hadoop3/zh/flink-1.6.0/conf/logback.xml
2018-09-11 18:22:49,188 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Program 
Arguments:
2018-09-11 18:22:49,188 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - --configDir
2018-09-11 18:22:49,188 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
/home/hadoop3/zh/flink-1.6.0/conf
2018-09-11 18:22:49,188 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 
--executionMode
2018-09-11 18:22:49,189 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - cluster
2018-09-11 18:22:49,189 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint -  Classpath: 
/home/hadoop3/zh/flink-1.6.0/lib/flink-python_2.11-1.6.0.jar:/home/hadoop3/zh/flink-1.6.0/lib/flink-shaded-hadoop2-uber-1.6.0.jar:/home/hadoop3/zh/flink-1.6.0/lib/log4j-1.2.17.jar:/home/hadoop3/zh/flink-1.6.0/lib/slf4j-log4j12-1.7.7.jar:/home/hadoop3/zh/flink-1.6.0/lib/flink-dist_2.11-1.6.0.jar:::
2018-09-11 18:22:49,189 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - 

2018-09-11 18:22:49,189 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Registered UNIX 
signal handlers for [TERM, HUP, INT]
2018-09-11 18:22:49,197 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: jobmanager.rpc.address, p-a36-72
2018-09-11 18:22:49,197 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: jobmanager.rpc.port, 6123
2018-09-11 18:22:49,197 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: jobmanager.heap.size, 1024m
2018-09-11 18:22:49,197 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: taskmanager.heap.size, 10240m
2018-09-11 18:22:49,197 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: taskmanager.numberOfTaskSlots, 16
2018-09-11 18:22:49,197 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: parallelism.default, 2
2018-09-11 18:22:49,198 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: rest.port, 8081
2018-09-11 18:22:49,207 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Starting 
StandaloneSessionClusterEntrypoint.
2018-09-11 18:22:49,207

Re: Exception when run flink-storm-example

2018-09-10 Thread vino yang
55)
>
> at
> akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:73)
>
> at
> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:76)
>
> at
> akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:120)
>
> at
> akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:75)
>
> at
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
>
> at
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
>
> at akka.pattern.PromiseActorRef.$bang(AskSupport.scala:534)
>
> at akka.remote.DefaultMessageDispatcher.dispatch(Endpoint.scala:97)
>
> at
> akka.remote.EndpointReader$$anonfun$receive$2.applyOrElse(Endpoint.scala:982)
>
> at akka.actor.Actor$class.aroundReceive(Actor.scala:502)
>
> at akka.remote.EndpointActor.aroundReceive(Endpoint.scala:446)
>
> at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)
>
> at akka.actor.ActorCell.invoke(ActorCell.scala:495)
>
> at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
>
> at akka.dispatch.Mailbox.run(Mailbox.scala:224)
>
> at akka.dispatch.Mailbox.exec(Mailbox.scala:234)
>
> at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>
> at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>
> at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>
> at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
>
> On 9/10/2018 20:17, vino yang wrote:
>
> Hi Hanjing,
>
> OK, I mean you change the "localhost" to the real IP.
>
> Try it.
>
> Thanks, vino.
>
> jing wrote on Mon, Sep 10, 2018 at 8:07 PM:
>
>> Hi vino,
>> jobmanager rpc address value is set to localhost.
>> hadoop3@p-a36-72 is the node hosting the jobmanager JVM.
>>
>> Thanks.
>> Hanjing
>>
>>
>>
>> jing
>> Email: hanjingz...@163.com
>>
>>
>> On 09/10/2018 19:25, vino yang  wrote:
>> Hi Hanjing,
>>
>> I mean this configuration key.[1]
>>
> What's more, is "hadoop3@p-a36-72" also the node which hosts the
> JobManager's JVM process?
>>
>> Thanks, vino.
>>
>> [1]:
>> https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/config.html#jobmanager-rpc-address
>>
>> jing wrote on Mon, Sep 10, 2018 at 6:57 PM:
>>
>>> Hi vino,
>>   I submit the job on the JM node with the command below.
>>> hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run
>>> WordCount-StormTopology.jar input output
>>>
>> And I'm a new user; which configuration key should be set? All the
>> configurations are at their default settings now.
>>>
>>> Thanks.
>>> Hanjing
>>>
>>>
>>> On 09/10/2018 15:49, vino yang  wrote:
>>> Hi Hanjing,
>>>
>>> Did you perform the CLI submission on the JM node? Is the address bound to
>>> "localhost" in the Flink JM configuration?
>>>
>>> Thanks, vino.
>>>
>>> jing wrote on Mon, Sep 10, 2018 at 11:00 AM:
>>>
>>>> Hello,
>>>>
>>>>I’m trying to run flink-storm-example on standalone clusters.
>>>> But there’s some exception I can’t solve. Could anyone please help me
>>>> with this trouble.
>>>>
>>>>flink-storm-example version: 1.6.0
>>>>
>>>>flink version: 1.6.0
>>>>
>>>>The log below is the exception. My job manager status is as in the
>>>> picture.
>>>>
>>>>I’ve tried to change the IP address and port, but it doesn’t
>>>> 

Re: Exception when run flink-storm-example

2018-09-10 Thread jing
oint.scala:446)

at akka.actor.ActorCell.receiveMessage(ActorCell.scala:526)

at akka.actor.ActorCell.invoke(ActorCell.scala:495)

at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)

at akka.dispatch.Mailbox.run(Mailbox.scala:224)

at akka.dispatch.Mailbox.exec(Mailbox.scala:234)

at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)

at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)

at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)

at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

 

On 9/10/2018 20:17, vino yang wrote:
Hi Hanjing,


OK, I mean you change the "localhost" to the real IP.


Try it.


Thanks, vino. 


jing wrote on Mon, Sep 10, 2018 at 8:07 PM:

Hi vino,
jobmanager rpc address value is set to localhost.
hadoop3@p-a36-72 is the node hosting the jobmanager JVM.

Thanks.
Hanjing





On 09/10/2018 19:25, vino yang wrote:
Hi Hanjing,


I mean this configuration key.[1]


What's more, is "hadoop3@p-a36-72" also the node which hosts the JobManager's 
JVM process?


Thanks, vino.


[1]: 
https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/config.html#jobmanager-rpc-address


jing wrote on Mon, Sep 10, 2018 at 6:57 PM:

Hi vino,
  I submit the job on the JM node with the command below.
hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run WordCount-StormTopology.jar 
input output

And I'm a new user; which configuration key should be set? All the 
configurations are at their default settings now.

Thanks.
Hanjing



On 09/10/2018 15:49, vino yang wrote:
Hi Hanjing,


Did you perform the CLI submission on the JM node? Is the address bound to 
"localhost" in the Flink JM configuration?


Thanks, vino.


jing wrote on Mon, Sep 10, 2018 at 11:00 AM:


Hello,

   I’m trying to run flink-storm-example on standalone clusters. But 
there’s some exception I can’t solve. Could anyone please help me with this trouble.

   flink-storm-example version: 1.6.0

   flink version: 1.6.0

   The log below is the exception. My job manager status is as in the picture.

   I’ve tried to change the IP address and port, but it doesn’t work.

  

   Thanks a lot.

---

[hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run WordCount-StormTopology.jar 
input output

Starting execution of program






 The program finished with the following exception:




org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error.

at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)

at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)

at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)

at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)

at 
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)

at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)

at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)

at 
org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:422)

at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)

at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)

at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)

Caused by: java.lang.RuntimeException: Could not connect to Flink JobManager 
with address localhost:6123

at 
org.apache.flink.storm.api.FlinkClient.getTopologyJobId(FlinkClient.java:304)

at 
org.apache.flink.storm.api.FlinkSubmitter.submitTopology(FlinkSubmitter.java:107)

at 
org.apache.flink.storm.wordcount.WordCountRemoteBySubmitter.main(WordCountRemoteBySubmitter.java:75)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)

at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)

... 12 more

Caused by: java.io.IOException: Actor at 
akka.tcp://flink@localhost:6123/user/jobmanager not reachable. Please make sure 
that the actor is running and its port is reachable.

at 
org.apache.f

Re: Exception when run flink-storm-example

2018-09-10 Thread vino yang
Hi Hanjing,

OK, I mean you change the "localhost" to the real IP.

Try it.

Thanks, vino.
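The change vino suggests here maps to one line in conf/flink-conf.yaml on the JobManager host; the hostname below is the one appearing elsewhere in this thread and stands in for the machine's real address:

```yaml
# conf/flink-conf.yaml on the JobManager host (p-a36-72 in this thread)
# Bind the JobManager RPC endpoint to the real address, not localhost,
# so remote clients (e.g. the Storm FlinkSubmitter) can reach it.
jobmanager.rpc.address: p-a36-72
jobmanager.rpc.port: 6123
```

After editing, restart the cluster and re-run the submission so the client resolves the JobManager at the new address.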

jing wrote on Mon, Sep 10, 2018 at 8:07 PM:

> Hi vino,
> jobmanager rpc address value is set to localhost.
> hadoop3@p-a36-72 is the node hosting the jobmanager JVM.
>
> Thanks.
> Hanjing
>
>
>
>
> On 09/10/2018 19:25, vino yang  wrote:
> Hi Hanjing,
>
> I mean this configuration key.[1]
>
> What's more, is "hadoop3@p-a36-72" also the node which hosts the
> JobManager's JVM process?
>
> Thanks, vino.
>
> [1]:
> https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/config.html#jobmanager-rpc-address
>
> jing wrote on Mon, Sep 10, 2018 at 6:57 PM:
>
>> Hi vino,
>>   I submit the job on the JM node with the command below.
>> hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run
>> WordCount-StormTopology.jar input output
>>
>> And I'm a new user; which configuration key should be set? All the
>> configurations are at their default settings now.
>>
>> Thanks.
>> Hanjing
>>
>>
>> On 09/10/2018 15:49, vino yang  wrote:
>> Hi Hanjing,
>>
>> Did you perform the CLI submission on the JM node? Is the address bound to
>> "localhost" in the Flink JM configuration?
>>
>> Thanks, vino.
>>
>> jing wrote on Mon, Sep 10, 2018 at 11:00 AM:
>>
>>> Hello,
>>>
>>>I’m trying to run flink-storm-example on standalone clusters.
>>> But there’s some exception I can’t solve. Could anyone please help me
>>> with this trouble.
>>>
>>>flink-storm-example version: 1.6.0
>>>
>>>flink version: 1.6.0
>>>
>>>The log below is the exception. My job manager status is as in the
>>> picture.
>>>
>>>I’ve tried to change the IP address and port, but it doesn’t
>>> work.
>>>
>>>
>>>
>>>Thanks a lot.
>>>
>>> ---
>>>
>>> [hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run
>>> WordCount-StormTopology.jar input output
>>>
>>> Starting execution of program
>>>
>>>
>>> 
>>>
>>>  The program finished with the following exception:
>>>
>>>
>>> org.apache.flink.client.program.ProgramInvocationException: The main
>>> method caused an error.
>>>
>>> at
>>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)
>>>
>>> at
>>> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
>>>
>>> at
>>> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
>>>
>>> at
>>> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)
>>>
>>> at
>>> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)
>>>
>>> at
>>> org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
>>>
>>> at
>>> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)
>>>
>>> at
>>> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)
>>>
>>> at java.security.AccessController.doPrivileged(Native Method)
>>>
>>> at javax.security.auth.Subject.doAs(Subject.java:422)
>>>
>>> at
>>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>>>
>>> at
>>> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>>>
>>> at
>>> org.apache.flink.cl

Re: Exception when run flink-storm-example

2018-09-10 Thread jing
Hi vino,
jobmanager rpc address value is set to localhost.
hadoop3@p-a36-72 is the node hosting the jobmanager JVM.

Thanks.
Hanjing





On 09/10/2018 19:25, vino yang wrote:
Hi Hanjing,


I mean this configuration key.[1]


What's more, is "hadoop3@p-a36-72" also the node which hosts the JobManager's 
JVM process?


Thanks, vino.


[1]: 
https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/config.html#jobmanager-rpc-address


jing wrote on Mon, Sep 10, 2018 at 6:57 PM:

Hi vino,
  I submit the job on the JM node with the command below.
hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run WordCount-StormTopology.jar 
input output

And I'm a new user; which configuration key should be set? All the 
configurations are at their default settings now.

Thanks.
Hanjing



On 09/10/2018 15:49, vino yang wrote:
Hi Hanjing,


Did you perform the CLI submission on the JM node? Is the address bound to 
"localhost" in the Flink JM configuration?


Thanks, vino.


jing wrote on Mon, Sep 10, 2018 at 11:00 AM:


Hello,

   I’m trying to run flink-storm-example on standalone clusters. But 
there’s some exception I can’t solve. Could anyone please help me with this trouble.

   flink-storm-example version: 1.6.0

   flink version: 1.6.0

   The log below is the exception. My job manager status is as in the picture.

   I’ve tried to change the IP address and port, but it doesn’t work.

  

   Thanks a lot.

---

[hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run WordCount-StormTopology.jar 
input output

Starting execution of program






 The program finished with the following exception:




org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error.

at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)

at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)

at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)

at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)

at 
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)

at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)

at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)

at 
org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:422)

at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)

at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)

at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)

Caused by: java.lang.RuntimeException: Could not connect to Flink JobManager 
with address localhost:6123

at 
org.apache.flink.storm.api.FlinkClient.getTopologyJobId(FlinkClient.java:304)

at 
org.apache.flink.storm.api.FlinkSubmitter.submitTopology(FlinkSubmitter.java:107)

at 
org.apache.flink.storm.wordcount.WordCountRemoteBySubmitter.main(WordCountRemoteBySubmitter.java:75)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)

at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)

... 12 more

Caused by: java.io.IOException: Actor at 
akka.tcp://flink@localhost:6123/user/jobmanager not reachable. Please make sure 
that the actor is running and its port is reachable.

at 
org.apache.flink.runtime.akka.AkkaUtils$.getActorRef(AkkaUtils.scala:547)

at org.apache.flink.runtime.akka.AkkaUtils.getActorRef(AkkaUtils.scala)

at 
org.apache.flink.storm.api.FlinkClient.getJobManager(FlinkClient.java:339)

at 
org.apache.flink.storm.api.FlinkClient.getTopologyJobId(FlinkClient.java:278)

... 19 more

Caused by: akka.actor.ActorNotFound: Actor not found for: 
ActorSelection[Anchor(akka.tcp://flink@localhost:6123/), Path(/user/jobmanager)]

at 
akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:68)

at 
akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:66)

at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)

at 
akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)

at akka.dispa

Re: Exception when run flink-storm-example

2018-09-10 Thread vino yang
Hi Hanjing,

I mean this configuration key.[1]

What's more, is "hadoop3@p-a36-72" also the node which hosts the
JobManager's JVM process?

Thanks, vino.

[1]:
https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/config.html#jobmanager-rpc-address

jing wrote on Mon, Sep 10, 2018 at 6:57 PM:

> Hi vino,
>   I submit the job on the JM node with the command below.
> hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run
> WordCount-StormTopology.jar input output
>
> And I'm a new user; which configuration key should be set? All the
> configurations are at their default settings now.
>
> Thanks.
> Hanjing
>
>
> On 09/10/2018 15:49, vino yang  wrote:
> Hi Hanjing,
>
> Did you perform the CLI submission on the JM node? Is the address bound to
> "localhost" in the Flink JM configuration?
>
> Thanks, vino.
>
> jing wrote on Mon, Sep 10, 2018 at 11:00 AM:
>
>> Hello,
>>
>>I’m trying to run flink-storm-example on standalone clusters. But
>> there’s some exception I can’t solve. Could anyone please help me with
>> this trouble.
>>
>>flink-storm-example version: 1.6.0
>>
>>flink version: 1.6.0
>>
>>The log below is the exception. My job manager status is as in the
>> picture.
>>
>>I’ve tried to change the IP address and port, but it doesn’t work.
>>
>>
>>
>>Thanks a lot.
>>
>> ---
>>
>> [hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run
>> WordCount-StormTopology.jar input output
>>
>> Starting execution of program
>>
>>
>> 
>>
>>  The program finished with the following exception:
>>
>>
>> org.apache.flink.client.program.ProgramInvocationException: The main
>> method caused an error.
>>
>> at
>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)
>>
>> at
>> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
>>
>> at
>> org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)
>>
>> at java.security.AccessController.doPrivileged(Native Method)
>>
>> at javax.security.auth.Subject.doAs(Subject.java:422)
>>
>> at
>> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
>>
>> at
>> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>>
>> at
>> org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)
>>
>> Caused by: java.lang.RuntimeException: Could not connect to Flink
>> JobManager with address localhost:6123
>>
>> at
>> org.apache.flink.storm.api.FlinkClient.getTopologyJobId(FlinkClient.java:304)
>>
>> at
>> org.apache.flink.storm.api.FlinkSubmitter.submitTopology(FlinkSubmitter.java:107)
>>
>> at
>> org.apache.flink.storm.wordcount.WordCountRemoteBySubmitter.main(WordCountRemoteBySubmitter.java:75)
>>
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>
>> at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>
>> at java.lang.reflect.Method.invoke(Method.java:498)
>>
>> at
>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
>>
>> ... 12 more
>>
>> Caused by: java.io.IOException: Actor at 
>> akka.tcp://flin

Re: Exception when run flink-storm-example

2018-09-10 Thread jing
Hi vino,
  I submit the job on the JM node with the command below.
hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run WordCount-StormTopology.jar 
input output

And I'm a new user; which configuration key should be set? All the 
configurations are at their default settings now.

Thanks.
Hanjing



On 09/10/2018 15:49, vino yang wrote:
Hi Hanjing,


Did you perform the CLI submission on the JM node? Is the address bound to 
"localhost" in the Flink JM configuration?


Thanks, vino.


jing wrote on Mon, Sep 10, 2018 at 11:00 AM:


Hello,

   I’m trying to run flink-storm-example on standalone clusters. But 
there’s some exception I can’t solve. Could anyone please help me with this trouble.

   flink-storm-example version: 1.6.0

   flink version: 1.6.0

   The log below is the exception. My job manager status is as in the picture.

   I’ve tried to change the IP address and port, but it doesn’t work.

  

   Thanks a lot.

---

[hadoop3@p-a36-72 flink-1.6.0]$ ./bin/flink run WordCount-StormTopology.jar 
input output

Starting execution of program






 The program finished with the following exception:




org.apache.flink.client.program.ProgramInvocationException: The main method 
caused an error.

at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)

at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)

at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:426)

at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:804)

at 
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280)

at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)

at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1044)

at 
org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1120)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:422)

at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)

at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)

at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1120)

Caused by: java.lang.RuntimeException: Could not connect to Flink JobManager 
with address localhost:6123

at 
org.apache.flink.storm.api.FlinkClient.getTopologyJobId(FlinkClient.java:304)

at 
org.apache.flink.storm.api.FlinkSubmitter.submitTopology(FlinkSubmitter.java:107)

at 
org.apache.flink.storm.wordcount.WordCountRemoteBySubmitter.main(WordCountRemoteBySubmitter.java:75)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:498)

at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)

... 12 more

Caused by: java.io.IOException: Actor at 
akka.tcp://flink@localhost:6123/user/jobmanager not reachable. Please make sure 
that the actor is running and its port is reachable.

at 
org.apache.flink.runtime.akka.AkkaUtils$.getActorRef(AkkaUtils.scala:547)

at org.apache.flink.runtime.akka.AkkaUtils.getActorRef(AkkaUtils.scala)

at 
org.apache.flink.storm.api.FlinkClient.getJobManager(FlinkClient.java:339)

at 
org.apache.flink.storm.api.FlinkClient.getTopologyJobId(FlinkClient.java:278)

... 19 more

Caused by: akka.actor.ActorNotFound: Actor not found for: 
ActorSelection[Anchor(akka.tcp://flink@localhost:6123/), Path(/user/jobmanager)]

at 
akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:68)

at 
akka.actor.ActorSelection$$anonfun$resolveOne$1.apply(ActorSelection.scala:66)

at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)

at 
akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)

at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:73)

at 
akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.unbatchedExecute(Future.scala:76)

at 
akka.dispatch.BatchingExecutor$class.execute(BatchingExecutor.scala:120)

at 
akka.dispatch.ExecutionContexts$sameThreadExecutionContext$.execute(Future.scala:75)

at 
scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)

at 
scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise

Re: Using HiveBolt from storm-hive with Flink-Storm compatibility wrapper

2017-09-25 Thread Timo Walther

Hi Federico,

I think going through a Storm compatibility layer could work, but did 
you think about using the flink-jdbc connector? That should be the 
easiest solution.


Otherwise I think it would be easier to quickly implement your own 
SinkFunction. It is just one method that you have to implement; you 
could call some Hive commands there.


Regards,
Timo


Am 9/25/17 um 4:16 PM schrieb Nico Kruber:

Hi Federico,
I also did not find any implementation of a Hive sink, nor many details on this
topic in general. Let me forward this to Timo and Fabian (cc'd) who may know
more.

Nico

On Friday, 22 September 2017 12:14:32 CEST Federico D'Ambrosio wrote:

Hello everyone,

I'd like to use the HiveBolt from storm-hive inside a flink job using the
Flink-Storm compatibility layer but I'm not sure how to integrate it. Let
me explain, I would have the following:

val mapper = ...

val hiveOptions = ...

streamByID
   .transform[OUT]("hive-sink", new BoltWrapper[IN, OUT](new
HiveBolt(hiveOptions)))

where streamByID is a DataStream[Event].

What would be the IN and OUT types? HiveBolt executes on a storm Tuple, so,
I'd think that IN should be an Event "tuple-d" ( event => (field1, field2,
field3 ...) ), while OUT, since I don't want the stream to keep flowing
would be null or None?

Alternatively, do you know of any implementation of a Hive sink in Flink,
other than adapting the said HiveBolt into a RichSinkFunction?

Thanks for your attention,
  Federico

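A minimal sketch of the SinkFunction route Timo describes above: one class, doing the Hive interaction over JDBC. The connection URL, the SQL, and the three-string tuple layout are illustrative assumptions for this sketch, not a fixed API:

```scala
import java.sql.{Connection, DriverManager, PreparedStatement}

import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction

// Writes each tuple as a row via a JDBC endpoint (e.g. HiveServer2).
// jdbcUrl and insertSql are placeholders supplied by the caller.
class HiveJdbcSink(jdbcUrl: String, insertSql: String)
    extends RichSinkFunction[(String, String, String)] {

  @transient private var conn: Connection = _
  @transient private var stmt: PreparedStatement = _

  override def open(parameters: Configuration): Unit = {
    // Opened once per parallel sink instance, not per record.
    conn = DriverManager.getConnection(jdbcUrl)
    stmt = conn.prepareStatement(insertSql)
  }

  override def invoke(value: (String, String, String)): Unit = {
    stmt.setString(1, value._1)
    stmt.setString(2, value._2)
    stmt.setString(3, value._3)
    stmt.executeUpdate()
  }

  override def close(): Unit = {
    if (stmt != null) stmt.close()
    if (conn != null) conn.close()
  }
}
```

Wired into the thread's example this would look like streamByID.map(e => (e.field1, e.field2, e.field3)).addSink(new HiveJdbcSink(url, sql)), assuming a reachable HiveServer2 JDBC endpoint.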




Re: Using HiveBolt from storm-hive with Flink-Storm compatibility wrapper

2017-09-25 Thread Nico Kruber
Hi Federico,
I also did not find any implementation of a Hive sink, nor many details on this 
topic in general. Let me forward this to Timo and Fabian (cc'd) who may know 
more.

Nico

On Friday, 22 September 2017 12:14:32 CEST Federico D'Ambrosio wrote:
> Hello everyone,
> 
> I'd like to use the HiveBolt from storm-hive inside a flink job using the
> Flink-Storm compatibility layer but I'm not sure how to integrate it. Let
> me explain, I would have the following:
> 
> val mapper = ...
> 
> val hiveOptions = ...
> 
> streamByID
>   .transform[OUT]("hive-sink", new BoltWrapper[IN, OUT](new
> HiveBolt(hiveOptions)))
> 
> where streamByID is a DataStream[Event].
> 
> What would be the IN and OUT types? HiveBolt executes on a storm Tuple, so,
> I'd think that IN should be an Event "tuple-d" ( event => (field1, field2,
> field3 ...) ), while OUT, since I don't want the stream to keep flowing
> would be null or None?
> 
> Alternatively, do you know of any implementation of a Hive sink in Flink,
> other than adapting the said HiveBolt into a RichSinkFunction?
> 
> Thanks for your attention,
>  Federico




Using HiveBolt from storm-hive with Flink-Storm compatibility wrapper

2017-09-22 Thread Federico D'Ambrosio
Hello everyone,

I'd like to use the HiveBolt from storm-hive inside a flink job using the
Flink-Storm compatibility layer but I'm not sure how to integrate it. Let
me explain, I would have the following:

val mapper = ...

val hiveOptions = ...

streamByID
  .transform[OUT]("hive-sink", new BoltWrapper[IN, OUT](new
HiveBolt(hiveOptions)))

where streamByID is a DataStream[Event].

What would be the IN and OUT types? HiveBolt executes on a storm Tuple, so,
I'd think that IN should be an Event "tuple-d" ( event => (field1, field2,
field3 ...) ), while OUT, since I don't want the stream to keep flowing
would be null or None?

Alternatively, do you know of any implementation of a Hive sink in Flink,
other than adapting the said HiveBolt into a RichSinkFunction?

Thanks for your attention,
 Federico


Re: flink-storm FlinkLocalCluster issue

2016-02-29 Thread Maximilian Michels
Hi Zhang,

Please have a look here for the 1.0.0-rc2:

Binaries: http://people.apache.org/~rmetzger/flink-1.0.0-rc2/
Maven repository:
https://repository.apache.org/content/repositories/orgapacheflink-1064

Cheers,
Max
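
Pulling a release candidate from that staging repository means declaring it in the project POM, roughly like this (the repository id is arbitrary; the URL is the one posted above):

```xml
<!-- Staging repository hosting the Flink 1.0.0-rc2 artifacts -->
<repositories>
  <repository>
    <id>apache-flink-staging</id>
    <url>https://repository.apache.org/content/repositories/orgapacheflink-1064</url>
  </repository>
</repositories>
```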

On Sat, Feb 27, 2016 at 4:00 AM, #ZHANG SHUHAO# <szhang...@e.ntu.edu.sg> wrote:
> Thanks for the confirmation.
>
> When will 1.0 be available in the Maven repo?
>
>
>
> From: ewenstep...@gmail.com [mailto:ewenstep...@gmail.com] On Behalf Of
> Stephan Ewen
> Sent: Friday, February 26, 2016 9:07 PM
> To: user@flink.apache.org
> Subject: Re: flink-storm FlinkLocalCluster issue
>
>
>
> Hi!
>
>
>
> On 0.10.x, the Storm compatibility layer does not properly configure the
> Local Flink Executor to have the right parallelism.
>
>
>
> In 1.0 that is fixed. If you try the latest snapshot, or the
> 1.0-Release-Candidate-1, it should work.
>
>
>
> Greetings,
>
> Stephan
>
>
>
>
>
> On Fri, Feb 26, 2016 at 12:16 PM, #ZHANG SHUHAO# <szhang...@e.ntu.edu.sg>
> wrote:
>
> Hi till,
>
>
>
> Thanks for your reply.
>
> But it appears that it only started with a slot count of 1.
>
> I have traced through the Flink source code step by step and
> confirmed it.
>
>
>
> I'm using Flink 0.10.2, source code downloaded from the Flink website. Nothing
> has been changed. I simply tried to run the flink-storm word count local
> example.
>
>
>
> It just failed to work.
>
>
>
>
>
> Sent from my iPhone
>
>
> On 26 Feb 2016, at 6:16 PM, Till Rohrmann <trohrm...@apache.org> wrote:
>
> Hi Shuhao,
>
> the configuration you’re providing is only used for the storm compatibility
> layer and not Flink itself. When you run your job locally, the
> LocalFlinkMiniCluster should be started with as many slots as the maximum
> degree of parallelism in your topology. You can check this in
> FlinkLocalCluster.java:96. When you submit your job to a remote cluster,
> then you have to define the number of slots in the flink-conf.yaml file.
>
> Cheers,
> Till
>
>
>
> On Fri, Feb 26, 2016 at 10:34 AM, #ZHANG SHUHAO# <szhang...@e.ntu.edu.sg>
> wrote:
>
> Hi everyone,
>
>
>
> I’m a student researcher working on Flink recently.
>
>
>
> I’m trying out the flink-storm example project, version 0.10.2,
> flink-storm-examples, word-count-local.
>
>
>
> But, I got the following error:
>
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
> Not enough free slots available to run the job. You can decrease the
> operator parallelism or increase the number of slots per TaskManager in the
> configuration. Task to schedule: < Attempt #0 (tokenizer (2/4)) @
> (unassigned) - [SCHEDULED] > with groupID < b86aa7eb76c417c63afb577ea46ddd72
>> in sharing group < SlotSharingGroup [0ea70b6a3927c02f29baf9de8f59575b,
> b86aa7eb76c417c63afb577ea46ddd72, cbc10505364629b45b28e79599c2f6b8] >.
> Resources available to scheduler: Number of instances=1, total number of
> slots=1, available slots=0
>
> at
> org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:255)
>
> at
> org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:131)
>
> at
> org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:298)
>
> at
> org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:458)
>
> at
> org.apache.flink.runtime.executiongraph.ExecutionJobVertex.scheduleAll(ExecutionJobVertex.java:322)
>
> at
> org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleForExecution(ExecutionGraph.java:686)
>
> at
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:992)
>
> at
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:972)
>
> at
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:972)
>
> at
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>
> at
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>
> at
> akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>
> at
> akka.dispatch.F

RE: flink-storm FlinkLocalCluster issue

2016-02-26 Thread #ZHANG SHUHAO#
Thanks for the confirmation.
When will 1.0 be ready in maven repo?

From: ewenstep...@gmail.com [mailto:ewenstep...@gmail.com] On Behalf Of Stephan 
Ewen
Sent: Friday, February 26, 2016 9:07 PM
To: user@flink.apache.org
Subject: Re: flink-storm FlinkLocalCluster issue

Hi!

On 0.10.x, the Storm compatibility layer does not properly configure the Local 
Flink Executor to have the right parallelism.

In 1.0 that is fixed. If you try the latest snapshot, or the 
1.0-Release-Candidate-1, it should work.

Greetings,
Stephan


On Fri, Feb 26, 2016 at 12:16 PM, #ZHANG SHUHAO# 
<szhang...@e.ntu.edu.sg> wrote:
Hi till,

Thanks for your reply.
But it appears that it only started with #slot of 1.
I have traced down to the source code of flink step by step, where I have 
confirmed it.

I'm using flink 0.10.2, source code downloaded from flink website. Nothing have 
been changed. I simply try to run the flink-Storm word count local example.

It just failed to work.


Sent from my iPhone

On 26 Feb 2016, at 6:16 PM, Till Rohrmann 
<trohrm...@apache.org> wrote:

Hi Shuhao,

the configuration you’re providing is only used for the storm compatibility 
layer and not Flink itself. When you run your job locally, the 
LocalFlinkMiniCluster should be started with as many slots as your maximum 
degree of parallelism is in your topology. You can check this in 
FlinkLocalCluster.java:96. When you submit your job to a remote cluster, then 
you have to define the number of slots in the flink-conf.yaml file.

Cheers,
Till
​

On Fri, Feb 26, 2016 at 10:34 AM, #ZHANG SHUHAO# 
<szhang...@e.ntu.edu.sg> wrote:
Hi everyone,

I’m a student researcher working on Flink recently.

I’m trying out the flink-storm example project, version 0.10.2, 
flink-storm-examples, word-count-local.

But, I got the following error:
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Not 
enough free slots available to run the job. You can decrease the operator 
parallelism or increase the number of slots per TaskManager in the 
configuration. Task to schedule: < Attempt #0 (tokenizer (2/4)) @ (unassigned) 
- [SCHEDULED] > with groupID < b86aa7eb76c417c63afb577ea46ddd72 > in sharing 
group < SlotSharingGroup [0ea70b6a3927c02f29baf9de8f59575b, 
b86aa7eb76c417c63afb577ea46ddd72, cbc10505364629b45b28e79599c2f6b8] >. 
Resources available to scheduler: Number of instances=1, total number of 
slots=1, available slots=0
at 
org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:255)
at 
org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:131)
at 
org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:298)
at 
org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:458)
at 
org.apache.flink.runtime.executiongraph.ExecutionJobVertex.scheduleAll(ExecutionJobVertex.java:322)
at 
org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleForExecution(ExecutionGraph.java:686)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:992)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:972)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:972)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at 
scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


I notice that by default, task manager has only one slot, changing the setting 
in flink-conf does not help as I want to debug locally through 
FlinkLocalCluster (not to submit it locally).

> I have tried the following:


> import backtype.storm.Config;




Config config = new Config();
config.put(ConfigConstants.TASK_MANAGER_NUM_TASK_S

Re: flink-storm FlinkLocalCluster issue

2016-02-26 Thread Stephan Ewen
Hi!

On 0.10.x, the Storm compatibility layer does not properly configure the
Local Flink Executor to have the right parallelism.

In 1.0 that is fixed. If you try the latest snapshot, or the
1.0-Release-Candidate-1, it should work.

Greetings,
Stephan


On Fri, Feb 26, 2016 at 12:16 PM, #ZHANG SHUHAO# <szhang...@e.ntu.edu.sg>
wrote:

> Hi till,
>
> Thanks for your reply.
> But it appears that it only started with #slot of 1.
> I have traced down to the source code of flink step by step, where I have
> confirmed it.
>
> I'm using flink 0.10.2, source code downloaded from flink website. Nothing
> have been changed. I simply try to run the flink-Storm word count local
> example.
>
> It just failed to work.
>
>
> Sent from my iPhone
>
> On 26 Feb 2016, at 6:16 PM, Till Rohrmann <trohrm...@apache.org> wrote:
>
> Hi Shuhao,
>
> the configuration you’re providing is only used for the storm
> compatibility layer and not Flink itself. When you run your job locally,
> the LocalFlinkMiniCluster should be started with as many slots as your
> maximum degree of parallelism is in your topology. You can check this in
> FlinkLocalCluster.java:96. When you submit your job to a remote cluster,
> then you have to define the number of slots in the flink-conf.yaml file.
>
> Cheers,
> Till
>
> On Fri, Feb 26, 2016 at 10:34 AM, #ZHANG SHUHAO# <szhang...@e.ntu.edu.sg>
> wrote:
>
>> Hi everyone,
>>
>>
>>
>> I’m a student researcher working on Flink recently.
>>
>>
>>
>> I’m trying out the flink-storm example project, version 0.10.2,
>> flink-storm-examples, word-count-local.
>>
>>
>>
>> But, I got the following error:
>>
>> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
>> Not enough free slots available to run the job. You can decrease the
>> operator parallelism or increase the number of slots per TaskManager in the
>> configuration. Task to schedule: < Attempt #0 (tokenizer (2/4)) @
>> (unassigned) - [SCHEDULED] > with groupID <
>> b86aa7eb76c417c63afb577ea46ddd72 > in sharing group < SlotSharingGroup
>> [0ea70b6a3927c02f29baf9de8f59575b, b86aa7eb76c417c63afb577ea46ddd72,
>> cbc10505364629b45b28e79599c2f6b8] >. Resources available to scheduler:
>> Number of instances=1, total number of slots=1, available slots=0
>>
>> at
>> org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:255)
>>
>> at
>> org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:131)
>>
>> at
>> org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:298)
>>
>> at
>> org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:458)
>>
>> at
>> org.apache.flink.runtime.executiongraph.ExecutionJobVertex.scheduleAll(ExecutionJobVertex.java:322)
>>
>> at
>> org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleForExecution(ExecutionGraph.java:686)
>>
>> at
>> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:992)
>>
>> at
>> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:972)
>>
>> at
>> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:972)
>>
>> at
>> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>>
>> at
>> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>>
>> at
>> akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>>
>> at
>> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>>
>> at
>> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>>
>> at
>> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
>>
>> at
>> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
>>
>> at
>> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>>
>>

Re: flink-storm FlinkLocalCluster issue

2016-02-26 Thread #ZHANG SHUHAO#
Hi till,

Thanks for your reply.
But it appears that it only started with #slot of 1.
I have traced down to the source code of flink step by step, where I have 
confirmed it.

I'm using flink 0.10.2, source code downloaded from flink website. Nothing have 
been changed. I simply try to run the flink-Storm word count local example.

It just failed to work.


Sent from my iPhone

On 26 Feb 2016, at 6:16 PM, Till Rohrmann 
<trohrm...@apache.org> wrote:


Hi Shuhao,

the configuration you’re providing is only used for the storm compatibility 
layer and not Flink itself. When you run your job locally, the 
LocalFlinkMiniCluster should be started with as many slots as your maximum 
degree of parallelism is in your topology. You can check this in 
FlinkLocalCluster.java:96. When you submit your job to a remote cluster, then 
you have to define the number of slots in the flink-conf.yaml file.

Cheers,
Till

​

On Fri, Feb 26, 2016 at 10:34 AM, #ZHANG SHUHAO# 
<szhang...@e.ntu.edu.sg> wrote:
Hi everyone,

I’m a student researcher working on Flink recently.

I’m trying out the flink-storm example project, version 0.10.2, 
flink-storm-examples, word-count-local.

But, I got the following error:
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Not 
enough free slots available to run the job. You can decrease the operator 
parallelism or increase the number of slots per TaskManager in the 
configuration. Task to schedule: < Attempt #0 (tokenizer (2/4)) @ (unassigned) 
- [SCHEDULED] > with groupID < b86aa7eb76c417c63afb577ea46ddd72 > in sharing 
group < SlotSharingGroup [0ea70b6a3927c02f29baf9de8f59575b, 
b86aa7eb76c417c63afb577ea46ddd72, cbc10505364629b45b28e79599c2f6b8] >. 
Resources available to scheduler: Number of instances=1, total number of 
slots=1, available slots=0
at 
org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:255)
at 
org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:131)
at 
org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:298)
at 
org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:458)
at 
org.apache.flink.runtime.executiongraph.ExecutionJobVertex.scheduleAll(ExecutionJobVertex.java:322)
at 
org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleForExecution(ExecutionGraph.java:686)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:992)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:972)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:972)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at 
scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


I notice that by default, task manager has only one slot, changing the setting 
in flink-conf does not help as I want to debug locally through 
FlinkLocalCluster (not to submit it locally).

I have tried the following:


import backtype.storm.Config;




Config config = new Config();
config.put(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS, 1024);
cluster.submitTopology(topologyId, config, ft);


But it’s not working.


Is there any way to work around?

Many thanks.

shuhao zhang (Tony).



Re: flink-storm FlinkLocalCluster issue

2016-02-26 Thread Till Rohrmann
Hi Shuhao,

the configuration you’re providing is only used for the storm compatibility
layer and not Flink itself. When you run your job locally, the
LocalFlinkMiniCluster should be started with as many slots as your maximum
degree of parallelism is in your topology. You can check this in
FlinkLocalCluster.java:96. When you submit your job to a remote cluster,
then you have to define the number of slots in the flink-conf.yaml file.

Cheers,
Till
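
Till's pointer to flink-conf.yaml above maps to a one-line entry; a sketch (the value 4 matches the example's maximum parallelism and is only illustrative):

```yaml
# flink-conf.yaml on each TaskManager host (restart the cluster after editing).
# With 4 slots per TaskManager, a topology whose maximum operator
# parallelism is 4 fits on a single TaskManager.
taskmanager.numberOfTaskSlots: 4
```

As Till notes, this file is only consulted for remote submission; FlinkLocalCluster derives its slot count from the topology itself.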

On Fri, Feb 26, 2016 at 10:34 AM, #ZHANG SHUHAO# <szhang...@e.ntu.edu.sg>
wrote:

> Hi everyone,
>
>
>
> I’m a student researcher working on Flink recently.
>
>
>
> I’m trying out the flink-storm example project, version 0.10.2,
> flink-storm-examples, word-count-local.
>
>
>
> But, I got the following error:
>
> org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException:
> Not enough free slots available to run the job. You can decrease the
> operator parallelism or increase the number of slots per TaskManager in the
> configuration. Task to schedule: < Attempt #0 (tokenizer (2/4)) @
> (unassigned) - [SCHEDULED] > with groupID <
> b86aa7eb76c417c63afb577ea46ddd72 > in sharing group < SlotSharingGroup
> [0ea70b6a3927c02f29baf9de8f59575b, b86aa7eb76c417c63afb577ea46ddd72,
> cbc10505364629b45b28e79599c2f6b8] >. Resources available to scheduler:
> Number of instances=1, total number of slots=1, available slots=0
>
> at
> org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:255)
>
> at
> org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:131)
>
> at
> org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:298)
>
> at
> org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:458)
>
> at
> org.apache.flink.runtime.executiongraph.ExecutionJobVertex.scheduleAll(ExecutionJobVertex.java:322)
>
> at
> org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleForExecution(ExecutionGraph.java:686)
>
> at
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:992)
>
> at
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:972)
>
> at
> org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:972)
>
> at
> scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
>
> at
> scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
>
> at
> akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
>
> at
> akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
>
> at
> scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>
> at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
>
> at
> scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
>
> at
> scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>
> at
> scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
>
>
>
>
>
> I notice that by default, task manager has only one slot, changing the
> setting in flink-conf does not help as I want to debug locally through
> FlinkLocalCluster (not to submit it locally).
>
>
>
> I have tried the following:
>
>
>
> import backtype.storm.Config;
>
>
>
>
>
> *Config config *= new Config();
> *config*.put(ConfigConstants.*TASK_MANAGER_NUM_TASK_SLOTS*, 1024);
> cluster.submitTopology(*topologyId*, *config*, ft);
>
>
>
>
>
> But it’s not working.
>
>
>
>
>
> Is there any way to work around?
>
>
>
> Many thanks.
>
>
>
> shuhao zhang (Tony).
>


flink-storm FlinkLocalCluster issue

2016-02-26 Thread #ZHANG SHUHAO#
Hi everyone,

I'm a student researcher working on Flink recently.

I'm trying out the flink-storm example project, version 0.10.2, 
flink-storm-examples, word-count-local.

But, I got the following error:
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Not 
enough free slots available to run the job. You can decrease the operator 
parallelism or increase the number of slots per TaskManager in the 
configuration. Task to schedule: < Attempt #0 (tokenizer (2/4)) @ (unassigned) 
- [SCHEDULED] > with groupID < b86aa7eb76c417c63afb577ea46ddd72 > in sharing 
group < SlotSharingGroup [0ea70b6a3927c02f29baf9de8f59575b, 
b86aa7eb76c417c63afb577ea46ddd72, cbc10505364629b45b28e79599c2f6b8] >. 
Resources available to scheduler: Number of instances=1, total number of 
slots=1, available slots=0
at 
org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:255)
at 
org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:131)
at 
org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:298)
at 
org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:458)
at 
org.apache.flink.runtime.executiongraph.ExecutionJobVertex.scheduleAll(ExecutionJobVertex.java:322)
at 
org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleForExecution(ExecutionGraph.java:686)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:992)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:972)
at 
org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:972)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at 
scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at 
scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)


I notice that by default, task manager has only one slot, changing the setting 
in flink-conf does not help as I want to debug locally through 
FlinkLocalCluster (not to submit it locally).

I have tried the following:


import backtype.storm.Config;




Config config = new Config();
config.put(ConfigConstants.TASK_MANAGER_NUM_TASK_SLOTS, 1024);
cluster.submitTopology(topologyId, config, ft);


But it's not working.


Is there any way to work around?

Many thanks.

shuhao zhang (Tony).
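
Given the scheduler's own hint above ("You can decrease the operator parallelism"), one workaround on 0.10.x is to cap every operator at the single available slot. A hedged sketch — the builder/cluster calls follow the 0.10 compatibility layer as this thread describes it, and WordSpout/BoltTokenizer/BoltCounter are placeholders for the example's actual components:

```java
// Sketch only: requires the flink-storm 0.10 dependency; not runnable standalone.
// All degrees of parallelism are forced to 1 so the job fits the single
// default slot of the local cluster.
FlinkTopologyBuilder builder = new FlinkTopologyBuilder();
builder.setSpout("source", new WordSpout(), 1);
builder.setBolt("tokenizer", new BoltTokenizer(), 1).shuffleGrouping("source");
builder.setBolt("count", new BoltCounter(), 1)
       .fieldsGrouping("tokenizer", new Fields("word"));

FlinkLocalCluster cluster = FlinkLocalCluster.getLocalCluster();
cluster.submitTopology("WordCount", new Config(), builder.createTopology());
```

This trades throughput for schedulability; upgrading to a version with the FlinkLocalCluster fix (1.0, per Stephan's reply) is the real solution.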


Re: Flink Storm

2015-12-07 Thread Madhire, Naveen
Hi Matthias, sorry for the confusion. I just used some simple code in the
count bolt to write the bolt output into a file and was not using
BoltFileSink.

try (OutputStream o = new FileOutputStream("/tmp/wordcount.txt", true)) {
    o.write((word + " " + count.toString() + "\n").getBytes());
} catch (IOException e) {
    e.printStackTrace();
}




Coming to BoltFileSink, I tried using cluster.shutdown at the end, which
stops the local cluster, but I am getting the exception below:

java.lang.Exception: TaskManager is shutting down.
at 
org.apache.flink.runtime.taskmanager.TaskManager.postStop(TaskManager.scala
:216)
at akka.actor.Actor$class.aroundPostStop(Actor.scala:475)
at 
org.apache.flink.runtime.taskmanager.TaskManager.aroundPostStop(TaskManager
.scala:119)
at 
akka.actor.dungeon.FaultHandling$class.akka$actor$dungeon$FaultHandling$$fi
nishTerminate(FaultHandling.scala:210)
at 
akka.actor.dungeon.FaultHandling$class.terminate(FaultHandling.scala:172)
at akka.actor.ActorCell.terminate(ActorCell.scala:369)
at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:462)
at akka.actor.ActorCell.systemInvoke(ActorCell.scala:478)
at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:279)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:257)
at akka.dispatch.Mailbox.run(Mailbox.scala:221)
at akka.dispatch.Mailbox.exec(Mailbox.scala:231)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:
1339)
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.jav
a:107)



I added the lines of code below to stop the local cluster at the end; the
code is the same as the flink-storm-examples one.

Utils.sleep(10 * 1000);

cluster.shutdown();




Thanks,
Naveen




On 12/5/15, 7:54 AM, "Matthias J. Sax" <mj...@apache.org> wrote:

>Hi Naveen,
>
>in you previous mail you mention that
>
>> Yeah, I did route the "count" bolt output to a file and I see the
>> output.
>> I can see the Storm and Flink output matching.
>
>How did you do this? Modifying the "count bolt" code? Or did you use
>some other bolt that consumes the "count bolt" output?
>
>One more thought: how much data do you have, and did you terminate your
>program before looking into the result file? I am asking because
>BoltFileSink uses a BufferedOutputWriter internally -- if you have only
>a few records in your result and do not terminate, the data might still
>be buffered. It would get flushed to disk when you terminate the program.
>
>Otherwise, I could not spot any issue with your code. And as Max
>mentioned that the console output worked for him using you program I am
>little puzzled what might go wrong in your setup. The program seems to
>be correct.
>
>
>-Matthias
>
>
>On 12/04/2015 08:55 PM, Madhire, Naveen wrote:
>> Hi Max,
>> 
>> I forgot to include the flink-storm-examples dependency in the application
>> to use BoltFileSink.
>> 
>> However, the file created by the BoltFileSink is empty. Is there any
>>other
>> stuff which I need to do to write it into a file by using BoltFileSink?
>> 
>> I am using the same code what you mentioned,
>> 
>> builder.setBolt("file", new BoltFileSink("/tmp/storm", new
>> OutputFormatter() {
>>@Override
>>public String format(Tuple tuple) {
>>   return tuple.toString();
>>}
>> }), 1).shuffleGrouping("count");
>> 
>> 
>> 
>> 
>> Thanks,
>> Naveen
>> 
>> 
>> 
>> 
>>>
>>> On 12/4/15, 5:36 AM, "Maximilian Michels" <m...@apache.org> wrote:
>>>
>>>> Hi Naveen,
>>>>
>>>> Were you using Maven before? The syncing of changes in the master
>>>> always takes a while for Maven. The documentation happened to be
>>>> updated before Maven synchronized. Building and installing manually
>>>> (what you did) solves the problem.
>>>>
>>>> Strangely, when I run your code on my machine with the latest
>>>> 1.0-SNAPSHOT I see a lot of output on my console.
>>>>
>>>> Here's the output: https://gist.github.com/mxm/98cd927866b193ce0f89
>>>>
>>>> Could you add a bolt which writes the Storm tuples to a file? Is that
>>>> file also empty?
>>>>
>>>> builder.setBolt("file", new BoltFileSink("/tmp/storm",

Re: Flink Storm

2015-12-04 Thread Maximilian Michels
Hi Naveen,

Were you using Maven before? The syncing of changes in the master
always takes a while for Maven. The documentation happened to be
updated before Maven synchronized. Building and installing manually
(what you did) solves the problem.

Strangely, when I run your code on my machine with the latest
1.0-SNAPSHOT I see a lot of output on my console.

Here's the output: https://gist.github.com/mxm/98cd927866b193ce0f89

Could you add a bolt which writes the Storm tuples to a file? Is that
file also empty?

builder.setBolt("file", new BoltFileSink("/tmp/storm", new OutputFormatter() {
   @Override
   public String format(Tuple tuple) {
  return tuple.toString();
   }
}), 1).shuffleGrouping("count");


Thanks,
Max
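
The empty-output-file symptom discussed in this thread usually means a buffered writer was never flushed — Matthias makes the same point elsewhere about BoltFileSink's internal buffering. A self-contained illustration with plain java.io, no Flink involved:

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class FlushDemo {
    // Returns the on-disk file size before and after flushing the writer.
    public static long[] sizes() throws IOException {
        Path f = Files.createTempFile("flushdemo", ".txt");
        try (BufferedWriter w = new BufferedWriter(new FileWriter(f.toFile()))) {
            w.write("hello\n");
            long before = Files.size(f);  // still sitting in the 8 KiB buffer
            w.flush();
            long after = Files.size(f);   // now on disk
            return new long[] {before, after};
        } finally {
            Files.delete(f);
        }
    }

    public static void main(String[] args) throws IOException {
        long[] s = sizes();
        System.out.println("before flush: " + s[0]
                + " bytes, after flush: " + s[1] + " bytes");
    }
}
```

Terminating the program cleanly closes the writer and flushes it, which is why a proper shutdown (or a sink that flushes) makes the file non-empty.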


Re: Flink Storm

2015-12-04 Thread Madhire, Naveen
Hi Max,

Yeah, I did route the "count" bolt output to a file and I see the output.
I can see the Storm and Flink output matching.

However, I am not able to use the BoltFileSink class in the 1.0-SNAPSHOT
which I built. I think it's better to wait for a day for the Maven sync to
happen so that I can directly use 1.0-SNAPSHOT in the dependency.

I have few Storm topologies, which I will try to run on Flink over the
next few days. I will let you know how that goes. Thanks :)


Thanks,
Naveen

On 12/4/15, 5:36 AM, "Maximilian Michels"  wrote:

>Hi Naveen,
>
>Were you using Maven before? The syncing of changes in the master
>always takes a while for Maven. The documentation happened to be
>updated before Maven synchronized. Building and installing manually
>(what you did) solves the problem.
>
>Strangely, when I run your code on my machine with the latest
>1.0-SNAPSHOT I see a lot of output on my console.
>
>Here's the output: https://gist.github.com/mxm/98cd927866b193ce0f89
>
>Could you add a bolt which writes the Storm tuples to a file? Is that
>file also empty?
>
>builder.setBolt("file", new BoltFileSink("/tmp/storm", new
>OutputFormatter() {
>   @Override
>   public String format(Tuple tuple) {
>  return tuple.toString();
>   }
>}), 1).shuffleGrouping("count");
>
>
>Thanks,
>Max



The information contained in this e-mail is confidential and/or proprietary to 
Capital One and/or its affiliates and may only be used solely in performance of 
work or services for Capital One. The information transmitted herewith is 
intended only for use by the individual or entity to which it is addressed. If 
the reader of this message is not the intended recipient, you are hereby 
notified that any review, retransmission, dissemination, distribution, copying 
or other use of, or taking of any action in reliance upon this information is 
strictly prohibited. If you have received this communication in error, please 
contact the sender and delete the material from your computer.



Flink Storm

2015-12-03 Thread Madhire, Naveen
Hi,

I am trying to execute few storm topologies using Flink, I have a question 
related to the documentation,

Can anyone tell me which of the below code is correct,

https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/storm_compatibility.html

https://ci.apache.org/projects/flink/flink-docs-master/apis/storm_compatibility.html


I want to use Flink-storm 1.0-SNAPSHOT version, I don’t see any createTopology 
method in FlinkTopology class.

Ex, cluster.submitTopology("WordCount", conf, 
FlinkTopology.createTopology(builder));

Is the documentation incorrect for the 1.0-SNAPSHOT, or maybe I'm missing 
something ;)

Thanks,
Naveen
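
For reference, the 1.0 call shape quoted above can be put together as follows — a sketch assuming the flink-storm 1.0-SNAPSHOT dependency, with the spout/bolt wiring elided:

```java
// Sketch only: needs the flink-storm 1.0-SNAPSHOT artifact on the classpath.
TopologyBuilder builder = new TopologyBuilder();  // Storm's own builder in 1.0
// ... builder.setSpout(...) / builder.setBolt(...) as in the Storm example ...

Config conf = new Config();
FlinkLocalCluster cluster = FlinkLocalCluster.getLocalCluster();
// In 1.0 the Storm TopologyBuilder is translated via FlinkTopology.createTopology:
cluster.submitTopology("WordCount", conf, FlinkTopology.createTopology(builder));
```

As Max's reply below explains, this createTopology method only exists in a freshly built 1.0-SNAPSHOT, which is why the two documentation pages differ.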




Re: Flink Storm

2015-12-03 Thread Maximilian Michels
Hi Naveen,

I think you're not using the latest 1.0-SNAPSHOT. Did you build from
source? If so, you need to build again because the snapshot API has
been updated recently.

Best regards,
Max

On Thu, Dec 3, 2015 at 6:40 PM, Madhire, Naveen
<naveen.madh...@capitalone.com> wrote:
> Hi,
>
> I am trying to execute few storm topologies using Flink, I have a question
> related to the documentation,
>
> Can anyone tell me which of the below code is correct,
>
> https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/storm_compatibility.html
>
> https://ci.apache.org/projects/flink/flink-docs-master/apis/storm_compatibility.html
>
>
> I want to use Flink-storm 1.0-SNAPSHOT version, I don’t see any
> createTopology method in FlinkTopology class.
>
> Ex, cluster.submitTopology("WordCount", conf,
> FlinkTopology.createTopology(builder));
>
> Is the documentation incorrect for the 1.0-SNAPSHOT or may be I missing
> something ;)
>
> Thanks,
> Naveen
>


Re: question on flink-storm-examples

2015-09-03 Thread Matthias J. Sax
>
>> >>   
>>  
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> >> >
>> >> > at
>> >> >
>> >>   
>>  
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >> >
>> >> > at java.lang.reflect.Method.invoke(Method.java:483)
>> >> >
>> >> > at
>> >> >
>> >>   
>>  
>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:437)
>> >> >
>> >> > ... 6 more
>> >> >
>> >> >
>> >> > The exception above occurred while trying to run your
>> command.
>> >> >
>> >> >
>> >> > Any idea how to fix this?
>> >> >
>> >> > On Tue, Sep 1, 2015 at 3:10 PM, Matthias J. Sax
>> >> > <mj...@informatik.hu-berlin.de
>> <mailto:mj...@informatik.hu-berlin.de>
>> >> <mailto:mj...@informatik.hu-berlin.de
>> <mailto:mj...@informatik.hu-berlin.de>>
>> >> <mailto:mj...@informatik.hu-berlin.de
>> <mailto:mj...@informatik.hu-berlin.de>
>> >> <mailto:mj...@informatik.hu-berlin.de
>> <mailto:mj...@informatik.hu-berlin.de>>>>
>> >> > wrote:
>> >> >
>> >> > Hi Jerry,
>> >> >
>> >> > WordCount-StormTopology uses a hard coded dop of 4.
>> If you
>> >> start up
>> >> > Flink in local mode (bin/start-local-streaming.sh),
>> you need
>> >> to increase
>> >> > the number of task slots to at least 4 in
>> conf/flink-conf.yaml
>> >> before
>> >> > starting Flink -> taskmanager.numberOfTaskSlots
>> >> >
>> >> > You should actually see the following exception in
>> >> > log/flink-...-jobmanager-...log
>> >> >
>> >> > > NoResourceAvailableException: Not enough free
>> slots available to
>> >> > run the job. You can decrease the operator
>> parallelism or increase
>> >> > the number of slots per TaskManager in the
>> configuration.
>> >> >
>> >> > WordCount-StormTopology does use
>> StormWordCountRemoteBySubmitter
>> >> > internally. So, you do use it already ;)
>> >> >
>> >> > I am not sure what you mean by "get rid of
>> KafkaSource"? It is
>> >> still in
>> >> > the code base. Which version to you use? In
>> >> flink-0.10-SNAPSHOT it is
>> >> > located in submodule "flink-connector-kafka" (which is
>> >> submodule of
>> >> > "flink-streaming-connector-parent" -- which is
>> submodule of
>> >> > "flink-streamping-parent").
>> >> >
>> >> >
>> >> > -Matthias
>> >> >
>> >> >
>> >> > On 09/01/2015 09:40 PM, Jerry Peng wrote:
>> >> > > Hello,
>> >> > >
>> >> > > I have some questions regarding how to run one of the
>> >> > > flink-storm-examples, the WordCountTopology.  How
>> should I
>> >> run the
>> >> > job?
>> >> > > On github its says I should just execute
>> >> > > bin/flink run example.jar but when I execute:
>> >> > >
>> >> > > bin/flink run WordCount-StormTopology.jar
>> >> > >
>> >> > > nothing happens.  What am I doing wrong? and How
>> can I run the
>> >> > > WordCounttopology via
>> StormWordCountRemoteBySubmitter?
>> >> > >
>> >> > > Also why did you guys get rid of the KafkaSource
>> class?  What is
>> >> > the API
>> >> > > now for subscribing to a kafka source?
>> >> > >
>> >> > > Best,
>> >> > >
>> >> > > Jerry
>> >> >
>> >> >
>> >>
>> >>
>> >
>>
>>
> 



signature.asc
Description: OpenPGP digital signature


Re: question on flink-storm-examples

2015-09-02 Thread Matthias J. Sax
Furthermore, WordCount-StormTopology sleeps for 5 seconds
> and tries to
> >> "kill" the job. However, because the job was never
> started, there is a
> >> NotAliveException, which is printed to stdout.
> >>
> >> -Matthias
> >>
> >>
> >>
> >> On 09/01/2015 10:26 PM, Jerry Peng wrote:
> >> > When I run WordCount-StormTopology I get the following
> exception:
> >> >
> >> > ~/flink/bin/flink run WordCount-StormTopology.jar
> >> > hdfs:///home/jerrypeng/hadoop/hadoop_dir/data/data.txt
> >> > hdfs:///home/jerrypeng/hadoop/hadoop_dir/data/results.txt
> >> >
> >> >
> org.apache.flink.client.program.ProgramInvocationException: The main
> >> > method caused an error.
> >> >
> >> > at
> >> >
> >>   
>  
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:452)
> >> >
> >> > at
> >> >
> >>   
>  
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:353)
> >> >
> >> > at
> org.apache.flink.client.program.Client.run(Client.java:278)
> >> >
> >> > at
> >>   
>  
> org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:631)
> >> >
> >> > at
> org.apache.flink.client.CliFrontend.run(CliFrontend.java:319)
> >> >
> >> > at
> >>   
>  
> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:954)
> >> >
> >> > at
> org.apache.flink.client.CliFrontend.main(CliFrontend.java:1004)
> >> >
> >> > Caused by: NotAliveException(msg:null)
> >> >
> >> > at
> >> >
> >>   
>  
> org.apache.flink.stormcompatibility.api.FlinkClient.killTopologyWithOpts(FlinkClient.java:209)
> >> >
> >> > at
> >> >
> >>   
>  
> org.apache.flink.stormcompatibility.api.FlinkClient.killTopology(FlinkClient.java:203)
> >> >
> >> > at
> >> >
> >>   
>  
> org.apache.flink.stormcompatibility.wordcount.StormWordCountRemoteBySubmitter.main(StormWordCountRemoteBySubmitter.java:80)
> >> >
> >> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
> Method)
> >> >
> >> > at
> >> >
> >>   
>  
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >> >
> >> > at
> >> >
> >>   
>  
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >> >
> >> > at java.lang.reflect.Method.invoke(Method.java:483)
> >> >
> >> > at
> >> >
> >>   
>  
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:437)
> >> >
> >> > ... 6 more
> >> >
> >> >
> >> > The exception above occurred while trying to run your
> command.
> >> >
> >> >
> >> > Any idea how to fix this?
> >> >
> >> > On Tue, Sep 1, 2015 at 3:10 PM, Matthias J. Sax
> >> > <mj...@informatik.hu-berlin.de
> <mailto:mj...@informatik.hu-berlin.de>
> >> <mailto:mj...@informatik.hu-berlin.de
> <mailto:mj...@informatik.hu-berlin.de>>
> >> <mailto:mj...@informatik.hu-berlin.de
> <mailto:mj...@informatik.hu-berlin.de>
> >> <mailto:mj...@informatik.hu-berlin.de
> <

question on flink-storm-examples

2015-09-01 Thread Jerry Peng
Hello,

I have some questions regarding how to run one of the flink-storm-examples,
the WordCountTopology.  How should I run the job?  On GitHub it says I
should just execute
bin/flink run example.jar, but when I execute:

bin/flink run WordCount-StormTopology.jar

nothing happens.  What am I doing wrong, and how can I run the
WordCountTopology via StormWordCountRemoteBySubmitter?

Also, why did you guys get rid of the KafkaSource class?  What is the API
now for subscribing to a Kafka source?

Best,

Jerry


Re: question on flink-storm-examples

2015-09-01 Thread Jerry Peng
When I run WordCount-StormTopology I get the following exception:

~/flink/bin/flink run WordCount-StormTopology.jar
hdfs:///home/jerrypeng/hadoop/hadoop_dir/data/data.txt
hdfs:///home/jerrypeng/hadoop/hadoop_dir/data/results.txt

org.apache.flink.client.program.ProgramInvocationException: The main method
caused an error.

at
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:452)

at
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:353)

at org.apache.flink.client.program.Client.run(Client.java:278)

at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:631)

at org.apache.flink.client.CliFrontend.run(CliFrontend.java:319)

at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:954)

at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1004)

Caused by: NotAliveException(msg:null)

at
org.apache.flink.stormcompatibility.api.FlinkClient.killTopologyWithOpts(FlinkClient.java:209)

at
org.apache.flink.stormcompatibility.api.FlinkClient.killTopology(FlinkClient.java:203)

at
org.apache.flink.stormcompatibility.wordcount.StormWordCountRemoteBySubmitter.main(StormWordCountRemoteBySubmitter.java:80)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:483)

at
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:437)

... 6 more


The exception above occurred while trying to run your command.

Any idea how to fix this?

On Tue, Sep 1, 2015 at 3:10 PM, Matthias J. Sax <
mj...@informatik.hu-berlin.de> wrote:

> Hi Jerry,
>
> WordCount-StormTopology uses a hard coded dop of 4. If you start up
> Flink in local mode (bin/start-local-streaming.sh), you need to increase
> the number of task slots to at least 4 in conf/flink-conf.yaml before
> starting Flink -> taskmanager.numberOfTaskSlots
>
> You should actually see the following exception in
> log/flink-...-jobmanager-...log
>
> > NoResourceAvailableException: Not enough free slots available to run the
> job. You can decrease the operator parallelism or increase the number of
> slots per TaskManager in the configuration.
>
> WordCount-StormTopology does use StormWordCountRemoteBySubmitter
> internally. So, you do use it already ;)
>
> I am not sure what you mean by "get rid of KafkaSource"? It is still in
> the code base. Which version to you use? In flink-0.10-SNAPSHOT it is
> located in submodule "flink-connector-kafka" (which is submodule of
> "flink-streaming-connector-parent" -- which is submodule of
> "flink-streamping-parent").
>
>
> -Matthias
>
>
> On 09/01/2015 09:40 PM, Jerry Peng wrote:
> > Hello,
> >
> > I have some questions regarding how to run one of the
> > flink-storm-examples, the WordCountTopology.  How should I run the job?
> > On github its says I should just execute
> > bin/flink run example.jar but when I execute:
> >
> > bin/flink run WordCount-StormTopology.jar
> >
> > nothing happens.  What am I doing wrong? and How can I run the
> > WordCounttopology via StormWordCountRemoteBySubmitter?
> >
> > Also why did you guys get rid of the KafkaSource class?  What is the API
> > now for subscribing to a kafka source?
> >
> > Best,
> >
> > Jerry
>
>


Re: question on flink-storm-examples

2015-09-01 Thread Jerry Peng
  >
> >>
>  sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >> >
> >> > at
> >> >
> >>
>  
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >> >
> >> > at java.lang.reflect.Method.invoke(Method.java:483)
> >> >
> >> > at
> >> >
> >>
>  
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:437)
> >> >
> >> > ... 6 more
> >> >
> >> >
> >> > The exception above occurred while trying to run your command.
> >> >
> >> >
> >> > Any idea how to fix this?
> >> >
> >> > On Tue, Sep 1, 2015 at 3:10 PM, Matthias J. Sax
> >> > <mj...@informatik.hu-berlin.de
> >> <mailto:mj...@informatik.hu-berlin.de>
> >> <mailto:mj...@informatik.hu-berlin.de
> >> <mailto:mj...@informatik.hu-berlin.de>>>
> >> > wrote:
> >> >
> >> > Hi Jerry,
> >> >
> >> > WordCount-StormTopology uses a hard coded dop of 4. If you
> >> start up
> >> > Flink in local mode (bin/start-local-streaming.sh), you need
> >> to increase
> >> > the number of task slots to at least 4 in conf/flink-conf.yaml
> >> before
> >> > starting Flink -> taskmanager.numberOfTaskSlots
> >> >
> >> > You should actually see the following exception in
> >> > log/flink-...-jobmanager-...log
> >> >
> >> > > NoResourceAvailableException: Not enough free slots
> available to
> >> > run the job. You can decrease the operator parallelism or
> increase
> >> > the number of slots per TaskManager in the configuration.
> >> >
> >> > WordCount-StormTopology does use
> StormWordCountRemoteBySubmitter
> >> > internally. So, you do use it already ;)
> >> >
> >> > I am not sure what you mean by "get rid of KafkaSource"? It is
> >> still in
> >> > the code base. Which version to you use? In
> >> flink-0.10-SNAPSHOT it is
> >> > located in submodule "flink-connector-kafka" (which is
> >> submodule of
> >> > "flink-streaming-connector-parent" -- which is submodule of
> >> > "flink-streamping-parent").
> >> >
> >> >
> >> > -Matthias
> >> >
> >> >
> >> > On 09/01/2015 09:40 PM, Jerry Peng wrote:
> >> > > Hello,
> >> > >
> >> > > I have some questions regarding how to run one of the
> >> > > flink-storm-examples, the WordCountTopology.  How should I
> >> run the
> >> > job?
> >> > > On github its says I should just execute
> >> > > bin/flink run example.jar but when I execute:
> >> > >
> >> > > bin/flink run WordCount-StormTopology.jar
> >> > >
> >> > > nothing happens.  What am I doing wrong? and How can I run
> the
> >> > > WordCounttopology via StormWordCountRemoteBySubmitter?
> >> > >
> >> > > Also why did you guys get rid of the KafkaSource class?
> What is
> >> > the API
> >> > > now for subscribing to a kafka source?
> >> > >
> >> > > Best,
> >> > >
> >> > > Jerry
> >> >
> >> >
> >>
> >>
> >
>
>


Re: question on flink-storm-examples

2015-09-01 Thread Matthias J. Sax
Hi Jerry,

WordCount-StormTopology uses a hard coded dop of 4. If you start up
Flink in local mode (bin/start-local-streaming.sh), you need to increase
the number of task slots to at least 4 in conf/flink-conf.yaml before
starting Flink -> taskmanager.numberOfTaskSlots
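
The configuration change above can be sketched as a small shell session. The file here is a throwaway sample created in the working directory; in a real installation you would edit conf/flink-conf.yaml inside the Flink distribution and then restart the local cluster (bin/start-local-streaming.sh):

```shell
# Sample stand-in for conf/flink-conf.yaml (contents assumed for illustration).
cat > flink-conf.yaml <<'EOF'
jobmanager.rpc.address: localhost
taskmanager.numberOfTaskSlots: 1
EOF

# Raise the slot count to match the topology's hard-coded dop of 4.
sed -i 's/^taskmanager.numberOfTaskSlots:.*/taskmanager.numberOfTaskSlots: 4/' flink-conf.yaml

grep 'numberOfTaskSlots' flink-conf.yaml
# -> taskmanager.numberOfTaskSlots: 4
```

After the edit, the local cluster has to be restarted for the new slot count to take effect.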

You should actually see the following exception in
log/flink-...-jobmanager-...log

> NoResourceAvailableException: Not enough free slots available to run the job. 
> You can decrease the operator parallelism or increase the number of slots per 
> TaskManager in the configuration.

WordCount-StormTopology does use StormWordCountRemoteBySubmitter
internally. So, you do use it already ;)

I am not sure what you mean by "get rid of KafkaSource"? It is still in
the code base. Which version do you use? In flink-0.10-SNAPSHOT it is
located in the submodule "flink-connector-kafka" (which is a submodule of
"flink-streaming-connector-parent" -- which is a submodule of
"flink-streaming-parent").


-Matthias


On 09/01/2015 09:40 PM, Jerry Peng wrote:
> Hello,
> 
> I have some questions regarding how to run one of the
> flink-storm-examples, the WordCountTopology.  How should I run the job? 
> On github its says I should just execute
> bin/flink run example.jar but when I execute:
> 
> bin/flink run WordCount-StormTopology.jar 
> 
> nothing happens.  What am I doing wrong? and How can I run the
> WordCounttopology via StormWordCountRemoteBySubmitter? 
> 
> Also why did you guys get rid of the KafkaSource class?  What is the API
> now for subscribing to a kafka source?
> 
> Best,
> 
> Jerry





Re: question on flink-storm-examples

2015-09-01 Thread Matthias J. Sax
>
>> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:353)
>> >
>> > at org.apache.flink.client.program.Client.run(Client.java:278)
>> >
>> > at
>> org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:631)
>> >
>> > at org.apache.flink.client.CliFrontend.run(CliFrontend.java:319)
>> >
>> > at
>> org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:954)
>> >
>> > at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1004)
>> >
>> > Caused by: NotAliveException(msg:null)
>> >
>> > at
>> >
>> 
>> org.apache.flink.stormcompatibility.api.FlinkClient.killTopologyWithOpts(FlinkClient.java:209)
>> >
>> > at
>> >
>> 
>> org.apache.flink.stormcompatibility.api.FlinkClient.killTopology(FlinkClient.java:203)
>> >
>> > at
>> >
>> 
>> org.apache.flink.stormcompatibility.wordcount.StormWordCountRemoteBySubmitter.main(StormWordCountRemoteBySubmitter.java:80)
>> >
>> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> >
>> > at
>> >
>> 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> >
>> > at
>> >
>> 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> >
>> > at java.lang.reflect.Method.invoke(Method.java:483)
>> >
>> > at
>> >
>> 
>> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:437)
>> >
>> > ... 6 more
>> >
>> >
>> > The exception above occurred while trying to run your command.
>> >
>> >
>> > Any idea how to fix this?
>> >
>> > On Tue, Sep 1, 2015 at 3:10 PM, Matthias J. Sax
>> > <mj...@informatik.hu-berlin.de
>> <mailto:mj...@informatik.hu-berlin.de>
>> <mailto:mj...@informatik.hu-berlin.de
>> <mailto:mj...@informatik.hu-berlin.de>>>
>> > wrote:
>> >
>> > Hi Jerry,
>> >
>> > WordCount-StormTopology uses a hard coded dop of 4. If you
>> start up
>> >     Flink in local mode (bin/start-local-streaming.sh), you need
>> to increase
>> > the number of task slots to at least 4 in conf/flink-conf.yaml
>> before
>> > starting Flink -> taskmanager.numberOfTaskSlots
>> >
>> > You should actually see the following exception in
>> > log/flink-...-jobmanager-...log
>> >
>> > > NoResourceAvailableException: Not enough free slots available to
>> > run the job. You can decrease the operator parallelism or increase
>> > the number of slots per TaskManager in the configuration.
>> >
>> > WordCount-StormTopology does use StormWordCountRemoteBySubmitter
>> > internally. So, you do use it already ;)
>> >
>> > I am not sure what you mean by "get rid of KafkaSource"? It is
>> still in
>> > the code base. Which version to you use? In
>> flink-0.10-SNAPSHOT it is
>> > located in submodule "flink-connector-kafka" (which is
>> submodule of
>> > "flink-streaming-connector-parent" -- which is submodule of
>> > "flink-streamping-parent").
>> >
>> >
>> > -Matthias
>> >
>> >
>> > On 09/01/2015 09:40 PM, Jerry Peng wrote:
>> > > Hello,
>> > >
>> > > I have some questions regarding how to run one of the
>> > > flink-storm-examples, the WordCountTopology.  How should I
>> run the
>> > job?
>> > > On github its says I should just execute
>> > > bin/flink run example.jar but when I execute:
>> > >
>> > > bin/flink run WordCount-StormTopology.jar
>> > >
>> > > nothing happens.  What am I doing wrong? and How can I run the
>> > > WordCounttopology via StormWordCountRemoteBySubmitter?
>> > >
>> > > Also why did you guys get rid of the KafkaSource class?  What is
>> > the API
>> > > now for subscribing to a kafka source?
>> > >
>> > > Best,
>> > >
>> > > Jerry
>> >
>> >
>>
>>
> 





Re: question on flink-storm-examples

2015-09-01 Thread Matthias J. Sax
> > org.apache.flink.stormcompatibility.api.FlinkClient.killTopologyWithOpts(FlinkClient.java:209)
> >
> > at
> >
> 
> org.apache.flink.stormcompatibility.api.FlinkClient.killTopology(FlinkClient.java:203)
> >
> > at
> >
> 
> org.apache.flink.stormcompatibility.wordcount.StormWordCountRemoteBySubmitter.main(StormWordCountRemoteBySubmitter.java:80)
> >
> > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >
> > at
> >
> 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> >
> > at
> >
> 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >
> > at java.lang.reflect.Method.invoke(Method.java:483)
> >
> > at
> >
> 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:437)
> >
> > ... 6 more
> >
> >
> > The exception above occurred while trying to run your command.
> >
> >
> > Any idea how to fix this?
> >
> > On Tue, Sep 1, 2015 at 3:10 PM, Matthias J. Sax
> > <mj...@informatik.hu-berlin.de
> <mailto:mj...@informatik.hu-berlin.de>
> <mailto:mj...@informatik.hu-berlin.de
> <mailto:mj...@informatik.hu-berlin.de>>>
> > wrote:
> >
> > Hi Jerry,
> >
> > WordCount-StormTopology uses a hard coded dop of 4. If you
> start up
> > Flink in local mode (bin/start-local-streaming.sh), you need
> to increase
> > the number of task slots to at least 4 in conf/flink-conf.yaml
> before
> > starting Flink -> taskmanager.numberOfTaskSlots
> >
> > You should actually see the following exception in
> > log/flink-...-jobmanager-...log
> >
> > > NoResourceAvailableException: Not enough free slots available to
> > run the job. You can decrease the operator parallelism or increase
> > the number of slots per TaskManager in the configuration.
> >
> > WordCount-StormTopology does use StormWordCountRemoteBySubmitter
> > internally. So, you do use it already ;)
> >
> > I am not sure what you mean by "get rid of KafkaSource"? It is
> still in
> > the code base. Which version to you use? In
> flink-0.10-SNAPSHOT it is
> > located in submodule "flink-connector-kafka" (which is
> submodule of
> > "flink-streaming-connector-parent" -- which is submodule of
> > "flink-streamping-parent").
> >
> >
> > -Matthias
> >
> >
> > On 09/01/2015 09:40 PM, Jerry Peng wrote:
> > > Hello,
> > >
> > > I have some questions regarding how to run one of the
> > > flink-storm-examples, the WordCountTopology.  How should I
> run the
> > job?
> > > On github its says I should just execute
> > > bin/flink run example.jar but when I execute:
> > >
> > > bin/flink run WordCount-StormTopology.jar
> > >
> > > nothing happens.  What am I doing wrong? and How can I run the
> > > WordCounttopology via StormWordCountRemoteBySubmitter?
> > >
> > > Also why did you guys get rid of the KafkaSource class?  What is
> > the API
> > > now for subscribing to a kafka source?
> > >
> > > Best,
> > >
> > > Jerry
> >
> >
> 
> 





Re: question on flink-storm-examples

2015-09-01 Thread Jerry Peng
> > On Tue, Sep 1, 2015 at 3:10 PM, Matthias J. Sax
> > <mj...@informatik.hu-berlin.de <mailto:mj...@informatik.hu-berlin.de>>
> > wrote:
> >
> > Hi Jerry,
> >
> > WordCount-StormTopology uses a hard coded dop of 4. If you start up
> > Flink in local mode (bin/start-local-streaming.sh), you need to
> increase
> > the number of task slots to at least 4 in conf/flink-conf.yaml before
> > starting Flink -> taskmanager.numberOfTaskSlots
> >
> > You should actually see the following exception in
> > log/flink-...-jobmanager-...log
> >
> > > NoResourceAvailableException: Not enough free slots available to
> > run the job. You can decrease the operator parallelism or increase
> > the number of slots per TaskManager in the configuration.
> >
> > WordCount-StormTopology does use StormWordCountRemoteBySubmitter
> > internally. So, you do use it already ;)
> >
> > I am not sure what you mean by "get rid of KafkaSource"? It is still
> in
> > the code base. Which version to you use? In flink-0.10-SNAPSHOT it is
> > located in submodule "flink-connector-kafka" (which is submodule of
> > "flink-streaming-connector-parent" -- which is submodule of
> > "flink-streamping-parent").
> >
> >
> > -Matthias
> >
> >
> > On 09/01/2015 09:40 PM, Jerry Peng wrote:
> > > Hello,
> > >
> > > I have some questions regarding how to run one of the
> > > flink-storm-examples, the WordCountTopology.  How should I run the
> > job?
> > > On github its says I should just execute
> > > bin/flink run example.jar but when I execute:
> > >
> > > bin/flink run WordCount-StormTopology.jar
> > >
> > > nothing happens.  What am I doing wrong? and How can I run the
> > > WordCounttopology via StormWordCountRemoteBySubmitter?
> > >
> > > Also why did you guys get rid of the KafkaSource class?  What is
> > the API
> > > now for subscribing to a kafka source?
> > >
> > > Best,
> > >
> > > Jerry
> >
> >
>
>


Re: question on flink-storm-examples

2015-09-01 Thread Matthias J. Sax
Yes. That is what I expected.

JobManager cannot start the job due to too few task slots. It logs the
exception NoResourceAvailableException (it is not shown on stdout; see
the "log" folder). There is no feedback to the Flink CLI that the job could not
be started.
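
Because the scheduler failure never reaches the CLI, the JobManager log is the place to look. A minimal sketch, using a fake log file as a stand-in (in a real installation the logs live under log/flink-*-jobmanager-*.log):

```shell
# Simulated JobManager log; the ERROR line mirrors the message quoted earlier.
cat > flink-jobmanager-sample.log <<'EOF'
INFO  org.apache.flink.runtime.jobmanager.JobManager - Scheduling job WordCount
ERROR NoResourceAvailableException: Not enough free slots available to run the job.
EOF

# Count matching lines to confirm the scheduler rejection is recorded.
grep -c 'NoResourceAvailableException' flink-jobmanager-sample.log
# -> 1
```

The same grep against the real jobmanager log would confirm whether the job failed for lack of task slots rather than for some other reason.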

Furthermore, WordCount-StormTopology sleeps for 5 seconds and tries to
"kill" the job. However, because the job was never started, there is a
NotAliveException, which is printed to stdout.

-Matthias



On 09/01/2015 10:26 PM, Jerry Peng wrote:
> When I run WordCount-StormTopology I get the following exception:
> 
> ~/flink/bin/flink run WordCount-StormTopology.jar
> hdfs:///home/jerrypeng/hadoop/hadoop_dir/data/data.txt
> hdfs:///home/jerrypeng/hadoop/hadoop_dir/data/results.txt
> 
> org.apache.flink.client.program.ProgramInvocationException: The main
> method caused an error.
> 
> at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:452)
> 
> at
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:353)
> 
> at org.apache.flink.client.program.Client.run(Client.java:278)
> 
> at org.apache.flink.client.CliFrontend.executeProgram(CliFrontend.java:631)
> 
> at org.apache.flink.client.CliFrontend.run(CliFrontend.java:319)
> 
> at org.apache.flink.client.CliFrontend.parseParameters(CliFrontend.java:954)
> 
> at org.apache.flink.client.CliFrontend.main(CliFrontend.java:1004)
> 
> Caused by: NotAliveException(msg:null)
> 
> at
> org.apache.flink.stormcompatibility.api.FlinkClient.killTopologyWithOpts(FlinkClient.java:209)
> 
> at
> org.apache.flink.stormcompatibility.api.FlinkClient.killTopology(FlinkClient.java:203)
> 
> at
> org.apache.flink.stormcompatibility.wordcount.StormWordCountRemoteBySubmitter.main(StormWordCountRemoteBySubmitter.java:80)
> 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 
> at java.lang.reflect.Method.invoke(Method.java:483)
> 
> at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:437)
> 
> ... 6 more
> 
> 
> The exception above occurred while trying to run your command.
> 
> 
> Any idea how to fix this?
> 
> On Tue, Sep 1, 2015 at 3:10 PM, Matthias J. Sax
> <mj...@informatik.hu-berlin.de <mailto:mj...@informatik.hu-berlin.de>>
> wrote:
> 
> Hi Jerry,
> 
> WordCount-StormTopology uses a hard coded dop of 4. If you start up
> Flink in local mode (bin/start-local-streaming.sh), you need to increase
> the number of task slots to at least 4 in conf/flink-conf.yaml before
> starting Flink -> taskmanager.numberOfTaskSlots
> 
> You should actually see the following exception in
> log/flink-...-jobmanager-...log
> 
> > NoResourceAvailableException: Not enough free slots available to
> run the job. You can decrease the operator parallelism or increase
> the number of slots per TaskManager in the configuration.
> 
> WordCount-StormTopology does use StormWordCountRemoteBySubmitter
> internally. So, you do use it already ;)
> 
> I am not sure what you mean by "get rid of KafkaSource"? It is still in
> the code base. Which version to you use? In flink-0.10-SNAPSHOT it is
> located in submodule "flink-connector-kafka" (which is submodule of
> "flink-streaming-connector-parent" -- which is submodule of
> "flink-streamping-parent").
> 
> 
> -Matthias
> 
> 
> On 09/01/2015 09:40 PM, Jerry Peng wrote:
> > Hello,
> >
> > I have some questions regarding how to run one of the
> > flink-storm-examples, the WordCountTopology.  How should I run the
> job?
> > On github its says I should just execute
> > bin/flink run example.jar but when I execute:
> >
> > bin/flink run WordCount-StormTopology.jar
> >
> > nothing happens.  What am I doing wrong? and How can I run the
> > WordCounttopology via StormWordCountRemoteBySubmitter?
> >
> > Also why did you guys get rid of the KafkaSource class?  What is
> the API
> > now for subscribing to a kafka source?
> >
> > Best,
> >
> > Jerry
> 
> 


