Hi,
Thanks a lot for the resources.
On Dec 1, 2015 9:11 PM, "Fabian Hueske" wrote:
> Hi Madhu,
>
> checkout the following resources:
>
> - Apache Flink Blog: http://flink.apache.org/blog/index.html
> - Data Artisans Blog: http://data-artisans.com/blog/
> - Flink Forward
I think the way supervisor is used in the Docker scripts is a bit hacky. It
is simply started in the foreground and does nothing. Supervisor is
actually a really nice utility to start processes in Docker containers and
monitor them.
Nevertheless, supervisor also expects commands to stay in the foreground.
I opened an issue for it, and it will be fixed with the next 0.10.2 release.
@Robert: are you aware of another workaround for the time being?
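For illustration, this is roughly how supervisord is normally used to monitor processes in a container; the program name, command path, and settings below are assumptions, not taken from the Flink Docker scripts:

```ini
; supervisord runs as the container's main process and monitors its children.
[supervisord]
nodaemon=true

[program:jobmanager]
; The command must stay in the foreground for supervisord to monitor it; the
; stock jobmanager.sh backgrounds the JVM, which is the issue discussed above.
; "run-jobmanager-foreground.sh" is a hypothetical wrapper.
command=/opt/flink/bin/run-jobmanager-foreground.sh
autorestart=true
```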
On Thu, Dec 3, 2015 at 1:20 PM, LINZ, Arnaud
wrote:
> Hi,
> It works fine with that file renamed. Is there a way to specify its
Hi,
The batch job does not need to be HA.
I stopped everything, cleaned the temp files, added -Drecovery.mode=standalone,
and it seems to work now!
Strange, but good enough for me for now.
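The workaround above can be sketched as a small shell snippet; only -Drecovery.mode=standalone comes from the mail, and the submission command is illustrative:

```shell
# Pass the recovery mode through the JVM arguments picked up by the Flink
# scripts, so the batch job runs without HA/ZooKeeper recovery.
export JVM_ARGS="${JVM_ARGS:-} -Drecovery.mode=standalone"
echo "JVM_ARGS=$JVM_ARGS"

# Then submit the batch job as usual, e.g. (illustrative):
# flink run -m yarn-cluster ./batch-job.jar
```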
Thanks,
Arnaud
-----Original Message-----
From: Ufuk Celebi [mailto:u...@apache.org]
Sent: Thursday, December 3
Hi Niels,
Just got the results back from our CI. The build above would fail with a
Checkstyle error; I corrected that. I have also built the binaries for
your Hadoop version, 2.6.0.
Binaries:
https://drive.google.com/file/d/0BziY9U_qva1sZ1FVR3RWeVNrNzA/view?usp=sharing
Source:
Hi Welly,
the fix has been merged and should be available in 0.10-SNAPSHOT.
On Wed, Dec 2, 2015 at 10:12 AM, Maximilian Michels wrote:
> Hi Welly,
>
> We still have to decide on the next release date, but I would expect
> Flink 0.10.2 within the next few weeks. If you can't work
Oops... false alarm.
In fact, it does start another container, but that container exits immediately
because the job is not submitted to it but to the streaming one.
Log details:
Command =
# JVM_ARGS = -DCluster.Parallelisme=150 -Drecovery.mode=standalone
Hi Arnaud,
as long as you don't have HA activated for your batch jobs, HA shouldn't
have an influence on the batch execution. If it interferes, then you should
see additional task managers connected to the streaming cluster when you
execute the batch job. Could you check that? Furthermore, could
Hey Arnaud,
thanks for reporting this. I think Till’s suggestion will help to debug this
(checking whether a second YARN application has been started)…
You don’t want to run the batch application in HA mode, correct?
It sounds like the batch job is submitted with the same config keys. Could you
Hi,
I’ve tried to put that parameter in the JVM_ARGS, but not with much success.
# JVM_ARGS : -DCluster.Parallelisme=150 -Drecovery.mode=standalone
-Dyarn.properties-file.location=/tmp/flink/batch
(…)
2015:12:03 15:25:42 (ThrdExtrn) - INFO - (...)jobs.exec.ExecutionProcess$1.run
- > Found
As far as I know, "auto.offset.reset" only determines what to do if the offset
is not available or is out of bounds?
Vladimir
On Thursday, December 3, 2015 5:58 PM, Maximilian Michels
wrote:
Hi Vladimir,
You may supply Kafka consumer properties when you create the FlinkKafkaConsumer.
Properties
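The message above is cut off at "Properties". As a hedged sketch of what passing consumer properties typically looks like (the host names, group id, and the commented-out Flink call are assumptions, not values from this thread):

```java
import java.util.Properties;

// Hedged sketch of consumer properties for the Kafka 0.8 high-level consumer.
public class KafkaConsumerProps {

    static Properties consumerProperties() {
        Properties props = new Properties();
        props.setProperty("zookeeper.connect", "zk-host:2181");
        props.setProperty("group.id", "my-flink-group");
        // Kafka 0.8 expects "smallest"/"largest" (not "earliest"/"latest").
        props.setProperty("auto.offset.reset", "smallest");
        return props;
    }

    public static void main(String[] args) {
        // With Flink on the classpath, the properties would be handed straight
        // to the consumer source, e.g. (not compiled here):
        //   env.addSource(new FlinkKafkaConsumer082<>(
        //       "my-topic", new SimpleStringSchema(), consumerProperties()));
        System.out.println(consumerProperties().getProperty("auto.offset.reset"));
    }
}
```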
Hi,
I am trying to execute a few Storm topologies using Flink, and I have a
question related to the documentation.
Can anyone tell me which of the code snippets below is correct?
https://ci.apache.org/projects/flink/flink-docs-release-0.10/apis/storm_compatibility.html
Thanks Max, I took a look at making this change directly to the scripts. I
was initially thinking about making a separate script whose only
responsibility is to run the command in the foreground, so that the
flink-daemon.sh could delegate to this script. I didn't get very far into it,
though, mostly
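The idea described above, a separate script whose only responsibility is running a command in the foreground, might look like this minimal sketch (the name and usage are assumptions):

```shell
#!/bin/sh
# foreground-runner.sh (hypothetical name): exec replaces this shell with the
# given command, so the process stays in the foreground where a supervisor
# such as supervisord can monitor it directly.
exec "$@"
```

flink-daemon.sh could then delegate to it, e.g. `foreground-runner.sh java $JVM_ARGS ...`, instead of backgrounding the JVM itself.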
I see that Flink 0.10.1 now supports Keyed Schemas, which allow us to rely on
Kafka topics set to "compact" retention for data persistence.
In our topology we want to enable log compaction on some topics and read each
topic from the beginning when the topology starts or a component recovers.
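For illustration, the topic-level setting involved might look like this; `cleanup.policy` is a standard Kafka topic configuration key, while applying it to any particular topic here is an assumption:

```properties
# Per-topic Kafka configuration (applied when creating or altering the topic):
# keep the latest record for each key instead of deleting records by age, so a
# keyed topic acts as a changelog that can be re-read from the beginning.
cleanup.policy=compact
```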
Hi,
When I submit a job using RemoteEnvironment without setting parallelism, it
always uses only one task slot.
Is this a bug or intentional? I thought it was supposed to default to the
server's configuration (parallelism.default=24 in my case).
I'm using Flink in Standalone cluster mode.
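A minimal sketch of a workaround for the behavior described above: set the parallelism explicitly on the remote environment. The host, port, jar path, and parallelism value are assumptions, and the Flink 0.10 client must be on the classpath:

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.java.ExecutionEnvironment;

// Hedged sketch of a remote batch submission against a standalone cluster.
public class RemoteParallelism {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment
                .createRemoteEnvironment("jobmanager-host", 6123, "/path/to/job.jar");

        // Set the parallelism explicitly on the client; otherwise the remote
        // submission may run with a client-side default of 1 instead of the
        // cluster's parallelism.default.
        env.setParallelism(24);

        env.fromElements(1, 2, 3, 4)
           .map(new MapFunction<Integer, Integer>() {
               @Override
               public Integer map(Integer value) {
                   return value * 2;
               }
           })
           .print();
    }
}
```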
Hi Vladimir!
The Kafka Consumer can start from any offset internally (it does that, for
example, when recovering from a failure).
It should be fairly straightforward to set that offset field initially from a
parameter. The FlinkKafkaConsumer is part of the user jar anyway. If you
want, you can give it a
Hi Vladimir,
Did you pass the properties to the FlinkKafkaConsumer?
Cheers,
Max
On Thu, Dec 3, 2015 at 7:06 PM, Vladimir Stoyak wrote:
> Gave it a try, but does not seem to help. Is it working for you?
>
> Thanks
>
> Sent from my iPhone
>
>> On Dec 3, 2015, at 6:11 PM,
Hi Naveen,
I think you're not using the latest 1.0-SNAPSHOT. Did you build from
source? If so, you need to build again because the snapshot API has
been updated recently.
Best regards,
Max
On Thu, Dec 3, 2015 at 6:40 PM, Madhire, Naveen
wrote:
> Hi,
>
> I am
Gave it a try, but does not seem to help. Is it working for you?
Thanks
Sent from my iPhone
> On Dec 3, 2015, at 6:11 PM, Vladimir Stoyak wrote:
>
> As far as I know, "auto.offset.reset" only determines what to do if the offset
> is not available or is out of bounds?
>
> Vladimir
>
>
> On