Additionally, if you have any questions about contributing, please send a mail
to the dev mailing list.
Regards,
Chiwan Park
> On Aug 27, 2015, at 2:11 PM, Chiwan Park wrote:
>
> Hi Naveen,
>
> There is a guide document [1] about contributing on the homepage. Please read
> it first before contributing.
Hi Naveen,
There is a guide document [1] about contributing on the homepage. Please read it
first before contributing. Maybe the document on coding guidelines [2] would also
be helpful to you. You can find some issues [3] in JIRA to start contributing to
Flink. The issues are labeled as `starter`, `newbie`,
Hi,
I've set up Flink on my local Linux machine and ran a few examples as well.
I've also set up the IntelliJ IDE as the coding environment. Can anyone please
let me know if there are any beginner tasks I can look at for contributing to
the Apache Flink codebase.
I am comfortable in Java and Scala
I think that is a very good idea.
Originally, we wrapped the Hadoop FS classes for convenience (they were
changing, we wanted to keep the system independent of Hadoop), but these
are no longer relevant reasons, in my opinion.
Let's start with your proposal and see if we can actually get rid of th
Hi,
I’ve noticed that when you use org.apache.flink.core.fs.FileSystem to write
into an HDFS file, calling
org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.create() returns a
HadoopDataOutputStream that wraps an org.apache.hadoop.fs.FSDataOutputStream
(under its org.apache.hadoop.hdfs.clie
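The wrapping described above is a plain delegation pattern. Here is a minimal, self-contained sketch of that pattern (the class name is hypothetical, and it wraps a generic java.io.OutputStream instead of the real Hadoop FSDataOutputStream so that it compiles without Hadoop on the classpath):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical stand-in for a wrapper like Flink's HadoopDataOutputStream:
// it simply forwards every call to the wrapped stream.
class WrappingDataOutputStream extends OutputStream {
    private final OutputStream wrapped;

    WrappingDataOutputStream(OutputStream wrapped) {
        this.wrapped = wrapped;
    }

    @Override
    public void write(int b) throws IOException {
        wrapped.write(b); // pure delegation, no extra logic
    }

    @Override
    public void flush() throws IOException {
        wrapped.flush();
    }

    @Override
    public void close() throws IOException {
        wrapped.close();
    }
}

public class WrapperDemo {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream target = new ByteArrayOutputStream();
        try (OutputStream out = new WrappingDataOutputStream(target)) {
            out.write("hello".getBytes("UTF-8"));
        }
        System.out.println(target.toString("UTF-8")); // prints "hello"
    }
}
```

The downside of such a wrapper, as the thread suggests, is that methods existing only on the wrapped class stay hidden unless the wrapper re-exposes them.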
The only problem with writing to the temp directory is that the file will be
gone after a restart. While this is not important for PIDs because the system
has been restarted anyway, it can actually be a problem if you want to
resume a YARN cluster after you have restarted your system.
On Wed, Aug 26, 2015 at 3:34
Nice. More configuration options :)
On Wed, Aug 26, 2015 at 5:58 PM, Robert Metzger wrote:
> Therefore, my change will include a configuration option to set a custom
> location for the file.
>
> On Wed, Aug 26, 2015 at 5:55 PM, Maximilian Michels wrote:
>>
>> The only problem with writing the te
Therefore, my change will include a configuration option to set a custom
location for the file.
On Wed, Aug 26, 2015 at 5:55 PM, Maximilian Michels wrote:
> The only problem with writing the temp is that it will be gone after a
> restart. While this is not important for PIDs because the system h
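The custom-location option under discussion could be resolved with a simple fallback chain; a sketch, where the option key and the fallback order (configured location, then user home, then temp dir) are assumptions, not the actual Flink behavior:

```java
import java.io.File;
import java.util.Map;

public class PropertiesFileLocator {
    // Hypothetical option key; the real Flink key may differ.
    static final String LOCATION_KEY = "yarn.properties-file.location";

    /** Picks the directory for the session properties file:
     *  configured location first, then the user home, then the temp dir. */
    static File resolveDir(Map<String, String> config) {
        String configured = config.get(LOCATION_KEY);
        if (configured != null) {
            return new File(configured);
        }
        String home = System.getProperty("user.home");
        if (home != null) {
            return new File(home);
        }
        return new File(System.getProperty("java.io.tmpdir"));
    }

    public static void main(String[] args) {
        // With no configuration, this falls back to the user home.
        System.out.println(resolveDir(java.util.Collections.<String, String>emptyMap()));
    }
}
```

Preferring the user home over the temp dir means the file survives a system restart, which matters for resuming a YARN session.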
Yep. I think the start-*.sh scripts are also writing the PID to tmp.
On Wed, Aug 26, 2015 at 3:30 PM, Maximilian Michels wrote:
> Can't we write the file to the system's temp directory or the user
> home? IMHO this is more standard practice for this type of session
> information.
>
> On Wed, Au
Can't we write the file to the system's temp directory or the user
home? IMHO this is more standard practice for this type of session
information.
On Wed, Aug 26, 2015 at 3:25 PM, Robert Metzger wrote:
> Great ;)
>
> Not yet, but you are the second user to request this.
> I think I'll put the fi
Great ;)
Not yet, but you are the second user to request this.
I think I'll put the file somewhere else now.
On Wed, Aug 26, 2015 at 3:19 PM, LINZ, Arnaud
wrote:
> Ooops… Seems it was rather a write problem on the conf dir…
>
> Sorry, it works!
>
>
>
> BTW, it’s not really nice to have an appli
Hi Arnaud,
I think my answer to Gwenhaël could also be helpful to you:
are you using the one-yarn-cluster-per-job mode of Flink? I.e., are you
starting your Flink job with (from the docs):
flink run -m yarn-cluster -yn 4 -yjm 1024 -ytm 4096
./examples/flink-java-examples-0.10-SNAPSHOT-WordCount.ja
Ooops… Seems it was rather a write-permission problem on the conf dir…
Sorry, it works!
BTW, it’s not really nice to have an application write to the configuration dir;
it’s often a root-protected directory under /usr/lib/flink. Is there a parameter
to put that file elsewhere?
From: Robert Metzger [mailto:rmet
Hi Arnaud,
usually, you don't have to specify the JobManager address manually
with the -m argument, because it is read from the
conf/.yarn-session.properties file.
Give me a few minutes to reproduce the issue.
On Wed, Aug 26, 2015 at 2:39 PM, LINZ, Arnaud
wrote:
> Hi,
> Using la
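The lookup Robert describes amounts to reading a Java properties file and falling back to it only when no explicit address was given; a sketch, where the property key `jobManager` and the precedence (an explicit -m always wins) are assumptions about the behavior, not the actual Flink code:

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

public class JobManagerLookup {
    // Hypothetical key; the real entry in .yarn-session.properties may differ.
    static final String JM_KEY = "jobManager";

    /** Returns the explicit -m argument if given, otherwise the address
     *  stored in the session properties file (or null if absent). */
    static String resolve(Reader propertiesFile, String mArgument) throws IOException {
        if (mArgument != null) {
            return mArgument; // an explicit -m always wins
        }
        Properties props = new Properties();
        props.load(propertiesFile);
        return props.getProperty(JM_KEY);
    }

    public static void main(String[] args) throws IOException {
        Reader fakeFile = new StringReader(JM_KEY + "=somehost:6123\n");
        System.out.println(resolve(fakeFile, null)); // prints "somehost:6123"
    }
}
```

If the session script writes the file under a different name or location (e.g. when -nm is used), this lookup finds nothing, which is exactly the symptom reported below.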
Hi,
Using the latest nightly build, it seems that if you call yarn-session.sh with
the -nm option to give a nice application name, then you cannot submit a job
with flink run without specifying the ever-changing -m address, since flink run
no longer finds it.
Regards,
Arnaud
Hi Flavio,
can you share a minimal version of your program to reproduce the issue?
On Wed, Aug 26, 2015 at 10:36 AM, Flavio Pompermaier
wrote:
> I'm running my job from my Eclipse and I don't register any Kryo class in
> the env.
>
> On Wed, Aug 26, 2015 at 10:34 AM, Stephan Ewen wrote:
>
>> H
I'm running my job from Eclipse and I don't register any Kryo classes in
the env.
On Wed, Aug 26, 2015 at 10:34 AM, Stephan Ewen wrote:
> Hi Flavio!
>
> That exception means that the Kryo serializers are not in sync. The
> writers have registered types that the readers do not know.
>
> Two poss
Hi Flavio!
That exception means that the Kryo serializers are not in sync. The writers
have registered types that the readers do not know.
Two possible reasons that I can think of off the top of my head:
1) Do you manually register types? Are you registering new types in the
middle of your prog
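The mismatch Stephan describes comes from the fact that Kryo assigns numeric IDs to classes in registration order, so a writer and a reader that register different types, or the same types in a different order, map the same ID to different classes. A toy illustration using a plain list instead of the real Kryo API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of order-sensitive registration: each registered class gets
// the next integer ID, the way Kryo assigns registration IDs.
// This is NOT the real Kryo API.
class ToyRegistry {
    private final List<Class<?>> byId = new ArrayList<Class<?>>();

    int register(Class<?> type) {
        byId.add(type);
        return byId.size() - 1;
    }

    Class<?> lookup(int id) {
        return byId.get(id);
    }
}

public class RegistrationDemo {
    public static void main(String[] args) {
        ToyRegistry writer = new ToyRegistry();
        writer.register(String.class);            // gets id 0
        int id = writer.register(Integer.class);  // gets id 1

        // The reader registers the same types, but in a different order.
        ToyRegistry reader = new ToyRegistry();
        reader.register(Integer.class);           // gets id 0
        reader.register(String.class);            // gets id 1

        // The id the writer used for Integer resolves to String on the reader:
        System.out.println(reader.lookup(id)); // prints "class java.lang.String"
    }
}
```

This is why both sides must register exactly the same types in exactly the same order; registering a type mid-program on only one side desynchronizes every ID after it.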
Hi !
Yes, we’re starting our job with “flink run --jobmanager yarn-cluster”.
So that’s perfect; we’ll use your fix and, when it’s out, we’ll switch to Flink
0.9.1.
B.R.
From: Aljoscha Krettek [mailto:aljos...@apache.org]
Sent: Tuesday, August 25, 2015 19:25
To: user@flink.apache.org
Subject: Re: Appli
Hi to all,
I'm running a job (with Flink 0.10-SNAPSHOT) that reads some parquet-thrift
objects and then performs some joins, and I receive the following
exception:
Caused by: java.io.IOException: Thread 'SortMerger spilling thread'
terminated due to an exception: Encountered unregistered class