Super nice to hear :-)
On Mon, Nov 9, 2015 at 4:48 PM, Niels Basjes wrote:
Apparently I just had to wait a bit longer for the first run.
Now I'm able to package the project in about 7 minutes.
Current status: I am now able to access HBase from within Flink on a
Kerberos-secured cluster.
Cleaning up the patch so I can submit it in a few days.
On Sat, Nov 7, 2015 at
Hi Niels!
Usually, you simply build the binaries by invoking "mvn -DskipTests clean
package" in the root flink directory. The resulting program should be in
the "build-target" directory.
If the program gets stuck, let us know where and what the last message on
the command line is.
Please be
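For reference, the build steps described above can be sketched as a short shell session (paths and the clone URL are illustrative; run the build from the root of a local Flink checkout):

```shell
# Clone and build Flink from source, skipping the tests.
git clone https://github.com/apache/flink.git
cd flink
mvn -DskipTests clean package

# The assembled binary distribution should land in build-target/.
ls build-target/bin
```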
Hi,
Excellent.
What you can help me with are the commands to build the binary distribution
from source.
I tried it last Thursday and the build seemed to get stuck at some point
(at the end of/just after building the dist module).
I haven't been able to figure out why yet.
Niels
On 5 Nov 2015
The single shading step on my machine (SSD, 10 GB RAM) takes about 45
seconds. On an HDD it may take significantly longer, but should really not
be more than 10 minutes.
Is your maven build always stuck in that stage (flink-dist) showing a long
list of dependencies (saying including org.x.y, including
Usually, if all the dependencies are being downloaded, i.e., on the first
build, it'll likely take 30-40 minutes. Subsequent builds might take 10
minutes approx. [I have the same PC configuration.]
-- Sachin Goel
Computer Science, IIT Delhi
m. +91-9871457685
On Sun, Nov 8, 2015 at 2:05 AM, Niels
I created https://issues.apache.org/jira/browse/FLINK-2977
On Thu, Nov 5, 2015 at 12:25 PM, Robert Metzger wrote:
Hi Niels,
thank you for analyzing the issue so thoroughly. I agree with you. It seems
that HDFS and HBase are using their own tokens, which we need to transfer
from the client to the YARN containers. We should be able to port the fix
from Spark (which they got from Storm) into our YARN client.
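A minimal sketch of what the client side of such a port might look like, assuming Hadoop's `FileSystem#addDelegationTokens` API (the class name and the "yarn" renewer principal here are illustrative of the approach, not Flink's actual implementation):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;

// Sketch: collect HDFS delegation tokens on the client so they can be
// shipped to the YARN containers (mirrors the Spark/Storm approach).
public class TokenFetchSketch {
    public static Credentials fetchHdfsTokens(Configuration conf) throws Exception {
        Credentials credentials = new Credentials();
        FileSystem fs = FileSystem.get(conf);
        // "yarn" is an illustrative renewer principal.
        fs.addDelegationTokens("yarn", credentials);
        // The credentials would then be serialized into the container
        // launch context when the YARN client starts the containers.
        return credentials;
    }
}
```

A fix along these lines would need an equivalent step for the HBase token as well, which requires a live cluster to exercise.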
I think
Thank you for looking into the problem, Niels. Let us know if you need
anything. We would be happy to merge a pull request once you have verified
the fix.
On Thu, Nov 5, 2015 at 1:38 PM, Niels Basjes wrote:
Hi Niels,
You're welcome. Some more information on how this would be configured:
In the kdc.conf, there are two variables:
max_life = 2h 0m 0s
max_renewable_life = 7d 0h 0m 0s
max_life is the maximum life of the current ticket. However, it may be
renewed up to a time span of
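In concrete terms, with the settings above a client could request a short-lived but renewable ticket and refresh it periodically (command-line sketch; the principal name is illustrative):

```shell
# Request a ticket valid for 2 hours, renewable for up to 7 days.
kinit -l 2h -r 7d niels@EXAMPLE.COM

# Before the 2-hour lifetime expires, renew the ticket. This can be
# repeated until max_renewable_life is reached, after which a fresh
# kinit (with credentials) is needed.
kinit -R
```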
Hi,
Thanks for your feedback.
So I guess I'll have to talk to the security guys about having special
Kerberos ticket expiry times for these types of jobs.
Niels Basjes
On Fri, Oct 23, 2015 at 11:45 AM, Maximilian Michels wrote:
Hi Niels,
Thank you for your question. Flink relies entirely on the Kerberos
support of Hadoop. So your question could also be rephrased to "Does
Hadoop support long-term authentication using Kerberos?". And the
answer is: Yes!
While Hadoop uses Kerberos tickets to authenticate users with
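For a long-running process, the usual Hadoop-level mechanism is a keytab login via `UserGroupInformation`, which lets the process re-authenticate itself indefinitely instead of relying on a ticket cache that expires. A minimal sketch (the principal and keytab path are illustrative):

```java
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginSketch {
    public static void main(String[] args) throws Exception {
        // Log in from a keytab so the process can renew its own
        // credentials for as long as it runs.
        UserGroupInformation.loginUserFromKeytab(
                "niels@EXAMPLE.COM",                    // illustrative principal
                "/etc/security/keytabs/niels.keytab");  // illustrative path
        System.out.println("Logged in as: "
                + UserGroupInformation.getLoginUser().getUserName());
    }
}
```

Running this requires a Hadoop installation and a reachable KDC, so it is a sketch of the mechanism rather than a standalone program.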
Hi,
I want to write a long-running (i.e., never stopped) streaming Flink
application on a Kerberos-secured Hadoop/YARN cluster. My application needs
to do things with files on HDFS and HBase tables on that cluster, so having
the correct Kerberos tickets is very important. The stream is to be