I created https://issues.apache.org/jira/browse/FLINK-2977
On Thu, Nov 5, 2015 at 12:25 PM, Robert Metzger wrote:
> Hi Niels,
> thank you for analyzing the issue so thoroughly. I agree with you. It seems
> that HDFS and HBase are using their own tokens which we need to
Hi,
these are some interesting ideas.
I have some thoughts, though, about the current implementation.
1. With Schema and Field you are basically re-implementing RowTypeInfo, so that
should not be required. At most it could be an easier way to create a RowTypeInfo.
2. Right now, in Flink the
Hi Nick,
both JIRA and the mailing list are good. In this case I'd say JIRA would be
better because then everybody has the full context of the discussion.
The issue is fixed in 0.10, which is not yet released.
You can work around the issue by implementing a custom SourceFunction which
returns
Hi Niels,
thank you for analyzing the issue so thoroughly. I agree with you. It seems
that HDFS and HBase are using their own tokens which we need to transfer
from the client to the YARN containers. We should be able to port the fix
from Spark (which they got from Storm) into our YARN client.
I think
Thank you Robert, I'll give this a spin -- obvious now that you point
it out. I'll go ahead and continue the line of inquiry on the JIRA.
-n
On Thu, Nov 5, 2015 at 4:18 AM, Robert Metzger wrote:
> Hi Nick,
>
> both JIRA and the mailing list are good. In this case I'd say
Hi Robert,
It seems "type" was what I needed. It also looks like the test
jar has an undeclared dependency. In the end, the following allowed me
to use TestStreamEnvironment for my integration test. Thanks a lot!
-n
  <dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-streaming-core</artifactId>
    <type>test-jar</type>
  </dependency>
Thanks Stephan, I'll check that out in the morning. Generally speaking, it
would be great to have some single-JVM example tests for those of us
getting started. Following the example of WindowingIntegrationTest is
mostly working, though reusing my single sink instance with its static
collection
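The shared-static-collection sink pattern mentioned above can be sketched in plain Java, without any Flink dependencies. The class and method names here are hypothetical stand-ins for a real SinkFunction; the point is that the collection is static (shared across all sink instances in the JVM), so it must be cleared between test runs:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Hypothetical stand-in for a streaming sink that gathers results into a
// static list so a single-JVM integration test can assert on them.
public class CollectingSink {
    // Static so every parallel sink instance in the same JVM shares it;
    // wrapped in a synchronized list because sinks may run concurrently.
    private static final List<String> RESULTS =
            Collections.synchronizedList(new ArrayList<>());

    // Called once per record, analogous to a sink's invoke() method.
    public void invoke(String value) {
        RESULTS.add(value);
    }

    // Must be called between tests: reusing the sink without clearing
    // leaks results from the previous run into the next assertion.
    public static void clear() {
        RESULTS.clear();
    }

    // Snapshot copy, so callers can't mutate the shared list by accident.
    public static List<String> results() {
        return new ArrayList<>(RESULTS);
    }

    public static void main(String[] args) {
        CollectingSink sink = new CollectingSink();
        sink.invoke("a");
        sink.invoke("b");
        System.out.println(CollectingSink.results()); // prints [a, b]
        CollectingSink.clear();
        System.out.println(CollectingSink.results().size()); // prints 0
    }
}
```

The clear() call is exactly the step that is easy to forget when reusing one sink instance across several tests.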
While discussing with my colleagues about the issue today, we came up with
another approach to resolve the issue:
d) Upload the job jar to HDFS (or another FS) and trigger the execution of
the jar using an HTTP request to the web interface.
We could add some tooling into the /bin/flink client to
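Sketched as shell tooling, option d) might look like the following. The HDFS staging path and the web-interface endpoint are hypothetical placeholders for the proposal, not an actual Flink API; the cluster-touching commands are left commented out:

```shell
#!/bin/sh
# Hypothetical sketch of option d): stage the job jar on a shared FS,
# then ask the JobManager web interface to run it over HTTP.
JOBMANAGER="http://jobmanager:8081"   # web interface address (placeholder)
JAR="job.jar"                         # job jar to submit (placeholder)

# Step 1: upload the jar to HDFS (commented out: needs a live cluster)
# hdfs dfs -put "$JAR" "/flink/staging/$JAR"

# Step 2: trigger execution via HTTP (commented out: endpoint is hypothetical)
# curl -X POST "$JOBMANAGER/jars/$JAR/run"

# Show the request this tooling would send.
echo "would POST to $JOBMANAGER/jars/$JAR/run"
```

The appeal of this approach is that the client never needs a direct RPC connection to the JobManager, only HTTP access to the web interface.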
Thank you for looking into the problem, Niels. Let us know if you need
anything. We would be happy to merge a pull request once you have verified
the fix.
On Thu, Nov 5, 2015 at 1:38 PM, Niels Basjes wrote:
> I created https://issues.apache.org/jira/browse/FLINK-2977
>
> On
Hi,
I checked and this setting has been set to a limited port range of only 100
port numbers.
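(For reference: in Flink this restriction is configured in flink-conf.yaml via the yarn.application-master.port option, which accepts a range. The range below is only an illustrative placeholder, not this cluster's actual setting.)

```yaml
# flink-conf.yaml: restrict the YARN application master / web interface
# to a fixed port range (values are placeholders)
yarn.application-master.port: 50100-50200
```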
I tried to find the actual port the AM is running on and couldn't find it
(I'm not the admin on that cluster).
The url to the AM that I use to access it always looks like this: