I believe so. By default config.sh builds a HADOOP_CONF_DIR value that isn't a
readable filepath when the registrars are created later.
-Will

On Wednesday, July 12, 2017 1:02 AM, Aljoscha Krettek <[email protected]> wrote:

So this is a general bug in how Flink constructs the HADOOP_CONF_DIR path?

Best,
Aljoscha

On 11. Jul 2017, at 20:14, Will Walters <[email protected]> wrote:
I've managed to solve the problem I was having, which was with Flink not
properly finding my HDFS registrar. It turns out that config.sh runs the
following on the global variable $HADOOP_CONF_DIR:
if [ -d "$HADOOP_HOME/etc/hadoop" ]; then        # Its Hadoop 2.2+        
HADOOP_CONF_DIR="$HADOOP_CONF_DIR:$HADOOP_HOME/etc/hadoop"
This means that HADOOP_CONF_DIR ends up set to two filepaths concatenated with a
colon. The function that reads in the registrars passes this string into the
File() constructor, which fails because the combined string isn't a valid
filepath. Commenting out the HADOOP_CONF_DIR assignment above solved the problem
and allowed a successful submission.
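
To illustrate (the paths here are just examples, not from my cluster): if
HADOOP_CONF_DIR is already /etc/hadoop/conf and HADOOP_HOME is /opt/hadoop, the
script leaves you with

    $ echo "$HADOOP_CONF_DIR"
    /etc/hadoop/conf:/opt/hadoop/etc/hadoop

    # The colon-joined value is not itself a directory, so anything that treats
    # it as a single path (e.g. Java's new File(hadoopConfDir)) can't read it:
    $ test -d "$HADOOP_CONF_DIR" && echo "readable" || echo "not a valid path"
    not a valid path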
Thanks for your help!
Will

On Thursday, June 29, 2017 1:28 AM, Jean-Baptiste Onofré <[email protected]> wrote:

Good point, fair enough.

Regards
JB

On 06/29/2017 10:26 AM, Aljoscha Krettek wrote:
> I think it’s a bug because if you start a Flink cluster on bare-metal it 
> works, just when it’s started in YARN it doesn’t. And I feel that the way you 
> start your cluster should not affect how you can submit jobs to it.
> 
> Best,
> Aljoscha
> 
>> On 29. Jun 2017, at 10:15, Jean-Baptiste Onofré <[email protected]> wrote:
>>
>> Yes, it's the same with the spark runner using bin/spark-submit. From my 
>> standpoint, it's not a bug, it's a feature request.
>>
>> Regards
>> JB
>>
>> On 06/29/2017 10:12 AM, Aljoscha Krettek wrote:
>>> I also responded to a separate mail by Will. The problem is that currently 
>>> we cannot submit a job using the remote client to a Flink cluster that was 
>>> started on YARN. (It’s a bug or “feature” of how communication with a Flink 
>>> cluster from a client works.)
>>> The workaround for that is to use the bin/flink command to submit a Beam 
>>> fat-jar on a Flink YARN cluster.
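>>> For example, something along these lines (the jar path and main class are
>>> placeholders, not real names):
>>>
>>>    bin/flink run -c org.example.MyBeamPipeline \
>>>        /path/to/my-pipeline-bundled.jar \
>>>        --runner=FlinkRunner
>>>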
>>> Best,
>>> Aljoscha
>>>> On 29. Jun 2017, at 07:23, Jean-Baptiste Onofré <[email protected]> wrote:
>>>>
>>>> Hi Will,
>>>>
>>>> assuming you are using Beam 2.0.0, the Flink runner uses Flink 1.2.1 by 
>>>> default. So, I would recommend this version or 1.2.x.
>>>>
>>>> Regards
>>>> JB
>>>>
>>>> On 06/28/2017 10:39 PM, Will Walters wrote:
>>>>> Hello,
>>>>> I've been attempting to run Beam through Flink on a Yarn cluster and have 
>>>>> run into trouble with getting a job to submit, partly because of 
>>>>> incompatibility between versions. Does anyone know what versions of Beam 
>>>>> and Flink I should be using to give myself the best chance of finding 
>>>>> compatibility?
>>>>> Thank you,
>>>>> Will.
>>>>
>>>> -- 
>>>> Jean-Baptiste Onofré
>>>> [email protected]
>>>> http://blog.nanthrax.net
>>>> Talend - http://www.talend.com
>>
>> -- 
>> Jean-Baptiste Onofré
>> [email protected]
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
> 

-- 
Jean-Baptiste Onofré
[email protected]
http://blog.nanthrax.net
Talend - http://www.talend.com