I have an idea for how we can reduce the impact of this class of problem.
If we can detect that we are running in a distributed environment, then in
order to use HBase you MUST have an hbase-site.xml.
I'll see if I can make a proof of concept.
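A minimal sketch of that kind of detection, using only the standard classloader API (the class name `HBaseConfigCheck` is hypothetical, not from the thread): since `HBaseConfiguration` loads its settings from classpath resources, simply checking whether any `hbase-site.xml` is visible on the classpath catches the misconfiguration early.

```java
// Hypothetical sketch: detect whether an hbase-site.xml is visible on the
// classpath at all. HBaseConfiguration loads its settings from classpath
// resources, so a missing file here means HBase silently falls back to
// its local-mode defaults (e.g. ZooKeeper on localhost).
public class HBaseConfigCheck {
    public static boolean hbaseSiteOnClasspath() {
        return HBaseConfigCheck.class.getClassLoader()
                .getResource("hbase-site.xml") != null;
    }

    public static void main(String[] args) {
        if (!hbaseSiteOnClasspath()) {
            System.err.println("WARNING: no hbase-site.xml on the classpath; "
                    + "HBase will use local-mode defaults.");
        }
    }
}
```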
Niels
On Wed, Oct 25, 2017 at 11:27 AM, Till Rohrmann wrote:
Hi Niels,
good to see that you solved your problem.
I’m not entirely sure how Pig does it, but I assume that there must be some
kind of HBase support where the HBase-specific files are explicitly sent to
the cluster, or that it copies the environment variables. For Flink,
supporting this kind of
I changed my cluster config (on all nodes) to include the HBase config dir
in the classpath.
Now everything works as expected.
This may very well be a misconfiguration of my cluster.
However ...
My current assessment:
Tools like Pig use the HBase config which has been specified on the LOCAL
Minor correction: The HBase jar files are on the classpath, just in a
different order.
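The ordering point matters because a classloader returns the first matching resource it finds. A quick way to see which copies of hbase-site.xml exist, and in which order they would be consulted, is this sketch (the class name `ListHBaseSites` is mine):

```java
import java.io.IOException;
import java.net.URL;
import java.util.Enumeration;

// Sketch: print every hbase-site.xml the classloader can see, in lookup
// order. When several copies are on the classpath (e.g. one bundled in a
// jar and one in /etc/hbase/conf), a plain getResource() call returns the
// first one listed here.
public class ListHBaseSites {
    public static void main(String[] args) throws IOException {
        Enumeration<URL> urls = ListHBaseSites.class.getClassLoader()
                .getResources("hbase-site.xml");
        while (urls.hasMoreElements()) {
            System.out.println(urls.nextElement());
        }
    }
}
```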
On Tue, Oct 24, 2017 at 11:18 AM, Niels Basjes wrote:
> I did some more digging.
>
> I added extra code to print both the environment variables and the
> classpath that is used by the
I did some more digging.
I added extra code to print both the environment variables and the
classpath that is used by the HBaseConfiguration to load the resource files.
I call this both locally and during startup of the job (i.e. these logs
arrive in the jobmanager.log on the cluster).
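The "extra code" for such a dump needs nothing beyond the JDK; a sketch (the class name `DebugDump` is mine) that can be called both locally and from the job's main method:

```java
import java.util.Map;
import java.util.TreeMap;

// Sketch: dump the environment variables (sorted, for easy diffing) and
// the JVM classpath, so the local output and the jobmanager.log output
// can be compared line by line.
public class DebugDump {
    public static void dump() {
        for (Map.Entry<String, String> e
                : new TreeMap<>(System.getenv()).entrySet()) {
            System.out.println("env: " + e.getKey() + "=" + e.getValue());
        }
        System.out.println("classpath: "
                + System.getProperty("java.class.path"));
    }

    public static void main(String[] args) {
        dump();
    }
}
```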
Summary of
Till, do you have some idea what is going on? I do not see any meaningful
difference between Niels's code and HBaseWriteStreamExample.java. There is
also a very similar issue on the mailing list: “Flink can't read hdfs
namenode logical url”
Piotrek
> On 22 Oct 2017, at 12:56, Niels Basjes wrote:
Hi,
Yes, all nodes have the same /etc/hbase/conf/hbase-site.xml that contains
the correct settings for HBase to find ZooKeeper.
That is why adding that file as an additional resource to the
configuration works.
I have created a very simple project that reproduces the problem on my
setup:
Is this /etc/hbase/conf/hbase-site.xml file present on all of the machines?
If yes, could you share your code?
> On 20 Oct 2017, at 16:29, Niels Basjes wrote:
>
> I looked at the logfiles from the Hadoop YARN web interface, i.e. actually
> looking in the jobmanager.log of the
Hi,
What do you mean by saying:
> When I open the logfiles on the Hadoop cluster I see this:
The error doesn’t come from Flink? Where do you execute
hbaseConfig.addResource(new Path("file:/etc/hbase/conf/hbase-site.xml"));
?
To me it seems like it is a problem with misconfigured HBase and
To facilitate you guys helping me I put this test project on github:
https://github.com/nielsbasjes/FlinkHBaseConnectProblem
Niels Basjes
On Fri, Oct 20, 2017 at 1:32 PM, Niels Basjes wrote:
> Hi,
>
> I have a Flink 1.3.2 application that I want to run on a Hadoop YARN
>
Hi,
I have a Flink 1.3.2 application that I want to run on a Hadoop YARN
cluster, where I need to connect to HBase.
What I have:
In my environment:
HADOOP_CONF_DIR=/etc/hadoop/conf/
HBASE_CONF_DIR=/etc/hbase/conf/
HIVE_CONF_DIR=/etc/hive/conf/
YARN_CONF_DIR=/etc/hadoop/conf/
In