Hi guys,
I am facing a problem while connecting to a remote HBase from Apache Flink.
I am able to connect successfully through a simple HBase Java program.
However, when I try to connect and scan the table, it says
java.lang.NoSuchMethodError:
Hi Santosh, which version of Flink are you using? And which version of
hbase?
On Mar 31, 2015 12:50 PM, santosh_rajaguru sani...@gmail.com wrote:
Hi guys,
I am facing a problem while connecting to a remote HBase from Apache Flink.
I am able to connect successfully through a simple HBase Java program.
Till Rohrmann created FLINK-1807:
Summary: Stochastic gradient descent optimizer for ML library
Key: FLINK-1807
URL: https://issues.apache.org/jira/browse/FLINK-1807
Project: Flink
Issue
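FLINK-1807 above proposes an SGD optimizer for the ML library. As a rough illustration only (not the FLINK-1807 implementation, and the class and method names here are made up), a single stochastic gradient descent step for least-squares linear regression looks like this in plain Java:

```java
import java.util.Arrays;

public class SgdSketch {
    // One SGD step for least-squares regression on a single example (x, y):
    // w <- w - eta * (w . x - y) * x
    static double[] sgdStep(double[] w, double[] x, double y, double eta) {
        double pred = 0.0;
        for (int i = 0; i < w.length; i++) pred += w[i] * x[i];
        double err = pred - y;
        double[] next = new double[w.length];
        for (int i = 0; i < w.length; i++) next[i] = w[i] - eta * err * x[i];
        return next;
    }

    public static void main(String[] args) {
        double[] w = {0.0, 0.0};
        // Repeated steps on one example drive the first weight toward y = 2.
        for (int step = 0; step < 100; step++) {
            w = sgdStep(w, new double[]{1.0, 0.0}, 2.0, 0.1);
        }
        System.out.println(Arrays.toString(w)); // converges toward [2.0, 0.0]
    }
}
```

In a distributed setting the interesting part is how the gradient updates are aggregated across partitions, which is exactly what the ticket would have to address.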
Paris Carbone created FLINK-1808:
Summary: Omit sending checkpoint barriers when the execution graph
is not running
Key: FLINK-1808
URL: https://issues.apache.org/jira/browse/FLINK-1808
Project:
Ufuk Celebi created FLINK-1806:
Summary: Improve S3 file system error message when no
access/secret key provided
Key: FLINK-1806
URL: https://issues.apache.org/jira/browse/FLINK-1806
Project: Flink
Very nice. Thanks Robert!
On Mon, Mar 30, 2015 at 9:13 PM, Robert Metzger rmetz...@apache.org wrote:
It seems that the issue is fixed. I've just pushed twice to a pull
request and it immediately started building both.
I think the apache user has many more parallel builds available now (we
Hi Robert,
thanks for your answer.
I get an InterruptedException when I call shutdown():
java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Thread.join(Thread.java:1225)
at java.lang.Thread.join(Thread.java:1278)
at
Strange, I'm using Flink 0.8.1 and HBase 0.98.6 and everything works fine
(at least during reading).
Remember to put the correct hbase-site.xml in the classpath!
For writing output data I'm still trying to find the best way to do it.
It turned out that the Hadoop compatibility layer of Flink probably doesn't
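On the point above about putting the correct hbase-site.xml on the classpath: a minimal client-side file only needs the ZooKeeper quorum of the remote cluster. The hostnames and port below are placeholders, not values from this thread:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Placeholder hosts: replace with your cluster's ZooKeeper ensemble -->
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk-host1,zk-host2,zk-host3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```

If this file is missing from the classpath, the client silently falls back to defaults (localhost), which looks like a connection problem on the Flink side.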
I like the fact that the naming scheme follows some logic.
I also like that we have two easy-to-understand concepts:
- Operator (be that in any of the above representations)
- Result (of executing an operator)
+1
On Tue, Mar 31, 2015 at 4:50 PM, Ufuk Celebi u...@apache.org wrote:
On a high
Till Rohrmann created FLINK-1809:
Summary: Add standard scaler to ML library
Key: FLINK-1809
URL: https://issues.apache.org/jira/browse/FLINK-1809
Project: Flink
Issue Type: Improvement
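FLINK-1809 above proposes a standard scaler for the ML library. As a toy, single-column illustration of the transformation (z-score scaling, not the ticket's actual API — the class name here is invented):

```java
public class StandardScalerSketch {
    // Standard scaling (z-score): x' = (x - mean) / stddev,
    // using the population standard deviation.
    static double[] scale(double[] xs) {
        double mean = 0.0;
        for (double x : xs) mean += x;
        mean /= xs.length;
        double var = 0.0;
        for (double x : xs) var += (x - mean) * (x - mean);
        double std = Math.sqrt(var / xs.length);
        double[] out = new double[xs.length];
        for (int i = 0; i < xs.length; i++) out[i] = (xs[i] - mean) / std;
        return out;
    }

    public static void main(String[] args) {
        double[] scaled = scale(new double[]{1.0, 2.0, 3.0});
        // Result is centered at 0 with unit standard deviation.
        System.out.println(java.util.Arrays.toString(scaled));
    }
}
```

The real library version would compute mean and variance per feature across a distributed DataSet rather than over a local array.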
Hi!
Also important: Which Hadoop version are you using with Flink? The problem
is a missing method in a Hadoop class, so I guess there is a Hadoop version
mismatch.
For all Flink versions, there is a package for Hadoop 1.x and a package for
Hadoop 2.x. Make sure you pick the right one for HBase
Hi Flink devs,
this is my final report about the HBaseOutputFormat problem (with Flink
0.8.1), and I hope you can suggest the best way to make a PR:
1) The following code produces the error reported below (this should be
fixed in 0.9, right?)
Job job = Job.getInstance();
I like getting the consistency in there.
I never thought of the intermediate data sets as strictly produced
by a vertex, so I am unsure whether we should use that exact naming scheme,
or one that disconnects the results from the term VertexResult.
On Tue, Mar 31, 2015 at 5:27 PM, Kostas
As one of the devs who has recently been tracing the runtime portion of
the code, +1 for renaming to align with the concepts.
One thing I would like to have is an immediate update to the
documentation [1] along with the renaming PR. Otherwise
Then we need to file a follow-up ticket to update Kostas' awesome wiki
Hey Henry,
1) There is no extra message, but this is piggybacked on the finished
state transition (see Execution#markAsFinished). There it is essentially
the same mechanism.
2) It's part of my plan for this week to add documentation for exactly
this flow of RPC messages related to the
Hi Flinksters,
I have one quick question, and hopefully someone can help me get some answers:
When is DataExchangeMode.BATCH actually used, and how
does the consumer know when to start consuming the buffer
In the pipelined mode, the producer tells the JobManager via JobManager !
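As a toy illustration of the distinction being asked about (this is not Flink's runtime code, just a sketch of the two scheduling behaviors): in a BATCH-style exchange the producer runs to completion and materializes its whole result before the consumer is allowed to start, while in a PIPELINED-style exchange the consumer handles each record as soon as it is emitted.

```java
import java.util.ArrayList;
import java.util.List;

public class ExchangeModeSketch {
    // BATCH-style: the producer fills the buffer completely; only then
    // does the consumer get a handle to the finished result.
    static List<Integer> batchExchange(int n) {
        List<Integer> buffer = new ArrayList<>();
        for (int i = 0; i < n; i++) buffer.add(i * i); // producer runs to completion
        return new ArrayList<>(buffer);                // consumer starts only now
    }

    // PIPELINED-style: each record is handed to the consumer the moment
    // the producer emits it, so production and consumption interleave.
    static List<Integer> pipelinedExchange(int n) {
        List<Integer> consumed = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            int record = i * i;   // producer emits one record...
            consumed.add(record); // ...and the consumer handles it immediately
        }
        return consumed;
    }

    public static void main(String[] args) {
        System.out.println(batchExchange(4));     // [0, 1, 4, 9]
        System.out.println(pipelinedExchange(4)); // same data, different timing
    }
}
```

The data is identical in both modes; what differs is when the consumer may start, which is exactly why the batch mode needs an explicit signal that the producer has finished.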