Hi Max,
The program runs locally on one machine. I grepped for ZkSerializer in the
generated fat jar and it is there, so the build process seems OK. I also
put zkclient-0.5.jar under flink_root_dir/lib/, and it contains the
ZkSerializer class.
Thanks,
Wendong
I also tried using zkclient-0.3.jar in lib/, updated build.sbt, and rebuilt. It
doesn't help; I still get the same NoClassDefFoundError: ZkSerializer
in flink.streaming.connectors.kafka.api.KafkaSource.open().
Please help.
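For what it's worth, here is a minimal diagnostic sketch (not from the thread) that checks whether ZkSerializer is actually visible on the runtime classpath; org.I0Itec.zkclient.serialize is zkclient's usual package layout, so adjust it if yours differs:

    public class ClasspathCheck {
        public static void main(String[] args) {
            try {
                // zkclient's usual location for ZkSerializer (an assumption; adjust if needed)
                Class.forName("org.I0Itec.zkclient.serialize.ZkSerializer");
                System.out.println("ZkSerializer found on the classpath");
            } catch (ClassNotFoundException e) {
                System.out.println("ZkSerializer NOT found: " + e);
            }
        }
    }

Running this with the same classpath the Flink job uses (for example, with the jars from the node's lib/ directory) shows whether the class is reachable where the failure happens.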
Glad to hear that it finally worked :-)
On Tue, Jul 21, 2015 at 2:21 AM, Wendong <wendong@gmail.com> wrote:
Hi Till,
Thanks for your suggestion! I built a fat jar and the runtime
ClassNotFoundException is finally gone. I wish I had tried a fat jar earlier;
it would have saved me a lot of time.
Hi,
Are you running this locally or in a cluster environment? Did you put the
zkClient-0.5.jar in the /lib directory of every node (also task managers)?
It seems like sbt should include the zkClient dependency in the fat jar. So
there might be something wrong with your build process.
Best
Exceptions are swallowed upon canceling (because canceling usually produces
follow-up exceptions).
Root-cause exceptions should never be swallowed.
Do you have a specific place in mind where that happens?
On Mon, Jun 29, 2015 at 4:49 PM, Flavio Pompermaier <pomperma...@okkam.it> wrote:
I think
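A minimal sketch (plain Java, not Flink's actual code) of the pattern described above: a follow-up exception raised during cancellation is attached to the root cause with addSuppressed, so the root cause is never swallowed:

    public class RootCauseDemo {
        // Hypothetical stand-ins for task execution and cancellation.
        static void runTask() throws Exception { throw new Exception("root cause"); }
        static void cancel() throws Exception { throw new Exception("follow-up during cancel"); }

        public static void main(String[] args) throws Exception {
            try {
                runTask();
            } catch (Exception root) {
                try {
                    cancel();
                } catch (Exception followUp) {
                    // Keep the follow-up for debugging, but never let it replace the root cause.
                    root.addSuppressed(followUp);
                }
                throw root; // stack trace reports "root cause", with the follow-up as suppressed
            }
        }
    }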
When I write this code, I get the error "no interface expected here":

public static class MyCoGrouper extends CoGroupFunction<Customer, Orders, Result> {

ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
DataSet<Customer> customers = getCustomerDataSet(env, mask, l, map);
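The compiler complains because CoGroupFunction is an interface, so a class cannot "extends" it. A minimal sketch of the fix, using "implements" and the coGroup method the interface requires (Customer, Orders, and Result stand in for your own types):

    import org.apache.flink.api.common.functions.CoGroupFunction;
    import org.apache.flink.util.Collector;

    // Nested inside the job class, as in the original snippet.
    public static class MyCoGrouper implements CoGroupFunction<Customer, Orders, Result> {
        @Override
        public void coGroup(Iterable<Customer> customers, Iterable<Orders> orders,
                            Collector<Result> out) throws Exception {
            // Combine the two grouped inputs and emit Result records here.
        }
    }

Alternatively, extend RichCoGroupFunction<Customer, Orders, Result> if you need access to the runtime context; since that is a class, it works with "extends".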
I don't think there is a simpler way to do this.
Flink follows the semantics of Hadoop's HDFS file system there, which
behaves that way, as does the Java File class.
But it seems your solution is working, even if it needs a few extra lines
of code.
On Fri, Jul 17, 2015 at 11:17 AM, Flavio Pompermaier wrote:
@Stephan: This is with high probability a deadlock in the spillable
partitions. I'm looking into it
(https://issues.apache.org/jira/browse/FLINK-2384).
@Flavio: Can you run your job in forced pipelined mode for the time being and
check whether it works?
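For reference, a minimal sketch (not from the thread) of forcing pipelined execution in the Java batch API, which makes all data exchanges pipelined instead of the spillable batch exchanges suspected in FLINK-2384:

    import org.apache.flink.api.common.ExecutionMode;
    import org.apache.flink.api.java.ExecutionEnvironment;

    public class ForcedPipelinedJob {
        public static void main(String[] args) throws Exception {
            ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
            // Force pipelined data exchanges everywhere.
            env.getConfig().setExecutionMode(ExecutionMode.PIPELINED_FORCED);
            // ... build and execute the job as usual ...
        }
    }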