Hi,
I have a pipeline that sinks into an Apache Derby database, but I keep getting the error
java.lang.IllegalArgumentException: JDBC driver class not found.
The sbt dependencies I’m loading are:
val flinkDependencies = Seq(
  "org.apache.flink" %% "flink-scala" % flinkVersion,
  "org.apache.flink" %% "flink-table" % "1.7.1",
  "org.apache.flink" % "flink-table_2.11" % "1.7.2",
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion,
  "org.apache.flink" %% "flink-table-uber" % flinkVersion,
  "org.apache.flink" %% "flink-jdbc" % flinkVersion,
  "org.apache.derby" % "derby" % "10.15.1.3" % Test
)
The Scala code for the sink is:
val sink: JDBCAppendTableSink = JDBCAppendTableSink.builder()
  .setDrivername("org.apache.derby.jdbc.EmbeddedDriver")
  .setDBUrl("jdbc:derby:/Volumes/HD1/nwalton/Databases/mydb")
  .setQuery("INSERT INTO mydb (bearing, sample, value, hash, prevrepeats) VALUES (?, ?, ?, ?, ?)")
  .setParameterTypes(INT_TYPE_INFO, LONG_TYPE_INFO, DOUBLE_TYPE_INFO, STRING_TYPE_INFO, INT_TYPE_INFO)
  .build()

tableEnv.registerTableSink(
  "jdbcOutputTable",
  // specify table schema
  Array[String]("mydb"),
  Array[TypeInformation[_]](Types.INT, Types.LONG, Types.DOUBLE, Types.STRING, Types.INT),
  sink)

val table: Table = tableEnv.fromDataStream(signalFourBuckets)
table.insertInto("jdbcOutputTable")
I note that all the examples I have found of Derby usage in Flink are for in-memory databases. Is there anything particular about Derby in that respect? I have checked the jar file (built using sbt assembly), and it appears to include the Derby driver classes. I have also started the cluster, which runs on a single machine, with CLASSPATH set to include the drivers.
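For reference, this is the kind of fail-fast check I used to confirm whether the driver class is visible at runtime (the object name DriverCheck is just for illustration; in my job I run the same Class.forName from main() before building the sink):

```scala
// Minimal sketch: try to load the Derby driver class explicitly, so a
// missing jar surfaces immediately rather than deep inside the JDBC sink.
object DriverCheck {
  def main(args: Array[String]): Unit = {
    try {
      // The same class name passed to setDrivername() above.
      Class.forName("org.apache.derby.jdbc.EmbeddedDriver")
      println("Derby driver found on classpath")
    } catch {
      case _: ClassNotFoundException =>
        println("Derby driver NOT on classpath")
    }
  }
}
```

Running this inside the submitted job prints the second line for me, so the class really does seem to be missing at runtime despite being in the assembly.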

Nick Walton
