Hi Aniket,

you could try to restore the previous behavior by configuring

classloader.resolve-order: parent-first

in the Flink configuration.
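
For example, in conf/flink-conf.yaml (a minimal sketch, assuming the default
config layout; both the JobManager and the TaskManagers read this file at
startup):

    # revert user-code class loading to the pre-1.4 (parent-first) behavior
    classloader.resolve-order: parent-first

The processes need to be restarted to pick up the change.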

Best, Fabian

2018-01-08 23:09 GMT+01:00 ani.desh1512 <ani.desh1...@gmail.com>:

> *Background:* We have a setup of Flink 1.3.2 along with a secure MapR
> (v5.2.1) cluster (Flink is running on MapR client nodes). We run this
> Flink cluster via the flink-jobmanager.sh foreground and
> flink-taskmanager.sh foreground commands through Marathon. We want to
> upgrade to Flink 1.4.0.
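>
> For context, our Marathon app definitions are roughly of this shape (a
> simplified, illustrative sketch; the id, path and resource sizes are not
> our actual values):
>
>     {
>       "id": "/flink/jobmanager",
>       "cmd": "/opt/flink/bin/flink-jobmanager.sh foreground",
>       "cpus": 2.0,
>       "mem": 4096,
>       "instances": 1
>     }
>
> The taskmanager app is defined analogously with flink-taskmanager.sh
> foreground.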
>
> Since we require MapR libraries, here is how I built Flink 1.4.0 from
> source (Maven 3.5.0); the commands are consolidated in the sketch after
> these steps:
> 1. Cloned the flink repo
> 2. Checked out the release-1.4.0 tag
> 3. mvn clean install -DskipTests -Pvendor-repos,mapr
>    -Dhadoop.version=2.7.0-mapr-1703 -Dzookeeper.version=3.4.5-mapr-1604
> 4. cd flink-dist
> 5. mvn clean install
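>
> Put together, the build is roughly this (assuming the apache/flink
> GitHub repo and the versions above):
>
>     git clone https://github.com/apache/flink.git
>     cd flink
>     git checkout release-1.4.0
>     mvn clean install -DskipTests -Pvendor-repos,mapr \
>         -Dhadoop.version=2.7.0-mapr-1703 -Dzookeeper.version=3.4.5-mapr-1604
>     cd flink-dist
>     mvn clean install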
>
> Now, we have a Java Flink application that writes to MapR-DB (via the
> HBase API). This jar runs without any error on Flink 1.3.2.
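>
> For reference, the sink is roughly of this shape (a simplified sketch;
> the real code goes through our internal HBaseUtils class, but the
> relevant point is that HBaseConfiguration.create() runs in open(), which
> is where the MapR native code is first touched on the TaskManager):
>
>     import org.apache.flink.configuration.Configuration;
>     import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
>     import org.apache.hadoop.hbase.HBaseConfiguration;
>
>     public class HBaseSink extends RichSinkFunction<String> {
>
>         private transient org.apache.hadoop.conf.Configuration hbaseConf;
>
>         @Override
>         public void open(Configuration parameters) throws Exception {
>             // Loads hbase-default.xml / hbase-site.xml; on MapR this also
>             // pulls in com.mapr.security.JNISecurity (native code), the
>             // class that fails to link in the stack trace below.
>             hbaseConf = HBaseConfiguration.create();
>             // ... set up the MapR-DB / HBase connection and table here ...
>         }
>
>         @Override
>         public void invoke(String value) throws Exception {
>             // ... write the record to MapR-DB via the HBase API ...
>         }
>     }
>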
> We then changed the Flink version in the jar to 1.4.0 and tried running
> it on the newly created Flink 1.4.0 cluster, but we get the following
> error:
>
> java.lang.UnsatisfiedLinkError: com.mapr.security.JNISecurity.SetParsingDone()V
>         at com.mapr.security.JNISecurity.SetParsingDone(Native Method)
>         at com.mapr.baseutils.cldbutils.CLDBRpcCommonUtils.init(CLDBRpcCommonUtils.java:231)
>         at com.mapr.baseutils.cldbutils.CLDBRpcCommonUtils.<init>(CLDBRpcCommonUtils.java:73)
>         at com.mapr.baseutils.cldbutils.CLDBRpcCommonUtils.<clinit>(CLDBRpcCommonUtils.java:63)
>         at org.apache.hadoop.conf.CoreDefaultProperties.<clinit>(CoreDefaultProperties.java:69)
>         at java.lang.Class.forName0(Native Method)
>         at java.lang.Class.forName(Class.java:348)
>         at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2159)
>         at org.apache.hadoop.conf.Configuration.getProperties(Configuration.java:2374)
>         at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2591)
>         at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2543)
>         at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2456)
>         at org.apache.hadoop.conf.Configuration.get(Configuration.java:994)
>         at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1044)
>         at org.apache.hadoop.conf.Configuration.getBoolean(Configuration.java:1445)
>         at org.apache.hadoop.hbase.HBaseConfiguration.checkDefaultsVersion(HBaseConfiguration.java:69)
>         at org.apache.hadoop.hbase.HBaseConfiguration.addHbaseResources(HBaseConfiguration.java:83)
>         at org.apache.hadoop.hbase.HBaseConfiguration.create(HBaseConfiguration.java:98)
>         at com.kabbage.maprdb.HBaseUtils.<init>(HBaseUtils.java:17)
>         at com.kabbage.maprdb.HBaseSink.open(HBaseSink.java:18)
>         at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
>         at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
>         at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:48)
>         at org.apache.flink.streaming.runtime.tasks.StreamTask.openAllOperators(StreamTask.java:393)
>         at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:254)
>         at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
>         at java.lang.Thread.run(Thread.java:748)
>
> We have set the FLINK_CLASSPATH variable to point to our MapR libs.
> I saw in the Flink 1.4.0 release notes that there are changes to the
> dynamic class loading of user code. So my question is: what extra or
> different steps do I need to take to make Flink work with the MapR
> libraries again?
>
> Thanks,
> Aniket
>
> --
> Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
