Hi Ravi,

I think you can pass the arguments to the job via `./bin/standalone-job.sh start -Dscheduler-mode=reactive -Dexecution.checkpointing.interval="3000s" lib/tornado.jar myArguments`.
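[Editor's note: a small sketch of the convention Till is describing — everything after the jar path on the `standalone-job.sh` command line is forwarded to the job's `main(String[] args)`. The argument names below are placeholders invented for illustration, not anything from Ravi's actual job.]

```shell
# Everything after the jar path is handed to the user code's main(String[]).
# Simulate the trailing arguments (placeholder names, not from the real job):
set -- --kafka-topic my-topic --window-size 60

# standalone-job.sh passes these through untouched; the job can then parse
# them itself (e.g. with Flink's ParameterTool.fromArgs(args)).
echo "job args: $@"
```

Running this prints `job args: --kafka-topic my-topic --window-size 60`, i.e. exactly the tokens the job's `main` would receive.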
Cheers,
Till

On Wed, Nov 3, 2021 at 5:20 AM Ravi Sankar Reddy Sangana <ra...@radware.com> wrote:

> Thanks a lot, it is working fine now. Can you also explain how to pass
> parameters to the job? In the session cluster I am passing arguments using
> the API. Here, how can I pass the arguments to the job?
>
> Regards,
> Ravi Sankar Reddy.
>
> *From:* Till Rohrmann <trohrm...@apache.org>
> *Sent:* 02 November 2021 07:33 PM
> *To:* Ravi Sankar Reddy Sangana <ra...@radware.com>
> *Cc:* user <user@flink.apache.org>
> *Subject:* Re: Reactive mode in 1.13
>
> Hi Ravi,
>
> I think you also need to make the tornado.jar available to the TaskExecutor
> processes (e.g. by putting it into the usrlib or lib directory of the
> installation from which you started the process). When using the application
> mode, Flink assumes that all processes have access to the user code jar.
> That's why Flink won't ship the user code jars to the other processes,
> unlike in session cluster mode, for example. The idea is that the user code
> is bundled together with the application cluster deployment.
>
> Cheers,
> Till
>
> On Tue, Nov 2, 2021 at 1:01 PM Ravi Sankar Reddy Sangana <ra...@radware.com> wrote:
>
> Setup:
>
> 1 job manager with 2 cores, 6 GB
> 2 task managers with 4 cores, 12 GB
>
> Created a fat jar and copied it to the job manager's lib folder.
>
> ./bin/standalone-job.sh start -Dscheduler-mode=reactive -Dexecution.checkpointing.interval="3000s" lib/tornado.jar
>
> *Build logs in job manager:*
>
> [INFO] --- maven-shade-plugin:3.0.0:shade (default) @ clive ---
> [INFO] Including org.apache.flink:flink-java:jar:1.13.1 in the shaded jar.
> [INFO] Including org.apache.flink:flink-core:jar:1.13.1 in the shaded jar.
> [INFO] Including org.apache.flink:flink-annotations:jar:1.13.1 in the shaded jar.
> [INFO] Including org.apache.flink:flink-metrics-core:jar:1.13.1 in the shaded jar.
> [INFO] Including org.apache.flink:flink-shaded-asm-7:jar:7.1-13.0 in the shaded jar.
> [INFO] Including com.esotericsoftware.kryo:kryo:jar:2.24.0 in the shaded jar.
> [INFO] Including com.esotericsoftware.minlog:minlog:jar:1.2 in the shaded jar.
> [INFO] Including org.objenesis:objenesis:jar:2.1 in the shaded jar.
> [INFO] Including commons-collections:commons-collections:jar:3.2.2 in the shaded jar.
> [INFO] Including org.apache.commons:commons-compress:jar:1.20 in the shaded jar.
> [INFO] Including org.apache.flink:flink-shaded-guava:jar:18.0-13.0 in the shaded jar.
> [INFO] Including org.apache.commons:commons-lang3:jar:3.3.2 in the shaded jar.
> [INFO] Including org.apache.commons:commons-math3:jar:3.5 in the shaded jar.
> [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.15 from the shaded jar.
> [INFO] Excluding com.google.code.findbugs:jsr305:jar:1.3.9 from the shaded jar.
> [INFO] Excluding org.apache.flink:force-shading:jar:1.13.1 from the shaded jar.
> [INFO] Including org.apache.flink:flink-connector-kafka_2.11:jar:1.13.1 in the shaded jar.
> [INFO] Including org.apache.kafka:kafka-clients:jar:2.4.1 in the shaded jar.
> [INFO] Including com.github.luben:zstd-jni:jar:1.4.3-1 in the shaded jar.
> [INFO] Including org.lz4:lz4-java:jar:1.6.0 in the shaded jar.
> [INFO] Including org.xerial.snappy:snappy-java:jar:1.1.7.3 in the shaded jar.
> [INFO] Including org.apache.flink:flink-connector-base:jar:1.13.1 in the shaded jar.
> [INFO] Replacing original artifact with shaded artifact.
>
> *LOGS:*
>
> 2021-11-02 11:02:36,224 INFO org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler [] - Restarting job.
> org.apache.flink.streaming.runtime.tasks.StreamTaskException: Cannot load user class: org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
> ClassLoader info: URL ClassLoader:
> Class not resolvable through given classloader.
>     at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:336) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
>     at org.apache.flink.streaming.runtime.tasks.OperatorChain.<init>(OperatorChain.java:154) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.executeRestore(StreamTask.java:548) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.runWithCleanUpOnFail(StreamTask.java:647) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
>     at org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:537) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
>     at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:759) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
>     at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
>     at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_312]
> Caused by: java.lang.ClassNotFoundException: org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
>     at java.net.URLClassLoader.findClass(URLClassLoader.java:382) ~[?:1.8.0_312]
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:418) ~[?:1.8.0_312]
>     at org.apache.flink.util.FlinkUserCodeClassLoader.loadClassWithoutExceptionHandling(FlinkUserCodeClassLoader.java:64) ~[clive-1.0-SNAPSHOT.jar:?]
>     at org.apache.flink.util.ChildFirstClassLoader.loadClassWithoutExceptionHandling(ChildFirstClassLoader.java:65) ~[clive-1.0-SNAPSHOT.jar:?]
>     at org.apache.flink.util.FlinkUserCodeClassLoader.loadClass(FlinkUserCodeClassLoader.java:48) ~[clive-1.0-SNAPSHOT.jar:?]
>     at java.lang.ClassLoader.loadClass(ClassLoader.java:351) ~[?:1.8.0_312]
>     at org.apache.flink.runtime.execution.librarycache.FlinkUserCodeClassLoaders$SafetyNetWrapperClassLoader.loadClass(FlinkUserCodeClassLoaders.java:172) ~[flink-dist_2.11-1.13.1.jar:1.13.1]
>     at java.lang.Class.forName0(Native Method) ~[?:1.8.0_312]
>     at java.lang.Class.forName(Class.java:348) ~[?:1.8.0_312]
>     at org.apache.flink.util.InstantiationUtil$ClassLoaderObjectInputStream.resolveClass(InstantiationUtil.java:76) ~[clive-1.0-SNAPSHOT.jar:?]
>     at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1985) ~[?:1.8.0_312]
>     at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1849) ~[?:1.8.0_312]
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2159) ~[?:1.8.0_312]
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1666) ~[?:1.8.0_312]
>     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2404) ~[?:1.8.0_312]
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2328) ~[?:1.8.0_312]
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186) ~[?:1.8.0_312]
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1666) ~[?:1.8.0_312]
>     at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2404) ~[?:1.8.0_312]
>     at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2328) ~[?:1.8.0_312]
>     at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2186) ~[?:1.8.0_312]
>     at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1666) ~[?:1.8.0_312]
>
> *From:* Till Rohrmann <trohrm...@apache.org>
> *Sent:* 02 November 2021 03:16 PM
> *To:* user <user@flink.apache.org>
> *Cc:* Ravi Sankar Reddy Sangana <ra...@radware.com>
> *Subject:* Re: Reactive mode in 1.13
>
> Hi Ravi,
>
> The reactive mode shouldn't do things
> differently compared to a normal application cluster deployment. Maybe you
> can show us exactly how you submit the job, the contents of the bundled jar,
> how you build the fat jar, and the logs of the failed Flink run.
>
> Moving this discussion to the user ML because it fits better there.
>
> Cheers,
> Till
>
> On Tue, Nov 2, 2021 at 10:22 AM Ravi Sankar Reddy Sangana <ra...@radware.com> wrote:
>
> Hi team,
>
> We are planning to move to reactive mode with a standalone job for scaling
> options.
>
> While submitting the jobs we are getting errors saying the Kafka consumer
> and client related jars are missing, even though they are packed in the fat
> jar and the same jar runs fine on a normal session cluster.
>
> Can anyone help on how to add the jars while using standalone? Thanks in
> advance.
>
> Regards,
> Ravi Sankar Reddy
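[Editor's note: Till's advice in this thread boils down to one step — in application/reactive mode, Flink does not ship the user jar to the TaskExecutors, so the fat jar must already sit on the classpath of every standalone process (JobManager and each TaskManager), typically under `usrlib/` or `lib/`. A minimal local sketch of that step; all paths and the empty jar below are stand-ins created purely for illustration, not Ravi's real deployment:]

```shell
# Sketch: place the user-code jar where every standalone process can see it.
# FLINK_HOME and tornado.jar here are fakes created for demonstration only.
FLINK_HOME=$(mktemp -d)               # stand-in for the Flink install dir on a host
mkdir -p "$FLINK_HOME/usrlib"
touch "$FLINK_HOME/tornado.jar"       # stand-in for the real fat jar

# Copy the jar into usrlib/. On a real cluster, repeat this (scp/rsync or a
# shared volume) on every JobManager AND TaskManager host before start-up:
cp "$FLINK_HOME/tornado.jar" "$FLINK_HOME/usrlib/"
ls "$FLINK_HOME/usrlib"
```

With containerized deployments the same idea commonly takes the form of baking the jar into the image (e.g. under `/opt/flink/usrlib`) so that every pod starts with identical user code.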