[jira] [Commented] (STORM-2175) Supervisor V2 can possibly shut down workers twice in local mode

2016-10-28 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15616570#comment-15616570
 ] 

Robert Joseph Evans commented on STORM-2175:
--------------------------------------------

I put up two pull requests, one for master and another for 1.x.  I manually 
forced InterruptedExceptions in the test [~Srdo] posted and verified that 
this works.
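
For reference, a minimal illustrative sketch (not the actual test code) of how 
an interrupt can be forced onto a shutdown path, so that any blocking call 
inside it throws an InterruptedException:

{code}
// Minimal sketch (illustrative, not the actual test): run the shutdown on a
// separate thread and set its interrupt status, so the next blocking call
// inside the shutdown path throws InterruptedException.
public class InterruptShutdownSketch {
    public static void main(String[] args) throws Exception {
        Runnable shutdown = () -> {            // stand-in for the real shutdown path
            try {
                Thread.sleep(10_000);          // any blocking call during shutdown
            } catch (InterruptedException e) {
                System.out.println("shutdown interrupted, as the test forces");
            }
        };
        Thread t = new Thread(shutdown);
        t.start();
        t.interrupt();                         // pending interrupt: the next blocking call throws
        t.join();
    }
}
{code}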

I also found a number of leaked threads from nimbus and a few other possible 
places. I'll file a follow-on JIRA for them, because they are not causing 
issues right now.

> Supervisor V2 can possibly shut down workers twice in local mode
> ----------------------------------------------------------------
>
> Key: STORM-2175
> URL: https://issues.apache.org/jira/browse/STORM-2175
> Project: Apache Storm
>  Issue Type: Bug
>  Components: storm-core
>Affects Versions: 2.0.0, 1.1.0
>Reporter: Robert Joseph Evans
>Assignee: Robert Joseph Evans
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> See https://github.com/apache/storm/pull/1697#issuecomment-256456889
> {code}
> java.lang.NullPointerException
> at org.apache.storm.utils.DisruptorQueue$FlusherPool.stop(DisruptorQueue.java:110)
> at org.apache.storm.utils.DisruptorQueue$Flusher.close(DisruptorQueue.java:293)
> at org.apache.storm.utils.DisruptorQueue.haltWithInterrupt(DisruptorQueue.java:410)
> at org.apache.storm.disruptor$halt_with_interrupt_BANG_.invoke(disruptor.clj:77)
> at org.apache.storm.daemon.executor$mk_executor$reify__4923.shutdown(executor.clj:412)
> at sun.reflect.GeneratedMethodAccessor303.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93)
> at clojure.lang.Reflector.invokeNoArgInstanceMember(Reflector.java:313)
> at org.apache.storm.daemon.worker$fn__5550$exec_fn__1372__auto__$reify__5552$shutdown_STAR___5572.invoke(worker.clj:668)
> at org.apache.storm.daemon.worker$fn__5550$exec_fn__1372__auto__$reify$reify__5598.shutdown(worker.clj:706)
> at org.apache.storm.ProcessSimulator.killProcess(ProcessSimulator.java:66)
> at org.apache.storm.ProcessSimulator.killAllProcesses(ProcessSimulator.java:79)
> at org.apache.storm.testing$kill_local_storm_cluster.invoke(testing.clj:207)
> at org.apache.storm.testing4j$_withLocalCluster.invoke(testing4j.clj:93)
> at org.apache.storm.Testing.withLocalCluster(Unknown Source)
> {code}
> and
> {code}
> java.lang.IllegalStateException: Timer is not active
> at org.apache.storm.timer$check_active_BANG_.invoke(timer.clj:87)
> at org.apache.storm.timer$cancel_timer.invoke(timer.clj:120)
> at org.apache.storm.daemon.worker$fn__5550$exec_fn__1372__auto__$reify__5552$shutdown_STAR___5572.invoke(worker.clj:682)
> at org.apache.storm.daemon.worker$fn__5550$exec_fn__1372__auto__$reify$reify__5598.shutdown(worker.clj:706)
> at org.apache.storm.ProcessSimulator.killProcess(ProcessSimulator.java:66)
> at org.apache.storm.ProcessSimulator.killAllProcesses(ProcessSimulator.java:79)
> at org.apache.storm.testing$kill_local_storm_cluster.invoke(testing.clj:207)
> at org.apache.storm.testing4j$_withLocalCluster.invoke(testing4j.clj:93)
> at org.apache.storm.Testing.withLocalCluster(Unknown Source)
> {code}
> [~Srdo] is still working on getting a reproducible use case for us. But I 
> will try to reproduce/fix it myself in the meantime.





[jira] [Commented] (STORM-2175) Supervisor V2 can possibly shut down workers twice in local mode

2016-10-28 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15616424#comment-15616424
 ] 

Robert Joseph Evans commented on STORM-2175:
--------------------------------------------

Yes, that is just what I am doing, and also making ProcessSimulator eat 
InterruptedExceptions (because a test shouldn't fail for an exception that we 
expect to happen and ignore in other parts of the code).
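
Roughly, the shape of that change (a sketch only, not the merged patch) is to 
treat an interrupt-caused exception as expected and to always drop the pid 
from the map:

{code}
// Sketch only (not the merged patch): swallow interrupt-caused exceptions
// during shutdown and always remove the pid, so a later killAllProcesses()
// pass does not try to shut the same worker down a second time.
public static void killProcess(String pid) {
    synchronized (lock) {
        LOG.info("Begin killing process " + pid);
        Shutdownable shutdownHandle = processMap.get(pid);
        try {
            if (shutdownHandle != null) {
                shutdownHandle.shutdown();
            }
        } catch (Exception e) {
            // Interrupts are expected (and ignored elsewhere) during
            // teardown, so don't let one fail a test here.
            boolean interrupted = false;
            for (Throwable t = e; t != null; t = t.getCause()) {
                if (t instanceof InterruptedException) {
                    interrupted = true;
                    break;
                }
            }
            if (!interrupted) {
                throw new RuntimeException(e);
            }
            LOG.warn("Ignoring interrupt while killing process " + pid, e);
        } finally {
            processMap.remove(pid);
            LOG.info("Successfully killed process " + pid);
        }
    }
}
{code}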



[jira] [Commented] (STORM-2175) Supervisor V2 can possibly shut down workers twice in local mode

2016-10-28 Thread Stig Rohde Døssing (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15616383#comment-15616383
 ] 

Stig Rohde Døssing commented on STORM-2175:
-------------------------------------------

Thanks for the explanation. You're right, it's much easier if the structures 
can handle being closed repeatedly. So this issue should probably become about 
changing the timer, executor, worker, and DisruptorQueue (or really anything 
Shutdownable) so they can handle multiple calls to shutdown.
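
For example, a guard along these lines (a sketch against no particular class) 
makes a second or concurrent shutdown call a harmless no-op:

{code}
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of an idempotent shutdown: the flag flips exactly once, so any
// second (or concurrent) shutdown() call returns instead of re-running the
// teardown and tripping over already-released resources.
public class IdempotentShutdown {
    private final AtomicBoolean shutDown = new AtomicBoolean(false);

    public void shutdown() {
        if (!shutDown.compareAndSet(false, true)) {
            return; // already shut down; tolerate the extra call
        }
        // ... release timers, executors, queues, etc. exactly once ...
    }
}
{code}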



[jira] [Created] (STORM-2176) Workers do not shut down cleanly and worker hooks don't run when a topology is killed

2016-10-28 Thread P. Taylor Goetz (JIRA)
P. Taylor Goetz created STORM-2176:
-----------------------------------

 Summary: Workers do not shut down cleanly and worker hooks don't 
run when a topology is killed
 Key: STORM-2176
 URL: https://issues.apache.org/jira/browse/STORM-2176
 Project: Apache Storm
  Issue Type: Bug
Affects Versions: 1.0.0, 1.0.1, 1.0.2
Reporter: P. Taylor Goetz
Priority: Critical


This appears to have been introduced in the 1.0.0 release. The issue does not 
seem to affect 0.10.2.

When a topology is killed and workers receive the notification to shut down, 
they do not shut down cleanly, so worker hooks never get invoked.
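
For anyone trying to reproduce this, a minimal worker hook along these lines 
(a sketch; it assumes Storm's BaseWorkerHook convenience class) should log 
from {{shutdown()}} whenever a worker exits cleanly:

{code}
import java.util.Map;
import org.apache.storm.hooks.BaseWorkerHook;
import org.apache.storm.task.WorkerTopologyContext;

// Minimal sketch of a worker hook: when a worker shuts down cleanly, the
// message from shutdown() should show up in the worker log. With this bug,
// it never does.
public class LoggingWorkerHook extends BaseWorkerHook {
    @Override
    public void start(Map stormConf, WorkerTopologyContext context) {
        System.err.println("worker hook started");
    }

    @Override
    public void shutdown() {
        System.err.println("worker hook shutdown ran");
    }
}
{code}

Registering it with {{TopologyBuilder.addWorkerHook}} before submitting the 
topology should be enough to observe whether {{shutdown()}} ever fires.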

When a worker shuts down cleanly, the worker logs should contain entries such 
as the following:

{code}
2016-10-28 18:52:06.273 b.s.d.worker [INFO] Shut down transfer thread
2016-10-28 18:52:06.279 b.s.d.worker [INFO] Shutting down default resources
2016-10-28 18:52:06.287 b.s.d.worker [INFO] Shut down default resources
2016-10-28 18:52:06.351 b.s.d.worker [INFO] Disconnecting from storm cluster state context
2016-10-28 18:52:06.359 b.s.d.worker [INFO] Shut down worker exclaim-1-1477680593 61bddd66-0fda-4556-b742-4b63f0df6fc1 6700
{code}

In the 1.0.x line of releases (and presumably 1.x, though I haven't checked) 
this does not happen -- the worker shutdown process appears to get stuck 
shutting down executors 
(https://github.com/apache/storm/blob/v1.0.2/storm-core/src/clj/org/apache/storm/daemon/worker.clj#L666),
 no further log messages are seen in the worker log, and worker hooks do not 
run.

There are two properties that affect how workers exit. The first is the 
configuration property {{supervisor.worker.shutdown.sleep.secs}}, which 
defaults to 1 second. It controls how long the supervisor will wait for a 
worker to exit gracefully before forcibly killing it with {{kill -9}}. When 
this happens the supervisor will log that the worker terminated with exit code 
137 (128 + 9).

The second is a hard-coded 1 second delay 
(https://github.com/apache/storm/blob/v1.0.2/storm-core/src/clj/org/apache/storm/util.clj#L463)
 registered as a shutdown hook, which calls {{Runtime.halt()}} once the delay 
expires. When this happens, the supervisor will log that the worker terminated 
with exit code 20 (also hard-coded).
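
Schematically, the hook behaves like this hedged Java sketch of the linked 
Clojure utility (the structure here is illustrative, not a quote of util.clj):

{code}
// Hedged sketch: a shutdown hook that sleeps a fixed delay and then halts
// the JVM with the hard-coded exit code 20, so a hung cleanup cannot keep
// the worker process alive.
public class ForceKillHookSketch {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                Thread.sleep(1000);            // the hard-coded 1 second delay
            } catch (InterruptedException e) {
                // fall through and halt anyway
            }
            Runtime.getRuntime().halt(20);     // the supervisor then sees exit code 20
        }));
        // ... normal worker cleanup runs in other shutdown hooks ...
    }
}
{code}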

Side Note: The hard-coded halt delay in worker.clj and the default value of 
{{supervisor.worker.shutdown.sleep.secs}} are both 1 second; that should 
probably be changed, since it creates a race between the supervisor delay and 
the worker delay.


To test this, I set {{supervisor.worker.shutdown.sleep.secs}} to 15 to allow 
plenty of time for the worker to exit gracefully, and deployed and killed a 
topology. In this case the supervisor consistently reported exit code 20 for 
the worker, indicating the hard-coded shutdown hook caused the worker to exit.

I thought the hard-coded 1 second shutdown hook delay might not be long enough 
for the worker to shut down cleanly. To test that hypothesis, I changed the 
hard-coded delay to 10 seconds, leaving 
{{supervisor.worker.shutdown.sleep.secs}} at 15 seconds. Again the supervisor 
reported an exit code of 20 for the worker, and there were no log messages 
indicating the worker had exited cleanly or that the worker hook had run.






[jira] [Commented] (STORM-2175) Supervisor V2 can possibly shut down workers twice in local mode

2016-10-28 Thread Robert Joseph Evans (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15616279#comment-15616279
 ] 

Robert Joseph Evans commented on STORM-2175:
--------------------------------------------

OK, I was looking through the logs and found what happened (and honestly, I 
think it is actually a good thing).

When shutting down a local mode cluster inside testing.clj we have:

{code}
  (doseq [s @(:supervisors cluster-map)]
    (.shutdownAllWorkers s)
    ;; race condition here? will it launch the workers again?
    (.close s))
  (ProcessSimulator/killAllProcesses)
{code}

NOTE: the comment above is not what is causing this issue.

So all of the supervisors are shut down: first all of the worker processes are 
killed, and then the supervisor itself is closed. After all of the supervisors 
are shut down, just to be sure, we then kill all of the processes still 
registered with the process simulator.

The code for killing all of the workers in a supervisor is the following:

{code}
public synchronized void shutdownAllWorkers() {
    for (Slot slot : slots.values()) {
        slot.setNewAssignment(null);
    }

    for (Slot slot : slots.values()) {
        try {
            int count = 0;
            while (slot.getMachineState() != MachineState.EMPTY) {
                if (count > 10) {
                    LOG.warn("DONE waiting for {} to finish {}", slot, slot.getMachineState());
                    break;
                }
                if (Time.isSimulating()) {
                    Time.advanceTime(1000);
                    Thread.sleep(100);
                } else {
                    Time.sleep(100);
                }
                count++;
            }
        } catch (Exception e) {
            LOG.error("Error trying to shutdown workers in {}", slot, e);
        }
    }
}
{code}

It tells each Slot that it is not assigned anything anymore and waits for it 
to kill the worker under it. I saw in the logs that the one worker in question 
timed out ("DONE waiting for...") and the shutdown went on to kill/shut down 
other things.

The ProcessSimulator code to kill a local mode worker is:

{code}
/**
 * Kill a process
 *
 * @param pid
 */
public static void killProcess(String pid) {
    synchronized (lock) {
        LOG.info("Begin killing process " + pid);
        Shutdownable shutdownHandle = processMap.get(pid);
        if (shutdownHandle != null) {
            // If shutdown() throws (for example because of an interrupt),
            // the remove() below never runs and the pid stays in the map.
            shutdownHandle.shutdown();
        }
        processMap.remove(pid);
        LOG.info("Successfully killed process " + pid);
    }
}

/**
 * Kill all processes
 */
public static void killAllProcesses() {
    Set<String> pids = processMap.keySet();
    for (String pid : pids) {
        killProcess(pid);
    }
}
{code}

In the logs I don't see a corresponding "Successfully killed process " for 
the "Begin killing process ".  This means an exception was thrown during the 
shutdown.  Looking at the code, the only exception that could have caused this 
is an InterruptedException or something caused by it (or else the entire 
process would have exited).

This means that the ProcessSimulator partially shut down the worker, the 
worker threw the exception, and the ProcessSimulator never removed the worker 
from its map.  So when the follow-on code runs to kill anything still 
registered with the process simulator (in case something leaked), it ends up 
trying to shoot the same process yet again, which, as the stack traces show, 
the code does not like.

Everything seems perfectly reasonable, except for the part where some of our 
data structures don't like being shut down multiple times.  That is contrary 
to what most other software does.  Exactly once is hard, so let's make sure at 
least once works.
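
Concretely, the NullPointerException in {{DisruptorQueue$FlusherPool.stop}} 
would go away with a null-guard in stop(); a hedged sketch, since the field 
and structure here are illustrative rather than DisruptorQueue's actual 
internals:

{code}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

// Hedged sketch (names are illustrative, not DisruptorQueue's actual
// internals): a stop() that tolerates being called more than once.
public class FlusherPoolSketch {
    private ScheduledExecutorService exec = Executors.newScheduledThreadPool(1);

    public synchronized void stop() {
        if (exec != null) {   // a second stop() is a no-op instead of an NPE
            exec.shutdown();
            exec = null;
        }
    }
}
{code}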


[jira] [Updated] (STORM-2175) Supervisor V2 can possibly shut down workers twice in local mode

2016-10-28 Thread Robert Joseph Evans (JIRA)

 [ 
https://issues.apache.org/jira/browse/STORM-2175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated STORM-2175:
---------------------------------------
Affects Version/s: 1.1.0



[jira] [Comment Edited] (STORM-1386) Problem using a newer version of log4j-core

2016-10-28 Thread Sébastien Volle (JIRA)

[ 
https://issues.apache.org/jira/browse/STORM-1386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15612283#comment-15612283
 ] 

Sébastien Volle edited comment on STORM-1386 at 10/28/16 12:18 PM:
-------------------------------------------------------------------

I have a similar problem. I'm stuck with log4j-core 2.4 in my application, and 
I get illegal access errors due to API changes between 2.1 and 2.4:
{code}
Exception in thread "main" java.lang.IllegalAccessError: tried to access method org.apache.logging.log4j.core.lookup.MapLookup.newMap(I)Ljava/util/HashMap; from class org.apache.logging.log4j.core.lookup.MainMapLookup
{code}

Would it be possible to shade logging dependencies and allow users to use 
whatever version of log4j they see fit, or does shading logging pose a 
particular problem?

Thanks.



> Problem using a newer version of log4j-core
> -------------------------------------------
>
> Key: STORM-1386
> URL: https://issues.apache.org/jira/browse/STORM-1386
> Project: Apache Storm
>  Issue Type: Question
>  Components: storm-core
>Affects Versions: 0.10.0
> Environment: Ubuntu 14.04, Ambari 2.1.2 with HDP-2.3.2.0-2950
>Reporter: Vassilis Sotiridis
>Priority: Minor
>
> Storm 0.10.0 comes with log4j-core 2.1 and I can't find any way to override 
> it in my app. I need 2.4.x+ in order to use the Kafka appender. I have even 
> tried to relocate the org.apache.logging.log4j package in my shade 
> configuration, but even then I'm getting errors like this:
> ERROR StatusLogger Log4j2 could not find a logging implementation. Please add log4j-core to the classpath. Using SimpleLogger to log to the console...
> Thanks in advance.


