[jira] [Closed] (AMQ-5339) LevelDBClient operation failed. NullPointerException after entering recovery mode

2017-02-03 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-5339.
-
Resolution: Won't Fix

LevelDB has been deprecated and is no longer supported.

> LevelDBClient operation failed. NullPointerException after entering recovery 
> mode
> -
>
> Key: AMQ-5339
> URL: https://issues.apache.org/jira/browse/AMQ-5339
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.8.0
> Environment: CentOS 6.5
>Reporter: Serge Smertin
>Priority: Critical
>
> Once in a while we're getting the following exception in the AMQ logs, and there is 
> no way to recover other than purging the queue. How can we overcome this issue? Is it 
> okay to use the LevelDB store now? Any answers? :)
> {noformat}
> 2014-09-01 13:25:52,065 [erSimpleAppMain] DEBUG AbstractRegion
>  - localhost adding destination: queue://files/dead
> 2014-09-01 13:25:52,081 [erSimpleAppMain] DEBUG TaskRunnerFactory 
>  - Initialized TaskRunnerFactory[ActiveMQ BrokerService[localhost] Task] 
> using ExecutorService: null
> 2014-09-01 13:25:52,098 [erSimpleAppMain] WARN  LevelDBClient 
>  - DB operation failed. (entering recovery mode)
> 2014-09-01 13:25:52,099 [erSimpleAppMain] DEBUG LevelDBClient 
>  - java.lang.NullPointerException
> java.lang.NullPointerException
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:966)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$queueCursor$1.apply(LevelDBClient.scala:962)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1$$anonfun$apply$mcV$sp$9.apply(LevelDBClient.scala:1038)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1$$anonfun$apply$mcV$sp$9.apply(LevelDBClient.scala:1037)
> at 
> org.apache.activemq.leveldb.LevelDBClient$RichDB.check$4(LevelDBClient.scala:309)
> at 
> org.apache.activemq.leveldb.LevelDBClient$RichDB.cursorRange(LevelDBClient.scala:311)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1.apply$mcV$sp(LevelDBClient.scala:1037)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1.apply(LevelDBClient.scala:1037)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$collectionCursor$1.apply(LevelDBClient.scala:1037)
> at 
> org.apache.activemq.leveldb.LevelDBClient.usingIndex(LevelDBClient.scala:760)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$retryUsingIndex$1.apply(LevelDBClient.scala:766)
> at 
> org.apache.activemq.leveldb.util.RetrySupport$.retry(RetrySupport.scala:38)
> at 
> org.apache.activemq.leveldb.LevelDBClient.retry(LevelDBClient.scala:457)
> at 
> org.apache.activemq.leveldb.LevelDBClient.retryUsingIndex(LevelDBClient.scala:766)
> at 
> org.apache.activemq.leveldb.LevelDBClient.collectionCursor(LevelDBClient.scala:1036)
> at 
> org.apache.activemq.leveldb.LevelDBClient.queueCursor(LevelDBClient.scala:962)
> at 
> org.apache.activemq.leveldb.DBManager.cursorMessages(DBManager.scala:633)
> at 
> org.apache.activemq.leveldb.LevelDBStore$LevelDBMessageStore.recoverNextMessages(LevelDBStore.scala:643)
> at org.apache.activemq.broker.region.Queue.initialize(Queue.java:381)
> at 
> org.apache.activemq.broker.region.DestinationFactoryImpl.createDestination(DestinationFactoryImpl.java:87)
> at 
> org.apache.activemq.broker.region.AbstractRegion.createDestination(AbstractRegion.java:526)
> at 
> org.apache.activemq.broker.jmx.ManagedQueueRegion.createDestination(ManagedQueueRegion.java:56)
> at 
> org.apache.activemq.broker.region.AbstractRegion.addDestination(AbstractRegion.java:136)
> at 
> org.apache.activemq.broker.region.RegionBroker.addDestination(RegionBroker.java:277)
> at 
> org.apache.activemq.broker.BrokerFilter.addDestination(BrokerFilter.java:145)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Closed] (AMQ-5496) leveldb becomes corrupt after vmware ha migration with power off on physical machine

2017-02-03 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-5496.
-
Resolution: Won't Fix

LevelDB has been deprecated and is no longer supported.

> leveldb becomes corrupt after vmware ha migration with power off on physical 
> machine
> 
>
> Key: AMQ-5496
> URL: https://issues.apache.org/jira/browse/AMQ-5496
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.9.0, 5.9.1
> Environment: red hat linux, vmware, two physical hp sl210 machines, 
> vmware ha, iscsi remote disk storage. when a virtual machine with activemq is 
> migrated from one physical machine to another, operation is continuous and there is 
> no problem. if one physical machine is powered off, the leveldb database becomes 
> corrupt on restart. the CURRENT files are not populated with the newest manifest 
> number. Doing this manually did not fix it.
>Reporter: michael kelly
>
> les can connect to service:jmx:rmi:///jndi/rmi://localhost:1090/jmxrmi
> 2014-12-19T16:10:07.687+0100 INFO [main] o.a.a.l.LevelDBClient [Log.scala:93] 
> Using the pure java LevelDB implementation.
> 2014-12-19T16:10:08.044+0100 INFO [LevelDB IOException handler.] 
> o.a.a.b.BrokerService [BrokerService.java:2561] No IOExceptionHandler 
> registered, ignoring IO exception
> java.io.IOException: CURRENT file does not end with newline
> at 
> org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:39)
>  ~[activemq-client-5.9.1.jar:5.9.1]
> at 
> org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:552) 
> ~[activemq-leveldb-store-5.9.1.jar:5.9.1]
> at 
> org.apache.activemq.leveldb.LevelDBClient.replay_init(LevelDBClient.scala:657)
>  ~[activemq-leveldb-store-5.9.1.jar:5.9.1]
> at 
> org.apache.activemq.leveldb.LevelDBClient.start(LevelDBClient.scala:558) 
> ~[activemq-leveldb-store-5.9.1.jar:5.9.1]
> at org.apache.activemq.leveldb.DBManager.start(DBManager.scala:626) 
> ~[activemq-leveldb-store-5.9.1.jar:5.9.1]
> at 
> org.apache.activemq.leveldb.LevelDBStore.doStart(LevelDBStore.scala:236) 
> ~[activemq-leveldb-store-5.9.1.jar:5.9.1]
> at 
> org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55) 
> ~[activemq-client-
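For context on the IOException above: LevelDB's CURRENT file holds the name of the active MANIFEST file and must end with a newline, so a power-off that truncates it makes the store fail to open exactly like this. A quick sketch of what a healthy CURRENT looks like (the scratch directory and manifest name are illustrative, not taken from this broker):

```shell
# Recreate a healthy CURRENT file in a scratch directory (names are illustrative)
demo=$(mktemp -d)
printf 'MANIFEST-000010\n' > "$demo/CURRENT"

# A valid CURRENT names the active manifest and ends with a newline
cat "$demo/CURRENT"                     # MANIFEST-000010
tail -c 1 "$demo/CURRENT" | od -An -c   # \n
```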





[jira] [Closed] (AMQ-5429) Hadoop v1.0 Dependency

2017-02-03 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-5429.
-
Resolution: Won't Fix

LevelDB has been deprecated and is no longer supported.

> Hadoop v1.0 Dependency 
> ---
>
> Key: AMQ-5429
> URL: https://issues.apache.org/jira/browse/AMQ-5429
> Project: ActiveMQ
>  Issue Type: Improvement
>  Components: activemq-leveldb-store
>Reporter: Joe Fernandez
>Priority: Minor
>
> All references to Hadoop appear to be made only from the unit tests. So tag 
> the very old Hadoop 1.0 dependency with a scope of 'test'.  
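The suggested scope change would look something like this in the store module's pom.xml (the artifact coordinates shown are illustrative of the Hadoop 1.0 line, not copied from the actual build):

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.0.0</version>
  <!-- only the unit tests reference Hadoop, so keep it off the runtime classpath -->
  <scope>test</scope>
</dependency>
```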





[jira] [Closed] (AMQ-5498) Scheduled messages not saved to LevelDB backing

2017-02-03 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-5498.
-
Resolution: Won't Fix

LevelDB has been deprecated and is no longer supported.

> Scheduled messages not saved to LevelDB backing
> ---
>
> Key: AMQ-5498
> URL: https://issues.apache.org/jira/browse/AMQ-5498
> Project: ActiveMQ
>  Issue Type: New Feature
>  Components: activemq-leveldb-store, Job Scheduler
>Affects Versions: 5.10.0
>Reporter: Kevin Burton
>  Labels: kahadb, leveldb, scheduler
>
> If you enable LevelDB storage, scheduled messages do not use LevelDB; they 
> are only supported by KahaDB.  
> This causes a number of problems:
> 1. If you're using LevelDB replication, when you fail over, ALL scheduled 
> messages are lost.
> 2. You're still stuck with having KahaDB in your application. If you've made 
> the decision to migrate to LevelDB, you're still stuck with Kaha... 





[jira] [Closed] (AMQ-5235) erroneous temp percent used

2017-02-03 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-5235.
-
Resolution: Won't Fix

LevelDB has been deprecated and is no longer supported.

> erroneous temp percent used
> ---
>
> Key: AMQ-5235
> URL: https://issues.apache.org/jira/browse/AMQ-5235
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.9.0
> Environment: debian (quality testing and production)
>Reporter: anselme dewavrin
>
> Dear all,
> We have an activemq 5.9 configured with 1GB of tempUsage allowed, purely as a 
> precaution, because we only use persistent messages (about 6000 messages per 
> day). After several days of use, the temp usage increases, and even shows 
> values that are above the total amount of data on disk. Here it shows 45% 
> of its 1GB limit for the following files:
> find activemq-data -ls
> 768098014 drwxr-xr-x   5 anselme  anselme  4096 Jun 19 10:24 
> activemq-data
> 768098134 -rw-r--r--   1 anselme  anselme24 Jun 16 16:13 
> activemq-data/store-version.txt
> 768098174 drwxr-xr-x   2 anselme  anselme  4096 Jun 16 16:13 
> activemq-data/dirty.index
> 768098114 -rw-r--r--   2 anselme  anselme  2437 Jun 16 12:06 
> activemq-data/dirty.index/08.sst
> 768098204 -rw-r--r--   1 anselme  anselme16 Jun 16 16:13 
> activemq-data/dirty.index/CURRENT
> 76809819   80 -rw-r--r--   1 anselme  anselme 80313 Jun 16 16:13 
> activemq-data/dirty.index/11.sst
> 768098220 -rw-r--r--   1 anselme  anselme 0 Jun 16 16:13 
> activemq-data/dirty.index/LOCK
> 76809810  300 -rw-r--r--   2 anselme  anselme305206 Jun 16 11:51 
> activemq-data/dirty.index/05.sst
> 76809821 2048 -rw-r--r--   1 anselme  anselme   2097152 Jun 19 11:30 
> activemq-data/dirty.index/12.log
> 76809818 1024 -rw-r--r--   1 anselme  anselme   1048576 Jun 16 16:13 
> activemq-data/dirty.index/MANIFEST-10
> 768098160 -rw-r--r--   1 anselme  anselme 0 Jun 16 16:13 
> activemq-data/lock
> 76809815 102400 -rw-r--r--   1 anselme  anselme  104857600 Jun 19 11:30 
> activemq-data/00f0faaf.log
> 76809823 102400 -rw-r--r--   1 anselme  anselme  104857600 Jun 16 11:50 
> activemq-data/00385f46.log
> 768098074 drwxr-xr-x   2 anselme  anselme  4096 Jun 16 16:13 
> activemq-data/00f0faaf.index
> 76809808  420 -rw-r--r--   1 anselme  anselme429264 Jun 16 16:13 
> activemq-data/00f0faaf.index/09.log
> 768098114 -rw-r--r--   2 anselme  anselme  2437 Jun 16 12:06 
> activemq-data/00f0faaf.index/08.sst
> 768098124 -rw-r--r--   1 anselme  anselme   165 Jun 16 16:13 
> activemq-data/00f0faaf.index/MANIFEST-07
> 768098094 -rw-r--r--   1 anselme  anselme16 Jun 16 16:13 
> activemq-data/00f0faaf.index/CURRENT
> 76809810  300 -rw-r--r--   2 anselme  anselme305206 Jun 16 11:51 
> activemq-data/00f0faaf.index/05.sst
> 76809814 102400 -rw-r--r--   1 anselme  anselme  104857600 Jun 12 21:06 
> activemq-data/.log
> 768098024 drwxr-xr-x   2 anselme  anselme  4096 Jun 16 16:13 
> activemq-data/plist.index
> 768098034 -rw-r--r--   1 anselme  anselme16 Jun 16 16:13 
> activemq-data/plist.index/CURRENT
> 768098060 -rw-r--r--   1 anselme  anselme 0 Jun 16 16:13 
> activemq-data/plist.index/LOCK
> 76809805 1024 -rw-r--r--   1 anselme  anselme   1048576 Jun 16 16:13 
> activemq-data/plist.index/03.log
> 76809804 1024 -rw-r--r--   1 anselme  anselme   1048576 Jun 16 16:13 
> activemq-data/plist.index/MANIFEST-02
> The problem is that in our production system it once blocked producers with a 
> tempUsage at 122%, even though the disk was empty.
> So we investigated and ran the broker in a debugger, and found how the 
> usage is calculated. It is in the Scala LevelDB files: it is not based on 
> what is on disk, but on what it thinks is on disk. It multiplies the 
> size of one log by the number of logs known to a certain hashmap.
> I think the entries of the hashmap are not removed when the log files are 
> purged.
> Could you confirm?
> Thanks in advance, 
> Anselme
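The suspected accounting bug can be sketched as follows; all names here are illustrative, not the actual LevelDB store internals. If purged log files are never removed from the map, the usage estimate (log size × map size) keeps growing past what is actually on disk:

```java
import java.util.HashMap;
import java.util.Map;

public class TempUsageSketch {
    // each journal log file is 100 MB, matching the ls listing above
    static final long LOG_SIZE = 100L * 1024 * 1024;

    public static void main(String[] args) {
        // hypothetical map of log-id -> position, mirroring the suspected hashmap
        Map<Long, Long> knownLogs = new HashMap<>();

        // three logs were written...
        for (long id = 0; id < 3; id++) knownLogs.put(id, 0L);
        // ...two were later purged from disk, so only one remains
        long bytesOnDisk = 1 * LOG_SIZE;

        // but the purge never removed the map entries, so the estimate is stale
        long estimated = LOG_SIZE * knownLogs.size();

        System.out.println(estimated > bytesOnDisk); // prints true (300 MB vs 100 MB)
    }
}
```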





[jira] [Closed] (AMQ-5321) activeMQ levelDB

2017-02-03 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-5321.
-
   Resolution: Fixed
Fix Version/s: (was: NEEDS_REVIEW)

LevelDB has been deprecated and is no longer supported.

> activeMQ  levelDB
> -
>
> Key: AMQ-5321
> URL: https://issues.apache.org/jira/browse/AMQ-5321
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.10.0
> Environment: windows 7
>Reporter: Kevin
>
> https://issues.apache.org/jira/browse/AMQ-5257, which was duplicated by
> https://issues.apache.org/jira/browse/AMQ-5105, was claimed to be fixed in 
> 5.11.0, but when I used the unreleased 5.11.0 binaries 
> (https://repository.apache.org/content/repositories/snapshots/org/apache/activemq/apache-activemq/5.11-SNAPSHOT/),
>  it is not fixed yet.
> here is what I got:
> PSHOT\bin\win32>activemq.bat
> wrapper  | --> Wrapper Started as Console
> wrapper  | Launching a JVM...
> jvm 1| Wrapper (Version 3.2.3) http://wrapper.tanukisoftware.org
> jvm 1|   Copyright 1999-2006 Tanuki Software, Inc.  All Rights Reserved.
> jvm 1|
> jvm 1| Java Runtime: Oracle Corporation 1.7.0_67 C:\Program Files 
> (x86)\Java
> \jre7
> jvm 1|   Heap sizes: current=15872k  free=12305k  max=1013632k
> jvm 1| JVM args: -Dactivemq.home=../.. -Dactivemq.base=../.. 
> -Djavax.net
> .ssl.keyStorePassword=password -Djavax.net.ssl.trustStorePassword=password 
> -Djav
> ax.net.ssl.keyStore=../../conf/broker.ks 
> -Djavax.net.ssl.trustStore=../../conf/b
> roker.ts -Dcom.sun.management.jmxremote 
> -Dorg.apache.activemq.UseDedicatedTaskRu
> nner=true -Djava.util.logging.config.file=logging.properties 
> -Dactivemq.conf=../
> ../conf -Dactivemq.data=../../data 
> -Djava.security.auth.login.config=../../conf/
> login.config -Xmx1024m -Djava.library.path=../../bin/win32 
> -Dwrapper.key=7JJvTVF
> 5VnXQi50z -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 
> -Dwrapper.jvm.port.m
> ax=31999 -Dwrapper.pid=6236 -Dwrapper.version=3.2.3 
> -Dwrapper.native_library=wra
> pper -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1
> jvm 1| Extensions classpath:
> jvm 1|   
> [..\..\lib,..\..\lib\camel,..\..\lib\optional,..\..\lib\web,..\..\l
> ib\extra]
> jvm 1| ACTIVEMQ_HOME: ..\..
> jvm 1| ACTIVEMQ_BASE: ..\..
> jvm 1| ACTIVEMQ_CONF: ..\..\conf
> jvm 1| ACTIVEMQ_DATA: ..\..\data
> jvm 1| Loading message broker from: xbean:activemq.xml
> jvm 1|  INFO | Refreshing 
> org.apache.activemq.xbean.XBeanBrokerFactory$1@193
> c227: startup date [Wed Aug 13 10:00:30 EDT 2014]; root of context hierarchy
> jvm 1|  INFO | Using Persistence Adapter: Replicated 
> LevelDB[C:\ActiveMQ\apa
> che-activemq-5.11-20140808.003936-58-bin\apache-activemq-5.11-SNAPSHOT\bin\win32
> \..\..\data\leveldb, bosvsvm01:2181//activemq/leveldb-stores]
> jvm 1|  INFO | Starting StateChangeDispatcher
> jvm 1|  INFO | Client environment:zookeeper.version=3.4.5-1392090, built 
> on
> 09/30/2012 17:52 GMT
> jvm 1|  INFO | Client environment:host.name=WMT-VS009.bost.local
> jvm 1|  INFO | Client environment:java.version=1.7.0_67
> jvm 1|  INFO | Client environment:java.vendor=Oracle Corporation
> jvm 1|  INFO | Client environment:java.home=C:\Program Files 
> (x86)\Java\jre7
> jvm 1|  INFO | Client 
> environment:java.class.path=../../bin/wrapper.jar;../.
> ./bin/activemq.jar
> jvm 1|  INFO | Client environment:java.library.path=../../bin/win32
> jvm 1|  INFO | Client 
> environment:java.io.tmpdir=C:\Users\george\AppData\Lo
> cal\Temp\
> jvm 1|  INFO | Client environment:java.compiler=
> jvm 1|  INFO | Client environment:os.name=Windows 7
> jvm 1|  INFO | Client environment:os.arch=x86
> jvm 1|  INFO | Client environment:os.version=6.1
> jvm 1|  INFO | Client environment:user.name=george
> jvm 1|  INFO | Client environment:user.home=C:\Users\george
> jvm 1|  INFO | Client 
> environment:user.dir=C:\ActiveMQ\apache-activemq-5.11-
> 20140808.003936-58-bin\apache-activemq-5.11-SNAPSHOT\bin\win32
> jvm 1|  INFO | Initiating client connection, connectString=bosvsvm01:2181
>  sessionTimeout=2000 
> watcher=org.apache.activemq.leveldb.replicated.groups.ZKCli
> ent@1fbdfd0
> jvm 1|  WARN | SASL configuration failed: 
> javax.security.auth.login.LoginExc
> eption: No JAAS configuration section named 'Client' was found in specified 
> JAAS
>  configuration file: '../../conf/login.config'. Will continue connection to 
> Zook
> eeper server without SASL authentication, if Zookeeper server allows it.
> jvm 1|  INFO | Opening socket connection to server 
> brlvsvolap01.bluecrest.lo
> cal/10.42.0.109:2181
> jvm 1|  WARN | unprocessed event state: AuthFailed
> jvm 1|  INFO | Socket connection established to 

[jira] [Closed] (AMQ-5228) java.lang.NoClassDefFoundError: org/fusesource/leveldbjni/internal/JniDB error during cleanup

2017-02-03 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5228?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-5228.
-
Resolution: Won't Fix

LevelDB has been deprecated and is no longer supported.

> java.lang.NoClassDefFoundError: org/fusesource/leveldbjni/internal/JniDB 
> error during cleanup
> -
>
> Key: AMQ-5228
> URL: https://issues.apache.org/jira/browse/AMQ-5228
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.9.1, 5.10.0
> Environment: Linux  2.6.32-279.5.2.el6.x86_64 #1 SMP Thu Aug 23 
> 12:05:59 EDT 2012 x86_64 x86_64 x86_64 GNU/Linux
> JDK 1.7.0_51
> Apache Karaf 2.3.5 with activemq-osgi, activemq-blueprint, activemq-client, 
> activemq-webconsole, activemq-camel, activemq features installed.  Using 
> version 5.9.1 (and tried 5.10.0)
>Reporter: Timothy Stewart
>
> In our production environment, our storage folder runs out of disk space 
> every few days (75 GB).  Restarting the container addresses the issue; it 
> cleans the folder up.  I noticed a stack trace coming out on System.err in the 
> wrapper.log (Karaf starts with a wrapper service).  It may or may not be 
> related to our disk space issue:
> INFO   | jvm 1| 2014/06/13 03:22:36 | Exception in thread "Thread-117" 
> java.lang.NoClassDefFoundError: org/fusesource/leveldbjni/internal/JniDB
> INFO   | jvm 1| 2014/06/13 03:22:36 |   at 
> org.apache.activemq.leveldb.LevelDBClient$RichDB.compact(LevelDBClient.scala:377)
> INFO   | jvm 1| 2014/06/13 03:22:36 |   at 
> org.apache.activemq.leveldb.LevelDBClient.gc(LevelDBClient.scala:1647)
> INFO   | jvm 1| 2014/06/13 03:22:36 |   at 
> org.apache.activemq.leveldb.DBManager$$anonfun$pollGc$1$$anonfun$apply$mcV$sp$2.apply$mcV$sp(DBManager.scala:648)
> INFO   | jvm 1| 2014/06/13 03:22:36 |   at 
> org.fusesource.hawtdispatch.package$$anon$4.run(hawtdispatch.scala:357)
> INFO   | jvm 1| 2014/06/13 03:22:36 |   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> INFO   | jvm 1| 2014/06/13 03:22:36 |   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> INFO   | jvm 1| 2014/06/13 03:22:36 |   at 
> java.lang.Thread.run(Thread.java:744)
> INFO   | jvm 1| 2014/06/13 03:22:36 | Caused by: 
> java.lang.ClassNotFoundException: org.fusesource.leveldbjni.internal.JniDB 
> not found by org.apache.activemq.activemq-osgi [105]
> INFO   | jvm 1| 2014/06/13 03:22:36 |   at 
> org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1460)
> INFO   | jvm 1| 2014/06/13 03:22:36 |   at 
> org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:72)
> INFO   | jvm 1| 2014/06/13 03:22:36 |   at 
> org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:1843)
> INFO   | jvm 1| 2014/06/13 03:22:36 |   at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:358)
> INFO   | jvm 1| 2014/06/13 03:22:36 |   ... 7 more
> I see the same problem in our dev environment but could not replicate it.  I 
> was finally able to replicate it by using the hawtio console to execute the 
> compact operation.  Every time I do this, the same stack trace is output and the 
> operation cycles endlessly (well, as long as I've waited anyhow).  The 5.10.0 
> stack trace when I execute the operation is:
> INFO   | jvm 2| 2014/06/14 21:00:44 | Exception in thread "Thread-106" 
> java.lang.NoClassDefFoundError: org/fusesource/leveldbjni/internal/JniDB
> INFO   | jvm 2| 2014/06/14 21:00:44 |   at 
> org.apache.activemq.leveldb.LevelDBClient$RichDB.compact(LevelDBClient.scala:378)
> INFO   | jvm 2| 2014/06/14 21:00:44 |   at 
> org.apache.activemq.leveldb.LevelDBClient.gc(LevelDBClient.scala:1654)
> INFO   | jvm 2| 2014/06/14 21:00:44 |   at 
> org.apache.activemq.leveldb.LevelDBStoreView$$anonfun$compact$1.apply$mcV$sp(LevelDBStore.scala:126)
> INFO   | jvm 2| 2014/06/14 21:00:44 |   at 
> org.fusesource.hawtdispatch.package$$anon$4.run(hawtdispatch.scala:330)
> INFO   | jvm 2| 2014/06/14 21:00:44 |   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> INFO   | jvm 2| 2014/06/14 21:00:44 |   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> INFO   | jvm 2| 2014/06/14 21:00:44 |   at 
> java.lang.Thread.run(Thread.java:744)
> INFO   | jvm 2| 2014/06/14 21:00:44 | Caused by: 
> java.lang.ClassNotFoundException: org.fusesource.leveldbjni.internal.JniDB 
> not found by org.apache.activemq.activemq-osgi [390]
> INFO   | jvm 2| 2014/06/14 21:00:44 |   at 
> 

[jira] [Resolved] (AMQ-5225) broker will not start when using leveldb

2017-02-03 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved AMQ-5225.
---
   Resolution: Fixed
Fix Version/s: 5.14.0

> broker will not start when using leveldb
> 
>
> Key: AMQ-5225
> URL: https://issues.apache.org/jira/browse/AMQ-5225
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.10.0
> Environment: Centos 6.3
>Reporter: John Rushford
> Fix For: 5.14.0
>
>
> I've configured a 3 node activemq cluster using this as my guide: 
> http://activemq.apache.org/replicated-leveldb-store.html
> 1) Startup activemq on 1st node.
> 2) Startup activemq on 2nd node
>  At this point I see a log message on the 1st node stating it was 
> promoted to master.  Next I see this exception and am unable to connect to 
> the broker.  Each time an activemq instance is promoted to master, this 
> exception occurs and the broker is left unusable.
> java.io.IOException: 
> com.google.common.base.Objects.firstNonNull(Ljava/lang/Object;Ljava/lang/Object;)Ljava/lang/Object;
>   at 
> org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:39)[activemq-client-5.10.0.jar:5.10.0]
>   at 
> org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:552)[activemq-leveldb-store-5.10.0.jar:5.10.0]
>   at 
> org.apache.activemq.leveldb.LevelDBClient.replay_init(LevelDBClient.scala:657)[activemq-leveldb-store-5.10.0.jar:5.10.0]
>   at 
> org.apache.activemq.leveldb.LevelDBClient.start(LevelDBClient.scala:558)[activemq-leveldb-store-5.10.0.jar:5.10.0]
>   at 
> org.apache.activemq.leveldb.DBManager.start(DBManager.scala:648)[activemq-leveldb-store-5.10.0.jar:5.10.0]
>   at 
> org.apache.activemq.leveldb.LevelDBStore.doStart(LevelDBStore.scala:235)[activemq-leveldb-store-5.10.0.jar:5.10.0]
>   at 
> org.apache.activemq.leveldb.replicated.MasterLevelDBStore.doStart(MasterLevelDBStore.scala:110)[activemq-leveldb-store-5.10.0.jar:5.10.0]
>   at 
> org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)[activemq-client-5.10.0.jar:5.10.0]
>   at 
> org.apache.activemq.leveldb.replicated.ElectingLevelDBStore$$anonfun$start_master$1.apply$mcV$sp(ElectingLevelDBStore.scala:226)[activemq-leveldb-store-5.10.0.jar:5.10.0]
>   at 
> org.fusesource.hawtdispatch.package$$anon$4.run(hawtdispatch.scala:330)[hawtdispatch-scala-2.11-1.21.jar:1.21]
>   at java.lang.Thread.run(Thread.java:744)[:1.7.0_51]
> 2014-06-12 19:02:32,350 | INFO  | Stopped 
> LevelDB[/opt/apache-activemq-5.10.0/bin/linux-x86-64/../../data/leveldb] | 
> org.apache.activemq.leveldb.LevelDBStore | LevelDB IOException handler.
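The method named in the stack trace, com.google.common.base.Objects.firstNonNull, is a Guava API that later Guava releases moved to MoreObjects; an error carrying that signature at startup typically indicates two incompatible Guava versions colliding on the classpath. For reference, its contract behaves like this self-contained equivalent (not the Guava source):

```java
public class FirstNonNull {
    // returns the first non-null argument, mirroring Guava's documented contract
    static <T> T firstNonNull(T first, T second) {
        if (first != null) return first;
        if (second != null) return second;
        throw new NullPointerException("both arguments were null");
    }

    public static void main(String[] args) {
        System.out.println(firstNonNull(null, "fallback")); // prints "fallback"
        System.out.println(firstNonNull("primary", "fallback")); // prints "primary"
    }
}
```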





[jira] [Resolved] (AMQCLI-1) Populate project with initial structure

2017-02-03 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQCLI-1?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish resolved AMQCLI-1.
---
   Resolution: Fixed
Fix Version/s: 1.0.0

> Populate project with initial structure
> ---
>
> Key: AMQCLI-1
> URL: https://issues.apache.org/jira/browse/AMQCLI-1
> Project: ActiveMQ CLI Tools
>  Issue Type: Task
>Reporter: Timothy Bish
>Assignee: Timothy Bish
> Fix For: 1.0.0
>
>
> Add initial maven project structure, license and notice files along with some 
> initial documentation files.  





[jira] [Created] (AMQCLI-1) Populate project with initial structure

2017-02-03 Thread Timothy Bish (JIRA)
Timothy Bish created AMQCLI-1:
-

 Summary: Populate project with initial structure
 Key: AMQCLI-1
 URL: https://issues.apache.org/jira/browse/AMQCLI-1
 Project: ActiveMQ CLI Tools
  Issue Type: Task
Reporter: Timothy Bish
Assignee: Timothy Bish


Add initial maven project structure, license and notice files along with some 
initial documentation files.  





[jira] [Commented] (ARTEMIS-937) Use Proper disk alignment over libaio instead of 512 hard coded.

2017-02-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15852059#comment-15852059
 ] 

ASF subversion and git services commented on ARTEMIS-937:
-

Commit 83b00d6a8e61ce2683580aed1a67a37c26313ccb in activemq-artemis's branch 
refs/heads/1.x from Clebert Suconic
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=83b00d6 ]

ARTEMIS-937 no sync on AIO shouldn't use O_DIRECT

(cherry picked from commit c60c92697f782209875f21fad8b4fdecc3fdcd12)


> Use Proper disk alignment over libaio instead of 512 hard coded.
> 
>
> Key: ARTEMIS-937
> URL: https://issues.apache.org/jira/browse/ARTEMIS-937
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.5.2
>Reporter: clebert suconic
>Assignee: clebert suconic
> Fix For: 2.0.0, 1.5.x
>
>
> this will cause performance issues in a lot of current SSDs.
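The fix replaces the hard-coded 512-byte assumption with the alignment the device actually reports. The rounding involved can be sketched as follows (the helper name is illustrative, not the Artemis API):

```java
public class Alignment {
    // round a write size up to the next multiple of the device's reported alignment
    static long alignUp(long size, long alignment) {
        return ((size + alignment - 1) / alignment) * alignment;
    }

    public static void main(String[] args) {
        // a 100-byte write on a 512-byte device vs a 4 KiB device
        System.out.println(alignUp(100, 512));   // prints 512
        System.out.println(alignUp(100, 4096));  // prints 4096
        System.out.println(alignUp(4096, 4096)); // already aligned: prints 4096
    }
}
```

On a 4 KiB-sector SSD, padding every write to 512 bytes instead of the true alignment forces read-modify-write cycles in the device, which is the performance issue described above.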





[jira] [Commented] (ARTEMIS-935) Tool to recalculate disk sync times

2017-02-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15852058#comment-15852058
 ] 

ASF subversion and git services commented on ARTEMIS-935:
-

Commit 9321ade39bf8d51705ca2c4e9ba20bcfc2968799 in activemq-artemis's branch 
refs/heads/1.x from Clebert Suconic
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=9321ade ]

ARTEMIS-935 sync option on NIO

(cherry picked from commit 1ac63549901f9991b9319e950becb86dce8ea358)


> Tool to recalculate disk sync times
> ---
>
> Key: ARTEMIS-935
> URL: https://issues.apache.org/jira/browse/ARTEMIS-935
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: clebert suconic
>Assignee: clebert suconic
> Fix For: 2.0.0, 1.5.x
>
>






[jira] [Commented] (ARTEMIS-937) Use Proper disk alignment over libaio instead of 512 hard coded.

2017-02-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15852049#comment-15852049
 ] 

ASF subversion and git services commented on ARTEMIS-937:
-

Commit 6018b2d74cdc998013d918f0453c917ca00a855d in activemq-artemis's branch 
refs/heads/1.x from Clebert Suconic
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=6018b2d ]

ARTEMIS-937 Implementing proper alignment and adding perf-journal tool to 
validate the journal syncs

(cherry picked from commit ce035a8084874da3004cded844221629a9a3bc2e)


> Use Proper disk alignment over libaio instead of 512 hard coded.
> 
>
> Key: ARTEMIS-937
> URL: https://issues.apache.org/jira/browse/ARTEMIS-937
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.5.2
>Reporter: clebert suconic
>Assignee: clebert suconic
> Fix For: 2.0.0, 1.5.x
>
>
> this will cause performance issues in a lot of current SSDs.





[jira] [Commented] (AMQ-6432) Improve 'Failed to load next journal location: null' warning output

2017-02-03 Thread Timothy Bish (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851789#comment-15851789
 ] 

Timothy Bish commented on AMQ-6432:
---

Yes, turning off ack compaction will ensure you don't see this warning in your 
logs. 
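For reference, ack compaction is controlled on the KahaDB persistence adapter; a minimal sketch of the setting being discussed (directory shown is illustrative):

```xml
<persistenceAdapter>
  <!-- disables the ack-compaction task that triggered the warning -->
  <kahaDB directory="${activemq.data}/kahadb" enableAckCompaction="false"/>
</persistenceAdapter>
```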

> Improve 'Failed to load next journal location: null' warning output
> ---
>
> Key: AMQ-6432
> URL: https://issues.apache.org/jira/browse/AMQ-6432
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.14.0
>Reporter: Martin Lichtin
>Assignee: Gary Tully
> Fix For: 5.15.0
>
>
> Seeing
> {noformat}
> 2016-09-19 15:11:30,270 | WARN  | ournal Checkpoint Worker | MessageDatabase  
> | 
>emq.store.kahadb.MessageDatabase 2104 | 102 - 
> org.apache.activemq.activemq-osgi - 5.14.0 | 
>Failed to load next journal location: null
> {noformat}
> it'd be great to improve the output in such a case (Journal Checkpoint Worker).
> Why not show the exception stack trace? It seems odd to only show the stack 
> when debug level is enabled.





[jira] [Commented] (AMQ-6432) Improve 'Failed to load next journal location: null' warning output

2017-02-03 Thread Edwin Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851771#comment-15851771
 ] 

Edwin Yu commented on AMQ-6432:
---

Hi Gary, may I ask,  by setting enableAckCompaction=false, would I avoid this 
error in 5.14.x release?  We're considering going live with 5.14.3.  Thank you.

> Improve 'Failed to load next journal location: null' warning output
> ---
>
> Key: AMQ-6432
> URL: https://issues.apache.org/jira/browse/AMQ-6432
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.14.0
>Reporter: Martin Lichtin
>Assignee: Gary Tully
> Fix For: 5.15.0
>
>
> Seeing
> {noformat}
> 2016-09-19 15:11:30,270 | WARN  | ournal Checkpoint Worker | MessageDatabase  
> | 
>emq.store.kahadb.MessageDatabase 2104 | 102 - 
> org.apache.activemq.activemq-osgi - 5.14.0 | 
>Failed to load next journal location: null
> {noformat}
> it'd be great to improve the output in such a case (Journal Checkpoint Worker).
> Why not show the exception stack trace? It seems odd to only show the stack 
> when debug level is enabled.





[jira] [Resolved] (AMQ-6432) Improve 'Failed to load next journal location: null' warning output

2017-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully resolved AMQ-6432.
-
   Resolution: Fixed
Fix Version/s: 5.15.0

The issue was not related to syncs; it was a scan for acks that could venture 
into the newly created ack data file. Adding a limit to the getNextLocation 
scan provides the necessary restriction and avoids the EOF and the warning.
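As a rough illustration of the fix described above (the names and the journal model are hypothetical, not the actual ActiveMQ Journal API), a bounded scan simply refuses to read past an explicit end offset:

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of a bounded journal scan: entries at or past the
// limit (e.g. in a newly created ack data file) are never visited, so the
// scan cannot run into an EOF there.
final class BoundedScan {

    // Return entry offsets in [from, limit); 'limit' plays the role of the
    // restriction added to the getNextLocation scan.
    static List<Integer> nextLocations(List<Integer> offsets, int from, int limit) {
        return offsets.stream()
                .filter(o -> o >= from && o < limit)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> offsets = List.of(0, 32, 64, 96, 128);
        // Only scan up to the end of the current data file (offset 100 here):
        System.out.println(nextLocations(offsets, 32, 100)); // prints [32, 64, 96]
    }
}
```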

> Improve 'Failed to load next journal location: null' warning output
> ---
>
> Key: AMQ-6432
> URL: https://issues.apache.org/jira/browse/AMQ-6432
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.14.0
>Reporter: Martin Lichtin
>Assignee: Gary Tully
> Fix For: 5.15.0
>
>
> Seeing
> {noformat}
> 2016-09-19 15:11:30,270 | WARN  | ournal Checkpoint Worker | MessageDatabase  
> | 
>emq.store.kahadb.MessageDatabase 2104 | 102 - 
> org.apache.activemq.activemq-osgi - 5.14.0 | 
>Failed to load next journal location: null
> {noformat}
> It'd be great to improve the output in such a case (Journal Checkpoint Worker).
> Why not show the exception stack? It seems odd to only show the stack when
> debug level is enabled.





[jira] [Commented] (AMQ-6432) Improve 'Failed to load next journal location: null' warning output

2017-02-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851726#comment-15851726
 ] 

ASF subversion and git services commented on AMQ-6432:
--

Commit 9b64e188b59a395300a2f5d6022df9dbbae2f426 in activemq's branch 
refs/heads/master from [~gtully]
[ https://git-wip-us.apache.org/repos/asf?p=activemq.git;h=9b64e18 ]

[AMQ-6432] issue was journal scan on newly created ack file. I left the 
relevant braces from AMQ-6288 in place. Fix and test


> Improve 'Failed to load next journal location: null' warning output
> ---
>
> Key: AMQ-6432
> URL: https://issues.apache.org/jira/browse/AMQ-6432
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.14.0
>Reporter: Martin Lichtin
>Assignee: Gary Tully
>
> Seeing
> {noformat}
> 2016-09-19 15:11:30,270 | WARN  | ournal Checkpoint Worker | MessageDatabase  
> | 
>emq.store.kahadb.MessageDatabase 2104 | 102 - 
> org.apache.activemq.activemq-osgi - 5.14.0 | 
>Failed to load next journal location: null
> {noformat}
> It'd be great to improve the output in such a case (Journal Checkpoint Worker).
> Why not show the exception stack? It seems odd to only show the stack when
> debug level is enabled.





[jira] [Commented] (AMQ-6288) Message ack compaction needs to acquire the checkpoint lock

2017-02-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-6288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851727#comment-15851727
 ] 

ASF subversion and git services commented on AMQ-6288:
--

Commit 9b64e188b59a395300a2f5d6022df9dbbae2f426 in activemq's branch 
refs/heads/master from [~gtully]
[ https://git-wip-us.apache.org/repos/asf?p=activemq.git;h=9b64e18 ]

[AMQ-6432] issue was journal scan on newly created ack file. I left the 
relevant braces from AMQ-6288 in place. Fix and test


> Message ack compaction needs to acquire the checkpoint lock
> ---
>
> Key: AMQ-6288
> URL: https://issues.apache.org/jira/browse/AMQ-6288
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: Broker
>Affects Versions: 5.13.3
>Reporter: Christopher L. Shannon
>Assignee: Christopher L. Shannon
> Fix For: 5.14.0
>
>
> The AckCompactionRunner task needs to acquire the checkpoint lock to prevent 
> other threads from running a checkpoint while the task is running. Normally 
> this task runs on the same executor as the checkpoint task, so the ack 
> compaction wouldn't run at the same time as the checkpoint; they are 
> processed one at a time.
> However, there are two cases where this isn't always true.  First, the 
> checkpoint() method is public and can be called through the 
> PersistenceAdapter interface by someone at the same time the ack compaction 
> is running.  Second, a checkpoint is called during shutdown without using the 
> executor and could also run while the ack compaction is running.
> The main reason for this fix is that during testing I occasionally saw an 
> error from journal.getNextLocation() in the forwardAllAcks method because a 
> journal file was missing, which I believe had been removed by the cleanup 
> task. I was testing scenarios such as shutdown and also manually triggering 
> the task at the same time as an ack compaction.
> Also, while we are at it, we should have a try/catch around the 
> journal.getNextLocation calls to catch any IOException so we can abort 
> gracefully. 
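The locking described above can be pictured with a plain mutual-exclusion sketch (class and field names are illustrative, not the actual MessageDatabase internals): checkpoint and ack compaction share one lock, so a checkpoint invoked directly through the PersistenceAdapter, or during shutdown, cannot clean up journal files while a compaction is still scanning them.

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch only: both store maintenance tasks serialize on the
// same lock, mirroring the "acquire the checkpoint lock" fix described above.
final class StoreTasks {
    private final ReentrantLock checkpointLock = new ReentrantLock();
    int checkpoints;
    int compactions;

    void checkpoint() {
        checkpointLock.lock();
        try {
            checkpoints++; // flush the index, then remove unreferenced journal files
        } finally {
            checkpointLock.unlock();
        }
    }

    void compactAcks() {
        checkpointLock.lock(); // a concurrent checkpoint now waits here
        try {
            compactions++; // forward acks; journal files stay stable meanwhile
        } catch (RuntimeException e) {
            // per the suggestion above: catch scan failures and abort gracefully
        } finally {
            checkpointLock.unlock();
        }
    }

    public static void main(String[] args) {
        StoreTasks tasks = new StoreTasks();
        tasks.compactAcks();
        tasks.checkpoint();
        System.out.println(tasks.checkpoints + " " + tasks.compactions); // prints 1 1
    }
}
```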





[jira] [Updated] (AMQ-6432) Improve 'Failed to load next journal location: null' warning output

2017-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully updated AMQ-6432:

Issue Type: Bug  (was: Improvement)

> Improve 'Failed to load next journal location: null' warning output
> ---
>
> Key: AMQ-6432
> URL: https://issues.apache.org/jira/browse/AMQ-6432
> Project: ActiveMQ
>  Issue Type: Bug
>Affects Versions: 5.14.0
>Reporter: Martin Lichtin
>Assignee: Gary Tully
>
> Seeing
> {noformat}
> 2016-09-19 15:11:30,270 | WARN  | ournal Checkpoint Worker | MessageDatabase  
> | 
>emq.store.kahadb.MessageDatabase 2104 | 102 - 
> org.apache.activemq.activemq-osgi - 5.14.0 | 
>Failed to load next journal location: null
> {noformat}
> It'd be great to improve the output in such a case (Journal Checkpoint Worker).
> Why not show the exception stack? It seems odd to only show the stack when
> debug level is enabled.





[jira] [Assigned] (AMQ-6432) Improve 'Failed to load next journal location: null' warning output

2017-02-03 Thread Gary Tully (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-6432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Tully reassigned AMQ-6432:
---

Assignee: Gary Tully

> Improve 'Failed to load next journal location: null' warning output
> ---
>
> Key: AMQ-6432
> URL: https://issues.apache.org/jira/browse/AMQ-6432
> Project: ActiveMQ
>  Issue Type: Improvement
>Affects Versions: 5.14.0
>Reporter: Martin Lichtin
>Assignee: Gary Tully
>
> Seeing
> {noformat}
> 2016-09-19 15:11:30,270 | WARN  | ournal Checkpoint Worker | MessageDatabase  
> | 
>emq.store.kahadb.MessageDatabase 2104 | 102 - 
> org.apache.activemq.activemq-osgi - 5.14.0 | 
>Failed to load next journal location: null
> {noformat}
> It'd be great to improve the output in such a case (Journal Checkpoint Worker).
> Why not show the exception stack? It seems odd to only show the stack when
> debug level is enabled.





[jira] [Commented] (ARTEMIS-935) Tool to recalculate disk sync times

2017-02-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851663#comment-15851663
 ] 

ASF subversion and git services commented on ARTEMIS-935:
-

Commit 1ac63549901f9991b9319e950becb86dce8ea358 in activemq-artemis's branch 
refs/heads/master from Clebert Suconic
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=1ac6354 ]

ARTEMIS-935 sync option on NIO


> Tool to recalculate disk sync times
> ---
>
> Key: ARTEMIS-935
> URL: https://issues.apache.org/jira/browse/ARTEMIS-935
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: clebert suconic
>Assignee: clebert suconic
> Fix For: 2.0.0, 1.5.x
>
>






[jira] [Closed] (AMQ-5082) ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening

2017-02-03 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-5082.
-
   Resolution: Won't Fix
Fix Version/s: (was: 5.15.0)

LevelDB has been deprecated and is no longer supported.

> ActiveMQ replicatedLevelDB cluster breaks, all nodes stop listening
> ---
>
> Key: AMQ-5082
> URL: https://issues.apache.org/jira/browse/AMQ-5082
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.9.0, 5.10.0
>Reporter: Scott Feldstein
>Assignee: Christian Posta
>Priority: Critical
> Attachments: 03-07.tgz, amq_5082_threads.tar.gz, 
> mq-node1-cluster.failure, mq-node2-cluster.failure, mq-node3-cluster.failure, 
> zookeeper-failover-logs.7z, zookeeper.out-cluster.failure
>
>
> I have a 3 node amq cluster and one zookeeper node using a replicatedLevelDB 
> persistence adapter.
> {code}
> <persistenceAdapter>
>   <replicatedLevelDB
>     directory="${activemq.data}/leveldb"
>     replicas="3"
>     bind="tcp://0.0.0.0:0"
>     zkAddress="zookeep0:2181"
>     zkPath="/activemq/leveldb-stores"/>
> </persistenceAdapter>
> {code}
> After about a day or so of sitting idle there are cascading failures and the 
> cluster completely stops listening altogether.
> I can reproduce this consistently on 5.9 and the latest 5.10 (commit 
> 2360fb859694bacac1e48092e53a56b388e1d2f0).  I am going to attach logs from 
> the three mq nodes and the zookeeper logs that reflect the time where the 
> cluster starts having issues.
> The cluster stops listening Mar 4, 2014 4:56:50 AM (within 5 seconds).
> The OSs are all centos 5.9 on one esx server, so I doubt networking is an 
> issue.
> If you need more data it should be pretty easy to get whatever is needed 
> since it is consistently reproducible.
> This bug may be related to AMQ-5026, but looks different enough to file a 
> separate issue.





[jira] [Closed] (AMQ-5097) ElectingLevelDBStore - NPE in doStop

2017-02-03 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-5097.
-
Resolution: Won't Fix

LevelDB has been deprecated and is no longer supported.

> ElectingLevelDBStore - NPE in doStop
> 
>
> Key: AMQ-5097
> URL: https://issues.apache.org/jira/browse/AMQ-5097
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.10.0
>Reporter: Claus Ibsen
>Priority: Minor
>
> I copied the examples/conf/activemq-leveldb-replicating.xml to the conf 
> directory and started AMQ
> bin/activemq console xbean:conf/activemq-leveldb-replicating.xml
> Then after a while it gives up, and you get this NPE:
> {code}
> ERROR | Could not stop service: Replicated 
> LevelDB[/opt/apache-activemq-5.10-SNAPSHOT/data, 
> 127.0.0.1:2181//activemq/leveldb-stores]. Reason: 
> java.lang.NullPointerException
> java.lang.NullPointerException
>   at 
> org.apache.activemq.leveldb.replicated.ElectingLevelDBStore.doStop(ElectingLevelDBStore.scala:276)
>   at org.apache.activemq.util.ServiceSupport.stop(ServiceSupport.java:71)
>   at org.apache.activemq.util.ServiceStopper.stop(ServiceStopper.java:41)
>   at org.apache.activemq.broker.BrokerService.stop(BrokerService.java:775)
>   at 
> org.apache.activemq.xbean.XBeanBrokerService.stop(XBeanBrokerService.java:122)
>   at 
> org.apache.activemq.broker.BrokerService.start(BrokerService.java:601)
>   at 
> org.apache.activemq.console.command.StartCommand.runTask(StartCommand.java:88)
>   at 
> org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:57)
>   at 
> org.apache.activemq.console.command.ShellCommand.runTask(ShellCommand.java:150)
>   at 
> org.apache.activemq.console.command.AbstractCommand.execute(AbstractCommand.java:57)
>   at 
> org.apache.activemq.console.command.ShellCommand.main(ShellCommand.java:104)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at org.apache.activemq.console.Main.runTaskClass(Main.java:262)
>   at org.apache.activemq.console.Main.main(Main.java:115)
> ERROR | Could not stop service: Replicated 
> LevelDB[/opt/apache-activemq-5.10-SNAPSHOT/data, 
> 127.0.0.1:2181//activemq/leveldb-stores]. Reason: 
> java.lang.NullPointerException
> {code}





[jira] [Closed] (AMQ-5181) Replicated LevelDB Corruption on Solaris

2017-02-03 Thread Timothy Bish (JIRA)

 [ 
https://issues.apache.org/jira/browse/AMQ-5181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Bish closed AMQ-5181.
-
Resolution: Won't Fix

LevelDB has been deprecated and is no longer supported.

> Replicated LevelDB Corruption on Solaris
> 
>
> Key: AMQ-5181
> URL: https://issues.apache.org/jira/browse/AMQ-5181
> Project: ActiveMQ
>  Issue Type: Bug
>  Components: activemq-leveldb-store
>Affects Versions: 5.9.1, 5.10.0
> Environment: Solaris 5.10 on Sparc
>Reporter: Ed Schmed
>
> Steps to recreate:
> 3 Node ActiveMQ cluster using replicated leveldb, AMQ 5.9.1
> Start all three instances
> Using the web console, connect to the master and create a queue named test. 
> Also using the web console, send 100 persistent messages with priority 4 to 
> the queue.
> Issue kill command against the PID for the master broker
> When another broker tries to become master, CorruptionExceptions are thrown:
> 2014-05-12 09:30:22,910 | INFO  | No IOExceptionHandler registered, ignoring 
> IO exception | org.apache.activemq.broker.BrokerService | LevelDB IOException 
> handler.
> java.io.IOException: org.iq80.snappy.CorruptionException: Invalid copy offset 
> for opcode starting at 8
> at 
> org.apache.activemq.util.IOExceptionSupport.create(IOExceptionSupport.java:39)
> at 
> org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:552)
> at 
> org.apache.activemq.leveldb.LevelDBClient.replay_init(LevelDBClient.scala:657)
> at 
> org.apache.activemq.leveldb.LevelDBClient.start(LevelDBClient.scala:558)
> at org.apache.activemq.leveldb.DBManager.start(DBManager.scala:626)
> at 
> org.apache.activemq.leveldb.LevelDBStore.doStart(LevelDBStore.scala:236)
> at 
> org.apache.activemq.leveldb.replicated.MasterLevelDBStore.doStart(MasterLevelDBStore.scala:110)
> at 
> org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:55)
> at 
> org.apache.activemq.leveldb.replicated.ElectingLevelDBStore$$anonfun$start_master$1.apply$mcV$sp(ElectingLevelDBStore.scala:226)
> at 
> org.fusesource.hawtdispatch.package$$anon$4.run(hawtdispatch.scala:357)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: com.google.common.util.concurrent.UncheckedExecutionException: 
> org.iq80.snappy.CorruptionException: Invalid copy offset for opcode starting 
> at 8
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2256)
> at com.google.common.cache.LocalCache.get(LocalCache.java:3980)
> at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3984)
> at 
> com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4868)
> at org.iq80.leveldb.impl.TableCache.getTable(TableCache.java:80)
> at org.iq80.leveldb.impl.TableCache.newIterator(TableCache.java:69)
> at org.iq80.leveldb.impl.TableCache.newIterator(TableCache.java:64)
> at org.iq80.leveldb.impl.DbImpl.buildTable(DbImpl.java:983)
> at org.iq80.leveldb.impl.DbImpl.writeLevel0Table(DbImpl.java:932)
> at org.iq80.leveldb.impl.DbImpl.recoverLogFile(DbImpl.java:552)
> at org.iq80.leveldb.impl.DbImpl.<init>(DbImpl.java:209)
> at org.iq80.leveldb.impl.Iq80DBFactory.open(Iq80DBFactory.java:59)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$replay_init$2.apply$mcV$sp(LevelDBClient.scala:677)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$replay_init$2.apply(LevelDBClient.scala:657)
> at 
> org.apache.activemq.leveldb.LevelDBClient$$anonfun$replay_init$2.apply(LevelDBClient.scala:657)
> at 
> org.apache.activemq.leveldb.LevelDBClient.might_fail(LevelDBClient.scala:549)
> ... 11 more
> Caused by: org.iq80.snappy.CorruptionException: Invalid copy offset for 
> opcode starting at 8
> at 
> org.iq80.snappy.SnappyDecompressor.decompressAllTags(SnappyDecompressor.java:165)
> at 
> org.iq80.snappy.SnappyDecompressor.uncompress(SnappyDecompressor.java:76)
> at org.iq80.snappy.Snappy.uncompress(Snappy.java:43)
> at org.iq80.leveldb.util.Snappy$IQ80Snappy.uncompress(Snappy.java:100)
> at org.iq80.leveldb.util.Snappy.uncompress(Snappy.java:160)
> at 
> org.iq80.leveldb.table.FileChannelTable.readBlock(FileChannelTable.java:74)
> at org.iq80.leveldb.table.Table.<init>(Table.java:60)
> at 
> org.iq80.leveldb.table.FileChannelTable.<init>(FileChannelTable.java:34)
> at 
> org.iq80.leveldb.impl.TableCache$TableAndFile.<init>(TableCache.java:117)
> at 
> 

[jira] [Commented] (AMQ-2860) EOFException and ActiveMQMapMessage with null properties

2017-02-03 Thread Scott K Pullano (JIRA)

[ 
https://issues.apache.org/jira/browse/AMQ-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851565#comment-15851565
 ] 

Scott K Pullano commented on AMQ-2860:
--

Hey everyone, this is happening for me and I am on MQ client 5.13.2. Below is my 
error. I am using Apache Camel 2.16.2 with MQ 5.13.2. The error occurs on the 
client side; the broker is on 5.10.0. I tried downgrading my client to 5.10.0 
and still got the error, so I ruled that out. It also only appears under high 
load: if I load 1000 messages in 10 seconds I get about 20 of these. Has anyone 
seen or fixed this issue? I need a resolution ASAP.
```
2017-02-02 17:50:54,143] [WARN ] [Camel (camel-1) thread #6 - JmsConsumer 
Execution of JMS message listener failed. Caused by: 
[org.apache.camel.RuntimeCamelException - javax.jms.JMSException: 
java.io.EOFException]
org.apache.camel.RuntimeCamelException: javax.jms.JMSException: 
java.io.EOFException
at 
org.apache.camel.component.jms.JmsBinding.extractHeadersFromJms(JmsBinding.java:193)
 ~[camel-jms-2.16.2.jar:2.16.2]
at 
org.apache.camel.component.jms.JmsMessage.populateInitialHeaders(JmsMessage.java:244)
 ~[camel-jms-2.16.2.jar:2.16.2]
at 
org.apache.camel.impl.DefaultMessage.createHeaders(DefaultMessage.java:203) 
~[camel-core-2.16.2.jar:2.16.2]
at 
org.apache.camel.component.jms.JmsMessage.ensureInitialHeaders(JmsMessage.java:229)
 ~[camel-jms-2.16.2.jar:2.16.2]
at 
org.apache.camel.component.jms.JmsMessage.getHeaders(JmsMessage.java:187) 
~[camel-jms-2.16.2.jar:2.16.2]
at 
org.apache.camel.impl.DefaultUnitOfWork.<init>(DefaultUnitOfWork.java:91) 
~[camel-core-2.16.2.jar:2.16.2]
at 
org.apache.camel.impl.DefaultUnitOfWork.<init>(DefaultUnitOfWork.java:72) 
~[camel-core-2.16.2.jar:2.16.2]
at 
org.apache.camel.impl.DefaultUnitOfWorkFactory.createUnitOfWork(DefaultUnitOfWorkFactory.java:34)
 ~[camel-core-2.16.2.jar:2.16.2]
at 
org.apache.camel.processor.CamelInternalProcessor$UnitOfWorkProcessorAdvice.createUnitOfWork(CamelInternalProcessor.java:663)
 ~[camel-core-2.16.2.jar:2.16.2]
at 
org.apache.camel.processor.CamelInternalProcessor$UnitOfWorkProcessorAdvice.before(CamelInternalProcessor.java:631)
 ~[camel-core-2.16.2.jar:2.16.2]
at 
org.apache.camel.processor.CamelInternalProcessor$UnitOfWorkProcessorAdvice.before(CamelInternalProcessor.java:608)
 ~[camel-core-2.16.2.jar:2.16.2]
at 
org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:138)
 ~[camel-core-2.16.2.jar:2.16.2]
at 
org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:109)
 ~[camel-core-2.16.2.jar:2.16.2]
at 
org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:87)
 ~[camel-core-2.16.2.jar:2.16.2]
at 
org.apache.camel.component.jms.EndpointMessageListener.onMessage(EndpointMessageListener.java:112)
 ~[camel-jms-2.16.2.jar:2.16.2]
at 
org.springframework.jms.listener.AbstractMessageListenerContainer.doInvokeListener(AbstractMessageListenerContainer.java:689)
 ~[spring-jms-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at 
org.springframework.jms.listener.AbstractMessageListenerContainer.invokeListener(AbstractMessageListenerContainer.java:649)
 ~[spring-jms-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at 
org.springframework.jms.listener.AbstractMessageListenerContainer.doExecuteListener(AbstractMessageListenerContainer.java:619)
 ~[spring-jms-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at 
org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:307)
 [spring-jms-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at 
org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:245)
 [spring-jms-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at 
org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:1144)
 [spring-jms-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at 
org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:1136)
 [spring-jms-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at 
org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:1033)
 [spring-jms-4.1.9.RELEASE.jar:4.1.9.RELEASE]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[na:1.8.0_51]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[na:1.8.0_51]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_51]
Caused by: javax.jms.JMSException: java.io.EOFException
at 
org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:72)
 ~[activemq-client-5.13.2.jar:5.13.2]
at 

[jira] [Commented] (ARTEMIS-906) Memory Mapped JournalType

2017-02-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851554#comment-15851554
 ] 

ASF subversion and git services commented on ARTEMIS-906:
-

Commit aacddfda61804b203dc8b3efdebafa9384662e22 in activemq-artemis's branch 
refs/heads/master from [~nigro@gmail.com]
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=aacddfd ]

ARTEMIS-906 Memory Mapped JournalType


> Memory Mapped JournalType
> -
>
> Key: ARTEMIS-906
> URL: https://issues.apache.org/jira/browse/ARTEMIS-906
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Priority: Minor
>
> It fixes the original memory-mapped version of SequentialFile and provides 
> the configuration of a high-performance memory-mapped journal.
> New sanity tests and performance tests are added to align the implementation 
> with the current NIO and AIO versions.





[jira] [Commented] (ARTEMIS-906) Memory Mapped JournalType

2017-02-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851555#comment-15851555
 ] 

ASF subversion and git services commented on ARTEMIS-906:
-

Commit c039aae37fbc3b0eb8fe0b0289fb2af2def84f6a in activemq-artemis's branch 
refs/heads/master from Clebert Suconic
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=c039aae ]

ARTEMIS-906 Adding Paging tests for mapped journal


> Memory Mapped JournalType
> -
>
> Key: ARTEMIS-906
> URL: https://issues.apache.org/jira/browse/ARTEMIS-906
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Priority: Minor
>
> It fixes the original memory-mapped version of SequentialFile and provides 
> the configuration of a high-performance memory-mapped journal.
> New sanity tests and performance tests are added to align the implementation 
> with the current NIO and AIO versions.





[jira] [Commented] (ARTEMIS-937) Use Proper disk alignment over libaio instead of 512 hard coded.

2017-02-03 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851553#comment-15851553
 ] 

ASF subversion and git services commented on ARTEMIS-937:
-

Commit ce035a8084874da3004cded844221629a9a3bc2e in activemq-artemis's branch 
refs/heads/master from Clebert Suconic
[ https://git-wip-us.apache.org/repos/asf?p=activemq-artemis.git;h=ce035a8 ]

ARTEMIS-937 Implementing proper alignment and adding perf-journal tool to 
validate the journal syncs


> Use Proper disk alignment over libaio instead of 512 hard coded.
> 
>
> Key: ARTEMIS-937
> URL: https://issues.apache.org/jira/browse/ARTEMIS-937
> Project: ActiveMQ Artemis
>  Issue Type: Bug
>Affects Versions: 1.5.2
>Reporter: clebert suconic
>Assignee: clebert suconic
> Fix For: 2.0.0, 1.5.x
>
>
> This will cause performance issues on a lot of current SSDs.





[jira] [Commented] (ARTEMIS-906) Memory Mapped JournalType

2017-02-03 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/ARTEMIS-906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15851525#comment-15851525
 ] 

ASF GitHub Bot commented on ARTEMIS-906:


Github user franz1981 closed the pull request at:

https://github.com/apache/activemq-artemis/pull/981


> Memory Mapped JournalType
> -
>
> Key: ARTEMIS-906
> URL: https://issues.apache.org/jira/browse/ARTEMIS-906
> Project: ActiveMQ Artemis
>  Issue Type: Improvement
>Reporter: Francesco Nigro
>Priority: Minor
>
> It fixes the original memory-mapped version of SequentialFile and provides 
> the configuration of a high-performance memory-mapped journal.
> New sanity tests and performance tests are added to align the implementation 
> with the current NIO and AIO versions.


