Re: Karaf 4.1.1 console issue with list and grep
Understood, thx for the clarification. -- View this message in context: http://karaf.922171.n3.nabble.com/Karaf-4-1-1-console-issue-with-list-and-grep-tp4050295p4050298.html Sent from the Karaf - User mailing list archive at Nabble.com.
Karaf 4.1.1 console issue with list and grep
bundle:list | grep -i "foo "

On Mac, the space and end quote are auto-completed when I type the first quote. I can't delete the space, so I can't search on "foo bar". Known issue/bug? It does not reproduce on Ubuntu. -- View this message in context: http://karaf.922171.n3.nabble.com/Karaf-4-1-1-console-issue-with-list-and-grep-tp4050295.html Sent from the Karaf - User mailing list archive at Nabble.com.
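For comparison, plain grep outside the Karaf console has no trouble with a quoted two-word pattern, so the behavior described above lives in the console's line editor, not in grep itself. A minimal sketch (the sample input is made up):

```shell
# Outside Karaf, a quoted multi-word pattern works as expected; the
# auto-closing quote is a console (line editor) behavior, not grep's.
printf 'foo bar\nbaz qux\n' | grep -i "foo bar"
# prints: foo bar
```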
bundle:info missing loaded location of JAR
Karaf 3.0.6. When I run "bundle:info 222" I don't see the file location of the loaded JAR. When I view the bundle.info file's contents in data/cache/bundle222, I do see it (e.g. file:/path/to/file/myJar.jar). Is it planned to add this info to the output of the "bundle:info" cmd? Or is there a reason it's missing? I searched the forum for hits on "bundle:info" and didn't find any, so posting now. thx. -- View this message in context: http://karaf.922171.n3.nabble.com/bundle-info-missing-loaded-location-of-JAR-tp4048377.html Sent from the Karaf - User mailing list archive at Nabble.com.
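One possible workaround, assuming the -l (show location) option of bundle:list is available in this version — treat this as a sketch, not verified against 3.0.6:

```
karaf@root()> bundle:list -l | grep 222
```

The location column should show the same file:/ URL that appears in the cache's bundle.info file.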
NPE when using bundle:id
karaf> system:version
3.0.3
karaf> bundle:id "foo"
Error executing command: java.lang.NullPointerException

2016-04-25 09:20:51,190 | ERROR | l for user karaf | ShellUtil | 27 - org.apache.karaf.shell.console - 3.0.3 | Exception caught while executing command
java.lang.NullPointerException
    at org.apache.karaf.bundle.command.Id.doExecute(Id.java:48)[31:org.apache.karaf.bundle.command:3.0.3]
    at org.apache.karaf.bundle.command.Id.doExecute(Id.java:40)[31:org.apache.karaf.bundle.command:3.0.3]
    at org.apache.karaf.shell.console.AbstractAction.execute(AbstractAction.java:33)[27:org.apache.karaf.shell.console:3.0.3]
    at org.apache.karaf.shell.console.OsgiCommandSupport.execute(OsgiCommandSupport.java:39)[27:org.apache.karaf.shell.console:3.0.3]
    at org.apache.karaf.shell.commands.basic.AbstractCommand.execute(AbstractCommand.java:33)[27:org.apache.karaf.shell.console:3.0.3]
    at Proxy65aa54bc_9353_425d_8c20_861c5754e083.execute(Unknown Source)[:]
    at Proxy65aa54bc_9353_425d_8c20_861c5754e083.execute(Unknown Source)[:]
    at org.apache.felix.gogo.runtime.CommandProxy.execute(CommandProxy.java:78)[27:org.apache.karaf.shell.console:3.0.3]
    at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:477)[27:org.apache.karaf.shell.console:3.0.3]
    at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:403)[27:org.apache.karaf.shell.console:3.0.3]
    at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)[27:org.apache.karaf.shell.console:3.0.3]
    at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:183)[27:org.apache.karaf.shell.console:3.0.3]
    at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:120)[27:org.apache.karaf.shell.console:3.0.3]
    at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:92)
    at org.apache.karaf.shell.console.impl.jline.ConsoleImpl.run(ConsoleImpl.java:208)
    at org.apache.karaf.shell.console.impl.jline.LocalConsoleManager$2$1$1.run(LocalConsoleManager.java:109)
    at java.security.AccessController.doPrivileged(Native Method)[:1.8.0_45]
    at org.apache.karaf.jaas.modules.JaasHelper.doAs(JaasHelper.java:57)[28:org.apache.karaf.jaas.modules:3.0.3]
    at org.apache.karaf.shell.console.impl.jline.LocalConsoleManager$2$1.run(LocalConsoleManager.java:102)[27:org.apache.karaf.shell.console:3.0.3]

-- View this message in context: http://karaf.922171.n3.nabble.com/NPE-when-using-bundle-id-tp4046355.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: exported package in bundle A is not able to be imported in bundle B
I am seeing "Activator start error" when doing a feature:install xxx. What is the recommended course of action? Just install the bundles one by one, in top-to-bottom order from the features XML? Seems difficult to debug this way... -- View this message in context: http://karaf.922171.n3.nabble.com/exported-package-in-bundle-A-is-not-able-to-be-imported-in-bundle-B-tp4046019p4046045.html Sent from the Karaf - User mailing list archive at Nabble.com.
importing and exporting same package in same bundle
What are the side-effects/consequences of importing and exporting the same package in the same bundle? Is this normal or bad practice, and if we are doing this in our MANIFEST.MF, how do we correct it in the pom.xml and the maven-bundle-plugin config/instructions? thx. -- View this message in context: http://karaf.922171.n3.nabble.com/importing-and-exporting-same-package-in-same-bundle-tp4046020.html Sent from the Karaf - User mailing list archive at Nabble.com.
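For what it's worth, bnd (which maven-bundle-plugin delegates to) generates a matching Import-Package entry for each exported package by default — the so-called substitutable export — so an export+import of the same package in MANIFEST.MF is often intentional rather than a mistake. A hypothetical pom.xml sketch (the package name com.example.api is a placeholder, not from the original post):

```xml
<!-- Hypothetical maven-bundle-plugin configuration; com.example.api is a
     placeholder package name. -->
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <!-- bnd will by default also emit an Import-Package entry for this
           exported package (substitutable export). -->
      <Export-Package>com.example.api</Export-Package>
      <!-- To suppress the self-import instead, one could write:
           <Import-Package>!com.example.api, *</Import-Package> -->
    </instructions>
  </configuration>
</plugin>
```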
exported package in bundle A is not able to be imported in bundle B
Karaf 3.0.3. Bundle B is in GracePeriod (and ultimately Failure) with a dependency on bundle A (which is active and has a lower start-level than bundle B). Eventually bundle:diag 123 (for bundle B) gives:

Status: Failure
Blueprint 3/28/16 3:24 PM
Exception: null
java.util.concurrent.TimeoutException
    at org.apache.aries.blueprint.container.BlueprintContainerImpl$1.run(BlueprintContainerImpl.java:336)
    at org.apache.aries.blueprint.utils.threading.impl.DiscardableRunnable.run(DiscardableRunnable.java:48)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Missing dependencies:
(objectClass=com.foo.bar.myClass)

When I run package:exports | grep com.foo.bar, I see only the one expected bundle exporting that package (I've triple-checked the MANIFEST.MF files for both bundles in data/cache/xyz), i.e. there is no split-package problem in this scenario afaik. Any idea how/why this happens and how to resolve it? thx. -- View this message in context: http://karaf.922171.n3.nabble.com/exported-package-in-bundle-A-is-not-able-to-be-imported-in-bundle-B-tp4046019.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: using bundle:watch for bundle refreshes and updates
So is it looking in the local .m2/repository path, or elsewhere? If the former, then I suppose this should be sufficient, as the built JAR will be uploaded to .m2/repository/path/to/bundle.jar.

jbonofre wrote
> Hi,
>
> bundle:watch works only for mvn URL containing SNAPSHOT by default.
>
> Regards
> JB
>
> On 11/20/2015 06:32 AM, asookazian2 wrote:
>> I am currently using the following cmd to update an existing bundle:
>>
>> update 384 file:/path/to/bundleFoo.jar
>> refresh 384
>>
>> According to this doc:
>> https://karaf.apache.org/manual/latest/commands/bundle-watch.html
>>
>> The param for the bundle:watch cmd is URLs (bundle IDs or URLs)
>>
>> I'd like to have a specific file location/JAR be auto-reloaded and refreshed
>> whenever the JAR is rebuilt by 'maven clean install'. What is the syntax to
>> achieve same result as update/refresh cmds above with bundle:watch? Also,
>> how often does bundle:watch poll for changes to the file?
>>
>> thx.
>>
>> -- View this message in context:
>> http://karaf.922171.n3.nabble.com/using-bundle-watch-for-bundle-refreshes-and-updates-tp4043630.html
>> Sent from the Karaf - User mailing list archive at Nabble.com.
>
> --
> Jean-Baptiste Onofré
> jbonofre@
> http://blog.nanthrax.net
> Talend - http://www.talend.com

-- View this message in context: http://karaf.922171.n3.nabble.com/using-bundle-watch-for-bundle-refreshes-and-updates-tp4043630p4043645.html Sent from the Karaf - User mailing list archive at Nabble.com.
using bundle:watch for bundle refreshes and updates
I am currently using the following cmds to update an existing bundle:

update 384 file:/path/to/bundleFoo.jar
refresh 384

According to this doc: https://karaf.apache.org/manual/latest/commands/bundle-watch.html the param for the bundle:watch cmd is URLs (bundle IDs or URLs). I'd like a specific file location/JAR to be auto-reloaded and refreshed whenever the JAR is rebuilt by 'maven clean install'. What is the syntax to achieve the same result as the update/refresh cmds above with bundle:watch? Also, how often does bundle:watch poll for changes to the file? thx. -- View this message in context: http://karaf.922171.n3.nabble.com/using-bundle-watch-for-bundle-refreshes-and-updates-tp4043630.html Sent from the Karaf - User mailing list archive at Nabble.com.
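A hedged sketch of how bundle:watch is typically used — note it watches mvn: URLs (SNAPSHOTs by default), not file: URLs, and the coordinates below are placeholders:

```
karaf@root()> bundle:install -s mvn:com.foo/bundleFoo/1.0.0-SNAPSHOT
karaf@root()> bundle:watch mvn:com.foo/bundleFoo/1.0.0-SNAPSHOT
```

After 'mvn clean install' republishes the SNAPSHOT to the local repository, the watched bundle should be updated and refreshed automatically on the next poll.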
Debugging Karaf deployed app with Eclipse
Eclipse Kepler SR1, Karaf 3.0.3. I have a feature with a few bundles that are already installed. I can attach the Eclipse debugger to socket 5005 as a remote Java app and step thru the code with no issues.

I make changes to a particular source file which maps to bundle 340. I uninstall bundle 340 and re-install it from a system file location (install -s file:/foo/bar/baz), then resolve 340, refresh 340, restart 340. Bundle 340 is active. I terminate and re-launch the debugger in Eclipse. I hit a breakpoint and step thru the code, but the lines of execution are no longer logical (skipping over lines of code as if the code in the runtime JVM and the source in the Eclipse debugger don't match).

I restart karaf; all bundles are active. I relaunch the Eclipse debugger. Now I hit all logical breakpoints and can step thru the lines of code as expected. Is this the expected experience (i.e. must I restart Karaf for the changes to take effect for a single bundle update)? thx. -- View this message in context: http://karaf.922171.n3.nabble.com/Debugging-Karaf-deployed-app-with-Eclipse-tp4043203.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Debugging Karaf deployed app with Eclipse
Thanks much, that solves the problem... -- View this message in context: http://karaf.922171.n3.nabble.com/Debugging-Karaf-deployed-app-with-Eclipse-tp4043203p4043205.html Sent from the Karaf - User mailing list archive at Nabble.com.
Equivalent to Spring AOP or EJB3 interceptor in Karaf?
What is the equivalent of a Spring AOP or EJB3 interceptor in Karaf? Typical use cases are auditing and profiling, for example. What is recommended (any frmwk out-of-the-box in Karaf 3.x or 4.x, or do we need to use an external frmwk)? thx. -- View this message in context: http://karaf.922171.n3.nabble.com/Equivalent-to-Spring-AOP-or-EJB3-interceptor-in-Karaf-tp4043161.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Equivalent to Spring AOP or EJB3 interceptor in Karaf?
Apparently these are the recommended approaches with CXF/JAX-RS: http://cxf.apache.org/docs/interceptors.html https://dzone.com/articles/common-cxf-request-interceptor -- View this message in context: http://karaf.922171.n3.nabble.com/Equivalent-to-Spring-AOP-or-EJB3-interceptor-in-Karaf-tp4043161p4043162.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: bundle corruption with felix and karaf 3.0.2
Hi, is this bundle corruption problem fixed in Karaf 3.0.5? thx. -- View this message in context: http://karaf.922171.n3.nabble.com/bundle-corruption-with-felix-and-karaf-3-0-2-tp4037669p4042932.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: bundle corruption with felix and karaf 3.0.2
Also, we were seeing bundle corruption when terminating java process for Karaf 3.0.x (e.g. kill -9 or OS crashes). Is this fixed as well or is there a workaround? -- View this message in context: http://karaf.922171.n3.nabble.com/bundle-corruption-with-felix-and-karaf-3-0-2-tp4037669p4042934.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: [ANN] Apache Karaf 3.0.4 Released!
Are there any regressions with 3.0.4? We are having hanging behavior with ssh cmds in 3.0.4, with no stack trace/errors. Specifically, did the following patch break related functionality? https://issues.apache.org/jira/browse/KARAF-3656 -- View this message in context: http://karaf.922171.n3.nabble.com/ANN-Apache-Karaf-3-0-4-Released-tp4041254p4041306.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: SLF4J logging error in console on startup
Custom distro based on 3.0.3; it does not reproduce with plain/vanilla/out-of-the-box 3.0.3. I deleted all bundles with start level = 50 but it still reproduces. In terms of a diff after deleting, there are 128 (custom) vs 64 (3.0.3) bundles with start level 50. I tried to delete this thread yesterday so you can ignore it; I'll determine which bundle(s) is/are causing the problem. btw, I tried using the original org.ops4j.pax.logging.cfg file from 3.0.3 and it still reproduces with the custom distro (based on 3.0.3).

jbonofre wrote
> Did you change something in etc/org.ops4j.pax.logging.cfg and pax-logging bundles are correctly started ? Do you use a custom distro ?
> I just tested with a fresh Karaf 3.0.3 and it starts fine, including pax-logging.
>
> Regards
> JB
>
> On 05/19/2015 10:50 PM, asookazian2 wrote:
>> Karaf 3.0.3
>> SLF4J: Failed to load class org.slf4j.impl.StaticLoggerBinder.
>> SLF4J: Defaulting to no-operation (NOP) logger implementation
>> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
>> Please add a JIRA to specify which bundle(s) is/are causing this problem, I see nothing logged in the karaf.log either. thx.
>> -- View this message in context: http://karaf.922171.n3.nabble.com/SLF4J-logging-error-in-console-on-startup-tp4040505.html
>> Sent from the Karaf - User mailing list archive at Nabble.com.
>
> --
> Jean-Baptiste Onofré
> jbonofre@
> http://blog.nanthrax.net
> Talend - http://www.talend.com

-- View this message in context: http://karaf.922171.n3.nabble.com/Re-SLF4J-logging-error-in-console-on-startup-tp4040511p4040522.html Sent from the Karaf - User mailing list archive at Nabble.com.
SLF4J logging error in console on startup
Karaf 3.0.3

SLF4J: Failed to load class org.slf4j.impl.StaticLoggerBinder.
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

Please add a JIRA to specify which bundle(s) is/are causing this problem, I see nothing logged in the karaf.log either. thx. -- View this message in context: http://karaf.922171.n3.nabble.com/SLF4J-logging-error-in-console-on-startup-tp4040505.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: WebServiceContext is null after upgrading to Karaf 3.0.3
@Context is specific to JAX-RS and this is a SOAP WS, so that is not applicable here. -- View this message in context: http://karaf.922171.n3.nabble.com/WebServiceContext-is-null-after-upgrading-to-Karaf-3-0-3-tp4040185p4040186.html Sent from the Karaf - User mailing list archive at Nabble.com.
WebServiceContext is null after upgrading to Karaf 3.0.3
Suddenly we are getting an NPE for the WebServiceContext instance in code that has not changed before or after the Karaf 3.0.3 upgrade (the culprit most likely is CXF 3.0.4). This is causing a web service call from a web client to fail. We are using the following code:

javax.xml.ws.WebServiceContext foo;

We are considering replacing it with the following (as we really only need MessageContext to get HTTP request headers):

//@Context
@Resource
javax.xml.ws.handler.MessageContext foo;

We are not using @Context anywhere in our codebase; however, it was recommended here to use it: http://t80463.apache-cxf-user.apachetalks.us/webservicecontext-is-null-t80463.html It will not resolve when I try to organize imports in my Eclipse project/workspace. Any help appreciated. thx. -- View this message in context: http://karaf.922171.n3.nabble.com/WebServiceContext-is-null-after-upgrading-to-Karaf-3-0-3-tp4040185.html Sent from the Karaf - User mailing list archive at Nabble.com.
WADL for REST class is not updating after REST bundle update
Hi, I'm currently adding some public methods to an existing REST API (class). The WADL after bundle:update (or bundle:uninstall, bundle:install) is not reflecting the newly added methods. I have verified that the REST JAR is being built correctly, I checked the timestamp on the JAR, and the contents of the class are correct (as per a Java decompiler). I am not seeing any exceptions in the karaf log. The old REST URIs/methods are working properly. Example of a newly added method:

@GET
@Path("/x/y/z")
@Produces({MediaType.APPLICATION_JSON})
public Response arbiTest3() {
    logger.trace("arbiTest3 invoked");
    return Response.ok("it worked!").build();
}

So I'm not seeing this method in the WADL after bundle:update, and even after restarting karaf. It's behaving like I'm installing an old version of the JAR, which is not the case afaik. Anybody seen this behavior before? Karaf 3.0.2. Any input appreciated. thx. -- View this message in context: http://karaf.922171.n3.nabble.com/WADL-for-REST-class-is-not-updating-after-REST-bundle-update-tp4038988.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: bundle:update
OK, what about refresh, restart, and resolve after update? Are those recommended/required? -- View this message in context: http://karaf.922171.n3.nabble.com/bundle-update-tp4038534p4038716.html Sent from the Karaf - User mailing list archive at Nabble.com.
bundle:update
Hi, with Karaf 3.0.2: assume we have bundleA (a REST API) which uses bundleB (a DAO API). I have new versions of these bundles from a maven build. I bundle:update B, then bundle:update A. The cmd is update bundleid file:/path/to/bundle. Do I need to also subsequently issue bundle:refresh and/or bundle:resolve and/or bundle:restart to effect the loading/use of the new versions of these bundles? How do you know which version of a particular bundle is in use by Karaf? Is there a way to check this in the console? -- View this message in context: http://karaf.922171.n3.nabble.com/bundle-update-tp4038534.html Sent from the Karaf - User mailing list archive at Nabble.com.
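A hedged sketch of the sequence in question (bundle id 42 and the path are placeholders); the version column of bundle:list is one way to see which version Karaf currently has installed:

```
karaf@root()> bundle:update 42 file:/path/to/bundleB.jar
karaf@root()> bundle:refresh 42
karaf@root()> bundle:list | grep -i bundleB
```

The refresh step matters when other bundles (here, bundleA) are wired to packages from the updated bundle, since they keep using the old revision until refreshed.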
debug mode
Hi, how can I know if Karaf has been started (i.e. is running) in debug mode? -- View this message in context: http://karaf.922171.n3.nabble.com/debug-mode-tp4038535.html Sent from the Karaf - User mailing list archive at Nabble.com.
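One way to check (an assumption, not the only way): debug mode means the JVM was started with the JDWP agent, so its command line contains "jdwp". A sketch, using a sample command line rather than a live process (the path and port are placeholders):

```shell
# Simulate inspecting the Karaf java process's command line; in practice
# one would use something like: ps -ef | grep karaf
cmdline='/usr/bin/java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005 -jar karaf.jar'
case "$cmdline" in
  *jdwp*) echo "debug mode" ;;
  *)      echo "not in debug mode" ;;
esac
# prints: debug mode
```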
display all exceptions in log
Are there plans to have a log:exceptions-display-all cmd in Karaf to display all the exceptions in the current karaf.log (not just the last one)? -- View this message in context: http://karaf.922171.n3.nabble.com/display-all-exceptions-in-log-tp4038495.html Sent from the Karaf - User mailing list archive at Nabble.com.
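A hedged workaround until such a command exists: grep the log file for exception class names and stack frames. The sample log content below is made up; in practice one would point the grep at data/log/karaf.log.

```shell
# Build a tiny sample log, then pull out exception lines and stack frames
# with line numbers.
log=/tmp/sample-karaf.log
printf 'INFO  startup ok\njava.lang.NullPointerException\n\tat com.foo.Bar.run(Bar.java:10)\nINFO  done\n' > "$log"
grep -n -E 'Exception|^[[:space:]]+at ' "$log"
```

This prints the NullPointerException line and its stack frame, each prefixed with its line number in the log.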
Re: bundle corruption with felix and karaf 3.0.2
Please answer the question below regarding usage of org.apache.aries.blueprint.preemptiveShutdown=false: are there any repercussions of doing this? Also, is there any documentation on this and any other system properties that are hidden? Perhaps this one should be explicitly set to true/false by default in the system.properties file?

schmke wrote
> One case I've seen, this issue occurs when ActiveMQ is being used and gets shutdown as part of the Blueprint shutdown that doesn't seem to follow the reverse start-level shutdown. Then AMQ clients that aren't using Blueprint have issues as they are trying to reconnect and don't shutdown.
>
> I found that there is a setting, org.apache.aries.blueprint.preemptiveShutdown, which defaults to true but when set to false seems to help this issue. But what is the impact or side-effects of setting it to false? Will the shutdown just take longer in this case? Or is there something else to be aware of?
>
> On Wed, Jan 7, 2015 at 9:30 PM, Jean-Baptiste Onofré <jb@> wrote:
>> We have a Jira about that, it's related to Felix framework and bundle cache corruption. Just deleting the cache fixes the issue but I'm looking for a more reliable workaround (in Karaf).
>>
>> Regards
>> JB
>>
>> On 01/08/2015 01:12 AM, asookazian2 wrote:
>>> Karaf 3.0.2: we are seeing intermittent hanging on halt (shutdown) of karaf. We then use 'kill -9' on the karaf java process. Then, sometimes after a restart, we are seeing some bundles (typically the last n bundles) in 'installed' state. I believe I saw on this forum that JB commented the root cause of this is a felix bug. Any details on this JIRA, etc., and when it may be fixed? Has anybody else seen this behavior? In the log we are seeing:
>>>
>>> 20150107 15:38:22.613 [WARN ] ActiveMQ Task-3 | 147:org.apache.activemq.activemq-osgi | org.apache.activemq.transport.failover.FailoverTransport | Failed to connect to [tcp://localhost:61616] after: 10 attempt(s) continuing to retry.
>>>
>>> Also, I noticed today that I could not 'uninstall bundleId' for these corrupted bundles. Should we just delete them manually from the karaf/data/cache/bundleId directory and restart karaf? thx for any help. btw, we tried switching to equinox but we had other issues with that, so back with felix for now.
>>> -- View this message in context: http://karaf.922171.n3.nabble.com/bundle-corruption-with-felix-and-karaf-3-0-2-tp4037669.html
>>> Sent from the Karaf - User mailing list archive at Nabble.com.
>>
>> --
>> Jean-Baptiste Onofré
>> jbonofre@
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com

-- View this message in context: http://karaf.922171.n3.nabble.com/bundle-corruption-with-felix-and-karaf-3-0-2-tp4037669p4037694.html Sent from the Karaf - User mailing list archive at Nabble.com.
bundle corruption with felix and karaf 3.0.2
Karaf 3.0.2. We are seeing intermittent hanging on halt (shutdown) of karaf. We then use 'kill -9' on the karaf java process. Then, sometimes after a restart, we are seeing some bundles (typically the last n bundles) in 'installed' state. I believe I saw on this forum that JB commented the root cause of this is a felix bug. Any details on this JIRA, etc., and when it may be fixed? Has anybody else seen this behavior? In the log we are seeing:

20150107 15:38:22.613 [WARN ] ActiveMQ Task-3 | 147:org.apache.activemq.activemq-osgi | org.apache.activemq.transport.failover.FailoverTransport | Failed to connect to [tcp://localhost:61616] after: 10 attempt(s) continuing to retry.

Also, I noticed today that I could not 'uninstall bundleId' for these corrupted bundles. Should we just delete them manually from the karaf/data/cache/bundleId directory and restart karaf? thx for any help. btw, we tried switching to equinox but we had other issues with that, so back with felix for now. -- View this message in context: http://karaf.922171.n3.nabble.com/bundle-corruption-with-felix-and-karaf-3-0-2-tp4037669.html Sent from the Karaf - User mailing list archive at Nabble.com.
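A sketch of the manual cleanup the post asks about — whether deleting only the affected bundleId directories is safe is exactly the open question; wiping the whole bundle cache (with Karaf stopped) is the blunter variant, and Karaf rebuilds the cache on the next start. The directory below is a throwaway stand-in, not a real Karaf home:

```shell
# Simulate clearing the Felix bundle cache under a throwaway KARAF_HOME.
# In practice: stop Karaf first, then remove data/cache.
KARAF_HOME=$(mktemp -d)
mkdir -p "$KARAF_HOME/data/cache/bundle147"
rm -rf "$KARAF_HOME/data/cache"
[ ! -d "$KARAF_HOME/data/cache" ] && echo "cache cleared"
# prints: cache cleared
```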
Groovy related exceptions when switching to Equinox
We are seeing a couple of Groovy-related exceptions when switching to Equinox with Karaf 3.0.2. Any known issues to expect, or why is this happening? Everything is fine when using the same bundles/features with Felix and Karaf 3.0.2.

list | grep -i groovy
228 | Active | 80 | 2.3.4 | Groovy Runtime

system:framework
Current OSGi framework is equinox

1) Caused by: java.lang.NoClassDefFoundError: Could not initialize class groovy.lang.GroovySystem

2) Caused by: groovy.lang.GroovyRuntimeException: Conflicting module versions. Module [groovy-all is loaded in version 2.3.4 and you are trying to load version 2.3.6
    at org.codehaus.groovy.runtime.metaclass.MetaClassRegistryImpl$DefaultModuleListener.onModule(MetaClassRegistryImpl.java:509)
    at org.codehaus.groovy.runtime.m12n.ExtensionModuleScanner.scanExtensionModuleFromProperties(ExtensionModuleScanner.java:77)
    at org.codehaus.groovy.runtime.m12n.ExtensionModuleScanner.scanExtensionModuleFromMetaInf(ExtensionModuleScanner.java:71)
    at org.codehaus.groovy.runtime.m12n.ExtensionModuleScanner.scanClasspathModules(ExtensionModuleScanner.java:53)
    at org.codehaus.groovy.runtime.metaclass.MetaClassRegistryImpl.<init>(MetaClassRegistryImpl.java:110)
    at org.codehaus.groovy.runtime.metaclass.MetaClassRegistryImpl.<init>(MetaClassRegistryImpl.java:71)
    at groovy.lang.GroovySystem.<clinit>(GroovySystem.java:33)
    ... 83 more

-- View this message in context: http://karaf.922171.n3.nabble.com/Groovy-related-exceptions-when-switching-to-Equinox-tp4037006.html Sent from the Karaf - User mailing list archive at Nabble.com.
OSGi framework version not displayed
system:framework
Current OSGi framework is equinox

How can I see the version of the frmwk? There is no option for this. Is there a properties file I can check? thx. -- View this message in context: http://karaf.922171.n3.nabble.com/OSGi-framework-version-not-displayed-tp4037009.html Sent from the Karaf - User mailing list archive at Nabble.com.
Could not open JDBC connection for transaction
I'm able to reproduce the following exception (at least the first one, which happens on startup of Karaf). Karaf 3.0.2, and we've replaced BoneCP with HikariCP-java6 2.2.2 recently as BoneCP has been deprecated. Please note that in my Spring applicationConfig.xml for this WAR mega-bundle, I'm using the following to look up the JDBC datasource (which was working previously, without these exceptions, with BoneCP):

<jee:jndi-lookup id="dataSource" jndi-name="osgi:service/jdbc/FooDataSource"/>

What is the recommended way of doing the JNDI lookup here? jee:jndi-lookup is a Spring-specific way, but I'm not sure it is recommended for OSGi. The interesting thing to note is that there are some inserts happening into the DB in question after startup when I execute some test cases from the web app front end.

20141203 15:43:47.320 [ERROR] Thread-66 | 273:my-project-server | org.activiti.engine.impl.jobexecutor.AcquireJobsRunnable | exception during job acquisition: Could not open JDBC Connection for transaction; nested exception is org.osgi.framework.ServiceException
org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is org.osgi.framework.ServiceException
    at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:240)[273:my-project-server:1.0.0.SNAPSHOT]
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:371)[273:my-project-server:1.0.0.SNAPSHOT]
    at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:127)[273:my-project-server:1.0.0.SNAPSHOT]
    at org.activiti.spring.SpringTransactionInterceptor.execute(SpringTransactionInterceptor.java:40)[273:my-project-server:1.0.0.SNAPSHOT]
    at org.activiti.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:33)[273:my-project-server:1.0.0.SNAPSHOT]
    at org.activiti.engine.impl.jobexecutor.AcquireJobsRunnable.run(AcquireJobsRunnable.java:57)[273:my-project-server:1.0.0.SNAPSHOT]
    at java.lang.Thread.run(Thread.java:745)[:1.7.0_71]
Caused by: org.osgi.framework.ServiceException
    at org.apache.aries.jndi.services.ServiceHelper$JNDIServiceDamper.call(ServiceHelper.java:197)[98:org.apache.aries.jndi.url:1.0.0]
    at Proxy3e789f04_5f82_464a_ab5b_df52583e7c46.getConnection(Unknown Source)[:]
    at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:202)[273:my-project-server:1.0.0.SNAPSHOT]
    ... 6 more

20141204 08:40:37.490 [ERROR] Thread-67 | 273:my-project-server | org.activiti.engine.impl.jobexecutor.AcquireJobsRunnable | exception during job acquisition: Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: Timeout after 47143144ms of waiting for a connection.
org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: Timeout after 47143144ms of waiting for a connection.
    at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:240)[273:my-project-server:1.0.0.SNAPSHOT]
    at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:371)[273:my-project-server:1.0.0.SNAPSHOT]
    at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:127)[273:my-project-server:1.0.0.SNAPSHOT]
    at org.activiti.spring.SpringTransactionInterceptor.execute(SpringTransactionInterceptor.java:40)[273:my-project-server:1.0.0.SNAPSHOT]
    at org.activiti.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:33)[273:my-project-server:1.0.0.SNAPSHOT]
    at org.activiti.engine.impl.jobexecutor.AcquireJobsRunnable.run(AcquireJobsRunnable.java:57)[273:my-project-server:1.0.0.SNAPSHOT]
    at java.lang.Thread.run(Thread.java:745)[:1.7.0_71]
Caused by: java.sql.SQLException: Timeout after 47143144ms of waiting for a connection.
    at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:205)[231:com.zaxxer.HikariCP-java6:2.2.2]
    at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:108)[231:com.zaxxer.HikariCP-java6:2.2.2]
    at Proxydb8a93e5_8d4e_43a2_b487_005e667ffa99.getConnection(Unknown Source)[:]
    at Proxy479ae98e_177d_4ad4_8509_6f6eb9d96b88.getConnection(Unknown Source)[:]
    at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:202)[273:my-project-server:1.0.0.SNAPSHOT]
    ... 6 more

-- View this message in context: http://karaf.922171.n3.nabble.com/Could-not-open-JDBC-connection-for-transaction-tp4036968.html Sent from the Karaf - User mailing list archive at Nabble.com.
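One alternative to jee:jndi-lookup that avoids JNDI entirely, assuming Spring DM is in play: reference the DataSource as an OSGi service directly. The filter property osgi.jndi.service.name is how Aries JNDI matches osgi:service URLs — treat this sketch as unverified for this exact setup, and adjust the interface and filter to however the datasource service is actually registered:

```xml
<!-- Hypothetical Spring DM service reference (osgi namespace assumed to
     be declared); replaces the JNDI round-trip with a direct service
     lookup. -->
<osgi:reference id="dataSource"
                interface="javax.sql.DataSource"
                filter="(osgi.jndi.service.name=jdbc/FooDataSource)"/>
```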
Re: Could not open JDBC connection for transaction
I installed the BoneCP bundle and reverted back to the BoneCP version of the datasource xml and it still reproduces... that's interesting as the applicationConfig.xml and datasources have not been modified recently... -- View this message in context: http://karaf.922171.n3.nabble.com/Could-not-open-JDBC-connection-for-transaction-tp4036968p4036970.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Could not open JDBC connection for transaction
I have enabled DEBUG level logging on a Spring and Activiti class and I've determined that the first attempt fails with the exception but the 2nd attempt to acquire jobs (30 secs later) is successful... -- View this message in context: http://karaf.922171.n3.nabble.com/Could-not-open-JDBC-connection-for-transaction-tp4036968p4036971.html Sent from the Karaf - User mailing list archive at Nabble.com.
Using Spring with Karaf 3.x
We have some legacy J2EE apps that we've (finally) ported to Karaf 3.0.x which use Spring, mainly for dependency injection, possibly some templates (e.g. JdbcTemplate, etc.), and AOP. Anyways, it looks like there are no Spring (or Spring DM?) examples here: https://github.com/cschneider/Karaf-Tutorial and there are no Spring examples in the Apache Karaf Cookbook. Is Spring usage with OSGi/Karaf recommended or not? If yes, where can I find some examples? "I stopped using spring on OSGi a long time ago." http://stackoverflow.com/questions/24595900/invalid-bundle-when-starting-apache-karaf -- View this message in context: http://karaf.922171.n3.nabble.com/Using-Spring-with-Karaf-3-x-tp4036977.html Sent from the Karaf - User mailing list archive at Nabble.com.
Karaf skipping feature install
We have a feature install as follows:

<feature name="myFeature" version="1.0.0-SNAPSHOT" resolver="(obr)" start-level="90" description="description">
    <feature version="1.0.0-SNAPSHOT">feature1</feature>
    <feature version="1.0.0-SNAPSHOT">feature2</feature>
    <bundle>mvn:com.foo.bar/component1/1.0.0-SNAPSHOT</bundle>
</feature>

<feature name="feature2" version="1.0.0-SNAPSHOT" resolver="(obr)" start-level="90" description="description">
    <feature version="1.0.0-SNAPSHOT">feature1</feature>
    <bundle>mvn:com.foo.bar/component2/1.0.0-SNAPSHOT</bundle>
</feature>

In a particular Karaf 3.0.2 server (with older versions of our software), component1 is being installed successfully but component2 is not. In a different Karaf 3.0.2 server (with the latest versions of our software), component1 and component2 are both being installed successfully. In the former case, I did not see any exceptions in the karaf log, so it seems as if karaf is skipping the component2 install. I verified that the JAR does indeed exist in the karaf/data/repo directory structure. Has anybody else seen this, and can you provide an explanation? I verified in the Karaf cmd line console using 'list | grep -i componentName' that the bundle was not already installed... -- View this message in context: http://karaf.922171.n3.nabble.com/Karaf-skipping-feature-install-tp4036927.html Sent from the Karaf - User mailing list archive at Nabble.com.
Unable to uninstall multiple bundles
4 bundles suddenly and simultaneously went into Installed state after a Karaf restart (previously they were Active). I can't uninstall the bundles ("bundle is invalid" is the error at the command line). Karaf 3.0.2. One of them is simply a datasource.xml in the deploy dir. I had to delete from the karaf/data/cache dir to get rid of them. wtf -- View this message in context: http://karaf.922171.n3.nabble.com/Unable-to-uninstall-multiple-bundles-tp4036928.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Karaf skipping feature install
This environment is using only one Karaf. I was able to reproduce this behavior with a new Karaf install and the latest of our software. It behaved normally on the 1st install attempt (i.e. component1 and component2 were both installed and Active). However, on the 2nd install attempt, the behavior reproduced. Maybe it's possible I am not uninstalling both features using 'feature:uninstall foo' before the re-install, and that is causing this problem/behavior? jbonofre wrote: Do you have the obr feature installed on one Karaf? Regards JB On 12/03/2014 09:25 PM, asookazian2 wrote: We have a feature install as follows:

<feature name="myFeature" version="1.0.0-SNAPSHOT" resolver="(obr)" start-level="90" description="description">
  <feature version="1.0.0-SNAPSHOT">feature1</feature>
  <feature version="1.0.0-SNAPSHOT">feature2</feature>
  <bundle>mvn:com.foo.bar/component1/1.0.0-SNAPSHOT</bundle>
</feature>

<feature name="feature2" version="1.0.0-SNAPSHOT" resolver="(obr)" start-level="90" description="description">
  <feature version="1.0.0-SNAPSHOT">feature1</feature>
  <bundle>mvn:com.foo.bar/component2/1.0.0-SNAPSHOT</bundle>
</feature>

In a particular Karaf 3.0.2 server (with older versions of our software), component1 is installed successfully but component2 is not. In a different Karaf 3.0.2 server (with the latest versions of our software), component1 and component2 are both installed successfully. In the former case, I did not see any exceptions in the Karaf log, so it seems as if Karaf is skipping the component2 install. I verified that the JAR does indeed exist in the karaf/data/repo directory structure. Has anybody else seen this, and can you provide an explanation? I verified in the Karaf command-line console, using 'list | grep -i componentName', that the bundle was not already installed... -- View this message in context: http://karaf.922171.n3.nabble.com/Karaf-skipping-feature-install-tp4036927.html Sent from the Karaf - User mailing list archive at Nabble.com. 
-- Jean-Baptiste Onofré jbonofre@ http://blog.nanthrax.net Talend - http://www.talend.com -- View this message in context: http://karaf.922171.n3.nabble.com/Karaf-skipping-feature-install-tp4036927p4036931.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Unable to uninstall multiple bundles
I don't have the log available that would pertain to this problem. I did try bundle:stop and bundle:start, and both of those failed (I think with an "invalid bundle" error). From etc/config.properties:

#
# Framework selection properties
#
karaf.framework=felix

jbonofre wrote: What's in the log (when the bundles went into Installed state)? Did you try to start them by hand? I bet that you use the Felix framework (the default one in Karaf): it's probably the bug in the framework when the cache is corrupted. Are you able to reproduce the scenario? Regards JB On 12/03/2014 10:22 PM, asookazian2 wrote: 4 bundles suddenly and simultaneously went into Installed state after a Karaf restart (previously they were Active). I can't uninstall the bundles ("bundle is invalid" is the error at the command line). Karaf 3.0.2. One of them is simply a datasource.xml in the deploy dir. I had to delete from the karaf/data/cache dir to get rid of them. wtf -- View this message in context: http://karaf.922171.n3.nabble.com/Unable-to-uninstall-multiple-bundles-tp4036928.html Sent from the Karaf - User mailing list archive at Nabble.com. -- Jean-Baptiste Onofré jbonofre@ http://blog.nanthrax.net Talend - http://www.talend.com -- View this message in context: http://karaf.922171.n3.nabble.com/Unable-to-uninstall-multiple-bundles-tp4036928p4036933.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Unable to uninstall multiple bundles
How and why does the cache get corrupted? We are under the impression that we need to halt/start Karaf after doing our feature installs in order to avoid any corruption issues, but we're not sure whether this is really required or not. -- View this message in context: http://karaf.922171.n3.nabble.com/Unable-to-uninstall-multiple-bundles-tp4036928p4036934.html Sent from the Karaf - User mailing list archive at Nabble.com.
cascading feature uninstall option
Is there a cascading feature:uninstall option? I checked --help and didn't see one. What I mean is: if you have bundles and features in a single feature.xml, currently if you feature:uninstall that feature, it will only uninstall the bundles embedded in it, not necessarily the other embedded features. I'd like an option that will uninstall everything... -- View this message in context: http://karaf.922171.n3.nabble.com/cascading-feature-uninstall-option-tp4036938.html Sent from the Karaf - User mailing list archive at Nabble.com.
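In the absence of a built-in cascade option, one workaround is to drive the cascade from a script outside Karaf. The sketch below is illustrative only: client() stubs a real bin/client call, and the feature:info output it parses is a hypothetical example, so the sed pattern would need adjusting to the actual output of your Karaf version.

```shell
# Cascading uninstall sketch. client() stubs bin/client; the output format
# below is hypothetical -- adjust the parsing to your Karaf version.
client() {
  # real version would be: bin/client "$@"
  printf '%s\n' "$MOCK_OUTPUT"
}

MOCK_OUTPUT='Feature has the following dependencies:
  feature1/1.0.0-SNAPSHOT
  feature2/1.0.0-SNAPSHOT'

# collect the dependent feature/version pairs of "myFeature"
deps=$(client "feature:info myFeature" | sed -n 's|^ *\([^ /]*/[^ ]*\)$|\1|p')

# uninstall the dependents first, then the parent feature itself
for dep in $deps; do
  echo "feature:uninstall $dep"    # real version: client "feature:uninstall $dep"
done
echo "feature:uninstall myFeature"
```

The point of the sketch is only that the dependency list has to come from somewhere queryable; the exact command and parsing are assumptions.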
Re: Unable to uninstall multiple bundles
Reproduced: the exact same 4 bundles are in Installed state after restart, and the error states "Bundle x is invalid" when I try to stop the bundle. This occurred after I killed the JVM, as the halt cmd in Karaf was hanging. Stack trace before the JVM stopped: 20141203 14:49:35.376 [ERROR] Thread-116 | : | org.activiti.engine.impl.jobexecutor.AcquireJobsRunnable | exception during job acquisition: Could not open JDBC Connection for transaction; nested exception is java.lang.IllegalStateException: Invalid BundleContext. org.springframework.transaction.CannotCreateTransactionException: Could not open JDBC Connection for transaction; nested exception is java.lang.IllegalStateException: Invalid BundleContext. at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:240)[284:workflow-ws-server:1.0.0.SNAPSHOT] at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:371)[284:workflow-ws-server:1.0.0.SNAPSHOT] at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:127)[284:workflow-ws-server:1.0.0.SNAPSHOT] at org.activiti.spring.SpringTransactionInterceptor.execute(SpringTransactionInterceptor.java:40)[284:workflow-ws-server:1.0.0.SNAPSHOT] at org.activiti.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:33)[284:workflow-ws-server:1.0.0.SNAPSHOT] at org.activiti.engine.impl.jobexecutor.AcquireJobsRunnable.run(AcquireJobsRunnable.java:57)[284:workflow-ws-server:1.0.0.SNAPSHOT] at java.lang.Thread.run(Thread.java:745)[:1.7.0_71] Caused by: java.lang.IllegalStateException: Invalid BundleContext. 
at org.apache.felix.framework.BundleContextImpl.checkValidity(BundleContextImpl.java:514)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.BundleContextImpl.getServiceReferences(BundleContextImpl.java:425)[org.apache.felix.framework-4.2.1.jar:] at org.apache.aries.jndi.services.ServiceHelper.findService(ServiceHelper.java:375)[98:org.apache.aries.jndi.url:1.0.0] at org.apache.aries.jndi.services.ServiceHelper.access$500(ServiceHelper.java:66)[98:org.apache.aries.jndi.url:1.0.0] at org.apache.aries.jndi.services.ServiceHelper$JNDIServiceDamper.call(ServiceHelper.java:180)[98:org.apache.aries.jndi.url:1.0.0] at Proxy386d92f9_189d_4945_b1ed_57c9fecb6138.getConnection(Unknown Source)[:] at org.springframework.jdbc.datasource.DataSourceTransactionManager.doBegin(DataSourceTransactionManager.java:202)[284:workflow-ws-server:1.0.0.SNAPSHOT] ... 6 more jbonofre wrote: No, it's not. Without details about your bundles, what they do and how they work, it's not easy. We have a bunch of users on Karaf that use the features without problem. So I suspect something related to your distribution or application. Regards JB Sent from my Samsung Galaxy smartphone. Original message From: asookazian2 <asookazian@> Date: 03/12/2014 22:41 (GMT+01:00) To: user@.apache Cc: Subject: Re: Unable to uninstall multiple bundles How and why does the cache get corrupted? We are under the impression that we need to halt/start Karaf after doing our feature installs in order to avoid any corruption issues, but we're not sure whether this is really required or not. -- View this message in context: http://karaf.922171.n3.nabble.com/Unable-to-uninstall-multiple-bundles-tp4036928p4036934.html Sent from the Karaf - User mailing list archive at Nabble.com. -- View this message in context: http://karaf.922171.n3.nabble.com/Unable-to-uninstall-multiple-bundles-tp4036928p4036941.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Unable to uninstall multiple bundles
Also, I don't understand why those specific 4 (last-installed) bundles have presumably become corrupted. -- View this message in context: http://karaf.922171.n3.nabble.com/Unable-to-uninstall-multiple-bundles-tp4036928p4036943.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: akhon r we can not kotha bolte parbo na ata kokhono somvob na............
what the hell is this random spam going on all the time here? -- View this message in context: http://karaf.922171.n3.nabble.com/akhon-r-we-can-not-kotha-bolte-parbo-na-ata-kokhono-somvob-na-tp4036950p4036951.html Sent from the Karaf - User mailing list archive at Nabble.com.
trouble uninstalling feature
I'm trying to uninstall a feature prior to installing it. I see in the Karaf command line (feature:repo-list) that there is only one repo added for this feature name. However, when I issue 'feature:uninstall foo' it fails and states that I must specify a specific version because there are three versions for the same feature name. How does Karaf find this list of features/versions? I have grep'd my repo and system folders for the feature name and only find the 1.0.0-SNAPSHOT version of this feature. Is it possible to do something like 'feature:uninstall foo *' and have Karaf uninstall all versions of the same feature? Also, prior to any installations I delete the related folders for this feature in repo and system. -- View this message in context: http://karaf.922171.n3.nabble.com/trouble-uninstalling-feature-tp4036646.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: trouble uninstalling feature
karaf 3.0.2 -- View this message in context: http://karaf.922171.n3.nabble.com/trouble-uninstalling-feature-tp4036646p4036647.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: trouble uninstalling feature
Thx for the reply. So I did 'feature:list -i' and see only the 1.0.0-SNAPSHOT version; the other versions it claims are there are not listed for this feature name. Something seems wrong... jbonofre wrote: feature:list -i feature:uninstall foo/version You can write a script to uninstall all features with the same name (and different versions). Regards JB On 11/24/2014 11:24 PM, asookazian2 wrote: I'm trying to uninstall a feature prior to installing it. I see in the Karaf command line (feature:repo-list) that there is only one repo added for this feature name. However, when I issue 'feature:uninstall foo' it fails and states that I must specify a specific version because there are three versions for the same feature name. How does Karaf find this list of features/versions? I have grep'd my repo and system folders for the feature name and only find the 1.0.0-SNAPSHOT version of this feature. Is it possible to do something like 'feature:uninstall foo *' and have Karaf uninstall all versions of the same feature? Also, prior to any installations I delete the related folders for this feature in repo and system. -- View this message in context: http://karaf.922171.n3.nabble.com/trouble-uninstalling-feature-tp4036646.html Sent from the Karaf - User mailing list archive at Nabble.com. -- Jean-Baptiste Onofré jbonofre@ http://blog.nanthrax.net Talend - http://www.talend.com -- View this message in context: http://karaf.922171.n3.nabble.com/trouble-uninstalling-feature-tp4036646p4036650.html Sent from the Karaf - User mailing list archive at Nabble.com.
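The scripted multi-version uninstall suggested above could look roughly like this when run from outside Karaf. It is a sketch: client() stubs bin/client, and the feature:list -i column layout shown is an assumed example, so the awk field handling may need adjusting to your version's output.

```shell
# Uninstall every installed version of feature "foo". client() stubs bin/client;
# the table below is a hypothetical feature:list -i output.
client() {
  # real version would be: bin/client "$@"
  printf '%s\n' "$MOCK_LIST"
}

MOCK_LIST='Name | Version        | Installed | Repository | Description
foo  | 1.0.0-SNAPSHOT | x         | repo       | demo
foo  | 1.0.1          | x         | repo       | demo
bar  | 2.0.0          | x         | repo       | other'

# pick the Version column for every row whose Name column is exactly "foo"
versions=$(client "feature:list -i" \
  | awk -F'|' '{gsub(/ /,"",$1); gsub(/ /,"",$2)} $1=="foo" {print $2}')

for v in $versions; do
  echo "feature:uninstall foo/$v"  # real version: client "feature:uninstall foo/$v"
done
```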
restarting Karaf after uninstalling bundles or features
Karaf 3.0.1. Is it necessary to restart Karaf after uninstalling bundles or features to avoid a missing-bundles problem we've been seeing? Sometimes, suddenly, some bundles disappear when you type 'list'. Has anybody else seen this, and what is the best practice when uninstalling then re-installing the same feature? -- View this message in context: http://karaf.922171.n3.nabble.com/restarting-Karaf-after-uninstalling-bundles-or-features-tp4036577.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: restarting Karaf after uninstalling bundles or features
I've only seen this happen with our custom bundles... jbonofre wrote: Hi, no, normally it's not necessary to restart Karaf (it depends on the bundles, but generally speaking), and I didn't see this kind of issue. Is it only with your own bundles, or with Karaf bundles as well? Regards JB On 11/21/2014 08:07 PM, asookazian2 wrote: Karaf 3.0.1. Is it necessary to restart Karaf after uninstalling bundles or features to avoid a missing-bundles problem we've been seeing? Sometimes, suddenly, some bundles disappear when you type 'list'. Has anybody else seen this, and what is the best practice when uninstalling then re-installing the same feature? -- View this message in context: http://karaf.922171.n3.nabble.com/restarting-Karaf-after-uninstalling-bundles-or-features-tp4036577.html Sent from the Karaf - User mailing list archive at Nabble.com. -- Jean-Baptiste Onofré jbonofre@ http://blog.nanthrax.net Talend - http://www.talend.com -- View this message in context: http://karaf.922171.n3.nabble.com/restarting-Karaf-after-uninstalling-bundles-or-features-tp4036577p4036588.html Sent from the Karaf - User mailing list archive at Nabble.com.
RE: non-deterministic installation order with ds files in deploy dir
Thx for responses. How would the user modify the foo-ds.xml post-install using these methods? It is easy to do if it's in the deploy dir. -- View this message in context: http://karaf.922171.n3.nabble.com/non-deterministic-installation-order-with-ds-files-in-deploy-dir-tp4036433p4036479.html Sent from the Karaf - User mailing list archive at Nabble.com.
non-deterministic installation order with ds files in deploy dir
I've been testing a feature install which requires a foo-ds.xml (datasource) to be copied into the karaf/deploy dir first, with 'feature:install bar' executed next. It seems that the foo-ds bundle may be installed before or after the feature's bundles. If the foo-ds bundle is activated after the depending bundle is activated, then I see an exception in the log (something like a JNDI lookup error; the datasource can't be found). If I restart Karaf, all bundles end up in Active state due to the start-level configs (the foo-ds.xml bundle's start-level is 80 and the depending bundle's is 85). What is the best way to handle this so all bundles are guaranteed to be in Active state after the feature install? We've discussed adding a <sleep seconds="5"/> after the foo-ds.xml copy to the deploy dir and prior to the feature install. Is there any better way to handle this? Another idea was to have a loop in the ant install script which checks the foo-ds.xml bundle's state and, once it is Active, then installs the feature. thx. -- View this message in context: http://karaf.922171.n3.nabble.com/non-deterministic-installation-order-with-ds-files-in-deploy-dir-tp4036433.html Sent from the Karaf - User mailing list archive at Nabble.com.
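The loop idea at the end of the post can be sketched as a small polling script. Here bundle_state() stubs something like bin/client "bundle:list | grep foo-ds", and the listing line it returns is a hypothetical example:

```shell
# Poll until the datasource bundle reports Active, then install the feature.
# bundle_state() stubs bin/client; the line below is a hypothetical listing.
bundle_state() {
  # real version would be: bin/client "bundle:list | grep foo-ds"
  printf '%s\n' "$MOCK_STATE"
}

MOCK_STATE='80 | Active |  80 | 1.0.0 | foo-ds'

tries=0
until bundle_state | grep -q 'Active'; do
  tries=$((tries + 1))
  if [ "$tries" -ge 30 ]; then
    echo "timed out waiting for foo-ds to become Active" >&2
    exit 1
  fi
  sleep 2
done
echo "feature:install bar"         # real version: bin/client "feature:install bar"
```

Unlike a fixed sleep, the loop bounds the wait (here 30 tries of 2 seconds) and fails loudly on timeout instead of installing against a half-started datasource.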
Re: non-deterministic installation order with ds files in deploy dir
we are using Karaf 3.0.1 -- View this message in context: http://karaf.922171.n3.nabble.com/non-deterministic-installation-order-with-ds-files-in-deploy-dir-tp4036433p4036434.html Sent from the Karaf - User mailing list archive at Nabble.com.
restart cmd
Is there a Karaf cmd in 3.0.1 that will restart the server from the console? equivalent to 'halt' and then 'bin/karaf'? -- View this message in context: http://karaf.922171.n3.nabble.com/restart-cmd-tp4036293.html Sent from the Karaf - User mailing list archive at Nabble.com.
how to determine if a bundle has already been installed?
Let's say I have a feature set which requires that another feature already be installed (i.e. a set of bundles should already be installed in Karaf 3.0.1). What is the recommended method of determining if that bundle or feature set is already installed? Currently I am using the following in my ant target: bundle:list | grep 'bundleFoo' -- View this message in context: http://karaf.922171.n3.nabble.com/how-to-determine-if-a-bundle-has-already-been-installed-tp4036281.html Sent from the Karaf - User mailing list archive at Nabble.com.
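For use in an Ant or shell build step, grep's exit status turns the listing into a yes/no check. A sketch, where list_bundles() stubs bin/client "bundle:list" and the listing lines are a hypothetical example:

```shell
# Decide whether a bundle is already installed by grepping the bundle list.
# list_bundles() stubs bin/client; the listing below is hypothetical.
list_bundles() {
  # real version would be: bin/client "bundle:list"
  printf '%s\n' "$MOCK_BUNDLES"
}

MOCK_BUNDLES='221 | Active |  80 | 1.0.0 | bundleFoo
222 | Active |  80 | 1.0.0 | bundleBar'

if list_bundles | grep -q 'bundleFoo'; then
  installed=yes
else
  installed=no
fi
echo "bundleFoo installed: $installed"
```

A stricter pattern (anchored on the name column) guards against substring collisions, e.g. 'bundleFoo' also matching 'bundleFooBar'.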
Re: [ANN] Apache Karaf 4.0.0.M1 Released!
Please supply a link to list of new features in Karaf 4.0.0. thx. -- View this message in context: http://karaf.922171.n3.nabble.com/ANN-Apache-Karaf-4-0-0-M1-Released-tp4035927p4035969.html Sent from the Karaf - User mailing list archive at Nabble.com.
install on an existing bundle with same bundle-version fails
Please consider giving a message/warning in the Karaf command line/console when a user tries to install a bundle that already exists. Currently, it seems to just ignore the command, and the bundle.jar in the data/cache/xxx dir is not updated. A message like "use update <bundle_id> instead of install on a pre-existing bundle", something like that, would be helpful. -- View this message in context: http://karaf.922171.n3.nabble.com/install-on-an-existing-bundle-with-same-bundle-version-fails-tp4035779.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Schema for custom shell commands
I am running into the same problem with Karaf 3.0.1. Caused by: org.osgi.service.blueprint.container.ComponentDefinitionException: Unable to find a matching factory method getScope on class org.apache.karaf.shell.console.commands.NamespaceHandler for arguments [com.nextgate.accessmanager.user.setup.ProvisionUserCommand] when instanciating bean #recipe-25 at org.apache.aries.blueprint.container.BeanRecipe.getInstance(BeanRecipe.java:318) at org.apache.aries.blueprint.container.BeanRecipe.internalCreate2(BeanRecipe.java:806) at org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:787) at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:106) at org.apache.aries.blueprint.di.MapRecipe.internalCreate(MapRecipe.java:111) I am getting this exception with v1.1.0 and v1.0.0. Please advise, as I cannot continue testing at this point. This is my blueprint.xml:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <command-bundle xmlns="http://karaf.apache.org/xmlns/shell/v1.1.0">
    <command>
      <action class="com.nextgate.accessmanager.user.setup.ProvisionUserCommand"/>
    </command>
  </command-bundle>
</blueprint>

Interestingly, a different class which extends OsgiCommandSupport, with a different blueprint.xml in the same Karaf instance, is NOT having this problem (it's using v1.1.0). I'm not sure what the difference is; the blueprint.xml files are very similar. -- View this message in context: http://karaf.922171.n3.nabble.com/Schema-for-custom-shell-commands-tp4029047p4035579.html Sent from the Karaf - User mailing list archive at Nabble.com.
Class com/sun/org/apache/xerces/internal/dom/ElementImpl illegally accessing package private member of class com/sun/org/apache/xerces/internal/dom/CoreDocumentImpl
We ran into the following error with our rebranded version of Karaf 3.0.1 on AIX (it does not reproduce on Windows or Linux):

HTTP ERROR 500
Problem accessing /ws/wf/WorkflowManagerWS. Reason: Server Error
Caused by: java.lang.IllegalAccessError: Class com/sun/org/apache/xerces/internal/dom/ElementImpl illegally accessing "package private" member of class com/sun/org/apache/xerces/internal/dom/CoreDocumentImpl

As suggested in the following thread: http://comments.gmane.org/gmane.comp.apache.cxf.user/30677 we removed the following file: apache-karaf-3.0.1/lib/endorsed/org.apache.servicemix.specs.saaj-api-1.3-2.4.0.jar, restarted Karaf, re-tested, and the error went away on AIX. We are running JDK 1.7. Has anybody else run into this problem, and why are that particular JAR and the others included in the endorsed dir? -- View this message in context: http://karaf.922171.n3.nabble.com/Class-com-sun-org-apache-xerces-internal-dom-ElementImpl-illegally-accessing-package-private-member-l-tp4034932.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: karaf 3.0.1 start error
I believe the machine needs to be online so the maven repos can be hit... -- View this message in context: http://karaf.922171.n3.nabble.com/karaf-3-0-1-start-error-tp4034645p4034781.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Data source xml change not taking effect in first karaf start post modification
This is an issue. We have users who will not know this unless it is explicitly documented. You should be able to modify the datasource-xxx.xml file when Karaf is not running, and Karaf should always check for any changes to the xml files in the deploy dir vs. the already-deployed xml bundles. Then this wouldn't be a problem. So a bug/enhancement request is in order, I believe... Achim Nierbeck wrote: No issue, just not the right usage? If Karaf is running, the change is detected. It is even detected when the Karaf instance is shut down. So it works as designed, AFAIC. regards, Achim 2014-08-11 11:25 GMT+02:00 Christian Schneider <chris@>: I also think this is the issue. Do you think this should be filed as a bug in felix fileinstall? Christian On 11.08.2014 10:29, Achim Nierbeck wrote: I think the main problem seems to be that the xml file was altered while Karaf wasn't able to realize it had been altered. After the startup it still used the old one, as a bundle had already been generated for it. My best guess right now: because karaf/FileInstaller wasn't seeing the change, it wasn't able to re-load it; make sure you alter it while Karaf is running, then it will be able to pick it up faster. regards, Achim -- Christian Schneider http://www.liquid-reality.de Open Source Architect Talend Application Integration Division http://www.talend.com -- Apache Member Apache Karaf <http://karaf.apache.org/> Committer & PMC OPS4J Pax Web <http://wiki.ops4j.org/display/paxweb/Pax+Web/> Committer & Project Lead blog <http://notizblog.nierbeck.de/> Software Architect / Project Manager / Scrum Master -- View this message in context: http://karaf.922171.n3.nabble.com/Data-source-xml-change-not-taking-effect-in-first-karaf-start-post-modification-tp4034670p4034782.html Sent from the Karaf - User mailing list archive at Nabble.com.
No appenders could be found for logger (log4j related)
We are seeing the following message when deploying a WAR bundle and starting Karaf 3.0.1: admin@NextGate () log4j:WARN No appenders could be found for logger (org.springframework.web.servlet.DispatcherServlet). log4j:WARN Please initialize the log4j system properly. I was told to remove log4j and use 'mvn dependency:tree' to determine how the transitive dependency is happening. I executed the following command: mvn dependency:tree -Dverbose -Dincludes=log4j 10:28 [INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ RelationManager-OSGi --- [INFO] com.nextgate.mm.relation.deploys:RelationManager-OSGi:war:1.0.0-SNAPSHOT [INFO] +- org.slf4j:slf4j-log4j12:jar:1.7.2:compile [INFO] | \- (log4j:log4j:jar:1.2.16:compile - version managed from 1.2.17; omitted for duplicate) [INFO] \- log4j:log4j:jar:1.2.16:compile I tried to remove the log4j reference in the Bundle-ClassPath for the maven-bundle-plugin config but started seeing other exceptions (ClassNotFoundExceptions for log4j classes). This is what is relevant in the Bundle-ClassPath for the maven-bundle-plugin section in pom.xml for this WAR project. I have tried removing log4j-1.2.16.jar and slf4j-api-1.7.2.jar and see various exceptions in karaf.log.

<Bundle-ClassPath>
  ...
  ,WEB-INF/lib/log4j-1.2.16.jar
  ,WEB-INF/lib/slf4j-api-1.7.2.jar
  ,WEB-INF/lib/slf4j-log4j12-1.7.2.jar
  ...
</Bundle-ClassPath>

Now I have finally resorted to including a log4j.xml file in the root of the WAR so that log4j will find it in the classpath. Now the WARN logging above to the Karaf console is gone, but the logging for the file appender has a different format than what is in the karaf.log. Here is my current log4j.xml config. I was thinking about using ./data/log/karaf.log and appending to that log instead of a test.log.

<appender name="FILE" class="org.apache.log4j.RollingFileAppender">
  <layout class="org.apache.log4j.PatternLayout">
  </layout>
</appender>

Am I missing something simple here (i.e. making this too complicated as a solution)? 
Any assistance is appreciated. thx. -- View this message in context: http://karaf.922171.n3.nabble.com/No-appenders-could-be-found-for-logger-log4j-related-tp4034739.html Sent from the Karaf - User mailing list archive at Nabble.com.
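For reference, a complete RollingFileAppender definition in log4j 1.2 XML usually carries File/MaxFileSize params and a ConversionPattern inside the layout; the values below are placeholder assumptions to fill in for your setup, not recovered from the original post:

```xml
<!-- sketch of a typical log4j 1.2 rolling file appender; all values are examples -->
<appender name="FILE" class="org.apache.log4j.RollingFileAppender">
  <param name="File" value="data/log/test.log"/>
  <param name="MaxFileSize" value="10MB"/>
  <param name="MaxBackupIndex" value="5"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d{ISO8601} | %-5p | %t | %c | %m%n"/>
  </layout>
</appender>
```

Matching the ConversionPattern to the one Karaf uses in etc/org.ops4j.pax.logging.cfg would give the file appender the same line format as karaf.log.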
Re: No appenders could be found for logger (log4j related)
This works:

<appender name="FILE" class="org.apache.log4j.RollingFileAppender">
  <layout class="org.apache.log4j.PatternLayout">
  </layout>
</appender>

This does *not* work (i.e. none of the same contents logged above to test.log are visible in the karaf.log; why?):

<appender name="FILE" class="org.apache.log4j.RollingFileAppender">
  <layout class="org.apache.log4j.PatternLayout">
  </layout>
</appender>

-- View this message in context: http://karaf.922171.n3.nabble.com/No-appenders-could-be-found-for-logger-log4j-related-tp4034739p4034742.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: No appenders could be found for logger (log4j related)
Achim, thx for reply! So I need to use Import-Package only instead of Bundle-Classpath? And if yes, which packages do I need to import specifically? -- View this message in context: http://karaf.922171.n3.nabble.com/No-appenders-could-be-found-for-logger-log4j-related-tp4034739p4034743.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: No appenders could be found for logger (log4j related)
Ok, so I removed the three JARs (one log4j and two slf4j) from the Bundle-ClassPath section and added the following to the Import-Package section:

<Import-Package>
  ...
  ,org.apache.log4j
  ,org.slf4j
  ...
</Import-Package>

Now I am not seeing the WARN in the console during Karaf startup, but I am seeing the following exception in the log. Please advise how to resolve. Do I need to add log4j back to the Bundle-ClassPath? thx. 20140813 10:01:03.662 [WARN ] FelixStartLevel | 138:org.ops4j.pax.web.pax-web-extender-war | org.ops4j.pax.web.extender.war.internal.Activator | Error while destroying extension for bundle RelationManager-OSGi [242] java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: org/apache/log4j/LogManager at java.util.concurrent.FutureTask.report(FutureTask.java:122)[:1.7.0_51] at java.util.concurrent.FutureTask.get(FutureTask.java:188)[:1.7.0_51] at org.ops4j.pax.web.extender.war.internal.extender.AbstractExtender.destroyExtension(AbstractExtender.java:309)[138:org.ops4j.pax.web.pax-web-extender-war:3.1.0] at org.ops4j.pax.web.extender.war.internal.extender.AbstractExtender.bundleChanged(AbstractExtender.java:188)[138:org.ops4j.pax.web.pax-web-extender-war:3.1.0] at org.apache.felix.framework.util.EventDispatcher.invokeBundleListenerCallback(EventDispatcher.java:868)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:789)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.util.EventDispatcher.fireBundleEvent(EventDispatcher.java:514)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.Felix.fireBundleEvent(Felix.java:4403)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.Felix.stopBundle(Felix.java:2520)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1309)[org.apache.felix.framework-4.2.1.jar:] at 
org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:304)[org.apache.felix.framework-4.2.1.jar:] at java.lang.Thread.run(Thread.java:744)[:1.7.0_51] Caused by: java.lang.NoClassDefFoundError: org/apache/log4j/LogManager at org.springframework.util.Log4jConfigurer.shutdownLogging(Log4jConfigurer.java:117) at org.springframework.web.util.Log4jWebConfigurer.shutdownLogging(Log4jWebConfigurer.java:170) at org.springframework.web.util.Log4jConfigListener.contextDestroyed(Log4jConfigListener.java:51) at org.eclipse.jetty.server.handler.ContextHandler.doStop(ContextHandler.java:823) at org.eclipse.jetty.servlet.ServletContextHandler.doStop(ServletContextHandler.java:160) at org.ops4j.pax.web.service.jetty.internal.HttpServiceContext.doStop(HttpServiceContext.java:229) at org.eclipse.jetty.util.component.AbstractLifeCycle.stop(AbstractLifeCycle.java:89) at org.ops4j.pax.web.service.jetty.internal.JettyServerImpl$1.stop(JettyServerImpl.java:202) at org.ops4j.pax.web.service.internal.HttpServiceStarted.begin(HttpServiceStarted.java:1018) at org.ops4j.pax.web.service.internal.HttpServiceProxy.begin(HttpServiceProxy.java:417) at org.ops4j.pax.web.extender.war.internal.UnregisterWebAppVisitorWC.visit(UnregisterWebAppVisitorWC.java:82) at org.ops4j.pax.web.extender.war.internal.model.WebApp.accept(WebApp.java:641) at org.ops4j.pax.web.extender.war.internal.WebAppPublisher$WebAppDependencyListener.unregister(WebAppPublisher.java:264) at org.ops4j.pax.web.extender.war.internal.WebAppPublisher$WebAppDependencyListener.removedService(WebAppPublisher.java:224) at org.ops4j.pax.web.extender.war.internal.WebAppPublisher$WebAppDependencyListener.removedService(WebAppPublisher.java:135) at org.osgi.util.tracker.ServiceTracker$Tracked.customizerRemoved(ServiceTracker.java:956)[karaf-org.osgi.core.jar:] at org.osgi.util.tracker.ServiceTracker$Tracked.customizerRemoved(ServiceTracker.java:864)[karaf-org.osgi.core.jar:] at 
org.osgi.util.tracker.AbstractTracked.untrack(AbstractTracked.java:341)[karaf-org.osgi.core.jar:] at org.osgi.util.tracker.ServiceTracker.close(ServiceTracker.java:375)[karaf-org.osgi.core.jar:] at org.ops4j.pax.web.extender.war.internal.WebAppPublisher.unpublish(WebAppPublisher.java:127) at org.ops4j.pax.web.extender.war.internal.WebObserver.undeploy(WebObserver.java:247) at org.ops4j.pax.web.extender.war.internal.WebObserver$1.doDestroy(WebObserver.java:185) at org.ops4j.pax.web.extender.war.internal.extender.SimpleExtension.destroy(SimpleExtension.java:70) at org.ops4j.pax.web.extender.war.internal.extender.AbstractExtender$2.run(AbstractExtender.java:288) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)[:1.7.0_51] at
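The Import-Package approach discussed in this thread is normally expressed in the maven-bundle-plugin instructions roughly as below; the version ranges are assumptions, and the premise (which may not hold for every setup) is that pax-logging supplies the org.apache.log4j and org.slf4j packages at runtime:

```xml
<!-- sketch: import the logging API from the container instead of embedding the JARs;
     version ranges are assumptions -->
<Import-Package>
  org.apache.log4j;version="[1.2,2)",
  org.slf4j;version="[1.6,2)",
  *
</Import-Package>
```

The trailing * keeps the plugin's usual behavior of importing everything else it detects; the explicit entries just pin the logging packages.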
Re: No appenders could be found for logger (log4j related)
So I added the following back to the Bundle-ClassPath section: ,WEB-INF/lib/log4j-1.2.16.jar and I am still getting this exception on Karaf startup. I still have the import for org.apache.log4j in the Import-Package section, so I'm not sure what else to try at this point. Caused by: java.lang.ClassNotFoundException: org.apache.log4j.LogManager not found by org.ops4j.pax.logging.pax-logging-api [7] at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1532)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:75)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:1955)[org.apache.felix.framework-4.2.1.jar:] at java.lang.ClassLoader.loadClass(ClassLoader.java:358)[:1.7.0_51] at org.apache.felix.framework.BundleWiringImpl.getClassByDelegation(BundleWiringImpl.java:1374)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.BundleWiringImpl.searchImports(BundleWiringImpl.java:1553)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1484)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:75)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.BundleWiringImpl$BundleClassLoader.loadClass(BundleWiringImpl.java:1955)[org.apache.felix.framework-4.2.1.jar:] at java.lang.ClassLoader.loadClass(ClassLoader.java:358)[:1.7.0_51] ... 36 more -- View this message in context: http://karaf.922171.n3.nabble.com/No-appenders-could-be-found-for-logger-log4j-related-tp4034739p4034747.html Sent from the Karaf - User mailing list archive at Nabble.com.
delete datasource-foo.xml from deploy and re-copy
I deleted datasource-foo.xml from the deploy dir and re-copied it to the deploy dir (with a subsequent feature:install). Then I ran 'list' on the Karaf command line. I do not see the bundle id in the list until I halt and restart Karaf 3.0.1. Is this intentional or a bug? -- View this message in context: http://karaf.922171.n3.nabble.com/delete-datasource-foo-xml-from-deploy-and-re-copy-tp4034749.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: No appenders could be found for logger (log4j related)
Thanks for the tip. I tried this and it did not work. I am still seeing the exception in the karaf.log: Caused by: java.lang.ClassNotFoundException: org.apache.log4j.LogManager not found by org.ops4j.pax.logging.pax-logging-api [7] I will try to research the response that Achim gave. thx guys. bane73 wrote Not really the most correct answer, but try modifying your export section to export everything. That SHOULD work, I believe, because your project is using that JAR (thus, it's contained inside of it) but you are not telling OSGi that you want to use it, so it is being blocked from your JAR's classpath. IOW, in your POM right below the Bundle-ClassPath section you should have an Export-Package section. Delete everything and replace it with a wildcard, i.e.: <Export-Package>*</Export-Package> That should do the trick. If it does, don't settle for that though. The correct answer is, I think, to install org.apache.log4j either as its own bundle or as part of a dependency bundle. I'm still pretty new to OSGi, though, so I could be wrong. -- View this message in context: http://karaf.922171.n3.nabble.com/No-appenders-could-be-found-for-logger-log4j-related-tp4034739p4034752.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: No appenders could be found for logger (log4j related)
In the web.xml:

<listener>
    <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class>
</listener>

Should we be using a different listener? -- View this message in context: http://karaf.922171.n3.nabble.com/No-appenders-could-be-found-for-logger-log4j-related-tp4034739p4034755.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: No appenders could be found for logger (log4j related)
Rugby, man that's tough! hope you're ok. ;) http://grepcode.com/file/repo1.maven.org/maven2/org.springframework/spring-core/2.5.4/org/springframework/util/Log4jConfigurer.java This class is using LogManager... jbonofre wrote Hi guys, sorry I was out for a rugby game ;) Pax Logging wraps log4j, it doesn't import the whole log4j API. Some parts of the log4j API are not included because not all of it makes sense in OSGi. That's the case for org.apache.log4j.LogManager: it's part of log4j but it's not included in pax-logging (api or service). In the latest pax-logging release, I included a couple of new Log4j and SLF4J classes, but not this one, as it's not really useful in OSGi (due to the Logging service). So, what's your usage of LogManager? I'm pretty sure you can avoid using it. Regards JB On 08/13/2014 10:21 PM, asookazian2 wrote: Thanks for the tip. I tried this and it did not work. Still seeing exception in the karaf.log: Caused by: java.lang.ClassNotFoundException: org.apache.log4j.LogManager not found by org.ops4j.pax.logging.pax-logging-api [7] I will try to research the response that Achim gave. thx guys. bane73 wrote Not really the most correct answer, but try modifying your export section to export everything. That SHOULD work, I believe, because your project is using that JAR (thus, it's contained inside of it) but you are not telling OSGi that you want to use it, so it is being blocked from your JAR's classpath. IOW, in your POM right below the Bundle-ClassPath section you should have an Export-Package section. Delete everything and replace it with a wildcard, i.e.: <Export-Package>*</Export-Package> That should do the trick. If it does, don't settle for that though. The correct answer is, I think, to install org.apache.log4j either as its own bundle or as part of a dependency bundle. I'm still pretty new to OSGi, though, so I could be wrong. 
-- View this message in context: http://karaf.922171.n3.nabble.com/No-appenders-could-be-found-for-logger-log4j-related-tp4034739p4034752.html Sent from the Karaf - User mailing list archive at Nabble.com. -- Jean-Baptiste Onofré jbonofre@ http://blog.nanthrax.net Talend - http://www.talend.com -- View this message in context: http://karaf.922171.n3.nabble.com/No-appenders-could-be-found-for-logger-log4j-related-tp4034739p4034756.html Sent from the Karaf - User mailing list archive at Nabble.com.
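The ClassNotFoundException in this thread is ultimately a class-visibility question: which class loader can actually see org.apache.log4j.LogManager? Outside the container you can probe that with a few lines of plain Java. This is only a diagnostic sketch; the class names checked in main are examples, and inside OSGi you would pass the bundle's own class loader instead of the system one:

```java
public class ClassVisibilityCheck {
    // Returns true if the given loader (or its delegation chain) can see the class.
    // The "false" argument avoids running static initializers during the probe.
    static boolean canLoad(ClassLoader loader, String className) {
        try {
            Class.forName(className, false, loader);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        ClassLoader loader = ClassVisibilityCheck.class.getClassLoader();
        // java.lang.String is always visible; org.apache.log4j.LogManager is
        // visible only if something on the classpath (or an imported bundle)
        // actually provides it -- which is exactly what pax-logging does not.
        System.out.println("String visible: " + canLoad(loader, "java.lang.String"));
        System.out.println("LogManager visible: " + canLoad(loader, "org.apache.log4j.LogManager"));
    }
}
```

In a bundle, running the same probe with `bundle.getClass().getClassLoader()` tells you whether your Import-Package / Bundle-ClassPath combination makes the class reachable before you ever hit the runtime exception.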
Re: No appenders could be found for logger (log4j related)
I commented out the below listener block in the web.xml. Now it seems to be behaving better (no exceptions) but I'm not sure what side-effect it may have to remove that... asookazian2 wrote In the web.xml:

<listener>
    <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class>
</listener>

Should we be using a different listener? -- View this message in context: http://karaf.922171.n3.nabble.com/No-appenders-could-be-found-for-logger-log4j-related-tp4034739p4034757.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Unable to find the InitialContextFactory org.eclipse.jetty.jndi.InitialContextFactory
This is the fix I just made today, which resolved the problem somehow. We'd like to better understand the following: 1) why was this fix needed? 2) is this the best way to fix the problem? 3) why/how does it fix the problem?

before (unable to log in to the WAR app):

cxt = new InitialContext();

after (able to log in to the WAR app; the exception does not reproduce):

ClassLoader cl = Thread.currentThread().getContextClassLoader();
Thread.currentThread().setContextClassLoader(this.getClass().getClassLoader());
try {
    cxt = new InitialContext();
} finally {
    Thread.currentThread().setContextClassLoader(cl);
}

-- View this message in context: http://karaf.922171.n3.nabble.com/Unable-to-find-the-InitialContextFactory-org-eclipse-jetty-jndi-InitialContextFactory-tp4034652p4034697.html Sent from the Karaf - User mailing list archive at Nabble.com.
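The fix above is the standard "TCCL swap" pattern: temporarily set the thread context class loader to one that can see the JNDI provider (here, the bundle's own loader), and restore the original in a finally block so the thread is left untouched even if the lookup throws. A minimal standalone sketch of the pattern; the JNDI lookup is replaced with a simple probe, since `new InitialContext()` needs a real naming provider:

```java
public class TcclSwapDemo {
    // Runs a guarded section under a temporary thread context class loader
    // (TCCL), always restoring the previous one -- the shape of the fix above.
    static String withTccl(ClassLoader temporary) {
        ClassLoader previous = Thread.currentThread().getContextClassLoader();
        Thread.currentThread().setContextClassLoader(temporary);
        try {
            // In the original code this is where `new InitialContext()` runs;
            // here we just report which loader the thread now exposes.
            return Thread.currentThread().getContextClassLoader().getClass().getName();
        } finally {
            // Restore even if the guarded code throws.
            Thread.currentThread().setContextClassLoader(previous);
        }
    }

    public static void main(String[] args) {
        ClassLoader before = Thread.currentThread().getContextClassLoader();
        String seen = withTccl(TcclSwapDemo.class.getClassLoader());
        System.out.println("loader seen inside: " + seen);
        System.out.println("restored: " + (Thread.currentThread().getContextClassLoader() == before));
    }
}
```

As to "why was this fix needed": libraries that call `InitialContext` typically load the configured InitialContextFactory via the TCCL, and in an OSGi container the TCCL of a request thread often cannot see the Jetty JNDI classes, so pointing it at a loader that can is a common (if inelegant) workaround.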
Unable to find the InitialContextFactory org.eclipse.jetty.jndi.InitialContextFactory
Any explanation as to how to resolve this exception? We can access the URL home page for the WAR bundle fine until we deploy another EAR bundle, at which point we see the following exception. I googled a bit and some threads point to jndi.properties, but I haven't actually touched this file before, I'm not sure where it should be located, and I'm not sure what the exact root cause is or how to resolve it. Any help appreciated. 20140808 13:16:45.973 [ERROR] qtp500171771-47 | 249:com.nextgate.mm.PersonDQM | com.nextgate.dqm.presentation.amfendpoint.DqmMessageBrokerFilter | MessageException flex.messaging.MessageException: com.sun.mdm.index.master.ProcessingException : com.sun.mdm.index.master.ProcessingException: MDM-MI-SRC501: Failed to read MIDM configuration: Unable to find the InitialContextFactory org.eclipse.jetty.jndi.InitialContextFactory. at flex.messaging.services.remoting.adapters.JavaAdapter.invoke(JavaAdapter.java:447)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at flex.messaging.services.RemotingService.serviceMessage(RemotingService.java:183)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at flex.messaging.MessageBroker.routeMessageToService(MessageBroker.java:1503)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at flex.messaging.endpoints.AbstractEndpoint.serviceMessage(AbstractEndpoint.java:884)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at com.nextgate.dqm.presentation.amfendpoint.DqmAMFEndpoint.serviceMessage(DqmAMFEndpoint.java:92)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at com.nextgate.dqm.presentation.amfendpoint.DqmMessageBrokerFilter.invoke(DqmMessageBrokerFilter.java:169)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at flex.messaging.endpoints.amf.LegacyFilter.invoke(LegacyFilter.java:158)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at flex.messaging.endpoints.amf.SessionFilter.invoke(SessionFilter.java:44)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at 
flex.messaging.endpoints.amf.BatchProcessFilter.invoke(BatchProcessFilter.java:67)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at flex.messaging.endpoints.amf.SerializationFilter.invoke(SerializationFilter.java:146)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at flex.messaging.endpoints.BaseHTTPEndpoint.service(BaseHTTPEndpoint.java:278)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at org.springframework.flex.servlet.MessageBrokerHandlerAdapter.handle(MessageBrokerHandlerAdapter.java:101)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:875)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:807)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:571)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:511)[249:com.nextgate.mm.PersonDQM:1.0.0.SNAPSHOT] at javax.servlet.http.HttpServlet.service(HttpServlet.java:595)[64:org.apache.geronimo.specs.geronimo-servlet_3.0_spec:1.0] at javax.servlet.http.HttpServlet.service(HttpServlet.java:668)[64:org.apache.geronimo.specs.geronimo-servlet_3.0_spec:1.0] at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)[69:org.eclipse.jetty.aggregate.jetty-all-server:8.1.14.v20131031] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1496)[69:org.eclipse.jetty.aggregate.jetty-all-server:8.1.14.v20131031] at org.ops4j.pax.web.service.internal.WelcomeFilesFilter.doFilter(WelcomeFilesFilter.java:185)[78:org.ops4j.pax.web.pax-web-runtime:3.1.0] at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1467)[69:org.eclipse.jetty.aggregate.jetty-all-server:8.1.14.v20131031] at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:499)[69:org.eclipse.jetty.aggregate.jetty-all-server:8.1.14.v20131031] at org.ops4j.pax.web.service.jetty.internal.HttpServiceServletHandler.doHandle(HttpServiceServletHandler.java:69)[79:org.ops4j.pax.web.pax-web-jetty:3.1.0] at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)[69:org.eclipse.jetty.aggregate.jetty-all-server:8.1.14.v20131031] at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:533)[69:org.eclipse.jetty.aggregate.jetty-all-server:8.1.14.v20131031] at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)[69:org.eclipse.jetty.aggregate.jetty-all-server:8.1.14.v20131031] at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)[69:org.eclipse.jetty.aggregate.jetty-all-server:8.1.14.v20131031] at
rolling upgrades to Karaf/Cellar
What is the best practice for rolling upgrades of a cluster of Karaf servers? That is: we have 2 nodes in the cluster, and we want to update/upgrade the software on server 1 but keep server 2 on the previous version temporarily until everything is stable with the upgrade on server 1, then upgrade server 2. If we are using Cellar, the config, features and bundles will always be replicated as long as both servers are running. Also, what exactly handles failover for Karaf instances in a cluster? Is it Karaf itself, Cellar, Hazelcast, or the software/hardware load balancer? thx. -- View this message in context: http://karaf.922171.n3.nabble.com/rolling-upgrades-to-Karaf-Cellar-tp4034531.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: rolling upgrades to Karaf/Cellar
Please add a link on this page to the Cellar clustering user guide: http://karaf.apache.org/index/subprojects/cellar.html -- View this message in context: http://karaf.922171.n3.nabble.com/rolling-upgrades-to-Karaf-Cellar-tp4034531p4034532.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar for karaf 3.0.1 active/active and failover clustering
Thanks for replying. There is only one reference to configadmin in the Cellar docs: http://karaf.apache.org/manual/cellar/latest/user-guide/index.html How would I modify etc/config.properties to be able to sync etc/a/b/c/foo.properties? Also, it would be nice if the Cellar docs were on one HTML page, if possible. -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-for-karaf-3-0-1-active-active-and-failover-clustering-tp4033204p4034377.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar for karaf 3.0.1 active/active and failover clustering
Where is the documentation on how the ConfigAdmin sync works? -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-for-karaf-3-0-1-active-active-and-failover-clustering-tp4033204p4034378.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar for karaf 3.0.1 active/active and failover clustering
I just tried to set up a brand-new 2-node horizontal cluster on 2 Windows VMs. Both nodes are joining the default group/cluster. However, when I run our installer script with both Karaf nodes (3.0.1) running, the bundles install successfully on node A but not on node B. These are the actual bundles for our custom software, not the previous web service example I had deployed. Here are the exceptions for node B. Please note that we have directories/files in the system directory (such as the features XML, which has the bundle install info) that are not being sync'd with node B. This is how I installed Cellar on both nodes:

karaf@root() feature:repo-add mvn:org.apache.karaf.cellar/apache-karaf-cellar/3.0.0/xml/features
karaf@root() feature:install cellar

20140721 12:30:56.860 [INFO ] hz._hzInstance_1_cellar.global-operation.thread-1 | 241:com.hazelcast | com.hazelcast.cluster.ClusterService | [192.168.172.219]:5701 [cellar] [3.2.3] Members [2] { Member [192.a.b.c]:5701 Member [192.a.b.d]:5701 this } 20140721 12:30:58.389 [INFO ] hz._hzInstance_1_cellar.cached.thread-18 | 241:com.hazelcast | com.hazelcast.core.LifecycleService | [192.168.172.219]:5701 [cellar] [3.2.3] Address[192.168.172.219]:5701 is MERGED 20140721 12:39:57.560 [ERROR] pool-14-thread-71 | 248:org.apache.karaf.cellar.features | org.apache.karaf.cellar.features.RepositoryEventHandler | CELLAR FEATURES: failed to add/remove repository URL mvn:com.nextgate.am/ngam-features/9.0.0-SNAPSHOT/xml/features java.lang.NullPointerException at org.apache.karaf.cellar.features.RepositoryEventHandler.handle(RepositoryEventHandler.java:80)[248:org.apache.karaf.cellar.features:3.0.0] at org.apache.karaf.cellar.features.RepositoryEventHandler.handle(RepositoryEventHandler.java:29)[248:org.apache.karaf.cellar.features:3.0.0] at Proxy816e4134_52be_45a8_9a8d_f2b4b7585c2a.handle(Unknown Source)[:] at Proxyd66f100e_e050_4991_b1e8_898982523878.handle(Unknown Source)[:] at 
org.apache.karaf.cellar.core.event.EventDispatchTask.run(EventDispatchTask.java:57)[242:org.apache.karaf.cellar.core:3.0.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_60] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_60] at java.lang.Thread.run(Thread.java:745)[:1.7.0_60] 20140721 12:39:58.496 [WARN ] pool-14-thread-71 | 5:org.ops4j.pax.url.mvn | org.ops4j.pax.url.maven.commons.MavenRepositoryURL | Repository spec https://repository.apache.org/content/repositories/orgapachekaraf-1006/ does not contain an identifier. This is deprecated discouraged just evil. 20140721 12:39:59.729 [WARN ] pool-14-thread-71 | 5:org.ops4j.pax.url.mvn | org.ops4j.pax.url.mvn.internal.AetherBasedResolver | Error resolving artifactcom.nextgate.am:ngam-webservice:jar:9.0.0-SNAPSHOT:Could not find artifact com.nextgate.am:ngam-webservice:jar:9.0.0-SNAPSHOT in defaultlocal (file:/C:/sw/ngs/current/ngs/data/repo/) org.sonatype.aether.resolution.ArtifactResolutionException: Could not find artifact com.nextgate.am:ngam-webservice:jar:9.0.0-SNAPSHOT in defaultlocal (file:/C:/sw/ngs/current/ngs/data/repo/) at org.sonatype.aether.impl.internal.DefaultArtifactResolver.resolve(DefaultArtifactResolver.java:538)[5:org.ops4j.pax.url.mvn:1.6.0] at org.sonatype.aether.impl.internal.DefaultArtifactResolver.resolveArtifacts(DefaultArtifactResolver.java:216)[5:org.ops4j.pax.url.mvn:1.6.0] at org.sonatype.aether.impl.internal.DefaultArtifactResolver.resolveArtifact(DefaultArtifactResolver.java:193)[5:org.ops4j.pax.url.mvn:1.6.0] at org.sonatype.aether.impl.internal.DefaultRepositorySystem.resolveArtifact(DefaultRepositorySystem.java:286)[5:org.ops4j.pax.url.mvn:1.6.0] at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:250)[5:org.ops4j.pax.url.mvn:1.6.0] at org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolveFile(AetherBasedResolver.java:239)[5:org.ops4j.pax.url.mvn:1.6.0] at 
org.ops4j.pax.url.mvn.internal.AetherBasedResolver.resolve(AetherBasedResolver.java:223)[5:org.ops4j.pax.url.mvn:1.6.0] at org.ops4j.pax.url.mvn.internal.Connection.getInputStream(Connection.java:122)[5:org.ops4j.pax.url.mvn:1.6.0] at org.apache.felix.framework.util.SecureAction.getURLConnectionInputStream(SecureAction.java:524)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.cache.JarRevision.initialize(JarRevision.java:165)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.cache.JarRevision.init(JarRevision.java:77)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.cache.BundleArchive.createRevisionFromLocation(BundleArchive.java:878)[org.apache.felix.framework-4.2.1.jar:] at
Re: Cellar for karaf 3.0.1 active/active and failover clustering
No, they are using data/repo. http://karaf.apache.org/index/subprojects/cave/download/karaf-cave-3.0.0-release.html Do I need to use Karaf Cave? We probably cannot set up/install/configure Artifactory at the client sites. Please advise a link to the Cave docs. thx. -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-for-karaf-3-0-1-active-active-and-failover-clustering-tp4033204p4034349.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar for karaf 3.0.1 active/active and failover clustering
Populate from an external repository If you have a bunch of artifacts to upload, it’s not very efficient to use the cave:upload-artifact command. The cave:populate-repository command allows you to upload a set of artifacts from an “external” repository: karaf@root cave:populate-repository cave-repo file:/home/jbonofre/.m2/repository In this example, Cave will browse the file:/home/jbonofre/.m2/repository location, looking for OSGi bundles, and will copy the artifacts in your Cave Repository storage location. http://blog.nanthrax.net/2011/08/apache-karaf-cave-preview/ Do I need to install Cave on all the nodes? And then do I need to do the above (cave:populate-repository) for both nodes or just one? -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-for-karaf-3-0-1-active-active-and-failover-clustering-tp4033204p4034350.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar for karaf 3.0.1 active/active and failover clustering
On Windows, I can't write this cmd properly for it to work:

cave:repository-create cave-repo -l C:/sw/ngs/current/ngs/data/repo data

Error executing command cave:repository-create: too many arguments specified

Example from blog:

cave:create-repository -l /home/jbonofre/.m2/repository m2

-- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-for-karaf-3-0-1-active-active-and-failover-clustering-tp4033204p4034352.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar for karaf 3.0.1 active/active and failover clustering
I don't believe so, here is our config:

org.ops4j.pax.url.mvn.repositories = \
    http://repo1.maven.org/maven2@id=central, \
    http://repository.springsource.com/maven/bundles/release@id=spring.ebr.release, \
    http://repository.springsource.com/maven/bundles/external@id=spring.ebr.external, \
    file:C:\\sw\\ngs\\current\\ngs/system@id=system.repository, \
    file:C:\\sw\\ngs\\current\\ngs\\data/kar@id=kar.repository@multi, \
    https://repository.apache.org/content/repositories/orgapachekaraf-1006/

I was able to replicate bundle installs with the web service POC project I was working on recently. Round robin from httpd and failover worked as well prior. -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-for-karaf-3-0-1-active-active-and-failover-clustering-tp4033204p4034353.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar for karaf 3.0.1 active/active and failover clustering
Is it possible to replicate data/repo? We are considering running feature:repo-add and installing the bundles on each node, which defeats the purpose of Cellar, which replicates bundles, features and configs... -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-for-karaf-3-0-1-active-active-and-failover-clustering-tp4033204p4034354.html Sent from the Karaf - User mailing list archive at Nabble.com.
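One common way to sidestep per-node data/repo divergence (a general pax-url approach, not something confirmed in this thread) is to point every node's etc/org.ops4j.pax.url.mvn.cfg at the same shared repository, so mvn: URLs resolve identically on each node. A sketch; the host and path below are hypothetical examples:

```properties
# etc/org.ops4j.pax.url.mvn.cfg -- hypothetical shared-repo entry.
# Every node resolves mvn: artifacts from the same place, so nothing
# depends on what happens to be in one node's local data/repo.
org.ops4j.pax.url.mvn.repositories = \
    http://repo.example.com/shared-repo@id=shared, \
    http://repo1.maven.org/maven2@id=central
```

A Cave instance reachable by all nodes can play the role of that shared repository; replicating the data/repo directory itself via Cellar is not what Cellar's bundle/feature/config sync does.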
Re: Cellar for karaf 3.0.1 active/active and failover clustering
If I make a change to a file in /etc/a/b/c dir, will it replicate to the same dir structure in the other nodes? I just tried that with both nodes running and it doesn't seem to do anything... -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-for-karaf-3-0-1-active-active-and-failover-clustering-tp4033204p4034355.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar for karaf 3.0.1 active/active and failover clustering
In fact I made a change to etc/org.ops4j.pax.url.mvn.cfg and it did not replicate either (only a comment I tried). -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-for-karaf-3-0-1-active-active-and-failover-clustering-tp4033204p4034356.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: bundle in lifecycle infinite loop
The blacklisting is for the following datasource.xml:

blueprint:file:/C:/sw/ngs/deploy/datasource-person-ws.xml

I'm blacklisting on both nodes (etc/org.apache.karaf.cellar.groups.cfg):

default.bundle.whitelist.inbound = *
default.bundle.whitelist.outbound = *
default.bundle.blacklist.inbound = blueprint:file:/C:/sw/ngs/deploy/datasource-person-ws.xml
default.bundle.blacklist.outbound = blueprint:file:/C:/sw/ngs/deploy/datasource-person-ws.xml
default.bundle.sync = true

-- View this message in context: http://karaf.922171.n3.nabble.com/bundle-in-lifecycle-infinite-loop-tp4033960p4034174.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: bundle in lifecycle infinite loop
We are trying to achieve active/active for a web service (for example) as well as failover... I have tested this and it seems to work so far with cellar 3.0.0 and karaf 3.0.1 -- View this message in context: http://karaf.922171.n3.nabble.com/bundle-in-lifecycle-infinite-loop-tp4033960p4034175.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: writing a bug for Karaf/Cellar
Thanks, this seemed to work. The Cellar docs need lots more tutorials and/or examples. A good one would be how to set up a 2-node horizontal cluster using mod_proxy_balancer/httpd. thx. -- View this message in context: http://karaf.922171.n3.nabble.com/writing-a-bug-for-Karaf-Cellar-tp4034078p4034145.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: bundle in lifecycle infinite loop
This seems to be a bug, but JB suggested a workaround: blacklist the bundle for the datasource xml (use la -l to find its location, then update the bundle blacklist inbound/outbound entries in etc/org.apache.karaf.cellar.groups.cfg). -- View this message in context: http://karaf.922171.n3.nabble.com/bundle-in-lifecycle-infinite-loop-tp4033960p4034146.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: writing a bug for Karaf/Cellar
How exactly do I exclude the datasource xml bundle? If you copy the xml file to the deploy dir, Karaf creates a bundle for it with a corresponding bundle.jar. Are you saying I need to add it to the etc/org.apache.karaf.cellar.groups.cfg file like this:

default.config.blacklist.inbound = org.apache.felix.fileinstall*, \
    org.apache.karaf.cellar*, \
    org.apache.karaf.management, \
    org.apache.karaf.shell, \
    org.ops4j.pax.logging, \
    org.ops4j.pax.web, \
    bundle.jar

default.config.blacklist.outbound = org.apache.felix.fileinstall*, \
    org.apache.karaf.cellar*, \
    org.apache.karaf.management, \
    org.apache.karaf.shell, \
    org.ops4j.pax.logging, \
    org.ops4j.pax.web, \
    bundle.jar

What exactly would the syntax be for referencing the jar or datasource xml here? -- View this message in context: http://karaf.922171.n3.nabble.com/writing-a-bug-for-Karaf-Cellar-tp4034078p4034107.html Sent from the Karaf - User mailing list archive at Nabble.com.
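For reference, what later posts in this thread converge on is the *bundle* blacklist rather than the *config* blacklist, matched against the bundle's location as shown by la -l. A sketch for etc/org.apache.karaf.cellar.groups.cfg; the wildcard pattern is an assumption about how liberally Cellar matches these entries, and the file path is just an example (an exact location string is the conservative choice):

```properties
# Keep syncing bundles in general...
default.bundle.whitelist.inbound = *
default.bundle.whitelist.outbound = *
# ...but exclude the deploy-dir datasource bundle by its location.
# Wildcard matching is assumed here; the exact location
# (e.g. blueprint:file:/C:/sw/ngs/deploy/datasource-person-ws.xml) also works.
default.bundle.blacklist.inbound = blueprint:file:*datasource*.xml
default.bundle.blacklist.outbound = blueprint:file:*datasource*.xml
default.bundle.sync = true
```

The config blacklist (default.config.blacklist.*) matches ConfigAdmin PIDs, which is why entries like bundle.jar there would not exclude a deployed bundle.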
Re: writing a bug for Karaf/Cellar
Sorry, I meant this block in the same file:

default.bundle.whitelist.inbound = *
default.bundle.whitelist.outbound = *
default.bundle.blacklist.inbound = none
default.bundle.blacklist.outbound = none
default.bundle.sync = true

-- View this message in context: http://karaf.922171.n3.nabble.com/writing-a-bug-for-Karaf-Cellar-tp4034078p4034108.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: writing a bug for Karaf/Cellar
Is it possible to blacklist a particular bundle id or a range of bundle ids? -- View this message in context: http://karaf.922171.n3.nabble.com/writing-a-bug-for-Karaf-Cellar-tp4034078p4034109.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: writing a bug for Karaf/Cellar
Just tried this in the same file on both nodes and the behavior is not reproducing (no bundles are being sync'd, I assume):

default.bundle.whitelist.inbound = none
default.bundle.whitelist.outbound = none
default.bundle.blacklist.inbound = *
default.bundle.blacklist.outbound = *
default.bundle.sync = true

-- View this message in context: http://karaf.922171.n3.nabble.com/writing-a-bug-for-Karaf-Cellar-tp4034078p4034110.html Sent from the Karaf - User mailing list archive at Nabble.com.
writing a bug for Karaf/Cellar
How/where do I write a bug for Karaf/Cellar? -- View this message in context: http://karaf.922171.n3.nabble.com/writing-a-bug-for-Karaf-Cellar-tp4034078.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: writing a bug for Karaf/Cellar
I am reproducing a behavior where a datasource.xml bundle is constantly stopping/starting/active (repeat) seemingly in an infinite loop. Blueprint is being destroyed constantly in the karaf.log. Only happens when I have two nodes active in a Karaf/cellar cluster. karaf 3.0.1 cellar 3.0.0 -- View this message in context: http://karaf.922171.n3.nabble.com/writing-a-bug-for-Karaf-Cellar-tp4034078p4034079.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: load-balancing-with-apache-karaf-cellar-and-mod_proxy_balancer
root@ubuntu:/etc/apache2# service apache2 stop
 * Stopping web server apache2
 * The apache2 configtest failed, so we are trying to kill it manually. This is almost certainly suboptimal, so please make sure your system is working as you'd expect now!
root@ubuntu:/etc/apache2# service apache2 start
 * Starting web server apache2
 * The apache2 configtest failed. Output of config test was:
   AH00526: Syntax error on line 223 of /etc/apache2/apache2.conf:
   Invalid command 'Proxy', perhaps misspelled or defined by a module not included in the server configuration
   Action 'configtest' failed.
   The Apache error log may have more information.

-- View this message in context: http://karaf.922171.n3.nabble.com/load-balancing-with-apache-karaf-cellar-and-mod-proxy-balancer-tp4027803p4034035.html Sent from the Karaf - User mailing list archive at Nabble.com.
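The "Invalid command 'Proxy'" part of that configtest output usually means the proxy modules simply aren't loaded. What Apache needs is LoadModule directives like the sketch below; on Debian/Ubuntu these are normally enabled with "a2enmod proxy proxy_balancer proxy_http" rather than written by hand, and the module paths here are illustrative assumptions about this particular install:

```apache
# Modules that <Proxy>, BalancerMember and ProxyPass depend on.
# On Debian/Ubuntu, prefer: a2enmod proxy proxy_balancer proxy_http
# (Apache 2.4 additionally needs a load-balancing method module,
# e.g. lbmethod_byrequests.)
LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
LoadModule proxy_balancer_module /usr/lib/apache2/modules/mod_proxy_balancer.so
LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
```

After enabling the modules, "apachectl configtest" should pass and the service can be restarted normally.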
Re: load-balancing-with-apache-karaf-cellar-and-mod_proxy_balancer
Configured on Mac host instead of Ubuntu and now it's working... -- View this message in context: http://karaf.922171.n3.nabble.com/load-balancing-with-apache-karaf-cellar-and-mod-proxy-balancer-tp4027803p4034036.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: load-balancing-with-apache-karaf-cellar-and-mod_proxy_balancer
I have a Mac host with 3 VMs: one Win7 for a Karaf node, another Win7 for a Karaf node, and one Ubuntu for httpd. I am following the directions here: http://blog.nanthrax.net/2013/02/load-balancing-with-apache-karaf-cellar-and-mod_proxy_balancer/ I have added the following config to /etc/apache2/apache2.conf in the Ubuntu 13.10 distro. Note that there is no /etc/httpd dir. The following URL is working on Ubuntu: http://localhost (I see the "It works!", etc.) I edited the apache2.conf file and stopped/started the apache2 service:

service apache2 stop
service apache2 start

Contents of the config change in /etc/apache2/apache2.conf:

<Proxy balancer://mycluster>
    BalancerMember http://192.168.2.111:8181
    BalancerMember http://192.168.2.110:8181
</Proxy>
ProxyPass /ws balancer://mycluster

<Location /balancer-manager>
    SetHandler balancer-manager
    Order allow,deny
    Allow from all
</Location>

I added this config at the very bottom of the file w/o editing anything else. I started karaf on both nodes of the cluster. I am able to navigate to http://localhost:8181/ws on both nodes. When I navigate to http://localhost/balancer-manager I get: The requested URL /balancer-manager was not found on this server. Please advise how to fix. thx. -- View this message in context: http://karaf.922171.n3.nabble.com/load-balancing-with-apache-karaf-cellar-and-mod-proxy-balancer-tp4027803p4033994.html Sent from the Karaf - User mailing list archive at Nabble.com.
Destroying BlueprintContainer for bundle datasource.xml
I used the bundle.jar created in the data/cache/xxx dir, which happens after you drop a datasource.xml file into the deploy dir of a running Karaf 3.0.1 instance (cluster). Then I install to both nodes in the cluster via replication as follows:

install file:/path/to/file/datasource.xml
start xxx

The bundle remains in the active state on both nodes. Then the karaf.log is polluted with the following message repeatedly. Please advise how to resolve. thx. 20140703 11:30:32.732 [INFO ] pool-13-thread-50 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml 20140703 11:30:32.795 [INFO ] pool-13-thread-75 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml 20140703 11:30:32.826 [INFO ] pool-13-thread-77 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml 20140703 11:30:35.946 [ERROR] pool-13-thread-47 | 140:org.springframework.osgi.extender | org.springframework.osgi.extender.internal.activator.ContextLoaderListener | Cannot create application context for bundle [null (datasource-person-ws.xml)] java.lang.NullPointerException at org.springframework.osgi.extender.support.DefaultOsgiApplicationContextCreator.createApplicationContext(DefaultOsgiApplicationContextCreator.java:56)[140:org.springframework.osgi.extender:1.2.1] at org.springframework.osgi.extender.internal.activator.ContextLoaderListener.maybeCreateApplicationContextFor(ContextLoaderListener.java:688)[140:org.springframework.osgi.extender:1.2.1] at org.springframework.osgi.extender.internal.activator.ContextLoaderListener$ContextBundleListener.handleEvent(ContextLoaderListener.java:229)[140:org.springframework.osgi.extender:1.2.1] at 
org.springframework.osgi.extender.internal.activator.ContextLoaderListener$BaseListener.bundleChanged(ContextLoaderListener.java:172)[140:org.springframework.osgi.extender:1.2.1] at org.apache.felix.framework.util.EventDispatcher.invokeBundleListenerCallback(EventDispatcher.java:868)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.util.EventDispatcher.fireEventImmediately(EventDispatcher.java:789)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.util.EventDispatcher.fireBundleEvent(EventDispatcher.java:514)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.Felix.fireBundleEvent(Felix.java:4403)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.Felix.startBundle(Felix.java:2092)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:955)[org.apache.felix.framework-4.2.1.jar:] at org.apache.felix.framework.BundleImpl.start(BundleImpl.java:942)[org.apache.felix.framework-4.2.1.jar:] at org.apache.karaf.cellar.bundle.BundleSupport.startBundle(BundleSupport.java:75)[111:org.apache.karaf.cellar.bundle:3.0.0] at org.apache.karaf.cellar.bundle.BundleEventHandler.handle(BundleEventHandler.java:86)[111:org.apache.karaf.cellar.bundle:3.0.0] at org.apache.karaf.cellar.bundle.BundleEventHandler.handle(BundleEventHandler.java:34)[111:org.apache.karaf.cellar.bundle:3.0.0] at Proxycc70c24c_ddf4_42a9_ad67_2183ebdb6503.handle(Unknown Source)[:] at Proxyeca36dab_8f46_497b_aec3_eb7c5aea9005.handle(Unknown Source)[:] at org.apache.karaf.cellar.core.event.EventDispatchTask.run(EventDispatchTask.java:57)[106:org.apache.karaf.cellar.core:3.0.0] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)[:1.7.0_60] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)[:1.7.0_60] at java.lang.Thread.run(Thread.java:745)[:1.7.0_60] -- View this message in context: 
http://karaf.922171.n3.nabble.com/Destroying-BlueprintContainer-for-bundle-datasource-xml-tp4033985.html Sent from the Karaf - User mailing list archive at Nabble.com.
bundle in lifecycle infinite loop
A datasource.xml bundle in the deploy dir cycles through starting/stopping/active (repeat). I'm running Karaf 3.0.1 in a clustered (2-node) Win7 VM environment. This is the first time I've seen this, and I was able to reproduce it after restarting both Karaf instances.

Also, I'd like to get this XML file into a mvn: repo (is it possible to just copy it, with the right path, into ~/.m2/repository?). The reason is that it sounds like files in the deploy dir don't sync to other nodes in the same group/cluster.

log:

20140702 11:36:34.523 [INFO ] pool-14-thread-6 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml
20140702 11:36:34.555 [INFO ] pool-14-thread-6 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml
20140702 11:36:34.586 [INFO ] pool-14-thread-22 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml
20140702 11:36:34.633 [INFO ] pool-14-thread-18 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml
20140702 11:36:34.664 [INFO ] pool-14-thread-6 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml
20140702 11:36:34.695 [INFO ] pool-14-thread-6 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml
20140702 11:36:34.742 [INFO ] pool-14-thread-6 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml
20140702 11:36:34.789 [INFO ] pool-14-thread-4 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml
20140702 11:36:34.820 [INFO ] pool-14-thread-7 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml
20140702 11:36:34.867 [INFO ] pool-14-thread-7 | 19:org.apache.aries.blueprint.core | org.apache.aries.blueprint.container.BlueprintExtender | Destroying BlueprintContainer for bundle datasource-person-ws.xml

-- View this message in context: http://karaf.922171.n3.nabble.com/bundle-in-lifecycle-infinite-loop-tp4033960.html Sent from the Karaf - User mailing list archive at Nabble.com.
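On the mvn: repo question above: the local Maven repository is just a directory layout, so copying the file into the matching groupId/artifactId/version path does work. A minimal sketch, using hypothetical coordinates com.example/datasource-person-ws/1.0.0 (substitute your own):

```shell
# Sketch: place datasource-person-ws.xml into the local Maven repo layout so
# Karaf (Pax URL) can resolve it via a mvn: URL.
# The groupId/artifactId/version below are hypothetical, not from the post.
ARTIFACT=datasource-person-ws
VERSION=1.0.0
DEST="$HOME/.m2/repository/com/example/$ARTIFACT/$VERSION"

touch "$ARTIFACT.xml"   # placeholder; use your real datasource XML file here
mkdir -p "$DEST"
cp "$ARTIFACT.xml" "$DEST/$ARTIFACT-$VERSION.xml"

# In the Karaf console it could then be installed as (note the /xml type):
#   install mvn:com.example/datasource-person-ws/1.0.0/xml
ls "$DEST"
```

Each node resolves mvn: URLs against its own repositories, so this only keeps nodes in sync if they share a repository (or the artifact is copied to each node's ~/.m2).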
Re: Cellar for karaf 3.0.1 active/active and failover clustering
I'm installing custom features for our product, which is built on top of Karaf 3.0.1. How can I send the logs (I assume you are referring to the karaf.log files)? Do you have an email address? -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-for-karaf-3-0-1-active-active-and-failover-clustering-tp4033204p4033845.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar for karaf 3.0.1 active/active and failover clustering
What version of Cave do we need to use with Karaf 3.0.1? -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-for-karaf-3-0-1-active-active-and-failover-clustering-tp4033204p4033848.html Sent from the Karaf - User mailing list archive at Nabble.com.
Re: Cellar for karaf 3.0.1 active/active and failover clustering
I installed it successfully:

feature:install cellar-webconsole

admin@NextGate()> list
START LEVEL 33 , List Threshold: 50
 ID | State    | Lvl | Version | Name
-------------------------------------------------------
376 | Resolved |  80 | 3.0.0   | Apache Karaf :: Cellar :: Webconsole

Then I went to http://localhost:8181/system/console/bundles. How do I view the info for the cluster in the web console? Even after restarting Karaf, I see only Licenses and System Information... -- View this message in context: http://karaf.922171.n3.nabble.com/Cellar-for-karaf-3-0-1-active-active-and-failover-clustering-tp4033204p4033821.html Sent from the Karaf - User mailing list archive at Nabble.com.