[jira] [Created] (KARAF-5325) Karaf uses EEST timezone by default

2017-08-29 Thread Suresh Perumal (JIRA)
Suresh Perumal created KARAF-5325:
-

 Summary: Karaf uses EEST timezone by default
 Key: KARAF-5325
 URL: https://issues.apache.org/jira/browse/KARAF-5325
 Project: Karaf
  Issue Type: Bug
  Components: karaf-core
Affects Versions: 4.0.3
 Environment: Linux CENT OS 7
karaf : 4.0.3 version
Reporter: Suresh Perumal


Team,

The system (Linux OS) uses the JST timezone, but the Karaf container defaults 
to the EEST timezone. How can it be changed to follow the OS timezone?

We have set the timezone via the -Duser.timezone JVM parameter as part of:
export EXTRA_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/opt/suresh -Duser.timezone=Europe/Sofia"

We have made this change in setenv under the bin directory and verified whether 
the JVM picks up this value:

suresh   12405     1  0 07:52 pts/3    00:00:25 /usr/bin/java -server -Xms128M 
-Xmx512M -XX:+UnlockDiagnosticVMOptions -XX:+UnsyncloadClass 
-Dcom.sun.management.jmxremote -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/opt/suresh *-Duser.timezone=Europe/Sofia* 
-Djava.endorsed.dirs=/opt/jdk1.8.0_66/jre/lib/endorsed:/opt/jdk1.8.0_66/lib/endorsed:/opt/suresh/apache-karaf-4.0.3/lib/endorsed
 
-Djava.ext.dirs=/opt/jdk1.8.0_66/jre/lib/ext:/opt/jdk1.8.0_66/lib/ext:/opt/suresh/apache-karaf-4.0.3/lib/ext
 -Dkaraf.instances=/opt/suresh/apache-karaf-4.0.3/instances 
-Dkaraf.home=/opt/suresh/apache-karaf-4.0.3 
-Dkaraf.base=/opt/suresh/apache-karaf-4.0.3 
-Dkaraf.data=/opt/suresh/apache-karaf-4.0.3/data 
-Dkaraf.etc=/opt/suresh/apache-karaf-4.0.3/etc 
-Djava.io.tmpdir=/opt/suresh/apache-karaf-4.0.3/data/tmp 
-Djava.util.logging.config.file=/opt/suresh/apache-karaf-4.0.3/etc/java.util.logging.properties
 -Dkaraf.startLocalConsole=false -Dkaraf.startRemoteShell=true -classpath 
/opt/suresh/apache-karaf-4.0.3/lib/boot/org.apache.karaf.diagnostic.boot-4.0.3.jar:/opt/suresh/apache-karaf-4.0.3/lib/boot/org.apache.karaf.jaas.boot-4.0.3.jar:/opt/suresh/apache-karaf-4.0.3/lib/boot/org.apache.karaf.main-4.0.3.jar:/opt/suresh/apache-karaf-4.0.3/lib/boot/org.osgi.core-6.0.0.jar
 org.apache.karaf.main.Main
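
For reference, a quick way to confirm which timezone the running JVM actually 
resolved is the Karaf console (a sketch, assuming the standard system feature 
providing the system:property command is installed and the shell is reachable, 
e.g. via bin/client):

karaf@root()> system:property user.timezone
Europe/Sofia

If the expected value does not show up, the flag from bin/setenv was not passed 
to the JVM (for example because EXTRA_JAVA_OPTS is overridden elsewhere).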




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (KARAF-5325) Karaf uses EEST timezone by default

2017-08-29 Thread Suresh Perumal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KARAF-5325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Perumal updated KARAF-5325:
--
Priority: Critical  (was: Major)

> Karaf uses EEST timezone by default
> ---
>
> Key: KARAF-5325
> URL: https://issues.apache.org/jira/browse/KARAF-5325
> Project: Karaf
>  Issue Type: Bug
>  Components: karaf-core
>Affects Versions: 4.0.3
> Environment: Linux CENT OS 7
> karaf : 4.0.3 version
>Reporter: Suresh Perumal
>Priority: Critical
>
> Team,
> The system (Linux OS) uses the JST timezone, but the Karaf container defaults 
> to the EEST timezone. How can it be changed to follow the OS timezone?
> We have set the timezone via the -Duser.timezone JVM parameter as part of:
> export EXTRA_JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError 
> -XX:HeapDumpPath=/opt/suresh -Duser.timezone=Europe/Sofia"
> We have made this change in setenv under the bin directory and verified 
> whether the JVM picks up this value:
> suresh   12405     1  0 07:52 pts/3    00:00:25 /usr/bin/java -server 
> -Xms128M -Xmx512M -XX:+UnlockDiagnosticVMOptions -XX:+UnsyncloadClass 
> -Dcom.sun.management.jmxremote -XX:+HeapDumpOnOutOfMemoryError 
> -XX:HeapDumpPath=/opt/suresh *-Duser.timezone=Europe/Sofia* 
> -Djava.endorsed.dirs=/opt/jdk1.8.0_66/jre/lib/endorsed:/opt/jdk1.8.0_66/lib/endorsed:/opt/suresh/apache-karaf-4.0.3/lib/endorsed
>  
> -Djava.ext.dirs=/opt/jdk1.8.0_66/jre/lib/ext:/opt/jdk1.8.0_66/lib/ext:/opt/suresh/apache-karaf-4.0.3/lib/ext
>  -Dkaraf.instances=/opt/suresh/apache-karaf-4.0.3/instances 
> -Dkaraf.home=/opt/suresh/apache-karaf-4.0.3 
> -Dkaraf.base=/opt/suresh/apache-karaf-4.0.3 
> -Dkaraf.data=/opt/suresh/apache-karaf-4.0.3/data 
> -Dkaraf.etc=/opt/suresh/apache-karaf-4.0.3/etc 
> -Djava.io.tmpdir=/opt/suresh/apache-karaf-4.0.3/data/tmp 
> -Djava.util.logging.config.file=/opt/suresh/apache-karaf-4.0.3/etc/java.util.logging.properties
>  -Dkaraf.startLocalConsole=false -Dkaraf.startRemoteShell=true -classpath 
> /opt/suresh/apache-karaf-4.0.3/lib/boot/org.apache.karaf.diagnostic.boot-4.0.3.jar:/opt/suresh/apache-karaf-4.0.3/lib/boot/org.apache.karaf.jaas.boot-4.0.3.jar:/opt/suresh/apache-karaf-4.0.3/lib/boot/org.apache.karaf.main-4.0.3.jar:/opt/suresh/apache-karaf-4.0.3/lib/boot/org.osgi.core-6.0.0.jar
>  org.apache.karaf.main.Main



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (KARAF-4878) Cellar Hazelcast unresponsive when ETH Down

2017-01-18 Thread Suresh Perumal (JIRA)

[ 
https://issues.apache.org/jira/browse/KARAF-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15829350#comment-15829350
 ] 

Suresh Perumal commented on KARAF-4878:
---

Any update, Jean?

When will this be pushed, or which version of Karaf will include this support?

> Cellar Hazelcast unresponsive when ETH Down
> ---
>
> Key: KARAF-4878
> URL: https://issues.apache.org/jira/browse/KARAF-4878
> Project: Karaf
>  Issue Type: Bug
>  Components: cellar-hazelcast
>Affects Versions: 4.0.5
> Environment: Redhat Linux 7.2, CentOS 7.2
>Reporter: Suresh Perumal
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
>
> The cluster is configured with 2 nodes, and both are up and running.
> As part of fail-over simulation, we are testing the "Ethernet down" scenario 
> by running the "/etc/sysconfig/network-scripts/ifdown eth0" command on the 
> first node.
> During this scenario we shut down the first node (the one whose Ethernet 
> interface is down) using in-house monitoring scripts. The second node is kept 
> alive.
> The second node's Hazelcast is not accessible for more than 15 minutes. We get 
> the exception below, no Hazelcast-related operation works, and applications 
> that use Hazelcast remain frozen.
> Invocation   | 52 - com.hazelcast - 3.5.2 | 
> [10.249.50.80]:5701 [cellar] [3.5.2] While asking 'is-executing': Invocation{ 
> serviceName='hz:impl:mapService', op=PutOperation{unacknowledged-alarm}, 
> partitionId=165, replicaIndex=0, tryCount=250, tryPauseMillis=500, 
> invokeCount=1, callTimeout=6, target=Address[10.249.50.79]:5701, 
> backupsExpected=0, backupsCompleted=0}
> java.util.concurrent.TimeoutException: Call Invocation{ 
> serviceName='hz:impl:mapService', 
> op=com.hazelcast.spi.impl.operationservice.impl.operations.IsStillExecutingOperation{serviceName='hz:impl:mapService',
>  partitionId=-1, callId=2114, invocationTime=1480511190143, waitTimeout=-1, 
> callTimeout=5000}, partitionId=-1, replicaIndex=0, tryCount=0, 
> tryPauseMillis=0, invokeCount=1, callTimeout=5000, 
> target=Address[10.249.50.79]:5701, backupsExpected=0, backupsCompleted=0} 
> encountered a timeout
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveApplicationResponse(InvocationFuture.java:366)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveApplicationResponseOrThrowException(InvocationFuture.java:334)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:225)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService.isOperationExecuting(IsStillRunningService.java:85)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.waitForResponse(InvocationFuture.java:275)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:224)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:204)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxySupport.invokeOperation(MapProxySupport.java:456)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxySupport.putInternal(MapProxySupport.java:417)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:97)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:87)[52:com.hazelcast:3.5.2]
> at 
> com.fujitsu.fnc.emf.fpmplatform.cachemanager.HazelcastCacheManagerMapServiceImpl.addToMap(HazelcastCacheManagerMapServiceImpl.java:87)[209:FPMHazelcastCache:4.1.0.SNAPSHOT]
> at Proxy1897a82c_c032_4a5c_9839_e71cb2af452a.addToMap(Unknown 
> Source)[:]
> at 
> com.fujitsu.fnc.ngemf.fm.server.impl.FpmConsumerTask.prepareJSON(FpmConsumerTask.java:151)[235:com.fujitsu.fnc.ngemf.fm.server.impl:4.1.0.SNAPSHOT]
> at 
> com.fujitsu.fnc.ngemf.fm.server.impl.FpmConsumerTask.run(FpmConsumerTask.java:244)[235:com.fujitsu.fnc.ngemf.fm.server.impl:4.1.0.SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_66]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_66]
> at java.lang.Thread.run(Thread.java:745)[:1.8.0_66]



--
This message was sent by Atlassian JIRA

[jira] [Commented] (KARAF-4878) Cellar Hazelcast unresponsive when ETH Down

2016-12-18 Thread Suresh Perumal (JIRA)

[ 
https://issues.apache.org/jira/browse/KARAF-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15760180#comment-15760180
 ] 

Suresh Perumal commented on KARAF-4878:
---

Any update, or is a fix planned?

> Cellar Hazelcast unresponsive when ETH Down
> ---
>
> Key: KARAF-4878
> URL: https://issues.apache.org/jira/browse/KARAF-4878
> Project: Karaf
>  Issue Type: Bug
>  Components: cellar-hazelcast
>Affects Versions: 4.0.5
> Environment: Redhat Linux 7.2, CentOS 7.2
>Reporter: Suresh Perumal
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
>
> The cluster is configured with 2 nodes, and both are up and running.
> As part of fail-over simulation, we are testing the "Ethernet down" scenario 
> by running the "/etc/sysconfig/network-scripts/ifdown eth0" command on the 
> first node.
> During this scenario we shut down the first node (the one whose Ethernet 
> interface is down) using in-house monitoring scripts. The second node is kept 
> alive.
> The second node's Hazelcast is not accessible for more than 15 minutes. We get 
> the exception below, no Hazelcast-related operation works, and applications 
> that use Hazelcast remain frozen.
> Invocation   | 52 - com.hazelcast - 3.5.2 | 
> [10.249.50.80]:5701 [cellar] [3.5.2] While asking 'is-executing': Invocation{ 
> serviceName='hz:impl:mapService', op=PutOperation{unacknowledged-alarm}, 
> partitionId=165, replicaIndex=0, tryCount=250, tryPauseMillis=500, 
> invokeCount=1, callTimeout=6, target=Address[10.249.50.79]:5701, 
> backupsExpected=0, backupsCompleted=0}
> java.util.concurrent.TimeoutException: Call Invocation{ 
> serviceName='hz:impl:mapService', 
> op=com.hazelcast.spi.impl.operationservice.impl.operations.IsStillExecutingOperation{serviceName='hz:impl:mapService',
>  partitionId=-1, callId=2114, invocationTime=1480511190143, waitTimeout=-1, 
> callTimeout=5000}, partitionId=-1, replicaIndex=0, tryCount=0, 
> tryPauseMillis=0, invokeCount=1, callTimeout=5000, 
> target=Address[10.249.50.79]:5701, backupsExpected=0, backupsCompleted=0} 
> encountered a timeout
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveApplicationResponse(InvocationFuture.java:366)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveApplicationResponseOrThrowException(InvocationFuture.java:334)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:225)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService.isOperationExecuting(IsStillRunningService.java:85)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.waitForResponse(InvocationFuture.java:275)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:224)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:204)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxySupport.invokeOperation(MapProxySupport.java:456)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxySupport.putInternal(MapProxySupport.java:417)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:97)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:87)[52:com.hazelcast:3.5.2]
> at 
> com.fujitsu.fnc.emf.fpmplatform.cachemanager.HazelcastCacheManagerMapServiceImpl.addToMap(HazelcastCacheManagerMapServiceImpl.java:87)[209:FPMHazelcastCache:4.1.0.SNAPSHOT]
> at Proxy1897a82c_c032_4a5c_9839_e71cb2af452a.addToMap(Unknown 
> Source)[:]
> at 
> com.fujitsu.fnc.ngemf.fm.server.impl.FpmConsumerTask.prepareJSON(FpmConsumerTask.java:151)[235:com.fujitsu.fnc.ngemf.fm.server.impl:4.1.0.SNAPSHOT]
> at 
> com.fujitsu.fnc.ngemf.fm.server.impl.FpmConsumerTask.run(FpmConsumerTask.java:244)[235:com.fujitsu.fnc.ngemf.fm.server.impl:4.1.0.SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_66]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_66]
> at java.lang.Thread.run(Thread.java:745)[:1.8.0_66]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KARAF-4878) Cellar Hazelcast unresponsive when ETH Down

2016-12-09 Thread Suresh Perumal (JIRA)

[ 
https://issues.apache.org/jira/browse/KARAF-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15735463#comment-15735463
 ] 

Suresh Perumal commented on KARAF-4878:
---

Any update?

> Cellar Hazelcast unresponsive when ETH Down
> ---
>
> Key: KARAF-4878
> URL: https://issues.apache.org/jira/browse/KARAF-4878
> Project: Karaf
>  Issue Type: Bug
>  Components: cellar-hazelcast
>Affects Versions: 4.0.5
> Environment: Redhat Linux 7.2, CentOS 7.2
>Reporter: Suresh Perumal
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
>
> The cluster is configured with 2 nodes, and both are up and running.
> As part of fail-over simulation, we are testing the "Ethernet down" scenario 
> by running the "/etc/sysconfig/network-scripts/ifdown eth0" command on the 
> first node.
> During this scenario we shut down the first node (the one whose Ethernet 
> interface is down) using in-house monitoring scripts. The second node is kept 
> alive.
> The second node's Hazelcast is not accessible for more than 15 minutes. We get 
> the exception below, no Hazelcast-related operation works, and applications 
> that use Hazelcast remain frozen.
> Invocation   | 52 - com.hazelcast - 3.5.2 | 
> [10.249.50.80]:5701 [cellar] [3.5.2] While asking 'is-executing': Invocation{ 
> serviceName='hz:impl:mapService', op=PutOperation{unacknowledged-alarm}, 
> partitionId=165, replicaIndex=0, tryCount=250, tryPauseMillis=500, 
> invokeCount=1, callTimeout=6, target=Address[10.249.50.79]:5701, 
> backupsExpected=0, backupsCompleted=0}
> java.util.concurrent.TimeoutException: Call Invocation{ 
> serviceName='hz:impl:mapService', 
> op=com.hazelcast.spi.impl.operationservice.impl.operations.IsStillExecutingOperation{serviceName='hz:impl:mapService',
>  partitionId=-1, callId=2114, invocationTime=1480511190143, waitTimeout=-1, 
> callTimeout=5000}, partitionId=-1, replicaIndex=0, tryCount=0, 
> tryPauseMillis=0, invokeCount=1, callTimeout=5000, 
> target=Address[10.249.50.79]:5701, backupsExpected=0, backupsCompleted=0} 
> encountered a timeout
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveApplicationResponse(InvocationFuture.java:366)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveApplicationResponseOrThrowException(InvocationFuture.java:334)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:225)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService.isOperationExecuting(IsStillRunningService.java:85)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.waitForResponse(InvocationFuture.java:275)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:224)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:204)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxySupport.invokeOperation(MapProxySupport.java:456)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxySupport.putInternal(MapProxySupport.java:417)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:97)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:87)[52:com.hazelcast:3.5.2]
> at 
> com.fujitsu.fnc.emf.fpmplatform.cachemanager.HazelcastCacheManagerMapServiceImpl.addToMap(HazelcastCacheManagerMapServiceImpl.java:87)[209:FPMHazelcastCache:4.1.0.SNAPSHOT]
> at Proxy1897a82c_c032_4a5c_9839_e71cb2af452a.addToMap(Unknown 
> Source)[:]
> at 
> com.fujitsu.fnc.ngemf.fm.server.impl.FpmConsumerTask.prepareJSON(FpmConsumerTask.java:151)[235:com.fujitsu.fnc.ngemf.fm.server.impl:4.1.0.SNAPSHOT]
> at 
> com.fujitsu.fnc.ngemf.fm.server.impl.FpmConsumerTask.run(FpmConsumerTask.java:244)[235:com.fujitsu.fnc.ngemf.fm.server.impl:4.1.0.SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_66]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_66]
> at java.lang.Thread.run(Thread.java:745)[:1.8.0_66]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KARAF-4878) Cellar Hazelcast unresponsive when ETH Down

2016-12-06 Thread Suresh Perumal (JIRA)

[ 
https://issues.apache.org/jira/browse/KARAF-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727946#comment-15727946
 ] 

Suresh Perumal commented on KARAF-4878:
---

Can we set the two attributes below to reduce the wait time?
hazelcast.max.no.heartbeat.seconds
hazelcast.max.no.master.confirmation.seconds

Will there be any issues if we change these attributes?

Below are the changes we tried out in our environment.

In the Karaf container, in the KARAF_HOME/bin/setenv file:

export EXTRA_JAVA_OPTS="-Dhazelcast.max.no.heartbeat.seconds=60 
-Dhazelcast.max.no.master.confirmation.seconds=60"

http://docs.hazelcast.org/docs/2.2/manual/html/ch12s06.html
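
For reference, the same properties can also be set declaratively (a sketch 
only, assuming a default Cellar installation where the Hazelcast configuration 
lives in KARAF_HOME/etc/hazelcast.xml) via the properties element of that file:

<properties>
    <!-- consider a member dead after 60 seconds without heartbeats -->
    <property name="hazelcast.max.no.heartbeat.seconds">60</property>
    <!-- drop members whose master confirmations stop arriving for 60 seconds -->
    <property name="hazelcast.max.no.master.confirmation.seconds">60</property>
</properties>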

> Cellar Hazelcast unresponsive when ETH Down
> ---
>
> Key: KARAF-4878
> URL: https://issues.apache.org/jira/browse/KARAF-4878
> Project: Karaf
>  Issue Type: Bug
>  Components: cellar-hazelcast
>Affects Versions: 4.0.5
> Environment: Redhat Linux 7.2, CentOS 7.2
>Reporter: Suresh Perumal
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
>
> The cluster is configured with 2 nodes, and both are up and running.
> As part of fail-over simulation, we are testing the "Ethernet down" scenario 
> by running the "/etc/sysconfig/network-scripts/ifdown eth0" command on the 
> first node.
> During this scenario we shut down the first node (the one whose Ethernet 
> interface is down) using in-house monitoring scripts. The second node is kept 
> alive.
> The second node's Hazelcast is not accessible for more than 15 minutes. We get 
> the exception below, no Hazelcast-related operation works, and applications 
> that use Hazelcast remain frozen.
> Invocation   | 52 - com.hazelcast - 3.5.2 | 
> [10.249.50.80]:5701 [cellar] [3.5.2] While asking 'is-executing': Invocation{ 
> serviceName='hz:impl:mapService', op=PutOperation{unacknowledged-alarm}, 
> partitionId=165, replicaIndex=0, tryCount=250, tryPauseMillis=500, 
> invokeCount=1, callTimeout=6, target=Address[10.249.50.79]:5701, 
> backupsExpected=0, backupsCompleted=0}
> java.util.concurrent.TimeoutException: Call Invocation{ 
> serviceName='hz:impl:mapService', 
> op=com.hazelcast.spi.impl.operationservice.impl.operations.IsStillExecutingOperation{serviceName='hz:impl:mapService',
>  partitionId=-1, callId=2114, invocationTime=1480511190143, waitTimeout=-1, 
> callTimeout=5000}, partitionId=-1, replicaIndex=0, tryCount=0, 
> tryPauseMillis=0, invokeCount=1, callTimeout=5000, 
> target=Address[10.249.50.79]:5701, backupsExpected=0, backupsCompleted=0} 
> encountered a timeout
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveApplicationResponse(InvocationFuture.java:366)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveApplicationResponseOrThrowException(InvocationFuture.java:334)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:225)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService.isOperationExecuting(IsStillRunningService.java:85)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.waitForResponse(InvocationFuture.java:275)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:224)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:204)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxySupport.invokeOperation(MapProxySupport.java:456)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxySupport.putInternal(MapProxySupport.java:417)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:97)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:87)[52:com.hazelcast:3.5.2]
> at 
> com.fujitsu.fnc.emf.fpmplatform.cachemanager.HazelcastCacheManagerMapServiceImpl.addToMap(HazelcastCacheManagerMapServiceImpl.java:87)[209:FPMHazelcastCache:4.1.0.SNAPSHOT]
> at Proxy1897a82c_c032_4a5c_9839_e71cb2af452a.addToMap(Unknown 
> Source)[:]
> at 
> com.fujitsu.fnc.ngemf.fm.server.impl.FpmConsumerTask.prepareJSON(FpmConsumerTask.java:151)[235:com.fujitsu.fnc.ngemf.fm.server.impl:4.1.0.SNAPSHOT]
> at 
> com.fujitsu.fnc.ngemf.fm.server.impl.FpmConsumerTask.run(FpmConsumerTask.java:244)[235:com.fujitsu.fnc.ngemf.fm.server.impl:4.1.0.SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_66]
> at 

[jira] [Commented] (KARAF-4882) keystore.jks update in karaf requires force restart

2016-12-06 Thread Suresh Perumal (JIRA)

[ 
https://issues.apache.org/jira/browse/KARAF-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727781#comment-15727781
 ] 

Suresh Perumal commented on KARAF-4882:
---

At runtime we just want to update keystore.jks, but the change only takes effect 
when Karaf is restarted. The new keystore.jks is picked up only after Karaf is 
stopped and started again.

> keystore.jks update in karaf requires force restart
> ---
>
> Key: KARAF-4882
> URL: https://issues.apache.org/jira/browse/KARAF-4882
> Project: Karaf
>  Issue Type: Bug
>  Components: karaf-core
>Affects Versions: 4.0.5
> Environment: Cent OS 7.2, RHEL 7.2
>Reporter: Suresh Perumal
>Priority: Blocker
>
> We are using Karaf 4.0.5 and 4.0.6.
> We use a self-signed certificate for HTTPS support.
> In some scenarios the certificate expires and we need to regenerate it.
> When that happens, the newly generated keystore.jks is stored in the 
> KARAF_HOME/etc folder.
> However, Karaf does not seem to pick up the new keystore.jks; it requires a 
> restart of the Karaf server.
> In many cases we cannot restart the Karaf server, so forcing a restart is not 
> an acceptable approach.
> I would like to know how to force an update of the certificates without a 
> restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KARAF-4882) keystore.jks update in karaf requires force restart

2016-12-06 Thread Suresh Perumal (JIRA)

[ 
https://issues.apache.org/jira/browse/KARAF-4882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15727776#comment-15727776
 ] 

Suresh Perumal commented on KARAF-4882:
---

Below is the configuration used in pax-web.
We create keystore.jks with the Java keytool command and use this self-signed 
certificate for HTTPS access.

org.ops4j.pax.web.cfg:
org.osgi.service.http.port=8181
org.osgi.service.http.port.secure=8443
org.osgi.service.http.secure.enabled=true
org.ops4j.pax.web.ssl.keystore=/opt/vira/fpm4.1/karaf/etc/keystores/keystore.jks
org.ops4j.pax.web.ssl.password=password
org.ops4j.pax.web.ssl.keypassword=password
org.ops4j.pax.web.config.file=/opt/vira/fpm4.1/karaf/etc/jetty.xml
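
For completeness, the keystore referenced above is generated with keytool 
roughly as follows (a sketch; the alias, validity and distinguished name are 
placeholders, and the store/key passwords must match the 
org.ops4j.pax.web.ssl.* values):

keytool -genkeypair -alias jetty -keyalg RSA -keysize 2048 -validity 365 \
    -dname "CN=example-host" \
    -keystore /opt/vira/fpm4.1/karaf/etc/keystores/keystore.jks \
    -storepass password -keypass password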


> keystore.jks update in karaf requires force restart
> ---
>
> Key: KARAF-4882
> URL: https://issues.apache.org/jira/browse/KARAF-4882
> Project: Karaf
>  Issue Type: Bug
>  Components: karaf-core
>Affects Versions: 4.0.5
> Environment: Cent OS 7.2, RHEL 7.2
>Reporter: Suresh Perumal
>Priority: Blocker
>
> We are using Karaf 4.0.5 and 4.0.6.
> We use a self-signed certificate for HTTPS support.
> In some scenarios the certificate expires and we need to regenerate it.
> When that happens, the newly generated keystore.jks is stored in the 
> KARAF_HOME/etc folder.
> However, Karaf does not seem to pick up the new keystore.jks; it requires a 
> restart of the Karaf server.
> In many cases we cannot restart the Karaf server, so forcing a restart is not 
> an acceptable approach.
> I would like to know how to force an update of the certificates without a 
> restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KARAF-4882) keystore.jks update in karaf requires force restart

2016-12-06 Thread Suresh Perumal (JIRA)
Suresh Perumal created KARAF-4882:
-

 Summary: keystore.jks update in karaf requires force restart
 Key: KARAF-4882
 URL: https://issues.apache.org/jira/browse/KARAF-4882
 Project: Karaf
  Issue Type: Bug
  Components: karaf-core
Affects Versions: 4.0.5
 Environment: Cent OS 7.2, RHEL 7.2
Reporter: Suresh Perumal
Priority: Blocker


We are using Karaf 4.0.5 and 4.0.6.

We use a self-signed certificate for HTTPS support.
In some scenarios the certificate expires and we need to regenerate it.

When that happens, the newly generated keystore.jks is stored in the 
KARAF_HOME/etc folder.
However, Karaf does not seem to pick up the new keystore.jks; it requires a 
restart of the Karaf server.
In many cases we cannot restart the Karaf server, so forcing a restart is not an 
acceptable approach.
I would like to know how to force an update of the certificates without a 
restart.
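
One thing that can be tried before a full restart (a sketch only, not verified 
against this Karaf version) is restarting just the pax-web bundles from the 
console so Jetty is re-created and re-reads the keystore:

karaf@root()> bundle:list | grep pax-web
karaf@root()> bundle:restart <id-of-pax-web-runtime>

Whether this actually picks up the new keystore.jks without a container restart 
would need to be confirmed.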



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KARAF-4878) Cellar Hazelcast unresponsive when ETH Down

2016-12-06 Thread Suresh Perumal (JIRA)

[ 
https://issues.apache.org/jira/browse/KARAF-4878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15726044#comment-15726044
 ] 

Suresh Perumal commented on KARAF-4878:
---

Any update on this issue?

> Cellar Hazelcast unresponsive when ETH Down
> ---
>
> Key: KARAF-4878
> URL: https://issues.apache.org/jira/browse/KARAF-4878
> Project: Karaf
>  Issue Type: Bug
>  Components: cellar-hazelcast
>Affects Versions: 4.0.5
> Environment: Redhat Linux 7.2, CentOS 7.2
>Reporter: Suresh Perumal
>Assignee: Jean-Baptiste Onofré
>Priority: Blocker
>
> The cluster is configured with 2 nodes, and both are up and running.
> As part of fail-over simulation, we are testing the "Ethernet down" scenario 
> by running the "/etc/sysconfig/network-scripts/ifdown eth0" command on the 
> first node.
> During this scenario we shut down the first node (the one whose Ethernet 
> interface is down) using in-house monitoring scripts. The second node is kept 
> alive.
> The second node's Hazelcast is not accessible for more than 15 minutes. We get 
> the exception below, no Hazelcast-related operation works, and applications 
> that use Hazelcast remain frozen.
> Invocation   | 52 - com.hazelcast - 3.5.2 | 
> [10.249.50.80]:5701 [cellar] [3.5.2] While asking 'is-executing': Invocation{ 
> serviceName='hz:impl:mapService', op=PutOperation{unacknowledged-alarm}, 
> partitionId=165, replicaIndex=0, tryCount=250, tryPauseMillis=500, 
> invokeCount=1, callTimeout=6, target=Address[10.249.50.79]:5701, 
> backupsExpected=0, backupsCompleted=0}
> java.util.concurrent.TimeoutException: Call Invocation{ 
> serviceName='hz:impl:mapService', 
> op=com.hazelcast.spi.impl.operationservice.impl.operations.IsStillExecutingOperation{serviceName='hz:impl:mapService',
>  partitionId=-1, callId=2114, invocationTime=1480511190143, waitTimeout=-1, 
> callTimeout=5000}, partitionId=-1, replicaIndex=0, tryCount=0, 
> tryPauseMillis=0, invokeCount=1, callTimeout=5000, 
> target=Address[10.249.50.79]:5701, backupsExpected=0, backupsCompleted=0} 
> encountered a timeout
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveApplicationResponse(InvocationFuture.java:366)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveApplicationResponseOrThrowException(InvocationFuture.java:334)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:225)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService.isOperationExecuting(IsStillRunningService.java:85)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.waitForResponse(InvocationFuture.java:275)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:224)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:204)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxySupport.invokeOperation(MapProxySupport.java:456)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxySupport.putInternal(MapProxySupport.java:417)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:97)[52:com.hazelcast:3.5.2]
> at 
> com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:87)[52:com.hazelcast:3.5.2]
> at 
> com.fujitsu.fnc.emf.fpmplatform.cachemanager.HazelcastCacheManagerMapServiceImpl.addToMap(HazelcastCacheManagerMapServiceImpl.java:87)[209:FPMHazelcastCache:4.1.0.SNAPSHOT]
> at Proxy1897a82c_c032_4a5c_9839_e71cb2af452a.addToMap(Unknown 
> Source)[:]
> at 
> com.fujitsu.fnc.ngemf.fm.server.impl.FpmConsumerTask.prepareJSON(FpmConsumerTask.java:151)[235:com.fujitsu.fnc.ngemf.fm.server.impl:4.1.0.SNAPSHOT]
> at 
> com.fujitsu.fnc.ngemf.fm.server.impl.FpmConsumerTask.run(FpmConsumerTask.java:244)[235:com.fujitsu.fnc.ngemf.fm.server.impl:4.1.0.SNAPSHOT]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_66]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_66]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_66]
> at java.lang.Thread.run(Thread.java:745)[:1.8.0_66]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KARAF-4873) keystore.jks update in karaf requires force restart

2016-12-04 Thread Suresh Perumal (JIRA)

[ 
https://issues.apache.org/jira/browse/KARAF-4873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15721366#comment-15721366
 ] 

Suresh Perumal commented on KARAF-4873:
---

At runtime we just want to update keystore.jks, but the change only takes effect 
when Karaf is restarted. The new keystore.jks is picked up only after Karaf is 
stopped and started again.

> keystore.jks update in karaf requires force restart
> ---
>
> Key: KARAF-4873
> URL: https://issues.apache.org/jira/browse/KARAF-4873
> Project: Karaf
>  Issue Type: Bug
>  Components: cellar-http
>Affects Versions: 4.0.5
> Environment: karaf 4.0.5/4.0.6 on Linux CentOS, RHEL Platform
>Reporter: Suresh Perumal
>Priority: Blocker
>
> We are using Karaf 4.0.5 and 4.0.6.
> We use a self-signed certificate for HTTPS support.
> In some scenarios the certificate expires and we need to regenerate it.
> When that happens, the newly generated keystore.jks is stored in the 
> KARAF_HOME/etc folder.
> However, Karaf does not seem to pick up the new keystore.jks; it requires a 
> restart of the Karaf server.
> In many cases we cannot restart the Karaf server, so forcing a restart is not 
> an acceptable approach.
> I would like to know how to force an update of the certificates without a 
> restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KARAF-4873) keystore.jks update in karaf requires force restart

2016-12-04 Thread Suresh Perumal (JIRA)

[ 
https://issues.apache.org/jira/browse/KARAF-4873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15721326#comment-15721326
 ] 

Suresh Perumal commented on KARAF-4873:
---

Below is the configuration used in pax-web.
We create keystore.jks with the Java keytool command and use this self-signed 
certificate for HTTPS access.

org.ops4j.pax.web.cfg:

org.osgi.service.http.port=8181
org.osgi.service.http.port.secure=8443
org.osgi.service.http.secure.enabled=true
org.ops4j.pax.web.ssl.keystore=/opt/vira/fpm4.1/karaf/etc/keystores/keystore.jks
org.ops4j.pax.web.ssl.password=password
org.ops4j.pax.web.ssl.keypassword=password
org.ops4j.pax.web.config.file=/opt/vira/fpm4.1/karaf/etc/jetty.xml
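
One option that can be tried without a full container restart is to re-trigger 
the pax-web configuration through ConfigAdmin from the console (a sketch; 
whether pax-web then re-reads the keystore depends on the pax-web version in 
use):

karaf@root()> config:edit org.ops4j.pax.web
karaf@root()> config:property-set org.ops4j.pax.web.ssl.keystore /opt/vira/fpm4.1/karaf/etc/keystores/keystore.jks
karaf@root()> config:update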

> keystore.jks update in karaf requires force restart
> ---
>
> Key: KARAF-4873
> URL: https://issues.apache.org/jira/browse/KARAF-4873
> Project: Karaf
>  Issue Type: Bug
>  Components: cellar-http
>Affects Versions: 4.0.5
> Environment: karaf 4.0.5/4.0.6 on Linux CentOS, RHEL Platform
>Reporter: Suresh Perumal
>Priority: Blocker
>
> We are using Karaf 4.0.5 and 4.0.6.
> We use a self-signed certificate for HTTPS support.
> In some scenarios the certificate expires and we need to regenerate it.
> When that happens, the newly generated keystore.jks is stored in the 
> KARAF_HOME/etc folder.
> However, Karaf does not seem to pick up the new keystore.jks; it requires a 
> restart of the Karaf server.
> In many cases we cannot restart the Karaf server, so forcing a restart is not 
> an acceptable approach.
> I would like to know how to force an update of the certificates without a 
> restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KARAF-4878) Cellar Hazelcast unresponsive when ETH Down

2016-12-04 Thread Suresh Perumal (JIRA)
Suresh Perumal created KARAF-4878:
-

 Summary: Cellar Hazelcast unresponsive when ETH Down
 Key: KARAF-4878
 URL: https://issues.apache.org/jira/browse/KARAF-4878
 Project: Karaf
  Issue Type: Bug
  Components: cellar-hazelcast
Affects Versions: 4.0.5
 Environment: Redhat Linux 7.2, CentOS 7.2
Reporter: Suresh Perumal
Priority: Blocker


The cluster is configured with 2 nodes, and both are up and running.

As part of fail-over simulation, we are testing the "Ethernet down" scenario by 
running the "/etc/sysconfig/network-scripts/ifdown eth0" command on the first 
node.

During this scenario we shut down the first node (the one whose Ethernet 
interface is down) using in-house monitoring scripts. The second node is kept 
alive.

The second node's Hazelcast is not accessible for more than 15 minutes. We get 
the exception below, no Hazelcast-related operation works, and applications that 
use Hazelcast remain frozen.

Invocation   | 52 - com.hazelcast - 3.5.2 | 
[10.249.50.80]:5701 [cellar] [3.5.2] While asking 'is-executing': Invocation{ 
serviceName='hz:impl:mapService', op=PutOperation{unacknowledged-alarm}, 
partitionId=165, replicaIndex=0, tryCount=250, tryPauseMillis=500, 
invokeCount=1, callTimeout=6, target=Address[10.249.50.79]:5701, 
backupsExpected=0, backupsCompleted=0}
java.util.concurrent.TimeoutException: Call Invocation{ 
serviceName='hz:impl:mapService', 
op=com.hazelcast.spi.impl.operationservice.impl.operations.IsStillExecutingOperation{serviceName='hz:impl:mapService',
 partitionId=-1, callId=2114, invocationTime=1480511190143, waitTimeout=-1, 
callTimeout=5000}, partitionId=-1, replicaIndex=0, tryCount=0, 
tryPauseMillis=0, invokeCount=1, callTimeout=5000, 
target=Address[10.249.50.79]:5701, backupsExpected=0, backupsCompleted=0} 
encountered a timeout
at 
com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveApplicationResponse(InvocationFuture.java:366)[52:com.hazelcast:3.5.2]
at 
com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.resolveApplicationResponseOrThrowException(InvocationFuture.java:334)[52:com.hazelcast:3.5.2]
at 
com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:225)[52:com.hazelcast:3.5.2]
at 
com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService.isOperationExecuting(IsStillRunningService.java:85)[52:com.hazelcast:3.5.2]
at 
com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.waitForResponse(InvocationFuture.java:275)[52:com.hazelcast:3.5.2]
at 
com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:224)[52:com.hazelcast:3.5.2]
at 
com.hazelcast.spi.impl.operationservice.impl.InvocationFuture.get(InvocationFuture.java:204)[52:com.hazelcast:3.5.2]
at 
com.hazelcast.map.impl.proxy.MapProxySupport.invokeOperation(MapProxySupport.java:456)[52:com.hazelcast:3.5.2]
at 
com.hazelcast.map.impl.proxy.MapProxySupport.putInternal(MapProxySupport.java:417)[52:com.hazelcast:3.5.2]
at 
com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:97)[52:com.hazelcast:3.5.2]
at 
com.hazelcast.map.impl.proxy.MapProxyImpl.put(MapProxyImpl.java:87)[52:com.hazelcast:3.5.2]
at 
com.fujitsu.fnc.emf.fpmplatform.cachemanager.HazelcastCacheManagerMapServiceImpl.addToMap(HazelcastCacheManagerMapServiceImpl.java:87)[209:FPMHazelcastCache:4.1.0.SNAPSHOT]
at Proxy1897a82c_c032_4a5c_9839_e71cb2af452a.addToMap(Unknown Source)[:]
at 
com.fujitsu.fnc.ngemf.fm.server.impl.FpmConsumerTask.prepareJSON(FpmConsumerTask.java:151)[235:com.fujitsu.fnc.ngemf.fm.server.impl:4.1.0.SNAPSHOT]
at 
com.fujitsu.fnc.ngemf.fm.server.impl.FpmConsumerTask.run(FpmConsumerTask.java:244)[235:com.fujitsu.fnc.ngemf.fm.server.impl:4.1.0.SNAPSHOT]
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)[:1.8.0_66]
at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_66]
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)[:1.8.0_66]
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)[:1.8.0_66]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_66]




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KARAF-4873) keystore.jks update in karaf requires force restart

2016-12-02 Thread Suresh Perumal (JIRA)

 [ 
https://issues.apache.org/jira/browse/KARAF-4873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Perumal updated KARAF-4873:
--
Issue Type: Bug  (was: Question)

> keystore.jks update in karaf requires force restart
> ---
>
> Key: KARAF-4873
> URL: https://issues.apache.org/jira/browse/KARAF-4873
> Project: Karaf
>  Issue Type: Bug
>  Components: cellar-http
>Affects Versions: 4.0.5
> Environment: karaf 4.0.5/4.0.6 on Linux CentOS, RHEL Platform
>Reporter: Suresh Perumal
>Priority: Blocker
>
> We are using Karaf 4.0.5 and 4.0.6.
> We use a self-signed certificate for HTTPS support.
> In some scenarios the certificate expires and we need to regenerate it.
> When that happens, the newly generated keystore.jks is stored in the 
> KARAF_HOME/etc folder.
> However, Karaf does not seem to pick up the new keystore.jks; it requires a 
> restart of the Karaf server.
> In many cases we cannot restart the Karaf server, so forcing a restart is not 
> an acceptable approach.
> I would like to know how to force an update of the certificates without a 
> restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KARAF-4873) keystore.jks update in karaf requires force restart

2016-11-30 Thread Suresh Perumal (JIRA)
Suresh Perumal created KARAF-4873:
-

 Summary: keystore.jks update in karaf requires force restart
 Key: KARAF-4873
 URL: https://issues.apache.org/jira/browse/KARAF-4873
 Project: Karaf
  Issue Type: Question
  Components: cellar-http
Affects Versions: 4.0.5
 Environment: karaf 4.0.5/4.0.6 on Linux CentOS, RHEL Platform
Reporter: Suresh Perumal
Priority: Blocker


We are using Karaf 4.0.5 and 4.0.6.

We use a self-signed certificate for HTTPS support.

In some scenarios the certificate expires and we need to regenerate it.
When that happens, the newly generated keystore.jks is stored in the 
KARAF_HOME/etc folder.
However, Karaf does not seem to pick up the new keystore.jks; it requires a 
restart of the Karaf server.

In many cases we cannot restart the Karaf server, so forcing a restart is not an 
acceptable approach.

I would like to know how to force an update of the certificates without a 
restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)