How do I unsubscribe from this list?

Thank You

Regards
Malith Pamuditha Fernando
Director
RevPort (Pvt.) Ltd.
malith.ferna...@revport.net  | +94 713 76 92 17



________________________________
From: John Dale <jcdw...@gmail.com>
Sent: Tuesday, February 19, 2019 1:57:10 AM
To: Tomcat Users List
Subject: Re: Tomcat session management with Redisson

Regarding clustering and state recovery, I opted some time ago to
store session information in the database - I prefer full control over
session state for security/obscurity reasons.

Load balancing is straightforward this way.
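(Editor's note: John describes custom code for DB-backed sessions, but for readers who want something similar out of the box, a hedged sketch of Tomcat's own PersistentManager with a JDBCStore follows. The driver, URL, table, and column names are illustrative placeholders and must match your own schema; check the Tomcat docs for your version before relying on attribute names.)

```xml
<!-- Hedged sketch, not John's setup: Tomcat's built-in PersistentManager
     backed by JDBCStore keeps session state in a database table.
     All table/column names below are illustrative. -->
<Manager className="org.apache.catalina.session.PersistentManager">
  <Store className="org.apache.catalina.session.JDBCStore"
         driverName="org.postgresql.Driver"
         connectionURL="jdbc:postgresql://db.example.com/appdb?user=tomcat&amp;password=secret"
         sessionTable="tomcat_sessions"
         sessionIdCol="session_id"
         sessionAppCol="app_name"
         sessionDataCol="session_data"
         sessionValidCol="valid"
         sessionMaxInactiveCol="max_inactive"
         sessionLastAccessedCol="last_access"/>
</Manager>
```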

I'm not sure I would ever need more than 2 nodes for my purposes,
though, since Java can address such a huge memory space.  It's an
amazing computing environment now compared to what we had 20 years
ago.



On 2/18/19, Christopher Schultz <ch...@christopherschultz.net> wrote:
>
> Herb,
>
> On 2/18/19 13:59, Herb Burnswell wrote:
>> On Fri, Feb 15, 2019 at 12:21 PM Christopher Schultz <
>> ch...@christopherschultz.net> wrote:
>>
>> Herb,
>>
>> On 2/14/19 12:41, Herb Burnswell wrote:
>>>>> Tomcat 8.5.23, RHEL 7.5
>>>>>
>>>>> We are looking to set up session management via Redisson to
>>>>> offload the CPU consumption of using Tomcat's built in
>>>>> clustering session management.  We have CPU licensing limits
>>>>> and need to conserve as much CPU as possible.
>>
>> Dumb question: aren't you just going to move the CPU cycles to
>> another system?
>>
>>
>>> Thanks for the reply.  Short answer, yes.  But that is the idea.
>>> We can only use 2 CPUs per application node (3 nodes) with our
>>> licensing structure, so we do not want to take cycles away from
>>> the application to manage sessions.
>
> Okay, so if you move the session-management to another machine, you
> don't have to pay app-license fees for the session-management server?
> Fair enough.
>
> Just remember that you still need code "managing" sessions from your
> Tomcat node to your Redisson server. I can't imagine that the
> Tomcat -> Redisson code would be any less complicated than the Tomcat ->
> Tomcat code. You might want to validate that assumption before
> committing any resources toward solving a problem by adding complexity
> to your deployments.
>
>> Another dumb question: do you actually need clustering?
>>
>>
>>> If I'm using the term correctly, yes.  The idea would be for HA
>>> functionality; If users were connected to node 3 and the node
>>> failed for some reason, their session would be picked up by node
>>> 1 or 2 uninterrupted.  Sorry if I confused the intent.
>
> That's exactly what you will get.
>
> If you do NOT use clustering, a failed node will require the users who
> were on the failed node to re-login to a surviving node. Only you can
> determine whether that is an acceptable consequence of a failed node
> for your users and application. I, as well as many others, have
> decided that fail-over is such a rare event and logins such a
> non-issue that introducing the complexity of clustering is not
> justified.
>
>>>>> I have never set up a configuration this way, however I have
>>>>> Redis set up and running as 1 Master, 1 Slave.  I seemingly
>>>>> just need to point our application to it.  I have read this
>>>>> doc on how to:
>>>>>
>>>>> https://github.com/redisson/redisson/tree/master/redisson-tomcat
>>>>>
>>>>> It seems pretty straightforward except for the redisson.conf
>>>>> configuration:
>>>>>
>>>>> Add RedissonSessionManager into tomcat/conf/context.xml
>>>>>
>>>>> <Manager className="org.redisson.tomcat.RedissonSessionManager"
>>>>>          configPath="${catalina.base}/redisson.conf"
>>>>>          readMode="REDIS"
>>>>>          updateMode="DEFAULT"/>
>>
>> I would do this in the application's context.xml file instead of
>> the global/default one. That means modifying the application's
>> META-INF/context.xml file, or, if you deploy via files from
>> outside your WAR/dir application, then
>> conf/[engine]/[hostname]/[appname].xml.
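(Editor's note: to make the per-application placement concrete, here is a sketch of a META-INF/context.xml inside the WAR carrying the Manager element quoted earlier. The configPath value is an assumption about where redisson.conf lives on each node; everything else mirrors the snippet above.)

```xml
<!-- Sketch of META-INF/context.xml bundled in the application WAR.
     configPath is illustrative; the Manager attributes mirror the
     snippet quoted earlier in the thread. -->
<Context>
  <Manager className="org.redisson.tomcat.RedissonSessionManager"
           configPath="${catalina.base}/redisson.conf"
           readMode="REDIS"
           updateMode="DEFAULT"/>
</Context>
```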
>>
>>
>>> Yes, this requires editing an application-specific xml
>>> file.
>
> Good.
>
>>>>> I am more familiar with YAML so plan on configuring the
>>>>> redisson.conf as such.  I have read the referenced
>>>>> configuration wiki page:
>>>>>
>>>>> https://github.com/redisson/redisson/wiki/2.-Configuration
>>>>>
>>>>> However, it has a great deal of options and I'm not sure what
>>>>> is and is not needed.
>>>>>
>>>>> I am reaching out here on the Tomcat user group to see if
>>>>> anyone else is using Redisson for session management and if
>>>>> maybe I can get some guidance on a basic redisson.conf
>>>>> configuration.  I'd also be interested in comments on if
>>>>> there are better options or things to watch out for.
>>
>> I don't have any experience with either Redis or Redisson, but what
>> is wrong with the default/sample configuration you have provided
>> above?
>>
>>
>>> Through much trial and error, I have been using this config:
>>
>>> {
>>>   "masterSlaveServersConfig": {
>>>     "idleConnectionTimeout": 10000,
>>>     "connectTimeout": 10000,
>>>     "timeout": 3000,
>>>     "retryAttempts": 3,
>>>     "retryInterval": 1500,
>>>     "failedSlaveReconnectionInterval": 3000,
>>>     "failedSlaveCheckInterval": 60000,
>>>     "password": "<master_pass>",
>>>     "subscriptionsPerConnection": 5,
>>>     "clientName": true,
>>>     "subscriptionConnectionMinimumIdleSize": 1,
>>>     "subscriptionConnectionPoolSize": 50,
>>>     "slaveConnectionMinimumIdleSize": 32,
>>>     "slaveConnectionPoolSize": 64,
>>>     "masterConnectionMinimumIdleSize": 32,
>>>     "masterConnectionPoolSize": 64,
>>>     "readMode": "SLAVE",
>>>     "subscriptionMode": "SLAVE",
>>>     "slaveAddresses": ["<slave.example.com:6379"],
>>>     "masterAddress": "<master.example.com>:6379",
>>>     "database": 0
>>>   },
>>>   "threads": 0,
>>>   "nettyThreads": 0,
>>>   "transportMode": "NIO"
>>> }
>>
>>> However, I am getting a couple exceptions and am not sure what
>>> might be the issue:
>
> Okay, let's take a look:
>
>>> Feb 18, 2019 10:09:33 AM org.apache.catalina.core.StandardContext startInternal
>>> SEVERE: The session manager failed to start
>
> Wow, this stack trace is tough to read. :(
>
>>> org.apache.catalina.LifecycleException: Failed to start component
>>> [org.redisson.tomcat.RedissonSessionManager[]]
>>>         at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167)
>>>         at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5224)
>>>         at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
>>>         at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1419)
>>>         at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1409)
>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>>>         at java.lang.Thread.run(Thread.java:748)
>>> Caused by: org.apache.catalina.LifecycleException: java.lang.NullPointerException
>>>         at org.redisson.tomcat.RedissonSessionManager.buildClient(RedissonSessionManager.java:279)
>>>         at org.redisson.tomcat.RedissonSessionManager.startInternal(RedissonSessionManager.java:209)
>
> It looks like your SessionManager (from Redisson) is failing to start
> up. If you can look at the source, you might be able to see some kind
> of missing configuration parameter and fix it. Or you could ask the
> Redisson community what you might be missing. I've never even heard of
> Redisson and I've been on this list for ... 15 years. Perhaps someone
> else here can chime-in and help, but I wouldn't bet on it.
>
> Or there could be a bug in Redisson. For that, you'll have to approach
> that community for help, of course. I would consider a SessionManager
> throwing an NPE to be a bug. If there is some kind of required
> configuration, the SessionManager should emit a helpful message like
> "masterAddress is required" or somesuch.
>
>>>         at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
>>>         ... 8 more
>>> Caused by: java.lang.NullPointerException
>
> Hmm, the plot thickens. The NPE is not in the SessionManager; it's in
> the netty client?
>
>>>         at io.netty.util.NetUtil.isValidIpV4Address(NetUtil.java:648)
>>>         at io.netty.util.NetUtil.createByteArrayFromIpAddressString(NetUtil.java:368)
>>>         at org.redisson.client.RedisClient.resolveAddr(RedisClient.java:172)
>>>         at org.redisson.connection.MasterSlaveEntry.addSlave(MasterSlaveEntry.java:303)
>>>         at org.redisson.connection.MasterSlaveEntry.addSlave(MasterSlaveEntry.java:345)
>>>         at org.redisson.connection.MasterSlaveEntry.initSlaveBalancer(MasterSlaveEntry.java:102)
>>>         at org.redisson.connection.MasterSlaveConnectionManager.createMasterSlaveEntry(MasterSlaveConnectionManager.java:372)
>>>         at org.redisson.connection.MasterSlaveConnectionManager.initSingleEntry(MasterSlaveConnectionManager.java:346)
>>>         at org.redisson.connection.MasterSlaveConnectionManager.<init>(MasterSlaveConnectionManager.java:161)
>>>         at org.redisson.config.ConfigSupport.createConnectionManager(ConfigSupport.java:225)
>>>         at org.redisson.Redisson.<init>(Redisson.java:121)
>>>         at org.redisson.Redisson.create(Redisson.java:164)
>>>         at org.redisson.tomcat.RedissonSessionManager.buildClient(RedissonSessionManager.java:277)
>>>         ... 10 more
>>>
>>> Feb 18, 2019 10:09:33 AM org.apache.catalina.core.StandardContext startInternal
>>> SEVERE: Context [] startup failed due to previous errors
>
> Yep, the issue is in the Netty client. It's failing while trying to
> validate an IP address. Specifically for a slave's address (I gather
> from reading the stack trace). Are you sure anything that looks like
> it should be an IP address is correct? Can slaveAddresses have a port
> number? Are there other "address" type configuration parameters that
> must be set but that you haven't set to something? Maybe a port number?
>
> I still think an NPE is a very non-graceful handling, here, and I
> would expect that the SessionManager would perform a sanity-check on
> all values -- especially if they are required -- before using them to
> e.g. make a network connection.
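(Editor's note: the sanity-check Chris suggests can be sketched as a small pre-flight helper. This is hypothetical code, not part of Redisson or Tomcat; the class and method names are invented for illustration. It rejects malformed address strings, such as one with a leftover placeholder bracket, with a clear message before any client is built.)

```java
// Illustrative pre-flight check (hypothetical, not Redisson API):
// validate each configured Redis address up front so a malformed
// entry fails with a clear message instead of a deep NPE in Netty.
import java.net.URI;

public class AddressCheck {
    static void requireValidRedisAddress(String addr) {
        if (addr == null || addr.isEmpty()) {
            throw new IllegalArgumentException("address is required");
        }
        // A stray '<' or '>' left over from a placeholder is a common
        // copy/paste mistake, e.g. "<slave.example.com:6379".
        if (addr.indexOf('<') >= 0 || addr.indexOf('>') >= 0) {
            throw new IllegalArgumentException(
                "placeholder brackets in address: " + addr);
        }
        // Prepend a scheme if missing so URI parsing yields host/port.
        URI u = URI.create(addr.contains("://") ? addr : "redis://" + addr);
        if (u.getHost() == null || u.getPort() == -1) {
            throw new IllegalArgumentException(
                "expected host:port, got: " + addr);
        }
    }

    public static void main(String[] args) {
        requireValidRedisAddress("master.example.com:6379"); // passes
        try {
            requireValidRedisAddress("<slave.example.com:6379");
        } catch (IllegalArgumentException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```

Running the check against every entry of slaveAddresses plus masterAddress at startup turns a cryptic NullPointerException into an actionable configuration error.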
>
> Hope that helps,
> -chris
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
> For additional commands, e-mail: users-h...@tomcat.apache.org
>
>
