Re: [PROPOSAL] Add getCache and getLocator to Launchers
To make this example a bit more concrete for everyone else...

gfsh>start server --name=*Server1* --log-level=config
Starting a Geode Server in /Users/jblum/pivdev/lab/Server1...
Server in /Users/jblum/pivdev/lab/Server1 on 10.99.199.7[*40404*] as Server1 is currently online.
Process ID: 49850
Uptime: 3 seconds
Geode Version: 1.6.0
Java Version: 1.8.0_192
Log File: /Users/jblum/pivdev/lab/Server1/Server1.log
JVM Arguments: -Dgemfire.default.locators=10.99.199.7[10334] -Dgemfire.start-dev-rest-api=false -Dgemfire.use-cluster-configuration=true -Dgemfire.log-level=config -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-core-1.6.0.jar:/Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-dependencies.jar

gfsh>show log --member=*Server1* --lines=20
SystemLog:
[info 2018/11/02 17:52:53.523 PDT Server1 tid=0x1] Initialization of region _monitoringRegion_10.99.199.71025 completed
[info 2018/11/02 17:52:54.146 PDT Server1 tid=0x1] Initialized cache service org.apache.geode.connectors.jdbc.internal.JdbcConnectorServiceImpl
[info 2018/11/02 17:52:54.160 PDT Server1 tid=0x1] Initialized cache service org.apache.geode.cache.lucene.internal.LuceneServiceImpl
[info 2018/11/02 17:52:54.161 PDT Server1 tid=0x1] Initialized cache service com.gemstone.gemfire.OldClientSupportProvider
[info 2018/11/02 17:52:54.171 PDT Server1 tid=0x1] Initializing region PdxTypes
[info 2018/11/02 17:52:54.173 PDT Server1 tid=0x1] Initialization of region PdxTypes completed
[info 2018/11/02 17:52:54.226 PDT Server1 tid=0x1] Cache server connection listener bound to address 10.99.199.7-0.0.0.0/0.0.0.0:40404 with backlog 1,000.
[info 2018/11/02 17:52:54.235 PDT Server1 tid=0x1] ClientHealthMonitorThread maximum allowed time between pings: 60,000
[warning 2018/11/02 17:52:54.236 PDT Server1 tid=0x1] Handshaker max Pool size: 4
[info 2018/11/02 17:52:54.242 PDT *Server1* tid=0x1] *CacheServer Configuration: port=40404* max-connections=800 max-threads=0 notify-by-subscription=true socket-buffer-size=32768 maximum-time-between-pings=6 maximum-message-count=23 message-time-to-live=180 eviction-policy=none capacity=1 overflow directory=. groups=[] loadProbe=ConnectionCountProbe loadPollInterval=5000 tcpNoDelay=true

gfsh>start server --name=*Server2* --log-level=config --server-port=51515
Starting a Geode Server in /Users/jblum/pivdev/lab/Server2...
...
Server in /Users/jblum/pivdev/lab/Server2 on 10.99.199.7[*51515*] as Server2 is currently online.
Process ID: 49916
Uptime: 3 seconds
Geode Version: 1.6.0
Java Version: 1.8.0_192
Log File: /Users/jblum/pivdev/lab/Server2/Server2.log
JVM Arguments: -Dgemfire.default.locators=10.99.199.7[10334] -Dgemfire.start-dev-rest-api=false -Dgemfire.use-cluster-configuration=true -Dgemfire.log-level=config -XX:OnOutOfMemoryError=kill -KILL %p -Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true -Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: /Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-core-1.6.0.jar:/Users/jblum/pivdev/apache-geode-1.6.0/lib/geode-dependencies.jar

gfsh>show log --member=Server2 --lines=20
SystemLog:
[info 2018/11/02 17:54:36.776 PDT Server2 tid=0x1] Initialized cache service org.apache.geode.cache.lucene.internal.LuceneServiceImpl
[info 2018/11/02 17:54:36.777 PDT Server2 tid=0x1] Initialized cache service com.gemstone.gemfire.OldClientSupportProvider
[info 2018/11/02 17:54:36.788 PDT Server2 tid=0x1] Initializing region PdxTypes
[info 2018/11/02 17:54:36.793 PDT Server2 tid=0x1] Region PdxTypes requesting initial image from 10.99.199.7(Server1:49850):1025
[info 2018/11/02 17:54:36.797 PDT Server2 tid=0x1] PdxTypes is done getting image from 10.99.199.7(Server1:49850):1025. isDeltaGII is false
[info 2018/11/02 17:54:36.797 PDT Server2 tid=0x1] Initialization of region PdxTypes completed
[info 2018/11/02 17:54:36.852 PDT Server2 tid=0x1] Cache server connection listener bound to address 10.99.199.7-0.0.0.0/0.0.0.0:51515 with backlog 1,000.
[info 2018/11/02 17:54:36.859 PDT Server2 tid=0x1] ClientHealthMonitorThread maximum allowed time between pings: 60,000
[warning 2018/11/02 17:54:36.860 PDT Server2 tid=0x1] Handshaker max Pool size: 4
[info 2018/11/02 17:54:36.867 PDT *Server2* tid=0x1] *CacheServer Configuration: port=51515* max-connections=800 max-threads=0 notify-by-subscription=true socket-buffer-size=32768 maximum-time-between-pings=6 maximum-message-count=23 message-time-to-live=180 eviction-policy=none capacity=1 overflow directory=. groups=[] loadProbe=ConnectionCountProbe loadPollInterval=5000 tcpNoDelay=true

gfsh>start server --name=*Server3* --log-level=config --disable-default-server
Starting a Geode Server in /Users/jblum/pivdev/lab/Server3...
Server in /Users/jblum/pivdev/lab/Server3 on
Re: [PROPOSAL] Add getCache and getLocator to Launchers
Damn it! Correction to my previous email... '--disable-default-port' should read '--disable-default-server' in my *Gfsh* `start server` command examples.

Apologies,
John

On Fri, Nov 2, 2018 at 5:41 PM, John Blum wrote:

> Bruce-
>
> Regarding...
>
> "... *but it's the AcceptorImpl thread that keeps the JVM from exiting,
> so I don't really agree that you can create a server with gfsh that
> doesn't have a CacheServer.*"
>
> Well, that is not entirely true, and the last bit is definitely not true.
>
> How do you suppose I can do this...
>
> gfsh>start server --name=Server1
> gfsh>start server --name=Server2 *--server-port*=51515
> gfsh>start server --name=Server3 *--disable-default-port*
> gfsh>start server --name=Server4 *--disable-default-port*
>
> ... if I could not disable the CacheServer?
>
> Clearly, Server1 starts up with the default CacheServer port, 40404.
> Server2 explicitly sets the CacheServer port to 51515. And Servers 3 & 4
> simply do not have CacheServers running. If they tried to start a
> CacheServer, and they did NOT explicitly set the --server-port option,
> then Servers 3 & 4 would result in throwing a java.net.BindException.
>
> So, the ServerLauncher class "blocks" if you do not start a CacheServer
> instance, which would not be started if the server was started using
> `start server --disable-default-server`.
>
> See here [1], then here [2], and then here [3] and here [4], as well as
> this [5], which gets set from this [6].
>
> There is a very good reason why I know this.
>
> Regards,
> -John
>
> [1] https://github.com/apache/geode/blob/rel/v1.7.0/geode-core/src/main/java/org/apache/geode/distributed/ServerLauncher.java#L705
> [2] https://github.com/apache/geode/blob/rel/v1.7.0/geode-core/src/main/java/org/apache/geode/distributed/ServerLauncher.java#L906-L928
> [3] https://github.com/apache/geode/blob/rel/v1.7.0/geode-core/src/main/java/org/apache/geode/distributed/ServerLauncher.java#L953
> [4] https://github.com/apache/geode/blob/rel/v1.7.0/geode-core/src/main/java/org/apache/geode/distributed/ServerLauncher.java#L940-L942
> [5] https://github.com/apache/geode/blob/rel/v1.7.0/geode-core/src/main/java/org/apache/geode/distributed/ServerLauncher.java#L407-L409
> [6] https://github.com/apache/geode/blob/rel/v1.7.0/geode-core/src/main/java/org/apache/geode/distributed/ServerLauncher.java#L1487
>
> On Fri, Nov 2, 2018 at 3:03 PM, Bruce Schuchardt wrote:
>
>> Hmm, but it's the AcceptorImpl thread that keeps the JVM from exiting, so
>> I don't really agree that you can create a server with gfsh that doesn't
>> have a CacheServer. One's got to be created through cache.xml or something.
>>
>> Be that as it may, I think the recent talk about this convinces me that
>> Kirk's original proposal is sound and we should go with it. Servers can be
>> fished out of the cache, so it's not a big deal if the launcher API
>> doesn't have a getCacheServer method.
>>
>> On 11/1/18 9:38 AM, John Blum wrote:
>>
>>> Well, ServerLauncher may or may not create a CacheServer instance when it
>>> starts a server. A server can be created without a CacheServer, for
>>> instance, when the *Gfsh* `start server --disable-default-server` option
>>> is specified.
>>>
>>> In addition, you can always find or get the list of CacheServers (if
>>> present) from the Cache instance, using
>>> Cache.getCacheServers():List<CacheServer> [1].
>>> So, I think it would be
>>> better if the ServerLauncher returned a handle to the Cache and then
>>> drill down as opposed to up.
>>>
>>> -j
>>>
>>> [1]
>>> http://geode.apache.org/releases/latest/javadoc/org/apache/geode/cache/Cache.html#getCacheServers--
>>>
>>> On Thu, Nov 1, 2018 at 7:31 AM, Bruce Schuchardt wrote:
>>>
>>>> I like this but it might be even better if ServerLauncher gave access to
>>>> the CacheServer it launched and CacheServer gave access to its cache.
>>>> The Locator interface, as well, could have a getCache() method and
>>>> deprecate the getDistributedSystem() method. These days a Locator always
>>>> has a Cache and no-one is interested in its DistributedSystem.
>>>>
>>>> On 10/31/18 2:48 PM, Kirk Lund wrote:
>>>>
>>>>> LocatorLauncher provides an API which can be used in-process to create
>>>>> a Locator. There is no public API on that class to get a reference to
>>>>> the Locator or its Cache.
>>>>>
>>>>> Similarly, ServerLauncher provides an API which can be used in-process
>>>>> to create a Server, but there is no public API in that class to get a
>>>>> reference to its Cache.
>>>>>
>>>>> The User of either Launcher would then have to resort to invoking
>>>>> singletons to get a reference to the Cache.
>>>>>
>>>>> There are existing package-private getter APIs on both Launchers but
>>>>> they're only used by tests in that same package.
>>>>>
>>>>> I propose adding public APIs for getCache to both LocatorLauncher and
>>>>> ServerLauncher as well as adding getLocator to
Re: [PROPOSAL] Add getCache and getLocator to Launchers
Bruce-

Regarding...

> "... *but it's the AcceptorImpl thread that keeps the JVM from exiting,
> so I don't really agree that you can create a server with gfsh that
> doesn't have a CacheServer.*"

Well, that is not entirely true, and the last bit is definitely not true.

How do you suppose I can do this...

gfsh>start server --name=Server1
gfsh>start server --name=Server2 *--server-port*=51515
gfsh>start server --name=Server3 *--disable-default-port*
gfsh>start server --name=Server4 *--disable-default-port*

... if I could not disable the CacheServer?

Clearly, Server1 starts up with the default CacheServer port, 40404. Server2 explicitly sets the CacheServer port to 51515. And Servers 3 & 4 simply do not have CacheServers running. If they tried to start a CacheServer, and they did NOT explicitly set the --server-port option, then Servers 3 & 4 would result in throwing a java.net.BindException.

So, the ServerLauncher class "blocks" if you do not start a CacheServer instance, which would not be started if the server was started using `start server --disable-default-server`.

See here [1], then here [2], and then here [3] and here [4], as well as this [5], which gets set from this [6].

There is a very good reason why I know this.
Regards,
-John

[1] https://github.com/apache/geode/blob/rel/v1.7.0/geode-core/src/main/java/org/apache/geode/distributed/ServerLauncher.java#L705
[2] https://github.com/apache/geode/blob/rel/v1.7.0/geode-core/src/main/java/org/apache/geode/distributed/ServerLauncher.java#L906-L928
[3] https://github.com/apache/geode/blob/rel/v1.7.0/geode-core/src/main/java/org/apache/geode/distributed/ServerLauncher.java#L953
[4] https://github.com/apache/geode/blob/rel/v1.7.0/geode-core/src/main/java/org/apache/geode/distributed/ServerLauncher.java#L940-L942
[5] https://github.com/apache/geode/blob/rel/v1.7.0/geode-core/src/main/java/org/apache/geode/distributed/ServerLauncher.java#L407-L409
[6] https://github.com/apache/geode/blob/rel/v1.7.0/geode-core/src/main/java/org/apache/geode/distributed/ServerLauncher.java#L1487

On Fri, Nov 2, 2018 at 3:03 PM, Bruce Schuchardt wrote:

> Hmm, but it's the AcceptorImpl thread that keeps the JVM from exiting, so
> I don't really agree that you can create a server with gfsh that doesn't
> have a CacheServer. One's got to be created through cache.xml or something.
>
> Be that as it may, I think the recent talk about this convinces me that
> Kirk's original proposal is sound and we should go with it. Servers can be
> fished out of the cache, so it's not a big deal if the launcher API
> doesn't have a getCacheServer method.
>
> On 11/1/18 9:38 AM, John Blum wrote:
>
>> Well, ServerLauncher may or may not create a CacheServer instance when it
>> starts a server. A server can be created without a CacheServer, for
>> instance, when the *Gfsh* `start server --disable-default-server` option
>> is specified.
>>
>> In addition, you can always find or get the list of CacheServers (if
>> present) from the Cache instance, using
>> Cache.getCacheServers():List<CacheServer> [1]. So, I think it would be
>> better if the ServerLauncher returned a handle to the Cache and then
>> drill down as opposed to up.
>>
>> -j
>>
>> [1]
>> http://geode.apache.org/releases/latest/javadoc/org/apache/geode/cache/Cache.html#getCacheServers--
>>
>> On Thu, Nov 1, 2018 at 7:31 AM, Bruce Schuchardt wrote:
>>
>>> I like this but it might be even better if ServerLauncher gave access to
>>> the CacheServer it launched and CacheServer gave access to its cache.
>>> The Locator interface, as well, could have a getCache() method and
>>> deprecate the getDistributedSystem() method. These days a Locator always
>>> has a Cache and no-one is interested in its DistributedSystem.
>>>
>>> On 10/31/18 2:48 PM, Kirk Lund wrote:
>>>
>>>> LocatorLauncher provides an API which can be used in-process to create
>>>> a Locator. There is no public API on that class to get a reference to
>>>> the Locator or its Cache.
>>>>
>>>> Similarly, ServerLauncher provides an API which can be used in-process
>>>> to create a Server, but there is no public API in that class to get a
>>>> reference to its Cache.
>>>>
>>>> The User of either Launcher would then have to resort to invoking
>>>> singletons to get a reference to the Cache.
>>>>
>>>> There are existing package-private getter APIs on both Launchers but
>>>> they're only used by tests in that same package.
>>>>
>>>> I propose adding public APIs for getCache to both LocatorLauncher and
>>>> ServerLauncher as well as adding getLocator to LocatorLauncher. The
>>>> signatures would look like:
>>>>
>>>> /**
>>>>  * Gets a reference to the Cache that was created by this ServerLauncher.
>>>>  *
>>>>  * @return a reference to the Cache
>>>>  */
>>>> public org.apache.geode.cache.Cache getCache();
>>>>
>>>> /**
>>>>  * Gets a reference to the Locator that was created by this
>>>>  * LocatorLauncher.
>>>>  *
>>>>  * @return a reference
Re: [PROPOSAL] Add getCache and getLocator to Launchers
It's not the AcceptorImpl thread -- it's actually the main thread in ServerLauncher which keeps the JVM from exiting:

"main" #1 prio=5 os_prio=31 tid=0x7ff1f4006800 nid=0x1c03 in Object.wait() [0x7d906000]
   java.lang.Thread.State: TIMED_WAITING (on object monitor)
        at java.lang.Object.wait(Native Method)
        - waiting on <0x0006c001d948> (a org.apache.geode.distributed.ServerLauncher)
        at org.apache.geode.distributed.ServerLauncher.waitOnServer(ServerLauncher.java:919)
        - locked <0x0006c001d948> (a org.apache.geode.distributed.ServerLauncher)
        at org.apache.geode.distributed.ServerLauncher.run(ServerLauncher.java:708)
        at org.apache.geode.distributed.ServerLauncher.main(ServerLauncher.java:227)

If you issue a "stop server" request, then that will cause the main thread to exit ServerLauncher.waitOnServer.

On Fri, Nov 2, 2018 at 3:03 PM, Bruce Schuchardt wrote:

> Hmm, but it's the AcceptorImpl thread that keeps the JVM from exiting, so
> I don't really agree that you can create a server with gfsh that doesn't
> have a CacheServer. One's got to be created through cache.xml or something.
>
> Be that as it may, I think the recent talk about this convinces me that
> Kirk's original proposal is sound and we should go with it. Servers can be
> fished out of the cache, so it's not a big deal if the launcher API
> doesn't have a getCacheServer method.
>
> On 11/1/18 9:38 AM, John Blum wrote:
>
>> Well, ServerLauncher may or may not create a CacheServer instance when it
>> starts a server. A server can be created without a CacheServer, for
>> instance, when the *Gfsh* `start server --disable-default-server` option
>> is specified.
>>
>> In addition, you can always find or get the list of CacheServers (if
>> present) from the Cache instance, using
>> Cache.getCacheServers():List<CacheServer> [1]. So, I think it would be
>> better if the ServerLauncher returned a handle to the Cache and then
>> drill down as opposed to up.
>>
>> -j
>>
>> [1]
>> http://geode.apache.org/releases/latest/javadoc/org/apache/geode/cache/Cache.html#getCacheServers--
>>
>> On Thu, Nov 1, 2018 at 7:31 AM, Bruce Schuchardt wrote:
>>
>>> I like this but it might be even better if ServerLauncher gave access to
>>> the CacheServer it launched and CacheServer gave access to its cache.
>>> The Locator interface, as well, could have a getCache() method and
>>> deprecate the getDistributedSystem() method. These days a Locator always
>>> has a Cache and no-one is interested in its DistributedSystem.
>>>
>>> On 10/31/18 2:48 PM, Kirk Lund wrote:
>>>
>>>> LocatorLauncher provides an API which can be used in-process to create
>>>> a Locator. There is no public API on that class to get a reference to
>>>> the Locator or its Cache.
>>>>
>>>> Similarly, ServerLauncher provides an API which can be used in-process
>>>> to create a Server, but there is no public API in that class to get a
>>>> reference to its Cache.
>>>>
>>>> The User of either Launcher would then have to resort to invoking
>>>> singletons to get a reference to the Cache.
>>>>
>>>> There are existing package-private getter APIs on both Launchers but
>>>> they're only used by tests in that same package.
>>>>
>>>> I propose adding public APIs for getCache to both LocatorLauncher and
>>>> ServerLauncher as well as adding getLocator to LocatorLauncher. The
>>>> signatures would look like:
>>>>
>>>> /**
>>>>  * Gets a reference to the Cache that was created by this ServerLauncher.
>>>>  *
>>>>  * @return a reference to the Cache
>>>>  */
>>>> public org.apache.geode.cache.Cache getCache();
>>>>
>>>> /**
>>>>  * Gets a reference to the Locator that was created by this
>>>>  * LocatorLauncher.
>>>>  *
>>>>  * @return a reference to the Locator
>>>>  */
>>>> public org.apache.geode.distributed.Locator getLocator();
>>>>
>>>> Any thoughts? Yay or nay?
>>>>
>>>> Thanks,
>>>> Kirk
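[Editor's note] The blocking behavior shown in the stack trace above is a standard wait/notify pattern on the launcher object. A minimal self-contained sketch of that pattern follows -- this is NOT Geode's actual code; the class and method names merely echo the stack trace for illustration:

```java
// Illustrative sketch of the wait/notify idiom from the stack trace:
// the main thread parks in Object.wait() until another thread (e.g. a
// "stop server" handler) clears the running flag and calls notifyAll().
public class LauncherSketch {
    private boolean running = true;

    public synchronized void waitOnServer() throws InterruptedException {
        while (running) {   // loop guards against spurious wakeups
            wait();         // releases the monitor while parked
        }
    }

    public synchronized void stop() {
        running = false;
        notifyAll();        // wakes the thread parked in waitOnServer()
    }

    public static void main(String[] args) throws Exception {
        LauncherSketch launcher = new LauncherSketch();
        Thread mainThread = new Thread(() -> {
            try {
                launcher.waitOnServer();
            } catch (InterruptedException ignored) {
            }
        });
        mainThread.start();
        Thread.sleep(100);  // give the thread time to block
        launcher.stop();    // analogous to issuing "gfsh>stop server"
        mainThread.join(1000);
        System.out.println("exited: " + !mainThread.isAlive());
    }
}
```

Running this prints "exited: true" once stop() wakes the waiting thread, which mirrors why a gfsh "stop server" lets the JVM exit.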
Re: [PROPOSAL] --add-opens for Java 11 support
I guess I'm OK with documenting the --add-opens workaround in the short term, so that users can start playing with Geode on JDK 11. Maybe we should indicate that JDK 11 support is experimental.

I guess in addition to that, we are going to document that users should put Geode on the classpath and not the module path? Has anyone tested putting Geode on the module path?

There are actually two ways to reserve a module name - you can create the module-info.java, OR you can just add an Automatic-Module-Name header to your jar manifest. I think that would probably be the minimum we should do before we start telling users to put Geode on the module path.

Regarding reflection - I don't think we need --add-opens for that. Users just need to use the opens directive to mark their packages as available for reflection.

-Dan

On Fri, Nov 2, 2018 at 8:49 AM Jinmei Liao wrote:

> Galen, you are right, --add-opens is necessary for accessing the private
> fields and methods when doing deep reflection, which is used a lot in our
> serialization code.
>
> Let me step back and explain the scope of what we are trying to do here.
> We all know that to be fully Java 11 compliant, we need to:
> 1. remove all the Java internal API dependencies in Geode itself
> 2. upgrade all the 3rd party libraries that were using Java internal APIs
> as well
> 3. properly modularize Geode and include module-info in the manifest
>
> The bad news is: we are NOT doing any of them YET, and even if we achieved
> all the above, from what I read, these "--add-opens" are still necessary
> if we need to allow our code to do deep reflection at runtime.
>
> What we are trying to achieve here is:
> 1. get a green JDK 11 pipeline up and running first. We need to be able
> to use JDK 11 to run all our tests first, so that we can begin working on
> the 3 things in the above list.
> 2.
our users can download our code and start running it on JDK 11 (with some
> additional configuration, of course); this way, we can get the community
> to experiment with Geode on JDK 11 and improve upon it.
>
> We are only trying to discuss how to achieve the bottom 2 goals first here.
>
> Cheers.
>
> On Thu, Nov 1, 2018 at 11:16 PM Galen O'Sullivan wrote:
>
>> I did a little more reading, and it sounds like we should create a
>> module-info.java to reserve the proper name for those customers who are
>> using Java 9+. See this article [1] for a description of what can go
>> wrong if people start using the automatic package without us having
>> declared a name. I think a module-info should be necessary for *any*
>> level of Java 9+ support.
>>
>> Jinmei, please correct me if I'm wrong here, but I believe --add-opens is
>> necessary for the reflection that we use in PDX auto-serialization, and
>> probably elsewhere as well. This would make it necessary for any Java
>> program communicating with Geode that uses automatic serialization to
>> have --add-opens. I don't understand all that well what level of
>> reflection is available in Java 11, but it will probably take quite a
>> bit of time to do a complete fix.
>>
>> +1 to this approach, provided we create a module-info.java
>>
>> [1]: https://blog.joda.org/2017/05/java-se-9-jpms-automatic-modules.html
>>
>> On Thu, Nov 1, 2018 at 10:57 AM Jinmei Liao wrote:
>>
>>> And one disclaimer I have to add is that these "--add-opens" are the
>>> ones uncovered by our current set of tests; there might be more needed
>>> in areas that are not covered by our tests. So to say the most, our
>>> current JDK 11 support is still in beta mode.
>>>
>>> On Thu, Nov 1, 2018 at 10:33 AM Jinmei Liao wrote:
>>>
>>>> 1) yes, the gfsh script will need to be updated to add these
>>>> configurations.
>>>> 2) yes, these add-opens are required to run geode clients as well.
We > > will > > > > need to document them. > > > > > > > > On Thu, Nov 1, 2018 at 10:31 AM Dan Smith wrote: > > > > > > > >> A couple of questions: > > > >> > > > >> 1) Are you proposing changing gfsh start server to automatically add > > > these > > > >> add-opens, or are you suggesting users will have to do that? > > > >> 2) Are these add-opens required for running a geode client? > > > >> > > > >> -Dan > > > >> > > > >> On Thu, Nov 1, 2018 at 9:48 AM Patrick Rhomberg < > prhomb...@apache.org > > > > > > >> wrote: > > > >> > > > >> > In case anyone else's email broke the thread, below is a link to > the > > > >> > previous thread in the mail archive for context. > > > >> > > > > >> > https://markmail.org/thread/xt224pvavxu3d54p > > > >> > > > > >> > On Thu, Nov 1, 2018 at 9:35 AM, Jinmei Liao > > > wrote: > > > >> > > > > >> > > We will need to wrap up this discussion with a decision. Looks > > like > > > we > > > >> > are > > > >> > > skeptical about #4, and it's proven to work with #3 since our > > > current > > > >> > jdk11 > > > >> > > pipeline is green with this approach. > > > >> > > > > > >> > > Can I propose we do #3 and
Re: [PROPOSAL] Add getCache and getLocator to Launchers
Hmm, but it's the AcceptorImpl thread that keeps the JVM from exiting, so I don't really agree that you can create a server with gfsh that doesn't have a CacheServer. One's got to be created through cache.xml or something.

Be that as it may, I think the recent talk about this convinces me that Kirk's original proposal is sound and we should go with it. Servers can be fished out of the cache, so it's not a big deal if the launcher API doesn't have a getCacheServer method.

On 11/1/18 9:38 AM, John Blum wrote:

Well, ServerLauncher may or may not create a CacheServer instance when it starts a server. A server can be created without a CacheServer, for instance, when the *Gfsh* `start server --disable-default-server` option is specified.

In addition, you can always find or get the list of CacheServers (if present) from the Cache instance, using Cache.getCacheServers():List<CacheServer> [1]. So, I think it would be better if the ServerLauncher returned a handle to the Cache and then drill down as opposed to up.

-j

[1]
http://geode.apache.org/releases/latest/javadoc/org/apache/geode/cache/Cache.html#getCacheServers--

On Thu, Nov 1, 2018 at 7:31 AM, Bruce Schuchardt wrote:

I like this but it might be even better if ServerLauncher gave access to the CacheServer it launched and CacheServer gave access to its cache. The Locator interface, as well, could have a getCache() method and deprecate the getDistributedSystem() method. These days a Locator always has a Cache and no-one is interested in its DistributedSystem.

On 10/31/18 2:48 PM, Kirk Lund wrote:

LocatorLauncher provides an API which can be used in-process to create a Locator. There is no public API on that class to get a reference to the Locator or its Cache.

Similarly, ServerLauncher provides an API which can be used in-process to create a Server, but there is no public API in that class to get a reference to its Cache.

The User of either Launcher would then have to resort to invoking singletons to get a reference to the Cache.
There are existing package-private getter APIs on both Launchers but they're only used by tests in that same package.

I propose adding public APIs for getCache to both LocatorLauncher and ServerLauncher as well as adding getLocator to LocatorLauncher. The signatures would look like:

/**
 * Gets a reference to the Cache that was created by this ServerLauncher.
 *
 * @return a reference to the Cache
 */
public org.apache.geode.cache.Cache getCache();

/**
 * Gets a reference to the Locator that was created by this LocatorLauncher.
 *
 * @return a reference to the Locator
 */
public org.apache.geode.distributed.Locator getLocator();

Any thoughts? Yay or nay?

Thanks,
Kirk
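[Editor's note] To make the call-site ergonomics concrete, here is a sketch of how the *proposed* (not yet existing) getCache() accessor would read in an embedded server; the Builder methods come from the existing ServerLauncher.Builder, but getCache() itself is the proposal, and the member name is made up:

```java
// Sketch against the PROPOSED API -- launcher.getCache() does not exist yet.
// Builder methods (setMemberName, setDisableDefaultServer, build, start,
// stop) are from the existing ServerLauncher.Builder.
import org.apache.geode.cache.Cache;
import org.apache.geode.distributed.ServerLauncher;

public class EmbeddedServerSketch {
    public static void main(String[] args) {
        ServerLauncher launcher = new ServerLauncher.Builder()
            .setMemberName("EmbeddedServer1")       // hypothetical name
            .setDisableDefaultServer(true)          // no CacheServer, per thread
            .build();

        launcher.start();

        // The proposed accessor: no singleton (CacheFactory.getAnyInstance())
        // lookup needed to reach the Cache this launcher created.
        Cache cache = launcher.getCache();
        System.out.println("cache servers: " + cache.getCacheServers().size());

        launcher.stop();
    }
}
```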
Re: Testing tip: Use IgnoredException as an AutoCloseable
With the IDE highlighting the variable as unused, you can either add @SuppressWarnings("unused") on the highlighted method or statement, or adjust your IDE inspections (perhaps more advisable), which is configured per project.

On Fri, Nov 2, 2018 at 1:27 PM, Kirk Lund wrote:

> The easiest way to use IgnoredException is as an AutoCloseable in a
> try-with-resources block and by specifying the Exception class:
>
> import static
> org.apache.geode.test.dunit.IgnoredException.addIgnoredException;
>
> import org.apache.geode.cache.client.ServerOperationException;
>
> @Test
> public void test() {
>   try (IgnoredException ie =
>       addIgnoredException(ServerOperationException.class)) {
>     invokeMethodThatLogsServerOperationException();
>   }
> }
>
> Another benefit is that the logged ServerOperationException is only
> ignored within the tiny scope of that try-with-resources block.
>
> Unfortunately you DO need to declare IgnoredException ie even though your
> IDE will highlight it as unused. That's just the way try-with-resources
> works.

--
-John
john.blum10101 (skype)
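[Editor's note] Concretely, the statement-level suppression John mentions can go on the resource variable itself. A self-contained sketch follows -- FakeIgnoredException is a made-up stand-in so the example runs without the dunit classpath; it is not the real org.apache.geode.test.dunit.IgnoredException:

```java
// Stand-in for dunit's IgnoredException, purely so the annotation
// placement is runnable here without Geode's test classpath.
class FakeIgnoredException implements AutoCloseable {
    static FakeIgnoredException addIgnoredException(Class<?> exceptionClass) {
        return new FakeIgnoredException();
    }

    @Override
    public void close() {
        // the real close() would stop ignoring the exception in member logs
        System.out.println("closed");
    }
}

public class SuppressDemo {
    public static void main(String[] args) {
        // Annotating the resource variable silences "unused" for just this
        // statement, without widening the suppression to the whole method.
        try (@SuppressWarnings("unused")
             FakeIgnoredException ie =
                 FakeIgnoredException.addIgnoredException(RuntimeException.class)) {
            System.out.println("inside scope");
        }
    }
}
```

This keeps the suppression as narrow as the try-with-resources scope itself, matching the "tiny scope" benefit Kirk describes.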
Testing tip: Use IgnoredException as an AutoCloseable
The easiest way to use IgnoredException is as an AutoCloseable in a try-with-resources block and by specifying the Exception class:

import static org.apache.geode.test.dunit.IgnoredException.addIgnoredException;

import org.apache.geode.cache.client.ServerOperationException;

@Test
public void test() {
  try (IgnoredException ie =
      addIgnoredException(ServerOperationException.class)) {
    invokeMethodThatLogsServerOperationException();
  }
}

Another benefit is that the logged ServerOperationException is only ignored within the tiny scope of that try-with-resources block.

Unfortunately you DO need to declare IgnoredException ie even though your IDE will highlight it as unused. That's just the way try-with-resources works.
Re: [PROPOSAL] Add getCache and getLocator to Launchers
I'm with Bill and John on this one. Give the top-level object only and let users move down the object hierarchy.

On Thu, Nov 1, 2018 at 9:38 AM John Blum wrote:

> Well, ServerLauncher may or may not create a CacheServer instance when it
> starts a server. A server can be created without a CacheServer, for
> instance, when the *Gfsh* `start server --disable-default-server` option
> is specified.
>
> In addition, you can always find or get the list of CacheServers (if
> present) from the Cache instance, using
> Cache.getCacheServers():List<CacheServer> [1]. So, I think it would be
> better if the ServerLauncher returned a handle to the Cache and then
> drill down as opposed to up.
>
> -j
>
> [1]
> http://geode.apache.org/releases/latest/javadoc/org/apache/geode/cache/Cache.html#getCacheServers--
>
> On Thu, Nov 1, 2018 at 7:31 AM, Bruce Schuchardt wrote:
>
> > I like this but it might be even better if ServerLauncher gave access to
> > the CacheServer it launched and CacheServer gave access to its cache.
> > The Locator interface, as well, could have a getCache() method and
> > deprecate the getDistributedSystem() method. These days a Locator
> > always has a Cache and no-one is interested in its DistributedSystem.
> >
> > On 10/31/18 2:48 PM, Kirk Lund wrote:
> >
> >> LocatorLauncher provides an API which can be used in-process to create
> >> a Locator. There is no public API on that class to get a reference to
> >> the Locator or its Cache.
> >>
> >> Similarly, ServerLauncher provides an API which can be used in-process
> >> to create a Server, but there is no public API in that class to get a
> >> reference to its Cache.
> >>
> >> The User of either Launcher would then have to resort to invoking
> >> singletons to get a reference to the Cache.
> >>
> >> There are existing package-private getter APIs on both Launchers but
> >> they're only used by tests in that same package.
> >>
> >> I propose adding public APIs for getCache to both LocatorLauncher and
> >> ServerLauncher as well as adding getLocator to LocatorLauncher. The
> >> signatures would look like:
> >>
> >> /**
> >>  * Gets a reference to the Cache that was created by this ServerLauncher.
> >>  *
> >>  * @return a reference to the Cache
> >>  */
> >> public org.apache.geode.cache.Cache getCache();
> >>
> >> /**
> >>  * Gets a reference to the Locator that was created by this
> >>  * LocatorLauncher.
> >>  *
> >>  * @return a reference to the Locator
> >>  */
> >> public org.apache.geode.distributed.Locator getLocator();
> >>
> >> Any thoughts? Yay or nay?
> >>
> >> Thanks,
> >> Kirk
>
> --
> -John
> john.blum10101 (skype)
Re: [DISCUSS] Cutting 1.8 release branch
Fixes for PRClientServerRegionFunctionExecutionDUnitTest and CreateAsyncEventQueueCommandDUnitTest have been pushed to develop.

On 11/2/18 11:05 AM, Alexander Murmann wrote:

Hi Ryan, I am currently waiting for the failing DUnit tests to pass and then plan to cut the release branch. Does it work for you if the fixes for GEODE-5972 would be merged to the branch after it has been cut?

On Fri, Nov 2, 2018 at 9:57 AM Ryan McMahon wrote:

Bill Burcham and I have been working on a data inconsistency issue which involves a lost event across WAN sites during rebalance on the originating site. We are currently performing root cause analysis. Below is a Geode ticket which we will update with more details as we learn more.

https://issues.apache.org/jira/browse/GEODE-5972

On Thu, Nov 1, 2018 at 5:23 PM Ernest Burghardt wrote:

and PR 390 has been approved and merged

On Thu, Nov 1, 2018 at 5:10 PM Ernest Burghardt wrote:

geode-native fixes are in https://github.com/apache/geode-native/pull/390

On Thu, Nov 1, 2018 at 4:06 PM Anthony Baker wrote:

The geode-native source headers I mentioned in [1] need to be cleaned up.

Anthony

[1] https://lists.apache.org/thread.html/8c9da19d7c0ef0149b1ed79bf0cecde38f17a854ecfa0f0a42f1ff0b@%3Cdev.geode.apache.org%3E

On Nov 1, 2018, at 2:01 PM, Bruce Schuchardt <bschucha...@pivotal.io> wrote:

This PR has been merged to develop.

On 11/1/18 1:35 PM, Bruce Schuchardt wrote:

I would like to get this PR in the release: https://github.com/apache/geode/pull/2757
I'm testing the merge to develop now.

On 11/1/18 1:18 PM, Sai Boorlagadda wrote:

Sure! I agree that we should hold the releases unless there is a critical issue. This is not a gating issue and the code is already committed to develop.

Sai

On Thu, Nov 1, 2018 at 1:03 PM Alexander Murmann <amurm...@pivotal.io> wrote:

In the spirit of the previously discussed timed releases, we should only hold cutting the release for critical issues that, for some reason (not sure how that might be), should not be fixed after the branch has been cut. Waiting for features leads us down the slippery slope we are trying to avoid by having timed releases. Does that make sense?

On Thu, Nov 1, 2018 at 12:45 PM Sai Boorlagadda <sai.boorlaga...@gmail.com> wrote:

I would like to resolve GEODE-5338 as it is currently waiting for a doc update.

Sai

On Thu, Nov 1, 2018 at 10:00 AM Alexander Murmann <amurm...@pivotal.io> wrote:

Hi everyone,

It's time to cut the release branch, since we are moving to time-based releases. Are there any reasons why a release branch should not be cut as soon as possible?
Re: Request access to https://concourse.apachegeode-ci.info
I pushed an empty commit not long ago and it took a few minutes for tests to be started On 11/2/18 11:13 AM, Kirk Lund wrote: I just push an empty commit 15 minutes ago to my PR branch and it did not trigger a precheckin. Should I just keep pushing empty commits until something happens? Is there another way to trigger a precheckin? Sorry for all the questions but I seem to be blocked. https://github.com/apache/geode/pull/2768 Thanks, Kirk On Fri, Nov 2, 2018 at 10:56 AM, Kirk Lund wrote: That's unfortunate, half of the precheckin jobs for my PR started correctly and half of them failed with: /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/response/raise_error.rb:16:in `on_complete': GET https://api.github.com/repos/apache/geode/pulls/2768: 404 - Not Found // See: https://developer.github.com/ v3/pulls/#get-a-single-pull-request (Octokit::NotFound) from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:9:in `block in call' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:61:in `on_complete' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:8:in `call' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:290:in `fetch' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:195:in `process' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:142:in `call!' 
from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:115:in `call' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/rack_builder.rb:143:in `build_response' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/connection.rb:387:in `run_request' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/connection.rb:138:in `get' from /usr/lib/ruby/gems/2.4.0/gems/sawyer-0.8.1/lib/sawyer/agent.rb:94:in `call' from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/connection.rb:156:in `request' from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/connection.rb:19:in `get' from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/client/pull_requests.rb:31:in `pull_request' from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit.rb:46:in `method_missing' from /opt/resource/lib/commands/in.rb:78:in `pr' from /opt/resource/lib/commands/in.rb:20:in `output' from /opt/resource/lib/commands/in.rb:110:in `' These are the ones that failed with the Octokit not found error: https://concourse.apachegeode-ci.info/teams/main/pipelines/ apache-develop-pr/jobs/UnitTest/builds/347 https://concourse.apachegeode-ci.info/teams/main/pipelines/ apache-develop-pr/jobs/IntegrationTest/builds/347 https://concourse.apachegeode-ci.info/teams/main/pipelines/ apache-develop-pr/jobs/AcceptanceTest/builds/346 UpgradeTest, DistributedTest and StressNewTest all started up fine (StressNewTest already finished green). Is there a way to kill the above jobs to avoid wasting Pivotal's money? I'll go ahead start a new precheckin from scratch. Thanks! On Fri, Nov 2, 2018 at 10:44 AM, Jacob Barrett wrote: You can’t restart PR jobs even with access. Concourse will only restart the latest PR, which may not be yours. To restart a PR check you push and empty commit to your branch. -Jake On Nov 2, 2018, at 10:38 AM, Kirk Lund wrote: I want to be able to restart my precheckin jobs. 
Specifically, I'd like to restart the UnitTest job on my latest PR precheckin: https://concourse.apachegeode-ci.info/teams/main/pipelines/a pache-develop-pr/jobs/UnitTest/builds/347 It seems to have hit a snag that's unrelated to my PR: /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/resp onse/raise_error.rb:16:in `on_complete': GET https://api.github.com/repos/apache/geode/pulls/2768 : 404 - Not Found // See: https://developer.github.com/v3/pulls/#get-a-single-pull-request (Octokit::NotFound) from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/res ponse.rb:9:in `block in call' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/res ponse.rb:61:in `on_complete' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/res ponse.rb:8:in `call' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/f araday/http_cache.rb:290:in `fetch' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/f araday/http_cache.rb:195:in `process' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/f araday/http_cache.rb:142:in `call!' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/f araday/http_cache.rb:115:in `call' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/rac k_builder.rb:143:in `build_response' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/con nection.rb:387:in `run_request' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/con nection.rb:138:in `get' from
Re: Request access to https://concourse.apachegeode-ci.info
That should be enough. Let me see what I can do. On Fri, Nov 2, 2018 at 11:13 AM Kirk Lund wrote: > I just push an empty commit 15 minutes ago to my PR branch and it did not > trigger a precheckin. Should I just keep pushing empty commits until > something happens? Is there another way to trigger a precheckin? Sorry for > all the questions but I seem to be blocked. > > https://github.com/apache/geode/pull/2768 > > Thanks, > Kirk > > On Fri, Nov 2, 2018 at 10:56 AM, Kirk Lund wrote: > > > That's unfortunate, half of the precheckin jobs for my PR started > > correctly and half of them failed with: > > > > > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/response/raise_error.rb:16:in > > `on_complete': GET https://api.github.com/repos/apache/geode/pulls/2768: > > 404 - Not Found // See: https://developer.github.com/ > > v3/pulls/#get-a-single-pull-request (Octokit::NotFound) > > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:9:in > > `block in call' > > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:61:in > > `on_complete' > > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:8:in > > `call' > > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:290:in > > `fetch' > > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:195:in > > `process' > > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:142:in > > `call!' 
> > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:115:in > > `call' > > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/rack_builder.rb:143:in > > `build_response' > > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/connection.rb:387:in > > `run_request' > > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/connection.rb:138:in > > `get' > > from /usr/lib/ruby/gems/2.4.0/gems/sawyer-0.8.1/lib/sawyer/agent.rb:94:in > > `call' > > from > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/connection.rb:156:in > > `request' > > from > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/connection.rb:19:in > > `get' > > from > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/client/pull_requests.rb:31:in > > `pull_request' > > from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit.rb:46:in > > `method_missing' > > from /opt/resource/lib/commands/in.rb:78:in `pr' > > from /opt/resource/lib/commands/in.rb:20:in `output' > > from /opt/resource/lib/commands/in.rb:110:in `' > > > > These are the ones that failed with the Octokit not found error: > > > > https://concourse.apachegeode-ci.info/teams/main/pipelines/ > > apache-develop-pr/jobs/UnitTest/builds/347 > > https://concourse.apachegeode-ci.info/teams/main/pipelines/ > > apache-develop-pr/jobs/IntegrationTest/builds/347 > > https://concourse.apachegeode-ci.info/teams/main/pipelines/ > > apache-develop-pr/jobs/AcceptanceTest/builds/346 > > > > UpgradeTest, DistributedTest and StressNewTest all started up fine > > (StressNewTest already finished green). > > > > Is there a way to kill the above jobs to avoid wasting Pivotal's money? > > > > I'll go ahead start a new precheckin from scratch. Thanks! > > > > On Fri, Nov 2, 2018 at 10:44 AM, Jacob Barrett > > wrote: > > > >> You can’t restart PR jobs even with access. Concourse will only restart > >> the latest PR, which may not be yours. 
To restart a PR check you push > and > >> empty commit to your branch. > >> > >> -Jake > >> > >> > >> > On Nov 2, 2018, at 10:38 AM, Kirk Lund wrote: > >> > > >> > I want to be able to restart my precheckin jobs. Specifically, I'd > like > >> to > >> > restart the UnitTest job on my latest PR precheckin: > >> > > >> > https://concourse.apachegeode-ci.info/teams/main/pipelines/a > >> pache-develop-pr/jobs/UnitTest/builds/347 > >> > > >> > It seems to have hit a snag that's unrelated to my PR: > >> > > >> > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/resp > >> onse/raise_error.rb:16:in > >> > `on_complete': GET > https://api.github.com/repos/apache/geode/pulls/2768 > >> : > >> > 404 - Not Found // See: > >> > https://developer.github.com/v3/pulls/#get-a-single-pull-request > >> > (Octokit::NotFound) > >> > from > >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/res > >> ponse.rb:9:in > >> > `block in call' > >> > from > >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/res > >> ponse.rb:61:in > >> > `on_complete' > >> > from > >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/res > >> ponse.rb:8:in > >> > `call' > >> > from > >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/f > >> araday/http_cache.rb:290:in > >> > `fetch' > >> > from > >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/f > >> araday/http_cache.rb:195:in > >> > `process' > >> > from > >> >
Re: Request access to https://concourse.apachegeode-ci.info
I just push an empty commit 15 minutes ago to my PR branch and it did not trigger a precheckin. Should I just keep pushing empty commits until something happens? Is there another way to trigger a precheckin? Sorry for all the questions but I seem to be blocked. https://github.com/apache/geode/pull/2768 Thanks, Kirk On Fri, Nov 2, 2018 at 10:56 AM, Kirk Lund wrote: > That's unfortunate, half of the precheckin jobs for my PR started > correctly and half of them failed with: > > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/response/raise_error.rb:16:in > `on_complete': GET https://api.github.com/repos/apache/geode/pulls/2768: > 404 - Not Found // See: https://developer.github.com/ > v3/pulls/#get-a-single-pull-request (Octokit::NotFound) > from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:9:in > `block in call' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:61:in > `on_complete' > from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:8:in > `call' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:290:in > `fetch' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:195:in > `process' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:142:in > `call!' 
> from > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:115:in > `call' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/rack_builder.rb:143:in > `build_response' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/connection.rb:387:in > `run_request' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/connection.rb:138:in > `get' > from /usr/lib/ruby/gems/2.4.0/gems/sawyer-0.8.1/lib/sawyer/agent.rb:94:in > `call' > from > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/connection.rb:156:in > `request' > from > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/connection.rb:19:in > `get' > from > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/client/pull_requests.rb:31:in > `pull_request' > from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit.rb:46:in > `method_missing' > from /opt/resource/lib/commands/in.rb:78:in `pr' > from /opt/resource/lib/commands/in.rb:20:in `output' > from /opt/resource/lib/commands/in.rb:110:in `' > > These are the ones that failed with the Octokit not found error: > > https://concourse.apachegeode-ci.info/teams/main/pipelines/ > apache-develop-pr/jobs/UnitTest/builds/347 > https://concourse.apachegeode-ci.info/teams/main/pipelines/ > apache-develop-pr/jobs/IntegrationTest/builds/347 > https://concourse.apachegeode-ci.info/teams/main/pipelines/ > apache-develop-pr/jobs/AcceptanceTest/builds/346 > > UpgradeTest, DistributedTest and StressNewTest all started up fine > (StressNewTest already finished green). > > Is there a way to kill the above jobs to avoid wasting Pivotal's money? > > I'll go ahead start a new precheckin from scratch. Thanks! > > On Fri, Nov 2, 2018 at 10:44 AM, Jacob Barrett > wrote: > >> You can’t restart PR jobs even with access. Concourse will only restart >> the latest PR, which may not be yours. To restart a PR check you push and >> empty commit to your branch. 
>> >> -Jake >> >> >> > On Nov 2, 2018, at 10:38 AM, Kirk Lund wrote: >> > >> > I want to be able to restart my precheckin jobs. Specifically, I'd like >> to >> > restart the UnitTest job on my latest PR precheckin: >> > >> > https://concourse.apachegeode-ci.info/teams/main/pipelines/a >> pache-develop-pr/jobs/UnitTest/builds/347 >> > >> > It seems to have hit a snag that's unrelated to my PR: >> > >> > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/resp >> onse/raise_error.rb:16:in >> > `on_complete': GET https://api.github.com/repos/apache/geode/pulls/2768 >> : >> > 404 - Not Found // See: >> > https://developer.github.com/v3/pulls/#get-a-single-pull-request >> > (Octokit::NotFound) >> > from >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/res >> ponse.rb:9:in >> > `block in call' >> > from >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/res >> ponse.rb:61:in >> > `on_complete' >> > from >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/res >> ponse.rb:8:in >> > `call' >> > from >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/f >> araday/http_cache.rb:290:in >> > `fetch' >> > from >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/f >> araday/http_cache.rb:195:in >> > `process' >> > from >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/f >> araday/http_cache.rb:142:in >> > `call!' >> > from >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/f >> araday/http_cache.rb:115:in >> > `call' >> > from >> > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/rac >> k_builder.rb:143:in >> > `build_response' >> > from >> >
Re: [DISCUSS] Cutting 1.8 release branch
Hi Ryan, I am currently waiting for the failing DUnit tests to pass and then plan to cut the release branch. Does it work for you if the fixes for GEODE-5972 would be merged to the branch after it has been cut? On Fri, Nov 2, 2018 at 9:57 AM Ryan McMahon wrote: > Bill Burcham and I have been working on a data inconsistency issue which > involves a lost event across WAN sites during rebalance on the originating > site. We are currently performing root cause analysis. Below is a Geode > ticket which we will update with more details as we learn more. > > https://issues.apache.org/jira/browse/GEODE-5972 > > On Thu, Nov 1, 2018 at 5:23 PM Ernest Burghardt > wrote: > > > and PR 390 has been approved and merged > > > > On Thu, Nov 1, 2018 at 5:10 PM Ernest Burghardt > > wrote: > > > > > geode-native fixes are in > > https://github.com/apache/geode-native/pull/390 > > > > > > On Thu, Nov 1, 2018 at 4:06 PM Anthony Baker > wrote: > > > > > >> The geode-native source headers I mentioned in [1] need to be cleaned > > up. > > >> > > >> Anthony > > >> > > >> [1] > > >> > > > https://lists.apache.org/thread.html/8c9da19d7c0ef0149b1ed79bf0cecde38f17a854ecfa0f0a42f1ff0b@%3Cdev.geode.apache.org%3E > > >> > > >> > On Nov 1, 2018, at 2:01 PM, Bruce Schuchardt < > bschucha...@pivotal.io> > > >> wrote: > > >> > > > >> > This PR has been merged to develop > > >> > > > >> > > > >> > On 11/1/18 1:35 PM, Bruce Schuchardt wrote: > > >> >> I would like to get this PR in the release: > > >> https://github.com/apache/geode/pull/2757 > > >> >> > > >> >> I'm testing the merge to develop now > > >> >> > > >> >> > > >> >> On 11/1/18 1:18 PM, Sai Boorlagadda wrote: > > >> >>> Sure! agree that we should hold the releases unless there is a > > >> >>> critical issue. > > >> >>> > > >> >>> This is not a gating issue and the code is already committed to > > >> develop. 
> > >> >>> > > >> >>> Sai > > >> >>> > > >> >>> On Thu, Nov 1, 2018 at 1:03 PM Alexander Murmann < > > amurm...@pivotal.io > > >> > > > >> >>> wrote: > > >> >>> > > >> In the spirit of the previously discussed timed releases, we > should > > >> only > > >> hold cutting the release for critical issues, that for some > reason > > >> (not > > >> sure how that might be) should not be fixed after the branch has > > >> been cut. > > >> Waiting for features leads us down the slippery slope we are > trying > > >> to > > >> avoid by having timed releases. Does that make sense? > > >> > > >> On Thu, Nov 1, 2018 at 12:45 PM Sai Boorlagadda < > > >> sai.boorlaga...@gmail.com > > >> wrote: > > >> > > >> > I would like to resolve GEODE-5338 as it is currently waiting > for > > >> > doc update. > > >> > > > >> > Sai > > >> > > > >> > On Thu, Nov 1, 2018 at 10:00 AM Alexander Murmann < > > >> amurm...@pivotal.io> > > >> > wrote: > > >> > > > >> >> Hi everyone, > > >> >> > > >> >> It's time to cut the release branch, since we are moving to > time > > >> based > > >> >> releases. Are there any reasons why a release branch should not > > be > > >> cut > > >> as > > >> >> soon as possible? > > >> >> > > >> >> > > >> > > > >> > > >> > > >
Re: Request access to https://concourse.apachegeode-ci.info
That's unfortunate, half of the precheckin jobs for my PR started correctly and half of them failed with: /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/response/raise_error.rb:16:in `on_complete': GET https://api.github.com/repos/apache/geode/pulls/2768: 404 - Not Found // See: https://developer.github.com/v3/pulls/#get-a-single-pull-request (Octokit::NotFound) from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:9:in `block in call' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:61:in `on_complete' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:8:in `call' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:290:in `fetch' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:195:in `process' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:142:in `call!' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:115:in `call' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/rack_builder.rb:143:in `build_response' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/connection.rb:387:in `run_request' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/connection.rb:138:in `get' from /usr/lib/ruby/gems/2.4.0/gems/sawyer-0.8.1/lib/sawyer/agent.rb:94:in `call' from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/connection.rb:156:in `request' from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/connection.rb:19:in `get' from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/client/pull_requests.rb:31:in `pull_request' from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit.rb:46:in `method_missing' from /opt/resource/lib/commands/in.rb:78:in `pr' from /opt/resource/lib/commands/in.rb:20:in `output' from /opt/resource/lib/commands/in.rb:110:in `' These are the ones that failed with the Octokit not 
found error: https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/UnitTest/builds/347 https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/IntegrationTest/builds/347 https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/AcceptanceTest/builds/346 UpgradeTest, DistributedTest and StressNewTest all started up fine (StressNewTest already finished green). Is there a way to kill the above jobs to avoid wasting Pivotal's money? I'll go ahead start a new precheckin from scratch. Thanks! On Fri, Nov 2, 2018 at 10:44 AM, Jacob Barrett wrote: > You can’t restart PR jobs even with access. Concourse will only restart > the latest PR, which may not be yours. To restart a PR check you push and > empty commit to your branch. > > -Jake > > > > On Nov 2, 2018, at 10:38 AM, Kirk Lund wrote: > > > > I want to be able to restart my precheckin jobs. Specifically, I'd like > to > > restart the UnitTest job on my latest PR precheckin: > > > > https://concourse.apachegeode-ci.info/teams/main/pipelines/ > apache-develop-pr/jobs/UnitTest/builds/347 > > > > It seems to have hit a snag that's unrelated to my PR: > > > > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/ > response/raise_error.rb:16:in > > `on_complete': GET https://api.github.com/repos/apache/geode/pulls/2768: > > 404 - Not Found // See: > > https://developer.github.com/v3/pulls/#get-a-single-pull-request > > (Octokit::NotFound) > > from > > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/ > response.rb:9:in > > `block in call' > > from > > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/ > response.rb:61:in > > `on_complete' > > from > > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/ > response.rb:8:in > > `call' > > from > > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/ > faraday/http_cache.rb:290:in > > `fetch' > > from > > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/ > 
faraday/http_cache.rb:195:in > > `process' > > from > > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/ > faraday/http_cache.rb:142:in > > `call!' > > from > > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/ > faraday/http_cache.rb:115:in > > `call' > > from > > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/ > rack_builder.rb:143:in > > `build_response' > > from > > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/ > connection.rb:387:in > > `run_request' > > from > > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/ > connection.rb:138:in > > `get' > > from /usr/lib/ruby/gems/2.4.0/gems/sawyer-0.8.1/lib/sawyer/agent. > rb:94:in > > `call' > > from > > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/ > connection.rb:156:in > > `request' > > from > > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/ > connection.rb:19:in > > `get' > > from > > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/ >
Re: Request access to https://concourse.apachegeode-ci.info
You can’t restart PR jobs even with access. Concourse will only restart the latest PR, which may not be yours. To restart a PR check you push and empty commit to your branch. -Jake > On Nov 2, 2018, at 10:38 AM, Kirk Lund wrote: > > I want to be able to restart my precheckin jobs. Specifically, I'd like to > restart the UnitTest job on my latest PR precheckin: > > https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/UnitTest/builds/347 > > It seems to have hit a snag that's unrelated to my PR: > > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/response/raise_error.rb:16:in > `on_complete': GET https://api.github.com/repos/apache/geode/pulls/2768: > 404 - Not Found // See: > https://developer.github.com/v3/pulls/#get-a-single-pull-request > (Octokit::NotFound) > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:9:in > `block in call' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:61:in > `on_complete' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:8:in > `call' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:290:in > `fetch' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:195:in > `process' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:142:in > `call!' 
> from > /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:115:in > `call' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/rack_builder.rb:143:in > `build_response' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/connection.rb:387:in > `run_request' > from > /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/connection.rb:138:in > `get' > from /usr/lib/ruby/gems/2.4.0/gems/sawyer-0.8.1/lib/sawyer/agent.rb:94:in > `call' > from > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/connection.rb:156:in > `request' > from > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/connection.rb:19:in > `get' > from > /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/client/pull_requests.rb:31:in > `pull_request' > from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit.rb:46:in > `method_missing' > from /opt/resource/lib/commands/in.rb:78:in `pr' > from /opt/resource/lib/commands/in.rb:20:in `output' > from /opt/resource/lib/commands/in.rb:110:in `'
Request access to https://concourse.apachegeode-ci.info
I want to be able to restart my precheckin jobs. Specifically, I'd like to restart the UnitTest job on my latest PR precheckin: https://concourse.apachegeode-ci.info/teams/main/pipelines/apache-develop-pr/jobs/UnitTest/builds/347 It seems to have hit a snag that's unrelated to my PR: /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/response/raise_error.rb:16:in `on_complete': GET https://api.github.com/repos/apache/geode/pulls/2768: 404 - Not Found // See: https://developer.github.com/v3/pulls/#get-a-single-pull-request (Octokit::NotFound) from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:9:in `block in call' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:61:in `on_complete' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/response.rb:8:in `call' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:290:in `fetch' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:195:in `process' from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:142:in `call!' 
from /usr/lib/ruby/gems/2.4.0/gems/faraday-http-cache-2.0.0/lib/faraday/http_cache.rb:115:in `call' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/rack_builder.rb:143:in `build_response' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/connection.rb:387:in `run_request' from /usr/lib/ruby/gems/2.4.0/gems/faraday-0.14.0/lib/faraday/connection.rb:138:in `get' from /usr/lib/ruby/gems/2.4.0/gems/sawyer-0.8.1/lib/sawyer/agent.rb:94:in `call' from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/connection.rb:156:in `request' from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/connection.rb:19:in `get' from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit/client/pull_requests.rb:31:in `pull_request' from /usr/lib/ruby/gems/2.4.0/gems/octokit-4.8.0/lib/octokit.rb:46:in `method_missing' from /opt/resource/lib/commands/in.rb:78:in `pr' from /opt/resource/lib/commands/in.rb:20:in `output' from /opt/resource/lib/commands/in.rb:110:in `'
Re: [DISCUSS] Cutting 1.8 release branch
Bill Burcham and I have been working on a data inconsistency issue which involves a lost event across WAN sites during rebalance on the originating site. We are currently performing root cause analysis. Below is a Geode ticket which we will update with more details as we learn more. https://issues.apache.org/jira/browse/GEODE-5972 On Thu, Nov 1, 2018 at 5:23 PM Ernest Burghardt wrote: > and PR 390 has been approved and merged > > On Thu, Nov 1, 2018 at 5:10 PM Ernest Burghardt > wrote: > > > geode-native fixes are in > https://github.com/apache/geode-native/pull/390 > > > > On Thu, Nov 1, 2018 at 4:06 PM Anthony Baker wrote: > > > >> The geode-native source headers I mentioned in [1] need to be cleaned > up. > >> > >> Anthony > >> > >> [1] > >> > https://lists.apache.org/thread.html/8c9da19d7c0ef0149b1ed79bf0cecde38f17a854ecfa0f0a42f1ff0b@%3Cdev.geode.apache.org%3E > >> > >> > On Nov 1, 2018, at 2:01 PM, Bruce Schuchardt > >> wrote: > >> > > >> > This PR has been merged to develop > >> > > >> > > >> > On 11/1/18 1:35 PM, Bruce Schuchardt wrote: > >> >> I would like to get this PR in the release: > >> https://github.com/apache/geode/pull/2757 > >> >> > >> >> I'm testing the merge to develop now > >> >> > >> >> > >> >> On 11/1/18 1:18 PM, Sai Boorlagadda wrote: > >> >>> Sure! agree that we should hold the releases unless there is a > >> >>> critical issue. > >> >>> > >> >>> This is not a gating issue and the code is already committed to > >> develop. > >> >>> > >> >>> Sai > >> >>> > >> >>> On Thu, Nov 1, 2018 at 1:03 PM Alexander Murmann < > amurm...@pivotal.io > >> > > >> >>> wrote: > >> >>> > >> In the spirit of the previously discussed timed releases, we should > >> only > >> hold cutting the release for critical issues, that for some reason > >> (not > >> sure how that might be) should not be fixed after the branch has > >> been cut. > >> Waiting for features leads us down the slippery slope we are trying > >> to > >> avoid by having timed releases. 
Does that make sense? > >> > >> On Thu, Nov 1, 2018 at 12:45 PM Sai Boorlagadda < > >> sai.boorlaga...@gmail.com > >> wrote: > >> > >> > I would like to resolve GEODE-5338 as it is currently waiting for > >> > doc update. > >> > > >> > Sai > >> > > >> > On Thu, Nov 1, 2018 at 10:00 AM Alexander Murmann < > >> amurm...@pivotal.io> > >> > wrote: > >> > > >> >> Hi everyone, > >> >> > >> >> It's time to cut the release branch, since we are moving to time > >> based > >> >> releases. Are there any reasons why a release branch should not > be > >> cut > >> as > >> >> soon as possible? > >> >> > >> >> > >> > > >> > >> >
Re: [PROPOSAL] --add-opens for Java 11 support
Galen, you are right, --add-opens is necessary for accessing the private fields and methods when doing deep reflection which is used a lot in our serialization code. Let me step back and explain the scope of what we are trying to do here. We all know to be fully java11 compliant, we need to: 1. remove all the java internal API dependencies in geode itself 2. upgrade all the 3rd party libraries that was using java internal API as well. 3. properly modularize geode and include module-info in the manifest. The bad news is: we are NOT doing any of them YET, and even if we achieved all the above, from what I read, these "--add-opens" are still necessary if we need to allow our code to do deep reflection at runtime. What we are trying to achieve here is: 1. get a green jdk11 pipeline up and running first. We need to be able to use jdk11 to run all our tests first, so that we can begin working on the 3 things in the above list. 2. our users can download our code and starting running in jdk11 (with some additional configuration of course), this way, we can get the community to experiment with geode in jdk11 and improve upon it. We are only trying to discuss how to achieve the bottom 2 goals first here. Cheers. On Thu, Nov 1, 2018 at 11:16 PM Galen O'Sullivan wrote: > I did a little more reading, and it sounds like we should create an > module-info.java to reserve the proper name for those customers who are > using Java 9+. See this article[1] for a description of what can go wrong > if people start using the automatic package without us having declared a > name. I think a module-info should be necessary for *any* level of Java 9+ > support. > > Jinmei, please correct me if I'm wrong here, but I believe --add-opens is > necessary for the reflection that we use in PDX auto-serialization, and > probably elsewhere as well. This would make it necessary for any Java > program communicating with Geode that uses automatic serialization to have > --add-opens. 
> I don't understand all that well what level of reflection is
> available in Java 11, but it will probably take quite a bit of time to do a
> complete fix.
>
> +1 to this approach, provided we create a module-info.java.
>
> [1]: https://blog.joda.org/2017/05/java-se-9-jpms-automatic-modules.html
>
> On Thu, Nov 1, 2018 at 10:57 AM Jinmei Liao wrote:
> > And one disclaimer I have to add is that these "--add-opens" flags are
> > the ones uncovered by our current set of tests; there might be more
> > needed in areas that are not covered by our tests. So at most, our
> > current JDK 11 support is still in beta mode.
> >
> > On Thu, Nov 1, 2018 at 10:33 AM Jinmei Liao wrote:
> > > 1) Yes, the gfsh script will need to be updated to add these
> > > configurations.
> > > 2) Yes, these add-opens are required to run Geode clients as well.
> > > We will need to document them.
> > >
> > > On Thu, Nov 1, 2018 at 10:31 AM Dan Smith wrote:
> > > > A couple of questions:
> > > >
> > > > 1) Are you proposing changing gfsh start server to automatically
> > > > add these add-opens, or are you suggesting users will have to do
> > > > that?
> > > > 2) Are these add-opens required for running a Geode client?
> > > >
> > > > -Dan
> > > >
> > > > On Thu, Nov 1, 2018 at 9:48 AM Patrick Rhomberg wrote:
> > > > > In case anyone else's email broke the thread, below is a link to
> > > > > the previous thread in the mail archive for context.
> > > > >
> > > > > https://markmail.org/thread/xt224pvavxu3d54p
> > > > >
> > > > > On Thu, Nov 1, 2018 at 9:35 AM, Jinmei Liao wrote:
> > > > > > We will need to wrap up this discussion with a decision. Looks
> > > > > > like we are skeptical about #4, and it's proven to work with
> > > > > > #3, since our current JDK 11 pipeline is green with this
> > > > > > approach.
> > > > > >
> > > > > > Can I propose we do #3 and document the extra configuration
> > > > > > needed for JDK 11 for now, and then work towards #1 and #2?
> > > > > > Here is the extra configuration to the JVM that is needed to
> > > > > > run Geode under JDK 11:
> > > > > >
> > > > > > --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED
> > > > > > --add-opens=java.xml/jdk.xml.internal=ALL-UNNAMED
> > > > > > --add-opens=java.base/jdk.internal.module=ALL-UNNAMED
> > > > > > --add-opens=java.base/java.lang.module=ALL-UNNAMED
> > > > > >
> > > > > > Comments? Votes?
> > > > > >
> > > > > > Thanks!
> > > > > >
> > > > > > On Thu, Oct 11, 2018 at 10:20 AM Sai Boorlagadda <
> > > > > > sai.boorlaga...@gmail.com> wrote:
> > > > > > > > Do we know what third party libraries are using java
> > > > > > > > internals that we might have problems with? #2 isn't going
> > > > > > > > to work for those libraries, unless they also add a
> > > > > > > > module-info.class. So maybe we will need to do #3 for
> > > > > > > > third-party libraries?
> > > > > > >
> > > > > > > Adding these third-party libs on the module path [1] rather
> > > > > > > than the class path seems to address this issue.
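Taken together, the four flags above can be captured once and reused wherever a Geode JVM is launched. A minimal shell sketch follows; the commented `java` invocation at the end is illustrative only, and its classpath and main class are assumptions, not real Geode artifacts:

```shell
# The four --add-opens flags from the proposal, collected into one variable
# so they can be appended to any JVM launch (gfsh scripts, test runners,
# client applications) when running under JDK 11.
GEODE_JDK11_OPENS="\
--add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED \
--add-opens=java.xml/jdk.xml.internal=ALL-UNNAMED \
--add-opens=java.base/jdk.internal.module=ALL-UNNAMED \
--add-opens=java.base/java.lang.module=ALL-UNNAMED"

# Illustrative launch only -- classpath and main class are assumed names:
# java $GEODE_JDK11_OPENS -cp geode-dependencies.jar com.example.MyApp
echo "$GEODE_JDK11_OPENS"
```

Putting the flags in one variable keeps the gfsh script and any client launch scripts in sync if the list grows as more tests uncover missing opens.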
Re: [PROPOSAL] --add-opens for Java 11 support
I did a little more reading, and it sounds like we should create a
module-info.java to reserve the proper name for those customers who are
using Java 9+. See this article [1] for a description of what can go wrong
if people start using the automatic module without us having declared a
name. I think a module-info should be necessary for *any* level of Java 9+
support.

Jinmei, please correct me if I'm wrong here, but I believe --add-opens is
necessary for the reflection that we use in PDX auto-serialization, and
probably elsewhere as well. This would make it necessary for any Java
program communicating with Geode that uses automatic serialization to have
--add-opens. I don't understand all that well what level of reflection is
available in Java 11, but it will probably take quite a bit of time to do
a complete fix.

+1 to this approach, provided we create a module-info.java.

[1]: https://blog.joda.org/2017/05/java-se-9-jpms-automatic-modules.html

On Thu, Nov 1, 2018 at 10:57 AM Jinmei Liao wrote:
> And one disclaimer I have to add is that these "--add-opens" flags are
> the ones uncovered by our current set of tests; there might be more
> needed in areas that are not covered by our tests. So at most, our
> current JDK 11 support is still in beta mode.
>
> On Thu, Nov 1, 2018 at 10:33 AM Jinmei Liao wrote:
> > 1) Yes, the gfsh script will need to be updated to add these
> > configurations.
> > 2) Yes, these add-opens are required to run Geode clients as well. We
> > will need to document them.
> >
> > On Thu, Nov 1, 2018 at 10:31 AM Dan Smith wrote:
> > > A couple of questions:
> > >
> > > 1) Are you proposing changing gfsh start server to automatically add
> > > these add-opens, or are you suggesting users will have to do that?
> > > 2) Are these add-opens required for running a Geode client?
> > >
> > > -Dan
> > >
> > > On Thu, Nov 1, 2018 at 9:48 AM Patrick Rhomberg wrote:
> > > > In case anyone else's email broke the thread, below is a link to
> > > > the previous thread in the mail archive for context.
> > > >
> > > > https://markmail.org/thread/xt224pvavxu3d54p
> > > >
> > > > On Thu, Nov 1, 2018 at 9:35 AM, Jinmei Liao wrote:
> > > > > We will need to wrap up this discussion with a decision. Looks
> > > > > like we are skeptical about #4, and it's proven to work with #3,
> > > > > since our current JDK 11 pipeline is green with this approach.
> > > > >
> > > > > Can I propose we do #3 and document the extra configuration
> > > > > needed for JDK 11 for now, and then work towards #1 and #2?
> > > > >
> > > > > Here is the extra configuration to the JVM that is needed to run
> > > > > Geode under JDK 11:
> > > > >
> > > > > --add-opens=jdk.management/com.sun.management.internal=ALL-UNNAMED
> > > > > --add-opens=java.xml/jdk.xml.internal=ALL-UNNAMED
> > > > > --add-opens=java.base/jdk.internal.module=ALL-UNNAMED
> > > > > --add-opens=java.base/java.lang.module=ALL-UNNAMED
> > > > >
> > > > > Comments? Votes?
> > > > >
> > > > > Thanks!
> > > > >
> > > > > On Thu, Oct 11, 2018 at 10:20 AM Sai Boorlagadda <
> > > > > sai.boorlaga...@gmail.com> wrote:
> > > > > > > Do we know what third party libraries are using java
> > > > > > > internals that we might have problems with? #2 isn't going
> > > > > > > to work for those libraries, unless they also add a
> > > > > > > module-info.class. So maybe we will need to do #3 for
> > > > > > > third-party libraries?
> > > > > >
> > > > > > Adding these third-party libs on the module path [1] rather
> > > > > > than the class path seems to address this issue.
> > > > > >
> > > > > > [1] http://openjdk.java.net/projects/jigsaw/spec/sotms/#automatic-modules
> > > > >
> > > > > --
> > > > > Cheers
> > > > >
> > > > > Jinmei
> >
> > --
> > Cheers
> >
> > Jinmei
>
> --
> Cheers
>
> Jinmei
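For readers following the PDX auto-serialization point above: "deep reflection" means calling setAccessible(true) on a private member and then reading it. Below is a minimal self-contained sketch of that pattern, not Geode's actual serializer; the Customer class and its fields are invented for illustration. This works without flags because Customer is our own class in the unnamed module, but the same setAccessible call throws InaccessibleObjectException on Java 9+ when the target class lives in a non-open JDK module, which is exactly what --add-opens works around.

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the "deep reflection" pattern used by auto-serializers.
// This is NOT Geode's PDX implementation; Customer is a hypothetical type.
public class DeepReflectionSketch {

    public static class Customer {
        private final String name = "Alice";
        private final int id = 42;
    }

    // Read every declared field, including private ones, via setAccessible.
    // On Java 9+, setAccessible(true) throws InaccessibleObjectException if
    // the field's package is in a module not opened to the caller -- the
    // situation that --add-opens=<module>/<package>=ALL-UNNAMED resolves.
    public static Map<String, Object> readFields(Object obj)
            throws IllegalAccessException {
        Map<String, Object> values = new LinkedHashMap<>();
        for (Field f : obj.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            values.put(f.getName(), f.get(obj));
        }
        return values;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(readFields(new Customer()));
    }
}
```

Run against a JDK internal class instead of Customer and the loop fails at setAccessible unless the JVM was started with the corresponding --add-opens flag, which is why the flags surface for both servers and clients that use automatic serialization.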