Re: Region Put errors out in Local Multi Server environment.

2016-09-23 Thread Dan Smith
Hi Jun,

Your perspective coming from the HDFS world makes sense with geode
as well. You configure the same partitioned region on all of your geode
nodes. Geode will then partition your data across those nodes, and your
client will figure out which servers to connect to in order to reach the data.

It is a bit confusing in this case that geode gives you the control to
shoot yourself in the foot by configuring your servers with different
regions. That type of configuration is potentially valid for some rare use
cases, but only if you also make use of server-groups to make sure your
client regions connect to the right servers. It's unfortunate that geode is
not enforcing that at configuration time.

See this page for a bit more about server-groups if you really need to go
that route. But generally you shouldn't need to do this:
http://geode.docs.pivotal.io/docs/topologies_and_comm/topology_concepts/how_server_discovery_works.html
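
For the rare case where you do go the server-groups route, the client side looks roughly like the sketch below. This is an illustration only, not runnable standalone: it assumes a running locator and servers started with the groups property set to "group1"/"group2", and the pool, group, and region names are all invented.

```java
import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;
import com.gemstone.gemfire.cache.client.PoolManager;

public class ServerGroupClient {
  public static void main(String[] args) {
    ClientCache cache = new ClientCacheFactory()
        .addPoolLocator("localhost", 10334)
        .create();

    // One pool per server group; each pool only talks to servers
    // that declared the matching group name.
    PoolManager.createFactory()
        .addLocator("localhost", 10334)
        .setServerGroup("group1")
        .create("pool1");
    PoolManager.createFactory()
        .addLocator("localhost", 10334)
        .setServerGroup("group2")
        .create("pool2");

    // Point each client region at the pool for the group that hosts it.
    Region<String, String> region1 = cache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .setPoolName("pool1")
        .create("region1");
    Region<String, String> region2 = cache
        .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
        .setPoolName("pool2")
        .create("region2");

    region1.put("key1", "value1"); // routed to group1 servers only
    region2.put("key2", "value2"); // routed to group2 servers only
    cache.close();
  }
}
```

With two pools like this, each region's operations only ever go to servers in the matching group, which is what makes the "different regions on different servers" layout workable.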

-Dan

On Fri, Sep 23, 2016 at 5:29 PM, jun aoki  wrote:

> I may be asking a stupid question, but I'm from the HDFS world, where I don't
> have to care where data physically lives. (You just connect to the
> Namenode and you are good.) Bear with me.
>
> On Fri, Sep 23, 2016 at 5:26 PM, jun aoki  wrote:
>
> > Hi Dan, thank you for sharing your insightful information.
> >
> > We only know how to get a cache from a locator as client cache.
> > e.g. clientCache = new ClientCacheFactory().addPoolLocator(LOCATOR_HOST,
> > LOCATOR_PORT).create();
> >
> > This way, when we try to create a PROXY region, we don't know which server
> > we are connecting to, and thus we don't know which region is available to
> > the client. (In Goutam's case, it seems it is actually connecting to
> > server1, so region1 is visible but not region2.)
> >
> > Is there any way to specify which server the client connects to, so that
> > under this inconsistent configuration (each server holding a different
> > region) we can reach the target region appropriately?
> > Is this inconsistent configuration valid, since it doesn't error out at
> > least, or is it a bad practice where Geode might not work as we expect?
> >
> >
> >
> > On Fri, Sep 23, 2016 at 5:02 PM, Goutam Tadi  wrote:
> >
> >> Thanks a lot Dan :-).
> >>
> >> Yeah, that was intentional.
> >> Your solution solves my problem.
> >>
> >> Thanks,
> >> Goutam Tadi.
> >>
> >> On Fri, Sep 23, 2016 at 4:58 PM Dan Smith  wrote:
> >>
> >> > Hi Goutam,
> >> >
> >> > It looks like you configured your two servers to have different
> regions.
> >> > Was that intentional? What's happening is that the client is
> connecting
> >> to
> >> > only one of the servers, which has one of your regions but not the
> >> other.
> >> >
> >> > Generally, when you configure gemfire servers, you should configure
> the
> >> > same regions on all servers. In this case I would put both of your
> >> regions
> >> > in the same cache.xml and use that for both servers. By default a
> geode
> >> > client will assume that all of your servers have the same regions and
> >> will
> >> > connect to one or more servers as it sees fit to minimize the number
> of
> >> > connections and reduce the number of network hops.
> >> >
> >> > If you really want to have servers that have different regions, you
> will
> >> > need to make use of the server-groups property on the server and
> create
> >> two
> >> > different pools on the client, using PoolFactory.setServerGroup to
> >> control
> >> > which group the client connects to. But I would say that's a less
> common
> >> > use case.
> >> >
> >> > -Dan
> >> >
> >> > On Fri, Sep 23, 2016 at 4:47 PM, Goutam Tadi 
> wrote:
> >> >
> >> > > Hi,
> >> > >
> >> > > I hit a *"Region not found"* exception when I do the following on my
> >> > > local machine (single node).
> >> > > The exception does not occur when I run under a remote debugger, which
> >> > > introduces some delay; I tried adding a `sleep`, but it did not help.
> >> > > Can you please help and let me know what I could possibly be doing
> >> > > wrong?
> >> > >
> >> > > *Geode version: 1.0.0-incubating.M3*
> >> > >
> >> > > 1. *Start a locator : *
> >> > >
> >> > > start locator --name=testlocator --include-system-classpath
> >> > >
> >> > > ​
> >> > > 2. *Start two servers:*
> >> > >
> >> > > - start server --name=testserver1
> >> > > --cache-xml-file=SITMultiServerMultiRegion1.cache.xml
> >> > > --locators=localhost[10334] --server-port=30303
> >> > > --include-system-classpath
> >> > >
> >> > > - start server --name=testserver2
> >> > > --cache-xml-file=SITMultiServerMultiRegion2.cache.xml
> >> > > --locators=localhost[10334] --server-port=30304
> >> > > --include-system-classpath
> >> > >
> >> > > ​
> >> > > 3. *Java code to create ClientRegion and perform region.put()*
> >> > >
> >> > > private final static String LOCATOR_HOST = 

Review Request 52229: GEODE-1934: Removing spring-core dependencies from geode-core

2016-09-23 Thread Dan Smith

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/52229/
---

Review request for geode, Anthony Baker and Hitesh Khamesra.


Repository: geode


Description
---

Removing the spring-core usage in geode-core classes that are not CLI- or
web-related. The CLI and web code needs to be split into separate
projects; the rest of geode-core should not depend on spring-core.


Diffs
-

  
geode-core/src/main/java/org/apache/geode/internal/net/SSLConfigurationFactory.java
 4bea22bcd57079849bc1ed96b8d7fc9ca8a7ab0c 
  geode-core/src/main/java/org/apache/geode/internal/net/SocketCreator.java 
bc1e896a2ed85260a9e9d95309735ea870d4761f 
  
geode-core/src/main/java/org/apache/geode/management/internal/configuration/domain/XmlEntity.java
 47f032e594abf090d68f5bb4d4a41b4442244875 
  geode-core/src/main/java/org/apache/geode/pdx/internal/PdxInstanceImpl.java 
10a11d267754531bcd5bf116fa598fc0b3d5aad1 

Diff: https://reviews.apache.org/r/52229/diff/


Testing
---


Thanks,

Dan Smith



Re: Region Put errors out in Local Multi Server environment.

2016-09-23 Thread jun aoki
I may be asking a stupid question, but I'm from the HDFS world, where I don't
have to care where data physically lives. (You just connect to the
Namenode and you are good.) Bear with me.

On Fri, Sep 23, 2016 at 5:26 PM, jun aoki  wrote:

> Hi Dan, thank you for sharing your insightful information.
>
> We only know how to get a cache from a locator as client cache.
> e.g. clientCache = new ClientCacheFactory().addPoolLocator(LOCATOR_HOST,
> LOCATOR_PORT).create();
>
> This way, when we try to create a PROXY region, we don't know which server
> we are connecting to, and thus we don't know which region is available to
> the client. (In Goutam's case, it seems it is actually connecting to server1,
> so region1 is visible but not region2.)
>
> Is there any way to specify which server the client connects to, so that
> under this inconsistent configuration (each server holding a different
> region) we can reach the target region appropriately?
> Is this inconsistent configuration valid, since it doesn't error out at
> least, or is it a bad practice where Geode might not work as we expect?
>
>
>
> On Fri, Sep 23, 2016 at 5:02 PM, Goutam Tadi  wrote:
>
>> Thanks a lot Dan :-).
>>
>> Yeah, that was intentional.
>> Your solution solves my problem.
>>
>> Thanks,
>> Goutam Tadi.
>>
>> On Fri, Sep 23, 2016 at 4:58 PM Dan Smith  wrote:
>>
>> > Hi Goutam,
>> >
>> > It looks like you configured your two servers to have different regions.
>> > Was that intentional? What's happening is that the client is connecting
>> to
>> > only one of the servers, which has one of your regions but not the
>> other.
>> >
>> > Generally, when you configure gemfire servers, you should configure the
>> > same regions on all servers. In this case I would put both of your
>> regions
>> > in the same cache.xml and use that for both servers. By default a geode
>> > client will assume that all of your servers have the same regions and
>> will
>> > connect to one or more servers as it sees fit to minimize the number of
>> > connections and reduce the number of network hops.
>> >
>> > If you really want to have servers that have different regions, you will
>> > need to make use of the server-groups property on the server and create
>> two
>> > different pools on the client, using PoolFactory.setServerGroup to
>> control
>> > which group the client connects to. But I would say that's a less common
>> > use case.
>> >
>> > -Dan
>> >
>> > On Fri, Sep 23, 2016 at 4:47 PM, Goutam Tadi  wrote:
>> >
>> > > Hi,
>> > >
>> > > I hit a *"Region not found"* exception when I do the following on my
>> > > local machine (single node).
>> > > The exception does not occur when I run under a remote debugger, which
>> > > introduces some delay; I tried adding a `sleep`, but it did not help.
>> > > Can you please help and let me know what I could possibly be doing
>> > > wrong?
>> > >
>> > > *Geode version: 1.0.0-incubating.M3*
>> > >
>> > > 1. *Start a locator : *
>> > >
>> > > start locator --name=testlocator --include-system-classpath
>> > >
>> > > ​
>> > > 2. *Start two servers:*
>> > >
>> > > - start server --name=testserver1
>> > > --cache-xml-file=SITMultiServerMultiRegion1.cache.xml
>> > > --locators=localhost[10334] --server-port=30303
>> > > --include-system-classpath
>> > >
>> > > - start server --name=testserver2
>> > > --cache-xml-file=SITMultiServerMultiRegion2.cache.xml
>> > > --locators=localhost[10334] --server-port=30304
>> > > --include-system-classpath
>> > >
>> > > ​
>> > > 3. *Java code to create ClientRegion and perform region.put()*
>> > >
>> > > private final static String LOCATOR_HOST = "localhost";
>> > > private final static int LOCATOR_PORT = 10334;
>> > >
>> > > clientCache = new ClientCacheFactory().addPoolLocator(LOCATOR_HOST,
>> > > LOCATOR_PORT).create();
>> > >
>> > > Region region1 = clientCache.createClientRegionFactory("PROXY").create(REGION1);
>> > > assertNotNull(region1);
>> > >
>> > > Region region2 = clientCache.createClientRegionFactory("PROXY").create(REGION2);
>> > > assertNotNull(region2);
>> > >
>> > > region1.put("key1", "region1"); // SUCCESSFUL
>> > > region2.put("key2", "region2"); // ERROR OCCURRED.
>> > >
>> > > ​
>> > > 4.* Cache xmls used:*
>> > >
>> > > *Cache-XML- 1*
>> > >
>> > > 
>> > >
>> > > > > >  xmlns="http://schema.pivotal.io/gemfire/cache;
>> > >  xmlns:gpdb="http://schema.pivotal.io/gemfire/gpdb;
>> > >  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>> > >  xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache
>> > > http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd;
>> > >  version="8.1">
>> > >
>> > >  
>> > >
>> > >  
>> > >  
>> > >  
>> > >  
>> > >
>> > > 
>> > >
>> > > ​
>> > >
>> > >
>> > > *Cache-XML- 2*
>> > >
>> > > 
>> > >
>> > > > > >  xmlns="http://schema.pivotal.io/gemfire/cache;
>> > >  xmlns:gpdb="http://schema.pivotal.io/gemfire/gpdb;
>> 

Re: Hibernate module and Geode 1.0 ?

2016-09-23 Thread Anthony Baker
Likewise!  Geode provides an L2 cache for Hibernate.  That is, an application 
that is using Hibernate could plug in Geode for caching.  Specifically, we 
implement Hibernate’s cache interfaces like CacheProvider, RegionFactory, etc.
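
For an application plugging this in, the wiring is a couple of Hibernate properties. The class names below match classes shipped in the module jar (see the listing further down), but the exact property values are an assumption on my part; check the module docs before relying on them.

```properties
# hibernate.properties (sketch; verify against the module docs)
hibernate.cache.use_second_level_cache=true
# Hibernate 3.3+ region-factory style:
hibernate.cache.region.factory_class=com.gemstone.gemfire.modules.hibernate.GemFireRegionFactory
# Older provider style, also shipped in the jar:
# hibernate.cache.provider_class=com.gemstone.gemfire.modules.hibernate.GemFireCacheProvider
```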

There are build-time dependencies on several hibernate jars 
(hibernate-annotations, hibernate-core, hibernate-commons-annotations).  No 
hibernate source code or jars are shipped with any release.

Docs:
http://geode.docs.pivotal.io/docs/tools_modules/hibernate_cache/chapter_overview.html

Code:
https://git-wip-us.apache.org/repos/asf?p=incubator-geode.git;a=tree;f=extensions/geode-modules-hibernate;h=be8b9355934f824b9d4565ec6bfaa5d17a117f45;hb=HEAD

~/working/apache-geode-1.0.0-incubating.M3$ unzip -l tools/Modules/Apache_Geode_Modules-1.0.0-incubating.M3-Hibernate.zip
Archive:  tools/Modules/Apache_Geode_Modules-1.0.0-incubating.M3-Hibernate.zip
  Length     Date   Time    Name
 --------    ----   ----    ----
        0  08-01-16 17:01   lib/
   114497  08-01-16 16:58   lib/geode-modules-1.0.0-incubating.M3.jar
    56960  08-01-16 17:01   lib/geode-modules-hibernate-1.0.0-incubating.M3.jar
 --------                   -------
   171457                   3 files

~/working/apache-geode-1.0.0-incubating.M3$ jar tvf lib/geode-modules-hibernate-1.0.0-incubating.M3.jar
     0 Mon Aug 01 17:01:40 PDT 2016 META-INF/
   139 Mon Aug 01 17:01:40 PDT 2016 META-INF/MANIFEST.MF
 28210 Mon Jul 25 21:52:24 PDT 2016 META-INF/LICENSE
   584 Fri Jul 08 12:51:12 PDT 2016 META-INF/NOTICE
     0 Mon Aug 01 17:01:40 PDT 2016 com/
     0 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/
     0 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/
     0 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/
     0 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/
  1210 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/EnumType.class
  5707 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/GemFireCache.class
  1700 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/GemFireCacheListener.class
  7084 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/GemFireCacheProvider.class
  1104 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/GemFireQueryCacheFactory.class
  9529 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/GemFireRegionFactory.class
     0 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/
  1020 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/Access$1.class
  9535 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/Access.class
   343 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/ClientServerRegionFactoryDelegate$1.class
  1508 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/ClientServerRegionFactoryDelegate$LocatorHolder.class
  9639 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/ClientServerRegionFactoryDelegate.class
  9739 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/CollectionAccess.class
  2409 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/EntityRegionWriter.class
   240 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/EntityVersion.class
   964 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/EntityVersionImpl.class
  2446 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/EntityWrapper.class
  5001 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/GemFireBaseRegion.class
  2563 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/GemFireCollectionRegion.class
  7702 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/GemFireEntityRegion.class
  3058 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/GemFireQueryResultsRegion.class
  2547 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/KeyWrapper.class
  2911 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/NonStrictReadWriteAccess.class
  1670 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/ReadOnlyAccess.class
  1121 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/ReadWriteAccess.class
  7073 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/RegionFactoryDelegate.class
   581 Mon Aug 01 17:01:40 PDT 2016 com/gemstone/gemfire/modules/hibernate/internal/TransactionalAccess.class


Anthony

> On Sep 23, 2016, at 4:54 PM, Roman Shaposhnik  wrote:
> 
> You've mentioned my trigger word ;-) Hibernate is licensed under LGPL,
> so could you please specify how Geode is using it?
> 
> Thanks,
> Roman.
> 
> On Thu, Sep 

Re: Review Request 52176: GEODE-1926: Modification of peekAhead() function to check if heapCopy is successful before adding the key to peekedIds

2016-09-23 Thread Dan Smith

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/52176/#review150271
---


Ship it!




Ship It!

- Dan Smith


On Sept. 23, 2016, 9:58 p.m., nabarun nag wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/52176/
> ---
> 
> (Updated Sept. 23, 2016, 9:58 p.m.)
> 
> 
> Review request for geode, Barry Oglesby, Eric Shu, Jason Huynh, Dan Smith, 
> and xiaojian zhou.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Before my changes:
> 1. peek calls peekAhead to get the object from the sender queue to be put 
> into the batch for dispatch.
> 2. peekAhead gets the current key for which it is able to get an object back 
> by calling optimalGet.
> 3. It puts that current key into the peekedIds list and returns the object 
> back to peek.
> 4. peek then tries to make a heap copy, and only if that succeeds does it 
> put the object into the dispatch batch.
> 5. Here is the issue: conflation may kick in before peek is able to make the 
> heap copy, removing that object from the queue, so the object will not be 
> placed in the dispatch batch. However, the key representing that object in 
> the queue still exists in the peekedIds list.
> 6. So now there is an extra key in the peekedIds list.
> 7. Batch is dispatched and now the ack is received, so things need to be 
> removed from the sender queue.
> 8. remove() is called in a for loop with the count set to the size of the 
> dispatch batch for which the ack was received. 
> 9. The remove() operation picks up keys from peekedIds list sequentially and 
> calls remove(key) on the queue.
> 10. Now, since the batch size is smaller than the number of IDs peeked, an 
> element will still linger behind after all remove calls are completed.
> 11. So tests which wait for queues to be empty will hang forever.
> 
> Solution:
> Since peek() doesn't have access to the current key, only the object, it 
> cannot remove the key from the peekedIds list. We therefore moved the 
> heap copy check into peekAhead. 
> 
> Now a key is placed into the peekedIds list only if a successful heap copy 
> has been made.
> 
> 
> Diffs
> -
> 
>   
> geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java
>  a8bb72d 
> 
> Diff: https://reviews.apache.org/r/52176/diff/
> 
> 
> Testing
> ---
> 
> precheck
> 
> 
> Thanks,
> 
> nabarun nag
> 
>
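
The race nabarun describes can be reproduced in miniature with plain JDK collections. This is a toy simulation, not the SerialGatewaySenderQueue code; every name in it is invented for illustration.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class PeekedIdsRace {

    // Returns the keys left in the queue after one peek/dispatch/ack cycle in
    // which one object is conflated away between optimalGet and the heap copy.
    // checkCopyFirst=false models the old peekAhead, true models the fix.
    static List<Integer> leftoverAfterAck(boolean checkCopyFirst) {
        Deque<Integer> queue = new ArrayDeque<>(List.of(1, 2, 3));
        List<Integer> peekedIds = new ArrayList<>();
        List<Integer> batch = new ArrayList<>();
        int conflatedKey = 2; // conflation removes this object mid-peek

        for (int key : List.of(1, 2, 3)) {
            boolean heapCopyOk = key != conflatedKey; // copy fails: object gone
            if (checkCopyFirst) {
                // fixed behavior: record the key only after a successful copy
                if (heapCopyOk) { peekedIds.add(key); batch.add(key); }
            } else {
                // old behavior: key recorded before the copy is attempted
                peekedIds.add(key);
                if (heapCopyOk) { batch.add(key); }
            }
        }
        queue.remove(conflatedKey); // the conflated entry leaves the queue

        // Ack arrives for the dispatched batch: remove batch-size keys,
        // taken sequentially from peekedIds, from the queue.
        for (int i = 0; i < batch.size(); i++) {
            queue.remove(peekedIds.remove(0));
        }
        return new ArrayList<>(queue); // non-empty => queue never drains
    }

    public static void main(String[] args) {
        System.out.println("old behavior leaves:   " + leftoverAfterAck(false)); // [3]
        System.out.println("fixed behavior leaves: " + leftoverAfterAck(true));  // []
    }
}
```

With the old ordering the extra peeked key (2) consumes one of the ack-driven removes, so key 3 is never removed and a test waiting for an empty queue hangs; with the fix the counts line up and the queue drains.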



Re: Hibernate module and Geode 1.0 ?

2016-09-23 Thread Dan Smith
Geode provides an optional L2 cache for hibernate:

http://geode.docs.pivotal.io/docs/tools_modules/hibernate_cache/setting_up_the_module.html

Looking at the code, it looks like the geode-modules-hibernate is pulling
in hibernate-core as a compile dependency, but it's not being distributed
as part of the binary distribution.

-Dan

On Fri, Sep 23, 2016 at 4:54 PM, Roman Shaposhnik 
wrote:

> You've mentioned my trigger word ;-) Hibernate is licensed under LGPL,
> so could you please specify how Geode is using it?
>
> Thanks,
> Roman.
>
> On Thu, Sep 22, 2016 at 3:15 PM, William Markito 
> wrote:
> > Folks,
> >
> > We're still building the Hibernate cache module [1], but it's compatible
> > only with a very old version (3.5), and the API has completely changed.
> > Unless someone in the community wants to help bring this module up to date
> > with at least Hibernate 5.x, I'd like to propose removing the module from
> > 1.0 / develop until we can work on updating it.
> >
> > Given that it's already a separate module, it shouldn't be that hard to
> > remove.
> >
> > Thoughts ?
> >
> > [1]
> > http://geode.docs.pivotal.io/docs/tools_modules/hibernate_
> cache/chapter_overview.html
>


Re: Region Put errors out in Local Multi Server environment.

2016-09-23 Thread Goutam Tadi
Thanks a lot Dan :-).

Yeah, that was intentional.
Your solution solves my problem.

Thanks,
Goutam Tadi.

On Fri, Sep 23, 2016 at 4:58 PM Dan Smith  wrote:

> Hi Goutam,
>
> It looks like you configured your two servers to have different regions.
> Was that intentional? What's happening is that the client is connecting to
> only one of the servers, which has one of your regions but not the other.
>
> Generally, when you configure gemfire servers, you should configure the
> same regions on all servers. In this case I would put both of your regions
> in the same cache.xml and use that for both servers. By default a geode
> client will assume that all of your servers have the same regions and will
> connect to one or more servers as it sees fit to minimize the number of
> connections and reduce the number of network hops.
>
> If you really want to have servers that have different regions, you will
> need to make use of the server-groups property on the server and create two
> different pools on the client, using PoolFactory.setServerGroup to control
> which group the client connects to. But I would say that's a less common
> use case.
>
> -Dan
>
> On Fri, Sep 23, 2016 at 4:47 PM, Goutam Tadi  wrote:
>
> > Hi,
> >
> > I hit a *"Region not found"* exception when I do the following on my
> > local machine (single node).
> > The exception does not occur when I run under a remote debugger, which
> > introduces some delay; I tried adding a `sleep`, but it did not help.
> > Can you please help and let me know what I could possibly be doing wrong?
> >
> > *Geode version: 1.0.0-incubating.M3*
> >
> > 1. *Start a locator : *
> >
> > start locator --name=testlocator --include-system-classpath
> >
> > ​
> > 2. *Start two servers:*
> >
> > - start server --name=testserver1
> > --cache-xml-file=SITMultiServerMultiRegion1.cache.xml
> > --locators=localhost[10334] --server-port=30303
> > --include-system-classpath
> >
> > - start server --name=testserver2
> > --cache-xml-file=SITMultiServerMultiRegion2.cache.xml
> > --locators=localhost[10334] --server-port=30304
> > --include-system-classpath
> >
> > ​
> > 3. *Java code to create ClientRegion and perform region.put()*
> >
> > private final static String LOCATOR_HOST = "localhost";
> > private final static int LOCATOR_PORT = 10334;
> >
> > clientCache = new ClientCacheFactory().addPoolLocator(LOCATOR_HOST,
> > LOCATOR_PORT).create();
> >
> > Region region1 = clientCache.createClientRegionFactory("PROXY").create(REGION1);
> > assertNotNull(region1);
> >
> > Region region2 = clientCache.createClientRegionFactory("PROXY").create(REGION2);
> > assertNotNull(region2);
> >
> > region1.put("key1", "region1"); // SUCCESSFUL
> > region2.put("key2", "region2"); // ERROR OCCURRED.
> >
> > ​
> > 4.* Cache xmls used:*
> >
> > *Cache-XML- 1*
> >
> > <?xml version="1.0" encoding="UTF-8"?>
> >
> > <cache xmlns="http://schema.pivotal.io/gemfire/cache"
> >  xmlns:gpdb="http://schema.pivotal.io/gemfire/gpdb"
> >  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> >  xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache
> > http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd"
> >  version="8.1">
> >
> >  [cache-server and region elements were stripped by the mail archive]
> >
> > </cache>
> >
> > ​
> >
> >
> > *Cache-XML- 2*
> >
> > <?xml version="1.0" encoding="UTF-8"?>
> >
> > <cache xmlns="http://schema.pivotal.io/gemfire/cache"
> >  xmlns:gpdb="http://schema.pivotal.io/gemfire/gpdb"
> >  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
> >  xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache
> > http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd"
> >  version="8.1">
> >
> >  [cache-server and region elements were stripped by the mail archive]
> >
> > </cache>
> >
> > ​
> >
> > 5. *StackTrace*:
> >
> >
> > com.gemstone.gemfire.cache.client.ServerOperationException: remote server on gpdb(28401:loner):45180:20c75e59: : While performing a remote put
> >
> > at com.gemstone.gemfire.cache.client.internal.PutOp$PutOpImpl.processAck(PutOp.java:445)
> > at com.gemstone.gemfire.cache.client.internal.PutOp$PutOpImpl.processResponse(PutOp.java:355)
> > at com.gemstone.gemfire.cache.client.internal.PutOp$PutOpImpl.attemptReadResponse(PutOp.java:540)
> > at com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:378)
> > at com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:274)
> > at com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:328)
> > at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:937)
> > at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:155)
> > at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:110)
> > at com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:700)
> >

Re: Region Put errors out in Local Multi Server environment.

2016-09-23 Thread Dan Smith
Hi Goutam,

It looks like you configured your two servers to have different regions.
Was that intentional? What's happening is that the client is connecting to
only one of the servers, which has one of your regions but not the other.

Generally, when you configure gemfire servers, you should configure the
same regions on all servers. In this case I would put both of your regions
in the same cache.xml and use that for both servers. By default a geode
client will assume that all of your servers have the same regions and will
connect to one or more servers as it sees fit to minimize the number of
connections and reduce the number of network hops.

If you really want to have servers that have different regions, you will
need to make use of the server-groups property on the server and create two
different pools on the client, using PoolFactory.setServerGroup to control
which group the client connects to. But I would say that's a less common
use case.

-Dan
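
A single cache.xml along the lines Dan describes, used by both servers, might look like the sketch below. The region names, refids, and cache-server element here are illustrative assumptions, not the poster's actual file (which the archive stripped).

```xml
<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns="http://schema.pivotal.io/gemfire/cache"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache
                           http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd"
       version="8.1">
  <cache-server/>
  <!-- both regions defined on every server -->
  <region name="region1" refid="PARTITION"/>
  <region name="region2" refid="PARTITION"/>
</cache>
```

Passing this one file via --cache-xml-file to both start server commands gives every server both regions, so the client can reach either region through any server.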

On Fri, Sep 23, 2016 at 4:47 PM, Goutam Tadi  wrote:

> Hi,
>
> I hit a *"Region not found"* exception when I do the following on my
> local machine (single node).
> The exception does not occur when I run under a remote debugger, which
> introduces some delay; I tried adding a `sleep`, but it did not help.
> Can you please help and let me know what I could possibly be doing wrong?
>
> *Geode version: 1.0.0-incubating.M3*
>
> 1. *Start a locator : *
>
> start locator --name=testlocator --include-system-classpath
>
> ​
> 2. *Start two servers:*
>
> - start server --name=testserver1
> --cache-xml-file=SITMultiServerMultiRegion1.cache.xml
> --locators=localhost[10334] --server-port=30303
> --include-system-classpath
>
> - start server --name=testserver2
> --cache-xml-file=SITMultiServerMultiRegion2.cache.xml
> --locators=localhost[10334] --server-port=30304
> --include-system-classpath
>
> ​
> 3. *Java code to create ClientRegion and perform region.put()*
>
> private final static String LOCATOR_HOST = "localhost";
> private final static int LOCATOR_PORT = 10334;
>
> clientCache = new ClientCacheFactory().addPoolLocator(LOCATOR_HOST,
> LOCATOR_PORT).create();
>
> Region region1 = clientCache.createClientRegionFactory("PROXY").create(REGION1);
> assertNotNull(region1);
>
> Region region2 = clientCache.createClientRegionFactory("PROXY").create(REGION2);
> assertNotNull(region2);
>
> region1.put("key1", "region1"); // SUCCESSFUL
> region2.put("key2", "region2"); // ERROR OCCURRED.
>
> ​
> 4.* Cache xmls used:*
>
> *Cache-XML- 1*
>
> <?xml version="1.0" encoding="UTF-8"?>
>
> <cache xmlns="http://schema.pivotal.io/gemfire/cache"
>  xmlns:gpdb="http://schema.pivotal.io/gemfire/gpdb"
>  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>  xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache
> http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd"
>  version="8.1">
>
>  [cache-server and region elements were stripped by the mail archive]
>
> </cache>
>
> ​
>
>
> *Cache-XML- 2*
>
> <?xml version="1.0" encoding="UTF-8"?>
>
> <cache xmlns="http://schema.pivotal.io/gemfire/cache"
>  xmlns:gpdb="http://schema.pivotal.io/gemfire/gpdb"
>  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>  xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache
> http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd"
>  version="8.1">
>
>  [cache-server and region elements were stripped by the mail archive]
>
> </cache>
>
> ​
>
> 5. *StackTrace*:
>
>
> com.gemstone.gemfire.cache.client.ServerOperationException: remote server on gpdb(28401:loner):45180:20c75e59: : While performing a remote put
>
> at com.gemstone.gemfire.cache.client.internal.PutOp$PutOpImpl.processAck(PutOp.java:445)
> at com.gemstone.gemfire.cache.client.internal.PutOp$PutOpImpl.processResponse(PutOp.java:355)
> at com.gemstone.gemfire.cache.client.internal.PutOp$PutOpImpl.attemptReadResponse(PutOp.java:540)
> at com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:378)
> at com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:274)
> at com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:328)
> at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:937)
> at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:155)
> at com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:110)
> at com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:700)
> at com.gemstone.gemfire.cache.client.internal.PutOp.execute(PutOp.java:102)
> at com.gemstone.gemfire.cache.client.internal.ServerRegionProxy.put(ServerRegionProxy.java:175)
> at com.gemstone.gemfire.internal.cache.LocalRegion.serverPut(LocalRegion.java:3173)
> at com.gemstone.gemfire.internal.cache.LocalRegion.cacheWriteBeforePut(LocalRegion.java:3300)
> at com.gemstone.gemfire.internal.cache.ProxyRegionMap.basicPut(
> 

Re: Build failed in Jenkins: Geode-spark-connector #78

2016-09-23 Thread Anthony Baker
Yep, I’m seeing failures on any client app that doesn’t explicitly include
spring as a dependency.

Exception in thread "main" java.lang.NoClassDefFoundError: org/springframework/util/StringUtils
	at org.apache.geode.internal.net.SSLConfigurationFactory.configureSSLPropertiesFromSystemProperties(SSLConfigurationFactory.java:274)
	at org.apache.geode.internal.net.SSLConfigurationFactory.configureSSLPropertiesFromSystemProperties(SSLConfigurationFactory.java:270)
	at org.apache.geode.internal.net.SSLConfigurationFactory.createSSLConfigForComponent(SSLConfigurationFactory.java:138)
	at org.apache.geode.internal.net.SSLConfigurationFactory.getSSLConfigForComponent(SSLConfigurationFactory.java:67)
	at org.apache.geode.internal.net.SocketCreatorFactory.getSocketCreatorForComponent(SocketCreatorFactory.java:67)
	at org.apache.geode.distributed.internal.tcpserver.TcpClient.<init>(TcpClient.java:69)
	at org.apache.geode.cache.client.internal.AutoConnectionSourceImpl.<init>(AutoConnectionSourceImpl.java:114)
	at org.apache.geode.cache.client.internal.PoolImpl.getSourceImpl(PoolImpl.java:579)
	at org.apache.geode.cache.client.internal.PoolImpl.<init>(PoolImpl.java:219)
	at org.apache.geode.cache.client.internal.PoolImpl.create(PoolImpl.java:132)
	at org.apache.geode.internal.cache.PoolFactoryImpl.create(PoolFactoryImpl.java:319)
	at org.apache.geode.internal.cache.GemFireCacheImpl.determineDefaultPool(GemFireCacheImpl.java:2943)
	at org.apache.geode.internal.cache.GemFireCacheImpl.initializeDeclarativeCache(GemFireCacheImpl.java:1293)
	at org.apache.geode.internal.cache.GemFireCacheImpl.initialize(GemFireCacheImpl.java:1124)
	at org.apache.geode.internal.cache.GemFireCacheImpl.basicCreate(GemFireCacheImpl.java:765)
	at org.apache.geode.internal.cache.GemFireCacheImpl.createClient(GemFireCacheImpl.java:740)
	at org.apache.geode.cache.client.ClientCacheFactory.basicCreate(ClientCacheFactory.java:235)
	at org.apache.geode.cache.client.ClientCacheFactory.create(ClientCacheFactory.java:189)
	at HelloWorld.main(HelloWorld.java:25)
Caused by: java.lang.ClassNotFoundException: org.springframework.util.StringUtils
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 19 more

Anthony


> On Sep 23, 2016, at 4:34 PM, Dan Smith  wrote:
> 
> I created GEODE-1934 for this. It looks like the problem is actually that
> our dependencies for geode-core are messed up. spring-core is marked
> optional, but we're using it in critical places like this
> SSLConfigurationFactory.
> 
> In my opinion we shouldn't depend on spring-core at all unless we're
> actually going to use it for things other than StringUtils. I think we've
> accidentally introduced dependencies on it because the gfsh code in the
> core is pulling in a bunch of spring libraries.
> 
> -Dan
> 
> 
> On Fri, Sep 23, 2016 at 9:12 AM, Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
> 
>> See 
>> 
>> Changes:
>> 
>> [hkhamesra] GEODE-37 In spark connector we call TcpClient static method to
>> get the
>> 
>> [klund] GEODE-1906: fix misspelling of Successfully
>> 
>> [upthewaterspout] GEODE-1915: Prevent deadlock registering instantiators
>> with gateways
>> 
>> --
>> [...truncated 1883 lines...]
>> 16/09/23 16:11:05 INFO HttpFileServer: HTTP File server directory is
>> /tmp/spark-f13dac55-087f-4379-aeed-616fbdc7ffac/httpd-
>> 02c1fab9-faa0-47f4-b0f3-fd44383eeeb3
>> 16/09/23 16:11:05 INFO HttpServer: Starting HTTP Server
>> 16/09/23 16:11:05 INFO Utils: Successfully started service 'HTTP file
>> server' on port 40135.
>> 16/09/23 16:11:05 INFO SparkEnv: Registering OutputCommitCoordinator
>> 16/09/23 16:11:10 WARN Utils: Service 'SparkUI' could not bind on port
>> 4040. Attempting port 4041.
>> 16/09/23 16:11:15 INFO Utils: Successfully started service 'SparkUI' on
>> port 4041.
>> 16/09/23 16:11:15 INFO SparkUI: Started SparkUI at http://localhost:4041
>> 16/09/23 16:11:15 INFO Executor: Starting executor ID  on host
>> localhost
>> 16/09/23 16:11:15 INFO AkkaUtils: Connecting to HeartbeatReceiver:
>> akka.tcp://sparkDriver@localhost:54872/user/HeartbeatReceiver
>> 16/09/23 16:11:16 INFO NettyBlockTransferService: Server created on 41182
>> 16/09/23 16:11:16 INFO BlockManagerMaster: Trying to register BlockManager
>> 16/09/23 16:11:16 INFO BlockManagerMasterActor: Registering block manager
>> localhost:41182 with 2.8 GB RAM, BlockManagerId(, localhost, 41182)
>> 16/09/23 16:11:16 INFO BlockManagerMaster: Registered BlockManager
>> === GeodeRunner: stop server 1.
>> === GeodeRunner: stop 
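
Until GEODE-1934 is fixed, a client app that hits this NoClassDefFoundError can work around it by declaring the (nominally optional) spring-core dependency explicitly in its own build. A hypothetical Gradle fragment, with an illustrative spring-core version:

```groovy
// Workaround sketch: declare spring-core ourselves, because geode-core
// marks it optional yet uses it in SSLConfigurationFactory.
// The spring-core version shown is illustrative, not prescribed.
dependencies {
    compile 'org.apache.geode:geode-core:1.0.0-incubating.M3'
    compile 'org.springframework:spring-core:4.3.3.RELEASE'
}
```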

Region Put errors out in Local Multi Server environment.

2016-09-23 Thread Goutam Tadi
Hi,

I was facing a *"Region not found"* exception when I did the following on my
local machine (single node).
I don't see the exception when I perform a remote debug session, which
introduces some time lapse, so I tried introducing some `sleep` calls, but to
no avail.
Can you please help and let me know what I could possibly be doing wrong?

*Geode version: 1.0.0-incubating.M3*

1. *Start a locator : *

start locator --name=testlocator --include-system-classpath

​
2. *Start two servers:*

- start server --name=testserver1
--cache-xml-file=SITMultiServerMultiRegion1.cache.xml
--locators=localhost[10334] --server-port=30303
--include-system-classpath

- start server --name=testserver2
--cache-xml-file=SITMultiServerMultiRegion2.cache.xml
--locators=localhost[10334] --server-port=30304
--include-system-classpath

​
3. *Java code to create ClientRegion and perform region.put()*

private final static String LOCATOR_HOST = "localhost";
private final static int LOCATOR_PORT = 10334;
// assumed values, matching the cache XML file names and the stack trace below
private final static String REGION1 = "SITMultiServerMultiRegion1";
private final static String REGION2 = "SITMultiServerMultiRegion2";

clientCache = new ClientCacheFactory().addPoolLocator(LOCATOR_HOST,
LOCATOR_PORT).create();

Region region1 = clientCache.
createClientRegionFactory("PROXY").create(REGION1);
assertNotNull(region1);

Region region2 = clientCache.createClientRegionFactory("PROXY").create(REGION2);
assertNotNull(region2);

region1.put("key1", "region1"); // SUCCESSFUL
region2.put("key2", "region2"); // ERROR OCCURRED.

​
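
Dan's server-groups suggestion can be sketched as follows: start each server in its own group, then pin the client pool to the group that hosts the region it needs. Group names here are illustrative, and the commands are an assumption based on the start commands already shown:

```shell
# Hypothetical: same start commands as above, plus a --group per server.
start server --name=testserver1 --group=group1 \
  --cache-xml-file=SITMultiServerMultiRegion1.cache.xml \
  --locators=localhost[10334] --server-port=30303
start server --name=testserver2 --group=group2 \
  --cache-xml-file=SITMultiServerMultiRegion2.cache.xml \
  --locators=localhost[10334] --server-port=30304
```

On the client side the pool can then be restricted, e.g. `new ClientCacheFactory().addPoolLocator(LOCATOR_HOST, LOCATOR_PORT).setPoolServerGroup("group2").create()`. Note that the default pool then only sees that one group, so a client needing both regions would need a separate pool per group (created via PoolFactory).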
4.* Cache xmls used:*

*Cache-XML- 1*



<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns="http://schema.pivotal.io/gemfire/cache"
 xmlns:gpdb="http://schema.pivotal.io/gemfire/gpdb"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache
http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd"
 version="8.1">

  <region name="SITMultiServerMultiRegion1">
    <!-- region attributes were stripped by the mail archive -->
  </region>

</cache>



​


*Cache-XML- 2*



<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns="http://schema.pivotal.io/gemfire/cache"
 xmlns:gpdb="http://schema.pivotal.io/gemfire/gpdb"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache
http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd"
 version="8.1">

  <region name="SITMultiServerMultiRegion2">
    <!-- region attributes were stripped by the mail archive -->
  </region>

</cache>



​

5. *StackTrace*:


com.gemstone.gemfire.cache.client.ServerOperationException: remote
server on gpdb(28401:loner):45180:20c75e59: : While performing a
remote put

at 
com.gemstone.gemfire.cache.client.internal.PutOp$PutOpImpl.processAck(PutOp.java:445)

at 
com.gemstone.gemfire.cache.client.internal.PutOp$PutOpImpl.processResponse(PutOp.java:355)

at 
com.gemstone.gemfire.cache.client.internal.PutOp$PutOpImpl.attemptReadResponse(PutOp.java:540)

at 
com.gemstone.gemfire.cache.client.internal.AbstractOp.attempt(AbstractOp.java:378)

at 
com.gemstone.gemfire.cache.client.internal.ConnectionImpl.execute(ConnectionImpl.java:274)

at 
com.gemstone.gemfire.cache.client.internal.pooling.PooledConnection.execute(PooledConnection.java:328)

at 
com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.executeWithPossibleReAuthentication(OpExecutorImpl.java:937)

at 
com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:155)

at 
com.gemstone.gemfire.cache.client.internal.OpExecutorImpl.execute(OpExecutorImpl.java:110)

at 
com.gemstone.gemfire.cache.client.internal.PoolImpl.execute(PoolImpl.java:700)

at com.gemstone.gemfire.cache.client.internal.PutOp.execute(PutOp.java:102)

at 
com.gemstone.gemfire.cache.client.internal.ServerRegionProxy.put(ServerRegionProxy.java:175)

at 
com.gemstone.gemfire.internal.cache.LocalRegion.serverPut(LocalRegion.java:3173)

at 
com.gemstone.gemfire.internal.cache.LocalRegion.cacheWriteBeforePut(LocalRegion.java:3300)

at 
com.gemstone.gemfire.internal.cache.ProxyRegionMap.basicPut(ProxyRegionMap.java:230)

at 
com.gemstone.gemfire.internal.cache.LocalRegion.virtualPut(LocalRegion.java:5955)

at 
com.gemstone.gemfire.internal.cache.LocalRegionDataView.putEntry(LocalRegionDataView.java:132)

at 
com.gemstone.gemfire.internal.cache.LocalRegion.basicPut(LocalRegion.java:5350)

at 
com.gemstone.gemfire.internal.cache.LocalRegion.validatedPut(LocalRegion.java:1668)

at 
com.gemstone.gemfire.internal.cache.LocalRegion.put(LocalRegion.java:1655)

at 
com.gemstone.gemfire.internal.cache.AbstractRegion.put(AbstractRegion.java:288)

at io.pivotal.gemfire.gpdb.SITSample.testMultiRegion(SITSample.java:104)

Caused by: com.gemstone.gemfire.cache.RegionDestroyedException: Server
connection from
[identity(gpdb(28401:loner):45180:20c75e59,connection=1; port=45180]:
Region named /SITMultiServerMultiRegion2 was not found during put
request

at 
com.gemstone.gemfire.internal.cache.tier.sockets.BaseCommand.writeRegionDestroyedEx(BaseCommand.java:642)

at 
com.gemstone.gemfire.internal.cache.tier.sockets.command.Put65.cmdExecute(Put65.java:195)

at 
com.gemstone.gemfire.internal.cache.tier.sockets.BaseCommand.execute(BaseCommand.java:146)

at 
com.gemstone.gemfire.internal.cache.tier.sockets.ServerConnection.doNormalMsg(ServerConnection.java:783)

at 

Re: Build failed in Jenkins: Geode-spark-connector #78

2016-09-23 Thread Dan Smith
I created GEODE-1934 for this. It looks like the problem is actually that
our dependencies for geode-core are messed up. spring-core is marked
optional, but we're using it in critical places like this
SSLConfigurationFactory.

In my opinion we shouldn't depend on spring-core at all unless we're
actually going to use it for things other than StringUtils. I think we've
accidentally introduced dependencies on it because the gfsh code in the
core is pulling in a bunch of spring libraries.

-Dan


On Fri, Sep 23, 2016 at 9:12 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> See 
>
> Changes:
>
> [hkhamesra] GEODE-37 In spark connector we call TcpClient static method to
> get the
>
> [klund] GEODE-1906: fix misspelling of Successfully
>
> [upthewaterspout] GEODE-1915: Prevent deadlock registering instantiators
> with gateways
>
> --
> [...truncated 1883 lines...]
> 16/09/23 16:11:05 INFO HttpFileServer: HTTP File server directory is
> /tmp/spark-f13dac55-087f-4379-aeed-616fbdc7ffac/httpd-
> 02c1fab9-faa0-47f4-b0f3-fd44383eeeb3
> 16/09/23 16:11:05 INFO HttpServer: Starting HTTP Server
> 16/09/23 16:11:05 INFO Utils: Successfully started service 'HTTP file
> server' on port 40135.
> 16/09/23 16:11:05 INFO SparkEnv: Registering OutputCommitCoordinator
> 16/09/23 16:11:10 WARN Utils: Service 'SparkUI' could not bind on port
> 4040. Attempting port 4041.
> 16/09/23 16:11:15 INFO Utils: Successfully started service 'SparkUI' on
> port 4041.
> 16/09/23 16:11:15 INFO SparkUI: Started SparkUI at http://localhost:4041
> 16/09/23 16:11:15 INFO Executor: Starting executor ID  on host
> localhost
> 16/09/23 16:11:15 INFO AkkaUtils: Connecting to HeartbeatReceiver:
> akka.tcp://sparkDriver@localhost:54872/user/HeartbeatReceiver
> 16/09/23 16:11:16 INFO NettyBlockTransferService: Server created on 41182
> 16/09/23 16:11:16 INFO BlockManagerMaster: Trying to register BlockManager
> 16/09/23 16:11:16 INFO BlockManagerMasterActor: Registering block manager
> localhost:41182 with 2.8 GB RAM, BlockManagerId(, localhost, 41182)
> 16/09/23 16:11:16 INFO BlockManagerMaster: Registered BlockManager
> === GeodeRunner: stop server 1.
> === GeodeRunner: stop server 2.
>  [0m[ [0minfo [0m]  [0m [32mRetrieveRegionIntegrationTest: [0m [0m
> ..
>
> === GeodeRunner: stop locator
> ...
> Successfully stop Geode locator at port 27662.
> === GeodeRunner: starting locator on port 23825
> === GeodeRunner: waiting for locator on port 23825
> === GeodeRunner: done waiting for locator on port 23825
> === GeodeRunner: starting server1 with clientPort 28993
> === GeodeRunner: starting server2 with clientPort 26318
> === GeodeRunner: starting server3 with clientPort 29777
> === GeodeRunner: starting server4 with clientPort 22946
> 
> Locator in
> /x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-
> connector/geode-spark-connector/target/testgeode/locator on
> hemera.apache.org[23825] as locator is currently online.
> Process ID: 1860
> Uptime: 4 seconds
> GemFire Version: 1.0.0-incubating-SNAPSHOT
> Java Version: 1.8.0_66
> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
> locator/locator.log
> JVM Arguments: -Dgemfire.enable-cluster-configuration=true
> -Dgemfire.load-cluster-configuration-from-dir=false
> -Dgemfire.jmx-manager-http-port=29684 
> -Dgemfire.launcher.registerSignalHandlers=true
> -Djava.awt.headless=true -Dsun.rmi.dgc.server.
> gcInterval=9223372036854775806
> Class-Path: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-
> incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-assembly/build/install/apache-geode/
> lib/geode-dependencies.jar
>
> Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]
>
> Cluster configuration service is up and running.
>
> 
> Server in /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/server4
> on hemera.apache.org[22946] as server4 is currently online.
> Process ID: 2204
> Uptime: 8 seconds
> GemFire Version: 1.0.0-incubating-SNAPSHOT
> Java Version: 1.8.0_66
> Log File: /x1/jenkins/jenkins-slave/workspace/Geode-spark-
> connector/geode-spark-connector/geode-spark-connector/target/testgeode/
> server4/server4.log
> JVM Arguments: -Dgemfire.locators=localhost[23825] 
> -Dgemfire.use-cluster-configuration=true
> -Dgemfire.bind-address=localhost -Dgemfire.cache-xml-file=/x1/
> jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-
> connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml
> -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false
> -XX:OnOutOfMemoryError=kill 

Review Request 52228: fix hang introduced by GEODE-1885

2016-09-23 Thread Darrel Schneider

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/52228/
---

Review request for geode, anilkumar gingade, Eric Shu, Scott Jewell, and Ken 
Howe.


Bugs: GEODE-1885
https://issues.apache.org/jira/browse/GEODE-1885


Repository: geode


Description
---

The fix for GEODE-1885 introduced a hang on off-heap regions.
If a concurrent close/destroy of the region happens while other threads are 
modifying it, then a thread doing a modification can get stuck in a hot loop 
that never terminates.
The hot loop is in AbstractRegionMap, where it tests the existing region entry 
it finds to see if it can be modified. If the region entry has a value that says 
it is removed, then the operation spins around and tries again. It expects the 
thread that marked the entry as removed to also remove it from the map, but the 
fix for GEODE-1885 can cause that remove to never happen.
So this fix does two things:
 1. On retry, remove the existing removed region entry from the map.
 2. putEntryIfAbsent now only releases the current entry if it has an off-heap 
reference. This prevents an infinite loop caused by the current thread, which 
just added a new entry with REMOVE_PHASE1, releasing that entry (changing it to 
REMOVE_PHASE2) because it sees that the region is closed/destroyed.
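
The retry change in item 1 can be sketched with a toy map; all names here are hypothetical and greatly simplified relative to Geode's actual AbstractRegionMap:

```java
import java.util.concurrent.ConcurrentHashMap;

// Toy model of the retry loop fix. A thread that finds a tombstoned
// (removed) entry clears it from the map itself and retries, instead
// of spinning until the destroying thread removes it.
class RetryRemoveSketch {
    enum Phase { ACTIVE, REMOVED_PHASE1 }

    static class Entry {
        volatile Phase phase = Phase.ACTIVE;
        Object value;
        Entry(Object v) { value = v; }
    }

    final ConcurrentHashMap<String, Entry> map = new ConcurrentHashMap<>();

    /** Returns the entry that ends up live in the map for this key. */
    Entry putEntry(String key, Object value) {
        while (true) {
            Entry fresh = new Entry(value);
            Entry existing = map.putIfAbsent(key, fresh);
            if (existing == null) {
                return fresh;                 // we won the race
            }
            if (existing.phase == Phase.REMOVED_PHASE1) {
                // The fix: remove the stale removed entry ourselves and
                // retry, rather than spinning forever waiting for the
                // thread that tombstoned it.
                map.remove(key, existing);
                continue;
            }
            return existing;                  // live entry; caller modifies it
        }
    }

    public static void main(String[] args) {
        RetryRemoveSketch m = new RetryRemoveSketch();
        Entry stale = new Entry("old");
        stale.phase = Phase.REMOVED_PHASE1;
        m.map.put("k", stale);
        Entry live = m.putEntry("k", "new");
        System.out.println(live.value);       // the stale entry was replaced
    }
}
```

The key point is `map.remove(key, existing)`: the conditional remove only clears the exact stale entry, so a concurrent replacement by another thread is never clobbered.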


Diffs
-

  
geode-core/src/main/java/org/apache/geode/internal/cache/AbstractRegionMap.java 
33e98b6ae8a795c5a7b60aa93c7384750eb9582b 

Diff: https://reviews.apache.org/r/52228/diff/


Testing
---

precheckin


Thanks,

Darrel Schneider



Re: Review Request 52176: GEODE-1926: Modification of peedAhead() function check if heapCopy is successful before adding the key to peekedIds

2016-09-23 Thread Dan Smith


> On Sept. 22, 2016, 11:02 p.m., Dan Smith wrote:
> > geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java,
> >  line 796
> > 
> >
> > Minor nit - can't you just write this code like this?
> > 
> > Long currentKey = getCurrentKey();
> > if(currentKey == null) {
> > 
> > Seems less confusing that way.
> 
> nabarun nag wrote:
> I completely agree with this. I was following the way other if 
> comparisons were done in the peekAhead. ->
> 
> while (before(currentKey, getTailKey())
> && (object = getObjectInSerialSenderQueue(currentKey)) == null) {
> 
> I will fix such comparisons in other places too
> 
> Dan Smith wrote:
> Your example above is in a while loop, so it actually is fetching the 
> object on each iteration.
> 
> nabarun nag wrote:
> aah!! i assumed the amount of confusion would be same in the comparison 
> within an if condition and a while loop. Hence changed it to
> object = getObjectInSerialSenderQueue(currentKey);
> while (before(currentKey, getTailKey())
> && (object == null)) {
>   currentKey = inc(currentKey);
>   object = getObjectInSerialSenderQueue(currentKey);
>   }
>   
> I assumed keeping both the functions incrementing current key and 
> fetching the object within the loop was a good idea.
> 
> Do you think its not a good idea?

What you did looks good to me.


- Dan


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/52176/#review150094
---


On Sept. 23, 2016, 9:58 p.m., nabarun nag wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/52176/
> ---
> 
> (Updated Sept. 23, 2016, 9:58 p.m.)
> 
> 
> Review request for geode, Barry Oglesby, Eric Shu, Jason Huynh, Dan Smith, 
> and xiaojian zhou.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Before my changes:
> 1. peek calls peekAhead to get the object from the sender queue to be put 
> into the batch for dispatch.
> 2. peekAhead gets the current key for which it is able to get an object back 
> by calling optimalGet.
> 3. It puts that current key into the peekedIds list and returns the object 
> back to peek.
> 4. peek then tries to make a heapcopy and only if it is successful it will 
> put the object into the dispatch batch.
> 5. Here is the issue, now conflation may kick in before peek is able to do 
> heapcopy and that object is removed from the queue and hence the object will 
> not be placed in the dispatch batch. However the the key representing that 
> object in the queue still exist in the PeekedIDs list.
> 6. So now there is an extra key in the peekedIds list.
> 7. Batch is dispatched and now the ack is received, so things need to be 
> removed from the sender queue.
> 8. remove() is called in a for loop with the count set to size of dispatch 
> batch for which ack was received. 
> 9. The remove() operation picks up keys from peekedIds list sequentially and 
> calls remove(key) on the queue.
> 10. Now since the batch size is smaller than the Ids peeked, element will 
> still linger behind after all remove calls are completed.
> 11. so tests which wait for queues to be empty will hang forever.
> 
> Solution:
> Since peek() doesnt have access to the current key but just the object and 
> hence cannot remove it from peekedIDs list, we moved the check for heapcopy 
> into peekAhead. 
> 
> So now only if a successful heap copy is made then only the key will be 
> placed into the peekedIDs list.
> 
> 
> Diffs
> -
> 
>   
> geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java
>  a8bb72d 
> 
> Diff: https://reviews.apache.org/r/52176/diff/
> 
> 
> Testing
> ---
> 
> precheck
> 
> 
> Thanks,
> 
> nabarun nag
> 
>



Re: Review Request 52176: GEODE-1926: Modification of peedAhead() function check if heapCopy is successful before adding the key to peekedIds

2016-09-23 Thread nabarun nag


> On Sept. 22, 2016, 11:02 p.m., Dan Smith wrote:
> > geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java,
> >  line 796
> > 
> >
> > Minor nit - can't you just write this code like this?
> > 
> > Long currentKey = getCurrentKey();
> > if(currentKey == null) {
> > 
> > Seems less confusing that way.
> 
> nabarun nag wrote:
> I completely agree with this. I was following the way other if 
> comparisons were done in the peekAhead. ->
> 
> while (before(currentKey, getTailKey())
> && (object = getObjectInSerialSenderQueue(currentKey)) == null) {
> 
> I will fix such comparisons in other places too
> 
> Dan Smith wrote:
> Your example above is in a while loop, so it actually is fetching the 
> object on each iteration.

aah!! I assumed the amount of confusion would be the same for a comparison 
within an if condition and a while loop. Hence I changed it to
object = getObjectInSerialSenderQueue(currentKey);
while (before(currentKey, getTailKey())
    && (object == null)) {
  currentKey = inc(currentKey);
  object = getObjectInSerialSenderQueue(currentKey);
}

I assumed keeping both the incrementing of the current key and the fetching of 
the object within the loop was a good idea.

Do you think it's not a good idea?


- nabarun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/52176/#review150094
---


On Sept. 23, 2016, 9:58 p.m., nabarun nag wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/52176/
> ---
> 
> (Updated Sept. 23, 2016, 9:58 p.m.)
> 
> 
> Review request for geode, Barry Oglesby, Eric Shu, Jason Huynh, Dan Smith, 
> and xiaojian zhou.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Before my changes:
> 1. peek calls peekAhead to get the object from the sender queue to be put 
> into the batch for dispatch.
> 2. peekAhead gets the current key for which it is able to get an object back 
> by calling optimalGet.
> 3. It puts that current key into the peekedIds list and returns the object 
> back to peek.
> 4. peek then tries to make a heapcopy and only if it is successful it will 
> put the object into the dispatch batch.
> 5. Here is the issue, now conflation may kick in before peek is able to do 
> heapcopy and that object is removed from the queue and hence the object will 
> not be placed in the dispatch batch. However the the key representing that 
> object in the queue still exist in the PeekedIDs list.
> 6. So now there is an extra key in the peekedIds list.
> 7. Batch is dispatched and now the ack is received, so things need to be 
> removed from the sender queue.
> 8. remove() is called in a for loop with the count set to size of dispatch 
> batch for which ack was received. 
> 9. The remove() operation picks up keys from peekedIds list sequentially and 
> calls remove(key) on the queue.
> 10. Now since the batch size is smaller than the Ids peeked, element will 
> still linger behind after all remove calls are completed.
> 11. so tests which wait for queues to be empty will hang forever.
> 
> Solution:
> Since peek() doesnt have access to the current key but just the object and 
> hence cannot remove it from peekedIDs list, we moved the check for heapcopy 
> into peekAhead. 
> 
> So now only if a successful heap copy is made then only the key will be 
> placed into the peekedIDs list.
> 
> 
> Diffs
> -
> 
>   
> geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java
>  a8bb72d 
> 
> Diff: https://reviews.apache.org/r/52176/diff/
> 
> 
> Testing
> ---
> 
> precheck
> 
> 
> Thanks,
> 
> nabarun nag
> 
>



Re: Review Request 52176: GEODE-1926: Modification of peedAhead() function check if heapCopy is successful before adding the key to peekedIds

2016-09-23 Thread nabarun nag

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/52176/#review150244
---



I was able to find ParallelWANConflationDUnitTest but no 
SerialWANConflationDUnitTest in Geode.
Am I missing something, or are the serial WAN conflation tests under a 
different class?

- nabarun nag


On Sept. 23, 2016, 9:58 p.m., nabarun nag wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/52176/
> ---
> 
> (Updated Sept. 23, 2016, 9:58 p.m.)
> 
> 
> Review request for geode, Barry Oglesby, Eric Shu, Jason Huynh, Dan Smith, 
> and xiaojian zhou.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Before my changes:
> 1. peek calls peekAhead to get the object from the sender queue to be put 
> into the batch for dispatch.
> 2. peekAhead gets the current key for which it is able to get an object back 
> by calling optimalGet.
> 3. It puts that current key into the peekedIds list and returns the object 
> back to peek.
> 4. peek then tries to make a heapcopy and only if it is successful it will 
> put the object into the dispatch batch.
> 5. Here is the issue, now conflation may kick in before peek is able to do 
> heapcopy and that object is removed from the queue and hence the object will 
> not be placed in the dispatch batch. However the the key representing that 
> object in the queue still exist in the PeekedIDs list.
> 6. So now there is an extra key in the peekedIds list.
> 7. Batch is dispatched and now the ack is received, so things need to be 
> removed from the sender queue.
> 8. remove() is called in a for loop with the count set to size of dispatch 
> batch for which ack was received. 
> 9. The remove() operation picks up keys from peekedIds list sequentially and 
> calls remove(key) on the queue.
> 10. Now since the batch size is smaller than the Ids peeked, element will 
> still linger behind after all remove calls are completed.
> 11. so tests which wait for queues to be empty will hang forever.
> 
> Solution:
> Since peek() doesnt have access to the current key but just the object and 
> hence cannot remove it from peekedIDs list, we moved the check for heapcopy 
> into peekAhead. 
> 
> So now only if a successful heap copy is made then only the key will be 
> placed into the peekedIDs list.
> 
> 
> Diffs
> -
> 
>   
> geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java
>  a8bb72d 
> 
> Diff: https://reviews.apache.org/r/52176/diff/
> 
> 
> Testing
> ---
> 
> precheck
> 
> 
> Thanks,
> 
> nabarun nag
> 
>



Re: Review Request 52176: GEODE-1926: Modification of peedAhead() function check if heapCopy is successful before adding the key to peekedIds

2016-09-23 Thread nabarun nag

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/52176/
---

(Updated Sept. 23, 2016, 9:58 p.m.)


Review request for geode, Barry Oglesby, Eric Shu, Jason Huynh, Dan Smith, and 
xiaojian zhou.


Repository: geode


Description
---

Before my changes:
1. peek calls peekAhead to get the object from the sender queue to be put into 
the batch for dispatch.
2. peekAhead gets the current key for which it is able to get an object back by 
calling optimalGet.
3. It puts that current key into the peekedIds list and returns the object back 
to peek.
4. peek then tries to make a heap copy, and only if that is successful will it 
put the object into the dispatch batch.
5. Here is the issue: conflation may kick in before peek is able to make the 
heap copy, so the object is removed from the queue and will not be placed in 
the dispatch batch. However, the key representing that object in the queue 
still exists in the peekedIds list.
6. So now there is an extra key in the peekedIds list.
7. The batch is dispatched and the ack is received, so things need to be 
removed from the sender queue.
8. remove() is called in a for loop with the count set to the size of the 
dispatched batch for which the ack was received.
9. The remove() operation picks up keys from the peekedIds list sequentially 
and calls remove(key) on the queue.
10. Now, since the batch size is smaller than the number of IDs peeked, an 
element will still linger behind after all remove calls are completed.
11. So tests which wait for queues to be empty will hang forever.

Solution:
Since peek() doesn't have access to the current key, only the object, it cannot 
remove the key from the peekedIds list; so we moved the check for the heap copy 
into peekAhead.

Now the key is placed into the peekedIds list only if a successful heap copy 
was made.
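
The solution above can be modeled with a toy queue (all names hypothetical; the real SerialGatewaySenderQueue is far more involved): recording the key in peekedIds only after the copy succeeds keeps the peeked-key list and the dispatched batch the same length even when conflation removes an object mid-peek.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of moving the heap-copy check into peekAhead.
class PeekAheadSketch {
    final Map<Long, String> queue = new HashMap<>();
    final List<Long> peekedIds = new ArrayList<>();

    /** Returns the object for the key, or null if conflation removed it. */
    String peekAhead(long key) {
        String object = queue.get(key);
        if (object == null) {
            return null;              // conflated away: do NOT record the key
        }
        String heapCopy = object;     // stand-in for makeHeapCopyIfOffHeap()
        peekedIds.add(key);           // recorded only after a successful copy
        return heapCopy;
    }

    public static void main(String[] args) {
        PeekAheadSketch q = new PeekAheadSketch();
        q.queue.put(1L, "event1");    // key 2 was conflated away before peek
        q.peekAhead(1L);
        q.peekAhead(2L);
        // peekedIds now matches the batch size: only key 1 was recorded,
        // so the post-ack remove loop drains exactly what was dispatched.
        System.out.println(q.peekedIds);
    }
}
```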


Diffs (updated)
-

  
geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java
 a8bb72d 

Diff: https://reviews.apache.org/r/52176/diff/


Testing
---

precheck


Thanks,

nabarun nag



Re: Review Request 52176: GEODE-1926: Modification of peedAhead() function check if heapCopy is successful before adding the key to peekedIds

2016-09-23 Thread Dan Smith


> On Sept. 22, 2016, 11:02 p.m., Dan Smith wrote:
> > geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java,
> >  line 796
> > 
> >
> > Minor nit - can't you just write this code like this?
> > 
> > Long currentKey = getCurrentKey();
> > if(currentKey == null) {
> > 
> > Seems less confusing that way.
> 
> nabarun nag wrote:
> I completely agree with this. I was following the way other if 
> comparisons were done in the peekAhead. ->
> 
> while (before(currentKey, getTailKey())
> && (object = getObjectInSerialSenderQueue(currentKey)) == null) {
> 
> I will fix such comparisons in other places too

Your example above is in a while loop, so it actually is fetching the object on 
each iteration.


- Dan


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/52176/#review150094
---


On Sept. 22, 2016, 9:46 p.m., nabarun nag wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/52176/
> ---
> 
> (Updated Sept. 22, 2016, 9:46 p.m.)
> 
> 
> Review request for geode, Barry Oglesby, Eric Shu, Jason Huynh, Dan Smith, 
> and xiaojian zhou.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Before my changes:
> 1. peek calls peekAhead to get the object from the sender queue to be put 
> into the batch for dispatch.
> 2. peekAhead gets the current key for which it is able to get an object back 
> by calling optimalGet.
> 3. It puts that current key into the peekedIds list and returns the object 
> back to peek.
> 4. peek then tries to make a heapcopy and only if it is successful it will 
> put the object into the dispatch batch.
> 5. Here is the issue, now conflation may kick in before peek is able to do 
> heapcopy and that object is removed from the queue and hence the object will 
> not be placed in the dispatch batch. However the the key representing that 
> object in the queue still exist in the PeekedIDs list.
> 6. So now there is an extra key in the peekedIds list.
> 7. Batch is dispatched and now the ack is received, so things need to be 
> removed from the sender queue.
> 8. remove() is called in a for loop with the count set to size of dispatch 
> batch for which ack was received. 
> 9. The remove() operation picks up keys from peekedIds list sequentially and 
> calls remove(key) on the queue.
> 10. Now since the batch size is smaller than the Ids peeked, element will 
> still linger behind after all remove calls are completed.
> 11. so tests which wait for queues to be empty will hang forever.
> 
> Solution:
> Since peek() doesnt have access to the current key but just the object and 
> hence cannot remove it from peekedIDs list, we moved the check for heapcopy 
> into peekAhead. 
> 
> So now only if a successful heap copy is made then only the key will be 
> placed into the peekedIDs list.
> 
> 
> Diffs
> -
> 
>   
> geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java
>  a8bb72d 
> 
> Diff: https://reviews.apache.org/r/52176/diff/
> 
> 
> Testing
> ---
> 
> precheck
> 
> 
> Thanks,
> 
> nabarun nag
> 
>



Re: Review Request 52176: GEODE-1926: Modification of peedAhead() function check if heapCopy is successful before adding the key to peekedIds

2016-09-23 Thread nabarun nag


> On Sept. 23, 2016, 8:29 p.m., Jason Huynh wrote:
> > geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java,
> >  line 796
> > 
> >
> > +1 on the less confusing part

Sorry about that; I copied the previous coding style:
(object = optimalGet(currentKey)) == null)


- nabarun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/52176/#review150201
---


On Sept. 22, 2016, 9:46 p.m., nabarun nag wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/52176/
> ---
> 
> (Updated Sept. 22, 2016, 9:46 p.m.)
> 
> 
> Review request for geode, Barry Oglesby, Eric Shu, Jason Huynh, Dan Smith, 
> and xiaojian zhou.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Before my changes:
> 1. peek calls peekAhead to get the object from the sender queue to be put 
> into the batch for dispatch.
> 2. peekAhead gets the current key for which it is able to get an object back 
> by calling optimalGet.
> 3. It puts that current key into the peekedIds list and returns the object 
> back to peek.
> 4. peek then tries to make a heap copy, and only if that succeeds does it 
> put the object into the dispatch batch.
> 5. Here is the issue: conflation may kick in before peek is able to do the 
> heap copy, removing that object from the queue, so the object will not be 
> placed in the dispatch batch. However, the key representing that object 
> still exists in the peekedIds list.
> 6. So now there is an extra key in the peekedIds list.
> 7. Batch is dispatched and now the ack is received, so things need to be 
> removed from the sender queue.
> 8. remove() is called in a for loop with the count set to size of dispatch 
> batch for which ack was received. 
> 9. The remove() operation picks up keys from peekedIds list sequentially and 
> calls remove(key) on the queue.
> 10. Now, since the batch size is smaller than the number of IDs peeked, 
> elements will still linger behind after all remove calls are completed.
> 11. So tests which wait for queues to be empty will hang forever.
> 
> Solution:
> Since peek() doesn't have access to the current key, only the object, and 
> hence cannot remove it from the peekedIds list, we moved the heap-copy 
> check into peekAhead.
> 
> So now the key will be placed into the peekedIds list only if the heap 
> copy succeeds.
> 
> 
> Diffs
> -
> 
>   
> geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java
>  a8bb72d 
> 
> Diff: https://reviews.apache.org/r/52176/diff/
> 
> 
> Testing
> ---
> 
> precheck
> 
> 
> Thanks,
> 
> nabarun nag
> 
>



Re: Review Request 52176: GEODE-1926: Modification of peekAhead() function to check if heapCopy is successful before adding the key to peekedIds

2016-09-23 Thread nabarun nag


> On Sept. 22, 2016, 11:02 p.m., Dan Smith wrote:
> > Looks like a good fix for a really complicated issue!
> > 
> > I had a couple of minor nitpicks, below. Also, is there a reason why you 
> > did not remove the code to 
> > ((GatewaySenderEventImpl)object).makeHeapCopyIfOffHeap() in 
> > SerialGatewaySenderQueue.peek? It seems like since you moved the copy to 
> > peekAhead that code is not doing anything anymore?
> > 
> > Is there any way to write at least a unit test for this code?

This was updated in a newer patch; the review was done against an older 
patch. Sorry about that.


I assumed peekAhead already had some JUnit tests written for it, hence I was 
confident after precheck and the other WAN tests passed. I will write a few 
JUnit tests if they are not present.


> On Sept. 22, 2016, 11:02 p.m., Dan Smith wrote:
> > geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java,
> >  line 796
> > 
> >
> > Minor nit - can't you just write this code like this?
> > 
> > Long currentKey = getCurrentKey();
> > if(currentKey == null) {
> > 
> > Seems less confusing that way.

I completely agree with this. I was following the way other if comparisons 
were done in peekAhead:

while (before(currentKey, getTailKey())
&& (object = getObjectInSerialSenderQueue(currentKey)) == null) {

I will fix such comparisons in other places too
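The two styles are behaviorally identical; the nitpick is purely about readability. A minimal sketch of the contrast, where `getCurrentKey()` is a hypothetical stub rather than the real queue method:

```java
// Contrast of the two condition styles discussed above.
public class ConditionStyle {
    static Long getCurrentKey() { return null; } // stub: pretend no key is available

    // Compact style under review: assignment buried inside the condition.
    static boolean noKeyCompact() {
        Long k;
        return (k = getCurrentKey()) == null;
    }

    // Suggested style: assign first, then test. Easier to read.
    static boolean noKeyClear() {
        Long currentKey = getCurrentKey();
        return currentKey == null;
    }

    public static void main(String[] args) {
        System.out.println("compact says no key: " + noKeyCompact()); // true
        System.out.println("clear says no key: " + noKeyClear());     // true
    }
}
```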


> On Sept. 22, 2016, 11:02 p.m., Dan Smith wrote:
> > geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java,
> >  line 778
> > 
> >
> > valueOf is unnecessary here.

This was updated in a newer patch; the review was done against an older 
patch. Sorry about that.


- nabarun


---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/52176/#review150094
---


On Sept. 22, 2016, 9:46 p.m., nabarun nag wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/52176/
> ---
> 
> (Updated Sept. 22, 2016, 9:46 p.m.)
> 
> 
> Review request for geode, Barry Oglesby, Eric Shu, Jason Huynh, Dan Smith, 
> and xiaojian zhou.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Before my changes:
> 1. peek calls peekAhead to get the object from the sender queue to be put 
> into the batch for dispatch.
> 2. peekAhead gets the current key for which it is able to get an object back 
> by calling optimalGet.
> 3. It puts that current key into the peekedIds list and returns the object 
> back to peek.
> 4. peek then tries to make a heap copy, and only if that succeeds does it 
> put the object into the dispatch batch.
> 5. Here is the issue: conflation may kick in before peek is able to do the 
> heap copy, removing that object from the queue, so the object will not be 
> placed in the dispatch batch. However, the key representing that object 
> still exists in the peekedIds list.
> 6. So now there is an extra key in the peekedIds list.
> 7. Batch is dispatched and now the ack is received, so things need to be 
> removed from the sender queue.
> 8. remove() is called in a for loop with the count set to size of dispatch 
> batch for which ack was received. 
> 9. The remove() operation picks up keys from peekedIds list sequentially and 
> calls remove(key) on the queue.
> 10. Now, since the batch size is smaller than the number of IDs peeked, 
> elements will still linger behind after all remove calls are completed.
> 11. So tests which wait for queues to be empty will hang forever.
> 
> Solution:
> Since peek() doesn't have access to the current key, only the object, and 
> hence cannot remove it from the peekedIds list, we moved the heap-copy 
> check into peekAhead.
> 
> So now the key will be placed into the peekedIds list only if the heap 
> copy succeeds.
> 
> 
> Diffs
> -
> 
>   
> geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java
>  a8bb72d 
> 
> Diff: https://reviews.apache.org/r/52176/diff/
> 
> 
> Testing
> ---
> 
> precheck
> 
> 
> Thanks,
> 
> nabarun nag
> 
>



Re: support for old clients

2016-09-23 Thread Anthony Baker
What about geode-client-support? Or geode-client-compatibility?

I think that expresses intent to support backwards compatibility for prior 
client versions according to the policies in the wiki [1].

Anthony

[1] 
https://cwiki.apache.org/confluence/display/GEODE/Managing+Backward+Compatibility
 



> On Sep 23, 2016, at 1:02 PM, Bruce Schuchardt  wrote:
> 
> I'm creating a new subproject for supporting old GemFire clients and WAN 
> sites and need a name for it.  All of the current ones are prefixed with 
> "geode-".
> 
> How about geode-gemfire-support?
> 



signature.asc
Description: Message signed with OpenPGP using GPGMail


Re: Review Request 52176: GEODE-1926: Modification of peekAhead() function to check if heapCopy is successful before adding the key to peekedIds

2016-09-23 Thread Jason Huynh

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/52176/#review150201
---



Not sure what an appropriate test would be for this; maybe create a dunit 
with test hooks, or perhaps target the peekAhead method directly and 
override the optimalGet and makeHeapCopy methods to force it down the 
problematic path... It would be nice to get a test if possible.


geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java
 (line 796)


+1 on the less confusing part


- Jason Huynh


On Sept. 22, 2016, 9:46 p.m., nabarun nag wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/52176/
> ---
> 
> (Updated Sept. 22, 2016, 9:46 p.m.)
> 
> 
> Review request for geode, Barry Oglesby, Eric Shu, Jason Huynh, Dan Smith, 
> and xiaojian zhou.
> 
> 
> Repository: geode
> 
> 
> Description
> ---
> 
> Before my changes:
> 1. peek calls peekAhead to get the object from the sender queue to be put 
> into the batch for dispatch.
> 2. peekAhead gets the current key for which it is able to get an object back 
> by calling optimalGet.
> 3. It puts that current key into the peekedIds list and returns the object 
> back to peek.
> 4. peek then tries to make a heap copy, and only if that succeeds does it 
> put the object into the dispatch batch.
> 5. Here is the issue: conflation may kick in before peek is able to do the 
> heap copy, removing that object from the queue, so the object will not be 
> placed in the dispatch batch. However, the key representing that object 
> still exists in the peekedIds list.
> 6. So now there is an extra key in the peekedIds list.
> 7. Batch is dispatched and now the ack is received, so things need to be 
> removed from the sender queue.
> 8. remove() is called in a for loop with the count set to size of dispatch 
> batch for which ack was received. 
> 9. The remove() operation picks up keys from peekedIds list sequentially and 
> calls remove(key) on the queue.
> 10. Now, since the batch size is smaller than the number of IDs peeked, 
> elements will still linger behind after all remove calls are completed.
> 11. So tests which wait for queues to be empty will hang forever.
> 
> Solution:
> Since peek() doesn't have access to the current key, only the object, and 
> hence cannot remove it from the peekedIds list, we moved the heap-copy 
> check into peekAhead.
> 
> So now the key will be placed into the peekedIds list only if the heap 
> copy succeeds.
> 
> 
> Diffs
> -
> 
>   
> geode-core/src/main/java/org/apache/geode/internal/cache/wan/serial/SerialGatewaySenderQueue.java
>  a8bb72d 
> 
> Diff: https://reviews.apache.org/r/52176/diff/
> 
> 
> Testing
> ---
> 
> precheck
> 
> 
> Thanks,
> 
> nabarun nag
> 
>



support for old clients

2016-09-23 Thread Bruce Schuchardt
I'm creating a new subproject for supporting old GemFire clients and WAN 
sites and need a name for it.  All of the current ones are prefixed with 
"geode-".


How about geode-gemfire-support?



Re: security properties in the cluster config

2016-09-23 Thread Bruce Schuchardt
SSL settings and the new UDP dhAlgo setting can't be in the cluster 
config.  The cluster config is received over TCP/IP so you would have to 
use unsecured information to retrieve the settings, and you'd have to do 
it before the cache is created.


Does the security-manager have any role to play prior to the cache being 
created?   For instance, is it involved in authenticating the receipt of 
a new membership view or a join request in GMSAuthenticator?  If so you 
can't store it in the cluster config, which is only retrieved later on 
during cache creation.



On 9/23/2016 at 11:57 AM, Michael Stolz wrote:

I am in favor of keeping the SSL thoughts separate from the RBAC thoughts,
but I don't see any reason they couldn't share the same repository.

That said though, does putting it all into the Cluster Configuration
Manager (CCM) make it so that you can only have security if you are using
CCM for configuration?


--
Mike Stolz
Principal Engineer, GemFire Product Manager
Mobile: 631-835-4771

On Fri, Sep 23, 2016 at 1:48 PM, Jinmei Liao  wrote:


Hi, All,

I am working on this ticket:
https://issues.apache.org/jira/browse/GEODE-1659. Basically, currently, any
member (locator or server) needs to specify its own security-manager in
order to protect its data, which could lead to misconfiguration and data
leaks. So we would like to put it into the cluster configuration, so any
member that wants to join the cluster will need to apply the same security
measures.

Now here is my question: should we put only the "security-manager" and
"security-post-processor" in the cluster config, or all "security-*"
settings, which include the SSL settings as well?

Thanks!

--
Cheers

Jinmei





Re: security properties in the cluster config

2016-09-23 Thread Michael Stolz
I am in favor of keeping the SSL thoughts separate from the RBAC thoughts,
but I don't see any reason they couldn't share the same repository.

That said though, does putting it all into the Cluster Configuration
Manager (CCM) make it so that you can only have security if you are using
CCM for configuration?


--
Mike Stolz
Principal Engineer, GemFire Product Manager
Mobile: 631-835-4771

On Fri, Sep 23, 2016 at 1:48 PM, Jinmei Liao  wrote:

> Hi, All,
>
> I am working on this ticket:
> https://issues.apache.org/jira/browse/GEODE-1659. Basically, currently, any
> member (locator or server) needs to specify its own security-manager in
> order to protect its data, which could lead to misconfiguration and data
> leaks. So we would like to put it into the cluster configuration, so any
> member that wants to join the cluster will need to apply the same security
> measures.
>
> Now here is my question: should we put only the "security-manager" and
> "security-post-processor" in the cluster config, or all "security-*"
> settings, which include the SSL settings as well?
>
> Thanks!
>
> --
> Cheers
>
> Jinmei
>


security properties in the cluster config

2016-09-23 Thread Jinmei Liao
Hi, All,

I am working on this ticket:
https://issues.apache.org/jira/browse/GEODE-1659. Basically, currently, any
member (locator or server) needs to specify its own security-manager in
order to protect its data, which could lead to misconfiguration and data
leaks. So we would like to put it into the cluster configuration, so any
member that wants to join the cluster will need to apply the same security
measures.

Now here is my question: should we put only the "security-manager" and
"security-post-processor" in the cluster config, or all "security-*"
settings, which include the SSL settings as well?

Thanks!

-- 
Cheers

Jinmei
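For context, the kind of per-member setting under discussion lives in each member's gemfire.properties today; the class names below are hypothetical placeholders, not real implementations:

```properties
# Today each member must declare these individually; the proposal is to
# distribute them through the cluster configuration instead.
security-manager=com.example.MySecurityManager
security-post-processor=com.example.MyPostProcessor
```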


Re: jvsd

2016-09-23 Thread Michael Stolz
+1. If it's not part of a release, its documentation shouldn't be either.

--
Mike Stolz
Principal Engineer, GemFire Product Manager
Mobile: 631-835-4771

On Thu, Sep 22, 2016 at 7:58 PM, Joey McAllister 
wrote:

> If it isn't on develop or planned for release, then I vote for removing it
> from the user guide altogether. I like the recommendation to keep this and
> similar (non-develop branch) information on the wiki, so the community can
> still easily access it.
>
> On Thu, Sep 22, 2016 at 4:51 PM Kirk Lund  wrote:
>
> > JVSD isn't labeled as Experimental because it isn't even on develop or
> > part of any Geode release candidate. As far as I know, it exists only on
> > an incomplete and out-of-date feature branch. Hence, my vote is to remove
> > any mention of it from the docs until it merges to develop or at a
> > minimum gets updated (rebased) from develop (or M3).
> >
> > Has anyone done any rebasing or other work on the JVSD branch that I'm
> not
> > aware of?
> >
> > -Kirk
> >
> > On Thursday, September 22, 2016, Swapnil Bawaskar 
> > wrote:
> >
> > > I would vote for including it as an "experimental" feature.
> > >
> > > On Thu, Sep 22, 2016 at 4:37 PM, Dave Barnes  > > > wrote:
> > >
> > > > Process for handling Experimental docs is still being hammered out.
> > > > Common element among working scenarios is isolation from the body of
> > > > the User Guide proper, so I'll remove the JVSD component from the
> > > > User Guide's Tools and Modules section.
> > > > Could go on the Wiki, could go in an appendix. We'll see what emerges.
> > > > Any favorites among the readers of this thread?
> > > > Silence = "Docs group gets to pick"
> > > >
> > > > On Thu, Sep 22, 2016 at 3:37 PM, John Blum  > > > wrote:
> > > >
> > > > > Truthfully, I don't think this is any different than API features
> > that
> > > > have
> > > > > been annotated with "@Experimental" (e.g. LucenceService
> > > > >  > > > > javadoc/com/gemstone/gemfire/cache/lucene/LuceneService.html>).
> > > > > I.e. nothing is going to stop a user from trying to use a
> > > > > feature/function/tool and searching for relevant information on how
> > to
> > > > use
> > > > > it if they know it exists, either explicitly or implicitly.
> > > > >
> > > > > In fact, I would think it is advantageous if they know it does
> > > > > exist, even prior to an official release, so that feedback can be
> > > > > gathered.
> > > > >
> > > > > If it is not to be part of the "official" User Guide, perhaps a
> Wiki
> > > page
> > > > > (other than the specification
> > > > >  > > > > action?pageId=61309918=contextnavpagetreemode>
> > > > > [1])
> > > > > or better yet, a GitHub README page along with the source code if
> > users
> > > > are
> > > > > given access to build and use the tool themselves.
> > > > >
> > > > > If part of the "official" User Guide (under tools), then perhaps a
> > > > > "Experimental" label.
> > > > >
> > > > > Food for thought.
> > > > >
> > > > > -John
> > > > >
> > > > >
> > > > > [1]
> > > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > > > action?pageId=61309918=contextnavpagetreemode
> > > > >
> > > > >
> > > > > On Thu, Sep 22, 2016 at 3:19 PM, Anthony Baker  > > >
> > > > wrote:
> > > > >
> > > > > > I think that providing documentation for jvsd before it is
> > > > > > included in the source and binary release distributions will only
> > > > > > confuse users. +1 for removing.
> > > > > >
> > > > > > Anthony
> > > > > >
> > > > > > > On Sep 22, 2016, at 2:39 PM, Dave Barnes  > > > wrote:
> > > > > > >
> > > > > > > JVSD has appeared in the Geode user manual since M2. See
> > > > > > > http://geode.docs.pivotal.io/docs/tools_modules/jvsd.html.
> > > > > > > Kirk, are you recommending that we remove this?
> > > > > > >
> > > > > > > On Thu, Sep 22, 2016 at 10:57 AM, Kirk Lund  > > >
> > > > wrote:
> > > > > > >
> > > > > > >> I would recommend not mentioning jVSD at all in the Geode 1.0
> > > > > > >> docs. Right now it's just a Jira ticket and feature branch. I
> > > > > > >> think the docs should only cover what's in Geode 1.0.
> > > > > > >>
> > > > > > >> If there's some doc or wiki page about proposed future
> > > > > > >> features or features currently looking for
> > > > > > >> contributors/developers, then that would probably be an
> > > > > > >> appropriate place to mention jVSD.
> > > > > > >>
> > > > > > >> Thanks,
> > > > > > >> Kirk
> > > > > > >>
> > > > > > >> On Thursday, September 22, 2016, Joey McAllister <
> > > > > > jmcallis...@pivotal.io >
> 

Re: jvsd

2016-09-23 Thread Joey McAllister
+1

Thanks, Dave!

On Fri, Sep 23, 2016 at 9:37 AM Dave Barnes  wrote:

> Good observations, John.
> A clarification: JVSD is Geode-only. The corresponding GemFire tool, VSD,
> enjoys a more 'mainstream' status, at least for now, so it still appears in
> the GF user guide.
>
>
> On Fri, Sep 23, 2016 at 9:34 AM, John Blum  wrote:
>
> > +1
> >
> > Even a bit of documentation (which seems scattered about... in the
> > specification, on VMW sites/properties, etc) would go a long way in
> helping
> > users realize the benefit of the tool and provide feedback, maybe even
> > contribute some PRs.  Having metrics on GemFire in realtime is hugely
> > invaluable (even just explaining all the metrics and what they mean, how
> > they are visualized should be sufficient).
> >
> > On Fri, Sep 23, 2016 at 9:32 AM, Dave Barnes  wrote:
> >
> > > Here's a proposal based on what I've seen in this thread:
> > > 1. We remove JVSD documentation from the user manual.
> > > 2. We save what's been written so far (mostly just build instructions)
> > as a
> > > README on the feature branch.
> > >
> > > On Thu, Sep 22, 2016 at 7:40 PM, Anthony Baker 
> > wrote:
> > >
> > > > Corporate org structures don’t get a voice on ASF mailing lists, only
> > > > community participants :-)
> > > >
> > > > Anthony
> > > >
> > > >
> > > > Begin forwarded message:
> > > >
> > > > *From: *Dave Barnes 
> > > > *Subject: **Re: jvsd*
> > > > *Date: *September 22, 2016 at 4:37:19 PM PDT
> > > > *To: *dev@geode.incubator.apache.org
> > > > *Reply-To: *dev@geode.incubator.apache.org
> > > >
> > > > Silence = "Docs group gets to pick"
> > > >
> > > >
> > > >
> > >
> >
> >
> >
> > --
> > -John
> > 503-504-8657
> > john.blum10101 (skype)
> >
>


Re: jvsd

2016-09-23 Thread Dave Barnes
Good observations, John.
A clarification: JVSD is Geode-only. The corresponding GemFire tool, VSD,
enjoys a more 'mainstream' status, at least for now, so it still appears in
the GF user guide.


On Fri, Sep 23, 2016 at 9:34 AM, John Blum  wrote:

> +1
>
> Even a bit of documentation (which seems scattered about... in the
> specification, on VMW sites/properties, etc) would go a long way in helping
> users realize the benefit of the tool and provide feedback, maybe even
> contribute some PRs.  Having metrics on GemFire in realtime is hugely
> invaluable (even just explaining all the metrics and what they mean, how
> they are visualized should be sufficient).
>
> On Fri, Sep 23, 2016 at 9:32 AM, Dave Barnes  wrote:
>
> > Here's a proposal based on what I've seen in this thread:
> > 1. We remove JVSD documentation from the user manual.
> > 2. We save what's been written so far (mostly just build instructions)
> as a
> > README on the feature branch.
> >
> > On Thu, Sep 22, 2016 at 7:40 PM, Anthony Baker 
> wrote:
> >
> > > Corporate org structures don’t get a voice on ASF mailing lists, only
> > > community participants :-)
> > >
> > > Anthony
> > >
> > >
> > > Begin forwarded message:
> > >
> > > *From: *Dave Barnes 
> > > *Subject: **Re: jvsd*
> > > *Date: *September 22, 2016 at 4:37:19 PM PDT
> > > *To: *dev@geode.incubator.apache.org
> > > *Reply-To: *dev@geode.incubator.apache.org
> > >
> > > Silence = "Docs group gets to pick"
> > >
> > >
> > >
> >
>
>
>
> --
> -John
> 503-504-8657
> john.blum10101 (skype)
>


Re: jvsd

2016-09-23 Thread John Blum
+1

Even a bit of documentation (which seems scattered about... in the
specification, on VMW sites/properties, etc) would go a long way in helping
users realize the benefit of the tool and provide feedback, maybe even
contribute some PRs.  Having metrics on GemFire in realtime is hugely
invaluable (even just explaining all the metrics and what they mean, how
they are visualized should be sufficient).

On Fri, Sep 23, 2016 at 9:32 AM, Dave Barnes  wrote:

> Here's a proposal based on what I've seen in this thread:
> 1. We remove JVSD documentation from the user manual.
> 2. We save what's been written so far (mostly just build instructions) as a
> README on the feature branch.
>
> On Thu, Sep 22, 2016 at 7:40 PM, Anthony Baker  wrote:
>
> > Corporate org structures don’t get a voice on ASF mailing lists, only
> > community participants :-)
> >
> > Anthony
> >
> >
> > Begin forwarded message:
> >
> > *From: *Dave Barnes 
> > *Subject: **Re: jvsd*
> > *Date: *September 22, 2016 at 4:37:19 PM PDT
> > *To: *dev@geode.incubator.apache.org
> > *Reply-To: *dev@geode.incubator.apache.org
> >
> > Silence = "Docs group gets to pick"
> >
> >
> >
>



-- 
-John
503-504-8657
john.blum10101 (skype)


Re: jvsd

2016-09-23 Thread Dave Barnes
Here's a proposal based on what I've seen in this thread:
1. We remove JVSD documentation from the user manual.
2. We save what's been written so far (mostly just build instructions) as a
README on the feature branch.

On Thu, Sep 22, 2016 at 7:40 PM, Anthony Baker  wrote:

> Corporate org structures don’t get a voice on ASF mailing lists, only
> community participants :-)
>
> Anthony
>
>
> Begin forwarded message:
>
> *From: *Dave Barnes 
> *Subject: **Re: jvsd*
> *Date: *September 22, 2016 at 4:37:19 PM PDT
> *To: *dev@geode.incubator.apache.org
> *Reply-To: *dev@geode.incubator.apache.org
>
> Silence = "Docs group gets to pick"
>
>
>


Build failed in Jenkins: Geode-spark-connector #78

2016-09-23 Thread Apache Jenkins Server
See 

Changes:

[hkhamesra] GEODE-37 In spark connector we call TcpClient static method to get 
the

[klund] GEODE-1906: fix misspelling of Successfully

[upthewaterspout] GEODE-1915: Prevent deadlock registering instantiators with 
gateways

--
[...truncated 1883 lines...]
16/09/23 16:11:05 INFO HttpFileServer: HTTP File server directory is 
/tmp/spark-f13dac55-087f-4379-aeed-616fbdc7ffac/httpd-02c1fab9-faa0-47f4-b0f3-fd44383eeeb3
16/09/23 16:11:05 INFO HttpServer: Starting HTTP Server
16/09/23 16:11:05 INFO Utils: Successfully started service 'HTTP file server' 
on port 40135.
16/09/23 16:11:05 INFO SparkEnv: Registering OutputCommitCoordinator
16/09/23 16:11:10 WARN Utils: Service 'SparkUI' could not bind on port 4040. 
Attempting port 4041.
16/09/23 16:11:15 INFO Utils: Successfully started service 'SparkUI' on port 
4041.
16/09/23 16:11:15 INFO SparkUI: Started SparkUI at http://localhost:4041
16/09/23 16:11:15 INFO Executor: Starting executor ID  on host localhost
16/09/23 16:11:15 INFO AkkaUtils: Connecting to HeartbeatReceiver: 
akka.tcp://sparkDriver@localhost:54872/user/HeartbeatReceiver
16/09/23 16:11:16 INFO NettyBlockTransferService: Server created on 41182
16/09/23 16:11:16 INFO BlockManagerMaster: Trying to register BlockManager
16/09/23 16:11:16 INFO BlockManagerMasterActor: Registering block manager 
localhost:41182 with 2.8 GB RAM, BlockManagerId(, localhost, 41182)
16/09/23 16:11:16 INFO BlockManagerMaster: Registered BlockManager
=== GeodeRunner: stop server 1.
=== GeodeRunner: stop server 2.
[info] RetrieveRegionIntegrationTest:
..

=== GeodeRunner: stop locator
...
Successfully stop Geode locator at port 27662.
=== GeodeRunner: starting locator on port 23825
=== GeodeRunner: waiting for locator on port 23825
=== GeodeRunner: done waiting for locator on port 23825
=== GeodeRunner: starting server1 with clientPort 28993
=== GeodeRunner: starting server2 with clientPort 26318
=== GeodeRunner: starting server3 with clientPort 29777
=== GeodeRunner: starting server4 with clientPort 22946

Locator in 
/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator
 on hemera.apache.org[23825] as locator is currently online.
Process ID: 1860
Uptime: 4 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: 
/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/locator/locator.log
JVM Arguments: -Dgemfire.enable-cluster-configuration=true 
-Dgemfire.load-cluster-configuration-from-dir=false 
-Dgemfire.jmx-manager-http-port=29684 
-Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true 
-Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: 
/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

Successfully connected to: JMX Manager [host=hemera.apache.org, port=1099]

Cluster configuration service is up and running.


Server in 
/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server4
 on hemera.apache.org[22946] as server4 is currently online.
Process ID: 2204
Uptime: 8 seconds
GemFire Version: 1.0.0-incubating-SNAPSHOT
Java Version: 1.8.0_66
Log File: 
/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server4/server4.log
JVM Arguments: -Dgemfire.locators=localhost[23825] 
-Dgemfire.use-cluster-configuration=true -Dgemfire.bind-address=localhost 
-Dgemfire.cache-xml-file=/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/src/it/resources/test-retrieve-regions.xml
 -Dgemfire.http-service-port=8080 -Dgemfire.start-dev-rest-api=false 
-XX:OnOutOfMemoryError=kill -KILL %p 
-Dgemfire.launcher.registerSignalHandlers=true -Djava.awt.headless=true 
-Dsun.rmi.dgc.server.gcInterval=9223372036854775806
Class-Path: 
/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-core-1.0.0-incubating-SNAPSHOT.jar:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/./target/scala-2.10/it-classes:/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-assembly/build/install/apache-geode/lib/geode-dependencies.jar

..
Server in 
/x1/jenkins/jenkins-slave/workspace/Geode-spark-connector/geode-spark-connector/geode-spark-connector/target/testgeode/server1
 on hemera.apache.org[28993] as server1 is currently online.
Process