RE: Specifying location of persistent storage location

2017-09-04 Thread Raymond Wilson
Thanks.



I get the utility of specifying the network address to bind to; I'm not
convinced using that to derive the name of the internal data store is a
good idea!



For instance, what if you have to move a persistent data store to a
different server? Or are you saying everybody sets localhost or 127.0.0.1
to ensure the folder name is always essentially the local host?



*From:* Dmitriy Setrakyan [mailto:dsetrak...@apache.org]
*Sent:* Tuesday, September 5, 2017 3:09 PM
*To:* user 
*Subject:* Re: Specifying location of persistent storage location







On Mon, Sep 4, 2017 at 6:07 PM, Raymond Wilson 
wrote:

Dmitriy,



I set up an XML file based on the default one and added the two elements
you noted.



However, this has brought up an issue in that the XML file and an
IgniteConfiguration instance can't both be provided to the Ignition.Start()
call. So I changed it to use the DiscoverySpi aspect of IgniteConfiguration
and set LocalAddress to "127.0.0.1" and LocalPort to 47500.



This did change the name of the persistence folder to be “127_0_0_1_47500”
as you suggested.



While this resolves my current issue with the folder name changing, it
still seems fragile as network configuration aspects of the server Ignite
is running on have a direct impact on an internal aspect of its
configuration (ie: the location where to store the persisted data). A DHCP
IP lease renewal or an internal DNS domain change or an internal IT
department change to using IPv6 addressing (among other things) could cause
problems when a node restarts and decides the location of its data is
different.



Do you know how GridGain manages this in their enterprise deployments using
persistence?



I am glad the issue is resolved. By default, Ignite will bind to all the
local network interfaces, and if they are provided in different order, it
may create the situation you witnessed.



All enterprise users explicitly specify which network address to bind to,
just like you did. This helps avoid any kind of magic in production.










Re: Specifying location of persistent storage location

2017-09-04 Thread Dmitriy Setrakyan
On Mon, Sep 4, 2017 at 6:07 PM, Raymond Wilson 
wrote:

> Dmitriy,
>
>
>
> I set up an XML file based on the default one and added the two elements
> you noted.
>
>
>
> However, this has brought up an issue in that the XML file and an
> IgniteConfiguration instance can't both be provided to the Ignition.Start()
> call. So I changed it to use the DiscoverySpi aspect of IgniteConfiguration
> and set LocalAddress to "127.0.0.1" and LocalPort to 47500.
>
>
>
> This did change the name of the persistence folder to be “127_0_0_1_47500”
> as you suggested.
>
>
>
> While this resolves my current issue with the folder name changing, it
> still seems fragile as network configuration aspects of the server Ignite
> is running on have a direct impact on an internal aspect of its
> configuration (ie: the location where to store the persisted data). A DHCP
> IP lease renewal or an internal DNS domain change or an internal IT
> department change to using IPv6 addressing (among other things) could cause
> problems when a node restarts and decides the location of its data is
> different.
>
>
>
> Do you know how GridGain manages this in their enterprise deployments using
> persistence?
>

I am glad the issue is resolved. By default, Ignite will bind to all the
local network interfaces, and if they are provided in different order, it
may create the situation you witnessed.

All enterprise users explicitly specify which network address to bind to,
just like you did. This helps avoid any kind of magic in production.





RE: Specifying location of persistent storage location

2017-09-04 Thread Raymond Wilson
Dmitriy,



I set up an XML file based on the default one and added the two elements
you noted.



However, this has brought up an issue in that the XML file and an
IgniteConfiguration instance can't both be provided to the Ignition.Start()
call. So I changed it to use the DiscoverySpi aspect of IgniteConfiguration
and set LocalAddress to "127.0.0.1" and LocalPort to 47500.



This did change the name of the persistence folder to be “127_0_0_1_47500”
as you suggested.
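
A sketch of that change in Ignite.NET terms might look like the following
(the LocalAddress and LocalPort property names are taken from this thread;
the exact API surface may differ between Ignite versions):

```csharp
using Apache.Ignite.Core;
using Apache.Ignite.Core.Discovery.Tcp;

var cfg = new IgniteConfiguration
{
    DiscoverySpi = new TcpDiscoverySpi
    {
        // Pin the discovery bind address and port so the derived
        // persistence folder name stays "127_0_0_1_47500" across restarts.
        LocalAddress = "127.0.0.1",
        LocalPort = 47500
    }
};

using (var ignite = Ignition.Start(cfg))
{
    // Work with caches here; persisted data lands under the stable folder.
}
```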



While this resolves my current issue with the folder name changing, it
still seems fragile: network configuration aspects of the server Ignite
is running on have a direct impact on an internal aspect of its
configuration (i.e. the location where the persisted data is stored). A DHCP
IP lease renewal, an internal DNS domain change, or an internal IT
department change to IPv6 addressing (among other things) could cause
problems when a node restarts and decides the location of its data is
different.



Do you know how GridGain manages this in their enterprise deployments using
persistence?



Thanks,
Raymond.



Re: Specifying location of persistent storage location

2017-09-04 Thread Dmitriy Setrakyan
On Mon, Sep 4, 2017 at 4:28 PM, Raymond Wilson 
wrote:

> Hi,
>
>
>
> It’s possible this could cause change in the folder name, though I do not
> think this is an issue in my case. Below are three different folder names I
> have seen. All use the same port number, but differ in terms of the IPV6
> address (I have also seen variations where the IPv6 address is absent in
> the folder name).
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_38b4_1_858c_f0ab_bc60_54ab_2406_e007_38b4_1_c5d8_af4b_55b2_582a_47500
>
>
>
> I start the nodes in my local setup in a well defined order so I would
> expect the port to be the same. I did once start a second instance by
> mistake and did see the port number incremented in the folder name.
>
>
>
> Are you suggesting the two changes you note below will result in the same
> folder name being chosen every time, unlike above?
>


Yes, exactly. My suggestions will ensure that you explicitly bind to the
same address every time.






Re: Specifying location of persistent storage location

2017-09-04 Thread Dmitriy Setrakyan
On Mon, Sep 4, 2017 at 3:37 PM, Raymond Wilson 
wrote:

> Hi,
>
>
>
> I definitely have not had more than one server node running at the same
> time (though there have been more than one client node running on the same
> machine).
>
>
>
> I suspect what is happening is that one or more of the network interfaces
> on the machine can have their address change dynamically. What I thought of
> as a GUID is actually (I think) an IPv6 address attached to one of the
> interfaces. This aspect of the folder name tends to come and go.
>
>
>
> You can see from the folder names below that there are quite a number of
> addresses involved. This seems to be fragile (and I certainly see the name
> of this folder changing frequently), so I think being able to set it to
> something concrete would be a good idea.
>
>
>
I think I understand what is happening. Ignite starts off with a default
port, and then starts incrementing it with every new node started on the
same host. Perhaps you start server and client nodes in different order
sometimes which causes server to bind to a different port.

To make sure that your server node binds to the same port all the time, you
should try specifying it explicitly in the server node configuration, like
so (forgive me if this snippet does not compile):


<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="localPort" value="47500"/>
  </bean>
</property>


Please make sure that the client nodes either don't have any port
configured, or have a different port configured.

You should also make sure that Ignite always binds to the desired local
interface on client and server nodes, by specifying
IgniteConfiguration.setLocalHost(...) property, or like so in XML:

<property name="localHost" value="127.0.0.1"/>


If my theory is correct, Ignite should make sure that the clients and
servers cannot theoretically bind to the same port. I will double check it
with the community and file a ticket if needed.
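
For Ignite.NET users following the same advice, the equivalent configuration
might be sketched as below (assuming the Localhost and LocalPort properties;
verify the names against your client version before relying on this):

```csharp
using Apache.Ignite.Core;
using Apache.Ignite.Core.Discovery.Tcp;

var serverCfg = new IgniteConfiguration
{
    // Bind every socket to one known interface so the address set
    // (and anything derived from it) is stable across restarts.
    Localhost = "127.0.0.1",
    DiscoverySpi = new TcpDiscoverySpi
    {
        // Fixed discovery port for the server node; client nodes should
        // leave this unset or use a different port.
        LocalPort = 47500
    }
};
```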


RE: Specifying location of persistent storage location

2017-09-04 Thread Raymond Wilson
Hi,



I definitely have not had more than one server node running at the same
time (though there have been more than one client node running on the same
machine).



I suspect what is happening is that one or more of the network interfaces
on the machine can have their address change dynamically. What I thought of
as a GUID is actually (I think) an IPv6 address attached to one of the
interfaces. This aspect of the folder name tends to come and go.



You can see from the folder names below that there are quite a number of
addresses involved. This seems to be fragile (and I certainly see the name
of this folder changing frequently), so I think being able to set it to
something concrete would be a good idea.



Thanks,
Raymond.






Re: Specifying location of persistent storage location

2017-09-04 Thread Dmitriy Setrakyan
Hi Raymond,

Sorry for the initial confusion. The consistent ID is the combination of
the local IP and port. You DO NOT need to do anything special to configure
it.

If you had different folders created under the work folder, you probably
had more than one node running at the same time. Can you please make sure
that it was not the case?

D.

On Mon, Sep 4, 2017 at 2:55 PM, Raymond Wilson 
wrote:

> Hi Dmitry,
>
>
>
> I looked at IgniteConfiguration in the C# client, but it does not have
> consistentID in its namespace.
>
>
>
> I pulled the C# client source code and searched in there and was not able
> to find it. Perhaps this is not exposed in the C# client at all?
>
>
>
> If that is the case, how would I configure this?
>
>
>
> Thanks,
>
> Raymond.

RE: Specifying location of persistent storage location

2017-09-04 Thread Raymond Wilson
Hi Dmitry,



I looked at IgniteConfiguration in the C# client, but it does not have
consistentID in its namespace.



I pulled the C# client source code and searched in there and was not able
to find it. Perhaps this is not exposed in the C# client at all?



If that is the case, how would I configure this?



Thanks,

Raymond.



*From:* Dmitry Pavlov [mailto:dpavlov@gmail.com]
*Sent:* Tuesday, September 5, 2017 9:24 AM
*To:* user@ignite.apache.org
*Subject:* Re: Specifying location of persistent storage location



Hi Raymond,



A node's consistent ID is, by default, the sorted set of local IP addresses
and ports. This value survives node restarts.



At the same time consistent ID may be set using
IgniteConfiguration.setConsistentId() if you need to specify it manually.

I'm not sure how to write in C# syntax, but I am pretty sure it may be
configured.



Sincerely,

Dmitriy Pavlov



On Tue, 5 Sep 2017 at 0:12, Raymond Wilson wrote:

… also, the documentation for ClusterNode here (
https://www.gridgain.com/sdk/pe/latest/javadoc/org/apache/ignite/cluster/ClusterNode.html)
only describes a getter for the consistent ID, I need to be able to set it.



*From:* Raymond Wilson [mailto:raymond_wil...@trimble.com]
*Sent:* Tuesday, September 5, 2017 9:06 AM
*To:* 'user@ignite.apache.org' 
*Subject:* RE: Specifying location of persistent storage location



Apologies if this is a silly question, but I’m struggling to see how to get
at the consistentID member of ClusterNode on the C# client.



If I look at IClusterNode I only see “Id”, which is the ID that changes
each restart. Is consistentID a Java client only feature?



Thanks,

Raymond.



*From:* Raymond Wilson [mailto:raymond_wil...@trimble.com]
*Sent:* Tuesday, September 5, 2017 6:04 AM
*To:* user@ignite.apache.org
*Subject:* Re: Specifying location of persistent storage location



Thank you Dmitry!

Sent from my iPhone


On 5/09/2017, at 1:12 AM, Dmitry Pavlov  wrote:

Hi Raymond,



The Ignite Persistent Store includes the cluster node's consistentID
parameter in the folder name. This is required because two nodes could be
started on the same physical machine.



Consistency of using same folder each time is provided by this property,

ClusterNode.consistentID is a consistent, globally unique node ID. Unlike
ClusterNode.id, this parameter contains a consistent node ID which survives
node restarts.



Sincerely,

Dmitriy Pavlov





Sat, 2 Sep 2017 at 23:40, Raymond Wilson :

Hi,



I’m running a POC looking at the Ignite Persistent Store feature.



I have added a section to the configuration for the Ignite grid as follows:



cfg.PersistentStoreConfiguration = new
PersistentStoreConfiguration()

{

PersistentStorePath = PersistentCacheStoreLocation,

WalArchivePath = Path.Combine(PersistentCacheStoreLocation,
"WalArchive"),

WalStorePath = Path.Combine(PersistentCacheStoreLocation,
"WalStore"),

};



When I run the Ignite grid (a single node running locally) it then creates
a folder inside the PersistentCacheStoreLocation with a complicated name,
like this (which looks like a collection of IP addresses and a GUID for
good measure, and perhaps with a port number added to the end):



0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
,



Within that folder are placed folders containing the content for each
cache in the system.



Oddly, if I stop and then restart the grid I sometimes get another folder
with a slightly different complicated name, like this:



0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500



How do I ensure my grid uses the same persistent location each time? There
doesn’t seem anything obvious in the PersistentStoreConfiguration that
relates to this, other than the root location of the folder to store
persisted data.



Thanks,
Raymond.


Re: Specifying location of persistent storage location

2017-09-04 Thread Dmitry Pavlov
Hi Raymond,

ClusterNode.consistentId is, by default, derived from the sorted set of local
IP addresses and ports. This value survives node restarts.

At the same time, the consistent ID may be set using
IgniteConfiguration.setConsistentId() if you need to specify it manually.
I'm not sure how to write this in C# syntax, but I am pretty sure it can be
configured.

Sincerely,
Dmitriy Pavlov

Tue, 5 Sep 2017 at 0:12, Raymond Wilson :

> … also, the documentation for ClusterNode here (
> https://www.gridgain.com/sdk/pe/latest/javadoc/org/apache/ignite/cluster/ClusterNode.html)
> only describes a getter for the consistent ID, I need to be able to set it.
>
>
>
> *From:* Raymond Wilson [mailto:raymond_wil...@trimble.com]
> *Sent:* Tuesday, September 5, 2017 9:06 AM
> *To:* 'user@ignite.apache.org' 
> *Subject:* RE: Specifying location of persistent storage location
>
>
>
> Apologies if this is a silly question, but I’m struggling to see how to
> get at the consistentID member of ClusterNode on the C# client.
>
>
>
> If I look at IClusterNode I only see “Id”, which is the ID that changes
> each restart. Is consistentID a Java client only feature?
>
>
>
> Thanks,
>
> Raymond.
>
>
>
> *From:* Raymond Wilson [mailto:raymond_wil...@trimble.com
> ]
> *Sent:* Tuesday, September 5, 2017 6:04 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: Specifying location of persistent storage location
>
>
>
> Thank you Dmitry!
>
> Sent from my iPhone
>
>
> On 5/09/2017, at 1:12 AM, Dmitry Pavlov  wrote:
>
> Hi Raymond,
>
>
>
> The Ignite Persistent Store includes the cluster node's consistentId in the
> folder name. This is required because it is possible for two nodes to be
> started on the same physical machine.
>
>
>
> Using the same folder each time is ensured by this property:
>
> ClusterNode.consistentId - the globally unique consistent node ID. Unlike
> ClusterNode.id, this parameter contains a consistent node ID which survives
> node restarts.
>
>
>
> Sincerely,
>
> Dmitriy Pavlov
>
>
>
>
>
> Sat, 2 Sep 2017 at 23:40, Raymond Wilson :
>
> Hi,
>
>
>
> I’m running a POC looking at the Ignite Persistent Store feature.
>
>
>
> I have added a section to the configuration for the Ignite grid as follows:
>
>
>
> cfg.PersistentStoreConfiguration = new
> PersistentStoreConfiguration()
>
> {
>
> PersistentStorePath = PersistentCacheStoreLocation,
>
> WalArchivePath = Path.Combine(PersistentCacheStoreLocation,
> "WalArchive"),
>
> WalStorePath = Path.Combine(PersistentCacheStoreLocation,
> "WalStore"),
>
> };
>
>
>
> When I run the Ignite grid (a single node running locally) it then creates
> a folder inside the PersistentCacheStoreLocation with a complicated name,
> like this (which looks like a collection of IP addresses and a GUID for
> good measure, and perhaps with a port number added to the end):
>
>
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
> ,
>
>
>
> Within that folder are then placed folders containing the content for each
> cache in the system
>
>
>
> Oddly, if I stop and then restart the grid I sometimes get another folder
> with a slightly different complicated name, like this:
>
>
>
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
>
>
> How do I ensure my grid uses the same persistent location each time? There
> doesn’t seem anything obvious in the PersistentStoreConfiguration that
> relates to this, other than the root location of the folder to store
> persisted data.
>
>
>
> Thanks,
> Raymond.
>
>
>
>


RE: Specifying location of persistent storage location

2017-09-04 Thread Raymond Wilson
… also, the documentation for ClusterNode here (
https://www.gridgain.com/sdk/pe/latest/javadoc/org/apache/ignite/cluster/ClusterNode.html)
only describes a getter for the consistent ID, I need to be able to set it.



*From:* Raymond Wilson [mailto:raymond_wil...@trimble.com]
*Sent:* Tuesday, September 5, 2017 9:06 AM
*To:* 'user@ignite.apache.org' 
*Subject:* RE: Specifying location of persistent storage location



Apologies if this is a silly question, but I’m struggling to see how to get
at the consistentID member of ClusterNode on the C# client.



If I look at IClusterNode I only see “Id”, which is the ID that changes
each restart. Is consistentID a Java client only feature?



Thanks,

Raymond.



*From:* Raymond Wilson [mailto:raymond_wil...@trimble.com
]
*Sent:* Tuesday, September 5, 2017 6:04 AM
*To:* user@ignite.apache.org
*Subject:* Re: Specifying location of persistent storage location



Thank you Dmitry!

Sent from my iPhone


On 5/09/2017, at 1:12 AM, Dmitry Pavlov  wrote:

Hi Raymond,



The Ignite Persistent Store includes the cluster node's consistentId in the
folder name. This is required because it is possible for two nodes to be
started on the same physical machine.



Using the same folder each time is ensured by this property:

ClusterNode.consistentId - the globally unique consistent node ID. Unlike
ClusterNode.id, this parameter contains a consistent node ID which survives
node restarts.



Sincerely,

Dmitriy Pavlov





Sat, 2 Sep 2017 at 23:40, Raymond Wilson :

Hi,



I’m running a POC looking at the Ignite Persistent Store feature.



I have added a section to the configuration for the Ignite grid as follows:



cfg.PersistentStoreConfiguration = new
PersistentStoreConfiguration()

{

PersistentStorePath = PersistentCacheStoreLocation,

WalArchivePath = Path.Combine(PersistentCacheStoreLocation,
"WalArchive"),

WalStorePath = Path.Combine(PersistentCacheStoreLocation,
"WalStore"),

};



When I run the Ignite grid (a single node running locally) it then creates
a folder inside the PersistentCacheStoreLocation with a complicated name,
like this (which looks like a collection of IP addresses and a GUID for
good measure, and perhaps with a port number added to the end):



0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
,



Within that folder are then placed folders containing the content for each
cache in the system



Oddly, if I stop and then restart the grid I sometimes get another folder
with a slightly different complicated name, like this:



0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500



How do I ensure my grid uses the same persistent location each time? There
doesn’t seem anything obvious in the PersistentStoreConfiguration that
relates to this, other than the root location of the folder to store
persisted data.



Thanks,
Raymond.


Data Page Locking

2017-09-04 Thread John Wilson
Hi,

Ignite documentation describes how and when entry-based locks are obtained,
both in atomic and transactional atomicity modes.

I was wondering why and when locks on data pages are required/requested --
PageMemoryImp.java shows that data pages have 8 bytes reserved for LOCK:

https://github.com/apache/ignite/blob/15613e2af5e0a4a0014bb5c6d6f6915038b1be1a/modules/core/src/main/java/org/apache/ignite/internal/processors/cache/persistence/pagemem/PageMemoryImpl.java#L92

Thanks,


Re: client thread dumps

2017-09-04 Thread Evgenii Zhuravlev
>I am also seeing frequent GC in GC monitor, wondering what it can relate to
>that is with CPU spike ?

I think these spikes could be related to the GC, yes. But they also could
be related to the work that happens on this node. Could you describe how
you use this node? Which operation do you invoke on it?

Evgenii


2017-09-04 22:03 GMT+03:00 ezhuravlev :

> Hi,
>
> Here is a description of all thread pools:
> https://apacheignite.readme.io/v2.1/docs/thread-pools
> You can try to reduce their size, but do it carefully, these changes could
> lead to performance degradation.
>
> If you will never use visor or rest-api you can also set
> ConnectorConfiguration to null:
>
> IgniteConfiguration cfg = new IgniteConfiguration();
> cfg.setConnectorConfiguration(null); // disables the tcp rest connector
>
>
> Also, I found thread with pretty the same questions that you have about
> threads:
> http://apache-ignite-users.70518.x6.nabble.com/Ignite-
> Thread-count-td7636.html
>
> Evgenii
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: memory increases on heap

2017-09-04 Thread Evgenii Zhuravlev
Hi,

answered you here:
http://apache-ignite-users.70518.x6.nabble.com/client-thread-dumps-tc16658.html

Evgenii

2017-09-04 17:09 GMT+03:00 ignite_user2016 :

> Hello Igniters,
>
> We have 2 instances on ignite ( 2.0) in production, we mostly used ignite
> for spring cache.
>
> Seeing so many Ignite threads on client heap dumps, wondering what are
> these
> and can we reduce the thread size on client side ? with that can we reduce
> the memory foot print ?
>
> please see the attached image..
>  ignite_thread_dump.png>
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: client thread dumps

2017-09-04 Thread ezhuravlev
Hi,

Here is a description of all thread pools:
https://apacheignite.readme.io/v2.1/docs/thread-pools
You can try to reduce their size, but do it carefully, these changes could
lead to performance degradation.

If you will never use visor or rest-api you can also set
ConnectorConfiguration to null:

IgniteConfiguration cfg = new IgniteConfiguration(); 
cfg.setConnectorConfiguration(null); // disables the tcp rest connector 


Also, I found thread with pretty the same questions that you have about
threads:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Thread-count-td7636.html

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: copyOnRead to false

2017-09-04 Thread ezhuravlev
Hi,

Could you describe your benchmarks, so we can understand what could be
wrong here?

If the value is fetched from a remote node, you will always get a copy. If
you get the value locally, you can force Ignite to return the stored
instance by setting the CacheConfiguration.setCopyOnRead(false) property, but
this should be used only in read-only scenarios. It's not safe to modify this
instance, because the serialized form will not be updated until you call
cache.put(), so anyone who reads it will potentially get the old value.
Additionally, it can be concurrently serialized which can cause data
corruption. 
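The read-only caveat can be demonstrated with a toy cache in plain Java. This is an illustration of the hazard only, not Ignite code: handing back the stored instance (the copyOnRead=false behavior) lets callers mutate shared state, while handing back a copy isolates them.

```java
import java.util.HashMap;
import java.util.Map;

// Toy cache illustrating the copyOnRead hazard described above.
public class CopyOnReadSketch {
    static class Value { int n; Value(int n) { this.n = n; } }

    private final Map<String, Value> store = new HashMap<>();
    private final boolean copyOnRead;

    CopyOnReadSketch(boolean copyOnRead) { this.copyOnRead = copyOnRead; }

    void put(String k, Value v) { store.put(k, v); }

    Value get(String k) {
        Value v = store.get(k);
        // copyOnRead=true returns a defensive copy; false returns the
        // stored instance itself.
        return copyOnRead ? new Value(v.n) : v;
    }

    public static void main(String[] args) {
        CopyOnReadSketch shared = new CopyOnReadSketch(false);
        shared.put("k", new Value(1));
        shared.get("k").n = 99;                // mutates the stored instance
        System.out.println(shared.get("k").n); // the store itself changed

        CopyOnReadSketch copied = new CopyOnReadSketch(true);
        copied.put("k", new Value(1));
        copied.get("k").n = 99;                // only mutates the copy
        System.out.println(copied.get("k").n); // store is unaffected
    }
}
```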

If you use the BinaryMarshaller with the copyOnRead flag, Ignite stores a
serialized copy on heap and does not copy the object on each "get" (if the
object does not change after "get").

Please share info about your benchmarks (or code) to help investigate this
problem.

Evgenii 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Quick questions on Evictions

2017-09-04 Thread John Wilson
I appreciate the nice explanation. I got a few more questions:


   1. For the case where on-heap caching and persistent are both disabled,
   why does Ignite throw out out-dated pages from off-heap? Why not throw OOM
   error since the out-dated pages are not backed by persistent store and
   throwing away results in data loss?
   2. For off-heap eviction with persistent store enabled, will entries
   evicted from data pages be written to disk (in case they are dirty) or will
   they be thrown away (which would imply that entries eligible for eviction
   must be clean and have already been written to disk by checkpointing)?
   3.  Checkpointing works by locating dirty pages and writing them out. If
   a single entry in a data page is dirty (has been updated since the last
   check pointing), will checkpointing write the entire data page (all
   entries) to the partition files or just the dirty entry?

Thanks!

On Mon, Sep 4, 2017 at 8:17 AM, dkarachentsev 
wrote:

> Hi,
>
> Assume you have disabled onheapCache and disabled persistence. In that case
> you may configure only the data page eviction mode; outdated pages will be
> thrown away when no free memory is available for Ignite. Also, you
> cannot configure per-entry eviction.
>
> OK, if you enable onheapCache, then Ignite will store on heap every entry
> that was read from off-heap (or disk). Next reads of it will not require
> off-heap readings, and every update will write to off-heap. To limit size
> of
> onheapCache you may set CacheConfiguration.setEvictionPolicy(), but it
> will
> not evict off-heap entries.
>
> So, off-heap eviction may be controlled with DataPageEvictionMode only, and
> as you suggested, it clears entries one-by-one from page, checking for
> current locks (transaction locks as well). If entry is locked, it won't be
> evicted.
>
> Thanks!
> -Dmitry.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Specifying location of persistent storage location

2017-09-04 Thread Raymond Wilson
Thank you Dmitry!

Sent from my iPhone

> On 5/09/2017, at 1:12 AM, Dmitry Pavlov  wrote:
> 
> Hi Raymond,
>  
> The Ignite Persistent Store includes the cluster node's consistentId in the
> folder name. This is required because it is possible for two nodes to be
> started on the same physical machine.
>  
> Using the same folder each time is ensured by this property:
> ClusterNode.consistentId - the globally unique consistent node ID. Unlike
> ClusterNode.id, this parameter contains a consistent node ID which survives
> node restarts.
>  
> Sincerely,
> Dmitriy Pavlov
> 
> 
> Sat, 2 Sep 2017 at 23:40, Raymond Wilson :
>> Hi,
>> 
>>  
>> 
>> I’m running a POC looking at the Ignite Persistent Store feature.
>> 
>>  
>> 
>> I have added a section to the configuration for the Ignite grid as follows:
>> 
>>  
>> 
>> cfg.PersistentStoreConfiguration = new 
>> PersistentStoreConfiguration()
>> 
>> {
>> 
>> PersistentStorePath = PersistentCacheStoreLocation,
>> 
>> WalArchivePath = Path.Combine(PersistentCacheStoreLocation, 
>> "WalArchive"),
>> 
>> WalStorePath = Path.Combine(PersistentCacheStoreLocation, 
>> "WalStore"),
>> 
>> };
>> 
>>  
>> 
>> When I run the Ignite grid (a single node running locally) it then creates a 
>> folder inside the PersistentCacheStoreLocation with a complicated name, like 
>> this (which looks like a collection of IP addresses and a GUID for good 
>> measure, and perhaps with a port number added to the end):
>> 
>>  
>> 
>> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>>  
>>  
>>  
>>   ,
>> 
>>  
>> 
>> Within that folder are then placed folders containing the content for each 
>> cache in the system
>> 
>>  
>> 
>> Oddly, if I stop and then restart the grid I sometimes get another folder 
>> with a slightly different complicated name, like this:
>> 
>>  
>> 
>> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>> 
>>  
>> 
>> How do I ensure my grid uses the same persistent location each time? There 
>> doesn’t seem anything obvious in the PersistentStoreConfiguration that 
>> relates to this, other than the root location of the folder to store 
>> persisted data.
>> 
>>  
>> 
>> Thanks,
>> Raymond.
>> 
>>  


Re: About Apache Ignite Partitioned Cache

2017-09-04 Thread ezhuravlev
>The backups which is mentioned in the documentation, how do I define which
node is the primary node and >which node is the backup node.

It is defined by the Affinity Function; you can read about it here:
https://apacheignite.readme.io/docs/affinity-collocation#section-affinity-function

>I will have a four node cluster in two data centers , is it normal to think
each data center would have one >primary node and one backup node. What
would happen in the scenario that the primary node on one dc >cannot reach
the primary node on the other dc?

I think you may not fully understand how partitioned caches work. A node can
be primary for part of the partitions, while other nodes are primary for other
parts. Here is basic information about the Partitioned cache mode that
should be enough to start:
https://apacheignite.readme.io/docs/cache-modes#section-partitioned-mode
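The partition-to-node idea can be sketched in plain Java. This is a simplified illustration of the concept only, not Ignite's actual rendezvous affinity function: every key hashes to one of N partitions, and each node is primary for a subset of partitions, rather than a whole node being "the primary".

```java
import java.util.UUID;

// Simplified sketch of partitioned-cache affinity: key -> partition -> node.
public class AffinitySketch {
    static final int PARTITIONS = 8;
    static final String[] NODES = {"node-A", "node-B", "node-C", "node-D"};

    static int partition(Object key) {
        // Map any key to a stable partition by hashing.
        return Math.floorMod(key.hashCode(), PARTITIONS);
    }

    static String primaryNode(Object key) {
        // Round-robin partition->node assignment, for illustration only;
        // Ignite uses a rendezvous hashing scheme instead.
        return NODES[partition(key) % NODES.length];
    }

    public static void main(String[] args) {
        UUID k = UUID.fromString("00000000-0000-0000-0000-000000000001");
        System.out.println("partition=" + partition(k)
            + " primary=" + primaryNode(k));
    }
}
```

The point is that each of the four nodes ends up primary for roughly a quarter of the partitions (and backup for others), so there is no single "primary node" per data center.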

>I assume that org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi is to be
used for the discoverySpi property >even for Partitioned Caches. Are there
special properties that should be filled in for the partitioned cache
>except for addresses and ports?

discoverySpi doesn't affect caches at all, so you don't need to change
anything in the org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi
configuration.

>Most of the caches I am using is using uuid as keys , in that case how
would affinity collocation of the keys >work? Unfortunately I cannot change
the key structure in a short notice.

Do you want to collocate Data with Data or Compute with Data? In both cases
you can find information on this page:
https://apacheignite.readme.io/docs/affinity-collocation
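One common pattern for UUID keys is to keep the UUID as the unique key but derive the partition from a separate affinity field, which is what Ignite's AffinityKey / @AffinityKeyMapped mechanism achieves. A plain-Java sketch of the idea (the OrderKey/customerId names are hypothetical, and the partition function is a stand-in for the real affinity function):

```java
import java.util.UUID;

// Sketch: collocate entries that share an affinity field, while keeping
// UUID primary keys. Illustration only, not Ignite's AffinityKey class.
public class CollocationSketch {
    static final int PARTITIONS = 16;

    static class OrderKey {
        final UUID orderId;    // unique key, unchanged
        final UUID customerId; // affinity field (hypothetical)
        OrderKey(UUID o, UUID c) { orderId = o; customerId = c; }
    }

    static int partition(OrderKey k) {
        // Partition by the affinity field, not the whole key, so all
        // orders of one customer land in the same partition/node.
        return Math.floorMod(k.customerId.hashCode(), PARTITIONS);
    }

    public static void main(String[] args) {
        UUID cust = UUID.randomUUID();
        OrderKey a = new OrderKey(UUID.randomUUID(), cust);
        OrderKey b = new OrderKey(UUID.randomUUID(), cust);
        // Both orders of the same customer map to the same partition.
        System.out.println(partition(a) == partition(b));
    }
}
```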


Evgenii




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Quick questions on Evictions

2017-09-04 Thread Dmitry Karachentsev

Hi,

Assume you have disabled onheapCache and disabled persistence. In that
case you may configure only the data page eviction mode; outdated pages
will be thrown away when no free memory is available for Ignite.
Also, you cannot configure per-entry eviction.


OK, if you enable onheapCache, then Ignite will store on heap every 
entry that was read from off-heap (or disk). Next reads of it will not 
require off-heap readings, and every update will write to off-heap. To 
limit size of onheapCache you may set 
CacheConfiguration.setEvictionPolicy(), but it will not evict off-heap 
entries.


So, off-heap eviction may be controlled with DataPageEvictionMode only, 
and as you suggested, it clears entries one-by-one from page, checking 
for current locks (transaction locks as well). If entry is locked, it 
won't be evicted.


Thanks!
-Dmitry.
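The data page eviction mode discussed above is enabled through the memory policy configuration. A configuration sketch, assuming the Ignite 2.0/2.1 memory-policy APIs (the policy name is a placeholder):

```java
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;

// Configuration sketch: enable off-heap data page eviction so pages are
// rotated out when the memory region fills up.
public class PageEvictionConfig {
    public static IgniteConfiguration create() {
        MemoryPolicyConfiguration plc = new MemoryPolicyConfiguration()
            .setName("default_mem_plc") // placeholder policy name
            // RANDOM_2_LRU is one of the provided eviction modes.
            .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU);

        MemoryConfiguration memCfg = new MemoryConfiguration()
            .setDefaultMemoryPolicyName("default_mem_plc")
            .setMemoryPolicies(plc);

        return new IgniteConfiguration().setMemoryConfiguration(memCfg);
    }
}
```

This is a configuration fragment; it needs an Ignite 2.x runtime on the classpath, and these classes were later reworked into DataRegionConfiguration in newer Ignite versions.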

02.09.2017 02:12, John Wilson wrote:

Hi,

I have been reading through Ignite doc and I still have these 
questions. I appreciate your answer.


Assume my Ignite native persistence is *not *enabled:

 1. if on-heap cache is also not enabled, then there are no
entry-based evictions, right?
 2. if on-heap cache is now enabled, does a write to on-heap cache
also results in a write-through or write-behind behavior to the
off-heap entry?
 3. If on-heap cache is not enabled but data page eviction mode is
enabled, then where do evicted pages from off-heap go/written to?

and, need confirmation on how data page eviction is implemented:

4. when a data page eviction is initiated, Ignite works by iterating 
through each entry in the page and evicting entries one by one. It may 
happen that certain entries may be involved in active transactions and 
hence certain entries may not be evicted at all.



Thanks,




About Apache Ignite Partitioned Cache

2017-09-04 Thread Sabyasachi Biswas
Hi,

I am using Apache Ignite 1.9 in embedded mode as an in-memory data grid. I
want to use the Partitioned Mode cache and I am trying to gather usage
information.


   - The backups which is mentioned in the documentation, how do I define
   which node is the primary node and which node is the backup node.
   - I will have a four node cluster in two data centers , is it normal to
   think each data center would have one primary node and one backup node.
   What would happen in the scenario that the primary node on one dc cannot
   reach the primary node on the other dc?
   - I assume that org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi is
   to be used for the discoverySpi property even for Partitioned Caches. Are
   there special properties that should be filled in for the partitioned cache
   except for addresses and ports?
   - Most of the caches I am using have UUIDs as keys; in that case, how
   would affinity collocation of the keys work? Unfortunately, I cannot change
   the key structure on short notice.

Thanks and Regards,
Saby


Re: client thread dumps

2017-09-04 Thread ignite_user2016
I took a heap dump and it all points to following class - 

One instance of
"org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager"
loaded by "sun.misc.Launcher$AppClassLoader @ 0x88003d40" occupies
375,647,072 (83.93%) bytes. The memory is accumulated in one instance of
"java.util.LinkedList" loaded by "".

Keywords
java.util.LinkedList
sun.misc.Launcher$AppClassLoader @ 0x88003d40
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager

see the attached images.
 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: UPDATE SQL for nested BinaryObject throws exception.

2017-09-04 Thread afedotov
Hi,

Actually, flattening nested properties with aliases currently works only one
level deep.
Looks like it's a bug. I'll file a JIRA ticket for this.

Kind regards,
Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: client thread dumps

2017-09-04 Thread ignite_user2016
I am also seeing frequent GC in the GC monitor; wondering whether it could be
related to the CPU spike?

 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Retrieving multiple keys with filtering

2017-09-04 Thread Semyon Boikov
Yes, a read can be executed without acquiring the entry lock. But you need to
take into account that a request for a cache.get operation can be processed in
the same stripe as cache.invoke.

Semyon

On Mon, Sep 4, 2017 at 6:39 AM, Dmitriy Setrakyan 
wrote:

> Semyon,
>
> Can you please clarify this. Do we allow concurrent reads while invokeAll
> or invoke is executed?
>
> D.
>
> On Tue, Aug 29, 2017 at 11:59 AM, Andrey Kornev 
> wrote:
>
>> Ah, yes! Thank you, Semyon! According to invokeAll() javadocs "No
>> mappings will be returned for EntryProcessors that return a null value for
>> a key." I should read JCache javadocs more carefully next time. :)
>>
>>
>> Still, the processor is invoked while a monitor is held on the cache
>> entry being processed, which is of course unnecessary in a read-only case
>> like the one we're discussing in this thread...
>>
>>
>> I guess I'm stuck with the Compute-based approach for now. :(
>>
>> Thanks!
>> Andrey
>>
>> --
>> *From:* Semyon Boikov 
>> *Sent:* Tuesday, August 29, 2017 6:15 AM
>>
>> *To:* user@ignite.apache.org
>> *Subject:* Re: Retrieving multiple keys with filtering
>>
>> Hi,
>>
>> If EntryProcessor returns null then null is not added in the result map.
>> But I agree that using invokeAll() will have a lot of unnecessary overhead.
>> Perhaps we need add new getAll method on API, otherwise best alternative is
>> use custom ComputeJob or affinityCall.
>>
>> Thanks,
>> Semyon
>>
>> On Tue, Aug 29, 2017 at 7:20 AM, Dmitriy Setrakyan > > wrote:
>>
>>> Andrey,
>>>
>>> I am not sure I understand. According to EntryProcessor API [1] you can
>>> chose to return nothing.
>>>
>>> Also, to my knowledge, you can still do parallel reads while executing
>>> the EntryProcessor. Perhaps other community members can elaborate on this.
>>>
>>> [1] https://static.javadoc.io/javax.cache/cache-api/1.0.0/in
>>> dex.html?javax/cache/processor/EntryProcessor.html
>>>
>>> D.
>>>
>>>
>>> On Mon, Aug 28, 2017 at 8:29 PM, Andrey Kornev >> > wrote:
>>>
 Dmitriy,


 It's good to be back!  Glad to find Ignite community as vibrant
 and thriving as ever!

 Speaking of invokeAll(), even if we ignore for a moment the overhead
 associated with locking/unlocking a cache entry prior to passing it to the
 EntryProcessor as well as the overhead associated with enlisting the
 touched entries in a transaction, the bigger problem with using
 invokeAll() for filtering is that EntryProcessor must return a value. I'm
 not aware of any way to make EntryProcessor drop the entry from the
 response. The only options is to use a null (or false) to indicate a
 filtered out entry. In my specific case, I'll end up sending back a whole
 bunch of nulls in the result map as I expect most of the keys to be
 rejected by the filter.

 Overall, invokeAll() is not what one would call *efficient* (the key
 word in my original question) way of filtering.

 Thanks!
 Andrey

 --
 *From:* Dmitriy Setrakyan 
 *Sent:* Saturday, August 26, 2017 8:37 AM
 *To:* user

 *Subject:* Re: Retrieving multiple keys with filtering

 Andrey,

 Good to hear from you. Long time no talk.

 I don't think invokeAll has only update semantics. You can definitely
 use it just to look at the keys and return a result. Also, as you
 mentioned, Ignite compute is a viable option as well.

 The reason that predicates were removed from the get methods is because
 the API was becoming unwieldy, and also because JCache does not require it.

 D.

 On Thu, Aug 24, 2017 at 10:50 AM, Andrey Kornev <
 andrewkor...@hotmail.com> wrote:

> Well, I believe invokeAll() has "update" semantics and using it for
> read-only filtering of cache entries is probably not going to be efficient
> or even appropriate.
>
>
> I'm afraid the only viable option I'm left with is to use Ignite's
> Compute feature:
>
> - on the sender, group the keys by affinity.
>
> - send each group along with the filter predicate to their
> affinity nodes using IgniteCompute.
>
> - on each node, use getAll() to fetch the local keys and apply the
> filter.
>
> - on the sender node, collect the results of the compute jobs into a
> map.
>
>
> It's unfortunate that Ignite dropped that original API. What used to
> be a single API call is now a non-trivial algorithm and one have to worry
> about things like what happens if the grid topology changes while the
> compute jobs are executing, etc.
>
> Can anyone think of any other less complex/more robust approach?
>
> Thanks
> Andrey
>
> --

client thread dumps

2017-09-04 Thread ignite_user2016
Hello Igniters,

curious about client thread dumps: I see so many Ignite threads on the client
side, wondering what they are and how I can reduce the number of threads?

we run ignite 2.0 in production on SB instances.

see the attached image for more information.

 

And our JVM setting as follows - 

-Xms2g -Xmx2g -server -XX:+AggressiveOpts -XX:MaxMetaspaceSize=256m
-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=DIR_PATH -XX:+UseParNewGC
-XX:+UseConcMarkSweepGC -XX:+UseTLAB -XX:NewSize=128m -XX:MaxNewSize=128m

Since we use Ignite for only the bare minimum, do I need some tuning here?

Thank you for all your help ..




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


memory increases on heap

2017-09-04 Thread ignite_user2016
Hello Igniters,

We have 2 instances of Ignite (2.0) in production; we mostly use Ignite
for Spring cache.

Seeing so many Ignite threads in client heap dumps, wondering what these are
and whether we can reduce the thread count on the client side? With that, can
we reduce the memory footprint?

please see the attached image..

 







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Specifying location of persistent storage location

2017-09-04 Thread Dmitry Pavlov
Hi Raymond,

The Ignite Persistent Store includes the cluster node's consistentId in the
folder name. This is required because it is possible for two nodes to be
started on the same physical machine.

Using the same folder each time is ensured by this property:
ClusterNode.consistentId - the globally unique consistent node ID. Unlike
ClusterNode.id, this parameter contains a consistent node ID which survives
node restarts.

Sincerely,
Dmitriy Pavlov


Sat, 2 Sep 2017 at 23:40, Raymond Wilson :

> Hi,
>
>
>
> I’m running a POC looking at the Ignite Persistent Store feature.
>
>
>
> I have added a section to the configuration for the Ignite grid as follows:
>
>
>
> cfg.PersistentStoreConfiguration = new
> PersistentStoreConfiguration()
>
> {
>
> PersistentStorePath = PersistentCacheStoreLocation,
>
> WalArchivePath = Path.Combine(PersistentCacheStoreLocation,
> "WalArchive"),
>
> WalStorePath = Path.Combine(PersistentCacheStoreLocation,
> "WalStore"),
>
> };
>
>
>
> When I run the Ignite grid (a single node running locally) it then creates
> a folder inside the PersistentCacheStoreLocation with a complicated name,
> like this (which looks like a collection of IP addresses and a GUID for
> good measure, and perhaps with a port number added to the end):
>
>
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_9cc8_92bc_50c9_6794_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
> ,
>
>
>
> Within that folder are then placed folders containing the content for each
> cache in the system
>
>
>
> Oddly, if I stop and then restart the grid I sometimes get another folder
> with a slightly different complicated name, like this:
>
>
>
>
> 0_0_0_0_0_0_0_1_10_0_75_1_10_3_72_117_127_0_0_1_192_168_121_1_192_168_178_27_192_168_3_1_2406_e007_9e5_1_a58c_2f32_8005_b03d_2406_e007_9e5_1_c5d8_af4b_55b2_582a_47500
>
>
>
> How do I ensure my grid uses the same persistent location each time? There
> doesn’t seem anything obvious in the PersistentStoreConfiguration that
> relates to this, other than the root location of the folder to store
> persisted data.
>
>
>
> Thanks,
> Raymond.
>
>
>


Re: Apache ignite transaction issue: Failed to enlist entry

2017-09-04 Thread Вячеслав Коптилин
Hi,

I tried the following code and it works as expected (without any exceptions)

Ignite ignite = Ignition.start();

CacheConfiguration cfg = new CacheConfiguration()
.setName("test-transactional-cache")
.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);

IgniteCache cache = ignite.getOrCreateCache(cfg);

Integer key = new Integer(12);

IgniteTransactions transactions = ignite.transactions();

try (Transaction tx =
transactions.txStart(TransactionConcurrency.OPTIMISTIC,
TransactionIsolation.SERIALIZABLE)) {
cache.put(key, 42);

tx.commit();
}

System.out.println("key=" + key + ", value=" + cache.get(key));


Could you please provide the cache configuration and a full code snippet that
can be used as a reproducer?

Thanks!

2017-09-03 8:19 GMT+03:00 richabali :

> I am using transaction as below:
>
> ignite = Ignition.ignite();
> igniteTransactions = ignite.transactions();
> Transaction igniteTransaction = igniteTransactions.tx();
> if (igniteTransaction == null) {
>     igniteTransaction = igniteTransactions.txStart(
>         TransactionConcurrency.OPTIMISTIC,
>         TransactionIsolation.SERIALIZABLE);
>     igniteTransaction.timeout(timeInMillis);
> }
>
> It gives below error when I try to put something in cache:
>
> class org.apache.ignite.IgniteCheckedException: Failed to enlist write
> value
> for key (cannot have update value in transaction after EntryProcessor is
> applied): 14630037
>
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.enlistWriteEntry(GridNearTxLocal.java:1340)
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.enlistWrite(GridNearTxLocal.java:856)
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync0(GridNearTxLocal.java:534)
> org.apache.ignite.internal.processors.cache.distributed.near.GridNearTxLocal.putAsync(GridNearTxLocal.java:386)
> org.apache.ignite.internal.processors.cache.GridCacheAdapter$21.op(GridCacheAdapter.java:2355)
> org.apache.ignite.internal.processors.cache.GridCacheAdapter$21.op(GridCacheAdapter.java:2353)
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.syncOp(GridCacheAdapter.java:4107)
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put0(GridCacheAdapter.java:2353)
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2334)
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2311)
> org.apache.ignite.internal.processors.cache.IgniteCacheProxy.put(IgniteCacheProxy.java:1502)
>
> Please give some explanation on why this error occurs and what can be done
> to avoid it.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: ignit2.1 start h2 debug console error

2017-09-04 Thread ilya.kasnacheev
Hello Lucky,

It should be -DIGNITE_H2_DEBUG_CONSOLE=true

Note the "D" after the dash, which means "define a system property", and the
absence of whitespace before the equals sign (and anywhere else); a stray
space would split your parameter in two.
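A minimal sketch of passing the flag on the command line (the classpath and
config file in the commented launch line are placeholders, not from this thread):

```shell
# Correct form: -D immediately followed by key=value, with no spaces.
JVM_OPTS="-DIGNITE_H2_DEBUG_CONSOLE=true"

# Hypothetical launch command (classpath and config path are placeholders):
# java $JVM_OPTS -cp "libs/*" org.apache.ignite.startup.cmdline.CommandLineStartup config.xml
echo "$JVM_OPTS"
```

Writing `-D IGNITE_H2_DEBUG_CONSOLE=true` (with a space) would instead be parsed as two separate arguments and the property would not be set.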

Hope it helps,
Ilya.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Confused about QueryEntity configuration

2017-09-04 Thread franck102
Hi Val,

I am able to use _key successfully; that is acceptable.

I tried defining an alias, using all combinations of _key/DB name and all
orders, and I couldn't get that to work.

Looking through the code, it seems to me that aliases are only used to map
result column names to binary object field names; they are not used to
"rewrite" the query to replace Java names with SQL names that the SQL engine
would understand, so aliases probably cannot help here.

Franck



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Custom SecurityCredentialsProvider and SecurityCredentials

2017-09-04 Thread franck102
I can implement my own flavor of SecurityCredentialsProvider, yes.
But the Ignite code will not use it no matter what I do, unless I missed
something.

Franck



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: SQL query is slow

2017-09-04 Thread Vladimir Ozerov
Hi Mihaela,

The index is not used in your case because you specify a function-based
condition. Usually this is resolved by adding a functional index, but
unfortunately Ignite doesn't support those at the moment. Is it possible to
"materialize" the condition "POSITION('Z', manufacturerCode) > 0" as an
additional attribute and add an index on it? In that case the SQL would look
like this and the index would be used:

SELECT COUNT(_KEY) FROM IgniteProduct AS product
WHERE manufacturerCodeZ=1

Another important thing is selectivity: what fraction of the records falls
under this condition?
Also, I would recommend changing "COUNT(_KEY)" to "COUNT(*)".
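One way to materialize such a flag is to compute it once when the entry is
written and store it in an indexed column. The class and method names below
are hypothetical (not from this thread); the idea is only that the
function-based condition becomes a plain equality check on a stored field:

```java
// Sketch: compute the flag at write time, store it in an indexed column
// (e.g. manufacturerCodeZ), and filter with "WHERE manufacturerCodeZ = 1".
public class ProductFlag {
    /** Returns 1 when the manufacturer code contains 'Z', else 0. */
    public static int codeZFlag(String manufacturerCode) {
        return manufacturerCode.indexOf('Z') >= 0 ? 1 : 0;
    }

    public static void main(String[] args) {
        System.out.println(codeZFlag("AZ-100")); // prints 1
        System.out.println(codeZFlag("AB-100")); // prints 0
    }
}
```

The stored value only needs to be recomputed when manufacturerCode itself changes, so the cost of the function moves from every query to each write.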

Vladimir.

On Tue, Aug 29, 2017 at 6:05 PM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> It is possible the returned dataset is too large, causing high network
> pressure that results in a long query execution time.
>
> There is no general recommendation for grid node count.
> Simple SQL queries can run slower on a large grid, as most of the time is
> spent in inter-node communication.
> Heavy SQL queries may show better results on a larger grid, as every node
> will hold a smaller dataset.
>
> You can try to look at the page memory statistics [1] to get estimated numbers.
>
> There is indeed an issue with large OFFSET, as Ignite can't just skip
> entries and has to fetch all of them from the nodes.
> OFFSET makes no sense without ORDER BY, as Ignite fetches rows from other
> nodes asynchronously and row order must be preserved between such queries.
> OFFSET is applied on the query initiator node (the reduce side) after the
> results are merged, as there is no way to tell on the map side which rows
> should be skipped.
>
>
> It looks like the underlying H2 tries to use an index scan, but I don't
> think an index can help in the case of a functional condition.
> You can try to make Ignite inline values in the index, or use a separate
> field with a smaller type that can be inlined. By default, index inlining
> is enabled for values up to 10 bytes long.
> See the IGNITE_MAX_INDEX_PAYLOAD_SIZE_DEFAULT system property docs and [2].
>
> [1] https://apacheignite.readme.io/v2.1/docs/memory-metrics
> [2] https://issues.apache.org/jira/browse/IGNITE-6060
>
> On Tue, Aug 29, 2017 at 3:59 PM, mhetea  wrote:
>
>> Thank you for your response.
>> I used query parallelism and the time dropped to ~2.3s, which is still too
>> long.
>> Regarding 1: is there any documentation about configuration parameters
>> (recommended number of nodes, how much data should be stored on each
>> node)?
>> We currently have 2 nodes with 32GB RAM each. Every 1 million records in
>> our cache occupies about 1GB (is there a way to see how much memory a
>> cache actually occupies? We currently look at the "Allocated next memory
>> segment" log info).
>> For 3: it seems from the execution plan that the index is hit:
>>  /* "productCache".IGNITEPRODUCT_MANUFACTURERCODE_IDX */
>> No?
>>
>> We also have this issue when we use a large OFFSET (we execute this kind
>> of query because we want paginated results).
>>
>> Also, this cache will be updated frequently, so we expect it to grow in
>> size.
>>
>> Thank you!
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/SQL-query-is-slow-tp16475p16487.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>