Re: Performance Issue with Enum Serialization in Ignite 2.8

2020-07-22 Thread zork
Can someone provide inputs on this please?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: A question regarding cluster groups

2020-07-22 Thread adipro
Thanks for the reply, Denis.

A few follow-up questions.

1. Is there any way to move the data from one cluster node to another cluster
node, for example by exporting the work folder (or something similar)
periodically?
2. Is there any snapshot-of-data concept that can be run periodically and
pushed somewhere?

Is there any way to achieve this without going for GridGain Enterprise? We are
checking all the possibilities here.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 2.8.1 - Loading Plugin Provider

2020-07-22 Thread akorensh
Veena,
  Per my previous note, the preferred way is to use the service-provider
  mechanism, as noted in the documentation page:
  https://apacheignite.readme.io/docs/plugins

  That is how the ML and direct-IO plugins are designed.
  See: https://github.com/apache/ignite/tree/master/modules/ml
  and: https://github.com/apache/ignite/tree/master/modules/direct-io
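
  For reference, this is roughly what that service-provider registration looks
  like (the provider class name below is illustrative, not an existing one):
  the plugin jar carries a plain-text file on its classpath at

      META-INF/services/org.apache.ignite.plugin.PluginProvider

  whose content is the fully qualified name of the PluginProvider
  implementation, e.g.

      com.example.MyPluginProvider

  Ignite then discovers and starts providers registered this way via the JDK
  ServiceLoader when the node starts.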

Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Change in CacheStore Serialization from 2.7.6 to 2.8.x breaks Spring Injected dataSource

2020-07-22 Thread Yohan Fernando
Hi all,

I have created a test project to reproduce this issue.

Running the project (Run.java)  using Ignite 2.8.1 produces the following 
error. Under 2.7.6 no such error occurs.

Exception in thread "main" javax.cache.integration.CacheLoaderException: java.lang.NullPointerException
    at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:545)
    at org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.localLoadCache(GridDhtCacheAdapter.java:636)
    at org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.localLoadCache(GridCacheProxyImpl.java:226)
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJob.localExecute(GridCacheAdapter.java:6052)
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJobV2.localExecute(GridCacheAdapter.java:6101)
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter$TopologyVersionAwareJob.execute(GridCacheAdapter.java:6735)
    at org.apache.ignite.compute.ComputeJobAdapter.call(ComputeJobAdapter.java:131)
    at org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1855)
    at org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:596)
    at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:7005)
    at org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:590)
    at org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:519)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
    at org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1293)
    at org.apache.ignite.internal.processors.task.GridTaskWorker.sendRequest(GridTaskWorker.java:1429)
    at org.apache.ignite.internal.processors.task.GridTaskWorker.processMappedJobs(GridTaskWorker.java:664)
    at org.apache.ignite.internal.processors.task.GridTaskWorker.body(GridTaskWorker.java:536)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
    at org.apache.ignite.internal.processors.task.GridTaskProcessor.startTask(GridTaskProcessor.java:829)
    at org.apache.ignite.internal.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:497)
    at org.apache.ignite.internal.processors.closure.GridClosureProcessor.callAsync(GridClosureProcessor.java:449)
    at org.apache.ignite.internal.processors.closure.GridClosureProcessor.callAsync(GridClosureProcessor.java:420)
    at org.apache.ignite.internal.processors.closure.GridClosureProcessor.callAsync(GridClosureProcessor.java:404)
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter.globalLoadCacheAsync(GridCacheAdapter.java:4020)
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter.globalLoadCache(GridCacheAdapter.java:3993)
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.loadCache(IgniteCacheProxyImpl.java:387)
    at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.loadCache(GatewayProtectedCacheProxy.java:311)
    at com.example.Run.main(Run.java:13)
Caused by: java.lang.NullPointerException
    at com.example.TraderCacheStore.loadCache(TraderCacheStore.java:26)
    at org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:519)
    ... 27 more

Please let me know if you have any questions.

Thanks

Yohan
From: Yohan Fernando
Sent: 21 July 2020 16:13
To: 'user@ignite.apache.org' 
Subject: Change in CacheStore Serialization from 2.7.6 to 2.8.x breaks Spring 
Injected dataSource

Hello All,

We are migrating from Ignite 2.7.6 to 2.8.1 and have hit an issue where, in
CacheStore implementations that include Spring-injected DataSource objects,
those datasources turn out to be null. After investigation, it appears that
there is a change in behaviour under Ignite 2.8.x: the CacheStore is serialized
and therefore loses the injected Spring references. This was not the case with
2.7.6, where the transient DataSource did not lose its injected value.

I have tried making the CacheStore implement ApplicationContextAware, but it
hasn't helped, as the Spring context is not re-initialized after
deserialization.

Has anyone come across this and figured out a way to resolve this?
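
For reference, a minimal sketch of one commonly suggested direction, assuming
the nodes are started from a Spring configuration so that Ignite can resolve
Spring beans (the bean name "dataSource" and the class body are illustrative,
not taken from the attached project, and it is worth verifying that this
injection point applies to how the store factory is configured):

import javax.cache.Cache;
import javax.sql.DataSource;

import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.lang.IgniteBiInClosure;
import org.apache.ignite.resources.SpringResource;

public class TraderCacheStore extends CacheStoreAdapter<Long, Object> {
    /** Re-resolved from the Spring context on each node; transient so it is never serialized. */
    @SpringResource(resourceName = "dataSource")
    private transient DataSource dataSource;

    @Override public void loadCache(IgniteBiInClosure<Long, Object> clo, Object... args) {
        // Use dataSource.getConnection() here rather than a reference captured at construction time.
    }

    @Override public Object load(Long key) {
        return null; // no-op for the sketch
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends Object> entry) {
        // no-op for the sketch
    }

    @Override public void delete(Object key) {
        // no-op for the sketch
    }
}

The idea is to let Ignite re-inject the bean after the store has been
deserialized on the remote node, instead of relying on a field populated by
Spring at construction time.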

Thanks

Yohan


Re: Apache Ignite CacheRebalanceMode Is Not Respected By Nodes

2020-07-22 Thread Denis Magda
Here are detailed instructions for managing data distribution across server
nodes:
https://www.gridgain.com/docs/latest/developers-guide/configuring-caches/managing-data-distribution

-
Denis
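
For reference, a minimal sketch of the AttributeNodeFilter approach mentioned
in the quoted reply below (the attribute name, value and cache name are
illustrative, not taken from the thread):

import java.util.Collections;

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.util.AttributeNodeFilter;

public class NodeFilterExample {
    public static void main(String[] args) {
        // Server side: tag the node with a user attribute.
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setUserAttributes(Collections.singletonMap("node.group", "A"));

        Ignite ignite = Ignition.start(cfg);

        // Cache side: restrict the cache to nodes carrying that attribute,
        // so its data is never rebalanced to nodes of another group.
        CacheConfiguration<Integer, Object> ccfg = new CacheConfiguration<>("DEALIO_A");
        ccfg.setNodeFilter(new AttributeNodeFilter("node.group", "A"));

        ignite.getOrCreateCache(ccfg);
    }
}

A second cache with a different attribute value (e.g. "B") would stay on the
other group of nodes, while remaining visible to thin clients.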


On Tue, Jul 21, 2020 at 6:19 PM Evgenii Zhuravlev 
wrote:

> Hi,
>
> CacheRebalanceMode is responsible for a different thing - it comes into
> play when data needs to be rebalanced due to a topology (or baseline
> topology) change. It's not responsible for data distribution between nodes
> for put operations. So, when you insert data, part of this data belongs to
> partitions that are not assigned to the local node.
>
> To achieve what you want, you can just create 2 different caches with a
> NodeFilter:
> https://www.javadoc.io/doc/org.apache.ignite/ignite-core/latest/org/apache/ignite/util/AttributeNodeFilter.html
> Using that, you can avoid data movement between nodes, and your thin client
> will see these caches too.
>
> Evgenii
>
>
>
>
Wed, 15 Jul 2020 at 07:58, cparaskeva :
>
>> The setup: Hello folks, I have a simple Apache Ignite setup with two Ignite
>> instances configured as server nodes over C# and one Ignite instance as a
>> client node over Java.
>>
>> What is the goal: Populate data on instance 1 and instance 2 but avoid
>> movement of data between them. In other words, data received on each node
>> must stay on that node. Then use the Java client to run queries against the
>> two nodes, either combined (distributed join) or per node (using affinity).
>>
>> The issue: With one server node everything works as expected; however, with
>> more than one server node, the cluster's data is rebalanced between the
>> member nodes even though I have explicitly set the CacheRebalanceMode to
>> None, which should disable rebalancing between the nodes. The insert time
>> increases by 4x-10x, in proportion to each node's populated data.
>>
>> P.S. I have tried changing the cache mode from Partitioned to Local, where
>> each node keeps its data isolated in its internal H2 db; however, in that
>> case the Java client is unable to detect the nodes or read any data from
>> the cache of each node.
>>
>> Java Client Node
>>
>> IgniteConfiguration cfg = new IgniteConfiguration();
>> // Enable client mode.
>> cfg.setClientMode(true);
>>
>> // Setting up an IP Finder to ensure the client can locate the
>> servers.
>> TcpDiscoveryMulticastIpFinder ipFinder = new
>> TcpDiscoveryMulticastIpFinder();
>>
>> ipFinder.setAddresses(Collections.singletonList("127.0.0.1:47500
>> ..47509"));
>> cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
>>
>> // Configure Ignite to connect with .NET nodes
>> cfg.setBinaryConfiguration(new BinaryConfiguration()
>> .setNameMapper(new BinaryBasicNameMapper(true))
>> .setCompactFooter(true));
>>
>> // Start Ignite in client mode.
>> Ignite ignite = Ignition.start(cfg);
>>
>>
>> IgniteCache cache0 = ignite.cache(CACHE_NAME);
>> IgniteCache cache =
>> cache0.withKeepBinary();
>>
>> // execute some queries to nodes
>> C# Server Node
>>
>>
>>IIgnite _ignite =
>> Ignition.Start(IgniteUtils.DefaultIgniteConfig()));
>>
>> // Create new cache and configure queries for Trade binary
>> types.
>> // Note that there are no such classes defined.
>> var cache0 = _ignite.GetOrCreateCache> Trade>("DEALIO");
>>
>> // Switch to binary mode to work with data in serialized
>> form.
>> _cache = cache0.WithKeepBinary> IBinaryObject>();
>>
>>//populate some data ...
>>
>> public static IgniteConfiguration DefaultIgniteConfig()
>> {
>> return new IgniteConfiguration
>> {
>>
>>
>> PeerAssemblyLoadingMode =
>> PeerAssemblyLoadingMode.CurrentAppDomain,
>> BinaryConfiguration = new BinaryConfiguration
>> {
>> NameMapper = new BinaryBasicNameMapper { IsSimpleName
>> =
>> true },
>> CompactFooter = true,
>> TypeConfigurations = new[] {
>> new BinaryTypeConfiguration(typeof(Trade)) {
>> Serializer = new IgniteTradeSerializer()
>> }
>> }
>> },
>> DiscoverySpi = new TcpDiscoverySpi
>> {
>> IpFinder = new TcpDiscoveryMulticastIpFinder
>> {
>> Endpoints = new[] { "127.0.0.1:47500..47509" }
>> },
>> SocketTimeout = TimeSpan.FromSeconds(0.10)
>> },
>> Logger = new IgniteNLogLogger(),
>> CacheConfiguration = new[]{
>> new CacheConfiguration{
>>
>> PartitionLossPolicy=PartitionLossPolicy.Ignore,
>>   

can web session filter be configured to create session on demand

2020-07-22 Thread Cameron Braid
At the moment the web session filter creates a new session for any request
that is mapped to the filter.

Is it possible to make it create sessions only when
request.getSession(true) is called?

This is needed to support using an HTTP caching proxy server.

I have pages like / and /about which don't need a session, and I have areas
that always need a session, like /user and /admin.

Mapping the filter to /user and /admin is almost a solution; however, there
are also some other dynamic URLs that need access to the session in certain
cases based on request parameters.

So ideally the filter would be mapped to / with a 'create on demand'
configuration.

Cameron


Re: A question regarding cluster groups

2020-07-22 Thread Denis Magda
Hi,

A single cluster scattered across two distant data centers is a tricky
thing that leads to higher latencies and to the corner cases you're asking about.

As a general recommendation, each data center needs to have its own
cluster. The nodes of those two different clusters (belonging to the
different data centers) don't see each other. You connect your applications
to the data center you need. To replicate changes between the two independent
clusters, you can use GridGain Data Center Replication, Kafka, or something
similar.

-
Denis


On Wed, Jul 22, 2020 at 12:52 AM R S S Aditya Harish <
aditya.har...@zohocorp.com> wrote:

> Hi Alex, I've an important question to ask. Can you please answer this?
>
> Let us say I've two server nodes in one data center DC1 and two more
> server nodes in another data center DC2. The two data centers have some
> network delay between them.
>
>
> Now I'm using SQL select statements on caches which are replicated, and
> those caches' write synchronization mode is FULL_SYNC.
>
> At any given time we have working client nodes only in one DC, not both.
> Let's say we have two clients in DC1.
>
> So there are 6 nodes in total (2 client nodes and 2 server nodes in DC1,
> and 2 server nodes in DC2).
>
> Our use case is as follows:
>
> 1. The 2 clients should query only the 2 server nodes in DC1 and not the
> other 2 servers in DC2.
> 2. All the cache queries should be in FULL_SYNC with the 2 server nodes in
> DC1, and DC1-DC2 replication should be done in ASYNC mode.
> 3. One doubt I have: if, in the client node's DiscoverySpi, I list only
> (X, Y) as the server node IPs, would the queries always reach X and Y even
> though the entire topology contains X, Y, Z as server nodes?
>
> Could someone please suggest a solution for this?
>
>


Re: Load balanced CQ listener/Catch up missed events

2020-07-22 Thread Devakumar J
Thanks, Denis. I will explore this further.

Some more questions on distributed queues (see the sketch after the list):

1. When I create a queue, should it be created on each server node
separately, or is it created once cluster-wide?

2. If cluster-wide, can multiple server nodes publish to the same queue?

3. Also, is there a way to create the queue on the client side and store it
as part of the client node itself?
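
For context, a rough sketch of how a cluster-wide queue is usually created and
used (queue name, capacity and element type are illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CollectionConfiguration;

public class QueueSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // The queue is a cluster-wide structure: the first call with a
            // non-null configuration creates it, and any node (client or
            // server) calling ignite.queue() with the same name gets a
            // reference to the same queue and can publish to it.
            IgniteQueue<String> queue =
                ignite.queue("events", 0 /* unbounded */, new CollectionConfiguration());

            queue.add("some-event");
        }
    }
}

The queue contents live in the cluster (on the server nodes), not inside the
client process, so it can be created from and used by a client node as well.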

Thanks,
Devakumar J



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ContinuousQueryWithTransformer with custom return type

2020-07-22 Thread Devakumar J
Thanks Denis for the information. That works well for me.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


custom ignite-log4j.xml when using stock docker ignite image

2020-07-22 Thread Maxim Volkomorov
Hi!

Is there a simple way to use a custom ignite-log4j.xml config, referenced from
the ignite-config.xml passed via URI, when Ignite is running as the stock
docker image?

Same question for custom app.properties beans referenced from the config.xml.

Our docker run cmd:
docker run -it --net=localhost -e "CONFIG_URI=http://host/ignite-config.xml"
apacheignite/ignite
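
Not a definitive answer, but one hedged sketch of a possible direction (the
paths, the volume mount and the class below are illustrative and not verified
against the stock image): mount the log4j file into the container, e.g. with
"docker run -v /host/config:/opt/ignite/config ...", and reference it from the
configuration that CONFIG_URI points to. Assuming the ignite-log4j module is
on the classpath, the Java equivalent of that logger bean would be roughly:

import org.apache.ignite.IgniteCheckedException;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.logger.log4j.Log4JLogger;

public class CustomLoggerStart {
    public static void main(String[] args) throws IgniteCheckedException {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Path as seen inside the container (illustrative).
        cfg.setGridLogger(new Log4JLogger("/opt/ignite/config/ignite-log4j.xml"));

        Ignition.start(cfg);
    }
}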


Re: Ignite DefaultDataRegion config

2020-07-22 Thread Evgenii Zhuravlev
Hi. If you don't plan to use it at all, then yes, 40 MB should be fine.
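
For reference, a minimal Java sketch of that setup (region names and sizes are
illustrative); the same beans can equally be declared in the Spring XML config:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DataRegionSizing {
    public static void main(String[] args) {
        DataStorageConfiguration storageCfg = new DataStorageConfiguration();

        // Keep the unused default region small.
        storageCfg.setDefaultDataRegionConfiguration(new DataRegionConfiguration()
            .setName("Default_Region")
            .setMaxSize(40L * 1024 * 1024));        // ~40 MB

        // Named region that the caches reference via CacheConfiguration#setDataRegionName.
        storageCfg.setDataRegionConfigurations(new DataRegionConfiguration()
            .setName("App_Region")
            .setMaxSize(2L * 1024 * 1024 * 1024));  // 2 GB

        Ignition.start(new IgniteConfiguration().setDataStorageConfiguration(storageCfg));
    }
}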

Evgenii

Tue, 21 Jul 2020 at 22:17, kay :

> Hello!
>
> What size should I give the DefaultDataRegion?
> Every cache has a specific region (I'm not going to use the defaultDataRegion).
>
> Is 40MB enough if I will not use the defaultDataRegion?
>
> Thank you so much.
> I will wait for a reply!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: A question regarding cluster groups

2020-07-22 Thread adipro
Can someone please reply to this thread? It's kind of urgent.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


2.8.1 : Ignite Security : Cache_Put event generated from a remote_client user action has subject uuid of Node that executes the request

2020-07-22 Thread VeenaMithare
Hi Team,

1. I noticed that this issue (
https://issues.apache.org/jira/browse/IGNITE-12781) is not resolved in
2.8.1.

Could you advise how we can get audit information if a cache record
modification is done in DBeaver and the cache_put event contains the node id
instead of the remote_client subject id?

Please note this is a blocker issue for us in using Apache Ignite, since we
use DBeaver to update records sometimes.
If this is not resolved, could we kindly ask for this to be included in the
next release?

2. Even if the cache_put event did contain the remote_client user id, how
are we supposed to fetch it from the audit storage SPI?

The link below mentions the following:
http://apache-ignite-users.70518.x6.nabble.com/JDBC-thin-client-incorrect-security-context-td31354.html

public class AuditEventStorageSpi extends IgniteSpiAdapter implements EventStorageSpi { // class name illustrative
    @LoggerResource
    private IgniteLogger log;

    @Override
    public <T extends Event> Collection<T> localEvents(IgnitePredicate<T> p) {
        return null;
    }

    @Override
    public void record(Event evt) throws IgniteSpiException {
        if (evt.type() == EVT_MANAGEMENT_TASK_STARTED) {
            TaskEvent taskEvent = (TaskEvent) evt;

            // Resolve the security subject from the subject id carried by the event.
            SecuritySubject subj = taskEvent.subjectId() != null
                ? getSpiContext().authenticatedSubject(taskEvent.subjectId())
                : null;

            log.info("Management task started: [" +
                "name=" + taskEvent.taskName() + ", " +
                "eventNode=" + taskEvent.node() + ", " +
                "timestamp=" + taskEvent.timestamp() + ", " +
                "info=" + taskEvent.message() + ", " +
                "subjectId=" + taskEvent.subjectId() + ", " +
                "secureSubject=" + subj +
                "]");
        }
    }

    @Override
    public void spiStart(@Nullable String igniteInstanceName) throws IgniteSpiException {
        /* No-op. */
    }

    @Override
    public void spiStop() throws IgniteSpiException {
        /* No-op. */
    }
}

IgniteSpiContext exposes authenticatedSubject which, according to some
discussions, gets the subject *only for a node* (
http://apache-ignite-developers.2346864.n4.nabble.com/Security-Subject-of-thin-client-on-remote-nodes-td46029.html#a46412
).

/* securityContext(uuid) was added to the GridSecurityProcessor to get the
security context of the thin client. However, this is not exposed via the
IgniteSpiContext. */


3. The workaround I used is as follows. Please let me know if you see any
concerns with this approach:

a. Add the remote client subject into the AuthorizationContext of the
AuthenticationContext in the authenticate method of the security processor.

b. This AuthorizationContext is then put in a thread-local variable (see the
class AuthorizationContext):
private static ThreadLocal<AuthorizationContext> actx = new ThreadLocal<>();

c. The following is done in the storage SPI when a change is made in DBeaver:

c1. Capture the EVT_TX_STARTED event in the storage SPI. The thread that
generates this event contains the subject in its thread-local
AuthorizationContext. Store this in a cache that holds the mapping from
transaction id to security subject.

c2. Capture the cache_put event, link the transaction id in the cache_put
event to the transaction id in the EVT_TX_STARTED event, and get the subject
via this mapping.

c3. The transaction id in cache_put and the transaction id in EVT_TX_STARTED
could be the same, in which case it is a direct mapping.

c4. The transaction id in cache_put and the transaction id in EVT_TX_STARTED
could be different, in which case we need to find the near xid of the
transaction id in the cache_put event, and then find the security subject of
that near xid.


regards,
Veena.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


A question regarding cluster groups

2020-07-22 Thread adipro
Let us say I've two server nodes in one data center DC1 and two more server
nodes in another data center DC2. The two data centers have some network delay
between them.

Now I'm using SQL select statements on caches which are replicated, and those
caches' write synchronization mode is FULL_SYNC.

At any given time we have working client nodes only in one DC, not both. Let's
say we have two clients in DC1.

So there are 6 nodes in total (2 client nodes and 2 server nodes in DC1, and 2
server nodes in DC2).

Our use case is as follows:

1. The 2 clients should query only the 2 server nodes in DC1 and not the other
2 servers in DC2.
2. All the cache queries should be in FULL_SYNC with the 2 server nodes in DC1,
and DC1-DC2 replication should be done in ASYNC mode.
3. One doubt I have: if, in the client node's DiscoverySpi, I list only (X, Y)
as the server node IPs, would the queries always reach X and Y even though the
entire topology contains X, Y, Z as server nodes?

Could someone please suggest a solution for this?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Too much network latency issue

2020-07-22 Thread adipro
Can someone please reply to this urgently?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Block until partition map exchange is complete

2020-07-22 Thread ssansoy
Hi, could the behaviour I have observed be captured by this bug:

https://issues.apache.org/jira/browse/IGNITE-9841

"Note, ScanQuery exhibits the same behavior - returns partial results when
some partitions are lost.  Not sure if solution would be related or needs to
be tracked and fixed under a separate ticket."





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 2.8.1 - Loading Plugin Provider

2020-07-22 Thread VeenaMithare
Hi Alex,

Thanks for the reply.

1. I noticed that the ML plugin uses getPluginConfigurations, which seems to
have been deprecated:

https://github.com/apache/ignite/blob/master/modules/ml/src/main/java/org/apache/ignite/ml/util/plugin/MLPluginProvider.java

Would that mean that the preferred way to load a PluginProvider is through
setPluginProvider in the latest Ignite version?

2. Why did the earlier versions of Ignite use the service loader framework to
load plugins?

regards,
Veena.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


A question regarding cluster groups

2020-07-22 Thread R S S Aditya Harish
Hi Alex, I've an important question to ask. Can you please answer this?

Let us say I've two server nodes in one data center DC1 and two more server
nodes in another data center DC2. The two data centers have some network delay
between them.

Now I'm using SQL select statements on caches which are replicated, and those
caches' write synchronization mode is FULL_SYNC.

At any given time we have working client nodes only in one DC, not both. Let's
say we have two clients in DC1.

So there are 6 nodes in total (2 client nodes and 2 server nodes in DC1, and 2
server nodes in DC2).

Our use case is as follows:

1. The 2 clients should query only the 2 server nodes in DC1 and not the other
2 servers in DC2.

2. All the cache queries should be in FULL_SYNC with the 2 server nodes in DC1,
and DC1-DC2 replication should be done in ASYNC mode.

3. One doubt I have: if, in the client node's DiscoverySpi, I list only (X, Y)
as the server node IPs, would the queries always reach X and Y even though the
entire topology contains X, Y, Z as server nodes?

Could someone please suggest a solution for this?

Re: 3rd party persistence: Mapping with RDBMS tables that don't have primary keys

2020-07-22 Thread Alex Panchenko
Hi Denis,
Thanks for your reply.

>>How will you access such records in Ignite? SQL lookups?
Yes, using SQL lookups.

>>The default CacheStore implementation that writes down changes to a
relational database needs to be overridden.
I'm not going to write any new records or update existing ones for such
tables through Ignite, so I can skip the overrides in the CacheStore
implementation, right? (I'm using the default implementation at the moment.)

>>Obviously, Ignite still requires a primary key and that can be an integer
number incremented by your application:
https://apacheignite.readme.io/docs/id-generator
This may work. Can you point me to an example where the Ignite config is in
an XML file?
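
For reference, a minimal Java sketch of the atomic-sequence approach suggested
above (the sequence name is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteAtomicSequence;
import org.apache.ignite.Ignition;

public class IdGeneratorSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Cluster-wide sequence, created on first access.
            IgniteAtomicSequence seq = ignite.atomicSequence("keyValuesSeq", 0, true);

            // Use the next value as the surrogate primary key for a row that has none.
            long surrogateKey = seq.incrementAndGet();

            System.out.println("next key: " + surrogateKey);
        }
    }
}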

Also, at the moment I'm using the following workaround. It seems to work,
but I'm still testing it:

"keyType"="com.some.package.KeyValuesTableKey"

KeyValuesTableKey {
key: UUID
value: String
}


Thank you



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


java.lang.StackOverflowError when put a value on Ignite cache 2.7.6

2020-07-22 Thread abraham
I am using Ignite 2.7.6 in a cluster of 2 servers with persistence enabled.

It is working fine, but suddenly a java.lang.StackOverflowError happens. Do
you know if it is a bug in the 2.7.6 version, or maybe I am doing something
wrong?

The error trace is attached: ignite-2.txt



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/