Re: Odd behavior/bug with DataRegion

2020-07-23 Thread Victor
Update,

Interestingly, I pushed the value right to the wire, setting 'emptyPagesPoolSize' to
510 (< 512), and that seems to have done the trick. At least in my past 5 test
runs the preload has gone through fine.

Right now I have set it to pretty much the max value, since there is no good
way to identify what the runtime max value of an instance would be while
creating a cache.

Any other ideas on how to go about setting a safe value for this
property?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite with istio/envoy mTLS

2020-07-23 Thread Saikiran Daripelli
Thank you.

From: Denis Magda 
Reply to: "user@ignite.apache.org" 
Date: Friday, 24 July 2020 at 12:43 AM
To: user 
Subject: Re: Apache Ignite with istio/envoy mTLS

To support this deployment model you would need to have a Kubernetes Service 
per Ignite server node so that remote thick clients can connect to any server 
from outside. Also, you would need to enable the following setting on the
client side (the setting will be released in Ignite 2.9):
https://www.gridgain.com/docs/latest/developers-guide/clustering/running-client-nodes-behind-nat
 

Thin clients might be a much better choice for those applications that connect
from outside: they are easier to set up and manage. Inside K8s you can keep
using the thick clients.

-
Denis


On Thu, Jul 23, 2020 at 11:36 AM akorensh  wrote:
Hi,
   At this point (2.8.1), Apache Ignite does not support thick clients being
outside of the K8s cluster while servers are inside the cluster.
Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/ 


Re: Checking dataregion limits

2020-07-23 Thread Victor
Correction: max size is indeed in MB; 'offHeapSize' is in bytes, which maps
to max size. I was looking for a similar attribute for initial size in bytes,
but I didn't find one.

That is OK, I guess; the MB value works too.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Odd behavior/bug with DataRegion

2020-07-23 Thread Victor
On further reading around DataRegionConfiguration. I see
'setEmptyPagesPoolSize', it says,

"Increase this parameter if cache can contain very big entries (total size
of pages in this pool should be enough to contain largest cache entry).
Increase this parameter if IgniteOutOfMemoryException occurred with enabled
page eviction."

The default value as I see it is 100. So I initially bumped it up to 1000, but it
failed saying the value can't go beyond 512. So I set it to 500.

I still get the same OOM exception when I am preloading the cache.

Not sure if this is a bug or some config is missing.
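
For reference, a minimal Spring XML sketch of where this knob lives (the region name and sizes are assumptions, not taken from the poster's actual config):

```xml
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="Account"/>                 <!-- assumed name -->
    <property name="maxSize" value="#{20 * 1024 * 1024}"/>  <!-- assumed size -->
    <!-- must stay below the internal cap the error reports (512 here) -->
    <property name="emptyPagesPoolSize" value="500"/>
</bean>
```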




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Checking dataregion limits

2020-07-23 Thread Victor
Thanks Evgenii, this helps get some perspective.

It's confusing with maxSize being shown in bytes and the initialSize being
shown in MB.

Does enabling metrics entail any performance drop, or can it be enabled in
production as well? I'll run my tests anyway.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Enabling swapPath causes invoking shutdown hook

2020-07-23 Thread 38797715

Hi community,

When swapPath is configured in DataRegionConfiguration and maxSize is 
greater than the physical memory (that is, swap space is enabled), if 
the amount of data exceeds the physical memory, a node failure will 
occur. The log is as follows:


[08:29:14,212][INFO][Thread-24][G] Invoking shutdown hook...

I think the node process may be killed by the OS. What parameters can be 
adjusted?
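
For reference, swap is enabled per data region roughly like the sketch below (the region name, path, and size are assumptions). Whether the process then survives memory pressure also depends on OS-level settings such as memory overcommit, which are outside Ignite's control:

```xml
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="SwapRegion"/>                     <!-- assumed name -->
    <property name="maxSize" value="#{64L * 1024 * 1024 * 1024}"/> <!-- larger than RAM -->
    <property name="swapPath" value="/mnt/swap/ignite"/>           <!-- assumed path -->
</bean>
```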





Odd behavior/bug with DataRegion

2020-07-23 Thread Victor
Hi,

I am running an app with a custom data region for one of the caches with the
following configuration:
max size = 20mb (just for example)
persistence = false
expiry policy = Random_2_LRU

I am using a 3rd party persistence (RDBMS).

When I use this configuration and add data, all works fine. E.g., say I put
50k records: all 50k make it to the DB, while only 30k stay in the cache
(based on the above expiration policy). All's good so far.
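
A data region matching the setup described above might be sketched in Spring XML as follows. The region name is an assumption, and note that Random_2_LRU is Ignite's page eviction mode (configured via pageEvictionMode) rather than an expiry policy:

```xml
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="Account"/>                 <!-- assumed name -->
    <property name="maxSize" value="#{20 * 1024 * 1024}"/>  <!-- 20 MB, for example -->
    <property name="persistenceEnabled" value="false"/>
    <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
</bean>
```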

However, when I start the app all over again, I preload the cache with data
from the DB. The DB has 50k records, so I expect the cache to continue to
apply the expiration policy as it nears its 90% data region capacity.

However, as I am preloading, I am monitoring the MBeans for the data region
and the overall entry count. I see the data region allocation go above 20mb
and the entry count go above 40k. And then it just crashes with OOM with the
below error:

[2020-07-23T18:41:43,286][ERROR][Test.Thread.4][ignite] Critical system
error detected. Will be handled accordingly to configured handler
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
[SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
failureCtx=FailureContext [type=CRITICAL_ERROR, err=class
o.a.i.i.mem.IgniteOutOfMemoryException: Out of memory in data region
[name=Account, initSize=19.0 MiB, maxSize=20.0 MiB,
persistenceEnabled=false] Try the following:
  ^-- Increase maximum off-heap memory size
(DataRegionConfiguration.maxSize)
  ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
  ^-- Enable eviction or expiration policies]]
org.apache.ignite.internal.mem.IgniteOutOfMemoryException: Out of memory in
data region [name=be.gen.Concepts.Account, initSize=19.0 MiB, maxSize=20.0
MiB, persistenceEnabled=false] Try the following:
  ^-- Increase maximum off-heap memory size
(DataRegionConfiguration.maxSize)
  ^-- Enable Ignite persistence (DataRegionConfiguration.persistenceEnabled)
  ^-- Enable eviction or expiration policies
at
org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.allocatePage(PageMemoryNoStoreImpl.java:322)
[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.allocateDataPage(AbstractFreeList.java:570)
[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.writeSinglePage(AbstractFreeList.java:688)
[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.insertDataRow(AbstractFreeList.java:582)
[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeList.insertDataRow(CacheFreeList.java:74)
[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeList.insertDataRow(CacheFreeList.java:35)
[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.processors.cache.persistence.RowStore.addRow(RowStore.java:108)
[ignite-core-2.8.1.jar:2.8.1]
...

Not sure why the expiration policy is not kicking in. Is this a bug?

Any inputs appreciated.

Victor



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Apache Ignite with istio/envoy mTLS

2020-07-23 Thread Denis Magda
To support this deployment model you would need to have a Kubernetes
Service per Ignite server node so that remote thick clients can connect to
any server from outside. Also, you would need to enable the following
setting on the client side (the setting will be released in Ignite 2.9):
https://www.gridgain.com/docs/latest/developers-guide/clustering/running-client-nodes-behind-nat

Thin clients might be a much better choice for those applications that
connect from outside: they are easier to set up and manage. Inside K8s you
can keep using the thick clients.

-
Denis


On Thu, Jul 23, 2020 at 11:36 AM akorensh  wrote:

> Hi,
>At this point (2.8.1), Apache Ignite does not support thick clients being
> outside of the K8s cluster while servers are inside the cluster.
> Thanks, Alex
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: custom ignite-log4j.xml when using stock docker ignite image

2020-07-23 Thread Evgenii Zhuravlev
Hi Maxim,

Do you plan to use persistence or attach any disk for the work directory?
If so, you can just put the configuration file there and reference it from
inside the Ignite XML configuration file.

Evgenii

On Wed, Jul 22, 2020 at 08:13, Maxim Volkomorov <2201...@gmail.com> wrote:

> Hi!
>
> Is there a simple way to use a custom ignite-log4j.xml config, nested in
> URI ignite-config.xml, when ignite is running like a stock docker image?
>
> Same question for custom app.properties beans, nested in config.xml.
>
> Our docker run cmd:
> docker run -it --net=localhost -e "CONFIG_URI=
> http://host/ignite-config.xml" apacheignite/ignite
>


Re: Apache Ignite with istio/envoy mTLS

2020-07-23 Thread akorensh
Hi, 
   At this point (2.8.1), Apache Ignite does not support thick clients being
outside of the K8s cluster while servers are inside the cluster.
Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 2.8.1 - Loading Plugin Provider - Conflicting documentation

2020-07-23 Thread akorensh
Veena,
   The recommended way is to use the service provider mechanism, as detailed here:
https://apacheignite.readme.io/docs/plugins

  The service provider mechanism does not use IgniteConfiguration.getPluginConfiguration
internally.

 Here is a link to the function that does the loading:
https://github.com/apache/ignite/blob/45e525865c6c93b999ab5030d7568add0bde96a2/modules/core/src/main/java/org/apache/ignite/internal/util/IgniteUtils.java#L1021
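
For completeness, the service-provider mechanism referenced above is wired up through a standard JDK ServiceLoader descriptor on the classpath; a sketch (the provider class name is hypothetical) looks like:

```
# File: META-INF/services/org.apache.ignite.plugin.PluginProvider
# One fully-qualified provider implementation per line
com.example.MyPluginProvider
```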





Thanks, Alex



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


apache-ignite compatibility with armhf(32-bit arm linux)

2020-07-23 Thread rakshita04
I need to use apache-ignite for my armhf (32-bit ARM) Linux application.
Is apache-ignite compatible with 32-bit ARM (armhf) Linux?
Can I use the apache-ignite Debian package present on the website for armhf?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


apache ignite compatibility with armhf(arm 32-bit)

2020-07-23 Thread rakshita04
I need to use Apache Ignite for my Debian Linux 32-bit version.
Is it compatible with armhf (32-bit ARM Linux)?
Can I use the apache-ignite Debian package for the same?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance on Windows vs Linux

2020-07-23 Thread Nigel Street
Thanks very much everyone.

> On 23 Jul 2020, at 18:09, Pavel Tupitsyn  wrote:
> 


Re: Performance on Windows vs Linux

2020-07-23 Thread Pavel Tupitsyn
Linux often outperforms Windows in filesystem operations.
This is a big generalization, of course - every workload is different and
should be benchmarked.
But this is worth keeping in mind when using persistence.

Also, as Andrei said, most users run Ignite on Linux in production,
and Ignite documentation provides some performance tuning tips specifically
for Linux [1] [2]

[1] https://apacheignite.readme.io/docs/performance-tips
[2] https://apacheignite.readme.io/docs/durable-memory-tuning

On Thu, Jul 23, 2020 at 8:03 PM Andrei Aleksandrov 
wrote:

> Got it. If everything is the same then it looks like I can't suggest
> something to you here but as I know most users run their solutions on
> Linux OS. Probably it can be related to some performance benefits of
> Linux OS but I don't think that this difference should be significant.
>
> 7/23/2020 7:34 PM, njcstreet wrote:
> > Thanks. Not so easy to describe the PoC due to confidentiality. But it
> > involves writing a lot of data as fast as possible at the start of the
> day,
> > with persistence enabled, then with incremental updates throughout the
> day,
> > and with many user queries on top through SQL (sorry I know that is
> probably
> > not that helpful).
> >
> > I have 6 machines all with decent specification, SSD storage and 10gb
> > network connectivity. I was just wondering if there is a particular
> benefit
> > to deploying one OS over another. If there isn’t much in it, I will go
> with
> > the one I am familiar with.
> >
> > Regards,
> >
> > Nigel
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Performance on Windows vs Linux

2020-07-23 Thread Andrei Aleksandrov
Got it. If everything is the same then it looks like I can't suggest
anything to you here, but as far as I know most users run their solutions on
Linux. It may be related to some performance benefits of Linux, but I don't
think the difference should be significant.


7/23/2020 7:34 PM, njcstreet wrote:

Thanks. Not so easy to describe the PoC due to confidentiality. But it
involves writing a lot of data as fast as possible at the start of the day,
with persistence enabled, then with incremental updates throughout the day,
and with many user queries on top through SQL (sorry I know that is probably
not that helpful).

I have 6 machines all with decent specification, SSD storage and 10gb
network connectivity. I was just wondering if there is a particular benefit
to deploying one OS over another. If there isn’t much in it, I will go with
the one I am familiar with.

Regards,

Nigel



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Performance on Windows vs Linux

2020-07-23 Thread njcstreet
Thanks. Not so easy to describe the PoC due to confidentiality. But it
involves writing a lot of data as fast as possible at the start of the day,
with persistence enabled, then with incremental updates throughout the day,
and with many user queries on top through SQL (sorry I know that is probably
not that helpful).

I have 6 machines all with decent specification, SSD storage and 10gb
network connectivity. I was just wondering if there is a particular benefit
to deploying one OS over another. If there isn’t much in it, I will go with
the one I am familiar with.

Regards,

Nigel



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Load balanced CQ listener/Catch up missed events

2020-07-23 Thread Denis Magda
1. It depends on your application needs. Most likely a single distributed
queue will work if it needs to be accessed by any remote producer or
consumer.

2. Yes, refer to the documentation from my previous email.

3. What do you mean under "storing a part of the client node"?

-
Denis


On Wed, Jul 22, 2020 at 9:28 AM Devakumar J 
wrote:

> Thanks Denis. I will explore further on this.
>
> Some more questions in distributed queue.
>
> 1. When i create a queue, should it be created at each server node
> separately or it should be created once cluster wide?
>
> 2) If cluster wide, can multiple server nodes publish to the same queue?
>
> 3) Also is there a way to create the queue at client side and store a part
> of client node itself?
>
> Thanks,
> Devakumar J
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Performance on Windows vs Linux

2020-07-23 Thread Andrei Aleksandrov

Hi,

I guess that you should take a look from another point of view. You have 
two servers that should be compared to each other. Take a look at the 
following things:


1) Disk speed and size
2) Network latency
3) CPU and RAM capacities

I guess that these things will be more important than the operating system
used.


However, can you describe your POC details? Probably it can help us to 
advise you something else.


BR,
Andrei

7/23/2020 12:45 PM, njcstreet wrote:

Hi,

I am about to start a proof of concept on Ignite and we have the option of
deploying either on Windows Server 2016 or Red Hat Linux 7. I know that
Ignite can be deployed on both, but is there reason to pick one over the
other?

Is performance better on a particular environment? We are using native
persistence - I think that Direct IO can be enabled, but only on Linux?

Thanks.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Change in CacheStore Serialization from 2.7.6 to 2.8.x breaks Spring Injected dataSource

2020-07-23 Thread Yohan Fernando
I managed to resolve this by using the @SpringResource annotation like below 
(this was not required in Ignite 2.7.6),

@SpringResource(resourceName = "dataSource")
transient DataSource dataSource;

Thanks

Yohan

From: Yohan Fernando
Sent: 22 July 2020 17:14
To: 'user@ignite.apache.org' 
Subject: RE: Change in CacheStore Serialization from 2.7.6 to 2.8.x breaks 
Spring Injected dataSource

Hi all,

I have created a test project to reproduce this issue.

Running the project (Run.java)  using Ignite 2.8.1 produces the following 
error. Under 2.7.6 no such error occurs.

Exception in thread "main" javax.cache.integration.CacheLoaderException: 
java.lang.NullPointerException
at 
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:545)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.localLoadCache(GridDhtCacheAdapter.java:636)
at 
org.apache.ignite.internal.processors.cache.GridCacheProxyImpl.localLoadCache(GridCacheProxyImpl.java:226)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJob.localExecute(GridCacheAdapter.java:6052)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$LoadCacheJobV2.localExecute(GridCacheAdapter.java:6101)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter$TopologyVersionAwareJob.execute(GridCacheAdapter.java:6735)
at 
org.apache.ignite.compute.ComputeJobAdapter.call(ComputeJobAdapter.java:131)
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor$C2.execute(GridClosureProcessor.java:1855)
at 
org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:596)
at 
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:7005)
at 
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:590)
at 
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:519)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at 
org.apache.ignite.internal.processors.job.GridJobProcessor.processJobExecuteRequest(GridJobProcessor.java:1293)
at 
org.apache.ignite.internal.processors.task.GridTaskWorker.sendRequest(GridTaskWorker.java:1429)
at 
org.apache.ignite.internal.processors.task.GridTaskWorker.processMappedJobs(GridTaskWorker.java:664)
at 
org.apache.ignite.internal.processors.task.GridTaskWorker.body(GridTaskWorker.java:536)
at 
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at 
org.apache.ignite.internal.processors.task.GridTaskProcessor.startTask(GridTaskProcessor.java:829)
at 
org.apache.ignite.internal.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:497)
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor.callAsync(GridClosureProcessor.java:449)
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor.callAsync(GridClosureProcessor.java:420)
at 
org.apache.ignite.internal.processors.closure.GridClosureProcessor.callAsync(GridClosureProcessor.java:404)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.globalLoadCacheAsync(GridCacheAdapter.java:4020)
at 
org.apache.ignite.internal.processors.cache.GridCacheAdapter.globalLoadCache(GridCacheAdapter.java:3993)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.loadCache(IgniteCacheProxyImpl.java:387)
at 
org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.loadCache(GatewayProtectedCacheProxy.java:311)
at com.example.Run.main(Run.java:13)
Caused by: java.lang.NullPointerException
at 
com.example.TraderCacheStore.loadCache(TraderCacheStore.java:26)
at 
org.apache.ignite.internal.processors.cache.store.GridCacheStoreManagerAdapter.loadCache(GridCacheStoreManagerAdapter.java:519)
... 27 more

Please let me know if you have any questions.

Thanks

Yohan
From: Yohan Fernando
Sent: 21 July 2020 16:13
To: 'user@ignite.apache.org' 
mailto:user@ignite.apache.org>>
Subject: Change in CacheStore Serialization from 2.7.6 to 2.8.x breaks Spring 
Injected dataSource

Hello All,

We are migrating from Ignite 2.7.6 to 2.8.1 and have hit an issue where
CacheStore implementations that include Spring-injected DataSource objects
end up with those data sources being null. After investigation, it appears
that there is a change in behaviour in Ignite 2.8.x where the CacheStore is
serialized and therefore loses the injected Spring references. This was

Re: 2.8.1 : Ignite Security : Cache_Put event generated from a remote_client user action has subject uuid of Node that executes the request

2020-07-23 Thread Andrei Aleksandrov

Hi Veena,

Indeed, it looks like the current problem wasn't solved; it seems there are
not enough people interested in this fix. However, Ignite is an open-source
community: you can make a patch for yourself or even contribute it to the
community.


Unfortunately, I don't think that somebody on the user mailing list can
help here. You can try asking one more time on the developer mailing list.


Also, you can try to investigate some third-party security plugins if this
is important to you.


BR,
Andrei

7/22/2020 4:17 PM, VeenaMithare wrote:

Hi Team,

1. I noticed that this issue (
https://issues.apache.org/jira/browse/IGNITE-12781) is not resolved in
2.8.1.

Could you guide how can we get audit information if a cache record
modification is done on dbeaver and the cache_put event contains the node id
instead of the remote_client subject id ?

Please note this is a blocker issue for us to use Apache Ignite , since we
use dbeaver to update records sometimes.
If this is not resolved, could we kindly ask this to be included in the next
release.

2. Even if the cache_put event did contain the remote_client user id , how
are we supposed to fetch it from the auditstoragespi ?

The below link mentions
http://apache-ignite-users.70518.x6.nabble.com/JDBC-thin-client-incorrect-security-context-td31354.html

public class EventStorageSpi extends IgniteSpiAdapter implements
EventStorageSpi {
 @LoggerResource
 private IgniteLogger log;

 @Override
 public <T extends Event> Collection<T> localEvents(IgnitePredicate<T> p)
{
 return null;
 }

 @Override
 public void record(Event evt) throws IgniteSpiException {
 if (evt.type() == EVT_MANAGEMENT_TASK_STARTED) {
 TaskEvent taskEvent = (TaskEvent) evt;

 SecuritySubject subj = taskEvent.subjectId() != null
 ?
getSpiContext().authenticatedSubject(taskEvent.subjectId())
 : null;

 log.info("Management task started: [" +
 "name=" + taskEvent.taskName() + ", " +
 "eventNode=" + taskEvent.node() + ", " +
 "timestamp=" + taskEvent.timestamp() + ", " +
 "info=" + taskEvent.message() + ", " +
 "subjectId=" + taskEvent.subjectId() + ", " +
 "secureSubject=" + subj +
 "]");
 }
 }

 @Override
 public void spiStart(@Nullable String igniteInstanceName) throws
IgniteSpiException {
 /* No-op. */
 }

 @Override
 public void spiStop() throws IgniteSpiException {
 /* No-op. */
 }
}

IgniteSpiContext exposes authenticatedSubject which according to some
discussions gets the subject *only for node* . (
http://apache-ignite-developers.2346864.n4.nabble.com/Security-Subject-of-thin-client-on-remote-nodes-td46029.html#a46412
)

/* securityContext(uuid) was added to the GridSecurityProcessor to get the
security context of the thin client. However, this is not exposed via the
IgniteSpiContext. */


3. The workaround I did was as follows. Please let me know if you see any
concerns on this approach -
a. Add the remoteclientsubject into the authorizationcontext of the
authenticationcontext in the authenticate method of the securityprocessor.

b. This authorizationcontext is now put in a threadlocal variable ( Check
the class AuthorizationContext )
private static ThreadLocal<AuthorizationContext> actx = new ThreadLocal<>();

c. The following has been done in the storagespi when a change is made in
the dbeaver,
c1. capture the EVT_TX_STARTED in the storage spi. The thread that generates
this event contains the subject in its threadlocal authorizationcontext.
Store this in a cache that holds the mapping transaction id to security
subject.

c2. capture the cache_put event and link the transaction id in the cache_put
event to the transaction id in the EVT_TX_STARTED and get the subject by
this mapping.

c3. The transactionid in cache_put and the transactionid in EVT_TX_STARTED
could be same, in which case it is a direct mapping

c4. The transactionid in cache_put and the transactionid in EVT_TX_STARTED
could be different, in which case it is a case of finding the nearxid of the
transactionid in the cacheput event. And then find the security subject of
the nearxid


regards,
Veena.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Checking dataregion limits

2020-07-23 Thread Evgenii Zhuravlev
Hi,

>What is the best way to check for data region limits.

*How about DataRegionMetricsMXBeanImpl.getMaxSize ?*

1. Why does max size not show what i exactly set. E.g. if i set the size as
20 mb (i.e. 20 * 1024 * 1024 = 20971520), but the value for OffHeapSize is
19922944. So why not exact?

*I think it might be an overhead for metadata information*

2. The Initial size under the same mbean show simply 19, where as i set it
to 19 mb (in bytes). Any idea why is that?

*Because this metric shows it in megabytes: dataRegCfg.getInitialSize()
/ (1024 * 1024)*
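
In other words, the MBean applies plain integer division by 1 MiB to the configured byte value; a minimal sketch (class and method names are mine, for illustration):

```java
public class RegionMetricsMath {
    // The InitialSize MBean attribute is the configured byte value divided by 1 MiB
    static long initialSizeMb(long configuredBytes) {
        return configuredBytes / (1024 * 1024);
    }

    public static void main(String[] args) {
        // 19 MB configured in bytes shows up as plain "19" in the MBean
        System.out.println(initialSizeMb(19L * 1024 * 1024));
    }
}
```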

3. I expected "OffheapUsedSize" to be incremented everytime i add data to
this cache till it hits the max Size. However, it always stays at 0. The
only value increments is "TotalAllocatedSize". Is that the right attribute
to check for data size increments or should other attributes be checked?

*Are you sure that you have metrics enabled for this DataRegion?*
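
Data region metrics are off by default; enabling them for the region in question might look like the sketch below (the region name is an assumption):

```xml
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
    <property name="name" value="Account"/>       <!-- assumed name -->
    <property name="metricsEnabled" value="true"/>
</bean>
```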

4. With no data yet, "TotalAllocatedSize" still shows some amount of
allocation. Like in the above case of max size of 20 mb, i could see
TotalAllocatedSize already at 8 mb, even before data was added to the cache.

*Ignite preallocates data by chunks.*

5. Finally if "TotalAllocatedSize" is indeed the attribute to track the size
increment, i should expect eviction to kick in when its value reaches 90% of
the max size. Is this understanding correct?

*Well, not really. If you have configured Eviction Policy, then you
can also configure it there, but TotalAllocatedSize won't be reduced
after the eviction, it can only grow. *


*Best Regards,*

*Evgenii*


On Tue, Jul 21, 2020 at 21:06, Victor  wrote:

> Hi,
>
> What is the best way to check for data region limits. Currently i am using
> below mbeans attributes to monitor this,
>
> 1. CacheClusterMetricsMXBeanImpl / CacheLocalMetricsMXBeanImpl
> Size - provides the total entries.
>
> 2. DataRegionMetricsMXBeanImpl
> OffHeapSize - Shows close to the max size i have set for the cache. Not
> exact though.
> TotalAllocatedSize - Seems to increase as data is added to the cache.
>
> Few queries,
> 1. Why does max size not show what i exactly set. E.g. if i set the size as
> 20 mb (i.e. 20 * 1024 * 1024 = 20971520), but the value for OffHeapSize is
> 19922944. So why not exact?
> 2. The Initial size under the same mbean show simply 19, where as i set it
> to 19 mb (in bytes). Any idea why is that?
> 3. I expected "OffheapUsedSize" to be incremented everytime i add data to
> this cache till it hits the max Size. However, it always stays at 0. The
> only value increments is "TotalAllocatedSize". Is that the right attribute
> to check for data size increments or should other attributes be checked?
> 4. With no data yet, "TotalAllocatedSize" still shows some amount of
> allocation. Like in the above case of max size of 20 mb, i could see
> TotalAllocatedSize already at 8 mb, even before data was added to the
> cache.
> 5. Finally if "TotalAllocatedSize" is indeed the attribute to track the
> size
> increment, i should expect eviction to kick in when its value reaches 90%
> of
> the max size. Is this understanding correct?
>
> I'll run some more tests.
>
> Victor
>
> I'll tr
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: third-party persistance and junction table

2020-07-23 Thread Andrei Aleksandrov

Hi,

Unfortunately, Ignite doesn't support this kind of relation out of the
box. Ignite just translates the statement to the third-party data storage
that is used as the cache store.


It's expected that inserts and updates will be rejected if they
break some rules.


BR,
Andrei
7/21/2020 11:16 AM, Bastien Durel wrote:

Hello,

I have a junction table in my model, and used the web console to
generate ignite config and classes from my SQL database

-> There is a table user with id (long) and some data
-> There is a table role with id (long) and some data
-> There is a table user_role with user_id (fk) and role_id (fk)

Reading cache from table works, I can query ignite with jdbc and I get
my relations as expected.

But if I want to add a new relation, the query :
insert into "UserRoleCache".user_role(USER_ID, ROLE_ID) values(6003, 2)
is translated into this one, sent to postgresql :
UPDATE public.user_role SET  WHERE (user_id=$1 AND role_id=$2)

Which obviously is rejected.

The web console generated a cache for this table, with UserRole
& UserRoleKey types, which each contains userId and roleId Long's.

Is there a better (correct) way to handle these many-to-many relations
in ignite (backed by RDBMS) ?

Regards,



Re: java.lang.StackOverflowError when put a value on Ignite cache 2.7.6

2020-07-23 Thread Evgenii Zhuravlev
Hi,

Did you find the object which caused this error? Can you share the
reproducer with us?

Thank you,
Evgenii


On Tue, Jul 21, 2020 at 23:15, abraham  wrote:

> I am using Ignite 2.7.6 in a cluster of 2 servers with persistence
> enabled.
>
>
>
> It is working fine but suddenly a java.lang.StackOverflowError happens. Do
> you know if it is a bug in 2.7.6 version or maybe I am doing something
> wrong?
>
> Error trace is attached.  ignite-2.txt
> 
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Cache query exception when using generic type: class java.util.ArrayList cannot be cast to

2020-07-23 Thread Andrei Aleksandrov

Hi,

You can put different types of objects in your cache because of the
specifics of its implementation. But this possibility can break your
ScanQuery, because you expect to see only StationDto objects in the cache.

I guess that previously you put an ArrayList inside the cache and then you
put StationDto.

Please note that if you are going to change the ValueType of the cache,
you must destroy it and then create it with a new configuration.

I guess values of different types in the same cache are the reason for your
issue.


BR,
Andrei

7/16/2020 4:41 PM, xingjl6280 wrote:

hi team,

Please kindly advise.
Below is my code and exception.
Btw, if I use ScanQuery and List, no error.

Something wrong with the classloader? The normal cache put and get work well
for my class; data is deserialized to my class automatically.


thank you


My code:
***
cache.put(ProjectCacheConst.STATION_PREFIX+"1", new StationDto());
cache.put(ProjectCacheConst.STATION_PREFIX+"2", new StationDto());
cache.put(ProjectCacheConst.STATION_PREFIX+"3", new StationDto());

ScanQuery<String, StationDto> scanQuery = new ScanQuery<>(
 (k, v) -> k.startsWith(ProjectCacheConst.STATION_PREFIX) &&
nonNull(v));

List<StationDto> list = getCache(projectCode).query(scanQuery,
Cache.Entry::getValue).getAll();
***

Exception:

org.apache.ignite.IgniteException: class java.util.ArrayList cannot be cast
to class com.hh.sd.rtms.h_dto.map.StationDto (java.util.ArrayList is in
module java.base of loader 'bootstrap'; com.hh.sd.rtms.h_dto.map.StationDto
is in unnamed module of loader
org.apache.catalina.loader.ParallelWebappClassLoader @597bc1c6)
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$InternalScanFilter.apply(GridCacheQueryManager.java:3232)
~[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.advance(GridCacheQueryManager.java:3108)
~[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager$ScanQueryIterator.onHasNext(GridCacheQueryManager.java:2997)
~[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.util.GridCloseableIteratorAdapter.hasNextX(GridCloseableIteratorAdapter.java:53)
~[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.util.lang.GridIteratorAdapter.hasNext(GridIteratorAdapter.java:45)
~[ignite-core-2.8.1.jar:2.8.1]
at
org.apache.ignite.internal.processors.cache.QueryCursorImpl.getAll(QueryCursorImpl.java:123)
~[ignite-core-2.8.1.jar:2.8.1]
at
com.hh.sd.rtms.f_data_service.ProjectCacheServiceBean.getAllStations(ProjectCacheServiceBean.java:166)
~[rtms-core-0.1-SNAPSHOT.jar:na]
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native
Method) ~[na:na]
at
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
~[na:na]
at
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:na]
at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]
at
org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:344)
~[spring-aop-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at
org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:198)
~[spring-aop-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
~[spring-aop-5.2.3.RELEASE.jar:5.2.3.RELEASE]
at
org.springframework.aop.aspectj.MethodInvocationProceedingJoinPoint.proceed(MethodInvocationProceedingJoinPoint.java:88)
~[spring-aop-5.2.3.RELEASE.jar:5.2.3.RELEASE]





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: cache.getAsync() blocks if cluster is not activated.

2020-07-23 Thread Andrei Aleksandrov

Hi,

I don't think that it should hang, because no cache operations are
allowed when the cluster isn't activated. It should throw some kind of
CacheException.


Is it possible to prepare a reproducer or unit test? Otherwise, please
provide some details:


1) What Ignite version was used?
2) Can you please share the server and cache configuration?

BR,
Andrei
7/15/2020 9:57 PM, John Smith wrote:

Hi, testing some failover scenarios etc...

When we call cache.getAsync() and the cluster is not active, it
seems to block.


I implemented a cache repository as follows, using Vert.x. It
seems to block at cacheOperation.apply(cache).


So when I call myRepo.get(myKey), which underneath applies the
cache.getAsync() function, it blocks.


public class IgniteCacheRepository<K, V> implements CacheRepository<K, V> {
    public final long DEFAULT_OPERATION_TIMEOUT = 1000;
    private final TimeUnit DEFAULT_TIMEOUT_UNIT = TimeUnit.MILLISECONDS;
    private Vertx vertx;
    private IgniteCache<K, V> cache;

    public IgniteCacheRepository(Vertx vertx, IgniteCache<K, V> cache) {
        this.vertx = vertx;
        this.cache = cache;
    }

    @Override
    public Future<Void> put(K key, V value) {
        return executeAsync(cache -> cache.putAsync(key, value),
            DEFAULT_OPERATION_TIMEOUT, DEFAULT_TIMEOUT_UNIT);
    }

    @Override
    public Future<V> get(K key) {
        return executeAsync(cache -> cache.getAsync(key),
            DEFAULT_OPERATION_TIMEOUT, DEFAULT_TIMEOUT_UNIT);
    }

    @Override
    public <T> Future<T> invoke(K key, EntryProcessor<K, V, T> processor, Object... arguments) {
        return executeAsync(cache -> cache.invokeAsync(key, processor, arguments),
            DEFAULT_OPERATION_TIMEOUT, DEFAULT_TIMEOUT_UNIT);
    }

    @Override
    public <T> T cache() {
        return (T) cache;
    }

    /**
     * Adapt Ignite async operations to Vert.x futures.
     *
     * @param cacheOperation The Ignite operation to execute asynchronously.
     * @return The value from the cache operation.
     */
    private <T> Future<T> executeAsync(Function<IgniteCache<K, V>, IgniteFuture<T>> cacheOperation,
                                       long timeout, TimeUnit unit) {
        Future<T> future = Future.future();
        try {
            IgniteFuture<T> value = cacheOperation.apply(cache);
            value.listenAsync(result -> {
                try {
                    future.complete(result.get(timeout, unit));
                } catch (Exception ex) {
                    future.fail(ex);
                }
            }, VertxIgniteExecutorAdapter.getOrCreate(vertx.getOrCreateContext()));
        } catch (Exception ex) {
            // Catch any RuntimeException that can be thrown by the Ignite cache.
            future.fail(ex);
        }

        return future;
    }
}
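
Not part of the original thread, but worth noting: in the repository above, the timeout is applied inside the listener, and the listener only runs once the Ignite future completes — so if the future never completes (e.g. an inactive cluster), the timeout never fires. One defensive pattern is to bound the wait on the future itself, sketched here with a plain CompletableFuture so it runs standalone:

```java
import java.util.concurrent.*;

public class TimeoutGuard {
    // Wrap any future with a hard timeout; a future that never completes
    // surfaces as a TimeoutException instead of an apparent hang.
    static <T> T getWithTimeout(CompletableFuture<T> f, long millis) throws Exception {
        return f.get(millis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        // Simulates a cache call against an inactive cluster: never completes.
        CompletableFuture<String> neverCompletes = new CompletableFuture<>();
        try {
            getWithTimeout(neverCompletes, 100);
            System.out.println("completed");
        } catch (TimeoutException e) {
            System.out.println("timed out");
        }
    }
}
```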






Re: Performance Issue with Enum Serialization in Ignite 2.8

2020-07-23 Thread Pavel Tupitsyn
I can confirm the performance issue with enum serialization.
Ticket filed [1], I'm working on it.

The only workaround I can offer is to avoid WriteEnum for now.
Use WriteInt instead, add type casts accordingly.

[1] https://issues.apache.org/jira/browse/IGNITE-13293
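
For illustration, the int-based workaround amounts to persisting the enum's ordinal and casting back on read. A plain-Java sketch (the `Status` enum is hypothetical; in Ignite's binary writer this corresponds to using WriteInt in place of WriteEnum):

```java
public class EnumAsInt {
    enum Status { NEW, ACTIVE, CLOSED }

    public static void main(String[] args) {
        Status original = Status.ACTIVE;

        // Write side: store the ordinal instead of the enum object.
        int stored = original.ordinal();

        // Read side: cast back via values(); the ordinals must stay stable
        // across versions for this to be safe.
        Status restored = Status.values()[stored];
        System.out.println(restored);
    }
}
```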

On Thu, Jul 23, 2020 at 12:53 PM Pavel Tupitsyn 
wrote:

> I'll have a look today and get back to you.
> Sorry, I did not notice this thread until now.
>
> On Thu, Jul 23, 2020 at 7:27 AM zork  wrote:
>
>> Can someone provide inputs on this please?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Performance Issue with Enum Serialization in Ignite 2.8

2020-07-23 Thread Pavel Tupitsyn
I'll have a look today and get back to you.
Sorry, I did not notice this thread until now.

On Thu, Jul 23, 2020 at 7:27 AM zork  wrote:

> Can someone provide inputs on this please?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Performance on Windows vs Linux

2020-07-23 Thread njcstreet
Hi,

I am about to start a proof of concept on Ignite, and we have the option of
deploying on either Windows Server 2016 or Red Hat Linux 7. I know that
Ignite can be deployed on both, but is there a reason to pick one over the
other?

Is performance better in a particular environment? We are using native
persistence - I think that Direct IO can be enabled, but only on Linux?

Thanks.
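
[Editor's note, not from the thread: Direct I/O is indeed Linux-only. Per the Ignite documentation it ships as an optional module that is enabled simply by putting it on the node's classpath; the sketch below assumes a standard binary-distribution layout with IGNITE_HOME set.]

```shell
# Enable the Direct I/O plugin (Linux only) by moving the optional module
# onto the node's classpath; restart the node afterwards.
cp -r "$IGNITE_HOME/libs/optional/ignite-direct-io" "$IGNITE_HOME/libs/"
```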



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/