java.lang.AssertionError: 0

2019-12-02 Thread yann.blaz...@externe.bnpparibas.com
Hello, when I run some integration tests with Spring, my code calls Ignite.close() at the end of the test. I have also hit this stack trace under load, with multiple threads doing putAll on caches.
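For context, the test follows roughly this pattern; a minimal sketch with a hypothetical cache name and value type (in the real test the node is started and stopped by the Spring context):

import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class IgniteCloseSketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        IgniteCache<Long, String> cache = ignite.getOrCreateCache("testCache");

        // Under load this is done from several threads.
        Map<Long, String> batch = new HashMap<>();
        for (long i = 0; i < 10_000; i++)
            batch.put(i, "value-" + i);
        cache.putAll(batch);

        // Called at the end of the test; this is where the stack trace below surfaces.
        ignite.close();
    }
}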

I see this stack trace:


java.lang.AssertionError: 0
at org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.getPageEntrySize(AbstractDataPageIO.java:149)
at org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.getPageEntrySize(AbstractDataPageIO.java:140)
at org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.actualFreeSpace(AbstractDataPageIO.java:1201)
at org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.setRealFreeSpace(AbstractDataPageIO.java:190)
at org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.removeRow(AbstractDataPageIO.java:754)
at org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList$RemoveRowHandler.run(AbstractFreeList.java:286)
at org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList$RemoveRowHandler.run(AbstractFreeList.java:261)
at org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:279)
at org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:256)
at org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.removeDataRowByLink(AbstractFreeList.java:571)
at org.apache.ignite.internal.processors.cache.persistence.RowStore.removeRow(RowStore.java:79)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl$1.apply(IgniteCacheOffheapManagerImpl.java:2929)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl$1.apply(IgniteCacheOffheapManagerImpl.java:2926)
at org.apache.ignite.internal.processors.cache.tree.AbstractDataLeafIO.visit(AbstractDataLeafIO.java:185)
at org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.destroy(BPlusTree.java:2348)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.destroy(IgniteCacheOffheapManagerImpl.java:2926)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.destroyCacheDataStore0(IgniteCacheOffheapManagerImpl.java:1323)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.destroyCacheDataStore(IgniteCacheOffheapManagerImpl.java:1308)
at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.stop(IgniteCacheOffheapManagerImpl.java:251)
at org.apache.ignite.internal.processors.cache.CacheGroupContext.stopGroup(CacheGroupContext.java:751)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCacheGroup(GridCacheProcessor.java:2579)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCacheGroup(GridCacheProcessor.java:2572)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCaches(GridCacheProcessor.java:1094)
at org.apache.ignite.internal.processors.cache.GridCacheProcessor.stop(GridCacheProcessor.java:1059)
at org.apache.ignite.internal.IgniteKernal.stop0(IgniteKernal.java:2356)
at org.apache.ignite.internal.IgniteKernal.stop(IgniteKernal.java:2228)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2612)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.java:2575)
at org.apache.ignite.internal.IgnitionEx.stop(IgnitionEx.java:379)
at org.apache.ignite.Ignition.stop(Ignition.java:225)
at org.apache.ignite.internal.IgniteKernal.close(IgniteKernal.java:3568)


What's happening?


Thanks for your help.

Regards.

Drop index does not release memory used?

2019-10-25 Thread yann.blaz...@externe.bnpparibas.com
Hello all.

As you may remember, I found a way to compute the real amount of off-heap memory used, based on the reuse-list size.

As I'm hitting some limits on my hardware, I'm trying to optimize my memory consumption; everything is in pure memory, with no persistence on HDD or SSD.

Since I have to execute plenty of queries on my stored data, I noticed that the indexes consume a lot of memory.

To improve that, my algorithm first creates the tables with only a primary key and no secondary indexes.

Then, before each query, I create the indexes, execute the query, and drop the indexes again (see the sketch below).
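A minimal sketch of that create/query/drop cycle through the SQL API; the table, column, and index names are hypothetical:

import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class TemporaryIndexSketch {
    static void runWithTemporaryIndex(Ignite ignite) {
        // Any cache can serve as the entry point for schema-level SQL.
        IgniteCache<?, ?> cache = ignite.cache("SQL_PUBLIC_MYTABLE");

        // 1. Create the index just before the query.
        cache.query(new SqlFieldsQuery(
            "CREATE INDEX IF NOT EXISTS idx_mytable_col ON MYTABLE(col)")).getAll();

        // 2. Run the query that benefits from the index.
        List<List<?>> rows = cache.query(new SqlFieldsQuery(
            "SELECT id, col FROM MYTABLE WHERE col = ?").setArgs("someValue")).getAll();

        // 3. Drop the index again; as described above, this step does not appear
        //    to give the memory back, only DROP TABLE does.
        cache.query(new SqlFieldsQuery("DROP INDEX IF EXISTS idx_mytable_col")).getAll();
    }
}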

What I see is that DROP INDEX does not release the memory...

Everything is released only when we drop the table.

Is this normal?


Thanks and regards.

Re: How to know memory used by a cache or a set

2019-06-26 Thread yann.blaz...@externe.bnpparibas.com
Hello, can you help me understand the real meaning of this trace?

Off-heap [used=79492MB, free=56.92%, comm=86642MB]
^--   sysMemPlc region [used=0MB, free=99.13%, comm=40MB]
^--   Gondor region [used=79491MB, free=56.87%, comm=86562MB]


My region is called Gondor. Is "used" the amount of memory really used on this node? In that case, what is the meaning of "free" and "comm"?

Or do we have 79 GB allocated AND my data only takes 44% of the space in it?
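The same numbers can be cross-checked programmatically; a sketch, assuming an Ignite version where DataRegionMetrics exposes getPhysicalMemorySize()/getTotalAllocatedSize() and metrics are enabled for the region (the region name is taken from the log above):

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;

public class RegionMetricsSketch {
    static void printGondor(Ignite ignite) {
        for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
            if ("Gondor".equals(m.getName())) {
                // Physical memory actually backed by RAM vs. total allocated (the "comm"-like number).
                System.out.printf("region=%s physical=%d bytes, allocated=%d bytes%n",
                    m.getName(), m.getPhysicalMemorySize(), m.getTotalAllocatedSize());
            }
        }
    }
}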

Thanks, and regards.


On 26 June 2019 at 01:28, Denis Magda <dma...@apache.org> wrote:

Looks like a mess.

Alex Goncharuk, Nikolay Izhikov, considering the latest changes and the new metrics & tracing framework, how would one get cache/table memory size via a simple metric? There should be a way to make it workable without hacks like cache groups, etc.


-
Denis


On Wed, Jun 19, 2019 at 3:29 AM Alex Plehanov <plehanov.a...@gmail.com> wrote:
Denis,

Documentation for memory usage calculation covers another case (memory usage by the node).
There is no ability (AFAIK) in released Ignite versions to calculate the memory used by a cache or cache group when persistence is disabled.
A dedicated data region can be used for some of the caches in some cases, and metrics can be collected for this data region, but when the cache is destroyed (or data is deleted) the memory is not deallocated; it goes to the reuse list.
There is a new metric, DataRegionMetrics#getTotalUsedPages (count of allocated pages minus count of pages in the reuse list), which will partially help to solve Yann's problem, but this metric will only be available in the next Ignite release.
Also, as a temporary workaround, some internal API can be used to get the count of pages in the reuse list and calculate the total used pages per data region manually.
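For illustration, once a release exposing that metric is out, reading it could look roughly like this sketch (the method name is taken from the message above; its availability and exact signature are assumptions to verify against your Ignite version):

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;

public class UsedPagesSketch {
    // pageSize would typically come from DataStorageConfiguration.getPageSize().
    static long usedBytes(Ignite ignite, String regionName, int pageSize) {
        for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
            if (regionName.equals(m.getName()))
                // Pages actually holding data = allocated pages minus pages parked in the reuse list.
                return m.getTotalUsedPages() * (long) pageSize;
        }
        return -1;
    }
}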

On Wed, Jun 19, 2019 at 07:59, Denis Magda <dma...@gridgain.com> wrote:
+ dev list

Ignite developers,

Seems that the present solution for memory calculation doesn't work (check the thread):
https://apacheignite.readme.io/v2.5/docs/memory-metrics#section-memory-usage-calculation

Was it really broken?

--
Denis Magda


On Thu, Jun 13, 2019 at 9:45 AM yann Blazart <yann.blaz...@gmail.com> wrote:

> Ok, but can't I create a data region dynamically? Because each time I
> receive a new file to process, I create a cache group to handle it, then I
> clean everything.
>
> On Thu, 13 June 2019 at 13:28, Alex Plehanov <plehanov.a...@gmail.com> wrote:
>
>> Hello,
>>
>> It's a known issue [1]. Right now you can get the cache group size via JMX only if
>> persistence is used.
>> If persistence is not used, you can get the allocated size only for a data
>> region (but you can have more than one data region and assign cache groups
>> to data regions any way you want).
>>
>> [1]: https://issues.apache.org/jira/browse/IGNITE-8517
>>
>> On Wed, 12 June 2019 at 16:00, yann.blaz...@externe.bnpparibas.com wrote:
>>
>>> Hello, I'm back.
>>>
>>> Well, I need to get the memory used by each execution of my process, so I
>>> put all involved caches into the same cache group.
>>>
>>> If I use the CacheGroupMetricsMXBean, the size given to me is 0!
>>> If I enable persistence on the DataRegion, I get a size, but I don't want
>>> persistence enabled.
>>>
>>> Is it a bug?
>>>
>>> Regards.
>>>
>>> > On 27 May 2019 at 11:09, ibelyakov <igor.belyako...@gmail.com> wrote:
>>> >
>>> > Hi,
>>> >
>>> > Did you turn on cache metrics for your data region?
>>> >
>>> > To turn the metrics on, use one of the following approaches:
>>> > 1. Set DataRegionConfiguration.setMetricsEnabled(true) for every region
>>> > you want to collect the metrics for.
>>> > 2. Use the DataRegionMetricsMXBean.enableMetrics() method exposed by a
>>> > special JMX bean.
>>> >
>>> > More information regarding cache metrics available here:
>>> > https://apacheignite.readme.io/docs/cache-metrics
>>> >
>>> > Regards,
>>> > Igor
>>> >
>>> >
>>> >
>>> >
>>> > --
>>> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: How to know memory used by a cache or a set

2019-06-12 Thread yann.blaz...@externe.bnpparibas.com
Hello, I'm back.

Well, I need to get the memory used by each execution of my process, so I put all involved caches into the same cache group.

If I use the CacheGroupMetricsMXBean, the size given to me is 0!
If I enable persistence on the DataRegion, I get a size, but I don't want persistence enabled.

Is it a bug?

Regards.

> On 27 May 2019 at 11:09, ibelyakov wrote:
> 
> Hi,
> 
> Did you turn on cache metrics for your data region?
> 
> To turn the metrics on, use one of the following approaches:
> 1. Set DataRegionConfiguration.setMetricsEnabled(true) for every region you
> want to collect the metrics for.
> 2. Use the DataRegionMetricsMXBean.enableMetrics() method exposed by a
> special JMX bean.
> 
> More information regarding cache metrics available here:
> https://apacheignite.readme.io/docs/cache-metrics
> 
> Regards,
> Igor
> 
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
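To illustrate Igor's first approach, enabling region metrics in the node configuration could look like this sketch (the region name and sizes are placeholders):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class MetricsEnabledConfigSketch {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("myRegion")
            .setInitialSize(1L * 1024 * 1024 * 1024)   // 1 GB
            .setMaxSize(30L * 1024 * 1024 * 1024)      // 30 GB
            .setMetricsEnabled(true);                  // approach 1 from the reply above

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(
                new DataStorageConfiguration().setDataRegionConfigurations(region));

        Ignition.start(cfg);
    }
}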



Re: Insert into select OOM exception on java heap

2019-06-04 Thread yann.blaz...@externe.bnpparibas.com
Hello, finally I did the following trick.

I broadcast my SELECT to each node, executing it with the lazy and collocated flags set, then I use the QueryCursor and create BinaryObjects that I insert into the cache in batches of 5000 (roughly as in the sketch below).

Which seems to be good enough.
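A minimal sketch of that approach; the cache, table, type, and field names are hypothetical, and the real code of course handles types and errors properly:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class BroadcastInsertSketch {
    static void run(Ignite ignite) {
        ignite.compute().broadcast(() -> {
            Ignite local = Ignition.localIgnite();

            IgniteCache<Object, BinaryObject> target =
                local.cache("TARGET_CACHE").withKeepBinary();

            // Each node scans only its own partitions, lazily, to keep the heap flat.
            SqlFieldsQuery qry = new SqlFieldsQuery("SELECT id, val FROM SOURCE_TABLE")
                .setLocal(true)
                .setLazy(true)
                .setCollocated(true);

            Map<Object, BinaryObject> batch = new HashMap<>();

            try (QueryCursor<List<?>> cur = local.cache("SOURCE_CACHE").query(qry)) {
                for (List<?> row : cur) {
                    BinaryObject bo = local.binary().builder("TargetType")
                        .setField("id", row.get(0))
                        .setField("val", row.get(1))
                        .build();

                    batch.put(row.get(0), bo);

                    if (batch.size() == 5_000) {   // flush every 5000 entries
                        target.putAll(batch);
                        batch.clear();
                    }
                }
            }

            if (!batch.isEmpty())
                target.putAll(batch);
        });
    }
}

An IgniteDataStreamer could be used instead of the batched putAll for the final load, but the putAll batches mirror what is described above.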

Thanks for your help.

On 30 May 2019 at 18:18, yann Blazart <yann.blaz...@gmail.com> wrote:

It's an INSERT INTO ... SELECT. We made "meta" tables to allow doing other selects.

Or do you mean I could do a lazy select and then batch inserts?

On Thu, 30 May 2019 at 18:15, Ilya Kasnacheev <ilya.kasnach...@gmail.com> wrote:
Hello!

I think it would make better sense to mark already-updated entries and update in batches until no unmarked entries are left.

Regards,
--
Ilya Kasnacheev


On Thu, May 30, 2019 at 19:14, yann Blazart <yann.blaz...@gmail.com> wrote:
Hmmm. Can I use LIMIT and OFFSET?

For example, doing LIMIT 1 and continuing while the insert count = 1???



On Thu, 30 May 2019 at 17:57, Ilya Kasnacheev <ilya.kasnach...@gmail.com> wrote:
Hello!

I'm afraid you will have to split this query into smaller ones. Ignite doesn't 
really have lazy insert ... select, so the result set will have to be held in 
heap for some time.

Regards,
--
Ilya Kasnacheev


On Thu, May 30, 2019 at 18:36, yann Blazart <yann.blaz...@gmail.com> wrote:
Hello, we have 6 nodes configured with 3 GB heap and 30 GB off-heap.

We store lots of data in some partitioned tables, then we execute some "INSERT INTO ... SELECT ... JOIN ..." statements using SqlFieldsQuery (or SqlFieldsQueryEx).

With tables of 5,000,000 rows, we ran into an OOM error, even with lazy set to true and skipReducerOnUpdate.
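Roughly, the statement is submitted like this sketch (table and column names are illustrative):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class InsertSelectSketch {
    static void run(IgniteCache<?, ?> cache) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "INSERT INTO TARGET(id, val) " +
            "SELECT s.id, s.val FROM SOURCE s JOIN OTHER o ON o.id = s.id")
            .setLazy(true); // lazy is set, yet we still hit the OOM described above

        cache.query(qry).getAll();
    }
}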

How can we handle this, please?

Regards.



Ignite HashSet clearing

2019-06-04 Thread yann.blaz...@externe.bnpparibas.com
Hello all ! :)

We are using the Ignite HashSet in our processes to test the uniqueness of IDs when we are importing files into our system.

I have 5 or 6 HashSets which, in my example, can contain millions of lines. Insert and check work great.
But in the next step, when we do not need them anymore, we clear each of them, and that process takes 5 minutes!! Why? Is there a faster way to delete these hash sets?
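For reference, a sketch of the pattern, assuming these are Ignite distributed sets created via ignite.set(); the set name is hypothetical, and whether dropping the whole set with close() beats clear() is an assumption to measure:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteSet;
import org.apache.ignite.configuration.CollectionConfiguration;

public class UniquenessSetSketch {
    static void importFile(Ignite ignite, Iterable<String> ids) {
        IgniteSet<String> seen = ignite.set("importedIds", new CollectionConfiguration());

        for (String id : ids) {
            // add() returns false when the ID was already present -> duplicate detected.
            if (!seen.add(id))
                System.out.println("Duplicate id: " + id);
        }

        // Instead of seen.clear(), destroy the set entirely once it is no longer needed:
        seen.close();
    }
}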


Thanks
Regards.



Does Apache Ignite rewrite SQL badly???

2019-05-21 Thread yann.blaz...@externe.bnpparibas.com
Hello all, I have a big problem.

I have a lot of tables in my Ignite cluster, and one query takes 10 minutes when executed as a SqlFieldsQuery, but only 153 ms from DataGrip over JDBC.

To explain: I had to change some names in the query so as not to disclose information.


I have 3 tables:

TC with 557 000 lines
TD with 3753 lines
TS with 1500 lines.


I want to execute this query:


SELECT *
FROM TC co
INNER JOIN TD dd
  ON dd.eid = co.eid AND dd.mid = co.mid
INNER JOIN TS sch
  ON sch.eid = dd.eid AND sch.mid = dd.mid;

In DataGrip it takes 153 ms.
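Through the Ignite Java API it is submitted roughly like this (the cache used as the query entry point is an assumption):

import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class JoinQuerySketch {
    static void run(Ignite ignite) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT * FROM TC co " +
            "INNER JOIN TD dd ON dd.eid = co.eid AND dd.mid = co.mid " +
            "INNER JOIN TS sch ON sch.eid = dd.eid AND sch.mid = dd.mid");

        try (QueryCursor<List<?>> cur = ignite.cache("SQL_PUBLIC_TC").query(qry)) {
            for (List<?> row : cur) {
                // process row
            }
        }
    }
}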

When I look into the logs, I see that the query has been rewritten like this:

SELECT
*
FROM TC CO__Z0
/*CONTRACT.__SCAN_ */
INNER JOIN TD DD__Z1
/* batched:unicast TD_3: EID = CO__Z0.EID
AND MID = CO__Z0.MID
 */
ON 1=1
/* WHERE (DD__Z1.EID = CO__Z0.EID)
AND (DD__Z1.MID = CO__Z0.MID)
*/
INNER JOIN TS SCH__Z2
/* batched:unicast TS_2: EID = DD__Z1.EID
AND MID = DD__Z1.MID
 */
ON 1=1
WHERE ((SCH__Z2.EID = DD__Z1.EID)
AND (SCH__Z2.MID = DD__Z1.MID))
AND ((DD__Z1.EID = CO__Z0.EID)
AND (DD__Z1.MID = CO__Z0.MID))

Why these (ON 1=1)???

If I understand correctly, this makes a Cartesian product of my rows, which would explain the 10 minutes! Why is it rewritten like that???

Thanks for your help, regards
