Re: Spring Data native Query support

2019-11-01 Thread Denis Magda
It’s possible to pass an Ignite native query to a repository method, but you
might need to configure the repository differently. Please check this
documentation:

https://apacheignite-mix.readme.io/docs/spring-data
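
Judging by the stack trace below, Spring Data JPA (Hibernate) is building the
repository instead of Ignite. A minimal configuration sketch, assuming the
ignite-spring-data module for Spring Data 2.0; the base package and cache
name are illustrative, not taken from your project:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.springdata20.repository.config.EnableIgniteRepositories;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableIgniteRepositories("com.gpc.rpm.repo") // scan only the Ignite repositories
public class IgniteRepositoryConfiguration {
    // Ignite repositories are backed by an Ignite bean, not a JPA EntityManager.
    @Bean
    public Ignite igniteInstance() {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCacheConfiguration(new CacheConfiguration<>("FOO"));
        return Ignition.start(cfg);
    }
}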

Denis

On Wednesday, October 30, 2019, niamin wrote:

> Does Ignite provide native query support using Spring Data? I am not able to
> bootstrap my application when I configure my repository to include a method
> that implements a native query as below:
>
> @Repository
> @RepositoryConfig(cacheName = "FOO")
> public interface ARInvoiceRepository extends IgniteRepository {
>     @Query(value = "SELECT * FROM AR_INVOICE i where i.customerId = :customerId " +
>             "and i.installmentFl = 'N' and i.excludeFromArFl = 'N' and ((i.installmentNo > 0 " +
>             "and i.installmentStartDt <= :toDate) or i.tranDt <= :toDate) and i.status != 'H' ",
>             nativeQuery = true)
>     List getInvoicesForOIStatement(@Param("customerId") String customerId,
>             @Param("toDate") Date statementDate);
> }
>
> When I start my application I get an error as below:
>
> Caused by: java.lang.IllegalArgumentException: Not a managed type: class com.gpc.rpm.bo.ARInvoice
> at org.hibernate.metamodel.internal.MetamodelImpl.managedType(MetamodelImpl.java:473) ~[hibernate-core-5.2.17.Final.jar:5.2.17.Final]
> at org.springframework.data.jpa.repository.support.JpaMetamodelEntityInformation.<init>(JpaMetamodelEntityInformation.java:73) ~[spring-data-jpa-2.0.9.RELEASE.jar:2.0.9.RELEASE]
> at org.springframework.data.jpa.repository.support.JpaEntityInformationSupport.getEntityInformation(JpaEntityInformationSupport.java:66) ~[spring-data-jpa-2.0.9.RELEASE.jar:2.0.9.RELEASE]
> at org.springframework.data.jpa.repository.support.JpaRepositoryFactory.getEntityInformation(JpaRepositoryFactory.java:180) ~[spring-data-jpa-2.0.9.RELEASE.jar:2.0.9.RELEASE]
> at org.springframework.data.jpa.repository.support.JpaRepositoryFactory.getTargetRepository(JpaRepositoryFactory.java:118) ~[spring-data-jpa-2.0.9.RELEASE.jar:2.0.9.RELEASE]
> at org.springframework.data.jpa.repository.support.JpaRepositoryFactory.getTargetRepository(JpaRepositoryFactory.java:101) ~[spring-data-jpa-2.0.9.RELEASE.jar:2.0.9.RELEASE]
> at org.springframework.data.repository.core.support.RepositoryFactorySupport.getRepository(RepositoryFactorySupport.java:304) ~[spring-data-commons-2.0.14.RELEASE.jar:2.0.14.RELEASE]
> at org.springframework.data.repository.core.support.RepositoryFactoryBeanSupport.lambda$afterPropertiesSet$4(RepositoryFactoryBeanSupport.java:290) ~[spring-data-commons-2.0.14.RELEASE.jar:2.0.14.RELEASE]
> at org.springframework.data.util.Lazy.getNullable(Lazy.java:141) ~[spring-data-commons-2.0.14.RELEASE.jar:2.0.14.RELEASE]
> at org.springframework.data.util.Lazy.get(Lazy.java:63) ~[spring-data-commons-2.0.14.RELEASE.jar:2.0.14.RELEASE]
>
>
> I've added the @Entity annotation to the ARInvoice class but that didn't
> change the error.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
-
Denis


Re: Cluster in AWS can not have more than 100 nodes?

2019-11-01 Thread Denis Magda
There is no hard limit of this kind; here is the source code of the IP
finder:
https://github.com/apache/ignite/blob/master/modules/aws/src/main/java/org/apache/ignite/spi/discovery/tcp/ipfinder/s3/TcpDiscoveryS3IpFinder.java

You can enable DEBUG level for Discovery and the IP Finder to collect more
details.
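
Also, since command-line tools aren't available in your embedded web app, you
can read the topology size from the Ignite API instead of counting S3 objects
- a minimal sketch, assuming access to the embedded instance:

Ignite ignite = Ignition.ignite(); // the instance started by the web app
int serverNodes = ignite.cluster().forServers().nodes().size();
System.out.println("Server nodes in topology: " + serverNodes);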
-
Denis


On Thu, Oct 31, 2019 at 4:41 PM codeboyyong  wrote:

> No. The thing is, we started Ignite in embedded mode as a web app, and we can
> see there are 128 instances.
> But when we check the S3 bucket, it always has only 100 items, which means
> only 100 nodes joined the Ignite cluster. Not sure if there is another way
> to get the cluster size.
> Since we are running as a web app, we actually don't have access to the
> command-line tools.
>
> Thank You
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: High CPU(70-80%) utilization with single Ignite node

2019-11-01 Thread Denis Magda
Hello,

There might be many reasons for CPU utilization spikes. I would approach
debugging level by level - checking the application, virtual machine, and OS:

   - Application:
  - Are there any blocking calls? Do clients create similar keys or
  compete for shared resources?
  - Optimize the application logic. It seems like you currently do two
  round trips - you check before you put. Eliminate one round trip by using
  Entry Processors (see the sketch after this list):
  https://www.gridgain.com/docs/latest/developers-guide/collocated-computations#entry-processor
  - What does your key look like? The simpler the key, the faster its
  hash code is calculated and the faster other internal checks are done.
  - What does your architecture look like? How many server nodes do you
  have, and what type of client do you use?
   - JVMs: the high CPU utilization might also be triggered by rising GC
   activity after you put more load on the cluster. Check the GC logs and do
   a recording with FlightRecorder:
   https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/troubleshooting#performance-analysis-with-flight-recorder
   - OS: you might be kicking off swapping, which impacts CPU utilization:
   https://www.gridgain.com/docs/latest/perf-troubleshooting-guide/memory-tuning#tune-swappiness-setting
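
For the check-then-put pattern from the first bullet, an entry processor
collapses the two round trips into a single co-located call. A minimal Java
sketch, not the poster's code - the cache name, key and append logic are
assumptions (the .NET client exposes an equivalent Invoke API):

import javax.cache.processor.MutableEntry;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;

IgniteCache<String, byte[]> cache = ignite.getOrCreateCache("deviceData");
byte[] newChunk = new byte[] {1, 2, 3}; // stand-in for the device payload

cache.invoke("device-42", new CacheEntryProcessor<String, byte[], Void>() {
    @Override public Void process(MutableEntry<String, byte[]> entry, Object... args) {
        byte[] incoming = (byte[]) args[0];
        byte[] current = entry.getValue(); // null when the key does not exist yet
        byte[] merged;
        if (current == null)
            merged = incoming;
        else {
            // Trimming of expired data would go here; this sketch just appends.
            merged = new byte[current.length + incoming.length];
            System.arraycopy(current, 0, merged, 0, current.length);
            System.arraycopy(incoming, 0, merged, current.length, incoming.length);
        }
        entry.setValue(merged); // runs on the primary node - no extra round trip
        return null;
    }
}, newChunk);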

Anyway, start checking from the application down to the OS level. Use
FlightRecorder or similar tools to capture bottlenecks.

-
Denis


On Thu, Oct 31, 2019 at 1:42 PM alokyadav12  wrote:

> Hi,
>   We are evaluating Apache Ignite as a cache layer for our application, and
> were able to implement it in .NET.
> Our application has a service to which multiple devices connect, and that
> service saves byte array information to the Ignite cache. Each device has
> its own key and data is saved under that key.
> Once a device disconnects we remove its key, and when a new device comes up
> we create a key in Ignite and start using it to save and get data.
>
> We need to store data for a certain duration only, so we read from the
> cache, remove the old values from the returned data, and save it back using
> put. The sample flow is like below:
>
> Step 1 - Check if the key exists in the cache
> Step 2 - If it exists, get all the values for that key; else create a new key
> Step 3 - If the data does not exist, add it to the cache using Put
> OR
> Step 3 - If the data exists, remove older data from the returned result,
> append the new data and save it to the cache using Put
>
> When we connect one or two devices it works fine, and we see low CPU usage,
> but when we connect more devices, say 10+, CPU utilization slowly gets
> high and sometimes reaches 70-80%. We are not doing anything else except
> saving and retrieving cache data.
>
> Below is the Ignite configuration:
>
> [Spring XML configuration stripped by the mail archive; it configured TCP
> discovery with a static IP finder pointing at 127.0.0.1:47500 and a path
> for assemblies.]
>
> We removed the thread pool size setting and it is still the same.
>
>
>  We created a sample application writing data to the cache at a faster rate
> (~100 entries per sec) and noticed that CPU spikes to 20-30%, just writing
> one int value.
>
>  Are we doing any misconfiguration that causes the high CPU utilization? Due
> to the high CPU utilization other services are not getting enough CPU. And
> as per our understanding it should not take that much CPU, as we are just
> saving and retrieving.
>
>
> <http://apache-ignite-users.70518.x6.nabble.com/file/t2641/ignitePerformanceIssue.png>
>
>
> We are seeing another exception in the Ignite window:
>
>
> [13:02:22,403][SEVERE][grid-nio-worker-client-listener-2-#31][ClientListenerProcessor]
> Failed to process selector key [ses=GridSelectorNioSessionImpl
> [worker=ByteBufferNioClientWorker [readBuf=java.nio.HeapByteBuffer[pos=0
> lim=8192 cap=8192], super=AbstractNioClientWorker [idx=2, bytesRcvd=0,
> bytesSent=0, bytesRcvd0=0, bytesSent0=0, select=true, super=GridWorker
> [name=grid-nio-worker-client-listener-2, igniteInstanceName=null,
> finished=false, heartbeatTs=1572548540778, hashCode=282194889,
> interrupted=false, runner=grid-nio-worker-client-listener-2-#31]]],
> writeBuf=null, readBuf=null, inRecovery=null, outRecovery=null,
> super=GridNioSessionImpl [locAddr=/fe80:0:0:0:41d5:fd90:e62c:56e4%9:10800,
> rmtAddr=/fe80:0:0:0:41d5:fd90:e62c:56e4%9:49827, createTime=1572548529138,
> closeTime=0, bytesSent=5, bytesRcvd=12, bytesSent0=0, bytesRcvd0=0,
> sndSchedTime=1572548538778, lastSndTime=1572548538778,
> lastRcvTime=1572548529138, readsPaused=false,
> filterChain=FilterChain[filters=[GridNioAsyncNotifyFilter,
> GridNioCodecFilter [parser=ClientListenerBufferedParser,
> directMode=false]],
> accepted=true, markedForClose=false]]]
> java.io.IOException: An existing connection was forcibly closed by the
> remote host
> at java.base/sun.nio.ch.SocketDispatcher.read0(Native Method)
> at
> java.

Re: TransactionOptimisticException: Failed to prepare transaction, read/write conflict

2019-11-01 Thread Denis Magda
This is not supposed to happen if a record is not updated in parallel.
Could you prepare a reproducer?

In general, an optimistic transaction fails if any of the records within a
transaction gets modified concurrently.
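
For example, here is a minimal sketch of how the failure shows up; the cache
and key names are illustrative:

try (Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.OPTIMISTIC, TransactionIsolation.SERIALIZABLE)) {
    SubscriptionData data = cache.get(key); // the entry's version is recorded on read

    // If any other transaction commits a change to 'key' before the commit
    // below, the version check fails at the prepare stage.
    tx.commit(); // throws TransactionOptimisticException on a read/write conflict
}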

-
Denis


On Thu, Oct 31, 2019 at 8:46 AM Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Can someone from community advise?
>
> On Wed, Oct 30, 2019 at 7:28 PM Prasad Bhalerao <
> prasadbhalerao1...@gmail.com> wrote:
>
>> Hi,
>> Ignite version: 2.6.0
>>
>> I am getting the following exception while committing a transaction inside
>> an IgniteRunnable task.
>> The transaction is:
>>
>> try (Transaction tx = 
>> transactions.txStart(TransactionConcurrency.OPTIMISTIC, 
>> TransactionIsolation.SERIALIZABLE)) {
>>
>> 
>>
>> ..
>>
>> }
>>
>>
>> As per the ignite doc
>> 
>>  if
>> the transaction isolation level is serializable,
>> Ignite will fail a transaction at the commit stage if the Ignite engine
>> detects that at least one of the entries used as part of the initiated
>> transaction has been modified.
>>
>> But inside the transaction I am just reading SubscriptionData from the
>> Subscription cache; I am not modifying this data inside the transaction.
>> Also, this SubscriptionData is not being modified by some other request
>> outside the mentioned transaction.
>>
>> Then what could be the reason for this transaction failure?
>>
>> Caused by:
>> org.apache.ignite.transactions.TransactionOptimisticException: Failed to
>> prepare transaction, read/write conflict
>> [key=com.xyz.grid.data.key.DefaultDataKey@fe220,
>> keyCls=com.xyz.grid.data.key.DefaultDataKey,
>> val=SubscriptionData{subscriptionId=1040928, companyName='test-cmp',
>> expirationDate=1592092799000, activatedModules={100=159200640,
>> 101=159200640, 102=159200640, 107=159200640, 112=159200640,
>> 145=159200640, 114=159200640, 117=159200640,
>> 1206=159200640, 119=159200640, 1207=159200640,
>> 121=159200640, 1210=159200640, 9211=159200640}, enforced=false,
>> ipv6Enabled=false}, valCls=com.xyz.grid.data.SubscriptionData,
>> cache=SUBSCRIPTION_CACHE, thread=IgniteThread [compositeRwLockIdx=22,
>> stripe=10, plc=-1, name=sys-stripe-10-#11%springDataNode%]]
>> at
>> org.apache.ignite.internal.util.IgniteUtils$14.apply(IgniteUtils.java:905)
>> at
>> org.apache.ignite.internal.util.IgniteUtils$14.apply(IgniteUtils.java:903)
>> at
>> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:985)
>> at
>> org.apache.ignite.internal.processors.cache.transactions.TransactionProxyImpl.commit(TransactionProxyImpl.java:305)
>> at
>> com.xyz.task.ignite.AbstractModuleIpAdderIgniteTask.addIps(AbstractModuleIpAdderIgniteTask.java:148)
>> at
>> com.xyz.task.ignite.AbstractModuleIpAdderIgniteTask.run(AbstractModuleIpAdderIgniteTask.java:93)
>> at
>> org.apache.ignite.internal.processors.closure.GridClosureProcessor$C4.execute(GridClosureProcessor.java:1944)
>> at
>> org.apache.ignite.internal.processors.job.GridJobWorker$2.call(GridJobWorker.java:568)
>> at
>> org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6695)
>> at
>> org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:562)
>> at
>> org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:491)
>> at
>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>> ... 3 common frames omitted
>> Caused by:
>> org.apache.ignite.internal.transactions.IgniteTxOptimisticCheckedException:
>> Failed to prepare transaction, read/write conflict
>> [key=com.xyz.grid.data.key.DefaultDataKey@fe220,
>> keyCls=com.xyz.grid.data.key.DefaultDataKey,
>> val=SubscriptionData{subscriptionId=1040928, companyName='test-cmp',
>> expirationDate=1592092799000, activatedModules={100=159200640,
>> 101=159200640, 102=159200640, 107=159200640, 112=159200640,
>> 145=159200640, 114=159200640, 117=159200640,
>> 1206=159200640, 119=159200640, 1207=159200640,
>> 121=159200640, 1210=159200640, 9211=159200640}, enforced=false,
>> ipv6Enabled=false}, valCls=com.xyz.grid.data.SubscriptionData,
>> cache=SUBSCRIPTION_CACHE, thread=IgniteThread [compositeRwLockIdx=22,
>> stripe=10, plc=-1, name=sys-stripe-10-#11%springDataNode%]]
>> at
>> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.versionCheckError(GridDhtTxPrepareFuture.java:1190)
>> at
>> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.checkReadConflict(GridDhtTxPrepareFuture.java:1138)
>> at
>> org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtTxPrepareFuture.prepare0(GridDhtTxPrepareFuture.java:1205)
>> at
>> org.apache.ignite.internal.proce

Re: Ignite cache partition size

2019-11-01 Thread Stanislav Lukyanov
It's best to have the number of partitions be a power of two, so better
to go with 32768, I think.
There are big clusters with hundreds of nodes out there, and they do use
large partition numbers sometimes - as large as 16k or 32k.

Note that it will bring some overhead on the metadata being stored, etc.
You may need to tune your heap to store the partition information.
And as Prasad has mentioned, you need to make sure you know how your system
behaves when a node fails - how much time does rebalancing take, do your
remaining nodes have enough memory to restore the configured number of
backups, etc.

Stan

On Sun, Oct 27, 2019 at 7:21 PM Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> I have heard of people creating 4096 but not more than that.
> What's your reasoning behind creating so many partitions?
> I have a table which contains 40+ million rows, but it has only 5 columns,
> and I have kept the default partitions and am getting good results out of it.
> You have 1450 columns - an interesting use case.
>
> 1) Is your app already in production?
> 2) Did you calculate the average size of a single row?
> 3) How do you manage and monitor your big cluster?
> 4) Do you have any scripts or tools to stop, start, restart or rolling-
> restart the cluster or a node of your cluster?
>
> I have a cluster of 7 nodes; each node has a data region size of 40-50 GB,
> but whenever segmentation happens, i.e. when a node goes out of the cluster,
> I have found it difficult to debug or investigate such issues. Most of the
> time I could not figure out the exact reason for the segmentation and always
> ended up blaming a network issue for it.
>
> If you have already figured out all these things then well and good;
> please share your ideas and tools with the user group if possible.
>
> If you are doing a POC and you want 24/7 availability then I would
> suggest figuring out these things as well.
>
> Thanks,
> Prasad
>
>
> On Sun 27 Oct, 2019, 6:53 AM Yong Zhao wrote:
>>
>> Cool, thanks.
>> I have a quite big cluster with 100 nodes, so I guess 25600 will be a good
>> partition count?
>> I am using Ignite as a database.
>> I am creating a table with 20 million rows, each row having 1450 columns.
>> Is this a good idea?
>>
>> Thank you!
>>
>> On Sat, Oct 26, 2019 at 9:10 AM Prasad Bhalerao <
>> prasadbhalerao1...@gmail.com> wrote:
>>
>>> Please check this link...
>>>
>>>
>>> https://apacheignite.readme.io/docs/affinity-collocation#section-affinity-function
>>>
>>>
>>> Example from the ignite doc.
>>>
>>>
>>> // Preparing Apache Ignite node configuration.
>>> IgniteConfiguration cfg = new IgniteConfiguration();
>>>
>>> // Creating a cache configuration.
>>> CacheConfiguration cacheCfg = new CacheConfiguration("myCache");
>>>
>>> // Creating the affinity function with custom settings.
>>> RendezvousAffinityFunction affFunc = new RendezvousAffinityFunction();
>>> affFunc.setExcludeNeighbors(true);
>>> affFunc.setPartitions(2048);
>>>
>>> // Applying the affinity function configuration.
>>> cacheCfg.setAffinity(affFunc);
>>>
>>> // Setting the cache configuration.
>>> cfg.setCacheConfiguration(cacheCfg);
>>>
>>>
>>>
>>> On Sat 26 Oct, 2019, 7:42 PM Andrey Dolmatov wrote:
>>>
 Try to implement your own affinity function. If it maps any key to
 numbers from 1 to n, you have n partitions.

 On Sat, Oct 26, 2019, 09:46 codeboyyong  wrote:

> Hi can you please tell me how to config this ?
> "number of cache partitions"
>
> Thank You
> Yong
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



Re: Query execution in ignite

2019-11-01 Thread Stanislav Lukyanov
This is not exactly correct.
When you do an SQL query with only PARTITIONED tables, or with a mix of
PARTITIONED and REPLICATED ones, the data will be taken from the primary
partitions of the PARTITIONED tables and *all* partitions of the REPLICATED
tables.
When you do an SQL query with only REPLICATED tables, the data will be
taken from all partitions (as they're all always available).

E.g. take this query:
SELECT * FROM A JOIN B ON A.FIELD_A = B.FIELD_B

This query will work correctly if the data is collocated, i.e. in the
following cases:
- A and B are both REPLICATED
- A is PARTITIONED and B is REPLICATED
- A is REPLICATED and B is PARTITIONED
- A and B are both PARTITIONED with FIELD_A and FIELD_B both being their
affinity keys

If A and B are both PARTITIONED but FIELD_A or FIELD_B is not an affinity
key, then the data is not correctly collocated and the query will
return incorrect results.
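
If the affinity keys can't be aligned, one fallback is to enable
non-collocated joins per query - correct results at the cost of extra
network exchanges. A minimal sketch, with an illustrative cache handle:

SqlFieldsQuery qry = new SqlFieldsQuery(
    "SELECT * FROM A JOIN B ON A.FIELD_A = B.FIELD_B")
    .setDistributedJoins(true);

try (FieldsQueryCursor<List<?>> cursor = cache.query(qry)) {
    for (List<?> row : cursor)
        System.out.println(row);
}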

Stan

On Mon, Oct 28, 2019 at 9:34 PM  wrote:

> In PARTITIONED mode *SQL queries are executed* over each node’s *primary*
> partitions only.
>
>
>
> Here is more information about distributed joins:
> https://apacheignite-sql.readme.io/docs/distributed-joins
>
>
>
>
>
>
>
> *From:* Prasad Bhalerao 
> *Sent:* Saturday, October 26, 2019 12:25 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: Query execution in ignite
>
>
>
>
>
> The question is specifically about primary and secondary partitions.
>
> So in the case of a replicated cache, Ignite scans the primary and secondary
> partitions of any one node of the cluster to fetch the data.
>
> But is that the case with a partitioned cache?
>
> I mean, in the case of a partitioned cache, when SQL is executed, does Ignite
> scan primary as well as secondary partitions of each node in the cluster, or
> does it just scan the primary partitions of all the nodes in the cluster, as
> the query is executed on all nodes?
>
>
>
> On Fri 25 Oct, 2019, 10:46 PM 
> Hi,
>
>
>
>    If a query is executed against fully *REPLICATED* data, then Ignite
> will send it to a single cluster node and run it over the local data there.
>
>
>
>
>
>
>
>  If a query is executed over *PARTITIONED* data, then the execution
> flow will be the following:
>
> The query will be parsed and split into multiple map queries and a single
> reduce query.
>
> · All the map queries are executed on all the nodes where the
> required data resides.
>
> · All the nodes provide result sets of their local execution to the
> query initiator (reducer) that, in turn, will accomplish the reduce phase
> by properly merging the provided result sets.
>
>
>
>
>
>
>
>  More information here:
> https://apacheignite-sql.readme.io/docs/how-ignite-sql-works
>
> Thanks, Alex
>
>
>
> *From:* Prasad Bhalerao 
> *Sent:* Friday, October 25, 2019 1:31 AM
> *To:* user@ignite.apache.org
> *Subject:* Query execution in ignite
>
>
>
> Hi,
>
>
>
> When SQL is executed, does Ignite always scan only the primary partitions of
> all available nodes in the cluster, irrespective of the cache mode
> (partitioned or replicated)?
>
>
>
>
>
>
>
> Thanks ,
>
> Prasad
>
>


Re: Throttling getAll

2019-11-01 Thread Stanislav Lukyanov
The right answer here is probably not to use getAll in such cases.
If you want to load data in batches, then you should either split the keys
yourself or use the Query APIs, like ScanQuery or SqlQuery.
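
If an oversized key set does have to be served, splitting it on the client is
a few lines - a minimal sketch, with arbitrary key/value types and batch size:

int batchSize = 100;
List<Long> keys = new ArrayList<>(allKeys); // 'allKeys' is the oversized key set
Map<Long, byte[]> result = new HashMap<>();

for (int i = 0; i < keys.size(); i += batchSize) {
    Set<Long> batch = new HashSet<>(
        keys.subList(i, Math.min(i + batchSize, keys.size())));
    result.putAll(cache.getAll(batch)); // each round trip stays bounded
}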

Stan

On Mon, Oct 28, 2019 at 10:36 PM Abhishek Gupta (BLOOMBERG/ 919 3RD A) <
agupta...@bloomberg.net> wrote:

> Ack. I've created a JIRA to track this.
>
> https://issues.apache.org/jira/browse/IGNITE-12334
>
>
>
> From: user@ignite.apache.org At: 10/28/19 09:08:10
> To: user@ignite.apache.org
> Subject: Re: Throttling getAll
>
> You might want to open a ticket. Of course, Ignite is open source and I’m
> sure the community would welcome a pull request.
>
> Regards,
> Stephen
>
> On 28 Oct 2019, at 12:14, Abhishek Gupta (BLOOMBERG/ 919 3RD A) <
> agupta...@bloomberg.net> wrote:
>
> 
> Thanks Ilya for your response.
>
> Even if my value objects were not large, nothing stops clients from doing
> a getAll with say 100,000 keys. Having some kind of throttling would still
> be useful.
>
> -Abhishek
>
>
>
> - Original Message -
> From: Ilya Kasnacheev 
> To: ABHISHEK GUPTA
> CC: user@ignite.apache.org
> At: 28-Oct-2019 07:20:24
>
> Hello!
>
> Having very large objects is not a priority use case of Apache Ignite.
> Thus, it is your concern to make sure you don't run out of heap when doing
> operations on Ignite caches.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> сб, 26 окт. 2019 г. в 18:51, Abhishek Gupta (BLOOMBERG/ 919 3RD A) <
> agupta...@bloomberg.net>:
>
>> Hello,
>> I've benchmarked my grid for users (clients) to do getAll with up to 100
>> keys at a time. My value objects tend to be quite large, and my worry is
>> that errant clients might at times do a getAll with a larger number of
>> keys - say 1000. If that happens, I worry about GC issues/humongous
>> objects/OOM on the grid. Is there a way to configure the grid to auto-split
>> these requests into smaller batches (a smaller number of keys per batch) or
>> to reject them?
>>
>>
>> Thanks,
>> Abhishek
>>
>>
>


Re: Drop index do not release memory used ?

2019-11-01 Thread Stanislav Lukyanov
Hi,

What version do you use?
There was an issue with recycling pages between data and indexes, which has
been fixed in 2.7: https://issues.apache.org/jira/browse/IGNITE-4958.
In Apache Ignite 2.7 and later this should work fine.
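
For reference, the create-query-drop pattern from the question can be driven
through the SQL API like this - a sketch with illustrative table, column and
index names:

cache.query(new SqlFieldsQuery(
    "CREATE INDEX idx_amount ON invoice(amount)")).getAll();
cache.query(new SqlFieldsQuery(
    "SELECT id FROM invoice WHERE amount > ?").setArgs(100)).getAll();
cache.query(new SqlFieldsQuery("DROP INDEX idx_amount")).getAll();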

Stan

On Sat, Oct 26, 2019 at 5:22 PM yann Blazart  wrote:

> Yes, the data pages in the reuse list are reused for new objects.
>
> If you drop a table, you get back all the used pages into the reuse list.
>
> But it seems that this is not true for dropped indexes.
>
> On Sat, Oct 26, 2019 at 16:18, Andrey Dolmatov wrote:
>
>> I believe that Ignite will reuse that memory. But it is an open question
>> whether Ignite reuses index data blocks for data blocks.
>>
>> On Fri, Oct 25, 2019, 15:28 yann.blaz...@externe.bnpparibas.com <
>> yann.blaz...@externe.bnpparibas.com> wrote:
>>
>>> Hello all.
>>>
>>> If you remember, I found a way to compute the real size of memory
>>> used in offheap, using the reuse list size.
>>>
>>> As I'm facing some limits on my hardware, I'm trying to optimize my
>>> memory consumption - pure in-memory, no persistence on HDD or SSD.
>>>
>>> Since I have to execute plenty of requests on my stored data, I saw
>>> that the indexes consume a lot of memory.
>>>
>>> To improve that, in my algorithm I tried to create tables with only a
>>> PK, no indexes at first.
>>>
>>> Then before each request I create the indexes, execute the request,
>>> then drop the indexes.
>>>
>>> What I see is that dropping an index does not release memory...
>>>
>>> Everything is released only when we drop the table.
>>>
>>> Is this normal?
>>>
>>>
>>> Thanks and regards.


Apache Ignite on Apache Con

2019-11-01 Thread Alexey Zinoviev
Hi Igniters, I presented a new talk, "Ensembles of ML algorithms and
Distributed Online Machine Learning with Apache Ignite", at ApacheCon in
Berlin this year.

Video is here https://www.youtube.com/watch?v=3CmnV6IQtTw
Slides are here
https://speakerdeck.com/zaleslaw/ensembles-of-ml-algorithms-and-distributed-online-machine-learning-with-apache-ignite

You can find the other talks here: https://aceu19.apachecon.com/schedule


Re: recoveryBallotBoxes in MvccProcessorImpl memory leak?

2019-11-01 Thread Ivan Pavlukhin
Hi,

Sounds like a bug. Would be great to have a ticket with reproducer.

On Fri, Nov 1, 2019 at 03:25, mvkarp wrote:
>
> 
>
> I've attached an Eclipse MAT heap analysis. As you can see, MVCC is disabled
> (there are no TRANSACTIONAL_SNAPSHOT caches in the cluster).
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re: Issue with adding nested index dynamically

2019-11-01 Thread Ivan Pavlukhin
Hi Hemambara,

I apologize, but I will be able to share a problematic example only
next week.

On Thu, Oct 31, 2019 at 19:49, Hemambara wrote:
>
> I did not face any issue. It's working fine for me. Can you share your code
> and the exception that you are getting?
>
> I tried like below and it worked for me:
> ((Person) cache.get(1)).address.community
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



-- 
Best regards,
Ivan Pavlukhin


Re:

2019-11-01 Thread Mikael
That is up to you, you can use anything you want; the most obvious
choice would be the built-in user attributes feature in the Ignite
configuration:

<property name="userAttributes">
    <map>
        <entry key="node.group" value="data"/>
    </map>
</property>

You will find a lot of information under the "Affinity Function" and 
"Affinity Key Mapper" sections here:


https://apacheignite.readme.io/docs/affinity-collocation
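
Once a node carries such an attribute, you can also target it from code
through a cluster group - a minimal sketch, with the attribute name and
value matching the example above:

ClusterGroup dataNodes = ignite.cluster().forAttribute("node.group", "data");
ignite.compute(dataNodes).broadcast(
    () -> System.out.println("Runs only on nodes with node.group=data"));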

Did you consider the idea of using multiple caches? That would be much
easier to implement, if it is a possible solution for you.


Mikael


On 2019-11-01 at 03:01, BorisBelozerov wrote:

How can I choose a node? By IP, MAC, or other criteria?
Thank you!!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/