The right answer here is probably not to use getAll in such cases. If you want to load data in batches, either split the keys yourself or use the Query APIs, such as ScanQuery or SqlQuery.
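A minimal sketch of the "split the keys yourself" approach, in plain Java. Note this is illustrative, not an Ignite API: the loader function stands in for a call such as cache.getAll(batch) on an IgniteCache, and the batch size of 2 is just for demonstration.

```java
import java.util.*;
import java.util.function.Function;

public class BatchedGetAll {
    // Split a large key collection into fixed-size batches, load each batch
    // separately via the supplied loader, and merge the per-batch results.
    // In real code the loader would wrap cache.getAll(batch); here it is a
    // plain function so the sketch is self-contained.
    public static <K, V> Map<K, V> getAllBatched(
            Collection<K> keys, int batchSize,
            Function<Set<K>, Map<K, V>> loader) {
        Map<K, V> result = new HashMap<>();
        Set<K> batch = new LinkedHashSet<>();
        for (K key : keys) {
            batch.add(key);
            if (batch.size() == batchSize) {
                result.putAll(loader.apply(batch));
                batch = new LinkedHashSet<>();
            }
        }
        if (!batch.isEmpty())
            result.putAll(loader.apply(batch));
        return result;
    }

    public static void main(String[] args) {
        // Fake loader in place of a real cache: maps each key k to k * 10.
        Function<Set<Integer>, Map<Integer, Integer>> loader = batch -> {
            Map<Integer, Integer> m = new HashMap<>();
            for (Integer k : batch) m.put(k, k * 10);
            return m;
        };
        Map<Integer, Integer> all =
            getAllBatched(Arrays.asList(1, 2, 3, 4, 5), 2, loader);
        System.out.println(all.size());  // prints 5
        System.out.println(all.get(4));  // prints 40
    }
}
```

With this shape, only batchSize keys' worth of values are in flight per cache call, which bounds the size of any single getAll response the server has to build.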
Stan

On Mon, Oct 28, 2019 at 10:36 PM Abhishek Gupta (BLOOMBERG/ 919 3RD A) <[email protected]> wrote:

> Ack. I've created a JIRA to track this.
>
> https://issues.apache.org/jira/browse/IGNITE-12334
>
>
> From: [email protected] At: 10/28/19 09:08:10
> To: [email protected]
> Subject: Re: Throttling getAll
>
> You might want to open a ticket. Of course, Ignite is open source and I'm
> sure the community would welcome a pull request.
>
> Regards,
> Stephen
>
> On 28 Oct 2019, at 12:14, Abhishek Gupta (BLOOMBERG/ 919 3RD A) <[email protected]> wrote:
>
> Thanks, Ilya, for your response.
>
> Even if my value objects were not large, nothing stops clients from doing
> a getAll with, say, 100,000 keys. Having some kind of throttling would
> still be useful.
>
> -Abhishek
>
>
> ----- Original Message -----
> From: Ilya Kasnacheev <[email protected]>
> To: ABHISHEK GUPTA
> CC: [email protected]
> At: 28-Oct-2019 07:20:24
>
> Hello!
>
> Having very large objects is not a priority use case for Apache Ignite,
> so it is your responsibility to make sure you don't run out of heap when
> doing operations on Ignite caches.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Sat, 26 Oct 2019 at 18:51, Abhishek Gupta (BLOOMBERG/ 919 3RD A) <[email protected]>:
>
>> Hello,
>> I've benchmarked my grid for users (clients) to do getAll with up to 100
>> keys at a time. My value objects tend to be quite large, and my worry is
>> that errant clients might at times do a getAll with a larger number of
>> keys - say 1,000. If that happens, I worry about GC issues, humongous
>> objects, or OOM on the grid. Is there a way to configure the grid to
>> auto-split these requests into smaller batches (fewer keys per batch) or
>> to reject them?
>>
>> Thanks,
>> Abhishek
