Do you have any advice on implementing a large-record export from Ignite?

I cannot really use ScanQuery, as my whole application is built around the
JDBC driver, and writing complex queries as scan queries is very difficult.
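
What I am after is essentially a paged iterator, so that only one page of
rows is resident in memory at a time while the export writes rows out. A
minimal, self-contained sketch of that shape (the PagedIterator class and
its page-supplier callback are names I made up for illustration, not Ignite
API):

```java
import java.util.Iterator;
import java.util.List;
import java.util.NoSuchElementException;
import java.util.function.IntFunction;

/**
 * Iterates rows page by page so that at most one page is held in memory,
 * which is what a large export needs. Illustrative sketch only.
 */
public class PagedIterator<T> implements Iterator<T> {
    /** Maps a page index to its rows; an empty list signals end of data. */
    private final IntFunction<List<T>> pageSupplier;

    private List<T> page = List.of();
    private int pageIdx;
    private int pos;
    private boolean done;

    public PagedIterator(IntFunction<List<T>> pageSupplier) {
        this.pageSupplier = pageSupplier;
    }

    @Override public boolean hasNext() {
        if (done)
            return false;

        if (pos < page.size())
            return true;

        // Previous page exhausted: fetch the next one lazily.
        page = pageSupplier.apply(pageIdx++);
        pos = 0;

        if (page.isEmpty()) {
            done = true;
            return false;
        }

        return true;
    }

    @Override public T next() {
        if (!hasNext())
            throw new NoSuchElementException();

        return page.get(pos++);
    }
}
```

A JDBC driver that honors the fetch size would fill the page supplier from
the server one page at a time; whether Ignite's reducer actually behaves
this way is exactly what I am asking about below.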

Thanks

On 10 June 2017 at 18:48, Anil <[email protected]> wrote:

> I understand from the code that there is no cursor from the H2 database (or
> the H2 database embedded in Ignite) internally, and all mapper responses are
> consolidated at the reducer. This means that when exporting a large number
> of records, all the data is in memory.
>
>              if (send(nodes,
>                     oldStyle ?
>                         new GridQueryRequest(qryReqId,
>                             r.pageSize,
>                             space,
>                             mapQrys,
>                             topVer,
>                             extraSpaces(space, qry.spaces()),
>                             null,
>                             timeoutMillis) :
>                         new GridH2QueryRequest()
>                             .requestId(qryReqId)
>                             .topologyVersion(topVer)
>                             .pageSize(r.pageSize)
>                             .caches(qry.caches())
>                             .tables(distributedJoins ? qry.tables() : null)
>                             .partitions(convert(partsMap))
>                             .queries(mapQrys)
>                             .flags(flags)
>                             .timeout(timeoutMillis),
>                     oldStyle && partsMap != null ?
>                         new ExplicitPartitionsSpecializer(partsMap) : null,
>                     false)) {
>
>                     awaitAllReplies(r, nodes, cancel);
>
> // Once the responses from all nodes for the query are received, we proceed
> further?
>
>           if (!retry) {
>                     if (skipMergeTbl) {
>                         List<List<?>> res = new ArrayList<>();
>
>                         // Simple UNION ALL can have multiple indexes.
>                         for (GridMergeIndex idx : r.idxs) {
>                             Cursor cur = idx.findInStream(null, null);
>
>                             while (cur.next()) {
>                                 Row row = cur.get();
>
>                                 int cols = row.getColumnCount();
>
>                                 List<Object> resRow = new ArrayList<>(cols);
>
>                                 for (int c = 0; c < cols; c++)
>                                     resRow.add(row.getValue(c).getObject());
>
>                                 res.add(resRow);
>                             }
>                         }
>
>                         resIter = res.iterator();
>                     }
>                     else {
>                         // in case of the split-query scenario
>                     }
>
>          }
>
>       return new GridQueryCacheObjectsIterator(resIter, cctx, keepPortable);
>
>
> The query cursor is an iterator that does column-value mapping one page at
> a time, but all the records of the query are still in memory. Correct?
>
> Please correct me if I am wrong. Thanks.
>
>
> Thanks
>
>
> On 10 June 2017 at 15:53, Anil <[email protected]> wrote:
>
>>
>> jvm parameters used -
>>
>> -Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC
>> -XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
>> -Xloggc:C:/Anil/dumps/gc-client.log -XX:+HeapDumpOnOutOfMemoryError
>> -XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy
>> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
>> -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
>> -XX:+PrintFlagsFinal -XX:HeapDumpPath=C:/Anil/dumps/heapdump-client.hprof
>>
>> Thanks.
>>
>> On 10 June 2017 at 15:06, Anil <[email protected]> wrote:
>>
>>> Hi,
>>>
>>> I have implemented an export feature for Ignite data using a JDBC iterator:
>>>
>>> ResultSet rs = statement.executeQuery();
>>>
>>> while (rs.next()) {
>>>     // do operations
>>> }
>>>
>>> The fetch size is 200.
>>>
>>> When I run the export operation twice for 4 lakh (400k) records, the
>>> whole 6 GB heap fills up and is never released.
>>>
>>> Initially I thought the operations transforming the result set into a
>>> file were causing the memory to fill up, but that is not the case.
>>>
>>> I just did the following, and the memory still grows and is not released:
>>>
>>> while (rs.next()) {
>>>     // nothing
>>> }
>>>
>>> num     #instances         #bytes  class name
>>> ----------------------------------------------
>>>    1:      55072353     2408335272  [C
>>>    2:      54923606     1318166544  java.lang.String
>>>    3:        779006      746187792  [B
>>>    4:        903548      304746304  [Ljava.lang.Object;
>>>    5:        773348      259844928  net.juniper.cs.entity.InstallBase
>>>    6:       4745694      113896656  java.lang.Long
>>>    7:       1111692       44467680  sun.nio.cs.UTF_8$Decoder
>>>    8:        773348       30933920  org.apache.ignite.internal.binary.BinaryObjectImpl
>>>    9:        895627       21495048  java.util.ArrayList
>>>   10:         12427       16517632  [I
>>>
>>>
>>> I am not sure why the String objects keep increasing.
>>>
>>> Could you please help me understand the issue?
>>>
>>> Thanks
>>>
>>
>>
>
