Hi Alex,

I have attached the Ignite client XML. By 4L I meant 0.4 million (400,000) records. Sorry,
I didn't generate a JFR recording, but I did create a heap dump.

Do you agree that the JDBC driver is loading everything into memory, and that next() is
only doing the conversion?
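
For illustration, a minimal way to check this would be something like the sketch below.
It assumes the cfg-based Ignite JDBC driver; the config path, cache name and table are
placeholders, not the actual names from my project. It does no per-row work and only
watches whether used heap keeps growing while the result set is iterated.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcHeapCheck {
    public static void main(String[] args) throws Exception {
        // cfg-based Ignite JDBC driver; config path and cache name are placeholders.
        Class.forName("org.apache.ignite.IgniteJdbcDriver");

        try (Connection conn = DriverManager.getConnection(
                 "jdbc:ignite:cfg://cache=INSTALL_BASE@file:///C:/Anil/ignite-client.xml");
             Statement stmt = conn.createStatement()) {

            stmt.setFetchSize(200); // same fetch size as in the export code

            try (ResultSet rs = stmt.executeQuery("SELECT * FROM InstallBase")) {
                long rows = 0;

                while (rs.next()) {
                    // No per-row processing; just report heap usage periodically.
                    if (++rows % 100_000 == 0) {
                        Runtime rt = Runtime.getRuntime();

                        System.out.printf("rows=%d usedHeapMb=%d%n",
                            rows, (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024));
                    }
                }
            }
        }
    }
}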

Thanks

On 19 June 2017 at 17:16, Alexander Fedotov <alexander.fedot...@gmail.com>
wrote:

> Hi Anil.
>
> Could you please also share C:/Anil/ignite-client.xml? It would also be useful if you
> could capture JFR recordings for this case with allocation profiling enabled.
> Just to clarify, by 4L do you mean 4 million entries?
>
> Kind regards,
> Alex.
>
> On Mon, Jun 19, 2017 at 10:15 AM, Alexander Fedotov <
> alexander.fedot...@gmail.com> wrote:
>
>> Thanks. I'll take a look and let you know about any findings.
>>
>> Kind regards,
>> Alex
>>
>> On 18 June 2017 at 3:33 PM, "Anil" <anilk...@gmail.com> wrote:
>>
>> Hi Alex,
>>
>> test program repository - https://github.com/adasari/test-ignite-jdbc.git
>>
>> Please let us know if you have any suggestions or questions. Thanks.
>>
>> Thanks
>>
>> On 15 June 2017 at 10:58, Anil <anilk...@gmail.com> wrote:
>>
>>> Sure. Thanks.
>>>
>>> On 14 June 2017 at 19:51, afedotov <alexander.fedot...@gmail.com> wrote:
>>>
>>>> Hi, Anil.
>>>>
>>>> Could you please share the full code (class/method) that you are using to
>>>> read the data?
>>>>
>>>> Kind regards,
>>>> Alex
>>>>
>>>> On 12 June 2017 at 4:07 PM, "Anil [via Apache Ignite Users]" <[hidden email]> wrote:
>>>>
>>>>> Do you have any advice on implementing a large-record export from
>>>>> Ignite?
>>>>>
>>>>> I can't really use ScanQuery, since my whole application is built around the
>>>>> JDBC driver and expressing complex queries as scan queries is very difficult.
>>>>>
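>>>>> For example, even a condition that is a single WHERE clause in SQL has to be
>>>>> hand-written as a predicate. A rough sketch (the cache name and the InstallBase
>>>>> getters below are made up for illustration):
>>>>>
>>>>> // Assumes org.apache.ignite.Ignite / IgniteCache, cache.query.ScanQuery,
>>>>> // cache.query.QueryCursor, lang.IgniteBiPredicate and javax.cache.Cache.
>>>>> void exportWithScanQuery(Ignite ignite) {
>>>>>     IgniteCache<Long, InstallBase> cache = ignite.cache("INSTALL_BASE"); // placeholder name
>>>>>
>>>>>     // Every filtering condition has to be re-implemented in Java instead of SQL.
>>>>>     ScanQuery<Long, InstallBase> qry = new ScanQuery<>(
>>>>>         (IgniteBiPredicate<Long, InstallBase>)(key, val) ->
>>>>>             "ACTIVE".equals(val.getStatus()) && val.getQuantity() > 10); // made-up fields
>>>>>
>>>>>     qry.setPageSize(200);
>>>>>
>>>>>     try (QueryCursor<Cache.Entry<Long, InstallBase>> cur = cache.query(qry)) {
>>>>>         for (Cache.Entry<Long, InstallBase> e : cur) {
>>>>>             // write e.getValue() to the export file
>>>>>         }
>>>>>     }
>>>>> }
>>>>>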
>>>>> Thanks
>>>>>
>>>>> On 10 June 2017 at 18:48, Anil <[hidden email]> wrote:
>>>>>
>>>>>> From reading the code, I understand that internally there is no cursor from the H2
>>>>>> database (Ignite's embedded H2), and that all mapper responses are consolidated at
>>>>>> the reducer. That means that when exporting a large number of records, all of the
>>>>>> data is held in memory.
>>>>>>
>>>>>> if (send(nodes,
>>>>>>     oldStyle ?
>>>>>>         new GridQueryRequest(qryReqId,
>>>>>>             r.pageSize,
>>>>>>             space,
>>>>>>             mapQrys,
>>>>>>             topVer,
>>>>>>             extraSpaces(space, qry.spaces()),
>>>>>>             null,
>>>>>>             timeoutMillis) :
>>>>>>         new GridH2QueryRequest()
>>>>>>             .requestId(qryReqId)
>>>>>>             .topologyVersion(topVer)
>>>>>>             .pageSize(r.pageSize)
>>>>>>             .caches(qry.caches())
>>>>>>             .tables(distributedJoins ? qry.tables() : null)
>>>>>>             .partitions(convert(partsMap))
>>>>>>             .queries(mapQrys)
>>>>>>             .flags(flags)
>>>>>>             .timeout(timeoutMillis),
>>>>>>     oldStyle && partsMap != null ? new ExplicitPartitionsSpecializer(partsMap) : null,
>>>>>>     false)) {
>>>>>>
>>>>>>     awaitAllReplies(r, nodes, cancel);
>>>>>>
>>>>>>     // once the responses from all nodes for the query are received... proceed further?
>>>>>>
>>>>>>     if (!retry) {
>>>>>>         if (skipMergeTbl) {
>>>>>>             List<List<?>> res = new ArrayList<>();
>>>>>>
>>>>>>             // Simple UNION ALL can have multiple indexes.
>>>>>>             for (GridMergeIndex idx : r.idxs) {
>>>>>>                 Cursor cur = idx.findInStream(null, null);
>>>>>>
>>>>>>                 while (cur.next()) {
>>>>>>                     Row row = cur.get();
>>>>>>
>>>>>>                     int cols = row.getColumnCount();
>>>>>>
>>>>>>                     List<Object> resRow = new ArrayList<>(cols);
>>>>>>
>>>>>>                     for (int c = 0; c < cols; c++)
>>>>>>                         resRow.add(row.getValue(c).getObject());
>>>>>>
>>>>>>                     res.add(resRow);
>>>>>>                 }
>>>>>>             }
>>>>>>
>>>>>>             resIter = res.iterator();
>>>>>>         }
>>>>>>         else {
>>>>>>             // in case of the split-query scenario
>>>>>>         }
>>>>>>     }
>>>>>>
>>>>>> return new GridQueryCacheObjectsIterator(resIter, cctx, keepPortable);
>>>>>>
>>>>>> The query cursor is an iterator that does the column-value mapping one page at a
>>>>>> time, but all of the query's records are still held in memory. Correct?
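>>>>>>
>>>>>> (For reference, page-by-page consumption through the cache API looks roughly like the
>>>>>> sketch below, assuming an Ignite instance `ignite`; the cache name, table and page
>>>>>> size are placeholders, and the question is whether the reducer has already
>>>>>> materialized every row before the first page is handed out.)
>>>>>>
>>>>>> SqlFieldsQuery qry = new SqlFieldsQuery("SELECT * FROM InstallBase");
>>>>>>
>>>>>> qry.setPageSize(200);
>>>>>>
>>>>>> // The cursor is consumed one row at a time, page by page.
>>>>>> try (QueryCursor<List<?>> cur = ignite.cache("INSTALL_BASE").query(qry)) {
>>>>>>     for (List<?> row : cur) {
>>>>>>         // map the column values of one row
>>>>>>     }
>>>>>> }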
>>>>>>
>>>>>> Please correct me if I am wrong. Thanks.
>>>>>>
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>> On 10 June 2017 at 15:53, Anil <[hidden email]> wrote:
>>>>>>
>>>>>>>
>>>>>>> JVM parameters used:
>>>>>>>
>>>>>>> -Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC
>>>>>>> -XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
>>>>>>> -Xloggc:C:/Anil/dumps/gc-client.log -XX:+HeapDumpOnOutOfMemoryError
>>>>>>> -XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy
>>>>>>> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
>>>>>>> -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
>>>>>>> -XX:+PrintFlagsFinal -XX:HeapDumpPath=C:/Anil/dumps/heapdump-client.hprof
>>>>>>>
>>>>>>> Thanks.
>>>>>>>
>>>>>>> On 10 June 2017 at 15:06, Anil <[hidden email]> wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> I have implemented an export feature for Ignite data using JDBC result-set
>>>>>>>> iteration:
>>>>>>>>
>>>>>>>> ResultSet rs = statement.executeQuery();
>>>>>>>>
>>>>>>>> while (rs.next()){
>>>>>>>> // do operations
>>>>>>>>
>>>>>>>> }
>>>>>>>>
>>>>>>>> and the fetch size is 200.
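>>>>>>>>
>>>>>>>> For completeness, that fetch size is applied on the statement before executing the
>>>>>>>> query, roughly:
>>>>>>>>
>>>>>>>> statement.setFetchSize(200); // standard JDBC hint for how many rows to fetch per page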
>>>>>>>>
>>>>>>>> When I run the export operation twice for 4L (0.4 million) records, the whole
>>>>>>>> 6 GB heap fills up and is never released.
>>>>>>>>
>>>>>>>> Initially I thought that the operations transforming the result set into a file
>>>>>>>> were causing the memory to fill up, but that is not the case.
>>>>>>>>
>>>>>>>> I did just the following, and the memory still grows and is not
>>>>>>>> released:
>>>>>>>>
>>>>>>>> while (rs.next()){
>>>>>>>>  // nothing
>>>>>>>> }
>>>>>>>>
>>>>>>>> num     #instances         #bytes  class name
>>>>>>>> ----------------------------------------------
>>>>>>>>    1:      55072353     2408335272  [C
>>>>>>>>    2:      54923606     1318166544  java.lang.String
>>>>>>>>    3:        779006      746187792  [B
>>>>>>>>    4:        903548      304746304  [Ljava.lang.Object;
>>>>>>>>    5:        773348      259844928  net.juniper.cs.entity.InstallBase
>>>>>>>>    6:       4745694      113896656  java.lang.Long
>>>>>>>>    7:       1111692       44467680  sun.nio.cs.UTF_8$Decoder
>>>>>>>>    8:        773348       30933920  org.apache.ignite.internal.binary.BinaryObjectImpl
>>>>>>>>    9:        895627       21495048  java.util.ArrayList
>>>>>>>>   10:         12427       16517632  [I
>>>>>>>>
>>>>>>>>
>>>>>>>> I am not sure why the number of String objects keeps growing.
>>>>>>>>
>>>>>>>> Could you please help me understand the issue?
>>>>>>>>
>>>>>>>> Thanks
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
<?xml version="1.0" encoding="UTF-8"?>

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:util="http://www.springframework.org/schema/util"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/util
                           http://www.springframework.org/schema/util/spring-util.xsd">

    <bean id="grid1" class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="clientMode" value="true" />
        <property name="peerClassLoadingEnabled" value="false" />
        <property name="metricsLogFrequency" value="0" />

        <property name="communicationSpi">
            <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
                <property name="idleConnectionTimeout" value="400000" />
            </bean>
        </property>

        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <value>X.X.X.186:47500..47509</value>
                                <value>X.X.X.187:47500..47509</value>
                                <value>X.X.X.188:47500..47509</value>
                                <value>X.X.X.189:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>
