Re: Ignite cache data size problem.

2016-05-17 Thread ght230
I would like to know how to see the size of the memory used, using
ignitevisorcmd.

For example, regarding the earlier result "Offheap, with indexes - about 400Mb
offheap memory is required, or ~20Gb for all your objects":

How can I confirm that the offheap memory is 400Mb? From "Non-heap memory
initialized", "Non-heap memory used", "Non-heap memory committed" or
"Non-heap memory maximum"?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cache-data-size-problem-tp4449p5008.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite cache data size problem.

2016-04-28 Thread vkulichenko
Hi Kevin,

Ignite Visor [1] provides a 'gc' command that performs GC on all nodes. You
can also use any other tool, e.g. VisualVM.

[1] https://apacheignite.readme.io/docs/command-line-interface

-Val
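
For reference, a minimal sketch of triggering a GC from code instead of Visor
(assuming a node started programmatically; the configuration is omitted). It
simply broadcasts a closure that calls System.gc() on every node:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteRunnable;

public class ClusterGc {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Run System.gc() on every node in the cluster, similar to Visor's 'gc' command.
            ignite.compute().broadcast(new IgniteRunnable() {
                @Override public void run() {
                    System.gc();
                }
            });
        }
    }
}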



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cache-data-size-problem-tp4449p4680.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite cache data size problem.

2016-04-27 Thread kevin.zheng
Hi Binti,
Thank you for your help.
May I ask how to perform GC on each node?
Using code, or a command inside the CLI?

best regards,
Kevin



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cache-data-size-problem-tp4449p4635.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite cache data size problem.

2016-04-27 Thread bintisepaha
Kevin, we are facing similar issues. The reason is not Ignite using up the
heap; it's the data you load from the database that takes up so much memory
until it makes it into the cache. We see a lot of heap usage after start-up,
and if we then perform a GC, the memory on each node drops by at least 4
GB.

The way we load the data from the database is to run loadCache only from the
final node (we call it the initiator); the others are called joiners. The
joiners do not go to the database; only the initiator does. But the initiator
calls ignite.compute on the server nodes so that data is loaded in parallel,
like we want. The joiners are started first and have no data-loading code; the
initiator is started later and distributes work to the other joiner nodes and
itself. Hence the DB queries are performed only once, but in parallel.

Let me know if this makes sense.

Thanks,
Binti
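
A rough sketch of the initiator/joiner pattern described above, under several
assumptions: the class names, the config file, and the loadSlice() helper are
placeholders rather than the actual code, and slicing by node order is just one
possible way to split the DB work:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.IgniteInstanceResource;

public class InitiatorLoad {
    public static void main(String[] args) {
        // Joiners are plain server nodes with no loading code; only the initiator runs this.
        try (Ignite ignite = Ignition.start("initiator-config.xml")) { // placeholder config
            // Broadcast the load job to every server node, including the initiator itself,
            // so the DB is queried once in total, but in parallel across the nodes.
            ignite.compute(ignite.cluster().forServers()).broadcast(new LoadJob());
        }
    }

    static class LoadJob implements IgniteRunnable {
        @IgniteInstanceResource
        private transient Ignite ignite; // injected local Ignite instance on each node

        @Override public void run() {
            int slices = ignite.cluster().forServers().nodes().size();
            int slice = (int) (ignite.cluster().localNode().order() % slices);

            // loadSlice() is a hypothetical helper: query only this node's share of the
            // table (e.g. a key range, or MOD(id, slices) = slice) and cache.put() the rows.
            loadSlice(slices, slice);
        }

        private void loadSlice(int slices, int slice) {
            // JDBC scan of this node's slice would go here (omitted).
        }
    }
}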



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cache-data-size-problem-tp4449p4625.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite cache data size problem.

2016-04-27 Thread Vladimir Ozerov
Kevin,

I would say that your assumption - more nodes, faster data load - makes
perfect sense. N nodes will have N threads in total performing the load.
The problem is that the current JDBC POJO store implementation makes this
process not very efficient, because each node iterates over the whole data
set. A more efficient approach is to split the whole data set into several
parts (e.g. by an attribute's value) and then let each node iterate over only
a part of the data, taking the affinity function into account. This would be
the most efficient approach, I believe.

We will improve this in future Ignite versions.

Vladimir.
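
A minimal sketch of the "split the scan" idea, under stated assumptions: a
hand-written CacheStore (not the JDBC POJO store), a numeric id column, slicing
by MOD(id, N), placeholder table/column names, and row-to-object mapping
omitted. Aligning each slice with the affinity function, as mentioned above, is
not handled here, so some loaded entries would still be re-routed to their
primary nodes:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.cache.Cache;
import javax.cache.integration.CacheLoaderException;
import javax.sql.DataSource;

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.lang.IgniteBiInClosure;
import org.apache.ignite.resources.IgniteInstanceResource;

public class SlicedUniqueFieldStore extends CacheStoreAdapter<Long, UniqueField> {
    @IgniteInstanceResource
    private Ignite ignite;          // injected on each node

    private DataSource dataSource;  // assumed to be wired up by the cache store factory

    @Override public void loadCache(IgniteBiInClosure<Long, UniqueField> clo, Object... args) {
        int slices = ignite.cluster().forServers().nodes().size();
        int slice = (int) (ignite.cluster().localNode().order() % slices);

        // Each node scans only "its" rows instead of the whole table.
        String sql = "SELECT * FROM unique_field WHERE MOD(id, ?) = ?"; // placeholders

        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setInt(1, slices);
            stmt.setInt(2, slice);

            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    UniqueField val = new UniqueField(); // field mapping from rs omitted
                    clo.apply(rs.getLong("id"), val);
                }
            }
        }
        catch (SQLException e) {
            throw new CacheLoaderException(e);
        }
    }

    // Read/write-through methods are not relevant to the bulk load and are left empty.
    @Override public UniqueField load(Long key) { return null; }
    @Override public void write(Cache.Entry<? extends Long, ? extends UniqueField> e) { /* no-op */ }
    @Override public void delete(Object key) { /* no-op */ }
}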


Re: Ignite cache data size problem.

2016-04-27 Thread Zhengqingzheng
Thank you, Vladimir.
I will try the second recommendation first and let you know the result as soon
as possible.

I thought more nodes would result in faster data loading, because I assumed
that each node could load different data simultaneously. Clearly, that was my
misunderstanding about increasing the number of nodes.

Best regards,
Kevin

From: Vladimir Ozerov [mailto:voze...@gridgain.com]
Sent: April 27, 2016 18:03
To: user@ignite.apache.org
Subject: Re: Ignite cache data size problem.

Hi Kevin,

My considerations:
1) I see that the amount of allocated heap gradually increases over time. Can
you confirm that you use the OFFHEAP configuration which you showed several
posts earlier?
2) Why do you have several nodes per host? Our recommended approach is to have
a single node per machine.
3) The main thing: you invoke the loadCache() method. This method will execute
the same query - "SELECT *", as you mentioned earlier - on all nodes. It means
that your cluster will have to perform a full scan N times, where N is the
number of nodes.

I looked closely at our store API and I believe it is not well suited for such
cases (millions of rows, lots of nodes in the topology). We will think of a
better API to handle such scenarios.

My recommendations for now:

1) Start no more than 1 node per host. This way you will decrease the number of
scans from 16 to 2-3, which should make performance much better. Let each node
take as much memory as possible.

2) If this approach still doesn't show good enough numbers, consider switching
to IgniteDataStreamer instead -
https://apacheignite.readme.io/docs/data-streamers
It was designed specifically for efficient bulk data loading, employing batching
and affinity co-location techniques. You can do that in the following way:
- Create an IgniteDataStreamer for your cache;
- Scan the table using JDBC, Hibernate or any other framework you have;
- For each returned row, create the appropriate cache key and value objects;
- Put the key-value pair into the streamer: IgniteDataStreamer.addData();
- When the scan is finished, close the streamer: IgniteDataStreamer.close().
This should be done on only one node (a rough sketch follows below).

I believe the second approach should show much better numbers.

Vladimir.
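
As referenced in the steps above, a minimal sketch of that streamer-based load,
intended to run on a single node. The config file, cache name, JDBC URL, table
and column names, and the UniqueField getters/setters are placeholders inferred
from this thread, not a verified implementation:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerLoad {
    public static void main(String[] args) throws Exception {
        try (Ignite ignite = Ignition.start("client-config.xml");              // placeholder config
             IgniteDataStreamer<String, UniqueField> streamer =
                 ignite.dataStreamer("uniqueFieldCache");                      // placeholder cache name
             Connection conn = DriverManager.getConnection("jdbc:...");        // placeholder JDBC URL
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM unique_field")) { // placeholder table

            while (rs.next()) {
                UniqueField val = new UniqueField();
                val.setOId(rs.getString("o_id"));          // column names are assumptions
                val.setOrgId(rs.getString("org_id"));
                val.setGuid(rs.getString("guid"));
                val.setMsg(rs.getString("msg"));
                val.setNum(rs.getBigDecimal("num"));
                val.setDate(rs.getDate("dt"));

                // addData() batches entries and sends each batch to its affinity node.
                streamer.addData(val.getOId(), val);
            }
        } // closing the streamer flushes any remaining batched entries
    }
}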


On Wed, Apr 27, 2016 at 6:06 AM, Zhengqingzheng
<zhengqingzh...@huawei.com> wrote:
Hi Vladimir,
Sorry to reply so late. The loadCache process took 8 hours to load all the
data (this time no exception occurred, but memory consumption went up to
56gb, i.e. 80% of all the heap I have defined across 10 nodes, with each
node allocated 7gb of heap).
I have attached all the log files, which were located in the work/log/ folder.
Please see the attachment.


From: Vladimir Ozerov [mailto:voze...@gridgain.com]
Sent: April 25, 2016 20:21
To: user@ignite.apache.org
Subject: Re: Ignite cache data size problem.

Hi Kevin,

I performed several experiments. Essentially, I put 1M entries of the class you 
provided with fields initialized as follows:


for (int i = 0; i < 1_000_000; i++) {
    UniqueField field = new UniqueField();

    field.setDate(new Date());
    field.setGuid(UUID.randomUUID().toString());
    field.setMsg(String.valueOf(i));
    field.setNum(BigDecimal.valueOf(ThreadLocalRandom.current().nextDouble()));
    field.setOId(String.valueOf(i));
    field.setOrgId(String.valueOf(i));

    cache.put(i, field);
}

My results are:
1) Onheap, no indexes - about 400Mb is required to store 1M objects, or ~20Gb 
for 47M objects.
2) Onheap, with indexes - about 650Mb, or ~30Gb for 47M objects.
3) Offheap, with indexes - about 400Mb offheap memory is required, or ~20Gb for 
all your objects.

Could you please provide more information on the error you receive? Also, you
could try loading entries in batches of a well-defined size (say, 1M) and see
what happens to the system. I expect you should see similar numbers.

Vladimir.

On Fri, Apr 22, 2016 at 3:26 PM, kevin.zheng
<zhengqingzh...@huawei.com> wrote:
BTW, I created 4 + 3 nodes on two servers.
For each node I ran a command like this: ./ignite.sh -J-Xmx8g -J-Xms8g

kind regards,
Kevin



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cache-data-size-problem-tp4449p4454.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.




Re: Ignite cache data size problem.

2016-04-26 Thread Vladimir Ozerov
Hi Kevin,

Yes, log files are created under this directory by default.

Vladimir.

On Tue, Apr 26, 2016 at 3:18 PM, Zhengqingzheng <zhengqingzh...@huawei.com>
wrote:

> Hi Vladimir,
>
> No problem.
>
> I will re-run the loading process and give you  the log file.
>
> To be clear, when you say log file, do you mean files located at
> work/log/*?
>
> I use IgniteCache.loadCache(null, “select * from table”) to load all the
> data.
>
>
>
> Best regards,
>
> Kevin
>
>
>
> From: Vladimir Ozerov [mailto:voze...@gridgain.com]
> Sent: April 26, 2016 20:15
> To: user@ignite.apache.org
> Subject: Re: Ignite cache data size problem.
>
>
>
> Hi Kevin,
>
>
>
> Could you please re-run your case and attach the Ignite logs and GC logs from
> all participating servers? I would expect that this is either a kind of
> out-of-memory problem or network saturation. Also, please explain how
> exactly you load data into Ignite. Do you use DataStreamer, or maybe
> IgniteCache.loadCache, or IgniteCache.put?
>
>
>
> Vladimir.
>
>
>
> On Tue, Apr 26, 2016 at 3:01 PM, Zhengqingzheng <zhengqingzh...@huawei.com>
> wrote:
>
> Hi Vladimir,
>
> Thank you for your help.
>
> I tried to load 1 million records and calculated each object's size
> (including the key and value objects, where the key is a string type);
> summed together, I found the total memory consumption to be 130Mb.
>
>
>
> Because the ./ignitevisor.sh command only shows the number of records and no
> data allocation information, I don't know how much memory has been consumed
> for each type of cache.
>
>
>
> My result is as follows:
>
> Log:
>
> Found big object: [Ljava.util.HashMap$Node;@24833164 size:
> 30.888206481933594Mb
>
> Found big object:   java.util.HashMap@15819383 size: 30.88824462890625Mb
>
> Found big object: java.util.HashSet@10236420 size: 30.888259887695312Mb
>
> key size: 32388688 human readable data: 30.888259887695312Mb
>
> Found big object:   [Lorg.jsr166.ConcurrentHashMap8$Node;@29556439 size:
> 129.99818420410156Mb
>
> Found big object: org.jsr166.ConcurrentHashMap8@19238297 size:
> *129.99822235107422Mb*
>
> value size: 136313016 human readable data: 129.99822235107422Mb
>
> The whole number of records is 47 million, so the data size inside the
> cache should be 130*47 = 6110Mb (around 6Gb).
>
>
>
> However, when I try to load all the data into the cache, I still get
> exceptions:
>
> The exception information is listed as follows:
>
> 1.   -- exception info from client
> 
>
> Before the exception occurred, I had 10 nodes on two servers: server1 (48G
> RAM) had 6 nodes, each assigned 7gb of JVM heap; server2 (32gb
> RAM) had 4 nodes with the same JVM settings as the previous one.
>
> After the exception, the client stopped and 8 nodes were left: server1's nodes
> remained (no exception occurred on this server), while on server2 only 2 nodes
> remained and two nodes dropped.
>
> The total number of records loaded is 37Million.
>
>
>
> [19:01:47] Topology snapshot [ver=77, servers=9, clients=1, CPUs=20,
> heap=64.0GB]
>
> [19:01:47,463][SEVERE][pub-#46%null%][GridTaskWorker] Failed to obtain
> remote job result policy for result from ComputeTask.result(..) method
> (will fail the whole task): GridJobResultImpl [job=C2 [],
> sib=GridJobSiblingImpl
> [sesId=0dbc0f15451-880a2dd1-bc95-4084-a705-4effcec5d2cd,
> jobId=6dbc0f15451-832bed3e-dc5d-4743-9853-127e3b516924,
> nodeId=832bed3e-dc5d-4743-9853-127e3b516924, isJobDone=false],
> jobCtx=GridJobContextImpl
> [jobId=6dbc0f15451-832bed3e-dc5d-4743-9853-127e3b516924, timeoutObj=null,
> attrs={}], node=TcpDiscoveryNode [id=832bed3e-dc5d-4743-9853-127e3b516924,
> addrs=[0:0:0:0:0:0:0:1%lo, 10.120.70.122, 127.0.0.1],
> sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, /0:0:0:0:0:0:0:1%lo:47500, /
> 10.120.70.122:47500, /127.0.0.1:47500], discPort=47500, order=7,
> intOrder=7, lastExchangeTime=1461663619161, loc=false,
> ver=1.5.0#20151229-sha1:f1f8cda2, isClient=false], ex=class
> o.a.i.cluster.ClusterTopologyException: Node has left grid:
> 832bed3e-dc5d-4743-9853-127e3b516924, hasRes=true, isCancelled=false,
> isOccupied=true]
>
> class org.apache.ignite.cluster.ClusterTopologyException: Node has left
> grid: 832bed3e-dc5d-4743-9853-127e3b516924
>
>  at
> org.apache.ignite.internal.processors.task.GridTaskWorker.onNodeLeft(GridTaskWorker.java:1315)
>
>  at
> org.apache.ignite.internal.processors.task.GridTaskProcessor$TaskDiscoveryListener$1.run(GridTaskProcessor.java:1246)
>
>  at
> org.apache.ignite.internal.util

Re: Ignite cache data size problem.

2016-04-26 Thread Vladimir Ozerov
>35700K(1013632K), 0.0103066 secs] [Times:
> user=0.02 sys=0.00, real=0.02 secs]
>
> 2016-04-26T18:46:28.413+0800: 3975.223: [GC (Allocation Failure)
> 2016-04-26T18:46:28.413+0800: 3975.223: [DefNew: 281296K->1679K(314560K),
> 0.0080063 secs] 315316K->35706K(1013632K), 0.0081820 secs] [Times:
> user=0.00 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:47:04.190+0800: 4011.234: [GC (Allocation Failure)
> 2016-04-26T18:47:04.190+0800: 4011.234: [DefNew: 281295K->1678K(314560K),
> 0.0102572 secs] 315322K->35712K(1013632K), 0.0104334 secs] [Times:
> user=0.02 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:47:40.161+0800: 4047.120: [GC (Allocation Failure)
> 2016-04-26T18:47:40.161+0800: 4047.120: [DefNew: 281294K->1677K(314560K),
> 0.0109581 secs] 315328K->35718K(1013632K), 0.0111609 secs] [Times:
> user=0.01 sys=0.00, real=0.02 secs]
>
> 2016-04-26T18:48:15.352+0800: 4082.428: [GC (Allocation Failure)
> 2016-04-26T18:48:15.352+0800: 4082.428: [DefNew: 281293K->1677K(314560K),
> 0.0087567 secs] 315334K->35724K(1013632K), 0.0089472 secs] [Times:
> user=0.00 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:48:51.254+0800: 4118.407: [GC (Allocation Failure)
> 2016-04-26T18:48:51.254+0800: 4118.407: [DefNew: 281293K->1677K(314560K),
> 0.0115241 secs] 315340K->35731K(1013632K), 0.0117504 secs] [Times:
> user=0.02 sys=0.00, real=0.02 secs]
>
> 2016-04-26T18:49:26.547+0800: 4153.705: [GC (Allocation Failure)
> 2016-04-26T18:49:26.547+0800: 4153.705: [DefNew: 281293K->1967K(314560K),
> 0.0095867 secs] 315347K->36027K(1013632K), 0.0097683 secs] [Times:
> user=0.01 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:50:02.043+0800: 4189.206: [GC (Allocation Failure)
> 2016-04-26T18:50:02.043+0800: 4189.206: [DefNew: 281583K->1676K(314560K),
> 0.0096753 secs] 315643K->35742K(1013632K), 0.0098599 secs] [Times:
> user=0.02 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:50:37.663+0800: 4224.505: [GC (Allocation Failure)
> 2016-04-26T18:50:37.663+0800: 4224.505: [DefNew: 281292K->1675K(314560K),
> 0.0077731 secs] 315358K->35749K(1013632K), 0.0079524 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
>
> 2016-04-26T18:51:13.058+0800: 4259.780: [GC (Allocation Failure)
> 2016-04-26T18:51:13.058+0800: 4259.780: [DefNew: 281291K->1676K(314560K),
> 0.0086972 secs] 315365K->35756K(1013632K), 0.0088771 secs] [Times:
> user=0.02 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:51:48.677+0800: 4295.269: [GC (Allocation Failure)
> 2016-04-26T18:51:48.677+0800: 4295.269: [DefNew: 281292K->1931K(314560K),
> 0.0099264 secs] 315372K->36016K(1013632K), 0.0101337 secs] [Times:
> user=0.02 sys=0.00, real=0.02 secs]
>
> 2016-04-26T18:52:23.956+0800: 4330.493: [GC (Allocation Failure)
> 2016-04-26T18:52:23.956+0800: 4330.493: [DefNew: 281547K->1673K(314560K),
> 0.0079896 secs] 315632K->35766K(1013632K), 0.0081558 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
>
> 2016-04-26T18:52:59.459+0800: 4366.025: [GC (Allocation Failure)
> 2016-04-26T18:52:59.459+0800: 4366.025: [DefNew: 281289K->1942K(314560K),
> 0.0099881 secs] 315382K->36041K(1013632K), 0.0101999 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
>
> 2016-04-26T18:53:35.248+0800: 4401.571: [GC (Allocation Failure)
> 2016-04-26T18:53:35.248+0800: 4401.571: [DefNew: 281558K->1675K(314560K),
> 0.0080323 secs] 315657K->35779K(1013632K), 0.0081949 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
>
> 2016-04-26T18:54:11.231+0800: 4437.434: [GC (Allocation Failure)
> 2016-04-26T18:54:11.231+0800: 4437.434: [DefNew: 281291K->1672K(314560K),
> 0.0107709 secs] 315395K->35785K(1013632K), 0.0109682 secs] [Times:
> user=0.02 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:54:47.360+0800: 4473.741: [GC (Allocation Failure)
> 2016-04-26T18:54:47.360+0800: 4473.741: [DefNew: 281288K->1675K(314560K),
> 0.0077845 secs] 315401K->35796K(1013632K), 0.0079611 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
>
> 2016-04-26T18:55:22.937+0800: 4509.361: [GC (Allocation Failure)
> 2016-04-26T18:55:22.937+0800: 4509.361: [DefNew: 281291K->1675K(314560K),
> 0.0082094 secs] 315412K->35802K(1013632K), 0.0083421 secs] [Times:
> user=0.01 sys=0.00, real=0.01 secs]
>
> 2016-04-26T18:55:58.273+0800: 4544.708: [GC (Allocation Failure)
> 2016-04-26T18:55:58.273+0800: 4544.708: [DefNew: 281291K->1674K(314560K),
> 0.0100806 secs] 315418K->35807K(1013632K), 0.0102764 secs] [Times:
> user=0.00 sys=0.00, real=0.00 secs]
>
> 2016-04-26T18:56:33.184+0800: 4579.506: [GC (Allocation Failure)
> 2016-04-26T18:56:33.184+0800: 4579.507: [DefNew: 281290K->1674K(314560K),
> 0.0111419 secs] 315423K->35814K(1013632K), 0.0113590 secs] [Times:
> user=0.00 sys=0.00, real=0.00 

Re: Ignite cache data size problem.

2016-04-22 Thread kevin.zheng
BTW, I created 4 + 3 nodes on two servers.
For each node I ran a command like this: ./ignite.sh -J-Xmx8g -J-Xms8g

kind regards,
Kevin



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cache-data-size-problem-tp4449p4454.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite cache data size problem.

2016-04-22 Thread Vladimir Ozerov
Hi,

It looks like you have relatively small entries (somewhere around 60-70
bytes per key-value pair). Ignite also has some intrinsic overhead, which
could be more than the actual data in this case. However, I certainly would not
expect it to fail to fit into 80GB.
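
(Back-of-the-envelope, based on the figures quoted in this thread: roughly 3 GB
of raw data divided by 47 million records is about 3 * 1024^3 / 47,000,000 ≈ 68
bytes per key-value pair, which matches the 60-70 byte estimate above.)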

Could you please share your key and value model classes and your XML
configuration so we can investigate further?

Vladimir.
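
Not the poster's actual configuration (which is what is being requested above),
but a sketch of how an off-heap cache was commonly configured on Ignite 1.5.x,
expressed in Java rather than Spring XML; the cache name and the off-heap cap
are placeholders, and the 1.x-era methods should be verified against your
Ignite version:

import org.apache.ignite.cache.CacheMemoryMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class OffheapCacheConfig {
    public static IgniteConfiguration build() {
        CacheConfiguration<String, UniqueField> cacheCfg =
            new CacheConfiguration<>("uniqueFieldCache");          // placeholder cache name

        // Keep values off the Java heap so tens of millions of entries do not create GC pressure.
        cacheCfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
        cacheCfg.setOffHeapMaxMemory(20L * 1024 * 1024 * 1024);    // ~20 GB off-heap cap (placeholder)
        cacheCfg.setStatisticsEnabled(true);                       // so Visor/metrics can report sizes

        return new IgniteConfiguration().setCacheConfiguration(cacheCfg);
    }
}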

On Fri, Apr 22, 2016 at 2:02 PM, Zhengqingzheng 
wrote:

> Hi there,
>
> I am trying to load a table with 47 million records, in which the data size
> is less than 3gb. However, when I load it into memory (two VMs with 48+32
> = 80gb), it still crashes due to not enough memory space. This problem
> occurred when I instantiated 6 + 4 nodes.
>
> Why does the cache model need so much space (3g vs 80g)?
>
> Any idea to explain this issue?
>
>
>
> Kind regards,
>
> Kevin
>
>
>