RE: Query execution too long even after providing index

2018-09-09 Thread Prasad Bhalerao
Guys, is there any solution for this?
Can someone please respond?

Thanks,
Prasad


-- Forwarded message -
From: Prasad Bhalerao 
Date: Fri, Sep 7, 2018, 8:04 AM
Subject: Fwd: Query execution too long even after providing index
To: 


Can we have an update on this?

-- Forwarded message -
From: Prasad Bhalerao 
Date: Wed, Sep 5, 2018, 11:34 AM
Subject: Re: Query execution too long even after providing index
To: 


Hi Andrey,

Can you please help me with this?

Thanks,
Prasad

On Tue, Sep 4, 2018 at 2:08 PM Prasad Bhalerao 
wrote:

>
> I tried changing SqlIndexMaxInlineSize to 32 bytes and 100 bytes using the
> cache config:
>
> ipContainerIpV4CacheCfg.setSqlIndexMaxInlineSize(32); // and, separately, 100
>
> But it did not improve the SQL execution time. The execution time
> increases as the cache size grows.
>
> It is a simple range scan query. Which part of the execution process might
> take time in this case?
>
> Can you please advise?
>
> Thanks,
> Prasad
>
> On Mon, Sep 3, 2018 at 8:06 PM Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi,
>>
>> Have you tried increasing the index inlineSize? It is 10 bytes by default.
>>
>> Your indices use simple value types (Java primitives), so all columns
>> can be easily inlined.
>> It should be enough to increase inlineSize up to 32 bytes (3 longs + 1
>> int = 3*(8 /*long*/ + 1 /*type code*/) + (4 /*int*/ + 1 /*type code*/)) to
>> inline all columns for idx1, and up to 27 bytes (3 longs) for idx2.
>>
>> You can try to benchmark queries with different inline sizes to find
>> optimal ratio between speedup and index size.
>>
>>
>>
>> On Mon, Sep 3, 2018 at 5:12 PM Prasad Bhalerao <
>> prasadbhalerao1...@gmail.com> wrote:
>>
>>> Hi,
>>> My cache has 1 million rows and the SQL is as follows.
>>> This SQL takes around 1.836 seconds to execute, and the time
>>> increases as I keep adding data to the cache. Sometimes it takes more
>>> than 4 seconds.
>>>
>>> Is there any way to improve the execution time?
>>>
>>> *SQL:*
>>> SELECT id, moduleId, ipEnd, ipStart
>>> FROM IpContainerIpV4Data USE INDEX(ip_container_ipv4_idx1)
>>> WHERE subscriptionId = ? AND moduleId = ? AND (ipStart <= ? AND ipEnd >= ?)
>>> UNION ALL
>>> SELECT id, moduleId, ipEnd, ipStart
>>> FROM IpContainerIpV4Data USE INDEX(ip_container_ipv4_idx1)
>>> WHERE subscriptionId = ? AND moduleId = ? AND (ipStart <= ? AND ipEnd >= ?)
>>> UNION ALL
>>> SELECT id, moduleId, ipEnd, ipStart
>>> FROM IpContainerIpV4Data USE INDEX(ip_container_ipv4_idx1)
>>> WHERE subscriptionId = ? AND moduleId = ? AND (ipStart >= ? AND ipEnd <= ?)
>>>
>>> *Indexes are as follows:*
>>>
>>> public class IpContainerIpV4Data implements Data, UpdatableData {
>>>
>>>   @QuerySqlField
>>>   private long id;
>>>
>>>   @QuerySqlField(orderedGroups = {
>>>       @QuerySqlField.Group(name = "ip_container_ipv4_idx1", order = 1)})
>>>   private int moduleId;
>>>
>>>   @QuerySqlField(orderedGroups = {
>>>       @QuerySqlField.Group(name = "ip_container_ipv4_idx1", order = 0),
>>>       @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 0)})
>>>   private long subscriptionId;
>>>
>>>   @QuerySqlField(orderedGroups = {
>>>       @QuerySqlField.Group(name = "ip_container_ipv4_idx1", order = 3, descending = true),
>>>       @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 2, descending = true)})
>>>   private long ipEnd;
>>>
>>>   @QuerySqlField(orderedGroups = {
>>>       @QuerySqlField.Group(name = "ip_container_ipv4_idx1", order = 2),
>>>       @QuerySqlField.Group(name = "ip_container_ipv4_idx2", order = 1)})
>>>   private long ipStart;
>>> }
>>>
>>>
>>> *Execution Plan:*
>>>
>>> 2018-09-03 19:32:03,098 232176 [pub-#78%springDataNode%] INFO
>>> c.q.a.g.d.IpContainerIpV4DataGridDaoImpl - SELECT
>>> __Z0.ID AS __C0_0,
>>> __Z0.MODULEID AS __C0_1,
>>> __Z0.IPEND AS __C0_2,
>>> __Z0.IPSTART AS __C0_3
>>> FROM IP_CONTAINER_IPV4_CACHE.IPCONTAINERIPV4DATA __Z0 USE INDEX
>>> (IP_CONTAINER_IPV4_IDX1)
>>> /* IP_CONTAINER_IPV4_CACHE.IP_CONTAINER_IPV4_IDX1: SUBSCRIPTIONID =
>>> ?1
>>> AND MODULEID = ?2
>>> AND IPSTART <= ?3
>>> AND IPEND >= ?4
>>>  */
>>> WHERE ((__Z0.SUBSCRIPTIONID = ?1)
>>> AND (__Z0.MODULEID = ?2))
>>> AND ((__Z0.IPSTART <= ?3)
>>> AND (__Z0.IPEND >= ?4))
>>> 2018-09-03 19:32:03,098 232176 [pub-#78%springDataNode%] INFO
>>> c.q.a.g.d.IpContainerIpV4DataGridDaoImpl - SELECT
>>> __Z1.ID AS __C1_0,
>>> __Z1.MODULEID AS __C1_1,
>>> __Z1.IPEND AS __C1_2,
>>> __Z1.IPSTART AS __C1_3
>>> FROM IP_CONTAINER_IPV4_CACHE.IPCONTAINERIPV4DATA __Z1 USE INDEX
>>> (IP_CONTAINER_IPV4_IDX1)
>>> /* IP_CONTAINER_IPV4_CACHE.IP_CONTAINER_IPV4_IDX1: SUBSCRIPTIONID =
>>> ?5
>>> AND MODULEID = ?6
>>> AND IPSTART <= ?7
>>> AND IPEND >= ?8
>>>  */
>>> WHERE ((__Z1.SUBSCRIPTIONID = ?5)
>>> AND (__Z1.MODULEID = ?6))
>>> AND 
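
The inline-size advice from this thread can be applied per cache. A minimal sketch, assuming Ignite's standard configuration API (the cache name and value class are taken from the thread above); this is a config fragment, not a complete program:

```java
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: inline all columns of ip_container_ipv4_idx1
// (3 longs + 1 int = 3*(8+1) + (4+1) = 32 bytes, as computed above).
CacheConfiguration<Long, IpContainerIpV4Data> cacheCfg =
        new CacheConfiguration<>("IP_CONTAINER_IPV4_CACHE");

// Applies to all indexes of this cache; benchmark 32 against larger
// values to trade index size for lookup speed.
cacheCfg.setSqlIndexMaxInlineSize(32);
```

Note that the value must be a single integer per run (e.g. 32, then 100 in a separate benchmark), not an expression like `32/100`, which evaluates to 0 in Java.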

Re: The system cache size was slowly increased

2018-09-09 Thread Justin Ji
Prem -

Thanks for your reply.
You explained that the Ignite container grew from 2.5G to more than 4G
because there is 2G of heap memory plus more than 2G of off-heap memory. I
think that is correct, and it solves one of my main problems.

But another question: why does the system memory cache also grow slowly and
never get reclaimed? As far as I know, the Linux kernel reclaims the system
memory cache automatically when memory runs low.

The memory cache I mentioned before refers to the 'buff/cache' value printed
by the 'free' command.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Connection reset by peer

2018-09-09 Thread Jeff Jiao
Thank you Ilya. I found that I had increased this timeout on the server side
but not on the client side. I guess this is the problem: it must have been
the client that closed the connection.





RE: Eviction Policy on Dirty data

2018-09-09 Thread Stanislav Lukyanov
With writeThrough an entry in the cache will never be "dirty" in that sense -
the cache store updates the backing DB at the same time the cache update
happens.
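
For completeness, enabling a write-through store is a cache-configuration setting. A minimal sketch assuming Ignite's standard API (`MyValue` and `MyCacheStore` are illustrative names, not from this thread); this is a config fragment:

```java
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: with writeThrough enabled, every cache update is written to the
// backing DB synchronously, so entries never diverge from the store.
CacheConfiguration<Long, MyValue> cfg = new CacheConfiguration<>("myCache");
cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStore.class)); // hypothetical store class
cfg.setWriteThrough(true);
```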

From: monstereo
Sent: August 14, 2018, 22:39
To: user@ignite.apache.org
Subject: Re: Eviction Policy on Dirty data

yes, using cachestore and write through

dkarachentsev wrote
> Hi,
> 
> Could you please explain how you update the database? Do you use a
> CacheStore with writeThrough, or do you save manually?
> 
> Anyway, you can update data with a custom expiry policy:
> cache.withExpiryPolicy(policy) [1]
> 
> [1]
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteCache.html#withExpiryPolicy-javax.cache.expiry.ExpiryPolicy-
> 
> Thanks!
> -Dmitry
> 
> 
> 





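
The withExpiryPolicy approach from the quoted reply can be sketched as follows, using the standard JCache expiry API (the key/value types and the 5-minute duration are illustrative); this is a fragment that assumes an existing `IgniteCache`:

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.IgniteCache;

// Sketch: a decorated view of the cache whose entries expire
// 5 minutes after creation; the original cache is unaffected.
IgniteCache<Long, MyValue> expiring = cache.withExpiryPolicy(
        new CreatedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)));
expiring.put(1L, value); // this entry is subject to the 5-minute policy
```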



Re: How to throttle/limit the cache-store read threads?

2018-09-09 Thread Saikat Maitra
Hi Mridul,

Have you considered creating a dedicated thread pool for the cache store
operations? Here is an example:

http://commons.apache.org/dormant/threadpool/

Another option would be a Hystrix Command thread pool, since you
mentioned that the cache store is a remote web service.

https://github.com/Netflix/Hystrix/wiki/How-To-Use#Command%20Thread-Pool

HTH
Regards
Saikat

On Sun, Sep 9, 2018 at 6:04 AM, Mridul Chopra 
wrote:

> Hello,
>
> I have implemented a cache store for read-through Ignite caching. Whenever
> there is a read-through operation via the cache store, I want to limit it
> to a single thread, or at most 2. The reason is that the underlying cache
> store is a remote REST-based web service that can only support a limited
> number of connections, so I want only a limited number of read-through
> requests to be sent. Please note that if I configure publicThreadPoolSize = 1,
> all cache.get and cache.getAll operations would be single threaded. That
> behaviour is not desired; I want throttling at the cache store level only.
> Thanks,
> Mridul
>
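
One stdlib-only way to get the throttling described above, independent of any Ignite pool sizing, is to guard the store's backend calls with a `Semaphore`. A sketch under stated assumptions: `ThrottledLoader` is an illustrative helper class, not Ignite API, and your `CacheStore.load` implementation would delegate to it:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Function;

// Illustrative helper, not part of Ignite: caps the number of concurrent
// calls into a slow remote backend (e.g. a REST service) regardless of
// how many threads Ignite uses to serve cache.get / cache.getAll.
class ThrottledLoader<K, V> {
    private final Semaphore permits;
    private final Function<K, V> backend;

    ThrottledLoader(Function<K, V> backend, int maxConcurrent) {
        this.backend = backend;
        this.permits = new Semaphore(maxConcurrent);
    }

    V load(K key) {
        permits.acquireUninterruptibly(); // block until a slot frees up
        try {
            return backend.apply(key);    // at most maxConcurrent calls in flight
        } finally {
            permits.release();
        }
    }
}
```

With this, `cache.get` stays on the public pool at full parallelism, while calls into the web service are capped at `maxConcurrent` (1 or 2 in this case).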