Re: Public thread pool starvation detected

2018-11-18 Thread Anil
How did you guys resolve this? Thanks.

On Tue, 7 Aug 2018 at 13:03, Evgenii Zhuravlev 
wrote:

> Hi,
>
> What kind of compute jobs do you run? Do you start new jobs inside jobs?
> Can you share thread dumps?
>
> Evgenii
>
> 2018-08-07 1:48 GMT+03:00 boomi :
>
>> Hello,
>>
>> We are running into a possible deadlock with Apache Ignite .NET 2.5.0.  We
>> have set up a cluster with 5 server nodes and 1 client node.  We execute an
>> ICompute action from the client node, and in one server node's log there
>> was a single line indicating some kind of thread pool starvation:
>>
>> Line 586: [22:14:34,834][WARNING][grid-timeout-worker-#23][IgniteKernal]
>> Possible thread pool starvation detected (no task completed in last
>> 3ms,
>> is public thread pool size large enough?)
>>
>> All nodes, including the client node, become unresponsive when this
>> happens. We've been stuck on this problem, and any help resolving it would
>> be appreciated.
>>
>> We set up the cluster on a VM with 4 CPU cores and 32 GB of memory.
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>
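Evgenii's question about starting jobs from inside jobs points at the classic cause of this warning: a job that synchronously waits for a nested job holds a public-pool thread while it waits, and with enough such jobs no thread is left to run the nested work. The sketch below reproduces the effect with plain java.util.concurrent rather than the Ignite compute API; the pool size and timeout are illustrative only.

```java
import java.util.concurrent.*;

public class StarvationDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for the public pool, shrunk to one thread to force the effect.
        ExecutorService pool = Executors.newFixedThreadPool(1);

        Future<String> outer = pool.submit(() -> {
            // The nested job can never start in time: the only pool thread
            // is the one blocking here waiting for it.
            Future<String> inner = pool.submit(() -> "inner done");
            try {
                return inner.get(2, TimeUnit.SECONDS);
            } catch (TimeoutException e) {
                return "starved";
            }
        });

        System.out.println(outer.get());
        pool.shutdown();
    }
}
```

The commonly suggested remedies in Ignite are to avoid blocking on nested compute futures from inside a job (use async continuations/listeners instead) or to run nested work in a separate executor, so no public-pool thread ever blocks on work scheduled into the same pool.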


Re: Ignite Xmx configuration

2017-07-28 Thread Anil
Hi Nikolai,


So I need to add 4 GB + the index size as the cache size for an off-heap cache?

Thanks,
Anil

On 28 July 2017 at 17:23, Nikolai Tikhonov <ntikho...@apache.org> wrote:

> Indexes are not included in it. Indexes will occupy extra space.
>
> On Fri, Jul 28, 2017 at 12:21 PM, Anil <anilk...@gmail.com> wrote:
>
>> 1.9 version
>>
>> On 28 July 2017 at 14:08, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>>
>>> Which Ignite version do you use?
>>>
>>> On Fri, Jul 28, 2017 at 11:12 AM, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Hi Nikolai,
>>>>
>>>> One more question- documentation says the indexes are stored in off
>>>> heap as well for off-heap cache?
>>>>
>>>> where does it store ? in the same 4 g (in my case) ? thanks.
>>>>
>>>> Regards,
>>>> Anil
>>>>
>>>> On 28 July 2017 at 12:56, Anil <anilk...@gmail.com> wrote:
>>>>
>>>>> Thanks Nikolai.
>>>>>
>>>>> On 28 July 2017 at 12:47, Nikolai Tikhonov <ntikho...@apache.org>
>>>>> wrote:
>>>>>
>>>>>> Hi!
>>>>>>
>>>>>> If you used off-heap cache then entry is not stored in heap memory.
>>>>>> Hence Xmx is not related with cache size. You need to choose Xmx/Xms 
>>>>>> based
>>>>>> on your application requirements (how many object will be created by your
>>>>>> code). I guess that 2-4 Gb will be enough in your case.
>>>>>>
>>>>>> On Fri, Jul 28, 2017 at 9:59 AM, Anil <anilk...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi Team,
>>>>>>>
>>>>>>> I have two off-heap caches with 4 gb size (per cache)  in my ignite
>>>>>>> node.
>>>>>>>
>>>>>>> What would be the Xmx setting for ignite node ?
>>>>>>>
>>>>>>> is it  2 * 4 + heap required ? or Xmx is not related to any of the
>>>>>>> cache size ? please clarify. thanks.
>>>>>>>
>>>>>>>
>>>>>>> Regards
>>>>>>> Anil
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
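Putting Nikolai's answers together (cache data lives off-heap, indexes take extra off-heap space, and Xmx only covers the application's own objects), a back-of-envelope sizing can be sketched as below. The 30% index overhead and 3 GB heap are purely assumed illustrative figures, not Ignite constants:

```java
import java.util.Locale;

public class MemorySizing {
    public static void main(String[] args) {
        double offHeapPerCacheGb = 4.0; // configured off-heap size per cache
        int cacheCount = 2;
        double indexOverhead = 0.3;     // assumed extra off-heap space for indexes
        double xmxGb = 3.0;             // heap for application objects (2-4 GB suggested)

        // Off-heap caches plus their indexes, then the JVM heap on top.
        double offHeapTotal = cacheCount * offHeapPerCacheGb * (1 + indexOverhead);
        double total = offHeapTotal + xmxGb;
        System.out.printf(Locale.ROOT,
            "off-heap incl. indexes ~ %.1f GB, heap %.1f GB, total ~ %.1f GB%n",
            offHeapTotal, xmxGb, total);
    }
}
```

The point of the exercise: the node's total footprint is the sum of the two budgets, but Xmx itself is chosen independently of the cache sizes.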


Re: Ignite Xmx configuration

2017-07-28 Thread Anil
1.9 version

On 28 July 2017 at 14:08, Nikolai Tikhonov <ntikho...@apache.org> wrote:

> Which Ignite version do you use?
>
> On Fri, Jul 28, 2017 at 11:12 AM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Nikolai,
>>
>> One more question- documentation says the indexes are stored in off heap
>> as well for off-heap cache?
>>
>> where does it store ? in the same 4 g (in my case) ? thanks.
>>
>> Regards,
>> Anil
>>
>> On 28 July 2017 at 12:56, Anil <anilk...@gmail.com> wrote:
>>
>>> Thanks Nikolai.
>>>
>>> On 28 July 2017 at 12:47, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>>>
>>>> Hi!
>>>>
>>>> If you used off-heap cache then entry is not stored in heap memory.
>>>> Hence Xmx is not related with cache size. You need to choose Xmx/Xms based
>>>> on your application requirements (how many object will be created by your
>>>> code). I guess that 2-4 Gb will be enough in your case.
>>>>
>>>> On Fri, Jul 28, 2017 at 9:59 AM, Anil <anilk...@gmail.com> wrote:
>>>>
>>>>> Hi Team,
>>>>>
>>>>> I have two off-heap caches with 4 gb size (per cache)  in my ignite
>>>>> node.
>>>>>
>>>>> What would be the Xmx setting for ignite node ?
>>>>>
>>>>> is it  2 * 4 + heap required ? or Xmx is not related to any of the
>>>>> cache size ? please clarify. thanks.
>>>>>
>>>>>
>>>>> Regards
>>>>> Anil
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>


Re: Ignite Xmx configuration

2017-07-28 Thread Anil
Hi Nikolai,

One more question: the documentation says the indexes are stored off-heap
as well for an off-heap cache?

Where are they stored? In the same 4 GB (in my case)? Thanks.

Regards,
Anil

On 28 July 2017 at 12:56, Anil <anilk...@gmail.com> wrote:

> Thanks Nikolai.
>
> On 28 July 2017 at 12:47, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>
>> Hi!
>>
>> If you used off-heap cache then entry is not stored in heap memory. Hence
>> Xmx is not related with cache size. You need to choose Xmx/Xms based on
>> your application requirements (how many object will be created by your
>> code). I guess that 2-4 Gb will be enough in your case.
>>
>> On Fri, Jul 28, 2017 at 9:59 AM, Anil <anilk...@gmail.com> wrote:
>>
>>> Hi Team,
>>>
>>> I have two off-heap caches with 4 gb size (per cache)  in my ignite node.
>>>
>>> What would be the Xmx setting for ignite node ?
>>>
>>> is it  2 * 4 + heap required ? or Xmx is not related to any of the cache
>>> size ? please clarify. thanks.
>>>
>>>
>>> Regards
>>> Anil
>>>
>>>
>>>
>>
>


Re: Ignite Xmx configuration

2017-07-28 Thread Anil
Thanks Nikolai.

On 28 July 2017 at 12:47, Nikolai Tikhonov <ntikho...@apache.org> wrote:

> Hi!
>
> If you use an off-heap cache then entries are not stored in heap memory,
> so Xmx is not related to the cache size. You need to choose Xmx/Xms based
> on your application's requirements (how many objects will be created by
> your code). I guess that 2-4 GB will be enough in your case.
>
> On Fri, Jul 28, 2017 at 9:59 AM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Team,
>>
>> I have two off-heap caches with 4 gb size (per cache)  in my ignite node.
>>
>> What would be the Xmx setting for ignite node ?
>>
>> is it  2 * 4 + heap required ? or Xmx is not related to any of the cache
>> size ? please clarify. thanks.
>>
>>
>> Regards
>> Anil
>>
>>
>>
>


Ignite Xmx configuration

2017-07-28 Thread Anil
Hi Team,

I have two off-heap caches, 4 GB each, in my Ignite node.

What should the Xmx setting be for the Ignite node?

Is it 2 * 4 GB + the required heap, or is Xmx not related to the cache
sizes at all? Please clarify. Thanks.


Regards
Anil


Re: Two ignite instances on a vm

2017-07-16 Thread Anil
Hi Alex,

I am using Ignite 1.9 with Vert.x 3.4.1. The Ignite instance name is the
grid name in 1.9, am I wrong? Vert.x assigns a unique grid name and node ID
to each instance.

Two instances on a single VM work with multicast discovery but not with TCP.

Thanks,
Anil

On 16 July 2017 at 12:41, afedotov <alexander.fedot...@gmail.com> wrote:

> If you are running multiple Ignite nodes in the same VM each of them
> should have a distinct name. Please specify 
> IgniteConfiguration#igniteInstanceName
> for each node.
>
> Kind regards,
> Alex
>
> On 16 July 2017 at 5:32 AM, "Anil [via Apache Ignite Users]" <[hidden
> email]> wrote:
>
>> Hi Alex,
>>
>> One Ignite instance per node works fine, and the two nodes join the
>> cluster.
>>
>> But when two instances are started on one VM, only one of them joins the
>> cluster. The second instance's log shows nothing: no log lines appear when
>> the other instance is stopped/started/restarted.
>>
>> We tried the default heartbeatFrequency as well, with the same behavior.
>> In large export scenarios the servers experience fairly long GCs, so we
>> are testing with heartbeatFrequency = 6 ms.
>>
>> Thanks,
>> Anil
>>
>> On 16 July 2017 at 02:46, afedotov <[hidden email]> wrote:
>>
>>> Hi,
>>>
>>> What is the log when you start the second node?
>>> Why do you need a heartbeatFrequency of 6 ms? Try commenting it out so
>>> that it takes the default value.
>>>
>>> Kind regards,
>>> Alex.
>>>
>>> On Sat, Jul 15, 2017 at 8:27 PM, Anil [via Apache Ignite Users] <[hidden
>>> email]> wrote:
>>>
>>>> HI Team,
>>>>
>>>> I started two Ignite instances with the following configuration, and
>>>> the topology snapshot says servers = 1:
>>>>
>>>> Topology snapshot [ver=6, servers=1, clients=3, CPUs=32, heap=24.0GB]
>>>>
>>>>
>>>> *Configuration -*
>>>>
>>>> <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>>>>     <property name="discoverySpi">
>>>>         <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>>>>             <property name="ipFinder">
>>>>                 <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>>>>                     <property name="addresses">
>>>>                         <list>
>>>>                             <value>X.X.X.1</value>
>>>>                             <value>X.X.X.2</value>
>>>>                         </list>
>>>>                     </property>
>>>>                 </bean>
>>>>             </property>
>>>>         </bean>
>>>>     </property>
>>>> </bean>
>>>>
>>>> Please let me know if anything wrong with above configuration. thanks.
>>>>
>>>> Regards,
>>>> Anil
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>>
>
>
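One detail worth checking in a static-IP setup like the one discussed above: by default each additional node on the same host binds the next discovery port (47500, 47501, ...), so an address list without explicit port ranges can leave the second local instance undiscoverable. A sketch of the ipFinder addresses with port ranges, using the same placeholder hosts as in the thread:

```xml
<property name="addresses">
    <list>
        <!-- Include a port range so a second node on the same host
             (listening on 47501) can still be discovered. -->
        <value>X.X.X.1:47500..47509</value>
        <value>X.X.X.2:47500..47509</value>
    </list>
</property>
```

This would explain why multicast works (it probes actual listeners) while static TCP discovery finds only the first instance on each host.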


Re: High heap on ignite client

2017-07-06 Thread Anil
Hi Alex,

Thanks.

I changed the swappiness to avoid sys time > user time and re-ran the test,
but no luck.

What do you mean by "apps/containers running on the same physical machine"?

If you mean on the Kubernetes instance: yes, there are a number of
services/containers running on the same Kubernetes cluster/instance.

Does the Ignite client need a lot of CPU?

Thanks,
Anil

On 6 July 2017 at 17:39, afedotov <alexander.fedot...@gmail.com> wrote:

> Hi,
>
> I've taken a look at the logs.
> I don't see huge heap consumption, but from the GC log for node1 I can see
> that in a couple of GCs the real time is greater than the user and sys
> time, and in some cases the sys time is higher than the user time. Taking
> into account that you are running Kubernetes, probably in a virtual
> environment, I suspect that CPU overselling takes place here. Please check
> whether other apps/containers are running on the same physical machine.
>
> Kind regards,
> Alex.
>
> On Thu, Jul 6, 2017 at 7:23 AM, Anil [via Apache Ignite Users] <[hidden
> email]> wrote:
>
>> Hi Alex,
>>
>> Did you get a chance to look into it ? thanks.
>>
>> Regards,
>> Anil
>>
>>
>>
>>
>
>
>


Re: High heap on ignite client

2017-07-05 Thread Anil
Hi Alex,

Did you get a chance to look into it ? thanks.

Regards,
Anil


Re: High heap on ignite client

2017-06-23 Thread Anil
The "Socket is closed" error is very frequent. May I know what causes the
following exception? Thanks.

Some more log -


2017-06-23 02:33:34 488 ERROR TcpDiscoverySpi:495 - Failed to send message:
TcpDiscoveryClientHeartbeatMessage [super=TcpDiscoveryAbstractMessage
[sndNodeId=null, id=a71a444dc51-9956f95a-3bf9-4777-9431-cda0df43ff7d,
verifierNodeId=null, topVer=0, pendingIdx=0, failedNodes=null,
isClient=true]]
java.net.SocketException: Socket is closed
at java.net.Socket.getSendBufferSize(Socket.java:1215)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.socketStream(TcpDiscoverySpi.java:1254)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.writeToSocket(TcpDiscoverySpi.java:1366)
at
org.apache.ignite.spi.discovery.tcp.ClientImpl$SocketWriter.body(ClientImpl.java:1095)
at
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)


Thanks,
Anil

On 23 June 2017 at 15:14, Anil <anilk...@gmail.com> wrote:

> HI Alex,
>
> i tried XX:G1NewSizePercent=30 and ignite client is getting restarted very
> frequently,  for each export operation.
>
> -Xmx6144m -XX:MetaspaceSize=512m -XX:+UnlockExperimentalVMOptions
> -XX:G1NewSizePercent=30 -XX:+UseTLAB -XX:+UseG1GC -XX:MaxGCPauseMillis=500
> -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
> -Xloggc:C:/Anil/gc-client.log -XX:+HeapDumpOnOutOfMemoryError
> -XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
> -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
> -XX:+PrintFlagsFinal -XX:HeapDumpPath=C:/Anil/heapdump-client.hprof
>
> i have attached the gc logs and application logs
>
> I am not sure what is causing the ignite client restart for every restart.
>
> Do you have any suggestions ? please advice.
>
> Thanks,
> Anil
>
>
> On 21 June 2017 at 09:23, Anil <anilk...@gmail.com> wrote:
>
>> Thanks Alex. I will test it in my local and share the results.
>>
>> Did you get a chance to look at the Jdbc driver's next() issue ? Thanks.
>>
>> Thanks,
>> Anil
>>
>
>


Re: High heap on ignite client

2017-06-23 Thread Anil
HI Alex,

I tried -XX:G1NewSizePercent=30 and the Ignite client is getting restarted
very frequently, once per export operation.

-Xmx6144m -XX:MetaspaceSize=512m -XX:+UnlockExperimentalVMOptions
-XX:G1NewSizePercent=30 -XX:+UseTLAB -XX:+UseG1GC -XX:MaxGCPauseMillis=500
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
-Xloggc:C:/Anil/gc-client.log -XX:+HeapDumpOnOutOfMemoryError
-XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
-XX:+PrintFlagsFinal -XX:HeapDumpPath=C:/Anil/heapdump-client.hprof

I have attached the GC logs and the application logs.

I am not sure what is causing the Ignite client to restart every time.

Do you have any suggestions? Please advise.

Thanks,
Anil


On 21 June 2017 at 09:23, Anil <anilk...@gmail.com> wrote:

> Thanks Alex. I will test it in my local and share the results.
>
> Did you get a chance to look at the Jdbc driver's next() issue ? Thanks.
>
> Thanks,
> Anil
>
2017-06-21 20:36:58 753 WARN  TcpCommunicationSpi:480 - Connect timed out 
(consider increasing 'failureDetectionTimeout' configuration property) 
[addr=/127.0.0.1:47101, failureDetectionTimeout=1]
2017-06-21 20:36:58 755 WARN  TcpCommunicationSpi:480 - Connect timed out 
(consider increasing 'failureDetectionTimeout' configuration property) 
[addr=/0:0:0:0:0:0:0:1%lo:47101, failureDetectionTimeout=1]
2017-06-21 20:36:58 756 WARN  TcpCommunicationSpi:480 - Failed to connect to a 
remote node (make sure that destination node is alive and operating system 
firewall is disabled on local and remote hosts) [addrs=[/10.85.81.187:47101, 
/127.0.0.1:47101, /0:0:0:0:0:0:0:1%lo:47101]]
2017-06-21 20:36:58 757 WARN  IgniteH2Indexing:480 - Failed to send message 
[node=TcpDiscoveryNode [id=8955d2cf-69c6-4f7a-8fcb-a6d7d06726ed, 
addrs=[0:0:0:0:0:0:0:1%lo, 10.85.81.187, 127.0.0.1], 
sockAddrs=[/10.85.81.187:47501, /0:0:0:0:0:0:0:1%lo:47501, /127.0.0.1:47501], 
discPort=47501, order=7, intOrder=7, lastExchangeTime=1498047275743, loc=false, 
ver=1.9.0#20170302-sha1:a8169d0a, isClient=false], msg=GridQueryCancelRequest 
[qryReqId=83], errMsg=Failed to send message (node may have left the grid or 
TCP connection cannot be established due to firewall issues) 
[node=TcpDiscoveryNode [id=8955d2cf-69c6-4f7a-8fcb-a6d7d06726ed, 
addrs=[0:0:0:0:0:0:0:1%lo, 10.85.81.187, 127.0.0.1], 
sockAddrs=[/10.85.81.187:47501, /0:0:0:0:0:0:0:1%lo:47501, /127.0.0.1:47501], 
discPort=47501, order=7, intOrder=7, lastExchangeTime=1498047275743, loc=false, 
ver=1.9.0#20170302-sha1:a8169d0a, isClient=false], topic=TOPIC_QUERY, 
msg=GridQueryCancelRequest [qryReqId=83], policy=2]]
@

2017-06-21 20:36:58 762 WARN  TcpDiscoverySpi:480 - Local node was dropped from 
cluster due to network problems, will try to reconnect with new id after 
1ms (reconnect delay can be changed using 
IGNITE_DISCO_FAILED_CLIENT_RECONNECT_DELAY system property) 
[newId=a9978128-abac-47dc-8f81-939fdcbd6649, 
prevId=d9a17fa8-4e44-428b-a9e4-cb6c9ededa53, locNode=TcpDiscoveryNode 
[id=d9a17fa8-4e44-428b-a9e4-cb6c9ededa53, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 
172.16.92.6], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, 
aswb-ignite-export-stage-5hsvf/172.16.92.6:0], discPort=0, order=409, 
intOrder=0, lastExchangeTime=1498047274347, loc=true, 
ver=1.9.0#20170302-sha1:a8169d0a, isClient=true], 
nodeInitiatedFail=d6dbb2d5-f343-4d5d-a5e1-600edacbbe85, msg=TcpCommunicationSpi 
failed to establish connection to node [rmtNode=TcpDiscoveryNode 
[id=d9a17fa8-4e44-428b-a9e4-cb6c9ededa53, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 
172.16.92.6], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0, /172.16.92.6:0], 
discPort=0, order=409, intOrder=212, lastExchangeTime=1498047285964, loc=false, 
ver=1.9.0#20170302-sha1:a8169d0a, isClient=true], errs=class 
o.a.i.IgniteCheckedException: Failed to connect to node (is node still alive?). 
Make sure that each ComputeTask and cache Transaction has a timeout set in 
order to prevent parties from waiting forever in case of network issues 
[nodeId=d9a17fa8-4e44-428b-a9e4-cb6c9ededa53, addrs=[/172.16.92.6:47100, 
/0:0:0:0:0:0:0:1%lo:47100, /127.0.0.1:47100]], connectErrs=[class 
o.a.i.IgniteCheckedException: Failed to connect to address: /172.16.92.6:47100, 
class o.a.i.IgniteCheckedException: Failed to connect to address: 
/0:0:0:0:0:0:0:1%lo:47100, class o.a.i.IgniteCheckedException: Failed to 
connect to address: /127.0.0.1:47100]]]
2017-06-21 20:36:58 768 WARN  IgniteH2Indexing:480 - Failed to send message 
[node=TcpDiscoveryNode [id=d6dbb2d5-f343-4d5d-a5e1-600edacbbe85, 
addrs=[0:0:0:0:0:0:0:1%lo, 10.85.81.186, 127.0.0.1], 
sockAddrs=[/10.85.81.186:47501, /0:0:0:0:0:0:0:1%lo:47501, /127.0.0.1:47501], 
discPort=47501, order=10, intOrder=10, lastExchangeTime=1498047275743, 
loc=false, ver=1.9.0#20170302-sha1:a816
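The repeated "consider increasing 'failureDetectionTimeout'" warnings above, combined with the long GC pauses mentioned earlier in the thread, suggest raising the failure detection timeout so that a GC pause is not mistaken for a dead node. A minimal Spring XML sketch; the 30000 ms value is an assumed starting point to tune, not a recommendation taken from the thread:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Single knob covering discovery and communication liveness checks;
         raise it if long GC pauses cause false node-failure detection. -->
    <property name="failureDetectionTimeout" value="30000"/>
</bean>
```

The trade-off is that genuinely failed nodes take correspondingly longer to be dropped from the topology.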

Re: High heap on ignite client

2017-06-20 Thread Anil
Thanks Alex. I will test it in my local and share the results.

Did you get a chance to look at the Jdbc driver's next() issue ? Thanks.

Thanks,
Anil


Re: High heap on ignite client

2017-06-19 Thread Anil
HI Alex,

I have attached the ignite client xml. 4L means 0.4 million records. Sorry,
I didn't generate JFR. But created heap dump.

Do you agree that Jdbc driver loading everything in memory and next() just
for conversion ?

Thanks

On 19 June 2017 at 17:16, Alexander Fedotov <alexander.fedot...@gmail.com>
wrote:

> Hi Anil.
>
> Could you please also share C:/Anil/ignite-client.xml? It would also be
> useful if you took JFR reports for the case with allocation profiling
> enabled.
> Just to clarify, by 4L do you mean 4 million entries?
>
> Kind regards,
> Alex.
>
> On Mon, Jun 19, 2017 at 10:15 AM, Alexander Fedotov <
> alexander.fedot...@gmail.com> wrote:
>
>> Thanks. I'll take a look and let you know about any findings.
>>
>> Kind regards,
>> Alex
>>
>> On 18 June 2017 at 3:33 PM, "Anil" <anilk...@gmail.com> wrote:
>>
>> Hi Alex,
>>
>> test program repository - https://github.com/adasari/test-ignite-jdbc.git
>>
>> please let us know if you have any suggestions/questions. Thanks.
>>
>> Thanks
>>
>> On 15 June 2017 at 10:58, Anil <anilk...@gmail.com> wrote:
>>
>>> Sure. thanks
>>>
>>> On 14 June 2017 at 19:51, afedotov <alexander.fedot...@gmail.com> wrote:
>>>
>>>> Hi, Anil.
>>>>
>>>> Could you please share your full code (class/method) you are using to
>>>> read data.
>>>>
>>>> Kind regards,
>>>> Alex
>>>>
>>>> On 12 June 2017 at 4:07 PM, "Anil [via Apache Ignite Users]" <[hidden
>>>> email]> wrote:
>>>>
>>>>> Do you have any advice on implementing large records export from
>>>>> ignite ?
>>>>>
>>>>> I could not use ScanQuery right as my whole application built around
>>>>> Jdbc driver and writing complex queries in scan query is very difficult.
>>>>>
>>>>> Thanks
>>>>>
>>>>> On 10 June 2017 at 18:48, Anil <[hidden email]> wrote:
>>>>>
>>>>>> I understand from the code that there is no cursor from h2 db (or
>>>>>> ignite embed h2 db) internally and all mapper response consolidated at
>>>>>> reducer. It means when exporting large number of records, all data is in
>>>>>> memory.
>>>>>>
>>>>>>  if (send(nodes,
>>>>>> oldStyle ?
>>>>>> new GridQueryRequest(qryReqId,
>>>>>> r.pageSize,
>>>>>> space,
>>>>>> mapQrys,
>>>>>> topVer,
>>>>>> extraSpaces(space, qry.spaces()),
>>>>>> null,
>>>>>> timeoutMillis) :
>>>>>> new GridH2QueryRequest()
>>>>>> .requestId(qryReqId)
>>>>>> .topologyVersion(topVer)
>>>>>> .pageSize(r.pageSize)
>>>>>> .caches(qry.caches())
>>>>>> .tables(distributedJoins ? qry.tables() :
>>>>>> null)
>>>>>> .partitions(convert(partsMap))
>>>>>> .queries(mapQrys)
>>>>>> .flags(flags)
>>>>>> .timeout(timeoutMillis),
>>>>>> oldStyle && partsMap != null ? new
>>>>>> ExplicitPartitionsSpecializer(partsMap) : null,
>>>>>> false)) {
>>>>>>
>>>>>> awaitAllReplies(r, nodes, cancel);
>>>>>>
>>>>>> *// once the responses from all nodes for the query received..
>>>>>> proceed further ?*
>>>>>>
>>>>>>   if (!retry) {
>>>>>> if (skipMergeTbl) {
>>>>>> List<List> res = new ArrayList<>();
>>>>>>
>

Re: High heap on ignite client

2017-06-18 Thread Anil
Hi Alex,

test program repository - https://github.com/adasari/test-ignite-jdbc.git

please let us know if you have any suggestions/questions. Thanks.

Thanks

On 15 June 2017 at 10:58, Anil <anilk...@gmail.com> wrote:

> Sure. thanks
>
> On 14 June 2017 at 19:51, afedotov <alexander.fedot...@gmail.com> wrote:
>
>> Hi, Anil.
>>
>> Could you please share your full code (class/method) you are using to
>> read data.
>>
>> Kind regards,
>> Alex
>>
>> On 12 June 2017 at 4:07 PM, "Anil [via Apache Ignite Users]" <[hidden
>> email]> wrote:
>>
>>> Do you have any advice on implementing large records export from ignite ?
>>>
>>> I could not use ScanQuery right as my whole application built around
>>> Jdbc driver and writing complex queries in scan query is very difficult.
>>>
>>> Thanks
>>>
>>> On 10 June 2017 at 18:48, Anil <[hidden email]> wrote:
>>>
>>>> I understand from the code that there is no cursor from h2 db (or
>>>> ignite embed h2 db) internally and all mapper response consolidated at
>>>> reducer. It means when exporting large number of records, all data is in
>>>> memory.
>>>>
>>>>  if (send(nodes,
>>>> oldStyle ?
>>>> new GridQueryRequest(qryReqId,
>>>> r.pageSize,
>>>> space,
>>>> mapQrys,
>>>> topVer,
>>>> extraSpaces(space, qry.spaces()),
>>>> null,
>>>> timeoutMillis) :
>>>> new GridH2QueryRequest()
>>>> .requestId(qryReqId)
>>>> .topologyVersion(topVer)
>>>> .pageSize(r.pageSize)
>>>> .caches(qry.caches())
>>>> .tables(distributedJoins ? qry.tables() :
>>>> null)
>>>> .partitions(convert(partsMap))
>>>> .queries(mapQrys)
>>>> .flags(flags)
>>>> .timeout(timeoutMillis),
>>>> oldStyle && partsMap != null ? new
>>>> ExplicitPartitionsSpecializer(partsMap) : null,
>>>> false)) {
>>>>
>>>> awaitAllReplies(r, nodes, cancel);
>>>>
>>>> *// once the responses from all nodes for the query received.. proceed
>>>> further ?*
>>>>
>>>>   if (!retry) {
>>>> if (skipMergeTbl) {
>>>> List<List> res = new ArrayList<>();
>>>>
>>>> // Simple UNION ALL can have multiple indexes.
>>>> for (GridMergeIndex idx : r.idxs) {
>>>> Cursor cur = idx.findInStream(null, null);
>>>>
>>>> while (cur.next()) {
>>>> Row row = cur.get();
>>>>
>>>> int cols = row.getColumnCount();
>>>>
>>>> List resRow = new
>>>> ArrayList<>(cols);
>>>>
>>>> for (int c = 0; c < cols; c++)
>>>> resRow.add(row.getValue(c).get
>>>> Object());
>>>>
>>>> res.add(resRow);
>>>> }
>>>> }
>>>>
>>>> resIter = res.iterator();
>>>> }else {
>>>>   // incase of split query scenario
>>>> }
>>>>
>>>>  }
>>>>
>>>>   return new GridQueryCacheObjectsIterator(resIter, cctx,
>>>> keepPortable);
>>>>
>>>>
>>>> Query cursor is iterator which does column value mapping per page. But
>>>> s

Re: Range queries on indexed columns

2017-06-15 Thread Anil
Thanks Andrey.

On 14 June 2017 at 20:15, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Hi Anil,
>
> Yes, in your case the map query results are already sorted and there is
> only a merge on the reduce side.
> Sorting can be disabled on the map side when e.g. aggregates are used.
>
> On Wed, Jun 14, 2017 at 3:20 PM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Team,
>>
>> Can someone help me understand the below? Thanks.
>>
>> On 13 June 2017 at 11:07, Anil <anilk...@gmail.com> wrote:
>>
>>> HI Team,
>>>
>>> I have a table TEST with a indexed column COL_A. Does the following
>>> query works ?
>>>
>>> select * from Test where COL_A > '1' and COL_A < '2' offset 10  ROWS
>>> FETCH NEXT 20 ROWS ONLY
>>>
>>> As per my understanding of distributed systems, the query is sent to all
>>> nodes and gets the 10 records from each node and return 10 (whatever
>>> returns first)
>>>
>>> as indexes are distributed, the above query may not return the records
>>> in paginated way without adding sort like below.
>>>
>>> select * from Test where COL_A > '1' and COL_A < '2' order by COL_A
>>> offset 10  ROWS FETCH NEXT 20 ROWS ONLY
>>>
>>> do you see any overhead of sort here ?
>>>
>>> Does it work in following way ?
>>>
>>> send the query to all nodes and get 10 (based on sorting) records and
>>> sort all results of each node at reducer and return final 10 .
>>>
>>> Sort should not have any overhead here as sort and filter is done on
>>> indexed column.
>>>
>>> Please correct me if i am wrong. thanks.
>>>
>>> Thanks
>>>
>>>
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>
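Andrey's point, that the per-node (map) results arrive already sorted by the indexed column and the reducer only merges them, can be sketched as a plain k-way merge. The per-node arrays below stand in for sorted map-query pages:

```java
import java.util.*;

public class SortedMerge {
    public static void main(String[] args) {
        // Each inner array: one node's result, already sorted by the indexed column.
        int[][] perNode = {{1, 4, 7}, {2, 5, 8}, {3, 6, 9}};

        // Heap entries: {value, nodeIndex, positionInNode}.
        PriorityQueue<int[]> heap =
            new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[0]));
        for (int n = 0; n < perNode.length; n++)
            heap.add(new int[]{perNode[n][0], n, 0});

        List<Integer> merged = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] e = heap.poll();
            merged.add(e[0]);
            // Advance within the node the smallest element came from.
            if (e[2] + 1 < perNode[e[1]].length)
                heap.add(new int[]{perNode[e[1]][e[2] + 1], e[1], e[2] + 1});
        }
        System.out.println(merged);
    }
}
```

Because each input stream is pre-sorted, the reducer does O(total log k) work and never needs a full re-sort, which is why the ORDER BY on an indexed column adds little overhead.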


Re: High heap on ignite client

2017-06-14 Thread Anil
Sure. thanks

On 14 June 2017 at 19:51, afedotov <alexander.fedot...@gmail.com> wrote:

> Hi, Anil.
>
> Could you please share your full code (class/method) you are using to read
> data.
>
> Kind regards,
> Alex
>
> On 12 June 2017 at 4:07 PM, "Anil [via Apache Ignite Users]" <[hidden
> email]> wrote:
>
>> Do you have any advice on implementing large records export from ignite ?
>>
>> I could not use ScanQuery right as my whole application built around Jdbc
>> driver and writing complex queries in scan query is very difficult.
>>
>> Thanks
>>
>> On 10 June 2017 at 18:48, Anil <[hidden email]> wrote:
>>
>>> I understand from the code that there is no cursor from h2 db (or ignite
>>> embed h2 db) internally and all mapper response consolidated at reducer. It
>>> means when exporting large number of records, all data is in memory.
>>>
>>>  if (send(nodes,
>>> oldStyle ?
>>> new GridQueryRequest(qryReqId,
>>> r.pageSize,
>>> space,
>>> mapQrys,
>>> topVer,
>>> extraSpaces(space, qry.spaces()),
>>> null,
>>> timeoutMillis) :
>>> new GridH2QueryRequest()
>>> .requestId(qryReqId)
>>> .topologyVersion(topVer)
>>> .pageSize(r.pageSize)
>>> .caches(qry.caches())
>>> .tables(distributedJoins ? qry.tables() :
>>> null)
>>> .partitions(convert(partsMap))
>>> .queries(mapQrys)
>>> .flags(flags)
>>> .timeout(timeoutMillis),
>>> oldStyle && partsMap != null ? new
>>> ExplicitPartitionsSpecializer(partsMap) : null,
>>> false)) {
>>>
>>> awaitAllReplies(r, nodes, cancel);
>>>
>>> *// once the responses from all nodes for the query received.. proceed
>>> further ?*
>>>
>>>   if (!retry) {
>>> if (skipMergeTbl) {
>>> List<List> res = new ArrayList<>();
>>>
>>> // Simple UNION ALL can have multiple indexes.
>>> for (GridMergeIndex idx : r.idxs) {
>>> Cursor cur = idx.findInStream(null, null);
>>>
>>> while (cur.next()) {
>>> Row row = cur.get();
>>>
>>> int cols = row.getColumnCount();
>>>
>>> List resRow = new
>>> ArrayList<>(cols);
>>>
>>> for (int c = 0; c < cols; c++)
>>> resRow.add(row.getValue(c).get
>>> Object());
>>>
>>> res.add(resRow);
>>> }
>>> }
>>>
>>> resIter = res.iterator();
>>> }else {
>>>   // incase of split query scenario
>>> }
>>>
>>>  }
>>>
>>>   return new GridQueryCacheObjectsIterator(resIter, cctx,
>>> keepPortable);
>>>
>>>
>>> Query cursor is iterator which does column value mapping per page. But
>>> still all records of query are still in memory. correct?
>>>
>>> Please correct me if I am wrong. thanks.
>>>
>>>
>>> Thanks
>>>
>>>
>>> On 10 June 2017 at 15:53, Anil <[hidden email]> wrote:
>>>
>>>>
>>>> jvm parameters used -
>>>>
>>>> -Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC
>>>> -XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
>>>> -Xloggc:C:/Anil/dumps/gc-client.log -XX:+HeapDum

Re: Range queries on indexed columns

2017-06-14 Thread Anil
Hi Team,

Can someone help me understand the below? Thanks.

On 13 June 2017 at 11:07, Anil <anilk...@gmail.com> wrote:

> HI Team,
>
> I have a table TEST with a indexed column COL_A. Does the following query
> works ?
>
> select * from Test where COL_A > '1' and COL_A < '2' offset 10  ROWS FETCH
> NEXT 20 ROWS ONLY
>
> As per my understanding of distributed systems, the query is sent to all
> nodes and gets the 10 records from each node and return 10 (whatever
> returns first)
>
> as indexes are distributed, the above query may not return the records in
> paginated way without adding sort like below.
>
> select * from Test where COL_A > '1' and COL_A < '2' order by COL_A offset
> 10  ROWS FETCH NEXT 20 ROWS ONLY
>
> do you see any overhead of sort here ?
>
> Does it work in following way ?
>
> send the query to all nodes and get 10 (based on sorting) records and sort
> all results of each node at reducer and return final 10 .
>
> Sort should not have any overhead here as sort and filter is done on
> indexed column.
>
> Please correct me if i am wrong. thanks.
>
> Thanks
>
>


Range queries on indexed columns

2017-06-12 Thread Anil
Hi Team,

I have a table TEST with an indexed column COL_A. Does the following query
work?

select * from Test where COL_A > '1' and COL_A < '2' offset 10  ROWS FETCH
NEXT 20 ROWS ONLY

As per my understanding of distributed systems, the query is sent to all
nodes, fetches 10 records from each node, and returns 10 (whichever
return first).

As indexes are distributed, the above query may not return the records in
a paginated way without adding a sort, like below.

select * from Test where COL_A > '1' and COL_A < '2' order by COL_A offset
10  ROWS FETCH NEXT 20 ROWS ONLY

Do you see any overhead from the sort here?

Does it work in the following way?

Send the query to all nodes, get 10 records (based on sorting) from each
node, sort all the results at the reducer, and return the final 10.

The sort should not have any overhead here, as both the sort and the filter
are done on an indexed column.

Please correct me if I am wrong. Thanks.

Thanks
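[Editorial sketch] The pattern described above is the usual way a distributed ORDER BY with OFFSET/FETCH is executed. This is not Ignite's actual implementation, just a self-contained illustration of the general map/reduce pattern: each node returns at most offset + fetch rows already sorted by the indexed column, and the reducer merges the per-node sorted streams and applies the global offset.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

public class MergePagination {
    // Each map node returns its rows for the WHERE range, sorted by the
    // indexed column and limited to (offset + fetch) rows. The reducer
    // merges the sorted streams and applies the global OFFSET/FETCH.
    static List<Integer> reduce(List<List<Integer>> perNode, int offset, int fetch) {
        PriorityQueue<Integer> merged = new PriorityQueue<>();
        for (List<Integer> node : perNode) {
            // A node never needs to send more than offset + fetch rows.
            merged.addAll(node.subList(0, Math.min(node.size(), offset + fetch)));
        }
        List<Integer> page = new ArrayList<>();
        for (int i = 0; i < offset + fetch && !merged.isEmpty(); i++) {
            int v = merged.poll();
            if (i >= offset)
                page.add(v);   // keep only rows past the global offset
        }
        return page;
    }

    public static void main(String[] args) {
        List<List<Integer>> nodes = Arrays.asList(
            Arrays.asList(1, 4, 7, 10),   // node A, already sorted locally
            Arrays.asList(2, 5, 8, 11),   // node B
            Arrays.asList(3, 6, 9, 12));  // node C
        System.out.println(reduce(nodes, 2, 3)); // prints [3, 4, 5]
    }
}
```

So the sort cost per node is amortized by the index; the extra work is only the reducer-side merge of at most nodes * (offset + fetch) rows.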


Re: High heap on ignite client

2017-06-12 Thread Anil
Do you have any advice on implementing a large-record export from Ignite?

I could not use ScanQuery, as my whole application is built around the JDBC
driver, and writing complex queries as scan queries is very difficult.

Thanks

On 10 June 2017 at 18:48, Anil <anilk...@gmail.com> wrote:

> I understand from the code that there is no cursor from h2 db (or ignite
> embed h2 db) internally and all mapper response consolidated at reducer. It
> means when exporting large number of records, all data is in memory.
>
>  if (send(nodes,
> oldStyle ?
> new GridQueryRequest(qryReqId,
> r.pageSize,
> space,
> mapQrys,
> topVer,
> extraSpaces(space, qry.spaces()),
> null,
> timeoutMillis) :
> new GridH2QueryRequest()
> .requestId(qryReqId)
> .topologyVersion(topVer)
> .pageSize(r.pageSize)
> .caches(qry.caches())
> .tables(distributedJoins ? qry.tables() : null)
> .partitions(convert(partsMap))
> .queries(mapQrys)
> .flags(flags)
> .timeout(timeoutMillis),
> oldStyle && partsMap != null ? new
> ExplicitPartitionsSpecializer(partsMap) : null,
> false)) {
>
> awaitAllReplies(r, nodes, cancel);
>
> *// once the responses from all nodes for the query received.. proceed
> further ?*
>
>   if (!retry) {
> if (skipMergeTbl) {
> List<List> res = new ArrayList<>();
>
> // Simple UNION ALL can have multiple indexes.
> for (GridMergeIndex idx : r.idxs) {
> Cursor cur = idx.findInStream(null, null);
>
> while (cur.next()) {
> Row row = cur.get();
>
> int cols = row.getColumnCount();
>
> List resRow = new
> ArrayList<>(cols);
>
> for (int c = 0; c < cols; c++)
> resRow.add(row.getValue(c).
> getObject());
>
> res.add(resRow);
> }
> }
>
> resIter = res.iterator();
> }else {
>   // incase of split query scenario
> }
>
>  }
>
>   return new GridQueryCacheObjectsIterator(resIter, cctx,
> keepPortable);
>
>
> Query cursor is iterator which does column value mapping per page. But
> still all records of query are still in memory. correct?
>
> Please correct me if I am wrong. thanks.
>
>
> Thanks
>
>
> On 10 June 2017 at 15:53, Anil <anilk...@gmail.com> wrote:
>
>>
>> jvm parameters used -
>>
>> -Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC
>> -XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
>> -Xloggc:C:/Anil/dumps/gc-client.log -XX:+HeapDumpOnOutOfMemoryError
>> -XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy
>> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
>> -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
>> -XX:+PrintFlagsFinal -XX:HeapDumpPath=C:/Anil/dumps/heapdump-client.hprof
>>
>> Thanks.
>>
>> On 10 June 2017 at 15:06, Anil <anilk...@gmail.com> wrote:
>>
>>> HI,
>>>
>>> I have implemented export feature of ignite data using JDBC Interator
>>>
>>> ResultSet rs = statement.executeQuery();
>>>
>>> while (rs.next()){
>>> // do operations
>>>
>>> }
>>>
>>> and fetch size is 200.
>>>
>>> when i run export operation twice for 4 L records whole 6B is filled up
>>> and never getting released.
>>>
>>> Initially i thought that operations transforting result set to file
>>> causing the memory full. But not.
>>>
>>> I just did follwoing and still the memory is growing and not getting
>>> released
>>>
>>> while (rs.next()){
>>>  // nothing
>>> }
>>>
>>> num #in

Re: High heap on ignite client

2017-06-10 Thread Anil
I understand from the code that there is no cursor from the H2 DB (or
Ignite's embedded H2 DB) internally, and all mapper responses are
consolidated at the reducer. It means that when exporting a large number of
records, all data is in memory.

 if (send(nodes,
oldStyle ?
new GridQueryRequest(qryReqId,
r.pageSize,
space,
mapQrys,
topVer,
extraSpaces(space, qry.spaces()),
null,
timeoutMillis) :
new GridH2QueryRequest()
.requestId(qryReqId)
.topologyVersion(topVer)
.pageSize(r.pageSize)
.caches(qry.caches())
.tables(distributedJoins ? qry.tables() : null)
.partitions(convert(partsMap))
.queries(mapQrys)
.flags(flags)
.timeout(timeoutMillis),
oldStyle && partsMap != null ? new
ExplicitPartitionsSpecializer(partsMap) : null,
false)) {

awaitAllReplies(r, nodes, cancel);

*// once the responses from all nodes for the query are received, proceed
further?*

  if (!retry) {
if (skipMergeTbl) {
List<List> res = new ArrayList<>();

// Simple UNION ALL can have multiple indexes.
for (GridMergeIndex idx : r.idxs) {
Cursor cur = idx.findInStream(null, null);

while (cur.next()) {
Row row = cur.get();

int cols = row.getColumnCount();

List resRow = new ArrayList<>(cols);

for (int c = 0; c < cols; c++)
resRow.add(row.getValue(c).getObject());

res.add(resRow);
}
}

resIter = res.iterator();
}else {
  // incase of split query scenario
}

 }

  return new GridQueryCacheObjectsIterator(resIter, cctx, keepPortable);


The query cursor is an iterator that does column-value mapping per page, but
all records of the query are still in memory. Correct?

Please correct me if I am wrong. Thanks.


Thanks
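[Editorial sketch] The distinction drawn above can be shown with plain Java, no Ignite required (names here are hypothetical): wrapping a fully materialized result list in a mapping iterator makes the per-row column-value conversion lazy, but the backing list itself stays strongly referenced, so every record remains on the heap for the life of the iterator.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

public class LazyMappingIterator<T, R> implements Iterator<R> {
    static int mapped = 0;                 // counts per-row conversions performed so far
    private final Iterator<T> delegate;    // iterator over the fully materialized result
    private final Function<T, R> mapper;   // per-row column-value mapping

    LazyMappingIterator(Iterator<T> delegate, Function<T, R> mapper) {
        this.delegate = delegate;
        this.mapper = mapper;
    }

    public boolean hasNext() { return delegate.hasNext(); }

    public R next() { mapped++; return mapper.apply(delegate.next()); }

    public static void main(String[] args) {
        List<int[]> res = new ArrayList<>();          // all rows already in memory
        for (int i = 0; i < 5; i++) res.add(new int[] { i });
        Iterator<String> it = new LazyMappingIterator<>(res.iterator(), r -> "row-" + r[0]);
        it.next(); it.next();                         // consume only two rows
        System.out.println(mapped);                   // prints 2: mapping is lazy,
                                                      // but 'res' still holds all 5 rows
    }
}
```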


On 10 June 2017 at 15:53, Anil <anilk...@gmail.com> wrote:

>
> jvm parameters used -
>
> -Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC
> -XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
> -Xloggc:C:/Anil/dumps/gc-client.log -XX:+HeapDumpOnOutOfMemoryError
> -XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy
> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
> -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
> -XX:+PrintFlagsFinal -XX:HeapDumpPath=C:/Anil/dumps/heapdump-client.hprof
>
> Thanks.
>
> On 10 June 2017 at 15:06, Anil <anilk...@gmail.com> wrote:
>
>> HI,
>>
>> I have implemented export feature of ignite data using JDBC Interator
>>
>> ResultSet rs = statement.executeQuery();
>>
>> while (rs.next()){
>> // do operations
>>
>> }
>>
>> and fetch size is 200.
>>
>> when i run export operation twice for 4 L records whole 6B is filled up
>> and never getting released.
>>
>> Initially i thought that operations transforting result set to file
>> causing the memory full. But not.
>>
>> I just did follwoing and still the memory is growing and not getting
>> released
>>
>> while (rs.next()){
>>  // nothing
>> }
>>
>> num #instances #bytes  class name
>> --
>>1:  55072353 2408335272  [C
>>2:  54923606 1318166544  java.lang.String
>>3:779006  746187792  [B
>>4:903548  304746304  [Ljava.lang.Object;
>>5:773348  259844928  net.juniper.cs.entity.InstallBase
>>6:   4745694  113896656  java.lang.Long
>>7:   692   44467680  sun.nio.cs.UTF_8$Decoder
>>8:773348   30933920  org.apache.ignite.internal.bi
>> nary.BinaryObjectImpl
>>9:895627   21495048  java.util.ArrayList
>>   10: 12427   16517632  [I
>>
>>
>> Not sure why string objects are getting increased.
>>
>> Could you please help in understanding the issue ?
>>
>> Thanks
>>
>
>


Re: High heap on ignite client

2017-06-10 Thread Anil
JVM parameters used:

-Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC
-XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
-Xloggc:C:/Anil/dumps/gc-client.log -XX:+HeapDumpOnOutOfMemoryError
-XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy
-XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
-XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
-XX:+PrintFlagsFinal -XX:HeapDumpPath=C:/Anil/dumps/heapdump-client.hprof

Thanks.

On 10 June 2017 at 15:06, Anil <anilk...@gmail.com> wrote:

> HI,
>
> I have implemented export feature of ignite data using JDBC Interator
>
> ResultSet rs = statement.executeQuery();
>
> while (rs.next()){
> // do operations
>
> }
>
> and fetch size is 200.
>
> when i run export operation twice for 4 L records whole 6B is filled up
> and never getting released.
>
> Initially i thought that operations transforting result set to file
> causing the memory full. But not.
>
> I just did follwoing and still the memory is growing and not getting
> released
>
> while (rs.next()){
>  // nothing
> }
>
> num #instances #bytes  class name
> --
>1:  55072353 2408335272  [C
>2:  54923606 1318166544  java.lang.String
>3:779006  746187792  [B
>4:903548  304746304  [Ljava.lang.Object;
>5:773348  259844928  net.juniper.cs.entity.InstallBase
>6:   4745694  113896656  java.lang.Long
>7:   692   44467680  sun.nio.cs.UTF_8$Decoder
>8:773348   30933920  org.apache.ignite.internal.
> binary.BinaryObjectImpl
>9:895627   21495048  java.util.ArrayList
>   10: 12427   16517632  [I
>
>
> Not sure why string objects are getting increased.
>
> Could you please help in understanding the issue ?
>
> Thanks
>


High heap on ignite client

2017-06-10 Thread Anil
Hi,

I have implemented an export feature for Ignite data using a JDBC iterator

ResultSet rs = statement.executeQuery();

while (rs.next()){
// do operations

}

and the fetch size is 200.

When I run the export operation twice for 4 lakh (400k) records, the whole
6 GB heap fills up and is never released.

Initially I thought that the operations transforming the result set to a
file were filling up the memory, but that was not the case.

I just did the following, and the memory still grows and is not released:

while (rs.next()){
 // nothing
}

num #instances #bytes  class name
--
   1:  55072353 2408335272  [C
   2:  54923606 1318166544  java.lang.String
   3:779006  746187792  [B
   4:903548  304746304  [Ljava.lang.Object;
   5:773348  259844928  net.juniper.cs.entity.InstallBase
   6:   4745694  113896656  java.lang.Long
   7:   692   44467680  sun.nio.cs.UTF_8$Decoder
   8:773348   30933920
 org.apache.ignite.internal.binary.BinaryObjectImpl
   9:895627   21495048  java.util.ArrayList
  10: 12427   16517632  [I


Not sure why the number of String objects keeps increasing.

Could you please help in understanding the issue ?

Thanks


Re: vertx-ignite

2017-06-07 Thread Anil
Hi Nikolai,

May I know the reason for adding a template configuration?

It's a generic cache configuration. I have added all the cache
configurations required for my application explicitly in ignite.xml, as we
don't use Java-based IgniteCache creation. I don't see the importance of
adding a template configuration. Please correct me if I am wrong. Thanks.

Does vert.x need any default caches/configurations for it to work, like the
semaphore below in Hazelcast's cluster.xml?

 <semaphore name="__vertx.*">
   <initial-permits>1</initial-permits>
 </semaphore>
Thanks
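[Editorial note] The cache template Nikolai points at (lines 90-97 of default-ignite.xml) looks roughly like the fragment below. This is quoted from memory, not from the file, so treat the exact property set as an assumption and check the shipped default-ignite.xml; the point is that the wildcard name makes it a template, which the caches vert.x creates on demand for its cluster-wide maps then inherit:

```xml
<property name="cacheConfiguration">
  <list>
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
      <!-- "*" makes this a template applied to caches created dynamically -->
      <property name="name" value="*"/>
      <property name="cacheMode" value="PARTITIONED"/>
      <property name="backups" value="1"/>
    </bean>
  </list>
</property>
```

Explicitly declaring each application cache does not cover the caches the cluster manager creates at runtime, which is why the template cannot simply be dropped.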


On 5 June 2017 at 21:07, Nikolai Tikhonov <ntikho...@apache.org> wrote:

> Hi Anil,
>
> You missed a template for caches (lines 90-97).
>
> On Sat, May 13, 2017 at 12:05 PM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Andrey,
>>
>> Could you please help me here? Thanks.
>>
>> Thanks
>>
>> On 11 May 2017 at 14:16, Anil <anilk...@gmail.com> wrote:
>>
>>> Hi Andrey,
>>>
>>> I am checking the default-ignite.xml at https://github.com/apacheig
>>> nite/vertx-ignite/blob/master/src/main/resources/default-ignite.xml
>>>
>>> Could you please point what is missing in my configuration ?
>>>
>>> I could not find anything in default-ignite.xml.
>>>
>>> Thanks
>>>
>>> On 11 May 2017 at 11:07, Anil <anilk...@gmail.com> wrote:
>>>
>>>> HI Andrey,
>>>>
>>>> i am using vertx-ignite 3.4.1 and ignite 1.9 version. i will check the
>>>> default-ignite.xml.
>>>>
>>>> Thanks
>>>>
>>>> On 10 May 2017 at 21:31, Andrey Gura <ag...@apache.org> wrote:
>>>>
>>>>> Anil,
>>>>>
>>>>> What version of vertx-ignite or Ignite itself do you use?
>>>>>
>>>>> In provided ignite.xml there is no minimal configuration that is
>>>>> mandatory for Ignite cluster manager for vert.x (see
>>>>> default-ignite.xml for example).
>>>>>
>>>>>
>>>>> On Tue, May 2, 2017 at 9:18 AM, Anil <anilk...@gmail.com> wrote:
>>>>> >
>>>>> > Hi Andrey,
>>>>> >
>>>>> > Apologies for late reply. I don't have any exact reproduce. I can
>>>>> see this
>>>>> > log frequently in our logs.
>>>>> >
>>>>> > attached the ignite.xml.
>>>>> >
>>>>> > Thanks.
>>>>> >
>>>>> >
>>>>> >
>>>>> > On 26 April 2017 at 18:32, Andrey Gura <ag...@apache.org> wrote:
>>>>> >>
>>>>> >> Anil,
>>>>> >>
>>>>> >> what kind of lock do you mean? What are steps for reproduce? What
>>>>> >> version if vert-ignite do use and what is your configuration?
>>>>> >>
>>>>> >> On Wed, Apr 26, 2017 at 2:16 PM, Anil <anilk...@gmail.com> wrote:
>>>>> >> > HI,
>>>>> >> >
>>>>> >> > I am using vertx-ignite and when node is left the topology, lock
>>>>> is not
>>>>> >> > getting released and whole server is not responding.
>>>>> >> >
>>>>> >> > 2017-04-26 04:09:15 WARN  vertx-blocked-thread-checker
>>>>> >> > BlockedThreadChecker:57 - Thread
>>>>> >> > Thread[vert.x-worker-thread-82,5,ignite]
>>>>> >> > has been blocked for 2329981 ms, time limit is 6
>>>>> >> > io.vertx.core.VertxException: Thread blocked
>>>>> >> > at sun.misc.Unsafe.park(Native Method)
>>>>> >> > at
>>>>> >> > java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>>>> >> > at
>>>>> >> >
>>>>> >> > java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAn
>>>>> dCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>>>> >> > at
>>>>> >> >
>>>>> >> > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcqu
>>>>> ireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>>>> >> > at
>>>>> >> >
>>>>> >> > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquir
>>>>> eSharedInte

export ignite data

2017-05-15 Thread Anil
Hi,

We have export functionality that reads Ignite data (around 1 lakh, i.e.
100k, records) and writes it to a file.

I have implemented it using the JDBC driver with a fetch size of 100. When
I run parallel exports, the client gets restarted with the following log:

2017-05-15 22:36:50 342 ERROR TcpDiscoverySpi:495 - Failed to send message:
TcpDiscoveryClientHeartbeatMessage [super=TcpDiscoveryAbstractMessage
[sndNodeId=null, id=6e6089f0c51-17f17130-b771-42d2-88d6-3be03fda1389,
verifierNodeId=null, topVer=0, pendingIdx=0, failedNodes=null,
isClient=true]]
java.net.SocketException: Socket is closed
at java.net.Socket.getSendBufferSize(Socket.java:1215)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.socketStream(TcpDiscoverySpi.java:1254)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.writeToSocket(TcpDiscoverySpi.java:1366)
at
org.apache.ignite.spi.discovery.tcp.ClientImpl$SocketWriter.body(ClientImpl.java:1095)
at
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
2017-05-15 22:37:04 903 WARN  TcpDiscoverySpi:480 - Client node was
reconnected after it was already considered failed by the server topology
(this could happen after all servers restarted or due to a long network
outage between the client and servers). All continuous queries and remote
event listeners created by this client will be unsubscribed, consider
listening to EVT_CLIENT_NODE_RECONNECTED event to restore them.
2017-05-15 22:37:05 094 INFO  GridDiscoveryManager:475 - Client node
reconnected to topology: TcpDiscoveryNode
[id=3749aab0-5048-4b33-933e-7c03ff6d921b, addrs=[0:0:0:0:0:0:0:1%lo,
127.0.0.1, 172.16.41.6], sockAddrs=[/0:0:0:0:0:0:0:1%lo:0, /127.0.0.1:0,
aswb-ignite-client-stage-hnpqd/172.16.41.6:0], discPort=0, order=381,
intOrder=0, lastExchangeTime=1494910250111, loc=true,
ver=1.9.0#20170302-sha1:a8169d0a, isClient=true]
2017-05-15 22:37:05 094 INFO  GridDiscoveryManager:475 - Topology snapshot
[ver=381, servers=8, clients=5, CPUs=72, heap=160.0GB]

This could be because of the long-running query. Is there a better way of
implementing an export-like feature using Ignite? Please advise. Thanks.


Thanks.
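[Editorial sketch] One generic way to avoid a single long-running cursor (this is a common database pattern, not an Ignite-specific API) is keyset pagination: issue many short queries of the form `SELECT ... WHERE id > ? ORDER BY id LIMIT n`, carrying the last seen key forward between chunks. A self-contained sketch over an in-memory sorted map, with hypothetical names:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.NavigableMap;
import java.util.TreeMap;

public class KeysetExport {
    // Fetch one chunk: keys strictly greater than afterKey, at most 'limit' rows.
    // In SQL this would be: SELECT ... WHERE id > ? ORDER BY id LIMIT ?
    static List<Long> chunk(NavigableMap<Long, String> data, long afterKey, int limit) {
        List<Long> out = new ArrayList<>();
        for (Long k : data.tailMap(afterKey, false).keySet()) {
            if (out.size() == limit)
                break;
            out.add(k);
        }
        return out;
    }

    static List<Long> exportAll(NavigableMap<Long, String> data, int chunkSize) {
        List<Long> exported = new ArrayList<>();
        long last = Long.MIN_VALUE;
        while (true) {
            List<Long> c = chunk(data, last, chunkSize);
            if (c.isEmpty())
                break;                    // no more rows: done
            exported.addAll(c);           // here: write the chunk to the file
            last = c.get(c.size() - 1);   // carry the last key forward
        }
        return exported;
    }

    public static void main(String[] args) {
        NavigableMap<Long, String> data = new TreeMap<>();
        for (long i = 1; i <= 10; i++) data.put(i, "rec-" + i);
        System.out.println(exportAll(data, 3).size()); // prints 10
    }
}
```

Each chunk is a short, independent query, so no cursor has to survive a topology change or hit a query timeout; the trade-off is that rows inserted or removed between chunks may be missed or seen once.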


Re: vertx-ignite

2017-05-13 Thread Anil
Hi Andrey,

Could you please help me here? Thanks.

Thanks

On 11 May 2017 at 14:16, Anil <anilk...@gmail.com> wrote:

> Hi Andrey,
>
> I am checking the default-ignite.xml at https://github.com/
> apacheignite/vertx-ignite/blob/master/src/main/
> resources/default-ignite.xml
>
> Could you please point what is missing in my configuration ?
>
> I could not find anything in default-ignite.xml.
>
> Thanks
>
> On 11 May 2017 at 11:07, Anil <anilk...@gmail.com> wrote:
>
>> HI Andrey,
>>
>> i am using vertx-ignite 3.4.1 and ignite 1.9 version. i will check the
>> default-ignite.xml.
>>
>> Thanks
>>
>> On 10 May 2017 at 21:31, Andrey Gura <ag...@apache.org> wrote:
>>
>>> Anil,
>>>
>>> What version of vertx-ignite or Ignite itself do you use?
>>>
>>> In provided ignite.xml there is no minimal configuration that is
>>> mandatory for Ignite cluster manager for vert.x (see
>>> default-ignite.xml for example).
>>>
>>>
>>> On Tue, May 2, 2017 at 9:18 AM, Anil <anilk...@gmail.com> wrote:
>>> >
>>> > Hi Andrey,
>>> >
>>> > Apologies for late reply. I don't have any exact reproduce. I can see
>>> this
>>> > log frequently in our logs.
>>> >
>>> > attached the ignite.xml.
>>> >
>>> > Thanks.
>>> >
>>> >
>>> >
>>> > On 26 April 2017 at 18:32, Andrey Gura <ag...@apache.org> wrote:
>>> >>
>>> >> Anil,
>>> >>
>>> >> what kind of lock do you mean? What are steps for reproduce? What
>>> >> version if vert-ignite do use and what is your configuration?
>>> >>
>>> >> On Wed, Apr 26, 2017 at 2:16 PM, Anil <anilk...@gmail.com> wrote:
>>> >> > HI,
>>> >> >
>>> >> > I am using vertx-ignite and when node is left the topology, lock is
>>> not
>>> >> > getting released and whole server is not responding.
>>> >> >
>>> >> > 2017-04-26 04:09:15 WARN  vertx-blocked-thread-checker
>>> >> > BlockedThreadChecker:57 - Thread
>>> >> > Thread[vert.x-worker-thread-82,5,ignite]
>>> >> > has been blocked for 2329981 ms, time limit is 6
>>> >> > io.vertx.core.VertxException: Thread blocked
>>> >> > at sun.misc.Unsafe.park(Native Method)
>>> >> > at
>>> >> > java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>>> >> > at
>>> >> >
>>> >> > java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAn
>>> dCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>>> >> > at
>>> >> >
>>> >> > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcqu
>>> ireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>>> >> > at
>>> >> >
>>> >> > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquir
>>> eSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>>> >> > at
>>> >> >
>>> >> > org.apache.ignite.internal.util.future.GridFutureAdapter.get
>>> 0(GridFutureAdapter.java:161)
>>> >> > at
>>> >> >
>>> >> > org.apache.ignite.internal.util.future.GridFutureAdapter.get
>>> (GridFutureAdapter.java:119)
>>> >> > at
>>> >> >
>>> >> > org.apache.ignite.internal.processors.cache.distributed.dht.
>>> atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:488)
>>> >> > at
>>> >> >
>>> >> > org.apache.ignite.internal.processors.cache.GridCacheAdapter
>>> .get(GridCacheAdapter.java:4663)
>>> >> > at
>>> >> >
>>> >> > org.apache.ignite.internal.processors.cache.GridCacheAdapter
>>> .get(GridCacheAdapter.java:1388)
>>> >> > at
>>> >> >
>>> >> > org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>>> .get(IgniteCacheProxy.java:1117)
>>> >> > at io.vertx.spi.cluster.ignite.im
>>> pl.MapImpl.get(MapImpl.java:81)
>>> >> > at
>>> >> > io.vertx.core.impl.HAManager.chooseHashedNo

Re: vertx-ignite

2017-05-11 Thread Anil
Hi Andrey,

I am checking the default-ignite.xml at
https://github.com/apacheignite/vertx-ignite/blob/master/src/main/resources/default-ignite.xml

Could you please point out what is missing in my configuration?

I could not find anything in default-ignite.xml.

Thanks

On 11 May 2017 at 11:07, Anil <anilk...@gmail.com> wrote:

> HI Andrey,
>
> i am using vertx-ignite 3.4.1 and ignite 1.9 version. i will check the
> default-ignite.xml.
>
> Thanks
>
> On 10 May 2017 at 21:31, Andrey Gura <ag...@apache.org> wrote:
>
>> Anil,
>>
>> What version of vertx-ignite or Ignite itself do you use?
>>
>> In provided ignite.xml there is no minimal configuration that is
>> mandatory for Ignite cluster manager for vert.x (see
>> default-ignite.xml for example).
>>
>>
>> On Tue, May 2, 2017 at 9:18 AM, Anil <anilk...@gmail.com> wrote:
>> >
>> > Hi Andrey,
>> >
>> > Apologies for late reply. I don't have any exact reproduce. I can see
>> this
>> > log frequently in our logs.
>> >
>> > attached the ignite.xml.
>> >
>> > Thanks.
>> >
>> >
>> >
>> > On 26 April 2017 at 18:32, Andrey Gura <ag...@apache.org> wrote:
>> >>
>> >> Anil,
>> >>
>> >> what kind of lock do you mean? What are steps for reproduce? What
>> >> version if vert-ignite do use and what is your configuration?
>> >>
>> >> On Wed, Apr 26, 2017 at 2:16 PM, Anil <anilk...@gmail.com> wrote:
>> >> > HI,
>> >> >
>> >> > I am using vertx-ignite and when node is left the topology, lock is
>> not
>> >> > getting released and whole server is not responding.
>> >> >
>> >> > 2017-04-26 04:09:15 WARN  vertx-blocked-thread-checker
>> >> > BlockedThreadChecker:57 - Thread
>> >> > Thread[vert.x-worker-thread-82,5,ignite]
>> >> > has been blocked for 2329981 ms, time limit is 6
>> >> > io.vertx.core.VertxException: Thread blocked
>> >> > at sun.misc.Unsafe.park(Native Method)
>> >> > at
>> >> > java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>> >> > at
>> >> >
>> >> > java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAn
>> dCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>> >> > at
>> >> >
>> >> > java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcqu
>> ireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>> >> > at
>> >> >
>> >> > java.util.concurrent.locks.AbstractQueuedSynchronizer.acquir
>> eSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>> >> > at
>> >> >
>> >> > org.apache.ignite.internal.util.future.GridFutureAdapter.get
>> 0(GridFutureAdapter.java:161)
>> >> > at
>> >> >
>> >> > org.apache.ignite.internal.util.future.GridFutureAdapter.get
>> (GridFutureAdapter.java:119)
>> >> > at
>> >> >
>> >> > org.apache.ignite.internal.processors.cache.distributed.dht.
>> atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:488)
>> >> > at
>> >> >
>> >> > org.apache.ignite.internal.processors.cache.GridCacheAdapter
>> .get(GridCacheAdapter.java:4663)
>> >> > at
>> >> >
>> >> > org.apache.ignite.internal.processors.cache.GridCacheAdapter
>> .get(GridCacheAdapter.java:1388)
>> >> > at
>> >> >
>> >> > org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>> .get(IgniteCacheProxy.java:1117)
>> >> > at io.vertx.spi.cluster.ignite.im
>> pl.MapImpl.get(MapImpl.java:81)
>> >> > at
>> >> > io.vertx.core.impl.HAManager.chooseHashedNode(HAManager.java:590)
>> >> > at io.vertx.core.impl.HAManager.c
>> heckSubs(HAManager.java:519)
>> >> > at io.vertx.core.impl.HAManager.nodeLeft(HAManager.java:305)
>> >> > at io.vertx.core.impl.HAManager.a
>> ccess$100(HAManager.java:107)
>> >> > at io.vertx.core.impl.HAManager$1
>> .nodeLeft(HAManager.java:157)
>> >> > at
>> >> >
>> >> > io.vertx.spi.cluster.ignite.IgniteClusterManager.lambda$null
>> $4(IgniteClusterManager.java:254)
>> >> > at
>> >> >
>> >> > io.vertx.spi.cluster.ignite.IgniteClusterManager$$Lambda$36/
>> 837728834.handle(Unknown
>> >> > Source)
>> >> > at
>> >> >
>> >> > io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(
>> ContextImpl.java:271)
>> >> > at
>> >> > io.vertx.core.impl.ContextImpl$$Lambda$13/116289363.run(Unknown
>> >> > Source)
>> >> > at io.vertx.core.impl.TaskQueue.l
>> ambda$new$0(TaskQueue.java:60)
>> >> > at io.vertx.core.impl.TaskQueue$$
>> Lambda$12/443290224.run(Unknown
>> >> > Source)
>> >> > at
>> >> >
>> >> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>> Executor.java:1142)
>> >> > at
>> >> >
>> >> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>> lExecutor.java:617)
>> >> > at java.lang.Thread.run(Thread.java:745)
>> >> >
>> >> > was it a known issue ?
>> >> >
>> >> > Thanks
>> >
>> >
>>
>
>


Re: vertx-ignite

2017-05-10 Thread Anil
Hi Andrey,

I am using vertx-ignite 3.4.1 and Ignite 1.9. I will check the
default-ignite.xml.

Thanks

On 10 May 2017 at 21:31, Andrey Gura <ag...@apache.org> wrote:

> Anil,
>
> What version of vertx-ignite or Ignite itself do you use?
>
> In provided ignite.xml there is no minimal configuration that is
> mandatory for Ignite cluster manager for vert.x (see
> default-ignite.xml for example).
>
>
> On Tue, May 2, 2017 at 9:18 AM, Anil <anilk...@gmail.com> wrote:
> >
> > Hi Andrey,
> >
> > Apologies for late reply. I don't have any exact reproduce. I can see
> this
> > log frequently in our logs.
> >
> > attached the ignite.xml.
> >
> > Thanks.
> >
> >
> >
> > On 26 April 2017 at 18:32, Andrey Gura <ag...@apache.org> wrote:
> >>
> >> Anil,
> >>
> >> what kind of lock do you mean? What are steps for reproduce? What
> >> version if vert-ignite do use and what is your configuration?
> >>
> >> On Wed, Apr 26, 2017 at 2:16 PM, Anil <anilk...@gmail.com> wrote:
> >> > HI,
> >> >
> >> > I am using vertx-ignite and when node is left the topology, lock is
> not
> >> > getting released and whole server is not responding.
> >> >
> >> > 2017-04-26 04:09:15 WARN  vertx-blocked-thread-checker
> >> > BlockedThreadChecker:57 - Thread
> >> > Thread[vert.x-worker-thread-82,5,ignite]
> >> > has been blocked for 2329981 ms, time limit is 6
> >> > io.vertx.core.VertxException: Thread blocked
> >> > at sun.misc.Unsafe.park(Native Method)
> >> > at
> >> > java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> >> > at
> >> >
> >> > java.util.concurrent.locks.AbstractQueuedSynchronizer.
> parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> >> > at
> >> >
> >> > java.util.concurrent.locks.AbstractQueuedSynchronizer.
> doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> >> > at
> >> >
> >> > java.util.concurrent.locks.AbstractQueuedSynchronizer.
> acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> >> > at
> >> >
> >> > org.apache.ignite.internal.util.future.GridFutureAdapter.
> get0(GridFutureAdapter.java:161)
> >> > at
> >> >
> >> > org.apache.ignite.internal.util.future.GridFutureAdapter.
> get(GridFutureAdapter.java:119)
> >> > at
> >> >
> >> > org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:488)
> >> > at
> >> >
> >> > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(
> GridCacheAdapter.java:4663)
> >> > at
> >> >
> >> > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(
> GridCacheAdapter.java:1388)
> >> > at
> >> >
> >> > org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(
> IgniteCacheProxy.java:1117)
> >> > at io.vertx.spi.cluster.ignite.impl.MapImpl.get(MapImpl.java:
> 81)
> >> > at
> >> > io.vertx.core.impl.HAManager.chooseHashedNode(HAManager.java:590)
> >> > at io.vertx.core.impl.HAManager.checkSubs(HAManager.java:519)
> >> > at io.vertx.core.impl.HAManager.nodeLeft(HAManager.java:305)
> >> > at io.vertx.core.impl.HAManager.
> access$100(HAManager.java:107)
> >> > at io.vertx.core.impl.HAManager$
> 1.nodeLeft(HAManager.java:157)
> >> > at
> >> >
> >> > io.vertx.spi.cluster.ignite.IgniteClusterManager.lambda$
> null$4(IgniteClusterManager.java:254)
> >> > at
> >> >
> >> > io.vertx.spi.cluster.ignite.IgniteClusterManager$$Lambda$
> 36/837728834.handle(Unknown
> >> > Source)
> >> > at
> >> >
> >> > io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.
> java:271)
> >> > at
> >> > io.vertx.core.impl.ContextImpl$$Lambda$13/116289363.run(Unknown
> >> > Source)
> >> > at io.vertx.core.impl.TaskQueue.lambda$new$0(TaskQueue.java:
> 60)
> >> > at io.vertx.core.impl.TaskQueue$$Lambda$12/443290224.run(
> Unknown
> >> > Source)
> >> > at
> >> >
> >> > java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> >> > at
> >> >
> >> > java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> >> > at java.lang.Thread.run(Thread.java:745)
> >> >
> >> > was it a known issue ?
> >> >
> >> > Thanks
> >
> >
>


Re: BinaryObject

2017-05-03 Thread Anil
HI Team,

Did you get a chance to look into it? Thanks.

Thanks

On 2 May 2017 at 11:19, Anil <anilk...@gmail.com> wrote:

> Hi,
>
> java.lang.ClassCastException: 
> org.apache.ignite.internal.binary.BinaryObjectImpl
> cannot be cast to org.apache.ignite.cache.affinity.Affinity  exception
> thrown when a field updated using BinaryObject for a cache entry. and it is
> intermittent.
>
> Following is the snippet i am using
>
> IgniteCache<Affinity, BinaryObject> cache =
> ignite.cache(CacheManager.CACHE).withKeepBinary();
> IgniteCache<Affinity, BinaryObject> lCache =
> ignite.cache(CacheManager.LOCK_CACHE).withKeepBinary();
> ScanQuery<Affinity, BinaryObject> scanQuery = new
> ScanQuery<Affinity, BinaryObject>();
> scanQuery.setLocal(true);
> scanQuery.setPartition(1);
>
> Iterator<Entry<Affinity, BinaryObject>> iterator =
> cache.query(scanQuery).iterator();
> Integer oldStat = null, newStat = null;
> boolean changed = false;
> Entry<Affinity, BinaryObject> row = null;
> while (iterator.hasNext()) {
> try {
> row = iterator.next();
> BinaryObject itrVal = row.getValue();
> String id = itrVal.field("id");
> Lock lock = lCache.lock(id);
> try {
> lock.lock();
> BinaryObject val = cache.get(row.getKey());
> if (null != val){
> BinaryObjectBuilder bldr = val.toBuilder();
> oldStat = val.field("stat");
> Status status = null ; // determine status
> if (!CommonUtils.equalsObject(oldStat, newStat)){
> changed = true;
> bldr.setField("stat", status.getStatus());
> bldr.setField("status", status.getDescription());
> }
>
> // update other fields
> if(changed){
> cache.put(row.getKey(), bldr.build());
> }
> }
> }catch (Exception ex){
> log.error("Failed to update the status of  {}  {} ", id, ex);
> }finally {
> lock.unlock();
> }
> }catch (Exception ex){
> log.error("Failed to process and update status of  {}  {} ", row, ex);
> }
> }
>
>
> Do you see any issue in the above snippet ? thanks.
>
>
> Thanks
>
>
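[Editorial note] One thing worth flagging in the quoted snippet, separate from the ClassCastException: `changed` is declared outside the while loop and never reset, so after the first entry that really changes, every subsequent entry is rewritten with `cache.put` even when its status is unchanged. A standalone demonstration of the flag bug (plain Java, hypothetical data, no Ignite):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;

public class ChangedFlagDemo {
    // Count how many entries get "written" depending on whether the flag
    // is reset at the top of each iteration.
    static int countWrites(List<Integer> oldStats, List<Integer> newStats, boolean resetPerEntry) {
        int writes = 0;
        boolean changed = false;                 // declared outside the loop, as in the snippet
        for (int i = 0; i < oldStats.size(); i++) {
            if (resetPerEntry)
                changed = false;                 // the fix: reset per entry
            if (!Objects.equals(oldStats.get(i), newStats.get(i)))
                changed = true;
            if (changed)
                writes++;                        // stands in for cache.put(...) in the original
        }
        return writes;
    }

    public static void main(String[] args) {
        List<Integer> oldStats = Arrays.asList(1, 2, 3, 4);
        List<Integer> newStats = Arrays.asList(1, 9, 3, 4); // only entry #2 changed
        System.out.println(countWrites(oldStats, newStats, false)); // prints 3: entries 2..4 written
        System.out.println(countWrites(oldStats, newStats, true));  // prints 1: only entry 2 written
    }
}
```

Moving `changed = false;` (and the other per-entry locals) inside the while loop removes the redundant puts.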


Re: vertx-ignite

2017-05-02 Thread Anil
Hi Andrey,

Apologies for the late reply. I don't have exact reproduction steps; I see
this log frequently in our logs.

attached the ignite.xml.

Thanks.



On 26 April 2017 at 18:32, Andrey Gura <ag...@apache.org> wrote:

> Anil,
>
> what kind of lock do you mean? What are steps for reproduce? What
> version if vert-ignite do use and what is your configuration?
>
> On Wed, Apr 26, 2017 at 2:16 PM, Anil <anilk...@gmail.com> wrote:
> > HI,
> >
> > I am using vertx-ignite and when node is left the topology, lock is not
> > getting released and whole server is not responding.
> >
> > 2017-04-26 04:09:15 WARN  vertx-blocked-thread-checker
> > BlockedThreadChecker:57 - Thread Thread[vert.x-worker-thread-
> 82,5,ignite]
> > has been blocked for 2329981 ms, time limit is 6
> > io.vertx.core.VertxException: Thread blocked
> > at sun.misc.Unsafe.park(Native Method)
> > at java.util.concurrent.locks.LockSupport.park(LockSupport.
> java:175)
> > at
> > java.util.concurrent.locks.AbstractQueuedSynchronizer.
> parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
> > at
> > java.util.concurrent.locks.AbstractQueuedSynchronizer.
> doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
> > at
> > java.util.concurrent.locks.AbstractQueuedSynchronizer.
> acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
> > at
> > org.apache.ignite.internal.util.future.GridFutureAdapter.
> get0(GridFutureAdapter.java:161)
> > at
> > org.apache.ignite.internal.util.future.GridFutureAdapter.
> get(GridFutureAdapter.java:119)
> > at
> > org.apache.ignite.internal.processors.cache.distributed.
> dht.atomic.GridDhtAtomicCache.get0(GridDhtAtomicCache.java:488)
> > at
> > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(
> GridCacheAdapter.java:4663)
> > at
> > org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(
> GridCacheAdapter.java:1388)
> > at
> > org.apache.ignite.internal.processors.cache.IgniteCacheProxy.get(
> IgniteCacheProxy.java:1117)
> > at io.vertx.spi.cluster.ignite.impl.MapImpl.get(MapImpl.java:81)
> > at io.vertx.core.impl.HAManager.chooseHashedNode(HAManager.
> java:590)
> > at io.vertx.core.impl.HAManager.checkSubs(HAManager.java:519)
> > at io.vertx.core.impl.HAManager.nodeLeft(HAManager.java:305)
> > at io.vertx.core.impl.HAManager.access$100(HAManager.java:107)
> > at io.vertx.core.impl.HAManager$1.nodeLeft(HAManager.java:157)
> > at
> > io.vertx.spi.cluster.ignite.IgniteClusterManager.lambda$
> null$4(IgniteClusterManager.java:254)
> > at
> > io.vertx.spi.cluster.ignite.IgniteClusterManager$$Lambda$
> 36/837728834.handle(Unknown
> > Source)
> > at
> > io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.
> java:271)
> > at io.vertx.core.impl.ContextImpl$$Lambda$13/
> 116289363.run(Unknown
> > Source)
> > at io.vertx.core.impl.TaskQueue.lambda$new$0(TaskQueue.java:60)
> > at io.vertx.core.impl.TaskQueue$$Lambda$12/443290224.run(Unknown
> > Source)
> > at
> > java.util.concurrent.ThreadPoolExecutor.runWorker(
> ThreadPoolExecutor.java:1142)
> > at
> > java.util.concurrent.ThreadPoolExecutor$Worker.run(
> ThreadPoolExecutor.java:617)
> > at java.lang.Thread.run(Thread.java:745)
> >
> > was it a known issue ?
> >
> > Thanks
>


[Attachment ignite.xml: the Spring bean definitions were stripped by the list
archive. The surviving fragments show the standard Spring beans/util schema
header, a static discovery address list (X.X.X.1 to X.X.X.4), and two cache
configurations whose index type pairs are java.lang.String /
net.test.cs.entity.Lock and org.apache.ignite.cache.affinity.AffinityKey /
net.test.cs.entity.Cache.]

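Since the attached ignite.xml does not survive the archive, here is a rough
sketch of what a cache section with those index type pairs typically looks
like in Ignite 1.x Spring XML. The cache names and modes are assumptions;
only the indexed type pairs come from the surviving fragments.

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="cacheConfiguration">
    <list>
      <!-- Lock cache: String keys, net.test.cs.entity.Lock values. -->
      <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <property name="name" value="lockCache"/>
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="indexedTypes">
          <list>
            <value>java.lang.String</value>
            <value>net.test.cs.entity.Lock</value>
          </list>
        </property>
      </bean>
      <!-- Data cache: AffinityKey keys, net.test.cs.entity.Cache values. -->
      <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <property name="name" value="dataCache"/>
        <property name="cacheMode" value="PARTITIONED"/>
        <property name="indexedTypes">
          <list>
            <value>org.apache.ignite.cache.affinity.AffinityKey</value>
            <value>net.test.cs.entity.Cache</value>
          </list>
        </property>
      </bean>
    </list>
  </property>
</bean>
```

The indexedTypes property takes (key class, value class) pairs and tells
Ignite which types to register for SQL indexing.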


BinaryObject

2017-05-01 Thread Anil
Hi,

java.lang.ClassCastException:
org.apache.ignite.internal.binary.BinaryObjectImpl cannot be cast to
org.apache.ignite.cache.affinity.Affinity is thrown when a field is updated
using BinaryObject for a cache entry, and it is intermittent.

Following is the snippet I am using:

IgniteCache<Affinity, BinaryObject> cache =
    ignite.cache(CacheManager.CACHE).withKeepBinary();
IgniteCache<String, BinaryObject> lCache =
    ignite.cache(CacheManager.LOCK_CACHE).withKeepBinary();
ScanQuery<Affinity, BinaryObject> scanQuery = new ScanQuery<>();
scanQuery.setLocal(true);
scanQuery.setPartition(1);

Iterator<Entry<Affinity, BinaryObject>> iterator =
    cache.query(scanQuery).iterator();
Integer oldStat = null, newStat = null;
boolean changed = false;
Entry<Affinity, BinaryObject> row = null;
while (iterator.hasNext()) {
    try {
        row = iterator.next();
        BinaryObject itrVal = row.getValue();
        String id = itrVal.field("id");
        Lock lock = lCache.lock(id);
        try {
            lock.lock();
            BinaryObject val = cache.get(row.getKey());
            if (null != val) {
                BinaryObjectBuilder bldr = val.toBuilder();
                oldStat = val.field("stat");
                Status status = null; // determine status
                if (!CommonUtils.equalsObject(oldStat, newStat)) {
                    changed = true;
                    bldr.setField("stat", status.getStatus());
                    bldr.setField("status", status.getDescription());
                }

                // update other fields
                if (changed) {
                    cache.put(row.getKey(), bldr.build());
                }
            }
        } catch (Exception ex) {
            log.error("Failed to update the status of  {}  {} ", id, ex);
        } finally {
            lock.unlock();
        }
    } catch (Exception ex) {
        log.error("Failed to process and update status of  {}  {} ", row, ex);
    }
}


Do you see any issue in the above snippet ? thanks.


Thanks


Node left topology

2017-04-18 Thread Anil
HI,

Could you please clarify  below ?

1. Can long-running queries on a node cause the node to leave the topology?
2. Can a client reconnect to the larger cluster segment when the node it was
connected to has left the topology?

Thanks


Re: Sort queries are slow

2017-04-04 Thread Anil
Thanks Sergi.

#1 - we display records in a paginated way (using offset and rows), and the
export feature fetches the whole data set.

waiting for ignite 2.0 :)

Thanks

On 4 April 2017 at 18:43, Sergi Vladykin <sergi.vlady...@gmail.com> wrote:

> Ok. Also I suspect that you have relatively large result sets, otherwise
> you would not notice any problems with sorting.
>
> I suggest you to do the following:
>
> 1. Return results by default with some reasonable LIMIT (30 or 50 for
> example) and have some separate button to get the whole result set if
> needed.
>
> 2. For the most frequently used filtering/sorting setups still create
> group indexes. I do not believe that your users will pick all the possible
> combinations with the same frequency.
>
> For Ignite 2.0 we already have some improvements in this area, but if the
> result set is huge enough, it will not help you as well.
>
> Sergi
>
> 2017-04-04 14:29 GMT+03:00 Anil <anilk...@gmail.com>:
>
>> Hi Sergi,
>>
>>
>> If you do not use indexes, then sorting will be performed each time.
>> Sorry.
>> * - i cannot use group indexes that you suggested. But i am using
>> individual indexes*
>>
>> From your pattern I suspect that you output the result set into some UI
>> table with sortable columns, am I right?
>> - *Yes* :)
>>
>> Thanks
>>
>> On 4 April 2017 at 16:45, Sergi Vladykin <sergi.vlady...@gmail.com>
>> wrote:
>>
>>> Alexey,
>>>
>>> Definitely! Please go ahead.
>>>
>>> Anil,
>>>
>>> If you do not use indexes, then sorting will be performed each time.
>>> Sorry.
>>>
>>> From your pattern I suspect that you output the result set into some UI
>>> table with sortable columns, am I right?
>>>
>>> Sergi
>>>
>>> 2017-04-04 13:54 GMT+03:00 Anil <anilk...@gmail.com>:
>>>
>>>> Hi Sergi,
>>>>
>>>> Thanks for the response.
>>>>
>>>> I have around 70 columns and support sorting on many columns. group
>>>> index is not suitable in my case. Do you have any other suggestions ?
>>>>
>>>> To some extent https://issues.apache.org/jira/browse/IGNITE-3013
>>>> improves the response time.
>>>>
>>>> Thanks
>>>>
>>>>
>>>> On 4 April 2017 at 15:28, Sergi Vladykin <sergi.vlady...@gmail.com>
>>>> wrote:
>>>>
>>>>> You should create a group index on (A, B) and rewrite the query the
>>>>> following way:
>>>>>
>>>>> select * from Test where A = ''  order by A, B
>>>>>
>>>>> Semantically it will be the same, but it will use index (A, B) for
>>>>> search and sorting.
>>>>>
>>>>> Sergi
>>>>>
>>>>> 2017-04-04 12:18 GMT+03:00 Anil <anilk...@gmail.com>:
>>>>>
>>>>>> HI,
>>>>>>
>>>>>> i have created a table with columns A and B. A is indexed column. and
>>>>>> use following queries
>>>>>>
>>>>>> 1. select * from Test where A = ''
>>>>>> 2. select * from Test where A = ''  order by B
>>>>>>
>>>>>> #1 is fast as it uses default sorting of indexed column A. But #2 is
>>>>>> slow.
>>>>>>
>>>>>> Do you think creating index on B will speed up #2 query ? i tried
>>>>>> that as well and no luck.
>>>>>>
>>>>>> are there any ways to improve the performance of #2 ? please advise.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
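Sergi's advice above (create a group index on (A, B) and rewrite the query
as select * from Test where A = ''  order by A, B) can be declared in Spring
XML roughly as follows. This is a sketch, not the poster's actual
configuration: the key/value types, field types, and index name are
placeholders.

```xml
<bean class="org.apache.ignite.cache.QueryEntity">
  <property name="keyType" value="java.lang.String"/>
  <property name="valueType" value="Test"/>
  <property name="fields">
    <map>
      <entry key="a" value="java.lang.String"/>
      <entry key="b" value="java.lang.String"/>
    </map>
  </property>
  <property name="indexes">
    <list>
      <!-- Group (composite) index on (A, B): lets the engine use one
           index for both the A = ? filter and the ORDER BY A, B sort. -->
      <bean class="org.apache.ignite.cache.QueryIndex">
        <property name="name" value="test_a_b_idx"/>
        <property name="indexType" value="SORTED"/>
        <property name="fields">
          <map>
            <entry key="a" value="true"/> <!-- true = ascending -->
            <entry key="b" value="true"/>
          </map>
        </property>
      </bean>
    </list>
  </property>
</bean>
```

Field order in the map matters: the index sorts by A first, then B, which is
why the rewritten query must order by A, B rather than by B alone.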


Re: Sort queries are slow

2017-04-04 Thread Anil
Hi Sergi,


If you do not use indexes, then sorting will be performed each time. Sorry.
* - i cannot use group indexes that you suggested. But i am using
individual indexes*

From your pattern I suspect that you output the result set into some UI
table with sortable columns, am I right?
- *Yes* :)

Thanks

On 4 April 2017 at 16:45, Sergi Vladykin <sergi.vlady...@gmail.com> wrote:

> Alexey,
>
> Definitely! Please go ahead.
>
> Anil,
>
> If you do not use indexes, then sorting will be performed each time. Sorry.
>
> From your pattern I suspect that you output the result set into some UI
> table with sortable columns, am I right?
>
> Sergi
>
> 2017-04-04 13:54 GMT+03:00 Anil <anilk...@gmail.com>:
>
>> Hi Sergi,
>>
>> Thanks for the response.
>>
>> I have around 70 columns and support sorting on many columns. group index
>> is not suitable in my case. Do you have any other suggestions ?
>>
>> To some extent https://issues.apache.org/jira/browse/IGNITE-3013
>> improves the response time.
>>
>> Thanks
>>
>>
>> On 4 April 2017 at 15:28, Sergi Vladykin <sergi.vlady...@gmail.com>
>> wrote:
>>
>>> You should create a group index on (A, B) and rewrite the query the
>>> following way:
>>>
>>> select * from Test where A = ''  order by A, B
>>>
>>> Semantically it will be the same, but it will use index (A, B) for
>>> search and sorting.
>>>
>>> Sergi
>>>
>>> 2017-04-04 12:18 GMT+03:00 Anil <anilk...@gmail.com>:
>>>
>>>> HI,
>>>>
>>>> i have created a table with columns A and B. A is indexed column. and
>>>> use following queries
>>>>
>>>> 1. select * from Test where A = ''
>>>> 2. select * from Test where A = ''  order by B
>>>>
>>>> #1 is fast as it uses default sorting of indexed column A. But #2 is
>>>> slow.
>>>>
>>>> Do you think creating index on B will speed up #2 query ? i tried that
>>>> as well and no luck.
>>>>
>>>> are there any ways to improve the performance of #2 ? please advise.
>>>>
>>>> Thanks
>>>>
>>>>
>>>
>>
>


Re: Sort queries are slow

2017-04-04 Thread Anil
Hi Sergi,

Thanks for the response.

I have around 70 columns and need to support sorting on many of them; a
group index is not suitable in my case. Do you have any other suggestions?

To some extent https://issues.apache.org/jira/browse/IGNITE-3013 improves
the response time.

Thanks


On 4 April 2017 at 15:28, Sergi Vladykin <sergi.vlady...@gmail.com> wrote:

> You should create a group index on (A, B) and rewrite the query the
> following way:
>
> select * from Test where A = ''  order by A, B
>
> Semantically it will be the same, but it will use index (A, B) for search
> and sorting.
>
> Sergi
>
> 2017-04-04 12:18 GMT+03:00 Anil <anilk...@gmail.com>:
>
>> HI,
>>
>> i have created a table with columns A and B. A is indexed column. and use
>> following queries
>>
>> 1. select * from Test where A = ''
>> 2. select * from Test where A = ''  order by B
>>
>> #1 is fast as it uses default sorting of indexed column A. But #2 is slow.
>>
>> Do you think creating index on B will speed up #2 query ? i tried that as
>> well and no luck.
>>
>> are there any ways to improve the performance of #2 ? please advise.
>>
>> Thanks
>>
>>
>


Sort queries are slow

2017-04-04 Thread Anil
HI,

I have created a table with columns A and B; A is an indexed column. I use
the following queries:

1. select * from Test where A = ''
2. select * from Test where A = ''  order by B

#1 is fast as it uses the default sorting of indexed column A, but #2 is slow.

Do you think creating an index on B will speed up #2? I tried that as well,
with no luck.

Are there any ways to improve the performance of #2? Please advise.

Thanks


Re: 2.0

2017-04-03 Thread Anil
Hi Nikolai,

Yes. It is working :).

and the vertx-ignite node also joined the cluster now :). Not sure how it
started working :)


Thanks.

On 3 April 2017 at 21:44, Nikolai Tikhonov <ntikho...@apache.org> wrote:

> I mean that host equals machine. Can you try to start nodes with
> TcpDiscoveryVmIpFinder and check that ignite will be work properly?
>
> On Mon, Apr 3, 2017 at 7:08 PM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Nikolai,
>>
>> What do you mean by hosts ? Two machines able to connect each other.
>>
>> Thanks
>>
>> On 3 April 2017 at 21:12, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>>
>>> Hi Anil,
>>>
>>> Are you sure that these addresses (172.31.1.189#47500,
>>> 172.31.7.192#47500) are reachable from the hosts? Could you check it by telnet?
>>>
>>> On Mon, Apr 3, 2017 at 12:20 PM, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Hi Val,
>>>>
>>>> I tried without vertx and still nodes not joining the cluster. Looks
>>>> like the issue is not because of vertx.
>>>>
>>>> Thanks
>>>>
>>>> On 3 April 2017 at 12:13, Anil <anilk...@gmail.com> wrote:
>>>>
>>>>> Hi Val,
>>>>>
>>>>> Can you help me in understanding following -
>>>>>
>>>>> 1. I see empty files created with 127.0.0.1#47500 and
>>>>> 0:0:0:0:0:0:0:1%lo#47500 along with actualipaddress#port. any specific
>>>>> reason ?
>>>>>
>>>>> - A number of nodes will have single set of 127.0.0.1#47500 and
>>>>> 0:0:0:0:0:0:0:1%lo#47500 empty files.
>>>>>
>>>>> 2. a node is deleting the empty file of other node. the delete of file
>>>>> happens only during un-register (as per the code).
>>>>>
>>>>> logs of 192 -
>>>>>
>>>>> 2017-04-03 06:20:03 WARN  TcpDiscoveryS3IpFinder:480 - Creating file
>>>>> in the bucket with name - 172.31.7.192#47500
>>>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - listing the
>>>>> bucket objects - 3
>>>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>>>> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
>>>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>>>> registered - 0:0:0:0:0:0:0:1%lo - 47500
>>>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>>>> registered (before) - 127.0.0.1 - 47500
>>>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>>>> registered - 127.0.0.1 - 47500
>>>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>>>> registered (before) - 172.31.1.189 - 47500
>>>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>>>> registered - 172.31.1.189 - 47500
>>>>> *2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - deleting file
>>>>> in the bucket with name - 172.31.1.189#47500*
>>>>> 2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - Creating file
>>>>> in the bucket with name - 172.31.7.192#47500
>>>>>
>>>>> logs of 189 -
>>>>>
>>>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - listing the
>>>>> bucket objects - 3
>>>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>>>> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
>>>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>>>> registered - 0:0:0:0:0:0:0:1%lo - 47500
>>>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>>>> registered (before) - 127.0.0.1 - 47500
>>>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>>>> registered - 127.0.0.1 - 47500
>>>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>>>> registered (before) - 172.31.7.192 - 47500
>>>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>>>> registered - 172.31.7.192 - 47500
>>>>> *2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - deleting file
>>>>> in the bucket with name - 172.31.7.192#47500*
>>>>> 2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - Creating file
>>>>> in the bucket with name - 172.31.1.189#47500
>>>>>
>>>>> Thanks
>>>>>
>>>>> On 2 April 2017 at 14:34, Anil <anilk...@gmail.com> wrote:
>>>>>
>>>>>> Hi Val,
>>>>>>
>>>>>> Nodes not joining the cluster. I just shared the logs to understand
>>>>>> if there is any info in the logs.
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> On 2 April 2017 at 11:32, vkulichenko <valentin.kuliche...@gmail.com>
>>>>>> wrote:
>>>>>>
>>>>>>> Anil,
>>>>>>>
>>>>>>> I'm not sure I understand what is the issue in the first place. What
>>>>>>> is not
>>>>>>> working?
>>>>>>>
>>>>>>> -Val
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> View this message in context: http://apache-ignite-users.705
>>>>>>> 18.x6.nabble.com/2-0-tp11487p11640.html
>>>>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>


Re: 2.0

2017-04-03 Thread Anil
Hi Nikolai,

What do you mean by hosts ? Two machines able to connect each other.

Thanks

On 3 April 2017 at 21:12, Nikolai Tikhonov <ntikho...@apache.org> wrote:

> Hi Anil,
>
> Are you sure that these addresses (172.31.1.189#47500,
> 172.31.7.192#47500) are reachable from the hosts? Could you check it by telnet?
>
> On Mon, Apr 3, 2017 at 12:20 PM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Val,
>>
>> I tried without vertx and still nodes not joining the cluster. Looks like
>> the issue is not because of vertx.
>>
>> Thanks
>>
>> On 3 April 2017 at 12:13, Anil <anilk...@gmail.com> wrote:
>>
>>> Hi Val,
>>>
>>> Can you help me in understanding following -
>>>
>>> 1. I see empty files created with 127.0.0.1#47500 and
>>> 0:0:0:0:0:0:0:1%lo#47500 along with actualipaddress#port. any specific
>>> reason ?
>>>
>>> - A number of nodes will have single set of 127.0.0.1#47500 and
>>> 0:0:0:0:0:0:0:1%lo#47500 empty files.
>>>
>>> 2. a node is deleting the empty file of other node. the delete of file
>>> happens only during un-register (as per the code).
>>>
>>> logs of 192 -
>>>
>>> 2017-04-03 06:20:03 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
>>> the bucket with name - 172.31.7.192#47500
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - listing the
>>> bucket objects - 3
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered - 0:0:0:0:0:0:0:1%lo - 47500
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered (before) - 127.0.0.1 - 47500
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered - 127.0.0.1 - 47500
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered (before) - 172.31.1.189 - 47500
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered - 172.31.1.189 - 47500
>>> *2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
>>> the bucket with name - 172.31.1.189#47500*
>>> 2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
>>> the bucket with name - 172.31.7.192#47500
>>>
>>> logs of 189 -
>>>
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - listing the
>>> bucket objects - 3
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered - 0:0:0:0:0:0:0:1%lo - 47500
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered (before) - 127.0.0.1 - 47500
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered - 127.0.0.1 - 47500
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered (before) - 172.31.7.192 - 47500
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered - 172.31.7.192 - 47500
>>> *2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
>>> the bucket with name - 172.31.7.192#47500*
>>> 2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
>>> the bucket with name - 172.31.1.189#47500
>>>
>>> Thanks
>>>
>>> On 2 April 2017 at 14:34, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Hi Val,
>>>>
>>>> Nodes not joining the cluster. I just shared the logs to understand if
>>>> there is any info in the logs.
>>>>
>>>> Thanks
>>>>
>>>> On 2 April 2017 at 11:32, vkulichenko <valentin.kuliche...@gmail.com>
>>>> wrote:
>>>>
>>>>> Anil,
>>>>>
>>>>> I'm not sure I understand what is the issue in the first place. What
>>>>> is not
>>>>> working?
>>>>>
>>>>> -Val
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> View this message in context: http://apache-ignite-users.705
>>>>> 18.x6.nabble.com/2-0-tp11487p11640.html
>>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>>>
>>>>
>>>>
>>>
>>
>


Re: 2.0

2017-04-03 Thread Anil
Hi Val,

I tried without vertx and the nodes are still not joining the cluster. It
looks like the issue is not caused by vertx.

Thanks

On 3 April 2017 at 12:13, Anil <anilk...@gmail.com> wrote:

> Hi Val,
>
> Can you help me in understanding following -
>
> 1. I see empty files created with 127.0.0.1#47500 and
> 0:0:0:0:0:0:0:1%lo#47500 along with actualipaddress#port. any specific
> reason ?
>
> - A number of nodes will have single set of 127.0.0.1#47500 and
> 0:0:0:0:0:0:0:1%lo#47500 empty files.
>
> 2. a node is deleting the empty file of other node. the delete of file
> happens only during un-register (as per the code).
>
> logs of 192 -
>
> 2017-04-03 06:20:03 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
> the bucket with name - 172.31.7.192#47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - listing the bucket
> objects - 3
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 0:0:0:0:0:0:0:1%lo - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 127.0.0.1 - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 127.0.0.1 - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 172.31.1.189 - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 172.31.1.189 - 47500
> *2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
> the bucket with name - 172.31.1.189#47500*
> 2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
> the bucket with name - 172.31.7.192#47500
>
> logs of 189 -
>
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - listing the bucket
> objects - 3
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 0:0:0:0:0:0:0:1%lo - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 127.0.0.1 - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 127.0.0.1 - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 172.31.7.192 - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 172.31.7.192 - 47500
> *2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
> the bucket with name - 172.31.7.192#47500*
> 2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
> the bucket with name - 172.31.1.189#47500
>
> Thanks
>
> On 2 April 2017 at 14:34, Anil <anilk...@gmail.com> wrote:
>
>> Hi Val,
>>
>> Nodes not joining the cluster. I just shared the logs to understand if
>> there is any info in the logs.
>>
>> Thanks
>>
>> On 2 April 2017 at 11:32, vkulichenko <valentin.kuliche...@gmail.com>
>> wrote:
>>
>>> Anil,
>>>
>>> I'm not sure I understand what is the issue in the first place. What is
>>> not
>>> working?
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.705
>>> 18.x6.nabble.com/2-0-tp11487p11640.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>


Re: 2.0

2017-04-03 Thread Anil
Hi Val,

Can you help me in understanding following -

1. I see empty files created with names 127.0.0.1#47500 and
0:0:0:0:0:0:0:1%lo#47500 along with actualipaddress#port. Is there any
specific reason?

- A number of nodes will have a single set of 127.0.0.1#47500 and
0:0:0:0:0:0:0:1%lo#47500 empty files.

2. A node is deleting the empty file of another node. The deletion of a file
happens only during un-register (as per the code).

logs of 192 -

2017-04-03 06:20:03 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in the
bucket with name - 172.31.7.192#47500
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - listing the bucket
objects - 3
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered - 0:0:0:0:0:0:0:1%lo - 47500
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered (before) - 127.0.0.1 - 47500
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered - 127.0.0.1 - 47500
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered (before) - 172.31.1.189 - 47500
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered - 172.31.1.189 - 47500
*2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
the bucket with name - 172.31.1.189#47500*
2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in the
bucket with name - 172.31.7.192#47500

logs of 189 -

2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - listing the bucket
objects - 3
2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered - 0:0:0:0:0:0:0:1%lo - 47500
2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered (before) - 127.0.0.1 - 47500
2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered - 127.0.0.1 - 47500
2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered (before) - 172.31.7.192 - 47500
2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered - 172.31.7.192 - 47500
*2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
the bucket with name - 172.31.7.192#47500*
2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in the
bucket with name - 172.31.1.189#47500

Thanks

On 2 April 2017 at 14:34, Anil <anilk...@gmail.com> wrote:

> Hi Val,
>
> Nodes not joining the cluster. I just shared the logs to understand if
> there is any info in the logs.
>
> Thanks
>
> On 2 April 2017 at 11:32, vkulichenko <valentin.kuliche...@gmail.com>
> wrote:
>
>> Anil,
>>
>> I'm not sure I understand what is the issue in the first place. What is
>> not
>> working?
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/2-0-tp11487p11640.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: 2.0

2017-04-02 Thread Anil
Hi Val,

The nodes are not joining the cluster. I just shared the logs to see whether
they contain any useful info.

Thanks

On 2 April 2017 at 11:32, vkulichenko <valentin.kuliche...@gmail.com> wrote:

> Anil,
>
> I'm not sure I understand what is the issue in the first place. What is not
> working?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/2-0-tp11487p11640.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: 2.0

2017-03-31 Thread Anil
Thank you Denis and Val.

I will post the debug logs. thanks.

On 31 March 2017 at 22:02, Denis Magda <dma...@apache.org> wrote:

> Anil,
>
> Try to connect to the S3 bucket you specified in the configuration
>
> 
>
> and check you see there IP address the cluster nodes should have written
> there.
>
> If there is no address then something goes wrong with networking. This
> message proves this:
> 2017-03-29 10:23:39 WARN  MacAddressUtil:136 - Failed to find a usable
> hardware address from the network interfaces; using random bytes:
> c6:25:2b:74:0a:78:fc:66
>
> Also I would suggest enabling DEBUG level for loggers.
>
> —
> Denis
>
> On Mar 31, 2017, at 9:28 AM, Anil <anilk...@gmail.com> wrote:
>
> I will look into netty documentation.
>
> i see following log -
>
> 2017-03-31 10:04:55 INFO  TcpDiscoverySpi:475 - Successfully bound to TCP
> port [port=47500, localHost=0.0.0.0/0.0.0.0, locNodeId=3cd86374-c96c-4961-
> bf9f-fc058f89ddc0]
> 2017-03-31 10:04:55 WARN  TcpDiscoveryS3IpFinder:480 - Amazon client
> configuration is not set (will use default).
>
> Was it Ok ?
>
> On 30 March 2017 at 23:29, vkulichenko <valentin.kuliche...@gmail.com>
> wrote:
>
>> Anil,
>>
>> So it's coming from vertx then. I would refer to their documentation and
>> other resources to understand why this warning is shown (my guess is that
>> it's specific to AWS). As I said, Ignite node is started successfully.
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/2-0-tp11487p11580.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
>


Re: 2.0

2017-03-31 Thread Anil
I will look into netty documentation.

i see following log -

2017-03-31 10:04:55 INFO  TcpDiscoverySpi:475 - Successfully bound to TCP
port [port=47500, localHost=0.0.0.0/0.0.0.0,
locNodeId=3cd86374-c96c-4961-bf9f-fc058f89ddc0]
2017-03-31 10:04:55 WARN  TcpDiscoveryS3IpFinder:480 - Amazon client
configuration is not set (will use default).

Was it Ok ?

On 30 March 2017 at 23:29, vkulichenko <valentin.kuliche...@gmail.com>
wrote:

> Anil,
>
> So it's coming from vertx then. I would refer to their documentation and
> other resources to understand why this warning is shown (my guess is that
> it's specific to AWS). As I said, Ignite node is started successfully.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/2-0-tp11487p11580.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


ScanQuery performance

2017-03-30 Thread Anil
HI ,

When I run a scan query for an export operation, other SQL queries become
very slow.

Are there any known ScanQuery performance issues?

Thanks


Re: 2.0

2017-03-30 Thread Anil
Hi Val,

No, MacAddressUtil is not from my application; it comes from the netty-common
jar, a dependency of vertx-core. I am deploying the Ignite application as a
microservice using vertx-ignite.

Thanks

On 29 March 2017 at 23:41, vkulichenko <valentin.kuliche...@gmail.com>
wrote:

> Hi Anil,
>
> What is the MacAddressUtil class? I can't find it in any of Ignite
> dependencies, is it coming from your code? In any case, it looks like the
> node is started, so there are no issues with discovery.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/2-0-tp11487p11552.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: 2.0

2017-03-29 Thread Anil
Hi,

I tried to use the latest ignite-aws (with vertx-ignite) and the nodes are
not joining the cluster.

[The Spring XML snippet with the discovery configuration was stripped by the
list archive.]

logs :

2017-03-29 10:23:39 INFO  GridDiscoveryManager:475 - Topology snapshot
[ver=1, servers=1, clients=0, CPUs=4, heap=3.5GB]
2017-03-29 10:23:39 INFO  GridCacheProcessor:475 - Started cache
[name=__vertx.haInfo, mode=PARTITIONED]
2017-03-29 10:23:39 INFO  GridCachePartitionExchangeManager:475 - Skipping
rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=1,
minorTopVer=1], evt=DISCOVERY_CUSTOM_EVT,
node=d97a7af4-8711-479b-b90e-e9f085d3dc6b]
2017-03-29 10:23:39 INFO  GridCacheProcessor:475 - Started cache
[name=__vertx.subs, mode=PARTITIONED]
2017-03-29 10:23:39 WARN  GridEventStorageManager:480 - Added listener for
disabled event type: CACHE_OBJECT_REMOVED
2017-03-29 10:23:39 INFO  GridCachePartitionExchangeManager:475 - Skipping
rebalancing (nothing scheduled) [top=AffinityTopologyVersion [topVer=1,
minorTopVer=2], evt=DISCOVERY_CUSTOM_EVT,
node=d97a7af4-8711-479b-b90e-e9f085d3dc6b]
2017-03-29 10:23:39 WARN  MacAddressUtil:136 - Failed to find a usable
hardware address from the network interfaces; using random bytes:
c6:25:2b:74:0a:78:fc:66


Are any other configurations required? Did I miss anything? Please advise.

Thanks
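
The XML configuration above was stripped by the mail archive, so here is a
hedged Java equivalent of an AWS discovery setup with ignite-aws. The bucket
name and credentials are placeholders, and every node (including the
vertx-ignite nodes) would have to point at the same bucket for them to join
one cluster:

```java
import com.amazonaws.auth.BasicAWSCredentials;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder;

public class AwsNode {
    public static void main(String[] args) {
        // S3-based IP finder: nodes register their addresses in a shared bucket.
        TcpDiscoveryS3IpFinder ipFinder = new TcpDiscoveryS3IpFinder();
        ipFinder.setBucketName("my-ignite-discovery-bucket"); // placeholder
        ipFinder.setAwsCredentials(
            new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY")); // placeholders

        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        discoSpi.setIpFinder(ipFinder);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoSpi);

        Ignition.start(cfg);
    }
}
```

If nodes still don't join, the usual suspects are security-group rules
blocking the discovery/communication ports and nodes writing to different
buckets or AWS regions.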

On 28 March 2017 at 22:49, Denis Magda  wrote:

> Lea,
>
> If you can’t wait for 2.0 release I would suggest you pick my commit,
> merge it to your fork of Ignite 1.9 release and build it from sources.
>
> Does it work for you?
>
> —
> Denis
>
> On Mar 28, 2017, at 1:21 AM, Lea Thurman  wrote:
>
> Thanks Pavel
>
> Would it be worth us reverting to an earlier release. Any idea when it was
> introduced?
>
> Regards
> Lea Thurman.
>
> On 28 March 2017 at 08:46, Pavel Tupitsyn  wrote:
>
>> According to the dev list thread (http://apache-ignite-develope
>> rs.2346864.n4.nabble.com/Apache-Ignite-2-0-Release-td15690.html),
>> you can expect 2.0 by the end of the April.
>>
>>
>> On Tue, Mar 28, 2017 at 10:41 AM, Lea Thurman 
>> wrote:
>>
>>> Hi all,
>>>
>>> We have upgraded to 1.9 and noticed the following issue:
>>>
>>> https://issues.apache.org/jira/browse/IGNITE-4858
>>>
>>> I understand this is to be fixed in 2.0.
>>>
>>> Is there any indicated when this is planned to be released?
>>>
>>> Regards
>>> Lea Thurman
>>>
>>> --
>>> *Lea Thurman*
>>> OneSoon Limited
>>> Manchester Business Park
>>> 3000 Aviator Way
>>> Manchester M22 5TG
>>>
>>> mob:   +44 (0) 7545 828 526 <+44+(0)+7545+828+526>
>>> tel:  +44 (0) 333 666 7366
>>> email:  lea.thur...@adalyser.com 
>>> web:www.adalyser.com
>>>
>>> *Adalyser* is a registered trademark and trading name of OneSoon Limited
>>> *OneSoon* is registered in England and Wales Company Number 04746025
>>>
>>
>>
>
>
> --
> *Lea Thurman*
> OneSoon Limited
> Manchester Business Park
> 3000 Aviator Way
> Manchester M22 5TG
>
> mob:   +44 (0) 7545 828 526 <+44+(0)+7545+828+526>
> tel:  +44 (0) 333 666 7366
> email:  lea.thur...@adalyser.com 
> web:www.adalyser.com
>
> *Adalyser* is a registered trademark and trading name of OneSoon Limited
> *OneSoon* is registered in England and Wales Company Number 04746025
>
>
>


Re: IGNITE-4106

2017-03-23 Thread Anil
I will try with sample program and share with you. Thanks.

On 23 March 2017 at 17:53, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Hi Anil,
>
> I can't reproduce this issue. Would you please share a repro?
>
> On Wed, Mar 22, 2017 at 9:04 AM, Anil <anilk...@gmail.com> wrote:
>
>> HI Andrey,
>>
>> i have two records for my query.
>> i did not see same results if i hit the same query number times. Results
>> in number of records are empty, 1, 2.
>>
>> Thanks
>>
>>
>>
>>
>> On 22 March 2017 at 10:49, Andrey Mashenkov <andrey.mashen...@gmail.com>
>> wrote:
>>
>>> Hi Anil,
>>>
>>> What do you mean "the results are not same"? It looks like query should
>>> return a single row.
>>> If there would be more than one row in result and order is not specified
>>> in query, then it is possible to get rows in different order due to data
>>> transferred from other nodes asynchronously.
>>>
>>>
>>>
>>>
>>>
>>> On Tue, Mar 21, 2017 at 7:02 AM, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Hi Andrew,
>>>>
>>>> #1 - it is very simple select query - select * from person where
>>>> personid = 'something';
>>>> i just ran the query in for loop and noticed the results are not same.
>>>>
>>>> #2 - it is stable topology. swap is configured. but this test was done
>>>> when full load is completed and some compute job going on for other cache.
>>>>
>>>> Please let me know if you have any questions. thanks.
>>>>
>>>> Thanks.
>>>>
>>>> On 20 March 2017 at 21:07, Andrey Mashenkov <andrey.mashen...@gmail.com
>>>> > wrote:
>>>>
>>>>> Hi Anil,
>>>>>
>>>>> 1. Would you please share sql query text?
>>>>>
>>>>> 2. Is it happening on unstable topology or during rebalancing? Or may
>>>>> be eviction\expire policy or swap is configured?
>>>>>
>>>>> On Mon, Mar 20, 2017 at 5:41 PM, Anil <anilk...@gmail.com> wrote:
>>>>>
>>>>>> Yes. i am using partition cache only with no joins :)
>>>>>>
>>>>>> how about #2 ?
>>>>>>
>>>>>> On 20 March 2017 at 19:20, Andrey Mashenkov <
>>>>>> andrey.mashen...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi Anil,
>>>>>>>
>>>>>>> I should although mention that Replicated caches can participate in
>>>>>>> same query with partitioned caches regardless a degree of parallelizm.
>>>>>>> This limitation relates to partitioned caches only.
>>>>>>>
>>>>>>> On Mon, Mar 20, 2017 at 3:54 PM, Andrey Mashenkov <
>>>>>>> andrey.mashen...@gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Anil,
>>>>>>>>
>>>>>>>> It is ok. Doc says *"If a query contains JOINs, then all the
>>>>>>>> participating caches must have the same degree of parallelism.".*
>>>>>>>> Possibly, it is easy to fix but there can be unobvious limitations,
>>>>>>>> so we need a time to make a POC.
>>>>>>>> I believe, it will be fixed in future releases.
>>>>>>>>
>>>>>>>> On Mon, Mar 20, 2017 at 1:11 PM, Anil <anilk...@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Andrey,
>>>>>>>>>
>>>>>>>>> I see few more issues with IGNITE-4826
>>>>>>>>>
>>>>>>>>> 1. queryParallelism should be used for all caches for which
>>>>>>>>> queries are used other it throws following exception.
>>>>>>>>>
>>>>>>>>> Caused by: java.sql.SQLException: Failed to query Ignite.
>>>>>>>>> at org.apache.ignite.internal.jdb
>>>>>>>>> c2.JdbcStatement.executeQuery(JdbcStatement.java:131)
>>>>>>>>> at org.apache.ignite.internal.jdb
>>>>>>>>> c2.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.
>>>>>>>>> java:76)
>>>>>>>>> at org.apache.commons.dbcp2.Deleg
>>>>>>>>> atingPreparedStatement.executeQuery(DelegatingPreparedStatem
>>>>>>>>> ent.java:83)
>>>>>>>>> at org.apache.commons.dbcp2.Deleg
>>>>>>>>> atingPreparedStatement.executeQuery(DelegatingPreparedStatem
>>>>>>>>> ent.java:83)
>>>>>>>>>
>>>>>>>>> Caused by: javax.cache.CacheException: class
>>>>>>>>> org.apache.ignite.IgniteException: Using indexes with different
>>>>>>>>> parallelism levels in same query is forbidden.
>>>>>>>>> at org.apache.ignite.internal.pro
>>>>>>>>> cessors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:760)
>>>>>>>>> at org.apache.ignite.internal.jdb
>>>>>>>>> c2.JdbcQueryTask.call(JdbcQueryTask.java:161)
>>>>>>>>> at org.apache.ignite.internal.jdb
>>>>>>>>> c2.JdbcStatement.executeQuery(JdbcStatement.java:116)
>>>>>>>>> ... 13 more
>>>>>>>>> 2. query is not returning same result if it is hit number of times.
>>>>>>>>>
>>>>>>>>> please let me know if these are known issues.
>>>>>>>>>
>>>>>>>>> Thanks
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Best regards,
>>>>>>>> Andrey V. Mashenkov
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Best regards,
>>>>>>> Andrey V. Mashenkov
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Best regards,
>>>>> Andrey V. Mashenkov
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Andrey V. Mashenkov
>>>
>>
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Re: IGNITE-4106

2017-03-22 Thread Anil
Hi Andrey,

I have two records matching my query.
I do not see the same results when I run the same query a number of times;
the result sizes vary among 0, 1, and 2 records.

Thanks




On 22 March 2017 at 10:49, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Hi Anil,
>
> What do you mean "the results are not same"? It looks like query should
> return a single row.
> If there would be more than one row in result and order is not specified
> in query, then it is possible to get rows in different order due to data
> transferred from other nodes asynchronously.
>
>
>
>
>
> On Tue, Mar 21, 2017 at 7:02 AM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Andrew,
>>
>> #1 - it is very simple select query - select * from person where personid
>> = 'something';
>> i just ran the query in for loop and noticed the results are not same.
>>
>> #2 - it is stable topology. swap is configured. but this test was done
>> when full load is completed and some compute job going on for other cache.
>>
>> Please let me know if you have any questions. thanks.
>>
>> Thanks.
>>
>> On 20 March 2017 at 21:07, Andrey Mashenkov <andrey.mashen...@gmail.com>
>> wrote:
>>
>>> Hi Anil,
>>>
>>> 1. Would you please share sql query text?
>>>
>>> 2. Is it happening on unstable topology or during rebalancing? Or may be
>>> eviction\expire policy or swap is configured?
>>>
>>> On Mon, Mar 20, 2017 at 5:41 PM, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Yes. i am using partition cache only with no joins :)
>>>>
>>>> how about #2 ?
>>>>
>>>> On 20 March 2017 at 19:20, Andrey Mashenkov <andrey.mashen...@gmail.com
>>>> > wrote:
>>>>
>>>>> Hi Anil,
>>>>>
>>>>> I should although mention that Replicated caches can participate in
>>>>> same query with partitioned caches regardless a degree of parallelizm.
>>>>> This limitation relates to partitioned caches only.
>>>>>
>>>>> On Mon, Mar 20, 2017 at 3:54 PM, Andrey Mashenkov <
>>>>> andrey.mashen...@gmail.com> wrote:
>>>>>
>>>>>> Hi Anil,
>>>>>>
>>>>>> It is ok. Doc says *"If a query contains JOINs, then all the
>>>>>> participating caches must have the same degree of parallelism.".*
>>>>>> Possibly, it is easy to fix but there can be unobvious limitations,
>>>>>> so we need a time to make a POC.
>>>>>> I believe, it will be fixed in future releases.
>>>>>>
>>>>>> On Mon, Mar 20, 2017 at 1:11 PM, Anil <anilk...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi Andrey,
>>>>>>>
>>>>>>> I see few more issues with IGNITE-4826
>>>>>>>
>>>>>>> 1. queryParallelism should be used for all caches for which queries
>>>>>>> are used other it throws following exception.
>>>>>>>
>>>>>>> Caused by: java.sql.SQLException: Failed to query Ignite.
>>>>>>> at org.apache.ignite.internal.jdb
>>>>>>> c2.JdbcStatement.executeQuery(JdbcStatement.java:131)
>>>>>>> at org.apache.ignite.internal.jdb
>>>>>>> c2.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:76)
>>>>>>> at org.apache.commons.dbcp2.Deleg
>>>>>>> atingPreparedStatement.executeQuery(DelegatingPreparedStatem
>>>>>>> ent.java:83)
>>>>>>> at org.apache.commons.dbcp2.Deleg
>>>>>>> atingPreparedStatement.executeQuery(DelegatingPreparedStatem
>>>>>>> ent.java:83)
>>>>>>>
>>>>>>> Caused by: javax.cache.CacheException: class
>>>>>>> org.apache.ignite.IgniteException: Using indexes with different
>>>>>>> parallelism levels in same query is forbidden.
>>>>>>> at org.apache.ignite.internal.pro
>>>>>>> cessors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:760)
>>>>>>> at org.apache.ignite.internal.jdb
>>>>>>> c2.JdbcQueryTask.call(JdbcQueryTask.java:161)
>>>>>>> at org.apache.ignite.internal.jdb
>>>>>>> c2.JdbcStatement.executeQuery(JdbcStatement.java:116)
>>>>>>> ... 13 more
>>>>>>> 2. query is not returning same result if it is hit number of times.
>>>>>>>
>>>>>>> please let me know if these are known issues.
>>>>>>>
>>>>>>> Thanks
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Best regards,
>>>>>> Andrey V. Mashenkov
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Best regards,
>>>>> Andrey V. Mashenkov
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Andrey V. Mashenkov
>>>
>>
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Re: IGNITE-4106

2017-03-20 Thread Anil
Hi Andrey,

#1 - it is a very simple select query: select * from person where personid =
'something';
I just ran the query in a for loop and noticed the results are not the same.

#2 - the topology is stable and swap is configured, but this test was done
after the full load had completed, while a compute job was running on another
cache.

Please let me know if you have any questions.

Thanks.

On 20 March 2017 at 21:07, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Hi Anil,
>
> 1. Would you please share sql query text?
>
> 2. Is it happening on unstable topology or during rebalancing? Or may be
> eviction\expire policy or swap is configured?
>
> On Mon, Mar 20, 2017 at 5:41 PM, Anil <anilk...@gmail.com> wrote:
>
>> Yes. i am using partition cache only with no joins :)
>>
>> how about #2 ?
>>
>> On 20 March 2017 at 19:20, Andrey Mashenkov <andrey.mashen...@gmail.com>
>> wrote:
>>
>>> Hi Anil,
>>>
>>> I should although mention that Replicated caches can participate in same
>>> query with partitioned caches regardless a degree of parallelizm.
>>> This limitation relates to partitioned caches only.
>>>
>>> On Mon, Mar 20, 2017 at 3:54 PM, Andrey Mashenkov <
>>> andrey.mashen...@gmail.com> wrote:
>>>
>>>> Hi Anil,
>>>>
>>>> It is ok. Doc says *"If a query contains JOINs, then all the
>>>> participating caches must have the same degree of parallelism.".*
>>>> Possibly, it is easy to fix but there can be unobvious limitations, so
>>>> we need a time to make a POC.
>>>> I believe, it will be fixed in future releases.
>>>>
>>>> On Mon, Mar 20, 2017 at 1:11 PM, Anil <anilk...@gmail.com> wrote:
>>>>
>>>>> Hi Andrey,
>>>>>
>>>>> I see few more issues with IGNITE-4826
>>>>>
>>>>> 1. queryParallelism should be used for all caches for which queries
>>>>> are used other it throws following exception.
>>>>>
>>>>> Caused by: java.sql.SQLException: Failed to query Ignite.
>>>>> at org.apache.ignite.internal.jdb
>>>>> c2.JdbcStatement.executeQuery(JdbcStatement.java:131)
>>>>> at org.apache.ignite.internal.jdb
>>>>> c2.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:76)
>>>>> at org.apache.commons.dbcp2.Deleg
>>>>> atingPreparedStatement.executeQuery(DelegatingPreparedStatem
>>>>> ent.java:83)
>>>>> at org.apache.commons.dbcp2.Deleg
>>>>> atingPreparedStatement.executeQuery(DelegatingPreparedStatem
>>>>> ent.java:83)
>>>>>
>>>>> Caused by: javax.cache.CacheException: class
>>>>> org.apache.ignite.IgniteException: Using indexes with different
>>>>> parallelism levels in same query is forbidden.
>>>>> at org.apache.ignite.internal.pro
>>>>> cessors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:760)
>>>>> at org.apache.ignite.internal.jdb
>>>>> c2.JdbcQueryTask.call(JdbcQueryTask.java:161)
>>>>> at org.apache.ignite.internal.jdb
>>>>> c2.JdbcStatement.executeQuery(JdbcStatement.java:116)
>>>>> ... 13 more
>>>>> 2. query is not returning same result if it is hit number of times.
>>>>>
>>>>> please let me know if these are known issues.
>>>>>
>>>>> Thanks
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Best regards,
>>>> Andrey V. Mashenkov
>>>>
>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Andrey V. Mashenkov
>>>
>>
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Re: IGNITE-4106

2017-03-20 Thread Anil
Yes, I am using partitioned caches only, with no joins :)

How about #2?

On 20 March 2017 at 19:20, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Hi Anil,
>
> I should although mention that Replicated caches can participate in same
> query with partitioned caches regardless a degree of parallelizm.
> This limitation relates to partitioned caches only.
>
> On Mon, Mar 20, 2017 at 3:54 PM, Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi Anil,
>>
>> It is ok. Doc says *"If a query contains JOINs, then all the
>> participating caches must have the same degree of parallelism.".*
>> Possibly, it is easy to fix but there can be unobvious limitations, so we
>> need a time to make a POC.
>> I believe, it will be fixed in future releases.
>>
>> On Mon, Mar 20, 2017 at 1:11 PM, Anil <anilk...@gmail.com> wrote:
>>
>>> Hi Andrey,
>>>
>>> I see few more issues with IGNITE-4826
>>>
>>> 1. queryParallelism should be used for all caches for which queries are
>>> used other it throws following exception.
>>>
>>> Caused by: java.sql.SQLException: Failed to query Ignite.
>>> at org.apache.ignite.internal.jdbc2.JdbcStatement.executeQuery(
>>> JdbcStatement.java:131)
>>> at org.apache.ignite.internal.jdbc2.JdbcPreparedStatement.execu
>>> teQuery(JdbcPreparedStatement.java:76)
>>> at org.apache.commons.dbcp2.DelegatingPreparedStatement.execute
>>> Query(DelegatingPreparedStatement.java:83)
>>> at org.apache.commons.dbcp2.DelegatingPreparedStatement.execute
>>> Query(DelegatingPreparedStatement.java:83)
>>>
>>> Caused by: javax.cache.CacheException: class
>>> org.apache.ignite.IgniteException: Using indexes with different
>>> parallelism levels in same query is forbidden.
>>> at org.apache.ignite.internal.processors.cache.IgniteCacheProxy
>>> .query(IgniteCacheProxy.java:760)
>>> at org.apache.ignite.internal.jdbc2.JdbcQueryTask.call(JdbcQuer
>>> yTask.java:161)
>>> at org.apache.ignite.internal.jdbc2.JdbcStatement.executeQuery(
>>> JdbcStatement.java:116)
>>> ... 13 more
>>> 2. query is not returning same result if it is hit number of times.
>>>
>>> please let me know if these are known issues.
>>>
>>> Thanks
>>>
>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Re: IGNITE-4106

2017-03-20 Thread Anil
Hi Andrey,

I see a few more issues with IGNITE-4826:

1. queryParallelism must be set to the same value on all caches that
participate in queries; otherwise it throws the following exception.

Caused by: java.sql.SQLException: Failed to query Ignite.
at
org.apache.ignite.internal.jdbc2.JdbcStatement.executeQuery(JdbcStatement.java:131)
at
org.apache.ignite.internal.jdbc2.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:76)
at
org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:83)
at
org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:83)

Caused by: javax.cache.CacheException: class
org.apache.ignite.IgniteException: Using indexes with different parallelism
levels in same query is forbidden.
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:760)
at
org.apache.ignite.internal.jdbc2.JdbcQueryTask.call(JdbcQueryTask.java:161)
at
org.apache.ignite.internal.jdbc2.JdbcStatement.executeQuery(JdbcStatement.java:116)
... 13 more
2. The query does not return the same result when it is run a number of
times.

Please let me know if these are known issues.

Thanks
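
For issue #1 above, the workaround implied by the error message is to
configure the same parallelism on every cache that can appear in a SQL query.
A sketch under that assumption; the cache names are placeholders, and the
exact setter name should be verified against the Ignite version in use (in
2.x it is CacheConfiguration#setQueryParallelism):

```java
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class CacheConfigs {
    // Same parallelism on every queried cache avoids
    // "Using indexes with different parallelism levels in same query is forbidden".
    static <K, V> CacheConfiguration<K, V> queryCache(String name, int parallelism) {
        CacheConfiguration<K, V> ccfg = new CacheConfiguration<>(name);
        ccfg.setQueryParallelism(parallelism); // verify the setter for your version
        return ccfg;
    }

    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCacheConfiguration(
            queryCache("PERSON_CACHE", 4),  // placeholder names
            queryCache("DETAIL_CACHE", 4)); // same degree everywhere
    }
}
```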


Re: IGNITE-4106

2017-03-18 Thread Anil
Hi,

Which branch corresponds to the ignite-1.9.0 release? I want to apply the fix
from the JIRA ticket and test it.

The ignite-1.9.0 branch's pom says 2.0.0-SNAPSHOT.

Thanks

On 15 March 2017 at 23:26, Anil <anilk...@gmail.com> wrote:

> Hi Andrey,
>
> Thank you.
>
> I see it as Path Available. You guys are quick. I will test the fix
> tomorrow.
>
> Thanks.
>
> On 15 March 2017 at 20:58, Andrey Mashenkov <andrey.mashen...@gmail.com>
> wrote:
>
>> Hi Anil,
>>
>> It is a bug. Error occurs when entry has evicted from cache.
>> I've create a ticket IGNITE-4826 [1].
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-4826
>>
>>
>> On Wed, Mar 15, 2017 at 10:22 AM, Anil <anilk...@gmail.com> wrote:
>>
>>> Hi Val and Andrey,
>>>
>>> I am seeing exception with following code as well. Not sure why is not
>>> reproduced at your end.,
>>>
>>> Ignite ignite = Ignition.start(new File("/workspace/cache-manager
>>> /test-parallelism/src/main/resources/ignite.xml").toURI().toURL());
>>> IgniteCache<String, Test> cache = ignite.cache("TEST_CACHE");
>>> IgniteDataStreamer<String, Test> streamer =
>>> ignite.dataStreamer("TEST_CACHE");
>>> for (int i =1; i< 10; i++){
>>> streamer.addData(String.valueOf(i), new Test("1", "1"));
>>> }
>>>
>>> Exception :
>>>
>>> 2017-03-15 12:46:43 ERROR DataStreamerImpl:495 - DataStreamer operation
>>> failed.
>>> class org.apache.ignite.IgniteCheckedException: Failed to finish
>>> operation (too many remaps): 32
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl$5.apply(DataStreamerImpl.java:863)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl$5.apply(DataStreamerImpl.java:828)
>>> at org.apache.ignite.internal.util.future.GridFutureAdapter$Arr
>>> ayListener.apply(GridFutureAdapter.java:456)
>>> at org.apache.ignite.internal.util.future.GridFutureAdapter$Arr
>>> ayListener.apply(GridFutureAdapter.java:439)
>>> at org.apache.ignite.internal.util.future.GridFutureAdapter.not
>>> ifyListener(GridFutureAdapter.java:271)
>>> at org.apache.ignite.internal.util.future.GridFutureAdapter.not
>>> ifyListeners(GridFutureAdapter.java:259)
>>> at org.apache.ignite.internal.util.future.GridFutureAdapter.onD
>>> one(GridFutureAdapter.java:389)
>>> at org.apache.ignite.internal.util.future.GridFutureAdapter.onD
>>> one(GridFutureAdapter.java:355)
>>> at org.apache.ignite.internal.util.future.GridFutureAdapter.onD
>>> one(GridFutureAdapter.java:343)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl$Buffer$2.apply(DataStreamerImpl.java:1564)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl$Buffer$2.apply(DataStreamerImpl.java:1554)
>>> at org.apache.ignite.internal.util.future.GridFutureAdapter.not
>>> ifyListener(GridFutureAdapter.java:271)
>>> at org.apache.ignite.internal.util.future.GridFutureAdapter.lis
>>> ten(GridFutureAdapter.java:228)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl$Buffer.localUpdate(DataStreamerImpl.java:1554)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl$Buffer.submit(DataStreamerImpl.java:1626)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl$Buffer.update(DataStreamerImpl.java:1416)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl.load0(DataStreamerImpl.java:932)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl.access$1100(DataStreamerImpl.java:121)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl$5$1.run(DataStreamerImpl.java:876)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl$5$2.call(DataStreamerImpl.java:903)
>>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>>> merImpl$5$2.call(DataStreamerImpl.java:891)
>>> at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader
>>> (IgniteUtils.java:6618)
>>> at org.apache.ignite.internal.processors.closure.GridClosurePro
>>> cessor$2.body(GridClosureProcessor.java:925)
>>> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWo
>>> rker.java:110)
>>> at java.util.concurrent.ThreadPoolExecutor.runWo

Re: Lock and Transaction

2017-03-17 Thread Anil
Thank you very much, Nikolai. This is very helpful.

Thanks.

On 17 March 2017 at 17:54, Nikolai Tikhonov <ntikho...@apache.org> wrote:

> Yes, Ignite transactions allow you to avoid inconsistency. It also means
> you don't need to do extra work; Ignite will do it. :) Using
> IgniteCache#lock is similar to implementing your own transactions, which is
> not needed in your case; let Ignite do it. The Ignite Transaction API also
> lets you tweak the configuration for the best performance: if you are sure
> that per-person operations have low contention, you can use OPTIMISTIC
> transactions for better performance; otherwise choose PESSIMISTIC
> transactions. Explicit locks don't provide this flexibility. Ignite
> transactions also have a deadlock detection mechanism that is very useful
> for diagnosing lock conflicts. In my opinion, transactions are better than
> explicit locks in 99% of cases.
>
> Thanks,
> Nikolay
>
> On Fri, Mar 17, 2017 at 3:02 PM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Nikolai,
>>
>> thanks.
>>
>> You mean personCache.get(personId); inside transaction would avoid
>> concurrent access for person id entries in both person cache and detail
>> cache ?
>>
>> Thanks.
>>
>> On 17 March 2017 at 17:24, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>>
>>> Anil,
>>>
>>> You can just enlisted entry with personId in transaction and you don't
>>> need to use explicit lock.
>>>
>>> personCache = ignite.cache("PERSON_CACHE");
>>> detailCache = ignite.cache("DETAIL_CACHE");
>>>
>>> try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC,
>>> REPEATABLE_READ)) {
>>> // On this step will be acquired lock on personId
>>> // and only one thread in grid will execute code bellow.
>>> personCache.get(personId);
>>> detailCache.put(...)
>>>
>>> // cache put,remove,invoke and etc.
>>> tx.commit();
>>> }
>>>
>>> Ignite cross-cache transactions has ACID guarantee. I would recommend
>>> use this approach.
>>>
>>> On Fri, Mar 17, 2017 at 2:49 PM, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Hi Nikolai,
>>>>
>>>> I need to perform cross cache updates in case of person message and
>>>> detail message from kafka.
>>>>
>>>> that cross cache updates happens based on person Id which is person
>>>> cache key. I am using explicit lock on person cache for personId and
>>>> avoiding the parallel cross cache operations for the same person id.
>>>>
>>>> Please let me know if you have any questions. thanks.
>>>>
>>>> On 17 March 2017 at 15:30, Nikolai Tikhonov <ntikho...@apache.org>
>>>> wrote:
>>>>
>>>>> Hi Anil!
>>>>>
>>>>> Ignite Transaction API allowed to achieve your requirements. It's
>>>>> allow to avoid using explicit lock. Could you describe why do you need use
>>>>> explicit lock in your case?
>>>>>
>>>>> On Fri, Mar 17, 2017 at 6:46 AM, Anil <anilk...@gmail.com> wrote:
>>>>>
>>>>>> Hi Nikolai,
>>>>>>
>>>>>> Thanks for response. in my usecase, i need to control incoming
>>>>>> messages so that no two messages of personId process in parallel. i 
>>>>>> believe
>>>>>> this can achieved with both explicit locks and entry processor.
>>>>>>
>>>>>> Can entryprocessor support cross cache atomicity ?
>>>>>>
>>>>>> Thanks
>>>>>>
>>>>>> On 17 March 2017 at 01:29, Nikolai Tikhonov <ntikho...@apache.org>
>>>>>> wrote:
>>>>>>
>>>>>>> Anil,
>>>>>>>
>>>>>>> Yes, you're right that atomic cache doesn't support explicit locks.
>>>>>>>
>>>>>>> >I am not sure how entryprocessor invoke behaves in my case. if it
>>>>>>> is single cache update, that is straight forward. and not sure about
>>>>>>> transaction for 2-3 operations for 2 caches.
>>>>>>> EntryProcessor modifies (update/remove/create) only one entry.
>>>>>>>
>>>>>>> >I do 2 to 3 updates/puts between two caches like updating parent
>>>>>>> person 

Re: Lock and Transaction

2017-03-17 Thread Anil
Hi Nikolai,

thanks.

You mean that calling personCache.get(personId) inside the transaction would
prevent concurrent access to that person id's entries in both the person
cache and the detail cache?

Thanks.
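
Filling out the snippet quoted below into a self-contained sketch: the cache
names and the Detail value type are placeholders, and both caches are assumed
to be TRANSACTIONAL. In a PESSIMISTIC/REPEATABLE_READ transaction the first
get() acquires a lock on the key, so concurrent transactions on the same
personId serialize:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

public class CrossCacheUpdate {
    static void apply(Ignite ignite, String personId, Object detail) {
        // Both caches must be configured with CacheAtomicityMode.TRANSACTIONAL.
        IgniteCache<String, Object> personCache = ignite.cache("PERSON_CACHE");
        IgniteCache<String, Object> detailCache = ignite.cache("DETAIL_CACHE");

        try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
            // This get() acquires a lock on personId: only one transaction
            // in the grid proceeds past this line for a given key.
            Object person = personCache.get(personId);

            // Cross-cache update in the same transaction; commits atomically.
            detailCache.put(personId, detail);

            tx.commit(); // the key lock is released on commit (or rollback)
        }
    }
}
```

Because the lock is tied to the transaction, there is no separate unlock step
to forget; a failure before commit() rolls back both caches together.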

On 17 March 2017 at 17:24, Nikolai Tikhonov <ntikho...@apache.org> wrote:

> Anil,
>
> You can just enlisted entry with personId in transaction and you don't
> need to use explicit lock.
>
> personCache = ignite.cache("PERSON_CACHE");
> detailCache = ignite.cache("DETAIL_CACHE");
>
> try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC,
> REPEATABLE_READ)) {
> // On this step will be acquired lock on personId
> // and only one thread in grid will execute code bellow.
> personCache.get(personId);
> detailCache.put(...)
>
> // cache put,remove,invoke and etc.
> tx.commit();
> }
>
> Ignite cross-cache transactions has ACID guarantee. I would recommend use
> this approach.
>
> On Fri, Mar 17, 2017 at 2:49 PM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Nikolai,
>>
>> I need to perform cross cache updates in case of person message and
>> detail message from kafka.
>>
>> that cross cache updates happens based on person Id which is person cache
>> key. I am using explicit lock on person cache for personId and avoiding the
>> parallel cross cache operations for the same person id.
>>
>> Please let me know if you have any questions. thanks.
>>
>> On 17 March 2017 at 15:30, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>>
>>> Hi Anil!
>>>
>>> Ignite Transaction API allowed to achieve your requirements. It's allow
>>> to avoid using explicit lock. Could you describe why do you need use
>>> explicit lock in your case?
>>>
>>> On Fri, Mar 17, 2017 at 6:46 AM, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Hi Nikolai,
>>>>
>>>> Thanks for response. in my usecase, i need to control incoming messages
>>>> so that no two messages of personId process in parallel. i believe this can
>>>> achieved with both explicit locks and entry processor.
>>>>
>>>> Can entryprocessor support cross cache atomicity ?
>>>>
>>>> Thanks
>>>>
>>>> On 17 March 2017 at 01:29, Nikolai Tikhonov <ntikho...@apache.org>
>>>> wrote:
>>>>
>>>>> Anil,
>>>>>
>>>>> Yes, you're right that atomic cache doesn't support explicit locks.
>>>>>
>>>>> >I am not sure how entryprocessor invoke behaves in my case. if it is
>>>>> single cache update, that is straight forward. and not sure about
>>>>> transaction for 2-3 operations for 2 caches.
>>>>> EntryProcessor modifies (update/remove/create) only one entry.
>>>>>
>>>>> >I do 2 to 3 updates/puts between two caches like updating parent
>>>>> person info to child person and creating empty detail info by checking
>>>>> detail cache.
>>>>> Ignite supports cross-cache transactions (one transaction can update
>>>>> to several caches) with support ACID . I think that this feature will be
>>>>> helpful for your.
>>>>>
>>>>> On Thu, Mar 16, 2017 at 8:20 PM, Anil <anilk...@gmail.com> wrote:
>>>>>
>>>>>> Hi Nikolai,
>>>>>>
>>>>>> Thanks for response.
>>>>>>
>>>>>> Distributed locks wont work for atomic caches. correct ? if i
>>>>>> remember it correclty, i see an exception sometime back and then i used
>>>>>> transcational.
>>>>>>
>>>>>> I do 2 to 3 updates/puts between two caches like updating parent
>>>>>> person info to child person and creating empty detail info by checking
>>>>>> detail cache.
>>>>>>
>>>>>> I am not sure how entryprocessor invoke behaves in my case. if it is
>>>>>> single cache update, that is straight forward. and not sure about
>>>>>> transaction for 2-3 operations for 2 caches.
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>> On 16 March 2017 at 22:43, Nikolai Tikhonov <ntikho...@apache.org>
>>>>>> wrote:
>>>>>>
>>>>>>> If your update logic does not contains heavy operations (locks,
>>>>>>> cache puts and etc) and how I see from your comment above this is true 
>>>>>>> you
&

Re: Lock and Transaction

2017-03-17 Thread Anil
Hi Nikolai,

I need to perform cross-cache updates when a person message or a detail
message arrives from Kafka.

These cross-cache updates are keyed by the person id, which is the person
cache key. I take an explicit lock on the person cache for the personId to
prevent parallel cross-cache operations for the same person id.

Please let me know if you have any questions. Thanks.

On 17 March 2017 at 15:30, Nikolai Tikhonov <ntikho...@apache.org> wrote:

> Hi Anil!
>
> Ignite Transaction API allowed to achieve your requirements. It's allow to
> avoid using explicit lock. Could you describe why do you need use explicit
> lock in your case?
>
> On Fri, Mar 17, 2017 at 6:46 AM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Nikolai,
>>
>> Thanks for response. in my usecase, i need to control incoming messages
>> so that no two messages of personId process in parallel. i believe this can
>> achieved with both explicit locks and entry processor.
>>
>> Can entryprocessor support cross cache atomicity ?
>>
>> Thanks
>>
>> On 17 March 2017 at 01:29, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>>
>>> Anil,
>>>
>>> Yes, you're right that atomic cache doesn't support explicit locks.
>>>
>>> >I am not sure how entryprocessor invoke behaves in my case. if it is
>>> single cache update, that is straight forward. and not sure about
>>> transaction for 2-3 operations for 2 caches.
>>> EntryProcessor modifies (update/remove/create) only one entry.
>>>
>>> >I do 2 to 3 updates/puts between two caches like updating parent person
>>> info to child person and creating empty detail info by checking detail
>>> cache.
>>> Ignite supports cross-cache transactions (one transaction can update to
>>> several caches) with support ACID . I think that this feature will be
>>> helpful for your.
>>>
>>> On Thu, Mar 16, 2017 at 8:20 PM, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Hi Nikolai,
>>>>
>>>> Thanks for response.
>>>>
>>>> Distributed locks wont work for atomic caches. correct ? if i remember
>>>> it correclty, i see an exception sometime back and then i used
>>>> transcational.
>>>>
>>>> I do 2 to 3 updates/puts between two caches like updating parent person
>>>> info to child person and creating empty detail info by checking detail
>>>> cache.
>>>>
>>>> I am not sure how entryprocessor invoke behaves in my case. if it is
>>>> single cache update, that is straight forward. and not sure about
>>>> transaction for 2-3 operations for 2 caches.
>>>>
>>>> Thanks.
>>>>
>>>> On 16 March 2017 at 22:43, Nikolai Tikhonov <ntikho...@apache.org>
>>>> wrote:
>>>>
>>>>> If your update logic does not contain heavy operations (locks, cache
>>>>> puts, etc.), and from your comment above this seems to be true, you can
>>>>> use IgniteCache#invoke [1]. The method provides the guarantee you need.
>>>>> You can also change the cache mode to ATOMIC for better performance.
>>>>>
>>>>> 1. https://apacheignite.readme.io/docs/jcache#section-entryprocessor
>>>>>
>>>>> On Thu, Mar 16, 2017 at 7:55 PM, Anil <anilk...@gmail.com> wrote:
>>>>>
>>>>>> Hi Nikolai,
>>>>>>
>>>>>> No. person message and detail message can be executed by different
>>>>>> nodes and both messages executes different logics.
>>>>>>
>>>>>> *Person message* - will update/create person entry into person cache
>>>>>> and check the if any detail entry is available or not in detail cache. if
>>>>>> exists, updates person info to detail entry. if not, creates empty detail
>>>>>> entry
>>>>>> *Detail Message* - will check whether an empty detail entry is available or not.
>>>>>> if yes, delete and create new detail entry with personal info. else
>>>>>> creates/updates the entry.
>>>>>>
>>>>>> To avoid data inconsistency, i created lock on person id so messages
>>>>>> (person and detail) of person id wont run in parallel.
>>>>>>
>>>>>> now, i am trying to achieve atomicity for each message operations.
>>>>>> Hope this is clear.
>>>>>>
>>>>>> Thanks,
>>>>>>
>>>>>> On

Re: Lock and Transaction

2017-03-16 Thread Anil
Hi Nikolai,

Thanks for the response. In my use case, I need to control incoming messages
so that no two messages for the same personId are processed in parallel. I
believe this can be achieved with both explicit locks and an entry processor.

Can EntryProcessor support cross-cache atomicity?

Thanks

On 17 March 2017 at 01:29, Nikolai Tikhonov <ntikho...@apache.org> wrote:

> Anil,
>
> Yes, you're right that atomic cache doesn't support explicit locks.
>
> >I am not sure how entryprocessor invoke behaves in my case. if it is
> single cache update, that is straight forward. and not sure about
> transaction for 2-3 operations for 2 caches.
> EntryProcessor modifies (update/remove/create) only one entry.
>
> >I do 2 to 3 updates/puts between two caches like updating parent person
> info to child person and creating empty detail info by checking detail
> cache.
> Ignite supports cross-cache transactions (one transaction can update
> several caches) with ACID support. I think that this feature will be
> helpful for you.
>
> On Thu, Mar 16, 2017 at 8:20 PM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Nikolai,
>>
>> Thanks for response.
>>
>> Distributed locks won't work for atomic caches, correct? If I remember it
>> correctly, I saw an exception some time back and then I used TRANSACTIONAL.
>>
>> I do 2 to 3 updates/puts between two caches like updating parent person
>> info to child person and creating empty detail info by checking detail
>> cache.
>>
>> I am not sure how entryprocessor invoke behaves in my case. if it is
>> single cache update, that is straight forward. and not sure about
>> transaction for 2-3 operations for 2 caches.
>>
>> Thanks.
>>
>> On 16 March 2017 at 22:43, Nikolai Tikhonov <ntikho...@apache.org> wrote:
>>
>>> If your update logic does not contain heavy operations (locks, cache
>>> puts, etc.), and from your comment above this seems to be true, you can
>>> use IgniteCache#invoke [1]. The method provides the guarantee you need.
>>> You can also change the cache mode to ATOMIC for better performance.
>>>
>>> 1. https://apacheignite.readme.io/docs/jcache#section-entryprocessor
>>>
>>> On Thu, Mar 16, 2017 at 7:55 PM, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Hi Nikolai,
>>>>
>>>> No. person message and detail message can be executed by different
>>>> nodes and both messages executes different logics.
>>>>
>>>> *Person message* - will update/create person entry into person cache
>>>> and check the if any detail entry is available or not in detail cache. if
>>>> exists, updates person info to detail entry. if not, creates empty detail
>>>> entry
>>>> *Detail Message* - will check whether an empty detail entry is available or not.
>>>> if yes, delete and create new detail entry with personal info. else
>>>> creates/updates the entry.
>>>>
>>>> To avoid data inconsistency, i created lock on person id so messages
>>>> (person and detail) of person id wont run in parallel.
>>>>
>>>> now, i am trying to achieve atomicity for each message operations. Hope
>>>> this is clear.
>>>>
>>>> Thanks,
>>>>
>>>> On 16 March 2017 at 22:12, Nikolai Tikhonov <ntikho...@apache.org>
>>>> wrote:
>>>>
>>>>> Hi Anil!
>>>>>
>>>>> If I understood correctly (you need to perform operations on two
>>>>> caches have exclusive lock on personId) then in your case the better way 
>>>>> is
>>>>> using Ignite pessimistic transaction:
>>>>>
>>>>> personCache = ignite.cache("PERSON_CACHE");
>>>>> detailCache = ignite.cache("DETAIL_CACHE");
>>>>>
>>>>> try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC,
>>>>> REPEATABLE_READ)) {
>>>>> // On this step will be acquired lock on personId
>>>>> // and only one thread in grid will execute code bellow.
>>>>> personCache.get(personId);
>>>>>
>>>>> // cache put,remove,invoke and etc.
>>>>> tx.commit();
>>>>> }
>>>>>
>>>>> Does it work for you?
>>>>>
>>>>> On Thu, Mar 16, 2017 at 3:09 PM, Anil <anilk...@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> I need to make sure that entries a
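For reference, the IgniteCache#invoke approach discussed in this thread can be sketched as follows; the cache name, value type, and mutation logic are assumptions for illustration, not code from the thread:

```java
import javax.cache.processor.EntryProcessorException;
import javax.cache.processor.MutableEntry;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheEntryProcessor;

public class InvokeSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<String, String> cache = ignite.getOrCreateCache("PERSON_CACHE");

            // invoke() runs atomically on the primary node for the key, so no
            // two invocations for the same key interleave, but it can only
            // touch this one entry in this one cache.
            cache.invoke("p1", new CacheEntryProcessor<String, String, Void>() {
                @Override public Void process(MutableEntry<String, String> entry, Object... args)
                    throws EntryProcessorException {
                    String cur = entry.getValue();
                    entry.setValue(cur == null ? "created" : cur + "|updated");
                    return null;
                }
            });
        }
    }
}
```

This is why the thread concludes that an EntryProcessor alone cannot give cross-cache atomicity: it mutates a single entry, so coordinated updates across two caches need a transaction instead.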

Re: Lock and Transaction

2017-03-16 Thread Anil
Hi Nikolai,

No. A person message and a detail message can be executed by different
nodes, and the two message types execute different logic.

*Person message* - updates/creates the person entry in the person cache and
checks whether a detail entry exists in the detail cache. If it exists, it
copies the person info into the detail entry; if not, it creates an empty
detail entry.
*Detail message* - checks whether an empty detail entry exists. If yes, it
deletes it and creates a new detail entry with the person info; otherwise it
creates/updates the entry.

To avoid data inconsistency, I take a lock on the person id so that the
person and detail messages for the same person id do not run in parallel.

Now I am trying to achieve atomicity for each message's operations. Hope
this is clear.

Thanks,

On 16 March 2017 at 22:12, Nikolai Tikhonov <ntikho...@apache.org> wrote:

> Hi Anil!
>
> If I understood correctly (you need to perform operations on two caches
> have exclusive lock on personId) then in your case the better way is using
> Ignite pessimistic transaction:
>
> personCache = ignite.cache("PERSON_CACHE");
> detailCache = ignite.cache("DETAIL_CACHE");
>
> try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC,
> REPEATABLE_READ)) {
> // On this step will be acquired lock on personId
> // and only one thread in grid will execute code bellow.
> personCache.get(personId);
>
> // cache put,remove,invoke and etc.
> tx.commit();
> }
>
> Does it work for you?
>
> On Thu, Mar 16, 2017 at 3:09 PM, Anil <anilk...@gmail.com> wrote:
>
>> Hi,
>>
>> I need to make sure that entries are added correctly in two caches.
>>
>> I have two caches  Person and Detail. personId is the key for Person
>> cache and detailedId is the key for Detail cache.
>>
>> Each Detail cache entry would have some information of Person cache entry
>> based on personId. and i am adding entries to caches using Kafka.
>>
>> When Person and Detail messages are processed, order cannot be maintained
>> and processed by different nodes. So to avoid data inconsistency issues - i
>> did following.
>>
>> *Person message :*
>>
>> Lock lock = personCache.lock(personId);
>> lock.lock();
>>
>> // update person operations for both person cache and detail cache
>>
>> lock.unlock();
>>
>> *Detail Message :*
>>
>>
>> Lock lock = detailCache.lock(personId);  // person id from detail message
>> lock.lock();
>>
>> // update person operations for both person cache and detail cache
>>
>> lock.unlock();
>>
>> with this, till one of the message processed for same person Id, other
>> would not acquire lock.
>>
>> now how to maintain the ACID for update operations ? ignite transactions
>> does not work inside lock. Is there anyway to achieve the above usecase
>> with ACID ?
>>
>> Thanks
>>
>>
>


Lock and Transaction

2017-03-16 Thread Anil
Hi,

I need to make sure that entries are added correctly in two caches.

I have two caches, Person and Detail. personId is the key for the Person
cache and detailedId is the key for the Detail cache.

Each Detail cache entry holds some information from the Person cache entry
for the corresponding personId, and I am adding entries to the caches via
Kafka.

When Person and Detail messages are processed, ordering cannot be guaranteed
and they may be processed by different nodes. So, to avoid data inconsistency
issues, I did the following.

*Person message :*

Lock lock = personCache.lock(personId);
lock.lock();

// update person operations for both person cache and detail cache

lock.unlock();

*Detail Message :*


Lock lock = detailCache.lock(personId);  // person id from detail message
lock.lock();

// update person operations for both person cache and detail cache

lock.unlock();

With this, until the message being processed for a given person id finishes,
the other message cannot acquire the lock.

Now, how do I maintain ACID for the update operations? Ignite transactions
do not work inside an explicit lock. Is there any way to achieve the above
use case with ACID?

Thanks
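For reference, the pessimistic-transaction approach suggested later in this thread can be fleshed out into a self-contained sketch. The cache setup, value types, and the exact updates are assumptions for illustration; both caches must be configured as TRANSACTIONAL for this to work:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

public class CrossCacheTxSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<String, String> personCache = ignite.getOrCreateCache(
                new CacheConfiguration<String, String>("PERSON_CACHE")
                    .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));
            IgniteCache<String, String> detailCache = ignite.getOrCreateCache(
                new CacheConfiguration<String, String>("DETAIL_CACHE")
                    .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));

            String personId = "p1";

            try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
                // In a PESSIMISTIC transaction this get() acquires the entry
                // lock, so concurrent handlers of the same personId serialize
                // here without an explicit Lock object.
                String person = personCache.get(personId);

                // Updates to both caches commit (or roll back) atomically.
                personCache.put(personId, person == null ? "created" : person + "|updated");
                detailCache.put("d1", "details for " + personId);

                tx.commit();
            }
        }
    }
}
```

The transaction replaces the explicit-lock pattern above: the locking get() provides the mutual exclusion, and the commit provides the ACID guarantee across both caches.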


Re: IGNITE-4106

2017-03-15 Thread Anil
Hi Andrey,

Thank you.

I see it as Patch Available. You guys are quick. I will test the fix
tomorrow.

Thanks.

On 15 March 2017 at 20:58, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Hi Anil,
>
> It is a bug. Error occurs when entry has evicted from cache.
> I've create a ticket IGNITE-4826 [1].
>
> [1] https://issues.apache.org/jira/browse/IGNITE-4826
>
>
> On Wed, Mar 15, 2017 at 10:22 AM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Val and Andrey,
>>
>> I am seeing exception with following code as well. Not sure why is not
>> reproduced at your end.,
>>
>> Ignite ignite = Ignition.start(new File("/workspace/cache-manager/test-parallelism/src/main/resources/ignite.xml").toURI().toURL());
>> IgniteCache<String, Test> cache = ignite.cache("TEST_CACHE");
>> IgniteDataStreamer<String, Test> streamer = ignite.dataStreamer("TEST_CACHE");
>> for (int i = 1; i < 10; i++) {
>> streamer.addData(String.valueOf(i), new Test("1", "1"));
>> }
>>
>> Exception :
>>
>> 2017-03-15 12:46:43 ERROR DataStreamerImpl:495 - DataStreamer operation
>> failed.
>> class org.apache.ignite.IgniteCheckedException: Failed to finish
>> operation (too many remaps): 32
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$5.apply(DataStreamerImpl.java:863)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$5.apply(DataStreamerImpl.java:828)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter$Arr
>> ayListener.apply(GridFutureAdapter.java:456)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter$Arr
>> ayListener.apply(GridFutureAdapter.java:439)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.not
>> ifyListener(GridFutureAdapter.java:271)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.not
>> ifyListeners(GridFutureAdapter.java:259)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.onD
>> one(GridFutureAdapter.java:389)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.onD
>> one(GridFutureAdapter.java:355)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.onD
>> one(GridFutureAdapter.java:343)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$Buffer$2.apply(DataStreamerImpl.java:1564)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$Buffer$2.apply(DataStreamerImpl.java:1554)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.not
>> ifyListener(GridFutureAdapter.java:271)
>> at org.apache.ignite.internal.util.future.GridFutureAdapter.lis
>> ten(GridFutureAdapter.java:228)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$Buffer.localUpdate(DataStreamerImpl.java:1554)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$Buffer.submit(DataStreamerImpl.java:1626)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$Buffer.update(DataStreamerImpl.java:1416)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.load0(DataStreamerImpl.java:932)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl.access$1100(DataStreamerImpl.java:121)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$5$1.run(DataStreamerImpl.java:876)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$5$2.call(DataStreamerImpl.java:903)
>> at org.apache.ignite.internal.processors.datastreamer.DataStrea
>> merImpl$5$2.call(DataStreamerImpl.java:891)
>> at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader
>> (IgniteUtils.java:6618)
>> at org.apache.ignite.internal.processors.closure.GridClosurePro
>> cessor$2.body(GridClosureProcessor.java:925)
>> at org.apache.ignite.internal.util.worker.GridWorker.run(GridWo
>> rker.java:110)
>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPool
>> Executor.java:1142)
>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoo
>> lExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>> Caused by: class org.apache.ignite.IgniteCheckedException:
>> GridH2QueryContext is not initialized.
>> at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils
>> .java:7239)
>> at org.apache.ignite.internal.processors.closure.GridClosurePro
>> cessor$2.body(GridClosureProcessor.java:933)
>> ... 4 more
>> Caused by: java.

Re: IGNITE-4106

2017-03-15 Thread Anil
e.ignite.internal.processors.cache.GridCacheEvictionManager.touch(GridCacheEvictionManager.java:798)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater.receive(DataStreamerImpl.java:1957)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:140)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6618)
at
org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:925)
... 4 more
Exception in thread "main" java.lang.IllegalStateException: Data streamer
has been closed.
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.enterBusy(DataStreamerImpl.java:406)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addDataInternal(DataStreamerImpl.java:613)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addData(DataStreamerImpl.java:667)
at com.test.cache.loader.TestManager.main(TestManager.java:19)


attached the ignite.xml that is used for test.

Please let me know if you have any questions. thanks.

Thanks

On 15 March 2017 at 11:59, Anil <anilk...@gmail.com> wrote:

> Sure Val. let me try again.
>
> Thanks.
>
> On 14 March 2017 at 20:28, vkulichenko <valentin.kuliche...@gmail.com>
> wrote:
>
>> Hi Anil,
>>
>> I tried to run your project and also didn't get the exception. Please
>> provide exact steps how to run it in order to reproduce the behavior.
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/IGNITE-4106-tp11073p11169.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
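As a general note on the exception in this thread: "Data streamer has been closed" is thrown when addData() is called after the streamer was closed, so a defensive pattern is try-with-resources with an explicit flush before any dependent reads. A minimal sketch (the cache name and values are assumptions):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // The target cache must exist before a streamer is created for it.
            ignite.getOrCreateCache("TEST_CACHE");

            // try-with-resources closes the streamer, which implicitly flushes
            // remaining buffered data; calling addData() after close() throws
            // IllegalStateException("Data streamer has been closed").
            try (IgniteDataStreamer<String, String> streamer = ignite.dataStreamer("TEST_CACHE")) {
                for (int i = 1; i < 10; i++)
                    streamer.addData(String.valueOf(i), "value-" + i);

                streamer.flush(); // block until buffered entries are loaded
            }
        }
    }
}
```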


[Attachment: ignite.xml. A Spring beans configuration whose XML markup was stripped by the archive. The recoverable content includes a TcpDiscoverySpi address list containing 127.0.0.1 and a cache configuration whose indexed types are java.lang.String and com.test.cache.entity.Test.]

Re: IGNITE-4106

2017-03-15 Thread Anil
Sure Val. let me try again.

Thanks.

On 14 March 2017 at 20:28, vkulichenko <valentin.kuliche...@gmail.com>
wrote:

> Hi Anil,
>
> I tried to run your project and also didn't get the exception. Please
> provide exact steps how to run it in order to reproduce the behavior.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/IGNITE-4106-tp11073p11169.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


ScanQuery and Lock

2017-03-14 Thread Anil
HI,

How do a scan query and a lock on the same entry interact?

thread 1 -> takes a lock on an entry
thread 2 -> runs a scan query that reads the same entry

I believe the scan query returns the old entry. Or does it wait until the
lock on the entry is released?

Thanks


Re: IGNITE-4106

2017-03-13 Thread Anil
Hi Andrey,

Did you get a chance to look into the reproducer ? thanks.

Thanks.

On 11 March 2017 at 17:37, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Thanks Anil.
>
> I'll take a look.
>
> 11 марта 2017 г. 9:58 пользователь "Anil" <anilk...@gmail.com> написал:
>
> Hi Andrey,
>
> i am able to reproduce the issue with following reproducer.
>
> https://github.com/adasari/test-ignite-parallelism.git
>
> Please let me know if you see any issue with reproducer.
>
> Thanks
>
> On 11 March 2017 at 11:46, Anil <anilk...@gmail.com> wrote:
>
>> Hi Andrey,
>>
>> I have created a test project which pushes the entries to the data
>> streamer using a for loop, and it works without any issue.
>>
>> But my application which does the same thing but loads the data from
>> hbase and it is failing with *"java.lang.IllegalStateException: Data
>> streamer has been closed." *
>>
>>
>>
>> *java.lang.IllegalStateException: Data streamer has been closed.
>>   at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.enterBusy(DataStreamerImpl.java:406)
>>   at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addDataInternal(DataStreamerImpl.java:613)
>>   at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addData(DataStreamerImpl.java:667)*
>>
>> i set the system and public thread pool as 32 for an 8 core machine and
>> using 4 node cluster.
>>
>> and i see that loading data into cache for which queryParallelism
>> configured is failing and other cache loads working fine.
>>
>> Thanks
>>
>>
>> On 10 March 2017 at 05:05, Andrey Mashenkov <andrey.mashen...@gmail.com>
>> wrote:
>>
>>> Hi Anil,
>>>
>>> I don't think it is thread starvation, since it works fine for 1 thread.
>>> Would you please share a repro?
>>>
>>> On Thu, Mar 9, 2017 at 9:09 PM, Anil <anilk...@gmail.com> wrote:
>>>
>>>>
>>>> Hi Andrey,
>>>>
>>>> i tried to set the parallelism to 2, 4 on 4 node cluster (8 core
>>>> machines) and initiated the data load (using compute job). Data streamers
>>>> are getting closed and data load is failing.
>>>>
>>>> When I set the parallelism to 1, data load working as expected. the
>>>> issue could be threads starvation/ non availability of resources and not
>>>> ignite issue. Do you think any other issue ?
>>>>
>>>> I will try on 16 core machines cluster tomorrow.
>>>>
>>>> Thanks.
>>>>
>>>> On 8 March 2017 at 18:04, Andrey Mashenkov <andrey.mashen...@gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Anil,
>>>>>
>>>>> SQL queries are run in system pool.
>>>>>
>>>>> On Wed, Mar 8, 2017 at 3:04 PM, Anil <anilk...@gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> Does queryparallelism uses system pool or separate thread pool other
>>>>>> than system and public thread pool ? please clarify.
>>>>>>
>>>>>> 1. https://apacheignite.readme.io/docs/sql-performance-and-d
>>>>>> ebugging#sql-performance-and-usability-considerations
>>>>>>
>>>>>> Thanks.
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Best regards,
>>>>> Andrey V. Mashenkov
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Andrey V. Mashenkov
>>>
>>
>>
>
>


Re: IGNITE-4106

2017-03-10 Thread Anil
Hi Andrey,

I am able to reproduce the issue with the following reproducer.

https://github.com/adasari/test-ignite-parallelism.git

Please let me know if you see any issue with reproducer.

Thanks

On 11 March 2017 at 11:46, Anil <anilk...@gmail.com> wrote:

> Hi Andrey,
>
> I have created a test project which pushes the entries to the data
> streamer using a for loop, and it works without any issue.
>
> But my application which does the same thing but loads the data from hbase
> and it is failing with *"java.lang.IllegalStateException: Data streamer
> has been closed." *
>
>
>
> *java.lang.IllegalStateException: Data streamer has been closed.
>   at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.enterBusy(DataStreamerImpl.java:406)
>   at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addDataInternal(DataStreamerImpl.java:613)
>   at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addData(DataStreamerImpl.java:667)*
>
> i set the system and public thread pool as 32 for an 8 core machine and
> using 4 node cluster.
>
> and i see that loading data into cache for which queryParallelism
> configured is failing and other cache loads working fine.
>
> Thanks
>
>
> On 10 March 2017 at 05:05, Andrey Mashenkov <andrey.mashen...@gmail.com>
> wrote:
>
>> Hi Anil,
>>
>> I don't think it is thread starvation, since it works fine for 1 thread.
>> Would you please share a repro?
>>
>> On Thu, Mar 9, 2017 at 9:09 PM, Anil <anilk...@gmail.com> wrote:
>>
>>>
>>> Hi Andrey,
>>>
>>> i tried to set the parallelism to 2, 4 on 4 node cluster (8 core
>>> machines) and initiated the data load (using compute job). Data streamers
>>> are getting closed and data load is failing.
>>>
>>> When I set the parallelism to 1, data load working as expected. the
>>> issue could be threads starvation/ non availability of resources and not
>>> ignite issue. Do you think any other issue ?
>>>
>>> I will try on 16 core machines cluster tomorrow.
>>>
>>> Thanks.
>>>
>>> On 8 March 2017 at 18:04, Andrey Mashenkov <andrey.mashen...@gmail.com>
>>> wrote:
>>>
>>>> Hi Anil,
>>>>
>>>> SQL queries are run in system pool.
>>>>
>>>> On Wed, Mar 8, 2017 at 3:04 PM, Anil <anilk...@gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Does queryparallelism uses system pool or separate thread pool other
>>>>> than system and public thread pool ? please clarify.
>>>>>
>>>>> 1. https://apacheignite.readme.io/docs/sql-performance-and-d
>>>>> ebugging#sql-performance-and-usability-considerations
>>>>>
>>>>> Thanks.
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Best regards,
>>>> Andrey V. Mashenkov
>>>>
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>
>


Re: IGNITE-4106

2017-03-10 Thread Anil
Hi Andrey,

I have created a test project which pushes the entries to the data streamer
using a for loop, and it works without any issue.

But my application which does the same thing but loads the data from hbase
and it is failing with *"java.lang.IllegalStateException: Data streamer has
been closed." *



*java.lang.IllegalStateException: Data streamer has been closed.
  at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.enterBusy(DataStreamerImpl.java:406)
  at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addDataInternal(DataStreamerImpl.java:613)
  at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.addData(DataStreamerImpl.java:667)*

I set the system and public thread pools to 32 on an 8-core machine and am
using a 4-node cluster.

And I see that loading data into the cache for which queryParallelism is
configured fails, while the other cache loads work fine.

Thanks


On 10 March 2017 at 05:05, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Hi Anil,
>
> I don't think it is thread starvation, since it works fine for 1 thread.
> Would you please share a repro?
>
> On Thu, Mar 9, 2017 at 9:09 PM, Anil <anilk...@gmail.com> wrote:
>
>>
>> Hi Andrey,
>>
>> i tried to set the parallelism to 2, 4 on 4 node cluster (8 core
>> machines) and initiated the data load (using compute job). Data streamers
>> are getting closed and data load is failing.
>>
>> When I set the parallelism to 1, data load working as expected. the issue
>> could be threads starvation/ non availability of resources and not ignite
>> issue. Do you think any other issue ?
>>
>> I will try on 16 core machines cluster tomorrow.
>>
>> Thanks.
>>
>> On 8 March 2017 at 18:04, Andrey Mashenkov <andrey.mashen...@gmail.com>
>> wrote:
>>
>>> Hi Anil,
>>>
>>> SQL queries are run in system pool.
>>>
>>> On Wed, Mar 8, 2017 at 3:04 PM, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Does queryparallelism uses system pool or separate thread pool other
>>>> than system and public thread pool ? please clarify.
>>>>
>>>> 1. https://apacheignite.readme.io/docs/sql-performance-and-d
>>>> ebugging#sql-performance-and-usability-considerations
>>>>
>>>> Thanks.
>>>>
>>>>
>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Andrey V. Mashenkov
>>>
>>
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


IGNITE-4106

2017-03-08 Thread Anil
Hi,

Does queryparallelism uses system pool or separate thread pool other than
system and public thread pool ? please clarify.

1.
https://apacheignite.readme.io/docs/sql-performance-and-debugging#sql-performance-and-usability-considerations

Thanks.


Re: [ANNOUNCE] Apache Ignite 1.9.0 Released

2017-03-07 Thread Anil
Thank you Andrey for good news.

On 8 March 2017 at 01:17, Andrey Gura  wrote:

> JFYI
>
> Also today Vert.x 3.4.0 was released with Apache Ignite 1.9 based
> cluster manager for Vert.x in HA/Clustered mode.
>
> On Tue, Mar 7, 2017 at 3:10 AM, Denis Magda  wrote:
> > The Apache Ignite Community is pleased to announce the release of Apache
> Ignite 1.9.0.
> >
> > Apache Ignite In-Memory Data Fabric [1] is a high-performance,
> integrated and distributed in-memory platform for computing and transacting
> on large-scale data sets in real-time, orders of magnitude faster than
> possible with traditional disk-based or flash-based technologies.
> >
> > The Fabric is a collection of independent and well integrated components
> some of which are the following:
> > Data Grid
> > SQL Grid
> > Compute Grid
> > Streaming & CEP
> > Service Grid
> >
> >
> > In this release the community provided an integration with Kubernetes
> cluster manager, improved performance of core and SQL Grid components,
> expanded Data Modification Language support to the level of .NET and C++
> API, integrated with .NET TransactionScope API and more.
> >
> > Learn more details from our blog post: https://blogs.apache.org/
> ignite/entry/apache-ignite-1-9-released
> >
> > The full list of the changes can be found here [2].
> >
> > Please visit this page if you’re ready to try the release out:
> > https://ignite.apache.org/download.cgi
> >
> > Please let us know [3] if you encounter any problems.
> >
> > Regards,
> >
> > The Apache Ignite Community
> >
> > [1] https://ignite.apache.org
> > [2] https://github.com/apache/ignite/blob/master/RELEASE_NOTES.txt
> > [3] https://ignite.apache.org/community/resources.html#ask
>


Re: Running a IgniteRunnable on local node using compute

2017-03-06 Thread Anil
ignite.compute(ignite.cluster().forLocal()).run() will run the ignite
runnable on the local node.

final pseudo code -

ignite.compute().broadcast(new IgniteRunnable() {
@Override
public void run() {

Observable o = Observable.timer(((CommonUtils.getCurrentDayTime() +
24 * 60 * 60 * 1000) - System.currentTimeMillis()), 24 * 60 * 60 *
1000, TimeUnit.MILLISECONDS,
scheduler);
o.subscribe(item -> {
ignite.compute(ignite.cluster().forLocal()).run(new IgniteRunnableTask());
});
}
});



On 7 March 2017 at 10:23, Anil <anilk...@gmail.com> wrote:

> Hi,
>
> Are there any settings to use ignite.compute().run() to run the task on
> local node ?
>
> i am try to schedule the task on each node using Rx Scheduler. so I am
> broadcasting a runnable which initiates the scheduler on each node.
>
> As I know, we cannot run a task on local node using
> ignite.compute().run(). this will run the task on one of the nodes. correct
> ?
>
> we can start the thread for runnable instance and this will run the task
> on local node. Bu this cannot inject ignite instance into runnable instance.
>
> Do you have any suggestions to achieve this ?
>
> Thanks
>
>
>
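The pseudo-code above can be written out as a self-contained sketch. The Rx Observable is replaced here with a plain ScheduledExecutorService, and IgniteRunnableTask is a hypothetical stand-in for the real task, so treat this as an illustration of the broadcast-then-forLocal pattern rather than the original code:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import org.apache.ignite.Ignite;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.IgniteInstanceResource;

public class LocalSchedulerSketch {
    static void scheduleOnAllNodes(Ignite ignite) {
        // broadcast() runs the closure once on every node; each node then
        // sets up its own periodic local schedule.
        ignite.compute().broadcast(new IgniteRunnable() {
            @IgniteInstanceResource
            private transient Ignite local; // injected on the executing node

            @Override public void run() {
                ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
                ses.scheduleAtFixedRate(
                    // forLocal() restricts the compute projection to this
                    // node only, so the task never hops to another node and
                    // still gets Ignite resource injection.
                    () -> local.compute(local.cluster().forLocal()).run(new IgniteRunnableTask()),
                    1, 24 * 60, TimeUnit.MINUTES);
            }
        });
    }

    // Hypothetical stand-in for the daily task from the thread.
    static class IgniteRunnableTask implements IgniteRunnable {
        @Override public void run() {
            System.out.println("daily task on local node");
        }
    }
}
```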


Running a IgniteRunnable on local node using compute

2017-03-06 Thread Anil
Hi,

Are there any settings to use ignite.compute().run() to run the task on
local node ?

I am trying to schedule a task on each node using an Rx Scheduler, so I am
broadcasting a runnable which initiates the scheduler on each node.

As far as I know, we cannot run a task on the local node using
ignite.compute().run(); this will run the task on one of the nodes in the
cluster. Correct?

We could start a plain thread for the runnable instance, and that would run
the task on the local node, but then Ignite cannot inject the ignite
instance into the runnable.

Do you have any suggestions on how to achieve this?

Thanks


Apache Ignite 1.9

2017-03-03 Thread Anil
Hi,

What would be the release date for Apache Ignite 1.9? Thanks.

Thanks


Re: Ignite RoundRobinLoadBalancingSpi Per Task not distributing tasks.

2017-03-02 Thread Anil
Hi Ramzinator,

You created a number of compute jobs, each with a single task.

Thanks

On 2 March 2017 at 20:11, Ramzinator <rami.hamm...@gmail.com> wrote:

> Thanks Anil! It worked great.
>
> However, is there a way that Ignite can distribute tasks in a round robin
> fashion if tasks are called sequentially?
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-RoundRobinLoadBalancingSpi-
> Per-Task-not-distributing-tasks-tp10991p10993.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
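To make "providing all runnables to a single compute() call" concrete: IgniteCompute has a run(Collection) overload that submits all closures as one task, which is the unit a perTask RoundRobinLoadBalancingSpi balances over. A minimal sketch (the job body is an assumption):

```java
import java.util.ArrayList;
import java.util.Collection;

import org.apache.ignite.Ignite;
import org.apache.ignite.lang.IgniteRunnable;

public class BatchSubmitSketch {
    static void submitBatch(Ignite ignite, int jobs) {
        Collection<IgniteRunnable> batch = new ArrayList<>();
        for (int i = 0; i < jobs; i++) {
            final int n = i;
            batch.add((IgniteRunnable) () -> System.out.println("job " + n));
        }

        // One run(Collection) call maps all closures within a single task,
        // so the per-task round-robin SPI spreads them across the nodes,
        // unlike five separate run() calls, which are five independent tasks.
        ignite.compute().run(batch);
    }
}
```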


Re: Ignite RoundRobinLoadBalancingSpi Per Task not distributing tasks.

2017-03-02 Thread Anil
Hi Ramzinator,

please try providing all the runnables to a single compute() call.

Thanks.

On 2 March 2017 at 19:17, Ramzinator  wrote:

> Hi all,
>
> I'm trying to use Ignite's RoundRobinLoadBalancingSpi with perTask enabled
> to guarantee distribution of tasks to my ignite nodes, but it seems that
> the
> load balancing is not working as expected.
>
> Consider the following test:
>
> @Test
>   public void testRR() {
> Ignite node1 = Ignition.start(igniteConfig("node1"));
> Ignite node2 = Ignition.start(igniteConfig("node2"));
> Ignite node3 = Ignition.start(igniteConfig("node3"));
> Ignite node4 = Ignition.start(igniteConfig("node4"));
> Ignite node5 = Ignition.start(igniteConfig("node5"));
>
> node1.compute().run(runnable());
> node1.compute().run(runnable());
> node1.compute().run(runnable());
> node1.compute().run(runnable());
> node1.compute().run(runnable());
>   }
>
>   public static IgniteConfiguration igniteConfig(String gridName) {
> IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
> RoundRobinLoadBalancingSpi rr = new RoundRobinLoadBalancingSpi();
> rr.setPerTask(true);
> igniteConfiguration.setIncludeEventTypes(EVT_TASK_FAILED,
> EVT_TASK_FINISHED, EVT_JOB_MAPPED);
> igniteConfiguration.setLoadBalancingSpi(rr);
> igniteConfiguration.setGridName(gridName);
> return igniteConfiguration;
>   }
>
>   private IgniteRunnable runnable() {
> return new IgniteRunnable() {
>   @IgniteInstanceResource
>   Ignite ignite;
>
>   @Override
>   public void run() {
> System.out.println("Executing on node: " + ignite.name());
> System.out.println("Local Node Id: " +
> ignite.cluster().localNode().id());
>   }
> };
>   }
>
> I would expect to see one execution on each of my 5 nodes as stated in the
> documentation:
> https://apacheignite.readme.io/docs/load-balancing#section-per-task-mode
> However, this is not the case, and the output is as follows:
>
> Executing on node: node4
> Local Node Id: 011c1b83-b2fc-4d51-976c-00cc3ddfd27a
> Executing on node: node3
> Local Node Id: 4593c1bc-d373-41e8-b913-707e8dd96b2c
> Executing on node: node3
> Local Node Id: 4593c1bc-d373-41e8-b913-707e8dd96b2c
> Executing on node: node4
> Local Node Id: 011c1b83-b2fc-4d51-976c-00cc3ddfd27a
> Executing on node: node2
> Local Node Id: a3723772-3fa6-42eb-b578-88e507cf98e8
>
> As you can see nodes 1 and 5 were excluded, and nodes 4 and 3 were used
> twice each.
> Any explanation why this is? Am I missing any configuration?
>
> Thanks for the help!
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Ignite-RoundRobinLoadBalancingSpi-
> Per-Task-not-distributing-tasks-tp10991.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Unexpected flag value

2017-03-02 Thread Anil
Hi Val,

This impacted all queries. I am not sure what went wrong all of a sudden.

Thanks

On 2 March 2017 at 09:54, Anil <anilk...@gmail.com> wrote:

> Hi Val,
>
> i am using 1.8 for both client and server.
>
> select * from Person - is not working
>
> select id, name , from Person -  working.
>
> This is strange. i did not see any issues with other queries. So trying to
> understand the root causes of Unexpected flag value exception.
>
> Thanks
>
>
>
> On 2 March 2017 at 02:53, vkulichenko <valentin.kuliche...@gmail.com>
> wrote:
>
>> Hi,
>>
>> I would recommend to check that JDBC driver version is the same as Ignite
>> version on server side.
>>
>> Can you also show the query and clarify what do you mean by "run query
>> explicitely"?
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Unexpected-flag-value-tp10961p10980.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: Unexpected flag value

2017-03-01 Thread Anil
Hi Val,

i am using 1.8 for both client and server.

select * from Person - is not working

select id, name , from Person -  working.

This is strange; I did not see any issues with other queries, so I am
trying to understand the root cause of the Unexpected flag value exception.

Thanks



On 2 March 2017 at 02:53, vkulichenko  wrote:

> Hi,
>
> I would recommend to check that JDBC driver version is the same as Ignite
> version on server side.
>
> Can you also show the query and clarify what do you mean by "run query
> explicitely"?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Unexpected-flag-value-tp10961p10980.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Unexpected flag value

2017-02-28 Thread Anil
Hi,

I am seeing *class org.apache.ignite.binary.BinaryObjectException:
Unexpected flag value* when I run a query.

What could be the actual root cause of this issue? I have attached the
complete stack trace.

But the query runs fine when I run it explicitly.

I can provide the pseudo code for the scenario if required.

Thanks.

rx.exceptions.OnErrorNotImplementedException: Failed to query Ignite.
at 
rx.internal.util.InternalObservableUtils$ErrorNotImplementedAction.call(InternalObservableUtils.java:386)
at 
rx.internal.util.InternalObservableUtils$ErrorNotImplementedAction.call(InternalObservableUtils.java:383)
at rx.internal.util.ActionSubscriber.onError(ActionSubscriber.java:44)
at rx.observers.SafeSubscriber._onError(SafeSubscriber.java:153)
at rx.observers.SafeSubscriber.onError(SafeSubscriber.java:115)
at 
com.test.ignite.IgniteJdbcTemplate.lambda$null$1(IgniteJdbcTemplate.java:32)
at rx.internal.util.ActionSubscriber.onError(ActionSubscriber.java:44)
at rx.observers.SafeSubscriber._onError(SafeSubscriber.java:153)
at rx.observers.SafeSubscriber.onError(SafeSubscriber.java:115)
at 
io.vertx.rx.java.ObservableOnSubscribeAdapter.fireError(ObservableOnSubscribeAdapter.java:87)
at 
io.vertx.rx.java.ObservableFuture$1.dispatch(ObservableFuture.java:61)
at 
io.vertx.rx.java.ObservableFuture$HandlerAdapter.handle(ObservableFuture.java:32)
at 
io.vertx.rx.java.ObservableFuture$HandlerAdapter.handle(ObservableFuture.java:12)
at io.vertx.core.impl.FutureImpl.checkCallHandler(FutureImpl.java:158)
at io.vertx.core.impl.FutureImpl.setHandler(FutureImpl.java:100)
at io.vertx.core.impl.ContextImpl.lambda$null$0(ContextImpl.java:279)
at 
io.vertx.core.impl.ContextImpl.lambda$wrapTask$2(ContextImpl.java:324)
at 
io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
at 
io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:403)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:445)
at 
io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: Failed to query Ignite.
at 
org.apache.ignite.internal.jdbc2.JdbcStatement.executeQuery(JdbcStatement.java:149)
at 
org.apache.ignite.internal.jdbc2.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:64)
at 
org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:83)
at 
org.apache.commons.dbcp2.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:83)
at com.test.ignite.JDBCQuery.executeStatement(JDBCQuery.java:45)
at com.test.ignite.JDBCQuery.execute(JDBCQuery.java:37)
at com.test.ignite.JDBCQuery.execute(JDBCQuery.java:14)
at com.test.ignite.JDBCHandler.handle(JDBCHandler.java:42)
at com.test.ignite.JDBCHandler.handle(JDBCHandler.java:28)
at 
io.vertx.core.impl.ContextImpl.lambda$executeBlocking$1(ContextImpl.java:263)
at 
io.vertx.core.impl.OrderedExecutorFactory$OrderedExecutor.lambda$new$0(OrderedExecutorFactory.java:94)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more
Caused by: class org.apache.ignite.binary.BinaryObjectException: Unexpected 
flag value [pos=340, expected=9, actual=4]
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.checkFlagNoHandles(BinaryReaderExImpl.java:1424)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.readString(BinaryReaderExImpl.java:935)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.readString(BinaryReaderExImpl.java:930)
at 
org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.readFixedType(BinaryFieldAccessor.java:707)
at 
org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read(BinaryFieldAccessor.java:639)
at 
org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:829)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1498)
at 
org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1450)
at 
org.apache.ignite.internal.binary.BinaryObjectImpl.deserializeValue(BinaryObjectImpl.java:637)
at 
org.apache.ignite.internal.binary.BinaryObjectImpl.value(BinaryObjectImpl.java:142)
at 
org.apache.ignite.internal.processors.cache.CacheObjectContext.unwrapBinary(CacheObjectContext.java:272)
at 

Re: backup to swap

2017-02-27 Thread Anil
Hi Val,

I spent some time trying to figure out a way to move all backup copies to
swap, with no luck.

Could you please help me out in achieving this?

Thanks.

On 28 January 2017 at 02:26, vkulichenko <valentin.kuliche...@gmail.com>
wrote:

> Anil,
>
> What exactly does not exist? There is a swap space implementation out of
> the
> box, and you only need to implement eviction policy.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/backup-to-swap-tp10255p10293.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Parallel Queries

2017-02-27 Thread Anil
Hi,

Can a single Ignite client handle a number of queries at a time?

I have two clients, and I see only two queries executed in parallel when I
run a JMeter test.

Am I missing anything here? Please advise.

Thanks.


Re: Node failure

2017-02-27 Thread Anil
Hi Andrey,

thanks for looking into it. could you please share more details around the
bug ? this helps us.

Thanks.

On 27 February 2017 at 17:27, Andrey Mashenkov <amashen...@gridgain.com>
wrote:

> Thanks, It was very helpful.
>
> Seems, Offheap with swap enabled funcionality has a bug.
>
> On Mon, Feb 27, 2017 at 2:46 PM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Andrey,
>>
>> I set both off heap cache and swap enabled = true.
>>
>> Thanks
>>
>> On 27 February 2017 at 16:48, Andrey Mashenkov <
>> andrey.mashen...@gmail.com> wrote:
>>
>>> Hi Anil,
>>>
>>> One more question. Did you use Offheap cache or may be SwapEnabled=true
>>> is set?
>>>
>>>
>>> On Sat, Feb 25, 2017 at 5:14 AM, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Thank you Andrey.
>>>>
>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Andrey V. Mashenkov
>>>
>>
>>
>


Re: IGNITE-2680

2017-02-27 Thread Anil
Hi Val,

I have added a comment to the JIRA. Could you please take a look?

Thanks.


Re: Node failure

2017-02-27 Thread Anil
Hi Andrey,

I set both off heap cache and swap enabled = true.

Thanks

On 27 February 2017 at 16:48, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Hi Anil,
>
> One more question. Did you use Offheap cache or may be SwapEnabled=true is
> set?
>
>
> On Sat, Feb 25, 2017 at 5:14 AM, Anil <anilk...@gmail.com> wrote:
>
>> Thank you Andrey.
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Re: Node failure

2017-02-24 Thread Anil
Thank you Andrey.


Re: Node failure

2017-02-24 Thread Anil
Hi Andrey,

If you look at the log, the time taken to process a partition is high
(> 15 sec). I am not sure what is causing such a high query time.

In my case, both caches are collocated, and eqId column is indexed and
setLocal is true for the query.

I wonder whether my approach is correct. Please correct me if anything
looks suspicious.

Thanks.



On 24 February 2017 at 18:37, Anil <anilk...@gmail.com> wrote:

> Hi Andrey,
>
> I have attached the log. thanks.
>
> Thanks.
>
>
>
>
>
> On 24 February 2017 at 18:16, Andrey Mashenkov <andrey.mashen...@gmail.com
> > wrote:
>
>> Hi Anil,
>>
>> Would you please provide ignite logs as well?
>>
>>
>> On Fri, Feb 24, 2017 at 3:33 PM, Andrey Gura <ag...@apache.org> wrote:
>>
>>> Hi, Anil
>>>
>>> Could you please provide crash dump? In your case it is
>>> /opt/ignite-manager/api/hs_err_pid18543.log file.
>>>
>>> On Fri, Feb 24, 2017 at 9:05 AM, Anil <anilk...@gmail.com> wrote:
>>> > Hi ,
>>> >
>>> > I see the node is down with following error while running compute task
>>> >
>>> >
>>> > # A fatal error has been detected by the Java Runtime Environment:
>>> > #
>>> > #  SIGSEGV (0xb) at pc=0x7facd5cae561, pid=18543,
>>> tid=0x7fab8a9ea700
>>> > #
>>> > # JRE version: OpenJDK Runtime Environment (8.0_111-b14) (build
>>> > 1.8.0_111-8u111-b14-3~14.04.1-b14)
>>> > # Java VM: OpenJDK 64-Bit Server VM (25.111-b14 mixed mode linux-amd64
>>> > compressed oops)
>>> > # Problematic frame:
>>> > # J 8676 C2
>>> > org.apache.ignite.internal.processors.query.h2.opt.GridH2Key
>>> ValueRowOffheap.getOffheapValue(I)Lorg/h2/value/Value;
>>> > (290 bytes) @ 0x7facd5cae561 [0x7facd5cae180+0x3e1]
>>> > #
>>> > # Failed to write core dump. Core dumps have been disabled. To enable
>>> core
>>> > dumping, try "ulimit -c unlimited" before starting Java again
>>> > #
>>> > # An error report file with more information is saved as:
>>> > # /opt/ignite-manager/api/hs_err_pid18543.log
>>> > #
>>> > # If you would like to submit a bug report, please visit:
>>> > #   http://bugreport.java.com/bugreport/crash.jsp
>>> > #
>>> >
>>> >
>>> > I have two 2 caches on 4 node cluster each cache is configured with 10
>>> gb
>>> > off heap.
>>> >
>>> > ComputeTask performs the following execution and it is broad casted to
>>> all
>>> > nodes.
>>> >
>>> >for (Integer part : parts) {
>>> > ScanQuery<String, Person> scanQuery = new ScanQuery<String, Person>();
>>> > scanQuery.setLocal(true);
>>> > scanQuery.setPartition(part);
>>> >
>>> > Iterator<Cache.Entry<String, Person>> iterator =
>>> > cache.query(scanQuery).iterator();
>>> >
>>> > while (iterator.hasNext()) {
>>> > Cache.Entry<String, Person> row = iterator.next();
>>> > String eqId =   row.getValue().getEqId();
>>> > try {
>>> > QueryCursor<Entry<AffinityKey, Contract>> pdCursor =
>>> > detailsCache.query(new SqlQuery<AffinityKey,
>>> > PersonDetail>(PersonDetail.class,
>>> > "select * from DETAIL_CACHE.PersonDetail where eqId = ? order by
>>> enddate
>>> > desc").setLocal(true).setArgs(eqId));
>>> > Long prev = null;
>>> > for (Entry<AffinityKey, PersonDetail> d : pdCursor) {
>>> > // populate person info into person detail
>>> > dataStreamer.addData(new AffinityKey(detaildId, eqId),
>>> > d);
>>> > }
>>> > pdCursor.close();
>>> > }catch (Exception ex){
>>> > }
>>> > }
>>> >
>>> > }
>>> >
>>> >
>>> > Please let me know if you see any issues with approach or any
>>> > configurations.
>>> >
>>> > Thanks.
>>> >
>>>
>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>
>


Re: Node failure

2017-02-24 Thread Anil
Hi Andrey,

I have attached the log. thanks.

Thanks.





On 24 February 2017 at 18:16, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Hi Anil,
>
> Would you please provide ignite logs as well?
>
>
> On Fri, Feb 24, 2017 at 3:33 PM, Andrey Gura <ag...@apache.org> wrote:
>
>> Hi, Anil
>>
>> Could you please provide crash dump? In your case it is
>> /opt/ignite-manager/api/hs_err_pid18543.log file.
>>
>> On Fri, Feb 24, 2017 at 9:05 AM, Anil <anilk...@gmail.com> wrote:
>> > Hi ,
>> >
>> > I see the node is down with following error while running compute task
>> >
>> >
>> > # A fatal error has been detected by the Java Runtime Environment:
>> > #
>> > #  SIGSEGV (0xb) at pc=0x7facd5cae561, pid=18543,
>> tid=0x7fab8a9ea700
>> > #
>> > # JRE version: OpenJDK Runtime Environment (8.0_111-b14) (build
>> > 1.8.0_111-8u111-b14-3~14.04.1-b14)
>> > # Java VM: OpenJDK 64-Bit Server VM (25.111-b14 mixed mode linux-amd64
>> > compressed oops)
>> > # Problematic frame:
>> > # J 8676 C2
>> > org.apache.ignite.internal.processors.query.h2.opt.GridH2Key
>> ValueRowOffheap.getOffheapValue(I)Lorg/h2/value/Value;
>> > (290 bytes) @ 0x7facd5cae561 [0x7facd5cae180+0x3e1]
>> > #
>> > # Failed to write core dump. Core dumps have been disabled. To enable
>> core
>> > dumping, try "ulimit -c unlimited" before starting Java again
>> > #
>> > # An error report file with more information is saved as:
>> > # /opt/ignite-manager/api/hs_err_pid18543.log
>> > #
>> > # If you would like to submit a bug report, please visit:
>> > #   http://bugreport.java.com/bugreport/crash.jsp
>> > #
>> >
>> >
>> > I have two 2 caches on 4 node cluster each cache is configured with 10
>> gb
>> > off heap.
>> >
>> > ComputeTask performs the following execution and it is broad casted to
>> all
>> > nodes.
>> >
>> >for (Integer part : parts) {
>> > ScanQuery<String, Person> scanQuery = new ScanQuery<String, Person>();
>> > scanQuery.setLocal(true);
>> > scanQuery.setPartition(part);
>> >
>> > Iterator<Cache.Entry<String, Person>> iterator =
>> > cache.query(scanQuery).iterator();
>> >
>> > while (iterator.hasNext()) {
>> > Cache.Entry<String, Person> row = iterator.next();
>> > String eqId =   row.getValue().getEqId();
>> > try {
>> > QueryCursor<Entry<AffinityKey, Contract>> pdCursor =
>> > detailsCache.query(new SqlQuery<AffinityKey,
>> > PersonDetail>(PersonDetail.class,
>> > "select * from DETAIL_CACHE.PersonDetail where eqId = ? order by enddate
>> > desc").setLocal(true).setArgs(eqId));
>> > Long prev = null;
>> > for (Entry<AffinityKey, PersonDetail> d : pdCursor) {
>> > // populate person info into person detail
>> > dataStreamer.addData(new AffinityKey(detaildId, eqId),
>> > d);
>> > }
>> > pdCursor.close();
>> > }catch (Exception ex){
>> > }
>> > }
>> >
>> > }
>> >
>> >
>> > Please let me know if you see any issues with approach or any
>> > configurations.
>> >
>> > Thanks.
>> >
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>
2017-02-23 19:34:52 610 INFO  CacheUpdateService:262 - 
before ignite started*
2017-02-23 19:34:52 763 WARN  NoopCheckpointSpi:480 - Checkpoints are disabled 
(to enable configure any GridCheckpointSpi implementation)
2017-02-23 19:34:52 808 WARN  GridCollisionManager:480 - Collision resolution 
is disabled (all jobs will be activated upon arrival).
2017-02-23 19:34:53 153 WARN  TcpDiscoverySpi:480 - Failure detection timeout 
will be ignored (one of SPI parameters has been set explicitly)
2017-02-23 19:34:54 996 INFO  CacheUpdateService:264 - 
After ignite started
2017-02-23 19:34:55 220 WARN  VerifiableProperties:83 - Property 
enable.auto.commit is not valid
2017-02-23 19:34:55 260 INFO  ZkEventThread:64 - Starting ZkClient event thread.
2017-02-23 19:34:55 820 INFO  CacheUpdateService:107 - Kafka is connected 
successfully
2017-02-23 19:34:56 795 DEBUG ApplicationLauncher:57 - Vertx started as a 
cluster
2017-02-23 19:34:56 926 DEBUG ApplicationLauncher:70 - RestControllerVerticle  
deployed successfully
2017-02-23 19:34:56 931 WARN  RestControllerVe

Node failure

2017-02-23 Thread Anil
Hi ,

I see the node is down with following error while running compute task


# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x7facd5cae561, pid=18543, tid=0x7fab8a9ea700
#
# JRE version: OpenJDK Runtime Environment (8.0_111-b14) (build
1.8.0_111-8u111-b14-3~14.04.1-b14)
# Java VM: OpenJDK 64-Bit Server VM (25.111-b14 mixed mode linux-amd64
compressed oops)
# Problematic frame:
# J 8676 C2
org.apache.ignite.internal.processors.query.h2.opt.GridH2KeyValueRowOffheap.getOffheapValue(I)Lorg/h2/value/Value;
(290 bytes) @ 0x7facd5cae561 [0x7facd5cae180+0x3e1]
#
# Failed to write core dump. Core dumps have been disabled. To enable core
dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /opt/ignite-manager/api/hs_err_pid18543.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#


I have two 2 caches on 4 node cluster each cache is configured with 10 gb
off heap.

The ComputeTask performs the following logic and is broadcast to all
nodes.

   for (Integer part : parts) {
ScanQuery<String, Person> scanQuery = new ScanQuery<String, Person>();
scanQuery.setLocal(true);
scanQuery.setPartition(part);

Iterator<Cache.Entry<String, Person>> iterator =
cache.query(scanQuery).iterator();

while (iterator.hasNext()) {
Cache.Entry<String, Person> row = iterator.next();
String eqId =   row.getValue().getEqId();
try {
QueryCursor<Entry<AffinityKey, PersonDetail>> pdCursor =
detailsCache.query(new SqlQuery<AffinityKey, PersonDetail>(PersonDetail.class,
"select * from DETAIL_CACHE.PersonDetail where eqId = ? order by enddate
desc").setLocal(true).setArgs(eqId));
Long prev = null;
for (Entry<AffinityKey, PersonDetail> d : pdCursor) {
// populate person info into person detail
dataStreamer.addData(new AffinityKey(detaildId, eqId), d);
}
pdCursor.close();
}catch (Exception ex){
}
}

}


Please let me know if you see any issues with approach or any
configurations.

Thanks.


Re: QueryCursor with Order by query

2017-02-23 Thread Anil
Thanks Andrey.

Where are these temporary merge tables created? On the client? I may need
more time to understand the internal code :)

Thanks

On 23 February 2017 at 20:21, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Hi Anil,
>
> Query initiator node will fetch all records before applying sorting. It is
> known bug [1], and I hope it will be fixed soon.
>
> What about QueryCursor, it is just wrapper that supports query
> cancellation. Pagination is applied to load map-queries results by reducer
> in async manner.
>
> Looking at these classes can help you to understand how it works:
> GridReduceQueryExecutor, GridMapQueryExecutor, GridMergeIndex,
> GridQueryNextPageRequest, GridQueryNextPageResponse
> It looks like sorting is applied by H2 internals when reducer makes query
> to merge table.
>
>
> [1] https://issues.apache.org/jira/browse/IGNITE-3013
>
> On Thu, Feb 23, 2017 at 2:34 PM, Anil <anilk...@gmail.com> wrote:
>
>> Hi,
>>
>> QueryCursor is used to get the records in pages instead of loading all
>> records into client memory.
>>
>> and I understand that sorting needs to get all the records into client
>> (assuming reducer is client. correct me if i am wrong) memory.
>>
>> How does QueryCursor with sort query behaves ? Thanks.
>>
>> Thanks.
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>
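The point about sorting can be illustrated with a plain-Java sketch of the reducer's behavior (this is an illustration, not Ignite internals): a globally ordered cursor cannot emit its first row until every map node's pages have been gathered, which is why pagination does not relieve memory pressure on the initiator for a sorted query.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class ReducerSortSketch {
    // Each inner list stands for the pages returned by one map node.
    // The reducer must drain them all before a global sort is possible.
    static List<Integer> reduceSorted(List<List<Integer>> pagesFromNodes) {
        List<Integer> all = new ArrayList<>();
        for (List<Integer> page : pagesFromNodes) {
            all.addAll(page); // the full fetch happens first...
        }
        Collections.sort(all); // ...sorting only afterwards
        return all;
    }

    public static void main(String[] args) {
        List<List<Integer>> pages = Arrays.asList(
                Arrays.asList(5, 1), Arrays.asList(4, 2), Arrays.asList(3));
        System.out.println(reduceSorted(pages)); // [1, 2, 3, 4, 5]
    }
}
```

If the per-node results were themselves sorted, a streaming k-way merge could emit rows earlier; IGNITE-3013 tracks exactly that kind of improvement.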


Re: IGNITE-2680

2017-02-23 Thread Anil
Hi Val,

Thanks. I can take it up; let me come back on this next Monday :)

Thanks

On 23 February 2017 at 07:09, vkulichenko 
wrote:

> I believe it was missed, I created a ticket:
> https://issues.apache.org/jira/browse/IGNITE-4748
>
> Hopefully someone will fix it in 1.9. Feel free to pick it up ;)
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/IGNITE-2680-tp10783p10821.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


QueryCursor with Order by query

2017-02-23 Thread Anil
Hi,

QueryCursor is used to get the records in pages instead of loading all
records into client memory.

I understand that sorting requires fetching all the records into the
reducer's memory (assuming the reducer is the client; correct me if I am
wrong).

How does a QueryCursor behave with a sort query?

Thanks.


IGNITE-2680

2017-02-21 Thread Anil
Hi,

IGNITE-2680 says it is resolved. Would it be available in 1.9 ?

I see following code in 1.8.

@Override public void setQueryTimeout(int timeout) throws SQLException {
ensureNotClosed();

throw new SQLFeatureNotSupportedException("Query timeout is not
supported.");
}

Thanks


Re: NOT IN in ignite

2017-02-21 Thread Anil
Hi Val,

I agree with you.

Controlling the query execution plan per query would be useful in this case.
collocated=true does not make sense for queries without a join, even though
the caches are collocated. What do you say?

I feel the query executor must be intelligent enough to apply collocation
per query.

Thanks.

On 22 February 2017 at 06:09, vkulichenko <valentin.kuliche...@gmail.com>
wrote:

> Anil,
>
> OK, so you're talking about setting collocated flag on per query level in
> JDBC driver, right? This makes sense, but it seems to be a limitation of
> JDBC API rather than Ignite implementation. How would you provide a
> parameter when creating a statement and/or executing a query? Do you have
> any ideas how to do this?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/NOT-IN-in-ignite-tp9861p10777.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite Cache Stopped

2017-02-20 Thread Anil
Hi Andrey,

Does GC on the Ignite client impact the Ignite cluster topology?

Thanks

On 17 February 2017 at 22:56, Andrey Gura <ag...@apache.org> wrote:

> From GC logs at the end of files I see Full GC pauses like this:
>
> 2017-02-17T04:29:22.118-0800: 21122.643: [Full GC (Allocation Failure)
>  10226M->8526M(10G), 26.8952036 secs]
>[Eden: 0.0B(512.0M)->0.0B(536.0M) Survivors: 0.0B->0.0B Heap:
> 10226.0M(10.0G)->8526.8M(10.0G)], [Metaspace:
> 77592K->77592K(1120256K)]
>
> Your heap is exhausted. During GC discovery doesn't receive heart
> betas and nodes stopped due to segmentation. Please check your nodes'
> logs for NODE_SEGMENTED pattern. If it is your case try to tune GC or
> reduce load on GC (see for details [1])
>
> [1] https://apacheignite.readme.io/docs/jvm-and-system-tuning
>
> On Fri, Feb 17, 2017 at 6:35 PM, Anil <anilk...@gmail.com> wrote:
> > Hi Andrey,
> >
> > The query execution time is very high when the limit is 1+250.
> >
> > 10 GB of heap memory for both client and servers. I have attached the gc
> > logs of 4 servers. Could you please take a look ? thanks.
> >
> >
> > On 17 February 2017 at 20:52, Anil <anilk...@gmail.com> wrote:
> >>
> >> Hi Andrey,
> >>
> >> I checked GClogs  and everything looks good.
> >>
> >> Thanks
> >>
> >> On 17 February 2017 at 20:45, Andrey Gura <ag...@apache.org> wrote:
> >>>
> >>> Anil,
> >>>
> >>> IGNITE-4003 isn't related with your problem.
> >>>
> >>> I think that nodes are going out of topology due to long GC pauses.
> >>> You can easily check this using GC logs.
> >>>
> >>> On Fri, Feb 17, 2017 at 6:04 PM, Anil <anilk...@gmail.com> wrote:
> >>> > Hi,
> >>> >
> >>> > We noticed whenever long running queries fired, nodes are going out
> of
> >>> > topology and entire ignite cluster is down.
> >>> >
> >>> > In my case, a filter criteria could get 5L records. So each API
> request
> >>> > could fetch 250 records. When page number is getting increased the
> >>> > query
> >>> > execution time is high and entire cluster is down
> >>> >
> >>> >  https://issues.apache.org/jira/browse/IGNITE-4003 related to this ?
> >>> >
> >>> > Can we set seperate thread pool for queries executions, compute jobs
> >>> > and
> >>> > other services instead of common public thread pool ?
> >>> >
> >>> > Thanks
> >>> >
> >>> >
> >>
> >>
> >
>


Re: Ignite Cache Stopped

2017-02-17 Thread Anil
Hi Andrey,

I checked GClogs  and everything looks good.

Thanks

On 17 February 2017 at 20:45, Andrey Gura <ag...@apache.org> wrote:

> Anil,
>
> IGNITE-4003 isn't related with your problem.
>
> I think that nodes are going out of topology due to long GC pauses.
> You can easily check this using GC logs.
>
> On Fri, Feb 17, 2017 at 6:04 PM, Anil <anilk...@gmail.com> wrote:
> > Hi,
> >
> > We noticed whenever long running queries fired, nodes are going out of
> > topology and entire ignite cluster is down.
> >
> > In my case, a filter criteria could get 5L records. So each API request
> > could fetch 250 records. When page number is getting increased the query
> > execution time is high and entire cluster is down
> >
> >  https://issues.apache.org/jira/browse/IGNITE-4003 related to this ?
> >
> > Can we set seperate thread pool for queries executions, compute jobs and
> > other services instead of common public thread pool ?
> >
> > Thanks
> >
> >
>


Ignite Cache Stopped

2017-02-17 Thread Anil
Hi,

We noticed that whenever long-running queries are fired, nodes go out of
the topology and the entire Ignite cluster goes down.

In my case, a filter criterion could match 5L (500,000) records, and each
API request fetches 250 of them. As the page number increases, the query
execution time grows and the entire cluster goes down.

Is https://issues.apache.org/jira/browse/IGNITE-4003 related to this?

Can we set a separate thread pool for query execution, compute jobs, and
other services instead of the common public thread pool?

Thanks


Re: EntryProcessor for cache

2017-02-17 Thread Anil
Hi Andrey,

Thanks, this looks promising and I will try it.

Is the only way to get the partitions
ignite.affinity("PERSON_CACHE").partitions()? Does that also hold for a
non-affinity cache?

Thanks

On 17 February 2017 at 10:39, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Hi Anil,
>
> Most likely, your query takes long time due to SQL query is running in
> single thread. The only workaround for now is to add more nodes.
>
> However, query is quite simple, so you can run ScanQuery per partition in
> parallel manner for iterating over PERSON_CACHE.
>
>
> On Fri, Feb 17, 2017 at 5:29 AM, Anil <anilk...@gmail.com> wrote:
>
>> Hi Andrey,
>>
>> Yes. index is available on eqId of PersonDetail object.
>>
>> Query says scan for Person cache not the PersonDetail cache.
>>
>> and i think the  above  Computask executed by only one thread and  not by
>> number of threads on number of partitions. Can parallelism achieved here ?
>>
>> Thanks.
>>
>>
>>
>> On 17 February 2017 at 02:32, Andrey Mashenkov <
>> andrey.mashen...@gmail.com> wrote:
>>
>>> Hi Anil,
>>>
>>> 1. Seems, some node enter to topology, but cannot finish partition map
>>> exchange operations due to long running transtaction or smth holds lock on
>>> a partition.
>>>
>>> 2.     /* PERSON_CACHE.PERSON.__SCAN_ */ says that no indices are used
>>> for this query and a full scan will be performed. Do you have an index on 
>>> PersonDetail.eqId
>>> field?
>>>
>>> On Thu, Feb 16, 2017 at 6:50 PM, Anil <anilk...@gmail.com> wrote:
>>>
>>>> Hi Val,
>>>>
>>>> I have created ComputeTask which updates which scans the local cache
>>>> and updates its information to child records in another cache. Both caches
>>>> are collocated so that parent and child records fall under node and
>>>> partition.
>>>>
>>>> 1. I see following warning in the logs when compute task is running -
>>>>
>>>>  GridCachePartitionExchangeManager:480 - Failed to wait for partition
>>>> map exchange [topVer=AffinityTopologyVersion [topVer=6,
>>>> minorTopVer=0], node=c7a3957b-a3d0-4923-8e5d-e95430c7e66e]. Dumping
>>>> pending objects that might be the cause:
>>>>
>>>> Should I worry about this warning ? what could be the reason for this
>>>> warning.
>>>>
>>>> 2.
>>>>
>>>> QueryCursor<Entry<String, Person>> cursor = cache.query(new
>>>> SqlQuery<String, Person>(Person.class, "select * from Person").
>>>> *setLocal(**true**)*);
>>>>
>>>>
>>>>
>>>>   for (Entry<String, Person> row : cursor) {
>>>>
>>>>String eqId =   row.getValue().getEqId(); //(String) row.get(0);
>>>>
>>>>QueryCursor<Entry<AffinityKey, PersonDetail>> dCursor =
>>>>
>>>>  detailsCache.query(new
>>>> SqlQuery<AffinityKey, PersonDetail>(PersonDetail.class,
>>>>
>>>>
>>>>
>>>> "select * from DETAIL_CACHE.PersonDetail  where eqId =
>>>> ?").*setLocal(true)*.setArgs(eqId));
>>>>
>>>>  for (Entry<AffinityKey, PersonDetail> d : dCursor) {
>>>>
>>>>// add person info to person detail and add to person
>>>> detail data streamer.
>>>>
>>>> }
>>>>
>>>>
>>>>  }
>>>>
>>>>
>>>> I see (in logs) that query is taking long time -
>>>>
>>>>
>>>> Query execution is too long [time=23309 ms, sql='SELECT
>>>> "PERSON_CACHE".Person._key, "PERSON_CACHE".PERSON._val from Person', plan=
>>>>
>>>> SELECT
>>>>
>>>> PERSON_CACHE.PERSON._KEY,
>>>>
>>>> PERSON_CACHE.PERSON._VAL
>>>>
>>>> FROM PERSON_CACHE.PERSON
>>>>
>>>> /* PERSON_CACHE.PERSON.__SCAN_ */
>>>>
>>>> , parameters=[]]
>>>>
>>>> any issues with the above approach ? thanks.
>>>>
>>>>
>>>> Thanks.
>>>>
>>>>
>>>> On 11 February 2017 at 04:18, vkulichenko <
>>>> valentin.kuliche...@gmail.com> wrote:
>>>>
>>>>> Looks ok except that the first query should also be local I guess.
>>>>> Also note
>>>>> that you used split adapter, so didn't actually map the jobs to nodes,
>>>>> leaving this to Ignite. This means that there is a chance some nodes
>>>>> will
>>>>> get more than one job, and some none of the jobs. Round robin
>>>>> balancing is
>>>>> used by default, so this should not happen, at least on stable
>>>>> topology, but
>>>>> theoretically there is no guarantee. Use map method instead to
>>>>> manually map
>>>>> jobs to nodes, or just use broadcast() method.
>>>>>
>>>>> Jobs are executed in parallel in the public thread pool.
>>>>>
>>>>> -Val
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> View this message in context: http://apache-ignite-users.705
>>>>> 18.x6.nabble.com/EntryProcessor-for-cache-tp10432p10559.html
>>>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Best regards,
>>> Andrey V. Mashenkov
>>>
>>
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>
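Andrey's suggestion to run a ScanQuery per partition in parallel can be sketched in plain Java (the partition map and thread pool here are stand-ins; in Ignite each task would run a local ScanQuery with setPartition(p) on the node owning that partition): partitions are processed concurrently instead of in a single thread.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartitionScanSketch {
    // Hypothetical stand-in for scanning cache partitions in parallel:
    // each "partition" gets its own task, mirroring one local ScanQuery
    // with setPartition(p) submitted per partition.
    static int parallelScan(Map<Integer, List<String>> partitions) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> results = new ArrayList<>();
            for (List<String> rows : partitions.values()) {
                results.add(pool.submit(rows::size)); // per-partition work
            }
            int total = 0;
            for (Future<Integer> f : results) total += f.get();
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        Map<Integer, List<String>> parts = new HashMap<>();
        parts.put(0, Arrays.asList("a", "b"));
        parts.put(1, Arrays.asList("c"));
        System.out.println(parallelScan(parts)); // prints 3
    }
}
```

The same shape applies whether the parallelism lives inside one compute job or across broadcast jobs, as long as each partition is visited exactly once.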


Re: NOT IN in ignite

2017-02-16 Thread Anil
Hi Team,

I just would like to check whether there is a plan to revisit this
behavior.

Thanks.

On 13 February 2017 at 09:29, Anil <anilk...@gmail.com> wrote:

> Hi Val,
>
> When two caches require a join, collocated must be set to true on the JDBC
> connection, and then group by queries on the individual caches will not
> return aggregated results. You mean this is not a limitation? If yes, I am
> sorry, I may not agree with this :(
>
> In that case, to make SQL queries work, two JDBC clients must be created:
> one for queries on an individual cache and another for join queries.
> Thanks
>
>
>
> On 13 February 2017 at 07:56, vkulichenko <valentin.kuliche...@gmail.com>
> wrote:
>
>> Anil,
>>
>> I don't see any limitations (except IGNITE-3860). Aggregation without
>> collocation works properly and return correct result unless collected flag
>> is set to true (doing so in this scenario is a misuse). As for
>> performance,
>> collocated execution will always be faster than non-collocated. That's
>> true
>> for any distributed system and there is no magic, you know :)
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/NOT-IN-in-ignite-tp9861p10582.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Execution of Compute jobs

2017-02-15 Thread Anil
Hi,

Does Ignite assign all the jobs when ignite.compute().call(jobs) is executed,
even when the number of jobs is greater than the number of nodes?

Let's say I have to execute 9 jobs on a 4-node cluster. Will all 9 jobs be
assigned to nodes at the beginning, or will one job be assigned to each node,
with the next job handed out once a node finishes?

Thanks.
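For intuition on this question, here is a minimal, self-contained sketch
(plain Java, no Ignite APIs; the node names and the mapJobs helper are
hypothetical illustrations, not Ignite code) of how a round-robin balancer,
which Ignite uses by default, could map 9 jobs onto 4 nodes up front at
mapping time:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical model: with round-robin load balancing, all jobs are mapped
// to nodes when the task starts, cycling through the nodes in order.
public class RoundRobinMapping {
    static Map<String, List<Integer>> mapJobs(int jobCount, List<String> nodes) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        for (String node : nodes)
            assignment.put(node, new ArrayList<>());
        for (int job = 0; job < jobCount; job++)
            assignment.get(nodes.get(job % nodes.size())).add(job);
        return assignment;
    }

    public static void main(String[] args) {
        List<String> nodes = Arrays.asList("node-1", "node-2", "node-3", "node-4");
        // With 9 jobs on 4 nodes, node-1 gets 3 jobs and the rest get 2 each,
        // all decided at mapping time rather than as nodes finish.
        System.out.println(mapJobs(9, nodes));
    }
}
```

Whether this exact up-front mapping applies depends on the load-balancing SPI
configured; a work-stealing style balancer would redistribute jobs more
dynamically as nodes finish.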


Re: NOT IN in ignite

2017-02-12 Thread Anil
Hi Val,

When two caches require a join, collocated must be set to true on the JDBC
connection, and then group by queries on the individual caches will not
return aggregated results. You mean this is not a limitation? If yes, I am
sorry, I may not agree with this :(

In that case, to make SQL queries work, two JDBC clients must be created:
one for queries on an individual cache and another for join queries.

Thanks



On 13 February 2017 at 07:56, vkulichenko <valentin.kuliche...@gmail.com>
wrote:

> Anil,
>
> I don't see any limitations (except IGNITE-3860). Aggregation without
> collocation works properly and return correct result unless collected flag
> is set to true (doing so in this scenario is a misuse). As for performance,
> collocated execution will always be faster than non-collocated. That's true
> for any distributed system and there is no magic, you know :)
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/NOT-IN-in-ignite-tp9861p10582.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: NOT IN in ignite

2017-02-11 Thread Anil
Thanks Andrey.

Ideally, affinity should be used to find the node of an entry and to reduce
data transfer between nodes during joins. With the current implementation,
group by queries on a cache (with no joins) on a non-affinity key will not
work :( . Count queries behave the same way. This limits the usefulness of
the data grid feature. (This is just my view and may be wrong.)

If I remember correctly, Sergi said in one of the previous questions that
Query 2 should work.

Thanks,
Anil

On 11 February 2017 at 22:39, Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Hi Anil,
>
> Query 1 results looks ok. You got different results as affinity key is
> equipmentId, but not serialnumber.
> Query 2 has aggregates in subquery that is not supported yet [1].
>
> [1] https://issues.apache.org/jira/browse/IGNITE-3860.
>
>
> On Sat, Feb 11, 2017 at 7:18 PM, Anil <anilk...@gmail.com> wrote:
>
>>
>> Hi team,
>>
>> I feel this is a bug. I have loaded a cache with an affinity key, and
>> group by queries on a non-affinity key return per-node results with both
>> collocated = true and false.
>>
>> I have created an INSTALL_BASE cache with AffinityKey as the key and an
>> InstallBase pojo as the value; the affinity key is equipmentId.
>>
>>
>>
>> *Query 1 -*
>>
>> SELECT count (*) as count, serialnumber  FROM InstallBase where
>> serialnumber= '031438' group by serialnumber= '031438'
>>
>> Results - on 4 node cluster
>> *With collocated = true* :
>>
>> 1 -  031438
>> 3 -  031438
>>
>> *With collocated = false* :
>>
>> 4 -  031438
>>
>> *Query 2 -*
>>
>> Select ib.*, p.count from installbase ib join (SELECT serialnumber ,
>> count (*) as count FROM InstallBase group by serialnumber) p on
>> ib.serialnumber = p.serialnumber and ib.serialnumber = '031438'
>>
>> *With collocated = true* :
>>
>> 1 -  031438
>> 3 -  031438
>> 3 -  031438
>> 3 -  031438
>>
>> *With collocated = false* :
>>
>> 1 -  031438
>> 3 -  031438
>> 3 -  031438
>> 3 -  031438
>>
>>
>> I see similar behavior with count queries as well.
>>
>> I strongly feel this is not correct behavior. A group by query on a
>> non-affinity field is a very common use case. Please share your views on
>> this.
>>
>> Thanks
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Re: NOT IN in ignite

2017-02-11 Thread Anil
Hi team,

I feel this is a bug. I have loaded a cache with an affinity key, and group
by queries on a non-affinity key return per-node results with both
collocated = true and false.

I have created an INSTALL_BASE cache with AffinityKey as the key and an
InstallBase pojo as the value; the affinity key is equipmentId.



*Query 1 -*

SELECT count (*) as count, serialnumber  FROM InstallBase where
serialnumber= '031438' group by serialnumber= '031438'

Results - on 4 node cluster
*With collocated = true* :

1 -  031438
3 -  031438

*With collocated = false* :

4 -  031438

*Query 2 -*

Select ib.*, p.count from installbase ib join (SELECT serialnumber , count
(*) as count FROM InstallBase group by serialnumber) p on ib.serialnumber =
p.serialnumber and ib.serialnumber = '031438'

*With collocated = true* :

1 -  031438
3 -  031438
3 -  031438
3 -  031438

*With collocated = false* :

1 -  031438
3 -  031438
3 -  031438
3 -  031438


I see similar behavior with count queries as well.

I strongly feel this is not correct behavior. A group by query on a
non-affinity field is a very common use case. Please share your views on
this.

Thanks


Re: NOT IN in ignite

2017-02-10 Thread Anil
Hi Val,

Do you guys accept this behavior as a bug, and will you correct it so that
group by on a non-affinity key returns the same results whether collocated
is true or false?

Thanks.

On 9 February 2017 at 09:40, Anil <anilk...@gmail.com> wrote:

> Hi Val,
>
> You are right. I collocated the data and set collocated = true, and it
> impacted my group by queries.
>
> I was referring only to group by queries on a non-affinity key field as
> the issue.
>
> Thanks.
>
> On 9 February 2017 at 03:18, vkulichenko <valentin.kuliche...@gmail.com>
> wrote:
>
>> Anil,
>>
>> You should always try to colocate as much as possible when working with a
>> distributed system. If you colocate properly and set collocated=true, you
>> will get correct result with the best possible performance. If you can't
>> colocate, you have to set the flag to false. Result will still correct,
>> but
>> it will work slower. In other words, this is just an optional performance
>> optimization.
>>
>> Not colocating and setting flag to true is a misuse as this combination
>> leads to incorrect result.
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/NOT-IN-in-ignite-tp9861p10506.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>
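To illustrate why collocated = true combined with a non-collocated GROUP BY
key produces several partial rows per group, here is a self-contained sketch
(plain Java; a toy model of distributed grouping, not Ignite internals — the
Row, partition, and groupLocal helpers are hypothetical). Rows are
partitioned by the affinity key (equipmentId), then grouped by serialNumber
either per node, which is roughly what collocated = true assumes is safe, or
with the per-node partials merged on the reducer, which is what
collocated = false does:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy model of a distributed GROUP BY; not Ignite internals.
public class CollocatedGroupBy {
    // A row with an affinity key (equipmentId) and a grouped column (serialNumber).
    static class Row {
        final String equipmentId;
        final String serialNumber;
        Row(String equipmentId, String serialNumber) {
            this.equipmentId = equipmentId;
            this.serialNumber = serialNumber;
        }
    }

    // Partition rows across nodes by hash of the affinity key.
    static List<List<Row>> partition(List<Row> rows, int nodeCount) {
        List<List<Row>> nodes = new ArrayList<>();
        for (int i = 0; i < nodeCount; i++)
            nodes.add(new ArrayList<>());
        for (Row r : rows)
            nodes.get(Math.abs(r.equipmentId.hashCode()) % nodeCount).add(r);
        return nodes;
    }

    // GROUP BY serialNumber over one node's local rows.
    static Map<String, Long> groupLocal(List<Row> rows) {
        Map<String, Long> counts = new TreeMap<>();
        for (Row r : rows)
            counts.merge(r.serialNumber, 1L, Long::sum);
        return counts;
    }

    public static void main(String[] args) {
        List<Row> rows = new ArrayList<>();
        rows.add(new Row("eq-1", "031438"));
        rows.add(new Row("eq-2", "031438"));
        rows.add(new Row("eq-3", "031438"));
        rows.add(new Row("eq-4", "031438"));

        // collocated = true: each node emits its own group rows, so the same
        // serialNumber can appear several times with partial counts.
        for (List<Row> node : partition(rows, 4))
            if (!node.isEmpty())
                System.out.println("partial: " + groupLocal(node));

        // collocated = false: partials are merged on the reducer, so each
        // serialNumber appears exactly once with the full count.
        Map<String, Long> merged = new TreeMap<>();
        for (List<Row> node : partition(rows, 4))
            groupLocal(node).forEach((k, v) -> merged.merge(k, v, Long::sum));
        System.out.println("merged: " + merged);
    }
}
```

The merged result is always the full count of 4, while the per-node partials
depend on how the four equipmentId values hash across nodes — which mirrors
the 1/3 vs 4 split reported earlier in the thread.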


Re: EntryProcessor for cache

2017-02-10 Thread Anil
Also, does a compute task execute in parallel across the partitions on each
node (like an entry processor)? Thanks.

On 10 February 2017 at 10:52, Anil <anilk...@gmail.com> wrote:

> Hi Val,
>
> i have attached the code. please let me know if you see any issues with
> approach. thanks.
>
> Thanks.
>
> On 10 February 2017 at 02:16, vkulichenko <valentin.kuliche...@gmail.com>
> wrote:
>
>> Anil,
>>
>> What exactly did you try and what didn't work? Can you show your code?
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/EntryProcessor-for-cache-tp10432p10532.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: EntryProcessor for cache

2017-02-09 Thread Anil
Hi Val,

i have attached the code. please let me know if you see any issues with
approach. thanks.

Thanks.

On 10 February 2017 at 02:16, vkulichenko <valentin.kuliche...@gmail.com>
wrote:

> Anil,
>
> What exactly did you try and what didn't work? Can you show your code?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/EntryProcessor-for-cache-tp10432p10532.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


TestComputeTask.java
Description: Binary data

