“Binary type has different field types” Error when using Date type field in Key or Value

2018-11-27 Thread rishi007bansod
I am trying to use a java.util.Date field in my Ignite key and value
objects. But when I start caching data into this Ignite cache from Java code,
I get the following error.

*[12:43:01,485][SEVERE][pool-8-thread-1][] Message is ignored due to an
error [msg=MessageAndMetadata(test1,2,Message(magic = 1, attributes = 0,
CreateTime = -1, crc = 3705259101, key = java.nio.HeapByteBuffer[pos=0 lim=4
cap=3288], payload = java.nio.HeapByteBuffer[pos=0 lim=3280
cap=3280]),302,kafka.serializer.DefaultDecoder@2d50c6a2,kafka.serializer.DefaultDecoder@1ff7596c,-1,CreateTime)]
class org.apache.ignite.binary.BinaryObjectException: Binary type has
different field types [typeName=test.demo.DataKey, fieldName=tstamp,
fieldTypeName1=String, fieldTypeName2=Date]
at
org.apache.ignite.internal.binary.BinaryUtils.mergeMetadata(BinaryUtils.java:1027)
at
org.apache.ignite.internal.processors.cache.binary.BinaryMetadataTransport$MetadataUpdateProposedListener.onCustomEvent(BinaryMetadataTransport.java:293)
at
org.apache.ignite.internal.processors.cache.binary.BinaryMetadataTransport$MetadataUpdateProposedListener.onCustomEvent(BinaryMetadataTransport.java:258)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery0(GridDiscoveryManager.java:707)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$4.onDiscovery(GridDiscoveryManager.java:589)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.notifyDiscoveryListener(ServerImpl.java:5479)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processCustomMessage(ServerImpl.java:5305)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2765)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2536)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6775)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2621)
at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)*



Here DataKey is the Ignite cache key class, which is defined as follows:


package test.demo;

import java.util.Date;

public class DataKey {

    private Long sess_id;
    private Long s_id;
    private Long version;
    private Date tstamp;

    public DataKey(Long sess_id, Long s_id, Long version, Date tstamp) {
        super();
        this.sess_id = sess_id;
        this.s_id = s_id;
        this.version = version;
        this.tstamp = tstamp;
    }

    @Override
    public int hashCode() {
        final int prime = 31;
        int result = 1;
        result = prime * result + ((s_id == null) ? 0 : s_id.hashCode());
        result = prime * result + ((sess_id == null) ? 0 : sess_id.hashCode());
        result = prime * result + ((tstamp == null) ? 0 : tstamp.hashCode());
        result = prime * result + ((version == null) ? 0 : version.hashCode());
        return result;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj)
            return true;
        if (obj == null)
            return false;
        if (getClass() != obj.getClass())
            return false;
        DataKey other = (DataKey) obj;
        if (s_id == null) {
            if (other.s_id != null)
                return false;
        } else if (!s_id.equals(other.s_id))
            return false;
        if (sess_id == null) {
            if (other.sess_id != null)
                return false;
        } else if (!sess_id.equals(other.sess_id))
            return false;
        if (tstamp == null) {
            if (other.tstamp != null)
                return false;
        } else if (!tstamp.equals(other.tstamp))
            return false;
        if (version == null) {
            if (other.version != null)
                return false;
        } else if (!version.equals(other.version))
            return false;
        return true;
    }
}

As mentioned in the thread
http://apache-ignite-users.70518.x6.nabble.com/Binary-type-has-different-fields-error-td21540.html
, I even deleted the contents of the $IGNITE_HOME/work/ directory and restarted
the node, but the error is still there. What is causing this error? The same
error also occurs if the java.util.Date field is used only in the cache value
(not in the key).
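
For reference, a minimal sketch of how the binary metadata already registered
in the cluster for test.demo.DataKey could be inspected; it assumes a node
started with a default configuration that can join the running cluster and
uses only the public IgniteBinary/BinaryType API.

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.binary.BinaryType;

public class CheckBinaryMeta {
    public static void main(String[] args) {
        // Assumes a default configuration that joins the running cluster.
        try (Ignite ignite = Ignition.start()) {
            // Binary metadata currently registered for the key class, if any.
            BinaryType type = ignite.binary().type("test.demo.DataKey");

            if (type != null) {
                for (String field : type.fields())
                    // Prints e.g. "tstamp -> String" or "tstamp -> Date"; a value
                    // recorded earlier that differs from the current class is what
                    // triggers "Binary type has different field types".
                    System.out.println(field + " -> " + type.fieldTypeName(field));
            }
        }
    }
}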



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


kafka.common.KafkaException: Failed to parse the broker info from zookeeper

2018-09-25 Thread rishi007bansod
I have deployed Kafka in Kubernetes using
https://github.com/Yolean/kubernetes-kafka. But while consuming with a Kafka
consumer, I get the following error:
SEVERE: Failed to resolve default logging config file:
config/java.util.logging.properties
[10:23:00]__   
[10:23:00]   /  _/ ___/ |/ /  _/_  __/ __/ 
[10:23:00]  _/ // (7 7// /  / / / _/   
[10:23:00] /___/\___/_/|_/___/ /_/ /___/  
[10:23:00] 
[10:23:00] ver. 1.9.0#20170302-sha1:a8169d0a
[10:23:00] 2017 Copyright(C) Apache Software Foundation
[10:23:00] 
[10:23:00] Ignite documentation: http://ignite.apache.org
[10:23:00] 
[10:23:00] Quiet mode.
[10:23:00]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[10:23:00] 
[10:23:00] OS: Linux 3.10.0-862.11.6.el7.x86_64 amd64
[10:23:00] VM information: OpenJDK Runtime Environment
1.8.0_181-8u181-b13-1~deb9u1-b13 Oracle Corporation OpenJDK 64-Bit Server VM
25.181-b13
[10:23:02] Configured plugins:
[10:23:02]   ^-- None
[10:23:02] 
[10:23:02] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[10:23:02] Security status [authentication=off, tls/ssl=off]
[10:23:03] REST protocols do not start on client node. To start the
protocols on client node set '-DIGNITE_REST_START_ON_CLIENT=true' system
property.
[10:23:24] Topology snapshot [ver=8, servers=1, clients=1, CPUs=112,
heap=53.0GB]
[10:23:34] Performance suggestions for grid  (fix if possible)
[10:23:34] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[10:23:34]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
options)
[10:23:34]   ^-- Specify JVM heap max size (add '-Xmx[g|G|m|M|k|K]' to
JVM options)
[10:23:34]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[10:23:34]   ^-- Disable processing of calls to System.gc() (add
'-XX:+DisableExplicitGC' to JVM options)
[10:23:34] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[10:23:34] 
[10:23:34] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[10:23:34] 
[10:23:34] Ignite node started OK (id=c10d143b)
[10:23:34] Topology snapshot [ver=7, servers=1, clients=2, CPUs=168,
heap=80.0GB]
start creating caches
inside caches
{xgboostMainCache=IgniteCacheProxy [delegate=GridDhtAtomicCache
[deferredUpdateMsgSnd=org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$3@66c83fc8,
near=null, super=GridDhtCacheAdapter
[multiTxHolder=java.lang.ThreadLocal@ae7950d,
super=GridDistributedCacheAdapter [super=GridCacheAdapter
[locMxBean=org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl@6fd1660,
clusterMxBean=org.apache.ignite.internal.processors.cache.CacheClusterMetricsMXBeanImpl@4a6c18ad,
aff=org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl@5e8604bf,
igfsDataCache=false, mongoDataCache=false, mongoMetaCache=false,
igfsDataCacheSize=null, igfsDataSpaceMax=0,
asyncOpsSem=java.util.concurrent.Semaphore@20095ab4[Permits = 500],
name=xgboostMainCache, size=0, opCtx=null],
xgboostTrainedDataColumnSetCache=IgniteCacheProxy
[delegate=GridDhtAtomicCache
[deferredUpdateMsgSnd=org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$3@53e3a87a,
near=null, super=GridDhtCacheAdapter
[multiTxHolder=java.lang.ThreadLocal@4dafba3e,
super=GridDistributedCacheAdapter [super=GridCacheAdapter
[locMxBean=org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl@546621c4,
clusterMxBean=org.apache.ignite.internal.processors.cache.CacheClusterMetricsMXBeanImpl@621f89b8,
aff=org.apache.ignite.internal.processors.cache.affinity.GridCacheAffinityImpl@f339eae,
igfsDataCache=false, mongoDataCache=false, mongoMetaCache=false,
igfsDataCacheSize=null, igfsDataSpaceMax=0,
asyncOpsSem=java.util.concurrent.Semaphore@2822c6ff[Permits = 500],
name=xgboostTrainedDataColumnSetCache, size=0, opCtx=null]}
end creating caches
start creating data streamers
end creating  data streamers
Launching Prediction Module
41098 [main] INFO  kafka.utils.VerifiableProperties  - Verifying properties
41527 [main] INFO  kafka.utils.VerifiableProperties  - Property
auto.offset.reset is overridden to smallest
41528 [main] WARN  kafka.utils.VerifiableProperties  - Property
bootstrap.servers is not valid
41528 [main] INFO  kafka.utils.VerifiableProperties  - Property group.id is
overridden to IgniteGroup_1
41528 [main] INFO  kafka.utils.VerifiableProperties  - Property
zookeeper.connect is overridden to zookeeper.kafka:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in
[jar:rsrc:slf4j-log4j12-1.7.21.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in

Re: Unable to connect ignite pods in Kubernetes using Ip-finder

2018-09-11 Thread rishi007bansod
"serviceAccountName: ignite" should be present in Pod Deployment
specification as mentioned by Anton in post 
https://stackoverflow.com/questions/49395481/how-to-setmasterurl-in-ignite-xml-config-for-kubernetes-ipfinder/49405879#49405879

 
.  It is currently absent in 
https://apacheignite.readme.io/docs/stateless-deployment
  
"ignite-deployment.yaml" file

Thanks,
Rishikesh



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Xgboost and LSTM support in Apache Ignite ML and DL

2018-05-14 Thread rishi007bansod
Hi,

 In our machine-learning-based system we use the XGBoost and LSTM
algorithms. I want to use Ignite's machine learning libraries to optimize this
system's performance. Do the Apache Ignite ML and DL libraries support the
XGBoost and LSTM algorithms?

Thanks & Regards,
Rishikesh



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite Memory Storage Options

2018-01-05 Thread rishi007bansod
Hi,
I am using Ignite version 2.3.0, and I want to know whether it is possible
to:

(1) store all cache data on disk (no data in memory at all);
(2) keep disjoint sets of data in memory and on disk, i.e. data stored in
memory should be available in memory only and data stored on disk should be
available on disk only (the disk should not hold a superset of the data).
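
For reference, a minimal sketch of the 2.3 native-persistence configuration
touched by question (1); the region name and sizes are illustrative only, and
note that with native persistence the disk holds the full data set while
memory caches a subset, so this is not by itself an answer to (2).

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistenceConfigSketch {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("persistentRegion");          // illustrative name
        region.setInitialSize(256L * 1024 * 1024);   // keep the in-memory part small
        region.setMaxSize(256L * 1024 * 1024);
        region.setPersistenceEnabled(true);          // full data set goes to disk

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);

        try (Ignite ignite = Ignition.start(cfg)) {
            // With persistence enabled the cluster must be activated before use.
            ignite.active(true);
        }
    }
}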

Thanks,
Rishikesh




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Unable to connect ignite pods in Kubernetes using Ip-finder

2017-11-08 Thread rishi007bansod
Hi,
    I have used the TcpDiscoveryKubernetesIpFinder API and set the master URL
explicitly to https://192.168.120.92 using ipFinder.setMasterUrl(). But I am
still unable to retrieve the IP addresses of the running Ignite pods. I am
using the flannel network in Kubernetes. The following is the error log I am
getting:

*[13:42:10,489][INFO][main][IgniteKernal]

>>>__  
>>>   /  _/ ___/ |/ /  _/_  __/ __/
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/
>>>
>>> ver. 1.9.0#20170302-sha1:a8169d0a
>>> 2017 Copyright(C) Apache Software Foundation
>>>
>>> Ignite documentation: http://ignite.apache.org

[13:42:10,491][INFO][main][IgniteKernal] Config URL: n/a
[13:42:10,491][INFO][main][IgniteKernal] Daemon mode: off
[13:42:10,491][INFO][main][IgniteKernal] OS: Linux
3.10.0-514.21.2.el7.x86_64 amd64
[13:42:10,491][INFO][main][IgniteKernal] OS user: root
[13:42:10,496][INFO][main][IgniteKernal] PID: 7
[13:42:10,496][INFO][main][IgniteKernal] Language runtime: Java Platform API
Specification ver. 1.8
[13:42:10,496][INFO][main][IgniteKernal] VM information: OpenJDK Runtime
Environment 1.8.0_111-8u111-b14-2~bpo8+1-b14 Oracle Corporation OpenJDK
64-Bit Server VM 25.111-b14
[13:42:10,499][INFO][main][IgniteKernal] VM total memory: 27.0GB
[13:42:10,499][INFO][main][IgniteKernal] Remote Management [restart: off,
REST: on, JMX (remote: off)]
[13:42:10,499][INFO][main][IgniteKernal]
IGNITE_HOME=/opt/ignite/apache-ignite-fabric-1.9.0-bin
[13:42:10,500][INFO][main][IgniteKernal] VM arguments:
[-DIGNITE_QUIET=false]
[13:42:10,500][INFO][main][IgniteKernal] Configured caches
['ignite-marshaller-sys-cache', 'ignite-sys-cache',
'ignite-atomics-sys-cache']
[13:42:10,508][INFO][main][IgniteKernal] 3-rd party licenses can be found
at: /opt/ignite/apache-ignite-fabric-1.9.0-bin/libs/licenses
[13:42:10,610][INFO][main][IgnitePluginProcessor] Configured plugins:
[13:42:10,610][INFO][main][IgnitePluginProcessor]   ^-- None
[13:42:10,610][INFO][main][IgnitePluginProcessor]
[13:42:10,703][INFO][main][TcpCommunicationSpi] Successfully bound
communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0,
selectorsCnt=28, selectorSpins=0, pairedConn=false]
[13:42:10,711][WARNING][main][TcpCommunicationSpi] Message queue limit is
set to 0 which may lead to potential OOMEs when running cache operations in
FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and
receiver sides.
[13:42:10,745][WARNING][main][NoopCheckpointSpi] Checkpoints are disabled
(to enable configure any GridCheckpointSpi implementation)
[13:42:10,787][WARNING][main][GridCollisionManager] Collision resolution is
disabled (all jobs will be activated upon arrival).
[13:42:10,792][WARNING][main][NoopSwapSpaceSpi] Swap space is disabled. To
enable use FileSwapSpaceSpi.
[13:42:10,794][INFO][main][IgniteKernal] Security status
[authentication=off, tls/ssl=off]
[13:42:11,254][INFO][main][GridTcpRestProtocol] Command protocol
successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
[13:42:11,303][INFO][main][IgniteKernal] Non-loopback local IPs: 172.17.0.4,
fe80:0:0:0:42:acff:fe11:4%eth0
[13:42:11,303][INFO][main][IgniteKernal] Enabled local MACs: 0242AC110004
[13:42:11,344][INFO][main][TcpDiscoverySpi] Successfully bound to TCP port
[port=47500, localHost=0.0.0.0/0.0.0.0,
locNodeId=3c13c077-b9cc-4f00-91e2-57749c3fea32]
[13:42:11,946][SEVERE][main][TcpDiscoverySpi] Failed to get registered
addresses from IP finder on start (retrying every 2000 ms).
class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite
pods IP addresses.
at
org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1613)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1562)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:974)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:837)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:351)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1850)
at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:268)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:685)
at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1626)
at
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:924)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1602)
at

Unable to connect ignite pods in Kubernetes using Ip-finder

2017-11-07 Thread rishi007bansod
Hi, 
I have initialized my pods as per the steps at
https://apacheignite.readme.io/docs/kubernetes-deployment . I have also tried
setting the Kubernetes network to 1. flannel and 2. weave net, but I get the
same error in both cases. The following are the error logs:

*Flannel Error*

[11:39:30,256][SEVERE][main][TcpDiscoverySpi] Failed to get registered
addresses from IP finder on start (retrying every 2000 ms).
class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite
pods IP addresses.
at
org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1613)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.resolvedAddresses(TcpDiscoverySpi.java:1562)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.sendJoinRequestMessage(ServerImpl.java:974)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:837)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:351)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:1850)
at
org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:268)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:685)
at
org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1626)
at
org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:924)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1799)
at
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1602)
at
org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042)
at
org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:964)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:850)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:749)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:619)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:589)
at org.apache.ignite.Ignition.start(Ignition.java:347)
at
org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:302)
Caused by: java.net.UnknownHostException:
kubernetes.default.svc.cluster.local
at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:184)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:668)
at
sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at
sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:264)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:367)
at
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at
sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1138)
at
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1032)
at
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at
sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1546)
at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1474)
at
sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:254)
at
org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:153)
... 20 more




*Weave Net Error*

[12:14:59] Security status [authentication=off, tls/ssl=off]
[12:15:05,243][SEVERE][main][TcpDiscoverySpi] Failed to get registered
addresses from IP finder on start (retrying every 2000 ms).
class org.apache.ignite.spi.IgniteSpiException: Failed to retrieve Ignite
pods IP addresses.
at
org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder.getRegisteredAddresses(TcpDiscoveryKubernetesIpFinder.java:172)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.registeredAddresses(TcpDiscoverySpi.java:1613)
at

Re: Ignite Nodes not connecting to cluster in docker swarm mode

2017-11-02 Thread rishi007bansod
Hi, 
 Initially I was trying auto discovery in swarm mode, but as mentioned in
https://stackoverflow.com/questions/35039612/multicast-with-docker-swarm-and-overlay-network
, multicast is not supported on an overlay network. So I have now tried static
discovery: I created the subnet 172.18.0.0/28 and listed all of its IPs in the
static discovery configuration. But even in this case the nodes are not able
to connect to each other. The following is the log in -verbose mode:
*
[08:35:54,717][INFO][main][IgniteKernal]

>>>__  
>>>   /  _/ ___/ |/ /  _/_  __/ __/
>>>  _/ // (7 7// /  / / / _/
>>> /___/\___/_/|_/___/ /_/ /___/
>>>
>>> ver. 1.9.0#20170302-sha1:a8169d0a
>>> 2017 Copyright(C) Apache Software Foundation
>>>
>>> Ignite documentation: http://ignite.apache.org

[08:35:54,718][INFO][main][IgniteKernal] Config URL: n/a
[08:35:54,718][INFO][main][IgniteKernal] Daemon mode: off
[08:35:54,718][INFO][main][IgniteKernal] OS: Linux
3.10.0-229.14.1.el7.x86_64 amd64
[08:35:54,718][INFO][main][IgniteKernal] OS user: root
[08:35:54,720][INFO][main][IgniteKernal] PID: 7
[08:35:54,720][INFO][main][IgniteKernal] Language runtime: Java Platform API
Specification ver. 1.8
[08:35:54,721][INFO][main][IgniteKernal] VM information: OpenJDK Runtime
Environment 1.8.0_111-8u111-b14-2~bpo8+1-b14 Oracle Corporation OpenJDK
64-Bit Server VM 25.111-b14
[08:35:54,722][INFO][main][IgniteKernal] VM total memory: 27.0GB
[08:35:54,722][INFO][main][IgniteKernal] Remote Management [restart: off,
REST: on, JMX (remote: off)]
[08:35:54,722][INFO][main][IgniteKernal]
IGNITE_HOME=/opt/ignite/apache-ignite-fabric-1.9.0-bin
[08:35:54,723][INFO][main][IgniteKernal] VM arguments:
[-DIGNITE_QUIET=false]
[08:35:54,723][INFO][main][IgniteKernal] Configured caches
['ignite-marshaller-sys-cache', 'ignite-sys-cache',
'ignite-atomics-sys-cache']
[08:35:54,729][INFO][main][IgniteKernal] 3-rd party licenses can be found
at: /opt/ignite/apache-ignite-fabric-1.9.0-bin/libs/licenses
[08:35:54,806][INFO][main][IgnitePluginProcessor] Configured plugins:
[08:35:54,806][INFO][main][IgnitePluginProcessor]   ^-- None
[08:35:54,806][INFO][main][IgnitePluginProcessor]
[08:35:54,867][INFO][main][TcpCommunicationSpi] Successfully bound
communication NIO server to TCP port [port=47100, locHost=0.0.0.0/0.0.0.0,
selectorsCnt=28, selectorSpins=0, pairedConn=false]
[08:36:14,894][WARNING][main][TcpCommunicationSpi] Message queue limit is
set to 0 which may lead to potential OOMEs when running cache operations in
FULL_ASYNC or PRIMARY_SYNC modes due to message queues growth on sender and
receiver sides.
[08:36:14,937][WARNING][main][NoopCheckpointSpi] Checkpoints are disabled
(to enable configure any GridCheckpointSpi implementation)
[08:36:14,976][WARNING][main][GridCollisionManager] Collision resolution is
disabled (all jobs will be activated upon arrival).
[08:36:14,980][WARNING][main][NoopSwapSpaceSpi] Swap space is disabled. To
enable use FileSwapSpaceSpi.
[08:36:14,982][INFO][main][IgniteKernal] Security status
[authentication=off, tls/ssl=off]
[08:36:15,391][INFO][main][GridTcpRestProtocol] Command protocol
successfully started [name=TCP binary, host=0.0.0.0/0.0.0.0, port=11211]
[08:36:15,436][INFO][main][IgniteKernal] Non-loopback local IPs: 172.18.0.4,
172.28.0.5
[08:36:15,436][INFO][main][IgniteKernal] Enabled local MACs: 0242AC120004,
0242AC1C0005
[08:36:15,474][INFO][main][TcpDiscoverySpi] Successfully bound to TCP port
[port=47500, localHost=0.0.0.0/0.0.0.0,
locNodeId=7e2135d1-5761-4d09-b3a2-55703aa8a3a0]
*

After this, Ignite waits indefinitely without completing node startup.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite Nodes not connecting to cluster in docker swarm mode

2017-11-01 Thread rishi007bansod
Hi,
 When I run multiple Ignite containers in --net=host mode, they connect to
each other. But when I try to connect Ignite containers in swarm (cluster)
mode, they are unable to connect. I am getting the following error message
[in verbose (-v) mode]:

*[10:46:41,947][SEVERE][grid-time-coordinator-#102%null%][GridClockSyncProcessor]
Failed to send time request to remote node
[rmtNodeId=41514e41-97f1-4cda-970f-cb17d0c611d2, addr=/10.255.0.5,
port=31100]
class org.apache.ignite.IgniteCheckedException: Failed to send datagram
message to remote node [addr=/10.255.0.5, port=31100, msg=GridClockMessage
[origNodeId=94cf9c90-285e-4afb-959c-fe649d665ae3,
targetNodeId=41514e41-97f1-4cda-970f-cb17d0c611d2, origTs=1509533201848,
replyTs=0]]
at
org.apache.ignite.internal.processors.clock.GridClockServer.sendPacket(GridClockServer.java:162)
at
org.apache.ignite.internal.processors.clock.GridClockSyncProcessor$TimeCoordinator.requestTime(GridClockSyncProcessor.java:458)
at
org.apache.ignite.internal.processors.clock.GridClockSyncProcessor$TimeCoordinator.body(GridClockSyncProcessor.java:385)
at
org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Operation not permitted (sendto failed)
at java.net.PlainDatagramSocketImpl.send(Native Method)
at java.net.DatagramSocket.send(DatagramSocket.java:693)
at
org.apache.ignite.internal.processors.clock.GridClockServer.sendPacket(GridClockServer.java:158)
... 4 more
*
Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite-Python Support for stream processing

2017-10-11 Thread rishi007bansod
Hi,
Is there any Ignite streaming support in Python?
I want to perform the following operations in Python:
(read data from Kafka) ==> (some stream processing) ==> (finally store the data
into an Ignite cache)
So do we have a Kafka-Ignite connector and an Ignite stream visitor in Python?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Continuous Query on Multiple caches

2017-08-28 Thread rishi007bansod
Hi,
 In our case data comes from 2 Kafka streams. We want to compare the current
data from the 2 streams and take some action (e.g. raise an alert). We want to
make this processing event based, i.e. as soon as data arrives from the 2
streams, we should take the action associated with that event.
For example,
if ((Curr_stream1.f0 - Curr_stream2.f0) > T) then raise an alert.

Initially I thought of caching both streams' data and then comparing it, but
that would take more time to process. A sketch of the event-based approach is
shown below.
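
A minimal sketch of one event-based approach, assuming two hypothetical caches
named "stream1Cache" and "stream2Cache" keyed the same way and a placeholder
Tick value type: the continuous-query listener on the first cache looks up the
matching entry in the second cache and compares the f0 fields.

import javax.cache.event.CacheEntryEvent;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;

public class CrossStreamAlertSketch {
    // Placeholder value type; f0 stands for the compared field.
    static class Tick {
        double f0;
    }

    static void startAlerting(Ignite ignite, double threshold) {
        IgniteCache<Long, Tick> stream1 = ignite.cache("stream1Cache");
        IgniteCache<Long, Tick> stream2 = ignite.cache("stream2Cache");

        ContinuousQuery<Long, Tick> qry = new ContinuousQuery<>();

        // Fires on every update of stream1; the matching key is read from stream2.
        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends Long, ? extends Tick> e : events) {
                Tick other = stream2.get(e.getKey());

                if (other != null && e.getValue().f0 - other.f0 > threshold)
                    System.out.println("ALERT for key " + e.getKey());
            }
        });

        // The returned cursor must stay open for as long as alerts are needed.
        stream1.query(qry);
    }
}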

Thanks,
Rishikesh



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Query-on-Multiple-caches-tp16444p16473.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Continuous Query on Multiple caches

2017-08-28 Thread rishi007bansod
Hi,
   In the documentation at
https://apacheignite.readme.io/v2.1/docs/continuous-queries , a continuous
query is described for a single cache only. In our case I want to use it across
multiple caches; how can we use a continuous query for that? Please provide an
example.

Thanks,
Rishikesh



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Continuous-Query-on-Multiple-caches-tp16444.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error while starting service grid with persistent store enabled

2017-08-22 Thread rishi007bansod
Hi, 
  When the error occurred I was using the following deployment method:

ServiceConfiguration cfg = new ServiceConfiguration();
cfg.setName("myService_rishi");
cfg.setService(new codeToDeploy());
cfg.setTotalCount(9);
cfg.setMaxPerNodeCount(1);
ignite.active(true);
ignite.services().deploy(cfg);

but then I changed the deployment method to:

ignite.active(true);
IgniteServices svcs = ignite.services();
svcs.deployNodeSingleton("TestService", new SurveillanceAlert());

Also, the configuration I had written in the Java file and in the XML file
differed at first; I corrected that later.

Thanks,
Rishikesh




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-while-starting-service-grid-with-persistent-store-enabled-tp15946p16361.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Tuning parameters for performance of ignite Persistence store

2017-08-08 Thread rishi007bansod
Hi,
 I have tried the WAL settings you mentioned; they improve performance for
WAL writes. But when the checkpointing process starts (by default, after 3
minutes), the caching process slows down (almost stops). Is there any way to
write checkpoints to disk in the background so that caching throughput is
maintained?
 Also, although I have attached 3 disks to my machine, only 1 is being used
while writing; is there any way I can use all 3 of them?
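
Purely as an illustration of the knobs involved, a sketch of the 2.1-era
PersistentStoreConfiguration settings that control checkpointing and where the
store, WAL and WAL archive live; the paths and numbers are placeholders, and
whether splitting them across the 3 disks helps will depend on the workload.

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;
import org.apache.ignite.configuration.WALMode;

public class PersistenceTuningSketch {
    static IgniteConfiguration config() {
        PersistentStoreConfiguration store = new PersistentStoreConfiguration();

        // Checkpointing interval and thread count; more threads spread the
        // page writes, a shorter interval keeps individual checkpoints smaller.
        store.setCheckpointingFrequency(60_000);
        store.setCheckpointingThreads(4);

        // LOG_ONLY relaxes WAL fsync guarantees in exchange for throughput.
        store.setWalMode(WALMode.LOG_ONLY);

        // Placeholder paths: putting the store, WAL and WAL archive on
        // different physical disks is one way to involve more than one drive.
        store.setPersistentStorePath("/disk1/ignite/store");
        store.setWalStorePath("/disk2/ignite/wal");
        store.setWalArchivePath("/disk3/ignite/wal-archive");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setPersistentStoreConfiguration(store);

        return cfg;
    }
}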

Thanks,
Rishikesh



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Tuning-parameters-for-performance-of-ignite-Persistence-store-tp16051p16072.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error while starting service grid with persistent store enabled

2017-08-07 Thread rishi007bansod
Hi,
Setting the same configuration in the XML files and in the Java code solved my
problem. One more question: which parameters can we tune, with the persistent
store enabled, to get high throughput for data caching?

Thanks,
Rishikesh




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-while-starting-service-grid-with-persistent-store-enabled-tp15946p16028.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Error while starting service grid with persistent store enabled

2017-08-03 Thread rishi007bansod
Hi,
I am getting the following error when I enable the persistent store and use
the service grid (when more than 1 node is connected to the grid):


[18:47:55]__   
[18:47:55]   /  _/ ___/ |/ /  _/_  __/ __/ 
[18:47:55]  _/ // (7 7// /  / / / _/   
[18:47:55] /___/\___/_/|_/___/ /_/ /___/  
[18:47:55] 
[18:47:55] ver. 2.1.0#20170720-sha1:a6ca5c8a
[18:47:55] 2017 Copyright(C) Apache Software Foundation
[18:47:55] 
[18:47:55] Ignite documentation: http://ignite.apache.org
[18:47:55] 
[18:47:55] Quiet mode.
[18:47:55]   ^-- Logging to file
'/home/rishikesh/apache-ignite-fabric-2.1.0-bin/work/log/ignite-e36f2795.0.log'
[18:47:55]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[18:47:55] 
[18:47:55] OS: Linux 3.10.0-229.14.1.el7.x86_64 amd64
[18:47:55] VM information: Java(TM) SE Runtime Environment 1.8.0_71-b15
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.71-b15
[18:47:55] Configured plugins:
[18:47:55]   ^-- None
[18:47:55] 
[18:47:56] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[18:47:56] Security status [authentication=off, tls/ssl=off]
[18:47:56] Performance suggestions for grid  (fix if possible)
[18:47:56] To disable, set -DIGNITE_PERFORMANCE_SUGGESTIONS_DISABLED=true
[18:47:56]   ^-- Enable G1 Garbage Collector (add '-XX:+UseG1GC' to JVM
options)
[18:47:56]   ^-- Specify JVM heap max size (add '-Xmx[g|G|m|M|k|K]' to
JVM options)
[18:47:56]   ^-- Set max direct memory size if getting 'OOME: Direct buffer
memory' (add '-XX:MaxDirectMemorySize=[g|G|m|M|k|K]' to JVM options)
[18:47:56]   ^-- Disable processing of calls to System.gc() (add
'-XX:+DisableExplicitGC' to JVM options)
[18:47:56]   ^-- Speed up flushing of dirty pages by OS (alter
vm.dirty_expire_centisecs parameter by setting to 500)
[18:47:56] Refer to this page for more performance suggestions:
https://apacheignite.readme.io/docs/jvm-and-system-tuning
[18:47:56] 
[18:47:56] To start Console Management & Monitoring run
ignitevisorcmd.{sh|bat}
[18:47:56] 
[18:47:56] Ignite node started OK (id=e36f2795)
[18:47:56] Topology snapshot [ver=108, servers=9, clients=0, CPUs=56,
heap=59.0GB]
[18:47:57] Default checkpoint page buffer size is too small, setting to an
adjusted value: 2.0 GiB
[18:48:05] Cancelled rebalancing from all nodes [topology=null]
[18:48:07,122][SEVERE][exchange-worker-#106%null%][GridCachePartitionExchangeManager]
Failed to wait for completion of partition map exchange (preloading will not
start): GridDhtPartitionsExchangeFuture [dummy=false, forcePreload=false,
reassign=false, discoEvt=DiscoveryCustomEvent [customMsg=null,
affTopVer=AffinityTopologyVersion [topVer=108, minorTopVer=1],
super=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=e36f2795-f680-44fe-81bb-b7f10279a0ba, addrs=[0:0:0:0:0:0:0:1%lo,
127.0.0.1, 192.168.140.44],
sockAddrs=[pinta.testing.com/192.168.140.44:47508,
/0:0:0:0:0:0:0:1%lo:47508, /127.0.0.1:47508], discPort=47508, order=108,
intOrder=59, lastExchangeTime=1501766287107, loc=true,
ver=2.1.0#20170720-sha1:a6ca5c8a, isClient=false], topVer=108,
nodeId8=e36f2795, msg=null, type=DISCOVERY_CUSTOM_EVT,
tstamp=1501766277038]], crd=TcpDiscoveryNode
[id=1352d67c-64de-4979-aa00-8335edb20683, addrs=[0:0:0:0:0:0:0:1%lo,
127.0.0.1, 192.168.140.44],
sockAddrs=[pinta.testing.com/192.168.140.44:47500,
/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=100,
intOrder=51, lastExchangeTime=1501766276774, loc=false,
ver=2.1.0#20170720-sha1:a6ca5c8a, isClient=false],
exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
[topVer=108, minorTopVer=1], nodeId=e36f2795, evt=DISCOVERY_CUSTOM_EVT],
added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE,
res=true, hash=1885469686], init=true, lastVer=null,
partReleaseFut=GridCompoundFuture [rdc=null, initFlag=1, lsnrCalls=4,
done=true, cancelled=false, err=null, futs=[true, true, true, true]],
exchActions=null, affChangeMsg=null, skipPreload=false,
clientOnlyExchange=false, initTs=1501766277058, centralizedAff=false,
changeGlobalStateE=null, forcedRebFut=null, done=true, evtLatch=0,
remaining=[f5f31d39-d8e6-4a7a-897f-9c693c689438,
e79ff60d-abd2-4929-b627-749040cbe1e7, 34ec752d-ac47-4105-8275-53199a03d5cf,
f81d1cdc-20e8-455c-ab80-e150b5283bb9, 83117cdd-808b-4136-a25d-e7f3500ae59d,
1352d67c-64de-4979-aa00-8335edb20683, f71f01b9-24a0-47d5-885d-61f542b406e2,
19ff8832-436e-4ef2-a710-197fc16c1a7d], super=GridFutureAdapter
[ignoreInterrupts=false, state=DONE, res=class o.a.i.IgniteCheckedException:
Cluster state change failed, hash=1200929855]]
class org.apache.ignite.IgniteCheckedException: Cluster state change failed
at
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.processMessage(GridDhtPartitionsExchangeFuture.java:2045)
at

Re: Ignite Data Streamer Performance is not improving with increase in threads

2017-07-11 Thread rishi007bansod
Hi Andrew,
We have observed that the following method (segment.access()) blocks Ignite
data caching through data streamers (for a single Ignite instance). This limits
our resource utilization, i.e. CPU and memory are not fully used. How can we
avoid this blocking so that we can get maximum performance out of a single
Ignite instance?

Thanks  




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Performance-is-not-improving-with-increase-in-threads-tp14151p14623.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Data Streamer Performance is not improving with increase in threads

2017-07-03 Thread rishi007bansod
Hi Andrey,
Attached is the code we used for benchmarking. Is there any further tuning we
can apply to get better performance out of a single Ignite instance?
 We have also attached logs taken from our tool, where we varied the data
streamer parallelism from 1 to 16 (the default). In this case we observed that:
(1) Ignite by default creates a thread pool of size 56, and the data streamer
uses threads from this pool depending on the parallelism set (is that correct?
Please correct me if I am wrong).
(2) When the data streamer parallelism is set to 1, the while-loop thread
(Timer-0) goes into the waiting state after some interval (here we get a 30k
rate). Why is this happening? Why is the rate limited to 30k here instead of
80k (80k is the rate with the default parallelism)?
(3) With the default parallelism (i.e. 16) the while-loop thread (Timer-0) is
continuously in the running state (here we get only an 80k rate), but the
public thread pools of the data streamers are waiting most of the time. Is this
the reason for the lower throughput?

code.java

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Performance-is-not-improving-with-increase-in-threads-tp14151p14276.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Data Streamer Performance is not improving with increase in threads

2017-06-29 Thread rishi007bansod
I have tried writing a while loop that continuously inserts data (the same
entry) with an incrementing cache key (so that each key is unique). Without
datastrmr.addData(), the while loop generates data at 200K msgs/sec (the data
generation rate is much higher than the caching rate). Is there any blocking
done by the data streamer that limits CPU utilization?

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Performance-is-not-improving-with-increase-in-threads-tp14151p14162.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Ignite Data Streamer Performance is not improving with increase in threads

2017-06-28 Thread rishi007bansod
Hi,
 With data streamer threads = 16 (the default) I am able to get up to 80K
msgs/sec throughput. As I have a 56-core machine (with hyperthreading), I tried
increasing the perNodeParallelOperations parameter of the data streamer to 56,
but no increase in throughput is observed. Also, out of 5600% CPU, the Ignite
instance uses only about 400%. Is there any limiting factor for data caching?
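
For context, a minimal sketch of the data streamer settings in question; the
cache name and the numbers are placeholders.

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public class StreamerTuningSketch {
    static void load(Ignite ignite) {
        try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("testCache")) {
            // Parallel batches kept in flight per node.
            streamer.perNodeParallelOperations(56);

            // Entries buffered per node before a batch is sent.
            streamer.perNodeBufferSize(2048);

            for (long i = 0; i < 1_000_000; i++)
                streamer.addData(i, "value-" + i);
        } // close() flushes whatever is still buffered.
    }
}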

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Performance-is-not-improving-with-increase-in-threads-tp14151.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: System Parameters to improve CPU utilization

2017-06-13 Thread rishi007bansod
Hi,
  We are performing parallel data loading into memory using multiple Ignite
instances (from Kafka to Ignite) on a single node. While caching, CPU
utilization does not go above 70%. How can we improve this?

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/System-Parameters-to-improve-CPU-utilization-tp13562p13640.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


System Parameters to improve CPU utilization

2017-06-09 Thread rishi007bansod
Hi, 
   For my Ignite data caching process I have recorded the following
statistics, in which I found that CPU utilization is not high (only 60-70%).
Also, a high number of minor page faults and context switches/sec are seen
during this run. Are these parameters limiting my system's performance? Is
there any tuning I can apply to improve CPU utilization?

*CPU Utilization:* [chart omitted]

*Page Faults:* [chart omitted]

*Context Switches/sec:* [chart omitted]

I have also tried increasing setStartSize for the cache, but the same number
of page faults and context switches/sec and the same CPU utilization are seen.

*Page Faults when setStartSize is set to 60*1024*1024 (for 60M entries in our
case):* [chart omitted]



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/System-Parameters-to-improve-CPU-utilization-tp13562.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error while passing arraylist as argument to INSERT data into Ignite cache using pyodbc

2017-05-24 Thread rishi007bansod
The SQL statement without "?" parameter markers works correctly.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-while-passing-arraylist-as-argument-to-INSERT-data-into-Ignite-cache-using-pyodbc-tp13049p13132.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error while passing arraylist as argument to INSERT data into Ignite cache using pyodbc

2017-05-23 Thread rishi007bansod
Hi,
I have set the encoding/decoding to UTF-8 as follows:

cnxn.setdecoding(pyodbc.SQL_CHAR, encoding='utf-8')
cnxn.setdecoding(pyodbc.SQL_WCHAR, encoding='utf-8')
cnxn.setencoding(unicode, encoding='utf-8')

as mentioned at
https://github.com/mkleehammer/pyodbc/wiki/Unicode#mysql-and-postgresql ,
but I am still getting the error.

Thanks.





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-while-passing-arraylist-as-argument-to-INSERT-data-into-Ignite-cache-using-pyodbc-tp13049p13097.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ODBC driver error during SELECT query for string data type

2017-05-21 Thread rishi007bansod
Setting the decoder to 'utf8' solved my problem.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ODBC-driver-error-during-SELECT-query-for-string-data-type-tp13024p13051.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Error while passing arraylist as argument to INSERT data into Ignite cache using pyodbc

2017-05-20 Thread rishi007bansod
I have written the following code to insert data into the Ignite cache from
Python using the ODBC driver. In it I want to pass an array list of strings
(id_list) as the argument. My list contains a variable number of elements,
which is why I need to pass it as a parameter to the query.

cursor = connection.cursor()
select_string = "INSERT INTO Person (_key, id, firstName, lastName, salary) VALUES (322, ?, 'abcd', 'dhsagd', 1000)"

id_list = ['test1', 'test2', 'test3', 'test4']
cursor.execute(select_string, id_list)

But when I pass the list of strings as parameters to the query, I get the
following error:

Traceback (most recent call last):
  File "pythonOdbclist.py", line 13, in 
cursor.execute(select_string, id_list)
pyodbc.ProgrammingError: ('The SQL contains 0 parameter markers, but 4
parameters were supplied', 'HY000')





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-while-passing-arraylist-as-argument-to-INSERT-data-into-Ignite-cache-using-pyodbc-tp13049.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


ODBC driver error during SELECT query for string data type

2017-05-19 Thread rishi007bansod
Hi, 
While fetching string fields from the Ignite cache using the ODBC driver in
Python, I am getting the following error:

Traceback (most recent call last):
  File "pythonOdbc.py", line 13, in 
row = cursor.fetchone()
UnicodeDecodeError: 'utf16' codec can't decode byte 0x00 in position 2:
truncated data


The ODBC connection string I have used is:
connection_string =
'DRIVER=/usr/local/lib/libignite-odbc.so;ADDRESS=localhost:10800;CACHE=Person'

Is there any encoding parameter that I need to set to avoid the above error?

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ODBC-driver-error-during-SELECT-query-for-string-data-type-tp13024.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Affinity Function in Apache Ignite

2017-05-18 Thread rishi007bansod
Hi,
   1. What is the default affinity function used in Ignite for
key-to-partition-to-node mapping?
   2. Also, how exactly is the mapping done in RendezvousAffinityFunction using
the highest-random-weight algorithm?
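
For reference, the default is RendezvousAffinityFunction; below is a minimal
sketch of setting it explicitly on a cache, with an illustrative cache name
and partition count.

import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

public class AffinitySketch {
    static CacheConfiguration<Long, String> cacheConfig() {
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("testCache");

        // Keys map to one of 1024 partitions; partitions are then assigned to
        // nodes by the rendezvous (highest-random-weight) hashing of this function.
        cfg.setAffinity(new RendezvousAffinityFunction(false, 1024));

        return cfg;
    }
}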

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Affinity-Function-in-Apache-Ignite-tp12991.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: what does remaps mean in DataStreamerImpl.java

2017-05-04 Thread rishi007bansod
Hi,
I have attached the log file for one Ignite node: ignite-b6321e0a.zip

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/what-does-remaps-mean-in-DataStreamerImpl-java-tp12033p12414.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: what does remaps mean in DataStreamerImpl.java

2017-04-24 Thread rishi007bansod
Hi,
   In our case we killed 3 Ignite nodes among the 30 nodes we had started in
the grid, to check Ignite's fault tolerance, i.e. whether the remaining 27
nodes remap to the Kafka consumers or not. Node
fe6409cb-88a2-43da-9eb7-9b17cf5debcb is one of the nodes we killed for this
test. Another thing: the Kafka consumers keep consuming messages while these 3
nodes are down, and we get the *Data streamer has been closed* message
displayed continuously, along with the *class
org.apache.ignite.IgniteCheckedException: Failed to finish operation (too many
remaps): 32* error.


Thanks,
Rishikesh



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/what-does-remaps-mean-in-DataStreamerImpl-java-tp12033p12213.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: what does remaps mean in DataStreamerImpl.java

2017-04-24 Thread rishi007bansod
Hi,
   In our initial setup we had kept allowOverwrite at its default, i.e. false.
We then tried setting the allowOverwrite property of the IgniteDataStreamer to
true. Initially, when we killed Ignite instances, we got the following error,
and due to it some of the Ignite nodes/instances got disconnected:
[17:40:30,386][WARNING][grid-nio-worker-tcp-comm-0-#65%null%][TcpCommunicationSpi]
Communication SPI session write timed out (consider increasing
'socketWriteTimeout' configuration property)
[remoteAddr=/192.168.140.44:42370, writeTimeout=2000]
[17:40:42,397][INFO][grid-timeout-worker-#63%null%][IgniteKernal]
Metrics for local node (to disable set 'metricsLogFrequency' to 0)
^-- Node [id=ced80027, name=null, uptime=00:16:00:157]
^-- H/N/C [hosts=3, nodes=24, CPUs=168]
^-- CPU [cur=0.13%, avg=5%, GC=0%]
^-- Heap [used=2929MB, free=28.48%, comm=4096MB]
^-- Non heap [used=94MB, free=93.77%, comm=96MB]
^-- Public thread pool [active=0, idle=56, qSize=0]
^-- System thread pool [active=0, idle=55, qSize=0]
^-- Outbound messages queue [size=13]
[17:41:06,346][WARNING][grid-timeout-worker-#63%null%][GridCachePartitionExchangeManager]
Found long running cache future [startTime=17:32:16.736,
curTime=17:41:06.340, fut=GridDhtAtomicSingleUpdateFuture
[key=KeyCacheObjectImpl [val=8563779624, hasValBytes=true],
nearReaderEntry=null]]
[17:41:06,347][WARNING][grid-timeout-worker-#63%null%][GridCachePartitionExchangeManager]
Found long running cache future [startTime=17:33:11.642,
curTime=17:41:06.340, fut=GridDhtAtomicSingleUpdateFuture
[key=KeyCacheObjectImpl [val=9092344352, hasValBytes=true],
nearReaderEntry=null]]
[17:41:06,347][WARNING][grid-timeout-worker-#63%null%][GridCachePartitionExchangeManager]
Found long running cache future [startTime=17:34:20.429,
curTime=17:41:06.340, fut=GridDhtAtomicSingleUpdateFuture
[key=KeyCacheObjectImpl [val=9569548832, hasValBytes=true],
nearReaderEntry=null]]
[17:41:06,347][WARNING][grid-timeout-worker-#63%null%][GridCachePartitionExchangeManager]
Found long running cache future [startTime=17:32:38.946,
curTime=17:41:06.340, fut=GridDhtAtomicSingleUpdateFuture
[key=KeyCacheObjectImpl [val=8278360104, hasValBytes=true],
nearReaderEntry=null]]
[17:41:06,347][WARNING][grid-timeout-worker-#63%null%][GridCachePartitionExchangeManager]
Found long running cache future [startTime=17:32:16.736,
curTime=17:41:06.340, fut=GridDhtAtomicSingleUpdateFuture
[key=KeyCacheObjectImpl [val=8514297071, hasValBytes=true],
nearReaderEntry=null]]
[17:41:06,347][WARNING][grid-timeout-worker-#63%null%][GridCachePartitionExchangeManager]
Found long running cache future [startTime=17:32:26.144,
curTime=17:41:06.340, fut=GridDhtAtomicSingleUpdateFuture
[key=KeyCacheObjectImpl [val=8099247144, hasValBytes=true],
nearReaderEntry=null]]
[17:41:06,347][WARNING][grid-timeout-worker-#63%null%][GridCachePartitionExchangeManager]
Found long running cache future [startTime=17:32:46.651,
curTime=17:41:06.340, fut=GridDhtAtomicSingleUpdateFuture
[key=KeyCacheObjectImpl [val=8399928360, hasValBytes=true],
nearReaderEntry=null]]
[17:41:06,347][WARNING][grid-timeout-worker-#63%null%][GridCachePartitionExchangeManager]
Found long running cache future [startTime=17:32:32.752,
curTime=17:41:06.340, fut=GridDhtAtomicSingleUpdateFuture
[key=KeyCacheObjectImpl [val=8326161647, hasValBytes=true],
nearReaderEntry=null]]
[17:41:06,347][WARNING][grid-timeout-worker-#63%null%][GridCachePartitionExchangeManager]
Found long running cache future [startTime=17:33:11.642,
curTime=17:41:06.340, fut=GridDhtAtomicSingleUpdateFuture
[key=KeyCacheObjectImpl [val=9185813024, hasValBytes=true],
nearReaderEntry=null]]
[17:41:06,347][WARNING][grid-timeout-worker-#63%null%][GridCachePartitionExchangeManager]
Found long running cache future [startTime=17:36:02.190,
curTime=17:41:06.340, fut=GridDhtAtomicSingleUpdateFuture
[key=KeyCacheObjectImpl [val=8426645544, hasValBytes=true],
nearReaderEntry=null]]
[17:41:06,347][WARNING][grid-timeout-worker-#63%null%][GridCachePartitionExchangeManager]
Found long running cache future [startTime=17:32:23.224,
curTime=17:41:06.340, fut=GridDhtAtomicSingleUpdateFuture
[key=KeyCacheObjectImpl [val=8103641327, hasValBytes=true],
nearReaderEntry=null]]


So we increased the "socketWriteTimeout" parameter to 20 seconds, but now we
again got the following remaps error:



[19:20:27,185][SEVERE][sys-#338%null%][DataStreamerImpl] DataStreamer
operation failed.
class org.apache.ignite.IgniteCheckedException: Failed to finish operation
(too many remaps): 32
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5.apply(DataStreamerImpl.java:863)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5.apply(DataStreamerImpl.java:828)
at
org.apache.ignite.internal.util.future.GridFutureAdapter$ArrayListener.apply(GridFutureAdapter.java:456)
at

Re: what does remaps mean in DataStreamerImpl.java

2017-04-19 Thread rishi007bansod
Hi, below is our architecture:

1. Ignite receives data via the Kafka streamer.
2. A tuple extractor is implemented in the Ignite code.
Everything works fine up to this step.
3. We stop Kafka. No error yet.
4. We kill 2 instances (out of n instances) of Ignite.
5. Kafka consumer remapping also happens without any issue.
6. Cache rebalancing also appears to complete (per the log).
7. About 2 minutes later, Kafka is started again.
And then we get the error below:
class org.apache.ignite.IgniteCheckedException: Failed to finish operation
(too many remaps): 32

Even after cache rebalancing and consumer remapping appear to be complete, the
error appears only when Ignite receives new data through Kafka, so we are not
able to understand what remapping Ignite is doing.

One point just for info: we are using a data streamer to add data to one cache
and an Ignite visitor to process the data. All caches are partitioned and have
backups set to 1 (a sketch of that cache configuration is shown below).
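
A minimal sketch of the cache configuration described above, partitioned with
1 backup; the cache name and key/value types are placeholders.

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class StreamCacheConfigSketch {
    static CacheConfiguration<String, byte[]> cacheConfig() {
        CacheConfiguration<String, byte[]> cfg = new CacheConfiguration<>("kafkaIngestCache");

        // Partitioned cache with one backup copy of every partition, so a
        // single killed node does not lose data while rebalancing runs.
        cfg.setCacheMode(CacheMode.PARTITIONED);
        cfg.setBackups(1);

        return cfg;
    }
}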



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/what-does-remaps-mean-in-DataStreamerImpl-java-tp12033p12085.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: what does remaps mean in DataStreamerImpl.java

2017-04-19 Thread rishi007bansod
Hi, the backup count in our case is 1.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/what-does-remaps-mean-in-DataStreamerImpl-java-tp12033p12083.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error while running query against Ignite Server Node

2017-04-02 Thread rishi007bansod
Hi,
I am running 3 instances of Ignite per machine (3 machines in total). Data is
inserted into a partitioned cache continuously, and I am running queries
against this cache, after which I get the above error. Is there any way I can
make the topology stable before querying the data? For example, how can I query
the data up to a certain snapshot of the cache while data is continuously being
added, to avoid this error? (The error only occurs for queries run while the
cache is being populated, not once the cache is completely loaded.)

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-while-running-query-against-Ignite-Server-Node-tp11030p11643.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error while running query against Ignite Server Node

2017-03-30 Thread rishi007bansod
Hi,
But this error is displayed continuously, and some of the data is lost. How
can I avoid this?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-while-running-query-against-Ignite-Server-Node-tp11030p11596.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Cache get using sub-fields of composite key

2017-03-28 Thread rishi007bansod
Hi,
I have a composite key object with more than one field (f1, f2) for my cache.
Now I want to get cache values using cache.get() based on either f1 or f2 alone
(not both f1 and f2). How can I do this?
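
Since cache.get() needs the complete key, here is a minimal sketch of one
alternative: a ScanQuery filtered on a single key field. The CompositeKey
class is a placeholder, and an indexed SQL query would likely be the faster
option for large caches.

import java.util.List;

import javax.cache.Cache;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ScanQuery;

public class SubFieldLookupSketch {
    // Placeholder composite key with two fields.
    static class CompositeKey {
        long f1;
        String f2;
    }

    // Returns all entries whose key has the given f1, whatever f2 is.
    static List<Cache.Entry<CompositeKey, String>> byF1(
            IgniteCache<CompositeKey, String> cache, long f1) {
        return cache.query(
            new ScanQuery<CompositeKey, String>((key, val) -> key.f1 == f1)).getAll();
    }
}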

Thanks. 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cache-get-using-sub-fields-of-composite-key-tp11514.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Fault Tolerance in Ignite Services

2017-03-19 Thread rishi007bansod
Hi,
I have deployed a service (some job to be performed on Ignite nodes) on 4
Ignite nodes. If one node fails while the job is midway through execution, I
want another node to take over the job and continue executing it. How can we
handle this type of failure with service grids? I have read about the failover
SPI and the checkpointing SPI; which of these can be used in my case? Are there
any examples of the same?

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Fault-Tolerance-in-Ignite-Services-tp11295.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Error while running query against Ignite Server Node

2017-03-04 Thread rishi007bansod
Hi, 
I am getting the following error after running a query against the Ignite
cache from a client node:


[17:53:28] (err) Failed to execute compound future reducer:
GridCompoundFuture [rdc=null, initFlag=1, lsnrCalls=0, done=false,
cancelled=false, err=null, futs=[true]]class
org.apache.ignite.IgniteCheckedException: DataStreamer request failed
[node=0a0580b8-2eb1-4531-98d4-b04bb910f0df]
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$Buffer.onResponse(DataStreamerImpl.java:1777)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$3.onMessage(DataStreamerImpl.java:335)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1215)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:843)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$2100(GridIoManager.java:108)
at
org.apache.ignite.internal.managers.communication.GridIoManager$6.run(GridIoManager.java:783)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: class org.apache.ignite.IgniteCheckedException: DataStreamer will
retry data transfer at stable topology [reqTop=AffinityTopologyVersion
[topVer=2597, minorTopVer=10], topVer=AffinityTopologyVersion [topVer=2598,
minorTopVer=0], node=remote]
at
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.localUpdate(DataStreamProcessor.java:339)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.processRequest(DataStreamProcessor.java:297)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor.access$000(DataStreamProcessor.java:56)
at
org.apache.ignite.internal.processors.datastreamer.DataStreamProcessor$1.onMessage(DataStreamProcessor.java:86)
... 7 more

What is the cause of this error?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-while-running-query-against-Ignite-Server-Node-tp11030.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Effect of Autoflush frequency in Ignite Data Streamer

2017-03-01 Thread rishi007bansod
Suppose the autoflush frequency is 1 msec for an Ignite data streamer that puts
data into a cache. Now, if I query this cache, will the query result also
include data from the data streamer buffer that has not yet been flushed into
the cache (given that we have set the flush frequency to 10 sec)?
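
A minimal sketch of the behavior in question: entries sitting in the streamer's buffer are not yet written to the cache, so queries do not see them until the autoflush interval elapses or flush()/close() is called. The cache name is an assumption.

    try (IgniteDataStreamer<Long, String> stmr = ignite.dataStreamer("myCache")) {
        stmr.autoFlushFrequency(10_000); // flush buffered entries every 10 seconds

        stmr.addData(1L, "value-1");     // may still be buffered (invisible to queries)

        stmr.flush();                    // force buffered entries into the cache now
    }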



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Effect-of-Autoflush-frequency-in-Ignite-Data-Streamer-tp10977.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Get high throughput for loading data in ignite

2017-02-22 Thread rishi007bansod
Hi,
   Is it possible to use partition-aware data loading in the case of streaming
data? In my case I have data stored in Kafka and I am passing it to an Ignite
instance, so how can we use partition-aware data loading here? The example
provided at the link
http://apacheignite.gridgain.org/docs/data-loading#section-partition-aware-data-loading
only covers loading data from a persistent store. Is there any way I can use
partition-aware data loading when loading data from Kafka into Ignite?

Thanks.
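
One possible approach, sketched under assumptions: the Affinity API tells you which Ignite partition (and node) a key maps to, so you could produce each Kafka record to a partition derived from the Ignite partition of its key, and then each consumer feeding an Ignite node streams mostly local keys. The cache name, someKey and numKafkaPartitions are illustrative.

    Affinity<DataKey> aff = ignite.affinity("myCache");

    int ignitePartition = aff.partition(someKey);            // Ignite partition of this key
    int kafkaPartition  = ignitePartition % numKafkaPartitions;

    // producer.send(new ProducerRecord<>(topic, kafkaPartition, keyBytes, valueBytes));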



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Get-high-throughput-for-loading-data-in-ignite-tp10483p10804.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Get high throughput for loading data in ignite

2017-02-16 Thread rishi007bansod
Hi,
In my case the CPU is becoming the bottleneck when 2 nodes are connected; I have
attached a snapshot of the atop command output.
I have not set the number of backups; it is the default (i.e. 0) in my case. Is
there any way I can improve performance? Which Ignite parameters or system
parameters should I look at? Are there any settings I should check when 2 or
more nodes are connected in the grid? In my case the data loading rate is not
scaling up with 2 nodes connected.

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Get-high-throughput-for-loading-data-in-ignite-tp10483p10675.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Get high throughput for loading data in ignite

2017-02-15 Thread rishi007bansod
Hi,
On one server node I am getting up to 230,000 rec/sec. To scale up this data
loading rate I have added one more node with the same configuration. But the
loading rate is hardly scaling: with 2 nodes connected I am getting only about
270,000 rec/sec. What best practices should be followed while loading data so
that I can scale the loading rate on this 2-node cluster?

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Get-high-throughput-for-loading-data-in-ignite-tp10483p10655.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Issue in starting IgniteSinkConnector with Distributed Worker mode

2017-02-15 Thread rishi007bansod
Hi,
As mentioned in the post
https://dzone.com/articles/linking-apache-ignite-and-apache-kafka-for-highly
I am able to use connect-standalone.sh for the Ignite-Kafka connection. But
now, to scale these connectors across a cluster, I am trying to use
connect-distributed.sh from Kafka. I have also attached the *.properties files I
am using.
 But in the case of connect-distributed.sh, i.e. distributed worker mode, the
Ignite instance is not getting started at all. What might be the issue in this
case? Is Kafka's distributed worker mode supported by Ignite?
connect-distributed.properties

  
ignite-connector.properties

  



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Issue-in-starting-IgniteSinkConnector-with-Distributed-Worker-mode-tp10647.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Get high throughput for loading data in ignite

2017-02-14 Thread rishi007bansod
Hi Val,
  I have the following machine configuration:
  Number of CPU cores: 24
  RAM: 65 GB
  Processor: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz

  My message size is 512 bytes.

  Currently I am getting a data loading throughput of about 230,000
messages/sec (i.e. about 117.76 MB/sec) on a single machine.
   Here I want to ask: what is the maximum cache write rate that is achievable
using Apache Ignite? What is the maximum data loading throughput to expect with
this configuration?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Get-high-throughput-for-loading-data-in-ignite-tp10483p10638.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Reduce Movement of data in cluster while data loading in PARTITIONED cache

2017-02-13 Thread rishi007bansod
But in my case I am loading data from Kafka and not from a database, so I am
loading data using the Ignite data streamer. What can I do for partition-aware
data loading in this case (i.e. for the Ignite data streamer)?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Reduce-Movement-of-data-in-cluster-while-data-loading-in-PARTITIONED-cache-tp10570p10603.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Equal Distribution of data among Ignite instances

2017-02-13 Thread rishi007bansod
I have 9 Ignite server instances I0, I1, ..., I8 with a cache in PARTITIONED
mode, into which I am loading data in parallel from partitions P0, P1, ..., P8
in Kafka. Each partition P0, P1, ..., P8 contains a number of entries that can
be uniquely identified by the field seq_no, and I am using part_ID to collocate
entries from one Kafka partition onto one instance only. So I have defined the
key as:

class Key {
    int seq_no;

    @AffinityKeyMapped
    int part_ID; // collocates entries from one Kafka partition onto one instance
}

So I am trying to achieve a one-to-one mapping between cache entries on Ignite
instances and Kafka partitions, e.g. I0->P0, I1->P1, ..., I8->P8. But the
mapping I am actually getting is:

I0-> NULL(No Entries), 
I1-> P5, 
I2-> NULL,
I3-> P7,
I4-> P2, P6
I5-> P1
I6-> P8
I7-> P0, P4
I8-> P3

So the affinity part is achieved, i.e. entries with the same partition ID get
cached on the same Ignite instance. But the data is not equally distributed
among the Ignite instances: I4 and I7 hold data for 2 partitions each, whereas
I0 and I2 do not hold any data. How can we achieve an equal distribution of
data?
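
One way to force such a mapping is a custom AffinityFunction that uses part_ID directly as the partition number and assigns partition i to the i-th node in a stable ordering. This is only an illustrative sketch assuming 9 partitions, no backups, and the Key class above; it does not handle rebalancing concerns beyond re-running assignPartitions on topology changes.

    import java.util.*;
    import org.apache.ignite.cache.affinity.AffinityFunction;
    import org.apache.ignite.cache.affinity.AffinityFunctionContext;
    import org.apache.ignite.cluster.ClusterNode;

    public class OneToOneAffinityFunction implements AffinityFunction {
        private static final int PARTS = 9;

        @Override public void reset() { /* no state to reset */ }

        @Override public int partitions() { return PARTS; }

        @Override public int partition(Object key) {
            // With @AffinityKeyMapped on part_ID, Ignite normally passes the
            // part_ID value here; the instanceof check covers the full-key case.
            int partId = key instanceof Key ? ((Key)key).part_ID : (Integer)key;
            return Math.abs(partId) % PARTS;
        }

        @Override public List<List<ClusterNode>> assignPartitions(AffinityFunctionContext ctx) {
            List<ClusterNode> nodes = new ArrayList<>(ctx.currentTopologySnapshot());
            nodes.sort(Comparator.comparing(ClusterNode::id)); // stable node ordering

            List<List<ClusterNode>> assignment = new ArrayList<>(PARTS);
            for (int p = 0; p < PARTS; p++)
                assignment.add(Collections.singletonList(nodes.get(p % nodes.size())));

            return assignment;
        }

        @Override public void removeNode(UUID nodeId) { /* no per-node state */ }
    }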



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Equal-Distribution-of-data-among-Ignite-instances-tp10602.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Reduce Movement of data in cluster while data loading in PARTITIONED cache

2017-02-11 Thread rishi007bansod
Hi,
 Is there any way we can reduce the movement of data across nodes in the
cluster during data loading, so that we can speed up the loading process? From
observation I get better data loading rates in cache LOCAL mode, as there is no
movement of data across Ignite instances. Is there some setting for the
PARTITIONED cache mode so that movement across the network is reduced in that
mode as well?

Thanks. 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Reduce-Movement-of-data-in-cluster-while-data-loading-in-PARTITIONED-cache-tp10570.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Get high throughput for loading data in ignite

2017-02-09 Thread rishi007bansod
The cache is configured in OFF HEAP, PARTITIONED mode. Data is read from a Kafka
topic. There must be some reference settings, e.g. for a machine with a certain
amount of memory and number of cores, how should the data streamer be tuned?
Which system parameters should I look at to improve data loading performance?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Get-high-throughput-for-loading-data-in-ignite-tp10483p10529.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Partition mapping to Ignite node

2017-02-07 Thread rishi007bansod
I looked at the following link:
https://ignite.apache.org/releases/1.7.0/javadoc/org/apache/ignite/cache/affinity/AffinityFunction.html

But I am not getting an idea of how I can use AffinityFunctionContext to map a
certain partition to a certain node (i.e. how I can define my own mapping). Can
you give an example of the same (as the documentation above does not explain
this part in detail)?

Thanks.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Partition-mapping-to-Ignite-node-tp10300p10485.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Get high throughput for loading data in ignite

2017-02-07 Thread rishi007bansod
I am loading data into the cache through KafkaStreamer and IgniteDataStreamer.
What are the ideal settings for IgniteDataStreamer (for parameters like
perNodeBufferSize, perNodeParallelOperations, autoFlushFrequency) to load data
at a high rate? Also, are there any Linux system settings that can increase data
loading performance significantly? What else can I try to improve data load
throughput? My message size is 500 bytes and I am getting a rate of only 50k
msgs/sec, i.e. 25 MB/s, while RAM write rates are up to 100 GB/sec. How can I
improve this rate?
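
A minimal sketch of the usual streamer knobs; good values depend on the cluster, so the numbers below are only illustrative, and the cache name and nextBatchFromKafka() helper are assumptions.

    try (IgniteDataStreamer<DataKey, DataValue> stmr = ignite.dataStreamer("myCache")) {
        stmr.allowOverwrite(false);            // fastest path: initial load, no overwrites
        stmr.perNodeBufferSize(4096);          // entries buffered per node before sending
        stmr.perNodeParallelOperations(16);    // in-flight batches per node
        stmr.autoFlushFrequency(2000);         // flush at least every 2 seconds

        for (Map.Entry<DataKey, DataValue> e : nextBatchFromKafka())   // hypothetical helper
            stmr.addData(e.getKey(), e.getValue());
    }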



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Get-high-throughput-for-loading-data-in-ignite-tp10483.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Deploying cache on selected server nodes within cluster

2017-02-02 Thread rishi007bansod
In following example,
https://dzone.com/articles/how-apache-ignite-helped-a-large-bank-process-geog-1

  

How is the data cache deployed on selected servers within the cluster? When
using a cache in PARTITIONED mode, it gets distributed across all servers in the
cluster by default. How can we deploy it only on selected servers within the
cluster?
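
A minimal sketch of one way to do this: a node filter on the cache configuration keeps partitions only on nodes matching a predicate. The attribute name "cacheGroup" and its value are assumptions; the attribute would be set in the IgniteConfiguration of the target nodes.

    CacheConfiguration<DataKey, DataValue> ccfg = new CacheConfiguration<>("ordersCache");
    ccfg.setCacheMode(CacheMode.PARTITIONED);
    ccfg.setNodeFilter(node -> "ordersGroup".equals(node.attribute("cacheGroup")));

    ignite.createCache(ccfg);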



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Deploying-cache-on-selected-server-nodes-within-cluster-tp10383.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Partition mapping to Ignite node

2017-01-31 Thread rishi007bansod
Thanks. Can you give an example of how we can use an AffinityFunction for this
purpose? Or how can we write hashCode() for key-to-node mapping?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Partition-mapping-to-Ignite-node-tp10300p10328.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Get distributed query data from Ignite local caches in cluster

2017-01-30 Thread rishi007bansod
I have an Ignite cache named "IgniteCache" on each node in a cluster (of 2
servers) with LOCAL mode enabled. A certain number of entries are loaded into
these local caches. Now I have started a separate client node which queries data
from this "IgniteCache" on the cluster. But whenever I query the data, I get a
null result (instead of getting data from both server nodes).
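
A minimal sketch of a workaround: a LOCAL cache is an independent cache on every node, so a client cannot query the servers' entries directly; one option is to broadcast a closure to the server nodes, run the query locally on each, and merge the results on the client. The key/value types are assumptions.

    Collection<List<?>> perNode = ignite.compute(ignite.cluster().forServers()).broadcast(
        new IgniteCallable<List<?>>() {
            @IgniteInstanceResource
            private Ignite local;

            @Override public List<?> call() {
                IgniteCache<DataKey, DataValue> c = local.cache("IgniteCache");
                return c.query(new ScanQuery<DataKey, DataValue>()).getAll();
            }
        });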



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Get-distributed-query-data-from-Ignite-local-caches-in-cluster-tp10310.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Windowing in Apache Ignite

2017-01-28 Thread rishi007bansod
Is there any concept of sliding time windows in Ignite (a time at which a query
gets fired automatically)? Also, can we use StreamVisitor for windowing (without
storing data in an Ignite cache)?
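
A minimal sketch of how sliding windows are commonly modeled in Ignite streaming: a cache whose entries expire, so queries always see only the most recent interval. Firing the query itself on a timer still needs an external scheduler. The Event class and window length are assumptions.

    CacheConfiguration<Long, Event> winCfg = new CacheConfiguration<>("lastMinute");
    winCfg.setExpiryPolicyFactory(
        CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 1)));

    IgniteCache<Long, Event> window = ignite.createCache(winCfg);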



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Windowing-in-Apache-Ignite-tp10301.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Partition mapping to Ignite node

2017-01-28 Thread rishi007bansod
I have a few questions regarding partition mapping:
1. Is there any way I can map Kafka partitions one-to-one to Ignite nodes? For
example, if I have 2 partitions P1, P2 in Kafka and an Ignite partitioned cache
on 2 nodes, I want data from partition P1 to be cached on one node and data
from partition P2 on the other node.
2. Is there any way we can change the hash function in Ignite (make it user
defined) so that we can cache entries on nodes in the cluster as per our
requirement?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Partition-mapping-to-Ignite-node-tp10300.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Service Deployment in Ignite

2017-01-28 Thread rishi007bansod
I have a few questions regarding an Ignite cluster:
1. What is the standard practice for deploying a service across an Ignite
cluster (of server nodes): through a separate client node or through a server
node?
2. Is there any grid/job manager in Ignite which manages task distribution and
load balancing in the Ignite cluster?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Service-Deployment-in-Ignite-tp10299.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Update value in stream transformer

2017-01-09 Thread rishi007bansod
I want to update the current value based on the previous value in the data
streamer, so I am using a stream transformer. But initially, when the cache is
empty, I want to add entries to the cache (using the same data streamer) as-is,
without an update. In that case the stream transformer provides initial null
values. I want to take the incoming values initially and add them as-is. How
can we do this?

(Since I have to update values after the initial data insertion, I have not
considered the stream visitor.)
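
A minimal sketch of this pattern: the StreamTransformer receives the current cache value (null when the key is not in the cache yet), so the "insert as-is first, update afterwards" logic reduces to a null check. The cache name and Long value type are assumptions.

    try (IgniteDataStreamer<String, Long> stmr = ignite.dataStreamer("myCache")) {
        stmr.allowOverwrite(true); // typically required for receivers that read existing values

        stmr.receiver(StreamTransformer.from((entry, args) -> {
            Long prev = entry.getValue();        // null on the very first insert
            Long incoming = (Long)args[0];

            entry.setValue(prev == null ? incoming : prev + incoming);
            return null;
        }));

        stmr.addData("key-1", 10L);
    }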




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Update-value-in-stream-transformer-tp9974.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Trigger query in ignite

2017-01-09 Thread rishi007bansod
Does Ignite have any *time-based event (scheduling)* on the *Ignite data
streamer* by which we can trigger some operation on the data added by the
streamer into the cache? (The Ignite cache has events on get/put operations, but
there is no time-based event.)
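
A minimal sketch of a simple substitute: there is no timer event attached to the data streamer itself, so a plain scheduled task that periodically queries the cache can play that role. The cache name and the SQL table name DataValue are assumptions.

    IgniteCache<DataKey, DataValue> cache = ignite.cache("myCache");
    ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();

    timer.scheduleAtFixedRate(() -> {
        List<List<?>> rows =
            cache.query(new SqlFieldsQuery("SELECT COUNT(*) FROM DataValue")).getAll();
        System.out.println("Rows streamed so far: " + rows.get(0).get(0));
    }, 10, 10, TimeUnit.SECONDS);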



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Trigger-query-in-ignite-tp9930p9972.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Scan query vs Sql query

2016-12-28 Thread rishi007bansod
From a performance point of view, which type of query is better? When should we
use a scan query and when should we use a SQL query?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Scan-query-vs-Sql-query-tp9793.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Partitions within single node in Apache Ignite

2016-12-28 Thread rishi007bansod
Can you give an example of how we can run a scan query by partition id on a
single node?
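
A minimal sketch, assuming hypothetical DataKey/DataValue types and an already-obtained cache reference:

    int partId = 5; // partition to scan (illustrative)

    ScanQuery<DataKey, DataValue> qry = new ScanQuery<>();
    qry.setPartition(partId);
    qry.setLocal(true); // scan only the data held on this node

    for (Cache.Entry<DataKey, DataValue> e : cache.query(qry))
        System.out.println(e.getKey() + " -> " + e.getValue());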



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Partitions-within-single-node-in-Apache-Ignite-tp9726p9792.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error while writing to oracle DB

2016-12-27 Thread rishi007bansod
Thanks, increasing the number of processes solved my problem.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-while-writing-to-oracle-DB-tp9740p9771.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error while writethrough operation in Ignite

2016-12-25 Thread rishi007bansod
My files are attached:
New_order_error.rar

I can read through from this new_order database table, but I am unable to write
to it.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-while-writethrough-operation-in-Ignite-tp9696p9729.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Improve data loading speed from persistant data storage

2016-12-25 Thread rishi007bansod
Can you give an example of how we can *connect IgniteDataStreamer with
loadCache()*?
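
A minimal sketch of the common alternative: read rows from the database with plain JDBC and push them through IgniteDataStreamer, which batches writes per node and is usually much faster than per-entry puts. The jdbcUrl, table name and buildKey/buildValue helpers are assumptions.

    try (Connection conn = DriverManager.getConnection(jdbcUrl);
         Statement st = conn.createStatement();
         ResultSet rs = st.executeQuery("SELECT * FROM ORDERS");
         IgniteDataStreamer<DataKey, DataValue> stmr = ignite.dataStreamer("ordersCache")) {

        stmr.allowOverwrite(false); // initial bulk load

        while (rs.next())
            stmr.addData(buildKey(rs), buildValue(rs));
    }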



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Improve-data-loading-speed-from-persistant-data-storage-tp9692p9727.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Partitions within single node in Apache Ignite

2016-12-25 Thread rishi007bansod
Is there any concept of partitions in Ignite for parallel processing of data
within a single node? That is, can there be more than one partition within one
node, for multi-threading queries within a single node? If there is such an
option, how can we set it?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Partitions-within-single-node-in-Apache-Ignite-tp9726.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error while writethrough operation in Ignite

2016-12-22 Thread rishi007bansod
Ignite version 1.7
I am using Ojdbc version 7. 

Following is the cache config file I am using
CacheConfig.java
  



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-while-writethrough-operation-in-Ignite-tp9696p9703.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Error while writethrough operation in Ignite

2016-12-22 Thread rishi007bansod
But I have used the CacheConfig file generated by the schema import utility;
what might be the problem?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Error-while-writethrough-operation-in-Ignite-tp9696p9701.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Improve data loading speed from persistant data storage

2016-12-21 Thread rishi007bansod
I am loading data from an Oracle database using the *cache.loadCache()* method,
but it is taking a long time (about *5 minutes to load 300 MB* of data). Can we
use *IgniteDataStreamer* here to improve data loading speed? Or is there any
other way to bulk load data from the underlying persistent database?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Improve-data-loading-speed-from-persistant-data-storage-tp9692.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Variation in BinaryObjectImpl object size with number of entries

2016-12-21 Thread rishi007bansod
The JVM version I have used is 1.8. So for a heap size > 32 GB, will the object
size of BinaryObjectImpl always be 56 bytes?
Also, are the calculations at the link
http://apacheignite.gridgain.org/v1.8/docs/capacity-planning
done for a cache size > 32 GB?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Variation-in-BinaryObjectImpl-object-size-with-number-of-entries-tp9673p9680.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Off heap memory contents in linux

2016-12-21 Thread rishi007bansod
I want to see per-entry memory consumption for off-heap memory in Ignite. Is
there any tool with which I can see the memory consumption of only the off-heap
entries in Ignite (excluding JVM heap usage)?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Off-heap-memory-contents-in-linux-tp9677.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Variation in BinaryObjectImpl object size with number of entries

2016-12-21 Thread rishi007bansod
The following arguments are passed when I load 39M entries:

-server 
-Xms35g 
-Xmx35g 
-XX:+UseParNewGC 
-XX:+UseConcMarkSweepGC 
-XX:+UseTLAB 
-XX:NewSize=128m 
-XX:MaxNewSize=128m 
-XX:MaxTenuringThreshold=0 
-XX:SurvivorRatio=1024 
-XX:+UseCMSInitiatingOccupancyOnly 
-XX:CMSInitiatingOccupancyFraction=40
-XX:MaxGCPauseMillis=1000 
-XX:InitiatingHeapOccupancyPercent=50 
-XX:+UseCompressedOops
-XX:ParallelGCThreads=8 
-XX:ConcGCThreads=8 
-XX:+DisableExplicitGC


Heap Dump size in this case is 31.8 GB.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Variation-in-BinaryObjectImpl-object-size-with-number-of-entries-tp9673p9675.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Variation in BinaryObjectImpl object size with number of entries

2016-12-21 Thread rishi007bansod
For number of entries = 150, the BinaryObjectImpl object size is 40 bytes for
both key and value,

whereas,
for number of entries = 3900, the BinaryObjectImpl object size is 56 bytes for
both key and value.

Why is there variation in this object size, given that it only holds references
to other objects? *What is the distribution* of these *40 & 56 bytes* in terms
of data?
 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Variation-in-BinaryObjectImpl-object-size-with-number-of-entries-tp9673.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Memory Overhead per entry in Apache Ignite

2016-12-06 Thread rishi007bansod
I have loaded a table consisting of 150,000 entries into the cache. From the
heap dump I understood that byte[] arrays store the keys & values of this table.
But the following objects also get added with each entry:

BinaryObjectImpl: 40 bytes * 150,000 * 2 (key + value) = 12 MB
GridAtomicCacheEntry: 64 bytes * 150,000 = 9.6 MB
Also, I am getting a *per-entry overhead of 200 bytes = 200 * 150,000 = 30 MB*

I also observed that ConcurrentSkipListMap$Node, ConcurrentSkipListMap$Index,
GridH2ValueCacheObject and GridH2KeyValueRowOnHeap consume about 29 MB.

How can I reduce the number of objects mentioned above (the byte[] arrays
(key/value pairs) take only 139 MB, whereas total consumption is 232 MB)?

 




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Memory-Overhead-per-entry-in-Apache-Ignite-tp9412.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Package jar file path for sql Query Entity

2016-12-01 Thread rishi007bansod
Thanks. Importing the same package on both the server and client side solved my
problem (initially I had written the class on the client node instead of
importing it).

But I am still unclear about how Ignite stores data:
case I:  in both binary and deserialized format, deserializing data when
required;
case II: only in binary format, so that no deserialization is needed.

Can you please explain the default behavior of Ignite for storing data (is it
case I or case II)? Also, I have configured the cache from the attached XML file
and I have used the Binarylizable interface, so does this mean my objects are
stored in binary format only?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Getting-null-query-output-after-using-QueryEntity-tp9217p9331.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Objects in heap dump corresponding to binary and de-serialized version of entries

2016-12-01 Thread rishi007bansod
Hi,
Is there any way we can differentiate between deserialized and binary objects in
a heap dump? I want to know *how many deserialized and how many binary objects*
are present in the heap dump. Also, is there any way we can store data only in
binary format, without keeping a deserialized copy?
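
A minimal sketch of the binary view of a cache, which avoids deserializing values into the Java class on access. The cache name and the field name "c_name" are assumptions; 'customerKey' is whatever key object you normally pass to cache.get().

    IgniteCache<Object, BinaryObject> binCache =
        ignite.cache("customerCache").withKeepBinary();

    BinaryObject row = binCache.get(customerKey);  // value stays in binary form
    String name = row.field("c_name");             // read a single field without deserialization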



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Objects-in-heap-dump-corresponding-to-binary-and-de-serialized-version-of-entries-tp9321p9330.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Objects in heap dump corresponding to binary and de-serialized version of entries

2016-11-30 Thread rishi007bansod
I have loaded the customer table with 3,899,999 entries into the cache.
Following is the heap dump for it.

Which objects in this heap dump represent the binary and deserialized versions
of the entries?
Are *GridH2ValueCacheObject* and *GridH2KeyValueRowOnHeap* the objects
corresponding to deserialized entries? Also, what do *ConcurrentSkipListMap$Node*
and *ConcurrentSkipListMap$Index* hold?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Objects-in-heap-dump-corresponding-to-binary-and-de-serialized-version-of-entries-tp9321.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Package jar file path for sql Query Entity

2016-11-28 Thread rishi007bansod
I have added warehouse.java, containing the class warehouse, to the package
schema; that is why I wrote schema.warehouse. The query I am executing is
SELECT * FROM warehouse w WHERE w.w_id = 1, and the entry corresponding to
w_id = 1 is present in the warehouse cache. schema.jar (attached)
is my jar file.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Getting-null-query-output-after-using-QueryEntity-tp9217p9252.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Package jar file path for sql Query Entity

2016-11-28 Thread rishi007bansod
I have written the following query entity for my database:

up1.xml

For this query entity I have created a jar file of the package schema, which
contains the warehouse class. I kept this jar file in the libs folder of the
Apache Ignite installation path. But when I run a query on the "warehouse cache"
that I have created, I always get a null (blank) result. Even if I remove this
jar file from the libs folder (keeping the same XML file), the query still runs
but gives a null (blank) result. I think this packaged jar file is not getting
included when Ignite is running. So where should this jar file be kept, so that
it can be loaded automatically when Ignite starts?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Package-jar-file-path-for-sql-Query-Entity-tp9217.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Memory consumption in apache ignite

2016-11-22 Thread rishi007bansod
Hi dkarachentsev,
I loaded the data from a client node as suggested, but I still think the values
are getting stored in both serialized and deserialized format. I have attached
my heap dump. Which objects in this dump represent the serialized and
deserialized forms? Also, instead of QueryEntity, I did the same indexing
configuration in my Java code only; will this have any effect on memory
consumption?

 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Memory-consumption-in-apache-ignite-tp9035p9131.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Memory consumption in apache ignite

2016-11-20 Thread rishi007bansod
The data is stored in CSV format on disk, so I calculated the CSV file size on
disk.

The tables and the number of fields in them are:

Table        Fields   Entries
warehouse    9        5
district     11       50
customer     21       15
history      8        3
new_order    3        45000
order        8        15
order_line   10       148
item         5        10
stock        17       50

Are you saying that each field takes 4 bytes of overhead? So is the total
overhead = total entries * number of fields * 4 * 2 bytes, where the factor 2 is
for BinaryObjectImpl? Is this calculation correct? Also, one other question:
BinaryObjectImpl objects are in deserialized format, right?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Memory-consumption-in-apache-ignite-tp9035p9107.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Memory consumption in apache ignite

2016-11-18 Thread rishi007bansod
I made the modifications you suggested, but memory consumption is still about
2 GB. What might be the reason for this?

Memory consumption before data loading
 

Memory consumption after data loading
 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Memory-consumption-in-apache-ignite-tp9035p9081.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Binarylizable interface in apache ignite

2016-11-18 Thread rishi007bansod
I am trying to use the Binarylizable interface in Apache Ignite to reduce memory
utilization (for a compact representation of data). Following is the data class
that I have made Binarylizable.

class order_line implements Binarylizable{  
@QuerySqlField(index = true)
int ol_o_id; 
@QuerySqlField(index = true)
int ol_d_id;
@QuerySqlField(index = true)
int ol_w_id;
@QuerySqlField(index = true)
int ol_number;
@QuerySqlField
int ol_i_id; 
@QuerySqlField
int ol_supply_w_id;
@QuerySqlField
String ol_delivery_d;
@QuerySqlField
int ol_quantity;
@QuerySqlField
double ol_amount;
@QuerySqlField
String ol_dist_info;


private order_lineKey key;
public order_lineKey key()
{ 
if(key == null)
key = new order_lineKey(ol_w_id, ol_d_id, ol_o_id, 
ol_number);

return key;

}
@Override
public void readBinary(BinaryReader reader) throws 
BinaryObjectException {
// TODO Auto-generated method stub
ol_o_id = reader.readInt("ol_o_id");
ol_d_id = reader.readInt("ol_d_id");
ol_w_id = reader.readInt("ol_w_id");
ol_number = reader.readInt("ol_number");
ol_i_id = reader.readInt("ol_i_id");
ol_supply_w_id = reader.readInt("ol_supply_w_id");
ol_delivery_d = reader.readString("ol_delivery_d"); 
ol_quantity = reader.readInt("ol_quantity");
ol_amount = reader.readDouble("ol_amount");
ol_dist_info = reader.readString("ol_dist_info");


}
@Override
public void writeBinary(BinaryWriter writer) throws 
BinaryObjectException {
// TODO Auto-generated method stub

writer.writeInt("ol_o_id",ol_o_id);
writer.writeInt("ol_d_id",ol_d_id);
writer.writeInt("ol_w_id",ol_w_id);
writer.writeInt("ol_number",ol_number);
writer.writeInt("ol_i_id",ol_i_id);
writer.writeInt("ol_supply_w_id",ol_supply_w_id);
writer.writeString("ol_delivery_d",ol_delivery_d);  
writer.writeInt("ol_quantity",ol_quantity);
writer.writeDouble("ol_amount", ol_amount);
writer.writeString("ol_dist_info",ol_dist_info);

}

}


Is this the correct way to get a compact binary representation of the data? I am
not getting any improvement in memory consumption after using this interface.
Also, in the readBinary() and writeBinary() methods, do we have to call
writer.writeInt/String/Double and reader.readInt/String/Double for every field,
or only for the fields participating in SQL queries (does this have any effect
on memory consumption)?







--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Binarylizable-interface-in-apache-ignite-tp9078.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Memory consumption in apache ignite

2016-11-18 Thread rishi007bansod
I have measured the database size on disk: 370 MB.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Memory-consumption-in-apache-ignite-tp9035p9073.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Memory consumption in apache ignite

2016-11-18 Thread rishi007bansod
But for data storage the required data classes must be defined on the server
node. What do you mean by "without data classes"? Can you give me an example?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Memory-consumption-in-apache-ignite-tp9035p9071.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Memory consumption in apache ignite

2016-11-17 Thread rishi007bansod
But now, with indexed types disabled, I am getting the following error:

Exception in thread "main" javax.cache.CacheException: Indexing is disabled
for cache: order_line_cache. Use setIndexedTypes or setTypeMetadata methods
on CacheConfiguration to enable.
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.validate(IgniteCacheProxy.java:732)
at
org.apache.ignite.internal.processors.cache.IgniteCacheProxy.query(IgniteCacheProxy.java:664)
at data_test.order(data_test.java:2903)
at data_test.main(data_test.java:2839)




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Memory-consumption-in-apache-ignite-tp9035p9064.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Memory consumption in apache ignite

2016-11-17 Thread rishi007bansod
I have used a single node. The cache configuration I have used is:

CacheConfiguration<order_lineKey, order_line> ccfg_order_line =
    new CacheConfiguration<>();
ccfg_order_line.setIndexedTypes(order_lineKey.class, order_line.class);
ccfg_order_line.setName("order_line_cache");
ccfg_order_line.setCopyOnRead(false);
ccfg_order_line.setMemoryMode(CacheMemoryMode.ONHEAP_TIERED);
ccfg_order_line.setSwapEnabled(false);
ccfg_order_line.setBackups(0);
IgniteCache<order_lineKey, order_line> cache_order_line =
    ignite.createCache(ccfg_order_line);





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Memory-consumption-in-apache-ignite-tp9035p9044.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Memory consumption in apache ignite

2016-11-17 Thread rishi007bansod
For 370 MB of data, Ignite is consuming about 3 GB of memory (ON HEAP mode).
Following is the heap dump I got. What memory optimizations can be applied? Does
Ignite store objects in both serialized and deserialized formats by default?

  



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Memory-consumption-in-apache-ignite-tp9035.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Very high memory consumption in apache ignite

2016-11-16 Thread rishi007bansod
I did a memory analysis; following is the result I got in Eclipse MAT.
What further memory tuning can I apply?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Very-high-memory-consumption-in-apache-ignite-tp8822p9034.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Multithreading SQL queries in Apache Ignite

2016-11-14 Thread rishi007bansod
I have set it to the default value, i.e. double the number of cores. But will it
improve performance if I increase it further?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Multithreading-SQL-queries-in-Apache-Ignite-tp8944p8949.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Multithreading SQL queries in Apache Ignite

2016-11-14 Thread rishi007bansod
In my case the data is present on 1 server node and 25 clients are connected to
this server, concurrently firing SQL queries. Does Ignite parallelize these
queries by default, or do we have to configure something? Can we apply some kind
of multithreading on the server side to handle these queries for better
performance?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Multithreading-SQL-queries-in-Apache-Ignite-tp8944.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Very high memory consumption in apache ignite

2016-11-10 Thread rishi007bansod
The cache configuration I have used is:

CacheConfiguration<order_lineKey, order_line> ccfg_order_line =
    new CacheConfiguration<>();
ccfg_order_line.setIndexedTypes(order_lineKey.class, order_line.class);
ccfg_order_line.setName("order_line_cache");
ccfg_order_line.setCopyOnRead(false);
ccfg_order_line.setMemoryMode(CacheMemoryMode.ONHEAP_TIERED);
ccfg_order_line.setSwapEnabled(false);
ccfg_order_line.setBackups(0);
IgniteCache<order_lineKey, order_line> cache_order_line =
    ignite.createCache(ccfg_order_line);

The JVM configuration I have used is:

-server 
-Xms10g 
-Xmx10g 
-XX:+UseParNewGC 
-XX:+UseConcMarkSweepGC 
-XX:+UseTLAB 
-XX:NewSize=128m 
-XX:MaxNewSize=128m 
-XX:MaxTenuringThreshold=0 
-XX:SurvivorRatio=1024 
-XX:+UseCMSInitiatingOccupancyOnly 
-XX:CMSInitiatingOccupancyFraction=40
-XX:MaxGCPauseMillis=1000 
-XX:InitiatingHeapOccupancyPercent=50 
-XX:+UseCompressedOops
-XX:ParallelGCThreads=8 
-XX:ConcGCThreads=8 
-XX:+DisableExplicitGC

same as provided at link 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Very-high-memory-consumption-in-apache-ignite-tp8822p8880.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


ON HEAP vs OFF HEAP memory mode performance Apache Ignite

2016-11-10 Thread rishi007bansod
Following are the average execution times for running 14 queries against 16
million entries (DB size: 370 MB):

OFF HEAP memory mode - 47 ms
ON HEAP memory mode - 16 ms

Why is there a difference in execution times between the off-heap and on-heap
memory modes, given that both are in-memory? What performance tuning can be
applied to off-heap memory mode for better results? (I have also tried the JVM
tuning mentioned in the Ignite documentation, but it is not giving any better
results.)



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ON-HEAP-vs-OFF-HEAP-memory-mode-performance-Apache-Ignite-tp8870.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Very high memory consumption in apache ignite

2016-11-09 Thread rishi007bansod
I have 9 tables with the number of entries as mentioned below.

The total size of the database is 370 MB, but when I put the data into the cache
it consumes almost 7 GB of memory. Following are the memory consumption details.

Memory consumption before data loading in cache:

Memory consumption after data loading in cache:

Why is this happening? As mentioned in the post
https://apacheignite.readme.io/docs/capacity-planning
the total in-memory database size should increase by about 30% after applying
indexing, so in my case the data size should be at most around 410 MB.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Very-high-memory-consumption-in-apache-ignite-tp8822.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Support for SQL insert, update and delete queries in Apache Ignite(1.7.0)

2016-10-10 Thread rishi007bansod
https://github.com/apache/ignite/pull/886

Is this the link to the patch adding UPDATE, INSERT and DELETE SQL queries?

I am getting the following error messages:

[rishikesh@01hw738457 apache-ignite-1.7.0-src]$ git apply
--ignore-space-change --ignore-whitespace /tmp/886.patch 
warning:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlFunction.java
has type 100755, expected 100644 
warning:
modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/GridQueryParsingTest.java
has type 100755, expected 100644 
warning:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java
has type 100755, expected 100644 
error: patch failed:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java:19
 
error:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java:
patch does not apply 
warning:
modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/GridQueryParsingTest.java
has type 100755, expected 100644 
warning:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java
has type 100755, expected 100644 
error: patch failed:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java:25
 
error:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java:
patch does not apply 
warning:
modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/GridQueryParsingTest.java
has type 100755, expected 100644 
warning:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlConst.java
has type 100755, expected 100644 
warning:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java
has type 100755, expected 100644 
error: patch failed:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java:20
 
error:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java:
patch does not apply 
warning:
modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/GridQueryParsingTest.java
has type 100755, expected 100644 
warning:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/opt/GridH2TreeIndex.java
has type 100755, expected 100644 
warning:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java
has type 100755, expected 100644 
error: patch failed:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java:30
 
error:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java:
patch does not apply 
warning:
modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/GridQueryParsingTest.java
has type 100755, expected 100644 
warning: modules/core/src/main/java/org/apache/ignite/IgniteCache.java has
type 100755, expected 100644 
warning:
modules/core/src/main/java/org/apache/ignite/internal/processors/cache/IgniteCacheProxy.java
has type 100755, expected 100644 
warning:
modules/core/src/main/java/org/apache/ignite/internal/processors/query/GridQueryProcessor.java
has type 100755, expected 100644 
warning:
modules/core/src/test/java/org/apache/ignite/testframework/junits/multijvm/IgniteCacheProcessProxy.java
has type 100755, expected 100644 
warning:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQuery.java
has type 100755, expected 100644 
warning:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java
has type 100755, expected 100644 
error: patch failed:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java:25
 
error:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQueryParser.java:
patch does not apply 
warning:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQuerySplitter.java
has type 100755, expected 100644 
error: patch failed:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQuerySplitter.java:144
 
error:
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/sql/GridSqlQuerySplitter.java:
patch does not apply 
warning:
modules/indexing/src/test/java/org/apache/ignite/internal/processors/query/h2/sql/GridQueryParsingTest.java
has type 100755, expected 100644 
warning:
modules/core/src/main/java/org/apache/ignite/internal/processors/query/GridQueryIndexing.java
has type 100755, expected 100644 
warning:

Patch for Update, Insert and Delete SQL queries

2016-10-10 Thread rishi007bansod
https://github.com/apache/ignite/pull/886
  

Is this the link to the patch adding UPDATE, INSERT and DELETE SQL queries?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Patch-for-Update-Insert-and-Delete-SQL-queries-tp8170.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Cache entry list on particular node in apache ignite

2016-10-05 Thread rishi007bansod
Is there any option in ignitevisorcmd where I can see which entries (details)
are present on a particular node? I tried the *cache -scan -c=cache
-id8=12345678* command, but it prints entries from all the other nodes as well.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cache-entry-list-on-particular-node-in-apache-ignite-tp8110.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Time break up of SQL query execution in Apache Ignite

2016-10-05 Thread rishi007bansod
Can I get a time breakup of SQL query execution in Apache Ignite, in terms of
how much time is spent in processing (computation), networking, and memory reads
and writes?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Time-break-up-of-SQL-query-execution-in-Apache-Ignite-tp8101.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Support for SQL insert, update and delete queries in Apache Ignite(1.7.0)

2016-10-04 Thread rishi007bansod
Is there any temporary patch that I can get? What is the tentative date for the
1.8.0 release?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Support-for-SQL-insert-update-and-delete-queries-in-Apache-Ignite-1-7-0-tp8078p8083.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.