Re: ignite server cpu is high

2018-06-11 Thread shawn.du






Hi,

I see that most of the running threads are executing the code below:

"tcp-disco-sock-reader-#469" #82488 prio=10 os_prio=0 tid=0x7febf8308000 nid=0x3e8f runnable [0x7feb948ef000]
   java.lang.Thread.State: RUNNABLE
        at java.net.SocketInputStream.socketRead0(Native Method)
        at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
        at java.net.SocketInputStream.read(SocketInputStream.java:171)
        at java.net.SocketInputStream.read(SocketInputStream.java:141)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
        - locked <0x0005cee474e0> (a java.io.BufferedInputStream)
        at org.apache.ignite.marshaller.jdk.JdkMarshallerInputStreamWrapper.read(JdkMarshallerInputStreamWrapper.java:53)
        at java.io.ObjectInputStream$PeekInputStream.read(ObjectInputStream.java:2657)
        at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2673)
        at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3150)
        at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:859)
        at java.io.ObjectInputStream.<init>(ObjectInputStream.java:355)
        at org.apache.ignite.marshaller.jdk.JdkMarshallerObjectInputStream.<init>(JdkMarshallerObjectInputStream.java:39)
        at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:119)
        at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
        at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9740)
        at org.apache.ignite.spi.discovery.tcp.ServerImpl$SocketReader.body(ServerImpl.java:5946)
        at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
The data we put into Ignite is large (up to several megabytes per entry), binary data like the class below:

class Data {
    byte[] data;
}

One question: if disk I/O is very slow, can that make the CPU busy?






Thanks
Shawn




On 6/12/2018 00:01,Stanislav Lukyanov wrote: 


Sorry, but there isn't much else to be said without additional data. If consistent resource usage on that host is important, I suggest setting up some monitoring, so that if it happens again there will be at least some place to start.

Stan

From: shawn.du
Sent: June 11, 2018, 16:34
To: user@ignite.apache.org
Cc: user@ignite.apache.org
Subject: Re: ignite server cpu is high

The server is a dedicated Ignite server. I am sure it is Ignite consuming the CPU. I can't get more info now; the CPU issue is gone, as there is now very little traffic on our system. In our case, we store data into Ignite every 30 seconds. With no queries, the CPU should be very low, but yesterday was an exception. Very strange.

shawn.du
Email: shawn...@neulion.com.cn

On 06/11/2018 17:43, Stanislav Lukyanov wrote:

How do you monitor your CPU usage? Do you know which processes consume CPU? Are you sure it is Ignite's process? Is CPU consumed more in user space or in system space? Can you share the actual stats?

From what I see in the thread dump, there is at least some activity on this Ignite: the sys-stripe-5-#6 thread is processing an update. In any case, that thread is the only one in the JVM that is actually performing work, so I'd assume that the CPU load comes from other processes.

Thanks,
Stan

From: shawn.du
Sent: June 11, 2018, 5:38
To: user
Subject: ignite server cpu is high

Hi Community,

My single-node Ignite cluster has been running since Apr 23. In the past couple of weeks it worked fine. In our case, the Ignite server's CPU is very low most of the time, except when there are complex/concurrent queries; even then, the CPU is high only for a very short while. I think all of the above is normal behavior. But yesterday the server's CPU stayed high for a long time even though we had no queries. I used jstack to dump the threads (see the attachment); we don't find any of our business code. Please help, and thanks in advance.

We use Ignite 2.3.0.

java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)

Thanks
Shawn




Re: ignite server cpu is high

2018-06-11 Thread shawn.du


The server is a dedicated Ignite server. I am sure it is Ignite consuming the CPU. I can't get more info now; the CPU issue is gone, as there is now very little traffic on our system. In our case, we store data into Ignite every 30 seconds. With no queries, the CPU should be very low, but yesterday was an exception. Very strange.


















shawn.du



Email: shawn...@neulion.com.cn









On 06/11/2018 17:43, Stanislav Lukyanov wrote:

How do you monitor your CPU usage? Do you know which processes consume CPU? Are you sure it is Ignite's process? Is CPU consumed more in user space or in system space? Can you share the actual stats?

From what I see in the thread dump, there is at least some activity on this Ignite: the sys-stripe-5-#6 thread is processing an update. In any case, that thread is the only one in the JVM that is actually performing work, so I'd assume that the CPU load comes from other processes.

Thanks,
Stan

From: shawn.du
Sent: June 11, 2018, 5:38
To: user
Subject: ignite server cpu is high

Hi Community,

My single-node Ignite cluster has been running since Apr 23. In the past couple of weeks it worked fine. In our case, the Ignite server's CPU is very low most of the time, except when there are complex/concurrent queries; even then, the CPU is high only for a very short while. I think all of the above is normal behavior. But yesterday the server's CPU stayed high for a long time even though we had no queries. I used jstack to dump the threads (see the attachment); we don't find any of our business code. Please help, and thanks in advance.

We use Ignite 2.3.0.

java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)

Thanks
Shawn
 




ignite server cpu is high

2018-06-10 Thread shawn.du






Hi Community,

My single-node Ignite cluster has been running since Apr 23. In the past couple of weeks it worked fine. In our case, the Ignite server's CPU is very low most of the time, except when there are complex/concurrent queries; even then, the CPU is high only for a very short while. I think all of the above is normal behavior. But yesterday the server's CPU stayed high for a long time even though we had no queries. I used jstack to dump the threads (see the attachment); we don't find any of our business code. Please help, and thanks in advance.

We use Ignite 2.3.0.

java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)






Thanks
Shawn










connection pool in ignite

2018-04-20 Thread shawn.du






Hi,

Is there a connection pool concept in the Ignite client? I notice there is only one TCP connection between my Ignite client and the server. Currently I don't see any issues with it, but please explain: if a client communicates (put/get/query) with the server heavily, is one connection enough?

I also found this: https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html
What does setConnectionsPerNode do?
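For reference, setConnectionsPerNode is a property of the communication SPI set on the node configuration; it controls how many TCP connections a node keeps per remote node for cache and compute messages (it is unrelated to discovery). A minimal sketch, not from the thread; the value 4 is an arbitrary example, not a recommendation:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class CommSpiSketch {
    public static void main(String[] args) {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setConnectionsPerNode(4); // number of TCP connections kept per remote node

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCommunicationSpi(commSpi);

        try (Ignite ignite = Ignition.start(cfg)) {
            // communication with each peer node may now use up to 4 parallel connections
        }
    }
}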







Thanks
Shawn








Re: Ignite Client Heap out of Memory issue

2018-04-01 Thread shawn.du






Hi Andrey,

Thanks for your reply. It still confuses me:
1) If the Storm worker process crashed because of an OOM, it should have dumped its heap, since I set -XX:+HeapDumpOnOutOfMemoryError, but it didn't. For the Storm worker it behaves like a normal fatal error that makes the worker restart.
2) It did make the Ignite server dump its heap: I analyzed the dump and it is the Ignite server's (judging from the PID, the java command, etc.). Can anyone explain? Thanks.






Thanks
Shawn





On 3/31/2018 01:36,Andrey Mashenkov<andrey.mashen...@gmail.com> wrote: 


Hi Shawn,

1. Ignite uses off-heap memory to store cache entries. Clients store no cache data. A cache in LOCAL mode can be used on the client side, and it uses off-heap as well. All data the client retrieves from the server will be in off-heap.
2. It is not an IgniteOutOfMemory error but a JVM OOM, so try to investigate whether there is a memory leak in your code.

On Fri, Mar 30, 2018 at 6:36 AM, shawn.du <shawn...@neulion.com.cn> wrote:







Hi,My Ignite client heap OOM yesterday.  This is the first time we encounter this issue.My ignite client colocates within Storm worker process. this issue cause storm worker restart.I have several questions about it: our ignite version is 2.3.01) if ignite in client mode, it use offheap? how to set the max onheap/offheap memory to use.2) our storm worker have 8G memory, ignite client print OOM, it doesn't trigger storm worker to dump the heap.    but we get a ignite server's heap dump.  ignite server didn't die.  The ignite server's heap dump is very small. only have 200M.    which process is OOM? worker or ignite server?This is logs:  Thanks in advance.  Suppressed: org.apache.ignite.IgniteCheckedException: Failed to update keys on primary node.                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.UpdateErrors.addFailedKeys(UpdateErrors.java:124) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse.addFailedKeys(GridNearAtomicUpdateResponse.java:342) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1784) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1627) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3054) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:129) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:265) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:260) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99) ~[stormjar.jar:?]                at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293) ~[stormjar.jar:?]                at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555) ~[stormjar.jar:?]                at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183) ~[stormjar.jar:?]                at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126) ~[stormjar.jar:?]                at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090) ~[stormjar.jar:?]                
at org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:505) ~[stormjar.jar:?]                at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]                Suppressed: java.lang.OutOfMemoryError: Java heap space                        at org.apache.ignite.internal.processors.cache.IncompleteCacheObject.(IncompleteCacheObject.java:44) ~[

Ignite Client Heap out of Memory issue

2018-03-29 Thread shawn.du






Hi,

My Ignite client hit a heap OOM yesterday. This is the first time we have encountered this issue. My Ignite client is colocated within a Storm worker process, and this issue caused the Storm worker to restart. Our Ignite version is 2.3.0. I have several questions about it:

1) If Ignite is in client mode, does it use off-heap memory? How do I set the maximum on-heap/off-heap memory to use?
2) Our Storm worker has 8 GB of memory. The Ignite client printed an OOM, but it didn't trigger the Storm worker to dump its heap; instead we got an Ignite server heap dump. The Ignite server didn't die, and its heap dump is very small, only about 200 MB. Which process is OOM, the worker or the Ignite server?

These are the logs. Thanks in advance.

  Suppressed: org.apache.ignite.IgniteCheckedException: Failed to update keys on primary node.
                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.UpdateErrors.addFailedKeys(UpdateErrors.java:124) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse.addFailedKeys(GridNearAtomicUpdateResponse.java:342) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1784) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1627) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3054) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:129) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:265) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:260) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99) ~[stormjar.jar:?]
                at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293) ~[stormjar.jar:?]
                at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555) ~[stormjar.jar:?]
                at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183) ~[stormjar.jar:?]
                at org.apache.ignite.internal.managers.communication.GridIoManager.access$4200(GridIoManager.java:126) ~[stormjar.jar:?]
                at org.apache.ignite.internal.managers.communication.GridIoManager$9.run(GridIoManager.java:1090) ~[stormjar.jar:?]
                at org.apache.ignite.internal.util.StripedExecutor$Stripe.run(StripedExecutor.java:505) ~[stormjar.jar:?]
                at java.lang.Thread.run(Thread.java:748) [?:1.8.0_151]
                Suppressed: java.lang.OutOfMemoryError: Java heap space
                        at org.apache.ignite.internal.processors.cache.IncompleteCacheObject.<init>(IncompleteCacheObject.java:44) ~[stormjar.jar:?]
                        at org.apache.ignite.internal.processors.cacheobject.IgniteCacheObjectProcessorImpl.toCacheObject(IgniteCacheObjectProcessorImpl.java:191) ~[stormjar.jar:?]
                        at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.readIncompleteValue(CacheDataRowAdapter.java:404) ~[stormjar.jar:?]
                        at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.readFragment(CacheDataRowAdapter.java:248) ~[stormjar.jar:?]
                        at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:174) ~[stormjar.jar:?]
                        at org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102) ~[stormjar.jar:?]
                        at org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(H2RowFactory.java:62) ~[stormjar.jar:?]
                        at 
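On question 1: in Ignite 2.3 the Java heap is capped with the usual -Xmx JVM flag, while cache data lives off-heap in data regions whose size is capped per region. A minimal sketch, not from the thread; the 2 GB cap and the region name are arbitrary example values:

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class OffHeapCapSketch {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("Default_Region");
        region.setMaxSize(2L * 1024 * 1024 * 1024); // cap off-heap usage of this region at 2 GB

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);

        Ignition.start(cfg);
    }
}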

Re: compute ignite data with spark

2018-02-27 Thread shawn.du






Hi Denis,

Thanks, that's cool! Looking forward to the 2.4.0 release.







Thanks
Shawn





On 2/28/2018 12:11,Denis Magda wrote: 


Hi Shawn,

In addition to RDDs, you'll soon be able to use the Data Frames API with Ignite as storage for Spark. It will be released within the nearest weeks in Ignite 2.4.

As for your question on how Ignite compares to Spark: the first is not just a computational engine. It's a distributed database (or cache, depending on your use case) with a variety of APIs, including the compute grid. Though you can use Ignite as storage for Spark, Ignite's native APIs should be more performant.

--
Denis

On Mon, Feb 26, 2018 at 2:25 AM, Stanislav Lukyanov wrote:

Hi Shawn,

You can use Ignite standalone, and you can also use it together with Spark. Please take a look at this SO question and an article:
https://stackoverflow.com/questions/36036910/apache-spark-vs-apache-ignite
https://insidebigdata.com/2016/06/20/apache-ignite-and-apache-spark-complementary-in-memory-computing-solutions/

Stan

From: shawn.du
Sent: February 24, 2018, 9:56
To: user
Subject: compute ignite data with spark

Hi,

Spark is a compute engine. Ignite also provides a compute feature, and Ignite can integrate with Spark. We are using Ignite's compute map-reduce feature now; it is very fast. I am just curious how Spark compares with Ignite on computing. Is it possible to compute Ignite cache data using the Spark API?

Thanks
Shawn





Re: compute ignite data with spark

2018-02-26 Thread shawn.du






Hi Stan,

Thanks. I am evaluating IgniteRDD; it is cool! I am new to Spark. I can get an sqlContext via ds.sqlContext. If I want to query everything with a select, what is the table name? See below in spark-shell; I want to do something like sc.sql("select * from "). What goes in as the table name?

scala> val rdd = ic.fromCache[String,BinaryObject]("testCache")
rdd: org.apache.ignite.spark.IgniteRDD[String,org.apache.ignite.binary.BinaryObject] = IgniteRDD[0] at RDD at IgniteAbstractRDD.scala:32

scala> val ds = rdd.sql("select site,timestamp,product from testCache where site=?",("site1"))
ds: org.apache.spark.sql.DataFrame = [SITE: string, TIMESTAMP: bigint ... 1 more field]

scala> ds.schema
res1: org.apache.spark.sql.types.StructType = StructType(StructField(SITE,StringType,true), StructField(TIMESTAMP,LongType,true), StructField(PRODUCT,StringType,true))

scala> val sc = ds.sqlContext
sc: org.apache.spark.sql.SQLContext = org.apache.spark.sql.SQLContext@78682201
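One way to run a free-form select over the DataFrame returned by rdd.sql is to register it as a temporary view and pick the name yourself; this is plain Spark, not an Ignite-specific API. A sketch using the Java API, assuming ds is the DataFrame from the transcript above:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;

public class QueryIgniteDataFrame {
    static void queryAll(Dataset<Row> ds) {
        // The view name is arbitrary; it only has to match the FROM clause below.
        ds.createOrReplaceTempView("testCache");

        Dataset<Row> all = ds.sqlContext().sql("SELECT * FROM testCache");
        all.show();
    }
}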






Thanks
Shawn





On 2/26/2018 18:25,Stanislav Lukyanov wrote: 


Hi Shawn,

You can use Ignite standalone, and you can also use it together with Spark. Please take a look at this SO question and an article:
https://stackoverflow.com/questions/36036910/apache-spark-vs-apache-ignite
https://insidebigdata.com/2016/06/20/apache-ignite-and-apache-spark-complementary-in-memory-computing-solutions/

Stan

From: shawn.du
Sent: February 24, 2018, 9:56
To: user
Subject: compute ignite data with spark

Hi,

Spark is a compute engine. Ignite also provides a compute feature, and Ignite can integrate with Spark. We are using Ignite's compute map-reduce feature now; it is very fast. I am just curious how Spark compares with Ignite on computing. Is it possible to compute Ignite cache data using the Spark API?

Thanks
Shawn




compute ignite data with spark

2018-02-23 Thread shawn.du






Hi,

Spark is a compute engine. Ignite also provides a compute feature, and Ignite can integrate with Spark. We are using Ignite's compute map-reduce feature now; it is very fast. I am just curious how Spark compares with Ignite on computing. Is it possible to compute Ignite cache data using the Spark API?






Thanks
Shawn









Re: how to dismiss a ignite server node.

2018-02-23 Thread shawn.du






Hi,

I know it is hard, but it is a common requirement, and I wish Ignite could have this feature in the future. Consider the following scenario: I set up a 2-node cluster and everything works fine. Later I find that my memory will be used up soon, so I decide to add a new node; Ignite can do this and all works fine. Later still I find that running 3 nodes is too expensive (we run Ignite on AWS) and we don't need that much memory, so I want to shrink the cluster without stopping it. Even while the cluster is shrinking, all our business logic (cache, computing, SQL queries) should keep working as usual.

Since we really need this feature, I have given it some careful thought. My idea is simple:

1) Use admin tools to mark a node as DISMISSING. A node in this status is read-only: computing jobs are OK, but writes are not allowed.
2) Copy the data from that node to the other nodes.
3) When the copy is finished, set the node to DISMISSED status. A node in DISMISSED status is not involved in any operations.
4) The user can then safely kill the process.

I know talking is always simple and coding is hard. Leveraging Ignite's atomics and distributed locks, it may be possible. I welcome any comments.






Thanks
Shawn





On 2/23/2018 17:47,Вячеслав Коптилин<slava.kopti...@gmail.com> wrote: 


Hello,

If you don't have a 3rd-party cache store or backups configured, then I think there is only one possible way. You need to:
 - store all the data from the node you want to shut down,
 - shut down the node,
 - upload the stored data back to the cluster.
It can be done via an external database, for example. If the data set is not huge, you can try to use a new partitioned cache (with a node filter/backups) or a replicated cache.

Thanks.

2018-02-23 10:00 GMT+03:00 shawn.du <shawn...@neulion.com.cn>:
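For completeness, the "backups" Slava mentions are a per-cache setting; with at least one backup, partitions owned by a stopped node survive on other nodes and get rebalanced. A minimal sketch, not from the thread:

import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class BackupConfigSketch {
    static CacheConfiguration<String, byte[]> partitionedWithBackup() {
        CacheConfiguration<String, byte[]> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setCacheMode(CacheMode.PARTITIONED);
        ccfg.setBackups(1); // keep one backup copy of every partition on another node
        return ccfg;
    }
}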







Hi,

Suppose I have several Ignite server nodes that form a cluster, with all data in PARTITIONED mode and no backups. Is it possible to dismiss a node without a restart and without data loss? If so, what are the steps?






Thanks
Shawn













how to dismiss a ignite server node.

2018-02-22 Thread shawn.du






Hi,

Suppose I have several Ignite server nodes that form a cluster, with all data in PARTITIONED mode and no backups. Is it possible to dismiss a node without a restart and without data loss? If so, what are the steps?






Thanks
Shawn









set table schema in sqlline command line

2018-02-22 Thread shawn.du






Hi,

I am trying sqlline from the CLI:

./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1/

I can connect to Ignite successfully, and the !tables command lists all tables, but when I query something it always fails.

| TABLE_CAT | TABLE_SCHEM | TABLE_NAME | TABLE_TYPE | REMARKS | TYPE_CAT |
|           | Content     | SINTRIPLE  | TABLE      |         |          |
|           | Content     | SINTRIPLE  | TABLE      |         |          |

Above is part of the !tables output. I want to query the SINTRIPLE table in the Content schema, but "select * from Content.SINTRIPLE" doesn't work. I also tried to set the schema in the URL:

./sqlline.sh --color=true --verbose=true -u jdbc:ignite:thin://127.0.0.1/Content

That doesn't work either. Any advice? Thanks in advance.
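One thing that may be worth trying (my assumption, not an answer given in the thread): Ignite SQL identifiers are case-sensitive when quoted, so quoting the schema exactly as !tables reports it sometimes helps. A sketch over the thin JDBC driver:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class QuotedSchemaSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement();
             // Double quotes keep the schema name case-sensitive ("Content", not CONTENT).
             ResultSet rs = stmt.executeQuery("SELECT * FROM \"Content\".SINTRIPLE")) {
            while (rs.next()) {
                // process the row
            }
        }
    }
}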






Thanks
Shawn









off heap memory usage

2018-01-04 Thread shawn.du






Hi community,

I want to monitor the off-heap memory usage of my Ignite cluster and alert when less than 1 GB of off-heap memory is available.
I went through the Ignite MBeans and found ClusterLocalNodeMetricsMXBeanImpl's NonHeapMemoryUsed, but the value is too small; I don't think that is it. How can I get the real number?
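A sketch of one way to read off-heap usage programmatically, assuming Ignite 2.3+ where data-region metrics exist; note that region metrics are disabled by default and that exact method names may differ between versions:

import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class OffHeapMonitorSketch {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("Default_Region");
        region.setMetricsEnabled(true); // region metrics are off by default

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);

        Ignite ignite = Ignition.start(cfg);

        int pageSize = DataStorageConfiguration.DFLT_PAGE_SIZE; // 4 KB unless configured otherwise

        for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
            long allocatedBytes = m.getTotalAllocatedPages() * (long) pageSize;
            System.out.printf("Region %s: ~%d MB allocated off-heap%n",
                m.getName(), allocatedBytes / (1024 * 1024));
        }
    }
}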






Thanks
Shawn









Re: create singleton instance in ignite server side.

2017-12-17 Thread shawn.du






Fixed. A library was missing.








Thanks
Shawn





On 12/18/2017 13:20,shawn.du<shawn...@neulion.com.cn> wrote: 






Hi,I am using https://github.com/RuedigerMoeller/fast-serialization for ser/des.my code is rather simplepublic class EncoderSerDes{private static ISerDes serDes = new FstSerDes();public static byte[] serialize(ColumnEncoder encoder) throws Exception{return serDes.serialize(encoder);}public static ColumnEncoder deserialize(byte[] bytes) throws Exception{return serDes.deserialize(bytes);}interface ISerDes{byte[] serialize(ColumnEncoder encoder) throws Exception;ColumnEncoder deserialize(byte[] bytes) throws Exception;}static class JavaSerDes implements ISerDes{@Overridepublic byte[] serialize(ColumnEncoder encoder) throws Exception{try (ByteArrayOutputStream baos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(baos)){oos.writeObject(encoder);return baos.toByteArray();}}@Overridepublic ColumnEncoder deserialize(byte[] bytes) throws Exception{try (ByteArrayInputStream bais = new ByteArrayInputStream(bytes); ObjectInputStream ois = new ObjectInputStream(bais)){return (ColumnEncoder) ois.readObject();}}}static class FstSerDes implements ISerDes{private static FSTConfiguration conf = FSTConfiguration.createDefaultConfiguration();static{conf.registerClass(ArrayEncoder.class,...Column.class);}@Overridepublic byte[] serialize(ColumnEncoder encoder) throws Exception{return conf.asByteArray(encoder);}@Overridepublic ColumnEncoder deserialize(byte[] bytes) throws Exception{return (ColumnEncoder) conf.asObject(bytes);}}}

when this code run in ignite server side. it throwsException in thread "pub-#173" java.lang.NoClassDefFoundError: Could not initialize class encoder.EncoderSerDesHow to use singleton instance in ignite server side?  I read doc  https://apacheignite.readme.io/docs/cluster-singletonsbut It is not so clear for me. it is the right way for me? and how to call the singleton service?






Thanks
Shawn










create singleton instance in ignite server side.

2017-12-17 Thread shawn.du






Hi,

I am using https://github.com/RuedigerMoeller/fast-serialization for ser/des. My code is rather simple:

public class EncoderSerDes {
    private static ISerDes serDes = new FstSerDes();

    public static byte[] serialize(ColumnEncoder encoder) throws Exception {
        return serDes.serialize(encoder);
    }

    public static ColumnEncoder deserialize(byte[] bytes) throws Exception {
        return serDes.deserialize(bytes);
    }

    interface ISerDes {
        byte[] serialize(ColumnEncoder encoder) throws Exception;
        ColumnEncoder deserialize(byte[] bytes) throws Exception;
    }

    static class JavaSerDes implements ISerDes {
        @Override public byte[] serialize(ColumnEncoder encoder) throws Exception {
            try (ByteArrayOutputStream baos = new ByteArrayOutputStream(); ObjectOutputStream oos = new ObjectOutputStream(baos)) {
                oos.writeObject(encoder);
                return baos.toByteArray();
            }
        }

        @Override public ColumnEncoder deserialize(byte[] bytes) throws Exception {
            try (ByteArrayInputStream bais = new ByteArrayInputStream(bytes); ObjectInputStream ois = new ObjectInputStream(bais)) {
                return (ColumnEncoder) ois.readObject();
            }
        }
    }

    static class FstSerDes implements ISerDes {
        private static FSTConfiguration conf = FSTConfiguration.createDefaultConfiguration();

        static {
            conf.registerClass(ArrayEncoder.class, ... Column.class);
        }

        @Override public byte[] serialize(ColumnEncoder encoder) throws Exception {
            return conf.asByteArray(encoder);
        }

        @Override public ColumnEncoder deserialize(byte[] bytes) throws Exception {
            return (ColumnEncoder) conf.asObject(bytes);
        }
    }
}

When this code runs on the Ignite server side, it throws:

Exception in thread "pub-#173" java.lang.NoClassDefFoundError: Could not initialize class encoder.EncoderSerDes

How do I use a singleton instance on the Ignite server side? I read the doc at https://apacheignite.readme.io/docs/cluster-singletons, but it is not so clear to me. Is it the right way for my case, and how do I call the singleton service?
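For what it's worth, the cluster-singleton pattern from that doc page deploys an Ignite service with exactly one instance in the cluster and calls it through a proxy. A minimal sketch, not from the thread; the service name and interface are made up for illustration (and note that Shawn's actual fix, per his follow-up, was simply a missing library):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class SingletonSketch {
    /** Business interface the rest of the code talks to (hypothetical). */
    public interface SerDesService {
        byte[] serialize(Object obj);
    }

    /** Service implementation; one instance will exist in the whole cluster. */
    public static class SerDesServiceImpl implements Service, SerDesService {
        @Override public void init(ServiceContext ctx) { /* build the serializer configuration once here */ }
        @Override public void execute(ServiceContext ctx) { /* passive service: nothing to run */ }
        @Override public void cancel(ServiceContext ctx) { /* cleanup */ }

        @Override public byte[] serialize(Object obj) {
            return new byte[0]; // placeholder: delegate to the real serializer
        }
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start();

        // Deploy exactly one instance cluster-wide.
        ignite.services().deployClusterSingleton("serDes", new SerDesServiceImpl());

        // Call it from any node through a (non-sticky) proxy.
        SerDesService serDes = ignite.services().serviceProxy("serDes", SerDesService.class, false);
        byte[] bytes = serDes.serialize("hello");
    }
}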






Thanks
Shawn









Re: memory usage in 2.3

2017-12-12 Thread shawn.du






Thank you, Denis.

I think the OOM was caused by an invalid configuration: because I assumed Ignite used on-heap memory, I set a very big value for -Xmx/-Xms, which left limited memory for off-heap and other programs. We are running tests now, and so far all seems good.







Thanks
Shawn





On 12/13/2017 07:36,Denis Magda wrote: 


Shawn,

If you still need to use on-heap caching for some reason, make sure you enable an eviction policy:
https://apacheignite.readme.io/docs/evictions#section-java-heap-cache
Otherwise the on-heap cache will grow endlessly.

BTW, what kind of OOM did you get? It might be related to off-heap or to on-heap caching. Share the whole stack trace.

—
Denis

On Dec 12, 2017, at 4:37 AM, slava.koptilin wrote:

Hi Shawn,

> how to disable off heap completely?
You cannot disable off-heap. As of 2.0, Apache Ignite stores all the data outside of the Java heap.

> does it mean half in on-heap and half in off-heap?
On-heap caching allows a subset of the data to be kept in the Java heap and can be useful for scenarios where you do a lot of cache reads on server nodes that work with cache entries in binary form or invoke cache entry deserialization.

Thanks!
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
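To illustrate the point about bounding the on-heap layer: the eviction policy goes on the cache configuration next to setOnheapCacheEnabled. A minimal sketch, not from the thread; the LRU policy and the 100,000-entry cap are arbitrary example choices:

import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

public class OnHeapEvictionSketch {
    static CacheConfiguration<String, byte[]> boundedOnHeapCache() {
        CacheConfiguration<String, byte[]> ccfg = new CacheConfiguration<>("myCache");
        ccfg.setOnheapCacheEnabled(true);

        LruEvictionPolicy<String, byte[]> plc = new LruEvictionPolicy<>();
        plc.setMaxSize(100_000); // evict from the on-heap layer beyond 100k entries

        ccfg.setEvictionPolicy(plc);
        return ccfg;
    }
}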




memory usage in 2.3

2017-12-12 Thread shawn.du






Hi,

I upgraded my Ignite version from 1.9 to 2.3. In 1.9 we stored all data in memory. I know that in Ignite 2.0+ off-heap memory is used by default, so I changed my code like below:

config.setOnheapCacheEnabled(true);

But after running for several hours, my server crashed with an OOM. Running it again, I see the Ignite server's RES memory grow over time. In visorcmd, the "Entries (Heap / Off-heap)" column mostly looks like this:

avg: 4606.00 (2303.00 / 2303.00)
avg: 44973.33 (22486.67 / 22486.67)

What does this mean? Does it mean half in on-heap and half in off-heap? How do I disable off-heap completely?

Thanks
Shawn









deadlock

2017-10-31 Thread shawn.du






Hi,

My Ignite server stopped responding. No client can connect to it, and I found a deadlock in the logs. What is the possible reason for this? I use Ignite 1.9.0. Thanks in advance.

log
===
deadlock: true
completed: 29519967
Thread [name="sys-stripe-12-#13%null%", id=36, state=BLOCKED, blockCnt=308, waitCnt=28469776]
    Lock [object=o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCacheEntry@4f5b4fce, ownerName=sys-stripe-9-#10%null%, ownerId=33]
        at sun.misc.Unsafe.monitorEnter(Native Method)
        at o.a.i.i.util.GridUnsafe.monitorEnter(GridUnsafe.java:1136)
        at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.lockEntries(GridDhtAtomicCache.java:2958)
        at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1793)
        at o.a.i.i.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1721)
        at







Thanks
Shawn









Re: integrate with prestodb

2017-10-16 Thread shawn.du






Hi Denis,

We are evaluating this feature (our production runs Ignite 1.9 and we are testing Ignite 2.2); it does make things simpler, but we don't want to lose performance, so we need careful testing. Judging from our first round of test results, disk I/O will be the bottleneck: the load average is higher than on Ignite 1.9 without this feature. I also don't know whether Ignite loading data from disk will be fast enough compared with decoding the data in memory.







Thanks
Shawn





On 10/17/2017 10:25,Denis Magda<dma...@apache.org> wrote: 


Shawn,

Then my suggestion would be to enable Ignite persistence [1], which will store the whole data set you have; RAM will keep only a subset for the performance benefits. Ignite SQL is fully supported on top of persistence, and you can even join in-RAM and disk-only data sets. Plus, your compression becomes optional.

[1] https://ignite.apache.org/features/persistence.html 

—
Denis
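For reference, enabling the native persistence mentioned above is a per-data-region flag in Ignite 2.3+, and a persistent cluster must be activated after start. A minimal sketch, not from the thread:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PersistenceSketch {
    public static void main(String[] args) {
        DataRegionConfiguration region = new DataRegionConfiguration();
        region.setName("Default_Region");
        region.setPersistenceEnabled(true); // full data set on disk, hot subset in RAM

        DataStorageConfiguration storage = new DataStorageConfiguration();
        storage.setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDataStorageConfiguration(storage);

        Ignite ignite = Ignition.start(cfg);

        // A cluster with persistence enabled starts inactive and must be activated.
        ignite.active(true);
    }
}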

> On Oct 16, 2017, at 7:18 PM, shawn.du <shawn...@neulion.com.cn> wrote:
> 
> Hi Denis,
> 
> Yes, We do want to limit the RAM to less than 64G.  RAM resource is still an expensive resource.
> If we store our data with ignite SQL queryable format, our data may use more than 640G. This is too expensive for us.
> So we store data using binary format which works a bit like orc or parquet. Only several important columns are SQL queryable and the others are not. In this way, we do store using less RAMs, but we have to use map-reduce to query the data, which is a little bit of complex: Query in client with SQL, then submit jobs to ignite compute, finally do some post aggregation in client.
> This is why I want to have a try of Presto. We like SQL, we want all computation on server side. 
> 
> welcome your comments.
> 
> Thanks
> Shawn
> 
> On 10/17/2017 07:57,Denis Magda<dma...@apache.org> <mailto:dma...@apache.org> wrote: 
> Hello Shawn, 
> 
> Do I understand properly that you have scarce RAM resources and think to exploit Presto as an alternative SQL engine in Ignite that queries both RAM and disk data sets? If that’s the case than just enable Ignite native persistence [1] and you’ll get all the data stored on disk and as much as you can afford in RAM. The SQL works over both tiers transparently for you. 
> 
> [1] https://ignite.apache.org/features/persistence.html  > 
> 
> — 
> Denis 
> 
> > On Oct 16, 2017, at 2:19 AM, Alexey Kukushkin <kukushkinale...@gmail.com <mailto:kukushkinale...@gmail.com>> wrote: 
> >  
> > Cross-sending to the DEV community. 
> >  
> > On Mon, Oct 16, 2017 at 12:14 PM, shawn.du <shawn...@neulion.com.cn <mailto:shawn...@neulion.com.cn> <mailto:shawn...@neulion.com.cn <mailto:shawn...@neulion.com.cn>>> wrote: 
> > Hi community, 
> >  
> > I am trying to implement a connector for presto to connect ignite.  
> > I think it will be a very interest thing to connect ignite and presto. 
> >  
> > In fact, currently we use ignite and it works very well.  but in order to save memory, we build compressed binary data. 
> > thus we cannot query them using SQL. We use ignite map-reduce to query the data. 
> >  
> > Using presto, we may use SQL again. If it is fast enough, ignite will be our in memory storage and not responsible for computing or only for simple query. 
> > The only thing I concern about is presto is fast enough or not like Ignite. For now all ignite query cost less than 5 seconds and most are hundreds of milliseconds. 
> > Also presto provides a connector for redis.  I don't know community has interest to contribute to presto-ignite? 
> >  
> > Thanks 
> > Shawn 
> >  
> >  
> >  
> >  
> > --  
> > Best regards, 
> > Alexey 
> 






Re: integrate with prestodb

2017-10-16 Thread shawn.du






Hi Denis,

Yes, we do want to limit RAM to less than 64 GB; RAM is still an expensive resource. If we stored our data in an Ignite SQL-queryable format, it could use more than 640 GB, which is too expensive for us. So we store the data in a binary format that works a bit like ORC or Parquet: only a few important columns are SQL-queryable and the others are not. This way we use less RAM, but we have to use map-reduce to query the data, which is a little complex: query in the client with SQL, then submit jobs to Ignite compute, and finally do some post-aggregation in the client.

This is why I want to try Presto. We like SQL, and we want all computation on the server side.

Welcome your comments.






Thanks
Shawn





On 10/17/2017 07:57,Denis Magda<dma...@apache.org> wrote: 


Hello Shawn,

Do I understand properly that you have scarce RAM resources and are thinking of using Presto as an alternative SQL engine for Ignite that queries both RAM and disk data sets? If that's the case, then just enable Ignite native persistence [1] and you'll get all the data stored on disk and as much as you can afford in RAM. SQL works over both tiers transparently for you.

[1] https://ignite.apache.org/features/persistence.html 

—
Denis

> On Oct 16, 2017, at 2:19 AM, Alexey Kukushkin <kukushkinale...@gmail.com> wrote:
> 
> Cross-sending to the DEV community.
> 
> On Mon, Oct 16, 2017 at 12:14 PM, shawn.du <shawn...@neulion.com.cn <mailto:shawn...@neulion.com.cn>> wrote:
> Hi community,
> 
> I am trying to implement a connector for presto to connect ignite. 
> I think it will be a very interest thing to connect ignite and presto.
> 
> In fact, currently we use ignite and it works very well.  but in order to save memory, we build compressed binary data.
> thus we cannot query them using SQL. We use ignite map-reduce to query the data.
> 
> Using presto, we may use SQL again. If it is fast enough, ignite will be our in memory storage and not responsible for computing or only for simple query.
> The only thing I concern about is presto is fast enough or not like Ignite. For now all ignite query cost less than 5 seconds and most are hundreds of milliseconds.
> Also presto provides a connector for redis.  I don't know community has interest to contribute to presto-ignite?
> 
> Thanks
> Shawn
> 
> 
> 
> 
> -- 
> Best regards,
> Alexey






integrate with prestodb

2017-10-16 Thread shawn.du






Hi community,

I am trying to implement a Presto connector for Ignite.
I think connecting Ignite and Presto would be very interesting. In fact, we currently use Ignite and it works very well, but in order to save memory we build compressed binary data, so we cannot query it using SQL; we use Ignite map-reduce to query the data. Using Presto, we could use SQL again. If it is fast enough, Ignite would be our in-memory storage and not responsible for computing, or only for simple queries. The only thing I'm concerned about is whether Presto is fast enough compared with Ignite: for now all our Ignite queries take less than 5 seconds, and most take hundreds of milliseconds. Presto also provides a connector for Redis. Does the community have any interest in contributing to a presto-ignite connector?






Thanks
Shawn









Re: create two client instance in one JVM to connect two ignite

2017-09-07 Thread shawn.du






Hi Val,

I want to do something like this:

Ignition.setClientMode(true);

Ignite ignite1 = Ignition.start(config1);
Ignite ignite2 = Ignition.start(config2);

With the above two Ignite client instances, I can query from one cluster and put the query result into the other manually, even with some customized logic.
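A sketch of the idea, assuming each Ignition.start call gets its own instance name and its own discovery addresses; multiple Ignite instances in one JVM need distinct instance names. The addresses, cache names and instance names below are made up for illustration:

import java.util.Arrays;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

public class TwoClustersClientSketch {
    private static IgniteConfiguration clientCfg(String instanceName, String address) {
        TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
        ipFinder.setAddresses(Arrays.asList(address));

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setIgniteInstanceName(instanceName); // must differ for each instance in one JVM
        cfg.setClientMode(true);
        cfg.setDiscoverySpi(new TcpDiscoverySpi().setIpFinder(ipFinder));
        return cfg;
    }

    public static void main(String[] args) {
        Ignite clusterA = Ignition.start(clientCfg("clientA", "10.0.0.1:47500..47509"));
        Ignite clusterB = Ignition.start(clientCfg("clientB", "10.0.0.2:47500..47509"));

        // Read from one cluster, write to the other, with custom logic in between.
        Object v = clusterA.cache("source").get("key");
        clusterB.cache("target").put("key", v);
    }
}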






Thanks
Shawn





On 9/7/2017 07:20,vkulichenko wrote: 


Shawn,

Can you please provide more detail on what you're trying to achieve and what
you tried for that? What exactly is not working?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/





create two client instance in one JVM to connect two ignite cluster

2017-09-06 Thread shawn.du






Hi,

It seems that Ignite can't do what the subject describes. I want to synchronize data manually between two clusters; any suggestions?






Thanks
Shawn









ignite compute debugging

2017-06-27 Thread shawn.du






Hi,

I see many NullPointerExceptions in my Ignite log, like this:

java.lang.NullPointerException
[15:41:05,542][SEVERE][pub-#5702%null%][GridJobWorker] Failed to execute job due to unexpected runtime exception [jobId=751a821cc51-a76e994f-a6fc-4483-9063-2a51edb5195e, ses=GridJobSessionImpl [ses=GridTaskSessionImpl [taskName=test.MetricQueryTask, dep=LocalDeployment [super=GridDeployment [ts=1497833877393, depMode=SHARED, clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, clsLdrId=9912bddbc51-62cb6ed8-1a2c-498c-8bd6-f2170b87ebad, userVer=0, loc=true, sampleClsName=java.lang.String, pendingUndeploy=false, undeployed=false, usage=0]], taskClsName=test.MetricQueryTask, sesId=451a821cc51-a76e994f-a6fc-4483-9063-2a51edb5195e, startTime=1497901265536, endTime=9223372036854775807, taskNodeId=a76e994f-a6fc-4483-9063-2a51edb5195e, clsLdr=sun.misc.Launcher$AppClassLoader@764c12b6, closed=false, cpSpi=null, failSpi=null, loadSpi=null, usage=1, fullSup=false, internal=false, subjId=a76e994f-a6fc-4483-9063-2a51edb5195e, mapFut=IgniteFuture [orig=GridFutureAdapter [resFlag=0, res=null, startTime=1497901265536, endTime=0, ignoreInterrupts=false, state=INIT]]], jobId=751a821cc51-a76e994f-a6fc-4483-9063-2a51edb5195e]]

How can I locate which line threw the exception?






Thanks
Shawn









Re: messaging behavior

2017-06-16 Thread shawn.du






Hi,

I want to use Ignite messaging to send notifications to some client nodes, and I wrote a tool to do this. The tool starts as a client node, sends some messages, and then stops as soon as the messages are sent out. It seems this doesn't work, and I notice errors like: "Failed to resolve sender node (did the node left grid?)". It seems the node goes away too fast. How can I solve this? Another question: how can I get feedback on how the messages were received by the other nodes?






Thanks
Shawn





On 06/12/2017 20:52,Nikolai Tikhonov<ntikho...@apache.org> wrote: 


Hi,

Ignite does not accumulate messages that were sent to a non-existent topic. The messages will be lost in your case.

On Mon, Jun 12, 2017 at 12:30 PM, shawn.du <shawn...@neulion.com.cn> wrote:
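A minimal sketch of the topic-messaging calls being discussed, reflecting the behavior described above (a listener has to be registered before the message is sent, since messages to topics without listeners are lost); the topic name and payloads are arbitrary:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteMessaging;

public class MessagingSketch {
    static final String TOPIC = "T1";

    /** Run on the receiving node before anything is sent. */
    static void subscribe(Ignite ignite) {
        ignite.message().localListen(TOPIC, (nodeId, msg) -> {
            System.out.println("Got " + msg + " from node " + nodeId);
            return true; // keep listening
        });
    }

    /** Run on the sending node. */
    static void send(Ignite ignite) {
        IgniteMessaging messaging = ignite.message();
        messaging.send(TOPIC, "hello");                   // unordered
        messaging.sendOrdered(TOPIC, "hello-ordered", 0); // ordered, default timeout
    }
}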







Hi,

I am trying Ignite topic-based messaging, and I would like to know Ignite's behavior in the following case: client A sends a message on topic T1 to the Ignite server, but there are no listeners on that topic at the time. After a while (say 1 or 2 minutes), client B comes online and subscribes to topic T1. Will client B get the message? If so, how long will the message stay in the Ignite queue and how do I configure that? And how does this work for ordered messages?






Thanks
Shawn













messaging behavior

2017-06-12 Thread shawn.du






Hi,

I am trying Ignite topic-based messaging, and I would like to know Ignite's behavior in the following case: client A sends a message on topic T1 to the Ignite server, but there are no listeners on that topic at the time. After a while (say 1 or 2 minutes), client B comes online and subscribes to topic T1. Will client B get the message? If so, how long will the message stay in the Ignite queue and how do I configure that? And how does this work for ordered messages?






Thanks
Shawn









Re: CacheInterceptor issue

2017-03-24 Thread shawn.du





Hi Andrey,

Thanks for your reply. My interceptor saves some data into local files and then moves the files to another mounted directory (NFS or S3). Other programs do the merge operations. I use the node ID to make each file name unique, so I think it is fine. As you mentioned, there are I/O operations involved, so I already made the writes asynchronous. Now all seems good.
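A sketch of the async-write arrangement described above: keep the interceptor callback itself cheap and hand the file write to a background executor owned by the interceptor. Class and method names are illustrative, not Shawn's actual code:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.cache.Cache;
import org.apache.ignite.cache.CacheInterceptorAdapter;

public class FileWritingInterceptor extends CacheInterceptorAdapter<String, byte[]> {
    // Created lazily on whichever node the interceptor runs on
    // (an ExecutorService is not serializable, so it cannot travel with the interceptor).
    private transient ExecutorService ioPool;

    @Override public void onAfterRemove(Cache.Entry<String, byte[]> entry) {
        String key = entry.getKey();
        byte[] val = entry.getValue();
        pool().submit(() -> writeToLocalFile(key, val)); // keep the callback itself cheap
    }

    private synchronized ExecutorService pool() {
        if (ioPool == null)
            ioPool = Executors.newSingleThreadExecutor();
        return ioPool;
    }

    private void writeToLocalFile(String key, byte[] val) {
        // append to a node-local file; an external job later moves/merges it to NFS or S3
    }
}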






Thanks
Shawn





On 03/24/2017 16:31,Andrey Mashenkov<amashen...@gridgain.com> wrote: 


Hi Shawn,

What stream do you mean? An Ignite CacheInterceptor can be serialized and executed on a different node; possibly you send it with a stream created on the client side. If so, that is a bad idea.

CacheInterceptor supports resource injection (see the org.apache.ignite.resources package), so you can make an async call from it to some code that does all the IO work. Also note this from the javadoc:

 * Implementations should not execute any complex logic,
 * including locking, networking or cache operations,
 * as it may lead to deadlock, since this method is called
 * from sensitive synchronization blocks.

On Fri, Mar 24, 2017 at 4:16 AM, shawn.du <shawn...@neulion.com.cn> wrote:

 




Hi Andrey,
Thanks for your reply. One more question: does the interceptor need to be thread-safe? My interceptor throws unexpected IOExceptions, like trying to write to a closed stream.

 



 
Thanks
Shawn





On 03/23/2017 15:18,Andrey Mashenkov<andrey.mashenkov@gmail.com> wrote: 


Hi Shawn,

CacheInterceptor should be called once on one node.

On Wed, Mar 22, 2017 at 5:11 AM, shawn.du <shawn...@neulion.com.cn> wrote:

  





Hi,If I have multiple-nodes cluster, and I set up a cache interceptor on a cache. the cache mode is partitioned or duplicated. in partition mode, the cache maybe have a backup or not.  My question is: when a cache entry is removed,  the interceptor will be called once on one node or multiple times on multiple node? 

  



  
Thanks
Shawn








--
Best regards,
Andrey V. Mashenkov












Re: CacheInterceptor issue

2017-03-23 Thread shawn.du





Hi Andrey,
Thanks for your reply. one more question:Does the interceptor need to be thread-safe?  My interceptor throws unexpected Io Exceptions like trying to write closed stream.






Thanks
Shawn





On 03/23/2017 15:18,Andrey Mashenkov<andrey.mashen...@gmail.com> wrote: 


Hi Shawn,CacheInterceptor should be called once on one node.On Wed, Mar 22, 2017 at 5:11 AM, shawn.du <shawn...@neulion.com.cn> wrote:

 





Hi,If I have multiple-nodes cluster, and I set up a cache interceptor on a cache. the cache mode is partitioned or duplicated. in partition mode, the cache maybe have a backup or not.  My question is: when a cache entry is removed,  the interceptor will be called once on one node or multiple times on multiple node? 

 



 
Thanks
Shawn








--
Best regards,
Andrey V. Mashenkov







CacheInterceptor issue

2017-03-21 Thread shawn.du






Hi,

Suppose I have a multi-node cluster and I set up a cache interceptor on a cache. The cache mode is partitioned or replicated, and in partitioned mode the cache may or may not have a backup. My question is: when a cache entry is removed, will the interceptor be called once on one node, or multiple times on multiple nodes?






Thanks
Shawn










generic class support

2017-03-08 Thread shawn.du





Hi,

See the configuration below:

                    java.lang.String
                    com.example.MyClass

MyClass is a generic class; T can be a simple data type such as Integer/Long/byte[]/String:

public class MyClass<T> {
    @QuerySqlField
    private T value;
}

Does Ignite support this?






Thanks
Shawn










consistence issue

2017-03-01 Thread shawn.du






Hi,

I have a cache with possibly thousands of entries, and each entry may be updated every few seconds. I update them using the cache.invoke method, which updates a cache entry partially. Questions:

#1 When the invoke method returns on the client side, will other clients see the latest data?
#2 If for some reason the client process is killed, can data be lost even though the invoke method has already returned?
#3 If ignite.close() completes normally, does it ensure all data has been written to the server?
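For context, cache.invoke sends an entry processor to the node that owns the key and applies the change there, which is the "partial update" referred to above. A minimal sketch, not Shawn's code; the cache name and value class are made up:

import javax.cache.processor.MutableEntry;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;

public class PartialUpdateSketch {
    /** Hypothetical value class. */
    public static class Counter implements java.io.Serializable {
        long hits;
    }

    static void bump(Ignite ignite, String key) {
        IgniteCache<String, Counter> cache = ignite.cache("counters");

        // CacheEntryProcessor is Serializable, so the lambda can be shipped to the owning node.
        CacheEntryProcessor<String, Counter, Void> proc = (MutableEntry<String, Counter> e, Object... args) -> {
            Counter c = e.getValue() == null ? new Counter() : e.getValue();
            c.hits++;
            e.setValue(c); // only this entry is written back
            return null;
        };

        cache.invoke(key, proc);
    }
}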






Thanks
Shawn










cache namespace support

2017-02-27 Thread shawn.du






Hi,

I have a use case like the following: I want to store key-value caches in Ignite, but each key-value pair belongs to a namespace, and keys are unique within a namespace. Every cache entry has the same data type for both key and value. When querying, we always query by key and namespace; sometimes we query everything in a namespace, or remove all entries in one namespace. I think there are two solutions:

#1 Create a cache for each namespace, i.e. use the namespace as the cache name. With this approach there may be many caches in Ignite. IMO it is a bit dirty; I mean that when using ignitevisor to show caches you will see many caches.
#2 Create one cache where each entry has a namespace, key and value, using (namespace, key) as the cache key. This solution is clean, but you have to create a static class / complex configuration, and you have to use SQL to query or remove all data in a namespace.

I think this is a very common use case. Can Ignite support it at a lower layer, for a cleaner solution and better performance? Welcome your comments.
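A sketch of what the static key class in option #2 might look like, with both fields SQL-queryable so that per-namespace queries and bulk deletes can go through SQL; the class and field names are illustrative:

import java.io.Serializable;
import java.util.Objects;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class NsKey implements Serializable {
    @QuerySqlField(index = true)
    private String namespace;

    @QuerySqlField(index = true)
    private String key;

    public NsKey(String namespace, String key) {
        this.namespace = namespace;
        this.key = key;
    }

    // Cache keys must define equality over all fields.
    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof NsKey)) return false;
        NsKey other = (NsKey) o;
        return Objects.equals(namespace, other.namespace) && Objects.equals(key, other.key);
    }

    @Override public int hashCode() {
        return Objects.hash(namespace, key);
    }
}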






Thanks
Shawn










Re: simple file based persistent store support TTL

2017-02-15 Thread shawn.du






Hi,

After my code had run for almost a day, the server threw many of the exceptions below, and everything seems to have stopped working (cache, query and compute). My code is rather simple; what is the possible reason? Thanks in advance.

public boolean apply(CacheEvent cacheEvent)
{
    if (cacheEvent.type() == EventType.EVT_CACHE_OBJECT_EXPIRED)
    {
        logger.info("remove cache entry key:" + cacheEvent.key() + " in cache " + cacheEvent.cacheName());
        ignite.cache(cacheEvent.cacheName()).remove(cacheEvent.key());
    }
    return true;
}

        at org.apache.ignite.internal.GridEventConsumeHandler$2.onEvent(GridEventConsumeHandler.java:169)
        at org.apache.ignite.internal.managers.eventstorage.GridEventStorageManager.notifyListeners(GridEventStorageManager.java:770)
        at org.apache.ignite.internal.managers.eventstorage.GridEventStorageManager.notifyListeners(GridEventStorageManager.java:755)
        at org.apache.ignite.internal.managers.eventstorage.GridEventStorageManager.record(GridEventStorageManager.java:295)
        at org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:297)
        at org.apache.ignite.internal.processors.cache.GridCacheEventManager.addEvent(GridCacheEventManager.java:204)
        at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.onExpired(GridCacheMapEntry.java:3883)
        at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.onTtlExpired(GridCacheMapEntry.java:3808)
        at org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:139)
        at org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:103)
        at org.apache.ignite.internal.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:920)
        at org.apache.ignite.internal.processors.cache.GridCacheGateway.leaveNoLock(GridCacheGateway.java:234)
        at org.apache.ignite.internal.processors.cache.GridCacheGateway.leave(GridCacheGateway.java:219)
        at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.onLeave(IgniteCacheProxy.java:2287)
        at org.apache.ignite.internal.processors.cache.IgniteCacheProxy.remove(IgniteCacheProxy.java:1476)
        at com.neulion.qos.ignite.listener.CacheEventListener.apply(CacheEventListener.java:29)








Thanks
Shawn





On 02/14/2017 11:44,shawn.du<shawn...@neulion.com.cn> wrote: 





Thanks Val, I forgot to enable them. I just got example code and run them.  Now it works.






Thanks
Shawn





On 02/14/2017 11:40,vkulichenko<valentin.kuliche...@gmail.com> wrote: 


Hi Shawn,

Did you enable the event (IgniteConfiguration.setIncludeEventTypes(..))?

-Val
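Putting Val's hint together with the listener code from the thread: the expiry event type must be enabled in the node configuration before any listener for it will fire. A minimal sketch, not from the thread:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class ExpiryEventSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Events are disabled by default; switch on the expiry event explicitly.
        cfg.setIncludeEventTypes(EventType.EVT_CACHE_OBJECT_EXPIRED);

        Ignite ignite = Ignition.start(cfg);

        ignite.events().localListen((IgnitePredicate<CacheEvent>) evt -> {
            System.out.println("Expired key " + evt.key() + " in cache " + evt.cacheName());
            return true; // keep the listener
        }, EventType.EVT_CACHE_OBJECT_EXPIRED);
    }
}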



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/simple-file-based-persistent-store-support-TTL-tp10355p10616.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.







Re: persist periodically

2017-02-14 Thread shawn.du





Hi Andrey,

Then do you know how to gracefully shut down Ignite servers without data loss when write-behind is enabled?






Thanks
Shawn





On 02/14/2017 14:04,shawn.du<shawn...@neulion.com.cn> wrote: 






exactly.  Thank you, Andrey!






Thanks
Shawn





On 02/14/2017 13:54,Andrey Mashenkov<amashen...@gridgain.com> wrote: 


Hi Shawn,

Look at CacheConfiguration.setWriteBehindEnabled [1] and related methods. Does it meet your needs?

[1] http://apacheignite.gridgain.org/v1.8/docs/persistent-store#write-behind-caching

On Tue, Feb 14, 2017 at 8:40 AM, shawn.du <shawn...@neulion.com.cn> wrote:
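A sketch of the write-behind knobs Andrey points to: the store still has to be a regular CacheStore, and the flush frequency gives the "persist roughly every second" behavior from the original question. The store below is a do-nothing placeholder, not a real implementation:

import java.io.Serializable;
import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindSketch {
    /** Placeholder store; a real one would write to a database or file. */
    public static class NoopStore extends CacheStoreAdapter<String, Object> implements Serializable {
        @Override public Object load(String key) { return null; }
        @Override public void write(Cache.Entry<? extends String, ? extends Object> entry) { /* batched by write-behind */ }
        @Override public void delete(Object key) { /* no-op */ }
    }

    static CacheConfiguration<String, Object> stormStateCache() {
        CacheConfiguration<String, Object> ccfg = new CacheConfiguration<>("stormState");
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(NoopStore.class));
        ccfg.setWriteThrough(true);
        ccfg.setWriteBehindEnabled(true);
        ccfg.setWriteBehindFlushFrequency(1000); // flush accumulated changes roughly every second
        return ccfg;
    }
}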

  





Hi,ignite support persist data periodically?see blow use case:we want to implement storm state management using ignite.we maintain storm stateful object as caches in ignite.for storm used processing message in very low latency, so the stateful object maybe changed very fast.if ignite persist each change into storage, I think it will both decrease ignite and storm performance.how to make ignite persist periodically? like every seconds.

  



  
Thanks
Shawn
















Re: persist periodically

2017-02-13 Thread shawn.du






exactly.  Thank you, Andrey!






Thanks
Shawn





On 02/14/2017 13:54,Andrey Mashenkov<amashen...@gridgain.com> wrote: 


Hi Shawn,Look at CacheConfiguration.setWriteBehindEnabled [1] and related methods.Does it meets your needs?[1] http://apacheignite.gridgain.org/v1.8/docs/persistent-store#write-behind-cachingOn Tue, Feb 14, 2017 at 8:40 AM, shawn.du <shawn...@neulion.com.cn> wrote:

 





Hi,ignite support persist data periodically?see blow use case:we want to implement storm state management using ignite.we maintain storm stateful object as caches in ignite.for storm used processing message in very low latency, so the stateful object maybe changed very fast.if ignite persist each change into storage, I think it will both decrease ignite and storm performance.how to make ignite persist periodically? like every seconds.

 



 
Thanks
Shawn















persist periodically

2017-02-13 Thread shawn.du






Hi,

Does Ignite support persisting data periodically? See the use case below: we want to implement Storm state management using Ignite, maintaining Storm's stateful objects as caches in Ignite. Storm processes messages with very low latency, so the stateful objects may change very fast. If Ignite persisted every change to storage, I think it would hurt both Ignite and Storm performance. How can I make Ignite persist periodically, e.g. every second?






Thanks
Shawn










Re: simple file based persistent store support TTL

2017-02-13 Thread shawn.du





Thanks Val, I forgot to enable them. I just took the example code and ran it; now it works.






Thanks
Shawn





On 02/14/2017 11:40,vkulichenko wrote: 


Hi Shawn,

Did you enable the event (IgniteConfiguration.setIncludeEventTypes(..))?

-Val



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/simple-file-based-persistent-store-support-TTL-tp10355p10616.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.






Re: simple file based persistent store support TTL

2017-02-12 Thread shawn.du





Hi Val,

Do you mean that when a cache entry expires, I have to call cache.remove(key) manually in order for delete to be called in the cache store? I tried to implement it with a remote listener like this:

ignite.events().remoteListen(null, new IgnitePredicate<CacheEvent>()
{
    @IgniteInstanceResource
    private Ignite ignite;

    @LoggerResource
    private IgniteLogger logger;

    @Override
    public boolean apply(CacheEvent event)
    {
        logger.info("remove cache  " + event.cacheName() + " for key " + event.key());
        ignite.cache(event.cacheName()).remove(event.key());
        return true;
    }
}, EventType.EVT_CACHE_OBJECT_EXPIRED);

It seems that it doesn't work; I don't see any logs of it being called. What did I miss?

Thanks
Shawn
On 02/2/2017 05:27,vkulichenko wrote: 


Hi Shawn,

Built in expiration is cache expiration, which by definition doesn't touch
database. This means that you need to manually use remove() operation to
delete from both cache and store. I think approach #1 is the best way to
approach this.

-Val



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/simple-file-based-persistent-store-support-TTL-tp10355p10365.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.






Register CacheEvent Remote Listener

2017-02-12 Thread shawn.du





Hi,

I want to register listeners for CacheEvent only, but I don't want to specify a particular cache. Is that possible? Something like this code:

ignite.events().remoteListen(null, new IgnitePredicate<CacheEvent>() { ... });






Thanks
Shawn










Re: restore Java Object from BinaryObject

2017-02-12 Thread shawn.du





It was an error in my code: the BinaryObject was deserialized twice.






Thanks
Shawn





On 02/11/2017 06:26,vkulichenko wrote: 


Shawn,

What's the actual issue after you disabled compact footer? Exception?
Incorrect behavior?

-Val



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/restore-Java-Object-from-BinaryObject-tp10525p10556.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.






Re: restore Java Object from BinaryObject

2017-02-09 Thread shawn.du





Hi Ilya,
Thank you for your reply. For my code it works only partially.

The suggested configuration works for the caches that declare a Java class as the cache value type. It doesn't seem to work for the caches created via setQueryEntities. My use case is as follows:

#1: I create caches dynamically by calling setQueryEntities.
#2: I want to load all caches when the server starts.
#3: Because the caches are created dynamically, I can't use a Spring XML configuration.
#4: I save the cache creation parameters somewhere, and in a LifecycleBean I create the caches and call loadCache:

metricCache = ignite.createCache(CacheUtils.createColumnCacheConfiguration(tableSchema.getCacheName(), tableSchema, cacheConf));
if (metricCache != null && cacheConf.isPersistEnable())
{
    metricCache.loadCache(null);
}

I guess that a cache created via setQueryEntities gets a different hashcode every time, even when the parameters are the same. Please confirm.
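(For reference, a minimal programmatic sketch of disabling compact footers, the setting discussed in this thread; an illustration only, not necessarily the poster's actual configuration.)

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

BinaryConfiguration binCfg = new BinaryConfiguration();
binCfg.setCompactFooter(false); // keep full field metadata in each serialized object

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setBinaryConfiguration(binCfg);
Ignition.start(cfg);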






Thanks,
Shawn





On 02/9/2017 19:03, Ilya Lantukh <ilant...@gridgain.com> wrote:


Hi,
In your case you should disable compact footer. See https://ignite.apache.org/releases/mobile/org/apache/ignite/configuration/BinaryConfiguration.html#isCompactFooter().

On Thu, Feb 9, 2017 at 1:28 PM, shawn.du <shawn...@neulion.com.cn> wrote:

 




Hi,
I implemented a CacheStore that persists a BinaryObject as bytes in a MySQL BLOB. Exceptions occur when calling loadCache(): binaryObject.deserialize() throws errors like "Cannot find metadata for object with compact footer: -1615140068". If I just put the BinaryObject into the cache without deserializing it, that works, but when I later get it and use it, the same exception is thrown. How can I fix this? Thanks in advance.

public class BlobCacheStore extends CacheStoreAdapter<String, BinaryObject>
{
    @Override
    public void loadCache(IgniteBiInClosure<String, BinaryObject> clo, Object... args)
    {
        init();
        String sql = TEMPLATE_LOAD_SQL.replace(FAKE_TABLE_NAME, tableName);
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement();
             ResultSet rs = statement.executeQuery(sql))
        {
            while (rs.next())
            {
                String key = rs.getString("aKey");
                Blob blob = rs.getBlob("val");
                BinaryObject binaryObject = ignite.configuration().getMarshaller()
                    .unmarshal(blob.getBytes(1, (int) blob.length()), getClass().getClassLoader());
                blob.free();
                ignite.cache(cacheName).put(key, binaryObject.deserialize()); // throws here
            }
        }
        catch (Exception e)
        {
            throw new IgniteException(e);
        }
    }
}

 



 
Thanks,
Shawn








--
Best regards,
Ilya







restore Java Object from BinaryObject

2017-02-09 Thread shawn.du





Hi,
I implemented a CacheStore that persists a BinaryObject as bytes in a MySQL BLOB. Exceptions occur when calling loadCache(): binaryObject.deserialize() throws errors like "Cannot find metadata for object with compact footer: -1615140068". If I just put the BinaryObject into the cache without deserializing it, that works, but when I later get it and use it, the same exception is thrown. How can I fix this? Thanks in advance.

public class BlobCacheStore extends CacheStoreAdapter<String, BinaryObject>
{
    @Override
    public void loadCache(IgniteBiInClosure<String, BinaryObject> clo, Object... args)
    {
        init();
        String sql = TEMPLATE_LOAD_SQL.replace(FAKE_TABLE_NAME, tableName);
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement();
             ResultSet rs = statement.executeQuery(sql))
        {
            while (rs.next())
            {
                String key = rs.getString("aKey");
                Blob blob = rs.getBlob("val");
                BinaryObject binaryObject = ignite.configuration().getMarshaller()
                    .unmarshal(blob.getBytes(1, (int) blob.length()), getClass().getClassLoader());
                blob.free();
                ignite.cache(cacheName).put(key, binaryObject.deserialize()); // throws here
            }
        }
        catch (Exception e)
        {
            throw new IgniteException(e);
        }
    }
}






Thanks,
Shawn










Re: how to get BinaryMarshaller instance

2017-02-07 Thread shawn.du






Got it. The storeKeepBinary flag will help.
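(A minimal sketch of that flag on a cache configuration. The cache name is illustrative, and BlobCacheStore refers to the store from the earlier message in this thread.)

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<String, BinaryObject> ccfg = new CacheConfiguration<>("binaryCache");
ccfg.setStoreKeepBinary(true); // the cache store sees BinaryObject instances instead of deserialized classes
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(BlobCacheStore.class));
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);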






Thanks,
Shawn





On 02/8/2017 13:58, shawn.du <shawn...@neulion.com.cn> wrote:






Hi,
I want to persist some data in binary form, since I put entries into the cache as BinaryObject, like this:

BinaryObjectBuilder builder = ignite.binary().builder(typeName);
cache.put(key, builder.build());

How can I get or create a BinaryMarshaller instance?







Thanks,
Shawn











how to get BinaryMarshaller instance

2017-02-07 Thread shawn.du






Hi,
I want to persist some data in binary form, since I put entries into the cache as BinaryObject, like this:

BinaryObjectBuilder builder = ignite.binary().builder(typeName);
cache.put(key, builder.build());

How can I get or create a BinaryMarshaller instance?







Thanks,
Shawn










Re: persist only on delete

2017-02-06 Thread shawn.du





Hi Val,
Just curious: why does entry.getValue() sometimes return null in onAfterRemove(Cache.Entry<K, V> entry)? I never put a null value into the cache, yet I have to check for null to avoid a NullPointerException.
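(A minimal sketch of the null check being described, assuming the persist-on-delete logic lives in a CacheInterceptorAdapter as suggested earlier in this thread; key and value types are illustrative.)

import javax.cache.Cache;
import org.apache.ignite.cache.CacheInterceptorAdapter;

public class PersistOnDeleteInterceptor extends CacheInterceptorAdapter<String, byte[]> {
    @Override
    public void onAfterRemove(Cache.Entry<String, byte[]> entry) {
        if (entry.getValue() == null)
            return; // guard against the null values observed above
        // ... persist entry.getKey() / entry.getValue() to the external store here ...
    }
}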






Thanks,
Shawn





On 02/7/2017 09:07, shawn.du <shawn...@neulion.com.cn> wrote:





Hi Val,
Thanks very much. CacheInterceptor is the best way.

Shawn





On 02/7/2017 08:09, vkulichenko <valentin.kuliche...@gmail.com> wrote:


Hi Shawn,

Your approach sounds a bit dangerous. Store is called within an entry lock,
which means that if you do a distributed get there, you will likely get
thread starvation. This should not happen on stable topology, but if another
node fails or joins, you can get stuck.

I think you should use interceptor instead:
https://ignite.apache.org/releases/mobile/org/apache/ignite/cache/CacheInterceptor.html

-Val



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/persist-only-on-delete-tp10438p10461.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.







Re: persist only on delete

2017-02-06 Thread shawn.du





Hi Val,
Thanks very much. CacheInterceptor is the best way.

Shawn





On 02/7/2017 08:09, vkulichenko wrote:


Hi Shawn,

Your approach sounds a bit dangerous. Store is called within an entry lock,
which means that if you do a distributed get there, you will likely get
thread starvation. This should not happen on stable topology, but if another
node fails or joins, you can get stuck.

I think you should use interceptor instead:
https://ignite.apache.org/releases/mobile/org/apache/ignite/cache/CacheInterceptor.html

-Val



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/persist-only-on-delete-tp10438p10461.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.






Re: simple file based persistent store support TTL

2017-02-02 Thread shawn.du





Thanks, Val.

On 02/2/2017 05:27, vkulichenko wrote:





Hi Shawn,

Built in expiration is cache expiration, which by definition doesn't touch
database. This means that you need to manually use remove() operation to
delete from both cache and store. I think approach #1 is the best way to
approach this.

-Val



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/simple-file-based-persistent-store-support-TTL-tp10355p10365.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.






Re: compute SQL returned data.

2017-01-07 Thread shawn.du







Hi,
I think my case is a typical map-reduce problem. I went through the Ignite Compute API, but I don't see how Ignite collocates compute with data when doing map-reduce. My logic is:

#1: The client issues a SQL query to the server to get the cache keys. This is a very light task.
#2: The client sends the keys returned in step #1 to the Ignite server to run map-reduce jobs. The input data for the map-reduce job lives in Ignite caches and can be fetched by key, so the jobs can be split by key.
#3: The client gets the map-reduce result.

I also don't know how to set up the mappers. Please help, and thanks in advance.

Thanks
Shawn
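(A minimal sketch of running a job on the node that owns a given key, which is the affinity-based approach referred to in the quoted replies below; the cache name, key and job body are illustrative.)

import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;

// 'ignite' is an already-started Ignite instance.
// For each key returned by the light SQL query in step #1, run the heavy work where the data lives.
final String cacheName = "dataCache";
final String key = "someKey";

Integer result = ignite.compute().affinityCall(cacheName, key, (IgniteCallable<Integer>) () -> {
    IgniteCache<String, byte[]> cache = Ignition.localIgnite().cache(cacheName);
    byte[] data = cache.localPeek(key);     // the entry for this key is stored on this node
    return data == null ? 0 : data.length;  // placeholder for the real computation
});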

On 01/6/2017 01:57, Denis Magda wrote:





Keep in mind that if you're going to execute local SQL queries inside of distributed computations, then the computations have to be sent to the cluster with the affinity-based methods: https://apacheignite.readme.io/docs/collocate-compute-and-data#section-affinity-call-and-run-methods

-Denis

On Jan 5, 2017, at 12:30 AM, Nikolai Tikhonov wrote:

We can submit a compute task whose jobs will be mapped onto nodes. The jobs can execute local SQL queries (see Query#setLocal). Examples of queries and compute tasks can be found here: https://github.com/apache/ignite/tree/master/examples/src/main/java/org/apache/ignite/examples

On Thu, Jan 5, 2017 at 11:09 AM, Shawn Du wrote:

Thanks! My case looks like this: we store big cache objects to avoid Ignite's per-entry overhead. For these big cache objects we can't use SQL to query/aggregate them, so we need to compute ourselves. We want to compute on the SQL-returned data on the Ignite server side for several reasons:
#1: The SQL-returned data is huge, and we don't want to transfer it from server to client.
#2: To compute on the client side we would need to write the parallel code ourselves; I assume Ignite will do that for us.
#3: If we add new server nodes in the future, we will benefit immediately.

Thanks
Shawn

From: Nikolai Tikhonov [mailto:ntikho...@apache.org]
Sent: January 5, 2017 15:45
To: user@ignite.apache.org
Subject: Re: compute SQL returned data.

Hi,
I'm not sure I understand what you want. Could you describe your case in more detail? You can also find examples here: https://github.com/apache/ignite/tree/master/examples/src/main/java/org/apache/ignite/examples

On Thu, Jan 5, 2017 at 10:20 AM, Shawn Du wrote:

Hi experts,
I want to compute the SQL-returned data on the server side. Are there any examples?

Thanks
Shawn






Re: BinaryObject and String.intern

2017-01-03 Thread shawn.du


Thanks, dkarachentsev.

On 2017-01-03 18:25, dkarachentsev wrote:

Actually no, because Ignite internally will store it as a BinaryObject and
will send to other nodes in a binary format as well, where all string fields
will be unmarshaled without intern().  



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/BinaryObject-and-String-intern-tp9826p9834.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
 




Re: Cache.invoke are terribly slow and can't update cache

2016-12-22 Thread shawn.du






Thanks Val! Now I understand EntryProcessor better. As for "error prone", I meant that following the documentation is error prone: EntryProcessor is just an interface, and nothing stops the user from implementing it with a lambda or an anonymous class.




 


When the interface is implemented on the client side, the user may reference client-side global variables, classes or instances (some of which may even start new threads). I don't know whether that will work or not, but at the very least it introduces a lot of complexity. During yesterday's debugging I didn't even see many useful logs. Coming back to the documentation, I think it could be improved with some recommendations.

Thanks
Shawn



On 2016-12-23 03:03, vkulichenko wrote:





Shawn,

#1 - Yes, peer class loading is needed if you don't have entry processor
implementation class deployed on server nodes.
#2 - Correct. Moreover, I would recommend not to use lambdas or anonymous
classes for anything that can be serialized, but to use static classes
instead (this is actually a recommendation for Java in general, not only for
Ignite). This will give you much more control on what is serialized along
with entry processor and will simplify the understanding of how it is
executed.

Can you clarify why you think it's error prone?

-Val



--
View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Cache-invoke-are-terribly-slow-and-can-t-update-cache-tp9676p9708.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
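(Following the recommendation above to prefer static classes over lambdas and anonymous classes, a minimal sketch of an EntryProcessor written as a standalone class; the names and value types are illustrative.)

import javax.cache.processor.MutableEntry;
import org.apache.ignite.cache.CacheEntryProcessor;

// A top-level (or static nested) class: nothing from the caller's scope is captured,
// so only this class itself has to be available on the server nodes.
public class IncrementProcessor implements CacheEntryProcessor<String, Long, Long> {
    @Override
    public Long process(MutableEntry<String, Long> entry, Object... args) {
        long newVal = (entry.exists() ? entry.getValue() : 0L) + 1;
        entry.setValue(newVal);
        return newVal;
    }
}

It would then be applied with cache.invoke("someKey", new IncrementProcessor()), and with this shape it is easy to reason about exactly what gets serialized to the servers.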