Error in write-through

2018-11-12 Thread Akash Shinde
Hi,
I have started four Ignite nodes with a cache configured in distributed mode.
When I initiated thousands of requests to write data to this
cache (write-through enabled), I faced the error below.
From the logs we can see this error occurs while writing to the Oracle
database (using cache write-through).
The error is not consistent. The node stops for a while after this error and
then continues to pick up the next Ignite tasks.
Could someone please advise what the following log means?


2018-11-13 05:52:05,577 2377545 [core-1] INFO
c.q.a.a.s.AssetManagementService - Add asset request processing started,
requestId ADD_Ip_483, subscriptionId =262604, userId=547159
2018-11-13 05:52:06,647 2378615 [grid-timeout-worker-#39%springDataNode%]
WARN  o.a.ignite.internal.util.typedef.G - >>> Possible starvation in
striped pool.
Thread name: sys-stripe-11-#12%springDataNode%
Queue: [Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE,
topicOrd=8, ordered=false, timeout=0, skipOnTimeout=false,
msg=GridNearSingleGetResponse [futId=1542085977929, res=BinaryObjectImpl
[arr= true, ctx=false, start=0], topVer=null, err=null, flags=0]]], Message
closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8,
ordered=false, timeout=0, skipOnTimeout=false, msg=GridDhtTxPrepareResponse
[nearEvicted=null, futId=b67ac7b0761-93ebea72-bf4e-40d8-8a19-d3258be94ce9,
miniId=1, super=GridDistributedTxPrepareResponse [txState=null, part=-1,
err=null, super=GridDistributedBaseMessage [ver=GridCacheVersion
[topVer=153565953, order=1542089536997, nodeOrder=3], committedVers=null,
rolledbackVers=null, cnt=0, super=GridCacheIdMessage [cacheId=0]],
Message closure [msg=GridIoMessage [plc=2, topic=TOPIC_CACHE, topicOrd=8,
ordered=false, timeout=0, skipOnTimeout=false, msg=GridNearSingleGetRequest
[futId=1542085976178, key=BinaryObjectImpl [arr= true, ctx=false, start=0],
flags=1, topVer=AffinityTopologyVersion [topVer=7, minorTopVer=0],
subjId=9e8db7e7-48ba-4161-881b-ad4fcfc175a0, taskNameHash=0, createTtl=-1,
accessTtl=-1
Deadlock: false
Completed: 703
Thread [name="sys-stripe-11-#12%springDataNode%", id=41, state=RUNNABLE,
blockCnt=37, waitCnt=729]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at oracle.net.ns.Packet.receive(Packet.java:311)
at oracle.net.ns.DataPacket.receive(DataPacket.java:105)
at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:305)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:249)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:171)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:89)
at oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:123)
at oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:79)
at oracle.jdbc.driver.T4CMAREngineStream.unmarshalUB1(T4CMAREngineStream.java:429)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:397)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:257)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:587)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:225)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:53)
at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:943)
at oracle.jdbc.driver.OraclePreparedStatement.executeForRowsWithTimeout(OraclePreparedStatement.java:12029)
at oracle.jdbc.driver.OraclePreparedStatement.executeBatch(OraclePreparedStatement.java:12140)
at oracle.jdbc.driver.OracleStatementWrapper.executeBatch(OracleStatementWrapper.java:246)
at com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:128)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
at com.qualys.agms.grid.cache.loader.AbstractDefaultCacheStore.writeAll(AbstractDefaultCacheStore.java:126)
at o.a.i.i.processors.cache.store.GridCacheStoreManagerAdapter.putAll(GridCacheStoreManagerAdapter.java:641)
at o.a.i.i.processors.cache.transactions.IgniteTxAdapter.batchStoreCommit(IgniteTxAdapter.java:1422)
at o.a.i.i.processors.cache.transactions.IgniteTxLocalAdapter.userCommit(IgniteTxLocalAdapter.java:502)
at o.a.i.i.processors.cache.distributed.near.GridNearTxLocal.localFinish(GridNearTxLocal.java:3185)
at o.a.i.i.processors.cache.distributed.near.GridNearTxFinishFuture.doFinish(GridNearTxFinishFuture.java:467)
at o.a.i.i.processors.cache.distributed.near.GridNearTxFinishFuture.finish(GridNearTxFinishFuture.java:417)
at ...
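
The stack trace shows the Oracle write happening synchronously inside an
Ignite striped-pool thread during transaction commit, which is what triggers
the starvation warning. One way to decouple commits from the database is
write-behind mode, which queues store updates and flushes them from background
threads. A minimal sketch, not a definitive fix, assuming a CacheStore
implementation like the AbstractDefaultCacheStore in the trace (the cache
name, value type, and the AssetCacheStore class are placeholders; note that
write-behind trades some durability for throughput):

import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.configuration.CacheConfiguration;

public class WriteBehindConfig {
    public static CacheConfiguration<Long, Object> assetCacheCfg() {
        CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("assetCache");

        // Hypothetical store class standing in for the poster's CacheStore.
        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(AssetCacheStore.class));
        ccfg.setReadThrough(true);
        ccfg.setWriteThrough(true);

        // Write-behind: updates are queued and flushed to the database
        // asynchronously, so striped-pool threads no longer block on JDBC.
        ccfg.setWriteBehindEnabled(true);
        ccfg.setWriteBehindFlushFrequency(2000); // flush at least every 2 s
        ccfg.setWriteBehindBatchSize(512);       // rows per JDBC batch
        ccfg.setWriteBehindFlushThreadCount(2);  // background flusher threads

        return ccfg;
    }
}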

Re: Can't add new key/value pair to existing cache via sql command

2018-11-12 Thread Evgenii Zhuravlev
>then we can use a `cache group` to share some backend data structures and
>gain some performance.
It will reduce the overhead of each cache, since they will share the same
data structures under the hood.

>Are there any potential issues or considerations that need to be taken
>care of?
Caches in one cache group must have the same data distribution
configuration (affinity function, backups count, node filters).

Evgenii

Tue, 13 Nov 2018 at 6:39, kcheng.mvp :

> Thank you very much for your reply!
>
> If all the caches in my system are created via SQL, then all the
> backend H2 tables are in the PUBLIC schema.
>
> In this case all the caches are in the same schema, but they are
> different caches (one cache per table, all in the same schema),
>
> so we can use a `cache group` to share some backend data structures and
> gain some performance.
>
>
> Are there any potential issues or considerations that need to be taken
> care of?
>
>
> Right now we are preparing the production environment. I know there will
> be new tables (caches) in the near future, but I want to keep all the
> tables in the same database schema (just like a traditional database such
> as MySQL).
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
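
A minimal sketch of the cache-group setup described above (cache names, group
name, and types are placeholders); as noted, caches sharing a group must agree
on backups, affinity function, and node filters:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class CacheGroupExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Both caches share one group, hence one set of underlying
            // partition data structures, reducing per-cache overhead.
            CacheConfiguration<Long, String> orders =
                new CacheConfiguration<Long, String>("orders")
                    .setGroupName("appGroup")
                    .setBackups(1);

            CacheConfiguration<Long, String> customers =
                new CacheConfiguration<Long, String>("customers")
                    .setGroupName("appGroup")
                    .setBackups(1); // must match the group's distribution settings

            ignite.getOrCreateCache(orders);
            ignite.getOrCreateCache(customers);
        }
    }
}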


Re: Can't add new key/value pair to existing cache via sql command

2018-11-12 Thread kcheng.mvp
Thank you very much for your reply!

If all the caches in my system are created via SQL, then all the
backend H2 tables are in the PUBLIC schema.

In this case all the caches are in the same schema, but they are
different caches (one cache per table, all in the same schema),

so we can use a `cache group` to share some backend data structures and gain
some performance.


Are there any potential issues or considerations that need to be taken
care of?


Right now we are preparing the production environment. I know there will
be new tables (caches) in the near future, but I want to keep all the
tables in the same database schema (just like a traditional database such as
MySQL).





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Suppressing reflective serialisation in Ignite

2018-11-12 Thread Raymond Wilson
Thanks Pavel - works well! :)

Raymond.

On Tue, Nov 13, 2018 at 9:20 AM Pavel Tupitsyn  wrote:

> Hi Raymond,
>
> Yes, you can do that by implementing IBinarySerializer like this:
>
> class BinarizableSerializer : IBinarySerializer
> {
>     public void WriteBinary(object obj, IBinaryWriter writer)
>     {
>         if (obj is IBinarizable bin)
>         {
>             bin.WriteBinary(writer);
>             return;
>         }
>
>         throw new Exception("Not IBinarizable: " + obj.GetType());
>     }
>
>     public void ReadBinary(object obj, IBinaryReader reader)
>     {
>         if (obj is IBinarizable bin)
>         {
>             bin.ReadBinary(reader);
>             return;
>         }
>
>         throw new Exception("Not IBinarizable: " + obj.GetType());
>     }
> }
>
> Then set it globally in IgniteConfiguration:
>
> var cfg = new IgniteConfiguration
> {
>     BinaryConfiguration = new BinaryConfiguration
>     {
>         Serializer = new BinarizableSerializer()
>     }
> };
>
>
>
> On Thu, Nov 8, 2018 at 9:28 PM Raymond Wilson 
> wrote:
>
>> Hi Denis,
>>
>> Yes, I understand reflective serialisation uses binarizable serialisation
>> under the hood (and it's fast and easy to use). But it has issues in the
>> face of schema changes so it is better (and recommended in the Ignite docs)
>> to use Binarizable serialization for production.
>>
>> I want to make sure all my serialization contexts are covered by explicit
>> IBinarizable serialization. A simple approach would be to turn off
>> reflective serialization to ensure cases where we have missed it fail
>> explicitly. Is that possible?
>>
>> Thanks,
>> Raymond.
>>
>>
>> On Thu, Nov 8, 2018 at 1:10 PM Denis Magda  wrote:
>>
>>> Hi Raymond,
>>>
>>> If this page is to be believed, reflective serialization converts an object
>>> to the binary format (as if it were implicitly marked with the IBinarizable
>>> interface):
>>>
>>> https://apacheignite-net.readme.io/docs/serialization#section-ignite-reflective-serialization
>>>
>>> --
>>> Denis
>>>
>>>
>>> On Tue, Nov 6, 2018 at 1:01 PM Raymond Wilson <
>>> raymond_wil...@trimble.com> wrote:
>>>
 We are currently converting our use of Ignite reflective serialisation
 to use IBinarizable based serialisation [using Ignite 2.6 with c# client]

 What I would like to do is enforce a policy of not using reflective
 serialisation to ensure we have all the bases covered.

 Is there a way to do this in Ignite?

 Thanks,
 Raymond.




Re: Suppressing reflective serialisation in Ignite

2018-11-12 Thread Pavel Tupitsyn
Hi Raymond,

Yes, you can do that by implementing IBinarySerializer like this:

class BinarizableSerializer : IBinarySerializer
{
    public void WriteBinary(object obj, IBinaryWriter writer)
    {
        if (obj is IBinarizable bin)
        {
            bin.WriteBinary(writer);
            return;
        }

        throw new Exception("Not IBinarizable: " + obj.GetType());
    }

    public void ReadBinary(object obj, IBinaryReader reader)
    {
        if (obj is IBinarizable bin)
        {
            bin.ReadBinary(reader);
            return;
        }

        throw new Exception("Not IBinarizable: " + obj.GetType());
    }
}

Then set it globally in IgniteConfiguration:

var cfg = new IgniteConfiguration
{
    BinaryConfiguration = new BinaryConfiguration
    {
        Serializer = new BinarizableSerializer()
    }
};



On Thu, Nov 8, 2018 at 9:28 PM Raymond Wilson 
wrote:

> Hi Denis,
>
> Yes, I understand reflective serialisation uses binarizable serialisation
> under the hood (and it's fast and easy to use). But it has issues in the
> face of schema changes so it is better (and recommended in the Ignite docs)
> to use Binarizable serialization for production.
>
> I want to make sure all my serialization contexts are covered by explicit
> IBinarizable serialization. A simple approach would be to turn off
> reflective serialization to ensure cases where we have missed it fail
> explicitly. Is that possible?
>
> Thanks,
> Raymond.
>
>
> On Thu, Nov 8, 2018 at 1:10 PM Denis Magda  wrote:
>
>> Hi Raymond,
>>
>> If this page is to be believed, reflective serialization converts an object
>> to the binary format (as if it were implicitly marked with the IBinarizable
>> interface):
>>
>> https://apacheignite-net.readme.io/docs/serialization#section-ignite-reflective-serialization
>>
>> --
>> Denis
>>
>>
>> On Tue, Nov 6, 2018 at 1:01 PM Raymond Wilson 
>> wrote:
>>
>>> We are currently converting our use of Ignite reflective serialisation
>>> to use IBinarizable based serialisation [using Ignite 2.6 with c# client]
>>>
>>> What I would like to do is enforce a policy of not using reflective
>>> serialisation to ensure we have all the bases covered.
>>>
>>> Is there a way to do this in Ignite?
>>>
>>> Thanks,
>>> Raymond.
>>>
>>>


Re: Can't add new key/value pair to existing cache via sql command

2018-11-12 Thread Evgenii Zhuravlev
Cache and schema are different things in Ignite. By default, when you
create a table with SQL, it is created in a separate cache.

Evgenii

Mon, 12 Nov 2018 at 20:00, kcheng.mvp :

> Based on my knowledge of Ignite, there is a corresponding database
> `schema` for each `cache` in Ignite (named after the cache).
>
>
> In a real case there is always more than one table in a database `schema`;
> that's why I keep more than one table in a single Ignite `cache`.
>
> If we use one table per cache, then it's a bit uncomfortable for a
> developer: why are there so many schemas in a system, each with just one
> table?
>
>
> I checked the document about groups:
> https://apacheignite.readme.io/docs/cache-groups
>
> It does not address how the backend H2 database schema is generated when
> different caches use the same `group name`. Will the `group name` be used
> as the schema name in this case?
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
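
A small sketch of the behavior described above, using the thin JDBC driver
(host, table, and cache names are placeholders): the table lands in the PUBLIC
schema, and the WITH parameters control the underlying cache and its group:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateTableExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1");
             Statement stmt = conn.createStatement()) {
            // Created in the PUBLIC schema and backed by its own cache unless
            // CACHE_NAME is given; CACHE_GROUP puts that cache into a group.
            stmt.executeUpdate(
                "CREATE TABLE city (id LONG PRIMARY KEY, name VARCHAR) " +
                "WITH \"template=partitioned, backups=1, " +
                "cache_name=CityCache, cache_group=appGroup\"");
        }
    }
}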


Re: Can't add new key/value pair to existing cache via sql command

2018-11-12 Thread kcheng.mvp
Based on my knowledge of Ignite, there is a corresponding database
`schema` for each `cache` in Ignite (named after the cache).


In a real case there is always more than one table in a database `schema`;
that's why I keep more than one table in a single Ignite `cache`.

If we use one table per cache, then it's a bit uncomfortable for a developer:
why are there so many schemas in a system, each with just one
table?


I checked the document about groups:
https://apacheignite.readme.io/docs/cache-groups

It does not address how the backend H2 database schema is generated when
different caches use the same `group name`. Will the `group name` be used as
the schema name in this case?






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Comparison between java 8 streams functionality and Apache Ignite

2018-11-12 Thread Ilya Kasnacheev
Hello!

I would believe that reduce may run multiple times as you have suggested.

Regards,
-- 
Ilya Kasnacheev


Thu, 1 Nov 2018 at 5:48, gsaxena888 :

> I've been thinking about this some more: I think the Ignite solution is
> nearly perfect, *if* the reduce operation runs within every node (so that,
> for example, the results of ~96 threads on one Google Compute Engine node
> are reduced/summarized to a single value) and then either a single final
> reduction occurs on one node (e.g. a leader node?) or, if we want to get
> fancy, the last reduction can occur in parallel across the nodes (though
> it's not clear whether a parallel reduction across all the nodes is
> performant in the majority of real-life cases, given the overhead of doing
> so). So the question is: does the "reduce" operation run first within each
> node? If not, wouldn't it make sense to do so (to minimize transferring
> data to a single node for a giant reduction, and to also make good use of
> all the nodes' cores)? And if not, is there an alternative way of achieving
> this in Ignite (i.e. what do developers do?)
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
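
One pattern that gives the node-local reduction asked about above is to
compute a partial result on every node and reduce the one-value-per-node
partials on the caller; a minimal sketch under that assumption (the local
summing logic is a placeholder):

import java.util.Collection;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.lang.IgniteCallable;

public class LocalReduceExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Each node reduces its own data (e.g. across its ~96 local
            // threads) and sends back only a single partial value.
            Collection<Long> partials = ignite.compute().broadcast(
                (IgniteCallable<Long>)LocalReduceExample::computeLocalSum);

            // Final, cheap reduction of one value per node on the caller.
            long total = partials.stream().mapToLong(Long::longValue).sum();
            System.out.println("Total: " + total);
        }
    }

    private static long computeLocalSum() {
        return 42L; // placeholder for node-local aggregation
    }
}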


Re: My cluster can't activate after restart

2018-11-12 Thread Ilya Kasnacheev
Hello!

The last error looks like you have lost the contents of the work/marshaller
directory on your nodes. Did you? Do you have a -1434421210.classname file or
similar?

Regards,
-- 
Ilya Kasnacheev


Fri, 9 Nov 2018 at 4:12, yangjiajun <1371549...@qq.com>:

> Hi,
>
> 1) The JDBC thin client stops working after one node fails. I think the
> cause is that my node uses too much heap space and the JVM pauses for too
> long. I was doing a write performance test at that time.
>
> 2) I use the control.sh script to activate the cluster, but it does not
> exit.
>
> 3) Yes, I use persistence.
>
> 4) I have already disabled quiet mode. The logs show that Ignite throws an
> exception after I call control.sh to activate the cluster. The exception
> is: class org.apache.ignite.IgniteCheckedException: Cannot find metadata
> for object with compact footer: -1434421210. And you can see more details
> above.
>
> Thanks for your reply.
>
>
>
>
> aealexsandrov wrote
> > Hi,
> >
> > What do you mean when you say that
> >
> > 1) the JDBC thin client isn't working
> > 2) the cluster is blocked in the activation stage?
> >
> > How do you activate your cluster? Do you use persistence?
> >
> > To get more logs try to add next option to java options of Ignite:
> >
> > -DIGNITE_QUIET=false
> >
> > BR,
> > Andrei
> >
> >
> >
> > --
> > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Why Ignite use so many heap space?

2018-11-12 Thread Ilya Kasnacheev
Hello!

https://issues.apache.org/jira/browse/IGNITE-10224 might be related. Try
closing connections every now and then. If it does not help, collect heap
dump & analyze it.

Regards,
-- 
Ilya Kasnacheev


Mon, 12 Nov 2018 at 12:44, yangjiajun <1371549...@qq.com>:

> My test scenario:
> 1. One Ignite node uses 12GB heap memory and 30GB off-heap memory with
> persistence. Here is my cmd to start the node:
> nohup ./ignite.sh ../examples/config/example-ignite.xml -J-server -J-Xms12g
> -J-Xmx12g -J-XX:+AlwaysPreTouch -J-XX:+UseG1GC -J-XX:+ScavengeBeforeFullGC
> -J-XX:+DisableExplicitGC -J-XX:+HeapDumpOnOutOfMemoryError
> -J-XX:HeapDumpPath=./logs -J-XX:+ExitOnOutOfMemoryError
> -J-XX:+PrintGCDetails -J-XX:+PrintGCTimeStamps -J-XX:+PrintGCDateStamps
> -J-XX:+UseGCLogFileRotation -J-XX:NumberOfGCLogFiles=10
> -J-XX:GCLogFileSize=10M -J-Xloggc:./gclog.txt -v&
> 2. My JDBC thin connection string is:
> jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true
> 3. Execute MERGE INTO statements. Each statement has hundreds of data
> rows. The test runs about one statement per second.
> 4. The Ignite node runs out of heap memory when there is no idle limit (GC
> does not work, as I described before). The Ignite node uses 30%-70% of heap
> memory when I set the idle connections limit to 50. The connection pool
> size is 200.
> I also tried to limit idle connections in Ignite with the idleTimeout
> setting, but it makes my connection pool not work.
> 5. I think the GC frequency is natural. I am still doing more tests; sorry
> for the lack of details.
>
>
Mikael wrote
> Hi!
>
> You said: "And it still uses 80%-85% of heap memory after I stop my
> application"; I assume you mean your client application?
>
> So after a GC it will still stay at 80% java heap usage?
>
> I assume you are using off-heap memory?
>
> I am not sure what the problem is. I am running an application that
> updates around 10,000 cache entries every 10 seconds (persistence
> enabled) and runs a lot of other code that generates java heap garbage;
> the application generates around 40MB of garbage per second. It fills
> up the heap in a minute or so but goes down to around 20% after every GC.
>
> How often does your application GC? If you have 80% of the heap filled, it
> should GC pretty often, I would think?
>
> There is garbage generated; there is not much you can do about that if
> it's not your own code that generates the garbage. Ignite is not bad, and
> with the cache keys and values and even indexes off-heap it should work
> fine, but I guess it all depends on what your application does.
>
> Mikael
> >
> On 2018-11-08 at 14:07, yangjiajun wrote:
> >> Hi.
> >>
> >> I have an Ignite node, version 2.6, with a fixed 12GB heap
> >> and a 30GB data region. I use it as a database with persistence. It
> >> uses 90%-95% of heap memory when my application is busy (my application
> >> uses the JDBC thin connection), and it still uses 80%-85% of heap
> >> memory after I stop my application. Ignite performs badly when its heap
> >> usage is high.
> >>
> >> How can I reduce Ignite's heap usage?
> >>
> >>
> >>
> >> --
> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >>
> >>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Can't add new key/value pair to existing cache via sql command

2018-11-12 Thread Evgenii Zhuravlev
>can you tell me why there is such a limit, that a table newly created via
SQL cannot be in a non-public schema?
It is a current restriction that will be fixed in the future.

>as it's hard to forecast how many tables should be in a cache: if there is
such a limit, what's the best practice when using Ignite with persistence?
There is no such limit; however, it's recommended to have only one
table per cache. It's better to put caches into one cache group if needed. I
don't see any usefulness in creating several tables in one cache.

Evgenii

Mon, 12 Nov 2018 at 18:57, kcheng.mvp :

> Thank you very much!
>
> Can you tell me why there is such a limit, that a table newly created via
> SQL cannot be in a non-public schema?
>
>
> I am using Ignite with persistence; each `module` in my system uses a
> cache, in which some tables are generated via `cfg.setIndexedTypes`.
>
> Right now, for business changes, I need to add new indexed types (SQL
> tables) to an existing module (cache).
>
> To keep modules and caches (tables) aligned, it's better to add the
> new table to the existing cache (module).
>
>
> As it's hard to forecast how many tables should be in a cache: if there is
> such a limit, what's the best practice when using Ignite with
> persistence?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Can't add new key/value pair to existing cache via sql command

2018-11-12 Thread kcheng.mvp
Thank you very much!

Can you tell me why there is such a limit, that a table newly created via SQL
cannot be in a non-public schema?


I am using Ignite with persistence; each `module` in my system uses a
cache, in which some tables are generated via `cfg.setIndexedTypes`.

Right now, for business changes, I need to add new indexed types (SQL tables)
to an existing module (cache).

To keep modules and caches (tables) aligned, it's better to add the
new table to the existing cache (module).


As it's hard to forecast how many tables should be in a cache: if there is
such a limit, what's the best practice when using Ignite with
persistence?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteJdbcThinDriver statements accumulation

2018-11-12 Thread Ilya Kasnacheev
Hello!

Looks like a bug. Track it at
https://issues.apache.org/jira/browse/IGNITE-10224

Regards,
-- 
Ilya Kasnacheev


Tue, 30 Oct 2018 at 16:31, Mikhail :

> Hello,
>
> I need to execute a lot of SQL statements in one connection
> using IgniteJdbcThinDriver. I get a memory leak because of the accumulation
> of all statements in:
> private final ArrayList stmts = new ArrayList<>();
> (IgniteJdbcThinDriver:118). All statements are added to this list. As I see
> it, this list is cleared only by the onDisconnect() method, which is called
> only on error. So in the case of many statements, there will be memory
> leaks. And possibly the same situation will occur with connection pools,
> because they can reuse one connection many times. Is this the desired
> behavior for this Ignite JDBC driver?
>
> --
> Best Regards,
> Mikhail
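
Until the fix for IGNITE-10224 lands, one workaround in the spirit of "closing
connections every now and then" (suggested elsewhere in this digest) is to
recycle the connection after a bounded number of statements, so the driver's
internal statement list cannot grow without limit. A rough sketch, not a
definitive fix (the URL and threshold are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RecyclingExecutor implements AutoCloseable {
    private static final String URL = "jdbc:ignite:thin://127.0.0.1"; // placeholder
    private static final int MAX_STMTS_PER_CONN = 1_000;              // placeholder

    private Connection conn;
    private int executed;

    public void execute(String sql) throws Exception {
        if (conn == null || executed >= MAX_STMTS_PER_CONN) {
            close();                                  // drops the accumulated statement list
            conn = DriverManager.getConnection(URL);  // fresh connection, empty list
            executed = 0;
        }

        try (Statement stmt = conn.createStatement()) {
            stmt.execute(sql);
        }

        executed++;
    }

    @Override public void close() throws Exception {
        if (conn != null)
            conn.close();
    }
}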


ZookeeperDiscovery block when communication error

2018-11-12 Thread wangsan
I have a server node in zone A, and I start a client from zone B. Access
between A and B is controlled by a firewall; the ACL is that B can access A,
but A cannot access B.
So when the client in zone B joins the cluster, communication fails because
of the firewall.

But when the client in zone B is closed, the cluster crashes (it hangs on new
joins, even from the same zone without a firewall). And when I restart the
coordinator server (I started two servers in zone A), the other server hangs
on communication.

It looks like the whole cluster crashes when a node join fails because of the
firewall.

But when I use TcpDiscovery, I don't see the cluster crash, just some
communication errors, and when a new node joins it is still fine.

Is this a ZookeeperDiscovery bug?

The log is: zkcommuerror.log






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: continuousquery -> Requesting mapping from grid failed for [ platformId=1, typeId=1173251103]

2018-11-12 Thread Ilya Kasnacheev
Hello!

Should be OK. Can you prepare a small reproducer of this problem and upload
it to GitHub?

Regards,
-- 
Ilya Kasnacheev


Mon, 29 Oct 2018 at 16:57, jcalahor :

> I forgot to mention that my starting params are:
> Apache.Ignite.exe
>
> -Assembly=C:\apache_ignite\_net\ignite_shared\ignite_shared\bin\Debug\ignite_shared.dll
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Client connects to server after too long time interval (1 minute)

2018-11-12 Thread Dmitry Lazurkin
After a reconnect, unmarshalling of TcpDiscoveryNodeAddedMessage takes 20
seconds:
2018-11-12 14:10:36.105 ERROR 10 --- [-sock-reader-#3]
o.a.ignite.marshaller.jdk.JdkMarshaller : Unmarshall 1
2018-11-12 14:10:36.107 ERROR 10 --- [-sock-reader-#3]
o.a.ignite.marshaller.jdk.JdkMarshaller : Unmarshall 2
2018-11-12 14:10:56.262 ERROR 10 --- [-sock-reader-#3]
o.a.ignite.marshaller.jdk.JdkMarshaller : Unmarshall 3:
TcpDiscoveryNodeAddedMessage [node=TcpDiscoveryNode [id=06960cfd-17

Does the class loader load many classes on the initial connect?

On 11/12/18 16:28, Dmitry Lazurkin wrote:
> Hi, Andrei. Thank you for the reply.
>
> I have found that the problem is in unmarshalling:
> 2018-11-12 13:18:24.375  INFO 10 --- [-sock-reader-#3]
> o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : 1
> 2018-11-12 13:18:24.375 DEBUG 10 --- [o-msg-worker-#4]
> o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Received metrics from unknown
> node: 28ebe679-e815-4c16-bd5c-ee320caa3019
> 2018-11-12 13:18:24.375 ERROR 10 --- [-sock-reader-#3]
> o.a.ignite.marshaller.jdk.JdkMarshaller  : Unmarshall 1
> 2018-11-12 13:18:24.375 DEBUG 10 --- [o-msg-worker-#4]
> o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Received metrics from unknown
> node: 12f39501-0fba-4c1c-9c37-dc35f33fcee9
> 2018-11-12 13:18:24.375 DEBUG 10 --- [o-msg-worker-#4]
> o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Received metrics from unknown
> node: 40b74eed-a636-488f-b1a8-6d58f4bc9137
> 2018-11-12 13:18:24.375 DEBUG 10 --- [o-msg-worker-#4]
> o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Received metrics from unknown
> node: f4c370e7-cf4a-4209-84fe-42fba4a30eef
> 2018-11-12 13:18:24.380 ERROR 10 --- [-sock-reader-#3]
> o.a.ignite.marshaller.jdk.JdkMarshaller  : Unmarshall 2
> 2018-11-12 13:19:44.965 ERROR 10 --- [-sock-reader-#3]
> o.a.ignite.marshaller.jdk.JdkMarshaller  : Unmarshall 3:
> TcpDiscoveryNodeAddedMessage [node=TcpDiscoveryNode
> [id=8807c1ce-1e4a-496b-9aea-66cee8d95434, addrs=[10.37.92.255],
> sockAddrs=[client-98d86c46f-j76s5/10.37.92.255:0], discPort=0, order=0,
> intOrder=12, lastExchangeTime=1542028704438, loc=false,
> ver=2.6.0#20180710-sha1:669feacc, isClient=true],
> dataPacket=org.apache.ignite.spi.discovery.tcp.internal.DiscoveryDataPacket@3b238c6e,
> discardMsgId=null, discardCustomMsgId=null, top=[TcpDiscoveryNode
> [id=28ebe679-e815-4c16-bd5c-ee320caa3019, addrs=[10.48.14.1],
> sockAddrs=[ignite-1/10.48.14.1:47500], discPort=47500, order=1,
> intOrder=1, lastExchangeTime=1542028704438, loc=false,
> ver=2.6.0#20180710-sha1:669feacc, isClient=false], TcpDiscoveryNode
> [id=f4c370e7-cf4a-4209-84fe-42fba4a30eef, addrs=[10.37.92.208],
> sockAddrs=[/10.37.92.208:0], discPort=0, order=2, intOrder=2,
> lastExchangeTime=1542028704448, loc=false,
> ver=2.6.0#20180710-sha1:669feacc, isClient=true], TcpDiscoveryNode
> [id=12f39501-0fba-4c1c-9c37-dc35f33fcee9, addrs=[10.37.92.205],
> sockAddrs=[/10.37.92.205:0], discPort=0, order=5, intOrder=5,
> lastExchangeTime=1542028714511, loc=false,
> ver=2.6.0#20180710-sha1:669feacc, isClient=true], TcpDiscoveryNode
> [id=40b74eed-a636-488f-b1a8-6d58f4bc9137, addrs=[10.37.92.222],
> sockAddrs=[/10.37.92.222:0], discPort=0, order=15, intOrder=11,
> lastExchangeTime=1542028724583, loc=false,
> ver=2.6.0#20180710-sha1:669feacc, isClient=true]], clientTop=null,
> gridStartTime=1542023768674, super=TcpDiscoveryAbstractMessage
> [sndNodeId=null, id=0c5f7c70761-28ebe679-e815-4c16-bd5c-ee320caa3019,
> verifierNodeId=28ebe679-e815-4c16-bd5c-ee320caa3019, topVer=0,
> pendingIdx=0, failedNodes=null, isClient=true]]
>
> Unmarshalling a TcpDiscoveryNodeAddedMessage needs 1 minute... (:
>
> That's org.apache.ignite.marshaller.jdk.JdkMarshaller with logs:
>     /** {@inheritDoc} */
>     @SuppressWarnings({"unchecked"})
>     @Override protected <T> T unmarshal0(InputStream in, @Nullable ClassLoader clsLdr) throws IgniteCheckedException {
>         assert in != null;
>
>         if (clsLdr == null)
>             clsLdr = getClass().getClassLoader();
>
>         logger.error("Unmarshall 1");
>
>         ObjectInputStream objIn = null;
>
>         try {
>             objIn = new JdkMarshallerObjectInputStream(new JdkMarshallerInputStreamWrapper(in), clsLdr, clsFilter);
>
>             logger.error("Unmarshall 2");
>
>             T t = (T)objIn.readObject();
>
>             logger.error("Unmarshall 3: " + t);
>
>             return t;
>         }
> ...
>
>
> On 11/12/18 10:24, aealexsandrov wrote:
>> Hi,
>>
>> Could you please attach the XML configurations of your client and server
>> nodes and logs?
>>
>> BR,
>> Andrei
>>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
>



Re: Client connects to server after too long time interval (1 minute)

2018-11-12 Thread Dmitry Lazurkin
Hi, Andrei. Thank you for the reply.

I have found that the problem is in unmarshalling:
2018-11-12 13:18:24.375  INFO 10 --- [-sock-reader-#3]
o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : 1
2018-11-12 13:18:24.375 DEBUG 10 --- [o-msg-worker-#4]
o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Received metrics from unknown
node: 28ebe679-e815-4c16-bd5c-ee320caa3019
2018-11-12 13:18:24.375 ERROR 10 --- [-sock-reader-#3]
o.a.ignite.marshaller.jdk.JdkMarshaller  : Unmarshall 1
2018-11-12 13:18:24.375 DEBUG 10 --- [o-msg-worker-#4]
o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Received metrics from unknown
node: 12f39501-0fba-4c1c-9c37-dc35f33fcee9
2018-11-12 13:18:24.375 DEBUG 10 --- [o-msg-worker-#4]
o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Received metrics from unknown
node: 40b74eed-a636-488f-b1a8-6d58f4bc9137
2018-11-12 13:18:24.375 DEBUG 10 --- [o-msg-worker-#4]
o.a.i.spi.discovery.tcp.TcpDiscoverySpi  : Received metrics from unknown
node: f4c370e7-cf4a-4209-84fe-42fba4a30eef
2018-11-12 13:18:24.380 ERROR 10 --- [-sock-reader-#3]
o.a.ignite.marshaller.jdk.JdkMarshaller  : Unmarshall 2
2018-11-12 13:19:44.965 ERROR 10 --- [-sock-reader-#3]
o.a.ignite.marshaller.jdk.JdkMarshaller  : Unmarshall 3:
TcpDiscoveryNodeAddedMessage [node=TcpDiscoveryNode
[id=8807c1ce-1e4a-496b-9aea-66cee8d95434, addrs=[10.37.92.255],
sockAddrs=[client-98d86c46f-j76s5/10.37.92.255:0], discPort=0, order=0,
intOrder=12, lastExchangeTime=1542028704438, loc=false,
ver=2.6.0#20180710-sha1:669feacc, isClient=true],
dataPacket=org.apache.ignite.spi.discovery.tcp.internal.DiscoveryDataPacket@3b238c6e,
discardMsgId=null, discardCustomMsgId=null, top=[TcpDiscoveryNode
[id=28ebe679-e815-4c16-bd5c-ee320caa3019, addrs=[10.48.14.1],
sockAddrs=[ignite-1/10.48.14.1:47500], discPort=47500, order=1,
intOrder=1, lastExchangeTime=1542028704438, loc=false,
ver=2.6.0#20180710-sha1:669feacc, isClient=false], TcpDiscoveryNode
[id=f4c370e7-cf4a-4209-84fe-42fba4a30eef, addrs=[10.37.92.208],
sockAddrs=[/10.37.92.208:0], discPort=0, order=2, intOrder=2,
lastExchangeTime=1542028704448, loc=false,
ver=2.6.0#20180710-sha1:669feacc, isClient=true], TcpDiscoveryNode
[id=12f39501-0fba-4c1c-9c37-dc35f33fcee9, addrs=[10.37.92.205],
sockAddrs=[/10.37.92.205:0], discPort=0, order=5, intOrder=5,
lastExchangeTime=1542028714511, loc=false,
ver=2.6.0#20180710-sha1:669feacc, isClient=true], TcpDiscoveryNode
[id=40b74eed-a636-488f-b1a8-6d58f4bc9137, addrs=[10.37.92.222],
sockAddrs=[/10.37.92.222:0], discPort=0, order=15, intOrder=11,
lastExchangeTime=1542028724583, loc=false,
ver=2.6.0#20180710-sha1:669feacc, isClient=true]], clientTop=null,
gridStartTime=1542023768674, super=TcpDiscoveryAbstractMessage
[sndNodeId=null, id=0c5f7c70761-28ebe679-e815-4c16-bd5c-ee320caa3019,
verifierNodeId=28ebe679-e815-4c16-bd5c-ee320caa3019, topVer=0,
pendingIdx=0, failedNodes=null, isClient=true]]

Unmarshalling a TcpDiscoveryNodeAddedMessage needs 1 minute... (:

That's org.apache.ignite.marshaller.jdk.JdkMarshaller with logs:
    /** {@inheritDoc} */
    @SuppressWarnings({"unchecked"})
    @Override protected <T> T unmarshal0(InputStream in, @Nullable ClassLoader clsLdr) throws IgniteCheckedException {
        assert in != null;

        if (clsLdr == null)
            clsLdr = getClass().getClassLoader();

        logger.error("Unmarshall 1");

        ObjectInputStream objIn = null;

        try {
            objIn = new JdkMarshallerObjectInputStream(new JdkMarshallerInputStreamWrapper(in), clsLdr, clsFilter);

            logger.error("Unmarshall 2");

            T t = (T)objIn.readObject();

            logger.error("Unmarshall 3: " + t);

            return t;
        }
...


On 11/12/18 10:24, aealexsandrov wrote:
> Hi,
>
> Could you please attach the XML configurations of your client and server
> nodes and logs?
>
> BR,
> Andrei
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/





RE: Slow Data Insertion On Large Cache : Spark Streaming

2018-11-12 Thread ApacheUser
Thanks Stan,
we are planning to move to 2.7.

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Slow Data Insertion On Large Cache : Spark Streaming

2018-11-12 Thread Stanislav Lukyanov
Hi,

Do you use persistence? Do you have more data on disk than RAM size?
If yes, it’s almost definitely 
https://issues.apache.org/jira/browse/IGNITE-9519.
If not, it can still be the same issue.
Try running on 2.7; it should be released soon.

Stan

From: ApacheUser
Sent: 5 November 2018 20:10
To: user@ignite.apache.org
Subject: Slow Data Insertion On Large Cache : Spark Streaming

Hi Team,

We have a 6-node Ignite cluster with 72 CPUs, 256GB RAM, and 5TB of storage.
Data is ingested using Spark Streaming into the Ignite cluster for SQL and
Tableau usage.

I have a couple of large tables: one with 200M rows (200GB) and one with 800M
rows (500GB).
An insert takes more than 40 secs if the composite key already exists; for a
new row it's around 10ms.

We have Entry, Main, and Details tables. The "Entry" cache has a single-field
primary key "id"; the second cache, "Main", has a composite primary key of
"id" and "mainid"; the third cache, "Details", has a composite primary key of
"id", "mainid", and "detailid". "id" is the affinity key for all of these and
some other small tables.

1. Is there any insert/update performance difference between a single-field
primary key and a multi-field primary key?
Would it make any difference if I converted the composite primary key into a
single-field primary key, i.e. concatenated all the composite fields into a
single-field primary key?

2. Which ignite.sh and config parameters need tuning?
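
For reference on question 1, the key layout described above corresponds to a
key class along these lines; a sketch only, with field names taken from the
post (the class name DetailsKey is hypothetical, and equals/hashCode over all
fields, which Ignite keys require, are omitted for brevity):

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class DetailsKey {
    /** Affinity field: keeps Entry, Main, and Details rows for one id together. */
    @AffinityKeyMapped
    private long id;

    private long mainid;   // from the "Main" composite key
    private long detailid; // from the "Details" composite key
}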

My Spark Dataframe save options (Save to Ignite)

 .option(OPTION_STREAMER_ALLOW_OVERWRITE, true)
.mode(SaveMode.Append)
.save()

My Ignite.sh

JVM_OPTS="-server -Xms10g -Xmx10g -XX:+AggressiveOpts
-XX:MaxMetaspaceSize=512m"
JVM_OPTS="${JVM_OPTS} -XX:+AlwaysPreTouch"
JVM_OPTS="${JVM_OPTS} -XX:+UseG1GC"
JVM_OPTS="${JVM_OPTS} -XX:+ScavengeBeforeFullGC"
JVM_OPTS="${JVM_OPTS} -XX:+DisableExplicitGC"
JVM_OPTS="${JVM_OPTS} -XX:+HeapDumpOnOutOfMemoryError "
JVM_OPTS="${JVM_OPTS} -XX:HeapDumpPath=${IGNITE_HOME}/work"
JVM_OPTS="${JVM_OPTS} -XX:+PrintGCDetails"
JVM_OPTS="${JVM_OPTS} -XX:+PrintGCTimeStamps"
JVM_OPTS="${JVM_OPTS} -XX:+PrintGCDateStamps"
JVM_OPTS="${JVM_OPTS} -XX:+UseGCLogFileRotation"
JVM_OPTS="${JVM_OPTS} -XX:NumberOfGCLogFiles=10"
JVM_OPTS="${JVM_OPTS} -XX:GCLogFileSize=100M"
JVM_OPTS="${JVM_OPTS} -Xloggc:${IGNITE_HOME}/work/gc.log"
JVM_OPTS="${JVM_OPTS} -XX:+PrintAdaptiveSizePolicy"
JVM_OPTS="${JVM_OPTS} -XX:MaxGCPauseMillis=100"

export IGNITE_SQL_FORCE_LAZY_RESULT_SET=true

default-Config.xml

[The attached Spring beans XML was stripped by the list archive. The
surviving fragments show a standard Spring beans declaration
(http://www.springframework.org/schema/beans) and a discovery ipFinder
section listing six addresses of the form 64.x.x.x:47500..47509.]

Re: Apache Ignite cannot access cache from listener after event “EVT_CACHE_STARTED” fired

2018-11-12 Thread Vadym Vasiuk
Hi,

I changed the listener code and it worked: I got the cache from a
CompletableFuture inside the listener:
IgnitePredicate<CacheEvent> locLsnr = new IgnitePredicate<CacheEvent>() {
    @IgniteInstanceResource
    private Ignite ignite;

    @Override
    public boolean apply(CacheEvent evt) {
        System.out.println("Received event [evt=" + evt.name()
            + " cacheName=" + evt.cacheName());

        CompletableFuture<String> fut = CompletableFuture.supplyAsync(() -> {
            IgniteCache<Object, Object> c = ignite.cache(evt.cacheName()); // not null in here
            return "";
        });

        System.out.println("finish listener");
        return true;
    }
};




On Nov 12, 2018 12:31, "Vadym Vasiuk"  wrote:

In other words, there is no way to do something with the cache from inside
the listener for the event that signals its creation.


On Mon, Nov 12, 2018, 11:57 Ilya Kasnacheev wrote:

> Cross-posting from SO:
>
> As a rule you should not perform cache operations, or most other
> operations that block or access Ignite internals. Events should be very
> fast and lightweight, meaning that they are executed from inside Ignite
> threads and Ignite internal locks.
>
> Just schedule an operation in a different thread on event arrival.
> Regards,
>
> --
> Ilya Kasnacheev
>
>
Mon, 12 Nov 2018 at 10:17, aealexsandrov :
>
>> Hi,
>>
>> The cache could be started, but possibly the cluster wasn't activated yet.
>> Could
>> you please provide some information:
>>
>> 1)Did you see warnings or errors in the log?
>> 2)Did you see this issue on normal cluster start or only on client
>> restart?
>>
>> Also, I see that you use it in the class named server config. Could you
>> please also provide the use case of this class?
>>
>> BR,
>> Andrei
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Apache Ignite cannot access cache from listener after event “EVT_CACHE_STARTED” fired

2018-11-12 Thread Vadym Vasiuk
In other words, there is no way to do something with the cache from inside
the listener for the event that signals its creation.


On Mon, Nov 12, 2018, 11:57 Ilya Kasnacheev wrote:

> Cross-posting from SO:
>
> As a rule you should not perform cache operations, or most other
> operations that block or access Ignite internals. Events should be very
> fast and lightweight, meaning that they are executed from inside Ignite
> threads and Ignite internal locks.
>
> Just schedule an operation in a different thread on event arrival.
> Regards,
>
> --
> Ilya Kasnacheev
>
>
Mon, 12 Nov 2018 at 10:17, aealexsandrov :
>
>> Hi,
>>
>> The cache could be started, but possibly the cluster wasn't activated yet.
>> Could
>> you please provide some information:
>>
>> 1)Did you see warnings or errors in the log?
>> 2)Did you see this issue on normal cluster start or only on client
>> restart?
>>
>> Also, I see that you use it in the class named server config. Could you
>> please also provide the use case of this class?
>>
>> BR,
>> Andrei
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Apache Ignite cannot access cache from listener after event “EVT_CACHE_STARTED” fired

2018-11-12 Thread Ilya Kasnacheev
Cross-posting from SO:

As a rule you should not perform cache operations, or most other
operations that block or access Ignite internals. Events should be very
fast and lightweight, meaning that they are executed from inside Ignite
threads and Ignite internal locks.

Just schedule an operation in a different thread on event arrival.
Regards,

-- 
Ilya Kasnacheev


Mon, 12 Nov 2018 at 10:17, aealexsandrov :

> Hi,
>
> The cache could be started, but possibly the cluster wasn't activated yet. Could
> you please provide some information:
>
> 1)Did you see warnings or errors in the log?
> 2)Did you see this issue on normal cluster start or only on client restart?
>
> Also, I see that you use it in the class named server config. Could you
> please also provide the use case of this class?
>
> BR,
> Andrei
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
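
A minimal sketch of the "separate thread" approach described above, assuming
EVT_CACHE_STARTED has been enabled via IgniteConfiguration.setIncludeEventTypes
(the cache access inside the submitted task is a placeholder):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;

public class CacheStartedListener {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        // Shut this executor down on node stop in real code.
        ExecutorService exec = Executors.newSingleThreadExecutor();

        ignite.events().localListen((Event evt) -> {
            CacheEvent cacheEvt = (CacheEvent)evt;
            String cacheName = cacheEvt.cacheName();

            // Return from the listener immediately; touch the cache
            // from a separate thread, outside Ignite's internal threads.
            exec.submit(() -> ignite.cache(cacheName).size());

            return true; // keep listening
        }, EventType.EVT_CACHE_STARTED);
    }
}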


Re: Why Ignite use so many heap space?

2018-11-12 Thread yangjiajun
My test scenario:
1. One Ignite node uses 12GB heap memory and 30GB off-heap memory with
persistence. Here is my cmd to start the node:
nohup ./ignite.sh ../examples/config/example-ignite.xml -J-server -J-Xms12g
-J-Xmx12g -J-XX:+AlwaysPreTouch -J-XX:+UseG1GC -J-XX:+ScavengeBeforeFullGC
-J-XX:+DisableExplicitGC -J-XX:+HeapDumpOnOutOfMemoryError
-J-XX:HeapDumpPath=./logs -J-XX:+ExitOnOutOfMemoryError
-J-XX:+PrintGCDetails -J-XX:+PrintGCTimeStamps -J-XX:+PrintGCDateStamps
-J-XX:+UseGCLogFileRotation -J-XX:NumberOfGCLogFiles=10
-J-XX:GCLogFileSize=10M -J-Xloggc:./gclog.txt -v&
2. My JDBC thin connection string is:
jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true
3. Execute MERGE INTO statements. Each statement has hundreds of data rows.
The test runs about one statement per second.
4. The Ignite node runs out of heap memory when there is no idle limit (GC
does not work, as I described before). The Ignite node uses 30%-70% of heap
memory when I set the idle connections limit to 50. The connection pool size
is 200.
I also tried to limit idle connections in Ignite with the idleTimeout
setting, but it makes my connection pool not work.
5. I think the GC frequency is natural. I am still doing more tests; sorry
for the lack of details.


Mikael wrote
> Hi!
>
> You said: "And it still uses 80%-85% of heap memory after I stop my
> application"; I assume you mean your client application?
>
> So after a GC it will still stay at 80% java heap usage?
>
> I assume you are using off-heap memory?
>
> I am not sure what the problem is. I am running an application that
> updates around 10,000 cache entries every 10 seconds (persistence
> enabled) and runs a lot of other code that generates java heap garbage;
> the application generates around 40MB of garbage per second. It fills
> up the heap in a minute or so but goes down to around 20% after every GC.
>
> How often does your application GC? If you have 80% of the heap filled, it
> should GC pretty often, I would think?
>
> There is garbage generated; there is not much you can do about that if
> it's not your own code that generates the garbage. Ignite is not bad, and
> with the cache keys and values and even indexes off-heap it should work
> fine, but I guess it all depends on what your application does.
>
> Mikael
> 
> On 2018-11-08 at 14:07, yangjiajun wrote:
>> Hi.
>>
>> I have an Ignite node, version 2.6, with a fixed 12GB heap
>> and a 30GB data region. I use it as a database with persistence. It uses
>> 90%-95% of heap memory when my application is busy (my application uses
>> the JDBC thin connection), and it still uses 80%-85% of heap memory after
>> I stop my application. Ignite performs badly when its heap usage is high.
>>
>> How can I reduce Ignite's heap usage?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>>





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Why Ignite use so many heap space?

2018-11-12 Thread yangjiajun
Hi!

Thanks for your reply.

My client application uses the JDBC thin connection to test the performance
of MERGE INTO statements. We also use DBCP connection pooling to improve
performance. Based on your suggestions and my experiment, I think idle
connections in the connection pool cause a memory leak in the Ignite node,
but I am still not sure and am doing more tests.


Mikael wrote
> Hi!
>
> You said: "And it still uses 80%-85% of heap memory after I stop my
> application"; I assume you mean your client application?
>
> So after a GC it will still stay at 80% java heap usage?
>
> I assume you are using off-heap memory?
>
> I am not sure what the problem is. I am running an application that
> updates around 10,000 cache entries every 10 seconds (persistence
> enabled) and runs a lot of other code that generates java heap garbage;
> the application generates around 40MB of garbage per second. It fills
> up the heap in a minute or so but goes down to around 20% after every GC.
>
> How often does your application GC? If you have 80% of the heap filled, it
> should GC pretty often, I would think?
>
> There is garbage generated; there is not much you can do about that if
> it's not your own code that generates the garbage. Ignite is not bad, and
> with the cache keys and values and even indexes off-heap it should work
> fine, but I guess it all depends on what your application does.
>
> Mikael
> 
> On 2018-11-08 at 14:07, yangjiajun wrote:
>> Hi.
>>
>> I have an Ignite node, version 2.6, with a fixed 12GB heap
>> and a 30GB data region. I use it as a database with persistence. It uses
>> 90%-95% of heap memory when my application is busy (my application uses
>> the JDBC thin connection), and it still uses 80%-85% of heap memory after
>> I stop my application. Ignite performs badly when its heap usage is high.
>>
>> How can I reduce Ignite's heap usage?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>>





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: persistence when disk full

2018-11-12 Thread aealexsandrov
Hi,

The expected behavior is that only the node without space will fail.
Could you please attach the logs from both nodes so we can get more details?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite issue during spark job execution

2018-11-12 Thread aealexsandrov
Hi,

Looks like your JVM is facing long garbage collection pauses. You can configure
detailed GC logs to see how much time is spent in GC:
https://apacheignite.readme.io/docs/jvm-and-system-tuning#section-detailed-garbage-collection-stats

Also, you can follow this guide to add more heap:

https://apacheignite.readme.io/docs/jvm-and-system-tuning#garbage-collection-tuning

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite 2.7 connection pool support?

2018-11-12 Thread aealexsandrov
Hi,

The Ignite data source should be supported in the 2.7 release. You can see a
usage example in the following test:

https://github.com/apache/ignite/blob/master/modules/clients/src/test/java/org/apache/ignite/jdbc/thin/JdbcThinDataSourceSelfTest.java

However, you can try asking Taras (see
https://issues.apache.org/jira/browse/IGNITE-6145) on the development list:
http://apache-ignite-developers.2346864.n4.nabble.com.

For example here:

http://apache-ignite-developers.2346864.n4.nabble.com/jira-Created-IGNITE-6145-JDBC-thin-support-connection-pooling-td21177.html

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
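
A short sketch of how that data source can be used, going by the linked test;
the method names and behavior should be verified against the 2.7 release, and
the address is a placeholder:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.ignite.IgniteJdbcThinDataSource;

public class DataSourceExample {
    public static void main(String[] args) throws Exception {
        IgniteJdbcThinDataSource ds = new IgniteJdbcThinDataSource();
        ds.setUrl("jdbc:ignite:thin://127.0.0.1"); // placeholder address

        // A pooling provider (DBCP, HikariCP, ...) can wrap this DataSource.
        try (Connection conn = ds.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next())
                System.out.println(rs.getLong(1));
        }
    }
}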


Re: Status of Spark Structured Streaming Support IGNITE-9357

2018-11-12 Thread aealexsandrov
Hi,

Could you please re-create this issue on the development user list:

http://apache-ignite-developers.2346864.n4.nabble.com/

The development list is a better place to discuss new functionality.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/