Re[4]: What is data-streamer-stripe thread?

2022-09-19 Thread Zhenya Stanilovsky via user


It's up to you: if it's not annoying you, leave it as it is; otherwise, file an issue :)


 

Re: Re[2]: What is data-streamer-stripe thread?

2022-09-19 Thread John Smith
Nah, it's fine; I just wanted to make sure what it was. Unless you think I
should at least log an issue?




Re[2]: What is data-streamer-stripe thread?

2022-09-14 Thread Zhenya Stanilovsky via user

Yep, as I already mentioned, you can't disable this pool entirely, and one
worker thread will still be visible.
You can file the issue, but I can't guarantee that it would be completed soon;
or you can do it yourself and submit a pull request.
 
Best.
 

Re: What is data-streamer-stripe thread?

2022-09-13 Thread John Smith
OK, so just to understand: on the client side, I set the data streamer pool
size to 1, but it will still look blocked?



Re: What is data-streamer-stripe thread?

2022-09-12 Thread Zhenya Stanilovsky via user

John, it seems all you can do here is set this pool size to 1; setting it to 0
results in an error.
 
https://ignite.apache.org/docs/latest/data-streaming#configuring-data-streamer-thread-pool-size
 
One thread will still appear frozen in that case.
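
For reference, a minimal sketch of that configuration (a sketch only, assuming
Ignite 2.x, where IgniteConfiguration exposes this setter):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setClientMode(true);                 // thick client, as in your setup
cfg.setDataStreamerThreadPoolSize(1);    // 0 is rejected; one stripe thread remains
Ignite ignite = Ignition.start(cfg);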
 

What is data-streamer-stripe thread?

2022-09-09 Thread John Smith
Hi, I'm profiling my application with YourKit, and it indicates that a bunch
of these threads (data-streamer-stripe) are "frozen" for 21 days. This

I'm not using data streaming; is there a way to disable it, or can I just
ignore the messages? The application is configured as a thick client
(client = true).


Re: Loading data to cache using a data streamer

2021-03-12 Thread Ilya Kasnacheev
Hello!

Unfortunately, the exception snippet that you have pasted is not sufficient
to understand what's going on.

Regards,
-- 
Ilya Kasnacheev




Re: Loading data to cache using a data streamer

2021-03-10 Thread Pelado
Take a look at the data node. If the data node has restarted, you might see
a client-disconnected exception.


-- 
Facundo Maldonado


Re: Loading data to cache using a data streamer

2021-03-09 Thread Ilya Kasnacheev
Hello!

Does it work with a smaller batch? My guess is that you end up consuming
too many resources and then the cluster falls apart.

Regards,
-- 
Ilya Kasnacheev




Re: Loading data to cache using a data streamer

2021-03-09 Thread Pavel Tupitsyn
Josh,

Do you have a reproducer for this issue?
Can you please attach logs from all nodes?



Loading data to cache using a data streamer

2021-03-05 Thread Josh Katz
Client node gets disconnected while trying to load data to cache using a data 
streamer...

How can we overcome this issue and load data to cache reliably?

See Exception call stack:

Apache.Ignite.Core.Common.ClientDisconnectedException: Client node
disconnected: ignite-instance-dab0a266-a4fb-403f-863c-289d3ed18ab7 --->
Apache.Ignite.Core.Common.JavaException: class
org.apache.ignite.IgniteClientDisconnectedException: Client node disconnected:
ignite-instance-dab0a266-a4fb-403f-863c-289d3ed18ab7
    at org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:93)
    at org.apache.ignite.internal.cluster.ClusterGroupAdapter.guard(ClusterGroupAdapter.java:175)
    at org.apache.ignite.internal.cluster.ClusterGroupAdapter.forPredicate(ClusterGroupAdapter.java:379)
    at org.apache.ignite.internal.cluster.ClusterGroupAdapter.forCacheNodes(ClusterGroupAdapter.java:593)
    at org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.allowOverwrite(DataStreamerImpl.java:496)
    at org.apache.ignite.internal.processors.platform.datastreamer.PlatformDataStreamer.processInLongOutLong(PlatformDataStreamer.java:187)
    at org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inLongOutLong(PlatformTargetProxyImpl.java:55)
    at Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.ExceptionCheck()
    at Apache.Ignite.Core.Impl.Unmanaged.Jni.Env.CallLongMethod(GlobalRef obj, IntPtr methodId, Int64* argsPtr)
    at Apache.Ignite.Core.Impl.Unmanaged.UnmanagedUtils.TargetInLongOutLong(GlobalRef target, Int32 opType, Int64 memPtr)
    at Apache.Ignite.Core.Impl.PlatformJniTarget.InLongOutLong(Int32 type, Int64 val)
    --- End of inner exception stack trace ---
    at Apache.Ignite.Core.Impl.PlatformJniTarget.InLongOutLong(Int32 type, Int64 val)
    at Apache.Ignite.Core.Impl.Datastream.DataStreamerImpl`2.set_AllowOverwrite(Boolean value)
    at PopulateCache(String accountStyle, DateTime startDate, DateTime endDate)

Thanks,
Josh Katz

--
Please follow the hyperlink to important disclosures:
https://www.dodgeandcox.com/disclosures/email_disclosure_funds.html



Re: Data streamer hangs

2020-06-17 Thread akorensh
Hi,
Are you seeing pool starvation messages? How long does it stay in this
state?

   Is no data saved at all, even when you close the JDBC connection?
   Are you able to reproduce this scenario? If so, please describe the steps,
send a reproducer, and send full logs.


   Data Streamer pool config:
https://apacheignite.readme.io/docs/thread-pools#data-streamer-pool

info on data streamer:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteDataStreamer.html
   see: 
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteDataStreamer.html#perNodeBufferSize-int-
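
   For reference, a minimal sketch of those knobs on the API side (a sketch
only; the cache name and values are illustrative):

try (IgniteDataStreamer<Integer, String> stmr = ignite.dataStreamer("myCache")) {
    stmr.perNodeBufferSize(1024);          // entries buffered per node before a batch is sent
    stmr.perNodeParallelOperations(8);     // max concurrent batches per node
    stmr.autoFlushFrequency(1_000);        // flush at least once a second
    stmr.addData(1, "one");
}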

   From the doc: https://apacheignite-sql.readme.io/docs/set#description
   When streaming is enabled, the JDBC/ODBC driver packs your commands into
batches and sends them to the server (Ignite cluster). On the server side,
the batch is converted into a stream of cache update commands, which are
distributed asynchronously among the server nodes.

   With JDBC SET STREAMING ON/OFF, this buffer is managed internally, and you
need to flush the data by closing the connection.

  see : https://apacheignite-sql.readme.io/docs/set#example
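
  For illustration, a minimal sketch of that pattern over the thin JDBC driver
(a sketch only; the address, table, and row count are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
     Statement stmt = conn.createStatement()) {
    stmt.execute("SET STREAMING ON");

    for (int i = 0; i < 100_000; i++)
        stmt.execute("INSERT INTO my_table (id, val) VALUES (" + i + ", 'v" + i + "')");

    // SET STREAMING OFF flushes the streamer; closing the connection also flushes.
    stmt.execute("SET STREAMING OFF");
}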

Thanks, Alex





Data streamer hangs

2020-06-16 Thread yangjiajun
Hello. I use the SET STREAMING ON / SET STREAMING OFF pattern to flush data to
Ignite. Ignite sometimes hangs on SET STREAMING OFF.

The hung thread on the client side looks like:

 daemon prio=5 os_prio=0 tid=0x7fdba0003000 nid=0x3c1 waiting on
condition [0x7fdf473f1000]
   java.lang.Thread.State: WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:178)
    at org.apache.ignite.internal.util.future.GridFutureAdapter.get(GridFutureAdapter.java:141)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.executeBatch(JdbcThinConnection.java:1280)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.close0(JdbcThinConnection.java:1335)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.close(JdbcThinConnection.java:1325)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.executeNative(JdbcThinConnection.java:253)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:206)
    at org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:559)
    at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:95)
    at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java)
    ...
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

The hung thread on the server side looks like:

"data-streamer-stripe-21-#150" #178 prio=5 os_prio=0 tid=0x7f29b3737000
nid=0x37c36 waiting on condition [0x7f041d1d]
   java.lang.Thread.State: WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
    at org.apache.ignite.internal.util.StripedExecutor$StripeConcurrentQueue.take(StripedExecutor.java:730)
    at org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:541)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
    at java.lang.Thread.run(Thread.java:748)

   Locked ownable synchronizers:
    - None

I do not see any errors in the Ignite log, and I do not find any deadlocks.

Do you have any ideas about this situation? Thanks!






Re: Data streamer has been cancelled

2020-05-18 Thread nithin91
Got it. Thanks a lot; this is very useful.





Re: Data streamer has been cancelled

2020-05-18 Thread Manuel Núñez Sánchez
Since there are several approaches to solving this, I'll continue with your
code. A couple of suggestions:

- Create the streamer instance outside the loop, and close it in a finally
block, not within the loop.
- Use stmr.autoFlushFrequency(0), since you flush manually every 2000 elements.
- Don't forget the remaining data (< 2000) from the last iteration.

IgniteDataStreamer stmr = ignite.dataStreamer("PieCountryAllocationCache");
stmr.allowOverwrite(true);
// Disable auto flush; we'll flush manually every 2000 entries.
stmr.autoFlushFrequency(0);

try {
    int j = 0;
    for (Map.Entry entry : PieCountryAllocationobjs.entrySet()) {
        tempobjs.put(entry.getKey(), entry.getValue());
        // Every 2000 rows, push the batch and flush.
        if (++j == 2000) {
            stmr.addData(tempobjs);
            stmr.flush();
            tempobjs.clear();
            System.out.println("Batch sent: " + j);
            j = 0;
        }
    }

    // Stream the remaining (< 2000) entries from the last iteration.
    if (!tempobjs.isEmpty())
        stmr.addData(tempobjs);
} finally {
    stmr.flush();
    stmr.close(false);
}




Re: Data streamer has been cancelled

2020-05-18 Thread nithin91
Hi,

I implemented the code as you suggested; please find it below. Please let me
know whether this is the right way of implementing what you suggested.

Also, can you let me know the purpose of the stmr.autoFlushFrequency(2000)
method? If I pass a higher number to this method, will that improve the
performance?

Map Originalobjs = new HashMap(); // contains all 0.1 million key-value pairs
// that have to be loaded

Map tempobjs = new HashMap(); // temp object that holds only 2000 records at a
// time, which are pushed to the cache using the data streamer

int j = 0;
for (Map.Entry entry : PieCountryAllocationobjs.entrySet()) {
    tempobjs.put(entry.getKey(), entry.getValue());
    // Every 2000 rows I call stmr.addData(tempobjs), then stmr.flush() and stmr.close(false)
    if ((j % 2000 == 0 && j != 0)
            || (PieCountryAllocationobjs.keySet().size() < 2000
                && j == PieCountryAllocationobjs.keySet().size())
            || j == PieCountryAllocationobjs.keySet().size()) {
        System.out.println(j);
        IgniteDataStreamer stmr = ignite.dataStreamer("PieCountryAllocationCache");
        stmr.allowOverwrite(true);
        stmr.addData(tempobjs);
        stmr.flush();
        stmr.close(false);
        tempobjs.clear();
        System.out.println("Stream Ended");
        System.out.println(j);
    }
    j++;
}





Re: Data streamer has been cancelled

2020-05-17 Thread Manuel Núñez
To improve performance, use addData in batch mode (with a Map), for example
every 2000 entries, and use a finally block with flush() and close(false) on
the streamer to ensure the data has been properly loaded.
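
A minimal sketch of that pattern (a sketch only; the cache name, types, and
the 'source' map are placeholders):

Map<Object, Object> batch = new HashMap<>();

try (IgniteDataStreamer<Object, Object> stmr = ignite.dataStreamer("cacheName")) {
    for (Map.Entry<Object, Object> e : source.entrySet()) {
        batch.put(e.getKey(), e.getValue());
        if (batch.size() == 2000) {
            stmr.addData(batch);   // push one 2000-entry batch
            stmr.flush();
            batch.clear();
        }
    }

    if (!batch.isEmpty())
        stmr.addData(batch);       // remaining (< 2000) entries
}   // try-with-resources close() flushes any pending data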

Cheers!

Manuel.


Data streamer has been cancelled

2020-05-17 Thread nithin91
Hi,

Currently I am trying to load data into an Ignite cache from an Oracle DB
using a data streamer.

I have two server nodes deployed on two Linux servers, and I am executing
this as a standalone Java program from my local machine.

To achieve this, I followed the steps below.

1. Start in client mode by setting client node=true in the bean file.
2. Fetch the data from the Oracle DB using a JDBC result set, set
fetchsize=10 on the prepared statement, and load this data into a temporary
Map object.
3. Iterate through the map object and load the data into the cache using the
stmr.addData API of the corresponding data streamer.

Out of 1 lakh (100,000) rows, only 35K rows get loaded, and then the client
node stops all of a sudden. Can anyone please help me resolve this issue?

Following is the log generated by this program. Attached are the program
code I am executing and the Client.xml file, for your reference.
Client.xml
<http://apache-ignite-users.70518.x6.nabble.com/file/t2737/Client.xml>
Java_Program.txt
<http://apache-ignite-users.70518.x6.nabble.com/file/t2737/Java_Program.txt>

My understanding after looking at the log is that stmr.addData returns an
IgniteFuture, which means it is an async operation, and the program ends
after the iteration completes with some data yet to be loaded into the cache.
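
If that is the cause, a minimal sketch of the fix (a sketch only; Key and
Value stand for my key/value classes): close or flush the streamer before
the program exits, so the pending futures complete:

try (IgniteDataStreamer<Key, Value> stmr = ignite.dataStreamer("PieCountryAllocationCache")) {
    for (Map.Entry<Key, Value> e : tempobjs.entrySet())
        stmr.addData(e.getKey(), e.getValue());   // async; returns an IgniteFuture
}   // close() flushes and waits for the outstanding futures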


May 17, 2020 1:35:29 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: DataStreamer operation failed.
class org.apache.ignite.IgniteCheckedException: Data streamer has been
cancelled: DataStreamerImpl [bufLdrSzPerThread=4096,
rcvr=org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$IsolatedUpdater@3b0ee03a,
ioPlcRslvr=null, cacheName=PieCountryAllocationCache, bufSize=512,
parallelOps=0, timeout=-1, autoFlushFreq=0,
bufMappings={be10cd31-aed7-448a-8fec-60fd72a62313=Buffer
[node=TcpDiscoveryNode [id=be10cd31-aed7-448a-8fec-60fd72a62313,
addrs=[127.0.0.1, 172.30.197.5], sockAddrs=[/127.0.0.1:47500,
azuswvlnx00687.corp.frk.com/172.30.197.5:47500], discPort=47500, order=1,
intOrder=1, lastExchangeTime=1589702634687, loc=false,
ver=2.7.6#20190911-sha1:21f7ca41, isClient=false], isLocNode=false,
idGen=85, sem=java.util.concurrent.Semaphore@5b12012e[Permits = 15],
perNodeParallelOps=64, entriesCnt=2944, locFutsSize=0, reqsSize=49],
92cbed29-93d6-428d-a3da-a30e4264aa20=Buffer [node=TcpDiscoveryNode
[id=92cbed29-93d6-428d-a3da-a30e4264aa20, addrs=[127.0.0.1, 172.30.197.6],
sockAddrs=[azuswvlnx00688.corp.frk.com/172.30.197.6:47500,
/127.0.0.1:47500], discPort=47500, order=2, intOrder=2,
lastExchangeTime=1589702635145, loc=false, ver=2.7.6#20190911-sha1:21f7ca41,
isClient=false], isLocNode=false, idGen=99,
sem=java.util.concurrent.Semaphore@2f7dcef2[Permits = 0],
perNodeParallelOps=64, entriesCnt=1152, locFutsSize=0, reqsSize=64]},
cacheObjProc=GridProcessorAdapter [],
cacheObjCtx=org.apache.ignite.internal.processors.cache.binary.CacheObjectBinaryContext@4a3be6a5,
cancelled=true, cancellationReason=null, failCntr=0,
activeFuts=GridConcurrentHashSet [elements=[GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=1579584742],
GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null,
hash=2059282367], GridFutureAdapter [ignoreInterrupts=false, state=INIT,
res=null, hash=1027006452], GridFutureAdapter [ignoreInterrupts=false,
state=INIT, res=null, hash=950125603], GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=227100877],
GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null,
hash=741370455], GridFutureAdapter [ignoreInterrupts=false, state=INIT,
res=null, hash=1536478396], GridFutureAdapter [ignoreInterrupts=false,
state=INIT, res=null, hash=1081344572], GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=1538745405],
GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null,
hash=2000563893], GridFutureAdapter [ignoreInterrupts=false, state=INIT,
res=null, hash=997918120], GridFutureAdapter [ignoreInterrupts=false,
state=INIT, res=null, hash=985679444], GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=1164436797],
GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null,
hash=954937264], GridFutureAdapter [ignoreInterrupts=false, state=INIT,
res=null, hash=339126187], GridFutureAdapter [ignoreInterrupts=false,
state=INIT, res=null, hash=1053856141], GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=862152124],
GridFutureAdapter [ignoreInterrupts=false, state=INIT, res=null,
hash=1934729582]]],
jobPda=org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$DataStreamerPda@3721177d,
depCls=null, fut=DataStreamerFuture [super=GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=1986676021]],
publicFut=IgniteFuture [orig=DataStreamerFuture [super=GridFutureAdapter
[ignoreInterrupts=false, state=INIT, res=null, hash=1986676021]]],
disconnectErr=null, closed=true, lastFlushTime=1589702649336,
skipStore=false, keepBinary=false,

Re: Loading Data from RDBMS to Ignite using Data Streamer

2020-03-26 Thread nithin91
Hi,

Currently I am able to load 5.5 million rows from an Oracle table into Ignite
in 1 hour using the JDBC POJO configuration. I was curious how long it would
take to load the data using data streamers, so I tried loading 10,000 records
with a data streamer, and it took almost half an hour.
So I wanted to know whether the technique I implemented for loading data from
Oracle to Ignite with a data streamer is correct or wrong. PFB the pseudocode
I implemented. Using this method takes a lot of time, which is expected, as
we are looping through each and every record.

try (IgniteDataStreamer stmr = ignite.dataStreamer("CacheName")) {
    // Stream entries.
    while (rs.next()) {   // rs is a JDBC ResultSet
        Key keyobj = new Key(rs.getString(1));
        Value valueobj = new Value(rs.getString(2), rs.getString(3), rs.getString(4));
        stmr.addData(keyobj, valueobj);
    }
}

Can you please share sample code for streaming data from any RDBMS to Ignite?
Also, can you please let me know whether an external streamer is needed to
stream data from an RDBMS to Ignite.





Re: Loading Data from RDBMS to Ignite using Data Streamer

2020-03-24 Thread akurbanov
Since this feels like a performance question, I would start by identifying
the bottleneck in this case.

How much data are you going to load from Oracle?

What consumes the most time while streaming: reading the data from Oracle, or
the streaming itself? How high is the resource usage while streaming
(CPU/memory consumption)? This will define what should be tuned/changed.

What are the current numbers for loadCache(null) vs. DataStreamer?

Best regards,
Anton





Loading Data from RDBMS to Ignite using Data Streamer

2020-03-23 Thread nithin91
Hi,

I am new to Ignite. I am able to load data from Oracle into Ignite using the
cache POJO store method, ignite.cache("cachename").loadCache(null). But the
documentation mentions that data streamers provide better performance when
loading data into an Ignite cache.

*Documentation Reference:*

https://apacheignite.readme.io/docs/data-streamers

Following is the example provided for data streamers in the documentation:

// Get the data streamer reference and stream data.
try (IgniteDataStreamer<Integer, String> stmr = ignite.dataStreamer("myStreamCache")) {
    // Stream entries.
    for (int i = 0; i < 10; i++)
        stmr.addData(i, Integer.toString(i));
}

I have the following questions, taking this example as a reference, about
streaming data from an Oracle table to an Ignite cache.

1. Is an external streamer needed to stream data from Oracle to Ignite in
order to use the data streaming technique shown in the example above?

2. If an external streamer is not required, do we need to loop through the
JDBC result set and add data to the cache? Please find below the pseudocode
implementation of the same.

try (IgniteDataStreamer stmr = ignite.dataStreamer("CacheName")) {
    // Stream entries.
    while (rs.next()) {   // rs is a JDBC ResultSet
        Key keyobj = new Key(rs.getString(1));
        Value valueobj = new Value(rs.getString(2), rs.getString(3), rs.getString(4));
        stmr.addData(keyobj, valueobj);
    }
}

When I tried to load data into the Ignite cache from Oracle using this
method, it took a lot of time, which is expected, as we are looping through
each and every record. Is this the right way to implement data streamers to
load data from Oracle to Ignite? If this is not the right way, can anyone
share sample code to stream data from any RDBMS table to an Ignite cache?







Re: Using data streamer for local entries update

2020-01-10 Thread Ilya Kasnacheev
Hello!

If no rebalance is going on, then the answer is yes.

Regards,
-- 
Ilya Kasnacheev




Re: Using data streamer for local entries update

2020-01-10 Thread djm132
Does localEntries(CachePeekMode.PRIMARY) guarantee that keys will not overlap
for a partitioned cache?

Thanks.





Re: Using data streamer for local entries update

2020-01-10 Thread Ilya Kasnacheev
Hello!

Yes, it should be perfectly safe. The data streamer does batching, hence the
speed-up. Please note that data streamer batches may lock keys, which means
you should be careful when doing batch operations in parallel, to avoid
deadlocks.

Regards,
-- 
Ilya Kasnacheev




Re: Using data streamer for local entries update

2020-01-09 Thread djm132
Backup count = 1 and WAL mode = background.





Using data streamer for local entries update

2020-01-09 Thread djm132
Hi,

Is it safe to update local node entries in parallel with a data streamer?
Here is the benchmark code; it seems that addData() is much faster than
individual put().

val res = Node.ignite.compute(project.indexNodes).broadcast {
    var n = 0L
    Node.ignite.dataStreamer(project.index.name).use { ds ->
        ds.allowOverwrite(true)
        project.index.localEntries(CachePeekMode.PRIMARY).forEach { it ->
            it.value.status = IndexRecordStatus.COMPLETED
            ds.addData(it.value.reqKey, it.value)
            n++
        }
    }

    println("Updated $n records")
    n
}






Re: How to set a timeout while using the data streamer in SQL mode

2020-01-07 Thread Andrei Aleksandrov

Hi,

You can find all the existing options here:

https://apacheignite-sql.readme.io/docs/jdbc-client-driver
https://apacheignite-sql.readme.io/docs/jdbc-driver
https://apacheignite-sql.readme.io/docs/odbc-driver

At the moment I see that only the *jdbc-client-driver* has a dedicated
*streamingFlushFrequency* timeout:


Timeout, in milliseconds, that the data streamer should use to flush data.
By default, the data is flushed on connection close.
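
For illustration, a hypothetical connection string using this parameter with
the JDBC client driver (parameter names per the page linked above; the config
path and value are assumptions):

import java.sql.Connection;
import java.sql.DriverManager;

// Flush streamed data roughly every second instead of only on close.
Connection conn = DriverManager.getConnection(
    "jdbc:ignite:cfg://streaming=true:streamingFlushFrequency=1000@file:///path/to/ignite-config.xml");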


The thin clients don't seem to have it yet. You should probably ask about it
on the dev list.


BR,
Andrei



How to set a timeout while using the data streamer in SQL mode

2020-01-01 Thread yangjiajun
Hello.

I use 'SET STREAMING ON ORDERED;' to enable streaming mode. I know that we can
set a timeout on a data streamer. How can I set this in SQL mode?





Re: Data Streamer API issues to load data into cache

2019-10-07 Thread Ilya Kasnacheev
Hello!

Unfortunately, it's hard to say what's going on. Can you please share the
stack trace? Furthermore, my recommendation would be to either disable
failure detection altogether or increase failureDetectionTimeout.
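
A minimal sketch of those two options (a sketch only; the timeout value is
illustrative):

import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.NoOpFailureHandler;

IgniteConfiguration cfg = new IgniteConfiguration();

// Option 1: no-op failure handling, so the node is not stopped/halted.
cfg.setFailureHandler(new NoOpFailureHandler());

// Option 2: give blocked system workers more headroom (default is 10,000 ms).
cfg.setFailureDetectionTimeout(60_000);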

Regards,
-- 
Ilya Kasnacheev




Data Streamer API issues to load data into cache

2019-10-04 Thread vinod.jv
Hi,

I am using the Data Streamer API to load data into an Ignite cache in
embedded mode, but I am seeing intermittent issues. Below is the exception.
Sometimes jobs run successfully, and sometimes they hang and the log shows
too many of the exceptions below.

I tried upgrading from Ignite 2.7 to 2.7.5 and removed the
allowOverwrite(true) that had been added.
I also tried setting autoFlushFrequency(100), i.e. 100 ms. I am still seeing
the issue.

ERROR : Critical system error detected. Will be handled accordingly to
configured handler [hnd=StopNodeOrHaltFailureHandler [tryStop=false,
timeout=0, super=AbstractFailureHandler
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], failureCtx=FailureContext
[type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker
[name=data-streamer-stripe-19, igniteInstanceName=null, finished=false,
heartbeatTs=1568896033931]]]
class org.apache.ignite.IgniteException: GridWorker
[name=data-streamer-stripe-19, igniteInstanceName=null, finished=false,
heartbeatTs=1568896033931]
19/09/19 07:29:53 ERROR : Critical system error detected. Will be handled
accordingly to configured handler [hnd=StopNodeOrHaltFailureHandler
[tryStop=false, timeout=0, super=AbstractFailureHandler
[ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED]]], failureCtx=FailureContext
[type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker
[name=data-streamer-stripe-41, igniteInstanceName=null, finished=false,
heartbeatTs=1568896023252]]]





Re: Data streamer closed randomly

2019-08-28 Thread Ilya Kasnacheev
Hello!

Can you upload full logs from all nodes somewhere? I guess there should be
something else besides this error.

Regards,
-- 
Ilya Kasnacheev




Data streamer closed randomly

2019-08-27 Thread KR Kumar
Hi guys, I have a three-node cluster in which one node has 192GB RAM and 48
cores (I call this the manager, as it does some heavy lifting), and the other
2 nodes have 60GB RAM and 36 cores (worker nodes). I am getting the following
exception randomly:

[2019-08-28 03:03:52,346][ERROR][client-connector-#277][JdbcRequestHandler]
Failed to execute batch query [qry=SqlFieldsQuery [sql=INSERT INTO
EVENTS_IDX_MAIN_SPL_1 VALUES (?, ?, ?, ?, ?, ?), args=null,
collocated=false, timeout=0, enforceJoinOrder=false,
distributedJoins=false, replicatedOnly=false, lazy=false, schema=PUBLIC]]
javax.cache.CacheException: class org.apache.ignite.IgniteCheckedException:
Data streamer has been closed.
    at org.apache.ignite.internal.processors.query.GridQueryProcessor.streamBatchedUpdateQuery(GridQueryProcessor.java:2120)
    at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeBatchedQuery(JdbcRequestHandler.java:694)
    at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeBatch(JdbcRequestHandler.java:650)
    at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.executeBatchOrdered(JdbcRequestHandler.java:278)
    at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.dispatchBatchOrdered(JdbcRequestHandler.java:264)
    at org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler.handle(JdbcRequestHandler.java:218)
    at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:160)
    at org.apache.ignite.internal.processors.odbc.ClientListenerNioListener.onMessage(ClientListenerNioListener.java:44)
    at org.apache.ignite.internal.util.nio.GridNioFilterChain$TailFilter.onMessageReceived(GridNioFilterChain.java:279)

The worker nodes are up and healthy, but I am not sure whether this is a
split-brain sort of situation?

I would appreciate your help with this.

Regards,
RAGHAV


Re: Data streamer has been closed.

2019-05-07 Thread yangjiajun
Hello.

I forgot to add 'ORDERED' somewhere, which broke this workaround. I think
'SET STREAMING ON ALLOW_OVERWRITE ON ORDERED;' works for me if I always use
it.






Re: Data streamer has been closed.

2019-05-07 Thread yangjiajun
Hello.

How do I use this workaround when allow-overwrite is enabled?

I tried 'SET STREAMING ON ALLOW_OVERWRITE ON ORDERED;' and 'SET STREAMING ON
ORDERED ALLOW_OVERWRITE ON;', but neither works.


Taras Ledkov wrote:
> Hi,
>
> Workaround: use ordered streaming:
> SET STREAMING ON ORDERED
>
> There is a bug on the Ignite server in non-ordered mode.
> The fix will be in master soon.
>
> On 20.01.2019 14:11, ilya.kasnacheev wrote:
>> Hello!
>>
>> I have filed an issue: https://issues.apache.org/jira/browse/IGNITE-10991
>>
>> Regards,
>
> --
> Taras Ledkov







Re: [data-streamer-stripe-2-#35][] Critical system error detected.

2019-05-07 Thread Evgenii Zhuravlev
Hi,

Here is the documentation regarding this message:
https://apacheignite.readme.io/v2.7/docs/jvm-and-system-tuning#section-file-descriptors
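
For illustration, a minimal sketch of checking the limits from inside the JVM
(a sketch only; it assumes a UNIX JVM that exposes
com.sun.management.UnixOperatingSystemMXBean):

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

UnixOperatingSystemMXBean os =
    (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();

System.out.println("open fds: " + os.getOpenFileDescriptorCount()
    + " / max: " + os.getMaxFileDescriptorCount());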

Evgenii



Re: [data-streamer-stripe-2-#35][] Critical system error detected.

2019-05-07 Thread yangjiajun
thread_dump.rar (attached)





[data-streamer-stripe-2-#35][] Critical system error detected.

2019-05-07 Thread yangjiajun
Hello.

My Ignite node crashed with the following errors.

[16:39:31,163][SEVERE][data-streamer-stripe-2-#35][] Critical system error
detected. Will be handled accordingly to configured handler
[hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
[type=CRITICAL_ERROR, err=class
o.a.i.i.processors.cache.persistence.StorageException: Failed to initialize
partition file:
//data/persistence/node00-51465194-9ac4-4c65-b156-9d61d3cd3299/cache-SQL_PUBLIC_TEMP_TABLE_94_1_5_20190507163914_94/part-439.bin]]
class org.apache.ignite.internal.processors.cache.persistence.StorageException:
Failed to initialize partition file:
//ignite/data/persistence/node00-51465194-9ac4-4c65-b156-9d61d3cd3299/cache-SQL_PUBLIC_TEMP_TABLE_94_1_5_20190507163914_94/part-439.bin
    at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.init(FilePageStore.java:466)
    at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.read(FilePageStore.java:355)
    at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:490)
    at org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:474)
    at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:857)
    at org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:698)
    at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.getOrAllocatePartitionMetas(GridCacheOffheapManager.java:1657)
    at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.init0(GridCacheOffheapManager.java:1494)
    at org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.invoke(GridCacheOffheapManager.java:2136)
    at org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.invoke(IgniteCacheOffheapManagerImpl.java:457)
    at org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerUpdate(GridCacheMapEntry.java:2357)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateSingle(GridDhtAtomicCache.java:2584)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update(GridDhtAtomicCache.java:2041)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1870)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1690)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.sendSingleRequest(GridNearAtomicAbstractUpdateFuture.java:317)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.map(GridNearAtomicSingleUpdateFuture.java:504)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicSingleUpdateFuture.mapOnTopology(GridNearAtomicSingleUpdateFuture.java:464)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicAbstractUpdateFuture.map(GridNearAtomicAbstractUpdateFuture.java:266)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.update0(GridDhtAtomicCache.java:1175)
    at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.put0(GridDhtAtomicCache.java:633)
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2517)
    at org.apache.ignite.internal.processors.cache.GridCacheAdapter.put(GridCacheAdapter.java:2494)
    at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.put(IgniteCacheProxyImpl.java:1306)
    at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.put(GatewayProtectedCacheProxy.java:835)
    at org.apache.ignite.internal.processors.datastreamer.DataStreamerCacheUpdaters$Individual.receive(DataStreamerCacheUpdaters.java:138)
    at org.apache.ignite.internal.processors.datastreamer.DataStreamerUpdateJob.call(DataStreamerUpdateJob.java:157)
    at org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6836)
    at org.apache.ignite.internal.processors.closure.GridClosureProcessor$2.body(GridClosureProcessor.java:984)
    at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:137)
    at org.apache.ignite.internal.util.StripedExecutor$Stripe.body(StripedExecutor.java:522

Re: Re: Ignite Data Streamer Hung after a period

2019-02-28 Thread Ilya Kasnacheev
Hello!

The reason for slow index building, as well as a workaround, is described in:
http://apache-ignite-users.70518.x6.nabble.com/Performance-degradation-in-case-of-high-volumes-tp27150p27204.html

Regards,
-- 
Ilya Kasnacheev


Thu, 28 Feb 2019 at 14:34, Justin Ji :

> I have tried to load data without indexes, but it did not help!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
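
A minimal sketch of that workaround, on the assumption that the linked post
recommends tuning the index inline size (the setInlineSize(128) call in the
configuration quoted later in this thread is the same idea). The type and
field names are taken from that configuration; everything else is a
placeholder:

import java.util.Collections;
import java.util.LinkedHashMap;

import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.configuration.CacheConfiguration;

public class InlineSizeSketch {
    public static CacheConfiguration<Object, Object> cacheConfig() {
        CacheConfiguration<Object, Object> cacheCfg = new CacheConfiguration<>("dataPointCache");

        QueryEntity entity = new QueryEntity("DpKey", "DpCache");

        LinkedHashMap<String, String> fields = new LinkedHashMap<>();
        fields.put("devId", "java.lang.String");
        entity.setFields(fields);

        QueryIndex devIdIdx = new QueryIndex("devId");

        // Inline size is in bytes. For a VARCHAR column it should be large
        // enough to hold typical values, so that B+ tree comparisons during
        // index updates do not have to dereference data pages.
        devIdIdx.setInlineSize(128);

        entity.setIndexes(Collections.singletonList(devIdIdx));
        cacheCfg.setQueryEntities(Collections.singletonList(entity));

        return cacheCfg;
    }
}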


Re: Re: Ignite Data Streamer Hung after a period

2019-02-28 Thread Justin Ji
I have tried to load data without indexes, but it did not help!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re: Ignite Data Streamer Hung after a period

2019-02-28 Thread Justin Ji
After digging into the issue, I found that the streamer threads are waiting
for the index build.
This looks normal in a database-backed system: the more data, the slower the
insertion.
But Ignite is a widely used system, so I think other people may have
encountered this problem and found ways to improve the performance.

I would appreciate any advice.

"data-streamer-stripe-2-#11%nx-s-ignite-001%" #30 prio=5 os_prio=0
tid=0x5571cbdba800 nid=0x95 waiting on condition [0x7f8e8c9ed000]
   java.lang.Thread.State: WAITING (parking)
   at sun.misc.Unsafe.park(Native Method)
   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
   at
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
   at
org.apache.ignite.internal.util.future.GridFutureAdapter.getUninterruptibly(GridFutureAdapter.java:145)
   at
org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIO.read(AsyncFileIO.java:95)
   at
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.read(FilePageStore.java:351)
   at
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:328)
   at
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:312)
   at
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:779)
   at
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:624)
   at
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:140)
   at
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
   at
org.apache.ignite.internal.processors.query.h2.database.H2RowFactory.getRow(H2RowFactory.java:61)
   at
org.apache.ignite.internal.processors.query.h2.database.H2Tree.createRowFromLink(H2Tree.java:149)
   at
org.apache.ignite.internal.processors.query.h2.database.io.H2LeafIO.getLookupRow(H2LeafIO.java:67)
   at
org.apache.ignite.internal.processors.query.h2.database.io.H2LeafIO.getLookupRow(H2LeafIO.java:33)
   at
org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(H2Tree.java:167)
   at
org.apache.ignite.internal.processors.query.h2.database.H2Tree.getRow(H2Tree.java:46)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.getRow(BPlusTree.java:4482)
   at
org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare(H2Tree.java:209)
   at
org.apache.ignite.internal.processors.query.h2.database.H2Tree.compare(H2Tree.java:46)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.compare(BPlusTree.java:4469)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.findInsertionPoint(BPlusTree.java:4389)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.access$1500(BPlusTree.java:83)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$Search.run0(BPlusTree.java:278)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4816)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree$GetPageHandler.run(BPlusTree.java:4801)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.readPage(PageHandler.java:158)
   at
org.apache.ignite.internal.processors.cache.persistence.DataStructure.read(DataStructure.java:332)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2336)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2348)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putDown(BPlusTree.java:2348)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.doPut(BPlusTree.java:2086)
   at
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.putx(BPlusTree.java:2066)
   at
org.apache.ignite.internal.processors.query.h2.database.H2TreeIndex.putx(H2TreeIndex.java:247)
   at
org.apache.ignite.internal.processors.query.h2.opt.GridH2Table.update(GridH2Table.java:466)
   at
org.apache.ignite.internal.processors.query.h2.IgniteH2Indexing.store(IgniteH2Indexing.java:659)
   at
org.apache.ignite.internal.processors.query.GridQueryProcessor.store(GridQueryProcessor.java:1866)
   at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryManager.store(GridCacheQueryManager.java:403)
   at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.finishUpdate(IgniteCacheOffheapManagerImpl.java:1393)
   at
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.invoke(IgniteCacheOffheapManagerImpl


Re: Ignite Data Streamer Hung after a period

2019-02-27 Thread BinaryTree
Thanks for your reply.
1. Yes, I have persistence.
2. I think the cache store is not the bottleneck, because skipStore is
enabled when loading data.
IgniteDataStreamer streamer = 
ignite.dataStreamer(IgniteCacheKey.DATA_POINT_NEW.getCode());
streamer.skipStore(true);
streamer.keepBinary(true);
streamer.perNodeBufferSize(1);
streamer.perNodeParallelOperations(32);





-- Original Message --
From: "Ilya Kasnacheev";
Date: 2019-02-27 (Wed) 9:59 PM
To: "user";
Subject: Re: Ignite Data Streamer Hung after a period



Hello!



It's hard to say. Do you have persistence? Are you sure that cache store is not 
the bottleneck?


I would start with gathering thread dumps from whole cluster when in stuck 
state.


Regards,

-- 

Ilya Kasnacheev


Wed, 27 Feb 2019 at 15:06, Justin Ji :

Dmitry  - 
 
 I also encountered this problem.
 
 I used both persistence and indexing, when I loaded 20 million records, the
 loading speed became much slower than before, but the CPU of the ignite
 server is low.
 
 
<http://apache-ignite-users.70518.x6.nabble.com/file/t2000/WX20190227-200059.png>
 
 
 Here is my cache configuration:
 
 CacheConfiguration cacheCfg = new CacheConfiguration();
 cacheCfg.setName(cacheName);
 cacheCfg.setCacheMode(CacheMode.PARTITIONED);
 cacheCfg.setBackups(1);
 cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
 
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
 cacheCfg.setWriteThrough(true);
 cacheCfg.setWriteBehindEnabled(true);
 cacheCfg.setWriteBehindFlushThreadCount(2);
 cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
 cacheCfg.setWriteBehindFlushSize(409600);
 cacheCfg.setWriteBehindBatchSize(1024);
 cacheCfg.setStoreKeepBinary(true);
 cacheCfg.setQueryParallelism(16);
 cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);
 cacheCfg.setRebalanceThrottle(100);
 CacheKeyConfiguration cacheKeyConfiguration = new
 CacheKeyConfiguration(DpKey.class);
 cacheCfg.setKeyConfiguration(cacheKeyConfiguration);
 
 List entities = Lists.newArrayList();
 
 QueryEntity entity = new QueryEntity(DpKey.class.getName(),
 DpCache.class.getName());
 entity.setTableName(IgniteTableKey.T_DATA_POINT_NEW.getCode());
 
 LinkedHashMap map = new LinkedHashMap<>();
 map.put("id", "java.lang.String");
 map.put("gmtCreate", "java.lang.Long");
 map.put("gmtModified", "java.lang.Long");
 map.put("devId", "java.lang.String");
 map.put("dpId", "java.lang.Integer");
 map.put("code", "java.lang.String");
 map.put("name", "java.lang.String");
 map.put("customName", "java.lang.String");
 map.put("mode", "java.lang.String");
 map.put("type", "java.lang.String");
 map.put("value", "java.lang.String");
 map.put("rawValue", byte[].class.getName());
 map.put("time", "java.lang.Long");
 map.put("status", "java.lang.Boolean");
 map.put("uuid", "java.lang.String");
 
 entity.setFields(map);
 QueryIndex devIdIdx = new QueryIndex("devId");
 devIdIdx.setName("idx_devId");
 devIdIdx.setInlineSize(128);
 List indexes = Lists.newArrayList(devIdIdx);
 entity.setIndexes(indexes);
 
 entities.add(entity);
 cacheCfg.setQueryEntities(entities);
 
 
 Can you give me some advice on where to start solving these problems?
 
 
 
 
 --
 Sent from: http://apache-ignite-users.70518.x6.nabble.com/

Re: Ignite Data streamer optimization

2019-02-27 Thread Ilya Kasnacheev
Hello!

Why do you have nodeParallelOperations of 1?

Do you really have issues with the default configuration?

Regards,
-- 
Ilya Kasnacheev


Wed, 27 Feb 2019 at 08:59, ashishb888 :

> Sure. But in my case I can not do so. Any other options for single threads?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Data Streamer Hung after a period

2019-02-27 Thread Ilya Kasnacheev
Hello!

It's hard to say. Do you have persistence? Are you sure that the cache store
is not the bottleneck?

I would start with gathering thread dumps from the whole cluster when it is
in the stuck state.

Regards,
-- 
Ilya Kasnacheev


Wed, 27 Feb 2019 at 15:06, Justin Ji :

> Dmitry  -
>
> I also encountered this problem.
>
> I used both persistence and indexing, when I loaded 20 million records, the
> loading speed became much slower than before, but the CPU of the ignite
> server is low.
>
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2000/WX20190227-200059.png>
>
>
> Here is my cache configuration:
>
> CacheConfiguration cacheCfg = new CacheConfiguration();
> cacheCfg.setName(cacheName);
> cacheCfg.setCacheMode(CacheMode.PARTITIONED);
> cacheCfg.setBackups(1);
> cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
>
> cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
> cacheCfg.setWriteThrough(true);
> cacheCfg.setWriteBehindEnabled(true);
> cacheCfg.setWriteBehindFlushThreadCount(2);
> cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
> cacheCfg.setWriteBehindFlushSize(409600);
> cacheCfg.setWriteBehindBatchSize(1024);
> cacheCfg.setStoreKeepBinary(true);
> cacheCfg.setQueryParallelism(16);
> cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);
> cacheCfg.setRebalanceThrottle(100);
> CacheKeyConfiguration cacheKeyConfiguration = new
> CacheKeyConfiguration(DpKey.class);
> cacheCfg.setKeyConfiguration(cacheKeyConfiguration);
>
> List entities = Lists.newArrayList();
>
> QueryEntity entity = new QueryEntity(DpKey.class.getName(),
> DpCache.class.getName());
> entity.setTableName(IgniteTableKey.T_DATA_POINT_NEW.getCode());
>
> LinkedHashMap map = new LinkedHashMap<>();
> map.put("id", "java.lang.String");
> map.put("gmtCreate", "java.lang.Long");
> map.put("gmtModified", "java.lang.Long");
> map.put("devId", "java.lang.String");
> map.put("dpId", "java.lang.Integer");
> map.put("code", "java.lang.String");
> map.put("name", "java.lang.String");
> map.put("customName", "java.lang.String");
> map.put("mode", "java.lang.String");
> map.put("type", "java.lang.String");
> map.put("value", "java.lang.String");
> map.put("rawValue", byte[].class.getName());
> map.put("time", "java.lang.Long");
> map.put("status", "java.lang.Boolean");
> map.put("uuid", "java.lang.String");
>
> entity.setFields(map);
> QueryIndex devIdIdx = new QueryIndex("devId");
> devIdIdx.setName("idx_devId");
> devIdIdx.setInlineSize(128);
> List indexes = Lists.newArrayList(devIdIdx);
> entity.setIndexes(indexes);
>
> entities.add(entity);
> cacheCfg.setQueryEntities(entities);
>
>
> Can you give me some advice on where to start solving these problems?
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Data Streamer Hung after a period

2019-02-27 Thread Justin Ji
Dmitry -

I also encountered this problem.

I used both persistence and indexing; when I loaded 20 million records, the
loading speed became much slower than before, but the CPU usage of the Ignite
server is low.

<http://apache-ignite-users.70518.x6.nabble.com/file/t2000/WX20190227-200059.png>

Here is my cache configuration:

CacheConfiguration cacheCfg = new CacheConfiguration();
cacheCfg.setName(cacheName);
cacheCfg.setCacheMode(CacheMode.PARTITIONED);
cacheCfg.setBackups(1);
cacheCfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DataPointCacheStore.class));
cacheCfg.setWriteThrough(true);
cacheCfg.setWriteBehindEnabled(true);
cacheCfg.setWriteBehindFlushThreadCount(2);
cacheCfg.setWriteBehindFlushFrequency(15 * 1000);
cacheCfg.setWriteBehindFlushSize(409600);
cacheCfg.setWriteBehindBatchSize(1024);
cacheCfg.setStoreKeepBinary(true);
cacheCfg.setQueryParallelism(16);
cacheCfg.setRebalanceBatchSize(2 * 1024 * 1024);
cacheCfg.setRebalanceThrottle(100);
CacheKeyConfiguration cacheKeyConfiguration = new
CacheKeyConfiguration(DpKey.class);
cacheCfg.setKeyConfiguration(cacheKeyConfiguration);

List<QueryEntity> entities = Lists.newArrayList();

QueryEntity entity = new QueryEntity(DpKey.class.getName(),
DpCache.class.getName());
entity.setTableName(IgniteTableKey.T_DATA_POINT_NEW.getCode());

LinkedHashMap<String, String> map = new LinkedHashMap<>();
map.put("id", "java.lang.String");
map.put("gmtCreate", "java.lang.Long");
map.put("gmtModified", "java.lang.Long");
map.put("devId", "java.lang.String");
map.put("dpId", "java.lang.Integer");
map.put("code", "java.lang.String");
map.put("name", "java.lang.String");
map.put("customName", "java.lang.String");
map.put("mode", "java.lang.String");
map.put("type", "java.lang.String");
map.put("value", "java.lang.String");
map.put("rawValue", byte[].class.getName());
map.put("time", "java.lang.Long");
map.put("status", "java.lang.Boolean");
map.put("uuid", "java.lang.String");

entity.setFields(map);
QueryIndex devIdIdx = new QueryIndex("devId");
devIdIdx.setName("idx_devId");
devIdIdx.setInlineSize(128);
List<QueryIndex> indexes = Lists.newArrayList(devIdIdx);
entity.setIndexes(indexes);

entities.add(entity);
cacheCfg.setQueryEntities(entities);


Can you give me some advice on where to start solving these problems?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Data streamer optimization

2019-02-26 Thread ashishb888
Sure. But in my case I cannot do so. Any other options for a single thread?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data streamer has been closed.

2019-02-20 Thread Taras Ledkov

Hi,

Workaround: use ordered streaming:
SET STREAMING ON ORDERED

There is a bug on the Ignite server side in non-ordered mode.
The fix will be in master soon.

On 20.01.2019 14:11, ilya.kasnacheev wrote:

Hello!

I have filed an issue https://issues.apache.org/jira/browse/IGNITE-10991

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


--
Taras Ledkov
Mail-To: tled...@gridgain.com
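
A minimal sketch of the workaround over the JDBC thin driver. The host/port
are placeholders, and the city1 table is an assumption matching the
reproducer quoted further down in this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class OrderedStreamingSketch {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                 DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800")) {
            // ORDERED applies updates in the order they were sent, which
            // avoids the non-ordered-mode server bug mentioned above.
            conn.prepareStatement("SET STREAMING ON ORDERED").execute();

            try (PreparedStatement ps = conn.prepareStatement(
                     "INSERT INTO city1(id, name, name1) VALUES(?, ?, ?)")) {
                for (long i = 0; i < 1000; i++) {
                    ps.setLong(1, i);
                    ps.setString(2, "name-" + i);
                    ps.setString(3, "name1-" + i);
                    ps.execute();
                }
            }

            // Turning streaming off flushes everything still buffered on this connection.
            conn.prepareStatement("SET STREAMING OFF").execute();
        }
    }
}

Ordered mode trades some throughput for the ordering guarantee, so it is a
workaround rather than a general recommendation.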



Re: Data streamer has been closed.

2019-01-20 Thread ilya.kasnacheev
Hello!

I have filed an issue https://issues.apache.org/jira/browse/IGNITE-10991

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data streamer has been closed.

2019-01-18 Thread Ilya Kasnacheev
Hello!

I can observe this problem with this reproducer on 2.7. I will try to
investigate more and file a ticket.

Regards,
-- 
Ilya Kasnacheev


Thu, 17 Jan 2019 at 16:14, yangjiajun <1371549...@qq.com>:

> Hello.
>
> Please try this loop:
> for(int i=0;i<10;i++){
> System.out.println(i);
> initData();
> overwriteData();
> }
>
> And here is my config file:
> example-default12.xml
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t2059/example-default12.xml>
>
>
> It is harder to reproduce this issue in 2.7 than 2.6.But the exception does
> appear.
>
>
> ilya.kasnacheev wrote
> > Hello!
> >
> > I have ran your code several times and not encountered this problem. Can
> > you make a new reproducer which will show it (semi)reliably?
> >
> > Regards,
> > --
> > Ilya Kasnacheev
> >
> >
> > Thu, 17 Jan 2019 at 05:18, yangjiajun <
>
> > 1371549332@
>
> >>:
> >
> >> Hello.
> >>
> >> Thanks for reply.Unfortunately,I still get the exception after running
> my
> >> test on 2.7  for several times.
> >>
> >>
> >> ilya.kasnacheev wrote
> >> > Hello!
> >> >
> >> > I can reproduce this problem, but then again, it does not seem to
> >> > reproduce
> >> > on 2.7. Have you considered upgrading?
> >> >
> >> > Regards,
> >> > --
> >> > Ilya Kasnacheev
> >> >
> >> >
> >> > Wed, 16 Jan 2019 at 14:14, yangjiajun <
> >>
> >> > 1371549332@
> >>
> >> >>:
> >> >
> >> >> Hello.
> >> >>
> >> >> I do  test on a ignite 2.6 node with persistence enabled and get an
> >> >> exception:
> >> >>
> >> >>  Exception in thread "main" java.sql.BatchUpdateException: class
> >> >> org.apache.ignite.IgniteCheckedException: Data streamer has been
> >> closed.
> >> >> at
> >> >>
> >> >>
> >>
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.readResponses(JdbcThinConnection.java:1017)
> >> >> at java.lang.Thread.run(Unknown Source)
> >> >>
> >> >> Here is my test code:
> >> >>
> >> >> import java.sql.Connection;
> >> >> import java.sql.DriverManager;
> >> >> import java.sql.PreparedStatement;
> >> >> import java.sql.SQLException;
> >> >> import java.util.Properties;
> >> >>
> >> >> /**
> >> >>  * test insert data in streaming mode
> >> >>  * */
> >> >> public class InsertStreamingMode {
> >> >>
> >> >> private static Connection conn;
> >> >>
> >> >> public static void main(String[] args) throws Exception {
> >> >>
> >> >> initialize();
> >> >>
> >> >> close();
> >> >> }
> >> >>
> >> >> public static void close() throws Exception {
> >> >> conn.close();
> >> >> }
> >> >>
> >> >> public static void initialize() throws Exception {
> >> >>
> >> Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
> >> >> final String dbUrl =
> >> >>
> >> >>
> >>
> "jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true";
> >> >> final Properties props = new Properties();
> >> >> conn = DriverManager.getConnection(dbUrl, props);
> >> >> initData();
> >> >> overwriteData();
> >> >> }
> >> >>
> >> >> private static void initData() throws SQLException{
> >> >>
> >> >> long start=System.currentTimeMillis();
> >> >> conn.prepareStatement("SET STREAMING ON
> >> ALLOW_OVERWRITE
> >> >> ON").execute();
> >> >>
> >> >> String sql="insert INTO  city1(id,name,name1)
> >> >> VALUES(?,?,?)";

Re: Data streamer has been closed.

2019-01-17 Thread yangjiajun
Hello.

Please try this loop:
for(int i=0;i<10;i++){
System.out.println(i);
initData(); 
overwriteData();
}

And here is my config file:
example-default12.xml
<http://apache-ignite-users.70518.x6.nabble.com/file/t2059/example-default12.xml>
  

It is harder to reproduce this issue on 2.7 than on 2.6, but the exception
does appear.


ilya.kasnacheev wrote
> Hello!
> 
> I have ran your code several times and not encountered this problem. Can
> you make a new reproducer which will show it (semi)reliably?
> 
> Regards,
> -- 
> Ilya Kasnacheev
> 
> 
> Thu, 17 Jan 2019 at 05:18, yangjiajun <

> 1371549332@

>>:
> 
>> Hello.
>>
>> Thanks for reply.Unfortunately,I still get the exception after running my
>> test on 2.7  for several times.
>>
>>
>> ilya.kasnacheev wrote
>> > Hello!
>> >
>> > I can reproduce this problem, but then again, it does not seem to
>> > reproduce
>> > on 2.7. Have you considered upgrading?
>> >
>> > Regards,
>> > --
>> > Ilya Kasnacheev
>> >
>> >
>> > Wed, 16 Jan 2019 at 14:14, yangjiajun <
>>
>> > 1371549332@
>>
>> >>:
>> >
>> >> Hello.
>> >>
>> >> I do  test on a ignite 2.6 node with persistence enabled and get an
>> >> exception:
>> >>
>> >>  Exception in thread "main" java.sql.BatchUpdateException: class
>> >> org.apache.ignite.IgniteCheckedException: Data streamer has been
>> closed.
>> >> at
>> >>
>> >>
>> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.readResponses(JdbcThinConnection.java:1017)
>> >> at java.lang.Thread.run(Unknown Source)
>> >>
>> >> Here is my test code:
>> >>
>> >> import java.sql.Connection;
>> >> import java.sql.DriverManager;
>> >> import java.sql.PreparedStatement;
>> >> import java.sql.SQLException;
>> >> import java.util.Properties;
>> >>
>> >> /**
>> >>  * test insert data in streaming mode
>> >>  * */
>> >> public class InsertStreamingMode {
>> >>
>> >> private static Connection conn;
>> >>
>> >> public static void main(String[] args) throws Exception {
>> >>
>> >> initialize();
>> >>
>> >> close();
>> >> }
>> >>
>> >> public static void close() throws Exception {
>> >> conn.close();
>> >> }
>> >>
>> >> public static void initialize() throws Exception {
>> >>
>> Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
>> >> final String dbUrl =
>> >>
>> >>
>> "jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true";
>> >> final Properties props = new Properties();
>> >> conn = DriverManager.getConnection(dbUrl, props);
>> >> initData();
>> >> overwriteData();
>> >> }
>> >>
>> >> private static void initData() throws SQLException{
>> >>
>> >> long start=System.currentTimeMillis();
>> >> conn.prepareStatement("SET STREAMING ON
>> ALLOW_OVERWRITE
>> >> ON").execute();
>> >>
>> >> String sql="insert INTO  city1(id,name,name1)
>> >> VALUES(?,?,?)";
>> >> PreparedStatement ps=conn.prepareStatement(sql);
>> >> for(int i=0;i<160;i++){
>> >> String s1=String.valueOf(Math.random());
>> >> String s2=String.valueOf(Math.random());
>> >> ps.setInt(1, i);
>> >> ps.setString(2, s1);
>> >> ps.setString(3, s2);
>> >> ps.execute();
>> >> }
>> >> conn.prepareStatement("set streaming off").execute();
>> >> long end=System.currentTimeMillis();
>> >> System.out.println(end-start);
>> >> 

Re: Data streamer has been closed.

2019-01-17 Thread Ilya Kasnacheev
Hello!

I have run your code several times and not encountered this problem. Can
you make a new reproducer which will show it (semi)reliably?

Regards,
-- 
Ilya Kasnacheev


Thu, 17 Jan 2019 at 05:18, yangjiajun <1371549...@qq.com>:

> Hello.
>
> Thanks for reply.Unfortunately,I still get the exception after running my
> test on 2.7  for several times.
>
>
> ilya.kasnacheev wrote
> > Hello!
> >
> > I can reproduce this problem, but then again, it does not seem to
> > reproduce
> > on 2.7. Have you considered upgrading?
> >
> > Regards,
> > --
> > Ilya Kasnacheev
> >
> >
> > Wed, 16 Jan 2019 at 14:14, yangjiajun <
>
> > 1371549332@
>
> >>:
> >
> >> Hello.
> >>
> >> I do  test on a ignite 2.6 node with persistence enabled and get an
> >> exception:
> >>
> >>  Exception in thread "main" java.sql.BatchUpdateException: class
> >> org.apache.ignite.IgniteCheckedException: Data streamer has been closed.
> >> at
> >>
> >>
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.readResponses(JdbcThinConnection.java:1017)
> >> at java.lang.Thread.run(Unknown Source)
> >>
> >> Here is my test code:
> >>
> >> import java.sql.Connection;
> >> import java.sql.DriverManager;
> >> import java.sql.PreparedStatement;
> >> import java.sql.SQLException;
> >> import java.util.Properties;
> >>
> >> /**
> >>  * test insert data in streaming mode
> >>  * */
> >> public class InsertStreamingMode {
> >>
> >> private static Connection conn;
> >>
> >> public static void main(String[] args) throws Exception {
> >>
> >> initialize();
> >>
> >> close();
> >> }
> >>
> >> public static void close() throws Exception {
> >> conn.close();
> >> }
> >>
> >> public static void initialize() throws Exception {
> >> Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
> >> final String dbUrl =
> >>
> >>
> "jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true";
> >> final Properties props = new Properties();
> >> conn = DriverManager.getConnection(dbUrl, props);
> >> initData();
> >> overwriteData();
> >> }
> >>
> >> private static void initData() throws SQLException{
> >>
> >> long start=System.currentTimeMillis();
> >> conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE
> >> ON").execute();
> >>
> >> String sql="insert INTO  city1(id,name,name1)
> >> VALUES(?,?,?)";
> >> PreparedStatement ps=conn.prepareStatement(sql);
> >> for(int i=0;i<160;i++){
> >> String s1=String.valueOf(Math.random());
> >> String s2=String.valueOf(Math.random());
> >> ps.setInt(1, i);
> >> ps.setString(2, s1);
> >> ps.setString(3, s2);
> >> ps.execute();
> >> }
> >> conn.prepareStatement("set streaming off").execute();
> >> long end=System.currentTimeMillis();
> >> System.out.println(end-start);
> >> }
> >>
> >> private static void overwriteData() throws SQLException{
> >>
> >> long start=System.currentTimeMillis();
> >> conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE
> >> ON").execute();
> >>
> >> String sql="insert INTO  city1(id,name,name1)
> >> VALUES(?,?,?)";
> >> PreparedStatement ps=conn.prepareStatement(sql);
> >> for(int i=0;i<160;i++){
> >> String s1="test";
> >> String s2="test";
> >> ps.setInt(1, i);
> >> ps.setString(2, s1);
> >> ps.setString(3, s2);
> >> ps.execute();
> >> }
> >> conn.prepareStatement("set streaming off").execute();
> >> long end=System.currentTimeMillis();
> >> System.out.println(end-start);
> >> }
> >> }
> >>
> >> Here is the table:
> >> CREATE TABLE city1(id LONG PRIMARY KEY, name VARCHAR,name1 VARCHAR) WITH
> >> "template=replicated"
> >>
> >> The exception occurs on overwriteData method.
> >>
> >>
> >>
> >> --
> >> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Data streamer has been closed.

2019-01-16 Thread yangjiajun
Hello.

Thanks for the reply. Unfortunately, I still get the exception after running
my test on 2.7 several times.


ilya.kasnacheev wrote
> Hello!
> 
> I can reproduce this problem, but then again, it does not seem to
> reproduce
> on 2.7. Have you considered upgrading?
> 
> Regards,
> -- 
> Ilya Kasnacheev
> 
> 
> Wed, 16 Jan 2019 at 14:14, yangjiajun <

> 1371549332@

>>:
> 
>> Hello.
>>
>> I do  test on a ignite 2.6 node with persistence enabled and get an
>> exception:
>>
>>  Exception in thread "main" java.sql.BatchUpdateException: class
>> org.apache.ignite.IgniteCheckedException: Data streamer has been closed.
>> at
>>
>> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.readResponses(JdbcThinConnection.java:1017)
>> at java.lang.Thread.run(Unknown Source)
>>
>> Here is my test code:
>>
>> import java.sql.Connection;
>> import java.sql.DriverManager;
>> import java.sql.PreparedStatement;
>> import java.sql.SQLException;
>> import java.util.Properties;
>>
>> /**
>>  * test insert data in streaming mode
>>  * */
>> public class InsertStreamingMode {
>>
>> private static Connection conn;
>>
>> public static void main(String[] args) throws Exception {
>>
>> initialize();
>>
>> close();
>> }
>>
>> public static void close() throws Exception {
>> conn.close();
>> }
>>
>> public static void initialize() throws Exception {
>> Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
>> final String dbUrl =
>>
>> "jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true";
>> final Properties props = new Properties();
>> conn = DriverManager.getConnection(dbUrl, props);
>> initData();
>> overwriteData();
>> }
>>
>> private static void initData() throws SQLException{
>>
>> long start=System.currentTimeMillis();
>> conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE
>> ON").execute();
>>
>> String sql="insert INTO  city1(id,name,name1)
>> VALUES(?,?,?)";
>> PreparedStatement ps=conn.prepareStatement(sql);
>> for(int i=0;i<160;i++){
>> String s1=String.valueOf(Math.random());
>> String s2=String.valueOf(Math.random());
>> ps.setInt(1, i);
>> ps.setString(2, s1);
>> ps.setString(3, s2);
>> ps.execute();
>> }
>> conn.prepareStatement("set streaming off").execute();
>> long end=System.currentTimeMillis();
>> System.out.println(end-start);
>> }
>>
>> private static void overwriteData() throws SQLException{
>>
>> long start=System.currentTimeMillis();
>> conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE
>> ON").execute();
>>
>> String sql="insert INTO  city1(id,name,name1)
>> VALUES(?,?,?)";
>> PreparedStatement ps=conn.prepareStatement(sql);
>> for(int i=0;i<160;i++){
>> String s1="test";
>> String s2="test";
>> ps.setInt(1, i);
>> ps.setString(2, s1);
>> ps.setString(3, s2);
>> ps.execute();
>> }
>> conn.prepareStatement("set streaming off").execute();
>> long end=System.currentTimeMillis();
>> System.out.println(end-start);
>> }
>> }
>>
>> Here is the table:
>> CREATE TABLE city1(id LONG PRIMARY KEY, name VARCHAR,name1 VARCHAR) WITH
>> "template=replicated"
>>
>> The exception occurs on overwriteData method.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data streamer has been closed.

2019-01-16 Thread Ilya Kasnacheev
Hello!

I can reproduce this problem, but then again, it does not seem to reproduce
on 2.7. Have you considered upgrading?

Regards,
-- 
Ilya Kasnacheev


Wed, 16 Jan 2019 at 14:14, yangjiajun <1371549...@qq.com>:

> Hello.
>
> I do  test on a ignite 2.6 node with persistence enabled and get an
> exception:
>
>  Exception in thread "main" java.sql.BatchUpdateException: class
> org.apache.ignite.IgniteCheckedException: Data streamer has been closed.
> at
>
> org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.readResponses(JdbcThinConnection.java:1017)
> at java.lang.Thread.run(Unknown Source)
>
> Here is my test code:
>
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.PreparedStatement;
> import java.sql.SQLException;
> import java.util.Properties;
>
> /**
>  * test insert data in streaming mode
>  * */
> public class InsertStreamingMode {
>
> private static Connection conn;
>
> public static void main(String[] args) throws Exception {
>
> initialize();
>
> close();
> }
>
> public static void close() throws Exception {
> conn.close();
> }
>
> public static void initialize() throws Exception {
> Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
> final String dbUrl =
>
> "jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true";
> final Properties props = new Properties();
> conn = DriverManager.getConnection(dbUrl, props);
> initData();
> overwriteData();
> }
>
> private static void initData() throws SQLException{
>
> long start=System.currentTimeMillis();
> conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE
> ON").execute();
>
> String sql="insert INTO  city1(id,name,name1)
> VALUES(?,?,?)";
> PreparedStatement ps=conn.prepareStatement(sql);
> for(int i=0;i<160;i++){
> String s1=String.valueOf(Math.random());
> String s2=String.valueOf(Math.random());
> ps.setInt(1, i);
> ps.setString(2, s1);
> ps.setString(3, s2);
> ps.execute();
> }
> conn.prepareStatement("set streaming off").execute();
> long end=System.currentTimeMillis();
> System.out.println(end-start);
> }
>
> private static void overwriteData() throws SQLException{
>
> long start=System.currentTimeMillis();
> conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE
> ON").execute();
>
> String sql="insert INTO  city1(id,name,name1)
> VALUES(?,?,?)";
> PreparedStatement ps=conn.prepareStatement(sql);
> for(int i=0;i<160;i++){
> String s1="test";
> String s2="test";
> ps.setInt(1, i);
> ps.setString(2, s1);
> ps.setString(3, s2);
> ps.execute();
> }
> conn.prepareStatement("set streaming off").execute();
> long end=System.currentTimeMillis();
> System.out.println(end-start);
> }
> }
>
> Here is the table:
> CREATE TABLE city1(id LONG PRIMARY KEY, name VARCHAR,name1 VARCHAR) WITH
> "template=replicated"
>
> The exception occurs on overwriteData method.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Data streamer has been closed.

2019-01-16 Thread yangjiajun
Hello.

I ran a test on an Ignite 2.6 node with persistence enabled and got an
exception:

 Exception in thread "main" java.sql.BatchUpdateException: class
org.apache.ignite.IgniteCheckedException: Data streamer has been closed.
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection$StreamState.readResponses(JdbcThinConnection.java:1017)
at java.lang.Thread.run(Unknown Source)

Here is my test code:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Properties;

/**
 * test insert data in streaming mode
 * */
public class InsertStreamingMode {

    private static Connection conn;

    public static void main(String[] args) throws Exception {
        initialize();
        close();
    }

    public static void close() throws Exception {
        conn.close();
    }

    public static void initialize() throws Exception {
        Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
        final String dbUrl =
            "jdbc:ignite:thin://ip:port;lazy=true;skipReducerOnUpdate=true;replicatedOnly=true";
        final Properties props = new Properties();
        conn = DriverManager.getConnection(dbUrl, props);
        initData();
        overwriteData();
    }

    private static void initData() throws SQLException {
        long start = System.currentTimeMillis();
        conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE ON").execute();

        String sql = "insert INTO city1(id,name,name1) VALUES(?,?,?)";
        PreparedStatement ps = conn.prepareStatement(sql);
        for (int i = 0; i < 160; i++) {
            String s1 = String.valueOf(Math.random());
            String s2 = String.valueOf(Math.random());
            ps.setInt(1, i);
            ps.setString(2, s1);
            ps.setString(3, s2);
            ps.execute();
        }
        conn.prepareStatement("set streaming off").execute();
        long end = System.currentTimeMillis();
        System.out.println(end - start);
    }

    private static void overwriteData() throws SQLException {
        long start = System.currentTimeMillis();
        conn.prepareStatement("SET STREAMING ON ALLOW_OVERWRITE ON").execute();

        String sql = "insert INTO city1(id,name,name1) VALUES(?,?,?)";
        PreparedStatement ps = conn.prepareStatement(sql);
        for (int i = 0; i < 160; i++) {
            String s1 = "test";
            String s2 = "test";
            ps.setInt(1, i);
            ps.setString(2, s1);
            ps.setString(3, s2);
            ps.execute();
        }
        conn.prepareStatement("set streaming off").execute();
        long end = System.currentTimeMillis();
        System.out.println(end - start);
    }
}

Here is the table:
CREATE TABLE city1(id LONG PRIMARY KEY, name VARCHAR,name1 VARCHAR) WITH
"template=replicated"

The exception occurs on overwriteData method.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite Data Streamer Kafka version 2.0

2019-01-15 Thread Alexey Kukushkin
Hi Mahesh,

I do not think Kafka streamer uses any 2.x specific Kafka client APIs. From
what you say I think kafka-client-1.1 (which Ignite 2.7 uses) cannot connect
to a Kafka 2.x cluster. Can you manually replace kafka-client-1.1.jar with
kafka-client-2.0.jar on the Kafka streamer side and see if it fixes your
issue? BTW, what does your issue look like? What is the error message?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite Data Streamer Kafka version 2.0

2019-01-13 Thread Mahesh Nair
Hi,

I am trying to stream some data from a few kafka topics to an Ignite cache
using the ignite data streamer. I was using ignite version 2.6.0 and kafka
version 0.10 and that was working fine. Now we have upgraded the kafka
version to 2.0 and the streamer has stopped working.

I understand with the 2.6.0 version of ignite the KafkaStreamer uses the
deprecated ConsumerConfig which has since changed in version 2.7.0.

I see that the kafka.version for 2.7.0 is 1.1.1 and for the master branch
it is 2.0.

Has anyone tried using 2.6.0/2.7.0 with Kafka 2.0, as in building Ignite
with that version?


Mahesh


Re: Ignite Data streamer optimization

2019-01-08 Thread Gaurav Bajaj
I agree with Ilya. That may be the silver bullet :)

On Fri, Dec 28, 2018 at 3:16 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Maybe times have changed but it used to be that the best optimization was
> to feed DataStreamer's addData() from multiple threads in parallel.
>
> Can you try that?
>
> Regards,
> --
> Ilya Kasnacheev
>
>
Fri, 28 Dec 2018 at 14:32, ashishb888 :
>
>> I am using below settings:
>> allowOverwrite: false
>> nodeParallelOperations: 1
>> autoFlushFrequency: 10
>> perNodeBufferSize: 500
>>
>>
>> My records size is around 2000 bytes. And see the
>> "grid-data-loader-flusher"
>> thread stats as below:
>>
>> Thread  Count   Average Longest Duration
>> grid-data-loader-flusher-#100   38  4,737,793.579   30,427,862
>> 180,036,156
>>
>> What would be the best configurations for Data streamer?
>>
>> Thanks
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


Re: Ignite Data streamer optimization

2018-12-28 Thread Ilya Kasnacheev
Hello!

Maybe times have changed but it used to be that the best optimization was
to feed DataStreamer's addData() from multiple threads in parallel.

Can you try that?

Regards,
-- 
Ilya Kasnacheev


Fri, 28 Dec 2018 at 14:32, ashishb888 :

> I am using below settings:
> allowOverwrite: false
> nodeParallelOperations: 1
> autoFlushFrequency: 10
> perNodeBufferSize: 500
>
>
> My records size is around 2000 bytes. And see the
> "grid-data-loader-flusher"
> thread stats as below:
>
> Thread  Count   Average Longest Duration
> grid-data-loader-flusher-#100   38  4,737,793.579   30,427,862
> 180,036,156
>
> What would be the best configurations for Data streamer?
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
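
A minimal sketch of that multi-threaded approach. The cache name, thread
count and key range are placeholders; the streamer settings just illustrate
the knobs discussed in this thread:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class ParallelStreamingSketch {
    public static void main(String[] args) throws Exception {
        Ignite ignite = Ignition.start();
        ignite.getOrCreateCache("myCache");

        int threads = 8;
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
            streamer.allowOverwrite(false);  // fastest mode, for initial loading only
            streamer.perNodeBufferSize(512); // measured in entries, not bytes

            // IgniteDataStreamer is thread-safe: several producer threads can
            // share one instance and call addData() in parallel.
            for (int t = 0; t < threads; t++) {
                final long offset = t;
                pool.submit(() -> {
                    for (long i = offset; i < 1_000_000; i += threads)
                        streamer.addData(i, "value-" + i);
                });
            }

            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.HOURS);
        } // close() flushes whatever is still buffered
    }
}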


Re: Ignite Data streamer optimization

2018-12-28 Thread Denis Mekhanikov
To achieve the best data streaming performance, you should
aim at the highest utilization of resources on the data nodes.
There is no silver bullet for data streamer tuning, unfortunately.
Try changing parameters and see how they affect the utilization and
overall performance.

For me default data streamer parameters usually work fine.
Your value of perNodeBufferSize looks too big. It's measured in records,
not bytes.
By default it's 512.

Denis

Fri, 28 Dec 2018 at 14:32, ashishb888 :

> I am using below settings:
> allowOverwrite: false
> nodeParallelOperations: 1
> autoFlushFrequency: 10
> perNodeBufferSize: 500
>
>
> My records size is around 2000 bytes. And see the
> "grid-data-loader-flusher"
> thread stats as below:
>
> Thread  Count   Average Longest Duration
> grid-data-loader-flusher-#100   38  4,737,793.579   30,427,862
> 180,036,156
>
> What would be the best configurations for Data streamer?
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Ignite Data streamer optimization

2018-12-28 Thread ashishb888
I am using below settings:
allowOverwrite: false
nodeParallelOperations: 1
autoFlushFrequency: 10
perNodeBufferSize: 500


My records size is around 2000 bytes. And see the "grid-data-loader-flusher"
thread stats as below:

Thread                          Count   Average         Longest      Duration
grid-data-loader-flusher-#100   38      4,737,793.579   30,427,862   180,036,156

What would be the best configurations for Data streamer?

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: data streamer - failed to update keys (GUID)

2018-10-15 Thread wt
Please ignore this thread - I have found the problem: it was some old
residual test code that was being called instead. Correcting that has
resolved the issue.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: data streamer - failed to update keys (GUID)

2018-10-12 Thread wt
There is an error “Failed to update index, incorrect key class”.

Any chance you’ve changed an integer field to a string one, or something
like that?

 using (var ldr = igniteclient.GetDataStreamer(TableName))  = works

 using (var ldr = igniteclient.GetDataStreamer(TableName))
= fails with the shown error

 using (var ldr = igniteclient.GetDataStreamer(TableName)) =
fails with the same error
fails witj same error


I build the classes up dynamically, and it is the exact same code that
generates all the table classes. When the data loader changes between int
and other types I get this error. To give you an idea, here is the test
process:


1) stop the server and clear all data in the work folder
2) modify the source DB view to change the key data type (I have tested int,
string, guid)
3) start the server
4) a tool I developed dynamically builds classes based on the source data
structure in SQL Server (this code doesn't change)
5) update the load code as shown above and map the record that is the key in
the source data to the key in the data streamer
6) run the data load - it only fails when the underlying DB connection
closes, which I assume results in a flush.

When I repeat this process for int it works, but for anything else I get that
error. It looks to me as though the data streamer in .NET only wants an int.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: data streamer - failed to update keys (GUID)

2018-10-12 Thread Stanislav Lukyanov
There is an error “Failed to update index, incorrect key class”.
Any chance you’ve changed an integer field to a string one, or something like 
that?
Changing field types is generally not supported.

Stan

From: wt
Sent: 12 October 2018 14:06
To: user@ignite.apache.org
Subject: RE: data streamer - failed to update keys (GUID)

I get the same error when trying the key as a string. here are some screen
shots of the code, error, and table structure

table_stucture.png
<http://apache-ignite-users.70518.x6.nabble.com/file/t1892/table_stucture.png>  

code.png
<http://apache-ignite-users.70518.x6.nabble.com/file/t1892/code.png>  

error.png
<http://apache-ignite-users.70518.x6.nabble.com/file/t1892/error.png>  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: data streamer - failed to update keys (GUID)

2018-10-12 Thread wt
Not sure if it is related, but the error isn't pointing to it being an issue.
Interop requires a UTC date, and from SQL Server we are passing in a
datetimeoffset; Ignite sees it as an Object and not a Timestamp.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: data streamer - failed to update keys (GUID)

2018-10-12 Thread wt
I get the same error when trying the key as a string. here are some screen
shots of the code, error, and table structure

table_stucture.png
<http://apache-ignite-users.70518.x6.nabble.com/file/t1892/table_stucture.png>

code.png
<http://apache-ignite-users.70518.x6.nabble.com/file/t1892/code.png>

error.png
<http://apache-ignite-users.70518.x6.nabble.com/file/t1892/error.png>



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: data streamer - failed to update keys (GUID)

2018-10-12 Thread Stanislav Lukyanov
Yes, there is a direct support for UUID.
If you don’t know where the error is coming from, please share the code and the 
logs.

Stan

From: wt
Sent: 12 October 2018 13:00
To: user@ignite.apache.org
Subject: data streamer - failed to update keys (GUID)

hi

I just wanted to check something. I have a table that has a guid key. When i
try insert into this table at the end of the insert when the data reader
connection closes i get this error (i have included a select that shows
there is just 1 record with that key). Can Ignite handle guid keys?

guid.png
<http://apache-ignite-users.70518.x6.nabble.com/file/t1892/guid.png>  



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
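
For illustration, a minimal sketch of streaming entries keyed by
java.util.UUID from Java (a .NET System.Guid maps to java.util.UUID in the
binary protocol). The cache name and value type are placeholders:

import java.util.UUID;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class UuidKeySketch {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        ignite.getOrCreateCache("uuidCache");

        try (IgniteDataStreamer<UUID, String> streamer = ignite.dataStreamer("uuidCache")) {
            for (int i = 0; i < 1000; i++)
                streamer.addData(UUID.randomUUID(), "value-" + i);
        } // close() flushes the remaining buffered entries
    }
}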



data streamer - failed to update keys (GUID)

2018-10-12 Thread wt
hi

I just wanted to check something. I have a table that has a guid key. When I
try to insert into this table, at the end of the insert when the data reader
connection closes I get this error (I have included a select that shows
there is just 1 record with that key). Can Ignite handle guid keys?

guid.png
<http://apache-ignite-users.70518.x6.nabble.com/file/t1892/guid.png>



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Transaction Throughput in Data Streamer

2018-08-13 Thread dkarachentsev
Hi,

It looks like most of the time the transactions in the receiver are waiting
for locks. Any lock adds serialization to parallel code. And in your case I
don't think it's possible to tune throughput with settings, because ten
transactions could be waiting while one finishes. You need to change the
algorithm.

The most effective way would be to stream data with DataStreamer with
allowOverwrite disabled and without any transactions. You need to stream data
independently if possible, and avoid serial code and non-local cache
reads/writes.

Thanks!
-Dmitry



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Transaction Throughput in Data Streamer

2018-08-09 Thread Dave Harvey
We are trying to load and transform a large amount of data using the
IgniteDataStreamer with a custom StreamReceiver. We'd like this to run
a lot faster, and we cannot find anything that is close to saturated,
except the data-streamer threads' queues. This is 2.5, with Ignite
persistence, and enough memory to fit all the data.

I was looking to turn some knob, like the size of a thread pool, to
increase the throughput, but I can't find any bottleneck.   If I turn up
the demand, the throughput does not increase, and the per transaction
latency increases. This would indicate a bottleneck somewhere.

The application has loaded about 900 million records of type A at this
point, and now we would like to load 2.5B records of type B. Records of
type A have a key and a unique ID. Records of type B have a different
key type, plus a foreign field that is A's unique ID.   The key we use in
ignite for record B is (B's key, A's key as affinity key). We also
maintain caches to map A's ID back to its key, and something similar for B.

For each record the stream receiver starts a pessimistic transaction; we
will end up with 1 local get and 2-3 gets with no affinity (i.e. 50% local
on two nodes), and 2-4 puts, before we commit the transaction (FULL_SYNC
caches). There are several fields with indices.

I've simplified this down to two nodes, with 4 cache caches each with one
backup, all with WAL LOGGING disabled.  The two nodes have 256GB of memory
and 32 CPUs and local SSDs that are unmirrored (i3.8xlarge on AWS). The
network is supposed to be 10 Gb.   The dataset is basically in memory, and
with the WAL disabled there is very little I/O.

Disabling WAL logging only pushed the transaction rate from about 1750
to about 2000 TPS.

The CPU doesn't get above 20%, the network bandwidth is  only about 6MB/s
from each node and only about 1500 packets per second per node.   The read
wait time on the SSDs  is only enough to lock up a single thread, and there
are no writes except during checkpoints.

When I look at thread dumps, there is no obvious bottleneck except for the
Datastreamer threads.  Doubling the number of DataStreamer threads from
current 64 to 128 has no effect on throughput.

Looking via MXbeans, where I have a fix for IGNITE-7616, the DataStreamer
pool is saturated.   The "Striped Executor" is not.  With the WAL enabled,
the "StripedExecutor" shows some bursty load; when disabled, its active
thread and queue counts stay low.  The work is distributed across the
StripedExecutor threads.   The non-DataStreamer thread pools all frequently
go to 0 active threads, while the DataStreamer pool stays backed up.

With the WAL on with 64 DataStreamer threads, there tended to be able 53
"Owner transactions" on the node.

A snapshot of transactions outstanding follows.

Is there another place to look?   The DS threads tend to be waiting on
futures, and the other threads are consistent with the relatively

Thanks
-DH

f0a49c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [dae1a619-4886-4001-8ac5-6651339c67b7
[ip-172-17-0-1.ec2.internal, ip-10-32-98-209.ec2.internal]], DURATION:
104

33549c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [6d3f06d6-3346-4ca7-8d5d-b5d8af2ad12e
[ip-172-17-0-1.ec2.internal, ip-10-32-97-243.ec2.internal],
dae1a619-4886-4001-8ac5-6651339c67b7 [ip-172-17-0-1.ec2.internal,
ip-10-32-98-209.ec2.internal]], DURATION: 134

b0949c53561--08a9-7ea9--0002=ACTIVE, NEAR, DURATION: 114

2ca49c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [6d3f06d6-3346-4ca7-8d5d-b5d8af2ad12e
[ip-172-17-0-1.ec2.internal, ip-10-32-97-243.ec2.internal]], DURATION:
104

96349c53561--08a9-7ea9--0002=PREPARED, NEAR, DURATION: 134

9ca49c53561--08a9-7ea9--0002=ACTIVE, NEAR, DURATION: 104

28f39c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [dae1a619-4886-4001-8ac5-6651339c67b7
[ip-172-17-0-1.ec2.internal, ip-10-32-98-209.ec2.internal]], DURATION:
215

a2649c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [dae1a619-4886-4001-8ac5-6651339c67b7
[ip-172-17-0-1.ec2.internal, ip-10-32-98-209.ec2.internal]], DURATION:
124

e7849c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [6d3f06d6-3346-4ca7-8d5d-b5d8af2ad12e
[ip-172-17-0-1.ec2.internal, ip-10-32-97-243.ec2.internal]], DURATION:
114

06849c53561--08a9-7ea9--0002=ACTIVE, NEAR, DURATION: 114

89849c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [6d3f06d6-3346-4ca7-8d5d-b5d8af2ad12e
[ip-172-17-0-1.ec2.internal, ip-10-32-97-243.ec2.internal]], DURATION:
114

35549c53561--08a9-7ea9--0002=ACTIVE, NEAR, DURATION: 134

f0449c53561--08a9-7ea9--0002=PREPARING, NEAR,
PRIMARY: [dae1a619-4886-4001-8ac5-6651339c67b7
[ip-172-17-0-1.ec2.internal, ip-10-32-98-209.ec2.internal]], DURATION:

Re: c++ data streamer api

2018-08-01 Thread Igor Sapego
Hi,

1) Yes, it is really missing.
2) It is a "we haven’t got around to it yet" case. There are not so many C++
contributors.

Best Regards,
Igor


On Wed, Aug 1, 2018 at 12:46 PM Floris Van Nee 
wrote:

> Hi all,
>
>
>
> I’m looking into using Ignite and noticed the C++ API seems to be missing
> functionality for data streaming (Java IgniteDataStreamer together with the
> StreamReceiver/StreamVisitor/StreamTransformer classes). Without these, I
> assume it is not going to be easy to stream large amounts of data with an
> acceptable data rate (because it doesn’t batch inserts etc.)
>
> 1)  Are they indeed not available or am I missing something in the
> docs?
>
> 2)  If they are missing – are there plans to add these and how much
> effort is estimated to implement such API? Is it just a matter of “we
> haven’t got around to it yet, but it’s similar to what’s already there in
> the C++ API” or is there some limitation that is currently blocking
> implementation of the data streamer in C++?
>
>
>
> -Floris
>
>
>


c++ data streamer api

2018-08-01 Thread Floris Van Nee
Hi all,

I'm looking into using Ignite and noticed the C++ API seems to be missing 
functionality for data streaming (Java IgniteDataStreamer together with the 
StreamReceiver/StreamVisitor/StreamTransformer classes). Without these, I 
assume it is not going to be easy to stream large amounts of data with an 
acceptable data rate (because it doesn't batch inserts etc.)

1)  Are they indeed not available or am I missing something in the docs?

2)  If they are missing - are there plans to add these and how much effort 
is estimated to implement such API? Is it just a matter of "we haven't got 
around to it yet, but it's similar to what's already there in the C++ API" or 
is there some limitation that is currently blocking implementation of the data 
streamer in C++?

-Floris



Re: Ignite Data Streamer Hung after a period

2018-04-13 Thread dkarachentsev
Hi,

Blocked threads show only the fact that there are no tasks to process in the
pool. Do you use persistence and/or indexing? Could you please attach your
configs and logs from all nodes? Please take a few sequential thread dumps
when throughput is low.

Thanks!
-Dmitry 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data Streamer not flushing data to cache

2018-03-31 Thread Andrey Kuznetsov
Indeed, the only reliable way is flush/close. Nonzero automatic flush
frequency doesn't provide the same guarantee.
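
For reference, a minimal sketch of that explicit flush (the cache name
"mycache" follows the example later in this thread):

IgniteDataStreamer<Long, Long> streamer = ignite.dataStreamer("mycache");

try {
    for (long i = 1; i < 100; i++)
        streamer.addData(i, i);

    // Blocks until everything buffered so far has been written to the cache.
    streamer.flush();
}
finally {
    // close(false) also flushes remaining data; close(true) would discard it.
    streamer.close(false);
}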

2018-03-31 21:11 GMT+03:00 begineer :

> One more query: would it never flush the data if nothing more is added to
> the streamer and the current size is less than the buffer size?
> What is the default time? I can see only the flush frequency.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
  Andrey Kuznetsov.


Re: Data Streamer not flushing data to cache

2018-03-31 Thread begineer
One more query: would it never flush the data if nothing more is added to
the streamer and the current size is less than the buffer size?
What is the default time? I can see only the flush frequency.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data Streamer not flushing data to cache

2018-03-31 Thread begineer
Thanks for the reply... It works after invoking flush().



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Data Streamer not flushing data to cache

2018-03-31 Thread David Harvey
By default, DataStreamer will only send a full buffer, unless you
explicitly flush or close it, or as suggested, implicitly close it.   You
can set a flush timer also.

The last time I looked (2.3), the flush timer is implemented to flush
periodically, and this is unaffected by when data was last added to the
stream, i.e., if you want to ensure that it has stored all of the records
older than 5 seconds, setting the flush timer to 5 seconds will cause it to
flush every 5 seconds, even when all of the outstanding data was added in
the last 10ms.  So the flush timer creates an (unnecessary) performance
anomaly under heavy load.

Note that while the DataStreamer is thread-safe, if you have a
multi-threaded producer it works much better to have a DataStreamer per
producer thread, especially if there are explicit or timer-driven flushes.
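
A minimal sketch of that per-producer-thread pattern (the pool size, cache
name, and the nextBatch()/Event helpers are hypothetical):

ExecutorService workers = Executors.newFixedThreadPool(4);

for (int w = 0; w < 4; w++) {
    workers.submit(() -> {
        // One streamer per producer thread: a flush on this thread
        // never stalls the other producers.
        try (IgniteDataStreamer<Long, byte[]> streamer = ignite.dataStreamer("mycache")) {
            streamer.autoFlushFrequency(5_000); // timer-driven flush, every 5 s

            for (Event e : nextBatch()) // nextBatch() is a hypothetical source
                streamer.addData(e.key(), e.payload());
        } // implicit close() flushes whatever is still buffered
    });
}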


-DH


On Sat, Mar 31, 2018 at 5:38 AM, begineer <redni...@gmail.com> wrote:

> Hi, This must be something very simple. I am adding 100 items to a data
> streamer, but it is not flushing items to the cache. Is there a setting
> that enables it? Cache size is zero. Am I doing something wrong?
>
> public class DataStreamerExample {
> public static void main(String[] args) throws InterruptedException {
> Ignite ignite = Ignition.start("examples/
> config/example-ignite.xml");
> CacheConfiguration<Long, Long> config = new
> CacheConfiguration<>("mycache");
> IgniteCache<Long, Long> cache = ignite.getOrCreateCache(config);
> IgniteDataStreamer<Long, Long> streamer =
> ignite.dataStreamer("mycache");
> LongStream.range(1, 100).forEach( l->{
> System.out.println("Adding to streamer "+ l);
> streamer.addData(l, l);
> });
> System.out.println(streamer.perNodeBufferSize());
> System.out.println("Cache size : "+ cache.size(CachePeekMode.ALL))
> ;
> cache.query(new ScanQuery<>()).getAll().stream().forEach(entry->{
> System.out.println("cache Entry: " + entry.getKey()+" "+
> entry.getValue());
> });
> }
> }
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



Re: Data Streamer not flushing data to cache

2018-03-31 Thread Andrey Kuznetsov
Hello!

The simplest way to ensure your data has got to the cache is to use
IgniteDataStreamer in a try-with-resources block. In some rare scenarios it
can make sense to call {{flush()}} or {{close()}} on the streamer instance
directly.
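
A minimal sketch of the try-with-resources variant, using the example from
this thread:

try (IgniteDataStreamer<Long, Long> streamer = ignite.dataStreamer("mycache")) {
    LongStream.range(1, 100).forEach(l -> streamer.addData(l, l));
} // implicit close() flushes all buffered entries

System.out.println("Cache size : " + cache.size(CachePeekMode.ALL)); // now 99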

2018-03-31 12:38 GMT+03:00 begineer <redni...@gmail.com>:

> Hi, This must be something very simple. I am adding 100 items to a data
> streamer, but it is not flushing items to the cache. Is there a setting
> that enables it? Cache size is zero. Am I doing something wrong?
>
> public class DataStreamerExample {
> public static void main(String[] args) throws InterruptedException {
> Ignite ignite = Ignition.start("examples/
> config/example-ignite.xml");
> CacheConfiguration<Long, Long> config = new
> CacheConfiguration<>("mycache");
> IgniteCache<Long, Long> cache = ignite.getOrCreateCache(config);
> IgniteDataStreamer<Long, Long> streamer =
> ignite.dataStreamer("mycache");
> LongStream.range(1, 100).forEach( l->{
> System.out.println("Adding to streamer "+ l);
> streamer.addData(l, l);
> });
> System.out.println(streamer.perNodeBufferSize());
> System.out.println("Cache size : "+ cache.size(CachePeekMode.ALL))
> ;
> cache.query(new ScanQuery<>()).getAll().stream().forEach(entry->{
> System.out.println("cache Entry: " + entry.getKey()+" "+
> entry.getValue());
> });
> }
> }
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
  Andrey Kuznetsov.


Data Streamer not flushing data to cache

2018-03-31 Thread begineer
Hi, This must be something very simple. I am adding 100 items to a data
streamer, but it is not flushing items to the cache. Is there a setting that
enables it? Cache size is zero. Am I doing something wrong?

public class DataStreamerExample {
    public static void main(String[] args) throws InterruptedException {
        Ignite ignite = Ignition.start("examples/config/example-ignite.xml");
        CacheConfiguration<Long, Long> config = new CacheConfiguration<>("mycache");
        IgniteCache<Long, Long> cache = ignite.getOrCreateCache(config);
        IgniteDataStreamer<Long, Long> streamer = ignite.dataStreamer("mycache");
        LongStream.range(1, 100).forEach(l -> {
            System.out.println("Adding to streamer " + l);
            streamer.addData(l, l);
        });
        System.out.println(streamer.perNodeBufferSize());
        System.out.println("Cache size : " + cache.size(CachePeekMode.ALL));
        cache.query(new ScanQuery<>()).getAll().stream().forEach(entry -> {
            System.out.println("cache Entry: " + entry.getKey() + " " + entry.getValue());
        });
    }
}




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to achieve writethrough with ignite data streamer

2018-03-25 Thread vkulichenko
Hm... Not sure what happened exactly in your case, but a cache store is never
deployed via peer class loading. It's required that the class be explicitly
deployed on every node prior to startup.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to achieve writethrough with ignite data streamer

2018-03-23 Thread vbm
Here the issue was with the peerClassLoading flag.

I had not enabled it on the server. The cache store factory class was not
getting loaded as per the server logs. 
I enabled it and now write-through is achieved.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to achieve writethrough with ignite data streamer

2018-03-21 Thread ezhuravlev
If the skipStore flag is enabled, then data won't be propagated to the store.

Are you sure that you have the writeThrough flag enabled?

Additionally, you need to check your CacheStore implementation; if you don't
have any ideas, you can post it here and the community will check it.
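
For illustration, a minimal sketch of the kind of CacheStore skeleton to
check (the class name and value types follow the code later in this thread;
the store internals are stubbed out):

import javax.cache.Cache;
import org.apache.ignite.cache.store.CacheStoreAdapter;

public class MyCacheStoreAdapter extends CacheStoreAdapter<Long, MyOrders> {
    @Override public MyOrders load(Long key) {
        // Read-through path: fetch the row for 'key' from the external
        // store, or return null if it is absent.
        return null;
    }

    @Override public void write(Cache.Entry<? extends Long, ? extends MyOrders> entry) {
        // Write-through path: called for each entry when writeThrough(true)
        // is set and skipStore is not in effect.
        System.out.println("store write: " + entry.getKey());
    }

    @Override public void delete(Object key) {
        // Remove the row backing 'key' from the external store.
    }
}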

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to achieve writethrough with ignite data streamer

2018-03-12 Thread vbm
I have set the overwrite flag to true:
stmr.allowOverwrite(true);

What is the significance of the skipStore flag?
What is the flow for an entry from setMultipleTupleExtractor to reach
the cache?

I am thinking it should go through the write method, with which it gets put
to the cache. I have overridden the write method in the cache store adapter
as below:


myCacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStoreAdapter.class));



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to achieve writethrough with ignite data streamer

2018-03-12 Thread Evgenii Zhuravlev
Hi,

You need to set IgniteDataStreamer.allowOverwrite(true). As the javadoc
says: "Note that when this flag is {@code false}, updates will not be
propagated to the cache store (i.e. {@link #skipStore()} flag will be set to
{@code true} implicitly)."
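
Spelled out as a minimal sketch (the types and the key/order values are
placeholders taken from the code quoted below):

try (IgniteDataStreamer<Long, MyOrders> stmr = ignite.dataStreamer("myCache")) {
    // The default is allowOverwrite(false), which implicitly sets
    // skipStore(true) and bypasses the CacheStore entirely.
    stmr.allowOverwrite(true); // updates now reach the configured store

    stmr.addData(key, order);  // key/order are placeholders from the thread
}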


Evgenii


2018-03-12 11:34 GMT+03:00 vbm :

> Hi,
>
> I am using ignite datastreamer to pull the data from a kafka topic.
> The data is getting loaded to the Ignite cache, but it is not getting
> written to the 3rd-party persistence (MySQL DB).
>
> I have set the cacheStoreFactory to my CustomClass which has extended
> CacheStoreAdapter class.
>
> Code Snippet:
>
>
> myCacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStoreAdapter.class));
> // Set as write-through cache
> myCacheCfg.setWriteThrough(true);
>
> try (IgniteDataStreamer<Long, MyOrders> stmr = ignite.dataStreamer("myCache")) {
>     // allow overwriting cache data
>     stmr.allowOverwrite(true);
>
>     kafkaStreamer = new KafkaStreamer<>();
>     kafkaStreamer.setIgnite(ignite);
>     kafkaStreamer.setStreamer(stmr);
>
>     // set the topic
>     kafkaStreamer.setTopic(topic);
>
>     // set the number of threads to process Kafka streams
>     kafkaStreamer.setThreads(1);
>
>     Properties settings = new Properties();
>
>     kafkaStreamer.setConsumerConfig(new ConsumerConfig(settings));
>     kafkaStreamer.setMultipleTupleExtractor(
>         new StreamMultipleTupleExtractor<MessageAndMetadata<byte[], byte[]>, Long, MyOrders>() {
>             @Override public Map<Long, MyOrders> extract(MessageAndMetadata<byte[], byte[]> msg) {
>                 Map<Long, MyOrders> entries = new HashMap<>();
>
>                 try {
>                     // Converting the message received to my requirement
>                     // and adding it to the map
>                     entries.put(key, order);
>                 }
>                 catch (Exception ex) {
>                     ex.printStackTrace();
>                 }
>
>                 return entries;
>             }
>         });
>
> With this code, I am able to get the data into the cache. But the
> write-through behaviour is not getting triggered. The code in my
> cache store (write, load) is not getting called.
>
>
> Can anyone help me with this and let me know how to achieve the
> write-through behaviour.
>
>
> Regards,
> Vishwas
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


How to achieve writethrough with ignite data streamer

2018-03-12 Thread vbm
Hi,

I am using the Ignite data streamer to pull data from a Kafka topic.
The data is getting loaded to the Ignite cache, but it is not getting
written to the 3rd-party persistence (MySQL DB).

I have set the cacheStoreFactory to my CustomClass which has extended
CacheStoreAdapter class.

Code Snippet:

   
myCacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStoreAdapter.class));
// Set as write-through cache
myCacheCfg.setWriteThrough(true);

try (IgniteDataStreamer<Long, MyOrders> stmr = ignite.dataStreamer("myCache")) {
    // allow overwriting cache data
    stmr.allowOverwrite(true);

    kafkaStreamer = new KafkaStreamer<>();
    kafkaStreamer.setIgnite(ignite);
    kafkaStreamer.setStreamer(stmr);

    // set the topic
    kafkaStreamer.setTopic(topic);

    // set the number of threads to process Kafka streams
    kafkaStreamer.setThreads(1);

    Properties settings = new Properties();

    kafkaStreamer.setConsumerConfig(new ConsumerConfig(settings));
    kafkaStreamer.setMultipleTupleExtractor(
        new StreamMultipleTupleExtractor<MessageAndMetadata<byte[], byte[]>, Long, MyOrders>() {
            @Override public Map<Long, MyOrders> extract(MessageAndMetadata<byte[], byte[]> msg) {
                Map<Long, MyOrders> entries = new HashMap<>();

                try {
                    // Converting the message received to my requirement
                    // and adding it to the map
                    entries.put(key, order);
                }
                catch (Exception ex) {
                    ex.printStackTrace();
                }

                return entries;
            }
        });

With this code, I am able to get the data into the cache. But the
write-through behaviour is not getting triggered. The code in my cache store
(write, load) is not getting called.

Can anyone help me with this and let me know how to achieve the
write-through behaviour.


Regards,
Vishwas



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: 20 minute 12x throughput drop using data streamer and Ignite persistence

2018-02-20 Thread Dave Harvey
I've started reproducing this issue with more statistics. I have not yet
reached the worst-performance point, but some things are starting to become
clearer:

The DataStreamer hashes the affinity key to a partition, then maps the
partition to a node, and fills a single buffer at a time for that node. A
DataStreamer thread on the node therefore gets a buffer's worth of requests
grouped by the time of the addData() call, with no per-thread grouping by
affinity key (as I had originally assumed).
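
That key-to-partition-to-node mapping can be observed directly through the
public Affinity API; a sketch (the cache name and key are placeholders):

Affinity<Object> aff = ignite.affinity("myCache");

int part = aff.partition(affinityKey);              // hash(affinity key) -> partition
ClusterNode primary = aff.mapPartitionToNode(part); // partition -> primary node

System.out.println("key " + affinityKey + " -> partition " + part
    + " -> node " + primary.id());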

The test I was running was using a large amount of data where the average
number of keys for each unique affinity key is 3, with some outliers up to
50K.   One of the caches being updated in the optimistic transaction in the
StreamReceiver contains an object whose key is the affinity key, and whose
contents are the set of keys that have that affinity key. We expect some
temporal locality for objects with the same affinity key.

We had a number of worker threads on a client node, but only one data
streamer, where we increased the buffer count.   Once we understood how the
data streamer actually worked, we made each worker have its own
DataStreamer.   This way, each worker could issue a flush, without affecting
the other workers.   That, in turn, allowed us to use smaller batches per
worker, decreasing the odds of temporal locality.

So it seems like we would get updates for the same affinity key on different
data streamer threads, and they could conflict updating the common record.  
The more keys per affinity key the more likely a conflict, and the more data
would need to be saved.   A flush operation could stall multiple workers,
and the flush operation might be dependent on requests that are conflicting.

We chose to use OPTIMISTIC transactions because of their lack-of-deadlock
characteristics, rather than because we thought there would be high
contention.  I do think this behavior suggests something sub-optimal
about the OPTIMISTIC lock implementation, because I see a dramatic decrease
in throughput, but not a dramatic increase in transaction restarts. 
"In OPTIMISTIC transactions, entry locks are acquired on primary nodes
during the prepare step,"  does not say anything about  the order that locks
are acquired.  Sorting the locks so there is a consistent order would avoid
deadlocks.   
If there are no deadlocks, then there could be n-1 restarts of the
transaction for each commit, where n is the number of data streamer threads.
This is the old "thundering herd" problem, which can easily be made order n
by only allowing one of the waiting threads to proceed at a time.
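
One way to apply that lock-ordering idea on the application side (since the
lock order inside OPTIMISTIC prepare is not under application control) is to
take locks in a consistent, sorted key order with a PESSIMISTIC transaction;
a sketch, where batchKeys and cache are placeholders:

List<Long> keys = new ArrayList<>(batchKeys);
Collections.sort(keys); // one global lock order -> deadlock is impossible

try (Transaction tx = ignite.transactions().txStart(
        TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
    for (Long k : keys)
        cache.get(k); // PESSIMISTIC REPEATABLE_READ acquires the lock on k

    // ... apply the updates ...

    tx.commit();
}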
 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 20 minute 12x throughput drop using data streamer and Ignite persistence

2018-02-13 Thread Dave Harvey
I made improvements to the statistics collection in the stream receiver, and
I'm finding an excessive number of retries of the optimistic transactions we
are using. I will dig into that and retry.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 20 minute 12x throughput drop using data streamer and Ignite persistence

2018-02-13 Thread David Harvey
We are pulling in a large number of records from an RDB, and reorganizing
the data so that our analytics will be much faster.

I'm running Sumo, and have looked at all of the log files from all the
nodes, and the only things are checkpoints and GC logs.  The checkpoints
are fast, and occur at a lower rate during the slowdowns.   GC is not a
problem at all.
(I see a project in the future where the number of messages/bytes per TOPIC
are counted.)

The average packet size goes to 6KB from a normal ~ 400 bytes.   I'm going
to add  a rebalance throttle.


On Tue, Feb 13, 2018 at 5:41 AM, slava.koptilin 
wrote:

> Hi Dave,
>
> Could you please provide more details about that use case.
> Is it possible to reproduce the issue and gather JFR and log files from all
> participating nodes?
> It would be very helpful in order to understand the cause of that behavior.
>
> Thanks!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



Re: 20 minute 12x throughput drop using data streamer and Ignite persistence

2018-02-13 Thread slava.koptilin
Hi Dave,

Is it possible to share a code snippet which illustrates the DataStreamer
settings and the stream receiver code?

Best regards,
Slava.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 20 minute 12x throughput drop using data streamer and Ignite persistence

2018-02-13 Thread slava.koptilin
Hi Dave,

Could you please provide more details about that use case.
Is it possible to reproduce the issue and gather JFR and log files from all
participating nodes?
It would be very helpful in order to understand the cause of that behavior.

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: java.lang.IllegalStateException: Data streamer has been closed.

2017-07-20 Thread zbyszek
Thank you for confirming Valentin!
zb

  From: vkulichenko [via Apache Ignite Users] 
<ml+s70518n15190...@n6.nabble.com>
 To: zbyszek <zbab...@yahoo.com> 
 Sent: Thursday, July 20, 2017 8:13 PM
 Subject: Re: java.lang.IllegalStateException: Data streamer has been closed.
   
 Hi Zbyszek,

You're subscribed now, all good.

-Val 
 

   



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/java-lang-IllegalStateException-Data-streamer-has-been-closed-tp15059p15193.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: java.lang.IllegalStateException: Data streamer has been closed.

2017-07-20 Thread vkulichenko
Hi Zbyszek,

You're subscribed now, all good.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/java-lang-IllegalStateException-Data-streamer-has-been-closed-tp15059p15190.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: java.lang.IllegalStateException: Data streamer has been closed.

2017-07-20 Thread zbyszek
Hi,
I believe I have subscribed properly... clicked the 'Subscribe' button on
the web page and sent a separate email, followed by executing the received
instructions. Is there anything else I can do?
Thanx, Zbyszek
Sent from Yahoo Mail for iPhone
On Tuesday, July 18, 2017, 11:37 PM, vkulichenko [via Apache Ignite Users] 
<ml+s70518n15086...@n6.nabble.com> wrote:

 Hi Zbyszek,

Please properly subscribe to the mailing list so that the community can receive 
email notifications for your messages. To subscribe, send empty email to 
user-subscr...@ignite.apache.org and follow simple instructions in the reply.


zbyszek wrote:

Hello All,

I was wondering if anybody has encountered the following issue.
I have 2 servers (Ignite 2.0.1) in the cluster. Each of these 2 servers loads 
different caches (with different names) in LOCAL mode using DataStreamer.
I am starting these servers simultaneously (say the second server is started 1 
sec. after the first one). Very often, say with a 25% chance,
the first server's addData(DataStreamerImpl.java:665) call fails with the error 
"java.lang.IllegalStateException: Data streamer has been closed". Looking at 
the log one can see that this error is always 
preceded with the error "org.apache.ignite.IgniteCheckedException: Failed to
finish operation (too many remaps): 32".
I have verified already that it is not us calling the close() on the
DataStreamer. I seem to never observe this issue when I start only the first 
(one) server.

Fragment of the log is attached below.

Thank you in advance for any help or suggestion,

Zbyszek



evt=DISCOVERY_CUSTOM_EVT, node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Exchange_BOND-L1555-1500303702823, memoryPolicyName=null, mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping 
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2, 
minorTopVer=125], evt=DISCOVERY_CUSTOM_EVT, 
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Exchange_EQUITY-U1300-1500296583907, memoryPolicyName=null, 
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping 
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2, 
minorTopVer=126], evt=DISCOVERY_CUSTOM_EVT, 
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Exchange_EQUITY-U1100-1500289395073, memoryPolicyName=null, 
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping 
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2, 
minorTopVer=127], evt=DISCOVERY_CUSTOM_EVT, 
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Exchange_EQUITY-U1200-1500293005458, memoryPolicyName=null, 
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping 
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2, 
minorTopVer=128], evt=DISCOVERY_CUSTOM_EVT, 
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Listing_EQUITY-U0800-1500278534000, memoryPolicyName=null, 
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping 
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2, 
minorTopVer=129], evt=DISCOVERY_CUSTOM_EVT, 
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Listing_EQUITY-U1000-1500285783155, memoryPolicyName=null, 
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping 
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2, 
minorTopVer=130], evt=DISCOVERY_CUSTOM_EVT, 
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%] 
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache 
[name=fvpd_Listing_EQUITY-U1400-1500300190375, memoryPolicyName=null, 
mode=LOCAL]
2017-07-17 21:40:39 ERROR [sys-#131%null%] o.a.i.i.p.d.DataStreamerI

Re: java.lang.IllegalStateException: Data streamer has been closed.

2017-07-18 Thread vkulichenko
Hi Zbyszek,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.


zbyszek wrote
> Hello All,
> 
> I was wondering if anybody has encountered the following issue.
> I have 2 servers (Ignite 2.0.1) in the cluster. Each of these 2 servers
> loads different caches (with different names) in LOCAL mode using
> DataStreamer.
> I am starting these servers simultaneously (say the second server is
> started 1 sec. after the first one). Very often, say with a 25% chance,
> the first server's addData(DataStreamerImpl.java:665) call fails with the
> error "java.lang.IllegalStateException: Data streamer has been closed".
> Looking at the log one can see that this error is always 
> preceeded with the error "org.apache.ignite.IgniteCheckedException: Failed
> to finish operation (too many remaps): 32".
> I have verified already that it is not us calling the close() on the
> DataStreamer. I seem to never observe this issue when I start only the
> first (one) server.
> 
> Fragment of the log is attached below.
> 
> Thank you in advance for any help or suggestion,
> 
> Zbyszek
> 
> 
> 
> evt=DISCOVERY_CUSTOM_EVT, node=918cd6f8-e761-43d7-9467-a28f65163c8c]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
> [name=fvpd_Exchange_BOND-L1555-1500303702823, memoryPolicyName=null,
> mode=LOCAL]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99]
> Skipping rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion
> [topVer=2, minorTopVer=125], evt=DISCOVERY_CUSTOM_EVT,
> node=918cd6f8-e761-43d7-9467-a28f65163c8c]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
> [name=fvpd_Exchange_EQUITY-U1300-1500296583907, memoryPolicyName=null,
> mode=LOCAL]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99]
> Skipping rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion
> [topVer=2, minorTopVer=126], evt=DISCOVERY_CUSTOM_EVT,
> node=918cd6f8-e761-43d7-9467-a28f65163c8c]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
> [name=fvpd_Exchange_EQUITY-U1100-1500289395073, memoryPolicyName=null,
> mode=LOCAL]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99]
> Skipping rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion
> [topVer=2, minorTopVer=127], evt=DISCOVERY_CUSTOM_EVT,
> node=918cd6f8-e761-43d7-9467-a28f65163c8c]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
> [name=fvpd_Exchange_EQUITY-U1200-1500293005458, memoryPolicyName=null,
> mode=LOCAL]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99]
> Skipping rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion
> [topVer=2, minorTopVer=128], evt=DISCOVERY_CUSTOM_EVT,
> node=918cd6f8-e761-43d7-9467-a28f65163c8c]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
> [name=fvpd_Listing_EQUITY-U0800-1500278534000, memoryPolicyName=null,
> mode=LOCAL]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99]
> Skipping rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion
> [topVer=2, minorTopVer=129], evt=DISCOVERY_CUSTOM_EVT,
> node=918cd6f8-e761-43d7-9467-a28f65163c8c]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
> [name=fvpd_Listing_EQUITY-U1000-1500285783155, memoryPolicyName=null,
> mode=LOCAL]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99]
> Skipping rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion
> [topVer=2, minorTopVer=130], evt=DISCOVERY_CUSTOM_EVT,
> node=918cd6f8-e761-43d7-9467-a28f65163c8c]
> 2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
> o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
> [name=fvpd_Listing_EQUITY-U1400-1500300190375, memoryPolicyName=null,
> mode=LOCAL]
> 2017-07-17 21:40:39 ERROR [sys-#131%null%] o.a.i.i.p.d.DataStr

Re: java.lang.IllegalStateException: Data streamer has been closed

2017-07-18 Thread zbyszek
Thank you Mikhail,

Your answer is very helpful. Just needed confirmation that this was a known
issue.

Regards,
Zbyszek



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/java-lang-IllegalStateException-Data-streamer-has-been-closed-tp15061p15067.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: java.lang.IllegalStateException: Data streamer has been closed

2017-07-18 Thread mcherkasov
Hi, 

looks like you ran into the following bug:
https://issues.apache.org/jira/browse/IGNITE-5195
unfortunately there's no workaround for this. Don't change cluster topology
:)

Thanks,
Mikhail.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/java-lang-IllegalStateException-Data-streamer-has-been-closed-tp15061p15065.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


java.lang.IllegalStateException: Data streamer has been closed

2017-07-18 Thread zbyszek


Hello All,

I was wondering if anybody has encountered the following issue.
I have 2 servers (Ignite 2.0.1) in the cluster. Each of these 2 servers
loads different caches (with different names) in LOCAL mode using
DataStreamer.
I am starting these servers simultaneously (say the second server is started
1 sec. after the first one). Very often, say with a 25% chance,
the first server's addData(DataStreamerImpl.java:665) call fails with the
error "java.lang.IllegalStateException: Data streamer has been closed".
Looking at the log one can see that this error is always 
preceded with the error "org.apache.ignite.IgniteCheckedException: Failed
to finish operation (too many remaps): 32" Caused by:
"org.apache.ignite.IgniteCheckedException: Topology changed during batch
preparation".
I have verified already that it is not us calling the close() on the
DataStreamer. I seem to never observe this issue when I start only the first
(one) server.

Fragment of the log is attached below.

Thank you in advance for any help or suggestion,

Zbyszek



evt=DISCOVERY_CUSTOM_EVT, node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
[name=fvpd_Exchange_BOND-L1555-1500303702823, memoryPolicyName=null,
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2,
minorTopVer=125], evt=DISCOVERY_CUSTOM_EVT,
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
[name=fvpd_Exchange_EQUITY-U1300-1500296583907, memoryPolicyName=null,
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2,
minorTopVer=126], evt=DISCOVERY_CUSTOM_EVT,
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
[name=fvpd_Exchange_EQUITY-U1100-1500289395073, memoryPolicyName=null,
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2,
minorTopVer=127], evt=DISCOVERY_CUSTOM_EVT,
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
[name=fvpd_Exchange_EQUITY-U1200-1500293005458, memoryPolicyName=null,
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2,
minorTopVer=128], evt=DISCOVERY_CUSTOM_EVT,
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
[name=fvpd_Listing_EQUITY-U0800-1500278534000, memoryPolicyName=null,
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2,
minorTopVer=129], evt=DISCOVERY_CUSTOM_EVT,
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
[name=fvpd_Listing_EQUITY-U1000-1500285783155, memoryPolicyName=null,
mode=LOCAL]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.c.GridCachePartitionExchangeManager [Slf4jLogger.java:99] Skipping
rebalancing (obsolete exchange ID) [top=AffinityTopologyVersion [topVer=2,
minorTopVer=130], evt=DISCOVERY_CUSTOM_EVT,
node=918cd6f8-e761-43d7-9467-a28f65163c8c]
2017-07-17 21:40:39 INFO [exchange-worker-#65%null%]
o.a.i.i.p.cache.GridCacheProcessor [Slf4jLogger.java:99] Started cache
[name=fvpd_Listing_EQUITY-U1400-1500300190375, memoryPolicyName=null,
mode=LOCAL]
2017-07-17 21:40:39 ERROR [sys-#131%null%] o.a.i.i.p.d.DataStreamerImpl
[Slf4jLogger.java:119] DataStreamer operation failed.
org.apache.ignite.IgniteCheckedException: Failed to finish operation (too
many remaps): 32
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5.apply(DataStreamerImpl.java:861)
[ignite-core-2.0.1.jar:2.0.1]
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl$5.apply(DataStreamerImpl.java:826)
[ignite-core-2.0.1.jar:2.0.1]
at
org.apache.ignite.internal.util.future.GridFutureAdapter.notifyListener(GridFutureAdapter.java:382

Re: Ignite Data Streamer Performance is not improving with increase in threads

2017-07-11 Thread rishi007bansod
Hi Andrew,
We have observed that the following method (segment.access()) blocks Ignite
data caching when using data streamers (for a single Ignite instance). This
limits our resource utilization, i.e. CPU and memory are not fully utilized.
How can we avoid this blocking so that we can get maximum performance out of
a single Ignite instance?
<http://apache-ignite-users.70518.x6.nabble.com/file/n14623/blockedthreads.png> 

Thanks  




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Performance-is-not-improving-with-increase-in-threads-tp14151p14623.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Data Streamer Performance is not improving with increase in threads

2017-07-03 Thread rishi007bansod
Hi Andrey,
Attached is the code we have used for benchmarking. Is there any further
tuning that we can apply to get better performance out of a single Ignite
instance?
We have also attached logs taken from our tool where we varied the data
streamer parallelism from 1 to 16 (the default). In this case it is observed
that:
(1) Ignite by default creates a thread pool of size 56, and the data
streamer uses threads from this pool depending on the parallelism set (is
that correct? Correct me if I am wrong).
(2) When the data streamer parallelism is set to 1, the while-loop thread
(Timer-0) goes into a waiting state after some interval (here we get a 30k
rate). Why is this happening, and why is the rate limited to 30k instead of
the 80k we see with default parallelism?
(3) Whereas with the default parallelism (i.e. 16), the while-loop thread
(Timer-0) is continuously in the running state (here we get only an 80k
rate). But in this case the data streamer's public thread pools are waiting
most of the time; is this the reason for the lower throughput?

<http://apache-ignite-users.70518.x6.nabble.com/file/n14276/SingleThread.png> 


<http://apache-ignite-users.70518.x6.nabble.com/file/n14276/DefaultThreads.png> 

code.java
<http://apache-ignite-users.70518.x6.nabble.com/file/n14276/code.java>  

Thanks



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Data-Streamer-Performance-is-not-improving-with-increase-in-threads-tp14151p14276.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

