Re: 2.8.1 - Loading Plugin Provider - Conflicting documentation

2020-08-13 Thread Denis Magda
Veena, thanks for the ticket! We’ll take care of it.


On Thursday, August 13, 2020, VeenaMithare  wrote:

> Hi,
>
> Raised a documentation improvement JIRA:
> IGNITE-13356
> Documentation Change needed: PluginProvider loading changed from 2.8.1
>
> Regards,
> Veena.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


-- 
-
Denis


Re: Enabling swapPath causes invoking shutdown hook

2020-08-13 Thread Denis Magda
If Ignite can hold all the records in memory, it won't be reading pages from
disk (page replacement). Thus, it's expected that the 12GB case performs
better than the 5GB one.

Btw, are you planning to reload the whole data set into Ignite on potential
cluster restarts? Each node loses a subset of the data located in the swap
space on restarts.
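
For reference, the two swap options discussed further down this thread are
configured on the data region (a rough sketch; the region name and sizes are
arbitrary, and the two options are meant to be either-or):

DataRegionConfiguration region = new DataRegionConfiguration()
        .setName("bigRegion")
        // Option 1: maxSize larger than physical RAM, relying on OS swap.
        .setMaxSize(12L * 1024 * 1024 * 1024);

// Option 2 (instead of relying on OS swap): Ignite-managed swap file.
// region.setSwapPath("/opt/ignite/swap");

IgniteConfiguration cfg = new IgniteConfiguration()
        .setDataStorageConfiguration(new DataStorageConfiguration()
                .setDataRegionConfigurations(region));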

-
Denis


On Thu, Aug 13, 2020 at 7:36 PM 38797715 <38797...@qq.com> wrote:

> Hi Denis,
>
> We did a test in the same environment (8G RAM, 10G swap partition) with
> the same configuration (2G heap, persistence enabled, data volume about
> 6G); the only difference is the maxSize, configured as 5G and 12G
> respectively. We found that the scenario with maxSize = 12G performs
> better than the one with maxSize = 5G, and the write performance is
> improved by more than 10%.
>
> I suspect that if the data region is large enough to hold all the
> data, Ignite's page replacement might never kick in.
>
> Our test scenarios are limited and may not be convincing. However, I think
> that insufficient memory may be the norm. In that case, it may be good
> practice to make full use of the swap mechanism of the OS, which takes up
> more disk space but achieves better performance.
> On 2020/8/14 8:22 AM, Denis Magda wrote:
>
> Ignite swapping is based on the swapping mechanism of the OS. So, you
> shouldn't see any difference if you enable OS swapping directly in some way.
>
> Generally, you should not use swapping of any form as a permanent
> persistence layer due to the performance penalty. Once the swapping kicks
> in, you should scale out your cluster and wait while the cluster rebalances
> a part of the data to a new node. When the rebalancing completes, the
> performance will recover and swapping will no longer be needed.
>
> Denis
>
> On Thursday, August 13, 2020, 38797715 <38797...@qq.com> wrote:
>
>> Hi,
>>
>> We retested and found that if we configured swapPath, the write speed
>> became slower and slower as the amount of data increased. If the
>> amount of data is large, on average it is much slower than the scenario
>> where native persistence is enabled and the WAL is disabled.
>>
>> Seen this way, the swapPath property has no productive value; maybe it
>> was an early development feature that is now a bit out of date.
>>
>> What I want to ask is: in the case of small physical memory, is turning on
>> persistence and then configuring a larger maxSize (using the swap
>> mechanism of the OS) a solution? In other words, which is better: the swap
>> mechanism of the OS, or the page replacement of Ignite?
>> On 2020/8/6 9:23 PM, Ilya Kasnacheev wrote:
>>
>> Hello!
>>
>> I think the performance of swap space should be on par with persistence
>> with disabled WAL.
>>
>> You can submit suggested updates to the documentation if you like.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Wed, Aug 5, 2020 at 06:00, 38797715 <38797...@qq.com>:
>>
>>> Hi Ilya,
>>>
>>> If so, there are two ways to implement Ignite's swap space:
>>> 1. maxSize > physical memory, which will use the swap mechanism of the
>>> OS and can be tuned via *vm.swappiness*.
>>> 2. Configure the *swapPath* property, which is implemented by Ignite
>>> itself, is independent of the OS, and has no tuning parameters.
>>> There's a choice between these two modes, right? Then I think there may
>>> be many problems in the description of the document. I hope you can check
>>> it again:
>>> https://apacheignite.readme.io/docs/swap-space
>>>
>>> After our initial testing, the performance of swap space is much better
>>> than native persistence, so I think this pattern is valuable in some
>>> scenarios.
>>> On 2020/8/4 10:16 PM, Ilya Kasnacheev wrote:
>>>
>>> Hello!
>>>
>>> From the docs:
>>>
>>> To avoid this situation with the swapping capabilities, you need to:
>>>
>>>- Set maxSize = bigger_than_RAM_size, in which case, the OS will
>>>take care of the swapping.
>>>- Enable swapping by setting the DataRegionConfiguration.swapPath
>>> property.
>>>
>>>
>>> I actually think these are either-or. You should either do the first
>>> (and configure OS swapping) or the second part.
>>>
>>> Having said that, I recommend setting proper Native Persistence instead.
>>>
>>> Regards,
>>> --
>>> Ilya Kasnacheev
>>>
>>>
>>> Sat, Jul 25, 2020 at 04:49, 38797715 <38797...@qq.com>:
>>>
 Hi,

 https://apacheignite.readme.io/docs/swap-space

 According to the above document, if the physical memory is small, you
 can solve this problem by enabling the swap space. The specific method is to
 configure maxSize to a larger value (i.e. larger than the physical memory),
 and the swapPath property also needs to be configured.

 But from the test results, the node is terminated.

 I think the correct result should be that even if the amount of data
 exceeds the physical memory, the node should still be able 

Service request distribution not happening if initiated through server node

2020-08-13 Thread satyan
Hi, I am new to Ignite. I have a requirement and I don't know how to achieve
it through Ignite.

My use case:

Suppose I have 3 servers; one among them will act as a leader and will
coordinate all the internal service requests.
The requests should get executed either on the same server (the leader) or
on any of the other servers.
For this I have embedded an Ignite node in the leader and call the service
through a service proxy instance, but all the requests are getting executed
on the same server itself; distribution is not happening.
However, if I create both a client and a server node instance on all 3
servers and form a cluster, and then execute the service requests through
the client (rather than from the leader node), the requests are getting
distributed across the other nodes.
Is this client node instance required for service distribution to happen?
Please help.
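
Roughly what I'm doing (a simplified sketch; the service name and the
RequestService interface/implementation are placeholders):

Ignite ignite = Ignition.start(); // embedded server node

// Deploy the service on every server node, then call it via a non-sticky
// proxy so calls may be routed to any node hosting the service.
ignite.services().deployNodeSingleton("requestService", new RequestServiceImpl());

RequestService svc = ignite.services().serviceProxy(
        "requestService", RequestService.class, /* sticky */ false);

svc.handle("some-request");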




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Enabling swapPath causes invoking shutdown hook

2020-08-13 Thread 38797715

Hi Denis,

We did a test in the same environment (8G RAM, 10G swap partition) with
the same configuration (2G heap, persistence enabled, data volume about
6G); the only difference is the maxSize, configured as 5G and 12G
respectively. We found that the scenario with maxSize = 12G performs
better than the one with maxSize = 5G, and the write performance is
improved by more than 10%.


I suspect that if the data region is large enough to hold all the
data, Ignite's page replacement might never kick in.


Our test scenarios are limited and may not be convincing. However, I
think that insufficient memory may be the norm. In that case, it may be
good practice to make full use of the swap mechanism of the OS, which
takes up more disk space but achieves better performance.


On 2020/8/14 8:22 AM, Denis Magda wrote:
Ignite swapping is based on the swapping mechanism of the OS. So, you
shouldn't see any difference if you enable OS swapping directly in some way.


Generally, you should not use swapping of any form as a permanent
persistence layer due to the performance penalty. Once the swapping
kicks in, you should scale out your cluster and wait while the cluster
rebalances a part of the data to a new node. When the rebalancing
completes, the performance will recover and swapping will no longer
be needed.


Denis

On Thursday, August 13, 2020, 38797715 <38797...@qq.com> wrote:


Hi,

We retested and found that if we configured swapPath, the write
speed became slower and slower as the amount of data increased.
If the amount of data is large, on average it is much slower
than the scenario where native persistence is enabled and the
WAL is disabled.

Seen this way, the swapPath property has no productive value;
maybe it was an early development feature that is now a bit out
of date.

What I want to ask is: in the case of small physical memory, is
turning on persistence and then configuring a larger maxSize
(using the swap mechanism of the OS) a solution? In other words,
which is better: the swap mechanism of the OS, or the page
replacement of Ignite?

On 2020/8/6 9:23 PM, Ilya Kasnacheev wrote:

Hello!

I think the performance of swap space should be on par with
persistence with disabled WAL.

You can submit suggested updates to the documentation if you like.

Regards,
-- 
Ilya Kasnacheev



Wed, Aug 5, 2020 at 06:00, 38797715 <38797...@qq.com>:

Hi Ilya,

If so, there are two ways to implement Ignite's swap space:
1. maxSize > physical memory, which will use the swap
mechanism of the OS and can be tuned via *vm.swappiness*.
2. Configure the *swapPath* property, which is implemented by
Ignite itself, is independent of the OS, and has no tuning
parameters.

There's a choice between these two modes, right? Then I
think there may be many problems in the description of the
document. I hope you can check it again:
https://apacheignite.readme.io/docs/swap-space


After our initial testing, the performance of swap space is
much better than native persistence, so I think this pattern
is valuable in some scenarios.

On 2020/8/4 10:16 PM, Ilya Kasnacheev wrote:

Hello!

From the docs:

To avoid this situation with the swapping capabilities, you
need to:

  * Set maxSize = bigger_than_RAM_size, in which case,
the OS will take care of the swapping.
  * Enable swapping by setting the
DataRegionConfiguration.swapPath property.


I actually think these are either-or. You should either do
the first (and configure OS swapping) or the second part.

Having said that, I recommend setting proper Native
Persistence instead.

Regards,
-- 
Ilya Kasnacheev



Sat, Jul 25, 2020 at 04:49, 38797715 <38797...@qq.com>:

Hi,

https://apacheignite.readme.io/docs/swap-space


According to the above document, if the physical memory
is small, you can solve this problem by enabling the swap
space. The specific method is to configure maxSize to a
larger value (i.e. larger than the physical memory), and
the swapPath property also needs to be configured.

But from the test results, the node is terminated.

I think the correct result should be that even if the
amount of data exceeds the physical memory, the node
should still be able to run normally, but the data is
exchanged to 

Re: Operation block on Cluster recovery/rebalance.

2020-08-13 Thread John Smith
No, I reuse the instance. The cache instance is created once at startup of
the application and I pass it to my "repository" class:

public abstract class AbstractIgniteRepository<V> implements CacheRepository<V> {
    public final long DEFAULT_OPERATION_TIMEOUT = 2000;

    private Vertx vertx;
    private IgniteCache<?, V> cache;

    AbstractIgniteRepository(Vertx vertx, IgniteCache<?, V> cache) {
        this.vertx = vertx;
        this.cache = cache;
    }

    ...

    Future<List<JsonArray>> query(final String sql, final long timeoutMs, final Object... args) {
        final Promise<List<JsonArray>> promise = Promise.promise();

        vertx.setTimer(timeoutMs, l -> {
            // THIS FIRES IF THE BLOCK BELOW DOESN'T COMPLETE IN TIME.
            promise.tryFail(new TimeoutException("Cache operation did not complete within: " + timeoutMs + " Ms."));
        });

        vertx.<List<JsonArray>>executeBlocking(code -> {
            SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(args);
            query.setTimeout((int) timeoutMs, TimeUnit.MILLISECONDS);

            try (QueryCursor<List<?>> cursor = cache.query(query)) { // <--- BLOCKS HERE.
                List<JsonArray> rows = new ArrayList<>();
                Iterator<List<?>> iterator = cursor.iterator();

                while (iterator.hasNext()) {
                    List<?> currentRow = iterator.next();
                    JsonArray row = new JsonArray();

                    currentRow.forEach(o -> row.add(o));

                    rows.add(row);
                }

                code.complete(rows);
            } catch (Exception ex) {
                code.fail(ex);
            }
        }, result -> {
            if (result.succeeded()) {
                promise.tryComplete(result.result());
            } else {
                promise.tryFail(result.cause());
            }
        });

        return promise.future();
    }

    public <T> T cache() {
        return (T) cache;
    }
}
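
For completeness, I call the repository roughly like this (assuming Vert.x
3.8+ for Future.onComplete; the table name is a placeholder):

repository.query("select * from my_table where id = ?", 2000, 42)
        .onComplete(ar -> {
            if (ar.succeeded())
                ar.result().forEach(System.out::println);
            else
                ar.cause().printStackTrace();
        });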



On Thu, 13 Aug 2020 at 16:29, Denis Magda  wrote:

> I've created a simple test, and I always get the exception below on an
> attempt to get a reference to an IgniteCache instance when the
> cluster is not activated:
>
> *Exception in thread "main" class org.apache.ignite.IgniteException: Can
> not perform the operation because the cluster is inactive. Note, that the
> cluster is considered inactive by default if Ignite Persistent Store is
> used to let all the nodes join the cluster. To activate the cluster call
> Ignite.active(true)*
>
> Are you trying to get a new IgniteCache reference whenever the client
> reconnects successfully to the cluster? My gut feeling is that, currently,
> Ignite verifies the activation status and generates the exception above
> whenever you're getting a reference to an IgniteCache or IgniteCompute. But
> once you have those references and try to run some operations, those get
> stuck if the cluster is not activated.
> -
> Denis
>
>
> On Thu, Aug 13, 2020 at 6:37 AM John Smith  wrote:
>
>> The cache.query() starts to block when Ignite server nodes are being
>> restarted and there's no baseline topology yet. The server nodes do not
>> block. It's the client that blocks.
>>
>> The dump files are of the server nodes. The screenshot is from the client
>> app using the YourKit profiler; on the client side the threads are marked
>> as red in YourKit.
>>
>> The app is simple: it takes an HTTP request, runs a cache SQL query on
>> Ignite, and if it succeeds, does a put back to Ignite.
>>
>> The client disconnected exception only happens when all server nodes in
>> the cluster are down. The blockage only happens when the cluster is trying
>> to establish baseline topology.
>>
>> On Wed., Aug. 12, 2020, 6:28 p.m. Denis Magda,  wrote:
>>
>>> John,
>>>
>>> I don't see any traits of an application-caused deadlock in the thread
>>> dumps. Please elaborate on the following:
>>>
>>> 7- Restart 1st node, run operation, operation fails with
 ClientDisconectedException but application still able to complete its
 request.
>>>
>>>
>>> What's the IP address of the server node the client app uses to join the
>>> cluster? If that's not the address of the 1st node, that is already
>>> restarted, then the client couldn't join the cluster and it's expected that
>>> it fails with the ClientDisconnectedException.
>>>
>>> 8- Start 2nd node, run operation, from here on all operations just block.
>>>
>>>
>>> Are the operations unblocked and completed successfully when the third
>>> node joins the cluster and the cluster gets activated automatically?
>>>
>>> -
>>> Denis
>>>
>>>
>>> On Wed, Aug 12, 2020 at 11:08 AM John Smith 
>>> wrote:
>>>
 Ok Denis here they are...

 3 nodes, and I captured a YourKit screenshot of what it thinks are
 deadlocks on the client app.


 https://www.dropbox.com/sh/2cxjkngvx0ubw3b/AADa--HQg-rRsY3RBo2vQeJ9a?dl=0

 On Wed, 12 Aug 2020 at 11:07, John Smith 
 wrote:

> Hi Denis. I will asap, but I think you were right: it is the query
> that blocks.
>
> My application first runs a select on the cache and then does a put to the cache.

Re: Enabling swapPath causes invoking shutdown hook

2020-08-13 Thread Denis Magda
Ignite swapping is based on the swapping mechanism of the OS. So, you
shouldn't see any difference if you enable OS swapping directly in some way.

Generally, you should not use swapping of any form as a permanent
persistence layer due to the performance penalty. Once the swapping kicks
in, you should scale out your cluster and wait while the cluster rebalances
a part of the data to a new node. When the rebalancing completes, the
performance will recover and swapping will no longer be needed.

Denis

On Thursday, August 13, 2020, 38797715 <38797...@qq.com> wrote:

> Hi,
>
> We retested and found that if we configured swapPath, the write speed
> became slower and slower as the amount of data increased. If the
> amount of data is large, on average it is much slower than the scenario
> where native persistence is enabled and the WAL is disabled.
>
> Seen this way, the swapPath property has no productive value; maybe it
> was an early development feature that is now a bit out of date.
>
> What I want to ask is: in the case of small physical memory, is turning on
> persistence and then configuring a larger maxSize (using the swap
> mechanism of the OS) a solution? In other words, which is better: the swap
> mechanism of the OS, or the page replacement of Ignite?
> On 2020/8/6 9:23 PM, Ilya Kasnacheev wrote:
>
> Hello!
>
> I think the performance of swap space should be on par with persistence
> with disabled WAL.
>
> You can submit suggested updates to the documentation if you like.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, Aug 5, 2020 at 06:00, 38797715 <38797...@qq.com>:
>
>> Hi Ilya,
>>
>> If so, there are two ways to implement Ignite's swap space:
>> 1. maxSize > physical memory, which will use the swap mechanism of the
>> OS and can be tuned via *vm.swappiness*.
>> 2. Configure the *swapPath* property, which is implemented by Ignite
>> itself, is independent of the OS, and has no tuning parameters.
>> There's a choice between these two modes, right? Then I think there may
>> be many problems in the description of the document. I hope you can check
>> it again:
>> https://apacheignite.readme.io/docs/swap-space
>>
>> After our initial testing, the performance of swap space is much better
>> than native persistence, so I think this pattern is valuable in some
>> scenarios.
>> On 2020/8/4 10:16 PM, Ilya Kasnacheev wrote:
>>
>> Hello!
>>
>> From the docs:
>>
>> To avoid this situation with the swapping capabilities, you need to:
>>
>>- Set maxSize = bigger_than_RAM_size, in which case, the OS will
>>take care of the swapping.
>>- Enable swapping by setting the DataRegionConfiguration.swapPath
>> property.
>>
>>
>> I actually think these are either-or. You should either do the first (and
>> configure OS swapping) or the second part.
>>
>> Having said that, I recommend setting proper Native Persistence instead.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Sat, Jul 25, 2020 at 04:49, 38797715 <38797...@qq.com>:
>>
>>> Hi,
>>>
>>> https://apacheignite.readme.io/docs/swap-space
>>>
>>> According to the above document, if the physical memory is small, you
>>> can solve this problem by enabling the swap space. The specific method is
>>> to configure maxSize to a larger value (i.e. larger than the physical
>>> memory), and the swapPath property also needs to be configured.
>>>
>>> But from the test results, the node is terminated.
>>>
>>> I think the correct result should be that even if the amount of data
>>> exceeds the physical memory, the node should still be able to run normally,
>>> but the data is exchanged to the disk.
>>>
>>> I want to know what parameters affect the behavior of this
>>> configuration? *vm.swappiness* or others?
>>> On 2020/7/24 9:55 PM, aealexsandrov wrote:
>>>
>>> Hi,
>>>
>>> Can you please clarify your expectations? Did you expect that the JVM
>>> process would be killed instead of gracefully stopping? What are you
>>> trying to achieve?
>>>
>>> BR,
>>> Andrei
>>>
>>>
>>>
>>> --
>>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>>
>>>

-- 
-
Denis


[Apache Ignite Virtual Meetup] Building a Blockchain Network with Apache Ignite

2020-08-13 Thread Branimir Angelov
Hello Igniters,

I would like to share with you the video recording[1] of our presentation
from the last Ignite Virtual Meetup. Although our use case might sound a
bit extraordinary, we are extremely satisfied with our decision to use
Ignite for it.

Thanks, Branimir
[1] https://youtu.be/lCiZ3x8IRvI


Re: Operation block on Cluster recovery/rebalance.

2020-08-13 Thread Denis Magda
I've created a simple test, and I always get the exception below on an
attempt to get a reference to an IgniteCache instance when the
cluster is not activated:

*Exception in thread "main" class org.apache.ignite.IgniteException: Can
not perform the operation because the cluster is inactive. Note, that the
cluster is considered inactive by default if Ignite Persistent Store is
used to let all the nodes join the cluster. To activate the cluster call
Ignite.active(true)*

Are you trying to get a new IgniteCache reference whenever the client
reconnects successfully to the cluster? My gut feeling is that, currently,
Ignite verifies the activation status and generates the exception above
whenever you're getting a reference to an IgniteCache or IgniteCompute. But
once you have those references and try to run some operations, those get
stuck if the cluster is not activated.
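
If it helps, this is roughly the shape of the test (a sketch; the config file
and cache name are placeholders):

Ignite ignite = Ignition.start("server-config.xml");

// Throws the IgniteException quoted above while the cluster is inactive:
IgniteCache<Integer, String> cache = ignite.cache("myCache");

// After activation, the same call succeeds:
ignite.cluster().active(true);
cache = ignite.cache("myCache");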
-
Denis


On Thu, Aug 13, 2020 at 6:37 AM John Smith  wrote:

> The cache.query() starts to block when Ignite server nodes are being
> restarted and there's no baseline topology yet. The server nodes do not
> block. It's the client that blocks.
>
> The dump files are of the server nodes. The screenshot is from the client
> app using the YourKit profiler; on the client side the threads are marked
> as red in YourKit.
>
> The app is simple: it takes an HTTP request, runs a cache SQL query on
> Ignite, and if it succeeds, does a put back to Ignite.
>
> The client disconnected exception only happens when all server nodes in
> the cluster are down. The blockage only happens when the cluster is trying
> to establish baseline topology.
>
> On Wed., Aug. 12, 2020, 6:28 p.m. Denis Magda,  wrote:
>
>> John,
>>
>> I don't see any traits of an application-caused deadlock in the thread
>> dumps. Please elaborate on the following:
>>
>> 7- Restart 1st node, run operation, operation fails with
>>> ClientDisconectedException but application still able to complete its
>>> request.
>>
>>
>> What's the IP address of the server node the client app uses to join the
>> cluster? If that's not the address of the 1st node, that is already
>> restarted, then the client couldn't join the cluster and it's expected that
>> it fails with the ClientDisconnectedException.
>>
>> 8- Start 2nd node, run operation, from here on all operations just block.
>>
>>
>> Are the operations unblocked and completed successfully when the third
>> node joins the cluster and the cluster gets activated automatically?
>>
>> -
>> Denis
>>
>>
>> On Wed, Aug 12, 2020 at 11:08 AM John Smith 
>> wrote:
>>
>>> Ok Denis here they are...
>>>
>>> 3 nodes, and I captured a YourKit screenshot of what it thinks are
>>> deadlocks on the client app.
>>>
>>> https://www.dropbox.com/sh/2cxjkngvx0ubw3b/AADa--HQg-rRsY3RBo2vQeJ9a?dl=0
>>>
>>> On Wed, 12 Aug 2020 at 11:07, John Smith  wrote:
>>>
 Hi Denis. I will asap, but I think you were right: it is the query
 that blocks.

 My application first runs a select on the cache and then does a
 put to the cache.

 On Tue, 11 Aug 2020 at 19:22, Denis Magda  wrote:

> John,
>
> It sounds like a deadlock caused by the application logic. Is there
> any chance that the operation you run on step 8 accesses several keys in
> one order while the other operations work with the same keys but in a
> different order? Deadlocks are possible when you use the Ignite
> Transaction API or simply execute bulk operations such as cache.readAll()
> or cache.writeAll(..).
>
> Please take and attach thread dumps from all the cluster nodes for
> analysis if we need to dig deeper.
>
> -
> Denis
>
>
> On Mon, Aug 10, 2020 at 6:23 PM John Smith 
> wrote:
>
>> Hi Denis, I think you are right. It's the query that blocks; the other
>> k/v operations are OK.
>>
>> Any thoughts on this?
>>
>> On Mon, 10 Aug 2020 at 15:28, John Smith 
>> wrote:
>>
>>> I tried with 2.8.1, same issue. Operations block indefinitely...
>>>
>>> 1- Start 3 node cluster
>>> 2- Start client application client = true with Ignition.start()
>>> 3- Run some cache operations, everything ok...
>>> 4- Shut down one node, run operation, still ok
>>> 5- Shut down 2nd node, run operation, still ok
>>> 6- Shut down 3rd node, run operation, still ok... Operations start
>>> failing with ClientDisconectedException...
>>> 7- Restart 1st node, run operation, operation fails
>>> with ClientDisconectedException but application still able to complete
>>> its request.
>>> 8- Start 2nd node, run operation, from here on all operations just
>>> block.
>>>
>>> Basically, the client application is an HTTP server; on each HTTP
>>> request it does cache operations.
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Fri, 7 Aug 2020 at 19:46, John Smith 
>>> wrote:
>>>
 No, everything blocks... Also using 2.7.0 

Re: How can I find out if indexes are used in a query?

2020-08-13 Thread Axel Luft
I can't see any difference with or without an index: with 1,000,000 rows it
takes between 4-5 seconds, with 2,000,000 rows between 6-7, and with
3,000,000 between 10-12 seconds.

Our target size will easily be 300,000,000 rows.
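
For reference, this is how we've been checking the plan (a sketch; table and
column names are made up, and cache is an IgniteCache obtained elsewhere):

SqlFieldsQuery plan = new SqlFieldsQuery(
        "EXPLAIN SELECT * FROM Person WHERE city = ?").setArgs("London");

// The printed plan names the index being scanned, or shows a full table
// scan when no index is picked.
for (List<?> row : cache.query(plan).getAll())
    System.out.println(row.get(0));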
AL



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Change Data Capture Feature

2020-08-13 Thread Pavel Strashkin
Hello,

Is there a Change Data Capture (CDC) feature in plans to make it possible
to stream cache updates to Kafka for example?

I've found the Continuous Queries feature, but it's not clear to me whether
it's possible to use it for this. What seems to be missing is the ability to
start from the last position whenever the client restarts.

Thanks.


Re: IpFinder with domain

2020-08-13 Thread kay
Hello, 

In my case, 'cache.ignite.com' is an L4 load balancer and the port is 80.

cache.ignite.com (e.g. 41.1.166.123) will connect to the Ignite servers (e.g.
42.1.129.123:47500, 42.1.129.123:47501, ...).

Is it possible? Or should I define a port for connecting to the Ignite server?

I will be waiting for a reply!
Thank you so much!






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Operation block on Cluster recovery/rebalance.

2020-08-13 Thread John Smith
The cache.query() starts to block when Ignite server nodes are being
restarted and there's no baseline topology yet. The server nodes do not
block. It's the client that blocks.

The dump files are of the server nodes. The screenshot is from the client
app using the YourKit profiler; on the client side the threads are marked as
red in YourKit.

The app is simple: it takes an HTTP request, runs a cache SQL query on
Ignite, and if it succeeds, does a put back to Ignite.

The client disconnected exception only happens when all server nodes in the
cluster are down. The blockage only happens when the cluster is trying to
establish baseline topology.

On Wed., Aug. 12, 2020, 6:28 p.m. Denis Magda,  wrote:

> John,
>
> I don't see any traits of an application-caused deadlock in the thread
> dumps. Please elaborate on the following:
>
> 7- Restart 1st node, run operation, operation fails with
>> ClientDisconectedException but application still able to complete its
>> request.
>
>
> What's the IP address of the server node the client app uses to join the
> cluster? If that's not the address of the 1st node, that is already
> restarted, then the client couldn't join the cluster and it's expected that
> it fails with the ClientDisconnectedException.
>
> 8- Start 2nd node, run operation, from here on all operations just block.
>
>
> Are the operations unblocked and completed successfully when the third
> node joins the cluster and the cluster gets activated automatically?
>
> -
> Denis
>
>
> On Wed, Aug 12, 2020 at 11:08 AM John Smith 
> wrote:
>
>> Ok Denis here they are...
>>
>> 3 nodes, and I captured a YourKit screenshot of what it thinks are
>> deadlocks on the client app.
>>
>> https://www.dropbox.com/sh/2cxjkngvx0ubw3b/AADa--HQg-rRsY3RBo2vQeJ9a?dl=0
>>
>> On Wed, 12 Aug 2020 at 11:07, John Smith  wrote:
>>
>>> Hi Denis. I will asap, but I think you were right: it is the query
>>> that blocks.
>>>
>>> My application first runs a select on the cache and then does a
>>> put to the cache.
>>>
>>> On Tue, 11 Aug 2020 at 19:22, Denis Magda  wrote:
>>>
 John,

 It sounds like a deadlock caused by the application logic. Is there any
 chance that the operation you run on step 8 accesses several keys in one
 order while the other operations work with the same keys but in a different
 order? Deadlocks are possible when you use the Ignite Transaction API or
 simply execute bulk operations such as cache.readAll() or
 cache.writeAll(..).

 Please take and attach thread dumps from all the cluster nodes for
 analysis if we need to dig deeper.

 -
 Denis


 On Mon, Aug 10, 2020 at 6:23 PM John Smith 
 wrote:

> Hi Denis, I think you are right. It's the query that blocks; the other
> k/v operations are OK.
>
> Any thoughts on this?
>
> On Mon, 10 Aug 2020 at 15:28, John Smith 
> wrote:
>
>> I tried with 2.8.1, same issue. Operations block indefinitely...
>>
>> 1- Start 3 node cluster
>> 2- Start client application client = true with Ignition.start()
>> 3- Run some cache operations, everything ok...
>> 4- Shut down one node, run operation, still ok
>> 5- Shut down 2nd node, run operation, still ok
>> 6- Shut down 3rd node, run operation, still ok... Operations start
>> failing with ClientDisconectedException...
>> 7- Restart 1st node, run operation, operation fails
>> with ClientDisconectedException but application still able to complete
>> its request.
>> 8- Start 2nd node, run operation, from here on all operations just
>> block.
>>
>> Basically, the client application is an HTTP server; on each HTTP
>> request it does cache operations.
>>
>>
>>
>>
>>
>>
>> On Fri, 7 Aug 2020 at 19:46, John Smith 
>> wrote:
>>
>>> No, everything blocks... Also using 2.7.0 just in case.
>>>
>>> The only time I get an exception is if the cluster is completely off;
>>> then I get ClientDisconectedException...
>>>
>>> On Fri, 7 Aug 2020 at 18:52, Denis Magda  wrote:
>>>
 If I'm not mistaken, key-value operations (cache.get/put) and
 compute calls fail with an exception if the cluster is deactivated. Do
 those fail on your end?

 As for the async and SQL operations, let's see what other community
 members say.

 -
 Denis


 On Fri, Aug 7, 2020 at 1:06 PM John Smith 
 wrote:

> Hi any thoughts on this?
>
> On Thu, 6 Aug 2020 at 23:33, John Smith 
> wrote:
>
>> Here is another example where it blocks.
>>
>> SqlFieldsQuery query = new SqlFieldsQuery(
>>         "select * from my_table")
>>         .setArgs(providerId, carrierCode);
>> query.setTimeout(1000, TimeUnit.MILLISECONDS);
>>
>> try (QueryCursor<List<?>> cursor = 

2.7.0 -> 2.8.1 Upgrade warnings and issues

2020-08-13 Thread bhlewka
Hello all, I'm trying to upgrade an existing cluster from 2.7.0 to 2.8.1, and
am following the upgrade procedure outlined in the Ignite documentation. I'm
noticing new warnings I haven't seen before from my 2.8.1 clients, and am
not sure how to get rid of them, or how important they are to fix. 

*Rebalance batches prefetch count mismatch [cacheName=ignite-sys-cache,
localRebalanceBatchesPrefetchCount=3, remoteRebalanceBatchesPrefetchCount=2,
rmtNodeId=xxx-xxx-xxx-xxx]*
The rebalance batches prefetch warning seems to come from the 2.7.0
default being 2 and the 2.8.1 default being 3; however, after upgrading the
cluster it is not using the new default of 3. I've tried restarting the
cluster, as well as explicitly setting rebalanceBatchesPrefetchCount to 3 in
the config, but it still stays at 2. Is this a bug, or is there a specific
way to change this config value?
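
For reference, this is roughly how I tried setting it (a sketch; the cache
name is a placeholder):

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setRebalanceBatchesPrefetchCount(3); // per-cache rebalance setting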

*Ignite work directory is not provided, automatically resolved to:
/app/ignite/work
Serialization of Java objects in H2 was enabled.*
I'm assuming these other two are less important, but I'm still not sure how
to get rid of them.
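
I assume the work-directory warning goes away once the directory is set
explicitly (a sketch):

IgniteConfiguration cfg = new IgniteConfiguration()
        .setWorkDirectory("/app/ignite/work"); // the path from the warning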

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Enabling swapPath causes invoking shutdown hook

2020-08-13 Thread 38797715

Hi,

We retested and found that if we configured swapPath, the write speed
became slower and slower as the amount of data increased. If the
amount of data is large, on average it is much slower than the scenario
where native persistence is enabled and the WAL is disabled.


Seen this way, the swapPath property has no productive value; maybe it
was an early development feature that is now a bit out of date.


What I want to ask is: in the case of small physical memory, is turning on
persistence and then configuring a larger maxSize (using the swap
mechanism of the OS) a solution? In other words, which is better: the swap
mechanism of the OS, or the page replacement of Ignite?


On 2020/8/6 9:23 PM, Ilya Kasnacheev wrote:

Hello!

I think the performance of swap space should be on par with 
persistence with disabled WAL.


You can submit suggested updates to the documentation if you like.

Regards,
--
Ilya Kasnacheev


Wed, Aug 5, 2020 at 06:00, 38797715 <38797...@qq.com>:


Hi Ilya,

If so, there are two ways to implement Ignite's swap space:
1. maxSize > physical memory, which will use the swap mechanism of
the OS and can be tuned via *vm.swappiness*.
2. Configure the *swapPath* property, which is implemented by
Ignite itself, is independent of the OS, and has no tuning
parameters.

There's a choice between these two modes, right? Then I think
there may be many problems in the description of the document. I
hope you can check it again:
https://apacheignite.readme.io/docs/swap-space


After our initial testing, the performance of swap space is much
better than native persistence, so I think this pattern is
valuable in some scenarios.

On 2020/8/4 10:16 PM, Ilya Kasnacheev wrote:

Hello!

From the docs:

To avoid this situation with the swapping capabilities, you need to:

  * Set maxSize = bigger_than_RAM_size, in which case, the OS
will take care of the swapping.
  * Enable swapping by setting the
DataRegionConfiguration.swapPath property.


I actually think these are either-or. You should either do the
first (and configure OS swapping) or the second part.

Having said that, I recommend setting proper Native Persistence
instead.

Regards,
-- 
Ilya Kasnacheev



Sat, Jul 25, 2020 at 04:49, 38797715 <38797...@qq.com>:

Hi,

https://apacheignite.readme.io/docs/swap-space


According to the above document, if the physical memory is
small, you can solve this problem by enabling the swap
space. The specific method is to configure maxSize to a larger
value (i.e. larger than the physical memory), and the
swapPath property also needs to be configured.

But from the test results, the node is terminated.

I think the correct result should be that even if the amount
of data exceeds the physical memory, the node should still be
able to run normally, but the data is exchanged to the disk.

I want to know what parameters affect the behavior of this
configuration? *vm.swappiness* or others?

On 2020/7/24 9:55 PM, aealexsandrov wrote:

Hi,

Can you please clarify your expectations? Did you expect that the JVM
process would be killed instead of gracefully stopping? What are you going
to achieve?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/





Re: Local node is not added in baseline topology

2020-08-13 Thread Ali Bagewadi
Hello,
Thanks for the response.
However, my requirements are:
1) I don't want to add the nodes manually using the control script, as it is
not feasible to get the node ID at runtime on my hardware.
2) I have used the auto-adjust command, but it's unable to add the local node
to the baseline topology.

Currently I am using the commands below to add a node to the baseline
topology and to enable auto-adjust, respectively:

control.sh --baseline add consistentID
control.sh --baseline auto_adjust enable timeout 5000

Please suggest.

Thank you,
Ali



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite thin client add column and index dynamically

2020-08-13 Thread Ilya Kasnacheev
Hello!

It should work.

Do you have a reproducer?

Thanks,
-- 
Ilya Kasnacheev


Thu, Aug 13, 2020 at 15:06, Hemambara:

> I have seen that if I provide query entities in the cache configuration I
> am able to query by column. But when I execute ALTER TABLE ADD COLUMN
> programmatically, it's not working with Java thin clients, whereas it works
> fine with thick clients. Java thin clients support SqlFieldsQuery, so why
> not ALTER TABLE? Do DDLs work with Java thin clients?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Local node is not added in baseline topology

2020-08-13 Thread Ilya Kasnacheev
Hello!

You can use control.sh. It can look up nodes as well as add them to
baseline.


Unfortunately I did not see your commands so I can't know for sure. I'm not
sure that auto-adjust will work for nodes added prior to enabling it.

Regards,
-- 
Ilya Kasnacheev


Thu, Aug 13, 2020 at 10:17, rakshita04:

> How to achieve the same using C++?
> Are there any cluster APIs for C++?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Ignite thin client add column and index dynamically

2020-08-13 Thread Hemambara
I have seen that if I provide query entities in the cache configuration I am
able to query by column. But when I execute ALTER TABLE ADD COLUMN
programmatically, it's not working with Java thin clients, whereas it works
fine with thick clients. Java thin clients support SqlFieldsQuery, so why
not ALTER TABLE? Do DDLs work with Java thin clients?
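
Roughly what I'm running (a sketch; table and column names are placeholders):

IgniteClient client = Ignition.startClient(
        new ClientConfiguration().setAddresses("127.0.0.1:10800"));

// DDL goes through the same SQL entry point as queries:
client.query(new SqlFieldsQuery(
        "ALTER TABLE Person ADD COLUMN age INT")).getAll();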



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Java Heap

2020-08-13 Thread Ilya Kasnacheev
Hello!

It is showing the available heap of the 2 nodes combined, because that is
what this metric reports.

It is not utilization, but merely the total available value.

Regards,
-- 
Ilya Kasnacheev


Thu, Aug 13, 2020 at 10:16, rakshita04:

> But we are creating the cache in off-heap memory, right?
> Then why is it showing double the value in "On heap"?
> Is it only reporting double the size as on-heap, or will it actually
> allocate that much memory?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to remove IgniteAtomicLong using REST API?

2020-08-13 Thread Ilya Kasnacheev
Hello!

No, it's not there. You will need to do that in Java.
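
A sketch of the Java call (the atomic's name is a placeholder):

IgniteAtomicLong counter = ignite.atomicLong("myCounter", 0, false);

if (counter != null)
    counter.close(); // removes the atomic from the cluster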

Regards,
-- 
Ilya Kasnacheev


Wed, Aug 12, 2020 at 17:16, Pavel Strashkin:

> I don't see it being part of REST API - is it missing indeed or my eyes
> aren't that good anymore? :)
>
> (clients aren't Java-based - mostly node.js).
>
> Thanks for prompt responses. I appreciate it.
>
> On Wed, Aug 12, 2020 at 7:13 AM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> You need to issue atomicLong.close().
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Wed, Aug 12, 2020 at 16:58, Pavel Strashkin:
>>
>>> There are “incr” and “decr” commands for atomics exposed via REST, but
>>> what if I need to delete it?
>>>
>>> On Wed, Aug 12, 2020 at 6:42 AM Ilya Kasnacheev <
>>> ilya.kasnach...@gmail.com> wrote:
>>>
 Hello!

 Please elaborate wrt "doesn't really remove the key".

 REST, etc., do not have access to system caches (where atomics are
 stored), nor do they have an API to deal with atomics.

 Regards.
 --
 Ilya Kasnacheev


 Wed, Aug 12, 2020 at 08:12, Pavel Strashkin:

> Hi there,
>
> It seems atomic longs are stored in a special system cache as the
> "rmv" command doesn't really remove the key.
>
> Is there a way to remove IgniteAtomicLong using REST API? If not -
> what would it take to add such a command? Is there any other option like
> thin client?
>
> Thanks.
>



Re: IpFinder with domain

2020-08-13 Thread Ilya Kasnacheev
Hello!

It should work. If port is not specified, discoveryPort or default port
(47500) will be used.

Hostnames are supported all right.

Regards,
-- 
Ilya Kasnacheev


Thu, Aug 13, 2020 at 13:53, kay:

> Hello, I have 8 nodes, and I am using the Java thin client.
>
> I know how to set an ip:port list in the config file,
>
> ex)
>
> ip:port
> ...
>
> I'm curious: is it possible to set only a URL, without a port?
> For example:
> cache.ignite.com
>
> I already set that URL (an L4 load balancer) but there is no response (it
> times out).
>
> Our project uses an L4 load balancer in front of the Ignite nodes.
> If it's not possible to use the L4 (with a URL), do I have to list all
> nodes?
>
> I'm waiting for a reply! Thank you so much!
>
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


IpFinder with domain

2020-08-13 Thread kay
Hello, I have 8 nodes, and I am using the Java thin client.

I know how to set an ip:port list in the config file,

ex)

ip:port
...

I'm curious: is it possible to set only a URL, without a port?
For example:
cache.ignite.com

I already set that URL (an L4 load balancer) but there is no response (it
times out).

Our project uses an L4 load balancer in front of the Ignite nodes.
If it's not possible to use the L4 (with a URL), do I have to list all
nodes?

I'm waiting for a reply! Thank you so much!







--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: 2.8.1 - Loading Plugin Provider - Conflicting documentation

2020-08-13 Thread VeenaMithare
Hi,

Raised a documentation improvement JIRA:
IGNITE-13356
Documentation Change needed: PluginProvider loading changed from 2.8.1

Regards,
Veena.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Local node is not added in baseline topology

2020-08-13 Thread rakshita04
How to achieve the same using C++?
Are there any cluster APIs for C++?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Java Heap

2020-08-13 Thread rakshita04
But we are creating the cache in off-heap memory, right?
Then why is it showing double the value in "On heap"?
Is it only reporting double the size as on-heap, or will it actually
allocate that much memory?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/