Hi,
I truly appreciate the support we are getting from the community.
As of now we don't have a reproducer; the above issue only comes up once
in a while.
The server is up and running. *Note*: the Ignite cluster has been installed
in an Azure Kubernetes cluster as StatefulSet pods.
We have oth
Can you please check the above questions and help me out?
Hello!
This is such a small value for volatile topologies such as yours. I
recommend switching to 2.9.0 and then changing this number to 200.
Regards,
--
Ilya Kasnacheev
Tue, Nov 3, 2020 at 21:40, Mahesh Renduchintala <
mahesh.renduchint...@aline-consulting.com>:
> Yes. I have IGNITE_EXCHANGE_HISTORY_SIZE set to 10.
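
For reference, a minimal sketch of raising this value. The property is read
once at node startup, so it must be set before Ignition.start() (or passed
as -DIGNITE_EXCHANGE_HISTORY_SIZE=200 on the JVM command line); the value
200 follows the recommendation above:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.IgniteSystemProperties;

    public class ExchangeHistorySetup {
        public static void main(String[] args) {
            // Keep the last 200 partition map exchange results in history,
            // instead of the much smaller value used before.
            System.setProperty(IgniteSystemProperties.IGNITE_EXCHANGE_HISTORY_SIZE, "200");

            try (Ignite ignite = Ignition.start()) {
                // Node started with the larger exchange history.
            }
        }
    }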
Hi Mahesh,
Use these metrics to monitor the progress:
- JMX:
https://ignite.apache.org/docs/latest/monitoring-metrics/metrics#monitoring-rebalancing
- Rebalancing widget of Control Center:
https://www.gridgain.com/docs/control-center/latest/monitoring/configuring-widgets#rebalance-w
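
Beyond those widgets, a minimal programmatic sketch of the same check,
assuming cache statistics were enabled via
CacheConfiguration.setStatisticsEnabled(true); it polls the local cache
metrics until no partitions are left to rebalance on that node:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.CacheMetrics;

    static void awaitRebalance(Ignite ignite, String cacheName) throws InterruptedException {
        IgniteCache<?, ?> cache = ignite.cache(cacheName);
        while (true) {
            CacheMetrics m = cache.localMetrics();
            // Zero rebalancing partitions means rebalancing has finished
            // for this cache on the local node.
            if (m.getRebalancingPartitionsCount() == 0)
                return;
            System.out.printf("Rebalancing %s: %d partitions left%n",
                cacheName, m.getRebalancingPartitionsCount());
            Thread.sleep(1_000);
        }
    }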
Hi,
As soon as we add a new server node to the cluster, rebalancing starts;
that much is clear.
Is there a way to know when rebalancing successfully ends on the new server
node?
Caches in the cluster are both replicated and partitioned.
regards
Mahesh
Yes. I have IGNITE_EXCHANGE_HISTORY_SIZE set to 10.
Should I set IGNITE_EXCHANGE_HISTORY_SIZE = 0?
I will file a bug shortly.
From: Ilya Kasnacheev
Sent: Tuesday, November 3, 2020 8:57 PM
To: user@ignite.apache.org
Subject: Re: Failed to Resolve NodeTopology
Hello!
You seem to have had a very old transaction that tried to access a topology
version that was no longer in the exchange history. Do you have
IGNITE_EXCHANGE_HISTORY_SIZE set?
I think it is a bug that this causes node failure; I would expect the
transaction to be killed, that's all. Can you please file a ticket?
Hi aealexsandrov,
Thanks a lot for your answer!
I have questions regarding your points in 3) and 4):
3) As you can see in the configs I posted, we have already set quite a large
failureDetectionTimeout=60. So I guess increasing it even more would not
help us much here. Am I right?
4) Why would decr
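
For context, a minimal sketch of where these timeouts live in the
configuration (the millisecond values are illustrative assumptions, not
recommendations; 60_000 assumes the "60" above meant 60 seconds):

    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.IgniteConfiguration;

    IgniteConfiguration cfg = new IgniteConfiguration();
    // Failure detection timeout for server nodes, in milliseconds.
    cfg.setFailureDetectionTimeout(60_000);
    // Client nodes have their own, separately configurable timeout.
    cfg.setClientFailureDetectionTimeout(30_000);
    Ignition.start(cfg);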
Hello!
Are you sure that the Ignite cluster is in fact up? :)
If it is, maybe your usage pattern with this pool somehow assigns the same
connection to two different threads, which then try to run queries in parallel.
In theory, this is exactly what connection pools are created to avoid,
but maybe there's s
Hello!
An 800 MB entry is far above any entry size we ever expected to see.
Even briefly holding such entries on heap will cause problems for you, as
will sending them over the communication layer.
I recommend splitting entries into chunks, maybe. That's basically what IGFS
did; we decided to axe it.
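
A minimal sketch of the chunking idea; the helper, the key scheme, and the
1 MB chunk size are illustrative assumptions, not an Ignite API:

    import java.util.Arrays;
    import org.apache.ignite.IgniteCache;

    // Store one large byte[] as fixed-size chunks under derived keys
    // ("key.0", "key.1", ...) plus the chunk count under "key.n".
    static void putChunked(IgniteCache<String, byte[]> cache, String key, byte[] value) {
        final int CHUNK = 1 << 20; // 1 MB per chunk; tune to your data.
        int n = (value.length + CHUNK - 1) / CHUNK;
        for (int i = 0; i < n; i++) {
            int from = i * CHUNK;
            int to = Math.min(from + CHUNK, value.length);
            cache.put(key + "." + i, Arrays.copyOfRange(value, from, to));
        }
        cache.put(key + ".n", Integer.toString(n).getBytes());
    }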
I wasn't able to reproduce that in a hurry. What Hikari settings do you have?
Maybe you have a reproducer?
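
For comparison, a minimal HikariCP setup against the Ignite thin JDBC
driver (the URL and pool size are assumptions for illustration); each
connection borrowed from the pool should be used by a single thread and
returned via close():

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.Statement;

    HikariConfig cfg = new HikariConfig();
    cfg.setJdbcUrl("jdbc:ignite:thin://127.0.0.1:10800"); // assumed local node
    cfg.setMaximumPoolSize(8);

    try (HikariDataSource ds = new HikariDataSource(cfg);
         Connection conn = ds.getConnection(); // one thread per connection
         Statement st = conn.createStatement();
         ResultSet rs = st.executeQuery("SELECT 1")) {
        rs.next();
    }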
Thanks,
How is this different from multiple puts inside a transaction?
If we use the data streamer to write the records, does that mean the
continuous query will receive all 10,000 records in one go in the local
listener?
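
For reference, a minimal IgniteDataStreamer sketch (the cache name and
counts are illustrative). The streamer buffers and batches entries per
node and provides no transactional atomicity, so a continuous query's
local listener sees the entries as they are applied, not as one atomic
batch:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;

    try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
        streamer.allowOverwrite(true); // also update keys that already exist
        for (int i = 0; i < 10_000; i++)
            streamer.addData(i, "value-" + i);
        // close() flushes any remaining buffered entries.
    }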
Thank you for the suggestions.
I will try the timeout settings. It looks like the server has connected to
the client under the new client ID via the discovery SPI, but it is trying to
send the cache update to the old client ID via the communication SPI.
What impact does the IgniteAsyncCallback an
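
For background, a minimal sketch of the annotation in question: marking a
continuous query's local listener with @IgniteAsyncCallback makes Ignite
invoke it from a dedicated async callback pool rather than from the
threads that deliver the updates (the cache name and types are
illustrative):

    import javax.cache.event.CacheEntryEvent;
    import javax.cache.event.CacheEntryUpdatedListener;
    import org.apache.ignite.cache.query.ContinuousQuery;
    import org.apache.ignite.lang.IgniteAsyncCallback;

    @IgniteAsyncCallback
    class AsyncListener implements CacheEntryUpdatedListener<Integer, String> {
        @Override public void onUpdated(
            Iterable<CacheEntryEvent<? extends Integer, ? extends String>> evts) {
            for (CacheEntryEvent<? extends Integer, ? extends String> e : evts)
                System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }

    ContinuousQuery<Integer, String> qry = new ContinuousQuery<>();
    qry.setLocalListener(new AsyncListener());
    // ignite.cache("myCache").query(qry); // start listening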
Hi,
not too long ago I tested Apache Ignite for my use case on OpenJDK 11. The
use case consists of writing cache entries with values of up to 800 MB in
size, the data itself being a simple string. After writing 5 cache
entries, 800 MB each, I noticed my heap space exploding up to 11 GB, while
Hello!
We have a lot of tests which do exactly that, and they don't seem to
exhibit that behavior. Please provide a reproducer.
Regards,
--
Ilya Kasnacheev
Tue, Nov 3, 2020 at 11:13, VeenaMithare:
> Hi Ilya,
>
> This is easy to reproduce. Have a server node and a client node in a
> cluster
Hi Ilya,
This is easy to reproduce. Have a server node and a client node in a
cluster. Stop and start the client immediately, so that the start happens
within the failure detection timeout (typically 10 seconds). You will see
these messages in the client log as it starts up the second time.
Let
Hi,
It would be great if you could share the reproducer.
BR,
Andrei
On 11/3/2020 10:17 AM, Humphrey wrote:
Let me summarize here. Working with
IgniteDataFrameSettings.OPTION_CONFIG_FILE(), configPath
seems to work fine.
But when I'm using the JDBC thin client connection (like connecting to