Hi Denis,
We already know that using the thin client introduces an extra network hop,
and to minimize its impact we've deployed a thick client node co-located
with our application; every application instance connects to the local
Ignite node.
We'd love to continue using the Ignite.NET thick cli
Hi,
I believe support for MongoDB 4.x is already implemented in
https://issues.apache.org/jira/browse/IGNITE-10847.
Also, I believe Ignite doesn't require a specific version of MongoDB. Have
you tried to install the latest 3.4.x version?
Thanks,
Stan
On Sun, Aug 25, 2019 at 7:04 PM Ashfaq Ahamed
Hi,
AFAICS this is not about the *protocol*, this is about *implementations* of
the protocol. I've followed the links and found this matrix of vulnerable
technologies:
https://vuls.cert.org/confluence/pages/viewpage.action?pageId=56393752
From this matrix, Ignite uses only Node.js in WebConsole,
Partition map exchange is an absolutely necessary procedure that cannot be
disabled; the functionality of all caches depends on it.
I checked, and cache destruction is performed as a part of a partition map
exchange, not the other way around. If you see that nodes join the cluster fast,
but cache de
Hi,
Is there a particular reason why replicated caches have their partition count
set to 512 by default?
I found this in
org.apache.ignite.internal.processors.cache.GridCacheUtils#initializeConfigDefaults(IgniteLogger,
CacheConfiguration, CacheObjectContext):V
if (cfg.getAffinity() =
Niels,
I believe the reason is the performance of the affinity function and the size
of GridDhtPartitionsFullMessage.
The affinity function needs to assign partitions to nodes. In the case of a
replicated cache, there are (number of partitions) x (number of nodes) pairs of
(node, partition) that need
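As a back-of-the-envelope illustration of that scaling argument (my own sketch, not Ignite code), the number of (node, partition) pairs a full partition map must describe grows linearly in both dimensions:

```java
// Rough sketch of the scaling argument above (illustration only, not Ignite
// code): a replicated cache's full partition map carries one entry per
// (node, partition) pair, so the message grows with partitions and with nodes.
public class PartitionMapPairs {
    static long fullMapPairs(int partitions, int nodes) {
        return (long) partitions * nodes;
    }

    public static void main(String[] args) {
        // 512 partitions (the replicated-cache default) on a 100-node cluster:
        System.out.println(fullMapPairs(512, 100));  // 51200 pairs
        // Doubling the partition count doubles the bookkeeping as well:
        System.out.println(fullMapPairs(1024, 100)); // 102400 pairs
    }
}
```

Under this reading, keeping replicated caches at 512 partitions rather than 1024 halves the per-exchange bookkeeping.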
Hi Stan,
Thanks for your response. I have tried this, but it has not fixed the
issue.
The gRPC server class was moved into the service, where the interface
methods "init", "execute" and "cancel" perform initialization of the
server, as well as starting and stopping it, respectively. But this was already
imp
Hello. Why should the number of partitions be even?
Do you perhaps have links to documentation about this?
Thanks.
> On 26 Aug 2019, at 15:47, Denis Mekhanikov wrote:
>
> Niels,
>
> I believe, that the reason is performance of an affinity function and a size
> of GridDht
Hi All,
In the link:
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-LocalCrashRecovery
Following is mentioned about the Estimation:
What is the est. maximum data volume to be written on 1 checkpoint? Is it the
size of 1 w
Eduard,
I tried the following to reproduce segfaults according to your description:
* Start Ignite server node
* Infinite loop, perform Cache.Put operations
* In the same loop access Process.HandleCount property
On Ubuntu 16.04, .NET Core 2.2.103 I see no crashes.
Can you please provide more deta
Hi,
In normal circumstances a checkpoint is triggered on a timeout, e.g. every 3
minutes (controlled by checkpointFrequency). So the size of the checkpoint
is the amount of data written/updated in a 3-minute interval.
The best way to estimate it in your system is to enable data storage
metrics (DataS
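The estimate above can be written out as simple arithmetic (my own sketch; the 10 MB/s write rate is a made-up number, and the real rate should come from the storage metrics mentioned above):

```java
// Back-of-the-envelope checkpoint size: data written per second times the
// checkpoint interval. Illustration only; the write rate is an assumption,
// measure the real one via data storage metrics.
public class CheckpointEstimate {
    static long estimateCheckpointBytes(long bytesWrittenPerSec, long checkpointFrequencyMs) {
        return bytesWrittenPerSec * checkpointFrequencyMs / 1000L;
    }

    public static void main(String[] args) {
        long assumedRate = 10L * 1024 * 1024; // hypothetical 10 MB/s of updates
        long threeMinutes = 3L * 60 * 1000;   // the 3-minute default interval
        // MB written between two checkpoints under these assumptions:
        System.out.println(estimateCheckpointBytes(assumedRate, threeMinutes) / (1024 * 1024)); // 1800
    }
}
```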
The message "Failed to deserialize object
[typeName=io.grpc.internal.InternalHandlerRegistry]"
means InternalHandlerRegistry is being sent between the nodes - which it
shouldn't be.
What you need to do is find where it is being sent. You shouldn't pass
any gRPC objects to any Ignite configuration, or, I
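One common way out, sketched below with plain JDK serialization (my own illustration; `NonSerializableServer` is a hypothetical stand-in for `io.grpc.Server`, which cannot be marshalled), is to mark such a field `transient` so it never travels with the job:

```java
import java.io.*;

// Hypothetical sketch: if a job object holds a reference to a non-serializable
// resource (like a gRPC server), marking the field transient keeps it out of
// the bytes sent to remote nodes. NonSerializableServer stands in for
// io.grpc.Server here.
class NonSerializableServer { /* pretend this is io.grpc.Server */ }

class MyJob implements Serializable {
    // transient: skipped during serialization, so remote nodes never see it
    transient NonSerializableServer server = new NonSerializableServer();
    String payload = "work item";
}

public class TransientDemo {
    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        new ObjectOutputStream(bos).writeObject(new MyJob()); // succeeds: server is skipped
        MyJob copy = (MyJob) new ObjectInputStream(
            new ByteArrayInputStream(bos.toByteArray())).readObject();
        System.out.println(copy.server == null); // true: transient field comes back as null
        System.out.println(copy.payload);        // work item
    }
}
```

On the remote side the transient field is null, so the job has to look up or recreate the resource locally instead of carrying it along.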
Hello,
WAL active folder size calculation is correct, that is walSegmentSize *
walSegments = 64 MB * 10 = 640 MB.
However, you may completely disregard estimations for the WAL archive, since
as of version 2.7 there is a configuration property to limit the WAL archive
size in bytes, which is obviously m
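Spelled out as code, the arithmetic above is simply (my own sketch of the numbers in this thread):

```java
// Active WAL folder size = segment size * number of segments, per the
// calculation above: 64 MB * 10 segments = 640 MB.
public class WalSizing {
    static long activeWalBytes(long segmentSizeBytes, int segments) {
        return segmentSizeBytes * segments;
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        System.out.println(activeWalBytes(64 * mb, 10) / mb); // 640
    }
}
```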
Hello,
I am working on a project and we have run into two related problems while
doing Map_Reduce on Ignite Filesystem Cache.
We were originally on Ignite 2.6 but upgraded to 2.7.5 in an unsuccessful
bid to resolve the problem.
We have a deadlock in our map-reduce process and have reproduced it
Thanks for your response.
On Mon, Aug 26, 2019 at 10:59 PM Anton wrote:
> Hello,
>
>
>
> WAL active folder size calculation is correct, that is walSegmentSize *
> walSegment = 64 * 10 = 640Mb.
>
>
>
> However, you may completely disregard estimations for WAL archive, as
> since latest 2.7 versio
Hi Stan,
Thanks for the info! Indeed this was a big mistake, and your
explanation made it clear what the error was.
I was passing an instance of the gRPC server to the compute job, a part of
the code which I had neglected to include in the description above.
Cheers!
On Mon, Aug 26, 2019 at 7:21