in the client configuration? If you have any
> caches there, then PME will be triggered.
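For illustration, a minimal sketch of a thick-client configuration that declares no caches statically, so joining should not force cache creation cluster-wide; the cache name "existingCache" is hypothetical:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NoCachePmeClient {
    public static void main(String[] args) {
        // Thick-client configuration with no cacheConfiguration entries.
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);

        try (Ignite client = Ignition.start(cfg)) {
            // Use caches that already exist on the server nodes instead of
            // declaring them statically in the client configuration.
            System.out.println(client.cache("existingCache")); // null if absent
        }
    }
}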
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 4 Feb 2021 at 14:37, Shiva Kumar:
>
>> Even I observed the same: during thick client or visor joining the cluster,
>> it looks like something
Even I observed the same: during thick client or visor joining the cluster,
it looks like something related to PME happens (but not data rebalancing), and
it also puts a lock on a WAL archive segment which never gets
released, causing the WAL disk to run out of space.
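One setting worth checking for a growing archive is the WAL archive size cap; a minimal sketch, assuming Ignite 2.7+ where setMaxWalArchiveSize is available (the 4 GiB value is only an example, and segments still reserved by a checkpoint are not deleted regardless of the cap):

import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class WalArchiveCap {
    public static IgniteConfiguration configure() {
        // Cap the WAL archive so old segments are cleaned up before the disk fills.
        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setMaxWalArchiveSize(4L * 1024 * 1024 * 1024); // bytes (example value)

        return new IgniteConfiguration().setDataStorageConfiguration(storage);
    }
}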
On Thu, 4 Feb, 2021, 4:59 pm
Hi Ilya,
My goal is to deactivate the cluster, not restart it! There is an issue
with deactivating the cluster in my deployment, so I am going with a restart.
I have the Ignite deployment on Kubernetes, and during deactivation the
entire cluster hangs, and even the request to deactivate (REST or control.sh) hangs.
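For reference, a minimal sketch of requesting deactivation from a client programmatically, assuming Ignite 2.9+ where ClusterState is available (on older versions cluster().active(false) plays the same role); "client-config.xml" is a hypothetical path:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.cluster.ClusterState;

public class DeactivateCluster {
    public static void main(String[] args) {
        // Connect as a client and ask the cluster to go INACTIVE.
        try (Ignite client = Ignition.start("client-config.xml")) {
            client.cluster().state(ClusterState.INACTIVE);
        }
    }
}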
Hi all,
I am trying to deactivate a cluster that a few clients are connected to
over JDBC.
These client connections insert records into many tables and run some
long-running queries.
At this time I am trying to deactivate the cluster [basically trying to
take data
id. I don't really understand
> why you want to run a fully distributed cross join on these tables - it
> doesn't make sense; moreover, it will lead to a lot of data movement
> between nodes.
>
> What are you trying to achieve?
>
> Best Regards,
> Evgenii
>
>
> Denis
>
> Wed, 25 Sep 2019 at 02:30, Denis Magda:
> >
> > Shiva,
> >
> > Does this issue still exist? Ignite Dev, how do we debug this sort of
> > thing?
> >
> > -
> > Denis
> >
> >
> > On Tue, Sep 17, 2019 at 7:22 AM Shiva Kumar
Hi all,
I have deployed a 3-node Ignite cluster with native persistence on Kubernetes,
and one of the nodes crashed with the below error message:
*org.h2.message.DbException: General error: "class
org.apache.ignite.internal.processors.cache.persistence.tree.CorruptedTreeException:
Runtime failure on
Hi all,
I am trying to do a simple cross join on two tables with non-collocated
data (without an affinity key).
This non-collocated distributed join always fails with the error message:
*"java.sql.SQLException: javax.cache.CacheException: Failed to prepare
distributed join query: join condition does
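That error typically points to non-collocated joins not being enabled on the connection and/or a missing index on the join column; below is a hedged JDBC thin-driver sketch (host, table, and column names are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class NonCollocatedJoin {
    public static void main(String[] args) throws Exception {
        // distributedJoins=true enables non-collocated (distributed) joins.
        String url = "jdbc:ignite:thin://ignite-host:10800;distributedJoins=true";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // An index on the join column helps the engine prepare the plan.
            stmt.executeUpdate("CREATE INDEX IF NOT EXISTS idx_b_ref ON TableB (ref_id)");

            try (ResultSet rs = stmt.executeQuery(
                     "SELECT a.id, b.id FROM TableA a JOIN TableB b ON a.id = b.ref_id")) {
                while (rs.next())
                    System.out.println(rs.getLong(1) + " -> " + rs.getLong(2));
            }
        }
    }
}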
Hi all,
I have deployed Ignite on Kubernetes and configured the liveness and readiness
probes like this:
readinessProbe:
  tcpSocket:
    port: 10800
  initialDelaySeconds: 10
  periodSeconds: 2
  failureThreshold: 60
livenessProbe:
Hi dmagda,
I am trying to drop a table which has around 10 million records, and I am
seeing "*Out of memory in data region*" error messages in the Ignite logs, and
the Ignite node [Ignite pod on Kubernetes] is restarting.
I have configured 3GB for the default data region, 7GB for the JVM, and 15GB
total for
space used by these unwanted pages.
Here is the developers' discussion link:
http://apache-ignite-developers.2346864.n4.nabble.com/How-to-free-up-space-on-disc-after-removing-entries-from-IgniteCache-with-enabled-PDS-td39839.html
On Mon, Sep 9, 2019 at 11:53 PM Shiva Kumar
wrote:
> Hi
>
> I guess that the generated WAL will take up this disk space. Please read about
> WAL here:
>
> https://apacheignite.readme.io/docs/write-ahead-log
>
> Please provide the size of every folder under /opt/ignite/persistence.
>
> BR,
> Andrei
> On 9/6/2019 9:45 PM, Shiva Kumar wrote:
>
>
Hi all,
I have set the cache expiry policy like this:
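The snippet that followed was cut off in the archive; as a stand-in only, here is a hedged illustration of one common way an expiry policy is set (not the poster's actual configuration; the cache name and duration are made up):

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.configuration.CacheConfiguration;

public class ExpirySketch {
    public static CacheConfiguration<Long, String> configure() {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");
        // Entries expire 30 minutes after creation; eager TTL removes them in background.
        ccfg.setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(
            new Duration(TimeUnit.MINUTES, 30)));
        ccfg.setEagerTtl(true);
        return ccfg;
    }
}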
l memory utilization, so it’s possible
> that nodes will crash because of out-of-memory errors.
> So, it’s better to follow the given recommendation.
>
> If you want us to investigate reasons of the failures, please provide logs
> and configuration of the failed nodes.
>
>
Hi all,
We are testing a field use case before deploying in the field, and we want to
know whether the below resource limits are suitable for production.
There are 3 nodes (3 pods on Kubernetes) running, each having the below
configuration:
DefaultDataRegion: 60GB
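For context, a hedged Java sketch of how a persistent default data region of roughly that size could be declared (the 60GB value mirrors the figure above; this is illustrative, not sizing advice):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class DataRegionSketch {
    public static IgniteConfiguration configure() {
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("Default_Region")
            .setPersistenceEnabled(true)
            .setMaxSize(60L * 1024 * 1024 * 1024); // 60 GB (example)

        return new IgniteConfiguration().setDataStorageConfiguration(
            new DataStorageConfiguration().setDefaultDataRegionConfiguration(region));
    }
}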