Hi Tom,
In the case of a replicated cache, Ignite plans the execution of the SQL
query across the whole cluster by splitting it into multiple map queries and a
single reduce query.
This can introduce communication overhead, because the "reduce" node has to
collect data from multiple nodes.
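As a rough illustration (the cache name, the Person table, and an already
started Ignite instance named "ignite" are assumptions for this sketch), a
distributed SqlFieldsQuery that goes through this map/reduce path looks like:

    IgniteCache<?, ?> cache = ignite.cache("personCache");

    // Ignite splits this into map queries on the data nodes and a single
    // reduce step on the querying node, which gathers the partial results.
    SqlFieldsQuery qry = new SqlFieldsQuery(
        "SELECT city, COUNT(*) FROM Person GROUP BY city");

    try (FieldsQueryCursor<List<?>> cur = cache.query(qry)) {
        for (List<?> row : cur)
            System.out.println(row);
    }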
Please show
Hi,
I am running Ignite 2.3 using Cassandra as my persistence store.
I got an unmarshalling error when a server node tried to unmarshal an object of an old
version from Cassandra.
This is the scenario:
1. Object of ClassA (older version) is serialized and persisted into
Cassandra
2.
Hi,
an object takes 200 bytes on heap and cache has such 50 million objects
stored in it.
Is it ok to calculate cache size as follows?
Cache size in bytes = object size * no of objects
Cache size in bytes = 200 * 50 million
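(For reference, plugging in those numbers gives 200 * 50,000,000 =
10,000,000,000 bytes, roughly 9.3 GiB, before any per-entry overhead.)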
Thanks,
Prasad
Got it, if you serialized with a given builder once, this type will be cached
globally in the internal cache. So the behaviour you observe is expected and
correct: you can change schema dynamically, but you have to preserve
compatibility. This means that you can change and remove fields, but not
Hi Andrey,
I see Fix version 2.7 in Jira:
https://issues.apache.org/jira/browse/IGNITE-8659
This is a critical bug: bouncing a server node at the wrong time causes
a catastrophe.
In fact, this means no availability - I had to clean the data folders to start my
cluster after that.
BR, Oleksandr
On
Thanks for the recommendation, but we already identified and addressed the
issues with GC pauses in the JVM, and now we cannot find any long GC activity
around the time of the node failure due to network segmentation (please find
attached a screenshot of GC activity from the Dynatrace agent).
From the
Your plan sounds legit.
Basically, Ignite splits your files into chunks of data and puts them into
an ordinary distributed cache.
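As a rough sketch of how that looks from the API side (the IGFS name "igfs"
and the path are assumptions, and "data" stands for the bytes you buffered):

    // Sketch method: "data" is the buffered payload to forward later.
    void bufferAndForward(Ignite ignite, byte[] data) throws IOException {
        IgniteFileSystem fs = ignite.fileSystem("igfs");
        IgfsPath path = new IgfsPath("/uploads/request-1.bin");

        // Writing: IGFS splits the stream into blocks stored in a cache.
        try (OutputStream out = fs.create(path, true)) {
            out.write(data);
        }

        // Reading it back later, e.g. to retry the call to the remote service.
        try (InputStream in = fs.open(path)) {
            // forward "in" to the remote service
        }
    }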
You can find general recommendations on configuring the Ignite cluster in
the documentation: https://apacheignite.readme.io/docs/
Regarding IGFS: you are going to need
Hi,
Where did you get those images? Do you see version 2.5.0 in the logs of all
your instances?
Thanks!
-Dmitry
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Your Kubernetes image and your jar contain different versions of Ignite.
Check the logs to see which one has version 2.4.0. It's printed at
startup.
Denis
Tue, 26 Jun 2018 at 15:51, wadhwasahil:
> I am trying to connect my spark client with ignite cluster version 2.5.0.
> When I run
Hi,
I've recently started using the Ignite FileSystem (igfs) in our API to
fully buffer an incoming stream of byte[] values (a chunked InputStream). I'm
doing this because that stream then needs to be sent along to another remote
service, and I'd like the ability to retry without telling the
Hi,
Thread dumps look healthy. Please share the full logs from the time when you
took those thread dumps, or take new ones (thread dumps + logs).
Thanks!
-Dmitry
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi, you can go one of two ways.
The first: since metadata is also cached, you can clear it by removing the
ignite/work/marshaller/ directory (and clearing all persistent data, if it
exists) and restarting the cluster.
The second is to remove the field you want to change first and then
add it again, like this:
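A minimal sketch of that second approach, assuming a cache named "myCache"
accessed in binary form and a field "amount" whose type changes from int to
long (all names and values are illustrative):

    IgniteCache<Long, BinaryObject> cache =
        ignite.cache("myCache").<Long, BinaryObject>withKeepBinary();

    // Step 1: rewrite the entry without the field you want to change.
    BinaryObject withoutField = cache.get(1L).toBuilder()
        .removeField("amount")
        .build();
    cache.put(1L, withoutField);

    // Step 2: add the field back with a value of the new type.
    BinaryObject updated = withoutField.toBuilder()
        .setField("amount", 42L) // now written as a long
        .build();
    cache.put(1L, updated);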
Hello. I have a problem with BinaryObjectBuilder.
I want to change a field's type in BinaryObjectBuilder, but even after
destroying the cache and creating a new builder with the new field type, I get
the old type from the ignite.binary().type().fieldType() method. I also get an
exception saying the field type is wrong when
I am trying to connect my Spark client to an Ignite cluster, version 2.5.0,
when I run my spark-submit job:
sudo /opt/spark-2.3.0-bin-hadoop2.7/bin/spark-submit --master
k8s://https://35.192.214.68 --deploy-mode cluster --name sparkIgnite --class
org.blk.igniteSparkResearch.ScalarSharedRDDExample
This is an example of data stored in the cache, taken from Visor.
java.lang.Long | 2147604480 | o.a.i.i.binary.BinaryObjectImpl |
Cache.UserObjectCacheItem [hash=684724513, UserName=omguser,
LastUpdated=System.DateTime [idHash=658840403, hash=1247822310,
ticks=636656011684408212,
To Denis:
About measuring.
I use the cluster only for this type of cache, and nothing more. So it is
scary that it takes so much memory.
I have already read the docs about memory metrics, and I'm worried about this
warning:
> Metrics collection is not a free operation and might affect the
> performance
Hi!
The overhead for an entry in Ignite is much bigger than for an SQL Server
record: you have something like 200 bytes of overhead for each entry, and the
binary format used by SQL Server is more compact than the binary
storage format used in Ignite, so if your records are small the
size
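To put that in perspective with a rough, illustrative calculation: with ~200
bytes of overhead per entry, a record that serializes to 100 bytes ends up at
roughly 100 + 200 = 300 bytes in the cache, about a 3x increase.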
Michael,
Take a look at the following page:
https://apacheignite.readme.io/docs/memory-metrics
To monitor off-heap memory usage, you can use the
DataRegionMetrics#physicalMemorySize metric.
If you multiply the result by the DataRegionMetrics#pagesFillFactor metric,
you will get an approximation
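For example, a sketch that assumes metrics are enabled for the data region
(DataRegionConfiguration#setMetricsEnabled(true)) and a started instance
named "ignite":

    for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
        // physicalMemorySize: RAM actually allocated for the region's pages.
        // pagesFillFactor: how full those pages are on average.
        long approxUsedBytes =
            (long) (m.getPhysicalMemorySize() * m.getPagesFillFactor());

        System.out.println(m.getName() + ": ~" +
            approxUsedBytes / (1024 * 1024) + " MB in use");
    }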
Hi Naresh,
Actually, any JVM process hang could lead to segmentation. If a node is
not responsive for longer than failureDetectionTimeout, it will be kicked
out of the cluster to prevent performance degradation across the whole grid.
It works in the following scenario: let's say we have 3 nodes in a
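For instance, a sketch of raising that timeout in code so that longer pauses
do not get a node dropped (the value here is just an example):

    IgniteConfiguration cfg = new IgniteConfiguration();

    // A node unresponsive for longer than this is removed from the cluster.
    cfg.setFailureDetectionTimeout(30_000); // ms; the default is 10 seconds

    Ignite ignite = Ignition.start(cfg);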
Hi dear dr. Allcome ;)
I have a table in MSSQL whose size is 4 GB.
I've transferred it to a cache, and it takes more than 80 GB (5 nodes with
22 GB off-heap each).
Can you advise me how I can track the size of the caches for monitoring?
Regards,
Michael.
--
Sent from:
Hello Ravi,
Were you able to solve this issue?
If so, I kindly request that you post the correct configuration for a
Kerberos-enabled Ignite installation.
Thanks in Advance!!!
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> ThreadAbortException: Thread was being aborted.
This is not related to Ignite. Thread.Abort is called on the .NET side, and
Ignite never does that.
Please check if ASP.NET or some other framework is involved.
On Mon, Jun 25, 2018 at 10:57 PM aealexsandrov
wrote:
> Very strange. By default,
Attaching the Ignite config. The failing code is a simple execution of a
Hibernate query. Here is a bigger stack trace:
Caused by: java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast
to [Ljava.io.Serializable;
at
org.hibernate.cache.internal.StandardQueryCache.get(StandardQueryCache.java:189)
at