Check your configuration. This code works perfectly well for me. If page
eviction mode is set to disabled, an IgniteOutOfMemoryException (IOOME) will be thrown:
IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
DataStorageConfiguration dataStorageConfig = new DataStorageConfiguration();
long
Hi,
Probably the best choice would be Cassandra, as Ignite has out-of-the-box
integration with it [1].
[1]
https://apacheignite-mix.readme.io/v2.5/docs/ignite-with-apache-cassandra
Thanks!
-Dmitry
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi,
It's hard to tell what's going wrong from your question.
Please attach full logs and thread dumps from all server nodes.
Thanks!
-Dmitry
Hi,
Yes, Ignite will send messages to all nodes, but you may use a cluster group as a filter:
ignite.message(ignite.cluster().forAttribute("topic1", Boolean.TRUE));
In this case messages are sent to all nodes of the cluster group; in this
example, only to nodes that have the attribute "topic1" set [1].
[1]
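A minimal sketch of sending and receiving over such a cluster group (the topic name "myTopic" and the listener wiring are illustrative, not from the original reply):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteMessaging;
import org.apache.ignite.cluster.ClusterGroup;

public class MessagingSketch {
    static void send(Ignite ignite) {
        // Cluster group containing only nodes with the "topic1" attribute set to true.
        ClusterGroup grp = ignite.cluster().forAttribute("topic1", Boolean.TRUE);
        IgniteMessaging messaging = ignite.message(grp);
        messaging.send("myTopic", "Hello from " + ignite.cluster().localNode().id());
    }

    static void listen(Ignite ignite) {
        // On receiving nodes: return true to keep listening, false to unsubscribe.
        ignite.message().localListen("myTopic", (nodeId, msg) -> {
            System.out.println("Got message from " + nodeId + ": " + msg);
            return true;
        });
    }
}
```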
Hi,
The thread dumps look healthy. Please share the full logs from the time when
you took those thread dumps, or take new ones (thread dumps + logs).
Thanks!
-Dmitry
Hi,
Where did you get those images? Do you see version 2.5.0 in the logs of all
your instances?
Thanks!
-Dmitry
Hi Calvin,
BinaryMarshaller can solve that issue, while introducing a few more.
First of all, you will need to disable the compact footer so that each
BinaryObject has its schema in its footer.
If you just need to put/get POJOs, everything will be fine. But you need to
enlist your POJO in BinaryConfiguration
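Disabling the compact footer could look roughly like this (a sketch; the rest of the configuration is omitted):

```java
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Disable compact footers so that every BinaryObject carries its full schema.
BinaryConfiguration binaryCfg = new BinaryConfiguration();
binaryCfg.setCompactFooter(false);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setBinaryConfiguration(binaryCfg);
```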
Hi Naresh,
Actually, any JVM process hang can lead to segmentation. If a node is
not responsive for longer than failureDetectionTimeout, it will be kicked
out of the cluster to prevent grid-wide performance degradation.
It works in the following scenario. Let's say we have 3 nodes in a
Hi Oleksandr,
It's OK for discovery, and this message is printed only in debug mode:
if (log.isDebugEnabled())
    log.error("Exception on direct send: " + e.getMessage(), e);
Just turn off debug logging for the discovery package:
org.apache.ignite.spi.discovery.tcp.
Thanks!
Hi,
What is your configuration? Check the WAL mode and the path to persistence.
Thanks!
-Dmitry
Hi,
Just because:
1) not all users build their apps from scratch; they might have legacy
code built on a Cassandra DB;
2) native persistence appeared much later than the Cassandra module, and there
is no point in removing it now;
3) it's always better to offer users more choices.
Anyway,
Hi Jose,
1. Yep, I would say you'll get more benefit with persistence, because if you
split between real machines, each can keep more hot data in memory and each
has a separate hard drive. The more data you can fit into RAM and the more
hard drives that can work in parallel, the better performance you get.
Hi,
Slight degradation is expected in some cases. Let me explain how it works.
1) The client sends a request to each node (if you have query parallelism > 1,
the number of requests is multiplied by that number).
2) Each node runs that query against its local dataset.
3) Each node responds with 100 entries.
1) This is applicable to Ignite. As it grew from GridGain, GridGain references
may sometimes appear in the docs because they were missed during removal.
2) Yes, and I would say the overhead could be even bigger. But anyway I cannot
say definitely how much, because Ignite doesn't store data sequentially;
there are a lot of nuances.
3) Ignite
Naresh,
GC logs show not only GC pauses, but system pauses as well. Try these
parameters:
-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime
Thanks!
-Dmitry
Hi,
Please attach thread dumps from all nodes taken at the moment of hang.
Thanks!
-Dmitry
Hi Sharavya,
This exception means that the client node is disconnected from the cluster and
is trying to reconnect. You may get the reconnect future from it
(IgniteClientDisconnectedException.reconnectFuture().get()) and wait until the
client is reconnected.
So it looks like you're trying to create a cache on
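A rough sketch of the wait-for-reconnect pattern (cache and key are placeholders; note that cache operations typically wrap the disconnect exception in a CacheException, so we inspect the cause):

```java
import javax.cache.CacheException;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteClientDisconnectedException;

public class ReconnectSketch {
    static void putWithReconnect(IgniteCache<Integer, String> cache, int key, String val) {
        try {
            cache.put(key, val);
        } catch (CacheException e) {
            if (e.getCause() instanceof IgniteClientDisconnectedException) {
                IgniteClientDisconnectedException cause =
                    (IgniteClientDisconnectedException) e.getCause();
                cause.reconnectFuture().get(); // blocks until the client reconnects
                cache.put(key, val);           // retry after reconnect
            } else
                throw e;
        }
    }
}
```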
Hi Ranjit,
Those metrics should be correct. You can also check [1], because Ignite
always keeps data off-heap; if on-heap caching is enabled, it additionally
caches entries in the Java heap.
[1] https://apacheignite.readme.io/docs/memory-metrics
Thanks!
-Dmitry
Hi Shravya,
To understand what's going on in your cluster, I need full logs from all
nodes. Please share all the files if possible.
Thanks!
-Dmitry
Hi,
You may use a filter for that, for example:
ContinuousQuery qry = new ContinuousQuery<>();
final Set nodes = new HashSet<>(client.cluster().forDataNodes("cache")
    .forHost(client.cluster().localNode()).nodes());
qry.setRemoteFilterFactory(new
Hi,
This exception says that the client node was stopped, but by default it should
wait for the servers. In other words, it waits for reconnect; in this case it
throws IgniteClientDisconnectedException, which contains a future on which you
may wait for the reconnect event.
You may locally listen for
Hi,
A transaction might not be the optimal solution here, as it is optimistic by
default and may throw an optimistic transaction exception. I believe the
best solution would be to use an EntryProcessor [1]: it atomically modifies
the entry, on both TRANSACTIONAL and ATOMIC caches, on the affinity data node (that
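A minimal EntryProcessor sketch (a hypothetical counter increment; cache and key names are illustrative):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;

public class EntryProcessorSketch {
    // Atomically increments the value stored under the given key.
    static void increment(IgniteCache<String, Integer> cache, String key) {
        cache.invoke(key, (CacheEntryProcessor<String, Integer, Void>) (entry, args) -> {
            Integer cur = entry.getValue();
            entry.setValue(cur == null ? 1 : cur + 1);
            return null; // runs on the affinity node, no value is shipped back
        });
    }
}
```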
Jet,
Yep, this should work, but meanwhile this ticket remains unresolved [1].
[1] https://issues.apache.org/jira/browse/IGNITE-5371
Thanks!
-Dmitry
Hi Prasad,
If you started Ignite with IgniteSpringBean or IgniteSpring, try the
@SpringApplicationContextResource [1] annotation. Ignite's resource injector
will use the Spring context to set a dependency annotated with it. But I'm not
sure this will work with CacheStore; it should be rechecked.
[1]
Hi Prasad,
This approach will work with multiple keys if they are collocated on the
same node and you start/stop the transaction in the same thread/task. There is
no other workaround.
Thanks!
-Dmitry
Hi Jet,
Full-text search creates Lucene in-memory indexes, and after a restart they are
not available, so you cannot use it with persistence. @QuerySqlField enables
DB indexes that are able to work with persisted data, and there is probably no
way to rebuild them for now.
Thanks!
-Dmitry
Hi Bryan,
You need to use a StatefulSet [1]; Kubernetes will start the nodes one by one
as each comes to a ready state.
[1] https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/
Thanks!
-Dmitry
Hi,
It's hard to say why it happens. I'm not familiar with MyBatis and don't
actually know whether it shares a JDBC connection between threads. It would be
great if you could provide a reproducible example; that would help debug the
issue.
Thanks!
-Dmitry
Hi,
Discovery events are processed in a single thread, and cache creation uses
discovery custom messages. Trying to create a cache in the discovery thread
will lead to a deadlock, because the discovery thread will wait in your lambda
instead of processing messages.
To avoid it, just start another thread in
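A sketch of the workaround (the cache name and event type are illustrative; this also assumes EVT_NODE_JOINED is enabled via IgniteConfiguration.setIncludeEventTypes()):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.events.EventType;

public class DiscoveryListenerSketch {
    static void createCacheOnJoin(Ignite ignite) {
        ignite.events().localListen(evt -> {
            // Never call getOrCreateCache() here: this lambda runs in the
            // discovery thread. Hand the work off to another thread instead.
            new Thread(() -> ignite.getOrCreateCache("myCache")).start();
            return true; // keep listening
        }, EventType.EVT_NODE_JOINED);
    }
}
```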
Hi,
Anonymous and inner classes hold a link to the outer class object and might
bring it to the marshaller. When you make it a static inner or a separate
class, you're explicitly saying that you don't need such links.
In thread dumps you need to look for waiting or blocked threads. In your
case, in the service
Hi,
There are a few options:
1) You need to have backups to survive a node loss. [1]
2) You may enable persistence to survive a grid restart and store more data
than fits in memory. [2]
3) Check out the nohup command [3]
[1] https://apacheignite.readme.io/docs/primary-and-backup-copies
[2]
Hi,
It looks like your classes don't exist on all nodes. Please check that all the
classes you're using in the cache are available on all nodes.
Thanks!
-Dmitry
Glad to hear that it was helpful! I wrote the example right in the email, so I
didn't have a compiler to check it :)
Thanks!
-Dmitry
Hi,
TTL fixes are not included in 2.6 as it was an emergency release. You'll
need to wait for 2.7.
https://issues.apache.org/jira/browse/IGNITE-5874
https://issues.apache.org/jira/browse/IGNITE-8503
https://issues.apache.org/jira/browse/IGNITE-8681
Hi,
It might be an issue with deactivation. Try updating to 2.6 or wait for 2.7.
For now, just skip cluster deactivation. Once you have formed a baseline
topology and finished loading data, just enable WAL logging for all caches.
When the log is enabled successfully, you can safely stop the nodes.
Next time, when all
Hi,
The rules of field naming are defined by the BinaryIdMapper interface. By
default the BinaryBasicIdMapper implementation is used, which by default
converts all field names to lower case. So Ignite doesn't support the same
field names in different cases, as it will treat them as the same field.
But you can
Hi Akash,
1) Actually, exchange is a short process during which nodes remap partitions.
But Ignite uses late affinity assignment, which means the affinity
distribution is switched only after rebalancing is completed. In other words,
after rebalancing it will atomically switch the partition distribution.
But you don't
Hi,
It is defined by the AffinityFunction [1]. By default there are 1024
partitions; affinity automatically calculates the nodes that keep the required
partitions and minimizes rebalancing when the topology changes (nodes join or
leave).
Hi Akash,
How do you measure partition distribution? Can you provide the code of that
test? I assume you get the partitions before the exchange process is finished.
Try a 5-second delay after all nodes are started and check again.
Thanks!
-Dmitry
Hi,
I'm not sure that nightly builds are updated regularly, but you should give it
a try. The biggest risk is that a nightly build could contain bugs that will
be fixed in the release.
Thanks!
-Dmitry
Hi,
Where did you find it? It might be a broken link.
Thanks!
-Dmitry
Hi,
I've opened a ticket for this [1]. It seems a LOCAL cache keeps all entries
on-heap. If you use only one node, switch to PARTITIONED; if more than one,
use PARTITIONED + a node filter [2]
[1] https://issues.apache.org/jira/browse/IGNITE-9257
[2]
Hi,
1) You need to add the JetBrains annotations at compile time [1].
2) The imports depend on what you are using :) It's hard to say whether your
imports are enough. Add ignite-core to your plugin dependencies.
I don't think there are other examples besides that blog post.
[1]
Hi,
Nice work, thank you! I'm sure it will be very useful. Looking forward to
your contributions to the Apache Ignite project ;)
Thanks!
-Dmitry
Hi,
Ignite by default uses the Rendezvous hashing algorithm [1], and
RendezvousAffinityFunction is the implementation responsible for partition
distribution [2]. This significantly reduces traffic on partition rebalancing.
[1] https://en.wikipedia.org/wiki/Rendezvous_hashing
[2]
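Tuning the affinity function could look like this (a sketch; the cache name and partition count are illustrative, the default is 1024 partitions):

```java
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

// excludeNeighbors = false, 512 partitions instead of the default 1024.
CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setAffinity(new RendezvousAffinityFunction(false, 512));
```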
Hi,
I think the best way here would be to read items directly from Kafka, process
and store them in a cache, and remember the Kafka stream offset in another
cache. If the node crashes, your service can start from the last point
(offset).
Thanks!
-Dmitry
Hi,
It looks like most of the time transactions in the receiver are waiting for
locks. Any lock serializes parallel code, and in your case I don't think it's
possible to tune throughput with settings, because ten transactions may be
waiting for one to finish. You need to change the algorithm.
Hi,
Looks like it was killed by the kernel. Check the logs for the OOM killer:
grep -i 'killed process' /var/log/messages
If the process was killed by Linux, correct your config; you might have
allocated too much memory for Ignite page memory, so set lower values [1]
If not, try to find it in the logs by PID; maybe it was
Hi,
Yes, you're right, it was missed during refactoring. I've created a ticket
[1]; you may fix it and contribute to Apache Ignite :)
[1] https://issues.apache.org/jira/browse/IGNITE-9259
Thanks!
-Dmitry
Hi,
Dynamic schema changes are available only via SQL/JDBC [1].
BTW, caches created via SQL can be accessed from the Java API if you add the
SQL_PUBLIC_ prefix to the table name. For example: ignite.cache("SQL_PUBLIC_TABLENAME").
[1] https://apacheignite-sql.readme.io/docs/ddl
Thanks!
-Dmitry
Hi,
Could you please explain how you update the database? Do you use a CacheStore
with writeThrough or save manually?
Anyway, you can update data with a custom expiry policy:
cache.withExpiryPolicy(policy) [1]
[1]
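For example, a sketch using the standard JCache expiry policies (the 5-minute duration and the cache/key names are illustrative):

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.IgniteCache;

public class ExpirySketch {
    // Entries written through this decorated cache view expire
    // 5 minutes after creation.
    static void putExpiring(IgniteCache<String, String> cache, String key, String val) {
        cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)))
            .put(key, val);
    }
}
```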
Hi Akash,
First of all, SQL is not transactional yet; this feature will be available
only from 2.7 [1]. Your exception might be caused by the query being cancelled
or the node being stopped.
[1] https://issues.apache.org/jira/browse/IGNITE-5934
Thanks!
-Dmitry
Hi Calvin,
> Can I assume that BinaryMarshaller won't be used for any object embedded
> inside GridCacheQueryResponse?
Yes, because Binary can fall back to Optimized, but not vice versa.
> If I am correct, do you have any suggestion on how I can avoid this type
> of issue?
Probably you need
Hi,
There are no such limitations on peer class loading, but it was designed for,
and works with, compute jobs, remote filters, and queries only. All unknown
classes from tasks or queries will be deployed in the cluster with their
dependencies according to the deployment mode [1]. Actually, with a job Ignite sends
Hi,
You can, for example, set the SYNC rebalance mode for your replicated cache [1].
In that case all cache operations are blocked until rebalancing is finished,
and when it's done you'll have a fully replicated cache.
But this will block the cache on each topology change.
[1]
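A configuration sketch (the cache name is illustrative):

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("replicatedCache");
ccfg.setCacheMode(CacheMode.REPLICATED);
// Cache operations on a joining node block until rebalancing completes.
ccfg.setRebalanceMode(CacheRebalanceMode.SYNC);
```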
Hi,
A get() operation from a client always goes to the primary node. If you run a
compute task on other nodes, where each does a get() request for that key,
it will read the local value. REPLICATED has many other optimizations, for
example for SQL queries.
Thanks!
-Dmitry
Hi,
Usually it's enough to open the ports for discovery and communication; their
default values are 47500 and 47100 respectively.
If you run more than one node per machine, you'll need to open port ranges:
47500..47509 and 47100..47109.
You can always configure other values [1, 2]
[1]
Hi Svonn,
I'm not sure that I properly understand your issue. Could you please provide
a problematic code snippet?
> is the policy also deleting the Map
Yes, if it was stored as a value.
Thanks!
-Dmitry
This duplicates
http://apache-ignite-users.70518.x6.nabble.com/Strange-node-fail-td21078.html.
Hi Ray,
If your JVM process consumes more memory than is available, swapping may start
and freeze the JVM, and as a consequence it will be thrown out of the cluster.
Check your free memory, disable swapping if possible, or increase
IgniteConfiguration.failureDetectionTimeout.
To check that guess you may use
Hi Naveen,
Unfortunately I'm unable to reproduce that error. Could you please attach
simple code/project that fails with specified exception?
Thanks!
-Dmitry
Hi,
If you have read-through mode enabled for the cache, the entry will be loaded
on the next IgniteCache.get() operation, or when IgniteCache.loadCache() is
called.
The entry will then be evicted again according to your eviction policy.
Please note that the entry will not be counted in SQL queries if it was
Hi Prasad,
This issue could not be completed in 2.5 as it has a low priority. As a
workaround, you can wrap your executeEntryProcessorTransaction() method in an
affinity run [1], and no additional value transfer will happen.
[1]
Hi Christoph,
This metric is not implemented because of its complexity. But you can find out
how much space your cache or caches consume with DataRegionMetrics:
DataRegionMetrics drm = ignite.dataRegionMetrics("region_name");
long used = (long)(drm.getPhysicalMemorySize() *
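The truncated snippet above presumably scales the physical size by the page fill factor; a hedged completion (the region name is a placeholder, the result is an approximation, and region metrics must be enabled in the configuration):

```java
import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;

public class RegionMetricsSketch {
    static long approxUsedBytes(Ignite ignite) {
        DataRegionMetrics drm = ignite.dataRegionMetrics("region_name");
        // Physical memory scaled by how full the pages are: a rough
        // estimate of the space actually used by data.
        return (long) (drm.getPhysicalMemorySize() * drm.getPagesFillFactor());
    }
}
```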
Hi Ray,
I think the only way to do it is to use
IgniteDataFrameSettings.OPTION_CONFIG_FILE and set the path to an XML
configuration with all the settings you need. Here is a nice article about this [1]
[1]
Hi Dome,
Could you please attach full logs?
Thanks!
-Dmitry
Hi,
Blocked threads only indicate that there are no tasks to process in the pool.
Do you use persistence and/or indexing? Could you please attach your configs
and logs from all nodes? Please take a few sequential thread dumps while
throughput is low.
Thanks!
-Dmitry
Hi Ankit,
No, Ignite uses sun.misc.Unsafe for off-heap memory. Direct memory may be
used in the DirectBuffers used for internode communication. Usually the
defaults are quite enough.
Thanks!
-Dmitry
Hi,
Yes, for a complex transaction this workaround will not work. So you need to
either wait for the fix or avoid using EntryProcessor for now.
Thanks!
-Dmitry
Hi,
Ignite does cache queries; that's why the first request runs much longer than
the subsequent ones.
Thanks!
-Dmitry
Hi Praveen,
Stack traces only show that a thread is waiting for a response. To get the
full picture, please attach full logs and thread dumps taken at the moment of
the hang from all nodes. I need them from all nodes because the actual issue
happened on a remote node.
Also, according to the last exception, there might be
Hi Anshu,
This looks like a bug that was fixed in 2.4; try upgrading [1].
[1] https://ignite.apache.org/download.cgi
Thanks!
-Dmitry
I suppose that the issue is with updating timestamps rather than with WAL
writes. Try to run a load test and compare the hash sums of the files before
and after the test. Also check whether the WAL history grows.
Thanks!
-Dmitry
Hi,
The reduce step is done on the node the JDBC or thin client is connected to;
it can be either a client or a server node.
Thanks!
-Dmitry
Hi Calvin,
1. By "enlist" I mean, for example, if you want to see what fields are
present in a BinaryObject; in other words, if you want to work with
BinaryObject directly. For POJO serialization/deserialization this should
not be an issue at all.
2-3. In your case, you have a java.time.Ser