Hi Roger,
The recovery message in the logs is normal when a node was forcibly stopped.
It only means that data is being restored from the WAL on start.
Slow activation doesn't look OK though; it shouldn't take that long. Could you please
restart the grid with the -DIGNITE_QUIET=false JVM flag and share the logs?
Thanks!
Hi,
It looks OK, I don't see any problems here.
Thanks!
-Dmitry.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/TcpDiscoveryVmIpFinder-handle-IP-change-tp16156p16198.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
Hi,
Peer class loading was designed for compute task deployment; it's not
applicable to configuration classes or cache entries. So you have to copy
those classes to all nodes.
Thanks!
-Dmitry.
Hi,
Cache requests will keep going to the old nodes until rebalancing has
finished. After rebalancing, the new affinity mappings are applied and the old
ones are discarded, so you only request actual data. In detail: before sending a
request to a node, the requester checks the last valid topology version, and gets
Hi Priya,
I think you're looking for Ignite web console [1].
[1] https://ignite.apache.org/addons.html#web-console
Thanks!
-Dmitry.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Gridgain-Stat-Data-tp14750p14868.html
Hi,
It seems Ignite doesn't sort fields of binary objects by default. Use the
IGNITE_BINARY_SORT_OBJECT_FIELDS system property:
-DIGNITE_BINARY_SORT_OBJECT_FIELDS=true
or
System.setProperty("IGNITE_BINARY_SORT_OBJECT_FIELDS", "true");
Thanks!
-Dmitry
Yes, try Visor CLI, but it's very limited [1].
[1] https://apacheignite-tools.readme.io/docs/command-line-interface
Thanks!
-Dmitry.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Gridgain-Stat-Data-tp14750p14870.html
Hi Luqman,
ExtensionRegistry is used by IgnitePluginProcessor to replace internal
Ignite interface implementations, but it's only used for MessageFormatter,
MessageFactory and IoPool (search for usages of
IgnitePluginProcessor#extensions). So if you do not want to use your own
implementation for
Hi,
This value is just the sum of the available heap across the JVMs on which the
nodes are running. You may limit each node with -Xmx10g or -Xmx5g, for example.
Thanks!
-Dmitry
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Heap-Memory-Limit-Set-tp14560p14597.html
Hi,
You may check out the swap storage [1], but it will only be useful if the keys
are significantly smaller than the values, because the keys will be kept in memory.
[1] https://apacheignite.readme.io/v1.8/docs/off-heap-memory#swap-space
Thanks!
-Dmitry
Hi Chris,
You understand correctly that you need to subscribe to the node-joined event.
Here is a possible example of how to write it:
private static class Listener implements IgnitePredicate<Event> {
    @IgniteInstanceResource
    private Ignite ignite;

    @Override public boolean
Hi Chris,
Collecting all the data from a cache into a map is a bad idea, because it could be
quite large and you may get an OutOfMemoryError. ScanQuery was designed exactly
for that purpose [1]. You may set a filter and get the data via a cursor:
iterating over it (this will load entries from the grid in batches), or
Hi,
Probably the reason for the client reconnection was a long GC pause on it.
Try to use a scan query [1], it's lighter than SQL, and check that you have
enough heap.
[1] https://apacheignite.readme.io/v1.8/docs/cache-queries#scan-queries
Thanks!
-Dmitry
Hi,
You may use Ignite Web Console for that purpose [1].
[1] https://ignite.apache.org/features/datavisualization.html
Thanks!
-Dmitry.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Monitor-remote-cache-in-Ignite-tp13982p14022.html
Hi,
Is it possible to provide a simple reproducer?
Answering your second question, yes, you can use BinaryObject as persistence
class name.
Thanks!
-Dmitry.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-cassandra-module-issue-tp13808p14025.html
Hi,
Could you please share your test project?
Thanks!
-Dmitry.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Streaming-test-tp14039p14041.html
Hi,
It's usually a better approach to run client code on client nodes, because
they don't keep cache data and, as a consequence, their shutdown or restart
doesn't change the cache state.
Another way is to configure the cache with backups, e.g. in your case you have 2
server nodes, if you set
Hi,
You forgot to pass configuration to Ignition.start() method :)
Ignition.start(config);
-Dmitry
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/BinaryFieldIdentityResolver-tp12896p13018.html
Hi,
This is correct behavior: by design, cache evictions and expirations
don't affect the store, because the cache usually keeps only hot data. If you set
read-through, then Ignite will query the store when an entry isn't found in the
cache; otherwise it returns null, and to load that data back into the cache
Hi Sam,
There is no method in the public API that checks field existence, but you may
use readObject(). It will return null if the field was not found; otherwise
you'll need to unbox the value.
Thanks!
-Dmitry.
Hi Jeff,
Unfortunately, you cannot avoid this: you need to open those ports
(47100-47109) on the client node, because communication must be able to
initiate connections in both directions. You may, however, leave those ports open
only for the hosts where server nodes are running.
Thanks!
-Dmitry
Hi,
Sorry for the late response. It looks like you close the stream and stop the
node before any data is received. Try to remove the try-with-resources from
starting the client node, creating the cache and the streamer.
-Dmitry.
Hi,
To make the semaphore fail-safe in your case, you may set an AtomicConfiguration
with REPLICATED cache mode on the IgniteConfiguration, so the semaphore data will be
stored on every node.
Additionally, check out FifoQueueCollisionSpi [1], which will control job
execution for you.
[1]
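The AtomicConfiguration change above might look like this in Spring XML (a sketch; the REPLICATED cacheMode property is the part the advice relies on, the rest is standard boilerplate):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Store data structures (semaphores, atomics, etc.) in a REPLICATED
         cache so they survive the loss of any single node. -->
    <property name="atomicConfiguration">
        <bean class="org.apache.ignite.configuration.AtomicConfiguration">
            <property name="cacheMode" value="REPLICATED"/>
        </bean>
    </property>
</bean>
```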
Hi,
You can dynamically create and change binary objects with
BinaryObjectBuilder [1], but the fields that you want to query must be specified
on node startup, at least with QueryEntity [2]; altering internal query
tables at runtime is not possible for now.
[1]
Hi,
First of all, the Java task class should be available on the server; then you'll be
able to run the task using its class name [1].
In that task you may access the cache within a transaction using the standard Ignite
Java API [2].
You need to start the server only once, and then invoke the task as many times as you
want.
Hi,
Could you please attach thread dumps from all nodes?
Thanks!
-Dmitry.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/IgniteRDD-SQL-query-stalled-or-extremly-slow-on-large-input-argument-tp13711p13859.html
Sam,
Please attach logs from your node.
-Dmitry
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Grid-is-in-invalid-state-to-perform-this-operation-tp12360p12411.html
Hi,
I'm not sure I understand your question, but if you need to calculate memory
consumption, this might be helpful [1].
[1] https://apacheignite.readme.io/v1.9/docs/capacity-planning
-Dmitry
Hi,
Ignite and IgniteCache implement the AutoCloseable interface, which means that if
you use try-with-resources or explicitly call the close() method, the node
and cache are stopped.
Try not to stop the Ignite node and cache during the test.
-Dmitry.
Hi Sam,
Was the local node connected to the remote cluster? It was probably kicked out of the
cluster because of a closed connection.
Thanks!
-Dmitry
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Grid-is-in-invalid-state-to-perform-this-operation-tp12360p12379.html
Hi,
You may use your custom collections as is - they will be deserialized into the
correct type.
If you want to implement Binarylizable with additional serialization logic,
you can use something like the following (but this approach needs an intermediate
object):
private static class CustomList extends
Hi Sergey,
To use IGFS from non-JVM languages, you may configure it as the file system for Hadoop in
IgfsMode.PRIMARY mode. After that you can configure connectivity from any
supported language to Hadoop and its file system.
Thanks!
-Dmitry
Hi Franck,
Yes, client-side security is used here; it looks like it was made to allow
connections from different clients with different permissions. But it depends on
the GridSecurityProcessor. For example, it may have node validation logic that
will not accept nodes with an unapproved security processor.
In
Hi Raymond,
Could you please attach full log and config for failed node?
Thanks!
-Dmitry
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi,
The @IgniteInstanceResource annotation is the correct and best way to get the
Ignite instance in a service.
Thanks!
Franck,
You're definitely right, but this is more like client roles than regular
security.
By "they have a number of connected clients with actual applications" I
meant that the user's application is connected to the grid via clients with
their local permissions. But the end user cannot access the grid
Raymond,
Without logs I only see that deserialization failed for some reason. Actually,
I'm more interested in any exceptions that come from Ignite's Java part.
Thanks!
-Dmitry
Hi Alisher,
This issue is under active development:
https://issues.apache.org/jira/browse/IGNITE-3478
Thanks!
-Dmitry
Ankit,
Thanks for pointing out the mistake in the documentation. I've suggested edits for
it.
Thanks!
Hi Rishi,
Yes, but you need to set the heap size to at least 512M and a proper off-heap size [1].
However, you won't be able to store a large amount of data; for that you will need to
enable persistence, which allows extending the available memory with disk [2].
[1]
Hi,
What is the size of your objects? Have you set
CacheConfiguration.setOnheapCacheEnabled() to true? Can you take a heap dump and
check the biggest consumers and their nearest roots?
Normally all entries go to off-heap, and if the data doesn't fit in off-heap memory
it starts swapping to disk. But loading may
Hi,
Looks like you have open transactions with a big number of entries (or large
entries). Do you use a TRANSACTIONAL cache? How do you save values - using
putAll()?
If yes, you have the following options:
- Reduce the batch size (the number of entries that participate in one transaction).
- Check keys, maybe
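The first option (reducing the batch size) can be sketched in plain Java with a helper that splits a large map into smaller chunks before each putAll() call; the split() helper below is a hypothetical illustration, not part of Ignite's API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchPut {
    // Split a large key-value map into batches of at most batchSize entries,
    // so that each putAll()/transaction touches fewer keys at once.
    static <K, V> List<Map<K, V>> split(Map<K, V> all, int batchSize) {
        List<Map<K, V>> batches = new ArrayList<>();
        Map<K, V> current = new LinkedHashMap<>();
        for (Map.Entry<K, V> e : all.entrySet()) {
            current.put(e.getKey(), e.getValue());
            if (current.size() == batchSize) {
                batches.add(current);
                current = new LinkedHashMap<>();
            }
        }
        if (!current.isEmpty())
            batches.add(current);
        return batches;
    }

    public static void main(String[] args) {
        Map<Integer, String> data = new LinkedHashMap<>();
        for (int i = 0; i < 10; i++)
            data.put(i, "v" + i);
        // 10 entries with batchSize 4 -> batches of 4, 4 and 2
        System.out.println(split(data, 4).size()); // prints 3
    }
}
```

Each resulting batch would then go into its own cache.putAll() call (or its own transaction).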
Hi John,
BinaryHeapOutputStream is part of the binary marshaller, and it's used for
object serialization only. Entries are stored in off-heap paged memory as
expected.
Thanks!
-Dmitry.
Hi Rishi,
The message queue limit was disabled in the 1.9 release because in async mode it may
cause a non-obvious cluster stall, while OOMEs are a quite rare issue. A new
back-pressure mechanism is currently planned [1]; another possible issue with
PRIMARY_SYNC was already resolved [2].
Generally there is no need
Hi Rishikesh,
Could you please create a reproducer project with the config that fails with the
NPE? This will help to find the bug.
Thanks!
-Dmitry.
Hi,
The reason for the slowdown might be that Ignite by default consumes
80% of the available memory, which forces the OS to do a lot of swapping, so all
memory operations are limited by disk speed. Try to reduce the memory size [1].
[1]
Hi Roger,
The reason for the slowdown might be that Ignite by default consumes
80% of the available memory, which forces the OS to do a lot of swapping, so all
memory operations are limited by disk speed. Try to reduce the memory size [1].
[1]
Hi,
Have you had a chance to get the GC and dstat logs?
Thanks!
-Dmitry.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Performance-of-persistent-store-too-low-when-bulb-loading-tp16247p16417.html
Hi Aaron,
If the data could not be stored to the DB, it will reside in a heap queue, but no
more than batchSize entries.
I see you set copyOnRead to false; this property forces Ignite to keep
objects on-heap - why do you need it?
Thanks!
-Dmitry.
Hi,
Please share full logs and thread dumps, it's hard to understand the root
cause.
Thanks!
-Dmitry.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Activation-slow-and-Ignite-node-crashed-in-the-middle-of-checkpoint-tp16144p16341.html
Hi,
1. In most cases that exception might be thrown if the cache store cannot be
updated. Normally, even on an unstable topology, you won't get it, so the
easiest fix would be to do putAll() with the failed keys; it should finish
successfully.
2. By default memory is shared across all caches, so you may check
Hi,
Could you please take dstat, Ignite and GC logs and a few thread dumps while the
application is experiencing the slowdown?
Thanks!
-Dmitry.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Performance-of-persistent-store-too-low-when-bulb-loading-tp16247p16342.html
Use this command for dstat:
dstat -cmdgs --fs > dstat.txt
Thanks!
-Dmitry.
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/Performance-of-persistent-store-too-low-when-bulb-loading-tp16247p16345.html
Hi Chris,
1. Indexing may slow down insertions, but it will not be used if neither
CacheConfiguration.setIndexedTypes() nor
CacheConfiguration.setQueryEntities() is set.
2. It depends on the conditions. If you have a lot of data, but you need to
filter out a small subset of it, then indexing may greatly help
Hi,
By "batch" I mean the key-value map that you pass to putAll().
Thanks!
-Dmitry.
Hi,
Ignite can only increase memory consumption; there is no way to reduce it.
In practice that's not needed: it's enough to set the desired memory
size, and, if persistence is enabled, data is swapped to disk when this limit is
hit.
Thanks!
-Dmitry.
Hi Chris,
Your listener receives a CacheEvent, which contains the cache name, key and value.
You may just check the cache name and react accordingly [1], or use a
ContinuousQuery as you suggested.
[1]
Hi,
Just to clarify my words a bit: when persistence is enabled, all in-memory data
is stored on disk with full durability guarantees. But it also allows you
to store more data in the cache than you can fit in memory: Ignite just evicts
stale data pages from RAM, and when they are needed again they are loaded
Hi,
putAll() on an atomic cache forces it to lock all the keys and update this entry
group atomically. I think it would be enough to reduce the batch size or
to use IgniteDataStreamer, if it's an initial data load.
There is no need to change the direct memory size, because Ignite's off-heap doesn't rely
on it.
Hi,
How many nodes do you have, and how do you measure that 70 ms? Is it the first
query or an average time? Please show the EXPLAIN of your query.
Thanks!
-Dmitry
Hi,
First of all, an Ignite object represents an Ignite node. Each such node may
run more than one transaction, and this object is thread-safe. You may start
only one transaction per thread, but the Ignite object can be safely shared
between your threads.
Each transaction is bound to the thread that it's
Hi,
You have a few options here:
1) Write code that scans all tables in MySQL and loads the data into the grid with
IgniteDataStreamer [1].
2) Write code that parses MySQL CSV and loads it into the grid using
IgniteDataStreamer.
3) Use the existing CacheJdbcStore to preload data from MySQL (check out the screen
casts [2]
Hi,
Please attach thread dumps from all cluster nodes.
Thanks!
-Dmitry
Hi,
How many records does your query return without LIMIT? How long does it take to
select all the records without grouping?
Thanks!
-Dmitry
Hi again,
This looks quite similar to your issue, and it was fixed in 2.3 [1]. Check
it out.
[1] https://issues.apache.org/jira/browse/IGNITE-6071
Thanks!
-Dmitry
Hi,
Try to enable paired connections [1].
[1]
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/spi/communication/tcp/TcpCommunicationSpi.html#setUsePairedConnections(boolean)
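In Spring XML, the setting from [1] might look like this (a sketch; only the usePairedConnections property is the relevant part):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="communicationSpi">
        <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
            <!-- Use separate inbound and outbound connections per node pair. -->
            <property name="usePairedConnections" value="true"/>
        </bean>
    </property>
</bean>
```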
Thanks!
-Dmitry
Hi Ray,
I wasn't able to run the benchmarks quickly, but I've got the following results for
atomic put throughput (the numbers are a bit lower than they could be, because of
profiling):
Throughput Cluster
264930 1c4s
513775 2c4s
968475 4c4s
281425 1c8s
530597
Hi Ray,
I've finally got results of query benchmarks:
4s1c 80725.80 80725.80
4s2c 78797.90 157595.80
4s4c 54029.70 216118.80
8s1c 64185.60 64185.60
8s2c 61058.10 122116.20
8s4c 34792.70 139170.80
The first column is the cluster configuration (in the 8-server variant, 2 nodes per
machine), the second is the average
Hi,
In your code Ignite could not inject its instance, because you have two
instances of your class: one in Ignite as a service, and another one that
processes requests in Jersey. So when you do an HTTP query, it goes
to the Jersey instance.
How do you start Ignite? You may get Ignite with
Hi Ray,
I've got the same results in my environment and am checking what happens.
Thanks!
-Dmitry
Hi,
It looks like the anonymous EntryProcessor captures excess data in its context. Try to
make it an inner static class and check the logs for exceptions on all nodes.
Thanks!
Hi Indranil,
These measurements are not fully correct; for example, select count(*) might
use only the index, and the select * was not actually executed, because you need to
iterate over the cursor.
Also, by default a query is not parallelized on a single node, and a scan with
grouping runs sequentially in one
Hi,
Is it possible that the version of the thin driver is different from the version of
the cluster nodes? Does it happen on specific queries, or could it be any query?
Thanks!
-Dmitry
Hi,
Currently it's not possible. Why do you need such a possibility?
Thanks!
-Dmitry
Is there any chance that you're using the Connection in more than one thread? It's
not thread-safe for now.
Thanks!
-Dmitry
Hi,
Is it possible that a firewall configured to block DDoS attacks breaks the connection
to the client node? I see two possible cases here:
1) An STW pause on the client, but then we should see a connection timeout exception;
2) The firewall rejects connections with a large amount of traffic, and now you're getting
connection
Hi Dmitriy,
1. You may use a node filter [1], specifically
org.apache.ignite.util.AttributeNodeFilter, which can be configured in XML
without writing code.
2. Yes, you can. You need to configure data regions and set the
persistenceEnabled flag. After that you may assign caches to those regions.
[2]
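Point 1 might be configured roughly like this (a sketch; the cache name and the "role" attribute are made-up examples - the filter matches nodes whose user attributes contain the given key/value pair):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <!-- Deploy this cache only on nodes started with the user attribute role=data. -->
    <property name="nodeFilter">
        <bean class="org.apache.ignite.util.AttributeNodeFilter">
            <constructor-arg value="role"/>
            <constructor-arg value="data"/>
        </bean>
    </property>
</bean>
```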
Sure, I meant you need to create your own inner class:
private static class WorkflowEntryProcessor implements EntryProcessor<Object, Object, Object> {
    @Override
    public Object process(MutableEntry<Object, Object> entry, Object...
        arguments) throws EntryProcessorException {
Hi,
Could you please provide a reproducer? I don't get such exception.
Thanks!
-Dmitry
Hi Mikael,
Please share your Ignite settings and logs.
Thanks!
-Dmitry
Hi,
Make sure that your keys all go to a specific partition. Only one node can
keep that partition at a time (except backups, of course). To do that, you
may use the @AffinityKeyMapped annotation [1].
Additionally, you can implement your own AffinityFunction that will assign
the partitions that you need
Normally (without @AffinityKeyMapped) Ignite will use the CustomerKey hash code
(not the object's hashCode()) to find a partition. Ignite will consult the
AffinityFunction (the partition() method) to determine which partition to put the
key in, and with assignPartitions() find the concrete node that holds that partition.
In other
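The hash-to-partition step described above can be modeled in plain Java (a simplified illustration of what an AffinityFunction.partition() implementation typically does, not Ignite's actual code):

```java
public class PartitionModel {
    // Map a key's hash code onto a fixed number of partitions.
    static int partition(Object key, int partitions) {
        int h = key.hashCode();
        // Mask the sign bit so the result is always a valid partition index.
        return (h & Integer.MAX_VALUE) % partitions;
    }

    public static void main(String[] args) {
        // Keys with equal hash codes always land in the same partition,
        // and therefore on the same primary node.
        System.out.println(partition("customer-42", 1024));
    }
}
```

Ignite's assignPartitions() then maps each partition index to the set of nodes that hold it.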
Hi,
TcpDiscoveryMulticastIpFinder produces such a big number of connections. I'd
recommend switching to TcpDiscoveryVmIpFinder with a static set of addresses.
Thanks!
-Dmitry
There are various possible ways, but using one partition per node is
definitely a bad idea, because you're losing scaling possibilities. If you
have 5 partitions and 5 nodes, then a 6th node will be empty.
It's much better to calculate the node in your AffinityFunction.partition() method
according to
Hi,
Ignite keeps Tx cached values on-heap.
Thanks!
-Dmitry
Hi,
What IgniteConfiguration do you use? Could you please share it?
Thanks!
-Dmitry
Hi,
1. By default, get() will read from backups if the node on which it's invoked is an
affinity node. In other words, if the current node holds a backup, Ignite prefers to
read the local data from the backup rather than requesting the primary node over the
network. This can be changed by setting
Hi,
AFAIK, you cannot download the plugin separately; it's a commercial product. You
can use it for free from here [1] or purchase a paid version for internal
use.
[1] http://console.gridgain.com/
Thanks!
-Dmitry
Hi,
I'm not sure it's possible to remove the ticket. Just close it with "won't fix"
status; that would be enough.
Thanks!
-Dmitry
Hi,
You configured the external public EC2 interface address (34.241...), but it
should be the internal one: 172...
Thanks!
-Dmitry
Hi,
This thread dump is absolutely fine; you confused the socket state with the Java
thread state. These two things are completely unrelated.
Still, there should not be so many socket connections (TIME_WAIT means that the socket
is already closed and waiting for the last packets) for three nodes. Could you
please share
Hi,
I totally agree with Val that implementing your own AffinityFunction is a quite
complex way. The requirement that you described is called affinity co-location, as
I wrote before.
Let me explain in more detail what to do and what the drawbacks are.
1. Use @AffinityKeyMapped for all your keys. For
Hi Naresh,
The recommendation will be the same: increase failureDetectionTimeout until
nodes stop segmenting, or use gdb (or remove the "live" option from the jmap command
to skip the full GC).
Thanks!
-Dmitry
It would be better to upgrade to 2.5, where it is fixed.
But if you want to work around this issue in your version, you need to add the
ignite-indexing dependency to your classpath and configure SQL indexes. For
example [1]; just modify it to work with Spring XML:
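A minimal Spring XML cache configuration with SQL indexing might look like this (a sketch; the cache name and the com.example.Person value class are made-up examples):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="personCache"/>
    <!-- Pairs of key and value classes to index for SQL. -->
    <property name="indexedTypes">
        <list>
            <value>java.lang.Long</value>
            <value>com.example.Person</value>
        </list>
    </property>
</bean>
```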
Hi,
localPeek must be called on a local node. If you want to do that from a client,
you have to execute a task [1] targeted at a server node. But to list all
entries, ScanQuery is what's designed for that [2]. You may run it via a compute
task from the client with the setLocal() flag set to true.
[1]
Hi,
The REST API does not have such an option, but you can write your own compute task
(that uses the Java API) and call it from REST [1]. It's not possible to use
Lucene search from the SQL interfaces.
To use full-text search you need to annotate fields with @QueryTextField [2]
and add them to the indexed types [3].
Hi Mikael!
Don't worry about this message; you may just ignore it. It's absolutely
OK and means that the WAL was read fully. The question is why it's a WARNING... In
future releases it will be changed to INFO, and the message content will be adjusted
to avoid such confusion.
Thanks!
-Dmitry
Hi,
I see you used your data region as the default one and set a name for it. Try setting
it via DataStorageConfiguration.setDataRegionConfigurations() instead.
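The suggested change might look like this in Spring XML (a sketch; the region name and size are made-up examples) - a named region goes into dataRegionConfigurations rather than replacing the default one:

```xml
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <property name="dataRegionConfigurations">
        <list>
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <property name="name" value="myRegion"/>
                <!-- 1 GB maximum region size. -->
                <property name="maxSize" value="#{1024L * 1024 * 1024}"/>
            </bean>
        </list>
    </property>
</bean>
```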
Thanks!
-Dmitry
Hi,
Check the system logs for that time; maybe there was some system freeze. Also add
more information to the GC logs, for example safepoints:
-XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime.
Thanks!
-Dmitry
Jose,
Unfortunately there are no other tools at the moment. But you can still
contribute to Apache Ignite and implement the ticket that will persist
Lucene indexes. It would be a great help!
Thanks!
-Dmitry