Hello,
IgniteCache has a way to specify the expiry policy at key level for thick
clients via the IgniteCache#withExpiryPolicy() facade. I think it may be
reasonable to add a similar option to the thin client protocol as well. Feel
free to open a ticket.
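For reference, a short sketch of the thick-client facade (assuming an
Ignite instance "ignite" and a cache named "myCache"):

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.IgniteCache;

IgniteCache<Integer, String> cache = ignite.cache("myCache");
// Entries written through this facade expire 5 minutes after creation;
// other entries keep the policy configured for the cache.
cache.withExpiryPolicy(new CreatedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)))
    .put(1, "expiring value");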
tasks, so they are always submitted to primary
> node. How does setting DIGNITE_READ_LOAD_BALANCING to false help in this case?
> Even if it is true it will always read the values from primary node as the
> task is landed on primary node.
>
> Thanks,
> Prasad
>
> On Fri, Feb 28,
Prasad,
The current version in the entry is checked against the version which was
read from the very same entry, so in the absence of concurrent updates the
version will be the same.
From your description, I think there might be a concurrent read for the key
that you clear which loads the value on
Prasad,
> Can you please answer following questions?
> 1) The significance of the nodeOrder w.r.t Grid and cache?
>
Node order is a unique integer assigned to a node when the node joins the grid.
The node order is included in GridCacheVersion to disambiguate versions
generated on different nodes.
Prasad,
Since optimistic transactions do not acquire key locks until the prepare phase,
it is possible that the key value is concurrently changed before the
prepare commences. An optimistic exception is thrown exactly in this case
to suggest that the user should retry the transaction.
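For illustration, a retry loop sketch (assuming an Ignite instance "ignite"
and a cache "cache"):

import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionOptimisticException;
import static org.apache.ignite.transactions.TransactionConcurrency.OPTIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.SERIALIZABLE;

while (true) {
    try (Transaction tx = ignite.transactions().txStart(OPTIMISTIC, SERIALIZABLE)) {
        Integer val = cache.get("key");
        cache.put("key", val == null ? 1 : val + 1);
        tx.commit();
        break; // commit succeeded
    }
    catch (TransactionOptimisticException e) {
        // The key changed between read and prepare - retry the transaction.
    }
}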
Consider the
This release addresses frequent usability and critical stability issues:
https://ignite.apache.org/releases/2.7.6/release_notes.html
Download the latest Ignite version from here:
https://ignite.apache.org/download.cgi
Please let us know [2] if you encounter any problems.
Regards,
Alexey Goncharuk
Yuriy,
Is your Ignite node running on localhost with a REST endpoint bound to
localhost:11211? If not, the default values will not work. I've re-checked the
control utility in Ignite 2.7 in several environments, and it works fine for me.
Tue, Dec 11, 2018 at 15:19, Yuriy:
> I am explicitly set the
Hi Murthy,
You should use user-unsubscr...@ignite.apache.org in order to unsubscribe
from the list.
Cheers,
Alexey
Tue, Aug 28, 2018 at 5:37, Murthy Kakarlamudi:
>
>
Hello,
Can you please share the full stacktrace so we can see where the original
ClassCastException is initiated? If it is not printed on a client, it
should be printed on one of the server nodes.
Thanks!
Tue, Jun 5, 2018 at 18:35, Cong Guo:
> Hello,
>
>
>
> Can anyone see this email?
>
>
>
Ray,
Which Ignite version are you running? You may be affected by [1], which
becomes worse the larger the data set is. Please wait for the Ignite 2.5
release, which will be available shortly.
[1] https://issues.apache.org/jira/browse/IGNITE-7638
Fri, May 18, 2018 at 5:44, Ray:
> I ran into
:25 GMT+03:00 Larry <lar...@gmail.com>:
> Hi Alexey.
>
> Were there any findings? Any updates would be helpful.
>
> Thanks,
> -Larry
>
> On Thu, Mar 8, 2018 at 3:48 PM, Dmitriy Setrakyan <dsetrak...@apache.org>
> wrote:
>
>> Hi Lawrence,
>>
Andrey,
Can you please describe in greater detail the configuration of your nodes
(specifically, the number of caches and the number of partitions)? Ignite would
not load all the partitions into memory on startup simply because there is no
such logic. What it does, however, is load meta pages for each
Hi,
Just to reiterate and clarify the behavior: region maxSize defines the
total size of the region, and you will get an OOME if your data size exceeds
the maxSize. However, when using swap, you can set maxSize _bigger_ than the
RAM size; in this case, the OS will take care of the swapping.
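A minimal configuration sketch (assuming the Ignite 2.3+ data region API;
the region name and swap path are mine):

import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("swappedRegion")
    .setMaxSize(64L * 1024 * 1024 * 1024) // 64 GB - may exceed physical RAM
    .setSwapPath("/opt/ignite/swap");     // the OS swaps the excess to disk

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(
        new DataStorageConfiguration().setDataRegionConfigurations(region));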
2017-12-21
Created the ticket: https://issues.apache.org/jira/browse/IGNITE-7235
2017-12-15 16:16 GMT+03:00 Alexey Goncharuk <alexey.goncha...@gmail.com>:
> Ray,
>
> With the current API it is impossible to get a reliable integration of
> Ignite native persistence with 3rd party persi
Ray,
With the current API it is impossible to get a reliable integration of
Ignite native persistence with 3rd party persistence. The reason is that
first, CacheStore interface does not have methods for 2-phase commit,
second, it would require significant changes to the persistence layer
itself
Hi Ray,
Do you see the "Page evictions started, this will affect storage performance"
message in the log? If so, the dramatic performance drop you observe might
indicate an issue with the page replacement algorithm that we need
to investigate. Can you please check for the message?
2017-10-17 17:09
Hi,
I assume you have backups=0 for your cache (otherwise you should not see
data loss). There are two ways to achieve what you need:
1) Set PartitionLossPolicy different from IGNORE in your cache
configuration. This way your clients will get an exception when trying to
read a lost key. After a
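For reference, a minimal sketch of option 1 (the cache name is mine):

import org.apache.ignite.cache.PartitionLossPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
ccfg.setBackups(0);
// Reads and writes of keys in lost partitions throw an exception
// instead of silently returning nulls.
ccfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);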
Hi,
This should never happen in BACKGROUND mode unless there is a hard power
kill of your Ignite node (which is not your case). I've reviewed the
related parts of the code and found a few bugs fixed in
2.3 that may have caused this issue (e.g. IGNITE-5772). Can you try
Hi,
In the default WAL mode each cache put() is fsynced to disk, which causes a
major performance penalty.
You can either batch your updates using putAll() or use a data streamer.
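For example, a data streamer sketch (assuming an Ignite instance "ignite"
and a cache named "myCache"):

import org.apache.ignite.IgniteDataStreamer;

try (IgniteDataStreamer<Integer, String> streamer = ignite.dataStreamer("myCache")) {
    for (int i = 0; i < 1_000_000; i++)
        streamer.addData(i, "value-" + i); // updates are buffered and batched
} // close() flushes any remaining buffered data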
BTW, how do you insert data to postgres?
2017-10-04 14:32 GMT+03:00 Dmitry Pryakhin
Hi Raul,
Do you observe this exception under some specific events, like topology
change? Can you share an example of how you use Ignite scheduler in your
service here?
Thanks,
AG
2017-06-06 17:21 GMT+03:00 Raul :
> Hi,
>
> We are trying to deploy a service as cluster
Alexey,
There is no CacheMemoryMode in Ignite 2.0 anymore since it has been removed
in favor of the new Ignite architecture. It seems that you've built Ignite
from one of the intermediate states between 1.9 and 2.0.
Can you try with the ignite-2.0 release?
--AG
2017-05-30 17:00 GMT+03:00
How do you configure field1 to be an indexed field? Do you use
@QuerySqlField annotation? Can you share the execution plan of your query
(you need to run "explain select ..." query)?
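For reference, an indexed field declaration and an explain query might look
like this (type and field names are mine):

import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class Record {
    @QuerySqlField(index = true)
    private String field1;
}

// Print the execution plan to check that the index is used:
cache.query(new SqlFieldsQuery(
    "explain select * from Record where field1 = ?").setArgs("x"))
    .getAll().forEach(System.out::println);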
Also, what is the result set size of your query?
--AG
2017-05-30 14:49 GMT+03:00 Pratham Joshi
It's pretty simple. I've added newbie label for it, anyone can pick it up.
2017-05-17 21:03 GMT+03:00 Denis Magda <dma...@gridgain.com>:
> Alex, thanks.
>
> Can the ticket be resolved in 2.1?
>
>
> On Wednesday, May 17, 2017, Alexey Goncharuk <alexey.goncha...@gmail
Created a follow-up UX ticket:
https://issues.apache.org/jira/browse/IGNITE-5248
2017-05-17 19:20 GMT+03:00 Sergey Chugunov :
> Ajay,
>
> I managed to reproduce your issue. As I can see from logs you're starting
> Ignite using 32-bit JVM.
>
> To fix your issue just use
Hi Chris,
One of the most significant changes made in 2.0 was the move to off-heap
storage by default. This means that each time you call get(), your value
gets deserialized, which adds overhead (though I would be a bit
surprised if this alone caused the 10x drop).
Can you try setting
This does not look like a bug to me. Rendezvous affinity function is
stateless, while FairAffinityFunction relies on the previous partition
distribution among nodes, thus it IS stateful. The partition distribution
would be the same if caches were created on the same cluster topology and
then a
Hi Mauricio,
You encounter this exception because SingletonFactory stores a
reference to the instance, which is later serialized. Instead of
SingletonFactory, you can
use org.apache.ignite.configuration.IgniteReflectionFactory, which does not
store this reference and can be
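A sketch of the alternative (the store class is hypothetical):

import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteReflectionFactory;

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
// Only the class is kept in the configuration; each node creates
// its own instance reflectively, so no outer reference is serialized.
ccfg.setCacheStoreFactory(new IgniteReflectionFactory<>(MyCacheStore.class));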
Hi,
Ignite uses the standard Java engines for SSL, so this depends on the version
of the JDK you are running. See, for example, this [1] post on how to disable
cipher suites on the Oracle JDK.
Hope this helps,
AG
[1]
ons you might get down a bit, however, in view of the
> FGC time I fear that such an approach causes a significant impact for
> applications which need to keep huge caches.
>
> Kind regards,
> Peter
>
>
>
> 2017-01-26 15:57 GMT+01:00 Alexey Goncharuk <alexey.goncha.
Hi Peter,
Leaving defragmentation to Ignite is one of the reasons we are trying the
PageMemory approach. In Ignite 1.x we basically use the OS memory allocator to
place a value off-heap. Once the OS has given us a pointer, the memory
cannot be moved around unless we free this region, thus the
Hello Steve,
You are right, Ignite requires all fields participating in affinity
calculation to be included in the key. The main reason behind this
restriction is that Ignite is a distributed system and it is an absolute
requirement to be able to calculate affinity based only on a key.
Imagine
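For illustration, a minimal key class with an explicit affinity field
(names are mine):

import org.apache.ignite.cache.affinity.AffinityKeyMapped;

public class OrderKey {
    private long orderId;

    @AffinityKeyMapped
    private long customerId; // all orders of one customer land on the same node

    public OrderKey(long orderId, long customerId) {
        this.orderId = orderId;
        this.customerId = customerId;
    }
}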
Hi Yuci,
Ignite uses Spring XML for configuration creation, so the standard
PropertyPlaceholderConfigurer perfectly meets your needs.
Just add a PropertyPlaceholderConfigurer bean definition
to your configuration file and it will do the trick. Make sure to consult
the PropertyPlaceholderConfigurer javadoc for the available system
properties
Hi Alisher,
As Nicolae suggested, try parallelizing your scan using a per-partition
iterator. This should give you almost linear performance growth up to the
number of available CPUs.
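A sketch of the per-partition approach (cache name and processing logic
are mine):

import java.util.stream.IntStream;
import javax.cache.Cache;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

IgniteCache<Integer, String> cache = ignite.cache("myCache");
int parts = ignite.affinity("myCache").partitions();

// Each partition gets its own iterator; partitions are scanned in parallel.
IntStream.range(0, parts).parallel().forEach(p -> {
    try (QueryCursor<Cache.Entry<Integer, String>> cur =
             cache.query(new ScanQuery<Integer, String>().setPartition(p))) {
        for (Cache.Entry<Integer, String> e : cur)
            process(e.getKey(), e.getValue()); // process() is hypothetical
    }
});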
Also make sure to set CacheConfiguration#copyOnRead flag to false.
--AG
2016-11-28 19:31 GMT+03:00 Marasoiu
Hi,
Currently SQL queries do not participate in transactions in any way, so you
can see partially committed data from other transactions.
In other words, if thread 1 updates keys 1, 2, 3, 4 and starts the
transaction commit, and thread 2 issues an SQL query, this query may see
keys 1, 2 updated
Hi Tracyl,
Can you describe in greater detail what you are trying to achieve? To my
knowledge, predicate pushdown is a term usually used for map-reduce jobs.
The concept of Ignite's jobs and tasks is more similar to fork-join than
to map-reduce semantics, so we could better help you if you
Hi Patrick,
I was not able to reproduce this issue under either 8u51 or 8u101
on Mac using your code. Can you share the reproducer which does not use
Ignite with us when it's available?
2016-09-09 11:43 GMT+03:00 wbyeh :
> Val,
>
> It's definitely not an ignite issue.
Hi,
In FULL_ASYNC mode the API call returns before the update message is sent
to a remote node, let alone the response receipt from the remote node. This
means that in FULL_ASYNC mode you can stop your client even before the grid
knows that you wanted to put something in the cache. You need to
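If you need the put to be acknowledged by all data nodes before the call
returns, a configuration sketch (cache name is mine):

import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");
// put() returns only after primary and backup nodes confirm the update.
ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);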
Hi,
You need to make your EntryProcessor a static class; otherwise it captures
a reference to your enclosing class, which causes the serialization
exception.
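A minimal sketch of a properly static entry processor (names are mine):

import javax.cache.processor.MutableEntry;
import org.apache.ignite.cache.CacheEntryProcessor;

// Static nested class: no hidden reference to the enclosing instance.
static class Incrementer implements CacheEntryProcessor<Integer, Integer, Void> {
    @Override public Void process(MutableEntry<Integer, Integer> entry, Object... args) {
        entry.setValue(entry.exists() ? entry.getValue() + 1 : 1);
        return null;
    }
}

cache.invoke(42, new Incrementer());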
2016-08-24 17:54 GMT+03:00 Vladislav Pyatkov :
> Hello,
>
> Could you please provide reproduced example?
>
> On Wed,
You need to implement only GridSecurityProcessor and return implementation
instance from PluginProvider#createComponent().
DiscoverySpiNodeAuthenticator is an internal interface and Ignite already
has an implementation which delegates to
GridSecurityProcessor#authenticateNode().
2016-08-19 11:52
Hi,
The plugin activation mechanism changed since RC1 to the Java Service Provider
mechanism [1]. You need to add a
META-INF/services/org.apache.ignite.plugin.PluginProvider file to your plugin
jar in order for the plugin to be activated. The file should contain
the fully-qualified name of your plugin provider
Hi,
If I understand correctly, you want to reduce the total number of
partitions for your cache to 2. Is there any reason you want to do this? It
is impossible to change the number of partitions without the full cluster
restart, so if at some point in time you want to add more nodes to your
Ross,
The optimization you suggested does not work when a remote filter is
present, but it does indeed work for your case. I created a ticket for this
optimization: https://issues.apache.org/jira/browse/IGNITE-3607
2016-07-29 17:51 GMT+03:00 ross.anderson :
>
Hi,
Ignite 1.6 requires data to be properly collocated in order for joins to
work correctly. Namely, data being joined from tables Kc21 and Kc24 must be
collocated. See [1] for more details on affinity collocation and [2] for
more details on how SQL queries work. Also, take a look
at
Hi,
The answers are inline:
Hi, all
> I am researching the cluster rebalance, and the sync mode
> is CacheWriteSynchronizationMode.PRIMARY_SYNC, when rebalance completed,
> how does it ensure that the primary partition has already synchronized with
> backup partition because it possible
I remember asking this question on the Spark user list, and parallelize() was
the suggested option to run a closure on all Spark workers. Paolo, I like
the idea with foreachPartition() - maybe we can create a fake RDD with
partition number equal to the number of Spark workers and then map each
partition
Hi,
Good point! You can go ahead and create a ticket for this. It looks really
simple to implement, so you can either fix it by yourself, or somebody from
the community will pick it up.
Thanks,
AG
Hi,
As Dmitriy pointed out, there is no reliable way to timeout a transaction
once the commit phase has begun.
If there is a chance that your cache store may stall for an unpredictable
amount of time, this should be handled within the store, possibly by throwing
an exception, but this will result in
Hi,
As Andrey pointed out, you can now grab an expiry policy factory from
Ignite's cache configuration, create an instance and get the durations you
need. I agree that this way is a bit awkward and it only covers a configured
ExpiryPolicy; currently there is no way to check if an instance of
IgniteCache
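A sketch of the workaround described above:

import javax.cache.configuration.Factory;
import javax.cache.expiry.Duration;
import javax.cache.expiry.ExpiryPolicy;
import org.apache.ignite.configuration.CacheConfiguration;

Factory<ExpiryPolicy> factory = cache
    .getConfiguration(CacheConfiguration.class)
    .getExpiryPolicyFactory();

ExpiryPolicy plc = factory.create();       // instantiate the configured policy
Duration ttl = plc.getExpiryForCreation(); // e.g. the creation TTL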
Kristian,
Are you sure you are using the latest 1.7-SNAPSHOT for your production
data? Did you build binaries yourself? Can you confirm the commit# of the
binaries you are using? The issue you are reporting seems to be the same as
IGNITE-3305 and, since the fix was committed only a couple of days
ribing was by doing
> an explicit call to cache.rebalance().get() on the new node.
>
> Kristian
>
>
> 2016-06-13 20:03 GMT+02:00 Alexey Goncharuk <alexey.goncha...@gmail.com>:
> > Kristian,
> >
> > I am a little bit confused by the example you provided in yo
Kristian,
I am a little bit confused by the example you provided in your first
e-mail. From the code I see that you create a cache dynamically by calling
getOrCreateCache, and the next line asserts that cache size is equal to a
knownRemoteCacheSize. This does not make sense to me because cache
Note that IgniteDataStreamer implements AutoCloseable, so the code in the
example you are referring to is correct because data streamer is used in
try-with-resources block. It is not required to call flush() before calling
close() because close() will flush the data automatically.
Hi Amit,
You can also close() the streamer or call flush() explicitly to make sure
all the added data has reached the cache.
2016-06-04 10:57 GMT-07:00 visagan :
> The Streamer actually buffers the data. Buffer Default Size is 1024, either
> the buffer size is reached Or
Ignite _does_ use a separate thread pool to persist data in write-behind
mode, this is the essential difference between write-through and
write-behind. However, if your cache load rate is significantly higher than
the database insert rate, the write queue will grow faster than background
threads
Hi,
The Ignite client automatically checks the partition counter and filters out
duplicate events; you do not need to do it manually to get rid of
duplicates. However, starting from Ignite 1.6 the update counter is available
through the CacheQueryEvent API.
2016-06-03 5:23 GMT-07:00 M Singh
ite 1.4.0
>
> -- Original Message --
> *From:* "Alexey Goncharuk";<alexey.goncha...@gmail.com>;
> *Sent:* Friday, June 3, 2016 at 1:02 PM
> *To:* "user"<user@ignite.apache.org>;
> *Subject:* Re: put data and then get it , but it returns null in
Hi,
Which version of Ignite are you using?
2016-06-02 21:55 GMT-07:00 往事如烟 :
> thanks for your answer, I don't use configure file, so almost we used the
> default value, only set some items as follows:
>
> *CacheConfiguration
David,
Have you considered using continuous queries for your use-case [1]? Even if
there were such a thing as a transaction event, I do not see how you can
reliably (read - in a proper order) publish this information to Kafka.
Say, you have 2 clients and 2 server nodes. First client executes a
Hi,
SPI stands for Service Provider Interface. In Ignite it is an isolated,
abstracted component which can be plugged in to provide new functionality or to
replace/extend existing functionality.
For example, you can implement your own CollisionSPI to control how
ComputeJobs are scheduled on a local node, or
I think it makes sense not to validate store configuration unless we know
that the entry is enlisted as WRITE.
I've created the issue: https://issues.apache.org/jira/browse/IGNITE-3086
2016-05-04 5:28 GMT-07:00 Denis Magda :
> As Val already mentioned you can't mix
Hi,
Scala does not automatically place annotations on generated fields; you
need to use the annotation as follows:
@(AffinityKeyMapped @field) val marketSectorId:Int = 0
Hi,
As long as cache configuration is the same, affinity assignment for such
caches will be identical, so you do not need to explicitly specify cache
dependency. On the other hand, if cache configurations do differ, it is not
always possible to collocate keys properly, so for this case such a
Ravi,
It's been a while since I last used Hibernate, but as far as I
remember, you may do one of the following:
* call any method (e.g. size()) on your proxied collection to trigger lazy
collection initialization
* Call Hibernate.initialize(collectionProxy)
* Use @ManyToOne(fetch = FetchType.EAGER) in
Denis,
Updates are always queued on primary nodes when write-behind is enabled,
regardless of atomicity mode. This is required because otherwise updates
can be written to the database in a wrong order.
We did not queue database updates on backups because we did not have a
mechanism that would
From the error message "Spring XML configuration path is invalid:
/home/test/SparkIgniteStreaming/config/example-cache.xm" my guess is that
the configuration file is absent on the Spark executor node.
2016-04-04 8:17 GMT-07:00 Yakov Zhdanov :
> Thanks for sharing the code.
Hi Arthi,
Can you elaborate more on what you want to achieve by collocation based on
two fields?
If a class A, which is used as a cache key, has a field aKey, then
setting this field as the affinity key tells Ignite that instances of
class A should always be stored on the same node
Jimmy,
The approach you suggested will not work either. Consider a situation when
concurrent updates are required for your object. In this case there is a
chance that you modify version 1 of your object, but when you do a
cache.get(), you will receive an already updated, different version of your
Hi,
It may be the case that you can utilize BinaryObjectBuilder instead of
HashMap for the use-case you described [1]. It is an abstraction that was
created to handle cases when no class definitions exist. You can also
change the structure of your binary objects at runtime.
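A minimal builder sketch (type and field names are mine):

import org.apache.ignite.binary.BinaryObject;
import org.apache.ignite.binary.BinaryObjectBuilder;

BinaryObjectBuilder bldr = ignite.binary().builder("DynamicType"); // no class required
bldr.setField("name", "abc");
bldr.setField("qty", 42);
BinaryObject obj = bldr.build();

// Keep working in binary form to avoid deserialization:
cache.withKeepBinary().put(1, obj);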
So the code you have
It looks like we are missing an option to tell IgniteRDD to work with
binary objects. When an iterator is created, it tries to deserialize
objects, and since you do not have a corresponding class, the exception
occurs. I will create a ticket for this shortly.
Despite this, you should still be
Hi,
Consistency between nodes is guaranteed in ATOMIC mode, however, the
read-after-write guarantee is met in the following cases:
- cache write synchronization mode is FULL_SYNC. In this mode cache.put()
will not return control until all data nodes (primary and backup)
responsible for the data
Dmitriy,
You should have used the same entity name in QueryEntity as the one you
used when creating a builder, i.e.
queryEntity.setValueType("DT1")
because you can have multiple value types stored in one cache.
I will create a ticket to throw a proper exception when BinaryObject is
used in query
Yep, BinaryObjectBuilder should definitely be a solution for this. You can
obtain an instance of Ignite from IgniteContext and use the IgniteBinary
interface to get an instance of BinaryObjectBuilder to build object
structures dynamically. And you can use QueryEntity class to describe the
index
Myron,
I believe IGNITE-2645 should be fixed in the near future since the issue is
critical, and it will definitely be included in 1.6.
As for the IGNITE-1018, I will not speculate on the timelines because the
issue has some workarounds, even though it is possible that it will be
fixed for 1.6 if
Oh, I see now what you mean - IGNITE-1018 had escaped my view. Then, until
IGNITE-1018 is fixed, the only guaranteed approach is to wait on a
CountDownLatch (CDL).
Here is the pseudo-code that I have in mind:
LifecycleBean or after Ignition.start():
// Populate your node local map
CountDownLatch init =
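A hedged completion of this pattern (the map key is mine):

import java.util.concurrent.CountDownLatch;

CountDownLatch init = new CountDownLatch(1);
ignite.cluster().nodeLocalMap().put("initLatch", init);

// ... populate your node-local map ...
init.countDown(); // signal that the map is ready

// In jobs that read the node-local map (handle InterruptedException as needed):
((CountDownLatch)ignite.cluster().nodeLocalMap().get("initLatch")).await();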
Myron,
What approach did you use initially to initialize the node local map?
An Ignite node is considered fully functional as soon as the Ignition.start()
method returns control, so any operations done on NodeLocalMap after the
node start should be considered to be run concurrently with
Myron,
We have a specific test for the exact use-case you have described and it
passes - see IgniteAtomicCacheEntryProcessorNodeJoinTest. I tried to play
with the configuration (added test store, tried different memory modes),
but was not able to make the test fail.
Is there any chance you can
>
> Thanks for your suggestion. I did not follow this:
>
> "For this use-case I would suggest using single cache puts (the same way
> you
> insert data to Oracle) and combine it with write-behind store writing to
> HDFS, this should give you better latencies."
>
> Are you suggesting not using
Kobe,
I am not sure this is a fair comparison because writing a file to IGFS
involves 3 operations: updating the metadata cache (empty file creation),
the actual file write, and then updating the metadata cache again (updating the
file size).
For this use-case I would suggest using single cache puts
I see no fundamental reason why it cannot be supported; however, as far as
I know, the current queue implementation starts several nested transactions on
more than one system cache, so re-writing this into a single transaction and
supporting system and user caches in one transaction may require quite
Folks,
The current implementation of IgniteCache.lock(key).lock() has the same
semantics as the transactional locks - cache topology cannot be changed
while there exists an ongoing transaction or an explicit lock is held. The
restriction for transactions is quite fundamental, the lock() issue can
A little correction: in this particular case the input stream does return 0,
which leads to an infinite loop; however, in general this may not be the
case, so the implementation should not read beyond the object boundary anyway.
Hello Myron,
Your implementation of the Externalizable interface is incorrect. The number
of bytes that can be read from the object input stream passed to the
readExternal() method is not limited, so you need to make sure that you do
not read more bytes than the number of bytes written.
The correct
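For illustration, a length-prefixed sketch (class and field names are mine):

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;

public class Payload implements Externalizable {
    private byte[] data;

    public Payload() {} // public no-arg constructor required by Externalizable

    @Override public void writeExternal(ObjectOutput out) throws IOException {
        out.writeInt(data.length); // write the length first...
        out.write(data);
    }

    @Override public void readExternal(ObjectInput in) throws IOException {
        data = new byte[in.readInt()];
        in.readFully(data); // ...and read back exactly that many bytes
    }
}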
Myron,
Thank you for reporting the issue. The assertion happens when the value is
present in the store, absent in the cache and you run invokeAll(). As a
temporary solution, you can either call invoke() for each particular key
individually, or call getAll() for the keys prior to calling
Ravi,
A small typo sneaked into the code snippet - the lock() call was omitted - it
should be like this (I am also omitting the try-finally block for
simplicity):
IgniteCache cache = ...;
Lock lock = cache.lock(key);
lock.lock();
// ... process while lock is held
lock.unlock();
Myron,
I tried to reproduce this assertion on ignite-1.5, but with no luck. Can
you share your full cache configuration, the number of nodes in your
cluster and a code snippet allowing us to reproduce the issue?
Thanks,
AG
Myron,
This is a known usability issue, see [1]. You need to set
atomicWriteOrderMode to PRIMARY in order to make entry processors work
correctly. I will cross-post this mail to the dev list in order to raise the
ticket priority.
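A one-line sketch of the workaround (this setter exists in the Ignite 1.x API):

import org.apache.ignite.cache.CacheAtomicWriteOrderMode;

ccfg.setAtomicWriteOrderMode(CacheAtomicWriteOrderMode.PRIMARY);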
[1] https://issues.apache.org/jira/browse/IGNITE-2088
--AG
I think it makes sense to add native binary marshaller support for such
types (at least from a platform interoperability standpoint). I will create a
ticket.
2016-01-26 3:31 GMT+03:00 vkulichenko :
> Agree. I made a fix in master to ignore JDK classes when printing
Hi Andrey,
You need to properly collocate your data in order to have correct join
results when using partitioned caches (it does not matter whether you join
tables within one partitioned cache or join tables across different
partitioned caches). Please refer to documentation [1] and example [2].
If you need the ability to run ad-hoc SQL, then you're right: you need to
have one PARTITIONED cache and all others should be REPLICATED. However, if
you know your SQL queries in advance, usually you can come up with a
collocation strategy for multiple PARTITIONED caches.
I believe the
Ravi,
IgniteCache does not have a no-argument getAll() method because it could
simply trigger an OOME on the node receiving the data.
If you want to process all entries on a single node, you can use the
IgniteCache#query() method and pass an instance of ScanQuery to this
method. Resulting
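A sketch of the cursor iteration (value type and processing are mine):

import javax.cache.Cache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

try (QueryCursor<Cache.Entry<Integer, Person>> cur =
         cache.query(new ScanQuery<Integer, Person>())) {
    for (Cache.Entry<Integer, Person> e : cur)
        process(e.getKey(), e.getValue()); // process() is hypothetical
}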
Nino,
Can you please elaborate why you want to start several nodes per call in
your architecture? Generally, the number of nodes is a factor of number of
machines you have, in rare cases you may need to start an Ignite client per
external call, however, usually it is not very effective.
If you
Btw,
Even though the same query is executed on all nodes, Ignite will
automatically filter out the keys that do not belong to the local node
upon cache loading. When the number of nodes is high, this is not very
efficient since only a small part of the data (roughly K/N, where K is the
number of
Paulo,
The issue has been fixed and the fix was merged to master. You should be
able to cherry-pick the commit 02dbcfd8ed2701a4f415c8871d0b8fd08bfa0583 and
build Ignite from sources with this fix.
Paulo,
This should have been fixed - corresponding marshaller tests have been
added, I have just verified this. Can you please share your configuration,
full node logs and a small code snippet that shows how to reproduce the
issue?
Kira,
I also wanted to mention that probably the most obvious way you can gain
performance benefit from Ignite compared to Spark is using indexing for SQL
queries. Last time I checked, Spark did not have indexes, so each SQL query
in Spark implies a full scan of the data set. In Ignite you can
ode, it's still not sorted based on the idx ?
>>
>> But I found this in the documentation.
>>
>>
>> Cheers
>>
>>
>>
>> On Wed, Dec 23, 2015 at 3:20 PM, Alexey Goncharuk <
>> alexey.goncha...@gmail.com> wrote:
>>
>>>
Hi Welly,
A few other suggestions:
- If you expect your data to be collocated by chId, you need to use the @field
annotation on the series key, just the way you did it with the SQL fields:
case class SeriesKey(@(AffinityKeyMapped @field) chId: UUID, idx:
Long) extends Serializable
- You need to specify
Hello Jennifer,
I created a sample Spark streaming word count example and combined it with the
code you have provided (both Java and POM), and it worked fine for me. Can you
share the specific compile error/exception you see with us?
Thanks,
AG