// Bcc’ed user-list and added dev-list.
2.5 scope is already frozen, and I assume this isn’t the kind of issue that would
be added at this point.
Andrey Gura, could you please comment on that?
Thanks,
Stan
From: Paul Anderson
Sent: May 7, 2018, 10:33
To: user@ignite.apache.org
Subject: Hi...
Hi,
Do you have native persistence enabled?
What is your Ignite version?
If the Ignite version is 2.4+ and you have persistence, the problem is most
likely with baseline topology.
You need to make sure that the restarted node is in the baseline for the
rebalance to happen, either by keeping
Hi,
To start with Ignite transactions, please use the following docs:
- Ignite transactions overview on readme.io:
https://apacheignite.readme.io/docs/transactions
- Transaction class Javadoc:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/transactions/Transaction.html
-
Hi,
Ignite doesn’t provide built-in support for authentication, so the built-in
control.bat/sh also don’t have stubs for that.
So yes, I guess you need to write your own tool.
A tool like that would be pretty simple though – just start a client node,
parse command line arguments and
map them
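A minimal sketch of such a tool, assuming a hypothetical cache-size command (the class name, argument layout and the cache operation are all illustrative, not an existing utility):

```java
// Hypothetical admin tool: start a client node, parse the arguments, run a cache operation.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class AdminTool {
    public static void main(String[] args) {
        // args[0] is assumed to be the name of the cache to inspect.
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setClientMode(true); // join the cluster as a client node

        try (Ignite ignite = Ignition.start(cfg)) {
            System.out.println("Size of " + args[0] + ": " + ignite.cache(args[0]).size());
        }
    }
}
```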
Let's keep the discussion in the newer thread:
http://apache-ignite-users.70518.x6.nabble.com/How-dose-ignite-implement-concurrent-control-in-transaction-tt21580.html
Thanks,
Stan
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Look at the bottom exceptions in the chain:
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to find
class with given class loader for unmarshalling (make
sure same versions of all classes are available on all nodes or enable
peer-class-loading)
This thread seems to be a duplicate of
http://apache-ignite-users.70518.x6.nabble.com/Dependencies-required-to-use-ignite-in-embedded-mode-tt21619.html.
Let’s stick to the latter one.
the_palakkaran, please avoid posting the same question several times.
Thanks,
Stan
From: Pavel Vinokurov
Sent:
Well, with such a big leap between versions I’d say whatever works is good :)
Being nervous about the jar hell is always wise.
Going through the dependencies is probably the most straightforward option,
unfortunately.
You could try to reduce the number of Ignite modules you’re using - optional
As you can see in the log, there is an exception:
--
2018-05-23 00:41:58,863 ERROR [org.apache.ignite.logger.log4j.Log4JLogger]
Got exception while starting (will rollback startup routine).
class org.apache.ignite.IgniteException: Failed to load class names
properties file packaged with ignite
Hi,
Could you please start with -DIGNITE_QUIET=false and share the logs and the
configuration?
In general, Ignite can start with just ignite-core.
A couple of modules that are also used very often are:
- ignite-indexing to use SqlQuery and SqlFieldsQuery
- ignite-spring to use XML-based Ignite configuration
This fix is a part of the transactional SQL feature
(https://issues.apache.org/jira/browse/IGNITE-4191) which is not fully
implemented yet.
I believe it is pretty much impossible to pick this commit separately and apply
to a previous Ignite version.
Stan
From: kotamrajuyashasvi
Sent: May 22
Hi,
If you mean IgniteQueue then, unfortunately, there is no way to describe it in
the XML.
You have to call Ignite.queue() to create one.
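For example, a sketch (the queue name and capacity are arbitrary; 0 means unbounded):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CollectionConfiguration;

public class QueueExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Creates the queue if it doesn't exist yet.
            IgniteQueue<String> queue =
                ignite.queue("myQueue", 0, new CollectionConfiguration());

            // Passing a null configuration only looks up an existing queue
            // (returns null if there is none).
            IgniteQueue<String> same = ignite.queue("myQueue", 0, null);
        }
    }
}
```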
Thanks,
Stan
From: arunkjn
Sent: June 10, 2018, 10:14
To: user@ignite.apache.org
Subject: Initialize ignite queue when node is starting
Hi,
I give my
Hi,
Try to configure the existing timeouts:
- failureDetectionTimeout
- clientFailureDetectionTimeout
- networkTimeout
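These can all be set on the IgniteConfiguration; a minimal sketch (the values are placeholders to tune for your network):

```java
import org.apache.ignite.configuration.IgniteConfiguration;

public class Timeouts {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        cfg.setFailureDetectionTimeout(10_000);       // server node failure detection, ms
        cfg.setClientFailureDetectionTimeout(30_000); // client node failure detection, ms
        cfg.setNetworkTimeout(5_000);                 // basic network operations timeout, ms
    }
}
```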
Not sure if the Future you mention really needs timeout. If there was a
timeout, it would be either failureDetectionTimeout or networkTimeout anyway.
Currently the calling
How do you monitor your CPU usage? Do you know which processes consume CPU? Are
you sure it is Ignite’s process? Is CPU consumed more in user space or in
system space? Can you share the actual stats?
From what I see in the thread dump, there is at least some activity on this
Ignite:
Hi,
Have you tried setting consistentId and performing the activation as Pavel
suggested?
I’d also suggest going through the usage scenarios described at
https://apacheignite.readme.io/docs/baseline-topology#section-usage-scenarios.
Thanks,
Stan
From: siva
Sent: June 11, 2018, 14:16
To:
Hi,
Yep, that’s a bug. Daemon nodes (like the ones ignitevisorcmd starts) seem to
break baseline topology processing.
Filed https://issues.apache.org/jira/browse/IGNITE-8774.
Stan
From: szj
Sent: June 9, 2018, 1:27
To: user@ignite.apache.org
Subject: Re: Ignite 2.5 nodes do not rejoin the
every 30 seconds. If there is no query, the CPU
should be very low. But yesterday there was an exception. Very strange.
shawn.du
Email: shawn...@neulion.com.cn
On 06/11/2018 17:43, Stanislav Lukyanov wrote:
How do you monitor your CPU usage? Do you know which
al.processors.query.h2.IgniteH2Indexing$13@110ac52b],
process=true]
-- CALL COMPLETED
83837380032 2018-06-11 13:52:21.903 [https-jsse-nio-8080-exec-4] DEBUG
com.xxx.lk.backend.cache.impl.RefreshTokenCache - Cache operation completed
On Mon, Jun 11, 2018 at 12:27 PM, Stanislav Lukyanov
You need to provide distinct names to the instances when starting them in the
same JVM.
Set IgniteConfiguration::igniteInstanceName property in both configurations.
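For example (the instance names are arbitrary):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TwoClients {
    public static void main(String[] args) {
        IgniteConfiguration cfg1 = new IgniteConfiguration()
            .setIgniteInstanceName("client-1")
            .setClientMode(true);

        IgniteConfiguration cfg2 = new IgniteConfiguration()
            .setIgniteInstanceName("client-2")
            .setClientMode(true);

        // Distinct instance names allow both nodes to coexist in one JVM.
        Ignite client1 = Ignition.start(cfg1);
        Ignite client2 = Ignition.start(cfg2);
    }
}
```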
Stan
From: Jeff Jiao
Sent: June 12, 2018, 9:58
To: user@ignite.apache.org
Subject: Re: create two client instance in one JVM to
slow, it will make CPU busy?
Thanks
Shawn
On 6/12/2018 00:01,Stanislav Lukyanov wrote:
Sorry, but there isn’t much else to be said without additional data.
If consistent usage of resources on that host is important, I suggest setting up
some monitoring so that if it happens again there’ll be
are partitioned and client should go to server A or B depending
on which partition was resolved for the key, right?
Did you mean that if client C is using server A as gateway, all traffic goes
through server A?
On Wed, Jun 13, 2018 at 4:43 PM, Stanislav Lukyanov
wrote:
On the client
The plugin is shipped as a part of the GridGain Enterprise and Ultimate
editions (which is basically Apache Ignite + GridGain plugins).
The download links are here: https://www.gridgain.com/resources/download.
The description of the GridGain WebConsole is here:
Hi,
1. Yes
2. This is controlled by the AffinityFunction::assignPartitions implementation.
If the default one doesn’t suit you, you have to write another one yourself.
Reading this thread I have a feeling that this is a case of the XY problem
Hi Praveen,
Is it on all nodes or just one? How many nodes do you have?
Is the cluster fully responsive, or do some operations appear frozen forever?
It looks like a memory/GC issue to me.
jmap shows 2GB of on-heap data. What is the max heap size? Try increasing it.
Thanks,
Stan
From:
There is no default expiry policy, and no default eviction policy for the
on-heap caches (just in case: expiry and eviction are not the same thing, see
https://apacheignite.readme.io/docs/evictions and
https://apacheignite.readme.io/docs/expiry-policies).
I see that most of the threads in the
On the client disconnection:
The client is connected to one of the server nodes, using it as a sort of
gateway to the cluster.
If the gateway server fails, the client is supposed to attempt to reconnect to
other IPs.
Right now I can’t say for sure whether it does so indefinitely, or has some
Hi,
> Also will entries of a cache be in multiple partitions on the same node?
Yes.
> Or can a partition have entries of different cache ?
No. A partition can only belong to one cache.
Stan
From: the_palakkaran
Sent: June 13, 2018, 17:30
To: user@ignite.apache.org
Subject: Local peek from
Check out the Javadoc of the Ignite.queue() -
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/Ignite.html#queue-java.lang.String-int-org.apache.ignite.configuration.CollectionConfiguration-.
You can pass null if you don’t want to start a new queue, just get an existing
one by
I see messages in the log that are not from Ignite, but which also suggest that
there is some network issue:
2018-06-09 10:19:59 [a9fc3882] severe [native] Exception in controller:
receiveExact() ... error reading, 70014, End of file found. Retrying every
10 seconds.
2018-06-09 10:19:59
Quick googling suggests to use gdb:
https://stackoverflow.com/a/37375351/4153863
Again, Ignite doesn’t play any role here – whatever works for any Java
application should work for Ignite as well.
Stan
From: naresh.goty
Sent: June 11, 2018, 20:56
To: user@ignite.apache.org
Subject: Re: Ignite
Hi,
peerClassLoading only works for Ignite’s Compute Grid – i.e. for the tasks you
send via ignite.compute() calls.
Other classes, such as 3rd party libraries, need to be present on all nodes.
Stan
From: Dmitry Pavlov
Sent: June 9, 2018, 13:08
To: user; Vishwas Bm
Subject: Fwd: Question on
If you’re asking whether you can use other DBMS as a persistence layer, then
yes.
You can use anything (any DBMS, or any kind of custom storage) as a 3rd party
persistence – see https://apacheignite.readme.io/docs/3rd-party-store.
If you’re asking whether you can use multiple DBMS backing the
Most likely you've run into this bug:
https://issues.apache.org/jira/browse/IGNITE-8210
It was fixed in 2.5, try updating to that version.
Thanks,
Stan
Please resend this to the Ignite developers list - d...@ignite.apache.org. The
discussions of code changes happen there.
user@ignite.apache.org is for more user-oriented questions.
Thanks,
Stan
I don't really have much to say but to try reducing/balancing on heap cache
size.
If you have a lot of objects on heap, you need to have a large heap,
obviously.
If you need to constantly add/remove on-heap objects, you'll have a lot of
work for GC.
Perhaps you can review your architecture to
Well, that’s diving a bit deeper than the “don’t do cache operations” rule of
thumb, but let’s do that.
When receiver is invoked for key K, it’s holding the lock for K.
It is safe to call invoke() on that K (especially if you control the invoked code)
since it is locked already.
But it is not safe
Hi,
Regarding service redeployment on reactivation – that’s
https://issues.apache.org/jira/browse/IGNITE-8205,
already fixed in the master branch, will be in 2.6.
Regarding executing cancel() in the exchange thread – agree, that doesn’t seem
right.
I guess currently we just run init() and
Hi,
How many data entries do you have?
Are IDs that you use for affinity mapping evenly distributed among the entries?
Can you show the code that you use to define the affinity mapping and your
cache configuration?
Also, what exactly do you mean by "node gets X IDs to work with"?
Do you mean
To add a detail, value will have an index created for it if it is a primitive
(or an “SQL-friendly” type like Date).
I don’t think there is an easy way to avoid that. You could use a wrapper for
the primitive value, but it will also
have some overhead and it’s hard to say whether it will be
AFAIU you simultaneously have about 1 million entries which correspond to 5-10
groups (measurements).
Is that correct?
If so, that might be the reason of the distribution that you see. Even though
you have a lot of entries,
you only have a few affinity-mapped groups. Ignite tries to keep all
Hi,
You can start Ignite in the same JVM as your application by using
Ignition.start() method. You can start multiple Ignite instances,
just create different IgniteConfiguration for them with different instance
names.
IgniteConfiguration serverConfig = new IgniteConfiguration()
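The snippet above is cut off; one possible shape it could take (the instance names are arbitrary):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class EmbeddedNodes {
    public static void main(String[] args) {
        IgniteConfiguration serverConfig = new IgniteConfiguration()
            .setIgniteInstanceName("embedded-server");

        IgniteConfiguration clientConfig = new IgniteConfiguration()
            .setIgniteInstanceName("embedded-client")
            .setClientMode(true);

        // Both nodes run inside the application's JVM.
        Ignite server = Ignition.start(serverConfig);
        Ignite client = Ignition.start(clientConfig);
    }
}
```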
${IGNITE_HOME} , as in my classpath of application I am using only required
jars of ignite
-Rajesh
On Mon, Jan 29, 2018 at 5:41 PM, Stanislav Lukyanov <stanlukya...@gmail.com>
wrote:
It will be under ${IGNITE_HOME}/work/db.
Please refer to
https://apacheignite.readme.io/docs/distributed-persistent
It will be under ${IGNITE_HOME}/work/db.
Please refer to
https://apacheignite.readme.io/docs/distributed-persistent-store which covers
this topic in details.
Stan
From: Rajesh Kishore
Sent: January 29, 2018, 15:04
To: user@ignite.apache.org
Subject: default location for ignite file db
Hi All.
Hi,
Whenever an entry is touched, the expiry policy of the view that was used
for that will be consulted to get a new TTL. It means that each time you
touch an entry through a view with `EternalExpiryPolicy` its TTL will be
reset to ETERNAL. You could say that the `bypassCache` from your example
Hi Prasad,
// Please send different questions separately – this way it’s easier to answer
and to search for existing answers
> Also, I am loading the cache from oracle table using loadCache method. If the
> persistence is enabled and if the data is already persisted, I want to make
> sure
r solution to achieve this.
Thanks,
Prasad
On Wed, Feb 21, 2018 at 1:33 PM, Stanislav Lukyanov <stanlukya...@gmail.com>
wrote:
Hi Prasad,
// Please send different questions separately – this way it’s easier to answer
and to search for existing answers
> Also, I am loading the cache from
Hi,
Just to confirm – you’re talking about Ignite’s native persistence, right?
Can you share the code and configs you use in your testing?
Thanks,
Stan
From: VT
Sent: February 23, 2018, 12:02
To: user@ignite.apache.org
Subject: SSD writes get slower and slower
Hello,
I am trying to test use
Hi Ariel,
BinaryMarshaller is used to store data by default. All the data will be stored
and sent across the cluster in the Ignite’s binary format.
You don’t need to use Serializable since BinaryMarshaller doesn’t rely on the
standard Java serialization.
Also, there are cases when Ignite will
Hi Arseny,
This behavior of `Runtime.availableProcessors()` is actually a recognized
issue of HotSpot, see [1]. It was fixed not that long ago for JDK 9 and
8uX, and I can see correct values returned in my Docker environment with JDK
8u151, although I believe it depends on a specific
Hi Arseny,
Both OpenJDK and Oracle JDK 8u151 should do.
I’ve checked and it seems that it is indeed about the specific way of invoking
Docker. By default it doesn’t restrict container to particular CPUs, but you
can make it do that by using `--cpuset-cpus` flag.
Example:
Without cpuset (Docker
Hi,
Looks like you’re performing a cache operation (invoke()) from a StreamReceiver
– this is not allowed.
Check out this SO answer
https://stackoverflow.com/questions/43891757/closures-stuck-in-2-0-when-try-to-add-an-element-into-the-queue.
Stan
From: breischl
Sent: June 21, 2018, 19:35
To:
Hi,
You don’t need to use DDL here.
It’s better to
- Define Key class to be like
class Key { [QuerySqlField] String CacheKey; [AffinityKeyMapped] long
AffinityKey; }
- set your Key and UserData to be indexed
cacheCfg.QueryEntities = new[]
{
new QueryEntity(typeof(Key), typeof(UserData))
};
Hi,
I guess this is the problem: https://issues.apache.org/jira/browse/IGNITE-8987
Stan
From: zhouxy1123
Sent: August 2, 2018, 9:57
To: user@ignite.apache.org
Subject: in a four node ignite cluster, use atomicLong and process is stuck
in countdown latch
hi ,
in a four node ignite cluster,use
FYI the issue https://issues.apache.org/jira/browse/IGNITE-8774 is now fixed,
the fix will be available in Ignite 2.7.
Seems to be answered in a nearby thread:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-yarn-cache-store-class-not-found-tt23016.html
Stan
From: debashissinha
Sent: July 28, 2018, 13:01
To: user@ignite.apache.org
Subject: Ignite client and server on yarn with cache read through
Hi ,
I
What's your version?
Do you use native persistence?
Stan
This is the commit https://github.com/apache/ignite/commit/bab61f1.
Fixed in 2.7.
Stan
I’d look into calling control.sh or ignitevisorcmd.sh and parsing their output.
E.g. check that control.sh --cache can connect to the local node and return one
of your caches.
However, this check is not purely for the local node, as the command will
connect to the cluster as a whole.
A more
Hi,
I think all you need is to store (a+b)/c in a separate field of the key and
annotate it with @AffinityKeyMapped:
class Key {
private long a, b, c;
@AffinityKeyMapped
private long affKey;
public Key(long a, long b, long c) {
this.a = a;
this.b = b;
this.c = c;
this.affKey = (a + b) / c;
}
}
Hi,
I’d suggest try upgrading from 2.2 to 2.6.
One thing that stands out in your configs is expiryPolicy settings.
expiryPolicy property was deprecated, you need to use expiryPolicyFactory
instead.
I happen to have recently shared a sample of setting it in an XML config on SO:
Random-LRU selects the minimum of 5 timestamps (5 pages, 1 timestamp for each
page).
Random-2-LRU selects the minimum of 10 timestamps (5 pages, 2 timestamps for
each page).
My advice is not to go that deep. Random-2-LRU is protected from the “one-hit
wonder” problem and has a very tiny overhead
Most likely it’s either
https://issues.apache.org/jira/browse/IGNITE-8023
or
https://issues.apache.org/jira/browse/IGNITE-7753
Until these issues are fixed, avoid starting/restarting nodes while cluster
activation is in progress.
Thanks,
Stan
From: Calvin KL Wong, CLSA
Sent: July 19, 2018
If you’re using GridGain, it would be better to contact their customer support.
Also, check out this docs page about queries in WebConsole:
https://apacheignite-tools.readme.io/docs/queries-execution
Stan
From: UmeshPandey
Sent: July 19, 2018, 12:54
To: user@ignite.apache.org
Subject: Insert
If you’re using GridGain, it would be better to contact their customer support.
Stan
From: UmeshPandey
Sent: July 19, 2018, 12:59
To: user@ignite.apache.org
Subject: How to increase memory of GridGain database.
Can anyone tell me how to increase the memory of a GridGain database?
I tried below
You can write INSERT the same way as you write SELECT.
Syntax for INSERT is described here:
https://apacheignite-sql.readme.io/docs/insert
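In Java this looks the same as any other query (the table and columns below are hypothetical):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class InsertExample {
    static void insertPerson(IgniteCache<?, ?> cache) {
        // INSERT goes through the same query API as SELECT.
        cache.query(new SqlFieldsQuery(
            "INSERT INTO Person (id, name, age) VALUES (?, ?, ?)")
            .setArgs(1L, "Alice", 30))
            .getAll();
    }
}
```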
Thanks,
Stan
From: UmeshPandey
Sent: July 19, 2018, 13:43
To: user@ignite.apache.org
Subject: RE: Insert Query in GridGain WebConsole
In this blog insert
The functionality you’re looking for is generally called Rolling Upgrade.
Ignite doesn’t support clusters with mixed versions out of the box.
There are third-party solutions on top of Ignite, such as GridGain, that do
have that.
Thanks,
Stan
From: KR Kumar
Sent: July 16, 2018, 12:44
To:
What do you mean by “execute select query on cache using affinity key”
and what is the problem you’re trying to solve?
Stan
From: Prasad Bhalerao
Sent: July 25, 2018, 10:03
To: user@ignite.apache.org
Subject: ***UNCHECKED*** Executing SQL on cache using affinnity key
Hi,
Is there any way to
the sql using a
affinity key so that it gets executed only on a node which owns that data?
Thanks,
Prasad
On Wed, Jul 25, 2018 at 3:01 PM Stanislav Lukyanov
wrote:
What do you mean by “execute select query on cache using affinity key”
and what is the problem you’re trying to solve?
Stan
Please don’t send messages like “?”. Mailing list is not a chatroom, messages
need to be complete and meaningful.
Reproducer is a code snippet or project that can be run standalone to reproduce
the problem.
The behavior you see isn’t expected nor known, so we need to see it before we
can
Are you trying to use MS SQL tools to connect to Ignite?
I don’t think that’s possible – at least, Ignite doesn’t claim to support that.
Stan
From: wt
Sent: July 24, 2018, 16:01
To: user@ignite.apache.org
Subject: odbc - conversion failed because data value overflowed
I have a SQL server
I don’t know how that MSSQL studio works, but it might be dependent on T-SQL or
some system views/tables specific for MSSQL.
DBeaver doesn’t use any DB-specific features and is known to work with Ignite.
Take a look at this page: https://apacheignite-sql.readme.io/docs/sql-tooling
Stan
From:
With writeThrough an entry in the cache will never be "dirty" in that sense -
cache store will update the backing DB at the same time the cache update
happens.
From: monstereo
Sent: August 14, 2018, 22:39
To: user@ignite.apache.org
Subject: Re: Eviction Policy on Dirty data
yes, using
java:422)
~[?:1.8.0_152]
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryResponseEntry.readExternal(GridCacheQueryResponseEntry.java:90)
~[ignite-core-2.3.0-clsa.20180130.59.jar:2.3.0-clsa.20180130.59]
Thanks,
Calvin
From: Stanislav Lukyanov [mailto:stanlukya...@gmail.com]
Sent: Monday, July 0
Ignite transactions support this with REPEATABLE_READ isolation level.
More info here: https://apacheignite.readme.io/docs/transactions
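A sketch of what a read-consistent transaction could look like (cache name and key are arbitrary):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

public class TxExample {
    static void readConsistent(Ignite ignite, IgniteCache<Long, String> cache) {
        try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
            // All reads of the same key within the transaction see the same value.
            String first = cache.get(1L);
            String second = cache.get(1L); // consistent with 'first'
            tx.commit();
        }
    }
}
```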
Stan
From: Prasad Bhalerao
Sent: July 9, 2018, 14:50
To: user@ignite.apache.org
Subject: Does ignite support Read-consistent queries?
Read-consistent
Hi Gregory,
> So I would need a function that returns a node id from a Cache Key then a
> function returning a node of the cluster give its id
I don’t quite get how getting a node for a key fits here (you’d need to know
some key then),
but ignite.affinity().mapKeyToNode() does this.
How
Hi Calvin,
It should work the same for all queries. Ideally OptimizedMarshaller shouldn’t
even be used (except for JDK classes).
How did you check which marshaller is used in each case?
Can you share the code of your POJO? Or perhaps a runnable reproducer?
Can you also share the logs/check
> This keeps the performance stable, but I'm not sure how to persist the
> expired data properly
Perhaps you could listen to the EXPIRED events of the cache that stores the
data you want to persist,
create an additional cache with persistence enabled and put all your expired
data there.
Thus
I believe that’s correct, although listening for
javax.cache.event.EventType.EXPIRED in ContinuousQuery
may be more straightforward.
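A rough sketch of that, assuming your Ignite version supports delivering expired events to a continuous query via setIncludeExpired (the cache names are hypothetical):

```java
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.EventType;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;

public class ExpiredListener {
    static void persistExpired(IgniteCache<Long, String> cache,
                               IgniteCache<Long, String> persistentCache) {
        ContinuousQuery<Long, String> qry = new ContinuousQuery<>();
        qry.setIncludeExpired(true); // deliver EXPIRED events to the listener

        qry.setLocalListener(events -> {
            for (CacheEntryEvent<? extends Long, ? extends String> e : events) {
                if (e.getEventType() == EventType.EXPIRED)
                    persistentCache.put(e.getKey(), e.getOldValue()); // move to persistent cache
            }
        });

        cache.query(qry);
    }
}
```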
Also, I don’t think you need to disable eager TTL for that.
Eager TTL is about when to pay for deletion of the old data – in other words, if
you specify a TTL
with
Hi,
Ignite uses JCache’s ExpiryPolicy, and that API only provides a way to specify
three kinds of TTL – for creation, modification and access.
AFAIU you’d like to tune TTL on a finer level, having different values per
different operations. If so, you can’t do that just with ExpiryPolicy.
But
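For reference, these are the three knobs the JCache ExpiryPolicy does give you (the durations below are arbitrary):

```java
import java.util.concurrent.TimeUnit;
import javax.cache.expiry.Duration;
import javax.cache.expiry.ExpiryPolicy;
import org.apache.ignite.IgniteCache;

public class ThreeTtlKinds {
    static IgniteCache<Long, String> withTtl(IgniteCache<Long, String> cache) {
        ExpiryPolicy policy = new ExpiryPolicy() {
            @Override public Duration getExpiryForCreation() { return new Duration(TimeUnit.MINUTES, 10); }
            @Override public Duration getExpiryForUpdate() { return new Duration(TimeUnit.MINUTES, 5); }
            @Override public Duration getExpiryForAccess() { return null; } // leave TTL unchanged on reads
        };
        return cache.withExpiryPolicy(policy);
    }
}
```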
Hi Rajeev,
Generally `getAll` should behave in a more optimized manner, not simply
delegate to `get`.
If you have a stack trace, could you please share it? It would help to pinpoint
the code path
that needs to be looked at.
Also, can you show some code and tell a bit about your configuration?
Hi,
When you create a cache via Java API you provide a value type to the
QueryEntity.
Then the table name for SQL queries will have a form
..
Can you share the code that you use to create the caches?
Then it would be easier to tell what will be the exact names of the tables.
Thanks,
Stan
Hi,
It’s IgniteConfiguration.segmentationPolicy property.
However, note that Apache Ignite doesn’t provide standard mechanisms for
network segmentation detection.
By default, a node will only be moved to detected state when it is explicitly
notified by coordinator that it had failed and was
It looks like you have some issues with your maven installation. Make sure
you're able to build some simple projects, e.g. go over the maven guide
https://maven.apache.org/guides/getting-started/.
Stan
Hi,
It seems to me that there are two questions in this post:
1) How to store cross-referencing data in an IgniteCache?
2) How to extract data from server based on some complex filter?
First, on the cross-references. To store a reference to another entity,
which is stored as a separate entry in
Hi,
Your code for getting a partition ID is correct.
Ignite can't use your "id" field to calculate partition - it just doesn't
know anything about it. It doesn't depend on the name, so calling it "id"
doesn't help, and it doesn't depend on the backing Oracle DB, so it doesn't
know that "id" is
Hi,
So, in short, you want to have some data structure basically separate from
Ignite that would just be using Ignite's SQL engine. Is that right? If so,
LOCAL caches sound like the closest what Ignite has for this. I'm not sure
if it would be much simpler if it didn't use Ignite's
> Curious to know How fast is Disk KV persistence ? since Ignite iterates
over all keys and indexes to do the computation. Is Disk KV persistence is
as efficient as in other stable NoSQL database like Cassandra ?
> Does the number of partitions helps in better key lookup access from Disk
> ?
>
Generally, you should have an index for each combination of fields you use in
a query. Primary key gives you an implicit index, but you need to create the
rest yourself.
In your case, I'd suggest to have AccountID as a primary key (without
PartyID) and also create a compound index for the pair
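A sketch of how that could look with query annotations (the class name and index group name are hypothetical):

```java
import java.io.Serializable;
import org.apache.ignite.cache.query.annotations.QuerySqlField;

public class Account implements Serializable {
    // Primary lookup field, indexed on its own and first in the compound index.
    @QuerySqlField(index = true,
        orderedGroups = @QuerySqlField.Group(name = "acct_party_idx", order = 0))
    private long accountId;

    // Second field of the compound (accountId, partyId) index.
    @QuerySqlField(orderedGroups = @QuerySqlField.Group(name = "acct_party_idx", order = 1))
    private long partyId;
}
```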
Ignite will take care of evicting the data from memory itself. The hot data
will be in the memory, and the rarely accessed data will be loaded on
demand. In other words, rarely accessed data will not occupy memory space all
the time.
Thanks,
Stan
There may be some misunderstanding.
What I meant was that the data most likely WILL have good distribution with
assetGroupId being @AffinityKeyMapped. To have a good distribution it is
generally enough to have a lot of data groups, so that it is likely that
each partition stores more or less equal
> Cache is based on hit or miss ratio or simple LRU
LRU
> But as I mentioned in point #3 there might be services which hit multiple
> times but good latency is not the requirement. I dont want cache to evict
> any records when querying to such few Tables.
What you could do in this case is
Hi Naveen,
There is no scheduled date for it yet. 2.4 release has just been voted for
and accepted.
Based on the version history (https://ignite.apache.org/download.cgi) one
could say that the average time between releases is about 2-3 months, so
it's quite possible that 2.5 will be released in
Hi,
A terminology nitpicking: it seems that you’re talking about expiry; eviction
is a similar mechanism but is based on the data size, not time.
Have you tried setting CacheConfiguration.eagerTtl = false
(https://apacheignite.readme.io/v2.3/docs/expiry-policies#section-eager-ttl)?
With that
Hi Naveen,
The native persistence doesn’t require you to reload data to memory, the tables
should be available after the activation.
How do you start the nodes? Do you use ignite.sh/ignite.bat?
Have you downloaded Ignite as .zip archive or via Maven?
It’s possible that your persistence files
Hi Naveen,
Please refer to this page https://apacheignite.readme.io/docs/3rd-party-store.
In short, you need to implement a CacheStore or use one of the standard
implementations (CacheJdbcBlobStore, CacheJdbcPojoStore)
and add it to your cache configuration.
Also, you can set
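A sketch of wiring a store into the cache configuration (MyCacheStore is a minimal placeholder; a real implementation would talk to your DB):

```java
import javax.cache.configuration.FactoryBuilder;
import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class StoreConfig {
    // Placeholder store implementation, purely for illustration.
    public static class MyCacheStore extends CacheStoreAdapter<Long, Object> {
        @Override public Object load(Long key) { return null; }
        @Override public void write(javax.cache.Cache.Entry<? extends Long, ?> e) { }
        @Override public void delete(Object key) { }
    }

    static CacheConfiguration<Long, Object> cacheWithStore() {
        CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("myCache");
        cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyCacheStore.class));
        cfg.setReadThrough(true);   // load missing entries from the store
        cfg.setWriteThrough(true);  // propagate updates to the store
        return cfg;
    }
}
```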
You don’t have to explicitly set sqlSchema for a cache – the default value is
the cache name.
Both cache1.query() and cache2.query() can access the data in both caches,
the only difference is the default schema name that will be used in the query.
If both caches have the same schema name or if
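For instance, since the default schema equals the cache name, cache1 can reach another cache’s data by qualifying the table with that schema ("cache2" and its Person table below are hypothetical):

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class CrossSchemaQuery {
    static void queryOtherCache(IgniteCache<?, ?> cache1) {
        // The schema name is the cache name, quoted because it is case-sensitive.
        cache1.query(new SqlFieldsQuery(
            "SELECT p.name FROM \"cache2\".Person p")).getAll();
    }
}
```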
Hi Prachi,
Any chance you can help with that?
Thanks,
Stan
From: ihorps
Sent: March 6, 2018, 12:10
To: user@ignite.apache.org
Subject: Ignite readme.io - documentation export (pdf, etc)?
hi all
I tried to find some option on readme.io to export Apache Ignite
documentation into some static
Hi Naveen,
When you’re using Ignite SQL queries they aren’t processed by the Oracle’s SQL
engine.
Instead, Ignite will process the SQL itself (via H2 database engine) and get
the data from its cache.
For that to work you need to first preload the data to the cache via a
CacheStore.loadCache()
Hi Jeff,
I believe it’s Long.MAX_VALUE, so I wouldn’t worry about reaching that cap in
one’s lifetime :)
Thanks,
Stan
From: Jeff Jiao
Sent: March 7, 2018, 10:52
To: user@ignite.apache.org
Subject: max size of Ignite cache (not memory size, i mean max amount of records)
hi Igniter,
Is there a
Hi Olexandr,
Is it reproducible? Can you share a full thread dump?
Also, are there any related messages in the logs (make sure you’re running with
`java -DIGNITE_QUIET=false` or with `ignite.sh -v`)?
Thanks,
Stan
From: Olexandr K
Sent: March 6, 2018, 12:48
To: user@ignite.apache.org