Hi drosso,
Luckily, there is no mystery. By the way, which version of Ignite do you use?
The clue to the strange behavior here is a topology change during your test
execution. As I can see, the Putter node is a server data node as well, so it will
hold some data partitions and consequently will receive
Hi All,
Are remote listen events available in Ignite C#.Net?
I can see they are available in Java. If they are available in C#.Net,
could you please provide sample code for listening to a cache expiry event?
Thanks and Regards,
Hemasundara Rao Pottangi | Senior
I think you are saying that if I had a persistent store using Ignite 2.5 and
upgraded to 2.6, I might see those errors?
In my case the grid in question was first created with 2.6.
*From:* Dmitriy Pavlov
*Sent:* Friday, October 5, 2018 9:46 AM
*To:* user@ignite.apache.org
*Subject:* Re: WAL
Hi Prasad,
G1 is a good choice for big heaps. If you have long GC pauses you need to
investigate them: maybe GC tuning is required, or maybe you just need to
increase the heap size; it depends on your case and is very specific.
Also, please make sure that you use a 31 GB heap or a > 40 GB heap; numbers
"Ignite will only use one index per table"
I assume you mean "Ignite will only use one index per table per query"?
On Thu, Oct 11, 2018 at 1:55 PM Stanislav Lukyanov
wrote:
> Hi,
>
>
>
> It is a rather lengthy thread and I can’t dive into details right now,
>
> but AFAICS the issue now is
Thanks!
So does it mean that CacheConfiguration.queryParallelism is really an H2
setting?
On Tue, Oct 9, 2018 at 4:27 PM Stanislav Lukyanov
wrote:
> In short, Ignite replaces H2’s storage level with its own.
>
> For example, Ignite implements H2’s Index interface with its own off-heap
> data
Hi,
Is anyone using on heap cache with more than 30 gb jvm heap size in
production?
Which gc algorithm are you using?
If yes have you faced any issues relates to long gc pauses?
Thanks,
Prasad
Thanks!
Could you please clarify "*In case of a composite index, it will apply the
columns one by one*"?
Ignite (or rather H2?) needs to load the data into the heap in order to do the
groupBy and aggregations. We were hoping that only data matching the
category filter would be loaded.
What does
Uhm, don’t have a tested example but it seems pretty trivial.
It would be something like
@Bean
public SpringCacheManager springCacheManager() {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setIgniteInstanceName("WebGrid");
    // set more Ignite parameters here as needed
    SpringCacheManager mgr = new SpringCacheManager();
    mgr.setConfiguration(cfg);
    return mgr;
}
Hi,
It is a rather lengthy thread and I can’t dive into details right now,
but AFAICS the issue now is making affinity key index to work with a secondary
index.
The important things to understand are:
1) Ignite will only use one index per table
2) In case of a composite index, it will apply the
Do you have any sample here? The bean defined in the Ignite configuration would
not work with the Spring Boot context; we need to instantiate the Spring cache
manager within the Spring Boot context.
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
You’re creating a new cache on each health check call and never
destroying them; of course, that leads to a memory leak, and it’s also awful
for performance.
Don’t create a new cache each time. If you really want to check that cache
operations work,
use the same one every time.
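The pattern above can be sketched without Ignite at all. The following is a hypothetical illustration (the class name `HealthCheckSketch` and cache name `healthCache` are made up), using a plain map as a stand-in for the cache registry, to show why reusing one cache avoids repeated creation costs:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class HealthCheckSketch {
    // Stand-in for a cache registry: the cache is created once, then reused.
    static final Map<String, Map<String, String>> caches = new ConcurrentHashMap<>();
    static final AtomicInteger creations = new AtomicInteger();

    static Map<String, String> getOrCreateCache(String name) {
        return caches.computeIfAbsent(name, n -> {
            creations.incrementAndGet(); // in Ignite, creating a cache is expensive
            return new ConcurrentHashMap<>();
        });
    }

    static boolean healthCheck() {
        // Same cache every call, instead of a fresh one per health check.
        Map<String, String> cache = getOrCreateCache("healthCache");
        cache.put("ping", "pong");
        return "pong".equals(cache.get("ping"));
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) healthCheck();
        System.out.println(creations.get()); // 1: created once, not per call
    }
}
```

With a real Ignite cache, the creation step is far more expensive (cluster-wide coordination), so the per-call creation hurts even more than in this sketch.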
Thanks,
Stan
Yep, `order by` is usually needed for `limit`, otherwise you’ll get random rows
of the dataset
as by default there is no ordering.
If I’m not mistaken, in Ignite these options work on both map and reduce steps.
E.g. `limit 100` will first take the 100 rows from each node, then combine them
in a
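A hedged sketch of that map/reduce behavior, with plain Java lists standing in for nodes (the class and method names are made up; this illustrates the described semantics, not Ignite's actual implementation):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class LimitSketch {
    // Map step: each "node" sorts its local rows and applies LIMIT n.
    // Reduce step: the partial results are merged, re-sorted, and LIMIT n is applied again.
    static List<Integer> mapReduceLimit(List<List<Integer>> nodes, int n) {
        List<Integer> combined = new ArrayList<>();
        for (List<Integer> node : nodes) {
            List<Integer> sorted = new ArrayList<>(node);
            sorted.sort(Comparator.naturalOrder());                       // per-node ORDER BY
            combined.addAll(sorted.subList(0, Math.min(n, sorted.size()))); // per-node LIMIT
        }
        combined.sort(Comparator.naturalOrder());                         // reduce-side ORDER BY
        return combined.subList(0, Math.min(n, combined.size()));         // reduce-side LIMIT
    }

    public static void main(String[] args) {
        List<List<Integer>> nodes = List.of(List.of(5, 1, 9), List.of(2, 7, 3));
        System.out.println(mapReduceLimit(nodes, 2)); // [1, 2]
    }
}
```

Without the ORDER BY, each node would hand back an arbitrary n rows, which is why LIMIT alone gives effectively random results.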
Refer to this:
https://apacheignite.readme.io/docs/baseline-topology#section-usage-scenarios
Stan
From: the_palakkaran
Sent: October 9, 2018, 22:59
To: user@ignite.apache.org
Subject: Re: Configuration does not guarantee strict
consistency between CacheStore and Ignite data storage upon
Thank you very much for the reply!
In standalone H2 mode, if you want to make `limit` and `offset` work correctly,
the SQL must include an `order by` clause.
I can't find documentation addressing how H2 works in an Ignite cluster (I just
know it works something like map-reduce),
so I guess if `limit`, `offset` and
`limit` and `offset` should work, with usual semantics.
Thanks,
Stan
From: kcheng.mvp
Sent: October 11, 2018, 18:59
To: user@ignite.apache.org
Subject: Can I use limit and offset these two h2 features for pagination
I know in H2 standalone mode, we can use *limit* and *offset*
features (functions) for pagination. Not sure whether in Ignite I can still use
these two functions for the same purpose?
Hi,
Well, one (weak) reason is that somebody else could already have put data into
a cache using the "dash" convention, and in that case we are unable to
retrieve this data (e.g. how can we (automatically) deserialize the
"user-name" field?).
Also, allowing this mapping would be in line with
Hi,
You can get all caches configuration from Java API:
IgniteCache<Object, Object> cache =
    dr1Hub.getOrCreateCache("CACHE");
CacheConfiguration<Object, Object> conf =
    cache.getConfiguration(CacheConfiguration.class);
System.out.println(conf);
No, seems there are no methods for that.
I assume it could be added to the metadata response. But why do you need it?
Stan
From: wt
Sent: October 11, 2018, 15:12
To: user@ignite.apache.org
Subject: client probe cache metadata
Hi
The rest service has a meta method that returns fields and index
This is brilliant. Thanks for this information.
Hi,
// Sidenote: better not to ask two unrelated questions in a single email. It
complicates things if the threads grow.
Roughly speaking, a REPLICATED cache is the same as a PARTITIONED one with an
infinite number of backups.
The behavior is supposed to always be the same. Some components cut corners
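If that equivalence holds, it could be sketched in Spring XML configuration roughly like this (the cache names are made up; treat this as an illustration, not a verified config):

```xml
<!-- A REPLICATED cache... -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="replicatedCache"/>
    <property name="cacheMode" value="REPLICATED"/>
</bean>

<!-- ...behaves like a PARTITIONED cache whose backups span every node. -->
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="partitionedCache"/>
    <property name="cacheMode" value="PARTITIONED"/>
    <property name="backups" value="2147483647"/> <!-- effectively "infinite" -->
</bean>
```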
Hi,
Regarding '127.0.0.1#47500', '0:0:0:0:0:0:0:1/0#47500':
these are just different views of the same address (localhost). For example,
Ignite started on 10.0.75.1 could be reachable via the following addresses from
the same node:
>>> Local node addresses: [LAPTOP-I5CE4BEI/0:0:0:0:0:0:0:1,
>>>
Hi
The REST service has a meta method that returns fields and index info. I
can't see another REST method that would show what a cache config is like
(backups, atomic or transactional, whether metrics tracking is enabled, and
whatever other relevant cache config there might be). Can I get this from the
Hi,
Nope, it doesn’t work like that.
Names of fields in the Java class are always the same as the names of the
fields in BinaryObject.
Frankly, I don’t really see a strong use case for the customization you’re
asking for.
If it is just to support different naming conventions in different
I think this describes what you want:
https://apacheignite-fs.readme.io/docs/secondary-file-system
Stan
From: Divya Darshan DD
Sent: September 26, 2018, 13:51
To: user@ignite.apache.org
Subject: Load data into particular Ignite cache from HDFS
Can you tell me a method to load files from HDFS
It looks like your use case is having Ignite as a cache for HDFS, as described
here https://ignite.apache.org/use-cases/hadoop/hdfs-cache.html.
Try using this guide
https://apacheignite-fs.readme.io/docs/secondary-file-system.
Stan
From: Divya Darshan DD
Sent: September 26, 2018, 9:19
To:
Well, you need to wait for the IGNITE-7153 fix then.
Or contribute it! :)
I checked the code, and it seems to be a relatively easy fix. One needs to
alter the GridRedisProtocolParser
to use ParserState in the way GridTcpRestParser::parseMemcachePacker does.
Stan
From: Michael Fong
Sent: 11
Hi,
is it possible to deserialize cache entries to Java objects, and specify
field names mapping using annotations?
For example, let's say cache entries follow a "with-dashes" naming
convention:
binaryObject.toString() = "domain.User[id=... idHash=... user-name=Paul ]"
And I deserialize it with
Hi,
I can't find additional information on evictions and transactions that I
need to document.
1) Do the eviction policies apply at a node level or a cluster level? I
understand that with a replicated cache it is probably node level, but how does
it work for a partitioned cache?
2) If I have 4 tables and 1
It worked. The directory set for IGNITE_HOME was getting cleaned up because it
wasn't on a mount point or a Docker volume. Once I set IGNITE_HOME to a
directory that persisted between container stop and start, the ID generator
worked as expected.
Hi Ivan,
Thank you for your interest! Below you can find the code for the two
sample programs:
*** ServerNode.java **
package TestATServerMode;
import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryUpdatedListener;
import