I personally feel that this should be handled at a higher level.
In my view, the Ignite Compute subsystem is about simple delivery of a job
to a remote server. It has failover and other features, but to really use them you
have to go for ComputeTaskSplitAdapter or similar things, and they're hard to get right.
I’d
Hi,
If you’d like to contribute (that’s great!) please follow this process:
https://ignite.apache.org/community/contribute.html
In short, you need to introduce yourself as a contributor on
d...@ignite.apache.org,
create a pull request on GitHub, run tests and request a review.
Good luck!
Stan
ignite cluster into a spark cluster
Hi Stan,
Thank you for providing the information.
Yes, it's exactly 'embedded deployment', but it looks like it has been deprecated
because of performance issues.
Stanislav Lukyanov wrote
> Embedded Mode Deprecation
> Embedded mode implies starting I
Yep, that sounds right.
The primary ways to load data into Ignite from another source are
a custom cache store or a data streamer.
A data streamer will usually outperform a cache store for initial/periodic data
loading.
Cache stores are good when you need to sync data between Ignite and a backing DB
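For the initial-load case, the data streamer usage can be sketched like this (an untested sketch; the cache name "myCache" and the Long/String types are placeholder assumptions, not from the original thread):

```java
// Minimal sketch of initial data loading with IgniteDataStreamer.
// "myCache" and the key/value types are illustrative assumptions.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerLoadExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("myCache");

            // try-with-resources flushes and closes the streamer at the end.
            try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
                streamer.allowOverwrite(false); // the default; fastest for initial load

                for (long i = 0; i < 1_000_000; i++)
                    streamer.addData(i, "value-" + i);
            } // close() flushes any remaining buffered entries
        }
    }
}
```

The streamer batches entries per destination node, which is why it tends to beat per-entry cache-store writes for bulk loads.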
Hi,
Sorry, I didn’t really get what you mean here…
Can you explain:
- What exactly are you doing?
- How is it different from normal Ignite-Spark integration, e.g. from shared
deployment (https://apacheignite-fs.readme.io/docs/installation-deployment)?
- What would be an alternative solution you’re
Hi,
Nope, there is no priority queue for affinity calls.
ComputeTaskSplitAdapter seems to be the best way to handle this.
As an alternative, you could do something similar with a cluster singleton
service:
submit tasks to it, sort by priority, and send them to the servers as
affinityRun.
Not s
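The cluster-singleton alternative mentioned above could be sketched as follows. Everything here (class names, the task wrapper, the cache name "myCache") is illustrative and untested, not an Ignite-provided API:

```java
// Sketch: a cluster singleton service that drains a priority queue and
// dispatches each job to the data's owner node via affinityRun.
import java.io.Serializable;
import java.util.concurrent.PriorityBlockingQueue;
import org.apache.ignite.Ignite;
import org.apache.ignite.lang.IgniteRunnable;
import org.apache.ignite.resources.IgniteInstanceResource;
import org.apache.ignite.services.Service;
import org.apache.ignite.services.ServiceContext;

public class PriorityDispatcherService implements Service {
    /** Hypothetical task wrapper: higher priority runs first. */
    public static class Task implements Comparable<Task>, Serializable {
        final int priority;
        final Object affKey;
        final IgniteRunnable job;

        public Task(int priority, Object affKey, IgniteRunnable job) {
            this.priority = priority;
            this.affKey = affKey;
            this.job = job;
        }

        @Override public int compareTo(Task o) {
            return Integer.compare(o.priority, priority); // descending order
        }
    }

    @IgniteInstanceResource
    private transient Ignite ignite;

    private final PriorityBlockingQueue<Task> queue = new PriorityBlockingQueue<>();

    /** Called by clients (e.g. through a service proxy) to enqueue work. */
    public void submit(Task t) {
        queue.offer(t);
    }

    @Override public void execute(ServiceContext ctx) throws InterruptedException {
        while (!ctx.isCancelled()) {
            Task t = queue.take(); // highest-priority task first
            // Run the job on the node that owns the task's affinity key.
            ignite.compute().affinityRun("myCache", t.affKey, t.job);
        }
    }

    @Override public void init(ServiceContext ctx) { /* no-op */ }
    @Override public void cancel(ServiceContext ctx) { /* take() is interrupted by Ignite */ }
}
```

Deploying this with `ignite.services().deployClusterSingleton(...)` would give exactly one dispatcher in the cluster, so the priority ordering is global.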
`CREATE TABLE … RECORD_VALID_FROM TIMESTAMP`.
Not sure if it is an issue and what is the correct usage and behavior,
but the easiest workaround I can think of is to use plain Long values for
timestamps.
Stan
From: Stanislav Lukyanov
Sent: October 15, 2018 17:17
To: user@ignite.apache.org
A guess: the value is being saved, but due to an issue with name or type
matching in the QueryEntity,
the SQL engine doesn't return it.
Look for the problem in the cache config (queryEntities property), pay
attention to the names, etc.
Stan
From: wt
Sent: October 15, 2018 12:47
To: user@ignite.ap
On your questions
> 1) How does one increase write throughput without increasing the number of clients
> (the server nodes are underutilized at the moment)
Actually, adding more clients is the intended way of increasing throughput if
the servers have spare capacity.
> 2) We have use cases where we many have man
Hi,
If you want help, you have to explain what your problem is.
Sharing code is great, but people also need to understand what exactly you’re
trying to do
and what to look for.
Also, your code seems to contain private classes (com.inn.*) so no one will be
able to run it anyway.
Stan
From: shru
Hi,
Were you able to find the root cause of this?
If yes, what was it?
The error indicates a network connection issue, so I guess
the solution should be about the network configuration.
Stan
From: eugene miretsky
Sent: August 21, 2018 23:35
To: user@ignite.apache.org
Subject: Spark Dataframe
Nope.
Here is the JIRA to add that: https://issues.apache.org/jira/browse/IGNITE-1683.
Seems like the functionality was broken at some point, and because of that it
was removed from Ignite.
Someone needs to address that and bring the API back to C#.
Stan
From: Hemasundara Rao
Sent: October 12, 20
There is an error “Failed to update index, incorrect key class”.
Any chance you’ve changed an integer field to a string one, or something like
that?
Changing field types is generally not supported.
Stan
From: wt
Sent: October 12, 2018 14:06
To: user@ignite.apache.org
Subject: RE: data streamer
Yes, there is direct support for UUID.
If you don’t know where the error is coming from, please share the code and the
logs.
Stan
From: wt
Sent: October 12, 2018 13:00
To: user@ignite.apache.org
Subject: data streamer - failed to update keys (GUID)
hi
I just wanted to check something. I ha
Yes, sure.
From: Dave Harvey
Sent: October 11, 2018 23:59
To: user@ignite.apache.org
Subject: Re: Query 3x slower with index
"Ignite will only use one index per table"
I assume you mean "Ignite will only use one index per table per query"?
On Thu, Oct 11, 2018 at 1:55 P
tree, then merge them.
Stan
From: eugene miretsky
Sent: October 11, 2018 20:58
To: user@ignite.apache.org
Subject: Re: Role of H2 datbase in Apache Ignite
Thanks!
So does it mean that CacheConfiguration.queryParallelism is really an H2
settings?
On Tue, Oct 9, 2018 at 4:27 PM Stanislav
Uhm, I don't have a tested example, but it seems pretty trivial.
It would be something like
@Bean
public SpringCacheManager springCacheManager() {
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setIgniteInstanceName("WebGrid");
    // set more Ignite parameters if needed
    SpringCacheManager mgr = new SpringCacheManager();
    mgr.setConfiguration(cfg);
    return mgr;
}
Hi,
It is a rather lengthy thread and I can’t dive into details right now,
but AFAICS the issue now is making the affinity key index work with a secondary
index.
The important things to understand are
1) Ignite will only use one index per table
2) In case of a composite index, it will apply the co
You're creating a new cache on each health check call and never
destroying them – of course, that leads to a memory leak; it's also awful for
performance.
Don’t create a new cache each time. If you really want to check that cache
operations work,
use the same one every time.
Thanks,
Stan
Fr
Yep, `order by` is usually needed for `limit`, otherwise you’ll get random rows
of the dataset
as by default there is no ordering.
If I'm not mistaken, in Ignite these options work on both map and reduce steps.
E.g. `limit 100` will first take the 100 rows from each node, then combine them
in a t
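The two-step `limit` described above can be pictured in plain Java (a toy model of the map/reduce steps, not Ignite code; all names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class LimitTwoStep {
    // Toy model: `limit n` is applied on each "node" (map step),
    // then again on the combined result (reduce step).
    static List<Integer> limitTwoStep(List<List<Integer>> perNodeRows, int n) {
        List<Integer> combined = new ArrayList<>();

        for (List<Integer> nodeRows : perNodeRows)
            combined.addAll(nodeRows.subList(0, Math.min(n, nodeRows.size()))); // map-side limit

        return combined.subList(0, Math.min(n, combined.size())); // reduce-side limit
    }

    public static void main(String[] args) {
        List<List<Integer>> nodes = List.of(List.of(1, 2, 3), List.of(4, 5, 6));
        // Without ORDER BY, which rows survive each step is arbitrary.
        System.out.println(limitTwoStep(nodes, 2));
    }
}
```

This is why `limit` without `order by` returns a correct *count* of rows but an unpredictable *selection* of them.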
Refer to this:
https://apacheignite.readme.io/docs/baseline-topology#section-usage-scenarios
Stan
From: the_palakkaran
Sent: October 9, 2018 22:59
To: user@ignite.apache.org
Subject: Re: Configuration does not guarantee strict
consistency between CacheStore and Ignite data storage upon restarts
`limit` and `offset` should work, with usual semantics.
Thanks,
Stan
From: kcheng.mvp
Sent: October 11, 2018 18:59
To: user@ignite.apache.org
Subject: Can I use limit and offset these two h2 features for pagination
I know in h2 standalone mode, we can use *limit* and *offset*
features(functio
No, seems there are no methods for that.
I assume it could be added to the metadata response. But why do you need it?
Stan
From: wt
Sent: October 11, 2018 15:12
To: user@ignite.apache.org
Subject: client probe cache metadata
Hi
The rest service has a meta method that returns fields and index
Hi,
// Sidenote: better not to ask two unrelated questions in a single email. It
complicates things if the threads grow.
Roughly speaking, a REPLICATED cache is the same as PARTITIONED with an infinite
number of backups.
The behavior is supposed to always be the same. Some components cut corners
Hi,
Nope, it doesn’t work like that.
Names of fields in the Java class are always the same as the names of the
fields in BinaryObject.
Frankly, I don’t really see a strong use case for the customization you’re
asking for.
If it is just to support different naming conventions in different places
I think this describes what you want:
https://apacheignite-fs.readme.io/docs/secondary-file-system
Stan
From: Divya Darshan DD
Sent: September 26, 2018 13:51
To: user@ignite.apache.org
Subject: Load data into particular Ignite cache from HDFS
Can you tell me a method to load files from HDFS in
It looks like your use case is having Ignite as a cache for HDFS, as described
here https://ignite.apache.org/use-cases/hadoop/hdfs-cache.html.
Try using this guide
https://apacheignite-fs.readme.io/docs/secondary-file-system.
Stan
From: Divya Darshan DD
Sent: September 26, 2018 9:19
To: user@
Well, you need to wait for the IGNITE-7153 fix then.
Or contribute it! :)
I checked the code, and it seems to be a relatively easy fix. One needs to
alter the GridRedisProtocolParser
to use ParserState in the way GridTcpRestParser::parseMemcachePacket does.
Stan
From: Michael Fong
Sent: October 11
Looks like you’re not actually starting Ignite there.
You need to either
- Provide the Ignite configuration path via SpringCacheManager.configurationPath
- Provide the Ignite configuration bean via SpringCacheManager.configuration
- Start Ignite manually in the same JVM prior to the SB app initialization
I
. Simpler
implementation could just raise exceptions on queries when policy is ..._SAFE
and some partitions are unavailable.
Thanks again,
Roman
On Tue, Oct 9, 2018 at 2:54 PM Stanislav Lukyanov
wrote:
Hi,
I’ve tried your test and it works as expected, with some partitions lost and
the final
In short, Ignite replaces H2’s storage level with its own.
For example, Ignite implements H2’s Index interface with its own off-heap data
structures underneath.
When Ignite executes an SQL query, it will ask H2 to process it, then H2 will
callback to Ignite’s implementations
of H2’s interfaces (
Ignite 2.0 is FAR from being recent :)
Try Ignite 2.6. If it doesn’t work, share configuration, logs and code that
starts Ignite.
Thanks,
Stan
From: ignite_user2016
Sent: October 9, 2018 22:00
To: user@ignite.apache.org
Subject: Re: Ignite on Spring Boot 2.0
Small update -
I also upgraded i
Hi,
Can’t say much about Java EE usage exactly but overall the idea is
1) Start an Ignite node named “foo”, in the same JVM as the application, before
Hibernate is initialized
2) Specify `org.apache.ignite.hibernate.ignite_instance_name=foo` in the
Hibernate session properties
3) Do NOT specify
Hi,
Please share configurations and full logs from all nodes.
Stan
From: ApacheUser
Sent: October 8, 2018 17:49
To: user@ignite.apache.org
Subject: Spark Ignite Data Load failing On Large Cache
Hi, I am testing large Ignite Cache of 900GB, on 4 node VM (96GB RAM, 8CPU and
500GB SAN Storage) Sp
Hi,
I’ve tried your test and it works as expected, with some partitions lost and
the final size being ~850 (~150 less than on the start).
Am I missing something?
Thanks,
Stan
From: Roman Novichenok
Sent: October 2, 2018 22:21
To: user@ignite.apache.org
Subject: Re: Partition Loss Policy optio
Well, exactly what it says – Ignite doesn’t guarantee consistency between
CacheStore and Native Persistence.
Because of this you may end up with different data in the 3rd party DB and
Ignite’s persistence,
so using such configuration is not advised.
See this page
https://apacheignite.readme.io/d
Hi,
Looks like a JVM bug.
The message means that bytecode generated for a lambda in the Ignite code is
structurally incorrect,
but the bytecode is generated in parts by the javac and JVM, so the issue must
be there.
I suggest you upgrade to the latest JDK 8 (currently 8u181) and see if it helps
The call
setIndexedTypes(Long.class, Person.class)
will search for all `@QuerySqlField` fields in Person and create indexes for
all of them with `index=true`.
For example, if your class looks like this
class Person {
    @QuerySqlField(index = true)
    private String name;
}
then an index will be created on the `name` field.
Hi,
AFAICS Ignite doesn’t even use json4s itself. I assume it’s only in the
dependencies and binary distribution for Spark to work.
So, if Spark actually needs 3.2.X you can try using that.
You can remove/replace the Ignite’s json4s jar with the required one, or use
your-favorite-build-system’s
Hi,
If the issue is that the node is starting with a new persistent storage each
time,
there could be some issue with the file system.
The algorithm to choose a persistent folder is
1) If IgniteConfiguration.consistentId is specified, use the folder of that name
2) Otherwise, check if there are any exis
Hi,
No, the behavior you’re describing is not expected.
Moreover, there is a bunch of tests in Ignite that make sure the reads are
consistent during the rebalance,
so this shouldn’t be possible.
Can you create a small reproducer project and share it?
Thanks,
Stan
From: Shrikant Haridas Sonone
Hi,
Generally, your initial thoughts on the expected latency are correct –
PRIMARY_SYNC allows you not to wait for the writes
to complete on the backups. However, some of the operations still have to be
completed on all nodes (e.g. acquiring key locks),
so increasing the number of backups does a
Hi,
Clients should be able to connect to the cluster normally.
Please share logs if you’re still seeing this.
Stan
From: hulitao198758
Sent: July 27, 2018 14:32
To: user@ignite.apache.org
Subject: Re: The problem after the ignite upgrade
I opened the persistent storage cluster, version 2.3 a
It definitely does.
IGNITE-7153 seems to be applicable for objects greater than 8 kb. Is that your
case?
If so then I guess that has to be the same issue.
Stan
From: Michael Fong
Sent: October 9, 2018 15:54
To: user@ignite.apache.org
Subject: BufferUnderflowException on GridRedisProtocolParser
I see a node in the topology flapping up and down every minute in the
restart1.log:
TcpDiscoveryNode [id=d6e52510-3380-4258-8a8e-798640b1786c, addrs=[10.29.42.49,
127.0.0.1], sockAddrs=[/10.29.42.49:47500, /127.0.0.1:47500], discPort=47500,
order=596, intOrder=302, lastExchangeTime=153715439345
Hi,
The thing is that the PK index is currently created roughly as
CREATE INDEX T(_key)
and not
CREATE INDEX T(customer_id, date).
You can’t use the _key column in the WHERE clause directly, so the query
optimizer can’t use the index.
After the IGNITE-8386 is fixed the index will be cre
With writeThrough an entry in the cache will never be "dirty" in that sense -
cache store will update the backing DB at the same time the cache update
happens.
From: monstereo
Sent: August 14, 2018 22:39
To: user@ignite.apache.org
Subject: Re: Eviction Policy on Dirty data
yes, using cachest
This is the commit https://github.com/apache/ignite/commit/bab61f1.
Fixed in 2.7.
Stan
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
What's your version?
Do you use native persistence?
Stan
FYI the issue https://issues.apache.org/jira/browse/IGNITE-8774 is now fixed,
the fix will be available in Ignite 2.7.
Seems to be answered in a nearby thread:
http://apache-ignite-users.70518.x6.nabble.com/Ignite-yarn-cache-store-class-not-found-tt23016.html
Stan
From: debashissinha
Sent: July 28, 2018 13:01
To: user@ignite.apache.org
Subject: Ignite client and server on yarn with cache read through
Hi ,
I h
Hi,
I guess this is the problem: https://issues.apache.org/jira/browse/IGNITE-8987
Stan
From: zhouxy1123
Sent: August 2, 2018 9:57
To: user@ignite.apache.org
Subject: in a four node ignite cluster,use atomicLong and process is stuck
incountdown latch
hi ,
in a four node ignite cluster,use a
Hi,
You don’t need to use DDL here.
It’s better to
- Define Key class to be like
class Key { [QuerySqlField] String CacheKey; [AffinityKeyMapped] long
AffinityKey; }
- set your Key and UserData to be indexed
cacheCfg.QueryEntities =
{
    new QueryEntity(typeof(Key), typeof(UserData))
};
Please don't send messages like "?". The mailing list is not a chatroom; messages
need to be complete and meaningful.
Reproducer is a code snippet or project that can be run standalone to reproduce
the problem.
The behavior you see isn’t expected nor known, so we need to see it before we
can commen
the sql using a
affinity key so that it gets executed only on a node which owns that data?
Thanks,
Prasad
On Wed, Jul 25, 2018 at 3:01 PM Stanislav Lukyanov
wrote:
What do you mean by “execute select query on cache using affinity key”
and what is the problem you’re trying to solve?
Stan
What do you mean by “execute select query on cache using affinity key”
and what is the problem you’re trying to solve?
Stan
From: Prasad Bhalerao
Sent: July 25, 2018 10:03
To: user@ignite.apache.org
Subject: ***UNCHECKED*** Executing SQL on cache using affinnity key
Hi,
Is there any way to ex
I don’t know how that MSSQL studio works, but it might be dependent on T-SQL or
some system views/tables specific for MSSQL.
DBeaver doesn’t use any DB-specific features and is known to work with Ignite.
Take a look at this page: https://apacheignite-sql.readme.io/docs/sql-tooling
Stan
From: wt
Are you trying to use MS SQL tools to connect to Ignite?
I don’t think that’s possible – at least, Ignite doesn’t claim to support that.
Stan
From: wt
Sent: July 24, 2018 16:01
To: user@ignite.apache.org
Subject: odbc - conversion failed because data value overflowed
I have a SQL server instan
Random-LRU selects the minimum of 5 timestamps (5 pages, 1 timestamp for each
page).
Random-2-LRU selects the minimum of 10 timestamps (5 pages, 2 timestamps for
each page).
My advice is not to go that deep. Random-2-LRU is protected from the "one-hit
wonder" and has a very tiny overhead compar
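The sampling described above can be modeled in a few lines of plain Java (a toy illustration of the selection rule, not Ignite's internal code; the `Page` type is made up):

```java
import java.util.List;

public class Lru2Sketch {
    // Toy model of Random-2-LRU sampling: each sampled page keeps two access
    // timestamps, and the candidate for eviction is the page holding the
    // globally minimal timestamp (i.e. minimal "older of the two").
    record Page(int id, long ts1, long ts2) {
        long olderTs() { return Math.min(ts1, ts2); }
    }

    static Page pickEvictionCandidate(List<Page> sampled) {
        Page best = sampled.get(0);
        for (Page p : sampled)
            if (p.olderTs() < best.olderTs())
                best = p;
        return best;
    }

    public static void main(String[] args) {
        List<Page> sample = List.of(
            new Page(1, 100, 900), // one recent touch doesn't save it: ts1 stays old
            new Page(2, 500, 600),
            new Page(3, 700, 800));
        System.out.println(pickEvictionCandidate(sample).id());
    }
}
```

The "one-hit wonder" protection falls out of using the older timestamp: a page touched once recently still carries its old second timestamp and remains an eviction candidate.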
Hi,
I'd suggest trying an upgrade from 2.2 to 2.6.
One thing that stands out in your configs is the expiryPolicy settings.
The expiryPolicy property was deprecated; you need to use expiryPolicyFactory
instead.
I happen to have recently shared a sample of setting it in an XML config on SO:
https://stackoverf
Hi,
I think all you need is to store (a+b)/c in a separate field of the key and
annotate it with @AffinityKeyMapped:
class Key {
    private long a, b, c;

    @AffinityKeyMapped
    private long affKey;

    public Key(long a, long b, long c) {
        this.a = a;
        this.b = b;
        this.c = c;
        this.affKey = (a + b) / c;
    }
}
You can write INSERT the same way as you write SELECT.
Syntax for INSERT is described here:
https://apacheignite-sql.readme.io/docs/insert
Thanks,
Stan
From: UmeshPandey
Sent: July 19, 2018 13:43
To: user@ignite.apache.org
Subject: RE: Insert Query in Gridgrain WebConsole
In this blog insert
If you’re using GridGain, it would be better to contact their customer support.
Stan
From: UmeshPandey
Sent: July 19, 2018 12:59
To: user@ignite.apache.org
Subject: How to increase memory of Gridgrain database.
Can anyone tells, how to increase a memory of Gridgrain database?
I tried below sni
If you’re using GridGain, it would be better to contact their customer support.
Also, check out this docs page about queries in WebConsole:
https://apacheignite-tools.readme.io/docs/queries-execution
Stan
From: UmeshPandey
Sent: July 19, 2018 12:54
To: user@ignite.apache.org
Subject: Insert Qu
Most likely it’s either
https://issues.apache.org/jira/browse/IGNITE-8023
or
https://issues.apache.org/jira/browse/IGNITE-7753
Until these issues are fixed, avoid starting/restarting nodes while cluster
activation is in progress.
Thanks,
Stan
From: Calvin KL Wong, CLSA
Sent: July 19, 2018 5:
I’d look into calling control.sh or ignitevisorcmd.sh and parsing their output.
E.g. check that control.sh --cache can connect to the local node and return one
of your caches.
However, this check is not purely for the local node, as the command will
connect to the cluster as a whole.
A more loca
The functionality you’re looking for is generally called Rolling Upgrade.
Ignite doesn’t support clusters with mixed versions out of the box.
There are third-party solutions on top of Ignite, such as GridGain, that do
have that.
Thanks,
Stan
From: KR Kumar
Sent: July 16, 2018 12:44
To: user@ig
Ignite transactions support this with REPEATABLE_READ isolation level.
More info here: https://apacheignite.readme.io/docs/transactions
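A read-consistent block with that isolation level might be sketched as follows (untested; the cache name is a placeholder, and the cache must be configured with TRANSACTIONAL atomicity mode for transactions to apply):

```java
// Sketch: repeatable reads inside an Ignite transaction.
// "myCache" is an illustrative name; the cache is assumed TRANSACTIONAL.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.transactions.Transaction;

import static org.apache.ignite.transactions.TransactionConcurrency.PESSIMISTIC;
import static org.apache.ignite.transactions.TransactionIsolation.REPEATABLE_READ;

public class RepeatableReadExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("myCache");

            try (Transaction tx = ignite.transactions().txStart(PESSIMISTIC, REPEATABLE_READ)) {
                String first = cache.get(1);
                String second = cache.get(1); // sees the same value as `first`
                tx.commit();
            }
        }
    }
}
```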
Stan
From: Prasad Bhalerao
Sent: July 9, 2018 14:50
To: user@ignite.apache.org
Subject: Does ignite support Read-consistent queries?
Read-consistent queries:
Hi Gregory,
> So I would need a function that returns a node id from a Cache Key then a
> function returning a node of the cluster give its id
I don't quite get how getting a node for a key fits here (you'd need to know
some key then),
but ignite.affinity(cacheName).mapKeyToNode(key) does this.
How abou
[?:1.8.0_152]
at
org.apache.ignite.internal.processors.cache.query.GridCacheQueryResponseEntry.readExternal(GridCacheQueryResponseEntry.java:90)
~[ignite-core-2.3.0-clsa.20180130.59.jar:2.3.0-clsa.20180130.59]
Thanks,
Calvin
From: Stanislav Lukyanov [mailto:stanlukya...@gmail.com]
Sent: Monday, July 0
Hi Calvin,
It should work the same for all queries. Ideally OptimizedMarshaller shouldn’t
even be used (except for JDK classes).
How did you check which marshaller is used in each case?
Can you share the code of your POJO? Or perhaps a runnable reproducer?
Can you also share the logs/check th
I don't really have much to say except to try reducing/balancing the on-heap
cache size.
If you have a lot of objects on heap, you need to have a large heap,
obviously.
If you need to constantly add/remove on-heap objects, you'll create a lot of
work for the GC.
Perhaps you can review your architecture to avoi
Well, that’s diving a bit deeper than the “don’t do cache operations” rule of
thumb, but let’s do that.
When receiver is invoked for key K, it’s holding the lock for K.
It is safe to do invoke on that K (especially if you control the invoked code)
since it is locked already.
But it is not safe t
Hi,
Looks like you’re performing a cache operation (invoke()) from a StreamReceiver
– this is not allowed.
Check out this SO answer
https://stackoverflow.com/questions/43891757/closures-stuck-in-2-0-when-try-to-add-an-element-into-the-queue.
Stan
From: breischl
Sent: June 21, 2018 19:35
To:
Data region and data storage metrics are always local.
There are no cluster-wide versions for them.
See an issue for the documentation improvement:
https://issues.apache.org/jira/browse/IGNITE-8726
There is a code snippet that can help to collect metrics from a client via
Compute.
Stan
From: a
There is no default expiry policy, and no default eviction policy for the
on-heap caches (just in case: expiry and eviction are not the same thing, see
https://apacheignite.readme.io/docs/evictions and
https://apacheignite.readme.io/docs/expiry-policies).
I see that most of the threads in the d
The plugin is shipped as a part of the GridGain Enterprise and Ultimate
editions (which is basically Apache Ignite + GridGain plugins).
The download links are here: https://www.gridgain.com/resources/download.
The description of the GridGain WebConsole is here:
https://docs.gridgain.com/docs/web-
Hi Praveen,
Is it on all nodes or just one? How many nodes do you have?
Is the cluster fully responsive, or do some operations appear frozen forever?
It looks like a memory/GC issue to me.
jmap shows 2GB of on-heap data. What is the max heap size? Try increasing it.
Thanks,
Stan
From: praveeng
Hi,
1. Yes
2. This is controlled by the AffinityFunction::assignPartitions implementation.
If the default one doesn't suit you, you have to write another one yourself.
Reading this thread I have a feeling that this is a case of the XY problem
(https://meta.stackexchange.com/questions/66377/wha
Check out the Javadoc of the Ignite.queue() -
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/Ignite.html#queue-java.lang.String-int-org.apache.ignite.configuration.CollectionConfiguration-.
You can pass null if you don’t want to start a new queue, just get an existing
one by
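That lookup-without-create case can be sketched like this (untested; the helper and queue name are made up for illustration):

```java
// Sketch: fetching an existing IgniteQueue without creating one.
// Passing a null CollectionConfiguration means "don't create if absent",
// so the call returns null when no queue with that name exists.
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteQueue;

public class QueueLookup {
    static IgniteQueue<String> existingQueue(Ignite ignite, String name) {
        // The capacity argument is irrelevant when the queue is not being created.
        return ignite.queue(name, 0, null);
    }
}
```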
are partitioned and client should go to server A or B depending
on which partition was resolved for the key, right?
Did you mean that if client C is using server A as gateway, all traffic goes
through server A?
On Wed, Jun 13, 2018 at 4:43 PM, Stanislav Lukyanov
wrote:
On the client
Hi,
> Also will entries of a cache be in multiple partitions on the same node?
Yes.
> Or can a partition have entries of different cache ?
No. A partition can only belong to one cache.
Stan
From: the_palakkaran
Sent: June 13, 2018 17:30
To: user@ignite.apache.org
Subject: Local peek from clie
On the client disconnection:
The client is connected to one of the server nodes, using it as a sort of
gateway to the cluster.
If the gateway server fails, the client is supposed to attempt to reconnect to
other IPs.
Right now I can’t say for sure whether it does so indefinitely, or has some
timeou
You need to provide distinct names to the instances when starting them in the
same JVM.
Set IgniteConfiguration::igniteInstanceName property in both configurations.
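A minimal sketch of two co-existing client instances (untested; the instance names are placeholders):

```java
// Sketch: two Ignite instances in one JVM require distinct instance names.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class TwoClients {
    public static void main(String[] args) {
        IgniteConfiguration cfg1 = new IgniteConfiguration()
            .setIgniteInstanceName("client-1")
            .setClientMode(true);

        IgniteConfiguration cfg2 = new IgniteConfiguration()
            .setIgniteInstanceName("client-2")
            .setClientMode(true);

        // Without distinct names the second start() would fail,
        // since instance names must be unique within a JVM.
        try (Ignite c1 = Ignition.start(cfg1); Ignite c2 = Ignition.start(cfg2)) {
            // both clients coexist in this JVM
        }
    }
}
```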
Stan
From: Jeff Jiao
Sent: June 12, 2018 9:58
To: user@ignite.apache.org
Subject: Re: create two client instance in one JVM to co
slow, it will make CPU busy?
Thanks
Shawn
On 6/12/2018 00:01,Stanislav Lukyanov wrote:
Sorry, but there isn’t much else to be said without additional data.
If consistent usage of resources on that host is important, I suggest setting up
some monitoring so that if it happens again there’ll be at
Quick googling suggests to use gdb:
https://stackoverflow.com/a/37375351/4153863
Again, Ignite doesn’t play any role here – whatever works for any Java
application should work for Ignite as well.
Stan
From: naresh.goty
Sent: June 11, 2018 20:56
To: user@ignite.apache.org
Subject: Re: Ignite N
I see messages in the log that are not from Ignite, but which also suggest that
there is some network issue:
2018-06-09 10:19:59 [a9fc3882] severe [native] Exception in controller:
receiveExact() ... error reading, 70014, End of file found. Retrying every
10 seconds.
2018-06-09 10:19:59
nternal.processors.query.h2.IgniteH2Indexing$13@110ac52b],
process=true]
-- CALL COMPLETED
83837380032 2018-06-11 13:52:21.903 [https-jsse-nio-8080-exec-4] DEBUG
com.xxx.lk.backend.cache.impl.RefreshTokenCache - Cache operation completed
On Mon, Jun 11, 2018 at 12:27 PM, Stanislav Lukyan
every 30 seconds. If no query, the cpu
should be very low. But yesterday it is a exception. Very strange.
shawn.du
Email: shawn...@neulion.com.cn
On 06/11/2018 17:43, Stanislav Lukyanov wrote:
How do you monitor your CPU usage? Do you know which
Hi,
Yep, that’s a bug. Daemon nodes (like the ones ignitevisorcmd starts) seem to
break baseline topology processing.
Filed https://issues.apache.org/jira/browse/IGNITE-8774.
Stan
From: szj
Sent: June 9, 2018 1:27
To: user@ignite.apache.org
Subject: Re: Ignite 2.5 nodes do not rejoin the clus
Hi,
Have you tried setting consistentId and performing the activation as Pavel
suggested?
I’d also suggest to go through the usage scenarios described at
https://apacheignite.readme.io/docs/baseline-topology#section-usage-scenarios.
Thanks,
Stan
From: siva
Sent: June 11, 2018 14:16
To: user@ig
How do you monitor your CPU usage? Do you know which processes consume CPU? Are
you sure it is Ignite’s process? Is CPU consumed more in user space or in
system space? Can you share the actual stats?
From what I see in the thread dump, there is at least some activity on this
Ignite: sys-strip
Hi,
Try to configure the existing timeouts:
- failureDetectionTimeout
- clientFailureDetectionTimeout
- networkTimeout
Not sure if the Future you mention really needs a timeout. If there was a
timeout, it would be either failureDetectionTimeout or networkTimeout anyway.
Currently the calling threa
Hi,
If you mean IgniteQueue then, unfortunately, there is no way to describe it in
the XML.
You have to call Ignite.queue() to create one.
Thanks,
Stan
From: arunkjn
Sent: June 10, 2018 10:14
To: user@ignite.apache.org
Subject: Initialize ignite queue when node is starting
Hi,
I give my cac
If you’re asking whether you can use other DBMS as a persistence layer, then
yes.
You can use anything (any DBMS, or any kind of custom storage) as a 3rd party
persistence – see https://apacheignite.readme.io/docs/3rd-party-store.
If you’re asking whether you can use multiple DBMS backing the s
Hi,
peerClassLoading only works for Ignite’s Compute Grid – i.e. for the tasks you
send via ignite.compute() calls.
Other classes, such as 3rd party libraries, need to be present on all nodes.
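The compute-grid case that peerClassLoading does cover can be sketched like this (untested; only the closure's own class is shipped, not any third-party jars it references):

```java
// Sketch: with peer class loading enabled, the closure sent via
// ignite.compute() is transferred to remote nodes automatically.
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PeerLoadingExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration()
            .setPeerClassLoadingEnabled(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            // The lambda's class is peer-loaded on every node in the cluster;
            // classes from third-party libraries would NOT be shipped this way.
            ignite.compute().broadcast(() -> System.out.println("Hello from a remote node"));
        }
    }
}
```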
Stan
From: Dmitry Pavlov
Sent: June 9, 2018 13:08
To: user; Vishwas Bm
Subject: Fwd: Question on pee
Please resend this to the Ignite developers list - d...@ignite.apache.org. The
discussions of code changes happen there.
user@ignite.apache.org is for more user-oriented questions.
Thanks,
Stan
Most likely you've run into this bug:
https://issues.apache.org/jira/browse/IGNITE-8210
It was fixed in 2.5, try updating to that version.
Thanks,
Stan
This thread seems to be a duplicate of
http://apache-ignite-users.70518.x6.nabble.com/Dependencies-required-to-use-ignite-in-embedded-mode-tt21619.html.
Let’s stick to the latter one.
the_palakkaran, please avoid posting the same question several times.
Thanks,
Stan
From: Pavel Vinokurov
Sent:
Look at the bottom exceptions in the chain:
=
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to find
class with given class loader for unmarshalling (make
sure same versions of all classes are available on all nodes or enable
peer-class-loading) [clsLdr=org.apache.felix.