Hi vgrigorev,
I used your suggestion to do a health check for each node, but I got a memory
leak and the node exits with an OOM error: Java heap space.
Below is my example code:
// I create one bean to collect the info I want, which includes IP, hostname
and create time, and then return a JSON string.
It’s an AbstractMethodError, caused because the library isn’t compiled against
the Hibernate 5.2 spec.
On Tue, Oct 9, 2018 at 1:23 AM yarden wrote:
> Thanks a lot for your answer.
> Can you share what was the error you encountered on Hibernate 5.2?
Can anyone tell me how to do an update operation with a *join table*?
I have tried the SQL below:
1. update A set A.a1=1 join table(devId varchar=?) B on A.devId=B.devId *SQL ERROR*
2. update A, table(devId varchar=?) B set A.a2=1 where A.devId=B.devId *SQL ERROR*
3.update A set A.a1=1 where
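Ignite DML does not accept UPDATE ... JOIN syntax, so one common workaround is to express the join as a subquery on the join column. Below is a minimal, hypothetical sketch using the table/column names from the question and assuming a cache named "A"; treat it as an idea to try, not a confirmed solution:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class UpdateWithJoinWorkaround {
    // Rewrites "update A ... join table(devId varchar=?) B on A.devId=B.devId"
    // as a subquery; the TABLE() function still supplies the devId list.
    public static void run(Ignite ignite, String[] devIds) {
        SqlFieldsQuery qry = new SqlFieldsQuery(
            "UPDATE A SET a1 = 1 WHERE devId IN " +
            "(SELECT devId FROM TABLE(devId VARCHAR = ?))")
            // Binding a single array argument passes the whole devId list.
            .setArgs(new Object[] { devIds });
        ignite.cache("A").query(qry).getAll();
    }
}
```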
Thank you, I will try queryParallelism.
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Please find a sample project running on Spring Boot 2.0. When I try to
instantiate it, it gives the following exception:
2018-10-09 16:54:28.207 WARN 14068 --- [ main]
io.undertow.servlet : UT015020: Path /* is secured for
some HTTP methods, however it is not secured for [HEAD,
In short, Ignite replaces H2’s storage level with its own.
For example, Ignite implements H2’s Index interface with its own off-heap data
structures underneath.
When Ignite executes an SQL query, it asks H2 to process it, and H2 then calls
back into Ignite’s implementations of H2’s interfaces.
How do I tell Ignite to include a node in the baseline topology? Isn't it the
hosts that I set to TCPIpFinder? Also, is it possible to dynamically add a
new node to the cluster?
I will share a sample application soon.. stay tuned ...
Stan,
thanks for looking into it. I agree with you that this is the observed
behaviour, but it is not what I would expect.
My expectation would be to get an exception when I attempt to query
unavailable partitions. Ideally this behavior would be dependent on the
PartitionLossPolicy. When
Ignite 2.0 is FAR from being recent :)
Try Ignite 2.6. If it doesn’t work, share configuration, logs and code that
starts Ignite.
Thanks,
Stan
From: ignite_user2016
Sent: 9 October 2018 22:00
To: user@ignite.apache.org
Subject: Re: Ignite on Spring Boot 2.0
Small update -
I also upgraded
Hi,
I can’t say much about Java EE usage specifically, but overall the idea is:
1) Start an Ignite node named “foo”, in the same JVM as the application, before
Hibernate is initialized
2) Specify `org.apache.ignite.hibernate.ignite_instance_name=foo` in the
Hibernate session properties
3) Do NOT specify
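Steps 1 and 2 above can be sketched in code. This is a minimal sketch assuming the standard Ignition API; the instance name "foo" and the property key are taken from the steps as written:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class StartIgniteForHibernate {
    public static void main(String[] args) {
        // Step 1: start a node named "foo" in the application's JVM,
        // before Hibernate is initialized.
        Ignition.start(new IgniteConfiguration().setIgniteInstanceName("foo"));

        // Step 2: point Hibernate at that node via the session property:
        //   org.apache.ignite.hibernate.ignite_instance_name=foo
    }
}
```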
If you use native persistence, it will use that data; you will have to
make sure the cache store is invoked yourself if the original data
changes. Normally you would use the cache store without native
persistence, but that depends on how you want to use it. If you restart
the node everything
Hi,
Please share configurations and full logs from all nodes.
Stan
From: ApacheUser
Sent: 8 October 2018 17:49
To: user@ignite.apache.org
Subject: Spark Ignite Data Load failing On Large Cache
Hi, I am testing a large Ignite cache of 900 GB on 4 VM nodes (96 GB RAM,
8 CPUs and 500 GB SAN storage).
Small update -
I also upgraded Ignite to a recent version, but to no avail.
The current version is Ignite 2.0.
I did not fully understand what it says. Does it mean that even if values in my
DB are changed, they won’t automatically get updated in the Ignite cache? Or
is the concept something else? I just have a read-through enabled cache.
I also had the error below before it, I just noticed. Is it related to the
Hi,
I’ve tried your test and it works as expected, with some partitions lost and
the final size being ~850 (~150 less than at the start).
Am I missing something?
Thanks,
Stan
From: Roman Novichenok
Sent: 2 October 2018 22:21
To: user@ignite.apache.org
Subject: Re: Partition Loss Policy
I have a cluster running on AWS VPC with TcpDiscoveryS3IpFinder SPI. I
noticed that there are a couple of entries in the S3 bucket
'127.0.0.1#47500', '0:0:0:0:0:0:0:1%lo#47500' along with entries for the
private IPs of the nodes. If anyone understands what these are, can you
please share the
Hello,
I have been struggling with this question myself for a while now too.
I think the documents are very ambiguous on how exactly H2 is being used.
The document that you linked says
"Apache Ignite leverages from H2's SQL query parser and optimizer as well
as the execution planner. Lastly, *H2
Well, exactly what it says – Ignite doesn’t guarantee consistency between
CacheStore and Native Persistence.
Because of this you may end up with different data in the 3rd party DB and
Ignite’s persistence,
so using such configuration is not advised.
See this page
Hi,
Why do I get the below error while starting up the Ignite cluster?
Both Ignite native persistence and CacheStore are configured for cache
'cacheStoresExampleCache'. This configuration does not guarantee strict
consistency between CacheStore and Ignite data storage upon restarts.
Consult
Hi,
Looks like a JVM bug.
The message means that bytecode generated for a lambda in the Ignite code is
structurally incorrect,
but the bytecode is generated in parts by javac and the JVM, so the issue must
be there.
I suggest you upgrade to the latest JDK 8 (currently 8u181) and see if it
When running Ignite 2.6 in a docker container, I sporadically get VerifyError
13:56:59,154 SEVERE [] (exchange-worker-#52%app-grid%) Critical system error
detected. Will be handled accordingly to configured handler [hnd=class
o.a.i.failure.StopNodeOrHaltFailureHandler, failureCtx=FailureContext
Hello..
I am using Ignite with Spring Boot version 1.5.8 and everything works well,
but when I try to upgrade my Spring Boot app to version 2.0.1, somehow
SpringCacheManager does not work.
Here is my piece of code -
/**
* Provides SpringCacheManager to enable 2nd level caching on Ignite
*
The call
setIndexedTypes(Long.class, Person.class)
will search for all `@QuerySqlField` fields in Person and create indexes for
all of them with `index=true`.
For example, if your class looks like this
class Person {
@QuerySqlField(index = true)
private String name;
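To make the snippet above concrete, a complete (hypothetical) version of the class and its registration might look like this; the cache name "persons" and the extra `age` field are illustrative additions:

```java
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class PersonCacheConfig {
    public static class Person {
        @QuerySqlField(index = true) // queryable AND indexed
        private String name;

        @QuerySqlField // queryable, but no index is created for this field
        private Integer age;
    }

    public static CacheConfiguration<Long, Person> config() {
        CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("persons");
        // Scans Person for @QuerySqlField fields and creates indexes
        // for those marked index = true.
        ccfg.setIndexedTypes(Long.class, Person.class);
        return ccfg;
    }
}
```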
Hi,
AFAICS Ignite doesn’t even use json4s itself. I assume it’s only in the
dependencies and binary distribution for Spark to work.
So, if Spark actually needs 3.2.X you can try using that.
You can remove/replace Ignite’s json4s jar with the required one, or use
your-favorite-build-system’s
Hi Ilya,
I have tried it, and got the same performance as forcing the category
index in my initial benchmark: the query is 3x slower and uses only one
thread.
From my experiments so far it seems like Ignite can either (a) use the
affinity key and run queries in parallel, or (b) use the index but run the
Hi,
If the issue is that the node is starting with a new persistent storage each
time, there could be some issue with the file system.
The algorithm to choose a persistent folder is
1) If IgniteConfiguration.consistentId is specified, use the folder of that name
2) Otherwise, check if there are any
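Step 1 of the algorithm above can be pinned down explicitly so restarts always reuse the same storage folder. A minimal sketch assuming the standard configuration API; the id "node-1" is an arbitrary example:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class FixedConsistentId {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Pin the persistent folder name so every restart finds it again.
        cfg.setConsistentId("node-1");

        DataStorageConfiguration ds = new DataStorageConfiguration();
        ds.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);
        cfg.setDataStorageConfiguration(ds);

        Ignition.start(cfg);
    }
}
```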
Only the data for that node is stored there, backup data on backup
nodes, primary data on primary nodes and so on.
Mikael
On 2018-10-09 at 17:27, ApacheUser wrote:
Hi Team,
How is the data stored when persistence is enabled?
Does Ignite store all data in all nodes enabled
Hi Team,
How is the data stored when persistence is enabled?
Does Ignite store all data in all nodes with persistence enabled, or just the
data which is on that node?
Ex: I have a 4-node Ignite cluster and persistence is enabled on all 4 nodes
(activated after all 4 nodes come up). When the data is
Hi,
No, the behavior you’re describing is not expected.
Moreover, there is a bunch of tests in Ignite that make sure the reads are
consistent during the rebalance,
so this shouldn’t be possible.
Can you create a small reproducer project and share it?
Thanks,
Stan
From: Shrikant Haridas Sonone
Hi,
Generally, your initial thoughts on the expected latency are correct –
PRIMARY_SYNC allows you not to wait for the writes
to complete on the backups. However, some of the operations still have to be
completed on all nodes (e.g. acquiring key locks),
so increasing the number of backups does
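The trade-off described above is set per cache. A minimal sketch assuming the standard CacheConfiguration API; the cache name "myCache" is an example:

```java
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class PrimarySyncConfig {
    public static CacheConfiguration<Long, String> config() {
        CacheConfiguration<Long, String> ccfg = new CacheConfiguration<>("myCache");
        // Writes return once the primary copy is updated;
        // backups are updated asynchronously.
        ccfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.PRIMARY_SYNC);
        ccfg.setBackups(1);
        return ccfg;
    }
}
```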
Hello!
1. I don't think you should use LIMIT because it is very hard to make sure
that all entries are updated exactly once. It is better to partition data
by values of some field.
2. It is hard to say outright, did you try queryParallelism?
Regards,
--
Ilya Kasnacheev
Tue, 9 Oct 2018 at
Hello!
I think you could model it with ATOMIC cache:
while (true) {
    long time = cache.get(host);
    if (time < System.currentTimeMillis()
        && cache.replace(host, time, time + hostDelay)) {
        // do request to host
        // break
    }
    else {
        // sleep or do other requests in the
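Stripped of the Ignite dependency, the same compare-and-swap pattern can be modeled on a ConcurrentHashMap, whose replace(key, old, new) has the same semantics as IgniteCache#replace. The class and method names here are hypothetical, for illustration only:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class HostThrottle {
    // Per-host timestamp before which the host must not be crawled again.
    private final ConcurrentMap<String, Long> nextAllowed = new ConcurrentHashMap<>();
    private final long hostDelayMs;

    public HostThrottle(long hostDelayMs) {
        this.hostDelayMs = hostDelayMs;
    }

    /** Returns true if the caller atomically won the right to crawl the host now. */
    public boolean tryAcquire(String host, long nowMs) {
        Long time = nextAllowed.putIfAbsent(host, nowMs + hostDelayMs);
        if (time == null)
            return true; // first request ever for this host
        // The slot is free AND we claimed it before any concurrent caller.
        return time < nowMs && nextAllowed.replace(host, time, nowMs + hostDelayMs);
    }
}
```

Losing the replace() race simply means another thread took the slot first, so the caller sleeps or moves on to another host, exactly as in the loop above.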
Hi,
I'm working on prototyping a web crawler using Ignite as the crawl DB. I'd
like to ensure the crawler obeys the appropriate Crawl-Delay time as set in
a site's robots.txt file. The way I have this set up now is by submitting
"candidates" to an Ignite cache. A local listener is set up to
Hi,
Clients should be able to connect to the cluster normally.
Please share logs if you’re still seeing this.
Stan
From: hulitao198758
Sent: 27 July 2018 14:32
To: user@ignite.apache.org
Subject: Re: The problem after the ignite upgrade
I opened the persistent storage cluster, version 2.3
Ilya -
Thanks for your reply and for pointing out that the problem may be in my code.
I will try to split the operation into smaller ones.
And I still have two problems:
1. Can I split the operation with LIMIT?
2. Where is time spent in the operation?
It definitely does.
IGNITE-7153 seems to be applicable for objects greater than 8 kb. Is that your
case?
If so then I guess that has to be the same issue.
Stan
From: Michael Fong
Sent: 9 October 2018 15:54
To: user@ignite.apache.org
Subject: BufferUnderflowException on
Hi, all
We are evaluating Ignite compatibility with Redis protocol, and we hit an
issue as the following:
Does the stacktrace look a bit like IGNITE-7153?
java.nio.BufferUnderflowException
at java.nio.HeapByteBuffer.get(HeapByteBuffer.java:151)
at
Hello!
Have you tried setting queryParallelism on this cache to some high value
(such as your number of CPU cores)?
Note that this query will likely cause lifting all data in cache to heap at
once, which is probably not what you expect. My recommendation is also to
try splitting this operation
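The queryParallelism suggestion above is a static cache property, so it has to be set before the cache is created. A minimal sketch; the cache name "devices" is an example:

```java
import org.apache.ignite.configuration.CacheConfiguration;

public class ParallelQueryConfig {
    public static CacheConfiguration<Long, Object> config(int cores) {
        CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("devices");
        // Splits each local SQL query into this many execution threads
        // per node; cannot be changed after the cache is created.
        ccfg.setQueryParallelism(cores);
        return ccfg;
    }
}
```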
Hi -
I have a cache with more than 1 million records, and when I update all the
records in it, it takes a long time (almost 6 minutes).
Here is the SQL:
"update " + IgniteTableKey.T_DEVICE_ONLINE_STATUS.getCode()+ " set
isOnline=0, mqttTime=" + System.currentTimeMillis() / 1000 + " where 1=1";
Thanks a lot for your answer.
Can you share what was the error you encountered on Hibernate 5.2?