Hi,
We have a service that is essentially a file system crawler, and we're
using Ignite to store the overall state of the job. The state is represented
by simple objects with fields like ID, Name, Path, and State. The State
field is either "Candidate" or "Document". A Candidate is metadata
> our stream receiver called invoke() and that in turn did another invoke(),
which was the actual bug.
So Ignite's invoke() implementation called itself?
> It was helpful when we did the invoke using a custom thread pool,
I'm not sure I understand the concept here. Is the idea to have an
Hello,
In that case, only 'createdTime' falls back to OptimizedMarshaller.
Thanks,
Slava.
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
2.4 should be OK.
What you showed is that the stream receiver called invoke() and did not get
an answer; that is not a deadlock. Nothing looks particularly wrong there.
When we created this bug, it was our stream receiver calling invoke(), and
that in turn did another invoke(), which was the actual bug.
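As a sketch of the pattern being described (class and key/value types are hypothetical, not taken from the original code), a custom stream receiver that itself calls invoke() looks roughly like this; if the nested EntryProcessor then performs another blocking call, pool threads can all end up waiting:

```java
import java.util.Collection;
import java.util.Map;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.IgniteException;
import org.apache.ignite.cache.CacheEntryProcessor;
import org.apache.ignite.stream.StreamReceiver;

// Hypothetical receiver: updates each streamed entry via invoke() instead of put().
public class StateUpdatingReceiver implements StreamReceiver<String, String> {
    @Override
    public void receive(IgniteCache<String, String> cache,
                        Collection<Map.Entry<String, String>> entries) throws IgniteException {
        for (Map.Entry<String, String> e : entries) {
            // invoke() runs an EntryProcessor on the primary node for the key.
            // If the processor performs another cache operation (a nested
            // invoke(), or an off-node call that never returns), the threads
            // block here with no progress: the symptom reported above.
            cache.invoke(e.getKey(),
                (CacheEntryProcessor<String, String, Object>) (entry, args) -> {
                    entry.setValue((String) args[0]);
                    return null;
                },
                e.getValue());
        }
    }
}
```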
Also...
> What you showed is that the stream receiver called invoke() and did not get
an answer; that is not a deadlock.
It's not that I'm getting back a null; it's that all the threads are blocked
waiting on the invoke() call and no progress is being made. That sounds a
lot like a deadlock. I guess you
Evgenii,
We use Ignite as an in-memory database for Tableau and SQL; we don't use Java.
We use Spark to load real-time data into Ignite via Spark Streaming.
So if any user runs select * from table, the server nodes go OOME. We
need to control that behaviour. Is there any way?
Thanks
There is a lazy flag for the JDBC connection string:
jdbc:ignite:thin://192.168.0.15/lazy=true
Evgenii
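A minimal sketch of using that flag from plain JDBC (the host is a placeholder from the thread, the table name is illustrative, and the exact parameter syntax in the URL depends on the Ignite version in use):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LazyQuery {
    public static void main(String[] args) throws Exception {
        // lazy=true asks the server to produce result pages on demand
        // instead of materializing the whole result set in memory, which
        // is what drives "select * from table" into OOME on server nodes.
        String url = "jdbc:ignite:thin://192.168.0.15/lazy=true";

        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT * FROM mytable")) {
            while (rs.next()) {
                // process one row at a time
            }
        }
    }
}
```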
2018-06-28 22:38 GMT+03:00 ApacheUser :
> Evgenii,
>
> We use Ignite as an in-memory database for Tableau and SQL; we don't use Java.
> We use Spark to load real-time data into Ignite via Spark Streaming.
>
Good to know, thanks!
Just found a bunch of these in my logs as well. Note this is showing
starvation in the system thread pool, not the data streamer thread pool, but
perhaps they're related?
[2018-06-28T17:39:55,728Z](grid-timeout-worker-#23)([]) WARN - G - >>>
Possible starvation in striped pool.
Thread name:
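If the starvation is in the system or data-streamer pools, the pool sizes can be raised in IgniteConfiguration. This is only a mitigation sketch, not a root-cause fix, and the sizes here are illustrative:

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class NodeStartup {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Larger pools can quiet starvation warnings, but threads that
        // block indefinitely (e.g. a receiver doing a nested invoke())
        // will eventually exhaust any pool size.
        cfg.setSystemThreadPoolSize(32);
        cfg.setDataStreamerThreadPoolSize(32);

        Ignition.start(cfg);
    }
}
```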
There is no such field in IgniteConfiguration:
https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/configuration/IgniteConfiguration.html
Why do you think that it should work?
You can set the lazy flag when creating a SqlFieldsQuery object from Java.
Evgenii
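A sketch of that, assuming you already hold any IgniteCache handle (the query text and method name are illustrative):

```java
import java.util.List;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class LazySqlExample {
    // setLazy(true) makes the server stream result pages instead of
    // buffering the entire result set, avoiding OOME on large scans.
    static void runLazy(IgniteCache<?, ?> cache) {
        SqlFieldsQuery qry = new SqlFieldsQuery("SELECT * FROM mytable")
            .setLazy(true);

        for (List<?> row : cache.query(qry)) {
            // process one row at a time
        }
    }
}
```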
2018-06-28 20:32
Hi Ignite Team,
I am trying to set SqlFieldsQuery's setLazy to avoid OOME on server nodes. My
config file has the setting below,
but I'm getting the below
Hi Oleg,
The issue you mentioned, IGNITE-8659 [1], is caused by IGNITE-5874 [2], which
will not be a part of the ignite-2.6 release.
For now, 'ExpiryPolicy with persistence' is totally broken, and all its
fixes are planned for the next 2.7 release.
[1] https://issues.apache.org/jira/browse/IGNITE-8659
[2]
Your original stack trace shows a call to your custom stream receiver, which
appears to itself call invoke(). I can only guess at what your code does, but
it appears to be making a call off-node to something that is not returning.
Hi,
The reduce step will be done on the node to which the JDBC or thin client is
connected; it could be either a client or a server node.
Thanks!
-Dmitry
Andrew,
The issue that you filed shows a different problem. It doesn't address
lock reentrancy.
The lock's implementation contains a counter that tracks how many times the
lock was acquired,
but that counter is ignored in the *unlock()* method.
So I think the lock shouldn't be released until the *unlock()*
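For comparison, this is the reentrancy contract that the JDK's own ReentrantLock implements, which is the behavior being argued for here: each unlock() decrements the hold count, and the lock is only released when the count reaches zero:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();

        lock.lock();
        lock.lock();  // reentrant acquire: hold count becomes 2

        System.out.println(lock.getHoldCount());          // 2

        lock.unlock(); // decrements the count, does NOT release the lock
        System.out.println(lock.isHeldByCurrentThread()); // true

        lock.unlock(); // count reaches 0, lock is actually released
        System.out.println(lock.isLocked());              // false
    }
}
```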
Hi Calvin,
1. By "enlist" I mean, for example, that you want to see what fields are
present in a BinaryObject. In other words, it matters if you want to work
with the BinaryObject directly. For POJO serialization/deserialization this
should not be an issue at all.
2-3. In your case, you have a java.time.Ser
It's incorrect to use the cache object to calculate the cache data size. What
you got is the footprint of the Ignite infrastructure used to manage your
data, not the footprint of the data itself, since the data is stored
off-heap and this tool only calculates the on-heap size of objects referenced by
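A sketch of reading the off-heap footprint from Ignite's own per-region metrics instead (this assumes metrics are enabled on the data region via setMetricsEnabled(true), and that the default 4 KB page size is in effect):

```java
import org.apache.ignite.DataRegionMetrics;
import org.apache.ignite.Ignite;

public class OffHeapFootprint {
    // Prints allocated off-heap pages per data region; multiply by the
    // configured page size (4 KB by default) for an approximate byte count.
    static void print(Ignite ignite) {
        for (DataRegionMetrics m : ignite.dataRegionMetrics()) {
            long pages = m.getTotalAllocatedPages();
            System.out.printf("region=%s allocatedPages=%d (~%d KB at 4 KB/page)%n",
                m.getName(), pages, pages * 4);
        }
    }
}
```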
They are separate issues. One is that the isLocalLocked() method returns
incorrect results if a remote lock is held.
The other is more serious because, as Denis has commented, it means that two
nodes can end up holding a lock at the same time.
Although, I appreciate that if the lock code is using
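The distinction at issue can be illustrated in plain JDK terms, with threads standing in for nodes (this is an analogy, not Ignite's implementation): "is this lock held by anyone" and "is it held locally by me" are different questions, and conflating them produces exactly the kind of wrong answer described for isLocalLocked():

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantLock;

public class RemoteLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        CountDownLatch acquired = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);

        // "Remote" holder: another thread takes the lock and keeps it.
        Thread holder = new Thread(() -> {
            lock.lock();
            acquired.countDown();
            try {
                release.await();
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });
        holder.start();
        acquired.await();

        System.out.println(lock.isLocked());              // true: held elsewhere
        System.out.println(lock.isHeldByCurrentThread()); // false: not held here

        release.countDown();
        holder.join();
    }
}
```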
Hi,
As I remember, we already discussed this here with Jon:
http://apache-ignite-users.70518.x6.nabble.com/If-a-lock-is-held-by-another-node-IgniteCache-isLocalLocked-appears-to-return-incorrect-results-td22110.html#a22149
and found that the isLocalLocked method works incorrectly when several nodes are started
Thanks, Dmitry.
>> 2-3. In your case, you have a java.time.Ser in one of the fields of your
>> POJO (or maybe inside a dependent object), and it is Externalizable. In such
>> a case BinaryMarshaller falls back to OptimizedMarshaller, with all the
>> issues. Try to remove it from your POJOs or make
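One way to follow that advice, sketched with a hypothetical POJO (the class and field names are illustrative, echoing the crawler objects from the start of the thread): store the timestamp as a primitive long, which the binary marshaller writes natively, and convert to java.time only at the point of use.

```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

// Hypothetical POJO: a primitive long carries no Externalizable type
// (such as java.time.Ser), so nothing forces the OptimizedMarshaller path.
class CrawledItem {
    long createdTimeMillis;

    // Reconstruct a LocalDateTime on demand, outside the marshalled state.
    LocalDateTime createdTime() {
        return LocalDateTime.ofInstant(
            Instant.ofEpochMilli(createdTimeMillis), ZoneOffset.UTC);
    }
}
```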
Michael,
I still don't get how you measured the occupied space or how many backups
you have.
Could you clarify?
Denis
Tue, 26 Jun 2018 at 12:50, Michaelikus :
> This is an example of data stored in the cache, taken from Visor.
>
> java.lang.Long | 2147604480 |
>
Evgenii,
What happens if the user doesn't set that limit, or forgets to set it on the
client tool?
We set it, but someone testing without lazy=true could claim that Apache
Ignite is not stable.
Thanks
Thanks, Dave. I am using Ignite v2.4.0. Would a newer version potentially
help?
This problem seems to come and go. I didn't hit it for a few days, and now
we've hit it on two deployments in a row. It may be some sort of timing or
external factor that provokes it. In the most recent case we hit, the