Hi Vlad - Thanks for the response.
In this test, we write once and do no reads/updates throughout the process.
Would that still result in reading of pages? Also, are you saying that
Ignite writes/evicts half-filled pages to disk and reads them back later when
it has to append a key/value to the
Hi - Yes, Ignite native persistence is enabled.
Thanx and Regards,
KR Kumar
--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
Hi Guys - If my data streamer's allowOverwrite is set to false, which is the
default, will this cause my read IOPS to go up, as it needs to check whether
the key already exists? Because the behavior that we see is that write
performance goes down significantly after it has inserted a few billion rows
and
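For context, allowOverwrite is a per-streamer flag. A minimal sketch of toggling it follows; this is not standalone-runnable without an Ignite node, and the cache name "events" is an assumption:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class StreamerFlagSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start();
             IgniteDataStreamer<Long, byte[]> streamer = ignite.dataStreamer("events")) {
            // false (the default) skips keys that already exist and is the
            // fastest mode; true routes updates through the regular cache path.
            streamer.allowOverwrite(false);
            streamer.addData(1L, new byte[] {1, 2, 3});
        } // close() flushes any remaining buffered entries
    }
}
```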
Hi Guys - I have a five node cluster and all the nodes are part of the
baseline. Ignite native persistence is enabled. If one of the nodes in the
baseline is down and we then destroy a cache, it removes the cache on all
the remaining four nodes and the cache is completely destroyed. But now when I
Hi Ilya - I have tried that, but it is not firing the event. It does fire for
put and putAll, though.
Thanx and Regards,
KR Kumar
Hi Guys - When I am adding entries into the cache through DataStreamer.addData,
does it invoke the event listeners configured for EVT_CACHE_ENTRY_CREATED and
EventType.EVT_CACHE_OBJECT_PUT?
I am configuring a local listener in the following way
engine.events().localListen(cacheChangeHandler,
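For comparison, a complete local-listener setup looks roughly like the sketch below. Note that cache events are disabled by default and must be enabled via setIncludeEventTypes; the listener body is illustrative only, and this requires a running Ignite node:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.CacheEvent;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class PutListenerSketch {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Cache events are disabled by default; enable the ones we listen for.
        cfg.setIncludeEventTypes(EventType.EVT_CACHE_OBJECT_PUT);

        try (Ignite ignite = Ignition.start(cfg)) {
            IgnitePredicate<CacheEvent> listener = evt -> {
                System.out.println("Put: " + evt.key());
                return true; // return true to keep listening
            };
            ignite.events().localListen(listener, EventType.EVT_CACHE_OBJECT_PUT);
        }
    }
}
```

One relevant detail: with its default allowOverwrite=false, the data streamer uses an isolated update path, which may be why put/putAll fire the event while streamer inserts do not.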
Checked the logs, and nothing like that happened when it went down.
Thanx and Regards,
KR Kumar
Hi Alex - Here is the cache configuration for the cache
CacheConfiguration cacheConfig = new CacheConfiguration<>();
cacheConfig.setCacheMode(CacheMode.PARTITIONED);
cacheConfig.setRebalanceMode(CacheRebalanceMode.ASYNC);
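For completeness, the partial configuration above written as a typed fragment might look like this; the cache name, key/value types, and backup count are assumptions, not from the original message:

```java
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.configuration.CacheConfiguration;

// Typed, named version of the snippet above; name/types/backups are assumed.
CacheConfiguration<Long, byte[]> cacheConfig = new CacheConfiguration<>("events");
cacheConfig.setCacheMode(CacheMode.PARTITIONED);
cacheConfig.setRebalanceMode(CacheRebalanceMode.ASYNC);
cacheConfig.setBackups(1);
```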
Hi Guys - I have an Ignite cache with persistence enabled. Three days back I
had 11 billion events in the cache, yesterday it became around 10
billion, and now it's around 8.9 billion. Ignite data is constantly going down
on its own. Am I doing something wrong with my config, or have you guys
Hi guys - On some of the nodes, the WAL files are not getting deleted and, as
a result, the volume is getting full and the node is crashing.
Here are the configs related to that.
It's 2.7.6 and the configuration is as follows:
Hi - Here is the complete stack trace from the logs
[2020-04-15 13:43:07,271][ERROR][data-streamer-stripe-2-#51][root] Failed to
set initial value for cache entry: DataStreamerEntry
[key=UserKeyCacheObjectImpl [part=207, val=50792583101, hasValBytes=true],
val=CacheObjectByteArrayImpl
Hi Guys - Occasionally my streamer throws an exception saying the DataStreamer
is closed, and when I dig into it, I find this error in the Ignite logs: "Failed
to allocate temporary buffer for checkpoint". What does this mean?
Thanx and Regards,
KR Kumar
Hi Gianluca Bonetti - Initially, if I restart when the data is around 100
million entries, it takes about 3 minutes; now I am testing with 270
million and it takes about 12 minutes. Upon looking into the Ignite code,
this is where it's taking time, in the class
org.apache.ignite.internal.IgnitionEx
Hi Gianluca Bonetti - Thanks for the help. I actually had that JVM parameter
and now I have removed it. Looks like it's working; I will do a few more
rounds of testing and then update you.
Also, any idea why ignite.active(true) takes almost 5 minutes whenever I
restart the node?
Again a big
Hi Guys - I have this problem that's very recent; nothing significant
changed in the system, but all of a sudden it takes a lot of time to
restart. I do a graceful shutdown of one of the nodes and restart the node
after some updates. Now Ignite takes almost 30 minutes to initialize, and
that's
Hi Guys - How do I control rebalancing programmatically? I.e., I don't want
rebalancing to happen immediately when a node goes out of the cluster (most
of the time, a graceful shutdown for updates). I will have a REST endpoint
or some command through which I will initiate the rebalance, if at all
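One approach, sketched below under the assumption of a cache named "events": a negative rebalance delay postpones automatic rebalancing until it is triggered explicitly, e.g. from a REST handler. This is not standalone-runnable without an Ignite node:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ManualRebalanceSketch {
    public static void main(String[] args) {
        CacheConfiguration<Long, byte[]> ccfg = new CacheConfiguration<>("events");
        // A negative delay postpones automatic rebalancing on topology change
        // until it is kicked off manually.
        ccfg.setRebalanceDelay(-1);

        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache(ccfg);

            // Later, e.g. from a REST endpoint, trigger rebalancing explicitly:
            ignite.cache("events").rebalance().get();
        }
    }
}
```

Also note that with native persistence, rebalancing is tied to the baseline topology, so a node that leaves and rejoins without a baseline change should not trigger a full rebalance in the first place.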
Hi Mike - Thanks for the reply. Please take a look at the point to point
details.
Ignite needs to sync memory and disk from time to time; if your write rate is
higher than the disk speed, Ignite will never be able to sync memory and
disk, and that's why it starts to throttle your writes.
Its a
Hi - Thanks for getting back to me. This is happening in our test environment,
and we have 4 TB gp2 disks, i.e. general-purpose SSD with 12000 IOPS. It's a
very write-intensive workload where we compress records that are anything
between 1 KB and 4 KB and push them to Ignite. Right now the throughput is
Hi All - I see this message in the logs
[2020-03-12 15:23:29,896][INFO
][comcastprod-1-StoreFlushWorker-7][PageMemoryImpl] Throttling is applied to
page modifications [*percentOfPartTime=0.54*, markDirty=8185 pages/sec,
checkpointWrite=14893 pages/sec, estIdealMarkDirty=8182 pages/sec,
Sorry, I think I did not phrase the question properly. I stopped Ignite
with Ignition.stop(false), and when I restarted it, I got the
following error
Thanx and Regards,
KR Kumar
Hi Guys - I am getting this error these days. What does it mean, and why am
I getting it when I do an Ignition.stop(false) as I shut
down the server?
Caused by: class org.apache.ignite.IgniteCheckedException: Failed to start
SPI: TcpDiscoverySpi [addrRslvr=null,
Hi - We are using "BACKGROUND" as the WAL mode. Is that a problem?
Thanx and Regards,
KR Kumar
Hi Ilya - The other thread also looks like it is Ignite's; here is the thread
dump reference for it:
"Thread-687" #2758 daemon prio=5 os_prio=0 tid=0x7f2afc00d800 nid=0x897f
runnable [0x7f299bffe000]
java.lang.Thread.State: RUNNABLE
at
Hi Guys - I randomly bump into this error. Can you guys tell me when and why I
get into it (I mean, under what circumstances)? All I am doing is a
controlled shutdown; that's all, nothing else.
Also, how do I recover the data from such an error, as most of the time I end
up losing the data
class
Ilya - Thanks for the response. Why would these JDBC threads be constantly
querying something even when the system is idle? I mean, even when I am not
running any queries in the applications, it looks like something is being read
from the Ignite persistence through JDBC SQL.
Thanx and Regards,
KR
Hi Guys - Who creates these threads in the Ignite runtime? As I see, these
threads are not named and seem to be taking a significant amount of processing
time. These two threads are either doing disk-based I/O or Ignite SQL work. My
question is: are these Ignite's internal threads?
Thread
One more exception that happens randomly is the following, when I add a node to
the topology and the JVM quits
[2019-12-15 03:14:49,171][ERROR][sys-#553][GridDhtPartitionSupplier] Failed
to continue supplying [grp=SQL_PUBLIC_EVENTS_IDX_DETL_TENANT_1,
demander=ef593867-af13-46ac-b9ea-5a6adcfcb2b4,
Hi Guys - I see this exception in the logs sometime after I add two nodes to
the baseline topology. This is very random and happens only in one
environment as of now:
[2019-12-12 13:00:41,909][WARN
][exchange-worker-#143][GridDhtPartitionsExchangeFuture] Unable to await
partitions release latch
Hi Ilya - We do not have this message in the logs; that means we have to
increase the resources, I guess. Thanx for the help in this regard.
Thanx and Regards,
KR Kumar
Hi Anton - Initially we had the wal and wal archive configured to different
folders, later we changed the config to the same folder and restarted the
cluster. Is that a problem?
Thanx and Regards,
KR Kumar
Hi Ilya - Thanx for the reply. What you said is exactly what's happening, I
guess. We do see this warning that pages will be rotated from disk and the
write process will slow down.
Here is the message that we see in the logs:
WARN ][data-streamer-stripe-22-#59][PageMemoryImpl] Page replacements
When we are writing data to Ignite through the streamer (key, value) and Ignite
JDBC (into a couple of SQL tables), we get very high throughput when read IOPS
are low and write IOPS are high, and very low throughput when reads
and writes are competing. So my question is, when I am writing to the
Hi - I am bumping into the following error frequently, and it causes data
loss whenever we shut down the Ignite node during data rebalance. I am
shutting down Ignite in a safe mode, i.e.
Ignition.stop(false);
Here is the stack trace :
Caused by: class
Hi Anton - This is probably not our issue as I am not seeing any Exceptions
in the log
Thnx and Regards,
KR Kumar
Hi Anton - Thanks a lot and this helps me understand the problem.
I am still trying to get the logs from production and it might take some
more time.
I did see a message in the logs saying "checkpoint process failed" - What
are the consequences, and how should I handle such errors? What are the
Hi - I am currently using Ignite version 2.7.6, and the files do get deleted
whenever I restart the server, but after that they continuously stack up. One
thing that I have noticed in the log files is this message: "Could not clear
historyMap due to WAL reservation on cp:". I checked the code and
Thanks for your reply, guys.
I am not sure if I have really solved the problem, but this is how I fixed
it. Initially I was adding the nodes to the baseline topology through code.
I have removed that and am now adding the nodes through control.sh, in which
case I am not losing any data. I do not know the
Hi - The application is doing two things: one thread is writing 2 KB
events to the Ignite cache as key/values, and another thread is executing
Ignite SQL through Ignite JDBC connections. The throughput is anything between
25K and 40K events per second on the cache side. We are using a data streamer