--
> Ilya Kasnacheev
>
>
> Mon, 14 Jan 2019 at 21:54, John Smith :
>
>> So if it's all running inside DC/OS it works with no issues. So wondering what
>> the strategy would be if external clients want to connect, with Ignite
>> being inside the container env or out
So if it's all running inside DC/OS it works with no issues. So wondering what
the strategy would be if external clients want to connect, with Ignite
being inside the container env or outside... Just REST?
On Fri., Jan. 11, 2019, 15:00 John Smith wrote: Yeah, this doesn't work in the dev environment either
> Hello!
>>
>> I'm afraid that visor will try to connect to your client and will wait
>> until this is successful.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
> Fri, 11 Jan 2019 at 20:01, John Smith :
>>
>>> Humm maybe not.
> Hello!
>
> Are you sure that your Visor node is able to connect to client node via
> communication port? Nodes in cluster need to be able to do that, which is
> somewhat unexpected in case of client node.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, 11 Jan 2019
Hi, sorry if this a double post I tried through nabble and I don't think it
came through...
So using 2.7...
I have a 3 node cluster started with ignite.sh and that works perfectly
fine. I'm also able to connect to the cluster with visor and I can also run
top, cache etc... commands no problem.
And it seems to stay like that indefinitely. I let it go for 5 minutes and
nothing has printed to the console or logs.
On Fri, 11 Jan 2019 at 12:49, John Smith wrote:
> I can confirm I just tested it. There is no stack trace. Basically the
> client connects, no errors, the cache command
at 14:12, John Smith wrote:
> And it seems to stay like that indefinitely. I let it go for 5 minutes and
> nothing has printed to the console or logs.
>
> On Fri, 11 Jan 2019 at 12:49, John Smith wrote:
>
>> I can confirm I just tested it. There is no stack trace. Basically t
:58 PM Denis Magda, wrote:
> Hey John,
>
> Check this integration out. It should support what you are looking for:
> https://docs.gridgain.com/docs/certified-kafka-connector
>
> -
> Denis
>
>
> On Fri, Mar 22, 2019 at 6:17 AM John Smith wrote:
>
>> Or are you
So anyone attempting to use Apache Ignite SQL as system of record?
Hi, I have a bunch of JSON records in Kafka. I would like to either UPSERT
or DELETE a record from my Ignite cache based on the "type" specified in
the JSON record. What's the best way to do this, or which feature of
Kafka/Ignite should I use?
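Since (per the replies below) the KafkaStreamer only loads data, one option is to do the dispatch in your own consumer loop. A minimal sketch of the type-based decision, with stated assumptions: a plain Map stands in for the IgniteCache, and the regex field extraction is only for illustration; real code would use a JSON parser and call IgniteCache.put/remove.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Decides put vs remove based on the "type" field of a JSON record.
// A plain Map stands in for IgniteCache here; a real consumer would call
// cache.put(...) / cache.remove(...) on an IgniteCache instead.
public class TypeDispatch {
    private static final Pattern TYPE = Pattern.compile("\"type\"\\s*:\\s*\"(\\w+)\"");

    // Returns the extracted type, or null if the record has none.
    static String typeOf(String json) {
        Matcher m = TYPE.matcher(json);
        return m.find() ? m.group(1) : null;
    }

    // Applies one Kafka record to the cache: UPSERT puts, DELETE removes.
    static void apply(Map<String, String> cache, String key, String json) {
        String type = typeOf(json);
        if ("DELETE".equals(type)) {
            cache.remove(key);
        } else if ("UPSERT".equals(type)) {
            cache.put(key, json);
        } // unknown types are ignored
    }

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();
        apply(cache, "k1", "{\"type\":\"UPSERT\",\"value\":42}");
        apply(cache, "k1", "{\"type\":\"DELETE\"}");
        System.out.println(cache.containsKey("k1")); // prints false
    }
}
```
The same per-record branch would sit in whatever consumes the topic, whether that is a hand-rolled KafkaConsumer loop or a custom Kafka Connect sink.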
And how about DELETE?
On Fri, 15 Mar 2019 at 16:58, aealexsandrov wrote:
> Hi,
>
> You can use the Kafka streamer for these purposes:
>
> https://apacheignite-mix.readme.io/docs/kafka-streamer
>
> Also, take a look at this thread. It contains examples of how to work with
> JSON files:
>
>
>
Or are you saying I need to write a custom streamer?
On the streamer, how do we get notified of data coming in? The examples
don't show that. They only show how to connect... Or is that the only
function of the streamer?
On Fri, Mar 22, 2019, 9:12 AM John Smith, wrote:
> You mean I need to write
You mean I need to write my own Kafka connect connector using the cache API
and from there decide to do put or remove?
On Tue, Mar 19, 2019, 8:02 PM aealexsandrov,
wrote:
> Hi,
>
> Yes, looks like the KafkaStreamer doesn't support the DELETE behavior. It
> was created for loading data into Ignite.
Thanks.
One last question.
If I set the off-heap to 3GB but I want to store more data than the
allowable size, that means I need to set the disk persistence to a
bigger file size, correct?
And the heap will only contain the active/latest entries while the rest
will be on disk?
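That is roughly how native persistence behaves: the region's maxSize caps only the RAM portion, the full data set lives in the persistence files, and hot entries are kept in memory. A configuration sketch in Spring XML (the 3 GB figure and the use of the default region are assumptions, not your actual config):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <!-- 3 GB off-heap cap for the region (bytes) -->
                    <property name="maxSize" value="#{3L * 1024 * 1024 * 1024}"/>
                    <!-- data beyond RAM spills to the persistence files -->
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```
With persistence enabled there is no fixed "file size" to raise; disk usage grows with the data set, bounded by free disk space.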
On Fri, 12
Hi, using the calculator spreadsheet downloaded here:
https://apacheignite.readme.io/docs/capacity-planning
I have:
- 10,000,000 objects
- 100 bytes average
- 0 backups
According to the calculator I need 2.7 GB of RAM and about 4.3 GB of disk.
This includes the 30% indexes and the 100% in RAM calculations in both
cases.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 23 May 2019 at 19:05, John Smith :
>
>> Also is there a difference between these two?
>>
>> ignite.getOrCreateCache(cacheConfig, nearConfig).withExpiryPolicy();
>>
I think it should at least time out and show stats of the nodes it could
reach? I don't see why it's dependent on client nodes.
On Thu, 30 May 2019 at 11:58, John Smith wrote:
> Sorry, pressed enter too quickly
>
> So basically I'm 100% sure if visor cache command cannot reach t
Hi, any thoughts on this?
On Fri, 31 May 2019 at 10:21, John Smith wrote:
> I think it should at least time out and show stats of the nodes it could
> reach? I don't see why it's dependent on client nodes.
>
> On Thu, 30 May 2019 at 11:58, John Smith wrote:
>
>> Sorry pre
Wed, 5 Jun 2019 at 22:34, John Smith :
>
>> Hi, any thoughts on this?
>>
>> On Fri, 31 May 2019 at 10:21, John Smith wrote:
>>
>>> I think it should at least time out and show stats of the nodes it could
>>> reach? I don't see why it's depe
Sorry, pressed enter too quickly.
So basically I'm 100% sure that if the visor cache command cannot reach the
client node, it just stays there not doing anything.
On Thu, 30 May 2019 at 11:57, John Smith wrote:
> Hi, running 2.7.0
>
> - I have a 4 node cluster and it seems to be running
Hi, running 2.7.0
- I have a 4 node cluster and it seems to be running ok.
- I have clients connecting and doing what they need to do.
- The clients are set as client = true.
- The clients are also connecting from various parts of the network.
The problem with ignite visor cache command is if
I looked here: http://apache.org/dist/ignite/deb/ and it's not in Bintray
either.
Thanks
could check that.
>
> There should be messages related to connection attempts.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 13 Jun 2019 at 00:06, John Smith :
>
>> The clients are in the same low latency network, but they are running
>> inside container network. W
Hi, so I have 3 machines with 8GB RAM and 96GB disk each.
I have configured the persistence as
Looking at the logs:
Topology snapshot [ver=3, locNode=xx, servers=3, clients=0,
state=INACTIVE, CPUs=12,
>
> Fri, 14 Jun 2019 at 22:41, John Smith :
>
>> Hi, It's 100% that.
>>
>> I'm just stating that my applications run inside a container network and
>> Ignite is installed on its own VMs. The networks see each other and
>> this works. Also Visor can c
e, fast and low-latency.
>
> It is not recommended to connect thick clients from different networks.
> Use thin clients where possible.
>
> You can file a ticket against Apache Ignite JIRA regarding visor behavior
> if you like.
>
> Regards,
> --
> Ilya Kasnacheev
The clients are in the same low-latency network, but they are running
inside a container network, while Ignite is running on its own cluster. So
from that standpoint they all see each other...
On Wed, 12 Jun 2019 at 17:04, John Smith wrote:
> Ok thanks
>
> On Mon, 10 Jun 2019 at 04
anagement and monitoring.
> Not sure that Ilya’s statement makes a practical sense.
>
> Looping in our Visor experts. Alexey, Yury, could you please check out the
> issue?
>
> Denis
>
> On Tuesday, June 18, 2019, John Smith wrote:
>
>> Ok but visor is used to
llo!
>
> It is recommended to turn off failure detection since its default config
> is not very convenient. Maybe it is also fixed in 2.7.5.
>
> This just means some operation took longer than expected and Ignite
> panicked.
>
> Regards,
>
> Thu, 20 Jun 2019,
t.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
On Thu, 20 Jun 2019 at 10:08, John Smith wrote:
> Ok, where do I look for the v
Ok, where do I look for the visor logs when it hangs? And it's not a
no-caches issue; the cluster works great. It's when visor cannot reach a
specific client node.
On Thu., Jun. 20, 2019, 8:45 a.m. Vasiliy Sisko,
wrote:
> Hello @javadevmtl
>
> I failed to reproduce your problem.
> In case of any
Hi, when we use ignite getOrCreateNearCache().withExpiryPolicy()
Will the expiry policy be set on the underlying cache or the near cache?
lied to near cache, I guess, if this is implemented at all)
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 23 May 2019 at 00:18, John Smith :
>
>> Hi, when we use ignite getOrCreateNearCache().withExpiryPolicy()
>>
>> Will the expiry policy be set on the underlying cache or the near cache?
>>
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 23 May 2019 at 17:53, John Smith :
>
>> So then I should create my regular cache first... Set the expiry policy
>> on that and then create near cache on top of that?
>>
>> On Thu, 23 May 2019 at 08:48, Ilya K
Hi running 2.7.0
I have a four node cluster. And I have inserted some records from an
application running on my laptop (wifi network) using the thin client.
Now I'm using the Ignite Visor to connect to the cluster from my laptop
(wifi network) and it seems to hang.
visor> open
Hi running 2.7.0,
I have a 4 node cluster running with off-heap persistence works great!
I then by mistake tried to create a REPLICATED cache with
LruEvictionPolicy. As we know, if the cache is in off-heap mode it cannot
be created.
But this seems to have borked the cluster; it shut down and now it
13:39, Evgenii Zhuravlev :
>
>> Hi
>>
>> I believe that you can just remove folder related to the newly created
>> cache, have you tried to do this?
>>
>> Evgenii
>>
>> Thu, 2 May 2019 at 23:15, John Smith :
>>
>>> Hi running 2.7.0
I can confirm this for 2.7.0 DEB package also. I don't think the DEB and
RPM packages come with ignitevisorcmd.
If you look under /usr/share/apache-ignite/bin it's not there...
On Mon, 1 Jul 2019 at 05:44, Michaelikus wrote:
> Hi!
>
> It was installed from official RPM repository.
>
>
Should I do this on the server nodes or the client nodes?
On Tue, 25 Jun 2019 at 10:18, Maxim.Pudov wrote:
> You could increase failureDetectionTimeout [1] from the default value of
> 10 000 to
> 60 000 or so.
>
> https://apacheignite.readme.io/docs/tcpip-discovery#section-failure-detection-timeout
>
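The suggestion above goes into the node's Spring XML. A sketch, with the caveat that the 60 000 value is just an example: failureDetectionTimeout applies between server nodes, while clientFailureDetectionTimeout governs client nodes, so setting both on the servers covers both cases.

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- default is 10 000 ms; raise when GC or VM pauses exceed it -->
    <property name="failureDetectionTimeout" value="60000"/>
    <!-- applies to client nodes; default is 30 000 ms -->
    <property name="clientFailureDetectionTimeout" value="60000"/>
</bean>
```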
Hi, running 2.7.0
I noticed one of my nodes was down. It seems to have turned itself off,
because of: Ignite node is in invalid state due to a critical failure.
I attached logs here:
https://www.dropbox.com/s/82li1020a5ig4ty/ignite-failled.log?dl=0
Sorry, the regular full-mesh client. Maybe some threadPoolSizes?
On Fri, 30 Aug 2019 at 11:23, Alexandr Shapkin wrote:
> Hello,
>
>
>
> Not thin-client tuning in general, but you can check the serialization
> settings in order to make sure you do not use the default one.
>
>
from a single-partition
topic and then does a GET per Kafka record. I know as a single consumer
thread the application without the cache can handle give or take 2000. So I
figure with a bit of tuning I can get up to 1000 GETs.
Also using async GET on the cache.
On Fri, 30 Aug 2019 at 12:00, John Smith wrote:
>
Hi, is there any specific client settings we can set to tune the client
performance, maybe some thread pools or any stuff like that?
1 thread.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, 30 Aug 2019 at 19:11, John Smith :
>
>> Actually some more details.
>>
>> I have a partitioned cache with about 4 million records over 3 nodes. When
>> I do a get from the REST API I can hit a
Pretty descriptive. Node was dropped from topology because of
> long GC pauses.
>
> Either find ways to decrease GC pauses, or increase
> failureDetectionTimeout.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, 28 Aug 2019 at 00:18, John Smith :
>
>> H
may decide to swap something out. I
> recommend decreasing heap to 2G if possible. Should also make GC faster.
>
> I'm not sure how to enable GC logs when running a package.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, 28 Aug 2019 at 17:21, John Smith :
>
>
> Well, my recommendation is to find a way to enable GC logs and collect
> regular logs as well, from all nodes.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, 28 Aug 2019 at 18:18, John Smith :
>
>> The drop box link here:
>> https://www.dropbox.com/s/etm
> Regards,
> --
> Ilya Kasnacheev
>
>
> Wed, 28 Aug 2019 at 15:13, John Smith :
>
>> I'm not doing anything fancy with the cache. I have a 3-million-record
>> partitioned cache over 3 servers, and all I do is some puts and gets. Unless
>> I have a bad config?
>>
Yeah, initial tests show improvements just by switching to async get.
On Tue, 3 Sep 2019 at 11:06, John Smith wrote:
> Actually, I looked closer at my code. Cannot use getAll() and the Queue is
> single partition, so can't use multiple threads, the application is per
> event and we ha
> Regards,
> --
> Ilya Kasnacheev
>
>
> Tue, 29 Oct 2019 at 22:21, John Smith :
>
>> Hi, the GC logs were also provided and we determined there was no GC
>> pressure. At least that's what I understood from the thread above. We also
>> enabled some extra thread info
Sorry, here are the GC logs for all 3 machines:
https://www.dropbox.com/s/chbbxigahd4v9di/gc-logs.zip?dl=0
On Wed, 16 Oct 2019 at 15:49, John Smith wrote:
> Hi, so it happened again here is my latest gc.log stats:
> https://gceasy.io/diamondgc-report.jsp?oTxnId_value=a215d573-d1cf-4d5
t the physical CPU is not overutilized and no VMs that
> run on it are starving.
>
> Denis
> On 10 Oct 2019, 19:03 +0300, John Smith , wrote:
>
> Do you know of any good tools I can use to check the VM?
>
> On Thu, 10 Oct 2019 at 11:38, Denis Mekhanikov
> wrote:
>
>> > Hi Den
I also see this printing every few seconds on my client application...
org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi Accepted
incoming communication connection [locAddr=/xxx.xxx.xxx.68:47101,
rmtAddr=/xxx.xxx.xxx.82:49816
On Mon, 21 Oct 2019 at 12:04, John Smith wrote:
>
onment:
> https://apacheignite.readme.io/docs/vmware-deployment
>
> Denis
> On 17 Oct 2019, 17:41 +0300, John Smith , wrote:
>
> Ok I have Metricbeat running on the VM, hopefully I will see something...
>
> On Thu, 17 Oct 2019 at 05:09, Denis Mekhanikov
> wrote:
>
>> T
> 47100-47200) but not the other way around.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, 21 Oct 2019 at 19:36, John Smith :
>
>> I also see this printing every few seconds on my client application...
>> org.apache.ignite.spi.communication.tcp.TcpCommunicati
ly they may segment and
> stop.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, 25 Oct 2019 at 00:08, John Smith :
>
>> Is it possible this is somehow causing the issue of the node stopping?
>>
>> On Thu, 24 Oct 2019 at 11:24, Ilya Kasnacheev
>> wro
as, node tries to connect to wrong address /
> itself) but more detailed analysis of logs is needed.
>
> You can try specifying localHost property in IgniteConfiguration to make
> sure correct address is used.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, 28 Oct.
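The localHost suggestion above is a one-liner in the Spring XML. A sketch, where the address is a placeholder for whichever interface the other nodes can actually reach:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- placeholder address: bind and advertise only the reachable interface -->
    <property name="localHost" value="10.0.0.5"/>
</bean>
```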
hronizer$ConditionObject@491d897d,
> ownerName=null, ownerId=-1]
>
> Long GC will cause node to segment.
>
> Try either decreasing heap size / making sure full GC does not happen, or
> increase failureDetectionTimeout (clientFailureDetectionTimeout if set) on
> all nodes.
>
>
Ok I have Metricbeat running on the VM, hopefully I will see something...
On Thu, 17 Oct 2019 at 05:09, Denis Mekhanikov
wrote:
> There are no long pauses in the GC logs, so it must be the whole VM pause.
>
> Denis
> On 16 Oct 2019, 23:07 +0300, John Smith , wrote:
>
> Sorry h
> enabling them for troubleshooting purposes.
> Check the lifecycle of your virtual machines. There is a high chance that
> the whole machine is frozen, not just the Ignite node.
>
> Denis
> On 10 Oct 2019, 18:25 +0300, John Smith , wrote:
>
> Hi Dennis, so are you saying I
nt statistics may also reveal some interesting details.
> You can learn about safepoints here:
> https://blog.gceasy.io/2016/12/22/total-time-for-which-application-threads-were-stopped/
>
> Denis
> On 9 Oct 2019, 23:14 +0300, John Smith , wrote:
>
> So the error says to set clientF
Hi Denis, so are you saying I should enable GC logs + the safepoint logs
as well?
On Thu, 10 Oct 2019 at 11:22, John Smith wrote:
> You are correct, it is running in a VM.
>
> On Thu, 10 Oct 2019 at 10:11, Denis Mekhanikov
> wrote:
>
>> Hi!
>>
>> There are t
So the error says to set clientFailureDetectionTimeout=30 000
1- Do I put a higher value than 30 000?
2- Do I do it on the client or the server nodes, or all nodes?
3- Also, if a client is misbehaving, why shut off the server node?
On Thu, 3 Oct 2019 at 21:02, John Smith wrote:
> But if i
So I have been monitoring my node and the same one seems to stop once in a
while.
https://www.dropbox.com/s/7n5qfsl5uyi1obt/ignite-logs.zip?dl=0
I have attached the GC logs and the ignite logs. From what I see from
gc.logs I don't see big pauses. I could be wrong.
The machine is 16GB and I have
idWorker
> [name=partition-exchanger, igniteInstanceName=xx, finished=false,
> heartbeatTs=1568931981805]]]
>
>
>
>
> -
> Denis
>
>
> On Thu, Oct 3, 2019 at 11:50 AM John Smith wrote:
>
>> So I have been monitoring my node and the same one seems to stop once in
&
487 0 0 0
LRU
Some other machines I have noticed only have like 10 dropped packets, vs
this machine which has millions, even though it's 1%?
On Wed, 30 Oct 2019 at 10:38, John Smith wrote:
> We have done two things so far...
>
> 1- Disabled client metrics on all clients.
> 2- We n
Hi, getting a lot of these messages. It seems to be coming from a single
client. The client was restarted briefly because of updates...
The client is an Apache Flink streaming job.
[21:38:12,775][INFO][grid-nio-worker-tcp-comm-1-#25%xx%][TcpCommunicationSpi]
Accepted incoming communication
o long timeouts on
> client preventing it from understanding that it's dropped from cluster
> already.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Thu, 6 Feb 2020 at 00:44, John Smith :
>
>> Hi, getting a lot of these messages. It seems to be coming from a single
actually think that the optimal way is to have your own wrapper API
> which is only source of cache gets and which does this accounting under the
> hood.
>
> Then it can invoke the same cache entry to keep track of number of reads.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
Hi, getting the message in the subject line
I'm pretty sure I have all my nodes enabled with
I'm guessing this cannot work with client enabled nodes only?
igniteConfig.setClientMode(true);
it.
>
> Evgenii
>
> Wed, 22 Apr 2020 at 16:22, John Smith :
>
>> Hi, getting the message in the subject line
>>
>> I'm pretty sure I have all my nodes enabled with
>>
>>
>>
>> I'm guessing this cannot work with client enabled nodes only?
>>
>> igniteConfig.setClientMode(true);
>>
>>
>>
Hi, I want to store a key/value and, if that key has been accessed more than
3 times for example, remove it. What is the best way to do this?
Ok, but the event just tells me if the key was read, correct? I need to keep a
count of how many times each key was read globally.
The other way I was thinking of doing it is by having a cache as
Cache and then use cache.invoke(, new
CounterEntryProcessor())
And then in the EntryProcessor...
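The invoke-based counting idea can be sketched as follows. This is not Ignite code: ConcurrentMap.compute stands in for IgniteCache.invoke with an EntryProcessor, which gives the same atomic per-key update semantics on a real cluster, and the class names and 3-read limit are illustrative.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Each entry carries its value plus a read count. The compute lambda plays
// the role of the EntryProcessor: it bumps the count atomically and removes
// the entry once it has been read MAX_READS times.
public class ReadCounter {
    static final int MAX_READS = 3;

    static final class Counted {
        final String value;
        final int reads;
        Counted(String value, int reads) { this.value = value; this.reads = reads; }
    }

    private final ConcurrentMap<String, Counted> cache = new ConcurrentHashMap<>();

    void put(String key, String value) {
        cache.put(key, new Counted(value, 0));
    }

    // Returns the value and bumps the read count; the MAX_READS-th read
    // still returns the value but evicts the entry.
    String get(String key) {
        String[] out = new String[1];
        cache.compute(key, (k, cur) -> {
            if (cur == null) return null;        // already evicted / never stored
            out[0] = cur.value;
            int n = cur.reads + 1;
            return n >= MAX_READS ? null : new Counted(cur.value, n);
        });
        return out[0];
    }

    int size() { return cache.size(); }
}
```
The upside of doing this inside invoke on a real cache is that the count lives with the entry, so no second cache is needed and the update is atomic per key.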
> --
> Ilya Kasnacheev
>
>
> Wed, 22 Apr 2020 at 19:35, John Smith :
>
>> Hi, akonresh understood, but then I would need another cache to keep
>> track of those counts.
>>
>> Ilya would a EntryProcessor allow for that with the invoke? Because
>> cr
Zhuravlev
wrote:
> yes
>
> Wed, 22 Apr 2020 at 18:31, John Smith :
>
>> So client-enabled nodes need to set it also?
>>
>> On Wed, 22 Apr 2020 at 19:52, Evgenii Zhuravlev
>> wrote:
>>
>>> Hi John,
>>>
>>> Yes, you're right, this
to add it to the config template and use it
> for all clients.
>
> Evgenii
>
> Thu, 23 Apr 2020 at 07:03, John Smith :
>
>> Ah ok. So other option is to copy my jar to the lib folders of each
>> server node correct?
>>
>> Like if one application needs a specific EntryP
Ok let me try get them...
On Thu., May 7, 2020, 1:14 p.m. Evgenii Zhuravlev,
wrote:
> Hi,
>
> It looks like the third server node was not a part of this cluster before
> restart. Can you share full logs from all server nodes?
>
> Evgenii
>
> Thu, 7 May 2020 at 09:
oblem-and-GridSegmentationProcessor-td14590.html
>
> Evgenii
>
> Fri, 8 May 2020 at 14:30, John Smith :
>
>> How though? It's the same cluster! We haven't changed anything;
>> this happened on its own...
>>
>> All I did was reboot the node and the cluster fixed
I mean both the prefer IPV4 and the Zookeeper discovery should be on the
"central" cluster as well as all nodes specifically marked as client = true?
On Mon, 11 May 2020 at 09:59, John Smith wrote:
> Should be on client nodes as well that are specifically setClient = true?
>
Hi Evgenii, here the logs.
https://www.dropbox.com/s/ke71qsoqg588kc8/ignite-logs.zip?dl=0
On Fri, 8 May 2020 at 09:21, John Smith wrote:
> Ok let me try get them...
>
> On Thu., May 7, 2020, 1:14 p.m. Evgenii Zhuravlev, <
> e.zhuravlev...@gmail.com> wrote:
>
ache
> proxy object using withExpiryPolicy.
>
> Evgenii
>
> Thu, 7 May 2020 at 09:46, John Smith :
>
>> Hi running 2.7.0
>>
>> I created a cache with ModifiedExpiryPolicy
>>
>> Can we change the policy of the created cache? I know we can do per
Hi, running 2.7.0 on 3 nodes deployed on VMs running Ubuntu.
I checked the state of the cluster by going to: /ignite?cmd=currentState
And the response was:
{"successStatus":0,"error":null,"sessionToken":null,"response":true}
I also checked: /ignite?cmd=size=
2 nodes were reporting 3 million
Hi running 2.7.0
I created a cache with ModifiedExpiryPolicy
Can we change the policy of the created cache? I know we can do per write
but can we change the default of the existing cache to another policy?
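Per the replies below, the configured default of an existing cache cannot be swapped at runtime, but a per-operation proxy can carry a different policy. A sketch (assumes a running Ignite instance and a placeholder cache name; withExpiryPolicy returns a decorated proxy and does not alter the cache's default):

```java
// Sketch only: "myCache" is a placeholder name. Writes through the proxy
// returned by withExpiryPolicy use the new policy; writes through the
// original handle keep the cache's configured default.
IgniteCache<Integer, String> cache = ignite.cache("myCache");

IgniteCache<Integer, String> withTtl = cache.withExpiryPolicy(
    new ModifiedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)));

withTtl.put(1, "expires five minutes after each modification");
cache.put(2, "still uses the cache's original policy");
```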
using this
> "cache" object, will have a new policy.
>
> Evgenii
>
> Thu, 7 May 2020 at 10:39, John Smith :
>
>> Ok cool. I create my cache using a template and the REST API, but when I
>> start my application I do...
>>
>> cache = this.ignite.cach
vers=2, clients=2,
> state=ACTIVE, CPUs=15, offheap=20.0GB, heap=19.0GB]
> [03:56:43,389][INFO][disco-event-worker-#42%xx%][GridDiscoveryManager]
> ^-- Baseline [id=0, size=3, online=2, offline=1]
>
> So, it's just 2 different clusters.
>
> Best Regards,
> Evgenii
>
> No, the client will be getting an exception on an attempt to get an
> IgniteCache instance.
>
> -
> Denis
>
>
> On Fri, Aug 14, 2020 at 4:14 PM John Smith wrote:
>
>> Yeah I can maybe use vertx event bus or something to do this... But now I
>> have to tie th
seems that you have too-long full GC. Either make sure it does not
> happen, or increase failureDetectionTimeout to be longer than any expected
> GC.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Mon, 17 Aug 2020 at 17:51, John Smith :
>
>> Hi guys it seems
e the failed node has to be involved.
>
> Btw, what's the tool you are using for the monitoring? Looks nice.
>
> -
> Denis
>
>
> On Thu, Aug 20, 2020 at 6:44 AM John Smith wrote:
>
>> Hi here is an example of our cluster during our normal "high" usage. The
ompute, etc.). But I'll let the maintainers of those modules clarify.
>
> -
> Denis
>
>
> On Fri, Aug 14, 2020 at 1:44 PM John Smith wrote:
>
>> Hi Denis, so to understand it's all operations or just the query?
>>
>> On Fri., Aug. 14, 2020, 12:53 p.m. Denis
e when the cluster is not activated yet.
> Does this work for you?
>
> -
> Denis
>
>
> On Fri, Aug 14, 2020 at 3:12 PM John Smith wrote:
>
>> Is there any work around? I can't have an HTTP server block on all
>> requests.
>>
>> 1- I need to figure out why I lose
rations fail if
> the cluster is deactivated. Could you propose the change by starting a
> discussion on the dev list? You can refer to this user list discussion for
> reference. Let me know if you need help with this.
>
> -
> Denis
>
>
> On Thu, Aug 13, 2020 at 5:55 PM John Smit
You can create templates in the XML, and programmatically, when you say
getOrCreate(), you can specify the template to use and pass in a random name
for the cache name...
Hi guys it seems every couple of weeks we lose a node... Here are the logs:
https://www.dropbox.com/sh/8cv2v8q5lcsju53/AAAU6ZSFkfiZPaMwHgIh5GAfa?dl=0
And some extra details. Maybe I need to do more tuning than what is already
mentioned below, maybe set a higher timeout?
3 server nodes and 9
Hi, I'm running an Ignite cluster on VMs running on OpenStack, using the
regular network stack, nothing special here.
My CLIENT (client=true) applications are running in DC/OS using docker
container in bridged network mode.
When using TCP discovery everything works nice and dandy. But I recently
So, I'm guessing that the client node reports multiple IPs to the
zookeeper, so then when another node tries to connect to the client node to
create the full mesh, maybe it's connecting to the wrong IP?
On Mon, 1 Jun 2020 at 11:26, akorensh wrote:
> Hi,
>ZookeperDiscoverySpi
> <
>
Any news on this? Thanks
On Thu., May 28, 2020, 1:10 p.m. John Smith, wrote:
> Hi, I'm running an Ignite cluster on VMs running on OpenStack, using the
> regular network stack, nothing special here.
>
> My CLIENT (client=true) applications are running in DC/OS using docker
> conta
Hi, testing some failover scenarios etc...
When we call cache.getAsync() and the state of the cluster is not active,
it seems to block.
I implemented a cache repository as follows and using Vertx.io. It seems to
block at the cacheOperation.apply(cache)
So when I call myRepo.get(myKey) which
5 Jul 2020 at 17:45, Evgenii Zhuravlev
wrote:
> John,
>
> Then you should just get a new builder every time when you need it:
> myIgniteInstance.binary().builder("MyKey"). I don't see why you need to
> reuse builder from multiple threads here.
>
> Evgenii
>
> Wed,
2. No, you still can work with BinaryObjects instead of actual classes.
>
> Evgenii
>
> Wed, 15 Jul 2020 at 08:50, John Smith :
>
>> Hi Evgenii, it works good. I have two questions...
>>
>> 1- Is the BinaryObjectBuilder obtained from
>> myIgniteInstance.binary
I get the below exception on my client...
#1 I rebooted the cache nodes; the error still continued.
#2 Restarted the client node; the error went away.
#3 This seems to happen every few weeks.
#4 Are there some sort of timeout values and retries I can set?
#5 cache operations seem to block when rebooting the