Please let us know if you face one.
>
>
> On Fri, Jun 29, 2018 at 1:56 AM Olexandr K
> wrote:
>
>> Hi Andrey,
>>
>> Thanks for clarifying this.
>> We have just a single persistent cache and I reworked the code to get rid
>> of expiration
/jira/browse/IGNITE-8659
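For reference, dropping expiration from a persistent cache is just a matter of not setting an expiry policy factory. A minimal Spring XML sketch (the cache name is made up for illustration, not taken from the poster's config):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myPersistentCache"/>
    <!-- no expiryPolicyFactory property: entries never expire,
         sidestepping the expiration-with-persistence issue -->
</bean>
```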
>
>
> On Fri, Jun 22, 2018 at 8:43 AM Olexandr K
> wrote:
>
>> Hi Team,
>>
>> Issue is still there in 2.5.0
>>
>> Steps to reproduce:
>> 1) start 2 servers + 2 clients topology
>> 2) start load testing on client nodes
>>
at
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:150)
~[ignite-core-2.5.0.jar:2.5.0]
at
org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter.initFromLink(CacheDataRowAdapter.java:102)
~[ignite-core-2.5.0.jar:2.5.0]
BR, Oleksandr
On Thu, Jun 14, 2018 at 2:51 PM, Olexandr K
wrote:
> Upgraded
Yes, you are right, hashes for 00.wal and 01.wal are different before/after
load test
(last-modified-time still showing june 15th)
BEFORE
PS F:\ignite-wal\V_HP_LK_DCN01> get-filehash .\*

Algorithm       Hash
---------       ----
SHA256
got it, thanks
On Wed, Jun 20, 2018 at 11:57 AM, dkarachentsev
wrote:
> Hi Oleksandr,
>
> It's OK for discovery, and this message is printed only in debug mode:
>
> if (log.isDebugEnabled())
>     log.error("Exception on direct send: " + e.getMessage(), e);
>
> Just turn off debug logging
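If that logger is too chatty, lowering its level is enough. A hedged sketch, assuming the ignite-log4j2 module is used for logging (the logger name targets the discovery SPI package):

```xml
<Loggers>
    <!-- suppress DEBUG output from discovery; INFO and above still appear -->
    <Logger name="org.apache.ignite.spi.discovery" level="INFO"/>
</Loggers>
```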
WAL mode is the default one, and all caches are FULL_ASYNC
...and now I see one of my Ignite servers updated its WAL files but the
other one still has a last-modification time of the 19th
it looks like WAL files are updated once per day
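For context, the WAL mode in question is set on the data storage configuration. A minimal sketch (LOG_ONLY is the default in recent Ignite versions; treat the exact value as an assumption to verify against your version):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <!-- LOG_ONLY batches fsyncs; FSYNC is strictest; NONE disables the WAL -->
        <property name="walMode" value="LOG_ONLY"/>
    </bean>
</property>
```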
Hi Ignite team,
I noticed that nothing is written into WAL files even on Ignite restart.
My testing steps:
1) bounce application and ignite cluster
2) perform load testing
3) bounce application and ignite cluster
4) check ignite files:
data files have recent modification time - OK
latest WAL
Hi Igniters,
I'm getting "connect timed out" errors on each cluster restart.
The errors are logged ~10 times before cluster activation.
Everything works fine after that.
They look like false alarms... it seems nodes are trying to connect to
each other before they are UP.
Why is it logged
each time a node restarts, and it is always a UUID.
>
> ConsistentID is a UUID by default, but it doesn’t have to be the same as
> NodeID and doesn’t have to have a form of UUID – any string works (and even
> any Object with an idempotent toString(), but tsss - don’t tell anyone!
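A fixed consistentId can be pinned in the node configuration so it survives restarts. A minimal Spring XML sketch (the node name is a placeholder):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- any stable string works; it need not be a UUID -->
    <property name="consistentId" value="node-dcn01"/>
</bean>
```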
Upgraded to 2.5.0 and haven't seen this error so far..
Thanks!
On Wed, Jun 13, 2018 at 4:58 PM, dkarachentsev
wrote:
> It would be better to upgrade to 2.5, where it is fixed.
> But if you want to overcome this issue in your version, you need to add
> ignite-indexing dependency to your
I'm using key/value API.
Should I define index for the key explicitly?
This sounds strange... Can you give a sample of how I can do this in XML,
please?
Here is one of my cache configurations.
Actually I'm storing a UUID per UUID here: IgniteCache<UUID, UUID>
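For a UUID-to-UUID cache, the key and value types can be declared to the SQL engine via a query entity. A hedged XML sketch (the cache name is a placeholder; this assumes the ignite-indexing module is on the classpath):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="uuidCache"/>
    <property name="queryEntities">
        <list>
            <bean class="org.apache.ignite.cache.QueryEntity">
                <property name="keyType" value="java.util.UUID"/>
                <property name="valueType" value="java.util.UUID"/>
            </bean>
        </list>
    </property>
</bean>
```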
EAFF-73AF-4E2E-99A7-8F5DF58A3C40, order=1, clientMode=false]
logs\v-hp-lk-dcn02\ignite.log:274:>>> Local node
[ID=BBA63A1F-559E-461C-B7ED-B10CE3DE33CC, order=7, clientMode=false]
On Tue, Jun 12, 2018 at 9:48 PM, Olexandr K
wrote:
> Hi, Dmitry
>
> server nodes start with ignite-serv
Hi, Dmitry
server nodes start with ignite-server.xml and client nodes with
ignite-client.xml
server node hosts: v-hp-lk-dcn01, v-hp-lk-dcn02
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
Hi Ignite team,
I'm facing a baseline topology issue.
Here are my testing steps:
1) start 2 server (A, B) and 2 client nodes (C, D)
2) ensure baseline topology consists of 2 server nodes
3) stop server node A
4) start server node A
5) stop server node B
... oops, it cannot be started anymore
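When a restarted node rejoins with a different consistentId, the baseline no longer matches the live nodes. A sketch of the relevant control script commands (Ignite 2.4+; the IDs are placeholders - verify the flags against `control.bat --help` for your version):

```
control.bat --baseline                         :: print the current baseline topology
control.bat --baseline add nodeA-id            :: add a rejoined node to the baseline
control.bat --baseline set nodeA-id,nodeB-id   :: reset the baseline to exactly these nodes
```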
value will be low enough to detect node failures and
> high enough to allow regular operations to pass.
>
> If that doesn’t help, please share the full logs from all nodes.
>
>
>
> Thanks,
>
> Stan
>
>
>
> *From: *Olexandr K
> *Sent: *June 11, 2018, 1
> since it performs substantial work in the calling thread. I’ll give it
> another look and file a bug.
>
> For now, as a workaround (aside from configuring the timeouts) I’d suggest
> to use an ExecutorService to call put()/putAsync(), getting a cancelable
> Future from the start
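The workaround above can be sketched with plain JDK concurrency. Here `slowPut` is a hypothetical stand-in for the real `cache.put()` call; names and timeouts are illustrative, not Ignite API:

```java
import java.util.concurrent.*;

public class PutTimeoutSketch {
    // slowPut is a hypothetical stand-in for cache.put(); it just sleeps
    static String slowPut(long millis) throws InterruptedException {
        Thread.sleep(millis);
        return "done";
    }

    // Run the operation on a worker thread and cancel it on timeout,
    // so the calling thread never blocks indefinitely on a hung put().
    public static String putWithTimeout(long opMillis, long timeoutMillis) {
        ExecutorService exec = Executors.newSingleThreadExecutor();
        Future<String> f = exec.submit(() -> slowPut(opMillis));
        try {
            return f.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            f.cancel(true);               // interrupt the stuck operation
            return "timed out";
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            exec.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println(putWithTimeout(10, 1000));  // done
        System.out.println(putWithTimeout(1000, 50));  // timed out
    }
}
```

With a real cache the Callable body would be the blocking `cache.put(key, value)` call, giving the caller a cancelable Future as suggested above.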
Hi Igniters,
I'm testing our system for availability.
It uses Ignite as a key/value persistent cache.
Here is my test:
1) start 2 server and 2 client nodes
2) run heavy load on client nodes (some application logic which causes
cache calls)
3) stop 1 server node
Here I expect all in-progress cache
>
> A tool like that would be pretty simple though – just start a client node,
> parse command line arguments and
>
> map them to the methods of the ignite.cluster().
>
>
>
> Stan
>
>
>
> *From: *Olexandr K <olexandr.kundire...@gmail.com>
>
Hi guys,
I configured Ignite user/password authentication by adding a custom plugin.
It works fine in server/client nodes and visor but I can't find any auth
support in control.bat
I checked its source code and don't see any place where I can provide
credentials.
Should I write my own control
Hi Val, Ignite team
Reviewed all open Ignite ports after your comments.
Everything is clear except loopback-related ports.
Here is what I see in Resource Monitor (Windows Server 2012 R2)
(prunsrv.exe is common-daemon java service wrapper running JVM 1.8 inside)
I don't understand what port
yes, everything is clear now
thanks!
On Thu, Apr 26, 2018 at 2:05 PM, slava.koptilin
wrote:
> Hi,
>
> 1) start 2 server nodes
> ^ the cluster was started for the first time. There was no baseline
> topology yet.
>
> 2) control.bat --state => Cluster is inactive
> 3)
but I didn't set up the baseline topology
doc says: "To form the baseline topology from a set of nodes, use the
./control.sh --baseline set command along with a list of the nodes'
consistent IDs:"
Is it auto-resolved after the first cluster activation, so we don't need to
use "control.bat --baseline"?
Hi Igniters,
I observe strange cluster activation behaviour in Ignite 2.4.0
1) start 2 server nodes
2) control.bat --state => Cluster is inactive
3) control.bat --activate => Cluster activated
4) stop both nodes
5) start them again
6) control.bat --state => Cluster is active
Q: why my cluster
Thanks Val
one more question on this
if I want to minimize the number of potentially-used ports (because of our
security guys)
is it OK to have a configuration like the one below?
here we have 3 server nodes, each running on a separate server; they always
use the same port 47500 - no ranges
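A discovery configuration pinned to a single port might look like the sketch below (hostnames are placeholders; a port range of 0 tells the SPI to bind only to the given port):

```xml
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="localPort" value="47500"/>
        <!-- range 0: bind exactly one port instead of scanning a range -->
        <property name="localPortRange" value="0"/>
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                <property name="addresses">
                    <list>
                        <value>host1:47500</value>
                        <value>host2:47500</value>
                        <value>host3:47500</value>
                    </list>
                </property>
            </bean>
        </property>
    </bean>
</property>
```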
As I understood
what about UDP ports? What are they used for in Ignite?
On Thu, Apr 19, 2018 at 10:40 PM, vkulichenko wrote:
> Most of these seem to be ephemeral ports assigned to discovery and
> communication clients when they connect to well-known configured ports on
> server
On Thu, Apr 5, 2018 at 11:22 PM, Olexandr K <olexandr.kundire...@gmail.com>
wrote:
> Hi team,
>
> I'm getting a strange exception when trying to save a Key/Value pair.
> When I tested locally everything was fine.
> Now I deployed the application to the servers cluster and got stuck with this e
Hi team,
I'm getting a strange exception when trying to save a Key/Value pair.
When I tested locally everything was fine.
Now I deployed the application to the servers cluster and got stuck with this error.
What does it mean? Both of my Ignite server nodes are UP.
Here is configuration for this cache:
s (make sure you’re running
> with `java -DIGNITE_QUIET=false` or with `ignite.sh -v`)?
>
>
>
> Thanks,
>
> Stan
>
>
>
> *From: *Olexandr K <olexandr.kundire...@gmail.com>
> *Sent: *March 6, 2018, 12:48
> *To: *user@ignite.apache.org
> *Subject: *How to rec
n temp dir.
> Start scripts always provide this setting, but in a developer's environment
> it may be missing.
>
> Setting up work/home/persistenceStoreDir will probably help to locate data
> files in configured place, so inserted data will be loaded always.
>
> Sincerely,
> Dmit
Hi Team,
I tried the following scenario:
1) start local ignite server node
2) start client node and put some key/value pairs
3) bounce server node
4) put some more key/value pairs from client node
at step (4) the Ignite put(..) operation just hung
I expected the Ignite client to take care of
Hi Team,
I'm trying to configure persistent cache.
Specifically, I need to cache key-value pairs, and they should also be
stored in persistent storage.
Here is my configuration:
[ignite.xml]
...
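A minimal sketch of enabling native persistence, as an illustration rather than the poster's actual file (since Ignite 2.3, persistence is enabled per data region):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <!-- writes through to disk; data survives restarts -->
                <property name="persistenceEnabled" value="true"/>
            </bean>
        </property>
    </bean>
</property>
```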
Yeah, just wanted to confirm I'm not missing obvious things
Thanks!
On Tue, Feb 27, 2018 at 12:40 AM, vkulichenko wrote:
> Oleksandr,
>
> Generally, this heavily depends on your use case, workload, amount of data,
> etc... I don't see any issues with your
Hi guys,
What is the recommended hardware for an Ignite node if it is used as a
distributed cache and persistent store?
I see you mentioned that separate SSDs are recommended for WAL, index and
data:
https://apacheignite.readme.io/docs/durable-memory-tuning
What can you say about such minimal
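Splitting the WAL, WAL archive, and data files across drives is done with three path properties on the storage configuration. A sketch with placeholder drive letters:

```xml
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <!-- put data, WAL, and WAL archive on separate disks as the tuning guide suggests -->
    <property name="storagePath" value="D:/ignite/storage"/>
    <property name="walPath" value="E:/ignite/wal"/>
    <property name="walArchivePath" value="F:/ignite/wal-archive"/>
</bean>
```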
Thank you!
I'm taking Ignite for our project then
Will come back with more concrete technical questions soon :)
On Fri, Feb 23, 2018 at 1:23 AM, vkulichenko
wrote:
> Hi Oleksandr,
>
> 1. Yes, Ignite is production ready.
> 2. I doubt there is a fully fledged example
Hi Ignite team,
I'm evaluating Ignite as distributed cache + key value store for our
client.
I have read most of the documentation but wanted to clarify some points:
1) Is Ignite production-ready for Windows Java 8 deployments? (Our client
has Windows cluster ready and devops team in place).
2)