Hi,
I'm trying to use Apache Ignite to cache protobuf messages, in order to speed
up our web interface.
I'm using the thin client for Node.js and Ignite 2.12.0.
I have two functions:
async function get_cache_value(name, key)
{
const igniteClient = new IgniteClient();
let value = null;
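(For context, the snippet above is cut off. A complete version of such a lookup function with the apache-ignite-client npm package might look roughly like the sketch below; the endpoint address and the use of getOrCreateCache are assumptions, not taken from the original mail.)

```javascript
// Sketch only: assumes a local Ignite node with the thin-client port 10800 open.
const { IgniteClient, IgniteClientConfiguration } = require('apache-ignite-client');

async function get_cache_value(name, key) {
    const igniteClient = new IgniteClient();
    let value = null;
    try {
        // Connect to the cluster via the thin-client protocol.
        await igniteClient.connect(new IgniteClientConfiguration('127.0.0.1:10800'));
        const cache = await igniteClient.getOrCreateCache(name);
        value = await cache.get(key);
    } finally {
        // Always release the connection, even if the lookup failed.
        igniteClient.disconnect();
    }
    return value;
}
```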
Hi,
Backups are configured; please recheck the cache config. It's set to 1, which
is why node 3 has backups from nodes 1 and 2, as shown in the picture attached
to my previous reply.
On Mon, Jan 31, 2022, 20:46 Stephen Darlington <
stephen.darling...@gridgain.com> wrote:
According to the configuration you shared, you don’t have backups.
I would expect to see something like:
CacheConfiguration deptCacheConfig = new CacheConfiguration<>(DEPT_CACHE);
deptCacheConfig.setCacheMode(CacheMode.PARTITIONED);
deptCacheConfig.setBackups(1);
IgniteCache deptCache = ignite.getOrCreateCache(deptCacheConfig);
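(For nodes started from an XML file, as with ./ignite.sh ../config/custom-config.xml elsewhere in this thread, the equivalent would be a Spring bean along these lines; the cache name is illustrative, and this is a sketch rather than the poster's actual configuration:)

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="cacheConfiguration">
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="DEPT_CACHE"/>
            <property name="cacheMode" value="PARTITIONED"/>
            <property name="backups" value="1"/>
        </bean>
    </property>
</bean>
```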
Hi, igniters! Join the virtual gathering on February 16 to get an overview
of the latest Ignite releases. We will start with a brief overview of the
Ignite 2.12 release and then continue with Ignite 3 Alpha 4 updates, paying
special attention to the recently added transactional API. Ignite.
Speakers:
Hi,
We are using Ignite 2.11.1 to experiment with Ignite snapshots. We tried the
steps described on the page below to restore Ignite data from a snapshot:
https://ignite.apache.org/docs/latest/snapshots/snapshots
But we get the error below when we start the cluster after copying the data
manually as mentioned on t
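(For context, the manual restore procedure on that page boils down to roughly the following; the snapshot name and directory layout here are placeholders assuming the default work directory, and the whole cluster must be stopped before copying:)

```shell
# Sketch of a manual snapshot restore on one node (repeat on every node).
# 1. Stop every node in the cluster first.
# 2. Remove the current persistence data.
rm -rf $IGNITE_HOME/work/db
# 3. Copy this node's files from the snapshot directory back into the work directory
#    (MY_SNAPSHOT is a placeholder for the actual snapshot name).
cp -r $IGNITE_HOME/work/snapshots/MY_SNAPSHOT/db $IGNITE_HOME/work/
# 4. Restart the nodes.
```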
It looks like the app pool gets recycled, but the Ignite node is not shut
down gracefully, causing the abrupt disconnect.
You can use code like this to stop all Ignite nodes gracefully when the
application exits:
AppDomain.CurrentDomain.ProcessExit += (s, e) => Ignition.StopAll(true);
On Sat, Jan 29, 2022
New setup:
Node configuration:
Started 3 different nodes with this config:
./ignite.sh ../config/custom-config.xml
Client: it adds data once and queries the cache
cfg.setClientMode(true);
try
Hi, thanks for pointing out the reason why data is being stored on the same
machine.
On the 2nd point: as I mentioned in the steps, we are getting the same
exception even with backups.
Could you please point out the other problems with the configuration you
mentioned
On Mon, Jan 31, 2022 at 4:35 PM Steph
There’s a lot to unpack here, but there are (at least) two problems with your
configuration.
First, you should not be activating your cluster in your startup script. That’s
an operation that needs to be completed only once, not every time the cluster
(or node) starts. This is probably why all t
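(The one-time activation this reply refers to is normally done from the command line after the full baseline set of nodes is up, rather than from a startup script. A sketch, assuming a default installation:)

```shell
# Activate the cluster once, after all baseline nodes have started.
# Run this manually (or from a one-off provisioning step), not on every node start.
$IGNITE_HOME/bin/control.sh --set-state ACTIVE
```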
Hi Guys,
I observed the behavior below with persistence enabled. Could you please check
and confirm whether it's a bug or I have missed some configuration.
Steps:
1. Start Ignite nodes 1 and 2 with persistence enabled
2. Step 1 will write a few entries to the Ignite cache
3. All entries go to one node
4. Stop node cont