Re: Native persistence and upgrading

2020-12-14 Thread Stanislav Lukyanov
Yes, you can upgrade from an older version to a newer one and keep the data; it 
will just work.
You don't really need snapshots for that, although I assume snapshots would 
also work.

> On 10 Dec 2020, at 16:20, xero  wrote:
> 
> Hi Dimitry and community,
> Is this still true? My intention is to do it between versions 2.7.6 and
> 2.8.1/2.9. Basically, I want to only update the docker image keeping the
> volumes so that I can recover the persisted data. I couldn't find
> documentation regarding this topic.
> 
> On the other hand, release_2.9 introduced Cluster Snapshots. Are these
> snapshots version-agnostic, or are there considerations regarding which
> versions are compatible with the created snapshot? Could this be an
> alternative way to solve my issue (upgrading without losing the persisted data)?
> 
> Thanks in advance for the time.
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: High availability of local listeners for ContinuousQuery or Events

2020-11-13 Thread Stanislav Lukyanov
Hi,

It's true that currently you need to implement something for Continuous Queries 
failover in your application code.
Continuous Queries have the setInitialQuery API to help with that.

How it's supposed to work is: you define an initial query (e.g. an SQL query) 
which fetches the data that's already in the cache when the CQ is being 
registered.
In case of failover, your initial query should return the data your CQ listener 
has missed (or might have missed) while it was down.
For example, the following design would work:
- The cache has a "last update time" field and an SQL index over that field
- The initial query of the CQ queries all data updated in the last 30 minutes; 
it will be fast because of the index
- After you've started the CQ, you process the initial query results; if your 
CQ listener was down for less than 30 minutes, it will receive all updates it 
has missed (plus some that it has already seen - it needs to filter them out or 
just be idempotent)
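A sketch of this design in Java, assuming a cache named "events" and a value class with an indexed lastUpdateTime field (both names are illustrative, not from the original thread):

```java
import javax.cache.Cache;
import javax.cache.event.CacheEntryEvent;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlQuery;

// Sketch: a continuous query whose initial query replays the last 30 minutes
// of updates, so a restarted listener catches up on what it may have missed.
public class CqFailoverSketch {
    public static QueryCursor<Cache.Entry<Long, MyEvent>> start(Ignite ignite) {
        IgniteCache<Long, MyEvent> cache = ignite.cache("events");

        ContinuousQuery<Long, MyEvent> qry = new ContinuousQuery<>();

        // Initial query: everything updated in the last 30 minutes
        // (fast, given an index on lastUpdateTime).
        long cutoff = System.currentTimeMillis() - 30L * 60 * 1000;
        qry.setInitialQuery(new SqlQuery<Long, MyEvent>(MyEvent.class, "lastUpdateTime > ?")
            .setArgs(cutoff));

        // Live updates. Must be idempotent: the initial query may return
        // entries this listener has already seen.
        qry.setLocalListener(evts -> {
            for (CacheEntryEvent<? extends Long, ? extends MyEvent> e : evts)
                process(e.getKey(), e.getValue());
        });

        QueryCursor<Cache.Entry<Long, MyEvent>> cur = cache.query(qry);

        // Replay updates possibly missed while this listener was down.
        for (Cache.Entry<Long, MyEvent> e : cur)
            process(e.getKey(), e.getValue());

        // Keep the cursor open: closing it deregisters the continuous query.
        return cur;
    }

    static void process(Long key, MyEvent val) { /* idempotent handler */ }

    public static class MyEvent { public long lastUpdateTime; }
}
```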

I'm sure that an out-of-the-box HA mechanism for CQ listeners will be added in 
the future versions of the platform.

Stan

> On 30 Oct 2020, at 11:01, 38797715 <38797...@qq.com> wrote:
> 
> Hi Igor,
> 
> We hope that if the local listener node fails, we can have a mechanism 
> similar to fail over. Otherwise, if the local listener node fails and 
> restarts, the events during the failure will be lost.
> 
> 在 2020/10/30 上午12:04, Igor Belyakov 写道:
>> Hi,
>> 
>> In case the node, which registered a continuous query fails, the continuous 
>> query will be undeployed from the cluster. The cluster state won't be 
>> changed.
>> 
>> It's not a good practice to write the business code in a remote filter. 
>> Could you please clarify more details regarding your use case?
>> 
>> Igor
>> 
>> On Thu, Oct 29, 2020 at 4:46 PM 38797715 <38797...@qq.com 
>> > wrote:
>> Hi community,
>> 
>> For local listeners registered for ContinuousQuery and Events, is there 
>> a corresponding high availability mechanism design? That is, if the node 
>> registering the local listener fails, what state will the cluster be?
>> 
>> If we do not register a local listener, but instead write the business code in 
>> the remote filter and return false, is this a good practice?
>> 
>> 



Re: Ignite timeouts and trouble interpreting the logs

2020-11-12 Thread Stanislav Lukyanov
Hi,

This looks weird but with the right logs we should figure it out.

One thing that I don't like about these settings is the asymmetry of the 
server's and client's timeouts.
The server will use clientFailureDetectionTimeout=30s when talking to the 
client.
The client will use failureDetectionTimeout=120s when talking to the server.

In general, I recommend setting failureDetectionTimeout on the client to be 
equal to clientFailureDetectionTimeout on the server.
In your case that means setting clientFailureDetectionTimeout=120s on the server.

Next, on the logs. Let's add more info to make it easier to debug the next time 
it occurs.

1. Let's make sure client logs are collected. For that, make sure you 
configure logging on the client - see here: 
https://ignite.apache.org/docs/latest/logging.

2. Add DEBUG logs for the Discovery subsystem. E.g., for Log4j 2 configured via XML, 
add the following:
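The XML fragment that followed appears to have been stripped by the mail archive. A minimal Log4j 2 logger entry for the discovery package (the package name is an assumption based on Ignite's layout) might look like:

```xml
<!-- Inside the <Loggers> section of log4j2.xml -->
<Logger name="org.apache.ignite.spi.discovery" level="DEBUG"/>
```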


3. If you see a client not being able to connect - try taking a thread dump. 
It'll help to understand what's happening on the client at the time.

With logs from the client and the server perhaps we'll be able to find out 
what's happening.

Stan

> On 31 Oct 2020, at 00:21, tschauenberg  wrote:
> 
> First some background.  Ignite 2.8.1 with a 3 node cluster, two webserver
> client nodes, and one batch processing client node that comes and goes.
> 
> The two webserver thick client nodes and the one batch processing thick
> client node have the following configuration values:
> * IgniteConfiguration.setNetworkTimeout(6)
> * IgniteConfiguration.setFailureDetectionTimeout(12)
> * TcpDiscoverySpi.setJoinTimeout(6)
> * TcpCommunicationSpi.setIdleConnectionTimeout(Long.MAX_VALUE)
> 
> The server nodes do not have any timeouts set and are currently using all
> defaults.  My understanding is that means they are using:
> * failureDetectionTimeout = 10000 ms (10 s)
> * clientFailureDetectionTimeout = 30000 ms (30 s)
> 
> Every so often the batch processing client node fails to connect to the
> cluster.  We try to connect the batch processing client node to a single
> node in the cluster using:
> TcpDiscoverySpi.setIpFinder(TcpDiscoveryVmIpFinder().setAddresses(single
> node ip)
> 
> I see the following stream of logs on the server node the client connects to
> and I am hoping you can shed some light on which timeout values I have set
> incorrectly and what values I need to set instead.
> 
> In these logs I have obfuscated the client IP to 10.1.2.xxx and the server
> IP as 10.1.10.xxx
> 
> 
> On the server node that the client tries to connect to I see the following
> sequence of messages:
> 
> [20:21:28,092][INFO][exchange-worker-#42][GridCachePartitionExchangeManager]
> Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
> [topVer=4146, minorTopVer=0], force=false, evt=NODE_JOINED,
> node=1b91b2a5-05ac-4809-8a3d-c1c2efb6a3e3]
> 
> So the client joined the cluster almost at exactly the same time it tried to
> join which seems good so far.
> 
> Then I see
> [20:21:54,726][INFO][db-checkpoint-thread-#56][GridCacheDatabaseSharedManager]
> Skipping checkpoint (no pages were modified) [checkpointBeforeLockTime=6ms,
> checkpointLockWait=0ms, checkpointListenersExecuteTime=6ms,
> checkpointLockHoldTime=8ms, reason='timeout']
> [20:21:58,044][INFO][tcp-disco-sock-reader-[1b91b2a5 10.1.2.xxx:47585
> client]-#4176][TcpDiscoverySpi] Finished serving remote node connection
> [rmtAddr=/10.1.2.xxx:47585, rmtPort=47585
> 
> [20:21:58,045][WARNING][grid-timeout-worker-#23][TcpDiscoverySpi] Socket
> write has timed out (consider increasing
> 'IgniteConfiguration.failureDetectionTimeout' configuration property)
> [failureDetectionTimeout=1, rmtAddr=/10.1.2.xxx:47585, rmtPort=47585,
> sockTimeout=5000]
> 
> I don't understand this socket timeout line because that remote address is
> the client remote address so I don't know what it was doing here and this
> failureDetectionTimeout isn't the clientFailureDetectionTimeout which I
> don't get.
> 
> It then seems to connect just fine to the client discovery here
> 
> [20:22:10,170][INFO][tcp-disco-srvr-[:47500]-#3][TcpDiscoverySpi] TCP
> discovery accepted incoming connection [rmtAddr=/10.1.2.xxx, rmtPort=56921]
> [20:22:10,170][INFO][tcp-disco-srvr-[:47500]-#3][TcpDiscoverySpi] TCP
> discovery spawning a new thread for connection [rmtAddr=/10.1.2.xxx,
> rmtPort=56921]
> [20:22:10,171][INFO][tcp-disco-sock-reader-[]-#4178][TcpDiscoverySpi]
> Started serving remote node connection [rmtAddr=/10.1.2.xxx:56921,
> rmtPort=56921]
> [20:22:10,175][INFO][tcp-disco-sock-reader-[1b91b2a5 10.1.2.xxx:56921
> client]-#4178][TcpDiscoverySpi] Initialized connection with remote client
> node [nodeId=1b91b2a5-05ac-4809-8a3d-c1c2efb6a3e3,
> rmtAddr=/10.1.2.xxx:56921]
> [20:22:27,870][INFO][tcp-disco-sock-reader-[1b91b2a5 10.1.2.xxx:56921
> client]-#4178][TcpDiscoverySpi] Finished serving remote node connection
> [rmtAddr=/10.1.2.xxx:56921, rmtPort=56921
> 
> The client hits its timeout at 20:22:28 which is the 

[Webinar] Networking in Apache Ignite - Talk at Sep 9 2020

2020-09-09 Thread Stanislav Lukyanov
Hi Igniters,

I'll be giving a webinar titled Networking in Apache Ignite. We'll look at 
Apache Ignite's networking components - Discovery and Communication - to see 
how they work together to implement various networking functions.
The webinar will be held at 10 AM PT / 1 PM ET / 5 PM GMT. Details are here 
https://www.gridgain.com/resources/webinars/how-apache-ignite-achieves-performance-and-fault-tolerance-its-network.

Will be glad to see you there!

Stan

Re: apache-ignite compatibility with armhf(32-bit arm linux)

2020-07-24 Thread Stanislav Lukyanov
It should work fine. I've run Ignite on various non-Intel platforms,
including ARM. There were some issues in the past, but modern versions work
well. You do have to keep in mind that release testing for Ignite is
done on Intel platforms, and ARM can bring some surprises in terms of
performance.

I haven't tried the Debian package myself. On one hand, it should work - I
believe there is no platform-dependent code in the package. On the other
hand, I believe Debian packages are bound to a CPU architecture, aren't they?
If so, it probably won't allow you to install it.

Stan





[Meetup] The Role and Specifics of Networking - Talk at June 11 2020 for Bay Area IMC Meetup

2020-06-10 Thread Stanislav Lukyanov
Hi Igniters,

Tomorrow I'll be talking at an online meetup of the Bay Area In-Memory 
Computing community. The subject is The Role and Specifics of Networking in 
Distributed Systems. We'll use Apache Ignite's protocols as an example - 
experienced Ignite users will guess that we'll be looking at Discovery and 
Communication protocols and their functions.

Please join us at 12:30 PM PDT, June 11. More info: 
https://www.meetup.com/ru-RU/Bay-Area-In-Memory-Computing/events/271016164/

See you there!

Stan

Re: PutAll Behavior Single vs Multiple Servers

2019-12-11 Thread Stanislav Lukyanov
This is a very common pitfall with distributed systems - comparing 1 node
vs 3 nodes. In short, it is not correct to compare them.

When you write to one node each write does the following:
1) client sends the request to the server
2) server updates data
3) server sends the response to the client

When you write to three nodes with backups=1 each write needs to
1) client sends the request to the primary server
2) primary server updates data
3) primary server sends the request to the backup
4) backup updates data
5) backup sends the response to the primary
6) primary sends the response to the client

So, when you have more than one node you can actually have backups, and
having backups you need to do much more for each write.

If you want to check scalability, compare 3 nodes vs 4 vs 5, but 1 vs 3 is
not a fair comparison.

Also, you shouldn't start more than one node per host. They will just
compete for the same resources.

Stan

On Mon, Dec 9, 2019 at 9:57 PM Victor  wrote:

> Any pointers to understand this behavior?
>
>
>
>


Re: Local node terminated after segmentation

2019-12-11 Thread Stanislav Lukyanov
In Ignite a node can go into the "segmented" state in two cases, really: 1. The
node was unavailable (sleeping, hanging in a full GC, etc.) for a long time. 2.
The cluster detected a possible split-brain situation and marked the node as
"segmented".

Yes, split-brain protection (in GridGain implementation and in theory too)
doesn't protect your node from stopping. It protects you from having two
segments that are alive at the same time which could lead to data
inconsistency over time.

Regarding Discovery and large clusters. If your cluster is too big for the
ring-based TcpDiscoverySpi to work well then you should use Zookeeper
Discovery which was created specifically to support large clusters.
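As a side note, what a segmented node actually does is controlled by IgniteConfiguration.setSegmentationPolicy, whose values match the stop/restart/noop options mentioned in the thread. A Spring XML sketch (illustrative, not a recommendation for this cluster):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Action taken when the node finds itself segmented:
         STOP, RESTART_JVM or NOOP. -->
    <property name="segmentationPolicy" value="RESTART_JVM"/>
</bean>
```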

Stan

On Mon, Dec 9, 2019 at 4:02 PM Prasad Bhalerao 
wrote:

>
> Can someone please advise on this?
>>
>> -- Forwarded message -
>> From: Prasad Bhalerao 
>> Date: Fri, Nov 29, 2019 at 7:53 AM
>> Subject: Re: Local node terminated after segmentation
>> To: 
>>
>>
>> I had checked the resource you mentioned, but I was confused with
>> grid-gain doc  describing it as protection against split-brain. Because if
>> the node is segmented the only thing one can do is stop/restart/noop.
>> I was just wondering how it provides protection against split-brain.
>> Now I think by protection it means kill the segmented node/nodes or
>> restart it and bring it back in the cluster .
>>
>> Ignite uses TcpDiscoverySpi to send a heartbeat to the next node in the ring
>> to check whether that node is reachable or not.
>> So the question is: in what situation does one need more ways to check whether a
>> node is reachable, using different resolvers?
>>
>> Please let me know if my understanding is correct.
>>
>> The article you mentioned, I had checked that code. It requires a node to
>> be configured in advance so that the resolver can check if that node is
>> reachable from the local host. It does not check whether all the nodes are
>> reachable from the local host.
>>
>> Eg: node1 will check for node2 and node2 will check for node 3 and node 3
>> will check for node1 to complete the ring
>> Just wondering how to configure this plugin in prod env with large
>> cluster.
>> I tried to check grid-gain doc to see if they have provided any sample
>> code to configure their plugins just to get an idea but did not find any.
>>
>> Can you please advise?
>>
>>
>> Thanks,
>> Prasad
>>
>> On Thu 28 Nov, 2019, 11:41 PM akurbanov >
>>> Hello,
>>>
>>> Basically this is a mechanism to implement custom logical/network
>>> split-brain protection. Segmentation resolvers allow you to implement a
>>> way
>>> to determine if node has to be segmented/stopped/etc in method
>>> isValidSegment() and possibly use different combinations of resolvers
>>> within
>>> processor.
>>>
>>> If you want to check out how it could be done, some articles/source
>>> samples
>>> that might give you a good insight may be easily found on the web, like:
>>>
>>> https://medium.com/@aamargajbhiye/how-to-handle-network-segmentation-in-apache-ignite-35dc5fa6f239
>>>
>>> http://apache-ignite-users.70518.x6.nabble.com/Segmentation-Plugin-blog-or-article-td27955.html
>>>
>>> 2-3 are described in the documentation, copying the link just to point
>>> out
>>> which one:
>>> https://apacheignite.readme.io/docs/critical-failures-handling
>>>
>>> By default the answer to 2 is: Ignite doesn't ignore the FailureType
>>> SEGMENTATION and calls the failure handler in this case. The actions that are
>>> taken are defined in the failure handler.
>>>
>>> AbstractFailureHandler class has only SYSTEM_WORKER_BLOCKED and
>>> SYSTEM_CRITICAL_OPERATION_TIMEOUT ignored by default. However, you might
>>> override the failure handler and call .setIgnoredFailureTypes().
>>>
>>> Links:
>>> Extend this class:
>>>
>>> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/failure/AbstractFailureHandler.java
>>> — check for custom implementations used in Ignite tests and how they are
>>> used.
>>>
>>> Sample from tests:
>>>
>>> https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/failure/SystemWorkersBlockingTest.java
>>>
>>> Failure processor:
>>>
>>> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/failure/FailureProcessor.java
>>>
>>> Best regards,
>>> Anton
>>>
>>>
>>>
>>>
>>>
>>>
>>


Re: Client (weblogic) attempting to rejoin cluster causes shutdown.

2019-12-10 Thread Stanislav Lukyanov
Ok, there is a lot to dig through here but let me try with establishing
simple things first.
1. If two nodes (client or server) have the same cache specified in the
configuration, the configs must be identical.
2. If one node has a cache configuration, it will be shared between all
nodes automatically.
3. Client doesn't store data (except for LOCAL caches or near caches)

Try only specifying caches and data regions on your server.
Does it help?

Stan

On Tue, Dec 10, 2019 at 7:58 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> Unfortunately it's hard to say without looking at complete logs from your
> nodes. There's too many questions and not enough clues.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> пт, 6 дек. 2019 г. в 03:56, Steven.Gerdes :
>
>> The production instance has issues with Ignite running out of heap; the solution
>> we attempted to implement was to set the default data region to have swap
>> enabled and also to set an eviction policy on the server with a maxMemorySize
>> much less than the Xmx JVM memory size.
>>
>> Testing locally with a dev version of our server (WebLogic acting as an
>> Ignite node with client mode enabled) and the Docker instance of Ignite 2.7.6, it
>> appears that using this configuration does not solve Ignite's instability
>> issues.
>>
>> Many different configurations were attempted (for the full config see
>> bottom
>> of post). The desired configuration would be one which the client has no
>> cache and the server does all the caching. That was done with attempting
>> the
>> below on the server:
>>
>>   
>>
>>   
>>
>> > class="org.apache.ignite.cache.eviction.lru.LruEvictionPolicyFactory">
>>   
>>
>>   
>> 
>>   
>>
>> With the above server configuration 3 attempts were made to the client
>> configuration:
>>
>> 1. Mirrored configuration on the client
>> 2. Similar configuration with maxSize set to 0 (an attempt at ensuring the
>> client didn't try to cache)
>> 3. enabling IGNITE_SKIP_CONFIGURATION_CONSISTENCY_CHECK and not having any
>> eviction policy for the client
>>
>> All three of these configurations resulted in the client weblogic to
>> disconnect from the cluster and finally to die (while attempting to
>> reconnect to ignite)
>>
>> Error from client before death:
>>   | Servlet failed with an Exception
>>   | java.lang.IllegalStateException: Grid is in invalid state to perform
>> this operation. It either not started yet or has already being or have
>> stopped [igniteInstanceName=null, state=STOPPING]
>>   | at
>>
>> org.apache.ignite.internal.GridKernalGatewayImpl.illegalState(GridKernalGatewayImpl.java:201)
>>   | at
>>
>> org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:95)
>>   | at
>> org.apache.ignite.internal.IgniteKernal.guard(IgniteKernal.java:3886)
>>   | at
>>
>> org.apache.ignite.internal.IgniteKernal.transactions(IgniteKernal.java:2862)
>>   | at
>>
>> org.apache.ignite.cache.websession.CustomWebSessionFilter.init(CustomWebSessionFilter.java:273)
>>   | Truncated. see log file for complete stacktrace
>>
>> Error in ignitevisorcmd.sh:
>> SEVERE: Blocked system-critical thread has been detected. This can lead to
>> cluster-wide undefined behaviour [threadName=tcp-disco-msg-worker,
>> blockedFor=17s]
>> Dec 06, 2019 12:17:27 AM java.util.logging.LogManager$RootLogger log
>> SEVERE: Critical system error detected. Will be handled accordingly to
>> configured handler [hnd=StopNodeFailureHandler
>> [super=AbstractFailureHandler
>> [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED, SEGMENTATION]]],
>> failureCtx=FailureContext [type=SYSTEM_WORKER_BLOCKED, err=class
>> o.a.i.IgniteException: GridWorker [name=tcp-disco-msg-worker,
>> igniteInstanceName=null, finished=false, heartbeatTs=1575591430824]]]
>> class org.apache.ignite.IgniteException: GridWorker
>> [name=tcp-disco-msg-worker, igniteInstanceName=null, finished=false,
>> heartbeatTs=1575591430824]
>> at
>>
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1831)
>> at
>>
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance$2.apply(IgnitionEx.java:1826)
>> at
>>
>> org.apache.ignite.internal.worker.WorkersRegistry.onIdle(WorkersRegistry.java:233)
>> at
>>
>> org.apache.ignite.internal.util.worker.GridWorker.onIdle(GridWorker.java:297)
>> at
>>
>> org.apache.ignite.internal.processors.timeout.GridTimeoutProcessor$TimeoutWorker.body(GridTimeoutProcessor.java:221)
>> at
>> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
>> at java.lang.Thread.run(Thread.java:748)
>>
>> Sometimes in testing it was possible for the client to successfully
>> reconnect. But I could not see why it was inconsistent with this behavior.
>>
>> A separate test was conducted in which there was no eviction policy or
>> on-heap enabled on either the client or server. This seems to 

Re: How sql works for near cache

2019-12-10 Thread Stanislav Lukyanov
Not out of the box but you could use SQL or ScanQuery for that.

With SQL:
SELECT _key FROM mycache
(given that your cache is SQL-enabled).

With ScanQuery:
cache.query(new ScanQuery(), Cache.Entry::getKey)
(may need to fix type errors to compile this)
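A version of that ScanQuery call with the generics spelled out (the cache name and the key/value types are assumptions for illustration):

```java
import java.util.List;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ScanQuery;

// Sketch: fetch every key of a cache using ScanQuery with a transformer
// so only the keys (not the values) travel back to the caller.
public class KeysetSketch {
    public static List<Integer> allKeys(Ignite ignite) {
        IgniteCache<Integer, String> cache = ignite.cache("mycache");

        return cache.query(
            new ScanQuery<Integer, String>(),
            Cache.Entry::getKey)   // transformer: keep only the key
            .getAll();
    }
}
```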

Stan

On Wed, Dec 4, 2019 at 2:36 AM Hemambara  wrote:

> Is there any way we can get complete keyset or values from a near cache.
> Something like cache.keyset()
>
>
>
>


Re: Error while adding the node the baseline topology

2019-11-05 Thread Stanislav Lukyanov
This message actually looks worrisome:
2019-10-22 10:31:42,441][WARN
][data-streamer-stripe-3-#52][PageMemoryImpl] Parking
thread=data-streamer-stripe-3-#52 for timeout (ms)=771038

It means that Ignite's throttling algorithm has decided to put a thread to
sleep for 771 seconds.

Can you share your persistence configuration (DataStorageConfiguration or,
on older versions, PersistentStoreConfiguration)?
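For reference, the block being asked about typically looks like the following Spring XML; writeThrottlingEnabled is the setting most directly related to the thread-parking behaviour above (the values shown are illustrative assumptions, not a recommendation for this cluster):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Smooths writes gradually instead of parking threads for long
                 periods when checkpointing falls behind. -->
            <property name="writeThrottlingEnabled" value="true"/>
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <property name="persistenceEnabled" value="true"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```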

Thanks,
Stan

On Thu, Oct 31, 2019 at 2:39 AM Denis Magda  wrote:

> Have you tried to turn off the failure handling following the previously
> shared documentation page? It looks like some timeouts need to be tuned.
>
> Denis
>
> On Friday, October 25, 2019, krkumar24061...@gmail.com <
> krkumar24061...@gmail.com> wrote:
>
>> Hi - The application is doing two things, one thread is writing 2kb size
>> events to the ignite cache as a key value and other thread is executing
>> ignite SQLs thru ignite jdbc connections. The throughput is anything
>> between
>> 25K to 40K events per second on the cache size. We are using data streamer
>> for writing the key value cache. The cluster has 4 nodes with 198GB ram
>> and
>> 48 cores.
>>
>> We got a similar error again and here is the error description:
>>
>> [2019-10-25 10:16:45,399][ERROR][disco-event-worker-#142][G] Blocked
>> system-critical thread has been detected. This can lead to cluster-wide
>> undefined behaviour [threadName=data-streamer-stripe-0, blockedFor=2032s]
>> [2019-10-25 10:16:45,399][WARN ][disco-event-worker-#142][G] Thread
>> [name="data-streamer-stripe-0-#49", id=80, state=WAITING, blockCnt=7,
>> waitCnt=5352642]
>>
>> [2019-10-25 10:16:45,399][ERROR][disco-event-worker-#142][root] Critical
>> system error detected. Will be handled accordingly to configured handler
>> [hnd=StopNodeOrHaltFailureHandler [tryStop=false, timeout=0,
>> super=AbstractFailureHandler [ignoredFailureTypes=[SYSTEM_WORKER_BLOCKED,
>> SYSTEM_CRITICAL_OPERATION_TIMEOUT]]], failureCtx=FailureContext
>> [type=SYSTEM_WORKER_BLOCKED, err=class o.a.i.IgniteException: GridWorker
>> [name=data-streamer-stripe-0, igniteInstanceName=null, finished=false,
>> heartbeatTs=1572010973019]]]
>>
>> Thanx and Regards,
>> KR Kumar
>>
>>
>>
>>
>
>
> --
> -
> Denis
>
>


Re: How to insert data?

2019-11-05 Thread Stanislav Lukyanov
There are multiple ways to configure a cache to use SQL. The easiest is to
use @QuerySqlField annotation. Check out this doc
https://www.gridgain.com/docs/8.7.6/developers-guide/SQL/sql-api#querysqlfield-annotation
.
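As a hedged sketch of that approach - the class, cache name and fields below mirror the question's DataX/tdCache, but the exact shape is illustrative:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

// Sketch: making a cache SQL-queryable with @QuerySqlField annotations
// instead of building a QueryEntity by hand.
public class DataX {
    @QuerySqlField(index = true)
    private Integer key;

    @QuerySqlField
    private String value;

    public static IgniteCache<Integer, DataX> create(Ignite ignite) {
        CacheConfiguration<Integer, DataX> cfg =
            new CacheConfiguration<Integer, DataX>("tdCache")
                // Registers the annotated fields as SQL columns cluster-wide.
                .setIndexedTypes(Integer.class, DataX.class);

        return ignite.getOrCreateCache(cfg);
    }
}
```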

On Tue, Nov 5, 2019 at 5:52 PM BorisBelozerov 
wrote:

> I have 3 nodes, and I code in each node:
> The 1st node: in Main function
> Ignite ignite=Ignition.start();
> CacheConfiguration cacheConfiguration = new
> CacheConfiguration();
> QueryEntity valQE = new QueryEntity();
> valQE.setKeyFieldName("key");
> valQE.setKeyType("java.lang.Integer");
> valQE.setValueType("DataX");
> valQE.addQueryField("key", "java.lang.Integer", "key");
> valQE.addQueryField("value","java.lang.String","value");
>
> cacheConfiguration.setQueryEntities(java.util.Arrays.asList(valQE));
> cacheConfiguration.setName("tdCache");
> IgniteCache dtCache =
> ignite.getOrCreateCache(cacheConfiguration);
> The 2nd and 3rd node:
> Ignite ignite=Ignition.start();
> I open ignitevisorcmd.bat but I can't see DataX or tdCache
> I use sqlline.bat: select * from "tdCache".DATAX; but I can't select it:
> Schema "tdCache" not found
> Can you help me?? Thank you
>
>
>
>


Re: ignite-rest-http in classpath error

2019-11-05 Thread Stanislav Lukyanov
Hi,

Web Console requires the ignite-rest-http module to be enabled. It is not
enabled by default in the Ignite binaries or in the Docker image.
The steps that you've taken are done while the container is running - so,
AFTER the Ignite process has started. That's why copying the module has no
effect.

Try setting env variable OPTION_LIBS=ignite-rest-http for your Docker
image. It will tell the script running in the image to copy the module
before starting Ignite process.
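Assuming the standard apacheignite/ignite image entrypoint, that would look like:

```shell
# OPTION_LIBS tells the image's run script to copy the listed optional
# modules from libs/optional to libs before the Ignite process starts.
docker run -e "OPTION_LIBS=ignite-rest-http" apacheignite/ignite:2.7.6
```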

Stan

On Tue, Nov 5, 2019 at 12:35 AM jrobison-sb  wrote:

> We're attempting to run Apache Ignite as a standalone application, not
> bundled into our own custom application or anything. We're running it using
> the apacheignite/ignite:2.7.6 docker image.
>
> We're trying to use the ignite-web-agent, but when we start the agent via
> `./apache-ignite/libs/ignite-web-agent-2.7.0/ignite-web-agent.sh`, it fails
> saying:
>
> ```
> [2019-11-04 19:51:15,698][WARN ][pool-1-thread-1][RestExecutor] Failed
> connect to cluster. Please ensure that nodes have [ignite-rest-http] module
> in classpath (was copied from libs/optional to libs folder).
> ```
>
> Can anybody tell me what we're doing wrong?
>
> Steps we're taking:
>
> 1. Run the apacheignite/ignite:2.7.6 docker image. The following steps then
> happen inside that container.
> 2. Unzip ignite-web-agent-2.7.0.zip into apache-ignite/libs
> 3. mv apache-ignite/libs/optional/ignite-rest-http apache-ignite/libs/
> 4. Set up apache-ignite/libs/ignite-web-agent-2.7.0/default.properties with
> the necessary server-uri and token
> 5. Run ./apache-ignite/libs/ignite-web-agent-2.7.0/ignite-web-agent.sh and
> get the above error saying that ignite-rest-http isn't in the classpath.
>
> Thanks in advance.
>
>
>
>


Re: Ignite singel instance

2019-11-05 Thread Stanislav Lukyanov
First, 1700 TPS given your transaction structure (100 operations per
transaction) is 170,000 simple operations per second, which is quite
substantial - especially if you're doing that from a single thread / single
ODBC client.

Second, note that TRANSACTIONAL_SNAPSHOT is in beta and is not ready for
production use. There are no claims about performance or stability of that
cache mode.

Third, you don't need the index on key01 - it is created automatically
because it is a primary key. But you do need to set INLINE SIZE of the
default index. Run your Ignite server with the system property or environment
variable IGNITE_MAX_INDEX_PAYLOAD_SIZE=64.

Finally, the operations you're doing don't look like something to be done
with SQL. Consider using key-value API in Ignite C++ instead -
https://apacheignite-cpp.readme.io/docs.
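As an illustration of that suggestion, here is what the benchmark's cycle could look like as key-value calls; the snippet uses the Java API for brevity (the cache name and value type are assumptions), and Ignite C++ exposes equivalent Put/Get/Remove operations:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;

// Sketch: the 1-insert / 49-select / 49-update / 1-delete cycle from the
// benchmark expressed as key-value operations instead of SQL statements.
public class KeyValueSketch {
    public static void cycle(Ignite ignite, String key) {
        IgniteCache<String, Long> cache = ignite.cache("CDRTEST_KV");

        cache.put(key, 0L);                 // insert
        for (int i = 0; i < 49; i++) {
            Long v = cache.get(key);        // select
            cache.put(key, v + 1);          // update
        }
        cache.remove(key);                  // delete
    }
}
```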

Stan

On Sat, Nov 2, 2019 at 8:15 PM Evgenii Zhuravlev 
wrote:

> Hi,
>
> Do you use only one ODBC client? Can you start one more ODBC client and
> check the performance?
>
> Thanks,
> Evgenii
>
> сб, 2 нояб. 2019 г. в 16:47, Siew Wai Yow :
>
>> Hi,
>>
>> We are doing POC on ignite performance using ODBC driver but the
>> performance is always capped at around 1700 TPS which is too slow. It is
>> local ignite service. All tuning tips from Ignite page has been applied, no
>> bottleneck from CPU and Memory. At the moment not turn on persistence yet,
>> it will be worse if turn on. This POC is very crucial to our product
>> roadmap.
>>
>> Any tips? Thank you.
>>
>> Test case,
>> 1 x insert --> 49 x select -->49 x update --> 1 x delete
>> repeat for 5 times.
>>
>> *const char* insert_sql = "insert into CDRTEST
>> values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)";*
>> *const char* update_sql = "update CDRTEST set
>> value01=?,value02=?,value03=?,value04=?,value05=?,value21=?,value22=?,value23=?,value24=?,value25=?
>> where key01=?";*
>> *const char* delete_sql = "delete from CDRTEST where key01=?";*
>> *const char* select_sql = "select value01 from CDRTEST where
>> key01=?";*
>>
>> *retcode = SQLExecDirect(hstmt,
>> reinterpret_cast(const_cast("CREATE TABLE IF NOT EXISTS
>> CDRTEST ( "*
>> *  "key01 VARCHAR PRIMARY KEY, "*
>> *  "value01 LONG, "*
>> *  "value02 LONG, "*
>> *  "value03 LONG, "*
>> *  "value04 LONG, "*
>> *  "value05 LONG, "*
>> *  "value06 LONG, "*
>> *  "value07 LONG, "*
>> *  "value08 LONG, "*
>> *  "value09 LONG, "*
>> *  "value10 LONG, "*
>> *  "value11 LONG, "*
>> *  "value12 LONG, "*
>> *  "value13 LONG, "*
>> *  "value14 LONG, "*
>> *  "value15 LONG, "*
>> *  "value16 LONG, "*
>> *  "value17 LONG, "*
>> *  "value18 LONG, "*
>> *  "value19 LONG, "*
>> *  "value20 LONG, "   *
>> *  "value21 VARCHAR, "*
>> *  "value22 VARCHAR, "*
>> *  "value23 VARCHAR, "*
>> *  "value24 VARCHAR, "*
>> *  "value25 VARCHAR, "*
>> *  "value26 VARCHAR, "*
>> *  "value27 VARCHAR, "*
>> *  "value28 VARCHAR, "*
>> *  "value29 VARCHAR, "*
>> *  "value30 VARCHAR, "*
>> *  "value31 VARCHAR, "*
>> *  "value32 VARCHAR, "*
>> *  "value33 VARCHAR, "*
>> *  "value34 VARCHAR, "*
>> *  "value35 VARCHAR, "*
>> *  "value36 VARCHAR, "*
>> *  "value37 VARCHAR, "*
>> *  "value38 VARCHAR, "*
>> *  "value39 VARCHAR) "*
>> *  "WITH
>> \"template=partitioned,atomicity=TRANSACTIONAL_SNAPSHOT,WRITE_SYNCHRONIZATION_MODE=FULL_ASYNC\"")),
>> SQL_NTS);*
>> *CHECK_ERROR(retcode, "Fail to create table", hstmt,
>> SQL_HANDLE_STMT);*
>>
>> *retcode = SQLExecDirect(hstmt,
>> reinterpret_cast(const_cast("CREATE INDEX key01t_idx ON
>> CDRTEST(key01) INLINE_SIZE 64")), SQL_NTS);*
>>
>> *CHECK_ERROR(retcode, "Fail to create index", hstmt,
>> SQL_HANDLE_STMT); *
>>
>>
>> Below are configuration we used,
>>
>> **
>>
>> *http://www.springframework.org/schema/beans
>> "*
>> *   xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance
>> "*
>> *   xsi:schemaLocation="http://www.springframework.org/schema/beans
>> *
>> *http://www.springframework.org/schema/beans/spring-beans.xsd
>> ">*
>>
>> *> class="org.apache.ignite.configuration.IgniteConfiguration">*
>>
>> **
>> *> class="org.apache.ignite.configuration.BinaryConfiguration">*
>> **
>>
>> **
>> *> class="org.apache.ignite.binary.BinaryBasicIdMapper">*
>> **
>> **
>> **
>> **
>> **
>>
>> *  *
>> *> class="org.apache.ignite.configuration.ClientConnectorConfiguration">*
>> *  *
>> *  *
>> *  *
>> *  *
>> *  *
>> *  *
>> *  *
>> **
>> *  *
>>
>> **
>> *   > class="org.apache.ignite.configuration.DataStorageConfiguration">*
>> *  *
>> *  *
>> * > 

Re: Cluster in AWS can not have more than 100 nodes?

2019-11-05 Thread Stanislav Lukyanov
Each node is supposed to add its own IP and port to the S3 bucket when it
starts. That said, I wouldn't check the cluster state based on the contents
of the bucket alone.
Check your logs for errors. Try using some tools (e.g. check out Web
Console - either the one in Ignite
https://apacheignite-tools.readme.io/docs/getting-started or from GridGain
https://www.gridgain.com/products/software/enterprise-edition/management-tool)
to see whether all nodes are there.

If you don't see any clues, please share your logs from any of the running
servers. Ideally, a log from one of the 100 servers which joined the
cluster, and a log from one of the 28 that didn't.

Stan

On Fri, Nov 1, 2019 at 2:41 AM codeboyyong  wrote:

> No . The thing is we started ignite in embedded mode as web app, we can see
> there are 128 instances.
> But when we check the S3 bucket, it always only has 100 entries, which means
> only 100 nodes joined the ignite cluster. Not sure if there is another way
> to get the cluster size.
> Since we are running as web app, we actually dont have all the access of
> command line tools.
>
> Thank You
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How does Apache Ignite distribute???

2019-11-05 Thread Stanislav Lukyanov
I believe the correct answer to your question is: don't do that.

The strength of distributed systems is that you have a number of identical
pieces which you can scale out virtually with no limits.

If your cluster is heterogeneous - i.e. the nodes differ in size,
amount of data and power - you may easily run into issues with uneven load
distribution and eventually into performance issues.
Even more, you can run into cluster management issues, e.g. if one big node
leaves the cluster then the remaining small nodes will redistribute the
data and may not have enough space to do that.

If you really-really want something like that, and you think that your
specific use case is not going to suffer from uneven load or cluster
management issues, then go with the suggestion from Stephen - run multiple
nodes on big machines.

Stan

On Fri, Nov 1, 2019 at 4:36 AM BorisBelozerov 
wrote:

> Can you give me some code to implement it? Thank you!!
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Query execution in ignite

2019-11-01 Thread Stanislav Lukyanov
This is not exactly correct.
When you do an SQL query with only PARTITIONED tables, or with a mix of
PARTITIONED and REPLICATED, the data will be taken from the primary
partitions of PARTITIONED tables and *all* partitions of REPLICATED tables.
When you do an SQL query with only REPLICATED tables, the data will be
taken from all partitions (as they're all always available).

E.g. take this query
SELECT * FROM A JOIN B WHERE A.FIELD_A = B.FIELD_B

This query will work correctly if the data is collocated, i.e. in the
following cases:
- A and B are both REPLICATED
- A is PARTITIONED and B is REPLICATED
- A is REPLICATED and B is PARTITIONED
- A and B are both PARTITIONED with FIELD_A and FIELD_B both being their
affinity keys

If A and B are both PARTITIONED but FIELD_A or FIELD_B are not affinity
keys then the data is not correctly collocated and the query will
return incorrect results.
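To see why, here is a small self-contained sketch (plain Java, no Ignite APIs - the toy partition function and two "partitions" stand in for an affinity function and two nodes) showing how per-partition local joins silently drop matches when the join columns are not the affinity keys:

```java
import java.util.ArrayList;
import java.util.List;

public class NonCollocatedJoin {
    // Toy partition function standing in for Ignite's affinity function.
    static int part(int key, int parts) {
        return Math.abs(Integer.hashCode(key)) % parts;
    }

    // Runs the join locally per partition, the way map queries do.
    // Both tables are assumed partitioned by their first column (id).
    static int localJoinMatches(int[][] a, int[][] b, int parts) {
        int matches = 0;
        for (int p = 0; p < parts; p++) {
            List<int[]> aLocal = new ArrayList<>(), bLocal = new ArrayList<>();
            for (int[] row : a) if (part(row[0], parts) == p) aLocal.add(row);
            for (int[] row : b) if (part(row[0], parts) == p) bLocal.add(row);
            for (int[] ra : aLocal)
                for (int[] rb : bLocal)
                    if (ra[1] == rb[1]) // join condition: A.FIELD_A = B.FIELD_B
                        matches++;
        }
        return matches;
    }

    public static void main(String[] args) {
        // A(id, fieldA) and B(id, fieldB); the join columns are NOT the affinity keys.
        int[][] a = {{1, 100}, {2, 200}};
        int[][] b = {{4, 100}, {3, 200}};
        // A full join would find 2 matches, but per-partition local joins find 0:
        System.out.println(localJoinMatches(a, b, 2) + " of 2 matches found");
    }
}
```

With parts = 1 (everything collocated on one "node") the same method finds both matches, which is the behavior correct collocation restores.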

Stan

On Mon, Oct 28, 2019 at 9:34 PM  wrote:

> In PARTITIONED mode *SQL queries are executed* over each node’s *primary*
> partitions only.
>
>
>
> Here is more information about distributed joins:
> https://apacheignite-sql.readme.io/docs/distributed-joins
>
>
>
>
>
>
>
> *From:* Prasad Bhalerao 
> *Sent:* Saturday, October 26, 2019 12:25 AM
> *To:* user@ignite.apache.org
> *Subject:* Re: Query execution in ignite
>
>
>
>
>
> Question is specifically about primary and secondary partitions.
>
>
>
> So in case of replicated cache ignite scans primary and secondary
> partitions of any one of the node of the cluster to to fetch the data.
>
>
>
> But is it the case with partitioned cache.
>
> I mean in case of partitioned cache when SQL is executed, does ignite scan
> primary as well as secondary partitions of each node in the cluster or it
> just scans primary partitions of all the nodes in the cluster as the query
> is being executed on all nodes?
>
>
>
> On Fri 25 Oct, 2019, 10:46 PM 
> Hi,
>
>
>
>If a query is executed against a fully *REPLICATED* data then Ignite
> will send it to a single cluster node and run it over the local data there.
>
>
>
>
>
>
>
>  if a query is executed over a *PARTITIONED* data, then the execution
> flow will be the following:
>
> The query will be parsed and split into multiple map queries and a single
> reduce query.
>
> · All the map queries are executed on all the nodes where
> required data resides.
>
> · All the nodes provide result sets of local execution to the
> query initiator (reducer) that, in turn, will accomplish the reduce phase
> by properly merging provided result sets.
>
>
>
>
>
>
>
>  More information here:
> https://apacheignite-sql.readme.io/docs/how-ignite-sql-works
>
> Thanks, Alex
>
>
>
> *From:* Prasad Bhalerao 
> *Sent:* Friday, October 25, 2019 1:31 AM
> *To:* user@ignite.apache.org
> *Subject:* Query execution in ignite
>
>
>
> Hi,
>
>
>
> When SQL is executed, does ignite always scan only primary partitions of
> all available nodes in cluster irrespective of cache mode partitioned or
> replicated?
>
>
>
>
>
>
>
> Thanks ,
>
> Prasad
>
>


Re: Throttling getAll

2019-11-01 Thread Stanislav Lukyanov
The right answer to this is probably not to use getAll in such cases.
If you want to load data in batches then you should either split the keys
yourself or use Query APIs, like ScanQuery or SqlQuery.
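If you do want to split the keys yourself, the batching is plain Java (the batch size of 100 below is an arbitrary assumption - size it to your value sizes); each resulting batch can then be passed to a separate getAll call:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class BatchedGet {
    // Split a large key collection into fixed-size batches, preserving order.
    static <K> List<Set<K>> splitIntoBatches(Collection<K> keys, int batchSize) {
        List<Set<K>> batches = new ArrayList<>();
        Set<K> current = new LinkedHashSet<>();
        for (K key : keys) {
            current.add(key);
            if (current.size() == batchSize) {
                batches.add(current);
                current = new LinkedHashSet<>();
            }
        }
        if (!current.isEmpty())
            batches.add(current);
        return batches;
    }

    public static void main(String[] args) {
        List<Integer> keys = new ArrayList<>();
        for (int i = 0; i < 250; i++)
            keys.add(i);
        List<Set<Integer>> batches = splitIntoBatches(keys, 100);
        System.out.println(batches.size());        // 3
        System.out.println(batches.get(2).size()); // 50
    }
}
```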

Stan

On Mon, Oct 28, 2019 at 10:36 PM Abhishek Gupta (BLOOMBERG/ 919 3RD A) <
agupta...@bloomberg.net> wrote:

> Ack. I've create a JIRA to track this.
>
> https://issues.apache.org/jira/browse/IGNITE-12334
>
>
>
> From: user@ignite.apache.org At: 10/28/19 09:08:10
> To: user@ignite.apache.org
> Subject: Re: Throttling getAll
>
> You might want to open a ticket. Of course, Ignite is open source and I’m
> sure the community would welcome a pull request.
>
> Regards,
> Stephen
>
> On 28 Oct 2019, at 12:14, Abhishek Gupta (BLOOMBERG/ 919 3RD A) <
> agupta...@bloomberg.net> wrote:
>
> 
> Thanks Ilya for your response.
>
> Even if my value objects were not large, nothing stops clients from doing
> a getAll with say 100,000 keys. Having some kind of throttling would still
> be useful.
>
> -Abhishek
>
>
>
> - Original Message -
> From: Ilya Kasnacheev 
> To: ABHISHEK GUPTA
> CC: user@ignite.apache.org
> At: 28-Oct-2019 07:20:24
>
> Hello!
>
> Having very large objects is not a priority use case of Apache Ignite.
> Thus, it is your concern to make sure you don't run out of heap when doing
> operations on Ignite caches.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> сб, 26 окт. 2019 г. в 18:51, Abhishek Gupta (BLOOMBERG/ 919 3RD A) <
> agupta...@bloomberg.net>:
>
>> Hello,
>> I've benchmarked my grid for users (clients) to do getAll with upto 100
>> keys at a time. My value objects tend to be quite large and my worry is if
>> there are errant clients might at times do a getAll with a larger number of
>> keys - say 1000. If that happens I worry about GC issues/humongous
>> objects/OOM on the grid. Is there a way to configure the grid to auto-split
>> these requests into smaller batches (smaller number of keys per batch) or
>> rejecting them?
>>
>>
>> Thanks,
>> Abhishek
>>
>>
>


Re: Drop index do not release memory used ?

2019-11-01 Thread Stanislav Lukyanov
Hi,

What version do you use?
There was an issue with recycling pages between data and indexes which has
been fixed in 2.7 https://issues.apache.org/jira/browse/IGNITE-4958.
In AI 2.7 and later this should be working fine.

Stan

On Sat, Oct 26, 2019 at 5:22 PM yann Blazart  wrote:

> Yes the data pages in reuse list are reused for new objects.
>
> If you drop table, you get back all pages used in reuse list.
>
> But it seems that it's not true for drop indexes
>
> On Sat 26 Oct 2019 at 16:18, Andrey Dolmatov  wrote:
>
>> Ignite believe that Ignite will reuse that memory. But it is a question,
>> does Ignite reuse index data blocks for data blocks.
>>
>> On Fri, Oct 25, 2019, 15:28 yann.blaz...@externe.bnpparibas.com <
>> yann.blaz...@externe.bnpparibas.com> wrote:
>>
>>> Hello all.
>>>
>>> If you can remember, I found a way to compute the real size of memory
>>> used in offheap, using the reuselist size.
>>>
>>> As I'm facing some limits on my hardware, I'm trying to optimize my
>>> memory consumption, in pure memory, no persistence on hdd or ssd.
>>>
>>> For that as I have to execute plenty of request on my stored data, I saw
>>> that indexes consumes a lot of memory.
>>>
>>> To improve that, in my algorithm I tried to create tables with only pk,
>>> no indexes at first.
>>>
>>> Then before each request, I tried to create indexes , execute request,
>>> then drop indexes.
>>>
>>> What I see, is that drop index do not release memory...
>>>
>>> Everything is release only when we drop table.
>>>
>>> Is this normal  ?
>>>
>>>
>>> Thnaks and regards.
>>> This message and any attachments (the "message") is
>>> intended solely for the intended addressees and is confidential.
>>> If you receive this message in error,or are not the intended
>>> recipient(s),
>>> please delete it and any copies from your systems and immediately notify
>>> the sender. Any unauthorized view, use that does not comply with its
>>> purpose,
>>> dissemination or disclosure, either whole or partial, is prohibited.
>>> Since the internet
>>> cannot guarantee the integrity of this message which may not be
>>> reliable, BNP PARIBAS
>>> (and its subsidiaries) shall not be liable for the message if modified,
>>> changed or falsified.
>>> Do not print this message unless it is necessary, consider the
>>> environment.
>>>
>>>
>>>
>>>


Re: Question on submitted post

2019-08-26 Thread Stanislav Lukyanov
The message "Failed to deserialize object
[typeName=io.grpc.internal.InternalHandlerRegistry]"
means InternalHandlerRegistry is being sent between the nodes - which it
shouldn't be.
What you need to do is to find where it is being sent. You shouldn't pass
any gRPC object to any Ignite configuration, or, I believe, to any Ignite method.
Also, be careful with compute jobs that use lambdas and anonymous classes.
By their nature they capture surrounding objects too easily, so you'll often
see a compute job accidentally bringing some context over the network. The
best way to avoid it is to use `public static` classes instead of anonymous
classes and lambdas - verbose but bulletproof.
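You can see the difference in plain Java - the synthetic this$0 field below is how an anonymous class drags its enclosing object (and everything that object references, e.g. a gRPC server) into serialization (the field and class names here are illustrative):

```java
import java.io.Serializable;
import java.util.concurrent.Callable;

public class CaptureDemo {
    // Pretend this is heavy, non-serializable context (like a gRPC server).
    private final Object heavyContext = new Object();

    // Anonymous class: implicitly captures the enclosing CaptureDemo instance.
    Callable<String> anonymousJob() {
        return new Callable<String>() {
            @Override public String call() {
                // Touching an outer field forces the capture of the whole outer object.
                return "done, context=" + heavyContext;
            }
        };
    }

    // Static nested class: only holds what you pass to it explicitly.
    static class SafeJob implements Callable<String>, Serializable {
        @Override public String call() { return "done"; }
    }

    // True if the class has a synthetic this$0 reference to an outer instance.
    static boolean capturesOuter(Object job) {
        for (java.lang.reflect.Field f : job.getClass().getDeclaredFields())
            if (f.getName().startsWith("this$"))
                return true;
        return false;
    }

    public static void main(String[] args) {
        System.out.println(capturesOuter(new CaptureDemo().anonymousJob())); // true
        System.out.println(capturesOuter(new SafeJob()));                    // false
    }
}
```

When the job is a `public static` class like SafeJob, there is no hidden reference for the marshaller to chase.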

If you still have an issue with this please share more complete code and
config example. A small reproducer on github would be ideal.

Stan

On Mon, Aug 26, 2019 at 4:09 PM Pascoe Scholle 
wrote:

> Hi Stan,
>
> Thanks for your response. I have tried this, but it has not fixed the
> issue.
> The grpc server class was moved into the service where the interface
> methods "init","execute" and "cancel" perform an initialization of the
> serve, as well start and stopping respectively. But this was already
> implemented.
>
> Does this issue not have something to do with class loading? The second
> node that is started doesn't have any information of the custom classes for
> serialization. I have also moved all JARS of these custom classes into the
> libs folder in ignite but that has also not helped.
>
> On Fri, Aug 23, 2019 at 6:56 PM Stanislav Lukyanov 
> wrote:
>
>> Hi,
>>
>> It looks like the issue is that you're ending up sending an instance of
>> your gRPC server inside your service. This approach is generally incorrect.
>> What you should do is
>> - not pass gRPC to the service instance
>> - add an init() method implementation to your service
>> - in your init() start your gRPC server
>>
>> Stan
>>
>> On Thu, Aug 22, 2019 at 10:52 AM Pascoe Scholle 
>> wrote:
>>
>>> Hi there,
>>>
>>> How long does it usually take, for a post to be seen on the forum? Maybe
>>> I made a mistake so I will provide my question here. Excuse me if I am
>>> being impatient:
>>>
>>>
>>> =
>>> Good afternoon everyone,
>>>
>>> I have recently run into an issue and I think the problem lies in the
>>> server node configuration. I will attache the output of the stack trace at
>>> the bottom, however I first wish to explain what the software does and how
>>> we are using ignite.
>>>
>>> I start multiple server nodes with peerClassEnabled set to true, using
>>>  a TcpDiscoveryVmIpFinder and dont set anything other than a port range for
>>> the ipFinder.
>>>
>>> Using the REST protocol a ComputeTaskAdapter task is executed which
>>> starts a service and this in turn starts a grpc server, I have placed some
>>> scala code to show what I mean.
>>>
>>> class StartService extends ComputeTaskAdapter[String, Any]{
>>>   var ignite: Ignite = null;
>>>   @IgniteInstanceResource
>>>   def setIgnite(someIgnite: Ignite): Unit = {
>>> ignite = someIgnite
>>>   }
>>>
>>>  def map(...)={
>>> ...
>>> // port is an integer
>>> val server = new GrpcServer(ignite, port);
>>>
>>> val service = new ServiceImpl(name, server);
>>> /*
>>> within the method execute of the Service interface, server.start() is
>>> called
>>> */
>>>
>>> val serviceconfig = new ServiceConfiguration();
>>>   serviceconfig.setName(name);
>>>   serviceconfig.setTotalCount(1);
>>>   serviceconfig.setMaxPerNodeCount(1);
>>>   ignite.services().deploy(serviceconfig);
>>> ...
>>> }
>>>
>>> }
>>>
>>> this task returns a map with some non important variables.
>>>
>>> The grpc server takes the ignite instance created within the above
>>> mentioned computeTask as a variable, I am not sure if this could be the
>>> cause of the issue.
>>>
>>> Using grpc protocol, we create a ComputeTask which is executed by the
>>> grpc server some more code below:
>>>
>>> class GrpcServer(val ignite:Ignite, val port:Int) extends ..Some Grpc
>>> stuff..{
>>>
>>> def someGrpcProtocol(request: Message):Future[String]={
>>> val newTask = new SomeTask();
>>>
>>> ignite.compute(ignite.cluster()).execute(n

Re: Ignite WAL and WAL archive size estimation

2019-08-26 Thread Stanislav Lukyanov
Hi,

In normal circumstances a checkpoint is triggered on timeout, e.g. every 3
minutes (controlled by checkpointFrequency). So the size of a checkpoint is
the amount of data written/updated in a 3-minute interval.
The best way to estimate it in your system is to enable data storage
metrics (DataStorageMetrics.setMetricsEnabled(true)) and check the metric
getLastCheckpointTotalPagesNumber().
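To make the first estimation formula from the wiki page quoted below concrete, here is a toy calculation (all the load figures are made-up assumptions - substitute your own measurements):

```java
public class WalSizeEstimate {
    // First estimation from the wiki page:
    //   walSize = 2 * averageLoad * triggerInterval * walHistSize
    // (the 2x accounts for physical + logical WAL records)
    static long walSizeBytes(long avgLoadBytesPerSec, long checkpointFreqSec, long walHistSize) {
        return 2 * avgLoadBytesPerSec * checkpointFreqSec * walHistSize;
    }

    public static void main(String[] args) {
        long avgLoadBytesPerSec = 10L * 1024 * 1024; // assumed 10 MB/s of updates
        long checkpointFreqSec  = 180;               // default checkpointFrequency = 3 min
        long walHistSize        = 10;                // checkpoints kept in history
        long walSize = walSizeBytes(avgLoadBytesPerSec, checkpointFreqSec, walHistSize);
        System.out.println(walSize / (1024 * 1024) + " MB"); // 36000 MB
    }
}
```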

Stan

On Mon, Aug 26, 2019 at 4:59 PM Venkata Bhagavatula 
wrote:

> Hi All,
>
> In the link:
> https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-LocalCrashRecovery
>
> Following is mentioned about the Estimation:
> what is est. maximum data volume to be written on 1 checkpoint?  Is it the
> size of 1 wal segment or 1 checkpoint buffer size? Estimating disk space
>
> WAL Work maximum used size: walSegmentSize * walSegments = 640Mb (default)
>
> in case Default WAL mode - this size is used always,
>
> in case other modes best case is 1 segment * walSegmentSize
>
> WAL Work+WAL Archive max size may be estimated by
>
>1. average load or
>2. by maximum size.
>
> 1st way is applicable if checkpoints are triggered mostly by timer
> trigger.
> Wal size = 2*Average load(bytes/sec) * trigger interval (sec) *
> walHistSize (number of checkpoints)
> Where 2 multiplier coming from physical & logical WAL Records.
>
> 2nd way: Checkpoint is triggered by segments max dirty pages percent. Use
> persisted data regions max sizes:
> sum(Max configured DataRegionConfiguration.maxSize) * 75% - est. maximum
> data volume to be written on 1 checkpoint.
> Overall WAL size (before archiving) = 2* est. data volume * walHistSize =
> 1,5 * sum(DataRegionConfiguration.maxSize) * walHistSize
>
> Note applying WAL compressor may significiantly reduce archive size.
>
>
> Thanks n Regards,
>
> Chal
>


Re: HTTP/2 implementations do not robustly handle abnormal traffic and resource exhaustion

2019-08-26 Thread Stanislav Lukyanov
Hi,

AFAICS this is not about the *protocol*, this is about *implementations* of
the protocol. I've followed the links and found this matrix of vulnerable
technologies:
https://vuls.cert.org/confluence/pages/viewpage.action?pageId=56393752
From this matrix, Ignite uses only Node.js in WebConsole, but isn't bound
to any particular version AFAIK. Make sure to install the latest Node.js
for your WebConsole.
Ignite doesn't use any other vulnerable technologies in the list.

Stan

On Sun, Aug 25, 2019 at 7:06 PM Ashfaq Ahamed MH 
wrote:

> Hi ,
>
> There is a vulnerability reported in the usage of HTTP/2 protocol. so we
> would like to know if Ignite uses this protocol. Details of the
> vulnerability  is in the below link.
>
> https://www.kb.cert.org/vuls/id/605641/
>
> Regards
>


Re: Support for latest version of MongoDB in Ignite web console

2019-08-26 Thread Stanislav Lukyanov
Hi,

I believe support for MongoDB 4.x is already implemented in
https://issues.apache.org/jira/browse/IGNITE-10847.
Also, I believe Ignite doesn't require a specific version of MongoDB. Have
you tried to install the latest 3.4.x version?

Thanks,
Stan

On Sun, Aug 25, 2019 at 7:04 PM Ashfaq Ahamed MH 
wrote:

> Hi ,
> We have received the below vulnerability for the mongodb version - 3.4.4.
>
> VAMS: MongoDB Server 3.4.x < 3.4.22, 3.6.x < 3.6.13, 4.0.x <
> 4.0.9,
> 4.1.x < 4.1.9 - Improper Authorisation Vulnerability -
> SERVER-38984 (CVE-2019-2386): SVM-49539
>
> After user deletion in MongoDB Server the improper invalidation of
> authorisation sessions allows an authenticated user's session to persist
> and
> become conflated with new accounts, if those accounts reuse the names of
> deleted ones. [CVE-2019-2386]
>
> Vendor Affected Components:
> MongoDB Server 3.4.x < 3.4.22
> MongoDB Server 3.6.x < 3.6.13
> MongoDB Server 4.0.x < 4.0.9
> MongoDB Server 4.1.x < 4.1.9
>
>
>
> I could see that the mongodb version supported in Ignite 2.7.5 is MongoDB
> (version >=3.2.x <=3.4.15).
> Is there any plans to upgrade the version of the MongoDB to mitigate the
> vulnerability
>
> Regards
>


Re: Cache Miss Using Thin Client

2019-08-23 Thread Stanislav Lukyanov
Hi,

I'm thinking this could be related to differences in the binary marshaller
configuration.
Are you using Java thin client? What version? What is the cache key type?
Are you setting a BinaryConfiguration explicitly on the client or server?
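As a rough self-contained illustration of why this matters: type IDs are derived from type names, so a mapper configured with simple class names and one configured with fully-qualified names produce different IDs for the same class - and a key written under one config is invisible under the other. (The hash below only mimics a lower-case name hash; it is not the exact Ignite implementation, and the class names are made up.)

```java
public class IdMapperDemo {
    // Case-insensitive string hash over a type name (illustrative only).
    static int typeId(String typeName) {
        int h = 0;
        for (int i = 0; i < typeName.length(); i++)
            h = 31 * h + Character.toLowerCase(typeName.charAt(i));
        return h;
    }

    public static void main(String[] args) {
        // Fully-qualified vs simple name => different type IDs => cache misses:
        System.out.println(typeId("com.acme.KeyType") == typeId("KeyType")); // false
        // Same name, different case => same ID:
        System.out.println(typeId("KeyType") == typeId("keytype"));          // true
    }
}
```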

Thanks,
Stan

On Fri, Aug 23, 2019 at 3:38 PM  wrote:

> Hello,
>
> I have one Spring Boot app running as a client node which uses
> SpringCacheManager + @Cacheable annotation on a service call. This is
> demonstrating expected read-through behaviour.
>
> I have a second app where I'm trying to implement the same behaviour using
> the thin-client. This is able to successfully "get" entries put in the
> cache through this application but not those using the application above,
> even if the key appears to be the same.
>
> Both applications are using a key class from the same dependency and are
> obviously populated with the same attributes. I've used the "query" method
> on the cache to retrieve all the cache entries, have verified they're using
> the same server node, the entries are there and so on.
>
> Any ideas why the "get" method from thin-client cannot find entries "put"
> by the client node? Or, any suggestions on appropriate logging to assist
> diagnosis?
>
> Thanks,
>
> Simon.
>
> This e-mail and any attachments are confidential and intended solely for
> the addressee and may also be privileged or exempt from disclosure under
> applicable law. If you are not the addressee, or have received this e-mail
> in error, please notify the sender immediately, delete it from your system
> and do not copy, disclose or otherwise act upon any part of this e-mail or
> its attachments.
>
> Internet communications are not guaranteed to be secure or virus-free. The
> Barclays Group does not accept responsibility for any loss arising from
> unauthorised access to, or interference with, any Internet communications
> by any third party, or from the transmission of any viruses. Replies to
> this e-mail may be monitored by the Barclays Group for operational or
> business reasons.
>
> Any opinion or other information in this e-mail or its attachments that
> does not relate to the business of the Barclays Group is personal to the
> sender and is not given or endorsed by the Barclays Group.
>
> Barclays Execution Services Limited provides support and administrative
> services across Barclays group. Barclays Execution Services Limited is an
> appointed representative of Barclays Bank UK plc, Barclays Bank plc and
> Clydesdale Financial Services Limited. Barclays Bank UK plc and Barclays
> Bank plc are authorised by the Prudential Regulation Authority and
> regulated by the Financial Conduct Authority and the Prudential Regulation
> Authority. Clydesdale Financial Services Limited is authorised and
> regulated by the Financial Conduct Authority.
>


Re: Question on submitted post

2019-08-23 Thread Stanislav Lukyanov
Hi,

It looks like the issue is that you're ending up sending an instance of
your gRPC server inside your service. This approach is generally incorrect.
What you should do is
- not pass gRPC to the service instance
- add an init() method implementation to your service
- in your init() start your gRPC server

Stan

On Thu, Aug 22, 2019 at 10:52 AM Pascoe Scholle 
wrote:

> Hi there,
>
> How long does it usually take, for a post to be seen on the forum? Maybe I
> made a mistake so I will provide my question here. Excuse me if I am being
> impatient:
>
>
> =
> Good afternoon everyone,
>
> I have recently run into an issue and I think the problem lies in the
> server node configuration. I will attache the output of the stack trace at
> the bottom, however I first wish to explain what the software does and how
> we are using ignite.
>
> I start multiple server nodes with peerClassEnabled set to true, using  a
> TcpDiscoveryVmIpFinder and dont set anything other than a port range for
> the ipFinder.
>
> Using the REST protocol a ComputeTaskAdapter task is executed which starts
> a service and this in turn starts a grpc server, I have placed some scala
> code to show what I mean.
>
> class StartService extends ComputeTaskAdapter[String, Any]{
>   var ignite: Ignite = null;
>   @IgniteInstanceResource
>   def setIgnite(someIgnite: Ignite): Unit = {
> ignite = someIgnite
>   }
>
>  def map(...)={
> ...
> // port is an integer
> val server = new GrpcServer(ignite, port);
>
> val service = new ServiceImpl(name, server);
> /*
> within the method execute of the Service interface, server.start() is
> called
> */
>
> val serviceconfig = new ServiceConfiguration();
>   serviceconfig.setName(name);
>   serviceconfig.setTotalCount(1);
>   serviceconfig.setMaxPerNodeCount(1);
>   ignite.services().deploy(serviceconfig);
> ...
> }
>
> }
>
> this task returns a map with some non important variables.
>
> The grpc server takes the ignite instance created within the above
> mentioned computeTask as a variable, I am not sure if this could be the
> cause of the issue.
>
> Using grpc protocol, we create a ComputeTask which is executed by the grpc
> server some more code below:
>
> class GrpcServer(val ignite:Ignite, val port:Int) extends ..Some Grpc
> stuff..{
>
> def someGrpcProtocol(request: Message):Future[String]={
> val newTask = new SomeTask();
>
> ignite.compute(ignite.cluster()).execute(newTask, someinput);
> Future("Request is being processed");
> }
>
> }
>
>
> If a single server node is started, the program runs without problems.
> However, adding more nodes and trying to execute the new tasks on a remote
> node or on a node that has a certain attribute gives me a massive stack
> trace in the face.
> Basically, if I want to execute a task on a node where the service and
> grpc server do not reside, the exception happens.
>
> I have placed all custom classes within a jar that lies in the libs folder
> of the ignite-bin project.
> We are currently on version 2.7
>
> If you require anything else just let me know, ill be on it asap.
>
> Thanks for any help that may come my way.
>
> Cheers!
>
> Here is most of the stack trace:
> class org.apache.ignite.binary.BinaryObjectException: Failed to read field
> [name=server]
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:192)
> at
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:875)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1984)
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:703)
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:188)
> at
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:875)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize(BinaryReaderExImpl.java:1716)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.readField(BinaryReaderExImpl.java:1984)
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor$DefaultFinalClassAccessor.read0(BinaryFieldAccessor.java:703)
> at
> org.apache.ignite.internal.binary.BinaryFieldAccessor.read(BinaryFieldAccessor.java:188)
> at
> org.apache.ignite.internal.binary.BinaryClassDescriptor.read(BinaryClassDescriptor.java:875)
> at
> org.apache.ignite.internal.binary.BinaryReaderExImpl.deserialize0(BinaryReaderExImpl.java:1764)

Re: ZooKeeper Discovery - Handling large number of znodes and their cleanup

2019-08-23 Thread Stanislav Lukyanov
Hi Abhishek,

What's your Ignite version? Anything else to note about the cluster? E.g.
frequent topology changes (clients or servers joining and leaving, caches
starting and stopping)? What was the topology version when this happened?

Regarding the GC. Try adding -XX:+PrintGCApplicationStoppedTime
-XX:+PrintGCApplicationConcurrentTime to your logging options, and share
the GC logs. Sometimes there are long pauses which can be seen in the logs
which are not GC pauses. Check the "Total time for which application
threads were stopped" and "Stopping threads took".

Stan

On Wed, Aug 21, 2019 at 7:17 PM Abhishek Gupta (BLOOMBERG/ 731 LEX) <
agupta...@bloomberg.net> wrote:

> Hello,
> I'm using ZK based discovery for my 6 node grid. Its been working smoothly
> for a while until suddenly my ZK node went OOM. Turns out there were 1000s
> of znodes, many with data about ~1M + there were suddenly a lot of stuff ZK
> requests (tx log was huge).
>
> One symptom on the grid to notes is that when this happened my nodes were
> heavily stalling (this is a separate issue to discuss - they're stalling
> with lots of high JVM pauses but GC logs appear alright) and were also
> getting heavy write from DataStreamers.
>
> I see the joinData znode having many 1000s of persistent children. I'd
> like to undersstand why so many znodes were created under 'jd' and what's
> the best way to prevent this and clean up these child nodes under jd.
>
>
> Thanks,
> Abhishek
>
>
>
>
>


Re: One of Ignite pod keeps crashing and not joining the cluster

2019-08-22 Thread Stanislav Lukyanov
Hi,

Please share
- Ignite version you're running
- Exact steps and events (a node was restarted, a client joined, etc)
- Logs of all three servers

Thanks,
Stan

On Mon, Aug 19, 2019 at 3:27 PM radha jai  wrote:

> Hi ,
>  Ignite being deployed on the kubernetes, there were 3 replicas of ignite
> server, The sever was up and running for some days, and data being injected
> successfully, after that suddenly  I am getting below error on one of the
> server pod, which is getting restating mutiple times:
>Failed to process custom exchange task:
> ClientCacheChangeDummyDiscoveryMessage
>  [reqId=6b5f6c50-a8c9-4b04-a461-49bfd0112eb0, cachesToClose=null,
> startCaches=[BgwService]] java.lang.NullPointerException| at
> org.apache.ignite.internal.processors.cache.CacheAffinitySharedManager.processClientCachesChanges(CacheAffinitySharedManager.java:635)|
> at
> org.apache.ignite.internal.processors.cache.GridCacheProcessor.processCustomExchangeTask(GridCacheProcessor.java:391)|
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.processCustomTask(GridCachePartitionExchangeManager.java:2475)|
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2620)|
> at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2539)|
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)|
> at java.lang.Thread.run(Thread.java:748)"
>
> Below is my ignite-xml file:
> ignite-config.xml:
> 
> 
> http://www.springframework.org/schema/beans;
>  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
>  xmlns:util="http://www.springframework.org/schema/util;
>  xsi:schemaLocation="
>   http://www.springframework.org/schema/beans
>   http://www.springframework.org/schema/beans/spring-beans.xsd
>   http://www.springframework.org/schema/util
>   http://www.springframework.org/schema/util/spring-util.xsd;>
> 
> 
>  
> class="org.apache.ignite.configuration.ConnectorConfiguration">
> value="/opt/ignite/conf/jetty-server.xml" />
>
>  
>  
>  
>class="org.apache.ignite.configuration.DataStorageConfiguration">
>   
>   
>   
>class="org.apache.ignite.configuration.DataRegionConfiguration">
>
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>   
>  
>  
>   
>
> 
> class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
> 
> 
> 
>
>   
>  
> 
> 
> Trans
> Info
> Msg
> 
> 
> 
>  
>   class="org.apache.ignite.configuration.CacheConfiguration">
>
>
>
>
>
>   factory-method="factoryOf">
>
>  
>
>
>  
>
>  
>
>  
>   class="org.apache.ignite.configuration.CacheConfiguration">
>
>
>
>
>
>   factory-method="factoryOf">
>
>  
>
>
>  
>
>  
>
>  
>   class="org.apache.ignite.configuration.CacheConfiguration">
>
>
>
>
>
>   factory-method="factoryOf">
>
>  
>
>
>  
>
>  
>
>  
>   class="org.apache.ignite.configuration.CacheConfiguration">
>
>
>
>
>  
>  
> 
>
> 
> 
>
> Regards
> radha
>


Re: Ignite backup/restore Cache-wise

2019-08-22 Thread Stanislav Lukyanov
GridGain Snapshots allow you to take a backup on a live, working cluster.
If you can afford to stop the cluster activity while the snapshot is being taken,
you can:
- Deactivate the cluster (e.g. control.sh --deactivate)
- Copy the persistence files (you would need work/binary_meta,
work/marshaller, work/db)
- Activate the cluster

To restore the data:
- Deactivate the cluster
- Place the backup files to the original location
- Activate the cluster

You may try to exclude certain caches from work/db, or only copy certain
caches, but be aware that this is not a supported use case, so you may need
to figure out the correct action sequence that works for you.
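The copy-based procedure above can be sketched as follows. This is a minimal, hypothetical sketch: the work-directory layout (binary_meta, marshaller, db) comes from the steps above, but the method names and the idea of driving it from Java rather than a shell script are my own illustration. Run it only while the cluster is deactivated.

```java
import java.io.IOException;
import java.nio.file.*;

public class PersistenceBackup {
    // Directories that make up Ignite native persistence state.
    static final String[] DIRS = {"binary_meta", "marshaller", "db"};

    // Recursively copy one directory tree, preserving the relative layout.
    static void copyTree(Path src, Path dst) throws IOException {
        try (var paths = Files.walk(src)) {
            for (Path p : (Iterable<Path>) paths::iterator) {
                Path target = dst.resolve(src.relativize(p).toString());
                if (Files.isDirectory(p))
                    Files.createDirectories(target);
                else
                    Files.copy(p, target, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

    // Copy the persistence directories from an Ignite work dir to a backup dir.
    public static void backup(Path workDir, Path backupDir) throws IOException {
        for (String dir : DIRS) {
            Path src = workDir.resolve(dir);
            if (Files.exists(src))
                copyTree(src, backupDir.resolve(dir));
        }
    }
}
```

Restore is the same copy in the other direction, again with the cluster deactivated.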

Stan


Re: SQL delete command is slow and can cause OOM

2019-05-08 Thread Stanislav Lukyanov
Hi,

You also have to fetch the values to do a "compare-and-delete". Before deleting
each entry you check whether it has been concurrently modified. If it has, then
it's possible that the entry no longer matches your WHERE clause.
So yes, for now deleting a large number of entries is heap-intensive.

It should improve when
https://issues.apache.org/jira/browse/IGNITE-9182
is fixed. When it is done, you'll be able to set lazy=true for a DELETE and
avoid the OOM.

For now, though, I suggest changing the query so that it returns fewer
entries, or using the key-value API for deletions.
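The key-value alternative can look roughly like the sketch below. To keep it self-contained, a plain Map stands in for the cache, and the batch size of 1000 is an arbitrary assumption; with a real cache the flush would be IgniteCache#removeAll, as noted in the comment.

```java
import java.util.*;

public class BatchedDelete {
    static final int BATCH = 1000; // arbitrary batch size, tune for your heap

    // Delete the given keys in fixed-size batches so that no single
    // operation has to hold all keys (and values) on the heap at once.
    static <K, V> void deleteInBatches(Map<K, V> cache, Collection<K> keys) {
        Set<K> batch = new HashSet<>();
        for (K key : keys) {
            batch.add(key);
            if (batch.size() == BATCH) {
                // With Ignite this would be: igniteCache.removeAll(batch);
                cache.keySet().removeAll(batch);
                batch.clear();
            }
        }
        if (!batch.isEmpty())
            cache.keySet().removeAll(batch); // flush the last partial batch
    }
}
```

The keys themselves can come from a lazy SQL query that selects only the key column, so values never need to be materialized on the client.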

Stan



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite DataStreamer Memory Problems

2019-04-24 Thread Stanislav Lukyanov
Can you share your full configuration (Ignite config and JVM options) and
the server logs of Ignite?

Which version of Ignite do you use?

Can you confirm that on this version and configuration simply disabling
Ignite persistence removes the problem?
If yes, can you try running with walMode=NONE? It will help to rule out at
least some possibilities.

Also, if you can share a reproducer to this problem it should be easy for
us to debug this.

Stan

On Tue, Apr 23, 2019 at 6:42 AM kellan  wrote:

> Any suggestions from where I can go from here? I'd like to find a way to
> isolate this problem before I have to look into another storage/grid
> solutions. A lot of work has gone into integrating Ignite into our
> platform,
> and I'd really hate to start from scratch. I can provide as much
> information
> as needed to help pinpoint this problem/do additional tests on my end.
>
> Are there any projects out there that have successfully run Ignite on
> Kubernetes with Persistence and a high-volume write load?
>
> I've been looking into using third-party persistence but we require SQL
> queries to fetch the bulk of our data and it seems like this isn't really
> possible with Cassandra, et al, unless I can know in advance what data
> needs
> to be loaded into memory. Is that a safe assumption to make?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite DataStreamer Memory Problems

2019-04-21 Thread Stanislav Lukyanov
I've put a full answer on SO -
https://stackoverflow.com/questions/55752357/possible-memory-leak-in-ignite-datastreamer/55786023#55786023
.

In short, so far it doesn't look like a memory leak to me - just a
misconfiguration.
There is a memory pool in the JVM for direct memory buffers which is by default
bounded by the value of `-Xmx`. Most applications use a minuscule amount of it,
but in some it can grow, up to the size of the heap, making your total Java
memory usage not roughly `heap + data region` but `heap * 2 + data region`.

Set walSegmentSize=64mb and -XX:MaxDirectMemorySize=256mb and I think it's
going to be OK.
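The sizing arithmetic above can be made concrete with a back-of-the-envelope check; the heap and data region figures below are illustrative assumptions, not measurements from this thread.

```java
public class MemoryBudget {
    // Worst-case JVM footprint when MaxDirectMemorySize is left at its
    // default (which equals -Xmx): heap + direct (up to heap size) + data region.
    static long worstCaseMb(long heapMb, long dataRegionMb) {
        return heapMb * 2 + dataRegionMb;
    }

    // Footprint once direct memory is capped explicitly via
    // -XX:MaxDirectMemorySize.
    static long cappedMb(long heapMb, long directMb, long dataRegionMb) {
        return heapMb + directMb + dataRegionMb;
    }
}
```

For example, with a 2 GB heap and a 4 GB data region, leaving direct memory uncapped means budgeting roughly 8 GB for the container, while capping it at 256 MB brings that down to roughly 6.4 GB.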

Stan

On Sun, Apr 21, 2019 at 11:51 AM Denis Magda  wrote:

> Hello,
>
> Copying Evgeniy and Stan, our community experts who'd guide you through.
> In the meantime, please try to capture the OOM with this approach:
>
> https://docs.oracle.com/javase/8/docs/technotes/guides/troubleshoot/tooldescr007.html
>
> -
> Denis
>
>
> On Sun, Apr 21, 2019 at 8:49 AM kellan  wrote:
>
>> Update: I've been able to confirm a couple more details:
>>
>> 1. I'm experiencing the same leak with put, putAll as I am with the
>> DataStreamer
>> 2. The problem is resolved when persistence is turned off
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>


RE: GridGain performance

2019-03-27 Thread Stanislav Lukyanov
First, this is a mailing list for Apache Ignite, although the results would be 
more or less equal as GridGain is based on Ignite.

Second, the question is too broad.
You shouldn’t really think about running on 1 core as Ignite is for scaling to 
many cores and machines.
The performance will vary greatly between different data schemes and use cases.

Stan

From: Илья Ширенин
Sent: 7 февраля 2019 г. 9:27
To: user@ignite.apache.org
Subject: GridGain performance

Hi! Could you please tell me about GridGain performance, such as
1) the number of operations per second (on 1 core)
2) the RAM capacity needed for a single GridGain operation
Thanks!



Re: Ignite Client getting OOM, GridAffinityProcessor grows in size

2019-03-27 Thread Stanislav Lukyanov
The memory leak looks very much like
https://issues.apache.org/jira/browse/IGNITE-7918.
Can you check on 2.7?

Stan



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Event listeners on servers only

2019-03-26 Thread Stanislav Lukyanov
The options I see
1. Register a local listener on each node; you can call localListen() from a 
broadcast() job or when the node starts. 
2. Deploy a cluster-singleton service that calls remoteListen() in its 
initialize().

I guess the first one will perform better.

Stan 

From: maros.urbanec
Sent: 26 марта 2019 г. 15:59
To: user@ignite.apache.org
Subject: Event listeners on servers only

Hi all,
  we're faced with the following requirement - when a cache entry expires
and is about to get removed from the cache, listen to the event, alter an
attribute on the entry and write it to some other cache.

It can be implemented as a client-side event listener, but that ceases to
function as soon as the client leaves the topology.

UUID listenerId = ignite.events().remoteListen(
    (e, uuid) -> {
        System.out.println("Expired event - executed on the client");
        return true;
    },
    e -> {
        System.out.println("Expired event - executed on one of the servers");
        return true;
    },
    EventType.EVT_CACHE_OBJECT_EXPIRED
);

Calling ignite.events(ignite.cluster().forServers()).remoteListen instead
makes no difference as far as I can tell.

Is there any way to run an event listener on the server without a
corresponding client? Is there any way for the listener to outlive its
client?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Ignoring model fields

2019-02-20 Thread Stanislav Lukyanov
The regular Java `transient` keyword should suffice. Add it to all fields that
shouldn't be serialized.
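The effect of `transient` is easy to check in isolation with plain Java serialization; the class and field names below are purely illustrative.

```java
import java.io.*;

public class TransientDemo {
    static class Model implements Serializable {
        String name;            // serialized
        transient String cached; // skipped during serialization

        Model(String name, String cached) {
            this.name = name;
            this.cached = cached;
        }
    }

    // Serialize and deserialize an object through an in-memory buffer.
    static Model roundTrip(Model m) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(m);
        oos.flush();
        ObjectInputStream in =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return (Model) in.readObject();
    }
}
```

After a round trip, `name` survives while the transient field comes back as null, which is the behavior you want for fields excluded from caching.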


Alan Ward wrote
> Is there a way (preferably annotation-based) to exclude certain fields in
> user-defined model classes from Ignite (cache, query, etc.), similar to
> how
> Jackson has a @JsonIgnore annotation to exclude a field from
> serialization/deserialization.
> 
> Thanks,
> Alan





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: External multiple config files for Docker installation

2019-02-20 Thread Stanislav Lukyanov
Ignite's Docker image doesn't have a parameter for that.
You can create your own Dockerfile extending Ignite's image and define and
handle a new parameter there.

Stan


kyro wrote
> Hi akurbanov,
> 
> Yes, I have the lines as you mentioned in my config. I was asking as to
> how
> do I pass the two config files when using the docker run command? In the
> docker installation documentation it is written that we can give the uri
> for
> one config file, but how do I import the jettyconfig.xml file?
> 
> Thanks,
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Index inline size

2019-02-20 Thread Stanislav Lukyanov
Depends on the use case. Sometimes you want to save the memory as much as
possible, and then you would use a lower inline size.

However, in most cases you actually need a higher value because that will
greatly improve the performance.
Starting with 2.7, there are warnings with a recommended size (calculated based on
your actual data) and a way to set it.
A couple of ways to change inline size are missing in the warnings though -
check https://issues.apache.org/jira/browse/IGNITE-11355.

Stan


colinc wrote
> The documentation that you referenced states that the
> IGNITE_MAX_INDEX_PAYLOAD_SIZE system property defines the default max -
> and
> that this defaults to 10.
> 
> Since it's only a maximum value, is there any reason why it can't be a bit
> higher - say 100? Or is it strongly encouraged to keep indexed fields
> shorter than this?
> 
> Regards,
> Colin.
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: is peerClassLoadingEnabled in client mode makes difference

2019-01-23 Thread Stanislav Lukyanov
Yes, that’s actually the intended usage.

Stan

From: shivakumar
Sent: 23 января 2019 г. 20:47
To: user@ignite.apache.org
Subject: is peerClassLoadingEnabled in client mode makes difference

When peerClassLoadingEnabled is enabled on a client node which joins a
cluster of servers, and a class/jar is placed on the client's classpath, is it
possible for the servers to use those classes/jars?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Baselined node rejoining crashes other baseline nodes - DuplicateKeyError

2019-01-23 Thread Stanislav Lukyanov
Hi,

I’ve reproduced this and have a fix – I guess it’ll be available with 2.8.
Meanwhile I can only suggest not to create indexes without an explicit name.

Stan

From: mahesh76private
Sent: 16 января 2019 г. 12:39
To: user@ignite.apache.org
Subject: RE: Baselined node rejoining crashes other baseline nodes - 
DuplicateKeyError

Stan, thanks for the visibility. 

-1-
Over the last year, we moved through various versions of Ignite: 2.4, 2.5, 2.7.
I always keep the work folder intact.
-2-
Over a period of development, we might have tried to create an index a second
time (or many times) on the same column on which an index already existed. Now,
could that cause confusion at the Ignite level, especially in a multi-node
scenario? Was something out of sync? Was a check missing?
-3-
Over a period of time, we dropped the table several times and recreated the
table and its indexes several times. Was something stale left in the work
folder? We always used 2 or more nodes.
-4-
Over a period of time, we saw issues with index creation as well. My
colleague posted another strange behaviour with index creation. See the
issue here,
http://apache-ignite-users.70518.x6.nabble.com/Failing-to-create-index-on-Ignite-table-column-td26252.html#a26258
The summary is: if we don't give index names, Ignite throws exceptions.
 

Something seems to be wrong with Ignite's index handling in a multi-node
environment.

Regarding your point 2 (the JIRA), absolutely, it makes sense not to crash the node
on this exception. We have about 100GB of data (tables) on Ignite and the only
workaround right now seems to be:

Boot node 1. Keep its work folder. 
Boot node 2 after removing its work folder

Though this scenario works, it gives the cluster a downtime of about 1-2 hours,
and this is not acceptable for our customers.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: sql fields visibility without sql indexing

2019-01-18 Thread Stanislav Lukyanov
There are implicit indexes needed to make SQL work – one on primary key, one on 
affinity key.
They are also stored in index.bin, so it’s expected to see it growing.

Stan

From: Yuriy
Sent: 18 января 2019 г. 13:37
To: user@ignite.apache.org
Subject: sql fields visibility without sql indexing

Hi.

I want to see SQL fields without SQL indexing.
I use the @QuerySqlField() annotation and invoke the setIndexedTypes cache
configuration method.

But I am watching the index.bin file grow.

I do not use SQL indexes; I only need the SQL fields to be visible.
Why is this file growing?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: server always shutdown

2019-01-17 Thread Stanislav Lukyanov
Hi,

You have to provide the details – configs, usage scenarios, error messages, 
logs.
Without that no one will be able to tell what’s wrong with your cluster.

Stan

From: hulitao198758
Sent: 17 января 2019 г. 8:42
To: user@ignite.apache.org
Subject: server always shutdown

Starting multiple server nodes with ignite.sh is very unstable; the process 
often terminates. I am currently running a three-node cluster across three 
servers, and often one server node hangs. I do not know what the situation is.

Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: Failing to create index on Ignite table column

2019-01-16 Thread Stanislav Lukyanov
Don’t think there is any Ignite API for that yet.

Stan

From: Shravya Nethula
Sent: 16 января 2019 г. 12:42
To: user@ignite.apache.org
Subject: RE: Failing to create index on Ignite table column

Hi Stan,

Thank you! This information is helpful.

Do you know any Ignite API through which I can get the indexes of a
particular table?

Regards,
Shravya Nethula.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Baselined node rejoining crashes other baseline nodes - Duplicate KeyError

2019-01-16 Thread Stanislav Lukyanov
Hi,

Left a comment in the issue.
In short, the problem is that you got a duplicate index on one of your nodes 
somehow, even though it shouldn't happen. We need to figure out how.

Can you tell what you do with the cluster when it is running?
I’m particularly interested in any of the actions related to cache/table/index 
creation and deletion.

Stan

From: mahesh76private
Sent: 16 января 2019 г. 5:54
To: user@ignite.apache.org
Subject: Baselined node rejoining crashes other baseline nodes - Duplicate 
KeyError

I have two nodes on which we have 3 tables which are partitioned.  Index are
also built on these tables. 

For 24 hours caches work fine.  The tables are definitely distributed across
both the nodes

Node 2 reboots due to some issue - goes out of the baseline - comes back and
joins the baseline.  Other baseline nodes crash and in the logs we see
duplicate Key error

[10:38:35,437][INFO]tcp-disco-srvr-#2[TcpDiscoverySpi] TCP discovery
accepted incoming connection [rmtAddr=/192.168.1.7, rmtPort=45102]
[10:38:35,437][INFO]tcp-disco-srvr-#2[TcpDiscoverySpi] TCP discovery
spawning a new thread for connection [rmtAddr=/192.168.1.7, rmtPort=45102]
[10:38:35,437][INFO]tcp-disco-sock-reader-#12[TcpDiscoverySpi] Started
serving remote node connection [rmtAddr=/192.168.1.7:45102, rmtPort=45102]
[10:38:35,451][INFO]tcp-disco-sock-reader-#12[TcpDiscoverySpi] Finished
serving remote node connection [rmtAddr=/192.168.1.7:45102, rmtPort=45102
[10:38:35,457][SEVERE]tcp-disco-msg-worker-#3[TcpDiscoverySpi]
TcpDiscoverSpi's message worker thread failed abnormally. Stopping the node
in order to prevent cluster wide instability.
*java.lang.IllegalStateException: Duplicate key
at org.apache.ignite.cache.QueryEntity.checkIndexes(QueryEntity.java:223)
at org.apache.ignite.cache.QueryEntity.makePatch(QueryEntity.java:174)*


Logs and confurations are attached here 
https://issues.apache.org/jira/browse/IGNITE-8728
please offer any suggestions 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Does set streaming off command flush data?

2019-01-16 Thread Stanislav Lukyanov
Yes. It closes the underlying streamers, which in turn flushes the data.

Stan

From: yangjiajun
Sent: 16 января 2019 г. 11:41
To: user@ignite.apache.org
Subject: Does set streaming off command flush data?

Hello.

Ignite's docs say we should close the JDBC/ODBC connection so that all
data is flushed to the cluster while using streaming mode. Does the set
streaming off command do the same, so that we can reuse the connection?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: failure due to IGNITE_BPLUS_TREE_LOCK_RETRIES

2019-01-16 Thread Stanislav Lukyanov
Yes.
You can also use an environment variable instead of the system property:
IGNITE_BPLUS_TREE_LOCK_RETRIES=10 ignite.sh …

Stan

From: mahesh76private
Sent: 16 января 2019 г. 11:29
To: user@ignite.apache.org
Subject: RE: failure due to IGNITE_BPLUS_TREE_LOCK_RETRIES

how do I set it ?

should I boot ignite node (ignite.sh) with the following switch ?

java ...   -DIGNITE_BPLUS_TREE_LOCK_RETRIES=10

regards
Mahesh





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: ignite continuous query with XML

2019-01-16 Thread Stanislav Lukyanov
No, you have to use actual code. 

Stan

From: shivakumar
Sent: 16 января 2019 г. 11:08
To: user@ignite.apache.org
Subject: ignite continuous query with XML

is there a way to configure continuous query using spring XML? is there any 
example or reference for configuring continuous query with XML? 

Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: failure due to IGNITE_BPLUS_TREE_LOCK_RETRIES

2019-01-16 Thread Stanislav Lukyanov
It means that Ignite couldn’t find the place it needed in a B+ tree in 1000 
iterations.
It could mean either that there is a high contention on the tree (it changes a 
lot, and 
one thread is unlucky and couldn’t keep up with the speed), or that the tree is 
corrupted.

Try setting a larger value for the IGNITE_BPLUS_TREE_LOCK_RETRIES property (e.g. 
10).
If you still see the exception then it's a corruption. If you don't, it's 
a contention.

Stan

From: mahesh76private
Sent: 16 января 2019 г. 7:48
To: user@ignite.apache.org
Subject: failure due to IGNITE_BPLUS_TREE_LOCK_RETRIES

On 2.7, we are regularly seeing the below message and then the nodes stop. 


[16:45:04,759][SEVERE][disco-event-worker-#63][] JVM will be halted
immediately due to the failure: [failureCtx=FailureContext
[type=CRITICAL_ERROR, err=class o.a.i.IgniteCheckedException: Maximum number
of retries 1000 reached for Put operation (the tree may be corrupted).
Increase IGNITE_BPLUS_TREE_LOCK_RETRIES system property if you regularly see
this message (current value is 1000).]]


Can you please through some light on what this error is?





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Is there a way to allow overwrite when set streaming on?

2019-01-16 Thread Stanislav Lukyanov
Use `SET STREAMING ON ALLOW_OVERWRITE ON`.
It’s a shame it’s not documented. Filed 
https://issues.apache.org/jira/browse/IGNITE-10952 for that.
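Since the clause is undocumented, it may help to see the statement spelled out. The sketch below just builds the SQL string (the `SET STREAMING` syntax is from this thread; the helper itself is illustrative); in a real application you would pass the result to Statement.execute() on an Ignite thin JDBC connection.

```java
public class StreamingSql {
    // Build the SET STREAMING statement, optionally allowing overwrites
    // of existing keys (ALLOW_OVERWRITE ON).
    static String setStreaming(boolean on, boolean allowOverwrite) {
        if (!on)
            return "SET STREAMING OFF";
        return "SET STREAMING ON" + (allowOverwrite ? " ALLOW_OVERWRITE ON" : "");
    }
}
```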

Stan

From: yangjiajun
Sent: 16 января 2019 г. 9:19
To: user@ignite.apache.org
Subject: Is there a way to allow overwrite when set streaming on?

Hello.

We can set streaming on while inserting data into Ignite using SQL. I want to
enable data overwrite in this mode. Is it possible?
https://apacheignite-sql.readme.io/docs/set
https://apacheignite.readme.io/docs/data-streamers#section-allow-overwrite



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: SQLFieldsQuery timeout is not working

2019-01-15 Thread Stanislav Lukyanov
Hi,

What’s your Ignite version?
Can you share Ignite and cache configs and the query SQL?

Thanks,
Stan 

From: garima.j
Sent: 15 января 2019 г. 14:18
To: user@ignite.apache.org
Subject: SQLFieldsQuery timeout is not working

Hello, 

I'm using the below code to execute a SQL fields query : 

SqlFieldsQuery qry = new
SqlFieldsQuery(jfsIgniteSQLFilter.getSQLQuery()).setTimeout(timeout,TimeUnit.MILLISECONDS);
 
List listFromCache = cache.query(qry).getAll();

The query doesn't time out at all. My timeout is 5 milliseconds and the data
is retrieved in 168 ms without timing out.

Please let me know what am I missing.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Extra console output from logs.

2019-01-15 Thread Stanislav Lukyanov
Hi,

First, try to disable IGNITE_QUIET.

If still seeing duplicated messages after that, make sure you don’t have 
multiple slf4j adapters in the classpath.

Let me know if that helps.

Thanks,
Stan 

From: javadevmtl
Sent: 9 января 2019 г. 21:43
To: user@ignite.apache.org
Subject: Re: Extra console output from logs.

More precisely this is what we see...

This line is good:
{"appTimestamp":"2019-01-09T18:29:34.298+00:00","threadName":"vert.x-worker-thread-0","level":"INFO","loggerName":"org.apache.ignite.internal.IgniteKernal%xx-dev","message":"\n\n>>>
   
__    \n>>>   /  _/ ___/ |/ /  _/_  __/ __/  \n>>> 
_/ // (7 7// /  / / / _/\n>>> /___/\\___/_/|_/___/ /_/ /___/   \n>>>
\n>>> ver. 2.7.0#20181130-sha1:256ae401\n>>> 2018 Copyright(C) Apache
Software Foundation\n>>> \n>>> Ignite documentation:
http://ignite.apache.org\n"}

The below shouldn't print:
[13:29:34]__   
[13:29:34]   /  _/ ___/ |/ /  _/_  __/ __/ 
[13:29:34]  _/ // (7 7// /  / / / _/   
[13:29:34] /___/\___/_/|_/___/ /_/ /___/  
[13:29:34] 
[13:29:34] ver. 2.7.0#20181130-sha1:256ae401
[13:29:34] 2018 Copyright(C) Apache Software Foundation
[13:29:34] 
[13:29:34] Ignite documentation: http://ignite.apache.org
[13:29:34] 
[13:29:34] Quiet mode.
[13:29:34]   ^-- Logging by 'Slf4jLogger
[impl=Logger[o.a.i.i.IgniteKernal%xx-dev], quiet=true]'
[13:29:34]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[13:29:34] 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Thread got interrupted while trying to acquire table lock & Gotinterrupted while waiting for future to complete

2019-01-15 Thread Stanislav Lukyanov
Looks like the thread just ended.

Do you see a similar issue? Do you have a reproducer?

Stan 

From: bintisepaha
Sent: 14 января 2019 г. 23:07
To: user@ignite.apache.org
Subject: Re: Thread got interrupted while trying to acquire table lock & 
Gotinterrupted while waiting for future to complete

Was there any resolution to this?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: SQL performance very slow compared to KV in pure-in-memory cache?

2019-01-14 Thread Stanislav Lukyanov
Do you use the same config for both runs?
If you use ignite-sql.xml for SQL and ignite.xml for key-value, then key-value 
indeed should be much faster, because SQL features are not used there and Ignite
doesn't have to maintain all the SQL indexes, etc.
doesn’t have to maintain all the SQL indexes, etc.

Stan

From: summasumma
Sent: 7 января 2019 г. 12:24
To: user@ignite.apache.org
Subject: Re: SQL performance very slow compared to KV in pure-in-memory cache?

Thanks Naveen.

I am definitely not looking for better performance in SQL than KV store. But
just wanted to know if the results im getting in scale of 1:4 performance
between SQL and KV store is as expected or not, say a particular operations
give 400k ops in KV but same size records with same operations gives only
100k ops in SQL.

Furthermore, is there any performance tuning which we can do to improve the
SQL performance slightly?

Note: i have tried sqlOnheapCacheEnabled in cache configuration, but still
performance didn't improve.

Thanks,
...summa



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Failed to read data from remote connection

2019-01-14 Thread Stanislav Lukyanov
Not really. The amount of direct memory needed doesn’t grow with the node count 
nor the amount of data you store.

Stan

From: wangsan
Sent: 12 января 2019 г. 9:30
To: user@ignite.apache.org
Subject: RE: Failed to read data from remote connection

Yeah, I set a larger MaxDirectMemorySize.
But I am afraid of what happens when the number of nodes grows larger. Will
direct memory grow with the node count?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Ignite 2.7 Persistence

2019-01-11 Thread Stanislav Lukyanov
Running the query the first time isn’t really like loading all data into memory 
and then doing the query. I would assume that
it is much less efficient – all kinds of locking and contention may be 
involved. Also, the reads are done via random disk access, while when reading 
from
CSV you’re reading sequentially.

I assume that there are ways to make queries on a cold storage more efficient.
One would probably need to spend a lot of time on that collecting and analyzing 
JFRs and other profiling data.
On the other hand, having an ability to do a hot restart will probably solve 
the issue for most users.

Stan

From: gweiske
Sent: 11 января 2019 г. 2:03
To: user@ignite.apache.org
Subject: RE: Ignite 2.7 Persistence

Thanks for the replies. Yes, subsequent queries are faster, but the time to
run the query the first time (i.e. load the data into memory) after a
restart can be measured in hours and is significantly longer than loading
the data from a csv file. That does not seem right. 




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: ignite-cassandra-store module has incorrect dependencies

2019-01-11 Thread Stanislav Lukyanov
FTR there is a JIRA issue for that fixed in 2.8 
https://issues.apache.org/jira/browse/IGNITE-10856.

Stan

From: Serg
Sent: 2 января 2019 г. 15:51
To: user@ignite.apache.org
Subject: Re: ignite-cassandra-store module has incorrect dependencies


Unfortunately I could not just change my pom  because we use ignite in
docker and this is a part of modules inside docker.
Of course As solution I can  build my own docker but this is not very
useful. 

Also you can check that tests of cassandra modules fails :(
https://github.com/apache/ignite/tree/master/modules/cassandra even if I
update dependencies tests can not run embedded cassandra in my environment.

I updated  cassandra driver to  3.6.0 and add netty-resolver directly in
docker and this solved this problem.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: TcpCommunicationSpi failed to establish connect to node node will bedropped from cluster.

2019-01-11 Thread Stanislav Lukyanov
Well, it means that node 6 didn’t answer to the node 2. 
Was the node 6 dropped from the cluster after that?

It could happen because of a network issue, GC on the node 6, perhaps some 
OS/VM issue.
Can’t say anything more specific with that little info.

Stan

From: sehii
Sent: 4 января 2019 г. 7:45
To: user@ignite.apache.org
Subject: TcpCommunicationSpi failed to establish connect to node node will 
bedropped from cluster.

I have a question.

Our company's system uses Ignite version 1.9.

We made a topology with 8 nodes (01-08).

In our business logic, every minute all nodes have to check each other's
status by Ignite ping.

When 02_node sent an Ignite ping to 06_node, an exception occurred.
The exception is related to a timeout.

This is the exception log:

==
2018-12-29 05:59:09.163[WARN ] [MonitoringManager : TCheckStatus]
[TcpCommunicationSpi]
TcpCommunicationSpi failed to establish connect to node node will be dropped
from cluster 
  [rmtNode=TcpDiscoveryNode[id=a5a3dd09-ba72-42be-b3cc-644ca282b4b0
  , addrs=[x.x.x.x2]
  ,sockAddrs=[/x.x.x.x2:17301]
  ,discPort=17301
  ,order=5
  ,intOrder=5
  ,lastExchangeTime=1544663946652
  ,loc=false,ver=1.9.0# 20170302-sha1:a8169da
  , isClient=false]err=class o.a.i.IgniteCheckedException: Failed to connect
to node ( is node still alive?)

Make sure that each ComputeTask and
cache Transaction has a timeout set in order to prevent parties from waiting
forever in case of network issue [
nodeId=a5a3dd09-ba72-42be-b3cc-644ca282b4b0
, addrs=[/172.31.13.52:18301]]
,connectErrs=[class o.a.i.IgniteCheckedException: Failed to connect to
address


Can you help me?

I want to know what I have to do and what I have to check.






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Do elements with EternalExpiryPolicy in cache ever get evicted?

2019-01-11 Thread Stanislav Lukyanov
Yes, that’s how EternalExpiryPolicy works. You can achieve the same by just not 
using an expiry policy at all – Ignite doesn’t evict entries by default.

Stan

From: rick_tem
Sent: 7 января 2019 г. 12:38
To: user@ignite.apache.org
Subject: Re: Do elements with EternalExpiryPolicy in cache ever get evicted?

Hi,

Does anyone know the answer to this?

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: How to debug network issues in cluster

2019-01-11 Thread Stanislav Lukyanov
+1 to all points.

Generally, the message "Local node SEGMENTED" means that the cluster 
decided that the node is dead and kicked it out.
The next time the node tried to send a message to the cluster, it received an 
answer “you’re segmented” meaning “we’ve kicked you out, sorry”.
It usually happens when the node is unavailable for some time – either due to 
GC, network issues, OS/supervisor not giving the node CPU time, etc.
The primary remedy for this issue is indeed increasing failureDetectionTimeout.

Stan

From: Loredana Radulescu Ivanoff
Sent: 7 января 2019 г. 20:29
To: user@ignite.apache.org
Subject: Re: How to debug network issues in cluster

As an Ignite user, here are my two cents:

- if you were never able to get the node to join the cluster, check that there 
are no firewalls/rules blocking the Ignite ports (telnet might be a quick way 
to do that)
- check that the IPs printed by TcpDiscoverySpi are the correct ones; if you 
have virtual network adapters enabled then the wrong IP might be chosen, so the 
IP discovery will fail. This can happen if you use VirtualBox or Docker, for 
instance.
- for intermittent issues, you can try increasing the default failure detection 
timeout, which is 10s, I think. Somewhere in the Ignite doc it's recommended to 
use 30s if the JVM is on AWS.
- how did you configure IP discovery? In my case, I've always used static IP 
discovery with shared enabled - TcpDiscoveryVmIpFinder 

On Sun, Jan 6, 2019 at 6:04 AM Prasad Bhalerao  
wrote:
Hi,

I am consistently getting "Node is out of topology" message in logs on node-1 
and in other node, node-2 getting message "Timed out waiting for message 
delivery receipt (most probably, the reason is in long GC pauses on remote 
node; consider tuning GC and increasing '"

I have checked the network bandwidth using iperf and it is 470 Mbit per sec. I 
have also checked the gc logs and max pause time is 140 ms.

If it is really happening because of network issues, it there any way to debug 
it?

If it is happening because of gc, I would have seen it in gc logs.

Can someone please help me out with this? 

Log messages on node-1:
2019-01-06 13:48:19,036 125016 [tcp-disco-srvr-#3%springDataNode%] INFO  
o.a.i.s.d.tcp.TcpDiscoverySpi - TCP discovery accepted incoming connection 
[rmtAddr=/10.114.113.65, rmtPort=35651]
2019-01-06 13:48:19,037 125017 [tcp-disco-srvr-#3%springDataNode%] INFO  
o.a.i.s.d.tcp.TcpDiscoverySpi - TCP discovery spawning a new thread for 
connection [rmtAddr=/10.114.113.65, rmtPort=35651]
2019-01-06 13:48:19,037 125017 [tcp-disco-sock-reader-#5%springDataNode%] INFO  
o.a.i.s.d.tcp.TcpDiscoverySpi - Started serving remote node connection 
[rmtAddr=/10.114.113.65:35651, rmtPort=35651]
2019-01-06 13:48:19,040 125020 [tcp-disco-msg-worker-#2%springDataNode%] WARN  
o.a.i.s.d.tcp.TcpDiscoverySpi - Node is out of topology (probably, due to 
short-time network problems).
2019-01-06 13:48:19,041 125021 [disco-event-worker-#62%springDataNode%] WARN  
o.a.i.i.m.d.GridDiscoveryManager - Local node SEGMENTED: TcpDiscoveryNode 
[id=a5827f51-096a-4c98-af4f-564d2d3e769d, addrs=[10.114.113.53, 127.0.0.1], 
sockAddrs=[/127.0.0.1:47500, 
qagmscore02.p13.eng.in03.qualys.com/10.114.113.53:47500], discPort=47500, 
order=2, intOrder=2, lastExchangeTime=1546782499034, loc=true, 
ver=2.7.0#20181130-sha1:256ae401, isClient=false]
2019-01-06 13:48:19,041 125021 [tcp-disco-sock-reader-#5%springDataNode%] INFO  
o.a.i.s.d.tcp.TcpDiscoverySpi - Finished serving remote node connection 
[rmtAddr=/10.114.113.65:35651, rmtPort=35651
2019-01-06 13:48:19,866 125846 [tcp-comm-worker-#1%springDataNode%] INFO  
o.a.i.s.d.tcp.TcpDiscoverySpi - Pinging node: 
cd9803ac-b810-447e-818e-ab51dada59d8



RE: Failing to create index on Ignite table column

2019-01-11 Thread Stanislav Lukyanov
I know one way to do that – connect via SQLLine and execute !indexes command.
Check out this doc https://apacheignite-sql.readme.io/docs/sqlline

Stan

From: Shravya Nethula
Sent: 7 января 2019 г. 13:49
To: user@ignite.apache.org
Subject: Re: Failing to create index on Ignite table column

Hi,

Is there any way to check the list of column names on which index has been
created for a particular table?

Regards,
Shravya Nethula.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Amazon S3 Based Discovery NOT USING BasicAWSCredentials

2019-01-11 Thread Stanislav Lukyanov
awsCredentialsProvider is just a property of the TcpDiscoveryS3IpFinder.
If the class is loaded then the property should also be there.
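For illustration, the wiring could look like the sketch below; the bucket name is a placeholder, and the AWS SDK v1 class and its constructor argument are assumptions to adapt to your setup:

```xml
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder">
    <property name="bucketName" value="my-discovery-bucket"/>
    <property name="awsCredentialsProvider">
        <!-- Assumed provider: reads credentials from the EC2 instance metadata service. -->
        <bean class="com.amazonaws.auth.InstanceProfileCredentialsProvider">
            <constructor-arg value="false"/>
        </bean>
    </property>
</bean>
```

With an instance profile attached to the EC2 instances, no keys need to appear in the XML.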

I’ll try this as well and let you know if that works for me.

Stan

From: Max Barrios
Sent: 11 января 2019 г. 4:47
To: user@ignite.apache.org
Subject: Re: Amazon S3 Based Discovery NOT USING BasicAWSCredentials

I’m still seeing this error even when passing a value via ‘ref’. 

Looks like there’s some ignite lib not getting loaded that can resolve the 
awsCredentials property, when running in a spark cluster.

Is there any comprehensive guidance for running a spark app using ignite in a 
spark cluster? 
Sent from my iPhone

On Jan 10, 2019, at 08:25, Stanislav Lukyanov  wrote:
Hi,
 
Were you able to solve this?
 
It seems that your config is actually fine… The feature was added by 
https://issues.apache.org/jira/browse/IGNITE-4530.
 
Does it work if you replace `ref` with just a value?
Like 

    

 
Stan
 
From: Max Barrios
Sent: 12 декабря 2018 г. 23:51
To: user@ignite.apache.org
Subject: Amazon S3 Based Discovery NOT USING BasicAWSCredentials
 
I am running Apache Ignite 2.6.0 in AWS and am using S3 Based Discovery,
 
However, I DO NOT want to embed AWS Access or Secret Keys in my ignite.xml
 
I have AWS EC2 Instance Metadata Service for my instances so that the creds can 
be loaded from there. 
 
However, there's no guidance or documentation on how to do this. Is this even 
supported?
 
For example, I want to do this:

  ...
  
    
  
    
  
  
    
  
    
  

 



 
But I get this exception when I try the above:
 
Error setting property values; nested exception is 
org.springframework.beans.NotWritablePropertyException: Invalid property 
'awsCredentialsProvider' of bean class 
[org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder]: Bean 
property 'awsCredentialsProvider' is not writable or has an invalid setter 
method. Does the parameter type of the setter match the return type of the 
getter?
 
If using an AWS Credentials Provider *is* supported, where are the bean 
properties documented, so I can see what I may be doing wrong? Are there 
any working examples for anything other than BasicAWSCredentials?
 
Please help. 
 
Max
 



RE: Amazon S3 Based Discovery NOT USING BasicAWSCredentials

2019-01-10 Thread Stanislav Lukyanov
Hi,

Were you able to solve this?

It seems that your config is actually fine… The feature was added by 
https://issues.apache.org/jira/browse/IGNITE-4530.

Does it work if you replace `ref` with just a value?
Like 




Stan

From: Max Barrios
Sent: 12 декабря 2018 г. 23:51
To: user@ignite.apache.org
Subject: Amazon S3 Based Discovery NOT USING BasicAWSCredentials

I am running Apache Ignite 2.6.0 in AWS and am using S3 Based Discovery,

However, I DO NOT want to embed AWS Access or Secret Keys in my ignite.xml

I have AWS EC2 Instance Metadata Service for my instances so that the creds can 
be loaded from there. 

However, there's no guidance or documentation on how to do this. Is this even 
supported?

For example, I want to do this:

  ...
  

  

  
  

  

  






But I get this exception when I try the above:

Error setting property values; nested exception is 
org.springframework.beans.NotWritablePropertyException: Invalid property 
'awsCredentialsProvider' of bean class 
[org.apache.ignite.spi.discovery.tcp.ipfinder.s3.TcpDiscoveryS3IpFinder]: Bean 
property 'awsCredentialsProvider' is not writable or has an invalid setter 
method. Does the parameter type of the setter match the return type of the 
getter?

If using an AWS Credentials Provider *is* supported, where are the bean 
properties documented, so I can see what I may be doing wrong? Are there 
any working examples for anything other than BasicAWSCredentials?

Please help. 

Max



RE: Question about add new nodes to ignite cluster.

2019-01-10 Thread Stanislav Lukyanov
Here “cache start” is a rather internal wording.
It means “cache adapter machinery will be initialized”.

In case of ASYNC rebalancing the cache will first appear on the node as
existing but storing no data until it is rebalanced.

In practice, ASYNC rebalancing means that the node will start (Ignition.start() 
will return)
immediately, not waiting for the rebalance.
SYNC rebalancing means that the node will start only after all data was 
processed.

For example, say you have the code
Ignite ignite = Ignition.start(cfg);
System.out.println(Ignite.cache(“foo”).get(“k”));
where cache “foo” is a part of the configuration ‘cfg’.
Here, if “foo” has ASYNC rebalancing the value will be printed immediately.
If “foo” has SYNC rebalancing the value will be printed only after the 
rebalancing has completed.
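The mode is set per cache; a minimal configuration sketch (the cache name is a placeholder):

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="foo"/>
    <!-- SYNC: the node finishes starting only after this cache is rebalanced. -->
    <property name="rebalanceMode" value="SYNC"/>
</bean>
```

Replace SYNC with ASYNC to let the node start immediately and rebalance in the background.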

Stan

From: Justin Ji
Sent: 22 декабря 2018 г. 13:24
To: user@ignite.apache.org
Subject: RE: Question about add new nodes to ignite cluster.

Thank for your replies!

I agree with "the node doesn’t serve any requests."

But the documents write that:

Asynchronous rebalancing mode. Distributed caches will start immediately and
will load all necessary data from other available grid nodes in the
background.

under Rebalance Modes
https://apacheignite.readme.io/docs/rebalancing

what does "start immediately" mean? and what are the differences between
SYNC and ASYNC?

Looking forward to your reply~

Justin






Re: Do we require to set MaxDirectMemorySize JVM parameter?

2019-01-10 Thread Stanislav Lukyanov
> In my case, I have configured swap storage 
> (https://apacheignite.readme.io/docs/swap-space) but *not* Ignite durable 
> memory. If DataRegion maxSize is say 100GB and my physical RAM is 50GB
> then 
> the swap file will be 100GB but Ignite will also use some portion (<50GB)
> of 
> the available physical RAM for off-heap cache data storage. 

I assume by Durable Memory you mean Native Persistence. They're
(confusingly) different - Durable Memory is just the name of the Ignite's
memory architecture, not necessarily with Persistence enabled.

> My question is about how to limit the size of this portion while still 
> allowing the DataRegion to specify a large swap file for use as overflow
> of 
> less regularly accessed data. 

I don't think it's possible. Just use Native Persistence instead - you'll
get the memory distribution that you want
(dataRegionConfiguration.maxSize=8gb will do the trick), and actual
persistence as a bonus.

Stan





RE: Pain points of Ignite user community

2019-01-10 Thread Stanislav Lukyanov
Hi Rohan,

Sorry, the publishing took some time.
In case you’re still interested, here’s the article: 
https://www.gridgain.com/resources/blog/checklist-assembling-your-first-apacher-ignitetm-cluster

Thanks,
Stan

From: Rohan Honwade
Sent: 29 ноября 2018 г. 8:15
To: user@ignite.apache.org
Subject: Re: Pain points of Ignite user community

Thank you Stan.

Denis, I don’t intend to speak for my employer. The content will be my personal 
opinion.

Regards,
Rohan


On Nov 28, 2018, at 8:05 PM, Stanislav Lukyanov  wrote:

Hi,
 
I expect a write-up on some of the Ignite pitfalls to be out soon – ping me 
next week.
 
Stan
 
From: Rohan Honwade
Sent: 29 ноября 2018 г. 0:42
To: user@ignite.apache.org
Subject: Pain points of Ignite user community
 
Hello,
 
I am currently creating some helpful blog articles for Ignite users. Can 
someone who is active on this mailing list or the StackOverflow Ignite section  
please let me know what are the major pain points that users face when using 
Ignite? 
 
Regards,
RH




RE: There is no property called StartSize in CacheConfiguration

2019-01-10 Thread Stanislav Lukyanov
The .Net page seems to be outdated. The startSize property isn’t there anymore.
Check out the main one - 
https://apacheignite-net.readme.io/docs/performance-tips.

Stan

From: Peter Sham
Sent: 9 декабря 2018 г. 8:22
To: user@ignite.apache.org
Subject: There is no property called StartSize in CacheConfiguration

I am reading the performance tips for Ignite.NET 
(https://apacheignite-net.readme.io/docs/performance-tips#section-tune-cache-start-size)
 and under "Tune Cache Start Size" there should be a property called 
"StartSize" in CacheConfiguration. But there is no such property. What is 
the configuration property for setting the initial cache size? I cannot find 
it in the API documentation. Can anyone help?

Sent from the Apache Ignite Users mailing list archive at Nabble.com.



RE: Ignite 2.7 Persistence

2019-01-10 Thread Stanislav Lukyanov
Hi,

That’s right, Ignite nodes restart “cold”, meaning that they become operational 
without the data in RAM.
This allows them to restart as quickly as possible, but the price is that the first 
operations have to load data from disk, so performance will be much lower at 
first.

Here is a ticket to allow turning on a “hot restart” mode - 
https://issues.apache.org/jira/browse/IGNITE-10152.
There is also an improvement that allows manually loading the data of a specific 
partition in an efficient way - 
https://issues.apache.org/jira/browse/IGNITE-8873. If you iterate over all 
partitions after the node starts, it may shorten the warmup period.

Stan 

From: Glenn Wiebe
Sent: 8 января 2019 г. 18:02
To: user@ignite.apache.org
Subject: Re: Ignite 2.7 Persistence

I am new to Ignite, but as I understand it, after cluster restart, data is 
re-hydrated into memory as the nodes receive requests for their partitions' 
entries. So, a first query would be as slow as a distributed disk-based query. 
Subsequent queries should have some (depending on memory available) information 
in memory and thus faster. 

So, my question, is this the first query execution since startup?
Given that you have sufficient memory to hold this particular cache, I would 
expect subsequent query executions to take advantage of memory resident query 
processing.

Additionally, I took a quick look at whether Ignite caches store in-memory 
aggregates (like counts) that could be returned without reading the actual data, 
as here, but could not find anything.

Good luck!

On Tue, Jan 8, 2019 at 7:55 AM gweiske  wrote:
I am using Ignite 2.7 with persistence enabled on a single VM with 128 GB RAM
in Azure and separate external HDD drives each for wal, walarchive and
storage. I loaded 20 GB of data/50,000,000 rows, then shut down Ignite and
restarted the hosting VM, started and activated Ignite and ran a simple
query
that requires sorting through all the data (SELECT DISTINCT <column> FROM 
<table>;). The query has been running for hours now. Looking at the memory, instead
of the expected ~42 GB it is currently at 5.7GB (*slowly* increasing). Any
ideas why it might be that slow? 
The same scenario with SSD drives (this time 1 drive for wal and walarchive,
a second one for storage) finishes in about 5500 seconds (still slow).






RE: How to define a cache template?

2018-12-31 Thread Stanislav Lukyanov
What do you mean by “global”? The settings of a cache template are not 
automatically applied to every cache.
Cache template is just a way to reuse the same settings in multiple CREATE 
TABLE SQL commands.

There is no such thing as “global cache settings” in Ignite.
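For example (all names here are placeholders): once a template called "myTemplate" is registered, e.g. via Ignite.addCacheConfiguration() or an XML cacheConfiguration whose name ends with '*', it can be referenced from DDL:

```sql
CREATE TABLE person (id LONG PRIMARY KEY, name VARCHAR)
WITH "template=myTemplate";
```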

Stan

From: yangjiajun
Sent: 29 декабря 2018 г. 4:56
To: user@ignite.apache.org
Subject: How to define a cache template?

Hello!

I want to make some cache settings global. It means I need to define a cache
template and then use it, right? But I did not find any docs related to this.






RE: Ouch! Argument is invalid: Cache name must not be null or empty.

2018-12-31 Thread Stanislav Lukyanov
As I’ve said in a thread nearby, there is no such thing as “global cache 
settings”.
Usually you’ll need to repeat the configuration paratemers for all caches.

You can avoid code/config duplication in multiple ways though.
If you explain your use case in detail, I can suggest a way to do that.
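One common way, sketched below under the assumption that you configure caches in Spring XML (names and values are illustrative): declare an abstract parent bean with the shared settings, and give each concrete cache only its own name.

```xml
<!-- Shared settings live in one abstract parent bean. -->
<bean id="cache-base" abstract="true"
      class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="rebalanceBatchSize" value="524288"/> <!-- 512 KB -->
    <property name="rebalanceThrottle" value="100"/>     <!-- ms -->
</bean>

<!-- Each cache inherits the settings and adds its own name. -->
<bean parent="cache-base">
    <property name="name" value="myCache"/>
</bean>
```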

Stan

From: yangjiajun
Sent: 28 декабря 2018 г. 11:11
To: user@ignite.apache.org
Subject: Re: Ouch! Argument is invalid: Cache name must not be null or empty.

Hello. Thanks for your reply.

I want to make such settings global. Do these settings only affect specific
caches?


ezhuravlev wrote
> As you see, the example in config has 

> , which means that it's
> just a part of the configuration. Exceptions message says "Cache name must
> not be null" and that really the problem that you faced, just define cache
> name property to resolve this issue.
> 
> Evgenii
> 
> 
> чт, 27 дек. 2018 г. в 22:28, yangjiajun <

> 1371549332@

>>:
> 
>> Hello.
>>
>> I use ignite 2.6 as a database and try to tune data rebalancing according
>> to
>> following doc:
>>
>> https://apacheignite.readme.io/docs/rebalancing#section-rebalance-message-throttling
>> But I get an exception when I set those settings.
>>
>> class org.apache.ignite.IgniteException: Ouch! Argument is invalid: Cache
>> name must not be null or empty.
>> at
>>
>> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:990)
>> at org.apache.ignite.Ignition.start(Ignition.java:355)
>> at
>>
>> org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:301)
>> Caused by: class org.apache.ignite.IgniteCheckedException: Ouch! Argument
>> is
>> invalid: Cache name must not be null or empty.
>> at
>> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1133)
>> at
>>
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
>> at
>>
>> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
>> at
>> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
>> at
>>
>> org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1069)
>> at
>> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:955)
>> at
>> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:854)
>> at
>> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:724)
>> at
>> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:693)
>> at org.apache.ignite.Ignition.start(Ignition.java:352)
>> ... 1 more
>> Caused by: java.lang.IllegalArgumentException: Ouch! Argument is invalid:
>> Cache name must not be null or empty.
>> at
>>
>> org.apache.ignite.internal.util.GridArgumentCheck.ensure(GridArgumentCheck.java:109)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheUtils.validateCacheName(GridCacheUtils.java:1590)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheProcessor.addCacheOnJoin(GridCacheProcessor.java:738)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheProcessor.addCacheOnJoinFromConfig(GridCacheProcessor.java:808)
>> at
>>
>> org.apache.ignite.internal.processors.cache.GridCacheProcessor.start(GridCacheProcessor.java:707)
>> at
>>
>> org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1739)
>> at
>> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:980)
>> ... 10 more
>> Failed to start grid: Ouch! Argument is invalid: Cache name must not be
>> null
>> or empty.
>>
>>
>> Here is my config file:
>> example-default2.xml
>> <
>> http://apache-ignite-users.70518.x6.nabble.com/file/t2059/example-default2.xml>
>>
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>








RE: CAP Theorem (CP? or AP?)

2018-12-27 Thread Stanislav Lukyanov
There were two bugs actually, but the problem is basically the same, just in 
different cases.
SQL + Partition Loss Policies issue (fixed in 2.7) - 
https://issues.apache.org/jira/browse/IGNITE-8927 (the issue says “hang” but 
the visible behavior actually varies)
SQL + Partition Loss Policies + Native Persistence issue (fixed in 2.8) - 
https://issues.apache.org/jira/browse/IGNITE-9841

If you don’t use native persistence then various SELECTs should work as you 
expect on 2.7.
If you do need persistence then you could try working with master (e.g. take a 
nightly build – but don’t use it in any real environments).

Stan

From: joseheitor
Sent: 24 декабря 2018 г. 18:40
To: user@ignite.apache.org
Subject: Re: CAP Theorem (CP? or AP?)

Guys, thank you both for your informative and helpful responses.

I have explicitly configured the cache-template with the additional
property:
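(Presumably something like the line below; the exact policy value is an assumption based on the behaviour described next.)

```xml
<property name="partitionLossPolicy" value="READ_WRITE_SAFE"/>
```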



And have observed the following behaviour:

1. [OK] Attempting to get a specific record(s) which resides in a lost
partition does indeed return an exception, as expected.

2. [?] Doing a SELECT COUNT(*) however, still succeeds without error, but
obviously reports the wrong number of total records (understandably). But
shouldn't any operation against a cache with lost partitions result in an
Exception? How will my application know that the result is valid and can be
trusted to be accurate?

Another question please - Stanislav, what is the issue with Ignite
persistence, that was fixed in 2.7 for in-memory, but will only be fixed for
Ignite native persistence in version 2.8...?

Thanks,
Jose






RE: First write to new cache on new cluster.

2018-12-21 Thread Stanislav Lukyanov
Hi,

Are you sure it is only the first write?
I’m not aware of any lazy initialization there, so I wouldn’t expect the first 
write to be slow.

Which version do you use?
And what is the performance difference of the first transaction compared to the 
second?

I have two ideas why it may work slowly on the first write
1) WAL roll-over – every time you start to write to a new WAL segment it will 
initialize it first; perhaps it is done synchronously for the first segment.
If it is the reason then the next roll-over will also be slow and cause 
timeout. Try to check if there are periodic slow transactions.
2) Usual JVM warmup – JVM may need to load some classes, JIT simple methods, 
etc.
I guess -Xcomp would help with JIT here - it forces the JVM to compile methods 
before the first execution.
Although it seems unlikely that the performance difference would be so big that 
it causes timeouts.

Thanks,
Stan

From: Andrey Davydov
Sent: 12 декабря 2018 г. 19:36
To: user@ignite.apache.org
Subject: First write to new cache on new cluster.

Hello all,

The first write transaction on an empty cluster with persistence enabled takes 
a few seconds. After that, the cluster works fast. It seems this is because of WAL 
file creation and so on. But I can’t set up a realistic transaction timeout in the 
configuration, because it causes an exception on the first operation.

I can do a dummy transaction to force initialization, but that doesn't seem like 
best practice.

Is there a convenient way to force persistence initialization?

Andrey.




RE: adding into baseline topology

2018-12-20 Thread Stanislav Lukyanov
Hi,

Can you show the logs?

Thanks,
Stan

From: Som Som
Sent: 20 декабря 2018 г. 12:08
To: user@ignite.apache.org
Subject: adding into baseline topology

I am trying to add a node into the baseline topology: 

using (var ignite = Ignition.StartFromApplicationConfiguration())
{
    var baselineTopology = ignite.GetCluster().GetBaselineTopology();
    var node = ignite.GetCluster().GetNode(Guid.Parse("344BEAA8-4E59-4673-8397-858AB919B895"));
    baselineTopology.Add(node);
    ignite.GetCluster().SetBaselineTopology(baselineTopology);
    ignite.GetCluster().SetActive(true);
}

But it does not work because node 344be... is still not in the baseline topology. What 
am I doing wrong?



RE: I encountered a problem when restarting ignite

2018-12-19 Thread Stanislav Lukyanov
Hi,

Sending a SIGQUIT signal forces the JVM to print a thread dump to its stdout:
kill -3 <pid>
In a container, the dump will appear in the container's stdout, which you can 
read with docker logs.

Stan

From: Justin Ji
Sent: 13 декабря 2018 г. 5:20
To: user@ignite.apache.org
Subject: Re: I encountered a problem when restarting ignite

Akurbanov - 

Thank for your reply!
I have tried to dump the thread stacks, but I don't know how to do it from a
docker container, since the container only contains a simplified JRE without
the jstack tool, and I have googled a lot of information and found no
suitable method.






RE: Effective way to pre-load data around 10 TB

2018-12-19 Thread Stanislav Lukyanov
The problem might be the HDD not performing fast enough and suffering from 
random reads (IgniteCache::preloadPartition at least tries to read sequentially).

Also, do you have enough RAM to store all data? If not, you shouldn’t preload 
all the data, just the amount that fits into RAM.

Anyway, I think that your best chance is to implement the same thing 
https://issues.apache.org/jira/browse/IGNITE-8873 does.
E.g. you can try to backport the commit on top of 2.6.

Stan

From: Naveen
Sent: 5 декабря 2018 г. 7:59
To: user@ignite.apache.org
Subject: RE: Effective way to pre-load data around 10 TB

Thanks Stan, this may take a little longer to implement, and we are in a hurry
to build this data-preloading functionality. 

Can someone advise how to improve this pre-load process?

This is how we are preloading. 

1. Send an async request for all the partitions with the code below; the loop
is repeated for all the caches we have:

for (int i = 0; i < affinity.partitions(); i++) {
    List<String> cacheList = Arrays.asList(cacheName);
    affinityRunAsync = compute.affinityRunAsync(cacheList, i,
        new DataPreloadTask(cacheList, i));
}

2. Inside DataPreloadTask, which runs on the Ignite node, I just execute a scan
query for the given partition and iterate through the cursor, doing nothing else:

IgniteCache<Object, Object> igniteCache = localIgnite.cache(cacheName);
try (QueryCursor<Cache.Entry<Object, Object>> cursor = igniteCache.query(
        new ScanQuery<>().setPartition(partitionNo))) {
    for (Cache.Entry<Object, Object> entry : cursor) {
        // Iterating is enough to pull the entries from disk into memory.
    }
}

However, this seems to be quite slow: it takes more than 3 hours to read one
cache which has 400 M records. We have 30 such caches to load, so we are not
finding this efficient. 

Can we improve this? We do have very powerful machines with 128 CPUs, 2 TB
RAM and HDDs, and our CPU utilization is not high while we are preloading the
data. 
Will changing the thread pool size have any impact on this read?

Thanks
Naveen






RE: Question about add new nodes to ignite cluster.

2018-12-19 Thread Stanislav Lukyanov
Well, in short - it does, don’t worry about it :)

Unfortunately I’m not aware of a proper design document explaining the process 
in detail.
But simply put, Ignite will wait for the new node to obtain all of the data it 
needs to store.
While that’s happening, the node doesn’t serve any requests.
When all data is transferred, Ignite will route the new requests to the new 
node, and start 
removing the transferred data from the old nodes.

Stan

From: Justin Ji
Sent: 3 декабря 2018 г. 5:26
To: user@ignite.apache.org
Subject: Re: Question about add new nodes to ignite cluster.

Another question:

How do the client APIs get or put data to the rebalancing cluster (ASYNC mode)
when a new node is being added: from the old nodes or the new node?






RE: Failed to read data from remote connection

2018-12-19 Thread Stanislav Lukyanov
“OOME: Direct buffer memory” means that MaxDirectMemorySize is too small.
Set a larger MaxDirectMemorySize value.
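For example (the 1g value, and the use of the JVM_OPTS variable that ignite.sh picks up, are assumptions to adapt to your setup):

```shell
# Append a direct-memory limit to the options the Ignite node is started with.
JVM_OPTS="${JVM_OPTS:-} -XX:MaxDirectMemorySize=1g"
echo "JVM_OPTS now contains:$JVM_OPTS"
```

With ignite.sh, exporting JVM_OPTS before launching is enough; for an embedded node, pass -XX:MaxDirectMemorySize directly on the java command line.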

Stan

From: wangsan
Sent: 18 декабря 2018 г. 5:08
To: user@ignite.apache.org
Subject: Re: Failed to read data from remote connection

Now the cluster has 100+ nodes. When the 'Start check connection process'
happens,
some nodes throw an OOM for direct buffer memory (java.nio).
When connections are checked, many NIO sockets are created; is that when the
OOM happens?

How can I fix the OOM other than setting a larger -Xmx?

Thanks.






RE: Did anyone write custom Affinity function?

2018-12-19 Thread Stanislav Lukyanov
You could write a custom affinity function, and some people do, but as far as I 
can see you don’t need it.
You just chose a poor affinity key.

You need to have MANY affinity keys, many more than there are partitions, and 
have MANY partitions, many more than nodes.
That will make sure that the default affinity function distributes data properly.
But more importantly, it will make sure that your system scales well.
If the number of groups is equal to the number of nodes, then you can’t just 
increase the number of nodes to scale – you need 
to change your data model as well. To scale properly, your data model needs to 
work with a varying number of nodes.

FYI, Ignite used to have a different affinity function that always distributed 
partitions evenly.
It had some issues and was eventually replaced and removed, although people do 
try to bring it back from time to time.
See 
http://apache-ignite-developers.2346864.n4.nabble.com/Resurrect-FairAffinityFunction-td19987.html

Thanks,
Stan

From: ashishb008
Sent: 19 декабря 2018 г. 9:09
To: user@ignite.apache.org
Subject: Re: Did anyone write custom Affinity function?

Yeah, we were planning to increase the number of group IDs.
Did anybody write a custom affinity function? If one has already been written,
that would be helpful to us. 






Re: BinaryObjectImpl.deserializeValue with specific ClassLoader

2018-12-04 Thread Stanislav Lukyanov
Hi,

Unfortunately, it's not going to be in the near future.
It's definitely not 2.7 (which was frozen a long time ago), and almost
definitely not 2.8 (because it would probably take too much time to do, and
2.8 is supposed to figure out other stuff related to Java 9+ support).
You can always work around this by providing your own classloading logic
right in the compute task, using a sophisticated class loader in
IgniteConfiguration.classLoader, etc. Otherwise, please stay tuned and
eventually this feature will be added to the roadmap.

Stan





RE: When will Apache Ignite support Java 11?

2018-12-03 Thread Stanislav Lukyanov
The support for Java 9 *should* mean support for Java 11, the compatibility gap 
between the two is not big.

Moreover, I would (and am going to) push for almost completely skipping the 
testing on Java 9 – it is 
end-of-life already, so providing support for it is kind of pointless. Java 
11 is what should be supported by Ignite 2.8.

That said, I honestly don’t see everyone jumping off the Java 8 train any time 
soon.
The gap between 8 and 9+ (although not too big in reality) still makes people 
stay on 8,
and Oracle’s competitors are ready to offer alternative support.
So, I’d say that Java 8 is still going to be the main target for at least 
Ignite 2.8.

But this is all just a speculation for now as no plans were set in stone yet.
Stay tuned at d...@ignite.apache.org.

Stan

From: Loredana Radulescu Ivanoff
Sent: 26 ноября 2018 г. 21:23
To: user@ignite.apache.org
Subject: Re: When will Apache Ignite support Java 11?

Hello,

The current plan is that Oracle will stop updates for Java 8 commercial users 
after January 2019, and Java 11 is the next LTS release, so is there a plan to 
have Ignite working with Java 11 by then?

Thank you.

On Thu, Nov 22, 2018 at 10:49 PM Petr Ivanov  wrote:
Hi!


Full Java 9+ support is planned for 2.8 at least.
Currently it will work more or less on Java9. Java10/11 work is not guaranteed.

> On 22 Nov 2018, at 21:22, monstereo  wrote:
> 
> Is there any plan to support Java 11 for Apache Ignite?
> 
> If the next version of the Apache Ignite (2.7) will support Java 11, when it
> will be released?
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: How to filter ip interfaces in TcpDiscoveryJdbcIpFinder

2018-12-03 Thread Stanislav Lukyanov
Currently you can only use IGNITE_LOCAL_HOST or TcpDiscoverySpi.localAddress 
for this.
You can automate setting these addresses via an external script, like
MY_IP=`ifconfig | grep `
java  -DIGNITE_LOCAL_HOST=$MY_IP
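A fuller sketch of such a script; the interface list is simulated with example addresses, and the '^10\.' pattern is an assumption about the target network:

```shell
# Simulated interface list; in a real script, get it from: ip -4 -o addr
ADDRS='10.32.97.32
172.17.0.1
127.0.0.1'
# Keep only the first address on the target 10.x network.
MY_IP=$(printf '%s\n' "$ADDRS" | grep -E '^10\.' | head -n 1)
printf 'java -DIGNITE_LOCAL_HOST=%s ...\n' "$MY_IP"
```

Here MY_IP resolves to 10.32.97.32, so the Docker bridge (172.17.0.1) and loopback addresses are ignored.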
or put it into the Ignite config like


Stan

From: Luckyman
Sent: 3 ноября 2018 г. 23:36
To: user@ignite.apache.org
Subject: Re: How to filter ip interfaces in TcpDiscoveryJdbcIpFinder

Thanks for the reply, but it's not usable in our situation.

We have an Exalogic cluster with a lot of different networks.
Each virtual node has its own address. As I understand it, I should set the IP
address on each node.
But I want to set the target network address and network mask once, and have
each node of Ignite use the IP address from this network, ignoring other
networks on the host.






RE: Create index got stuck and freeze whole cluster.

2018-12-03 Thread Stanislav Lukyanov
Hi,

The only thing I can say is that your troubles seem to have started way before.

I see a bunch of “Found long running cache future” messages repeating, and then 
an exchange for
stopping the SQL_PUBLIC_USERLEVEL cache that never completes. 
I would need logs going further (at least minutes) into the past to see what 
went wrong.

Stan

From: Ray
Sent: 30 октября 2018 г. 9:21
To: user@ignite.apache.org
Subject: Create index got stuck and freeze whole cluster.

I'm using a five-node Ignite 2.6 cluster.
When I try to create an index on a table with 10 million records using the SQL
"create index on table(a,b,c,d)", the whole cluster freezes and prints the
following log for 40 minutes.

[2018-10-30T02:48:44,086][WARN
][exchange-worker-#162][GridDhtPartitionsExchangeFuture] Unable to await
partitions release latch within timeout: ServerLatch [permits=4,
pendingAcks=[20aa5929-3f26-4923-87a3-27b4f6d4f744,
ec5be25e-6601-468c-9f0e-7ab7c8caa9e9, 45819b05-a338-4bc4-b104-f0c7567fd49d,
cbb80db7-b342-4b97-ba61-97d57c194a1a], super=CompletableLatch [id=exchange,
topVer=AffinityTopologyVersion [topVer=202, minorTopVer=1]]]

I noticed one of the servers (log in server3.zip) is stuck in the checkpoint
process, and this server acts as coordinator in PME.
In the log I see only 856610 pages need to be flushed to disk, but the
checkpoint takes 32 minutes to finish,
while another node takes 7 minutes to finish writing 919060 pages to disk.
Also, the disk usage on the slow-checkpoint server is not 100%.

Here are the full log files for the 5 servers:
server1.zip
server2.zip
server3.zip
server4.zip
server5.zip







RE: Avoiding Docker Bridge network when using S3 discovery

2018-12-03 Thread Stanislav Lukyanov
Hi,

Have you been able to solve this?
I think specifying TcpDiscoverySpi.localAddress should work.

Stan

From: Dave Harvey
Sent: 17 октября 2018 г. 20:10
To: user@ignite.apache.org
Subject: Avoiding Docker Bridge network when using S3 discovery

When we use S3 discovery and Ignite containers running under ECS using host 
networking, the S3 bucket ends up with 172.17.0.1#47500 along with the other 
server addresses. Then on cluster startup we must wait for the network 
timeout. Is there a way to avoid having this address pushed to the S3 
bucket?
Visor shows:
| Address (0)                 | 10.32.97.32                              |
| Address (1)                 | 172.17.0.1                               |
| Address (2)                 | 127.0.0.1        









RE: Ignite can't activate

2018-12-03 Thread Stanislav Lukyanov
Hi,

Reproduced that and filed https://issues.apache.org/jira/browse/IGNITE-10516.
Thanks for reporting.

Stan

From: yangjiajun
Sent: 29 ноября 2018 г. 10:52
To: user@ignite.apache.org
Subject: Re: Ignite can't activate

Hello.
Here is a reproducer for my case:
1.Start a node with persistence enabled.
2.Create a table without cache group and create an index on it.
3.Create another table and assign a cache group to it.Use same name to
create an index on this table.
4.Stop the node.
5.Restart the node and  do activate.Then you will see the exception. 


yangjiajun wrote
> Hello.
> My ignite can't activate after restart.I only have one node which is ver
> 2.6.The exception cause ignite can't activate is :
> 
> [14:18:05,802][SEVERE][exchange-worker-#110][GridCachePartitionExchangeManager]
> Failed to wait for completion of partition map exchange (preloading will
> not
> start): GridDhtPartitionsExchangeFuture
> [firstDiscoEvt=DiscoveryCustomEvent
> [customMsg=null, affTopVer=AffinityTopologyVersion [topVer=1,
> minorTopVer=1], super=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=5cb09b16-6b97-4e01-a0c5-0d035293ea2e, addrs=[10.0.209.119],
> sockAddrs=[dggphicprb01094/10.0.209.119:9001], discPort=9001, order=1,
> intOrder=1, lastExchangeTime=1543471362559, loc=true,
> ver=2.6.0#20180710-sha1:669feacc, isClient=false], topVer=1,
> nodeId8=5cb09b16, msg=null, type=DISCOVERY_CUSTOM_EVT,
> tstamp=1543471364603]], crd=TcpDiscoveryNode
> [id=5cb09b16-6b97-4e01-a0c5-0d035293ea2e, addrs=[10.0.209.119],
> sockAddrs=[dggphicprb01094/10.0.209.119:9001], discPort=9001, order=1,
> intOrder=1, lastExchangeTime=1543471362559, loc=true,
> ver=2.6.0#20180710-sha1:669feacc, isClient=false],
> exchId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
> [topVer=1,
> minorTopVer=1], discoEvt=DiscoveryCustomEvent [customMsg=null,
> affTopVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
> super=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=5cb09b16-6b97-4e01-a0c5-0d035293ea2e, addrs=[10.0.209.119],
> sockAddrs=[dggphicprb01094/10.0.209.119:9001], discPort=9001, order=1,
> intOrder=1, lastExchangeTime=1543471362559, loc=true,
> ver=2.6.0#20180710-sha1:669feacc, isClient=false], topVer=1,
> nodeId8=5cb09b16, msg=null, type=DISCOVERY_CUSTOM_EVT,
> tstamp=1543471364603]], nodeId=5cb09b16, evt=DISCOVERY_CUSTOM_EVT],
> added=true, initFut=GridFutureAdapter [ignoreInterrupts=false, state=DONE,
> res=false, hash=1392437407], init=false, lastVer=GridCacheVersion
> [topVer=0,
> order=1543471361380, nodeOrder=0], partReleaseFut=PartitionReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
> futures=[ExplicitLockReleaseFuture [topVer=AffinityTopologyVersion
> [topVer=1, minorTopVer=1], futures=[]], AtomicUpdateReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1], futures=[]],
> DataStreamerReleaseFuture [topVer=AffinityTopologyVersion [topVer=1,
> minorTopVer=1], futures=[]], LocalTxReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1], futures=[]],
> AllTxReleaseFuture [topVer=AffinityTopologyVersion [topVer=1,
> minorTopVer=1], futures=[RemoteTxReleaseFuture
> [topVer=AffinityTopologyVersion [topVer=1, minorTopVer=1],
> futures=[]],
> exchActions=null, affChangeMsg=null, initTs=1543471364624,
> centralizedAff=false, forceAffReassignment=true, changeGlobalStateE=class
> o.a.i.IgniteException: Duplicate index name
> [cache=SQL_PUBLIC_TABLE_TEMP_TEST1_R_1_X, schemaName=PUBLIC,
> idxName=TABLE_TEMP_99_R_1_X_ROMA3C_BSP_BATCH_ID,
> existingTable=TABLE_TEMP_99_R_1_X, table=TABLE_TEMP_TEST1_R_1_X],
> done=true, state=DONE, evtLatch=0, remaining=[], super=GridFutureAdapter
> [ignoreInterrupts=false, state=DONE, res=class
> o.a.i.IgniteCheckedException:
> Cluster state change failed., hash=721409668]]
> class org.apache.ignite.IgniteCheckedException: Cluster state change
> failed.
>   at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.finishExchangeOnCoordinator(GridDhtPartitionsExchangeFuture.java:2697)
>   at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.onAllReceived(GridDhtPartitionsExchangeFuture.java:2467)
>   at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1149)
>   at
> org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:712)
>   at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body0(GridCachePartitionExchangeManager.java:2419)
>   at
> org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2299)
>   at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>

RE: Fair queue polling policy?

2018-12-03 Thread Stanislav Lukyanov
I think what you’re talking about isn’t fairness, it’s round-robinness.
You can’t distribute a single piece of work among multiple nodes fairly – one 
gets it and others don’t.
Yes, it could use a different node each time, but I don’t really see a use
case for that.

The queue itself isn’t a load balancer implementation, it doesn’t even need to 
care about fairness or anything.
All it needs to do is implement the queue interface efficiently.

I think I can explain the fact that one node gets the data most of the time.
It’s probably due to that the first value (when the queue is empty) always has 
the same key – and always ends up on the same node.
So the behavior is not that the same client gets the value – it’s that 
the same server always stores the first (second, third) value.
When all the servers try to get and remove the same value, the one closest to 
it (i.e. the one storing it) wins.
We probably could randomize the distribution – but it’s going to cost us in 
terms of code complexity and, maybe, performance. 

Overall, I don’t think it’s a bug in Ignite, and we would need a solid 
justification to change the behavior.

Do you have a use case when a random distribution is important?

Stan

From: Peter
Sent: 30 ноября 2018 г. 17:30
To: user@ignite.apache.org
Subject: Re: Fair queue polling policy?

Hello,

I have found this discussion about the same topic, and indeed the example there 
works and the queues poll fairly.

And when I tweak the sleep after put and take so that the queue stays mostly 
empty all the time, I can reproduce the unfair behaviour!
https://github.com/karussell/igniteexample/blob/master/src/main/java/test/IgniteTest.java

I'm not sure if this is a bug as it should be the responsibility of the client 
to avoid overloading itself. E.g. in my case this happened because I allowed 
too many threads for the tasks on the polling side, leading to too frequent 
polling, which leads to this mostly empty queue.

But IMO it should be clarified in the documentation, as one expects round 
robin behaviour even for empty queues. And e.g. in low latency environments 
and/or environments with many clients this could cause problems. I have created 
an issue about it here: https://issues.apache.org/jira/browse/IGNITE-10496

Kind Regards
Peter

Am 30.11.18 um 01:44 schrieb Peter:
Hello,
My aim is a queue for load balancing that is described in the documentation: 
create an "ideally balanced system where every node only takes the number of 
jobs it can process, and not more."
I'm using jdk8 and ignite 2.6.0. I have successfully set up a two node ignite 
cluster where node1 has same CPU count (8) and same RAM as node2 but slightly 
slower CPU (virtual vs. dedicated). I created one unbounded queue in this 
system (no collection configuration, also no config for cluster except 
TcpDiscoveryVmIpFinder).
I call queue.put on both nodes at an equal rate and have one non-ignite-thread 
per node that does "queue.take()" and what I expect is that both machines go 
equally fast into the 100% CPU usage as both machines poll at their best 
frequency. But what I observe is that the slower node (node1) gets approx. 5 
times more items via queue.take than node2. This leads to 10% CPU usage on 
node2 and 100% CPU usage on node1 and I never had the case where it was equal.
What could be the reason? Is there a fair polling configuration or some 
anti-affine? Or is it required to do queue.take() inside a Runnable submitted 
via ignite.compute().something?
I also played with CollectionConfiguration.setCacheMode but the problem 
persists. Any pointers are appreciated.
Kind Regards
Peter




RE: Ignite not scaling as expected !!! (Thread dump provided)

2018-12-03 Thread Stanislav Lukyanov
How many cores does each node have?
Which thread counts do you increase? The threads doing the get() calls?

A thread dump from the client isn’t that interesting. Better to look at what’s 
going on on the servers.
You need to monitor your resources – CPU, Network IO, Disk IO. You may hit the 
limit on all of them.
You need to monitor your GC – perhaps that’s what’s taking all of the resources.
You need to look into your cache store – read-through is enabled, so you might 
be hitting performance issues 
with your cache store implementation and/or backing database.
You may need to have a performance profile – take JFR from all nodes.

Performance is a very complex topic and Ignite is a very complex system.
One can’t say for sure what’s going on just by looking at the config.
You have to look into everything at once to be able to make sense of it,
and it’s hardly something that can be done on a user mailing list.

Stan
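As a starting point for the JFR suggestion above, here is a hedged shell sketch — the recording name, duration and output path are arbitrary choices, and on Oracle JDK 8 the target JVM must have been started with `-XX:+UnlockCommercialFeatures -XX:+FlightRecorder` for JFR to be available:

```shell
# Start a 60-second Java Flight Recorder profile on a running JVM by PID.
record_jfr() {
    local pid=$1
    jcmd "$pid" JFR.start name=ignite-profile duration=60s \
        filename="/tmp/ignite-$pid.jfr"
}

# Hypothetical usage: record_jfr "$(pgrep -f org.apache.ignite | head -n 1)"
```

Collect a recording from every server node at roughly the same time, so the records can be correlated.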

From: the_palakkaran
Sent: 2 декабря 2018 г. 17:01
To: user@ignite.apache.org
Subject: Re: Ignite not scaling as expected !!! (Thread dump provided)

Hi Amir,

I have two server nodes and 1 client node. I have two caches: one that holds
entire accounts from the DB and another counter cache that is used for counter
operations. The server nodes are deployed on two different machines and
clustered together. A client, which runs on one of the two server machines,
tries to access data from the caches.


Ignite Configuration is provided below:
[The Spring XML configuration was stripped by the mailing list archive; the
only fragments that survive are the discovery addresses ip1:47500..47509 and
ip2:47500..47509.]






RE: Ignite cache.getAll takes a long time

2018-12-03 Thread Stanislav Lukyanov
I guess it could be caused by https://issues.apache.org/jira/browse/IGNITE-5003 
which is mentioned in that thread.
Also, make sure that your cache store code doesn’t cause you troubles – that 
you don’t open a new connection every time, 
don’t have unnecessary blocking, etc.

Stan
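Since the reply below points at write-behind, these are the CacheConfiguration properties that govern write-behind flushing — a hedged Spring XML sketch, where the cache name and all values are placeholders, not recommendations:

```xml
<bean class="org.apache.ignite.configuration.CacheConfiguration">
    <property name="name" value="myCache"/>
    <property name="writeBehindEnabled" value="true"/>
    <!-- More flusher threads and a larger flush size reduce the chance
         that put() calls block while the write-behind buffer drains. -->
    <property name="writeBehindFlushThreadCount" value="4"/>
    <property name="writeBehindFlushSize" value="20480"/>
    <!-- Flush interval in milliseconds, and how many entries go to the
         underlying store in one batch. -->
    <property name="writeBehindFlushFrequency" value="5000"/>
    <property name="writeBehindBatchSize" value="1024"/>
</bean>
```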

From: Justin Ji
Sent: 3 декабря 2018 г. 13:10
To: user@ignite.apache.org
Subject: RE: Ignite cache.getAll takes a long time

Stan - 

Thanks for your reply!

Yes, the getAll and putAll (async) calls are executed in parallel (a number of
operations execute at the same time).

But I think it may be caused by write-behind: when I disabled write-behind the
timeout disappeared, and when I enabled write-behind the timeout appeared.

It is a little similar to
http://apache-ignite-users.70518.x6.nabble.com/write-behind-performance-impacting-main-thread-Write-behind-buffer-is-never-full-td17940.html






RE: Query regarding Ignite unit tests

2018-12-03 Thread Stanislav Lukyanov
Hi,

This is better to be asked on the dev-list – added that to the To, and Bcc’ed 
user-list.

I actually don’t think you can run tests for a specific module – you can run 
either a single test, a single test suite, or all of them.
I would usually either run a single test from IDEA or run all tests via 
TeamCity https://ci.ignite.apache.org.

Igniters, please help Namrata here with the best practices of working with 
tests.

Stan

From: Namrata Bhave
Sent: 3 декабря 2018 г. 14:54
To: user@ignite.apache.org
Subject: Query regarding Ignite unit tests

Hi,

I have recently started working with Apache Ignite. Build on x86 Ubuntu 16.04 
is complete. However, while running tests using `mvn test` command, the 
execution gets stuck while running `ignite-core` module.
Hence I started running tests on individual modules, where similar behavior was 
seen in the ignite-indexing, ignite-clients and ignite-ml modules as well.
I have tried adjusting the Java heap settings, running on a system with 32GB RAM. 
Is there a way to avoid this and get complete test results? Also, is there any 
CI or similar environment where I can see the unit test results?

Would appreciate any help provided.

Thanks and Regards,
Namrata



RE: Effective way to pre-load data around 10 TB

2018-11-29 Thread Stanislav Lukyanov
Hi,

Currently the best option is IgniteCache::preloadPartition method added in
https://issues.apache.org/jira/browse/IGNITE-8873.

There is a JIRA ticket to allow pre-loading data before the node joins the 
cluster:
https://issues.apache.org/jira/browse/IGNITE-10152.

Stan
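A minimal Java sketch of the preloadPartition approach (Ignite 2.7 or later on the classpath is required; the config path and the cache name "myCache" are assumptions):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class PreloadAll {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start("config.xml")) {
            IgniteCache<Object, Object> cache = ignite.cache("myCache");

            // preloadPartition (added by IGNITE-8873) pulls a partition's
            // pages from disk into RAM, so later GETs hit memory.
            int parts = ignite.affinity("myCache").partitions();
            for (int p = 0; p < parts; p++)
                cache.preloadPartition(p);
        }
    }
}
```

An async variant (preloadPartitionAsync) also exists, so several partitions can be warmed up in parallel if the disk can keep up.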

From: Naveen
Sent: 29 ноября 2018 г. 12:39
To: user@ignite.apache.org
Subject: Effective way to pre-load data around 10 TB

HI

We are using Ignite 2.6.

As we already know, after a cluster restart every GET call fetches data from
disk the first time and loads it into RAM; subsequent calls read from RAM only.
First-time GET calls are 10 times slower than reads from RAM, which we want
to avoid by pre-loading the entire data set into RAM after the cluster restart.

So here am exploring efficient ways to read entire data once so that it will
pre-load the data into RAM, so GET calls from client will be much faster. 

Would running a ScanQuery on all the partitions of the cache be a good way to
read the entire data set quickly? Or is there any other, better way of
achieving the same?


Thanks
Naveen






RE: some cases (IF NOT EXISTS) in the CREATE TABLE statement does notwork

2018-11-29 Thread Stanislav Lukyanov
A bug for this is filed: https://issues.apache.org/jira/browse/IGNITE-10414

Stan

From: Qingping
Sent: 27 ноября 2018 г. 4:26
To: user@ignite.apache.org
Subject: some cases (IF NOT EXISTS) in the CREATE TABLE statement does notwork

=== Question ===
When testing Ignite 2.6.0 (2018-07-16), we found that in some cases the IF NOT
EXISTS clause in the CREATE TABLE statement does not work.

[Wrong case] 

After the first run succeeds, the code fails with "Table already
exists: CITY" on the next iteration:

java.sql.SQLException: Table already exists: CITY
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:751)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:210)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.executeUpdate(JdbcThinStatement.java:338)
at ignite.examp.IgniteSQL_Error.main(IgniteSQL_Error.java:47)
---
public class IgniteSQL_Error 
{
public static void main(String[] aArgvs)
{
try
{
Ignition.setClientMode(true);
IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setIgniteInstanceName("IgniteSQL_Error");

TcpDiscoveryVmIpFinder ipFinder = new 
TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Arrays.asList("192.168.0.219", 
"192.168.0.220",
"192.168.0.221"));
TcpDiscoverySpi spi = new TcpDiscoverySpi();
spi.setIpFinder(ipFinder);
cfg.setDiscoverySpi(spi);
try (Ignite ignite = Ignition.start(cfg))
{

Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
final String jdbcConnUrl = 
"jdbc:ignite:thin://127.0.0.1/";
try (Connection conn = 
DriverManager.getConnection(jdbcConnUrl))
{
try (Statement stmt = 
conn.createStatement())
{
stmt.executeUpdate("CREATE 
TABLE IF NOT EXISTS City(id LONG PRIMARY
KEY, name VARCHAR)");

System.out.println("[Fine]CREATE TABLE IF NOT EXISTS City");
}
}
}
}
catch (Exception ex)
{
ex.printStackTrace();
}
}
}

[Correct case] 

The following code always runs successfully (here the JDBC connection URL
points directly at one of the server nodes of the Ignite cluster):
---
public class IgniteSQL_Ok 
{
public static void main(String[] aArgvs)
{
try
{
Class.forName("org.apache.ignite.IgniteJdbcThinDriver");
final String jdbcConnUrl = 
"jdbc:ignite:thin://192.168.0.219/";
try (Connection conn = 
DriverManager.getConnection(jdbcConnUrl))
{
try (Statement stmt = conn.createStatement())
{
stmt.executeUpdate("CREATE TABLE IF NOT 
EXISTS City(id LONG PRIMARY
KEY, name VARCHAR)");
System.out.println("[Fine]CREATE TABLE 
IF NOT EXISTS City");
}
}
}
catch (Exception ex)
{
ex.printStackTrace();
}
}
}







RE: Client stucks and doesn't connect

2018-11-29 Thread Stanislav Lukyanov
Hi,

The interesting (and disappointing) part is the NPE:
java.lang.NullPointerException: null
    at
org.apache.ignite.spi.discovery.tcp.ClientImpl.sendJoinRequest(ClientImpl.java:666)
    at
org.apache.ignite.spi.discovery.tcp.ClientImpl.joinTopology(ClientImpl.java:546)
    at
org.apache.ignite.spi.discovery.tcp.ClientImpl.access$900(ClientImpl.java:128)
    at
org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.tryJoin(ClientImpl.java:1846)
    at
org.apache.ignite.spi.discovery.tcp.ClientImpl$MessageWorker.body(ClientImpl.java:1757)

Which version do you use? 
Is this reproducible? Every time? 

Thanks,
Stan


From: Dmitry Lazurkin
Sent: 20 ноября 2018 г. 15:44
To: user@ignite.apache.org
Subject: Client stucks and doesn't connect

Hello.

Ignite client stops connecting to server after exception:

2018-11-19 16:00:49,257 [tcp-client-disco-reconnector-#5] DEBUG
o.a.i.s.d.tcp.TcpDiscoverySpi - Resolved addresses from IP finder:
[/10.48.14.1:47500]
2018-11-19 16:00:49,257 [tcp-client-disco-reconnector-#5] DEBUG
o.a.i.s.d.tcp.TcpDiscoverySpi - Send join request
[addr=/10.48.14.1:47500, reconnect=true,
locNodeId=cd323c53-d1de-4608-8eec-e373b1f68b71]
2018-11-19 16:00:49,258 [tcp-client-disco-reconnector-#5] ERROR
o.a.i.s.d.tcp.TcpDiscoverySpi - Exception on joining: Connection refused
(Connection refused)
java.net.ConnectException: Connection refused (Connection refused)
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.openSocket(TcpDiscoverySpi.java:1450)
    at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.openSocket(TcpDiscoverySpi.java:1413)
    at
org.apache.ignite.spi.discovery.tcp.ClientImpl.sendJoinRequest(ClientImpl.java:637)
    at
org.apache.ignite.spi.discovery.tcp.ClientImpl.joinTopology(ClientImpl.java:546)
    at
org.apache.ignite.spi.discovery.tcp.ClientImpl.access$900(ClientImpl.java:128)
    at
org.apache.ignite.spi.discovery.tcp.ClientImpl$Reconnector.body(ClientImpl.java:1408)
    at
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
2018-11-19 16:00:49,258 [tcp-client-disco-reconnector-#5] DEBUG
o.a.i.s.d.tcp.TcpDiscoverySpi - Failed to join to address
[addr=/10.48.14.1:47500, recon=true, errs=[java.net.ConnectException:
Connection refused (Connection refused)]]
2018-11-19 16:00:51,344 [tcp-client-disco-reconnector-#5] DEBUG
o.a.i.s.d.tcp.TcpDiscoverySpi - Resolved addresses from IP finder:
[/10.48.14.1:47500]
2018-11-19 16:00:51,344 [tcp-client-disco-reconnector-#5] DEBUG
o.a.i.s.d.tcp.TcpDiscoverySpi - Send join request
[addr=/10.48.14.1:47500, reconnect=true,
locNodeId=cd323c53-d1de-4608-8eec-e373b1f68b71]
2018-11-19 16:00:51,364 [tcp-client-disco-reconnector-#5] DEBUG
o.a.i.s.d.tcp.TcpDiscoverySpi - Message has been sent to address
[msg=TcpDiscoveryClientReconnectMessage
[routerNodeId=9c68d70f-883e-4a21-938b-05f3f6f98d20,
lastMsgId=c42b1bc2761-bbc61e78-97e8-49cd-844e-2dc3e8aacc68,
super=TcpDiscoveryAbstractMessage [sndNodeId=null,
id=48722bc2761-cd323c53-d1de-4608-8eec-e373b1f68b71,
verifierNodeId=null, topVer=0, pendingIdx=0, failedNodes=null,
isClient=true]], addr=/10.48.14.1:47500,
rmtNodeId=9c68d70f-883e-4a21-938b-05f3f6f98d20]
2018-11-19 16:00:51,365 [tcp-client-disco-reconnector-#5] DEBUG
o.a.i.s.d.tcp.TcpDiscoverySpi - Received response to join request
[addr=/10.48.14.1:47500, res=100]
2018-11-19 16:00:51,365 [tcp-client-disco-reconnector-#5] DEBUG
o.a.i.s.d.tcp.TcpDiscoverySpi - Will wait before retry join.
2018-11-19 16:00:53,365 [tcp-client-disco-reconnector-#5] DEBUG
o.a.i.s.d.tcp.TcpDiscoverySpi - Send join request
[addr=/10.48.14.1:47500, reconnect=true,
locNodeId=cd323c53-d1de-4608-8eec-e373b1f68b71]
2018-11-19 16:00:53,368 [tcp-client-disco-reconnector-#5] DEBUG
o.a.i.s.d.tcp.TcpDiscoverySpi - Message has been sent to address
[msg=TcpDiscoveryClientReconnectMessage
[routerNodeId=9c68d70f-883e-4a21-938b-05f3f6f98d20,
lastMsgId=c42b1bc2761-bbc61e78-97e8-49cd-844e-2dc3e8aacc68,
super=TcpDiscoveryAbstractMessage [sndNodeId=null,
id=a8722bc2761-cd323c53-d1de-4608-8eec-e373b1f68b71,
verifierNodeId=null, topVer=0, pendingIdx=0, failedNodes=null,
isClient=true]], addr=/10.48.14.1:47500,
rmtNodeId=9c68d70f-883e-4a21-938b-05f3f6f98d20]
2018-11-19 16:00:53,368 [tcp-client-disco-reconnector-#5] DEBUG
o.a.i.s.d.tcp.TcpDiscoverySpi - Received response to join request
[addr=/10.48.14.1:47500, res=1]
2018-11-19 16:00:53,372 [tcp-client-disco-reconnector-#5] DEBUG
o.a.i.s.d.tcp.TcpDiscoverySpi - Received reconnect response
[success=false, 

RE: Ignite cache.getAll takes a long time

2018-11-29 Thread Stanislav Lukyanov
Hi,

Start by looking at what’s going on in the cluster when you see these long 
reads.
Collecting a JFR record or a GC log would be nice.

Also, do you see that with concurrent reads and writes, i.e. do you have getAll 
and putAll executing in parallel?
Perhaps there is some sort of contention between them.

Finally, try to find correlation between the key sets you get high latency for. 
Perhaps there is something special about these keys.

Stan

From: Justin Ji
Sent: 29 ноября 2018 г. 5:24
To: user@ignite.apache.org
Subject: Re: Ignite cache.getAll takes a long time

I use the default thread pool settings. Will it improve performance if I
increase the system thread pool size?

Another question, should I add the configuration in server nodes?

Looking forward to your reply






RE: Pain points of Ignite user community

2018-11-28 Thread Stanislav Lukyanov
Hi,

I expect a write-up on some of the Ignite pitfalls to be out soon – ping me 
next week.

Stan

From: Rohan Honwade
Sent: 29 ноября 2018 г. 0:42
To: user@ignite.apache.org
Subject: Pain points of Ignite user community

Hello,

I am currently creating some helpful blog articles for Ignite users. Can 
someone who is active on this mailing list or the StackOverflow Ignite section 
please let me know the major pain points that users face when using 
Ignite? 

Regards,
RH



RE: java.lang.ClassCastException:org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl

2018-11-28 Thread Stanislav Lukyanov
Hi,

Try a full fresh start:
- stop all nodes
- clean work directory (D:\ApacheIgnite2_6\ApacheIgnite\work)
- make sure all nodes use the same configuration
- start nodes again

Stan
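The same steps as a shell sketch — the paths and the process-match pattern are assumptions about the deployment, and note that deleting the work directory wipes any persisted data, so only do this on a cluster you can afford to reset:

```shell
# Full fresh start of a local Ignite node, following the steps above.
clean_restart() {
    local ignite_home=${1:-/opt/ignite}
    pkill -f 'org.apache.ignite' || true           # stop all local nodes
    rm -rf "$ignite_home/work"                     # clean the work directory
    # All nodes must point at the same config file, then start again:
    "$ignite_home/bin/ignite.sh" "$ignite_home/config/default-config.xml" &
}
```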

From: userx
Sent: 20 ноября 2018 г. 8:11
To: user@ignite.apache.org
Subject: Re: 
java.lang.ClassCastException:org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl

Hi Ilya,

Thank you for your response.
Whatever configuration I choose, like the one below,





[The Spring XML configuration was stripped by the mailing list archive; only
the discovery address 127.0.0.1:47500..47502 survives.]
It is still giving the same error.

java.lang.ClassCastException:
org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl cannot be cast
to
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryEx







RE: How to restart Apache Ignite nodes via command line

2018-11-28 Thread Stanislav Lukyanov
Well, as you said, you need to write some scripts. To shut down the cluster you 
kill the Ignite processes; to load new configurations you copy the 
configuration files. No magic here, and nothing specific to Ignite.

Stan
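To make that concrete, here is a hedged shell sketch of such a script — every path and the process-match pattern are assumptions about the deployment, not Ignite conventions:

```shell
# Minimal helpers for restarting a local Ignite node (hypothetical layout).
IGNITE_HOME=${IGNITE_HOME:-/opt/ignite}

stop_node() {
    # SIGTERM lets the node run its shutdown hooks before exiting.
    local pid
    pid=$(pgrep -f 'org.apache.ignite.startup.cmdline.CommandLineStartup' || true)
    [ -n "$pid" ] && kill "$pid"
}

deploy_config() {
    # Copy a new configuration into place before restarting.
    cp "$1" "$IGNITE_HOME/config/default-config.xml"
}

start_node() {
    nohup "$IGNITE_HOME/bin/ignite.sh" \
        "$IGNITE_HOME/config/default-config.xml" >/dev/null 2>&1 &
}
```

Running stop_node, deploy_config new.xml, start_node on each host in turn gives a rolling restart.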

From: Max Barrios
Sent: 28 ноября 2018 г. 0:40
To: user@ignite.apache.org
Subject: How to restart Apache Ignite nodes via command line

Is there any way to restart Apache Ignite nodes via command line? I want to 
* load new configurations 
* shutdown the cluster to upgrade my VM instances
* do general maintenance
and just can't find any documentation that shows me how to do this via the 
command line. I have similar devops actions for other techs running via 
ansible/bash scripts, and I want to do the same with Ignite. 

Thanks

Max





RE: PublicThreadPoolSize vs FifoQueueCollisionSpi.ParallelJobsNumber

2018-11-28 Thread Stanislav Lukyanov
Hi,

1) No.
With publilcThreadPoolSize=16 and parallelJobsNumber=32 you’ll have 32 jobs 
submitted to executor with 16 threads,
which means that 16 jobs will be executing and 16 will be waiting in the public 
thread pool’s queue.

2) Yes – see setWaitingJobsNumber.

Stan
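For completeness, a Spring XML sketch of the combination being discussed — the numbers mirror the question and are illustrative, not recommendations:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- 16 threads execute jobs; the collision SPI admits 32 active jobs,
         so up to 16 of them wait in the public pool's queue. -->
    <property name="publicThreadPoolSize" value="16"/>
    <property name="collisionSpi">
        <bean class="org.apache.ignite.spi.collision.fifoqueue.FifoQueueCollisionSpi">
            <property name="parallelJobsNumber" value="32"/>
            <!-- Caps how many jobs may sit in the FIFO waiting queue. -->
            <property name="waitingJobsNumber" value="1000"/>
        </bean>
    </property>
</bean>
```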

From: Prasad Bhalerao
Sent: 28 ноября 2018 г. 9:20
To: user@ignite.apache.org
Subject: PublicThreadPoolSize vs FifoQueueCollisionSpi.ParallelJobsNumber

Hi,

What will happen in following case:

publilcThreadPoolSize is to 16.

But FifoQueueCollisionSpi is used instead of NoopCollisionSpi.
FifoQueueCollisionSpi.setParallelJobsNumber(32);

1) Will ignite execute 32 jobs in parallel even though the publicThreadPool 
size is set to 16?

2) Is there any configuration to set fifo queue size as well so that number of 
jobs that be submitted to this queue can be limited?

Thanks,
Prasad



RE: Slow Data Insertion On Large Cache : Spark Streaming

2018-11-12 Thread Stanislav Lukyanov
Hi,

Do you use persistence? Do you have more data on disk than RAM size?
If yes, it’s almost definitely 
https://issues.apache.org/jira/browse/IGNITE-9519.
If no, it still can be the same issue.
Try running on 2.7, it should be released soon.

Stan

From: ApacheUser
Sent: 5 ноября 2018 г. 20:10
To: user@ignite.apache.org
Subject: Slow Data Insertion On Large Cache : Spark Streaming

Hi Team,

We have 6 node Ignite cluster with 72CPU, 256GB  RAM and 5TB Storage . Data
ingested using Spark Streaming  into Ignite Cluster for SQL and Tableau
Usage.

I have a couple of large tables, one with 200M rows (200GB) and one with 800M
rows (500GB).
Insertion takes more than 40 seconds if the composite key already exists; for
a new row it is around 10ms.

We have Entry, Main and Details tables. The "Entry" cache has a single-field
"id" primary key; the second cache, "Main", has a composite primary key of
"id" and "mainid"; the third cache, "Details", has a composite primary key of
"id", "mainid" and "detailid". "id" is the affinity key for all of them and
some other small tables.

1. Is there any insertion/update performance difference between a single-field
primary key and a multi-field primary key?
Will it make any difference if I convert the composite primary key into a
single-field primary key, i.e. concatenate all composite fields into one
primary key field?

2. Which ignite.sh and config parameters need tuning?

My Spark Dataframe save options (Save to Ignite)

 .option(OPTION_STREAMER_ALLOW_OVERWRITE, true)
.mode(SaveMode.Append)
.save()

My Ignite.sh

JVM_OPTS="-server -Xms10g -Xmx10g -XX:+AggressiveOpts
-XX:MaxMetaspaceSize=512m"
JVM_OPTS="${JVM_OPTS} -XX:+AlwaysPreTouch"
JVM_OPTS="${JVM_OPTS} -XX:+UseG1GC"
JVM_OPTS="${JVM_OPTS} -XX:+ScavengeBeforeFullGC"
JVM_OPTS="${JVM_OPTS} -XX:+DisableExplicitGC"
JVM_OPTS="${JVM_OPTS} -XX:+HeapDumpOnOutOfMemoryError "
JVM_OPTS="${JVM_OPTS} -XX:HeapDumpPath=${IGNITE_HOME}/work"
JVM_OPTS="${JVM_OPTS} -XX:+PrintGCDetails"
JVM_OPTS="${JVM_OPTS} -XX:+PrintGCTimeStamps"
JVM_OPTS="${JVM_OPTS} -XX:+PrintGCDateStamps"
JVM_OPTS="${JVM_OPTS} -XX:+UseGCLogFileRotation"
JVM_OPTS="${JVM_OPTS} -XX:NumberOfGCLogFiles=10"
JVM_OPTS="${JVM_OPTS} -XX:GCLogFileSize=100M"
JVM_OPTS="${JVM_OPTS} -Xloggc:${IGNITE_HOME}/work/gc.log"
JVM_OPTS="${JVM_OPTS} -XX:+PrintAdaptiveSizePolicy"
JVM_OPTS="${JVM_OPTS} -XX:MaxGCPauseMillis=100"

export IGNITE_SQL_FORCE_LAZY_RESULT_SET=true

default-Config.xml






[The Spring XML configuration was stripped by the mailing list archive; only
the discovery addresses (six entries of the form 64.x.x.x:47500..47509)
survive.]
  

 

RE: Cannot query cache by affinity key when custom cache template is used

2018-10-22 Thread Stanislav Lukyanov
Also crossposting from SO :)
https://stackoverflow.com/questions/52879064/cannot-query-cache-by-affinity-key-when-custom-cache-template-is-used/52935802#52935802

Apparently, it's a bug in Ignite. Filed 
https://issues.apache.org/jira/browse/IGNITE-9964. Thanks for reporting!

The issue only appears when you put data via withKeepBinary(). If you use SQL 
INSERT instead, SELECT works fine.

I suggest you use INSERT instead of constructing BinaryObjects manually – it's 
much easier and lets you work around the bug. If you have to use BinaryObjects, 
you can try adding the first row via INSERT and using binary afterwards – 
this also worked in my tests.

Stan

From: adam
Sent: 18 октября 2018 г. 21:39
To: user@ignite.apache.org
Subject: Cannot query cache by affinity key when custom cache template is used

I am crossposting this from StackOverflow.

I noticed that when I query a cache which was created with a custom cache
template and include the cache's affinity key in the WHERE clause, no
results are returned. 

I am running Ignite 2.5 with the following configuration:
  
[The Spring XML configuration was stripped by the mailing list archive; it
defined the custom cache template "myCacheTemplate" referred to below.]
And here is my test code. The code creates 3 caches. The first one is a
"root" cache which defines colocation for the others. The other two are
caches colocated by the root's key. The first colocated cache
(colocated_default) uses the PARTITIONED template and works as expected. The
second (colocated_custom) uses the "myCacheTemplate" created in the above
configuration. I insert one cache entry into each cache, where the entries
in the colocated cache have an affinity key equal to the root cache entry's
key.

I then query the caches. I first run a query to ensure there is one entry in
each cache. Then I run a query where the affinity key is equal to the value
inserted. The results show that I am able to select by affinity key from both
of the PARTITIONED caches, but get no results from the
"colocated_custom" cache. Here is the code:

/**
 * Test which shows that creating a cache with a custom cache configuration
 * template doesn't allow for SQL queries to use an affinity key in the
 * WHERE clause.
 */
public class App {

    public static void main(String[] args) {
        // Start Ignite.
        Ignition.setClientMode(true);
        final Ignite ignite = Ignition.start(new IgniteConfiguration());

        // Create caches. Create a root entity, and two entities which are
        // colocated by the root's ID. One uses the custom cache template and
        // one just uses the PARTITIONED template.
        final List<StringBuilder> createTableStringBuilders = new ArrayList<>();

        final StringBuilder createRoot = new StringBuilder();
        createRoot.append("CREATE TABLE IF NOT EXISTS root (\n");
        createRoot.append("  \"key\" VARCHAR(24) NOT NULL,\n");
        createRoot.append("  \"data\" VARCHAR(100),\n");
        createRoot.append("  PRIMARY KEY(\"key\"))\n");
        createRoot.append(
            "WITH \"template=PARTITIONED, affinity_key=key, " +
            "cache_name=root, value_type=root\";");
        createTableStringBuilders.add(createRoot);

        final StringBuilder createColocatedDefault = new StringBuilder();
        createColocatedDefault.append("CREATE TABLE IF NOT EXISTS colocated_default (\n");
        createColocatedDefault.append("  \"root_key\" VARCHAR(24) NOT NULL,\n");
        createColocatedDefault.append("  \"key\" VARCHAR(24) NOT NULL,\n");
        createColocatedDefault.append("  \"data\" VARCHAR(100),\n");
        createColocatedDefault.append("  PRIMARY KEY(\"root_key\", \"key\"))\n");
        createColocatedDefault.append(
            "WITH \"template=PARTITIONED, affinity_key=root_key, " +
            "cache_name=colocated_default, key_type=colocated_default_key, " +
            "value_type=colocated_default\";");
        createTableStringBuilders.add(createColocatedDefault);

        final StringBuilder createColocatedCustom = new StringBuilder();
        createColocatedCustom.append("CREATE TABLE IF NOT EXISTS colocated_custom (\n");
        createColocatedCustom.append("  \"root_key\" VARCHAR(24) NOT NULL,\n");
        createColocatedCustom.append("  \"key\" VARCHAR(24) NOT NULL,\n");
        createColocatedCustom.append("  \"data\" VARCHAR(100),\n");
        createColocatedCustom.append("  PRIMARY KEY(\"root_key\", \"key\"))\n");
        createColocatedCustom.append(
            "WITH \"template=myCacheTemplate, affinity_key=root_key, " +
            "cache_name=colocated_custom, key_type=colocated_custom_key, " +
            "value_type=colocated_custom\";");
        createTableStringBuilders.add(createColocatedCustom);


RE: IGNITE-8386 question (composite pKeys)

2018-10-17 Thread Stanislav Lukyanov
Yep, just create a separate index.
(I saw in your other messages that you’re already trying that)

Stan

From: eugene miretsky
Sent: 18 сентября 2018 г. 17:56
To: user@ignite.apache.org
Subject: Re: IGNITE-8386 question (composite pKeys)

So how should we work around it now? Just create a new index for (customer_id, 
date)?

Cheers,
Eugene

On Mon, Sep 17, 2018 at 10:52 AM Stanislav Lukyanov  
wrote:
Hi,
 
The thing is that the PK index is currently created roughly as
    CREATE INDEX T(_key)
and not
    CREATE INDEX T(customer_id, date).
 
You can’t use the _key column in the WHERE clause directly, so the query 
optimizer can’t use the index.
 
After the IGNITE-8386 is fixed the index will be created as a multi-column 
index, and will behave the way you expect (e.g. it will be used instead of the 
affinity key index).
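 
Until then, you can create the multi-column index yourself (a sketch using the 
table and column names from this thread):
 
```sql
-- Explicit secondary index over the primary key columns; the query optimizer
-- can use this even though it can't use the PK index over _key.
CREATE INDEX idx_customer_date ON T (customer_id, date);
```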
 
Stan
 
From: eugene miretsky
Sent: 12 сентября 2018 г. 23:45
To: user@ignite.apache.org
Subject: IGNITE-8386 question (composite pKeys)
 
Hi, 
 
A question regarding 
https://issues.apache.org/jira/browse/IGNITE-8386?focusedCommentId=16511394&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16511394
 
It states that a PK index with a composite pKey is "effectively useless". 
Could you please explain why that is? We have a pKey that we are using as an 
index. 
 
Also, our pKey is (customer_id, date) and the affinity column is customer_id. I 
have noticed that most queries use the AFFINITY_KEY index. Looking at the source 
code, the AFFINITY_KEY index should not even be created, since the first field of 
the pKey is the affinity key. Any idea what may be happening? 
 
Cheers,
Eugene
 



RE: Ignite complains for low heap memory

2018-10-17 Thread Stanislav Lukyanov
Put your -X* options before -jar, i.e. java -Xms2024m -Xmx4024m -jar target/cm.jar.
That's how the java command line works: everything after -jar is passed to the
application as program arguments, not to the JVM.

Stan

From: Lokesh Sharma
Sent: 17 октября 2018 г. 14:08
To: user@ignite.apache.org
Subject: Re: Ignite complains for low heap memory

I get the same output in the logs when I run the application with Xms set to 2 
GB. I ran this command:

java -jar target/cm.jar -Xms2024m -Xmx4024m

On Wed, Oct 17, 2018 at 4:26 PM aealexsandrov  wrote:
Hi,

The heap metrics that you see in topology message shows the max heap value
that your cluster can use:

Math.max(m.getHeapMemoryInitialized(), m.getHeapMemoryMaximum())

Initial heap size (-Xms ) is different from the maximum (-Xmx). Your JVM
will be started with Xms amount of memory and will be able to use a maximum
of Xmx amount of memory.

Looks like Ignite recommends setting Xms to at least 512 MB. 
You can read about JVM tuning here:

https://apacheignite.readme.io/docs/jvm-and-system-tuning

BR,
Andrei 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: How can I unsubscribe from the email server?

2018-10-16 Thread Stanislav Lukyanov
See the “unsubscribe” links here: 
https://ignite.apache.org/community/resources.html
They’ll open your mail client with a draft of the message – just send it.

Stan

From: Lucky
Sent: 16 октября 2018 г. 11:56
To: user@ignite.apache.org
Subject: How can I unsubscribe from the email server?

There are too many emails .
How can I unsubscribe from the email server?
Thanks.

 



RE: Priority on affinityCalls

2018-10-16 Thread Stanislav Lukyanov
I personally feel that this should be handled on higher level.
In my view, Ignite Compute subsystem is about simple delivery of a job
to a remote server. It has failover and all, but to really use that you have to
go for ComputeTaskSplitAdapter or similar things, and they’re hard to get right.

I’d rather see some sort of distributed job scheduler as a higher level 
subsystem,
e.g. built as an Ignite Service.

Or, one could say that all you need is a distributed priority queue.
We already have a regular FIFO queue as an Ignite Data Structure.
Why not add another one?

In any case, I think you should ask on d...@ignite.apache.org
what other people think.
And the best way to get this feature is to contribute it! :)

Stan

From: chewie
Sent: 16 октября 2018 г. 11:31
To: user@ignite.apache.org
Subject: RE: Priority on affinityCalls

Hi Stan,

Thanks for your quick response! Could something like prioritized callables
be considered in a future version? Currently in my project a bunch of client
nodes send all kinds of affinityCalls to the server nodes and a real-time
interaction task and a slow moving batch job probably shouldn't be executed
with the same urgency. I just feel I wouldn't be the only one gaining from
such a feature.

/Anders



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


