I start an Ignite instance with "Ignition.start("ignite.xml")" on my server, but I
want to control the stopping process (e.g. on Ctrl+C) myself, for example:
private static void waitForShutdownCommand() {
    Thread shutdown = new Thread() {
        public void run() {
            // Cancel running jobs and stop the local node.
            Ignition.stop(true);
        }
    };
    Runtime.getRuntime().addShutdownHook(shutdown);
}
Actually the fix will be a little more complicated, because the variable "em"
has already been updated to the new type before the last batch has been
executed.
From: Gordon Reid (Nine Mile) [mailto:gordon.r...@ninemilefinancial.com]
Sent: Wednesday, 26 April 2017 12:05 PM
To:
Thanks, that was very helpful.
Sean
Sent: Saturday, April 22, 2017 at 9:27 AM
From: "Dmitriy Setrakyan"
To: user
Subject: Re: Master-Worker Pattern Possible In Ignite?
This question has already been answered here:
Hello,
Assuming there are no more jobs or tasks running (which I can control from an application perspective), I would like to know when the database is in sync with the caches. Otherwise I cannot get a coherent snapshot. Knowing that there are no jobs ongoing and the queue is empty would be enough. How
We are running Ignite 1.8.0 within Docker on an AWS instance. Ignite
registers with Zookeeper. We use a BasicAddressResolver in the Ignite config
so that only the Docker host IP is registered. This works fine until the
tcp-disco-ip-finder-cleaner thread kicks in and adds the loopback and the
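For context, a BasicAddressResolver of the kind described maps internal (container) addresses to external (Docker host) addresses in the node configuration. A sketch of such a mapping, with placeholder IPs, might look like:

```xml
<property name="addressResolver">
    <bean class="org.apache.ignite.configuration.BasicAddressResolver">
        <constructor-arg>
            <map>
                <!-- internal container address -> external Docker host address -->
                <entry key="172.17.0.2" value="203.0.113.10"/>
            </map>
        </constructor-arg>
    </bean>
</property>
```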
Hi Alena, there are several different requests here; let's try to separate them.
A1. Wrong Hive query results:
Is this use case easily reproducible? It appears it is not. Please try
to track it down as far as possible:
- Run Ignite nodes with -v option and see console logs of the nodes: are
there
Hi Steve,
I don't think it's currently possible and frankly I'm not sure I understand
what it actually means. Can you clarify what is implied in "no more write
behind operations waiting for completion"? We could probably check if the
queue is empty, but what if new updates happen right after or
Responded here:
http://apache-ignite-users.70518.x6.nabble.com/XMX-XMS-for-embedded-ignite-in-client-mode-td12244.html
-Val
--
View this message in context:
http://apache-ignite-users.70518.x6.nabble.com/How-to-set-Xmx-Xms-on-embedded-client-tp12243p12246.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.
JVM parameters are set for the JVM, not for an Ignite node. So basically the
correct way depends on how your application is organized and started.
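For example, if the application is started from a plain java command line, the heap flags go on that command. A small standalone check (class name and launch command are hypothetical, not from this thread) confirms what the JVM actually received:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Launched as e.g.: java -Xms512m -Xmx512m -cp app.jar HeapCheck
        // maxMemory() reflects the effective -Xmx setting.
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: ~" + maxMb + " MB");
    }
}
```

If the application is launched by a container or a service wrapper instead, the flags must be set in that wrapper's JVM options, not in the Ignite XML.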
-Val
Hello all,
I'd appreciate your advice on the following issue, that I have had for
some time:
When connecting to an existing ignite cluster in client mode, I get the
warning
"Initial heap size is 504MB (should be no less than 512MB, use -Xms512m
-Xmx512m)."
and I was wondering how to
Hello,
I would like to enable the write-behind mode, but since I have to sync the
overall process with other jobs (extract to the data warehouse), I would like
to know how to make sure that there are no more write-behind operations
waiting for completion. I had a look at the API but I do not see
Hi,
Ignite supports a simple continuous mapping API [1] for a stream of data.
I do not understand why you cannot filter the data in the map job or
implement your own interface. But I think you could run into collocation
issues with this approach.
[1]:
We have a two-node Apache Ignite server cluster, and several application client
nodes join the cluster.
If both Apache Ignite cluster nodes are down for some reason, is there a
config setting to fall back to a scenario where the application nodes store web
sessions in their own JVM and continue
Alper,
Please check that the fields in your DeploymentEvent class are also Serializable
(or marked transient).
Also, you can run Ignite in debug mode and add a breakpoint at the place
where the NotSerializableException is thrown, and check which object had a
problem with serialization.
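To illustrate the point about transient fields (the class names below are made up for the example, not taken from the thread): a non-transient field whose type does not implement Serializable fails in exactly this way, and marking it transient fixes it:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationCheck {
    // A field type that does not implement Serializable.
    static class NotSerializableField { }

    // Serialization fails: the non-serializable field is not transient.
    static class BadEvent implements Serializable {
        NotSerializableField payload = new NotSerializableField();
    }

    // Marking the field transient excludes it from serialization.
    static class GoodEvent implements Serializable {
        transient NotSerializableField payload = new NotSerializableField();
    }

    static boolean canSerialize(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false; // the exception Alper is seeing
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(canSerialize(new BadEvent()));  // false
        System.out.println(canSerialize(new GoodEvent())); // true
    }
}
```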
Evgenii
2017-04-25 15:55
Hi Ivan,
Thanks a lot for the very useful guidelines! Using your explanations I could run
Ignite Map-Reduce on my cluster, but only twice (subsequent attempts failed with
various errors), and the result was unexpected.
I ran a simple query:
select calday, count(*) from price.toprice where calday between
Hi Evgenii.
If you mean "List events = e.getValue();", it is an ArrayList.
On Tue, Apr 25, 2017 at 12:21 PM, Evgenii Zhuravlev <
e.zhuravlev...@gmail.com> wrote:
> Hi,
>
> Which List implementation do you use? Are you sure that it implements
> Serializable? List does not extend Serializable by
Thanks Andrey. I'll see how it behaves when 2.0 comes out in a few weeks.
Thanks again and kind regards,
Rick
If you configure sslContextFactory then the client communicates through a secure
socket, because the client works as a node of the cluster (using CommunicationSpi).
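For reference, the sslContextFactory is set on the node configuration; a minimal sketch, with placeholder keystore paths and passwords:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="sslContextFactory">
        <bean class="org.apache.ignite.ssl.SslContextFactory">
            <property name="keyStoreFilePath" value="keystore/node.jks"/>
            <property name="keyStorePassword" value="123456"/>
            <property name="trustStoreFilePath" value="keystore/trust.jks"/>
            <property name="trustStorePassword" value="123456"/>
        </bean>
    </property>
</bean>
```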
Hi,
Yes, I'm aware that you can use SSL. That only secures data going between
nodes, however. I haven't tried it yet, but it isn't going to help with
seeing all the values in the cache through Visor.
Best,
Rick
Thanks Andrew for the explanation.
Just one additional question. Let's say I have an Apache Ignite cluster
running on 2 different machines, and we set noOfBackups of a replicated
cache to 1: each of the 2 machines will hold 1/2 of the primary partitions
and 1/2 of the backup partitions.
Hi Tuco,
It seems there is a misunderstanding.
Ignite does not have master and slave nodes; all nodes are equal.
There are primary and backup partitions distributed among the grid nodes.
SQL queries run on the index of primary partitions on all data nodes.
Actually, the index contains backup data,
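For reference, the number of backup partitions is a plain cache configuration property (the cache name here is a placeholder):

```xml
<property name="cacheConfiguration">
    <bean class="org.apache.ignite.configuration.CacheConfiguration">
        <property name="name" value="myCache"/>
        <property name="cacheMode" value="PARTITIONED"/>
        <!-- one backup copy of every primary partition -->
        <property name="backups" value="1"/>
    </bean>
</property>
```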
Hi Woo,
It may be reasonable if you see that node resource utilization is too low
and raising the per-node buffer size has no effect (which means you are
preparing data for the nodes too slowly).
Of course, you should first check that the network isn't a bottleneck.
On Tue, Apr 25, 2017 at 10:08 AM, woo charles
Thanks Andrew,
A follow-up question: if I have 4 machines comprising a cluster of a
distributed cache with backups set to 1, will 2 machines be masters and 2 be
slaves, or will each machine hold 1/4th of the master data and 1/4th of the
slave data of another master?
If it is the former, then we
Hi Tuco,
A backup partition is a full copy of the primary partition, so you would run
the SQL query on the same data.
There is no need to run queries on backups, as we can run them on primaries in
multiple threads with the same effect.
We have already tested the approach of running a query in multiple threads,
where every thread works
Hi,
Which List implementation do you use? Are you sure that it implements
Serializable? List does not extend Serializable by itself.
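A quick standalone check of this point (not Ignite-specific): the List interface itself does not extend Serializable, but the common concrete implementations do:

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class ListSerializableDemo {
    public static void main(String[] args) {
        // The List interface does not extend Serializable...
        System.out.println(Serializable.class.isAssignableFrom(List.class)); // false
        // ...but ArrayList (like LinkedList, Vector) implements it.
        System.out.println(new ArrayList<String>() instanceof Serializable); // true
    }
}
```

So a field declared as List is only safe to serialize if the runtime instance behind it happens to be a serializable implementation.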
Evgenii
2017-04-24 19:18 GMT+03:00 Alper Tekinalp :
> Hi.
>
> I try to implement continuous query and it works fine on single node but
> on
If SQL queries executed not only on all masters, but on at least one
instance from (M1,S11,S12)+(M2,S21,S22)+(M3,S31,S32), then reads could
be scaled linearly by increasing the number of slaves.
I.e., a query would execute not on all masters, but on any combination of nodes
where the complete data
Hi,
Look here
http://h2database.com/html/grammar.html#time
Sergi
2017-04-24 21:28 GMT+03:00 javastuff@gmail.com
:
> It would be helpful to understand how the date and timestamp are stored
> with writeDate and writeTimestamp, specifically regarding the format
rick_tem,
Why not use the SSL/TLS configuration [1]?
In this case all nodes (including visorcmd) will communicate through a
secure socket.
jackbaru,
From my point of view, the places flagged in the report are not
relevant to security. This is internal usage of standard platform
Since every key in a cache is unique, every entity in the same cache is
unambiguously identified by its key.
If you need to have two entities with key 1, for example, then just create a
key class for each entity, for example Cache1Key and Cache2Key, each with a
single int ID field.
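A minimal sketch of such a key class (the name follows the example above; equals and hashCode must be based on the ID so cache lookups work correctly):

```java
import java.io.Serializable;
import java.util.Objects;

// One key class per cache: key 1 in cache1 and key 1 in cache2
// become values of distinct types, so they never collide.
public class Cache1Key implements Serializable {
    private final int id;

    public Cache1Key(int id) {
        this.id = id;
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof Cache1Key && ((Cache1Key) o).id == id;
    }

    @Override
    public int hashCode() {
        return Objects.hash(id);
    }
}
```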
Kind regards,
First: if you have only one backup, you will surely lose data when you kill 3
nodes (you can only avoid that by killing them one by one and waiting for
rebalancing to complete after each).
Could you please attach a full log file of at least one node where the remap
failed messages are present?
Hi, I got it. But one doubt: where are we defining that idB belongs to ClassB?
Same data set means that I separate the original data into 2 parts and input
them from 2 separate programs.
E.g. a data set with ids 1 - 100: program A inputs ids 1-50, program B inputs
ids 51-100.
2017-04-21 17:24 GMT+08:00 Andrey Mashenkov :
> Hi Woo,
>
> DataStreamer is
I've tried it again but write-behind still doesn't work. Like I said
before, if I only use write-through, I'm able to write the data to my
database, but when I enable write-behind, it just won't write even if I wait
past 5 seconds (the writeBehindFlushFrequency default value). I've read the
Hello,
Please refer to the corresponding documentation section
https://apacheignite.readme.io/docs/affinity-collocation
In short, for your case, just introduce a key class for ClassC like this:
public class ClassCKey {
    private int id;

    // ClassB ID which will be used for affinity.
    @AffinityKeyMapped
    private int idB;
}