How did you guys resolve this? Thanks.
On Tue, 7 Aug 2018 at 13:03, Evgenii Zhuravlev
wrote:
> Hi,
>
> What kind of compute jobs do you run? Do you start new jobs inside jobs?
> Can you share thread dumps?
>
> Evgenii
>
> 2018-08-07 1:48 GMT+03:00 boomi :
>
>> Hello,
>>
>> We are having a possi
Hi,
I am a little new to Ignite and am trying out a few features.
How do I log swap space movement? What is the default location of swap on
disk?
Thanks.
HI,
Does Ignite support alias names for the projection fields of an entity?
ex :
select value as value, count(*) as count from Person where value = ? group
by value
Thanks
HI ,
I am trying an IN query as given in the link below and it is not working
https://apacheignite.readme.io/docs/sql-queries#performance-and-usability-considerations
pseudo code:
List inParameter = new ArrayList();
inParameter.add("name0");
inParameter.add("name1");
String sql = "select p.id, p.nam
Args(new Object[] {
> inParameter.toArray() });
>
> This will work, because the method (SqlFieldsQuery.setArgs()) uses
> varargs.
>
> On Mon, Oct 10, 2016 at 2:47 PM, Anil wrote:
>
>> HI ,
>>
>> I am trying in query as given the below link and it is not working
>>
er.toArray()}, new Object[] { anotherINParameter.toArray()});
Apologies, I am sure I did something wrong. Thanks.
On 10 October 2016 at 20:37, Vladislav Pyatkov wrote:
> Anil,
>
> Ignite does not have its own DSL for SQL, but you can use any ANSI SQL generator.
> Ignite supports ANSI-99
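The wrapping trick above (`new Object[] { inParameter.toArray() }`) comes down to how Java resolves varargs. A minimal pure-Java sketch of the effect; `setArgs` here is a hypothetical stand-in with the same `Object...` signature as `SqlFieldsQuery.setArgs()`, not the real Ignite class:

```java
import java.util.ArrayList;
import java.util.List;

public class VarargsDemo {
    // Stand-in for SqlFieldsQuery.setArgs(Object... args).
    static Object[] setArgs(Object... args) {
        return args;
    }

    public static void main(String[] args) {
        List<String> in = new ArrayList<>();
        in.add("name0");
        in.add("name1");

        // Passing the array directly: varargs spreads it into two separate arguments.
        Object[] spread = setArgs(in.toArray());
        System.out.println(spread.length); // 2

        // Wrapping it in Object[]{...}: the whole array arrives as ONE argument,
        // which is what a single table(name VARCHAR = ?) placeholder expects.
        Object[] wrapped = setArgs(new Object[] { in.toArray() });
        System.out.println(wrapped.length); // 1
    }
}
```

So without the extra `Object[]` wrapper, a two-element list fills two query placeholders instead of one.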
Does Ignite support multiple IN queries? I tried the following with no luck.
String sql = "select p.id, p.name from Person p join table(name VARCHAR(15)
= ?) i on p.name = i.name join table(id varchar(15) = ?) k on p.id = k.id";
SqlFieldsQuery query = new SqlFieldsQuery(sql).setArgs(new Object[] {
i
HI,
We have around 18M records in HBase which need to be loaded into the Ignite
cluster.
i was looking at
http://apacheignite.gridgain.org/v1.7/docs/data-loading
https://github.com/apache/ignite/tree/master/examples
Is there any approach where each Ignite node loads the data of one HBase
region?
October 2016 at 19:03, Alexey Kuznetsov wrote:
> Hi, Anil.
>
> It depends on your use case.
> How many nodes will be in your cluster?
> All 18M records will be in one cache or many caches?
> How big single record? What will be the key?
> You need only load or you also nee
ws sorted by keys lexicographically you should perform full scan in
> HBase table. So the simplest way to parallelize data loading from
> HBase to Ignite is to concurrently scan regions and stream all rows to one or
> more DataStreamers.
>
>
> On Tue, Oct 11, 2016 at 4:11 PM, Anil
if you see any anti-pattern in terms of Ignite?
Thanks.
On 11 October 2016 at 20:49, Anil wrote:
> Thank you Vladislav and Andrey. I will look at the document and give a
> try.
>
> Thanks again.
>
> On 11 October 2016 at 20:47, Andrey Gura wrote:
>
>> Hi,
>>
Thank you.
On 12 October 2016 at 15:56, Taras Ledkov wrote:
> Hi,
>
> FailoverSpi is used to process jobs failures.
>
> The AlwaysFailoverSpi implementation is used by default. One tries to
> submit a job the 'maximumFailoverAttempts' (default 5) times .
>
When loading huge data into Ignite I see the following exception. My
configuration includes off-heap set to 0 and swap storage set to true.
org.apache.ignite.logger.java.JavaLogger error
SEVERE: Failed to process directory: /tmp/ignite/work/ipc/shmem
java.io.FileNotFoundException: /tmp/ignite/work/ipc/
What is the recommended file descriptor limit for Ignite data load?
Thanks.
On 13 October 2016 at 15:16, Taras Ledkov wrote:
> Please check the file descriptors OS limits.
>
>
> On 13.10.2016 12:36, Anil wrote:
>
>>
>> When loading huge data into ignite - i see t
: java.lang.NullPointerException
at java.security.CodeSource.readObject(CodeSource.java:587)
... 154 more
Could you please let me know the root cause of the exception?
I know the above exception stack trace is not clear; I can share the code
if required.
Thanks,
Anil
On 11 October 2016 at 20:49, Anil wrote:
> Th
Hi Val,
I have attached the sample program. Please take a look and let me know if
you have any questions.
After spending some time, I noticed that the exception happens only
when processing a number of parallel callables with broadcast.
Thanks,
Anil
On 15 October 2016 at
nd no luck. Looks like the failures
of attached classes are also because of serialization.
Is there any way to overcome this?
Thanks
On 18 October 2016 at 14:31, Vladislav Pyatkov wrote:
> Hi Anil,
>
> The implementation of IgniteCallable looks very doubtful.
> When you invoke
nite:cfg://cache=test-map@file
:C:/Anil/ignite.xml");
grid name is mandatory to access the ignite in #1
Same ignite configuration with client mode is set true (though it is not
set to true.. it is set to true in driver) in #2 is failing.. #2 is working
only with different grid name (genera
Thank you Val. this is really helpful.
On 20 October 2016 at 23:32, vkulichenko
wrote:
> Hi,
>
> This is the name of local Ignite instance within current JVM. This allows
> you to start several nodes within one JVM and access different instances
> within one application using Ignition.ignite(nam
HI,
I have loaded around 20M records into a 4-node Ignite cluster.
The following are the Ignite metrics logged in the log of one node.
^-- Node [id=c0e3dc45, name=my-grid, uptime=20:17:27:096]
^-- H/N/C [hosts=4, nodes=4, CPUs=32]
^-- CPU [cur=0.23%, avg=0.26%, GC=0%]
^-- Heap [used=999MB
HI Ignite team,
Is there a way to configure fixed resources (threads) only for Ignite queries
?
Jobs, queries and other tasks will be running on the Ignite cluster at the
same time, so having a fixed set of resources for queries would ensure no
impact on query response time.
I see public and system thread
HI,
I was loading data into the Ignite cache using parallel tasks by broadcasting
the task. Each task (an IgniteCallable implementation) has its own data
streamer. Was that the correct approach?
Loading data into the Ignite cache using a data streamer is very slow compared
to a normal cache.put.
is that expected ? or n
HI Val,
I have attached sample reproducer. thanks.
On 21 October 2016 at 21:12, Vladislav Pyatkov wrote:
> Hi Anil,
>
> This sounds very questionable.
> Could you please attach your sources?
>
> On Fri, Oct 21, 2016 at 5:16 PM, Anil wrote:
>
>> HI,
>>
Thanks Val. Computations and queries use different thread pools; this
should be enough.
On 22 October 2016 at 00:09, vkulichenko
wrote:
> Hi,
>
> Public thread is used for computations and system pool is used for all
> cache
> operations (gets, puts, queries, etc.). Having separate thread po
Thank you Manu for pointing out the issue.
I have created a data streamer object for each IgniteCallable task and it
looks good.
One data streamer could not be shared across all tasks as it is not
serializable.
Thanks
On 22 October 2016 at 22:34, Manu wrote:
> Hi,
>
> You are creating new data st
I created a data streamer object for each Callable task in my actual program
and it did not work. It worked in the test program. I will check the actual
program again and let you know. Thanks.
On 23 October 2016 at 11:42, Anil wrote:
> Thank you Manu for pointing out the issue.
>
> i have cre
HI,
Is the Ignite JDBC connection fault tolerant? As it is a distributed
cluster, any node can go down at any point in time.
And does it support connection pooling? Each connection starts Ignite
in client mode.
Thanks for your clarifications.
Hi,
I am playing with the Kafka streamer for my use case and noticed that the
message maps directly to the value of the Ignite cache entry:
getStreamer().addData(msg.key(), msg.message());
(
https://github.com/apache/ignite/blob/master/modules/kafka/src/main/java/org/apache/ignite/stream/kafka/KafkaStreamer.java
)
i tried
Thanks Manu.
If I understand it correctly, if the connection is closed due to a cluster node
failure, the client will automatically recreate the connection using the
discovery configuration.
And the JDBC connection does not support connection pooling.
thanks for your help.
On 24 October 2016 at 18:12, Manu wrote:
typo correction.
Thanks Manu.
>
> if i understand it correctly, if connection is closed due to cluster node
> failure, client will automatically recreate connection using discovery
> configuration.
>
> and *jdbc connection does support connection pool*.
>
> thanks for your help.
>
>
>
>
>
> On 24
No, Val. A message cannot be converted into a number of cache entries using
the value decoder. Am I wrong?
Thanks.
On 25 October 2016 at 02:42, vkulichenko
wrote:
> Hi,
>
> There are keyDecoder and valueDecoder that you can specify when creating
> the
> KafkaStreamer. Is that what you're looking for?
Thank you Manu. This is really helpful.
On 24 October 2016 at 20:07, Manu wrote:
> If you use the Ignite JDBC driver, to ensure that you always get a valid Ignite
> instance before calling an Ignite operation I recommend using a datasource
> implementation that validates the connection before calls and cre
Hi,
Does Ignite support a composite affinity key?
Thanks
Thanks Sergej. I was trying the same. I will test this for my use case.
On 25 October 2016 at 14:51, Sergej Sidorov
wrote:
> Hi, Anil!
>
> What do you mean by "composite affinity key"? What problem do you want to
> solve?
> If you want to use several fields as an affin
Agreed, makes sense.
One quick question: will those affinity-mapped objects be retrieved when
select * from Person is fired?
On 25 October 2016 at 15:25, Sergi Vladykin
wrote:
>
>> cache.put(department.getKey(), department);
>> cache.put(person.getKey(), person);
>>
>>
> As a side note: it is act
Should we use a tuple extractor? I still need to look at the code and give it a try.
On 25 October 2016 at 09:18, Anil wrote:
> No Val. A message cannot be converted into number of cache entries using
> value decoder. am i wrong ?
>
> Thanks.
>
> On 25 October 2016 at 02:42, vk
transformed into number of cache entries.
I have created a custom Kafka data streamer with a custom multiple-tuple
extractor implementation and it looks good and is working.
Thanks.
On 25 October 2016 at 23:56, vkulichenko
wrote:
> Anil,
>
> Decoders convert binary message from Kafka to a key-v
This has been resolved Val. Thanks
On 26 October 2016 at 14:58, vdpyatkov wrote:
> Hi Anil,
>
> I doubt these fields can serialize correctly:
>
> private Scan scan;
> private QueryPlan queryPlan;
>
> You will need to get rid of these fields from the serialized object
HI,
What is the best way to load normalized data from a DB into a flattened
object in the Ignite cache?
Ex:
CacheObject {
// person details
// person address
// person company information
}
from both RDBMS and NoSQL points of view.
Thanks.
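One common pattern (a sketch only; `FlatPerson` and all field names are hypothetical, not from this thread) is to fetch the three normalized rows per key and build one flat value object before putting or streaming it into the cache:

```java
import java.util.HashMap;
import java.util.Map;

public class FlattenDemo {
    // Hypothetical flat cache value combining person, address and company rows.
    static class FlatPerson {
        final String name;
        final String city;
        final String company;

        FlatPerson(String name, String city, String company) {
            this.name = name;
            this.city = city;
            this.company = company;
        }
    }

    // Build the flat object from three per-table lookups keyed by person id.
    // In a real loader the maps would be JDBC/HBase reads inside a CacheStore
    // or a feed into an IgniteDataStreamer.
    static FlatPerson flatten(long personId,
                              Map<Long, String> persons,
                              Map<Long, String> addresses,
                              Map<Long, String> companies) {
        return new FlatPerson(persons.get(personId),
                              addresses.get(personId),
                              companies.get(personId));
    }

    public static void main(String[] args) {
        Map<Long, String> persons = new HashMap<>();
        Map<Long, String> addresses = new HashMap<>();
        Map<Long, String> companies = new HashMap<>();
        persons.put(1L, "Anil");
        addresses.put(1L, "Bangalore");
        companies.put(1L, "Acme");

        FlatPerson p = flatten(1L, persons, addresses, companies);
        System.out.println(p.name + "/" + p.city + "/" + p.company);
    }
}
```

The same join-then-flatten step works whether the source is an RDBMS (three table reads) or a NoSQL store (three column-family reads).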
Looks perfect. Could you please elaborate on "this change breaks API
compatibility"? Thanks
On 27 October 2016 at 01:42, vkulichenko
wrote:
> Hi,
>
> OK, I see now what you mean and it looks like this is not supported right
> now. I created a ticket [1], however the way I propose it will break API
Hi Anil,
Is there a one-to-one relation between person and address/company? If yes,
then you just load three rows from different tables and create an object
based on the fetched information. You can load data via IgniteDataStreamer or
within your implementation of CacheStore:
https
Got you. thanks.
On 27 October 2016 at 09:30, vkulichenko
wrote:
> I changed the definition of the class (generic types, in particular).
> Compilation will be broken for those who already use KafkaStreamer.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x
HI,
I am using a dynamic table join to support the IN query as given in the Ignite
documentation.
How do I write a query with multiple IN clauses combined with an OR condition,
other than UNION?
Ex : name IN ('a', 'b') OR id IN ('1', '2')
Thanks
Hi Val,
The below one is multiple IN queries with AND but not OR. Correct?
SqlQuery with a join table worked for the IN query, but the following prepared
statement is not working.
List inParameter = new ArrayList<>();
inParameter.add("8446ddce-5b40-11e6-85f9-005056a90879");
inParameter.add("f5822409-
Any inputs please ?
On 28 October 2016 at 09:27, Anil wrote:
> Hi Val,
>
> the below one is multiple IN queries with AND but not OR. correct ?
>
> SqlQuery with join table worked for IN Query and the following prepared
> statement is not working.
>
> List inP
second try.
On 28 October 2016 at 15:24, Anil wrote:
> Any inputs please ?
>
> On 28 October 2016 at 09:27, Anil wrote:
>
>> Hi Val,
>>
>> the below one is multiple IN queries with AND but not OR. correct ?
>>
>> SqlQuery with join table work
HI,
Can we use a JDBC connection to query text-indexed fields? If yes, can you
point me to the examples?
Thanks
HI,
What Lucene features are supported by the Ignite text query?
I see case-insensitive term search is working, and regex is not working.
Could you please point me to the complete documentation of Ignite text query
features?
Thanks.
index analyzers ?
Thanks for your help.
On 2 November 2016 at 16:40, Anil wrote:
> HI,
>
> What are the lucene features supported by Ignite Text query ?
>
> i see case insensitive term search is working and regex is not working.
> Could you please point me the complete docum
HI,
I have created a custom Kafka data streamer for my use case and I see the
following exception.
java.lang.IllegalStateException: Data streamer has been closed.
at
org.apache.ignite.internal.processors.datastreamer.DataStreamerImpl.enterBusy(DataStreamerImpl.java:360)
at
org.apache.ig
e updating the cache - {} {} " ,msg, ex);
}
return entries;
}
});
kafkaStreamer.start();
}catch (Exception ex){
logger.error("Error in kafka data streamer ", ex);
}
Please let me know if you see any issues. thanks.
On 3 November 2016 at 15:59, A
());
}
Cache size with #1 & #3 is 0.
Cache size with #2 is 1, as expected.
Have you seen a similar issue before?
Thanks
On 3 November 2016 at 16:33, Anton Vinogradov
wrote:
> Anil,
>
> Is it first and only exception at logs?
>
> Is it possible to debug
Regarding the data streamer, I will reproduce and share the complete code.
Regarding cache size(), I understand it completely now. I set the auto flush
interval and it looks good now.
Thanks to both of you.
On 3 November 2016 at 17:41, Vladislav Pyatkov wrote:
> Hi Anil,
>
> Please att
HI ,
Null values are returned as the string "null" by the Ignite JDBC result set.
private T getTypedValue(int colIdx, Class cls) throws SQLException {
ensureNotClosed();
ensureHasCurrentRow();
try {
T val = cls == String.class ? (T)String.valueOf(curr.get(colIdx
- 1)) :
, Nov 3, 2016 at 5:39 PM, Andrey Gura wrote:
>
>> String.valueOf(null) returns the "null" string by contract.
>>
>> On Thu, Nov 3, 2016 at 5:33 PM, Anil wrote:
>>
>>> HI ,
>>>
>>> null values are returned as "null" with ignite
Mashenkov
wrote:
> No, wasNull is true if the column value is null, otherwise false.
> You can have wasNull() be true while getInt() is zero, e.g. getInt()'s return
> type is primitive and the default value (zero) should be returned for NULL fields.
>
>
> On Thu, Nov 3, 2016 at 9:11 PM, Anil wrot
.apache.org/jira/browse/IGNITE-4175
>
> Thanks Anil !
>
>
> On Thu, Nov 3, 2016 at 9:45 PM, Anil wrote:
>
>> The following code I see in JdbcResultSet: val is not null for null
>> values for string types, so val == null is false. Correct?
>>
>> T val = cls
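The contract Andrey mentions is plain Java: a null reference passed through `String.valueOf(Object)` becomes the four-character string "null". A minimal sketch of the pitfall and the usual fix (checking for null before converting; method names here are illustrative, not the actual JdbcResultSet API):

```java
public class NullValueDemo {
    // Mimics the problematic conversion path: String.valueOf(null)
    // produces the literal string "null" rather than a null reference.
    static String unsafe(Object raw) {
        return String.valueOf(raw);
    }

    // Safe variant: preserve null instead of turning it into "null".
    static String safe(Object raw) {
        return raw == null ? null : raw.toString();
    }

    public static void main(String[] args) {
        System.out.println(unsafe(null));        // prints: null (a real 4-char string)
        System.out.println(safe(null) == null);  // prints: true
    }
}
```

This is why `ResultSet.wasNull()` (or a null check on `getObject()`) is needed to distinguish a genuine NULL from the string "null" or a primitive default like zero.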
Are there any differences between Ignite 1.7.0-SNAPSHOT and 1.7? Please
clarify.
Thanks.
Hi Manu and All,
The Ignite JDBC connection is very slow the very first time, even with a data
source.
Consecutive queries are very fast, but queries separated by a 1 minute gap
become slow again.
PoolProperties p = new PoolProperties();
p.setUrl("jdbc:ignite:cfg://cache=TEST_CACHE@file:" +
System.getPr
I see it is because of the client timeout, and it has been resolved. Thanks.
On 6 November 2016 at 00:36, Anil wrote:
> Hi Manu and All,
>
> Ignite jdbc connection is very slow for very first time even with data
> source.
>
> Consecutive queries are very fast. queries with 1
Hi Vladislav,
I was trying swap space and see it is not working on Unix, but my test
program worked on Windows.
I can see the swap folders but no files. Do you see any issue in the below
configuration?
CacheConfiguration pConfig = new
CacheConfiguration();
pConfig.setName("Person_Cache"
Thanks.
On 9 November 2016 at 17:27, vdpyatkov wrote:
> Hi Anil,
>
> You should not pay attention to the version warning message in a nightly build
> or a manually built version, because the message is right only for versions
> which are already released (this is a simple comparison wit
HI,
The 'data streamer has been closed' exception is very frequent. I did not see
any explicit errors/exceptions about the data streamer closing; I see the
exception only when a message is being added.
I have a 4-node Ignite cluster and each node has a consumer to connect and
push the received messages to the streamer.
Wha
024 * 1024);
pConfig.setStartSize(200);
pConfig.setStatisticsEnabled(true);
Thanks for your help.
On 9 November 2016 at 19:56, Anil wrote:
> HI,
>
> Data streamer closed exception is very frequent. I did not see any
> explicit errors/exception about data streamer close. the excption i s
HI Val,
I was trying to do the following:
select * from Person p where p.id in ('1', '3') OR name in ('name0',
'name1') order by id
Both id and name fields are indexed.
As the IN query does not use an index, as per
http://apacheignite.gridgain.org/docs/sql-queries#performance-and-usability-considerations
I have a couple of questions on the Ignite JDBC connection. Could you please
clarify?
1. Should the connection be closed like other JDBC DB connections? I see that
closing the connection shuts down the Ignite client node.
2. Connection objects are not getting released and all connections are busy?
3. Connection po
Any help in understanding the below?
On 10 November 2016 at 16:31, Anil wrote:
> I have couple of questions on ignite jdbc connection. Could you please
> clarify ?
>
> 1. Should connection be closed like other jdbc db connection ? - I see
> connection close is shutdown of ignite c
recreated. Correct?
In the above case, the Kafka streamer tries to getStreamer() and push the data,
but the streamer is not available.
Thanks.
On 11 November 2016 at 14:00, Anton Vinogradov wrote:
> Anil,
>
> Unfortunately,
> at com.test.cs.cache.KafkaCacheDataStreamer.addMess
HI Anton,
Sounds perfect !
#1 - I will reproduce and share the logs with you. I have a project commitment
for the coming Monday.
#2 - I will work on this from the coming Tuesday.
Thanks
On 11 November 2016 at 14:25, Anton Vinogradov
wrote:
> Anil,
>
>
>> I suspect there is a problem when n
gt; server node fails. All you need is proper IP finder configuration.
>
>
> On Thu, Nov 10, 2016 at 5:01 PM, Anil wrote:
>
>> Any help in understanding below ?
>>
>> On 10 November 2016 at 16:31, Anil wrote:
>>
>>> I have couple of questions on ignite
Hi,
I am seeing a few logs in the Ignite cluster as below. Should we really worry
about these?
2016-11-11 11:48:14 WARN grid-nio-worker-2-#42%my-grid%
TcpCommunicationSpi:480 -
>> Selector info [idx=2, keysCnt=1]
Connection info [rmtAddr=/X.X.X.X:47100, locAddr=/X.X.X.X:54216,
msgsSent=6519, msgsAcke
Hi,
This is strange. I tried a couple of times and did not see any index in the
explain plan. Let me give it a try again.
Thanks for your response.
On 15 November 2016 at 15:04, vdpyatkov wrote:
> Hi Anil,
>
> I tred to reproduce your case, but got expected behavior (indexes are
>
HI,
I am still seeing no index used. Can you verify the below query, please?
explain select * from (
( select * from Person p join table(joinId varchar(10) =
('anilkd1','anilkd2')) i on p.id = i.joinId)
UNION
(select * from Person p join table(name varchar(10) = ('Anil1', 'Anil5')) i
on p.name =
Thank you. You saved me time.
May I know the working Ignite version? I see it is an issue in the H2 DB itself.
Thanks.
On 15 November 2016 at 19:30, Vladislav Pyatkov
wrote:
> Hi Anil,
>
> You are right. I have checked this on a not-yet-released version, but in 7.0.0
> indexes are not used by
Hi,
Is there any way to limit the number of results of an Ignite query so that the
results won't cause an out-of-memory error even if the query fails to limit
its results?
When a number of parallel queries are running on a node this is very crucial,
as each query would hold some results.
I see query timeout also not im
HI Val,
When max heap is set, an explicit eviction policy is not needed. Correct?
Thanks.
On 21 November 2016 at 23:42, vkulichenko
wrote:
> An entry will go to swap only if it's evicted from the cache [1]. If
> eviction
> policy is not configured, this will never happen.
>
> [1] https://apach
I see it is evicting the entries to swap.
There was an email chain about a memory leak in the case of off-heap +
swap + eviction. Do you remember it, and is there any workaround?
Thanks
On 24 November 2016 at 10:25, vkulichenko
wrote:
> It is needed, without eviction policy entries will not be ev
HI,
I have to implement export functionality for the results (which can be a large
number) of an Ignite query.
Is there any way to get the query results in a streaming manner? Are there
any better ways to achieve this instead of invoking the query a number
of times with offset and skip?
Thanks.
, vkulichenko
wrote:
> Hi Anil,
>
> While you iterate through the QueryCursor, the results will be fetched in
> pages and you will have only one page at a time in local client memory. The
> page size can be controlled via Query.setPageSize() method.
>
> -Val
>
>
>
>
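The page-by-page behaviour Val describes can be pictured with a plain-Java iterator that pulls one page at a time from a source, keeping only the current page in memory. This is purely illustrative (a sketch of the idea, not the Ignite QueryCursor implementation):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class PagedIteratorDemo {
    // Iterates a large result set while holding only `pageSize` rows at a time,
    // the way a cursor fetches pages from the server on demand.
    static class PagedIterator implements Iterator<Integer> {
        private final int total;          // total rows on the "server"
        private final int pageSize;
        private int fetched;              // rows already pulled from the source
        private List<Integer> page = new ArrayList<>();
        private int pos;                  // position within the current page

        PagedIterator(int total, int pageSize) {
            this.total = total;
            this.pageSize = pageSize;
        }

        @Override public boolean hasNext() {
            if (pos < page.size())
                return true;
            if (fetched >= total)
                return false;
            // Fetch the next page; the previous page becomes garbage.
            page = new ArrayList<>();
            pos = 0;
            for (int i = 0; i < pageSize && fetched < total; i++)
                page.add(fetched++);
            return true;
        }

        @Override public Integer next() {
            return page.get(pos++);
        }
    }

    public static void main(String[] args) {
        int count = 0;
        Iterator<Integer> it = new PagedIterator(10, 3);
        while (it.hasNext()) {
            it.next();
            count++;
        }
        System.out.println(count); // 10
    }
}
```

With this shape, iterating all rows never requires more than one page of memory on the client, which is why exporting a large result set via cursor iteration is safer than offset/limit re-queries.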
HI Val,
thanks for response.
The OOM is on the server. Does the server have separate memory blocks for each
query's results? How does that work when a number of queries are executed in
parallel?
Thanks
On 16 November 2016 at 02:31, vkulichenko
wrote:
> Do you mean OOM on the server or on the client?
Hi,
I am using KafkaStreamer to populate the data into the cache.
I see the following message in the logs:
java.lang.IllegalStateException: Grid is in invalid state to perform this
operation. It either not started yet or has already being or have stopped
[gridName=vertx.ignite.node.7f903539-ac5a-4cd1-9425-c0
:
> Anil,
>
> JDBC driver actually uses the same Ignite API under the hood, so the
> results
> are fetched in pages as well.
>
> As for the query parser question, I didn't quite understand what you mean.
> Can you please give more details?
>
> -Val
>
>
}
all cursor rows are fetched and sent to #1.
The next() method is just an iterator over the available rows, agreed? This is
not streaming from server to client. Am I wrong?
Thanks
On 2 December 2016 at 04:11, vkulichenko
wrote:
> Anil,
>
> While you iterate through the ResultSet on t
Hi Val,
Thanks for the clarification. I understand it now and will give it a try.
Thanks.
On 2 December 2016 at 23:17, vkulichenko
wrote:
> Anil,
>
> The JdbcQueryTask is executed each time the next page is needed. And the
> number of rows returned by the task is limited
HI,
Is there any way to update a column value in Ignite other than with an
EntryProcessor?
IgniteCache#invokeAll(keys, EntryProcessor) needs the keys in hand.
? thanks
Thanks
On 3 December 2016 at 08:49, Anil wrote:
> Hi Val,
>
> Thanks for clarification. I understand something and i will give a try.
>
> Thanks.
>
> On 2 December 2016 at 23:17, vkulichenko
> wrote:
>
>> Anil,
>>
>> The JdbcQueryTask is
Hi Saji,
The Ignite JDBC connection does not need any connection pooling. Creating a
connection in Ignite starts Ignite in client mode, which is expensive.
The Connection object is thread safe and can be used to run a number of queries
in parallel. The default connection timeout is 3 sec [1]
Hi,
I am using vertx-ignite to establish the Ignite cluster and see that the cache
is stopped in the logs.
May I know the cases in which the cache is stopped? I can share the logs if
required.
Thanks.
threads and then
> stop the grid.
>
> 2016-12-14 11:12 GMT+07:00 Anil :
>
>> HI,
>>
>> I have attached the logs. thanks
>>
>> Thanks.
>>
>> On 13 December 2016 at 18:59, dkarachentsev
>> wrote:
>>
>>> Hi,
>>>
>
HI,
How does the Ignite cluster internally behave when a node disconnects from the
cluster?
Let's say A, B, C are three nodes forming an Ignite cluster.
When A disconnects from the cluster, does the Ignite instance on A still run?
Will there now be two clusters, A and B & C?
Please clarify.
Thanks
Hi Anton,
Thank you very much.
On 14 December 2016 at 17:49, Anton Vinogradov wrote:
> Anil,
>
> This situation described here https://gridgain.readme.
> io/docs/network-segmentation
>
>
>
> On Wed, Dec 14, 2016 at 2:48 PM, Anil wrote:
>
>> HI,
>>
I see network segmentation in the logs. Ignore my previous question. Thanks.
On 14 December 2016 at 16:06, Anil wrote:
>
> You are correct. I am using vertx-ignite. I don't understand the reason for
> stopping Ignite.
>
> I am using KafkaStreamer and it must be stopped on a node w
.
Otherwise, it would lead to data problems. And I am using vertx-ignite.
Please share if there are other cluster managers better than Vert.x.
Thanks for your help.
- Anil
HI,
Are there any recommendations (other than affinity) for complex queries
like the one below?
SELECT DISTINCT b.id, b.filename, a1.name
FROM a a1
JOIN b
ON b.id = a1.id
LEFT JOIN (select a.id, max(rank) latest from a group by a.id) a2
ON a2.id = a1.id
Thanks.
Can someone point me to the Ignite XML configuration for network
segmentation?
I see the IgniteConfiguration property names for network segmentation and
the setter methods are different :)
Thanks
On 14 December 2016 at 18:20, Anil wrote:
> Hi Anton,
>
> Thank you very much.
>
>
>
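A minimal sketch of the Spring XML for this, assuming Apache Ignite's `IgniteConfiguration.setSegmentationPolicy` setter (in Spring XML the property name is simply the setter name without the `set` prefix, which may be the mismatch noted above):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Maps to setSegmentationPolicy(SegmentationPolicy).
         Possible values: STOP (default), RESTART_JVM, NOOP. -->
    <property name="segmentationPolicy" value="NOOP"/>
</bean>
```

Note that the policy only controls what a segmented node does with itself; full segmentation *resolvers* are not part of the open-source IgniteConfiguration.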
Thanks Val.
But the default STOP option does not look correct. There is always a chance of
rejoining the cluster.
But if the Ignite cache is stopped because of segmentation, rejoining has no
meaning.
Creating different clusters is still fine as long as the cache is not stopped
in case of network segmentation.
is the
Hi Val,
I agree with you on "separate clusters running", but Ignite is stopped.
Separate clusters then have no meaning. Agree?
Thanks
On 22 December 2016 at 00:25, vkulichenko
wrote:
> Anil,
>
> All this is very use case specific. STOP is used by default because it's
> th
do you say ? thanks
1. http://apache-ignite-users.70518.x6.nabble.com/Cache-stopped-td9503.html
Thanks.
On 23 December 2016 at 00:29, vkulichenko
wrote:
> Anil,
>
> Nothing is stopped. Once again, in case of segmentation you will have two
> independently running clusters. And in of
Hi Val,
I have created a complex query and it looks good in terms of the query plan,
but the query execution time is very high.
query -
SELECT
P.serialnumber,
iP.count,
cnt.itemnumber,
cnt.status,
cnt.enddate
FROM Product P
left JOIN ( SELECT serialnumber, COUNT(*) AS count FR
Hi,
Is it mandatory to use the key of a cache as part of the affinity key of
another cache?
I see all the examples on GitHub do the same.
In my scenario, the key of one cache (person) cannot be part of the affinity
key of another cache (person details). How can we achieve affinity?
Thanks.