Re: In which case will ignite run following sql?

2018-11-23 Thread yangjiajun
Hello.

I think I found the case. I ran a delete test and then saw such log entries.


ezhuravlev wrote
> Hi,
> 
> Ignite doesn't run a query like this internally. It's executed in the
> client-connector pool, which means that one of your JDBC or ODBC clients
> sent this query. How often do you see something like that?
> 
> Evgenii
> 
> Thu, 22 Nov 2018 at 15:13, yangjiajun <1371549332@>:
> 
>> Hello.
>>
>> I found a strange SQL statement in the Ignite logs. I didn't find such SQL in my
>> application. The table contains about 7 million rows, and such table-scan
>> queries can occupy a lot of heap memory. Is this an internal operation? Or
>> is there something wrong in my application?
>>
>> [16:58:53,139][WARNING][client-connector-#218][IgniteH2Indexing] Query
>> execution is too long [time=1188084 ms, sql='SELECT
>> __Z0._KEY __C0_0,
>> __Z0._VAL __C0_1
>> FROM PUBLIC.TABLE_6950_R_1_1 __Z0', plan=
>> SELECT
>> __Z0._KEY AS __C0_0,
>> __Z0._VAL AS __C0_1
>> FROM PUBLIC.TABLE_6950_R_1_1 __Z0
>> /* PUBLIC.TABLE_6950_R_1_1.__SCAN_ */
>> , parameters=[]]
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: In which case will ignite run following sql?

2018-11-23 Thread yangjiajun
Hello, thanks for the response.

I see 79 log entries like this in roughly 400,000 lines of Ignite logs. Such
queries target different tables, and some of them have WHERE conditions. These
log entries warn that the queries are taking a long time. My application only
runs SQL like 'select * from XXX'. I also tried to execute the above SQL in a
test program over a JDBC thin connection, but got the following exception:

Exception in thread "main" java.sql.SQLException: class
org.apache.ignite.binary.BinaryObjectException: Custom objects are not
supported
at
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:751)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:210)
at
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:473)
at spark.IgniteSql.main(IgniteSql.java:29)

Does Ignite's cache mechanism run such queries? BTW, I did not enable the
on-heap cache.




ezhuravlev wrote
> Hi,
> 
> Ignite doesn't run a query like this internally. It's executed in the
> client-connector pool, which means that one of your JDBC or ODBC clients
> sent this query. How often do you see something like that?
> 
> Evgenii
> 
> Thu, 22 Nov 2018 at 15:13, yangjiajun <1371549332@>:
> 
>> Hello.
>>
>> I found a strange SQL statement in the Ignite logs. I didn't find such SQL in my
>> application. The table contains about 7 million rows, and such table-scan
>> queries can occupy a lot of heap memory. Is this an internal operation? Or
>> is there something wrong in my application?
>>
>> [16:58:53,139][WARNING][client-connector-#218][IgniteH2Indexing] Query
>> execution is too long [time=1188084 ms, sql='SELECT
>> __Z0._KEY __C0_0,
>> __Z0._VAL __C0_1
>> FROM PUBLIC.TABLE_6950_R_1_1 __Z0', plan=
>> SELECT
>> __Z0._KEY AS __C0_0,
>> __Z0._VAL AS __C0_1
>> FROM PUBLIC.TABLE_6950_R_1_1 __Z0
>> /* PUBLIC.TABLE_6950_R_1_1.__SCAN_ */
>> , parameters=[]]
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: IgniteBiPredicate anonymous class - ClassNotFoundException

2018-11-23 Thread joseheitor
Thanks, Maxim - it is working.

Tips for others landing on this post... I fixed my problem by:

- Enabling peer class loading (peerClassLoadingEnabled=true) on each node
- Building a JAR with my model classes and manually deploying it in each
server node's /libs folder
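For reference, peer class loading is typically switched on in each node's Spring XML configuration; a minimal sketch (the property name matches the standard IgniteConfiguration setter) looks like this:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Allow closures/predicate classes to be loaded from peer nodes. -->
    <property name="peerClassLoadingEnabled" value="true"/>
</bean>
```

Note that peer class loading covers compute closures and predicates; model classes used for cache storage still usually need to be deployed on the server classpath, as described above.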



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [RESOLVED] Passing parameters to IgniteBiPredicate

2018-11-23 Thread joseheitor
Please ignore my previous post - it is working correctly:

String date = "2018-10-21";
ScanQuery<Integer, Transaction> filter = new ScanQuery<>(
new IgniteBiPredicate<Integer, Transaction>() {
@Override
public boolean apply(Integer key, Transaction trans) {
  return trans.getDate().equals(date);
}
}
);
List<Cache.Entry<Integer, Transaction>> result = database.getCache().query(filter).getAll();




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Does ignite keep loading result when the jdbc thin connection is dead?

2018-11-23 Thread yangjiajun
Hello.

Yes, I mean heap memory. I used DBeaver to run a table-scan query on a table
with millions of rows. My query caused Ignite to exhaust its heap, so I killed
my connection, but Ignite did not release its heap memory. I guess Ignite was
still trying to load the result of my query. Is my guess right?
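As an aside, result-set streaming on the thin JDBC driver is controlled by the documented `lazy` connection property; host and port below are placeholders:

```
jdbc:ignite:thin://127.0.0.1:10800?lazy=true
```

When lazy loading is off, the server tries to materialize the full result set, which for a multi-million-row scan can exhaust the heap on its own.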
 


ezhuravlev wrote
> Hi,
> 
> What do you mean by "Ignite does not release memory"? What kind of memory
> does it still consume? Do you mean heap?
> 
> Evgenii
> 
> Thu, 22 Nov 2018 at 15:55, yangjiajun <1371549332@>:
> 
>> Hello.
>>
>> I use a JDBC thin connection to access Ignite in lazy mode. Ignite does not
>> release memory after my query connection is dead. Does Ignite keep loading
>> the result when the connection dies?
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Does a Ignite Cache return size metrics for a cache in local mode

2018-11-23 Thread Evgenii Zhuravlev
Hi,

Answered you on stackoverflow:
https://stackoverflow.com/questions/53193997/does-a-ignite-cache-return-metrics-for-a-cache-in-local-mode/53453990#53453990

Evgenii

Mon, 19 Nov 2018 at 12:39, Paddy:

> I posted this on Stack Overflow a while back, but this is probably a better
> place for the question. It seems like Ignite does not return the cache size
> metrics for a cache in local mode. The code I used to test this in Ignite
> 2.6 is:
>
> IgniteConfiguration igniteConfig = new IgniteConfiguration();
> CacheConfiguration cacheConfig = new CacheConfiguration("testCache");
> cacheConfig.setStatisticsEnabled(true);
> cacheConfig.setCacheMode(CacheMode.LOCAL);
> igniteConfig.setCacheConfiguration(cacheConfig);
>
> try (Ignite ignite = Ignition.start(igniteConfig)) {
> IgniteCache<String, String> cache =
> ignite.<String, String>getOrCreateCache(cacheConfig.getName());
> cache.put("key", "val");
> cache.put("key2", "val2");
> cache.remove("key2");
>
> System.out.println(cache.localMetrics());
> }
>
> I get:
>
> CacheMetricsSnapshot [reads=0, puts=2, hits=0, misses=0, txCommits=0,
> txRollbacks=0, evicts=0, removes=1, putAvgTimeNanos=8054.916,
> getAvgTimeNanos=0.0, rmvAvgTimeNanos=3732.072, commitAvgTimeNanos=0.0,
> rollbackAvgTimeNanos=0.0, cacheName=testCache, offHeapGets=0,
> offHeapPuts=0,
> offHeapRemoves=0, offHeapEvicts=0, offHeapHits=0, offHeapMisses=0,
> offHeapEntriesCnt=1, heapEntriesCnt=0, offHeapPrimaryEntriesCnt=1,
> offHeapBackupEntriesCnt=1, offHeapAllocatedSize=0, size=0, keySize=0,
> isEmpty=false, dhtEvictQueueCurrSize=-1, txThreadMapSize=0, txXidMapSize=0,
> txCommitQueueSize=0, txPrepareQueueSize=0, txStartVerCountsSize=0,
> txCommittedVersionsSize=0, txRolledbackVersionsSize=0,
> txDhtThreadMapSize=0,
> txDhtXidMapSize=-1, txDhtCommitQueueSize=0, txDhtPrepareQueueSize=0,
> txDhtStartVerCountsSize=0, txDhtCommittedVersionsSize=-1,
> txDhtRolledbackVersionsSize=-1, isWriteBehindEnabled=false,
> writeBehindFlushSize=-1, writeBehindFlushThreadCnt=-1,
> writeBehindFlushFreq=-1, writeBehindStoreBatchSize=-1,
> writeBehindTotalCriticalOverflowCnt=-1, writeBehindCriticalOverflowCnt=-1,
> writeBehindErrorRetryCnt=-1, writeBehindBufSize=-1, totalPartitionsCnt=0,
> rebalancingPartitionsCnt=0, keysToRebalanceLeft=0, rebalancingKeysRate=0,
> rebalancingBytesRate=0, rebalanceStartTime=-1, rebalanceFinishTime=-1,
> rebalanceClearingPartitionsLeft=0, keyType=java.lang.Object,
> valType=java.lang.Object, isStoreByVal=true, isStatisticsEnabled=true,
> isManagementEnabled=false, isReadThrough=false, isWriteThrough=false,
> isValidForReading=true, isValidForWriting=true]
>
> This shows that the put & remove metrics seem to be working, but size is 0.
> I've tried both cache.metrics().getSize() and cache.localMetrics().getSize(),
> but they give the same result. If I change the cache mode to REPLICATED or
> PARTITIONED, then the cache size is correct.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Local messaging not happening asynchronously

2018-11-23 Thread Evgenii Zhuravlev
Hi,

It is an optimization: operations that can be executed locally are run in the
user thread. The same applies to cache operations.

First of all, do you really need to send the message to all nodes, including
the local one? You can send the message to the remote nodes only, which will
automatically resolve your problem.

Alternatively, you can create a separate thread pool for such operations and
invoke the send from there.
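A minimal sketch of the remote-only approach (the topic name, payload, executor, and handle() method are placeholders, and ignite is an already-started Ignite instance):

```java
// Send to remote nodes only, so the local listener is not invoked
// synchronously in the sender's thread.
ClusterGroup remotes = ignite.cluster().forRemotes();
ignite.message(remotes).sendOrdered("myTopic", payload, 0);

// Schedule the local share of the work explicitly on your own executor
// instead of letting it block sendOrdered():
executor.submit(() -> handle(payload));
```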

Evgenii

Wed, 21 Nov 2018 at 03:06, Bob Newcomer:

> Hello, would it be useful if I provided a test-case to help explain the
> situation?
>
>
> On Sat, Nov 17, 2018, at 11:27 AM, Bob Newcomer wrote:
>
> Hello,
>
> I have a distributed app that has an ignite client instance connected to a
> grid. Every app registers for the same topic and uses it to broadcast
> information to its peers and itself to perform async work in the ignite
> thread pool. The handler for this topic may take several minutes to perform
> its job, but I thought this would be OK because the docs for sendOrdered
> say:
>
> Note that local listeners are always executed in the public thread pool, no
> matter whether default or withAsync() mode is used. [1]
>
> [1] -
> https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/IgniteMessaging.html#sendOrdered-java.lang.Object-java.lang.Object-long-
>
> However, in my testing when I send to the whole cluster, the message sent
> to the local listener (the message topic handler registered in the same
> ignite client instance as the one sending) gets handled in the same thread
> as sendOrdered() and blocks the sendOrdered() until it has been completed.
>
> I had a look through the code and could not see an obvious way to make the
> local handler happen on an ignite thread pool instead of blocking the
> sender. How can I make the local handler get called in an ignite thread
> pool?
>
> Appreciate any assistance.
>
>
>


Re: Does ignite keep loading result when the jdbc thin connection is dead?

2018-11-23 Thread Evgenii Zhuravlev
Hi,

What do you mean by "Ignite does not release memory"? What kind of memory
does it still consume? Do you mean heap?

Evgenii

Thu, 22 Nov 2018 at 15:55, yangjiajun <1371549...@qq.com>:

> Hello.
>
> I use a JDBC thin connection to access Ignite in lazy mode. Ignite does not
> release memory after my query connection is dead. Does Ignite keep loading
> the result when the connection dies?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: LIKE operator issue in Ignite Database

2018-11-23 Thread Evgenii Zhuravlev
Hi,

Do you have a small reproducer for this? I've tried to create a simple
one(see attached), but it works for me.

Evgenii

Thu, 22 Nov 2018 at 12:18, Shravya Nethula <
shravya.neth...@aline-consulting.com>:

> Hi,
>
> We are trying to execute the following select queries with LIKE operator:
>
> SELECT * FROM u24_166004_ltb
> where customer_id like ('%059')
> ORDER BY id asc OFFSET 0 ROWS FETCH NEXT 1012 ROWS ONLY
> There are records in the table where customer_id ends with '059', but the
> query still returns an empty set.
>
> SELECT * FROM u24_166004_ltb
> where customer_id like ('%9')
> ORDER BY id asc OFFSET 0 ROWS FETCH NEXT 1012 ROWS ONLY
> This query returns correct results.
>
> The LIKE operator is not working as expected every time. Why is that? Is the
> LIKE operator fully supported in Ignite? Is there any equivalent for LIKE
> in Ignite?
> Please go through the enclosed attachments for more details.
>
> Regards,
> Shravya Nethula. LIKE_with_(%059).jpeg
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1559/LIKE_with_%28%25059%29.jpeg>
>
> LIKE_with_(%9).jpeg
> <
> http://apache-ignite-users.70518.x6.nabble.com/file/t1559/LIKE_with_%28%259%29.jpeg>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


SqlLikeTest.java
Description: Binary data


Re: Ignite startup is very slow

2018-11-23 Thread Evgenii Zhuravlev
Hi,

How much heap do you have?

From the logs I see that, before this, the cluster was not stopped properly
and the checkpoint was not saved to disk. Because of that, after the start,
the nodes began applying WAL changes
(https://apacheignite.readme.io/docs/write-ahead-log). That took 1491578 ms,
probably because you have a very small heap. Also, messages like "Possible
too long JVM pause: 2710 milliseconds." are usually a symptom of long GC
pauses.

Evgenii

Fri, 23 Nov 2018 at 08:04, kvenkatramtreddy:

> Please help.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: In which case will ignite run following sql?

2018-11-23 Thread Evgenii Zhuravlev
Hi,

Ignite doesn't run a query like this internally. It's executed in the
client-connector pool, which means that one of your JDBC or ODBC clients
sent this query. How often do you see something like that?

Evgenii

Thu, 22 Nov 2018 at 15:13, yangjiajun <1371549...@qq.com>:

> Hello.
>
> I found a strange SQL statement in the Ignite logs. I didn't find such SQL in my
> application. The table contains about 7 million rows, and such table-scan
> queries can occupy a lot of heap memory. Is this an internal operation? Or
> is there something wrong in my application?
>
> [16:58:53,139][WARNING][client-connector-#218][IgniteH2Indexing] Query
> execution is too long [time=1188084 ms, sql='SELECT
> __Z0._KEY __C0_0,
> __Z0._VAL __C0_1
> FROM PUBLIC.TABLE_6950_R_1_1 __Z0', plan=
> SELECT
> __Z0._KEY AS __C0_0,
> __Z0._VAL AS __C0_1
> FROM PUBLIC.TABLE_6950_R_1_1 __Z0
> /* PUBLIC.TABLE_6950_R_1_1.__SCAN_ */
> , parameters=[]]
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Why drop table operations affect all update operations.

2018-11-23 Thread Evgenii Zhuravlev
Hi,

A "DROP TABLE" operation also destroys the underlying cache. This change
must be properly synchronized between all the nodes, which is why it sends a
special discovery message with a new minor topology version to all nodes.
Without this, not all nodes in the cluster would know that the cache was
stopped, which could lead to undefined behavior where some nodes still
process operations for this cache.

Best Regards,
Evgenii

Thu, 22 Nov 2018 at 16:23, yangjiajun <1371549...@qq.com>:

> Hello.
>
> According to my understanding, a DROP TABLE command will block all operations
> on the same table. But in my experiment, all update operations slow down when
> there is a DROP TABLE operation, even when those operations aren't targeting
> the same table. I guess the DROP TABLE statement causes Ignite to synchronize
> all update operations. Is my guess right?
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Query Slow

2018-11-23 Thread Evgenii Zhuravlev
Hi,

From your description, it looks like it could be a long GC pause. How much
heap do you have? Can you collect GC logs for analysis?

Evgenii

Fri, 23 Nov 2018 at 22:02, Skollur:

> Hello
>
> I have an Ignite server running with 2 nodes. Each node has 30 cache stores,
> each with one table, configured as REPLICATED. I found that queries take
> longer to return when 100 simultaneous requests are sent to Ignite. In the
> first iteration the query is quick, but as the number of hits grows, the same
> query with different parameters takes longer. Note that I have an INDEX for
> each of the joins; the query takes about 400 milliseconds for a single
> request, but about 30 seconds when multiple simultaneous requests are being
> processed. Is there any solution?
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Ignite Query Slow

2018-11-23 Thread Skollur
Hello

I have an Ignite server running with 2 nodes. Each node has 30 cache stores,
each with one table, configured as REPLICATED. I found that queries take
longer to return when 100 simultaneous requests are sent to Ignite. In the
first iteration the query is quick, but as the number of hits grows, the same
query with different parameters takes longer. Note that I have an INDEX for
each of the joins; the query takes about 400 milliseconds for a single
request, but about 30 seconds when multiple simultaneous requests are being
processed. Is there any solution?

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite Query Slow

2018-11-23 Thread Skollur
Hello

I have an Ignite server running with 2 nodes. Each node has 30 cache stores,
each with one table, configured as REPLICATED. I found that queries take
longer to return when 100 simultaneous requests are sent to Ignite. In the
first iteration the query is quick, but as the number of hits grows, the same
query with different parameters takes longer. Note that I have an INDEX for
each of the joins; the query is quick for one request but takes longer when
multiple requests run simultaneously. Is there any solution?

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Suppressing reflective serialisation in Ignite

2018-11-23 Thread Pavel Tupitsyn
Sorry, I'm wrong.
You can of course just call writer.WriteObject(..) as a fallback; there is no
need for BinaryFormatter.
Your only goal is to throw an exception for *some* types, right?

On Fri, Nov 23, 2018 at 8:37 PM Pavel Tupitsyn  wrote:

> Hi Raymond,
>
> Exceptions implement ISerializable, and are serialized that way in Ignite.
> However, there is no "fallback" mechanism that you ask about (I should file
> a ticket, this is a good catch).
>
> So the workaround is to use .NET BinaryFormatter to serialize
> non-IBInarizable types and write them as byte array (WriteByteArray /
> ReadByteArray).
> Does this work for you?
>
> Thanks,
> Pavel
>
> On Fri, Nov 23, 2018 at 2:05 AM Raymond Wilson 
> wrote:
>
>> Hi Pavel,
>>
>> I have been using your suggestion with good effect. Thank you again for
>> suggesting it.
>>
>> I just ran into a case where an exception was thrown stating that
>> System.AggregateException could not be serialised by this class.
>>
>> While the BinarizableSerializer is good at ensuring all our serialization
>> contexts are covered with IBinarySerializer, it seems obvious that things
>> like Exception derivatives cannot be. How would you modify this approach
>> to make exceptions use the default reflective serialization?
>>
>> Thanks,
>> Raymond.
>>
>>
>> On Tue, Nov 13, 2018 at 9:20 AM Pavel Tupitsyn 
>> wrote:
>>
>>> Hi Raymond,
>>>
>>> Yes, you can do that by implementing IBinarySerializer like this:
>>>
>>> class BinarizableSerializer : IBinarySerializer
>>> {
>>>     public void WriteBinary(object obj, IBinaryWriter writer)
>>>     {
>>>         if (obj is IBinarizable bin)
>>>         {
>>>             bin.WriteBinary(writer);
>>>             return; // serialized successfully; don't fall through
>>>         }
>>>
>>>         throw new Exception("Not IBinarizable: " + obj.GetType());
>>>     }
>>>
>>>     public void ReadBinary(object obj, IBinaryReader reader)
>>>     {
>>>         if (obj is IBinarizable bin)
>>>         {
>>>             bin.ReadBinary(reader);
>>>             return; // deserialized successfully; don't fall through
>>>         }
>>>
>>>         throw new Exception("Not IBinarizable: " + obj.GetType());
>>>     }
>>> }
>>>
>>> Then set it globally in IgniteConfiguration:
>>>
>>> var cfg = new IgniteConfiguration
>>> {
>>> BinaryConfiguration = new BinaryConfiguration
>>> {
>>> Serializer = new BinarizableSerializer()
>>> }
>>> };
>>>
>>>
>>>
>>> On Thu, Nov 8, 2018 at 9:28 PM Raymond Wilson <
>>> raymond_wil...@trimble.com> wrote:
>>>
 Hi Denis,

 Yes, I understand reflective serialisation uses binarizable
 serialisation under the hood (and it's fast and easy to use). But it has
 issues in the face of schema changes so it is better (and recommended in
 the Ignite docs) to use Binarizable serialization for production.

 I want to make sure all my serialization contexts are covered by
 explicit IBinarizable serialization. A simple approach would be to turn off
 reflective serialization to ensure cases where we have missed it fail
 explicitly. Is that possible?

 Thanks,
 Raymond.


 On Thu, Nov 8, 2018 at 1:10 PM Denis Magda  wrote:

> Hi Raymond,
>
> If this page is to be believed, reflective serialization converts an
> object to the binary format (sort of implicitly marked with the IBinarizable
> interface):
>
> https://apacheignite-net.readme.io/docs/serialization#section-ignite-reflective-serialization
>
> --
> Denis
>
>
> On Tue, Nov 6, 2018 at 1:01 PM Raymond Wilson <
> raymond_wil...@trimble.com> wrote:
>
>> We are currently converting our use of Ignite reflective
>> serialisation to use IBinarizable based serialisation [using Ignite 2.6
>> with c# client]
>>
>> What I would like to do is enforce a policy of not using reflective
>> serialisation to ensure we have all the bases covered.
>>
>> Is there a way to do this in Ignite?
>>
>> Thanks,
>> Raymond.
>>
>>


Re: Suppressing reflective serialisation in Ignite

2018-11-23 Thread Pavel Tupitsyn
Hi Raymond,

Exceptions implement ISerializable, and are serialized that way in Ignite.
However, there is no "fallback" mechanism that you ask about (I should file
a ticket, this is a good catch).

So the workaround is to use .NET BinaryFormatter to serialize
non-IBInarizable types and write them as byte array (WriteByteArray /
ReadByteArray).
Does this work for you?

Thanks,
Pavel

On Fri, Nov 23, 2018 at 2:05 AM Raymond Wilson 
wrote:

> Hi Pavel,
>
> I have been using your suggestion with good effect. Thank you again for
> suggesting it.
>
> I just ran into a case where an exception was thrown stating that
> System.AggregateException could not be serialised by this class.
>
> While the BinarizableSerializer is good at ensuring all our serialization
> contexts are covered with IBinarySerializer, it seems obvious that things
> like Exception derivatives cannot be. How would you modify this approach
> to make exceptions use the default reflective serialization?
>
> Thanks,
> Raymond.
>
>
> On Tue, Nov 13, 2018 at 9:20 AM Pavel Tupitsyn 
> wrote:
>
>> Hi Raymond,
>>
>> Yes, you can do that by implementing IBinarySerializer like this:
>>
>> class BinarizableSerializer : IBinarySerializer
>> {
>>     public void WriteBinary(object obj, IBinaryWriter writer)
>>     {
>>         if (obj is IBinarizable bin)
>>         {
>>             bin.WriteBinary(writer);
>>             return; // serialized successfully; don't fall through
>>         }
>>
>>         throw new Exception("Not IBinarizable: " + obj.GetType());
>>     }
>>
>>     public void ReadBinary(object obj, IBinaryReader reader)
>>     {
>>         if (obj is IBinarizable bin)
>>         {
>>             bin.ReadBinary(reader);
>>             return; // deserialized successfully; don't fall through
>>         }
>>
>>         throw new Exception("Not IBinarizable: " + obj.GetType());
>>     }
>> }
>>
>> Then set it globally in IgniteConfiguration:
>>
>> var cfg = new IgniteConfiguration
>> {
>> BinaryConfiguration = new BinaryConfiguration
>> {
>> Serializer = new BinarizableSerializer()
>> }
>> };
>>
>>
>>
>> On Thu, Nov 8, 2018 at 9:28 PM Raymond Wilson 
>> wrote:
>>
>>> Hi Denis,
>>>
>>> Yes, I understand reflective serialisation uses binarizable
>>> serialisation under the hood (and it's fast and easy to use). But it has
>>> issues in the face of schema changes so it is better (and recommended in
>>> the Ignite docs) to use Binarizable serialization for production.
>>>
>>> I want to make sure all my serialization contexts are covered by
>>> explicit IBinarizable serialization. A simple approach would be to turn off
>>> reflective serialization to ensure cases where we have missed it fail
>>> explicitly. Is that possible?
>>>
>>> Thanks,
>>> Raymond.
>>>
>>>
>>> On Thu, Nov 8, 2018 at 1:10 PM Denis Magda  wrote:
>>>
 Hi Raymond,

 If this page is to be believed, reflective serialization converts an
 object to the binary format (sort of implicitly marked with the IBinarizable
 interface):

 https://apacheignite-net.readme.io/docs/serialization#section-ignite-reflective-serialization

 --
 Denis


 On Tue, Nov 6, 2018 at 1:01 PM Raymond Wilson <
 raymond_wil...@trimble.com> wrote:

> We are currently converting our use of Ignite reflective serialisation
> to use IBinarizable based serialisation [using Ignite 2.6 with c# client]
>
> What I would like to do is enforce a policy of not using reflective
> serialisation to ensure we have all the bases covered.
>
> Is there a way to do this in Ignite?
>
> Thanks,
> Raymond.
>
>


Ignite 2.7

2018-11-23 Thread Andrey Davydov
Hello,

When will the 2.7 release be available? We have some problems with
https://issues.apache.org/jira/browse/IGNITE-7972 in the CI process of our
system (the test-runner server is not powerful enough, and we get an NPE from
Ignite on every third build attempt =((( ).

Andrey.



Re: Read SQL table via cache api

2018-11-23 Thread yongjec
Hi Andrei,

I captured the logs below with the -DIGNITE_QUIET=false flag. There is a
5-hour difference between the two time zones.


Server Log 
ignite-ef771a6b.log
 
 


Client Log
ignite-86fd5276.log
 
 

Thank you,
Yong



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Continuous queries and duplicates

2018-11-23 Thread Sobolewski, Krzysztof
Hi,

I want to use a ContinuousQuery, and there is a slight issue with how it
transitions from the initial query to the notification phase. It turns out
that if additions to the cache happen while the continuous query runs, an
entry may be reported twice: once by the initial query and once by the
listener. This is confirmed by experiment, BTW :) The initial query in this
case is an SqlQuery.

So my question is: is this intentional, or is it a bug? Is there something I
can do to mitigate it? Is this an issue of isolation level?

Thanks a lot for any pointers :)
-KS



Your Personal Data: We may collect and process information about you that may 
be subject to data protection laws. For more information about how we use and 
disclose your personal data, how we protect your information, our legal basis 
to use your information, your rights and who you can contact, please refer to: 
www.gs.com/privacy-notices


Re: IgniteBiPredicate anonymous class - ClassNotFoundException

2018-11-23 Thread Maxim.Pudov
I guess you have 2 nodes, but only one of them has that class in its
classpath.
Check out this article:
https://apacheignite.readme.io/docs/zero-deployment#section-peer-class-loading



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Read SQL table via cache api

2018-11-23 Thread aealexsandrov
Hi,

Could you please set -DIGNITE_QUIET=false to turn off the quiet mode and
re-attach the logs?

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Read SQL table via cache api

2018-11-23 Thread yongjec
Another observation is that if I take out the effective_date field
(Timestamp) from the key, and use only the Int and Double columns as the
primary keys, the server finds it without any problem.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Read SQL table via cache api

2018-11-23 Thread yongjec
Hi Andrei,

Thank you for your reply. 

Here is the server log. However, I do not see any relevant information in it.
The cache get request was made from the client between line 117 (where the
client joins the topology) and line 127 (where it leaves the topology).

ignite-db357bff.log
 
 

Regards,
Yong



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Does it affect the put, get, query operation when add or remove a node?

2018-11-23 Thread Justin Ji
Great, thank you very much!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Read SQL table via cache api

2018-11-23 Thread aealexsandrov
Hi,

I don't think it's possible to understand the reason without logs. Please
attach them here; alternatively, if you can reproduce the issue, you may file
a Jira ticket and attach the logs there with a full description of the
problem.

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Does it affect the put, get, query operation when add or remove a node?

2018-11-23 Thread aealexsandrov
Hi,

The general answer is yes, because data rebalancing will start and you will
not be able to process data until the partition map exchange is completed.

https://apacheignite.readme.io/docs/rebalancing
https://cwiki.apache.org/confluence/display/IGNITE/%28Partition+Map%29+Exchange+-+under+the+hood

However, in the case of persistence, you will have a baseline topology. In
that case, rebalancing starts only when the baseline topology changes
(adding/removing data nodes). Other node starts/stops will not affect your
operations.

https://apacheignite.readme.io/docs/baseline-topology

BR,
Andrei



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: can ignite accelerate sparkOnHive

2018-11-23 Thread Jörn Franke
I think the more important question is why you need this. There are many
different ways of accelerating a warehouse, depending on what you want to
achieve.

> On 23.11.2018 at 07:56, lk_hadoop wrote:
> 
> hi all,
> I think using Hive as a DW and using Spark to do some OLAP on Hive is quite
> common. So, can Ignite accelerate Spark-on-Hive? Is there something special
> to do for this, or is all I should do just to configure HDFS as an IGFS
> secondary file system?
>  
> 2018-11-23
> lk_hadoop