Re: IN Query

2016-10-27 Thread Anil
Hi Val,

The example below combines multiple IN queries with AND but not OR, correct?

SqlQuery with a table join worked for the IN query, but the following prepared
statement is not working.

List<String> inParameter = new ArrayList<>();
inParameter.add("8446ddce-5b40-11e6-85f9-005056a90879");
inParameter.add("f5822409-5b40-11e6-ae7c-005056a91276");
inParameter.add("9f445a19-5b40-11e6-ab1a-005056a95c7a");
inParameter.add("fd12c96f-5b40-11e6-83f6-005056a947e8");
PreparedStatement statement = conn.prepareStatement(
    "SELECT p.name FROM Person p join table(joinId VARCHAR(25) = ?) k on p.id = k.joinId");
statement.setObject(1, inParameter.toArray());
ResultSet rs = statement.executeQuery();

Thanks for your help.


On 28 October 2016 at 01:18, vkulichenko 
wrote:

> Hi,
>
> You can join multiple virtual tables in the same way:
>
> select p.name from Person p
> join table(id bigint = ?) i on p.id = i.id
> join table(name bigint = ?) j on p.name = j.name
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/IN-Query-tp8551p8562.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: BinaryObject pros/cons

2016-10-27 Thread Valentin Kulichenko
Cross-posting this to dev list.

Vladimir,

To be honest, I don't see much difference between null values for objects
and zero values for primitives. From BinaryObject semantics standpoint,
both are default values for corresponding types. These values will be
returned from the BinaryObject.field() method regardless of whether we
actually save them in the byte array or not. Having said that, why don't we
just skip them during write?

Your optimization will still be useful though, because there are often a lot
of ints and longs that are not zeros, but are still small enough to fit in 1-2
bytes. We already added such compaction in direct message marshaling and it
reduced overall traffic by around 30%.
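
For readers curious what that kind of compaction looks like, here is a minimal
varint-style sketch (an illustration only, not Ignite's actual marshaling code):
values under 128 fit in one byte, values under 16384 in two.

class VarIntCodec {
    /** Writes 'val' into 'buf' at 'pos' using 7 bits per byte; returns the next free position. */
    static int writeVarInt(byte[] buf, int pos, int val) {
        while ((val & ~0x7F) != 0) {
            buf[pos++] = (byte) ((val & 0x7F) | 0x80); // low 7 bits plus a continuation flag
            val >>>= 7;
        }
        buf[pos++] = (byte) val; // final byte has the high bit clear
        return pos;
    }

    /** Reads a value written by writeVarInt starting at 'pos'. */
    static int readVarInt(byte[] buf, int pos) {
        int val = 0;
        for (int shift = 0; ; shift += 7, pos++) {
            val |= (buf[pos] & 0x7F) << shift;
            if ((buf[pos] & 0x80) == 0)
                return val;
        }
    }
}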

-Val


On Thu, Oct 27, 2016 at 2:21 PM, Vladimir Ozerov 
wrote:

> Hi,
>
> I am not very concerned with null fields overhead, because usually it
> won't be significant. However, there is a problem with zeros. User object
> might have lots of int/long zeros, this is not uncommon. And each zero will
> consume 4-8 additional bytes. We probably will implement special
> optimization which will write such fields in special compact format.
>
> Vladimir.
>
> On Thu, Oct 27, 2016 at 10:55 PM, vkulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
>> Hi,
>>
>> Yes, null values consume memory. I believe this can be optimized, but I
>> haven't seen issues with this so far. Unless you have hundreds of fields
>> most of which are nulls (very rare case), the overhead is minimal.
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/BinaryObject-pros-cons-tp8541p8563.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: BinaryObject pros/cons

2016-10-27 Thread Shawn Du
 

Hi,

 

In one of our projects, we compact Java objects that have lots of long zeros.
It saves a great deal of disk space during serialization.

Slow disk I/O operations thus become fast CPU operations and performance is
improved.

The data compaction algorithm is inspired by this paper:
http://www.vldb.org/pvldb/vol8/p1816-teller.pdf.
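
A rough sketch of the idea behind that kind of compaction (an illustration only,
not the project's actual code): XOR each value against its predecessor so that
repeated values, including long runs of zeros, collapse to zero deltas that cost
only a flag.

import java.io.DataOutput;
import java.io.IOException;

class ZeroDeltaCompactor {
    /** Writes values as XOR deltas; repeated values (e.g. runs of zeros) cost one flag each. */
    static void compact(long[] values, DataOutput out) throws IOException {
        long prev = 0L;
        for (long v : values) {
            long delta = v ^ prev;        // identical consecutive values give delta == 0
            out.writeBoolean(delta != 0); // one-byte flag in this sketch; a real codec packs bits
            if (delta != 0)
                out.writeLong(delta);     // the paper additionally trims leading/trailing zero bits
            prev = v;
        }
    }
}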

 

Thanks

Shawn

From: Vladimir Ozerov [mailto:voze...@gridgain.com]
Sent: 28 October 2016 5:21
To: user@ignite.apache.org
Subject: Re: BinaryObject pros/cons

 

Hi,

 

I am not very concerned with null fields overhead, because usually it won't be 
significant. However, there is a problem with zeros. User object might have 
lots of int/long zeros, this is not uncommon. And each zero will consume 4-8 
additional bytes. We probably will implement special optimization which will 
write such fields in special compact format.

 

Vladimir.

 

On Thu, Oct 27, 2016 at 10:55 PM, vkulichenko  > wrote:

Hi,

Yes, null values consume memory. I believe this can be optimized, but I
haven't seen issues with this so far. Unless you have hundreds of fields
most of which are nulls (very rare case), the overhead is minimal.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/BinaryObject-pros-cons-tp8541p8563.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

 



Re: Any plan for a book on Apache Ignite

2016-10-27 Thread Denis Magda
Prachi,

Would you add a reference to this book [1] under the “Community” -> “Resources”
section? It contains a lot of useful and valuable information that people who are
starting to work with Ignite will benefit from.

[1] https://leanpub.com/ignite 

—
Denis

> On Sep 18, 2016, at 4:28 AM, srecon  wrote:
> 
> The book "High performance in-memory computing with Apache Ignite" has been
> released last night. The book is available from the Lean pub in three
> different formats by the following  URL   
> 
> https://leanpub.com/ignite.
> 
> Kind regards,
>  Shamim Ahmed.
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Any-plan-for-a-book-on-Apache-Ignite-tp357p7821.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Freshest Articles About Ignite

2016-10-27 Thread Denis Magda
Igniters,

Just want to share some interesting articles I've come across recently.
They might make good reading for your upcoming weekend.

Deep dive into Apache Ignite architectures and components:
http://www.infoworld.com/article/3135070/data-center/fire-up-big-data-processing-with-apache-ignite.html
 


Building CEP solution with Apache Ignite and Apache Storm:
https://www.javacodegeeks.com/2016/10/complex-event-processing-cep-apache-storm-apache-ignite.html
 


Enjoy,
Denis

Re: aiex webSocket error

2016-10-27 Thread Alec Lee

When I inspect in Chrome, the error looks like this:

ocLazyLoad.componentLoaded Array[3]
angular.js:11706 ocLazyLoad.moduleLoaded toaster
4livereload.js?host=localhost:74 WebSocket connection to 'ws://localhost:35729/livereload' failed:
    Error in connection establishment: net::ERR_CONNECTION_REFUSED
    exports.Connector.Connector.connect @ livereload.js?host=localhost:74
4livereload.js?host=localhost:74 WebSocket connection to 'ws://localhost:35729/livereload' failed:
    Error in connection establishment: net::ERR_CONNECTION_REFUSED
    exports.Connector.Connector.connect @ livereload.js?host=localhost:74
    (anonymous function) @ livereload.js?host=localhost:55
    (anonymous function) @ livereload.js?host=localhost:1124


thanks



> On Oct 25, 2016, at 11:33 AM, vkulichenko  
> wrote:
> 
> Hi,
> 
> Looks like the image is not loading. Can you describe the issue in more
> detail? Are there any exceptions on client and/or server side?
> 
> -Val
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/aiex-webSocket-error-tp8451p8484.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: BinaryObject pros/cons

2016-10-27 Thread Vladimir Ozerov
Hi,

I am not very concerned with null fields overhead, because usually it won't
be significant. However, there is a problem with zeros. User object might
have lots of int/long zeros, this is not uncommon. And each zero will
consume 4-8 additional bytes. We probably will implement special
optimization which will write such fields in special compact format.

Vladimir.

On Thu, Oct 27, 2016 at 10:55 PM, vkulichenko  wrote:

> Hi,
>
> Yes, null values consume memory. I believe this can be optimized, but I
> haven't seen issues with this so far. Unless you have hundreds of fields
> most of which are nulls (very rare case), the overhead is minimal.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/BinaryObject-pros-cons-tp8541p8563.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Apache Spark & Ignite Integration

2016-10-27 Thread vkulichenko
Hi,

There is no direct support for data frames or data sets right now. This is
planned for the future though.

For now you can convert an IgniteRDD to a data set or any other RDD (IgniteRDD is
an extension of RDD), but you will lose all the advantages. Both the savePairs
and sql methods are available only on IgniteRDD.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Apache-Spark-Ignite-Integration-tp8556p8568.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite + Spark Streaming on Amazon EMR Exception

2016-10-27 Thread vkulichenko
Well, the exception is actually coming from Spark, not Ignite, so the issue
is most likely there. The system property I suggested should fix the issue in
any case, though. I'm not sure how this should be done in EMR.
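
If the JVM flags cannot be passed on EMR, one possible workaround is to set the
property programmatically, as early as possible in the driver's main method (a
sketch only; it works only if nothing has touched the networking stack yet, and
the class name is hypothetical):

public class DriverMain {
    public static void main(String[] args) {
        // Must run before Spark or Ignite open any sockets, otherwise the JVM
        // may already have initialized networking with IPv6 preferred.
        System.setProperty("java.net.preferIPv4Stack", "true");

        // ... start the Spark streaming job / Ignite as usual ...
    }
}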

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Spark-Streaming-on-Amazon-EMR-Exception-tp8410p8569.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Does read-through work with replicated caches ?

2016-10-27 Thread vkulichenko
Ignite doesn't do merge, see my explanation here:
http://apache-ignite-users.70518.x6.nabble.com/Cluster-breaks-down-on-network-failure-td8549.html

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Does-read-through-work-with-replicated-caches-tp8547p8567.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: PeerClassLoading for Cache Load

2016-10-27 Thread vkulichenko
This is not correct. Peer class loading should be used only for Compute Grid
(closures and distributed tasks). Classes that are part of cache
configuration (cache store implementations, eviction and expiry policies,
etc.) must be explicitly deployed on all server nodes. For model classes
that are saved in cache Ignite uses binary format, so these are not required
to be deployed on server classpath at all.
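
A minimal sketch of that split (assuming a programmatic node start; the class
name is hypothetical):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class PeerLoadingExample {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Helps only for compute closures/tasks: their classes can be shipped to remote nodes.
        cfg.setPeerClassLoadingEnabled(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            ignite.compute().broadcast(() -> System.out.println("Closure class arrived via peer class loading"));
        }

        // By contrast, a CacheStore or eviction policy referenced from a CacheConfiguration
        // must already be present on every server node's classpath.
    }
}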

-Val 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/PeerClassLoading-for-Cache-Load-tp8545p8565.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Inject the data through IgniteDataStreamer

2016-10-27 Thread vkulichenko
Hi,

It can be any class that is serialized by the binary marshaller. In your case
it's most likely one of your classes that are streamed via the streamer and/or
stored in the cache.
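
For context, typical streamer usage looks like this (a sketch assuming an
already-started Ignite instance 'ignite', a cache named "personCache" and a
plain Person POJO, all hypothetical):

try (IgniteDataStreamer<Long, Person> streamer = ignite.dataStreamer("personCache")) {
    streamer.allowOverwrite(true);                        // optional: overwrite existing keys
    for (long i = 0; i < 1_000_000; i++)
        streamer.addData(i, new Person(i, "name-" + i));  // Person must be binary-marshallable
}   // close() flushes any remaining buffered entries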

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Inject-the-data-through-IgniteDataStreamer-tp8505p8564.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: BinaryObject pros/cons

2016-10-27 Thread vkulichenko
Hi,

Yes, null values consume memory. I believe this can be optimized, but I
haven't seen issues with this so far. Unless you have hundreds of fields
most of which are nulls (very rare case), the overhead is minimal.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/BinaryObject-pros-cons-tp8541p8563.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: jvm setting to turn off the Garbage collector

2016-10-27 Thread vkulichenko
Abhishek,

There is no way to entirely disable GC.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/jvm-setting-to-turn-off-the-Garbage-collector-tp8508p8561.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: MapReduce with Apache-Ignite

2016-10-27 Thread vkulichenko
lalit wrote
> You've mentioned that the spilling task is being worked on. Does it mean in the
> current version, because of eviction, we may lose mapper output? Or will the
> node crash with an out of memory error?

In this case Ignite will consume all the available memory and crash with
OOME.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/MapReduce-with-Apache-Ignite-tp8007p8560.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: jvm setting to turn off the Garbage collector

2016-10-27 Thread Abhishek Jain
Thanks Val for your response. We want to perform some memory analysis.

Regards
Abhishek

On Wed, Oct 26, 2016 at 2:47 PM, vkulichenko 
wrote:

> Hi Abhishek,
>
> If you disable GC, you will run out of memory very quickly. Can you please
> clarify why you want to do this?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.705
> 18.x6.nabble.com/jvm-setting-to-turn-off-the-Garbage-
> collector-tp8508p8520.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Ignite + Spark Streaming on Amazon EMR Exception

2016-10-27 Thread Geektimus
Hi Val,

Thanks for your suggestion. The thing at this point is that this is running
inside EMR (Elastic MapReduce in Amazon) in embedded mode and I don't have
control over the system properties. I tried to set them inside a bootstrap
action for the cluster and found that this was deprecated for the kind of
instance I'm using. I also tried to use the new JSON configuration API for
EMR with:

{
  "Classification": "spark-env",
  "Configurations": [
    {
      "Classification": "export",
      "Properties": {
        "JAVA_HOME": "/usr/lib/jvm/java-1.8.0",
        "JAVA_OPTS": "-Djava.net.preferIPv4Stack=true",
        "java.net.preferIPv4Stack": "true"
      }
    }
  ],
  "Properties": {}
}

*Note:* This is just a fragment.

None of them work in the cluster; I'm still seeing IPs like "0:0:0:0:0:0:0:1%1"
or "0:0:0:0:0:0:0:1%lo". By the way, I found that the "%1" and "%lo" suffixes
are the scope IDs of an IPv6 address. I don't know if the issue is an
incompatibility between Ignite and Spark Streaming or just an error in how
Spark handles IPv6. FYI, in the Spark code I found this marvelous comment in
the method that is failing! ¬¬

// This is potentially broken - when dealing with ipv6 addresses for example, sigh ...
// but then hadoop does not support ipv6 right now.
// For now, we assume that if port exists, then it is valid - not check if it is an int > 0

I'm still trying to find a solution.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Spark-Streaming-on-Amazon-EMR-Exception-tp8410p8558.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.

Re: Killing a node under load stalls the grid with ignite 1.7

2016-10-27 Thread bintisepaha
Yes, I think you are right. Is there any setting we can use in write-behind
that will not lock the entries?

The use case we have is like this:

Parent table - Order (Order Cache)
Child Table - Trade (Trade Cache)

We only have write-behind on the Order cache, and when writing it we write to
both the order and trade tables. So we query the Trade cache from the Order
cache store's writeAll(), which is causing the above issue. We need to do this
because we cannot write a trade to the database without writing its order, due
to foreign key constraints and data integrity.

Do you have any recommendations to solve this problem? We cannot use
write-through. How do we make sure two tables are written in order if they
are in separate caches?

Thanks,
Binti



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Killing-a-node-under-load-stalls-the-grid-with-ignite-1-7-tp8130p8557.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Does read-through work with replicated caches ?

2016-10-27 Thread Marcus Simonsen
I am also interested in this topic as it's relevant to my use case.

Wouldn't you need to configure a segmentation policy?
https://gridgain.readme.io/docs/network-segmentation

Hazelcast had the concept of a merge strategy to reconcile data after recovery - I'm
not sure if Ignite has something similar.

-marcus


On Thu, Oct 27, 2016 at 7:59 AM, Kristian Rosenvold 
wrote:

> What we don't understand is that after a network failure, we see requests
> for data that was processed by node A (replicated cache running write
> through) that is not visible to node B. At the time of the request, node B
> has no in-memory knowledge of the data (since it was just created by Node
> A), and we would expect it to fall through to the database. This does not
> seem to happen ?
>
> This is ignite 1.7.0.
>
> Kristian
>
>
> 2016-10-27 13:20 GMT+02:00 Vladislav Pyatkov :
>
>> Hi,
>>
>> It will be working, because replicated cache implemented as partition
>> with number of backups equivalent to number of nodes.
>>
>> On Thu, Oct 27, 2016 at 2:13 PM, Kristian Rosenvold <
>> krosenv...@apache.org> wrote:
>>
>>> Does this configuration actually do anything with a replicated cache, if
>>> so what does it do ?
>>>
>>> Kristian
>>>
>>>
>>
>>
>> --
>> Vladislav Pyatkov
>>
>
>


Re: AssertionError

2016-10-27 Thread thammoud
Thank you. I've been trying to reproduce it, to no avail. Glad you found the
cause. I will keep an eye on it.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/AssertionError-tp8321p8554.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: org.h2.api.JavaObjectSerializer not found

2016-10-27 Thread flexvalley
Sorry, I'm new to this forum... I posted without subscribing before.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/org-h2-api-JavaObjectSerializer-not-found-tp8538p8553.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


IN Query

2016-10-27 Thread Anil
HI,

I am using a dynamic table join to support IN queries, as described in the Ignite
documentation.

How can I write a query with multiple IN clauses combined with an OR condition,
other than using UNION?

Ex : name IN ('a', 'b') OR id IN ('1', '2')
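
For reference, the UNION form mentioned above would look roughly like this with
the dynamic table join (a sketch only, mirroring the JDBC prepared-statement
style used elsewhere in this thread and assuming a connection 'conn' and
String-typed columns; each branch covers one IN list):

String sql =
    "SELECT p.name FROM Person p JOIN TABLE(name VARCHAR = ?) n ON p.name = n.name " +
    "UNION " +
    "SELECT p.name FROM Person p JOIN TABLE(id VARCHAR = ?) i ON p.id = i.id";

PreparedStatement stmt = conn.prepareStatement(sql);
stmt.setObject(1, new String[] {"a", "b"});   // values for name IN (...)
stmt.setObject(2, new String[] {"1", "2"});   // values for id IN (...)
ResultSet rs = stmt.executeQuery();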

Thanks


Re: Does read-through work with replicated caches ?

2016-10-27 Thread Kristian Rosenvold
What we don't understand is that after a network failure, we see requests
for data that was processed by node A (replicated cache running write
through) that is not visible to node B. At the time of the request, node B
has no in-memory knowledge of the data (since it was just created by Node
A), and we would expect it to fall through to the database. This does not
seem to happen ?

This is ignite 1.7.0.

Kristian


2016-10-27 13:20 GMT+02:00 Vladislav Pyatkov :

> Hi,
>
> It will be working, because replicated cache implemented as partition with
> number of backups equivalent to number of nodes.
>
> On Thu, Oct 27, 2016 at 2:13 PM, Kristian Rosenvold  > wrote:
>
>> Does this configuration actually do anything with a replicated cache, if
>> so what does it do ?
>>
>> Kristian
>>
>>
>
>
> --
> Vladislav Pyatkov
>


Re: Does read-through work with replicated caches ?

2016-10-27 Thread Vladislav Pyatkov
Hi,

It will work, because a replicated cache is implemented as a partitioned cache
with the number of backups equal to the number of nodes.
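
For readers wondering what the configuration in question looks like, a minimal
sketch (PersonCacheStore is a hypothetical CacheStore implementation):

CacheConfiguration<Long, Person> ccfg = new CacheConfiguration<>("personCache");
ccfg.setCacheMode(CacheMode.REPLICATED);  // internally: partitioned, with a backup on every node
ccfg.setReadThrough(true);                // cache misses fall through to the underlying store
ccfg.setWriteThrough(true);
ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PersonCacheStore.class));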

On Thu, Oct 27, 2016 at 2:13 PM, Kristian Rosenvold 
wrote:

> Does this configuration actually do anything with a replicated cache, if
> so what does it do ?
>
> Kristian
>
>


-- 
Vladislav Pyatkov


Does read-through work with replicated caches ?

2016-10-27 Thread Kristian Rosenvold
Does this configuration actually do anything with a replicated cache, if so
what does it do ?

Kristian


PeerClassLoading for Cache Load

2016-10-27 Thread Labard
Hi.
I want to use peerClassLoading for the cache configuration. I have an empty config on
the server nodes, and different configs with a few caches and persistent store
adapters on a few client nodes. All nodes run in one cluster.
Can I use peerClassLoading instead of deploying all the cache and adapter
classes on each node?

Thank you for your help, Labard.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/PeerClassLoading-for-Cache-Load-tp8545.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ignite used too much memory

2016-10-27 Thread Shawn Du
Hi Andrey Mashenkov,

 

I checked out pr/1101 and pr/1037 and tested both of them. Things may be
somewhat better, but the problem is not resolved.

 

This is my cache state. I think it is full, given code like this:

 

config.setSwapEnabled(swapEnable);
if (swapEnable)
{
    EvictionPolicy policy = new FifoEvictionPolicy(1);
    config.setEvictionPolicy(policy);
}

 

New entries should stay in memory and the old ones should be evicted to disk, so
memory usage should stop growing.

After my test, that is not the case. It still keeps growing fast, at a rate of
about 500 MB per 30 minutes.

I can understand that some extra memory is needed to maintain bookkeeping data,
but it leaks too fast.

 

Thanks

Shawn

 

 

(Visor cache statistics: eight PARTITIONED caches across 2 nodes -
onlinesessioncount_by_cityid(@c1), onlinesessioncount_by_regionid(@c2),
onlineviewcount_by_cityid(@c3), onlineviewcount_by_regionid(@c4),
onlinevisitorcount_by_cityid(@c5), onlinevisitorcount_by_regionid(@c6),
videoviewcount_by_cityid(@c7), videoviewcount_by_regionid(@c8) - each holding
on average 5000.00 heap entries and 0 off-heap entries per node, with all other
per-cache counters at 0.)

Re: ignite used too much memory

2016-10-27 Thread Shawn Du
BTW, I use ONHEAP_TIERED memory mode.

 

I don’t enable off heap.

 

Thanks

Shawn

 


Re: How to avoid data skew in collocate data with data

2016-10-27 Thread Vladislav Pyatkov
Hi,

One AffinityKey always binds to one node. That is what the AffinityKey means:
all entries with the same AffinityKey are stored on one node.
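
A minimal sketch of what that looks like in code (Trade and the cache name are
hypothetical; the second constructor argument is the affinity key):

IgniteCache<AffinityKey<Long>, Trade> trades = ignite.cache("tradeCache");

long orderId = 42L;
// Both trades carry orderId as their affinity key, so they land on the node that owns orderId.
trades.put(new AffinityKey<>(1001L, orderId), new Trade(1001L, orderId));
trades.put(new AffinityKey<>(1002L, orderId), new Trade(1002L, orderId));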

On Thu, Oct 27, 2016 at 9:45 AM, ght230  wrote:

> If there are too many data related to one affinity key, even more than the
> capacity of one node.
>
> Will Ignite automatically split that data and stored them in several nodes?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/How-to-avoid-data-skew-in-collocate-
> data-with-data-tp8454p8539.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


BinaryObject pros/cons

2016-10-27 Thread Shawn Du
Hi expert,

 

BinaryObject makes it very convenient to create dynamic objects. Does it have
any disadvantages?

 

Take the example below: does each field name consume memory or not? In many
RDBMSs, which have a fixed table schema, column names don't use memory/disk per row.
How does this work for BinaryObject?

 

If some fields' values are null, do they consume memory?

 

+------------------+------------------+---------------------------------+------------------------------------------------------+
| Key Class        | Key              | Value Class                     | Value                                                |
+------------------+------------------+---------------------------------+------------------------------------------------------+
| java.lang.String | 3E47D5ACA7A05D59 | o.a.i.i.binary.BinaryObjectImpl | videoviewcount_by_regionid [idHash=312942657, hash=0,|
|                  |                  |                                 | site_product=testsite, regionID=35415317, value=4,   |
|                  |                  |                                 | timestamp=147755040]                                 |
+------------------+------------------+---------------------------------+------------------------------------------------------+
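
For reference, an entry like the one above is typically created through the
binary builder API; a minimal sketch (field types are guessed for illustration,
and 'ignite' is an already-started Ignite instance):

BinaryObjectBuilder builder = ignite.binary().builder("videoviewcount_by_regionid");
builder.setField("site_product", "testsite");
builder.setField("regionID", 35415317);
builder.setField("value", 4);
builder.setField("timestamp", 147755040L);
// Field names go into shared binary type metadata rather than being repeated in every object.
BinaryObject obj = builder.build();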

 

 

Thanks

Shawn



Re: [EXTERNAL] Re: SLF4J AND LOG4J delegation exception with ignite dependency

2016-10-27 Thread chevy
I have attached the gradle file. Please help me resolve the exception caused by
ignite-rest-http, as I need to use the REST services.

--
Regards,
Chetan.

From: "vdpyatkov [via Apache Ignite Users]" 

Date: Wednesday, October 26, 2016 at 8:24 PM
To: "Chetan.V.Yadav" 
Subject: Re: [EXTERNAL] Re: SLF4J AND LOG4J delegation exception with ignite 
dependency

Hi,

Could you please, provide your gradle file?

On Mon, Oct 24, 2016 at 9:13 PM, chevy <[hidden 
email]> wrote:
My current dependencies look like below [1] (I get an error pop-up in Eclipse
when I use "name" in the exclude as you suggested), but I still get the same
exception [2] mentioned below:


1.  dependencies {
        compile("org.apache.ignite:ignite-rest-http:1.7.0") {
            exclude group: "org.slf4j"
        }

        //compile 'org.slf4j:slf4j-api:1.7.21'
        compile("org.springframework.boot:spring-boot-starter-web")
        compile("org.springframework.boot:spring-boot-starter-jdbc")
        compile("org.springframework.boot:spring-boot-starter-data-jpa")
        compile("mysql:mysql-connector-java:5.1.34")

        // Ignite dependencies
        compile group: 'org.apache.ignite', name: 'ignite-core', version: '1.7.0'
        compile group: 'org.apache.ignite', name: 'ignite-spring', version: '1.7.0'
        compile group: 'org.apache.ignite', name: 'ignite-indexing', version: '1.7.0'
        compile group: 'org.apache.ignite', name: 'ignite-rest-http', version: '1.7.0'

        configurations {
            runtime.exclude group: 'org.slf4j'
        }
    }



2.  Exception:

java.lang.NoSuchMethodError: org.eclipse.jetty.util.log.StdErrLog.setProperties(Ljava/util/Properties;)V
    at org.apache.ignite.internal.processors.rest.protocols.http.jetty.GridJettyRestProtocol.(GridJettyRestProtocol.java:72)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.ignite.internal.processors.rest.GridRestProcessor.startHttpProtocol(GridRestProcessor.java:831)
    at org.apache.ignite.internal.processors.rest.GridRestProcessor.start(GridRestProcessor.java:451)
    at org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1589)
    at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:880)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:1739)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1589)
    at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1042)
    at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:964)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:850)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:749)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:619)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:589)
    at org.apache.ignite.Ignition.start(Ignition.java:347)
    at com.boot.NodeStartup.main(NodeStartup.java:21)


--
Regards,
Chetan.

From: "vdpyatkov [via Apache Ignite Users]" >
Date: Monday, October 24, 2016 at 9:07 PM
To: "Chetan.V.Yadav" <[hidden 
email]>
Subject: [EXTERNAL] Re: SLF4J AND LOG4J delegation exception with ignite 
dependency

Hi,

1) You can exclude slf4j-log4j12 dependency from ignite-rest-http.
Like this:
compile('org.apache.ignite:ignite-rest-http:1.6.0') {
    exclude group: "org.slf4j", name: "slf4j-log4j12"
}

2) Ignite 1.6 supports H2 version 1.3. You need to use the latest Ignite
version, 1.7, which supports H2 1.4.

On Sat, Oct 22, 2016 at 11:58 AM, chevy <[hidden email]> wrote:
1. Now I am getting the exception below. I saw in one of the threads that removing
ignite-rest-http solves the issue, which it does. But I need to include the
REST API as I will be using Ignite REST services. Please help me fix this.

java.lang.NoSuchMethodError: org.eclipse.jetty.util.log.StdErrLog.setProperties(Ljava/util/Properties;)V
    at org.apache.ignite.internal.processors.rest.protocols.http.jetty.GridJettyRestProtocol.(GridJettyRestProtocol.java:72)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.ignite.internal.processors.rest.GridRestProcessor.startHttpProtocol(GridRestProcessor.java:831)
    at org.apache.ignite.internal.processors.rest.GridRestProcessor.start(GridRestProcessor.java:451)
    at org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1549)
    at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:876)
    at

Re: How to avoid data skew in collocate data with data

2016-10-27 Thread ght230
What if there is too much data related to one affinity key, even more than the
capacity of one node?

Will Ignite automatically split that data and store it across several nodes?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/How-to-avoid-data-skew-in-collocate-data-with-data-tp8454p8539.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.