Re: Expected serialization performance of Ignite .NET

2019-12-02 Thread camer314
It seems a combination of a better spec machine and a parallel for loop has
improved performance, although it still takes 8 seconds to run through all
the cache items.

Here is some basic test code...would appreciate any tips on how to improve
access in this type of usage pattern:

https://wtwdeeplearning.blob.core.windows.net/temp/ignitetest.zip?st=2019-12-03T05%3A47%3A38Z=2019-12-12T05%3A47%3A00Z=rl=2018-03-28=b=t%2FXw4bpAFRo7aKdpIbwLfTFOB4Sv%2FeetSi%2FvVSRjg8w%3D

On my VM it takes 25 seconds to populate and 8 seconds to retrieve.

What is the most efficient way to iterate all items in a local cache? In a
real situation I would not know the keys I have in the cache, and the only
way I could get decent throughput on the read was the parallel loop, which
implies I already know the keys.
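
For comparison, a scan-query-based iteration avoids needing the keys at all. Below is a minimal Java sketch of that approach (the Ignite.NET API exposes the same ScanQuery concept; the cache name and value type are assumptions, not taken from the attached project):

import java.util.List;
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class IterateAll {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // Hypothetical cache name; each entry holds a list of objects, as in my setup.
            IgniteCache<Long, List<Object>> cache = ignite.getOrCreateCache("items");

            // A scan query streams entries to the caller without knowing the keys up front.
            ScanQuery<Long, List<Object>> scan = new ScanQuery<>();
            scan.setLocal(true); // only entries held by this node (the cache here is LOCAL anyway)

            try (QueryCursor<Cache.Entry<Long, List<Object>>> cursor = cache.query(scan)) {
                long count = 0;
                for (Cache.Entry<Long, List<Object>> entry : cursor)
                    count += entry.getValue().size();
                System.out.println("Objects seen: " + count);
            }
        }
    }
}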



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Alter table issue

2019-12-02 Thread Shravya Nethula
Hi,

I added a new column in an existing table using the following query:
ALTER TABLE person ADD COLUMN (order_id LONG)

Now, I am trying to change the datatype of the new column, so I tried executing 
the following queries:
ALTER TABLE person DROP COLUMN (order_id)
ALTER TABLE person ADD COLUMN (order_id VARCHAR(64))

Now when I try to insert a row with a "varchar" value in the "order_id" column,
it throws the following error:
Error: Wrong value has been set 
[typeName=SQL_PUBLIC_PERSON_fc7e0bd5_d052_43c1_beaf_fb01b65f2f96, 
fieldName=ORDER_ID, fieldType=long, assignedValueType=String]

This link: https://apacheignite-sql.readme.io/docs/alter-table says:
"The command does not remove actual data from the cluster, which means that if
the column 'name' is dropped, the value of the 'name' will still be stored in
the cluster. This limitation is to be addressed in the next releases."

I see that ALTER TABLE has this limitation. In which release will this 
support be provided? Is there a tentative date?
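
One workaround I am considering (an untested assumption on my side, not an official recommendation): since the old LONG metadata for ORDER_ID appears to stick around, re-adding the column under a new name should avoid the type conflict. A minimal JDBC sketch; the other column names are hypothetical:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class AlterTableWorkaround {
    public static void main(String[] args) throws Exception {
        // Ignite thin JDBC driver; host/port assume a local node with default settings.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1:10800");
             Statement st = conn.createStatement()) {
            // Add the re-typed column under a fresh name instead of reusing ORDER_ID.
            st.executeUpdate("ALTER TABLE person ADD COLUMN order_ref VARCHAR(64)");
            // Hypothetical columns (id, name), just to show the insert succeeding.
            st.executeUpdate("INSERT INTO person (id, name, order_ref) VALUES (1, 'test', 'ORD-001')");
        }
    }
}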



Regards,

Shravya Nethula,

BigData Developer,


Hyderabad.


Re: Kafka to Ignite

2019-12-02 Thread Evgenii Zhuravlev
Hi,

Probably you can use Kafka streamer:
https://apacheignite-mix.readme.io/docs/kafka-streamer
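
If the Kafka streamer module doesn't fit, a manual alternative is a plain Kafka consumer feeding an IgniteDataStreamer. A minimal sketch (broker address, topic and cache name are assumptions):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaToIgniteLoader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumption
        props.put("group.id", "ignite-loader");           // assumption
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (Ignite ignite = Ignition.start();
             KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            ignite.getOrCreateCache("kafkaCache"); // the streamer needs the cache to exist

            try (IgniteDataStreamer<String, String> streamer = ignite.dataStreamer("kafkaCache")) {
                consumer.subscribe(Collections.singletonList("my-topic")); // assumption
                while (!Thread.currentThread().isInterrupted()) {
                    // The streamer batches puts per node, which is much faster than individual cache.put()
                    for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofMillis(500)))
                        streamer.addData(rec.key(), rec.value());
                    streamer.flush();
                }
            }
        }
    }
}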

Evgenii

Mon, Dec 2, 2019 at 05:29, ashishb888 :

> What are better ways to stream data from Kafka to Ignite cache?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: How to identify if re-balancing completed

2019-12-02 Thread Evgenii Zhuravlev
Hi Akash,

Here is the doc for such monitoring:
https://www.gridgain.com/docs/latest/administrators-guide/monitoring-metrics/metrics#monitoring-rebalancing

Also, org.apache.ignite.internal.processors.cache.CacheLocalMetricsMXBeanImpl
contains more metrics related to the rebalance.
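
Another option, as a rough sketch (rebalance events are disabled by default and have to be enabled explicitly; whether this alone is sufficient for a rolling restart is for you to judge):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.events.CacheRebalancingEvent;
import org.apache.ignite.events.Event;
import org.apache.ignite.events.EventType;
import org.apache.ignite.lang.IgnitePredicate;

public class RebalanceWatcher {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Cache rebalancing events must be switched on in the node configuration.
        cfg.setIncludeEventTypes(EventType.EVT_CACHE_REBALANCE_STOPPED);

        Ignite ignite = Ignition.start(cfg);

        ignite.events().localListen((IgnitePredicate<Event>) evt -> {
            CacheRebalancingEvent e = (CacheRebalancingEvent) evt;
            System.out.println("Rebalancing finished locally for cache: " + e.cacheName());
            return true; // keep the listener registered
        }, EventType.EVT_CACHE_REBALANCE_STOPPED);
    }
}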

Evgenii

Mon, Dec 2, 2019 at 07:12, Akash Shinde :

> Hi,
>
> If there are multiple nodes in cluster and if one of the node/nodes goes
> down or a new node/nodes is added how to make sure that re-balancing of all
> cache partitions has been completed successfully? I need this to implement
> rolling restart.
>
> Thanks,
> Aksah
>
>


Re: [Webinar] How to Migrate Your Data Schema to Apache Ignite

2019-12-02 Thread Humphrey
I'm having trouble registering. The link to apply for the webinar is not active/working.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


[Webinar] How to Migrate Your Data Schema to Apache Ignite

2019-12-02 Thread Иван Раков
Folks,

On Wednesday evening I'll present a talk on how to adopt Ignite SQL
for your real-world cases. The following topics will be discussed:
- What Ignite SQL is good / bad for
- Known (from my perspective) scenarios when using Ignite SQL might be
successful
- Pitfalls and surprises of distributed SQL: what should be taken into account
- Ignite SQL performance fine-tuning

Please feel free to join the webinar if you are interested:
https://www.gridgain.com/resources/webinars/how-migrate-your-data-schema-apache-ignite

--
Best Regards,
Ivan Rakov


Re: Improving Get operation performance

2019-12-02 Thread ezhuravlev
As you have 4 nodes on the same machine now, you have a lot of context
switching; probably all the nodes are just competing with each other for CPU
resources.

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite native persistence

2019-12-02 Thread niamin
I tried appending the cache name as the schema, with no success. Attached is a link
to my project:
https://drive.google.com/open?id=1F53um8TeUK45U3SOW0_Vlj04S8DWyjKI

Please take a look. I suspect it is something in my code.

Thanks,
Naushad



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Cache data not being stored on server nodes

2019-12-02 Thread swattal
Hi Andrei,

Thank you for the response. I do see the expiration listener being called on the
server node. What I don't see is the value attached to the event. The key is
present but the value is null. I would have expected the expired value to be
present in the event as well. I think I am puzzled over the client behavior and
am surely missing something. When a put(key, value) call is made on the
client, does that put the data on one of the servers automatically if the
cache is configured in PARTITIONED mode, or does the server have to listen for
a CacheEntryCreated event to add the key-value pair in memory?

Thanks,
Sumit



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Server Nodes Stopped Unexpectedly

2019-12-02 Thread Humphrey
I'm not sure if this would help.

We also used to have trouble when a node (client or server) didn't have the
following property set:
'java.net.preferIPv4Stack'. Make sure all nodes have this property set
correctly.
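
For completeness, a small sketch of one way to pin the value programmatically (the config path is hypothetical; in practice the JVM flag -Djava.net.preferIPv4Stack=true in JVM_OPTS is the usual approach):

import org.apache.ignite.Ignition;

public class StartNode {
    public static void main(String[] args) {
        // Must be set before any networking classes initialize, and identically on every node.
        System.setProperty("java.net.preferIPv4Stack", "true");

        Ignition.start("config/example-ignite.xml"); // hypothetical config path
    }
}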

2019-07-22 09:22:47,269 19417663 [disco-event-worker-#61%springDataNode%]
WARN  o.a.i.i.m.d.GridDiscoveryManager - Local node's value of
'java.net.preferIPv4Stack' system property differs from remote node's (all
nodes in topology should have identical value) [locPreferIpV4=true,
rmtPreferIpV4=null, locId8=54c2fb2f, rmtId8=312d096e,
rmtAddrs=[qagmsweb01.p05.eng.sjc01.xyx.com/10.44.81.30, /127.0.0.1],
rmtNode=ClusterNode [id=312d096e-6ba7-4038-b877-ce237e5227df, order=42,
addr=[10.44.81.30, 127.0.0.1], daemon=false]]

Humphrey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Server Nodes Stopped Unexpectedly

2019-12-02 Thread ezhuravlev
Hi,

Answers to questions from the previous message will be based on the provided
logs, since it's not clear what happened there yet.

IgniteConfiguration.setNetworkTimeout: 
It is a global timeout for high-level operations where a network is
involved.

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


RE: Ignite bulk data load issue

2019-12-02 Thread ezhuravlev
This message:
 Blocked system-critical thread has been detected. This can lead to
cluster-wide undefined behaviour [threadName=data-streamer-stripe-0,
blockedFor=34s]
is printed when you're loading data into Ignite much faster than it can write
it to disk. The disk is the bottleneck here; as you mentioned before, you chose
a disk with really low IOPS. Give the system a better disk and you will see
better performance.
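
A faster disk is the real fix, but as a stop-gap you can also throttle the loader so it cannot outrun the disk. A minimal sketch with an assumed cache name and illustrative limits:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.Ignition;

public class ThrottledLoad {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            ignite.getOrCreateCache("myCache"); // hypothetical cache name

            try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("myCache")) {
                // Fewer in-flight batches per node means the loader blocks instead of
                // piling up work that the data-streamer stripes cannot flush to disk.
                streamer.perNodeParallelOperations(2);
                streamer.perNodeBufferSize(512);

                for (long i = 0; i < 1_000_000; i++)
                    streamer.addData(i, "value-" + i);
            }
        }
    }
}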

By the way, how much heap do you have for a JVM?

Evgenii 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite AI learning resources

2019-12-02 Thread joseheitor
Awesome - Thanks (have just started looking through the tutorial examples)

Will look through the links to resources too. Being new to AI, it is
difficult to know where to start, but am committed to Ignite for other
reasons, so would like to learn by using the Ignite ML features...

Looking forward to 2.8 and the new docs too.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite AI learning resources

2019-12-02 Thread zaleslaw
Hi, we are going to release a new version of the ML module in the next 2 months, with
really cool new ML and TensorFlow integration.

No documentation or papers have been released or written yet, but I hope to publish
them in the next 2-3 months too.

I can also recommend working through the code tutorial and the examples
(links not preserved in this archive), as well as the conference videos in the
Ignite ML playlist on my channel, one of which I would especially recommend.

You could also read a few outdated but still useful posts about Ignite ML
foundations (the API has changed somewhat over the last year):

https://dzone.com/articles/introduction-to-machine-learning-with-apache-ignit
https://dzone.com/articles/genetic-algorithms-with-apache-ignite
https://dzone.com/articles/using-linear-regression-with-apache-ignite
https://dzone.com/articles/using-k-nn-classification-with-apache-ignite
https://dzone.com/articles/using-apache-ignites-machine-learning-for-fraud-de
https://dzone.com/articles/using-k-means-clustering-with-apache-ignite





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite + Spark installation

2019-12-02 Thread Deepak
thank.. will try and revert back



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


How to identify if re-balancing completed

2019-12-02 Thread Akash Shinde
Hi,

If there are multiple nodes in the cluster and one or more nodes go down, or new
nodes are added, how can I make sure that re-balancing of all cache partitions
has completed successfully? I need this to implement a rolling restart.

Thanks,
Aksah


Fwd: Local node terminated after segmentation

2019-12-02 Thread Prasad Bhalerao
Can someone please advise on this?

-- Forwarded message -
From: Prasad Bhalerao 
Date: Fri, Nov 29, 2019 at 7:53 AM
Subject: Re: Local node terminated after segmentation
To: 


I had checked the resource you mentioned, but I was confused by the GridGain
doc describing it as protection against split-brain, because if a node is
segmented the only thing one can do is stop/restart/noop.
I was just wondering how it provides protection against split-brain.
Now I think that by protection it means killing the segmented node(s), or
restarting them and bringing them back into the cluster.

Ignite uses TcpDiscoverySpi to send a heartbeat to the next node in the ring
to check whether that node is reachable.
So the question is: in what situations does one need additional ways to check
whether a node is reachable, using different resolvers?

Please let me know if my understanding is correct.

I had checked the code in the article you mentioned. It requires a node to
be configured in advance so that the resolver can check whether that node is
reachable from the local host. It does not check whether all the nodes are
reachable from the local host.

E.g. node1 will check node2, node2 will check node3, and node3
will check node1 to complete the ring.
I am just wondering how to configure this plugin in a production environment with a large cluster.
I tried to check the GridGain docs to see if they provide any sample code
for configuring their plugins, just to get an idea, but did not find any.

Can you please advise?


Thanks,
Prasad

On Thu 28 Nov, 2019, 11:41 PM akurbanov wrote:
>
> Hello,
>
> Basically this is a mechanism to implement custom logical/network
> split-brain protection. Segmentation resolvers allow you to implement a way
> to determine if node has to be segmented/stopped/etc in method
> isValidSegment() and possibly use different combinations of resolvers
> within
> processor.
>
> If you want to check out how it could be done, some articles/source samples
> that might give you a good insight may be easily found on the web, like:
>
> https://medium.com/@aamargajbhiye/how-to-handle-network-segmentation-in-apache-ignite-35dc5fa6f239
>
> http://apache-ignite-users.70518.x6.nabble.com/Segmentation-Plugin-blog-or-article-td27955.html
>
> 2-3 are described in the documentation, copying the link just to point out
> which one: https://apacheignite.readme.io/docs/critical-failures-handling
>
> By default the answer to 2 is: Ignite doesn't ignore FailureType
> SEGMENTATION and calls the failure handler in this case. Actions that are
> taken are defined in the failure handler.
>
> AbstractFailureHandler class has only SYSTEM_WORKER_BLOCKED and
> SYSTEM_CRITICAL_OPERATION_TIMEOUT ignored by default. However, you might
> override the failure handler and call .setIgnoredFailureTypes().
>
> Links:
> Extend this class:
>
> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/failure/AbstractFailureHandler.java
> — check for custom implementations used in Ignite tests and how they are
> used.
>
> Sample from tests:
>
> https://github.com/apache/ignite/blob/master/modules/core/src/test/java/org/apache/ignite/failure/SystemWorkersBlockingTest.java
>
> Failure processor:
>
> https://github.com/apache/ignite/blob/master/modules/core/src/main/java/org/apache/ignite/internal/processors/failure/FailureProcessor.java
>
> Best regards,
> Anton
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
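
For reference, my reading of the setIgnoredFailureTypes() approach from the quoted reply, as a minimal (untested) sketch; whether ignoring SEGMENTATION is safe depends entirely on the split-brain strategy:

import java.util.Collections;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.failure.FailureType;
import org.apache.ignite.failure.StopNodeOrHaltFailureHandler;

public class NodeWithCustomFailureHandling {
    public static void main(String[] args) {
        // Keep the default handler behaviour, but do not react to segmentation.
        StopNodeOrHaltFailureHandler handler = new StopNodeOrHaltFailureHandler();
        handler.setIgnoredFailureTypes(Collections.singleton(FailureType.SEGMENTATION));

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setFailureHandler(handler);

        Ignition.start(cfg);
    }
}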


Re: Expected serialization performance of Ignite .NET

2019-12-02 Thread Pavel Tupitsyn
Can you please attach a working project to reproduce those numbers?

It is hard to say without the code: a class with 21 properties can vary in
size a lot.
There are many other things at play - JVM options, RAM size, benchmark
method, etc.

On Mon, Dec 2, 2019 at 2:08 PM camer314 
wrote:

> I have a 21 property C# class (mix of int and string) and am using
> IBinarizable interface as suggested in the documentation.
>
> My cache is configured such that each cache entry is a collection of these
> objects, lets say each cache item is a List.
>
> I have 10 million instance of this class. For simplicity lets say each
> cache
> entry holds 10 of these objects, amounting to 1 million cache entries.
>
> Each object is roughly about 120 bytes long, so 10 million = ~1.2 gigabytes
> of data stored.
>
> I am using a LOCAL cache and a simple foreach loop over the cache takes in
> the region of 25 seconds. This seems like an eternity. I understand there
> is
> a lot of serialization happening, probably a lot of garbage collecting as
> well, but it still seems like a large amount of time to effectively move
> memory from one location to another.
>
> Does that time seem exorbitant to you given the above specs or is it
> expected?
>
> What is the optimal way to lay out cache items locally for cache read
> iteration (that is, compute needs to iterate the entire cache)?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Deadlock on concurrent calls to getAll and invokeAll on cache with read-through

2019-12-02 Thread peter108418
Hi

We have recently started to encounter what appears to be deadlocks on one of
our new clusters. We believe it may be due to "data patterns" being slightly
different and more dense than our other existing (working) production
clusters. We have some workarounds, but we think this might be an issue with
Ignite. Hopefully someone is able to narrow down the cause further? :)

Firstly, I'll describe the issues that we are seeing, and how to reproduce.
Then I'll try explain what we are trying to accomplish, maybe there is a
better solution to our problem?

*The problem:*
We have encountered an odd deadlock issue when, on the same cache where
read-through is enabled, concurrent calls are made to "getAll" and
"invokeAll". We sort the keys the same way across both calls.
Replacing one "side" with either multiple "get"s or multiple "invoke"s
seems to fix the problem, but performance is worse.

I have created a test case that can reproduce it. The test creates 
- 1 thread doing a getAll({1, 2}), 
- 2 threads doing an invokeAll({2, 3}) and an invokeAll({1, 3})

These 3 threads are executed, and may or may not end up in a deadlock,
usually the test case captures the deadlock state before 50 repetitions.
Please see attached sample maven project to reproduce:
https://drive.google.com/open?id=1GJ78dsulJ0XG-erNkN_vm3ordKr0nqS6
Run with "mvn clean test"

I have also posted the test code, (partial) log output and (partial)
stacktrace below.


*What we are trying to do:*
I believe our use-case to be fairly "normal"? We use Ignite as a
cache-layer, with a read-through adapter to a backing data-store. As data
continuously enters the backing data-store, we have a service that keeps the
Ignite cache up-to-date.

We have a large amount of historical data, going years back. The backing
data-store is the "master", we are not using Ignite Persistence. We use
Ignite as a cache layer as typically we recalculate on the same data
multiple times. We key our data by time chunks, where the "value" is a
container/collection of records within the time-range defined by the key.
We decided to go with an IgniteCache with read-through enabled to automate
cache-loading. To reduce the number of queries against the data-store, we
usually call "getAll" on the cache, as the resulting set of keys provided to
the CacheStore.loadAll can often be merged into a smaller number of queries
(example: joining time-ranges "08:00:00 - 08:15:00" and "08:15:00 -
08:30:00" to larger single time-range "08:00:00 - 08:30:00").

As we continuously load new data into the backing data-store, entries in
Ignite become inconsistent with the data-store, especially those around
"now" but out-of-order records also occur.
To handle this, we have a separate Ignite Service that fetches new records
from the data-store and updates the Ignite Cache using invokeAll and an
entry-processor.
Our reasoning here is to only forward the "new" records (on the scale of 10s
of records) and merge them into the container (containing 1000s of records)
"locally", instead of "getting" the container, merging, and then "putting",
which would transfer a large amount of data back and forth.
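
For reference, a minimal sketch of the invokeAll + entry-processor pattern described above (Container, Record and the merge logic are placeholders, not our actual code):

import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import javax.cache.processor.MutableEntry;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;

public class MergeNewRecords {
    static class Record { /* one data-store row */ }
    static class Container {
        void merge(Record r) { /* merge a single record into the time-chunk container */ }
    }

    static class MergeProcessor implements CacheEntryProcessor<String, Container, Void> {
        @Override public Void process(MutableEntry<String, Container> entry, Object... args) {
            @SuppressWarnings("unchecked")
            Map<String, Record> delta = (Map<String, Record>) args[0];

            Container c = entry.getValue();     // triggers read-through if the chunk is absent
            if (c == null)
                c = new Container();
            c.merge(delta.get(entry.getKey())); // only the small delta is shipped, not the chunk
            entry.setValue(c);                  // store the merged container back
            return null;
        }
    }

    static void update(IgniteCache<String, Container> cache, Map<String, Record> newRecords) {
        Set<String> keys = new TreeSet<>(newRecords.keySet()); // sorted keys, as described above
        cache.invokeAll(keys, new MergeProcessor(), newRecords);
    }
}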





Relevant log fragment: [raw block not preserved in this archive]

Dump of relevant threads: [raw block not preserved in this archive]





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Kafka to Ignite

2019-12-02 Thread ashishb888
What are better ways to stream data from Kafka to Ignite cache?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Transaction operations using the Ignite Thin Client Protocol

2019-12-02 Thread Igor Sapego
Ivan,

You are right. Though we now have transactions support in the thin client
protocol, it is currently implemented only for Java. Also, the C++ thin client
does not yet support SQL.
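
For anyone curious, a rough sketch of what the Java thin-client side looks like against the API currently targeted for 2.8 (names may still change before the release; address and cache name are assumptions):

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.ClientCacheConfiguration;
import org.apache.ignite.client.ClientTransaction;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientTx {
    public static void main(String[] args) throws Exception {
        ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            // Transactions require a TRANSACTIONAL cache.
            ClientCache<Integer, String> cache = client.getOrCreateCache(
                new ClientCacheConfiguration()
                    .setName("txCache")
                    .setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL));

            try (ClientTransaction tx = client.transactions().txStart()) {
                cache.put(1, "one");
                cache.put(2, "two");
                tx.commit(); // without commit(), close() rolls the transaction back
            }
        }
    }
}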

Best Regards,
Igor


On Sat, Nov 30, 2019 at 9:35 AM Ivan Pavlukhin  wrote:

> Igor,
>
> Could you please elaborate whether C++ thin client is going to have
> transactions support in 2.8? AFAIR, it was implemented only for Java
> thin client.
>
> Fri, Nov 29, 2019 at 18:29, Stephen Darlington
> :
>
> >
> > The ticket says “Fix version: 2.8” so I would assume it would be
> available then. Currently planned for late January.
> >
> > > On 29 Nov 2019, at 13:58, dkurzaj  wrote:
> > >
> > > Hello,
> > >
> > > Since this improvement :
> https://issues.apache.org/jira/browse/IGNITE-9410
> > > is resolved, I'd assume that it is now possible to do SQL transactions
> using
> > > the C++ thin client, though I'm not sure it is yet since I did not find
> > > documentation about that. Would someone happen to know more about this
> > > subject?
> > >
> > > Thank you!
> > >
> > > Dorian
> > >
> > >
> > >
> > > --
> > > Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> >
> >
>
>
> --
> Best regards,
> Ivan Pavlukhin
>


Re: Ignite on-heap & off-heap caches

2019-12-02 Thread ashishb888
Thank you Andrei.

So for an on-heap cache I need to set the -Xms and -Xmx options to allocate the memory.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


java.lang.AssertionError : 0

2019-12-02 Thread yann.blaz...@externe.bnpparibas.com
Hello, when I run some integration tests with Spring, my code calls Ignite.close() at the
end of the test. I have also had this stack trace under load with multiple threads
and putAll calls on caches.

I see this stacktrace : 


java.lang.AssertionError: 0
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.getPageEntrySize(AbstractDataPageIO.java:149)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.getPageEntrySize(AbstractDataPageIO.java:140)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.actualFreeSpace(AbstractDataPageIO.java:1201)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.setRealFreeSpace(AbstractDataPageIO.java:190)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO.removeRow(AbstractDataPageIO.java:754)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList$RemoveRowHandler.run(AbstractFreeList.java:286)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList$RemoveRowHandler.run(AbstractFreeList.java:261)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.util.PageHandler.writePage(PageHandler.java:279)
at 
org.apache.ignite.internal.processors.cache.persistence.DataStructure.write(DataStructure.java:256)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.removeDataRowByLink(AbstractFreeList.java:571)
at 
org.apache.ignite.internal.processors.cache.persistence.RowStore.removeRow(RowStore.java:79)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl$1.apply(IgniteCacheOffheapManagerImpl.java:2929)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl$1.apply(IgniteCacheOffheapManagerImpl.java:2926)
at 
org.apache.ignite.internal.processors.cache.tree.AbstractDataLeafIO.visit(AbstractDataLeafIO.java:185)
at 
org.apache.ignite.internal.processors.cache.persistence.tree.BPlusTree.destroy(BPlusTree.java:2348)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$CacheDataStoreImpl.destroy(IgniteCacheOffheapManagerImpl.java:2926)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.destroyCacheDataStore0(IgniteCacheOffheapManagerImpl.java:1323)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.destroyCacheDataStore(IgniteCacheOffheapManagerImpl.java:1308)
at 
org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.stop(IgniteCacheOffheapManagerImpl.java:251)
at 
org.apache.ignite.internal.processors.cache.CacheGroupContext.stopGroup(CacheGroupContext.java:751)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCacheGroup(GridCacheProcessor.java:2579)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCacheGroup(GridCacheProcessor.java:2572)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.stopCaches(GridCacheProcessor.java:1094)
at 
org.apache.ignite.internal.processors.cache.GridCacheProcessor.stop(GridCacheProcessor.java:1059)
at org.apache.ignite.internal.IgniteKernal.stop0(IgniteKernal.java:2356)
at org.apache.ignite.internal.IgniteKernal.stop(IgniteKernal.java:2228)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop0(IgnitionEx.java:2612)
at 
org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.stop(IgnitionEx.java:2575)
at org.apache.ignite.internal.IgnitionEx.stop(IgnitionEx.java:379)
at org.apache.ignite.Ignition.stop(Ignition.java:225)
at org.apache.ignite.internal.IgniteKernal.close(IgniteKernal.java:3568)


What is happening?


Thanks for your help.

Regards.

Re: Does Ignite support nested joins for partitioned cache?

2019-12-02 Thread DS
Hi,
Thank you for replying.

*Cache configuration for all 4 tables:*
SQL_PUBLIC_PERSON.SQL_PUBLIC_PERSON
SQL_PUBLIC_CITY.SQL_PUBLIC_CITY
SQL_PUBLIC_MEDICAL_INFO.SQL_PUBLIC_MEDICAL_INFO
SQL_PUBLIC_BLOOD_GROUP_INFO.SQL_PUBLIC_BLOOD_GROUP_INFO

*Expected result:*
I am expecting 7 rows but only getting 4.
Also, data for the blood_group and universal_donor columns is missing.

*Screenshots of the cache configuration for all 4 tables (person, city,
medical_info, blood_group_info) and of the join query result:*
[images not preserved in this archive]






--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Expected serialization performance of Ignite .NET

2019-12-02 Thread camer314
I have a 21 property C# class (mix of int and string) and am using
IBinarizable interface as suggested in the documentation.

My cache is configured such that each cache entry is a collection of these
objects; let's say each cache item is a List.

I have 10 million instances of this class. For simplicity let's say each cache
entry holds 10 of these objects, amounting to 1 million cache entries.

Each object is roughly 120 bytes, so 10 million = ~1.2 gigabytes
of data stored.

I am using a LOCAL cache and a simple foreach loop over the cache takes in
the region of 25 seconds. This seems like an eternity. I understand there is
a lot of serialization happening, probably a lot of garbage collecting as
well, but it still seems like a large amount of time to effectively move
memory from one location to another.

Does that time seem exorbitant to you given the above specs or is it
expected?

What is the optimal way to lay out cache items locally for cache read
iteration (that is, compute needs to iterate the entire cache)?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: node down after Caught unhandled exception in NIO worker thread (restart the node) log

2019-12-02 Thread Ilya Kasnacheev
Hello!

Well, you have an awful lot of pool starvation messages in your server
logs. This may suggest problems with network or the load.

You also hit a long GC:
[2019-11-22T21:19:45,964][WARN ][jvm-pause-detector-worker][IgniteKernal]
Possible too long JVM pause: 15252 milliseconds.

Which was accompanied by a lot of connection re-establishment:

[2019-11-22T21:19:49,142][WARN
][grid-nio-worker-tcp-comm-2-#202][TcpCommunicationSpi] Communication SPI
session write timed out (consider increasing 'socketWriteTimeout'
configuration property) [remoteAddr=/192.168.199.122:9508,
writeTimeout=2000]
[2019-11-22T21:19:49,352][INFO
][grid-nio-worker-tcp-comm-2-#202][TcpCommunicationSpi] Accepted incoming
communication connection [locAddr=/192.168.199.60:47100, rmtAddr=/
192.168.199.116:43306]
[2019-11-22T21:19:49,353][INFO
][grid-nio-worker-tcp-comm-2-#202][TcpCommunicationSpi] Accepted incoming
communication connection [locAddr=/192.168.199.60:47100, rmtAddr=/
192.168.199.107:44274]
[2019-11-22T21:19:49,353][INFO
][grid-nio-worker-tcp-comm-2-#202][TcpCommunicationSpi] Accepted incoming
communication connection [locAddr=/192.168.199.60:47100, rmtAddr=/
192.168.199.57:5130]
[2019-11-22T21:19:49,353][INFO
][grid-nio-worker-tcp-comm-2-#202][TcpCommunicationSpi] Accepted incoming
communication connection [locAddr=/192.168.199.60:47100, rmtAddr=/
192.168.199.112:5766]
[2019-11-22T21:19:49,353][INFO
][grid-nio-worker-tcp-comm-2-#202][TcpCommunicationSpi] Accepted incoming
communication connection [locAddr=/192.168.199.60:47100, rmtAddr=/
192.168.199.114:64992]
[2019-11-22T21:19:49,353][INFO
][grid-nio-worker-tcp-comm-2-#202][TcpCommunicationSpi] Accepted incoming
communication connection [locAddr=/192.168.199.60:47100, rmtAddr=/
192.168.199.108:28644]
[2019-11-22T21:19:49,353][INFO
][grid-nio-worker-tcp-comm-2-#202][TcpCommunicationSpi] Accepted incoming
communication connection [locAddr=/192.168.199.60:47100, rmtAddr=/
192.168.199.124:47200]
[2019-11-22T21:19:49,140][INFO
][grid-nio-worker-tcp-comm-1-#201][TcpCommunicationSpi] Established
outgoing communication connection [locAddr=/192.168.199.60:52484, rmtAddr=/
192.168.199.65:47100]
[2019-11-22T21:19:49,124][INFO
][grid-nio-worker-tcp-comm-0-#200][TcpCommunicationSpi] Accepted incoming
communication connection [locAddr=/192.168.199.60:47100, rmtAddr=/
192.168.199.106:13940]

It is recommended to adjust your timeouts (or heap sizes) so that long GC
does not cause all communication connections to be severed.
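
For illustration, a sketch of where those knobs live (the values are only examples and should be sized to the actual GC pauses you observe):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;

public class NodeWithRelaxedTimeouts {
    public static void main(String[] args) {
        TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
        commSpi.setSocketWriteTimeout(10_000); // default is 2000 ms, per the warning above

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setCommunicationSpi(commSpi);
        // Overall failure-detection budget; should comfortably exceed the longest GC pause seen.
        cfg.setFailureDetectionTimeout(30_000);

        Ignition.start(cfg);
    }
}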

Regards,
-- 
Ilya Kasnacheev


Mon, Nov 25, 2019 at 16:57, ihalilaltun :

> Hi Igniters,
>
> We had a strange node-down incident after getting following log (we've been
> using ignite in production for almost 1 year and we're getting this error
> for the first time)
>
> [2019-11-22T21:19:54,222][INFO
> ][grid-nio-worker-tcp-comm-3-#203][TcpCommunicationSpi] Established
> outgoing
> communication connection [locAddr=/192.168.199.60:43720,
> rmtAddr=/192.168.199.222:47100]
>
> [2019-11-22T21:19:54,230][ERROR][grid-nio-worker-tcp-comm-0-#200][TcpCommunicationSpi]
> Caught unhandled exception in NIO worker thread (restart the node).
> java.nio.channels.CancelledKeyException: null
> at
> sun.nio.ch.SelectionKeyImpl.ensureValid(SelectionKeyImpl.java:73)
> ~[?:1.8.0_201]
> at
> sun.nio.ch.SelectionKeyImpl.interestOps(SelectionKeyImpl.java:82)
> ~[?:1.8.0_201]
> at
>
> java.nio.channels.spi.AbstractSelectableChannel.register(AbstractSelectableChannel.java:204)
> ~[?:1.8.0_201]
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.bodyInternal(GridNioServer.java:1997)
> ~[ignite-core-2.7.6.jar:2.7.6]
> at
>
> org.apache.ignite.internal.util.nio.GridNioServer$AbstractNioClientWorker.body(GridNioServer.java:1794)
> [ignite-core-2.7.6.jar:2.7.6]
> at
> org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
> [ignite-core-2.7.6.jar:2.7.6]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_201]
>
> [2019-11-22T21:19:54,343][ERROR][grid-nio-worker-tcp-comm-2-#202][TcpCommunicationSpi]
> Failed to process selector key [ses=GridSelectorNioSessionImpl
> [worker=DirectNioClientWorker [super=AbstractNioClientWorker [idx=2,
> bytesRcvd=617063277634, bytesSent=8878293076427, bytesRcvd0=107695,
> bytesSent0=727192, select=true, super=GridWorker
> [name=grid-nio-worker-tcp-comm-2, igniteInstanceName=null, finished=false,
> heartbeatTs=1574457593322, hashCode=1772114147, interrupted=false,
> runner=grid-nio-worker-tcp-comm-2-#202]]],
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> inRecovery=null, outRecovery=null, super=GridNioSessionImpl
> [locAddr=/192.168.199.60:47100, rmtAddr=/192.168.199.68:62054,
> createTime=1574457593307, closeTime=0, bytesSent=38, bytesRcvd=42,
> bytesSent0=38, bytesRcvd0=42, sndSchedTime=1574457593322,
> lastSndTime=1574457593322, lastRcvTime=1574457593322, 

Re: Ignite on-heap & off-heap caches

2019-12-02 Thread Andrei Aleksandrov

Hi,

No, heap and off-heap memory are different features.

*Heap* memory is controlled by the -Xms and -Xmx options; it is used for
different internal operations and generally is not used for storing data
(unless you enable on-heap caching). The Java GC works with this memory.


*Off-heap* memory is controlled by the *initial* and *max* size properties
that should be set in the data region configuration. This memory is
used for data storage. The Java GC does not work with this memory.


You can read more here:

https://apacheignite.readme.io/docs/memory-architecture
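
A minimal sketch putting the two together (region sizes are illustrative; the cache name is an assumption):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

public class MemoryConfigExample {
    public static void main(String[] args) {
        // Off-heap: data region sizes control where cache entries actually live.
        DataRegionConfiguration region = new DataRegionConfiguration()
            .setName("default")
            .setInitialSize(256L * 1024 * 1024)   // 256 MB
            .setMaxSize(2L * 1024 * 1024 * 1024); // 2 GB

        DataStorageConfiguration storage = new DataStorageConfiguration()
            .setDefaultDataRegionConfiguration(region);

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDataStorageConfiguration(storage);

        // On-heap caching is an optional layer on top of the off-heap region;
        // only then do -Xms/-Xmx start to matter for cached data.
        CacheConfiguration<Integer, String> cacheCfg = new CacheConfiguration<Integer, String>("myCache")
            .setOnheapCacheEnabled(true);

        Ignition.start(cfg).getOrCreateCache(cacheCfg);
    }
}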

BR,
Andrei

On 11/28/2019 3:36 PM, ashishb888 wrote:

I have below question:

Do both on-heap & off-heap caches use memory from data regions (by setting
initial & max
of DataRegionConfiguration)?

Does Ignite use heap provided to the application (-Xms & -Xmx) for cache
storage?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: can't use "cache -a "command , warn info is " the cluster is inactive"

2019-12-02 Thread Ilya Kasnacheev
Hello!

I think the restart of your nodes is what fixed it for you.

Regards,
-- 
Ilya Kasnacheev


Mon, Dec 2, 2019 at 13:25, ?青春狂-^ :

> Hi
>I used bin/control.sh --host 172.17.122.126 --activate
>
> But it still did not work properly, it print “ [WARN ] Can not perform
> the operation because the cluster is inactive.”
>
> I changed the config file by added ports,  I found this config can use
> "top " or other commands ,it seem work properly , the config is:
> 
> 
> 172.17.122.126:47500..47509
> 172.17.122.127:47500..47509
> 172.17.122.128:47500..47509
> 
> 
>
> before this ,config is :
> 
> 
> 172.17.122.126
> 172.17.122.127
> 172.17.122.128
> 
> 
>
> I dont know why?
>
> -- Original message --
> *From:* "Ilya Kasnacheev";
> *Sent:* Friday, November 29, 2019, 5:27 PM
> *To:* "user";
> *Subject:* Re: can't use "cache -a "command , warn info is " the cluster is
> inactive"
>
> Hello!
>
> You have to activate your cluster, by doing something like
>
> bin/control.sh --host 172.17.122.126 --activate
>
> This is a requirement on the first start of persistent cluster, after all
> nodes has joined.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
Fri, Nov 29, 2019 at 11:34, ?青春狂-^ :
>
>> because of the program'Exception, I wanted to see the cache infos.
>> so I use the ./ignitevisorcmd.sh  command.
>> then I use "open -d"
>> and  "cache -a"
>>
>> but it print warn info :
>> [WARN ] Can not perform the operation because the cluster is inactive.
>>
>>
>> I have 3 nodes of ignite ,and process is alive.
>>
>> the config xml is :
>>
>>   
>> > class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>> 
>> > class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>> 
>> 
>> 172.17.122.126
>> 172.17.122.127
>> 172.17.122.128
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>>
>>


Re: Issue while using transactions in sql apache ignite console grid gain

2019-12-02 Thread harinath
Hi,

Thank you for the confirmation.

Thanks,
Harinath



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Ignite AI learning resources

2019-12-02 Thread Andrei Aleksandrov

Hi,

I don't think that exists some special resources. You can try to ask 
your questions on mail lists and read existed documentation:


1)Ignite user list - http://apache-ignite-users.70518.x6.nabble.com/
2)Ignite developers list - 
http://apache-ignite-developers.2346864.n4.nabble.com/
3)Ignite documentation portal - 
https://apacheignite.readme.io/docs/getting-started


Also exist a lot of different articles in the internet.

BR,
Andrei

On 12/2/2019 11:47 AM, joseheitor wrote:

Hi,

Can anyone recommend some resources to learn the fundamentals of ML and DL,
and how to use these techniques in practical ways with the Apache Ignite AI
platform?

Thanks,
Jose



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: can't use "cache -a "command , warn info is " the cluster is inactive"

2019-12-02 Thread ?青春狂-^
Hi,

I used bin/control.sh --host 172.17.122.126 --activate

But it still did not work properly; it prints "[WARN ] Can not perform the
operation because the cluster is inactive."

I changed the config file by adding ports, and I found that with this config I can use
"top" or other commands; it seems to work properly. The config is:


Re: Cache data not being stored on server nodes

2019-12-02 Thread Andrei Aleksandrov

Hi,

Please read about expiration here:

https://apacheignite.readme.io/docs/expiry-policies

An expiry policy specifies the amount of time that must pass before an
entry is considered expired. What happens then depends on the memory mode:

 * In-Memory mode (data is stored solely in RAM): expired entries are
   *purged* from RAM completely.

 * Memory + Ignite persistence: expired entries are *removed* from both
   the memory and disk tiers. Note that expiry policies will remove entries
   from the partition files on disk without freeing up space. The space
   will be reused to write subsequent entries.

 * Memory + 3rd party persistence: expired entries are *removed* from
   the memory tier only (Ignite) and left untouched in the 3rd party
   persistence (RDBMS, NoSQL, and other databases).

 * Memory + Swap: expired entries are *removed* from both RAM and swap
   files.

So it is expected that the entry will be removed after expiration.
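
A minimal sketch of configuring an expiry policy and observing the purge (times and cache name are illustrative):

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.CreatedExpiryPolicy;
import javax.cache.expiry.Duration;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.CacheConfiguration;

public class ExpiryExample {
    public static void main(String[] args) throws InterruptedException {
        CacheConfiguration<Integer, String> cfg = new CacheConfiguration<Integer, String>("expiring")
            .setExpiryPolicyFactory(CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 5)))
            .setEagerTtl(true); // expired entries are removed by a background thread

        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
            cache.put(1, "value");
            Thread.sleep(6_000);
            System.out.println(cache.get(1)); // null: the entry (key and value) is gone
        }
    }
}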

BR,
Andrei
On 12/2/2019 7:39 AM, swattal wrote:

I have recently started using Apache Ignite for my application and had
questions about where the data gets stored in the cache. I have two nodes
which act as cache servers and another node which acts as a client. I am
using the ZooKeeper discovery SPI for discovery. While putting data in the
cache, my server nodes get the entry-creation events, and on expiration I
would assume that both key and value are present in my server-side nodes'
CacheExpiredListener. The expiration listener is invoked with the right key
but with the value being null. This makes me believe that the put call on the
client just gets limited to caching on the client side and doesn't send
entries to the server cache nodes. Is there a config setting that I am missing?

Thanks,
Sumit



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: When jconsole is used to access the cluster, the heap memory usage increases gradually

2019-12-02 Thread Ilya Kasnacheev
Hello!

No, the JVM will likely perform a more thorough GC sometime in the future to
regain heap. Until then it is free to grow.

Regards,
-- 
Ilya Kasnacheev


Mon, Dec 2, 2019 at 13:03, 李玉珏@163 <18624049...@163.com>:

> Hi,
>
> Use the default configuration to start a node through ignite.sh, then
> access the node through jconsole tool, and then you will find that the
> heap memory usage is increasing. Is this a potential problem?
>
>


Re: Does Ignite support nested joins for partitioned cache?

2019-12-02 Thread Ilya Kasnacheev
Hello!

Ignite supports nested queries, but only as map queries (there is just one
reduce query).

What this means for your query, I cannot say. Can you show the cache
configurations for your queries, some data, and the expected/actual result for
that data? Maybe you are getting an error of some kind?

Regards,
-- 
Ilya Kasnacheev


Mon, Dec 2, 2019 at 13:13, DS :

> I have been trying to implement nested joins with or without subquery  but
> unable get right results.
> My caches are partitioned ,so enabling Allow non-collocated joins check.
>
> P.S.
> It has been observed that if i am tyring to join 3 tables,i get expected
> result but for more than 3 tables worries me.
>
> Please let me know if i am doing something wrong with query below:
>
> SELECT
> _medical_info.age,_medical_info.name,_medical_info.city_id,_medical_info.weight,_medical_info.blood_group,blood_group_info.universal_donor
>
> FROM
>( SELECT
> _person.age,_person.name,_person.city_id,medical_info.blood_group,medical_info.weight
>
> FROM (
> SELECT person.age,person.name,city.city_id
> FROM person
> LEFT JOIN  city ON person.city_id = city.city_id
> ) AS _person
> LEFT JOIN  medical_info ON _person.name = medical_info.name
> ) AS _medical_info
> LEFT JOIN  blood_group_info ON _medical_info.blood_group =
> blood_group_info.blood_group
>
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Primary Entry Count are not correct in cache

2019-12-02 Thread Ilya Kasnacheev
Hello!

Unfortunately there is too little information. How does your loader work? How
large is the difference between the expected and actual counts? How does it change
over time? Have you tried other means of checking the cache size?

Regards,
-- 
Ilya Kasnacheev


Fri, Nov 29, 2019 at 21:35, Akash Shinde :

> Hi I have created a cache with following configuration. I started four
> nodes,each node on different machines.
> I have loaded this cache with loader.
> Issue: I am not performing any operations on this cache, but I can see that
> the primary key count is not constant. It keeps changing after some
> time. I am taking this key count from the GridGain web console. Ideally my
> loader query result count should match the primary entries in the cache.
> Ignite version 2.6.0.
> Could someone  suggest why this is happening?
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> CacheConfiguration subscriptionCacheCfg =
>     new CacheConfiguration<>(CacheName.SUBSCRIPTION_CACHE.name());
> subscriptionCacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> subscriptionCacheCfg.setWriteThrough(false);
> subscriptionCacheCfg.setReadThrough(true);
> subscriptionCacheCfg.setRebalanceMode(CacheRebalanceMode.ASYNC);
> subscriptionCacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> subscriptionCacheCfg.setBackups(2);
> Factory storeFactory = FactoryBuilder.factoryOf(SubscriptionDataLoader.class);
> subscriptionCacheCfg.setCacheStoreFactory(storeFactory);
> subscriptionCacheCfg.setIndexedTypes(DefaultDataKey.class, SubscriptionData.class);
> subscriptionCacheCfg.setSqlIndexMaxInlineSize(47);
> RendezvousAffinityFunction affinityFunction = new RendezvousAffinityFunction();
> affinityFunction.setExcludeNeighbors(true);
> subscriptionCacheCfg.setAffinity(affinityFunction);
> subscriptionCacheCfg.setStatisticsEnabled(true);
> subscriptionCacheCfg.setPartitionLossPolicy(PartitionLossPolicy.READ_WRITE_SAFE);
>
>
> Thanks,
>
> Akash
>
>


Does Ignite support nested joins for partitioned cache?

2019-12-02 Thread DS
I have been trying to implement nested joins, with and without subqueries, but
am unable to get the right results.
My caches are partitioned, so I am enabling the "allow non-collocated joins" option.

P.S.
If I join 3 tables I get the expected result, but joining more than 3 tables is
what worries me.

Please let me know if I am doing something wrong with the query below:

SELECT
_medical_info.age,_medical_info.name,_medical_info.city_id,_medical_info.weight,_medical_info.blood_group,blood_group_info.universal_donor
 
FROM 
   ( SELECT
_person.age,_person.name,_person.city_id,medical_info.blood_group,medical_info.weight
 
FROM (
SELECT person.age,person.name,city.city_id 
FROM person 
LEFT JOIN  city ON person.city_id = city.city_id
) AS _person 
LEFT JOIN  medical_info ON _person.name = medical_info.name 
) AS _medical_info  
LEFT JOIN  blood_group_info ON _medical_info.blood_group =
blood_group_info.blood_group 
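
For reference, a simplified sketch of how the same joins could be run from Java with distributed joins enabled programmatically (this flattened rewrite may not be equivalent to my nested form above; table and column names are taken from the query):

import java.util.List;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class NonCollocatedJoin {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();
        // Any cache handle works for running a fields query against PUBLIC schema tables.
        IgniteCache<?, ?> cache = ignite.cache("SQL_PUBLIC_PERSON");

        SqlFieldsQuery qry = new SqlFieldsQuery(
            "SELECT p.name, m.blood_group, b.universal_donor " +
            "FROM person p " +
            "LEFT JOIN city c ON p.city_id = c.city_id " +
            "LEFT JOIN medical_info m ON p.name = m.name " +
            "LEFT JOIN blood_group_info b ON m.blood_group = b.blood_group")
            .setDistributedJoins(true); // required for non-collocated data on PARTITIONED caches

        List<List<?>> rows = cache.query(qry).getAll();
        rows.forEach(System.out::println);
    }
}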


 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


When jconsole is used to access the cluster, the heap memory usage increases gradually

2019-12-02 Thread 李玉珏

Hi,

Use the default configuration to start a node through ignite.sh, then 
access the node through the jconsole tool, and you will find that the 
heap memory usage keeps increasing. Is this a potential problem?




Ignite AI learning resources

2019-12-02 Thread joseheitor
Hi,

Can anyone recommend some resources to learn the fundamentals of ML and DL,
and how to use these techniques in practical ways with the Apache Ignite AI
platform?

Thanks,
Jose



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/