2.4.0 with Tomcat 8.5 and Java 9

2018-04-02 Thread Eric Ham
 Hello,

From the suggestion on my other thread to try a newer Tomcat, I decided to
spin up Tomcat 8.5.29 with Oracle JDK 9.0.4. I'm attempting to use web
session clustering based on the following pages, [1] and [2], since the
2.4.0 release notes say Java 9 is now supported. I copied the following
jars over for Tomcat to load:

ignite-core-2.4.0.jar
ignite-log4j-2.4.0.jar
ignite-spring-2.4.0.jar
ignite-web-2.4.0.jar
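
For context, I wired the web-session clustering into web.xml along the lines
of [1] and [2], roughly like this (the config path and cache name are my own
choices, not prescribed values):

```xml
<!-- Sketch of the web.xml wiring; config path and cache name are mine. -->
<listener>
    <listener-class>org.apache.ignite.startup.servlet.ServletContextListenerStartup</listener-class>
</listener>

<filter>
    <filter-name>IgniteWebSessionsFilter</filter-name>
    <filter-class>org.apache.ignite.cache.websession.WebSessionFilter</filter-class>
</filter>
<filter-mapping>
    <filter-name>IgniteWebSessionsFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>

<context-param>
    <param-name>IgniteConfigurationFilePath</param-name>
    <param-value>config/default-config.xml</param-value>
</context-param>
<context-param>
    <param-name>IgniteWebSessionsCacheName</param-name>
    <param-value>session-cache</param-value>
</context-param>
```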

However, when I start up Tomcat I get the following error messages (listed
below) in the localhost.2018-04-02.log file. The following 2 lines:

Caused by: java.lang.RuntimeException: jdk.internal.misc.JavaNioAccess
class is unavailable.
Caused by: java.lang.IllegalAccessException: class
org.apache.ignite.internal.util.GridUnsafe cannot access class
jdk.internal.misc.SharedSecrets (in module java.base) because module
java.base does not export jdk.internal.misc to unnamed module @464a014c

seem related to IGNITE-7352 [3], which says that this should be fixed in
2.4.0 to support Java 9.

Either I'm missing a step or the 7352 fix didn't make it in. Please let
me know what additional information I can provide to help resolve this.
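
One thing I have not tried yet: since the error is the module system refusing
reflective access, my guess is that exporting the internal packages named in
the stack trace would work around it, e.g. in Tomcat's bin/setenv.sh (an
untested guess on my part, derived from the module names in the trace):

```shell
# bin/setenv.sh -- guess at the exports Ignite's GridUnsafe needs on JDK 9;
# taken from the module names in the stack trace, not from official docs.
CATALINA_OPTS="$CATALINA_OPTS \
  --add-exports=java.base/jdk.internal.misc=ALL-UNNAMED \
  --add-exports=java.base/sun.nio.ch=ALL-UNNAMED \
  --add-exports=java.management/com.sun.jmx.mbeanserver=ALL-UNNAMED \
  --add-exports=jdk.internal.jvmstat/sun.jvmstat.monitor=ALL-UNNAMED"
export CATALINA_OPTS
```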

Regards,
-Eric

[1] https://ignite.apache.org/releases/latest/javadoc/org/apache/ignite/startup/servlet/ServletStartup.html


[2] https://apacheignite-mix.readme.io/docs/web-session-clustering

[3] https://issues.apache.org/jira/browse/IGNITE-7352

02-Apr-2018 11:27:55.914 INFO [localhost-startStop-1]
org.apache.catalina.core.ApplicationContext.log No Spring
WebApplicationInitializer types detected on classpath
02-Apr-2018 11:27:56.739 SEVERE [localhost-startStop-1]
org.apache.catalina.core.ApplicationContext.log StandardWrapper.Throwable
 java.lang.ExceptionInInitializerError
at
org.apache.ignite.internal.util.IgniteUtils.&lt;clinit&gt;(IgniteUtils.java:759)
at
org.apache.ignite.startup.servlet.ServletStartup.init(ServletStartup.java:138)
at javax.servlet.GenericServlet.init(GenericServlet.java:158)
at
org.apache.catalina.core.StandardWrapper.initServlet(StandardWrapper.java:1144)
at
org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1091)
at
org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:983)
at
org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4939)
at
org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5249)
at
org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at
org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:754)
at
org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:730)
at
org.apache.catalina.core.StandardHost.addChild(StandardHost.java:734)
at
org.apache.catalina.startup.HostConfig.deployWAR(HostConfig.java:986)
at
org.apache.catalina.startup.HostConfig$DeployWar.run(HostConfig.java:1857)
at
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
at
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.RuntimeException: jdk.internal.misc.JavaNioAccess
class is unavailable.
at
org.apache.ignite.internal.util.GridUnsafe.javaNioAccessObject(GridUnsafe.java:1459)
at
org.apache.ignite.internal.util.GridUnsafe.&lt;clinit&gt;(GridUnsafe.java:118)
... 19 more
Caused by: java.lang.IllegalAccessException: class
org.apache.ignite.internal.util.GridUnsafe cannot access class
jdk.internal.misc.SharedSecrets (in module java.base) because module
java.base does not export jdk.internal.misc to unnamed module @464a014c
at
java.base/jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
at
java.base/java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:589)
at java.base/java.lang.reflect.Method.invoke(Method.java:556)
at
org.apache.ignite.internal.util.GridUnsafe.javaNioAccessObject(GridUnsafe.java:1456)
... 20 more

02-Apr-2018 11:27:56.740 SEVERE [localhost-startStop-1]
org.apache.catalina.core.StandardContext.loadOnStartup Servlet [Ignite] in
web application [/base] threw load() exception
 java.lang.IllegalAccessException: class
org.apache.ignite.internal.util.GridUnsafe cannot access class
jdk.internal.misc.SharedSecrets (in module java.base) because module
java.base does not export jdk.internal.misc to unnamed module @464a014c
at
java.base/jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
at

Re: Slow data load in ignite from S3

2018-04-02 Thread David Harvey
When I did this, I found that with Ignite persistence there is a lot of
write amplification (many times more bytes written to SSD than data bytes
written) during checkpoints, which makes sense because Ignite writes whole
pages, and each record written dirties pieces of many pages.

The SSD write latency and throughput become critical. On slower devices
(e.g., EBS gp2), separating the WAL can help a bit, but the key is device
write speed. On AWS, I found I needed to use local storage.
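
For what it's worth, the separation looked roughly like this in my
DataStorageConfiguration (the mount points below are examples, not the
actual paths I used):

```xml
<bean class="org.apache.ignite.configuration.DataStorageConfiguration">
    <!-- Data files (partitions, indexes) on one device... -->
    <property name="storagePath" value="/mnt/local0/ignite/storage"/>
    <!-- ...WAL and WAL archive on another, so checkpoint page writes
         and WAL appends don't compete for the same device. -->
    <property name="walPath" value="/mnt/local1/ignite/wal"/>
    <property name="walArchivePath" value="/mnt/local1/ignite/wal-archive"/>
</bean>
```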

On Mon, Apr 2, 2018 at 6:22 AM, Andrey Mashenkov  wrote:

> Hi Rahul,
>
> Possibly, mostly new data is being loaded into Ignite.
> I mean that Ignite allocates new pages rather than updating existing ones.
>
> In that case, you may not get any benefit from increasing the checkpoint
> region size. It will just defer a checkpoint.
>
> Also, you can try to move the WAL and the Ignite store to different disks,
> and to set the region's initial size to reduce or avoid region extent
> allocation.
>
> On Mon, Apr 2, 2018 at 9:59 AM, rahul aneja 
> wrote:
>
>> Hi Andrey,
>>
>> Yes, we are using SSD. Earlier we were using the default checkpoint buffer
>> of 256 MB; in order to reduce the checkpoint frequency, we increased the
>> buffer size, but it didn't have any impact on performance.
>>
>> On Fri, 30 Mar 2018 at 10:49 PM, Andrey Mashenkov <
>> andrey.mashen...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Possibly, storage is a bottleneck or checkpoint buffer is too large.
>>> Do you use Provisioned IOPS SSD?
>>>
>>>
>>> On Fri, Mar 30, 2018 at 3:32 PM, rahul aneja 
>>> wrote:
>>>
 Hi,

 We are trying to load ORC data (around 50 GB) from S3 into Ignite from
 Spark using the DataFrame API. It starts fast with good write throughput,
 and then after some time the throughput just drops and it gets stuck.

 We also tried changing multiple configurations, but no luck:
 1. enabling checkpoint write throttling
 2. disabling throttling and increasing the checkpoint buffer


 Please find below the configuration and properties of the cluster:

1. 10-node cluster of r4.4xl instances (AWS EMR), shared with Spark
2. Ignite is started with -Xms20g -Xmx30g
3. Cache mode is PARTITIONED
4. Persistence is enabled
5. DirectIO is enabled
6. No backups

 
 <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
     <property name="defaultDataRegionConfiguration">
         <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
             <property name="persistenceEnabled" value="true"/>
             <property name="maxSize" value="#{20L * 1024 * 1024 * 1024}"/>
             <property name="checkpointPageBufferSize" value="#{... * 1024 * 1024}"/>
         </bean>
     </property>
 </bean>







 Thanks in advance,

 Rahul Aneja



>>>
>>>
>>> --
>>> Best regards,
>>> Andrey V. Mashenkov
>>>
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>



Re: How to insert multiple rows/data into Cache once

2018-04-02 Thread Andrey Mashenkov
1a. No. DataStreamer doesn't support transactions.
1b. SQL doesn't support transactions yet; transactional SQL is under active
development [1].

2. DataStreamer updates are not propagated to the CacheStore by default; you
can force this with the allowOverwrite option [2].
DataStreamer uses individual entry updates, but you can change this by
setting your own receiver.
DataStreamer sends updates to the primary and backups and updates the cache
directly, so there is no need to start a cluster-wide operation on each
update, which may be costly.

4. Ignite has a powerful invoke() method that allows you to implement your
own logic in an EntryProcessor.


[1]
https://cwiki.apache.org/confluence/display/IGNITE/IEP-3%3A+Transactional+SQL

[2]
https://apacheignite.readme.io/v1.0/docs/data-streamers#section-allow-overwrite


On Sat, Mar 31, 2018 at 9:03 PM, Prasad Bhalerao <
prasadbhalerao1...@gmail.com> wrote:

> Can someone please reply?
>
> Thanks,
> Prasad
>
>
> On Sat, Mar 31, 2018, 9:02 AM Prasad Bhalerao <
> prasadbhalerao1...@gmail.com> wrote:
>
>> Hi Andrey,
>>
>> I have a similar requirement, and I am using the cache.putAll method to
>> update existing entries or insert new ones.
>> I will be updating/inserting close to 3 million entries in one go.
>>
>> I am using the write-through approach to update/insert/delete the data in
>> Oracle tables.
>> I am using the CacheStore's writeAll/deleteAll methods to achieve this.
>>
>> I am doing this in a single Ignite distributed transaction.
>>
>>
>> Now the question is,
>> 1a) Can I use streamer in ignite transaction?
>> 1b) Can I use ignite jdbc bulk update, insert, delete with ignite
>> distributed transaction?
>>
>> 2) If I use the streamer, will it invoke the cache store's writeAll method?
>> I mean, does the write-through approach work with the streamer?
>>
>>
>> 3) If I use JDBC bulk mode for cache update, insert, or delete, will it
>> invoke the cache store's writeAll and deleteAll methods?
>> Does the write-through approach work with JDBC bulk update/insert/delete?
>>
>>
>> 4) Does Ignite have any cache APIs only for update purposes? put/putAll
>> will insert or overwrite. What if I just want to update existing entries?
>>
>>
>> Thanks,
>> Prasad
>>
>> On Fri, Mar 30, 2018, 11:12 PM Andrey Mashenkov <
>> andrey.mashen...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> Ignite has 2 JDBC drivers.
>>> 1. The client driver [1] starts a client node (with all failover features),
>>> and you have to pass the client node config in the URL.
>>> 2. The thin driver [2] connects directly to one of the Ignite server nodes.
>>>
>>> So, you need the first one to be able to use streaming mode.
>>>
>>> [1] https://apacheignite-sql.readme.io/docs/jdbc-client-driver
>>> [2] https://apacheignite-sql.readme.io/docs/jdbc-driver
>>>
>>> On Fri, Mar 30, 2018 at 1:16 PM,  wrote:
>>>
 Hi Andrey,



 I am trying to run [2], as:

 // Register JDBC driver.

 Class.forName("org.apache.ignite.IgniteJdbcDriver");

 // Opening connection in the streaming mode.

 Connection conn = DriverManager.getConnection("
 jdbc:ignite:cfg://streaming=true@file:///etc/config/ignite-jdbc.xml");



 However, I'm a bit confused about that note in [2] regarding
 ignite-jdbc.xml.

 I do not know how to find or create that XML file, and here I run the
 Ignite node via the JVM.

 Can I write Java code to produce the ignite-jdbc configuration, or is only
 a complete Spring XML configuration possible?
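
 My current guess at a minimal ignite-jdbc.xml, in case someone can confirm
 the shape (the clientMode setting is my assumption, not something I've
 verified):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Guess at /etc/config/ignite-jdbc.xml: the node config the
     JDBC client driver starts from. -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <!-- Start as a client node rather than a server node. -->
        <property name="clientMode" value="true"/>
    </bean>
</beans>
```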



 By the way, I have tried the [1], that worked well.



 Finally, I still need to use the SQL as a client node, and quick write
 data into cache.



 Thank you for helping me



 Rick





 *From:* Andrey Mashenkov [mailto:andrey.mashen...@gmail.com]
 *Sent:* Thursday, March 29, 2018 6:20 PM
 *To:* user@ignite.apache.org
 *Subject:* Re: How to insert multiple rows/data into Cache once



 Hi,



 Try to use DataStreamer for fast cache load [1].

 If you need to use SQL, you can try bulk-mode updates via JDBC [2].





 Also, a COPY SQL command [3] will be available in the next 2.5 release.

 The feature is already in master, so you can try building from it. See
 example [4].



 [1] https://apacheignite.readme.io/docs/data-streamers

 [2] https://apacheignite.readme.io/v2.0/docs/jdbc-
 driver#section-streaming-mode

 [3] https://issues.apache.org/jira/browse/IGNITE-6917

 [4] https://github.com/apache/ignite/blob/master/examples/
 src/main/java/org/apache/ignite/examples/sql/SqlJdbcCopyExample.java



 On Thu, Mar 29, 2018 at 11:30 AM,  wrote:

 Dear all,



 I am trying to use the SqlFieldsQuery sdk to insert data to one cache
 on Ignite.



 I can insert one data into one cache at 

Re: Using 3rd party DB together with native persistence (WAS: GettingInvalid state exception when Persistance is enabled.)

2018-04-02 Thread Andrey Mashenkov
Hi

It is very easy for a user to shoot himself in the foot... and they do it
again and again, e.g. like this one [1].

[1]
http://apache-ignite-users.70518.x6.nabble.com/Data-Loss-while-upgrading-custom-jar-from-old-jar-in-server-and-client-nodes-td20505.html


On Thu, Mar 8, 2018 at 11:28 PM, Dmitriy Setrakyan 
wrote:

> To my knowledge, the 2.4 release should have support for both persistence
> mechanisms, native and 3rd party, working together. The release is out for
> a vote already:
> http://apache-ignite-developers.2346864.n4.nabble.
> com/VOTE-Apache-Ignite-2-4-0-RC1-td27687.html
>
> D.
>
> On Mon, Feb 26, 2018 at 2:43 AM, Humphrey  wrote:
>
>> I think he means when *write-through* and *read-through* modes are
>> enabled on
>> the 3rd party store, data might be written/read to/from one of those
>> persistence storage (not on both).
>>
>> So if you save data "A" it might be stored in the 3rd party persistence,
>> and
>> not in the native. When data "A" is not in the cache it might try to look
>> it
>> up from the native persistence, where it's not available. Same could
>> happen
>> with updates, if "A" was updated to "B" it could have changed in the 3rd
>> party but when requesting for the data again you might in one case get "A"
>> an other case "B" depending on the stores it reads the data from.
>>
>> At least that is what I understand from his consistency between both
>> stores.
>>
>>
>>
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>>
>
>


Re: Any idea what this is ?

2018-04-02 Thread Andrey Mashenkov
Hi,

Config looks ok.
Looks like some object still can't be unmarshalled for some reason.
Can you share a reproducer?

On Wed, Mar 28, 2018 at 2:13 PM, Mikael  wrote:

> Hi!
>
> It behaves a bit differently if I try to use BinaryConfiguration.setClassNames.
> I added the following to the Ignite configuration; I hope that is the
> correct way to do it?
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xmlns:util="http://www.springframework.org/schema/util"
>        xsi:schemaLocation="
>            http://www.springframework.org/schema/beans
>            http://www.springframework.org/schema/beans/spring-beans.xsd
>            http://www.springframework.org/schema/util
>            http://www.springframework.org/schema/util/spring-util.xsd">
>
>     <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>         <property name="binaryConfiguration">
>             <bean class="org.apache.ignite.configuration.BinaryConfiguration">
>                 <property name="classNames">
>                     <list>
>                         <value>org.usf.gateway.service.RtuServiceWrapper</value>
>                     </list>
>                 </property>
>             </bean>
>         </property>
> ... the rest of the configuration
>
>
> Without the above I got the same as before:
>
> 13:01:03 [srvc-deploy-#50] ERROR: Failed to initialize service (service
> will not be deployed): RTU_1_10
> org.apache.ignite.IgniteCheckedException: Cannot find metadata for object
> with compact footer: -389806882
> at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9908)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.GridServiceProcessor.
> copyAndInject(GridServiceProcessor.java:1422)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.
> GridServiceProcessor.redeploy(GridServiceProcessor.java:1343)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.GridServiceProcessor.
> processAssignment(GridServiceProcessor.java:1932)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.GridServiceProcessor.
> onSystemCacheUpdated(GridServiceProcessor.java:1595)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.
> GridServiceProcessor.access$300(GridServiceProcessor.java:124)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.GridServiceProcessor$
> ServiceEntriesListener$1.run0(GridServiceProcessor.java:1577)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.GridServiceProcessor$
> DepRunnable.run(GridServiceProcessor.java:2008)
> [ignite-core-2.4.0.jar:2.4.0]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [?:1.8.0_144]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [?:1.8.0_144]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
> Caused by: org.apache.ignite.binary.BinaryObjectException: Cannot find
> metadata for object with compact footer: -389806882
> at org.apache.ignite.internal.binary.BinaryReaderExImpl.
> getOrCreateSchema(BinaryReaderExImpl.java:2008)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.binary.BinaryReaderExImpl.<
> init>(BinaryReaderExImpl.java:284) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.binary.BinaryReaderExImpl.<
> init>(BinaryReaderExImpl.java:183) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.binary.BinaryReaderExImpl.<
> init>(BinaryReaderExImpl.java:162) ~[ignite-core-2.4.0.jar:2.4.0]
> at 
> org.apache.ignite.internal.binary.GridBinaryMarshaller.deserialize(GridBinaryMarshaller.java:310)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.binary.BinaryMarshaller.
> unmarshal0(BinaryMarshaller.java:99) ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshalle
> r.unmarshal(AbstractNodeNameAwareMarshaller.java:82)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9902)
> [ignite-core-2.4.0.jar:2.4.0]
> ... 10 more
>
> When I added the above to the configuration file I get this instead:
>
> 13:04:28 [srvc-deploy-#50] ERROR: Failed to initialize service (service
> will not be deployed): RTU_1_10
> org.apache.ignite.IgniteCheckedException: Cannot find schema for object
> with compact footer [typeId=-389806882, schemaId=1942057561]
> at 
> org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9908)
> [ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.GridServiceProcessor.
> copyAndInject(GridServiceProcessor.java:1422)
> ~[ignite-core-2.4.0.jar:2.4.0]
> at org.apache.ignite.internal.processors.service.
> GridServiceProcessor.redeploy(GridServiceProcessor.java:1343)
> [ignite-core-2.4.0.jar:2.4.0]
> at 

Re: Performance of Ignite integrating with PostgreSQL

2018-04-02 Thread Andrey Mashenkov
Hi,

1. Have you tried to test your disk write speed? It is possible the disk was
formatted without respecting alignment.
2. Have you tried to check whether there are any Postgres issues, e.g. by
querying Postgres directly? Do you see higher disk pressure?
3. Is it possible you generate data too slowly? Have you tried to run a
multi-threaded test?



On Tue, Mar 27, 2018 at 12:44 PM,  wrote:

> Hi Vinokurov,
>
>
>
> I tried to run your code for 30 minutes, monitored by "atop".
>
> The average write speed is about 2151.55 KB per second.
>
> Though the performance is better, there is still a gap with your testing
> result.
>
> Is there anything I can improve?
>
> Thanks.
>
>
>
> Here are my hardware specifications:
>
> CPU:
>
>   Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz
>
>   4 cores
>
> Memory:
>
>   16 GB
>
>
>
> Atop observations:
>
> disk  busy  read/s  KB/read  writ/s  KB/writ  avque  avserv
> sda   89%   29.7    14.8     116.3   18.5     13.1   6.13 ms
>
>
>
>
>
> Print out parts of time per putAll:
>
> 221ms  23ms  22ms  60ms  56ms  71ms  140ms  105ms  117ms  69ms  91ms  89ms
> 32ms  271ms  24ms  23ms  55ms  90ms  69ms  1987ms  337ms  316ms  322ms
> 339ms  101ms  170ms  22ms  41ms  43ms  110ms  668ms  29ms  27ms  28ms
> 24ms  22ms
>
>
>
>
>
> *From:* Pavel Vinokurov [mailto:vinokurov.pa...@gmail.com
> ]
> *Sent:* Thursday, March 22, 2018 11:07 PM
> *To:* user@ignite.apache.org
> *Subject:* Re: Performance of Ignite integrating with PostgreSQL
>
>
>
> In your example you add the same key/values into the cache, so it just
> overwrites entries and persists only 100 entries.
>
> Please look at the project
> https://bitbucket.org/vinokurov-pavel/ignite-postgres . I have ~70-100 MB/s
> on my SSD.
>
>
>
> 2018-03-22 11:55 GMT+03:00 :
>
> Hi Vinokurov,
>
>
>
> I changed my code
>
> >> IgniteCache igniteCache = ignite.getOrCreateCache("testCache");
>
> To
>
> IgniteCache igniteCache = ignite.cache("testCache");
>
> And update to 2.4.0 version.
>
>
>
> But the writing speed is still about 100 KB per second.
>
>
>
>
>
> Below is jdbc connection initialization:
>
> @Autowired
>
> public NamedParameterJdbcTemplate jdbcTemplate;
>
> @Override
>
> public void start() throws IgniteException {
>
> ConfigurableApplicationContext context = new ClassPathXmlApplicationContext
> ("postgres-context.xml");
>
> this.jdbcTemplate = context.getBean(NamedParameterJdbcTemplate.class);
>
> }
>
>
>
>
>
> The PostgreSQL configuration, "postgres-context.xml":
>
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xmlns:context="http://www.springframework.org/schema/context"
>        xsi:schemaLocation="
>            http://www.springframework.org/schema/beans
>            http://www.springframework.org/schema/beans/spring-beans.xsd
>            http://www.springframework.org/schema/context
>            http://www.springframework.org/schema/context/spring-context.xsd">
>
>     ...
>
>     <bean class="org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate">
>         ...
>     </bean>
> </beans>
>
>
>
>
>
>
>
> Thanks.
>
>
>
>
>
> *From:* Vinokurov Pavel [mailto:vinokurov.pa...@gmail.com]
> *Sent:* Thursday, March 22, 2018 1:50 PM
>
>
> *To:* user@ignite.apache.org
> *Subject:* Re: Performance of Ignite integrating with PostgreSQL
>
>
>
> Also it makes sense to use new 2.4 version.
>
>
>
> 2018-03-22 8:37 GMT+03:00 Vinokurov Pavel :
>
> >> IgniteCache igniteCache = ignite.getOrCreateCache("testCache");
>
> please change to ignite.cache("testCache") to be sure that we use the
> configuration from the file.
>
>
>
> 2018-03-22 8:19 GMT+03:00 Vinokurov Pavel :
>
> You already showed the cache configuration, but could you show the JDBC
> connection initialization?
>
>
>
> 2018-03-22 7:59 GMT+03:00 Vinokurov Pavel :
>
> Hi,
>
>
>
> Could you please show the "PATH/example-cache.xml" file.
>
>
>
> 2018-03-21 9:40 GMT+03:00 :
>
> Hi Vinokurov,
>
>
>
> Thanks for your reply.
>
> I tried to write in batches of 100 entries.
>
> And I got a worse result.
>
> The writing speed went down to 12.09 KB per second.
>
> Below is my code, in which I tried to use putAll and writeAll.
>
> Did I make some mistakes?
>
>
>
>
>
>
>
> Main function:
>
> Ignite ignite = Ignition.start("PATH/example-cache.xml");
>
> IgniteCache igniteCache = ignite.getOrCreateCache("testCache");
>
> for(int i = 0; i < 100; i++)
>
> {
>
>  parameterMap.put(Integer.toString(i), 

Re: Spark 'close' API call hangs within ignite service grid

2018-04-02 Thread Andrey Mashenkov
Hi,

A socket exception can be caused by a wrong network configuration or
firewall configuration.
If a node is not able to send a response to another node, it can cause the
grid to hang.

(Un)marshalling exceptions (that are not caused by network exceptions) are a
signal that something is going wrong.


On Fri, Mar 23, 2018 at 8:48 AM, akshaym 
wrote:

> I have pushed the sample application to GitHub. Please check it once.
>
> Also, I am able to get rid of the hang issue with the spark.close API call
> by adding the "igniteInstanceName" property. Not sure if it's the right
> approach though.
> I came up with this solution while debugging the issue. What I observed is
> that while saving a dataframe to Ignite, it needs an Ignite context. It
> first checks if the context already exists; if it does, it uses that
> context to save the dataframe in Ignite, and on the spark close API call it
> tries to close the same context.
> As I am trying to run this Spark job as an Ignite service, I wanted it to
> run continuously, so closing the Ignite context was causing this issue. To
> make the dataframe APIs create a new context every time, I added the
> "igniteInstanceName" property to the config which I am passing to the new
> Ignite DF APIs.
>
> Though it resolves the hang issue, it is still showing some socket
> connection and unmarshalling exceptions. Do I need to worry about them? How
> can I get rid of them?
>
> Also, any trade-offs if we use Spark as an Ignite service when executed
> with YARN?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Using Ignite grid during rebalancing operations

2018-04-02 Thread Raymond Wilson
I’ve been reading how Ignite manages rebalancing as a result of topology
changes here: https://apacheignite.readme.io/docs/rebalancing



It does not say so explicitly, but reading between the lines suggests that
the Ignite grid will respond to both read and write activity while
rebalancing is in progress when using ASYNC mode. Is this correct?



If another node is added to the grid midway through the grid rebalancing
from a previous addition of a node does grid rebalancing reorient to
handling the two new nodes, or does it rebalance for the first node
addition then rebalance again for the second new node addition?



Thanks,

Raymond.


Re: Running heavy queries on Ignite cluster on backing store directly without impacting the cluster

2018-04-02 Thread Andrey Mashenkov
Hi,

>In this case also, since we dont have eviction in place, all the time data
>is retrieved from RAM only, the only time request goes to Oracle is for
>Upserts and delete.
Here, all queries run on data in RAM only. Ignite just propagates updates to
the backing store (to Oracle via the CacheStore impl),
keeping the consistency guarantees defined by the CacheStore configuration.

>So if oracle DB is loaded heavily while running these queries also, it does
>not affect the cluster performance when it comes to data retrievals, am i
>right ??
Not quite: the CacheStore affects performance, because either CacheStore
updates are synchronous or the CacheStore buffer is limited.

With Ignite persistence you are not limited by RAM. The PageMemory concept
allows you to query data that resides on disk.
What OOM do you mean, Ignite or JVM, and on which side: client or server?
Why do you think that, with the same dataset, an OOM is more likely with
persistence than with no persistence and no eviction?
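
For illustration, a persistence-backed region caps RAM usage while keeping
the full dataset queryable from disk; a minimal sketch (the sizes here are
examples, not recommendations):

```xml
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        <property name="defaultDataRegionConfiguration">
            <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                <!-- Full dataset lives on disk; at most maxSize is kept in RAM. -->
                <property name="persistenceEnabled" value="true"/>
                <property name="maxSize" value="#{8L * 1024 * 1024 * 1024}"/>
            </bean>
        </property>
    </bean>
</property>
```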


On Mon, Apr 2, 2018 at 9:09 AM, Naveen  wrote:

> HI
>
> Let me rephrase my question; I guess I haven't conveyed it correctly.
>
> Lets take an example
>
> Ignite cluster with backing store as RDBMS - oracle and no eviction in
> place.
>
> As we all know, we can run complex queries on Oracle to retrieve desired
> data.
> In this case also, since we dont have eviction in place, all the time data
> is retrieved from RAM only, the only time request goes to Oracle is for
> Upserts and delete.
> So if oracle DB is loaded heavily while running these queries also, it does
> not affect the cluster performance when it comes to data retrievals, am i
> right ??
> Inserts/Update/Deletes on DB may get slower if DB is loaded with these
> heavy
> queries.
>
> In a similar way, how can we achieve this with an Ignite cluster with
> native persistence?
> When native persistence is used, if we run some heavy queries on the
> cluster through SQLLINE, it may give an out-of-memory error and a node
> might crash as well. How can we avoid this? The query should run against
> the backing store; I am fine if the query execution takes a longer time,
> but the cluster should not crash.
>
> Hope I conveyed my requirements more clear this time.
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite Client Heap out of Memory issue

2018-04-02 Thread Andrey Mashenkov
Hi Shawn,

The OOM error occurred on a remote server node; there was not sufficient
memory to process the request,
but other threads were not affected by it.
It looks like Ignite was able to recover from the error, as it was
suppressed and the reply to the client was sent.


On Mon, Apr 2, 2018 at 8:22 AM, shawn.du  wrote:

> Hi Andrey,
>
> Thanks for your reply. It still confuses me:
> 1) For the Storm worker process, if it crashed because of OOM, it
> should have dumped the heap, since I set -XX:+HeapDumpOnOutOfMemoryError,
> but it didn't. For the Storm worker, it behaves like a normal fatal error
> which makes the Storm worker restart.
> 2) It did make the Ignite server dump its heap; I analyzed the dump and it
> is the Ignite server's (judging from the PID and java command, etc.)
>
> Can anyone explain? Thanks.
>
> Thanks
> Shawn
>
> On 3/31/2018 01:36,Andrey Mashenkov
>  wrote:
>
> Hi Shawn,
>
> 1. Ignite uses off-heap memory to store cache entries. A client stores no
> cache data. A cache in LOCAL mode can be used on the client side, and it
> uses off-heap memory, of course.
>
> All data the client retrieves from the server will be in off-heap memory.
>
> 2. It is not an IgniteOutOfMemory error, but a JVM OOM.
> So, try to investigate whether there is a memory leak in your code.
>
> On Fri, Mar 30, 2018 at 6:36 AM, shawn.du  wrote:
>
>> Hi,
>>
>> My Ignite client's heap hit OOM yesterday. This is the first time we have
>> encountered this issue.
>>
>> My Ignite client is colocated within a Storm worker process; this issue
>> caused the Storm worker to restart.
>> I have several questions about it (our Ignite version is 2.3.0):
>> 1) If Ignite is in client mode, does it use off-heap memory? How do we set
>> the max on-heap/off-heap memory to use?
>> 2) Our Storm worker has 8 GB of memory. The Ignite client printed OOM, but
>> it didn't trigger the Storm worker to dump its heap;
>> instead we got an Ignite server's heap dump, and the Ignite server didn't
>> die. The Ignite server's heap dump is very small, only 200 MB.
>> Which process is OOM, the worker or the Ignite server?
>>
>> This is logs:  Thanks in advance.
>>
>>   Suppressed: org.apache.ignite.IgniteCheckedException: Failed to update keys on primary node.
>> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.UpdateErrors.addFailedKeys(UpdateErrors.java:124) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateResponse.addFailedKeys(GridNearAtomicUpdateResponse.java:342) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal0(GridDhtAtomicCache.java:1784) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.updateAllAsyncInternal(GridDhtAtomicCache.java:1627) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.processNearAtomicUpdateRequest(GridDhtAtomicCache.java:3054) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache.access$400(GridDhtAtomicCache.java:129) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:265) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.distributed.dht.atomic.GridDhtAtomicCache$5.apply(GridDhtAtomicCache.java:260) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:1060) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:579) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:378) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:304) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$100(GridCacheIoManager.java:99) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:293) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:1183) ~[stormjar.jar:?]
>> at org.apache.ignite.internal.man
>>

Re: Distributed lock

2018-04-02 Thread Andrey Mashenkov
Hi,

If the lock resides on the node that holds it, how would a newly joined node
discover that locked lock instance by name in order to add itself to the
waiting queue?
Who would be responsible for handling the waiting queue? Would the queue have
to be transferred when the owner changes?
And finally, how would this resolve the issue of the lock-owner node failing?
Who would become the next owner?


On Mon, Apr 2, 2018 at 10:54 AM, Green <15151803...@163.com> wrote:

> Hi, Roman
>   Thank you for the reply.
>   I think I should change cacheMode to REPLICATED; it is safer.
>   Why not cache the lock on the node that owns it? If that node leaves the
> topology, it would have no effect on other nodes.
>
> Thanks
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Slow data load in ignite from S3

2018-04-02 Thread Andrey Mashenkov
Hi Rahul,

Possibly mostly new data is being loaded into Ignite; I mean that Ignite
allocates new pages rather than updating existing ones.

In that case, you may not benefit from increasing the checkpoint region
size; it will just defer the checkpoint.

Also, you can try moving the WAL and the Ignite store to different disks,
and setting the region's initial size to reduce or avoid region extent
allocation.
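A sketch of a configuration along these lines (the mount points and sizes below are illustrative assumptions, not values from this thread; the property names follow Ignite 2.x `DataStorageConfiguration`):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="dataStorageConfiguration">
        <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
            <!-- Keep the WAL on a separate physical disk from the page store (assumed mounts). -->
            <property name="walPath" value="/mnt/wal"/>
            <property name="walArchivePath" value="/mnt/wal/archive"/>
            <property name="storagePath" value="/mnt/storage"/>
            <property name="defaultDataRegionConfiguration">
                <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                    <!-- Pre-allocate the region so it does not grow in extents under load. -->
                    <property name="initialSize" value="#{20L * 1024 * 1024 * 1024}"/>
                    <property name="maxSize" value="#{20L * 1024 * 1024 * 1024}"/>
                </bean>
            </property>
        </bean>
    </property>
</bean>
```

Setting `initialSize` equal to `maxSize` avoids incremental region growth entirely, at the cost of allocating the full region up front.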

On Mon, Apr 2, 2018 at 9:59 AM, rahul aneja  wrote:

> Hi Andrey,
>
> Yes, we are using SSDs. Earlier we were using the default 256 MB checkpoint
> buffer; in order to reduce the checkpoint frequency we increased the buffer
> size, but it didn't have any impact on performance.
>
> On Fri, 30 Mar 2018 at 10:49 PM, Andrey Mashenkov <
> andrey.mashen...@gmail.com> wrote:
>
>> Hi,
>>
>> Possibly the storage is a bottleneck, or the checkpoint buffer is too large.
>> Do you use Provisioned IOPS SSDs?
>>
>>
>> On Fri, Mar 30, 2018 at 3:32 PM, rahul aneja 
>> wrote:
>>
>>> Hi ,
>>>
>>> We are trying to load ORC data (around 50 GB) from S3 into Ignite from
>>> Spark using the DataFrame API. It starts fast with good write throughput,
>>> and then after some time the throughput just drops and it gets stuck.
>>>
>>> We also tried changing multiple configurations, but no luck:
>>> 1. enabling checkpoint write throttling
>>> 2. disabling throttling and increasing the checkpoint buffer
>>>
>>>
>>> Please find below configuration and properties of the cluster
>>>
>>>
>>>1. 10 node cluster r4.4xl (EMR aws) and shared with spark
>>>2.  ignite is started with -Xms20g -Xmx30g
>>>3.  Cache mode is partitioned
>>>
>>>4. persistence is enabled
>>>5. DirectIO is enabled
>>>6. No backup
>>>
>>> <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>>>     <property name="...">
>>>         <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>>>             <property name="..." value="#{20L * 1024 * 1024 * 1024}"/>
>>>             <property name="..." value="#{... * 1024 * 1024}"/>
>>>         </bean>
>>>     </property>
>>> </bean>
>>>
>>>
>>> Thanks in advance,
>>>
>>> Rahul Aneja
>>>
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Upgrade from 2.1.0 to 2.4.0 resulting in error within transaction block

2018-04-02 Thread Yakov Zhdanov
Cross posting to dev.

Vladimir Ozerov, can you please take a look at NPE from query processor
(see below - GridQueryProcessor.typeByValue(GridQueryProcessor.java:1901))?

--Yakov

2018-03-29 0:19 GMT+03:00 smurphy :

> Code works in Ignite 2.1.0. Upgrading to 2.4.0 produces the stack trace
> below. The delete statement that is causing the error is:
>
> SqlFieldsQuery sqlQuery = new SqlFieldsQuery("delete from EngineFragment
> where " + criteria());
> fragmentCache.query(sqlQuery.setArgs(criteria.getArgs()));
>
> The code above is called from within a transactional block managed by a
> PlatformTransactionManager which is in turn managed by Spring's
> ChainedTransactionManager. If the @Transactional annotation is removed from
> the surrounding code, then the code works ok...
>
> 2018-03-28 15:50:05,748 WARN  [engine 127.0.0.1] progress_monitor_2 unknown unknown {ProgressMonitorImpl.java:112} - Scan [ec7af5e8-a773-40fd-9722-f81103de73dc] is unable to process!
> javax.cache.CacheException: Failed to process key '247002'
> at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:618)
> at org.apache.ignite.internal.processors.cache.IgniteCacheProxyImpl.query(IgniteCacheProxyImpl.java:557)
> at org.apache.ignite.internal.processors.cache.GatewayProtectedCacheProxy.query(GatewayProtectedCacheProxy.java:382)
> at com.company.core.dao.ignite.IgniteFragmentDao.delete(IgniteFragmentDao.java:143)
> at com.company.core.dao.ignite.IgniteFragmentDao$$FastClassBySpringCGLIB$$c520aa1b.invoke()
> at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:720)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
> at org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:655)
> at com.company.core.dao.ignite.IgniteFragmentDao$$EnhancerBySpringCGLIB$$ce60f71c.delete()
> at com.company.core.core.service.impl.InternalScanServiceImpl.purgeScanFromGrid(InternalScanServiceImpl.java:455)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:302)
> at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:202)
> at com.sun.proxy.$Proxy417.purgeScanFromGrid(Unknown Source)
> at com.company.core.core.async.tasks.PurgeTask.process(PurgeTask.java:85)
> at sun.reflect.GeneratedMethodAccessor197.invoke(Unknown Source)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:302)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
> at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:99)
> at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:281)
> at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96)
> at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
> at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:208)
> at com.sun.proxy.$Proxy418.process(Unknown Source)
> at com.company.core.core.async.impl.ProgressMonitorImpl._runTasks(ProgressMonitorImpl.java:128)
> at com.company.core.core.async.impl.ProgressMonitorImpl.lambda$null$0(ProgressMonitorImpl.java:98)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at

Re: Distributed lock

2018-04-02 Thread Green
Hi, Roman
  Thank you for the reply.
  I think I should change cacheMode to REPLICATED; it is safer.
  Why not cache the lock on the node that owns it? If that node leaves the
topology, it would have no effect on other nodes.

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: stop sending messages pls

2018-04-02 Thread Вячеслав Коптилин
Hello,

To unsubscribe from the user mailing list, send an email to
user-unsubscr...@ignite.apache.org with the word "Unsubscribe" (without
quotes) as the subject.

If you have a mail client, follow the unsubscribe link here:
https://ignite.apache.org/community/resources.html#mail-lists

Thanks,
S.

2018-04-02 10:12 GMT+03:00 andriy.kasat...@kyivstar.net <
andriy.kasat...@kyivstar.net>:

> My email address is andriy.kasat...@kyivstar.net. Please unsubscribe this
> address.
>


Re: Distributed lock

2018-04-02 Thread Roman Guseinov
Hi,

Yes, you are right. The backups count is zero by default, so it is possible
to lose some locks if one of the nodes leaves the topology.

You can set the backups count in AtomicConfiguration:


<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    ...
    <property name="atomicConfiguration">
        <bean class="org.apache.ignite.configuration.AtomicConfiguration">
            <property name="backups" value="1"/>
        </bean>
    </property>
</bean>

Best Regards,
Roman



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


stop sending messages pls

2018-04-02 Thread andriy.kasat...@kyivstar.net
My email address is andriy.kasat...@kyivstar.net.
Please unsubscribe this address.


Re: Slow data load in ignite from S3

2018-04-02 Thread rahul aneja
Hi Andrey,

Yes, we are using SSDs. Earlier we were using the default 256 MB checkpoint
buffer; in order to reduce the checkpoint frequency we increased the buffer
size, but it didn't have any impact on performance.

On Fri, 30 Mar 2018 at 10:49 PM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> Hi,
>
> Possibly the storage is a bottleneck, or the checkpoint buffer is too large.
> Do you use Provisioned IOPS SSDs?
>
>
> On Fri, Mar 30, 2018 at 3:32 PM, rahul aneja 
> wrote:
>
>> Hi ,
>>
>> We are trying to load ORC data (around 50 GB) from S3 into Ignite from
>> Spark using the DataFrame API. It starts fast with good write throughput,
>> and then after some time the throughput just drops and it gets stuck.
>>
>> We also tried changing multiple configurations, but no luck:
>> 1. enabling checkpoint write throttling
>> 2. disabling throttling and increasing the checkpoint buffer
>>
>>
>> Please find below configuration and properties of the cluster
>>
>>
>>1. 10 node cluster r4.4xl (EMR aws) and shared with spark
>>2.  ignite is started with -Xms20g -Xmx30g
>>3.  Cache mode is partitioned
>>
>>4. persistence is enabled
>>5. DirectIO is enabled
>>6. No backup
>>
>> <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>>     <property name="...">
>>         <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>>             <property name="..." value="#{20L * 1024 * 1024 * 1024}"/>
>>             <property name="..." value="#{... * 1024 * 1024}"/>
>>         </bean>
>>     </property>
>> </bean>
>>
>>
>> Thanks in advance,
>>
>> Rahul Aneja
>>
>>
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Distributed lock

2018-04-02 Thread Green
Hi
  I want to use Ignite's ReentrantLock.
  In the code, the default backups count in AtomicConfiguration is zero.
  When a node leaves the topology, will some of the locks cached on that node
be lost? Should I modify the atomicConfiguration?

Thanks



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Running heavy queries on Ignite cluster on backing store directly without impacting the cluster

2018-04-02 Thread Naveen
Hi,

Let me rephrase my question; I guess I haven't conveyed it correctly.

Let's take an example:

An Ignite cluster with an RDBMS (Oracle) as the backing store, and no
eviction in place.

As we all know, we can run complex queries on Oracle to retrieve the desired
data.
In this case, since we don't have eviction in place, data is always retrieved
from RAM; the only time a request goes to Oracle is for upserts and deletes.
So even if the Oracle DB is heavily loaded while running these queries, it
does not affect the cluster's performance for data retrieval, am I right?
Inserts/updates/deletes on the DB may get slower if the DB is loaded with
these heavy queries.

Similarly, how can we achieve this with an Ignite cluster using native
persistence?
With native persistence, if we run some heavy queries on the cluster through
SQLLine, they may cause an out-of-memory error and a node might even crash.
How can we avoid this? Such a query should run against the backing store; I
am fine if the query execution takes longer, but the cluster should not
crash.

Hope I have conveyed my requirements more clearly this time.
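One knob worth knowing about for this symptom (an assumption to verify against your Ignite version: the `lazy` flag is available in the 2.x thin JDBC driver that sqlline uses) is lazy result-set streaming, which fetches results incrementally instead of materializing the whole result set in heap:

```
!connect jdbc:ignite:thin://127.0.0.1?lazy=true
```

With `lazy=true` a heavy SELECT is streamed in pages, trading some query speed for a bounded memory footprint on the server side.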

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Where is the data stored in Durable Memory?

2018-04-02 Thread Roman Guseinov
Hi Lucky,

Please check whether the IGNITE_HOME environment variable or JVM property is
properly configured. The persisted data should be located in the
${IGNITE_HOME}/work directory.

If you are on Linux, check the /tmp/ignite directory as the default one.

You can find details about the file types and folder structure here [1].
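As an aside, the work directory can also be pinned explicitly in the node configuration so it does not depend on IGNITE_HOME at all (the path below is an illustrative assumption):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- Persistence files will then be created under this directory. -->
    <property name="workDirectory" value="/opt/ignite/work"/>
</bean>
```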

Best Regards,
Roman

[1]
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood#IgnitePersistentStore-underthehood-IgnitePersitentStore



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/