Re: Ignite client node raises "Sequence was removed from cache" after Ignite Server node restarts

2020-08-27 Thread xmw45688
How do we enable persistence for this atomicLong? My understanding is
that atomicLong and atomicReference can't be persisted. Even if they could be
persisted, wouldn't the next value be reset and start from the initial value?

My issue is that the cached sequence referenced by the Spring bean was removed.
How do I re-initialize this sequence once the Ignite server node has restarted
while the client node is still running?

The sequence is initialized from the client side using @PostConstruct. We need
a way to re-initialize it with the max value from the DB after the Ignite
server restarts and the client node reconnects to it.

@PostConstruct
@Override
public void initSequence() {
    Long maxId = userRepository.getMaxId();
    if (maxId == null) { maxId = 0L; }
    LOG.info("Max User id: {}", maxId);
    userSeq = igniteInstance.atomicLong("userSeq", maxId, true);
    userSeq.getAndSet(maxId);
}
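One possible shape for the lazy re-initialization discussed in this thread is sketched below. This is only an assumption, not a tested fix: `userSeq` and `initSequence()` are the members from the snippet above, and the key constraint (which causes the "cannot start/stop cache within lock or transaction" error mentioned later in the thread) is that creating an atomic starts a cache, so this must run outside any Ignite transaction or lock.

```java
// Hedged sketch: re-create the sequence if the server restarted and the
// backing cache was destroyed. Must be called outside any Ignite
// transaction/lock, because atomicLong(..., true) may start a cache.
public synchronized Long getNextSequence() {
    try {
        if (userSeq == null || userSeq.removed())
            initSequence();                 // re-seed from the DB max id
        return userSeq.incrementAndGet();
    }
    catch (IllegalStateException e) {       // "Sequence was removed from cache"
        initSequence();
        return userSeq.incrementAndGet();
    }
}
```

The `synchronized` keyword is there so that concurrent callers do not race to re-create the same sequence; a cluster-wide guard (e.g. a reentrant check on `removed()`) may still be needed with multiple client nodes.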
 



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: How to confirm that disk compression is in effect?

2020-08-27 Thread 38797715

Hi,

The CREATE TABLE statement is as follows:

CREATE TABLE PI_COM_DAY
(COM_ID VARCHAR(30) NOT NULL,
ITEM_ID VARCHAR(30) NOT NULL,
DATE1 VARCHAR(8) NOT NULL,
KIND VARCHAR(1),
QTY_IOD DECIMAL(18, 6),
AMT_IOD DECIMAL(18, 6),
QTY_PURCH DECIMAL(18, 6),
AMT_PURCH DECIMAL(18, 6),
QTY_SOLD DECIMAL(18, 6),
AMT_SOLD DECIMAL(18, 6),
AMT_SOLD_NO_TAX DECIMAL(18, 6),
QTY_PROFIT DECIMAL(18, 6),
AMT_PROFIT DECIMAL(18, 6),
QTY_LOSS DECIMAL(18, 6),
AMT_LOSS DECIMAL(18, 6),
QTY_EOD DECIMAL(18, 6),
AMT_EOD DECIMAL(18, 6),
UNIT_COST DECIMAL(18, 8),
SUMCOST_SOLD DECIMAL(18, 6),
GROSS_PROFIT DECIMAL(18, 6),
QTY_ALLOCATION DECIMAL(18, 6),
AMT_ALLOCATION DECIMAL(18, 2),
AMT_ALLOCATION_NO_TAX DECIMAL(18, 2),
GROSS_PROFIT_ALLOCATION DECIMAL(18, 6),
SUMCOST_SOLD_ALLOCATION DECIMAL(18, 6),
PRIMARY KEY (COM_ID, ITEM_ID, DATE1))
WITH "template=cache-partitioned,CACHE_NAME=PI_COM_DAY";

CREATE INDEX IDX_PI_COM_DAY_ITEM_DATE ON PI_COM_DAY(ITEM_ID, DATE1);

I don't think there's anything special about it.
Then we imported 10 million rows using the COPY command. The data is
basically real production data; I think its dispersion is fine, and it is not
artificial data with high similarity.
I would like to know whether there are test results for the disk
compression feature. Most other in-memory databases also offer data
compression, but it does not seem to be taking effect here, or what am I
doing wrong?
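For reference, a minimal programmatic sketch of per-cache disk page compression follows. The names and values are illustrative, but the constraint in the comments is documented behavior: compression only takes effect when the data-storage page size is larger than the file-system block size, and freed space is reclaimed by punching holes in sparse files, so it shows up in actual disk usage (`du`), not in file lengths. This may be why no difference was observed.

```java
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.DiskPageCompression;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch: page size must exceed the FS block size (e.g. 8K/16K pages on a
// 4K-block file system), otherwise compressed pages still occupy full blocks.
DataStorageConfiguration ds = new DataStorageConfiguration()
    .setPageSize(8192);

CacheConfiguration<?, ?> ccfg = new CacheConfiguration<>("PI_COM_DAY")
    .setDiskPageCompression(DiskPageCompression.ZSTD)
    .setDiskPageCompressionLevel(10);   // valid range depends on the algorithm

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(ds)
    .setCacheConfiguration(ccfg);
```

Because hole-punching produces sparse files, compare `du -sh` on the storage directory rather than `ls -l` file sizes when judging whether compression is in effect.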


On 2020/8/28 at 12:39 AM, Michael Cherkasov wrote:
Could you please share your benchmark code? I believe compression
might depend on the data you write; if it is completely random, it is
difficult to compress.


On Wed, Aug 26, 2020, 8:26 PM 38797715 <38797...@qq.com> wrote:


Hi,

We turn on disk compression to see the trend of execution time and
disk space.

Our expectation is that after disk compression is turned on,
although more CPU is used, the disk space is less occupied.
Because more data is written per unit time, the overall execution
time will be shortened in the case of insufficient memory.

However, it is found that the execution time and disk consumption
do not change significantly. We tested the
diskPageCompressionLevel values as 0, 10 and 17 respectively.

Our test method is as follows:
The ignite-compress module has been introduced.

The configuration of ignite is as follows:


<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <!-- The bean definitions (IgniteConfiguration and CacheConfiguration,
         including the diskPageCompression settings) were stripped by the
         list archive. -->
</beans>

Re: Item not found, B+Tree is corrupted and critical error detected after add nodes to baseline topology

2020-08-27 Thread Evgenii Zhuravlev
Hi,

Can you attach the logs in a normal format? They are really hard to read. Also,
please attach full logs from the nodes, not only the stack trace.

Thanks,
Evgenii

вт, 25 авг. 2020 г. в 19:27, Steven Zheng :

> Hi community,
> Currently I have 25 nodes in my ignite cluster and all of them were added
> into the baseline, and I was trying to add another 5 nodes into it. The
> data in my cluster is about 8TB and the persistence is enabled.
> First I started all 5 nodes, then executed on the command line:
> ```
> bin/control.sh --baseline add ${my_node_id}
> ```
> Meanwhile, there is still read/write workloads on the cluster.
>
> After a few minutes, a critical error was detected; several nodes crashed and
> emitted logs like this (JSON format):
> ```
> {
>   "message": "Critical system error detected. Will be handled accordingly
> to configured handler [hnd=StopNodeFailureHandler
> [super=AbstractFailureHandler [ignoredFailureTypes=UnmodifiableSet
> [SYSTEM_WORKER_BLOCKED, SYSTEM_CRITICAL_OPERATION_TIMEOUT]]],
> failureCtx=FailureContext [type=CRITICAL_ERROR, err=class
> o.a.i.i.processors.cache.persistence.tree.CorruptedTreeException: B+Tree is
> corrupted [pages(groupId, pageId)=[IgniteBiTuple [val1=-1526563570,
> val2=844420635165881]], cacheId=-2021712086, cacheName=MY_SQL_TABLE,
> indexName=AFFINITY_KEY, msg=Runtime failure on search row: Row@d00bc1[
> key: SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf_KEY
> [idHash=2015873411, hash=-2113698004, CARDNUMBER=25323111322, IP=], val:
> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf
> [idHash=1729397221, hash=2090272279, MY_BIZ_UID=0, ETL_TIME=1596023962011]
> ][ 25323111322, , 0, 1596023962011 ",
>   "loggerFqcn": "org.apache.logging.log4j.spi.AbstractLogger",
>   "thrown": {
> "localizedMessage": "B+Tree is corrupted [pages(groupId,
> pageId)=[IgniteBiTuple [val1=-1526563570, val2=844420635165881]],
> cacheId=-2021712086, cacheName=MY_SQL_TABLE, indexName=AFFINITY_KEY,
> msg=Runtime failure on search row: Row@d00bc1[ key:
> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf_KEY
> [idHash=2015873411, hash=-2113698004, CARDNUMBER=25323111322, IP=], val:
> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf
> [idHash=1729397221, hash=2090272279, MY_BIZ_UID=0, ETL_TIME=1596023962011]
> ][ 25323111322, , 0, 1596023962011 ]]",
> "message": "B+Tree is corrupted [pages(groupId, pageId)=[IgniteBiTuple
> [val1=-1526563570, val2=844420635165881]], cacheId=-2021712086,
> cacheName=MY_SQL_TABLE, indexName=AFFINITY_KEY, msg=Runtime failure on
> search row: Row@d00bc1[ key:
> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf_KEY
> [idHash=2015873411, hash=-2113698004, CARDNUMBER=25323111322, IP=], val:
> SQL_FEAT_MY_SQL_TABLE_7f8539ef_08a1_421e_a0b8_60f80cdbbbdf
> [idHash=1729397221, hash=2090272279, MY_BIZ_UID=0, ETL_TIME=1596023962011]
> ][ 25323111322, , 0, 1596023962011 ]]",
> "commonElementCount": 0,
> "cause": {
>   "localizedMessage": "java.lang.IllegalStateException: Item not
> found: 27",
>   "message": "java.lang.IllegalStateException: Item not found: 27",
>   "commonElementCount": 21,
>   "cause": {
> "localizedMessage": "Item not found: 27",
> "message": "Item not found: 27",
> "commonElementCount": 21,
> "name": "java.lang.IllegalStateException",
> "extendedStackTrace": [{
> "line": 351,
> "method": "findIndirectItemIndex",
> "exact": false,
> "file": "AbstractDataPageIO.java",
> "location": "ignite-core-2.8.0-SNAPSHOT.jar",
> "class":
> "org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO",
> "version": "2.8.0-SNAPSHOT"
>   },
>   {
> "line": 459,
> "method": "getDataOffset",
> "exact": false,
> "file": "AbstractDataPageIO.java",
> "location": "ignite-core-2.8.0-SNAPSHOT.jar",
> "class":
> "org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO",
> "version": "2.8.0-SNAPSHOT"
>   },
>   {
> "line": 501,
> "method": "readPayload",
> "exact": false,
> "file": "AbstractDataPageIO.java",
> "location": "ignite-core-2.8.0-SNAPSHOT.jar",
> "class":
> "org.apache.ignite.internal.processors.cache.persistence.tree.io.AbstractDataPageIO",
> "version": "2.8.0-SNAPSHOT"
>   },
>   {
> "line": 325,
> "method": "readIncomplete",
> "exact": false,
> "file": "CacheDataRowAdapter.java",
> "location": "ignite-core-2.8.0-SNAPSHOT.jar",
> "class":
> "org.apache.ignite.internal.processors.cache.persistence.CacheDataRowAdapter",
> "version": "2.8.0-SNAPSHOT"
>   },
>   {
> "line": 261,
> "

Re: Ignite's memory consumption

2020-08-27 Thread Michael Cherkasov
512M of heap, plus 512M of off-heap, plus some space for Java metaspace; if
you don't set it, it is unlimited, so set it with -XX:MaxMetaspaceSize=256m.

So, in total it will consume up to 1.25GB.
But even with what you described, I don't see any problem: 760MB is less
than the limits you set for heap + off-heap.

ср, 26 авг. 2020 г. в 09:17, Dana Milan :

> Hi all Igniters,
>
> I am trying to minimize Ignite's memory consumption on my server.
>
> Some background:
> My server has 16GB RAM, and is supposed to run applications other than
> Ignite.
> I use Ignite to store a cache. I use the TRANSACTIONAL_SNAPSHOT mode and I
> don't use persistence (configuration file attached). To read and update the
> cache I use SQL queries, through ODBC Client in C++ and through an
> embedded client-mode node in C#.
> My data consists of a table with 5 columns, and I guess around tens of
> thousands of rows.
> Ignite metrics tell me that my data takes 167MB ("CFGDataRegion region
> [used=167MB, free=67.23%, comm=256MB]", This region contains mainly this
> one cache).
>
> At the beginning, when I didn't tune the JVM at all, the Apache.Ignite
> process consumed around 1.6-1.9GB of RAM.
> After I've done some reading and research, I use the following JVM options
> which have brought the process to consume around 760MB as of now:
> -J-Xms512m
> -J-Xmx512m
> -J-Xmn64m
> -J-XX:+UseG1GC
> -J-XX:SurvivorRatio=128
> -J-XX:MaxGCPauseMillis=1000
> -J-XX:InitiatingHeapOccupancyPercent=40
> -J-XX:+DisableExplicitGC
> -J-XX:+UseStringDeduplication
>
> Currently Ignite is up for 29 hours on my server. When I only started the
> node, the Apache.Ignite process consumed around 600MB (after my data
> insertion, which doesn't change much after), and as stated, now it consumes
> around 760MB. I've been monitoring it every once in a while and this is not
> a sudden rise, it has been rising slowly but steadily ever since the node
> has started.
> I used DBeaver to look into the node metrics system view, and I turned on the
> garbage collector logs. The garbage collector log shows that heap is
> constantly growing, but I guess this is due to the SQL queries and their
> results being stored there. (There are a few queries in a second, the
> results normally contain one row but can contain tens or hundreds of rows).
> After every garbage collection the heap usage is between 80-220MB. This is
> in accordance with what I see under the HEAP_MEMORY_USED system view metric.
> Also, I can see that NONHEAP_MEMORY_COMMITTED is around 102MB and
> NONHEAP_MEMORY_USED is around 98MB.
>
> My question is, what could be causing the constant growth in memory usage?
> What else consumes memory that doesn't appear in these metrics?
>
> Thanks for your help!
>


Re: Ignite client node raises "Sequence was removed from cache" after Ignite Server node restarts

2020-08-27 Thread Michael Cherkasov
Hi, as I remember, IgniteAtomicLong uses the default data region as storage, so
if you don't have persistence enabled for it, it will be lost after a restart.
Try to enable persistence and check it again.
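A minimal sketch of what "enable persistence" means here, assuming (per this reply) that the atomics live in the default data region; the API calls are standard Ignite 2.x configuration:

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Enable native persistence for the default data region on the server node.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(storageCfg);

Ignite ignite = Ignition.start(cfg);
ignite.cluster().active(true);   // clusters with persistence start inactive
```

Note that a persistent cluster must be activated explicitly (or via baseline auto-adjust) before caches and atomics become usable.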

ср, 26 авг. 2020 г. в 18:08, xmw45688 :

> *USE CASE *- use IgniteAtomicLong for table sequence generation (may not be
> the correct approach in a distributed environment).
>
> *Ignite Server *(start Ignite as server mode) -
> apache-ignite-2.8.0.20190215
> daily build
> *Ignite Service* (start Ignite as client mode) - use Ignite Spring to
> initialize the sequence, see code snippet below.
> *code snippet*
>
> IgniteAtomicLong userSeq;
>
> @Autowired
> UserRepository userRepository;
>
> @Autowired
> Ignite igniteInstance;
>
> @PostConstruct
> @Override
> public void initSequence() {
>     Long maxId = userRepository.getMaxId();
>     if (maxId == null) { maxId = 0L; }
>     LOG.info("Max User id: {}", maxId);
>     userSeq = igniteInstance.atomicLong("userSeq", maxId, true);
>     userSeq.getAndSet(maxId);
> }
>
> @Override
> public Long getNextSequence() {
>     return userSeq.incrementAndGet();
> }
>
> *Exception*
> This code works well until the Ignite server is restarted (the Ignite service
> was not restarted). It raised "Sequence was removed from cache" after the
> Ignite server node restarted.
>
> 2020-08-11 16:14:46 [http-nio-8282-exec-3] ERROR
> c.p.c.p.service.PersistenceService - Error while saving entity:
> java.lang.IllegalStateException: Sequence was removed from cache: userSeq
> at
>
> org.apache.ignite.internal.processors.datastructures.AtomicDataStructureProxy.removedError(AtomicDataStructureProxy.java:145)
> at
>
> org.apache.ignite.internal.processors.datastructures.AtomicDataStructureProxy.checkRemoved(AtomicDataStructureProxy.java:116)
> at
>
> org.apache.ignite.internal.processors.datastructures.GridCacheAtomicLongImpl.incrementAndGet(GridCacheAtomicLongImpl.java:94)
>
> *Tried to reinitialize when the server node is down, but it raises another
> exception: "cannot start/stop cache within lock or transaction"*
>
> How to solve such issues?  Any suggestions are appreciated.
>
> @Override
> public Long getNextSequence() {
>     if (userSeq == null || userSeq.removed()) { initSequence(); }
>     return userSeq.incrementAndGet();
> }
>
>
>
>
>
>
>


Re: ignite partition mode

2020-08-27 Thread Michael Cherkasov
I'm not sure how you determined that only one node stores data, but if so, it
means only one node is in the baseline:
https://www.gridgain.com/docs/latest/developers-guide/baseline-topology#baseline-topology
Add the second node to the baseline and it will store data too.
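The baseline commands involved can be sketched as follows (syntax per `control.sh` in recent Ignite versions; the consistent ID is a placeholder):

```shell
# Print the cluster state and the current baseline (lists consistent IDs):
bin/control.sh --baseline

# Add one or more nodes to the baseline topology by consistent ID:
bin/control.sh --baseline add consistentId1[,consistentId2,...]
```

After the baseline changes, data rebalances onto the newly added node automatically.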

чт, 27 авг. 2020 г. в 15:08, Denis Magda :

> Explain the steps in detail
> -
> Denis
>
>
> On Wed, Aug 26, 2020 at 5:13 AM itsmeravikiran.c <
> itsmeravikira...@gmail.com> wrote:
>
>> By using ignite web console
>>
>>
>>
>>
>


Re: ignite partition mode

2020-08-27 Thread Denis Magda
Explain the steps in detail
-
Denis


On Wed, Aug 26, 2020 at 5:13 AM itsmeravikiran.c 
wrote:

> By using ignite web console
>
>
>
>


Re: Executing select query in Partition mode (cluster with two nodes) taking more than 10 minutes.

2020-08-27 Thread Denis Magda
Hi Bhavesh,

Follow the basic performance tuning recommendations for SQL to
understand how to optimize the query. Get the execution plan first. Most
likely, you need to add secondary indexes. If that's not the case, then
confirm the query doesn't cause long stop-the-world pauses.

-
Denis
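The first two steps above can be illustrated as follows (table and column names are hypothetical, not from this thread):

```sql
-- 1. Get the execution plan for the slow query:
EXPLAIN SELECT * FROM person WHERE city_id = 1;

-- 2. If the plan shows a full table scan (e.g. a "__SCAN_" access path),
--    create a secondary index on the filtered column:
CREATE INDEX idx_person_city ON person (city_id);
```

Re-running EXPLAIN afterwards should show the new index being used instead of the scan.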


On Tue, Aug 25, 2020 at 6:09 AM Bhavesh Modi 
wrote:

>
>
> Hi Team,
>
>
>
> I have created a cluster with two nodes. After creating a new
> table, we execute a select query for the first time, and it takes too
> much time to fetch data (almost 10+ minutes).
>
> Please find attached my server XML configuration; below is the client
> configuration class. Any suggestion to solve this problem is appreciated.
> Thanks in advance.
>
>
>
> public static synchronized void init() {
>     try {
>         Ignition.setClientMode(true);
>
>         DataStorageConfiguration storageCfg = new DataStorageConfiguration();
>         storageCfg.setWalMode(WALMode.BACKGROUND);
>
>         IgniteConfiguration cfg = new IgniteConfiguration();
>         cfg.setDataStorageConfiguration(storageCfg);
>         // Ignite Cluster changes
>         cfg.setPeerClassLoadingEnabled(true);
>         cfg.setRebalanceThreadPoolSize(4);
>
>         TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi();
>         TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
>         ipFinder.setAddresses(Arrays.asList("127.0.0.1:47500", "12.16.0.19:47500"));
>
>         discoverySpi.setLocalPort(47500);
>         discoverySpi.setIpFinder(ipFinder).setJoinTimeout(3);
>
>         TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
>         commSpi.setConnectTimeout(3).setLocalPort(48110);
>
>         // this timeout is used to reconnect client to server if server has
>         // failed/restarted
>         cfg.setClientFailureDetectionTimeout(4320);
>
>         cfg.setDiscoverySpi(discoverySpi);
>         cfg.setCommunicationSpi(commSpi);
>
>         ignite = Ignition.start(cfg);
>         ignite.cluster().active(true);
>
>         ignite.cluster().baselineAutoAdjustEnabled(true);
>         ignite.cluster().baselineAutoAdjustTimeout(0);
>     } catch (Exception e) {
>         logger.error("Error in starting ignite cluster", e);
>     }
> }
>
>
>
>
>
>
> “Confidentiality Notice: The contents of this email message and any
> attachments are intended solely for the addressee(s) and may contain
> confidential and/or privileged information and may be legally protected
> from disclosure. If you are not the intended recipient of this message or
> their agent, or if this message has been addressed to you in error, please
> immediately alert the sender by reply email and then delete this message
> and any attachments. If you are not the intended recipient, you are hereby
> notified that any use, dissemination, copying, or storage of this message
> or its attachments is strictly prohibited.”
>


Re: Increase the indexing speed while loading the cache from an RDBMS

2020-08-27 Thread Srikanta Patanjali
Thanks Anton & Mikhail for your responses.

I'll try to disable the WAL and test the cache preload performance. Also
will execute a JFR to capture some info.

@Mikhail I've checked the RDBMS perf metrics. I do not see any spike in the
CPU or memory usage. Is there anything in particular that could cause a
bottleneck from the DB perspective? The DB in use is Postgres.

@Anton Is there a sample or best-practice reference for using data
streamers to query the DB directly to preload the cache? My intention
is to use Spring Data via IgniteRepositories. In order to use an Ignite
repository, I would have to preload the cache; and to preload the
cache, I would have to query the DB. A bit of a chicken-and-egg problem.

Regards,
Srikanta
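One way the JDBC-to-streamer preload could look is sketched below. This is an assumption-laden illustration, not a recommended pattern from the thread: the cache name, SQL, and value type are all hypothetical, and only `Ignite.dataStreamer()`, `addData()`, and `allowOverwrite()` are real Ignite API.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public final class CachePreloader {
    // Stream rows from JDBC straight into an Ignite cache, bypassing loadCache().
    public static void preload(Ignite ignite, String jdbcUrl) throws Exception {
        try (IgniteDataStreamer<Long, String> streamer = ignite.dataStreamer("personCache");
             Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT id, name FROM person")) {
            streamer.allowOverwrite(false);   // pure initial load, no updates
            while (rs.next())
                streamer.addData(rs.getLong("id"), rs.getString("name"));
        } // closing the streamer flushes any buffered entries
    }
}
```

This sidesteps the chicken-and-egg problem because the preload path talks to the DB via plain JDBC, while the Spring Data repositories are used only after the cache is warm.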

On Thu, Aug 27, 2020 at 7:54 PM Mikhail Cherkasov 
wrote:

> By the way, the bottleneck might be your RDBMS: it may simply stream data
> more slowly than Ignite can save it, so it makes sense to check this too.
>
> On Thu, Aug 27, 2020 at 10:46 AM akurbanov  wrote:
>
>> Hi,
>>
>> Which API are you using to load the entries, and what is the node
>> configuration? I would recommend sharing the configs and trying the
>> data streamer.
>>
>> https://apacheignite.readme.io/docs/data-streamers
>>
>> I would recommend recording a JFR to find where the VM spends most time.
>>
>> Best regards,
>> Anton
>>
>>
>>
>>
>
>
> --
> Thanks,
> Mikhail.
>


Re: How to stop a node from dying when memory is full?

2020-08-27 Thread Denis Magda
Folks, this article should be relevant to you. It covers all techniques to
avoid the OOM except for swapping:
https://www.gridgain.com/resources/blog/out-of-memory-apache-ignite-cluster-handling-techniques

You can use swapping as a part of your toolbox to survive the time when a
node is running out of the memory space:
https://apacheignite.readme.io/docs/swap-space

-
Denis
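The swap-space option linked above can be sketched programmatically as follows; the region name, sizes, and path are illustrative, while `setSwapPath` is the actual `DataRegionConfiguration` setter:

```java
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch: a capped data region backed by a swap file, so exceeding maxSize
// spills to disk (at disk speed) instead of failing with IgniteOOM.
DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("500MB_Region")
    .setInitialSize(100L * 1024 * 1024)
    .setMaxSize(500L * 1024 * 1024)
    .setSwapPath("/path/to/swap");

DataStorageConfiguration storage = new DataStorageConfiguration()
    .setDataRegionConfigurations(region);

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(storage);
```

Swapping trades latency for survival, so it is best combined with monitoring rather than used as the steady-state plan.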


On Thu, Aug 27, 2020 at 2:13 AM danami  wrote:

> I'd like to extend Colin's question.
>
> What if I'm using TRANSACTIONAL_SNAPSHOT mode and therefore can't use page
> eviction?
> How then, other than persistence, can I avoid OOM errors?
> Can I write a custom failure handler, to clear the cache/data region for
> example? Is it technically possible? (If so, how?) Will it work or is it
> bad
> and inconsistent as Colin suggested?
> Is using persistence my only option to avoid OOM errors or do I have other
> choices?
>
> Thank you for your help,
> Dana
>
>
>
>
>


Re: Increase the indexing speed while loading the cache from an RDBMS

2020-08-27 Thread Mikhail Cherkasov
By the way, the bottleneck might be your RDBMS: it may simply stream data
more slowly than Ignite can save it, so it makes sense to check this too.

On Thu, Aug 27, 2020 at 10:46 AM akurbanov  wrote:

> Hi,
>
> Which API are you using to load the entries, and what is the node
> configuration? I would recommend sharing the configs and trying the
> data streamer.
>
> https://apacheignite.readme.io/docs/data-streamers
>
> I would recommend recording a JFR to find where the VM spends most time.
>
> Best regards,
> Anton
>
>
>
>


-- 
Thanks,
Mikhail.


Re: How to stop a node from dying when memory is full?

2020-08-27 Thread Mikhail Cherkasov
Hi Dana,

Do you have java.lang.OOM or IgniteOOM ?

If you use TRANSACTIONAL_SNAPSHOT, then it's transactional data. Page
eviction means you just remove some random pages, so it doesn't make
sense to have TRANSACTIONAL_SNAPSHOT mode and remove random data at
the same time; that destroys the whole idea of transactions and data
consistency.
Maybe I am missing something about your case, but I would say: if you use
TRANSACTIONAL_SNAPSHOT, you simply cannot use page eviction, and if you can
use page eviction, then don't use TRANSACTIONAL_SNAPSHOT.

Regarding a custom failure handler, it should work; I don't see any reason
why it shouldn't. I would really appreciate it if you could send us an update
about this approach.

Thanks,
Mike.
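A minimal sketch of the custom failure handler discussed above follows. Treat it as an experiment, exactly as this thread does: whether clearing a cache under OOM is acceptable for your consistency requirements is the open question, the cache name is illustrative, and note that `IgniteOutOfMemoryException` and the `X` helper live in internal packages in Ignite 2.8.x.

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.failure.AbstractFailureHandler;
import org.apache.ignite.failure.FailureContext;
import org.apache.ignite.internal.mem.IgniteOutOfMemoryException;
import org.apache.ignite.internal.util.typedef.X;

public class ClearCacheOnOomHandler extends AbstractFailureHandler {
    @Override protected boolean handle(Ignite ignite, FailureContext ctx) {
        if (X.hasCause(ctx.error(), IgniteOutOfMemoryException.class)) {
            ignite.cache("myCache").clear();  // "myCache" is illustrative
            return false;                     // keep the node alive
        }
        return true;                          // invalidate node for other failures
    }
}
// Registered via: igniteConfiguration.setFailureHandler(new ClearCacheOnOomHandler());
```

Returning `false` tells Ignite the failure was handled and the node should not be invalidated; any other critical failure still stops the node.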


On Thu, Aug 27, 2020 at 2:13 AM danami  wrote:

> I'd like to extend Colin's question.
>
> What if I'm using TRANSACTIONAL_SNAPSHOT mode and therefore can't use page
> eviction?
> How then, other than persistence, can I avoid OOM errors?
> Can I write a custom failure handler, to clear the cache/data region for
> example? Is it technically possible? (If so, how?) Will it work or is it
> bad
> and inconsistent as Colin suggested?
> Is using persistence my only option to avoid OOM errors or do I have other
> choices?
>
> Thank you for your help,
> Dana
>
>
>
>
>


-- 
Thanks,
Mikhail.


Re: Increase the indexing speed while loading the cache from an RDBMS

2020-08-27 Thread akurbanov
Hi,

Which API are you using to load the entries, and what is the node
configuration? I would recommend sharing the configs and trying the
data streamer.

https://apacheignite.readme.io/docs/data-streamers

I would recommend recording a JFR to find where the VM spends most time.

Best regards,
Anton





Re: Select clause is quite more slow than same Select on MySQL

2020-08-27 Thread Denis Magda
This resource tries to explain your issue in a bit more detail. In short,
there are scenarios where an RDBMS can easily show much better performance
numbers.

-
Denis


On Thu, Aug 27, 2020 at 9:41 AM manueltg89 <
manuel.trinidad.gar...@hotmail.com> wrote:

> Hi Mike, thanks for information. I will try that.
>
> Thanks.
>
>
>
>


Re: Increase the indexing speed while loading the cache from an RDBMS

2020-08-27 Thread Michael Cherkasov
Hi Srikanta,
Have you tried loading the data without indexes, to compare times?

I think disabling the WAL for the data load can save you some hours:
https://www.gridgain.com/docs/latest/developers-guide/persistence/native-persistence#disabling-wal

Thanks,
Mike.

On Wed, Aug 26, 2020, 8:47 AM Srikanta Patanjali 
wrote:

> Currently I'm using Apache Ignite v2.8.1 to preload a cache from the
> RDBMS. There are two tables with each 27M rows. The index is defined on a
> single column of type String in 1st table and Integer in the 2nd table.
> Together the total size of the two tables is around 120GB.
>
> The preloading process (triggered using loadCacheAsync() from within a
> Java app) takes about 45hrs. The cache is persistence enabled and a
> common EBS volume (SSD) is being used for both the WAL and other locations.
>
> I'm unable to figure out the bottleneck for increasing the speed.
>
> Apart from defining a separate path for WAL and the persistence, is there
> any other way to load the cache faster (with indexing enabled) ?
>
>
> Thanks,
> Srikanta
>


Re: Select clause is quite more slow than same Select on MySQL

2020-08-27 Thread manueltg89
Hi Mike, thanks for information. I will try that.

Thanks.





Re: How to confirm that disk compression is in effect?

2020-08-27 Thread Michael Cherkasov
Could you please share your benchmark code? I believe compression might
depend on the data you write; if it is completely random, it is difficult
to compress.

On Wed, Aug 26, 2020, 8:26 PM 38797715 <38797...@qq.com> wrote:

> Hi,
>
> We turn on disk compression to see the trend of execution time and disk
> space.
>
> Our expectation is that after disk compression is turned on, although more
> CPU is used, the disk space is less occupied. Because more data is written
> per unit time, the overall execution time will be shortened in the case of
> insufficient memory.
>
> However, it is found that the execution time and disk consumption do not
> change significantly. We tested the diskPageCompressionLevel values as 0,
> 10 and 17 respectively.
>
> Our test method is as follows:
> The ignite-compress module has been introduced.
>
> The configuration of ignite is as follows:
> 
> <?xml version="1.0" encoding="UTF-8"?>
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xsi:schemaLocation="http://www.springframework.org/schema/beans
>            http://www.springframework.org/schema/beans/spring-beans.xsd">
>     <!-- Bean definitions ("org.apache.ignite.configuration.IgniteConfiguration"
>          and "org.apache.ignite.configuration.CacheConfiguration") were
>          stripped by the list archive. -->
> </beans>
>


Re: Select clause is quite more slow than same Select on MySQL

2020-08-27 Thread Michael Cherkasov
Hi Manuel,

Ignite is a grid solution; it is meant to be scaled horizontally, so
MySQL can be more efficient in some cases, but it cannot scale as well.
Ignite wins when the data no longer fits on one machine and you need to
scale out: load more data, use more machines, and at some point Apache
Ignite will show better performance.

Thanks,
Mike.

On Thu, Aug 27, 2020, 4:51 AM manueltg89 
wrote:

> Hi!,
>
> I have a table with 4 million records, and only 1 node in Apache
> Ignite. I run a simple query to get all the records:
>
> select * from items;
>
> I run the query in Apache Ignite and it takes 83 seconds; the same query
> in MySQL takes 33 seconds.
>
> Is it normal?
>
> Thanks in advance.
>
>
>
>


Re: Non-loopback local Ips:172.17.0.1...

2020-08-27 Thread akurbanov
Hi Kay, 

Most likely it is the default Docker gateway between the host and the bridge network.

Best regards,
Anton





RE: Hibernate 2nd Level query cache with Ignite

2020-08-27 Thread Tathagata Roy
If it helps here is my dependency tree



From: Tathagata Roy
Sent: Thursday, August 27, 2020 8:35 AM
To: user 
Subject: RE: Hibernate 2nd Level query cache with Ignite

Thanks for responding @Evgenii

Attaching the logs for both. There is no significant info in the Ignite
server log. In the application log, everything after line 6629 is from when
the application was able to reconnect to the server after the server restart.

From: Evgenii Zhuravlev 
mailto:e.zhuravlev...@gmail.com>>
Sent: Wednesday, August 26, 2020 4:58 PM
To: user mailto:user@ignite.apache.org>>
Subject: Re: Hibernate 2nd Level query cache with Ignite

Hi,

Can you please share full logs from client and server nodes?

Thanks,
Evgenii

ср, 26 авг. 2020 г. в 14:26, Tathagata Roy 
mailto:tathagata@raymondjames.com>>:
Hi,

I am trying to do a POC on hibernate 2nd level cache with Apache Ignite. With 
this configuration I was able to make it work

spring.jpa.properties.hibernate.cache.use_second_level_cache=true
spring.jpa.properties.hibernate.cache.use_query_cache=true
spring.jpa.properties.hibernate.generate_statistics=false
spring.jpa.properties.hibernate.cache.region.factory_class=org.apache.ignite.cache.hibernate.HibernateRegionFactory
spring.jpa.properties.org.apache.ignite.hibernate.default_access_type=READ_ONLY




<dependency>
    <groupId>org.gridgain</groupId>
    <artifactId>ignite-hibernate_5.3</artifactId>
    <version>8.7.23</version>
    <exclusions>
        <exclusion>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>


@Bean
@ConditionalOnMissingBean
public IgniteConfiguration igniteConfiguration(DiscoverySpi discoverySpi, 
CommunicationSpi communicationSpi) {
IgniteConfiguration igniteConfiguration = new IgniteConfiguration();
igniteConfiguration.setClientMode(clientMode);
igniteConfiguration.setMetricsLogFrequency(0);

igniteConfiguration.setGridLogger(new Slf4jLogger());

igniteConfiguration.setDiscoverySpi(discoverySpi);
igniteConfiguration.setCommunicationSpi(communicationSpi);
igniteConfiguration.setFailureDetectionTimeout(failureDetectionTimeout);

CacheConfiguration cc = new CacheConfiguration<>();
cc.setName("Entity1");
cc.setCacheMode(CacheMode.REPLICATED);



CacheConfiguration cc1 = new CacheConfiguration<>();
cc1.setName("default-query-results-region");
cc1.setCacheMode(CacheMode.REPLICATED);



CacheConfiguration cc2 = new CacheConfiguration<>();
cc2.setName("default-update-timestamps-region");
cc2.setCacheMode(CacheMode.REPLICATED);

igniteConfiguration.setCacheConfiguration(cc);



return igniteConfiguration;
}




I am testing this with an external Ignite node, but if the external Ignite
node is restarted, I see this error when trying to access Entity1:

"errorMessage": "class 
org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed to 
perform cache operation (cache is stopped): Entity1; nested exception is 
java.lang.IllegalStateException: class 
org.apache.ignite.internal.processors.cache.CacheStoppedException: Failed to 
perform cache operation (cache is stopped): Entity1",

It looks like the issue is as reported here ,

https://stackoverflow.com/questions/46053089/ignite-cache-reconnection-issue-cache-is-stopped
https://issues.apache.org/jira/browse/IGNITE-5789


Is there any way to make this work without restarting the client
application?
[INFO] com.rj.funcauth:entfunctionalauth:war:2.0.0-SNAPSHOT
[INFO] +- mypackage:rj-funcauth-api:jar:2.0.0-SNAPSHOT:compile
[INFO] |  +- mypackage:rjdf-web:jar:3.3-SNAPSHOT:compile
[INFO] |  |  +- mypackage:rjdf-cluster-messaging-api:jar:3.3-SNAPSHOT:compile
[INFO] |  |  +- org.springframework:spring-webmvc:jar:5.1.8.RELEASE:compile
[INFO] |  |  |  \- org.springframework:spring-web:jar:5.1.8.RELEASE:compile
[INFO] |  |  +- 
org.springframework.boot:spring-boot-starter-security:jar:2.1.6.RELEASE:compile
[INFO] |  |  |  +- 
org.springframework.security:spring-security-config:jar:5.1.5.RELEASE:compile
[INFO] |  |  |  \- 
org.springframework.security:spring-security-web:jar:5.1.5.RELEASE:compile
[INFO] |  |  +- 
org.springframework.ldap:spring-ldap-core:jar:2.3.2.RELEASE:compile
[INFO] |  |  +- 
org.springframework.security:spring-security-ldap:jar:5.1.5.RELEASE:compile
[INFO] |  |  |  \- 
org.springframework.security:spring-security-core:jar:5.1.5.RELEASE:compile
[INFO] |  |  +- org.slf4j:slf4j-api:jar:1.7.26:compile
[INFO] |  |  +- ja

Re: apache-ignite_2.8.0-1_all.deb package has older version of openjdk as dependency

2020-08-27 Thread Petr Ivanov
https://issues.apache.org/jira/browse/IGNITE-13388 is ready for review.
Iliya, can you help with it?



> On 27 Aug 2020, at 12:26, Petr Ivanov  wrote:
> 
> Hi!
> 
> 
> As a temporary workaround, please see this article [1].
> Meanwhile, I will try to figure out how to create a more generalised JDK
> dependency.
> 
> 
> 
> [1] 
> https://stackoverflow.com/questions/57031649/how-to-install-openjdk-8-jdk-on-debian-10-buster
> 
> 
>> On 6 May 2020, at 07:27, rakshita04  wrote:
>> 
>> Hi Team,
>> 
>> We are using apache-ignite for our C++ application based on debian
>> 10(buster). We tried using apache-ignite_2.8.0-1_all.deb package but it
>> depends on openjdk-8 which is older version and not available for debian 10
>> buster. I have below questions-
>> 1. Why older version of openjdk is used in deb package?
>> 2. Is there any possibility of new released version of deb package with
>> source apache-ignite_2.8.0 with newer openjdk version?
>> 
>> 
>> 
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
> 



Re: Ignite test takes several days

2020-08-27 Thread Evgenii Zhuravlev
Hi,

No, it's not normal. Do you really want to run all the tests locally, or
you just want to build the project? If you want just to build it, I suggest
skipping tests by using the -Dmaven.test.skip=true flag.

Evgenii

Thu, 27 Aug 2020 at 06:33, Cong Guo :

> Hi,
>
> I try to build the ignite-core on my workstation. I use the original
> ignite-2.8.1 source package. The test, specifically
> GridCacheWriteBehindStoreLoadTest, has been running for several days. Is it
> normal? I run "mvn clean package" directly. Should I configure anything in
> advance? Thank you.
>
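
For reference, the Maven flag Evgenii mentions (rendered above with stray
emphasis markers by the email client) would be passed on the command line
like this; a sketch assuming a standard Maven build of the Ignite sources:

```shell
# Skip both test compilation and test execution:
mvn clean package -Dmaven.test.skip=true

# Alternatively, compile the test sources but do not execute them:
mvn clean package -DskipTests
```

`-DskipTests` still compiles test sources, while `-Dmaven.test.skip=true`
skips them entirely, which is faster when you only need the artifacts.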


Ignite test takes several days

2020-08-27 Thread Cong Guo
Hi,

I try to build the ignite-core on my workstation. I use the original
ignite-2.8.1 source package. The test, specifically
GridCacheWriteBehindStoreLoadTest, has been running for several days. Is it
normal? I run "mvn clean package" directly. Should I configure anything in
advance? Thank you.


Re: Cache Expiry policy not working..

2020-08-27 Thread Evgenii Zhuravlev
Hi,
Please share the full Maven project with a reproducer then. I wasn't able to
reproduce the same behaviour with the code you shared before.

Evgenii

Thu, 27 Aug 2020 at 01:40, kay :

> Hi I didn't notice a Cache Name
>
> cache name is NC_INITPGECONTRACT_CACHE
>
> Thank you.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Select clause is quite more slow than same Select on MySQL

2020-08-27 Thread manueltg89
Hi!, 

I have a table with 4 million rows and only 1 node in Apache Ignite. I run a
simple query to get all rows:

select * from items;

Running the query in Apache Ignite takes 83 seconds, while the same query
in MySQL takes 33 seconds.

Is it normal?

Thanks in advance.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: apache-ignite_2.8.0-1_all.deb package has older version of openjdk as dependency

2020-08-27 Thread Petr Ivanov
Hi!


As a temporary workaround, please see this article [1].
Meanwhile, I will try to figure out how to create more generalised JDK 
dependency.



[1] 
https://stackoverflow.com/questions/57031649/how-to-install-openjdk-8-jdk-on-debian-10-buster


> On 6 May 2020, at 07:27, rakshita04  wrote:
> 
> Hi Team,
> 
> We are using apache-ignite for our C++ application based on debian
> 10(buster). We tried using apache-ignite_2.8.0-1_all.deb package but it
> depends on openjdk-8 which is older version and not available for debian 10
> buster. I have below questions-
> 1. Why older version of openjdk is used in deb package?
> 2. Is there any possibility of new released version of deb package with
> source apache-ignite_2.8.0 with newer openjdk version?
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: apache-ignite_2.8.0-1_all.deb package has older version of openjdk as dependency

2020-08-27 Thread Artem Budnikov
I've created a JIRA issue: 
https://issues.apache.org/jira/browse/IGNITE-13388


-Artem

On 27.08.2020 12:05, Petr Ivanov wrote:

Hi, Artem,


Can you file an issue with a description of what exactly does not work and
why, please?



On 27 Aug 2020, at 11:55, Artem Budnikov wrote:


Looks like this issue wasn't fixed in 2.8.1. The Ignite deb package 
v. 2.8.1 can't be installed on debian 10:




$ sudo apt-get install apache-ignite
...
The following packages have unmet dependencies:
 apache-ignite : Depends: openjdk-8-jdk but it is not installable or
    oracle-java8-installer but it is not installable
E: Unable to correct problems, you have held broken packages.



Is anyone working on this? I was going to update the Ignite 2.9 
installation instruction for Debian, but if the issue is staying with 
us, I guess I shouldn't bother.


-Artem

On 27.05.2020 11:55, Ilya Kasnacheev wrote:

Hello!

You can build debian package from apache-ignite source deliverable. 
There you can fix the dependency.


Regards,
--
Ilya Kasnacheev


Wed, 27 May 2020 at 11:51, rakshita04 :


Can we have the newer debian package for apache ignite with
newer version of
openjdk as dependencies?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/






Re: How to stop a node from dying when memory is full?

2020-08-27 Thread danami
I'd like to extend Colin's question.

What if I'm using TRANSACTIONAL_SNAPSHOT mode, therefore I can't use Page
Eviction?
How then, other than persistence, can I avoid OOM errors?
Can I write a custom failure handler, to clear the cache/data region for
example? Is it technically possible? (If so, how?) Will it work or is it bad
and inconsistent as Colin suggested?
Is using persistence my only option to avoid OOM errors or do I have other
choices? 

Thank you for your help,
Dana




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/
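
For context on the options Dana lists: page eviction and Native Persistence
are both configured per data region. A minimal Spring XML sketch of the two
alternatives (region names and sizes are illustrative, not from this thread):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
  <property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
      <property name="dataRegionConfigurations">
        <list>
          <!-- In-memory region that evicts data pages when it fills up -->
          <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
            <property name="name" value="evictingRegion"/>
            <property name="maxSize" value="#{512L * 1024 * 1024}"/>
            <property name="pageEvictionMode" value="RANDOM_2_LRU"/>
          </bean>
          <!-- Persistent region: cold pages are kept on disk instead of
               triggering an OOM when memory is exhausted -->
          <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
            <property name="name" value="persistentRegion"/>
            <property name="maxSize" value="#{512L * 1024 * 1024}"/>
            <property name="persistenceEnabled" value="true"/>
          </bean>
        </list>
      </property>
    </bean>
  </property>
</bean>
```

As Dana notes, caches in TRANSACTIONAL_SNAPSHOT mode cannot use page
eviction, which leaves the persistence-backed region (or a failure handler)
as the remaining line of defence for that mode.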


Re: apache-ignite_2.8.0-1_all.deb package has older version of openjdk as dependency

2020-08-27 Thread Petr Ivanov
Hi, Artem,


Can you file an issue with a description of what exactly does not work and
why, please?


> On 27 Aug 2020, at 11:55, Artem Budnikov  wrote:
> 
> Looks like this issue wasn't fixed in 2.8.1. The Ignite deb package v. 2.8.1 
> can't be installed on debian 10:
> 
> 
>> $ sudo apt-get install apache-ignite
>> ...
>> The following packages have unmet dependencies:
>>  apache-ignite : Depends: openjdk-8-jdk but it is not installable or
>> oracle-java8-installer but it is not installable
>> E: Unable to correct problems, you have held broken packages.
> 
> 
> Is anyone working on this? I was going to update the Ignite 2.9 installation 
> instruction for Debian, but if the issue is staying with us, I guess I 
> shouldn't bother.
> 
> -Artem
> 
> On 27.05.2020 11:55, Ilya Kasnacheev wrote:
>> Hello!
>> 
>> You can build debian package from apache-ignite source deliverable. There 
>> you can fix the dependency.
>> 
>> Regards,
>> -- 
>> Ilya Kasnacheev
>> 
>> 
>> Wed, 27 May 2020 at 11:51, rakshita04 :
>> Can we have the newer debian package for apache ignite with newer version of
>> openjdk as dependencies?
>> 
>> 
>> 
>> --
>> Sent from: http://apache-ignite-users.70518.x6.nabble.com/ 
>> 



Re: apache-ignite_2.8.0-1_all.deb package has older version of openjdk as dependency

2020-08-27 Thread Artem Budnikov
Looks like this issue wasn't fixed in 2.8.1. The Ignite deb package v. 
2.8.1 can't be installed on debian 10:



$ sudo apt-get install apache-ignite
...
The following packages have unmet dependencies:
 apache-ignite : Depends: openjdk-8-jdk but it is not installable or
    oracle-java8-installer but it is not installable
E: Unable to correct problems, you have held broken packages.


Is anyone working on this? I was going to update the Ignite 2.9 
installation instruction for Debian, but if the issue is staying with 
us, I guess I shouldn't bother.


-Artem

On 27.05.2020 11:55, Ilya Kasnacheev wrote:

Hello!

You can build debian package from apache-ignite source deliverable. 
There you can fix the dependency.


Regards,
--
Ilya Kasnacheev


Wed, 27 May 2020 at 11:51, rakshita04 :


Can we have the newer debian package for apache ignite with newer
version of
openjdk as dependencies?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/




Re: Cache Expiry policy not working..

2020-08-27 Thread kay
Hi I didn't notice a Cache Name

cache name is NC_INITPGECONTRACT_CACHE

Thank you.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/