Re: java.lang.UnsupportedOperationException: Invalid fast index type: -21

2018-11-05 Thread Evgenii Zhuravlev
Hi,

I've added the same field to one of the objects in Ignite example and
everything works fine. Can you share the full reproducer or at least the
whole object?

Thanks,
Evgenii

Thu, Nov 1, 2018 at 15:11, Yuriy:

> I have created an index on a byte[] field:
> @QuerySqlField (index = true, notNull = true)
> private byte[] sha1;
>
> After loading data, the cluster (Ignite 2.4) breaks down with errors in the log:
> org.apache.ignite.IgniteException: Runtime failure on row: Row@4eaa09d2[
> key: 63654, val: model.File [idHash=1152876998, hash=-392466620, sha1=[13,
> 120, 111, -46, -93, -15, 94, -12, -77, 14, -51, 108, 29, -22, 18, 65, 84,
> -100, 19, 105]]]
> and
> java.lang.UnsupportedOperationException: Invalid fast index type: -21
>
> How is it possible to index a byte[] field and use it as a key?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
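(For reference, a minimal reproducer along the lines Evgenii asks for might look like the sketch below; the class layout, cache name, and sample data are assumptions, not code from the original poster.)

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

/** Minimal reproducer sketch for the indexed byte[] field described above. */
public class ByteArrayIndexReproducer {
    static class File {
        @QuerySqlField(index = true, notNull = true)
        private byte[] sha1;

        File(byte[] sha1) { this.sha1 = sha1; }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<Integer, File> ccfg =
                new CacheConfiguration<Integer, File>("files")
                    .setIndexedTypes(Integer.class, File.class);

            IgniteCache<Integer, File> cache = ignite.getOrCreateCache(ccfg);

            // Loading a value is enough to exercise index maintenance.
            cache.put(63654, new File(new byte[] {13, 120, 111, -46}));
        }
    }
}
```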


Dependency resource injection with OpenWebBeans CDI instead of Spring

2018-11-05 Thread Aaron Anderson
Hi,
I would like to configure Ignite to use OpenWebBeans and CDI to inject fields 
into deserialized IgniteRunnables instead of using Spring. It looks like 
org.apache.ignite.internal.processors.resource.GridResourceInjector is 
available for this purpose. Is there an easy way to register a custom 
implementation of this interface with Ignite? 
Additionally, do fields intended to be injection points need to use the 
transient keyword so they are not inadvertently serialized or is there another 
way to handle that?
Thanks,
Aaron
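A sketch of the transient-field idea from the second question, assuming a CDI 1.1+ container (such as OpenWebBeans) is running on each node; the service interface and names are hypothetical, and this sidesteps GridResourceInjector entirely by resolving the bean explicitly after deserialization:

```java
import javax.enterprise.inject.spi.CDI;
import org.apache.ignite.lang.IgniteRunnable;

// The CDI-provided dependency is marked transient so it is not serialized
// with the closure, and is re-resolved on the executing node.
public class MyTask implements IgniteRunnable {
    // Hypothetical service; never serialized with the task.
    private transient GreetingService service;

    @Override public void run() {
        if (service == null)
            service = CDI.current().select(GreetingService.class).get(); // CDI 1.1+ programmatic lookup
        service.greet();
    }
}

interface GreetingService { void greet(); }
```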

RE: Is there Ignite Eviction low water mark

2018-11-05 Thread HEWA WIDANA GAMAGE, SUBASH
Between Ignite 1.x and 2.x, when we get notified about an eviction (from cache
events), does Ignite literally delete the entry, or can entries remain for a
certain period and get wiped out later?

Also, is the eviction implementation in 2.x a different, new one compared to 1.x?


From: Denis Mekhanikov [mailto:dmekhani...@gmail.com]
Sent: Monday, November 05, 2018 7:22 AM
To: user
Subject: Re: Is there Ignite Eviction low water mark

Hi!

Ignite 2.x has a mechanism called page eviction. It's configured using
DataRegion#pageEvictionMode.
Page eviction removes entries from a data region until either
DataRegionConfiguration#evictionThreshold is reached, or
DataRegionConfiguration#emptyPagesPoolSize pages are available in the free list.
It's applied only when persistence is disabled. Otherwise data is just spilled
to disk.

Ignite 1.x has a different kind of eviction, since it doesn't have page memory
nor data regions.
It removes data until occupied memory is below LruEvictionPolicy#maxSize.
This is similar to the on-heap eviction policy in Ignite 2.x, but you don't
need to use it unless you know exactly what you're doing and what an on-heap
cache is.

Denis

Fri, Nov 2, 2018 at 21:35, HEWA WIDANA GAMAGE, SUBASH <subash.hewawidanagam...@fmr.com>:
Hi all,
This is to understand how eviction works in Ignite cache.

For example, let’s say the LRU eviction policy is set to max 100MB. Then, when
the cache size reaches 100MB, how many LRU entries will get evicted? Is there
any low water mark/percentage? E.g. will the eviction policy remove 20% of the
cache, and then let it reach back up to 100MB before cleaning up again?

Also please confirm whether the behavior is the same in Ignite 1.9 vs 2.6.


Security Setup

2018-11-05 Thread Skollur
I am trying to set up basic login/password authentication. I have coded it as
below, following the Ignite documentation, but I am getting an error.

IgniteConfiguration cfg = new IgniteConfiguration();

// Ignite persistence configuration.
DataStorageConfiguration storageCfg = new DataStorageConfiguration();

// Enabling the persistence.
storageCfg.getDefaultDataRegionConfiguration().setPersistenceEnabled(true);

// Applying settings.
cfg.setDataStorageConfiguration(storageCfg);

// Enable authentication.
cfg.setAuthenticationEnabled(true);
===
[12:20:25,012][SEVERE][main][IgniteKernal%idb] Exception during start processors, node will be stopped and close connections
class org.apache.ignite.IgniteCheckedException: Failed to start processor: GridProcessorAdapter []
    at org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1742)
    at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:995)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2014)
    at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1723)
    at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1151)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:671)
    at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:596)
    at org.apache.ignite.Ignition.start(Ignition.java:327)
    at com.idb.cache.init.ServerNodeCodeStartup.main(ServerNodeCodeStartup.java:15)
Caused by: class org.apache.ignite.IgniteCheckedException: Reading marshaller mapping from file com failed; last symbol of file name is expected to be numeric.
    at org.apache.ignite.internal.MarshallerMappingFileStore.getPlatformId(MarshallerMappingFileStore.java:223)
    at org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(MarshallerMappingFileStore.java:164)
    at org.apache.ignite.internal.MarshallerContextImpl.onMarshallerProcessorStarted(MarshallerContextImpl.java:539)
    at org.apache.ignite.internal.processors.marshaller.GridMarshallerMappingProcessor.start(GridMarshallerMappingProcessor.java:114)
    at org.apache.ignite.internal.IgniteKernal.startProcessor(IgniteKernal.java:1739)
    ... 8 more
Caused by: java.lang.NumberFormatException: For input string: "m"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Integer.parseInt(Integer.java:580)
    at java.lang.Byte.parseByte(Byte.java:149)
    at java.lang.Byte.parseByte(Byte.java:175)
    at org.apache.ignite.internal.MarshallerMappingFileStore.getPlatformId(MarshallerMappingFileStore.java:220)



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Slow Data Insertion On Large Cache : Spark Streaming

2018-11-05 Thread ApacheUser
Hi Team,

We have a 6-node Ignite cluster with 72 CPUs, 256GB RAM and 5TB storage. Data
is ingested using Spark Streaming into the Ignite cluster for SQL and Tableau
usage.

I have a couple of large tables: one with 200M rows (200GB) and one with 800M
rows (500GB).
An insert takes more than 40 seconds if the composite key already exists; for
a new row it is around 10ms.

We have Entry, Main and Details tables. The "Entry" cache has a single-field
primary key "id"; the second cache, "Main", has a composite primary key of
"id" and "mainid"; the third cache, "Details", has a composite primary key of
"id", "mainid" and "detailid". "id" is the affinity key for all of them and
for some other small tables.

1. Is there any insert/update performance difference between a single-field
primary key and a multi-field primary key? Would it make any difference if I
converted the composite primary key into a single-field primary key, i.e.
concatenated all the composite fields into one field?

2. What ignite.sh and config parameters need tuning?
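On question 1, the concatenation idea could be sketched as below (a hypothetical helper, not the poster's code; note that replacing the composite key with a plain string also affects collocation by "id" unless an affinity-mapped key class is used instead):

```java
/**
 * Hypothetical sketch: collapse the composite key fields of the "Details"
 * cache (id, mainid, detailid) into a single string key.
 */
public class CompositeKeys {
    static String detailsKey(long id, long mainId, long detailId) {
        // A separator avoids ambiguity, e.g. (1, 23) vs (12, 3).
        return id + ":" + mainId + ":" + detailId;
    }

    public static void main(String[] args) {
        System.out.println(detailsKey(1, 23, 456)); // prints "1:23:456"
    }
}
```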

My Spark Dataframe save options (Save to Ignite)

  .option(OPTION_STREAMER_ALLOW_OVERWRITE, true)
  .mode(SaveMode.Append)
  .save()
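For context, OPTION_STREAMER_ALLOW_OVERWRITE routes the writes through Ignite's data streamer with overwrites allowed. A hedged sketch of the equivalent core-API call (cache name and variables are illustrative):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

// allowOverwrite(true) lets the streamer update existing keys, but it is
// considerably slower than the insert-only default, which may explain the
// 40s updates vs ~10ms inserts observed above. 'ignite' and 'row' stand in
// for an already-started node and a value object.
try (IgniteDataStreamer<String, Object> streamer = ignite.dataStreamer("Details")) {
    streamer.allowOverwrite(true); // required for updating existing composite keys
    streamer.addData("1:2:3", row);
}
```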

My Ignite.sh

JVM_OPTS="-server -Xms10g -Xmx10g -XX:+AggressiveOpts
-XX:MaxMetaspaceSize=512m"
JVM_OPTS="${JVM_OPTS} -XX:+AlwaysPreTouch"
JVM_OPTS="${JVM_OPTS} -XX:+UseG1GC"
JVM_OPTS="${JVM_OPTS} -XX:+ScavengeBeforeFullGC"
JVM_OPTS="${JVM_OPTS} -XX:+DisableExplicitGC"
JVM_OPTS="${JVM_OPTS} -XX:+HeapDumpOnOutOfMemoryError "
JVM_OPTS="${JVM_OPTS} -XX:HeapDumpPath=${IGNITE_HOME}/work"
JVM_OPTS="${JVM_OPTS} -XX:+PrintGCDetails"
JVM_OPTS="${JVM_OPTS} -XX:+PrintGCTimeStamps"
JVM_OPTS="${JVM_OPTS} -XX:+PrintGCDateStamps"
JVM_OPTS="${JVM_OPTS} -XX:+UseGCLogFileRotation"
JVM_OPTS="${JVM_OPTS} -XX:NumberOfGCLogFiles=10"
JVM_OPTS="${JVM_OPTS} -XX:GCLogFileSize=100M"
JVM_OPTS="${JVM_OPTS} -Xloggc:${IGNITE_HOME}/work/gc.log"
JVM_OPTS="${JVM_OPTS} -XX:+PrintAdaptiveSizePolicy"
JVM_OPTS="${JVM_OPTS} -XX:MaxGCPauseMillis=100"

export IGNITE_SQL_FORCE_LAZY_RESULT_SET=true

default-Config.xml






<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- The bean definitions were stripped by the mailing-list archive;
         only the discovery address list below is recoverable. -->

    <!-- TcpDiscovery static IP finder addresses (six nodes):
         64.x.x.x:47500..47509
         64.x.x.x:47500..47509
         64.x.x.x:47500..47509
         64.x.x.x:47500..47509
         64.x.x.x:47500..47509
         64.x.x.x:47500..47509 -->

</beans>

Thanks





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Long activation times with Ignite persistence enabled

2018-11-05 Thread Denis Mekhanikov
Naveen,

40 caches is quite a lot. It means that Ignite needs to handle 40 *
(number of partitions) files.
By default each cache has 1024 partitions.
This is quite a lot, and the disk is the bottleneck here. Changing thread
pool sizes won't save you.
If you divide your caches into cache groups, they will share the same
partitions, so the number of files will be reduced.
You can also try reducing the number of partitions, but that may lead to
uneven distribution of data between nodes.
Either of these changes will require reloading the data.

You can record *dstat* output on the host machine to make sure that the disk
is the weak point.
If its utilization is high while the CPU is idle, then you need a faster disk.
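The two suggestions above (cache groups and a smaller partition count) could be sketched like this; the cache names, group name, and the partition count 256 are examples only:

```java
import org.apache.ignite.cache.affinity.rendezvous.RendezvousAffinityFunction;
import org.apache.ignite.configuration.CacheConfiguration;

// Caches in the same group share partition files, reducing the file count
// Ignite must handle on activation.
CacheConfiguration<Long, Object> cacheA = new CacheConfiguration<Long, Object>("cacheA")
    .setGroupName("sharedGroup")
    .setAffinity(new RendezvousAffinityFunction(false, 256)); // fewer than the default 1024 partitions

CacheConfiguration<Long, Object> cacheB = new CacheConfiguration<Long, Object>("cacheB")
    .setGroupName("sharedGroup")
    .setAffinity(new RendezvousAffinityFunction(false, 256));
```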

Denis


Mon, Nov 5, 2018 at 17:10, Naveen:

> Hi Denis
>
> We have only 40 caches in our cluster.
> If we introduce grouping of caches, I guess we need to reload the data from
> scratch, right?
>
> We do have very powerful machines in the cluster: 128-CPU, very high-end
> boxes with huge resources available. Can we reduce the cluster activation
> time by increasing any of the below thread pools?
>
> System Pool
> Public Pool
> Striped Pool
> Custom Thread Pools
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Long activation times with Ignite persistence enabled

2018-11-05 Thread Gianluca Bonetti
Hello

In my case of slow startup, as suggested by a member of this mailing
list, I removed the -XX:+AlwaysPreTouch command-line option from the JVM
launch, and the cluster got back to very fast startup.
I don't know if you are using this option; I hope it helps.
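If the option is present, stripping it from the JVM_OPTS line in ignite.sh could look like this (the option string shown is an example, not the poster's full configuration):

```shell
# Remove -XX:+AlwaysPreTouch from an existing JVM_OPTS string.
JVM_OPTS="-server -Xms10g -Xmx10g -XX:+AlwaysPreTouch -XX:+UseG1GC"
JVM_OPTS=$(printf '%s' "$JVM_OPTS" | sed 's/-XX:+AlwaysPreTouch //')
echo "$JVM_OPTS"   # -server -Xms10g -Xmx10g -XX:+UseG1GC
```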

Cheers
Gianluca

On Mon, Nov 5, 2018 at 14:10, Naveen wrote:

> Hi Denis
>
> We have only 40 caches in our cluster.
> If we introduce grouping of caches, I guess we need to reload the data from
> scratch, right?
>
> We do have very powerful machines in the cluster: 128-CPU, very high-end
> boxes with huge resources available. Can we reduce the cluster activation
> time by increasing any of the below thread pools?
>
> System Pool
> Public Pool
> Striped Pool
> Custom Thread Pools
>
> Thanks
> Naveen
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Long activation times with Ignite persistence enabled

2018-11-05 Thread Naveen
Hi Denis

We have only 40 caches in our cluster.
If we introduce grouping of caches, I guess we need to reload the data from
scratch, right?

We do have very powerful machines in the cluster: 128-CPU, very high-end
boxes with huge resources available. Can we reduce the cluster activation
time by increasing any of the below thread pools?

System Pool
Public Pool
Striped Pool
Custom Thread Pools

Thanks
Naveen



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Is there Ignite Eviction low water mark

2018-11-05 Thread Denis Mekhanikov
Hi!

Ignite 2.x has a mechanism called page eviction.
It's configured using DataRegion#pageEvictionMode.
Page eviction removes entries from a data region until either
DataRegionConfiguration#evictionThreshold is reached, or
DataRegionConfiguration#emptyPagesPoolSize pages are available in the free list.
It's applied only when persistence is disabled. Otherwise data is just
spilled to disk.

Ignite 1.x has a different kind of eviction, since it doesn't have page
memory nor data regions.
It removes data until occupied memory is below LruEvictionPolicy#maxSize.
This is similar to the on-heap eviction policy
in Ignite 2.x, but you don't need to use it
unless you know exactly what you're doing and what an on-heap cache is.

Denis

Fri, Nov 2, 2018 at 21:35, HEWA WIDANA GAMAGE, SUBASH <
subash.hewawidanagam...@fmr.com>:

> Hi all,
>
> This is to understand how eviction works in Ignite cache.
>
>
>
> For example, let’s say the LRU eviction policy is set to max 100MB. Then,
> when the cache size reaches 100MB, how many LRU entries will get evicted?
> Is there any low water mark/percentage? E.g. will the eviction policy remove
> 20% of the cache, and then let it reach back up to 100MB before cleaning up
> again?
>
>
>
> Also please confirm whether the behavior is the same in Ignite 1.9 vs 2.6.
>
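The page-eviction settings described in the answer above could be configured roughly as follows; all values are illustrative, not recommendations:

```java
import org.apache.ignite.configuration.DataPageEvictionMode;
import org.apache.ignite.configuration.DataRegionConfiguration;
import org.apache.ignite.configuration.DataStorageConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;

// Sketch of an Ignite 2.x data region with page eviction enabled.
DataRegionConfiguration region = new DataRegionConfiguration()
    .setName("default")
    .setMaxSize(4L * 1024 * 1024 * 1024)                    // 4 GB region
    .setPageEvictionMode(DataPageEvictionMode.RANDOM_2_LRU) // enable page eviction
    .setEvictionThreshold(0.9)                              // start evicting at 90% fill
    .setEmptyPagesPoolSize(100);                            // keep 100 pages in the free list

IgniteConfiguration cfg = new IgniteConfiguration()
    .setDataStorageConfiguration(new DataStorageConfiguration()
        .setDefaultDataRegionConfiguration(region));
```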


Re: Apache Ignite - Near Cache consistency

2018-11-05 Thread Denis Mekhanikov
Correct.
Strong consistency is guaranteed for atomic caches as well, including near
cache entries.

Denis

Mon, Nov 5, 2018 at 11:21, ales:

> Thanks Denis for the answer.
>
> Actually I am using "ATOMIC" atomicity mode (i.e. no transactions).
> I have been told that it may be linked to the backup synchronization mode
> (FULL_SYNC would ensure consistency between nodes and eliminate stale
> data).
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Apache Ignite - Near Cache consistency

2018-11-05 Thread ales
Thanks Denis for the answer.

Actually I am using "ATOMIC" atomicity mode (i.e. no transactions).
I have been told that it may be linked to the backup synchronization mode
(FULL_SYNC would ensure consistency between nodes and eliminate stale data).
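A sketch of the configuration under discussion, assuming an already-started Ignite instance; the cache name and types are illustrative:

```java
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheAtomicityMode;
import org.apache.ignite.cache.CacheWriteSynchronizationMode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.NearCacheConfiguration;

// An ATOMIC cache with FULL_SYNC backup synchronization and a near cache
// created on this node. 'ignite' stands in for a started Ignite instance.
CacheConfiguration<Long, String> ccfg = new CacheConfiguration<Long, String>("myCache")
    .setAtomicityMode(CacheAtomicityMode.ATOMIC)
    .setBackups(1)
    .setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);

IgniteCache<Long, String> cache =
    ignite.getOrCreateCache(ccfg, new NearCacheConfiguration<Long, String>());
```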



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/