Re: Sharing Dataset Across Multiple Ignite Processes with Same Physical Page Mappings, SharedRDD

2018-01-30 Thread vkulichenko
Umur,

No, it doesn't use shared memory, and I doubt what you describe is even possible.
However, I'm still not sure I understand the purpose of all this. What is
your ultimate goal here?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Get TTL of the specific (K,V) entry

2018-01-30 Thread vkulichenko
Ariel,

There is no way to do this with the current API. What is the use case for
this?
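
If you need this in the meantime, one application-level workaround (purely a
sketch, not an Ignite API; the wrapper class below is hypothetical) is to keep
the expiry deadline inside the value and compute the remaining TTL on read:

import java.io.Serializable;

public class Expiring<V> implements Serializable {
    private final V value;
    private final long expiresAtMillis;

    public Expiring(V value, long ttlMillis) {
        this.value = value;
        this.expiresAtMillis = System.currentTimeMillis() + ttlMillis;
    }

    public V value() { return value; }

    // Remaining TTL in milliseconds; 0 if the entry should be considered expired.
    public long remainingTtlMillis() {
        return Math.max(0L, expiresAtMillis - System.currentTimeMillis());
    }
}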

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Serialization problem when combining Spring boot (hateoas) with Ignite

2018-01-30 Thread vkulichenko
Looks like you have a node filter in the cache configuration and are using a
lambda to provide it. I would recommend creating a static class instead,
deploying it on all nodes in the topology (both clients and servers) and then
restarting. Most likely the issue will go away.
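
For illustration, a minimal static filter could look like the sketch below
(the attribute name is a made-up example; the real criterion depends on your
setup):

import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.lang.IgnitePredicate;

// A named class instead of a lambda; deploy it on ALL nodes, clients included.
public class DataNodeFilter implements IgnitePredicate<ClusterNode> {
    @Override public boolean apply(ClusterNode node) {
        // Hypothetical criterion: a user attribute set on server nodes via
        // IgniteConfiguration.setUserAttributes(...).
        return Boolean.TRUE.equals(node.attribute("data.node"));
    }

    public static CacheConfiguration<Long, String> cacheConfig() {
        CacheConfiguration<Long, String> cfg = new CacheConfiguration<>("myCache");
        cfg.setNodeFilter(new DataNodeFilter());
        return cfg;
    }
}

The point is that a named class deserializes identically on every node,
whereas a lambda's synthetic class only exists where it was compiled.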

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Rebalancing mode set to None

2018-01-30 Thread vkulichenko
Ranjit,

Is it really a frequent event for a node to crash in the middle of the loading
process? If so, I think you should fix that instead of working around it by
disabling rebalancing. Such a configuration definitely has a lot of drawbacks
and can therefore cause issues.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Serialization problem when combining Spring boot (hateoas) with Ignite

2018-01-30 Thread Amir Akhmedov
Hi Pim,

Can you please provide more details on your issue?
1. What is the issue and what is the expected behavior?
2. Server and client configurations
3. It would be nice to have a small reproducer for better understanding.

Thanks,
Amir

On Tue, Jan 30, 2018 at 2:19 PM, Pim D  wrote:

> Update:
> Not quite sure if this is the problem, but the issue is with client nodes.
> I have peer class loading disabled, and the client is also logging unknown
> classes of Ignite LifecycleBeans which are present on server nodes (but
> definitely NOT on client nodes).
>
> This behaviour seems really awkward to me.
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Serialization problem when combining Spring boot (hateoas) with Ignite

2018-01-30 Thread Pim D
Update:
Not quite sure if this is the problem, but the issue is with client nodes.
I have peer class loading disabled, and the client is also logging unknown
classes of Ignite LifecycleBeans which are present on server nodes (but
definitely NOT on client nodes).

This behaviour seems really awkward to me.





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Get TTL of the specific (K,V) entry

2018-01-30 Thread Ariel Tubaltsev
Hi

I'm wondering if there is a way to get the TTL of a specific (K,V) entry,
something similar to Redis TTL:
https://redis.io/commands/ttl

Thank you
Ariel



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Serialization problem when combining Spring boot (hateoas) with Ignite

2018-01-30 Thread Pim D
Hi,

I'm encountering an issue when I start an Ignite client inside a Spring Boot
app (with HATEOAS).
It seems as if there are classloader or marshalling conflicts between the two
frameworks.
Can anyone confirm (or, even better, give a clue on how to solve this)?




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Protect Ignite clusters from Meltdown and Spectre

2018-01-30 Thread Denis Magda
The Apache Ignite community applied security patches against the notorious Meltdown and
Spectre vulnerabilities and completed performance testing of general operations
and workloads that are typical for Ignite deployments.

The details are at the link below:
https://blogs.apache.org/ignite/entry/meltdown-and-spectre-patches-show


—
Denis

> On Jan 8, 2018, at 1:11 PM, Denis Magda  wrote:
> 
> Make sure your Ignite clusters are protected from the fierce vulnerabilities 
> that rocked the world:
> https://blogs.apache.org/ignite/entry/protecting-apache-ignite-from-meltdown
> 
> —
> Denis



Re: [Ignite 2.0.0] Stopping the node in order to prevent cluster wide instability.

2018-01-30 Thread wcherry
I am also experiencing this issue. I'm running Ignite in a Kubernetes cluster
and I am trying to do a rolling update, so I have 2 Ignite nodes running and
I am using K8s' rolling-update API in a deployment. E.g. I am running an
application that starts up the 2 nodes, the nodes form a cluster, and I then
build my project through a Jenkins pipeline and use Helm to upgrade the
deployment. K8s takes over and, per the deployment, brings one node down, puts
it back up, waits a minute, and then brings the other down and puts it back
up.

https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment

As it does this, sometimes it works and other times Ignite fails to connect
to the other node and cluster.
K8s brings down a node and tries to put it back up, but because it fails, K8s
stops the rolling update, so we have an old node running and a new broken
node.


[16:38:45]    __________  ________________
[16:38:45]   /  _/ ___/ |/ /  _/_  __/ __/
[16:38:45]  _/ // (7 7    // /  / / / _/
[16:38:45] /___/\___/_/|_/___/ /_/ /___/
[16:38:45]
[16:38:45] ver. 2.3.0#20171028-sha1:8add7fd5
[16:38:45] 2017 Copyright(C) Apache Software Foundation
[16:38:45]
[16:38:45] Ignite documentation: http://ignite.apache.org
[16:38:45]
[16:38:45] Quiet mode.
[16:38:45]   ^-- To see **FULL** console log here add -DIGNITE_QUIET=false
or "-v" to ignite.{sh|bat}
[16:38:45]
[16:38:45] OS: Linux 4.4.0-77-generic amd64
[16:38:45] VM information: Java(TM) SE Runtime Environment 1.8.0_152-b16
Oracle Corporation Java HotSpot(TM) 64-Bit Server VM 25.152-b16
[16:38:45] Configured plugins:
[16:38:45]   ^-- None
[16:38:45]
[16:38:46] Message queue limit is set to 0 which may lead to potential OOMEs
when running cache operations in FULL_ASYNC or PRIMARY_SYNC modes due to
message queues growth on sender and receiver sides.
[16:38:46] Security status [authentication=off, tls/ssl=off]

SEVERE: TcpDiscoverSpi's message worker thread failed abnormally. Stopping
the node in order to prevent cluster wide instability.
java.lang.NullPointerException
at
org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandlerV2.getEventFilter(CacheContinuousQueryHandlerV2.java:111)
at
org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.register(CacheContinuousQueryHandler.java:315)
at
org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.registerHandler(GridContinuousProcessor.java:1228)
at
org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.onDiscoDataReceived(GridContinuousProcessor.java:523)
at
org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.onGridDataReceived(GridContinuousProcessor.java:478)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.onExchange(GridDiscoveryManager.java:855)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.onExchange(TcpDiscoverySpi.java:1837)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4328)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2635)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processMessage(ServerImpl.java:2447)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$MessageWorkerAdapter.body(ServerImpl.java:6648)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.body(ServerImpl.java:2533)
at
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)

Jan 30, 2018 4:38:48 PM org.apache.ignite.logger.java.JavaLogger error
SEVERE: Runtime error caught during grid runnable execution: IgniteSpiThread
[name=tcp-disco-msg-worker-#2]
java.lang.NullPointerException
at
org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandlerV2.getEventFilter(CacheContinuousQueryHandlerV2.java:111)
at
org.apache.ignite.internal.processors.cache.query.continuous.CacheContinuousQueryHandler.register(CacheContinuousQueryHandler.java:315)
at
org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.registerHandler(GridContinuousProcessor.java:1228)
at
org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.onDiscoDataReceived(GridContinuousProcessor.java:523)
at
org.apache.ignite.internal.processors.continuous.GridContinuousProcessor.onGridDataReceived(GridContinuousProcessor.java:478)
at
org.apache.ignite.internal.managers.discovery.GridDiscoveryManager$5.onExchange(GridDiscoveryManager.java:855)
at
org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.onExchange(TcpDiscoverySpi.java:1837)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$RingMessageWorker.processNodeAddedMessage(ServerImpl.java:4328)
at

IndexingSpi#remove not called on key migration

2018-01-30 Thread zbyszek

I am testing IndexingSpi in a cluster of 2 nodes with the following cache
config:

private IgniteCache<BinaryObject, BinaryObject> createCache() {
    // Generic parameters below were stripped by the mail archive; BinaryObject
    // matches the withKeepBinary() call at the end.
    CacheConfiguration<Object, Object> cCfg = new CacheConfiguration<>();
    cCfg.setName("MyCache");
    cCfg.setStoreKeepBinary(true);
    cCfg.setCacheMode(CacheMode.PARTITIONED);
    cCfg.setOnheapCacheEnabled(false);
    cCfg.setCopyOnRead(false);
    cCfg.setBackups(1);
    cCfg.setWriteBehindEnabled(false);
    cCfg.setReadThrough(false);
    return ignite.getOrCreateCache(cCfg).withKeepBinary();
}

I have observed that the method org.apache.ignite.spi.indexing.IndexingSpi#store
is called for all keys, both primary and backup,
resulting in both nodes building a duplicated index. In addition,
org.apache.ignite.spi.indexing.IndexingSpi#remove is not called when the
keys migrate to another node (for example due to a second node joining the
cluster).
Is that by design, or rather a bug? I would expect to receive the remove()
callback when primary keys migrate from the node, so I can free the resources
occupied by the corresponding index.
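
As a stopgap I am considering skipping backup copies inside store() with an
affinity check; a rough sketch (the helper is mine, assuming the SPI keeps a
reference to the local Ignite instance):

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

public final class IndexingGuard {
    private IndexingGuard() {}

    // True if this node holds the primary copy of the given key.
    public static boolean isPrimaryHere(Ignite ignite, String cacheName, Object key) {
        Affinity<Object> aff = ignite.affinity(cacheName);
        ClusterNode locNode = ignite.cluster().localNode();
        return aff.isPrimary(locNode, key);
    }
}

Calling this at the top of store() and returning early for backups would avoid
the duplicated index, though it would not deliver the missing remove() callback
on migration.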

Thank you in advance for your help,
zbyszek




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re: Cannot connect the ignite server after running one or two days

2018-01-30 Thread xiang jie
Hi, Evgenii

Thank you for your reply. 

The log files are too big and mostly the same, so I collected the logs at the
moment of connection.

Because clients cannot connect to the cluster any more and always show these
error messages when we try to connect them again, I don't think the problem
is a momentary long GC pause.

At the same time, there are clients that can still connect, but after restarting
these clients several times, they may not be able to connect to the cluster
either. As time goes on, no clients can connect to the cluster. Restarting one
server node does not help; clients still cannot connect. Only when we restart
all server nodes and load the data again is everything OK. Do these
circumstances indicate that it is not a network problem?

There are several clients connected via VPN; is it possible that the clients'
regular restarts cause a certain degree of obstruction in Ignite socket
communication that becomes more and more serious as time goes by?


Thanks

-----Original Message-----
From: ezhuravlev [mailto:e.zhuravlev...@gmail.com]
Sent: January 30, 2018 23:00
To: user@ignite.apache.org
Subject: Re: Re: Cannot connect the ignite server after running one or two days

Hi,

Looks like the logs from the server are still not complete. If you've checked
them and you're sure that you don't have any exceptions in them before
witnessing this problem, then I think you could have some connection problems
or a long GC pause. Do you have any network monitoring? Also, I would recommend
checking the GC logs around this moment (or sharing them with the community) on all nodes.

Also, why did you set "queryThreadPoolSize" value="32" while you have only
24 CPUs for all 5 hosts? It will definitely reduce performance due to a
lot of context switching.

If everything is okay with the GC logs, it's possible to check TCP dumps, just
to understand where the connection breaks off.

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Rebalancing mode set to None

2018-01-30 Thread Evgenii Zhuravlev
Hi,

Do you understand that in this case rebalancing won't happen automatically at
all and you will need to trigger it manually?
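
For reference, a rough sketch of what that would look like (the value type is a
stand-in; the explicit trigger is IgniteCache.rebalance()):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.configuration.CacheConfiguration;

public final class ManualRebalance {
    public static IgniteCache<Long, Object> create(Ignite ignite) {
        CacheConfiguration<Long, Object> cfg = new CacheConfiguration<>("DataCache");
        cfg.setBackups(3);
        cfg.setRebalanceMode(CacheRebalanceMode.NONE); // nothing moves automatically

        return ignite.getOrCreateCache(cfg);
    }

    public static void rebalanceWhenReady(IgniteCache<Long, Object> cache) {
        // Call once loading is done, or after a topology change you want reflected:
        cache.rebalance().get();
    }
}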

Evgenii

2018-01-30 18:16 GMT+03:00 Ranjit Sahu :

> Hi Guys,
>
> If I set the backup count to 3 and rebalancing mode to NONE, do you think
> there are any issues? I want to avoid the rebalancing of data when a node
> crashes and a new one joins, which is slowing down loading the data into the
> cache.
>
> Thanks,
> Ranjit
>


Rebalancing mode set to None

2018-01-30 Thread Ranjit Sahu
Hi Guys,

If I set the backup count to 3 and rebalancing mode to NONE, do you think
there are any issues? I want to avoid the rebalancing of data when a node
crashes and a new one joins, which is slowing down loading the data into the
cache.

Thanks,
Ranjit


Re: Re: Cannot connect the ignite server after running one or two days

2018-01-30 Thread ezhuravlev
Hi,

Looks like the logs from the server are still not complete. If you've checked
them and you're sure that you don't have any exceptions in them before
witnessing this problem, then I think you could have some connection problems
or a long GC pause. Do you have any network monitoring? Also, I would recommend
checking the GC logs around this moment (or sharing them with the community) on all nodes.

Also, why did you set "queryThreadPoolSize" value="32" while you have only
24 CPUs for all 5 hosts? It will definitely reduce performance due to a
lot of context switching.
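
For illustration, a sketch of sizing the pool to the actual core count of one
host (setQueryThreadPoolSize is the relevant IgniteConfiguration knob):

import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class QueryPoolSizing {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();

        // Size the query pool for the cores of THIS host, not the cluster total.
        cfg.setQueryThreadPoolSize(Runtime.getRuntime().availableProcessors());

        Ignition.start(cfg);
    }
}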

If everything is okay with the GC logs, it's possible to check TCP dumps, just
to understand where the connection breaks off.

Evgenii



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Persistent Store Not enabled in Ignite Yarn Deployment

2018-01-30 Thread Ilya Kasnacheev
Hello!

I don't think that anything will get evicted with the configuration that
you have provided.

I think you should check whether the keys are really unique (yes, I remember
that you include currentTimeMillis in them; still, it makes sense to
double-check) and also that all values are of type Data. If some of them
are not of type Data, SQL will not see them.

Can you split your data into more batches (e.g. 10 batches, 20k records
each) and provide counts after every batch is ingested?
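
Something along these lines (a sketch only: batch generation is up to you, and
I've assumed the SQL table is named after the value type Data, per your
setIndexedTypes call; adjust the query to whatever works in your setup):

import java.util.Map;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public final class BatchCountCheck {
    // Ingests the batches one by one and prints the SQL-visible row count after each.
    public static void run(Ignite ignite, Iterable<Map<Long, Object>> batches) {
        IgniteCache<Long, Object> cache = ignite.cache("DataCache");

        int i = 0;
        for (Map<Long, Object> batch : batches) {
            cache.putAll(batch);

            // count(*) avoids materializing every row, unlike getAll().size().
            long cnt = (Long) cache.query(new SqlFieldsQuery(
                "select count(*) from Data")).getAll().get(0).get(0);

            System.out.printf("after batch %d: %d rows%n", ++i, cnt);
        }
    }
}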

Regards,

-- 
Ilya Kasnacheev

2018-01-30 17:28 GMT+03:00 Raghav :

> Hi,
>
> I would like to add below points :
> 1) Ignite YARN is started once [Server] and it will not be stopped between
> iterations. This means that the Ignite nodes are negotiated between YARN and
> Ignite only once. Once finalized, this set should stay the same.
>
> Please find below the server logs.
> [12:30:46] Topology snapshot [ver=1, servers=1, clients=0, CPUs=48,
> heap=9.9GB]
> [12:30:46] Topology snapshot [ver=2, servers=2, clients=0, CPUs=96,
> heap=20.0GB]
> [12:30:47] Topology snapshot [ver=3, servers=3, clients=0, CPUs=144,
> heap=30.0GB]
> [12:30:47] Topology snapshot [ver=4, servers=4, clients=0, CPUs=192,
> heap=39.0GB]
> [12:30:47] Topology snapshot [ver=5, servers=5, clients=0, CPUs=240,
> heap=49.0GB]
> [12:30:47] Topology snapshot [ver=6, servers=6, clients=0, CPUs=240,
> heap=59.0GB]
> [12:30:47] Topology snapshot [ver=7, servers=7, clients=0, CPUs=240,
> heap=69.0GB]
> [12:30:48] Topology snapshot [ver=8, servers=8, clients=0, CPUs=240,
> heap=79.0GB]
> [12:30:48] Topology snapshot [ver=9, servers=9, clients=0, CPUs=240,
> heap=89.0GB]
> [12:30:48] Topology snapshot [ver=10, servers=10, clients=0, CPUs=240,
> heap=99.0GB]
> [12:50:26] Topology snapshot [ver=11, servers=10, clients=1, CPUs=240,
> heap=120.0GB]
> [12:54:18] Topology snapshot [ver=12, servers=10, clients=0, CPUs=240,
> heap=99.0GB]
> [12:56:07] Topology snapshot [ver=13, servers=10, clients=1, CPUs=240,
> heap=120.0GB]
> [13:00:49] Topology snapshot [ver=14, servers=10, clients=0, CPUs=240,
> heap=99.0GB]
> [13:06:28] Topology snapshot [ver=15, servers=10, clients=1, CPUs=240,
> heap=120.0GB]
> [13:07:17] Topology snapshot [ver=16, servers=10, clients=0, CPUs=240,
> heap=99.0GB]
>
>
> 2) Only Ignite clients are started and stopped in different iterations. As
> we can see, the client count alternates between 0 and 1 whereas the server
> count remains the same at 10.
>
> 3) /tmp is an HDFS path which we have configured for Ignite and provided in
> cluster.properties. We could change this to any path.
>
> It would be helpful if there is a way to enable persistence in YARN
> deployment.
>
> Thank you.
>
> Best Regards,
> Raghav
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Persistent Store Not enabled in Ignite Yarn Deployment

2018-01-30 Thread Raghav
Hi,

I would like to add below points :
1) Ignite YARN is started once [Server] and it will not be stopped between
iterations. This means that the Ignite nodes are negotiated between YARN and
Ignite only once. Once finalized, this set should stay the same.

Please find below the server logs.
[12:30:46] Topology snapshot [ver=1, servers=1, clients=0, CPUs=48,
heap=9.9GB]
[12:30:46] Topology snapshot [ver=2, servers=2, clients=0, CPUs=96,
heap=20.0GB]
[12:30:47] Topology snapshot [ver=3, servers=3, clients=0, CPUs=144,
heap=30.0GB]
[12:30:47] Topology snapshot [ver=4, servers=4, clients=0, CPUs=192,
heap=39.0GB]
[12:30:47] Topology snapshot [ver=5, servers=5, clients=0, CPUs=240,
heap=49.0GB]
[12:30:47] Topology snapshot [ver=6, servers=6, clients=0, CPUs=240,
heap=59.0GB]
[12:30:47] Topology snapshot [ver=7, servers=7, clients=0, CPUs=240,
heap=69.0GB]
[12:30:48] Topology snapshot [ver=8, servers=8, clients=0, CPUs=240,
heap=79.0GB]
[12:30:48] Topology snapshot [ver=9, servers=9, clients=0, CPUs=240,
heap=89.0GB]
[12:30:48] Topology snapshot [ver=10, servers=10, clients=0, CPUs=240,
heap=99.0GB]
[12:50:26] Topology snapshot [ver=11, servers=10, clients=1, CPUs=240,
heap=120.0GB]
[12:54:18] Topology snapshot [ver=12, servers=10, clients=0, CPUs=240,
heap=99.0GB]
[12:56:07] Topology snapshot [ver=13, servers=10, clients=1, CPUs=240,
heap=120.0GB]
[13:00:49] Topology snapshot [ver=14, servers=10, clients=0, CPUs=240,
heap=99.0GB]
[13:06:28] Topology snapshot [ver=15, servers=10, clients=1, CPUs=240,
heap=120.0GB]
[13:07:17] Topology snapshot [ver=16, servers=10, clients=0, CPUs=240,
heap=99.0GB]


2) Only Ignite clients are started and stopped in different iterations. As
we can see, the client count alternates between 0 and 1 whereas the server
count remains the same at 10.

3) /tmp is an HDFS path which we have configured for Ignite and provided in
cluster.properties. We could change this to any path.

It would be helpful if there is a way to enable persistence in YARN
deployment.

Thank you.

Best Regards,
Raghav




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Persistent Store Not enabled in Ignite Yarn Deployment

2018-01-30 Thread Ilya Kasnacheev
I can see two options here:

- Between iteration 1 and iteration 2 some nodes were stopped. Perhaps some
new nodes were started. Data on the stopped nodes became unavailable.
- Cache key collisions between iterations 1 and 2, so that 80% of the keys are
identical and only 20% are distinct the second time.

I expect it is the former. When you ask YARN to run 10 Ignite nodes, I
guess it will start them on random machines, and not on the same ones every
time. This will lead to a different set of machines next time, and lost data.

I don't think you should be using persistence with YARN. In fact, /tmp in
the paths should give you a hint that you should not depend on the availability
of data between runs.

Regards,

-- 
Ilya Kasnacheev

2018-01-30 16:21 GMT+03:00 Raghav :

> Hi,
>
> 1) Load data to cache
>
>  var cacheConf: CacheConfiguration[Long, Data] = new
> CacheConfiguration[Long, Data]("DataCache")
> cacheConf.setCacheMode(CacheMode.PARTITIONED)
> cacheConf.setIndexedTypes(classOf[Long], classOf[Data])
> val cache = ignite.getOrCreateCache(cacheConf)
> var dataMap = getDataMap()
> cache.putAll(dataMap)
>
> There is no possibility of having duplicate keys, as currentTimeInMillis
> along with a loop count is included in them.
>
> 2) Count Logic:
>
> val sql1 = "select * from DataCache"
> val count = cache.query(new SqlFieldsQuery(sql1)).getAll.size()
>
> Used query instead of metrics.
>
> 3) Errors: No error in server as well as client logs.
>
> Also, checked for folder creation in ignite work
> directory[IGNITE_WORKING_DIR] as per
> https://cwiki.apache.org/confluence/display/IGNITE/
> Ignite+Persistent+Store+-+under+the+hood.
> But no folders created for persistence.
>
> Thank you.
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Persistent Store Not enabled in Ignite Yarn Deployment

2018-01-30 Thread Raghav
Hi,

1) Load data to cache 

 var cacheConf: CacheConfiguration[Long, Data] = new
CacheConfiguration[Long, Data]("DataCache")
cacheConf.setCacheMode(CacheMode.PARTITIONED)
cacheConf.setIndexedTypes(classOf[Long], classOf[Data])
val cache = ignite.getOrCreateCache(cacheConf)
var dataMap = getDataMap()
cache.putAll(dataMap)

There is no possibility of having duplicate keys, as currentTimeInMillis
along with a loop count is included in them.

2) Count Logic:

val sql1 = "select * from DataCache"
val count = cache.query(new SqlFieldsQuery(sql1)).getAll.size()

Used query instead of metrics.

3) Errors: No error in server as well as client logs.

Also, checked for folder creation in ignite work
directory[IGNITE_WORKING_DIR] as per
https://cwiki.apache.org/confluence/display/IGNITE/Ignite+Persistent+Store+-+under+the+hood.
But no folders created for persistence.

Thank you. 




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Understanding configurations for IGFS

2018-01-30 Thread ilya.kasnacheev
Hello!

Sorry that I didn't answer earlier. Took some courage.

> int getPerNodeBatchSize()
Internally, IGFS uses a DataStreamer. This maps to the DataStreamer's
perNodeBufferSize(), which is set when the streamer is created.

> int getPerNodeParallelBatchCount()
Ditto. It is perNodeParallelOperations().

So, a remote node in this context is an affinity node (not excluding the local
one, I assume) that the DataStreamer sends a batch to. I don't see how it would
affect backups; it looks orthogonal.
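
For intuition, here are the same knobs on a raw streamer (the cache name is a
placeholder for whatever the IGFS data cache is called in your deployment):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteDataStreamer;

public final class StreamerKnobs {
    public static void load(Ignite ignite) {
        try (IgniteDataStreamer<Long, byte[]> streamer = ignite.dataStreamer("igfs-data")) {
            streamer.perNodeBufferSize(512);       // what IGFS's perNodeBatchSize feeds
            streamer.perNodeParallelOperations(8); // what perNodeParallelBatchCount feeds

            streamer.addData(1L, new byte[4096]);
        } // close() flushes whatever is still buffered
    }
}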

> int getPrefetchBlocks()
When you request block N of a file, it will also prefetch blocks N + 1, N +
2, ..., N + prefetchBlocks.

> is there any reason to use TRANSACTIONAL for dataCacheConfiguration
I assume that IGFS uses transactions internally to guarantee metadata
consistency.

> Similarly, does the setExpiryPolicyFactory in dataCacheConfiguration and
> metaCacheConfiguration have any effect?
I think it would have an effect, but I'm afraid it's not the effect you are
expecting. I couldn't find any tests on this, so I would not recommend it.

> Similarly, does the eviction policy configured for dataCacheConfiguration
> and metaCacheConfiguration have any effect?
> Does IGFS sync the eviction of entries in the data and the metadata cache?
> Even if I use 2 different data regions for the 2 caches? A metadata entry
> with no data entries can be useful, but not the other way around

I don't think you should expire metaCache, but it seems that you can expire
dataCache, and there's even IgfsPerBlockLruEvictionPolicy with a configurable
filter for that.

metaCache should be much smaller than dataCache, so you should probably just
expire the latter and leave the former as is.
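
Roughly like this (sizes are placeholders; the policy takes a maximum size in
bytes and a maximum block count):

import org.apache.ignite.cache.eviction.igfs.IgfsPerBlockLruEvictionPolicy;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.FileSystemConfiguration;

public final class IgfsEvictionConfig {
    public static FileSystemConfiguration create() {
        // Evict data blocks via per-block LRU; leave the (much smaller) meta cache alone.
        CacheConfiguration dataCacheCfg = new CacheConfiguration("igfs-data");
        dataCacheCfg.setOnheapCacheEnabled(true); // eviction policies act on on-heap entries
        dataCacheCfg.setEvictionPolicy(
            new IgfsPerBlockLruEvictionPolicy(512L * 1024 * 1024, 1_000_000));

        FileSystemConfiguration fsCfg = new FileSystemConfiguration();
        fsCfg.setName("igfs");
        fsCfg.setDataCacheConfiguration(dataCacheCfg);

        return fsCfg;
    }
}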

> The readThrough, writeThrough,writeBehind fields for the
> CacheConfiguration dataCacheConfiguration and metaCacheConfiguration have
> any effect?

Frankly speaking, I can't recommend doing anything funny to the caches created
for IGFS. Better stick to the recommended defaults.

> Is there any recommended ratio between the page size used for the
> DataStorageConfiguration for a DataRegionConfiguration used for the IGFS
> dataCacheConfiguration, and the block size configured for IGFS? 

This is a very interesting question right here. I suspect you should
benchmark several combinations and see if any recommendations emerge. I think
that IGFS was mainly created when Ignite was mostly on-heap and not in
durable memory, so maybe nobody ever figured this one out.

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Binary type has different affinity key fields

2018-01-30 Thread Вячеслав Коптилин
Hi Thomas,

Let's start with the table (I will use java api for that)

// Create a dummy cache to act as an entry point for SQL queries (a new
// SQL API which does not require this will appear in future versions;
// JDBC and ODBC drivers do not require it already).
CacheConfiguration<Object, Object> cacheCfg = new
CacheConfiguration<>(DUMMY_CACHE_NAME).setSqlSchema("PUBLIC");
IgniteCache<Object, Object> dummyCache = ignite.getOrCreateCache(cacheCfg);

// Create UserCache table based on the partitioned template.
dummyCache.query(new SqlFieldsQuery(
"CREATE TABLE UserCache (id BIGINT, username VARCHAR, password
varchar, PRIMARY KEY (username, password)) " +
"WITH \"template=partitioned," +
"affinitykey=username," +
"cache_name=UserCache," +
"key_type=org.apache.ignite.examples.CredentialsKey," +
"value_type=org.apache.ignite.examples.Credentials\"")).getAll();


One important thing that should be mentioned here is that SQL is
case-insensitive, and therefore the table name and column names will be
automatically converted to *upper case*.
If you want to preserve the case, you need to put double quotes around the
table name and columns.
For instance:

// Create UserCache table based on the partitioned template.
dummyCache.query(new SqlFieldsQuery(
"CREATE TABLE \"UserCache\" (\"id\" BIGINT, \"username\" VARCHAR, \"
password\" varchar, PRIMARY KEY (\"username\", \"password\")) " +
"WITH \"template=partitioned," +
"affinitykey=username," +
"cache_name=UserCache," +
"key_type=org.apache.ignite.examples.CredentialsKey," +
"value_type=org.apache.ignite.examples.Credentials\"")).getAll();

Optionally, you can create indices on UserCache:

// Create indices.
dummyCache.query(new SqlFieldsQuery("CREATE INDEX on UserCache
(username)")).getAll();
dummyCache.query(new SqlFieldsQuery("CREATE INDEX on UserCache
(password)")).getAll();


The next step is defining the CredentialsKey and Credentials classes.
In accordance with the documentation
https://apacheignite-sql.readme.io/docs/create-table#section-examples
the PRIMARY KEY columns will be used as the object's key; the rest of the
columns will belong to the value.

public class CredentialsKey {

// Please take into account my note about case-insensitive SQL
@AffinityKeyMapped
private String USERNAME;

private String PASSWORD;

public CredentialsKey(String username, String password) {
this.USERNAME = username;
this.PASSWORD = password;
}

public String getUsername() {return USERNAME;}

public void setUsername(String username) {this.USERNAME = username;}

public String getPassword() {return PASSWORD;}

public void setPassword(String password) {this.PASSWORD = password;}
}


public class Credentials {
private long ID;

public Credentials(long id) {
this.ID = id;
}

public long getId() {return ID;}

public void setId(long id) {this.ID = id;}

@Override public String toString() {return "Credentials=[id=" + ID + "]";}
}


Now, you can populate the cache/table via the JCache API:

IgniteCache<CredentialsKey, Credentials> testCache = ignite.cache("UserCache");
testCache.put(new CredentialsKey("username-1", "password-1"), new
Credentials(1L));
testCache.put(new CredentialsKey("username-2", "password-2"), new
Credentials(2L));
testCache.put(new CredentialsKey("username-3", "password-3"), new
Credentials(3L));

or the SQL API:

SqlFieldsQuery qry = new SqlFieldsQuery("INSERT INTO UserCache (id,
username, password) VALUES (?, ?, ?)");
dummyCache.query(qry.setArgs(1L, "username-sql-1", "password-5")).getAll();
dummyCache.query(qry.setArgs(2L, "username-sql-2", "password-6")).getAll();
dummyCache.query(qry.setArgs(3L, "username-sql-3", "password-7")).getAll();

Best regards,

Slava.



2018-01-26 12:27 GMT+03:00 Thomas Isaksen :

> Hi Slava
>
> Thanks for pointing out my mistakes with the template.
> I have attached the java classes in question and the ignite config file
> that I am using .
>
> I create the table using DDL as follows:
>
> CREATE TABLE UserCache (
> id bigint,
> username varchar,
> password varchar,
> PRIMARY KEY (username, password)
> )
> WITH "template=userCache, affinitykey=username, cache_name=UserCache,
> key_type=no.toyota.gatekeeper.ignite.key.CredentialsKey,
> value_type=no.toyota.gatekeeper.authenticate.Credentials";
>
> Next I try to put one entry into my cache:
>
> @Test
> public void testIgnite()
> {
>     Ignition.setClientMode(true);
>     Ignite ignite = Ignition.start("/config/test-config.xml");
>     IgniteCache<CredentialsKey, Credentials> cache = ignite.cache("UserCache");
>     // this blows up
>     cache.put(new CredentialsKey("foo","bar"), new
>         Credentials("foo","bar","resourceId"));
> }
>
> I am not sure my code is correct but I get the same error when I try to
> insert a row using SQL.
>
> INSERT INTO UserCache (id,username,password) VALUES (1, 'foo','bar');
>
> --
> Thomas Isaksen
>
> 

Re: Persistent Store Not enabled in Ignite Yarn Deployment

2018-01-30 Thread Andrey Mashenkov
Hi,

1. How do you load data into the cache? Is it possible the keys have duplicates?
2. How did you check that there are 120k records in the cache? Is it a
whole-cache metric or a node-local metric? (See the sketch after these questions.)
3. Are there any errors in the logs?
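
For item 2, here is a quick sketch of the difference (using the 2.x cache size API):

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CachePeekMode;

public final class CacheCounts {
    public static void print(IgniteCache<?, ?> cache) {
        // Cluster-wide count of primary entries vs. entries hosted on this node only.
        int total = cache.size(CachePeekMode.PRIMARY);
        int local = cache.localSize(CachePeekMode.PRIMARY);

        System.out.printf("total=%d, local=%d%n", total, local);
    }
}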


On Tue, Jan 30, 2018 at 3:17 PM, Raghav  wrote:

> Hello,
>
> I am trying to enable Ignite Native Persistence in an Ignite YARN deployment.
> The purpose of this is to have no eviction of data at all from the Ignite grids.
> Whenever the memory is full, the data should get stored on disk.
>
> But when I try to add a large number of records to the Ignite grid, the data is
> getting evicted.
>
> Example:
> In iteration 1, added 100k records. Expected and actual count of records is
> 100k.
> In iteration 2, added another 100k records. But instead of the expected 200k
> records, there were only around 120k records. Guess the remaining got
> evicted from the grid and there is data loss.
>
> Kindly guide me to enable persistence without data eviction so that there is
> no data loss.
>
> Please find below the details.
>
> Ignite Version : 2.3.0
>
> Cluster details for Yarn Deployment:
>
> IGNITE_NODE_COUNT=10
> IGNITE_RUN_CPU_PER_NODE=5
> IGNITE_MEMORY_PER_NODE=10096
> IGNITE_VERSION=2.3.0
> IGNITE_PATH=/tmp/ignite/2.3.0/apache-ignite-fabric-2.3.0-bin.zip
> IGNITE_RELEASES_DIR=/tmp/ignite/2.3.0/releases
> IGNITE_WORKING_DIR=/tmp/ignite/2.3.0/work
> IGNITE_XML_CONFIG=/tmp/ignite/2.3.0/config/ignite-config.xml
> IGNITE_USERS_LIBS=/tmp/ignite/2.3.0/libs
> IGNITE_LOCAL_WORK_DIR=/local/home/ignite/2.3.0
>
> Ignite Configuration for Yarn deployment:
>
> <beans xmlns="http://www.springframework.org/schema/beans"
>        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
>        xmlns:util="http://www.springframework.org/schema/util"
>        xsi:schemaLocation="
>            http://www.springframework.org/schema/beans
>            http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
>            http://www.springframework.org/schema/util
>            http://www.springframework.org/schema/util/spring-util-2.0.xsd">
>     <bean class="org.apache.ignite.configuration.IgniteConfiguration">
>         <property name="dataStorageConfiguration">
>             <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
>                 <property name="defaultDataRegionConfiguration">
>                     <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
>                         <!-- region settings lost in the archive -->
>                     </bean>
>                 </property>
>             </bean>
>         </property>
>         <property name="discoverySpi">
>             <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
>                 <property name="ipFinder">
>                     <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
>                         <property name="addresses">
>                             <list>
>                                 <value><!-- host elided -->:47500</value>
>                             </list>
>                         </property>
>                     </bean>
>                 </property>
>             </bean>
>         </property>
>         <!-- five numeric properties (values 1000, 1000, 1000, 50, 1000) whose
>              names were lost in the archive -->
>     </bean>
> </beans>
>
> Thanks in Advance !!!
>
>
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: ClassCastException Issues

2018-01-30 Thread slava.koptilin
Hi Svonn,

So, the issue is related to Kafka configuration. Am I right?
If you think that something should be done on the Ignite side, please share a
small reproducer.

Thanks!



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Long activation times with Ignite persistence enabled

2018-01-30 Thread ilya.kasnacheev
Hello!

Can you please also provide the full cache configuration and your hardware specs
(especially for storage)?
After the 2.4 release is done, I hope your case will get the attention of the
PDS developers.

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Index question.

2018-01-30 Thread Mikael
No worries, the value is a hashmap that stores some configuration, and I
just use the index on the key to get the stuff out of there, so it should not
be a problem.


Mikael

On 2018-01-30 10:19, Stanislav Lukyanov wrote:

To add a detail, the value will have an index created for it if it is a
primitive (or an “SQL-friendly” type like Date).

I don’t think there is an easy way to avoid that. You could use a
wrapper for the primitive value, but it will also have some overhead,
and it’s hard to say whether it will be more efficient than having an
index for the value.

Stan

From: Amir Akhmedov
Sent: January 30, 2018, 1:32
To: user@ignite.apache.org
Subject: Re: Index question.

Ok, then it makes sense.
You cannot pass null into setIndexedTypes(). But if you don't put any
@QuerySqlField annotation in the value class or declare fields/indexes
through the Java API, then no columns/indexes will be created in SQL storage.

On Mon, Jan 29, 2018 at 3:59 PM, Mikael wrote:

    Thanks, the reason I need SQL is that my key is not a primitive;
    it's a class made of 2 ints and one String, and I need to query on
    parts of the key and not the entire key, like select all keys
    where one of the integers is equal to 55, for example.

    Mikael

    On 2018-01-29 19:05, Amir Akhmedov wrote:

        Hi Mikael,

        1. This is just a warning informing you that Ignite's object
        serialization will differ from your Externalizable
        implementation. By default Ignite will serialize all fields in
        an object, and if you want to customize that, you need to
        implement the Binarylizable interface or set a custom
        serializer as stated in the warning message.

        Even if you did not specify any @QuerySqlField in your object,
        Ignite stores the whole serialized object in the SQL table
        under the _val field for internal usage.

        The open question is why you need SQL if you are using only
        key-based search? You can do exactly the same using the Java
        Cache API.

        2. You can leave the Externalizable implementation in the
        class; it won't hurt.

        3. Please check bullet #1; if you don't want indexes then you
        don't need to create them.

        Thanks,

        Amir

--

Sincerely Yours Amir Akhmedov





RE: Index question.

2018-01-30 Thread Stanislav Lukyanov
To add a detail, the value will have an index created for it if it is a primitive
(or an “SQL-friendly” type like Date).
I don’t think there is an easy way to avoid that. You could use a wrapper for
the primitive value, but it will also have some overhead, and it’s hard to say
whether it will be more efficient than having an index for the value.

Stan

From: Amir Akhmedov
Sent: January 30, 2018, 1:32
To: user@ignite.apache.org
Subject: Re: Index question.

Ok, then it makes sense.
You cannot pass null into setIndexedTypes(). But if you don't put any
@QuerySqlField annotation in the value class or declare fields/indexes through
the Java API, then no columns/indexes will be created in SQL storage.

On Mon, Jan 29, 2018 at 3:59 PM, Mikael  wrote:
Thanks, the reason I need SQL is that my key is not a primitive; it's a class
made of 2 ints and one String, and I need to query on parts of the key and not
the entire key, like select all keys where one of the integers is equal to 55,
for example.
Mikael
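
For the record, a sketch of what such a key could look like (field names are
invented), with indexes on the parts rather than the whole key:

import java.io.Serializable;
import java.util.HashMap;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class PartKey implements Serializable {
    @QuerySqlField(index = true)
    private int first;

    @QuerySqlField(index = true)
    private int second;

    @QuerySqlField(index = true)
    private String name;

    // equals()/hashCode() over all three fields belong here for a cache key.

    public static CacheConfiguration<PartKey, HashMap> cacheConfig() {
        CacheConfiguration<PartKey, HashMap> cfg = new CacheConfiguration<>("cfgCache");

        // Key fields become queryable, indexed columns; the HashMap value adds none.
        cfg.setIndexedTypes(PartKey.class, HashMap.class);

        return cfg;
    }
}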

On 2018-01-29 19:05, Amir Akhmedov wrote:
Hi Mikael,
1. This is just a warning informing you that Ignite's object serialization will
differ from your Externalizable implementation. By default Ignite will
serialize all fields in an object, and if you want to customize that, you need
to implement the Binarylizable interface or set a custom serializer, as stated
in the warning message.
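
A minimal sketch of the Binarylizable route, in case it helps (the class and
field names are made up):

import org.apache.ignite.binary.BinaryObjectException;
import org.apache.ignite.binary.BinaryReader;
import org.apache.ignite.binary.BinaryWriter;
import org.apache.ignite.binary.Binarylizable;

public class ConfigEntry implements Binarylizable {
    private int id;
    private String payload;

    @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
        // Full control over what is serialized, instead of the default all-fields walk.
        writer.writeInt("id", id);
        writer.writeString("payload", payload);
    }

    @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
        id = reader.readInt("id");
        payload = reader.readString("payload");
    }
}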
Even if you did not specify any @QuerySqlField in your object, Ignite stores the
whole serialized object in the SQL table under the _val field for internal usage.
The open question is why you need SQL if you are using only key-based
search? You can do exactly the same using the Java Cache API.

2. You can leave the Externalizable implementation in the class, it won't hurt.
3. Please check bullet #1; if you don't want indexes then you don't need to
create them.

Thanks,
Amir




-- 
Sincerely Yours Amir Akhmedov