NTILE function (ignite)

2018-01-22 Thread sindhu somisetty
Is the NTILE function possible in the Ignite database?



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Cannot connect the ignite server after running one or two days

2018-01-22 Thread xiang jie
Hi,

 

We have deployed Ignite 2.3, created about 100 caches, and loaded data from
Oracle into these caches.

But the clients (started in Tomcat web apps under development; we often start
and stop these apps) often cannot connect to the Ignite server after one or
two days. Searching the logs, we found the errors below on the server side:

[2018-01-23 12:43:30,979][WARN ][grid-timeout-worker-#23%igniteCosco%][TcpDiscoverySpi] Socket write has timed out (consider increasing 'IgniteConfiguration.failureDetectionTimeout' configuration property) [failureDetectionTimeout=1, rmtAddr=/172.41.27.66:3871, rmtPort=3871, sockTimeout=5000]

[2018-01-23 12:43:30,980][ERROR][tcp-disco-sock-reader-#63%igniteCosco%][TcpDiscoverySpi] Caught exception on message read [sock=Socket[addr=/172.41.27.66,port=3871,localport=47500], locNodeId=89865e33-c722-4989-b663-a75c936b068f, rmtNodeId=ed9d5cc9-11d3-4bfd-a648-1ae718ac16f9]

class org.apache.ignite.IgniteCheckedException: Failed to deserialize object with given class loader: sun.misc.Launcher$AppClassLoader@7390d1e8
  at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:129)
  at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
  at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9740)
  at org.apache.ignite.spi.discovery.tcp.ServerImpl$SocketReader.body(ServerImpl.java:5946)
  at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Caused by: java.net.SocketException: Socket closed
  at java.net.SocketInputStream.socketRead0(Native Method)
  at java.net.SocketInputStream.read(SocketInputStream.java:152)
  at java.net.SocketInputStream.read(SocketInputStream.java:122)
  at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
  at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
  at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
  at org.apache.ignite.marshaller.jdk.JdkMarshallerInputStreamWrapper.read(JdkMarshallerInputStreamWrapper.java:53)
  at java.io.ObjectInputStream$PeekInputStream.read(ObjectInputStream.java:2310)
  at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2323)
  at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2794)
  at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:801)
  at java.io.ObjectInputStream.<init>(ObjectInputStream.java:299)
  at org.apache.ignite.marshaller.jdk.JdkMarshallerObjectInputStream.<init>(JdkMarshallerObjectInputStream.java:39)
  at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:119)
  ... 4 more

[2018-01-23 12:43:30,980][DEBUG][tcp-disco-sock-reader-#63%igniteCosco%][TcpDiscoverySpi] Client connection failed [sock=Socket[addr=/172.41.27.66,port=3871,localport=47500], locNodeId=89865e33-c722-4989-b663-a75c936b068f, rmtNodeId=ed9d5cc9-11d3-4bfd-a648-1ae718ac16f9]

[2018-01-23 12:43:30,981][INFO ][tcp-disco-sock-reader-#63%igniteCosco%][TcpDiscoverySpi] Finished serving remote node connection [rmtAddr=/172.41.27.66:3871, rmtPort=3871

[2018-01-23 12:43:30,981][DEBUG][tcp-disco-sock-reader-#63%igniteCosco%][TcpDiscoverySpi] Grid runnable finished normally: tcp-disco-sock-reader-#63%igniteCosco%

[2018-01-23 12:43:30,981][ERROR][tcp-disco-client-message-worker-#64%igniteCosco%][TcpDiscoverySpi] Client connection failed [sock=Socket[addr=/172.41.27.66,port=3871,localport=47500], locNodeId=89865e33-c722-4989-b663-a75c936b068f, rmtNodeId=ed9d5cc9-11d3-4bfd-a648-1ae718ac16f9, msg=TcpDiscoveryNodeAddedMessage [node=TcpDiscoveryNode [id=ed9d5cc9-11d3-4bfd-a648-1ae718ac16f9, addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 172.41.27.66, 192.168.126.1, 192.168.81.1, 2001:0:9d38:6ab8:8aa:3888:2532:627e], sockAddrs=[/2001:0:9d38:6ab8:8aa:3888:2532:627e:0, /192.168.81.1:0, /127.0.0.1:0, /0:0:0:0:0:0:0:1:0, /192.168.126.1:0, /172.41.27.66:0], discPort=0, order=0, intOrder=340, lastExchangeTime=1516682580894, loc=false, ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], dataPacket=o.a.i.spi.discovery.tcp.internal.DiscoveryDataPacket@4d671e9d, discardMsgId=null, discardCustomMsgId=null, top=[TcpDiscoveryNode [id=fbf2344b-652b-4c5c-ad78-a470ec8c62e8, addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.40.0.205, 192.168.122.1], sockAddrs=[/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500, /172.40.0.205:47500, /192.168.122.1:47500], discPort=47500, order=618, intOrder=310, lastExchangeTime=1516673277178, loc=false, ver=2.3.0#20171028-sha1:8add7fd5, isClient=false], TcpDiscoveryNode [id=48e4ab39-0baa-40da-8dff-382253b2f72a, addrs=[0:0:0:0:0:0:0:1, 10.10.11.93, 127.0.0.1, 192.168.0.93, 192.168.1.90], sockAddrs=[/127.0.0.1:0, /0:0:0:0:0:0:0:1:0, /192.168.1.90:0, /10.10.11.93:0, /192.168.0.93:0], discPort=0, order=620, intOrder=312, lastExchangeTime=1516673277178, loc=false, ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], TcpDiscoveryNode [id=c57f1db3-878e-47c3-bd2b-f17882d0ba2d,
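
For reference, the warning above points straight at IgniteConfiguration.failureDetectionTimeout. A minimal sketch of raising it, assuming programmatic Java configuration on the server nodes (the 30-second value is only an example to tune):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

IgniteConfiguration cfg = new IgniteConfiguration();

// Raise the failure detection timeout so short network hiccups or pauses in
// frequently restarted clients do not break discovery (example value only).
cfg.setFailureDetectionTimeout(30_000);

Ignite ignite = Ignition.start(cfg);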

Re: Questions about SQL interface

2018-01-22 Thread Wolfram Huesken

Hello Slava,

The workaround works perfectly for me; I already have an interceptor for 
certain caches anyway. Thanks a lot for looking into this. Will you take 
care of the JIRA ticket, too?


Cheers
Wolfram

On 23/01/2018 06:19, slava.koptilin wrote:

Hello,


> Here is the SQL statement and the exception from the logs:
> https://gist.github.com/wolframite/d0b28d8b7ce483f82b9fd145adb68abe

I tried this use case with the cache configuration you provided and I was able
to reproduce the issue.
When an insert/update operation is executed via the SQL API, the actual value of
the entry has the BinaryObject type instead of MemcachedEntry. Unfortunately,
it looks like a bug.

As a temporary workaround, you can implement CacheInterceptor in the
following way:
public class CompressionInterceptor extends CacheInterceptorAdapter {
    @Override public Object onBeforePut(Cache.Entry entry, Object newVal) {
        if (newVal instanceof BinaryObject) {
            // value is updated via SQL
            BinaryObject newVal0 = (BinaryObject) newVal;

            ...
        }
        else {
            // value is updated via the JCache API
            MemcachedEntry newVal0 = (MemcachedEntry) newVal;
            ...
        }
        ...
    }
}

Thanks,
Slava.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/



RE: Creating multiple Ignite grids on same machine

2018-01-22 Thread Raymond Wilson
Hi Alexey,

Works like a charm!  

Thanks,
 Raymond.

-Original Message-
From: Alexey Popov [mailto:tank2.a...@gmail.com]
Sent: Wednesday, January 10, 2018 9:03 PM
To: user@ignite.apache.org
Subject: Re: Creating multiple Ignite grids on same machine

Hi Raymond,

In your case you should configure:

1. Different TcpDiscoverySpi local ports.
2. Different ports for TcpDiscoveryVmIpFinder (Vm = Static for .NET). You should
not use the default ipFinder.
3. Different TcpCommunicationSpi local ports.

Please see the sample Java XML configs below as a reference; you can do similar
things with the Ignite.NET 2.3 configuration. (A Java sketch of the same
settings follows the samples.)

Sample cluster 1 cfg (the XML tags were stripped by the mail archive; only the
discovery address range survived):

127.0.0.1:48500..48509
Sample cluster 2 cfg (likewise stripped):

127.0.0.1:47500..47509
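
Since the archive stripped the XML, here is a hedged Java sketch of what the
cluster 1 settings most likely configured. The SPI classes and properties are
the standard Ignite ones; the communication port (48100) is an assumption, and
cluster 2 would use the 47500..47509 range with its own non-overlapping
communication port:

import java.util.Collections;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

// 1. Non-default discovery port.
TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
discoSpi.setLocalPort(48500);

// 2. Explicit (non-default) static IP finder with this cluster's port range.
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
ipFinder.setAddresses(Collections.singletonList("127.0.0.1:48500..48509"));
discoSpi.setIpFinder(ipFinder);

// 3. Non-default communication port (48100 is a hypothetical choice).
TcpCommunicationSpi commSpi = new TcpCommunicationSpi();
commSpi.setLocalPort(48100);

IgniteConfiguration cfg = new IgniteConfiguration();
cfg.setDiscoverySpi(discoSpi);
cfg.setCommunicationSpi(commSpi);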
Thank you,
Alexey



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re:Re:Re: Re:Re: delete data error

2018-01-22 Thread Lucky
Sorry, the fid is not a UUID in tmpCompanyCuBaseDataCache, but the others are
UUIDs.
The error does not happen only in this cache; the others are the same.
I found that when I delete a single record, it's normal. But if I delete many
records in one SQL statement, it goes wrong.
Thanks.



At 2018-01-23 09:38:18, "Lucky"  wrote:

I put the entry like this:
 cache.put(entry.getFID(), entry);
The fid is a UUID and is unique.

I'm quite sure that there is no problem with the data in the cache.
All values are correct and look like the other records.


 sql="delete from \"tmpCompanyCuBaseDataCache\".TmpCompanyCuBaseData where fid='1516093156643-53-33' ";
 sql="delete from \"tmpCompanyCuBaseDataCache\".TmpCompanyCuBaseData where _key='1516093156643-53-33' ";

 Both can execute correctly.
 Then I executed "delete from \"tmpCompanyCuBaseDataCache\".TmpCompanyCuBaseData" again; it got the same error, but the key had changed to another one.
And when I delete that record and execute again, it's the same.


Thanks.
Lucky.







Re:Re: Re:Re: delete data error

2018-01-22 Thread Lucky
I put the entry like this:
 cache.put(entry.getFID(), entry);
The fid is a UUID and is unique.

I'm quite sure that there is no problem with the data in the cache.
All values are correct and look like the other records.


 sql="delete from \"tmpCompanyCuBaseDataCache\".TmpCompanyCuBaseData where fid='1516093156643-53-33' ";
 sql="delete from \"tmpCompanyCuBaseDataCache\".TmpCompanyCuBaseData where _key='1516093156643-53-33' ";

 Both can execute correctly.
 Then I executed "delete from \"tmpCompanyCuBaseDataCache\".TmpCompanyCuBaseData" again; it got the same error, but the key had changed to another one.
And when I delete that record and execute again, it's the same.


Thanks.
Lucky.






At 2018-01-22 22:28:59, "Ilya Kasnacheev"  wrote:

Hello!


Thank you for the log!


It looks like there's some internal consistency problem with the cache. From 
the log content the root cause is not apparent.


Can you please try getting the offending key from the cache via the Cache API 
(to see if everything is all right with it), then deleting the key via the 
Cache API, and then retrying the DELETE operation?


The key in question is '1516093156643-53-33'


Regards,





--

Ilya Kasnacheev




Key Value Store - control TTL refresh

2018-01-22 Thread Ariel Tubaltsev
I'd like to set TTL for all entries in some map.

Along with that I'd like to control operations that refresh TTL. For
instance simple Read/Write should update the TTL, but pulling the whole map
should not.

Is that something that can be done with current expiry policies?

https://apacheignite.readme.io/docs/expiry-policies
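
For reference, a minimal sketch of how the linked expiry policies can give
per-operation control, assuming an already-started Ignite instance `ignite`
(cache name, types, and the 5-minute duration are made up): operations issued
through a withExpiryPolicy view refresh the TTL, while reads through the
undecorated cache leave TTLs alone.

import java.util.concurrent.TimeUnit;
import javax.cache.expiry.Duration;
import javax.cache.expiry.TouchedExpiryPolicy;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.ScanQuery;

IgniteCache<Integer, String> base = ignite.getOrCreateCache("scores");

// View whose reads/writes refresh ("touch") the TTL of the affected entries.
IgniteCache<Integer, String> touching =
    base.withExpiryPolicy(new TouchedExpiryPolicy(new Duration(TimeUnit.MINUTES, 5)));

touching.put(1, "a"); // sets/refreshes the TTL
touching.get(1);      // refreshes the TTL

// Pulling the whole map through the undecorated cache does not touch TTLs.
base.query(new ScanQuery<Integer, String>()).getAll();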



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Ignite 2.x upgrade guidelines

2018-01-22 Thread bintisepaha
Hi, we are upgrading Ignite 1.7.0 to 2.3.0 soon. On 1.7 we were on-heap and
used G1GC with 16 nodes, each with a 30 GB heap, although we never used more
than 40% of the heap on any node at a given time.

With 2.3.0 it would be all off-heap. Are there any guidelines to follow or
things to check performance-wise before we upgrade in prod?

If you have a document about it or an older post, that would also be helpful.

Thanks,
Binti



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Questions about SQL interface

2018-01-22 Thread slava.koptilin
Hello,

> Here is the SQL statement and the exception from the logs: 
> https://gist.github.com/wolframite/d0b28d8b7ce483f82b9fd145adb68abe
I tried this use case with the cache configuration you provided and I was able
to reproduce the issue.
When an insert/update operation is executed via the SQL API, the actual value of
the entry has the BinaryObject type instead of MemcachedEntry. Unfortunately,
it looks like a bug.

As a temporary workaround, you can implement CacheInterceptor in the
following way:
public class CompressionInterceptor extends CacheInterceptorAdapter {
    @Override public Object onBeforePut(Cache.Entry entry, Object newVal) {
        if (newVal instanceof BinaryObject) {
            // value is updated via SQL
            BinaryObject newVal0 = (BinaryObject) newVal;

            ...
        }
        else {
            // value is updated via the JCache API
            MemcachedEntry newVal0 = (MemcachedEntry) newVal;
            ...
        }
        ...
    }
}
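
The reply does not show the wiring step, but presumably the interceptor is then
registered via the standard CacheConfiguration#setInterceptor hook (the cache
name below is made up):

CacheConfiguration cacheCfg = new CacheConfiguration("memcachedCache");

// Route both SQL and JCache writes through the interceptor above.
cacheCfg.setInterceptor(new CompressionInterceptor());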

Thanks,
Slava.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Long activation times with Ignite persistence enabled

2018-01-22 Thread Andrey Kornev
Alexey,

Thanks a lot for looking into this!

My configuration is very basic: 3 caches all using standard 1024 partitions, 
sharing a 1GB persistent memory region.

Please find below the stack trace of the exchange worker thread captured while 
the node's activation is in progress (2.4 Ignite branch).

Hope it helps!

Thanks!
Andrey

"exchange-worker-#42%ignite-2%" #82 prio=5 os_prio=31 tid=0x7ffe8bf1c000 
nid=0xc403 waiting on condition [0x7ed43000]
   java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:304)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.get0(GridFutureAdapter.java:177)
at 
org.apache.ignite.internal.util.future.GridFutureAdapter.getUninterruptibly(GridFutureAdapter.java:145)
at 
org.apache.ignite.internal.processors.cache.persistence.file.AsyncFileIO.read(AsyncFileIO.java:95)
at 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStore.read(FilePageStore.java:324)
at 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:306)
at 
org.apache.ignite.internal.processors.cache.persistence.file.FilePageStoreManager.read(FilePageStoreManager.java:291)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:656)
at 
org.apache.ignite.internal.processors.cache.persistence.pagemem.PageMemoryImpl.acquirePage(PageMemoryImpl.java:576)
at 
org.apache.ignite.internal.processors.cache.persistence.DataStructure.acquirePage(DataStructure.java:130)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.PagesList.init(PagesList.java:212)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.AbstractFreeList.(AbstractFreeList.java:367)
at 
org.apache.ignite.internal.processors.cache.persistence.freelist.CacheFreeListImpl.(CacheFreeListImpl.java:47)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore$1.(GridCacheOffheapManager.java:1041)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.init0(GridCacheOffheapManager.java:1041)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheOffheapManager$GridCacheDataStore.updateCounter(GridCacheOffheapManager.java:1247)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.updateCounter(GridDhtLocalPartition.java:835)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.casState(GridDhtLocalPartition.java:523)
- locked <0x00077a3d1120> (a 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtLocalPartition.(GridDhtLocalPartition.java:218)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.forceCreatePartition(GridDhtPartitionTopologyImpl.java:804)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restorePartitionState(GridCacheDatabaseSharedManager.java:2196)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.applyLastUpdates(GridCacheDatabaseSharedManager.java:2155)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.restoreState(GridCacheDatabaseSharedManager.java:1322)
at 
org.apache.ignite.internal.processors.cache.persistence.GridCacheDatabaseSharedManager.beforeExchange(GridCacheDatabaseSharedManager.java:1113)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.distributedExchange(GridDhtPartitionsExchangeFuture.java:1063)
at 
org.apache.ignite.internal.processors.cache.distributed.dht.preloader.GridDhtPartitionsExchangeFuture.init(GridDhtPartitionsExchangeFuture.java:661)
at 
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2329)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
at java.lang.Thread.run(Thread.java:748)

2018-01-22 11:30:01,049 INFO  [exchange-worker-#42%ContentStore-2%] 
GridCacheDatabaseSharedManager - Finished applying WAL changes 
[updatesApplied=0, time=68435ms]
2018-01-22 11:30:01,789 INFO  [main] GridDiscoveryManager - Topology snapshot 
[ver=4, servers=2, clients=0, CPUs=8, offheap=26.0GB, heap=4.0GB]
2018-01-22 11:30:01,789 INFO  [main] GridDiscoveryManager - Data Regions 
Configured:
2018-01-22 11:30:01,789 INFO  [main] GridDiscoveryManager -   ^-- default 
[initSize=256.0 MiB, maxSize=12.0 GiB, persistenceEnabled=false]
2018-01-22 11:30:01,789 INFO  [main] GridDiscoveryManager -   ^-- durable 
[initSize=256.0 MiB, maxSize=1.0 GiB, persistenceEnabled=true]




From: Alexey Goncharuk 

Question about persisting stream processing results

2018-01-22 Thread svonn
Hi!

I'm receiving two streams of events; stream one (1) is basically only used
as the basis for interpolating and putting data into stream two (2).
Whenever an element in stream (1) arrives, the local listener of my
ContinuousQuery starts searching for the previous element belonging to the
same group. More specifically, it's a ScanQuery that compares some IDs and
searches for the one that has a timestamp greater than the current one minus
1500 ms while being smaller than the current timestamp.

Currently, I want to persist stream (2) while keeping performance stable.
What's the best way to do that?

Simply activating Ignite persistence sounds like it just starts moving data
from off-heap RAM to the hard drive when RAM space runs low. However, if I
understood it correctly, it will still query those elements for all my stream
processing queries. So trying to find the previous element of stream (1), or
trying to find all elements between two elements in stream (2), would become
slower and slower the longer the task runs.

The incoming data is relevant for about 5 minutes, so I tried using an
expiration policy. This keeps performance stable, but I'm not sure how to
persist the expired data properly. Also, for calibration purposes, I'm
generating a Map to store and apply calibration on elements - when I activate
the expiry policy, I start running into NullPointerExceptions after about
5 minutes - is the policy also deleting the Map?

Best regards
Svonn





--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: query on BinaryObject index and table

2018-01-22 Thread Denis Magda
The schema can be changed with the ALTER TABLE ... ADD COLUMN command:
https://apacheignite-sql.readme.io/docs/alter-table


To my knowledge, this is supported both for schemas that were initially
configured via DDL and for those configured via QueryEntity/annotations.

—
Denis

> On Jan 22, 2018, at 5:44 AM, Ilya Kasnacheev wrote:
> 
> Hello Rajesh!
> 
> Table name can be specified in cache configuration's query entity. If not 
> supplied, by default it is equal to value type name, e.g. BinaryObject :)
> 
> Also, note that SQL tables have fixed schemas. This means you won't be able 
> to add a random set of fields in BinaryObject and be able to do SQL queries 
> on them all. You will have to declare all fields that you are going to use 
> via SQL, either by annotations or query entity:
> see https://apacheignite-sql.readme.io/docs/schema-and-indexes 
> 
> 
> To add an index, you should either specify it in annotations (via index=true)
> or in the query entity.
> 
> Regards,
> Ilya.
> 
> -- 
> Ilya Kasnacheev
> 
> 2018-01-21 15:12 GMT+03:00 Rajesh Kishore:
> Hi Denis,
> 
> This is my code:
> 
> CacheConfiguration<Long, BinaryObject> cacheCfg = new CacheConfiguration<>(ORG_CACHE);
>
> cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> cacheCfg.setBackups(1);
> cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> cacheCfg.setIndexedTypes(Long.class, BinaryObject.class);
>
> IgniteCache<Long, BinaryObject> cache = ignite.getOrCreateCache(cacheCfg);
>
> if ( UPDATE ) {
>   System.out.println("Populating the cache...");
>
>   try (IgniteDataStreamer<Long, BinaryObject> streamer = ignite.dataStreamer(ORG_CACHE)) {
>     streamer.allowOverwrite(true);
>     IgniteBinary binary = ignite.binary();
>     BinaryObjectBuilder objBuilder = binary.builder(ORG_CACHE);
>
>     for ( long i = 0; i < 100; i++ ) {
>       streamer.addData(i,
>           objBuilder.setField("id", i)
>               .setField("name", "organization-" + i).build());
>
>       if ( i > 0 && i % 100 == 0 )
>         System.out.println("Done: " + i);
>     }
>   }
> }
>
> IgniteCache<Long, BinaryObject> binaryCache = ignite.cache(ORG_CACHE).withKeepBinary();
> BinaryObject binaryPerson = binaryCache.get(54l);
> System.out.println("name " + binaryPerson.field("name"));
> 
> 
> Not sure if I am missing some context here. If I have to use SqlQuery,
> what table name should I specify? I did not create a table explicitly - do I
> need to do that?
> How would I create the index?
> 
> Thanks,
> Rajesh
> 
> On Sun, Jan 21, 2018 at 12:25 PM, Denis Magda wrote:
> 
> 
> > On Jan 20, 2018, at 7:20 PM, Rajesh Kishore wrote:
> >
> > Hi,
> >
> > I have a requirement that my schema is not fixed, so I have to use the 
> > BinaryObject approach instead of a fixed POJO.
> >
> > I am relying on the OOTB file system persistence mechanism.
> >
> > My questions are:
> > - How can I specify the indexes on BinaryObject?
> 
> https://apacheignite-sql.readme.io/docs/create-index 
> 
> https://apacheignite-sql.readme.io/docs/schema-and-indexes 
> 
> 
> > - If I have to use a SQL query for retrieving objects, what table name 
> > should I specify? The one used as the cache name does not work.
> >
> 
> Was the table and its queryable fields/indexes created with CREATE TABLE or 
> Java annotations/QueryEntity?
> 
> If the latter approach was taken then the table name corresponds to the Java 
> type name as shown in this doc:
> https://apacheignite-sql.readme.io/docs/schema-and-indexes 
> 
> 
> —
> Denis
> 
> > -Rajesh
> 
> 
> 



Re: CacheWriterException: Failed to write entry to database

2018-01-22 Thread ilya.kasnacheev
Hello.

It seems that your Oracle data source is not configured properly:

Caused by: java.sql.SQLException: Invalid Oracle URL specified:
OracleDataSource.makeURL

Unfortunately, you did not include that part of the source, so I can't say
anything more.
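
For reference, a hedged sketch of the URL form OracleDataSource expects; host,
port, and service name are placeholders:

import java.sql.SQLException;
import oracle.jdbc.pool.OracleDataSource;

OracleDataSource ds = new OracleDataSource(); // throws SQLException

// A thin-driver URL; without a valid URL, OracleDataSource.makeURL fails
// as in the exception above.
ds.setURL("jdbc:oracle:thin:@//dbhost:1521/ORCL");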

Regards,



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Re:Re: delete data error

2018-01-22 Thread Ilya Kasnacheev
Hello!

Thank you for the log!

It looks like there's some internal consistency problem with the cache.
From the log content the root cause is not apparent.

Can you please try getting the offending key from the cache via the Cache API
(to see if everything is all right with it), then deleting the key via the
Cache API, and then retrying the DELETE operation?

The key in question is '1516093156643-53-33'

Regards,


-- 
Ilya Kasnacheev

2018-01-22 4:13 GMT+03:00 Lucky :

>
> Is there any suggestion?
>
>
>
>
>
> At 2018-01-18 09:56:51, "Lucky"  wrote:
>
> This did not happen every time.
> When I run it several times,it will happen .And when it happened, then it 
> will happened every time.
>
> This table is simple; I insert some data and when I finish the job,I will 
> delete the data.
>
> Thanks.
>
>
> At 2018-01-17 20:20:34, "ilya.kasnacheev"  wrote:
> >Hello Lucky!
> >
> >Does this happen every time when you try, or it is a one-time occurrence?
> >
> >Can you please share logs from your nodes and cache/table configurations?
> >Ideally a small reproducer project.
> >
> >Regards,
> >
> >
> >
>
>
>
>
>
>
>
>
>
>


Re: query on BinaryObject index and table

2018-01-22 Thread Ilya Kasnacheev
Hello Rajesh!

Table name can be specified in cache configuration's query entity. If not
supplied, by default it is equal to value type name, e.g. BinaryObject :)

Also, note that SQL tables have fixed schemas. This means you won't be able
to add a random set of fields in BinaryObject and be able to do SQL queries
on them all. You will have to declare all fields that you are going to use
via SQL, either by annotations or query entity:
see https://apacheignite-sql.readme.io/docs/schema-and-indexes

To add an index, you should either specify it in annotations (via index=true)
or in the query entity.
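
A minimal sketch of the query-entity route, to make that concrete. It reuses
the cacheCfg from the quoted code below; the field names match the BinaryObject
being built there, while "Organization" as the value-type (and hence table)
name is an assumption:

import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;

QueryEntity qryEntity = new QueryEntity(Long.class.getName(), "Organization");

// Declare the fields that SQL should see.
LinkedHashMap<String, String> fields = new LinkedHashMap<>();
fields.put("id", Long.class.getName());
fields.put("name", String.class.getName());
qryEntity.setFields(fields);

// Index one of them.
qryEntity.setIndexes(Collections.singletonList(new QueryIndex("name")));

cacheCfg.setQueryEntities(Collections.singletonList(qryEntity));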

Regards,
Ilya.

-- 
Ilya Kasnacheev

2018-01-21 15:12 GMT+03:00 Rajesh Kishore :

> Hi Denis,
>
> This is my code:
>
> CacheConfiguration<Long, BinaryObject> cacheCfg = new CacheConfiguration<>(ORG_CACHE);
>
> cacheCfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
> cacheCfg.setBackups(1);
> cacheCfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_SYNC);
> cacheCfg.setIndexedTypes(Long.class, BinaryObject.class);
>
> IgniteCache<Long, BinaryObject> cache = ignite.getOrCreateCache(cacheCfg);
>
> if ( UPDATE ) {
>   System.out.println("Populating the cache...");
>
>   try (IgniteDataStreamer<Long, BinaryObject> streamer = ignite.dataStreamer(ORG_CACHE)) {
>     streamer.allowOverwrite(true);
>     IgniteBinary binary = ignite.binary();
>     BinaryObjectBuilder objBuilder = binary.builder(ORG_CACHE);
>
>     for ( long i = 0; i < 100; i++ ) {
>       streamer.addData(i,
>           objBuilder.setField("id", i)
>               .setField("name", "organization-" + i).build());
>
>       if ( i > 0 && i % 100 == 0 )
>         System.out.println("Done: " + i);
>     }
>   }
> }
>
> IgniteCache<Long, BinaryObject> binaryCache = ignite.cache(ORG_CACHE).withKeepBinary();
> BinaryObject binaryPerson = binaryCache.get(54l);
> System.out.println("name " + binaryPerson.field("name"));
>
>
> Not sure if I am missing some context here. If I have to use SqlQuery,
> what table name should I specify? I did not create a table explicitly - do I
> need to do that?
> How would I create the index?
>
> Thanks,
> Rajesh
>
> On Sun, Jan 21, 2018 at 12:25 PM, Denis Magda  wrote:
>
>>
>>
>> > On Jan 20, 2018, at 7:20 PM, Rajesh Kishore wrote:
>> >
>> > Hi,
>> >
>> > I have a requirement that my schema is not fixed, so I have to use the
>> > BinaryObject approach instead of a fixed POJO.
>> >
>> > I am relying on the OOTB file system persistence mechanism.
>> >
>> > My questions are:
>> > - How can I specify the indexes on BinaryObject?
>>
>> https://apacheignite-sql.readme.io/docs/create-index
>> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>>
>> > - If I have to use a SQL query for retrieving objects, what table name
>> > should I specify? The one used as the cache name does not work.
>> >
>>
>> Was the table and its queryable fields/indexes created with CREATE TABLE
>> or Java annotations/QueryEntity?
>>
>> If the latter approach was taken then the table name corresponds to the
>> Java type name as shown in this doc:
>> https://apacheignite-sql.readme.io/docs/schema-and-indexes
>>
>> —
>> Denis
>>
>> > -Rajesh
>>
>>
>


Re: CacheStoreAdapter write and delete are not being called by Ignite's GridCacheStoreManager

2018-01-22 Thread slava.koptilin
Hi Pim,

Well, you only configured a CacheLoaderFactory, which is used when a cache is
read-through or when loading data into a cache via the Cache#loadAll()
method.
So, you need to provide a CacheWriter implementation in order to enable
write-through behavior.
Please see CacheConfiguration#setCacheWriterFactory().
Apache Ignite also provides a convenient method,
CacheConfiguration#setCacheStoreFactory(), which allows specifying a cache
store factory (please see the CacheStore class). Something like the following:

CacheConfiguration cacheCfg = new CacheConfiguration();
cacheCfg.setName("PlayerScores");
cacheCfg.setCacheStoreFactory(FactoryBuilder.factoryOf(PlayerScoreStoreAdapter.class));
...
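
One hedged addition to the snippet above: a configured store is only consulted
once the through flags are enabled on the same configuration (standard
CacheConfiguration setters):

cacheCfg.setReadThrough(true);
cacheCfg.setWriteThrough(true);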

Simple examples can be found here:
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcPersonStore.java
https://github.com/apache/ignite/blob/master/examples/src/main/java/org/apache/ignite/examples/datagrid/store/jdbc/CacheJdbcStoreExample.java

Thanks,
Slava.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: DataStreamer does not persist in cache when a receiver is set

2018-01-22 Thread Evgenii Zhuravlev
Hi,

Please share the full code of your EntryProcessor. Which logger do you use
here?

Thanks,
Evgenii

2018-01-20 21:00 GMT+03:00 Pim D :

> Hi,
>
> I have created a data streamer that will stream data from a database to my
> cache.
> This works very nicely, until...
> ... I include a StreamTransformer in the data streamer.
> When the transformer is set, nothing gets stored in the cache?!?
>
> In a simple example my transformer extends the StreamTransformer and
> implements process:
> @Override
> public Object process(MutableEntry entry, Object... arg) throws EntryProcessorException {
>     if (log.isDebugEnabled())
>         log.debug("Loading " + entry.getKey() + " into the cache");
>     if (arg.length == 1 && arg[0] instanceof HighScore) {
>         // Transform the stream argument to the cache object and set it
>         entry.setValue((Integer) arg[0]);
>     }
>     return arg[0];
> }
>
> Eventually I would really like to transform the object read from the
> database into a new object that corresponds to my cache value object (basic
> ETL).
> But for now I first need to get the streamer to work properly.
> Any clues why this is not working?
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>
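
For comparison, a minimal receiver sketch that does store values, assuming an
already-started Ignite instance `ignite` and an Integer-valued cache named
"PlayerScores" (both made up); note that, per the Ignite docs, StreamTransformer
only takes effect with allowOverwrite(true):

import org.apache.ignite.IgniteDataStreamer;
import org.apache.ignite.stream.StreamTransformer;

try (IgniteDataStreamer<Integer, Integer> streamer = ignite.dataStreamer("PlayerScores")) {
    // Without this, the transformer is never applied.
    streamer.allowOverwrite(true);

    streamer.receiver(StreamTransformer.from((entry, args) -> {
        // A real ETL step would transform args[0] before storing it.
        entry.setValue((Integer) args[0]);
        return null;
    }));

    streamer.addData(1, 42);
}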


Re: .Net standard target for Ignite.Net client

2018-01-22 Thread Pavel Tupitsyn
2.4 is expected in a couple of weeks:
http://apache-ignite-developers.2346864.n4.nabble.com/Apache-Ignite-2-4-release-td26031.html

On Mon, Jan 22, 2018 at 1:12 PM, Raymond Wilson wrote:

> Hi Pavel,
>
> That is very good news! :)
>
> When do you expect 2.4 to be released
>
> Thanks,
> Raymond.
>
> Sent from my iPhone
>
> On 22/01/2018, at 9:22 PM, Pavel Tupitsyn wrote:
>
> Hi Raymond,
>
> Upcoming Ignite 2.4 has a lot of changes to run Ignite.NET under .NET
> Core on Linux and macOS (as well as Windows).
> However, we are not targeting .NET Standard, since it misses some crucial
> things like DynamicMethod.
>
> Please make sure that you don't confuse .NET Standard and .NET Core: only
> class libraries can target .NET Standard.
> Actual applications target .NET Core.
>
> In my understanding, nothing should prevent you from doing dockerized
> deployments on any OS with Ignite.NET 2.4.
> You can try nightly build and see if it works for your environment:
> https://cwiki.apache.org/confluence/display/IGNITE/Nightly+Builds
>
> Thanks,
> Pavel
>
> On Sun, Jan 21, 2018 at 11:55 PM, Raymond Wilson <
> raymond_wil...@trimble.com> wrote:
>
>> All,
>>
>>
>>
>> Are there any plans for porting the current Ignite.Net client to .Net
>> Standard? Has anyone investigated how much effort there would be involved?
>>
>>
>>
>> We would like to use dockerised deployments and as we use a .Net
>> development stack .Net Standard is our target platform for those
>> deployments.
>>
>>
>>
>> Thanks,
>>
>> Raymond.
>>
>>
>>
>
>


Re: .Net standard target for Ignite.Net client

2018-01-22 Thread Raymond Wilson
Hi Pavel,

That is very good news! :)

When do you expect 2.4 to be released 

Thanks,
Raymond. 

Sent from my iPhone

> On 22/01/2018, at 9:22 PM, Pavel Tupitsyn wrote:
> 
> Hi Raymond,
> 
> Upcoming Ignite 2.4 has a lot of changes to run Ignite.NET under .NET Core on 
> Linux and macOS (as well as Windows).
> However, we are not targeting .NET Standard, since it misses some crucial 
> things like DynamicMethod.
> 
> Please make sure that you don't confuse .NET Standard and .NET Core: only 
> class libraries can target .NET Standard.
> Actual applications target .NET Core.
> 
> In my understanding, nothing should prevent you from doing dockerized 
> deployments on any OS with Ignite.NET 2.4.
> You can try nightly build and see if it works for your environment: 
> https://cwiki.apache.org/confluence/display/IGNITE/Nightly+Builds
> 
> Thanks,
> Pavel
> 
>> On Sun, Jan 21, 2018 at 11:55 PM, Raymond Wilson wrote:
>> All,
>> 
>>  
>> 
>> Are there any plans for porting the current Ignite.Net client to .Net 
>> Standard? Has anyone investigated how much effort there would be involved?
>> 
>>  
>> 
>> We would like to use dockerised deployments and as we use a .Net development 
>> stack .Net Standard is our target platform for those deployments.
>> 
>>  
>> 
>> Thanks,
>> 
>> Raymond.
>> 
>>  
>> 
> 


Re: Long activation times with Ignite persistence enabled

2018-01-22 Thread Alexey Goncharuk
Andrey,

Can you please describe in greater detail the configuration of your nodes
(specifically, number of caches and number of partitions). Ignite would not
load all the partitions into memory on startup simply because there is no
such logic. What it does, however, is loading meta pages for each partition
in each cache group to determine the correct cluster state and schedule
rebalancing, if needed. If the number of caches x number of partitions is
high, this may take a while.
If this is the case, you can either reduce the number of partitions or
group logical caches with the same affinity into a physical cache group, so
that those caches share the same partition file. See
CacheConfiguration#setGroupName(String) for more detail.
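
A minimal sketch of such grouping, assuming an already-started Ignite instance
`ignite` and two same-affinity caches (the names are made up):

import java.util.Arrays;
import org.apache.ignite.configuration.CacheConfiguration;

CacheConfiguration ordersCfg = new CacheConfiguration("orders");
CacheConfiguration paymentsCfg = new CacheConfiguration("payments");

// Both caches land in one cache group and share the same partition files.
ordersCfg.setGroupName("txGroup");
paymentsCfg.setGroupName("txGroup");

ignite.getOrCreateCaches(Arrays.asList(ordersCfg, paymentsCfg));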

Last but not least, it looks very suspicious that with 0 pending updates it
took almost 90 seconds to read WAL. From the code, I see that this again
may be related to partition state recovery, I will need to re-check this
and get back to you later.

Thanks,
AG

2018-01-19 2:51 GMT+03:00 Andrey Kornev :

> Hello,
>
> I'm wondering if there is a way to improve the startup time of Ignite node
> when the persistence is enabled?
>
> It seems the time is proportional to the size (and number) of the
> partition files. This is somewhat surprising, as I expected the startup
> time to be the same (plus or minus some constant factor) regardless of the
> amount of data persisted.
>
> The delay looks to be due to Ignite loading *all* partition files for
> *all* persistence-enabled caches as part of a node's join. Here's an
> example of the startup log output:
>
> 2018-01-18 14:00:40,230 INFO  [exchange-worker-#42%ignite-1%] GridCacheDatabaseSharedManager - Read checkpoint status [startMarker=/tmp/storage/data/1/cp/1516311778910-d56f8ceb-2205-4bef-9ed3-a7446e34aa06-START.bin, endMarker=/tmp/storage/data/1/cp/1516311778910-d56f8ceb-2205-4bef-9ed3-a7446e34aa06-END.bin]
> 2018-01-18 14:00:40,230 INFO  [exchange-worker-#42%ignite-1%] GridCacheDatabaseSharedManager - Applying lost cache updates since last checkpoint record [lastMarked=FileWALPointer [idx=1693, fileOff=7970054, len=60339], lastCheckpointId=d56f8ceb-2205-4bef-9ed3-a7446e34aa06]
> 2018-01-18 14:00:57,114 WARN  [exchange-worker-#42%ignite-1%] PageMemoryImpl - Page evictions started, this will affect storage performance (consider increasing DataRegionConfiguration#setMaxSize).
> 2018-01-18 14:02:05,469 INFO  [exchange-worker-#42%ignite-1%] GridCacheDatabaseSharedManager - Finished applying WAL changes [updatesApplied=0, time=85234ms]
>
> It took ≈1.5 minute to activate a node. To add insult to injury, the
> eviction kicked in and most of the loaded pages got evicted (in this
> test, I had the caches sharing a 1GB memory region loading about 10GB of
> data and index). In general, I think it's not unreasonable to expect a
> 1-to-10 ratio of the data region size to the total persisted data size.
>
> Why load all that data in the first place? It seems like a huge waste of
> time. Can the data partitions be loaded lazily on demand while the index
> partition can still be loaded during node startup?
>
> Thanks
> Andrey
>
>


Re: .Net standard target for Ignite.Net client

2018-01-22 Thread Pavel Tupitsyn
Hi Raymond,

Upcoming Ignite 2.4 has a lot of changes to run Ignite.NET under .NET Core
on Linux and macOS (as well as Windows).
However, we are not targeting .NET Standard, since it misses some crucial
things like DynamicMethod.

Please make sure that you don't confuse .NET Standard and .NET Core: only
class libraries can target .NET Standard.
Actual applications target .NET Core.

In my understanding, nothing should prevent you from doing dockerized
deployments on any OS with Ignite.NET 2.4.
You can try nightly build and see if it works for your environment:
https://cwiki.apache.org/confluence/display/IGNITE/Nightly+Builds

Thanks,
Pavel

On Sun, Jan 21, 2018 at 11:55 PM, Raymond Wilson  wrote:

> All,
>
>
>
> Are there any plans for porting the current Ignite.Net client to .Net
> Standard? Has anyone investigated how much effort there would be involved?
>
>
>
> We would like to use dockerised deployments and as we use a .Net
> development stack .Net Standard is our target platform for those
> deployments.
>
>
>
> Thanks,
>
> Raymond.
>
>
>