Range queries on indexed columns

2017-06-12 Thread Anil
HI Team,

I have a table TEST with an indexed column COL_A. Does the following query
work?

select * from Test where COL_A > '1' and COL_A < '2' offset 10  ROWS FETCH
NEXT 20 ROWS ONLY

As per my understanding of distributed systems, the query is sent to all
nodes, each node returns its first 10 records, and whichever arrives first is
returned.

Since indexes are distributed, the above query may not return records in a
stable paginated order without adding a sort, like below:

select * from Test where COL_A > '1' and COL_A < '2' order by COL_A offset
10  ROWS FETCH NEXT 20 ROWS ONLY

Do you see any overhead from the sort here?

Does it work in the following way?

The query is sent to all nodes; each node returns its top 10 records (based
on its sorted index), the reducer merges the per-node results, and the final
10 are returned.

The sort should not add much overhead here, since both the filter and the
sort are on the indexed column.
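As an aside, the reducer-side merge described above can be sketched in plain Java. This is an illustrative stand-in, not Ignite's actual implementation: each node is assumed to return its page already sorted by COL_A, and the reducer does a k-way merge before applying OFFSET/FETCH.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class ReducerMerge {
    /** Merge already-sorted per-node result pages, then apply OFFSET/FETCH. */
    static List<String> mergeAndPage(List<List<String>> nodePages, int offset, int fetch) {
        // k-way merge via a priority queue of {listIndex, position} cursors.
        PriorityQueue<int[]> pq = new PriorityQueue<>(
            Comparator.comparing((int[] c) -> nodePages.get(c[0]).get(c[1])));
        for (int i = 0; i < nodePages.size(); i++)
            if (!nodePages.get(i).isEmpty())
                pq.add(new int[] {i, 0});

        List<String> out = new ArrayList<>();
        // Only offset + fetch rows ever need to be pulled from the nodes.
        while (!pq.isEmpty() && out.size() < offset + fetch) {
            int[] c = pq.poll();
            out.add(nodePages.get(c[0]).get(c[1]));
            if (c[1] + 1 < nodePages.get(c[0]).size())
                pq.add(new int[] {c[0], c[1] + 1});
        }
        return out.subList(Math.min(offset, out.size()), out.size());
    }

    public static void main(String[] args) {
        List<List<String>> pages = List.of(
            List.of("1.1", "1.4", "1.9"),   // node 1, sorted by COL_A
            List.of("1.2", "1.3"),          // node 2
            List.of("1.5", "1.8"));         // node 3
        System.out.println(mergeAndPage(pages, 2, 3)); // prints [1.3, 1.4, 1.5]
    }
}
```

Because each per-node page is sorted, the merge only materializes `offset + fetch` rows at the reducer, which is the behaviour the question is asking about.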

Please correct me if I am wrong. Thanks.

Thanks


Re: Off-Heap On-Heap in Ignite-2.0.0

2017-06-12 Thread Megha Mittal
Hi Denis, thanks for answering. That completely cleared up my doubt.

I would like to know one more thing. I was testing with ~10 million records,
each of ~1400 bytes. When I tried to keep everything on-heap, I had to
allocate around 15 GB of heap to store all these records. But when I switched
off on-heap caching and kept everything off-heap, the same number of records
fit in a 10 GB off-heap memory region. Does this imply that when Ignite
stores entries on-heap they are in deserialized form, so roughly the exact
memory (no. of records * size of one record) is required, whereas off-heap
they are stored in serialized form, so less memory than expected is needed?
Or is there some other reason for this difference in memory requirements?
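A quick back-of-envelope check of the numbers above (plain Java; the interpretation that on-heap stores deserialized objects with header/reference/padding overhead while off-heap stores a compact binary form is my assumption, pending confirmation):

```java
public class MemoryFootprint {
    public static void main(String[] args) {
        long records = 10_000_000L;
        long bytesPerRecord = 1_400L;
        double rawGiB = records * bytesPerRecord / (1024.0 * 1024 * 1024);
        System.out.printf("raw payload: %.1f GiB%n", rawGiB);          // ~13.0 GiB
        // Observed: ~15 GiB on-heap (deserialized objects: headers,
        // references, padding) vs ~10 GiB off-heap (compact binary form).
        System.out.printf("on-heap overhead: %.0f%%%n", (15 - rawGiB) / rawGiB * 100);
        System.out.printf("off-heap saving:  %.0f%%%n", (rawGiB - 10) / rawGiB * 100);
    }
}
```

So the raw payload alone is ~13 GiB, which makes the observed 15 GiB on-heap / 10 GiB off-heap split plausible without anything else being wrong.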



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Off-Heap-On-Heap-in-Ignite-2-0-0-tp13548p13637.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


how to look at content of igfs

2017-06-12 Thread Antonio Si
Hi,

I have configured Ignite to use HDFS as the secondary file system, and I have
written the contents of a dataframe to IGFS.

Browsing HDFS, I can see the files being written there. But is there a way
I can look at the contents of IGFS directly? Would I be able to do that in
ignitevisorcmd?

Thanks.

Antonio.


Re: Off-Heap On-Heap in Ignite-2.0.0

2017-06-12 Thread Denis Magda
Megha,

>  Or is it required to give 1 GB to off heap
> memory region with no relation how much I allocate to on-heap. 

That’s the correct point.

All data is stored off-heap by default. If you give 500 MB to the off-heap 
region, you'll get an out-of-memory exception once you go beyond that 
boundary. The Java heap is no longer treated as data storage and may only be 
used as an extra caching layer for entries you already have in the off-heap region.
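For reference, a minimal configuration sketch of this model in the Ignite 2.0 Java API. Treat it as a sketch, not a definitive setup: exact method names (e.g. setSize vs. setMaxSize) changed between 2.0.x and 2.1, so check them against your version.

```java
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;

public class OffHeapConfig {
    static IgniteConfiguration config() {
        // The off-heap region ("memory policy" in 2.0) where ALL entries live.
        // Going past this cap triggers the out-of-memory exception above.
        MemoryPolicyConfiguration plc = new MemoryPolicyConfiguration()
            .setName("default")
            .setSize(2L * 1024 * 1024 * 1024); // 2 GB off-heap cap

        // The heap is only an optional extra caching layer on top of off-heap.
        CacheConfiguration<Long, Object> cache = new CacheConfiguration<>("items");
        cache.setOnheapCacheEnabled(true);

        return new IgniteConfiguration()
            .setMemoryConfiguration(new MemoryConfiguration().setMemoryPolicies(plc))
            .setCacheConfiguration(cache);
    }
}
```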

—
Denis

> On Jun 12, 2017, at 1:03 AM, Megha Mittal  wrote:
> 
> Hi Denis,
> 
> Thanks for your reply. The video was helpful. But I am still not clear on
> one point: if I have 1 GB of data, with on-heap caching enabled, and I give
> my off-heap memory region a maximum size of 500 MB and 1 GB to my heap, will
> this configuration work? Or is it required to give 1 GB to the off-heap
> memory region, regardless of how much I allocate to on-heap?
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Off-Heap-On-Heap-in-Ignite-2-0-0-tp13548p13617.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.



Re: Concurrent job execution and FifoQueueCollisionSpi.parallelJobsNumber=1

2017-06-12 Thread Ryan Ripken
I use IgniteMessaging to update status info from inside my compute jobs, 
so publicThreadPoolSize=1 probably isn't going to work.


In your opinion, would a new CollisionSpi implementation that 
synchronizes the onCollision method resolve the issue?




On 6/9/2017 5:01 PM, vkulichenko wrote:

Ryan,

Yes, this is about IgniteMessaging API.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Concurrent-job-execution-and-FifoQueueCollisionSpi-parallelJobsNumber-1-tp8697p13586.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.





Re: Grid/Cluster unique UUID possible with IgniteUuid?

2017-06-12 Thread Muthu
Thanks Nikolai. This is what I am doing; not sure if it is too much. What do
you think? The goal is to make sure that a UUID is unique across the entire
application (the problem is that each node in the cluster would be doing this
for the different entities it owns).

...
...
System.out.println("in ObjectCacheMgrService.insertDepartment for dept : " + dept);
long t1 = System.currentTimeMillis();
String uUID = new IgniteUuid(UUID.randomUUID(),
    igniteAtomicSequence.incrementAndGet()).toString();
long t2 = System.currentTimeMillis();
System.out.println("Time for UUID generation (millis) : " + (t2 - t1));
dept.setId(uUID);
deptCache.getAndPut(uUID, dept);
System.out.println("in ObjectCacheMgrService.insertDepartment : department inserted successfully : " + dept);
...
...

Regards,
Muthu

On Mon, Jun 12, 2017 at 3:24 AM, Nikolai Tikhonov 
wrote:

> Muthu,
>
> Yes, you can use IgniteUuid as a unique ID generator. Which one you use
> depends on your requirements: IgniteAtomicSequence takes one long, while
> IgniteUuid takes three longs, but getting a new sequence range is a
> distributed operation. You need to decide which is more critical for you.
>
> On Fri, Jun 9, 2017 at 8:46 PM, Muthu  wrote:
>
>>
>> Missed adding this one...i know there is support for ID generation with
>> IgniteAtomicSequence @ https://apacheignite.readme.io/docs/id-generator
>>
>> The question is which one should i use...i want to use this to generate
>> unique ids for entities that are to be cached & persisted..
>>
>> Regards,
>> Muthu
>>
>>
>> On Fri, Jun 9, 2017 at 10:27 AM, Muthu  wrote:
>>
>>> Hi Folks,
>>>
> >>> Is it possible to generate a Grid/Cluster-unique UUID using IgniteUuid?
> >>> I looked at the source code & the static factory method randomUuid().
> >>> It looks like it generates one with a java.util.UUID (generated with
> >>> its randomUUID) & an AtomicLong's incrementAndGet.
> >>>
> >>> Can I safely assume that, given it uses a combination of a UUID & a long
> >>> on the individual VMs that are part of the Grid/Cluster, it will be
> >>> unique, or is there a better way?
>>>
>>> Regards,
>>> Muthu
>>>
>>
>>
>


Re: Automatic Persistence : Unable to bring up cache with web console generated models ("org.apache.ignite.IgniteCheckedException: Failed to register query type" exception is thrown)

2017-06-12 Thread Muthu
Thanks Alexey. Got it. Thanks for the helpful responses.

For the other issues w.r.t. model generation & Spring transaction
integration, would it help to share my small POC project on GitHub? I think
these would be very helpful to have fixed. I can also file a bug report
if needed.

I have some queries on the second point you mentioned...please see inline.

Regards,
Muthu

On Mon, Jun 12, 2017 at 6:32 AM, Alexey Kuznetsov 
wrote:

> Muthu,
>
> >>2. With Ignite/Web Console, if a DB table insert/update/delete operation
> were to happen to the database directly is there a way to have that
> automatically picked up into the cache (so its always in sync with the DB
> tables)?
> >> I know this is too much to ask. I ask because we currently have code
> which uses MyBatis as ORM to read/write to PostgreSQL DB & i need to build
> a cache without trying to integrate with MyBatis which is problematic.
> >> I have some tools in mind & things like messaging based loads, etc to
> try but if Ignite does this with some integration that would be great.
>
> First of all, I would like to state that it is not correct to mix Ignite
> and Web Console in this context.
> Web Console is just an *EXTERNAL tool*. It can generate some code for you
> and also lets you execute ad-hoc SQL queries and monitor your cluster.
> That's all.
>
> Second, there is only one way to keep caches and the DB in sync: do ALL
> data modifications via Ignite caches only,
> and configure the caches to write-through to your DB.
>
[MUTHU] : I understand. I was more coming from our practical problem of how
easy (time-wise) it would be to introduce an Ignite cache given all the code
currently writing to the DB via MyBatis.

>
> So, you could use Ignite directly as the DB: configure your ORM to work
> with Ignite as with a DB, via the Ignite JDBC driver,
> and configure the caches to write-through to your DB.
>
[MUTHU] : I checked the JDBC driver support a while back. Is it true to say
that if I have a grid with, say, 10 Ignite caches (caching 10 different DB
tables), I can just connect to the grid (having it all specified in the
config XML) & execute queries on any/all of them, including queries that join
across one or more of these caches? Pardon me, but I could not tell this for
sure from the docs. If this kind of support is available then, as you
mentioned, we can use it just like a JDBC data store.

>
>
> --
> Alexey Kuznetsov
>


Re: swift store as secondary file system

2017-06-12 Thread Antonio Si
Thanks Nikolai. I am wondering if anyone has done something similar.

Thanks.

Antonio.

On Mon, Jun 12, 2017 at 3:30 AM, Nikolai Tikhonov 
wrote:

> Hi, Antonio!
>
> You can implement your own CacheStore which will propagate data to
> Swift. Or do you mean some other integration with this product?
>
> On Sat, Jun 10, 2017 at 9:04 AM, Antonio Si  wrote:
>
>> Hi Alexey,
>>
>> I meant a swift object storage: https://wiki.openstack.org/wiki/Swift
>>
>> Thanks.
>>
>> Antonio.
>>
>>
>>
>> On Fri, Jun 9, 2017 at 6:38 PM, Alexey Kuznetsov 
>> wrote:
>>
>>> Hi, Antonio!
>>>
>>> What is a "swift store"?
>>> Could you give a link?
>>>
>>> On Sat, Jun 10, 2017 at 7:32 AM, Antonio Si 
>>> wrote:
>>>
 Hi,

 Is there a secondary file system implementation for a swift store?

 Thanks.

 Antonio.

>>>
>>>
>>>
>>> --
>>> Alexey Kuznetsov
>>>
>>
>>
>


Re: System Parameters to improve CPU utilization

2017-06-12 Thread Nikolai Tikhonov
Hi,

Can you provide more details about your case? Which operations do you perform
on the grid?

On Fri, Jun 9, 2017 at 1:21 PM, rishi007bansod 
wrote:

> Hi,
> For my Ignite data caching process I have recorded the following
> statistics, in which I found my CPU utilization is not high (only
> 60-70%). Also, during this run a high number of minor page faults and
> context switches/sec are seen. Are these parameters limiting my system
> performance? Are there any tunings I can apply to improve CPU utilization?
>
> *CPU Utilization :*
> 
>
> *Page Faults :*
>  n13562/page_faults.png>
>
> *Context Switches/sec :*
>  n13562/contextswitchespersec.png>
>
> I have also tried increasing setStartSize for the cache, but the same
> number of page faults, context switches/sec, and CPU utilization is seen.
>
> *Page Faults when setStartSize is set to 60*1024*1024 (for 60M entries
> in our case):*
>  n13562/pagingsetStartSize.png>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/System-Parameters-to-improve-CPU-
> utilization-tp13562.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Automatic Persistence : Unable to bring up cache with web console generated models ("org.apache.ignite.IgniteCheckedException: Failed to register query type" exception is thrown)

2017-06-12 Thread Alexey Kuznetsov
Muthu,

>>2. With Ignite/Web Console, if a DB table insert/update/delete operation
were to happen to the database directly is there a way to have that
automatically picked up into the cache (so its always in sync with the DB
tables)?
>> I know this is too much to ask. I ask because we currently have code
which uses MyBatis as ORM to read/write to PostgreSQL DB & i need to build
a cache without trying to integrate with MyBatis which is problematic.
>> I have some tools in mind & things like messaging based loads, etc to
try but if Ignite does this with some integration that would be great.

First of all, I would like to state that it is not correct to mix Ignite and
Web Console in this context.
Web Console is just an *EXTERNAL tool*. It can generate some code for you and
also lets you execute ad-hoc SQL queries and monitor your cluster. That's all.

Second, there is only one way to keep caches and the DB in sync: do ALL data
modifications via Ignite caches only,
and configure the caches to write-through to your DB.

So, you could use Ignite directly as the DB: configure your ORM to work with
Ignite as with a DB, via the Ignite JDBC driver,
and configure the caches to write-through to your DB.


-- 
Alexey Kuznetsov


RE: ignite 1.5 network imbalance

2017-06-12 Thread Libo Yu
I can give you the numbers. I have no idea why server 1 sent so much data to 
servers 2 and 3.

Server 1
bytes sent = 51096699125 bytes received = 3457376624

Server 2
bytes sent = 3631468045 bytes received = 27549879339

Server 3
bytes sent = 2172264793 bytes received = 23901316825




From: Nikolai Tikhonov [mailto:ntikho...@apache.org]
Sent: Monday, June 12, 2017 6:45 AM
To: user@ignite.apache.org
Subject: Re: ignite 1.5 network imbalance

Hi Libo!

Could you describe your imbalance in percent? Also, can you try upgrading 
Ignite to 1.9 and checking again?

On Fri, Jun 9, 2017 at 11:05 PM, Libo Yu 
> wrote:
Hi,

We have used embedded ignite cache on three application servers which are 
behind a load balancer.
The cache is set to PARTITIONED mode with backups=0. However, we noticed one 
node has
a large outbound traffic and the other two nodes both have large inbound 
traffic. I printed
out the partition number and local data size for each cache and they are almost 
the same.
We have been struggling with this issue for quite some time and cannot figure 
out what
caused this imbalance.  Note that we did not use client mode. I wonder if 
anybody has
experienced the same issue for 1.5. Thanks.

Regards,

Libo Yu




Re: High heap on ignite client

2017-06-12 Thread Anil
Do you have any advice on implementing export of a large number of records
from Ignite?

I could not use ScanQuery, as my whole application is built around the JDBC
driver and writing complex queries as scan queries is very difficult.
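One common workaround is to export in bounded pages ordered by a unique indexed key, so no single result set has to hold everything. The keyset-pagination scheme below (`where id > ? order by id limit ?`) is a general pattern, not something the Ignite JDBC driver does for you automatically; it is sketched here in plain Java against a list standing in for the table:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.LongStream;

public class KeysetExport {
    /** Stand-in for: select id from T where id > ? order by id limit ? */
    static List<Long> page(List<Long> sortedIds, long afterId, int limit) {
        return sortedIds.stream()
            .filter(id -> id > afterId)
            .limit(limit)
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Long> table = LongStream.rangeClosed(1, 1_000).boxed()
            .collect(Collectors.toList());
        List<Long> exported = new ArrayList<>();
        long last = 0;
        List<Long> batch;
        while (!(batch = page(table, last, 100)).isEmpty()) {
            exported.addAll(batch);             // write this batch to the file
            last = batch.get(batch.size() - 1); // resume after the last key
        }
        System.out.println("exported " + exported.size() + " rows in pages of 100");
    }
}
```

Each JDBC query then fetches at most one page, so memory stays bounded regardless of how many rows are exported in total.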

Thanks

On 10 June 2017 at 18:48, Anil  wrote:

> I understand from the code that there is no cursor from the H2 db (or the
> Ignite-embedded H2 db) internally, and all mapper responses are consolidated
> at the reducer. That means that when exporting a large number of records,
> all the data is in memory.
>  if (send(nodes,
> oldStyle ?
> new GridQueryRequest(qryReqId,
> r.pageSize,
> space,
> mapQrys,
> topVer,
> extraSpaces(space, qry.spaces()),
> null,
> timeoutMillis) :
> new GridH2QueryRequest()
> .requestId(qryReqId)
> .topologyVersion(topVer)
> .pageSize(r.pageSize)
> .caches(qry.caches())
> .tables(distributedJoins ? qry.tables() : null)
> .partitions(convert(partsMap))
> .queries(mapQrys)
> .flags(flags)
> .timeout(timeoutMillis),
> oldStyle && partsMap != null ? new
> ExplicitPartitionsSpecializer(partsMap) : null,
> false)) {
>
> awaitAllReplies(r, nodes, cancel);
>
> *// once the responses from all nodes for the query received.. proceed
> further ?*
>
>   if (!retry) {
> if (skipMergeTbl) {
> List res = new ArrayList<>();
>
> // Simple UNION ALL can have multiple indexes.
> for (GridMergeIndex idx : r.idxs) {
> Cursor cur = idx.findInStream(null, null);
>
> while (cur.next()) {
> Row row = cur.get();
>
> int cols = row.getColumnCount();
>
> List resRow = new
> ArrayList<>(cols);
>
> for (int c = 0; c < cols; c++)
> resRow.add(row.getValue(c).
> getObject());
>
> res.add(resRow);
> }
> }
>
> resIter = res.iterator();
> }else {
>   // incase of split query scenario
> }
>
>  }
>
>   return new GridQueryCacheObjectsIterator(resIter, cctx,
> keepPortable);
>
>
> The query cursor is an iterator which does column value mapping per page.
> But all records of the query are still in memory. Correct?
>
> Please correct me if I am wrong. Thanks.
>
>
> Thanks
>
>
> On 10 June 2017 at 15:53, Anil  wrote:
>
>>
>> jvm parameters used -
>>
>> -Xmx6144m -XX:NewSize=512m -XX:+UseTLAB -XX:+UseG1GC
>> -XX:MaxGCPauseMillis=500 -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC
>> -Xloggc:C:/Anil/dumps/gc-client.log -XX:+HeapDumpOnOutOfMemoryError
>> -XX:+PrintGCCause -XX:+PrintGCDetails -XX:+PrintAdaptiveSizePolicy
>> -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:+HeapDumpAfterFullGC
>> -XX:+ScavengeBeforeFullGC -XX:+DisableExplicitGC -XX:+AlwaysPreTouch
>> -XX:+PrintFlagsFinal -XX:HeapDumpPath=C:/Anil/dumps/heapdump-client.hprof
>>
>> Thanks.
>>
>> On 10 June 2017 at 15:06, Anil  wrote:
>>
>>> HI,
>>>
>>> I have implemented export feature of ignite data using JDBC Interator
>>>
>>> ResultSet rs = statement.executeQuery();
>>>
>>> while (rs.next()){
>>> // do operations
>>>
>>> }
>>>
>>> and fetch size is 200.
>>>
>>> when I run the export operation twice for 4 L (400,000) records, the
>>> whole 6 GB heap is filled up and never gets released.
>>>
>>> Initially I thought the operations transforming the result set to a file
>>> were causing the memory to fill up. But no.
>>>
>>> I just did the following and still the memory is growing and not getting
>>> released:
>>>
>>> while (rs.next()){
>>>  // nothing
>>> }
>>>
>>> num #instances #bytes  class name
>>> --
>>>1:  55072353 2408335272  [C
>>>2:  54923606 1318166544  java.lang.String
>>>3:779006  746187792  [B
>>>4:903548  304746304  [Ljava.lang.Object;
>>>5:773348  259844928  net.juniper.cs.entity.InstallBase
>>>6:   4745694  113896656  java.lang.Long
>>>7:   692   44467680  sun.nio.cs.UTF_8$Decoder
>>>8:773348   30933920  org.apache.ignite.internal.binary.BinaryObjectImpl
>>>9:895627   21495048  

Re: messaging behavior

2017-06-12 Thread Nikolai Tikhonov
Hi,

Ignite does not accumulate messages that were sent to a non-existent topic;
the messages will be lost in your case.
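A minimal in-JVM model of this fire-and-forget semantic (plain Java, not the Ignite API; class and method names here are illustrative) shows why a late subscriber sees nothing:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class FireAndForgetTopics {
    private final Map<String, List<Consumer<String>>> listeners = new ConcurrentHashMap<>();

    /** Delivered only to listeners registered at send time; otherwise dropped. */
    void send(String topic, String msg) {
        listeners.getOrDefault(topic, List.of()).forEach(l -> l.accept(msg));
    }

    void subscribe(String topic, Consumer<String> l) {
        listeners.computeIfAbsent(topic, t -> new CopyOnWriteArrayList<>()).add(l);
    }

    public static void main(String[] args) {
        FireAndForgetTopics bus = new FireAndForgetTopics();
        List<String> received = new CopyOnWriteArrayList<>();
        bus.send("T1", "early");             // no listener yet: the message is lost
        bus.subscribe("T1", received::add);  // Client B comes online
        bus.send("T1", "late");
        System.out.println(received);        // prints [late]
    }
}
```

If delivery to late subscribers is needed, the sender has to persist the message itself (e.g. in a cache or queue) rather than rely on the topic.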

On Mon, Jun 12, 2017 at 12:30 PM, shawn.du  wrote:

> Hi,
>
> I am trying Ignite topic-based messaging. I wonder about Ignite's behavior
> in the following case:
>
> Client A sends a message with topic T1 to the Ignite server, but there are
> no topic listeners at this time. After a while (say 1 or 2 minutes),
> Client B comes online and subscribes to topic T1. Will Client B get the
> message? If so, how long will the message stay in the Ignite queue, and how
> do I set that? And how does it work for ordered messages?
>
> Thanks
> Shawn
>
>


Ignite Cache Metrics

2017-06-12 Thread Megha Mittal
Hi, I am trying to access Ignite 2.0.0 cache metrics. I have loaded one cache
into my cluster of 2 nodes with 1 backup (onHeapEnabled=false).

At an instant of time I am getting below ignite cache size using :

itemCache.size(CachePeekMode.OFFHEAP) = 2645679
itemCache.size(CachePeekMode.PRIMARY) = 2645679
itemCache.size(CachePeekMode.BACKUP) = 2635802
itemCache.metrics().getOffHeapPrimaryEntriesCount() = 2638451

Why is there a difference in the above numbers? In my understanding the
primary and backup counts should be the same, and equal to the number of
entries in the cache. Also, the counts fetched via cache.size and
cache.metrics should match.

Also, can someone tell me how I can get the off-heap memory used?
When I do itemCache.metrics().getOffHeapAllocatedSize(), I always get '0'.
Is there some other metric available to fetch off-heap memory metrics?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cache-Metrics-tp13624.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ignite 1.5 network imbalance

2017-06-12 Thread Nikolai Tikhonov
Hi Libo!

Could you describe your imbalance in percent? Also, can you try upgrading
Ignite to 1.9 and checking again?

On Fri, Jun 9, 2017 at 11:05 PM, Libo Yu  wrote:

> Hi,
>
>
>
> We have used embedded ignite cache on three application servers which are
> behind a load balancer.
>
> The cache is set to PARTITIONED mode with backups=0. However, we noticed
> one node has
>
> a large outbound traffic and the other two nodes both have large inbound
> traffic. I printed
>
> out the partition number and local data size for each cache and they are
> almost the same.
>
> We have been struggling with this issue for quite some time and cannot
> figure out what
>
> caused this imbalance.  Note that we did not use client mode. I wonder if
> anybody has
>
> experienced the same issue for 1.5. Thanks.
>
>
>
> Regards,
>
>
>
> Libo Yu
>
>
>


Re: swift store as secondary file system

2017-06-12 Thread Nikolai Tikhonov
Hi, Antonio!

You can implement your own CacheStore which will propagate data to
Swift. Or do you mean some other integration with this product?

On Sat, Jun 10, 2017 at 9:04 AM, Antonio Si  wrote:

> Hi Alexey,
>
> I meant a swift object storage: https://wiki.openstack.org/wiki/Swift
>
> Thanks.
>
> Antonio.
>
>
>
> On Fri, Jun 9, 2017 at 6:38 PM, Alexey Kuznetsov 
> wrote:
>
>> Hi, Antonio!
>>
>> What is a "swift store"?
>> Could you give a link?
>>
>> On Sat, Jun 10, 2017 at 7:32 AM, Antonio Si  wrote:
>>
>>> Hi,
>>>
>>> Is there a secondary file system implementation for a swift store?
>>>
>>> Thanks.
>>>
>>> Antonio.
>>>
>>
>>
>>
>> --
>> Alexey Kuznetsov
>>
>
>


Re: FW: QueryCursor.iterator() hangs forever

2017-06-12 Thread Nikolai Tikhonov
Hello,

That looks strange. Could you share a full example (as a Maven project)?
Which version of Apache Ignite do you use?

On Sat, Jun 10, 2017 at 1:14 PM, Reshma Bochare  wrote:

> The same thing works fine if executed on the server side
>
>
>
> *From:* Reshma Bochare
> *Sent:* Friday, June 09, 2017 4:21 PM
> *To:* 'user@ignite.apache.org' 
> *Subject:* QueryCursor.iterator() hangs forever
>
>
>
> Hi,
>
> I am getting the below error when iterating over a QueryCursor.
>
>
>
>
>
> [2017-06-09 
> 16:12:58,947][ERROR][grid-nio-worker-2-#11%null%][GridDirectParser]
> Failed to read message [msg=GridIoMessage [plc=0, topic=null, topicOrd=-1,
> ordered=false, timeout=0, skipOnTimeout=false, msg=null],
> buf=java.nio.DirectByteBuffer[pos=2 lim=145 cap=32768],
> reader=DirectMessageReader [state=DirectMessageState [pos=0,
> stack=[StateItem [stream=DirectByteBufferStreamImplV2 
> [buf=java.nio.DirectByteBuffer[pos=2
> lim=145 cap=32768], baseOff=1356327248, arrOff=-1, tmpArrOff=0,
> tmpArrBytes=0, msgTypeDone=false, msg=null, mapIt=null, it=null, arrPos=-1,
> keyDone=false, readSize=-1, readItems=0, prim=0, primShift=0, uuidState=0,
> uuidMost=0, uuidLeast=0, uuidLocId=0, lastFinished=true], state=0], null,
> null, null, null, null, null, null, null, null]], lastRead=false],
> ses=GridSelectorNioSessionImpl [selectorIdx=2, queueSize=0,
> writeBuf=java.nio.DirectByteBuffer[pos=0 lim=32768 cap=32768],
> readBuf=java.nio.DirectByteBuffer[pos=2 lim=145 cap=32768], 
> recovery=GridNioRecoveryDescriptor
> [acked=23, resendCnt=0, rcvCnt=17, sentCnt=23, reserved=true, lastAck=16,
> nodeLeft=false, node=TcpDiscoveryNode 
> [id=69869e5b-703f-4a86-8ad9-12fd06dfe624,
> addrs=[0:0:0:0:0:0:0:1, **.**.*.**, 127.0.0.1],
> sockAddrs=[IND-***..***/**.**.8.76:0, /0:0:0:0:0:0:0:1:0, /
> 127.0.0.1:0], discPort=0, order=2, intOrder=2, lastExchangeTime=1497004966182,
> loc=false, ver=1.8.0#20161205-sha1:9ca40dbe, isClient=true],
> connected=true, connectCnt=2, queueLimit=5120, reserveCnt=7],
> super=GridNioSessionImpl [locAddr=/0:0:0:0:0:0:0:1:47100,
> rmtAddr=/0:0:0:0:0:0:0:1:50946, createTime=1497004978927, closeTime=0,
> bytesSent=26, bytesRcvd=182, sndSchedTime=1497004978927,
> lastSndTime=1497004978927, lastRcvTime=1497004978947, readsPaused=false,
> filterChain=FilterChain[filters=[GridNioCodecFilter
> [parser=o.a.i.i.util.nio.GridDirectParser@d1411b, directMode=true],
> GridConnectionBytesVerifyFilter], accepted=true]]]
>
> class org.apache.ignite.IgniteException: Invalid message type: -33
>
> at org.apache.ignite.internal.managers.communication.
> GridIoMessageFactory.create(GridIoMessageFactory.java:805)
>
> at org.apache.ignite.spi.communication.tcp.
> TcpCommunicationSpi$5.create(TcpCommunicationSpi.java:1631)
>
> at org.apache.ignite.internal.direct.stream.v2.
> DirectByteBufferStreamImplV2.readMessage(DirectByteBufferStreamImplV2.
> java:1144)
>
> at org.apache.ignite.internal.direct.DirectMessageReader.
> readMessage(DirectMessageReader.java:311)
>
> at org.apache.ignite.internal.managers.communication.
> GridIoMessage.readFrom(GridIoMessage.java:254)
>
> at org.apache.ignite.internal.util.nio.GridDirectParser.
> decode(GridDirectParser.java:84)
>
> at org.apache.ignite.internal.util.nio.GridNioCodecFilter.
> onMessageReceived(GridNioCodecFilter.java:104)
>
> at org.apache.ignite.internal.
> util.nio.GridNioFilterAdapter.proceedMessageReceived(
> GridNioFilterAdapter.java:107)
>
> at org.apache.ignite.internal.util.nio.
> GridConnectionBytesVerifyFilter.onMessageReceived(
> GridConnectionBytesVerifyFilter.java:123)
>
> at org.apache.ignite.internal.
> util.nio.GridNioFilterAdapter.proceedMessageReceived(
> GridNioFilterAdapter.java:107)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$
> HeadFilter.onMessageReceived(GridNioServer.java:2332)
>
> at org.apache.ignite.internal.util.nio.GridNioFilterChain.
> onMessageReceived(GridNioFilterChain.java:173)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$
> DirectNioClientWorker.processRead(GridNioServer.java:918)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$
> AbstractNioClientWorker.processSelectedKeysOptimized(
> GridNioServer.java:1583)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$
> AbstractNioClientWorker.bodyInternal(GridNioServer.java:1516)
>
> at org.apache.ignite.internal.util.nio.GridNioServer$
> AbstractNioClientWorker.body(GridNioServer.java:1289)
>
> at org.apache.ignite.internal.util.worker.GridWorker.run(
> GridWorker.java:110)
>
> at java.lang.Thread.run(Unknown Source)
>
>
>
>
>
> Configuration is as below:
>
>
>
> <bean id="igniteClientConfiguration" class=
> 

Re: Grid/Cluster unique UUID possible with IgniteUuid?

2017-06-12 Thread Nikolai Tikhonov
Muthu,

Yes, you can use IgniteUuid as a unique ID generator. Which one you use
depends on your requirements: IgniteAtomicSequence takes one long, while
IgniteUuid takes three longs, but getting a new sequence range is a
distributed operation. You need to decide which is more critical for you.
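The trade-off can be illustrated with a plain-Java stand-in (not the Ignite classes themselves): a per-node random UUID plus a locally incremented counter is unique across nodes without any distributed coordination, at the cost of a wider identifier.

```java
import java.util.HashSet;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.atomic.AtomicLong;

public class CompositeIds {
    private static final UUID NODE_ID = UUID.randomUUID(); // one per JVM/node
    private static final AtomicLong SEQ = new AtomicLong();

    /** 192-bit composite id: node UUID (128 bits) + local counter (64 bits). */
    static String nextId() {
        return NODE_ID + "-" + SEQ.incrementAndGet();
    }

    public static void main(String[] args) {
        Set<String> seen = new HashSet<>();
        for (int i = 0; i < 100_000; i++)
            if (!seen.add(nextId()))
                throw new AssertionError("duplicate id");
        System.out.println("generated 100000 unique ids");
    }
}
```

IgniteAtomicSequence would give shorter ids (one long) but pays for a network round-trip each time a node reserves a new range; the composite scheme above never leaves the local JVM.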

On Fri, Jun 9, 2017 at 8:46 PM, Muthu  wrote:

>
> Missed adding this one...i know there is support for ID generation with
> IgniteAtomicSequence @ https://apacheignite.readme.io/docs/id-generator
>
> The question is which one should i use...i want to use this to generate
> unique ids for entities that are to be cached & persisted..
>
> Regards,
> Muthu
>
>
> On Fri, Jun 9, 2017 at 10:27 AM, Muthu  wrote:
>
>> Hi Folks,
>>
>> Is it possible to generate a Grid/Cluster-unique UUID using IgniteUuid?
>> I looked at the source code & the static factory method randomUuid().
>> It looks like it generates one with a java.util.UUID (generated with
>> its randomUUID) & an AtomicLong's incrementAndGet.
>>
>> Can I safely assume that, given it uses a combination of a UUID & a long
>> on the individual VMs that are part of the Grid/Cluster, it will be
>> unique, or is there a better way?
>>
>> Regards,
>> Muthu
>>
>
>


messaging behavior

2017-06-12 Thread shawn.du

Hi,

I am trying Ignite topic-based messaging. I wonder about Ignite's behavior in
the following case:

Client A sends a message with topic T1 to the Ignite server, but there are no
topic listeners at this time. After a while (say 1 or 2 minutes), Client B
comes online and subscribes to topic T1. Will Client B get the message? If
so, how long will the message stay in the Ignite queue, and how do I set
that? And how does it work for ordered messages?

Thanks
Shawn



Re: Node can't start. java.lang.NullPointerException in GridUnsafe.compareAndSwapLong()

2017-06-12 Thread Nikolai Tikhonov
Hi,

This seems to be a known issue with the IBM JDK:
http://www-01.ibm.com/support/docview.wss?uid=swg1IV76872. You need to
update to a JDK build that contains the fix.

On Fri, Jun 9, 2017 at 7:06 PM, Vladimir  wrote:

> Hi,
>
> Having no problems on Windows and Linux application suddenly couldn't start
> on IBM AIX with IBM J9 VM (build 2.8):
>
> Caused by: java.lang.NullPointerException
> at
> org.apache.ignite.internal.util.GridUnsafe.compareAndSwapLong(GridUnsafe.
> java:1228)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.util.OffheapReadWriteLock.
> readLock(OffheapReadWriteLock.java:122)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.pagemem.impl.PageMemoryNoStoreImpl.readLock(
> PageMemoryNoStoreImpl.java:450)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.database.
> tree.util.PageHandler.readLock(PageHandler.java:181)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.database.
> tree.util.PageHandler.readPage(PageHandler.java:152)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.database.DataStructure.read(
> DataStructure.java:319)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.database.
> tree.BPlusTree.findDown(BPlusTree.java:1115)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.database.
> tree.BPlusTree.doFind(BPlusTree.java:1084)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.database.
> tree.BPlusTree.findOne(BPlusTree.java:1048)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl$
> CacheDataStoreImpl.find(IgniteCacheOffheapManagerImpl.java:1143)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.IgniteCacheOffheapManagerImpl.
> read(IgniteCacheOffheapManagerImpl.java:361)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.unswap(
> GridCacheMapEntry.java:384)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet0(
> GridCacheMapEntry.java:588)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheMapEntry.innerGet(
> GridCacheMapEntry.java:474)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> GridPartitionedSingleGetFuture.localGet(GridPartitionedSingleGetFuture
> .java:380)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> GridPartitionedSingleGetFuture.mapKeyToNode(GridPartitionedSingleGetFuture
> .java:326)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> GridPartitionedSingleGetFuture.map(GridPartitionedSingleGetFuture
> .java:211)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.
> GridPartitionedSingleGetFuture.init(GridPartitionedSingleGetFuture
> .java:203)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.distributed.dht.colocated.
> GridDhtColocatedCache.getAsync(GridDhtColocatedCache.java:266)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get0(
> GridCacheAdapter.java:4482)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(
> GridCacheAdapter.java:4463)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.cache.GridCacheAdapter.get(
> GridCacheAdapter.java:1405)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.datastructures.
> DataStructuresProcessor.getAtomic(DataStructuresProcessor.java:586)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.processors.datastructures.
> DataStructuresProcessor.sequence(DataStructuresProcessor.java:396)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
> at
> org.apache.ignite.internal.IgniteKernal.atomicSequence(
> IgniteKernal.java:3419)
> ~[ignite-core-2.0.0.jar!/:2.0.0]
>
> Any workarounds?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Node-can-t-start-java-lang-NullPointerException-in-
> GridUnsafe-compareAndSwapLong-tp13573.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Off-Heap On-Heap in Ignite-2.0.0

2017-06-12 Thread Megha Mittal
Hi Denis,

Thanks for your reply. The video was helpful. But I am still not clear on
one point: if I have 1 GB of data, with on-heap caching enabled, and I give
my off-heap memory region a maximum size of 500 MB and 1 GB to my heap, will
this configuration work? Or is it required to give 1 GB to the off-heap
memory region, regardless of how much I allocate to on-heap?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Off-Heap-On-Heap-in-Ignite-2-0-0-tp13548p13617.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Data Analysis and visualization

2017-06-12 Thread Ishan Jain
I just need to get the price of a stock, which is stored in HDFS with a
timestamp, and make a graph of that stock's price over time.

On Mon, Jun 12, 2017 at 1:03 PM, Jörn Franke  wrote:

> First you need the user requirements - without them, answering your
> questions will be difficult.
>
> > On 12. Jun 2017, at 07:08, ishan-jain  wrote:
> >
> > I am new to Big Data; I have just been working with it for a month.
> > I have HDFS data of stock prices. I need to perform data analysis (maybe
> > some ML) and visualizations (graphs and charts). For that I need MapReduce
> > functions. Which approach should I use?
> > 1. Stream data from IGFS into the Ignite cache and work on it?
> > 2. Use Hive with Tez and LLAP. (Should I use it with Ignite or
> > independently, directly on HDFS? No info available on the net.)
> > 3. Use Presto. (Which is the better variant, Hive or Presto?)
> > 4. Some other fast way with IGFS if possible.
> > 5. Also, which open-source tools should I use to accomplish this?
> > Any help would be appreciated.
> >
> >
> >
> > --
> > View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Data-Analysis-and-visualization-tp13614.html
> > Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Data Analysis and visualization

2017-06-12 Thread Jörn Franke
First you need the user requirements - without them, answering your questions
will be difficult.

> On 12. Jun 2017, at 07:08, ishan-jain  wrote:
> 
> I am new to Big Data; I have just been working with it for a month.
> I have HDFS data of stock prices. I need to perform data analysis (maybe
> some ML) and visualizations (graphs and charts). For that I need MapReduce
> functions. Which approach should I use?
> 1. Stream data from IGFS into the Ignite cache and work on it?
> 2. Use Hive with Tez and LLAP. (Should I use it with Ignite or
> independently, directly on HDFS? No info available on the net.)
> 3. Use Presto. (Which is the better variant, Hive or Presto?)
> 4. Some other fast way with IGFS if possible.
> 5. Also, which open-source tools should I use to accomplish this?
> Any help would be appreciated.
> 
> 
> 
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Data-Analysis-and-visualization-tp13614.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.