Re: Ignite Cassandra AWS test framework

2016-12-01 Thread Igor Rudyak
Ok guys, just updated the documentation.

Igor

On Thu, Dec 1, 2016 at 4:44 PM, Igor Rudyak  wrote:

> Hi Riccardo,
>
> Thanks for noticing this. That's a documentation issue. There were a number
> of refactorings for Ignite-Cassandra, and the documentation is now not 100% up
> to date.
>
> Igor
>
> On Thu, Dec 1, 2016 at 3:13 PM, Riccardo Iacomini <
> riccardo.iacom...@rdslab.com> wrote:
>
>> Hi Val,
>>
>> No, that's not the issue. I would like to follow the AWS ignite-cassandra
>> framework guide on the documentation, but some files the documentation says
>> I should modify are not in the test-package after I build ignite from
>> source.
>>
>> Just to be clear, this is the link to the guide I am actually trying to
>> follow:
>> https://apacheignite-mix.readme.io/docs/aws-infrastructure-deployment
>>
>> On 01 Dec 2016 at 11:56 PM, "vkulichenko" 
>> wrote:
>>
>> Hi Riccardo,
>>
>> Do you mean that you have problems starting Cassandra? If so, I think it's
>> better to ask on their lists :)
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Ignite-Cassandra-AWS-test-framework-tp9324p9347.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>>
>>
>


Re: Fetching large number of records

2016-12-01 Thread Anil
Hi Val,

I think no. Correct me if I am wrong.

I was looking at the following pieces of code:

1. JdbcResultSet#next()

try {
    JdbcQueryTask.QueryResult res =
        loc ? qryTask.call() :
            ignite.compute(ignite.cluster().forNodeId(nodeId)).call(qryTask);

    finished = res.isFinished();

    it = res.getRows().iterator();

    return next();
}

res.getRows() returns all the results, and all of them are in memory. Correct?

2. JdbcQueryTask#call

List<List<Object>> rows = new ArrayList<>();

for (List<Object> row : cursor) {
    List<Object> row0 = new ArrayList<>(row.size());

    for (Object val : row)
        row0.add(JdbcUtils.sqlType(val) ? val : val.toString());

    rows.add(row0);

    if (rows.size() == fetchSize) // If fetchSize is 0 then unlimited
        break;
}

All cursor rows are fetched and sent to #1.

The next() method is just an iterator over the available rows, agreed? So this is not
streaming from server to client. Am I wrong?
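The behavior described above (one task call materializes up to fetchSize rows in memory, and the client-side next() just walks the current page, issuing another task when the page is exhausted) can be modeled with a small self-contained sketch. The names here are illustrative stand-ins, not the actual Ignite classes:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class PagedFetchDemo {
    // Stand-in for the server-side query cursor: 10 rows total.
    static final List<Integer> SERVER_ROWS = new ArrayList<>();
    static { for (int i = 0; i < 10; i++) SERVER_ROWS.add(i); }

    /** Stand-in for JdbcQueryTask.call(): copies at most fetchSize rows into memory. */
    static List<Integer> fetchPage(int offset, int fetchSize) {
        List<Integer> page = new ArrayList<>();
        for (int i = offset; i < SERVER_ROWS.size(); i++) {
            page.add(SERVER_ROWS.get(i));
            if (fetchSize != 0 && page.size() == fetchSize) // fetchSize == 0 means unlimited
                break;
        }
        return page;
    }

    public static void main(String[] args) {
        int fetchSize = 4, offset = 0, pages = 0;
        List<Integer> seen = new ArrayList<>();

        while (offset < SERVER_ROWS.size()) {
            List<Integer> page = fetchPage(offset, fetchSize); // one "task call" per page
            pages++;
            for (Iterator<Integer> it = page.iterator(); it.hasNext(); )
                seen.add(it.next()); // client-side next() only iterates the current page
            offset += page.size();
        }

        System.out.println(pages);        // 3 pages: 4 + 4 + 2 rows
        System.out.println(seen.size());  // all 10 rows observed
    }
}
```

So each page is fully materialized on the client, but the full result set never has to be in memory at once, which is the point Val makes later in the thread.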

Thanks

On 2 December 2016 at 04:11, vkulichenko 
wrote:

> Anil,
>
> While you iterate through the ResultSet on the client, it will fetch
> results
> from server in pages. Are you looking for something else?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Fetching-large-number-of-records-tp9267p9342.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Can ignite-spark support spark 1.6.x?

2016-12-01 Thread Kaiming Wan
hi,

I have found that ignite-spark 1.7.0 doesn't support Spark 2.0.x. Does
it support Spark 1.6.x?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Can-ignite-spark-support-spark-1-6-x-tp9358.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


when can ignite extending C++ API be released ?

2016-12-01 Thread smile
Hi, Igor:


You said before that you are working on extending the C++ API. Can you tell
me when the extended C++ API will be released?


 Thank you very much!




------ Original message ------
From: "Igor Sapego"
Date: 2016-11-08 8:27 PM
To: "user"
Subject: Re: can ignite C++ API support pub/sub or Listener ?



Hi,

There is no such API in the C++ API yet. We are working on
extending the C++ API though, so it may appear in the future.


Best Regards,
Igor



 
On Tue, Nov 8, 2016 at 10:41 AM, smile  wrote:
Hi, all
   I used the Ignite C++ API, and I found that it does not support pub/sub and
also does not support events. Is there any other method by which I can realize pub/sub
and event notification?


  Thank you very much!

Re: In-Memory SQL Grid

2016-12-01 Thread Denis Magda
Igniters,

I’ve finished reworking, enhancing and expanding our SQL Grid documentation, 
preparing it for the 1.8 release. 

If you recall, it was previously a single-page documentation, and now it’s a big 
section that contains, I hope, all the valuable knowledge we can share with 
Ignite users. Just open the main page 
(https://apacheignite.readme.io/docs/sql-grid) and see what we have there. The 
only thing still missing is the “Distributed DML” documentation, which will 
be made public once 1.8 is released.

Sergi and other SQL committers & contributors, I would encourage you to look over 
the changed documentation and correct or amend it if I’m wrong somewhere.

—
Denis 

> On Nov 23, 2016, at 5:16 PM, Dmitriy Setrakyan  wrote:
> 
> I like SQL Grid, seems consistent. I think that DML support is huge for the
> project!
> 
> D.
> 
> On Wed, Nov 23, 2016 at 12:08 PM, Denis Magda  wrote:
> 
>> Community,
>> 
>> Frankly speaking, Ignite’s SQL support goes far beyond the notion of Data
>> Grid starting from Apache Ignite 1.8.
>> SQL in Ignite is no longer about distributed ANSI-99 SELECT queries only.
>> 
>> It will be about the ability to connect to an Ignite Cluster not only from
>> Java, .NET and C++ sides, that have their own SQL APIs, but, in general,
>> from your favorite programming language or analytical tool by the usage of
>> Ignite’s JDBC and ODBC drivers.
>> 
>> Even after you’re connected to the cluster from the JDBC or ODBC side, you’re
>> no longer limited to distributed SQL queries. You can freely and easily
>> modify the data with INSERT, UPDATE and DELETE statements thanks to the
>> comprehensive DML support contributed by Alexander Paschenko in version
>> 1.8. To prove this, Igor Sapego issued a guide <
>> https://apacheignite.readme.io/docs/pdo-interoperability> on how to start
>> working with an Ignite cluster from PHP, which doesn’t have its own API
>> implementation on the Ignite side.
>>
>> All in all, this means that the SQL feature set deserves its own component,
>> which I propose to call In-Memory SQL Grid to conform to the naming of the
>> rest of the big components like Data Grid, Compute Grid and Service Grid.
>> 
>> I’ve already started reworking our documentation and will spend some time
>> by enriching and expanding it. However, this is a starting point:
>> https://apacheignite.readme.io/docs/sql-grid
>> 
>> Feel free to share your thoughts regarding the naming or content of the
>> doc in general.
>> 
>> —
>> Denis
>> 
>> [1] https://apacheignite.readme.io/docs/sql-grid



Re: SparkRDD with Ignite

2016-12-01 Thread vkulichenko
Vidhya,

Are you using Spring Cache? Because that's what this issue is about. I'm not
sure how this is related to your use case.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SparkRDD-with-Ignite-tp9160p9354.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Ignite Cassandra AWS test framework

2016-12-01 Thread Riccardo Iacomini
Hi Val,

No, that's not the issue. I would like to follow the AWS ignite-cassandra
framework guide on the documentation, but some files the documentation says
I should modify are not in the test-package after I build ignite from
source.

Just to be clear, this is the link to the guide I am actually trying to
follow:
https://apacheignite-mix.readme.io/docs/aws-infrastructure-deployment

On 01 Dec 2016 at 11:56 PM, "vkulichenko" 
wrote:

Hi Riccardo,

Do you mean that you have problems starting Cassandra? If so, I think it's
better to ask on their lists :)

-Val



--
View this message in context: http://apache-ignite-users.
70518.x6.nabble.com/Ignite-Cassandra-AWS-test-framework-tp9324p9347.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Issue with SQL Query Performance - Post Rebalance Operation

2016-12-01 Thread vkulichenko
Ganesh,

I'm not sure I understand what you mean here. There is no such delay; indexes
on the new node are created right away and updated during the rebalancing. There
is no way indexes are temporarily unused due to a topology change. If there
is a slowdown, most likely it's caused by the rebalancing itself, or
there is something else that you're missing.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Issue-with-SQL-Query-Performance-Post-Rebalance-Operation-tp9308p9350.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Problem in forming cluster with multi-jvm in single machine

2016-12-01 Thread vkulichenko
Hi,

First of all, remove the ipFinder.setShared(true); line. With it removed, nodes
will still join each other when VmIpFinder is used.

If this doesn't help, please look through the logs and check which address
and port discovery binds to, and compare them to the addresses you provide in the IP
finder. This should give you some pointers. You can also attach the logs
here and we will try to assist.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Problem-in-forming-cluster-with-multi-jvm-in-single-machine-tp9327p9349.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cache stopped exception with KafkaStreamer

2016-12-01 Thread vkulichenko
Anil,

This means that you stopped the embedded client node before it loaded all
the data into the cache. Ignite and IgniteDataStreamer provided via
setIgnite() and setStreamer methods should not be closed before the Kafka
streamer is closed.

See the example here: https://apacheignite-mix.readme.io/docs/kafka-streamer
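The lifecycle rule above (close the Kafka streamer first, and only then the IgniteDataStreamer and Ignite instances it was given) maps naturally onto nested try-with-resources, which always closes in reverse order of creation. The sketch below uses stand-in resources to show that ordering, not the real Ignite or Kafka API:

```java
import java.util.ArrayList;
import java.util.List;

public class CloseOrderDemo {
    static final List<String> CLOSED = new ArrayList<>();

    // A toy resource that records its own name when closed.
    static AutoCloseable resource(String name) {
        return () -> CLOSED.add(name);
    }

    public static void main(String[] args) throws Exception {
        // Outermost resources are created first and closed last, mirroring:
        // Ignite -> IgniteDataStreamer -> KafkaStreamer.
        try (AutoCloseable ignite = resource("ignite");
             AutoCloseable dataStreamer = resource("dataStreamer");
             AutoCloseable kafkaStreamer = resource("kafkaStreamer")) {
            // ... stream data while all three are alive ...
        }
        // try-with-resources closes in reverse declaration order.
        System.out.println(CLOSED); // [kafkaStreamer, dataStreamer, ignite]
    }
}
```

Structuring the real setup the same way makes it impossible to stop the embedded client node while the Kafka streamer is still loading data.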

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cache-stopped-exception-with-KafkaStreamer-tp9326p9348.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: about GAR files and task deployment

2016-12-01 Thread vkulichenko
Hi,

Please properly subscribe to the mailing list so that the community can
receive email notifications for your messages. To subscribe, send empty
email to user-subscr...@ignite.apache.org and follow simple instructions in
the reply.


xjg125 wrote
> I test the DeploymentSpi  ,
> referring to the page with information about GAR files and task deployment at
> https://apacheignite.readme.io/docs/deployment-spi ...
> 
> but have Exception  like this:
> 
> Exception in thread "main" class
> org.apache.ignite.IgniteDeploymentException: Unknown task name or failed
> to auto-deploy task (was task (re|un)deployed?):
> MapExampleCharacterCountTask
>   at
> org.apache.ignite.internal.util.IgniteUtils$8.apply(IgniteUtils.java:794)
>   at
> org.apache.ignite.internal.util.IgniteUtils$8.apply(IgniteUtils.java:792)
>   at
> org.apache.ignite.internal.util.IgniteUtils.convertException(IgniteUtils.java:913)
>   at
> org.apache.ignite.internal.IgniteComputeImpl.execute(IgniteComputeImpl.java:270)
>   at deploytest.Test1.main(Test1.java:33)
> Caused by: class
> org.apache.ignite.internal.IgniteDeploymentCheckedException: Unknown task
> name or failed to auto-deploy task (was task (re|un)deployed?):
> MapExampleCharacterCountTask
>   at
> org.apache.ignite.internal.processors.task.GridTaskProcessor.startTask(GridTaskProcessor.java:515)
>   at
> org.apache.ignite.internal.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:447)
>   at
> org.apache.ignite.internal.IgniteComputeImpl.execute(IgniteComputeImpl.java:267)
>   ... 1 more
> 
> 
> I use the http protocol:
> deploymentSpi.setUriList(Arrays.asList("http://freq=5000@127.0.0.1/deptest/deployment.html"));
> 
> I checked the user/xxx/AppData\Local\Temp\gg.uri.deployment.tmp directory;
> the GAR file has been downloaded and extracted.
> 
> in logfile i find :
> 
> [14:36:40,943][INFO][grid-uri-scanner-#2%null%][UriDeploymentSpi] Found
> new or updated GAR units
> [uri=http://freq=5000@127.0.0.1/deptest/deployment.html/test2.gar,
> file=C:\Users\XieJG\AppData\Local\Temp\gg.uri.deployment.tmp\06747073-8f10-41d4-998e-45cab8030a69\test23190349051081653991.gar,
> tstamp=1480567856000]
> [14:36:40,972][WARNING][grid-uri-scanner-#2%null%][UriDeploymentSpi]
> Processing deployment without descriptor file (it will cause full
> classpath scan) [path=META-INF/ignite.xml,
> gar=C:\Users\XieJG\AppData\Local\Temp\gg.uri.deployment.tmp\06747073-8f10-41d4-998e-45cab8030a69\dirzip_test23190349051081653991.gar]
> 
> ---
> 
> I can't find the format of META-INF/ignite.xml. Can you give me an example,
> including META-INF/ignite.xml and the task class?
> 
> thanks

What is the class name of your task? How do you execute it?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/about-GAR-files-and-task-deployment-tp9322p9346.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: SparkRDD with Ignite

2016-12-01 Thread vkulichenko
Vidhya,

I'm not aware of this issue, but it sounds pretty serious. Are there any
tickets and/or discussions about this you can point me to?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/SparkRDD-with-Ignite-tp9160p9345.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Unable to establish connection to Ignite server-System Thread Pool Issue

2016-12-01 Thread vkulichenko
Ganesh,

Can you grab a thread dump? What are these threads doing?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Unable-to-establish-connection-to-Ignite-server-System-Thread-Pool-Issue-tp9294p9344.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Objects in heap dump corresponding to binary and de-serialized version of entries

2016-12-01 Thread vkulichenko
BinaryObjectImpl is a binary object, while deserialized objects will be shown
as your classes. Deserialized objects are not stored by default. However, if
you have queries enabled, and do a read on server side (i.e. send a closure
there to do a local get), then the deserialized object will be cached inside
the binary object. And this is actually a bug hopefully which will be fixed
soon: https://issues.apache.org/jira/browse/IGNITE-4293

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Objects-in-heap-dump-corresponding-to-binary-and-de-serialized-version-of-entries-tp9321p9343.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Fetching large number of records

2016-12-01 Thread vkulichenko
Anil,

While you iterate through the ResultSet on the client, it will fetch results
from server in pages. Are you looking for something else?

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Fetching-large-number-of-records-tp9267p9342.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Failed to process message: Creating partition which does not belong to local node

2016-12-01 Thread vkulichenko
Hi Yuci,

Basically, this error means that the affinity function on node A gave node B as
the primary node for a key. Node A then sent a request to node B, but the affinity
function on node B doesn't think that this key belongs locally. In other words, the
affinity function worked inconsistently on different nodes.

Usually this is caused by an incorrect hashCode() implementation in the key
class (as pointed out in the message). Unfortunately, the key itself is not
printed out, so I would recommend putting a breakpoint and going back through
the trace to see for which key the partition number is calculated
incorrectly.
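To see why an inconsistent hashCode() breaks affinity: the partition of a key is derived from its hash, so if the hash depends on mutable state (or differs between JVMs), two nodes can map the same key to different partitions. A minimal sketch follows; the partition function here is a deliberate simplification, the real Ignite affinity function is more involved:

```java
public class AffinityDemo {
    static final int PARTS = 1024;

    /** Simplified partition assignment: hash modulo partition count. */
    static int partition(Object key) {
        return Math.abs(key.hashCode() % PARTS);
    }

    /** A broken key class: its hashCode changes when a mutable field changes. */
    static class BadKey {
        int id;
        int mutableState;

        BadKey(int id, int mutableState) {
            this.id = id;
            this.mutableState = mutableState;
        }

        // Bug: includes mutable state in the hash.
        @Override public int hashCode() { return 31 * id + mutableState; }
    }

    public static void main(String[] args) {
        BadKey key = new BadKey(42, 0);
        int p1 = partition(key);   // partition computed on "node A"
        key.mutableState = 7;      // state changes before the request reaches "node B"
        int p2 = partition(key);   // partition computed on "node B"
        System.out.println(p1 == p2); // false: same key, two different partitions
    }
}
```

A correct key would hash only immutable identity fields, so every node computes the same partition for it.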

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Failed-to-process-message-Creating-partition-which-does-not-belong-to-local-node-tp9332p9341.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Cluster hung after a node killed

2016-12-01 Thread vkulichenko
Hi Sam,

Can you create a small project on GitHub with instructions how to run? I
just tried to repeat the same test, but everything works fine for me.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p9340.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Failed to wait for initial partition map exchange

2016-12-01 Thread vkulichenko
Hi Yuci,

1.8 is in the final testing/bug-fixing stage, so it should become available
soon. However, "Failed to wait for initial partition map exchange" is a very
generic error and can be caused by different circumstances. I would
recommend building from the 1.8 branch and running your tests on it to check
whether the issue is reproduced or not.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Failed-to-wait-for-initial-partition-map-exchange-tp6252p9338.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Unable to create cluster of Apache Ignite Server Containers running on individual VMs

2016-12-01 Thread Piali Mazumder Nath
Hi Val,



Thanks for pointing out the errors.

Since I was getting errors related to GC pauses, I had set offHeapMaxMemory
to 0 as per the link below:

http://apacheignite.gridgain.org/docs/performance-tips#tune-off-heap-memory

But now I have set it to -1.



After making the required changes and exposing the ports while creating the
container, I do not see those errors. Also, I am able to ping the Ignite
ports.

However, the servers are still not joining. I am getting the errors below on the
VM1 container, which is started first. Initially the other node joins, but then
it gets removed from the cluster.

[19:01:19,827][INFO][disco-event-worker-#44%null%][GridDiscoveryManager] *Added
new node to topology**:* TcpDiscoveryNode
[id=13e5b190-fa8e-4c8c-be62-9f09ae6fadb9, addrs=[0:0:0:0:0:0:0:1%lo,
127.0.0.1, 172.17.0.1, 172.18.0.1, 172.20.29.33], sockAddrs=[/
172.20.29.33:47500, /172.17.0.1:47500, /0:0:0:0:0:0:0:1%lo:47500, /
127.0.0.1:47500, /172.18.0.1:47500], discPort=47500, order=14, intOrder=8,
lastExchangeTime=1480618879814, loc=false,
ver=1.7.0#20160801-sha1:383273e3, isClient=false]

[19:01:19,828][INFO][disco-event-worker-#44%null%][GridDiscoveryManager]
*Topology
snapshot [ver=15, servers=2, clients=0, CPUs=4, heap=2.0GB]*

[19:01:19,829][WARNING][disco-event-worker-#44%null%][GridDiscoveryManager*]
Node FAILED:* TcpDiscoveryNode [id=13e5b190-fa8e-4c8c-be62-9f09ae6fadb9,
addrs=[0:0:0:0:0:0:0:1%lo, 127.0.0.1, 172.17.0.1, 172.18.0.1,
172.20.29.33], sockAddrs=[/172.20.29.33:47500, /172.17.0.1:47500,
/0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500, /172.18.0.1:47500],
discPort=47500, order=14, intOrder=8, lastExchangeTime=1480618879814,
loc=false, ver=1.7.0#20160801-sha1:383273e3, isClient=false]

[19:01:19,829][INFO][disco-event-worker-#44%null%][GridDiscoveryManager]
*Topology
snapshot [ver=15, servers=1, clients=0, CPUs=2, heap=1.0GB]*

[19:01:19,846][INFO][exchange-worker-#47%null%][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=14, minorTopVer=0], evt=NODE_JOINED,
node=13e5b190-fa8e-4c8c-be62-9f09ae6fadb9]

[19:01:19,865][INFO][exchange-worker-#47%null%][GridCachePartitionExchangeManager]
Skipping rebalancing (nothing scheduled) [top=AffinityTopologyVersion
[topVer=15, minorTopVer=0], evt=NODE_FAILED,
node=13e5b190-fa8e-4c8c-be62-9f09ae6fadb9]



I am getting the errors below on the VM2 container. Although VM1 logs a message
that this container has joined, there is no corresponding join message in VM2's own log:

[18:59:27,919][WARNING][main][TcpDiscoverySpi] *Node has not been connected
to topology and will repeat join process.* Check remote nodes logs for
possible error messages. Note that large topology may require significant
time to start. Increase 'TcpDiscoverySpi.networkTimeout' configuration
property if getting this message on the starting nodes
[networkTimeout=2]





Please find below my current configuration. [Since the issue persisted, I
have also included the non-loopback local IPs as well.]

I have retained the loopback address, or else the node does not come up.



[The Spring XML bean configuration was stripped by the mailing-list archive;
only the discovery IP finder address list survives:]

127.0.0.1:47500..47509
172.26.116.67:47500..47509
172.18.0.1:47500..47509
172.17.0.1:47500..47509
172.20.29.33:47500..47509

Can you please help?

Do we need to use address resolver?


On Wed, Nov 30, 2016 at 5:12 PM, vkulichenko 
wrote:

> Hi,
>
> This configuration is incorrect:
>
> 127.0.0.1:47100..47509
> 172.26.116.67:47100..47509
>
> First of all, since you're building a distributed cluster, you should not
> use loopback here. Put real addresses that discovery component binds to.
> Second of all, 47100 is a port for communication, not for discovery, so
> ranges should be 47500..47509 instead.
>
> In addition, please check not only that you can ping the containers from each
> other, but also that you can telnet to the Ignite ports after the first node is
> started. It sometimes happens with Docker that you have to explicitly open ports.
>
> And finally, setting offHeapMaxMemory to zero actually enables off-heap storage
> and makes it unlimited. To disable it, it should be set to -1, but this is
> actually the default value, so you don't need to do this either.
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Unable-to-create-cluster-of-Apache-
> 
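Put together, Val's corrections in this thread would correspond to a discovery section along these lines. This is a sketch using the non-loopback addresses mentioned above, not a verified configuration for this environment:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <property name="ipFinder">
                <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                    <property name="addresses">
                        <list>
                            <!-- Real (non-loopback) addresses, discovery port range 47500..47509. -->
                            <value>172.26.116.67:47500..47509</value>
                            <value>172.20.29.33:47500..47509</value>
                        </list>
                    </property>
                </bean>
            </property>
        </bean>
    </property>
    <!-- offHeapMaxMemory is a per-cache setting; -1 (disabled) is already the
         default, so it can simply be omitted from the cache configuration. -->
</bean>
```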

Re: Cluster hung after a node killed

2016-12-01 Thread javastuff....@gmail.com
Val,

I have reproduced this with a simple program:
1. Node 1 - run the example ExampleNodeStartup.
2. Node 2 - run a program which creates a transactional cache and adds 100K
simple entries:
cfg.setCacheMode(CacheMode.PARTITIONED);
cfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
cfg.setMemoryMode(CacheMemoryMode.OFFHEAP_TIERED);
cfg.setSwapEnabled(false);
cfg.setBackups(0);
3. Node 3 - run a program which takes a lock (cache.lock(key)).
4. Kill Node 3 before it can unlock.
5. Node 4 - run a program which tries to get cached data.

Node 4 is not able to join the cluster; it hangs. In fact, the complete cluster
hangs, and any operation from Node 2 also hangs.

-Sam



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Cluster-hung-after-a-node-killed-tp8965p9337.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: MapReduce Job stuck when using ignite hadoop accelerator

2016-12-01 Thread Andrey Mashenkov
Hi Kaiming,

 ^-- Public thread pool [active=80, idle=0, qSize=944]
There is a long queue and 80 busy threads that seem to make no progress.
It looks like all of the threads are blocked. Please attach a thread dump.
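A thread dump is usually taken with jstack <pid>; if attaching a tool to the JVM is inconvenient, a dump can also be captured programmatically through the standard management API. A minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpDemo {
    public static void main(String[] args) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        // lockedMonitors/lockedSynchronizers = true shows what each thread holds
        // or waits on, which is what matters when diagnosing blocked pools.
        ThreadInfo[] infos = bean.dumpAllThreads(true, true);
        for (ThreadInfo info : infos)
            System.out.println(info.getThreadName() + ": " + info.getThreadState());
    }
}
```

In a hang like the one described, the interesting threads are the public-pool workers stuck in BLOCKED or WAITING state; their stack traces (available from ThreadInfo.getStackTrace()) point at the operation holding them up.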


On Tue, Nov 29, 2016 at 6:40 AM, Kaiming Wan <344277...@qq.com> wrote:

> I can find this WARN in the logs. The hang should be caused by long-running
> cache operations. How do I locate what causes the long-running cache
> operations? My map-reduce job runs successfully without the Ignite Hadoop
> accelerator.
>
> [GridCachePartitionExchangeManager] Found long running cache operations,
> dump IO statistics.
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/MapReduce-Job-stuck-when-using-ignite-
> hadoop-accelerator-tp9216p9251.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov
Tel. +7-921-932-61-82


Re: Cache size in Partitioned mode as seen in visor

2016-12-01 Thread Alexey Kuznetsov
Hi, Mohammad!

Tomorrow I will try with examples-ignite.xml and 4 server nodes with 1
backup, and share my findings.

On Thu, Dec 1, 2016 at 1:36 AM, Mohammad Shariq 
wrote:

> Hi Alex, please let me know if you got my question, or whether I need to explain
> more.
>
> For now, can you please validate my understanding about cache size with
> backups in partitioned mode: does the backup count figure into the visor cache size?
>
> On Wednesday, November 30, 2016, Mohammad Shariq 
> wrote:
>
>> Hi Alex,
>> I will run the same example on my personal laptop. Currently, because of
>> security restrictions, I cannot copy and paste the visor output and config
>> XML outside my corporate intranet, so I will probably replicate the scenario
>> on my personal computer.
>>
>> For now, can you please validate my understanding about cache size with
>> backups in partitioned mode: does the backup count figure into the visor cache size?
>>
>>
>> I am using:
>>
>> 1. the examples-ignite.xml that is packaged with the Ignite distribution,
>> and
>> 2. cache -a
>> 3. The visor output says a cache size of around 7500.
>>
>> On Wednesday, November 30, 2016, Alexey Kuznetsov 
>> wrote:
>>
>>> Hi, Mohammad!
>>>
>>> Could you provide.
>>>
>>> 1) Cache config.
>>> 2) Visor command you used.
>>> 3) Visor output.
>>>
>>> We will try to help.
>>>
>>> On Wed, Nov 30, 2016 at 9:20 PM, Mohammad Shariq 
>>> wrote:
>>>
 I have a partitioned cache with a size of 15000 and 4 server nodes with a backup
 count of 1. Through visor, each node is showing a size of around 7500.
 Also, if I decrease or increase the backup count, the cache size on each node
 stays the same.

 I think that, since it is partitioned mode, the size of each node should be
 approximately 15000/4.

 On Wednesday, November 30, 2016, Mohammad Shariq 
 wrote:

> I think I am not getting the correct cache size of nodes in
> Partitioned mode.
>
> Can someone please help me understand how the cache size varies
> depending on the number of backups, and whether the size shown in visor includes
> backup copies or not...
>

>>>
>>>
>>> --
>>> Alexey Kuznetsov
>>>
>>


-- 
Alexey Kuznetsov
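One hypothesis consistent with the numbers in this thread is that the per-node size visor reports counts both primary and backup entries, i.e. total * (1 + backups) / nodes. The quick check below treats that as an assumption to be verified (which is presumably what Alexey's experiment will confirm or refute); note that it does not explain the reported observation that the per-node size stays the same when the backup count changes:

```java
public class CacheSizeDemo {
    public static void main(String[] args) {
        int totalEntries = 15000;
        int nodes = 4;
        int backups = 1;

        // Primary copies only: what one might naively expect per node.
        int primaryOnly = totalEntries / nodes;

        // Primary + backup copies: matches the value observed in visor.
        int withBackups = totalEntries * (1 + backups) / nodes;

        System.out.println(primaryOnly);  // 3750
        System.out.println(withBackups);  // 7500
    }
}
```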


Re: Package jar file path for sql Query Entity

2016-12-01 Thread Vladislav Pyatkov
Hi,

Ignite stores data in the form specified by the marshaller. The binary marshaller
is configured by default.
When the binary marshaller is used, all internal work is done on the binary
representation (hence there is no need to store Java classes on the
server).
If you want to use the binary view in your own code, you should use the "withKeepBinary"
cache method [1].
If your classes do not implement Binarylizable, the marshaller will read the
values through reflection and build the binary view.
Is that a clear explanation?

[1]:
https://apacheignite.readme.io/docs/binary-marshaller#binaryobject-cache-api
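The distinction above can be modeled in a self-contained way: entries always live in the cache in serialized (binary) form; a plain get() deserializes on the way out, while a keep-binary view hands back the stored form untouched. The sketch below uses plain Java serialization and toy types as stand-ins, not the Ignite API:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class KeepBinaryDemo {
    /** Toy cache: values are always stored serialized, as with the binary marshaller. */
    static class ToyCache {
        final Map<String, byte[]> store = new HashMap<>();
        final boolean keepBinary;

        ToyCache(boolean keepBinary) { this.keepBinary = keepBinary; }

        void put(String key, Serializable val) throws Exception {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(val);
            }
            store.put(key, bos.toByteArray());
        }

        Object get(String key) throws Exception {
            byte[] bytes = store.get(key);
            if (keepBinary)
                return bytes; // "withKeepBinary": hand back the stored binary form
            try (ObjectInputStream ois =
                     new ObjectInputStream(new ByteArrayInputStream(bytes))) {
                return ois.readObject(); // plain get(): deserialize on the way out
            }
        }
    }

    public static void main(String[] args) throws Exception {
        ToyCache plain = new ToyCache(false);
        plain.put("k", "hello");
        System.out.println(plain.get("k"));                    // hello

        ToyCache binary = new ToyCache(true);
        binary.put("k", "hello");
        System.out.println(binary.get("k") instanceof byte[]); // true
    }
}
```

The design point: storage is always binary; the keepBinary flag only decides whether deserialization happens at read time.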

On Thu, Dec 1, 2016 at 5:02 PM, rishi007bansod 
wrote:

> Thanks. Importing the same package on both the server and client side solved my
> problem (initially I wrote the class in the client node instead of importing it).
>
> But I am still unclear about how Ignite stores data:
> case I:  in both binary and deserialized form, de-serializing data when
> required
> case II: only in binary format, in which case there is no need for
> de-serialization
>
> Can you please explain the default behavior of Ignite for storing
> data (is it case I or case II)? Also, I have configured the cache from the XML file
> that I attached, and I have used the Binarylizable interface, so does this
> mean my objects are stored in binary format only?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Getting-null-query-output-after-using-
> QueryEntity-tp9217p9331.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Re: Failed to wait for initial partition map exchange

2016-12-01 Thread yucigou
Hi Vladislav,

We have the same issue, "Failed to wait for initial partition map exchange",
when stopping or restarting one of the nodes in our cluster. In other words,
when stopping or restarting just one of the nodes, the whole cluster screws
up, i.e., the whole cluster stops working.

I just had a look at the ticket you mentioned,
https://issues.apache.org/jira/browse/IGNITE-3748, and it is said to have been
fixed for version 1.8. Do you know when version 1.8 will be released?
This has been happening in our production environment, and we really want this
fix to be applied as soon as possible.

Many thanks,
Yuci



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Failed-to-wait-for-initial-partition-map-exchange-tp6252p9333.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Failed to process message: Creating partition which does not belong to local node

2016-12-01 Thread yucigou
Hi,

Our app is getting the following error messages, and I wonder what they are about.
Should I go and check the key.hashCode() implementations? The log does not
point out which key it's talking about. What would this
message indicate: something seriously wrong with the cluster?

Many thanks,
Yuci

[12:55:32,875][ERROR][sys-#30%null%][GridCacheIoManager] Failed to process
message [senderId=c08bb6cc-c737-4122-bfcc-3b96f9f19b64, messageType=class
o.a.i.i.processors.cache.distributed.dht.atomic.GridNearAtomicUpdateRequest]
class
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtInvalidPartitionException
[part=13, msg=Creating partition which does not belong to local node (often
may be caused by inconsistent 'key.hashCode()' implementation) [part=13,
topVer=AffinityTopologyVersion [topVer=5, minorTopVer=0
], this.topVer=AffinityTopologyVersion [topVer=5, minorTopVer=0]]]
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.localPartition(GridDhtPartitionTopologyImpl.java:713)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtPartitionTopologyImpl.localPartition(GridDhtPartitionTopologyImpl.java:656)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridCachePartitionedConcurrentMap.localPartition(GridCachePartitionedConcurrentMap.java:68)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridCachePartitionedConcurrentMap.putEntryIfObsoleteOrAbsent(GridCachePartitionedConcurrentMap.java:84)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.entry0(GridCacheAdapter.java:971)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.entryEx(GridCacheAdapter.java:939)
at
org.apache.ignite.internal.processors.cache.distributed.dht.GridDhtCacheAdapter.entryEx(GridDhtCacheAdapter.java:404)
at
org.apache.ignite.internal.processors.cache.GridCacheAdapter.entryEx(GridCacheAdapter.java:930)
at
org.apache.ignite.internal.processors.cache.GridCacheTtlManager.expire(GridCacheTtlManager.java:143)
at
org.apache.ignite.internal.processors.cache.GridCacheUtils.unwindEvicts(GridCacheUtils.java:987)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.processMessage(GridCacheIoManager.java:775)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.onMessage0(GridCacheIoManager.java:353)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.handleMessage(GridCacheIoManager.java:277)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager.access$000(GridCacheIoManager.java:88)
at
org.apache.ignite.internal.processors.cache.GridCacheIoManager$1.onMessage(GridCacheIoManager.java:231)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1238)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processRegularMessage0(GridIoManager.java:866)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$1700(GridIoManager.java:106)
at
org.apache.ignite.internal.managers.communication.GridIoManager$5.run(GridIoManager.java:829)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Failed-to-process-message-Creating-partition-which-does-not-belong-to-local-node-tp9332.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Package jar file path for sql Query Entity

2016-12-01 Thread rishi007bansod
Thanks. Importing the same package on both the server and client side solved my
problem (initially I wrote the class in the client node instead of importing it).

But I am still unclear about how Ignite stores data:
case I:  in both binary and deserialized form, de-serializing data when
required
case II: only in binary format, in which case there is no need for
de-serialization

Can you please explain the default behavior of Ignite for storing
data (is it case I or case II)? Also, I have configured the cache from the XML file
that I attached, and I have used the Binarylizable interface, so does this
mean my objects are stored in binary format only?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Getting-null-query-output-after-using-QueryEntity-tp9217p9331.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Objects in heap dump corresponding to binary and de-serialized version of entries

2016-12-01 Thread rishi007bansod
Hi,
Is there any way to differentiate between deserialized and binary objects
in a heap dump? I want to find out *how many deserialized and how many
binary objects* are present in the heap dump. Also, is there any way to
store data in binary format only, rather than keeping deserialized copies?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Objects-in-heap-dump-corresponding-to-binary-and-de-serialized-version-of-entries-tp9321p9330.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Fetching large number of records

2016-12-01 Thread Anil
Hi Val,

Yes, results are fetched in pages. But each executed query will return
the results of one page, correct?

I am checking whether there is a way to make the direct query stream
available to the client through the JDBC driver. Hope this is clear.

Thanks

On 1 December 2016 at 04:34, vkulichenko 
wrote:

> Anil,
>
> JDBC driver actually uses the same Ignite API under the hood, so the
> results
> are fetched in pages as well.
>
> As for the query parser question, I didn't quite understand what you mean.
> Can you please give more details?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Fetching-large-number-of-records-tp9267p9313.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>
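For illustration, here is a minimal, self-contained sketch of the page-wise
accumulation described above: rows are collected into pages of at most
`fetchSize` elements, with `fetchSize == 0` meaning "no limit". This is plain
Java with no Ignite APIs; the `paginate` helper and class name are
hypothetical, not part of the Ignite JDBC driver.

```java
import java.util.ArrayList;
import java.util.List;

public class PagingSketch {
    // Accumulate rows page by page; a page is closed once it reaches
    // fetchSize. fetchSize == 0 means everything goes into a single page.
    static List<List<Integer>> paginate(List<Integer> rows, int fetchSize) {
        List<List<Integer>> pages = new ArrayList<>();
        List<Integer> page = new ArrayList<>();
        for (Integer row : rows) {
            page.add(row);
            if (fetchSize > 0 && page.size() == fetchSize) {
                pages.add(page);
                page = new ArrayList<>();
            }
        }
        if (!page.isEmpty())
            pages.add(page);
        return pages;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++)
            rows.add(i);
        // 10 rows with fetchSize 4 yield pages of 4 + 4 + 2 elements.
        System.out.println(paginate(rows, 4).size());
    }
}
```

The point of the sketch is only that the client sees one page of rows per
round trip, rather than a single cursor stream over the whole result set.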


Re: Package jar file path for sql Query Entity

2016-12-01 Thread Vladislav Pyatkov
Hi,

I think that in your case (since you configured "QueryEntity") it is not
necessary to copy the classes (schema.jar) to the server. Ignite will use
the binary[1] format and execute SQL (and other queries) without
deserialization.

But I definitely do not understand why it does not work...
Could you verify whether all data structures are created as you expect?
Please check it using the H2 debug console[2].

[1]: https://apacheignite.readme.io/docs/binary-marshaller
[2]:
https://apacheignite.readme.io/v1.7/docs/performance-and-debugging#using-h2-debug-console

On Tue, Nov 29, 2016 at 7:17 AM, rishi007bansod 
wrote:

> I have added warehouse.java, containing the class warehouse, to the package
> schema. That is why I wrote schema.warehouse. The query I am executing is
> SELECT * from warehouse w WHERE w.w_id = 1, and the entry corresponding to
> w_id = 1 is present in the warehouse cache. schema.jar is my jar file.
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Getting-null-query-output-after-using-QueryEntity-tp9217p9252.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Vladislav Pyatkov


Problem in forming cluster with multi-jvm in single machine

2016-12-01 Thread hitansu
I have written a program that starts Ignite in separate JVMs to form a
cluster. Ignite starts, but the nodes do not form a cluster; the following
log message is printed:

  Failed to connect to any address from IP finder (will retry to join
topology every 2 secs): [/10.150.113.61:47500]

Below is my IgniteConfiguration setting.

TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
String hostAddress = InetAddress.getLocalHost().getHostAddress();
List<String> address = Arrays.asList(hostAddress);
// String portRangeStr = "5701-5800";
String portRangeStr = "47500-47509";
String[] portRange = portRangeStr.replace(" ", "").split("-");

discoSpi.setLocalPortRange(Integer.parseInt(portRange[1]) - Integer.parseInt(portRange[0]) + 1);
discoSpi.setLocalPort(Integer.parseInt(portRange[0]));

ipFinder.setAddresses(address);
ipFinder.setShared(true);
// ipFinder.setAddresses(Collections.singletonList("0.0.0.0:5701..5800"));
// ipFinder.setAddresses(Collections.singletonList("192.168.127.1:5701..5800"));

discoSpi.setIpFinder(ipFinder);

I want to start Ignite in separate JVMs to emulate a real cluster.
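One likely cause of the "Failed to connect to any address from IP finder" loop
is that the IP finder is given only the bare host address, so only the default
port is probed while the other JVMs listen on other ports of the range.
TcpDiscoveryVmIpFinder accepts addresses in the "host:portFrom..portTo" range
syntax. Below is a minimal, self-contained sketch of building such an address
(plain Java; the `finderAddresses` helper and class name are hypothetical):

```java
import java.net.InetAddress;
import java.util.Collections;
import java.util.List;

public class DiscoveryAddressSketch {
    // Build an IP-finder address in Ignite's "host:portFrom..portTo" form.
    // For a multi-JVM cluster on a single machine, every node should scan
    // the whole discovery port range, not just the first port.
    static List<String> finderAddresses(String host, int portFrom, int portTo) {
        return Collections.singletonList(host + ":" + portFrom + ".." + portTo);
    }

    public static void main(String[] args) throws Exception {
        String host = InetAddress.getLocalHost().getHostAddress();
        // e.g. ipFinder.setAddresses(finderAddresses(host, 47500, 47509));
        System.out.println(finderAddresses(host, 47500, 47509));
    }
}
```

With such a list, each JVM can discover the others even though they bind
different local ports within the 47500-47509 range.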





--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Problem-in-forming-cluster-with-multi-jvm-in-single-machine-tp9327.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Cache stopped exception with KafkaStreamer

2016-12-01 Thread Anil
Hi,

I am using KafkaStreamer to populate data into the cache.

I see the following message in the logs:

java.lang.IllegalStateException: Grid is in invalid state to perform this
operation. It either not started yet or has already being or have stopped
[gridName=vertx.ignite.node.7f903539-ac5a-4cd1-9425-c06f4b45c56d,
state=STOPPED]

The cluster looks good and has enough resources.

How should I handle the cache-stopped exception when using the KafkaStreamer?

Thanks


Re: Ignite 1.6.10 performance issue in ODBC connections

2016-12-01 Thread Igor Sapego
Hi, Thomas,

Recently I fixed an issue that may be the cause of such behavior [1].

The fix will be available in the upcoming Apache Ignite 1.8.

[1] - https://issues.apache.org/jira/browse/IGNITE-4249

Best Regards,
Igor

On Wed, Nov 30, 2016 at 1:45 PM, thomastechs  wrote:

> Hello,
> Any help on this is much appreciable..
>
> Thanks,
> Thomas.
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Ignite-1-6-10-performance-issue-in-ODBC-connections-tp9266p9292.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Ignite Cassandra AWS test framework

2016-12-01 Thread Riccardo Iacomini
I am trying to evaluate Cassandra and Ignite by following the AWS deployment
guide in the documentation. I have built both Ignite 1.6 and 1.7 from
source, but in both cases the generated test package is missing the
configuration file *bootstrap/aws/env.sh* that the documentation says I
should modify. The ganglia folder with the scripts for deploying the
monitoring cluster is also missing from *bootstrap/aws/*. How should I
proceed?

Thank you in advance for your time.

Riccardo Iacomini


*RDSLab*


Re: Objects in heap dump corresponding to binary and de-serialized version of entries

2016-12-01 Thread Andrey Mashenkov
Hi,

GridDhtAtomicCacheEntry represents a cache entry. BinaryObject represents
serialized objects: entry keys and entry values.
GridH2ValueCacheObject and GridH2KeyValueRowOnHeap are internal index objects.

On Thu, Dec 1, 2016 at 10:13 AM, rishi007bansod 
wrote:

> I have loaded the customer table with 3,899,999 entries into the cache.
> Following is the heap dump for it.
>
> 
>
> Which objects in this heap dump represent the binary and de-serialized
> versions of the entries?
> Are *GridH2ValueCacheObject* and *GridH2KeyValueRowOnHeap* the objects
> corresponding to de-serialized entries? Also, what do
> *ConcurrentSkipListMap$Node* and *ConcurrentSkipListMap$Index* hold?
>
>
>
> --
> View this message in context: http://apache-ignite-users.70518.x6.nabble.com/Objects-in-heap-dump-corresponding-to-binary-and-de-serialized-version-of-entries-tp9321.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov
Tel. +7-921-932-61-82