Apache Ignite persistent store implementation

2017-04-03 Thread sweta Das
Hi
I have a few questions about the read-through and write-through implementation of
the Ignite cache.
I have a MySQL database with a simple table (one int key and two String columns).
I know that I have to start an Ignite client node with a cache store
configuration. But do I also need to start the server node, passing the JDBC
driver connection details in the Spring XML file?
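
A minimal sketch of the kind of read/write-through configuration being asked
about, written in Java rather than Spring XML; the Person class, the PERSON
table, the column names, and the "mysqlDataSource" bean are placeholders, not
taken from the original setup:

    import java.sql.Types;

    import org.apache.ignite.cache.store.jdbc.CacheJdbcPojoStoreFactory;
    import org.apache.ignite.cache.store.jdbc.JdbcType;
    import org.apache.ignite.cache.store.jdbc.JdbcTypeField;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class StoreConfigSketch {
        // Person is a hypothetical POJO: int id, String col1, String col2.
        public static CacheConfiguration<Integer, Person> personCacheConfig() {
            JdbcType jdbcType = new JdbcType();
            jdbcType.setCacheName("personCache");
            jdbcType.setDatabaseTable("PERSON");
            jdbcType.setKeyType(Integer.class);
            jdbcType.setValueType(Person.class);
            jdbcType.setKeyFields(new JdbcTypeField(Types.INTEGER, "ID", Integer.class, "id"));
            jdbcType.setValueFields(
                new JdbcTypeField(Types.VARCHAR, "COL1", String.class, "col1"),
                new JdbcTypeField(Types.VARCHAR, "COL2", String.class, "col2"));

            CacheJdbcPojoStoreFactory<Integer, Person> storeFactory = new CacheJdbcPojoStoreFactory<>();
            storeFactory.setDataSourceBean("mysqlDataSource"); // JDBC DataSource bean from the Spring XML
            storeFactory.setTypes(jdbcType);

            CacheConfiguration<Integer, Person> ccfg = new CacheConfiguration<>("personCache");
            ccfg.setCacheStoreFactory(storeFactory);
            ccfg.setReadThrough(true);   // cache misses fall through to MySQL
            ccfg.setWriteThrough(true);  // cache updates are written back to MySQL
            return ccfg;
        }
    }

Note that read/write-through callbacks execute on the server nodes that own
the data, so the server nodes generally need the JDBC driver and the data
source definition as well, not only the client.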




continuous query equivalent to cache.get ?

2017-04-03 Thread Marasoiu Nicolae
Hi,


I want to get continuous updates of the value of a certain key on a certain 
cache, so I thought of a continuous query based on an initial query like this:

SELECT _value from CACHE where _key=KEY

Questions:

- is this possible with this syntax?

- is this an SQL Query (the initial query I mean)?

- does it need any configuration added on the cacheConfiguration?

I added setIndexedTypes(String,Object) since String is the key type.

I however get: "CacheException: Failed to find SQL table for type: String".


Another way to get a stream of updates on a key is to listen to cache events,
but that means listening to all the caches. Even listening locally (since we
are listening to a replicated, low-write cache), we would still be processing
a lot of updates just to keep a small fraction of them.


What would be other alternatives, to get a stream of updates on a key?
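
One alternative is a continuous query whose initial query is a scan rather
than SQL, which avoids the setIndexedTypes requirement entirely. A sketch of
that shape (the cache type parameters and key are placeholders; assumes the
filter classes are available on the server, e.g. via peer class loading):

    import javax.cache.Cache;
    import javax.cache.configuration.Factory;
    import javax.cache.event.CacheEntryEvent;
    import javax.cache.event.CacheEntryEventFilter;

    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.query.ContinuousQuery;
    import org.apache.ignite.cache.query.QueryCursor;
    import org.apache.ignite.cache.query.ScanQuery;
    import org.apache.ignite.lang.IgniteBiPredicate;

    public class KeyWatchSketch {
        public static QueryCursor<Cache.Entry<String, Object>> watch(
            IgniteCache<String, Object> cache, final String key) {

            ContinuousQuery<String, Object> qry = new ContinuousQuery<>();

            // Initial query: the current value of the key, via a scan (no SQL table needed).
            qry.setInitialQuery(new ScanQuery<>(new IgniteBiPredicate<String, Object>() {
                @Override public boolean apply(String k, Object v) {
                    return key.equals(k);
                }
            }));

            // Remote filter: only events for this key are sent to the local listener.
            qry.setRemoteFilterFactory(new Factory<CacheEntryEventFilter<String, Object>>() {
                @Override public CacheEntryEventFilter<String, Object> create() {
                    return new CacheEntryEventFilter<String, Object>() {
                        @Override public boolean evaluate(CacheEntryEvent<? extends String, ? extends Object> evt) {
                            return key.equals(evt.getKey());
                        }
                    };
                }
            });

            // Local listener: invoked for each surviving update.
            qry.setLocalListener(evts -> {
                for (CacheEntryEvent<? extends String, ? extends Object> e : evts)
                    System.out.println(e.getKey() + " -> " + e.getValue());
            });

            QueryCursor<Cache.Entry<String, Object>> cur = cache.query(qry);
            for (Cache.Entry<String, Object> e : cur)
                System.out.println("initial: " + e.getKey() + " -> " + e.getValue());

            // Closing the returned cursor deregisters the listener and stops the updates.
            return cur;
        }
    }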


Thank you


Met vriendelijke groeten / Meilleures salutations / Best regards


Nicolae Marasoiu
Agile Developer


E  nicolae.maras...@cegeka.com



CEGEKA, 15-17 Ion Mihalache Blvd., Tower Center Building, 4th, 5th, 6th, 8th, 9th fl.
RO-011171 Bucharest (RO), Romania
T +40 21 336 20 65
WWW.CEGEKA.COM





Re: Local node SEGMENTED. Data Lost on a Partitioned Cache with backups of 3

2017-04-03 Thread Raja
Hi Andrew,

Thanks for the reply. Here is the configuration that I use for the cluster.
Because we are maintaining 3 data copies in the cluster, if loading data
from one node fails, it should be possible to get the data from
another node, shouldn't it?

Thanks in advance for your support

Raja
 

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
       http://www.springframework.org/schema/beans
       http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- The bean definitions were stripped by the mailing list archive;
         of the cache and discovery configuration only the IP finder's
         "list of static IPs" placeholder survives. -->

</beans>
--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Local-node-SEGMENTED-Data-Lost-on-a-Partitioned-Cache-with-backups-of-3-tp11683p11689.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Local node SEGMENTED. Data Lost on a Partitioned Cache with backups of 3

2017-04-03 Thread Andrey Mashenkov
Hi Raja,

Data loss can happen if, for some reason, rebalancing did not finish between
the node failures.
You can also lose data that was being loaded from the segmented node when the
failure happened.

Would you please attach ignite configuration?


On Mon, Apr 3, 2017 at 9:43 PM, Raja  wrote:

> Hi,
>
> I have a Ignite cluster deployed on Hadoop/Yarn with cache configuration as
> follow
>
> Cache Mode : Partitioned
> Backups  : 3
> Default Segmentation behavior.
>
> Two nodes went down with "Local node SEGMENTED" error around the same time at
> 7:00 AM, and another node failed with the same error around 15:00.
>
> This resulted in data loss: we lost a portion of the data in the cache.
> Technically we didn't have more than 2 node failures at a given time, and we
> still lost data.
> *Is this possible to happen?*
>
> BTW, I found this warning in log...might be related to segmentation.
> Thought
> it will be useful in analysis.
> /
> *[main] WARN  org.apache.ignite.spi.checkpoint.noop.NoopCheckpointSpi  -
> Checkpoints are disabled (to enable configure any GridCheckpointSpi
> implementation)/*
>
> Appreciate any insight into this.
>
> Thanking you
> Raja
>
>
>
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Local-node-SEGMENTED-Data-Lost-on-a-
> Partitioned-Cache-with-backups-of-3-tp11683.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Ignite may start more service instances per node than maxPerNodeCount in case of manual redeploy

2017-04-03 Thread Andrey Mashenkov
Hi Evgeniy,

Looks like a bug. I've created a ticket [1].
It seems this is reproduced only when the service is deployed dynamically.
When Ignite starts with the service already configured, all looks fine to me.

[1] https://issues.apache.org/jira/browse/IGNITE-4907

On Mon, Apr 3, 2017 at 4:26 PM, Evgeniy Ignatiev <
yevgeniy.ignat...@gmail.com> wrote:

> Hello.
>
> Recently we discovered that if a service is configured in
> IgniteConfiguration with e.g. maxPerNodeCount equal to 1 and totalCount
> equal to 3, we start-up 2 Ignite nodes - then cancel the service after some
> time and redeploy it with the same config as ignite.services().deploy(...)
> - we observe more than one instance of the service starting per node.
>
> Here is a minimal code that demonstrates this issue:
> https://github.com/YevIgn/ignite-services-bug - After the call to
> ignite.services().deploy(...) - the output to console is "Started 3
> services" while "Started 2 services" is expected as there are only two
> nodes. This is with the Ignite 1.9.0.
>
> Could you please look into it?
>
>
>


-- 
Best regards,
Andrey V. Mashenkov


Re: Stoppin listen for topics in Ignite

2017-04-03 Thread vkulichenko
Sorry, I don't understand. Why do you call stopListen if you can still
receive messages? Basically, if you do this concurrently, there is a race
condition and you don't know whether the listener will be called for all
messages.

-Val



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Stoppin-listen-for-topics-in-Ignite-tp11604p11684.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: A question about cluster configuration (Are there any way to make client connection faster?)

2017-04-03 Thread vdpyatkov
Hi

If you use a port range, connecting can take more time, because the
Discovery SPI tries to connect to every port in the range.
If you specify a single port, the connection should be fast.
Why do you think the delay occurs in the discovery phase?
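
For illustration, a sketch of pinning discovery to a single port so only one
connect attempt is made; this is Java, but the same TcpDiscoverySpi settings
exist in the .NET configuration used in the question:

    import java.util.Arrays;

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

    public class SinglePortDiscoverySketch {
        public static IgniteConfiguration config() {
            TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
            // one exact host:port, not a range like 48500..48520
            ipFinder.setAddresses(Arrays.asList("192.168.1.100:48500"));

            TcpDiscoverySpi disco = new TcpDiscoverySpi();
            disco.setIpFinder(ipFinder);
            disco.setLocalPort(48500);
            disco.setLocalPortRange(0); // bind exactly one local port as well

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDiscoverySpi(disco);
            return cfg;
        }
    }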



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/A-question-about-cluster-configuration-Are-there-any-way-to-make-client-connection-faster-tp11655p11681.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: CPU Usage on distributed queries

2017-04-03 Thread vdpyatkov
Hi,

How was Ignite nodes and physical machines mapped?
Where Ignite nodes more there is CPU utilization greater, isn't it?

Where were SQL query generated (on one of a srever node or other)?

I think, CPU utilization shuold be same over of nodes, because SQL executed
as Map Reduce task.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/CPU-Usage-on-distributed-queries-tp11652p11680.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Distributed Closures VS Executor Service

2017-04-03 Thread Nikolai Tikhonov
Hi,

1) The better way is to use the DataStreamer API. You can find more details in
[1].

2) The internal SQL implementation works in a similar fashion. The query is
parsed and split into multiple map queries and a single reduce query. All
the map queries are executed on all the data nodes where the cache data
resides. All the nodes provide the result sets of their local execution to the
query initiator (the reducing node), which in turn accomplishes the reduce
phase by properly merging the provided result sets.

1. https://apacheignite.readme.io/docs/data-streamers
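
A minimal, self-contained sketch of the streaming pattern from [1]; the cache
name and entry source are up to the caller:

    import java.util.Map;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteDataStreamer;

    public class LoadSketch {
        /** Streams the given entries into an existing cache. */
        public static <K, V> void load(Ignite ignite, String cacheName, Map<K, V> rows) {
            try (IgniteDataStreamer<K, V> streamer = ignite.dataStreamer(cacheName)) {
                streamer.allowOverwrite(true); // needed when an ETL run re-loads existing keys
                for (Map.Entry<K, V> e : rows.entrySet())
                    streamer.addData(e.getKey(), e.getValue());
            } // close() flushes any remaining buffered entries
        }
    }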

On Mon, Apr 3, 2017 at 11:29 AM, kmandalas <
kyriakos.manda...@iriworldwide.com> wrote:

> Hello Christo and thank you for your feedback.
>
> For the moment we have a Relational Database that is not distributed or
> partitioned. We have given some thought for configuring at least some
> database tables as cache store to Ignite and it not something that we
> exclude. Just for the moment due to some Network I/O performance issues of
> the Cloud platform that will host the solution we cannot say much.
>
> However, there is a strong chance at least one or two tables involved in
> the
> flow I described to be loaded into cache. For this reason I would like to
> ask the following:
>
> 1) Which is the best policy for loading into cache a table of about
> ~1.000.000 rows and use ANSI SQL to query it later? This table will be
> incremented periodically (after an ETL procedure) utill it will reach a
> final upper limit of about ~3.000.000 rows. So I would like the optimal way
> to stream it initially into cache on startup of a future deployment and
> update it after each ETL procedure.
>
> 2) If we use MapReduce & ForkJoin procedure how can we combine it with
> Affinity? There are examples for Distributed Closures but I do not see any
> for ComputeTask/ComputeJobAdapter etc. Ideally each job, should perform
> ANSI
> SQL query on the table that will be loaded and maintained in cache but on
> the rows that it keeps locally.
>
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Distributed-Closures-VS-Executor-Service-
> tp11192p11653.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Client near cache with Apache Flink

2017-04-03 Thread Andrey Gura
What is the source of your data in the case *without* Ignite cache? I'm
trying to understand your design without Ignite. What does your code do
instead of nearCache.get(key)?

Also, the code snippet that creates the cache should still fail, because it
tries to create a near cache for an original cache that doesn't exist yet.

On Mon, Apr 3, 2017 at 7:03 PM, nragon
 wrote:
> Hi,
>
> The code is not all there but i think it's the important one.
> Cache access: this.nearCache.get(key); (Mentioned above)
> Cache name comes from table.getName() which, for instance, can be
> "LK_SUBS_UPP_INST".
> Data in ignite cache: String key and Object[] value. For now I only have one
> record for each cache (3 total)
> +------------------+-----+--------------------+-------------------------------------------------------------------------+
> | Key Class        | Key | Value Class        | Value                                                                   |
> +------------------+-----+--------------------+-------------------------------------------------------------------------+
> | java.lang.String | 1   | java.lang.Object[] | size=10, values=[1, 1, null, null, null, null, null, null, null, null] |
> +------------------+-----+--------------------+-------------------------------------------------------------------------+
>
> The cache is loaded with datastreamer from oracle database. Again, loading
> is not the problem.
> I'm trying to understand if this (this.nearCache.get(key)) is the best way
> to access for fast lookups.
>
> Thanks
>
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Client-near-cache-with-Apache-Flink-tp11627p11670.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: 2.0

2017-04-03 Thread Anil
Hi Nikolai,

Yes. It is working :).

And the vertx-ignite cluster also joined the cluster now :). Not sure how it
started working :)


Thanks.

On 3 April 2017 at 21:44, Nikolai Tikhonov  wrote:

> I mean that host equals machine. Can you try to start nodes with
> TcpDiscoveryVmIpFinder and check that ignite will be work properly?
>
> On Mon, Apr 3, 2017 at 7:08 PM, Anil  wrote:
>
>> Hi Nikolai,
>>
>> What do you mean by hosts ? Two machines able to connect each other.
>>
>> Thanks
>>
>> On 3 April 2017 at 21:12, Nikolai Tikhonov  wrote:
>>
>>> Hi Anil,
>>>
>>> Are you sure that you this addresses (172.31.1.189#47500,
>>> 172.31.7.192#47500) reacheable from the hosts? Could you check it by telnet?
>>>
>>> On Mon, Apr 3, 2017 at 12:20 PM, Anil  wrote:
>>>
 Hi Val,

 I tried without vertx and still nodes not joining the cluster. Looks
 like the issue is not because of vertx.

 Thanks

 On 3 April 2017 at 12:13, Anil  wrote:

> Hi Val,
>
> Can you help me in understanding following -
>
> 1. I see empty files created with 127.0.0.1#47500 and
> 0:0:0:0:0:0:0:1%lo#47500 along with actualipaddress#port. any specific
> reason ?
>
> - A number of nodes will have single set of 127.0.0.1#47500 and
> 0:0:0:0:0:0:0:1%lo#47500 empty files.
>
> 2. a node is deleting the empty file of other node. the delete of file
> happens only during un-register (as per the code).
>
> logs of 192 -
>
> 2017-04-03 06:20:03 WARN  TcpDiscoveryS3IpFinder:480 - Creating file
> in the bucket with name - 172.31.7.192#47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - listing the
> bucket objects - 3
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 0:0:0:0:0:0:0:1%lo - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 127.0.0.1 - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 127.0.0.1 - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 172.31.1.189 - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 172.31.1.189 - 47500
> *2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - deleting file
> in the bucket with name - 172.31.1.189#47500*
> 2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - Creating file
> in the bucket with name - 172.31.7.192#47500
>
> logs of 189 -
>
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - listing the
> bucket objects - 3
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 0:0:0:0:0:0:0:1%lo - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 127.0.0.1 - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 127.0.0.1 - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 172.31.7.192 - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 172.31.7.192 - 47500
> *2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - deleting file
> in the bucket with name - 172.31.7.192#47500*
> 2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - Creating file
> in the bucket with name - 172.31.1.189#47500
>
> Thanks
>
> On 2 April 2017 at 14:34, Anil  wrote:
>
>> Hi Val,
>>
>> Nodes not joining the cluster. I just shared the logs to understand
>> if there is any info in the logs.
>>
>> Thanks
>>
>> On 2 April 2017 at 11:32, vkulichenko 
>> wrote:
>>
>>> Anil,
>>>
>>> I'm not sure I understand what is the issue in the first place. What
>>> is not
>>> working?
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.705
>>> 18.x6.nabble.com/2-0-tp11487p11640.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>

>>>
>>
>


Re: Pessimistic TXN did not release lock on a key, all subsequent txns failed

2017-04-03 Thread bintisepaha
Sorry for the late response. We do not close/destroy caches dynamically.
Could you please explain this NPE?



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Pessimistic-TXN-did-not-release-lock-on-a-key-all-subsequent-txns-failed-tp10536p11676.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ScanQuery With BinaryObject

2017-04-03 Thread Andrey Mashenkov
Hi David,

Scan query results are never cached. It looks like your
IgniteBiPredicate implementation is cached on the server side.
Try moving this class to the top level, or make it a named nested class, and
make the "prefix" configurable via the class constructor. This should work.
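
A sketch of that suggestion, based on the code quoted below (the class name is
illustrative):

    import org.apache.ignite.binary.BinaryObject;
    import org.apache.ignite.lang.IgniteBiPredicate;

    // Top-level (or static nested) filter: no hidden reference to an enclosing
    // instance, and the prefix travels with the serialized predicate.
    public class AddressPrefixFilter implements IgniteBiPredicate<Long, BinaryObject> {
        private final String prefix;

        public AddressPrefixFilter(String prefix) {
            this.prefix = prefix;
        }

        @Override public boolean apply(Long key, BinaryObject val) {
            String address = val.field("address");
            return address != null && address.startsWith(prefix);
        }
    }

Each run would then build a fresh query, e.g.
new ScanQuery<>(new AddressPrefixFilter("changi")).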

On Mon, Apr 3, 2017 at 9:24 AM, David Li  wrote:

> Sorry, please ignore the previous email, it was sent by mistake.
>
> 1. I download apache-ignite-fabric-1.9.0-bin.zip, and unzip it.
> 2. In terminal, I start an ignite instance by *bin/ignite.sh
> examples/config/example-ignite.xml*
> 3. I create a Java application, source code as below:
>
> public static void main(String[] args) {
>     String ORG_CACHE = "org_cache_remote";
>     Ignition.setClientMode(true);
>     Ignite ignite = Ignition.start("example-ignite.xml");
>     CacheConfiguration<Long, Organization> orgCacheCfg = new
>         CacheConfiguration<>(ORG_CACHE);
>     orgCacheCfg.setIndexedTypes(Long.class, Organization.class);
>     ignite.destroyCache(ORG_CACHE);
>     IgniteCache<Long, Organization> cache = ignite.createCache(orgCacheCfg);
>     cache.put(1L, new Organization(1L, "org1", true, "jurong east", ""));
>     cache.put(2L, new Organization(2L, "org2", false, "orchard", ""));
>     cache.put(3L, new Organization(3L, "org3", true, "jurong west", ""));
>     cache.put(4L, new Organization(4L, "org4", false, "woodlands", ""));
>     cache.put(5L, new Organization(5L, "org5", false, "changi", ""));
>     // cache.put(6L, new Organization(6L, "org6", true, "jurong island", ""));
>
>     IgniteCache<Long, BinaryObject> binaryCache = cache.withKeepBinary();
>
>     List<Cache.Entry<Long, BinaryObject>> result;
>
>     System.out.println("Scan by address");
>     ScanQuery<Long, BinaryObject> scanAddress = new ScanQuery<>(
>         new IgniteBiPredicate<Long, BinaryObject>() {
>             @Override
>             public boolean apply(Long aLong, BinaryObject binaryObject) {
>                 // first time filter by jurong, got two entries, org1 and org3
>                 // second time filter by changi, got two entries, org1 and org3
>                 // third time filter by changi as well, uncomment org6, got three entries, org1, org3 and org6
>                 return binaryObject.<String>field("address").startsWith("jurong");
>             }
>         }
>     );
>     result = binaryCache.query(scanAddress).getAll();
>     System.out.println("result: " + result.size());
>     for (Cache.Entry<Long, BinaryObject> entry : result) {
>         System.out.println(entry.getValue().deserialize().toString());
>     }
>
>     ignite.close();
> }
>
> Here what I want to do is start a client node, connect to the server node
> started in step 2. Then I create a cache, put some data inside,
> then try to run a scan query to find entries by its address.
> The problem is when I run this program first time, it will return two
> entries, their addresses are started with "jurong", which is correct.
> When I run the program again, with changed value, eg. "changi", it should
> return one entry, somehow, it still return two entries with address started
> with "jurong", rather than "changi".
> When I uncomment the line of "org6", and run the program again, it will
> return three entries, all of their addresses are started with "jurong".
>
> I have no idea what is going on.
>
>
>


-- 
Best regards,
Andrey V. Mashenkov


Re: 2.0

2017-04-03 Thread Nikolai Tikhonov
I mean that a host equals a machine. Can you try to start the nodes with
TcpDiscoveryVmIpFinder and check whether Ignite works properly?
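
A minimal sketch of that configuration, using the two addresses from the logs
in this thread:

    import java.util.Arrays;

    import org.apache.ignite.configuration.IgniteConfiguration;
    import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
    import org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder;

    public class StaticDiscoverySketch {
        public static IgniteConfiguration config() {
            TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
            ipFinder.setAddresses(Arrays.asList("172.31.1.189:47500", "172.31.7.192:47500"));

            TcpDiscoverySpi disco = new TcpDiscoverySpi();
            disco.setIpFinder(ipFinder);

            IgniteConfiguration cfg = new IgniteConfiguration();
            cfg.setDiscoverySpi(disco);
            return cfg;
        }
    }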

On Mon, Apr 3, 2017 at 7:08 PM, Anil  wrote:

> Hi Nikolai,
>
> What do you mean by hosts ? Two machines able to connect each other.
>
> Thanks
>
> On 3 April 2017 at 21:12, Nikolai Tikhonov  wrote:
>
>> Hi Anil,
>>
>> Are you sure that you this addresses (172.31.1.189#47500,
>> 172.31.7.192#47500) reacheable from the hosts? Could you check it by telnet?
>>
>> On Mon, Apr 3, 2017 at 12:20 PM, Anil  wrote:
>>
>>> Hi Val,
>>>
>>> I tried without vertx and still nodes not joining the cluster. Looks
>>> like the issue is not because of vertx.
>>>
>>> Thanks
>>>
>>> On 3 April 2017 at 12:13, Anil  wrote:
>>>
 Hi Val,

 Can you help me in understanding following -

 1. I see empty files created with 127.0.0.1#47500 and
 0:0:0:0:0:0:0:1%lo#47500 along with actualipaddress#port. any specific
 reason ?

 - A number of nodes will have single set of 127.0.0.1#47500 and
 0:0:0:0:0:0:0:1%lo#47500 empty files.

 2. a node is deleting the empty file of other node. the delete of file
 happens only during un-register (as per the code).

 logs of 192 -

 2017-04-03 06:20:03 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
 the bucket with name - 172.31.7.192#47500
 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - listing the
 bucket objects - 3
 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
 registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
 registered - 0:0:0:0:0:0:0:1%lo - 47500
 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
 registered (before) - 127.0.0.1 - 47500
 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
 registered - 127.0.0.1 - 47500
 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
 registered (before) - 172.31.1.189 - 47500
 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
 registered - 172.31.1.189 - 47500
 *2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - deleting file
 in the bucket with name - 172.31.1.189#47500*
 2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
 the bucket with name - 172.31.7.192#47500

 logs of 189 -

 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - listing the
 bucket objects - 3
 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
 registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
 registered - 0:0:0:0:0:0:0:1%lo - 47500
 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
 registered (before) - 127.0.0.1 - 47500
 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
 registered - 127.0.0.1 - 47500
 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
 registered (before) - 172.31.7.192 - 47500
 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
 registered - 172.31.7.192 - 47500
 *2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - deleting file
 in the bucket with name - 172.31.7.192#47500*
 2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
 the bucket with name - 172.31.1.189#47500

 Thanks

 On 2 April 2017 at 14:34, Anil  wrote:

> Hi Val,
>
> Nodes not joining the cluster. I just shared the logs to understand if
> there is any info in the logs.
>
> Thanks
>
> On 2 April 2017 at 11:32, vkulichenko 
> wrote:
>
>> Anil,
>>
>> I'm not sure I understand what is the issue in the first place. What
>> is not
>> working?
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/2-0-tp11487p11640.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>

>>>
>>
>


Re: Storing JSON data on Ignite cache

2017-04-03 Thread Andrey Mashenkov
Hi Austin,

There is no way to query binary data, as Ignite does not know its structure.
You should deserialize the data before storing it in Ignite, or store it as
BinaryObjects.

The first approach requires all Ignite nodes to have the data structure
classes in their classpaths.
The second one requires the data structure classes to be on the classpath of
the node that will deserialize the query results. Usually that is a client
node.

See [1] and [2].

[1] https://apacheignite.readme.io/docs/binary-marshaller
[2]
http://apache-ignite-users.70518.x6.nabble.com/Working-Directly-with-Binary-objects-td5131.html
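
A sketch of the BinaryObject route, using the fields from the JSON in the
question; the "Country" type name and "countries" cache name are arbitrary,
and the JSON would be parsed by whatever JSON library you already use:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.binary.BinaryObject;

    public class JsonToBinarySketch {
        public static void put(Ignite ignite) {
            BinaryObject country = ignite.binary().builder("Country")
                .setField("id", 1)
                .setField("name", "USA")
                .setField("population", 15)
                .build();

            IgniteCache<Integer, BinaryObject> cache =
                ignite.<Integer, BinaryObject>cache("countries").withKeepBinary();

            cache.put(1, country);
        }
    }

To run SQL over such objects you would additionally need to declare the
fields, e.g. via a QueryEntity for the "Country" type in the cache
configuration.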

On Mon, Apr 3, 2017 at 4:17 PM, austin solomon 
wrote:

> Hi,
>
> My Ignite version is 1.9.0
>
> I am pushing JSON object into Ignite's cache using Apache Nifi's : GetKafka
> => PutIgniteCache processors.
> I could able to save the data in cache but the data it is in byte array
> format how can I query this.
>
> My Json object looks like this : {"id":1, "name":"USA", "population":
> 15} .
> When  print the cache data  got the following result:
>
>   byte[] bytes = (byte[]) cache.get("1");
>   String str = new String(bytes, StandardCharsets.UTF_8);
>   System.out.println("Cache get id = "+str);
>   Iterator it=cache.iterator();
>   while(it.hasNext()){
>   System.out.println("Cache next=="+it.next());
>   }
>
> Result:
>
>Cache get id = {"id":1, "name":"USA", "population": 15}
>Cache next==Entry [key=1, val=[B@5ab14cb9]
>Cache next==Entry [key=2, val=[B@5fb97279]
>Cache next==Entry [key=3, val=[B@439a8f59]
>
> How can I query this data or map this as key, value.
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Storing-JSON-data-on-Ignite-cache-tp11660.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Best regards,
Andrey V. Mashenkov


Re: 2.0

2017-04-03 Thread Anil
Hi Nikolai,

What do you mean by hosts? The two machines are able to connect to each other.

Thanks

On 3 April 2017 at 21:12, Nikolai Tikhonov  wrote:

> Hi Anil,
>
> Are you sure that you this addresses (172.31.1.189#47500,
> 172.31.7.192#47500) reacheable from the hosts? Could you check it by telnet?
>
> On Mon, Apr 3, 2017 at 12:20 PM, Anil  wrote:
>
>> Hi Val,
>>
>> I tried without vertx and still nodes not joining the cluster. Looks like
>> the issue is not because of vertx.
>>
>> Thanks
>>
>> On 3 April 2017 at 12:13, Anil  wrote:
>>
>>> Hi Val,
>>>
>>> Can you help me in understanding following -
>>>
>>> 1. I see empty files created with 127.0.0.1#47500 and
>>> 0:0:0:0:0:0:0:1%lo#47500 along with actualipaddress#port. any specific
>>> reason ?
>>>
>>> - A number of nodes will have single set of 127.0.0.1#47500 and
>>> 0:0:0:0:0:0:0:1%lo#47500 empty files.
>>>
>>> 2. a node is deleting the empty file of other node. the delete of file
>>> happens only during un-register (as per the code).
>>>
>>> logs of 192 -
>>>
>>> 2017-04-03 06:20:03 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
>>> the bucket with name - 172.31.7.192#47500
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - listing the
>>> bucket objects - 3
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered - 0:0:0:0:0:0:0:1%lo - 47500
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered (before) - 127.0.0.1 - 47500
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered - 127.0.0.1 - 47500
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered (before) - 172.31.1.189 - 47500
>>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered - 172.31.1.189 - 47500
>>> *2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
>>> the bucket with name - 172.31.1.189#47500*
>>> 2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
>>> the bucket with name - 172.31.7.192#47500
>>>
>>> logs of 189 -
>>>
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - listing the
>>> bucket objects - 3
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered - 0:0:0:0:0:0:0:1%lo - 47500
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered (before) - 127.0.0.1 - 47500
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered - 127.0.0.1 - 47500
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered (before) - 172.31.7.192 - 47500
>>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>>> registered - 172.31.7.192 - 47500
>>> *2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
>>> the bucket with name - 172.31.7.192#47500*
>>> 2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
>>> the bucket with name - 172.31.1.189#47500
>>>
>>> Thanks
>>>
>>> On 2 April 2017 at 14:34, Anil  wrote:
>>>
 Hi Val,

 Nodes not joining the cluster. I just shared the logs to understand if
 there is any info in the logs.

 Thanks

 On 2 April 2017 at 11:32, vkulichenko 
 wrote:

> Anil,
>
> I'm not sure I understand what is the issue in the first place. What
> is not
> working?
>
> -Val
>
>
>
> --
> View this message in context: http://apache-ignite-users.705
> 18.x6.nabble.com/2-0-tp11487p11640.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


>>>
>>
>


Re: Eager TTL and query

2017-04-03 Thread Nikolai Tikhonov
Hi,

"When entry accessed" is mind that entry will be involved by IgniteCache
API (put/get/invoke and etc). In during operation ttl will be checked.

Ignite SQL doesn't update ttl and entry won't be removed.
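
For context, a small sketch of where these knobs live, assuming a
created-based expiry policy (cache name and duration are arbitrary):

    import java.util.concurrent.TimeUnit;

    import javax.cache.expiry.CreatedExpiryPolicy;
    import javax.cache.expiry.Duration;

    import org.apache.ignite.configuration.CacheConfiguration;

    public class TtlConfigSketch {
        public static CacheConfiguration<Integer, String> config() {
            CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("myCache");

            // true: a background thread evicts expired entries eagerly;
            // false: expiry is checked only when an entry is touched via the cache API
            ccfg.setEagerTtl(false);

            ccfg.setExpiryPolicyFactory(
                CreatedExpiryPolicy.factoryOf(new Duration(TimeUnit.MINUTES, 5)));

            return ccfg;
        }
    }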


On Mon, Apr 3, 2017 at 4:28 PM, Tolga Kavukcu 
wrote:

> Hi Everyone,
>
> I have a question about cache ttl. I understand that when eager ttl is set
> to true for a cache, a thread created and expired entries are cleaned but
> this can cause some heap usage.
>
> Otherwise i can set eager ttl to false, when entry accessed ignite checks
> if entry expired or not it is very much clear.
>
>
> I wonder the terminology "When entry accessed" covers running query over
> cache. Let's say i didnt call any put or get, but i queried entire cache.
> Will expired entries will be removed during cache query execution ?
>
> Regards.
>
> --
>
> *Tolga KAVUKÇU*
>


Re: Client near cache with Apache Flink

2017-04-03 Thread nragon
Hi,

The code is not all there, but I think this is the important part.
Cache access: this.nearCache.get(key); (mentioned above)
The cache name comes from table.getName(), which, for instance, can be
"LK_SUBS_UPP_INST".
Data in the Ignite cache: String key and Object[] value. For now I only have one
record in each cache (3 total):

+------------------+-----+--------------------+-------------------------------------------------------------------------+
| Key Class        | Key | Value Class        | Value                                                                   |
+------------------+-----+--------------------+-------------------------------------------------------------------------+
| java.lang.String | 1   | java.lang.Object[] | size=10, values=[1, 1, null, null, null, null, null, null, null, null] |
+------------------+-----+--------------------+-------------------------------------------------------------------------+

The cache is loaded with a data streamer from an Oracle database. Again, loading
is not the problem.
I'm trying to understand whether this (this.nearCache.get(key)) is the best way
to access the cache for fast lookups.

Thanks




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Client-near-cache-with-Apache-Flink-tp11627p11670.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: Storing JSON data on Ignite cache

2017-04-03 Thread Anton Vinogradov
Hi,

It seems you need to use the cache's query method with a transformer:

public <T, R> QueryCursor<R> query(Query<T> qry, IgniteClosure<T, R> transformer);

Usage example:

IgniteClosure<Cache.Entry<Integer, Integer>, Integer> transformer =
    new IgniteClosure<Cache.Entry<Integer, Integer>, Integer>() {
        @Override public Integer apply(Cache.Entry<Integer, Integer> e) {
            return e.getKey();
        }
    };

List<Integer> keys = cache.query(new ScanQuery<Integer, Integer>(),
    transformer).getAll();


On Mon, Apr 3, 2017 at 4:17 PM, austin solomon 
wrote:

> Hi,
>
> My Ignite version is 1.9.0
>
> I am pushing JSON object into Ignite's cache using Apache Nifi's : GetKafka
> => PutIgniteCache processors.
> I could able to save the data in cache but the data it is in byte array
> format how can I query this.
>
> My Json object looks like this : {"id":1, "name":"USA", "population":
> 15} .
> When  print the cache data  got the following result:
>
>   byte[] bytes = (byte[]) cache.get("1");
>   String str = new String(bytes, StandardCharsets.UTF_8);
>   System.out.println("Cache get id = "+str);
>   Iterator it=cache.iterator();
>   while(it.hasNext()){
>   System.out.println("Cache next=="+it.next());
>   }
>
> Result:
>
>Cache get id = {"id":1, "name":"USA", "population": 15}
>Cache next==Entry [key=1, val=[B@5ab14cb9]
>Cache next==Entry [key=2, val=[B@5fb97279]
>Cache next==Entry [key=3, val=[B@439a8f59]
>
> How can I query this data or map this as key, value.
>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Storing-JSON-data-on-Ignite-cache-tp11660.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: Client near cache with Apache Flink

2017-04-03 Thread Andrey Gura
Hi,

It still doesn't describe your use case in detail. How exactly do you
use the Ignite cache in your system? What kind of data do you cache, and
what is the source of this data in the case without Ignite?

Also, it looks strange to me that you try to create a near cache with a
name that doesn't match the original cache name. This code should
fail at runtime.
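
For reference, a sketch of the expected pattern on a client node: the near
cache is created for an existing server cache of the same name (the eviction
size here is an arbitrary choice):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.cache.eviction.lru.LruEvictionPolicy;
    import org.apache.ignite.configuration.NearCacheConfiguration;

    public class NearCacheSketch {
        public static IgniteCache<String, Object[]> near(Ignite client, String cacheName) {
            NearCacheConfiguration<String, Object[]> nearCfg = new NearCacheConfiguration<>();
            nearCfg.setNearEvictionPolicy(new LruEvictionPolicy<String, Object[]>(10_000));

            // cacheName must match an existing server-side cache, e.g. "LK_SUBS_UPP_INST"
            return client.getOrCreateNearCache(cacheName, nearCfg);
        }
    }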

On Sat, Apr 1, 2017 at 3:49 AM, nragon
 wrote:
> Starting with ignite:
>- 4 nodes managed by yarn
>- Node configurations are as above
>- 3 off heap caches with one record each
>
> Flink:
>- 5 task managers with 2 slots each and 2gb
>
> In my current job I'm reading from kafka, mapping some business rules with
> flink map functions and sink to hbase. This job without ignite cache
> processes 30k/s events. When I add ignite on flink map function to enrich
> data with cache.get(key) the performance drops to 2k/s.
> I'm bypassing data enrichment at first test(30k/s, just assigning a random
> value)
> Near cache should pull the one record from cache and from there it should be
> really quick right?
> Just wondering if the configurations above are the correct oned for fast
> lookup caches.
>
> Thanks
>
>
>
> --
> View this message in context: 
> http://apache-ignite-users.70518.x6.nabble.com/Client-near-cache-with-Apache-Flink-tp11627p11636.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: 2.0

2017-04-03 Thread Nikolai Tikhonov
Hi Anil,

Are you sure that you this addresses (172.31.1.189#47500,
172.31.7.192#47500) reacheable from the hosts? Could you check it by telnet?

On Mon, Apr 3, 2017 at 12:20 PM, Anil  wrote:

> Hi Val,
>
> I tried without vertx and still nodes not joining the cluster. Looks like
> the issue is not because of vertx.
>
> Thanks
>
> On 3 April 2017 at 12:13, Anil  wrote:
>
>> Hi Val,
>>
>> Can you help me in understanding following -
>>
>> 1. I see empty files created with 127.0.0.1#47500 and
>> 0:0:0:0:0:0:0:1%lo#47500 along with actualipaddress#port. any specific
>> reason ?
>>
>> - A number of nodes will have single set of 127.0.0.1#47500 and
>> 0:0:0:0:0:0:0:1%lo#47500 empty files.
>>
>> 2. a node is deleting the empty file of other node. the delete of file
>> happens only during un-register (as per the code).
>>
>> logs of 192 -
>>
>> 2017-04-03 06:20:03 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
>> the bucket with name - 172.31.7.192#47500
>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - listing the bucket
>> objects - 3
>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>> registered - 0:0:0:0:0:0:0:1%lo - 47500
>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>> registered (before) - 127.0.0.1 - 47500
>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>> registered - 127.0.0.1 - 47500
>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>> registered (before) - 172.31.1.189 - 47500
>> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>> registered - 172.31.1.189 - 47500
>> *2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
>> the bucket with name - 172.31.1.189#47500*
>> 2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
>> the bucket with name - 172.31.7.192#47500
>>
>> logs of 189 -
>>
>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - listing the bucket
>> objects - 3
>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>> registered - 0:0:0:0:0:0:0:1%lo - 47500
>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>> registered (before) - 127.0.0.1 - 47500
>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>> registered - 127.0.0.1 - 47500
>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>> registered (before) - 172.31.7.192 - 47500
>> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
>> registered - 172.31.7.192 - 47500
>> *2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
>> the bucket with name - 172.31.7.192#47500*
>> 2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
>> the bucket with name - 172.31.1.189#47500
>>
>> Thanks
>>
>> On 2 April 2017 at 14:34, Anil  wrote:
>>
>>> Hi Val,
>>>
>>> Nodes not joining the cluster. I just shared the logs to understand if
>>> there is any info in the logs.
>>>
>>> Thanks
>>>
>>> On 2 April 2017 at 11:32, vkulichenko 
>>> wrote:
>>>
 Anil,

 I'm not sure I understand what is the issue in the first place. What is
 not
 working?

 -Val



 --
 View this message in context: http://apache-ignite-users.705
 18.x6.nabble.com/2-0-tp11487p11640.html
 Sent from the Apache Ignite Users mailing list archive at Nabble.com.

>>>
>>>
>>
>


Re: SpringBoot Integration

2017-04-03 Thread Andrey Mashenkov
Hi Vishal,

See the Ignite examples source code:

   - org.apache.ignite.examples.misc.springbean.SpringBeanExample.
   -
   org.apache.ignite.examples.datagrid.store.spring.CacheSpringStoreExample

Also test project [1] may be helpful

[1]
https://github.com/Normal/apache-ignite-testproject/tree/master/src/main/groovy/ignite

On Mon, Apr 3, 2017 at 2:45 PM, vishal jain  wrote:

> Hi,
>
>
> I am new to ignite and am planning to use ignite as IMDG in my spring boot
> application. Do we have any spring template (Spring Data or JDBC based) for
> cache Load etc, any pointers will be highly appreciated.
>
>
> Thanks,
> VJ
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Error while running query against Ignite Server Node

2017-04-03 Thread Nikolai Tikhonov
Hi,

It looks like you have long GC pauses in your application. Did you see any
messages in the logs related to topology changes (node failed, node left, node
joined, etc.)? Could you enable GC logs and check them (for example
-Xloggc:./gc.log -XX:+PrintGCDetails -verbose:gc)?

On Sun, Apr 2, 2017 at 1:36 PM, rishi007bansod 
wrote:

> Hi,
> I am running 3 instances of ignite per machine(total 3 machines). Data is
> inserted in partitioned cache continuously and I am running query against
> this cache. After query I am getting above error. Is there any way I can
> make topology stable before querying data? For example, how can I query
> data
> up to certain  snapshot of cache when data is continuously getting added
> and
> I am querying data to avoid this error(As this error only occurs during run
> time query against data i.e. when cache is getting populated and not when
> cache is completely loaded)
>
> Thanks.
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Error-while-running-query-against-Ignite-Server-Node-
> tp11030p11643.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


SpringCacheManager

2017-04-03 Thread vishal jain
Hi,


Does "SpringCacheManager" supports Write Behind, Auto refresh etc?

If not then what is the alternative approach to implement these in
Spring-Ignite based application.


Thanks,
Vishal


Re: Server Client Connection

2017-04-03 Thread Nikolai Tikhonov
Yes, it's absolutely ok. Do not worry:)

On Mon, Apr 3, 2017 at 5:01 AM, woo charles 
wrote:

> Hi,
>
>
> >I have created 5 server nodes in 5 jvm process in 3 jvms.
> To be honest this point is not clear for me. Could you please describe how
> you run 5 jvm process in 3 jvms?
> Sorry for confused you. It should be "create 5 server nodes(java) in 3
> server".
>
> > After the client node has started up, the oldest server node will
> establish a connection to the client node through client's communication
> port (471XX) periodically.
> It's expected behavior one server node of cluster have direct connection
> with client node. Ignite cluster requires that each nodes have direct
> connection to any nodes of grid. You can change port range in
> TcpCommunicationSpi [1] and TcpDiscoverySpi [2].
> But I found that the oldest server node will establish connection to *all
> *client nodes. Is it normal?
>
>
>
> 2017-03-29 21:49 GMT+08:00 Nikolai Tikhonov :
>
>> Hi!
>>
>> >I have created 5 server nodes in 5 jvm process in 3 jvms.
>> To be honest this point is not clear for me. Could you please describe
>> how you run 5 jvm process in 3 jvms?
>>
>> > After the client node has started up, the oldest server node will
>> establish a connection to the client node through client's communication
>> port (471XX) periodically.
>> It's expected behavior one server node of cluster have direct connection
>> with client node. Ignite cluster requires that each nodes have direct
>> connection to any nodes of grid. You can change port range in
>> TcpCommunicationSpi [1] and TcpDiscoverySpi [2].
>>
>> > if it is because of Leader Election, can I have multiple leaders within
>> a cluster? If yes, how to do it?
>> You can get more details there [3].
>>
>> 1. https://apacheignite.readme.io/docs/clustering
>> 2. https://apacheignite.readme.io/docs/network-config
>> 3. https://apacheignite.readme.io/docs/leader-election
>>
>>
>> On Wed, Mar 29, 2017 at 4:16 AM, woo charles 
>> wrote:
>>
>>> Hi
>>>
>>> I have created 5 server nodes in 5 jvm process in 3 jvms.
>>>
>>> After the client node has started up, the oldest server node will
>>> establish a connection to the client node through client's communication
>>> port (471XX) periodically.
>>>
>>> May I know the reasons for this?
>>> if it is because of Leader Election, can I have multiple leaders within
>>> a cluster? If yes, how to do it?
>>> Also, can I change this connection behavior (e.g. client connects to
>>> server's communication port)?
>>>
>>
>>
>


Eager TTL and query

2017-04-03 Thread Tolga Kavukcu
Hi Everyone,

I have a question about cache TTL. I understand that when eager TTL is set
to true for a cache, a thread is created and expired entries are cleaned, but
this can cause some heap usage.

Otherwise, I can set eager TTL to false, and when an entry is accessed Ignite
checks whether the entry has expired; that much is clear.

I wonder whether the term "when an entry is accessed" covers running a query
over the cache. Let's say I didn't call any put or get, but I queried the
entire cache. Will expired entries be removed during cache query execution?

Regards.

-- 

*Tolga KAVUKÇU*


Ignite may start more service instances per node than maxPerNodeCount in case of manual redeploy

2017-04-03 Thread Evgeniy Ignatiev

Hello.

Recently we discovered that if a service is configured in
IgniteConfiguration with e.g. maxPerNodeCount equal to 1 and totalCount
equal to 3, we start up 2 Ignite nodes, then cancel the service after
some time and redeploy it with the same config via
ignite.services().deploy(...) - we observe more than one instance of the
service starting per node.


Here is a minimal code example that demonstrates this issue:
https://github.com/YevIgn/ignite-services-bug - after the call to
ignite.services().deploy(...), the output to console is "Started 3
services" while "Started 2 services" is expected, as there are only two
nodes. This is with Ignite 1.9.0.


Could you please look into it?
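
For reference, the deployment shape being described, sketched with a
placeholder no-op service implementation:

    import org.apache.ignite.Ignite;
    import org.apache.ignite.services.Service;
    import org.apache.ignite.services.ServiceConfiguration;
    import org.apache.ignite.services.ServiceContext;

    public class ServiceDeploySketch {
        /** Placeholder no-op service. */
        public static class MyService implements Service {
            @Override public void cancel(ServiceContext ctx) {}
            @Override public void init(ServiceContext ctx) {}
            @Override public void execute(ServiceContext ctx) {}
        }

        public static void deploy(Ignite ignite) {
            ServiceConfiguration svcCfg = new ServiceConfiguration();
            svcCfg.setName("mySvc");
            svcCfg.setService(new MyService());
            svcCfg.setTotalCount(3);      // at most 3 instances cluster-wide
            svcCfg.setMaxPerNodeCount(1); // at most 1 instance per node

            ignite.services().deploy(svcCfg); // the dynamic redeploy path where extra instances were seen
        }
    }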




Storing JSON data on Ignite cache

2017-04-03 Thread austin solomon
Hi,

My Ignite version is 1.9.0.

I am pushing JSON objects into Ignite's cache using Apache Nifi's GetKafka
=> PutIgniteCache processors.
I am able to save the data in the cache, but the data is in byte array
format; how can I query it?

My JSON object looks like this: {"id":1, "name":"USA", "population":
15}.
When I print the cache data I get the following result:

  byte[] bytes = (byte[]) cache.get("1");
  String str = new String(bytes, StandardCharsets.UTF_8);
  System.out.println("Cache get id = " + str);
  Iterator it = cache.iterator();
  while (it.hasNext()) {
      System.out.println("Cache next==" + it.next());
  }

Result: 

   Cache get id = {"id":1, "name":"USA", "population": 15}
   Cache next==Entry [key=1, val=[B@5ab14cb9]
   Cache next==Entry [key=2, val=[B@5fb97279]
   Cache next==Entry [key=3, val=[B@439a8f59]

How can I query this data or map this as key, value.




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Storing-JSON-data-on-Ignite-cache-tp11660.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: ignite messaging disconnection behaviour

2017-04-03 Thread dkarachentsev
Hi Kevin,

Unfortunately, the community decided to leave it as is: messages are sent
asynchronously by design, but connection establishment is not. That was done
intentionally, and the user should take it into account and handle it
depending on the concrete situation.

Thanks!

-Dmitry.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/ignite-messaging-disconnection-behaviour-tp11218p11658.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: [GridCachePartitionExchangeManager] Pending transaction deadlock detection futures

2017-04-03 Thread Alexey Kuznetsov
Hi ght230,

I think that Visor CMD should work in client mode, but it is not
implemented yet.
Please create an issue in the Ignite JIRA for that.

On Sat, Apr 1, 2017 at 1:41 PM, ght230  wrote:

> I want to know should Visor CMD be worked in client mode or daemon mode?
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/GridCachePartitionExchangeManager-Pending-transaction-
> deadlock-detection-futures-tp11362p11638.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>



-- 
Alexey Kuznetsov


Re: Stoppin listen for topics in Ignite

2017-04-03 Thread daniels
Dear vkulichenko, I want to listen to ignMessage.send and IgnitePredicate
execution.



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Stoppin-listen-for-topics-in-Ignite-tp11604p11656.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


A question about cluster configuration (Are there any way to make client connection faster?)

2017-04-03 Thread ozgurnevres
The ignite configuration in my app.config (and web.config) is like below:


  <!-- XML tags were stripped by the archive; the surviving values are a
       discovery local port of 48500, a static discovery address of
       192.168.1.100:48500, and a second local port of 48100 (apparently
       for the communication SPI). -->


With that configuration, a client on the same network (on a different
computer) connects in around 10-12 seconds.

If I use a port range like 192.168.1.100:48500..48520, it takes even more
time.

Is there any way to make this faster? What is the best practice?
Thanks




--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/A-question-about-cluster-configuration-Are-there-any-way-to-make-client-connection-faster-tp11655.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


Re: 2.0

2017-04-03 Thread Anil
Hi Val,

I tried without vertx, and the nodes are still not joining the cluster. Looks
like the issue is not because of vertx.

Thanks

On 3 April 2017 at 12:13, Anil  wrote:

> Hi Val,
>
> Can you help me in understanding following -
>
> 1. I see empty files created with 127.0.0.1#47500 and
> 0:0:0:0:0:0:0:1%lo#47500 along with actualipaddress#port. any specific
> reason ?
>
> - A number of nodes will have single set of 127.0.0.1#47500 and
> 0:0:0:0:0:0:0:1%lo#47500 empty files.
>
> 2. a node is deleting the empty file of other node. the delete of file
> happens only during un-register (as per the code).
>
> logs of 192 -
>
> 2017-04-03 06:20:03 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
> the bucket with name - 172.31.7.192#47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - listing the bucket
> objects - 3
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 0:0:0:0:0:0:0:1%lo - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 127.0.0.1 - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 127.0.0.1 - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 172.31.1.189 - 47500
> 2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 172.31.1.189 - 47500
> *2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
> the bucket with name - 172.31.1.189#47500*
> 2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
> the bucket with name - 172.31.7.192#47500
>
> logs of 189 -
>
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - listing the bucket
> objects - 3
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 0:0:0:0:0:0:0:1%lo - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 127.0.0.1 - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 127.0.0.1 - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered (before) - 172.31.7.192 - 47500
> 2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
> registered - 172.31.7.192 - 47500
> *2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
> the bucket with name - 172.31.7.192#47500*
> 2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in
> the bucket with name - 172.31.1.189#47500
>
> Thanks
>
> On 2 April 2017 at 14:34, Anil  wrote:
>
>> Hi Val,
>>
>> Nodes not joining the cluster. I just shared the logs to understand if
>> there is any info in the logs.
>>
>> Thanks
>>
>> On 2 April 2017 at 11:32, vkulichenko 
>> wrote:
>>
>>> Anil,
>>>
>>> I'm not sure I understand what is the issue in the first place. What is
>>> not
>>> working?
>>>
>>> -Val
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-ignite-users.705
>>> 18.x6.nabble.com/2-0-tp11487p11640.html
>>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>>
>>
>>
>


Re: Distributed Closures VS Executor Service

2017-04-03 Thread kmandalas
Hello Christo and thank you for your feedback.

For the moment we have a relational database that is not distributed or
partitioned. We have given some thought to configuring at least some
database tables as a cache store for Ignite, and it is not something that we
exclude. Just for the moment, due to some network I/O performance issues of
the cloud platform that will host the solution, we cannot say much.

However, there is a strong chance that at least one or two tables involved in
the flow I described will be loaded into cache. For this reason I would like
to ask the following:

1) Which is the best policy for loading into cache a table of about
~1.000.000 rows and using ANSI SQL to query it later? This table will be
incremented periodically (after an ETL procedure) until it reaches a
final upper limit of about ~3.000.000 rows. So I would like the optimal way
to stream it initially into cache on startup of a future deployment and
update it after each ETL procedure.

2) If we use a MapReduce & ForkJoin procedure, how can we combine it with
affinity? There are examples for distributed closures, but I do not see any
for ComputeTask/ComputeJobAdapter etc. Ideally each job should perform an ANSI
SQL query on the table that will be loaded and maintained in cache, but only on
the rows that it keeps locally.
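
Regarding question 2, a sketch of the affinity-collocated pattern: the closure
runs on the node that owns the given key, and setLocal(true) restricts the SQL
to that node's data. The cache name, the Row type, and the SQL text are
placeholders:

    import java.util.List;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.query.SqlFieldsQuery;

    public class AffinitySketch {
        public static void countLocal(Ignite ignite, Object affinityKey) {
            ignite.compute().affinityRun("rowCache", affinityKey, () -> {
                Ignite local = Ignition.localIgnite(); // Ignite instance on the executing node

                SqlFieldsQuery qry = new SqlFieldsQuery("select count(*) from Row");
                qry.setLocal(true); // query only the data held on this node

                List<List<?>> res = local.cache("rowCache").query(qry).getAll();
                System.out.println("local rows: " + res.get(0).get(0));
            });
        }
    }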

 



--
View this message in context: 
http://apache-ignite-users.70518.x6.nabble.com/Distributed-Closures-VS-Executor-Service-tp11192p11653.html
Sent from the Apache Ignite Users mailing list archive at Nabble.com.


CPU Usage on distributed queries

2017-04-03 Thread woo charles
Hi,

I have conducted a test of distributed query performance.

I have created 5 server nodes (Java) on 3 different servers.

4 records are entered.

I created 80 threads, and each thread looped the following SQL 100 times:
select * from user where age<18
(about 50 records are selected)

*** age is not a key / index

I have found that the CPU usage of 2 server nodes (the oldest
and the second oldest) jumps to a significantly high level (from 5% to
80%-90%), while the other 3 nodes increase from 5% to 18% only.

Why does this happen?


Re: 2.0

2017-04-03 Thread Anil
Hi Val,

Can you help me understand the following:

1. I see empty files created with 127.0.0.1#47500 and
0:0:0:0:0:0:0:1%lo#47500 along with actualipaddress#port. Any specific
reason?

- A number of nodes will have a single set of 127.0.0.1#47500 and
0:0:0:0:0:0:0:1%lo#47500 empty files.

2. A node is deleting the empty file of another node. The deletion of a file
happens only during un-registration (as per the code).

logs of 192 -

2017-04-03 06:20:03 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in the
bucket with name - 172.31.7.192#47500
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - listing the bucket
objects - 3
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered - 0:0:0:0:0:0:0:1%lo - 47500
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered (before) - 127.0.0.1 - 47500
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered - 127.0.0.1 - 47500
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered (before) - 172.31.1.189 - 47500
2017-04-03 06:21:05 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered - 172.31.1.189 - 47500
*2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
the bucket with name - 172.31.1.189#47500*
2017-04-03 06:21:15 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in the
bucket with name - 172.31.7.192#47500

logs of 189 -

2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - listing the bucket
objects - 3
2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered (before) - 0:0:0:0:0:0:0:1%lo - 47500
2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered - 0:0:0:0:0:0:0:1%lo - 47500
2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered (before) - 127.0.0.1 - 47500
2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered - 127.0.0.1 - 47500
2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered (before) - 172.31.7.192 - 47500
2017-04-03 06:23:50 WARN  TcpDiscoveryS3IpFinder:480 - Address to be
registered - 172.31.7.192 - 47500
*2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - deleting file in
the bucket with name - 172.31.7.192#47500*
2017-04-03 06:24:00 WARN  TcpDiscoveryS3IpFinder:480 - Creating file in the
bucket with name - 172.31.1.189#47500

Thanks

On 2 April 2017 at 14:34, Anil  wrote:

> Hi Val,
>
> Nodes not joining the cluster. I just shared the logs to understand if
> there is any info in the logs.
>
> Thanks
>
> On 2 April 2017 at 11:32, vkulichenko 
> wrote:
>
>> Anil,
>>
>> I'm not sure I understand what is the issue in the first place. What is
>> not
>> working?
>>
>> -Val
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/2-0-tp11487p11640.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: ScanQuery With BinaryObject

2017-04-03 Thread David Li
Sorry, please ignore the previous email, it was sent by mistake.

1. I downloaded apache-ignite-fabric-1.9.0-bin.zip and unzipped it.
2. In a terminal, I start an Ignite instance with bin/ignite.sh
examples/config/example-ignite.xml
3. I created a Java application, source code as below:

public static void main(String[] args) {
    String ORG_CACHE = "org_cache_remote";
    Ignition.setClientMode(true);
    Ignite ignite = Ignition.start("example-ignite.xml");
    CacheConfiguration<Long, Organization> orgCacheCfg = new
        CacheConfiguration<>(ORG_CACHE);
    orgCacheCfg.setIndexedTypes(Long.class, Organization.class);
    ignite.destroyCache(ORG_CACHE);
    IgniteCache<Long, Organization> cache = ignite.createCache(orgCacheCfg);
    cache.put(1L, new Organization(1L, "org1", true, "jurong east", ""));
    cache.put(2L, new Organization(2L, "org2", false, "orchard", ""));
    cache.put(3L, new Organization(3L, "org3", true, "jurong west", ""));
    cache.put(4L, new Organization(4L, "org4", false, "woodlands", ""));
    cache.put(5L, new Organization(5L, "org5", false, "changi", ""));
    // cache.put(6L, new Organization(6L, "org6", true, "jurong island", ""));

    IgniteCache<Long, BinaryObject> binaryCache = cache.withKeepBinary();

    List<Cache.Entry<Long, BinaryObject>> result;

    System.out.println("Scan by address");
    ScanQuery<Long, BinaryObject> scanAddress = new ScanQuery<>(
        new IgniteBiPredicate<Long, BinaryObject>() {
            @Override
            public boolean apply(Long aLong, BinaryObject binaryObject) {
                // first time filter by jurong, got two entries, org1 and org3
                // second time filter by changi, got two entries, org1 and org3
                // third time filter by changi as well, uncomment org6, got three entries, org1, org3 and org6
                return binaryObject.<String>field("address").startsWith("jurong");
            }
        }
    );
    result = binaryCache.query(scanAddress).getAll();
    System.out.println("result: " + result.size());
    for (Cache.Entry<Long, BinaryObject> entry : result) {
        System.out.println(entry.getValue().deserialize().toString());
    }

    ignite.close();
}

What I want to do here is start a client node and connect to the server node
started in step 2. Then I create a cache, put some data inside,
and try to run a scan query to find entries by address.
The problem is that when I run this program the first time, it returns two
entries whose addresses start with "jurong", which is correct.
When I run the program again with a changed value, e.g. "changi", it should
return one entry; somehow, it still returns two entries with addresses starting
with "jurong", rather than "changi".
When I uncomment the line for "org6" and run the program again, it returns
three entries, all of whose addresses start with "jurong".

I have no idea what is going on.


Re: ScanQuery With BinaryObject

2017-04-03 Thread David Li
1. I download apache-ignite-fabric-1.9.0-bin.zip, and unzip it.
2. In terminal, I start an ignite instance by bin/ignite.sh
examples/config/example-ignite.xml
3. I create a Java application, source code as below:

public static void main(String[] args) {
    String ORG_CACHE = "org_cache_remote";
    Ignition.setClientMode(true);
    Ignite ignite = Ignition.start("example-ignite.xml");
    CacheConfiguration<Long, Organization> orgCacheCfg = new
        CacheConfiguration<>(ORG_CACHE);
    orgCacheCfg.setIndexedTypes(Long.class, Organization.class);
    ignite.destroyCache(ORG_CACHE);
    IgniteCache<Long, Organization> cache = ignite.createCache(orgCacheCfg);
    cache.put(1L, new Organization(1L, "org1", true, "jurong east", ""));
    cache.put(2L, new Organization(2L, "org2", false, "orchard", ""));
    cache.put(3L, new Organization(3L, "org3", true, "jurong west", ""));
    cache.put(4L, new Organization(4L, "org4", false, "woodlands", ""));
    cache.put(5L, new Organization(5L, "org5", false, "changi", ""));
    cache.put(6L, new Organization(6L, "org6", true, "jurong island", ""));

    IgniteCache<Long, BinaryObject> binaryCache = cache.withKeepBinary();

    List<Cache.Entry<Long, BinaryObject>> result;

    System.out.println("Scan by address");
    ScanQuery<Long, BinaryObject> scanAddress = new ScanQuery<>(
        new IgniteBiPredicate<Long, BinaryObject>() {
            @Override
            public boolean apply(Long aLong, BinaryObject binaryObject) {
                return binaryObject.<String>field("address").startsWith("jurong");
            }
        }
    );
    result = binaryCache.query(scanAddress).getAll();
    System.out.println("result: " + result.size());
    for (Cache.Entry<Long, BinaryObject> entry : result) {
        System.out.println(entry.getValue().deserialize().toString());
    }

    ignite.close();
}

What I want to do here is start a client node and connect to the server
node started in step 2. Then I create a cache, put some data inside,

and try to run a scan query to find entries by address.

The problem is that when I run this program the first time, it returns two
entries whose addresses start with "jurong", which is correct.

When I run the program again with a changed value, e.g. changi, it
should return one entry; somehow, it still returns two entries with
addresses starting with "jurong", rather than "changi".

When I




On Fri, Mar 31, 2017 at 7:40 PM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> Hi David,
>
> Would you please share your code.
>
> On Fri, Mar 31, 2017 at 10:42 AM, David Li  wrote:
>
>> It is weird.
>>
>> I run the cache query example, scan query works fine.
>>
>> I create my cache query code with a local started server node, scan query
>> works fine.
>>
>> I start a server node from terminal, start a client node in my code and
>> issue scan query, the first query after the server node is started works
>> fine, all the following queries will just return the exactly same result as
>> the first query. I try adding data into the cache, the newly added data
>> will be returned as per the first query. It looks like the first query has
>> been kept somewhere.
>>
>>
>>
>>
>> On Thu, Mar 30, 2017 at 9:02 PM, Andrey Mashenkov <
>> andrey.mashen...@gmail.com> wrote:
>>
>>> Hi David,
>>>
>>> I've run your code and it works fine for me on ignite 1.7-1.9 versions
>>> and master branch.
>>>
>>> On Thu, Mar 30, 2017 at 12:19 PM, David Li 
>>> wrote:
>>>
 Hello,

 I am having a little issue with the ScanQuery for BinaryObject.

 Some code snippets

IgniteCache cache = ignite.cache(CacheConfig.CACHE_NAME);
IgniteCache<Long, BinaryObject> binaryCache = cache.withKeepBinary();

// scan query
IgniteBiPredicate<Long, BinaryObject> filter = new IgniteBiPredicate<Long, BinaryObject>() {
    @Override
    public boolean apply(Long key, BinaryObject value) {
        return false;
    }
};
ScanQuery<Long, BinaryObject> scanQuery = new ScanQuery<>(filter);

List<Cache.Entry<Long, BinaryObject>> result =
    binaryCache.query(scanQuery).getAll();


I expect it to return an empty result; somehow it always returns
everything in the cache.

 Not sure what is wrong?


 BRs,

 David




>>>
>>>
>>> --
>>> Best regards,
>>> Andrey V. Mashenkov
>>>
>>
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>