RE: Ignite Thin Client Continuous Query

2018-09-11 Thread vkulichenko
Gordon,

Generally, having CQ on the thin client would definitely be awesome. My only
point is that the thin client has several technical limitations that would
introduce multiple "ifs" into the functionality. What exactly those ifs are,
and whether there is still value with all those ifs, is a big question for
me. An open question though, of course - by no means am I trying to make a
claim that it doesn't make sense at all.

As for your use case, everything sounds reasonable to me. I would probably make
the following changes, however:
- Move CQs to separate client node(s). That would separate the concerns and
also simplify failover management.
- Use some other product to propagate updates from CQ to the desktop apps (Kafka
maybe?). Basically, you need some sort of durable queue to deliver those
messages. You can definitely implement it from scratch, but this might be
overkill.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite cluster hang with GridCachePartitionExchangeManager

2018-09-11 Thread wangsan
Yes, it was blocked when doing cache operations in discovery event listeners
while node-left events arrived concurrently. I now do the cache operations in
another thread, so the listener is no longer blocked.
The original cause may be that the discovery event processor holds the server
latch: when a cache operation runs in the listener, the server latch gets
blocked (why, I am not sure).





Re: IgniteUtils NoClassDefFoundError

2018-09-11 Thread akurbanov
Hello,

Did you properly set IGNITE_HOME pointing to binaries/build sources?

Regards





Re: Ignite data can't be recovered after node fail

2018-09-11 Thread smovva
Were you able to resolve this? I'm in a very similar situation.






Re: Node keeps crashing under load

2018-09-11 Thread eugene miretsky
Thanks Ilya,

We are writing to Ignite from Spark running in EMR. We don't know the
address of the node in advance. We have tried:
1) Setting localHost in the Ignite configuration to 127.0.0.1, as per the
example online
2) Leaving localHost unset and letting Ignite figure out the host

I have attached more logs at the end.

My understanding is that Ignite should pick the first non-local address to
publish; however, it seems like it randomly picks one of (a) the proper
address, (b) an ipv6 address, (c) 127.0.0.1, (d) 172.17.0.1.

A few questions:
1) How do we force the Spark client to use the proper address?
2) Where is 172.17.0.1 coming from? It is usually the default Docker
network host address, and it seems like Ignite creates a network interface
for it on the instance (otherwise I have no idea where the interface is
coming from).
3) If there are communication errors, shouldn't the Zookeeper split-brain
resolver kick in and shut down the dead node? Or shouldn't at least the
initiating node mark the remote node as dead?

[19:36:26,189][INFO][grid-nio-worker-tcp-comm-15-#88%Server%][TcpCommunicationSpi]
Accepted incoming communication connection [locAddr=/172.17.0.1:47100,
rmtAddr=/172.21.86.7:41648]

[19:36:26,190][INFO][grid-nio-worker-tcp-comm-3-#76%Server%][TcpCommunicationSpi]
Accepted incoming communication connection [locAddr=/0:0:0:0:0:0:0:1:47100,
rmtAddr=/0:0:0:0:0:0:0:1:52484]

[19:36:26,191][INFO][grid-nio-worker-tcp-comm-5-#78%Server%][TcpCommunicationSpi]
Accepted incoming communication connection [locAddr=/127.0.0.1:47100,
rmtAddr=/127.0.0.1:37656]

[19:36:26,191][INFO][grid-nio-worker-tcp-comm-1-#74%Server%][TcpCommunicationSpi]
Established outgoing communication connection [locAddr=/172.21.86.7:53272,
rmtAddr=ip-172-21-86-175.ap-south-1.compute.internal/172.21.86.175:47100]

[19:36:26,191][INFO][grid-nio-worker-tcp-comm-0-#73%Server%][TcpCommunicationSpi]
Established outgoing communication connection [locAddr=/172.17.0.1:41648,
rmtAddr=ip-172-17-0-1.ap-south-1.compute.internal/172.17.0.1:47100]

[19:36:26,193][INFO][grid-nio-worker-tcp-comm-4-#77%Server%][TcpCommunicationSpi]
Established outgoing communication connection [locAddr=/127.0.0.1:37656,
rmtAddr=/127.0.0.1:47100]

[19:36:26,193][INFO][grid-nio-worker-tcp-comm-2-#75%Server%][TcpCommunicationSpi]
Established outgoing communication connection
[locAddr=/0:0:0:0:0:0:0:1:52484, rmtAddr=/0:0:0:0:0:0:0:1%lo:47100]

[19:36:26,195][INFO][grid-nio-worker-tcp-comm-8-#81%Server%][TcpCommunicationSpi]
Accepted incoming communication connection [locAddr=/172.17.0.1:47100,
rmtAddr=/172.21.86.7:41656]

[19:36:26,195][INFO][grid-nio-worker-tcp-comm-10-#83%Server%][TcpCommunicationSpi]
Accepted incoming communication connection [locAddr=/0:0:0:0:0:0:0:1:47100,
rmtAddr=/0:0:0:0:0:0:0:1:52492]

[19:36:26,195][INFO][grid-nio-worker-tcp-comm-12-#85%Server%][TcpCommunicationSpi]
Accepted incoming communication connection [locAddr=/127.0.0.1:47100,
rmtAddr=/127.0.0.1:37664]

[19:36:26,196][INFO][grid-nio-worker-tcp-comm-7-#80%Server%][TcpCommunicationSpi]
Established outgoing communication connection [locAddr=/172.21.86.7:41076,
rmtAddr=ip-172-21-86-229.ap-south-1.compute.internal/172.21.86.229:47100]




On Mon, Sep 10, 2018 at 12:04 PM Ilya Kasnacheev 
wrote:

> Hello!
>
> I can see a lot of errors like this one:
>
> [04:05:29,268][INFO][tcp-comm-worker-#1%Server%][ZookeeperDiscoveryImpl]
> Created new communication error process future
> [errNode=598e3ead-99b8-4c49-b7df-04d578dcbf5f, err=class
> org.apache.ignite.IgniteCheckedException: Failed to connect to node (is
> node still alive?). Make sure that each ComputeTask and cache Transaction
> has a timeout set in order to prevent parties from waiting forever in case
> of network issues [nodeId=598e3ead-99b8-4c49-b7df-04d578dcbf5f,
> addrs=[ip-172-17-0-1.ap-south-1.compute.internal/172.17.0.1:47100,
> ip-172-21-85-213.ap-south-1.compute.internal/172.21.85.213:47100,
> /0:0:0:0:0:0:0:1%lo:47100, /127.0.0.1:47100]]]
>
> I think the problem is, you have two nodes that both have the 172.17.0.1
> address, but it's a different address on each node (totally unrelated
> private networks).
>
> Try specifying your external address (such as 172.21.85.213) with
> TcpCommunicationSpi.setLocalAddress() on each node.
>
> Regards,
> --
> Ilya Kasnacheev
>
>
> Fri, Sep 7, 2018 at 20:01, eugene miretsky :
>
>> Hi all,
>>
>> Can somebody please provide some pointers on what could be the issue or
>> how to debug it? We have a fairly large Ignite use case, but cannot go
>> ahead with a POC because of these crashes.
>>
>> Cheers,
>> Eugene
>>
>>
>>
>> On Fri, Aug 31, 2018 at 11:52 AM eugene miretsky <
>> eugene.miret...@gmail.com> wrote:
>>
>>> Also, don't want to spam the mailing list with more threads, but I get
>>> the same stability issue when writing to Ignite from Spark. Logfile from
>>> the crashed node (not same node as before, probably random) is attached.
>>>
>>>  I have also attached a gc log from another node (I have gc logging
>>> enabled only on one 
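Ilya's suggestion earlier in this thread - pinning the communication address so peers never see the unrelated docker0 address - can be expressed in the node's Spring XML configuration. A minimal sketch (the address is an example and would differ per node):

```xml
<!-- Hypothetical fragment of an IgniteConfiguration bean; only the
     communication SPI section is shown. -->
<property name="communicationSpi">
    <bean class="org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi">
        <!-- Bind to the node's external address so that 172.17.0.1
             is never advertised to other nodes. -->
        <property name="localAddress" value="172.21.85.213"/>
    </bean>
</property>
```

The same can be done programmatically via TcpCommunicationSpi.setLocalAddress() before starting the node.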

RE: Ignite Thin Client Continuous Query

2018-09-11 Thread Gordon Reid (Nine Mile)
Hi Val,

It's a very simple, and I would say very common, use case. We want to send a 
filter to the grid, receive a snapshot, and then receive a continuous stream of 
updates that match that filter.

Consider a trading window, showing a grid of stocks. I want to subscribe to the 
continuously changing market prices for a specific list of stocks, and maybe I 
want to subscribe to trade events for another specific set. The universe of 
stocks is huge and not all users will want to see the same stocks, so 
server-side filtering is key.

We are under time pressure, so for now we will probably end up building our own 
lightweight framework based on sockets and json for serialization between the 
java server side and the .net user app side. On the server side we will have to 
build a proxy service that will be responsible for managing subscriptions from 
the user apps, and then making those corresponding subscriptions onto the grid, 
using CQ. When it receives the results of the CQ it will need to serialize and 
push events back to the user apps.

The high level use case is that we have a java ignite cluster which implements 
our trading platform deployed into a data centre. And we have a C# .NET desktop 
app which currently hosts an ignite thick client. This user app needs to 
connect to clusters which might be in the metro area, or even in other 
countries. We have found that for metro it's okay, but not great (despite a huge 
amount of time tuning comms parameters), and for remote cities it's unusable. 
If we were .net to .net we could (for example) do this easily with WCF. The 
user app just needs to
- send RPC style commands (which we currently do using ignite service grid)
(eg. start / stop trading strategy)
- get cache snapshots based on some filter
(eg. show me all the orders on Microsoft for yesterday)
- subscribe to cache updates based on some filter
( eg. show me all the orders on Microsoft for today so far, placed by trading 
strategy X, and stream new orders or changes to existing orders as they occur)
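On a thick client node, the snapshot-plus-stream pattern described above maps onto Ignite's ContinuousQuery API. A rough sketch (the cache name, key/value types, and config file are illustrative, and this needs a running cluster with such a cache):

```java
import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.ContinuousQuery;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class OrderFeed {
    // Hypothetical value type; stands in for a real order model.
    static class Order implements java.io.Serializable {
        String symbol;
    }

    public static void main(String[] args) {
        Ignite ignite = Ignition.start("client-config.xml"); // hypothetical config
        IgniteCache<String, Order> orders = ignite.cache("orders");

        ContinuousQuery<String, Order> qry = new ContinuousQuery<>();

        // Initial query: the "snapshot" of entries matching the filter.
        qry.setInitialQuery(new ScanQuery<String, Order>(
            (k, v) -> "MSFT".equals(v.symbol)));

        // Remote filter: evaluated on server nodes, so only matching
        // updates cross the network.
        qry.setRemoteFilterFactory(() -> evt -> "MSFT".equals(evt.getValue().symbol));

        // Local listener: receives matching creates/updates as they occur.
        qry.setLocalListener(evts -> evts.forEach(e ->
            System.out.println("update: " + e.getValue().symbol)));

        try (QueryCursor<Cache.Entry<String, Order>> cur = orders.query(qry)) {
            // Iterating the cursor yields the initial snapshot; updates then
            // flow to the listener for as long as the cursor stays open.
            cur.forEach(e -> System.out.println("snapshot: " + e.getValue().symbol));
        }
    }
}
```

Closing the cursor deregisters the query, which is why a proxy service would keep one cursor open per user-app subscription.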

Thanks,
Gordon.

-Original Message-
From: vkulichenko 
Sent: Tuesday, September 11, 2018 10:47 AM
To: user@ignite.apache.org
Subject: RE: Ignite Thin Client Continuous Query

Gordon,

Yes, generally we do recommend using the thin client for such applications.
However, it doesn't mean that you can't use a client node in case it's really 
needed, although it might require some additional tuning.

Would you mind telling if you have any other technology in mind? I highly doubt 
that you will find anything that can provide functionality similar to CQ in 
Ignite, especially with the same delivery guarantees, while being based on a 
lightweight client. I believe you either will not succeed in finding such an 
alternative, or your use case does not require continuous queries in the first 
place. Can you give some more details on what you're trying to achieve? I might 
be able to suggest some other approach then.

-Val





This email and any attachments are proprietary & confidential and are intended 
solely for the use of the individuals to whom it is addressed. Any views or 
opinions expressed are solely for those of the author and do not necessarily 
reflect those of Nine Mile Financial Pty. Limited. If you have received this 
email in error, please let us know immediately by reply email and delete from 
your system. Nine Mile Financial Pty. Limited. ABN: 346 1349 0252


RE: Ignite Thin Client Continuous Query

2018-09-11 Thread Gordon Reid (Nine Mile)
In my humble opinion there is huge value here. We have these fantastic APIs 
between our cluster nodes, so why should we have to go and use different APIs and 
different serialization techniques in our end-user apps? It's totally 
acceptable that the reliability and guaranteed-delivery aspects are relaxed 
for the user app. The key is to use the same (or a useful subset of the) cache 
interfaces.

From: Valentin Kulichenko 
Sent: Wednesday, September 12, 2018 6:33 AM
To: user@ignite.apache.org
Cc: isap...@apache.org
Subject: Re: Ignite Thin Client Continuous Query

Igor,

I just think that we're dealing with a trade-off here, and that if we implement 
CQ for the thin client, we will either end up with a client that is not "thin" 
anymore, or the semantics and guarantees of CQ would change so drastically that 
it would be a completely different feature. Either way, it's a big question 
whether there is value in doing this.

I'm open to discussion though. If you have any particular suggestions, let's 
discuss them on dev list.

-Val

On Tue, Sep 11, 2018 at 5:26 AM Igor Sapego  wrote:
Guys,

Personally, I do not see any problems why we cannot implement
Continuous Queries for thin clients. This will require a decent amount
of work, and will not give such strong guarantees as thick clients
give (for example, on a server crash the thin client will get an exception and
will need to re-register the listener once again), but to me it seems totally
implementable.

Val,

Why do you think that such features are unlikely to appear in thin clients?

Best Regards,
Igor


On Tue, Sep 11, 2018 at 3:07 PM Alexey Kuznetsov  wrote:

Gordon,

How about starting several client nodes "near" the cluster and using them as 
"proxies" for your desktop GUI apps?
You may write some code that will push data from the client nodes to your GUI apps.
This will require some coding, of course.

--
Alexey Kuznetsov




RE: Ignite Thin Client Continuous Query

2018-09-11 Thread Gordon Reid (Nine Mile)
Thanks, Alexey, yes we have considered this approach. But I would normally 
consider this type of architecture an optimization, not a base requirement. 
It's rather heavy, and to me it only makes sense when we have a large number of 
end users and we want to minimize bandwidth to the remote locations.

From: Alexey Kuznetsov 
Sent: Tuesday, September 11, 2018 10:07 PM
To: user@ignite.apache.org
Subject: Re: Ignite Thin Client Continuous Query


Gordon,

How about starting several client nodes "near" the cluster and using them as 
"proxies" for your desktop GUI apps?
You may write some code that will push data from the client nodes to your GUI apps.
This will require some coding, of course.

--
Alexey Kuznetsov




Re: Ignite Thin Client Continuous Query

2018-09-11 Thread vkulichenko
Gaurav,

The Web Console receives updates from the web agent, which periodically polls
the cluster.

-Val





Re: Ignite Thin Client Continuous Query

2018-09-11 Thread Gaurav Bajaj
Guys,
Just wondering, how does the Web Console receive updates from the server
continuously?


Regards,
Gaurav

On 11-Sep-2018 10:31 PM, "Valentin Kulichenko" <
valentin.kuliche...@gmail.com> wrote:

> Igor,
>
> I just think that we're dealing with a trade-off here, and that if we
> implement CQ for the thin client, we will either end up with a client that is
> not "thin" anymore, or the semantics and guarantees of CQ would change so
> drastically that it would be a completely different feature. Either way,
> it's a big question whether there is value in doing this.
>
> I'm open to discussion though. If you have any particular suggestions,
> let's discuss them on dev list.
>
> -Val
>
> On Tue, Sep 11, 2018 at 5:26 AM Igor Sapego  wrote:
>
>> Guys,
>>
>> Personally, I do not see any problems why we cannot implement
>> Continuous Queries for thin clients. This will require a decent amount
>> of work, and will not give such strong guarantees as thick clients
>> give (for example, on a server crash the thin client will get an exception and
>> will need to re-register the listener once again), but to me it seems totally
>> implementable.
>>
>> Val,
>>
>> Why do you think that such features are unlikely to appear in thin
>> clients?
>>
>> Best Regards,
>> Igor
>>
>>
>> On Tue, Sep 11, 2018 at 3:07 PM Alexey Kuznetsov 
>> wrote:
>>
>>>
>>> Gordon,
>>>
>>> How about starting several client nodes "near" the cluster and using them
>>> as "proxies" for your desktop GUI apps?
>>> You may write some code that will push data from the client nodes to your
>>> GUI apps.
>>> This will require some coding, of course.
>>>
>>> --
>>> Alexey Kuznetsov
>>>
>>


Re: Ignite Thin Client Continuous Query

2018-09-11 Thread Valentin Kulichenko
Igor,

I just think that we're dealing with a trade-off here, and that if we
implement CQ for the thin client, we will either end up with a client that is
not "thin" anymore, or the semantics and guarantees of CQ would change so
drastically that it would be a completely different feature. Either way,
it's a big question whether there is value in doing this.

I'm open to discussion though. If you have any particular suggestions,
let's discuss them on dev list.

-Val

On Tue, Sep 11, 2018 at 5:26 AM Igor Sapego  wrote:

> Guys,
>
> Personally, I do not see any problems why we cannot implement
> Continuous Queries for thin clients. This will require a decent amount
> of work, and will not give such strong guarantees as thick clients
> give (for example, on a server crash the thin client will get an exception and
> will need to re-register the listener once again), but to me it seems totally
> implementable.
>
> Val,
>
> Why do you think that such features are unlikely to appear in thin clients?
>
> Best Regards,
> Igor
>
>
> On Tue, Sep 11, 2018 at 3:07 PM Alexey Kuznetsov 
> wrote:
>
>>
>> Gordon,
>>
>> How about starting several client nodes "near" the cluster and using them as
>> "proxies" for your desktop GUI apps?
>> You may write some code that will push data from the client nodes to your
>> GUI apps.
>> This will require some coding, of course.
>>
>> --
>> Alexey Kuznetsov
>>
>


Re: Configuring TcpDiscoveryKubernetesIpFinder

2018-09-11 Thread Jeff Simon
Hi Val,

Ok, for smaller files inline is much cleaner, and that's what we prefer. But I 
agree, for larger files it's not such a good idea. We would really like to 
avoid having configuration spread out all over the place.

Really, all we need to do is set the service name... we don't want to use the 
default of 'ignite'.

Thanks.


On 9/11/18, 2:16 PM, "vkulichenko"  wrote:

Jeff,

Ignite configuration is an XML file which can be quite large. What is the
reason for the requirement to specify it inline?

-Val





This email and any files transmitted with it are confidential, proprietary and 
intended solely for the individual or entity to whom they are addressed. If you 
have received this email in error please delete it immediately.


Re: Configuring TcpDiscoveryKubernetesIpFinder

2018-09-11 Thread vkulichenko
Jeff,

Ignite configuration is an XML file which can be quite large. What is the
reason for the requirement to specify it inline?

-Val





IgniteUtils enables strictPostRedirect in a static block

2018-09-11 Thread xero
Hello Igniters,
We noticed that the IgniteUtils class has a static initialization block (line
796 in version 2.6) in which system properties are changed. In
particular, the property "http.strictPostRedirect" is set to "true".
This could change how an application behaves when it references any class that
triggers this static block.

Is there any reason to have this property configured this way?

As a workaround, we are forcing the initialization of this class in order to
override this property value back to false in a controlled way. We identified
that the new-version checker could be using this, but we would like to know if
disabling this property could cause any additional issues.

Any information would be appreciated

Thanks
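The hazard described above is generic to static initializers: merely referencing a class can silently change global JVM state. A self-contained illustration with a stand-in class (StaticInitDemo and LegacyUtils are hypothetical, not Ignite code):

```java
public class StaticInitDemo {
    // Stand-in for IgniteUtils: a static block with a global side effect.
    static class LegacyUtils {
        static {
            // Mirrors IgniteUtils setting http.strictPostRedirect=true.
            System.setProperty("http.strictPostRedirect", "true");
        }
        static void touch() { /* referencing this forces class init */ }
    }

    public static void main(String[] args) {
        System.clearProperty("http.strictPostRedirect");
        // Unset until the class is first referenced...
        System.out.println("before: " + System.getProperty("http.strictPostRedirect"));
        LegacyUtils.touch();
        // ...and flipped as a side effect of class initialization.
        System.out.println("after:  " + System.getProperty("http.strictPostRedirect"));
        // The workaround from the message above: force init early, then
        // override the property back in a controlled way.
        System.setProperty("http.strictPostRedirect", "false");
        System.out.println("fixed:  " + System.getProperty("http.strictPostRedirect"));
    }
}
```

The workaround only stays effective if no other code path re-triggers a property write later, which is why the poster asks whether the new-version checker depends on it.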





Configuring TcpDiscoveryKubernetesIpFinder

2018-09-11 Thread Jeff Simon

Hi,

I'm using the guide at https://apacheignite.readme.io/docs/stateless-deployment 
to get an Ignite instance running. Per that document, to configure 
TcpDiscoveryKubernetesIpFinder you need to create a Spring config file and 
then point to it using the CONFIG_URI env var.


- name: CONFIG_URI
  value: 
https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence.xml

This is not going to work for us.  Is there another option for specifying this 
config?  We would like to be able to supply this config inline, in the k8s 
'deployment' spec.containers.env.  Specifically, we need to set serviceName.  
The default 'ignite' is not a good choice.

Thanks, Jeff
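For reference, the service name is set on the IP finder bean inside the Spring config file that CONFIG_URI points at. A minimal sketch (the service and namespace values are examples):

```xml
<!-- Hypothetical fragment of an IgniteConfiguration bean; only the
     discovery SPI section is shown. -->
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                <!-- Override the default service name ('ignite'). -->
                <property name="serviceName" value="my-ignite-service"/>
                <property name="namespace" value="my-namespace"/>
            </bean>
        </property>
    </bean>
</property>
```

This fragment still lives in a config file rather than inline in the k8s env block, so it does not by itself remove the need for CONFIG_URI, only for the default service name.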


Re: Load balancing ignite get requests

2018-09-11 Thread ezhuravlev
Hi,

Well, it depends on a lot of things - if you have a small amount of data,
which can easily fit in memory on each node, then you can use a Replicated
cache.

On the other hand, if you have quite a big dataset, you may consider using a
Partitioned cache and executing affinity runs.

Evgenii
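An affinity run, as suggested above, sends the computation to the node that owns the key, so the read stays local instead of crossing the network. A rough sketch (the cache name and types are illustrative, and this needs a running cluster with such a cache):

```java
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class AffinityGet {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start();  // node/client startup; config omitted
        String cacheName = "myCache";      // hypothetical partitioned cache
        int key = 42;

        // The closure executes on the primary node for 'key', so the
        // lookup below is a local read rather than a remote get.
        ignite.compute().affinityRun(cacheName, key, () -> {
            IgniteCache<Integer, String> cache =
                Ignition.localIgnite().cache(cacheName);
            System.out.println("value: " + cache.localPeek(key));
        });
    }
}
```

With a Replicated cache the same get is local on every node, which is why the choice above comes down to dataset size versus per-node memory.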





Re: Partition map exchange in detail

2018-09-11 Thread Ilya Lantukh
1) It is.
2a) Ignite has retry mechanics for all messages, including PME-related ones.
2b) In this situation PME will hang, but it isn't a "deadlock".
3) Sorry, I didn't understand your question. If a node is down, but
DiscoverySpi doesn't detect it, it isn't PME-related problem.
4) How can you ensure that partition maps on the coordinator are *latest*
without "freezing" the cluster state for some time?

On Sat, Sep 8, 2018 at 3:21 AM, eugene miretsky 
wrote:

> Thanks!
>
> We are using persistence, so I am not sure if shutting down nodes will be
> the desired outcome for us, since we would need to modify the baseline
> topology.
>
> A couple more follow-up questions:
>
> 1) Is PME triggered when client nodes join as well? We are using the Spark
> client, so new nodes are created/destroyed every time.
> 2) It sounds to me like there is a potential for the cluster to get into
> a deadlock if
>   a) a single PME message is lost (PME never finishes, there are no
> retries, and all future operations are blocked on the pending PME)
>   b) one of the nodes has a long running/stuck pending operation
> 3) Under what circumstances can PME fail while DiscoverySpi fails to
> detect the node being down? We are using ZookeeperSpi, so I would expect the
> split-brain resolver to shut down the node.
> 4) Why is PME needed? Doesn't the coordinator know the latest
> topology/partition map of the cluster through regular gossip?
>
> Cheers,
> Eugene
>
> On Fri, Sep 7, 2018 at 5:18 PM Ilya Lantukh  wrote:
>
>> Hi Eugene,
>>
>> 1) PME happens when topology is modified (TopologyVersion is
>> incremented). The most common events that trigger it are: node
>> start/stop/fail, cluster activation/deactivation, dynamic cache start/stop.
>> 2) It is done by a separate ExchangeWorker. Events that trigger PME are
>> transferred using DiscoverySpi instead of CommunicationSpi.
>> 3) All nodes wait for all pending cache operations to finish and then
>> send their local partition maps to the coordinator (oldest node). Then
>> coordinator calculates new global partition maps and sends them to every
>> node.
>> 4) All cache operations.
>> 5) Exchange is never retried. Ignite community is currently working on
>> PME failure handling that should kick all problematic nodes after timeout
>> is reached (see https://cwiki.apache.org/confluence/display/IGNITE/IEP-
>> 25%3A+Partition+Map+Exchange+hangs+resolving for details), but it isn't
>> done yet.
>> 6) You shouldn't consider a PME failure as an error by itself, but rather as
>> a result of some other error. The most common reason for a PME hang-up is a
>> pending cache operation that couldn't finish. Check your logs - they should
>> list pending transactions and atomic updates. Search for the "Found long
>> running" substring.
>>
>> Hope this helps.
>>
>> On Fri, Sep 7, 2018 at 11:45 PM, eugene miretsky <
>> eugene.miret...@gmail.com> wrote:
>>
>>> Hello,
>>>
>>> Our cluster occasionally fails with "partition map exchange failure"
>>> errors. I have searched around, and it seems that a lot of people have had a
>>> similar issue in the past. My high-level understanding is that when one of
>>> the nodes fails (out of memory, exception, GC, etc.), nodes fail to exchange
>>> partition maps. However, I have a few questions:
>>> 1) When does partition map exchange happen? Periodically, when a node
>>> joins, etc.?
>>> 2) Is it done in the same thread as communication SPI, or is a separate
>>> worker?
>>> 3) How does the exchange happen? Via a coordinator, peer to peer, etc?
>>> 4) What does the exchange block?
>>> 5) When is the exchange retried?
>>> 6) How to resolve the error? The only thing I have seen online is to
>>> decrease failureDetectionTimeout.
>>>
>>> Our settings are
>>> - Zookeeper SPI
>>> - Persistence enabled
>>>
>>> Cheers,
>>> Eugene
>>>
>>
>>
>>
>> --
>> Best regards,
>> Ilya
>>
>


-- 
Best regards,
Ilya
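To act on point 6 above, the diagnostic substring can be grepped for directly in the node logs. A small sketch against a stand-in log file (the path and log lines are examples, not real Ignite output):

```shell
# Build a tiny stand-in log file; a real Ignite node writes similar
# warnings to its log when a pending operation blocks PME.
printf '%s\n' \
  '[12:00:01][WARN][grid-timeout-worker] Found long running transaction [tx=...]' \
  '[12:00:05][INFO][exchange-worker] Exchange timings ...' \
  > /tmp/ignite-node.log

# List the pending operations most likely holding up PME.
grep -n 'Found long running' /tmp/ignite-node.log
```

On a real deployment the same grep would be pointed at the Ignite work/log directory of each node in the cluster.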


Speakers needed for Apache DC Roadshow

2018-09-11 Thread Rich Bowen
We need your help to make the Apache Washington DC Roadshow on Dec 4th a 
success.


What do we need most? Speakers!

We're bringing a unique DC flavor to this event by mixing Open Source 
Software with talks about Apache projects as well as OSS CyberSecurity, 
OSS in Government, and OSS Career advice.


Please take a look at: http://www.apachecon.com/usroadshow18/

(Note: You are receiving this message because you are subscribed to one 
or more mailing lists at The Apache Software Foundation.)


Rich, for the ApacheCon Planners

--
rbo...@apache.org
http://apachecon.com
@ApacheCon


Re: Error installing Ignite on K8s

2018-09-11 Thread Jeff Simon
Hi Denis,

Yes, all of our apps reside in k8s, so there would be no need for external 
access.

Thanks for the info!

Jeff

From: Denis Magda 
Reply-To: "user@ignite.apache.org" 
Date: Tuesday, September 11, 2018 at 8:30 AM
To: "user@ignite.apache.org" 
Subject: Re: Error installing Ignite on K8s

Jeff,

The sessionAffinity is needed only if you plan to access Ignite cluster 
deployed in K8 from an application deployed outside of it. For instance, it 
will ensure that a remote JDBC session will stick to a specific Ignite pod.

However, if all your applications are deployed in K8 as well then you can 
freely disregard sessionAffinity parameter. Is this your case?

--
Denis

On Tue, Sep 11, 2018 at 10:16 AM Jeff Simon  wrote:
Yes, if I remove session affinity it works. So my question is: does Ignite 
require sessionAffinity? And to be honest, I'm not really sure what we are 
using Ignite for, since it seems to be a multi-purpose app. I think we are 
going to use it for caching. So would session affinity be required for caching? 
I'm guessing the answer would be 'no, it's not required.'

Thanks, Jeff

On 9/10/18, 4:15 PM, "vkulichenko"  wrote:

Does it work without specifying sessionAffinity?

-Val







Re: Error installing Ignite on K8s

2018-09-11 Thread Denis Magda
Jeff,

The sessionAffinity is needed only if you plan to access Ignite cluster
deployed in K8 from an application deployed outside of it. For instance, it
will ensure that a remote JDBC session will stick to a specific Ignite pod.

However, if all your applications are deployed in K8 as well then you can
freely disregard sessionAffinity parameter. Is this your case?

--
Denis

On Tue, Sep 11, 2018 at 10:16 AM Jeff Simon  wrote:

> Yes, if I remove session affinity it works. So my question is: does Ignite
> require sessionAffinity? And to be honest, I'm not really sure what we are
> using Ignite for, since it seems to be a multi-purpose app. I think we are
> going to use it for caching. So would session affinity be required for
> caching? I'm guessing the answer would be 'no, it's not required.'
>
> Thanks, Jeff
>
> On 9/10/18, 4:15 PM, "vkulichenko"  wrote:
>
> Does it work without specifying sessionAffinity?
>
> -Val
>
>
>
>
>
>


Re: The system cache size was slowly increased

2018-09-11 Thread ezhuravlev
Hi,

What do you mean by "system memory cache also grows"? How do you see this?

Evgenii





Re: Error installing Ignite on K8s

2018-09-11 Thread Jeff Simon
Yes, if I remove session affinity it works. So my question is: does Ignite 
require sessionAffinity? And to be honest, I'm not really sure what we are 
using Ignite for, since it seems to be a multi-purpose app. I think we are 
going to use it for caching. So would session affinity be required for caching? 
I'm guessing the answer would be 'no, it's not required.'

Thanks, Jeff

On 9/10/18, 4:15 PM, "vkulichenko"  wrote:

Does it work without specifying sessionAffinity?

-Val







Re: a node fails and restarts in a cluster

2018-09-11 Thread es70
Hi Pavel

I've prepared the logs you requested. Please download them from this link:

https://cloud.mail.ru/public/A9wK/bKGEXK397

hope this will help

regards,
Evgeny





Re: Error with Spark + IGFS (HDFS cache) through Hive

2018-09-11 Thread Evgenii Zhuravlev
Hi,

Do you really need to use Hive here? You can just use the Spark integration
with Ignite, which allows you to run SQL: DataFrame(
https://apacheignite-fs.readme.io/docs/ignite-data-frame) or RDD(
https://apacheignite-fs.readme.io/docs/ignitecontext-igniterdd). For sure,
this solution will work much faster.

Evgenii

Mon, Sep 10, 2018 at 23:08, Maximiliano Patricio Méndez <
mmen...@despegar.com>:

> Hi,
>
> I'm having an LinkageError in spark trying to read a hive table that has
> the external location in IGFS:
> java.lang.LinkageError: loader constraint violation: when resolving field
> "LOG" the class loader (instance of
> org/apache/spark/sql/hive/client/IsolatedClientLoader$$anon$1) of the
> referring class, org/apache/hadoop/fs/FileSystem, and the class loader
> (instance of sun/misc/Launcher$AppClassLoader) for the field's resolved
> type, org/apache/commons/logging/Log, have different Class objects for that
> type
>   at
> org.apache.ignite.hadoop.fs.v1.IgniteHadoopFileSystem.initialize(IgniteHadoopFileSystem.java:255)
>
> From what I can see, the exception comes when Spark tries to read a table
> from Hive through IGFS, passing the "LOG" field of
> FileSystem around to the HadoopIgfsWrapper (and beyond...).
>
> The steps I followed to reach this error were:
>
>- Create a file /tmp/test.parquet in HDFS
>- Create an external table test.test in hive with location =
>igfs://igfs@/tmp/test.parquet
>- Start spark-shell with the command:
>   - ./bin/spark-shell --jars
>   
> $IGNITE_HOME/ignite-core-2.6.0.jar,$IGNITE_HOME/ignite-hadoop/ignite-hadoop-2.6.0.jar,$IGNITE_HOME/ignite-shmem-1.0.0.jar,$IGNITE_HOME/ignite-spark-2.6.0.jar
>   - Read the table through spark.sql
>   - spark.sql("SELECT * FROM test.test")
>
> Is there maybe a way to avoid having this issue? Has anyone used ignite
> through hive as HDFS cache in a similar way?
>
>
>


Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-11 Thread Ilya Kasnacheev
Hello!

So I was increasing the amount of RAM in the memory model, and it turns out
that Off-Heap usage will not grow past:

2018-09-11 12:47:46,603 INFO  [pub-#290] log4j.Log4JLogger
(Log4JLogger.java:566) - #
2018-09-11 12:47:56,605 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - Show metrics inside ignite
c2785f18-983c-490e-8ebc-3198b54ae132
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - Size : 10 of cache contactsEx
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - #
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - >>> Memory Region Name: Default_Region
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - AllocationRate: 0.0
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - PagesFillFactor: 0.8341349
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - PhysicalMemoryPages: 35493
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - OffHeapSize: 209715200
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - CheckpointBufferSize: 0
2018-09-11 12:47:56,606 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - CheckpointBufferPages: 0
2018-09-11 12:47:56,607 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - OffheapUsedSize: 145379328
2018-09-11 12:47:56,607 INFO  [pub-#292] log4j.Log4JLogger
(Log4JLogger.java:566) - #

After half an hour usage is still at this point. So I imagine that storing
this amount of data in Apache Ignite takes 140M and not less. But it won't
grow past this point.

There are a lot of reasons why this number may grow before it reaches a
plateau. There are, obviously, a lot of metadata pages, some of which may not
be allocated immediately. Then there's fragmentation: if you remove an
object from a page and write a slightly larger object, it may not fit that
page, so it will use up space on some other page. PagesFillFactor is the
metric that tracks this.
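The fragmentation effect can be illustrated with a toy first-fit page model. The page size, object sizes, and class names below are made up for illustration and are not Ignite internals:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model: objects are placed first-fit into fixed-size pages. An object
// that no longer fits its page spills onto a new page, lowering the overall
// fill factor even though total data barely grew.
public class FragmentationDemo {
    static final int PAGE_SIZE = 4096;

    static class Page {
        int used;

        boolean tryPut(int size) {
            if (used + size > PAGE_SIZE)
                return false;
            used += size;
            return true;
        }
    }

    final List<Page> pages = new ArrayList<>();

    void put(int size) {
        for (Page p : pages)
            if (p.tryPut(size))
                return;
        Page p = new Page();   // no existing page has room: allocate a new one
        p.tryPut(size);
        pages.add(p);
    }

    double fillFactor() {
        long used = 0;
        for (Page p : pages)
            used += p.used;
        return (double) used / ((long) pages.size() * PAGE_SIZE);
    }

    public static void main(String[] args) {
        FragmentationDemo d = new FragmentationDemo();
        d.put(4000);   // nearly fills page 1
        d.put(200);    // does not fit page 1 -> forces a second page
        System.out.println(d.pages.size());            // 2
        System.out.printf("%.2f%n", d.fillFactor());   // 0.51
    }
}
```

Only 4200 bytes of data end up occupying two 4096-byte pages, which is the kind of overhead PagesFillFactor exposes.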

Note that for such a small cache, PDS will be absolutely dominated by
metadata. On large datasets you will see growth due to fragmentation. But
neither of those is runaway growth. Unfortunately, your reproducer does
not show runaway growth either, so I can't tell you anything further.

Regards,
-- 
Ilya Kasnacheev


Tue, 11 Sep 2018 at 11:28, Serg :

> Hi Ilya,
>
> I created reproducer with two tests
> https://github.com/SergeyMagid/ignite-reproduce-grow-memory
>
> The only difference between these tests is the data inserted into the cache.
> I had previously supposed the problem was caused by BinaryObject only, but I
> reproduced it without BinaryObject too.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Ignite Thin Client Continuous Query

2018-09-11 Thread Igor Sapego
Guys,

Personally, I do not see any problem with implementing
Continuous Queries for thin clients. It will require a decent amount
of work, and will not give guarantees as strong as those thick clients
give (for example, on a server crash the thin client will get an exception and
will need to re-register the listener once again), but to me it seems totally
implementable.

Val,

Why do you think that such features are unlikely to appear in thin clients?

Best Regards,
Igor


On Tue, Sep 11, 2018 at 3:07 PM Alexey Kuznetsov 
wrote:

>
> Gordon,
>
> How about starting several client nodes "near" the cluster and using them as
> "proxies" for your desktop GUI apps?
> You could write some code that will push data from the client node to your
> GUI app.
> This will require some coding, of course.
>
> --
> Alexey Kuznetsov
>


Re: Ignite Thin Client Continuous Query

2018-09-11 Thread Alexey Kuznetsov
Gordon,

How about starting several client nodes "near" the cluster and using them as
"proxies" for your desktop GUI apps?
You could write some code that will push data from the client node to your GUI app.
This will require some coding, of course.

-- 
Alexey Kuznetsov
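The proxy pattern suggested above boils down to: the continuous-query callback on the client node only enqueues updates, and a separate worker drains the queue and pushes them to desktop subscribers. A stdlib-only sketch follows; the Ignite continuous-query wiring is omitted, and UpdateRelay and its methods are illustrative names, not Ignite API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class UpdateRelay {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final List<String> delivered = new ArrayList<>();

    // Called from the continuous-query listener thread: must stay cheap and
    // non-blocking so it never stalls Ignite's event-processing thread.
    public void onCacheUpdate(String update) {
        queue.offer(update);
    }

    // Runs on a dedicated worker thread: slow delivery to GUI clients happens
    // here, decoupled from the listener.
    public void drainOnce() {
        String update = queue.poll();   // non-blocking; null if nothing queued
        if (update != null)
            delivered.add(update);      // stand-in for pushing to a desktop app
    }

    public List<String> delivered() {
        return delivered;
    }

    public static void main(String[] args) {
        UpdateRelay relay = new UpdateRelay();
        relay.onCacheUpdate("key1=10");
        relay.onCacheUpdate("key2=20");
        relay.drainOnce();
        relay.drainOnce();
        System.out.println(relay.delivered()); // [key1=10, key2=20]
    }
}
```

Replacing the in-memory queue with a durable one (Kafka, as Val suggests in the other branch of this thread) gives the delivery guarantees a desktop app would need across restarts.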


Re: How to create tables with JDBC, read with ODBC?

2018-09-11 Thread Igor Sapego
Nice to hear.

Please keep us updated on what QLIK thinks about the issue.

Thank you in advance

Best Regards,
Igor


On Mon, Sep 10, 2018 at 10:50 PM limabean  wrote:

> Thank you very much for the thorough discussion/explanation and pending fix
> for public schemas.  Much appreciated !
>
> As an aside, I also contacted QLIK to see if they will fix their product
> behavior, which does not seem correct to me either.
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>


Re: Unable to connect ignite pods in Kubernetes using Ip-finder

2018-09-11 Thread rishi007bansod
"serviceAccountName: ignite" should be present in the Pod Deployment
specification, as mentioned by Anton in this post:
https://stackoverflow.com/questions/49395481/how-to-setmasterurl-in-ignite-xml-config-for-kubernetes-ipfinder/49405879#49405879
It is currently absent from the "ignite-deployment.yaml" file at
https://apacheignite.readme.io/docs/stateless-deployment

Thanks,
Rishikesh



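For reference, a minimal sketch of the Deployment with that line added. Apart from serviceAccountName, the fields here follow the stock example from the Ignite docs and may differ in your deployment:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ignite-cluster
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ignite
  template:
    metadata:
      labels:
        app: ignite
    spec:
      serviceAccountName: ignite   # the missing line: needed so the
                                   # Kubernetes IP finder can query the API
      containers:
      - name: ignite-node
        image: apacheignite/ignite:2.6.0
```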


Re: Fulltext matching

2018-09-11 Thread Ilya Kasnacheev
Hello!

The only way to know if it will be accepted is to file those tickets and
pull requests (and then write about it on the developers list).

Regards,
-- 
Ilya Kasnacheev


Tue, 11 Sep 2018 at 0:04, Courtney Robinson :

> Hi,
> Thanks for the response.
> I went ahead and implemented a custom indexing SPI. Works like a charm. As
> long as Ignite doesn't drop support for the indexing SPI interface this is
> exactly what we need.
> I'm happy to create Jira issues and extract this into something more
> generic for upstream if it'll be accepted.
>
> Regards,
> Courtney Robinson
> CTO, Hypi
> Tel: +4402032870961 (GMT+0) 
>
> 
> https://hypi.io
>
>
> On Thu, Sep 6, 2018 at 4:09 PM Ilya Kasnacheev 
> wrote:
>
>> Hello!
>>
>> Unfortunately, fulltext doesn't seem to have much traction, so I
>> recommend doing investigations on your side, possibly creating JIRA issues
>> in the process.
>>
>> Regards,
>> --
>> Ilya Kasnacheev
>>
>>
>> Mon, 3 Sep 2018 at 22:34, Courtney Robinson :
>>
>>> Hi,
>>>
>>> We've got Ignite in production and decided to start using some fulltext
>>> matching as well.
>>> I've investigated and can't figure out why my queries are not matching.
>>>
>>> I construct a query entity e.g new QueryEntity(keyClass, valueClass) and
>>> in debug I can see it generates a list of fields
>>> e.g. a, b, c.a, c.b
>>> I then expected to be able to match on those fields that are marked as
>>> indexed. Everything is annotation driven. The appropriate fields have been
>>> annotated and appear to be detected as such
>>> when I inspect what gets put into the QueryEntityDescriptor. i.e. all
>>> expected indices and indexed fields are present.
>>>
>>> In GridLuceneIndex I see that the generated Lucene document has fields
>>> a, b (c.a and c.b are not included). Now a couple of questions arise:
>>>
>>> 1. Is there a way to get Ignite to index the nested fields as well so
>>> that c.a and c.b end up in the doc?
>>>
>>> 2. If you use a composite object as a key, its fields are extracted into
>>> the top level so if you have Key.a and Value.a you cannot index both since
>>> Key.a becomes a which collides with Value.a - can this be changed, are
>>> there any known reasons why it couldn't be (i.e. I'm happy to send a PR
>>> doing so - but I suspect the answer to this is linked to the answer to the
>>> first question)
>>>
>>> 3. The docs simply say you can use Lucene syntax; I presume that means the
>>> syntax that appears in
>>> https://lucene.apache.org/core/2_9_4/queryparsersyntax.html is all
>>> valid - checking the code, that appears to be the case as it uses
>>> a MultiFieldQueryParser in GridLuceneIndex. However, when I try to run a
>>> query such as a: - none of the indexed documents match. In debug
>>> mode I've enabled parser.setAllowLeadingWildcard(true); and if I do a
>>> simple searcher.search * I get back the list of expected documents.
>>>
>>> What's even more odd is that I tried querying each of the 6 indexed fields
>>> as found in idxdFields in GridLuceneIndex and only 1 of them matches. The
>>> other values are typed exactly, but wildcards and other free-text forms do
>>> not match either.
>>>
>>> 4. I couldn't see a way to provide a custom GridLuceneIndex; I found the
>>> two cases where it's constructed in the code base, and it doesn't look like
>>> I can inject instances. Is it OK to construct and use a custom
>>> GridLuceneDirectory/IndexWriter/Searcher and so on in the same way
>>> GridLuceneIndex does it so I can do a custom IndexingSpi to change how
>>> indexing happens?
>>> There are a number of things I'd like to customise, and from looking at
>>> the current impl. these things aren't injectable; I guess it's just not
>>> considered a prime use case.
>>>
>>> Yeah, the analyzer and a number of things would be handy to change.
>>> Ideally also want to customise how a field is indexed e.g. to be able to do
>>> term matches with lucene queries
>>>
>>> Looking at this impl as well it passes Integer.MAX_VALUE and pulls back
>>> all matches. That'll surely kill our nodes for some of the use cases we're
>>> considering.
>>> I'd also like to implement paging, the searcher API has a nice option to
>>> pass through a last doc it can continue from to potentially implement
>>> something like deep-paging.
>>>
>>> 5. If I were to do a custom IndexingSpi to make all of this happen, how
>>> do I get additional parameters through so that I could have paging params
>>> passed
>>>
>>> Ideally I could customise the indexing, searching and paging through
>>> standard Ignite means but I can't find any means of doing that in the
>>> current code and short of doing a custom IndexingSpi I think I've gone as
>>> far as I can debugging and could do with a few pointers of how to go about
>>> this.
>>>
>>> FYI, SQL isn't a great option for this part of the product, we're
>>> generating and compiling Java classes at runtime and generating SQL to do
>>> the queries is an order of magnitude more work 

Re: Off heap constantly grow on use BinaryObject as field of cached data

2018-09-11 Thread Serg
Hi Ilya,

I created reproducer with two tests
https://github.com/SergeyMagid/ignite-reproduce-grow-memory

The only difference between these tests is the data inserted into the cache.
I had previously supposed the problem was caused by BinaryObject only, but I
reproduced it without BinaryObject too.





Re: IgniteUtils NoClassDefFoundError

2018-09-11 Thread Ivan Pavlukhin
Hi Jack,

Could you provide the logs and full console output? A NoClassDefFoundError can
be thrown when the class in question is on the classpath but failed to
initialize (e.g. an exception was thrown from its static initializer).
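This behavior is reproducible with a few lines of plain Java: a class whose static initializer throws yields ExceptionInInitializerError on the first access and NoClassDefFoundError on every subsequent one, even though the class is on the classpath the whole time. BrokenInit and StaticInitDemo below are made-up names for illustration:

```java
// A class that is on the classpath but whose static initialization fails.
class BrokenInit {
    static final int VALUE = fail();   // not a compile-time constant, so
                                       // touching it triggers class init
    private static int fail() {
        throw new RuntimeException("boom in static initializer");
    }
}

public class StaticInitDemo {
    // Touch BrokenInit and report which error class the JVM throws.
    static String touch() {
        try {
            return "ok: " + BrokenInit.VALUE;
        } catch (Throwable t) {
            return t.getClass().getSimpleName();
        }
    }

    // First touch fails inside <clinit>; the class is then marked erroneous,
    // so every later touch throws NoClassDefFoundError instead.
    static final String FIRST = touch();    // ExceptionInInitializerError
    static final String SECOND = touch();   // NoClassDefFoundError

    public static void main(String[] args) {
        System.out.println(FIRST + " then " + SECOND);
        // prints: ExceptionInInitializerError then NoClassDefFoundError
    }
}
```

So the NoClassDefFoundError on IgniteUtils suggests looking earlier in the log for the first failure (an ExceptionInInitializerError or similar) during IgniteUtils initialization, rather than for a missing jar.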

2018-09-11 6:05 GMT+03:00 Jack Lever :

> Hi All,
>
> I'm getting an error on application startup which has me stumped. I've
> imported ignite-core, indexing, slf4j and spring-data via maven, version
> 2.6.0. I'm using Ignite to do some cache operations, basic cross-node
> stuff. However, when I start it, it runs until the static IP discovery
> config or the Ignition.start(config) call, depending on what I have in the
> setup, and then stops with:
>
> Failed to instantiate [i.o.c.IgniteManager]: Constructor threw exception;
> nested exception is java.lang.NoClassDefFoundError: Could not initialize
> class org.apache.ignite.internal.util.IgniteUtils
>
> I can see the class inside IntelliJ, in the jar file under external libraries.
> I can use the class in code, but when I run, it appears to be missing ...
>
> How do I go about fixing this or diagnosing it further?
>
> Thanks,
> Jack.
>



-- 
Best regards,
Ivan Pavlukhin