Re: Issues with sub query IN clause

2018-02-01 Thread Rajesh Kishore
Thanks Dmitriy,

The EXPLAIN PLAN:

[[SELECT
STORE__Z1.ENTRYID AS __C0_0,
STORE__Z1.ATTRNAME AS __C0_1,
STORE__Z1.ATTRVALUE AS __C0_2,
STORE__Z1.ATTRSTYPE AS __C0_3
FROM "dn".IGNITE_DN DN__Z0
/* "dn".IGNITE_DN.__SCAN_ */
/* WHERE ((DN__Z0.PARENTDN LIKE 'dc=ignite,%')
OR ((DN__Z0.RDN = 'dc=ignite')
AND (DN__Z0.PARENTDN = ',')))
AND (DN__Z0.ENTRYID IN(
(SELECT
AT1__Z2.ENTRYID
FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z2
/++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE =
'subentry' ++/
WHERE AT1__Z2.ATTRVALUE = 'subentry')
UNION
(SELECT
AT1__Z3.ENTRYID
FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z3
/++ "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE =
'ldapsubentry' ++/
WHERE AT1__Z3.ATTRVALUE = 'ldapsubentry')))
*/
INNER JOIN "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE STORE__Z1
/* "Ignite_DSAttributeStore".IGNITE_DSATTRIBUTESTORE_ENTRYID_IDX:
ENTRYID = DN__Z0.ENTRYID */
ON 1=1
WHERE (STORE__Z1.ATTRKIND IN('u', 'o'))
AND ((DN__Z0.ENTRYID = STORE__Z1.ENTRYID)
AND (((DN__Z0.PARENTDN LIKE 'dc=ignite,%')
OR ((DN__Z0.RDN = 'dc=ignite')
AND (DN__Z0.PARENTDN = ',')))
AND (DN__Z0.ENTRYID IN(
(SELECT
AT1__Z2.ENTRYID
FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z2
/* "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE =
'subentry' */
WHERE AT1__Z2.ATTRVALUE = 'subentry')
UNION
(SELECT
AT1__Z3.ENTRYID
FROM "objectclass".IGNITE_OBJECTCLASS AT1__Z3
/* "objectclass".OBJECTCLASSNDEXED_ATTRVAL_IDX: ATTRVALUE =
'ldapsubentry' */
WHERE AT1__Z3.ATTRVALUE = 'ldapsubentry')
ORDER BY 1], [SELECT
__C0_0 AS ENTRYID,
__C0_1 AS ATTRNAME,
__C0_2 AS ATTRVALUE,
__C0_3 AS ATTRSTYPE
FROM PUBLIC.__T0
/* "Ignite_DSAttributeStore"."merge_sorted" */
ORDER BY 1
/* index sorted */]]


Thanks
-Rajesh

On Fri, Feb 2, 2018 at 5:32 AM, Dmitriy Setrakyan 
wrote:

> Rajesh, can you please show your query here together with execution plan?
>
> D.
>
>
> On Thu, Feb 1, 2018 at 8:36 AM, Rajesh Kishore 
> wrote:
>
>> Hi Andrey
>> Thanks for your response.
>> I am using native Ignite persistence, saving data locally, and as of now I
>> don't have a distributed cache; I have only one node.
>>
>> Looking at the doc, it does not look like an affinity key is applicable
>> here.
>>
>> Please suggest.
>>
>> Thanks Rajesh
>>
>> On 1 Feb 2018 6:27 p.m., "Andrey Mashenkov" 
>> wrote:
>>
>>> Hi Rajesh,
>>>
>>>
>>> Possibly, your data is not collocated and the subquery returns fewer
>>> results because it executes locally.
>>> Try rewriting the IN into a JOIN and check whether the query with
>>> query#setDistributedJoins(true) returns the expected result.
>>>
>>> It is recommended to:
>>> 1. replace IN with JOIN due to performance issues [1];
>>> 2. use data collocation [2] if possible rather than turning on
>>> distributed joins.
>>>
>>> [1] https://apacheignite-sql.readme.io/docs/performance-and-
>>> debugging#section-sql-performance-and-usability-considerations
>>> [2] https://apacheignite.readme.io/docs/affinity-collocation
>>> #section-collocate-data-with-data
>>>
>>> On Thu, Feb 1, 2018 at 3:44 PM, Rajesh Kishore 
>>> wrote:
>>>
 Hi All,

 As of now, we have less than 1 M records, and the attributes are split into
 a few (3) tables with indexes created.
 We are using a combination of a join and an IN clause (sub query) in the SQL
 query; for some reason this query does not return any response.
 But the moment we remove the IN clause and use just the join, the
 query returns the result.
 Note that as per the EXPLAIN PLAN, the sub query also seems to be using
 the defined indexes.

 What are the recommendations for using such queries? Are there any
 guidelines? What are we doing wrong here?

 Thanks,
 Rajesh







>>>
>>>
>>> --
>>> Best regards,
>>> Andrey V. Mashenkov
>>>
>>
>

package org.kishore.backends.ignite;

import org.apache.ignite.cache.query.annotations.QuerySqlField;

/**
 * The DN value object for the DNs.
 */
public final class Ignite_DN
{
  /**
   * The id.
   */
  @QuerySqlField(index = true)
  private Long id;

  @QuerySqlField(orderedGroups = { @QuerySqlField.Group(name = "EP_DN_IDX",
      order = 0) })
  private Long entryID;

  @QuerySqlField(orderedGroups = { @QuerySqlField.Group(name = "RP_DN_IDX",
      order = 1) })
  private String rdn;

  @QuerySqlField(orderedGroups = {
      @QuerySqlField.Group(name = "EP_DN_IDX", order = 1),
      @QuerySqlField.Group(name = "RP_DN_IDX", order = 0) })
  private String parentDN;

  public Long getId() {
    return id;
  }

  public Long getEntryID() {
    return entryID;
  }

  public String getRdn() {
    return rdn;
  }

  public String getParentDN() {
    return parentDN;
  }
}

Re: Issues with sub query IN clause

2018-02-01 Thread Dmitriy Setrakyan
Rajesh, can you please show your query here together with execution plan?

D.

On Thu, Feb 1, 2018 at 8:36 AM, Rajesh Kishore 
wrote:

> Hi Andrey
> Thanks for your response.
> I am using native Ignite persistence, saving data locally, and as of now I
> don't have a distributed cache; I have only one node.
>
> Looking at the doc, it does not look like an affinity key is applicable
> here.
>
> Please suggest.
>
> Thanks Rajesh
>
> On 1 Feb 2018 6:27 p.m., "Andrey Mashenkov" 
> wrote:
>
>> Hi Rajesh,
>>
>>
>> Possibly, your data is not collocated and the subquery returns fewer
>> results because it executes locally.
>> Try rewriting the IN into a JOIN and check whether the query with
>> query#setDistributedJoins(true) returns the expected result.
>>
>> It is recommended to:
>> 1. replace IN with JOIN due to performance issues [1];
>> 2. use data collocation [2] if possible rather than turning on
>> distributed joins.
>>
>> [1] https://apacheignite-sql.readme.io/docs/performance-and-
>> debugging#section-sql-performance-and-usability-considerations
>> [2] https://apacheignite.readme.io/docs/affinity-collocation
>> #section-collocate-data-with-data
>>
>> On Thu, Feb 1, 2018 at 3:44 PM, Rajesh Kishore 
>> wrote:
>>
>>> Hi All,
>>>
>>> As of now, we have less than 1 M records, and the attributes are split
>>> into a few (3) tables with indexes created.
>>> We are using a combination of a join and an IN clause (sub query) in the
>>> SQL query; for some reason this query does not return any response.
>>> But the moment we remove the IN clause and use just the join, the query
>>> returns the result.
>>> Note that as per the EXPLAIN PLAN, the sub query also seems to be using
>>> the defined indexes.
>>>
>>> What are the recommendations for using such queries? Are there any
>>> guidelines? What are we doing wrong here?
>>>
>>> Thanks,
>>> Rajesh
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>>
>> --
>> Best regards,
>> Andrey V. Mashenkov
>>
>


Re: Upcoming Apache Ignite events this month

2018-02-01 Thread Dmitriy Setrakyan
Great to see such a busy schedule!

The Ignite community is unstoppable :)

D.

On Thu, Feb 1, 2018 at 3:19 PM, Tom Diederich 
wrote:

> Igniters,
>
> The following is a list of upcoming events in February. To view this list
> from the Ignite events page, click here
> .
>
> *Tokyo*
>
> *February 1:*  *Meetup*: Meet Apache Ignite In-Memory Computing Platform
>
>  Join Roman Shtykh at the Tech it Easy- Tokyo Meetup for an introductory
> talk on Apache Ignite.
>
>  In this talk you will learn about Apache Ignite memory-centric
> distributed database, caching, and processing platform. Roman will explain
> how one can do distributed computing, and use SQL with horizontal
> scalability and high availability of NoSQL systems with Apache Ignite.
>
>  Only six spots left so RSVP now! http://bit.ly/2nygyRI
>
>  *San Francisco Bay Area*
>
>  *February 7*: *Conference talk:* Apache Ignite Service Grid: Foundation
> of Your Microservices-Based Solution
>
>  Denis Magda will be attending DeveloperWeek 2018 in San Francisco to
> deliver a presentation that provides a step-by-step guide on how to build a
> fault-tolerant and scalable microservices-based solution using Apache
> Ignite's Service Grid and other components to resolve these aforementioned
> issues.
>
>  Details here: http://bit.ly/2BHwFBr
>
>
>
> *London*
>
>  *February 7:* *Meetup:* Building consistent and highly available
> distributed systems with Apache Ignite
>
>  Akmal Chaudhri will speak at the inaugural gathering of the London
> In-Memory Computing Meetup.
>
>  He'll explain that while it is well known that there is a tradeoff
> between data consistency and high availability, there are many applications
> that require very strong consistency guarantees. Making such applications
> highly available can be a significant challenge. Akmal will explain how to
> overcome these challenges.
>
> This will be an outstanding event with free food and beverages. Space is
> limited, however. RSVP now to reserve your spot (you may also include 2
> guests).
>
> http://bit.ly/2BH893c
>
>
> *Boston*
>
> *February 12: Meetup*: Turbocharge your MySQL queries in-memory with
> Apache Ignite
>
> Fotios Filacouris will be the featured speaker at the Boston MySQL Meetup
> Group.
>
> The abstract of his talk: Apache Ignite is a unique data management
> platform that is built on top of a distributed key-value storage and
> provides full-fledged MySQL support. Attendees will learn how Apache Ignite
> handles auto-loading of a MySQL schema and data from PostgreSQL, supports
> MySQL indexes, supports compound indexes, and various forms of MySQL
> queries including distributed MySQL joins.
>
> Space is limited so RSVP today! http://bit.ly/2DP8W44
>
>
> *Boston*
>
> *February 13: Meetup* -- Java and In-Memory Computing: Apache Ignite
>
>  Fotios Filacouris will speak at the Boston Java Meetup Group
>
> In his talk, Foti will introduce the many components of the open-source
> Apache Ignite. Meetup members, as Java professionals, will learn how to
> solve some of the most demanding scalability and performance challenges.
> He’ll also cover a few typical use cases and work through some code
> examples. Attendees would leave ready to fire up their own database
> deployments!
>
> RSVP here: http://bit.ly/2BJ1nde
>
>
>
> *Sydney, Australia  *
>
> *February 13: Meetup:* Ignite your Cassandra Love Story: Caching
> Cassandra with Apache Ignite
>
> Rachel Pedreschi will be the guest speaker at the Sydney Cassandra Users
> Meetup. In this session attendees will learn how Apache Ignite can
> turbocharge a Cassandra cluster without sacrificing availability
> guarantees. In this talk she'll cover:
>
>
>
>- An overview of the Apache Ignite architecture
>- How to deploy Apache Ignite in minutes on top of Cassandra
>- How companies use this powerful combination to handle extreme OLTP
>workloads
>
>
>  RSVP now to secure your spot: http://bit.ly/2sydneytalk
>
>
>
> * February 14: Webinar:*  Getting Started with Apache® Ignite™ as a
> Distributed Database
>
> Join presenter Valentin Kulichenko in this live webinar featuring Apache
> Ignite native persistence --  a distributed ACID and SQL-compliant store
> that turns Apache Ignite into a full-fledged distributed SQL database.
>
>  In this webinar, Valentin will:
>
>
>
>-  Explain what native persistence is, and how it works
>- Show step-by-step how to set up Apache Ignite with native persistence
>- Explain the best practices for configuration and tuning
>
>
> RSVP now to reserve your spot: http://bit.ly/2E0SWiS
>
>
>
> *Copenhagen*
>
> *February 14: Meetup: *Apache Ignite: the in-memory hammer in your data
> science toolkit
>
> Akmal Chaudhri will be the guest speaker at the Symbion IoT Meetup
> (Copenhagen, Denmark). In this presentation, Akmal will explain some of the
> main components of Apache Ignite, such as the Compute Grid, Data Grid and
> the Machine 

Upcoming Apache Ignite events this month

2018-02-01 Thread Tom Diederich
Igniters, 

The following is a list of upcoming events in February. To view this list from 
the Ignite events page, click here . 

Tokyo

February 1:  Meetup: Meet Apache Ignite In-Memory Computing Platform 

 Join Roman Shtykh at the Tech it Easy- Tokyo Meetup for an introductory talk 
on Apache Ignite.


 In this talk you will learn about Apache Ignite memory-centric distributed 
database, caching, and processing platform. Roman will explain how one can do 
distributed computing, and use SQL with horizontal scalability and high 
availability of NoSQL systems with Apache Ignite.


 Only six spots left so RSVP now! http://bit.ly/2nygyRI  


 San Francisco Bay Area

 February 7: Conference talk: Apache Ignite Service Grid: Foundation of Your 
Microservices-Based Solution


 Denis Magda will be attending DeveloperWeek 2018 in San Francisco to deliver a 
presentation that provides a step-by-step guide on how to build a 
fault-tolerant and scalable microservices-based solution using Apache Ignite's 
Service Grid and other components to resolve these aforementioned issues.


 Details here: http://bit.ly/2BHwFBr  


 

London

 February 7: Meetup: Building consistent and highly available distributed 
systems with Apache Ignite


 Akmal Chaudhri will speak at the inaugural gathering of the London In-Memory 
Computing Meetup.


 He'll explain that while it is well known that there is a tradeoff between 
data consistency and high availability, there are many applications that 
require very strong consistency guarantees. Making such applications highly 
available can be a significant challenge. Akmal will explain how to overcome 
these challenges.


This will be an outstanding event with free food and beverages. Space is 
limited, however. RSVP now to reserve your spot (you may also include 2 guests).


http://bit.ly/2BH893c  



Boston

February 12: Meetup: Turbocharge your MySQL queries in-memory with Apache 
Ignite 

Fotios Filacouris will be the featured speaker at the Boston MySQL Meetup Group.


The abstract of his talk: Apache Ignite is a unique data management platform 
that is built on top of a distributed key-value storage and provides 
full-fledged MySQL support. Attendees will learn how Apache Ignite handles 
auto-loading of a MySQL schema and data from PostgreSQL, supports MySQL 
indexes, supports compound indexes, and various forms of MySQL queries 
including distributed MySQL joins.


Space is limited so RSVP today! http://bit.ly/2DP8W44  




Boston

February 13: Meetup -- Java and In-Memory Computing: Apache Ignite


 Fotios Filacouris will speak at the Boston Java Meetup Group


In his talk, Foti will introduce the many components of the open-source Apache 
Ignite. Meetup members, as Java professionals, will learn how to solve some of 
the most demanding scalability and performance challenges. He’ll also cover a 
few typical use cases and work through some code examples. Attendees would 
leave ready to fire up their own database deployments!


RSVP here: http://bit.ly/2BJ1nde  


 
Sydney, Australia  

February 13: Meetup: Ignite your Cassandra Love Story: Caching Cassandra with 
Apache Ignite


Rachel Pedreschi will be the guest speaker at the Sydney Cassandra Users 
Meetup. In this session attendees will learn how Apache Ignite can turbocharge 
a Cassandra cluster without sacrificing availability guarantees. In this talk 
she'll cover:



An overview of the Apache Ignite architecture
How to deploy Apache Ignite in minutes on top of Cassandra
How companies use this powerful combination to handle extreme OLTP workloads


 RSVP now to secure your spot: http://bit.ly/2sydneytalk 
 


 

 February 14: Webinar:  Getting Started with Apache® Ignite™ as a Distributed 
Database


Join presenter Valentin Kulichenko in this live webinar featuring Apache Ignite 
native persistence --  a distributed ACID and SQL-compliant store that turns 
Apache Ignite into a full-fledged distributed SQL database.


 In this webinar, Valentin will:



 Explain what native persistence is, and how it works
Show step-by-step how to set up Apache Ignite with native persistence
Explain the best practices for configuration and tuning


RSVP now to reserve your spot: http://bit.ly/2E0SWiS  


 

Copenhagen

February 14: Meetup: Apache Ignite: the in-memory hammer in your data science 
toolkit


Akmal Chaudhri will be the guest speaker at the Symbion IoT Meetup (Copenhagen, 
Denmark). In this presentation, Akmal will explain some of the main components 
of Apache Ignite, such as the Compute Grid, Data Grid and the Machine Learning 
Grid. Through examples, attendees will learn how Apache Ignite can be used for 
data analysis.


This meetup is free but an RSVP is required to secure your spot: 
http://bit.ly/2DSO1Bi 

Re: NPE in attempt to load checkpoint

2018-02-01 Thread Artёm Basov
Hi Val.

Sorry, I forgot to specify the version.
It is ver. 2.3.0#20171028-sha1:8add7fd5.

As for the reproducer, I'll try to make it tomorrow.



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [Ignite 2.0.0] Stopping the node in order to prevent cluster wide instability.

2018-02-01 Thread Nikolay Izhikov
Hello, Valentin.

I try to take a look at this bug.


On Thu, 01/02/2018 at 12:35 -0700, vkulichenko wrote:
> Well, then you need IGNITE-3653 to be fixed I believe. Unfortunately, it's
> not assigned to anyone currently, so apparently no one is working on it. Are
> you willing to pick it up and contribute?
> 
> -Val
> 
> 
> 
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/



Re: Discovery node port range

2018-02-01 Thread vkulichenko
By standalone cluster I just mean a regular Ignite cluster running
independently from Spark. The easiest way to start a node is to use the
ignite.sh script, providing a proper configuration file.

Once you switch IgniteContext to standalone mode, all nodes started within
Spark processes will run in client mode and will only be used to access the
cluster. All the data will be on server nodes, so Spark lifecycle will never
cause rebalancing or data loss.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: how to create instance of CacheManager of ignite

2018-02-01 Thread vkulichenko
ak47,

What exactly doesn't work? Can you describe the issue you're having?

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: JDBC Thin Client with Transactions

2018-02-01 Thread vkulichenko
If client node based driver [1] is used, then you can also add
transactionsAllowed=true parameter to URL to overcome this error.

This will NOT enable transactional support, but will force the driver to go
through transaction related methods without exceptions. So for example, if
there are several updates executed in a transaction, they will be actually
executed as individual updates.

Obviously this option should be used with care, and most likely not in
production environments. But if you're testing integration of Ignite with
some external tools, it can be useful until full transactional support is
available.

[1] https://apacheignite-sql.readme.io/docs/jdbc-client-driver
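As a hedged sketch of what this looks like in practice (the cache name and
config path below are made-up examples, not from the thread), the flag is
appended as a parameter before the `@` in the client-driver connection string:

```
jdbc:ignite:cfg://cache=myCache:transactionsAllowed=true@file:///etc/ignite/ignite-config.xml
```

See the client-driver documentation linked above for the full list of URL
parameters.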

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: ignite.sh spring xml file secret.properties file not found error

2018-02-01 Thread vkulichenko
Ganesh,

If you provide full path, you don't need classpath: prefix. If you choose to
have this file on classpath, then you should use the prefix and then provide
the path relative to one of classpath roots. Also note that it has to be on
classpath of your application, I don't know if $CLASSPATH variable somehow
affects this in your environment.
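The two alternatives Val describes might look like this in a Spring XML
config; this is only an illustrative sketch, and the bean choice
(PropertyPlaceholderConfigurer) and file paths are assumptions, not taken
from Ganesh's actual configuration:

```xml
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <!-- Option 1: a full path, with no classpath: prefix -->
    <property name="location" value="file:/opt/ignite/config/secret.properties"/>
    <!-- Option 2: a resource resolved relative to a classpath root -->
    <!-- <property name="location" value="classpath:secret.properties"/> -->
</bean>
```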

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Discovery node port range

2018-02-01 Thread vkulichenko
Ranjit,

Generally, removing and adding nodes in an unpredictable way (which happens in
embedded mode because we basically rely on Spark here) is a very bad
anti-pattern when working with distributed data. It can have serious
performance implications as well as cause data loss.

Data nodes are supposed to be relatively stable, so having a standalone Ignite
cluster is the correct way to architect this.

-Val



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: [Ignite 2.0.0] Stopping the node in order to prevent cluster wide instability.

2018-02-01 Thread 1MoreIgniteUser
I tried that, and it does work with peerClassLoading disabled, but I needed it
enabled. We have a microservice setup, so lots of different things interact
with our Ignite cluster to get data: continuous queries, regular SQL queries,
and multiple apps submitting runnables into the cluster. We need
peerClassLoading because it would be really ugly code for us to statically
tell Ignite all the different classes it is going to be asked to use.

Thankfully, I was able to figure this out by reading more about
peerClassLoading on the Ignite API website:
https://apacheignite.readme.io/docs/zero-deployment 

I have added a few more properties to my ignite config (and the client
connection ignite config) and it's working now.









So I used deployment mode CONTINUOUS instead of the default SHARED, and I've
eliminated the missedResourceCacheSize to ensure that all the nodes have the
same info on the classes they need.
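The original config XML did not survive the archive, so the following is only
a sketch reconstructed from the properties named above; the exact values are
assumptions:

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- keep peer class loading on, as the microservice setup requires -->
    <property name="peerClassLoadingEnabled" value="true"/>
    <!-- CONTINUOUS instead of the default SHARED deployment mode -->
    <property name="deploymentMode" value="CONTINUOUS"/>
    <!-- size 0 disables the missed-resources cache, so nodes re-request classes -->
    <property name="peerClassLoadingMissedResourcesCacheSize" value="0"/>
</bean>
```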




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Issues with sub query IN clause

2018-02-01 Thread Rajesh Kishore
Hi Andrey
Thanks for your response.
I am using native Ignite persistence, saving data locally, and as of now I
don't have a distributed cache; I have only one node.

Looking at the doc, it does not look like an affinity key is applicable
here.

Please suggest.

Thanks Rajesh

On 1 Feb 2018 6:27 p.m., "Andrey Mashenkov" 
wrote:

> Hi Rajesh,
>
>
> Possibly, your data is not collocated and the subquery returns fewer
> results because it executes locally.
> Try rewriting the IN into a JOIN and check whether the query with
> query#setDistributedJoins(true) returns the expected result.
>
> It is recommended to:
> 1. replace IN with JOIN due to performance issues [1];
> 2. use data collocation [2] if possible rather than turning on distributed
> joins.
>
> [1] https://apacheignite-sql.readme.io/docs/performance-
> and-debugging#section-sql-performance-and-usability-considerations
> [2] https://apacheignite.readme.io/docs/affinity-
> collocation#section-collocate-data-with-data
>
> On Thu, Feb 1, 2018 at 3:44 PM, Rajesh Kishore 
> wrote:
>
>> Hi All,
>>
>> As of now, we have less than 1 M records, and the attributes are split
>> into a few (3) tables with indexes created.
>> We are using a combination of a join and an IN clause (sub query) in the
>> SQL query; for some reason this query does not return any response.
>> But the moment we remove the IN clause and use just the join, the query
>> returns the result.
>> Note that as per the EXPLAIN PLAN, the sub query also seems to be using
>> the defined indexes.
>>
>> What are the recommendations for using such queries? Are there any
>> guidelines? What are we doing wrong here?
>>
>> Thanks,
>> Rajesh
>>
>>
>>
>>
>>
>>
>>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


NPE in attempt to load checkpoint

2018-02-01 Thread Artёm Basov
Hi,

I got the exception [1] after the 2nd execution (on a 1-node grid) of my
IgniteRunnable implementation, which looks like this:

@ComputeTaskSessionFullSupport
public class ProcessStream implements IgniteRunnable {
    private static final long serialVersionUID = 6894222808783502630L;

    private static final String LAST_TIMESTAMP = "LAST_TIMESTAMP";

    @TaskSessionResource
    private transient ComputeTaskSession ses;

    @LoggerResource
    private transient IgniteLogger log;

    @Override
    public void run() {
        AudioFrameSource stream = ...;
        Duration last = ses.loadCheckpoint(LAST_TIMESTAMP);

        Consumer detector = ...;
        AudioFrame frame = null;
        try {
            while (!Thread.currentThread().isInterrupted()) {
                frame = stream.nextFrame();
                if (frame == null) break;
                if (last != null) {
                    // notify processing resumed
                }
                // do some long work
                ses.saveCheckpoint(LAST_TIMESTAMP, frame.pts());
                frame.close();
            }
        } catch (Exception e) {
            //...
            throw new ClusterTopologyException("Retry", e);
        }
    }
}

where "frame.pts()" returns an object of the following class:

public class Duration {
private final long timeUnits;
private final java.time.Duration duration;
private final double timeBase;

// ctors, getters, etc.
}

configuration of the CheckpointingSpi:








There are no [other] mappings/bindings/etc. of the Duration class to Ignite.

Is there anything I missed in order to get it working?


[1] - 2018-02-01 17:17:24.013 ERROR 18740 --- [pub-#52%stream%]
o.a.i.i.processors.task.GridTaskWorker   : Failed to obtain remote job
result policy for result from ComputeTask.result(..) method (will fail the
whole task): GridJobResultImpl [job=C4 [r=grid.ProcessStream@1b573de0],
sib=GridJobSiblingImpl
[sesId=b6f0ab15161-f04a2d3b-b4b1-4ca2-94b1-b6edaf9aed55,
jobId=d6f0ab15161-f04a2d3b-b4b1-4ca2-94b1-b6edaf9aed55,
nodeId=f04a2d3b-b4b1-4ca2-94b1-b6edaf9aed55, isJobDone=false],
jobCtx=GridJobContextImpl
[jobId=d6f0ab15161-f04a2d3b-b4b1-4ca2-94b1-b6edaf9aed55, timeoutObj=null,
attrs={}], node=TcpDiscoveryNode [id=f04a2d3b-b4b1-4ca2-94b1-b6edaf9aed55,
addrs=[0:0:0:0:0:0:0:1, 127.0.0.1, 192.168.6.251],
sockAddrs=[/0:0:0:0:0:0:0:1:47500, /127.0.0.1:47500,
Hostname./192.168.6.251:47500], discPort=47500, order=1, intOrder=1,
lastExchangeTime=1517494606256, loc=true, ver=2.3.0#20171028-sha1:8add7fd5,
isClient=false], ex=class o.a.i.IgniteException: null, hasRes=true,
isCancelled=false, isOccupied=true]

org.apache.ignite.IgniteException: Remote job threw user exception (override
or implement ComputeTask.result(..) method if you would like to have
automatic failover for this exception).
at
org.apache.ignite.compute.ComputeTaskAdapter.result(ComputeTaskAdapter.java:101)
at
org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1047)
at
org.apache.ignite.internal.processors.task.GridTaskWorker$5.apply(GridTaskWorker.java:1040)
at
org.apache.ignite.internal.util.IgniteUtils.wrapThreadLoader(IgniteUtils.java:6663)
at
org.apache.ignite.internal.processors.task.GridTaskWorker.result(GridTaskWorker.java:1040)
at
org.apache.ignite.internal.processors.task.GridTaskWorker.onResponse(GridTaskWorker.java:858)
at
org.apache.ignite.internal.processors.task.GridTaskProcessor.processJobExecuteResponse(GridTaskProcessor.java:1066)
at
org.apache.ignite.internal.processors.task.GridTaskProcessor$JobMessageListener.onMessage(GridTaskProcessor.java:1301)
at
org.apache.ignite.internal.managers.communication.GridIoManager.invokeListener(GridIoManager.java:1555)
at
org.apache.ignite.internal.managers.communication.GridIoManager.access$4100(GridIoManager.java:126)
at
org.apache.ignite.internal.managers.communication.GridIoManager$GridCommunicationMessageSet.unwind(GridIoManager.java:2751)
at
org.apache.ignite.internal.managers.communication.GridIoManager.unwindMessageSet(GridIoManager.java:1515)
at
org.apache.ignite.internal.managers.communication.GridIoManager.processOrderedMessage(GridIoManager.java:1472)
at
org.apache.ignite.internal.managers.communication.GridIoManager.send(GridIoManager.java:1627)
at
org.apache.ignite.internal.managers.communication.GridIoManager.sendOrderedMessage(GridIoManager.java:1749)
at
org.apache.ignite.internal.processors.job.GridJobWorker.finishJob(GridJobWorker.java:916)
at
org.apache.ignite.internal.processors.job.GridJobWorker.finishJob(GridJobWorker.java:773)
at
org.apache.ignite.internal.processors.job.GridJobWorker.execute0(GridJobWorker.java:625)
at
org.apache.ignite.internal.processors.job.GridJobWorker.body(GridJobWorker.java:489)
at

The wrong sockAddrs are registered, and the cluster breaks when it occasionally tries to connect to them.

2018-02-01 Thread dark
The cluster was broken some time ago. In my opinion, it seems to use the
Docker IP, not the normal host IP, in communication. Is it possible to
register only the IP of the normal host?

I want to remove 127.0.0.1 and 172.17.0.1 as shown in the log below.

How can I register only 10.xxx.xxx.x in the Ignite cluster communication
sockAddrs?

WARN  2018-01-31 03:34:15 [tcp-disco-msg-worker-#2%null%]
o.a.ignite.logger.java.JavaLogger.warning:278 - Received EVT_NODE_FAILED
event with warning [nodeInitiatedEvt=TcpDiscoveryNode
[id=9c9ee88b-d462-4ef2-9be7-edd21e01a7eb, addrs=[10.xxx.xxx.x, 127.0.0.1,
172.17.0.1],
sockAddrs=[ip-172-17-0-1.ap-northeast-2.compute.internal/172.17.0.1:47500,
/127.0.0.1:47500, /10.xxx.xxx.x:47500], discPort=47500, order=131,
intOrder=74, lastExchangeTime=1516101623966, loc=false,
ver=2.0.0#20170430-sha1:d4eef3c6, isClient=false], msg=TcpCommunicationSpi
failed to establish connection to node [rmtNode=TcpDiscoveryNode
[id=65dcc3d9-3593-4b5e-9b80-c398abf9806f, addrs=[10.xxx.xxx.x, 127.0.0.1,
172.17.0.1],
sockAddrs=[ip-172-17-0-1.ap-northeast-2.compute.internal/172.17.0.1:47500,
/10.xxx.xxx.x:47500, /127.0.0.1:47500], discPort=47500, order=135,
intOrder=78, lastExchangeTime=1516101638624, loc=false,
ver=2.0.0#20170430-sha1:d4eef3c6, isClient=false], errs=class
o.a.i.IgniteCheckedException: Failed to connect to node (is node still
alive?). Make sure that each ComputeTask and cache Transaction has a timeout
set in order to prevent parties from waiting forever in case of network
issues [nodeId=65dcc3d9-3593-4b5e-9b80-c398abf9806f,
addrs=[ip-172-17-0-1.ap-northeast-2.compute.internal/172.17.0.1:47100,
ip-10-xxx-xxx-x.ap-northeast-2.compute.internal/10.xxx.xxx.x:47100,
/127.0.0.1:47100]], connectErrs=[class o.a.i.IgniteCheckedException: Failed
to connect to address:
ip-172-17-0-1.ap-northeast-2.compute.internal/172.17.0.1:47100, class
o.a.i.IgniteCheckedException: Failed to connect to address:
ip-10-xxx-xxx-x.ap-northeast-2.compute.internal/10.xxx.xxx.x:47100, class
o.a.i.IgniteCheckedException: Failed to connect to address:
/127.0.0.1:47100]]]

http://apache-ignite-users.70518.x6.nabble.com/How-to-correctly-shut-down-Ignite-Application-td12548.html

-Djava.net.preferIPv4Stack=true has not been applied yet. Or is this
related?

To summarize, I have two questions.

1. Is it possible to register only the IP of the normal host?
(i.e. exclude 127.0.0.1 and 172.17.0.1, the Docker container IP)

2. -Djava.net.preferIPv4Stack=true has not been applied yet. Is this
issue related?
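For question 1, one commonly used knob is IgniteConfiguration's localHost
property, which binds Ignite to a single interface so that only that address
is advertised to other nodes. A hedged sketch (whether this fully covers the
Docker case described here is an assumption to verify):

```xml
<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- bind discovery and communication to one interface only -->
    <property name="localHost" value="10.xxx.xxx.x"/>
</bean>
```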

Thanks.




--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/


Re: Issues with sub query IN clause

2018-02-01 Thread Andrey Mashenkov
Hi Rajesh,


Possibly, your data is not collocated and the subquery returns fewer results
because it executes locally.
Try rewriting the IN into a JOIN and check whether the query with
query#setDistributedJoins(true) returns the expected result.

It is recommended to:
1. replace IN with JOIN due to performance issues [1];
2. use data collocation [2] if possible rather than turning on distributed
joins.

[1]
https://apacheignite-sql.readme.io/docs/performance-and-debugging#section-sql-performance-and-usability-considerations
[2]
https://apacheignite.readme.io/docs/affinity-collocation#section-collocate-data-with-data
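The IN → JOIN rewrite suggested above can be sketched generically. The snippet
below uses Python's sqlite3 purely as a stand-in engine, with simplified,
made-up versions of the table and column names from the thread; the rewrite in
Ignite SQL has the same shape:

```python
import sqlite3

# Two tiny tables shaped like the ones in the thread (names simplified).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dn (entryid INTEGER, parentdn TEXT)")
cur.execute("CREATE TABLE objectclass (entryid INTEGER, attrvalue TEXT)")
cur.executemany("INSERT INTO dn VALUES (?, ?)",
                [(1, "dc=ignite,"), (2, "dc=ignite,"), (3, "dc=other,")])
cur.executemany("INSERT INTO objectclass VALUES (?, ?)",
                [(1, "subentry"), (2, "ldapsubentry"), (3, "subentry")])

# Original shape: filter through an IN (subquery) clause.
in_rows = cur.execute("""
    SELECT d.entryid FROM dn d
    WHERE d.parentdn LIKE 'dc=ignite,%'
      AND d.entryid IN (SELECT o.entryid FROM objectclass o
                        WHERE o.attrvalue IN ('subentry', 'ldapsubentry'))
    ORDER BY 1""").fetchall()

# Rewritten shape: the same predicate expressed as an inner join.
join_rows = cur.execute("""
    SELECT DISTINCT d.entryid FROM dn d
    JOIN objectclass o ON o.entryid = d.entryid
    WHERE d.parentdn LIKE 'dc=ignite,%'
      AND o.attrvalue IN ('subentry', 'ldapsubentry')
    ORDER BY 1""").fetchall()

print(in_rows)
print(join_rows)
```

On Ignite itself, the joined form would additionally be run with
SqlFieldsQuery#setDistributedJoins(true) when the data is not collocated, as
noted above; collocating the tables by entryID avoids the need for distributed
joins entirely.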

On Thu, Feb 1, 2018 at 3:44 PM, Rajesh Kishore 
wrote:

> Hi All,
>
> As of now, we have less than 1 M records , and attribute split into few(3)
> tables
> with index created.
> We are using combination of join &  IN clause(sub query) in the SQL query
> , for some reason this query does not return any response.
> But, the moment we remove the IN clause and use just the join, the query
> returns the result.
> Note that as per EXPLAIN PLAN , the sub query also seems to be using the
> defined
> indexes.
>
> What are the recommendations for using such queries , are there any
> guidelines, What we are doing wrong here?
>
> Thanks,
> Rajesh
>
>
>
>
>
>
>


-- 
Best regards,
Andrey V. Mashenkov


Issues with sub query IN clause

2018-02-01 Thread Rajesh Kishore
Hi All,

As of now, we have less than 1 M records, and the attributes are split into a
few (3) tables with indexes created.
We are using a combination of a join and an IN clause (sub query) in the SQL
query; for some reason this query does not return any response.
But the moment we remove the IN clause and use just the join, the query
returns the result.
Note that as per the EXPLAIN PLAN, the sub query also seems to be using the
defined indexes.

What are the recommendations for using such queries? Are there any
guidelines? What are we doing wrong here?

Thanks,
Rajesh


Re: CacheStore example for partitioned cache backed with a postgres database

2018-02-01 Thread Andrey Mashenkov
Pim,

Why did you disable the WAL? Are you using native Ignite persistence
together with a CacheStore?
If so, that is not supported: either native persistence or a CacheStore
should be used, but not both.
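A minimal sketch of a cache wired to a third-party store without native persistence — MyPostgresStore, the cache name, and the SQL hinted in comments are all hypothetical placeholders:

```java
import java.io.Serializable;

import javax.cache.Cache;
import javax.cache.configuration.FactoryBuilder;

import org.apache.ignite.cache.store.CacheStoreAdapter;
import org.apache.ignite.configuration.CacheConfiguration;

public class StoreConfigExample {
    /** Hypothetical store; a real one would hold the Postgres JDBC logic. */
    public static class MyPostgresStore extends CacheStoreAdapter<Long, Object>
        implements Serializable {
        @Override public Object load(Long key) { /* SELECT ... WHERE id = ? */ return null; }
        @Override public void write(Cache.Entry<? extends Long, ? extends Object> e) { /* INSERT/UPDATE */ }
        @Override public void delete(Object key) { /* DELETE ... WHERE id = ? */ }
    }

    static CacheConfiguration<Long, Object> personCacheCfg() {
        CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("personCache");

        ccfg.setCacheStoreFactory(FactoryBuilder.factoryOf(MyPostgresStore.class));
        // Without these flags the store's write()/delete() are never invoked.
        ccfg.setReadThrough(true);
        ccfg.setWriteThrough(true);

        // Do NOT also enable Ignite native persistence for this cache's
        // data region when a 3rd-party store is used.
        return ccfg;
    }
}
```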



On Sat, Jan 20, 2018 at 7:34 PM, Pim D  wrote:

> Hi,
>
> I can't seem to find a good example on how to implement a CacheStoreAdapter
> that also writes cache updates to a postgres database.
>
> I've tried the standard CacheJdbcStoreFactory, yet it does not write the
> updates (even with read and write through set to true and WAL disabled).
> I've created my own implementation of the CacheStoreAdapter and load(key) is
> being called (and working), but Ignite does not call the write nor the
> delete actions of the store.
>
> I've been looking at the cassandra example, but that one is way more
> complex
> and seems to require a lot more than I need (and thus adds even more
> complexity).
>
> Many thanks in advance,
> Pim
>
>
>
> --
> Sent from: http://apache-ignite-users.70518.x6.nabble.com/
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Option meta schema in cache level?

2018-02-01 Thread Andrey Mashenkov
Hi,

To get metadata prior to iterating over the cursor, you can try casting the
cursor to the internal class QueryCursorImpl and calling #fieldsMetadata().
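A short sketch of that cast. QueryCursorImpl and the #fieldsMetadata() accessor named above are internal API, so the exact class location and method name may differ between Ignite versions; this is an assumption based on the reply:

```java
import java.util.List;

import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.internal.processors.cache.QueryCursorImpl;

public class CursorMetaExample {
    static void printMeta(IgniteCache<?, ?> cache) {
        QueryCursor<List<?>> cur = cache.query(new SqlFieldsQuery("SELECT * FROM City"));

        // Internal class: not part of the public API, may change without notice.
        if (cur instanceof QueryCursorImpl)
            System.out.println(((QueryCursorImpl<List<?>>)cur).fieldsMetadata());
    }
}
```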


On Sun, Jan 21, 2018 at 11:27 PM, mamaco  wrote:

> I'm trying to use the Binary Marshaller to replace the old
> OptimizedMarshaller, which is really a pain for deployment.
> But according to the Type Metadata document, it can be changed at runtime.
> That means if I access a 3rd-party cache which was created by someone else
> without explicit type settings, I'll have to get a sample row to retrieve
> its type manually prior to further operations. Is this correct? Why not
> just set a default optional meta type at the cache level? As an option, it
> wouldn't hurt a dynamic row schema.
>
>
>
> #access to a cache 'SQL_PUBLIC_CITY' (created by JDBC 'Create Table'
> statement) and do some normal operations.
> #GetSchema() is weird
>
> public class App
> {
> private static SchemaType schema=null;
> public static void main( String[] args )
> {
> Ignite ignite =
> Ignition.start("C://apache//ignite//apache-ignite-fabric-
> 2.2.0-bin//config//client.xml");
> CacheConfiguration<Object, Object> cfg = new
> CacheConfiguration<>("SQL_PUBLIC_CITY");
> IgniteCache<BinaryObject, BinaryObject> cache =
> ignite.getOrCreateCache(cfg).withKeepBinary();
>
> if(schema==null) schema=GetSchema(cache);
> Put(ignite,cache,4L,"Los Angeles");
> GetAll(cache);
> Get(ignite,cache,4L);
> Query(cache,4L);
> ignite.close();
> }
>
> public static void GetAll(IgniteCache<BinaryObject, BinaryObject>
> cache)
> {
> Iterator<Cache.Entry<BinaryObject, BinaryObject>> itr =
> cache.iterator();
> while(itr.hasNext()){
> Cache.Entry<BinaryObject, BinaryObject> item = itr.next();
> System.out.println("id="+item.getKey().field("id")+"
> KeyType="+item.getKey().type().typeName().toString()+"
> V="+item.getKey().type().field("id"));
> System.out.println("CITY="+item.getValue().field("name")+
> "  of
> id="+item.getValue().field("id"));
> }
> }
>
> public static SchemaType GetSchema(IgniteCache<BinaryObject,
> BinaryObject> cache) {
> SchemaType sch=new SchemaType();
> Cache.Entry<BinaryObject, BinaryObject> item =
> cache.iterator().next();
> sch.KeyType=item.getKey().type().typeName();
> sch.KeyFields=item.getKey().type().fieldNames();
> sch.ValueType=item.getValue().type().typeName();
> sch.ValueFields=item.getValue().type().fieldNames();
> return sch;
> }
>
>
>
> public static void Get(Ignite ignite, IgniteCache<BinaryObject, BinaryObject> cache, Long CityId) {
> BinaryObjectBuilder keyBuilder =
> ignite.binary().builder(schema.KeyType)
> .setField("id", CityId);
>
> BinaryObject value = cache.get(keyBuilder.build());
> if(value!=null) System.out.println("CITY="+value.field("name"));
> }
>
> public static void Put(Ignite ignite, IgniteCache<BinaryObject, BinaryObject> cache, Long CityId, String CityName) {
> BinaryObjectBuilder keyBuilder =
> ignite.binary().builder(schema.KeyType)
> .setField("id", CityId);
> BinaryObjectBuilder valueBuilder =
> ignite.binary().builder(schema.ValueType)
> .setField("name", CityName);
>
> cache.put(keyBuilder.build(),valueBuilder.build());
> }
>
> public static void Query(IgniteCache<BinaryObject, BinaryObject>
> cache,
> Long CityId) {
> QueryCursor<List<?>> query = cache.query(new
> SqlFieldsQuery("select * from City where id="+CityId));
> System.out.println(query.getAll());
> }
> }
>
>
>
>
>
>
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Question about 'table' created by JDBC

2018-02-01 Thread Andrey Mashenkov
Hi,

1. No, it is not supported. It is possible to store entries of different
types in the same cache, so such metadata is not available at the cache
level. The types look strange because they are autogenerated names. It
seems you didn't specify key/value types in the CREATE TABLE clause [1],
and no QueryEntity [2] is described in the cache config.

2. No, 'DESCRIBE tablename' is not supported. SQL system views are at the
design stage for now; see the IEP-13 [3] document.
If you find something missing, please point it out to us.


[1] https://apacheignite-sql.readme.io/docs/create-table
[2]
https://apacheignite.readme.io/docs/cache-queries#query-configuration-using-queryentity
[3]
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75962769

On Sun, Jan 21, 2018 at 5:16 AM, mamaco  wrote:

> Hi,
> I made a quick test to create a City table through DBeaver, and then I
> accessed it through java app successfully.
>
> But when I loop over the records, I find that both the key and the value
> are built as BinaryObject with a strange type.
> Question 1:
> Is there any convenient API to get the type name at the cache level?
> Something like this:
> cache.getMetrics.getBinaryTypeName().
> I'm just curious whether JDBC operations can interact with the Java API.
>
> Question 2:
> In JDBC, is there any command like 'describe tablename' to query the cache
> structure?
>
>
> public static void GetAll(IgniteCache<BinaryObject, BinaryObject>
> cache)
> {
> Iterator<Cache.Entry<BinaryObject, BinaryObject>> itr =
> cache.iterator();
> while(itr.hasNext()){
> Cache.Entry<BinaryObject, BinaryObject> item = itr.next();
> System.out.println("id="+item.getKey().field("id")+"
> KeyType="+item.getKey().type().typeName().toString());
> System.out.println("CITY="+item.getValue().field("name")+
> "
> ValueType="+item.getValue().type().typeName().toString());
> }
> }
> -
> id=1  KeyType=SQL_PUBLIC_CITY_9c004c1e_d2d2_4c13_8f38_5f2d266080f6_KEY
> CITY=Forest Hill
> ValueType=SQL_PUBLIC_CITY_9c004c1e_d2d2_4c13_8f38_5f2d266080f6
> id=3  KeyType=SQL_PUBLIC_CITY_9c004c1e_d2d2_4c13_8f38_5f2d266080f6_KEY
> CITY=St. Petersburg
> ValueType=SQL_PUBLIC_CITY_9c004c1e_d2d2_4c13_8f38_5f2d266080f6
> id=2  KeyType=SQL_PUBLIC_CITY_9c004c1e_d2d2_4c13_8f38_5f2d266080f6_KEY
> CITY=Denver  ValueType=SQL_PUBLIC_CITY_9c004c1e_d2d2_4c13_8f38_
> 5f2d266080f6
>
>
> public static void Get(Ignite ignite, IgniteCache<BinaryObject, BinaryObject> cache) {
> Long keyValue = 3L;
> BinaryObjectBuilder keyBuilder =
> ignite.binary().builder("SQL_PUBLIC_CITY_9c004c1e_d2d2_
> 4c13_8f38_5f2d266080f6_KEY")
> .setField("id", keyValue);
>
> BinaryObject value = cache.get(keyBuilder.build());
> if(value!=null) System.out.println("CITY="+value.field("name"));
> else System.out.println("Empty!!!");
> }
> -
> CITY=St. Petersburg
>
>
>
>
>



-- 
Best regards,
Andrey V. Mashenkov


Re: Apache Ignite & unixODBC and truncating text

2018-02-01 Thread Igor Sapego
I'm currently working on the fix.
Sorry guys, I was not able to fit it into 2.4.
I can share a separate patch once it is ready, though, if you like.

Best Regards,
Igor

On Tue, Jan 30, 2018 at 5:45 AM, Rick Alexander 
wrote:

> Hey Igor or anyone who knows,
>
> Looking to see when this will be released as well.
>
> You mention 2.4 but the task says 2.5 release.
>
> Please help! My project is at a stand-still...
>
> On Mon, Jan 15, 2018 at 9:33 AM, bagsiur 
> wrote:
>
>> ok,
>>
>> I ask because I want to be sure. Will the fix be included in Ignite 2.4 or
>> 2.5?
>>
>> On the ticket https://issues.apache.org/jira/browse/IGNITE-7362, the
>> details say that the fix for this bug is planned for the 2.5 version...
>>
>>
>>
>>
>
>


Re: Rebalancing mode set to None

2018-02-01 Thread Evgenii Zhuravlev
Currently you start the same Ignite nodes that you would start in
standalone mode, but you run them embedded.

If you have 3 backups in the cluster and one of your nodes fails, then part
of your data will have only 2 backups. Also, after the node returns to the
cluster, without manual rebalancing some data will still have only 2
backups, and the newly added node won't hold any data. So if 3 other nodes
fail in the future, you will lose all copies of certain partitions (without
manual rebalancing, of course).
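For reference, a sketch of the configuration under discussion (the cache name is a placeholder): with rebalance mode NONE, a node that rejoins receives no partitions until rebalancing is triggered manually.

```java
import org.apache.ignite.cache.CacheRebalanceMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class RebalanceNoneExample {
    static CacheConfiguration<Object, Object> cfg() {
        CacheConfiguration<Object, Object> ccfg = new CacheConfiguration<>("myCache");

        ccfg.setBackups(3);                              // 1 primary + 3 backup copies
        ccfg.setRebalanceMode(CacheRebalanceMode.NONE);  // no automatic rebalancing

        return ccfg;
    }
}
```

With this setup, rebalancing can later be forced per cache via ignite.cache("myCache").rebalance().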

Evgenii

2018-02-01 12:03 GMT+03:00 Ranjit Sahu :

> yes i am using it in embeded mode. Standalone mode we can go for, but
> additional hardware needed for that which will stay idle when we don't use
> it. That was the reason to try this embedded stuff. We build the cache on
> fly , which gets shut down with spark.
>
> My question basically was, when i set rebalancing to NONE and have back up
> count as 3, how it behaves ?
> Will i still have all the data when one node stops cause of back up set to
> 3 ?
>
>
> On Thu, Feb 1, 2018 at 2:26 PM, Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> > When due to some reason a task fails, the ignite node stops and when
>> the task re-starts,
>> Looks like you run in embedded mode. Do avoid too frequently node
>> stopping events, you need to run Ignite in standalone mode, in this case,
>> the node will run even if your task fails.
>>
>> Please let me know if I missed something.
>>
>> Evgenii
>>
>> 2018-02-01 11:44 GMT+03:00 Ranjit Sahu :
>>
>>> I read the data which is in avro format using spark sql and load it to
>>> cache from spark program. I build the Ignite key-store inside spark
>>> executors. When due to some reason a task fails, the ignite node stops and
>>> when the task re-starts,
>>> the new node joins back.I see slowness from here onwards. I was thinking
>>> cause of rebalancing this is becoming slow. I can look at tuning the
>>> rebalancing too. Let me know if you have any suggestions.
>>>
>>> On Thu, Feb 1, 2018 at 1:54 PM, Evgenii Zhuravlev <
>>> e.zhuravlev...@gmail.com> wrote:
>>>
 Ranjit,

 How do you load data to the cache?

 Evgenii

 2018-02-01 11:18 GMT+03:00 Ranjit Sahu :

> Hi Val,
>
> Not always but out of 10, we see at least once the issue. Whats
> happening is when one node crashes\stops the new node joins . The loading
> process restarts but what ever was happening in few minutes (3-5) goes to
> 2-3 hours.
>
> Thanks,
> Ranjit
>
> On Wed, Jan 31, 2018 at 3:12 AM, vkulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
>> Ranjit,
>>
>> Is it a really frequent event for node to crash in the middle of
>> loading
>> process? If so, then I think you should fix that instead of working
>> around
>> by disabling rebalancing. Such configuration definitely has a lot
>> drawbacks
>> and therefore can cause issues.
>>
>> -Val
>>
>>
>>
>>
>
>

>>>
>>
>


Re: Memory usage by ignite nodes

2018-02-01 Thread Ranjit Sahu
Thanks, Dmitry.

On Wed, Jan 24, 2018 at 8:27 PM, dkarachentsev 
wrote:

> Hi Ranjit,
>
> Those metrics should be correct; you may also check [1], because Ignite
> keeps data off-heap anyway. But if on-heap caching is enabled, it also
> caches entries in the Java heap.
>
> [1] https://apacheignite.readme.io/docs/memory-metrics
>
> Thanks!
> -Dmitry
>
>
>
>


Re: Persistent Store Not enabled in Ignite Yarn Deployment

2018-02-01 Thread ilya.kasnacheev
Hello!

Unfortunately it's hard to tell why the node would stop without looking at
client & server logs. Can you share these somewhere?

Maybe you should also set a memory policy for these nodes, to the values
that your YARN configuration expects them to have:
https://apacheignite.readme.io/docs/memory-configuration
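A sketch of setting an explicit memory policy; the sizes and the policy name are placeholders to be matched to the YARN container limits. (In Ignite 2.3+ the equivalent classes are DataStorageConfiguration/DataRegionConfiguration.)

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.MemoryConfiguration;
import org.apache.ignite.configuration.MemoryPolicyConfiguration;

public class MemoryPolicyExample {
    static IgniteConfiguration cfg() {
        MemoryPolicyConfiguration plc = new MemoryPolicyConfiguration();
        plc.setName("default_mem_plc");
        plc.setInitialSize(256L * 1024 * 1024);  // 256 MB
        plc.setMaxSize(1024L * 1024 * 1024);     // 1 GB; should fit the YARN container

        MemoryConfiguration memCfg = new MemoryConfiguration();
        memCfg.setMemoryPolicies(plc);
        memCfg.setDefaultMemoryPolicyName("default_mem_plc");

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setMemoryConfiguration(memCfg);
        return cfg;
    }
}
```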

Regards,





Re: Integration with Hibernate 5.2.X for L2 Cache?

2018-02-01 Thread Andrey Mashenkov
Hi,

Here is a ticket for Hibernate 5.2+ support [1].
Hope it will be fixed in the ignite-2.5 release.

[1] https://issues.apache.org/jira/browse/IGNITE-5848

On Thu, Jan 18, 2018 at 5:25 PM, Andrey Mashenkov <
andrey.mashen...@gmail.com> wrote:

> Hi,
>
> Hibernate 5.2.x requires java 8, while ignite requires java 7.
> AFAIK, we are going to drop java 7 in upcoming ignite-2.4 version.
>
> So, it will be possible to add Hibernate 5.2+ support in next 2.5 version.
> Feel free to create a ticket.
>
> On Thu, Jan 18, 2018 at 1:45 PM, SirSpart  wrote:
>
>> Nobody answered? Guess i'll answer myself.
>> Long story short: Not yet.
>>
>>
>>
>>
>
>
>
> --
> Best regards,
> Andrey V. Mashenkov
>


Re: a2cf190a-6a44-4b94-baea-c9b88a16922e, class org.apache.ignite.IgniteCheckedException:Failed to execute SQL query

2018-02-01 Thread ilya.kasnacheev
Hello Rahul!

Unfortunately, neither node's logger is properly configured: not all
messages are there, and it's hard to correlate relative times across the
two nodes.

Moreover, according to logs, they are both client nodes (even one in
igniteServer.log), and I am not sure if they are on the same topology! I
think it's possible that igniteServer.log corresponds to a Visor node after
all.

Anyway, the error reads:
Failed to execute map query on the node:
7bd86058-2fcc-4dc0-bb57-b951acd49ace

But there's no log for node 7bd86058-2fcc-4dc0-bb57-b951acd49ace.
igniteServer.log is 6A983D97-A166-48DE-8B84-5966012B968F instead.

Please make sure you collect proper logs, and that the topology is what you
expect. According to the client, there were 10 server nodes; were there?

It would make sense to run server nodes without -DIGNITE_QUIET=true (adding
-v to ignite.sh)

Regards,





Re: Rebalancing mode set to None

2018-02-01 Thread Ranjit Sahu
Yes, I am using it in embedded mode. We could go for standalone mode, but
that needs additional hardware, which would sit idle when we don't use it.
That was the reason to try the embedded approach. We build the cache on the
fly, and it gets shut down with Spark.

My question basically was: when I set rebalancing to NONE and have a backup
count of 3, how does it behave?
Will I still have all the data when one node stops, given the backup count
of 3?


On Thu, Feb 1, 2018 at 2:26 PM, Evgenii Zhuravlev 
wrote:

> > When due to some reason a task fails, the ignite node stops and when
> the task re-starts,
> Looks like you run in embedded mode. Do avoid too frequently node stopping
> events, you need to run Ignite in standalone mode, in this case, the node
> will run even if your task fails.
>
> Please let me know if I missed something.
>
> Evgenii
>
> 2018-02-01 11:44 GMT+03:00 Ranjit Sahu :
>
>> I read the data which is in avro format using spark sql and load it to
>> cache from spark program. I build the Ignite key-store inside spark
>> executors. When due to some reason a task fails, the ignite node stops and
>> when the task re-starts,
>> the new node joins back.I see slowness from here onwards. I was thinking
>> cause of rebalancing this is becoming slow. I can look at tuning the
>> rebalancing too. Let me know if you have any suggestions.
>>
>> On Thu, Feb 1, 2018 at 1:54 PM, Evgenii Zhuravlev <
>> e.zhuravlev...@gmail.com> wrote:
>>
>>> Ranjit,
>>>
>>> How do you load data to the cache?
>>>
>>> Evgenii
>>>
>>> 2018-02-01 11:18 GMT+03:00 Ranjit Sahu :
>>>
 Hi Val,

 Not always but out of 10, we see at least once the issue. Whats
 happening is when one node crashes\stops the new node joins . The loading
 process restarts but what ever was happening in few minutes (3-5) goes to
 2-3 hours.

 Thanks,
 Ranjit

 On Wed, Jan 31, 2018 at 3:12 AM, vkulichenko <
 valentin.kuliche...@gmail.com> wrote:

> Ranjit,
>
> Is it a really frequent event for node to crash in the middle of
> loading
> process? If so, then I think you should fix that instead of working
> around
> by disabling rebalancing. Such configuration definitely has a lot
> drawbacks
> and therefore can cause issues.
>
> -Val
>
>
>
>


>>>
>>
>


Re: Rebalancing mode set to None

2018-02-01 Thread Evgenii Zhuravlev
> When due to some reason a task fails, the ignite node stops and when the
task re-starts,
Looks like you run in embedded mode. To avoid such frequent node-stop
events, you need to run Ignite in standalone mode; in this case, the node
will keep running even if your task fails.

Please let me know if I missed something.

Evgenii

2018-02-01 11:44 GMT+03:00 Ranjit Sahu :

> I read the data which is in avro format using spark sql and load it to
> cache from spark program. I build the Ignite key-store inside spark
> executors. When due to some reason a task fails, the ignite node stops and
> when the task re-starts,
> the new node joins back.I see slowness from here onwards. I was thinking
> cause of rebalancing this is becoming slow. I can look at tuning the
> rebalancing too. Let me know if you have any suggestions.
>
> On Thu, Feb 1, 2018 at 1:54 PM, Evgenii Zhuravlev <
> e.zhuravlev...@gmail.com> wrote:
>
>> Ranjit,
>>
>> How do you load data to the cache?
>>
>> Evgenii
>>
>> 2018-02-01 11:18 GMT+03:00 Ranjit Sahu :
>>
>>> Hi Val,
>>>
>>> Not always but out of 10, we see at least once the issue. Whats
>>> happening is when one node crashes\stops the new node joins . The loading
>>> process restarts but what ever was happening in few minutes (3-5) goes to
>>> 2-3 hours.
>>>
>>> Thanks,
>>> Ranjit
>>>
>>> On Wed, Jan 31, 2018 at 3:12 AM, vkulichenko <
>>> valentin.kuliche...@gmail.com> wrote:
>>>
 Ranjit,

 Is it a really frequent event for node to crash in the middle of loading
 process? If so, then I think you should fix that instead of working
 around
 by disabling rebalancing. Such configuration definitely has a lot
 drawbacks
 and therefore can cause issues.

 -Val




>>>
>>>
>>
>


Re: Re: Cannot connect the ignite server after running one or two days

2018-02-01 Thread xiang jie
OK.  I’ll try it when the problem appears next time. 

 

Thanks.

 

From: Evgenii Zhuravlev [mailto:e.zhuravlev...@gmail.com]
Sent: February 1, 2018 16:13
To: user@ignite.apache.org
Subject: Re: Reply: Re: Cannot connect the ignite server after running one or two days

 

" Failed to deserialize object " is just a consequence of the cause exception, 
as you see from stacktrace - Caused by: java.net.SocketException: Socket closed.

Could you run netstat -apnt on server when you face this problem and share 
results here?

 

 

 

2018-02-01 11:04 GMT+03:00 xiang jie :

But pinging the client from the server is OK. And from the log file, it
seems communication is OK before the "Failed to deserialize object" error.
It really confuses me.


[2018-01-30
07:40:41,646][DEBUG][exchange-worker-#42%igniteCosco%][GridDhtPartitionDeman
der] Adding partition assignments: GridDhtPreloaderAssignments
[exchangeId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
[topVer=2364, minorTopVer=0], discoEvt=DiscoveryEvent
[evtNode=TcpDiscoveryNode [id=b07edbd2-3eaa-4b3b-acfc-4159f2bc047d,
addrs=[127.0.0.1, 172.41.6.81], sockAddrs=[/172.41.6.81:0, /127.0.0.1:0],
discPort=0, order=2364, intOrder=1185, lastExchangeTime=1517269240080,
loc=false, ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], topVer=2364,
nodeId8=f290fb24, msg=Node joined: TcpDiscoveryNode
[id=b07edbd2-3eaa-4b3b-acfc-4159f2bc047d, addrs=[127.0.0.1, 172.41.6.81],
sockAddrs=[/172.41.6.81:0, /127.0.0.1:0], discPort=0, order=2364,
intOrder=1185, lastExchangeTime=1517269240080, loc=false,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], type=NODE_JOINED,
tstamp=1517269240130], nodeId=b07edbd2, evt=NODE_JOINED],
topVer=AffinityTopologyVersion [topVer=2364, minorTopVer=0],
cancelled=false, exchId=GridDhtPartitionExchangeId
[topVer=AffinityTopologyVersion [topVer=2364, minorTopVer=0],
discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=b07edbd2-3eaa-4b3b-acfc-4159f2bc047d, addrs=[127.0.0.1, 172.41.6.81],
sockAddrs=[/172.41.6.81:0, /127.0.0.1:0], discPort=0, order=2364,
intOrder=1185, lastExchangeTime=1517269240080, loc=false,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], topVer=2364,
nodeId8=f290fb24, msg=Node joined: TcpDiscoveryNode
[id=b07edbd2-3eaa-4b3b-acfc-4159f2bc047d, addrs=[127.0.0.1, 172.41.6.81],
sockAddrs=[/172.41.6.81:0, /127.0.0.1:0], discPort=0, order=2364,
intOrder=1185, lastExchangeTime=1517269240080, loc=false,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], type=NODE_JOINED,
tstamp=1517269240130], nodeId=b07edbd2, evt=NODE_JOINED], super={}]
..
..
[sock=Socket[addr=/10.10.11.69,port=49474,localport=47500],
locNodeId=f290fb24-94ad-4aba-8a6c-78119ad5dd74,
rmtNodeId=b20f0495-ea0f-43be-b92f-f55803767a6b]
class org.apache.ignite.IgniteCheckedException: Failed to deserialize object
with given class loader: sun.misc.Launcher$AppClassLoader@764c12b6
at
org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java
:129)
at
org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(Abstr
actNodeNameAwareMarshaller.java:94)
at
org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9740)
at
org.apache.ignite.spi.discovery.tcp.ServerImpl$SocketReader.body(ServerImpl.
java:5946)
at
org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Caused by: java.net.SocketException: Socket closed
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at org.apache.ignite.marshaller.jdk.JdkMarshallerInputStreamWrapper.
read(JdkMarshallerInputStreamWrapper.java:53)
at java.io.ObjectInputStream$PeekInputStream.read(ObjectInputStream.
java:2653)
at
java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2
669)
at
java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.j
ava:3146)
at
java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:858)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:354)
at
org.apache.ignite.marshaller.jdk.JdkMarshallerObjectInputStream.<init>(JdkMa
rshallerObjectInputStream.java:39)
at
org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java
:119)
... 4 more


Re: Rebalancing mode set to None

2018-02-01 Thread Ranjit Sahu
I read the data, which is in Avro format, using Spark SQL and load it into
the cache from a Spark program. I build the Ignite key-value store inside
Spark executors. When a task fails for some reason, the Ignite node stops,
and when the task restarts, the new node joins back. I see slowness from
this point onwards. I was thinking it becomes slow because of rebalancing.
I can look at tuning the rebalancing too. Let me know if you have any
suggestions.

On Thu, Feb 1, 2018 at 1:54 PM, Evgenii Zhuravlev 
wrote:

> Ranjit,
>
> How do you load data to the cache?
>
> Evgenii
>
> 2018-02-01 11:18 GMT+03:00 Ranjit Sahu :
>
>> Hi Val,
>>
>> Not always but out of 10, we see at least once the issue. Whats happening
>> is when one node crashes\stops the new node joins . The loading process
>> restarts but what ever was happening in few minutes (3-5) goes to 2-3
>> hours.
>>
>> Thanks,
>> Ranjit
>>
>> On Wed, Jan 31, 2018 at 3:12 AM, vkulichenko <
>> valentin.kuliche...@gmail.com> wrote:
>>
>>> Ranjit,
>>>
>>> Is it a really frequent event for node to crash in the middle of loading
>>> process? If so, then I think you should fix that instead of working
>>> around
>>> by disabling rebalancing. Such configuration definitely has a lot
>>> drawbacks
>>> and therefore can cause issues.
>>>
>>> -Val
>>>
>>>
>>>
>>>
>>
>>
>


Re: Rebalancing mode set to None

2018-02-01 Thread Evgenii Zhuravlev
Ranjit,

How do you load data to the cache?

Evgenii

2018-02-01 11:18 GMT+03:00 Ranjit Sahu :

> Hi Val,
>
> Not always but out of 10, we see at least once the issue. Whats happening
> is when one node crashes\stops the new node joins . The loading process
> restarts but what ever was happening in few minutes (3-5) goes to 2-3
> hours.
>
> Thanks,
> Ranjit
>
> On Wed, Jan 31, 2018 at 3:12 AM, vkulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
>> Ranjit,
>>
>> Is it a really frequent event for node to crash in the middle of loading
>> process? If so, then I think you should fix that instead of working around
>> by disabling rebalancing. Such configuration definitely has a lot
>> drawbacks
>> and therefore can cause issues.
>>
>> -Val
>>
>>
>>
>>
>
>


Re: Rebalancing mode set to None

2018-02-01 Thread Ranjit Sahu
Hi Val,

Not always, but out of 10 runs we see the issue at least once. What's
happening is that when one node crashes/stops, a new node joins. The
loading process restarts, but whatever used to take a few minutes (3-5)
goes to 2-3 hours.

Thanks,
Ranjit

On Wed, Jan 31, 2018 at 3:12 AM, vkulichenko 
wrote:

> Ranjit,
>
> Is it a really frequent event for node to crash in the middle of loading
> process? If so, then I think you should fix that instead of working around
> by disabling rebalancing. Such configuration definitely has a lot drawbacks
> and therefore can cause issues.
>
> -Val
>
>
>
>


RE: Key Value Store - control TTL refresh

2018-02-01 Thread Stanislav Lukyanov
Hi,

Whenever an entry is touched, the expiry policy of the view that was used
for that will be consulted to get a new TTL. It means that each time you
touch an entry through a view with `EternalExpiryPolicy` its TTL will be
reset to ETERNAL. You could say that the `bypassCache` from your example
should actually be called `keepAliveCache` or something like that.

If you don't want your operations to have an effect on the TTL, you should
use the original cache without an expiry policy set. If you want to build a
"bypass expiry policy" (for the sake of uniformness or just for fun) you can
try creating an `ExpiryPolicy` that returns `null` for access and
modification and `ETERNAL` for creation - I believe that should have the
same effect as having no `ExpiryPolicy` at all.
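A sketch of that "bypass" policy, assuming that returning null from the access/update callbacks leaves the current TTL untouched (per the JCache ExpiryPolicy contract); the helper class and cache types are placeholders:

```java
import javax.cache.expiry.Duration;
import javax.cache.expiry.ExpiryPolicy;

import org.apache.ignite.IgniteCache;

public class BypassExpiryExample {
    static IgniteCache<String, String> bypassView(IgniteCache<String, String> cache) {
        ExpiryPolicy bypass = new ExpiryPolicy() {
            // New entries never expire.
            @Override public Duration getExpiryForCreation() { return Duration.ETERNAL; }

            // null means "leave the current TTL unchanged".
            @Override public Duration getExpiryForAccess() { return null; }
            @Override public Duration getExpiryForUpdate() { return null; }
        };

        return cache.withExpiryPolicy(bypass);
    }
}
```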

Stan


Ariel Tubaltsev wrote
> Hi Stan
> 
> Thank you for the quick reply.
> 
> Let me clarify my use case: I want to have expiration for all regular
> operations.
> Along with that, I want to be able to read some or all entries without
> refreshing TTLs, for example for debugging.
> 
> Following your example, I create a view with expiration and a view without
> it, my understanding is that accessing through the view with
> EternalExpiryPolicy shouldn't refresh TTLs - which seems to work.
> 
> However, accessing through the view with  TouchedExpiryPolicy doesn't seem
> to refresh TTLs.
> 
> Do you think something like that should work?
> 
>   // Auto-close cache at the end of the example.
> try (IgniteCache<String, String> cache =
> ignite.getOrCreateCache(CACHE_NAME)) {
> 
> // create not expiring view
> IgniteCache<String, String> bypassCache =
> cache.withExpiryPolicy(new EternalExpiryPolicy());
> 
> // create expiring view, 10 seconds TTL
> System.out.println(">>> Set entries to expire in 10
> seconds");
> IgniteCache<String, String> workCache =
> cache.withExpiryPolicy(new TouchedExpiryPolicy(new
> Duration(TimeUnit.SECONDS, 10)));
> 
> // entries shouldn't survive
> populate(workCache);
> sleep(5); // sleep for 5 seconds
> System.out.println("\n>>> Dump cache, don't refresh TTL");
> getAll(bypassCache);
> sleep(5);
> System.out.println("\n>>> Work cache should be empty");
> getAll(workCache);
> System.out.println("\n>>> Bypass cache should be empty");
> getAll(bypassCache);
> 
> // entries should survive
> populate(workCache);
> sleep(5);
> System.out.println("\n>>> Dump cache, refresh TTL"); //
> entries are still there
> getAll(workCache);
> sleep(5);
> System.out.println("\n>>> Bypass cache should be not
> empty"); // entries are gone
> getAll(bypassCache);
> System.out.println("\n>>> Work cache should be not
> empty");
> getAll(workCache);
> 
> ...
> 
> Ariel
> 
> 
> 
> 







Re: Reply: Re: Cannot connect the ignite server after running one or two days

2018-02-01 Thread Evgenii Zhuravlev
" Failed to deserialize object " is just a consequence of the cause
exception, as you see from stacktrace - Caused by:
java.net.SocketException: Socket closed.
Could you run netstat -apnt on the server when you face this problem and
share the results here?



2018-02-01 11:04 GMT+03:00 xiang jie :

> But from server ping client is OK. And from log file, It seems
> communication
> is OK before " Failed to deserialize object " error. It really confuses me.
>
>
> [2018-01-30
> 07:40:41,646][DEBUG][exchange-worker-#42%igniteCosco%][
> GridDhtPartitionDeman
> der] Adding partition assignments: GridDhtPreloaderAssignments
> [exchangeId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
> [topVer=2364, minorTopVer=0], discoEvt=DiscoveryEvent
> [evtNode=TcpDiscoveryNode [id=b07edbd2-3eaa-4b3b-acfc-4159f2bc047d,
> addrs=[127.0.0.1, 172.41.6.81], sockAddrs=[/172.41.6.81:0, /127.0.0.1:0],
> discPort=0, order=2364, intOrder=1185, lastExchangeTime=1517269240080,
> loc=false, ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], topVer=2364,
> nodeId8=f290fb24, msg=Node joined: TcpDiscoveryNode
> [id=b07edbd2-3eaa-4b3b-acfc-4159f2bc047d, addrs=[127.0.0.1, 172.41.6.81],
> sockAddrs=[/172.41.6.81:0, /127.0.0.1:0], discPort=0, order=2364,
> intOrder=1185, lastExchangeTime=1517269240080, loc=false,
> ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], type=NODE_JOINED,
> tstamp=1517269240130], nodeId=b07edbd2, evt=NODE_JOINED],
> topVer=AffinityTopologyVersion [topVer=2364, minorTopVer=0],
> cancelled=false, exchId=GridDhtPartitionExchangeId
> [topVer=AffinityTopologyVersion [topVer=2364, minorTopVer=0],
> discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
> [id=b07edbd2-3eaa-4b3b-acfc-4159f2bc047d, addrs=[127.0.0.1, 172.41.6.81],
> sockAddrs=[/172.41.6.81:0, /127.0.0.1:0], discPort=0, order=2364,
> intOrder=1185, lastExchangeTime=1517269240080, loc=false,
> ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], topVer=2364,
> nodeId8=f290fb24, msg=Node joined: TcpDiscoveryNode
> [id=b07edbd2-3eaa-4b3b-acfc-4159f2bc047d, addrs=[127.0.0.1, 172.41.6.81],
> sockAddrs=[/172.41.6.81:0, /127.0.0.1:0], discPort=0, order=2364,
> intOrder=1185, lastExchangeTime=1517269240080, loc=false,
> ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], type=NODE_JOINED,
> tstamp=1517269240130], nodeId=b07edbd2, evt=NODE_JOINED], super={}]
> ..
> ..
> [sock=Socket[addr=/10.10.11.69,port=49474,localport=47500],
> locNodeId=f290fb24-94ad-4aba-8a6c-78119ad5dd74,
> rmtNodeId=b20f0495-ea0f-43be-b92f-f55803767a6b]
> class org.apache.ignite.IgniteCheckedException: Failed to deserialize
> object with given class loader: sun.misc.Launcher$AppClassLoader@764c12b6
>     at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:129)
>     at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
>     at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9740)
>     at org.apache.ignite.spi.discovery.tcp.ServerImpl$SocketReader.body(ServerImpl.java:5946)
>     at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
> Caused by: java.net.SocketException: Socket closed
>     at java.net.SocketInputStream.socketRead0(Native Method)
>     at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
>     at java.net.SocketInputStream.read(SocketInputStream.java:171)
>     at java.net.SocketInputStream.read(SocketInputStream.java:141)
>     at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
>     at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
>     at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
>     at org.apache.ignite.marshaller.jdk.JdkMarshallerInputStreamWrapper.read(JdkMarshallerInputStreamWrapper.java:53)
>     at java.io.ObjectInputStream$PeekInputStream.read(ObjectInputStream.java:2653)
>     at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2669)
>     at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3146)
>     at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:858)
>     at java.io.ObjectInputStream.<init>(ObjectInputStream.java:354)
>     at org.apache.ignite.marshaller.jdk.JdkMarshallerObjectInputStream.<init>(JdkMarshallerObjectInputStream.java:39)
>     at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:119)
> ... 4 more
>
> -----Original Message-----
> From: ezhuravlev [mailto:e.zhuravlev...@gmail.com]
> Sent: January 31, 2018 21:50
> To: user@ignite.apache.org
> Subject: Re: Re: Cannot connect the ignite server after running one or two days
>
>  >There are several clients connected by VPN,  is it possible to the
> client's restart regularly causing ignite socket communication to a certain
> degree of obstruction 

Re: Re: Cannot connect the ignite server after running one or two days

2018-02-01 Thread xiang jie
But pinging the client from the server is OK. And from the log file, communication seems fine right up until the "Failed to deserialize object" error. It really confuses me.


[2018-01-30 07:40:41,646][DEBUG][exchange-worker-#42%igniteCosco%][GridDhtPartitionDemander]
Adding partition assignments: GridDhtPreloaderAssignments
[exchangeId=GridDhtPartitionExchangeId [topVer=AffinityTopologyVersion
[topVer=2364, minorTopVer=0], discoEvt=DiscoveryEvent
[evtNode=TcpDiscoveryNode [id=b07edbd2-3eaa-4b3b-acfc-4159f2bc047d,
addrs=[127.0.0.1, 172.41.6.81], sockAddrs=[/172.41.6.81:0, /127.0.0.1:0],
discPort=0, order=2364, intOrder=1185, lastExchangeTime=1517269240080,
loc=false, ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], topVer=2364,
nodeId8=f290fb24, msg=Node joined: TcpDiscoveryNode
[id=b07edbd2-3eaa-4b3b-acfc-4159f2bc047d, addrs=[127.0.0.1, 172.41.6.81],
sockAddrs=[/172.41.6.81:0, /127.0.0.1:0], discPort=0, order=2364,
intOrder=1185, lastExchangeTime=1517269240080, loc=false,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], type=NODE_JOINED,
tstamp=1517269240130], nodeId=b07edbd2, evt=NODE_JOINED],
topVer=AffinityTopologyVersion [topVer=2364, minorTopVer=0],
cancelled=false, exchId=GridDhtPartitionExchangeId
[topVer=AffinityTopologyVersion [topVer=2364, minorTopVer=0],
discoEvt=DiscoveryEvent [evtNode=TcpDiscoveryNode
[id=b07edbd2-3eaa-4b3b-acfc-4159f2bc047d, addrs=[127.0.0.1, 172.41.6.81],
sockAddrs=[/172.41.6.81:0, /127.0.0.1:0], discPort=0, order=2364,
intOrder=1185, lastExchangeTime=1517269240080, loc=false,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], topVer=2364,
nodeId8=f290fb24, msg=Node joined: TcpDiscoveryNode
[id=b07edbd2-3eaa-4b3b-acfc-4159f2bc047d, addrs=[127.0.0.1, 172.41.6.81],
sockAddrs=[/172.41.6.81:0, /127.0.0.1:0], discPort=0, order=2364,
intOrder=1185, lastExchangeTime=1517269240080, loc=false,
ver=2.3.0#20171028-sha1:8add7fd5, isClient=true], type=NODE_JOINED,
tstamp=1517269240130], nodeId=b07edbd2, evt=NODE_JOINED], super={}]
..
..
[sock=Socket[addr=/10.10.11.69,port=49474,localport=47500],
locNodeId=f290fb24-94ad-4aba-8a6c-78119ad5dd74,
rmtNodeId=b20f0495-ea0f-43be-b92f-f55803767a6b]
class org.apache.ignite.IgniteCheckedException: Failed to deserialize object
with given class loader: sun.misc.Launcher$AppClassLoader@764c12b6
    at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:129)
    at org.apache.ignite.marshaller.AbstractNodeNameAwareMarshaller.unmarshal(AbstractNodeNameAwareMarshaller.java:94)
    at org.apache.ignite.internal.util.IgniteUtils.unmarshal(IgniteUtils.java:9740)
    at org.apache.ignite.spi.discovery.tcp.ServerImpl$SocketReader.body(ServerImpl.java:5946)
    at org.apache.ignite.spi.IgniteSpiThread.run(IgniteSpiThread.java:62)
Caused by: java.net.SocketException: Socket closed
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at org.apache.ignite.marshaller.jdk.JdkMarshallerInputStreamWrapper.read(JdkMarshallerInputStreamWrapper.java:53)
    at java.io.ObjectInputStream$PeekInputStream.read(ObjectInputStream.java:2653)
    at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2669)
    at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3146)
    at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:858)
    at java.io.ObjectInputStream.<init>(ObjectInputStream.java:354)
    at org.apache.ignite.marshaller.jdk.JdkMarshallerObjectInputStream.<init>(JdkMarshallerObjectInputStream.java:39)
    at org.apache.ignite.marshaller.jdk.JdkMarshaller.unmarshal0(JdkMarshaller.java:119)
... 4 more
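[Editor's note: the trace above bottoms out in ObjectInputStream's constructor (readStreamHeader), not in user deserialization code. A minimal plain-JDK sketch shows why: the constructor itself reads the 4-byte serialization header, so if the peer closes the socket before sending anything, construction fails and Ignite wraps that in "Failed to deserialize object". The class name and the empty stream standing in for a closed socket are illustrative assumptions.]

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;

public class StreamHeaderDemo {
    public static void main(String[] args) {
        // An empty stream stands in for a socket the remote side closed
        // before writing the serialization stream header.
        InputStream truncated = new ByteArrayInputStream(new byte[0]);
        try {
            // readStreamHeader() runs inside this constructor, exactly the
            // frame at ObjectInputStream.<init> in the trace above.
            new ObjectInputStream(truncated);
            throw new AssertionError("expected the header read to fail");
        } catch (IOException e) {
            // EOFException here; on a live socket it surfaces as
            // java.net.SocketException: Socket closed.
            System.out.println("header read failed: " + e.getClass().getSimpleName());
        }
    }
}
```

So the error is a symptom of the connection dropping mid-handshake, not of a marshalling bug on either node.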

-----Original Message-----
From: ezhuravlev [mailto:e.zhuravlev...@gmail.com]
Sent: January 31, 2018 21:50
To: user@ignite.apache.org
Subject: Re: Re: Cannot connect the ignite server after running one or two days

 >There are several clients connected by VPN; is it possible that the clients
restarting regularly causes Ignite socket communication to become obstructed
to a certain degree, getting more and more serious as time goes by?

Does this mean that it's impossible to connect from server to client nodes
directly? If so, it can definitely be the cause of this behavior.
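[Editor's note: for flaky VPN links like the one described, a common mitigation is to give discovery more slack before a client is dropped. A minimal configuration sketch follows; the APIs (TcpDiscoverySpi timeouts, clientFailureDetectionTimeout) exist in Ignite 2.x, but the concrete values are illustrative assumptions, not recommendations.]

```java
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;

public class TolerantDiscoveryConfig {
    public static IgniteConfiguration configure() {
        TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
        // Give slow VPN links more time before a socket operation is declared failed.
        discoSpi.setSocketTimeout(10_000);
        discoSpi.setAckTimeout(10_000);
        discoSpi.setNetworkTimeout(15_000);

        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setDiscoverySpi(discoSpi);
        // Let client nodes ride out short disconnects instead of being dropped.
        cfg.setClientFailureDetectionTimeout(60_000);
        return cfg;
    }
}
```

Note that setting the discovery SPI timeouts explicitly overrides the global failure detection timeout for that SPI, so the values should be tuned together.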



--
Sent from: http://apache-ignite-users.70518.x6.nabble.com/