Re: IGNITE-2894 - Binary object inside of Externalizable still serialized with OptimizedMarshaller

2017-04-18 Thread Valentin Kulichenko
Nikita,

For Externalizable option 1 is the correct one. Externalizable objects
should not be treated as binary objects.

For read/writeObject, you indeed have to extend ObjectOutputStream.
writeObject() is final; you should override writeObjectOverride() instead.
Take a look at ObjectOutputStream's JavaDoc and at how this is
done in OptimizedObjectOutputStream. Note that ideally we need to implement
everything that is included in Java serialization spec, including some
non-trivial stuff like PutField. I would check if it's possible to somehow
reuse the code that already exists in optimized marshaller as much as
possible.
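
For reference, a minimal sketch of the override mechanism described above
(the class name is hypothetical, not actual Ignite code): calling the
protected no-arg constructor of ObjectOutputStream switches the stream into
override mode, after which the final writeObject() delegates to
writeObjectOverride().

    import java.io.IOException;
    import java.io.ObjectOutputStream;

    public class BinaryObjectOutputStream extends ObjectOutputStream {
        protected BinaryObjectOutputStream() throws IOException, SecurityException {
            super(); // the no-arg constructor enables "override" mode
        }

        /** Invoked by the final writeObject(Object) for every object written. */
        @Override protected void writeObjectOverride(Object obj) throws IOException {
            // Custom marshalling logic goes here, e.g. delegating to a binary
            // writer instead of default Java serialization.
        }
    }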

-Val

On Tue, Apr 18, 2017 at 1:36 PM, Nikita Amelchev <nsamelc...@gmail.com>
wrote:

> I see two ways to support Externalizable in the BM:
> 1. Add a new type constant to the GridBinaryMarshaller class, etc., and
> read/writeExternal in the BinaryClassDescriptor.
> 2. Make read/writeExternal through the BINARY type without updating
> metadata.
> I don't know how to support read/writeObject of Serializable without
> delegating to the OM, because the read/writeObject methods need an
> ObjectOutputStream argument. One way is to delegate it to
> OptimizedObjectOutputStream. Another way is to extend ObjectOutputStream
> in BinaryWriterExImpl, but that seems to be the wrong way because
> writeObject is final.
>
> 2017-01-19 20:46 GMT+03:00 Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > Nikita,
> >
> > In my view we just need to support Externalizable and
> > writeObject/readObject in BinaryMarshaller and get rid of delegation to
> > optimized marshaller. Once such classes also go through BinaryMarshaller
> > streams, they will be aware of binary configuration and will share the
> same
> > set of handles as well. This should take care of all the issues we have
> > here.
> >
> > -Val
> >
> > On Thu, Jan 19, 2017 at 7:26 AM, Nikita Amelchev <nsamelc...@gmail.com>
> > wrote:
> >
> > > I have some questions about single Marshaller.
> > > It seems not easy to merge OptimizedMarshaller with BinaryMarshaller,
> > > and is there any sense in it?
> > > When a binary object inside an Externalizable is serialized with the
> > > optimized marshaller, it loses all its benefits.
> > > Will OptimizedMarshaller be supported in version 2.0? Or is it better
> > > to merge them?
> > > What do you think about it?
> > >
> > > In addition, Vladimir Ozerov, I would like to hear your opinion.
> > >
> > > 2017-01-17 23:32 GMT+03:00 Denis Magda <dma...@apache.org>:
> > >
> > > > Someone else added you to the contributors list in JIRA. This is why
> I
> > > > couldn’t add you for the second time. Ignite committers, please reply
> > on
> > > > the dev list if you add someone to the list.
> > > >
> > > > Nikita, yes, this ticket is still relevant. Go ahead and assign it on
> > > > yourself.
> > > >
> > > > Also, you may want to help with the approaching 2.0 release and take
> > > > care of one of the sub-tasks that must be included in 2.0:
> > > > https://issues.apache.org/jira/browse/IGNITE-4547 <
> > > > https://issues.apache.org/jira/browse/IGNITE-4547>
> > > >
> > > > —
> > > > Denis
> > > >
> > > > > On Jan 15, 2017, at 9:02 PM, Nikita Amelchev <nsamelc...@gmail.com
> >
> > > > wrote:
> > > > >
> > > > > This issue was created long ago. Is it still relevant?
> > > > >
> > > > > JIRA account:
> > > > > Username: NSAmelchev
> > > > > Full Name: Amelchev Nikita
> > > > >
> > > > >
> > > > > 2017-01-14 1:52 GMT+03:00 Denis Magda <dma...@apache.org>:
> > > > >
> > > > >> Hi Nikita,
> > > > >>
> > > > >> I can’t find provided account in Ignite JIRA
> > > > >> https://issues.apache.org/jira/browse/IGNITE <
> > > > https://issues.apache.org/
> > > > >> jira/browse/IGNITE>
> > > > >>
> > > > >> Please create an account there and share with me.
> > > > >>
> > > > >> This information might be useful for you as well.
> > > > >>
> > > > >> Subscribe to both dev and user lists:
> > > > >> https://ignite.apache.org/community/resources.html#mail-lists
> > > > >>
> > > > >> Get 

Re: Spark Data Frame support in Ignite

2017-08-03 Thread Valentin Kulichenko
This JDBC integration is just a Spark data source, which means that Spark
will fetch data in its local memory first, and only then apply filters,
aggregations, etc. This is obviously slow and doesn't use all the advantages
Ignite provides.

To create a useful and valuable integration, we should create a custom
Strategy that will convert Spark's logical plan into a SQL query and
execute it directly on Ignite.
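
For context, a minimal sketch of what the JDBC-based integration looks like
from Java, assuming the thin JDBC driver from 2.1 and a hypothetical PERSON
table; this is the data-source path being discussed, not the custom Strategy:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class IgniteJdbcDataFrameExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("ignite-jdbc-df").master("local[*]").getOrCreate();

            // Load an Ignite SQL table as a Spark DataFrame via the generic
            // JDBC data source.
            Dataset<Row> df = spark.read()
                .format("jdbc")
                .option("url", "jdbc:ignite:thin://127.0.0.1/")
                .option("driver", "org.apache.ignite.IgniteJdbcThinDriver")
                .option("dbtable", "PERSON")
                .load();

            // The aggregation below runs inside Spark, not inside Ignite -
            // exactly the limitation described above.
            df.groupBy("CITY_ID").count().show();
        }
    }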

-Val

On Thu, Aug 3, 2017 at 12:12 AM, Dmitriy Setrakyan 
wrote:

> On Thu, Aug 3, 2017 at 9:04 AM, Jörn Franke  wrote:
>
> > I think the development effort would still be higher. Everything would
> > have to be put via JDBC into Ignite, then checkpointing would have to be
> > done via JDBC (again additional development effort), and a lot of
> > conversion from Spark internal format to JDBC and back to Ignite internal
> > format.
> > Pagination I do not see as a useful feature for managing large data
> > volumes from databases - on the contrary, it is very inefficient (and one
> > would have to implement logic to fetch all pages). Pagination was also
> > never thought of for fetching large data volumes, but for web pages
> > showing a small result set over several pages, where the user can click
> > manually for the next page (which they mostly don't do anyway).
> >
> > While it might be a quick solution , I think a deeper integration than
> > JDBC would be more beneficial.
> >
>
> Jorn, I completely agree. However, we have not been able to find a
> contributor for this feature. You sound like you have sufficient domain
> expertise in Spark and Ignite. Would you be willing to help out?
>
>
> > > On 3. Aug 2017, at 08:57, Dmitriy Setrakyan 
> > wrote:
> > >
> > >> On Thu, Aug 3, 2017 at 8:45 AM, Jörn Franke 
> > wrote:
> > >>
> > >> I think the JDBC one is more inefficient and slower, and requires too
> > >> much development effort. You can also check the integration of Alluxio with
> > >> Spark.
> > >>
> > >
> > > As far as I know, Alluxio is a file system, so it cannot use JDBC.
> > Ignite,
> > > on the other hand, is an SQL system and works well with JDBC. As far as
> > the
> > > development effort, we are dealing with SQL, so I am not sure why JDBC
> > > would be harder.
> > >
> > > Generally speaking, until Ignite provides native data frame
> integration,
> > > having JDBC-based integration out of the box is minimally acceptable.
> > >
> > >
> > >> Then, in general I think JDBC was never designed for large data
> volumes.
> > >> It is for executing queries and getting a small or aggregated result
> set
> > >> back. Alternatively for inserting / updating single rows.
> > >>
> > >
> > > Agree in general. However, Ignite JDBC is designed to work with larger
> > data
> > > volumes and supports data pagination automatically.
> > >
> > >
> > >>> On 3. Aug 2017, at 08:17, Dmitriy Setrakyan 
> > >> wrote:
> > >>>
> > >>> Jorn, thanks for your feedback!
> > >>>
> > >>> Can you explain how the direct support would be different from the
> JDBC
> > >>> support?
> > >>>
> > >>> Thanks,
> > >>> D.
> > >>>
> >  On Thu, Aug 3, 2017 at 7:40 AM, Jörn Franke 
> > >> wrote:
> > 
> >  These are two different things. Spark applications themselves do not
> > use
> >  JDBC - it is more for non-Spark applications to access Spark
> > DataFrames.
> > 
> >  A direct support by Ignite would make more sense. Although you have
> in
> >  theory IGFS, if the user is using HDFS, which might not be the case.
> > It
> > >> is
> >  now also very common to use Object stores, such as S3.
> >  Direct support could be leveraged for interactive analysis or
> different
> >  Spark applications sharing data.
> > 
> > > On 3. Aug 2017, at 05:12, Dmitriy Setrakyan  >
> >  wrote:
> > >
> > > Igniters,
> > >
> > > We have had the integration with Spark Data Frames on our roadmap
> > for a
> > > while:
> > > https://issues.apache.org/jira/browse/IGNITE-3084
> > >
> > > However, while browsing Spark documentation, I came across the
> generic
> >  JDBC
> > > data frame support in Spark:
> > > https://spark.apache.org/docs/latest/sql-programming-guide.
> >  html#jdbc-to-other-databases
> > >
> > > Given that Ignite has a JDBC driver, does it mean that it
> > transitively
> >  also
> > > supports Spark data frames? If yes, we should document it.
> > >
> > > D.
> > 
> > >>
> >
>


Hang when near cache is used

2017-08-03 Thread Valentin Kulichenko
Folks,

One of the users reported an issue with near cache in 2.0:
https://issues.apache.org/jira/browse/IGNITE-5926

There is a reproducer attached; I don't see anything obviously wrong in it,
yet I can reproduce the issue. Can someone take a deeper look?

-Val


Re: Resurrect FairAffinityFunction

2017-08-15 Thread Valentin Kulichenko
Vladimir,

I would let other guys confirm, but I believe the reason is that if it
recalculates distribution every time from scratch, it would trigger too
much redundant data movement during rebalancing. The fair function not only
tries to provide the best possible distribution, but also minimizes this
data movement, and for this it uses the previous distribution.

-Val

On Tue, Aug 15, 2017 at 1:12 PM, Vladimir Ozerov <voze...@gridgain.com>
wrote:

> I do not like the idea as it would make it very hard to reason about
> whether your SQL will fail or not. Let's look at the problem from a
> different angle. I have had this question for years - why in the world does
> the *fair* affinity function, whose only ultimate goal is to provide equal
> partition distribution, depend on its own previous state? Can we re-design
> it in a way that it depends only on partition count and current topology
> state?
>
> On Thu, Aug 10, 2017 at 12:16 AM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > As far as I know, all logical caches with the same affinity function and
> > node filter will end up in the same group. If that's the case, I like the
> > idea. This is exactly what I was looking for.
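
For illustration, a sketch of the configuration being discussed (cache and
group names are made up): caches sharing a group name share one underlying
'physical' cache and therefore identical partition assignments.

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class CacheGroupExample {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();

            // Same group name => same underlying physical cache => identical
            // partition assignments for both logical caches.
            ignite.getOrCreateCache(
                new CacheConfiguration<Integer, Object>("person").setGroupName("collocated"));
            ignite.getOrCreateCache(
                new CacheConfiguration<Integer, Object>("city").setGroupName("collocated"));
        }
    }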
> >
> > -Val
> >
> > On Wed, Aug 9, 2017 at 8:18 AM, Evgenii Zhuravlev <
> > e.zhuravlev...@gmail.com>
> > wrote:
> >
> > > Dmitriy,
> > >
> > > Yes, you're right. Moreover, it looks like a good practice to combine
> > > caches that will be used for collocated JOINs in one group since it
> > reduces
> > > overall overhead.
> > >
> > > I think it's not a problem to add this restriction to the SQL JOIN
> level
> > if
> > > we will decide to use this solution.
> > >
> > > Evgenii
> > >
> > >
> > >
> > >
> > > 2017-08-09 17:07 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:
> > >
> > > > On Wed, Aug 9, 2017 at 6:28 AM, ezhuravl <e.zhuravlev...@gmail.com>
> > > wrote:
> > > >
> > > > > Folks,
> > > > >
> > > > > I've started working on a https://issues.apache.org/
> > > > > jira/browse/IGNITE-5836
> > > > > ticket and found that the recently added cacheGroups feature does
> > > > > pretty much the same as what is described in this issue. CacheGroup
> > > > > guarantees that all caches within a group have the same assignments
> > > > > since they share a single underlying 'physical' cache.
> > > > >
> > > >
> > > > > I think we can return FairAffinityFunction and add information to
> > > > > its Javadoc that all caches with the same AffinityFunction and
> > > > > NodeFilter should be combined in a cache group to avoid the problem
> > > > > of inconsistent previous assignments.
> > > > >
> > > > > What do you guys think?
> > > > >
> > > >
> > > > Are you suggesting that we can only reuse the same
> FairAffinityFunction
> > > > across the logical caches within the same group? This would mean that
> > > > caches from the different groups cannot participate in JOINs or
> > > collocated
> > > > compute.
> > > >
> > > > I think I like the idea, however, we need to make sure that we
> enforce
> > > this
> > > > restriction, at least at the SQL JOIN level.
> > > >
> > > > Alexey G, Val, would be nice to hear your thoughts on this.
> > > >
> > > >
> > > > >
> > > > > Evgenii
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > View this message in context: http://apache-ignite-
> > > > > developers.2346864.n4.nabble.com/Resurrect-FairAffinityFunction-
> > > > > tp19987p20669.html
> > > > > Sent from the Apache Ignite Developers mailing list archive at
> > > > Nabble.com.
> > > > >
> > > >
> > >
> >
>


Re: Hibernate dialect for Ignite

2017-08-11 Thread Valentin Kulichenko
I think that's a great idea, although I doubt anyone has ever tried to use
Ignite with Hibernate this way. So we need to do some testing first to
identify the limitations/issues we have there.
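
To make the scope concrete, a bare-bones sketch of what such a dialect could
start from (the class name and type mappings are illustrative only and would
need to be verified against Ignite's SQL type system):

    import java.sql.Types;
    import org.hibernate.dialect.Dialect;

    public class IgniteDialect extends Dialect {
        public IgniteDialect() {
            // JDBC-to-SQL type mappings; the real work is in limit/offset
            // handling, identity support, and the unsupported-feature gaps
            // that testing would uncover.
            registerColumnType(Types.BIGINT, "bigint");
            registerColumnType(Types.VARCHAR, "varchar");
            registerColumnType(Types.TIMESTAMP, "timestamp");
        }
    }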

-Val

On Wed, Aug 9, 2017 at 12:58 PM, Dmitriy Setrakyan 
wrote:

> Igniters,
>
> Given that we have a very rich SQL support starting with version 2.1, does
> it make sense to create an Ignite SQL dialect for Hibernate?
>
> https://docs.jboss.org/hibernate/orm/3.6/reference/en-US/html/session-
> configuration.html#configuration-optional-dialects
>
> D.
>


Re: Control.sh script and cluster activation

2017-08-14 Thread Valentin Kulichenko
Agree that this is confusing. I think this functionality should be a part
of the Visor CLI tool (likely a new command there).

-Val

On Mon, Aug 14, 2017 at 4:21 PM, Denis Magda  wrote:

> Dmitriy,
>
> I see you contributed the control.sh script that activates a cluster after a
> restart. Honestly, I’m a bit confused by it:
>
> 1. How to use it? I could find out that there are some parameters, but
> the ‘help’ is not implemented. Please fix this and provide a
> description for every parameter you introduced.
>
> 2. Why did we decide to create a specific script for that? Why can’t we
> use the existing visorcmd script?
>
> 3. Why is the script called “control.sh”?
>
> —
> Denis


Re: Failure to deserialize simple model object

2017-08-14 Thread Valentin Kulichenko
Cross-posting to dev

Folks,

I'm confused by the issue discussed in this thread.

Here is the scenario:
- Start server node with a cache with POJO store configured. There is one
type declared, read-through enabled.
- Start client node and execute get() for a key that exists in underlying
DB.
- During deserialization on the client, 'Requesting mapping from grid
failed for' exception is thrown.

Specifying the type explicitly in BinaryConfiguration solves the issue, and
I think I understand the technical reasons for this. But is this really
expected? Is it possible to fix the issue without requiring this
configuration to be provided?

I thought we did not require types to be listed in the configuration as long
as there is only one platform involved - am I wrong? If so, we need to
identify the scenarios where this configuration is required and document them.
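
For readers hitting the same problem, the workaround mentioned above looks
roughly like this ("com.example.Model" stands in for the actual POJO class):

    import java.util.Collections;

    import org.apache.ignite.binary.BinaryTypeConfiguration;
    import org.apache.ignite.configuration.BinaryConfiguration;
    import org.apache.ignite.configuration.IgniteConfiguration;

    public class ExplicitBinaryTypeConfig {
        public static IgniteConfiguration configure() {
            BinaryConfiguration binaryCfg = new BinaryConfiguration();

            // Declare the model type explicitly so its type mapping is known
            // on every node and no mapping request to the grid is needed.
            binaryCfg.setTypeConfigurations(Collections.singleton(
                new BinaryTypeConfiguration("com.example.Model")));

            return new IgniteConfiguration().setBinaryConfiguration(binaryCfg);
        }
    }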

-Val

On Mon, Aug 14, 2017 at 4:23 AM, franck102  wrote:

> My bad, here is the whole project.
>
> Franck ignite-binary-sample.zip
>  n16158/ignite-binary-sample.zip>
>
>
>
> --
> View this message in context: http://apache-ignite-users.
> 70518.x6.nabble.com/Failure-to-deserialize-simple-model-
> object-tp15440p16158.html
> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>


Re: IgniteSemaphore methods semantics

2017-08-10 Thread Valentin Kulichenko
If this is true, I think it should be fixed. availablePermits() returning
the number of acquired permits sounds very confusing.
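
For comparison, the java.util.concurrent.Semaphore behavior users would
expect, where availablePermits() reports the permits remaining:

    import java.util.concurrent.Semaphore;

    public class SemaphoreSemantics {
        public static void main(String[] args) throws InterruptedException {
            Semaphore s = new Semaphore(2);            // 2 permits available

            s.acquire();                               // take one permit
            System.out.println(s.availablePermits());  // prints 1 (remaining)

            s.release();                               // return the permit
            System.out.println(s.availablePermits());  // prints 2 again
        }
    }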

-Val

On Thu, Aug 10, 2017 at 7:38 AM, Andrey Kuznetsov  wrote:

> Hi, igniters!
>
>
>
> As IgniteSemaphore's javadoc states,
>
>
>
> "Distributed semaphore provides functionality similar to {@code
> java.util.concurrent.Semaphore}."
>
>
>
> At the same time, the method semantics of the current implementation is
> inverted, i.e. acquire() decrements the internal semaphore count and
> release() increments it. Then a newlyCreatedSemaphore.acquire() call blocks
> until some other thread calls release(), and it looks confusing. Also,
> availablePermits() returns the permits acquired so far, that is, the
> semaphore count.
>
>
>
> Another difference is the unbounded nature of the IgniteSemaphore implementation,
> while java.util.concurrent.Semaphore is bounded.
>
>
>
> I think we should do one of the following:
>
>
>
> - Document uncommon IgniteSemaphore semantics properly
>
>
>
> or
>
>
>
> - Change its semantics to conform to its java.util.concurrent counterpart.
>
>
>
> --
>
> Best regards,
>
>   Andrey Kuznetsov.
>


Nightly build

2017-08-10 Thread Valentin Kulichenko
Folks,

I noticed that the latest successful nightly build happened on May 31:
https://builds.apache.org/view/H-L/view/Ignite/job/Ignite-nightly/lastSuccessfulBuild/

Looks like it has been failing since then. Does anyone know the reason?

-Val


Re: Spark Data Frame support in Ignite

2017-08-10 Thread Valentin Kulichenko
Denis,

This only allows limiting the dataset fetched from the DB into Spark. This is
useful, but does not replace the custom Strategy integration, because after
you create the DataFrame, you will use its API to do additional filtering,
mapping, aggregation, etc., and this will happen within Spark. With a custom
Strategy the whole processing will be done on the Ignite side.
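
For reference, the "pushdown" trick Denis describes below can be written from
Java roughly as follows (thin JDBC driver and the person/city tables from his
example assumed); only the subquery runs inside Ignite, while everything
applied to the DataFrame afterwards still runs in Spark:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class IgnitePushdownExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                .appName("ignite-pushdown").master("local[*]").getOrCreate();

            // Spark wraps the dbtable option into SELECT * FROM (...), so a
            // parenthesized, aliased subquery is executed by Ignite directly.
            String pushdown = "(SELECT p.name AS person, c.name AS city "
                + "FROM person p, city c WHERE p.city_id = c.id) AS q";

            Dataset<Row> df = spark.read()
                .format("jdbc")
                .option("url", "jdbc:ignite:thin://127.0.0.1/")
                .option("driver", "org.apache.ignite.IgniteJdbcThinDriver")
                .option("dbtable", pushdown)
                .load();

            df.show();
        }
    }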

-Val

On Thu, Aug 10, 2017 at 3:07 PM, Denis Magda <dma...@apache.org> wrote:

> >> This JDBC integration is just a Spark data source, which means that
> Spark
> >> will fetch data in its local memory first, and only then apply filters,
> >> aggregations, etc.
>
> Seems that there is a backdoor exposed via the standard SQL syntax. You
> can execute so-called “pushdown” queries [1] that are sent by Spark to a
> JDBC database right away, and the result is wrapped into a DataFrame.
>
> I could do this trick using Ignite as a JDBC-compliant data source,
> executing the query below over the data stored in the cluster:
>
> SELECT p.name AS person, c.name AS city
> FROM person p, city c WHERE p.city_id = c.id
>
> There are some limitations though because the actual query issued by Spark
> will be:
>
> SELECT * FROM (SELECT p.name AS person, c.name AS city
> FROM person p, city c WHERE p.city_id = c.id) AS res
>
> Here [2] is a complete example.
>
>
> [1] https://docs.databricks.com/spark/latest/data-sources/sql-
> databases.html#pushdown-query-to-database-engine <
> https://docs.databricks.com/spark/latest/data-sources/sql-
> databases.html#pushdown-query-to-database-engine>
> [2] https://github.com/dmagda/ignite-dataframes <
> https://github.com/dmagda/ignite-dataframes>
>
> —
> Denis
>
> > On Aug 4, 2017, at 3:41 PM, Dmitriy Setrakyan <d...@gridgain.com> wrote:
> >
> > On Thu, Aug 3, 2017 at 9:04 PM, Valentin Kulichenko <
> > valentin.kuliche...@gmail.com> wrote:
> >
> >> This JDBC integration is just a Spark data source, which means that
> Spark
> >> will fetch data in its local memory first, and only then apply filters,
> >> aggregations, etc. This is obviously slow and doesn't use all advantages
> >> Ignite provides.
> >>
> >> To create useful and valuable integration, we should create a custom
> >> Strategy that will convert Spark's logical plan into a SQL query and
> >> execute it directly on Ignite.
> >>
> >
> > I get it, but we have been talking about Data Frame support for longer
> than
> > a year. I think we should advise our users to switch to JDBC until the
> > community gets someone to implement it.
> >
> >
> >>
> >> -Val
> >>
> >> On Thu, Aug 3, 2017 at 12:12 AM, Dmitriy Setrakyan <
> dsetrak...@apache.org>
> >> wrote:
> >>
> >>> On Thu, Aug 3, 2017 at 9:04 AM, Jörn Franke <jornfra...@gmail.com>
> >> wrote:
> >>>
> >>>> I think the development effort would still be higher. Everything would
> >>>> have to be put via JDBC into Ignite, then checkpointing would have to
> >> be
> >>>> done via JDBC (again additional development effort), a lot of
> >> conversion
> >>>> from Spark internal format to JDBC and back to Ignite internal format.
> >>>> Pagination I do not see as a useful feature for managing large data
> >>>> volumes from databases - on the contrary, it is very inefficient (and
> >>>> one would have to implement logic to fetch all pages). Pagination was
> >>>> also never thought of for fetching large data volumes, but for web
> >>>> pages showing a small result set over several pages, where the user
> >>>> can click manually for the next page (which they mostly don't do
> >>>> anyway).
> >>>>
> >>>> While it might be a quick solution , I think a deeper integration than
> >>>> JDBC would be more beneficial.
> >>>>
> >>>
> >>> Jorn, I completely agree. However, we have not been able to find a
> >>> contributor for this feature. You sound like you have sufficient domain
> >>> expertise in Spark and Ignite. Would you be willing to help out?
> >>>
> >>>
> >>>>> On 3. Aug 2017, at 08:57, Dmitriy Setrakyan <dsetrak...@apache.org>
> >>>> wrote:
> >>>>>
> >>>>>> On Thu, Aug 3, 2017 at 8:45 AM, Jörn Franke <jornfra...@gmail.com>
> >>>> wrote:
> >>

Re: SQL usability issues

2017-08-16 Thread Valentin Kulichenko
Denis,

I think this article should be more generic, describing how to connect a
generic JDBC tool to Ignite (probably using DBeaver as an example).

Right now it looks like out of all such tools we support only DBeaver for
some reason :)

What do you think?
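
The generic part boils down to the two settings any JDBC tool asks for: the
driver class and the URL. In plain Java the same connection (to a local node
with default settings) would be:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class GenericJdbcConnect {
        public static void main(String[] args) throws Exception {
            // Driver class and URL - the same two values a tool like DBeaver needs.
            Class.forName("org.apache.ignite.IgniteJdbcThinDriver");

            try (Connection conn =
                     DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT 1")) {
                rs.next();
                System.out.println(rs.getInt(1)); // prints 1
            }
        }
    }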

-Val

On Wed, Aug 16, 2017 at 1:49 PM, Denis Magda  wrote:

> Igniters,
>
> I’ve prepared the getting started basing on DBeaver tool:
> https://apacheignite-tools.readme.io/docs/dbeaver <
> https://apacheignite-tools.readme.io/docs/dbeaver>
>
> That guide is based on the generic one made out by Akmal:
> https://apacheignite.readme.io/docs/getting-started-sql <
> https://apacheignite.readme.io/docs/getting-started-sql>
>
> During the work on the guide we faced one more issue that prevents us from
> using bulk inserts:
> https://issues.apache.org/jira/browse/IGNITE-6092 <
> https://issues.apache.org/jira/browse/IGNITE-6092>
>
> *Alexander P.*, *Vladimir*, how difficult is to support this feature?
>
> —
> Denis
>
> > On Aug 13, 2017, at 9:20 AM, Dmitriy Setrakyan 
> wrote:
> >
> > On Fri, Aug 11, 2017 at 7:31 PM, Denis Magda 
> wrote:
> >
> >> Dmitriy,
> >>
> >> That's the documentation that shows how to configure Pentaho via JDBC:
> >> https://apacheignite-tools.readme.io/v2.1/docs/pentaho
> >
> >
> > I am not sure I agree. Why would I look at Pentaho integration when
> trying
> > to configure a generic SQL viewer tool? We need a generic documentation
> > section for configuring JDBC/ODBC drivers with 3rd party tools.
> >
> >
> >>
> >>
> >> Plus on the tools' domain you can see Tableau based ODBC example.
> >>
> >> So, I'm not sure we need something else from the documentation
> standpoint.
> >> I would rather add a direct reference from JDBC/ODBC docs to Pentaho and
> >> Tableau.
> >>
> >> Denis
> >>
> >> On Friday, August 11, 2017, Dmitriy Setrakyan 
> >> wrote:
> >>
> >>> Igniters,
> >>>
> >>> I have tried to connect to Ignite from DBeaver [1], and realized that
> >> there
> >>> are some usability issues we need to address before the next release:
> >>>
> >>>   1. We need to have documentation on how to configure JDBC and ODBC
> >>>   drivers with external SQL tools [2]
> >>>   2. You cannot highlight multiple SQL statements and run them together
> >>> [3]
> >>>   3. Commands like *DESCRIBE* or *SHOW* do not work [4]
> >>>   4. Schema, index, and table metadata is not displayed [5]. Looks like
> >>>   this fix was already implemented.
> >>>
> >>> The links to the tickets are below.
> >>>
> >>> [1] http://dbeaver.jkiss.org/
> >>> [2] https://issues.apache.org/jira/browse/IGNITE-6048
> >>> [3] https://issues.apache.org/jira/browse/IGNITE-6046
> >>> [4] https://issues.apache.org/jira/browse/IGNITE-6047
> >>> [5] https://issues.apache.org/jira/browse/IGNITE-5233
> >>>
> >>> D.
> >>>
> >>
>
>


Re: It seems WebSession's removeAttribute does not support HttpSessionBindingListener

2017-07-07 Thread Valentin Kulichenko
What is your Jira ID? I will add you as a contributor.

-Val

On Fri, Jul 7, 2017 at 2:44 PM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> This will not work. WebSessionV2 does not reinvent the wheel; it provides
> additional functionality. In particular, it allows using the same session
> in a clustered environment.
>
> The genuine session is local, so if you just rely on it, all session data
> will be lost when the server that holds this session fails. Your listeners
> will not be invoked either, BTW. That's exactly what we're trying to avoid
> by introducing this feature.
>
> However, I agree that there is an issue with expiration. It's currently
> handled based on ExpiryPolicy, i.e., if maxInactiveInterval is set, the
> session will be removed from the cache. But it looks like we do not
> invalidate the
> genuine session, which is wrong.
>
> I think we should add a CacheEntryListener that will listen to expirations
> and handle all required post actions - invalidation of genuine session and
> invoking the listeners.
>
> -Val
>
> On Fri, Jul 7, 2017 at 6:59 AM, yucigou <yuci@gmail.com> wrote:
>
>> Hi Val,
>>
>> The mechanism is similar to the implementation of invalidate() of the
>> WebSessionV2 class. The {@link #invalidate()} action is delegated to the
>> genuine session. Similarly, actions setAttribute(), removeAttribute(), and
>> setMaxInactiveInterval() should be delegated to the genuine session. This
>> way, the web container can do to the session whatever it promises to do,
>> such as calling the HttpSessionBindingListener's valueUnbound callback
>> function, etc.
>>
>> If you look at the HttpSession interface, this is the whole list of APIs
>> concerned:
>>
>> * setAttribute()
>> * removeAttribute()
>> * setMaxInactiveInterval()
>> * invalidate()
>> * putValue()
>>
>> And putValue() has been covered by setAttribute() in WebSessionV2
>>
>> There are two main reasons that WebSessionV2 should delegate to the
>> genuine
>> session:
>> 1. Avoid re-inventing the wheel. The web container has already implemented
>> the related APIs.
>> 2. WebSessionV2 is not visible to the web container. When the web
>> container
>> decides to expire the session, it will not reach the WebSessionV2
>> implementation. And this is exactly where I had the problem in the first
>> place.
>>
>> By the way, thanks for pointing out removing attributes, I've made another
>> pull request on GitHub:
>> https://github.com/apache/ignite/pull/2243
>>
>> Also I can't assign the ticket to myself because of lack of permission:
>> https://issues.apache.org/jira/browse/IGNITE-5607
>>
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-developer
>> s.2346864.n4.nabble.com/It-seems-WebSession-s-removeAttri
>> bute-does-not-support-HttpSessionBindingListener-tp19184p19575.html
>> Sent from the Apache Ignite Developers mailing list archive at Nabble.com.
>>
>
>


Re: It seems WebSession's removeAttribute does not support HttpSessionBindingListener

2017-07-07 Thread Valentin Kulichenko
This will not work. WebSessionV2 does not reinvent the wheel; it provides
additional functionality. In particular, it allows using the same session in
a clustered environment.

The genuine session is local, so if you just rely on it, all session data
will be lost when the server that holds this session fails. Your listeners
will not be invoked either, BTW. That's exactly what we're trying to avoid by
introducing this feature.

However, I agree that there is an issue with expiration. It's currently
handled based on ExpiryPolicy, i.e., if maxInactiveInterval is set, the
session will be removed from the cache. But it looks like we do not
invalidate the
genuine session, which is wrong.

I think we should add a CacheEntryListener that will listen to expirations
and handle all required post actions - invalidation of genuine session and
invoking the listeners.
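
A sketch of that suggestion, using the standard JCache listener API (the
class name and the post-actions are hypothetical):

    import javax.cache.configuration.FactoryBuilder;
    import javax.cache.configuration.MutableCacheEntryListenerConfiguration;
    import javax.cache.event.CacheEntryEvent;
    import javax.cache.event.CacheEntryExpiredListener;
    import javax.cache.event.CacheEntryListenerException;

    import org.apache.ignite.IgniteCache;

    public class SessionExpiryListener
        implements CacheEntryExpiredListener<String, Object> {

        @Override public void onExpired(
            Iterable<CacheEntryEvent<? extends String, ? extends Object>> events)
            throws CacheEntryListenerException {
            for (CacheEntryEvent<? extends String, ? extends Object> evt : events) {
                // Post-actions for evt.getKey(): invalidate the genuine
                // container session and fire HttpSessionBindingListener
                // callbacks.
            }
        }

        public static void register(IgniteCache<String, Object> sessionCache) {
            sessionCache.registerCacheEntryListener(
                new MutableCacheEntryListenerConfiguration<>(
                    FactoryBuilder.factoryOf(SessionExpiryListener.class),
                    null, false, false));
        }
    }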

-Val

On Fri, Jul 7, 2017 at 6:59 AM, yucigou  wrote:

> Hi Val,
>
> The mechanism is similar to the implementation of invalidate() of the
> WebSessionV2 class. The {@link #invalidate()} action is delegated to the
> genuine session. Similarly, actions setAttribute(), removeAttribute(), and
> setMaxInactiveInterval() should be delegated to the genuine session. This
> way, the web container can do to the session whatever it promises to do,
> such as calling the HttpSessionBindingListener's valueUnbound callback
> function, etc.
>
> If you look at the HttpSession interface, this is the whole list of APIs
> concerned:
>
> * setAttribute()
> * removeAttribute()
> * setMaxInactiveInterval()
> * invalidate()
> * putValue()
>
> And putValue() has been covered by setAttribute() in WebSessionV2
>
> There are two main reasons that WebSessionV2 should delegate to the genuine
> session:
> 1. Avoid re-inventing the wheel. The web container has already implemented
> the related APIs.
> 2. WebSessionV2 is not visible to the web container. When the web container
> decides to expire the session, it will not reach the WebSessionV2
> implementation. And this is exactly where I had the problem in the first
> place.
>
> By the way, thanks for pointing out removing attributes, I've made another
> pull request on GitHub:
> https://github.com/apache/ignite/pull/2243
>
> Also I can't assign the ticket to myself because of lack of permission:
> https://issues.apache.org/jira/browse/IGNITE-5607
>
>
>
>
> --
> View this message in context: http://apache-ignite-
> developers.2346864.n4.nabble.com/It-seems-WebSession-s-
> removeAttribute-does-not-support-HttpSessionBindingListener-
> tp19184p19575.html
> Sent from the Apache Ignite Developers mailing list archive at Nabble.com.
>


Re: Changing public IgniteCompute API to improve changes in 5037 ticket

2017-07-12 Thread Valentin Kulichenko
Hi Max,

This ticket doesn't assume any API changes; it's about broken functionality.
I would start with checking what tests we have for @AffinityKeyMapped and
creating the missing ones. From what I understand, the functionality is
broken completely or almost completely, so I guess the testing coverage is
very weak there.
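
For comparison, the affinity collocation path that does work today is the
affinityRun/affinityCall API (the cache name and key below are made up):

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class AffinityRunExample {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();
            ignite.getOrCreateCache("person");

            // The closure is executed on the node that owns key 42 in the
            // "person" cache, so it can work with the locally cached value.
            ignite.compute().affinityRun("person", 42, () -> {
                Object val = Ignition.localIgnite().cache("person").localPeek(42);
                System.out.println("Collocated value: " + val);
            });
        }
    }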

-Val

On Wed, Jul 12, 2017 at 4:27 PM, Kozlov Maxim  wrote:

> Igniters,
>
> jira: https://issues.apache.org/jira/browse/IGNITE-5037 <
> https://issues.apache.org/jira/browse/IGNITE-5037>
> How do you feel about solving this ticket by adding two methods to the public
> IgniteCompute API?
>
> @IgniteAsyncSupported
> public void affinityRun(@NotNull Collection<String> cacheNames,
>     Collection<?> keys, IgniteRunnable job)
>     throws IgniteException;
>
> @IgniteAsyncSupported
> public <R> R affinityCall(@NotNull Collection<String> cacheNames,
>     Collection<?> keys, IgniteCallable<R> job)
>     throws IgniteException;
>
> There is also a question of how to act when the topology changes during
> the execution of the job:
> 1) complete with an exception;
> 2) stop execution and wait until the topology is rebuilt, then continue
> execution.
>
> I think the second way is better. What do you think?
>
> --
> Best Regards,
> Max K.
>
>
>
>
>


Re: [VOTE] Apache Ignite 2.1.0 RC2

2017-07-18 Thread Valentin Kulichenko
+1 (binding)

On Tue, Jul 18, 2017 at 11:40 AM, Denis Magda  wrote:

> +1 (binding)
>
> —
> Denis
>
> > On Jul 17, 2017, at 9:42 PM, Vladimir Ozerov 
> wrote:
> >
> > Igniters!
> >
> > We have uploaded a 2.1.0 release candidate to
> > https://dist.apache.org/repos/dist/dev/ignite/2.1.0-rc2/
> >
> > Git tag name is
> > 2.1.0-rc2
> >
> > This release includes the following changes:
> >
> > Ignite:
> > * Persistent cache store
> > * Added IgniteFuture.listenAsync() and IgniteFuture.chainAsync() methods
> > * Deprecated IgniteConfiguration.marshaller
> > * Updated Lucene dependency to version 5.5.2
> > * Machine learning: implemented K-means clusterization algorithm
> optimized
> > for distributed storages
> > * SQL: CREATE TABLE and DROP TABLE commands support
> > * SQL: New thin JDBC driver
> > * SQL: Improved performance of certain queries, when affinity node can be
> > calculated in advance
> > * SQL: Fixed return type of AVG() function
> > * SQL: BLOB type support added to thick JDBC driver
> > * SQL: Improved LocalDate, LocalTime and LocalDateTime support for Java 8
> > * SQL: Added FieldsQueryCursor interface to get fields metadata for
> > SqlFieldsQuery
> > * ODBC: Implemented DML statement batching
> > * Massive performance and stability improvements
> >
> > Ignite.NET:
> > * Automatic remote assembly loading
> > * NuGet-based standalone node deployment
> > * Added conditional data removal via LINQ DeleteAll
> > * Added TimestampAttribute to control DateTime serialization mode
> > * Added local collections joins support to LINQ.
> >
> > Ignite CPP:
> > * Added Compute::Call and Compute::Broadcast methods
> >
> > Web Console:
> > * Implemented support for UNIQUE indexes for key fields on import model
> > from RDBMS
> > * Added option to show full stack trace on Queries screen
> > * Added PK alias generation on Models screen.
> >
> > Complete list of closed issues:
> > https://issues.apache.org/jira/issues/?jql=project%20%3D%20IGNITE%20AND%
> > 20fixVersion%20%3D%202.1%20AND%20(status%20%3D%
> 20closed%20or%20status%20%3D%
> > 20resolved)
> >
> > DEVNOTES
> > https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=blob_
> plain;f=DEVNOTES.txt;hb=refs/tags/2.1.0-rc2
> >
> > RELEASE NOTES
> > https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=blob_
> plain;f=RELEASE_NOTES.txt;hb=refs/tags/2.1.0-rc2
> >
> > Please start voting.
> >
> > +1 - to accept Apache Ignite 2.1.0-rc2
> > 0 - don't care either way
> > -1 - DO NOT accept Apache Ignite 2.1.0-rc2 (explain why)
> >
> > This vote will go for 72 hours.
>
>


Re: [VOTE] Apache Ignite 2.1.0 RC3

2017-07-20 Thread Valentin Kulichenko
+1 (binding)

On Thu, Jul 20, 2017 at 9:35 PM, Sasha Belyak  wrote:

> +1
>
> 2017-07-21 5:34 GMT+07:00 Denis Magda :
>
> > Igniters,
> >
> > Setting off the vote one more time. Hope I’ll be successful this time,
> > keeping fingers crossed :)
> >
> > We have uploaded a 2.1.0 release candidate to
> > https://dist.apache.org/repos/dist/dev/ignite/2.1.0-rc3/
> >
> > Git tag name is
> > 2.1.0-rc3
> >
> > This release includes the following changes:
> >
> > Ignite:
> > * Persistent cache store
> > * Added IgniteFuture.listenAsync() and IgniteFuture.chainAsync() methods
> > * Deprecated IgniteConfiguration.marshaller
> > * Updated Lucene dependency to version 5.5.2
> > * Machine learning: implemented K-means clusterization algorithm
> optimized
> > for distributed storages
> > * SQL: CREATE TABLE and DROP TABLE commands support
> > * SQL: New thin JDBC driver
> > * SQL: Improved performance of certain queries, when affinity node can be
> > calculated in advance
> > * SQL: Fixed return type of AVG() function
> > * SQL: BLOB type support added to thick JDBC driver
> > * SQL: Improved LocalDate, LocalTime and LocalDateTime support for Java 8
> > * SQL: Added FieldsQueryCursor interface to get fields metadata for
> > SqlFieldsQuery
> > * ODBC: Implemented DML statement batching
> > * Massive performance and stability improvements
> >
> > Ignite.NET:
> > * Automatic remote assembly loading
> > * NuGet-based standalone node deployment
> > * Added conditional data removal via LINQ DeleteAll
> > * Added TimestampAttribute to control DateTime serialization mode
> > * Added local collections joins support to LINQ.
> >
> > Ignite CPP:
> > * Added Compute::Call and Compute::Broadcast methods
> >
> > Web Console:
> > * Implemented support for UNIQUE indexes for key fields on import model
> > from RDBMS
> > * Added option to show full stack trace on Queries screen
> > * Added PK alias generation on Models screen.
> >
> > Complete list of closed issues:
> > https://issues.apache.org/jira/issues/?jql=project%20%3D%20IGNITE%20AND%
> > 20fixVersion%20%3D%202.1%20AND%20(status%20%3D%
> > 20closed%20or%20status%20%3D%
> > 20resolved)
> >
> > DEVNOTES
> > https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=blob_
> > plain;f=DEVNOTES.txt;hb=refs/tags/2.1.0-rc3
> >
> > RELEASE NOTES
> > https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=blob_
> > plain;f=RELEASE_NOTES.txt;hb=refs/tags/2.1.0-rc3
> >
> > Please start voting.
> >
> > +1 - to accept Apache Ignite 2.1.0-rc3
> > 0 - don't care either way
> > -1 - DO NOT accept Apache Ignite 2.1.0-rc3 (explain why)
> >
> > This vote will go for 72 hours.
> >
> > —
> > Denis
> >
> >
>


Re: Timeouts in atomic cache

2017-07-21 Thread Valentin Kulichenko
Any thoughts?

-Val

On Wed, Jul 19, 2017 at 4:21 PM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Folks,
>
> Do we currently have any way to set a timeout for an atomic operation? I
> see neither a way to do this nor any related documentation.
>
> In the code there are CacheAtomicUpdateTimeoutException and
> CacheAtomicUpdateTimeoutCheckedException, but I can't find a single place
> where they are created and/or thrown. Looks like we used to have this
> functionality, but it's not there anymore. Is this really the case or I
> missed something?
>
> I think having a way to time out an atomic operation is very important. For
> example, two concurrent putAll operations with keys in different order can
> completely hang the whole cluster forever, which is unacceptable. Is it
> possible to timeout one of the operations (or both of them) in this case?
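
For the record, the usual workaround for the particular putAll hang described
above is to pass the keys in a consistent order, e.g. via a TreeMap, so that
concurrent bulk updates always lock keys in the same order; it does not
answer the timeout question, though. A minimal sketch:

    import java.util.SortedMap;
    import java.util.TreeMap;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;

    public class OrderedPutAll {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();
            IgniteCache<Integer, String> cache = ignite.getOrCreateCache("test");

            // A sorted map guarantees a deterministic key order, so two
            // concurrent putAll calls cannot deadlock on each other.
            SortedMap<Integer, String> batch = new TreeMap<>();
            batch.put(1, "a");
            batch.put(2, "b");

            cache.putAll(batch);
        }
    }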
>
> -Val
>


Re: Ignite internal events tracing

2017-07-21 Thread Valentin Kulichenko
Alex,

That's a great idea. I would also add an option to dump information on
demand, for the case when an operation hangs and can't complete.
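
Purely as an illustration (this is not the ignite-5797 code), the kind of
instrument being discussed could look like:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class OperationTrace {
        private final Map<String, Long> points = new LinkedHashMap<>();

        /** Record a timestamp when the operation passes a named trace point. */
        public synchronized void point(String name) {
            points.put(name, System.nanoTime());
        }

        /** Dump deltas on completion, or on demand for a hanging operation. */
        public synchronized String dump() {
            StringBuilder sb = new StringBuilder();
            Long prev = null;
            for (Map.Entry<String, Long> e : points.entrySet()) {
                long deltaMs = prev == null ? 0 : (e.getValue() - prev) / 1_000_000;
                sb.append(e.getKey()).append(": +").append(deltaMs).append(" ms\n");
                prev = e.getValue();
            }
            return sb.toString();
        }
    }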

-Val

On Fri, Jul 21, 2017 at 6:15 AM, Alexey Goncharuk <
alexey.goncha...@gmail.com> wrote:

> Igniters,
>
> I've recently stumbled across a situation where occasionally an Ignite
> transaction commit may take up to several seconds, while in general most
> transactions complete in a matter of milliseconds.
>
> After a few attempts to analyze this situation with logs, I realized that
> this is a no-go and I need a finer instrument for this. The idea is to
> introduce several trace points along the way of an Ignite operation and
> collect timings when an operation passes each of the trace points. When
> enabled, this information should be available upon the operation
> completion.
>
> I've implemented a prototype of this for TX commit operation, the
> implementation is available in ignite-5797 branch.
>
> I was wondering if something of this kind may be useful as a part of Ignite
> product and available to users. If so, I would like to discuss the public
> API for this so the feature can be finalized.
>
> Thanks,
> AG
>


Re: Changing public IgniteCompute API to improve changes in 5037 ticket

2017-07-21 Thread Valentin Kulichenko
Maxim,

The issue is that it's currently assumed to support job mapping, but it
actually doesn't. However, I agree that the AffinityKeyMapped annotation
doesn't fit the use case well. Let's fix the documentation and JavaDoc then.

As for the proposed API, it's overcomplicated, took me 15 minutes to
understand what it does :)

What is the use case for which current affinityRun/Call API doesn't work?

-Val

On Fri, Jul 21, 2017 at 5:57 AM, Kozlov Maxim <dreamx@gmail.com> wrote:

> Valentin,
>
> The author of the ticket wants us to provide an API that allows mapping
> ComputeJobs to partitions or keys. If we use @AffinityKeyMapped, then you
> need to pass the cache name parameter, which I think is not convenient for
> the user. Therefore, I propose to extend the existing API.
> Having consulted with Anton V., we decided to make a separate interface,
> ReducibleTask, which will allow us to have different map logic in each
> inheritor.
>
> Old method, allows to map to node:
>
> public interface ComputeTask<T, R> extends ReducibleTask<R> {
>     @Nullable public Map<? extends ComputeJob, ClusterNode>
>         map(List<ClusterNode> subgrid, @Nullable T arg) throws IgniteException;
> }
>
> Brand new method with mapping to partitions, which solves topology change
> issues:
>
> public interface AffinityComputeTask<T, R> extends ReducibleTask<R> {
>     @Nullable public Map<? extends ComputeJob, Integer> map(@NotNull String
>         cacheName, List<Integer> partIds, @Nullable T arg) throws IgniteException;
> }
>
> public interface ReducibleTask<R> extends Serializable {
>     public ComputeJobResultPolicy result(ComputeJobResult res,
>         List<ComputeJobResult> rcvd) throws IgniteException;
>
>     @Nullable public R reduce(List<ComputeJobResult> results) throws
>         IgniteException;
> }
>
> We also need to implement AffinityComputeTaskAdapter and
> AffinityComputeTaskSplitAdapter as default implementations. Is that right?
>
> In IgniteCompute add:
>
> @IgniteAsyncSupported
> public <T, R> R affinityExecute(Class<? extends AffinityComputeTask<T, R>>
>     taskCls, List<Integer> partIds, @Nullable T arg) throws IgniteException;
> @IgniteAsyncSupported
> public <T, R> R affinityExecute(AffinityComputeTask<T, R> task,
>     List<Integer> partIds, @Nullable T arg) throws IgniteException;
>
> public <T, R> ComputeTaskFuture<R> affinityExecuteAsync(Class<? extends
>     AffinityComputeTask<T, R>> taskCls, List<Integer> partIds, @Nullable T arg)
>     throws IgniteException;
> public <T, R> ComputeTaskFuture<R> affinityExecuteAsync(AffinityComputeTask<T,
>     R> task, List<Integer> partIds, @Nullable T arg) throws IgniteException;
>
>
> How do you like this idea, or do you insist that we need to use
> @AffinityKeyMapped to solve the problem?
>
>
> > 13 июля 2017 г., в 6:36, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> написал(а):
> >
> > Hi Max,
> >
> > This ticket doesn't assume any API changes, it's about broken
> > functionality. I would start with checking what tests we have
> > for @AffinityKeyMapped and creating missing one. From what I understand
> > functionality is broken completely or almost completely, so I guess
> testing
> > coverage is very weak there.
> >
> > -Val
> >
> > On Wed, Jul 12, 2017 at 4:27 PM, Kozlov Maxim <dreamx@gmail.com>
> wrote:
> >
> >> Igniters,
> >>
> >> jira: https://issues.apache.org/jira/browse/IGNITE-5037 <
> >> https://issues.apache.org/jira/browse/IGNITE-5037>
> >> How do you look to solve this ticket by adding two methods to the public
> >> IgniteCompute API?
> >>
> >> @IgniteAsyncSupported
> >> public void affinityRun(@NotNull Collection<String> cacheNames,
> >> Collection<?> keys, IgniteRunnable job)
> >> throws IgniteException;
> >>
> >> @IgniteAsyncSupported
> >> public <R> R affinityCall(@NotNull Collection<String> cacheNames,
> >> Collection<?> keys, IgniteCallable<R> job)
> >> throws IgniteException;
> >>
> >> There is also a question of how to act when changing the topology during
> >> the execution of the job.
> >> 1) complete with an exception;
> >> 2) stop execution and wait until the topology is rebuilt and continue
> >> execution;
> >>
> >> I think the second way is better. What do you think?
> >>
> >> --
> >> Best Regards,
> >> Max K.
> >>
> >>
> >>
> >>
> >>
>
> --
> Best Regards,
> Max K.
>
>
>
>
>


Re: Resurrect FairAffinityFunction

2017-07-25 Thread Valentin Kulichenko
Semyon,

We had some improvements, but to my knowledge fair affinity still provides
a much better distribution (at least I haven't seen any results showing
otherwise). Please correct me if I'm wrong.

Actually, I think it's not an issue with the fair function in particular, but
rather a design flaw in the affinity manager. The exact same issue will exist
not only with the fair function, but with ANY function that
uses AffinityFunctionContext#previousAssignment to calculate assignments.
And the context is provided from the outside; the function has nothing to do
with it.

So let's fix the root cause and bring innocent FairAF back :)

-Val

On Tue, Jul 25, 2017 at 1:07 AM, Semyon Boikov <sboi...@gridgain.com> wrote:

> Valentin,
>
> As far as I know, in 2.0 some changes were made in the rendezvous function,
> so now it can provide better results. Do you have some numbers for 2.0 so
> that we can compare the rendezvous and fair affinity functions?
>
> Thanks
>
> On Tue, Jul 25, 2017 at 5:13 AM, <dsetrak...@apache.org> wrote:
>
> > Agree with Val, we should bring it back.
> >
> > ⁣D.​
> >
> > On Jul 24, 2017, 8:14 PM, at 8:14 PM, Valentin Kulichenko <
> > valentin.kuliche...@gmail.com> wrote:
> > >Guys,
> > >
> > >Some time ago we removed FairAffinityFunction from the project.
> > >However, my
> > >communication with users clearly shows that it was a rush decision.
> > >Distribution showed by Fair AF is much better than default and for some
> > >users it's extremely important. Basically, there are cases when
> > >rendezvous
> > >function is no-go.
> > >
> > >The reason for removal was that it was possible to get inconsistent
> > >results
> > >in case multiple caches were created on different topologies. However,
> > >I
> > >think this is fixable. As far as I understand, the only thing we need
> > >to do
> > >is to maintain a single AffinityFunctionContext for all the caches with
> > >same affinity function. Currently for each cache we have separate
> > >context
> > >which holds the state used by Fair AF. If the state is different, we
> > >have
> > >an issue.
> > >
> > >The only question is how to check whether two functions are the same or
> > >not. In case both cache node filter and backup filter are not
> > >configured,
> > >this is easy - if number of partitions and excludeNeighbors flag are
> > >equal
> > >for two functions, these functions are also equal.
> > >
> > >With filters it's a bit more complicated as these are custom
> > >implementations and in general case we don't know how to compare them.
> > >Although, to solve this problem, we can enforce the user to implement
> > >equals()
> > >method for these implementations if Fair AF is used.
> > >
> > >I propose to make changes described above and bring Fair AF back.
> > >
> > >Thoughts?
> > >
> > >-Val
> >
>


Re: Changing public IgniteCompute API to improve changes in 5037 ticket

2017-07-25 Thread Valentin Kulichenko
Anton,

How does topology change break this functionality? Closures executed with
affinityRun/Call fail over in the same way as any ComputeJob.

-Val

On Tue, Jul 25, 2017 at 5:48 AM, Anton Vinogradov <avinogra...@gridgain.com>
wrote:

> Alexei,
>
> > How would task know the partition it is running over ?
> Not sure it is necessary.
> You'll create a partition-job pair at the task's map phase.
>
> > How can I assign task for each cache partition ?
> Just implement a map method that generates a map with a size equal to the
> partition count.
>
> > How can I enforce partition reservation if task works with multiple
> > caches at once ?
> This is possible only in case the caches use a safe affinity function,
> and it is useful only in this case.
>
> On Tue, Jul 25, 2017 at 3:22 PM, Alexei Scherbakov <
> alexey.scherbak...@gmail.com> wrote:
>
> > Please read job instead task
> >
> > 2017-07-25 15:20 GMT+03:00 Alexei Scherbakov <
> alexey.scherbak...@gmail.com
> > >:
> >
> > > The main point of the issue is to provide a clean API for working with
> > > computations requiring data collocation.
> > >
> > > affinityCall/Run provide the ability to run a closure near data, but the
> > > map/reduce API is way richer: continuous mapping, task session, etc.
> > >
> > > As for proposed API, I do not understand fully how it solves the
> problem.
> > >
> > > Maxim, please provide detailed javadoc for each method and each
> argument
> > > for presented API, and the answers to the following questions:
> > >
> > > 1. How would task know the partition it is running over ?
> > >
> > > 2. How can I assign task for each cache partition ?
> > >
> > > 3. How can I enforce partition reservation if task works with multiple
> > > caches at once ?
> > >
> > >
> > >
> > >
> > >
> > > 2017-07-25 12:30 GMT+03:00 Anton Vinogradov <avinogra...@gridgain.com
> >:
> > >
> > >> Val,
> > >>
> > >> Sure, we can, but we'd like to use map/reduce without fearing that
> > >> topology
> > >> can change.
> > >>
> > >> On Mon, Jul 24, 2017 at 11:17 PM, Valentin Kulichenko <
> > >> valentin.kuliche...@gmail.com> wrote:
> > >>
> > >> > Anton,
> > >> >
> > >> > You can call affinityCallAsync multiple times and then reduce
> locally.
> > >> >
> > >> > -Val
> > >> >
> > >> > On Mon, Jul 24, 2017 at 3:05 AM, Anton Vinogradov <
> > >> > avinogra...@gridgain.com>
> > >> > wrote:
> > >> >
> > >> > > Val,
> > >> > >
> > >> > > > What is the use case for which current affinityRun/Call API
> > doesn't
> > >> > work?
> > >> > > It does not work for map/reduce.
> > >> > >
> > >> > > On Fri, Jul 21, 2017 at 11:42 PM, Valentin Kulichenko <
> > >> > > valentin.kuliche...@gmail.com> wrote:
> > >> > >
> > >> > > > Maxim,
> > >> > > >
> > >> > > > The issue is that it's currently assumed to support job mapping,
> > >> but it
> > >> > > > actually doesn't. However, I agree that AffinityKeyMapped
> > annotation
> > >> > > > doesn't fit the use case well. Let's fix documentation and
> JavaDoc
> > >> > then.
> > >> > > >
> > >> > > > As for the proposed API, it's overcomplicated, took me 15
> minutes
> > to
> > >> > > > understand what it does :)
> > >> > > >
> > >> > > > What is the use case for which current affinityRun/Call API
> > doesn't
> > >> > work?
> > >> > > >
> > >> > > > -Val
> > >> > > >
> > >> > > > On Fri, Jul 21, 2017 at 5:57 AM, Kozlov Maxim <
> > dreamx@gmail.com
> > >> >
> > >> > > > wrote:
> > >> > > >
> > >> > > > > Valentin,
> > >> > > > >
> > >> > > > > The author of the ticket wants us to provide an API that
> > >> > > > > allows mapping ComputeJobs to partitions or keys. If we use
> @AffinityKeyMapped
> > >> then
> > >> > > you
> > >> > > > > 

Re: Changing public IgniteCompute API to improve changes in 5037 ticket

2017-07-25 Thread Valentin Kulichenko
Alexey,

Is there an exact use case that is currently not supported? I really would
like to see one, because such a big API change should add clear value.
ComputeGrid is not used very often, and so far I've never seen any
questions from users about using it in conjunction with affinity
collocation.

What if we solve this on the job level instead by adding the following
interface:

interface AffinityComputeJob extends ComputeJob {
String cacheName();
Object affinityKey();
}

Whenever the load balancer sees this job, it maps it based on affinity. Will
this work?
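
To make it concrete, a job using the proposed (non-existent today) interface
might look like this; the load balancer would read cacheName()/affinityKey()
and route the job to the primary node for that key:

    // Sketch only - AffinityComputeJob is the interface proposed above,
    // not an existing Ignite API.
    class CollocatedJob implements AffinityComputeJob {
        private final Object key;

        CollocatedJob(Object key) {
            this.key = key;
        }

        @Override public String cacheName() {
            return "person"; // hypothetical cache
        }

        @Override public Object affinityKey() {
            return key;
        }

        @Override public Object execute() {
            // Runs collocated with 'key'; read local data here.
            return null;
        }

        @Override public void cancel() {
            // No-op.
        }
    }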

-Val

On Tue, Jul 25, 2017 at 12:37 PM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Anton,
>
> How does topology change break this functionality? Closures executed with
> affinityRun/Call fail over in the same way as any ComputeJob.
>
> -Val
>
> On Tue, Jul 25, 2017 at 5:48 AM, Anton Vinogradov <
> avinogra...@gridgain.com> wrote:
>
>> Alexei,
>>
>> > How would task know the partition it is running over ?
>> Not sure it is necessary.
>> You'll create a partition-job pair at the task's map phase.
>>
>> > How can I assign task for each cache partition ?
>> Just implement a map method that generates a map with a size equal to the
>> partition count.
>>
>> > How can I enforce partition reservation if task works with multiple
>> > caches at once ?
>> This is possible only in case the caches use a safe affinity function,
>> and it is useful only in this case.
>>
>> On Tue, Jul 25, 2017 at 3:22 PM, Alexei Scherbakov <
>> alexey.scherbak...@gmail.com> wrote:
>>
>> > Please read job instead task
>> >
>> > 2017-07-25 15:20 GMT+03:00 Alexei Scherbakov <
>> alexey.scherbak...@gmail.com
>> > >:
>> >
>> > > Main point of the issue is to provide clean API for working with
>> > > computations requiring data collocation
>> > >
>> > > affinityCall/Run provide the ability to run a closure near data, but
>> > > the map/reduce API is way richer: continuous mapping, task session,
>> etc.
>> > >
>> > > As for proposed API, I do not understand fully how it solves the
>> problem.
>> > >
>> > > Maxim, please provide detailed javadoc for each method and each
>> argument
>> > > for presented API, and the answers to the following questions:
>> > >
>> > > 1. How would task know the partition it is running over ?
>> > >
>> > > 2. How can I assign task for each cache partition ?
>> > >
>> > > 3. How can I enforce partition reservation if task works with multiple
>> > > caches at once ?
>> > >
>> > >
>> > >
>> > >
>> > >
>> > > 2017-07-25 12:30 GMT+03:00 Anton Vinogradov <avinogra...@gridgain.com
>> >:
>> > >
>> > >> Val,
>> > >>
>> > >> Sure, we can, but we'd like to use map/reduce without fearing that
>> > >> topology
>> > >> can change.
>> > >>
>> > >> On Mon, Jul 24, 2017 at 11:17 PM, Valentin Kulichenko <
>> > >> valentin.kuliche...@gmail.com> wrote:
>> > >>
>> > >> > Anton,
>> > >> >
>> > >> > You can call affinityCallAsync multiple times and then reduce
>> locally.
>> > >> >
>> > >> > -Val
>> > >> >
>> > >> > On Mon, Jul 24, 2017 at 3:05 AM, Anton Vinogradov <
>> > >> > avinogra...@gridgain.com>
>> > >> > wrote:
>> > >> >
>> > >> > > Val,
>> > >> > >
>> > >> > > > What is the use case for which current affinityRun/Call API
>> > doesn't
>> > >> > work?
>> > >> > > It does not work for map/reduce.
>> > >> > >
>> > >> > > On Fri, Jul 21, 2017 at 11:42 PM, Valentin Kulichenko <
>> > >> > > valentin.kuliche...@gmail.com> wrote:
>> > >> > >
>> > >> > > > Maxim,
>> > >> > > >
>> > >> > > > The issue is that it's currently assumed to support job
>> mapping,
>> > >> but it
>> > >> > > > actually doesn't. However, I agree that AffinityKeyMapped
>> > annotation
>> > >> > > > doesn't fit the use case well. Let's fix documentation and
>> JavaDoc
>> > >> > then.
>> > >>

Re: Resurrect FairAffinityFunction

2017-07-25 Thread Valentin Kulichenko
Created a ticket: https://issues.apache.org/jira/browse/IGNITE-5836

-Val

On Tue, Jul 25, 2017 at 11:54 AM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Semyon,
>
> We had some improvements, but to my knowledge fair affinity still provides
> a much better distribution (at least I haven't seen any results showing
> otherwise). Please correct me if I'm wrong.
>
> Actually, I think it's not an issue with the fair function in particular,
> but rather a design flaw in the affinity manager. The exact same issue will
> exist not only with the fair function, but with ANY function that
> uses AffinityFunctionContext#previousAssignment to calculate assignments.
> And the context is provided from the outside; the function has nothing to
> do with it.
>
> So let's fix the root cause and bring innocent FairAF back :)
>
> -Val
>
> On Tue, Jul 25, 2017 at 1:07 AM, Semyon Boikov <sboi...@gridgain.com>
> wrote:
>
>> Valentin,
>>
>> As far as I know, in 2.0 some changes were made in the rendezvous function,
>> so now it can provide better results. Do you have some numbers for 2.0 so
>> that we can compare the rendezvous and fair affinity functions?
>>
>> Thanks
>>
>> On Tue, Jul 25, 2017 at 5:13 AM, <dsetrak...@apache.org> wrote:
>>
>> > Agree with Val, we should bring it back.
>> >
>> > ⁣D.​
>> >
>> > On Jul 24, 2017, 8:14 PM, at 8:14 PM, Valentin Kulichenko <
>> > valentin.kuliche...@gmail.com> wrote:
>> > >Guys,
>> > >
>> > >Some time ago we removed FairAffinityFunction from the project.
>> > >However, my
>> > >communication with users clearly shows that it was a rush decision.
>> > >Distribution showed by Fair AF is much better than default and for some
>> > >users it's extremely important. Basically, there are cases when
>> > >rendezvous
>> > >function is no-go.
>> > >
>> > >The reason for removal was that it was possible to get inconsistent
>> > >results
>> > >in case multiple caches were created on different topologies. However,
>> > >I
>> > >think this is fixable. As far as I understand, the only thing we need
>> > >to do
>> > >is to maintain a single AffinityFunctionContext for all the caches with
>> > >same affinity function. Currently for each cache we have separate
>> > >context
>> > >which holds the state used by Fair AF. If the state is different, we
>> > >have
>> > >an issue.
>> > >
>> > >The only question is how to check whether two functions are the same or
>> > >not. In case both cache node filter and backup filter are not
>> > >configured,
>> > >this is easy - if number of partitions and excludeNeighbors flag are
>> > >equal
>> > >for two functions, these functions are also equal.
>> > >
>> > >With filters it's a bit more complicated as these are custom
>> > >implementations and in general case we don't know how to compare them.
>> > >Although, to solve this problem, we can enforce user to implement
>> > >equals()
>> > >method for these implementation if Fair AF is used.
>> > >
>> > >I propose to make changes described above and bring Fair AF back.
>> > >
>> > >Thoughts?
>> > >
>> > >-Val
>> >
>>
>
>


Re: Changing public IgniteCompute API to improve changes in 5037 ticket

2017-07-24 Thread Valentin Kulichenko
Anton,

You can call affinityCallAsync multiple times and then reduce locally.
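
A sketch of that pattern, assuming a hypothetical "person" cache and some
per-key computation:

    import java.util.ArrayList;
    import java.util.Collection;

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.lang.IgniteFuture;

    public class AffinityCallReduce {
        public static void main(String[] args) {
            Ignite ignite = Ignition.start();
            ignite.getOrCreateCache("person");

            Collection<IgniteFuture<Long>> futs = new ArrayList<>();

            // One collocated call per key...
            for (int key : new int[] {1, 2, 3}) {
                final int k = key;
                futs.add(ignite.compute().affinityCallAsync("person", k,
                    () -> (long) k /* compute against locally cached data */));
            }

            // ...then reduce locally over the futures.
            long total = 0;
            for (IgniteFuture<Long> fut : futs)
                total += fut.get();

            System.out.println("Reduced result: " + total);
        }
    }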

-Val

On Mon, Jul 24, 2017 at 3:05 AM, Anton Vinogradov <avinogra...@gridgain.com>
wrote:

> Val,
>
> > What is the use case for which current affinityRun/Call API doesn't work?
> It does not work for map/reduce.
>
> On Fri, Jul 21, 2017 at 11:42 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > Maxim,
> >
> > The issue is that it's currently assumed that it supports job mapping, but
> > it actually doesn't. However, I agree that the AffinityKeyMapped annotation
> > doesn't fit the use case well. Let's fix the documentation and JavaDoc then.
> >
> > As for the proposed API, it's overcomplicated; it took me 15 minutes to
> > understand what it does :)
> >
> > What is the use case for which the current affinityRun/Call API doesn't work?
> >
> > -Val
> >
> > On Fri, Jul 21, 2017 at 5:57 AM, Kozlov Maxim <dreamx@gmail.com>
> > wrote:
> >
> > > Valentin,
> > >
> > > The author of the ticket wants us to provide an API that allows mapping
> > > ComputeJobs to partitions or keys. If we use @AffinityKeyMapped, then you
> > > need to pass the cache name parameter, which I think is inconvenient for
> > > the user. Therefore, I propose to extend the existing API.
> > > Having consulted with Anton V., I decided to make a separate interface,
> > > ReducibleTask, which will allow us to have different map logic in each
> > > inheritor.
> > >
> > > The old method, which allows mapping jobs to nodes:
> > >
> > > public interface ComputeTask<T, R> extends ReducibleTask<R> {
> > >     @Nullable public Map<? extends ComputeJob, ClusterNode> map(
> > >         List<ClusterNode> subgrid, @Nullable T arg) throws IgniteException;
> > > }
> > >
> > > A brand new method with mapping to partitions, which solves topology
> > > change issues:
> > >
> > > public interface AffinityComputeTask<T, R> extends ReducibleTask<R> {
> > >     @Nullable public Map<? extends ComputeJob, Integer> map(
> > >         @NotNull String cacheName, List<Integer> partIds, @Nullable T arg)
> > >         throws IgniteException;
> > > }
> > >
> > > public interface ReducibleTask<R> extends Serializable {
> > >     public ComputeJobResultPolicy result(ComputeJobResult res,
> > >         List<ComputeJobResult> rcvd) throws IgniteException;
> > >
> > >     @Nullable public R reduce(List<ComputeJobResult> results) throws
> > >         IgniteException;
> > > }
> > >
> > > We also need to implement AffinityComputeTaskAdapter and
> > > AffinityComputeTaskSplitAdapter as default implementations. Is that right?
> > >
> > > In the IgniteCompute add:
> > >
> > > @IgniteAsyncSupported
> > > public <T, R> R affinityExecute(Class<? extends AffinityComputeTask<T, R>>
> > >     taskCls, List<Integer> partIds, @Nullable T arg) throws IgniteException;
> > >
> > > @IgniteAsyncSupported
> > > public <T, R> R affinityExecute(AffinityComputeTask<T, R> task,
> > >     List<Integer> partIds, @Nullable T arg) throws IgniteException;
> > >
> > > public <T, R> ComputeTaskFuture<R> affinityExecuteAsync(
> > >     Class<? extends AffinityComputeTask<T, R>> taskCls, List<Integer> partIds,
> > >     @Nullable T arg) throws IgniteException;
> > >
> > > public <T, R> ComputeTaskFuture<R> affinityExecuteAsync(
> > >     AffinityComputeTask<T, R> task, List<Integer> partIds, @Nullable T arg)
> > >     throws IgniteException;
> > >
> > >
> > > What do you think of this idea? Or do you insist that @AffinityKeyMapped
> > > should be used to solve the problem?
> > >
> > >
> > > > On July 13, 2017, at 6:36, Valentin Kulichenko <
> > > valentin.kuliche...@gmail.com> wrote:
> > > >
> > > > Hi Max,
> > > >
> > > > This ticket doesn't assume any API changes; it's about broken
> > > > functionality. I would start with checking what tests we have
> > > > for @AffinityKeyMapped and creating the missing ones. From what I
> > > > understand, the functionality is broken completely or almost
> > > > completely, so I guess test coverage is very weak there.
> > > >
> > > > -Val
> > > >
> > > > On Wed, Jul 12, 2017 at 4:27 PM, Kozlov Maxim <dreamx@gmail.com>
> > > wrote:
> > > >
> > > >> Igniters,
> > > >>
> > > >> jira: https://issues.apache.org/jira/browse/IGNITE-5037 <
> > > >> https://issues.apache.org/jira/browse/IGNITE-5037>
> > > >> What do you think about solving this ticket by adding two methods to
> > > >> the public IgniteCompute API?
> > > >>
> > > >> @IgniteAsyncSupported
> > > >> public void affinityRun(@NotNull Collection<String> cacheNames,
> > > >>     Collection<?> keys, IgniteRunnable job)
> > > >>     throws IgniteException;
> > > >>
> > > >> @IgniteAsyncSupported
> > > >> public <R> R affinityCall(@NotNull Collection<String> cacheNames,
> > > >>     Collection<?> keys, IgniteCallable<R> job)
> > > >>     throws IgniteException;
> > > >>
> > > >> There is also a question of how to act when the topology changes
> > > >> during the execution of the job:
> > > >> 1) complete with an exception;
> > > >> 2) stop execution, wait until the topology is rebuilt, and then
> > > >> continue execution.
> > > >>
> > > >> I think the second way is better. What do you think?
> > > >>
> > > >> --
> > > >> Best Regards,
> > > >> Max K.
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>
> > >
> > > --
> > > Best Regards,
> > > Max K.
> > >
> > >
> > >
> > >
> > >
> >
>


Re: [VOTE] Apache Ignite 2.1.0 RC4

2017-07-24 Thread Valentin Kulichenko
+1 (binding)

On Mon, Jul 24, 2017 at 6:39 AM, Dmitriy Setrakyan 
wrote:

> Anton,
>
> You should treat this vote as a brand new vote. According to Apache rules,
> you need 3 +1 votes and it has to go for 72 hours.
>
> D.
>
> On Mon, Jul 24, 2017 at 8:32 AM, Anton Vinogradov  wrote:
>
> > Igniters,
> >
> > This vote based on same files as RC3.
> > Only one change is that I signed zips with my signature.
> > KEYS files (https://dist.apache.org/repos/dist/release/ignite/KEYS) was
> > updated as well.
> >
> > We already got 5 "+1" votes at RC3, so is there any reason to wait 72 hours?
> > This vote will go for 72 hours but may be closed earlier if Konstantin
> > Boudnik confirms the security issue is solved.
> >
> > We have uploaded a 2.1.0 release candidate to
> > https://dist.apache.org/repos/dist/dev/ignite/2.1.0-rc4/
> >
> > Git tag name is
> > 2.1.0-rc4
> >
> > This release includes the following changes:
> >
> > Ignite:
> > * Persistent cache store
> > * Added IgniteFuture.listenAsync() and IgniteFuture.chainAsync() methods
> > * Deprecated IgniteConfiguration.marshaller
> > * Updated Lucene dependency to version 5.5.2
> > * Machine learning: implemented K-means clusterization algorithm
> optimized
> > for distributed storages
> > * SQL: CREATE TABLE and DROP TABLE commands support
> > * SQL: New thin JDBC driver
> > * SQL: Improved performance of certain queries, when affinity node can be
> > calculated in advance
> > * SQL: Fixed return type of AVG() function
> > * SQL: BLOB type support added to thick JDBC driver
> > * SQL: Improved LocalDate, LocalTime and LocalDateTime support for Java 8
> > * SQL: Added FieldsQueryCursor interface to get fields metadata for
> > SqlFieldsQuery
> > * ODBC: Implemented DML statement batching
> > * Massive performance and stability improvements
> >
> > Ignite.NET:
> > * Automatic remote assembly loading
> > * NuGet-based standalone node deployment
> > * Added conditional data removal via LINQ DeleteAll
> > * Added TimestampAttribute to control DateTime serialization mode
> > * Added local collections joins support to LINQ.
> >
> > Ignite CPP:
> > * Added Compute::Call and Compute::Broadcast methods
> >
> > Web Console:
> > * Implemented support for UNIQUE indexes for key fields on import model
> > from RDBMS
> > * Added option to show full stack trace on Queries screen
> > * Added PK alias generation on Models screen.
> >
> > Complete list of closed issues:
> > https://issues.apache.org/jira/issues/?jql=project%20%3D%20IGNITE%20AND%
> > 20fixVersion%20%3D%202.1%20AND%20(status%20%3D%
> > 20closed%20or%20status%20%3D%
> > 20resolved)
> >
> > DEVNOTES
> > https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=blob_
> > plain;f=DEVNOTES.txt;hb=refs/tags/2.1.0-rc4
> >
> > RELEASE NOTES
> > https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=blob_
> > plain;f=RELEASE_NOTES.txt;hb=refs/tags/2.1.0-rc4
> >
> > Please start voting.
> >
> > +1 - to accept Apache Ignite 2.1.0-rc4
> > 0 - don't care either way
> > -1 - DO NOT accept Apache Ignite 2.1.0-rc4 (explain why)
> >
> > This vote will go for 72 hours.
> >
>


Resurrect FairAffinityFunction

2017-07-24 Thread Valentin Kulichenko
Guys,

Some time ago we removed FairAffinityFunction from the project. However, my
communication with users clearly shows that it was a rush decision. The
distribution produced by Fair AF is much better than the default one, and for
some users it's extremely important. Basically, there are cases when the
rendezvous function is a no-go.

The reason for removal was that it was possible to get inconsistent results
when multiple caches were created on different topologies. However, I think
this is fixable. As far as I understand, the only thing we need to do is to
maintain a single AffinityFunctionContext for all the caches with the same
affinity function. Currently each cache has a separate context which holds the
state used by Fair AF. If the state is different, we have an issue.

The only question is how to check whether two functions are the same or not.
If neither the cache node filter nor the backup filter is configured, this is
easy: if the number of partitions and the excludeNeighbors flag are equal for
two functions, the functions are equal.

With filters it's a bit more complicated, as these are custom implementations
and in the general case we don't know how to compare them. To solve this
problem, though, we can require users to implement the equals() method for
these implementations if Fair AF is used.
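As an illustration, a sketch of such a filter (the attribute-based logic is
just an example, not a proposal):

import org.apache.ignite.cluster.ClusterNode;
import org.apache.ignite.lang.IgnitePredicate;

public class AttributeNodeFilter implements IgnitePredicate<ClusterNode> {
    private final String attrName;

    public AttributeNodeFilter(String attrName) {
        this.attrName = attrName;
    }

    // Accept only nodes that define the attribute.
    @Override public boolean apply(ClusterNode node) {
        return node.attribute(attrName) != null;
    }

    // equals()/hashCode() allow two affinity functions configured with
    // logically identical filters to be recognized as the same.
    @Override public boolean equals(Object o) {
        return o instanceof AttributeNodeFilter &&
            attrName.equals(((AttributeNodeFilter)o).attrName);
    }

    @Override public int hashCode() {
        return attrName.hashCode();
    }
}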

I propose to make changes described above and bring Fair AF back.

Thoughts?

-Val


Re: Ignite internal events tracing

2017-07-24 Thread Valentin Kulichenko
Yakov,

How can IgniteDiagnosticAware be used? Is there any documentation?

-Val

On Mon, Jul 24, 2017 at 3:24 AM, Yakov Zhdanov  wrote:

> Alex, I like the idea very much, but I think we need to rethink the
> implementation approach to make it more generic. Passing a parameter to each
> invocation seems dirty to me.
>
> Val,  we already have this. Please
> see org.apache.ignite.internal.IgniteDiagnosticAware
>
> Dmitry, what you suggest will be pretty hard to implement. I would rather
> improve the self-diagnostic system to extend the list of metrics Ignite
> monitors.
>
> --Yakov
>


Re: Timeouts in atomic cache

2017-07-24 Thread Valentin Kulichenko
Yakov,

Thanks for the response. I definitely like the idea of detecting Java-level
deadlocks.

As for hangs caused by Ignite internal problems, do we have a ticket for
this as well? Do you have any idea about how this should be implemented?

-Val

On Mon, Jul 24, 2017 at 3:55 AM, Yakov Zhdanov  wrote:

> Val, it seems you spotted an issue. Please file a ticket. I would suggest
> removing the exceptions entirely, as in my understanding timeout logic for
> atomic operations will bring additional overhead, while most of the time
> atomic operations are instant. From the timeout perspective, what
> distinguishes an atomic operation from a transaction is that you cannot
> predict when the user releases a lock acquired inside a transaction, whereas
> an atomic operation should have a predictable timeout.
>
> As for your example: currently this will lead to a Java-level deadlock on
> the synchronized sections for the cache entries (but when we move to pure
> thread-per-partition for atomic caches this will not be an issue any more:
> https://issues.apache.org/jira/browse/IGNITE-4506). I would suggest we file
> a ticket to implement detection of Java-level deadlocks and allow the user
> to configure a policy to take appropriate action on deadlock wherever it
> happens - https://issues.apache.org/jira/browse/IGNITE-5811
>
> Any other hang of an atomic operation seems to be caused by issues in
> Ignite's internal machinery - either a hung exchange or problems in message
> processing on some node (e.g. all threads are busy and/or deadlocked) -
> which again should result in notifying the user and stopping the node (by
> default).
>
> --Yakov
>


Re: Non-UTF-8 string encoding support in BinaryMarshaller (IGNITE-5655)

2017-07-27 Thread Valentin Kulichenko
Pavel,

This forces the user to implement Binarylizable for the whole type in case
they want to change the encoding for one or two fields, right? I really don't
like it; why not add a default encoding to BinaryTypeConfiguration?
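Something along these lines (a sketch; the setEncoding() setter is hypothetical
and does not exist in the current API):

import java.util.Arrays;
import org.apache.ignite.configuration.BinaryConfiguration;
import org.apache.ignite.configuration.BinaryTypeConfiguration;

public class EncodingConfigSketch {
    public static BinaryConfiguration binaryCfg() {
        BinaryTypeConfiguration typeCfg = new BinaryTypeConfiguration("org.example.Person");

        // typeCfg.setEncoding("Cp1251"); // Hypothetical per-type default encoding.

        BinaryConfiguration binCfg = new BinaryConfiguration();
        binCfg.setTypeConfigurations(Arrays.asList(typeCfg));

        return binCfg;
    }
}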

-Val

On Thu, Jul 27, 2017 at 7:54 AM, Pavel Tupitsyn 
wrote:

> > 1 byte for every field just for this
> The GridBinaryMarshaller.STRING data type remains untouched.
> We add GridBinaryMarshaller.STRING_ENCODED, which has an additional byte for
> the encoding type.
>
> This means no overhead for existing code.
> I think the most common use case is English, which uses 1 byte per char in
> UTF-8.
> This is already as fast and compact as possible, and we don't want to
> introduce any lookup overhead here.
>
> And when the user knows that their data will be more compact in some
> specific encoding, they use a BinaryWriter.writeString overload, which
> writes a different type code.
>
> Yes, it also writes an extra byte, but you save a byte per char of the
> actual string
> (for example, when using Windows-1251 for Russian text), so this does not
> matter.
>
> On Thu, Jul 27, 2017 at 5:35 PM, Dmitriy Setrakyan 
> wrote:
>
> > Pavel, what would be the size overhead? Are we adding 1 byte for every
> > field just for this? If you would like to have this info in the binary
> > object directly, can we in this case have some bitmap of
> field-to-encoding?
> >
> > D.
> >
> > On Thu, Jul 27, 2017 at 9:22 AM, Pavel Tupitsyn 
> > wrote:
> >
> > > I'm not sure I understand how this "per field" configuration is supposed
> > > to be implemented.
> > > * Marshaller is not tied to a cache. It serializes all kinds of things,
> > > like compute job parameters and results.
> > > * Raw mode does not involve field names.
> > >
> > > Also it seems like a complicated and expensive solution - looking up
> > string
> > > format somewhere in the metadata will be slow.
> > >
> > > "encoded string" data type suggestion from Vladimir looks better to me
> > from
> > > performance and implementation standpoint.
> > >
> > > Thanks,
> > > Pavel
> > >
> > >
> > >
> > > On Thu, Jul 27, 2017 at 5:10 PM, Dmitriy Setrakyan <
> > dsetrak...@apache.org>
> > > wrote:
> > >
> > > > On Thu, Jul 27, 2017 at 9:04 AM, Igor Sapego 
> > wrote:
> > > >
> > > > > Just a note from the platforms guy:
> > > > >
> > > > > A solution with table-level configuration is going to be
> > > > > significantly harder to implement for platforms and ODBC than a
> > > > > field-level one.
> > > > >
> > > >
> > > > Igor, it seems like you are advocating the per-cell configuration,
> not
> > > > per-field one. The per-field configuration can be defined at the
> > > > table/cache level.
> > > >
> > > > I see your point about C++ and .NET integrations however. Can't we
> > > provide
> > > > this info at node-join time or table-creation time? This way all
> nodes
> > > will
> > > > receive it and you will be able to grab it on different platforms.
> > > >
> > > >
> > > > >
> > > > > Also, what about binary objects, which are not stored in cache,
> > > > > but being marshalled?
> > > > >
> > > >
> > > > I think the default system encoding should be used here. If we don't
> > have
> > > > configuration for default encoding, we should add it.
> > > >
> > > >
> > > > >
> > > > >
> > > > > Best Regards,
> > > > > Igor
> > > > >
> > > > > On Wed, Jul 26, 2017 at 7:22 PM, Dmitriy Setrakyan <
> > > > dsetrak...@apache.org>
> > > > > wrote:
> > > > >
> > > > > > On Wed, Jul 26, 2017 at 3:40 AM, Vyacheslav Daradur <
> > > > daradu...@gmail.com
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > > Encoding must be set on per field basis. This will give us as
> > > most
> > > > > > > flexible
> > > > > > > > solution at the cost of 1-byte overhead.
> > > > > > >
> > > > > > > > Vova, I agree that the encoding should be set on per-field
> > basis,
> > > > but
> > > > > > at
> > > > > > > > the table level, not at a cell level.
> > > > > > >
> > > > > > > Dmitriy, Vladimir,
> > > > > > > Let's use both approaches :-)
> > > > > > > We can add a parameter to CacheConfiguration.
> > > > > > > If the parameter specifies cache-level encoding, then the
> > > > > > > marshaller will use the encoding defined for the cache;
> > > > > > > otherwise the marshaller will use per-field encoding.
> > > > > > > Of course, only if it doesn't complicate the solution.
> > > > > > >
> > > > > > >
> > > > > > I think that it will complicate the solution and will complicate
> > the
> > > > > > marshalling protocol. The advantage of specifying the encoding at
> > > > > > table/cache level is that we don't need to add extra encoding
> bytes
> > > to
> > > > > the
> > > > > > marshalling protocol.
> > > > > >
> > > > > > I think Vova was suggesting encoding at the cell level, not at
> the
> > > > field
> > > > > > level, which seems to be redundant to me.
> > > > > >
> > > > > > Vova, do you agree?
> > > > > >
> > > > >
> > > 

Re: Assertions as binary data validation checks in deserialization

2017-07-27 Thread Valentin Kulichenko
Andrey,

How will it corrupt the data? The assertion only reads the array; it doesn't
update it, right?

-Val

On Thu, Jul 27, 2017 at 8:54 AM, Andrey Kuznetsov  wrote:

> Hi Igniters,
>
> While examining BinaryObjectImpl code I found this curious line in typeId()
> method:
>
>   assert arr[off] == GridBinaryMarshaller.STRING : arr[off];
>
> Is it OK to check external binary data with assertions?
> I think it can lead to undefined behaviour on corrupt data from the wire.
>
> --
> Best regards,
>   Andrey Kuznetsov.
>


Re: Assertions as binary data validation checks in deserialization

2017-07-27 Thread Valentin Kulichenko
Do you suggest throwing an exception instead of using assertions?

-Val

On Thu, Jul 27, 2017 at 11:52 AM, Andrey Kuznetsov <stku...@gmail.com>
wrote:

> Valentin,
>
> I meant the behaviour of this code when corrupted data from the network is
> being deserialized. An assertion is a no-op in production, so we silently
> ignore binary format violations.
>
> On July 27, 2017 at 21:09, "Valentin Kulichenko" <
> valentin.kuliche...@gmail.com> wrote:
>
> > Andrey,
> >
> > How will it corrupt the data? Assertions only reads the array, not
> updates
> > it, right?
> >
> > -Val
> >
> > On Thu, Jul 27, 2017 at 8:54 AM, Andrey Kuznetsov <stku...@gmail.com>
> > wrote:
> >
> > > Hi Igniters,
> > >
> > > While examining BinaryObjectImpl code I found this curious line in
> > typeId()
> > > method:
> > >
> > >   assert arr[off] == GridBinaryMarshaller.STRING : arr[off];
> > >
> > > Is it OK to check external binary data with assertions?
> > > I think it can lead to undefined behaviour on corrupt data from the
> wire.
> > >
> > > --
> > > Best regards,
> > >   Andrey Kuznetsov.
> > >
> >
>


Re: Assertions as binary data validation checks in deserialization

2017-07-27 Thread Valentin Kulichenko
Makes sense to me. Feel free to create a ticket unless someone else has any
objection. However, we should then revise other code for similar places:
fixing only this one line doesn't change much.
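For reference, the "fail fast" variant could look like this (a sketch; the
constant below is a stand-in for the internal GridBinaryMarshaller.STRING):

import org.apache.ignite.binary.BinaryObjectException;

public class TypeCheckSketch {
    /** Stand-in for GridBinaryMarshaller.STRING. */
    private static final byte STRING = 9;

    // Explicit check that fails fast on corrupt input even when assertions
    // are disabled, which is the default in production.
    static void checkStringHeader(byte[] arr, int off) {
        if (arr[off] != STRING)
            throw new BinaryObjectException("Unexpected type header [pos=" + off +
                ", type=" + arr[off] + ']');
    }
}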

-Val

On Thu, Jul 27, 2017 at 12:55 PM, Andrey Kuznetsov <stku...@gmail.com>
wrote:

> Indeed, "let it crash" approach is better than unclear error in some
> indeterminate place later. Here we depend on data from "inpredictable"
> source, so assertions are not suitable.
>
> On July 27, 2017 at 22:35, "Valentin Kulichenko" <
> valentin.kuliche...@gmail.com> wrote:
>
> Do you suggest throwing an exception instead of using assertions?
>
> -Val
>


Re: ContinuousQueryWithTransformer implementation questions

2017-07-26 Thread Valentin Kulichenko
Nikolay,

We already have the following method for queries with a transformer. It
currently throws an exception for ContinuousQuery.

<T, R> QueryCursor<R> query(Query<T> qry, IgniteClosure<T, R> transformer)

Would it be possible to utilize it instead of creating new API?
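For reference, this is how the transformer overload works for ScanQuery today
(a sketch; the cache name and types are assumptions):

import javax.cache.Cache;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.query.QueryCursor;
import org.apache.ignite.cache.query.ScanQuery;

public class ScanWithTransformerSketch {
    public static void printKeys(Ignite ignite) {
        IgniteCache<Integer, String> cache = ignite.cache("myCache");

        // The transformer runs on the remote nodes, so only keys travel back.
        try (QueryCursor<Integer> cur = cache.query(
            new ScanQuery<Integer, String>(),
            (Cache.Entry<Integer, String> e) -> e.getKey())) {
            for (Integer key : cur)
                System.out.println(key);
        }
    }
}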

-Val

On Wed, Jul 26, 2017 at 5:26 AM, Николай Ижиков 
wrote:

> Hello, Igniters.
>
> I'm working on IGNITE-425 [1] issue.
> I made a couple of changes in my branch [2] so I want to confirm that
> changes with community before moving forward:
>
> Text of issue:
>
> ```
> Currently, if an updated entry passes the filter, it is sent entirely to the
> node that initiated the query.
> It would be good to provide the user with the ability to transform the entry
> and, for example, select only the fields that are important. This may bring
> huge savings in traffic and lower GC pressure as well.
> ```
>
> 1. I created a new class, ContinuousQueryWithTransformer, that extends Query.
>
> Reasons to create an entirely new class instead of extending ContinuousQuery:
>
> a. ContinuousQuery is final, so the user can't extend it. I don't want to
> change that.
> b. ContinuousQuery contains some deprecated methods (setRemoteFilter), so
> with a new class we can get rid of them.
> c. Such a public API design disallows combining the existing localEventListener
> with the new transformedEventListener at compile time.
>
> ```
> public final class ContinuousQueryWithTransformer<K, V, T> extends
>     Query<Cache.Entry<K, V>> {
>     public ContinuousQueryWithTransformer<K, V, T> setRemoteFilterFactory(
>         Factory<? extends CacheEntryEventFilter<K, V>> rmtFilterFactory) { /**/ }
>
>     public ContinuousQueryWithTransformer<K, V, T> setRemoteTransformerFactory(
>         Factory<? extends IgniteClosure<CacheEntryEvent<? extends K, ? extends V>, T>>
>             factory) { /**/ }
>
>     public ContinuousQueryWithTransformer<K, V, T> setLocalTransformedEventListener(
>         TransformedEventListener<T> locTransEvtLsnr) { /**/ }
>
>     public interface TransformedEventListener<T> {
>         void onUpdated(Iterable<? extends T> events) throws
>             CacheEntryListenerException;
>     }
> }
> ```
>
> 2. I want to edit all tests from the package
> `core/src/test/java/org/apache/ignite/internal/processors/
> cache/query/continuous/`
> to ensure my implementation fully supports the existing tests.
> I want to make each test work both for the regular ContinuousQuery and for
> ContinuousQueryWithTransformer:
>
> Existing test:
>
> ```
> ContinuousQuery<Object, Object> qry = new ContinuousQuery<>();
>
> qry.setLocalListener(new CacheEntryUpdatedListener<Object, Object>() {
>     @Override public void onUpdated(Iterable<CacheEntryEvent<?, ?>> evts) {
>         for (CacheEntryEvent<?, ?> evt : evts) {
>             if ((Integer)evt.getValue() >= 0)
>                 evtCnt.incrementAndGet();
>         }
>     }
> });
>
> ```
>
> To be:
>
> ```
> Query<Cache.Entry<Object, Object>> qry = createContinuousQuery();
>
> setLocalListener(qry, new CI1<T2<Object, Object>>() {
>     @Override public void apply(T2<Object, Object> e) {
>         if ((Integer)e.getValue() >= 0)
>             evtCnt.incrementAndGet();
>     }
> });
> ```
>
> Base class to support setLocalListener:
>
> ```
> protected void setLocalListener(Query<?> q, CI1<T2<Object, Object>> lsnrClsr) {
>     if (isContinuousWithTransformer()) {
>         ((ContinuousQueryWithTransformer<Object, Object, T2<Object, Object>>)q)
>             .setLocalTransformedEventListener(
>                 new TransformedEventListenerImpl(lsnrClsr));
>     }
>     else
>         ((ContinuousQuery<Object, Object>)q).setLocalListener(
>             new CacheInvokeListener(lsnrClsr));
> }
>
> protected static class CacheInvokeListener
>     implements CacheEntryUpdatedListener<Object, Object> {
>     private CI1<T2<Object, Object>> clsr;
>
>     @Override public void onUpdated(
>         Iterable<CacheEntryEvent<? extends Object, ? extends Object>> events)
>         throws CacheEntryListenerException {
>         for (CacheEntryEvent<?, ?> e : events)
>             clsr.apply(new T2<>(e.getKey(), e.getValue()));
>     }
> }
>
> protected static class TransformedEventListenerImpl
>     implements TransformedEventListener<T2<Object, Object>> {
>     private CI1<T2<Object, Object>> clsr;
>
>     @Override public void onUpdated(Iterable<? extends T2<Object, Object>> evts)
>         throws CacheEntryListenerException {
>         for (T2<Object, Object> e : evts)
>             clsr.apply(e);
>     }
> }
> ```
>
> Thoughts?
>
> [1] https://issues.apache.org/jira/browse/IGNITE-425
> [2] https://github.com/nizhikov/ignite/pull/9/files
>
> --
> Nikolay Izhikov
> nizhikov@gmail.com
>


Re: Changing public IgniteCompute API to improve changes in 5037 ticket

2017-07-26 Thread Valentin Kulichenko
Anton,

This seems to be a completely separate issue. I don't see how it can be
fixed by adding new APIs.

-Val

On Wed, Jul 26, 2017 at 3:56 AM, Anton Vinogradov <avinogra...@gridgain.com>
wrote:

> Val,
>
> AFAIK, affinityRun/Call is guaranteed to be successfully executed on an
> unstable topology as long as the partition was not lost, only relocated to
> another node during rebalancing.
>
> On Tue, Jul 25, 2017 at 10:44 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > Alexey,
> >
> > Is there an exact use case that is currently not supported? I really would
> > like to see one, because such a big API change should add clear value.
> > ComputeGrid is not used very often, and so far I've never seen any
> > questions from users about using it in conjunction with affinity
> > collocation.
> >
> > What if we solve this on job level instead by adding the following
> > interface:
> >
> > interface AffinityComputeJob extends ComputeJob {
> >     String cacheName();
> >
> >     Object affinityKey();
> > }
> >
> > Whenever the load balancer sees this job, it maps it based on affinity.
> > Will this work?
> >
> > -Val
> >
> > On Tue, Jul 25, 2017 at 12:37 PM, Valentin Kulichenko <
> > valentin.kuliche...@gmail.com> wrote:
> >
> > > Anton,
> > >
> > > How does topology change break this functionality? Closures executed
> with
> > > affinityRun/Call fail over in the same way as any ComputeJob.
> > >
> > > -Val
> > >
> > > On Tue, Jul 25, 2017 at 5:48 AM, Anton Vinogradov <
> > > avinogra...@gridgain.com> wrote:
> > >
> > >> Alexei,
> > >>
> > >> > How would task know the partition it is running over ?
> > >> Not sure it's necessary.
> > >> You'll create a partition-job pair at the task's map phase.
> > >>
> > >> > How can I assign task for each cache partition ?
> > >> Just implement a map method that generates a map whose size equals the
> > >> partition count.
> > >>
> > >> > How can I enforce partition reservation if task works with multiple
> > >> caches at once ?
> > >> This is possible only if the caches use the same affinity function.
> > >> And it is useful only in this case.
> > >>
> > >> On Tue, Jul 25, 2017 at 3:22 PM, Alexei Scherbakov <
> > >> alexey.scherbak...@gmail.com> wrote:
> > >>
> > >> > Please read job instead task
> > >> >
> > >> > 2017-07-25 15:20 GMT+03:00 Alexei Scherbakov <
> > >> alexey.scherbak...@gmail.com
> > >> > >:
> > >> >
> > >> > > The main point of the issue is to provide a clean API for working
> > >> > > with computations requiring data collocation.
> > >> > >
> > >> > > affinityCall/Run provide the ability to run a closure near the data,
> > >> > > but the map/reduce API is way richer: continuous mapping, task
> > >> > > session, etc.
> > >> > >
> > >> > > As for the proposed API, I do not fully understand how it solves the
> > >> > > problem.
> > >> > >
> > >> > > Maxim, please provide detailed javadoc for each method and each
> > >> argument
> > >> > > for presented API, and the answers to the following questions:
> > >> > >
> > >> > > 1. How would task know the partition it is running over ?
> > >> > >
> > >> > > 2. How can I assign task for each cache partition ?
> > >> > >
> > >> > > 3. How can I enforce partition reservation if task works with
> > multiple
> > >> > > caches at once ?
> > >> > >
> > >> > >
> > >> > >
> > >> > >
> > >> > >
> > >> > > 2017-07-25 12:30 GMT+03:00 Anton Vinogradov <
> > avinogra...@gridgain.com
> > >> >:
> > >> > >
> > >> > >> Val,
> > >> > >>
> > >> > >> Sure, we can, but we'd like to use map/reduce without fearing
> that
> > >> > >> topology
> > >> > >> can change.
> > >> > >>
> > >> > >> On Mon, Jul 24, 2017 at 11:17 PM, Valentin Kulichenko <
> > >> > >> valentin.kuliche...@gmail.com> wrote:
> > >> &

Re: ContinuousQueryWithTransformer implementation questions

2017-07-26 Thread Valentin Kulichenko
Yeah, unfortunately the current ContinuousQuery object can't be used for
querying with a transformer. That's actually not good, because adding
transformers to continuous queries and scan queries will be very inconsistent.

AFAIK, there are plans to completely rework the query API, since we added a
lot of stuff the current API is not enough for (DML, DDL, etc.). Probably it
makes sense to consider transformers in the new API as well.

-Val

On Wed, Jul 26, 2017 at 1:32 PM, Nikolay Izhikov <nizhikov@gmail.com>
wrote:

> Hello, Valentin.
>
> As far as I can understand, `query(Query<T> qry, IgniteClosure<T, R>
> transformer)` is slightly different from what I should implement.
>
> I need to pass two parameters for ContinuousQuery instead of localListener:
>
> - a remote transformer;
> - a local listener for transformed events;
>
> and the method you mention can accept only a transformer.
>
> Moreover, I think I should somehow "extend" ContinuousQuery (my proposal is
> a new class with a similar name) because the issue is about the possibility
> of optimizing the continuous query mechanism.
>
> Thoughts?
>
>
> On 26.07.2017 at 20:56, Valentin Kulichenko wrote:
>
> Nikolay,
>>
>> We already have the following method for queries with a transformer. It
>> currently throws an exception for ContinuousQuery.
>>
>> <T, R> QueryCursor<R> query(Query<T> qry, IgniteClosure<T, R> transformer)
>>
>> Would it be possible to utilize it instead of creating new API?
>>
>> -Val
>>
>> On Wed, Jul 26, 2017 at 5:26 AM, Николай Ижиков <nizhikov@gmail.com>
>> wrote:
>>
>> Hello, Igniters.
>>>
>>> I'm working on IGNITE-425 [1] issue.
>>> I made a couple of changes in my branch [2] so I want to confirm that
>>> changes with community before moving forward:
>>>
>>> Text of issue:
>>>
>>> ```
>>> Currently, if an updated entry passes the filter, it is sent entirely to
>>> the node that initiated the query.
>>> It would be good to provide the user with the ability to transform the
>>> entry and, for example, select only the fields that are important. This
>>> may bring huge savings in traffic and lower GC pressure as well.
>>> ```
>>>
>>> 1. I created a new class, ContinuousQueryWithTransformer, that extends
>>> Query.
>>>
>>> Reasons to create an entirely new class instead of extending
>>> ContinuousQuery:
>>>
>>>  a. ContinuousQuery is final, so the user can't extend it. I don't want
>>> to change that.
>>>  b. ContinuousQuery contains some deprecated methods (setRemoteFilter), so
>>> with a new class we can get rid of them.
>>>  c. Such a public API design disallows combining the existing
>>> localEventListener with the new transformedEventListener at compile time.
>>>
>>> ```
>>> public final class ContinuousQueryWithTransformer<K, V, T> extends
>>>     Query<Cache.Entry<K, V>> {
>>>     public ContinuousQueryWithTransformer<K, V, T> setRemoteFilterFactory(
>>>         Factory<? extends CacheEntryEventFilter<K, V>> rmtFilterFactory) { /**/ }
>>>
>>>     public ContinuousQueryWithTransformer<K, V, T> setRemoteTransformerFactory(
>>>         Factory<? extends IgniteClosure<CacheEntryEvent<? extends K, ? extends V>, T>>
>>>             factory) { /**/ }
>>>
>>>     public ContinuousQueryWithTransformer<K, V, T> setLocalTransformedEventListener(
>>>         TransformedEventListener<T> locTransEvtLsnr) { /**/ }
>>>
>>>     public interface TransformedEventListener<T> {
>>>         void onUpdated(Iterable<? extends T> events) throws
>>>             CacheEntryListenerException;
>>>     }
>>> }
>>> ```
>>>
>>> 2. I want to edit all tests from the package
>>> `core/src/test/java/org/apache/ignite/internal/processors/
>>> cache/query/continuous/`
>>> to ensure my implementation fully supports the existing tests.
>>> I want to make each test work both for the regular ContinuousQuery and for
>>> ContinuousQueryWithTransformer:
>>>
>>> Existing test:
>>>
>>> ```
>>> ContinuousQuery<Object, Object> qry = new ContinuousQuery<>();
>>>
>>> qry.setLocalListener(new CacheEntryUpdatedListener<Object, Object>() {
>>>     @Override public void onUpdated(Iterable<CacheEntryEvent<?, ?>> evts) {
>>>         for (CacheEntryEvent<?, ?> evt : evts) {
>>>             if ((Integer)evt.getValue() >= 0)
>>>                 evtCnt.incrementAndGet();

Timeouts in atomic cache

2017-07-19 Thread Valentin Kulichenko
Folks,

Do we currently have any way to set a timeout for an atomic operation? I see
neither a way to do this nor any related documentation.

In the code there are CacheAtomicUpdateTimeoutException
and CacheAtomicUpdateTimeoutCheckedException, but I can't find a single place
where they are created and/or thrown. It looks like we used to have this
functionality, but it's not there anymore. Is this really the case, or did I
miss something?

I think having a way to timeout atomic operation is very important. For
example, two concurrent putAll operations with keys in different order can
completely hang the whole cluster forever, which is unacceptable. Is it
possible to timeout one of the operations (or both of them) in this case?
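As a side note, the usual user-side mitigation today is to feed putAll keys in
a consistent order, e.g. (a sketch):

import java.util.Map;
import java.util.TreeMap;
import org.apache.ignite.IgniteCache;

public class SortedPutAllSketch {
    // Two concurrent bulk updates that both use sorted maps acquire key
    // locks in the same order and therefore cannot deadlock each other.
    public static void safePutAll(IgniteCache<Integer, String> cache,
        Map<Integer, String> data) {
        cache.putAll(new TreeMap<>(data));
    }
}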

-Val


Re: Zookeeper Discovery SPI & external IP address in AWS

2017-06-28 Thread Valentin Kulichenko
Yakov,

Private address should be published as well, of course.

-Val

On Wed, Jun 28, 2017 at 3:47 AM, Yakov Zhdanov  wrote:

> Val,
>
> What if client sits in  a private network, too?
>
> Btw, do we pass all addresses through address resolver prior to sealing
> node attributes? I mean communication, rest, etc?
>
> --Yakov
>


Re: By bytes access to binary format

2017-06-28 Thread Valentin Kulichenko
Vladislav,

Are you suggesting to stream directly from cache. or from a binary object
that is already copied from cache?

-Val

On Wed, Jun 28, 2017 at 2:52 AM, Vladislav Pyatkov 
wrote:

> Hi,
>
> Recently I heard an interesting idea from one of the Ignite users:
> what if I want to pass some data from the cache to a Java stream?
>
> With binary I do it like this:
>
> BinaryObject get = (BinaryObject) cache.get(key);
> byte[] dataFromCache = get.field("data");
> System.out.write(dataFromCache, 0, dataFromCache.length);
>
> But in this case we get a lot of garbage, because a new byte array is
> created each time.
>
> This will lead to many GC events if we load millions of entries.
> Could we offer an additional API for working with Java streams:
>
> BinaryObject.writeBytesToBuf("data", ByteBuffer.allocate(1024));
>
> or with a plain byte array and offset:
>
> BinaryObject.writeBytesToBuf("data", new byte[1000], 100);
>
> I already created a Jira ticket.
> https://issues.apache.org/jira/browse/IGNITE-5602
>
> --
> Vladislav Pyatkov
> Architect-Consultant "GridGain Rus" Llc.
> +7 963 716 68 99
>


Re: It seems WebSession's removeAttribute does not support HttpSessionBindingListener

2017-06-28 Thread Valentin Kulichenko
Hi,

Good catch! I created a ticket for this:
https://issues.apache.org/jira/browse/IGNITE-5607

Are you willing to pick it up and contribute the fix?

-Val

On Wed, Jun 28, 2017 at 3:25 AM, yucigou  wrote:

> When a session expires or is invalidated, or a session attribute gets
> removed, etc., HttpSessionBindingListener's valueUnbound callback function
> should be fired.
>
> However, it seems that WebSession's removeAttribute does not support
> HttpSessionBindingListener. (I'm referring to the Ignite Web module.)
>
> class WebSession implements HttpSession, Externalizable {
>
>     /** {@inheritDoc} */
>     @Override public void removeAttribute(String name) {
>         if (!isValid)
>             throw new IllegalStateException("Call on invalidated session!");
>
>         attrs.remove(name);
>
>         if (updates != null)
>             updates.add(new T2<>(name, null));
>     }
>
> ...
>
> Somehow, our application relies on the HttpSessionBindingListener's
> valueUnbound callback function getting called to clean up resources.
>
> Any advice please?
>
> PS.:
> Tomcat's implementation of HttpSession looks like:
>
> StandardSession.java
>
> protected void removeAttributeInternal(String name, boolean notify) {
>     // Avoid NPE
>     if (name == null) return;
>
>     // Remove this attribute from our collection
>     Object value = attributes.remove(name);
>
>     // Do we need to do valueUnbound() and attributeRemoved() notification?
>     if (!notify || (value == null)) {
>         return;
>     }
>
>     // Call the valueUnbound() method if necessary
>     HttpSessionBindingEvent event = null;
>     if (value instanceof HttpSessionBindingListener) {
>         event = new HttpSessionBindingEvent(getSession(), name, value);
>         ((HttpSessionBindingListener) value).valueUnbound(event);
>     }
> ...
>
>
>
>


Re: By bytes access to binary format

2017-06-30 Thread Valentin Kulichenko
Vova,

Generally this can be useful. If you have a read-only binary object with a
large blob as a field, you don't want to copy this array when reading it.
Instead, we can return a ByteBuffer or a stream wrapping the corresponding
portion.
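To illustrate the idea, a purely hypothetical accessor (nothing like this
exists in the API today):

import java.io.InputStream;
import java.nio.ByteBuffer;

public interface BinaryFieldBytes {
    /** Read-only view over the field's region of the underlying array, no copy. */
    ByteBuffer fieldBytes(String fieldName);

    /** Stream over the same region, for consumers that expect an InputStream. */
    InputStream fieldStream(String fieldName);
}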

However, I currently don't see how this can be smoothly added to the existing
API. Vlad, do you have any concrete proposal on what it should look like?

-Val

On Thu, Jun 29, 2017 at 2:11 PM, Vladimir Ozerov <voze...@gridgain.com>
wrote:

> Hi Vlad,
>
> I am not quite sure I understand the problem. Can you show what the API you
> propose would look like? Remember that the "field" method can return anything
> from a primitive, String or byte array, to another BinaryObject. And the
> returned BinaryObject can have references outside of itself, so it cannot be
> serialized easily without a full rebuild.
>
> On Thu, Jun 29, 2017 at 10:16 AM, Vladislav Pyatkov <vpyat...@gridgain.com
> >
> wrote:
>
> > Val,
> >
> > I propose stream access to the binary object, because we have double
> > copying when touching a field (first when copying from the cache, and
> > second when getting the field).
> >
> > For streaming in/out of the cache, IGFS will be used.
> > The main idea is to avoid GC pressure when doing massive reads from the
> > key-value storage.
> >
> > On Wed, Jun 28, 2017 at 9:36 PM, Valentin Kulichenko <
> > valentin.kuliche...@gmail.com> wrote:
> >
> > > Vladislav,
> > >
> > > Are you suggesting to stream directly from cache. or from a binary
> object
> > > that is already copied from cache?
> > >
> > > -Val
> > >
> > > On Wed, Jun 28, 2017 at 2:52 AM, Vladislav Pyatkov <
> > vpyat...@gridgain.com>
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > > Recently I heard an interesting idea from one of the Ignite users:
> > > > what if I want to pass some data from the cache to a Java stream?
> > > >
> > > > With binary I do it like this:
> > > >
> > > > BinaryObject get = (BinaryObject) cache.get(key);
> > > > byte[] dataFromCache = get.<byte[]>field("data");
> > > > System.out.write(dataFromCache, 0, dataFromCache.length);
> > > >
> > > > But in this case we get a lot of garbage, because a new byte array is
> > > > created each time.
> > > >
> > > > This will lead to many GC events if we load millions of entries.
> > > > Could we offer an additional API for working with Java streams:
> > > >
> > > > BinaryObject.writeBytesToBuf("data", ByteBuffer.allocate(1024));
> > > >
> > > > or with buffer
> > > >
> > > > BinaryObject.writeBytesToBuf("data", new byte[1000], 100);
> > > >
> > > > I already created a Jira ticket.
> > > > https://issues.apache.org/jira/browse/IGNITE-5602
> > > >
> > > > --
> > > > Vladislav Pyatkov
> > > > Architect-Consultant "GridGain Rus" Llc.
> > > > +7 963 716 68 99
> > > >
> > >
> >
> >
> >
> > --
> > Vladislav Pyatkov
> > Architect-Consultant "GridGain Rus" Llc.
> > +7 963 716 68 99
> >
>


Re: [jira] [Created] (IGNITE-5647) Suggestion for Apache Ignite Generic Transactional Receiver Implementation for Concurrency

2017-06-30 Thread Valentin Kulichenko
Hi Fatih,

You can find this information here:
https://ignite.apache.org/community/contribute.html#ignite-dev

-Val

On Fri, Jun 30, 2017 at 11:43 AM, fatih  wrote:

> Hi
>
> Could you please send a guide for a new committer explaining how to create a
> branch for the dedicated JIRA ticket, and other things that might be
> necessary?
>
> Regards
>
>
>
>


Re: Distributed scheduling

2017-06-30 Thread Valentin Kulichenko
I think this functionality should provide a durable way of executing scheduled
tasks or closures on the cluster. Job descriptors should be persisted on the
server side and executed there.

As for the API, I believe this should be part of the Compute Grid. I suggest
introducing an IgniteCompute#withSchedulingPolicy(SchedulingPolicy policy)
method, where SchedulingPolicy is something like this:

public interface SchedulingPolicy {
    /**
     * @return Timestamp of next execution.
     */
    public Date nextTime();
}

This will enable scheduling for all compute features (tasks, callables,
closures, etc.) and is also very flexible. A policy implementation can provide
simple periodic scheduling, scheduling based on Cron expressions, or anything
else.
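For example, a fixed-period implementation against the proposed interface could
be as simple as this sketch:

import java.util.Date;

public class PeriodicSchedulingPolicy implements SchedulingPolicy {
    private final long periodMs;

    public PeriodicSchedulingPolicy(long periodMs) {
        this.periodMs = periodMs;
    }

    /** Next execution is simply "now + period". */
    @Override public Date nextTime() {
        return new Date(System.currentTimeMillis() + periodMs);
    }
}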

Thoughts?

-Val

On Fri, Jun 30, 2017 at 7:55 AM, Dmitriy Setrakyan 
wrote:

> On Fri, Jun 30, 2017 at 12:29 AM, Alexey Kuznetsov 
> wrote:
>
> > Dmitriy,
> >
> > >> Can you provide a simple example of API calls that will make this
> > possible?
> > API could be like this:
> > 1) via scheduler:
> > Ignite ignite = Ignition.start();
> >
> > ignite.scheduler().schedule(job, "0 0 * * *"); // This will execute job
> > every day at 00:00
> >
> > 2) via compute
> >
> > Ignite ignite = Ignition.start();
> >
> > ignite.compute().schedule(task, "0 0 * * *"); // This will execute
> > compute
> > task every day at 00:00
> >
> > Make sense?
> >
> >
> Yes, it does, but I am failing to see how this is *distributed*
> scheduling. Are we persisting the scheduler somewhere in the cluster, or is
> it only triggered on the client side?
>


Re: DataStreamer Transactional and Timestamp Implementation

2017-06-30 Thread Valentin Kulichenko
Hi Fatih,

This makes sense to me, but frankly I don't see anything that can be
included in the product here. It's very specific to your case and doesn't
add much value in general.

Do you have a blog by any chance? :) It looks like a very good topic for an
article (describing the use case, proposing solution, etc.).

-Val

On Fri, Jun 30, 2017 at 11:17 AM, fatih  wrote:

> Hi
>
> May I kindly ask if you had a chance to look into the implementation and
> the
> use case description in our previous post
>
>
>
>


Re: Custom string encoding

2017-06-30 Thread Valentin Kulichenko
Andrey,

Can you elaborate more on this? What is your concern?

-Val

On Fri, Jun 30, 2017 at 6:17 PM Andrey Mashenkov <andrey.mashen...@gmail.com>
wrote:

> Val,
>
> Looks like it makes sense.
>
> This will not affect the FullText index, as Lucene has its own format for
> storing data.
>
> But would it be compatible with H2 indexing? I doubt it.
>
> On July 1, 2017 at 2:27, "Valentin Kulichenko" <
> valentin.kuliche...@gmail.com> wrote:
>
> > Folks,
> >
> > Currently the binary marshaller always encodes strings in UTF-8. However,
> > sometimes it can be useful to customize this. For example, if the data
> > contains a lot of Cyrillic, Chinese or other symbols, but not many Latin
> > symbols, memory is used very inefficiently. In this case it would be great
> > to encode the most frequently used symbols in one byte instead of two or
> > three.
> >
> > I propose to introduce BinaryStringEncoder interface that will convert
> > strings to byte arrays and back, and make it pluggable via
> > BinaryConfiguration. This will allow users to plug in any encoding
> > algorithms based on their requirements.
> >
> > Thoughts?
> >
> > https://issues.apache.org/jira/browse/IGNITE-5655
> >
> > -Val
> >
>


Custom string encoding

2017-06-30 Thread Valentin Kulichenko
Folks,

Currently the binary marshaller always encodes strings in UTF-8. However,
sometimes it can be useful to customize this. For example, if the data contains
a lot of Cyrillic, Chinese or other symbols, but not many Latin symbols,
memory is used very inefficiently. In this case it would be great to encode
the most frequently used symbols in one byte instead of two or three.

I propose to introduce a BinaryStringEncoder interface that will convert
strings to byte arrays and back, and to make it pluggable via
BinaryConfiguration. This will allow users to plug in any encoding
algorithm based on their requirements.
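A rough sketch of what this could look like (the interface shape is not final,
and Cp1251 is just an example):

import java.nio.charset.Charset;

public interface BinaryStringEncoder {
    byte[] encode(String s);

    String decode(byte[] bytes, int off, int len);
}

// Example implementation: one byte per Cyrillic character vs. two in UTF-8.
class Cp1251Encoder implements BinaryStringEncoder {
    private static final Charset CP1251 = Charset.forName("Cp1251");

    @Override public byte[] encode(String s) {
        return s.getBytes(CP1251);
    }

    @Override public String decode(byte[] bytes, int off, int len) {
        return new String(bytes, off, len, CP1251);
    }
}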

Thoughts?

https://issues.apache.org/jira/browse/IGNITE-5655

-Val


Re: Distributed scheduling

2017-07-03 Thread Valentin Kulichenko
Dmitry,

Yes, this can be implemented using services in many cases, but:

- It will require the user to implement the actual scheduling logic. It's quite
a generic task, so I think it makes sense to have it directly on the API.
- Most likely it will imply deploying a separate service for each scheduled
task. I don't think that's a very good idea.
- The current services implementation is not durable. If the cluster is
restarted, all services are lost.
-Val

On Sat, Jul 1, 2017 at 12:34 AM, Dmitriy Setrakyan <dsetrak...@apache.org>
wrote:

> Val,
>
> In this case, we should have a notion of a named scheduler and ensure that
> we don't schedule the same task more than once. This is beginning to look
> more like a durable cluster singleton service, no?
>
> D.
>
> On Fri, Jun 30, 2017 at 1:39 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > I think this functionality should provide durable way of scheduled task
> or
> > closure execution on the cluster. Job descriptors should be persisted on
> > server side and executed there.
> >
> > As for API, I believe this should be part of Compute Grid. I suggest to
> > introduce IgniteCompute#withSchedulingPolicy(SchedulingPolicy policy)
> > method, where SchedulingPolicy is smth like this:
> >
> > public interface SchedulingPolicy {
> > /**
> >  * @return Timestamp of next execution.
> >  */
> > public Date nextTime();
> > }
> >
> > This will enable scheduling for all compute features (tasks, callables,
> > closures, etc.) and also very flexible. Policy implementation can provide
> > simple periodic scheduling, scheduling based on Cron or anything else.
> >
> > Thoughts?
> >
> > -Val
> >
> > On Fri, Jun 30, 2017 at 7:55 AM, Dmitriy Setrakyan <
> dsetrak...@apache.org>
> > wrote:
> >
> > > On Fri, Jun 30, 2017 at 12:29 AM, Alexey Kuznetsov <
> > akuznet...@apache.org>
> > > wrote:
> > >
> > > > Dmitriy,
> > > >
> > > > >> Can you provide a simple example of API calls that will make this
> > > > possible?
> > > > API could be like this:
> > > > 1) via scheduler:
> > > > Ignite ignite = Ignition.start();
> > > >
> > > > ignite.scheduler().schedule(job, "0 0 * * *"); // This will execute
> > job
> > > > every day at 00:00
> > > >
> > > > 2) via compute
> > > >
> > > > Ignite ignite = Ignition.start();
> > > >
> > > > ignite.compute().schedule(task, "0 0 * * *"); // This will execute
> > > > compute
> > > > task every day at 00:00
> > > >
> > > > Make sense?
> > > >
> > > >
> > > Yes, it does, but I am failing to see how this is *distributed*
> > > scheduling. Are we persisting the scheduler somewhere in the cluster, or
> > > is it only triggered on the client side?
> > >
> >
>


Re: Zookeeper Discovery SPI & external IP address in AWS

2017-07-03 Thread Valentin Kulichenko
To my knowledge that's the case; there shouldn't be any issues with that.

-Val

On Mon, Jul 3, 2017 at 3:55 AM, Yakov Zhdanov  wrote:

> My point is that the communication SPI should put both address types into
> the attributes it shares: private addresses and the addresses processed by
> a resolver.
>
> --Yakov
>


Re: Custom string encoding

2017-07-03 Thread Valentin Kulichenko
Yes, this needs to be tested and confirmed. I will work on it.

Would be great to get more details about indexes. I'm not sure I understand
the limitation there.

-Val

On Mon, Jul 3, 2017 at 7:21 AM, Dmitriy Setrakyan <dsetrak...@apache.org>
wrote:

> Agree with Valya on the system-wide default. We need to have it.
>
> Also, are we certain that the encoding will provide 1-byte length for UTF-8
> for different languages? Would be nice to test it to confirm, as it has a
> potential to decrease the Ignite storage space by 2x in certain cases.
>
> D.
>
> On Sun, Jul 2, 2017 at 12:26 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > Vova,
> >
> > That's actually a good point. Probably that would be enough and there is no
> > need to introduce an abstract encoder. However, I still think it makes
> > sense to specify the default encoding in BinaryConfiguration and
> > BinaryTypeConfiguration.
> >
> > -Val
> >
> > On Sun, Jul 2, 2017 at 10:31 AM Vladimir Ozerov <voze...@gridgain.com>
> > wrote:
> >
> > > Yes, this is exactly what non-UTF8 encodings do.
> > >
> > > On Sun, July 2, 2017 at 20:08, Dmitriy Setrakyan <dsetrak...@apache.org>:
> > >
> > > > On Sun, Jul 2, 2017 at 9:50 AM, Vladimir Ozerov <
> voze...@gridgain.com>
> > > > wrote:
> > > >
> > > > > There is no need for custom encoders, as they are already built-in
> to
> > > > Java.
> > > > >
> > > >
> > > > Will non-ASCII encodings fit into 1 byte? The whole point here is to
> > save
> > > > space.
> > > >
> > > >
> > > > >
> > > > > On Sun, July 2, 2017 at 19:16, Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >:
> > > > >
> > > > > > Vladimir, how would you plugin custom encoders in your design?
> > > > > >
> > > > > > On Sat, Jul 1, 2017 at 11:53 PM, Vladimir Ozerov <
> > > voze...@gridgain.com
> > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Valya,
> > > > > > >
> > > > > > > Personally I vote against this feature. BinaryConfiguration is
> > > proven
> > > > > to
> > > > > > be
> > > > > > > inconvenient, since it has to be configured before node start,
> it
> > > > > cannot
> > > > > > be
> > > > > > > changed in runtime, and it requires classes on the server.
> > > Moreover,
> > > > if
> > > > > > you
> > > > > > > decide to change encoding at some point, it would be
> impossible.
> > > > > > >
> > > > > > > I think, we should add this feature on API level instead. If
> > string
> > > > is
> > > > > > > written in non-UTF8 form, we will write in different format:
> > > > > > > [encoding_code][string]
> > > > > > >
> > > > > > > BInaryWriter.writeString(String fieldName, String val);
> > > > > > > BInaryWriter.writeString(String fieldName, String val, *String
> > > > > > encoding*);
> > > > > > >
> > > > > > > BinaryReader.readString(String fieldName);
> > > > > > > BinaryReader.readString(String fieldName, *String encoding*);
> > > > > > >
> > > > > > > BinaryObjectBuilder.writeString(String fieldName, String val,
> > > *String
> > > > > > > encoding*);
> > > > > > >
> > > > > > > class MyClass {
> > > > > > > *@BinaryString(encoding = "Cp1251")*
> > > > > > > private String myCyrillicString;
> > > > > > > }
> > > > > > >
> > > > > > > Vladimir.
> > > > > > >
> > > > > > > On Sat, Jul 1, 2017 at 7:26 PM, Dmitriy Setrakyan <
> > > > > dsetrak...@apache.org
> > > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > On Sat, Jul 1, 2017 at 2:24 AM, Sergi Vladykin <
> > > > > > sergi.vlady...@gmail.com
> > > > > > > >
> > > > > > > > wrote:
> > > > > > > >
> > &

Re: Custom string encoding

2017-07-02 Thread Valentin Kulichenko
Vova,

That's actually a good point. Probably that would be enough and there is no
need to introduce an abstract encoder. However, I still think it makes sense to
specify the default encoding in BinaryConfiguration and BinaryTypeConfiguration.

-Val

On Sun, Jul 2, 2017 at 10:31 AM Vladimir Ozerov <voze...@gridgain.com>
wrote:

> Yes, this is exactly what non-UTF8 encodings do.
>
> On Sun, July 2, 2017 at 20:08, Dmitriy Setrakyan <dsetrak...@apache.org>:
>
> > On Sun, Jul 2, 2017 at 9:50 AM, Vladimir Ozerov <voze...@gridgain.com>
> > wrote:
> >
> > > There is no need for custom encoders, as they are already built-in to
> > Java.
> > >
> >
> > Will non-ASCII encodings fit into 1 byte? The whole point here is to save
> > space.
> >
> >
> > >
> > > On Sun, July 2, 2017 at 19:16, Dmitriy Setrakyan <dsetrak...@apache.org>:
> > >
> > > > Vladimir, how would you plugin custom encoders in your design?
> > > >
> > > > On Sat, Jul 1, 2017 at 11:53 PM, Vladimir Ozerov <
> voze...@gridgain.com
> > >
> > > > wrote:
> > > >
> > > > > Valya,
> > > > >
> > > > > Personally I vote against this feature. BinaryConfiguration is
> proven
> > > to
> > > > be
> > > > > inconvenient, since it has to be configured before node start, it
> > > cannot
> > > > be
> > > > > changed in runtime, and it requires classes on the server.
> Moreover,
> > if
> > > > you
> > > > > decide to change encoding at some point, it would be impossible.
> > > > >
> > > > > I think, we should add this feature on API level instead. If string
> > is
> > > > > written in non-UTF8 form, we will write in different format:
> > > > > [encoding_code][string]
> > > > >
> > > > > BInaryWriter.writeString(String fieldName, String val);
> > > > > BInaryWriter.writeString(String fieldName, String val, *String
> > > > encoding*);
> > > > >
> > > > > BinaryReader.readString(String fieldName);
> > > > > BinaryReader.readString(String fieldName, *String encoding*);
> > > > >
> > > > > BinaryObjectBuilder.writeString(String fieldName, String val,
> *String
> > > > > encoding*);
> > > > >
> > > > > class MyClass {
> > > > > *@BinaryString(encoding = "Cp1251")*
> > > > > private String myCyrillicString;
> > > > > }
> > > > >
> > > > > Vladimir.
> > > > >
> > > > > On Sat, Jul 1, 2017 at 7:26 PM, Dmitriy Setrakyan <
> > > dsetrak...@apache.org
> > > > >
> > > > > wrote:
> > > > >
> > > > > > On Sat, Jul 1, 2017 at 2:24 AM, Sergi Vladykin <
> > > > sergi.vlady...@gmail.com
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > In SQL indexes we may store partial strings and assume them to
> be
> > > in
> > > > > > UTF-8,
> > > > > > > I don't think this can be abstracted away. But may be this is
> > not a
> > > > big
> > > > > > > deal if in indexes we still will use UTF-8.
> > > > > > >
> > > > > >
> > > > > > Sergi, why does it matter if it is UTF8 or custom encoding? Why
> > can't
> > > > we
> > > > > > use our own compact encoding in indexes?
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > 2017-07-01 10:13 GMT+03:00 Dmitriy Setrakyan <
> > > dsetrak...@apache.org
> > > > >:
> > > > > > >
> > > > > > > > Val, do you know how we compare strings in SQL queries? Will
> we
> > > be
> > > > > able
> > > > > > > to
> > > > > > > > use this encoder?
> > > > > > > >
> > > > > > > > Additionally, I think that the encoder is a bit too abstract.
> > Why
> > > > not
> > > > > > go
> > > > > > > > even further and allow users create their own ASCII table for
> > > > > encoding?
> > > > > > > >
> > > > > > > > D.
> > > > > > &g

Re: Server stores cache data on-heap if client has near cache - IGNITE-4662

2017-06-27 Thread Valentin Kulichenko
I'm not sure this ticket is valid for 2.0. Semen, can you comment?

-Val

On Tue, Jun 27, 2017 at 1:14 AM, Vyacheslav Daradur 
wrote:

> Hi Igniters.
>
> I have some questions regarding this task:
>
> 1. Does the method: GridCacheMapEntry#evictInternal do the
> eviction(on-heap
> -> off-heap)?
> 2. Is CacheOffheapEvictionManager responsible for managing the
> eviction(on-heap -> off-heap)? (if not, then who is?)
> 3. At what moment the eviction(on-heap -> off-heap) is called?
>
>
> --
> Best Regards, Vyacheslav D.
>


Re: It seems WebSession's removeAttribute does not support HttpSessionBindingListener

2017-07-05 Thread Valentin Kulichenko
Hi,

This fix seems to address only the particular case when an attribute expires.
But this will not work in the general case, right? For example, if an attribute
is put and removed explicitly, the listener will not be invoked. I don't think
we should rely on the underlying session here; the logic has to be properly
implemented in Ignite's session implementation.
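Roughly like this (a minimal sketch, not the actual fix):

import java.util.HashMap;
import java.util.Map;
import javax.servlet.http.HttpSession;
import javax.servlet.http.HttpSessionBindingEvent;
import javax.servlet.http.HttpSessionBindingListener;

class SessionAttrsSketch {
    private final Map<String, Object> attrs = new HashMap<>();

    // Fire valueUnbound explicitly on removal, the way Tomcat does,
    // instead of relying on the underlying container session.
    void removeAttribute(HttpSession ses, String name) {
        Object val = attrs.remove(name);

        if (val instanceof HttpSessionBindingListener)
            ((HttpSessionBindingListener)val).valueUnbound(
                new HttpSessionBindingEvent(ses, name, val));
    }
}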

BTW, if you're working on the ticket, please assign it to yourself and
follow the process. More details here:
https://cwiki.apache.org/confluence/display/IGNITE/How+to+Contribute

-Val

On Wed, Jul 5, 2017 at 3:25 AM, yucigou  wrote:

> Ignite WebSessionV2 uses genuineSes as the original HttpSession.
>
> Therefore, when setting an attribute or setting the maxInactiveInterval,
> Ignite should tell the original HttpSession about it.
>
> Otherwise, when the web container (such as Tomcat) decides that a session
> has expired, or it is invalidated, or a session attribute gets removed, etc.,
> the session attributes' HttpSessionBindingListener valueUnbound callback
> will not get fired.
>
> So once the original HttpSession gets updated with the session attributes
> and the maxInactiveInterval, the web container will transitively trigger
> the
> session attributes' HttpSessionBindingListener's valueUnbound callback
> function when a session expires, etc.
>
> (By the way, tested with our app, and our issue is fixed:
> https://github.com/apache/ignite/pull/2243)
>
>
>
>
>
>


Re: BinaryObjectImpl.deserializeValue with specific ClassLoader

2017-04-25 Thread Valentin Kulichenko
We allow providing a custom class loader via
IgniteConfiguration.setClassLoader. If this class loader is ignored during
deserialization of cache objects, then I believe it's a bug.

-Val

On Tue, Apr 25, 2017 at 7:36 PM, Denis Magda  wrote:

> Nick,
>
> This deserialization issue is related to cache objects. You will get the
> same kind of exception if you try to deserialize a cache entry inside of a
> compute task whose class was preloaded using the peer-class-loading feature.
>
> Frankly, this is not treated as an issue on Ignite side. We designed cache
> objects serialization/deserialization in a way it works now - the class has
> to be in the local classpath.
>
> However, probably it makes sense to rethink this. *Val*, *Vovan*, what are
> your thoughts on this?
>
> —
> Denis
>
> > On Apr 24, 2017, at 6:06 PM, npordash  wrote:
> >
> > Hi Denis,
> >
> >>> if you want to deserialize an entry then most likely you’re doing this
> on
> >>> the app side that already has the class in the class path
> >
> > In this particular case the app is a deployed service and the hosting
> node
> > doesn't have the class files on its classpath. I implemented a way to
> deploy
> > and run services on the grid even if the class files are not known by
> ignite
> > (which is current service grid limitation). It works fine except for this
> > deserialization issue.
> >
> > -Nick
> >
> >
> >
> > --
> > View this message in context: http://apache-ignite-
> developers.2346864.n4.nabble.com/Re-BinaryObjectImpl-
> deserializeValue-with-specific-ClassLoader-tp17126p17173.html
> > Sent from the Apache Ignite Developers mailing list archive at
> Nabble.com.
>
>


Re: Handling of @AffinityKeyMapped and @QuerySqlField annotations in Cassandra store

2017-04-25 Thread Valentin Kulichenko
I agree. Using @QuerySqlField in the Cassandra store seems to have been an
incorrect design decision in the first place.

-Val

On Tue, Apr 25, 2017 at 2:22 PM, Vladimir Ozerov 
wrote:

> Hi Igor,
>
> During API stabilization and improvement for Apache Ignite 2.0 we
> restricted usage of @AffinityKeyMapped and @QuerySqlField annotations to
> fields only, because annotations on method level are not supported by
> BinaryMarshaller (which is default) and goes against our general approach
> of having no user classes on server.
>
> This affected Cassandra store as it relied on these annotations in several
> places. I propose the following plan:
> 1) Remove handling of these annotations from Cassandra module in 2.0 as it
> no longer work anyway.
> 2) Define new Cassandra-specific annotations and return this logic in AI
> 2.1.
>
> The main idea is that both mentioned annotations were created for different
> purpose, and their usage in Cassandra module appears to be wrong. We need
> to have different annotations for this.
>
> Thoughts?
>
> Vladimir.
>


Webinar: Building Consistent and Highly Available Distributed Systems with Apache Ignite

2017-07-31 Thread Valentin Kulichenko
Igniters,

This Wednesday (August 2nd at 11am PT), I will host a webinar where I will
go through the Apache Ignite features and capabilities that allow you to
build consistent and highly available distributed systems. More information
here:
https://ignite.apache.org/events.html#building-consistent-and-highly-available-distributed-systems

Tough questions are welcome!

-Val


Re: Resurrect FairAffinityFunction

2017-08-09 Thread Valentin Kulichenko
As far as I know, all logical caches with the same affinity function and
node filter will end up in the same group. If that's the case, I like the
idea. This is exactly what I was looking for (a configuration sketch is below).
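
For the record, this is roughly what I have in mind - a sketch only, and it
assumes FairAffinityFunction is resurrected under its old name:

    // Two logical caches in one group get identical partition assignments,
    // since they share a single underlying 'physical' cache.
    CacheConfiguration<Integer, String> persons = new CacheConfiguration<>("persons");
    persons.setGroupName("fairGroup");
    persons.setAffinity(new FairAffinityFunction()); // assumes the class is brought back

    CacheConfiguration<Integer, String> orgs = new CacheConfiguration<>("organizations");
    orgs.setGroupName("fairGroup");
    orgs.setAffinity(new FairAffinityFunction());

    Ignite ignite = Ignition.start(
        new IgniteConfiguration().setCacheConfiguration(persons, orgs));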

-Val

On Wed, Aug 9, 2017 at 8:18 AM, Evgenii Zhuravlev 
wrote:

> Dmitriy,
>
> Yes, you're right. Moreover, it looks like a good practice to combine
> caches that will be used for collocated JOINs in one group since it reduces
> overall overhead.
>
> I think it's not a problem to add this restriction to the SQL JOIN level if
> we will decide to use this solution.
>
> Evgenii
>
>
>
>
> 2017-08-09 17:07 GMT+03:00 Dmitriy Setrakyan :
>
> > On Wed, Aug 9, 2017 at 6:28 AM, ezhuravl 
> wrote:
> >
> > > Folks,
> > >
> > > I've started working on a https://issues.apache.org/
> > > jira/browse/IGNITE-5836
> > > ticket and found that the recently added cacheGroups feature does
> > > pretty much the same as what was described in this issue. A cache group
> > > guarantees
> > > that all caches within a group have the same assignments since they share a
> > > single underlying 'physical' cache.
> > >
> >
> > > I think we can return FairAffinityFunction and add information to its
> > > Javadoc that all caches with same AffinityFunction and NodeFilter
> should
> > be
> > > combined in cache group to avoid a problem with inconsistent previous
> > > assignments.
> > >
> > > What do you guys think?
> > >
> >
> > Are you suggesting that we can only reuse the same FairAffinityFunction
> > across the logical caches within the same group? This would mean that
> > caches from the different groups cannot participate in JOINs or
> collocated
> > compute.
> >
> > I think I like the idea, however, we need to make sure that we enforce
> this
> > restriction, at least at the SQL JOIN level.
> >
> > Alexey G, Val, would be nice to hear your thoughts on this.
> >
> >
> > >
> > > Evgenii
> > >
> > >
> > >
> > > --
> > > View this message in context: http://apache-ignite-
> > > developers.2346864.n4.nabble.com/Resurrect-FairAffinityFunction-
> > > tp19987p20669.html
> > > Sent from the Apache Ignite Developers mailing list archive at
> > Nabble.com.
> > >
> >
>


igniterouter.sh

2017-08-07 Thread Valentin Kulichenko
Folks,

Our bin folder still contains the igniterouter.{sh|bat} scripts. Previously they
were used to support indirect access from the thin client to the cluster. Since
the thin client has been deprecated for a while, is there a use case where the
router can be used?

I think it makes sense to remove these scripts for now. If we ever
resurrect this functionality, we can always bring them back.

Thoughts?

-Val


Re: igniterouter.sh

2017-08-07 Thread Valentin Kulichenko
I'm pretty sure the current router will not be able to work with the new client
anyway, which will make things even more confusing than they are now. I think
it's better to remove these scripts in the next release (unless both the client
and the router are back in this release, of course :) ).

-Val

On Mon, Aug 7, 2017 at 1:13 PM, Vladimir Ozerov <voze...@gridgain.com>
wrote:

> Val,
>
> Thin client is like Arnold - he'll be back ) We already working on it. I
> propose to delay this question a bit, until we understand that router is
> not needed for new thin client.
>
> On Mon, Aug 7, 2017 at 23:07, Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > Folks,
> >
> > Our bin folder still contains igniterouter.{sh|bat} script. Previously it
> > was used to support indirect access from thin client to cluster. Since
> thin
> > client is deprecated for a while, is there a use case when router can be
> > used?
> >
> > I think it makes sense to remove these scripts for now. If we ever
> > resurrect this functionality, we can always bring them back.
> >
> > Thoughts?
> >
> > -Val
> >
>


Re: ERROR: Heuristic transaction failure.

2017-08-17 Thread Valentin Kulichenko
Hi Usein,

Which Java version do you have? There was already a similar thread where
this exception was fixed by upgrading to the latest version:
http://apache-ignite-users.70518.x6.nabble.com/Caused-by-org-h2-jdbc-JdbcSQLException-General-error-quot-java-lang-IllegalMonitorStateException-Attt-td15684.html

Can you try to upgrade as well? If you confirm that it indeed helps, then
it needs to be documented.

-Val

On Thu, Aug 17, 2017 at 3:58 AM, Usein Faradzhev 
wrote:

> Hello.
>
>
>
> We are trying to use the Ignite in-memory file system, and sometimes Ignite
> can’t write a file to IGFS and can’t read it. Why does this happen?
>
> Below is an example for the Cloudera Quick Start VM 5.10.0 with the error; the
> configuration and full log are in the attachments. This problem arises on our
> cluster with CentOS 7 and CDH 5.11.1 too.
>
>
>
> In-Memory Hadoop Accelerator:
>
> Version2.1.0
>
> Date  2017-07-27
>
> File http://apache-mirror.rbc.ru/pub/apache//ignite/2.1.0/
> apache-ignite-hadoop-2.1.0-bin.zip
> 
>
>
>
> [cloudera@quickstart ~]$ ls -l dtm_ekp_scoring_plan_oper75.csv
>
> -rw-r--r-- 1 cloudera cloudera 19579883 Aug 16 03:53
> dtm_ekp_scoring_plan_oper75.csv
>
>
>
> [cloudera@quickstart ~]$ hdfs dfs -mkdir -p igfs://igfs@/user/cloudera/
> dtm_ekp_scoring_plan_oper/
>
> [cloudera@quickstart ~]$ hdfs dfs -put dtm_ekp_scoring_plan_oper75.csv
> igfs://igfs@/user/cloudera/dtm_ekp_scoring_plan_oper/
>
> put: Failed to flush data during stream close [path=/user/cloudera/dtm_ekp_
> scoring_plan_oper/dtm_ekp_scoring_plan_oper75.csv._COPYING_,
> fileInfo=IgfsFileInfo [len=0, blockSize=65536, 
> lockId=4600eafed51-15b0cff9-0c6e-459c-8c1e-1d8f59d102e6,
> affKey=null, fileMap=IgfsFileMap [ranges=null], evictExclude=true]]
>
>
>
>
>
> [2017-08-17 03:13:07,951][ERROR][igfs-#47%null%][GridNearTxLocal]
> Heuristic transaction failure.
>
> class org.apache.ignite.internal.transactions.
> IgniteTxHeuristicCheckedException: Failed to locally write to cache (all
> transaction entries will be invalidated, however there was a window when
> entries for this transaction were visible to others): GridNearTxLocal
> [mappings=IgniteTxMappingsSingleImpl [mapping=GridDistributedTxMapping
> [entries=[IgniteTxEntry [key=KeyCacheObjectImpl [part=954, val=IgfsBlockKey
> [fileId=1600eafed51-cd651f8d-10b5-4cc3-9c14-e74963c7c2be, blockId=130,
> affKey=null, evictExclude=true], hasValBytes=true], cacheId=-313790114,
> txKey=IgniteTxKey [key=KeyCacheObjectImpl [part=954, val=IgfsBlockKey
> [fileId=1600eafed51-cd651f8d-10b5-4cc3-9c14-e74963c7c2be, blockId=130,
> affKey=null, evictExclude=true], hasValBytes=true], cacheId=-313790114],
> val=[op=CREATE, val=CacheObjectByteArrayImpl [arrLen=65536]],
> prevVal=[op=NOOP, val=null], oldVal=[op=NOOP, val=null],
> entryProcessorsCol=null, ttl=-1, conflictExpireTime=-1, conflictVer=null,
> explicitVer=null, dhtVer=null, filters=[], filtersPassed=false,
> filtersSet=true, entry=GridDhtCacheEntry [rdrs=[], part=954, 
> super=GridDistributedCacheEntry
> [super=GridCacheMapEntry [key=KeyCacheObjectImpl [part=954,
> val=IgfsBlockKey [fileId=1600eafed51-cd651f8d-10b5-4cc3-9c14-e74963c7c2be,
> blockId=130, affKey=null, evictExclude=true], hasValBytes=true], val=null,
> startVer=1502964754897, ver=GridCacheVersion [topVer=11755,
> order=1502964754897, nodeOrder=1], hash=236544549, 
> extras=GridCacheMvccEntryExtras
> [mvcc=GridCacheMvcc [locs=[GridCacheMvccCandidate
> [nodeId=15b0cff9-0c6e-459c-8c1e-1d8f59d102e6, ver=GridCacheVersion
> [topVer=11755, order=1502964754896, nodeOrder=1], threadId=69, id=152,
> topVer=AffinityTopologyVersion [topVer=1, minorTopVer=0], reentry=null,
> otherNodeId=15b0cff9-0c6e-459c-8c1e-1d8f59d102e6,
> otherVer=GridCacheVersion [topVer=11755, order=1502964754896,
> nodeOrder=1], mappedDhtNodes=null, mappedNearNodes=null, ownerVer=null,
> serOrder=null, key=KeyCacheObjectImpl [part=954, val=IgfsBlockKey
> [fileId=1600eafed51-cd651f8d-10b5-4cc3-9c14-e74963c7c2be, blockId=130,
> affKey=null, evictExclude=true], hasValBytes=true],
> masks=local=1|owner=1|ready=1|reentry=0|used=0|tx=1|single_
> implicit=1|dht_local=1|near_local=0|removed=0|read=0, prevVer=null,
> nextVer=null]], rmts=null]], flags=2]]], prepared=1, locked=false,
> nodeId=15b0cff9-0c6e-459c-8c1e-1d8f59d102e6, locMapped=false,
> expiryPlc=null, transferExpiryPlc=false, flags=0, partUpdateCntr=0,
> serReadVer=null, xidVer=GridCacheVersion [topVer=11755,
> order=1502964754896, nodeOrder=1]]], explicitLock=false, dhtVer=null,
> last=false, nearEntries=0, clientFirst=false, 
> node=15b0cff9-0c6e-459c-8c1e-1d8f59d102e6]],
> nearLocallyMapped=false, colocatedLocallyMapped=true, needCheckBackup=null,
> hasRemoteLocks=false, thread=igfs-#47%null%, 
> mappings=IgniteTxMappingsSingleImpl
> [mapping=GridDistributedTxMapping [entries=[IgniteTxEntry
> [key=KeyCacheObjectImpl [part=954, val=IgfsBlockKey

Re: Failure to deserialize simple model object

2017-08-17 Thread Valentin Kulichenko
Guys,

Does anyone have any ideas? (The workaround we found so far is recapped below.)
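
To recap: declaring the type up front avoids the runtime mapping request.
Roughly like this - a sketch only, with a hypothetical model class name:

    BinaryConfiguration binCfg = new BinaryConfiguration();

    // Pre-register the cached type so the client never has to ask the grid for its mapping.
    binCfg.setTypeConfigurations(Collections.singletonList(
        new BinaryTypeConfiguration("org.example.Person"))); // hypothetical POJO

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setBinaryConfiguration(binCfg);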

-Val

On Mon, Aug 14, 2017 at 4:33 PM, Valentin Kulichenko <
valentin.kuliche...@gmail.com> wrote:

> Cross-posting to dev
>
> Folks,
>
> I'm confused by the issue discussed in this thread.
>
> Here is the scenario:
> - Start server node with a cache with POJO store configured. There is one
> type declared, read-through enabled.
> - Start client node and execute get() for a key that exists in underlying
> DB.
> - During deserialization on the client, 'Requesting mapping from grid
> failed for' exception is thrown.
>
> Specifying the type explicitly in BinaryConfiguration solves the issue,
> and I think I understand the technical reasons for this. But is this really
> expected? Is it possible to fix the issue without requiring this
> configuration?
>
> I thought we do not require providing types in configuration as long as
> there is only one platform involved; am I wrong? If yes, we need to
> identify the scenarios when this configuration is required and document them.
>
> -Val
>
> On Mon, Aug 14, 2017 at 4:23 AM, franck102 <franck...@yahoo.com> wrote:
>
>> My bad, here is the whole project.
>>
>> Franck ignite-binary-sample.zip
>> <http://apache-ignite-users.70518.x6.nabble.com/file/n16158/
>> ignite-binary-sample.zip>
>>
>>
>>
>> --
>> View this message in context: http://apache-ignite-users.705
>> 18.x6.nabble.com/Failure-to-deserialize-simple-model-object-
>> tp15440p16158.html
>> Sent from the Apache Ignite Users mailing list archive at Nabble.com.
>>
>
>


Re: Policy for update third-party dependencies

2017-08-20 Thread Valentin Kulichenko
Guys,

Keep in mind that some projects can use an *older* version of third-party
libraries as well, and a dependency upgrade can break them. In other words, a
dependency upgrade is in many cases an incompatible change for us, so we
should do it with care.

Unless there is a specific reason to upgrade a specific dependency, I think
it's better to postpone it until a major version.

-Val

On Sun, Aug 20, 2017 at 5:04 AM 李玉珏@163 <18624049...@163.com> wrote:

> If the third-party library is incompatible between the new version and the
> old version (such as Lucene 3.5.0-5.5.2), and the version
> Ignite depends on is older, it may cause conflicts in the user's system.
> For such scenarios, I think that updating a third-party dependency's
> major version is valuable.
>
>
> On 2017/8/17 at 8:26 AM, Denis Magda wrote:
> > I would respond why do we need to update? Some bug, new capabilities,
> security breach? Alexey K., please shed some light on this.
> >
> > —
> > Denis
> >
> >> On Aug 16, 2017, at 5:12 PM, Dmitriy Setrakyan 
> wrote:
> >>
> >> On Wed, Aug 16, 2017 at 5:02 PM, Denis Magda  wrote:
> >>
> >>> Honestly, I wouldn’t touch a dependency if it works like a charm and
> >>> nobody requested us to migrate to a new version.
> >>>
> >>> Why do you need to update Apache Commons Codec?
> >>>
> >> Not sure I agree. Why not update it?
> >>
> >>
> >>>
> >>> —
> >>> Denis
> >>>
>  On Aug 16, 2017, at 10:36 AM, Alexey Kuznetsov  >
> >>> wrote:
>  Done
> 
>  https://issues.apache.org/jira/browse/IGNITE-6090
> 
>  On Wed, Aug 16, 2017 at 8:01 PM, Dmitriy Setrakyan <
> >>> dsetrak...@apache.org>
>  wrote:
> 
> > The answer is Yes, we should update. Jira ticket assigned to the next
> > release should be enough in my view.
> >
> > D.
> >
> > On Wed, Aug 16, 2017 at 2:38 AM, Alexey Kuznetsov <
> >>> akuznet...@apache.org>
> > wrote:
> >
> >> Hi, All!
> >>
> >> Do we have any policy for updating third-party dependencies?
> >>
> >> For example, I found that we are using very old  Apache Common codec
> > v.1.6
> >> (released in 2011)
> >> And latest is Apache Common codec v.1.10
> >>
> >> Do we need to update to new versions from time to time?
> >> And how?
> >>
> >> Just create JIRA issue, update pom.xml and run all tests on TC -
> will
> >>> be
> >> enough?
> >>
> >> --
> >> Alexey Kuznetsov
> >>
> 
> 
>  --
>  Alexey Kuznetsov
> >>>
>
>
>


Re: Store data from Python in multiple caches using Redis or Memcached client in Apache ignite

2017-05-16 Thread Valentin Kulichenko
To my knowledge, Memcached does not allow doing this.

-Val

On Mon, May 15, 2017 at 9:27 PM, Roman Shtykh 
wrote:

> Denis, yes, I would like to enable switching caches via "CONFIG SET
> parameter value".I created https://issues.apache.org/
> jira/browse/IGNITE-5229 for this, and will discuss it in a separate
> thread.
> Roman
>
>
>
>
> On Tuesday, May 16, 2017 8:45 AM, Denis Magda 
> wrote:
>
>
>  Hi, see below
>
> > On May 15, 2017, at 7:32 AM, rishi007bansod 
> wrote:
> >
> > Examples at following link displays how data is stored in default
> > caches(single cache) only in Apache Ignite:
> >
> > https://apacheignite.readme.io/docs/redis#python
> > 
> >
> According to the docs there is no way to store data in multiple caches for
> now.
>
> *Roman*, do you have any plans to remove the limitation in the nearest
> releases?
>
> > https://apacheignite.readme.io/v1.8/docs/memcached-support
> > 
> >
>
> *Val*, is this achievable with Memcached client? I can’t find a hint how
> to do this.
>
> > I want to store data in multiple caches(with different cache names) in
> > Apache Ignite from Python either using redis client or using memcache
> > client. How can I do this? Also, are SQL queries supported when we use
> > Memcached or redis client for caching data in Ignite from Python.
> >
>
> If you want to use SQL queries from Python then I would suggest connecting
> to the cluster with ODBC driver:
> https://apacheignite.readme.io/docs/odbc-driver
>
> For instance, this is how it works from PHP and the same can be easily
> done from Python side:
> https://apacheignite-mix.readme.io/docs/php-pdo
>
> —
> Denis
>
> >
> >
> > --
> > View this message in context: http://apache-ignite-
> developers.2346864.n4.nabble.com/Store-data-from-Python-in-
> multiple-caches-using-Redis-or-Memcached-client-in-Apache-
> ignite-tp17660.html
> > Sent from the Apache Ignite Developers mailing list archive at
> Nabble.com.
>
>
>
>


Re: [DISCUSS] Webinar for Ignite Persistent Store walk-through

2017-06-09 Thread Valentin Kulichenko
+1

On Fri, Jun 9, 2017 at 3:15 PM, Andrey Mashenkov  wrote:

> +1
>
> On June 10, 2017 at 0:08, "William Do"
> wrote:
>
> > +1
> >
> > On 9 June 2017 at 21:37, Dmitriy Setrakyan 
> wrote:
> >
> > > Hm... we have only 3 community members who are interested so far.
> Anyone
> > > else who may be willing to attend?
> > >
> > > On Fri, Jun 9, 2017 at 12:03 AM, Sergi Vladykin <
> > sergi.vlady...@gmail.com>
> > > wrote:
> > >
> > > > +1
> > > >
> > > > Sergi
> > > >
> > > > 2017-06-08 23:03 GMT+03:00 Dmitriy Setrakyan  >:
> > > >
> > > > > +1 (I will attend)
> > > > >
> > > > > On Thu, Jun 8, 2017 at 1:02 PM, Konstantin Boudnik  >
> > > > wrote:
> > > > >
> > > > > > That'd be great! Thank you!
> > > > > > --
> > > > > >   Take care,
> > > > > > Konstantin (Cos) Boudnik
> > > > > > 2CAC 8312 4870 D885 8616  6115 220F 6980 1F27 E622
> > > > > >
> > > > > > Disclaimer: Opinions expressed in this email are those of the
> > author,
> > > > > > and do not necessarily represent the views of any company the
> > author
> > > > > > might be affiliated with at the moment of writing.
> > > > > >
> > > > > >
> > > > > > On Thu, Jun 8, 2017 at 12:54 PM, Denis Magda 
> > > > wrote:
> > > > > > > Igniters,
> > > > > > >
> > > > > > > What’d you think if we arrange an internal webinar for our
> > > community
> > > > to
> > > > > > walk through the features, capabilities and implementation
> details
> > of
> > > > the
> > > > > > Ignite Persistent Store [1]? That should help us understanding
> the
> > > > > donation
> > > > > > better.
> > > > > > >
> > > > > > > Please reply if you will be happy to attend.
> > > > > > >
> > > > > > > [1] https://apacheignite.readme.io/docs/distributed-
> > > persistent-store
> > > > <
> > > > > > https://apacheignite.readme.io/docs/distributed-persistent-store
> >
> > > > > > >
> > > > > > > —
> > > > > > > Denis
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: Zookeeper Discovery SPI & external IP address in AWS

2017-06-21 Thread Valentin Kulichenko
Igor,

What version are you on? I believe we already fixed this in the past.

-Val

On Wed, Jun 21, 2017 at 2:30 AM Igor Rudyak  wrote:

> Hi guys,
>
> How do I force *TcpDiscoveryZookeeperIpFinder* to publish the public IP address
> (in addition to the private IP) of an Ignite node when it's deployed in Amazon?
>
> By default it just publishes the private IP addresses of nodes, which makes it
> impossible to connect to the cluster from outside using the *Zookeeper Discovery
> SPI*.
>
> I tried to use something like this (see below) for *discoverySPI*:
>
> <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.zk.TcpDiscoveryZookeeperIpFinder">
>   ... (the rest of the Spring XML configuration was stripped by the mail archive)
> </bean>
>
> But this way it only publishes public IPs to Zookeeper.
>
> Actually I am looking for something like the *advertised.host.name*
> analog in Kafka, which allows publishing
> private and public IP addresses for a node to Zookeeper.
>
> This way all internal services communicate through private IPs, but
> external services communicate using public IPs.
>
> Igor
>


Re: Replace Cron4J with Quartz for ignite-schedule module.

2017-06-21 Thread Valentin Kulichenko
I think Michael brought up a very good point. The current ignite-schedule
module schedules jobs only locally, which is not very useful in a distributed
system. I don't think I've ever seen it used, and I don't think it makes sense
to spend time on it if we just replace one dependency with another. However, if
we switch to Quartz in order to enhance functionality and introduce distributed
scheduling - that can add value. (For contrast, today's local-only API is
sketched below.)
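
As far as I can tell, this is all the module gives us today - a purely local
schedule. A sketch, assuming 'ignite' is a started node; the cron pattern is
just an example:

    // Runs the closure on this node only; no failover, no cluster awareness.
    SchedulerFuture<?> fut = ignite.scheduler().scheduleLocal(
        () -> System.out.println("runs on the local node only"),
        "* * * * *"); // every minute, cron4j-style pattern

A distributed flavor would have to pick an executor node, handle failover and
avoid duplicate firings - which is exactly where a replicated Quartz job store
could help.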

-Val

On Wed, Jun 21, 2017 at 3:21 PM, Michael André Pearce <
michael.andre.pea...@me.com> wrote:

> If taking the quartz route, it be great if ignite could expose a
> distributed ignite job store, so you could setup and use quartz in a
> distributed way, in a similar way to terracotta or hazelcasts quartz
> jobstores.
>
>
> Sent from my iPhone
>
> > On 21 Jun 2017, at 15:43, Alexey Kuznetsov 
> wrote:
> >
> > Hi!
> >
> > Good point, I will take a look.
> >
> >> On Wed, Jun 21, 2017 at 5:42 PM, 李玉珏  wrote:
> >>
> >> Hi,
> >>
> >>
> >> There is also an alternative: the community can consider using the
> >> scheduling functionality in the spring-context module, for the following
> >> reasons:
> >> 1. Quartz is a very heavy framework, and we don't need most of its functions;
> >> 2. we already have Spring dependencies in our project, so this would not
> >> introduce
> >> new dependencies;
> >> 3. Spring is also under the Apache 2.0 license;
> >> 4. Spring's scheduler supports standard cron, and cron4j does not;
> >> 5. Spring's code quality and maintainability are very good, while the
> >> quality of the Quartz code is not.
> >> On 06/21/2017 13:26,Alexey Kuznetsov wrote:
> >> Hi!
> >>
> >> 1) Cron4J is very old:
> >>  Latest Cron4j 2.2.5 released: *28-Dec-2011 *
> >>  Latest Quarz 2.3.0 released: *20-Apr-2017*
> >>
> >> 2) Not very friendly license:
> >>  CronJ4 licensed under GNU LESSER GENERAL PUBLIC LICENSE
> >>  Quartz is freely usable, licensed under the *Apache 2.0* license.
> >>
> >> So, if we replace Cron4J  with Quartz we can move *ignite-schedule*
> module
> >> from lgpl profile to main distribution.
> >>
> >> Any objections?
> >>
> >> If no, I will create JIRA issue and implement this change.
> >>
> >> --
> >> Alexey Kuznetsov
> >>
> >
> >
> >
> > --
> > Alexey Kuznetsov
> > GridGain Systems
> > www.gridgain.com
>


Re: IGNITE-2894 - Binary object inside of Externalizable still serialized with OptimizedMarshaller

2017-06-21 Thread Valentin Kulichenko
Hi Nikita,

1. Makes sense to me.

2. An Externalizable object should not be written as binary with flag 103; it
should be written in the same way it's written now. I don't see any reason
to change the protocol. The purpose of this task is to move the logic to the
binary marshaller instead of depending on the optimized marshaller, and also to
fully support handles for these objects and the objects included in them. Currently
the binary marshaller and the optimized marshaller use different sets of handles -
this is the main downside of the current implementation (a toy example is below).
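
To illustrate the handles point with a toy example (the classes are
hypothetical, types from java.io): if 'home' and 'work' below reference the
same Address instance, a single shared handle table writes it once and restores
one object, while two separate handle tables can produce two independent copies.

    class Address implements Serializable {
        String city;
    }

    // Must be public with a public no-arg constructor in real code.
    class Person implements Externalizable {
        Address home;
        Address work; // may point to the same Address as 'home'

        public Person() {} // required by Externalizable

        @Override public void writeExternal(ObjectOutput out) throws IOException {
            out.writeObject(home);
            out.writeObject(work); // ideally written as a handle to 'home'
        }

        @Override public void readExternal(ObjectInput in)
            throws IOException, ClassNotFoundException {
            home = (Address)in.readObject();
            work = (Address)in.readObject(); // should resolve to the same instance
        }
    }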

3. I think this order is correct, but does it even make sense to implement
both Binarylizable and Externalizable?

-Val

On Mon, Jun 19, 2017 at 8:58 AM, Nikita Amelchev <nsamelc...@gmail.com>
wrote:

> Hello everybody.
>
> I would like to clarify some points about custom
> serialization in the marshaller.
>
> 1. I suggest dividing the issue into two tasks: supporting Externalizable
> and supporting Serializable. The second task would be done as a separate issue.
>
> 2. In the optimized marshaller case, when the object is Externalizable,
> BinaryUtils.unmarshal() returns the deserialized value. But if we do not use
> the optimized marshaller and write the Externalizable as Object (103), it
> returns a BinaryObjectExImpl. This breaks testBuilderExternalizable. (If we
> replace Externalizable with Binarylizable, it also doesn't work.) The fix is to
> check that the object is Externalizable and deserialize it
> manually (BinaryUtils.java:1833 in the PR). Should we use this fix or return
> BinaryObjectExImpl?
>
> 3. What is the priority if several interfaces are implemented: Binarylizable
> -> Externalizable -> Serializable?
>
> Also, can you pre-review this issue?
> PR: https://github.com/apache/ignite/pull/2160
>
> 2017-04-18 17:41 GMT+03:00 Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > Nikita,
> >
> > For Externalizable option 1 is the correct one. Externalizable objects
> > should not be treated as binary objects.
> >
> > For read/writeObject, you indeed have to extend ObjectOutputStream.
> > writeObject() is final because you should extend writeObjectOverride()
> > instead. Take a look at ObjectOutputStream's JavaDoc and on how this is
> > done in OptimizedObjectOutputStream. Note that ideally we need to
> implement
> > everything that is included in Java serialization spec, including some
> > non-trivial stuff like PutField. I would check if it's possible to
> somehow
> > reuse the code that already exists in optimized marshaller as much as
> > possible.
> >
> > -Val
> >
> > On Tue, Apr 18, 2017 at 1:36 PM, Nikita Amelchev <nsamelc...@gmail.com>
> > wrote:
> >
> > > I see two ways to support the Externalizable in the BM:
> > > 1. Add a new type constant to the GridBinaryMarshaller class etc and
> > > read/writeExternal in the BinaryClassDescriptor.
> > > 2. Make read/writeExternal through the BINARY type without updating
> > > metadata.
> > > I don't know how to make a support read/writeObject of the Serializable
> > > without delegating to the OM. Because read/writeObject methods need the
> > > Objectoutputstream class argument. One way is to delegate it to the
> > > OptimizedObjectOutputStream. Second way is to extend the
> > Objectoutputstream
> > > in the BinaryWriterExImpl. But it is wrong way because the writeObject
> is
> > > final.
> > >
> > > 2017-01-19 20:46 GMT+03:00 Valentin Kulichenko <
> > > valentin.kuliche...@gmail.com>:
> > >
> > > > Nikita,
> > > >
> > > > In my view we just need to support Externalizable and
> > > > writeObject/readObject in BinaryMarshaller and get rid of delegation
> to
> > > > optimized marshaller. Once such classes also go through
> > BinaryMarshaller
> > > > streams, they will be aware of binary configuration and will share
> the
> > > same
> > > > set of handles as well. This should take care of all the issues we
> have
> > > > here.
> > > >
> > > > -Val
> > > >
> > > > On Thu, Jan 19, 2017 at 7:26 AM, Nikita Amelchev <
> nsamelc...@gmail.com
> > >
> > > > wrote:
> > > >
> > > > > I have some questions about single Marshaller.
> > > > > It seems not easy to merge OptimizedMarshaller with
> BinaryMarshaller
> > > and
> > > > is
> > > > > there any sense in it?
> > > > > When Binary object inside Externalizable serialized with optimized
> it
> > > > > losing all benefits.

Re: Zookeeper Discovery SPI & external IP address in AWS

2017-06-21 Thread Valentin Kulichenko
Igor,

Here is the ticket I'm talking about:
https://issues.apache.org/jira/browse/IGNITE-3230

-Val

On Wed, Jun 21, 2017 at 8:57 AM, Igor Rudyak <irud...@gmail.com> wrote:

> Val,
>
> Are there any ticket for this in Jira?
>
> Igor
>
> On Jun 21, 2017 5:50 AM, "Valentin Kulichenko" <
> valentin.kuliche...@gmail.com> wrote:
>
> > Igor,
> >
> > What version are you going on? I believe we already fixed this in the
> past.
> >
> > -Val
> >
> > On Wed, Jun 21, 2017 at 2:30 AM Igor Rudyak <irud...@gmail.com> wrote:
> >
> > > Hi guys,
> > >
> > > How to force *TcpDiscoveryZookeeperIpFinder* to publish public IP
> > address
> > > (in addition to private IP) of Ignite node when it's deployed in
> Amazon?
> > >
> > > By default it just publishing private IP addresses of nodes which makes
> > it
> > > impossible to connect to cluster from outside using *Zookeeper
> Discovery
> > > SPI*.
> > >
> > > I tried to use something like this (see below) for *discoverySPI*:
> > >
> > > <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.zk.TcpDiscoveryZookeeperIpFinder">
> > >   ... (the rest of the Spring XML configuration was stripped by the mail archive)
> > > </bean>
> > >
> > > But such way it only publish public IPs to Zookeeper.
> > >
> > > Actually I am looking for something like *advertised.host.name
> > > <http://advertised.host.name>* analog in Kafka. Which allows to
> publish
> > > private and public IP addresses for a node to Zookeeper.
> > >
> > > Such way all internal services communicates through private IPs, but
> > > external services communicates using public IPs.
> > >
> > > Igor
> > >
> >
>


Re: IgniteCache#localEvict method

2017-06-21 Thread Valentin Kulichenko
I agree. Ivan, do you have objections?

-Val

On Mon, Jun 19, 2017 at 3:55 PM, Dmitriy Setrakyan <dsetrak...@apache.org>
wrote:

> Ivan,
>
> The semantic now is very confusing, because localEvict does not evict to
> off-heap, it just removes it from on-heap. The off-heap cache always has
> the entry anyway.
>
> My vote would be to remove this method as I don't see anyone ever needing
> it. Perhaps a more useful method would be to flush the whole on-heap cache
> altogether.
>
> D.
>
> On Mon, Jun 19, 2017 at 4:16 PM, Ivan Rakov <ira...@gridgain.com> wrote:
>
> > Semantics in 2.0: if onheap cache enabled, method evicts entry from it.
> If
> > onheap cache is disabled (default case), implementation is no-op.
> > Probably we should keep the method and add some note in javadoc.
> >
> > Best Regards,
> > Ivan Rakov
> >
> > On 19.06.2017 17:01, Igor Sapego wrote:
> >
> >> What if user enables on-heap cache?
> >>
> >> Best Regards,
> >> Igor
> >>
> >> On Mon, Jun 19, 2017 at 3:34 AM, Dmitriy Setrakyan <
> dsetrak...@apache.org
> >> >
> >> wrote:
> >>
> >> Doesn't look useful to me.
> >>>
> >>> On Mon, Jun 19, 2017 at 1:03 AM, Valentin Kulichenko <
> >>> valentin.kuliche...@gmail.com> wrote:
> >>>
> >>> Folks,
> >>>>
> >>>> Does the subj make sense in 2.0? Before this method could be used to
> >>>>
> >>> evict
> >>>
> >>>> from on-heap memory to off-heap or swap. What are the semantics now?
> >>>>
> >>>> -Val
> >>>>
> >>>>
> >
>


Re: Zookeeper Discovery SPI & external IP address in AWS

2017-06-21 Thread Valentin Kulichenko
Anton, Nikolay,

Looks like you participated in the fix. Can you please check?

-Val

On Wed, Jun 21, 2017 at 7:01 PM Igor Rudyak <irud...@gmail.com> wrote:

> Thanks Val,
>
> It looks like there is still this problem in version 2.0.0
>
> Igor
>
> On Wed, Jun 21, 2017 at 3:20 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > Igor,
> >
> > Here is the ticket I'm talking about:
> > https://issues.apache.org/jira/browse/IGNITE-3230
> >
> > -Val
> >
> > On Wed, Jun 21, 2017 at 8:57 AM, Igor Rudyak <irud...@gmail.com> wrote:
> >
> > > Val,
> > >
> > > Are there any ticket for this in Jira?
> > >
> > > Igor
> > >
> > > On Jun 21, 2017 5:50 AM, "Valentin Kulichenko" <
> > > valentin.kuliche...@gmail.com> wrote:
> > >
> > > > Igor,
> > > >
> > > > What version are you going on? I believe we already fixed this in the
> > > past.
> > > >
> > > > -Val
> > > >
> > > > On Wed, Jun 21, 2017 at 2:30 AM Igor Rudyak <irud...@gmail.com>
> wrote:
> > > >
> > > > > Hi guys,
> > > > >
> > > > > How to force *TcpDiscoveryZookeeperIpFinder* to publish public IP
> > > > address
> > > > > (in addition to private IP) of Ignite node when it's deployed in
> > > Amazon?
> > > > >
> > > > > By default it just publishing private IP addresses of nodes which
> > makes
> > > > it
> > > > > impossible to connect to cluster from outside using *Zookeeper
> > > Discovery
> > > > > SPI*.
> > > > >
> > > > > I tried to use something like this (see below) for *discoverySPI*:
> > > > >
> > > > > <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.zk.TcpDiscoveryZookeeperIpFinder">
> > > > >   ... (the rest of the Spring XML configuration was stripped by the mail archive)
> > > > > </bean>
> > > > >
> > > > > But such way it only publish public IPs to Zookeeper.
> > > > >
> > > > > Actually I am looking for something like *advertised.host.name
> > > > > <http://advertised.host.name>* analog in Kafka. Which allows to
> > > publish
> > > > > private and public IP addresses for a node to Zookeeper.
> > > > >
> > > > > Such way all internal services communicates through private IPs,
> but
> > > > > external services communicates using public IPs.
> > > > >
> > > > > Igor
> > > > >
> > > >
> > >
> >
>


Re: IgniteCache#localEvict method

2017-06-26 Thread Valentin Kulichenko
Created ticket: https://issues.apache.org/jira/browse/IGNITE-5592

-Val

On Sun, Jun 25, 2017 at 7:49 AM, Ivan Rakov <ivan.glu...@gmail.com> wrote:

> Agree as well.
>
> Best Regards,
> Ivan Rakov
>
> On 22.06.2017 1:23, Valentin Kulichenko wrote:
>
> I agree. Ivan, do you have objections?
>
> -Val
>
> On Mon, Jun 19, 2017 at 3:55 PM, Dmitriy Setrakyan <dsetrak...@apache.org>
> wrote:
>
>> Ivan,
>>
>> The semantic now is very confusing, because localEvict does not evict to
>> off-heap, it just removes it from on-heap. The off-heap cache always has
>> the entry anyway.
>>
>> My vote would be to remove this method as I don't see anyone ever needing
>> it. Perhaps a more useful method would be to flush the whole on-heap cache
>> altogether.
>>
>> D.
>>
>> On Mon, Jun 19, 2017 at 4:16 PM, Ivan Rakov <ira...@gridgain.com> wrote:
>>
>> > Semantics in 2.0: if onheap cache enabled, method evicts entry from it.
>> If
>> > onheap cache is disabled (default case), implementation is no-op.
>> > Probably we should keep the method and add some note in javadoc.
>> >
>> > Best Regards,
>> > Ivan Rakov
>> >
>> > On 19.06.2017 17:01, Igor Sapego wrote:
>> >
>> >> What if user enables on-heap cache?
>> >>
>> >> Best Regards,
>> >> Igor
>> >>
>> >> On Mon, Jun 19, 2017 at 3:34 AM, Dmitriy Setrakyan <
>> dsetrak...@apache.org
>> >> >
>> >> wrote:
>> >>
>> >> Doesn't look useful to me.
>> >>>
>> >>> On Mon, Jun 19, 2017 at 1:03 AM, Valentin Kulichenko <
>> >>> valentin.kuliche...@gmail.com> wrote:
>> >>>
>> >>> Folks,
>> >>>>
>> >>>> Does the subj make sense in 2.0? Before this method could be used to
>> >>>>
>> >>> evict
>> >>>
>> >>>> from on-heap memory to off-heap or swap. What are the semantics now?
>> >>>>
>> >>>> -Val
>> >>>>
>> >>>>
>> >
>>
>
>
>


Re: Zookeeper Discovery SPI & external IP address in AWS

2017-06-26 Thread Valentin Kulichenko
Yakov,

Nodes that join from outside of the network (usually these are clients) need to
know public addresses to connect. To make it work, either of these must
happen:

1. Server nodes publish their public addresses in the IP finder so that clients
can use them to connect.
2. Client nodes use an address resolver to map the published internal addresses
to public addresses.

Both will work, but frankly I like option 1 more. First of all, it's just
more intuitive that the IP finder contains all possible addresses that can be
used to join. Second, option 2 introduces a requirement to have an address
resolver for the server addresses configured on client nodes - this is
not very good from a usability standpoint. (A sketch of option 2 is below.)
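
For completeness, option 2 would look roughly like this on the client side
(a sketch; the addresses are placeholders):

    // Maps the internal (published) address to the public one.
    // Note: BasicAddressResolver's constructor declares a checked
    // UnknownHostException, so wrap this in try/catch in real code.
    Map<String, String> addrMap = new HashMap<>();
    addrMap.put("10.132.80.21", "203.0.113.21"); // internal -> public, example values

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setAddressResolver(new BasicAddressResolver(addrMap));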

-Val

On Mon, Jun 26, 2017 at 3:17 AM, Yakov Zhdanov  wrote:

> Guys, I don't get the point.
>
> 1. Why should addresses processed by the address resolver appear in the shared
> finder? In my understanding, finders contain only internal IPs, which should
> be processed by a resolver.
>
> 2. This one is very critical. Nikolay and Anton, how can I review the
> changes?! Please update the ticket with PR or commit hash.
>
> --Yakov
>


Re: Zookeeper Discovery SPI & external IP address in AWS

2017-06-26 Thread Valentin Kulichenko
Igor,

You need to investigate deeper, then. It's not obvious what's going on and
why there is an issue.

-Val

On Mon, Jun 26, 2017 at 3:36 PM, Igor Rudyak <irud...@gmail.com> wrote:

> I am 100% sure, cause  "*telnet11211*"  works just perfect.
>
> Igor
>
> On Mon, Jun 26, 2017 at 3:32 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > Igor,
> >
> > Are you sure these connections are not blocked by firewall? If you
> provide
> > addresses explicitly in static IP finder, then it doesn't matter what is
> > published in shared IP finder. Is it possible that public addresses are
> > actually published and connectivity issue is caused by something else?
> >
> > -Val
> >
> > On Mon, Jun 26, 2017 at 3:01 PM, Igor Rudyak <irud...@gmail.com> wrote:
> >
> > > Val,
> > >
> > > Regarding resolver it makes sense.
> > >
> > > Actually as of now, Option 2 doesn't work to connect Ignite clients to
> > > cluster using private-to-public IPs mapping. It just falls into
> infinite
> > > connection loop and periodically reports something like this:
> > >
> > > *[14:42:15] Failed to connect to any address from IP finder (will retry
> > to
> > > join topology every 2 secs): [/0:0:0:0:0:0:0:1%1:47500, /
> 127.0.0.1:47500
> > > <http://127.0.0.1:47500>, /:47500,
> > > /:47500, /:47500]*
> > >
> > > Even if I manually specify all public IPs for discovery, like this:
> > >
> > > <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
> > >   ... (the list of public IP addresses was stripped by the mail archive)
> > > </bean>
> > >
> > > It still can't connect to cluster and just periodically reports the
> same
> > > error.
> > >
> > > Does actually cluster membership protocol support the case when node
> > > available through multiple IP addresses and treats ,
> > >  and etc. as just different IPs corresponding to the same
> > > node?
> > >
> > >
> > > Igor
> > >
> > > On Mon, Jun 26, 2017 at 1:55 PM, Valentin Kulichenko <
> > > valentin.kuliche...@gmail.com> wrote:
> > >
> > > > Igor,
> > > >
> > > > It depends on how address resolver works. But I agree, in general
> case
> > > it's
> > > > possible that a node can only resolve public address for itself. In
> > such
> > > > scenario we must publish public addresses in IP finder.
> > > >
> > > > -Val
> > > >
> > > > On Mon, Jun 26, 2017 at 1:02 PM, Igor Rudyak <irud...@gmail.com>
> > wrote:
> > > >
> > > > > Option 2 also will not work for IaaS environments, where node can
> > > > > dynamically join or leave cluster.
> > > > >
> > > > > Igor
> > > > >
> > > > > On Jun 26, 2017 12:12 PM, "Valentin Kulichenko" <
> > > > > valentin.kuliche...@gmail.com> wrote:
> > > > >
> > > > > > Yakov,
> > > > > >
> > > > > > Nodes that join outside of the network (usually these are
> clients)
> > > need
> > > > > to
> > > > > > know public addresses to connect. To make it work either of these
> > > must
> > > > > > happen:
> > > > > >
> > > > > > 1. Server nodes publish their public addresses in IP finder so
> that
> > > > > clients
> > > > > > can use them to connect.
> > > > > > 2. Client nodes use address resolver to map published internal
> > > > addresses
> > > > > to
> > > > > > public addresses.
> > > > > >
> > > > > > Both will work, but frankly I like option 1 more. First of all,
> > it's
> > > > just
> > > > > > more intuitive that IP finder contains all possible addresses
> that
> > > can
> > > > be
> > > > > > used to join. Second of all, option 2 introduces requirement to
> > have
> > > > > > address resolver for server addresses configured on client nodes
> -
> > > this
> > > > > is
> > > > > > not very good from usability standpoint.
> > > > > >
> > > > > > -Val
> > > > > >
> > > > > > On Mon, Jun 26, 2017 at 3:17 AM, Yakov Zhdanov <
> > yzhda...@apache.org>
> > > > > > wrote:
> > > > > >
> > > > > > > Guys, I don't get the point.
> > > > > > >
> > > > > > > 1. Why addresses processed by address resolver should appear in
> > > > shared
> > > > > > > finder? In my understanding finders contain only internal IPs
> > which
> > > > > > should
> > > > > > > be processed by a resolver.
> > > > > > >
> > > > > > > 2. This one is very critical. Nikolay and Anton, how can I
> review
> > > the
> > > > > > > changes?! Please update the ticket with PR or commit hash.
> > > > > > >
> > > > > > > --Yakov
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: Zookeeper Discovery SPI & external IP address in AWS

2017-06-26 Thread Valentin Kulichenko
Igor,

Are you sure these connections are not blocked by a firewall? If you provide
addresses explicitly in the static IP finder, then it doesn't matter what is
published in the shared IP finder. Is it possible that the public addresses are
actually published and the connectivity issue is caused by something else? (The
static IP finder I mean is sketched below.)
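
To be explicit about the static IP finder, a sketch (the addresses are
placeholders):

    TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();
    ipFinder.setAddresses(Arrays.asList("203.0.113.20:47500", "203.0.113.21:47500"));

    TcpDiscoverySpi discoSpi = new TcpDiscoverySpi();
    discoSpi.setIpFinder(ipFinder);

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setDiscoverySpi(discoSpi);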

-Val

On Mon, Jun 26, 2017 at 3:01 PM, Igor Rudyak <irud...@gmail.com> wrote:

> Val,
>
> Regarding resolver it makes sense.
>
> Actually, as of now, option 2 doesn't work for connecting Ignite clients to
> the cluster using private-to-public IP mapping. The client just falls into an
> infinite connection loop and periodically reports something like this:
>
> *[14:42:15] Failed to connect to any address from IP finder (will retry to
> join topology every 2 secs): [/0:0:0:0:0:0:0:1%1:47500, /127.0.0.1:47500
> <http://127.0.0.1:47500>, /:47500,
> /:47500, /:47500]*
>
> Even if I manually specify all public IPs for discovery, like this:
>
> <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">
>   ... (the list of public IP addresses was stripped by the mail archive)
> </bean>
>
> It still can't connect to the cluster and just periodically reports the same
> error.
>
> Does the cluster membership protocol actually support the case when a node
> is available through multiple IP addresses, treating the private and public
> addresses as just different IPs corresponding to the same
> node?
>
>
> Igor
>
> On Mon, Jun 26, 2017 at 1:55 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
> > Igor,
> >
> > It depends on how address resolver works. But I agree, in general case
> it's
> > possible that a node can only resolve public address for itself. In such
> > scenario we must publish public addresses in IP finder.
> >
> > -Val
> >
> > On Mon, Jun 26, 2017 at 1:02 PM, Igor Rudyak <irud...@gmail.com> wrote:
> >
> > > Option 2 also will not work for IaaS environments, where node can
> > > dynamically join or leave cluster.
> > >
> > > Igor
> > >
> > > On Jun 26, 2017 12:12 PM, "Valentin Kulichenko" <
> > > valentin.kuliche...@gmail.com> wrote:
> > >
> > > > Yakov,
> > > >
> > > > Nodes that join outside of the network (usually these are clients)
> need
> > > to
> > > > know public addresses to connect. To make it work either of these
> must
> > > > happen:
> > > >
> > > > 1. Server nodes publish their public addresses in IP finder so that
> > > clients
> > > > can use them to connect.
> > > > 2. Client nodes use address resolver to map published internal
> > addresses
> > > to
> > > > public addresses.
> > > >
> > > > Both will work, but frankly I like option 1 more. First of all, it's
> > just
> > > > more intuitive that IP finder contains all possible addresses that
> can
> > be
> > > > used to join. Second of all, option 2 introduces requirement to have
> > > > address resolver for server addresses configured on client nodes -
> this
> > > is
> > > > not very good from usability standpoint.
> > > >
> > > > -Val
> > > >
> > > > On Mon, Jun 26, 2017 at 3:17 AM, Yakov Zhdanov <yzhda...@apache.org>
> > > > wrote:
> > > >
> > > > > Guys, I don't get the point.
> > > > >
> > > > > 1. Why addresses processed by address resolver should appear in
> > shared
> > > > > finder? In my understanding finders contain only internal IPs which
> > > > should
> > > > > be processed by a resolver.
> > > > >
> > > > > 2. This one is very critical. Nikolay and Anton, how can I review
> the
> > > > > changes?! Please update the ticket with PR or commit hash.
> > > > >
> > > > > --Yakov
> > > > >
> > > >
> > >
> >
>


Re: Zookeeper Discovery SPI & external IP address in AWS

2017-06-26 Thread Valentin Kulichenko
Igor,

It depends on how the address resolver works. But I agree, in the general case
it's possible that a node can only resolve the public address for itself. In
such a scenario we must publish public addresses in the IP finder.

-Val

On Mon, Jun 26, 2017 at 1:02 PM, Igor Rudyak <irud...@gmail.com> wrote:

> Option 2 also will not work for IaaS environments, where node can
> dynamically join or leave cluster.
>
> Igor
>
> On Jun 26, 2017 12:12 PM, "Valentin Kulichenko" <
> valentin.kuliche...@gmail.com> wrote:
>
> > Yakov,
> >
> > Nodes that join outside of the network (usually these are clients) need
> to
> > know public addresses to connect. To make it work either of these must
> > happen:
> >
> > 1. Server nodes publish their public addresses in IP finder so that
> clients
> > can use them to connect.
> > 2. Client nodes use address resolver to map published internal addresses
> to
> > public addresses.
> >
> > Both will work, but frankly I like option 1 more. First of all, it's just
> > more intuitive that IP finder contains all possible addresses that can be
> > used to join. Second of all, option 2 introduces requirement to have
> > address resolver for server addresses configured on client nodes - this
> is
> > not very good from usability standpoint.
> >
> > -Val
> >
> > On Mon, Jun 26, 2017 at 3:17 AM, Yakov Zhdanov <yzhda...@apache.org>
> > wrote:
> >
> > > Guys, I don't get the point.
> > >
> > > 1. Why addresses processed by address resolver should appear in shared
> > > finder? In my understanding finders contain only internal IPs which
> > should
> > > be processed by a resolver.
> > >
> > > 2. This one is very critical. Nikolay and Anton, how can I review the
> > > changes?! Please update the ticket with PR or commit hash.
> > >
> > > --Yakov
> > >
> >
>


Re: Session Replication Update on spring boot platform

2017-05-22 Thread Valentin Kulichenko
Hi Rishi,

I'm not sure I understand what the issue is. Can you elaborate a bit more
and provide exact examples of what is not working? What code tweaks are
required and how critical are they? Also I recall that your example was
working fine after the latest fixes in 1.9 (if I'm not mistaken). Did you
make any changes after that?

-Val

On Fri, May 19, 2017 at 11:34 PM, Rishi Yagnik 
wrote:

> Hello Dmitriy,
>
> Thank you for the response, I would await for Val's feedback.
>
> I would like to discuss the possible approach for implementation here, and
> it could be one of this -
>
> https://issues.apache.org/jira/browse/IGNITE-2741
>
> Hope we all come on to conclusion here.
>
> Thanks,
>
> On Fri, May 19, 2017 at 3:57 PM, Dmitriy Setrakyan 
> wrote:
>
> > Hi Rishi,
> >
> > I think the best way is to file a ticket in Ignite Jira with your
> > findings. This ticket does not seem tremendously difficult, so hopefully
> > someone from the community will pick it up. All we need to do is to take
> > our existing Web Session Clustering [1][2] code and port it to work with
> > Spring Boot.
> >
> > BTW, feel free to contribute it yourself if you have time.
> >
> > [1] https://ignite.apache.org/use-cases/caching/web-session-
> > clustering.html
> > [2] https://apacheignite-mix.readme.io/docs/web-session-clustering
> >
> > D.
> >
> > On Fri, May 19, 2017 at 11:43 AM, Rishi Yagnik 
> > wrote:
> >
> >> Hello Val,
> >>
> >> I tested out the session replication on spring boot cluster and here is
> >> the
> >> result.
> >>
> >> My finding were as follows with Ignite 2.0 on session replication and
> hope
> >> that helps the team –
> >>
> >>- Spring Security Filters requires context to be set with
> >> Authentication
> >>object, later on when user authentication object is set on the ignite
> >>filter from Ignite cache, the spring security treat that as a new
> >> session
> >>just to prevent session fixation issue.
> >>- As spring security creates a new session and since there is no way
> to
> >>tell Ignite that user session has been changed, the user session is
> no
> >> good
> >>here despite the fact that user session holds by the ignite is a true
> >>session for that user.
> >>- Configuring web session filter does not work OOTB in spring boot
> >>platform, it require some additional tweaking in the code to make it
> >> work.
> >>
> >>
> >> So in the nutshell, I would think having spring session on ignite
> platform
> >> would support full session replication with spring boot platform.
> >>
> >>
> >> Please note that we have 2 SB instances, serving request round robin via
> >> F5
> >> ( load balancers) supported by 2 node ignite cluster.
> >>
> >> Any suggestions on how we want to conquer the issue ?
> >>
> >> Thanks,
> >>
> >> --
> >> Rishi Yagnik
> >>
> >
> >
>
>
> --
> Rishi Yagnik
>


Re: Inefficient approach to executing remote SQL queries

2017-05-22 Thread Valentin Kulichenko
Hi Mike,

Generally, establishing connections in parallel could make sense, but note
that in most cases this would be a minor optimization, because:

   - Under load, connections are established once and then reused. If you
   observe disconnections during the application lifetime under load, then
   probably this should be addressed first.
   - Actual communication is asynchronous; we use NIO for this. If a
   connection already exists, sendGeneric() basically just puts a message into
   a queue. (A toy sketch of the suggested parallelization is below.)
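
If someone wants to experiment, the suggested change boils down to something
like this. Illustrative only: 'nodes' is the collection of target nodes, and
sendToNode() stands for the per-node synchronous send that send() performs
today - it is not the real internal API.

    ExecutorService pool = Executors.newFixedThreadPool(8);

    List<Future<?>> futs = new ArrayList<>();

    // Establish connections and send to all target nodes concurrently.
    for (ClusterNode node : nodes)
        futs.add(pool.submit(() -> sendToNode(node))); // hypothetical per-node send

    for (Future<?> fut : futs)
        fut.get(); // surfaces the first failed send, if any

    pool.shutdown();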

-Val

On Mon, May 22, 2017 at 7:04 PM, Michael Griggs  wrote:

> Hi Igniters,
>
>
>
> Whilst diagnosing a problem with a slow query, I became aware of a
> potential
> issue in the Ignite codebase.  When executing a SQL query that is to run
> remotely, the IgniteH2Indexing#send() method is called, with a
> Collection as one of its parameters.  This collection is
> iterated sequentially, and ctx.io().sendGeneric() is called synchronously
> for each node.  This is inefficient if
>
>
>
> a)   This is the first execution of a query, and thus TCP connections
> have to be established
>
> b)  The cost of establishing a TCP connection is high
>
>
>
> And optionally
>
>
>
> c)   There are a large number of nodes in the cluster
>
>
>
> In my current situation, developers want to run test queries from their
> code
> running locally, but connected via VPN to their UAT server environment.
> The
> cost of opening a TCP connection is multiple seconds, as you can see
> from this Ignite log file snippet:
>
> 2017-05-22 18:29:48,908 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/7.1.14.242:56924,
> rmtAddr=/10.132.80.3:47100]
>
> 2017-05-22 18:29:52,294 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/7.1.14.242:56923,
> rmtAddr=/10.132.80.30:47102]
>
> 2017-05-22 18:29:58,659 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/7.1.14.242:56971,
> rmtAddr=/10.132.80.23:47101]
>
> 2017-05-22 18:30:03,183 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/7.1.14.242:56972,
> rmtAddr=/10.132.80.21:47100]
>
> 2017-05-22 18:30:06,039 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/7.1.14.242:56973,
> rmtAddr=/10.132.80.21:47103]
>
> 2017-05-22 18:30:10,828 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/7.1.14.242:57020,
> rmtAddr=/10.132.80.20:47100]
>
> 2017-05-22 18:30:13,060 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/7.1.14.242:57021,
> rmtAddr=/10.132.80.29:47103]
>
> 2017-05-22 18:30:22,144 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/7.1.14.242:57022,
> rmtAddr=/10.132.80.22:47103]
>
> 2017-05-22 18:30:26,513 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/7.1.14.242:57024,
> rmtAddr=/10.132.80.20:47101]
>
> 2017-05-22 18:30:28,526 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/7.1.14.242:57025,
> rmtAddr=/10.132.80.30:47103]
>
>
>
> Comparing the same code that is executed inside of the UAT environment (so
> not using the VPN):
>
> 2017-05-22 18:22:18,102 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/10.175.11.38:53288,
> rmtAddr=/10.175.11.58:47100]
>
> 2017-05-22 18:22:18,105 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/10.175.11.38:45890,
> rmtAddr=/10.175.11.54:47101]
>
> 2017-05-22 18:22:18,108 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/127.0.0.1:47582,
> rmtAddr=/127.0.0.1:47100]
>
> 2017-05-22 18:22:18,111 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/127.0.0.1:45240,
> rmtAddr=/127.0.0.1:47103]
>
> 2017-05-22 18:22:18,114 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/10.175.11.38:46280,
> rmtAddr=/10.175.11.15:47100]
>
> 2017-05-22 18:22:18,118 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/10.132.80.21:51476,
> rmtAddr=/10.132.80.29:47103]
>
> 2017-05-22 18:22:18,120 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/10.132.80.21:56274,
> rmtAddr=pocfd-master1/10.132.80.22:47103]
>
> 2017-05-22 18:22:18,124 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/10.132.80.21:53558,
> rmtAddr=pocfd-ignite1/10.132.80.20:47101]
>
> 2017-05-22 18:22:18,127 INFO [TcpCommunicationSpi] - Established outgoing
> communication connection [locAddr=/10.132.80.21:56216,
> rmtAddr=/10.132.80.30:47103]
>
>
>
> This is a design flaw in the Ignite code, as we are relying on the client's
> network behaving in a particular way (i.e., port opening being very fast).
> We should instead try to mask this 

Re: [VOTE] Accept Contribution of Ignite Persistent Store

2017-05-23 Thread Valentin Kulichenko
+1

On Tue, May 23, 2017 at 8:42 AM, Semyon Boikov  wrote:

> +1
>
> On Tue, May 23, 2017 at 12:55 AM, Denis Magda  wrote:
>
> > Igniters,
> >
> > This branch (https://github.com/apache/ignite/tree/ignite-5267) adds a
> > distributed and transactional Persistent Store to Apache Ignite project.
> > The store seamlessly integrates with Apache Ignite 2.0 page memory
> > architecture. One of the main advantages of the store is that Apache
> Ignite
> > becomes fully operational from disk (SSD or Flash) without any need to
> > preload the data in memory. Plus, with full SQL support already available
> > in Apache Ignite, this feature will allow Apache Ignite serve as a
> > distributed SQL database, both in memory or on disk, while continuing to
> > support all the existing functionality on the current API.
> > More information here:
> > - Persistent Store Overview: https://cwiki.apache.org/
> > confluence/display/IGNITE/Persistent+Store+Overview
> > - Persistent Store Internal Design: https://cwiki.apache.org/
> > confluence/display/IGNITE/Persistent+Store+Internal+Design
> > The Persistent Store was developed by GridGain outside of Apache
> community
> > because it was requested by one of GridGain’s customers. Presently,
> > GridGain looks forward to donating the Persistent Store to ASF and given
> > the size of the contribution, it is prudent to follow Apache's IP
> clearance
> > process.
> > The SGA has been submitted and acknowledged by ASF Secretary. The IP
> > clearance form can be found here: http://incubator.apache.org/
> > ip-clearance/persistent-distributed-store-ignite.html
> > This vote is to discover if the Apache Ignite PMC and community are in
> > favour of accepting this contribution.
> > This vote will be open for at least 72 hours:
> > [ ] +1, accept contribution of the Persistent Store into the project
> > [ ] 0, no opinion
> > [ ] -1, reject contribution because...
> >
> > Regards,
> > Denis
> >
> >
>


Cassandra store and binary objects

2017-05-22 Thread Valentin Kulichenko
Hi Igor,

Can you please clarify whether the Cassandra store can work directly with
binary objects when the POJO strategy is used? In other words, are POJO classes
required on server nodes for this strategy? And what will happen if the
CacheConfiguration#keepBinaryInStore property is set to true? (The setting I
mean is sketched below.)
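
For reference, a sketch of the setting I'm asking about. I'm using the
setStoreKeepBinary() method name here and assuming it is the same property:

    CacheConfiguration<Long, Object> ccfg = new CacheConfiguration<>("personCache"); // example name
    ccfg.setStoreKeepBinary(true); // store callbacks then receive BinaryObject instances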

-Val


Re: Session Replication Update on spring boot platform

2017-05-23 Thread Valentin Kulichenko
Hi Rishi,

It was working for me in a cluster environment after the fix [1] we discussed
in the previous thread [2]. The fix was included in Ignite 2.0.

Can you please reattach the latest version of your app based on Ignite 2.0
and give detailed step-by-step instructions on how to reproduce the issue
you're having?

[1] https://issues.apache.org/jira/browse/IGNITE-4948
[2]
http://apache-ignite-developers.2346864.n4.nabble.com/IGNITE-2741-spring-session-design-td14560.html

-Val

On Mon, May 22, 2017 at 8:35 PM, Rishi Yagnik <rishiyag...@gmail.com> wrote:

> Hello Val,
>
> As I discussed earlier, the problem arises in the cluster environment.
>
> We have 2 stateless SB (Spring Boot) instances backed by the Ignite data store.
>
> The local environment is working fine and I am able to see the user
> sessions are being stored correctly.
>
> I could not make session replication work with Ignite 2.0 in a cluster
> environment, so my fixes are of no use.
>
> IMO, the web filter approach is very intrusive with Spring Security, and
> that is why I thought we need to come up with a solution which sits on top
> of Spring Security.
>
> A possible solution could be Spring Session.
>
> The example which I posted can be tested on a cluster as well. Would the
> Ignite team try out the clustering?
>
> Looking for your inputs / suggestions on the issue.
>
> Thank you for all your help,
> Rishi
>
>
>
>
> On Mon, May 22, 2017 at 1:02 PM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
>
>> Hi Rishi,
>>
>> I'm not sure I understand what the issue is. Can you elaborate a bit more
>> and provide exact examples of what is not working? What code tweaks are
>> required and how critical are they? Also I recall that your example was
>> working fine after the latest fixes in 1.9 (if I'm not mistaken). Did you
>> make any changes after that?
>>
>> -Val
>>
>> On Fri, May 19, 2017 at 11:34 PM, Rishi Yagnik <rishiyag...@gmail.com>
>> wrote:
>>
>>> Hello Dmitriy,
>>>
>>> Thank you for the response, I would await for Val's feedback.
>>>
>>> I would like to discuss the possible approach for implementation here,
>>> and
>>> it could be one of this -
>>>
>>> https://issues.apache.org/jira/browse/IGNITE-2741
>>>
>>> Hope we all come on to conclusion here.
>>>
>>> Thanks,
>>>
>>> On Fri, May 19, 2017 at 3:57 PM, Dmitriy Setrakyan <
>>> dsetrak...@apache.org>
>>> wrote:
>>>
>>> > Hi Rishi,
>>> >
>>> > I think the best way is to file a ticket in Ignite Jira with your
>>> > findings. This ticket does not seem tremendously difficult, so
>>> hopefully
>>> > someone from the community will pick it up. All we need to do is to
>>> take
>>> > our existing Web Session Clustering [1][2] code and port it to work
>>> with
>>> > Spring Boot.
>>> >
>>> > BTW, feel free to contribute it yourself if you have time.
>>> >
>>> > [1] https://ignite.apache.org/use-cases/caching/web-session-
>>> > clustering.html
>>> > [2] https://apacheignite-mix.readme.io/docs/web-session-clustering
>>> >
>>> > D.
>>> >
>>> > On Fri, May 19, 2017 at 11:43 AM, Rishi Yagnik <rishiyag...@gmail.com>
>>> > wrote:
>>> >
>>> >> Hello Val,
>>> >>
>>> >> I tested out the session replication on spring boot cluster and here
>>> is
>>> >> the
>>> >> result.
>>> >>
>>> >> My finding were as follows with Ignite 2.0 on session replication and
>>> hope
>>> >> that helps the team –
>>> >>
>>> >>    - Spring Security filters require the context to be set with an
>>> >>    Authentication object. Later on, when the user authentication object
>>> >>    is set on the Ignite filter from the Ignite cache, Spring Security
>>> >>    treats that as a new session, just to prevent session fixation issues.
>>> >>    - As Spring Security creates a new session, and since there is no way
>>> >>    to tell Ignite that the user session has changed, the user session is
>>> >>    no good here, despite the fact that the user session held by Ignite is
>>> >>    the true session for that user.
>>> >>    - Configuring the web session filter does not work OOTB on the Spring
>>> >>    Boot platform; it requires some additional tweaking in the code to
>>> >>    make it work (see the sketch at the end of this thread).
>>> >>
>>> >>
>>> >> So in a nutshell, I would think having Spring Session on the Ignite
>>> >> platform would support full session replication on the Spring Boot
>>> >> platform.
>>> >>
>>> >>
>>> >> Please note that we have 2 SB instances, serving requests round-robin
>>> >> via F5 (load balancer), supported by a 2-node Ignite cluster.
>>> >>
>>> >> Any suggestions on how we want to conquer the issue ?
>>> >>
>>> >> Thanks,
>>> >>
>>> >> --
>>> >> Rishi Yagnik
>>> >>
>>> >
>>> >
>>>
>>>
>>> --
>>> Rishi Yagnik
>>>
>>
>>
>
>
> --
> Rishi Yagnik
>
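
For anyone following along: a rough, untested sketch of the tweaking mentioned
above - wiring the existing web session filter into Spring Boot via a
FilterRegistrationBean. WebSessionFilter comes from the ignite-web module; the
cache name, init-parameter wiring and filter order are assumptions:

    import org.apache.ignite.cache.websession.WebSessionFilter;
    import org.springframework.boot.web.servlet.FilterRegistrationBean;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class IgniteSessionConfig {
        @Bean
        public FilterRegistrationBean igniteSessionFilter() {
            FilterRegistrationBean reg =
                new FilterRegistrationBean(new WebSessionFilter());

            // Cache that stores the replicated sessions (example name).
            reg.addInitParameter("IgniteWebSessionsCacheName", "session-cache");
            reg.addUrlPatterns("/*");

            // Run before Spring Security's filter chain.
            reg.setOrder(Integer.MIN_VALUE);

            return reg;
        }
    }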


IgniteCompute async methods do not return ComputeTaskFuture

2017-05-30 Thread Valentin Kulichenko
Folks,

I noticed that the new async API for IgniteCompute returns IgniteFuture,
while previously we used to have its extension - ComputeTaskFuture, which
contains useful information about the executed task session.

Should this be fixed?

-Val
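
To illustrate the difference on the 2.0 API (an Ignite instance "ignite" is
assumed, and MyTask is a placeholder for a user-defined ComputeTask):

    IgniteCompute compute = ignite.compute();

    // Closure methods now return a plain IgniteFuture - no task session.
    IgniteFuture<Integer> closureFut = compute.callAsync(() -> 42);

    // Task execution still returns ComputeTaskFuture, which exposes the session.
    ComputeTaskFuture<Integer> taskFut = compute.executeAsync(new MyTask(), "arg");
    ComputeTaskSession ses = taskFut.getTaskSession();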


Re: Cassandra and Ignite C++ Issue

2017-05-30 Thread Valentin Kulichenko
Igor,

There is another SO question about this, I already responded:
https://stackoverflow.com/questions/44254079/ignite-with-cassandra-integration

-Val

On Tue, May 30, 2017 at 10:15 AM, Igor Sapego  wrote:

> User's answer:
> I have changed my POJO to BLOB.
>
> Current persistence.xml file look like
>
> <persistence ...>
>    <field ... column="assetid"/>
>    <field ... column="asset_desc"/>
> </persistence>
>
> now it produces an error like
>
> "Caused by: com.datastax.driver.core.exceptions.CodecNotFoundException:
> Codec not found for requested operation: [varchar <->
> java.nio.HeapByteBuffer] "
>
> Best Regards,
> Igor
>
> On Fri, May 26, 2017 at 2:37 PM, Igor Sapego  wrote:
>
> > Cross-posted Igor's answer to SO
> >
> > Best Regards,
> > Igor
> >
> > On Fri, May 26, 2017 at 8:58 AM, Igor Rudyak  wrote:
> >
> >> I assume that's because you are using this:
> >>
> >> 
> >>
> >>
> >> Current implementation supports only BLOB serialization for binary
> >> objects. We already have a ticket for this: https://issues.apache.or
> >> g/jira/browse/IGNITE-5270
> >>
> >> Igor
> >>
> >>
> >>
> >> On Thu, May 25, 2017 at 8:19 PM, Denis Magda 
> wrote:
> >>
> >>> Igor R., Igor S.,
> >>>
> >>> Please take a look at this issue reported on SO:
> >>> https://stackoverflow.com/questions/44178514/ignite-c-client
> >>> -for-cassandra-integration
> >>>
> >>> --
> >>> Denis
> >>>
> >>
> >>
> >
>


Re: IgniteCompute async methods do not return ComputeTaskFuture

2017-05-30 Thread Valentin Kulichenko
+1 to Yakov. Closure execution also creates a task and I don't see any
reason for hiding it. And actually we don't hide it - we fire task/job
events, apply the same failover mechanisms, etc.

What is probably confusing here is the name of the class. ComputeTaskFuture
indeed looks like it applies only to the execution of a ComputeTask. How about
renaming it to IgniteComputeFuture (or just ComputeFuture) then?

-Val

On Tue, May 30, 2017 at 12:56 PM, Yakov Zhdanov <yzhda...@apache.org> wrote:

> Vladimir,
>
> I disagree. I understand this is minor issue, but still.
>
> Here are the points:
>
> 1. TaskSession is supported for all compute methods. Please see -
> ComputeFailoverExample. Every compute method starts a task.
> 2. You still return task future, but method return type is a
> super-interface.
> 3. User cannot identify the spawned broadcast - returned future does not
> provide any ID.
>
> --Yakov
>
> 2017-05-30 11:28 GMT+03:00 Vladimir Ozerov <voze...@gridgain.com>:
>
> > Valya,
> >
> > This future contains task session. We intentionally changed return type
> to
> > plain IgniteFuture for closure methods, as there is no notion of
> "session"
> > and "task" for them. ComputeTaskFuture now returned only from
> task-related
> > methods ("execute"). Unless I am missing something, this approach looks
> > correct.
> >
> >
> > On Tue, May 30, 2017 at 11:06 AM, Valentin Kulichenko <
> > valentin.kuliche...@gmail.com> wrote:
> >
> > > Folks,
> > >
> > > I noticed that the new async API for IgniteCompute returns
> IgniteFuture,
> > > while previously we used to have its extension - ComputeTaskFuture,
> which
> > > contains useful information about the executed task session.
> > >
> > > Should this be fixed?
> > >
> > > -Val
> > >
> >
>


Re: Data compression in Ignite 2.0

2017-06-07 Thread Valentin Kulichenko
Vyacheslav, Anton,

Are there any ideas and/or prototypes for the API? Your design suggestions
seem to make sense, but I would like to see how all this will look from the
user's standpoint.

-Val

On Wed, Jun 7, 2017 at 1:06 AM, Антон Чураев  wrote:

> Vyacheslav, correct me if something is wrong.
>
> We could give users the opportunity to choose between CPU usage and MEM/NET
> usage by compressing some attributes of stored objects.
> You have studied the design, and it is possible to localize the changes in
> marshalling without affecting performance or current functionality.
>
> I think that it's useful for our project and users.
> Community, what do you think about this proposal?
>
>
> 2017-06-06 17:29 GMT+03:00 Vyacheslav Daradur :
>
> > In short,
> >
> > During marshalling a fields is represented as BinaryFieldAccessor which
> > manages its marshalling. It checks if the field is marked by annotation
> > @BinaryCompression, in that case - binary  representation of field (bytes
> > array) will be compressed. It will be marked as compressed by types
> > constant (GridBinaryMarshaller.COMPRESSED), after this the compressed
> > bytes
> > array wiil be include in binary representation of whole object. Note,
> > header of marshalled object will not be compressed. Compression affected
> > only object's field representation.
> >
> > Objects in IgniteCache is represented as BinaryObject which is wrapper
> over
> > bytes array of marshalled object.
> > BinaryObject provides some usefull methods, which are used by Ignite
> > systems.
> > For example, the Queries use BinaryObject#field method, which
> deserializes
> > only field of object, without deserializing of whole object.
> > BinaryObject#field method during deserialization, if meets the constant
> of
> > compressed type, decompress this bytes array, then continue unmarshalling
> > as usual.
> >
> > Now, I have introduced a Compressor interface in IgniteConfiguration; it
> > allows the user to plug in their own compressor implementation - this is
> > a requirement in the task [1].
> >
> > As far as I know, Vladimir Ozerov doesn't like the idea of granting this
> > opportunity to the user.
> > In that case we can choose a compression algorithm which we will provide
> > by default, and move the interface to the internals of the binary
> > infrastructure.
> > For this case I've prepared benchmarks, which I've sent earlier.
> >
> > I vote for the ZSTD algorithm [2]: it provides a good compression ratio
> > and good throughput. It has implementations in Java, .NET and C++, and an
> > ASF-friendly license, so we can use it in all the Ignite platforms.
> > You can look at an assessment of this algorithm in my benchmarks.
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-3592
> > [2]https://github.com/facebook/zstd
> >
> >
> > 2017-06-06 16:02 GMT+03:00 Антон Чураев :
> >
> > > Looks good to me.
> > >
> > > Could you propose a design for the implementation in a couple of
> > > sentences, so that we can estimate the completeness and complexity of
> > > the proposal?
> > >
> > > 2017-06-06 15:26 GMT+03:00 Vyacheslav Daradur :
> > >
> > > > Anton,
> > > >
> > > > Of course, the solution does not affect the existing implementation.
> > > > I mean, there are no changes if the user does not use the
> > > > @BinaryCompression annotation (no performance changes).
> > > > Only if the user decides to use compression on a specific field or
> > > > fields of a class will compression be applied at marshalling to the
> > > > annotated fields.
> > > >
> > > > 2017-06-06 15:10 GMT+03:00 Антон Чураев :
> > > >
> > > > > Vyacheslav,
> > > > >
> > > > > Is it possible to propose an implementation that can be switched on
> > > > > on demand?
> > > > > In this case it should not affect the performance of the current
> > > > > solution.
> > > > >
> > > > > I mean that users should decide what is more important for them:
> > > > > throughput or memory/net usage.
> > > > > Maybe they will choose not all objects, but only some attributes of
> > > > > objects, to compress.
> > > > >
> > > > > 2017-06-06 14:48 GMT+03:00 Vyacheslav Daradur  >:
> > > > >
> > > > > > Conclusion:
> > > > > > The provided solution allows reducing the size of an object in
> > > > > > IgniteCache at the cost of a throughput reduction (small in some
> > > > > > cases); it depends on the part of the object that is compressed
> > > > > > and on the compression algorithm.
> > > > > > I mean, we can make more effective use of memory, and in some
> > > > > > cases it can reduce load on the interconnect (replication,
> > > > > > rebalancing).
> > > > > >
> > > > > > It will be particularly useful for object fields which are large
> > > > > > text (>~ 250 bytes) and can be compressed effectively.
> > > > > >
> > > > > > 2017-06-06 12:00 GMT+03:00 Антон Чураев 
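
To make the proposal concrete, a sketch of how the discussed annotation might
look on a user class. @BinaryCompression is the annotation proposed in
IGNITE-3592; its exact package and semantics are still under discussion:

    public class Document {
        // Small fields stay as-is: compression would only add overhead.
        private long id;

        // Proposed: the binary representation of this field is compressed
        // and written under the GridBinaryMarshaller.COMPRESSED type constant.
        @BinaryCompression
        private String largeText;
    }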

Re: BinaryObjectImpl.deserializeValue with specific ClassLoader

2017-06-07 Thread Valentin Kulichenko
Hi Nick,

How exactly do you replace the class loader and can you give a little bit
more detail about why you do this?

As for the issues, here are my comments:

   1. Can you clarify this? In which scenario it doesn't work and why?
   2. Why do you deploy domain classes in the first place? Ignite
   architecture assumes that there are no classes on server side at all, so I
   think it would be very hard to load them dynamically. Can you use binary
   objects instead?
   3. I can't recall any such places from the top of my head, but frankly I
   don't think you have such a guarantee. IgniteConfiguration object is
   assumed to be created prior to node startup. It should not be modified
   after startup.

-Val

On Thu, Jun 1, 2017 at 9:46 AM, npordash  wrote:

> I wanted to provide an update on where I am at with this as I'm still
> exploring different options.
>
> For the time being I've taken the path of using a delegating ClassLoader in
> IgniteConfiguration as was previously suggested; however, with the
> difference being that whenever a service is deployed/undeployed I end up
> replacing the ClassLoader with a new instance that has the service's
> ClassLoader added/removed. The replacement is necessary so that unused/old
> classes can be reclaimed by the garbage collector and that new versions can
> be deployed in the future.
>
> Overall I think this is a more comprehensive approach since it also allows
> execution constructs like CacheEntryProcessor implementations provided by
> the service to be used across the grid without enabling p2p classloading
> (assuming the service is deployed to every node).
>
> There are a few concerns/issues with this approach:
>
> 1) There are still isolation concerns with class versions across different
> services, but as long as the services don't have dependencies between each
> other or have overlapping class usage in a cache or execution context then
> it's a non-issue.
> 2) Using OnheapCacheEnabled in conjunction with service provided domain
> classes is impossible since once the service is reloaded and we get a new
> classloader all calls to deserialize will fail.
> 3) This approach only works if nothing caches the result of
> IgniteConfiguration.getClassLoader, which isn't obvious since the docs
> related to classloader behavior in Ignite is pretty sparse (or at least I
> could not find it). I don't think anything does when I check that method
> usage, but I'm not 100% sure about that.
>
>
>
> --
> View this message in context: http://apache-ignite-
> developers.2346864.n4.nabble.com/Re-BinaryObjectImpl-
> deserializeValue-with-specific-ClassLoader-tp17126p18358.html
> Sent from the Apache Ignite Developers mailing list archive at Nabble.com.
>
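
For reference, a minimal sketch of the delegating ClassLoader approach
described above (class and variable names are hypothetical; error handling
omitted):

    import java.util.List;

    // Parent-first delegation; if the parent can't find the class,
    // each registered service ClassLoader is tried in turn.
    public class DelegatingClassLoader extends ClassLoader {
        private final List<ClassLoader> delegates;

        public DelegatingClassLoader(ClassLoader parent, List<ClassLoader> delegates) {
            super(parent);
            this.delegates = delegates;
        }

        @Override protected Class<?> findClass(String name) throws ClassNotFoundException {
            for (ClassLoader delegate : delegates) {
                try {
                    return delegate.loadClass(name);
                }
                catch (ClassNotFoundException ignored) {
                    // Fall through to the next delegate.
                }
            }

            throw new ClassNotFoundException(name);
        }
    }

    // On every service deploy/undeploy, a fresh instance replaces the old one,
    // so unused classes can be garbage collected:
    // cfg.setClassLoader(new DelegatingClassLoader(parent, updatedDelegates));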


Cassandra related question

2017-06-08 Thread Valentin Kulichenko
Igor,

Can you please take a look at this question from a user? Is it a bug?

http://apache-ignite-users.70518.x6.nabble.com/Ignite-Cassandra-Exception-td13529.html

-Val


Re: Key type name and value type name for CREATE TABLE

2017-06-06 Thread Valentin Kulichenko
Vova,

If you add a unique suffix, losing human-readable type names, how will the
builder approach work? Maybe it makes sense to add an API call that returns
the current type name for a table?

-Val

On Tue, Jun 6, 2017 at 7:43 PM Dmitriy Setrakyan 
wrote:

> Vova,
>
> I am not sure I like the key type name the way it is. Can we add some
> separator between the table name and key name, like "_". To me "PERSON_KEY"
> reads a lot better than "PERSONKey".
>
> D.
>
> On Tue, Jun 6, 2017 at 4:00 AM, Sergi Vladykin 
> wrote:
>
> > Unique suffix is a good idea.
> >
> > Sergi
> >
> > 2017-06-06 13:51 GMT+03:00 Vladimir Ozerov :
> >
> > > Igniters,
> > >
> > > In the very first implementation of CREATE TABLE we applied the
> following
> > > rule to key and value type names:
> > > keyTypeName == tableName + "Key"
> > > valTypeName == tableName
> > >
> > > E.g.:
> > > CREATE TABLE Person ...
> > > keyTypeName == PERSONKey
> > > valTypeName == PERSON
> > >
> > > After that user could potentially create objects with these type names
> > > manually and add them to cache through native Ignite API:
> > >
> > > BinaryObject key =
> IgniteBinary.builder("PERSONKey").addField().build();
> > > BinaryObject val = IgniteBinary.builder("PERSON").addField().build();
> > > IgniteCache.put(key, val);
> > >
> > > This approach has two problems:
> > > 1) Type names are not unique between different tables. It means that if
> > > two tables with the same name are created in different schemas, we get
> > > a conflict.
> > > 2) Type names are bound to binary metadata, so if the user decides to
> > > do the following, he will receive an error about incompatible metadata:
> > > CREATE TABLE Person (id INT PRIMARY KEY);
> > > DROP TABLE Person;
> > > CREATE TABLE Person(in BIGINT PRIMARY KEY); // Fail because old meta
> > still
> > > has type "Integer".
> > >
> > > In order to avoid that I am going to add unique suffix or so (say,
> UUID)
> > to
> > > type names. This way there will be no human-readable type names any
> more,
> > > but there will be no conflicts either. In future releases we will relax
> > > this somehow.
> > >
> > > Thoughts?
> > >
> > > Vladimir.
> > >
> >
>


IgniteCache#localEvict method

2017-06-18 Thread Valentin Kulichenko
Folks,

Does the method in the subject still make sense in 2.0? Previously, it could
be used to evict entries from on-heap memory to off-heap or swap. What are
the semantics now?

-Val


Re: [VOTE] Apache Ignite 2.0.0 RC2

2017-05-01 Thread Valentin Kulichenko
+1

On Mon, May 1, 2017 at 3:12 PM Vladimir Ozerov  wrote:

> +1
>
> On Apr 30, 2017 at 17:46, "Denis Magda"
> wrote:
>
> > Igniters!
> >
> > We have uploaded a 2.0.0 release candidate to
> > https://dist.apache.org/repos/dist/dev/ignite/2.0.0-rc2/
> >
> > Git tag name is
> > 2.0.0-rc2
> >
> > This release includes the following changes:
> >
> > Ignite:
> > * Introduced new page memory architecture.
> > * Machine Learning beta: distributed algebra support for dense and sparse
> > data sets.
> > * Reworked and simplified API for asynchronous operations.
> > * Custom thread pool executors for compute tasks.
> > * Removed CLOCK mode in ATOMIC cache.
> > * Deprecated schema-import utility in favor of Web Console.
> > * Integration with Spring Data.
> > * Integration with Hibernate 5.
> > * Integration with RocketMQ.
> > * Integration with ZeroMQ.
> > * SQL: CREATE INDEX and DROP INDEX commands.
> > * SQL: Ability to execute queries over specific set of partitions.
> > * SQL: Improved REPLICATED cache support.
> > * SQL: Updated H2 version to 1.4.195.
> > * SQL: Improved performance of MIN/MAX aggregate functions.
> > * ODBC: Added Time data type support.
> > * Massive performance improvements.
> >
> > Ignite.NET :
> > * Custom plugin API.
> > * Generic cache store.
> > * Binary types now can be registered dynamically.
> > * LINQ: join, "contains" and DateTime property support.
> >
> > Ignite CPP:
> > * Implemented Cache::Invoke.
> > * Added remote filters support to continuous queries.
> >
> > Ignite Web Console:
> > * Multi-cluster support.
> > * Possibility to configure Kubernetes IP finder.
> > * EnforceJoinOrder option on Queries screen.
> >
> > Complete list of closed issues:
> > https://issues.apache.org/jira/issues/?jql=project%20%3D%20IGNITE%20AND%
> > 20fixVersion%20%3D%202.0%20AND%20(status%20%3D%
> > 20closed%20or%20status%20%3D%20resolved)
> >
> > DEVNOTES
> > https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=blob_
> > plain;f=DEVNOTES.txt;hb=refs/tags/2.0.0-rc2
> >
> > RELEASENOTES
> > https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=blob_
> > plain;f=RELEASE_NOTES.txt;hb=refs/tags/2.0.0-rc2
> >
> > Please start voting.
> >
> > +1 - to accept Apache Ignite 2.0.0-rc2
> > 0 - don't care either way
> > -1 - DO NOT accept Apache Ignite 2.0.0-rc2 (explain why)
> >
> > This vote will go for 72 hours.
>


Re: ContiniousQuery giving notification for different dataset

2017-05-08 Thread Valentin Kulichenko
Continuous queries are predicate-based. Initial query is a separate
optional feature that allows you to fetch the data that existed before
listener was registered, it does not affect further update notifications in
any way. Having said that, the behavior you observe is correct.

-Val
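
To illustrate the point above, a minimal sketch (Person and the age-based
predicate are example types; filters and listeners must be serializable in
real code):

    ContinuousQuery<Integer, Person> qry = new ContinuousQuery<>();

    // Optional: fetch entries that existed before the listener was registered.
    qry.setInitialQuery(new ScanQuery<>((k, v) -> v.getAge() > 30));

    // Only the remote filter decides which *updates* produce notifications;
    // the initial query has no effect on them.
    qry.setRemoteFilterFactory(() -> evt -> evt.getValue().getAge() > 30);

    qry.setLocalListener(evts -> evts.forEach(System.out::println));

    try (QueryCursor<Cache.Entry<Integer, Person>> cur = cache.query(qry)) {
        cur.forEach(e -> System.out.println("existing: " + e));
        // Keep the cursor open to continue receiving filtered updates.
    }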

On Mon, May 8, 2017 at 12:12 PM, fatih  wrote:

> I have already mentioned below that I can use the remote filter factory
> with more conditions, but then this is basically duplication.
>
> Second,
> When I try ContinuousQuery I am getting events for the records I am not
> subscribed to. I know that I can write the conditions again in the remote
> filter factory, but then it is a duplication of the SQL query, which I
> don't want to repeat again in a closure.
>
>
>
> --
> View this message in context: http://apache-ignite-
> developers.2346864.n4.nabble.com/ContiniousQuery-giving-
> notification-for-irrelevant-records-tp17538p17541.html
> Sent from the Apache Ignite Developers mailing list archive at Nabble.com.
>


Re: Zookeeper Discovery SPI & external IP address in AWS

2017-06-27 Thread Valentin Kulichenko
Yakov,

What do you mean by 'mixture'? :) Client obviously needs to know public
addresses to connect and I think it's natural to get them from IP finder.
Is there something wrong with this?

-Val
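
For reference, option 1 from the client's perspective - the IP finder simply
lists the servers' public addresses, so no AddressResolver is needed on the
client (addresses are example values; imports omitted):

    TcpDiscoveryVmIpFinder ipFinder = new TcpDiscoveryVmIpFinder();

    // Public (external) addresses of the server nodes.
    ipFinder.setAddresses(Arrays.asList("203.0.113.10:47500", "203.0.113.11:47500"));

    TcpDiscoverySpi spi = new TcpDiscoverySpi();
    spi.setIpFinder(ipFinder);

    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setClientMode(true);
    cfg.setDiscoverySpi(spi);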

On Tue, Jun 27, 2017 at 5:01 AM, Yakov Zhdanov  wrote:

> >>Both will work, but frankly I like option 1 more. First of all, it's just
> >>more intuitive that IP finder contains all possible addresses that can be
> >>used to join. Second of all, option 2 introduces requirement to have
> >>address resolver for server addresses configured on client nodes - this
> is
> >>not very good from usability standpoint.
>
> Val, I see your point, but (1) means that clients should have public IPs
> for communication or server resolver should be able to resolve them. In any
> case there will be some mixture. Is that correct?
>
> --Yakov
>


Re: DataStreamer Transactional and Timestamp Implementation

2017-06-27 Thread Valentin Kulichenko
Fatih,

Can you give more details about the use case addressed by this
implementation?

-Val

On Tue, Jun 27, 2017 at 3:45 PM, Denis Magda  wrote:

> Fatih,
>
> Thanks for sharing it with us.
>
> In general, we can always wrap this code up in the form of an example to be
> delivered with every Ignite release.
>
> But it probably makes sense to add it to the core streaming functionality.
> *Igniters*, what do you think?
>
> —
> Denis
>
> > On Jun 27, 2017, at 12:29 AM, fatih  wrote:
> >
> > Hi
> >
> > We have implemented some receivers to be able to update data on data
> > nodes via the Ignite DataStreamer API.
> >
> > There is an associated ticket, linked below, that already exists. We
> > thought it would be useful to have that implementation in Ignite
> > directly. Maybe it can be added to Ignite directly.
> >
> > http://apache-ignite-users.70518.x6.nabble.com/
> Transaction-Boundary-Data-Streamer-td13803.html#a14078
> > AbstractTransactionalStreamReceiver.java
> >
> > TimestampBasedUpdateStreamReceiver.java
> >
> >
> >
> >
> > --
> > View this message in context: http://apache-ignite-
> developers.2346864.n4.nabble.com/DataStreamer-Transactional-and-Timestamp-
> Implementation-tp19129.html
> > Sent from the Apache Ignite Developers mailing list archive at
> Nabble.com.
>
>
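
Without having seen the attached code, a rough sketch of what a transactional
stream receiver along these lines might look like (the actual implementation
referenced above may differ; this requires a TRANSACTIONAL cache and
allowOverwrite(true) on the streamer):

    public class TransactionalStreamReceiver<K, V> implements StreamReceiver<K, V> {
        @IgniteInstanceResource
        private Ignite ignite;

        @Override public void receive(IgniteCache<K, V> cache,
            Collection<Map.Entry<K, V>> entries) throws IgniteException {
            // Apply the whole batch atomically on the data node.
            try (Transaction tx = ignite.transactions().txStart()) {
                for (Map.Entry<K, V> e : entries)
                    cache.put(e.getKey(), e.getValue());

                tx.commit();
            }
        }
    }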


AffinityKeyMapper alternatives

2017-09-11 Thread Valentin Kulichenko
Guys,

Some time ago we deprecated AffinityKeyMapper in favor
of CacheKeyConfiguration#affinityKeyFieldName and AffinityKeyMapped
annotation. While I understand the reasons why we did this, I think it's
not very flexible, as it requires specifying the field name on node startup.

First of all, CacheKeyConfiguration is set on IgniteConfiguration, but not
CacheConfiguration. Does anyone know why? How can I specify the affinity
field name if I create a new cache dynamically?

Second of all, AffinityKeyMapped doesn't always work. There are cases when
model classes can't be modified with Ignite annotations, for example. For
this case I suggest introducing something like
AffinityKeyFieldNameResolver that will allow implementing custom logic
instead. Of course, it will work in the same way as the annotation, i.e.
invoked on the client side only. Is this possible?

-Val
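
For reference, the two existing mechanisms this would complement ("companyId"
is an example field name):

    // Option 1: annotation on the key class (requires modifying the class).
    public class PersonKey {
        private long personId;

        // Entries with the same companyId are collocated on the same node.
        @AffinityKeyMapped
        private long companyId;
    }

    // Option 2: configuration-based, but set on IgniteConfiguration at startup.
    IgniteConfiguration cfg = new IgniteConfiguration();
    cfg.setCacheKeyConfiguration(
        new CacheKeyConfiguration(PersonKey.class.getName(), "companyId"));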


Re: Unintuitive error message when invalid marshaller files found

2017-09-15 Thread Valentin Kulichenko
Mike,

Can you show the exception that is thrown?

-Val

On Fri, Sep 15, 2017 at 7:12 AM, Michael Griggs 
wrote:

> This afternoon I came across an unusual case where there were files in
> my work/marshaller folder with invalid filenames.  It seems that the
> valid format is -[0-9]+.classname[0-9].  However, I had files that
> were in the format -[0-9]+.classname - i.e., no trailing zero.  Where
> these files came from I'm not sure, perhaps a significantly older
> version of Ignite?
>
> The error message could be improved, and unless there is an
> outstanding JIRA I will open one to
>
> 1. Print the full file path, not just the filename - this will help in
> determining where the work/marshaller folder is located
> 2. Suggesting to clear out the contents of the work/marshaller folder
> and restart
>
> Alternatively, can we just ignore files that do not end in [0-9] ?
>
> Regards
> Mike
>
>
>
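
A sketch of the lenient handling suggested above - skip (and log) files whose
names don't end in a platform-id digit instead of failing startup. This is
hypothetical; the real fix would live in MarshallerMappingFileStore, and
workDir/log are stand-ins:

    for (File file : workDir.listFiles()) {
        String name = file.getName();

        // Valid mapping files look like "<typeId>.classname<platformId>".
        if (!Character.isDigit(name.charAt(name.length() - 1))) {
            log.warning("Skipping unrecognized marshaller mapping file " +
                "(possibly from an older Ignite version): " + file.getAbsolutePath());

            continue;
        }

        // ... restore the mapping as usual.
    }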


Re: Unintuitive error message when invalid marshaller files found

2017-09-18 Thread Valentin Kulichenko
I agree, the error message should be more informative. Mike, feel free to
create a Jira ticket for this.

-Val

On Mon, Sep 18, 2017 at 12:25 AM, Michael Griggs 
wrote:

> Sure
>
> SEVERE: Exception during start processors, node will be stopped and
> close connections
> class org.apache.ignite.IgniteCheckedException: Failed to start
> processor: GridProcessorAdapter []
> at
> org.apache.ignite.internal.IgniteKernal.startProcessor(
> IgniteKernal.java:1813)
> at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:946)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:1904)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(
> IgnitionEx.java:1646)
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1074)
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(
> IgnitionEx.java:992)
> at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:878)
> at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:777)
> at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:647)
> at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:616)
> at org.apache.ignite.Ignition.start(Ignition.java:347)
> at com.gridgain.proserv.ServerNode.run(ServerNode.java:26)
> at com.gridgain.proserv.ServerNode.main(ServerNode.java:21)
> Caused by: class org.apache.ignite.IgniteCheckedException: Reading
> marshaller mapping from file 248380598.classname failed; last symbol
> of file name is expected to be numeric.
> at
> org.apache.ignite.internal.MarshallerMappingFileStore.getPlatformId(
> MarshallerMappingFileStore.java:186)
> at
> org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(
> MarshallerMappingFileStore.java:153)
> at
> org.apache.ignite.internal.MarshallerContextImpl.
> onMarshallerProcessorStarted(MarshallerContextImpl.java:524)
> at
> org.apache.ignite.internal.processors.marshaller.
> GridMarshallerMappingProcessor.start(GridMarshallerMappingProcessor
> .java:114)
> at
> org.apache.ignite.internal.IgniteKernal.startProcessor(
> IgniteKernal.java:1810)
> ... 12 more
> Caused by: java.lang.NumberFormatException: For input string: "e"
> at
> java.lang.NumberFormatException.forInputString(
> NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:580)
> at java.lang.Byte.parseByte(Byte.java:149)
> at java.lang.Byte.parseByte(Byte.java:175)
> at
> org.apache.ignite.internal.MarshallerMappingFileStore.getPlatformId(
> MarshallerMappingFileStore.java:183)
> ... 16 more
>
> Sep 18, 2017 8:22:35 AM org.apache.ignite.logger.java.JavaLogger error
> SEVERE: Got exception while starting (will rollback startup routine).
> class org.apache.ignite.IgniteCheckedException: Failed to start
> processor: GridProcessorAdapter []
> at
> org.apache.ignite.internal.IgniteKernal.startProcessor(
> IgniteKernal.java:1813)
> at
> org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:946)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(
> IgnitionEx.java:1904)
> at
> org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(
> IgnitionEx.java:1646)
> at
> org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1074)
> at
> org.apache.ignite.internal.IgnitionEx.startConfigurations(
> IgnitionEx.java:992)
> at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:878)
> at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:777)
> at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:647)
> at
> org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:616)
> at org.apache.ignite.Ignition.start(Ignition.java:347)
> at com.gridgain.proserv.ServerNode.run(ServerNode.java:26)
> at com.gridgain.proserv.ServerNode.main(ServerNode.java:21)
> Caused by: class org.apache.ignite.IgniteCheckedException: Reading
> marshaller mapping from file 248380598.classname failed; last symbol
> of file name is expected to be numeric.
> at
> org.apache.ignite.internal.MarshallerMappingFileStore.getPlatformId(
> MarshallerMappingFileStore.java:186)
> at
> org.apache.ignite.internal.MarshallerMappingFileStore.restoreMappings(
> MarshallerMappingFileStore.java:153)
> at
> org.apache.ignite.internal.MarshallerContextImpl.
> onMarshallerProcessorStarted(MarshallerContextImpl.java:524)
> at
> org.apache.ignite.internal.processors.marshaller.
> GridMarshallerMappingProcessor.start(GridMarshallerMappingProcessor
> .java:114)
> at
> org.apache.ignite.internal.IgniteKernal.startProcessor(
> IgniteKernal.java:1810)
> ... 12 more
> Caused by: java.lang.NumberFormatException: For input string: "e"
> at
> java.lang.NumberFormatException.forInputString(
> NumberFormatException.java:65)
> at java.lang.Integer.parseInt(Integer.java:580)
> at 

Issues if -Djava.net.preferIPv4Stack=true is not set

2017-09-18 Thread Valentin Kulichenko
Guys,

I noticed there are many issues on the user forum that occur
if the -Djava.net.preferIPv4Stack=true system property is not set.

Can someone explain the nature of these issues? What exactly in Ignite
requires this system property to be set? And can this be fixed/automated
somehow?

If fix is not possible, we need to clearly document this and print out a
warning on startup.

-Val
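
For reference, the property is usually passed as a JVM option
(-Djava.net.preferIPv4Stack=true). If set programmatically, it must happen
before any networking classes are loaded, e.g.:

    public class ServerNode {
        public static void main(String[] args) {
            // Must run before Ignition.start() and any other network activity.
            System.setProperty("java.net.preferIPv4Stack", "true");

            Ignition.start();
        }
    }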


Re: Expiry policy for Cache.invoke

2017-09-13 Thread Valentin Kulichenko
Denis,

I'm confused by the issue. Do you mean that we can use expiry policy other
than the one provided in configuration? How is this possible? Can you point
to the code that implements this logic?

-Val

On Wed, Sep 13, 2017 at 11:29 AM, Dmitriy Setrakyan 
wrote:

> Denis,
>
> I agree that the behavior should be consistent, but you will not find
> anything about transactions in JCache. To my knowledge, JCache does not
> have transactions.
>
> I would file a ticket about the issue you found, so the community could
> address it. If you are interested, perhaps you can contribute a fix
> yourself.
>
> Thanks,
> D.
>
> On Wed, Sep 13, 2017 at 5:47 AM, Denis Mekhanikov 
> wrote:
>
> > Igniters!
> >
> > I noticed a weird behavior regarding expiry policy in Ignite. You can
> > find an example in the attachment.
> > When you call invoke on a cache with a configured CacheStore and
> > ExpiryPolicy, the chosen expiry depends on the cache's atomicity mode.
> > If the cache is atomic, then the "creation" expiry timeout is chosen,
> > but if it is transactional - then "access".
> >
> > I think, this behavior should at least be identical in both cases, but
> > what is more important, it should conform to JCache specification.
> > But I wasn't able to find a clear statement regarding this question in
> the
> > specification. Can somebody point out a place in the specification that
> > defines a behavior in such case?
> >
> > Cheers,
> > Denis
> >
>


Re: Web Sessions not considered valid by our RequestWrapper - FIX

2017-09-13 Thread Valentin Kulichenko
Ilya,

I'm not suggesting to include this code as-is, but rather to add tests for
this use case and improve code coverage. Our current tests seem to be
very artificial, and we keep getting issues like this one. I'm open to
suggestions here; this can even be done as a separate task if we extend the
scope.

Adding dependencies in 'test' scope is perfectly acceptable as long as they
are good from licensing standpoint (which is the case for any Spring
artifacts of course).

-Val

On Wed, Sep 13, 2017 at 2:26 AM, Ilya Kasnacheev <ilya.kasnach...@gmail.com>
wrote:

> Hello Valentin,
>
> The application you are probably referring to is user code from Stack
> Overflow and its license is uncertain.
>
> Moreover, this will require depending our tests on spring-mvc and
> spring-security.
>
> If that is acceptable, I could throw together a clean room implementation.
> But I still think it tests too little for too much effort. What do you
> think?
>
> Regards,
>
> --
> Ilya Kasnacheev
>
> 2017-09-13 2:33 GMT+03:00 Valentin Kulichenko <
> valentin.kuliche...@gmail.com
> >:
>
> > Ilya,
> >
> > I see you have a fully-pledged application to test the scenario. Is it
> > possible to include it (probably simplified a bit) into our tests suites
> so
> > that it runs periodically? This will not only verify this particular fix,
> > but also prevent us from other issues that may occur.
> >
> > -Val
> >
> > On Tue, Sep 12, 2017 at 9:04 AM, Yakov Zhdanov <yzhda...@apache.org>
> > wrote:
> >
> > > Val, can you please help?
> > >
> > > --Yakov
> > >
> > > 2017-09-12 14:30 GMT+03:00 Ilya Kasnacheev <ilya.kasnach...@gmail.com
> >:
> > >
> > > > Hello Igniters,
> > > >
> > > > It came to our attention that our handling of Web Sessions is
> > > inconsistent:
> > > > https://stackoverflow.com/questions/45648884/apache-
> > > > ignite-spring-secutiry-error
> > > >
> > > > I've filed https://issues.apache.org/jira/browse/IGNITE-6070 and
> fixed
> > > the
> > > > issue in https://github.com/apache/ignite/pull/2621 (amended tests
> > pass)
> > > >
> > > > Please step forward to review and possibly merge this change, as I
> > could
> > > > not locate any commiters familiar with Web Sessions directly.
> > > >
> > > > Thanks,
> > > >
> > > > --
> > > > Ilya Kasnacheev
> > > >
> > >
> >
>


Re: Expiry policy for Cache.invoke

2017-09-14 Thread Valentin Kulichenko
Denis,

This looks like a bug. In my understanding, getExpiryForAccess should be
called for any update, plus one of getExpiryForCreation/getExpiryForUpdate
depending on whether it's a new entry or not. And this definitely should
not depend on cache mode.

-Val

On Thu, Sep 14, 2017 at 3:02 AM, Denis Mekhanikov <dmekhani...@gmail.com>
wrote:

> Dmitriy,
>
> You are right about transactions, but the spec describes invoke, so, if it
> specifies some behavior in general, then it should be followed in both
> cases.
>
> Here is the most relevant part I could find in the spec:
> https://static.javadoc.io/javax.cache/cache-api/1.0.0/
> javax/cache/processor/EntryProcessor.html
> I think, that if the value is loaded from CacheStore, then
> getExpiryForCreation() should be used. Other methods should be called
> depending on operations performed.
>
> Denis
>
> чт, 14 сент. 2017 г. в 12:02, Denis Mekhanikov <dmekhani...@gmail.com>:
>
> > Val,
> >
> > Sorry, I may be didn't formulate the issue clearly. Other than predefined
> > expiry policies (like CreatedExpiryPolicy, AccessedExpiryPolicy, etc) you
> > can provide a custom expiry policy by calling
> > setExpiryPolicyFactory(Factory)
> > <https://ignite.apache.org/releases/latest/javadoc/org/
> apache/ignite/configuration/CacheConfiguration.html#
> setExpiryPolicyFactory(javax.cache.configuration.Factory)>.
> > So, cache will consult the configured ExpiryPolicy by calling
> > getExpiryForCreation(), getExpiryForAccess() or getExpiryForUpdate(),
> > depending on the performed operation.
> >
> > So, the methods of ExpiryPolicy that are called when invoke(...)
> > <https://ignite.apache.org/releases/latest/javadoc/org/
> apache/ignite/IgniteCache.html#invoke(K,%20org.apache.ignite.cache.
> CacheEntryProcessor,%20java.lang.Object...)> is
> > used, somehow depend on the configured atomicity. Of course, the
> configured
> > ExpiryPolicy is used, but in some cases the wrong method is called.
> >
> > Denis
> >
> > чт, 14 сент. 2017 г. в 1:54, Valentin Kulichenko <
> > valentin.kuliche...@gmail.com>:
> >
> >> Denis,
> >>
> >> I'm confused by the issue. Do you mean that we can use expiry policy
> other
> >> than the one provided in configuration? How is this possible? Can you
> >> point
> >> to the code that implements this logic?
> >>
> >> -Val
> >>
> >> On Wed, Sep 13, 2017 at 11:29 AM, Dmitriy Setrakyan <
> >> dsetrak...@apache.org>
> >> wrote:
> >>
> >> > Denis,
> >> >
> >> > I agree that the behavior should be consistent, but you will not find
> >> > anything about transactions in JCache. To my knowledge, JCache does
> not
> >> > have transactions.
> >> >
> >> > I would file a ticket about the issue you found, so the community
> could
> >> > address it. If you are interested, perhaps you can contribute a fix
> >> > yourself.
> >> >
> >> > Thanks,
> >> > D.
> >> >
> >> > On Wed, Sep 13, 2017 at 5:47 AM, Denis Mekhanikov <
> >> dmekhani...@gmail.com>
> >> > wrote:
> >> >
> >> > > Igniters!
> >> > >
> >> > > I noticed a weird behavior regarding expiry policy in Ignite. You
> can
> >> > find
> >> > > an example in the attachment.
> >> > > When you call invoke on a cache with configured CacheStore and
> >> > > ExpiryPolicy, then chosen expiry depends on cache's atomicity mode.
> >> > > If cache is atomic, then "creation" expiry timeout is chosen, but if
> >> it
> >> > is
> >> > > transactional - then "access".
> >> > >
> >> > > I think, this behavior should at least be identical in both cases,
> but
> >> > > what is more important, it should conform to JCache specification.
> >> > > But I wasn't able to find a clear statement regarding this question
> in
> >> > the
> >> > > specification. Can somebody point out a place in the specification
> >> that
> >> > > defines a behavior in such case?
> >> > >
> >> > > Cheers,
> >> > > Denis
> >> > >
> >> >
> >>
> >
>
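
A minimal sketch of the scenario under discussion - the same invoke() call
triggers different ExpiryPolicy callbacks depending on the atomicity mode
(storeFactory is a stand-in for a CacheStore factory that loads the missing
entry; imports omitted, and factories must be serializable in real code):

    CacheConfiguration<Integer, String> ccfg = new CacheConfiguration<>("test");
    ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL); // or ATOMIC
    ccfg.setReadThrough(true);
    ccfg.setCacheStoreFactory(storeFactory);
    ccfg.setExpiryPolicyFactory(() -> new ExpiryPolicy() {
        @Override public Duration getExpiryForCreation() {
            System.out.println("creation");
            return Duration.ONE_MINUTE;
        }
        @Override public Duration getExpiryForAccess() {
            System.out.println("access");
            return Duration.ONE_MINUTE;
        }
        @Override public Duration getExpiryForUpdate() {
            System.out.println("update");
            return Duration.ONE_MINUTE;
        }
    });

    // Per the thread: with the entry loaded from the store, an ATOMIC cache
    // reports "creation" here, while a TRANSACTIONAL one reports "access".
    ignite.getOrCreateCache(ccfg).invoke(1, (entry, args) -> entry.getValue());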


Re: IGNITE-2894 - Binary object inside of Externalizable still serialized with OptimizedMarshaller

2017-09-22 Thread Valentin Kulichenko
Nikita,

I think it should be consistent with Binarylizable.

-Val
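
For reference, a sketch of the Binarylizable raw-mode pattern the
Externalizable support would mirror (Trade is an example class). Raw-written
fields carry no schema, which is exactly the "Cannot find schema" behavior
reproduced above:

    public class Trade implements Binarylizable {
        private long id;
        private String symbol;

        @Override public void writeBinary(BinaryWriter writer) throws BinaryObjectException {
            BinaryRawWriter raw = writer.rawWriter();

            // Raw mode: values are written sequentially and no per-field
            // schema is kept, so BinaryObject.field("id") cannot work here.
            raw.writeLong(id);
            raw.writeString(symbol);
        }

        @Override public void readBinary(BinaryReader reader) throws BinaryObjectException {
            BinaryRawReader raw = reader.rawReader();

            id = raw.readLong();
            symbol = raw.readString();
        }
    }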

On Fri, Sep 22, 2017 at 7:12 AM Nikita Amelchev <nsamelc...@gmail.com>
wrote:

> Another problem is supporting BinaryObject methods, for example, when we
> need to get a field (a frequent case in queries with the QuerySqlField
> annotation). In a binary object, fields are read from the schema, which I
> don't have (BinaryObjectException: Cannot find schema for object with
> compact footer).
>
> I see such ways to resolve it:
>
> 1. Deserialize object and get a field.
>
> 2. Make methods like BinaryFieldImpl.value(obj) unavailable. I tried to
> reproduce similar behavior with Binarylizable(rawWriter) and it throws the
> same exception.
>
> Therefore, if we want to avoid deserialization, we should end up with a
> format that is similar to Binarylizable with a raw writer. Is that right?
>
> What are your thoughts?
>
>
> 2017-09-19 20:10 GMT+03:00 Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > Nikita,
> >
> > It sounds like the test should be changed, no? In case I'm missing
> > something, can you please give more details about the scenario which
> > requires deserialization? Generally, this sounds weird - in cases when we
> > can get advantage of binary format and avoid deserialization, we
> definitely
> > should not deserialize.
> >
> > -Val
> >
> > On Tue, Sep 19, 2017 at 6:17 AM, Nikita Amelchev <nsamelc...@gmail.com>
> > wrote:
> >
> > > I have some problem when we don't deserialize Externalizable. Some
> > messages
> > > require deserializing in GridCacheIoManager.message0(). For example,
> > tests
> > > like testResponseMessageOnUnmarshallingFailed where readExternal throws
> > an
> > > exception. A message containing Externalizable is deserialized and
> > > processed as a failed message. If we do not deserialize here, we won't
> > > process this message as failed. What way to resolve it? I see we can
> try
> > to
> > > deserialize after a check on Externalizable in a finishUnmarshall
> method,
> > > but it looks bad. What are your thoughts?
> > >
> > > 2017-09-07 12:57 GMT+03:00 Nikita Amelchev <nsamelc...@gmail.com>:
> > >
> > > > I also agree that we should try to use Externalizable without
> > > > deserialization on servers. I will do it in a way suggested in the
> > > review.
> > > > The Externalizable will marshal through type
> GridBinaryMarshaller.OBJ,
> > in
> > > > addition, I’ll add a flag in BinaryConfiguration which will be used
> to
> > > turn
> > > > on the old way with OptimizedMarshaller for compatibility with the
> > > current
> > > > data format.
> > > >
> > > > 2017-09-06 4:21 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:
> > > >
> > > >> Vova, I agree. Let's stay loyal to our binary serialization
> protocol.
> > > >>
> > > >> Do you know if we will be loosing any functionality available in
> > > >> Externalizable, but not present in our binary protocol?
> > > >>
> > > >> D.
> > > >>
> > > >> On Mon, Sep 4, 2017 at 11:38 PM, Vladimir Ozerov <
> > voze...@gridgain.com>
> > > >> wrote:
> > > >>
> > > >> > Folks,
> > > >> >
> > > >> > Let's discuss how this should be handled properly. I proposed to
> use
> > > the
> > > >> > same format as regular binary object, with all data being written
> in
> > > >> "raw"
> > > >> > form. This would give us one critical advantage - we will be able
> to
> > > >> work
> > > >> > with such objects without deserialization on the server. Hence, no
> > > >> classes
> > > >> > will be needed on the server side. Current implementation (see PR
> in
> > > the
> > > >> > ticket) defines separate format which require deserialization, I
> am
> > > not
> > > >> OK
> > > >> > with it.
> > > >> >
> > > >> > Thoughts?
> > > >> >
> > > >> > On Wed, Aug 23, 2017 at 6:11 PM, Nikita Amelchev <
> > > nsamelc...@gmail.com>
> > > >> > wrote:
> > > >> >
> > > >> > > Hello, Igniters!
> > > >> > >
> > > >> > > I've developed Externalizable interface s

Re: IGNITE-2894 - Binary object inside of Externalizable still serialized with OptimizedMarshaller

2017-09-19 Thread Valentin Kulichenko
Nikita,

It sounds like the test should be changed, no? In case I'm missing
something, can you please give more details about the scenario which
requires deserialization? Generally, this sounds weird - in cases when we
can get advantage of binary format and avoid deserialization, we definitely
should not deserialize.

-Val

On Tue, Sep 19, 2017 at 6:17 AM, Nikita Amelchev <nsamelc...@gmail.com>
wrote:

> I have a problem when we don't deserialize Externalizable. Some messages
> require deserialization in GridCacheIoManager.message0(). For example, in
> tests like testResponseMessageOnUnmarshallingFailed, readExternal throws an
> exception. A message containing an Externalizable is deserialized and
> processed as a failed message. If we do not deserialize here, we won't
> process this message as failed. What is the way to resolve this? I see we
> can try to deserialize after a check for Externalizable in the
> finishUnmarshall method, but it looks bad. What are your thoughts?
>
> 2017-09-07 12:57 GMT+03:00 Nikita Amelchev <nsamelc...@gmail.com>:
>
> > I also agree that we should try to use Externalizable without
> > deserialization on servers. I will do it in a way suggested in the
> review.
> > The Externalizable will marshal through type GridBinaryMarshaller.OBJ, in
> > addition, I’ll add a flag in BinaryConfiguration which will be used to
> turn
> > on the old way with OptimizedMarshaller for compatibility with the
> current
> > data format.
> >
> > 2017-09-06 4:21 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:
> >
> >> Vova, I agree. Let's stay loyal to our binary serialization protocol.
> >>
> >> Do you know if we will be loosing any functionality available in
> >> Externalizable, but not present in our binary protocol?
> >>
> >> D.
> >>
> >> On Mon, Sep 4, 2017 at 11:38 PM, Vladimir Ozerov <voze...@gridgain.com>
> >> wrote:
> >>
> >> > Folks,
> >> >
> >> > Let's discuss how this should be handled properly. I proposed to use
> the
> >> > same format as regular binary object, with all data being written in
> >> "raw"
> >> > form. This would give us one critical advantage - we will be able to
> >> work
> >> > with such objects without deserialization on the server. Hence, no
> >> classes
> >> > will be needed on the server side. Current implementation (see PR in
> the
> >> > ticket) defines separate format which require deserialization, I am
> not
> >> OK
> >> > with it.
> >> >
> >> > Thoughts?
> >> >
> >> > On Wed, Aug 23, 2017 at 6:11 PM, Nikita Amelchev <
> nsamelc...@gmail.com>
> >> > wrote:
> >> >
> >> > > Hello, Igniters!
> >> > >
> >> > > I've developed Externalizable interface support using
> BinaryMarshaller
> >> > [1].
> >> > >
> >> > > I have a misunderstanding with defining BinaryWriteMode in
> >> > > BinaryUtils.mode(cls): there is AffinityKey class which implements
> >> > > Externalizable and registered with ReflectiveSerialize,
> >> BinaryMarshaller
> >> > > defines it as BinaryWriteMode.OBJ and uses reflection according to
> >> > current
> >> > > logic. I want to say that AffinityKey must be defined as
> >> > > BinaryWriteMode.OBJ although the class implements the Externalizable
> >> > > interface.
> >> > > I have to add a new one more parameter in BinaryUtils.mode(cls) to
> >> define
> >> > > BinaryWriteMode in a proper way.
> >> > > Could you please review and comment my solution [2]?
> >> > >
> >> > > BTW, I have benchmarked my solution by GridMarshallerPerformanceTest
> >> and
> >> > it
> >> > > becomes faster up to 2 times (GridMarshaller).My JMH tests show that
> >> > > marshal faster up to 50% and unmarshal faster up to 100% on an
> >> > > Externalizable object.
> >> > >
> >> > > Also, I've filed an issue for Serializable interface support using
> >> > > BinaryMarshaller [3] as it has been described earlier.
> >> > >
> >> > > [1] https://issues.apache.org/jira/browse/IGNITE-2894
> >> > > [2] https://reviews.ignite.apache.org/ignite/review/IGNT-CR-278
> >> > > [3] https://issues.apache.org/jira/browse/IGNITE-6172
> >> > >
> >> > > 2017-08-21 20:43 GMT+03:

Re: Binary compatibility of persistent storage

2017-09-19 Thread Valentin Kulichenko
In my view, there are two different scenarios.

First - the user just upgrades the version (to get some bug fix, for example),
but does not intend to change anything in their project and/or use any new
features. In this case it should work transparently, and the cluster must be
able to work with the older format. Ideally, we should detect this
automatically, but if that's not possible we can introduce some kind of
'compatibility mode' enabled by a system property.

Second - the user upgrades to get new features that require a data format
change. In this case, I think it's OK to suggest using a conversion tool. Or
perhaps we can apply it automatically on node startup?

-Val

On Tue, Sep 19, 2017 at 6:38 AM, Yakov Zhdanov  wrote:

> >Any major change in data/index page format. E.g. this could happen once
> transactional SQL is ready.
>
> I would suggest we automatically disable this feature for databases created
> with older versions.
>
> --Yakov
>

