Re: Java 9 support

2018-01-10 Thread Антон Чураев
Andrey, do you mean https://issues.apache.org/jira/browse/IGNITE-4908?

2018-01-10 13:25 GMT+03:00 Andrey Kuznetsov :

> Thanks, Petr.
>
> I heard of some activity related to the performance consequences of
> ReentrantLocks in the IGNITE-6736 fix. So, I'd like to get reviewer feedback
> first.
>
> Andrey G., Vladimir O., is it possible to merge the fix to master?
>
> 2018-01-10 9:56 GMT+03:00 Petr Ivanov :
>
> > Andrey — double-checked your solution and it works now. I guess there was
> > some merge error the first time.
> > Sorry for misleading.
> >
> > So, there is a working solution for the Java 9 build, and I’d like to save this
> > configuration in ignite-6730 (making IGNITE-7144 and IGNITE-6736 subtasks
> > in the process).
> > What do you think?
> >
> >
> > > On 9 Jan 2018, at 20:49, Andrey Kuznetsov  wrote:
> > >
> > > Hi Petr!
> > >
> > > Could you please clarify what is wrong with the fix proposed in
> > > IGNITE-6736, and what is supposed to be the replacement for
> > > monitorEnter/monitorExit now?
> > >
> > > 2018-01-09 19:08 GMT+03:00 Petr Ivanov :
> > >
> > >> Hi all.
> > >>
> > >>
> > >> After some thorough research and with the help of fellow Igniters, I’ve
> > >> managed to prepare a more or less stable Java 9 build configuration of
> > >> Apache Ignite.
> > >>
> > >> Here are changes to make it work:
> > >> - Java 8 profiles and build process revision, made in IGNITE-7203;
> > >> - Java 9 maven profile prepared in IGNITE-7144 (will be moved to
> > >> IGNITE-6730 as subtask);
> > >> - specific maven-compiler-plugin configuration with JVM args for Java
> 9
> > >> profile (as was proposed by Vladimir Ozerov);
> > >> - maven-bundle-plugin version is updated to 3.5.0;
> > >> - maven-compiler-plugin version synchronised to 3.7.0 (in Cassandra
> > >> modules);
> > >> - Scala version updated to 2.12.4;
> > >> - disabled scalar-2.10, spark-2.10 and visor-console-2.10 modules (due to
> > >> a dependency on Scala 2.10, which is not supported by Java 9);
> > >> - sun.misc.JavaNioAccess import changed to jdk.internal.misc.JavaNioAccess
> > >> in GridUnsafe.java and PageMemoryImpl.java;
> > >> - sun.misc.SharedSecrets import changed to jdk.internal.misc.SharedSecrets
> > >> in GridUnsafe.java and PageMemoryImpl.java;
> > >> - bodies of the monitorEnter and monitorExit methods commented out (the fix
> > >> from IGNITE-6736 did not work).
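For context on that last item: the IGNITE-6736 fix under discussion replaces the removed Unsafe monitorEnter/monitorExit calls with ReentrantLocks. A minimal, illustrative sketch of that kind of replacement (not the actual patch; a real version would have to avoid leaking locks and match the semantics GridUnsafe relies on) might look like:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.locks.ReentrantLock;

/** Illustrative ReentrantLock-based stand-in for Unsafe.monitorEnter/monitorExit. */
final class ObjectLocks {
    private static final ConcurrentMap<Object, ReentrantLock> LOCKS = new ConcurrentHashMap<>();

    private ObjectLocks() {}

    /** Acquires the lock associated with the given object. */
    static void monitorEnter(Object obj) {
        LOCKS.computeIfAbsent(obj, o -> new ReentrantLock()).lock();
    }

    /** Releases the lock associated with the given object. */
    static void monitorExit(Object obj) {
        ReentrantLock lock = LOCKS.get(obj);

        if (lock != null)
            lock.unlock();
    }
}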
> > >>
> > >> I’d like to put these changes into ignite-6730 to have a working build
> > >> under the Java 9 branch — so that we can continue work on improving Apache
> > >> Ignite’s Java 9 support.
> > >
> > >
> > >
> > >
> > > --
> > > Best regards,
> > >  Andrey Kuznetsov.
> >
> >
>
>
> --
> Best regards,
>   Andrey Kuznetsov.
>



-- 

Best Regards, Anton Churaev


Re: Data compression in Ignite 2.0

2017-06-09 Thread Антон Чураев
It seems that Dmitry is referring to transparent data encryption. It is used
throughout the whole database industry.

2017-06-09 10:50 GMT+03:00 Vladimir Ozerov <voze...@gridgain.com>:

> Dima,
>
> Encryption of certain fields is as bad as compression. First, it is a huge
> change, which makes the already complex binary protocol even more complex.
> Second, it has to be ported to the CPP and .NET platforms, as well as to JDBC
> and ODBC.
> Last, but most important - it is not our headache to encrypt sensitive data.
> This is the user's responsibility. Nobody in their sane mind will store
> passwords in plain form. Instead, the user should encrypt them on his own,
> choosing proper encryption parameters - algorithms, key lengths, salts,
> etc. How are you going to expose this in an API or configuration?
>
> We should not implement data encryption on the binary level; this is out of
> the question. Encryption should be implemented on the application level (user
> efforts), the transport layer (SSL - we already have it), and possibly on the
> disk level (there are tools for this already).
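To illustrate the application-level option described above: the user encrypts a sensitive field before the object ever reaches Ignite, so the cache only stores ciphertext. A rough sketch (all names and parameter choices here are the user's own, not Ignite API; key management is omitted):

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;

/** User-side helper that encrypts a field value with AES-GCM before it is stored. */
final class FieldCrypto {
    private final SecretKeySpec key;
    private final SecureRandom random = new SecureRandom();

    FieldCrypto(byte[] rawKey) {
        key = new SecretKeySpec(rawKey, "AES"); // 16, 24 or 32 bytes
    }

    /** Returns Base64(IV || ciphertext); this string is what gets put into the cache. */
    String encrypt(String plain) throws Exception {
        byte[] iv = new byte[12];
        random.nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));

        byte[] ct = cipher.doFinal(plain.getBytes(StandardCharsets.UTF_8));

        return Base64.getEncoder().encodeToString(
            ByteBuffer.allocate(iv.length + ct.length).put(iv).put(ct).array());
    }
}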
>
>
> On Fri, Jun 9, 2017 at 9:06 AM, Vyacheslav Daradur <daradu...@gmail.com>
> wrote:
>
> > >> which is much less useful.
> > I note that in some cases the gain is more than a factor of two in the size
> > of an object.
> >
> > >> Would it be possible to change your implementation to handle the
> > encryption instead?
> > Yes, of course, there's not much difference between compression and
> > encryption, including in my implementation of per-field compression.
> >
> > 2017-06-09 8:55 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:
> >
> > > Vyacheslav,
> > >
> > > When this feature started out as data compression in Ignite, it sounded
> > > very useful. Now it is unfolding as a per-field compression, which is
> > much
> > > less useful. In fact, it is questionable whether it is useful at all.
> The
> > > fact that this feature is implemented does not make it mandatory for
> the
> > > community to accept it.
> > >
> > > However, as I mentioned before, per-field encryption is very useful, as
> > it
> > > would allow users to automatically encrypt certain sensitive fields, like
> > > passwords, credit card numbers, etc. There is not much conceptual
> > > difference between compressing a field vs encrypting a field. Would it
> be
> > > possible to change your implementation to handle the encryption
> instead?
> > >
> > > D.
> > >
> > > On Thu, Jun 8, 2017 at 10:42 PM, Vyacheslav Daradur <
> daradu...@gmail.com
> > >
> > > wrote:
> > >
> > > > Guys, I want to be clear:
> > > > * The "per-field compression" design is the result of research into the
> > > > binary infrastructure of Ignite and some of its other areas (querying,
> > > > indexing, etc.)
> > > > * Full compression of an object would be more effective, but in this case
> > > > querying and indexing are not possible (or there is a large overhead from
> > > > decompressing the full object (or cache pages) on demand)
> > > > * "Per-field compression" is one of the ways to implement the compression
> > > > feature
> > > >
> > > > I'm new to Ignite, so I may be mistaken in some things.
> > > > For the last 3-4 months I've tried to start a discussion about the design,
> > > > but nobody answered (except Dmitry and Valentin, who were interested in how
> > > > it works).
> > > > But I understand that this is a community and nobody is obliged to anybody.
> > > >
> > > > There are strong Ignite experts.
> > > > If they can help me and the community with a design for the compression
> > > > feature, it will be great.
> > > > At the moment I have the desire and time to work on the compression
> > > > feature in Ignite.
> > > > Let's use this opportunity :)
> > > >
> > > > 2017-06-09 5:36 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:
> > > >
> > > > > Igniters,
> > > > >
> > > > > I have never seen a single Ignite user asking about compressing a
> > > single
> > > > > field. However, we have had requests to secure certain fields, e.g.
> > > > > passwords.
> > > > >
> > > > > I personally do not think per-field compression is needed, u

Re: Data compression in Ignite 2.0

2017-06-08 Thread Антон Чураев
Guys, could you please help me.
I thought that if compressed data is stored in memory, it would also be
transmitted over the wire in compressed form. Is that right?

2017-06-08 13:30 GMT+03:00 Vyacheslav Daradur <daradu...@gmail.com>:

> Vladimir,
>
> The main problem which I'm trying to solve is storing data in memory in a
> compressed form via Ignite.
> The main goal is to use memory more effectively.
>
> >> here the much simpler step would be full
> compression on a per-cache basis rather than dealing with the per-field case.
>
> Please explain your idea. Compress data per memory page?
> Is it compatible with querying and indexing?
>
> >> In the end, if the user would like to compress a particular field, he can
> always do it on his own
> I don't think we should reason this way: if a user needs something, he tries
> to choose a tool which has this feature OOTB.
>
>
>
> 2017-06-08 12:53 GMT+03:00 Vladimir Ozerov <voze...@gridgain.com>:
>
> > Igniters,
> >
> > Honestly, I still do not see how to apply this feature gracefully to
> > Ignite. And the overall approach of compressing only particular fields looks
> > overcomplicated to me. Remember that our main use case is an application
> > without classes on the server. It means that any kind of annotation is
> > inapplicable. To be more precise: a proper API should be implemented to
> > handle the no-class case (e.g. how would one build such an object through
> > BinaryBuilder without a class?), and only then add annotations as a
> > convenient addition to the more basic API.
> >
> > It seems to me that a full implementation, which takes into account a proper
> > "classless" API, changes to binary metadata to reflect compressed fields,
> > changes to SQL, changes to the binary protocol, and porting to .NET and CPP,
> > will yield a very complex solution with little value to the product.
> >
> > Instead, as I proposed earlier, it seems that we'd better start with the
> > problem we are trying to solve. Basically, compression could help in two
> > cases:
> > 1) Transmitting data over the wire - it should be implemented on the
> > communication layer and should not affect the binary serialization component
> > a lot.
> > 2) Storing data in memory - here the much simpler step would be full
> > compression on a per-cache basis rather than dealing with the per-field case.
> >
> > In the end, if the user would like to compress a particular field, he can
> > always do it on his own and set the already-compressed field on our
> > BinaryObject.
> >
> > Vladimir.
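To make that last option concrete, here is a rough sketch (cache, type and field names are made up) of compressing a large field on the application side and setting the already-compressed bytes on a BinaryObject built through the binary builder:

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;
import org.apache.ignite.Ignite;
import org.apache.ignite.binary.BinaryObject;

class CompressedFieldExample {
    /** GZIPs a large text field; Ignite only ever sees a plain byte[] value. */
    static BinaryObject buildAudit(Ignite ignite, int id, String largeText) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();

        try (GZIPOutputStream gzip = new GZIPOutputStream(bos)) {
            gzip.write(largeText.getBytes(StandardCharsets.UTF_8));
        }

        return ignite.binary().builder("Audit")
            .setField("id", id)
            .setField("data", bos.toByteArray()) // already-compressed bytes
            .build();
    }
}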
> >
> >
> > On Thu, Jun 8, 2017 at 12:37 PM, Vyacheslav Daradur <daradu...@gmail.com
> >
> > wrote:
> >
> > > Valentin,
> > >
> > > Yes, I have the prototype [1][2].
> > >
> > > You can see an example of a Java class [3] that I used in my benchmark.
> > > For example:
> > > class Foo {
> > > @BinaryCompression
> > > String data;
> > > }
> > > If the user decides to store the object in compressed form, he can use
> > > the @BinaryCompression annotation as shown above.
> > > It means the annotated field 'data' will be compressed during marshalling.
> > >
> > > [1] https://github.com/apache/ignite/pull/1951
> > > [2] https://issues.apache.org/jira/browse/IGNITE-5226
> > > [3]
> > > https://github.com/daradurvs/ignite-compression/blob/
> > > master/src/main/java/ru/daradurvs/ignite/compression/
> model/Audit1F.java
> > >
> > >
> > >
> > > 2017-06-08 2:04 GMT+03:00 Valentin Kulichenko <
> > > valentin.kuliche...@gmail.com
> > > >:
> > >
> > > > Vyacheslav, Anton,
> > > >
> > > > Are there any ideas and/or prototypes for the API? Your design
> > > > suggestions seem to make sense, but I would like to see how all this will
> > > > look from the user's standpoint.
> > > >
> > > > -Val
> > > >
> > > > On Wed, Jun 7, 2017 at 1:06 AM, Антон Чураев <churaev...@gmail.com>
> > > wrote:
> > > >
> > > > > Vyacheslav, correct me if something wrong
> > > > >
> > > > > We could provide opportunity of choose between CPU usage and
> MEM/NET
> > > > usage
> > > > > for users by compression some attributes of stored objects.
> > > > > You have learned design, and it is possible to localize changes in
> > > > > marshalling without performance affect and current functionality.
> > >

Re: "Review workflow" changes to prevent "broken review" issues.

2017-06-07 Thread Антон Чураев
Igniters, Dmitry, I basically agree.

It's no secret that Ignite is a very complex project, and contributors need
a lot of experience to become a committer.
But this is a potential problem for scaling the community and project
development.

I think that one of the options for solving the growth problem could be to
decompose issues for new contributors. This will allow them to be included
in the project more quickly, and it will be more convenient for the
committers to review the code.

There are cons, but what do you think?

2017-06-05 19:58 GMT+03:00 Dmitry Pavlov :

> Hi Igniters, Anton,
>
> Let’s imagine the development process as a chain of production stages:
> 1) Developing a patch by a contributor
> 2) Reviewing the changes by a committer
>
> Reviews waiting too long to be done at stage 2 may indicate that the speed
> (potential throughput) of stage 2 is less than the throughput of stage 1
> (T2 < T1).
> In terms of this model (inspired by Goldratt’s Theory of Constraints
> (TOC)), I have a question:
> Will moving this responsibility (finding an appropriate reviewer) to the
> contributor help us to increase overall throughput?
>
> If we agree that the constraint in terms of TOC is throughput T2, I suggest
> the following steps:
> - Increase the throughput T2 of the committers
> - Reduce the load on the committers by improving the quality of code given to
> review after stage 1 (pre-review by non-committers, automatic review, code
> inspections)
>
> Best Regards,
> Dmitriy Pavlov
>
>
> Mon, 5 Jun 2017 at 18:28, Anton Vinogradov :
>
> > Igniters,
> >
> > Currently, according to
> >
> > https://cwiki.apache.org/confluence/display/IGNITE/How+
> to+Contribute#HowtoContribute-SubmittingforReview
> > ,
> > contributor can ask for review by moving ticket to PATCH AVAILABLE state.
> >
> > And, as far as I can see, this causes the broken-tickets issue.
> > A contributor can wait for somebody to review his changes for a month or
> > even more.
> >
> > I propose to change the workflow and *make the contributor responsible for
> > finding a reviewer*.
> > It's pretty easy to find a person able to review the changes in most cases.
> >
> > 1) You can check the git history of the files you modified and find persons
> > with expertise in this code
> > 2) In case you have problems with such a search, you can always use the
> > maintainers list (
> >
> > https://cwiki.apache.org/confluence/display/IGNITE/How+
> to+Contribute#HowtoContribute-ReviewProcessandMaintainers
> > )
> >
> > Thoughts?
> >
>



-- 

Best Regards, Anton Churaev


Re: Data compression in Ignite 2.0

2017-06-07 Thread Антон Чураев
Vyacheslav, correct me if something is wrong.

We could give users the opportunity to choose between CPU usage and MEM/NET
usage by compressing some attributes of stored objects.
You have studied the design, and it is possible to localize the changes in
marshalling without affecting performance or current functionality.

I think that it's useful for our project and users.
Community, what do you think about this proposal?


2017-06-06 17:29 GMT+03:00 Vyacheslav Daradur <daradu...@gmail.com>:

> In short,
>
> During marshalling, a field is represented as a BinaryFieldAccessor, which
> manages its marshalling. It checks whether the field is marked with the
> @BinaryCompression annotation; in that case the binary representation of the
> field (a byte array) will be compressed. It is marked as compressed with a
> type constant (GridBinaryMarshaller.COMPRESSED), and after this the compressed
> byte array is included in the binary representation of the whole object. Note
> that the header of the marshalled object is not compressed; compression
> affects only the representation of the object's fields.
>
> Objects in IgniteCache are represented as BinaryObject, which is a wrapper
> over the byte array of the marshalled object.
> BinaryObject provides some useful methods, which are used by Ignite
> subsystems.
> For example, queries use the BinaryObject#field method, which deserializes
> only one field of the object, without deserializing the whole object.
> If the BinaryObject#field method meets the compressed-type constant during
> deserialization, it decompresses the byte array and then continues
> unmarshalling as usual.
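For reference, reading a single field in binary form looks roughly like this (cache and field names are illustrative); this is the call path that, per the description above, would transparently decompress a compressed field:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.binary.BinaryObject;

class SingleFieldRead {
    /** Reads one field without deserializing the whole cached object. */
    static String readData(Ignite ignite, int key) {
        IgniteCache<Integer, BinaryObject> cache =
            ignite.cache("audit").<Integer, BinaryObject>withKeepBinary();

        BinaryObject bo = cache.get(key);

        return bo.field("data"); // only this field is unmarshalled
    }
}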
>
> Now, I have introduced the Compressor interface in IgniteConfiguration; it
> allows the user to plug in his own compressor implementation - that is the
> requirement in the task [1].
>
> As far as I know, Vladimir Ozerov doesn't like the idea of granting this
> opportunity to the user.
> In that case we can choose a compression algorithm which we will provide by
> default and move the interface to the internals of the binary infrastructure.
> For this case I've prepared benchmarks, which I sent earlier.
>
> I vote for the ZSTD algorithm [2]; it provides a good compression ratio and
> good throughput. It has implementations in Java, .NET and C++, and has an
> ASF-friendly license, so we can use it in all Ignite platforms.
> You can look at an assessment of this algorithm in my benchmarks.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-3592
> [2]https://github.com/facebook/zstd
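For what that might look like from Java, a rough sketch assuming the zstd-jni binding (the thread does not name a specific Java binding, so this choice is an assumption):

import java.nio.charset.StandardCharsets;
import com.github.luben.zstd.Zstd;

class ZstdFieldCodec {
    /** Compresses a field value with ZSTD. */
    static byte[] compress(String fieldValue) {
        return Zstd.compress(fieldValue.getBytes(StandardCharsets.UTF_8));
    }

    /** Decompresses; the original length must be known (e.g. stored alongside the bytes). */
    static String decompress(byte[] compressed, int originalLength) {
        return new String(Zstd.decompress(compressed, originalLength), StandardCharsets.UTF_8);
    }
}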
>
>
> 2017-06-06 16:02 GMT+03:00 Антон Чураев <churaev...@gmail.com>:
>
> > Looks good for me.
> >
> > Could You propose design of implementation in couple of sentences?
> > So that we can estimate the completeness and complexity of the proposal.
> >
> > 2017-06-06 15:26 GMT+03:00 Vyacheslav Daradur <daradu...@gmail.com>:
> >
> > > Anton,
> > >
> > > Of course, the solution does not affect on existing implementation. I
> > mean,
> > > there is no changes if user not use the annotation @BinaryCompression.
> > (no
> > > performance changes)
> > > Only if user make decision to use compression on specific field or
> fields
> > > of a class - in that case compression will be used at marshalling in
> > > relation to annotated fields.
> > >
> > > 2017-06-06 15:10 GMT+03:00 Антон Чураев <churaev...@gmail.com>:
> > >
> > > > Vyacheslav,
> > > >
> > > > Is it possible to propose implementation that can be switched on
> > > on-demand?
> > > > In this case it should not affect performance of current solution.
> > > >
> > > > I mean, that users should make decision what is more important for
> > them:
> > > > throutput or memory/net usage.
> > > > May be they will be choose not all objects, or only some attributes
> of
> > > > objects for compress.
> > > >
> > > > 2017-06-06 14:48 GMT+03:00 Vyacheslav Daradur <daradu...@gmail.com>:
> > > >
> > > > > Conclusion:
> > > > > Provided solution allows reduce size of an object in IgniteCache at
> > the
> > > > > cost of throughput reduction (small - in some cases), it depends on
> > > part
> > > > of
> > > > > object which will be compressed and compression algorithm.
> > > > > I mean, we can make more effective use of memory, and in some cases
> > it
> > > > can
> > > > > reduce loading of the interconnect. (replication, rebalancing)
> > > > >
> > > > > Especially, it will be particularly useful for object's fields
> which
> > > are
> 

Re: Data compression in Ignite 2.0

2017-06-06 Thread Антон Чураев
Looks good to me.

Could you propose a design for the implementation in a couple of sentences,
so that we can estimate the completeness and complexity of the proposal?

2017-06-06 15:26 GMT+03:00 Vyacheslav Daradur <daradu...@gmail.com>:

> Anton,
>
> Of course, the solution does not affect the existing implementation. I mean,
> there are no changes if the user does not use the @BinaryCompression
> annotation (no performance changes).
> Only if the user decides to use compression on a specific field or fields
> of a class - in that case compression will be applied during marshalling to
> the annotated fields.
>
> 2017-06-06 15:10 GMT+03:00 Антон Чураев <churaev...@gmail.com>:
>
> > Vyacheslav,
> >
> > Is it possible to propose implementation that can be switched on
> on-demand?
> > In this case it should not affect performance of current solution.
> >
> > I mean, that users should make decision what is more important for them:
> > throutput or memory/net usage.
> > May be they will be choose not all objects, or only some attributes of
> > objects for compress.
> >
> > 2017-06-06 14:48 GMT+03:00 Vyacheslav Daradur <daradu...@gmail.com>:
> >
> > > Conclusion:
> > > Provided solution allows reduce size of an object in IgniteCache at the
> > > cost of throughput reduction (small - in some cases), it depends on
> part
> > of
> > > object which will be compressed and compression algorithm.
> > > I mean, we can make more effective use of memory, and in some cases it
> > can
> > > reduce loading of the interconnect. (replication, rebalancing)
> > >
> > > Especially, it will be particularly useful for object's fields which
> are
> > > large text (>~ 250 bytes) and can be effectively compressed.
> > >
> > > 2017-06-06 12:00 GMT+03:00 Антон Чураев <churaev...@gmail.com>:
> > >
> > > > Vyacheslav, thank you! But could you please provide a conclusions or
> > > > proposals based on this benchmarks?
> > > >
> > > > 2017-06-06 11:28 GMT+03:00 Vyacheslav Daradur <daradu...@gmail.com>:
> > > >
> > > > > Dmitry,
> > > > >
> > > > > Excel-pages:
> > > > >
> > > > > 1). "Compression ratio (2)" - shows object size, with compression
> and
> > > > > without compression. (Conditions: literal text)
> > > > > 1st graph shows compression ratios of using different compression
> > > > algrithms
> > > > > depending on size of compressed field.
> > > > > 2nd graph shows evaluation of size of objects depending on sizes
> and
> > > > > compression algorithms.
> > > > >
> > > > > 2). "Compression ratio (1)" - shows object size, with compression
> and
> > > > > without compression. (Conditions:  badly compressed character
> > sequence)
> > > > > 1st graph shows compression ratios of using different compression
> > > > > algrithms depending on size of compressed field.
> > > > > 2nd graph shows evaluation of size of objects depending on sizes
> and
> > > > > compression algorithms.
> > > > >
> > > > > 3) 'put-avg" - shows average time of the "put" operation depending
> on
> > > > size
> > > > > and compression algorithms.
> > > > >
> > > > > 4) 'put-thrpt" - shows throughput of the "put" operation depending
> on
> > > > size
> > > > > and compression algorithms.
> > > > >
> > > > > 5) 'get-avg" - shows average time of the "get" operation depending
> on
> > > > size
> > > > > and compression algorithms.
> > > > >
> > > > > 6) 'get-thrpt" - shows throughput of the "get" operation depending
> on
> > > > size
> > > > > and compression algorithms.
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > 2017-06-06 10:59 GMT+03:00 Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >:
> > > > >
> > > > > > Vladimir, I am not sure how to interpret the graphs? What are we
> > > > looking
> > > > > > at?
> > > > > >
> > > > > > On Tue, Jun 6, 2017 at 12:33 AM, Vyacheslav Daradur <
> > > > daradu...@gmail.com
> > > > > >
> > > > > &

Re: Data compression in Ignite 2.0

2017-06-06 Thread Антон Чураев
Vyacheslav,

Is it possible to propose an implementation that can be switched on on demand?
In this case it would not affect the performance of the current solution.

I mean that users should decide what is more important for them:
throughput or memory/net usage.
Maybe they will choose not all objects, but only some attributes of
objects to compress.

2017-06-06 14:48 GMT+03:00 Vyacheslav Daradur <daradu...@gmail.com>:

> Conclusion:
> The provided solution allows reducing the size of an object in IgniteCache at
> the cost of a throughput reduction (small in some cases); it depends on the
> part of the object which is compressed and on the compression algorithm.
> I mean, we can make more effective use of memory, and in some cases it can
> reduce the load on the interconnect (replication, rebalancing).
>
> It will be particularly useful for object fields which are
> large text (>~ 250 bytes) and can be effectively compressed.
>
> 2017-06-06 12:00 GMT+03:00 Антон Чураев <churaev...@gmail.com>:
>
> > Vyacheslav, thank you! But could you please provide a conclusions or
> > proposals based on this benchmarks?
> >
> > 2017-06-06 11:28 GMT+03:00 Vyacheslav Daradur <daradu...@gmail.com>:
> >
> > > Dmitry,
> > >
> > > Excel-pages:
> > >
> > > 1). "Compression ratio (2)" - shows object size, with compression and
> > > without compression. (Conditions: literal text)
> > > 1st graph shows compression ratios of using different compression
> > algrithms
> > > depending on size of compressed field.
> > > 2nd graph shows evaluation of size of objects depending on sizes and
> > > compression algorithms.
> > >
> > > 2). "Compression ratio (1)" - shows object size, with compression and
> > > without compression. (Conditions:  badly compressed character sequence)
> > > 1st graph shows compression ratios of using different compression
> > > algrithms depending on size of compressed field.
> > > 2nd graph shows evaluation of size of objects depending on sizes and
> > > compression algorithms.
> > >
> > > 3) 'put-avg" - shows average time of the "put" operation depending on
> > size
> > > and compression algorithms.
> > >
> > > 4) 'put-thrpt" - shows throughput of the "put" operation depending on
> > size
> > > and compression algorithms.
> > >
> > > 5) 'get-avg" - shows average time of the "get" operation depending on
> > size
> > > and compression algorithms.
> > >
> > > 6) 'get-thrpt" - shows throughput of the "get" operation depending on
> > size
> > > and compression algorithms.
> > >
> > >
> > >
> > >
> > > 2017-06-06 10:59 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:
> > >
> > > > Vladimir, I am not sure how to interpret the graphs? What are we
> > looking
> > > > at?
> > > >
> > > > On Tue, Jun 6, 2017 at 12:33 AM, Vyacheslav Daradur <
> > daradu...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Hi, Igniters.
> > > > >
> > > > > I've prepared some benchmarking. Results [1].
> > > > >
> > > > > And I've prepared the evaluation in the form of diagrams [2].
> > > > >
> > > > > I hope that helps to interest the community and accelerates a
> > reaction
> > > to
> > > > > this improvment :)
> > > > >
> > > > > [1]
> > > > > https://github.com/daradurvs/ignite-compression/tree/
> > > > > master/src/main/resources/result
> > > > > [2] https://drive.google.com/file/d/0B2CeUAOgrHkoMklyZ25YTEdKcEk/
> > view
> > > > >
> > > > >
> > > > >
> > > > > 2017-05-24 9:49 GMT+03:00 Vyacheslav Daradur <daradu...@gmail.com
> >:
> > > > >
> > > > > > Guys, any thoughts?
> > > > > >
> > > > > > 2017-05-16 13:40 GMT+03:00 Vyacheslav Daradur <
> daradu...@gmail.com
> > >:
> > > > > >
> > > > > >> Hi guys,
> > > > > >>
> > > > > >> I've prepared the PR to show my idea.
> > > > > >> https://github.com/apache/ignite/pull/1951/files
> > > > > >>
> > > > > >> About querying - I've just copied existing tests and have
>

Re: Data compression in Ignite 2.0

2017-06-06 Thread Антон Чураев
Vyacheslav, thank you! But could you please provide conclusions or
proposals based on these benchmarks?

2017-06-06 11:28 GMT+03:00 Vyacheslav Daradur :

> Dmitry,
>
> Excel-pages:
>
> 1) "Compression ratio (2)" - shows object size with and without compression
> (conditions: literal text).
> The 1st graph shows the compression ratios of different compression algorithms
> depending on the size of the compressed field.
> The 2nd graph shows how object size changes depending on size and
> compression algorithm.
>
> 2) "Compression ratio (1)" - shows object size with and without compression
> (conditions: a poorly compressible character sequence).
> The 1st graph shows the compression ratios of different compression
> algorithms depending on the size of the compressed field.
> The 2nd graph shows how object size changes depending on size and
> compression algorithm.
>
> 3) "put-avg" - shows the average time of the "put" operation depending on size
> and compression algorithm.
>
> 4) "put-thrpt" - shows the throughput of the "put" operation depending on size
> and compression algorithm.
>
> 5) "get-avg" - shows the average time of the "get" operation depending on size
> and compression algorithm.
>
> 6) "get-thrpt" - shows the throughput of the "get" operation depending on size
> and compression algorithm.
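For readers who have not opened the spreadsheet: the put/get pages report average time and throughput of cache operations for different payload sizes and compression algorithms. A minimal sketch of how such a benchmark might be structured with JMH (the tooling and all names here are assumptions, not the author's actual benchmark code):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.TearDown;

@State(Scope.Benchmark)
public class CachePutGetBenchmark {
    @Param({"250", "1000", "4000"}) // payload size, mirroring the "depending on size" axis
    public int size;

    public Ignite ignite;
    public IgniteCache<Integer, byte[]> cache;
    public byte[] payload;
    public int key;

    @Setup
    public void setup() {
        ignite = Ignition.start();
        cache = ignite.getOrCreateCache("bench");
        payload = new byte[size];

        // Preload so that get() always hits an existing entry.
        for (int i = 0; i < 10_000; i++)
            cache.put(i, payload);
    }

    @TearDown
    public void tearDown() {
        ignite.close();
    }

    @Benchmark
    public void put() {
        cache.put(key++ % 10_000, payload);
    }

    @Benchmark
    public byte[] get() {
        return cache.get(key++ % 10_000);
    }
}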
>
>
>
>
> 2017-06-06 10:59 GMT+03:00 Dmitriy Setrakyan :
>
> > Vladimir, I am not sure how to interpret the graphs? What are we looking
> > at?
> >
> > On Tue, Jun 6, 2017 at 12:33 AM, Vyacheslav Daradur  >
> > wrote:
> >
> > > Hi, Igniters.
> > >
> > > I've prepared some benchmarking. Results [1].
> > >
> > > And I've prepared the evaluation in the form of diagrams [2].
> > >
> > > I hope that helps to interest the community and accelerates a reaction to
> > > this improvement :)
> > >
> > > [1]
> > > https://github.com/daradurvs/ignite-compression/tree/
> > > master/src/main/resources/result
> > > [2] https://drive.google.com/file/d/0B2CeUAOgrHkoMklyZ25YTEdKcEk/view
> > >
> > >
> > >
> > > 2017-05-24 9:49 GMT+03:00 Vyacheslav Daradur :
> > >
> > > > Guys, any thoughts?
> > > >
> > > > 2017-05-16 13:40 GMT+03:00 Vyacheslav Daradur :
> > > >
> > > >> Hi guys,
> > > >>
> > > >> I've prepared the PR to show my idea.
> > > >> https://github.com/apache/ignite/pull/1951/files
> > > >>
> > > >> About querying - I've just copied existing tests and have annotated
> > the
> > > >> testing data.
> > > >> https://github.com/apache/ignite/pull/1951/files#diff-c19a9d
> > > >> f4058141d059bb577e75244764
> > > >>
> > > >> It means fields which will be marked by @BinaryCompression will be
> > > >> compressed at marshalling via BinaryMarshaller.
> > > >>
> > > >> This solution has no effect on existing data or project
> architecture.
> > > >>
> > > >> I'll be glad to see your thougths.
> > > >>
> > > >>
> > > >> 2017-05-15 19:18 GMT+03:00 Vyacheslav Daradur  >:
> > > >>
> > > >>> Dmitriy,
> > > >>>
> > > >>> I have ready prototype. I want to show it.
> > > >>> It is always easier to discuss on example.
> > > >>>
> > > >>> 2017-05-15 19:02 GMT+03:00 Dmitriy Setrakyan <
> dsetrak...@apache.org
> > >:
> > > >>>
> > >  Vyacheslav,
> > > 
> > >  I think it is a bit premature to provide a PR without getting a
> > >  community
> > >  consensus on the dev list. Please allow some time for the
> community
> > to
> > >  respond.
> > > 
> > >  D.
> > > 
> > >  On Mon, May 15, 2017 at 6:36 AM, Vyacheslav Daradur <
> > >  daradu...@gmail.com>
> > >  wrote:
> > > 
> > >  > I created the ticket: https://issues.apache.org/jira
> > >  /browse/IGNITE-5226
> > >  >
> > >  > I'll prepare a PR with described solution in couple of days.
> > >  >
> > >  > 2017-05-15 15:05 GMT+03:00 Vyacheslav Daradur <
> > daradu...@gmail.com
> > > >:
> > >  >
> > >  > > Hi, Igniters!
> > >  > >
> > >  > > Apache 2.0 is released.
> > >  > >
> > >  > > Let's continue the discussion about a compression design.
> > >  > >
> > >  > > At the moment, I found only one solution which is compatible
> > with
> > >  > querying
> > >  > > and indexing, this is per-objects-field compression.
> > >  > > Per-fields compression means that metadata (a header) of an
> > object
> > >  won't
> > >  > > be compressed, only serialized values of an object fields (in
> > > bytes
> > >  array
> > >  > > form) will be compressed.
> > >  > >
> > >  > > This solution have some contentious issues:
> > >  > > - small values, like primitives and short arrays - there isn't
> > >  sense to
> > >  > > compress them;
> > >  > > - there is no possible to use compression with java-predefined
> > >  types;
> > >  > >
> > >  > > We can provide an annotation, @IgniteCompression - for
> example,
> > >  

Re: one point optimisation

2017-04-05 Thread Антон Чураев
Maybe it would be useful to update the documentation?

2017-04-05 15:15 GMT+03:00 ALEKSEY KUZNETSOV :

> Thank you for help!
>
> Wed, 5 Apr 2017 at 15:14, Alexey Goncharuk :
>
> > This optimization does not work when near cache is enabled because we
> need
> > the same ordering on near nodes. You should see the expected number of
> > messages with near cache disabled.
> >
> > 2017-04-05 15:09 GMT+03:00 ALEKSEY KUZNETSOV :
> >
> > > yes
> > >
> > > Wed, 5 Apr 2017 at 15:07, Alexey Goncharuk <
> alexey.goncha...@gmail.com
> > >:
> > >
> > > > Do you have a near cache enabled?
> > > >
> > > > 2017-04-05 15:00 GMT+03:00 ALEKSEY KUZNETSOV <
> alkuznetsov...@gmail.com
> > >:
> > > >
> > > > > The test shows as follows:
> > > > > assertMessageCount(GridNearTxPrepareRequest.class, 1);
> > > > > assertMessageCount(GridDhtTxPrepareRequest.class, 1);
> > > > > assertMessageCount(GridDhtTxPrepareResponse.class, 1);
> > > > > assertMessageCount(GridNearTxPrepareResponse.class,
> 1);
> > > > > assertMessageCount(GridNearTxFinishRequest.class, 1);
> > > > > assertMessageCount(GridDhtTxFinishRequest.class, 0);
> > > > > assertMessageCount(GridNearTxFinishResponse.class, 1);
> > > > >
> > > > > Wed, 5 Apr 2017 at 14:53, Alexey Goncharuk <
> > > alexey.goncha...@gmail.com
> > > > >:
> > > > >
> > > > > > Aleksey,
> > > > > >
> > > > > > Can you elaborate on which of the extra messages you observe?
> > > > > >
> > > > > > --AG
> > > > > >
> > > > > > 2017-04-04 14:17 GMT+03:00 ALEKSEY KUZNETSOV <
> > > alkuznetsov...@gmail.com
> > > > >:
> > > > > >
> > > > > > > any thoughts on one phase commit realization ?
> > > > > > >
> > > > > > > пн, 3 апр. 2017 г. в 19:35, ALEKSEY KUZNETSOV <
> > > > > alkuznetsov...@gmail.com
> > > > > > >:
> > > > > > >
> > > > > > > > I've attached a test that prints the message exchange, which
> > > > > > > > shows us that there are more messages than you declared in the
> > > > > > > > article. Perhaps the implementation has changed.
> > > > > > > > I created it on the basis of IgniteOnePhaseCommitNearSelfTest
> > > > > > > >
> > > > > > > > пн, 3 апр. 2017 г. в 19:03, Dmitriy Setrakyan <
> > > > dsetrak...@apache.org
> > > > > >:
> > > > > > > >
> > > > > > > > Aleksey,
> > > > > > > >
> > > > > > > > The blog describes the 1-phase commit at a high level, but I
> am
> > > > still
> > > > > > > > curious about the differences you found. Can you share them
> > here?
> > > > > > > >
> > > > > > > > D.
> > > > > > > >
> > > > > > > > On Mon, Apr 3, 2017 at 2:11 AM, ALEKSEY KUZNETSOV <
> > > > > > > > alkuznetsov...@gmail.com>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Regarding the IgniteOnePhaseCommitNearSelfTest test: Ignite's
> > > > > > > > > one-phase optimisation does not work as you said.
> > > > > > > > > I attached a picture of the message exchange. A partial prepare
> > > > > > > > > phase exists, along with a finish phase.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > пн, 3 апр. 2017 г. в 10:55, Christos Erotocritou <
> > > > > > > chris...@gridgain.com
> > > > > > > > >:
> > > > > > > > >
> > > > > > > > >> As far as I know a partition is always allocated to a specific
> > > > > > > > >> node and does not span nodes. Ignite has 1024 partitions by
> > > > > > > > >> default on start, and they are split equally across nodes.
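A quick way to see this mapping from code: the Affinity API reports, for any key, its partition and the node that currently owns that partition (the cache name below is illustrative):

import org.apache.ignite.Ignite;
import org.apache.ignite.cache.affinity.Affinity;
import org.apache.ignite.cluster.ClusterNode;

class AffinityInspect {
    /** Prints the partition a key maps to and the primary node owning that partition. */
    static void print(Ignite ignite, Object key) {
        Affinity<Object> aff = ignite.affinity("someCache");

        int part = aff.partition(key);               // partition the key belongs to
        ClusterNode primary = aff.mapKeyToNode(key); // node currently owning that partition

        System.out.println("key=" + key + ", partition=" + part + ", primary=" + primary.id());
    }
}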
> > > > > > > > >>
> > > > > > > > >> > On 3 Apr 2017, at 08:10, ALEKSEY KUZNETSOV <
> > > > > > > alkuznetsov...@gmail.com>
> > > > > > > > >> wrote:
> > > > > > > > >> >
> > > > > > > > >> > In your blog you wrote that belonging to the same partition
> > > > > > > > >> > is necessary for 1-phase commit. But it does not guarantee
> > > > > > > > >> > belonging to the same node. A partition may span many nodes.
> > > > > > > > >> >
> > > > > > > > >> > вс, 2 Апр 2017 г., 13:46 ALEKSEY KUZNETSOV <
> > > > > > > alkuznetsov...@gmail.com
> > > > > > > > >:
> > > > > > > > >> >
> > > > > > > > >> >> thank u !
> > > > > > > > >> >>
> > > > > > > > >> >> пт, 31 Мар 2017 г., 21:06 Denis Magda <
> dma...@apache.org
> > >:
> > > > > > > > >> >>
> > > > > > > > >> >> Here is a good blog post about 1phase commit impl in
> > Ignite
> > > > and
> > > > > > its
> > > > > > > > >> >> advantages:
> > > > > > > > >> >>
> > > > > > > > >> >> http://gridgain.blogspot.com/
> > > 2014/09/one-phase-commit-fast-
> > > > > > > > >> transactions-for.html
> > > > > > > > >> >> <
> > > > > > > > >> >> http://gridgain.blogspot.com/
> > > 2014/09/one-phase-commit-fast-
> > > > > > > > >> transactions-for.html
> > > > > > > > >> >>>
> > > > > > > > >> >>
> > > > > > > > >> >> Took a reference to it from there:
> > > > > > > > >> >>
> > > > > > > > >> >>
> > 

Re: Ignite-4795 - ready for review

2017-03-20 Thread Антон Чураев
Dmitry, thank you!

Could you please also change the issue status to "Patch available".

2017-03-20 16:01 GMT+03:00 Дмитрий Рябов :

> Hello, community. Please, review and/or suggest something about javadocs of
> transactions.
>
> PR: https://github.com/apache/ignite/pull/1630/files
>
> JIRA: https://issues.apache.org/jira/browse/IGNITE-4795
>



-- 
Best regards,
Anton Churaev


Re: [VOTE] Apache Ignite 1.9.0 RC1

2017-02-28 Thread Антон Чураев
+1

2017-03-01 10:28 GMT+03:00 Anton Vinogradov :

> Dear Sirs!
>
> We have uploaded the 1.9.0 release candidate to
> https://dist.apache.org/repos/dist/dev/ignite/1.9.0-rc1/
>
> Git tag name is
> 1.9.0-rc1
>
> This release includes the following changes:
>
> Ignite:
> * Added Data streamer mode for DML
> * Added Discovery SPI Implementation for Ignite Kubernetes Pods
> * SQL: Query can utilize multiple threads
> * SQL: Improved distributed SQL support
> * Benchmarking simplified and automated
> * Fixed licenses generation during build
> * ignite-spark module upgraded to Spark 2.0
>
> Ignite.NET:
> * DML support
> * TransactionScope API for Ignite transactions support
>
> Ignite CPP:
> * DML support
> * Implemented LoadCache
> * ContinuousQuery support
>
> Complete list of closed issues:
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20IGNITE%20AND%
> 20fixVersion%20%3D%201.9%20AND%20(status%20%3D%
> 20closed%20or%20status%20%3D%20resolved)
>
> DEVNOTES
> https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=blob_
> plain;f=DEVNOTES.txt;hb=refs/tags/1.9.0-rc1
>
> RELEASENOTES
> https://git-wip-us.apache.org/repos/asf?p=ignite.git;a=blob_
> plain;f=RELEASE_NOTES.txt;hb=refs/tags/1.9.0-rc1
>
> Please start voting.
>
> +1 - to accept Apache Ignite 1.9.0-rc1
> 0 - don't care either way
> -1 - DO NOT accept Apache Ignite 1.9.0-rc1 (explain why)
>
> This vote will go for 72 hours.
>



-- 
Best regards,
Anton Churaev


Re: affinityCall in one distributed transaction

2016-12-13 Thread Антон Чураев
Using JTA with the current Ignite implementation is possible. But it is
expensive, because currently Ignite does not support a distributed
transaction context across the whole grid.

I think it would be right to divide the task into two:
1) Add support for switching the transactional context between multiple threads
within a single JVM instance;
2) Use distributed memory for keeping the transaction context.

In my opinion, the first one is not so difficult to implement.
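A sketch of what item 1 could look like from the user's point of view. Note that the suspend()/resume() calls here are assumptions used to illustrate the proposal, not API Ignite offered at the time:

import java.util.concurrent.ExecutorService;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;

class CrossThreadTx {
    /** Hands a transaction started in one thread over to another thread in the same JVM. */
    static void transfer(Ignite ignite, IgniteCache<Integer, String> cache,
        ExecutorService executor) throws Exception {
        Transaction tx = ignite.transactions().txStart();

        cache.put(1, "step-1");
        tx.suspend();            // assumed: detach the tx from the current thread

        executor.submit(() -> {
            tx.resume();         // assumed: re-attach it in the worker thread
            cache.put(2, "step-2");
            tx.commit();
        }).get();
    }
}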

2016-12-13 1:29 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:

> Anton,
>
> Looking at this sequence, I don't see any way of achieving it other than
> enrolling all transactions into one JTA transaction. If you see another
> way, can you please suggest it here?
>
> D.
>
> On Sat, Dec 10, 2016 at 2:07 PM, Антон Чураев <churaev...@gmail.com>
> wrote:
>
> > Dmitriy, it's ok
> >
> > To be abstract simple business transaction for execution payment
> > (preparation done before)  from the card looks like:
> > 1) Create a payment document (cache API);
> > 2) Write-off funds from the payer's card;
> > 2.1) Change in register #1 (cache API);
> > 2.2) Change in register #2 (cache API);
> > 2.3) Change in register #... (cache API);
> > 2.4) Change limits of card (cache API);
> > 3) Payment to service provider;
> > 3.1) Change in the balance of business unit having the contract with
> payer
> > (cache API);
> > 3.2) Change in the balance of business unit having the contract with
> > provider (cache API);
> > 3.3) Change in the balance of the account (cache API);
> > 3.4.1) Some change in registers... (cache API);
> > 3.4.2) ...;
> > 3.3) ...
> > 3.4) Invoke provider's API for billing payment of payer;
> > 4) Formation of financial statements (it's possible to implement
> off-line -
> > in other transational)
> > 4.1) ...
> > 4.2) ...
> >
> > And all steps have been designed into separate microservices. And, of
> > course, it have been designed via asynchronous transport.
> > On the other hand it seems to be that implementation of coordination of
> > 10-20 local transactions is not a good idea
> >
> > 2016-12-10 23:30 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:
> >
> > > Anton,
> > >
> > > Thanks for the explanation. I am sorry to keep asking questions on
> this.
> > > Can you change your example to include concrete Ignite calls on Compute
> > or
> > > Cache APIs (or other APIs)? I am still struggling to understand the
> > > boundaries between business and Ignite logic.
> > >
> > > D.
> > >
> > > On Sat, Dec 10, 2016 at 5:46 AM, Антон Чураев <churaev...@gmail.com>
> > > wrote:
> > >
> > > > For example:
> > > > 1) Front-end sends a request to perform a complex transaction.
> > > > 2) Some application (like a business transactional coordinator)
> > receives
> > > > message via asynchronous transport. This application implements logic
> > of
> > > > calling different services sequentially or in parallel via
> asynchronous
> > > > transport.
> > > > 3) Each service implement some little changes in a cache.
> > > >
> > > > Or:
> > > > 1) Front-end sends a request to perform a complex transaction.
> > > > 2) This transaction is implemented in microservice architecture
> (large
> > > > number microservices + asynchronous transport).
> > > > 3) Each microservice implement some little changes in a cache.
> > > >
> > > > I think it is possible to implement distributed transactional using
> XA
> > > > coordinator outside Ignite and local transaction in each service. But
> > its
> > > > cost may be unacceptable especially in the case of using a large
> number
> > > of
> > > > services.
> > > >
> > > > I think distributed transaction inside Ignite could be useful also
> for
> > > > using multiple ComputeTask in one transaction.
> > > >
> > > > 2016-12-09 21:45 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org
> >:
> > > >
> > > > > Sounds like you need a centralized JTA server for this type of
> > purpose,
> > > > no?
> > > > > In that case, Ignite transactions can already merge into ongoing
> JTA
> > > > > transactions.
> > > > >
> > > > > I would prefer to see a distributed flow of events to fully
> > understand
> > > > the
> > > > > issue. For ex

Re: affinityCall in one distributed transaction

2016-12-10 Thread Антон Чураев
Dmitriy, it's OK.

To keep it abstract, a simple business transaction for executing a payment
from a card (preparation done beforehand) looks like this:
1) Create a payment document (cache API);
2) Write-off funds from the payer's card;
2.1) Change in register #1 (cache API);
2.2) Change in register #2 (cache API);
2.3) Change in register #... (cache API);
2.4) Change limits of card (cache API);
3) Payment to service provider;
3.1) Change in the balance of business unit having the contract with payer
(cache API);
3.2) Change in the balance of business unit having the contract with
provider (cache API);
3.3) Change in the balance of the account (cache API);
3.4.1) Some change in registers... (cache API);
3.4.2) ...;
3.3) ...
3.4) Invoke provider's API for billing payment of payer;
4) Formation of financial statements (it's possible to implement off-line -
in other transational)
4.1) ...
4.2) ...

And all steps have been designed as separate microservices. And, of
course, they communicate via asynchronous transport.
On the other hand, it seems that coordinating 10-20 local
transactions is not a good idea.
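In Ignite-API terms, the hope is that the whole sequence above could run as one transaction spanning those services. A simplified single-JVM sketch of the cache-API steps sharing one transaction (all cache names and keys are made up; the caches are assumed to be TRANSACTIONAL):

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.transactions.Transaction;
import org.apache.ignite.transactions.TransactionConcurrency;
import org.apache.ignite.transactions.TransactionIsolation;

class PaymentTx {
    /** Executes several of the payment-flow cache updates atomically. */
    static void pay(Ignite ignite, long paymentId, long cardId, double amount) {
        IgniteCache<Long, String> documents = ignite.cache("paymentDocuments");
        IgniteCache<Long, Double> cardLimits = ignite.cache("cardLimits");
        IgniteCache<Long, Double> balances = ignite.cache("accountBalances");

        try (Transaction tx = ignite.transactions().txStart(
            TransactionConcurrency.PESSIMISTIC, TransactionIsolation.REPEATABLE_READ)) {
            documents.put(paymentId, "payment document " + paymentId); // step 1
            cardLimits.put(cardId, cardLimits.get(cardId) - amount);   // step 2.4
            balances.put(cardId, balances.get(cardId) - amount);       // step 3.x

            tx.commit(); // all or nothing
        }
    }
}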

2016-12-10 23:30 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:

> Anton,
>
> Thanks for the explanation. I am sorry to keep asking questions on this.
> Can you change your example to include concrete Ignite calls on Compute or
> Cache APIs (or other APIs)? I am still struggling to understand the
> boundaries between business and Ignite logic.
>
> D.
>
> On Sat, Dec 10, 2016 at 5:46 AM, Антон Чураев <churaev...@gmail.com>
> wrote:
>
> > For example:
> > 1) Front-end sends a request to perform a complex transaction.
> > 2) Some application (like a business transactional coordinator) receives
> > message via asynchronous transport. This application implements logic of
> > calling different services sequentially or in parallel via asynchronous
> > transport.
> > 3) Each service implement some little changes in a cache.
> >
> > Or:
> > 1) Front-end sends a request to perform a complex transaction.
> > 2) This transaction is implemented in microservice architecture (large
> > number microservices + asynchronous transport).
> > 3) Each microservice implement some little changes in a cache.
> >
> > I think it is possible to implement distributed transactional using XA
> > coordinator outside Ignite and local transaction in each service. But its
> > cost may be unacceptable especially in the case of using a large number
> of
> > services.
> >
> > I think distributed transaction inside Ignite could be useful also for
> > using multiple ComputeTask in one transaction.
> >
> > 2016-12-09 21:45 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:
> >
> > > Sounds like you need a centralized JTA server for this type of purpose,
> > no?
> > > In that case, Ignite transactions can already merge into ongoing JTA
> > > transactions.
> > >
> > > I would prefer to see a distributed flow of events to fully understand
> > the
> > > issue. For example,
> > >
> > > Client
> > >   - start transaction
> > >   - send compute
> > >
> > > Server
> > >   - receive compute
> > >   - execute ...
> > >   - execute ...
> > >
> > > etc.
> > >
> > > D.
> > >
> > > On Fri, Dec 9, 2016 at 1:26 AM, Антон Чураев <churaev...@gmail.com>
> > wrote:
> > >
> > > > In some cases it is necessary to implement a transaction processing
> > logic
> > > > in several different application servers. In this case, working with
> > > Ignite
> > > > cache will be performed within the various applications. But all
> these
> > > > changes must be made within the same distributed transaction.
> > > >
> > > > In my opinion this will require context transfer between the threads
> > > within
> > > > a single node or multiple Ignite nodes.
> > > >
> > > > 2016-12-08 12:53 GMT+03:00 Alexei Scherbakov <
> > > alexey.scherbak...@gmail.com
> > > > >:
> > > >
> > > > > Hi.
> > > > >
> > > > > It's unclear from your description what are you trying to achieve.
> > > > >
> > > > > AffinityCall is unicast and wil be send to single node.
> > > > >
> > > > > To parallelise task among the cluster I would recommend to use
> > compute
> > > > task
> > > > > API [1]
> > > > >
> > > > > But the task execution is

Re: affinityCall in one distributed transaction

2016-12-10 Thread Антон Чураев
For example:
1) Front-end sends a request to perform a complex transaction.
2) Some application (like a business transaction coordinator) receives the
message via asynchronous transport. This application implements the logic of
calling different services sequentially or in parallel via asynchronous
transport.
3) Each service makes some small changes in a cache.

Or:
1) Front-end sends a request to perform a complex transaction.
2) This transaction is implemented in a microservice architecture (a large
number of microservices + asynchronous transport).
3) Each microservice makes some small changes in a cache.

I think it is possible to implement distributed transactions using an XA
coordinator outside Ignite and a local transaction in each service. But its
cost may be unacceptable, especially when a large number of services is
involved.

I think a distributed transaction inside Ignite could also be useful for
using multiple ComputeTasks in one transaction.

2016-12-09 21:45 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:

> Sounds like you need a centralized JTA server for this type of purpose, no?
> In that case, Ignite transactions can already merge into ongoing JTA
> transactions.
>
> I would prefer to see a distributed flow of events to fully understand the
> issue. For example,
>
> Client
>   - start transaction
>   - send compute
>
> Server
>   - receive compute
>   - execute ...
>   - execute ...
>
> etc.
>
> D.
>
> On Fri, Dec 9, 2016 at 1:26 AM, Антон Чураев <churaev...@gmail.com> wrote:
>
> > In some cases it is necessary to implement a transaction processing logic
> > in several different application servers. In this case, working with
> Ignite
> > cache will be performed within the various applications. But all these
> > changes must be made within the same distributed transaction.
> >
> > In my opinion this will require context transfer between the threads
> within
> > a single node or multiple Ignite nodes.
> >
> > 2016-12-08 12:53 GMT+03:00 Alexei Scherbakov <
> alexey.scherbak...@gmail.com
> > >:
> >
> > > Hi.
> > >
> > > It's unclear from your description what are you trying to achieve.
> > >
> > > AffinityCall is unicast and wil be send to single node.
> > >
> > > To parallelise task among the cluster I would recommend to use compute
> > task
> > > API [1]
> > >
> > > But the task execution is not transactional. Nevertheless, each job
> > > triggered by task can use it's own local transaction.
> > >
> > > And please explain, why can't you use a generic Ignite transaction for
> > you
> > > task?
> > >
> > > [1] https://apacheignite.readme.io/docs/compute-tasks
> > >
> > > 2016-12-08 4:09 GMT+03:00 Dmitriy Setrakyan <dsetrak...@apache.org>:
> > >
> > > > Taras, is invokeAll() transactional? The javadoc is silent to this
> > fact.
> > > If
> > > > it is indeed transactional, then we should update the javadoc.
> > > >
> > > > D.
> > > >
> > > > On Wed, Dec 7, 2016 at 5:32 AM, Taras Ledkov <tled...@gridgain.com>
> > > wrote:
> > > >
> > > > > Ignite compute has no relation to the cache's transaction.
> > > > >
> > > > > I think that IgniteCache.invokeAll() is appropriate for described
> > case.
> > > > >
> > > > > On Wed, Dec 7, 2016 at 4:00 PM, Игорь Г <fre...@gmail.com> wrote:
> > > > >
> > > > > > Hi, igniters!
> > > > > >
> > > > > > Before openning JIRA ticket, I want to ask question about
> > > affinityCall
> > > > or
> > > > > > affinityRun transactions.
> > > > > >
> > > > > > For example I have batch task to modify many values in someCache
> > > > > according
> > > > > > to someRule. I want to parallel this task to whole cluster and
> > > minimize
> > > > > > network traffic.
> > > > > > So the resonable choice is affinityCall feature.
> > > > > >
> > > > > > But I want all this changes to be in one transactoin. i.e. with
> at
> > > > least
> > > > > > atomicity property (of ACID). And if for some reason my task will
> > be
> > > > > > canceled or failed on one node - it should change nothing at all.
> > > > > >
> > > > > > So, can I achieve this with existing functionality, or how can I
> > > > approach
> > > > > > to this task?
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > > --
> > >
> > > Best regards,
> > > Alexei Scherbakov
> > >
> >
> >
> >
> > --
> > Best regards,
> > Anton Churaev
> >
>



-- 
Best regards,
Anton Churaev


Re: affinityCall in one distributed transaction

2016-12-09 Thread Антон Чураев
In some cases it is necessary to implement transaction processing logic
in several different application servers. In this case, work with the Ignite
cache will be performed within the various applications, but all these
changes must be made within the same distributed transaction.

In my opinion this will require transferring the transaction context between
threads within a single node or across multiple Ignite nodes.

2016-12-08 12:53 GMT+03:00 Alexei Scherbakov :

> Hi.
>
> It's unclear from your description what you are trying to achieve.
>
> AffinityCall is a unicast and will be sent to a single node.
>
> To parallelize a task across the cluster I would recommend using the compute
> task API [1].
>
> But the task execution is not transactional. Nevertheless, each job
> triggered by the task can use its own local transaction.
>
> And please explain why you can't use a generic Ignite transaction for your
> task?
>
> [1] https://apacheignite.readme.io/docs/compute-tasks
>
> 2016-12-08 4:09 GMT+03:00 Dmitriy Setrakyan :
>
> > Taras, is invokeAll() transactional? The javadoc is silent to this fact.
> If
> > it is indeed transactional, then we should update the javadoc.
> >
> > D.
> >
> > On Wed, Dec 7, 2016 at 5:32 AM, Taras Ledkov 
> wrote:
> >
> > > Ignite compute has no relation to the cache's transaction.
> > >
> > > I think that IgniteCache.invokeAll() is appropriate for the described case.
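For reference, the invokeAll() route mentioned here looks roughly like this: the entry processor is executed on the nodes that own the keys, so values are modified in place without being pulled over the network (the transformation standing in for "someRule" below is made up):

import java.util.Set;
import javax.cache.processor.MutableEntry;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.cache.CacheEntryProcessor;

class BatchUpdate {
    /** Applies a simple transformation to each of the given keys where the data lives. */
    static void applyRule(IgniteCache<Integer, String> someCache, Set<Integer> keys) {
        someCache.invokeAll(keys, new CacheEntryProcessor<Integer, String, Void>() {
            @Override public Void process(MutableEntry<Integer, String> entry, Object... args) {
                entry.setValue(entry.getValue().toUpperCase()); // "someRule", illustrative

                return null;
            }
        });
    }
}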
> > >
> > > On Wed, Dec 7, 2016 at 4:00 PM, Игорь Г  wrote:
> > >
> > > > Hi, igniters!
> > > >
> > > > Before opening a JIRA ticket, I want to ask a question about affinityCall
> > > > or affinityRun transactions.
> > > >
> > > > For example, I have a batch task to modify many values in someCache
> > > > according to someRule. I want to parallelize this task across the whole
> > > > cluster and minimize network traffic.
> > > > So the reasonable choice is the affinityCall feature.
> > > >
> > > > But I want all these changes to be in one transaction, i.e. with at least
> > > > the atomicity property (of ACID). And if for some reason my task is
> > > > cancelled or fails on one node - it should change nothing at all.
> > > >
> > > > So, can I achieve this with existing functionality, or how can I approach
> > > > this task?
> > > >
> > >
> >
>
>
>
> --
>
> Best regards,
> Alexei Scherbakov
>



-- 
Best regards,
Anton Churaev