Re: IgniteSet implementation: changes required

2018-02-09 Thread Dmitriy Setrakyan
Hi Pavel,

We have 2 types of data structures, collocated and non-collocated. The
difference between them is that the collocated set is generally smaller and
will always end up on the same node. Users generally will have many
collocated sets. On the other hand, a non-collocated set can span multiple
nodes and therefore is able to store a lot more data.

I can see how cache-per-set strategy can be applied to the non-collocated
set. As a matter of fact, I would be surprised if it is not implemented
that way already.

However, I do not see this strategy applied to the collocated sets. Users
can have 1000s of collocated sets or more. Are you suggesting that this
will translate into 1000s of caches?

D.

On Fri, Feb 9, 2018 at 8:10 AM, Pavel Pereslegin  wrote:

> Hello, Valentin.
>
> Thank you for the reply.
>
> As mentioned in this conversation, for now we have at least two issues
> with IgniteSet:
> 1. Incorrect behavior after recovery from PDS [1].
> 2. The data in the cache is duplicated on-heap [2], which is not
> documented and leads to heap/GC overhead when using large Sets.
>
> Without significant changes, it is possible to solve [1] with the
> workaround proposed by Andrey Kuznetsov - iterate over all
> datastructure-backing cache entries during the recovery-from-checkpoint
> procedure, filter the set-related entries, and refill the setDataMap's.
> As a workaround for [2] we can add a configuration option choosing which
> data structure to use for "local caching" (on-heap or off-heap).
> If we go this way, then cache data duplication will remain and some
> kind of off-heap ConcurrentHashMap would need to be implemented in Ignite
> (it probably already exists in some form; this needs proper investigation).
>
> On the other hand, if we use separate cache for each IgniteSet instance:
> 1. It will not be necessary to maintain redundant data stored
> somewhere other than the cache.
> 2. It will not be necessary to implement a workaround for recovery from PDS.
> For the collocated mode we can, for example, enforce REPLICATED cache mode.
>
> Why don't you like the idea with separate cache?
>
> [1] https://issues.apache.org/jira/browse/IGNITE-7565
> [2] https://issues.apache.org/jira/browse/IGNITE-5553
>
>
> 2018-02-09 0:44 GMT+03:00 Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
> > Pavel,
> >
> > I don't like an idea of creating separate cache for each data structure,
> > especially for collocated ones. And actually, I'm not sure I understand
> > how that would help. It sounds like we just need to properly persist the
> > data structures cache and then reload on restart.
> >
> > -Val
> >
> > On Thu, Feb 8, 2018 at 6:12 AM, Pavel Pereslegin 
> wrote:
> >
> >> Hello, Igniters!
> >>
> >> We have some issues with current IgniteSet implementation ([1], [2],
> [3],
> >> [4]).
> >>
> >> As was already described in this conversation, the main problem is
> >> that current IgniteSet implementation maintains plain Java sets on
> >> every node (see CacheDataStructuresManager.setDataMap). These sets
> >> duplicate backing-cache entries, both primary and backup. size() and
> >> iterator() calls issue distributed queries to collect/filter data from
> >> all setDataMap's.
> >>
> >> I believe we can solve specified issues if each instance of IgniteSet
> >> will have separate internal cache that will be destroyed on close.
> >>
> >> What do you think about such major change? Do you have any thoughts or
> >> objections?
> >>
> >> [1] https://issues.apache.org/jira/browse/IGNITE-7565
> >> [2] https://issues.apache.org/jira/browse/IGNITE-5370
> >> [3] https://issues.apache.org/jira/browse/IGNITE-5553
> >> [4] https://issues.apache.org/jira/browse/IGNITE-6474
> >>
> >>
> >> 2017-10-31 5:53 GMT+03:00 Dmitriy Setrakyan :
> >> > Hi Andrey,
> >> >
> >> > Thanks for a detailed email. I think your suggestions do make sense.
> >> Ignite
> >> > cannot afford to have a distributed set that is not fail-safe. Can you
> >> > please focus only on solutions that provide consistent behavior in
> case
> >> of
> >> > topology changes and failures and document them in the ticket?
> >> >
> >> > https://issues.apache.org/jira/browse/IGNITE-5553
> >> >
> >> > D.
> >> >
> >> > On Mon, Oct 30, 2017 at 3:07 AM, Andrey Kuznetsov 
> >> wrote:
> >> >
> >> >> Hi, Igniters!
> >> >>
> >> >> Current implementation of IgniteSet is fragile with respect to
> cluster
> >> >> recovery from a checkpoint. We have an issue (IGNITE-5553) that
> >> addresses
> >> >> set's size() behavior, but the problem is slightly broader. The text
> >> below
> >> >> is my comment from Jira issue. I encourage you to discuss it.
> >> >>
> >> >> We can put the current set size into the set header cache entry. This
> >> >> will fix size(), but the iterator() implementation is broken as well.
> >> >>
> >> >> Currently, set implementation maintains plain Java sets on every
> node,
> >> see
> >> >> 

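The shared-cache layout being debated in this thread can be illustrated with a toy model (plain Java, not Ignite code; all names below are invented for illustration): a single backing map keyed by a composite (setId, element) key can host any number of logical sets, which is why thousands of collocated sets need not mean thousands of caches, and also why size() has to scan and filter rather than read a counter.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedSetCache {
    /** Composite key: which logical set, which element. */
    record SetKey(String setId, Object element) {}

    private final Map<SetKey, Boolean> backing = new ConcurrentHashMap<>();

    /** Adding to any set goes into the single shared backing map. */
    public void add(String setId, Object elem) {
        backing.put(new SetKey(setId, elem), Boolean.TRUE);
    }

    public boolean contains(String setId, Object elem) {
        return backing.containsKey(new SetKey(setId, elem));
    }

    /** size() must scan and filter, mirroring the distributed-query cost. */
    public long size(String setId) {
        return backing.keySet().stream()
            .filter(k -> k.setId().equals(setId))
            .count();
    }
}
```

A cache-per-set design would instead allocate one backing map per IgniteSet instance, making size() and iterator() cheap but multiplying the number of caches.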
Re: TcpCommunicationSpi in dockerized environment

2018-02-09 Thread Andrey Kornev
Sergey,

The way I "solved" this problem was to modify both

org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi#getNodeAddresses(TcpDiscoveryNode,
 boolean)

and

org.apache.ignite.spi.communication.tcp.TcpCommunicationSpi#nodeAddresses(ClusterNode)

to make sure the external IP addresses (the ones in ATTR_EXT_ADDRS attribute of 
the cluster node) are listed first in the returned collection.

It did fix the problem and significantly reduced the connection time, as 
Ignite no longer had to waste time attempting to connect to the remote node's 
internal Docker IP. Such an attempt always results in a socket timeout (2 
seconds by default) and, with multiple nodes, makes cluster startup very slow 
and unreliable.

Of course, having a Docker Swarm with an overlay network would probably solve 
this problem more elegantly without any code changes, but I'm not a Docker 
expert and Docker Swarm is not my target execution environment anyway. I'd like 
to be able to deploy Ignite nodes in standalone containers and have them join 
the cluster as if they were running on physical hardware.

Hope it helps.
Andrey
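The reordering Andrey describes - listing the external (ATTR_EXT_ADDRS) addresses before the container-internal ones - can be sketched as follows. This is an illustrative standalone sketch, not the actual TcpDiscoverySpi/TcpCommunicationSpi code; class and method names are invented.

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.List;

public class AddressOrdering {
    /**
     * Returns the node's addresses with the external ones first, preserving
     * order and dropping duplicates, so connection attempts try reachable
     * addresses before Docker-internal ones.
     */
    public static List<String> externalFirst(Collection<String> allAddrs,
                                             Collection<String> extAddrs) {
        LinkedHashSet<String> ordered = new LinkedHashSet<>(extAddrs); // external first
        ordered.addAll(allAddrs);                                      // then the rest
        return new ArrayList<>(ordered);
    }
}
```

With the external address tried first, the socket-timeout penalty against the unreachable internal address is only paid when the external attempt itself fails.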



From: Sergey Chugunov 
Sent: Friday, February 9, 2018 3:54 AM
To: dev@ignite.apache.org
Subject: TcpCommunicationSpi in dockerized environment

Hello Ignite community,

When testing Ignite in a dockerized environment I faced the following issue
with the current TcpCommunicationSpi implementation.

I had several physical machines, and each Ignite node running inside a Docker
container had at least two InetAddresses associated with it: one IP address
associated with the physical host and one additional IP address of the Docker
bridge interface *which was the default and the same across all physical
machines*.

Each node publishes the address of its Docker bridge in the list of its
addresses although it is not reachable from remote nodes.
So when a node tries to establish a communication connection using the remote
node's Docker address, the request goes to the node itself as if it were a
loopback address.

I would suggest implementing a simple heuristic to avoid this: before
connecting to a remote node's address, CommunicationSpi should check
whether the local node has exactly the same address. If a "remote" address and
a local address are the same, CommunicationSpi should skip that address in the
remote node's list and proceed with the next one.
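A minimal sketch of that check (illustrative, not actual TcpCommunicationSpi code; the names are invented): drop every candidate address of the remote node that the local node also owns, since connecting to it would loop back.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class SkipSharedAddresses {
    /** Candidate remote addresses minus those the local node itself owns. */
    public static List<String> reachableCandidates(List<String> remoteAddrs,
                                                   Set<String> localAddrs) {
        return remoteAddrs.stream()
            // Same address on the local node => loopback risk, skip it.
            .filter(addr -> !localAddrs.contains(addr))
            .collect(Collectors.toList());
    }
}
```

For the Docker bridge case above, a remote list of [172.17.0.1, 10.1.2.3] checked against local addresses {172.17.0.1, 10.1.2.4} leaves only 10.1.2.3 to try.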

Is it safe to implement such a heuristic in TcpCommunicationSpi, or are there
some risks I'm missing? I would really appreciate any help from an expert with
deep knowledge of the Communication mechanics.

If such improvement makes sense I'll file a ticket and start working on it.

Thanks,
Sergey.


[jira] [Created] (IGNITE-7662) Slow event listener's work

2018-02-09 Thread Ruslan Gilemzyanov (JIRA)
Ruslan Gilemzyanov created IGNITE-7662:
--

 Summary: Slow event listener's work
 Key: IGNITE-7662
 URL: https://issues.apache.org/jira/browse/IGNITE-7662
 Project: Ignite
  Issue Type: Wish
  Components: cache
Affects Versions: 2.2
Reporter: Ruslan Gilemzyanov


I wrote some code that runs an Ignite server node and attaches an 
EventListener to it. Then I put 10 elements into the cache and, for each 
element, recorded the time difference between putting and catching.

*When I created one server node and put 10 elements into IgniteCache, I got 
good results. For 10 elements it was:*

ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 51
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 2
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 1
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 1
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 1
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 2
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 2
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 2
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 2
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 1

*The results were the same when I added one more node* (the topology snapshot 
became: [ver=2, servers=2, clients=0, CPUs=4, heap=3.6GB]).

*But when I applied setBackups(1) to IgniteCache, the results became weird:*

ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 573
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 573
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 570
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 571
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 571
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 571
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 571
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 561
ruslangm.sample.ignite.listener.EventListener - Time diff between put and listener - 560

 

*My code for creating an IgniteCache and attaching to it event listener is very 
simple:*

{{Ignite ignite = Ignition.start("ignite.xml");
CacheConfiguration cfg = new CacheConfiguration<>();
cfg.setCacheMode(CacheMode.PARTITIONED);
cfg.setAtomicityMode(CacheAtomicityMode.ATOMIC);
cfg.setWriteSynchronizationMode(CacheWriteSynchronizationMode.FULL_ASYNC);
cfg.setName("myCache");
cfg.setBackups(1);
IgniteCache cache = ignite.getOrCreateCache(cfg);
ContinuousQuery query = new ContinuousQuery<>();
query.setLocalListener(new EventListener());
query.setLocal(true);
QueryCursor cursor = cache.query(query);}}

In my listener I just print this message:

{{ruslangm.sample.ignite.listener.EventListener - Time diff between put and 
listener }}

You can look at it on [github|https://github.com/ruslangm/ignite-sample]; it 
is really that simple.

Is Ignite really so slow at listening to events when backups exist?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (IGNITE-7664) SQL: throw sane exception on unsupported SQL statements

2018-02-09 Thread Alexander Paschenko (JIRA)
Alexander Paschenko created IGNITE-7664:
---

 Summary: SQL: throw sane exception on unsupported SQL statements
 Key: IGNITE-7664
 URL: https://issues.apache.org/jira/browse/IGNITE-7664
 Project: Ignite
  Issue Type: Improvement
  Components: sql
Reporter: Alexander Paschenko
Assignee: Alexander Paschenko
 Fix For: 2.5


Inspired by this SO issue:

[https://stackoverflow.com/questions/48708238/ignite-database-create-schema-assertionerror]

We should handle unsupported stuff more gracefully both in core code and 
drivers.





Re: Apache Ignite 2.4 release

2018-02-09 Thread Valentin Kulichenko
Nikolay,

To merge it to 2.4, you need to merge the change to the ignite-2.4 release
branch. Let's do this if we come to an agreement in the neighboring thread.

-Val

On Thu, Feb 8, 2018 at 8:21 PM, Nikolay Izhikov  wrote:

> Hello, Dmitriy.
>
> IGNITE-7337 is merged to master [1].
>
> Do I need to do something more to include this feature in the 2.4 release?
>
> [1] https://github.com/apache/ignite/commit/7c01452990ad0de0fb84ab4c0424a6
> d71e5bccba
>
> В Ср, 07/02/2018 в 11:26 -0800, Dmitriy Setrakyan пишет:
> > Agree on both, the performance fix and the spark data frames. Let's get
> > them into the release.
> >
> > However, Raymond is right. We should know how long the performance fix
> will
> > take. If it adds another month to the development, we should include it
> > into the next release. I am hoping that it can be done faster though.
> >
> >
> > Alexey Goncharuk, Dmitriy Pavlov, any ideas?
> >
> > D.
> >
> > On Wed, Feb 7, 2018 at 9:07 AM, Nikolay Izhikov 
> wrote:
> >
> > > Hello, Igniters.
> > >
> > > Please, consider including IGNITE-7337 - Spark Data Frames: support
> saving
> > > a data frame in Ignite [1] in the 2.4 release.
> > > It seems we can merge it into the master in a day or few.
> > >
> > > [1] https://issues.apache.org/jira/browse/IGNITE-7337
> > >
> > >
> > > В Ср, 07/02/2018 в 08:35 -0800, Denis Magda пишет:
> > > > I’m voting for the blocker addition into the release. Sergey K. how
> will
> > >
> > > it affect your testing cycles? Do you need to re-run everything from
> > > scratch and how many days you need?
> > > >
> > > > —
> > > > Denis
> > > >
> > > > > On Feb 6, 2018, at 11:29 PM, Alexey Goncharuk <
> > >
> > > alexey.goncha...@gmail.com> wrote:
> > > > >
> > > > > Guys,
> > > > >
> > > > > Thanks to Dmitriy Pavlov we found the ticket [1] which causes a
> major
> > > > > slowdown when page replacement starts. Even though it's not a
> > >
> > > regression, I
> > > > > suggest we consider it a blocker for 2.4 because this is a huge
> > >
> > > performance
> > > > > issue which can make it virtually impossible to use native
> persistence
> > >
> > > when
> > > > > data size is significantly larger than memory size.
> > > > >
> > > > > Any objections?
> > > > >
> > > > > [1] https://issues.apache.org/jira/browse/IGNITE-7638
> > > > >
> > > > > 2018-01-30 17:10 GMT+03:00 Pavel Tupitsyn :
> > > > >
> > > > > > Igniters, I will handle 2.4 release if there are no objections.
> > > > > > Let's take a bit more time for testing and start vote by the end
> of
> > >
> > > this
> > > > > > week.
> > > > > >
> > > > > > Pavel
> > > > > >
> > > > > > On Sat, Jan 27, 2018 at 3:32 AM, Denis Magda 
> > >
> > > wrote:
> > > > > >
> > > > > > > Hi Vyacheslav,
> > > > > > >
> > > > > > > According to the previous review notes the impact of the
> changes
> > >
> > > might be
> > > > > > > significant, thus, I would recommend us to move the changes to
> the
> > >
> > > next
> > > > > > > release.
> > > > > > >
> > > > > > > BTW, don’t hesitate to ping reviewers more frequently if there
> > > > > > > is a pending/abandoned review. We are all people who tend to
> > > > > > > forget or miss notifications ;)
> > > > > > >
> > > > > > > > On Jan 26, 2018, at 2:04 AM, Vyacheslav Daradur <
> > >
> > > daradu...@gmail.com>
> > > > > > >
> > > > > > > wrote:
> > > > > > > >
> > > > > > > > Hi, Vladimir, it's good news. I'm looking forward to new
> Ignite
> > > > > >
> > > > > > release!
> > > > > > > >
> > > > > > > > Could you please share a release schedule for 'varint'
> > >
> > > optimizations?
> > > > > > > >
> > > > > > > > The task [1] is waiting for review for 5 months.
> > > > > > > >
> > > > > > > > [1] https://issues.apache.org/jira/browse/IGNITE-5097
> > > > > > > >
> > > > > > > >
> > > > > > > > On Fri, Jan 26, 2018 at 12:51 PM, Vladimir Ozerov <
> > > > > >
> > > > > > voze...@gridgain.com>
> > > > > > > wrote:
> > > > > > > > > Hi Igniters,
> > > > > > > > >
> > > > > > > > > As far as I can see all required tasks and fixes were
> merged. I
> > > > > >
> > > > > > propose
> > > > > > > to
> > > > > > > > > take several days of silence to test what we've done and
> start
> > >
> > > vote at
> > > > > > >
> > > > > > > the
> > > > > > > > > beginning of the next week.
> > > > > > > > >
> > > > > > > > > Makes sense?
> > > > > > > > >
> > > > > > > > > On Mon, Jan 22, 2018 at 8:39 PM, Denis Magda <
> > >
> > > dma...@apache.org>
> > > > > >
> > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Ok, let’s target Wednesday as a code freeze date.
> > > > > > > > > >
> > > > > > > > > > Community members who are involved in 2.4 release please
> > >
> > > merge your
> > > > > > >
> > > > > > > fixes
> > > > > > > > > > and optimizations by that time.
> > > > > > > > > >
> > > > > > > > > > —
> > > > > > > > > > Denis
> > > > > > > > > >
> > > > > > > > > > > On Jan 22, 2018, at 8:24 AM, Anton Vinogradov <
> > 

Re: Saving Spark Data Frames merged to master

2018-02-09 Thread Valentin Kulichenko
I think it's OK to merge it to 2.4, especially since the release is
delayed. This is a fairly small feature which is fully isolated from
everything else, so there are no risks. At the same time, it makes data
frames integration much more valuable.

-Val

On Fri, Feb 9, 2018 at 5:20 AM, Nikolay Izhikov  wrote:

> Hello, Anton.
>
> I have no objections.
>
> Seems like some kind of misunderstanding from my side.
>
> As far as I can understand the mail from Dmitriy Setrakyan [1], he agreed to
> include IGNITE-7337 in 2.4.
> If the community decides to postpone this feature to the 2.5 release, I'm
> fully OK with it.
>
> [1]
> http://apache-ignite-developers.2346864.n4.nabble.
> com/Apache-Ignite-2-4-release-tp26031p26807.html
>
>
> 2018-02-09 14:58 GMT+03:00 Anton Vinogradov :
>
> > Nikolay,
> >
> > 2.4 is almost ready to be released.
> > We're fixing the final issues to provide a stable and fast release.
> > Merging anything except blockers to 2.4 is not possible at this phase of
> > the release process.
> >
> > Hope, 2.5, with your changes, will be released soon :)
> >
> > On Fri, Feb 9, 2018 at 7:19 AM, Nikolay Izhikov 
> > wrote:
> >
> > > Hello, Igniters.
> > >
> > > Good news.
> > >
> > > IGNITE-7337 [1] (Spark Data Frames: support saving a data frame in
> > > Ignite) is merged to master.
> > >
> > > Now we can both read from and write to an Ignite SQL table with the
> > > Data Frame API.
> > > Big thanks to Valentin Kulichenko for the quick review.
> > > So it seems we can include this feature in the 2.4 release.
> > >
> > > [1] https://issues.apache.org/jira/browse/IGNITE-7337
> >
>


Re: Removing "fabric" from Ignite binary package name

2018-02-09 Thread Valentin Kulichenko
Anton,

I don't think we necessarily need to remove the 'fabric' word from every file
in the project; we just need to rename the downloadable package. Is
there any other place where 'fabric' is exposed to the user?

If that's the case, it should not be a big change, no?

-Val

On Fri, Feb 9, 2018 at 3:49 AM, Anton Vinogradov 
wrote:

> Denis,
>
> You're proposing changes without viewing the code :)
>
>
> On Thu, Feb 8, 2018 at 10:07 PM, Denis Magda  wrote:
>
> > Anton,
> >
> > What’s wrong if we just go ahead and:
> > - replace “fabric” with “ignite”
> > - replace “hadoop” with “ignite-hadoop"
> >
> > —
> > Denis
> >
> > > On Feb 8, 2018, at 1:51 AM, Anton Vinogradov  >
> > wrote:
> > >
> > > Denis,
> > >
> > > The "hadoop" and "fabric" words are handled by the same engine.
> > >
> > > We have special assembly descriptors, for example:
> > > dependencies-fabric.xml
> > > dependencies-fabric-lgpl.xml
> > > dependencies-hadoop.xml
> > > release-base.xml
> > > release-fabric.xml
> > > release-fabric-base.xml
> > > release-fabric-lgpl.xml
> > > release-hadoop.xml
> > >
> > > So, it's currently impossible to remove "fabric" without also removing
> > > "hadoop". The only alternative is some dirty hack, and that's not a good
> > > idea.
> > >
> > > On Thu, Feb 8, 2018 at 11:29 AM, Sergey Kozlov 
> > wrote:
> > >
> > >> +1 for removing the hadoop accelerator in AI 2.5
> > >>
> > >> Also, IGFS should probably be either removed or refactored, e.g. create
> > >> the FS directly over the data region without using the "cache" entity
> > >> as an intermediate stage.
> > >>
> > >> On Thu, Feb 8, 2018 at 2:13 AM, Denis Magda 
> wrote:
> > >>
> > >>> Anton,
> > >>>
> > >>> I don’t get how the hadoop editions are related to this task. The
> > >>> project hasn't been named “data fabric” for a while. Check the site or
> > >>> docs.
> > >>>
> > >>> The “fabric” word is being removed from all over the place and needs
> > >>> to be removed from the editions’ names.
> > >>>
> > >>> As for the hadoop future, my personal position is to retire this
> > >>> component and forget about it. I would restart the conversation after
> > >>> we are done with 2.4.
> > >>>
> > >>> —
> > >>> Denis
> > >>>
> >  On Feb 7, 2018, at 2:13 AM, Anton Vinogradov  wrote:
> > 
> >  Denis, Petr,
> > 
> >  I checked the PR and found we have *overcomplicated* logic with "fabric"
> >  and "hadoop" postfixes.
> > 
> >  Do we really need to assemble 2 editions?
> >  Is the "Hadoop" edition still valuable?
> > 
> >  My proposal is to get rid of the "hadoop" edition and replace it with
> >  instructions on how to use the "fabric" edition instead.
> >  The instructions will be pretty easy -> move the "hadoop" folder from
> >  "optional" to the root directory :)
> > 
> >  In that case we can just remove all the postfix logic from the Maven
> >  POMs and simplify the release process.
> > 
> >  On Thu, Dec 28, 2017 at 9:20 PM, Denis Magda 
> > >> wrote:
> > 
> > > Petr, thanks for solving it!
> > >
> > > Hope that Anton V. or some other build master will double-check the
> > > changes and merge them.
> > >
> > > —
> > > Denis
> > >
> > >> On Dec 28, 2017, at 8:29 AM, Petr Ivanov 
> > >> wrote:
> > >>
> > >> IGNITE-7251 is done, needs review and some additional tests. See
> PR
> > > #3315 [1].
> > >>
> > >>
> > >> [1] https://github.com/apache/ignite/pull/3315 <
> > > https://github.com/apache/ignite/pull/3315>
> > >>
> > >>
> > >>
> > >>> On 20 Dec 2017, at 23:15, Denis Magda  wrote:
> > >>>
> > >>> Petr, thanks, such a swift turnaround!
> > >>>
> > >>> Have you found someone who can assess and review the changes?
> > >>>
> > >>> Maintainers label might be helpful. Just ping them directly:
> > >>> https://cwiki.apache.org/confluence/display/IGNITE/How+
> > > to+Contribute#HowtoContribute-ReviewProcessandMaintainers <
> > > https://cwiki.apache.org/confluence/display/IGNITE/How+
> > > to+Contribute#HowtoContribute-ReviewProcessandMaintainers>
> > >>>
> > >>>
> > >>> —
> > >>> Denis
> > >>>
> >  On Dec 20, 2017, at 12:24 AM, Petr Ivanov 
> > >>> wrote:
> > 
> >  Assigned myself — done the same work while preparing RPM
> package.
> >  But for fixing DEVNOTES.txt waiting for review and merge of
> > > IGNITE-7107 [1].
> > 
> > 
> >  [1] https://issues.apache.org/jira/browse/IGNITE-7107
> > 
> > 
> > 
> > > On 19 Dec 2017, at 22:55, Denis Magda 
> wrote:
> > >
> > > All the bids were accepted and the verdict is executed:
> > > 

Re: Reworking Ignite site's "Features" menu

2018-02-09 Thread Denis Magda
Eventually finished with the menu as agreed, grooming the content and 
introducing the architecture section:
https://issues.apache.org/jira/browse/IGNITE-7061

Those who are ready to review before I merge the changes, do the following:
* switch to a special SVN branch: “svn switch 
https://svn.apache.org/repos/asf/ignite/site/branches/ignite-7061”
* run the site locally as described here: 
https://cwiki.apache.org/confluence/display/IGNITE/Website+Development


—
Denis

> On Nov 28, 2017, at 4:56 PM, Denis Magda  wrote:
> 
> Dmitriy,
> 
> Thanks for the feedback.
> 
> Split all the work into a set of JIRA tasks aggregated under this one:
> https://issues.apache.org/jira/browse/IGNITE-7061 
> 
> 
> Hope to complete it by the end of the year.
> 
> —
> Denis
> 
>> On Nov 22, 2017, at 5:03 PM, Dmitriy Setrakyan  wrote:
>> 
>> Sounds like a positive step forward. I have several comments:
>> 
>> 1. "More Features" should be all the way at the bottom
>> 2. "What is Ignite" should go under Features
>> 3. I would remove the words "Distributed" from the navigation menu and
>> leave "Key-Value" and "SQL". Otherwise, you would be adding the word
>> "distributed" to every menu item.
>> 
>> D.
>> 
>> 
>> On Wed, Nov 22, 2017 at 2:53 PM, Denis Magda  wrote:
>> 
>>> The list formatting was broken by ASF mail engine. Fixed below.
>>> 
 On Nov 22, 2017, at 2:51 PM, Denis Magda  wrote:
 
 - What’s Ignite?
 - Features
 — Distributed Key-Value
 — Distributed SQL
 — ACID Transactions
 — Machine Learning
 — Multi-Language Support
 — More Features…
 - Architecture
 — Overview
 — Clustering and Deployment
 — Distributed Database
 — Durable Memory
 — Collocated Processing
 - Tooling
 — Ignite Web Console
 — Data Visualization and Analysis
>>> 
>>> 
> 



[jira] [Created] (IGNITE-7663) AssertionError/NPE on "CREATE SCHEMA"

2018-02-09 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-7663:
-

 Summary: AssertionError/NPE on "CREATE SCHEMA"
 Key: IGNITE-7663
 URL: https://issues.apache.org/jira/browse/IGNITE-7663
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.3
Reporter: Mikhail Cherkasov
 Fix For: 2.5


Instead of an UnsupportedOperationException, we get an AssertionError:

[https://stackoverflow.com/questions/48708238/ignite-database-create-schema-assertionerror]

An error of this kind normally means we can't continue and should terminate 
the process, because it is in an unknown state and its behavior is 
unpredictable - but I don't think that's the case here, is it?

With assertions disabled, we get an NPE there; either way, I expect an 
UnsupportedOperationException when running SQL that is not supported yet.





Re: Ignite work directory usage?

2018-02-09 Thread Valentin Kulichenko
Dmitry,

I meant the persistence store itself, but just realized that we don't have
a marshaller cache anymore, we use discovery messages instead. However, we
still have the MarshallerMappingFileStore which is basically a persistence
space created specifically for marshaller mappings. I think it would be a
good idea to use something more generic for this (although this is not
critical of course).

In any case, my initial point was that different nodes using the same folder
for these mappings should not be an issue, because the mappings generated
by different nodes are supposed to always be the same. We just need to
avoid weird exceptions.
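A sketch of what "safely ignore or lock" could look like for a file-per-mapping store (assumption: this is not the real MarshallerMappingFileStore logic; the file naming and layout are invented). Writing through a temp file plus an atomic move makes concurrent writers on different nodes harmless as long as they write identical content:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MappingFileWriter {
    /** Idempotently persists a (typeId -> class name) mapping into dir. */
    public static void writeMapping(Path dir, int typeId, String clsName) throws IOException {
        Files.createDirectories(dir);
        Path target = dir.resolve(typeId + ".classname");
        if (Files.exists(target)) {               // another node already wrote it
            verifySame(target, typeId, clsName);
            return;
        }
        Path tmp = Files.createTempFile(dir, "mapping", ".tmp");
        Files.write(tmp, clsName.getBytes(StandardCharsets.UTF_8));
        try {
            // Atomic rename: either we win, or another writer got there first.
            Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (FileAlreadyExistsException e) {
            // Lost the race: accept the winner's file if the content matches.
            Files.deleteIfExists(tmp);
            verifySame(target, typeId, clsName);
        }
    }

    private static void verifySame(Path target, int typeId, String clsName) throws IOException {
        String existing = new String(Files.readAllBytes(target), StandardCharsets.UTF_8);
        if (!existing.equals(clsName))
            throw new IllegalStateException("Conflicting mapping for typeId " + typeId);
    }
}
```

Since identical mappings are accepted silently, a second node writing the same file is a no-op rather than a "weird exception"; only genuinely conflicting content fails loudly.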

-Val

On Fri, Feb 9, 2018 at 1:21 AM, Dmitry Pavlov  wrote:

> Hi Val,
>
> Do you mean by
> > switching marshaller cache to persistence instead of using these files
> makes perfect sense to me,
> using 'metastore' for marshaller cache?
>
> Sincerely,
> Dmitriy Pavlov
>
> пт, 9 февр. 2018 г. в 1:05, Valentin Kulichenko <
> valentin.kuliche...@gmail.com>:
>
> > Sergey,
> >
> > These mappings are supposed to be the same on all nodes, so if the file
> > already exists, we can safely ignore this, or use a lock to avoid
> > concurrent access. Actually, I think we already fixed this in the past;
> > it's weird that the issue came up again.
> >
> > But in any case, switching marshaller cache to persistence instead of
> using
> > these files makes perfect sense to me, and we definitely should do that.
> >
> > -Val
> >
> > On Tue, Feb 6, 2018 at 1:04 AM, Sergey Chugunov <
> sergey.chugu...@gmail.com
> > >
> > wrote:
> >
> > > Folks,
> > >
> > > There are several things here.
> > >
> > > Firstly, the user who asked the initial question is on Apache Ignite
> > > 1.x; it is clear from the exception stack trace he provided.
> > > So although the issue has existed for a while, it looks like it doesn't
> > > hurt users a lot.
> > >
> > > Secondly, I examined the code managing marshaller mappings and can say
> > > that the issue is still there even in the latest version; thus I filed a
> > > ticket [1] to address it.
> > >
> > > ​[1] https://issues.apache.org/jira/browse/IGNITE-7635
> > >
> > > Thanks,
> > > Sergey.
> > >
> >
>


Re: IgniteSet implementation: changes required

2018-02-09 Thread Valentin Kulichenko
Pavel,

I'm a bit confused. In my understanding, the issue exists because we have local
in-memory maps which are used as the main source of truth about which
structures currently exist. During restart, we lose all this data even if the
data structures cache(s) are persisted. Once we fix this, the issue goes away,
regardless of whether we store a data structure per cache or everything in a
single cache. Am I missing something?

I also agree with Dmitry. While the cache-per-set approach can make sense for
non-collocated sets, for collocated ones it definitely doesn't. So I would
fix the original issue first, and then change the architecture if it's
really needed.

-Val

On Fri, Feb 9, 2018 at 10:39 AM, Dmitriy Setrakyan 
wrote:

> Hi Pavel,
>
> We have 2 types of data structures, collocated and non-collocated. The
> difference between them is that the collocated set is generally smaller and
> will always end up on the same node. Users generally will have many
> collocated sets. On the other hand, a non-collocated set can span multiple
> nodes and therefore is able to store a lot more data.
>
> I can see how cache-per-set strategy can be applied to the non-collocated
> set. As a matter of fact, I would be surprised if it is not implemented
> that way already.
>
> However, I do not see this strategy applied to the collocated sets. Users
> can have 1000s of collocated sets or more. Are you suggesting that this
> will translate into 1000s of caches?
>
> D.
>
> On Fri, Feb 9, 2018 at 8:10 AM, Pavel Pereslegin  wrote:
>
> > Hello, Valentin.
> >
> > Thank you for the reply.
> >
> > As mentioned in this conversation, for now we have at least two issues
> > with IgniteSet:
> > 1. Incorrect behavior after recovery from PDS [1].
> > 2. The data in the cache is duplicated on-heap [2], which is not
> > documented and leads to heap/GC overhead when using large Sets.
> >
> > Without significant changes, it is possible to solve [1] with the
> > workaround proposed by Andrey Kuznetsov - iterate over all
> > datastructure-backing cache entries during the recovery-from-checkpoint
> > procedure, filter the set-related entries, and refill the setDataMap's.
> > As a workaround for [2] we can add a configuration option choosing which
> > data structure to use for "local caching" (on-heap or off-heap).
> > If we go this way then cache data duplication will remain and some
> > kind of off-heap ConcurrentHashMap should be implemented in Ignite
> > (probably, already exists in some form, need to investigate this topic
> > properly).
> >
> > On the other hand, if we use separate cache for each IgniteSet instance:
> > 1. It will not be necessary to maintain redundant data stored
> > somewhere other than the cache.
> > 2. It will not be necessary to implement a workaround for recovery from
> PDS.
> > For the collocated mode we can, for example, enforce REPLICATED cache
> mode.
> >
> > Why don't you like the idea with separate cache?
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-7565
> > [2] https://issues.apache.org/jira/browse/IGNITE-5553
> >
> >
> > 2018-02-09 0:44 GMT+03:00 Valentin Kulichenko <
> > valentin.kuliche...@gmail.com>:
> > > Pavel,
> > >
> > > I don't like an idea of creating separate cache for each data
> structure,
> > > especially for collocated ones. And actually, I'm not sure I understand
> > how
> > > that would help. It sounds like we just need to properly persist
> the
> > > data structures cache and then reload on restart.
> > >
> > > -Val
> > >
> > > On Thu, Feb 8, 2018 at 6:12 AM, Pavel Pereslegin 
> > wrote:
> > >
> > >> Hello, Igniters!
> > >>
> > >> We have some issues with current IgniteSet implementation ([1], [2],
> > [3],
> > >> [4]).
> > >>
> > >> As was already described in this conversation, the main problem is
> > >> that current IgniteSet implementation maintains plain Java sets on
> > >> every node (see CacheDataStructuresManager.setDataMap). These sets
> > >> duplicate backing-cache entries, both primary and backup. size() and
> > >> iterator() calls issue distributed queries to collect/filter data from
> > >> all setDataMap's.
> > >>
> > >> I believe we can solve specified issues if each instance of IgniteSet
> > >> will have separate internal cache that will be destroyed on close.
> > >>
> > >> What do you think about such major change? Do you have any thoughts or
> > >> objections?
> > >>
> > >> [1] https://issues.apache.org/jira/browse/IGNITE-7565
> > >> [2] https://issues.apache.org/jira/browse/IGNITE-5370
> > >> [3] https://issues.apache.org/jira/browse/IGNITE-5553
> > >> [4] https://issues.apache.org/jira/browse/IGNITE-6474
> > >>
> > >>
> > >> 2017-10-31 5:53 GMT+03:00 Dmitriy Setrakyan :
> > >> > Hi Andrey,
> > >> >
> > >> > Thanks for a detailed email. I think your suggestions do make sense.
> > >> Ignite
> > >> > cannot afford to have a distributed set that is not fail-safe. Can
> you
> > >> > please focus only on solutions that 

Re: IgniteSet implementation: changes required

2018-02-09 Thread Pavel Pereslegin
Hello, Valentin.

Thank you for the reply.

As mentioned in this conversation, for now we have at least two issues
with IgniteSet:
1. Incorrect behavior after recovery from PDS [1].
2. The data in the cache is duplicated on-heap [2], which is not
documented and leads to heap/GC overhead when using large Sets.

Without significant changes, it is possible to solve [1] with the
workaround proposed by Andrey Kuznetsov: iterate over all
datastructure-backing cache entries during the recovery-from-checkpoint
procedure, filter set-related entries and refill the setDataMaps.
As a workaround for [2] we can add a configuration option choosing which
data structure to use for "local caching" (on-heap or off-heap).
If we go this way, cache data duplication will remain, and some kind of
off-heap ConcurrentHashMap would have to be implemented in Ignite (it
probably already exists in some form; this needs proper investigation).

On the other hand, if we use separate cache for each IgniteSet instance:
1. There will be no need to maintain redundant data stored
anywhere other than the cache.
2. There will be no need to implement a workaround for recovery from PDS.
For the collocated mode we can, for example, enforce REPLICATED cache mode.
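To make the cache-per-set idea concrete, here is a minimal, Ignite-free sketch in plain Java. Maps stand in for caches, and the class and cache-naming scheme are illustrative assumptions, not Ignite APIs:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Sketch of the cache-per-set proposal: each set owns a dedicated backing
 * store, so size() reads the store directly (no node-local mirror to refill
 * on recovery) and close() simply destroys the store.
 */
class CachePerSet<T> {
    /** Stand-in for the cluster-wide cache registry. */
    static final Map<String, Map<?, Boolean>> CACHES = new ConcurrentHashMap<>();

    private final String cacheName;
    private final Map<T, Boolean> cache;

    @SuppressWarnings("unchecked")
    CachePerSet(String setName) {
        cacheName = "datastructures-set-" + setName; // illustrative naming
        cache = (Map<T, Boolean>)CACHES.computeIfAbsent(cacheName,
            n -> new ConcurrentHashMap<T, Boolean>());
    }

    boolean add(T item)      { return cache.put(item, Boolean.TRUE) == null; }
    boolean contains(T item) { return cache.containsKey(item); }
    int size()               { return cache.size(); } // no distributed query needed

    /** Destroying the dedicated cache drops all set data at once. */
    void close()             { CACHES.remove(cacheName); }
}
```

Under this layout, recovery from PDS reduces to ordinary cache recovery, since the set keeps no state outside its cache.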

Why don't you like the idea with separate cache?

[1] https://issues.apache.org/jira/browse/IGNITE-7565
[2] https://issues.apache.org/jira/browse/IGNITE-5553


2018-02-09 0:44 GMT+03:00 Valentin Kulichenko :
> Pavel,
>
> I don't like the idea of creating a separate cache for each data structure,
> especially for collocated ones. Actually, I'm not sure I understand how
> that would help. It sounds like we just need to properly persist the
> data structures cache and then reload it on restart.
>
> -Val
>
> On Thu, Feb 8, 2018 at 6:12 AM, Pavel Pereslegin  wrote:
>
>> Hello, Igniters!
>>
>> We have some issues with current IgniteSet implementation ([1], [2], [3],
>> [4]).
>>
>> As was already described in this conversation, the main problem is
>> that current IgniteSet implementation maintains plain Java sets on
>> every node (see CacheDataStructuresManager.setDataMap). These sets
>> duplicate backing-cache entries, both primary and backup. size() and
>> iterator() calls issue distributed queries to collect/filter data from
>> all setDataMap's.
>>
>> I believe we can solve the specified issues if each IgniteSet instance
>> has a separate internal cache that is destroyed on close.
>>
>> What do you think about such major change? Do you have any thoughts or
>> objections?
>>
>> [1] https://issues.apache.org/jira/browse/IGNITE-7565
>> [2] https://issues.apache.org/jira/browse/IGNITE-5370
>> [3] https://issues.apache.org/jira/browse/IGNITE-5553
>> [4] https://issues.apache.org/jira/browse/IGNITE-6474
>>
>>
>> 2017-10-31 5:53 GMT+03:00 Dmitriy Setrakyan :
>> > Hi Andrey,
>> >
>> > Thanks for a detailed email. I think your suggestions do make sense.
>> Ignite
>> > cannot afford to have a distributed set that is not fail-safe. Can you
>> > please focus only on solutions that provide consistent behavior in case
>> of
>> > topology changes and failures and document them in the ticket?
>> >
>> > https://issues.apache.org/jira/browse/IGNITE-5553
>> >
>> > D.
>> >
>> > On Mon, Oct 30, 2017 at 3:07 AM, Andrey Kuznetsov 
>> wrote:
>> >
>> >> Hi, Igniters!
>> >>
>> >> Current implementation of IgniteSet is fragile with respect to cluster
>> >> recovery from a checkpoint. We have an issue (IGNITE-5553) that
>> addresses
>> >> set's size() behavior, but the problem is slightly broader. The text
>> below
>> >> is my comment from Jira issue. I encourage you to discuss it.
>> >>
>> >> We can put the current set size into the set header cache entry. This
>> >> will fix size(), but the iterator() implementation is broken as well.
>> >>
>> >> Currently, set implementation maintains plain Java sets on every node,
>> see
>> >> CacheDataStructuresManager.setDataMap. These sets duplicate
>> backing-cache
>> >> entries, both primary and backup. size() and iterator() calls issue
>> >> distributed queries to collect/filter data from all setDataMap's. And
>> >> setDataMaps remain empty after cluster is recovered from checkpoint.
>> >>
>> >> Now I see the following options to fix the issue.
>> >>
>> >> #1 - Naive. Iterate over all datastructure-backing cache entries during
>> >> the recovery-from-checkpoint procedure, filter set-related entries and
>> >> refill the setDataMaps.
>> >> Pros: easy to implement.
>> >> Cons: unpredictable time/memory overhead.
>> >>
>> >> #2 - More realistic. Avoid node-local copies of cache data. Maintain
>> linked
>> >> list in datastructure-backing cache: key is set item, value is next set
>> >> item. List head is stored in set header cache entry (this set item is
>> >> youngest one). Iterators built on top of this structure are fail-fast.
>> >> Pros: less memory overhead, no need to maintain node-local mirrors of
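Ignite aside, the linked-list layout from option #2 can be sketched with a plain map standing in for the backing cache (an illustration only; duplicate handling and concurrency are omitted):

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import java.util.NoSuchElementException;

/**
 * Sketch of option #2: the backing cache stores the set as a linked list.
 * Each entry maps a set item to the next (older) item; the header entry
 * holds the head, i.e. the youngest item.
 */
class LinkedSetLayout {
    static final String HEADER = "__set_header__";

    /** Stand-in for the datastructure-backing cache (allows null values). */
    final Map<String, String> cache = new HashMap<>();

    /** Prepend: the new item becomes the head stored in the header entry. */
    void add(String item) {
        cache.put(item, cache.get(HEADER)); // point at the previous head
        cache.put(HEADER, item);
    }

    /** Iterate by chasing next-pointers from the header entry. */
    Iterator<String> iterator() {
        return new Iterator<String>() {
            private String cur = cache.get(HEADER);

            @Override public boolean hasNext() { return cur != null; }

            @Override public String next() {
                if (cur == null)
                    throw new NoSuchElementException();
                String item = cur;
                cur = cache.get(cur); // null for the oldest item
                return item;
            }
        };
    }
}
```

Since all state lives in the cache itself, checkpoint recovery restores the list together with the cache, at the cost of one extra entry lookup per iteration step.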

Re: Removing "fabric" from Ignite binary package name

2018-02-09 Thread Denis Magda
> I don't think we necessarily need to remove 'fabric' word from every file
> in the project, we just need to rename the name of downloadable package.

Couldn’t say it better than you, Val. Thanks for pitching in :) This is exactly 
what the ticket is about.

—
Denis

> On Feb 9, 2018, at 11:53 AM, Valentin Kulichenko 
>  wrote:
> 
> Anton,
> 
> I don't think we necessarily need to remove 'fabric' word from every file
> in the project, we just need to rename the name of downloadable package. Is
> there any other place where 'fabric' is exposed to the user?
> 
> If that's the case, it should not be a big change, no?
> 
> -Val
> 
> On Fri, Feb 9, 2018 at 3:49 AM, Anton Vinogradov 
> wrote:
> 
>> Denis,
>> 
>> You're proposing changes without viewing a code :)
>> 
>> 
>> On Thu, Feb 8, 2018 at 10:07 PM, Denis Magda  wrote:
>> 
>>> Anton,
>>> 
>>> What’s wrong if we just go ahead and:
>>> - replace “fabric” with “ignite”
>>> - replace “hadoop” with “ignite-hadoop"
>>> 
>>> —
>>> Denis
>>> 
 On Feb 8, 2018, at 1:51 AM, Anton Vinogradov >> 
>>> wrote:
 
 Denis,
 
 The "hadoop" and "fabric" suffixes are handled by the same assembly engine.
 
 We have special assembly descriptors, for example:
 dependencies-fabric.xml
 dependencies-fabric-lgpl.xml
 dependencies-hadoop.xml
 release-base.xml
 release-fabric.xml
 release-fabric-base.xml
 release-fabric-lgpl.xml
 release-hadoop.xml
 
 So, it's impossible for now to remove "fabric" without "hadoop"
>> removal.
 The only alternative is some dirty hack, but that's not a good idea.
 
 On Thu, Feb 8, 2018 at 11:29 AM, Sergey Kozlov 
>>> wrote:
 
> +1 for removing the hadoop accelerator in AI 2.5
> 
> Also probably IGFS should be either removed or refactored, e.g. create
>>> FS
> directly over the data region without using "cache" entity as an
> intermediate stage
> 
> On Thu, Feb 8, 2018 at 2:13 AM, Denis Magda 
>> wrote:
> 
>> Anton,
>> 
>> I don’t get how the hadoop editions are related to this task. The
>>> project
>> is not named as “data fabric” for a while. Check up the site or docs.
>> 
>> The “fabric” word is being removed from all over the places and needs
>>> to
>> be removed from the editions’ names.
>> 
>> As for the hadoop future, my personal position is to retire this
> component
>> and forget about it. I would restart the conversation again after we
>>> done
>> with 2.4.
>> 
>> —
>> Denis
>> 
>>> On Feb 7, 2018, at 2:13 AM, Anton Vinogradov  wrote:
>>> 
>>> Denis, Petr,
>>> 
>>> I checked PR and found we have *overcomplicated* logic with "fabric"
> and
>>> "hadoop" postfixes.
>>> 
>>> Do we really need to assemble two editions?
>>> Is the "Hadoop" edition still valuable?
>>> 
>>> My proposal is to get rid of "hadoop" edition and replace it with
>>> instruction of how to use "fabric" edition instead.
>>> Instruction will be pretty easy -> move "hadoop" folder from
>>> "optional"
>> to
>>> root directory :)
>>> 
>>> In that case we can just remove all postfix logic from maven poms
>> and
>>> simplify release process.
>>> 
>>> On Thu, Dec 28, 2017 at 9:20 PM, Denis Magda 
> wrote:
>>> 
 Petr, thanks for solving it!
 
 Hope that Anton V. or some other build master will double-check the
 changes and merge them.
 
 —
 Denis
 
> On Dec 28, 2017, at 8:29 AM, Petr Ivanov 
> wrote:
> 
> IGNITE-7251 is done, needs review and some additional tests. See
>> PR
 #3315 [1].
> 
> 
> [1] https://github.com/apache/ignite/pull/3315 <
 https://github.com/apache/ignite/pull/3315>
> 
> 
> 
>> On 20 Dec 2017, at 23:15, Denis Magda  wrote:
>> 
>> Petr, thanks, such a swift turnaround!
>> 
>> Have you found the one who can assess and review the changes?
>> 
>> Maintainers label might be helpful. Just ping them directly:
>> https://cwiki.apache.org/confluence/display/IGNITE/How+
 to+Contribute#HowtoContribute-ReviewProcessandMaintainers <
 https://cwiki.apache.org/confluence/display/IGNITE/How+
 to+Contribute#HowtoContribute-ReviewProcessandMaintainers>
>> 
>> 
>> —
>> Denis
>> 
>>> On Dec 20, 2017, at 12:24 AM, Petr Ivanov 
>> wrote:
>>> 
>>> Assigned myself — done the same work while preparing RPM
>> package.
>>> But for fixing DEVNOTES.txt waiting for review and merge of
 IGNITE-7107 [1].
>>> 

Re: Saving Spark Data Frames merged to master

2018-02-09 Thread Dmitriy Setrakyan
Agree, it is ok to merge, since we are waiting for the page replacement
(eviction) performance fix anyway.

Spark data frames are a long-awaited feature for our users, so it does make
sense to provide complete support in 2.4.

D.

On Fri, Feb 9, 2018 at 2:55 PM, Denis Magda  wrote:

> +1
>
> It wasn’t an undiscussed merge. The question was raised here before [1].
>
> Anyway, Anton thanks for being on guard all the times! :)
>
> [1] http://apache-ignite-developers.2346864.n4.nabble.
> com/Apache-Ignite-2-4-release-td26031i20.html#a26807 <
> http://apache-ignite-developers.2346864.n4.nabble.
> com/Apache-Ignite-2-4-release-td26031i20.html#a26807>
>
> —
> Denis
>
> > On Feb 9, 2018, at 11:41 AM, Valentin Kulichenko <
> valentin.kuliche...@gmail.com> wrote:
> >
> > I think it's OK to merge it to 2.4, especially since the release is
> > delayed. This is a fairly small feature which is fully isolated from
> > everything else, so there are no risks. At the same time, it makes data
> > frames integration much more valuable.
> >
> > -Val
> >
> > On Fri, Feb 9, 2018 at 5:20 AM, Nikolay Izhikov 
> wrote:
> >
> >> Hello, Anton.
> >>
> >> I have no objections.
> >>
> >> Seems like some kind of misunderstanding from my side.
> >>
> >> As far as I understand the mail from Dmitriy Setrakyan [1], he agreed to
> >> include IGNITE-7337 in 2.4.
> >> If the community decides to postpone this feature to the 2.5 release I'm
> fully
> >> OK with it.
> >>
> >> [1]
> >> http://apache-ignite-developers.2346864.n4.nabble.
> >> com/Apache-Ignite-2-4-release-tp26031p26807.html
> >>
> >>
> >> 2018-02-09 14:58 GMT+03:00 Anton Vinogradov :
> >>
> >>> Nikolay,
> >>>
> >>> 2.4 is almost ready to be released.
> >>> We're fixing the final issues to provide a stable and fast release.
> >>> Merging anything but blockers to 2.4 is not possible at this phase
> of
> >>> the release process.
> >>>
> >>> Hopefully 2.5, with your changes, will be released soon :)
> >>>
> >>> On Fri, Feb 9, 2018 at 7:19 AM, Nikolay Izhikov 
> >>> wrote:
> >>>
>  Hello, Igniters.
> 
>  Good news.
> 
>  IGNITE-7337 [1](Spark Data Frames: support saving a data frame in
> >> Ignite)
>  has been merged to master.
> 
>  Now we can both read from and write to an Ignite SQL table with the
> Data
>  Frame API.
>  Big thanks to Valentin Kulichenko for a quick review.
>  So it seems we can include this feature to 2.4 release.
> 
>  [1] https://issues.apache.org/jira/browse/IGNITE-7337
> >>>
> >>
>
>


[jira] [Created] (IGNITE-7666) "Failed to parse query exception" has no description to find error in query

2018-02-09 Thread Mikhail Cherkasov (JIRA)
Mikhail Cherkasov created IGNITE-7666:
-

 Summary: "Failed to parse query exception" has no description to 
find error in query
 Key: IGNITE-7666
 URL: https://issues.apache.org/jira/browse/IGNITE-7666
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.3
Reporter: Mikhail Cherkasov
 Fix For: 2.5


As an example, in the query below the alias name is wrapped in the wrong
quote characters (it requires either no quotes or double quotes), but the
exception gives no clue to locate the error in the query. This query is
simple and the error is easy to find, but in real-life queries it becomes
almost impossible:
{noformat}
0: jdbc:ignite:thin://127.0.0.1/> SELECT Name as 'super_name' from person p 
where p.name = 'test';
Error: Failed to parse query: SELECT Name as 'super_name' from person p where 
p.name = 'test' (state=42000,code=0)
java.sql.SQLException: Failed to parse query: SELECT Name as 'super_name' from 
person p where p.name = 'test'
 at 
org.apache.ignite.internal.jdbc.thin.JdbcThinConnection.sendRequest(JdbcThinConnection.java:671)
 at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute0(JdbcThinStatement.java:130)
 at 
org.apache.ignite.internal.jdbc.thin.JdbcThinStatement.execute(JdbcThinStatement.java:299)
 at sqlline.Commands.execute(Commands.java:823)
 at sqlline.Commands.sql(Commands.java:733)
 at sqlline.SqlLine.dispatch(SqlLine.java:795)
 at sqlline.SqlLine.begin(SqlLine.java:668)
 at sqlline.SqlLine.start(SqlLine.java:373)
 at sqlline.SqlLine.main(SqlLine.java:265){noformat}
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Saving Spark Data Frames merged to master

2018-02-09 Thread Denis Magda
+1 

It wasn’t an undiscussed merge. The question was raised here before [1].

Anyway, Anton thanks for being on guard all the times! :)

[1] 
http://apache-ignite-developers.2346864.n4.nabble.com/Apache-Ignite-2-4-release-td26031i20.html#a26807
 


—
Denis

> On Feb 9, 2018, at 11:41 AM, Valentin Kulichenko 
>  wrote:
> 
> I think it's OK to merge it to 2.4, especially since the release is
> delayed. This is a fairly small feature which is fully isolated from
> everything else, so there are no risks. At the same time, it makes data
> frames integration much more valuable.
> 
> -Val
> 
> On Fri, Feb 9, 2018 at 5:20 AM, Nikolay Izhikov  wrote:
> 
>> Hello, Anton.
>> 
>> I have no objections.
>> 
>> Seems like some kind of misunderstanding from my side.
>> 
>> As far as I understand the mail from Dmitriy Setrakyan [1], he agreed to
>> include IGNITE-7337 in 2.4.
>> If the community decides to postpone this feature to the 2.5 release I'm fully
>> OK with it.
>> 
>> [1]
>> http://apache-ignite-developers.2346864.n4.nabble.
>> com/Apache-Ignite-2-4-release-tp26031p26807.html
>> 
>> 
>> 2018-02-09 14:58 GMT+03:00 Anton Vinogradov :
>> 
>>> Nikolay,
>>> 
>>> 2.4 is almost ready to be released.
>>> We're fixing the final issues to provide a stable and fast release.
>>> Merging anything but blockers to 2.4 is not possible at this phase of
>>> the release process.
>>> 
>>> Hopefully 2.5, with your changes, will be released soon :)
>>> 
>>> On Fri, Feb 9, 2018 at 7:19 AM, Nikolay Izhikov 
>>> wrote:
>>> 
 Hello, Igniters.
 
 Good news.
 
 IGNITE-7337 [1](Spark Data Frames: support saving a data frame in
>> Ignite)
 has been merged to master.
 
 Now we can both read from and write to an Ignite SQL table with the Data
 Frame API.
 Big thanks to Valentin Kulichenko for a quick review.
 So it seems we can include this feature to 2.4 release.
 
 [1] https://issues.apache.org/jira/browse/IGNITE-7337
>>> 
>> 



[jira] [Created] (IGNITE-7667) Improve services failover and load balancing

2018-02-09 Thread Valentin Kulichenko (JIRA)
Valentin Kulichenko created IGNITE-7667:
---

 Summary: Improve services failover and load balancing
 Key: IGNITE-7667
 URL: https://issues.apache.org/jira/browse/IGNITE-7667
 Project: Ignite
  Issue Type: Improvement
  Components: managed services
Affects Versions: 2.3
Reporter: Valentin Kulichenko


Currently Ignite services lack proper failover and load balancing capabilities. 
For example, if there are several node singletons, there is no control over 
which nodes they are deployed or redeployed on. Also, if all of them are 
deployed on a single node, adding more nodes does not trigger load balancing, 
which makes the setup not scalable.

We need to come up with a mechanism to support this so that the user can define 
behavior in different scenarios, probably something similar to what we have in 
Compute Grid.





[jira] [Created] (IGNITE-7668) Cover menu and main page references with GA labels

2018-02-09 Thread Denis Magda (JIRA)
Denis Magda created IGNITE-7668:
---

 Summary: Cover menu and main page references with GA labels
 Key: IGNITE-7668
 URL: https://issues.apache.org/jira/browse/IGNITE-7668
 Project: Ignite
  Issue Type: Sub-task
  Components: site
Reporter: Denis Magda
Assignee: Prachi Garg
 Fix For: 2.4


It's useful to see insights on how frequently people click on specific 
references shown on the main page or menus.

Let's add labels to all the benefits and features so that GA can track these 
events for us.





Re: IgniteSet implementation: changes required

2018-02-09 Thread Andrey Kuznetsov
Hi all,

The current set implementation has a significant flaw: all set data is
duplicated in on-heap maps on _every_ node in order to support iterator() and
size(). To me it looks like a simple yet inefficient implementation.
Currently, these maps are damaged by checkpointing/recovery, and we could
patch them somehow, but another future change to Ignite caches could damage
them again. This is fragile as long as the data structure is not entirely
backed by caches. Pavel's proposal seems to be a reliable solution for
non-collocated sets.
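The failure mode described above can be modeled in a few lines of plain Java. Here, maps stand in for the persisted backing cache and for CacheDataStructuresManager.setDataMap, and the restart simulation is an illustrative assumption, not Ignite code:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Model of the current IgniteSet flaw: items are written to the persisted
 * backing cache AND mirrored into an on-heap map. A restart restores the
 * cache but leaves the mirror empty, so size() answers from stale state.
 */
class SetMirrorModel {
    /** Stand-in for the persisted datastructure-backing cache. */
    static final Map<String, Boolean> BACKING_CACHE = new ConcurrentHashMap<>();

    /** Stand-in for the on-heap setDataMap mirror. */
    static Set<String> setDataMap = ConcurrentHashMap.newKeySet();

    static void add(String item) {
        BACKING_CACHE.put(item, Boolean.TRUE); // survives restart
        setDataMap.add(item);                  // on-heap duplicate
    }

    /** size() is served from the local mirror, not from the cache. */
    static int size() {
        return setDataMap.size();
    }

    /** Simulated restart: the cache is reloaded, the mirror is not. */
    static void restartFromCheckpoint() {
        setDataMap = ConcurrentHashMap.newKeySet(); // left empty -- the bug
    }
}
```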

On Feb 9, 2018, at 22:46, "Valentin Kulichenko" <
valentin.kuliche...@gmail.com> wrote:

Pavel,

I'm a bit confused. In my understanding, the issue exists because we have
local in-memory maps which are used as the main source of truth about which
structures currently exist. During restart, we lose all this data even if
the data structures cache(s) are persisted. Once we fix this, the issue goes
away, regardless of whether we store one data structure per cache or
everything in a single cache. Am I missing something?

I also agree with Dmitry. While the cache-per-set approach can make sense
for non-collocated sets, for collocated ones it definitely doesn't. So I
would fix the original issue first, and then change the architecture if it's
really needed.

-Val


[jira] [Created] (IGNITE-7665) .NET: Target .NET Standard 2.0

2018-02-09 Thread Pavel Tupitsyn (JIRA)
Pavel Tupitsyn created IGNITE-7665:
--

 Summary: .NET: Target .NET Standard 2.0
 Key: IGNITE-7665
 URL: https://issues.apache.org/jira/browse/IGNITE-7665
 Project: Ignite
  Issue Type: Improvement
  Components: platforms
Reporter: Pavel Tupitsyn
 Fix For: 2.5


As explained in IGNITE-2662 and 
https://apacheignite-net.readme.io/v2.4/docs/cross-platform-support, our 
projects/assemblies still target .NET 4.0.

This simplifies build/release procedures, but has issues:
* Ignite.NET *can't be used from .NET Standard 2.0 libraries* (the big one)
* A warning is displayed
* Incompatible API usages may sneak in despite tests

We should target {{netstandard2.0}} as well as .NET 4. The release package 
should contain two sets of assemblies.
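If Ignite.NET goes the usual MSBuild multi-targeting route, the project change could look roughly like this (an illustrative fragment, not the actual Ignite.NET project file):

```xml
<PropertyGroup>
  <!-- Build the same sources for both classic .NET 4 and .NET Standard 2.0 -->
  <TargetFrameworks>net40;netstandard2.0</TargetFrameworks>
</PropertyGroup>
```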





[GitHub] ignite pull request #3494: IGNITE-7438 LSQR solver for Linear Regression

2018-02-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3494


---


[jira] [Created] (IGNITE-7661) SQL COPY: provide more tests for national Unicode characters (including surrogates and 0x10000+ range)

2018-02-09 Thread Kirill Shirokov (JIRA)
Kirill Shirokov created IGNITE-7661:
---

 Summary: SQL COPY: provide more tests for national Unicode 
characters (including surrogates and 0x10000+ range)
 Key: IGNITE-7661
 URL: https://issues.apache.org/jira/browse/IGNITE-7661
 Project: Ignite
  Issue Type: Bug
  Components: sql
Affects Versions: 2.5
Reporter: Kirill Shirokov








[GitHub] ignite pull request #3498: IGNITE-3111 .NET: Configure SSL without Spring

2018-02-09 Thread apopovgg
GitHub user apopovgg opened a pull request:

https://github.com/apache/ignite/pull/3498

IGNITE-3111 .NET: Configure SSL without Spring



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-3111

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3498.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3498


commit 10fe94c28b62cadabdced66776f8761d602b0468
Author: apopov 
Date:   2018-02-09T11:55:29Z

IGNITE-3111 .NET: Configure SSL without Spring




---


[GitHub] ignite pull request #3499: IGNITE-7253

2018-02-09 Thread alexpaschenko
GitHub user alexpaschenko opened a pull request:

https://github.com/apache/ignite/pull/3499

IGNITE-7253



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-7253

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3499.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3499


commit 3c5c4117ef16c66ac83d37fb99b658d9db34d0c2
Author: Alexander Paschenko 
Date:   2018-02-06T22:36:47Z

IGNITE-7253 Added connection properties for streaming.

commit 07c6691ca774d5111954e79d47c91014880c5c8e
Author: Alexander Paschenko 
Date:   2018-02-07T19:20:54Z

IGNITE-7253 Continued.

commit f854fb6ccaf323fff534660c90867ad0885a2186
Author: Alexander Paschenko 
Date:   2018-02-07T19:22:08Z

IGNITE-7253 Continued.

commit 4da5c8fdbde78ea10fe38a37652ad0c3e709ec25
Author: Alexander Paschenko 
Date:   2018-02-08T17:28:37Z

IGNITE-7253 Continued.

commit 4d2d20a6ca58895ca6e48aef91cb576c1dc66714
Author: Alexander Paschenko 
Date:   2018-02-08T20:44:10Z

IGNITE-7253 Continued

commit 47e46ab5def97e6e57fda19349fdc3c968052fa8
Author: Alexander Paschenko 
Date:   2018-02-09T07:46:45Z

IGNITE-7253 Continued

commit 1b5bfda081e5e69fbd071a6ebb7e19a04412d8bd
Author: Alexander Paschenko 
Date:   2018-02-09T07:56:04Z

Merge remote-tracking branch 'apache/master' into ignite-7253

# Conflicts:
#   
modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/ConnectionProperties.java
#   
modules/core/src/main/java/org/apache/ignite/internal/jdbc/thin/ConnectionPropertiesImpl.java
#   
modules/core/src/main/java/org/apache/ignite/internal/jdbc2/JdbcQueryTask.java
#   
modules/core/src/main/java/org/apache/ignite/internal/processors/odbc/jdbc/JdbcRequestHandler.java
#   
modules/core/src/main/java/org/apache/ignite/internal/sql/SqlKeyword.java
#   modules/core/src/main/java/org/apache/ignite/internal/sql/SqlParser.java
#   
modules/indexing/src/main/java/org/apache/ignite/internal/processors/query/h2/IgniteH2Indexing.java
#   
modules/indexing/src/test/java/org/apache/ignite/testsuites/IgniteCacheQuerySelfTestSuite.java

commit e79fafa3d7e916baa12219b1389f4319fca74f27
Author: Alexander Paschenko 
Date:   2018-02-09T08:32:33Z

IGNITE-7253 Continued

commit 9a74346a0c2af9c08b8eefc7be256a50e8834f20
Author: Alexander Paschenko 
Date:   2018-02-09T09:59:32Z

IGNITE-7253 Post-merge fixes + batching

commit 8965daf8c392e9255c1823503f07b5d0dabf0317
Author: Alexander Paschenko 
Date:   2018-02-09T12:54:15Z

IGNITE-7253 More tests, some fixes.




---


[jira] [Created] (IGNITE-7660) Refactor LSQR algorithm

2018-02-09 Thread Anton Dmitriev (JIRA)
Anton Dmitriev created IGNITE-7660:
--

 Summary: Refactor LSQR algorithm
 Key: IGNITE-7660
 URL: https://issues.apache.org/jira/browse/IGNITE-7660
 Project: Ignite
  Issue Type: Improvement
  Components: ml
Reporter: Anton Dmitriev








Re: Removing "fabric" from Ignite binary package name

2018-02-09 Thread Anton Vinogradov
Denis,

You're proposing changes without viewing a code :)


On Thu, Feb 8, 2018 at 10:07 PM, Denis Magda  wrote:

> Anton,
>
> What’s wrong if we just go ahead and:
> - replace “fabric” with “ignite”
> - replace “hadoop” with “ignite-hadoop"
>
> —
> Denis
>
> > On Feb 8, 2018, at 1:51 AM, Anton Vinogradov 
> wrote:
> >
> > Denis,
> >
> > The "hadoop" and "fabric" suffixes are handled by the same assembly engine.
> >
> > We have special assembly descriptors, for example:
> > dependencies-fabric.xml
> > dependencies-fabric-lgpl.xml
> > dependencies-hadoop.xml
> > release-base.xml
> > release-fabric.xml
> > release-fabric-base.xml
> > release-fabric-lgpl.xml
> > release-hadoop.xml
> >
> > So, it's impossible for now to remove "fabric" without removing "hadoop".
> > The only alternative is some dirty hack, but that's not a good idea.
> >
> > On Thu, Feb 8, 2018 at 11:29 AM, Sergey Kozlov 
> wrote:
> >
> >> +1 for removing the hadoop accelerator in AI 2.5
> >>
> >> Also probably IGFS should be either removed or refactored, e.g. create
> FS
> >> directly over the data region without using "cache" entity as an
> >> intermediate stage
> >>
> >> On Thu, Feb 8, 2018 at 2:13 AM, Denis Magda  wrote:
> >>
> >>> Anton,
> >>>
> >>> I don’t get how the hadoop editions are related to this task. The
> project
> >>> is not named as “data fabric” for a while. Check up the site or docs.
> >>>
> >>> The “fabric” word is being removed from all over the places and needs
> to
> >>> be removed from the editions’ names.
> >>>
> >>> As for the hadoop future, my personal position is to retire this
> >> component
> >>> and forget about it. I would restart the conversation again after we
> done
> >>> with 2.4.
> >>>
> >>> —
> >>> Denis
> >>>
>  On Feb 7, 2018, at 2:13 AM, Anton Vinogradov  wrote:
> 
>  Denis, Petr,
> 
>  I checked PR and found we have *overcomplicated* logic with "fabric"
> >> and
>  "hadoop" postfixes.
> 
>  Do we really need to assemble two editions?
>  Is the "Hadoop" edition still valuable?
> 
>  My proposal is to get rid of "hadoop" edition and replace it with
>  instruction of how to use "fabric" edition instead.
>  Instruction will be pretty easy -> move "hadoop" folder from
> "optional"
> >>> to
>  root directory :)
> 
>  In that case we can just remove all postfix logic from the Maven POMs and
>  simplify the release process.
> 
>  On Thu, Dec 28, 2017 at 9:20 PM, Denis Magda 
> >> wrote:
> 
> > Petr, thanks for solving it!
> >
> > Hope that Anton V. or some other build master will double-check the
> > changes and merge them.
> >
> > —
> > Denis
> >
> >> On Dec 28, 2017, at 8:29 AM, Petr Ivanov 
> >> wrote:
> >>
> >> IGNITE-7251 is done, needs review and some additional tests. See PR
> > #3315 [1].
> >>
> >>
> >> [1] https://github.com/apache/ignite/pull/3315 <
> > https://github.com/apache/ignite/pull/3315>
> >>
> >>
> >>
> >>> On 20 Dec 2017, at 23:15, Denis Magda  wrote:
> >>>
> >>> Petr, thanks, such a swift turnaround!
> >>>
> >>> Have you found the one who can assess and review the changes?
> >>>
> >>> Maintainers label might be helpful. Just ping them directly:
> >>> https://cwiki.apache.org/confluence/display/IGNITE/How+
> > to+Contribute#HowtoContribute-ReviewProcessandMaintainers <
> > https://cwiki.apache.org/confluence/display/IGNITE/How+
> > to+Contribute#HowtoContribute-ReviewProcessandMaintainers>
> >>>
> >>>
> >>> —
> >>> Denis
> >>>
>  On Dec 20, 2017, at 12:24 AM, Petr Ivanov 
> >>> wrote:
> 
>  Assigned myself — done the same work while preparing RPM package.
>  But for fixing DEVNOTES.txt waiting for review and merge of
> > IGNITE-7107 [1].
> 
> 
>  [1] https://issues.apache.org/jira/browse/IGNITE-7107
> 
> 
> 
> > On 19 Dec 2017, at 22:55, Denis Magda  wrote:
> >
> > All the bids were accepted and the verdict is executed:
> > https://issues.apache.org/jira/browse/IGNITE-7251 <
> > https://issues.apache.org/jira/browse/IGNITE-7251>
> >
> > Who is ready to pick this up?
> >
> > —
> > Denis
> >
> >> On Dec 19, 2017, at 5:35 AM, Anton Vinogradov <
> > avinogra...@gridgain.com> wrote:
> >>
> >> +1, for sure
> >>
> >> On Tue, Dec 19, 2017 at 9:59 AM, Vladimir Ozerov <
> > voze...@gridgain.com>
> >> wrote:
> >>
> >>> +1, definitely
> >>>
> >>> On Tue, Dec 19, 2017 at 2:34 AM, Valentin Kulichenko <
> >>> valentin.kuliche...@gmail.com> wrote:
> >>>
>  +1
> 

Re: Saving Spark Data Frames merged to master

2018-02-09 Thread Anton Vinogradov
Nikolay,

2.4 is almost ready to be released.
We're fixing the final issues to provide a stable and fast release.
Merging anything but blockers to 2.4 is not possible at this phase of the
release process.

Hopefully 2.5, with your changes, will be released soon :)

On Fri, Feb 9, 2018 at 7:19 AM, Nikolay Izhikov  wrote:

> Hello, Igniters.
>
> Good news.
>
> IGNITE-7337 [1](Spark Data Frames: support saving a data frame in Ignite)
> has been merged to master.
>
> Now we can both read from and write to an Ignite SQL table with the Data
> Frame API.
> Big thanks to Valentin Kulichenko for a quick review.
> So it seems we can include this feature to 2.4 release.
>
> [1] https://issues.apache.org/jira/browse/IGNITE-7337


[jira] [Created] (IGNITE-7659) Reduce multiple Trainer interfaces to one

2018-02-09 Thread Anton Dmitriev (JIRA)
Anton Dmitriev created IGNITE-7659:
--

 Summary: Reduce multiple Trainer interfaces to one
 Key: IGNITE-7659
 URL: https://issues.apache.org/jira/browse/IGNITE-7659
 Project: Ignite
  Issue Type: Improvement
  Components: ml
Reporter: Anton Dmitriev


Currently there are two `Trainer` interfaces: in package `org.apache.ignite.ml` 
and `org.apache.ignite.ml.trainers`. We need to use only one.





[GitHub] ignite pull request #3497: ignite-2.5.1.b1 - ZookeeperDiscoverySpi

2018-02-09 Thread sergey-chugunov-1985
GitHub user sergey-chugunov-1985 opened a pull request:

https://github.com/apache/ignite/pull/3497

ignite-2.5.1.b1 - ZookeeperDiscoverySpi



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/gridgain/apache-ignite ignite-2.5.1.b1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/ignite/pull/3497.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #3497


commit d56163c0e5d87062687ec2325a8b11fb9e8682cc
Author: sboikov 
Date:   2017-11-10T08:34:00Z

zk

commit d1f730789ed718f11b0654fb3c4622cf9725b3db
Author: sboikov 
Date:   2017-11-10T09:04:54Z

zk

commit 48175cf3b5a18578173736ba1cbc4493e1327333
Author: sboikov 
Date:   2017-11-10T12:10:59Z

zk

commit 6c1fe28c70e677619842f57af09ce182daf3b06e
Author: sboikov 
Date:   2017-11-10T12:27:04Z

zk

commit 246478186aaef2f1e06deacb19d5198aeb1157fa
Author: sboikov 
Date:   2017-11-13T09:01:06Z

zk

commit 2b75ecfb9f1f490dbab907efd3aab8db9622b09c
Author: sboikov 
Date:   2017-11-13T09:01:23Z

Merge remote-tracking branch 'remotes/origin/master' into ignite-zk

commit 740c3b24f5d5d9fec166f9258d7bb0e31b1117fd
Author: sboikov 
Date:   2017-11-13T09:41:35Z

zk

commit 9970b95029f1c94ec027aaa38fdfda5719e79cfe
Author: sboikov 
Date:   2017-11-13T10:25:06Z

zk

commit 6ed2564a8d68e651cb776e13302d62f415938bea
Author: sboikov 
Date:   2017-11-13T11:24:54Z

zk

commit b361a78022a01ff854d469e26e2e3a773a64839a
Author: sboikov 
Date:   2017-11-13T11:41:11Z

zk

commit 804c84171cc75e53bed549d13d5af6858786d9a7
Author: sboikov 
Date:   2017-11-13T12:25:25Z

zk

commit 42813c8b0bca4e8cf1074ba6cbeff1a14247fbd3
Author: sboikov 
Date:   2017-11-13T15:08:46Z

zk

commit 32f7fa89899bd005791d941aed50c8f4f35dd46c
Author: sboikov 
Date:   2017-11-14T08:05:08Z

zk

commit 8ab3b56675ef9cd6a1515d2cb33f0b6fef0f0acd
Author: sboikov 
Date:   2017-11-14T10:52:51Z

zk

commit 3736abe27a080ac7962dd8ede9e7b7414c4a82ae
Author: sboikov 
Date:   2017-11-14T11:14:25Z

zk

commit 1842bb4c48ce245a5b69b669087590351de686fa
Author: sboikov 
Date:   2017-11-14T12:07:55Z

zk

commit 73f5af60cc6701a88712735f94f14f4fe0cdd92c
Author: sboikov 
Date:   2017-11-14T12:41:47Z

zk

commit f6218ddf57252c42e6f48df97d292473163dffdb
Author: sboikov 
Date:   2017-11-14T13:05:09Z

zk

commit 54211bfaee22bc77714f6c821b0765815f11e386
Author: sboikov 
Date:   2017-11-14T13:05:49Z

zk

commit d2f5a76cc725052d76a216603b2579f17ba92d60
Author: sboikov 
Date:   2017-11-14T13:34:42Z

zk

commit 775a60f79904686b2e443789ae8c62df74f4d6fa
Author: sboikov 
Date:   2017-11-14T14:36:24Z

zk

commit bedc4e99e14bd597616b134d99ea75cb4d22ea08
Author: sboikov 
Date:   2017-11-14T19:37:24Z

zk

commit 1ee551c856e988287ed6beeb054d53d89cb8800d
Author: sboikov 
Date:   2017-11-14T20:41:53Z

zk

commit ac07cbee75f3f230a12075ac91e4dad0f1a89b0b
Author: sboikov 
Date:   2017-11-15T07:10:24Z

Merge remote-tracking branch 'origin/ignite-zk' into ignite-zk

commit aa0ca90cbaec809715190c1036654a6aad0fb0a3
Author: sboikov 
Date:   2017-11-15T09:50:14Z

zk

commit 4ec75fa2900e3f4624d6516e0c50fc1877d7b5cc
Author: sboikov 
Date:   2017-11-15T10:12:26Z

zk

commit c55d5c2d039f58e3bd0d3ac089f2dfc09d6f90b9
Author: sboikov 
Date:   2017-11-15T11:20:08Z

zk

commit 11e2567fffa724e6b4af6021cda1bfbcf775370b
Author: sboikov 
Date:   2017-11-16T13:10:04Z

zk

commit 98a171c68a1f5610e5f5830144306ee73df866d6
Author: sboikov 
Date:   2017-11-16T14:42:05Z

zk

commit b389f38cbc59f41dc1c95854684059f15b225b8c
Author: sboikov 
Date:   2017-11-17T06:33:30Z

zk




---


TcpCommunicationSpi in dockerized environment

2018-02-09 Thread Sergey Chugunov
Hello Ignite community,

When testing Ignite in a dockerized environment I faced the following issue
with the current TcpCommunicationSpi implementation.

I had several physical machines, and each Ignite node running inside a Docker
container had at least two InetAddresses associated with it: one IP address
associated with the physical host and one additional IP address of the Docker
bridge interface, *which was the default and the same across all physical
machines*.

Each node publishes the address of its Docker bridge in the list of its
addresses, although it is not reachable from remote nodes.
So when a node tries to establish a communication connection using the remote
node's Docker address, the request goes to itself as if it were a loopback
address.

I would suggest implementing a simple heuristic to avoid this: before
connecting to a remote node's address, CommunicationSpi should check
whether the local node has exactly the same address. If the "remote" and
local addresses are the same, CommunicationSpi should skip that address in
the remote node's list and proceed with the next one.
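For illustration, the proposed heuristic could look roughly like the sketch
below. This is not the actual TcpCommunicationSpi code; the class and method
names are invented, and addresses are modeled as plain strings for simplicity:

```java
import java.util.*;
import java.util.stream.Collectors;

public class AddressFilter {
    /**
     * Drops every "remote" address that is identical to one of the local
     * node's own addresses (e.g. a Docker bridge IP shared by all hosts),
     * since connecting to such an address would loop back to the local node.
     */
    static List<String> filterRemoteAddresses(Collection<String> remoteAddrs,
                                              Set<String> localAddrs) {
        return remoteAddrs.stream()
            .filter(addr -> !localAddrs.contains(addr))
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Local node: physical host IP plus the default Docker bridge IP.
        Set<String> local = new HashSet<>(Arrays.asList("10.0.1.5", "172.17.0.1"));

        // Remote node publishes the same bridge IP plus its real host IP.
        List<String> remote = Arrays.asList("172.17.0.1", "10.0.1.6");

        System.out.println(filterRemoteAddresses(remote, local)); // [10.0.1.6]
    }
}
```

With such a filter in place, the shared Docker bridge address published by a
remote node would be skipped before any connection attempt is made.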

Is it safe to implement such a heuristic in TcpCommunicationSpi, or are there
some risks I'm missing? I would really appreciate any help from an expert
with deep knowledge of the Communication mechanics.

If such an improvement makes sense, I'll file a ticket and start working on it.

Thanks,
Sergey.


Re: Ignite work directory usage?

2018-02-09 Thread Dmitry Pavlov
Hi Val,

Do you mean, by
> switching marshaller cache to persistence instead of using these files
> makes perfect sense to me,
using the 'metastore' for the marshaller cache?

Sincerely,
Dmitriy Pavlov

пт, 9 февр. 2018 г. в 1:05, Valentin Kulichenko <
valentin.kuliche...@gmail.com>:

> Sergey,
>
> These mappings are supposed to be the same on all nodes, so if the file
> already exists, we can safely ignore this, or use a lock to avoid
> concurrent access. Actually, I think we already fixed this in the past;
> it's weird that the issue came up again.
>
> But in any case, switching marshaller cache to persistence instead of using
> these files makes perfect sense to me, and we definitely should do that.
>
> -Val
>
> On Tue, Feb 6, 2018 at 1:04 AM, Sergey Chugunov  >
> wrote:
>
> > Folks,
> >
> > There are several things here.
> >
> > Firstly, the user who asked the initial question is on Apache Ignite 1.x;
> > it is clear from the exception stack trace he provided.
> > So although the issue has existed for a while, it looks like it doesn't
> > hurt users a lot.
> >
> > Secondly, I examined the code managing marshaller mappings and can say
> > that the issue is still here even in the latest version; thus I filed a
> > ticket [1] to address it.
> >
> > ​[1] https://issues.apache.org/jira/browse/IGNITE-7635
> >
> > Thanks,
> > Sergey.
> >
>
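The "safely ignore if the file already exists" approach suggested above can be
sketched as an atomic create-if-absent write. This is an illustrative snippet,
not the actual marshaller-mapping code; the class and method names are invented:

```java
import java.io.IOException;
import java.nio.file.*;

public class MappingFileWriter {
    /**
     * Writes a marshaller mapping file only if it does not exist yet.
     * Since mappings are supposed to be identical on all nodes, a concurrent
     * or earlier write of the same file can simply be ignored.
     *
     * @return true if this call created the file, false if it already existed.
     */
    static boolean writeIfAbsent(Path file, byte[] mapping) throws IOException {
        try {
            // CREATE_NEW fails atomically if the file already exists.
            Files.write(file, mapping, StandardOpenOption.CREATE_NEW);
            return true;
        } catch (FileAlreadyExistsException ignored) {
            return false; // Another node/thread already stored this mapping.
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("marshaller");
        Path f = dir.resolve("1234.classname0");

        System.out.println(writeIfAbsent(f, "com.example.Foo".getBytes())); // true
        System.out.println(writeIfAbsent(f, "com.example.Foo".getBytes())); // false
    }
}
```

Because `CREATE_NEW` is atomic at the filesystem level, two writers racing on
the same work directory cannot corrupt each other's mapping files; the loser of
the race just ignores the exception.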


[GitHub] ignite pull request #3448: IGNITE-7476 IGNITE-7519 needed for reproducer of ...

2018-02-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3448


---


[GitHub] ignite pull request #3490: IGNITE-7540 Sequential checkpoints cause overwrit...

2018-02-09 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/ignite/pull/3490


---


Re: Saving Spark Data Frames merged to master

2018-02-09 Thread Nikolay Izhikov
Hello, Anton.

I have no objections.

Seems like there was some kind of misunderstanding on my side.

As far as I understand the mail from Dmitriy Setrakyan [1], he agreed to
include IGNITE-7337 in 2.4.
If the community decides to postpone this feature to the 2.5 release, I'm
fully OK with it.

[1]
http://apache-ignite-developers.2346864.n4.nabble.com/Apache-Ignite-2-4-release-tp26031p26807.html


2018-02-09 14:58 GMT+03:00 Anton Vinogradov :

> Nikolay,
>
> 2.4 is almost ready to be released.
> We're fixing the final issues to provide a stable and fast release.
> Merging anything except blockers into 2.4 is not possible at this phase of
> the release process.
>
> Hope, 2.5, with your changes, will be released soon :)
>
> On Fri, Feb 9, 2018 at 7:19 AM, Nikolay Izhikov 
> wrote:
>
> > Hello, Igniters.
> >
> > Good news.
> >
> > IGNITE-7337 [1] (Spark Data Frames: support saving a data frame in Ignite)
> > is merged to master.
> >
> > Now we can both read from and write to an Ignite SQL table with the Data
> > Frame API.
> > Big thanks to Valentin Kulichenko for the quick review.
> > So it seems we can include this feature in the 2.4 release.
> >
> > [1] https://issues.apache.org/jira/browse/IGNITE-7337
>